Transcript
ssAGfhBInT0 • David Ferrucci: AI Understanding the World Through Shared Knowledge Frameworks | AI Podcast Clips
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/lexfridman/.shards/text-0001.zst#text/0157_ssAGfhBInT0.txt
Kind: captions
Language: en
Do you think that shared knowledge, if we can maybe escape the hardware question of how much is encoded in the hardware, just the shared knowledge in the software, the history, the many centuries of wars and so on that came to today, that shared knowledge, how hard is it to encode? Do you have hope? Can you speak to how hard it is to encode that knowledge systematically, in a way that could be used by a computer?

So I think it is possible to program a machine to acquire that knowledge with a similar foundation, in other words with a similar interpretive foundation for processing that knowledge.

But what do you mean by that?

So by foundation I mean that we view the world in a particular way. As humans we have, if you will, a frame of reference for interpreting the world around us; in fact we have multiple frameworks for interpreting the world around us. If you're interpreting, for example, social and political interactions, you're thinking about people: there are collections and groups of people, they have goals, the goals are largely built around survival and quality of life, and there are fundamental economics around scarcity of resources. When humans come and start interpreting a situation like that, say historical events, they apply a lot of this fundamental framework for interpreting it: who are the people, what were their goals, what resources did they have, how much power and influence did they have over one another. It's a fundamental substrate, if you will, for interpreting and reasoning about that. So I think it is possible to
imbue a computer with that stuff that humans take for granted when they go and sit down and try to interpret things. And then, with that foundation, they start acquiring the details, the specifics. In any given situation they are then able to interpret with regard to that framework, and given that interpretation they can predict. But not only can they predict, they can now predict with an explanation that can be given in those terms, in the terms of that underlying framework that most humans share.

Now, you could find humans that interpret events very differently than other humans, because they're using a different framework. The movie The Matrix comes to mind, where they decided the humans were really just batteries, and that's how they interpreted the value of humans, as a source of electrical energy. But I think that for the most part we have a way of interpreting the events, the social events, around us because we have this shared framework. It comes from, again, the fact that we're similar beings that have similar goals and similar emotions, and so these frameworks make sense to us. So how
much knowledge is there, do you think? You said it's possible.

Well, there's a tremendous amount of detailed knowledge in the world. You can imagine an effectively infinite number of unique situations and unique configurations of these things. But the knowledge that you need, what I refer to as the frameworks you need for interpreting them, I think those are finite.

You think the frameworks are more important than the bulk of the knowledge, so it's like framing?

Yeah, because what the frameworks do is give you the ability to interpret and reason over the specifics in ways that other humans would understand.

What about the specifics?

You acquire the specifics by reading and by talking to other people, mostly.

Actually, if we can focus even on the beginning, the common-sense stuff, the stuff that doesn't even require reading, that normally requires playing around with the world or something, just being able to sort of manipulate objects, drink water and so on. Every time we try to do that kind of thing in robotics or
AI, it seems to be like an onion: you come to realize how much knowledge is really required to perform some of these basic tasks. Do you have that sense as well? And if so, how do we get all those details? Are they written down somewhere, or do they have to be learned through experience?

So I think, if you're talking about, say, the basic physics around us, for example acquiring information about how that works, I think there's a combination of things going on. I think there is fundamental pattern matching, like what we were talking about before, where you see enough examples, enough data about something, and you start assuming that with similar input you're going to predict similar outputs. You can't necessarily explain it at all. You may learn very quickly that when you let something go, it falls to the ground, without being able to necessarily explain that.

But that's such a deep idea: if you let something go, it falls. That's the idea of gravity.

I mean, people were letting things go and counting on them falling long before they understood gravity.

But that's exactly what I mean: before you take a physics class or study anything Newtonian, just the idea that stuff falls to the ground, and being able to generalize that to all kinds of stuff falling to the ground, it just seems like a difficult thing to pick up without encoding it, like hard-coding it in. It seems to require a lot of different knowledge to integrate that into the framework, sort of into everything else, so that you both know that stuff falls to the ground and can start to reason about socio-political discourse.
So both the very basic and the high-level reasoning and decision-making. I guess my question is: how hard is this problem? And sorry to linger on it, because again, we'll get to it for sure as well. Watson with Jeopardy took on a problem that's much more constrained, but has the same hugeness of scale, at least from an outsider's perspective. So I'm asking the general question: to be able to be an intelligent being, reasoning in the world about both gravity and politics, how hard is that problem?

So I think it's solvable.

Okay.
Now... beautiful. So what about time travel?

Okay, on that one I'm not as convinced.

Yeah, okay. No, I think it is solvable. I mean, I take it as solvable, and I think it's about getting machines to learn. Learning is fundamental, and I think we're already in a place where we understand, for example, how machines can learn in various ways. Right now our learning stuff is sort of primitive, in that we haven't taught machines to learn the frameworks. We don't communicate our frameworks, because they're shared; in some cases we do, but we don't annotate, if you will, all the data in the world with the frameworks that are inherent in, or underlying, our understanding. Instead we just operate with the data. So if we want to be able to reason over the data in similar terms, in the common frameworks, we need to be able to teach the computer, or at least program the computer, to have access to, and to acquire and learn, the frameworks as well, and to connect the frameworks to the data. I think this can be done.
I think machine learning, for example, with enough examples, can start to learn these basic dynamics. Will they relate them necessarily to gravity? Not unless they can also acquire those theories as well, take the experiential knowledge and connect it back to the theoretical knowledge. I think if we think in terms of these classes of architectures that are designed to both learn the specifics, find the patterns, and also acquire the frameworks and connect the data to the frameworks, if we think in terms of robust architectures like this, I think there is a path toward getting there.
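As a rough illustration of that kind of architecture, pattern learning from examples paired with a framework that supplies the explanation, here is a minimal sketch. This is a toy, not Elemental Cognition's actual system; every class name, rule, and string below is invented purely for the example.

```python
# Toy sketch: an inductive pattern learner plus a symbolic "framework"
# that can explain the learner's predictions in shared terms.
from collections import Counter

class PatternLearner:
    """Induces situation -> outcome predictions from repeated examples."""
    def __init__(self):
        self.observations = {}

    def observe(self, situation, outcome):
        self.observations.setdefault(situation, Counter())[outcome] += 1

    def predict(self, situation):
        counts = self.observations.get(situation)
        return counts.most_common(1)[0][0] if counts else None

class Framework:
    """A small set of named rules that connect a prediction to a theory."""
    def __init__(self, rules):
        self.rules = rules  # {(situation, outcome): explanation}

    def explain(self, situation, outcome):
        return self.rules.get((situation, outcome),
                              "no framework connects this prediction yet")

learner = PatternLearner()
for _ in range(10):                  # enough examples of letting things go
    learner.observe("release object", "falls to ground")

gravity = Framework({("release object", "falls to ground"):
                     "unsupported masses accelerate toward the earth (gravity)"})

outcome = learner.predict("release object")
print(outcome, "because", gravity.explain("release object", outcome))
```

The point of the pairing is the one made above: the learner alone can predict but not explain; the framework supplies the terms in which the prediction becomes an explanation humans share.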
So in terms of encoding architectures like that, do you think the systems that are able to do this will look like neural networks? Or, if you look back to the expert systems of the eighties and nineties, more like graphs, the systems that are based in logic, able to contain a large amount of knowledge, where the challenge was the automated acquisition of that knowledge? I guess the question is, when you collect both the frameworks and the knowledge from the data, what do you think that thing will look like?

Yeah, so I think
asking whether they will look like neural networks is a bit of a red herring. I mean, they will certainly do inductive or pattern-match-based reasoning, and I've already experimented with architectures that combine both, that use machine learning and neural networks to learn certain classes of knowledge, in other words to find repeated patterns in order for it to make good inductive guesses, but then ultimately to try to take those learnings and marry them, in other words connect them, to frameworks, so that it can then reason over that in terms that humans understand. So for example at Elemental Cognition we do both: we have architectures that do both those things, but that also have a learning method for acquiring the frameworks themselves, saying, look, ultimately I need to take this data and interpret it in the form of these frameworks so that I can reason over it. So there is a fundamental knowledge representation, like what you're saying, these graphs of logic if you will; there are also neural networks that acquire a certain class of information and align it with those frameworks; but there's also a mechanism to acquire the frameworks themselves.

Yes.
So it seems like the idea of frameworks requires some kind of collaboration with humans.

Absolutely.

So do you think of that collaboration as...

Well, let's be clear: only for the express purpose that you're designing an intelligence that can ultimately communicate with humans, in terms of frameworks that help them understand things. So, to be really clear, you can independently create a machine learning system, an intelligence that I might call an alien intelligence, that does a better job than you at some things but can't explain the framework to you. That doesn't mean it isn't better than you at the thing; it might be that you simply cannot comprehend the framework it may have created for itself, that it is inexplicable to you. That's a reality.

But
you're more interested in a case where you can.

I am, yeah, because my approach to AI, the goal I've set for myself, is that I want machines to be able to ultimately communicate understanding with humans: to acquire knowledge from humans and communicate knowledge to humans. They should be using what inductive machine learning techniques are good at, which is to observe patterns in data, whether in language or in images or videos or whatever, to acquire those patterns and to induce the generalizations from them, but then ultimately to work with humans to connect them to frameworks, interpretations if you will, that ultimately make sense to humans. Of course the machine is going to have its strengths: it has the richer, longer memory, and it has the more rigorous, deeper reasoning abilities. So it would be an interesting, complementary relationship between the human and the machine.
Do you think that ultimately needs explainability, like from the machine's side? If you study, for example, Tesla Autopilot, which a lot of humans use, I don't know if you've driven the vehicle or are aware of what it is, but there the human and machine are basically working together, and the human is responsible for their own life, monitoring the system. The system fails every few miles, so there are hundreds of millions of those failures a day, and each one is like a moment of interaction. Do you see it that way?

Yeah, that's exactly right. That's a moment of interaction where the machine has learned some stuff, it has a failure, somehow the failure is communicated, and the human is now filling in the mistake, if you will, or maybe correcting it or doing something more successful, in which case the computer takes that learning. So I believe in that collaboration between human and machine. That's sort of a primitive example.
Another example is where the machine is literally talking to you and saying, look, I'm reading this thing, and I know the next word might be this or that, but I don't really understand why. I have my guess; can you help me understand the framework that supports this? Then it can kind of acquire that, take it, reason about it, and reuse it the next time it's reading, to try to understand something, not unlike what a human student might do. I mean, I remember
human student might do I mean I remember
like when my daughter was the first
grade in she was had a reading
assignment about electricity and you
know somewhere in in the text it says an
electricity is produced by water flowing
over turbines or something like that and
then there's a question that says well
how was it originally created and so my
daughter comes to me and says I mean I
could you know created and produced or
kind of synonyms in this case so I can
go back to the text and I can copy by
water flowing over turbines but I have
no idea what that means like I don't
know how to interpret water flowing over
turbines and what electricity even is I
mean I can get the answer right by
matching the text but I don't have any
framework for understanding what this
means at all and framework really I mean
it's a set of not too mathematical but
axioms of ideas that you bring to the
table and interpreting stuff and then
you build those up somehow you build
them up with the expert
that there's a shared understanding of
what they are Sheriff it's the social
the network us humans
Do you have a sense that humans on Earth in general share a set of... like, how many frameworks are there?

I mean, it depends on how you bound them, right? In other words, how big or small their individual scope is. There are lots of them, and there are new ones. The way I think about it is kind of in layers: I think the architectures are layered, in that there's a small set of primitives that gives you the foundation to build frameworks, and then there may be many frameworks, but you have the ability to acquire them, and then you have the ability to reuse them. One of the most compelling ways of thinking about this is reasoning by analogy, where I could say, oh wow, I've learned something very similar. I've never heard of this game soccer, but if it's like basketball, in the sense that the goal is like the hoop and I have to get the ball into the goal, and I have guards, and I have this and I have that, then where are the similarities and where are the differences? And I now have a foundation for interpreting this new information.

And then different groups, like the millennials, will have a framework, and... well, you know, Democrats and Republicans.
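The soccer and basketball example of reasoning by analogy can be sketched as a simple role mapping: a known framework, a mapping from new-domain concepts onto its roles, and unmapped concepts surfacing as differences. All names here are hypothetical, chosen only to mirror the example in the conversation.

```python
# Toy sketch of reasoning by analogy: interpret an unfamiliar domain
# (soccer) through the roles of a familiar framework (basketball).
basketball = {
    "scoring_target": "hoop",
    "scoring_action": "put ball through hoop",
    "defender_role": "guard",
}

# The analogy itself: which basketball role each soccer concept plays.
soccer_mapping = {
    "goal": "scoring_target",
    "shoot into goal": "scoring_action",
    "defender": "defender_role",
}

def interpret_by_analogy(new_concept, mapping, known_framework):
    """Interpret a concept from a new domain via a known framework."""
    role = mapping.get(new_concept)
    if role is None:
        # No analogue: the concept is noted as a difference, not an error.
        return f"'{new_concept}' has no analogue yet; note it as a difference"
    return f"'{new_concept}' plays the role of '{known_framework[role]}'"

print(interpret_by_analogy("goal", soccer_mapping, basketball))
print(interpret_by_analogy("goalkeeper", soccer_mapping, basketball))
```

The mapped concepts give an immediate foundation for interpreting the new game, while the unmapped ones ("goalkeeper" here) are exactly the differences the speaker describes looking for.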
Millennials... nobody wants that framework.

Well, I mean, I think everyone understands it, right? I mean, they're talking about political and social ways of interpreting the world around them, and I think these frameworks are still largely similar. They differ maybe in what some of the fundamental assumptions and values are. Now, from a reasoning perspective, the ability to process the frameworks may not be that different, but the implications of different fundamental values, or fundamental assumptions in those frameworks, may reach very different conclusions. So from a social perspective the conclusions may be very different; from an intelligence perspective, I just followed where my assumptions took me, and the process itself
looks similar.

But that's a fascinating idea, that frameworks really help carve how a statement will be interpreted. Taking a Democrat framework and a Republican framework and reading the exact same statement, the conclusions you derive would be totally different. From an AI perspective, that's fascinating.

What we would want out of the AI is to be able to tell you that one perspective, one set of assumptions, is going to lead you here, and another set of assumptions is going to lead you there, and in fact to help people reason and say, oh, I see where our differences lie: I have this fundamental belief about that, and you have this fundamental belief about that.

Yeah.
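That point, an identical reasoning process applied to different fundamental assumptions reaching different conclusions, can be shown with a tiny forward-chaining sketch. The rules and assumptions below are invented solely for illustration.

```python
# Toy sketch: the same inference procedure over the same shared rules,
# started from different fundamental assumptions, reaches different
# conclusions. The disagreement is in the premises, not the reasoning.
def forward_chain(assumptions, rules):
    """Repeatedly apply if-then rules until no new conclusions appear."""
    facts = set(assumptions)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Shared framework: each rule maps a set of premises to a conclusion.
rules = [
    ({"policy raises taxes", "taxes harm growth"}, "oppose policy"),
    ({"policy raises taxes", "taxes fund services"}, "support policy"),
]

side_a = forward_chain({"policy raises taxes", "taxes harm growth"}, rules)
side_b = forward_chain({"policy raises taxes", "taxes fund services"}, rules)

print("oppose policy" in side_a)   # one assumption set leads here
print("support policy" in side_b)  # the other leads there
```

Tracing which assumption produced which conclusion is exactly the "I see where our differences lie" step described above.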
That's quite brilliant. From my perspective in NLP, there's this idea that there's one way to really understand a statement, but there probably isn't; there are probably an infinite number of ways.

That's right, there are lots of different interpretations, and the broader the content, the richer it is. You and I can have very different experiences with the same text, obviously. And if we're committed to understanding each other, and that's the other important point, if we're committed to understanding each other, we start decomposing and breaking down our interpretation into more and more primitive components, until we get to the point where we say, oh, I see why we disagree, and we try to understand how fundamental that disagreement really is. But that requires a commitment to breaking down the interpretation in terms of that framework in a logical way. Otherwise... and this is why I think of AI as really complementing and helping human intelligence, helping it overcome some of its biases and its predisposition to be persuaded by shallower reasoning, in the sense that we get over this idea of, well, I'm right because I'm a Republican, or I'm right because I'm a Democrat, or someone labeled this as the Democratic point of view, or it has the following keywords in it. If the machine can help us break that argument down and say, wait a second, what do you really think about this, then it's essentially holding us accountable to doing more critical thinking.
And to sit and think about that, that's fascinating. I love that. I think that's a really empowering use of AI for public discourse, which is completely disintegrating currently as we learn how to do it on social media.

Right.