David Ferrucci: IBM Watson, Jeopardy & Deep Conversations with AI | Lex Fridman Podcast #44
Whtt2H5_isM • 2019-10-11
Kind: captions
Language: en
The following is a conversation with David Ferrucci. He led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy. From spending a couple of hours with David, I saw a genuine passion not only for the abstract understanding of intelligence, but for engineering it to solve real-world problems under real-world deadlines and resource constraints. Where science meets engineering is where brilliant, simple ingenuity emerges. People who work at that joint tend to have a lot of wisdom, earned through failures and eventual success. David is also the founder, CEO, and chief scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with David Ferrucci.
Lex Fridman: Your undergrad was in biology, with an eye toward medical school, before you went on for the PhD in computer science. So let me ask you an easy question: what is the difference between biological systems and computer systems, in your view, when you sit back, look at the stars, and think philosophically?

David Ferrucci: I often wonder whether or not there is a substantive difference. I think the thing that got me into computer science and artificial intelligence was exactly this presupposition, this philosophical question: if we can get machines to think, to understand, to process information the way we do; if we can describe a procedure or describe a process, even if that process were the intelligence process itself, then what would be the difference?
So from a philosophical standpoint, I'm not trying to convince you that there is one. You can go in the direction of spirituality, you can go in the direction of a soul, but in terms of what we can experience from an intellectual and physical perspective, I'm not sure there is a difference. Clearly there are different implementations, but if you were to ask whether a biological information-processing system is fundamentally more capable than one we might be able to build out of silicon or some other substrate, I don't know that it is.

Lex Fridman: How distant do you think the biological implementation is? So fundamentally they may have the same capabilities, but is it really a far-off mystery, where a huge number of breakthroughs are needed to be able to understand it, or is it something that, for the most part, in the important aspects, echoes the same kind of characteristics?

David Ferrucci: That's interesting.
Your question presupposes that there's this goal to recreate what we perceive as biological intelligence. I'm not sure that's how I would state the goal. I think there are a few goals. Understanding the human brain and how it works is important for us to be able to diagnose and treat issues, and to understand our own strengths and weaknesses, intellectual, psychological, and physical. So neuroscience, and understanding the brain from that perspective, has a clear goal. From the perspective of saying "I want to mimic human intelligence," that one's a little bit more interesting. Human intelligence certainly has a lot of things we envy, but it's also got a lot of problems. So I think we're capable of stepping back and saying: what do we want out of an intelligence? How do we want to communicate with that intelligence? How do we want it to behave? How do we want it to perform? Now, of course, it's somewhat of an interesting argument, because I'm sitting here as a human with a biological brain, critiquing the strengths and weaknesses of human intelligence, and saying that we have the capacity to step back and ask, gee, what is intelligence and what do we really want out of it? That in and of itself suggests that human intelligence is something quite admirable, in that it can introspect that way.

Lex Fridman: And the flaws: you mentioned the flaws of the human mind.

David Ferrucci: Yeah, I think the human mind is extremely prejudicial and biased in the way it draws many inferences.

Lex Fridman: Do you think those are, sorry to interrupt, do you think those are features or bugs? The prejudice, the forgetfulness, the fear; what other flaws? List them all. Love, maybe that's a flaw. Do you think those are all things that can get in the way of intelligence, or are they essential components of it?
David Ferrucci: Well, again, if you go back and define intelligence as being able to accurately, precisely, rigorously reason, to develop answers and justify those answers in an objective way, then human intelligence has these flaws, in that it tends to be influenced by some of the things you said. And it's largely an inductive process, meaning it takes past data and uses that to predict the future. That's very advantageous in some cases, but fundamentally biased and prejudicial in other cases, because it's going to be strongly influenced by its priors, whether they're right or wrong from some objective reasoning perspective. You're going to favor them, because those are the decisions, those are the paths, that succeeded in the past. I think that mode of intelligence makes a lot of sense when your primary goal is to act quickly, survive, and make fast decisions, and I think it creates problems when you want to think more deeply and make more objective and reasoned decisions. Of course, humans are capable of doing both; they do one more naturally than they do the other, but they're capable of both.
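The point about priors dominating when data is scarce can be sketched as a toy Bayesian update; the numbers below are hypothetical illustrations, not anything from the conversation:

```python
# Toy Beta-Bernoulli update: with only a little data, a strong prior
# dominates the conclusion. All numbers are hypothetical.

def posterior_mean(prior_a, prior_b, successes, failures):
    """Mean of Beta(prior_a + successes, prior_b + failures)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Tiny sample: 2 successes, 1 failure.
weak = posterior_mean(1, 1, 2, 1)      # uniform prior -> 0.6
strong = posterior_mean(50, 10, 2, 1)  # strong prior favoring success -> ~0.83

print(weak, strong)
```

The same three observations lead to very different conclusions depending on the prior, which is the sense in which an inductive reasoner is "strongly influenced by its priors."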
Lex Fridman: You're saying they do the one that responds quickly more naturally.

David Ferrucci: Right, because that's the thing you kind of need in order to not be eaten by the predators in the world, for example.

Lex Fridman: But we've learned to reason through logic. We've developed science. We train people to do that.

David Ferrucci: I think that's harder for the individual to do. I think it requires training and teaching. The human mind is certainly capable of it, but we find it more difficult. And then there are other weaknesses, if you will, as you mentioned earlier: just memory capacity, and how many chains of inference you can actually go through without losing your way. Just focus.

Lex Fridman: So, on the way you think about intelligence: we're really sort of floating in this philosophical space, but I think you're the perfect person to talk about this, because we'll get to Jeopardy and beyond, one of the most incredible accomplishments in the history of AI; hence the philosophical discussion. So let me ask, you've kind of alluded to it, but let me ask again: what is intelligence, underlying the discussions we'll have with Jeopardy and beyond? How do you think about intelligence? Is it a sufficiently complicated problem, and being able to reason your way through solving that problem; is that kind of how you think about what it means to be intelligent?

David Ferrucci: I think of intelligence primarily in two ways. One is the ability to predict. In other words, if I have a problem, can I predict what's going to happen next? Whether that's to predict the answer to a question, or to say, look, I'm looking at all the market dynamics and I'm going to tell you what's going to happen next. Or you're in a room, somebody walks in, and you're going to predict what they're going to do next, or what they're going to say next. Doing that in a highly dynamic environment full of uncertainty: the more variables, the more possibilities, the more complex it is. But can I take a small amount of prior data, learn the pattern, and then predict what's going to happen next accurately and consistently? That's certainly a form of intelligence.

Lex Fridman: What do you need for that, by the way? Do you need to have an understanding of the way the world works in order to be able to unroll it into the future? Is understanding needed in order to predict?

David Ferrucci: It depends what you mean by understanding. I need to be able to find that function. This is very much like what deep learning and machine learning do: if you give me enough prior data, and you tell me what the output variable is that matters, I'm going to sit there and be able to predict it. And if I can predict it accurately, so that I get it right more often than not, I'm smart. If I can do that with less data and less training time, I'm even smarter. If I can figure out what's even worth predicting, I'm smarter still, meaning I'm figuring out what path is going to get me toward a goal.
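The "find the function from prior data" framing is ordinary supervised learning. A minimal sketch of the idea, with made-up data (this has nothing to do with how Watson actually worked): fit a function to past observations, then use it to predict the next value.

```python
# Minimal "find the function" sketch: fit y = a*x + b to prior data by
# least squares, then predict the output at the next time step.
# The data points are made up for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope learned from the prior data
    b = mean_y - a * mean_x  # intercept
    return a, b

# "Prior data": the output variable we care about at times 0..4.
xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # a perfectly linear pattern, for clarity

a, b = fit_line(xs, ys)
print(a * 5 + b)  # predicted value at time 5 -> 11.0
```

Getting the prediction right with less data, or with noisier data, is where the "smarter" part comes in; this sketch only shows the basic function-finding step.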
Lex Fridman: What about picking a goal?

David Ferrucci: Well, picking a goal is sort of an interesting thing. I think that's where you bring in what you're pre-programmed to do. We talked about humans, and humans are pre-programmed to survive; that's sort of their primary driving goal. What do they have to do to do that? And that can be very complex. It's not just figuring out that you need to run away from the ferocious tiger. We survive in a social context, as an example, so understanding the subtleties of social dynamics becomes something that's important for surviving, finding a mate, reproducing. So we're continually challenged with complex sets of variables, complex constraints, rules if you will, or patterns, and we learn how to find the functions and predict things; in other words, to represent those patterns efficiently and be able to predict what's going to happen. That's a form of intelligence that doesn't really require anything specific other than the ability to find that function and predict the right answer. It's certainly a form of intelligence. But then, when we say, well, do we understand each other; in other words, would you perceive me as intelligent beyond that ability to predict? So now I can predict, but I can't really articulate how I'm carrying out that process, what my underlying theory for predicting is, and I can't get you to understand what I'm doing so that you can figure out how to do it yourself, if you did not have, for example, the right pattern-matching machinery that I did. Now we potentially have this breakdown where, in effect, I'm intelligent, but I'm sort of an alien intelligence relative to you.

Lex Fridman: You're intelligent, but nobody knows about it. Or I can see the output. So you're saying, let's separate the two things: one is you explaining why you were able to predict the future, and the second is me being able to know that you successfully predicted the future, you impressing me that you're intelligent. Do you think that's...

David Ferrucci: Well, it's not about impressing you that I'm intelligent. In other words, you may be convinced that I'm intelligent in some form because of my ability to predict. You might say, wow, you're right more times than I am; you're doing something interesting. That's a form of intelligence. But then what happens is, if I say, how are you doing that, and you can't communicate with me, and you can't describe it to me, now I may label you a savant. I may say, well, you're doing something weird, and it's just not very interesting to me, because you and I can't really communicate. And so now this is interesting, right? Because now you're in this weird place where, for you to be recognized as intelligent the way I'm intelligent, you and I have to be able to communicate. Then we start to understand each other, and my respect, my appreciation, my ability to relate to you starts to change. So now you're not an alien intelligence anymore; you're a human intelligence now, because you and I can communicate. And I think when we look at animals, for example: animals can do things we can't quite comprehend, and we don't quite know how they do them, but they can't really communicate with us. They can't put what they're going through in our terms. So we think of them as somehow lesser, as these alien intelligences, and we don't treat them the same way as a result. But it's hard, because who knows what's going on?
Lex Fridman: Just a quick elaboration on that: the explaining that you're intelligent, the explaining of the reasoning that led to the prediction, is not some kind of mathematical proof. If we look at humans, at political debates and discourse on Twitter, it's mostly just telling stories. So usually your task is not to give an accurate depiction of how you reasoned, but to tell a story, real or not, that convinces me that there was a mechanism by which you...

David Ferrucci: Ultimately, that's what a proof is. I mean, even a mathematical proof is that, because ultimately the other mathematicians have to be convinced by your proof; otherwise, in fact, that's been the measure of success.

Lex Fridman: Yeah, there have been several proofs out there that mathematicians would study for a long time before they were convinced they actually proved anything. You never know if it proved anything until the community of mathematicians decides that it did. So it's a social thing.

David Ferrucci: Yeah, but it's a real thing. And that's sort of the point, right? This notion of understanding something is ultimately a social concept. In other words, I have to convince enough people that I did this in a reasonable way, in a way that other people can understand and replicate and that makes sense to them. So human intelligence is bound together in that way; we're bound up in that sense. We never really get away from it until we can convince others that our thinking process makes sense.

Lex Fridman: Do you think the general question of intelligence is then also a social construct? So if we ask the question of an artificial intelligence system, "Is this system intelligent?", will the answer ultimately be socially constructed?

David Ferrucci: I think so. I'm saying we can try to define intelligence in this super-objective way that says: here's this data, I want to predict this type of thing, learn this function, and then, if you get it right often enough, we consider you intelligent.

Lex Fridman: But that's a limited view.

David Ferrucci: I think it is. That doesn't mean it's not useful; it could be incredibly useful. It could be solving a problem we can't otherwise solve, and solving it more reliably than we can. But then there's this notion of: can humans take responsibility for the decision that you're making? Can we make those decisions ourselves? Can we relate to the process that you're going through? And now you as an agent, whether you're a machine or another human, frankly, are obliged to make me understand how it is that you're arriving at that answer, and to allow me, or obviously a community or a jury of people, to decide whether or not that makes sense. And by the way, that happens with humans as well. You're sitting down with your staff, for example, and you ask for suggestions about what to do next, and someone says, well, I think you should buy, and I think you should buy this much, or sell, or whatever it is; or I think you should launch the product today or tomorrow, or launch this product versus that product, whatever the decision may be. And you ask why, and the person says, "I just have a good feeling about it." And you're not very satisfied. Now, that person could be right, and you might say, well, you've been right before, but I'm going to put the company on the line; can you explain to me why I should believe this? And that explanation may have nothing to do with the truth.

Lex Fridman: It just has to convince me.

David Ferrucci: They could still be wrong. It's just got to be convincing; it's ultimately got to be convincing. And that's why I'm saying we're bound together; our intelligences are bound together in that sense. We have to understand each other. And if, for example, you're giving me an explanation (and this is a very important point), and I'm not good at reasoning well, at being objective and following logical and consistent paths, and I'm not good at measuring and computing probabilities across those paths, then what happens is that, collectively, we're not going to do well.
Lex Fridman: How hard is that problem, the second one? I think we'll talk quite a bit about the first one, performing well on a specific objective metric or benchmark. But being able to explain the steps, the reasoning: how hard is that problem?

David Ferrucci: I think that's very hard. Well, it's hard for humans, and the thing that's hard for humans may not necessarily be hard for computers, and vice versa.

Lex Fridman: Sorry, so how hard is that problem for computers?

David Ferrucci: I think it's hard for computers, and the reason I related it to its being hard for humans is that, when we step back and say we want to design computers to do that, one of the things we have to recognize is that we're not sure how to do it well. I'm not sure we have a recipe for it, and even if you wanted to learn it, it's not clear exactly what data we'd use and what judgments we'd use to learn it well. What I mean by that is, if you look at the entire enterprise of science, science is supposed to be about objective reason. When we think about who the most intelligent person or group of people in the world is, do we think about the savants who can close their eyes and give you a number, or do we think about the think tanks, the scientists, the philosophers, who work through the details, write the papers, come up with the thoughtful, logical proofs, and use the scientific method? I think it's the latter. And my point is: how do you train someone to do that? That's what I mean by "it's hard." What's the process of training people to do that well? That's a hard process. As a society, we work pretty hard to get other people to understand our thinking and to convince them of things. Now, we can persuade them (we obviously talked about this, the human flaws and weaknesses), we can persuade them through emotional means. But to get them to understand and connect to and follow a logical argument is difficult. We try to do it as scientists, we try to do it as journalists, we try to do it even as artists in many forms, as writers, as teachers. We go through a fairly significant training process to do that, and then we can ask why it's so hard. But it is hard, and for humans it takes a lot of work. And when we step back and say, well, how do we get a machine to do that, it's a vexing question.

Lex Fridman: How would you begin to try to solve that? And maybe just a quick pause, because there's an optimistic notion in the things you're describing, which is being able to explain something through reason. But if you look at the algorithms that recommend the things we look at next, whether it's Facebook, Google, the advertising-based companies, their goal is to convince you to buy things based on anything. That could be reason, because the best advertisement is showing you things that you really do need and explaining why you need them, but it could also be emotional manipulation. For the algorithm that describes why a certain decision was made: how hard is it to do it through emotional manipulation, and why is that a good or a bad thing? You've kind of focused on reason and logic, on really showing in a clear way why something is good. One, is that even a thing that us humans do? And two, how do you think about the difference between the reasoning aspect and the emotional manipulation?
David Ferrucci: Well, you call it emotional manipulation, but more objectively, it's essentially saying: there are certain features of things that seem to attract your attention, and I'm going to give you more of that stuff.

Lex Fridman: Manipulation is a bad word.

David Ferrucci: Yeah, I'm not saying it's right or wrong. It works to get your attention, and it works to get you to buy stuff. When you think about the algorithms, they look at the patterns of features that you seem to be spending your money on, and they're going to give you something with a similar pattern. I'm going to learn that function, because the objective is to get you to click, or to get you to buy, or whatever it is. It is what it is; that's what the algorithm does. You can argue whether it's good or bad; it depends on what your goal is.
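The "learn a similar-pattern function" behavior David describes corresponds roughly to content-based filtering: score unseen items by how similar their features are to what the user already clicked on. A minimal sketch; the item names and feature vectors are hypothetical:

```python
# Content-based "show more of the same" sketch: rank catalog items by
# cosine similarity to the feature profile of previously clicked items.
# Item names and feature values are hypothetical illustrations.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Superficial features only: [price_tier, length_tier, colorfulness].
catalog = {
    "gadget_a": [0.9, 0.2, 0.8],
    "gadget_b": [0.8, 0.3, 0.9],
    "essay_c":  [0.1, 0.9, 0.1],
}
clicked = [[0.9, 0.2, 0.8]]  # feature vectors of items the user clicked

# The user's profile is the mean of the clicked feature vectors.
profile = [sum(col) / len(clicked) for col in zip(*clicked)]
best = max(catalog, key=lambda item: cosine(profile, catalog[item]))
print(best)  # -> gadget_a
```

Note that nothing here asks what the items mean or whether the user should want them; the algorithm only matches surface features, which is exactly the limitation David goes on to discuss.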
Lex Fridman: I guess this seems to be very useful for convincing humans of things.

David Ferrucci: Again, this goes back to what human behavior is like, how the human brain responds to things. And I think there's a more optimistic view of it too, which is that if you're searching for certain kinds of things, you've already reasoned that you need them, and these algorithms are saying, look, that's up to you. Reasoning about whether you need something or not, that's your job. You may have an unhealthy addiction to this stuff, or you may have a reasoned and thoughtful explanation for why it's important to you, and the algorithms are saying, hey, that's your problem. All I know is that you're buying stuff like that, you're interested in stuff like that; it could be for a bad reason or a good reason, that's up to you; I'm going to show you more of that stuff. And I think it's not good or bad, and it's not reason or not-reason; the algorithm is just doing what it does, which is saying, you seem to be interested in this, so I'm going to show you more of that stuff. And we're seeing that it's not just in buying stuff, but even in social media: you're reading this kind of stuff; I'm not judging whether it's good or bad; I'm not reasoning at all; I'm just going to show you other stuff with similar features. And that's it, and I wash my hands of it, and I say, that's all that's going on.

Lex Fridman: People are so harsh on AI systems. On one hand, the bar of performance is extremely high, and yet we also ask them, in the case of social media, to help find the better angels of our nature and help make a better society. What do you think about that role?

David Ferrucci: I agree with you; that's the interesting dichotomy, right? Because on one hand we're sitting there doing the easy part, which is finding the patterns. We're not building the systems that build a theory that is consumable and understandable by other humans, that can be explained and justified. So on one hand, to say, oh, AI is doing this, why isn't it doing this other thing: well, those other things are a lot harder. And it's interesting to think about why they're harder. It's because you're interpreting the data in the context of prior models, in other words, understandings of what's important in the world and what's not important. What are all the other abstract features that drive our decision-making? What's sensible, what's not sensible, what's good, what's bad, what's moral, what's valuable? Where is that stuff? No one's applying that interpretation. So when I see you clicking on a bunch of stuff, I look at the simple features, the raw features that are there in the data: what words are being used, how long the material is, what colors are used in the material, or other very superficial features; or, if it's products, what the price is, or what the category is, or stuff like that. I don't know why you're clicking on the stuff you're looking at, and I just feed you more of the same stuff. That's very different from getting in there and saying: what does this stuff you're reading mean? Why are you reading it? What assumptions are you bringing to the table? Are those assumptions sensible? Does the material make any sense? Does it lead you to thoughtful, good conclusions? There's judgment, this interpretive judgment, involved in that process, and it isn't really happening in AI today. That's harder, because you have to start getting at the meaning of the content. You have to get at how humans interpret the content relative to their value system and deeper thought processes.
Lex Fridman: So what "meaning" means is not just some kind of deep, timeless, semantic thing that the statement represents, but also how a large number of people are likely to interpret it. So even meaning is a social construct? You have to try to predict how most people would understand a given kind of statement?

David Ferrucci: Yeah, meaning is often relative, but meaning implies that the connections go beneath the surface of the artifact. If I show you a painting, it's a bunch of colors on a canvas. What does it mean to you? It may mean different things to different people because of their different experiences. It may even mean something different to the artist who painted it. As we try to get more rigorous with our communication, we try to really nail down that meaning. So we go from abstract art to precise mathematics, to precise engineering drawings and things like that; we're really trying to narrow the space of possible interpretations, because the precision of the communication ends up becoming more and more important. And that means I have to specify more, and I think that's why this becomes really hard. If I'm just showing you an artifact and you're looking at it superficially, whether it's a bunch of words on a page, or brushstrokes on a canvas, or pixels in a photograph, you can sit there and interpret it in lots of different ways, at many, many different levels. But when I want to align our understanding of it, I have to specify a lot more stuff that's not directly in the artifact. Now I have to ask: how are you interpreting this image and that image? What about the colors, and what do they mean to you? What perspective are you bringing to the table? What are your prior experiences with those artifacts? What are your fundamental assumptions and values? What is your ability to reason, to chain together logical implications, as in: if this is the case, then I would conclude this, and if that's the case, then I would conclude that? So your reasoning processes and how they work, your prior models and what they are, your values and your assumptions: all those things now come together in the interpretation. Getting at all of that is hard.

Lex Fridman: And yet humans are able to intuit some of that without any precise specification, because they have the shared experience. And we're not talking about just two people having a shared experience; I mean as a society.

David Ferrucci: That's correct.
We have this shared experience and we have similar brains, so we tend to intuit it. In other words, part of our shared experience is our shared local experience: we may live in the same culture, in the same society, and therefore we have a similar education and similar, what we like to call, prior models about the world, prior experiences. Think of it as a wide collection of interrelated variables, all bound to similar things; we take that as our background, and we start interpreting things similarly. As humans we have a lot of shared experience. We have similar brains, similar goals, similar emotions under similar circumstances, because we're both human. So now, one of the early questions you asked was how biological and computer information systems are fundamentally different. Well, one comes with a lot of pre-programmed stuff, a ton of programmed stuff, and those systems are able to communicate because they share that stuff.

Lex Fridman: Do you think that shared knowledge, if we can maybe escape the hardware question and how much is encoded in the hardware, just the shared knowledge in the software, the history, the many centuries of wars and so on that led to today: how hard is it to encode? Do you have hope? Can you speak to how hard it is to encode that knowledge systematically, in a way that could be used by a computer?

David Ferrucci: I think it is possible to learn, for a machine, to program a machine to acquire that knowledge with a similar foundation; in other words, with a similar interpretive foundation for processing that knowledge.

Lex Fridman: What do you mean by that?

David Ferrucci: In other words, we view the world in a particular way. As humans, we have, if you will, a frame of reference for interpreting the world around us; in fact, we have multiple frameworks for interpreting the world around us. If you're interpreting, for example, social or political interactions, you're thinking: there are people, there are collections and groups of people, they have goals, the goals are largely built around survival and quality of life, and there are fundamental economics around scarcity of resources. When humans start interpreting a situation like that, because they've grown up with, say, historical events, they apply a lot of this fundamental framework for interpreting it: who are the people, what were their goals, what resources did they have, how much power and influence did they have over the others? There's this fundamental substrate, if you will, for interpreting and reasoning about the situation. So I think it is possible to imbue a computer with that stuff that humans take for granted when they sit down and try to interpret things. And then, with that foundation, they start acquiring the details, the specifics, in any given situation, and are then able to interpret it with regard to that framework. And given that interpretation, they can predict; but not only can they predict, they can now predict with an explanation that can be given in those terms, the terms of that underlying framework that most humans share. Now, you can find humans that interpret events very differently from other humans, because they're using a different framework. The movie The Matrix comes to mind, where the machines decided that humans were really just batteries; that's how they interpreted the value of humans, as a source of electrical energy. But I think that, for the most part, we have a way of interpreting the events, the social events, around us, because we have that shared framework. It comes, again, from the fact that we're similar beings with similar goals and similar emotions, and these frameworks make sense to us.
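One very rough way to picture such an interpretive framework in code is as a schema of agents, goals, and scarce resources into which a specific situation is slotted, so that a prediction comes with an explanation in the framework's terms. This is purely an illustrative sketch; the names and numbers are invented, and it is not a description of Elemental Cognition's actual approach:

```python
# Toy "interpretive framework" for social/political situations: a schema of
# agents with goals and scarce resources, against which a specific event is
# interpreted. All names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    goals: list       # e.g. ["survival", "quality_of_life"]
    resources: float  # abstract units of a scarce resource

@dataclass
class Situation:
    agents: list = field(default_factory=list)

    def most_influential(self):
        # Crude framework-level inference: influence tracks resources.
        return max(self.agents, key=lambda a: a.resources)

    def explain(self):
        top = self.most_influential()
        # The prediction is expressed in the framework's own terms.
        return (f"{top.name} is likely to shape the outcome because they "
                f"control the most resources ({top.resources} units).")

s = Situation([Agent("faction_a", ["survival"], 10.0),
               Agent("faction_b", ["quality_of_life"], 30.0)])
print(s.explain())
```

The interesting part is not the trivial inference but the shape: the specifics (factions, amounts) vary per situation, while the framework (agents, goals, scarcity, influence) is the finite, shared part that explanations are phrased in.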
is there, do you think? So you said it's possible.

Well, there's a tremendous amount of detailed knowledge in the world. You can imagine effectively an infinite number of unique situations and unique configurations of these things. But the knowledge that you need, what I refer to as, like, the frameworks for interpreting them, I think those are finite.

You think the frameworks are more important than the bulk of the knowledge. So it's like framing.

Yeah, because what the frameworks do is they give you the ability to interpret and reason over the specifics in ways that other humans would understand.

What about the specifics?

You acquire the specifics by reading and by talking to other people.

So mostly, actually, let's focus on even the beginning, the common-sense stuff, the stuff that doesn't even require reading, that almost just requires playing around with the world or something, just being able to sort of manipulate objects, drink water, and so on. Every time we try to do that kind of thing in robotics or AI, it seems to be like an onion: you start to realize how much knowledge is really required to perform even some of these basic tasks. Do you have that sense as well? And if so, how do we get all those details? Are they written down somewhere, or do they have to be learned through experience?

So I think,
when, like, if you're talking about sort of the basic physics around us, for example, acquiring information about how that works, I think there's a combination of things going on. I think there is, like, fundamental pattern matching, like what we were talking about before, where you see enough examples, enough data about something, and you start assuming that, with similar input, I'm going to predict similar outputs. You can't necessarily explain it at all. You may learn very quickly that when you let something go, it falls to the ground.

But that's such a deep idea, if you let something go, like, the idea of gravity.

I mean, people were letting things go and counting on them falling well before they understood gravity.

But that seems to be, that's exactly what I mean: before you take a physics class, or study anything about Newton, just the idea that stuff falls to the ground, and then to be able to generalize that all kinds of stuff falls to the ground. Without encoding it, like, hard-coding it in, it seems like a difficult thing to pick up. It seems like a huge amount of different knowledge to be able to integrate into the framework, sort of into everything else, so you both know that stuff falls to the ground and can start to reason about social, political discourse. So both the very basic and the high-level reasoning and decision-making. I guess my question is: how hard is this problem? And sorry to linger on it, because, again, we'll get to it for sure, but Watson with Jeopardy took on a problem that's much more constrained but has the same hugeness of scale, at least from the outsider's perspective. So I'm asking the general life question: to be able to be an intelligent being and reason in the world about both gravity and politics, how hard is that problem?
So I think it's solvable.

Okay. Now, beautiful. So what about time travel?

Okay, not as convinced on that yet.

Okay. No, I said I think it is, I mean, I took it as solvable. I mean, I think that it's about getting machines to learn. Learning is fundamental, and I think we're already in a place where we understand, for example, how machines can learn in various ways. Right now, our learning stuff is sort of primitive, in that we haven't taught machines to learn the frameworks. We don't communicate our frameworks, because they're shared; in some cases we do, but we don't annotate, if you will, all the data in the world with the frameworks that are inherent in, or underlying, our understanding. Instead, we just operate with the data. So if we want to be able to reason over the data in similar terms, in the common frameworks, we need to be able to teach the computer, or at least program the computer, to have access to, and acquire, and learn the frameworks as well, and connect the frameworks to the data. I think this can be done.

I think machine learning, for example, with enough examples, can start to learn these basic dynamics. Will it relate them necessarily to gravity? Not unless it can also acquire those theories as well, and take the experiential knowledge and connect it back to the theoretical knowledge. I think if we think in terms of these classes of architectures that are designed to both learn the specifics, find the patterns, but also acquire the frameworks and connect the data to the frameworks, if we think in terms of robust architectures like this, I think there is a path toward getting there.
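One toy way to picture this class of architecture is a purely inductive pattern learner sitting next to a small symbolic framework that can explain and generalize the same regularity. This is an illustrative sketch only, not Elemental Cognition's or anyone's actual system; all class names, rules, and situations here are invented.

```python
# Illustrative sketch: an inductive pattern learner paired with a symbolic
# "framework" that states the same regularity as a named rule, so predictions
# come with explanations. All names and rules are hypothetical.

class PatternLearner:
    """Inductive component: memorizes (situation, outcome) examples and
    predicts only for situations it has literally seen before."""
    def __init__(self):
        self.examples = []

    def observe(self, situation, outcome):
        self.examples.append((situation, outcome))

    def predict(self, situation):
        for seen, outcome in self.examples:
            if seen == situation:
                return outcome
        return None  # no matching pattern, no prediction

class Framework:
    """Symbolic component: named rules connecting patterns to a theory,
    so a prediction can be justified in human terms."""
    def __init__(self, rules):
        self.rules = rules  # {rule_name: (condition_fn, outcome)}

    def explain(self, situation):
        for name, (condition, outcome) in self.rules.items():
            if condition(situation):
                return outcome, f"by the rule '{name}'"
        return None, "no applicable rule"

# The learner picks up the regularity from data alone...
learner = PatternLearner()
learner.observe({"action": "release", "object": "ball"}, "falls")
print(learner.predict({"action": "release", "object": "ball"}))  # falls

# ...but only the framework generalizes to an unseen object, with a reason.
physics = Framework({
    "unsupported objects fall": (lambda s: s["action"] == "release", "falls"),
})
outcome, why = physics.explain({"action": "release", "object": "rock"})
print(outcome, why)  # falls by the rule 'unsupported objects fall'
```

The contrast is the point of the sketch: the pattern learner cannot say anything about the rock it never saw, while the framework covers it and can cite which rule applied.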
So, in terms of encoding architectures like that, what do you think systems that are able to do this will look like? Will it be neural networks? Or, if you look back to the eighties and nineties with the expert systems, something more like graphs, the systems that are based in logic, able to contain a large amount of knowledge, where the challenge was the automated acquisition of that knowledge? I guess the question is: when you collect both the frameworks and the knowledge from the data, what do you think that thing will look like?

Yeah, so I mean, I think asking whether they'll look like neural networks is a bit of a red herring. I think that they will certainly do inductive or pattern-match-based reasoning, and I've already experimented with architectures that combine both, that use machine learning and neural networks to learn certain classes of knowledge, in other words, to find the repeated patterns, in order for it to make good inductive guesses, but then ultimately to try to take those learnings and marry them, in other words, connect them, to frameworks, so that it can then reason over them in terms humans understand. So, for example, at Elemental Cognition we do both. We have architectures that do both of those things, but that also have a learning method for acquiring the frameworks themselves, saying, look, ultimately I need to take this data, I need to interpret it in the form of these frameworks so I can reason over it. So there is a fundamental knowledge representation, like what you were saying, like these graphs of logic, if you will; there are also neural networks that acquire certain classes of information and align them with these frameworks; but there's also a mechanism to acquire the frameworks themselves.

Yes.
So it seems like the idea of frameworks requires some kind of collaboration with humans.

Absolutely.

So do you think of that collaboration as...

Well, and let's be clear, only for the express purpose that you're designing an intelligence that can ultimately communicate with humans, in terms of frameworks that help them understand things. So, to be really clear, you can independently create a machine learning system, an intelligence, that I might call an alien intelligence, that does a better job than you at some things but can't explain the framework to you. That doesn't mean it isn't better than you at the thing; it might be that you cannot comprehend the framework that it may have created for itself, that it is inexplicable to you. That's a reality.

But you're more interested in a case where you can...

I am, yeah. My sort of approach to AI is, because I've set the goal for myself: I want machines to be able to ultimately communicate understanding with humans. I want them to acquire knowledge from humans and communicate knowledge to humans. They should be using what, you know, inductive machine learning techniques are good at, which is to observe patterns in data, whether it be in language or in images or videos or whatever, to acquire these patterns, to induce the generalizations from those patterns, but then ultimately to work with humans to connect them to frameworks, interpretations, if you will, that ultimately make sense to humans. Of course, the machine is going to have the strengths it has: the richer, longer memory, the more rigorous reasoning abilities, the deeper reasoning abilities. So it'd be an interesting, complementary relationship between the human and the machine.

Do you think that ultimately needs explainability, like in a machine? So if we look at, we study, for example, Tesla Autopilot a lot, where humans, I don't know if you've driven the vehicle, or are aware of what it is. So basically
the human and machine are working together there, and the human is responsible for their own life, to monitor the system, and the system fails every few miles, and so there are hundreds, there are millions of those failures a day. And so that's like a moment of interaction. Do you see...

Yeah, that's exactly right. That's a moment of interaction, where the machine has learned some stuff, it has a failure, somehow the failure's communicated, the human is now filling in the mistake, if you will, or maybe correcting, or doing something that is more successful in that case, and the computer takes that learning. So I believe in that collaboration between human and machine; I mean, that's sort of a primitive example. Another example is where the machine is literally talking to you and saying, look, I'm reading this thing. I know that, like, the next word might be this or that, but I don't really understand why. I have my guess; can you help me understand the framework that supports this? And then it can acquire that, take that and reason about it and reuse it the next time it's reading, to try to understand something. Not unlike what a human student might do.

I mean, I remember, like, when my daughter was in first grade, she had a reading assignment about electricity, and, you know, somewhere in the text it says, electricity is produced by water flowing over turbines, or something like that. And then there's a question that says, well, how is electricity created? And so my daughter comes to me and says, I mean, I could, you know, "created" and "produced" are kind of synonyms in this case, so I can go back to the text and I can copy "by water flowing over turbines," but I have no idea what that means. Like, I don't know how to interpret "water flowing over turbines," or what electricity even is. I mean, I can get the answer right by matching the text, but I don't have any framework for understanding what this means at all.

And a framework, really, I mean, it's a set of, not to be too mathematical, but axioms, of ideas that you bring to the table in interpreting stuff, and then you build those up somehow.

You build them up with the expectation that there's a shared understanding of what they are.

Shared, right. It's the social network that us humans share.

Do you have a
sense that humans on Earth in general share a set of... like, how many frameworks are there?

I mean, it depends on how you bound them, right? In other words, how big or small, like, their individual scope is. But there's lots, and there are new ones. I think the way I think about it is kind of in a layered fashion, I think the architectures are being layered, in that there's a small set of primitives that give you the foundation to build frameworks, and then there may be, you know, many frameworks, but you have the ability to acquire them, and then you have the ability to reuse them. I mean, one of the most compelling ways of thinking about this is reasoning by analogy, where I could say, oh, wow, I've learned something very similar. You know, I never heard of this game soccer, but if it's like basketball, in the sense that the goal is like the hoop, and I have to get the ball in the hoop, and I have guards, and I have this, and I have that... Like, where are the similarities and where are the differences? And I have a foundation now for interpreting this new information.

And then the different groups, like the Millennials, will have a framework, and then...

Well, you never know, yeah.

Democrats and Republicans...

Millennials, nobody wants that framework.

Well, I mean, I think everyone understands it, right? I mean, you're talking about political and social ways of interpreting the world around them, and I think these frameworks are still largely similar. I think they differ in maybe what some fundamental assumptions and values are. Now, from a reasoning perspective, like, the ability to process the frameworks might not be that different, but the implications of different fundamental values or fundamental assumptions in those frameworks may reach very different conclusions. So, from a social perspective, the conclusions may be very different; from an intelligence perspective, it's, you know, I just followed where my assumptions took me.

Yeah, the process itself would look similar. But that's a fascinating idea, that frameworks really help carve how a statement will be interpreted. I mean, having a Democrat and a Republican framework read the exact same statement, and the conclusions that you derive would be totally different, which, from an NLP perspective, is fascinating.
What we would want out of the AI is to be able to tell you that this perspective, one set of assumptions, is going to lead you here, and another set of assumptions is going to lead you there. And, in fact, you know, to help people reason and say, oh, I see where our differences lie: you have this fundamental belief about that, I have this fundamental belief about that.

Yeah, that's quite brilliant. From my perspective, in NLP there's this idea that there's one way to really understand a statement, but there probably isn't; there's probably an infinite number of ways to understand a statement.

Well, there's lots of different interpretations, and, you know, the broader the content, the richer it is. And, you know, you and I can have very different experiences with the same text, obviously. And if we're committed to understanding each other, and that's the other important point, if we're committed to understanding each other, we start decomposing and breaking down our interpretation into more and more primitive components, until we get to that point where we say, oh, I see why we disagree, and we try to understand how fundamental that disagreement really is. But that requires a commitment to breaking down that interpretation in terms of that framework, in a logical way. Otherwise, you know... And this is why I think of AIs as really complementing and helping human intelligence to overcome some of its biases and its predisposition to be persuaded by, you know, more shallow reasoning, in the sense that we get over this idea of, well, I'm right because I'm a Republican, or I'm right because I'm a Democrat, and someone labeled this as a Democratic point of view, or it has the following keywords in it. And if the machine can help us break that argument down and say, wait a second, what do you really think about this? Then it's essentially holding us accountable to doing more critical thinking.
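The idea that the same facts, run through different fundamental assumptions, mechanically reach different conclusions, and that a machine can show exactly where the frameworks diverge, can be sketched with a toy forward-chaining reasoner. This is an invented illustration; the facts, rules, and predicate strings are made up for the sketch and come from no real system.

```python
# Toy forward-chaining reasoner: identical observed facts combined with two
# different sets of fundamental assumptions derive different conclusions,
# and the derivation trace shows where the frameworks diverge.
# All facts and rules are invented for illustration.

def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until nothing new is derived.
    Returns all known facts plus a trace of which rule fired for which fact."""
    known = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                trace.append((premises, conclusion))
                changed = True
    return known, trace

# Both "reasoners" start from the same shared observations.
shared_facts = ["policy X raises taxes", "policy X funds schools"]

# Two frameworks: same inference machinery, different fundamental assumptions.
framework_a = [
    (["policy X funds schools"], "policy X helps children"),
    (["policy X helps children"], "support policy X"),
]
framework_b = [
    (["policy X raises taxes"], "policy X burdens families"),
    (["policy X burdens families"], "oppose policy X"),
]

for name, rules in [("A", framework_a), ("B", framework_b)]:
    conclusions, trace = forward_chain(shared_facts, rules)
    print(f"Framework {name} concludes:", sorted(conclusions - set(shared_facts)))
    for premises, conclusion in trace:
        print("  because", premises, "=>", conclusion)
```

The two runs disagree ("support policy X" versus "oppose policy X") even though the facts never change; the printed trace is the machine-checkable version of "I see where our differences lie."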
It's fascinating to sit and think about that. I love that. I think that's a really empowering use of AI for the public discourse, which is completely disintegrating currently as we learn how to do it on social media.

Right. So one of the greatest accomplishments in the history of AI is Watson competing in a game of Jeopardy! against humans, and you were a lead in that, a critical part of that. Let's start with the very basics: what is the game of Jeopardy, the game for us humans, human versus human?

Right. So it's to take a question and answer it... Actually, no, it's not. It's really not. It's really to get a question and answer it, but it's what we call a factoid question. So it's this notion that it really relates to some fact, and few people would argue whether the facts are true or not; in fact, most people wouldn't. Jeopardy kind of counts on the idea that these statements have factual answers. And the idea is to first of all determine whether or not you know the answer, which is sort of an interesting twist.

So first of all, you have to understand the question: what is it asking? And that's a good point, because the questions are not asked directly, right?

They're not. Like, the way the questions are asked is nonlinear. It's a little bit witty, it's a little bit playful, sometimes it's a little bit tricky.

Yeah, they're asked in numerous witty, tricky ways, exactly. What they're asking is not obvious. It takes even experienced humans a while to go, what is it even asking?

Right, and it's sort of an interesting realization that you have when someone says, oh, Jeopardy is a question-answering show, and you go, like, I know a lot. And then you read it, and you're still trying to process the question, and the champions have answered and moved on; they're, like, three questions ahead by the time you've figured out what the question even meant.

So there's definitely an ability there to just parse out what the question even is. That was certainly challenging. It's interesting, historically, though: if you look back at the Jeopardy games much earlier, you know, in '63, I think the questions were much more direct; they weren't quite like that. They got sort of more and more interesting, the way they asked them, more and more subtle and nuanced and humorous and witty over time, which
really required the huma