Transcript
oD54vJlG-S4 • What is Intelligence? - François Chollet and Lex Fridman | AI Podcast Clips
Kind: captions
Language: en
[Music]
Can you try to define intelligence? Like, what does it mean to be more or less intelligent? Is it completely coupled to a particular problem, or is there something a little bit more universal?

Yeah, I do believe all intelligence is specialized intelligence. Even human intelligence has some degree of generality. While all intelligence systems have some degree of generality, they're always specialized in one category of problems. So human intelligence is specialized in the human experience, and that shows at various levels. That shows in some prior knowledge that's innate, that we have at birth: knowledge about things like agency, goal-driven behavior, visual priors about what makes an object, priors about time, and so on.
shows also in the way we learn for
instance is very very easy for us to
pick up language it's very very easy for
us to learn certain things because we
are basically hard-coded to learn them
and we are specialized in solving
certain kinds of problem and we are
quite useless when it comes to other
kinds of problems for instance we we are
not really designed to handle very long
term problems we have no capability of
seeing that the very long term we don't
have them very much working memory you
So how do you think about long term? Do you mean long-term planning? Are we talking about a scale of years, millennia? What do you mean by long term, that we're not very good at?

Well, human intelligence is specialized in the human experience, and human experience is very short. One lifetime is short. Even within one lifetime, we have a very hard time envisioning things on a scale of years. It's very difficult to project yourself at the scale of five years, at the scale of ten years, and so on.
Right, we can solve only fairly narrowly scoped problems. So when it comes to solving bigger problems, larger-scale problems, we are not actually doing it on an individual level; it's not actually our brain doing it. We have this thing called civilization, right, which is itself a sort of problem-solving system, a sort of artificial intelligence system, and it's not running on one brain, it's running on a network of brains.
In fact, it's running on more than a network of brains; it's running on a lot of infrastructure, like books and computers and the internet and human institutions and so on. And that is capable of handling problems on a much greater scale than any individual human. If you look at computer science, for instance, that's an institution that solves problems, and it is superhuman: it operates on a greater scale, it can solve much bigger problems than an individual human could. And science itself, science as a system, as an institution, is a kind of artificially intelligent problem-solving algorithm that is superhuman.

Yes, at least computer science is like a theorem prover at the scale of thousands, maybe hundreds of thousands, of human beings. At that scale, what do you think is an intelligent agent? So there's us humans at the individual level; there are millions, maybe billions of bacteria on our skin, that's at the smaller scale; you can even go to the particle level, as systems that behave, you could say, intelligently in some ways. And then you can look at the Earth as a single organism, you can look at our galaxy, even the universe as an organism. How do you think about scale in defining intelligent systems? And we're here at Google: there are millions of devices doing computation in a distributed way. How do you think about intelligence versus scale?

You can always characterize anything as a system, right? I think people who talk about things like intelligence explosion tend to focus on one agent, basically one brain, like one brain considered in isolation, like a brain in a jar that's controlling a body in a very top-to-bottom kind of fashion, and that body is moving around in an environment. So it's a very hierarchical view: you have the brain at the top of the pyramid, then you have the body just plainly receiving orders, and then the body is manipulating objects in the environment, and so on. So everything is subordinate to this one thing, this epicenter, which is the brain. But in real life, intelligent agents don't really work like this, right? There is no strong delimitation between the brain and the body.
To start with, you have to look not just at the brain but at the nervous system. But then the nervous system and the body are not really two separate entities, so you have to look at an entire animal as one agent. But then you start realizing, as you observe an animal over any length of time, that a lot of the intelligence of an animal is actually externalized. That's especially true for humans: a lot of our intelligence is externalized. When you write down some notes, that is externalized intelligence. When you write a computer program, you are externalizing cognition. So it's externalized in books, it's externalized in computers, the internet, other humans, it's externalized in language, and so on. So there is no hard delimitation of what makes an intelligent agent; it's all about context.

Okay, but AlphaGo is
better at Go than the best human player. You know, there are levels of skill here. So do you think there is such an ability, such a concept, as an intelligence explosion in a specific task? And then, well, yeah, do you think it's possible to have a category of tasks on which you do have something like an exponential growth of ability to solve that particular problem?

I think if you consider a specific vertical, it's probably possible to some extent.
I also don't think we have to speculate about it, because we have real-world examples of recursively self-improving intelligent systems. For instance, science is a problem-solving system and a knowledge-generation system, like a system that experiences the world in some sense and then gradually understands it and can act on it. And that system is superhuman, and it is clearly recursively self-improving, because science feeds into technology, technology can be used to build better tools, better computers, better instrumentation, and so on, which in turn can make science faster, right? So science is probably the closest thing we have today to a recursively self-improving superhuman AI. And you can just observe, you know, is science, is scientific progress, currently exploding, which is itself an interesting question. You can use that as a basis to try to understand what will happen with a superhuman AI that has a science-like behavior.