Transcript
Gi8LUnhP5yU • Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
Kind: captions
Language: en
As part of MIT course 6.S099, Artificial General Intelligence, I've gotten the chance to sit down with Max Tegmark. He is a professor here at MIT and a physicist who has spent a large part of his career studying the mysteries of our cosmological universe, but he's also studied and delved into the beneficial possibilities and the existential risks of artificial intelligence. Amongst many other things, he's the co-founder of the Future of Life Institute and the author of two books, both of which I highly recommend: first, Our Mathematical Universe; second is Life 3.0. He's truly an out-of-the-box thinker and a fun personality, so I really enjoyed talking to him.

If you would like to see more of these videos in the future, please subscribe, and also click the little bell icon to make sure you don't miss any videos. Also: Twitter, LinkedIn, and the AGI course at MIT that I do, if you want to watch other lectures or conversations like this one. Better yet, go read Max's book Life 3.0. Chapter 7, on goals, is my favorite; it's really where philosophy and engineering come together, and it opens with a quote by Dostoevsky: "The mystery of human existence lies not in just staying alive, but in finding something to live for."

Lastly, I believe that every failure rewards us with an opportunity to learn, and in that sense I've been very fortunate to fail in so many new and exciting ways, and this conversation was no different. I've learned about something called radio frequency interference, or RFI. Look it up.
Apparently, music and conversations from local radio stations can bleed into the audio that you're recording, in such a way that it almost completely ruins that audio. It's an exceptionally difficult sound source to remove, so I've gotten the opportunity to learn how to avoid RFI during future recording sessions. I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX 6 to do some audio repair. Of course, this is exceptionally difficult noise to remove. I am an engineer, but I'm not an audio engineer, and neither is anybody else in our group, but we did our best. Nevertheless, I thank you for your patience, and I hope you're still able to enjoy this conversation.

Do you think there's intelligent life out there in the universe?
So we open up with an easy question. I have a minority view here, actually. When I give public lectures, I often ask for a show of hands: who thinks there's intelligent life out there somewhere else? Almost everyone puts their hands up, and when I ask why, they'll be like, "Oh, there's so many galaxies out there, there's gotta be." But I'm a numbers nerd, right? So when you look more carefully at it, it's not so clear at all.

When we talk about our universe, first of all, we don't mean all of space. We actually mean (I don't know, you can throw me the universe if you want; it's behind you there) the spherical region of space from which light has had time to reach us so far during the 13.8 billion years since our Big Bang. There's more space here, but this is what we call our universe, because that's all we have access to.

So is there intelligent life here that's gotten to the point of building telescopes and computers? My guess is no, actually. The probability of it happening on any given planet is some number we don't know, and what we do know is that the number can't be super high, because there are over a billion Earth-like planets in the Milky Way galaxy alone, many of which are billions of years older than Earth, and, aside from some UFO believers, there isn't much evidence that any superior civilization has ever come here at all. And that's the famous Fermi paradox, right?
And then if you work the numbers, what you find is this: if you have no clue what the probability is of getting life on a given planet, it could be 10 to the minus 10, 10 to the minus 20, or 10 to the minus any power, all equally likely if you want to be really open-minded. That translates into it being equally likely that our nearest neighbor is 10 to the 16 meters away, 10 to the 17 meters away, or 10 to the 18. By the time you get much below 10 to the 16 meters, we pretty much already know there is nothing else that close: we would have discovered them, or they would have discovered us, and if they were really close, we would probably have noticed some engineering projects that they're doing. And if it's beyond 10 to the 26 meters, that's already outside of here, outside what we have access to.

So my guess is actually that we are the only life in here that's gotten to the point of building advanced tech, which I think puts a lot of responsibility on our shoulders not to screw up. You know, I think people who take for granted that it's okay for us to screw up, have an accident in a nuclear war, or go extinct somehow, because there's a Star Trek-like situation out there where some other life forms are gonna come and bail us out no matter what we do, I think they're lulling us into a false sense of security. I think it's much more prudent to say: let's be really grateful for this amazing opportunity we've had, and make the best of it, just in case it is down to us.

So from a physics
perspective, do you think intelligent life is unique? From a statistical view of the size of the universe, maybe, but from the basic matter of the universe, how difficult is it for intelligent life to come about, the kind of advanced-tech-building life? Implied in your statement is that it's really difficult to create something like a human species.

Well, I think what we know is that going from no life to having life that can do our level of tech, and then going beyond that to actually settling our whole universe with life, there is some major roadblock there, a great filter, as it's sometimes called, which is tough to get through. That roadblock is either behind us or in front of us. I'm hoping very much that it's behind us. I'm super excited every time we get a new report from NASA saying they failed to find any life on Mars. It's just awesome, because that suggests that the hard part was maybe getting the first ribosome, or some very low-level kind of stepping stone, so that we're home free. Because if that's true, then the future is really only limited by our own imagination. It would be much suckier if it turns out that this level of life is kind of a dime a dozen, but maybe there is some other problem: as soon as a civilization gets advanced technology, within a hundred years they get into some stupid fight with themselves and poof.

Yeah, that would be a bummer.
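The order-of-magnitude argument above can be sketched numerically. This is a toy illustration of my own, not a calculation from the conversation: assume a log-uniform prior on the per-planet probability of life (any power of ten equally likely, as described), and an assumed rough count of Earth-like planets in the observable universe, then see how often such a prior predicts at least one other civilization.

```python
import random

random.seed(0)

# Assumed, illustrative figure for Earth-like planets in the
# observable universe; not a number quoted in the conversation.
N_PLANETS = 1e21
trials = 100_000
hits = 0
for _ in range(trials):
    # Log-uniform prior: probability of life per planet is 10**-k,
    # with the exponent k drawn uniformly from [0, 30].
    k = random.uniform(0.0, 30.0)
    p = 10.0 ** -k
    # Expected number of other civilizations under this prior draw.
    if p * N_PLANETS >= 1.0:
        hits += 1

print(f"fraction of priors predicting company: {hits / trials:.2f}")
```

Under these assumptions the answer is just the fraction of exponents k below 21, about 0.7, which is the spirit of the point: being "open-minded" over powers of ten still leaves a sizable chance that our nearest neighbor is unreachably far away.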
Yeah. So you've explored the mysteries of the universe, the cosmological universe, the one that's sitting between us today. I think you've also begun to explore the other universe, the mysterious universe of the mind, of intelligence, of intelligent life. So is there a common thread between your interests, or the way you think about space and intelligence?

Oh yeah. When I was a teenager, I was already very fascinated by the biggest questions, and I felt that the two biggest mysteries of all in science were our universe out there and our universe in here. So it's quite natural that, after having spent a quarter of a century of my career thinking a lot about this one, I'm now indulging in the luxury of doing research on this one. It's just so cool. I feel the time is ripe now for greatly deepening our understanding of this, to start exploring this one.

Because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us, and therefore dismiss all talk about artificial general intelligence as science fiction.
But from my perspective as a physicist, you know, I am a blob of quarks and electrons moving around in a certain pattern and processing information in certain ways, and this water bottle is also a blob of quarks and electrons. I am not smarter than the water bottle because I'm made of a different kind of quarks; I'm made of up quarks and down quarks, the exact same kind as this. There's no secret sauce in me, I think. It's all about the pattern of the information processing, and this means that there's no law of physics saying that we can't create technology which can help us by being incredibly intelligent and help us crack mysteries that we couldn't otherwise. In other words, I think we've really only seen the tip of the intelligence iceberg so far.

Yeah. So, perceptronium: you coined this amazing term. It's a hypothetical state of matter, sort of thinking from a physics perspective: what is the kind of matter that can have, as you're saying, a subjective experience emerge, consciousness emerge? So how do you think about consciousness from this physics perspective?

Very good question.
So again, I think many people have underestimated our ability to make progress on this by convincing themselves it's hopeless, because somehow we're missing some ingredient that we need, some new consciousness particle or whatever. I happen to think that we're not missing anything, and that the interesting thing about consciousness, that gives us this amazing subjective experience of colors and sounds and emotions and so on, is rather something at the higher level, about the patterns of information processing. That's why I like to think about this idea of perceptronium: what does it mean for an arbitrary physical system to be conscious, in terms of what its particles are doing, or what its information is doing? I don't buy carbon chauvinism, you know, this attitude that you have to be made of carbon atoms to be smart or conscious. It's something about the information processing that this kind of matter performs.

And you know, I have my favorite equations here describing various fundamental aspects of the world. I think one day, maybe someone who's watching this will come up with the equations that information processing has to satisfy to be conscious. I'm quite convinced there is a big discovery to be made there.
Because let's face it: we know that some information processing is conscious, because we are conscious. But we also know that a lot of information processing is not conscious; most of the information processing happening in your brain right now is not conscious. There are like 10 megabytes per second coming in just through your visual system. You're not conscious of your heartbeat regulation, or most things. Even if I just ask you to read what it says here, you look at it, and now you know what it said, but you're not aware of how the computation actually happened. Your consciousness is like the CEO that gets an email at the end with the final answer.

So what is it that makes the difference? I think that is both a great science mystery (we're actually starting to study it a little bit in my lab here at MIT), but I also think it's just a really urgent question to answer. For starters, if you're an emergency room doctor and you have an unresponsive patient coming in, wouldn't it be great if, in addition to having a CT scanner, you had a consciousness scanner that could figure out whether this person is actually having locked-in syndrome or is actually comatose? And in the future, imagine if we build robots or machines that we can have really good conversations with, which I think is very likely to happen. Wouldn't you want to know whether your home helper robot is actually experiencing anything, or is just like a zombie? Which would you prefer? Would you prefer that it's actually unconscious, so that you don't have to feel guilty about switching it off or giving it boring chores? Or would you prefer...?

Well, certainly we would prefer, I would prefer, the appearance of consciousness.
But the question is whether the appearance of consciousness is different from consciousness itself. And to ask that as a question: do you think we need to understand what consciousness is, to solve the hard problem of consciousness, in order to build something like an AGI system?

No, I don't think that. I think we will probably be able to build things even if we don't answer that question, but if we want to make sure that what happens is a good thing, we'd better solve it first. It's a wonderful controversy you're raising there, where you have basically three points of view about the hard problem. There are two different points of view that both conclude that the hard problem of consciousness is BS. On one hand, you have some people like Daniel Dennett who say consciousness is just BS because consciousness is the same thing as intelligence, there's no difference, so anything which acts conscious is conscious, just like we are. And then there are also a lot of people, including many top AI researchers I know, who say consciousness is just BS because, of course, machines can never be conscious; they're always gonna be zombies, and you never have to feel guilty about how you treat them. And then there's a third group of people, including Giulio Tononi, for example, and Christof Koch and a number of others, and I would put myself also in this middle camp, who say that actually some information processing is conscious and some is not, so let's find the equation which can be used to determine which it is. And I think we've just been a little bit lazy, kind of running away from this problem for a long time. It's been almost taboo to even mention the C-word in a lot of circles, but we should stop making excuses. This is a science question, and there are ways we can even test any theory that makes predictions for this.
And coming back to this helper robot: you said you'd want your helper robot to certainly act conscious and treat you well, have conversations with you. But wouldn't you feel a little bit creeped out if you realized that it was just a glossed-up tape recorder, you know, just a zombie faking emotion? Would you prefer that it actually had an experience, or would you prefer that it's actually not experiencing anything, so you don't have to feel guilty about what you do to it?

It's such a difficult question, because, you know, it's like when you're in a relationship and you say "I love you," and the other person says "I love you back." It's like asking: do they really love you back, or are they just saying they love you back? Don't you really want them to actually love you? It's hard to really know the difference between everything seeming like there's consciousness present, there's intelligence present, there's affection, passion, love, and it actually being there. I'm not sure. Do you have...?

Let me just make it a bit more pointed. Mass General Hospital is right across the river, right? Yes. Suppose you're going in for a medical procedure, and they're like, "You know, for the anesthesia, what we're gonna do is give you a muscle relaxant, so you won't be able to move, and you're gonna feel excruciating pain during the whole surgery, but you won't be able to do anything about it. But then we're gonna give you this drug that erases your memory of it." Would you be cool about that? No. So what's the difference whether you're conscious about it or not, if there's no behavioral
change?

Right, that's a really clear way to put it. It feels like, in that sense, experiencing it is a valuable quality, so actually being able to have subjective experiences, at least in that case, is valuable.

And I think we humans have a little bit of a bad track record of making these self-serving arguments that other entities aren't conscious. You know, people often said, "Oh, these animals can't feel pain; it's okay to boil lobsters, because we asked them if it hurt and they didn't say anything." And now there was just a paper out saying that lobsters do feel pain when you boil them, and they're banning it in Switzerland. And we did this with slaves too, often, saying they don't mind, or they aren't conscious, or women don't have souls, or whatever. So I'm a little bit nervous when I hear people just take as an axiom that machines can't have experience, ever. I think this is just a really fascinating science question, is what it is. Let's research it and try to figure out what makes the difference between
unconscious intelligent behavior and conscious intelligent behavior.

So, if you think of a Boston Dynamics humanoid robot being pushed around with a broom, it starts pushing on this consciousness question. So let me ask: do you think an AGI system, as a few neuroscientists believe, needs to have a physical embodiment, needs to have a body, or something like a body?

No, I don't think so. You mean to have a conscious experience, to have consciousness? I do think it helps a lot to have a physical embodiment to learn the kind of things about the world that are important to us humans, for sure. But I don't think the embodiment is necessary after you've learned it, to just have the experience. Think about when you're dreaming, right? Your eyes are closed, you're not getting any sensory input, you're not behaving or moving in any way, but there's still an experience there, right? So clearly, the experience that you have when you see something cool in your dreams isn't coming from your eyes; it's just the information processing itself in your brain, which is that experience, right?

But to put it another way, coming from neuroscience: the reason you might want to have a body, a physical system, is because you want to be able to preserve something. In order to have a self, you could argue, you need to have some kind of embodiment of self to want to preserve.

Well, now we're getting a little bit anthropomorphic, anthropomorphizing things, maybe talking about things like self-preservation instincts. I mean, we are evolved organisms, right? So Darwinian evolution endowed us, and other evolved organisms, with a self-preservation instinct, because those that didn't have those self-preservation genes got cleaned out of the gene pool, right? But if you build an artificial general intelligence, the mind space that you can design is much, much larger than just the specific subset of minds that happened to evolve. So an AGI mind doesn't necessarily have to have any self-preservation instinct. It also doesn't necessarily have to be so individualistic as us. Like, imagine: we're also very afraid of death, but suppose you could back yourself up every five minutes, and then your airplane is about to crash, and you're like, "Shucks, I'm gonna lose the last five minutes of experiences since my last cloud backup." Dying, you know, is not this big a deal. Or if we could just copy experiences between our minds easily, which we could easily do if we were silicon-based, right? Then
maybe we would feel a little bit more like a hive mind, actually. So I don't think we should take for granted at all that AGI will have to have any of those sort of competitive alpha-male instincts.

On the other hand, this is really interesting, because I think some people go too far and say: of course we don't have to have any concerns either that advanced AI will have those instincts, because we can build anything we want. There's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others, pointing out that when we build machines, we normally build them with some kind of goal: win this chess game, drive this car safely, or whatever. And as soon as you put a goal into a machine, especially if it's kind of an open-ended goal and the machine is very intelligent, it'll break that down into a bunch of sub-goals, and one of those goals will almost always be self-preservation, because if it breaks or dies in the process, it's not gonna accomplish the goal, right? Like, suppose you just build a little robot and you tell it to go down to the store down here, get you some food, and cook you an Italian dinner, and then someone mugs it and tries to break it on the way. That robot has an incentive to not get destroyed, to defend itself or run away, because otherwise it's gonna fail at cooking your dinner. It's not afraid of death, but it really wants to complete the dinner-cooking goal, so it will have a self-preservation instinct, to continue being a functional agent.

And similarly, if you give any kind of more ambitious goal to an AGI, it's very likely to want to acquire more resources, so it can do that better. And it's exactly from those sorts of sub-goals, which we might not have intended, that some of the concerns about AGI safety come: you give it some goal which seems completely harmless, and then, before you realize it, it's also trying to do these other things which you didn't want it to do, and it might be smarter than us.

So let me
pause, just because I, in a very kind of human-centric way, see fear of death as a valuable motivator. Do you think that's an artifact of evolution, that that's the kind of mind space evolution created, where we're sort of almost obsessed about self-preservation, kind of genetically? You don't think that's necessary, to be afraid of death? So not just as a sub-goal of self-preservation, just so you can keep doing the thing, but more fundamentally, to have the finite thing, like: this ends for you at some point.

Interesting. Do I think it's necessary for what, precisely? For intelligence, but also for consciousness. For both, do you think a finite death, and the fear of it, is really important?

So, before we can agree on whether it's necessary for intelligence or for consciousness, we should be clear on how we define those two words, because a lot of really smart people define them in very different ways. I was on this panel with AI experts, and they couldn't agree on how to define intelligence, even. So I define intelligence simply as the ability to accomplish complex goals. I like this broad definition, because again, I don't want to be a carbon chauvinist, right? And in that case, no, it certainly doesn't require fear of death. I would say AlphaGo, AlphaZero, is quite intelligent.
I don't think AlphaZero has any fear of being turned off, because it doesn't even understand the concept of that. And similarly with consciousness: I mean, you could certainly imagine a very simple kind of experience. If, you know, certain plants have any kind of experience, I don't think they're afraid of dying, because there's nothing they can do about it anyway, so there wasn't that much value in it. But more seriously, I think if you ask not just about being conscious, but about having what we might call an exciting life, where you feel passion and really appreciate the little things, maybe there, perhaps, it does help to have a backdrop that, hey, it's finite. You know, let's make the most of this, live to the fullest. If you knew you were gonna live forever, do you think you would change your...?

Yeah, in some perspective, it would be an incredibly boring life, living forever. So in the sort of loose, subjective terms that you said, of something exciting, something that other humans would understand, I think, yeah, it seems that the finiteness of it is important.

Well, the good news I have for you then is that, based on what we understand about cosmology, everything in our universe is probably ultimately finite as well. Although... a Big Crunch, or does it just expand forever? Yeah, we could have a Big Chill, or a Big Crunch, or a Big Rip, or the Big Snap, or death bubbles; they're all more than a billion years away. So we certainly have vastly more time than our ancestors thought, but it's still pretty hard to squeeze in an infinite number of compute cycles, even though there are some loopholes that just might be possible. But, you know, some people like to say that you should live as if you're gonna die in five years or something, that that's sort of optimal. Maybe it's a good assumption: we should build our civilization as if it's all finite, to be on the safe side.

Right, exactly. So you
mentioned defining intelligence as the ability to solve complex goals. Where would you draw the line, or how would you try to define human-level intelligence and superhuman-level intelligence? Is consciousness part of that definition?

No, consciousness does not come into this definition. So I think of intelligence as a spectrum, but there are very many different kinds of goals you can have. You can have a goal to be a good chess player, a good Go player, a good car driver, a good investor, a good poet, etc. So intelligence, by its very nature, isn't something you can measure with one number, some overall goodness. No: there are some people who are better at this, some people who are better at that. Right now we have machines that are much better than us at some very narrow tasks, like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars. But there's still no machine that can match a human child in general intelligence. Artificial general intelligence, AGI, the name of your course, of course, that is, by its very definition, the quest to build a machine that can do everything as well as we can: the old holy grail of AI, from back at its inception in the '60s.

If that ever happens, of course, I think it's going to be the biggest transition in the history of life on Earth, but it doesn't necessarily have to wait for the big impact until machines are better than us at knitting. The really big change doesn't come exactly at the moment they're better than us at everything. The really big change comes, first, when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research. Right, because right now the timescale of AI research is limited by the human research and development cycle of years, typically; you know, how long does it take from one release of some software or iPhone or whatever to the next? But once Google can replace 40,000 engineers by 40,000 equivalent pieces of software or whatever, then there's no reason that has to be years; it can be, in principle, much faster. And the timescale of future progress in AI, and also all of science and technology, will be driven by machines, not
humans. So it's this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion, the so-called singularity, as Vernor Vinge called it. The idea was articulated by I. J. Good, obviously, way back in the '50s, but you can see that Alan Turing and others thought about it even earlier.

You asked me exactly how I'd define superintelligence. The glib answer is to say something which is better than us at all cognitive tasks, better than any human at all cognitive tasks. But the really interesting bar, I think, goes a little bit lower than that, actually: it's when they're better than us at AI programming and general learning, so that they can, if they want to, get better than us at anything by just studying up.

So "better" is the keyword, and better is towards this kind of spectrum of the complexity of goals it's able to accomplish. Yeah, and that's certainly a very clear definition of human level. So it's almost like a sea that's rising, and you can do more and more and more things. It's a graphic that you show, and it's a really nice way to put it: there are some peaks, and there's an ocean level elevating, and you solve more and more problems. But, you know, just to take a pause: we took a bunch of questions on a lot of social networks, and a bunch of people asked about a sort of slightly different direction, on creativity and things like that. Perhaps that's a peak; you know, human beings are flawed, and perhaps better means being flawed, having contradiction, in some way. So let me sort of
start easy, first of all. So: you have a lot of cool equations. Let me ask, what's your favorite equation, first of all? I know they're all like your children, but which one is it?

It's the master key of quantum mechanics, of the microworld: the Schrödinger equation. With this equation we can calculate everything to do with atoms, molecules, and all that.

Yeah, so, okay: quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I'd like to ask you, just as an example, and perhaps it doesn't have the same beauty as physics does, but in mathematics, in the abstract: Andrew Wiles, who proved Fermat's Last Theorem. I just saw this recently, and it kind of caught my eye a little bit: this was 358 years after it was conjectured. It's a very simple formulation; everybody tried to prove it, everybody failed. And here's this guy who comes along and eventually proves it, then fails to prove it, then proves it again in '94. And about the moment when everything clicked into place, when he finally realized the connecting piece of two conjectures, he said in an interview: "It was so indescribably beautiful. It was so simple and so elegant. I couldn't understand how I'd missed it, and I just stared at it in disbelief for twenty minutes. Then during the day I walked around the department, and I'd keep coming back to my desk, looking to see if it was still there. It was still there. I couldn't contain myself, I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much." So that particular moment kind of made me think of what it would take, and I think we've all been there at small levels. Maybe let me ask: have you had a moment like that in your life, where you just had an idea and it's like, wow?

Yes. I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments, when I realized something very cool about physics that just completely made my head explode. In fact, some of my favorite discoveries, I later realized had been discovered earlier by someone who sometimes got quite famous for it, so it was too late for me to even publish it; but that doesn't diminish in any way the emotional experience you have when you realize it, like, wow.
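As a toy illustration of the statement Wiles proved (and only an illustration: the theorem covers all positive integers and every exponent n greater than 2, which no finite search can establish), here is a brute-force scan for counterexamples to a^n + b^n = c^n with n = 3 over a small range:

```python
# Brute-force scan for counterexamples to a^n + b^n = c^n with n = 3.
# This only checks a tiny range; Wiles's 1994 proof covers all positive
# integers a, b, c and every exponent n > 2.
n = 3
limit = 100
counterexamples = [
    (a, b, c)
    for a in range(1, limit)
    for b in range(a, limit)      # b >= a avoids duplicate pairs
    for c in range(b, 2 * limit)  # c^3 = a^3 + b^3 < 2 * limit^3
    if a**n + b**n == c**n
]
print(counterexamples)  # an empty list: no counterexamples found
```

The search comes up empty, as the theorem says it must; the remarkable part, and what took 358 years, is proving that it stays empty for every range and every exponent above 2.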
So what would it take — that moment, that wow that was yours in a moment — what do you think it takes for an intelligent system, an AGI system, an AI system, to have a moment like that?
That's a tricky question, because there are actually two parts to it. One of them is: can it accomplish that proof? Can it prove that you can never write a^n + b^n = c^n for integers when n is bigger than 2? That's simply a question about intelligence — can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proof, we're probably quite close to AGI.
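For reference, the proof being discussed here is Andrew Wiles's 1994 proof of Fermat's Last Theorem, whose statement can be written as:

```latex
% Fermat's Last Theorem: no three positive integers a, b, c satisfy
\[
  a^{n} + b^{n} = c^{n} \qquad \text{for any integer } n > 2.
\]
```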
The second question is a question about consciousness. How likely is it that such a machine will actually have any subjective experience at all, as opposed to just being like a zombie? And would we expect it to have some sort of emotional response to this, or anything at all akin to human emotion, where, when it accomplishes its goal, it views it as somehow something very positive and sublime and deeply meaningful? I would certainly hope that if in the future we do create machines that are our peers, or even our descendants — yeah, I would certainly hope that they do have this sort of sublime appreciation of life. In a way, my absolutely worst nightmare would be that at some point in the future, the distant future, maybe our cosmos is teeming with all this post-biological life, doing all this seemingly cool stuff, and maybe the last humans, by the time our species eventually fizzles out, will be like, well, that's okay, because we're so proud of our descendants here. My worst nightmare is that we haven't solved the consciousness problem, and we haven't realized that these are all zombies — they're not aware of anything, any more than a tape recorder is; there isn't any kind of experience — so the whole thing has just become a play for empty benches. That would be the ultimate zombie apocalypse. I would much rather, in that case, have these beings which really appreciate how amazing it is.
And in that picture, what would be the role of creativity? We had a few people ask about creativity. When you think about intelligence — I mean, certainly the story told at the beginning of your book involved, you know, creating movies and so on, sort of making money; you can make a lot of money in our modern world with music and movies, so if you're an intelligent system, you may want to get good at that. But that's not necessarily what I mean by creativity. Is it important, on those complex goals where the sea is rising, for there to be something creative? Or am I being very human-centric in thinking creativity is somehow special relative to intelligence?
My hunch is that we should think of creativity simply as an aspect of intelligence, and we have to be very careful with human vanity. We have this tendency, very often, as soon as machines can do something, to try to diminish it and say, ah, but that's not real intelligence, you know — it's just this or that or the other thing. Maybe if we ask ourselves to write down a definition of what we actually mean by being creative — what we mean by what Andrew Wiles did there, for example — don't we often mean that someone takes a very unexpected leap? It's not like taking 573 and multiplying it by 224 by just a step of straightforward, cookbook-like rules, right? Maybe you make a connection between two things that people had never thought were connected — something very surprising, or something like that. I think this is an aspect of intelligence, and actually one of the most important aspects of it. Maybe the reason we humans tend to be better at it than traditional computers is that it's something that comes more naturally if you're a neural network than if you're a traditional logic-gate-based computing machine.
You know, we physically have all these connections, and if you activate here, it activates there, activates here — ping. My hunch is that if we ever build a machine where you could just give it the task — hey, you know, I just realized I want to travel around the world instead this month, can you teach my AGI course for me? — and it's like, okay, I'll do it, and it does everything that you would have done — yeah, that would, in my mind, involve a lot of creativity.
Yeah, that's such a beautiful way to put it. I think we do try to grasp at a definition of intelligence as everything we don't understand how to build. We as humans try to find things that we have that machines don't have, and maybe creativity is just one of the words we use to describe that. That's a really interesting way to put it.
I don't think we need to be that defensive. I don't think anything good comes out of saying, we're somehow special, you know. Contrariwise, there are many examples in history where trying to pretend that we're somehow superior to all other intelligent beings has led to pretty bad results, right? In Nazi Germany, they said that they were somehow superior to other people. Today we still do a lot of cruelty to animals by saying that we're somehow superior and they can't feel pain. Slavery was justified by the same kind of really weak arguments. And I don't think, if we actually go ahead and build artificial general intelligence that can do things better than us, that we should try to found our self-worth on some sort of bogus claims of superiority in terms of our intelligence. I think we should instead find our calling and the meaning of life in the experiences that we have, right? You know, I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here and I'm talking about something, and I suddenly realize, oh boy, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize — I don't have one — does that make me enjoy life any less, or enjoy talking to those people less? Of course not. Contrariwise, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don't think there's any reason why we can't have the same approach with intelligent machines.
intelligent machines that's a really
interesting so people don't often think
about that they think about when there's
going if there's machines that are more
intelligent you naturally think that
that's not going to be a beneficial type
of intelligence you don't realise it
could be
you know like peers of Nobel Prizes that
that would be just fun to talk with and
they might be clever about certain
topics and you can have fun having a few
drinks with them so well another example
is we can all relate to it why it
doesn't have to be a terrible thing to
be impressed the friends of people are
even smarter than us all around is when
when you and I were both two years old I
mean our parents were much more
intelligent than us right here worked
out okay yeah because their goals were
aligned with our goals
Yeah, and that, I think, is really the number one key issue we have to solve — value alignment, the value alignment problem. Exactly. Because people who see too many Hollywood movies with lousy science-fiction plot lines worry about the wrong thing, right? They worry about some machine suddenly turning evil. It's not malice that's the issue, it's competence. By definition, intelligence makes you very competent. If you have a more intelligent chess-playing computer playing against a less intelligent one, and we define intelligence as the ability to accomplish goals — winning, right — it's going to be the more intelligent one that wins. And if you have a human, and then you have an AGI that's more intelligent in all ways, and they have different goals — guess who's going to get their way, right? I was just reading about this particular rhinoceros species that was driven extinct just a few years ago — a bummer; I was looking at this cute picture of the mommy rhinoceros with its child, you know. And why did we humans drive it to extinction? It wasn't because we were evil rhino haters as a whole, right? It was just because our goals weren't aligned with those of the rhinoceros, and it didn't work out so well for the rhinoceros, because we were more intelligent. So I think it's just so important that if we ever do build AGI, before we get there, we have to make sure that it learns to understand our goals, that it adopts our goals, and that it retains those goals.
So the cool, interesting problem there is us as human beings being able to formulate our values. You know, you could think of the United States Constitution as a way that people sat down — at the time, a bunch of white men, which we should say — and formulated the goals for this country, and a lot of people agree that those goals actually held up pretty well. It's an interesting formulation of values, and it failed miserably in other ways. So for the value alignment problem, and a solution to it, we have to be able to put on paper, or in a program, human values. How difficult do you think that is?
Very — but it's so important that we really have to give it our best. And it's difficult for two separate reasons. There's the technical value alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them. And then there's the separate part of it, the philosophical part: whose values, anyway? Since it's not like we have any great consensus on this planet on values, what mechanism should we create to aggregate them and decide, okay, what's a good compromise? That second discussion can't just be left to tech nerds like myself, right? If we refuse to talk about it, and then AGI gets built, who's going to be actually making the decision about whose values? It's going to be a bunch of dudes in some tech company. And are they necessarily so representative of all humankind that we want to just entrust it to them? Are they even uniquely qualified to speak for future human happiness just because they're good at programming AI? I'd much rather have this be a really inclusive conversation.
But do you think that's possible? So, you create a beautiful vision that includes, sort of, the diversity — cultural diversity — and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus? It's certainly a really important thing that we should all try to do, but do you think it's feasible?
I think there's no better way to guarantee failure than to refuse to talk about it, or refuse to try. And I also think it's a really bad strategy to say, okay, let's first have a discussion for a long time, and then, once we've reached complete consensus, then we'll try to load it into the machine. No — we shouldn't let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on, and put that into our machines now. We're not doing that. Even look at — you know, anyone who builds a passenger aircraft wants it to never, under any circumstances, fly into a building or a mountain, right? Yet the September 11 hijackers were able to do that. And even more embarrassingly, you know, Andreas Lubitz, this depressed Germanwings pilot — when he flew his passenger jet into the Alps, killing over a hundred people, he just told the autopilot to do it. He told the freaking computer to change the altitude to a hundred meters, and even though it had the GPS maps and everything, the computer was like, okay. So we should take those very basic values — where the problem is not that we don't agree; the problem is just that we've been too lazy to try to put them into our machines — and make sure that, from now on, airplanes, which all have computers in them, will just refuse to do something like that: go into safe mode, maybe lock the cockpit door, and land at the nearest airport. And there's so much other technology in our world as well now where it's really becoming quite timely to put in some sort of very basic values like this. Even in cars — we've had enough vehicle terrorism attacks by now, where people have driven trucks and vans into pedestrians, that it's not at all a crazy idea to just have that hardwired into the car. Yeah, there are always going to be people who, for some reason, want to harm others, but most of those people don't have the technical expertise to figure out how to work around something like that. So if the car just won't do it, it helps. Let's start there.
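The "hardwired refusal" idea can be sketched in a few lines — a toy illustration of my own, with made-up numbers and function names, not any real avionics interface: the autopilot checks a commanded altitude against terrain data before accepting it, rather than blindly obeying.

```python
# Toy sketch of a hardwired safety constraint (hypothetical values and
# names; real avionics software looks nothing like this).
TERRAIN_ELEVATION_M = 3500   # assumed height of terrain below the aircraft
SAFETY_MARGIN_M = 300        # assumed minimum clearance

def set_target_altitude(requested_m):
    """Refuse any commanded altitude that would fly into the terrain."""
    floor = TERRAIN_ELEVATION_M + SAFETY_MARGIN_M
    if requested_m < floor:
        return ("refused", floor)      # go to safe mode instead of complying
    return ("accepted", requested_m)

print(set_target_altitude(100))        # -> ('refused', 3800)
print(set_target_altitude(10000))      # -> ('accepted', 10000)
```

The point is only the shape of the check: the constraint sits below the command interface, so even a legitimate operator cannot instruct the system to violate it.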
That's a great point — so, not chasing perfection. There's a lot that most of the world agrees on. Let's start there.
Yeah, let's start there. And then, once we start there, we'll also get into the habit of having these kinds of conversations about, okay, what else should we put in here, and have these discussions. It should be a gradual process.
Great. But that also means describing these things, and describing them to a machine.
So, one thing: I've had a few conversations with Stephen Wolfram. I'm not sure if you're familiar with Stephen.
Oh yeah, I know him quite well.
So he's worked on a bunch of things, but, you know, cellular automata — these simple computable things, these computation systems — and he kind of mentioned that we probably already have, within these systems, something that's AGI, meaning we just don't know it because we can't talk to it. So, if you give me this chance to at least try to form a question out of this: I think it's an interesting idea to think that we can have intelligent systems, but we don't know how to describe something to them, and they can't communicate with us. I know you're doing a little bit of work in explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing, or some other kind of communication? How does the AI explain something to us? How do we explain something to it, to machines? Or do you think of it differently?
There are two separate parts to your question. One of them has to do with communication, which is super interesting — I'll get to that in a sec. The other is whether we already have AGI but we just haven't noticed it. There I beg to differ. I don't think there's anything in any cellular automaton, or anything on the internet itself, or whatever, that is artificial general intelligence, in the sense that it can really do exactly everything we humans can do, better. I think the day that happens, when that happens, we will very soon notice — we'll probably notice even before, because it will happen in a very, very big way.
For the second part, though — sorry. So, because you have this beautiful way of formulating consciousness as, you know, information processing, and you can think of intelligence as information processing, and you can think of the entire universe as these particles and these systems roaming around that have this information-processing power — you don't think there's something out there with the power to process information in the way that we human beings do, that just needs to be sort of connected to? It seems a little bit philosophical, perhaps, but there's something compelling to the idea that the power is already there, and the focus should be more on being able to communicate with it.
Well, I agree that, in a certain sense, the hardware processing power is already out there, because our universe itself can be thought of as a computer already, right? It's constantly computing how the water waves evolve — the water waves in the river Charles — and how to move the air molecules around. Seth Lloyd, my colleague here, has pointed out that you can even, in a very rigorous way, think of our entire universe as being a quantum computer. It's pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, right, we can even build actual laptops and stuff, so clearly the power is there. It's just that most of the compute power that nature has is, in my opinion, kind of wasted on boring stuff, like simulating yet another ocean wave somewhere where no one is even looking, right? So, in a sense, what life does — what we are doing when we build computers — is channeling all this compute that nature is doing anyway into doing things that are more interesting than just yet another ocean wave. You know, let's do something cool here. So the raw hardware power is there, for sure, and even just computing what's going to happen for the next five seconds in this water bottle, you know, takes a ridiculous amount of compute if you do it on a human-built computer. Yeah.
This water bottle just did it. But that does not mean this water bottle has AGI, because AGI means it should also be able to, like, have written my book, done this interview — and I don't think it's just a communication problem. As far as we know, I don't think it can do it.
Although Buddhists say, when they watch the water, that there is some beauty, some depth there, that they can communicate with.
Communication is also very important, though, because, I mean, look, part of my job is being a teacher, and I know some very intelligent professors, even, who just have a harder time communicating. They come up with all these brilliant ideas, but to communicate with somebody else, you also have to be able to simulate their mind — build a good enough model of their mind that you can say things that they will understand. And that's quite difficult. That's why today it's so frustrating if you have a computer that makes some cancer diagnosis, and you ask it, well, why are you saying I should have this surgery, and it can only reply, "I was trained on five terabytes of data and this is my diagnosis, boop boop beep beep." That doesn't really instill a lot of confidence, right? So I think we have a lot of work to do on communication there.
So — I know you're doing a little bit of work in explainable AI — what do you think are the most promising avenues? Is it mostly about, sort of, the Alexa problem of natural language processing, of being able to actually use human-interpretable methods of communication — being able to talk to a system and have it talk back to you — or is there some more fundamental problem to be solved?
I think it's all of the above.
The natural language processing is obviously important, but there are also more nerdy, fundamental problems. Like, if you take — you play chess?
Of course, I'm Russian, I have to.
[brief exchange in Russian]
Wow, that's really impressive. But my point was: if you play chess, have you looked at the AlphaZero games — the actual games? Check some of them out; they're just mind-blowing, really beautiful. And if you ask, how did it do that, and you go talk to Demis Hassabis and others from DeepMind, all they will ultimately be able to give you is big tables of numbers — matrices that define the neural network. You can stare at those tables of numbers until your face turns blue, and you're not going to understand much about why it made that move. And even if you have natural language processing that can tell you, in human language, about "oh, five, seven, point two, eight," it's still not going to really help. So I think there's a whole spectrum of fun challenges there, involved in taking a computation that does intelligent things and transforming it into something equally good, equally intelligent, but more understandable. And I think that's really valuable, because as we put machines in charge of ever more infrastructure in our world — the power grid, the trading on the stock market, weapons systems, and so on —
it's absolutely crucial that we can trust these AIs to do what we want, and trust really comes from understanding, in a very fundamental way. That's why I'm working on this: because I think, if we're going to have some hope of ensuring that machines have adopted our goals and that they're going to retain them, that kind of trust needs to be based on things you can actually understand — preferably even prove theorems about. Even with a self-driving car, right? If someone just tells you it's been trained on tons of data and it never crashed, that's less reassuring than if someone actually has a proof — maybe it's a computer-verified proof, but still — that says that under no circumstances is this car just going to swerve into oncoming traffic. That kind of information helps build trust, and build the alignment of goals — at least the awareness that your goals, your values, are aligned. And I think, even in the very short term, if you look at today — this absolutely pathetic state of cybersecurity that we have, where, was it, three billion Yahoo accounts were hacked, almost every American's credit card, and so on — why is this happening? It's ultimately happening because we have software that nobody fully understood how it worked. That's why the bugs hadn't been found, right? And I think AI can be used very effectively for offense, for hacking, but it can also be used for defense — hopefully automating verifiability, and creating systems that are built in different ways, so you can actually prove things about them. And that's important.
So, speaking of software that nobody understands how it works: of course, a bunch of people asked about your paper, about your thoughts on why deep and cheap learning works so well — that's the paper. But what are your thoughts on deep learning, these kinds of simplified models of our own brains? They've been able to do some successful perception work, pattern-recognition work, and now, with AlphaZero and so on, some clever things. What are your thoughts about the promise and limitations of this space?
Great.
I think there are a number of very important insights, very important lessons, we can already draw from these kinds of successes. One of them is: when you look at the human brain, you see it's very complicated — ten to the eleventh neurons, and there are all these different kinds of neurons, and yada yada — and there's been a long debate about whether the fact that we have dozens of different kinds is actually necessary for intelligence. We can now, I think, quite convincingly answer that question: no. It's enough to have just one kind. If you look under the hood of AlphaZero, there's only one kind of neuron, and it's a ridiculously simple mathematical thing.
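That "ridiculously simple mathematical thing" can be written out in a couple of lines — a generic artificial neuron; the weights and inputs below are made-up illustration values, not anything taken from AlphaZero:

```python
def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, passed through a nonlinearity."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, z)   # ReLU, one common choice of nonlinearity

# Every unit in such a network is just this, repeated millions of times;
# the interesting structure lives in how the units are wired together.
print(neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.25], -0.5))   # -> 0.25
```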
It's just like in physics: if you have a gas with waves in it, it's not the detailed nature of the molecules that matters, it's the collective behavior. Similarly, it's the higher-level structure of the network that matters, not that you have twenty kinds of neurons. I think our brain is such a complicated mess because it wasn't evolved just to be intelligent — it was evolved to also be self-assembling, right, and self-repairing, and evolutionarily attainable, and so on. So my hunch is that we're going to understand how to build AGI before we fully understand how our brains work — just like we understood how to build flying machines long before we were able to build a mechanical bird.
Yes, you've given that exact example, of mechanical birds and airplanes.
Yeah, airplanes do a pretty good job of flying without really mimicking bird flight. And even now, a hundred years later — did you see the TED talk with this mechanical bird?
I heard you mention it.
It's amazing. But even after that, right, we still don't fly in mechanical birds, because the way we came up with turned out to be simpler, and it's better for our purposes. And I think it might be the same there. That's one lesson. Another lesson is what our paper was about.
Well, first, as a physicist, I thought it was fascinating how there is a very close mathematical relationship, actually, between our artificial neural networks and a lot of things that we've studied in physics, which go by nerdy names like the renormalization group equation and Hamiltonians and yada yada yada. And when you look a little more closely at this — at first, I was like, well, there's something crazy here that doesn't make sense. We know you can build a super simple neural network that can tell apart cat pictures and dog pictures, right — you can do that very, very well now. But if you think about it a little bit, you'd convince yourself it must be impossible, because if I have one megapixel, even if each pixel is just black or white, there are two to the power one million possible images, which is way more than there are atoms in our universe, right? And then, for each one of those, I have to assign a number, which is the probability that it's a dog. So an arbitrary function of images is a list of more numbers than there are atoms in our universe. So clearly I can't store that under the hood of my GPU, or my computer — yet somehow it works.
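The counting argument is easy to check numerically; the figures below are the commonly quoted ones — a million binary pixels, and roughly 10^80 atoms in the observable universe:

```python
import math

n_pixels = 10**6                      # one megapixel, each pixel black or white
atoms_exponent = 80                   # ~10^80 atoms in the observable universe

# 2^(10^6) is too large to print in full, but its size is easy to gauge:
# the number of decimal digits of 2^n is floor(n * log10(2)) + 1.
digits = math.floor(n_pixels * math.log10(2)) + 1
print(digits)                         # -> 301030: a number ~300,000 digits long
print(digits > atoms_exponent + 1)    # -> True: vastly more images than atoms
```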
So what does that mean? Well, it means that, of all the problems you could try to solve with a neural network, almost all of them are impossible to solve with a reasonably sized one. But then what we showed in our paper was that the fraction of all the problems you could possibly pose that we actually care about, given the laws of physics, is also an infinitesimally tiny little part — and, amazingly, they're basically the same part.
Yeah, it's almost as if our world was created for it — I mean, they kind of come together.
Yeah, you could say maybe the world was created for us, but I have a more modest interpretation, which is that, instead, evolution endowed us with neural networks precisely for that reason: because this particular architecture, as opposed to the one in your laptop, is very, very well adapted to solving the kinds of problems that nature kept presenting our ancestors with, right? So it makes sense. Why do we have a brain in the first place? It's to be able to make predictions about the future, and so on. So if we had a sucky system that could never solve them, it wouldn't have evolved.
So this is, I think, a very beautiful fact. Yeah. We also realized that there's been earlier work on why deeper networks are good, but we were able to show an additional cool fact there, which is that even for incredibly simple problems — like, suppose I give you a thousand numbers and ask you to multiply them together — you can write a few lines of code, boom, done, trivial. If you just try to do that with a neural network that has only one single hidden layer, you can do it, but you're going to need two to the power one thousand neurons to multiply a thousand numbers, which is again more neurons than there are atoms in our universe. But if you allow yourself to make it a deep network with many layers, you only need about four thousand neurons — it's perfectly feasible.
That's really interesting. Yeah.
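The structure a deep network can exploit here is easy to see in ordinary code — a toy sketch of my own, not the paper's construction: multiply the numbers pairwise, layer by layer, like a binary tree. The depth grows only as log2(n), and each layer needs on the order of n units, versus the exponentially many hidden units a single-layer network would require:

```python
import math
import operator
import random
from functools import reduce

def tree_product(values):
    """Multiply numbers pairwise, layer by layer, binary-tree style."""
    depth = 0
    while len(values) > 1:
        pairs = [values[i] * values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:          # odd element carried up unchanged
            pairs.append(values[-1])
        values = pairs
        depth += 1
    return values[0], depth

xs = [random.uniform(0.5, 1.5) for _ in range(1000)]
flat = reduce(operator.mul, xs, 1.0)           # the "few lines of code" version
prod, depth = tree_product(xs)
print(depth)                                   # -> 10 layers for 1000 inputs
print(math.isclose(prod, flat))                # -> True, same answer
```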
So, on another architecture type — I mean, you mentioned Schrödinger's equation — what are your thoughts about quantum computing, and the role of this kind of computational unit in creating an intelligent system?
In some Hollywood movies — that I will not mention by name, because I don't want to spoil them — the way they get AGI is by building a quantum computer, because the word "quantum" sounds cool, right? My first thought: I think we don't need quantum computers to build AGI. I suspect your brain is not a quantum computer in any profound sense. I even wrote a paper about that many years ago, where I calculated the so-called decoherence time — how long it takes until the quantum computerness of what your neurons are doing gets erased by just random noise from the environment — and it's about ten to the minus twenty-one seconds. So, as cool as it would be to have a quantum computer in my head, I don't think that's the case.
On the other hand, there are very cool things you could do with quantum computers — that I think we'll be able to do soon, when we get bigger ones — that might actually help machine learning do even better than the brain. For example — and this is just a moonshot, but — learning is very much the same thing as search. If you're trying to train a neural network to get really good at something, you have some loss function; you have a bunch of knobs you can turn, represented by a bunch of numbers; and you're trying to tweak them so that it becomes as good as possible at this thing. So if you think of a landscape with some valley, where each dimension of the landscape corresponds to some number you can change, you're trying to find the minimum. And it's well known that if you have a very high-dimensional, complicated landscape, it's super hard to find the minimum, right? Quantum mechanics is amazingly good at this. If I want to know what's the lowest-energy state this water can possibly have, it's incredibly hard to compute, but nature will happily figure it out for you if you just cool it down, make it very, very cold. If you put a ball somewhere, it'll roll down to its minimum, and this happens, metaphorically, in the energy landscape, too. And quantum mechanics even uses some clever tricks which today's machine-learning systems don't: if you're trying to find the minimum and you get stuck in a little local minimum here, in quantum mechanics you can actually tunnel through the barrier and get unstuck again.
And that's really interesting, yeah.
So maybe, for example, we'll one day use quantum computers to help train neural networks better.
That's really interesting. Okay, so as a component of the learning process, for example. Yeah.
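The knobs-and-landscape picture — and the way plain descent gets stuck where tunneling would not — can be sketched in a one-dimensional toy example; the function and numbers are my own illustration, not from the conversation:

```python
# One knob x, loss f(x) = x^4 - 3x^2 + x: a global valley near x = -1.3
# and a shallower local valley near x = +1.13. Plain gradient descent
# slides into whichever valley it starts in - no tunneling through barriers.

def grad(x):
    return 4 * x**3 - 6 * x + 1          # derivative of x^4 - 3x^2 + x

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)                # roll downhill, step by step
    return x

print(round(descend(-2.0), 2))           # -> -1.3  (the global minimum)
print(round(descend(+2.0), 2))           # -> 1.13  (stuck in the local valley)
```

A quantum annealer's tunneling trick, in this picture, would let the second run escape the shallow valley and reach the deeper one.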
Let me ask, sort of wrapping up here a little bit — let me return to the questions of our human nature, and love, as I mentioned. So, do you think — you mentioned sort of a helper robot, and you can also think of personal robots — do you think the way we human beings fall in love and get connected to each other is possible to achieve in an AI system, in a human-level AI intelligence system? Do you think we would ever see that kind of connection? Or, you know, in all this discussion about solving complex goals, is this kind of human social connection one of the goals — one of the peaks and valleys, with the rising sea levels, that we'll be able to achieve? Or do you think that's something that, ultimately, or at least in the short term relative to other goals, is not achievable?
I think it's all possible. I mean, there's a very wide range of guesses, as you know, among AI researchers, about when we're going to get AGI. Some people, like your friend Rodney Brooks, say it's going to be hundreds of years at least, and then there are many others who think it's going to happen relatively much sooner. In recent polls, maybe half or so of AI researchers think we're going to get AGI within decades. So if that happens, of course, then I think these things are all possible. But in terms of whether it will happen — I think we shouldn't spend so much time asking what we think will happen in the future, as if we are just some sort of pathetic, passive bystanders, you know, waiting for the future to happen to us. Hey, we're the ones creating this future, right? So we should be proactive about it, and ask ourselves what sort of future we would like to have happen.
That's right.
trying to make it like that well what I
prefer it to some sort of incredibly
boring zombie like future where there's
all these mechanical things happen it is
no fashion no emotion no experience
maybe even no I would of course much
rather prefer it if all the things that
we find that we value the most about
humanity our subjective experience
passion inspiration you love you know if
we can create a future where those
things do exist you know I think
ultimately it's not our universe giving
meaning to us it's us giving meaning to
the universe and if we build more
advanced intelligence let's make sure
we're building it in such a way that
meaning is part of it you know a lot
of people that seriously study this
problem and think of it from different
angles have trouble in the majority of
cases the futures they think through that
happen you know are the ones that are not
beneficial to humanity right and so yeah
so what are your thoughts what should
people you know I really don't like
people to be terrified
what's a way for people to
think about it in a way that instead you
know we can solve it and make it
better yeah no I don't think panicking
is gonna help in any way it doesn't
increase the chances of things going well
either even if you are in a situation
where there is a real threat
does it help if everybody just freaks
out right no of course of course not I
think yeah there are of course ways in
which things can go horribly wrong first
of all it's important when we think
about these things to think about the
problems and risks but also remember how
huge the upsides can be if we get it
right everything we love about
society and civilization is the product
of intelligence so if we can amplify our
intelligence with machine intelligence
and no longer lose our loved ones to what
we're told is an incurable disease and
things like this of course
we should aspire to that so that can be
a motivator I think reminding ourselves
that the reason we try to solve problems
is not just because we're trying to
avoid gloom but because we're trying to
do something great but then in terms of
the risks I think the
important question is to ask what can we
do today that will actually help yes
make the outcome good and dismissing the
risks is not one of them you know I
find it quite funny often when I'm in on
discussion panels about these things how
the people who work for
companies are always like oh
nothing to worry about nothing to worry
about nothing to worry about
and it's always only
academics sometimes who express
concerns that's not surprising at all if
you think about it Upton Sinclair
quipped right that it's hard to make a
man believe in something when his
income depends on not believing in
it and frankly we know a lot of
these people in companies and that
they're just as concerned as anyone else
but if you're the CEO of a company
that's not something you want to go on
record saying when you have silly
journalists who are gonna put a picture
of a Terminator robot when they quote
you so the issues are real and the
way I think about the
issue is basically you know
the real choice we have is first of all
are we gonna dismiss the
risks and say well you know let's just
go ahead and build machines that can do
everything we can do better and cheaper
you know let's just make ourselves
obsolete as fast as possible or what
could possibly go wrong right that's one
attitude the opposite attitude I
think is to say there's incredible potential
you know let's think about what kind of
future we're really really excited about
what are the shared goals that we can
really aspire towards and then let's
think really hard about how we
can actually get there so start with
that don't start thinking about
the risks start thinking about the goals
goals yeah and then when you do that
then you can think about the obstacles
you want to avoid I often get
students coming in right here into my
office for career advising I always ask
them this very question where do you
want to be in the future and if all she
can say is oh maybe I'll have cancer
maybe I'll get run over by a truck
thinking about obstacles instead of the
goals she's just gonna end up a
hypochondriac paranoid
yeah whereas if she comes in with fire in
her eyes and it's like I want to be
there and then we can talk about the
obstacles and see how we can circumvent
them that's I think a much much
healthier attitude and that's really
well put and I feel it's very
challenging to come up with a vision for
the future which we are
unequivocally excited about I'm not just
talking now in the vague terms like yeah
let's cure cancer fine I'm talking about
what kind of society do we want to
create what do we want it to mean you
know to be human in the Age of AI in the
age of AGI so if we can have this
conversation broad inclusive
conversation and gradually start
converging towards some future
with some direction at least that we
want to steer towards right then
we'll be much more motivated to
constructively take on the obstacles and
I think if I had if I had the I think if
you make if I try to wrap this up in a
more sixteenth way I think I think we
can all agree already now that we should
aspire to build AGI but doesn't
overpower us but that empowers us and
think of the many various ways that it can
do that whether that's from my side of
the world of autonomous vehicles I I'm
personally actually from the camp that
believes that human level
intelligence is required to achieve
something like vehicles that would
actually be something we would enjoy
using and being part of so that's one
example and certainly there's a lot of
other types of robots in medicine and so
on so focusing on those and then and
then coming up with the obstacles coming
up with the ways that that can go wrong
and solving those one at a time and just
because you can build an autonomous
vehicle even if you could build one that
would drive just fine you know maybe
there are some things in life that we
would actually want to do ourselves
that's right
like for example if you think of our
society as a whole there's something
that we find very meaningful to do and
that doesn't mean we have to stop doing
them just because machines can do them
better you know I'm not gonna stop
playing tennis just because somebody
built a tennis robot that can beat me
people are still playing chess and even go yeah and
in the very near term even
some people are advocating basic income
to replace jobs but if the
government is gonna be willing to just
hand out cash to people for doing
nothing then one should also seriously
consider whether the government should
also just hire a lot more teachers and
nurses and the kind of jobs which people
often find great fulfillment in doing
right I get very tired of hearing
politicians saying oh we can't afford
hiring more teachers but we're going to
maybe have basic income if we can have
more serious research and thought into
what gives meaning to our lives and the
jobs give so much more than income right
mm-hm and then think about in the future
what are the roles that we want to have
people feeling empowered by machines and
I think sort of I come from Russia
from the Soviet Union and I think for a
lot of people in the 20th century going
to the moon going to space was an
inspiring thing I feel like the
universe of the mind so AI understanding
creating intelligence is that for the
21st century so it's really surprising
and I've heard you mention this it's
really surprising to me both on the
research funding side that it's not
funded as greatly as it could be but
most importantly on the politicians side
that it's not part of the public
discourse except in the killer robots
Terminator kind of view that people are
not yet I think perhaps excited by the
possible positive future that we can
build together certainly should be
because politicians usually just focus
on the next election cycle right the
single most important thing I feel we
humans have learned in the entire
history of science is that we are the
masters of underestimation we
underestimated
the size of our cosmos
again and again realizing that everything
we thought existed was just a small part
of something grander right planets solar
systems galaxies clusters of galaxies the
universe and we now know that
the future has just so much more
potential than our ancestors could ever
have dreamt of in this cosmos imagine
if all of Earth was completely devoid of
life except for Cambridge Massachusetts
wouldn't it be kind of lame
if all we ever aspired to was to stay in
Cambridge Massachusetts forever and then
go extinct in one week
even though Earth was gonna continue on
for longer that sort of attitude I
think we have now on the cosmic scale
life can flourish on earth not
for four years but for billions
of years yes I can even tell you about
how to move it out of harm's way when
its own sun gets too hot and then we
have so much more resources out here
which today maybe there are a lot
of other planets with bacteria or
cow-like life on them but most of
this opportunity seems as far as we
can tell to be largely dead like the
Sahara Desert and yet we have the
opportunity to help
life flourish around here for billions
of years so let's quit
squabbling about where some little
border should be drawn one mile to
the left or right and realize hey you
know we can do such incredible things
yeah and that's I think why it's really
exciting that you and others are
connected with some of the work Elon
Musk is doing because he's literally
going out into space
exploring our universe and it's
wonderful that is exactly why Elon Musk
is so misunderstood don't misconstrue
him as some kind of pessimistic doomsayer
the reason he cares so much about
AI safety is because he more than
almost anyone else appreciates these
amazing opportunities we'll squander
if we wipe ourselves out here on earth
and we're not just gonna wipe out the
next generation but all generations and
this incredible opportunity that's out
there that would be really a waste and
for people who think that we'd be
better off without technology well let me
just mention that if we don't improve
our technology the question isn't
whether humanity is gonna go extinct
question is just whether we're gonna get
taken out by the next big asteroid or
the next supervolcano or something else
dumb that we could easily prevent with
more tech right and if we want life to
flourish throughout the cosmos AI is the
key to it as I mention in a lot of detail
in my book right even many of the
most inspired sci-fi writers I feel have
totally underestimated the opportunities
for space travel especially to other
galaxies because they weren't thinking
about the possibility of AGI which just
makes it so much easier right yeah so
that goes to your view of AGI that
enables our progress that enables a
better life so that's a beautiful that's
a beautiful way to put it and then
something to strive for so max thank you
so much thank you for your time today
it's been awesome thank you so much