Demis Hassabis: DeepMind - AI, Superintelligence & the Future of Humanity | Lex Fridman Podcast #299
Gfr50f6ZBvo • 2022-07-01
The following is a conversation with Demis Hassabis, CEO and co-founder of DeepMind, a company that has published and built some of the most incredible artificial intelligence systems in the history of computing, including AlphaZero, which learned all by itself to play the game of Go better than any human in the world, and AlphaFold 2, which solved protein folding, both tasks considered nearly impossible for a very long time.

Demis is widely considered to be one of the most brilliant and impactful humans in the history of artificial intelligence, and of science and engineering in general. This was truly an honor and a pleasure for me, to finally sit down with him for this conversation, and I'm sure we will talk many times again in the future. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Demis Hassabis.
Let's start with a bit of a personal question. Am I an AI program you wrote to interview people, until I get good enough to interview you?

Well, I'd be impressed if you were. I'd be impressed by myself if you were. I don't think we're quite up to that yet, but maybe you're from the future, Lex.

If you did, would you tell me? Is that a good thing to tell a language model that's tasked with interviewing, that it is in fact AI? Maybe we're in a kind of meta Turing test.

Probably it would be a good idea not to tell you, so it doesn't change your behavior, right? It's a kind of Heisenberg uncertainty principle situation: if I told you, you'd behave differently.

Yeah, maybe that's what's happening with us, of course. This is a benchmark from the future, where they replay 2022 as a year before AIs were good enough yet, and now we want to see, is it going to pass?

Exactly.
If I was such a program, would you be able to tell, do you think? So, to the Turing test question: you've talked about the benchmark for solving intelligence. What would be the impressive thing? You've talked about a system winning a Nobel Prize. But I still return to the Turing test as a compelling test; the spirit of the Turing test is a compelling test.

Yeah, the Turing test, of course, has been unbelievably influential, and Turing's one of my all-time heroes. But I think if you look back at the 1950 paper, the original paper, and read the original, you'll see that I don't think he meant it to be a rigorous, formal test. I think it was more like a thought experiment, almost a bit of philosophy he was writing, if you look at the style of the paper. And you can see he didn't specify it very rigorously. For example, he didn't specify the knowledge that the expert or judge would have, or how much time they would have to investigate, and these are important parameters if you were going to make it a true formal test. And by some measures, people claimed the Turing test was passed a decade ago. I remember someone claiming that with a very bog-standard, normal logic model, because they pretended it was a kid, so the judges thought that the machine was a child. That would be very different from an expert AI person interrogating a machine, knowing how it was built, and so on. So I think we should probably move away from that as a formal test and move more towards a general test, where we test the AI's capabilities on a range of tasks and see if it reaches human-level or above performance, on maybe thousands, perhaps even millions of tasks eventually, and cover the entire cognitive space. So I think, for its time, it was an amazing thought experiment. And also, 1950s, it was obviously barely the dawn of the computer age, so of course he only thought about text, and now we have a lot more different inputs.
So yeah, maybe the better thing to test is generalizability, so across multiple tasks. But I think it's also possible, as systems like Gato show, that eventually that might map right back to language. So you might be able to demonstrate your ability to generalize across tasks by communicating your ability to generalize across tasks, which is kind of what we do through conversation anyway, when we jump around. Ultimately, what's in that conversation is not just you moving around knowledge; it's you moving around these entirely different modalities of understanding, which ultimately map to your ability to operate successfully in all these domains, which you can think of as tasks.
Yeah, I think certainly we as humans use language as our main generalization and communication tool. So I think we end up thinking in language, and expressing our solutions in language, so it's going to be a very powerful mode in which to explain, for the system to explain, what it's doing. But I don't think it's the only modality that matters. There are a lot of different ways to express capabilities other than just language.

Yeah: visual, robotics, body language.

Yeah, actions, the interactive aspect of all that. That's all part of it. But
what's interesting with Gato is that it's sort of pushing prediction to the maximum, in terms of mapping arbitrary sequences to other sequences, and just predicting what's going to happen next. So prediction seems to be fundamental to intelligence.

And what you're predicting doesn't so much matter.

Yeah, it seems like you can generalize that quite well. So obviously, language models predict the next word; Gato predicts potentially any action or any token. And it's just the beginning, really. It's our most general agent, one could call it, so far. But that itself can be scaled up massively, more than we've done so far, and obviously we're in the middle of doing that.

But the big part of solving AGI is creating benchmarks that help us get closer and closer, sort of creating benchmarks that test the generalizability. And it's just still
interesting that this fellow Alan Turing was one of the first, and probably still one of the only people, who was trying, maybe philosophically, but was trying, to formulate a benchmark that could be followed. Even though it's fuzzy, it's still sufficiently rigorous that you can run the test. And I still think something like the Turing test will, at the end of the day, be the thing that truly impresses other humans, so that you can have a close friend who's an AI system. For that friend to be a good friend, they're going to have to be able to play StarCraft, and they're going to have to do all of these tasks: get you a beer, so the robotics tasks; play games with you; use language, humor, all those kinds of things. But that ultimately can boil down to language. It feels like, not in terms of the AI community, but in terms of the actual impact of general intelligence on the world, it feels like language will be the
place where it truly shines.

I think so, because it's such an important kind of input and output for us. I think you're right. I think the Turing test, the philosophy behind it, which is the idea of, can a machine mimic the behaviors of a human, and I would say wider than just language and text, in terms of actions and everything else, creativity, all these things, then if it can match or exceed human cognitive capabilities, then I think we have a true intelligence. So from that perspective, you're right, I think he did formulate the right kind of setup.

I just think there will be a kind of humor in the AI systems of the future, looking back at this conversation and thinking about the Turing test. And also, by that time, they would know in which year they were finally able to cross the threshold of human-level intelligence, and think how funny it is that we humans were still confused about this whole problem, which has long ago been solved.

Well, hello to the AI agents of the future.

Anyway, going back to your journey: when did you first fall in love with programming?
Well, at a pretty young age, actually. I started off, well, games were actually my first love, starting to play chess when I was around four years old. And then it was actually with winnings from a chess competition that I managed to buy my first computer, when I was about eight years old. It was a ZX Spectrum, which was hugely popular in the UK at the time. And it's an amazing machine, because I think it trained a whole generation of programmers in the UK, because it was so accessible: you literally switched it on, and there was the BASIC prompt, and you could just get going. My parents didn't really know anything about computers, but because it was my money from a chess competition, I could say I wanted to buy it. And then I just went to bookstores, got books on programming, and started typing in the programming code. And of course, once you start doing that, you start adjusting it, and then making your own games, and that's when I fell in love with computers and realized that they were a very magical device. In a way, I wouldn't have been able to explain this at the time, but I felt that they were almost a magical extension of your mind. I've always had this feeling, and I've always loved this about computers: that you can set them off doing something, some task for you, you can go to sleep, come back the next day, and it's solved. That feels magical to me. I mean, all machines do that to some extent, they all enhance our natural capabilities; obviously, cars allow us to move faster than we can run. But this was a machine to extend the mind. And then, of course, AI is the ultimate expression of what a machine may be able to do or learn. So, very naturally for me, that thought extended into AI quite quickly.
Do you remember the programming language? Was it special to that machine?

No, it was just BASIC. I think it was just BASIC on the ZX Spectrum; I don't know what specific form it was. And then later on I got a Commodore Amiga, which was a fantastic machine.

Now you're just showing off.

Well, lots of my friends had Atari STs, and I managed to get an Amiga. It was a bit more powerful, and that was incredible. I used to do programming in assembler, and also in AMOS BASIC, this specific form of BASIC, which was incredible, actually, and that's where a lot of my coding skills came from.
And when did you fall in love with AI? So, when did you first start to gain an understanding that you can not just write programs that do some mathematical operations for you while you sleep, but something that's akin to bringing an entity to life, a thing that can figure out something more complicated than a simple mathematical operation?
Yeah, so there were a few stages for me, all while I was very young. First of all, as I was trying to improve at playing chess, I was captaining various England junior chess teams, and at the time, when I was about maybe 10, 11 years old, I was going to become a professional chess player. That was my first thought.

That dream was there? You tried to get to the highest level?

Yeah. When I was about 12 years old, I got to master standard, and I was the second-highest-rated player in the world after Judit Polgár, who obviously ended up being an amazing chess player and women's world champion. And when I was trying to improve at chess, what you do, first of all, is try to improve your own thinking processes. So that leads you to thinking about thinking: how is your brain coming up with these ideas? Why is it making mistakes? How can you improve that thought process? But the second thing was that this was just the beginning, the early to mid 80s, of chess computers, if you remember. They were physical boards, like the one we have in front of us, and you pressed down on the squares. I think Kasparov had a branded version of one, which I got. They're not as strong as they are today, but they were pretty strong, and you used to practice against them to try and improve your openings and other things. I think I probably got my first one when I was around 11 or 12, and I remember thinking, this is amazing, how has someone programmed this chess board to play chess? And there was a very formative book I bought, called The Chess Computer Handbook by David Levy, which came out in 1984 or something, so I must have got it when I was about 11 or 12, and it explained fully how these chess programs were made. I remember my first AI program: programming it on my Amiga. It wasn't powerful enough to play chess, I couldn't write a whole chess program, but I wrote a program for it to play Othello, or Reversi as it's sometimes called, I think, in the US. A slightly simpler game than chess, but I used all of the principles that chess programs used: alpha-beta search, all of that. That was my first AI program. I remember it very well; I was around 12 years old.
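The alpha-beta search he mentions, the workhorse of those early chess and Othello programs, fits in a few lines. Below is a generic sketch over a toy game tree, not the actual Amiga program: nested lists stand in for real move generation, and the numeric leaves stand in for a real board-evaluation function.

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning.

    `node` is either a numeric leaf (a static evaluation of the
    position) or a list of child nodes (the legal moves).
    """
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent already has a better option: prune
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # we already have a better option: prune
                break
        return value

# The maximizing player picks the branch whose worst-case
# (opponent-minimized) outcome is best: min(6, 9) = 6 here.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # -> 6
```

With a game's actual move generation and evaluation function plugged in, this skeleton is the core of the kind of program he describes.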
So that brought me into AI. And then the second part was later on, when I was around 16, 17, and I was writing games professionally, designing games, writing a game called Theme Park, which had AI as a core gameplay component, as part of the simulation. It sold millions of copies around the world, and people loved the way that the AI, even though it was relatively simple by today's AI standards, reacted to the way you as the player played it. It was called a sandbox game, one of the first types of games like that, along with SimCity, and it meant that every game you played was unique.
Is there something you could say, just on a small tangent, about really impressive AI, from a game design, human enjoyment perspective, really impressive AI that you've seen in games? And maybe, what does it take to create such an AI system, and how hard a problem is that? So, a million questions, just as a brief tangent.
Well, look, I think games have been significant in my life for three reasons. First of all, I was playing them, and training myself on games, when I was a kid. Then I went through a phase of designing games and writing AI for games: all the games I professionally wrote had AI as a core component. That was mostly in the 90s, and the reason I was doing that in the games industry was that, at the time, the games industry, I think, was the cutting edge of technology. Whether it was graphics, with people like John Carmack and Quake and those kinds of things, or AI, I think actually all the action was going on in games. And we're still reaping the benefits of that, even with things like GPUs, which I find ironic: they were obviously invented for computer graphics, but then turned out to be amazingly useful for AI. It just turns out everything's a matrix multiplication, it appears, in the whole world.
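The "everything is a matrix multiplication" remark is easy to make concrete: the forward pass of a fully connected neural-network layer is one matmul plus a bias and a nonlinearity, the same dense linear algebra a GPU performs when transforming geometry for graphics. A minimal NumPy sketch (the shapes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# A batch of 4 inputs, each with 8 features.
x = rng.standard_normal((4, 8))

# One fully connected layer: 8 inputs -> 3 outputs.
W = rng.standard_normal((8, 3))
b = np.zeros(3)

# The layer's forward pass is a single matrix multiplication
# (plus bias and a ReLU nonlinearity) -- the same operation GPUs
# were built to accelerate for graphics.
h = np.maximum(x @ W + b, 0.0)

print(h.shape)  # -> (4, 3)
```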
So I think games at the time had the most cutting-edge AI, and a lot of the games I was involved in writing did too. There was a game called Black & White, one game I was involved with in the early stages of, which I still think is the most impressive example of reinforcement learning in a computer game. In that game, you trained a little pet animal, and it learned from how you were treating it. If you treated it badly, then it became mean.

Yeah, and then it would be mean to your villagers and your population, the little tribe that you were running.

But if you were kind to it, then it would be kind. And people were fascinated by how that worked out, and so was I, to be honest, with the way it developed, and especially the mapping to good and evil.

Yeah. It made you realize, made me realize, that the choices you make can define where you end up, and that means all of us are capable of good or evil. It all matters in the different choices you make along the trajectory to those places. It's fascinating. I mean, games can do that philosophically to you, and it's rare. It seems rare.

Yeah, well, games are, I think, a unique medium, because you as the player are not just passively consuming the entertainment, right? You're actually actively involved, as an agent. So I think that's what makes them, in some ways, more visceral than other mediums, like films and books. So that was designing AI in games. And then the third use we've made of AI is at DeepMind, from the beginning, which is using games as a testing ground for proving out and developing AI algorithms. That was a core component of our vision at the start of DeepMind: that we would use games very heavily as our main testing ground, certainly to begin with, because it's super efficient to use games, and also it's very easy to have metrics, to see how well your systems are improving, what direction your ideas are going in, and whether you're making incremental improvements.
And because those games are often rooted in something that humans did for a long time beforehand, there's already a strong set of rules. It's already a damn good benchmark.

Yes, it's really good for so many reasons. You've got clear measures of how good humans can be at these things, and in some cases, like Go, we've been playing it for thousands of years. And often they have scores, or at least win conditions, so it's very easy for reward-learning systems to get a reward; it's very easy to specify what that reward is. And at the end, it's easy to test externally how strong your system is, of course, by playing against the world's strongest players at those games. So it's good for so many reasons, and it's also very efficient: you can run potentially millions of simulations in parallel on the cloud. So I think that's a huge reason why we were so successful back when we started out in 2010. How come we were able to progress so quickly? Because we'd utilized games.
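The point about win conditions giving reward-learning systems their reward for free can be shown with a toy. The dice game and threshold policy below are invented for illustration; the takeaway is that the game's own win/lose outcome is the reward signal, and episodes are cheap to simulate in bulk.

```python
import random

def play_episode(policy, target=21):
    """A toy push-your-luck dice game: roll while the policy says to.
    The win condition itself supplies the reward: +1 win, -1 loss."""
    total = 0
    while policy(total):
        total += random.randint(1, 6)
        if total > target:
            return -1                    # busted past the target: lose
    return 1 if total >= 15 else -1      # stopped high enough: win

# Evaluating a policy is just running many cheap episodes and
# averaging the built-in reward -- no hand-designed reward shaping.
random.seed(0)
policy = lambda total: total < 17        # keep rolling below 17
mean_reward = sum(play_episode(policy) for _ in range(10_000)) / 10_000
print(round(mean_reward, 2))
```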
And at the beginning of DeepMind, we also hired some amazing game engineers, whom I knew from my previous lives in the games industry, and that helped to bootstrap us very quickly.

And plus, it's somehow super compelling, almost at a philosophical level: man versus machine over a chessboard or a Go board. And especially given that the entire history of AI is defined by people saying it's going to be impossible to make a machine that beats a human being in chess. And then, once that happened, people were certain, when I was coming up in AI, that Go is not a game that could be solved, because of the combinatorial complexity: no matter how much Moore's Law you have, compute is just never going to be able to crack the game of Go. So there's something compelling about taking on the impossibility of that task, from the AI researcher's perspective, the engineer's perspective, and then, as a human being just observing this whole thing, your beliefs about what you thought was impossible being broken apart. It's humbling, to realize we're not as smart as we thought. It's humbling to realize that the things we think are impossible now will perhaps be done in the future. There's something really powerful about a game AI system beating a human being in a game that drives that message home, for millions, billions of people, especially in the case of Go.

Sure.
Well, look, I think it has been a fascinating journey, and especially as I can understand it from both sides: as the creators of the AI, but also as a games player, originally. It was a really interesting, I mean, a fantastic, but also somewhat bittersweet, moment for me, the AlphaGo match, seeing that and obviously being heavily involved in it. But as you say, chess, I mean, Kasparov, I think, rightly called it the Drosophila of intelligence, right? I love that phrase, and I think he's right, because chess has been hand in hand with AI from the beginning of the whole field. I think every AI practitioner, starting with Turing and Claude Shannon and all those forefathers of the field, tried their hand at writing a chess program. I've got an original edition of Claude Shannon's first chess program, I think it was 1949, the original paper. They all did that. And Turing famously wrote a chess program, but all the computers around then were obviously too slow to run it, so he had to be the computer, right? He literally, I think, spent two or three days running his own program by hand, with pencil and paper, playing a friend of his with his chess program. Of course, Deep Blue was a huge moment, beating Kasparov. But actually, when that happened, I remember it very vividly, of course, because it was chess and computers and AI, all the things I loved, and I was at college at the time, but I remember coming away from it more impressed by Kasparov's mind than I was by Deep Blue. Because here was Kasparov, with his human mind, and not only could he play chess more or less to the same level as this brute of a calculation machine, but of course Kasparov can do everything else humans can do: ride a bike, talk many languages, do politics, all the rest of the amazing things that Kasparov does, and with the same brain. And yet Deep Blue, brilliant as it was at chess, had been hand-coded for chess, and had actually distilled the knowledge of chess grandmasters into a program, but it couldn't do anything else. It couldn't even play a strictly simpler game, like tic-tac-toe. So something, to me, was missing from that system that we would regard as intelligence, and I think it was this idea of generality, and also learning. And that's what we set out to do with AlphaGo.

Yeah,
AlphaGo, and AlphaZero, MuZero, and then Gato, all the things that we'll get into some parts of. There's just a fascinating trajectory here, but let's just stick on chess briefly. On the human side of chess, you've proposed that, from a game design perspective, the thing that makes chess compelling as a game is that there's a creative tension between the bishop and the knight. Can you explain this? First of all, it's really interesting to think about what makes the game compelling, what makes it stick across centuries.
Yeah, I was thinking about this, and actually, a lot of even amazing chess players don't necessarily think about it from a games designer's point of view. So it's with my game design hat on that I was thinking about this: why is chess so compelling? And I think a critical reason is that the dynamism of the different kinds of chess positions you can have, whether they're closed or open, and other things, comes from the bishop and the knight. If you think about how different the capabilities of the bishop and knight are, in terms of the way they move, somehow chess has evolved to balance those two capabilities more or less equally, so they're both roughly worth three points each.

So you think that dynamic was always there, and then the rest of the rules are kind of trying to stabilize the game?

Well, maybe. I mean, it's sort of a chicken-and-egg situation, I don't know; probably both came together. But the fact that it's got to this beautiful equilibrium, where the bishop and knight are so different in power but so equal in value, across the universe of all positions, somehow balanced by humanity over hundreds of years, I think gives the game its creative tension: you can swap a bishop for a knight, and they're more or less worth the same, but now you aim for a different type of position. If you have the knight, you want a closed position; if you have the bishop, you want an open position. So I think that creates a lot of the creative tension in chess.

So, some kind of controlled creative tension. From an AI perspective, do you think AI systems could eventually design games that are optimally compelling to humans?
Well, that's an interesting question. Sometimes I get asked about AI and creativity, and the way I answer that is relevant to this question, which is that there are different levels of creativity, one could say. If we define creativity as coming up with something original that's useful for a purpose, then I think the lowest level of creativity is like an interpolation, an averaging of all the examples you see. Maybe a very basic AI system could have that: you show it millions of pictures of cats, and then you say, give me an average-looking cat, generate me an average-looking cat. I would call that interpolation. Then there's extrapolation, which something like AlphaGo showed. AlphaGo played millions of games of Go against itself, and then it came up with brilliant new ideas, like move 37 in game two, bringing new motifs and strategies to Go that no humans had ever thought of, even though we've played it for thousands of years, and professionally for hundreds of years. That I call extrapolation. But then there's still a level above that, which you could call out-of-the-box thinking, or true innovation, which is: could you invent Go? Could you invent chess? Not just come up with a brilliant chess move or a brilliant Go move, but actually invent chess, or something as good as chess or Go. And I think one day AI could, but what's missing is how you would even specify that task to a program right now. The way I would do it, if I was telling a human to do it, a games designer, a human games designer, is I would say something like Go. I would say: come up with a game that only takes five minutes to learn, which Go does, because it's got simple rules, but many lifetimes to master, or that's impossible to master in one lifetime, because it's so deep and so complex; and then it's aesthetically beautiful; and also it can be completed in three or four hours of gameplay time, which is useful for us in a human day. So you might specify these sorts of high-level concepts, and then, with that and maybe a few other things, one could imagine that Go satisfies those constraints. But the problem is that we're not able to specify abstract notions like that, high-level abstract notions like that, to our AI systems yet. And I think there's still something missing there, in terms of high-level concepts or abstractions that they truly understand, and that are combinable and compositional. So for the moment, I think AI is capable of interpolation and extrapolation, but not true invention.

So coming up with rule sets, and optimizing with complicated objectives around those rule sets, we can't currently do.
But you could take a specific rule set and then run a kind of self-play experiment: just observe how an AI system learns it from scratch, how long that journey of learning is. And maybe, if it satisfies some of those other things you mentioned, in terms of quickness to learn and so on, and you could see a long journey to mastery even for an AI system, then you could say that this is a promising game. But it would be nice to do, almost like AlphaCode for programming, rule generation, something that kind of automates even that part, the
generation of rules.

So I have thought about systems, actually, that I think would be amazing for a games designer: if you could have a system that takes your game, plays it tens of millions of times, maybe overnight, and then self-balances the rules better. So it tweaks the rules, and maybe the equations and the parameters, so that the game is more balanced: the units in the game, or some of the rules, could be tweaked. It's a bit like giving it a base set and then allowing a Monte Carlo tree search, or something like that, to explore it. And I think that would be a super powerful tool, actually, for balancing, auto-balancing, a game, which usually takes thousands of hours from hundreds of human games testers to do, for a game like StarCraft. Blizzard are amazing at balancing their games, but it takes them years and years and years. So one could imagine, at some point, when this stuff becomes efficient enough, you might be able to do that overnight.
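That overnight auto-balancer can be sketched as a search loop. The one-parameter "game" below is a made-up stand-in for real self-play (which, as he says, might be guided by Monte Carlo tree search): it simulates many games, then nudges a unit's tunable strength until the win rate sits near 50%.

```python
import random

def simulate_win_rate(unit_strength, n_games=2000, rng=random):
    """Fraction of games side A wins when its unit has the given
    strength, against a fixed opponent strength of 1.0 (toy model)."""
    p_win = unit_strength / (unit_strength + 1.0)
    return sum(rng.random() < p_win for _ in range(n_games)) / n_games

def auto_balance(strength=2.0, steps=30, lr=1.0, rng=random):
    """Nudge the tunable parameter until the win rate is ~50%."""
    for _ in range(steps):
        win_rate = simulate_win_rate(strength, rng=rng)
        strength -= lr * (win_rate - 0.5)   # too strong? weaken it
        strength = max(strength, 0.01)
    return strength

random.seed(1)
balanced = auto_balance()
print(round(simulate_win_rate(balanced), 2))  # ~0.5 once balanced
```

A real system would replace `simulate_win_rate` with actual self-play over the game's rules, and search over many parameters at once, but the feedback loop, simulate in bulk and tweak toward balance, is the same.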
Do you think a game that is optimally designed by an AI system would look very much like planet Earth?

Maybe. Maybe it's, you know, the sort of game I would love to make. And I've tried, in my games design career: my first big game was designing a theme park, an amusement park; then, with games like Republic, I tried to have games where we designed whole cities and allowed you to play in them. And of course, people like Will Wright have written games like SimEarth, trying to simulate the whole of Earth. Pretty tricky.

SimEarth, I haven't actually played that one. So what is it? Does it incorporate evolution?

Yeah, it has evolution, and it sort of treats it as an entire biosphere, but from quite a high level.

So it'd be nice to be able to zoom in, zoom out.

Exactly. So obviously he couldn't do that, I think he wrote it in the 90s, so it wasn't able to do that. But that would be, obviously, the ultimate sandbox game, of course.
on that topic do you think we're living
in a simulation
yes well so okay so i'm gonna jump
around from the absurdly philosophical
to the short term sure very very happy
to so i think uh my answer to that
question is a little bit complex because
uh there is simulation theory which
obviously nick bostrom i think famously
first proposed um
and uh i don't quite believe it in in
that sense so um in the in the sense
that uh are we in some sort of computer
game or have our descendants somehow
recreated uh uh earth in the you know
21st century and and some for some kind
of experimental reason i think that um
but i do think that we that that we
might be that the best way to understand
physics and the universe is from a
computational perspective so
understanding it as an information
universe and actually information being
the most fundamental unit of uh reality
rather than matter or energy so a
physicist would say you know matter or
energy you know e equals m c squared
these are the things that are are the
fundamentals of the universe i'd
actually say information um which of
course itself can be can specify energy
or matter right matter is actually just
you know we're we're just out the way
our bodies and all the molecules in our
body arrange is information so i think
information may be the most fundamental
way to describe the universe and
therefore you could say we're in some
sort of simulation because of that um
but i'm not really a subscriber to the
idea that you know there are sort of
throwaway billions of simulations
around i think this particular
simulation is actually very critical
and possibly unique yes so but and you just
mean
treating the universe
as a computer
that's
processing and modifying information
is a good way to solve the problems
of physics of chemistry of biology
and perhaps of humanity and so on yes i
think understanding physics in terms of
information theory uh might be the best
way to to really uh understand what's
going on here
from our understanding of a universal
turing machine from our understanding of
a computer do you think there's
something outside of the capabilities of
a computer that is present in our
universe you have a disagreement with
roger penrose
on the nature of consciousness he thinks
that consciousness is more than just a
computation
do you think all of it the whole
shebang can be a computation
yeah i've had many fascinating debates
with sir roger penrose and obviously
he's famous and i read you know the
emperor's new mind and
his books his classic books
and they were pretty influential
in the 90s and um he
believes that there's something more you
know something quantum that is needed to
explain consciousness in the brain um i
think about what we're doing actually at
deepmind and what my career has been is
we're almost like turing's champions so
we are pushing turing machines or
classical computation to the limits what
are the limits of what classical
computing can do now um and at the same
time i've also studied neuroscience
that's what i did my phd in
to look at you know is there
anything quantum in the brain from a
neuroscience or biological perspective
and so far i think most
mainstream biologists and
neuroscientists would say
there's no evidence of any quantum
systems or effects in the brain as far
as we can see it can be mostly
explained by classical
theories so
and so there's sort of the
search from the biology side and then at
the same time there's the raising of
the bar of what classical
turing machines can do
and you know including our new ai
systems and uh as you alluded to earlier
um you know i think ai especially in the
last decade plus has been a continual
story now of surprising uh events uh and
surprising successes knocking over one
theory after another of what was thought
to be impossible you know from go to
protein folding and so on and so i think
um
i would be very hesitant to bet against
how far the uh universal turing machine
and classical computation paradigm can
go and my bet would be
that certainly all of what's going on in
our brain can probably be mimicked or
approximated on a classical
machine not requiring
something metaphysical or quantum and
we'll get there with some of the work
with alpha fold
which i think begins the journey of
modeling this beautiful and complex
world of biology so you think all the
magic of the human mind comes from this
just a few pounds of mush
of a biological computational mush
that's
akin to some of the neural networks
not directly but in spirit that
deepmind has been working with well look i
think this is
the biggest miracle of the universe
that it is just a few pounds of mush
in our skulls and yet our
brains are the most complex objects
that we know of in the universe
so there's something profoundly
beautiful and amazing about our brains
and
i
think that it's an incredibly
efficient machine and
it's you know phenomenal
basically and i think that building ai
one of the reasons i want to build ai
and i've always wanted to is i think by
building an intelligent artifact like ai
and then comparing it to the human mind
um that will help us unlock the
uniqueness and the true secrets of the
mind that we've always wondered about
since the dawn of history like
consciousness dreaming uh creativity uh
emotions what are all these things right
we've we've wondered about them since
since the dawn of humanity and i think
one of the reasons you know i love
philosophy and philosophy of mind is
we've found it difficult because there
haven't been the tools for us other
than introspection from very clever
philosophers in history
to really investigate this
scientifically but now suddenly we have
a plethora of tools firstly we have all
the neuroscience tools fmri machines
single cell recording all of this stuff
but we also have the ability computers
and ai to build uh intelligent systems
so i think
you know it is amazing what the
human mind does and i'm kind
of in awe of it really and i
think it's amazing that with our human
minds we're able to build things like
computers and actually even you know
think and investigate about these
questions i think that's also a
testament to the human mind yeah the
universe built the human mind that now
is building computers that help
us understand both the universe and our
own human mind right that's exactly it i
mean i think that's one you know one
could say we we are
maybe we're the mechanism by which the
universe is going to try and understand
itself yeah
it's beautiful so let's let's go to the
basic building blocks of biology that i
think
is another angle at which you can start
to understand the human mind the human
body which is quite fascinating which is
from the basic building blocks start to
simulate start to model
how from those building blocks you can
construct bigger and bigger more complex
systems maybe one day the entirety of
the human biology so
here's another problem thought to
be impossible to solve which is protein
folding and alpha fold or
specifically alpha fold 2
did just that it solved protein folding
i think it's one of the biggest
breakthroughs
uh certainly in the history of
structural biology but uh in general in
in science
um
maybe from a high level
what is it and how does it work
and then we can ask some fascinating
sure questions after sure so maybe
i'd like to explain it to people not
familiar with protein folding you know
first of all to explain proteins
proteins are essential
to all life every function in your body
depends on proteins sometimes they're
called the workhorses of biology and if
you look into them and i've you know
obviously as part of alpha fold i've
been researching proteins and and
structural biology for the last few
years you know they're amazing little
bio nano machines proteins they're
incredible if you actually watch little
videos of how they work animations of
how they work
and um proteins are specified by their
genetic sequence called the amino acid
sequence so you can think of that as
their genetic makeup and then in the
body in nature they fold up into a 3d
structure so you can think of it as a
string of beads that folds up into a
ball now the
key thing is you want to know what that
3d structure is
because the structure the 3d structure
of a protein
is what helps to determine what does it
do the function it does in your body
and also if you're interested in
drugs or disease you need to understand
that 3d structure because if you want to
target something with a drug compound
to block something the
protein is doing you need to
understand where it's going to bind on
the surface of the protein so obviously
in order to do that you need to
understand the 3d structure so the
structure is mapped to the function the
structure is mapped to the function and
the structure is obviously somehow
specified by the by the amino acid
sequence and that's the in essence the
protein folding problem is can you just
from the amino acid sequence the
one-dimensional
string of letters can you
immediately computationally predict the
3d structure right and this has been a
grand challenge in biology for over 50
years so i think it was first
articulated by christian anfinsen a
nobel prize winner in 1972 uh as part of
his nobel prize winning lecture and he
just speculated it should be possible
to go from the amino acid sequence to
the 3d structure but he didn't say how so
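The shape of the problem Anfinsen posed can be sketched as input and output. This is a toy Python illustration, not anything from AlphaFold: the sequence, the stub predictor, and the 3.8 angstrom residue spacing are all made up for illustration.

```python
# Toy illustration of the protein folding problem's input/output:
# a 1-D amino-acid string in, one (x, y, z) coordinate per residue out.
# The "predictor" here is a stub that lays residues on a straight line,
# spaced 3.8 angstroms apart, just to show the shapes involved.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

def predict_structure(sequence: str) -> list:
    """Stub: map each residue to a 3-D coordinate (real predictors
    learn this mapping from data; this one just draws a line)."""
    assert all(aa in AMINO_ACIDS for aa in sequence), "unknown residue"
    return [(3.8 * i, 0.0, 0.0) for i in range(len(sequence))]

coords = predict_structure("MKTAYIAKQR")  # a made-up 10-residue sequence
print(len(coords))                        # one coordinate per residue: 10
```

A real predictor replaces the stub body with a learned model; the contract (string in, N coordinates out) stays the same.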
i you know it's been described to me as
equivalent to fermat's last theorem but
for biology right you should as somebody
that uh very well might win the nobel
prize in the future but outside of that
you should do more of that kind of thing
in the margins just put random things
that will take like 200 years to solve
set people off for 200 years it should
be possible exactly and just don't give
any hints exactly
i'll have to remember that for the
future so yeah so he set off
you know with this one throwaway remark
just like fermat you know he he set off
this whole 50-year
field really of computational
biology and you know they
got stuck they hadn't really got very
far with doing this until
alpha fold came along this is done
experimentally right very painstakingly
so you have to
crystallize the protein which is
really difficult some proteins can't be
crystallized like membrane proteins and
then you have to use very expensive
electron microscopes or x-ray
crystallography machines really
painstaking work to get the 3d structure
and visualize the 3d structure so the
rule of thumb in in experimental biology
is that it takes one phd student their
entire phd to do one protein uh and with
alpha fold two we were able to predict
the 3d structure in a matter of seconds
um and so we were you know over
christmas we did the whole human
proteome or every protein in the human
body all 20 000 proteins so the human
proteome is like the equivalent of the
human genome but in protein space and
it sort of revolutionized really what
a structural biologist can do because
now um they don't have to worry about
these painstaking experiments you know
should they put all of that effort in or
not they can almost just look up the
structure of their proteins like a
google search
and so there's a data set on which it's
trained and how to map these amino acids
because first of all it's incredible
that a protein this little chemical
computer is able to do that computation
itself in some kind of distributed way
and do it very quickly
that's a weird thing and they evolved
that way because you know in the
beginning
i mean that's a great invention just the
protein itself yes i mean and then they
there's i think probably a history of
like uh they evolved
to have many of these proteins and those
proteins figure out how to be computers
themselves
in such a way that you can create
structures that can interact in
complexes with each other in order to
form high level functions i mean it's a
weird system that they figured it out
well for sure i mean we you know maybe
we should talk about the origins of life
too but proteins themselves i think are
magical and incredible as i
said little bio-nano machines
and actually levinthal who was another
scientist a contemporary of
anfinsen coined what became known as
levinthal's paradox which is exactly
what you're saying he calculated that
roughly an average protein which is
maybe 2 000 amino acids long
can fold in maybe 10 to the power
300 different conformations so there's
10 to the power 300 different ways that
protein could fold up and yet somehow in
nature physics solves this
in a matter of milliseconds so proteins
fold up in your body in you know
sometimes in fractions of a second so
physics is somehow solving that search
problem and just to be clear in many of
these cases maybe you correct me if i'm
wrong there's often a unique way
for that sequence to form itself yes so
among that huge number of possibilities
yes it figures out a way to reach
stability
in some cases there might be a
misfolding and so on which leads to a
lot of the disorders and stuff like
that but
yes most of the time it's a unique
mapping and that unique mapping is not
obvious no exactly that's just what the
problem is exactly so there's a unique
mapping usually in a healthy in if it's
healthy and as you say in disease
so for example alzheimer's one
conjecture is that it's because of a
misfolded protein a protein that folds
in the wrong way the amyloid beta protein
and because it folds in the wrong
way it gets tangled up right in
your in your neurons so
it's super important for understanding
both healthy functioning and also
disease to understand you know
what these things are doing and how
they're structured of course the next
step is sometimes proteins change shape
when they interact with something so um
they're not just static necessarily in
in biology
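Levinthal's back-of-envelope can be checked in a few lines. The numbers below are my own rough assumptions (just 3 local states per residue), not Levinthal's exact figures; the point is only that the search space blows past 10 to the power 300 almost immediately.

```python
# Rough sanity check of Levinthal's paradox, with assumed numbers:
# if each residue could take just 3 local conformations, a 650-residue
# chain already has more than 10^300 possible shapes.

residues = 650
states_per_residue = 3
conformations = states_per_residue ** residues

print(conformations > 10 ** 300)          # True

# Brute-force search at one conformation per nanosecond (10^9 checks
# per second) would still take vastly longer than the age of the
# universe, yet real proteins fold in milliseconds.
seconds_needed = conformations // 10 ** 9
age_of_universe_seconds = 4 * 10 ** 17    # ~13.8 billion years
print(seconds_needed > age_of_universe_seconds)   # True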
maybe you can speak to some interesting
sort of beautiful things about the
early days of alpha fold of solving
this problem because
unlike games this is
real physical systems that are less
amenable to
self-play type of mechanisms
the the size of the data set is smaller
that you might otherwise like so you
have to be very clever about certain
things is there something you could
speak to um
what was very hard to solve and what are
some beautiful aspects about the the
solution yeah i would say alpha fold is
the most complex and also probably most
meaningful system we've built so far so
it's been an amazing time actually in
the last you know two three years to see
that come through because um as we
talked about earlier you know games is
what we started on building things
like alphago and alpha zero but really
the ultimate goal was not just to
crack games it was to
use them to bootstrap general learning
systems we could then apply to real
world challenges specifically my passion
is scientific challenges like protein
folding and then alpha fold of course is
our first big proof point of that and so
um you know in terms of the data uh and
the amount of innovations that had to go
into it we you know it was like more
than 30 different component algorithms
needed to be put together to crack the
protein folding um i think some of the
big innovations were that um
kind of building in some hard coded
constraints around physics and
evolutionary biology um to constrain
sort of things like the bond angles
in the protein and
things like that
but not so much as to impact the learning
system so still allowing the system
to be able to learn the physics uh
itself um from the examples that we had
and the examples as you say there are
only about 150 000 proteins even after
40 years of experimental biology only
around 150 000 proteins have had their
structures determined so
that was our training set which is um
much less than normally we would like to
use
but using various tricks things like
self distillation so actually using
alpha fold's own predictions the
predictions that it was
highly confident in we put them back
into the training set right to make the
training set bigger
that was critical to to alpha fold
working so there was actually a huge
number of different um uh innovations
like that that were required to to
ultimately crack the problem with alpha
fold 1 what it produced was a distogram
a kind of
matrix of the pairwise distances
between all of the molecules in
the protein and then there had to
be a separate optimization process to uh
create the 3d structure
and what we did for alpha fold 2 is make
it truly end to end so we went straight
from the amino acid sequence
to
the 3d structure directly without going
through this intermediate step and in
machine learning what we've always found
is that the more end to end you can make
it the better the system and it's
probably because you know in
the end the system is better at learning
what the constraints are than we
are as the human designers at specifying
it so anytime you can let it flow end to
end and actually just generate what it
is you're really looking for in this
case the 3d structure you're better off
than having this intermediate step which
you then have to hand craft the next
step for so
so it's better to let the gradients and
the learning flow all the way through
the system um from the end point the end
output you want to the inputs so that's
a good way to start a new problem
handcraft a bunch of stuff add a bunch
of manual constraints with a small
end-to-end learning piece or a small
learning piece and grow that learning
piece until it consumes the whole thing
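The distogram intermediate mentioned above is easy to picture in code: a pairwise distance matrix over residue positions. This toy sketch computes it in the easy direction, from known 3-D coordinates (the hard inverse step, going from predicted distances back to a structure, is what the separate optimization stage handled); the coordinates are invented for illustration.

```python
# Toy distogram-style computation: given one 3-D point per residue,
# build the N x N matrix of pairwise Euclidean distances.

import math

def distance_matrix(coords):
    """N x N matrix of distances between residue positions."""
    n = len(coords)
    return [[math.dist(coords[i], coords[j]) for j in range(n)]
            for i in range(n)]

coords = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 12.0)]
d = distance_matrix(coords)
print(d[0][1])   # 5.0  (a 3-4-5 triangle in the x-y plane)
print(d[1][2])   # 12.0 (straight along the z axis)
```

Going from structure to distances is trivial, as above; the modeling challenge is the other direction, which is why making the pipeline end to end (sequence straight to structure) removed a whole hand-crafted stage.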
that's right and so you can also see you
know this is a bit of a method we've
developed over doing many sort of
successful projects we call them alpha
x projects right and the easiest way to
see that is the evolution of alphago to
alpha zero so alphago was um a learning
system but it was specifically trained
to only play go right so what we
wanted to do with the first version of
alphago is
just get to world champion performance
no matter how we did it right and then
and then of course with alphago zero we
removed the need to use human games as a
starting point so it could just
play against itself from a random
starting point from the beginning so
that removed the need for human
knowledge about go and then finally
alpha zero
then generalized it so that anything
we had in the system including
things like the symmetry of the go board
was removed so alpha zero could
play any two player game from scratch
and then muzero which is the final
latest version of that set of things
extended it so that you didn't
even have to give it the rules of the
game it would learn that for itself so
it could also deal with computer games
as well as board games so that line of
alphago alphago zero alpha zero muzero
that's the full trajectory of what you
can take from
uh imitation learning
to full self
superv