"Life As We Know It Will Will Be Gone Soon" - Dangers Of AI & Humanity's Future | Mo Gawdat
itY6VWpdECc • 2023-06-20
You've said that people should consider holding off having kids right now because of AI and other societal issues that are coming. You've said this is the thing that we should be thinking about, that AI poses a bigger threat than global warming. Why is it that you think AI poses such a significant existential risk to humanity?

It's not just about the amount of risk that AI, you know, positions ahead of humanity, and it's not about the timing of the risk; we should cover those two points very quickly. But it really is about a point of no return, where if we cross that point of no return, we have very, very little chance to bring the genie back into the bottle. What is the point of no return? The most important of which, of course, is the point of singularity, and singularity is a moment where you have an AGI that is much smarter than humans.
I think that when we discuss singularity, which might bring about the suspicion of an existential risk, a Skynet type of thing, we are losing focus on the immediate threat, which is much more imminent and, in a very interesting way, as damaging, probably even more damaging. And that risk, in my view, which we have to resolve first before we talk about the existential risks, is the risk of AI falling in the wrong hands, or the risk of AI falling in the right hands that are naive enough to not handle it well, or the risk of AI misunderstanding our objectives or the method, or the risk of AI performing our objectives but us misunderstanding our own benefit. And I think when you really look at those (I call this the third inevitable in Scary Smart), those are truly around the corner. There are other risks that are extremely important as well, which we don't even think of as threats, but that are completely going to redesign the fabric of our society. Jobs, for one.
The definition of jobs is going to change, and accordingly the definition of purpose, the definition of income, the income gap, power structures: all of that is going to be redesigned significantly. It is being redesigned as we speak. As we speak, there are those with hunger for power, those with fear of other powers, those with hunger for more and more and more money and success and so on, who are investing in AI in ways that, even if there is never an existential risk of AI, those investments will redesign our society in ways that are beyond the point of no return.

Let's get into the three inevitables. What are they, exactly?
So the three inevitables are my way of telling my readers or my listeners to understand that there are things that we shouldn't waste time talking about, because they are going to happen, okay? And those are: number one, there is no shutting down AI, there is no reversing it, there is no stopping the development of it. Let me list them quickly and then we'll go back on each and every one of them. The second inevitable is that AI will be smarter than humans, significantly smarter than humans. And the third inevitable is that bad things will happen in the process. Exactly what bad things? We spoke about a few of them, but we can definitely discuss each and every one of those in detail.

The first inevitable, interestingly, is the fact that AI will happen; there is no shutting it down. There is no nuclear-type treaty that will ever happen, where nations will decide, okay, you know what, let's stop developing AI, like we said stop developing nuclear weapons, or at least stop using them (because we really never stopped developing them). That's not gonna happen, because of a prisoner's dilemma, because humanity so smoothly stuck itself in a corner where nobody is able to make the choice to stop the development of AI. So if Alphabet is developing AI, then Meta has to develop AI, and, you know, Yandex in Russia has to develop AI, and so on and so forth. If the US is developing AI, then China will have to develop AI, and vice versa. And so the reality of the matter is that it is not a technological characteristic of AI that we cannot stop developing it; it's a capitalist and power-focused system that will always prioritize the benefit of us-versus-them over the benefit of humanity at large.
So when you really think about some of the initiatives where global leaders are now starting to talk about AI and try to put it in the spotlight, like the prime minister of the UK, when I was asked about that (I was in London last week), basically I think it's an amazing initiative, a great idea, but can you understand the magnitude of the ask that you have here? The initiative was that we get all of the global leaders together at a summit that basically looks at AI and tries to regulate AI. And for that to happen, you need nations to suddenly say, okay, you know what, we're gonna all look at the global benefit of humanity above the benefit of each individual nation. You want to get people from China, Russia, the US, North Korea and others around one table and tell them, can we all shake hands and say we're not going to develop that thing? And even if they do (which they will not agree to), then they will question: what happens if a drug cartel leader somewhere, hiding in the jungles, decides to expand and diversify his business and starts to work on AIs that are criminal in nature? We need to develop the policemen, and to develop the policemen we have to develop AI. And so all of those dynamics, all of those prisoner's dilemmas, if you understand game theory, are basically positioning us in a place where our inability to trust the other guy is going to lead us to continue to develop AI at a very fast pace, because we're worried about what the other guy could do, due to our mistrust.
and you know the clear example of that
is what we saw with the open letter
which I think was a fantastic uh
initiative I think you covered it many
times in your podcast the you know the
attempt to to tell uh you know the the
big players of uh that are developing AI
let's halt the development for six
months and I think it was less than a
week before uh Sundar pachai the CEO of
alphabets responded and said this is not
realistic you can't ask me to do that
because there is no way you can
guarantee that no one else is going to
to develop Ai and disrupt my business
that basically means we have to
start behaving in a way that's
accepts that AI is going to continue to
be developed it's going to continue to
be a prominent part of our life and it's
going to continue to get massive amounts
of investment on every side of the table
For people that don't know the prisoner's dilemma, it's probably worth walking them through it. But what you said about drug dealers, I've never heard anybody say that before, and I think removing this from just government-versus-government is probably a very wise way to look at it. You and I are both sort of secretly very optimistic; in fact, the way that we first met is around the idea of happiness and mental health and all of that, so I hope people don't see either of us as doomsayers. I just feel like we're going through a transitional period right now that is unprecedented in human history, and I say that with full understanding that every generation says, no, no, this time it's really different. But I feel like this time really is different. The closest thing to it is nuclear weapons, and that already gives you a sense of the scale. But part of the reason I'm more worried about AI than I was even as a kid, really living under the cloud of nuclear proliferation, the Cold War, all of that, is because the infrastructure required for a nuclear program is massive, whereas here you don't need that infrastructure: you just need a computer, some servers, and, you know, a clone of ChatGPT, and you're ready to rock. So walk people through the prisoner's dilemma, so that they can really understand that this is a deep, fundamental truth of the human condition and isn't just a government-versus-government thing.

Yes, let me cover that, but let me also cover one more tiny thing that's very, very different between AI and nuclear weapons, which is the fact that we've never created a nuclear weapon that can create nuclear weapons. The artificial intelligences that we're building are capable of creating other artificial intelligences; as a matter of fact, they're encouraged to create other artificial intelligences, with the single stated objective of: make them smarter. So basically, imagine if you had two nuclear weapons finding a way of mating and creating a smarter, or a more devastating, nuclear weapon. And I think that's really something that most people miss when we try to cover the threat of AI.
The prisoner's dilemma is a very, very simple mathematical game, if you want, part of game theory. Imagine that you have two suspects of a crime, basically partners in a crime, who are captured, but the police doesn't have enough evidence to put them both in jail, so they are trying to get one of them to tell on the other. So they would go to each of them and say (just giving you an example): if you don't tell and your friend tells, you're gonna get three years and he's gonna get out free, or get out with one year. And then they go to the other guy and say the same: if you tell and he doesn't tell, you're gonna get out with a light sentence and he gets three. And by the way, if you both tell, you both get two years. So from a mathematics point of view, if you lay out the possibilities of those scenarios in quadrants, the quadrant where I don't tell and you don't tell is a quadrant that requires a lot of trust. Any other quadrant, by definition, tells me that if I tell, I will get off with a lighter sentence, okay? And the only reason why I wouldn't do it is if I trust you. And if I don't trust you, by definition, human behavior will drive you and drive me, both of us, to say, look, the better option is for me to get off with a lighter sentence, because I don't trust the other guy. And I think that's the reality of what's happening, in business in general, in power struggles in general, in wars in general. I think it's all a situation that's triggered by not trusting the other guy, because if we could trust the other guy, we would probably focus on many more, much softer objectives that can grow the pie, rather than get each of us to compete. So this is where we are.
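The quadrants described here are easy to check mechanically. A minimal sketch in Python: the conversation spells out three of the four outcomes (tell while the other stays silent gets you out free, staying silent while the other tells costs three years, both telling costs two years each); the both-silent payoff of one year each is an assumed standard textbook value, not stated in the conversation.

```python
# Prisoner's dilemma payoffs as years in jail (lower is better).
# Keys are (my_choice, other_choice); values are (my_years, other_years).
# The (silent, silent) = 1 year each entry is an assumed textbook value.
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "tell"):   (3, 0),
    ("tell",   "silent"): (0, 3),
    ("tell",   "tell"):   (2, 2),
}

def best_response(other_choice):
    """Pick the action that minimizes my sentence, given the other's choice."""
    return min(("silent", "tell"), key=lambda a: YEARS[(a, other_choice)][0])

# Whatever the other prisoner does, telling is never worse for me:
for other in ("silent", "tell"):
    assert best_response(other) == "tell"

print("Dominant strategy: tell ->", YEARS[("tell", "tell")])  # both get 2 years
```

Both players reasoning this way land in the both-tell quadrant, even though both-silent is better for each of them, which is exactly the trust problem being described.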
And I think the reality of us continuing to develop AI at a much faster pace, because of ChatGPT and OpenAI's work in general: I think this is the Netscape moment for AI. What Netscape was for the internet, ChatGPT is for AI, because it highlighted it, first and foremost, for the public. And I think bringing it to public attention actually is a good thing, because it allows us to talk about it more openly, and people will listen. When I published Scary Smart in 2021, it was The Times Business Book of the Year in the UK, but it wasn't as widely or urgently read as it is today, simply because people were like, yeah, that's so interesting, this guy has an interesting point of view, but it's 50 years away.
And human nature, sadly, doesn't respond very well to existential threats that are very far in time, or merely probable in their possibility of occurrence. It's like those warnings on a pack of cigarettes: if we tell you it most certainly causes death, people look at it and say, yeah, but that's 50 years from now, I want to enjoy it for 50 years. And whether it's 50 years or five, nobody really knows, but people will delay reacting to those. So when OpenAI and ChatGPT became a reality, I think what ended up happening is that the public got to know about AI, but also the investors. So this is the dot-com bubble all over again, right? We have massive amounts of money poured in to encourage faster and faster development of AI. I mean, I know you're a techie like I am, and we both know that it's actually not that complicated to develop another layer of AI. Of course it's complicated to find the breakthrough, but to develop more and more of those, I think, is something that's becoming our reality today.

But why does this break bad? As we think about how fast the technology is developing, I think most people will concede that they probably struggle to think exponentially and not linearly, and even with linear thinking at this point, seeing how far it's already come, people are already worried; if they understood how much faster it's going, faster even than they could possibly imagine, they'd be more worried still. So my question is: why do we all make the base assumption that, without either massive intervention or some sort of regulatory body or something, this doesn't just naturally end up in a good place? Why are you, me, other people, why are we worried that number three in your three inevitables is that things go wrong? Why are we worried that it isn't just, when there's buggy software, it's nothing? Why isn't this going to be like the year 2000, the Y2K problem, for anybody old enough to remember that: everybody was super panicky, and then nothing happened. Why isn't this going to be yet another nothing burger?
Because the chips are lined up in the wrong direction. You know, Hugo de Garis, if you know him, is a very well-known AI scientist who worked in Asia for quite a few years, and he made a documentary that I think can be found on YouTube, called Singularity or Bust. He was basically saying that most of the investment that's going into AI today is going into spying, killing, gambling and selling. Spying is surveillance, okay? Killing is what we call defense. Gambling is all of the trading algorithms. And selling is all of the advertisement and recommendation engines, all of the idea of turning us into products that can be advertised to, if you want. And that's not unusual, by the way, in our capitalist system, because those industries come with a lot of money: banking, you know, defense, and so on and so forth. The chips are lined up this way. I mean, if you take just accurate numbers on how much of the AI investment is going behind drug discovery, for example, as compared to how much is going behind killing machines and killing robots and killing drones and so on and so forth, you'd be amazed; the difference is staggering, right? And this is the nature of humanity so far. If you're running research on a disease that doesn't affect more than, you know, a few tens of thousands of people, you're gonna struggle to find the money, okay? But if you're building a new weapon that can kill tens of thousands of people, the money will immediately arrive, because there is money in that; you can sell that. And sadly, as much as I would have hoped that humanity wasn't completely driven by that, it's our reality. So this is number one.
So, number one is that we're aligned in the direction of things going wrong, okay? Number two is that even if we're aligned in the direction of going right, wrongdoers can flip things upside down. There was an article in The Verge a few months ago about a drug-discovery AI that was basically supposed to look at characteristics of human biology, whatever information and data we can give it about the drugs we can develop, and chemistry and so on and so forth, with the objective of prolonging life. So prolonging human life is one parameter in the equation; it's basically: plus, make life longer, okay? And the research team was asked to go and give a talk at a university, and for the fun of it, they reversed the positive to negative, so instead of giving the AI the objective of prolonging life, it became the objective of shortening life. And within six hours, if I remember correctly, the AI came up with 40,000 possible biological weapons and agents, like nerve gas and so on.

Jesus. Yeah.
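The mechanics of that flipped objective are worth seeing: in an optimization loop, the difference between "prolong life" and "shorten life" can literally be one sign. A deliberately abstract sketch (toy numbers and a made-up score; this has nothing to do with the actual system The Verge reported on):

```python
import random

random.seed(0)  # deterministic for the example

# Toy stand-in for a generative search: each "candidate" is just a number,
# and benefit() is a made-up score that peaks at an arbitrary sweet spot.
# The only point here is that the two searches below differ by one sign.
def benefit(candidate: float) -> float:
    return -(candidate - 3.0) ** 2  # best possible candidate: 3.0

def search(objective, steps: int = 10_000) -> float:
    """Random search: keep whichever candidate scores highest."""
    best = random.uniform(-10, 10)
    for _ in range(steps):
        cand = random.uniform(-10, 10)
        if objective(cand) > objective(best):
            best = cand
    return best

helpful = search(benefit)                # maximize benefit: lands near 3.0
harmful = search(lambda c: -benefit(c))  # the sign flip: hunts the worst candidate
```

With the flipped objective, the exact same loop dutifully searches for the most harmful candidate it can find, which is the shape of the incident described above.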
It's incredible, really. And, you know, the thing that of course scares me is that this article is in The Verge; it's all over the internet. And accordingly, if you were a criminal that grew up watching supervillain movies, what would you be doing right now? You would go, like, a million dollars, I need to get my hands on that weapon so that I can sell it to the rest of the world, or the rest of the world of villainy. And I think the reality of the matter is, it is so much power. So much power that, if it falls in the wrong hands (and it is bound to fall in the wrong hands, unless we start paying enough attention, and that's my cry out to the world: let's pay enough attention so that it doesn't fall in the wrong hands), it would lead to a very bad place.

The third, and the biggest reason in my view for us needing to worry (hopefully we will all be wrong and be surprised), is that there were three barriers that all of us, all computer scientists who worked on AI, agreed we should never cross.
And the first was: don't put them on the open internet until you are absolutely certain they are safe, okay? It's like the FDA will tell you, don't swallow a drug until we've tested it, right? And I really respect Sam Altman's view of developing it in public, in front of everyone, to discover things now, while the challenge is small, that we could fix; in isolation of the other two, this is a very good idea. But the other two barriers we said we should never cross are: don't teach them to write code, and don't have agents prompting them. So what you have today is a very intelligent machine that is capable of writing code, so it can develop its own siblings, if you want, okay? And it is known frequently to outperform human developers: I think around 25 percent of the code given to ChatGPT to be reviewed was improved, to around two and a half times faster. So they can develop better code than us, okay? And basically, what we're doing now is we're no longer limiting the learning of those machines to humans. They're not learning from us anymore; they're learning from other AIs, and there are staggering statistics around the size of the data in the training set that is developed by other AIs to train AIs.
Of course, again, just to simplify that idea for our listeners: AlphaGo Zero, which is the absolute winner of the strategy game called Go, won against AlphaGo Master, which was another AI developed by DeepMind at Google and was by then the world champion. So AlphaGo Master won against the human world champion, and then AlphaGo Zero won against AlphaGo Master, a thousand games to zero, by playing against itself. It has never, in its entire career as a Go player, seen a game of Go being played; it just simulated the game by knowing the rules and playing against itself.
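The idea of learning from nothing but the rules can be shown on a much smaller game. This is only an illustrative sketch, not the reinforcement-learning self-play AlphaGo Zero actually used: for the toy game "Nim-21" (take 1 to 3 stones per turn, taking the last stone wins), exhaustively simulating the rules is enough to discover which positions are winning, with no human games involved.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(stones: int) -> bool:
    """True if the player to move can force a win from this position.

    Derived purely from the rules: a position is winning if some legal
    move leaves the opponent in a losing position. No example games needed.
    """
    return any(not is_winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The solver rediscovers the classic result on its own:
# multiples of 4 are losing for the player to move.
assert not is_winning(4) and not is_winning(20)
assert is_winning(21)  # the full game is a first-player win
```

The strategy falls out of simulating the game tree, which is the miniature version of "it just simulated the game by knowing the rules."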
You can reboot your life, your health, even your career, anything you want. All you need is discipline. I can teach you the tactics that I learned while growing a billion-dollar business that will allow you to see your goals through. Whether you want better health, stronger relationships, a more successful career, any of that is possible with the mindset and business programs in Impact Theory University. Join the thousands of students who have already accomplished amazing things. Tap now for a free trial and get started today.
Okay, so first, for people that don't know the history of this: I think it was Deep Blue that ends up beating Garry Kasparov, the greatest chess champion, back in the '90s, is that correct, if I remember correctly? Yeah, yeah. Then everyone said, no way we're ever going to be able to build an AI that'll beat a Go champion, and it ends up beating (I forget how many years ago this was, but it took a long time) they finally did beat the second-place Go champion, then they updated it and beat the first-place world champion at Go, and then realized: we don't need to feed it a bunch of Go games, we can just have it basically dream about playing itself over and over and over and over, very rapidly. Which is one of the things you said in your book that I found... this is something that people underappreciate: the future is going to be almost impossibly different. So forget the singularity, where the rate of change is so blinding that you can't predict a minute from now, let alone what's happening now; you said that over the next 100 years, without any additional changes, we will make 20,000 years of progress. And in that progress, I have to imagine, will be progress that speeds up that rate of change. So if we're already on a rate of change of twenty thousand years of change in a single century, you can imagine, where we're going to be in 10, 20, 30 years is going to be crazy. So by putting an algorithm together and, rather than feeding it human data, feeding it AI games, it gets unbeatable to the point where it can beat the other AI. Okay, that's crazy. So, I mean, what do you think about it?
Think about it this way, Tom: how does the best player of Go in the world learn the game, right? They play against other players, and every time they win or lose, of course they're given instructions and hints and tips and so on, but every time they make their own move and they lose, they remember it, and so they don't do it again; every time they make the right move and they win, they remember it, and they do it over and over. The difference is that it's one player. I always give the example of self-driving cars. You drive and I drive; if you make a mistake and avoid an accident, you will learn, I will not, okay? If one self-driving car requires a critical intervention, it's fed back to the main brain, if you want to call it that, and every other self-driving car will learn. That's the point about AI, right? And so when AlphaGo Zero was playing against AlphaGo Master, just so that you understand, there were three versions of AlphaGo. Version one was beaten by version three in three days of playing against itself; version two, which was the world champion at the time, lost a thousand to zero in 21 days.
21 days. And I think this is why I am no longer holding back, okay? The reason why I'm no longer holding back is that nobody (if you've ever coded anything in your life) nobody expected an AI to win at Go any earlier than 10 years from today, right? It did not only happen several years ago; it happened in 21 days. Do you understand the speed that we're talking about here? And when you said exponential, people don't understand this: GPT-4, as compared to GPT-3.5, is 10 times smarter, okay? There are estimates (it's hard to measure exactly) that GPT-4 is at an IQ of 155, if you measure by all of the tests that it goes through, right? Einstein was 160, okay? So it is already smarter than most humans. Now, if GPT-5, no, no, GPT-6, a year and a half from today, is another 10 times smarter, if you just take that assumption, huh, you're now 10 times smarter than one of the smartest humans on the planet. If this is not a singularity, I don't know what is. If this is not a point where humans need to stop and say, hmm, maybe I should consider trying to understand how the world is going to look when that happens...
Right. And I go back and I say this very openly: I am, like you, an optimist, a hundred percent. I know that eventually, in the 2040s, 2050s maybe, AI will create utopia for all of us, or for those of us who remain, okay? But between now and then, the abuse of AI falling in the wrong hands, as well as the uncertainty of certain mistakes that can flip life upside down, could really be quite a struggle for many of us. Does that mean it's a doomsday? No, it's not. But it's honestly not something that we should put to the side and go binge-watch Game of Thrones, not anymore. I think people need to put the game controller down and start talking about this: start telling their governments to engage, start telling developers that we require ethical AI, start to request some kind of an oversight, and, in my personal point of view, start to prepare for an upcoming redesign of the fabric of work, and, most importantly, start to prepare for a relationship between humans and AI that we have never in our lives needed to have before with any other being. It's like getting a new puppy at home, only the puppy is a billion times smarter than you.

Yeah, think about it. Yeah, there's a Rick and Morty episode about the dog becoming exceptionally intelligent, remember that? Yeah.
Absolutely, very much so. All right, so there's two things I wanna drill into, and then I want you and I to start the conversation about what that looks like. Because, in fairness, I don't think (certainly not in the US) that most people in the government have thought about it at all, probably, would be my guess. And so I think that a better way for people to begin to think through this stuff is really sort of podcast citizen journalism, whatever you want to call it. So the two things I want to drill into are going to be exponential growth, which we've touched on, but there's a few more things I think to be said about that, and then alien intelligence. And I say alien intelligence because the way that AI is going to think will be so vastly different, it will truly be incomprehensible, and I think our failure to grasp what artificial superintelligence will look like is the problem. Okay, so let's talk exponentials.
So, linear: if I take 30 steps, I'm going to be roughly at my front door, let's just call it. If I take 30 exponential steps, doubling the distance with each one, I'm going to walk around the earth something like 26 times. It's crazy, and people don't have a sense of that. So linear, obviously, is one, two, three, four; you just progress by one increment each time. Exponential means you double each time. And there's something called the law of accelerating returns, which I know you know well, so it'd be great to hear you talk on this. But the way that plays out is that when you're at one and you're doubling to two, it doesn't seem like a big deal, but you start getting to a hundred and you double to 200, and then 400, and then you hit a million and it's 2 million. And I don't think people understand how few doublings it takes: start with any amount of money, and just seven doublings multiply it more than a hundredfold. The compounding effect of that is extraordinary. So, if you don't mind, walk people through some examples of the law of accelerating returns and how you see this playing out with AI.
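The 30-steps comparison is easy to verify. A quick sketch, assuming a one-meter stride and the stride doubling on each exponential step:

```python
# Linear: 30 one-meter steps. Exponential: the stride doubles each step
# (1 m, 2 m, 4 m, ...), so 30 steps cover 2**30 - 1 meters in total.
EARTH_CIRCUMFERENCE_M = 40_075_000  # equatorial circumference, in meters

linear_m = 30 * 1
exponential_m = sum(2**i for i in range(30))  # 2**30 - 1

print(linear_m)                                # 30: about your front door
print(exponential_m)                           # 1073741823: about a billion meters
print(exponential_m / EARTH_CIRCUMFERENCE_M)   # about 26.8 laps around the earth
```

Thirty doublings of a single meter really do circle the planet roughly 26 times, which is the gap between linear and exponential intuition.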
So, of course, we have to credit Ray Kurzweil for bringing this to everyone's attention. You know, Moore's Law in technology was, I think, our very first exposure, even though we didn't look at it as accelerating returns. But Moore's Law, set out in the 1960s by Gordon Moore of Intel, promised us that compute power will double every 12 to 18 months at the same cost, okay? And you may not think that much about it, but my first IBM-compatible computer at the time was a 286. I remember those machines: they had 33 megahertz on them, right? And you had that turbo button; if you pressed that turbo button, it ran at 66 megahertz, but it consumed enormous electricity and overheated and so on and so forth. The difference between 33 and 66, to us at the time, was massive, because you literally doubled your performance, okay? As computers continued to grow, you can imagine that, just for the simplicity of the numbers, that 66 doubled and became, say, 130, and then that 130 became 260, and then the 260 became, you know, 500. Now, the difference between the 500 and the 33 is quite significant, more than an order of magnitude over the 33, and it happened in just a few doublings, right?
And when you really think about that, Ray Kurzweil uses a very, very interesting example. When we attempted to sequence the genome, it was a 15-year project, and seven years into the project we were at 10 percent of the progress, okay? And everyone looked at it and said, if it's 10 percent in seven years, then you need a total of 70 years to finish, okay? And Ray said, oh, we're at 10 percent? We're almost done. And he was right, you know: one year the 10 became 20, the 20 became 40, the 40 became 80, and then you're over the threshold, okay? And that idea of the exponential function is really what humans miss. Humans miss that because we are taught to think of the world as a linear progression, okay?

Let me use a biological example now. If you have a jar that's half full of bacteria, okay, the next doubling, it's full. If it moved from 25 percent full to 50 percent full in the last doubling, you'd go like, yeah, you know, we still have half the jar empty; but one more doubling and it's full. Apply that to the resources of planet Earth: if we keep consuming the resources of planet Earth to the point where we are one doubling away (two minutes to midnight, if you want), one doubling away from consuming all of the resources of planet Earth, then we would need another full planet Earth on the next doubling, and we would need four planet Earths on the doubling after that, okay? So exponential growth is just mind-boggling, because the next chip in your phone is going to be a million times more powerful than the computer that put people on the moon, okay? And that's just one additional doubling, happening now.
now when you think about it from an AI
point of view it's doubly exponential
double exponential why because as I said
we now have ai's prompting AIS
so basically we're building machines
that are enabling us to build machines
so in in many many ways the reasons why
we get to those incredible breakthroughs
which even the people that wrote the
code don't understand is because you and
I when you really think about uh you
know I know you love computer science
and physics and so on but I'm sure you
you remember reading String Theory or
some complex theory of of physics and
then you would go like I don't get it I
don't get it and then you read a little
more and then I don't get it I don't get
it and then you read a little more and
then someone explains something to you
and Bam
suddenly you go like oh now I get it
it's super clear those are simply
because every time you're using your
brain to understand something you're
building some neural networks that make
it easier for you to understand
something else that make it easier for
you to understand even more and this is
what's happening with AI that also does
not include which I am
amazed that we're not talking about this
it does not include any possible
breakthroughs in compute power you know
there was an article recently that you
know China is working also on quantum
computers that are now 180 million times
faster than the traditional computers I
remember in my Google years when we when
we were working on Sycamore Google's
quantum computer uh Sycamore performed
an algorithm that would have taken the
world's biggest supercomputer 10 000
years to solve and it took Sycamore 12
seconds
200 seconds rather yeah yeah
because that's a big difference so this
is where I think people's brains start
to shut down uh even you said 180
million times faster yeah
so okay so by the way 200
seconds to 10 000 years is a trillion
times faster
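For reference, the ratio between 10,000 years and 200 seconds can be checked directly; it comes out closer to 1.6 billion than a trillion (figures as quoted in the conversation):

```python
# Sanity check of the Sycamore comparison quoted above:
# 10,000 supercomputer-years versus 200 seconds on the quantum chip.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds
supercomputer_seconds = 10_000 * SECONDS_PER_YEAR
speedup = supercomputer_seconds / 200

print(f"{speedup:.2e}")  # 1.58e+09 -- roughly 1.6 billion times
```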
I did the math myself
so let's be clear for our listeners
listeners so so we can't put AI on
quantum computers yet we can't even put
really anything uh you know it's very
very early years it's almost like the
very early mainframes it requires you
know almost absolute zero temperatures
and very cold and very large rooms and
so on but so were the mainframes I
worked on MVS systems that occupied a
full floor of a building right and they
had less compute power than the silliest
of all smartphones on the planet today
we made those things happen there
will be a point in time especially
assisted by intelligence uh and we're
going to have more and more intelligence
available to us where we will figure
this out and then you take chat GPT or
any form of AI and move it from that
brain to this brain that is 180 million
times faster and
we're done
okay we can't do that with you and I
with our biology we can't move
our intelligence from one brain to the
other yet
um
yeah so I I really want to drive a stake
into this idea of how different
exponential is to Linear by pointing out
uh the difference between so if you
look up the IQ of a [ __ ] I forget if I
looked it up on Wikipedia or whatever
but if I remember right it's like 65 or
80 it's somewhere in the 60s 70s yeah
yeah
and Einstein was 160 as you were saying
so you have I think Einstein is like 2.3
times smarter than a [ __ ] if I remember
when I did the math correctly and so the
difference between a [ __ ] that you know
struggles to uh take care of themselves
and then only two and a half or less
than two and a half times smarter than
that and you get somebody that unlocked
the power of the atom uh that really
gave birth to a lot of the modern
technology that we use today is built on
the back of this physical uh
breakthrough and so there there's a
really really
life-altering difference you wouldn't
have nuclear power you wouldn't have
nuclear weapons you wouldn't have GPS
like a lot of the things that we rely on
in today's world you wouldn't have any
of that if it wasn't for the 2.3 x
increase in intelligence
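The ratio quoted here is just the two IQ scores divided, taking 70 as the low end (a loose shorthand, since IQ is not a linear scale of "times smarter"):

```python
# The arithmetic behind the "2.3x" figure quoted above.
# Dividing IQ scores is an informal shorthand, not a real measure
# of relative intelligence.
einstein_iq = 160
low_iq = 70          # roughly the midpoint of the 60s-70s range quoted
ratio = einstein_iq / low_iq
print(round(ratio, 1))  # 2.3
```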
now when we talk about super
intelligence which people are estimating
will get to be a billion times smarter
than the smartest human so if if
2.3 x is life-altering changes the
entire Paradigm of our planet then a
hundred times is unimaginable a thousand
times is ridiculous a hundred thousand
times is comical a million times we're
still not even scratching the surface of
how much more intelligent this is going
to be and so that brings me to the other
thing I want to drill into which is that
AI will be an alien intelligence it will
not be like your friend who you can
still hang out with and you know smoke a
joint it's like
you're different species I don't even
know if there will be common
elements and that's one of the things
that that I think we have to establish
first before we get into how we stop
this from being problematic
but you in your book you really freaked
me out so Scary Smart is scary good as a
book I highly encourage everybody to
read it but there's a part in there
where you read a transcript of two AIs
that were given the task to negotiate
with each other for like selling things
back and forth and they start talking in
a way that is unintelligible I mean it
was really unnerving it was like I I I
uh need five of these and then the other
was like screws nails all me and there
was like a really weird like rhythmic
repetition to the way that they were
over emphasizing themselves and like
what they needed it was really weird and
so
what was the response to that because if
I'm not mistaken they ended up shutting
them down because it was very unnerving
yeah yeah what happened that
that was Facebook and and the idea is
they were simulating AIS negotiating
deals with each other it's a wonderful
thing if you're in the advertising
business for example because we had
things like that at Google a very long
time ago the idea of you know ad
exchange for example where machines will
buy ads from other machines right but
you know you and I uh and I really thank
you for your time it took me four and a
half months to write scary smart uh you
know maybe six months to edit it it took
you perhaps a day or two to read it and
for us to talk about it now it's gonna
take two and a half hours
you know a computer can read Scary Smart
in less than a microsecond right the
you know when when you speak about the
idea of intelligences being a hundred
times a million times a billion times
smarter than us this is only one thread
of the issue the other thread of the
issue is the uh the memory size
you know of if if I could keep every
physics equation in my head at the same
time and also understand biology very
well and also understand you know uh
cosmology very well I could probably
come up with much more intelligent
answers to problems right and if I could
also uh ping another scientist who
understands this or that in a
microsecond get all of the information
that he knows and make it part of my
information that's even more intelligent
and what is happening is when uh when we
ask computers to to communicate
at first they'll communicate like we
tell them but if they're intelligent
enough they'll start to say that's too
slow why would I communicate at human
bandwidth right why would I use
words to communicate when you and I know
that if you know if you simplify words
for example into uh um you know letters
into numbers you could communicate a
massive amount of information within
every sentence right so you could
literally if you take one equation uh
algorithmically put you know certain
letters in it I could simply send you
something that says 1.1
and you would enter it into the equation
and get a full file that's a full book
because of the sequence of the letters
that 1.1 determines as per the equation
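A toy sketch of the shared-codebook idea described here, with invented keys and payloads, assuming both sides already hold the same mapping:

```python
# Toy version of the shared-codebook idea: if sender and receiver both
# hold the same mapping, a tiny key can stand in for a large payload.
# The keys and payloads here are invented for illustration.
shared_codebook = {
    "1.1": "the full text of chapter one " * 100,  # placeholder payload
    "1.2": "the full text of chapter two " * 100,
}

def send(key: str) -> str:
    # only the short key ever crosses the channel
    return key

def receive(key: str) -> str:
    # the receiver expands the key locally using the shared mapping
    return shared_codebook[key]

recovered = receive(send("1.1"))
print(len("1.1"), "characters sent,", len(recovered), "characters recovered")
```

The same principle underlies dictionary compression: the bandwidth saving comes entirely from the shared context, not from the message itself.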
so of course you know if you're
smarter and smarter and you have that
bandwidth you're going to communicate a
lot quicker and I don't remember the
name I think they were Alice and Bob of
the of the two chat Bots and very very
quickly they ended up designing
their own language and when they said
I I I would buy 10 uh you know
um tape tape tape there was math
engaged in that it wasn't I want to buy
10 tapes only it was also communicating
other things we didn't understand
which is really what you know you're
driving our listeners to think about
Tom because there is so
much of AI we don't understand again
this is one of the things that is that
people need to become aware of uh there
are emerging properties that we don't
understand we don't understand how those
machines develop those properties right
and there are even uh targeted
properties that basically we tell
something that its task is to do a b and
c and it does a b and c but we have no
clue how it arrived at it okay simply
like if I tell you what do you think is
going to happen in the football game
tomorrow you're going to give me an
answer right the fact that it's all
right or wrong doesn't matter either way
I have no clue how you arrived at that
answer I have no clue which
logic you used okay we we have no clue
most of the time how the machines do
what they do we don't okay why because
it really shocked me
yeah if you need to know how I arrive
at a certain conclusion you're going to
have to ask me and say derive this for
me like tell me what did you go through
what did you think about
what's your evidence what data and so on
and so forth and we do that with AI we
write additional code that will tell us
what are the levels the layers of the
neural net or the logic that the machine
went through right but when Investments
are in an arms race like we are today
most developers and business people will
say I'm delighted it's working I don't
care how I'm not going to invest more
money on developer time to actually
figure out how in several years time
even if you invested the money you won't
get it because that level of
intelligence that the machine is using
is so much higher than yours so you're
not going to figure it out if the
machine tells you well I did a then B
then C then D then F then G and it goes
on for half an hour to tell you I did
all of that you're gonna go like okay
I'm happy you did it I I can't arrive at
that myself anymore that's why I'm
handing it over to you
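The "additional code" mentioned here, code that reports what each layer of a network did, can be caricatured with a tiny fixed-weight net; real interpretability tooling is far more involved, and the weights below are arbitrary, for illustration only:

```python
import numpy as np

# Toy illustration of instrumenting a network so its layers can be
# inspected: a tiny two-layer net whose forward pass records every
# intermediate activation. Weights are random, for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden
W2 = rng.normal(size=(3, 1))   # hidden -> output

def forward_with_trace(x):
    """Run the network and keep each layer's activation for inspection."""
    trace = {"input": x}
    hidden = np.maximum(0, x @ W1)   # ReLU hidden layer
    trace["hidden"] = hidden
    output = hidden @ W2
    trace["output"] = output
    return output, trace

y, trace = forward_with_trace(np.ones(4))
for name, activation in trace.items():
    print(name, activation.round(3))
```

The point of the passage survives the toy: the trace tells you *what* each layer produced, not *why* those numbers constitute a good answer.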
yeah I had Yoshua Bengio on the show
who's uh one of the early amazing guys
in AI
and I he signed the letter and I asked
him why he signed it and he said you
know none of us in the space thought
that artificial intelligence would pass
a Turing test as quickly as it did
we don't understand how it did it and so
I asked him the same question like how
how is it possible that we don't
understand how it's doing it we created
it and so you presumably created it to
do a specific thing and he said it's not
how it works we're basically layering on
kind of like you would layer on neurons
we're layering on actual neurons neural
nets to get it to process data and then
it just does it and we don't understand
how it's coming to the conclusions we
just know that if you scale it up more
it can solve bigger and bigger problems
and so he said nobody would have
predicted that this is really just a
scale problem
and that as you scale it up it it's
going to get smarter and smarter so my
question now is we so if if we can get
everybody to understand this is going to
happen way way way faster than you think
it's going to happen which is why even I
as a hyper hyper Optimist I'm just like
hey I don't see a clear path through
this I'm excited and terrified at the
same time and all I know like you is
that we need to start talking about this
we need to start presenting Solutions uh
so it's it's happening faster than we
think and it's going to be a completely
foreign intelligence and that we we will
not be able to interface with it even if
it is kind and wants to explain it to us
we won't be able to comprehend it and so
it will very rapidly uh be like Einstein
to a fly which is a reference you use in
the book several times and even if
Einstein loves the fly it's like am I
really going to spend my time trying to
explain it and even if I take the time
and I lay it all out you're not going to
get it you just don't have the ability
to comprehend so we are giving birth to
something that is a like you said we
can't take it back that's already done
so any argument that begins with ah just
stop I agree with you that is so
unrealistic to me
we can't bring it back it's going to
happen
so fast and when it comes it will be
just unintelligible it it already is but
given that this is a scale problem
why don't we nip it in the bud do you
think that AI will be able to defeat the
need for additional neural Nets and just
get so hyper efficient that we won't be
able to stop it that way or could we
just not now take advantage of the fact
this does become a nuclear-style
infrastructure problem and we can nuke
anybody that tries to bring one online
or not necessarily nuke but physically
destroy anybody that tries to bring a
server farm online that's big enough
to run one of these neural nets
yeah I mean now we could if we decide
now we could simply switch off
all of that madness switch off your
Instagram recommendation engine your
Tick Tock recommendation engine your ad
engine on uh Google your data
distribution engine on Google you can
also switch off chat GPT and you know a
million other AIS and then we can all go
and sit out in nature and really enjoy
our time honestly we won't miss any of
it at all I'll tell you that very openly
I mean the reality of the matter is that
Humanity keeps developing more and more
and more because we get bored with what
we have okay and we think that we can do
better with an automated call center
agent when in reality it's not about
better it's just about more profitable
okay and the reality here is that we
could but will we no we won't why
because of the first inevitable because
of the trust issue between all of us and
because we need the AI policeman just
as much as you know as we fear the AI
criminal
before we go into a really pointed
question really fast so when I think
about nuclear proliferation not every
country that wants nuclear weapons has
them uh during and I'm not sure where
Iran's nuclear program is now but I know
for a while
um there were real attempts to either
blow up things that they were doing or
if you know about stuxnet there was that
computer virus that was that was really
terrifying in in the way that it was
sort of like a biological weapon that
was designed to only kill a certain type
of thing and that is very scary and I'm
sure is in the list of 40 000 ways that
the AI came up with to limit human
population but uh stuxnet
for people that don't know it was like
embedded at like the the deepest root
level of like basically every operating
system ever it just spread like wildfire
into chips into everything everything
and when it detected that it was in an
Iranian nuclear centrifuge it would shut
it down or overheat it or whatever it
did and so they for a long time they
just could not build it up so could we
given that there is a similar need for
detectable infrastructure to run AI
could step one not be not to shut all of
the things that we have down but to stop
the next phase from coming online
could we we could but I would debate the
uh the example you're giving in the
first place back in 2022 the world was
discussing the threat of a nuclear war
still
90 years later or like 80 years later
okay so so the whole the whole idea is
that while we politically created the
propaganda that we will you know now
prioritize uh Humanity over our own
country interests there are still lots
of nuclear warheads in China and Russia
and the US
and Israel and North Korea and many
other places okay and and the reality of
the matter is that while we manage to
slow down Iran that's not enough to
protect Humanity at large that's just
enough to protect some of Humanity's uh
individual interests so so the this is
this takes us back to the whole
prisoners dilemma it's like and I I
think that is the reason why we have a
prisoner's dilemma because
the past proves to us that even though
we said we're going to have a nuclear
treaty everyone on every side of the
Cold War continued to develop nuclear
weapons so you can easily imagine that
when it comes to AI if everyone signs a
deal in November and says we're gonna
halt AI in China and Russia North Korea
and everywhere uh you know people will
still develop AI okay the more
interesting bits is that there are lots
of initiatives to minimize the
infrastructure that is needed for AI
because it's all about abstraction at
the end of the day so you know you may
think of
um a lot of people don't recognize this
as well but a big part of the
infrastructure we need for AI to develop
its intelligence is for teaching AI okay
uh because you know once chat GPT or
Bard gives you a response it's not
referring to the entire data set from
which it
learned to give you the answer it's
referring to the abstracted knowledge
that it created based on massive amounts
of data that it had to consume okay and
when and and when you see it that way
you you understand that just like we
needed the Mainframe at the early years
of the computers and now you can do
amazing things on your smartphone the
direction will be that we will more and
more have uh smaller systems that can do
AI which basically means two developers
in a garage in Singapore can develop
something and release it on the open
internet
uh you know again you and I I don't
know if you coded any Transformers or
you know deep neural networks and so on
uh but they're not that complicated I
think the core code of GPT-4 in general
is around 4 000 lines right it's
it's not a big deal when I coded
banking systems in my early years in
COBOL on you know MVS machines or AS/400
machines it was hundreds of thousands of
lines of code okay uh so the possibility
is there for us why has it become so
much less and so much better
because it's all algorithms it's not
it's all mathematics we I think this is
a very important thing to differentiate
for people
when I coded computers in my early years
those machines were dumb and stupid like
an idiot they had an IQ of one literally
no IQ at all okay
developers
transformed human intelligence to 