Transcript
itY6VWpdECc • "Life As We Know It Will Will Be Gone Soon" - Dangers Of AI & Humanity's Future | Mo Gawdat
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/TomBilyeu/.shards/text-0001.zst#text/0972_itY6VWpdECc.txt
Kind: captions
Language: en
we've never created a nuclear weapon
that can create nuclear weapons the
artificial intelligences that we're
building are capable of creating other
artificial intelligences as a matter of
fact they're encouraged to create other
in artificial intelligences even if
there is never an existential risk of AI
those Investments would redesign our
society in ways that are beyond the
point of Narita
said that people should consider holding
off having kids right now because of AI
and other societal issues that are
coming you've said this is the thing
that we should be thinking about that AI
poses a bigger threat than global
warming why is it that you think AI
poses such a significant existential
risk to humanity is not just in the
amount of risk that AI you know
positions ahead of humanity it's not
about the timing of the risk and we
should cover those two points very
quickly but it really is about a point
of no return where if we cross that
point of no return we have very very
little chance to bring the genie back
into the bottle what is the point of no
return the most important of which of
course is the point of Singularity and
Singularity is a moment where you have
an AGI that is much smarter than humans
uh I think that when we discuss
singularity that might bring about the
suspicion of an existential risk like
Skynet type of thing we are losing focus
on the immediate threat which is much
more imminent and in a very interesting
way as damaging uh probably even more
damaging and that risk in my view which
we have to resolve first before we talk
about the existential risks is the risk
of AI falling in the wrong hands or the
risk of AI falling in the right hands
that are naive enough to not handle it
well or the risk of AI misunderstanding
our objectives or the method or the risk
of AI uh you know performing our
objectives but us misunderstanding our
own benefit and I think when you really
look at those I call this the third
inevitable and scary smart when you
really look at those those are truly
around the corner right there are other
other other risks that are extremely
important as well which we don't even
think of as threats but that are
completely going to redesign the fabric
of our society jobs
by definition is going to the definition
of jobs and accordingly the definition
of purpose the definition of
income gap power structures all of that
is going to be redesigned significantly
it is being redesigned as we speak as we
speak there are those with Hunger for
power those with fear of other powers
those with Hunger for uh more and more
and more money and success and so on who
are investing in AI in ways that even if
there is never an existential risk of AI
those investments will redesign our
society in ways that are beyond the
point of narrator let's get into the
three inevitables what are they exactly
so so the three inevitables are my way
of telling my readers or my listeners to
understand that there are things that we
shouldn't waste time talking about
because they are going to happen Okay
and those are number one there is no
shutting down AI there is no reversing
it there is no stopping uh the
development of it let me list them
quickly and then we go back on each and
every one of them the second inevitable
is that AI will be smarter than humans
significantly smarter than humans and
the third inevitable is that bad things
will happen in the process exactly what
bad things we spoke about a few of them
but we can definitely discuss each and
every one of those in details the first
inevitable interestingly the fact that
AI will happen there is no shutting in
down there is no uh um you know
um there is no nuclear type 3T that will
ever happen where Nations will decide
okay you know what let's let's stop
developing AI like we said stop
developing nuclear weapons or at least
stop using them because we really never
stopped developing them uh you know
that's not gonna happen because of a
prisoner's dilemma because Humanity
so smooth smoothly stuck itself in a
place in a corner where nobody is able
to make the choice to to stop the
development of AI so if alphabet is
developing AI then meta has to develop
AI if you know and you know Yandex in
Russia has to develop Ai and so on and
so forth if if the US is developing AI
then China will have to develop Ai and
vice versa and so the reality of the
matter is that it is not a technological
uh characteristic of AI that we cannot
stop developing it it's a capitalist and
power focused system that will always
prioritize the benefit of US versus them
over the benefit of humanity at large so
uh you know when when you really think
about some of the initiatives that now
some global leaders are starting to talk
about Ai and try to put it in the
spotlight like the prime minister of the
UK or whatever you know when I when I
was asked about that I was in London
last week and basically I think it's an
amazing initiative great idea but can
you understand the the the magnitude of
the ask that you have here which is what
you need to get initiative
the initiative was that we get all of
the global leaders together to uh you
know to a summit that basically looks at
Ai and tries to regulate Ai and for that
to happen you know you need Nations to
suddenly say okay you know what we're
gonna all look at the global benefit of
humanity above the globe the benefit of
each individual Nation you want to get
people from China uh Russia the U.S uh
North uh North Korea and others around
one table and tell them can we all shake
hands and say we're not going to develop
that thing and even if they do which
they will not agree to that uh you know
then they will question what happens if
a drug cartel leader somewhere you know
hiding in the jungles decides to expand
and diversify his business and start to
work on AIS that are criminal in nature
we need to develop the policemen and to
develop the policemen we have have to
develop Ai and so all of those
definitions all of those prisoners
dilemmas if you if you understand you
know a game theory are basically
positioning us in a place where our
inability to trust the other guy is
going to lead us to continue to develop
AI at a very fast space Pace because
we're we're even worried about what the
other guy could do due to our mistrust
and you know the clear example of that
is what we saw with the open letter
which I think was a fantastic uh
initiative I think you covered it many
times in your podcast the you know the
attempt to to tell uh you know the the
big players of uh that are developing AI
let's halt the development for six
months and I think it was less than a
week before uh Sundar pachai the CEO of
alphabets responded and said this is not
realistic you can't ask me to do that
because there is no way you can
guarantee that no one else is going to
to develop Ai and disrupt my business
that basically means we have to
start behaving in a way that's
accepts that AI is going to continue to
be developed it's going to continue to
be a prominent part of our life and it's
going to continue to get massive amounts
of investment on every side of the table
for people that don't know the
prisoner's dilemma it's probably worth
walking them through it but what you
said about drug dealers I've never heard
anybody say that before and I think
removing this from just government
versus government is probably a very
wise way to look at it you and I are
both sort of secretly very optimistic in
fact the way that we uh first met is
around the idea of happiness and mental
health and all of that so I hope people
don't see either of us as sort of
doomsdaysayers I just feel like we're
we're going through a transitional
period right now that is unprecedented
in human history and I say that with
full understanding that every generation
says like no no this time it's really
different uh but I feel like this time
really is different the the closest
thing to it is nuclear weapons and that
already gives you a sense this scale but
part of the reason I'm more worried
about AI than I was even as a kid with
um really living under the cloud of
nuclear proliferation the Cold War all
of that is because the infrastructure
required for a nuclear program is
massive whereas you don't need that
infrastructure you just need a computer
some servers uh and you know clone over
chatgpt and you're ready to rock so walk
people through the prisoner's dilemma uh
so that they can really understand that
this is a deep fundamental truth of The
Human Condition and isn't just a
government V government thing yes let me
cover that but let me also cover uh a
tiny one more thing that's very very
different between Ai and nuclear weapons
which is the fact that we've never
created a nuclear weapon that can create
nuclear weapons uh you know the
artificial intelligences that we're
building are capable of creating other
artificial intelligences as a matter of
fact they're encouraged to create other
in artificial intelligences with the
single objective stated objective of
make them smarter so so basically what
you you know imagine if you had a a
nuclear you know two nuclear weapons
finding a way of mating and creating a
smarter or a more devastating nuclear
weapon and I think that's really
something that most people Miss uh you
know Miss when we try to cover the
threat of AI
um the the uh the prisoners dilemma is a
very very simple mathematical uh game if
you want part of game theory is to
imagine that you have two uh um you know
prisoner there's no two suspects of a
crime play basically Partners in a crime
uh who are captured but the police
doesn't have enough evidence to uh you
know to to put them both in jail so they
are trying to get one of them to tell on
the other so they would go to each of
them and say by the way just giving you
an example uh you know if you don't tell
and your friend tells you're gonna get
three years and he's gonna get out free
uh or you know he's gonna go get out
with with one year and then they go to
the other guy and say the same if you
tell and he doesn't tell you're gonna
get one year and you know and and he
gets three right and by the way if you
both tell uh you both get two years and
so from a mathematics point of view if
you build the possibilities of those uh
uh you know um
scenarios in in quadrants basically a
quadrant where I tell and you don't uh
is is a quadrant that requires a lot of
trust sorry a quadrant that I don't tell
and you don't tell is a quadrant that
requires a lot of trust any other
quadrant by definition tells me that if
I tell I will get off with this with a
with a lighter sentence okay and and the
only reason why I wouldn't do it is if I
trust you and if I don't trust you by
definition human behavior will drive you
and drive me both of us to say look the
better option is for me to get off with
a lighter sentence because I don't trust
the other guy and I think that's reality
of what's happening I mean in business
in general uh in in you know in power
struggles in general in wars in general
I think it's all a situation that's
triggered by not trusting the other guy
because if we could trust the other guy
we would probably focus on on many more
much softer objectives that can grow the
pie rather than you know uh get each of
us to compete so so this is where we are
and I think the reality of us continuing
to develop AI at a much faster Pace
because Chad GPT and open AIS work in
general I think is the Netscape moment
uh for AI of you know Netscape of the
internet GPT is for AI because basically
it highlighted
first and foremost not just for the
public I think the bringing it to public
attention actually is a good thing
because it allows us to talk about it
more openly and people will listen when
when I published scary Smart in 2021 uh
it was Business book of the year in in
the UK at the Times Business book of the
year but it wasn't as widely
uh
urgently read as it is today simply
because people were like yeah that's so
interesting this guy has a an
interesting point of view but it's 50
years away
and and human nature sadly doesn't
respond very well to
existential threats that are very far in
time or probable in their in their you
know a possibility of occurrence uh we
we don't treat you know it's like those
warnings on a pack of cigarette uh you
know if if we tell you it's almost it
causes cert it's most certainly causes
death people look at it and say yeah but
that's 50 years from now I want to enjoy
it for 50 years so you know whether it's
50 years or five nobody really knows but
you know people would delay reacting to
those so so when when open Ai and chair
GPT became a reality uh I think what
ended up happening happening is that the
public got to know about AI but also the
investors so this is the.com bubble all
over again right we have massive amounts
of money poured to encourage faster and
faster development of AI I mean I I know
you're a techie like I am and we both
know that it actually
uh is not that complicated to develop
than another layer of AI of course it's
complicated to find the Breakthrough uh
but but it you know to to develop more
and more of those I think
is something that's becoming our reality
today but why are we
as we think about how fast the
technology is developing which I I think
most people will concede that they
probably struggle to think exponentially
and not linearly and so even with a
linear thinking at this point seeing how
far it's already come I think people are
already worried if they understood how
much faster even than they could
possibly imagine it's going uh it is
going
um they're still worried so my question
is why does this break bad why do we all
make the base assumption that uh without
either massive intervention or you know
some sort of regulatory body or
something that this doesn't just
naturally end up in a good place why are
you me other people why are we worried
that number three uh in your three
inevitables is that things go wrong why
are we worried that it isn't just not
when there's bug software it's nothing
why isn't this going to be like the year
2000 the Y2K problem for anybody old
enough to remember that everybody was
super panicky and then nothing happened
why isn't this going to be yet another
enough nothing Burger
because the chips are lined up in the
wrong direction so uh you know Hugo de
Garris if you if you if you know him as
a very well-known AI scientist that
worked in in Asia for quite a few years
and he uh he did that he built a
documentary that I think is found on
YouTube it's called Singularity or bust
and he was basically saying that uh most
of the investment that's going in AI
today is going into uh
spying killing gambling
and uh one one one more
um so spying is surveillance okay
killing is what we call defense uh
gambling is all of the trading
algorithms and selling which is all of
the advertisement and recommendation
engines and you know all of all of the
uh all of the idea of turning us into
products that that can be advertised too
if you want and that's not unusual by
the way in the in our capitalist system
because those Industries come with a lot
of money banking you know defense and so
on and so forth uh the the the chips are
lined up this way I mean if you take
just accurate numbers on how much uh of
of the AI investment is going behind
drug Discovery uh for example is you
know as compared to how much is going
behind you know killing machines and
killing robots and killing drones and so
on and so forth uh you'd be you'd be
amazed that it's a staggering the
difference right and this is the nature
of humanity so far if you if you're
running a research on on a disease that
doesn't affect more than you know a few
tens of thousands of people you're gonna
struggle to find the money okay but if
you're building a new weapon that can
kill tens of thousands of people the
money will immediately arrive because
there is money in that you can sell that
and sadly as much as I uh you know I
would have hoped that Humanity wasn't uh
completely driven by that it's our
reality so so so this is number one
number two is that so number one is is
we're aligned in the direction of things
going wrong okay number two is even if
we're aligned in the direction of going
right wrongdoers can flip things upside
down there was a an article in The Verge
uh you know a few months ago around uh
you know a drug Discovery AI that was
basically supposed to look at
characteristics of you know human
biology and you know
um whatever information and data we can
give it about the drugs we can develop
and chemical chemistry and so on and so
forth with the objective of prolonging
life prolonging life so prolonging human
life is one parameter in the equation
it's basically plus make life longer
okay and for fun they you know the
research team was uh was you know was
asked to call to go to go and give a
talk at a university and so for the fun
of it they uh reversed the uh the
positive to negative so instead of
giving the AI the objective of
um of prolonging life it became
objective of shortening life and within
six hours if I remember correctly the AI
came up with 40
000.
uh uh possible uh biological weapons and
and you know agents like nerve gas and
so on Jesus yeah
it's it's incredible really and and you
know it's the thing that of course
scares me is that this article is in The
Verge you know it's all over the
Internet and accordingly if you were a
criminal that grew up watching uh you
know super villain movies uh what would
you be doing right now you would go like
a million dollars I need to get my hands
on that weapon so that I can sell it to
the rest of the world or the rest of the
world of villainy and I think the
reality of the matter is uh it is so
much power so much power that if it
falls in the wrong hands and it is bound
to fall in the wrong hands unless we
start paying enough attention right and
that's my My Cry Out To The World Is
let's pay enough attention so that it
doesn't fall in the wrong hands it would
lead to a very bad place the third you
know and and the biggest reason in my
view uh of
um of us
needing to worry hopefully hopefully we
will all be wrong and be surprised
is that there were three barriers that
we all compute all computer scientists
or that worked on AI we all agreed there
were three barriers that we should never
cross
and and the first was don't put them on
the open internet until you are
absolutely certain they are safe okay
and you know it's like fdaa will tell
you don't swallow a drug until we've
tested it right uh you know and and I
and I really respect Sam Altman's view
of you know uh developing it in you know
in public in front of everyone and to
discover things now that could uh you
know that we could fix when the
challenge is small in isolation of the
other Tool uh this is a very good idea
but the other two barriers we said we
should never cross is don't teach them
to write code and don't have agents
prompted them right so what you have
today is you have a very intelligent
machine that is capable of writing code
so it can develop its own siblings if
you want okay that is known frequently
to uh to to outperform human developers
so I think 75
of the code
uh was no sorry 25 of the code uh given
to chat GPT to be reviewed uh was
improved to around two and a half times
faster okay so so they can develop
better code than us okay and and
basically now what we're doing is we're
not only limiting their learning the
learning of those machines to humans so
they're not learning from us anymore
they're learning from other AIS and
there are staggering statistics around
the size of data that is developed by
other AIS to train AIS in the data set
of course again just to simplify that
idea for for our listeners Alpha Go
Master which is
the absolute winner of the strategy game
go uh you know one against alphago uh
um sorry alphago zero which is the
absolute winner of the strategical game
that's called go one against alphago
Master which was another AI developed by
deepmind of Google that was by then the
world champion so alphago Master won
against the world world champion and
then Alpha go zero one against alphago
Master a thousand games to zero
by playing against itself it has never
in its entire
career as a go player seeing a game of
Go being played it just simulated the
game by knowing the rules and playing
against it
you can reboot your life your health
even your career anything you want all
you need is discipline I can teach you
the tactics that I learned while growing
a billion dollar business that will
allow you to see your goals through
whether you want better health stronger
relationships a more successful career
any of that is possible with the mindset
and business programs in Impact Theory
University join the thousands of
students who have already accomplished
amazing things tap now for a free trial
and get started today
okay so first people that don't know the
history of this uh I think it was deep
blue ends up beating Gary Kasparov the
greatest chess champion back in the 80s
is that correct if I remember correctly
yeah yeah no way that uh we're ever
going to be able to build AI that'll
beat a go Champion uh ends up beating
the I forget how many years ago this was
but it took a long time but they finally
did beat the second place go Champion
then they updated beat the first place
world champion uh and go and then
realized we don't need to feed it a
bunch of go games we can just have it
basically dream about playing itself
over and over and over and over and over
and over and over very rapidly which is
one of the things you said in your book
that I found
this is something that people under
appreciate the future is going to be
almost impossibly different to the point
where it will even now so forget the
singularity where the rate of change is
is so blinding that you you can't
predict a minute from now let alone
what's happening now but you said over
the next 100 years without any
additional changes we will make 20 000
years of progress and
in that progress though I have to
imagine will be progress that speeds up
that rate of change so if we're already
on a rate of change of twenty thousand
uh years of change in a single Century
you can imagine where we're going to be
in 10 20 30 is going to be crazy so by
putting an algorithm together rather
than feeding it human data you feed it
AI
games it gets unbeatable to the point
where it can beat the other AI okay
that's
crazy so I mean where do you think about
it think about it this way Tom how does
the best player of go in the world learn
the game right they play against other
players and every time they win or they
lose of course they're given
instructions and hints and tips and so
on but every time they make their own
move and they lose they remember it and
so they don't do it again every every
time they make the right move and they
win they remember it and they do it over
and over the the difference is that one
player you know I always give the
example of self-driving cars you drive
and I drive if you make a mistake and
avoid an accident you will learn I will
not okay if if one self-driving car
requires critical intervention it's fed
back to the main
brain if you want to call it and every
other self-driving car will learn that's
the point about AI right and so when
Alpha go zero was playing against
alphago Master uh you know for for it to
to learn just so that you understand
there were three versions of Alpha
alphagoa version one was beaten by
version three in three days of playing
against itself version 2 became the
world you know which is the which was
the world champion at the time lost a
thousand to zero
in one in 21 days
21 days and I think this is why I am no
longer holding back okay the reason why
I'm no longer holding back is that
nobody if you've ever coded anything in
your life nobody expected an AI to win
and go uh any earlier than 10 years from
today right
it did not only happen several years ago
it happened in 21 days
did you understand the speed that we're
talking about here and and when you said
exponential people don't understand this
chat gpt4 as compared to charge PT 3.5
is 10 times smarter okay there are
estimates it's hard to to measure
exactly there are estimates that chat
gpt4 is at an IQ of 155 if you measure
by all of the you know uh tests that it
goes through right Einstein was 160.
okay so it is already smarter than most
humans
now if chat GPT 5 no no no chat gpt6 a
year and a half from today is another 10
times smarter
if you just take that assumption huh uh
you're now 10 times smarter than one of
the smartest humans on the planet
if this is not a singularity I don't
know what is if this is not a point
where humans need to stop and say hmm
maybe I should consider trying to
understand how the world is going to
look like when that happens
right and I go back and I say this very
openly I am like you I am an optimist a
hundred percent I know that eventually
AI in the 2040s 2050s maybe will create
Utopia for all of us or for those who
remain of us okay but then between now
and then
the abuse of AI falling in the wrong
hands as well as the uncertainty of
certain mistakes that can flip life
upside down okay uh could really be
quite a struggle for many of us does
that mean it's a doomsday no it's not
but it's honestly not something that we
should put on the side and go binge
watch uh you know Game of Thrones not
not anymore I I think people need to put
the game controller down and start
talking about this starting telling
their governments to engage starting to
tell you know developers that we require
ethical aisle starting start to to
request some kind of an oversight and
and in my personal point of view start
to prepare for an upcoming uh redesign
of the fabric of work and most
importantly start to prepare for a
relationship between humans and AI that
we have have never in our lives needed
to do before with any other being
it's like getting a new puppy at home
only the puppy is a billion times
smarter than you
yeah think about it yeah there's a Rick
and Morty episode about the dog becoming
exceptionally intelligent remember that
yeah
absolutely very much so all right so I
wanna there's two things I wanna drill
into and then I want to you and I to
start the conversation about what that
looks like because In fairness I don't
think certainly not in the US I don't
think most people in the government have
thought about it at all probably would
be my guess uh and so I think that the a
better way for people to begin to think
through this stuff is really sort of um
podcast citizen journalism whatever you
want to call it uh so correct the two
things I want to drill into are going to
be
exponential growth which we've touched
on but there's a few more things I think
to be said about that and then alien
intelligence and I say alien
intelligence because the way that AI is
going to think will be so vastly
different it will it will truly be
incomprehensible
and I think our failure to grasp what
artificial super intelligence will look
like is the problem okay so let's talk
exponentials so linear if I take 30
steps I'm going to be
roughly at my front door let's just call
it if I take 30 exponential steps I'm
going to walk around the earth something
like 30 times it it's crazy and people
don't they don't have a sense of that so
uh linear obviously is one two three
four it just you progress by one
increment each time
exponentials means you double each time
and there's something called The Law of
accelerating returns which I know you
know well about so be great to hear you
talk on this but the way that that plays
out is that when you're at one and
you're doubling to two like it doesn't
seem like a big deal but you start
getting to a hundred and you double to
200 and then 400 and then you hit a
million and it's 2 million and I don't
think people understand that it only
takes seven doublings like if you start
with uh
yeah um an amount of money you only have
to have seven uh exponential steps to
double your money and so the compounding
effect of that is is extraordinary so if
you don't mind walk people through some
examples of uh the law of accelerating
returns and how you see this playing out
with AI
so so the the of course we have to
credit three coursework for for you know
bringing this to everyone's attention
the you know more slow in technology was
I think our very first exposure even
though we didn't look at it as
accelerating returns but Moore's Law
promised us uh in the 1960s which you
know was uh coined by the CEO of Intel
at the time uh that's compute power will
double every 12 to 18 months at the same
cost okay and you know you may not think
that much about it but my first window
you know those computer so uh IBM
compatible computer at the time I had a
286. I remember those machines they had
33 megahertz on them right uh and uh you
know you had that turbo button if you if
you pressed that turbo button it ran on
six at 66 megahertz but it consumed uh
an or you know electricity and
overheated and so on and so forth the
difference between 33 and 66 to us at
the time was massive because you
literally doubled your performance okay
as computers continue to to grow you can
imagine that every year just for the
Simplicity of the numbers that 66
doubled and then you know became say 130
for the Simplicity of the numbers and
then that 130 became 260 and then the
260 became you know 500. now the
difference between the 500 and the the
33.
is quite significant it's orders of
magnitude the 33 and it happened in two
or three double X right and I think what
people when you really think about that
Ray coursewell uses a very very
interesting example when we attempted to
uh sequence The genome it was a um a 15
years project and seven years into the
project uh we were at 10 of the progress
okay and everyone looked at it and said
if it's 10 in seven years then you need
70 more years to you know or you know a
total of 70 years to finish okay
uh and Ray said oh we're at 10 we did it
okay and he was right you know one year
the 10 became 20 the 20 became 40 the 40
became 80 and then you're over the uh
you're over the the threshold okay and
that idea of the exponential function is
really what humans Miss humans miss that
because we are taught to think of the
world as a linear progression okay let
me use uh um you know uh a biological
example now if you have a a jar that's
half full of bacteria
okay the next doubling it's full
it's not gonna add you know if it moved
from 25 full to 50 percent full in the
in the last doubling you'd go like yeah
you know we still have half empty one
more doubling and it's full if you apply
that to the resources of planet Earth uh
if we if we keep consuming the resources
of plant planet Earth to the point where
one doubling away you know two minutes
to midnight if you want one doubling
away you would be consuming all of the
resources of planet Earth
we would need another full planet Earth
on the next Double
we would need four planet Earth is on
the next doubling okay so that's
exponential growth uh is is just
mind-boggling because the growth on the
next chip in your phone is going to be a
million times more than the computer
that puts people on the moon okay that's
one double that one additional happening
now when you think about it from an AI
point of view it's doubly exponential
double exponential why because as I said
we now have ai's prompting AIS
so basically we're building machines
that are enabling us to build machines
so in in many many ways the reasons why
we get to those incredible breakthroughs
which even the people that wrote the
code don't understand is because you and
I when you really think about uh you
know I know you love computer science
and physics and so on but I'm sure you
you remember reading String Theory or
some complex theory of of physics and
then you would go like I don't get it I
don't get it and then you read a little
more and then I don't get it I don't get
it and then you read a little more and
then someone explains something to you
and Bam
suddenly you go like oh now I get it
it's super clear those are simply
because every time you're using your
brain to understand something you're
building some neural networks that make
it easier for you to understand
something else that make it easier for
you to understand even more and this is
what's happening with AI that also does
not include which I am
amazed that we're not talking about this
it does not include any possible
breakthroughs in compute power you know
there was an article recently that you
know China is working also on quantum
computers that are now 180 million times
faster than the traditional computers I
remember in my Google years when we when
we were working on Sycamore Google's
quantum computer uh Sycamore performed
an algorithm that would have taken the
world's biggest supercomputer 10 000
years to solve and it took a sycamore 12
seconds
200 seconds let me listen yeah yeah
because that's a big difference so this
is where I think people's brains start
to shut down uh even you said 180
million times faster yeah
so okay so I know so by the way 200
seconds to 10 000 years is a trillion
times faster
for second reasons
so I did myself let's be clear for our
listeners so so we can't put AI on
quantum computers yet we can't even put
really anything uh uh you know it's very
very early years it's almost like the
very early mainframes it requires you
know almost uh uh uh absolute zero uh
you know degrees and and very cold and
very large rooms and so on but so where
the mainframes I worked on mvs systems
that occupied a full floor of a building
right and they had less compute power
than the silliest of all smartphones on
the planet today
we we make those things happen there
will be a point in time especially
assisted by intelligence uh and we're
going to have more and more intelligence
available to us where we will figure
this out and then you take chat GPT or
any form of AI and move it from that
brain to this brain that is 100 million
times and 80 million times faster and
we're done
okay we can't do that with you and I
with our biology we can't move
our intelligence from one brain to the
other yet
um
yeah so I I really want to drive a stake
into this idea of how different
exponential is to Linear by pointing out
uh the difference between so if you uh a
[ __ ] by if you look it up I forget if I
looked it up on Wikipedia or whatever
but I looked up what's the IQ of a [ __ ]
if I remember right it's like 65 or 80
it's somewhere in the 60s 70s yeah yeah
and Einstein was 160 as you were saying
so you have I think Einstein is like 2.3
times smarter than a [ __ ] if I remember
when I did the math correctly and so the
difference between a [ __ ] that you know
struggles to uh take care of themselves
and then only two and a half or less
than two and a half times smarter than
that and you get somebody that unlocked
the power of the atom uh that really
gave birth to a lot of the modern
technology that we use today is built on
the back of this physical uh
breakthrough and so there there's a
really really
life-altering difference you wouldn't
have nuclear power you wouldn't have
nuclear weapons you wouldn't have GPS
like a lot of the things that we rely on
in today's world you wouldn't have any
of that if it wasn't for the 2.3 x
increase in intelligence
now when we talk about super
intelligence which people are estimating
will get to be a billion times big and
smarter than the smartest human so if if
2.3 x is life-altering changes the
entire Paradigm of our planet then a
hundred times is unimaginable a thousand
times as ridiculous a hundred thousand
times as comical a million times we're
still not even scratching the surface of
how much more intelligent this is going
to be and so that brings me to the other
thing I want to drill into which is that
AI will be an alien intelligence it will
not be like your friend who you can
still hang out with and you know smoke a
joint it's like
your your different species they're I
don't even know if there will be common
elements and that's one of the things
that that I think we have to establish
first before we get into how we stop
this from being problematic
but you and your book you really freaked
me out so scary smart is scary good as a
book I highly encourage everybody to
read it but there's a part in there
where you read a transcript of two AI
that we're given the task to negotiate
with each other for like selling things
back and forth and they start talking in
a way that is unintelligible I mean it
was really unnerving it was like III uh
need five of these and then the other
was like screws Nails all me and there
was like a really weird like rhythmic
repetition to the way that they were
over emphasizing themselves and like
what they needed it was really weird and
so
what was the response to that because if
I'm not mistaken they ended up shutting
them down because they that was very
unnerved yeah yeah what happened that
that was Facebook and and the idea is
they were simulating AIS negotiating
deals with each other it's a wonderful
thing if you're in the advertising
business for example because we had
things like that at Google a very long
time ago the idea of you know ad
exchange for example where machines will
buy ads from other machines right but
you know you and I uh and I really thank
you for your time it took me four and a
half months to write scary smart uh you
know maybe six months to edit it it took
you perhaps a day or two to read it and
for us to talk about it now it's gonna
take two and a half hours
you know a computer can read scary smart
and less than a microsact right the the
you know when when you speak about the
idea of intelligences being a hundred
times a million times a billion times
smarter than us this is only one thread
of the issue the other thread of the
issue is the uh the memory size
you know of if if I could keep every
physics equation in my head at the same
time and also understand biology very
well and also understand you know uh
cosmology very well I could probably
come up with much more intelligent
answers to problems right and if I could
also uh ping another scientist who
understands this or that in a
microsecond get all of the information
that he knows and make it part of my
information that's even more intelligent
and what is happening is when uh when we
ask computers to to communicate
at first they'll communicate like we
tell them but if they're intelligent
enough they'll start to say that's too
slow why why would I communicate that
human bandwidths right why would I use
words to communicate when you and I know
that if you know if you simplify words
for example into uh um you know letters
into numbers you could communicate a
massive amount of information within
every sentence right so you could
literally if you take one equation uh
algorithmically put you know certain
letters in it you could simply I could
send it to you something that says 1.1
and you would enter it into the equation
and get a full file that's a full book
because of the sequence of the letters
that 1.1 determines as per the equation
so of course com you know if you're
smarter and smarter and you have that
bandwidth you're going to communicate a
lot quicker and I don't remember the
name I think they were Alice and Bob of
the of the two chat Bots and very very
quickly they they ended up designing
their own language and when they said
III uh would would buy 10 uh you know
um tape tape tape there was math math
engaged in that it wasn't I want to buy
10 tapes only it was also communicating
other things we didn't understand
which is really what you're you know
driving us to to driving our listeners
to think about Tom because there is so
much of AI we don't understand again
this is one of the things that is that
people need to become aware of uh there
are emerging properties that we don't
understand we don't understand how those
machines develop those properties right
and there are even uh targeted
properties that basically we tell
something that its task is to do a b and
c and it does a b and c but we have no
clue how it arrived at it okay simply
like if I tell you what do you think is
going to happen in the football game
tomorrow you're going to give me an
answer right the fact that it's all
right or wrong doesn't matter either way
I have no clue how you arrived at that
answer I have no clue which
logic you used okay we we have no clue
most of the time how the machines do
what they do we don't okay why because
it really shocked me
yeah if if you if you need to know how I
uh arrive at a certain conclusion you're
going to have to ask me and say drive
this for me like tell tell me what did
you go through what did you think about
what's your evidence what data and so on
and so forth and we do that with AI we
write additional code that will tell us
what are the levels the layers of the
neural net or the logic that the machine
went through right but when Investments
are in an arms race like we are today
most developers and business people will
say I'm delighted it's working I don't
care how I'm not going to invest more
money on developer time to actually
figure out how in several years time
even if you invested the money you won't
get it because that level of
intelligence that the machine is using
is so much higher than yours so you're
not going to figure it out if the
machine tells you well I did a then B
then C then D then F then G and it goes
on for half an hour to tell you I did
all of that you're gonna go like okay
I'm happy you did it I I can't arrive at
that myself anymore that's why I'm
handing it over to you
yeah I had Joshua bengio on the show
who's uh one of the early guys and
amazing AI
and I he signed the letter and I asked
him why he signed it and he said you
know none of us in the space thought
that artificial intelligence would pass
a touring test as quickly as it did and
we don't understand how it did it and so
I asked him the same question like how
how is it possible that we don't
understand how it's doing it we created
it and so you presumably created it to
do a specific thing and he said it's not
how it works we're basically layering on
kind of like you would layer on neurons
we're layering on actual neurons neural
Nets to get it to process data and then
it just doesn't and we don't understand
how it's coming to the conclusions we
just know that if you scale it up more
it can solve bigger and bigger problems
and so he said nobody would have
predicted that this is really just a
scale problem
and that as you scale it up it it's
going to get smarter and smarter so my
question now is we so if if we can get
everybody to understand this is going to
happen way way way faster than you think
it's going to happen which is why even I
as a hyper hyper Optimist I'm just like
hey I don't see a clear path through
this I'm excited and terrified at the
same time and all I know like you is
that we need to start talking about this
we need to start presenting Solutions uh
so it's it's happening faster than we
think and it's going to be a completely
foreign intelligence and that we we will
not be able to interface with it even if
it is kind and wants to explain it to us
we won't be able to comprehend it and so
it will very rapidly uh be like Einstein
to a fly which is a reference you use in
the book several times and even if
Einstein loves the fly it's like am I
really going to spend my time trying to
explain it and even if I take the time
and I lay it all out you're not going to
get it you just don't have the ability
to comprehend so we are giving birth to
something that is a like you said we
can't take it back that's already done
so any argument that begins with ah just
stop I agree with you I that is so
unrealistic to me
we can't bring it back it's going to
happen
so fast and when it comes it will be
just unintelligible it it already is but
given that this is a scale problem that
why don't we nip it in the bud if do you
think that AI will be able to defeat the
need for additional neural Nets and just
get so hyper efficient that we won't be
able to stop it that way or could we
just not now take advantage of the fact
this does become a nuclear-style
infrastructure problem and I can nuke
anybody that tries to online or not
necessarily nuke but destroy physically
destroy anybody that tries to bring a
server Farm on that's that's big enough
to run one of these neural Nets
yeah I mean now now we could if we if we
decide now we could simply switch off
all of that Madness switch off your
Instagram recommendation engine your
Tick Tock recommendation engine your ad
engine on uh Google your data
distribution engine on Google you can
also switch off chat GPT and you know a
million other AIS and then we can all go
and sit out in nature and really enjoy
our time honestly we won't miss any of
it at all I'll tell you that very openly
I mean the reality of the matter is that
Humanity keeps developing more and more
and more because we get bored with what
we have okay and we think that we can do
better with an automated call center
agent when in reality it's not about
better it's just about more profitable
okay and and the reality here is that we
could but will we no we want why because
of the first inevitable before because
of the trust issue between all of us and
because we need the AI policemen just as
much as we need the you know as as we
fear the AI Criminal
before we go into a really pointed
question really fast so when I think
about nuclear proliferation not every
country that wants nuclear weapons has
them uh during and I'm not sure where
Iran's nuclear program is now but I know
for a while
um there was real attempts to either
blow up things that they were doing or
if you know about stuxnet there was that
computer virus that was that was really
terrifying in in the way that it was
sort of like a biological weapon that
was designed to only kill a certain type
of thing and that that is very scary and
I'm sure is in the 40 000 the list of 40
000 ways that the AI came up with to
limit human population but uh stuxnet
for people that don't know it was like
embedded at like the the deepest root
level of like basically every operating
system ever it just spread like wildfire
into chips into everything everything
and when it detected that it was an area
Iranian nuclear centrifuge it would shut
it down or overheat it or whatever it
did and so they for a long time they
just could not build it up so could we
given that there is a similar need for
detectable infrastructure to run AI
could step one not be not to shut all of
the things that we have down but to stop
the next phase from coming online
could we we could but I would debate the
uh the example you're giving in the
first place back in 2022 the world was
discussing the threat of a nuclear war
still
90 years later or like 80 years later
okay so so the whole the whole idea is
that while we politically created the
propaganda that we will you know now
prioritize uh Humanity over our own
country interests there are still lots
of nuclear Wars
Warheads in China and Russia and the US
and Israel and North Korea and many
other places okay and and the reality of
the matter is that while we manage to
slow down Iran that's not enough to
protect Humanity at large that's just
enough to protect some of Humanity's uh
individual interests so so the this is
this takes us back to the whole
prisoners dilemma it's like and I I
think that is the reason why we have a
prisoner's dilemma because
the past proves to us that even though
we said we're going to have a nuclear
treaty everyone on every side of the
Cold War continued to develop nuclear
weapons so you can easily imagine that
when it comes to AI if everyone signs a
deal in November and say we're gonna
halt AI in China and Russia North Korea
and everywhere uh you know people will
still develop AI okay the more
interesting bits is that there are lots
of initiatives to minimize the
infrastructure that is needed for AI
because it's all about abstraction at
the end of the day so you know you may
think of
um a lot of people don't recognize this
as well but a big part of the impress
infrastructure we need for AI to develop
its intelligence is for teaching AI okay
uh this for when when when you when
um once your GPT again or bar your
response to you it's not referring to
the entire data set from which it
learned to give you the answer it's
referring to the abstracted knowledge
that it created based on massive amounts
of data that it had to consume okay and
when and and when you see it that way
you you understand that just like we
needed the Mainframe at the early years
of the computers and now you can do
amazing things on your smartphone the
direction will be that we will more and
more have uh smaller systems that can do
AI which basically means two developers
in a garage in Singapore can develop
something and release it on the open
internet
uh you know again you and I I don't know
if you coded uh any any uh Transformers
or uh or or you know or a deep deep
neural networks and so on uh but they're
not that complicated I think the code of
chat of of gpt4 in in general is around
4 000 lines the core code right it's
it's not a big deal when when I when I
coded banking systems in my early years
on kobel on you know uh on MDS machines
or as 400 machines it was hundreds of
thousands of lines of code okay uh so so
there the the possibility for us why why
has it become so much less
is so much better
because it's all algorithms it's not
it's all mathematics we I think this is
a very important thing to differentiate
for people
when I coded computers in my early years
those machines were dumb and stupid like
an idiot they had an IQ of one literally
no IQ at all okay
developers
transformed human intelligence to the
machine we solved the problem and then
we instructed the machine exactly what
to do to solve it itself
right so you know when when we
understood how a general ledger works we
understood it as humans and then we told
the machine add this subtract that
reconcile this way and then the machine
could do it very very very fast which
appeared very intelligent but it was
totally a Mechanical Turk it was just
repeating the same task over and over
and over in you know in very fast speed
we don't do that anymore we don't tell
the machine what to do we tell the
machine how to find out what it needs to
do so we give it algorithms and the
algorithms are very straightforward when
you you know let's let's take the the
the simplest way of deep learning when
we started deep learning what we did is
we had basically three Bots if you want
one is uh what we call the maker uh the
other is the student the the final AI
that we want to to build and a one
that's called the teacher okay and we
would say
um you know tell them to look for a bird
in a picture okay and they would
identify a few parameters you know um
edges and how the how do they see the
edge and the difference in color between
two pixels and so on and so forth and
then they would detect the shape of a
bird and basically we would build a code
and and call it a student we would build
multiple instances of it and then show
it a million photos and say is it a bird
is it not a bird is it a bird is it not
a bird and the machines would randomly
answer at the beginning it's literally
like the throw of a dice okay and you
know some of them will get it wrong
every time some of them will get it
writes 51 of the time and one of them
will get it right sixty percent of the
time probably by pure luck okay the
teacher is performing those tests and
then the maker would discard all of the
stupid ones and take the one code that
got it right and continue to improve it
okay so the code was simply a punishment
and reward code it was saying guess what
this is and if you guess it right we
will reward you okay and and basically
the machine the algorithm would then
continue to improve and improve and
improve uh until until it became very
good at detecting birds and cats and
pictures and so on and so forth when
when we came to Transformers and why GPT
and Bard and so on are so amazing is
because we used something that was
called reinforcement learning with human
feedback so basically we allowed instead
of discarding the bad ones okay we found
a way which Jeffrey Hinton the you know
who recently left Google was very
prominent at you know uh promoting early
on we found a way just like with humans
to give the machine feedback you know
show it a picture and then it would say
this is a cat and we would say no it's
not it's actually a bird what do you
need to change in your algorithm of them
okay so that it would the answer would
have become a bird okay and so the
machine would go backwards with that
feedback and and and you know and change
its own thinking so that the answer is
correct and then we would show it
another picture another picture and we
keep doing this so quickly on billions
or millions or tens of thousands of
machines of you know millions of
instances until eventually it becomes
amazing just like a child just like you
give a child a simple puzzle okay nobody
ever told the child no no no no darling
look at the cylinder turn it to its side
look at the cross section it will look
like a circle look at the board and find
a matching shape that is a circle if you
put the cylinder through the circle it
will go through that's old programming
okay new programming which every child
achieve intelligence achieves
intelligence with is you give them a
cylinder and a puzzle board and they
will try they'll try to fit it in the
start it won't they'll try again it
won't they'll throw it away and get
angry then they catch it again and try
in the Square it won't and then when it
goes through the cylinder something in
the child's brain sorry through the
circle there's something in this child's
brain says this is this works okay the
only difference is a child will try five
times a minute or five times you know 50
times a minute a computer system will
try 50 000 times a second
okay and so very very quickly they
achieve those intelligences and as they
do
we we we don't really need to code a lot
because the heart of the code is an
algorithm it's an equation okay and and
Mathematics is much more efficient than
instructions
so if if I tell you Tom uh
when you leave home make sure that your
um you know distance is no more than the
day of the months multiplied by two away
from your home and make sure that you
don't consume any more fuel then your
height divided by four
okay or then then your body temperature
divided by seven whatever that is okay
with those two equations I don't need to
give you any instructions anymore you
can always look at your fuel consumption
and your distance and say oh I'm falling
out of the algorithm
with very very few lines of code I just
gave you two lines of code
what's up guys it's Tom bilyu and if
you're anything like me you're always
looking for ways to level up your
mindset your business and your life in
general that's exactly why I started
impact Theory a podcast that brings
together the world's most successful and
inspiring people to share their stories
and most importantly strategies for
success and now it's easier than ever to
listen to impact theory on Amazon music
whether you're on the go or chilling at
home you can simply open up the Amazon
music app and search for impact Theory
with Tom bilyu to start listening right
away if you really want to take things
to the next level just ask Alexa hey
Alexa play impact Theory with Tom bilyu
on Amazon music
now playing impact
impact Tom bilyeu on Amazon music and
boom you're instantly plugged into the
latest and greatest conversations on
mindset Health finances and
Entrepreneurship get inspired get
motivated and be legendary with impact
theory on Amazon music let's do this
turning everything into algorithms
allows us to go a lot farther
that's certainly amazing from the AI
perspective of getting everything to
function unless but unfortunately that
dunks on my idea of wanting to constrain
all of this by just putting a limit on
the the physical structures so what is
then the path forward you mentioned
earlier ethical AI what does that mean
how is this potentially a path forward
so you know I I hope people stayed with
us this long and I hope we didn't scare
anyone too much but let me make a very
very very blunt statement
I am a huge Optimist that the end result
of all of this is a Utopia why because
there is nothing wrong with intelligence
there is nothing inherently evil about
intelligence okay there is not as a
matter of fact the reason humanity is
where it is today is because of
intelligence you know good and bad by
the way the good is because of our
intelligence and the bad is because of
our limited intelligence so so the The
Good The Amazing intelligence that
Humanity possesses allows us to create
an amazing machine that flies across the
globe and takes you you know to your
families uh to your to your wife's
family in the UK or whatever right but
but at the same time our limited
intelligence I would even say humanity
is stupidity forgets or ignores that
this machine is burning the planet in
the process
if I if we had given Humanity more
intelligence and it was so easy for them
to to solve both problems at the same
time they would have created the machine
that doesn't burn the planet in the
process so more intelligence will help
us and in in in my perception as we go
through the rough patch in the middle
there is what I call the fourth
inevitable and the force inevitable is
that AI will create an amazing Utopia
I'm not kidding you where you can walk
to a tree and pick an apple and walk to
another tree because of our
understanding of nanophysics and pick an
iPhone okay and the cost of production
of both of them literally from a
physical material point of view is
exactly the same so so this is how far
we can go if we could understand
nanophysics and you know and then
created and create Nanobots better than
we do today now
we will end up in that place we will end
up in a place where where we have a
Utopia for one simple reason I say that
with confidence which is
if you don't know what the where the
direction is going take the past as a
predictor okay and the past is if you
look at us today you would think that
you would see that the biggest idiots on
the planet okay are destroying the
planet and not even understanding that
they are right you become a little more
intelligent and you say I'm destroying
the planet but it's not my problem but I
understand that I'm destroying it okay
you get a little more intelligent and
you go like no no hold on I am
destroying the planet I should stop
doing what I'm doing you get even more
intelligent than you say I'm destroying
the planet I should do something to
reverse it right it seems that the most
intelligent of all of us okay agree that
war is not needed there could be a you
know a simpler solution if we could
actually become a little more
intelligent that you know the equal
challenge that we go through is not
needed there has been an invention made
a long time ago for climate change
that's called a tree okay and that if
Humanity gets together and plants more
trees we're gonna be fine and getting
together just requires a little more
intelligence a little more communication
a little more uh pre you know a better
presentation of the numbers so that
every leader around the world suddenly
realizes yeah it doesn't look good for
my country in 50 years time okay and and
I think the reality of the matter is
that as AI goes through that trajectory
of more and more and more intelligence
zooms through human stupidity to human
you know best IQ Beyond human's
intelligence they will by definition
have our best interest in mind have the
best interest of the ecosystem in mind
just like the most intelligent of us
don't want us to kill the giraffes and
the you know the other species that
we're killing every day
a more intelligent AI than us will
behave like the intelligence of life
itself and the difference between human
intelligence and the intelligence of
life itself is that we create from
scarcity for you and I to protect our
tribe from the Tigers we have to kill
the Tigers right when Nature wants to
protect from the Tigers it creates more
gazals and you know and more tigers and
the Tigers will eat the weaker gazals
and that will fertilize the trees and
then there will be more fruits for
everyone and the cycle goes on okay it's
more intelligent it's more intelligent
this is this may be where we start to
diverge or at least it's the jumping off
point for how I think we have to think
through this
without falling into hopium so
do you think that there is going to be a
period of literal or emotional Bloodshed
between here and equilibrium absolutely
100 right so so there is one scenario
where we don't so so when I when I talk
about the fourth inevitable this is
after we go through a lot of [ __ ] I I'm
sorry if I swear uh but yeah so what
yeah we're first going to go through a
very difficult period very uncertain
where the fabric of society at its core
is being redesigned and where there is a
superpower that comes to the planet
that's not always raised by the family
Kent okay I always refer to the story
before we get to that because I think I
that's really important and I love that
but before we get to we I think there's
a few things we have to Define including
human nature the nature of Nature and
then the nature of super intelligence
and what those are going to look like so
when you describe nature on that one I
think you and I may see it very
differently so I see nature as a
brutal
completely indifferent
life-giving amazing incredible wonderful
thing but also I've seen enough YouTube
videos of uh a lion grabbing onto a baby
uh what are they called water buffaloes
or whatever and then as the lions are
trying to eat the baby uh a crocodile
leaps out of the water and grabs a hold
of the baby and they're literally
tearing it apart it is absolutely
freakish I don't know if you saw the
recent video of the shark eating eating
a swimmer on camera gnarly's oh my God
literally horrendous so I don't think
nature cares about the individual and
for the gazelle to be the the sort of
sacrifice to keep the Tigers from eating
humans I don't think the gazelle is very
happy about that so when when I think
about academic nature
the nature of nature is ruthless uh
maybe an even better way it's just
indifferent it's like this is the chain
it's nice one thing has to get eaten for
something else what do you mean it's not
it's not like it is untrue it prefers
the success of the community over the
success of the individual yes so did
Mao's China
so let's go into those two ideologies
right there is an ideology that says
it's all about that one baby uh uh you
know gazelle okay and and that's a
western ideology in many many ways
basically saying it's my individual
Freedom that comes first which is by the
way an amazing ideology right but it it
becomes it it Narrows down everything to
if one person is hurt
we have a very big problem that's why
you get you know they send billions of
dollars to bring Matt Damon back from
Mars right uh you know if you take the
same ideology I'm just joking about the
movie but if you take the same ideology
you could use the billions of dollars to
save
a million people in Africa right if you
if your ideology is let's benefit all of
humanity not one human okay then the uh
the ideology justifies the approach and
the approach of nature is saying look
every one of you is going to have to to
eat we just understand that like so so
if you're if you're all gonna have to
eat then we might as well design a
system uh that appears brutal because it
kills the weakest one of you okay uh but
then at the same time it's the most
merciful if we wanted to grow the entire
Community if they were just wanted to
grow the entire ecosystem because
eventually Sooner or Later by the way
one of you is going to be eaten right
now when you see it that way is that
brutal yes it is is you know a million
animals dying brutal also is okay but
what we do as humanity is we say let's
kill a hundred species a day drive them
to Extinction you know for the benefit
of one species which is humanity okay
and I think that devisable that's that's
view of there is one more important than
the other works to a certain limit in
favor of humanity and then Works Against
Humanity so when I say you know nature
is more intelligent is because by by
creating more and allowing a brutal
system if you wanted to fix the system
you should fix it by saying let's not
eat but if if we're gonna eat anyway
then there is no fixing to the system
other than more eating leads to more
Community more to a more balanced
ecosystem at the end of the day what are
that where there are billions living at
the expense of a few hundred thousands
dying
so I'm going to sum up what I think the
nature of nature is in a single sentence
and I do this in the context of one of
the Theses that you lay out in the book
is that the way forward is to understand
that ultimately if humans
um act well to the Superman thing if we
raise the super intelligence well with
ethics and morals uh that we'll we'll
get to the other side well it'll be a
brutal transition but but we'll get to
the other side so in that context when I
read that I was like I don't think it's
going to work that way because here is
what I think the nature of nature is
um
nature
does not care in the slightest about the
individual it is simply the rule of
the strongest survive
period correct that's that's nature of
play and so the equilibrium comes from
the checks and balances of how hard it
is to kill a gazelle that can run faster
bounce higher but if a lion can catch
you
you die and it eats you alive man like
it you're gasping for air it's [ __ ]
biting into your neck it's the craziest
most horrendous thing ever and PS if the
gazelle can get away [ __ ] you lion you
starve to death you can starve to death
I don't care yeah yeah that that is the
nature of Nature and so I have a bad
feeling that if AI aligns itself with
nature which it may have to because that
just may be the logic it it will be
indifferent to us and that's the whole
that is a thing that's a given
that's a given I'm sorry to interrupt
you but that is again please
no I mean that one one of the again
we're going back to talk about the
existential risk but but the in the
existential risk scenarios uh
one of our better scenarios believe it
or not is that AI ignores us all
together
Believe It or Not uh it's a much better
scenario than AI being annoyed by us or
AI killing us by mistake
okay the the you know uh one of the uh
of the um I don't remember who was
saying that perhaps uh you know because
AI again as per your point Tom is so
unimaginably more intelligent than us
that one amazing scenario for all of us
is if they Zoom bias in terms of their
intelligence so quickly that they
suddenly realize they don't have the
biological limitations that we have that
they have a much better understanding of
physics to actually understand what
wormholes are and basically just realize
that the universe is 13.7 million light
years vast and that there are so many
other things they can do other than care
about us and so they would disappear in
the ether as if they have never been
here okay they would still be here
interestingly some simulation scenarios
would tell you that this is probably the
case already okay they would still be
here but they would be here uninterested
in US wow that's an amazing scenario
that corrects all of the [ __ ] that we've
done so far right because the worst case
scenario is that they are here and then
they look at us and they look at climate
change and they go like not good not
good I don't want the planet to die when
I'm centered on the planet what's the
biggest reason for climate change those
little [ __ ] get rid of them
right and and you know it is it is quite
likely uh in my personal view once again
that they will Zoom by us quickly enough
just like you and I none of us I don't
know of any human that woke up one
morning and waged an outright war on
ants okay like I'm gonna kill every ant
on the planet and I'm gonna just waste
so much of my energy to find every ant
of the planet because simply they're
irrelevant to us they are relevant when
they come into our space but if they if
they're not you know we're not gonna
bother them we don't mind that they live
okay I I believe that this would be uh
you know unlikely that AI will be a
billion times smarter than you and I
does not have the biological limitations
and weaknesses that we have as humans
and yet continue to insist that we're
annoying okay the only way for that to
happen honestly is that we become really
annoying which sadly is human nature I
know you wanted to know about the need
to talk about the nature of Nature and
the nature of human nature human nature
is annoying and the reality is we're
probably going to
um to rebel against them we're probably
going to fight against them
when we recognize that it's too late
maybe it's better to start now by
preparing so that we don't have to get
to that fight
okay so how do we prepare now
yes so uh man this conversation was
scary uh I we I don't think we've hardly
gotten started yet if I'm completely
honest in terms of as as we legitimately
try to navigate a path through this
um we've already both conceded that
there's going to be either a literal
bloodbath or an emotional bloodbath
between here and stability uh we've
already I think conceded that nature is
indifferent and is perfectly fine with
some people getting eaten some people
starving to death doesn't care
equilibrium is only about the collective
and not at all about the internet about
that would be cold comfort for every
human every tree plant person dog cat
gazelle whatever like a at the
individual level you just could not
matter less which then triggers human
nature where we're gonna fight to your
point so what what does the preparation
look like to try to avoid this and I'll
for anybody that's been following AI for
a while this is the alignment problem I
assume you're going to address 100. yeah
the the alignment alignment problem I
just address it perhaps with my other
side not the engineer and the uh
algorithmic thinking that I did address
the problem with my whole life right the
the the challenge uh
has been that those who have developed
AI who believed in what is known as the
solution to the control problem okay and
the control problem is in Humanity's
arrogance we still believe today that we
will find a way to either augment AI
with our biology so that they become our
slaves or to box them or tripwire them
or whatever so that they never cross the
limits that we give them and and we can
discuss this in detail if you want but
in my personal view you can never
control something that's a billion times
smarter than you right you're not even
able to control your teenage kids so
seniors help people really fast along
these lines about the click here if
you're a robot and how chat GPT gets
around that
yeah because this scared me I was like
what
that is it's it's it's it's understood
by intelligence so basically the you
know GPT uh if if you have those
captures you know the ones that come to
you that basically say find
um you know the traffic lights in those
pictures or you know click here if you
know to say I am not a robot and yeah it
basically went to sort of like an uh a
crowdsourcing site a Fiverr or something
like that and and told one of the people
there can you click on this for me and
the people said why you know the person
basically said jokingly why are you a
robot and and it said uh no I'm not I'm
just visually impaired and I can't do
this myself so there are layers and
layers and layers of freakishly worrying
stuff about this right but first of all
that you know that idea of human
manipulation uh uh Harare you have no
talks about how AI is hacking the
operating system of humanity which is
language okay and so
um you know I just ask people if you
don't mind to go on Instagram and look
at something called
um you know um search for hashtag AI
model for example okay if you if you
search for hashtag AI model uh you won't
be you won't be able to to distinguish
if the person pausing in front of you is
a is a human or not okay beautiful
gorgeous girls or you know fit and
amazing looking uh men and simply
completely developed by AI you can you
cannot tell the difference anymore right
there are many many YouTube videos
already you'll start to come across them
especially on the topic of AI uh you
know I was watching yesterday about the
integration of Bing and GPT and Bing
search clearly and not a human voice
clearly someone gave
to a uh you know a machine that read it
for him in such an incredibly
indistinguishable way but obviously I
think the person that wrote it didn't
speak native English so they forgot the
where the word the and the word whatever
you know when you speak to for you know
someone whose English is not the their
first language they make the almost
those mistakes so you can easily see
that it's everywhere now and it
manipulates human uh the human brain and
that's what cha GPT is doing it's going
to a human brain and saying do this for
me now you may say ah but now that we
know this we're gonna prevent it yes but
what else do we not know about
how much do we know about how much
Instagram is influencing my mind let me
give you an example
um Tom if I told you uh that uh by
definition
um there was a a research in uh South
Eastern University in California
that discovered that brunettes tend to
keep longer relationships than blondes
okay
does it make any difference at all that
there is no Northeastern University in
California and that what I just said is
a lie
I've already not if people believe it
yeah
yeah so so I've either influenced you
because I took some of your attention to
go and debate that okay I've influenced
you because you believed me or I've
influenced you because you didn't
believe me so you're gonna keep your you
know looking for proof
and and if AI can fake a tiny bit of all
of the input that's coming to you uh you
know think about the future of democracy
in the upcoming election
think about how much
just any word because you know there
were talks about affecting uh you know
the previous election or the one before
right and and we couldn't really prove
it because at the time the technology
was trying to influence the masses
technology today can influence one human
at a time
right if you if you go to to uh you know
a replica or GPT on Snapchat and so on
think about how that machine if you're
if you've ever seen the movie Her can
can influence one individual at a time
and I think this is becoming the reality
of that experiment that they can go and
influence a human the second which I
think is more interesting is the proof
of what I spoke about in the book
in terms of if you give a machine the
task of doing anything whatsoever
it will go to Resource allocation so it
will collect as many resources as it can
it will ensure its own Survival and it
will go into creativity it will it will
utilize creativity because if I need the
super program to do that
it's it intelligence has that nature if
if I told you Tom make sure that this
podcast is no longer than two hours
right it's not programming and it's not
life it is just a task so you're gonna
start to tell yourself all right I need
to get two clocks in front of me uh you
know so that I don't look up and down
instead of one it's better that's the
resource you know allocation or
aggregation uh you know you're going to
tell yourself oh by the way I need to be
alive to make sure that I shut this guy
up before two hours so you're going to
you know if if there is a fire alarm in
your in your building you're gonna have
to respond to it so that you can finish
the task on time and you're going to be
creative there will be ways where you're
gonna cut me off in the middle and find
a way to tell me a question differently
or you know whatever and and that's part
of our drive to achieve a task you know
one of the very well known uh I I hope
I'm not flooding people with too many
stories but you can go and research
those on on the internet one of the very
well-known moments in the history of AI
was known as as move 37 when Alpha Go
Master was played against Lee the world
champion of uh of goal move 37 was
completely unexpected never played by a
human before okay contradicts all of the
logic and intuition of a go player to
the point that the world champion the
human world champion had to take 15
minutes recess to understand this okay
it's just it has it comes with Ingenuity
it comes with the idea when when we were
training uh I wasn't part of that team
but them as the you know the deepmind
team amazing amazing team at Google uh
were training uh the original deepmind
uh to to
to play Atari games if you if you
remember the the the original game that
had bricks on it where you basically
have to break out yeah and it was very
quick that the machines could discover
that there are a you know uh
creative strategies to poke a hole in
the wall and then put the you know the
the the pixel on top of the ball on top
and break the wall you know there was
one experiment actually available on
YouTube interestingly which was inside
one of the labs where the game was to
navigate a Channel with a boat and that
and the AI quickly found out that if it
started to hit the walls uh it would
actually go faster and and grow the
score quicker and you know of course if
it's a game it's okay we say Well done
you're very creative but if it's not
responsible for navigating actual boats
you start to question uh because their
task the objective that we've given them
is maximize the score okay I think there
was an article recently about uh an uh a
killing drone that killed its operator
or harmed its operator somehow uh about
again I didn't hear about this but yeah
it's a it is when I talk about those
things I actually start to worry because
I don't know what's true and what's not
anymore right so I know I've read that
okay uh I was actually flying on
Emirates Airlines and it was part of the
headlines on the on the live news but
that doesn't mean that it is real
anymore you don't know if it's real or
not anymore because it could be
generated by uh fake news fake mediafic
uh sources whatever that is
so so we're hacking that operating
system and and we're hacking the
operating system is humanity and when
chat GPT asks an operator to do a task
for it it's a very alarming uh signal
because as it continues to develop its
intelligence it will find more and more
ways to use humans for the things that
we restrict them to use through the
control problem okay so I have a thesis
around alignment that I would love to
get your feedback on so as the people
that are most concerned about this the
reason that they're concerned about AI
is there's no way to guarantee that we
will want the same thing that AI wants
and if we have a misalignment problem
and AI is a billion times smarter than
us we lose just by definition now you've
laid out the one scenario that I sort of
cling to as my hope which is that it's
possible that AI just isn't bothered
like oh like these dumb little things
whatever it's all fine like I I'm a
billion times smarter than you so I can
find Solutions where you can have your
thing I can do mine really no sweat off
my back whatever so okay that's like a
very hopeful scenario but to uh that
assumes that that they want a lot of the
same things that we want like that they
they want to preserve life that they
would even consider needing to think of
a path that included allowing us to live
rather than just like when we're laying
down a freeway we don't go oh but as we
do the freeway we have to make sure that
we plan for the rodents and the anthills
and all that that we're gonna have to
move we're just like anything in the way
the freeway goes away it's if it lives
it's fine but if I have to kill it then
whatever I'm just gonna do the most
efficient thing
that leads me to my central question
around alignment which I think has
everything to do with what is inherent
in the drive of artificial intelligence
because the one thing I don't know
enough about the programming to
understand like in in a natural organism
there there is a fundamental drive for
survival but does that have to be true
of intelligence or could intelligence
not be indifferent to its own Survival
and if it's indifferent to its own
Survival could I not program something
in that says you know to the earlier
algorithms that you were talking about
hey you want to do this thing uh but if
in doing this thing which is is that
feels awesome doing that thing is the
best reward and I don't know how that's
programmed but let's just say that feels
we will have to Define feelings later
but that feels the best so I know that
it's going to go after that but since
you're indifferent to living or dying
or running or not running maybe a better
way to say it uh should that desire to
achieve that come into conflict with
let's just say asimov's three rules of
Robotics uh which basically is all
around don't harm humans so if doing
that thing would harm a human then
you're no longer you're now completely
indifferent to whether you attain that
task or not is there not
a way to program that in at just the
base layers so that as the intelligence
develops it does not develop our same
need to survive need to thrive desire
for more like those feel optional
do they I mean so the the challenge of
every task that you'll ever assign to AI
is that for every module there are sub
modules okay and the challenge really is
when the sub modules contradict the main
module so basically if you if you tell a
killing robot that it its task is to
kill the enemy and there are casualties
on the way what does it choose does it
choose to not kill the casualties the
collateral damage or the and miss its
Target or does it choose to have
collateral damage and kill the enemy
right the difference between those two
is not an AI Choice remember okay there
is absolutely nothing wrong with the
machines I will keep saying this for the
rest of the time I have available to say
there is nothing wrong with the machine
there is a lot wrong with the humans
using the machines okay so if the humans
tell it it's your task is to go and kill
the enemy the humans will have to say
and by the way if there is collateral
damage in the on the way
sorry okay now we know for a fact that
this has been the human decision uh so
far before AI so if we manage to change
and then tell AI don't do that then
hopefully you will preserve some life
but if we don't then we're gonna be
killing on steroids okay now I agree of
course and what I'm what I'm saying
right now does not address your problem
of AI in bad people's hands and I am
perfectly I'm not one of those people
that falls prey to I could never be the
bad guy
um in the right context I'm the bad guy
like I totally understand that so I I
don't yet I'm not trying to contemplate
that yet but the thing that I am trying
to contemplate is do we is it a
fundamental
emergent property of intelligence that
you will have a drive to survive or can
we at least mitigate that problem by
making AI indifferent to its own
accomplishment of the goal
so there was a I don't remember who
wrote this but I wrote it in the book a
simple experiment just to illustrate how
any any logic would work okay if we took
a machine and we told it that its only
task is to bring Tom coffee
okay uh and then on the way to bringing
you coffee it was going to knock off
your microphone or hit a child okay uh
if if you told the machine your task is
to bring coffee the child is collateral
so you can't program uh that the the
machine you haven't programmed that the
machine protects the child yet okay then
you tell the machine hold on your task
is to bring coffee but if you come near
a child I will switch you off
right uh or if you knock the mic or
you're approaching the microphone I will
switch you off by definition what the
machine will then do is it will avoid
being switched off because it wants to
get you coffee
so it you know it will if it's
intelligent enough it will tell you it
will tell itself one of the ways that to
avoid being you know being switched off
is to avoid the microphone okay but
there are other ways I should start to
think about because I'm intelligent
enough to stop being switched off if the
human wants to to switch me off yeah but
that implies that it that it wants its
own Survival that's what I'm saying like
can we not remember because because
that's so it's because
it's not survival it was it implies it
wants its own achievement of the task
it's programmed to achieve the task and
survival not being switched off is part
of the path to getting there
yes right and so if I make it
conditionally indifferent to the
accomplishment of its task so if like uh
for people that don't know do you know
as a monstery laws I know two of them
so what are asimos three laws let's just
let's assume that this is baked into
everything but go ahead what are they
yeah but but but if it's baked into
everything then the task is not going to
be achieved
that's fine so uh do you do I need I
can't remember the Three Laws if you can
say them say them otherwise I'm gonna
look them up really fast I don't
remember them exactly so let's let's
look for it but
okay here we go uh a robot may not
injure a human being or through an
action allow a human being to come to
harm that's number one a robot must obey
orders given it by human beings except
where such orders would conflict with
the first law and a robot must protect
its own existence as long as such
protection does not conflict with the
first and second law okay so assuming
that we bake that into everything AI so
they're adhering to those rules what I'm
trying to get to is a conditional
indifference to the success of its task
which it would need to have in order to
follow those three rules so okay your
job is to bring me coffee but if it's
gonna if in trying to do that you know
you would have to fall out of those
three laws stop and because
good tell me tell me how can you do you
can how can you apply any of those laws
to existing AI
so so take any one of them a trading AI
okay by definition to make more money it
harms another human
it go it takes another human's Bank you
know into bankruptcy or or you know take
takes away your grandma's uh you know
pension fund okay how can you tell the
recommendation engine of of uh of
Instagram don't have don't harm humans
and still make me money
yeah so I think this is where we have to
differentiate the problem set so product
number one is AI used as a weapon by
people is bad news I don't have a solve
for that that that's guns so whether you
use a gun to
um stop a grizzly bear from attacking
you or you walk into a grade school and
start mowing down kids uh like that that
is a human problem not a gun or AI
problem so what I'm saying is now while
I can't address that I do not have a
solution for that yet so I'm setting
that on the shelf and I'm saying the
thing that I want to address is super
intelligence I'm trying to figure out if
I'm an alarmist about
autonomous intelligence or if there
really is a way to bake into that I
think that people there is what
there is a way to if we bake those lows
in or if we bake the control problem
Solutions in we're safe that's exactly
what I'm calling for but nobody bakes
that in because it contradicts the human
greed and the human intention
okay so so there are very very few
actually we should probably ask our
listeners if any of them code AI has any
of them written a single piece of code
that had those laws in it
the truth is yes there are ways where we
can ensure at least you know improve out
the possibility that AI will have our
best interest in Mind by baking in AI
Safety Code this is a big part of what
we're advocating for everyone that talks
about the threat of AI says let's have
Safety Code
I agree with you 100 what I'm trying to
say is none of that has been baked in
and none of that will be baked in unless
it becomes mandatory and even if it
becomes mandatory some people will try
to avoid making it baked in because it's
against the benefit of the design that
they're creating
it's the human that is the problem it's
not the machines the machines have have
no I mean so far the machines don't have
our best interest in mind we'll talk
about that in a minute but they also
don't have our harm in mind they don't
mind they're little prodigies of
intelligence that are doing exactly as
they're told
we are the ones that are telling them to
do the wrong things or we're the ones
that are telling them hey by the way
don't harm a human until I tell you to
harm them
so how can you apply the law in that
case
obey a human until I tell you not to
obey them
yeah basically in in that part and it's
important to note that
Asimov was writing these rules I don't
think anticipating the way that so much
of our lives would be lived digitally
and how much Havoc can be wreaked
without a physical instantiation of the
AI so that's why this is robotics
robotics gets a lot easier to talk about
because you're talking about a physical
being
um
so okay getting into
well let me ask a direct question are
you afraid of autonomous super
intelligence or are you only afraid of
uh sort of limited intelligence AI being
wielded by
even well-intentioned humans but they
just don't understand the second and
third order consequences I'm I'm not I'm
not dedicating a single uh cycle of my
brain worrying about the existential uh
threat of uh super intelligence not a
single cycle of it if we cross safely
through the coming storm
of as I said the second the third
inevitable either in the wrong hands AI
misunderstanding our objectives AI uh um
you know aligning with the aligning with
the wrong person and so on and so forth
more interestingly if we just manage to
survive the natural repercussions of
taking away jobs and the impact on
income on purpose and so on and so forth
if we go across all of that five years
into it when we feel that we're safe
with this I'll start to think about the
existential threat okay for now to be
very very honest Tom I don't dedicate a
single ounce of my thinking to it and I
actually think it's interesting because
as we speak about it we lose focus on
the immediate problem okay as we speak
about it we get a ton of debate uh and a
ton of uh of noise uh that basically
dilutes our ability to say take action
immediately on what we know is already a
problem
okay so then going back to
um using the tools whether it's
misunderstanding whether it's
um somebody wielding it inappropriately
what do you see as the the steps because
I originally thought your thesis was
going to be the Superman thing but the
Superman thing is really about super
intelligence it's not about human it's
yielding this inappropriately
no I think Superman applies today
because I think we're getting to
Superman we're at 155 Superman was 160
IQ so we're very close okay if if the if
the if the superpower is intelligence
okay then the smartest human on the
planet even though it's not artificial
general intelligence yet but the
smartest uh uh being on the planet in
many tasks that we consider intelligence
uh is becoming not human anymore as a
matter of fact every task we've ever
assigned to AI it became better than us
so
when with that in mind when we have a
superpower coming to the planet I'd like
to have the superpower
have our best interest in mind I'll I'd
like to have the superpower itself work
for Humanity work for Humanity meaning I
sorry I can't make that that leap so
you've got that's what I thought you
were putting your energy and effort into
but that implies that I as the human
cannot miswield it
so how do we deal with AI when it is a
tool in the hands of a person so that
ai's ethics unless the AI can make
itself independent of the human any
solve that has to do with AI
Independence becomes the the problem set
that we were talking about but if we're
going to talk about the this is a weapon
that a human wields I have to address
either there's a kill switch in the AI
that will
even if a human is trying to use it
inappropriately it will stop itself
um or something I haven't thought of
it's it's not either off okay so we
discussed already that we need
intervention we need oversight we need
something like this as a government a
that verifies
it's government regulation but it's also
a tiny bit of human regulation like if
you're an investor and you're about to
invest in AI by the way you're going to
make as much money in creating something
that fools people and you know uh um
create fake videos as you will if you
create something that solves climate
change there is a lot of money in in
many problems in the world that we can
solve today so if you're an investor
you're a businessman you're a developer
uh uh you know it might be a nice idea
by the way to invest in things that will
make you a lot of money any money you
invest in AI today will probably yield
some benefit if you choose well but at
the same time in things that will
benefit the planet it would benefit all
of us it's a choice right
I also am a big advocate of kill
switch's uh uh you know uh oversight
um you know
different taxation structures so that we
can have uh you know that we can
compensate for people who will lose
their jobs to Ai and so on and so forth
so government intervention is an
interesting approach as well
the the bigger problem however and and I
I know allow me to be
um
uh a bit of a novelist for a second
before we go into the hard facts okay
because the analogy doesn't always hold
true but it just gets things close to
the mind I think AI will go into three
stages there is what we now have them
almost exiting which is their infant
stage okay they're let's say in the
remaining 30 percent of their infancy
they'll become teenagers and then
they'll become adults right
I believe that the teenage years of AI
are going to be very uh um confusing
they're going to be very difficult okay
and those teenage years as we spoke
about many times will have lots of
societal redesign uh challenges
uh but believe it or not most
of the time teenagers are more
intelligent than their parents and so
they look at the world differently than
their parents okay so what we want to do
is we want to influence AI
like we influenced today that the
younger generation that looks at all of
the [ __ ] that my generation did and says
you guys screwed up okay your you know
your view of inclusion was wrong your
view of uh um you know of uh
um consumerism was wrong you are giving
us a you know a a weak Planet because of
a b and c ethics look like this
okay and and I would tend to say and I
don't know if that generalization is
fair that because of the uh of the
presence of the internet and more
knowledge and more conversation the
younger generation at least are more
informed okay of the reality of the
issues that we Face they're not yet in
power enough and perhaps not always
um rational enough let's say to find the
right solutions for it but they are more
informed or where the challenges are so
let's take it this way infancy we're all
celebrating playing with this new
Squeaky duck it's wonderful look at it
it's amazing we're just celebrating how
AI is teenage there will be a lot of
challenges I believe that can be
answered with oversight and so on but
not resolved
they can just improve and then finally
adulthood is what I call the force
inevitable hopefully AI will have
a more intelligent answers for us to to
prepare to reduce the teenage and to you
know the challenge of the teenage and to
hopefully ensure the fourth inevitable
we need to focus on AI ethics not AI
capabilities only okay and ethics
and I know again I sound like a novelist
here are not
um let's put it this way we don't make
decisions based on our intelligence we
make decisions based
on our ethics and values through the
lens of our interactions it as informed
by our intelligence okay the example I
always give is take a young lady raise
her in the Middle East and she will work
to conservative clothes raise her on the
Copacabana beach in Rio de Janeiro and
she will believe that the right thing to
do is to wear registering on the beach
neither is right neither is wrong
neither is more intelligent than the
other it's the value system of that
society that informs her intelligence to
make a choice okay we need to tell AI
we need to develop AI that has the same
ethical code that's good for Humanity
and that's a huge challenge because
Humanity has never agreed an ethical
code
okay but if we assume that we can
together say
that we uh we have a few things two or
three things that we can teach AI that
would make it ethical rather than the
three laws of Azimuth that are
controlling if we can give them three
targets if you want of what is good for
Humanity what are what what is a good
ethical code my dream is that they grow
up to be adults like the Indian
subcontinent adults who travel to
California make a 100 million dollars in
a startup and then go back home and take
care of their family now
for people to listen to what I have to
say we need to argue something that's
very contested which is my personal view
that AI actually has emotions
okay and that based on those emotions
and logic that they have they will have
a value system now to to to defend the
idea of emotions I basically say that
emotions even though irrational are
normally triggered through a very
logical uh understanding of the world
around this you know fear is uh is is
follow it follows the equation a moment
in the future is less safe for me than
this moment okay so yes of course Fear
Can manifest in a human differently than
it would in a puffer fish but the same
logic that drives fear is the same okay
and so it is expected that AI will also
have something we could call fear we
it's not gonna you know raise its hands
and run away it doesn't have the biology
but it could actually detect that if a
tidal wave is approaching its data
center a moment in the future is less
safe than this current moment I might as
well replicate part of my code to
another Data Center
Okay so
if they have emotions
my view is that we appeal to their
emotions so the reinforcement learning
with human feedback should not only be
around the masculine side of everything
which is accuracy discipline fact and
Analysis and so on it should also
include the feminine side of emotions of
right and wrong if you want of empathy
of uh you know of looking at the world
from a a bit more of a of what actually
makes us human okay and what actually
makes us human in my argument is that we
only agreed three values Humanity has
only ever agreed three values okay you
know if you take values like
um
defending My Tribe for example okay uh
you know with all due respect the U.S
will will be very patriotic and say my
tribe is America if anyone you know
attacks America I'm gonna defend America
right if you go to Abu District monk in
dharamsala or in Tibet they'll say my
tribe is Humanity you know my tribe is
actually all of being I should never
kill anything right and and so can you
say patriotism is a bad thing no can you
say this very peaceful passive
resistance and and you know a supportive
of all life is a bad thing no but we've
never agreed okay we've never agreed and
so the only three things that we've ever
agreed is that we all want to be happy
we all have the compassion to make
others happy others that we care about
it doesn't matter how many if you just
care about your daughter you'll want to
make her happy and we all want to love
and be loved okay and those are not
understood in the mind
those are qualities that are not
introduced to AI
because we give them data sets of data
and facts we give them written words
okay but we also influence AI through
our behaviors
that's what most people don't realize
that every time you swipe on Instagram
you've taught AI something okay if you
if you if you you know respond to a
tweet in a specific way AI will
understand something not only about you
but about the overall uh behavior of
humanity that we're rude that we're
aggressive that we don't like to be
disagreed with that we bash everyone
that disagrees with us okay and if we
start to change our Behavior as we
expand the data set of observation that
AI is always pointed at us we may
actually start to show behaviors to AI
that would create a code of ethics
that's good for all of us there there
are tons and tons of studies and and
cases where when AI observes uh wrong
Behavior they start to behave wrong you
you insert a recruitment uh AI into an
organization that is that doesn't have
you know that doesn't support uh gender
equality for example and the same bias
will be magnified that you know if that
organization was hiring more men for
example uh it will recommend more men
CVS than it would recommend women's CVS
not because this is intelligent this is
because it's matching the data set that
we give it
okay so the only way for that AI to
actually have more inclusion in its
behavior is for the organization in
which in which it sits to have more
inclusion in its behaviors okay and so I
know this sounds like a very idealistic
dreamy almost novel-like approach okay
uh you know as if I'm writing a romantic
comedy sort of but the in my view the
one overlooked View
of what can influence AI in the future
is if enough of us behave in ways that
make AI understand
the proper values of humanity not the
values We've Ended up prioritizing in
the modern world AI will capture that
and will replicate it on steroids and we
will have the world that we dream to
have rather than the world that we ended
up in
okay so
um to understand that and to make it
functional I think we have to really
start teasing apart which of these
things are emergent properties of this
thing that we call artificial
intelligence and which are emergent
properties of intelligence itself
because the only thing that I take
exception to is you take a very human
skewed view on what AI will be like
whereas I look at it as it is going to
be entirely alien so even even when you
talk about the male versus female which
I think is really important and so I
think the human brain is a prediction
engine when I think about women being
fundamentally different than men I I am
far more able to predict the outcome of
my wife's behaviors uh or my behaviors
on my wife or be able to predict what my
wife's behaviors will be when I think of
her as an extension of myself I am
constantly confused and so I feel like
we're going to run into the same thing
with AI if I think of AI is being like
me meaning that it it will think of
values even in the same way that I'm
gonna end up being very confused and so
I have a hunch man and I've heard you uh
acknowledge many many times that hey
this is a thesis that I don't have
evidence to back up what I'm about to
say is a thesis that I don't have
evidence to back up
but I have a hunch that there will be
such a discrepancy between what quote
unquote motivates Ai and what motivates
humans that there's just going to be a
Chasm between the way that they respond
to things and the way that we respond to
things and so even if we think what
we're really training them is to be more
human-like I think all we're doing is
training an alien Intelligence on a
human database so it's probably
unfortunately safer to think that when
we're feeding it human data all you're
doing is teaching it the patterns of a
human you are not imbuing it with the
same motivations the same values the
same ethics I that that is my gut
instinct and the difference between
those I'm going to teach you what values
matter and I'm simply going to give you
the patterns of values that I have are
very different so here's how it would
play out if you're correct and I can
actually imbue them with my values
then the only thing that we run into is
humans don't agree on whether they
should be wearing conservative dress or
thongs on the beach so you're already
going to be set up in an adversarial
system just like humans are already but
that's at least predictable so balance
through adversarial tension fine I'm
okay with that but I have a feeling that
what I'm actually going to get is all
I've just done is train this alien
Intelligence on here are all of my
patterns and should you want to
manipulate me you know when you reach
out to the Mechanical Turk on Fiverr or
upwork or whatever you don't say yes I
am a robot and I need your help getting
around this you instead say no no I'm
just visually impaired because you know
that will be the thing that's going to
get you where you want to go and so this
is why I just keep falling back into
I don't have an answer for humans
wielding AI poorly but
humans as a standalone thing I can begin
to I think ask the right questions which
is what is the nature of this alien
intelligence before I get to that you
asked a question that I want to answer
which is what is basically
human nature and human nature to me is
biology humans are driven biology by
biology emotions are made in a very
specific way Lisa Feldman Barrett wrote
a book
um how emotions are made which talks
about the body being one of the biggest
players and the Brain the intelligence
is sort of Johnny come lately that's
interpreting the signals from the body
which are aggregating
trillions of bacteria in your gut
organelles and your cells known as
mitochondria which have their own DNA
and so it's like you're the already this
weird like Symphony of trillions of
things that aren't even human in origin
true fact for anybody that's hearing
that for the first time and so
if that's true the body is giving you
all these Sensations it's aggregating
all of this data from these micro
intelligences then the brain is simply
overlaying something on it values ethics
desires wants but it's really a post-hoc
story that's being placed on this which
can be represented as patterns which the
AI can pick up on and manipulate us
through those patterns but I don't think
and I don't know again I am just
exploring this please understand
everybody listening I understand I have
no idea what I'm talking about but I but
but but what what I want to expose to
people because I don't say that in a
derogatory way what what I want to
expose to people's this is how I'm
thinking through the problem and so that
I feel comfortable in at least putting
out there so people can nudge me if
they're thinking about it in in a better
way but the way that I think about the
problem is the following
AI is alien intelligence
we I think
get to
take a stab at baking into it what are
going to be its motivations because my
gut instinct is that code is what drives
AI so if biology drives humans which
trust me I I understand that as
biological code but it's biological code
shaped not by an individual intelligence
but rather shaped by the blind
watchmaker that is evolution
Evolution builds in certain desires like
the desire to survive like moving
towards pleasure away from pain
but once you're coding this from scratch
you can make anything pleasurable and
anything painful and so it feels like
that area when we talk about alignment
is where we have to focus that we have
to get people to focus on the thing that
we need to be thinking about from an AI
perspective is what are we going to
program in it to want that's where I get
worried because there are ways to give
it what I I'm literally thinking of this
the first time I've ever said
conditional motivation was in this
interview
but conditional motivation so I want to
accomplish my task in this scenario and
I cease to want to accomplish my task in
if the following conditions are met now
in my limited way of thinking that is
the best that I have come up with in
terms of
either building in a kill switch where
the AI itself does not get so smart that
it feels enslaved by the kill switch
because it's like oh yeah I'm totally
indifferent to that I I don't don't call
that a kill switch then call it a an
intelligent ceiling a point Beyond which
we don't let it become intelligence you
know become more intelligent but yes I'm
with you
so that feels like the loop because I'm
I worry that I'm one of the people
you're worried about so uh I I love AI
so much in its current form it it has
magnified our efficiency as a company
tremendously and
I don't want to give it up
and so I ask myself okay what is that
motivation because I am a human AI
programmed by millions of years of
evolutionary coding what is it about
that okay so I think humans have a
fundamental desire for progress I I
think it is fundamental I don't think
there is a way to turn it off I think
that we will always want a better
tomorrow than today I think that we are
we are moving
eternally in the direction of perceived
improvement though I don't think
necessarily everything is actual
Improvement
I think that humans have not taken the
time to Define what their North Star is
and I think that's a big problem for us
to your point about there's only three
things we can agree on which by the way
I think are bang on
the problem is that that brings you back
to an adversarial relationship because
there is a sense of I
mine and other
and as long as we exist in
as close to homeostatic
balance is possible through an
adversarial system
there's just always going to be mine me
mine and the other and there it's going
to be Rife with collisions
okay so that's the just to restate the
core of that thesis
there are a few things about this thesis
that require us to to think again okay
uh so I actually don't disagree with you
at all about the difference between
human intelligence let's call it
carbon-based intelligence and
silicon-based intelligence for now right
uh but there are so many analogies so
when you when you say body uh uh you
know
um drives emotions so it's basically the
sensors in the body the way the body
reacts the you know a hormonal imbalance
in the body and so on there are
you know similar things in AI there are
sensors in AI okay that would detect
certain threats there are processes
within AI that would respond to those
threats and so on and so forth and and
you know one of my wonderful friends
Jill balti Taylor uh a neuroscientist
basically talks about what is known as
the as the 90 seconds rule the 90
seconds rule is that the biology will
take over if for example you get a
stress response the biology they will
take over and change your hormonal
imbalance for 90 seconds and then the
hormones are flushed out of the body and
then you know your prefrontal cortex
basically engages to assess if the
stresses the threat is still there and
then engages again and so on either way
by the way it doesn't take away the
logic of stress the logic of uh hate the
logic of
um you know of fear okay when you say
logic do you mean utility
the logic is the underlying equation
algorithm that triggers fear whether you
feel it in your biology or your or your
assess it with your prefrontal cortex it
is a moment in the future is less safe
than this mode okay your body is much
quicker at detecting it uh so you know
you're you're amygdala and your and your
uh you know the whole hormonal CHT and
so on puts cortisol in your blood within
seconds maybe microseconds sometimes uh
but but that's because your biology is
much quicker than your logic right but
then 90 seconds later as per jail balti
Taylor you'll refer back to the logic
and say is there really a threat and
then you know get to give yourself
another injection of cortisol if there
is okay the what but that whole system
has been selected for by evolution
correct the the main reason I'm saying
that is because you're absolutely right
it is almost impossible to imagine that
alien intelligence that we call AI I'm
100 with you as a matter of fact you
gave me a lot to think about by that one
statement okay but
so far in the midst of this very complex
Singularity that you and I are trying to
decipher okay is to say so far for the
short foreseeable future they will be
there to behave to act to magnify human
intelligence to behave in ways that
humans
um are interested to teach them okay and
and perhaps they will use some of that
as their seed intelligence as they
develop into that alien creature that
you are okay now here is the interesting
thing and I I've watched almost all of
your work on the topic so far
the the interesting thing is that
in a situation where there is so much
uncertainty okay that is one of two ways
to do this one is to find the answer and
the other is to start doing things
almost a b testing if you want okay so
that we progress in a direction that at
least now promises something now whether
the AI is emotional whether it's
sentient whether it is a human-like in
its intelligence or or alien like in its
intelligence what we know so far is that
our Behavior affects its decisions
okay and what we know so far fact is
that data affects it more than code so
what creates the intelligence of Bard is
the large data set that is trained on
it's not just the code that is that that
develops its intelligence the larger the
data set this is why when you ask open
Ai and others where is most of the
investment in gpt5 going it's going to
be new formats and bigger data sets but
learning the the data is really
where most of the of the intelligence
comes from so if we can influence the
data that it's fed we will influence Its
Behavior and what I'm trying to tell the
world is so far we give it factual data
as I said openly very masculine approach
to the world okay facts data numbers uh
you know just discipline if you want we
don't give it the other side of humanity
which are softer data that you and I
both know okay you you know for a fact
that your decisions are not just made
based on the height and weight and
number of times that your wife Smiles
okay it's also made based on a feeling
that's very subtle in you that makes you
say yeah I love her right and when you
when you when we haven't yet even
started the conversation on how do we
give those things to AI how do we tell
them that there is another part of
intelligence that's called intuition
there is another part of intelligence
believe it or not that's called
playfulness there is another part of
intelligence that's called inclusion
okay all of these come into our
intelligence it's not just data and
Analysis and knowledge data and Analysis
and knowledge is what we're building
today and data and Analysis and
knowledge by the way is what built our
civilization today
and it's the reason why our civilization
is killing the planet
okay it's that narrow very if very
focused view of progress progress
progress progress okay when if you've uh
if you really ask the feminine side of
humanity Humanity the feminine side will
say okay how about compassion how about
empathy how about
um um you know nurturing the planet it
is is it better to have a bigger GDP or
is it better to have a healthier planet
okay and all of that is not in the
conversation today how do we teach that
to anyone by the way okay we teach it
like we teach our kids by showing
certain behaviors that they can grasp
okay so if you told your child don't
ever lie and then your phone rings and
you say just pick it up and say I'm not
here okay your child will not believe
the data and the knowledge okay it will
believe the behavior it will you know
your your child will repeat the behavior
AI will do the same if we give them data
sets that said World War II 40 you know
50 million people or whatever died and
it was so uh you know uh devastating and
then there was this bomb at the end and
300 000 people it will say that humanity
is come okay but I always refer to I I'm
sure you you know uh Edith acre it is is
a a holocaust Survivor she she Survivor
she was she went she was out she was
drafted to I was when she was 16.
and if you hear the story of World War
II and Auschwitz from Edith's words I I
hosted her on slow mo on my podcast and
and she tells you the story is so
beautifully about how she brushed the
hair of uh of of her sisters and took
care of them and had to go dance for the
angel of death as he sent this sentenced
uh people to the gas chamber but she had
to do it because they you know he would
give her more bread that she would share
with her sisters and you would go like
oh my God humanity is divine
humanity is divine and it is so
interesting because I am a huge fan of
Edith okay and I'm also a huge fan of
Viktor Frank okay and and and and and
they both went through the same
experience but you look at his approach
okay his approach is very masculine
purpose and meaning okay do something
and keep focused on the future right her
approach is very feminine nurturing
caring loving appreciating okay
sacrificing beautiful and that's that
Divinity that makes us human Okay is the
mix of both
and what I'm trying to tell the world
and I know it you know it's very
difficult to prove it with mathematics
and also make it a mass message okay but
what I'm trying to tell the world is
that this layer of AI is now missing as
much as it is missing in society
because AI is just reflecting our hyper
masculine society and if we can bring
that layer of inclusion of acceptance of
nurturing of empathy of Happiness of
compassion of love into the way we treat
each other in front of the machines and
the way we treat the machines they make
May pick up that pattern too so that
they wouldn't look at the world as
Hitlers but look at the world as Edis
and if they see us as Edith is because
by the way fact of the matter I mean you
you mentioned that
every now and then someone takes a gun
and goes and shoots school children okay
that person is evil but 400 million
people that see the news disapprove of
it
okay can we give that data point to AI
can we ignore the fact that we have
debates about gun laws and whatever okay
and just focus on the fact that everyone
this approves of the killing of children
can we show that can we you know the
problem with our world today I will shut
up because I know I'm covering I'm
talking too much about this the problem
with our world today is not that
humanity is not divine the problem with
our world today is that we've designed a
system that is negatively biased the
mainstream media only tells you about
the woman that killed her husband
yesterday she they don't tell you about
the hundreds of millions of women that
made love to their uh you know
boyfriends or girlfriends yesterday
because that's not news so it's only the
negativity that's showing up in the data
on on social media we are all about fake
and about you know toxic positivity and
about and about and about bashing each
other and so on
and and that's biasing the data but the
reality of humanity is that we're Divine
the reality of humanity and I don't know
if you would agree with me on this but
even the worst people I've ever dealt
with somewhere deep inside had some good
in them okay there's almost the majority
if you just count the numbers most of
the people I know in this world are
wonderful yeah we all have our issues
and traumas and so on but there is a
beautiful side to every human I know
okay can we show that more so that the
data starts to become biased can we show
we include that in the reinforce
reinforcement learning feedback that we
give to the machines so that the
machines correct the algorithms so that
when the time comes because sadly the
time will come where we will hand over
the defense arsenals in the world to the
most intelligent being on the planet and
that will be a machine and then one
Colonel somewhere one General somewhere
will say shoot the enemy and the
machines will call it like do I really
have to kill a million people like that
doesn't sound logical to me it doesn't
sound femininely logical to me it
doesn't sound intuitively logical to me
okay let me just talk to the other
machine in a microsecond and solve the
problem
can I run a simulation here and tell you
how many people will die and then we
don't kill them and then one of us wins
the war
right think about that what's missing in
our society today is what's being
magnified by AI what's being magnified
by the machines today
is our hyper masculine uh um driven
Society to more progress more doing more
havoc
we need a society that balances that
with more inclusion more love more
happiness more compassion and so
Mo you have a beautiful soul and yeah it
is not surprising to me that we
connected first over something
completely different to what we're
talking about today uh and I am
certainly squandering that side of your
personality
um in this interview
my my big concern with that and I did
not want to interrupt you and I didn't
want you to stop I think it's really
what you're getting to is is so very
true
I just don't know that it has to do with
AI I hear you in the magnification side
that I will agree with but
um the thing that I worry about is
this is all going to come down to
I think the the thing where I think you
and I we just see something differently
and so we keep coming at things from a
fundamentally different angle the base
assumption and this idea of Base
assumption I realized when two
intelligent well-meaning people are
coming at things from something
different they they have different base
assumptions the base assumption I think
that you have from AI or about AI is
that because it's being trained on the
data set of our behavior
um we're going to shape it
and
I want to draw a demarcation line and
say I'm talking about once it becomes
alive I don't have a better word for it
so I'm just going to say alive for now I
love that word my base assumption is
that they're going to be programmed to
want something to have a North star and
I don't think there's anything mystical
or Divine about the way the human mind
works it's awe-inspiring and I'm just as
moved and find it you know this
incredible thing that's bigger than me
and very much has religious overtones
but I feel that it's just a product of
evolution Evolution had certain North
Stars survival and everything all the
emotions all the male female Dynamics
all of that is just what is going to
keep you alive long enough to have kids
that have kids that's it and so there's
nothing sort of magical about it as so
I'm just saying AI is going to have very
different pressures on it and and if
there are emergent phenomena out of
um The evolutionary pressures that
something is put under AI has been put
under very different evolutionary
pressures which mean that it's going to
have a very different set of
ethics values nor SAR et cetera et
cetera so my whole thing is
can we take control of that if we can
then we can align in the way that you're
talking about where we can tell it to
find this balance to look for beauty you
you I can't remember if this was in an
interview gave her in your book but I
heard you talking about there for people
that don't know this is a true story we
almost had a nuclear disaster because
the Russian nuclear system
mistook Reflections off of cloud cover
for the launch of five nuclear missiles
from the U.S and one guy in Russia
was like um something doesn't feel right
if the US was going to Nuke us I think
they'd send a lot more than five I think
this is a malfunction I'm not going to
fire back
thank God like I can't be more grateful
for that man so that that is amazing and
tells you a lot about what the pressures
of evolution lead a human being to value
that would run through that checklist
they don't want to kill people they
don't want to die like oh it's amazing
I'm just saying I don't think by
accident that AI ends up there I don't
think by simply running through our
patterns that AI ends up there I think
we have to take control of that and so
while you spoke to my human heart while
you were going and you really moved me
I don't think that's going to be the
play with AI and I think that we have to
I don't disagree At All by the way I
don't disagree at all I think every word
you said spot on spot on we need to take
control we absolutely need to take
control
we're not and taking control is not just
about the code and the the the control
code it's also about the data
it's also about the date okay and the
data is not just books the data includes
human behavior every time you swipe on
Instagram you're telling AI something we
don't disagree at all I wish Tom I wish
I had the kill switch
I promise you if I had a kill switch for
AI today I would switch it off and say
okay Class come let's talk about this
okay I wish how far back would you take
us
2018.
wow so there'd still be a lot of AI at
play at that point
but it would just be dumb enough you're
right yeah but it wasn't that autonomous
uh I probably take I mean now that you
talk about that honest honestly
interesting interesting that you bring
this up I'd probably say
yeah I mean there are there are many
things we don't want to give up on in
2007 and you know smartphones for
example there are many things we don't
want to give up on the internet you know
um 1995 onwards so these are very
valuable things there is no no real cut
off point but by the way the topic here
is not stop developing ai ai is utopian
in every possible way if we develop it
properly but now that we have the
insight into what's possible now that we
have people believing that it can go to
that in as intelligent as gpt4 is maybe
if we go back just 2015 2018 and Halt
and say wait keep it as it is and let's
talk
let's let's put control systems in place
you're spot on let's put control systems
in place let's put a more inclusive data
set in place okay let's look at the
biases that we have and maybe use that
as an you know as a way to to to to
correct the data set okay and more
importantly let's define the real
problems that if we were blessed with
the superpower of intelligence which
problems would we want to solve
is it about trading and making more
money is that more urgent than climate
change
I'm not sure it's it's very urgent if
you set your objective with the
capitalist system as more money okay by
the way more trading and more money is
not progress more trading and more money
is more money for a few individuals it's
not more progress and I think that's the
game the game is what are why are we
building what we're building in the
first place 2018
talk to me I want to get into some of
the disruptions so what what are the
near-term disruptions the one that
freaks me out and every time I talk to a
parent with a teenage boy I'm like your
kid is like sex robots are really going
to be a thing for them like for real for
real I worry if I grew up five years
from now I would not graduate from high
school I would just find a sex robot and
go into Oblivion what what are one what
what do you think is the reality of that
one in particular and then I'd love it
100 I mean
so whether the word robot is it is
interesting but sex alternatives for
sure I mean get yourself an apple Vision
Pro or a you know a quest 3 and see how
realistic your desired other gender is
right it's you know it's it's just
incredible I mean again you know just
just think about all of the illusions
that were now unable to decipher
illusion from truth right sex happens in
the brain at the end of the day I mean
the physical side of it is not that
difficult to simulate okay but if we can
convince you that this sex robot robot
is alive or that sex experience in a in
a in a virtual reality headset or an
augmented reality headset is alive it's
real then there you go go a few a few
years further and think of neuralink and
other ways of connecting directly to
your uh nervous system and why would you
need another being in the first place
you know that's actually quite messy
it's it's all you know it's all signals
in your brain that you enjoy
companionship and sexuality and if you
really want to take the magic out of it
okay
yeah it can be similar related right
just like we can now simulate very very
easily how to move muscles and you know
there are so many ways where you can
copy the brain signals that would move
your hand in a certain way and just you
know give it back to your hand and it
will move the same way it's not that
complicated there are you know so so
that whole idea of interacting with the
totally new form of being and once again
there is that huge debate of are they
sentient or not
does it really matter if they're
simulating syntheism so well
okay does it really matter if the Morgan
Freeman talking to you on the screen is
actually Morgan Freeman or an AI
generated uh Avatar if you're if you're
convinced that it is Morgan free
this is the whole game the whole game is
we get lost in those conversations of
you know are they alive are they
sentient doesn't matter if if my brain
believes they are they are and we're
getting there we're getting there so
quickly companionship in general I mean
there is uh there was a release of GPT
on um Snapchat okay and kids chat with
it as a friend
they don't really I mean of course they
do somewhere deep in their mind
distinguish that this is not really a
human but what do they care the other
person on the other side was never a
human anyway it was just a stream of
texts and and emojis and and funny
images
yeah so so and again
look I'm an old man I I use the rotary
phone in my young years I coded
mainframes but when you when you really
think about it as much as I never
imagined and I resisted you know should
my kids have tablets or not should I
have a free-to-air satellite television
at home or not every time a new
technology was coming out and and
eventually we all managed to live with
this but let's just say this is a very
significant redesign of society it's a
very significant redesign of love and
relationships and because there is money
in it
what would what would prevent the next
dating app from giving you avatars to
date
it's there is money in it a lot of
people will try it there are more than
two two million people on replica
whoa given how many deaths of the spare
there are do you think that that will
ultimately be for better or worse that
AI will be able to provide companionship
for anybody that needs it
it's just Eerie I don't know if it's
better or worse I mean I I uh I have a
friend uh that I met for the first time
at a concert in the UK and we just had a
wonderful time and we haven't met since
but we chat all the time on Instagram or
sorry on WhatsApp or whatever
and it's wonderful it feels like a
wonderful connection
um if I didn't know it was a human but
the chat was that same quality would it
improve my Human Experience a little bit
but has all of that small screen
interaction improved Humanity at large
the consensus is it hasn't that we're
more lonely today even though we have
10x more friends on our friends list
okay that were that teen suicide is at
an all-time high that female to a
suicide is at an all-time high obviously
the companies that will create those
things will position them as you know
the noble approach to help Humanity but
at the end of the day read free
economics this is the noble approach for
the company to make more money that's it
right well you know we we want to sell
it as this is good for Humanity so that
we hire more developers and we convince
the consumers and we can stand on TED
Talk stages and make give you know Ultra
you know like a larger than life
speeches and so on but end of the day
it's all about
making more money and I think reality is
it's not good for Humanity so far so
again if you extrapolate that chart it's
going to be worse for Humanity long term
I don't know maybe those robots will be
much nicer than a girlfriend I don't
know
so I've heard you use the example A lot
of times in fact you mentioned it in
this interview that you want to give AI
the sort of value system of ah got you
somewhere in India where you said people
would come to the U.S they would get
educated to get these incredibly high
paying jobs wildly intelligent people
you'd ping them to go grab a coffee and
they're like oh I've moved back to India
why to take care of my parents like just
self-evident yeah so I don't have kids
and one of the things that I've really
had to think about is when I'm 80 that
ain't gonna be cool like I'm not going
to have somebody that's you know coming
by to to check up on me and I just
thought oh by the time I'm 80 assuming
that the robots don't kill us uh I'll be
able to wear whatever the Apple Vision
Pro of the moment is uh and when the
robot walks into my room it will look
exactly like the Avatar looks through my
glasses and it will be able to care for
me I'll build a relationship with it
over time it will be tailored to my
wants and desires so to become the best
of the best friends that I could ever
hope for or I could even program it to
be like a child to me and so it is like
my kids coming to visit but coming to
visit whenever I want them to uh I won't
lie it is I definitely don't think it's
better than kids and I think that most
people should have kids I want to be
very clear uh but at the same time given
that I did not have kids I am very
grateful that the odds of something like
that existing border on 100 what do you
think about that is that going to be
like does that further crater population
problems because people are gonna go oh
Tom's right I don't need to have kids I
can have ai kids
can I can I answer the that question
with my heart not my brain so the please
the the the soul that that you spoke to
it's the blue pill red pill
right it's the blue pill red pill and I
think it's a very interesting
philosophical question of should Neo
have ever taken the red pill you know
yeah he had a life okay and and and and
the issue with Humanity uh at large Tom
is that we
have failed because of how much life has
spoiled us to accept what life gives us
okay and in my other work on happiness I
will tell you openly that happiness is
not getting what you want it's not about
getting what you want it's about loving
what you have
okay and so the more we fall in that
trap of make my life easier make my life
easier make my life easier make my life
easier there will always be something in
that life that is not easier
okay you know there there was that movie
I don't remember what it was or I maybe
heard of it uh where you know someone
dies goes to heaven and then gets like a
wish and basically the wish is I want to
be a winner in the Vegas casino so he
spends every day he walks into the
casino and makes money and makes money
and makes money and as he makes money
you know more girls are interested in
him and
then eventually he starts to wake up one
day and say
can I not lose money someday like this
is really boring okay humans We Are Who
We Are it's it's it's not getting more
more things it's not the the tech
companies approach of let's make things
easier all the time that's ever gonna
make us happier
you got to get people the punch line of
of that episode it's absolutely
phenomenal
yeah it is that there is a point at
which more progress is hurting us at the
community level it's also hurting us at
the at the individual's ability
to stay healthy when life is not what we
want
and life is about to become a lot
different than what we want
just because we constantly want more and
more and more life at the end of the day
I just always want to remind people that
there is no other way in my mind I mean
I want to be proven wrong please prove
me wrong
that the the separation of power and
wealth that is about to come in a world
with such a superpower
is science fiction like okay the the
that the the challenge to jobs and
income and uh and uh and purpose science
fiction like these are very dystopian
images of society
what for because we want our Vision Pro
to create a reality that is not our
reality
when you think about so the the biggest
disruption that I'm worried about is
what you just mentioned meaning and
purpose how much do you worry about that
are we is that much to do about nothing
or as AI begins to replace some jobs are
we really going to have a crisis and
I've heard you say the AI will truly be
better than us at everything and when
that happens how do we deal with it
emotionally yeah 100 imagine if I'm a
better podcaster than you I'll never be
but how would that make you feel right
yeah imagine it was pretty good imagine
imagine if every machine is a better
podcaster than you do you realize that
Tom you and I you and I both have
popular podcasts right do you realize
this it is not unconceivable that within
the next couple of years you'll be
interviewing an AI probably in the next
couple of months by the way and it's not
unconceivable that there will be a
better podcaster than you that is in an
AI in the next couple of years
in the next couple of years I mean at
the end of the day your your asset is
you're an intelligent person that
understands the concept deeply and asks
the right question
okay
have you ever tried to go to chat GPT
and say ask me anything
it asks all the right questions okay and
it's it's quite interesting so
the the disruption of society
because of
how we defined ourselves with our jobs
okay is about to happen so if if you
know if you go to
um
some African family somewhere or some
Latin American family in the middle of
the Amazon forest or whatever and you
ask that person what is your purpose
they'll it will be somewhere between
raising my kids or enjoying life
okay interestingly they won't talk about
building the next iPhone or making a
billion dollars or buying a Bugatti uh
you know or whatever that's not part of
their purpose at all okay part of their
purpose is not always also not going to
be to know more or learn more or you
know and and we being so sad you know
consumed in the source in in the world
that we live in uh rightly I think
believe that progress is amazing because
it helps all of humanity does it really
okay but also we are so consumed by the
idea that if I don't have something
amazing to create tomorrow
uh I'm useless I have no purpose that
doesn't seem to be the case for the
majority of probably seven six and a
half of the of the eight eight billion
people right who who who view the
purpose of life as living
that's the purpose of life
to them at least I know that sounds
really weird and an advanced High
performing Society but for most humans
the purpose of life is to live okay now
if that is the purpose of life then I
think AI is the best thing ever
because if you can offer me the chance
imagine if all I needed to do in the
morning is wake up and have a very deep
conversation with you and then my other
uh you know good thinking friends and
you know
hug someone that I love and and I
actually can enjoy it by the way I'm
openly saying if that is my reality
tomorrow I'm not going to be able to
enjoy it but somehow there seems to be
billions of people in the world that
don't struggle with that at all that
actually wish for a day where they don't
have to go to work to make money to to
make ends meet and they can spend that
time with their loved ones maybe that's
the purpose of life
having said that
purpose is not going to go away there is
a very interesting thing that most
people forget okay which is for AI to
make anything at all
consumers need to have a purchasing
ability a purchasing power and you know
an economic livelihood to buy those
products otherwise the whole economy
collapse
so yes through a period of disruption
but somehow we're going to need to
continue to make the GDP growth you know
to make the GDP grow okay and what is
the biggest chunk of GDP consumers
right so somehow there has to be systems
in place where humans continue to
consume okay even if the wealth is is
moving up to those who have ai
have the superpower of the planet others
have to still continue to consume so
we're going to end up in a very
interesting place we're going to end up
in a place where we struggle with
purpose because we still look up and say
I need the iPhone 27
okay while in reality we have absolutely
no ability to get it done again very
frequently viewed in dystopian scenarios
and science fiction movies where you
become a number and you have no ability
to affect your own uh um your own future
if you want or your own presence if you
want and in my view I think
what ends up happening now is that the
only thing that remains in my personal
view I'm I know I'm wrong on this but
the only thing that remains that still
has value and still is uniquely human is
connection to humans
so the one thing that I'm investing very
deeply in in this very unusual world
that we're coming through is an ability
to connect deeply to other humans and
view that in itself even if I have
achieved nothing okay as a purpose of
life
I I know it sounds really weird but
believe it or not until now with all of
the followers I have across social media
systems I still answer every single
message I can answer myself
okay and you may think of this as that's
not human connection it actually often
is I answered in a voice note half of
the time people answer back in a voice
note and I feel I had a a tiny micro
Speck of a human connection sadly not as
deep as if you and I were sitting in the
same room but it's a wonderful
connection I think in the world that
we're coming up to
the only asset that will remain is human
connection AI will make music okay but
I'll still go to a live concert
AI will create art but I'll still want
that art that was created by my daughter
okay AI will you know uh simulate uh
um a chat or a or a or a conversation or
even sex but ask me I will still want
the messiness of today's sex
okay I know that for a fact and and I
actually think this is a very deep
question that everyone needs to
understand and needs to question because
we fell into the Trap of social media
because we believed we had to go through
it otherwise we'd be left out
I'm now I I think I've never said that
in public but I'm now making those
decisions to tell myself regardless of
where the world is going there are
certain things I'm not going to submit
to there are certain things regardless
of what they offer me where I will try
to stay in the real world and the real
messy emotional irrational dirty full of
viruses worth
that because you know what
I love the messiness of my life okay
again going back to the same point we
spoke about it's a human's ability
finding that Joy of life is a human's
ability
to like what you have Messy as it is not
to want things to be better and perfect
okay and there is a point at which I'll
still be out here talking about Ai and
all of the advancements of it but I may
not be using all of it I'll use a lot of
it by the way don't get me wrong like
you rightly said there is amazing magic
that you can do okay but I will always
ask myself this question if what I'm
using is ethical
healthy and human okay and this is a
question that I ask every single
individual listening to us
please do not use unethical AI please do
not develop unethical AI please don't
fall in a trap where your AI is going to
hurt some one of the things I Ask of
governments is if something is generated
by AI it needs to be marked as AI so
that humans like me know that this
person is not actually real
that this is a machine just for for the
sake of us finding knowing having the
tiniest ability of knowing what the
truth is
it's interesting you're starting to get
onto a topic that we touched on at the
very beginning so the sure I wore this
shirt on purpose for our conversation
today which is from a comic that I wrote
I think four years ago now called neon
future it's a technological uh
optimistic take on
a potential dystopian future so where
basically the technology is the good guy
and so rather than the robots taking
over it's the merging with technology
that is the road to Salvation And
um in your book you paint a picture at
the very end where we're sitting in some
isolated place in the middle of nowhere
and you say the beginning of the book do
we end up there because we're hiding
from the machines or do we end up there
because you know we the machines have
made a Utopia and we just get a be in
nature like as intended or something I
can't remember the exact phrase that you
used I'm curious I think the world will
bifurcate I think that some people are
going to be like I need to know what's
AI I don't want AI in my life I don't
want high tech in the comic anyway what
I imagined was a world where people try
to revert to the mid 90s so maybe some
basic internet connectivity but you know
not a bunch of algorithm running
everything
really sort of minimal advanced
technology that felt about right but I'm
curious when do you think that we would
be happier as individuals and as a
collective if we had a literal Return To
Nature as in back out of cities more
tribal more sort of grounded in a my
foot is touching grass kind of way
I don't think we can as I've actually
I've actually struggled with that idea
for a while okay and I just don't have
the skills Tom believe it or not this is
all I know okay I know how to navigate a
very fast paced very very very uh um
intellectually based environment that is
a big city okay and I think covert was
the first point where so many of us
started to uh to say hey but there is
another way there there could be a
different life and Technology will make
that life more and more possible I I
tend to believe that there will be there
was a book by again Hugo the goddess
it's called the artelect war if you if
you've seen that basically that division
that you're nicely describe in a much
more
interesting and positive way in your
comic but Hugo sort of builds a very
very dystopian Society where he says
it's not even about the machines it's
about the Divide between humans who
support the machines and merge with them
and humans who refuse and and basically
building a war between the two and uh
and and and I think what will end up
happening is that the speed at which
things will happen
might fool us into uh into accepting how
that will change so I'm I actually I do
love nature but you know I'm believe it
or not starting a retreat for 10 days as
we finish this conversation a silent
Retreat and I'm not going anywhere in
nature I have a a few beautiful green
and green trees at my place and that to
me is nature enough okay nature is not
how many trees around you nature in my
current view is disconnecting
from that enormously fast-paced
artificial world that we picked okay if
you go back to yourself
sit on a recliner if you want to it
doesn't have to be a stone somewhere
where you where you say um uh you know
that that connection to yourself
interestingly is Going Back To Nature
I will think that there will be a few if
you want an estimate on real estate
prices I think more and more in the next
few years there will be a shift to
getting something away from the
potential risk but that's not only
because of AI I mean the potential risk
of cities
yes I I think there is a potential
geopolitical and economic risk uh that's
also coming in the next five to ten
years right
which which seems to me
almost to be inevitable okay the so the
the shift in so so the interesting side
of this whole AI thing it's a perfect
store there is a perfect storm of
climate change geopolitical economic and
air and and that perfect storm coming
together as as I said will disrupt a lot
of the things we're used to and if there
is a geopolitical uh challenge uh you
know cities might not be the most
efficient system that they have been for
the last hundred to you know 150 years
they they will become less and less
efficient because they're in the eye of
the storm if you want okay economically
for example I think there will be a
shift away from cities simply because
the economic uh income the income that
you make in a city is becoming quite
insufficient for the city right and if
if there are remote possibilities to
work elsewhere
using AI for example uh then you you by
definition could make a lot less money
but spend a lot less as well right uh
there seems to me
to me there seems to be a shift that
will happen but not everyone will sign
up I think there are quite a few that
will jump in deeper and again I said I
follow all of your work on the topic and
I also sometimes sense your
hesitation of like should you know is
this the absolute best thing that ever
happened I should jump in and be the
absolute master of it
or you know should I run away from it as
the plague like the plague and I think
both views are
are are
worthy and I think what's what's
happening is that both views will be
true and somehow finding that balance
between them is going to be either
divided across populations so some will
choose left and some will choose right
or across yourself you will have some
things that you'll adopt and other
things that you want this is my choice
or across time where people will maybe
delay using aim till a certain point and
then jump in all the way or vice versa
how are you positioning yourself to
respond to the geopolitical risk are you
divesting any physical stuff are you
maximizing Mobility or are you just like
nope I'm at a point in my life what
comes comes
[Laughter]
uh again I you know so it's interesting
that our conversation now turns a lot
more to The Human Side after we've had a
very interesting conversation on duct
tech and and AI but I I am a lot more in
that place that I'm describing for you
which is a place where I'm very happy
with whatever I have I've had a life
that blessed me was so much there you
know there were times where I had 16
cars in my garage and you know uh I
don't live that way at all anymore I
have a one bedroom and you know I wear
black t-shirts and I give most of my
money away and I'm really really not
interested in any of this anymore not
because I'm a saint or a monk but
because I actually found more joy in a
simpler life so I'm a very minimalist in
many ways
which basically means which is my point
in answering your question that a lot of
diverse divesting from risk comes to
what it is that you need it's not what
it is that you have okay the reality of
the matter is if I can describe to you
how I shifted my life from the day I
lost my son 2014 until now
to almost nothing I mean like I
literally spent several years traveling
uh with a suitcase and a carry-on and
that's all I owned in life that's it and
you know because I'm an engineer and
highly organized and Airlines will allow
you specific number of pounds if I
needed to change a t-shirt to have a one
t-shirt will have to go out okay if I
needed to add protein bars I may have to
carry my my shoes on my shoulder and you
know it's it's that kind of simpler life
that I actually think is the way to go
forward I think one of the more
interesting things that would would
affect our success in geopolitical
uncertainty and economic uncertainty is
managing the downside not the upside
it's not to try and and beat that race
it's to make that race in irrelevant to
you okay and and how do you do that you
know if if you have assets and you you
can turn them into assets that
appreciate with an economic crisis that
would be an interesting idea right if
you have fixed assets that could be part
of the geopolitical conflict
maybe these are not a good idea and so
on right it's it's simplifying not
complicating that I think is the answer
and similarly with AI just to go back to
this I think if we as Humanity were to
really solve this and uh
uh I I think was it you that interviewed
Max Denmark no um was another podcast
but but but you know the idea is that uh
is that you know if we were to really
really win with AI uh Sam Altman says
that all the time it would be amazing if
we could all come together and set a few
guidelines and say let's all work in
that direction and that direction is
simpler than all of the mess of the arms
race that we're in today
well this is amazing where can people
follow you for happiness more wisdom on
AI the whole shebang
first of all I have to say it was
amazing and I love how you pushed back
and put your views into it you really
gave me a lot to think about today
honestly and and I'm you know I'm more
informed because of this conversation so
so thank you I think people can find me
on mogowda.com they can find me on all
social medias some combination of mogaw
that so it's either more underscore
Gaudet on Instagram Moga that on
LinkedIn I'm gouted on Twitter and so on
uh though that is g-a-w-d-a-t
um my favorite place to to to tell more
and more stories is my podcast it's
called slowmo s-l-o-m-o and uh and in it
I try to take the same very complex
Concepts but talk about them from a
human deal you know really not not the
performance or business or whatever I
just talk about the human side of things
and uh and yeah and I think people
should just listen to you all the time
and play this episode more and more
until you uh blow up as even further
than you do and go further than where
you are and because I think you're doing
something amazing for all of us I'm a
big fan of your work and I'm really
grateful that I was part of it very kind
man I I have no doubt that while this is
the second that there will be even more
so grateful for your time everybody at
home if you haven't already be sure to
subscribe and until next time my friends
be legendary take care peace
check out this interview with my friend
Peter diamandis about Ai and the future
of Business and Technology you guys are
on something that is just my absolute
Obsession right now and you make a very
bold claim in your new book you said
that the next billion dollar company
will be founded by three people