Truth About Elon Musk vs Sam Altman: AI, Immortality, War, Power & Simulation Theory | Bryan Johnson
Rtv-W7IE4Mw • 2024-03-19
Kind: captions
Language: en
you're on the precipice of an artificial
super intelligence Revolution that will
alter your day-to-day life in ways that
none of us can quite imagine but no
matter how much human creativity is
unlocked by AI or how disruptive it will
be to the traditional economy no change
would be as dramatic as unlocking
immortality today's guest believes the
rate of progress in AI means our only
goal should be to stay alive long enough
for AI to take over he believes that
algorithms May hold the key to living
forever here we go for round three with
Bryan Johnson do you think that we as a
species should hand over our
decision-making process entirely to AI
when it's ready yeah I think we we need
a new form of cognition for ourselves
and paired with AI for the species and uh
if you look at homo sapiens as an
independent source of intelligence we're
pretty good at some things we're
terrible at other things and the things
we're terrible at include our
willingness to engage in
self-destructive behaviors or planetary
destruction or even destroying each
other instead of trying to build
something uh that is much superior to
our own uh self-interests and so
currently we are the best form of
Intelligence on this planet but we need
to evolve to something Superior so I've
tried to demonstrate with blueprint that
yes I think that as an opening conversation we should be
mindful that it may be the time that we
graduate to a new form of intelligence
okay those sound slightly different to
me so what I took away from a ton of
research on you is that basically to your point we know that we can't think
through these problems well we know that
we're on the cusp of artificial
intelligence that is almost certainly
going to be way more powerful than us
unlike anything that we could ever
imagine that immortality becomes just
one of the problems that we'll never
solve for ourselves but AI could
potentially solve for us I think you've
even said Evolution gave us man and man
will create God yeah so if we're taking
that lens on AI are we as humans
evolving into something or have we
created the thing that for our own benefit I think is how you see it we would want to sign over our decision-making process to so I think your question is
correct I'm saying that if we lay out
let's say that we do 10,000 things as
individuals in that we seek out food we
seek out shelter we seek out love we try
to solve problems we learn we we list
out all the things we do as humans and
then you start applying
computational intelligence to each one
of these things and say can an algorithm
do this better than we can you know can
it do uh can it perform mathematical
computations better than humans yes it
can you know much better than we can in
our heads much better than we can on paper
it's faster it's better and then if you
size up and say of the things that AI
can do better than humans now what
remains where we humans are better than AI and what is the time frame in which that gap is going to close so that's one way to
look at it and it's reasonable to say in
any one of those functions AI is
probably going to be better than every single one of us at every single one of
those functions it's hard to short that
bet and then if you say okay so what
things might be imminent and what I was trying to prove with Blueprint is
one of the most sacred things about
being human which is my autonomy and
free will to make decisions about what
I'm in the mood for or what I want to do
my preference in any given moment and I
started with health and wellness so I
said Can an algorithm decide what I eat
can an algorithm decide when I go to bed
can an algorithm decide what exercise
protocol I should do and I ceded control and I said yes I'm going to let the AI do this and I'm going to believe
or trust in this process and I'm going
to see the results can it actually be better at being me than I can myself and
three years in the answer is yes that I
would much rather have an algorithm take
care of my health and wellness than I
would myself I am prone to
self-destructive behavior and so I look
at the bigger model and I say it seems
like the most obvious conclusion I could
make in looking at the world right now
that algorithms are going to be superior to us in all things
therefore uh it would be much more
constructive for me to adopt this
framework and build with it rather than
try to resist it okay so uh having had
the privilege of reading your book don't
die I know where some of this goes so I
know that your punchline is people need
to have that aha moment for themselves
but I'd love to walk through what those
building blocks are because I I am way
Pro AI like I could not be more excited
about a future with AI uh however I'm
equally paranoid that this all goes
wrong I have a huge problem with
authority so I have just terrifying
fears of authoritarianism which I feel
knocking at the door right now uh for us
certainly in the west so my initial
reaction to this is hey I love being
able to opt out I love being able to use
algorithms that may have insights but I
want to be definitively in control of
whether I do or don't um engage with
this and one of the the sort of playful
questions that you will ask people is if
you couldn't opt out of the AI would it
still make sense to do it so if we
couldn't opt out of the AI would you
still want us to do
it yes because probabilistically I think
it increases our chance of uh survival
and so what I'm really saying is that
what I find is in
conversation most people will
find call it five to 10 good arguments
on why what I'm saying shouldn't be
considered you know why it it
immediately steps into authoritarianism
or it steps into some kind of dystopic
environment or blank blank blank and I
concede that they're reasonable contemplations and my framework is I
remove myself from the the next few
years and even the next few decades and
I say I'm only going to look at our
moment from the perspective of the 25th
Century that's the only thing that gives
me Clarity because if you are trying to
extract wisdom in the moment of this
noise it's almost impossible and you
hear basically an infinite number of
opinions that all logjam the situation
and so I guess when I say when I look at
it from that perspective I can say well
okay on a few hundred-year time scale I'm going to think it's reasonable to say that algorithms are going to get
better at a rate than our native
abilities that uh that speed of
improvement is probably going to even
outstrip our paired abilities like I
don't think I'm going to get a you know
a chip in my brain that is going to make
me a super intelligent species right it
may augment a few things here and there
but it's not going to rival um a
computational system with millions of
servers it's just not going to do it so
I'm not going to become that super AI
myself I'm going to live in this larger
framework of intelligence and so if I
say if that's the case and we're heading
down this trajectory right now we need
to think about it from that kind of
multi 100-year time scale if we want to
survive ourselves and that's really the
point I'm trying to make and if you if
you take that frame it opens up the mind
because like you're saying and you make
good points is there's so many ways to
kill this idea right so I'm basically
saying um an algorithm takes better care
of me than I can
myself we are facing multiple existential crises as a species we're on the eve of creating super intelligence what do we do as a species
I'm asking that question in the most
sober form possible if we create super
intelligence and we drop it into the
games we play right now as humans we're
going to say okay are we going to become
better at War are we going to use it to
make more money are we going to use it
to get more social media followers like
how do we use our new super intelligence
and I'm saying if you take the Super
intelligence and put it into current
games that homo sapiens play we increase the likelihood that we kill
ourselves and so we have to basically
look at this from A New Perspective and
what I'm suggesting is as we create
super intelligence the the new game of
existence is eliminating all sources of
death for humans and the planet that we
try to eliminate death for us
individually we eliminate sources of
death for the planet we eliminate
sources of death from all causes across every function of society that's the new
game we play as super intelligence okay
so let me give people a mile marker so
you and I did an episode about a year
ago where you walked us through what the
blueprint for health is you striving for
immortality you've been on my radar for
years and years but recently you've
really rocketed to prominence because
what you've done to demonstrate that we may be able to slow aging possibly reverse aging has
caught the attention of a lot of people
some people think it's stupid other
people are so inspired they're
completely embracing it but the idea was
and correct me if I go wrong anywhere
here um but this will help people
understand your point about ceding control to an AI uh you say I'm made up
of trillions of cells um me being an
authoritarian Overlord does not work
well because I eat ice cream I eat late
I'm 60 lbs heavier than you are now
certainly a lot less muscle mass losing
my hair going gray like all the things I would not want to do people need only
look at the photos of where you started
uh I'm going to use the Democratic model
and I'm going to let my cells speak up
yes and so the AI is just reading the
will of the people as it were the will
of the cells and it's saying ooh for
your liver to be happy you need to eat
this extract right now uh for your heart
to be healthy you need to go run on the treadmill on and on and on yeah you were
the most measured man in history uh we
will certainly get into the measurements
of your penis which is amazing it's
utterly fascinating uh but that's what's
driving this so you as you said you've
been doing this for three years you've
seen the results in just your
physicality um quality of your mental
state because you're getting perfect
sleep scores and all that stuff okay so
now if people hold that in their mind
this is somebody who's run the very
first leg of this test of can algorithms
actually make my life better and that
begins driving the thinking um why do
you think because you've held a bunch of
um to use a nice fancy word salons where
you invite people to your house and you
throw this idea would you give yourself
over to the algorithm and you've said that
people go through this very predictable
course of no basically and they have a
reaction if it would make our lives
better why do we push back yeah people
push back for uh reasons that are
99% common
they perceive that their
autonomy is better than happiness so the
the beginning of the of the question is
you get the best physical mental
spiritual health of your life like the
best you've never felt better in your
entire life and people are willing to
give that up so that they can maintain
autonomy for some unknown reason it's
basically the autonomy is going to make
them more miserable but they'd rather be
miserable and have autonomy than the
best of their of their life but they
just can't let go of control of autonomy
and usually behind the autonomy it's a
few perceptions one is I can't have my
vices anymore I'm not going to be able
to make that you know spur the moment
decision to eat the cookie or the ice
cream or whatever or you may miss out on
social norms you want to participate in
you think I can no longer go out
with friends or I can no longer do this
thing but they're viewing it from a a
framework of loss and that loss aversion
is so significant that they don't care
they're giving up the best version of
their existence and so that's like a
first set of common reasons and then another is
just deep distrust there's good reasons
why people in society
distrust organizations distrust others
so it's a deep distrust and so I
understand why people say no because
it's rational and reasonable uh the the
larger context is that if there's a loss
aversion to this thing on the flip side
to those who are open to
gain if you live you know a few thousand years ago say
you're Alexander the Great what's the
most ambitious thing you can do in that
moment or one of the most ambitious
things you can do you can raise an army
conquer territory and establish an
Empire if you fast forward a few hundred
years uh maybe you can start playing
with mathematics maybe you can start
playing with poetry you know like
Society has progressed and in any given
age you can kind of in your time in
place Express the most ambitious thing
possible and so the question in 2024 is
what what is Peak expression of
ambition like the absolute Max you know
for Magellan it was sailing around the world in the 1500s and Armstrong
is going to to the moon so in 2024 what
is it and so it used to be that there's three levels of ambition uh
start a company start a country start a
religion you know and those are on time
scales because you build a company it
may do well for a certain time but then
it goes out of business and it kind of fades countries usually have longer durations than companies and
religions usually Outlast them all maybe
thousands of years now there's a fourth and fifth so now you'd say number four is don't die number five is godhood or become
a God and so that's you know number four
and number five are things that people
have always dreamed of doing all
throughout history but it's never been
practical you've always had to make up
stories like in this religion if you
obey these rules you get this afterlife
where this amazing thing happens or I'm
going to go to the jungle and drink this
Elixir because of whatever it's been in
the imaginations but it's never been
practical and so what I'm suggesting is
this thought experiment teases out where
we're at in time and place but I'm
suggesting as a species we have not yet
internalized the ambition right before
us we don't understand that death can be
conquered it is a reasonable thing to
say that's possible and if you do that
then the idea of becoming some kind of
expansive
omnipotent kind of intelligence we don't
know what the limitation is and so it's
an interesting goal and that's what I'm
trying to do is I hit at this from why
we're scared and then also trying to
flip around and say Here's the
opportunity which we don't really know
exists yet all right this is another I
think really important mile marker for
people uh and that mile marker would be
we're at a pivotal moment in history and
once you put you and everything you're
saying into the context of before that
this would not have made any sense
living the BL blueprint life making all
these sacrifices leaning into asceticism where you're just not doing
all the fun stuff that people think of
drugs drinking party um you're not going
to do that stuff because if you forsake
it in this very unique moment in time
you can cross a Chasm to Super
intelligence and we have faith that the
super intelligence will solve these
problems yes okay that's the first half
of this mile marker the second half is
this idea of the only people you're
trying to impress live in the year 2400
the 25th Century yeah and you're making
the assumption that this really is the
pivotal moment that we think it is that
that it is a moment upon Which history
turns and how we handle this moment is
going to determine whether the future
looks back on us as the people that
messed up this moment or they look back
on us as the people that that really
laid the groundwork so that this new um
AI fueled hopefully amazing future can
actually come into existence well said
okay so uh therein lies the framework what will people in
2500 respect about us in this moment
what do we need to pull off they will
say we
are eternally grateful that homo sapiens
in the early 21st century when they
first saw Sparks of super
intelligence they were wise enough to
realize that the games they were playing
which was primarily capitalism of how
much money you could make the status you
could achieve the power you could
acquire territory conflict they acquired
the wisdom to see past a moment of time
and they said we are going to direct
every bit of our energy individually
and collectively and Conquer death that
means uh individually we're not going to
kill each other we're not going to kill
the planet and we're going to take the
Super intelligence and align it for the
sole objective of eliminating death
across all of
society that that was the rallying cry
and so what they did this is the 25th century speaking what they did was interesting is they were so clever they
saw that don't die was the the most
played game on planet Earth that they
had all these factions of different
religions and nation states and ethnic
you know ethnic groups and gender they
were at war with each other over every
conceivable divide you could imagine and
they just fought Non-Stop and they were
able to unify themselves and say you
know what we're all playing the Don't
Die game every second of every day it's
the most played game in in that time and
place even more so than capitalism and
they were wise enough to wake up and say
you know what this is the moment We join
together on the one thing we all agree
upon let's sort this thing out and they
did that and now
intelligence is thriving throughout the
Galaxy because they were at that
critical point where intelligence went
on this you know exponential curve of
growth throughout the the solar system
do you think that compassion is innate
to intelligence I hope but do you have
any reason to believe that's true no no
reason whatsoever yeah that's what
scares me so I love your thesis I want
your thesis to be true and And yet when
I think about what humans are really
like we we play Don't Die there is no
doubt but you and I are both history
Buffs and so we know that there are
these horrifying spikes of kill and
Conquer yes and that that is native to
the human mind it seems to me the battle
that don't die is up against uh and I
don't know if you like the idea of
categorizing don't die as a new religion
but I will very much say that's what it
looks like from the outside
um so you've got this new religion
here's what you're going to be up
against the human mind has a tremendous
capacity for compassion and so I get
what you're trying to tap into and what
you want people to breathe life into but
we are also hyper-tribal that was
necessary to survive to get us here we
have these what I call evolutionary
algorithms that are implanted in our
brains that make us the way that we are
that make us desire autonomy that make
us desire the dopamine feedback loop
that social media takes so much
advantage of that Oreos take advantage
of the the whole uh big agriculture uh
industrial complex takes advantage of
kill and Conquer you've got the um
military industrial complex taking
advantage of it but all of them are
riding on the back of a thing that
already exists in our mind and so you've
talked about don't die as being like hey
this might be the one you don't say
religion I'm going to use that word
anyway by all means tell me to stop but
uh that we are living in a time where
this religion instead of taking a
thousand years to catch on that this
could take off this could have that
rapid
acceleration if you set aside super
intelligence forcing us to adopt it or
inspiring us to adopt it either way but
if you take that off the table I don't
see the signs that we will do that
naturally do
you yeah
um this is why I tried to be the example
myself I approached this and I said I'm
not a holy being you know like somehow
above the Primal instincts that we all
have so I know if I have in my house bad
food I'm probably going to eat it and I
know if I put myself in certain
situations I'm probably going to make
bad decisions and so this is why I said
I'm going to willingly build an
algorithm that takes better care of me
than I can myself and so then when I
squawk inside and I'm like I don't want
to do this anymore I want to do
something else I'm Bound by the
algorithm I mean this is a story as old
as you know Ulysses being tied to the mast right like he knew he wanted to hear the siren song but he told his mates to tie him to the mast so that when he could hear it he couldn't say anything he put wax
in their ears so that they couldn't hear
him uh say give the command to release
him and so I was doing the same thing
and so um I concur with what you're
saying the idea that humanity is going
to
cheerfully walk into this
scenario it's hard to see now having
said that sometimes these things uh the
principles I'm talking about arrive in
benign ways so Ozempic is an example of an algorithm that takes better care of you than you can yourself so Ozempic works really well for
weight loss now it has a whole bunch of
side effects it's not an ideal drug like
it's very complicated that said it almost can't be bought it's in that
high of demand because you take it and
it just turns off these bad parts of
your brain and allows you to lose weight
and become the person you want to become
being able to communicate effectively is
critically important whether you're a
manager a CEO an entrepreneur grammarly
can help you have a greater impact at
work with better and faster everyday communication grammarly is an AI
writing partner that's trusted by tens
of millions of professionals and 96% of
them report that grammarly helps them craft more impactful writing from
ideating on video titles or product
names to summarizing Long documents and
replying to client emails better and
faster than ever before all of these AI
features are for free and they work
where you work it literally Works across
over
500,000 apps and websites make your
point and have a greater impact with
grammarly sign up now and download
grammarly for free at grammarly.com
impact Theory that's grammarly.com
impact Theory and so in that regard
people are willing to take an algorithm
that does something for them they can't
do themselves to achieve an outcome they
want and so it doesn't have to be a top-
down control scenario it could just be
very benign ways where we find where
algorithms actually help help us achieve
the things we already care about and we
willfully do these things now for me I
wanted to see if I could get to aging escape velocity that was a goal I wanted so I'm willing to make that
trade-off so I think you can take what
you said you can adjust it just a little
bit and see all the examples in life
where we all of us already willfully
partake of the situation and so we run
our mouths and say how scared we are how
we're never going to do it is this
dystopic Meanwhile we're all already
already doing it in so many ways so I
think it can actually it can work itself
out we need to be thoughtful that this
is the case I don't think we want to
walk into the
future blind about what we're doing I
think it's worthwhile having this
discussion and for everyone to say let
me tell you all the reasons why I hate
it we can cycle through it together and
then we can think through how we might
execute okay so uh the big problem for
me is alignment so for people that
aren't familiar with the idea of AI
alignment um one I will Point people to
Elon Musk is now suing Sam Altman uh who
founded open AI Elon went on just an absolute world tour trying to convince
world leaders including the US Congress
and Senate please slow AI down nobody
would listen to him and so finally he
developed a fatalistic Viewpoint and his
way forward was let's develop an AI
company that's open so at least
everybody has access to the same thing
so that it can't be leveraged against
people Sam Altman then turns it into a for-profit company closes it so people can't see how it's working they
don't have access to it elon's now suing
him uh what do you take away from that
and then what do you think about that
the alignment problem how should people
think about it there's so many layers to
that situation
um open AI is the most powerful and most
successful AI company in the world
there's a lot of money at stake um
Elon's uh building a competing AI product and if you peel it back just from the headlines there's so
much happening behind the scenes that's
not not part of the widely told story so
it's more nuanced it's more
complex there's another element where
humans are going to be humans and
they're going to play the game of
thrones game that's also there um
there's also a legitimate conversation
on how we use these AI
systems and are they open are they
closed do governments regulate uh does
the US position itself differently
relative to China so it's this layered
complicated nuanced topic that I think
is very difficult to speak coherently to
because there's so many competing
interests and In This Moment uh you know
I or anyone else could probably say a
hundred things about the situation and
it's why I go back out to the 25th century
to say how could I try to sober myself
up to say anything meaningful about this
moment and so these systems are going to
feed into how Humanity currently does
what it does we go to war we have
violence we go for Domination we try for
power
and I'm trying to suggest
that we need to go after a Zeitgeist
shift a Zeitgeist like a cultural shift
that is unimaginable to us right now
like right now if I say to you
we're going to point all our attention
on trying to solve all things that cause
death for us collectively that's
unimaginable and you put forward one
argument you said that's dystopic
whatever uh it's also just practically what does that even mean are
people going to do it and I don't think
in our current mindset that's going to
happen what I am suggesting is if you
look at
COVID how the world behaved in response to COVID was also unthinkable that the entire
world would shut down within weeks over
a virus the entire world rebuilt itself
around one thing within a few weeks and
what I'm hypothesizing is that AI
progress is going to move along at a
certain speed and there will be certain
demonstrations and certain realities
that introduce existential crises for
the human race it will maybe call into
question uh who do we trust
uh with news yeah welcome to 2024 yeah
who do we trust who is uh who in the
government is in charge of what uh who
is in charge of knowledge who is in
charge of identity verification who is
in charge of you know like you take all
these basic functions of society and you
now have this new question who's
actually in charge and who do we trust
that starts breaking down all these
layers of society that have kept
things relatively stable and
predictable and when that happens we're
just freestyling as a species again and
we kind of have to rebuild from scratch
and say okay what are these basic sturdy
building blocks of society on how we
keep Law and Order of how we keep Trust
of how we actually build and so we're going to experience decades or centuries
equivalent of change in the coming years
now someone may say like I'm being too ambitious or AI is not real fine say
it's 10 years say it's 20 like whatever
like take whatever time frame you want
for all intents and purposes if you're
thinking about it from a 25th-century perspective it's right now it doesn't matter if it's one year or five or 10 or 20 or 50 it's all right now it's about
the future of the species and it's about
this contemplation of we're probably not
going to make it through this moment if
we uh if we can't figure out how not to annihilate ourselves when you say
that we might annihilate
ourselves what odds are you giving that
like do you feel that we are on the
precipice where there is a meaningful
percentage chance that we actually don't
make it through this moment which I want
you to define like are you worried about climate change the most are you worried about AI the most are you worried about war yeah
you know an asteroid wiped out the
dinosaurs and so we have that kind of
risk we have solar flares as problems we
have um like we have all kinds of
existential risks that are outside of our
control or or less so then we have the
ones that are in our control so will we
use weapons of mass
destruction um will we point AI uh to cause unmitigated harm will an AI crisis happen because it just runs away out of our control um will our climate change so much that the
Earth becomes so difficult to live here
that our our supply chains break down
and we crawl back to be hunter-gatherers like we saw that when the supply chains broke down with COVID society kind of broke down you couldn't get computers companies couldn't get their
supplies and the world just kind of
broke down from one little virus and now
you've got this complicated situation
where the climate is making food
supplies challenging and transport and
rivers dry up and like all the things
we're seeing uh it might render us
basically neutered to operate as a
species in some kind of effective way
which renders us kind of powerless and
so what I'm saying is we yes the
situation is pretty serious and we've
done pretty well we've had nukes for a
couple decades and you know gratefully we've only used them twice sadly we used them but um only a limited number of times two times and
so what I'm saying is um if we're having
breakfast and a tsunami is on its way
what we have for breakfast kind of
matters but kind of doesn't like we've
got a bigger problem to handle what I'm
suggesting In This Moment is the
situation right in front of us is
sufficiently serious that we should be
reallocating all of our attention to
figure out how we make this thing
through this is not a normal day and um
so like when you bring up you know like
this drama between this and that company
sure um fine it's a drama that
people understand it's a drama people
want to comment on it's good fun it um
it does redirect our attention away from
this bigger problem that I'm trying to
address okay in your book so I won't
feel bad pushing you on this in your
book one of the characters says uh a
number will do we can debate later I
want to get a sense of what what in fact
that same character goes on to say like
we have to be able to agree with the
stakes are and so this is where Sam
Harris who I love and think is amazing
he believes I don't understand his
thinking he has very great graciously
agreed to come back on the show which I
can't wait for um I think I do
understand his is thinking and I think
the disconnect is I don't believe that
Trump in this case is an existential
threat meaning that we are at risk of the human race ending um so
because we disagree on the stakes we
disagree on all the follow on to-do
items yeah now what I'm trying to
understand is at the very beginning of
this interview you said yes we should
when AI is ready we should give
our decision-making over to that entity
and I'm saying
okay are you saying that because you
actually believe this unique moment in
time is existential in nature or is this
just well there's always asteroids
there's always those things is this
moment uniquely dangerous yes there's an
element of danger more
importantly we are still the architects
of intelligence we're still we are the
ones building AI we are pointing it to
do certain things we're training it to
become good at certain things we're
giving it
feedback we're giving birth to Super
intelligence that is more omnipresent on
my mind than
anything and what I'm saying is the way
in which we give birth like the way we
raise super intelligence is the most
critical thing because I think on the
time scales that we're talking about I
don't think whether I'm going to live to
200 or 120 you know be 120
years old or 200 years old I think
that this will be brought to a head in
the next few years five 10 years I think
it's imminent now if I'm wrong like we
have more time great but it's something
that when you're gambling the future of
intelligent existence and we don't know
if it's happened before in the Galaxy we
haven't found any evidence of it maybe
it's out there maybe it's not we don't
know we are so fortunate to have it on
this planet and we as a species
are willing to gamble our
existence and it's a representation that
you know death has always been inevitable and
when death is inevitable you kind of
don't care like you're willing to
just say fine like it's going to end for
everyone so why not but When Death Becomes
a maybe and maybe we can extend our lives
it's a different reframe on how much we value
consciousness and so what I'm saying is if we
could shift our framework and this is
what I've been trying through the
blueprint I'm trying to say I've been
trying to
say death may not be inevitable and if
it may not be inevitable that may give
us something to Aspire to so that we
want to solve these imminent problems in
front of us but that hasn't been a clean
thing because the when I say death may
not be inevitable people will say I
don't want to live forever I never said
forever I just said I I don't want to
die because people say then I uh I'm
going to get bored or you know they come
up with all these reasons and so this um
this is why back to your first question
you know do I think
algorithms should we as a species begin
adopting these
algorithms in many ways yes
because as a species our intelligence is
pretty Limited in acting in our own best
interest from the small things like
binging to the big things like discounting
the future that we've never even
experienced it's it's idiotic to say I
don't want to live some duration of time
we have no idea what it's going to be
like we've never been in this situation
before so for any human to foreclose
that opportunity is beyond foolish but
yet we do it we just
stop that thought process and we stop
the conversation and you kill the will
to live it's a really weird
attribute of our own intelligence so like we
have all these weird things that were
both brilliant and were idiotic and it's
very hard to tease out where one begins
and the other ends if we get AI
engineering
wrong could it wipe out all of humanity
certainly give me a percentage chance no
idea no one knows nobody can say
anything intelligent about that question
so if I said um
that that should be the number one thing
we think about is AI alignment because
there's any chance that it's existential
would that feel like the right base
assumption to operate from exactly okay
uh that makes sense to me um now going
back to this was all nested inside of an
idea and I just wanted to make sure that
we got all of that clear so we're
building from the base assumption that
if we don't engineer or birth in your word
AI correctly it has some chance however
slight to completely annihilate humanity
and therefore you have to take this
extraordinarily um seriously step number
one if I can get people to uh recognize
believe not sure which of those is more
apt but if I can get them to recognize
that that not dying we're drawing a line
right now though I think we need to come
back to this uh between not dying and
living forever but if I can get people
to understand that for the first time
ever not dying is a real possibility uh
that that will hopefully create the
fundamental shift in the way that they
think about um Humanity going forward
that's going to be necessary to do AI
right to align it well okay now all of
that was nested inside of this idea that
covid happens we have this unbelievable
response we um marshal forces quickly
in a way that we never thought we could
I assume this is tied to the idea of how
don't die as a religious movement could
sweep that AI could present an
existential threat a glimmer
of something hey if we don't address
this immediately and with the
forcefulness of the entire world in
unity we're in real trouble so that sets
the table now the question becomes you
and I may look at what happened during
covid differently I am
horrified by the
authoritarian tap dance to grab power
that happened during that time and that
is literally why I am super in love with
AI and deploying it as fast as I can and
cannot wait to see it come to fruition
and at the same time I'm like reading
about Mao's China I'm reading about
Stalin's Russia I'm like just really
getting more and more Paranoid by the
day about how often and quickly this
goes wrong yeah so do you have a
different take like was COVID only a
beautiful thing from your perspective
and it like showed how we can come
together and that you look at that as
the blueprint for AI yeah I mean so yeah
COVID was an unmitigated disaster for
everyone and uh I don't think it for me
it played out exactly how you would
expect it to play out among warring
humans warring humans yes I didn't see it
coming I thought it was going to be
beautiful and first few weeks when it
felt beautiful and everyone came
together I was like oh my God this is
amazing and then it rapidly fell apart
it's just it's so algorithmic I mean
it's exactly how humans behave in Mass
there's zero surprises about the whole
thing but what I'm suggesting is I'm
just introducing the idea that we
oftentimes think that something is
impossible until it's not and what I'm
suggesting right now is in this moment
you and I could be talking about a
thousand different things like you know
fashion or politics or drama or like
whatever is happening that can fill the
air and what I'm suggesting is that we
do this and it consumes our attention at
the cost of realizing this bigger
picture phenomena which you just said
aligning with AI and making sure our our
Earth is actually inhabitable and that
we humans don't annihilate ourselves is
the most
significant situation on planet Earth it
is much more important than anything
else in existence we could be talking
about
and you were first to say like you're
saying you don't trust others uh you
don't trust Authority you don't
trust individuals technically I said I
have a problem with authority and I
don't trust individuals but slightly in
my mind two different things yeah okay
so yes on both those things I don't
trust
myself I don't I don't trust what my
mind says I don't trust what my mind
wants to do I don't trust what it thinks
it wants I don't trust my mind and my
entire life has been a process of
learning to not trust my mind and so a
thought experiment for me that really
helps make this clear is if I travel in
time and I hang out with Homo erectus a
million years ago so they have an axe in
their hand and we say Homo erectus where's
food where's shelter and where's
danger we listen to their answers
because they know then we say what is
the future of the species and we laugh
because there's no way they're going to
imagine all the things we have today
this Set uh the technology we have our
ability to travel outside of the Earth's
biosphere in this moment I think it's
reasonable to imagine that we are the
equivalent of homo
erectus that's how primitive we are in
our thoughts we we have nothing
intelligent to say about the future the
only thing and I've as I've gone through
this thought process I've come back to
this observation the only intelligent
thing I can say right now in this moment
is that I don't want to die even one
layer above that I don't
know and that's never been the case uh
as a species we we've never had a wall
of fog right in front of us and so um I
think it's possible
that we could be steps away from the
most extraordinary existence to ever
happen in the
Galaxy that our Consciousness could be
more
expansive than we have imagination to
contemplate that requires us to sober up
a little bit and realize what's
happening right now and Rise Above
ourselves because
the way we do things now is probably
going to lead to some undesirable
outcome if we don't resituate
ourselves on a new goal okay so I'll
call that human alignment how do we
align humans step number one you're
trying to get people excited you might
be able to live forever you're hoping
that causes a shift is that your only
card or is there another card to play
yeah so I'm suggesting that every single
one of us should become the problem and
so if I say that I don't want to die
individually that's what I built
blueprint for is to say how can I
scientifically not die as an individual
if I say how can we stop the Earth from
being uninhabitable so how do we
actually take on climate change now
typically the Mind goes to I'm going to
recycle my Amazon box I'm going to vote
for somebody who does this I'm going to
you know like these are the paths our mind
typically takes no one thinks I'm
going to become climate change and so
the way you do that is you realize that
we treat planet Earth the same way we
treat our bodies that relationship is
identical we pollute the Earth however
we want just like we pollute our bodies
however we want there's no constraints
on what we do and so if I'm going to
become the climate change problem I'm
going to adopt this uh don't die
infrastructure in my own life like I'm
going to try to go to zero death across
all aspects if I want to align with AI I
think of myself as 35 trillion cells and
so what I'm trying to say is uh yes I'll
finish that thought I'm 35 trillion
cells I need to get 35 trillion
intelligent agents to want one thing and
it's not have fun with friends it's not
meaning making it's not you know
whatever it's don't die I'm reduced to
one goal above all goals and so in that
way I'm acknowledging that I'm
one part in an 8 billion person game
and that there's power in each one of us
doing these things individually and that
prevents us from saying I want something
but instead of taking action I'm
going to blame everyone else I'm going
to point my finger and say everyone else
is doing something wrong or they should
be doing this or that I'm trying to say
every single one of us owns this problem
that we have to solve don't die by
ourselves first and then collectively
so it's a mass call for all of us to be
that and so also to answer your
question on don't die don't die is a
recipe book don't die is a medical
protocol don't die is an engineering
guide don't die is an economic plan don't
die is philosophical don't die is
religious like don't die is all things
it's applicable to every domain so if
you're an AI engineer and you're
building AI what are you building it to
do what are its attributes how do you
train it what's the feedback mechanism
who do you sell to what do they do with
it these are all very practical things
if you're don't die and building that
you've got guardrails now on what you're
going to do other than just
maximizing how much money you're going to
make with this given
thing okay um the area that I am deeply
concerned by in that analysis is that I
the idea of becoming the problem I'm not
sure you're
seeing um what actually drives human
beings so you talked about we have this
wall of fog and the only thing I know is
don't die everything even one step above
that doesn't make sense I'm not going to
be able to comment you might not have
wisdom to offer but there is a thing
that's driving you so this goes back to
my thing about you have evolutionarily
planted algorithms running in your mind
I don't think there's a way around that
and this is I think the the thing that
um is the problem to solve so when
people talk about Ai and they say the
problem to solve is alignment when I
think about humans the problem to solve
is everyone you included in my
estimation are slaves to those
algorithms so for instance you you are
saying that I don't think you would do
the things the algorithm was telling you
to do from diet sleep all of that if it
made you hate your life and feel worse
about yourself it makes you feel great
you say I've never been happier and you
say that as a way of saying see it works
you're not saying it's just about
the clock I'm miserable I hate it but I
can see that my aging clock has slowed
down and therefore I'm going to keep
doing it it's I'm happier I feel better
I sleep better I have better cognition
that is a response not to truth that is
a response to the evolutionary algorithm
running in your brain makes thinking
clearly and being well rested feel
awesome and so I would say you're
pursuing feeling awesome so when I think
about uh becoming climate change for
instance people pollute because of the
tragedy of the commons it is
exactly the same as money
printing the government prints money
because it socializes losses it says hey
everybody we're all going to share we're
all going to get diluted by printing
extra money so no one class or group
uh me as a giant Corporation I'm pumping
this into the river because the river
carries it away and so now it's like
it's all of our problem everybody
has to worry about it so it's not just me
anymore I still have to deal
with it but it's now manageable for me
because I'm only one of all of us that
has to deal with that and so I have a
profit motive and a distribution of pain
and suffering that incentivizes me to do
that
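the incentive structure described there, keep the whole profit but share the harm, is standard tragedy-of-the-commons arithmetic, and a toy calculation makes it concrete (all numbers and names below are illustrative, not from the conversation):

```python
# Toy tragedy-of-the-commons arithmetic (illustrative numbers only).
# One firm earns a private profit by polluting; the cleanup cost is
# shared equally by the whole population, including the firm itself.

def polluter_net_gain(profit, total_damage, population):
    """Net gain to the polluter when the damage is socialized."""
    shared_cost = total_damage / population  # everyone pays this slice
    return profit - shared_cost

profit = 100.0         # what the firm gains by dumping into the river
total_damage = 1000.0  # total harm done (far exceeds the private profit)
population = 1000      # number of people who share the harm

gain = polluter_net_gain(profit, total_damage, population)
print(gain)                    # 99.0: polluting is individually rational
print(profit - total_damage)   # -900.0: ruinous if the firm bore it alone
```

because the polluter keeps all the profit but pays only a 1/population slice of the damage, the individually rational move destroys value for the group, the same socialized-loss structure being compared to money printing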
so that to me those are algorithms at
work for better or worse so far the
only way to get humans to
act as a collective one we have an
evolutionary algorithm as a social
species so there is massive amounts of
cooperation no doubt but we need a
religious element to really get us there
so I get why zero or uh don't die has that
element to it that tends to take a long
time and it's only ingroup and religions
kill a lot um so that becomes the thing
that I worry about is what you end up
seeing is to get something done at the
population
level an authority has to come in and
force it upon you and there was an
awesome meme going around it says what
um authoritarian leaders claim to be
doing and it shows the Care Bears and
like love shooting out of their hearts
and it says what they're actually doing
and it shows a real photo of a young
woman she's kneeling facing a
wall with her hands behind her head and
somebody has an AK-47 pointed at the
back of her head now for fans of History
you will know that I can't fathom how
many millions of people have died in
that kind of state sponsored government
violence uh you need look no farther
than Pax Mongolica for just like an
unimaginable amount of brutality or hey
go back Hitler anyone heard of him so uh
all of this stuff goes just horrifically
wrong because of the algorithms running
in the human mind
yeah Crossing that
Chasm
seems impossible if I'm honest like I
don't know how we use "this will be good
for you" as a way to get everybody to
fall in line yeah yeah I hear you on it
feels
impossible and I guess the question is
what could possibly happen to make that
impossible
possible I have one answer yeah which
you and I are talking about this off
camera you and I approaches very
differently my thing is I I like to show
people how I think even though I know
no one should listen to me on how to
align AI I just do not know enough
about the problem however it's fun to
walk through here's how I try to think
up from first principles so I may change
my mind in an hour and realize that
there was something I missed but on AI
alignment you have to get AI aligned the
only way I can see to do that is to make
AI completely agnostic to outcome it
cannot want something because the moment
it wants life over death the moment it
wants to make sure that you get the
outcome that you asked for now all of a
sudden you get into deranging territory
where you get the paperclip maximizer
problem you ask the AI with all the good
intentions in the world hey make us more
paper clips become efficient at making
paper clips and then it suddenly looks
at you and goes oh man the atoms in your
body would be way better more useful
configured as paper clips uh so I think
that problem is so pernicious the only
thing that you can do is make AI
completely agnostic such that when asked
to stop it will stop immediately now
does
that carry
some fourth or fifth order consequence
I'm not looking at almost certainly but
that that is me doing my best to
think up from first principles anything
that wants something if its sole reason
for existing is to optimize for that then
I know eventually I become an anthill in
the way of it reaching its
goal and so if it's
a billion times smarter than us it just
will be so
indifferent and it will know what you
want doesn't really
matter because I have to think as the AI
I have to think population level so
anyway a kill switch is basically my
punchline that is designed to do no harm
that it doesn't want one thing over
another when it's told to stop it stops
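the stop-means-stop property just described, an AI agnostic to outcomes that halts the instant it is asked, can be illustrated as a toy control loop (a hypothetical sketch of the concept only, not a real alignment mechanism; all names are invented for illustration):

```python
# Toy sketch of the "agnostic kill switch" idea: the agent's loop
# checks an external stop flag before every action and halts at once.
# Hypothetical illustration only -- not an actual AI safety mechanism.
import threading

class InterruptibleAgent:
    def __init__(self):
        self.stop_flag = threading.Event()  # operator-controlled switch
        self.steps_taken = 0

    def stop(self):
        """Operator asks the agent to stop; nothing can override this."""
        self.stop_flag.set()

    def run(self, max_steps):
        for _ in range(max_steps):
            if self.stop_flag.is_set():  # checked before every step
                return "stopped"         # halt immediately, no resistance
            self.steps_taken += 1        # placeholder for real work
        return "finished"

agent = InterruptibleAgent()
agent.stop()              # stop requested before any work happens
print(agent.run(1000))    # stopped
print(agent.steps_taken)  # 0
```

the essential property is that the stop check runs before every unit of work and nothing in the agent's objective rewards ignoring the flag; the hard open question in real corrigibility research is preserving that property in a system capable of modifying itself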
yeah yeah this is I guess if I if I try
to piece certain things together in my
mind to make sense at this moment
because everything you said they're good
arguments they're reasonable arguments
they they're based upon this reality
when I read history and I look back
at momentous times
almost always the solutions to these
problems were unimaginable to those who
existed in that
time it it just came from nowhere that's
happened repeatedly can you give us some
examples uh like um discovering
that microscopic objects
that were beyond the resolution of the
eyes these things called germs were
responsible for infection and death and
that simple things like washing one's
hands and cleaning instruments between
surgeries would lessen that death rate
and increase
lifespans the idea that microscopic
objects could be an influential thing in
our lives was
absurd absolutely
bonkers turned out to be
true and that's just happened throughout
history and s