Nick Bostrom: Simulation and Superintelligence | Lex Fridman Podcast #83
rfKiTGj-zeQ • 2020-03-26
The following is a conversation with Nick Bostrom, a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, because he has done some incredible work in artificial intelligence, in technology, space science, and really philosophy in general. But we have to start somewhere.

This conversation was recorded before the outbreak of the coronavirus pandemic, which I'm sure both Nick and I will have a lot to say about next time we speak, and perhaps that is for the best, because the deepest lessons can be learned only in retrospect, once the storm has passed. I do recommend you read many of his papers on the topic of existential risk, including the technical report titled "Global Catastrophic Risks Survey" that he co-authored with Anders Sandberg.

For everyone feeling the medical, psychological, and financial burden of this crisis,
I'm sending love your way. Stay strong. We're in this together; we'll beat this thing.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N.

As usual, I'll do one or two minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

This show is presented by Cash App, the number one finance app in the App Store. When you get it,
use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar, since Cash App does fractional share trading. Let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier.

So, again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world.

And now, here's my conversation with Nick Bostrom.

At the risk of asking the Beatles to play Yesterday, or the Rolling Stones to play Satisfaction, let me ask you the basics: what is the simulation hypothesis? That we are living in a computer simulation?
What is the computer simulation? How are we supposed to even think about that?

Well, so the hypothesis is meant to be understood in a literal sense: not that we can kind of metaphorically view the universe as an information-processing physical system, but that there is some advanced civilization who built a lot of computers, and that what we experience is an effect of what's going on inside one of those computers. So that the world around us, our own brains, everything we see and perceive and think and feel, would exist because this computer is running certain programs.

Do you think of this computer as something similar to the computers of today, these deterministic, sort of Turing machine type things? Is that what we're supposed to imagine? Or are we supposed to think of something more like a quantum mechanical system, something much bigger, something much more complicated, something much more mysterious from our current perspective?

The ones we have today would do, just bigger. Certainly, you'd need more memory and more processing power; I don't think anything else would be required. Now, it might well be that they do have, in addition, maybe, quantum computers and other things that would give them even more oomph. But I don't think it's a necessary assumption in order to get to the conclusion that a technologically mature civilization would be able to create these kinds of computer simulations with conscious beings inside them.

So, do you think the simulation hypothesis is an idea that's most useful in philosophy, computer science, physics? Sort of, where do you see it having a valuable kind of starting point, in terms of a thought experiment? Is it useful?

I guess it's more informative and interesting and maybe important, but it's not designed to be useful for something else.

Okay, interesting.
Sure, but is it philosophically interesting, or are there some kinds of implications for computer science and physics?

I think not so much for computer science or physics per se. Certainly it would be of interest in philosophy, and I think also to, say, cosmology or physics, inasmuch as you're interested in the fundamental building blocks of the world and the rules that govern it. If we are in a simulation, there is then the possibility that, say, physics at the level where the computer running the simulation exists could be different from the physics governing phenomena in the simulation. So I think it might be interesting from the point of view of religion, or just for kind of trying to figure out what the heck is going on.

So we mentioned the simulation hypothesis so far. There is also the simulation argument, which I tend to distinguish. So: simulation hypothesis, we are living in a computer simulation; simulation argument, this argument that tries to show that one of three propositions is true, one of which is the simulation hypothesis, but there are two alternatives in the original simulation argument, which we can get to.

Yeah, let's go there. By the way, confusing terms, because people will, I think, probably naturally think "simulation argument" equals "simulation hypothesis", just terminology-wise. But let's go there. So, simulation hypothesis means that we are living in a simulation, the hypothesis that we're living in a simulation; the simulation argument has these three complete possibilities that cover all possibilities. So, what are they?

Yeah, so it's like a disjunction: it says at least one of these three is true, although it doesn't on its own tell us which one.
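The disjunctive structure lends itself to a small numerical sketch. The following is a rough, simplified version of the bookkeeping behind the argument, not Bostrom's exact notation, with purely illustrative numbers: unless almost no civilization reaches maturity (the first alternative), or almost none that do ever run simulations (the second), simulated observers vastly outnumber unsimulated ones (the third).

```python
def simulated_fraction(f_mature, f_interested, n_sims):
    """Rough fraction of human-like observers who are simulated, given:
      f_mature     - fraction of civilizations reaching technological maturity
      f_interested - fraction of mature civilizations that run ancestor simulations
      n_sims       - average number of simulations each interested civilization runs
    Simplified bookkeeping, assuming each simulation hosts roughly one
    civilization's worth of observers; illustrative only.
    """
    sims_per_real = f_mature * f_interested * n_sims
    return sims_per_real / (sims_per_real + 1)

# Alternative 1: almost nobody reaches maturity -> fraction near 0
print(simulated_fraction(1e-9, 0.5, 1e6))
# Alternative 2: mature civilizations almost never simulate -> fraction near 0
print(simulated_fraction(0.5, 1e-9, 1e6))
# Alternative 3: otherwise, even modest interest pushes the fraction near 1
print(simulated_fraction(0.5, 0.01, 1e6))
```

The sketch just makes the disjunction vivid: to keep the last number small, one of the first two parameters has to be driven almost to zero.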
The first one is that almost all civilizations at our current stage of technological development go extinct before they reach technological maturity. So there is some great filter that makes it so that basically none of the civilizations throughout a maybe vast cosmos will ever get to realize the full potential of technological development.

And this could be, theoretically speaking, because most civilizations kill themselves too eagerly or destroy themselves too early, or it might be super difficult to build a simulation, so the span of time...

Theoretically, it could be both. Now, I think it looks like we would technologically be able to get there in a time span that is short compared to, say, the lifetime of planets and other sorts of astronomical processes.

So your intuition is that building a simulation is not...

Well, so this is an interesting concept, technological maturity. It's kind of an interesting concept to have for other purposes as well. We can see, even based on our current limited understanding, what some lower bound would be on the capabilities that you could realize by just developing technologies that we already see are possible. So, for example, one of my research fellows here, Eric Drexler, back in the '80s, studied molecular manufacturing. That is, you could analyze, using theoretical tools and computer modeling, the performance of various molecularly precise structures that we didn't then, and still don't today, have the ability to actually fabricate. But you could say that, well, if we could put these atoms together in this way, then the system would be stable, and it would, you know, rotate at this speed and have these computational characteristics. And he also outlined some pathways that would enable us to get to this kind of molecular manufacturing in the fullness of time.
You could do other studies we have done. You can look at the speed at which, say, it would be possible to colonize the galaxy if you had mature technology. We have an upper limit, which is the speed of light. We have sort of a lower current limit, which is how fast current rockets go. We know we can go faster than that by just, you know, making them bigger and having more fuel and stuff. And you can then start to describe the technological affordances that would exist once a civilization has had enough time to develop, at least, those technologies we already know are possible. Then maybe they would discover other new physical phenomena as well that we haven't realized, that would enable them to do even more. But at least there is this kind of basic set of capabilities.
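The claim that mature-technology timescales are short compared to astronomical ones can be checked with back-of-envelope arithmetic. A sketch, with assumed round numbers: a galactic diameter of roughly 100,000 light-years, current rockets at about 15 km/s, and a mature-technology cruise speed of 10% of light speed, which is purely an assumption for illustration.

```python
C_KM_S = 299_792              # speed of light, km/s
GALAXY_DIAMETER_LY = 100_000  # Milky Way diameter, rough round number

def crossing_time_years(fraction_of_c):
    """Years to cross the galaxy at a constant fraction of light speed.
    Distance in light-years divided by speed in c gives years directly."""
    return GALAXY_DIAMETER_LY / fraction_of_c

# Current chemical rockets: ~15 km/s, about 0.00005 of c
rocket_time = crossing_time_years(15 / C_KM_S)
# Assumed mature technology: 10% of c (an assumption, not a known figure)
mature_time = crossing_time_years(0.10)

print(f"rocket-speed crossing: ~{rocket_time:.1e} years")  # ~2 billion years
print(f"0.1c crossing:         ~{mature_time:.0f} years")  # ~1 million years
```

On these assumptions, the mature-technology figure of around a million years is short compared to the billions-of-years lifetimes of planets and stars, which is the point being made about timescales.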
Can you linger on that? How do we jump from molecular manufacturing to deep space exploration, to mature technology? Like, what's the connection?

Well, so these would be two examples of technological capability sets that we can have a high degree of confidence are physically possible in our universe, and that a civilization that was allowed to continue to develop its science and technology would eventually attain.

You can intuit, like, we can kind of see the set of breakthroughs that are likely to happen. So you can see, like, what did you call it, the technological set?

With computers, maybe it's easiest. I mean, one thing is, we could just imagine bigger computers using exactly the same parts that we have, so you can kind of scale things that way, right? But you could also make processors a bit faster. If you had this molecular nanotechnology that Eric Drexler described, he characterized a kind of crude computer built with these parts that would perform at a million times the human brain, while being much smaller, the size of a sugar cube. And he made no claim that that's the optimal computing structure; like, for all we know, we could build faster computers that would be more efficient. But at least you could do that if you had the ability to do things that were atomically precise.

Yes.
I mean, you can combine these two. You could have this kind of nanomolecular ability to build things at the bottom, and then, at the spatial scale that would be attainable through space-colonizing technology, you could then start, for example, to characterize a lower bound on the amount of computing power that a technologically mature civilization would have. If it could grab resources, you know, planets and so forth, and then use this molecular nanotechnology to optimize them for computing, you'd get a very, very high lower bound on the amount of compute.
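This kind of lower-bound estimate can be sketched with order-of-magnitude figures of the sort cited in Bostrom's simulation argument paper; every number below is a rough assumption, good to a few orders of magnitude at best.

```python
# Back-of-envelope comparison; all figures are rough order-of-magnitude
# assumptions, not established facts.
PLANET_COMPUTER_OPS_PER_S = 1e42  # assumed: one planetary-mass, nanotech-optimized computer
BRAIN_OPS_PER_S = 1e17            # assumed upper-end estimate for emulating one human brain
HUMANS = 1e11                     # order of magnitude of all humans who have ever lived
SECONDS_PER_LIFE = 50 * 365 * 24 * 3600  # ~50-year average lifespan

# Total operations to simulate the mental history of humankind once (~1e37)
total_ops = BRAIN_OPS_PER_S * HUMANS * SECONDS_PER_LIFE

# Time one planetary computer would need for the whole run
seconds_needed = total_ops / PLANET_COMPUTER_OPS_PER_S

print(f"~{total_ops:.0e} ops total, ~{seconds_needed:.2g} s on one planet-computer")
```

On these assumptions, a single planet-scale computer could re-run the entire mental history of humankind in a tiny fraction of a second, which is why the lower bound on available compute comes out so high relative to what an ancestor simulation would need.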
So, sorry, to define some terms: a technologically mature civilization is one that took that piece of technology to its lower bound. What is technological maturity?

Well, yeah, so that would mean it's a stronger concept than we really need for the simulation hypothesis; I just think it's interesting in its own right. It would be the idea that there is some stage of technological development where you've basically maxed out, where you've developed all those general-purpose, widely useful technologies that could be developed, or at least kind of come very close to that, you know, 99.9 percent there or something. So that's an independent question. You can think either that there is such a ceiling, or you might think the technology tree just goes on forever.
Where is your sense?

I would guess that there is a maximum that you would start to asymptote towards.

So new things won't keep springing up, new ceilings?

In terms of basic technological capabilities, I think that, yeah, there is like a finite set of those that can exist in this universe. Moreover, I mean, I wouldn't be that surprised if we actually reached close to that level fairly shortly after we have, say, machine superintelligence. So I don't think it would take millions of years for a human-originating civilization to begin to do this; I think it's more likely to happen on historical timescales. But that's an independent speculation from the simulation argument. I mean, for the purposes of the simulation argument, it doesn't really matter whether it goes indefinitely far up or whether there is a ceiling, as long as we know we can at least get to a certain level. And it also doesn't matter whether that's going to happen in a hundred years or five thousand years or fifty million years; the timescales really don't make any difference for the simulation argument.

Can you linger on that a little bit? There's a big difference between a hundred years and ten million years, you know. So it doesn't really matter, because you just said it doesn't matter if we jump scales beyond historical scales? So, for the simulation argument, doesn't it matter that, if it takes ten million years, it gives us a lot more opportunity to destroy civilization in the meantime?
Yeah, well, it would shift around the probabilities between these three alternatives. That is, if we are very, very far away from being able to create these simulations, if it's, say, billions of years into the future, then it's more likely that we will fail ever to get there; there's more time for us to, you know, go extinct along the way, and similarly for other civilizations.

So it's important to think about how hard it is to build a simulation...

In terms of, yeah, figuring out which of the disjuncts is more likely. But the simulation argument itself is agnostic as to which of these three alternatives is true. It's like, you don't have to... The simulation argument would be true whether or not we thought this could be done in five hundred years or it would take five hundred million years.

So, for sure, the simulation argument stands. I'm sure there might be some people who oppose it, but it doesn't matter. It's very nice that those three cases are covered. But the fun part is, at least, not saying what the probabilities are, but kind of intuitively reasoning about what's more likely, what are the kinds of things that would make some of the arguments less or more likely.
But let's actually go through them; I don't think we went through them. So, number one is: we destroy ourselves before we ever create the simulation.

Right. So that's kind of sad, but we have to think not just about what might destroy us. I mean, there could be some, whatever, disastrous asteroid slamming into the Earth a few years from now that could destroy us, right? But you'd have to postulate, in order for this first disjunct to be true, that almost all civilizations throughout the cosmos also fail to reach technological maturity.

And the underlying assumption there is that there is likely a very large number of other intelligent civilizations?

Well, if there are, yeah, then they would virtually all have to succumb in the same way. I mean, then that leads off to another... I guess there are a lot of little digressions there.

Yeah, keep dragging us back. There is a set of basic questions that always come up in conversations with interesting people, like the Fermi paradox. You could almost define whether a person is interesting by whether at some point the Fermi paradox comes up.
Well, so for what it's worth, it looks to me that the universe is very big. In fact, according to the most popular current cosmological theories, it's infinitely big. And so then it would follow pretty trivially that it would contain a lot of other civilizations, in fact infinitely many. If you have some local stochasticity, and there are infinitely many, you know, infinitely many lumps of matter, one next to another, with kind of random stuff in each one, then you're going to get all possible outcomes with probability one, infinitely repeated. So then, certainly, there would be a lot of extraterrestrials out there. Maybe, short of that, if the universe is very big, there might be a finite but large number. And if we are literally the only one, yeah, then of course, if we went extinct, all civilizations at our current stage would have gone extinct before becoming technologically mature, so it kind of becomes trivially true that a very high fraction of those went extinct.

But if we think there are many, I mean, it's interesting, because there are certain things that plausibly could kill us. If you look at existential risks, the best answer to "What would be most likely to kill us?" might be a different answer than the best answer to the question "If there is something that kills almost everyone, what would that be?", because that would have to be some risk factor that was kind of uniform over all possible civilizations.

Yeah, so for the first disjunct, you have to think about not just us, but, like, every civilization dying out before they create the simulation.

Yeah, or something very close to everybody.

Okay, so what's number two?
Well, so number two is the convergence hypothesis: that maybe a lot of, or some of, these civilizations do make it through to technological maturity, but out of those who do get there, they all lose interest in creating these simulations. So they just have the capability of doing it, but they choose not to. And not just a few of them decide not to, but, you know, out of a million, maybe not even a single one of them would do it.

And I think, when you say "lose interest", that sounds unlikely, because it's like they get bored or whatever. But there could be so many possibilities within that. I mean, losing interest could be anything from it being exceptionally difficult to do, to fundamentally changing the sort of, the fabric of reality if you do it, to ethical concerns. All of those kinds of things could be exceptionally strong pressures.

Well, certainly, I mean, yeah, ethical concerns. But not really "too difficult to do". I mean, in a sense, that's the point of getting to technological maturity: you would have the ability, using only a tiny fraction of your resources, to create many, many simulations. So it wouldn't be the case that they would need to spend half of their GDP forever in order to create one simulation, and they had this difficult debate about whether they should, you know, invest half of their GDP for this. It would more be like, well, if any little fraction of the civilization feels like doing this at any point during maybe their, you know, millions of years of existence, then there would be millions of simulations. But certainly there could be many conceivable reasons for why there would be this convergence, many possible reasons for not running ancestor simulations or other computer simulations, even if you could do so cheaply.

By the way, what's an ancestor simulation?
Well, that would be the type of computer simulation that would contain people like those we think have lived on our planet in the past, and like ourselves, in terms of the types of experiences they have, and where those simulated people are conscious. So it's not just simulated in the same sense that a non-player character would be simulated in a current computer game, where it kind of has an avatar body and then a very simple mechanism that moves it forward or backward, but something where the simulated being has a brain, let's say, that is simulated at a sufficient level of granularity that it would have the same subjective experiences as we have.

So where does consciousness fit into this? Are there different ways to think about how this can be simulated, just like you're talking about now? Do we have to simulate each brain within the larger simulation? Is it enough to simulate just the brains, just the minds, and not the rest of the simulation, not the big universe itself? Are there different ways to think about this?
Yeah, I guess there is a kind of premise in the simulation argument, rolled in from philosophy of mind: that it would be possible to create a conscious mind in a computer, and that what determines whether some system is conscious or not is not, like, whether it's built from our organic biological neurons, but maybe something like what the structure of the computation is that it implements. So we can discuss that if we want, but I think the weakest claim might be that it would be sufficient, say, if you had a computation that was identical to the computation in the human brain, down to the level of neurons. So if you had a simulation with 100 billion neurons, connected in the same way as the human brain, and you then rolled that forward with the same kind of synaptic weights and so forth, so you actually had the same behavior coming out of this as a human with that brain would have, then I think that would be conscious. Now, it's possible you could also generate consciousness without having that detailed a simulation; there I'm getting more uncertain exactly how much you could simplify or abstract away.
Can you linger on that? What do you mean? I missed where you placed consciousness, in a sense. Well, so if you are a computationalist, do you think that what creates consciousness is the implementation of a computation, some property, an emergent property, of the computation itself?

Yeah, you could say that. But then the question is, what's the class of computations such that, when they are run, consciousness emerges? If you just have something that adds 1 plus 1 plus 1 plus 1, like a simple computation, you think maybe that's not going to have any consciousness. If, on the other hand, the computation is one like our human brains are performing, where, as part of the computation, there is, you know, a global workspace, a sophisticated attention mechanism, there are self-representations of other cognitive processes and a whole lot of other things, that possibly would be conscious. And, in fact, if it's exactly like ours, I think definitely it would be. But exactly how much less than the full computation that the human brain is performing would be required is, I think, a little bit of an open question.
You asked another interesting question as well, which is: would it be sufficient to just have, say, the brain, or would you need the environment in order to generate the same kind of experiences that we have? And there is a bunch of stuff we don't know. If you look at, say, current virtual reality environments, one thing that's clear is that we don't have to simulate all details of them all the time in order for, say, the human player to have the perception that there is a full reality there. You can have, say, procedurally generated worlds that might only render a scene when it's actually within the view of the player character. And similarly, if this environment that we perceive is simulated, it might be that all of the parts that come into our view are rendered at any given time, and a lot of aspects that never come into view, say, the details of this microphone I'm talking into, exactly what each atom is doing at any given point in time, might not be part of the simulation, only a more coarse-grained representation.
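The rendering-on-demand idea described here is standard practice in procedurally generated games. A minimal sketch, with hypothetical class and method names rather than any particular engine's API: region detail is derived deterministically from a seed, but only computed, and cached, when an observer actually looks at it; everything else exists only as the coarse-grained seed.

```python
import hashlib

class LazyWorld:
    """Generate region detail only on observation; unobserved regions
    exist only implicitly, as a seed (a coarse-grained description)."""

    def __init__(self, seed="world-42"):
        self.seed = seed
        self.rendered = {}  # cache of regions that have been observed

    def _generate(self, coords):
        # Deterministic procedural content derived from the seed, so the
        # same region always renders identically (consistency for observers).
        digest = hashlib.sha256(f"{self.seed}:{coords}".encode()).hexdigest()
        return f"region{coords} detail {digest[:8]}"

    def observe(self, coords):
        # Detail is computed lazily, on first observation, then cached so
        # repeated observations agree.
        if coords not in self.rendered:
            self.rendered[coords] = self._generate(coords)
        return self.rendered[coords]

world = LazyWorld()
world.observe((0, 0))       # only now is this region rendered in detail
print(len(world.rendered))  # 1: the rest of the world stays coarse-grained
```

The design choice mirrors the point in the conversation: the cost of the world scales with what is observed, not with the size of the world.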
That, to me, is actually, from an engineering perspective, why the simulation hypothesis is really interesting to think about: how difficult is it to, sort of, in a virtual reality context, I don't know if "fake" is the right word, but to construct a reality that is sufficiently real to us, to be immersive in the way that the physical world is? I think that's actually probably an answerable question of psychology, of computer science: where is the line where it becomes so immersive that you don't want to leave that world?

Yeah, or that you don't realize, while you're in it, that it is a virtual world.

Yeah, those are actually two different questions. Yours is more sort of the good question, about the realism. But from my perspective, what's interesting is that it doesn't have to be real: how can we construct a world that we wouldn't want to leave?

Oh, yeah.
I mean, I think that might be too low a bar. If you think, say, when people first had Pong or something like that, I'm sure there were people who wanted to keep playing it for a long time, because it was fun, and they wanted to be in this little world. I'm not sure we would say it's immersive. I mean, I guess in some sense it is, but, like, an absorbing activity doesn't even have to be...

But they left that world, though. That's the thing. So, like, I think that bar is deceivingly high. They eventually left. So you can play Pong, or StarCraft, or some more sophisticated game, for hours, for months. World of Warcraft could be a big addiction, but eventually they escape that.

Ah, so you mean when it's absorbing enough that you would spend your entire... you would choose to spend your entire life in there?

And thereby change the concept of what reality is, because your reality becomes the game. Not because you're fooled, but because you've made that choice.

Yeah, and it may be that different people might have different preferences regarding that. Some might, even if you had a perfect virtual reality, still prefer not to spend the rest of their lives there.
I mean, in philosophy, there's this experience machine thought experiment. Have you come across this? So Robert Nozick had this thought experiment where you imagine some crazy, super-duper neuroscientists of the future have created a machine that could give you any experience you want if you step in there. And for the rest of your life, you can kind of pre-program it in different ways, so your, you know, fondest dreams could come true. Whatever you dream, you want to be a great artist, a great lover, like, have a wonderful life, all of these things. If you step into the experience machine, your experiences will be constantly happy, but you would kind of disconnect from the rest of reality, and you would float there in a tank. And Nozick thought that most people would choose not to enter the experience machine. I mean, many might want to go there for a holiday, but they wouldn't want to check out of existence permanently. And so he thought that was an argument against certain views of value, according to which what we value is a function of what we experience. Because in the experience machine, you could have any experience you want, and yet many people would think that would not be much value. So, therefore, what we value depends on other things than what we experience.
Okay, can you take that argument further? What about the fact that maybe what we value is the up and down of life? So, you could have ups and downs in the experience machine, right? But what can't you have in the experience machine?

Well, I mean, that becomes an interesting question to explore. But, for example, real connection with other people: if the experience machine is a solo machine, where it's only you, like, that's something you wouldn't have there. You would have this subjective experience that would be like fake people.

Yeah.

So if you gave somebody flowers, there wouldn't be anybody there who actually got happy; it would just be a little simulation of somebody smiling. But the simulation would not be the kind of simulation I'm talking about in the simulation argument, where the simulated creatures are conscious. It would just be a kind of smiley face that would look perfectly real to you.
look perfectly real to you so we're now
drawing a distinction between appear to
be perfectly real and actually being
real yeah so that could be one thing I
mean like a big impact on history maybe
it's also something you won't have if
you check into this experience machine
so some people might actually feel the
life I want to have for me is one where
I have a big positive impact on history
unfolds so let's see if you could kind
of explore these different possible
explanations for why this you wouldn't
want to go into the experience machine
if that's if that's what you feel and
what one interesting observation
regarding this Nozick thought experiment
and the conclusions he wanted to draw
from it is how much is a kind of a
status quo effect so a lot of people
might not want to jettison card reality
to plug in to this dream machine but if
they instead we're told well what you've
experienced up to this point was a dream
now
do you want to disconnect from this and
enter the real world when you have no
idea maybe what the real world is or
maybe you could say well you're actually
a farmer in Peru growing you know
peanuts and you could live for the rest
of your life
in this well or or would you want to
continue your your dream life as Alex
Friedman gone around the world making
podcasts and doing research so if the
status quo was that the that they were
actually in the experience machine
howling a lot of people might prefer to
live the life that they are familiar
with rather than sort of bail out into
something the change itself the leap
yeah it might not be so much the the
reality itself that we're after but it's
more that we are maybe involved in
certain projects and relationships and
we have you know a self-identity and
these things that's our values are kind
of connected with carrying that forward
and then whether
it's inside a tank or outside a tank in
Peru or whether inside a computer
outside a computer that's kind of less
important to what what we ultimately
care about yeah but still so just linger
on it it is interesting I find maybe
people are different but I find myself
quite willing to take the leap to the
farmer in Peru especially as the virtual
reality system become more realistic I
I find that possibility and I think more
people would take that leap but so in
this in this thought experiment just to
make sure we are understand so in this
case that the farmer in Peru would not
be a virtual reality that would be the
real the real that really real that your
life like before this whole experience
machine started well I kind of assumed
from that description
you're being very specific but that kind
of idea just like washes away the
concept of what's real I mean I'm still
a little hesitant about your kind of
distinction between real and illusion
because when you can have an illusion
that's feels I mean that looks real and
you know what III don't know how you can
definitively say something is real or
not like what's what's a good way to
prove that something is real in that
context well so I guess in this case
it's Morris depression in one case
you're floating in a tank with these
wires by the super-duper neuroscientists
plugging into your head giving you Lex
Friedman experiences in the other you're
actually tilling the soil in Peru
growing peanuts and then those peanuts
are being eaten by other people all
around the world by the exports and this
that's two different possible situations
in the one and the same real world that
that you could choose to occupy but just
to be clear when you're in a vat with
wires and the neuroscientists you can
still go farming in Peru right mmm but
like well you could you could if you
wanted to you could have the experience
of farming in Peru but what that
wouldn't actually be any peanuts grown
well but what makes a peanut so
so peanut could be grown and you could
feed things with that peanut and why
can't all of that be done in a
simulation
I hope first of all that they actually
have peanut farms in Peru I guess we'll
get a lot of comments otherwise angry I
was way up to the point you should know
you can't realize in that climate now I
mean, I think in the simulation there is a sense, an important sense, in which it would all be real. Nevertheless, there is a distinction between inside the simulation and outside the simulation, or, in the case of Nozick's thought experiment, whether you're in the vat or outside the vat, and some of those differences may or may not be important. That comes down to your values and preferences. So if the experience machine only gives you the experience of growing peanuts, but you're the only one in the experience machine... well, there are versions of the experience machine in which others can plug in too. So in fact you might want to distinguish different versions of the thought experiment. In the original thought experiment, maybe it's only just you, and you think, "I wouldn't want to go in there." Well, that tells you something interesting about what you value and what you care about. Then you could say, what if you add the fact that there would be other people in there and you would interact with them? Well, it starts to become more attractive, right? Then you can add in: what if you could also have important long-term effects on human history and the world, and you could actually do something useful even though you were in there? That makes it maybe even more attractive; you could actually have a life that had purpose and consequences. So as you add more into it, it becomes more similar to the baseline reality that you were comparing it to. Yeah, but I just
think inside the experience machine, even without taking those steps you just mentioned, you still have an impact on the long-term history of the creatures that live inside it, of the quote-unquote fake creatures that live inside that experience machine. And at a certain point, you know, if there's a person waiting for you inside that experience machine, maybe your newly found wife, she has fears, she has hopes, and she exists in that machine; when you unplug yourself and plug back in, she's still there, going on about her life. Oh,
well, in that case, yeah, she starts to have more of an independent existence. But it depends, I think, on how she's implemented in the experience machine. Take the limiting case where all she is is a static picture on the wall, a photograph. You think, well, I can look at her, but that's it; there's nothing more. Then you think it doesn't really matter much what happens to it, any more than with a normal photograph: if you tear it up, you can't see it anymore, but you haven't harmed the person whose picture you tore up. But if she's actually implemented, say, at a neural level of detail, so that she's a fully realized digital mind with the same behavioral repertoire as you have, then very plausibly she would be a conscious person like you are, and then what you do in this experience machine would have real consequences for how this other mind felt. So you have to specify which of these experience machines you're talking about. I think it's not entirely
obvious that it will be possible to have an experience machine that gave you a normal set of human experiences, which include experiences of interacting with other people, without that also generating consciousnesses corresponding to those other people. That is, if you create another entity that you perceive and interact with, that to you looks entirely realistic, not just when you say hello they say hello back, but you have rich interaction, many days, deep conversations, it might be that the only possible way of implementing that would be one that also, as a side effect, instantiated this other person in enough detail that you would have a second consciousness there. I think that's to some extent an open question. So you don't think it's possible to fake consciousness? Well, it might be. I mean, I think you can certainly fake it if you have a very limited interaction with somebody. That is, if all you have to go on is that somebody said hello to you, that's not enough to tell whether there was a real person there or a pre-recorded message, or, you know, a very superficial simulation that has no consciousness, because that's easy to fake; we could already fake it now, you can make a voice recording and so on. But if you have a richer set of interactions, where you're allowed to ask open-ended questions and probe from different angles, in ways that couldn't just be canned answers to all of the possible ways you could probe it, then it starts to become more plausible that the only way to realize this thing, such that you would get the right answers from whichever angle you probed it, would be a way of instantiating it that also instantiated a conscious mind. Yeah, maybe
on the intelligence part, but there's something about me that says consciousness is easier to fake. Like, I've recently gotten my hands on a lot of Roombas, don't ask me why or how, and they're just a nice robotic mobile platform for experiments. I made them scream and/or moan in pain and so on, just to see how I respond to them, as a sort of psychological experiment on myself, and I think they appear conscious to me pretty quickly. To me, at least, my brain can be tricked quite easily, right? Whereas if I introspect, it's harder for me to be tricked that something is intelligent. So I just have this feeling that inside this experience machine, just saying that you're conscious, and having certain qualities of the interaction, like being able to suffer, being able to hurt, being able to wonder about the essence of your own existence, not actually, I mean, creating the illusion that you're wondering about it, is enough to create the feeling of consciousness, to create the illusion of consciousness, and because of that, create a really immersive experience where you feel like that is the real world. So you think
there's a big gap between appearing conscious and being conscious? Or is it that it's just very easy to be conscious? I'm not actually sure what it means to be conscious; all I'm saying is that the illusion of consciousness is enough to create a social interaction that's as good as if the thing were conscious. Meaning, I'm making it about myself, right? Yeah. I mean, I guess there are a few differences. One is how good the interaction is, which might mean, if you don't really care about probing hard for whether the thing is conscious, maybe it would be a satisfactory interaction whether or not you really thought it was conscious. Now, if you really do care about it being conscious inside this experience machine: how easy would it be to fake it? You say it sounds easy. Then the question is, would that also mean it's very easy to instantiate consciousness, that it's much more widely spread in the world than we had thought, that it doesn't require a big human brain with a hundred billion neurons, that all you need is some system that exhibits basic intentionality and can respond, and you already have consciousness? In that case, I guess you still have a close coupling between the two. The other case would be where they can come apart, where we could create the appearance of there being a conscious mind without there actually being another conscious mind. I'm somewhat agnostic about exactly where these lines go. I
think one observation that suggests you could have very realistic appearances relatively simply, which is also relevant for the simulation argument in terms of thinking about how realistic the virtual reality would have to be for the creatures in it not to notice that anything was awry: just think of our own humble brains during the wee hours of the night, when we are dreaming. Many times, dreams are very immersive, but often you also don't realize that you're in a dream, and that's produced effortlessly by simple, primitive, three-pound lumps of neural matter. So if a simple brain like this can create a virtual reality that seems pretty real to us, then how much easier would it be for a superintelligent civilization, with planetary-sized computers optimized over the eons, to create a realistic environment for you to interact with? Yeah, and by the way,
behind that intuition is that our brain is not that impressive relative to the possibilities of what technology could bring. It's also possible that the brain is the epitome, is the ceiling. The ceiling, how? Meaning that it's not possible to do better; this is the smartest possible thing that the universe could create. That seems unlikely, unlikely to me, yeah. I mean, for some of the reasons we alluded to earlier, in terms of designs we already have for computers that would be faster by many orders of magnitude than the human brain. Yeah, but it could be that the constraints, the cognitive constraints, in themselves are what enable the intelligence, so the more powerful you make the computer, the less likely it is to become superintelligent. This is where I say dumb things to push back. And, uh, yeah, I'm not sure I follow. We might, you know... I mean, there are different dimensions of intelligence. Yeah.
A simple one is just speed: if you can solve the same challenge faster, in some sense you're smarter. There, I think, we have very strong evidence for thinking that you could have a computer in this universe that would be much faster than the human brain, and therefore have speed superintelligence, like be completely superior, maybe a million times faster. Then maybe there are other ways in which you could be smarter as well, more qualitative ways, and there the concepts are a little bit less clear-cut, so it's harder to make a very crisp, neat, firmly logical argument for why there could be qualitative superintelligence as opposed to just things that were faster. Although I still think it's very plausible, for various reasons that are less than watertight arguments. For example, if you look at animals and brains, and even within humans, there seems to be, like, Einstein versus a random person: it's not just that Einstein was a little bit faster. How long would it take a normal person to invent general relativity? It's not twenty percent longer than it took Einstein, or something like that; I don't know whether they would do it at all, or it would take millions of years, or something totally bizarre. So your intuition is that increasing the size of the computer and the speed of the computer might create some much more powerful levels of intelligence that would enable some of the things we've been talking about, like the simulation, being able to simulate an ultra-realistic environment, an ultra-realistic perception of reality. Yeah, I
mean, strictly speaking, it would not be necessary to have superintelligence in order to have, say, the technology to make these simulations, ancestor simulations or other kinds of simulations. As a matter of fact, I think if we are in a simulation, it would most likely be one built by a civilization that had superintelligence. It certainly would help a lot; you could build more efficient, larger-scale structures if you had superintelligence. I also think that if you had the technology to build these simulations, that's a very advanced technology; it seems kind of easier to get the technology for superintelligence. So I'd expect that by the time they could make these fully realistic simulations of human history with human brains in there, before they got to that stage, they would have figured out how to create machine superintelligence, or maybe biological enhancements of their own brains, if they were biological creatures to start with. So we
talked about the three parts of the simulation argument. One, we destroy ourselves before we ever create the simulation. Two, everybody somehow loses interest in creating simulations. Three, we're living in a simulation. I don't know if your thinking has evolved on this point, but you've kind of said that we know so little that these three cases might as well be equally probable. So, probabilistically speaking, where do you stand on this? Yeah, I mean, I don't think equal necessarily would be the most supported probability assignment. So how would you, without assigning actual numbers, say what's more or less likely, in your view? Well, I mean, I've historically tended to punt on the question of how to allocate between these three. So maybe you could ask it another way: which kinds of things would make each of these more or less likely? Certainly, in general terms, if anything increases or reduces the probability of one of these, we tend to slosh probability around onto the others, so if one becomes less probable, the others would have to become more probable, because they've got to add up to one. Yes.
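The constraint Bostrom describes, that the three alternatives are treated as exhaustive, so their probabilities must sum to one and lowering one redistributes mass onto the others, can be sketched as a tiny renormalization step. This is purely illustrative; the numbers below are hypothetical, not anything Bostrom endorses.

```python
# Minimal sketch of the simulation-argument trilemma as a probability
# distribution: the three alternatives are exhaustive, so they sum to 1,
# and reducing the credence in one "sloshes" probability onto the others.
# Starting numbers are purely illustrative.

def renormalize(probs):
    """Rescale a dict of probabilities so they sum to 1."""
    total = sum(probs.values())
    return {k: v / total for k, v in probs.items()}

# Equal assignment purely for illustration.
p = {"extinction_filter": 1/3, "lost_interest": 1/3, "we_are_simulated": 1/3}

# Suppose some new evidence halves the credence in the extinction filter.
p["extinction_filter"] *= 0.5
p = renormalize(p)
# The other two alternatives absorb the freed-up probability mass.
```

The same sloshing works in every direction: whichever alternative gains or loses credence, renormalizing keeps the three summing to one.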
So if we consider the first hypothesis, the first alternative, that there's this filter that makes it so that virtually no civilization reaches technological maturity, in particular our own civilization: if that's true, then it's very unlikely that we will reach technological maturity, just because if almost no civilization at our stage does it, then it's unlikely that we do it. Hang on, sorry, linger on that for a second. Well, if it's the case that almost all civilizations at our current stage of technological development fail to reach maturity, that would give us very strong reason for thinking that we will fail to reach technological maturity. And also, the flip side of that is, the fact that we've reached it means that many other civilizations... Yeah, so that means that if we
get closer and closer to actually reaching technological maturity, there's less and less distance left where we could go extinct before we are there, and therefore the probability that we will reach it increases as we get closer. And that would make it less likely to be true that almost all civilizations at our current stage fail to get there: in the one case we have, ourselves, we would be very close to getting there, which would be strong evidence that it's not so hard to get to technological maturity. So to the extent that we feel we are moving nearer to technological maturity, that would tend to reduce the probability of the first alternative and increase the probability of the other two. It doesn't need to be a monotonic change, though: if every once in a while some new threat comes into view, some bad new thing you could do with some novel technology, for example, that could change our probabilities in the other direction.
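The point about survival probability rising as the remaining distance shrinks can be sketched with a toy hazard model. The per-stage hazard rate and stage counts here are invented for illustration; they are not estimates from the conversation.

```python
# Toy model (invented numbers): assume a constant per-stage extinction
# hazard. The chance of reaching technological maturity is the chance of
# surviving every remaining stage, so it rises as fewer stages remain,
# mirroring the point that less "distance left" means less room to go
# extinct before maturity.

def p_reach_maturity(hazard_per_stage, stages_left):
    """Chance of surviving all remaining development stages."""
    return (1.0 - hazard_per_stage) ** stages_left

hazard = 0.05  # hypothetical 5% extinction risk per stage

early = p_reach_maturity(hazard, stages_left=20)  # far from maturity
late = p_reach_maturity(hazard, stages_left=2)    # nearly there
# As stages_left falls, the survival probability climbs toward 1.
```

A newly discovered threat would correspond to a jump in the hazard rate, which is why the change need not be monotonic.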
But that technology, again, you have to think of it as a technology that has to affect every civilization out there equally, in an even way? Yeah, pretty much. I mean, strictly speaking that's not required; there could be two different existential risks, say, and every civilization succumbs to one or the other, but neither of them kills more than fifty percent on its own. Incidentally, in some of my work, I mean on machine superintelligence, I pointed to some existential risks related to superintelligent AI, and how we must make sure to handle that wisely and carefully. But that's not the right kind of existential catastrophe to make the
first alternative true, though. It might be bad for us if the future lost a lot of value as a result of being shaped by some process that optimized for some completely non-human value, but even if we got killed by machine superintelligence, that machine superintelligence might still attain technological maturity. I see, so you're not human-exclusive: it could be any intelligent species that achieves it. It's all about reaching technological maturity, not that humans have to attain it. Right, so a superintelligence could replace us, and that works just as well for the simulation argument.
Yeah, I mean, it could interact with the second alternative: if the thing that replaced us was either more likely or less likely than we would be to have an interest in creating ancestor simulations, that could affect the probabilities. But yeah, to a first order, if we all just die, then we won't produce any simulations, because we are dead. But if we all die and get replaced by some other intelligent thing that then reaches technological maturity, the question remains, of course, whether that thing might not use some of its resources to do this stuff. So can you reason about this
stuff? Given how little we know about the universe, is it reasonable to reason about these probabilities? Like, how little... well, maybe you can disagree, but to me it's not trivial to figure out how difficult it is to build a simulation; we kind of talked about that a little bit. We also don't know, as we start building it, start creating virtual worlds and so on, how that changes the fabric of society. There are all these things along the way that can fundamentally change so many aspects of our society, of our existence, that we don't know anything about, like the kinds of things we might discover when we understand the fundamental physics to a greater degree, if we have a breakthrough and have a theory of everything: how that changes things, how that changes deep space exploration, and so on. So is it still possible to reason about probabilities, given how little we know? Yes, I think, though, there
will be a large residual of uncertainty
that we'll just have to acknowledge and
I think that's true for most of these
big-picture questions that we might
wonder about it's just we are small
short-lived small brained cognitively
very limited humans with little evidence
and it's amazing we can figure out as
much as we can really about the cosmos
But, okay, so there's this cognitive trick that seems to happen where I look at the simulation argument