Nick Bostrom: Superintelligence | AI Podcast Clips
IiSKptK92S4 • 2020-03-30
Let's talk about superintelligence, at least for a little bit, and let's start at the basics: what is intelligence?

Yeah, I tend not to get too stuck with the definitional question. I mean, the common-sense understanding: like the ability to solve complex problems, to learn from experience, to plan, to reason, some combination of things like that.

Is consciousness mixed up into that, or no? Is consciousness mixed up into that as well?

I don't think... I think it could be fairly intelligent, at least, without being conscious, probably.

And so then what is superintelligence?

Yeah, that would be like something that had much more general cognitive capacity than we humans have. So if we talk about general superintelligence, it would be a much faster learner, be able to reason much better, make plans that are more effective at achieving its goals, say, in a wide range of complex, challenging environments.

In terms of, as we turn our eye to the idea of sort of existential threats from superintelligence, do you think superintelligence has to exist in the physical world, or can it be digital only? Sort of, we think of our general intelligence, as us humans, as an intelligence that's associated with a body, that's able to interact with the world, that's able to affect the world directly, physically.

I mean, digital-only is perfectly fine, I think. I mean, it's physical in the sense that obviously the computers and the memories are physical, but its capability to affect the world could be very strong even if it has a limited set of actuators. If it can type text on a screen or something like that, that would be, I think, ample.

So in terms of the concerns of existential threat of AI, how can an AI system that's in the digital world have existential risk? Sort of, what are the attack vectors for a digital system?
Well, I mean, I guess maybe to take one step back, I should emphasize that I also think there's this huge positive potential from machine intelligence, including superintelligence. And I want to stress that, because some of my writing has focused on what can go wrong. When I wrote the book Superintelligence, at that point I felt that there was a kind of neglect of what would happen if AI succeeds, and in particular a need to get a more granular understanding of where the pitfalls are, so we can avoid them. I think that since the book came out in 2014, there has been a much wider recognition of that, and a number of research groups are now actually working on developing, say, AI alignment techniques and so on and so forth. So yeah, I think now it's important to make sure we bring back onto the table the upside as well.

And there's a little bit of a neglect now on the upside, which is, I mean, if you look at the amount of information that's available, or people talking and people being excited about the positive possibilities of general intelligence, that's far outnumbered by the negative possibilities, in terms of our public discourse.

Possibly, yeah. It's hard to measure.

But what are, to you, some possible big positive impacts of general intelligence, superintelligence?

Well, I mean, because I tend to also want to distinguish these two different contexts of thinking about AI and AI impacts: the kind of near-term and long-term, if you want. Both of which I think are legitimate things to think about, and people should, you know, discuss both of them. But they are different, and they often get mixed up, and then you get confusion. Like, I think you get simultaneously maybe an overhyping of the near-term and an underhyping of the long-term. And so I think as long as we keep them apart, we can have like two good conversations, or we can mix them together and have one bad conversation.

Can you clarify just the two things we were talking about, the near term and the long term? What are the distinctions?

Well, it's a blurry distinction, but say the things I wrote about in this book, Superintelligence: long-term. Things people are worrying about today with, I don't know, algorithmic discrimination, or even things like self-driving cars and drones and stuff: more near-term. And then, of course, you could imagine some medium term where they kind of overlap, and the one evolves into the other. But at any rate, I think both... yeah, the issues look kind of somewhat different depending on which of these contexts.

So I think it'd be nice if we can talk about the long term, and think about a positive impact or a better world because of the existence of long-term superintelligence. Do you have a view of such a world?

Yeah, I mean, I guess it's not hard to articulate, because it seems obvious that the world has a lot of problems as it currently stands, and it's hard to think of any one of those which it wouldn't be useful to have, like, a friendly, aligned superintelligence working on. So from health, you know, to the economic system, to being able to sort of improve investment and trade and foreign policy decisions, all that kind of stuff.
All that kind of stuff, and a lot more.

I mean, what's the killer app?

Well, I don't think there is one. I think AI, especially artificial general intelligence, is really the ultimate general-purpose technology. So it's not that there is this one problem, this one area, where it will have a big impact, but if and when it succeeds, it will really apply across the board, in all fields where human creativity and intelligence and problem-solving is useful, which is pretty much all fields, right? The thing that it would do is give us a lot more control over nature. It wouldn't automatically solve the problems that arise from conflict between humans, fundamentally political problems. Some subset of those might go away if you just had more resources and cooler tech, but some subset would require coordination that is not automatically achieved just by having more technological capability. But anything that's not of that sort, I think you just get like an enormous boost with this kind of cognitive technology, once it goes all the way. Now, again, that doesn't mean I'm thinking people don't recognize what's possible with current technology, and sometimes things get overhyped. But I mean, those are perfectly consistent views to hold: the ultimate potential being enormous, and then it's a very different question of how far we are from that, or what we can do with near-term technology.

So what's your intuition about the idea of an intelligence explosion? So there's this... you know, when you start to think about that leap from the near term to the long term, the natural inclination, like for me, sort of building machine learning systems today, it seems like it's a lot of work to get to general intelligence. But there's some intuition of exponential growth, of exponential improvement, of intelligence explosion. Can you maybe try to elucidate, try to talk about, what's your intuition about the possibility of an intelligence explosion, that it won't be this gradual, slow process, there might be a phase shift?

Yeah, I think we don't know how explosive it will be. I think, for what it's worth, it seems fairly likely to me that at some point there will be some intelligence explosion, like some period of time where progress in AI becomes extremely rapid, roughly in the area where you might say it's kind of human-equivalent in core cognitive faculties. The concept of human-equivalent, like, starts to break down when you look too closely at it. And just how explosive does something have to be for it to be called an intelligence explosion? Like, does it have to be overnight, literally, or a few years? But overall, I guess, if you plotted the opinions of different people in the world, I guess I would put somewhat more probability towards the intelligence explosion scenario than probably the average, you know, AI researcher, I guess.
And then the other part of the intelligence explosion, or just forget explosion, just progress: once you achieve that gray area of human-level intelligence, is it obvious to you that we should be able to proceed beyond it, to get to superintelligence?

Yeah, that seems... I mean, as much as any of these things can be obvious, given we've never had one... people have different views, smart people have different views, so there's some degree of uncertainty that always remains for any big, futuristic, philosophical, grand question, just because we realize humans are fallible, especially about these things. But it does seem, as far as I'm judging things based on my own impressions, that it seems very unlikely that there would be a ceiling at or near human cognitive capacity.

And, I don't know, this is a special moment. It's both terrifying and exciting to create a system that's beyond our intelligence. So maybe you can step back and say, like, how does that possibly make you feel, that we can create something... it feels like there's a line beyond which it steps, it'll be able to outsmart you, and therefore it feels like a step where we lose control.

Well, I don't think that necessarily follows. That is, you could imagine, and in fact this is what a number of people are working towards, making sure that we could ultimately project higher levels of problem-solving ability while still making sure that they are aligned, like they're in the service of human values. So losing control, I think, is not a given, that that would happen. Now, you asked how it makes me feel. I mean, to some extent, I've lived with this for so long... since, as long as I can remember being an adult, or even a teenager, it seemed to me obvious that at some point AI will succeed.

And so I actually misspoke, I didn't mean control. The control problem is an interesting thing, and I think the hope is, at least, we should be able to maintain control over systems that are smarter than us. But we do lose our specialness. It's sort of, we'll lose our place as the smartest, coolest thing on Earth. And there's an ego involved in that, that humans are very good at dealing with... I mean, I value my intelligence as a human being. It seems like a big transformative step to realize there's something out there that's more intelligent. I mean, you don't see that?

I think, yes... I mean, I think there are already a lot of things out there that are... I mean, certainly, if you think the universe is big, there are going to be other civilizations that already have superintelligences, or that just naturally have brains the size of beach balls, and they're like completely leaving us in the dust. And we haven't come face to face with them, we haven't come face to face.

But I mean, the question is: what would happen in a kind of posthuman world? Like, how much, day to day, would these superintelligences be involved in the lives of ordinary people?
Well, you could imagine some scenario where it would be more like a background thing that would help protect against some things, but you wouldn't... like, there wouldn't be this intrusive kind of presence, making you feel bad by, like, making clever jokes at your expense. Like, there are all sorts of things that maybe in the human context would feel awkward, that you don't want to be the dumbest kid in your class that everybody picks on. Like, a lot of those things maybe you need to abstract away from, if you're thinking about this context where we have infrastructure that is, in some sense, beyond any or all humans. I mean, it's a little bit like, say, the scientific community as a whole, if you think of that as a mind. It's a little bit of a metaphor, but, I mean, obviously it's going to be like way more capacious than any individual. So in some sense there is this mind-like thing already out there that's just vastly more intelligent than any individual is, and we think, okay, that's... you just accept that as a fact.

That's the basic fabric of our existence, is this intelligence.

Yeah, you get used to a lot of it.

I mean, there's already Google and Twitter and Facebook, these recommender systems that are the basic fabric of our... and I could see them becoming... I mean, do you think of the collective intelligence of these systems as already perhaps reaching superintelligence level?

Well, I mean, so here it comes to the concept of intelligence, and the scale, and what human-level means. The kind of vagueness and indeterminacy of those concepts starts to dominate how you would answer that question. So, like, say the Google search engine has a very high capacity of a certain kind: like remembering and retrieving information, particularly like text or images that you have a kind of word-string key for. It's obviously superhuman at that, but there's a vast set of other things it can't even do at all, not just not do well. So you have these current AI systems that are superhuman in some limited domain and then, like, radically subhuman in all other domains. Same with, say, a chess engine, or just a simple computer that can multiply really large numbers, right? It's going to have this, like, one spike of superintelligence, and then kind of a zero level of capability across all other cognitive fields.

Yeah, I don't necessarily think the generalness... I mean, I'm not so attached to it. It's a gray area, and it's a feeling, but to me, sort of, AlphaZero is somehow much more intelligent, much, much more intelligent than Deep Blue. And to say in which way... you could say, well, these are both just board games, they're both just able to play board games, who cares if they do it a little better or not. But there's something about the learning, the self-play learning...

Yeah.

...that makes it cross over into that land of intelligence that doesn't necessarily need to be general. In the same way, Google is much closer to Deep Blue, currently, in terms of its search engine, than it is to sort of AlphaZero. And the moment these recommender systems really become more like AlphaZero, being able to learn a lot without being heavily constrained by human interaction, that seems like a special moment in time.

Certainly, learning ability seems to be an important facet of general intelligence: that you can take some new domain that you haven't seen before, and you weren't specifically pre-programmed for, and then figure out what's going on there and eventually become really good at it. So that's something AlphaZero has much more of than Deep Blue had. And in fact, I mean, systems like AlphaZero can learn not just Go but other games, and in fact could probably beat Deep Blue in chess and so forth, right? So you say, yes, it's more general, and it matches the intuition. We feel it's more intelligent, and it also has more of this general-purpose learning ability. And if we get systems that have even more general-purpose learning ability, it might also trigger an even stronger intuition that they are actually starting to get smart.

So if you were to pick a future, what do you think a utopia looks like with AGI systems? Sort of, is it the Neuralink brain-computer-interface world, where we're kind of really closely linked with AI systems? Is it possibly where AGI systems replace us completely, while maintaining the values and the consciousness? Is it something like it's a completely invisible fabric, like you mentioned, a society where it just aids in a lot of the stuff that we do, like curing diseases and so on?
What is utopia, if you get to pick?

Yeah, I mean, it's a good question, and a deep and difficult one. I'm quite interested in it. I don't have all the answers yet, and might never have. But I think there are some different observations one can make. One is, if this scenario actually did come to pass, it would open up this vast space of possible modes of being. On one hand, material and resource constraints would just be expanded dramatically, so there would be a big pie, let's say, right? Also, it would enable us to do things, including to ourselves, that would just open up this much larger design space, an option space, than we have ever had access to in human history. So I think two things follow from that. One is that we probably would need to make a fairly fundamental rethink of what ultimately we value, like, think things through more from first principles. The context would be so different from the familiar that we couldn't just take what we've always been doing and then, like, oh well, we have this cleaning robot that cleans the dishes in the sink, and a few other small things. Like, I think we would have to go back to first principles.

So even from the individual level, go back to the first principles of what is the meaning of life, what is happiness, what is fulfillment?

Yeah. And then, also connected to this large space of resources, is that it would be possible, and I think something we should aim for, is to do well by the lights of more than one value system. That is, we wouldn't have to choose only one value criterion and say, we're going to do something that scores really high on the metric of, say, hedonism, and then, like, is a zero by other criteria, like kind of wireheaded brains in a vat, and it's like a lot of pleasure, that's good, but then, like, no beauty, no achievement. And I think, to some significant, not unlimited, but to some significant extent, it would be possible to do very well by many criteria. Like, maybe you could get like 98% of the best, according to several criteria, at the same time, given this great expansion of the option space.

So, having competing value systems, competing criteria... sort of, for everything, just like our Democrat versus Republican, there seems to be always multiple parties that are useful for our progress in society, even though it might seem dysfunctional inside the moment, but having the multiple value systems seems to be beneficial for, I guess, a balance of power?

So that's not exactly what I have in mind. Well, although maybe in an indirect way it is. But that, if you had the chance to do something that scored well on several different metrics, our first instinct should be to do that, rather than immediately leap to the thing of, which one of these value systems are we going to screw over? Like, our first instinct: let's first try to do very well by all of them. Then it might be that you can't get a hundred percent of all, and you would have to then have the hard conversation about which one we will only get ninety-seven percent of.

There you go. There's my cynicism, that all of existence is always a trade-off. But you say, maybe it's not such a bad trade-off; let's first at least try.

Well, this would be a distinctive context in which at least some of the constraints would be removed, probably. There will still be trade-offs in the end, it's just that we should first make sure we at least take advantage of this abundance. So in terms of thinking about this, like, yeah, one should think, I think, in this kind of frame of mind of generosity and inclusiveness to different value systems, and see how far one can get there first. And I think one could do something that would be very good according to many different criteria.