Transcript
llh-2pqSGrs • Peter Singer: Suffering in Humans, Animals, and AI | Lex Fridman Podcast #107
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/lexfridman/.shards/text-0001.zst#text/0413_llh-2pqSGrs.txt
Kind: captions
Language: en
the following is a conversation with
Peter Singer professor of bioethics at
Princeton University best known for his
1975 book Animal Liberation that makes
an ethical case against eating meat
he has written brilliantly from an
ethical perspective on extreme poverty
euthanasia human genetic selection
sports doping the sale of kidneys and
generally happiness including in his
books ethics in the real world and the
life you can save he was a key
popularizer of the effective altruism
movement and is generally considered one
of the most influential philosophers in
the world quick summary of the ads two
sponsors cash app and masterclass please
consider supporting the podcast by
downloading cash app and using code
lexpodcast and signing up at
masterclass.com/lex click the links buy the stuff it
really is the best way to support the
podcast and the journey I'm on as you
may know I primarily eat a ketogenic or
carnivore diet which means that most of
my diet is made up of meat I do not hunt
the food I eat though one day I hope to
I love fishing
for example fishing and eating the fish
I catch has always felt much more honest
than participating in the supply chain
of factory farming from an ethics
perspective this part of my life has
always had a cloud over it
it makes me think I've tried a few times
in my life to reduce the amount of meat
I eat but for some reason whatever the
makeup of my body whatever the way I
practice the diet I have I get a lot
of mental and physical energy and
performance from eating meat so both
intellectually and physically it's a
continued journey for me I returned to
Peters work often to reevaluate the
ethics of how I live this aspect of my
life let me also say that you may be a
vegan or you may be a meat-eater and may
be upset by the words I say or Peter
says but I ask that for this podcast
and other episodes of this podcast
you keep an open mind I may and probably
will talk with people you disagree with
please try to really listen especially
to people you disagree with and give me
and the world the gift of being a
participant in a patient intelligent
and nuanced discourse if your instinct
and desire is to be a voice of mockery
towards those you disagree with please
unsubscribe my source of joy and
inspiration here has been to be a part
of a community that thinks deeply and
speaks with empathy and compassion that
is what I hope to continue being a part
of and I hope you join as well if you
enjoy this podcast subscribe on youtube
review it with five stars on apple
podcasts follow on Spotify support on
patreon or connect with me on Twitter at
lexfridman as usual I'll do a few
minutes of ads now and never any ads in
the middle that can break the flow of
the conversation this show is presented
by cash app the number one finance app
in the App Store when you get it use
code lexpodcast cash app lets you send
money to friends buy Bitcoin and invest
in the stock market with as little as $1
since cash app allows you to buy Bitcoin
let me mention that cryptocurrency in
the context of the history of money is
fascinating I recommend The Ascent of Money
as a great book on this history debits
and credits on ledgers started around
30,000 years ago the US dollar was created
over two hundred years ago and the first
decentralized cryptocurrency released
just over ten years ago so given that
history cryptocurrency is still very much
in its early days of development but
it's aiming to and just might redefine
the nature of money so again if you get
cash app from the App Store or Google
Play and use the code lexpodcast you
get ten dollars and cash app will also
donate ten dollars to FIRST an
organization that is helping to advance
robotics and STEM education for young
people around the world this show is
sponsored by masterclass sign up at
masterclass.com/lex to get a discount
and to support this podcast when I first
heard about masterclass I thought it was
too good to be true for $180 a year you
get an all-access pass to watch courses
from to list some of my favorites Chris
Hadfield on space exploration Neil
deGrasse Tyson on scientific thinking and
communication Will Wright creator of
SimCity and The Sims on game design
I promise I'll start streaming games at
some point soon Carlos Santana on guitar
garry kasparov on chess Daniel Negreanu
on poker and many more Chris Hadfield
explaining how Rockets work and the
experience of being launched into space
alone is worth the money by the way you
can watch it on basically any device
once again sign up a master class
complex to get a discount and to support
this podcast
and now here's my conversation with
Peter Singer
when did you first become conscious of the fact
that there is much suffering in the
world I think I was conscious of the
fact that there's a lot of suffering in the
world pretty much as soon as I was
able to understand anything about my
family and its background because I lost
three of my four grandparents in the
Holocaust and obviously I knew why I
only had one grandparent and she herself
had been in the camps and survived so I
think I knew a lot about that pretty
early
my entire family comes from the Soviet
Union I was born in the Soviet Union and so
World War two has deep roots in the
culture and the suffering that the
war brought the millions of people who
died is in the music is in the
literature and the culture what do you
think was the impact of the war broadly
on our society the war had many impacts
I think one of them a beneficial impact
is that it showed what racism and
authoritarian government can do and at
least as far as the West was concerned I
think that meant that I grew up in an
era in which there wasn't the kind of
overt racism and anti-semitism that had
existed for my parents in Europe I was
growing up in Australia and certainly
that was clearly seen as something
completely unacceptable
there was also the fear of a further
outbreak of war which this time we
expected would be nuclear because of the
way the Second World War had ended so
there was this overshadowing of my
childhood about the possibility that I
would not live to grow up and be an
adult because of catastrophic nuclear
war the film On the Beach
was made in which the city that I was
living in Melbourne was the last place on
earth to have living human beings
because of the nuclear cloud that was
spreading from the north so
that certainly gave us a bit of that
that sense there were clearly many
other legacies that we got from
the war as well and the
whole set up of the world and the the
cold war that followed all of that has
its roots in the Second World War you
know there is much beauty that comes
from war sort of in a conversation with
Eric Weinstein he said everything is
great about war except all the death and
suffering do you think there's something
positive that came from the war the
mirror that it put to our society
sort of the ripple effects on it
ethically speaking do you think there
are positive aspects to war I find it
hard to see positive aspects in war and
some of the things that other people
think of as positive and beautiful
may be questioned so there's a certain
kind of patriotism people say you know
during wartime we all pull together we
all work together against the common
enemy and that's true an outside enemy
does unite a country and in general it's
good for countries to be united and have
common purposes but it also engenders a
kind of a nationalism and the patriotism
that cannot be questioned and that I'm
I'm more skeptical about what about the
the Brotherhood that people talk about
from soldiers the the sort of
counterintuitive sad idea that the
closest that people feel to each other
is in those moments of suffering of
being at the sort of the edge of seeing
your comrades dying in your arms that
somehow brings people extremely closely
together suffering brings people closer
together how do you make sense of that
it might bring people close together but
there are other ways of bonding and
being close to people I think without
the suffering and death that war entails
perhaps you can already
hear the romanticized Russian in me we
tend to romanticize suffering just a
little bit in our literature and culture
and so on could you take a step back and
I apologize if it's a ridiculous
question but what is suffering if you'll
try to define what suffering is how
would you go about it suffering is a
conscious state there can be no
suffering for a being who is completely
unconscious and it's distinguished from
other conscious states in terms of being
one that considered just in itself we
would rather be without it's a
conscious state that we want to stop if
we're experiencing or we want to avoid
having again if we've experienced it in
the past and that's as I say emphasized
for its own sake because of course
people will say well suffering
strengthens the spirit it has good
consequences
and sometimes it does have those
consequences and of course sometimes we
might undergo suffering we set ourselves
a challenge to run a marathon or climb a
mountain or even just to go to the
dentist so that the toothache doesn't
get worse
even though we know the dentist is going
to hurt us to some extent so I'm not
saying that we never choose suffering
but I am saying that other things being
equal we would rather not be in that
state of consciousness is the ultimate
goal sort of you have the new ten year
anniversary release of the life you can save
book really influential book we'll talk
about it a bunch of times throughout
this conversation but do you think it's
possible to eradicate suffering is that
the goal or do we want to achieve a
kind of minimum threshold of suffering
and then keep a little drop of poison
to keep things interesting in the
world in practice I don't think we ever
will eliminate suffering so I think that
little drop of poison as you put it or
if you like the contrast
of an unpleasant color perhaps
something like that in an otherwise
harmonious and beautiful composition
that is going to always be there if you
ask me whether in theory if we could get
rid of it we should I think the answer
is whether in fact we would be better
off or whether by
eliminating the suffering we would also
eliminate some of the highs the positive
highs and if that's so then we might be
prepared to say it's worth having a
minimum of suffering in order to have
the best possible experiences as well is
there a relative aspect to suffering so
when you talk about eradicating
poverty in the world is it that the more
you succeed the more the bar of what
defines poverty rises or is there at
the basic human ethical level a bar
that's absolute that once you get above
it then we can morally converge to
feeling like we have eradicated poverty
I think they're both and I think this is
true for poverty as well as suffering
there's an objective level of suffering
or of poverty where we're talking about
objective indicators like you're
constantly hungry you can't
get enough food you're constantly cold
you can't get warm you have some
physical pains that you're never rid of
I think those things are objective but
it may also be true that if you do get
rid of that and you get to the stage
where all of those basic needs have been
met there may still be new forms
of suffering that develop and perhaps
that's what we're seeing in the affluent
societies that people get bored
for example they don't need to spend so
many hours a day earning money to get
enough to eat and shelter so now they're
bored they lack a sense of purpose that
can happen
and that then is a kind of a relative
suffering that is distinct from the
objective forms of suffering but in your
focus on eradicating suffering you don't
think about the kind of
interesting challenges and suffering
that emerges in affluent societies
that's just not in your ethical
philosophical brain is that of interest
at all it would be of interest to me if
we had eliminated all of the objective
forms of suffering which I think of as
generally more severe and also perhaps
easier at this stage anyway to know how to
eliminate so yes in some future state
when we've eliminated those objective
forms of suffering I would be interested
in trying to eliminate the relative
forms as well but that's not a
practical need for me at the moment
sorry to linger on it because you kind
of said it but is elimination
the goal for the affluent society or
you know do you see
suffering as a creative force suffering
can be a creative force
I think repeating what I said about the
highs and whether we need some of the
lows to experience the highs so it may
be that suffering makes us more creative
and we regard that as worthwhile maybe
that brings some of those highs
with it that we would not have had if
we'd had no suffering I don't really
know many people have suggested that and
I certainly have no basis for
denying it
and if it's true then I would not want
to eliminate suffering completely but
the focus is on the absolute not to
be cold not to be hungry yes that's at
the present stage of where the world's
population is that's that's the focus
talking about human nature for a second
do you think people are inherently good
or do we all have good and evil in us
that basically everyone is capable of
evil based on the environment certainly
most of us have
potential for both good and evil I'm not
prepared to say that everyone is capable
of evil maybe some people who even in
the worst of circumstances would not be
capable of it but most of us are very
susceptible through environmental
influences so when we look at things
that we were talking about previously
let's say the what the Nazis did during
the Holocaust I think it's quite
difficult to say I know that I would not
have done those things even if I were in
the same circumstances as those who did
them even if let's say I had grown up
under the Nazi regime and had been
indoctrinated with racist ideas had also
had the idea that I must obey orders
follow the commands of the Fuhrer plus
of course perhaps the threat that if I
didn't do certain things I might get
sent to the Russian front and that would
be a pretty grim fate I think it's
really hard for anybody to say
nevertheless I know I would not have
killed those Jews or whatever so what's
your intuition how many people would be
able to say that and truly be able to say
it I think very few less than 10% to me
it seems a very interesting and powerful
thing to meditate on so I've read a lot
about the war World War 2 and I can't
escape the thought that I would have not
been one of the 10% right I have to say
I simply don't know I would like to hope
that I would have been one of the 10%
but I don't really have any basis for
claiming that I would have been
different from the majority is it a
worthwhile thing to contemplate it would
be interesting if we could find a way of
really finding these answers there
obviously is quite a bit of research on
people during the Holocaust on how
ordinary Germans got led to do terrible
things and there are also
studies of the resistance some heroic
people in the white rose group for
example who resisted even though they
knew they were likely to die for it but
I don't know whether these studies
really can answer your larger question
of how many people would have been
capable of doing that
well sort of the reason I think it's
interesting is in the world as you
described you know when there are
things that you'd like to do that are
good that are objectively good it's
useful to think about whether I'm not
willing to do something or I'm not even
willing to acknowledge something
as good and the right thing to do
because I'm simply scared of putting my
life of damaging my life in some kind of
way and that kind of thought exercise is
helpful to understand what is
the right thing given my current skill set
and capacity to do so if there are
things that are convenient and I
wonder if there are things that are
highly inconvenient where I would have
to experience derision or hatred or
death or all those kinds of things but
it's truly the right thing to do and
that kind of balance I feel like in
America we don't have it's
difficult to think about in the current times
it seems easier to put yourself back in
history when you can sort of objectively
contemplate how willing you are
to do the right thing when the cost is
high true but I think we do face those
challenges today and I think we can
still ask ourselves those questions so
one stand that I took more than 40 years
ago now was to stop eating meat become a
vegetarian at a time when you hardly met
anybody who was a vegetarian or if you
did they might have been a Hindu or they
might have had some weird theories about
meat and health and I know thinking
about making that decision I was
convinced that it was the right thing to
do but I still did have to think that all
my friends are going to think that I'm a
crank
because I'm now refusing to eat meat
so you know I'm not saying there were
any terrible sanctions obviously but I
thought about that and I guess I decided
well I still think this is the right
thing to do and I'll put up with that
if it happens and one or two friends
were clearly uncomfortable with that
decision but you know that was pretty
minor compared to the historical
examples that we've been talking about
but other issues that we have around today
like global poverty and what we ought to
be doing about that is is another
question where people I think can
have the opportunity to take a stand on
what's the right thing to do now climate
change would be a third question where
again people are taking a stand you
know look at Greta Thunberg there and
say well I think it must have taken a
lot of courage for a school girl to say
I'm going to go on strike about climate
change and see what happened yeah
especially in this divisive world she
gets exceptionally huge amounts of
support and hatred both it's a very
difficult world for a teenager to operate in in
your book ethics in the real world
amazing book people should check it out
very easy read eighty two brief essays
on things that matter one of the essays
asks should robots have rights you've
written about this so let me ask should
robots have rights if we ever develop
robots capable of consciousness capable
of having their own internal perspective
on what's happening to them so that
their lives can go well or badly for
them then robots should have rights
until that happens they shouldn't so is
consciousness essentially a prerequisite
to suffering
so everything that possesses
consciousness is capable of suffering
put another way and if so what is
consciousness I certainly think that
consciousness is a prerequisite for
suffering you can't suffer if you're not
conscious
but is it true that every being that is
conscious will suffer or has to be
capable of suffering I suppose you could
imagine a kind of consciousness
especially if we can construct it
artificially that's capable of
experiencing pleasure but just
automatically cuts out the consciousness
when they're suffering sort of like
you know instant anesthesia as soon as
something is going to cause you
suffering
so that's possible but doesn't exist as
as far as we know on this planet yet if
you asked what is consciousness
philosophers often talk about it as
there being a subject of experience so
you and I and everybody listening to
this is a subject of experience there is
a conscious subject who is taking things
in responding to it in various ways
feeling good about it feeling bad about
it and that's different from the kinds
of artificial intelligence we have now I
take out my phone I ask Google
directions to where I'm going
Google gives me the directions and I
choose to take a different way you know
Google doesn't care it's not like I'm
offending Google or anything like that
there is no subjective experiences there
and I think that's the indication that
the Google AI we have now is not
conscious or at least that level of AI
is not conscious and that's the way to
think about it now it may be difficult
to tell of course whether a certain
AI is or isn't conscious it may mimic
consciousness and we can't tell if it's
only mimicking it or if it's the real
thing but that's what we're looking for
is there a subjective experience a
perspective on the world from which
things can go well or badly from that
perspective so our idea of
what suffering looks like comes from
just watching ourselves when we're in
pain or when we're experiencing
pleasure and it's not only pleasure and
pain yes yes so and then you could
actually project that
back on others but I would say that's how we
kind of build an intuition about animals
is we can infer the similarities between
humans and animals and so infer that
they're suffering or not based on
certain things and they're conscious or
not so what if robots you mentioned
Google Maps and I've done this
experiment so I work in robotics
myself and I have several
Roomba robots and I play with different
speech interaction voice based
interaction and if the Roomba or the
robot or Google Maps shows any signs of
pain like screaming or moaning or being
displeased by something you've done that
in my mind I can't help but immediately
upgrade it and even when I myself
programmed it in just having another
entity that's now for the moment
disjoint from me showing signs of pain
makes me feel like it is conscious like
I immediately and then
I immediately realize it's not obviously
but that feeling is there so sort of I
guess what do you think about a
world where Google Maps and Roombas
are pretending to be conscious and we
descendants of apes are not smart enough
to realize it or whether
they are conscious they appear to be
conscious and so you then have to give
them rights the reason I'm asking that
is that kind of capability may be closer
than we realize yes that kind of
capability may be closer but I don't
think it follows that we have to give
them rights
I suppose the the argument for saying
that in those circumstances we should
give them rights is that if we don't
we'll harden ourselves against other
beings who are not robots and who really
do suffer that's a possibility that you
know if we get used to looking at a
being suffering
and saying man we don't have to do
anything about that that being doesn't
have any rights
maybe we'll feel the same about animals
for instance and interestingly among
philosophers and thinkers who denied
that we have any direct duties to
animals and this includes people like
Thomas Aquinas and Immanuel Kant they
did say yes but still it's better not to
be cruel to them not because of the
suffering we're inflicting on the
animals but because if we are we may
develop a cruel disposition and this
will be bad for humans you know because
we were more likely to be cruel to other
humans and that would be wrong but
you don't accept that argument I don't
accept that as the basis of the
argument for why we shouldn't be cruel
to animals I think the basis of the
argument for why we shouldn't be cruel
to animals is just that we're inflicting
suffering on them and the suffering is a
bad thing but possibly I might accept
some sort of parallel of that argument
as a reason why you shouldn't be cruel
to these robots that mimic the symptoms
of pain if it's gonna be harder for
us to distinguish I would venture to say
I'd like to disagree with you and with
most people I think at the risk of
sounding crazy I would like to say that
if that Roomba is dedicated to faking
the consciousness and the suffering I
think it will be impossible for
us to tell the difference
I would I would like to apply the same
arguments with animals to robots that
they deserve rights in that sense now we
might outlaw the addition of those kinds
of features into Roombas but once you do
I think I'm quite surprised by the
upgrade in consciousness that the
display of suffering creates it's a
totally open world but I'd like to just
say the difference between animals
and humans versus the robot
case is that we've added it in ourselves
therefore we can say something about
how real it is but I would like to
say that the display of it is what makes
it real and there's some I'm not a
philosopher I'm not making that argument
but at least like to add that as a
possibility and I've been surprised by
it is all I'm trying to sort of
articulate poorly I suppose so there is a
philosophical view has been held about
humans which is rather like what you're
talking about and that's behaviorism so
behaviorism was employed both in
psychology people like BF Skinner was a
famous behaviorist
but in psychology it was more a kind of
a what is it that makes this a science
well you need to have behavior because
that's what you can observe you can't
observe consciousness but in philosophy
the view defended by people like Gilbert
Ryle who was a professor of philosophy
at Oxford and wrote a book called The
Concept of Mind in which you know in
this kind of phase this is in the 40s of
linguistic philosophy he said well the
meaning of a term is its use and we use
terms like so-and-so is in pain when we
see somebody writhing or screaming or
trying to escape some stimulus and
that's the meaning of the term so that's
what it is to be in pain and you point
to the behavior and Norman Malcolm who
was another philosopher in the school
from Cornell had the view that you
know so what is it to dream after all we
can't see other people's dreams well
when people wake up and say I just had a
dream of you know here I was undressed
walking down the Main Street or whatever
it is you've dreamt that's what it is to
have a dream it's basically to wake
up and recall something so you could
apply this to to what you're talking
about
and say so what it is to be in pain is
to exhibit these symptoms of pain
behavior and therefore these robots are
in pain that's what the word means but
nowadays not many people think that
Ryle's kind of philosophical behaviorism
is really very plausible so I think they
would say the same about your view so
yes I'd just spoken with Noam Chomsky
who basically was part of dismantling
the behaviorist movement
and I'm with that 100% for studying
human behavior but I am one of the few
people in the world who has made Roombas
scream in pain and I just don't know
what to do with that empirical evidence
because it's hard it's sort of
philosophically I agree but the only
reason I philosophically agree in that
case is because I was the programmer
but if somebody else was the programmer
I'm not sure I would be able to
interpret that well so it's uh I think
it's a new world that I was just curious
what your thoughts are for now you feel
that the display of what we can kind
of intellectually say is a fake display of
suffering is not suffering that's right
that would be my view but that's
consistent of course with the idea that
it's part of our nature to respond to
this display if it's reasonably
authentically done and therefore it's
understandable that people would feel
this and maybe as I said it's even a
good thing that they do feel it and you
wouldn't want to harden yourself against
it because then you might harden
yourself against beings who are really
suffering but there's this line you know
so you said once an artificial general
intelligence system a human level
intelligence system becomes conscious I
guess if I could just linger on it now
I've written really dumb programs that
just say things that I told them to say
but how do you know when a
system like Alexa which is sufficiently
complex that you can't introspect how it
works
starts giving you signs of consciousness
through natural language that there's a
feeling there's another entity
there that's self-aware that has a fear
of death of mortality that has awareness of
itself that we kind of associate with
other living creatures
I guess I'm sort of trying to walk the
slippery slope from the very naive thing
where I started into something
where it's sufficiently a black box to
where it's starting to feel like it's
conscious where is that threshold where you
would start getting uncomfortable with
the idea of robot suffering do you think
I don't know enough about the
programming that would go into this
really to answer this question but I
presume that somebody who does know more
about this could look at the
program and see whether we can explain
the behaviors in a harmonious way that
doesn't require us to suggest that some
sort of consciousness has emerged or
alternatively whether you're in a
situation where you say I don't know how
this is happening the program does
generate a kind of artificial general
intelligence which is autonomous you
know starts to do things itself and is
autonomous of the basic programming that
set it up and so it's quite possible
that actually we have achieved
consciousness in a system of artificial
intelligence sort of the approach to
that work that most of the community
is really excited about now is with
learning methods so machine learning and
the learning methods unfortunately
are not capable of revealing how they work which is
why somebody like Noam Chomsky
criticizes them you haven't created a
philosophy a science of how it works
and so it's possible if those are the
kinds of methods that succeed we won't
be able to know exactly sort of to try to
reduce to try to find whether this
thing is conscious or not this thing is
intelligent or not it's simply that when
we talk to it it displays wit and humor
and cleverness and emotion and fear and
then we won't be able to say where in
the billions of nodes neurons
in this artificial neural network
the fear is coming from sort of in that
case that's a really interesting place
where we do now start to return to
behaviorism and say yeah there
is an interesting issue I would
say that if we have serious doubts and
think it might be conscious then we
ought to try to give it the benefit of
the doubt just as I would say with
animals we I think we can be highly
confident that vertebrates are conscious
but when we get beyond that and some
invertebrates like the octopus but
with insects it's much harder to
be confident of that I think we should
give them the benefit of the doubt where
we can which means you know I think it
would be wrong to torture an insect but
this doesn't necessarily mean it's wrong
to slap a mosquito that's about to bite
you and stop you getting to sleep so I
think you you try to achieve some
balance in these circumstances of
uncertainty if it's okay with you if we
can go back just briefly so forty four
years ago like you mentioned forty plus
years ago you wrote Animal
Liberation the classic book that started
that launched that was the foundation of the
movement of animal liberation can
you summarize the key set of ideas that
underpin that book certainly the key
idea that underlies that book is the
concept of speciesism which I did not
invent that term I took it from a man
called Richard Ryder who was in Oxford
when I was and I saw a pamphlet that
he'd written about experiments on
chimpanzees that used that term but I
think I contributed to making it
philosophically more precise and to
getting it into a broader audience and
the idea is that we have a bias or a
prejudice against taking seriously the
interests of beings who are not members
of our species just as in the past
Europeans for example had a bias against
taking seriously
the interests of Africans racism and men
have had a bias against taking seriously
the interests of women sexism so I think
something analogous not completely
identical but something analogous goes
on and has gone on for a very long time
with the way humans see themselves
vis-à-vis animals we see ourselves as more
important we see animals as existing to
serve our needs in various ways and you
can find this very explicit in earlier
philosophers from Aristotle through to
Kant and others either we don't need to
take their interests into account at all
or we can discount it because they're
not humans they count a little bit but
they don't count nearly as much as
humans do in my book I argue that that
attitude is responsible for a lot of the
things that we do to animals that are
wrong confining them indoors in very
crowded cramped conditions in factory
farms to produce meat or eggs or milk
more cheaply using them in some research
that's by no means essential for
survival or well-being and a whole lot
you know some of the sports and things
that we do to animals
so I think that's unjustified because I
think the significance of pain and
suffering does not depend on the species
of the being who is in pain or suffering
any more than it depends on the race or
sex of the being who is in pain or
suffering and I think we ought to
rethink our treatment of animals along
the lines of saying if the pain is just
as great in an animal then it's just as
bad that it happens as if it were in a human
maybe if I could ask
I apologize hopefully it's not a
ridiculous question but so as far as we
know we cannot communicate with animals
in a natural language but we would be
able to communicate with robots so I'm
drawing just a small parallel
between perhaps animals and the future of
AI if we do create an AGI system or as
we approach creating that AGI system
what kind of questions would you ask her
to try to intuit whether
there is consciousness or
more importantly whether there's a
capacity to suffer I might ask the AGI
what she was feeling
well does she have feelings and if she
says yes to describe those feelings to
describe what they were like to see what
the phenomenal account of consciousness
is like that's one question I might also
try to find out if the AGI has a sense
of itself so for example
you know we often ask people so
suppose you're in a car accident and
your brain were transplanted into
someone else's body do you think you
would survive or would it be the person
whose body was still surviving
you know your body having been destroyed
and most people say I think I would you
know if my brain was transplanted along
with my memories and so on I would
survive so we could ask the AGI those kinds
of questions if they were transferred to
a different piece of hardware would they
survive what would survive in effect
so it's sort of on that line another
perhaps absurd question but do you think
having a body is necessary for
consciousness so do you think digital
beings can suffer presumably digital
beings need to be running on some kind
of hardware right yes it ultimately
boils down to that but this is exactly
like what we just said about moving the
brain from one place so couldn't you
move it to a different kind of hardware
you know and I could say look you know
your hardware is getting worn out we're going to
transfer you to a fresh piece of
hardware so we kind of
shut you down for a time but don't worry
you know you'll be running very soon on
a nice fresh piece of hardware and you
could imagine this conscious AGI
saying that's fine I don't mind having a
little rest just make sure you don't
lose me like that yeah I mean that's an
interesting thought that even with us
humans the suffering is in the software
we right now don't know how to repair
the hardware yeah but we're learning
we're getting better and better at it
and I mean some people
dream about one day being able to
transfer certain aspects of the software
to another piece of hardware what do you
think just on that topic there's been a
lot of exciting innovation in brain
computer interfaces I don't know if
you're familiar with the companies like
neural link with Elon Musk communicating
both ways from a computer being able to
send activate neurons and being able to
read spikes from neurons with it with
the dream of being able to expand sort
of increase the bandwidth at which your
brain can like look up articles on
Wikipedia and thereby expand
the knowledge capacity of the brain
do you think that notion is is that
interesting to you as the expansion of
the human mind yes that's very
interesting
I'd love to be able to have that
increased bandwidth and I you know if I
want better access to my memory I have
to say it's getting older you know I
talk to my wife about things that we
did 20 years ago or something her memory
is often better about particular events
where were we who was at that event what
did he or she wear even she may know
and I have not the faintest idea about
this but perhaps it's somewhere in my
memory and if I had at this extended
memory I could I could search that
particular year and rerun those things I
think that would be great in some sense
we already have that by storing so much
of our data online like pictures of
different events yes well Gmail is fantastic
for that because you know people
email me as if they know me well and
I haven't got a clue who they are but then
I search for their name they emailed me
in 2007 and I know who they are now yeah so we
already do it taking the first steps
already so on the flip side of AI people
like Stuart Russell and others focus on the
control problem value alignment in AI
which is the problem of making sure we
build systems that align to our own
values or ethics do you think sort of
high level how do we go about building
systems is it possible to build systems
that align with our values align with our
human ethics or living-being ethics
presumably it's it's possible to do that
I know that a lot of people think that
there's a real danger that we won't that
we'll more or less accidentally lose
control of AGI yeah where do you land
yourself personally I'm not quite sure
what to think I talk to philosophers
like Nick Bostrom and Toby Ord and they
think that this is a real problem where
you need to worry about then I talk to
people who work for Microsoft or
deepmind or somebody and they say no we're
not really that close to producing AGI
you know superintelligence so if you
look at Nick Bostrom's sort of arguments
it's very hard to defend so of course
I myself engineer AI systems so I'm
more with the deepmind folks where it
seems that we're really far away but
then the counter-argument is is there
any fundamental reason why we'll
never achieve it and if not then
eventually there will be a dire
existential risk so we should be
concerned about it do you
find that argument at all appealing in
this domain or any domain that
eventually this will be a problem so we
should be worried about it yes I think
it's a problem I think there's that's a
valid point of course when you say
eventually that raises the question how
far off is that and is there something
that we can do about it now because if
we're talking about this is going to be
a hundred years in the future
and you consider how rapidly
our knowledge of artificial intelligence
has grown in the last 10 or 20 years it
seems unlikely that there's anything
much we could do now that would
influence whether this is going to
happen a hundred years in the future you
know people in 80 years in the future
would be in a much better position to
say this is what we need to do to
prevent this happening then than we are
now
so to some extent I find that reassuring
but I'm all in favor of some people
doing research into this to see if
indeed it is that far off or if we are
in a position to do something about it
sooner I'm I'm very much of the view
that extinction is a terrible thing and
therefore even if the risk of extinction
is very small if we can reduce that risk
that's something that we ought to do my
disagreement with some of these people
who talk about long term risks
extinction risks is only about how much
priority that should have compared to
present questions so if you look
at the math of it from a utilitarian
perspective if it's an existential risk so
everybody dies then it feels
like an infinity in the math equation
and that makes the math of where the
priority goes difficult to do if we
don't know the time scale and you can
legitimately argue there's a nonzero
probability that it all happens tomorrow
so how do you deal with these kinds of
existential risks like from nuclear war
from nuclear weapons from biological
weapons from I'm not sure if global
warming falls into that category because
global warming is a lot more gradual
mm-hmm and people say it's not an
existential risk because there'll always
be possibilities of some humans existing
farming Antarctica or somewhere in
Siberia or something of that sort yeah
but don't you find these
complete existential risks a fundamental
like an overriding part of the equations
of ethics I wouldn't no you know
certainly if you treated as an infinity
then it plays havoc with any
calculations
arguably we shouldn't one of the
ethical assumptions that goes into this
is that the loss of future lives that is
of merely possible lives of beings who
may never exist at all is in some way
comparable to the sufferings or deaths
of people who do exist at some point
and that's not clear to me I think
there's a case for saying that but I
also think there's a case for taking the
other view so that has some impact on it
of course you might say ah yes but still
if there's some uncertainty about this
and the costs of extinction are
infinite then still it's gonna overwhelm
everything else but I suppose I'm
not convinced of that I'm not convinced
that it's really infinite here and even
Nick Bostrom in his discussion of this
doesn't claim that there'll be an
infinite number of lives lived is it
10 to the 56th or something
it's a vast number that I think he
calculates this is assuming we can
upload consciousness onto you know
digital forms
and that they'll be much more
energy efficient and he calculates the
amount of energy in the universe or
something like that so it's vast
but not infinite which gives you some
prospect maybe of resisting some of the
argument the beautiful thing with
Nick's arguments is he quickly jumps
from the individual scale to the
universal scale which is just
awe-inspiring to think right when you
think about the entirety of the span of
time of the universe it's both
interesting from a computer science
perspective AI perspective and from an
ethical perspective the idea of
utilitarianism because you say what is
utilitarianism utilitarianism is the
ethical view that the right thing to do
is the act that has the greatest
expected utility where what that means
is it's the act that will produce the
best consequences discounted by the odds
that you won't be able to produce those
consequences that something will go
wrong but in simple case let's assume we
we have certainty about what the
consequences of our actions will be
then the right action is the action that
will produce the best consequences is
that always and by the way there's a
bunch of nuanced stuff in the talk with Sam
Harris on this podcast that people
should go listen to it's a great
two hours of moral philosophy discussion
but is that an easy calculation no it's
a difficult calculation and actually
there's one thing that I need to add and
that is utilitarians certainly the
classical utilitarians think that by
best consequences we're talking about
happiness and the absence of pain and
suffering there are other
consequentialists who are not really
utilitarians who say there are different
things that could be good consequences
justice freedom you know human dignity
knowledge they all count as good
consequences too and that makes the
calculations even more difficult because
then you need to know how to balance
these things off if you are just talking
about well-being using that term to
express happiness and the absence of
suffering I think that the calculation
becomes more manageable in a
philosophical sense it's still in
practice we don't know how to do it we
don't know how to measure quantities of
happiness and misery we don't know how
to calculate the probabilities that
different actions will produce this or
that so at best we can use it as a
rough guide to different actions and in a
way we have to focus on the short-term
consequences because we just can't
really predict all of the longer-term
ramifications
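Singer's verbal definition of the right action above corresponds to the standard decision-theoretic formula; the notation here is added for clarity and is not something stated symbolically in the conversation:

```latex
\mathrm{EU}(a) \;=\; \sum_{o} P(o \mid a)\, V(o),
\qquad a^{*} \;=\; \arg\max_{a} \mathrm{EU}(a)
```

where $P(o \mid a)$ is the probability that act $a$ produces outcome $o$ (the "discounting by the odds that something will go wrong") and $V(o)$ is the value of that outcome, which for classical utilitarians is net happiness minus suffering; in the certainty case he mentions, $P$ collapses to 1 for a single outcome and the best act is simply the one with the best consequences.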
so what about the
extreme suffering of very small
groups utilitarianism is focused
on the overall aggregate right would
you say you're yourself a
utilitarian and
what do you make of the difficult
ethical maybe poetic suffering of very
few individuals I think it's possible
that that gets outweighed by benefits
to very large numbers of individuals
I think that can can be the right answer
but before we conclude that is the right
answer we have to know how severe the
suffering is and how that compares with
the benefits so I tend to think that
extreme suffering is if you like
further below the neutral
level than extreme happiness or bliss is
above it so when I think about the worst
experience as possible and the best
experience as possible I don't think of
them as equidistant from neutral so if
it's a scale that goes from minus 100
through zero as a neutral level to plus
a hundred I know that I would
not exchange an hour of my most
pleasurable experiences for an hour of
my most painful experiences I
wouldn't have an hour of my most painful
experiences even for two hours or ten
hours of my most pleasurable experiences
did I say that correctly yeah maybe 20 hours
then yeah well what's the exchange
rate oh that's the question what is the
exchange rate but I think it's it can be
quite high so that's why you shouldn't
just assume that you know it's okay to
make one person suffer extremely in
order to make two people much better off
it might be a much larger number but at
some point I do think you should
aggregate and and the result will be
even though it violates our intuitions
of justice and fairness whatever it
might be giving priority to those who
are worse off at some point I still
think that will be the right thing to do
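The "exchange rate" idea discussed here can be sketched numerically; the weighting function, the choice of 20 as the rate, and all the numbers below are purely illustrative assumptions for the sake of the sketch, not anything Singer endorses:

```python
# Toy model of the asymmetric hedonic scale: suffering below the
# neutral level (0) is weighted more heavily than happiness above it,
# so one hour at -100 is not cancelled by one hour at +100.
def weighted_value(hedonic_level, hours, suffering_weight=20.0):
    """Return weighted utility for `hours` at `hedonic_level` in [-100, 100].

    suffering_weight is the illustrative 'exchange rate': how many hours
    of peak bliss one hour of peak agony outweighs (assumed, not derived).
    """
    if hedonic_level < 0:
        return suffering_weight * hedonic_level * hours
    return hedonic_level * hours

# One hour of the worst experience vs. ten hours of the best:
total = weighted_value(-100, 1) + weighted_value(100, 10)
# -2000 + 1000 = -1000: still negative, so the trade is refused,
# matching the intuition that the exchange rate is high.
```

With a high enough weight on suffering the aggregate can still come out positive for very large numbers of beneficiaries, which is the point about aggregation eventually winning out.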
yes it's a complicated nonlinear function
let me ask the sort of out there question
as we put more and more of our data out
there the more we're able to measure a
bunch of factors of each of our
individual human lives and I guess
foresee the ability to estimate
well-being of whatever we
together collectively agree is a
good objective function from a
utilitarian perspective do
you think it'll be possible and is it a
good idea to
push that kind of analysis to make then
public decisions perhaps with the help
of AI that you know here's the tax rate
at which well-being
will be optimized yeah that would be
great if we really knew that
if we really could calculate that
but do you think it's possible to
converge towards an agreement amongst
humans towards an objective function or
is it just a hopeless pursuit I don't think
it's hopeless I think it would be
difficult to converge towards
agreement at least at present because
some people would say you know I've got
different views about justice and I
think you ought to give priority to
those who are worse off even though I
acknowledge that the gains that the
worst off are making are less than the
gains that those who are sort of medium
badly off could be making so we still
have all of these intuitions that we we
argue about so I don't think we would
get agreement but the fact that we
wouldn't get agreement doesn't show that
there isn't a right answer there so who
gets to say what is right and
wrong do you think there's a place for
ethics oversight from the
government I'm thinking in the case
of AI overseeing what kind of
decisions AI can and can't make and
also if you look at animal rights
or rather not rights or perhaps rights
but the ideas you've explored in
Animal Liberation who gets to say so you
eloquently beautifully write in your
book that you know
we shouldn't do this but are there some
harder rules that should be imposed or
is this a collective thing we would
converge towards as a society and thereby
make better and better ethical
decisions politically I'm still a
Democrat despite looking at the flaws in
democracy and the ways it doesn't always
work very well so I don't see a better
option than allowing the public to vote
for governments in accordance with their
policies and I hope that they will vote
for policies that
reduce the suffering of animals and
reduce the suffering of distant humans
whether geographically distant or
distant because their future humans but
I recognize that democracy isn't really
well set up to do that and in a sense
you could imagine a wise and benevolent
you know omni-benevolent leader who
would do that better than democracies
could but in the world in which we live
it's difficult to imagine that this
leader isn't going to be corrupted by a
variety of influences you know we've
we've had so many examples of people
who've taken power with good intentions
and then have ended up being corrupt and
favoring themselves so I don't know if
you know that's why as I say I don't
know that we have a better system than
democracy to make this decision well so
you also discuss effective altruism
which is a mechanism for going around
government for putting the power in the
hands of the people to donate money
towards causes to help you know
to remove the middleman and give
it directly to the causes they
care about maybe this is a good
time to ask you 10 years ago you wrote
The Life You Can Save that's now I think
available for free online that's right
you can download either the ebook or the
audiobook free from
thelifeyoucansave.org and what are the
key ideas presented in the book the main thing I
want to do in the book is to make people
realize that it's not difficult to help
people in extreme poverty that there are
highly effective organizations now
doing this that have been
independently assessed and verified by
research teams that are expert in this
area and that it's a fulfilling thing to
do for at least part of your life you
know we can't all be Saints but at least
one of your goals should be to really
make a positive contribution to the
world and to do something to help people
who through
no fault of their own are in very dire
circumstances and and living a life that
is barely or perhaps not at all a decent
life for a human being to live so you
described a minimum ethical standard of
giving what what advice would you give
to people that want to be effectively
altruistic in their life like live an
effective altruism life there are many
different kinds of ways of living as an
effective altruists and if you're at the
point where you're thinking about your
long term career I'd recommend you take
a look at a website called 80,000 hours
80,000 hours org which looks at ethical
career choices and they range from for
example going to work on Wall Street so
that you can earn a huge amount of money
and then donate most of it to effective
charities to going to work for a really
good nonprofit organization so that you
can directly use your skills and ability
and hard work to further a good cause or
perhaps going into politics maybe small
chances but big payoffs in
politics or go to work in the public
service where if you're talented you
might rise to a higher level where you
can influence decisions do research in
an area where the payoff could be great
there are a lot of different
opportunities but too few people are
even thinking about those questions
they're just going along in some sort of
preordained rut to particular careers
maybe they think they land a lot of
money and have a comfortable life but
they may not find that as fulfilling as
actually knowing that they're making a
positive difference to the world what
about in terms of so that's like
long-term 80,000 hours sure what about the
shorter-term giving part well actually it's
a part of that the go to work at Wall
Street path if you would like to give a
percentage of your income you talk about
that in The Life You Can Save I mean I
was looking through it it's quite
compelling I mean I'm just a dumb
engineer so I like simple
rules okay so I do actually set out
suggested levels of giving because
people often ask me about this a popular
answer is give 10% the traditional tithe
that's recommended in Christianity and
also Judaism but you know why should it
be the same percentage irrespective of
your income tax scales reflect the idea
that the more income you have the more
you can pay in tax and I think the same is
true of what you can give so I do set
out a progressive donor scale which
starts out at 1% for people on modest
incomes and rises to 33 and a third
percent for people who are really
earning a lot and my idea is that I
don't think any of these amounts really
impose real hardship on people
because they are progressive and geared
to income so I think anybody can do this
and can know that they're doing
something significant to play their part
in reducing the huge gap between people
in extreme poverty in the world and
people living affluent lives and aside
from it being an ethical life it's
one that feels more fulfilling because
there's something about our human nature
or at least some of our human natures
maybe most that enjoys
doing the ethical thing yes I make
both those arguments that it is an
ethical requirement in the kind of
world we live in today to help people in
great need when we can easily do so but
also that it is a rewarding thing and
there's good psychological research
showing that people who give more tend
to be more satisfied with their lives
and I think this has something to do
with with having a purpose that's larger
than yourself and therefore never being
if you like never being bored
sitting around thinking oh you know what will I
do next I've got nothing to do in a
world like this there are many good
things that you can do and enjoy doing
them plus you're working with other
people in the effective altruism
movement
forming a community of other people with
similar ideas and they tend to be
interesting thoughtful and good people
as well and having friends of that sort
is another big contribution to having a
good life so we talked about big things
that are beyond ourselves but
we're also just human and mortal do you
ponder your own mortality is there
insights about your philosophy the
ethics that you gain from pondering your
own mortality clearly you know as you
get into your 70s you can't help
thinking about your own mortality but I
don't know that I have great insights
into that from my philosophy I don't
think there's anything after the death
of my body
assuming that we won't be able to upload
my mind into anything at the time when I
die so I don't think there's any
afterlife or anything to look forward to
in that sense we fear death so if you
look at Ernest Becker describing the
motivating aspects of our ability to
be cognizant of our mortality do you
have any of those elements
driving your motivation in life I suppose
the fact that you have only a limited
time to achieve the things that you want
to achieve gives you some sort of
motivation to get going and achieving
them and if we thought we were immortal we
might say you know I can put that off
for another decade or two so there's
that about it but otherwise you know no
I'd rather have more time to do more I'd
also like to be able to see how things
go that I'm interested in you know is
climate change going to turn out to be as
dire as a lot of scientists say that it
is going to be will we somehow scrape
through with less damage than we thought
I'd really like to know the answers to
those questions but I guess I'm not
going to well you said there's nothing
afterwards so let me ask the even more
absurd question what do you think is the
meaning of it all I think the meaning of
life is the meaning we give to it I
don't think that we were brought into
the universe
for any kind of larger purpose but given
that we exist I think we can recognize
that some things are objectively bad
extreme suffering is an example and
other things are objectively good like
having a rich fulfilling enjoyable
pleasurable life and we can try to do
our part in reducing the bad things and
increasing the good things so one way the
meaning is to do a little bit more of
the good things objectively good things
and a little bit less of the bad things
yes or do as much of the good things as
you can and as little of the bad
things as possible beautifully put I don't
think there's a better place to end it
thank you so much for talking today
thanks very much Lex it's been really
interesting talking to you thanks for
listening to this conversation with
Peter Singer and now let me
leave you with some words from Peter
Singer what one generation finds
ridiculous the next accepts and the
third
shudders when it looks back on what the
first did thank you for listening and hope to
see you next time