Kate Darling: Social Robotics | Lex Fridman Podcast #98
7KTbEn7PiaY • 2020-05-23
The following is a conversation with Kate Darling, a researcher at MIT interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and lifelike machines, which for me is one of the most exciting topics in all of artificial intelligence. As she writes in her bio, she's a caretaker of several domestic robots, including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti. She is one of the funniest and brightest minds I've ever had the fortune to talk to. This conversation was recorded recently, but before the outbreak of the pandemic. For everyone feeling the burden of this crisis, I'm sending love your way.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

Quick summary of the ads: two sponsors, MasterClass and ExpressVPN. Please consider supporting the podcast by signing up to MasterClass at masterclass.com/lex and getting ExpressVPN at expressvpn.com/lexpod.

This show is sponsored by MasterClass. Sign up at masterclass.com/lex to get a discount and to support this podcast.
When I first heard about MasterClass, I thought it was too good to be true. For $180 a year, you get an all-access pass to watch courses from, to list some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, love those games, on game design, Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast.

This show is also sponsored by ExpressVPN. Get it at expressvpn.com/lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use: press the big power-on button and your privacy is protected, and if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux, shout-out to Ubuntu 20.04, Windows, Android, but it's available everywhere else too. Once again, get it at expressvpn.com/lexpod to get a discount and to support this podcast.

And now, here's my conversation with Kate Darling.
You co-taught robot ethics at Harvard. What are some ethical issues that arise in the world with robots?

Yeah, that was a reading group that I did when I, at the very beginning, first became interested in this topic, so I think if I taught that class today, it would look very, very different. Robot ethics, it sounds very science-fictiony, especially back then, but I think that some of the issues that people in robot ethics are concerned with are just around the ethical use of robotic technology in general. So, for example, responsibility for harm, automated weapon systems, things like privacy and data security, things like automation and labor markets. And then, personally, I'm really interested in some of the social issues that come out of our social relationships with robots.

One-on-one relationships with robots? Yeah, I think most of the stuff we have to talk about is, like, one-on-one social stuff. That's what I love, and I think that's what you're, you know, an expert in as well. But on a societal level, there's a presidential candidate now, Andrew Yang, running, concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI, universal basic income, where everybody gets a thousand bucks, as a way to sort of save you if you lose your job from automation, to allow you time to discover what it is that you would like to do, or even love to do.

Yes. So I lived in Switzerland for 20 years, and universal basic income has been more of a topic there, separate from the whole robots-and-jobs issue. So it's so interesting to me to see these Silicon Valley people latch on to this concept that came from a very kind of left-wing, socialist, you know, kind of a different place in Europe.
But on the automation and labor markets topic, I think that sometimes in those conversations people overestimate where robotic technology is right now. And we also have this fallacy of constantly comparing robots to humans and thinking of this as a one-to-one replacement of jobs. So even Bill Gates, a few years ago, said something about, you know, maybe we should have a system that taxes robots for taking people's jobs. And, I mean, I'm sure that was taken out of context, he's a really smart guy, but that sounds to me like viewing it as a one-to-one replacement, versus viewing this technology as a supplemental tool that, of course, is going to shake up a lot of stuff, it's going to change the job landscape. But I don't see robots taking all the jobs in the next 20 years. That's just not how it's going to work.

All right, so maybe drifting into the land of more personal relationships with robots, and interaction, and so on. I've got to warn you, I may ask some silly philosophical questions. I apologize.
Please do.

Okay. Do you think humans will abuse robots in their interactions? So you've had a lot of, and we'll talk about it, sort of anthropomorphization in your work, you know, this intricate emotional dance between human and robot. But there seems to also be a darker side, where people, when they treat the other as a servant especially, can be a little bit abusive, or a lot abusive. Do you think about that? Do you worry about that?

Yeah, I do think about that. So, I mean, one of my main interests is the fact that people subconsciously treat robots like living things, even though they know that they're interacting with a machine, and what it means in that context to behave violently. I don't know if you could say abuse, because you're not actually abusing the inner mind of the robot. The robot doesn't have any feelings.

As far as you know.

Well, yeah, it also depends on how we define feelings and consciousness, but I think that's another area where people kind of overestimate where we currently are with the technology. Like, the robots are not even as smart as insects right now, so I'm not worried about abuse in that sense. But it is interesting to think about what people's behavior towards these things means for our own behavior. Is it desensitizing people to, you know, be verbally abusive to a robot, or even physically abusive? And we don't know.

It's a similar connection to, if you play violent video games, what connection does that have to desensitization to violence? I haven't read the literature on that. I wonder about that, because from everything I've heard, people don't seem to any longer be so worried about violent video games.

Correct.
The research on it is, it's a difficult thing to research, so it's sort of inconclusive, but we seem to have gotten a sense, at least as a society, that people can compartmentalize: when it's something on a screen and you're, like, shooting a bunch of characters or running over people with your car, that doesn't necessarily translate to you doing that in real life. We do, however, have some concerns about children playing violent video games, and so we do restrict it there. I'm not sure that's based on any real evidence either, but it's just the way that we've kind of decided, you know, we want to be a little more cautious there.

And the reason I think robots are a little bit different is that there is a lot of research showing that we respond differently to something in our physical space than to something on a screen. We will treat it much more viscerally, much more like a physical actor. And so it's totally possible that this is not a problem, and it's the same thing as violence in video games, you know, maybe restrict it with kids to be safe, but adults can do what they want. But we just need to ask the question again, because we don't have any evidence at all yet.

Maybe there's an intermediate place, too. I did my research on Twitter. By research, I mean scrolling through your Twitter feed.
You mentioned that you were going at some point to an animal law conference. So I have to ask: do you think there's something that we can learn from animal rights that guides our thinking about robots?

Oh, I think there is so much to learn from that. I'm actually writing a book on it right now; that's why I'm going to this conference. So I'm writing a book that looks at the history of animal domestication and how we've used animals for work, for weaponry, for companionship. And, you know, one of the things the book tries to do is move away from this fallacy that I talked about of comparing robots and humans, because I don't think that's the right analogy. But I do think that on a social level there's so much that we can learn from looking at that history, because throughout history we've treated most animals like tools, like products, and then some of them we've treated differently. And we're starting to see people treat robots in really similar ways, so I think it's a really helpful predictor of how we're going to interact with robots.

Do you think we'll look back at this time, like a hundred years from now, and see what we do to animals as similar to the way we view, like, the Holocaust in World War II?

That's a great question. I mean, I hope so. I'm not convinced that we will, but I often wonder, you know, what are my grandkids going to view as abhorrent that my generation did, that they would never do? And I'm like, well, what's the big deal? You know, it's a fun question to ask yourself.

It always seems that there are atrocities that we discover later. So the things that at the time people didn't see as, you know, you look at everything from slavery to any kinds of abuse throughout history, to the kind of insane wars that were happening, to the way war was carried out, and rape and the kind of violence that was happening during war, that we now see as atrocities, but at the time perhaps didn't as much.

And so now,
I have this intuition, I have this worry, maybe you're going to criticize me, but I do anthropomorphize robots. I don't see a fundamental philosophical difference between a robot and a human being, in terms of, once the capabilities are matched. So the fact that we're really far away doesn't, in terms of capabilities, and that's from natural language processing, understanding and generation, to just reasoning and all that stuff, I think once you solve it, I see, though, this is a very gray area, and I don't feel comfortable with the kind of abuse that people throw at robots. It's subtle, but I can see it becoming, I can see basically a civil rights movement for robots in the future. Do you think, let me put it in the form of a question, do you think robots should have some kinds of rights?

Well, it's interesting, because I came at this originally from your perspective. I was like, you know what, there's no fundamental difference between technology and, like, human consciousness. Like, we can probably recreate anything, we just don't know how yet, and so there's no reason not to give machines the same rights that we have once, like you say, they're kind of on an equivalent level. But I realized that that is kind of a far-future question. I still think we should talk about it, because I think it's really interesting, but I realized that we might need to ask the robot rights question even sooner than that, while the machines are still, quote-unquote, really dumb and not on our level, because of the way that we perceive them. And I think one of the lessons we learn from looking at the history of animal rights, and one of the reasons we may not get to a place in a hundred years where we view it as wrong to eat or otherwise use animals for our own purposes, is because historically we've always protected those things that we relate to the most. So one example is whales. No one gave a shit about the whales. Am I allowed to swear?

Yeah, freedom.

No one gave a shit about the whales until someone recorded them singing, and suddenly people were like, oh, this is a beautiful creature, and now we need to save the whales. And that started the whole Save the Whales movement in the '70s. So as much as I, and I think a lot of people, want to believe that we care about consistent biological criteria, that's not historically how we've formed our alliances.

Yeah. So why do we believe that all humans are created equal?
The killing of a human being, no matter who the human being is, that's what I meant by equality, is bad. And then, because I'm connecting that to robots, I'm wondering whether mortality, so the killing act, is what makes something, whether that's the fundamental first right. So I am currently allowed to take a shotgun and shoot a Roomba, I think, I'm not sure, but I'm pretty sure it's not considered murder, right? Or even shutting them off. So that's where the line appears to be, right? Is mortality a critical thing here?

I think here, again, the animal analogy is really useful, because you're also allowed to shoot your dog, but people won't be happy about it. So we do give animals certain protections, like, you know, you're not allowed to torture your dog and set it on fire, at least in most states and countries, but you're still allowed to treat it like a piece of property in a lot of other ways. And so we draw these arbitrary lines all the time. And, you know, there's a lot of philosophical thought on why viewing humans as something unique is just speciesism, and not based on any criteria that would actually justify making a difference
between us and other species.

Do you think, in general, most people are good? Do you think there's evil and good in all of us that's revealed through our circumstances and through our interactions?

I like to view myself as a person who believes that there's no absolute evil and good, and that everything is, you know, gray. But I do think it's an interesting question. Like, when I see people being violent towards robotic objects, you said that bothers you because the robots might someday, you know, be smart. Is that what...?

Well, it bothers me because it reveals, so I personally believe, maybe because I'm Jewish, I've studied the Holocaust and World War II exceptionally well, I personally believe that most of us have evil in us. What bothers me is that the abuse of robots reveals the evil in human beings. And I think it doesn't just bother me, I think it's an opportunity for roboticists to help people find the better sides, the angels of their nature, right? That abuse isn't just a fun side thing, that's you revealing a dark part that should be hidden deep inside.

Yeah. I mean, some of our research does indicate that maybe people's behavior towards robots reveals something about their tendencies for empathy generally, even using very simple robots that we have today that clearly don't feel anything. So, you know, Westworld is maybe, you know, not so far off. It's depicting the bad characters as willing to go around and shoot and rape the robots, and the good characters as not wanting to do that, even without assuming that the robots have consciousness.

So there's an opportunity, it's an opportunity to almost practice empathy. Robots are an opportunity to practice empathy.

I agree with you. Some people would say, why are we practicing empathy on robots instead of, you know, on our fellow humans, or on animals that are actually alive and experience the world? And I don't agree with them, because I don't think empathy is a zero-sum game, and I do think that it's a muscle that you can train and that we should be doing that, but some people disagree.

So the
interesting thing is, you've heard of, you know, people raising kids sort of asking them, or telling them, to be nice to the smart speakers, to Alexa and so on, saying please and so on during the requests. I don't know if I'm a huge fan of that idea, because, yeah, it's towards the idea of practicing empathy. I feel like politeness, I'm always polite to all the systems that we build, especially anything that's speech-interaction-based, like when we talk to the car. I'll always have a pretty good detector for "please." I feel like there should be room for encouraging empathy in those interactions.

Yeah. Okay, so I agree with you, so I'm going to play devil's advocate.

So what is the devil's advocate argument there?

The devil's advocate argument is that if you are the type of person who has abusive tendencies, or needs to get some sort of, like, behavior like that out, needs an outlet for it, it's great to have a robot that you can scream at so that you're not screaming at a person. And we just don't know whether that's true, whether it's an outlet for people, or whether it just, as my friend once said, trains their cruelty muscles and makes them more cruel in other situations.

Oh boy. Yeah, and that expands
to other topics, which, I don't know, there's a topic of sex, which is a weird one that I tend to avoid from a robotics perspective, and mostly the general public doesn't: they talk about sex robots and so on. Is that an area you've touched at all, research-wise? Because that's what people imagine, sort of, any kind of interaction between human and robot that shows any kind of compassion, they immediately think of, from a product perspective in the near term, as sort of an expansion of what pornography is, and all that kind of stuff.

Yeah, it's kind of you to characterize it as, like, thinking rationally about product. I feel like sex robots are just such a titillating news hook for people that they become the story, and it's really hard to not get fatigued by it when you're in the space, because you tell someone you do human-robot interaction, and of course the first thing they want to talk about is sex robots.

Really?

Yeah, it happens a lot. And it's unfortunate that I'm so fatigued by it, because I do think that there are some interesting questions that become salient when you talk about, you know, sex with robots.

See, what I think would happen when people get sex robots, like if you, let's say guys, okay, guys get female sex robots, what I think there's an opportunity for is that they'll actually interact. What I'm trying to say is that, outside of the sex, the interaction would be the most fulfilling part. It's like the folks, there are movies on this, right, who pay a prostitute and then end up just talking to her the whole time. So I feel like there's an opportunity. It's like most guys, and people in general, joke about the sex act, but really people are just lonely inside and are looking for connection, many of them. And it'd be unfortunate if that connection is established through the sex industry. I feel like it should go more through the front door of: people are lonely and they want a connection.

Well, I
also feel like we should kind of, you know, destigmatize the sex industry, because, you know, even prostitution: there are prostitutes that specialize in disabled people, who don't have the same kind of opportunities to explore their sexuality. So I feel like we should destigmatize all of that generally. But, yeah, that connection and that loneliness is an interesting topic that you bring up, because while people are constantly worried about robots replacing humans, and, oh, if people get sex robots and the sex is really good, then they won't want their, you know, partner or whatever, we rarely talk about robots actually filling a hole where there's nothing, and what benefit that can provide to people.

Yeah, I think that's exciting. There's a giant hole that's not fillable by humans. It's asking too much of people, your friends and people you're in a relationship with and your family, to fill that hole. Because, you know, it's exploring the full complexity and richness of who you are. Like, who are you really? The people in your family don't have enough patience to really sit there and listen to who you really are. And I feel like there's an opportunity to really make that connection with robots.

I just think we're really complex as humans, and we're capable of lots of different types of relationships. So whether that's, you know, with family members, with friends, with our pets, or with robots, I feel like there's space for all of that, and all of that can provide value in a different way.

Yeah, absolutely. So I'm jumping
around. Currently most of my work is in autonomous vehicles, and the most popular topic there is the trolley problem. Most roboticists kind of hate this question, but what do you think of this thought experiment? What do you think we can learn from it, outside of the silliness of the actual application of it to the autonomous vehicle? I think it's still an interesting ethical question, and in itself, just like much of the interaction with robots, it has something to teach us. But from your perspective, do you think there's anything there?

Well, I think you're right that it does have something to teach us. But I think what people are forgetting in all of these conversations is the origins of the trolley problem and what it was meant to show us, which is that there is no right answer, and that sometimes our moral intuition that comes to us instinctively is not actually what we should follow if we care about creating systematic rules that apply to everyone. So I think that, as a philosophical concept, it could teach us at least that. But that's not how people are using it right now. Like, we have, and these are friends of mine, and I love them dearly and their project adds a lot of value, but if we're viewing the Moral Machine project as what we can learn from the trolley problem... The Moral Machine is, I'm sure you're familiar, this website that you can go to, and it gives you different scenarios, like, oh, you're in a car, you can decide to run over, you know, these two people or this child, what do you choose? Do you choose the homeless person? Do you choose the person who's jaywalking? And so it pits these moral choices against each other and then tries to crowdsource the quote-unquote correct answer, which is really interesting, and I think valuable data, but I don't think that's what we should base our rules in autonomous vehicles on. Because it is exactly what the trolley problem is trying to show, which is that your first instinct might not be the correct one if you look at rules that then have to apply to everyone and everything.

So how do we encode these
everything so how do we encode these
ethical choices in interaction with
robots so for example Lata knows
vehicles there is a serious ethical
question of do I protect myself but
that's my life I have higher priority
than the life of another human being
because that changes certain control
decisions that you make so if your life
matters more than other human beings
then you'd be more likely to swerve out
of your current lane so currently
automated emergency braking systems that
just break they don't ever swerve right
so swerving into oncoming traffic or or
no just in a different Lane can cause
significant harm to others but it's
possible that it causes less harm to you
so that's a difficult ethical question
do you you do you do you have a hope
that like the trolley problem is not
supposed to have a right answer
do you hope that when we have robots at
the table we'll be able to discover the
right answer for some of these questions
Well, what's happening right now, I think, is that this question we're facing, of what ethical rules we should be programming into the machines, is revealing to us that our ethical rules are much less programmable than we, you know, probably thought before. And so that's a really valuable insight. I think that these issues are very complicated, and that in a lot of these cases you can't really make that call, not even as a legislator. And so what's going to happen in reality, I think, is that, you know, car manufacturers are just going to try and avoid the problem and avoid liability in any way possible, or they're going to always protect the driver, because who's going to buy a car if it's programmed to kill you instead of someone else? So that's what's going to happen in reality. But what did you mean by "once we have robots at the table"? Do you mean when they can help us figure out what to do?

No, I mean when robots are part of the ethical decisions. So, no, not when they help us...

Oh, you mean when it's, like, should I run over a robot or a person?

Right, that kind of thing. No, no, no. It's exactly what you said, which is, when you have to encode the ethics into an algorithm, you start to try to really understand what the fundamentals of the decision-making process are that make you make certain decisions. Should you, like capital punishment, should you take a person's life or not, to punish them for a certain crime? Sort of, you can develop an algorithm to make that decision, right? And the hope is that the act of making that algorithm, however you make it, so there's a few approaches, will help us actually get to the core of what is right and what is wrong under our current societal standards.
But isn't that what's happening right now? We're realizing that we don't have a consensus on what's right and wrong.

You mean in politics in general?

Well, like, when we're thinking about these trolley problems and autonomous vehicles, and how to program ethics into machines, and how to make AI algorithms fair and equitable, we're realizing that this is so complicated, and it's complicated in part because there doesn't seem to be one right answer in any of these cases.

Do you hope for, one of the ideas of the Moral Machine is that crowdsourcing can help us converge, like democracy can help us converge, towards the right answer. Do you have a hope for crowdsourcing?

Well, yes and no. So I think that in general, you know, I have a legal background, and policymaking is often about trying to suss out what rules this particular society agrees on, and then trying to codify that. So the law makes these choices all the time, and then tries to adapt according to changing culture. But in the case of the Moral Machine project, I don't think that people's choices on that website necessarily reflect what laws they would want in place. I think you would have to ask them a series of different questions in order to get at what their consensus is.

I agree, but that has to do more with the artificial nature of, I mean, they're showing some cute icons on a screen. So if, for example, we do a lot of work in virtual reality, and so if you put those same people into virtual reality where they have to make that decision, it would be very different, I think.

I agree with that. That's one aspect. And the other aspect is that it's a different question to ask someone, would you run over the homeless person or the doctor in this scene, versus, do you want cars to always run over the homeless people?

Yeah. So let's
talk about anthropomorphism. Anthropomorphism, if I can pronounce it correctly, is one of the most fascinating phenomena, from both the engineering perspective and the psychology perspective, the machine learning perspective, and robotics in general. Can you step back and define anthropomorphism, how you see it in general terms in your work?

Sure. So anthropomorphism is this tendency that we have to project human-like traits and behaviors and qualities onto non-humans. We often see it with animals, like we'll project emotions onto animals that may or may not actually be there. Often we're trying to interpret things according to our own behavior when we get it wrong. But we do it with more than just animals: we do it with objects, you know, teddy bears; we see faces in the headlights of cars. And we do it with robots very, very extremely.

Do you think that can be engineered? Can it be used to enrich the interaction between an AI system and a human?

Oh yeah, for sure.

And do you see it being used that way often? Like, I haven't seen, whether it's Alexa or any of the smart speaker systems, often trying to optimize for the anthropomorphization.

You said you haven't seen?

I haven't seen. They keep moving away from that. I think they're afraid of that.

They actually, so I only recently found out, but did you know that Amazon has, like, a whole team of people who are just there to work on Alexa's personality?

I didn't know that exact thing, but I do know that how the voice is perceived has been worked on a lot, whether there's a pleasant feeling about the voice. But that has to do more with the texture of the sound than with what the personality is.

The personality is more like, what's her favorite beer when you ask her? And the personality team is different for every country, too. Like, there's a different personality for German Alexa than there is for American Alexa. That said, I think it's very difficult to really harness the anthropomorphism with these voice assistants, because the voice interface is still very primitive, and I think that in order to get people to really suspend their disbelief and treat a robot like it's alive, less is sometimes more. You want them to project onto the robot, and you want the robot to not disappoint their expectations for how it's going to answer or behave, in order for them to have this kind of illusion. And with Alexa, I don't think we're there yet, or Siri; they're just not good at that. But if you look at some of the more animal-like robots, like the baby seal that they use with dementia patients, it's a much more simple design. It doesn't try to talk to you, so it can't disappoint you in that way. It just makes little movements and sounds, and people stroke it, and it responds to their touch, and that is a very effective way to harness people's tendency to kind of treat the robot like a living thing.

Yeah. So
you bring up some interesting ideas in
your paper chapter I guess at the poem
Orphic framing human robot interaction
that I read the last time we scheduled
this a long time what are some good and
bad cases event them for morphism and in
your perspective like one is the good
one is it bad well I just start by
saying that you know while design can
really enhance the end the premiere film
it doesn't take a lot to get people to
treat a robot like it's alive like
people will over 85% of rumbas have a
name which I'm I don't know the numbers
for your regular type of vacuum cleaner
but they're not that high right so
people will feel bad for the Roomba
when it gets stuck they'll send it in
for repair and want to get the same one
back and that's one that's not even
designed to like make you do that so I
think that some of the cases where it's
maybe a little bit concerning that
anthropomorphism is happening is when
you have something that's supposed to
function like a tool and people are
using it in the wrong way
and one of the concerns is military
robots where so gosh around the early
2000s which is a long time ago
iRobot the Roomba company made this
robot called the PackBot that was
deployed in Iraq and Afghanistan
with the bomb disposal units that were
there and the soldiers became very
emotionally attached to the robots and
that's you know fine until a soldier
risks his life to save a robot which you
really don't want but they were treating
them like pets like they would name them
they would give them funerals with gun
salutes they would get really upset and
traumatized when the robot got broken so
in situations where you want a robot
to be a tool in particular when it's
supposed to like do a dangerous job that
you don't want a person doing it can
be hard when people get emotionally
attached to it
that's maybe something that you would
want to discourage another case for
concern is maybe when companies try to
leverage the emotional attachment to
exploit people so if it's something
that's not in the consumers interest
trying to like sell them products or
services or exploit an emotional
connection to keep them you know paying
for a cloud service for a social robot
or something like that might be I
think that's a little bit concerning as
well yeah the emotional manipulation
which probably happens behind the scenes
now with some like social networks and
so on but making it more explicit what's
your favorite robot like a real robot
which you have felt a connection with
not like an anthropomorphic connection
but I mean like you just sit back and
say damn this is an impressive system
wow
so two different robots so the Pleo
baby dinosaur robot that is no longer
sold that came out in 2007 that one I
was very impressed with but
from an anthropomorphic perspective
I was impressed with how much I bonded
with it how much I like wanted to
believe that it had this inner life
can you describe Pleo can you
describe what it is how big is it
what can it actually do yeah Pleo is about
the size of a small cat it had a lot of
like motors that gave it this kind of
lifelike movement it had things like
touch sensors and an infrared camera so
it had all these like cool little
technical features even though it was a
toy and the thing that really struck me
about it was that it could mimic pain
and distress really well so if you held
it up by the tail it had a tilt sensor
that you know told it what direction it
was facing and it would start to squirm
and cry out if you hit it too hard it
would start to cry so it was very
impressive in design and what's the
second robot you said
there might have been two that you liked
yeah so the Boston Dynamics robots are
just impressive feats of engineering
have you met them in person
yeah I recently got a chance to go visit
and I you know I was always one of those
people who watched the videos and was
like this is super cool but also it's a
product video like I don't know how many
times that they had to shoot this to get
it right but visiting them I you know
I'm pretty sure that I was very
impressed let's put it that way yeah in
terms of the control I think that was a
transformational moment for me when I
met Spot Mini in person because okay
maybe this is a psychology experiment
but I anthropomorphised
the crap out of it so immediately it
was like my best friend right I mean
it's really hard for anyone to watch
spot move and not feel like it has
agency yeah the movement especially
the arm on Spot Mini really
obviously looks like a head yeah
and they say they didn't mean it that
way but it obviously looks exactly
like that and so it's almost impossible
to not think of it as almost like the
baby dinosaur but slightly larger
and in this movement and of course
the intelligence their whole
idea is that it's not supposed to be
intelligent it's a platform on which you
build higher intelligence it's actually
really really dumb it's just a basic
movement platform yeah but even dumb
robots can like we can immediately
respond to them in this visceral way
what are your thoughts about Sophia the
robot this kind of mix of some basic
natural language processing and
basically an art experiment yeah an art
experiment is a good way to characterize
it I'm much less impressed with Sophia
than I am with Boston Dynamics she said
she likes you she says she admires you
she yeah she followed me on Twitter at
some point yeah and she tweets about how
much she likes you so wouldn't that
mean I have to be nicer or that I was
emotionally manipulating it no how do
you think of the whole thing that
happened with Sophia quite a large
number of people kind of immediately had
a connection and thought that maybe
we're far far more advanced with
robotics than we are or they
didn't even think much I'm surprised how
little people cared they kind of
assumed that
well of course AI can do this yeah and
then if they assumed that I felt
they should be more impressed well you
know what I mean people really
overestimate where we are and so in a
sense I don't even
think Sophia was very impressive or it
is very impressive I think she's kind of
a puppet to be honest but yeah I think
people are a little bit influenced
by science fiction pop culture to think
that we should be further along than we
are so what's your favorite robots in
movies and fiction WALL-E WALL-E what do
you like about WALL-E the humor the
cuteness the perception control
systems operating in WALL-E that makes
it all just in general the design of
WALL-E the robot I think that animators
figured out you know starting in like
the 1940s how to create characters that
don't look real but look like something
that's even better than real that we
really respond to and think is really
cute they figured out how to make them
move and look in the right way and
wall-e is just such a great example of
that
you think eyes big eyes or something
that's kind of eye-ish so it's always
playing on some aspect of the human face
right often yeah
so big eyes well I think one
of the first like animations to really
play with this was Bambi and they
weren't originally gonna do that they
were originally trying to make the deer
look as lifelike as possible like they
brought deer into the studio and had a
little zoo there so the animators could
work with them and then at some point
they were like hmm if we make really big
eyes and like a small nose and like big
cheeks kind of more like a baby face
then people like it even better than if
it looks real do you think the future of
things like Alexa in the home has the
possibility to take advantage of that to
build on that to create these systems
that are better than real that create a
closer human connection I can pretty
much guarantee you without having any
knowledge that those companies are
working on that design behind
the scenes like pretty sure I totally
disagree with you really so that's what
I'm interested in I'd like to build such
a company I know a lot of those folks
and they're afraid of that because you
don't well how do you make money off of
it well but even just like making Alexa
look a little bit more interesting than
just like a cylinder would do so much
it's it's an interesting thought but I
don't think people are from Amazon
perspective looking for that kind of
connection they want you to be addicted
to the services provided by Alexa not to
the device so with the device itself it's
felt that you can lose a lot because if
you create a connection it creates more
opportunity for frustration for negative
stuff than it does for positive stuff is
I think the way they
think about it that's interesting
like I agree that it's very
difficult to get right and you have to
get it exactly right otherwise you wind
up with Microsoft's Clippy okay easy now
what's your problem with Clippy oh you
like Clippy Clippy's your friend
yeah I just talked to we just had this
argument with the Microsoft CTO and
he said he's not bringing Clippy back
they're not bringing Clippy back and
that's very disappointing I think
Clippy was the greatest assistant
we've ever built it was a horrible
attempt of course but it's the best
we've ever done because it was a real
attempt to have like an actual
personality and I mean obviously the
technology was way not there at the time
of being able to be a recommender system
for assisting you in anything in typing
in Word or any kind of other application
but it still was an attempt at personality
that was legitimate and I thought
was brave yes oh yes okay you know
you've convinced me I'll be slightly
less hard on Clippy and I know I have like
an army of people behind me who also
miss Clippy so really I want to meet
these people who are these people it's
the people who like to hate stuff when
it's there and and miss it when it's
gone
[Laughter]
exactly
alright so Anki and Jibo the two
companies two amazing companies social
robotics companies that have recently
been closed down yeah why do you think
it's so hard to create a personal
robotics company so making a business
out of essentially something that people
would anthropomorphize have a deep
connection with why is it so hard to
make it work is the business case not there
or what is it I think it's a number of
different things I don't think it's
going to be this way forever I think at
this current point in time it's
so much work to build something that
only barely meets people's like minimal
expectations because of science fiction
and pop-culture giving people this idea
that we should be further than we
already are like when people think about
a robot assistant in the home they think
about Rosie from the Jetsons or
something like that and Anki and
Jibo did such a beautiful job with the
design and getting that interaction just
right but I think people just wanted
more they wanted more functionality I
think you're also right that you know
the business case isn't really there
because there hasn't been a killer
application that's useful enough to get
people to adopt the technology in great
numbers I think what we did see from the
people who did you know get Jibo is a
lot of them became very emotionally
attached to it but that's not I mean
it's kind of like the Palm Pilot back in
the day most people are like why do I
need this why would I want it they don't see how
they would benefit from it until they
you know have it or some other company
comes in and makes it a little better
yeah like how far away are we do you
think I mean how hard is this problem
it's a good question and I think it has
a lot to do with people's expectations
and those keep shifting depending on
what science fiction is popular but
also it's two things it's people's
expectation and people's need for an
emotional connection yeah and then I
believe the need is pretty high yes but
I don't think we're aware of it
that's right I really
think this is like the life as we
know it we've just kind of gotten
used to it really I hate to be dark
because I have close friends but we've
gotten used to really never being close
to anyone and we're deeply I
believe okay this is a hypothesis I think
we're deeply lonely all of us even those
in deep fulfilling relationships in fact
what makes those relationships fulfilling I
think is that they at least tap into
that deep loneliness a little bit but I
feel like there's more opportunity to
explore that that doesn't interfere with
the human relationship
can you expand more on that
yeah the rich deep unexplored
complexity that's in all of us weird apes
okay right do you think it's possible to
fall in love with a robot oh yeah
totally
do you think it's possible to have a
long-term committed monogamous
relationship
with a robot well yeah there are lots of
different types of long-term committed
monogamous relationships I think
monogamous implies like you're not going
to see other humans sexually or like
you basically on Facebook have to say
I'm in a relationship with this person
this robot I just don't like again I
think this is comparing robots to humans
when I would rather compare them to pets
like you get a robot it fulfills you
know this loneliness that you have
maybe not the same way as a pet maybe in
a different way that is even you know
supplemental in a different way but you
know I'm not saying that people won't
like do this be like oh I want to marry
my robot or I want to have like a you
know sexual relation monogamous
relationship with my robot but I don't
think that that's the main use case for
them well you think that there's still a
gap between human and pet so between
husband and pet there's a range of
relations so that's a gap that can be
closed I think it could be closed
someday but why would we close that like
I I think it's so boring to think about
recreating things that we already have
when we could when we could create
something that's different I know you're
thinking about the people who like don't
have a husband and like what could we
give them yeah but let's I guess
what I'm getting at is maybe not so like
the movie her yeah right so a better
husband well maybe better in some ways
like I do think that robots are
going to
continue to be a different type of
relationship even if we get them like
very human looking or when you know the
voice interactions we have with them
feel very like natural and human like I
think they're still gonna be differences
and there were in that movie too like
towards the end yeah it goes off the
rails it's just a movie so your
intuition is that because
you kind of said two things right so one
is why would you want to basically
replicate the husband Yeah right and the
other is kind of implying that it's kind
of hard to do so like anytime you
try you might build something very
impressive but it'll be different I
guess my question is about human nature
it's like how hard is it to satisfy that
role of the husband so setting any of
the sexual stuff aside it's more
like the mystery the tension the dance of
relationships do you think with robots
that's difficult to build what do you think I
think that well it also depends are we
talking about robots now in 50 years in
like an indefinite amount of time I'm
thinking like five or ten years
five or ten years I think that robots at
best will be more similar to the
relationship we have with our pets than the
relationship that we have with other
people I got it so what do you think it
takes to build a system that exhibits
greater and greater levels of
intelligence like it impresses us with
its intelligence you know with a Roomba
you talk about anthropomorphization
that doesn't I think intelligence is not
required intelligence probably gets
in the way sometimes like you mentioned
but what do you think it takes to create
a system where we sense that it has a
human level intelligence something that
obviously something conversational human
level intelligence that problem it'd
be interesting to sort of hear your
perspective not
just purely I've talked to a lot of
people how hard is the conversational
agents yeah how hard is it to pass a
Turing test
but my sense is it's easier than
solving
the pure natural language processing
problem because I feel like you can
cheat yeah so how hard is it to
pass the Turing test in your view well I
think again it's all about expectation
management if you set up people's
expectations to think that they're
communicating with what was it a 13 year
old boy from the Ukraine yeah that's
right
then they're not going to expect perfect
English they're not going to expect
perfect you know understanding of
concepts or even like being on the same
wavelength in terms of like conversation
flow so it's much easier to pass in that
case do you think you kind of alluded
to this with audio do you think it needs
to have a body I think that we
definitely do we treat physical
things with more social agency because
we're very physical creatures I think a
body can be useful does it get in the
way are there negative aspects yeah
there can be so if you're trying to
create a body that's too similar to
something that people are familiar with
like I have this robot cat at home that
Hasbro makes and it's very disturbing to
watch because I'm constantly assuming
that it's gonna move like a real cat and
it doesn't because it's like a 100
dollar piece of technology so it's very
like disappointing and it's very hard to
treat it like it's alive so you can get
a lot wrong with the body too but you
can also use tricks same as you know the
expectation management of the 13 year
old boy from the Ukraine if you pick an
animal that people aren't intimately
familiar with like the baby dinosaur
like the baby seal that people have
never actually held in their arms you
can get away with much more because they
don't have these preformed expectations
yeah I think in a TED talk or
something it clicked for me that
nobody actually knows what a dinosaur
looks like
so you can actually get away with a lot
more that was great
do you think it needs so what do you
think about consciousness and mortality
being displayed in a robot so not
actually having consciousness but having
these kind of human elements that are
much more than just the interaction much
more than just like you mentioned with a
dinosaur moving kind of interesting ways
but really being worried about its own
death and really acting as if it's aware
and self-aware and identity have you
seen that done in robotics what do you
think about