The following is a conversation with Kate Darling, a researcher at MIT interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and lifelike machines, which for me is one of the most exciting topics in all of artificial intelligence. As she writes in her bio, she's a caretaker of several domestic robots, including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti. She is one of the funniest and brightest minds I've ever had the fortune to talk to. This conversation was recorded recently, but before the outbreak of the pandemic. For everyone feeling the burden of this crisis, I'm sending love your way.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads: two sponsors, MasterClass and ExpressVPN. Please consider supporting the podcast by signing up to MasterClass at masterclass.com/lex and getting ExpressVPN at expressvpn.com/lexpod.

This show is sponsored by MasterClass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about MasterClass, I thought it was too good to be true. For $180 a year you get an all-access pass to watch courses from a list of some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims (love those games), on game design, Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast.

This show is also sponsored by ExpressVPN. Get it at expressvpn.com/lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use: press the big power-on button and your privacy is protected, and, if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux (shout-out to Ubuntu 20.04), Windows, Android, but it's available everywhere else too. Once again, get it at expressvpn.com/lexpod to get a discount and to support this podcast.

And now, here's my conversation with Kate Darling.

You co-taught Robot Ethics at Harvard. What are some ethical issues that arise in the world with robots?

Yeah, that was a reading group that I did when I, at the very beginning, first became interested in this topic. I think if I taught that class today it would look very, very different. Robot ethics, it sounds very science-fictiony, especially back then, but I think that some of the issues that people in robot ethics are concerned with are just around the ethical use of robotic technology in general. So, for example,
responsibility for harm, automated weapons systems, things like privacy and data security, things like automation and labor markets. And then, personally, I'm really interested in some of the social issues that come out of our social relationships with robots.

One-on-one relationships with robots?

Yeah. I think most of the stuff we have to talk about is the one-on-one social stuff. That's what I love, and I think that's what you love as well and are an expert in. But on a societal level, there's a presidential candidate now, Andrew Yang, running, concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI, universal basic income: everybody gets a thousand bucks, as a way to sort of save you if you lose your job from automation, to allow you time to discover what it is that you would like to do, or even love to do.

Yes. So I lived in Switzerland for 20 years, and universal basic income has been more of a topic there, separate from the whole robots-and-jobs issue. So it's so interesting to me to see these Silicon Valley people latch on to this concept that came from a very left-wing, socialist, kind of different place in Europe. But on the automation and labor markets topic, I think that sometimes in those conversations people overestimate where robotic technology is right now, and we also have this fallacy of constantly comparing robots to humans and thinking of this as a one-to-one replacement of jobs. So even Bill Gates, a few years ago, said something about, you know, maybe we should have a system that taxes robots for taking people's jobs. And, I mean, I'm sure that was taken out of context, he's a really smart guy, but that sounds to me like viewing it as a one-to-one replacement, versus viewing this technology as a supplemental tool that of course is going to shake up a lot of stuff, it's going to change the job landscape. But I don't see robots taking all the jobs in the next 20 years. That's just not how it's going to work.

All right, so maybe drifting into the land of more personal relationships with robots and interaction and so on. I've got to warn you, I may ask some silly philosophical questions. I apologize.

Oh, please do.

Okay. Do you think humans will abuse robots in their interactions? So you've had a lot of, and we'll talk about it, sort of anthropomorphization, and this intricate dance, emotional dance, between human and robot. But there seems to also be a darker side, where people, when they treat the other as servants especially, can be a little bit abusive, or a lot abusive. Do you think about that? Do you worry about that?

Yeah, I do think about that. I mean, one of my main interests is the fact that people subconsciously treat robots like living things, even though they know that they're interacting with a machine, and what it means in that context to behave violently. I don't know if you can say "abuse," because you're not actually abusing the inner mind of the robot; the robot doesn't have any feelings.

As far as you know.

Well, yeah, it also depends on how we define feelings and consciousness. But I think that's another area where people kind of overestimate where we currently are with the technology. The robots are not even as smart as insects right now, and so I'm not worried about abuse in that sense. But it is interesting to think about what people's behavior towards these things means for our own behavior:
is it desensitizing people to, you know, be verbally abusive to a robot, or even physically abusive? And we don't know. It's similar to the question of whether playing violent video games has any connection to desensitization to violence.

I haven't read the literature on that. I wonder about it, because from everything I've heard, people don't seem to any longer be so worried about violent video games.

Correct. The research on it... it's a difficult thing to research, so it's sort of inconclusive. But we seem to have gotten a sense, at least as a society, that people can compartmentalize: when it's something on a screen and you're, you know, shooting a bunch of characters or running over people with your car, that doesn't necessarily translate to you doing that in real life. We do, however, have some concerns about children playing violent video games, and so we do restrict it there. I'm not sure that's based on any real evidence either, but it's just the way that we've kind of decided we want to be a little more cautious there.

And the reason I think robots are a little bit different is because there is a lot of research showing that we respond differently to something in our physical space than to something on a screen. We will treat it much more viscerally, much more like a physical actor. And so it's totally possible that this is not a problem, and it's the same thing as violence in video games: you know, maybe restrict it with kids to be safe, but adults can do what they want. But we just need to ask the question again, because we don't have any evidence at all yet.

Maybe there's an intermediate place too. I did my research on Twitter. By research I mean scrolling through your Twitter feed. You mentioned that you were going, at some point, to an animal law conference. So I have to ask: do you think there's something that we can learn from animal rights that guides our thinking about robots?

Oh, I think there is so much to learn from that. I'm actually writing a book on it right now; that's why I'm going to this conference. I'm writing a book that looks at the history of animal domestication and how we've used animals for work, for weaponry, for companionship. And one of the things the book tries to do is move away from this fallacy that I talked about of comparing robots to humans, because I don't think that's the right analogy. But I do think that on a social level there's so much that we can learn from looking at that history, because throughout history we've treated most animals like tools, like products, and then some of them we've treated differently. And we're starting to see people treat robots in really similar ways, so I think it's a really helpful predictor of how we're going to interact with robots.

Do you think we'll look back at this time, like a hundred years from now, and see what we do to animals as similar to the way we view, like, the Holocaust in World War II?

That's a great question. I mean, I hope so. I'm not convinced that we will, but I often wonder, you know, what are my grandkids going to view as abhorrent that my generation did, that they would never do? And I'm like, well, what's the big deal? You know, it's a fun question to ask yourself.

It always seems that there are atrocities that we discover later. So the things that, at the time, people didn't see as such. You know, you look at everything from slavery to all kinds of abuse throughout history, to the kind of insane wars that were happening, to the way
war was carried out, and rape, and the kind of violence that was happening during war, that we now see as atrocities but at the time perhaps didn't as much. And so now I have this intuition, I have this worry, maybe you're going to criticize me, but I do anthropomorphize robots. I don't see a fundamental philosophical difference between a robot and a human being, in terms of, once the capabilities are matched. The fact that we're really far away doesn't matter; in terms of capabilities, from natural language processing, understanding, generation, to just reasoning and all that stuff, I think once you solve it... I see this as a very gray area, and I don't feel comfortable with the kind of abuse that people throw at robots. It's subtle, but I can see it becoming... I can see basically a civil rights movement for robots in the future. Do you think, let me put it in the form of a question, do you think robots should have some kinds of rights?

Well, it's interesting, because I came at this originally from your perspective. I was like, you know what, there's no fundamental difference between technology and, like, human consciousness. We can probably recreate anything; we just don't know how yet. And so there's no reason not to give machines the same rights that we have once, like you say, they're kind of on an equivalent level. But I realized that that is kind of a far-future question. I still think we should talk about it, because I think it's really interesting. But I realized that we might need to ask the robot rights question even sooner than that, while the machines are still, quote-unquote, really dumb and not on our level, because of the way that we perceive them. And I think one of the lessons we learn from looking at the history of animal rights, and one of the reasons we may not get to a place in a hundred years where we view it as wrong to, you know, eat or otherwise use animals for our own purposes, is because, historically, we've always protected those things that we relate to the most. So one example is whales. No one gave a shit about the whales... am I allowed to swear?

Freedom, yeah.

No one gave a shit about the whales until someone recorded them singing, and suddenly people were like, oh, this is a beautiful creature, and now we need to save the whales. And that started the whole Save the Whales movement in the 70s. So, as much as I, and I think a lot of people, want to believe that we care about consistent biological criteria, that's not historically how we've formed our alliances.

Yeah. So why do we believe that all humans are created equal? Killing of a human being, no matter who the human being is, that's what I meant by equality, is bad. And I'm connecting that to robots, and I'm wondering whether mortality, so the killing act, is what makes something... is that the fundamental first right? So I am currently allowed to take a shotgun and shoot a Roomba. I think, I'm not sure, but I'm pretty sure it's not considered murder, right? Or even shutting it off. So that's where the line appears to be, right? Is mortality a critical thing here?

I think here, again, the animal analogy is really useful, because you're also allowed to shoot your dog, but people won't be happy about it. So we do give animals certain protections, like, you know, you're not allowed to torture your dog and set it on fire, at least in most states and countries. But you're still allowed to treat it like a piece of property in a lot of
other ways. And so we draw these arbitrary lines all the time, and there's a lot of philosophical thought on why viewing humans as something unique is just speciesism, and not based on any criteria that would actually justify making a difference between us and other species.

Do you think, in general, most people are good? Do you think there's evil and good in all of us that's revealed through our circumstances and through our interactions?

I like to view myself as a person who believes that there's no absolute evil and good, and that everything is gray. But I do think it's an interesting question. Like, when I see people being violent towards robotic objects... you said that bothers you because the robots might someday, you know, be smart. Is that what it is?

Well, it bothers me because it reveals... so I personally believe, I've studied, maybe too much, I'm Jewish, I've studied the Holocaust and World War II exceptionally well, I personally believe that most of us have evil in us. What bothers me is that the abuse of robots reveals the evil in human beings.

Yeah.

And I think it doesn't just bother me; I think it's an opportunity for roboticists to help people find the better sides, the better angels, of their nature, right? That abuse isn't just a fun side thing; that's you revealing a dark part of yourself that should be hidden deep inside.

Yeah. I mean, some of our research does indicate that maybe people's behavior towards robots reveals something about their tendencies for empathy generally, even using the very simple robots that we have today that clearly don't feel anything. So, you know, Westworld is maybe not so far off in depicting the bad characters as willing to go around and shoot and rape the robots, and the good characters as not wanting to do that, even without assuming that the robots have consciousness.

So there's an opportunity to almost practice empathy. Interacting with robots is an opportunity to practice empathy.

I agree with you. Some people would say, why are we practicing empathy on robots instead of on our fellow humans, or on animals that are actually alive and experience the world? And I don't agree with them, because I don't think empathy is a zero-sum game, and I do think that it's a muscle that you can train, and that we should be doing that. But some people disagree.

So the interesting thing: you've heard of people, you know, raising kids, sort of asking them, or telling them, to be nice to the smart speakers, to Alexa and so on, saying please and so on during the requests. I don't know if I'm a huge fan of that idea, because, yeah, it's towards the idea of practicing empathy. I feel like... politeness, I'm always polite to all the systems that we build, especially anything that's speech-interaction-based, like when we talk to the car. I always have a pretty good detector for "please." I feel like there should be room for encouraging empathy in those interactions.

Okay, so I agree with you, but I'm going to play devil's advocate.

So what is the devil's advocate argument there?

The devil's advocate argument is that if you are the type of person who has abusive tendencies, or needs to get some sort of behavior like that out, needs an outlet for it, it's great to have a robot that you can scream at so that you're not screaming at a person. And we just don't know whether that's true, whether it's an outlet for people, or whether it just, as my friend once
said, trains their cruelty muscles and makes them more cruel in other situations.

Oh boy, yeah. And that expands to other topics, which, I don't know... there's a topic of sex, which is a weird one that I tend to avoid from a robotics perspective. The general public mostly doesn't; they talk about sex robots and so on. Is that an area you've touched at all, research-wise? Because that's what people imagine: any kind of interaction between human and robot that shows any kind of compassion, they immediately think, from a product perspective in the near term, it's sort of an expansion of what pornography is, and all that kind of stuff.

Yeah, that's kind of you, to characterize it as though they're thinking rationally about product. I feel like sex robots are just such a titillating news hook for people that they become the story, and it's really hard to not get fatigued by it when you're in the space, because you tell someone you do human-robot interaction and of course the first thing they want to talk about is sex robots.

Really?

Yeah, it happens a lot, and it's unfortunate that I'm so fatigued by it, because I do think that there are some interesting questions that become salient when you talk about, you know, sex with robots.

See, what I think would happen when people get sex robots, like if guys get female sex robots: what I think there's an opportunity for is that they'll actually interact. What I'm trying to say is that the stuff outside of the sex would be the most fulfilling part, the interaction. It's like the folks, there are movies on this, right, who pay a prostitute and then end up just talking to her the whole time. So I feel like there's an opportunity. It's like most guys, and people in general, joke about the sex act, but really people are just lonely inside and are looking for connection, many of them. And it'd be unfortunate if that connection is established through the sex industry. I feel like it should go in through the front door of: people are lonely and they want a connection.

Well, I also feel like we should kind of destigmatize the sex industry, because, you know, even prostitution: there are prostitutes that specialize in disabled people, who don't have the same kinds of opportunities to explore their sexuality. So I feel like we should destigmatize all of that generally. But, yeah, that connection and that loneliness is an interesting topic that you bring up, because while people are constantly worried about robots replacing humans, and, oh, if people get sex robots and the sex is really good then they won't want their partner or whatever, we rarely talk about robots actually filling a hole where there's nothing, and what benefit that can provide to people.

Yeah, I think that's exciting. There's a giant hole that can't be filled by humans. It's asking too much of people, your friends and people you're in a relationship with and your family, to fill that hole. Because, you know, it's exploring the full complexity and richness of who you are. Like, who are you really? The people in your family don't have enough patience to really sit there and listen to who you really are. And I feel like there's an opportunity to really make that connection with robots.

I just think we're complex as humans, and we're capable of lots of different types of relationships. So whether that's, you know, with family members,
with friends, with our pets, or with robots, I feel like there's space for all of that, and all of that can provide value in a different way.

Yeah, absolutely. So I'm jumping around. Currently most of my work is in autonomous vehicles, and the most popular topic amongst the general public is the trolley problem. Most roboticists kind of hate this question, but what do you think of this thought experiment? What do you think we can learn from it, outside of the silliness of the actual application of it to autonomous vehicles? I think it's still an interesting ethical question, and, in itself, just like much of the interaction with robots, it has something to teach us. But from your perspective, do you think there's anything there?

Well, I think you're right that it does have something to teach us. But I think what people are forgetting in all of these conversations is the origins of the trolley problem and what it was meant to show us, which is that there is no right answer, and that sometimes our moral intuition that comes to us instinctively is not actually what we should follow if we care about creating systematic rules that apply to everyone. So I think that, as a philosophical concept, it could teach us at least that. But that's not how people are using it right now. Like, we have... and these are friends of mine, and I love them dearly, and their project adds a lot of value. But if we're viewing the Moral Machine project as what we can learn from the trolley problem... The Moral Machine is, I'm sure you're familiar, this website that you can go to, and it gives you different scenarios: like, oh, you're in a car, you can decide to run over these two people or this child. What do you choose? Do you choose the homeless person? Do you choose the person who's jaywalking? And so it pits these moral choices against each other and then tries to crowdsource the quote-unquote correct answer, which is really interesting, and I think valuable data. But I don't think that's what we should base our rules in autonomous vehicles on, because it is exactly what the trolley problem is trying to show: your first instinct might not be the correct one if you look at rules that then have to apply to everyone and everything.

So how do we encode these ethical choices in interaction with robots? So, for example, with autonomous vehicles, there is a serious ethical question of: do I protect myself? Does my life have higher priority than the life of another human being? Because that changes certain control decisions that you make. So if your life matters more than other human beings, then you'd be more likely to swerve out of your current lane. Currently, automated emergency braking systems just brake; they don't ever swerve, right? Because swerving into oncoming traffic, or even just into a different lane, can cause significant harm to others, but it's possible that it causes less harm to you. So that's a difficult ethical question. Do you have a hope that... the trolley problem is not supposed to have a right answer, but do you hope that when we have robots at the table, we'll be able to discover the right answer for some of these questions?

Well, what's happening right now, I think, is that this question we're facing, of what ethical rules we should be programming into the machines, is revealing to us that our ethical rules are much less programmable than we probably thought before. And so that's a really valuable insight, I think: that these issues are very complicated, and that in a lot of these
cases, you can't really make that call, not even as a legislator. And so what's going to happen in reality, I think, is that car manufacturers are just going to try and avoid the problem and avoid liability in any way possible, or they're going to always protect the driver, because who's going to buy a car if it's programmed to kill you instead of someone else? So that's what's going to happen in reality. But what did you mean by "once we have robots at the table"? Do you mean when they can help us figure out what to do?

No, I mean when robots are part of the ethical decisions.

Oh, you mean when it's, like, should I run over a robot or a person, that kind of thing?

No, no, no. It's exactly what you said: when you have to encode the ethics into an algorithm, you start to try to really understand the fundamentals of the decision-making process you use to make certain decisions. Should you, like capital punishment, take a person's life or not, to punish them for a certain crime? You can develop an algorithm to make that decision, right? And the hope is that the act of making that algorithm, however you make it, so there are a few approaches, will help us actually get to the core of what is right and what is wrong under our current societal standards.

But isn't that what's happening right now? And we're realizing that we don't have a consensus on what's right and wrong.

You mean in politics in general?

Well, like, when we're thinking about these trolley problems and autonomous vehicles, and how to program ethics into machines, and how to make AI algorithms fair and equitable, we're realizing that this is so complicated, and it's complicated in part because there doesn't seem to be one right answer in any of these cases.

Do you hope for... one of the ideas of the Moral Machine is that crowdsourcing can help us converge, like democracy can help us converge, towards the right answer. So do you have a hope for crowdsourcing?

Well, yes and no. I think that in general, you know, I have a legal background, and policymaking is often about trying to suss out what rules this particular society agrees on, and then trying to codify that. So the law makes these choices all the time, and then tries to adapt according to changing culture. But in the case of the Moral Machine project, I don't think that people's choices on that website necessarily reflect what laws they would want in place. I think you would have to ask them a series of different questions in order to get at what their consensus is.

I agree, but that has to do more with the artificial nature of... I mean, they're showing some cute icons on a screen. So, for example, we do a lot of work in virtual reality, and if you put those same people into virtual reality where they have to make that decision, the decisions would be very different, I think.

I agree with that. That's one aspect. And the other aspect is that it's a different question to ask someone, "Would you run over the homeless person or the doctor in this scene?" versus "Do you want cars to always run over the homeless people?"

Yeah. So let's talk about anthropomorphism. Anthropomorphism, if I can pronounce it correctly, is one of the most fascinating phenomena, from both an engineering perspective and a psychology perspective, a machine learning perspective, and robotics in general. Can you
step back and define anthropomorphism, how you see it, in general terms, in your work?

Sure. So anthropomorphism is this tendency that we have to project human-like traits and behaviors and qualities onto non-humans. We often see it with animals, like we'll project emotions onto animals that may or may not actually be there. We often see that we're trying to interpret things according to our own behavior, even when we get it wrong. But we do it with more than just animals. We do it with objects, you know, teddy bears; we see faces in the headlights of cars. And we do it with robots very, very extremely.

Do you think that can be engineered? Can that be used to enrich an interaction between an AI system and a human?

Oh yeah, for sure.

And do you see it being used that way often? Like... I haven't seen, whether it's Alexa or any of the smart-speaker systems, often trying to optimize for the anthropomorphization.

You said you haven't seen?

I haven't seen. They keep moving away from that. I think they're afraid of that.

They actually... so I only recently found out, but did you know that Amazon has a whole team of people who are just there to work on Alexa's personality?

So, I know that it depends on what you mean by personality. I didn't know that exact thing, but I do know that how the voice is perceived is worked on a lot, whether there's a pleasant feeling about the voice. But that has to do more with the texture of the sound and the audio and so on.

What the personality is, is more like, what's her favorite beer when you ask her? And the personality team is different for every country, too. There's a different personality for German Alexa than there is for American Alexa. That said, I think it's very difficult to really harness the anthropomorphism with these voice assistants, because the voice interface is still very primitive. And I think that in order to get people to really suspend their disbelief and treat a robot like it's alive, less is sometimes more. You want them to project onto the robot, and you want the robot to not disappoint their expectations for how it's going to answer or behave, in order for them to have this kind of illusion. And with Alexa, I don't think we're there yet, or Siri; they're just not good at that. But if you look at some of the more animal-like robots, like the baby seal that they use with dementia patients, it's a much more simple design. It doesn't try to talk to you, so it can't disappoint you in that way. It just makes little movements and sounds, and people stroke it, and it responds to their touch. And that is a very effective way to harness people's tendency to treat the robot like a living thing.

Yeah. So you bring up some interesting ideas in your paper chapter, I guess, on anthropomorphic framing in human-robot interaction, that I read the last time we scheduled this, a long time ago. What are some good and bad cases of anthropomorphism, in your perspective? Like, when is it good, when is it bad?

Well, I'll start by saying that, while design can really enhance the anthropomorphism, it doesn't take a lot to get people to treat a robot like it's alive. Like, over 85% of Roombas have a name. I don't know the numbers for your regular type of vacuum cleaner, but they're not that high, right? So people will feel bad for the Roomba when it gets stuck; they'll send it in for repair and want to get the same one back. And that one is not even designed to make you do that. So
I think that some of the cases where it's maybe a little bit concerning that anthropomorphism is happening are when you have something that's supposed to function like a tool, and people are using it in the wrong way. One of the concerns is military robots. So, gosh, in the early 2000s, which is a long time ago, iRobot, the Roomba company, made this robot called the PackBot that was deployed in Iraq and Afghanistan with the bomb disposal units that were there. And the soldiers became very emotionally attached to the robots. And that's fine, until a soldier risks his life to save a robot, which you really don't want. But they were treating them like pets. They would name them, they would give them funerals with gun salutes, they would get really upset and traumatized when the robot got broken. So in situations where you want a robot to be a tool, in particular when it's supposed to do a dangerous job that you don't want a person doing, it can be hard when people get emotionally attached to it. That's maybe something that you would want to discourage.

Another case for concern is maybe when companies try to leverage the emotional attachment to exploit people. So if it's something that's not in the consumer's interest, trying to sell them products or services, or exploit an emotional connection to keep them paying for a cloud service for a social robot, or something like that, I think that's a little bit concerning as well.

Yeah, the emotional manipulation, which probably happens behind the scenes now with some social networks and so on, but this is making it more explicit. What's your favorite robot? Like, a real robot which you have felt a connection with. Or not, like, an anthropomorphic connection, but, I mean, where you just sit back and say, damn, this is an impressive system.

Wow. So, two different robots. The Pleo baby dinosaur robot, that is no longer sold, that came out in 2007: that one I was very impressed with. But from an anthropomorphic perspective, I was impressed with how much I bonded with it, how much I wanted to believe that it had this inner life.

Can you describe Pleo? Can you describe what it is, how big it is, what it can actually do?

Yeah, Pleo is about the size of a small cat. It had a lot of motors that gave it this kind of lifelike movement. It had things like touch sensors and an infrared camera, so it had all these cool little technical features, even though it was a toy. And the thing that really struck me about it was that it could mimic pain and distress really well. So if you held it up by the tail, it had a tilt sensor that told it what direction it was facing, and it would start to squirm and cry out. If you hit it too hard, it would start to cry. So it was very impressive in design.

And what's the second robot? You said there might have been two that you liked.

Yeah, so the Boston Dynamics robots are just impressive feats of engineering.

Have you met them in person?

Yeah, I recently got a chance to go visit. And, you know, I was always one of those people who watched the videos and was like, this is super cool, but also it's a product video; I don't know how many times they had to shoot this to get it right. But visiting them... I'm pretty sure that I was very impressed. Let's put it that way.

Yeah, in terms of the control, I think that was a transformational moment for me, when I met Spot Mini in person. Because, okay, maybe this is a psychology experiment, but I anthropomorphized
the crap out of it. So immediately it was like my best friend, right?

I mean, it's really hard for anyone to watch Spot move and not feel like it has agency.

Yeah, this movement, especially the arm on Spot Mini, really obviously looks like a head.

Yeah.

They say, no, we didn't mean it that way, but it obviously looks exactly like that. And so it's almost impossible to not think of it as almost like the baby dinosaur, but slightly larger. And this movement... of course, the intelligence... their whole idea is that it's not supposed to be intelligent; it's a platform on which you build higher intelligence. It's actually really, really dumb. It's just a basic movement platform.

Yeah, but even dumb robots can... like, we can immediately respond to them in this visceral way.

What are your thoughts about Sophia the robot, this kind of mix of some basic natural language processing and basically an art experiment?

Yeah, an art experiment is a good way to characterize it. I'm much less impressed with Sophia than I am with Boston Dynamics.

She said she likes you. She says she admires you.

Yeah, she followed me on Twitter at some point.

Yeah, and she tweets about how much she likes you.

So wouldn't that mean I have to be nicer?

No, I was emotionally manipulating you. How do you think of the whole thing that happened with Sophia? Quite a large number of people kind of immediately had a connection and thought that maybe we're far, far more advanced with robotics than we are, or didn't even think much of it. I'm surprised how little people cared, that they kind of assumed that, well, of course AI can do this.

Yeah.

And then, if they assumed that, I felt they should be more impressed.

Well, you know, people really overestimate where we are. And so when something... I don't even think Sophia is very impressive. I think she's kind of a puppet, to be honest. But, yeah, I think people are a little bit influenced by science fiction and pop culture to think that we should be further along than we are.

So what's your favorite robot in movies and fiction?

WALL-E.

WALL-E. What do you like about WALL-E? The humor, the cuteness, the perception-control systems operating in WALL-E that make it all work? Just in general?

The design of WALL-E the robot. I think that animators figured out, starting in, like, the 1940s, how to create characters that don't look real but look like something that's even better than real, that we really respond to and think is really cute. They figured out how to make them move and look in the right way, and WALL-E is just such a great example of that.

You think eyes, big eyes, or big something that's kind of eye-ish... so it's always playing on some aspect of the human face, right?

Often, yeah. So, big eyes. Well, I think one of the first animations to really play with this was Bambi, and they weren't originally going to do that. They were originally trying to make the deer look as lifelike as possible. They brought deer into the studio and had a little zoo there so the animators could work with them. And then at some point they were like, hmm, if we make really big eyes and a small nose and big cheeks, kind of more like a baby face, then people like it even better than if it looks real.

Do you think the future of things like Alexa in the home has the possibility to take advantage of that, to build on that, to create these systems that are better than real, that create a close human connection? I can pretty much guarantee you, without having any knowledge,
that those companies are working on that design behind the scenes. Like, I'm pretty sure.

I totally disagree with you.

Really? So that's what I'm interested in. I'd like to build such a company. I know a lot of those folks, and they're afraid of that because... well, how do you make money off of it?

Well, even just making Alexa look a little bit more interesting than just a cylinder would do so much.

It's an interesting thought, but I don't think people at Amazon are looking for that kind of connection. They want you to be addicted to the services provided by Alexa, not to the device. With the device itself, it's felt that you can lose a lot, because if you create a connection, it creates more opportunity for frustration, for negative stuff, than it does for positive stuff. That's, I think, the way they think about it.

That's interesting. Like, I agree that it's very difficult to get right, and you have to get it exactly right. Otherwise, you wind up with Microsoft's Clippy.

Okay, easy now. What's your problem with Clippy? You like Clippy? Is Clippy your friend?

Yeah, I just talked to the Microsoft CTO, we just had this argument, and he said he's not bringing Clippy back. They're not bringing Clippy back, and that's very disappointing. I think Clippy was the greatest assistant we've ever built. It was a horrible attempt, of course, but it's the best we've ever done, because it was a real attempt to have, like, an actual personality. And, I mean, obviously the technology was way not there at the time, in terms of being able to be a recommender system for assisting you in anything, in typing in Word or any other kind of application, but still, it was an attempt at a personality that was legitimate, and I thought it was brave.

Yes, okay, you've convinced me. I'll be slightly less hard on Clippy.

And I know I have, like, an army of people behind me who also miss Clippy.

Really? I want to meet these people. Who are these people?

It's the people who like to hate stuff when it's there and miss it when it's gone.

[Laughter]

Exactly. All right, so Anki and Jibo, two amazing social robotics companies, have recently been closed down.

Yeah.

Why do you think it's so hard to create a personal robotics company, to make a business out of, essentially, something that people would anthropomorphize and have a deep connection with? Why is it so hard to make it work? Is the business case not there, or what is it?

I think it's a number of different things. I don't think it's going to be this way forever. I think, at this current point in time, it takes so much work to build something that only barely meets people's minimal expectations, because of science fiction and pop culture giving people this idea that we should be further along than we already are. Like, when people think about a robot assistant in the home, they think about Rosie from The Jetsons, or something like that. And Anki and Jibo did such a beautiful job with the design and getting that interaction just right, but I think people just wanted more; they wanted more functionality. I think you're also right that the business case isn't really there, because there hasn't been a killer application that's useful enough to get people to adopt the technology in great numbers. I think what we did see from the people who did get Jibo is that a lot of them became very emotionally attached to it. But, I mean, it's kind of like the Palm Pilot back in
the day. Most people are like, why do I need this? Why would I? They don't see how they would benefit from it until they have it, or some other company comes in and makes it a little better.

Yeah, like, how far away are we, do you think? I mean, how hard is this problem?

It's a good question, and I think it has a lot to do with people's expectations, and those keep shifting depending on what science fiction is popular.

But also, it's two things: it's people's expectations, and people's need for an emotional connection.

Yeah.

And I believe the need is pretty high.

Yes.

But I don't think we're aware of it.

That's right.

I really think this is like life as we know it; we've just kind of gotten used to it. Really, I hate to be dark, because I have close friends, but we've gotten used to really never being close to anyone, right? And we're deeply, I believe, okay, this is a hypothesis, I think we're deeply lonely, all of us, even those in deep, fulfilling relationships. In fact, what makes those relationships fulfilling, I think, is that they at least tap into that deep loneliness a little bit. But I feel like there's more opportunity to explore that, that doesn't interfere with the human relationships you have. It expands more on the rich, deep, unexplored complexity that's all of us weird apes. Okay, right? Do you think it's possible to fall in love with a robot?

Oh yeah, totally.

Do you think it's possible to have a long-term, committed, monogamous relationship with a robot?

Well, yeah, there are lots of different types of long-term, committed, monogamous relationships. I think monogamous implies, like, you're not going to see other humans sexually, or, like, you basically on Facebook have to say, "I'm in a relationship with this person, this robot." I just... again, I think this is comparing robots to humans, when I would rather compare them to pets. Like, you get a robot, and it fulfills this loneliness that you have, maybe not in the same way as a pet, maybe in a different way that is even, you know, supplemental in a different way. But I'm not saying that people won't do this, be like, oh, I want to marry my robot, or I want to have a sexual, monogamous relationship with my robot. I just don't think that that's the main use case for them.

But you think that there's still a gap between human and pet. So between husband and pet there's a gap. Is that a gap that can be closed?

I think it could be closed someday, but why would we close that? Like, I think it's so boring to think about recreating things that we already have, when we could create something that's different. I know you're thinking about the people who don't have a husband, and what could we give them.

Yeah, I guess what I'm getting at is maybe not that... so, like the movie Her.

Yeah.

Right, so a better husband.

Well, maybe better in some ways. Like, I do think that robots are going to continue to be a different type of relationship, even if we get them to be very human-looking, or the voice interactions we have with them feel very natural and human-like. I think there are still going to be differences. And there were in that movie, too; towards the end it kind of goes off the rails.

But it's just a movie. So your intuition is that... because you kind of said two things, right? One is, why would you want to basically replicate the husband, right? And the other is kind of implying that it's kind of hard to do. So, like,
anytime you try, you might build something very impressive, but it'll be different. I guess my question is about human nature. Like, how hard is it to satisfy that role of the husband? So, setting all of the sexual stuff aside, it's more like the mystery, the tension, the dance of relationships. Do you think with robots that's difficult to build? What's your intuition?

I think that... well, it also depends: are we talking about robots now, in 50 years, in, like, an indefinite amount of time?

I'm thinking, like, five or ten years.

Five or ten years? I think that robots, at best, will be more similar to the relationship we have with our pets than the relationship that we have with other people.

Got it. So what do you think it takes to build a system that exhibits greater and greater levels of intelligence, that impresses us with its intelligence? You know, a Roomba... so, you talked about anthropomorphization; that doesn't require intelligence, I think. Intelligence probably gets in the way sometimes, like you mentioned. But what do you think it takes to create a system where we sense that it has a human-level intelligence? Something, probably, that's conversational, human-level intelligence. How hard do you think that problem is? It'd be interesting to hear your perspective, not just purely... I talk to a lot of people: how hard is the conversational agent problem? How hard is it to pass the Turing test? But my sense is it's easier than solving the pure natural language processing problem, because I feel like you can cheat.

Yeah.

So how hard is it to pass the Turing test, in your view?

Well, I think, again, it's all about expectation management. If you set up people's expectations to think that they're communicating with... what was it, a 13-year-old boy from Ukraine?

Yeah, that's right.

Then they're not going to expect perfect English. They're not going to expect perfect understanding of concepts, or even being on the same wavelength in terms of conversation flow. So it's much easier to pass in that case.

Do you think, you kind of alluded to this too with the body, do you think it needs to have a body?

I think that we definitely... so, we treat physical things with more social agency, because we're very physical creatures. I think a body can be useful.

Does it get in the way? Are there negative aspects?

Yeah, there can be. If you're trying to create a body that's too similar to something that people are familiar with... like, I have this robot cat at home that Hasbro makes, and it's very disturbing to watch, because I'm constantly assuming that it's going to move like a real cat, and it doesn't, because it's, like, a hundred-dollar piece of technology. So it's very disappointing, and it's very hard to treat it like it's alive. So you can get a lot wrong with the body, too. But you can also use tricks, same as the expectation management of the 13-year-old boy from Ukraine. If you pick an animal that people aren't intimately familiar with, like the baby dinosaur, like the baby seal that people have never actually held in their arms, you can get away with much more, because they don't have these preformed expectations.

Yeah, it was a TED talk, or something, that clicked for me: nobody actually knows what a dinosaur looks like, so you can actually get away with a lot more. That was great. What do you think about consciousness and mortality being displayed in a robot? So, not actually having consciousness, but having these kinds of human elements that are much more than just the
interaction, much more than just, like you mentioned with the dinosaur, moving in kind of interesting ways, but really being worried about its own death, and really acting as if it's aware and self-aware and has an identity. Have you seen that done in robotics? What do you think about doing that? Is that a powerful, good thing?

Well, I think it can be a design tool that you can use for different purposes, so I can't say whether it's inherently good or bad. But I do think it can be a powerful tool. The fact that, you know, Pleo mimics distress when you, quote-unquote, hurt it is a really powerful tool to get people to engage with it in a certain way. I had a research partner that I did some of the empathy work with, named Palash Nandy, and he had built a robot for himself that had, like, a lifespan, that would stop working after a certain amount of time, just because he was interested in whether he himself would treat it differently. And we know from Tamagotchis, those little games that we used to have that were extremely primitive, that people respond to this idea of mortality, and you can get people to do a lot with little design tricks like that. Now, whether it's a good thing depends on what you're trying to get them to do.

Have a deeper relationship, have a deeper connection.

Design a relationship? If it's for their own benefit, that sounds great. But you could have a lot of other reasons.

I see. So what kind of stuff are you worried about? Is it mostly about manipulation of your emotions for, like, advertisement and so on, things like that?

Yeah, or data collection. I mean, you could think of governments misusing this to extract information from people. Just like any other technological tool, it raises a lot of questions.

If you look at Facebook, if you look at Twitter and social networks, there's a lot of concern about data collection now. From a legal perspective, or in general, how do we prevent these companies from crossing a line? It's a gray area, but crossing a line they shouldn't, in terms of manipulating, like we're talking about, manipulating our emotions, manipulating our behavior, using tactics that are not so savory.

Yeah, it's really difficult, because we are starting to create technology that relies on data collection to provide functionality, and there's not a lot of incentive, even on the consumer side, to curb that. Because the other problem is that the harms aren't tangible. They're not really apparent to a lot of people, because they kind of trickle down on a societal level, and then suddenly we're living in, like, 1984, which, you know, sounds extreme, but that book was very prescient. And I'm not worried about these systems... you know, I have Amazon's Echo at home, and I tell Alexa all sorts of stuff, and it helps me, because Alexa knows what brand of diaper we use, and so I can just easily order it again. So I don't have any incentive to ask a lawmaker to curb that. But when I think about that data then being used against low-income people to target them for scammy loans or education programs, that's a societal effect that I think is very severe, and legislators should be thinking about it.

Well, yeah, the gray area is the removing of ourselves from consideration, of explicitly defining objectives, and more saying, well, we want to maximize engagement in our social network. And then, just
because you're not actually doing a bad thing... it makes sense: you want people to keep a conversation going, to have more conversations, to keep coming back again and again to have conversations. And whatever happens after that, you're kind of not exactly directly responsible; you're only indirectly responsible. So I think it's a really hard problem. Are you optimistic about us ever being able to solve it?

You mean the problem of capitalism? Because the problem is that the companies are acting in the companies' interests and not in people's interests, and when those interests are aligned, that's great, but the completely free market doesn't seem to work, because of this information asymmetry.

But it's hard to know how to... say you were to try to do the right thing. I guess what I'm trying to say is that it's not obvious for these companies what the good thing for society is to do. Like, I don't think they sit there with, I don't know, a glass of wine and a cat, like, petting a cat, an evil cat, and there are two decisions, and one of them is good for society and one is good for the profit, and they choose the profit. I think there's actually a lot of money to be made by doing the right thing for society. Because Google and Facebook have so much cash that they would actually, especially Facebook, significantly benefit from making decisions that are good for society. It's good for their brand, right? But I don't know if they know what's good for society. I don't think we know what's good for society, in terms of how we manage the conversation on Twitter, or how we design... we're talking about robots: like, should we emotionally manipulate you into having a deep connection with Alexa or not?

Yeah.

Do you have optimism that we'll be able to solve some of these questions?

Well, I'm going to say something that's controversial, like, in my circles, which is that I don't think that companies who are reaching out to ethicists and trying to create interdisciplinary ethics boards... I don't think that that's totally just trying to whitewash the problem so that they look like they've done something. I think that a lot of companies actually do, like you say, care about what the right answer is. They don't know what that is, and they're trying to find people to help them find it. Not in every case, but I think it's much too easy to just vilify the companies as, like you said, sitting there with their cat going, heh heh, one million dollars. That's not what happens. A lot of people are well-meaning, even within companies. I think what we absolutely do need is more interdisciplinarity, both within companies but also within the policymaking space, because we've hurtled into a world where technological progress is much faster, it seems much faster than it was, and things are getting very complex. And you need people who understand the technology, but also people who understand what the societal implications are, and people who are thinking about this in a more systematic way, to be talking to each other. There's no other solution, I think.

You've also done work on intellectual property. So if you look at the algorithms these companies are using, like YouTube, Twitter, Facebook, and so on, those are mostly secretive, the recommender systems behind these algorithms. Do you think about IP and transparency for algorithms like this? Like, what is the responsibility of these companies to open-source the algorithms, or at least reveal to the public
how these algorithms work?

So I personally don't work on that, but there are a lot of people who do, and there are a lot of people calling for transparency. In fact, Europe is even trying to legislate transparency, maybe they even have at this point, where if an algorithmic system makes some sort of decision that affects someone's life, you need to be able to see how that decision was made. It's a tricky balance, because obviously companies need to have some sort of competitive advantage, and you can't take all of that away, or you stifle innovation. But, yeah, for some of the ways that these systems are already being used, I think it is pretty important that people understand how they work.

What are your thoughts in general on intellectual property in this weird age of software, AI, robotics?

Oh, that it's broken. I mean, the system is just broken.

Can you describe... I actually don't even know what intellectual property is in the space of software, what it means to... I mean, I believe I have a patent on a piece of software from my PhD.

You believe? You don't know?

No, we went through the whole process. Yeah, I do.

You get the spam emails, like, "We'll frame your patent for you."

Yes, much like a thesis. But that's useless, right? Or not? Where does IP stand in this age? What is the right way to do it? What is the right way to protect and own ideas when it's just code, and this mishmash of something that feels much softer than a piece of machinery?

Yeah, I mean, it's hard, because there are different types of intellectual property, and they're kind of these blunt instruments. It's like, patent law is like a wrench: it works really well for an industry like the pharmaceutical industry, but when you try and apply it to something else, it's like, I don't know, I'll just hit this thing with the wrench and hope it works. So with software, you have a couple of different options. Software, like any code that's written down in some tangible form, is automatically copyrighted, so you have that protection. But that doesn't do much, because if someone takes the basic idea that the code is executing and just does it in a slightly different way, they can get around the copyright. So that's not a lot of protection. Then you can patent software, but, I mean, getting a patent costs... I don't know if you remember what yours cost, or was it through an institution?

Yes, it was through the university, and it was insane. There were so many lawyers, so many meetings. It made me feel like it must have been hundreds of thousands of dollars.

Yeah, crazy. It's insane, the cost of getting a patent. And so this idea of protecting the inventor in their own garage who came up with a great idea is kind of a thing of the past. It's all just companies trying to protect things, and it costs a lot of money. And with code, oftentimes, by the time the patent is issued, which can take, like, five years, your code is probably obsolete at that point. So it's, again, a very blunt instrument that doesn't work well for that industry. And so, at this point, we should really have something better, but we don't.

Do you like open source? Open source is good for society, you think? Should all of us open-source code?

Well, so, at the Media Lab at MIT, we have an open-source default, because what we've noticed is that people will come in, they'll write some code, and they'll be like, how do I protect this? And we're like,
that's not your problem right now. Your problem isn't that someone's going to steal your project; your problem is getting people to use it at all. There's so much stuff out there; we don't even know if you're going to get traction for your work. And so open-sourcing can sometimes help get people's work out there, but ensure that they get attribution for the work that they've done. So I'm a fan of it in a lot of contexts. Obviously, it's not a one-size-fits-all solution.

So what I gleaned from your Twitter is that you're a mom. I saw a quote, a reference to baby bot. What have you learned about robotics and AI from raising a human baby bot?

Well, I think that my child has made it more apparent to me that the systems we're currently creating aren't like human intelligence. There's not a lot to compare there. He has learned and developed in such a different way than a lot of the AI systems we're creating that it's not really interesting to me to compare. But what is interesting to me is how these systems are going to shape the world that he grows up in. And so I'm even more concerned about the societal effects of developing systems that rely on massive amounts of data collection, for example.

So is he going to be allowed to use, like, Facebook?

Facebook is over. Kids don't use that anymore.

Snapchat? What do they use? Instagram?

Snapchat's over too. I don't know. I just heard that TikTok is over, which I've never even seen, so I don't know.

No, we're old. We don't know. Twitch... I'm going to start gaming and streaming my gameplay. So what do you see as the future of personal robotics, social robotics, interaction with robots? Like, what are you excited about, if you were to philosophize about what might happen in the next five to ten years, that would be cool to see?

Oh, I really hope that we get a home robot that makes it, that's a social robot and not just Alexa. You know, I really loved the Anki products; I thought Jibo had some really great aspects. So I'm hoping that a company cracks that.

Me too. Well, Kate, it was wonderful talking to you today.

Likewise, thank you so much. It was fun.

Thanks for listening to this conversation with Kate Darling, and thank you to our sponsors, ExpressVPN and MasterClass. Please consider supporting the podcast by signing up to MasterClass at masterclass.com/lex and getting ExpressVPN at expressvpn.com/lexpod. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, let me leave you with some tweets from Kate Darling. First tweet: "The pandemic has fundamentally changed who I am. I now drink the leftover milk in the bottom of the cereal bowl." Second tweet: "I came on here to complain that I had a really bad day and saw that a bunch of you are hurting too. Love to everyone." Thank you for listening, and hope to see you next time.