Transcript
kq0VO1FqE6I • Rosalind Picard: Affective Computing, Emotion, Privacy, and Health | Lex Fridman Podcast #24
Kind: captions
Language: en
the following is a conversation with
Rosalind Picard she's a professor at MIT
director of the Affective Computing
Research Group at the MIT Media Lab and
co-founder of two companies Affectiva
and Empatica over two decades ago she
launched the field of affective
computing with her book of the same name
this book described the importance of
emotion in artificial and natural
intelligence and the vital role emotional
communication has in the relationship
between people in general and in human
robot interaction I really enjoyed talking
with Roz about so many topics including
emotion ethics privacy wearable
computing and her recent research in
epilepsy and even love and meaning this
conversation is part of the artificial
intelligence podcast if you enjoy
subscribe on youtube itunes or simply
connect with me on twitter at Lex
Fridman spelled F-R-I-D and now here's my
conversation with Rosalind Picard
more than 20 years ago you coined the
term affective computing and led a lot
of research in this area since then as I
understand the goal is to make the
machine detect and interpret the
emotional state of a human being and
adapt the behavior of the machine based
on the emotional state so how is your
understanding of the problem space
defined by affective computing changed
in the past 24 years the scope the
applications the challenges what's
involved how has that evolved over
the years yeah actually originally when
I defined the term affective computing
it was a bit broader than just
recognizing and responding intelligently
to human emotion although those are
probably the two pieces that we've
worked on the hardest the original
concept also encompassed machines that
would have mechanisms that functioned
like human emotion does inside them it
would be any computing that relates to
arises from or deliberately influences
human emotion
so the human-computer interaction part
is the part that people tend to see like
if I'm you know really ticked off at my
computer and I'm scowling at it and I'm
cursing at it and it just keeps acting
smiling and happy like that little
paperclip used to do yeah dancing
winking that kind of thing just makes
you even more frustrated right and I
thought that stupid thing needs to see
my affect and if it's gonna be
intelligent which Microsoft researchers
had worked really hard on it actually
had some of the most sophisticated AI in
it at the time if that thing's gonna
actually be smart it needs to respond to
me and you and we can send it very
different signals so by the way just a
quick interruption Clippy was maybe
born in Word 95 or 98 I remember when it
was born do you find with that
reference that people still recognize
what you're talking about at this
point I don't expect the
newest students to know it these days but I've
mentioned it to a lot of audiences
like how many of you know this Clippy
thing and still the majority of people
seem to know it so Clippy kind of looks
at maybe natural language processing
what you were typing and tries to help
you complete it I don't even
remember what Clippy was except
annoying yeah that's right some people
actually liked it and missed it I would
hear those stories they missed it
they missed the annoyance they felt
like there was an element of somebody
being there and we're in it together
and yeah they were annoying it's
like a puppy that just doesn't get it
it keeps ripping up the couch right and
in fact they could have done it smarter
like a puppy if when you yelled at it
or cursed at it it had put its little
ears back and its tail down and slunk
off probably people would have wanted it
back right but instead when you yelled
at it what did it do it smiled it
winked it danced right if somebody comes
to my office and I yell at them they
started smiling winking and dancing I'm
like I never want to see you again so
Bill Gates got a standing ovation when
he said it was going away because people
were so ticked it was so emotionally
unintelligent right it was intelligent
about whether you're writing a letter
what kind of help you needed for that
context it was completely unintelligent
about hey if you're annoying your
customer don't smile in their face when
you do it so that kind of mismatch was
something the developers just didn't
think about and intelligence at the time
was really all about math and language
and chess and you know games problems
that could be pretty well defined social
emotional interaction is much more
complex than chess or go or any of the
games that people are trying to solve
and in order to understand that required
skills that most people in computer
science were actually lacking
personally well let's talk about
computer science have things gotten
better since the work since the message
since you really launched the field
with a lot of research work in the space
I still find as a person like yourself
who's deeply passionate about human
beings and yet I mean in computer
science there still seems to be a lack of
sorry to say empathy among computer
scientists yeah well it hasn't gotten
worse let's just say there's
a lot more variety among computer
scientists these days it's computer
scientists are much more diverse group
today than they were 25 years ago and
that's good we need all kinds of people
to become computer scientists so that
computer science reflects more what
society needs and you know there's
brilliance among every personality type
so it need not be limited to people who
prefer computers to other people how
hard do you think it is what's your view of
how difficult it is to recognize emotion
or to create a deeply emotionally
intelligent interaction has it gotten
easier or harder as you've explored it
further and how far away are we from
cracking this if you think of the
Turing test for solving intelligence
what about a Turing test for emotional
intelligence I think it is as difficult
as I thought it was going to be I think
my prediction of its difficulty is
spot-on
I think the time estimates are always
hard because they're always a function
of society's love and hate of a
particular topic if society gets excited
and you get you know hundreds you
get thousands of researchers working on
it for a certain application that
application gets solved really quickly
the general intelligence though the
computer's complete lack of ability
to have awareness of what it's doing the
fact that it's not conscious the fact
that there's no signs of it becoming
conscious the fact that it doesn't read
between the lines those kinds of things
that we have to teach it explicitly what
other people pick up implicitly we don't
see that changing yet there aren't
breakthroughs yet that lead us to
believe that that's going to go any
faster which means that it's still going
to be kind of stuck with a lot of
limitations where it's probably only
going to do the right thing in very
limited narrow
pre-specified contexts where we can
prescribe pretty much
what's gonna happen there so
it's hard to predict the date
because when people don't work on it
it's infinite when everybody works on it
you get a nice piece of it you know well
solved in a short amount of time I
actually think there's a more important
issue right now then the difficulty of
it and that's causing some of us to put
the brakes on a little bit usually we're
all just like step on the gas
let's go faster this is causing us to
pull back and put the brakes on and
that's the way that some of this
technology is being used in places like
China right now and that worries me so
deeply that it's causing me to pull back
myself on a lot of the things that we
could be doing and try to get the
community to think a little bit more
about ok if we're gonna go forward with
that how can we do it in a way that puts
in place safeguards that protects people
the technology we're referring to is just
when a computer senses the human being
like the human face yeah right yeah so
there's a lot of exciting things
there like forming a deep connection
with the human being so what are your
worries how that could go wrong is it in
terms of privacy is it in terms of
other kinds of concerns so here in the US if
I'm watching a video of say a political
leader and in the u.s. we're quite
free as we all know to even criticize
you know the president of the United
States right here that's not a shocking
thing it happens you know about every
five seconds sorry but in China what
happens if you criticize the leader of
the government right and so people are
very careful not to do that however what
happens if you're simply watching a
video and you make a facial expression
that shows a little bit of skepticism
right well you know here we're
completely free to do that in fact
we're even
free to fly off the handle and say
anything we want usually I mean there
are some restrictions you know when when
the athlete does this as part of the
national broadcast maybe the teams get a
little unhappy about picking that forum
to do it right but that's more a question
of judgment we have these freedoms
and in places that don't have those
freedoms what if our technology can read
your underlying affective state what if
our technology can read it even
non-contact what if our technology can
read it without your prior consent and
here in the US at the first company we
started Affectiva we have worked super
hard to turn away money and
opportunities that try to read people's
affect without their prior informed
consent and even the software that is
licensable you have to sign things
saying you will only use it in certain
ways which essentially is get people's
buy-in right don't don't do this without
people agreeing to it there are other
countries where they're not interested
in people's buy-in they're just gonna
use it they're gonna inflict it on you
and if you don't like it you better not
scowl in the direction of any sensors
so let me just comment on a small
tangent are you familiar with the idea of
adversarial examples and deep fakes and
so on yeah what you bring up is actually
in one sense deep fakes
provide a comforting protection in that
you can no longer really trust that
the video of your face was legitimate
and therefore you always have an escape
clause if a government a less than
stable balanced ethical government is
trying to accuse you of something at
least you have protection you could say
it was fake news as that's a popular term
now speaking of that we know how to go into
the video and see for example your heart
rate and respiration and whether or not
they've been tampered with
and we also can put like fake heart rate
and respiration in your video now too we
decided we needed to do that after
we developed a way to extract it
we decided we also needed a way to jam
it and so the fact that we took time
to do that other step right that was
time that I wasn't spending making the
machine more affectively intelligent and
there's a choice and how we spend our
time which is now being swayed a little
bit less by this goal and a little bit
more like by concern about what's
happening in society and what kind of
future do we want to build and as we
step back and say okay we don't just
build AI to build AI to make Elon Musk
more money or to make Amazon's Jeff Bezos
more money gosh you know
that's the wrong ethic why are we
building it what is the point of
building AI it used to be it was driven
by researchers in academia to get papers
published and to make a career for
themselves and to do something cool
right like cuz maybe it could be done
now we realize that this is enabling
rich people to get vastly richer the
divide with the poor is even larger and
is that the kind of future that we want
maybe we want to think about maybe we
want to rethink AI maybe we want to
rethink the problems in society that are
causing the greatest inequity and
rethink how to build AI that's not about
a general intelligence but that's about
extending the intelligence and
capability of the have-nots so that we
close these gaps in society do you hope
that kind of stepping on the brake
happens organically because I think
still majority of the force behind AI is
the desire to publish papers is to make
money without thinking about the why do
you hope it happens organically or is
there room for regulation yeah great
questions I prefer you know they
talk about the carrot versus the stick I
definitely prefer the carrot to the
stick and you know in our free world
there's only so much stick right people
find a way around it I generally think
less regulation is better that said even
though my position is classically carrot
no stick no regulation I think we do
need some regulations in this space I do
think we need regulations around
protecting people and their data that
you own your data not Amazon not Google
I would like to see people own their own
data I would also like to see the
regulations that we have right now
around lie detection being extended to
emotion recognition in general that
right now you can't use a lie detector
on an employee or on a candidate
when you're interviewing them
for a job I think similarly we need to
put in place protection around reading
people's emotions without their consent
and in certain cases like characterizing
them for a job and other opportunities
so I also think that when
we're reading emotion that's predictive
around mental health
even though it's not medical data it
should get the kinds of protections
that our medical data gets what most
people don't know yet is right now with
your smartphone use and if you're
wearing a sensor and you want to learn
about your stress and your sleep and
your physical activity and how much
you're using your phone and your social
interaction all of that non-medical data
when we put it together with machine
learning now called AI even though the
founders of AI wouldn't have called it
that that capability can not only tell
that you're calm right now or that
you're getting a little stressed but it
can also predict how you're likely to be
tomorrow if you're likely to be sick or
healthy happy or sad stressed or calm
especially when you're tracking data
over time especially when we're tracking
a week of your data or more you have an
optimism towards this you know a lot of
people with our phones are worried about
this camera that's looking at us on
balance are you
optimistic about the benefits that can
be brought from that camera that's
looking at billions of us or should we
be more worried
I think we should be a little bit more
worried about who's looking at us and
listening to us the device sitting on
your countertop in your kitchen whether
it's you know Alexa Google home or Apple
Siri these devices want to listen
ostensibly they say to help us and I
think there are great people in these
companies who do want to help people I
let me not brand them all bad I'm a user
of products from all of these
companies I'm naming all the A companies
Alphabet Apple Amazon they are awfully
big companies right they have incredible
power and you know what if what if China
were to buy them
right and suddenly all of that data were
not part of free America but all of that
data were part of somebody who just
wants to take over the world and you
submit to them and guess what happens if
you so much as smirk the wrong way when
they say something that you don't like
well they have reeducation camps right
that's a nice word for them by the way
they have a surplus of organs for people
who have surgery these days they don't
have an organ donation problem because
they take your blood and they know
you're a match and the doctors are on
record of taking organs from people who
are perfectly healthy and not prisoners
they're just simply not the favored ones
of the government and you know that's a
pretty freaky evil Society and we can
use the word evil there I was born in
the Soviet Union I can certainly connect
to the worry that you're
expressing at the same time probably
both you and I and you very much so you
know there's an exciting possibility
that you can have a deep connection with
a machine yeah yeah right so I have
students who say you know when you
ask who do you
most wish you could have lunch with or
dinner with right they say
I don't like people I just like
computers and one of them said to me
once when I had this party at my house I
want you to know this is my only social
event of the year my one okay now this
is a brilliant machine learning person
right and we need that kind of
brilliance in machine learning and I
love that computer science welcomes
people who love people and people who
are very awkward around people I love
that this is a field that anybody could
join we need all kinds of people and you
don't need to be a social person I'm not
trying to force people who don't like
people to suddenly become social at the
same time if most of the people building
the AIs of the future are the kind of
people who don't like people we've got a
little bit of a problem
hold on a second so let me let me push
back on that don't you think a large
percentage of the world you know
there's loneliness there is a huge
problem with loneliness that's growing
and so there's a longing for connection
if you're lonely you're part of a
big and growing group yes so
we're in it together I guess if
you're lonely you're not
alone that's a good line but do you
think there's uh you talked about some
worry but do you think there's an
exciting possibility that something
like Alexa and these kinds of tools can
alleviate that loneliness in a way that
other humans can't yeah yeah definitely
I mean a great book can kind of
alleviate loneliness because you
just get sucked into this amazing story
and you can't wait to go spend time with
that character right and they're not a
human character there is a human behind
it but yeah it can be an incredibly
delightful way to pass the hours and it
can meet needs even you know I don't
read those trashy romance books but
somebody does right and what are they
getting from this well probably some of
that feeling of
being there right being there in that
social moment that romantic moment or
connecting with somebody I've had a
similar experience reading some science
fiction books connecting with the
characters Orson Scott Card you know
just amazing writing in Ender's Game
and Speaker for the Dead terrible
title but those kinds of books pull
you into a character and you feel like
you're there it feels very social very
connected even though it's not
responding to you and a computer of
course can respond to you so it can
deepen it right you can have a very deep
connection much more than the movie her
right
you know plays up right much more I
mean the movie Her is already a pretty
deep connection right well but
it's just a movie right it's
scripted you know but I mean
like there can be a real interaction
where the character can learn and you
can learn you could imagine it not just
being you and one character you can
imagine a group of characters you can
imagine a group of people and characters
humans and AIs connecting where maybe a
few people can't sort of be friends
with everybody but a few people and
their AIs can befriend more people
there can be an extended human
intelligence there where each human
can connect with more people that way
but it's still very limited
what I mean is there are many
more possibilities than what's in that
movie so there's a tension here on one
hand you expressed a really serious concern
about privacy about how governments can
misuse the information and there's a
possibility of this connection so
let's look at Alexa yeah so personal
assistants for the most part as far as
I'm aware they ignore your emotion they
ignore even the context or the existence
of you the intricate beautiful complex
aspects of who you are
except maybe aspects of your voice
that help with speech
recognition do you think they should
move towards trying to understand your
emotion
all of these companies are very
interested in understanding human
emotion more people are
telling Siri every day they want to kill
themselves and Apple wants to know the
difference between if a person is really
suicidal versus if a person is just kind
of fooling around with Siri right the
words may be the same the tone of voice
and what surrounds those words is
pivotal to understand if they should
respond in a very serious way bring help
to that person or if they should kind of
jokingly tease back you know ah you just
want to you know sell me on something
else right like how do you respond
when somebody says that well you know
you do want to err on the side of being
careful and taking it seriously people
want to know if the person is happy or
stressed in part so let me give you an
altruistic reason and a business profit
motivated reason and there are people
and companies that operate on both
principles the altruistic people really
care about their customers and really
care about helping you feel a little
better at the end of the day and it
would just make those people happy if
they knew that they made your life
better if you came home stressed and
after talking with their product you
felt better there are other people who
maybe have studied the way affect
affects decision-making and the prices
people pay and they know if I should
tell you like the work of Jen Lerner on
heartstrings and purse strings you know
if we manipulate you into a slightly
sadder mood you'll pay more Yeah right
you'll pay more to change your situation
you'll pay more for something you don't
even need to make yourself feel better
so you know if they sound a little sad
maybe I don't want to cheer them up
maybe first I want to help them get
something a little shopping therapy
right that helps them which is really
difficult for a company that's primarily
funded on advertisement so they're
encouraged to get you to where they
can offer you products or a company
primarily funded by you buying things
from their store so I think we should be
you know maybe we need regulation in the
future to put a little bit of a wall
between these agents that have access to
our emotion and agents that want to sell
us stuff maybe there needs to be a
little bit more of a firewall in between
those
so maybe digging in a little bit on the
interaction with Alexa
you mentioned of course a really serious
concern about like recognizing emotion
if somebody is speaking of suicide or
depression and so on but what about the
actual interaction itself do you think
so you know you mentioned
Clippy being annoying what is the
objective function we're trying to
optimize is it to minimize annoyingness
or to maximize happiness or both if
we look at human human relations I think
that push and pull the tension the dance
you know the annoying the flaws that's
what makes it fun
so is there room for that
you want to have a
little push and pull think of kids sparring
right you know I see my sons and
one of them wants to provoke the other
to be upset and that's fun and it's
actually healthy to learn where your
limits are to learn how to self-regulate
you can imagine a game where it's trying
to make you mad and you're trying to
show self-control and so if we're doing
an AI human interaction that's helping
build resilience and self-control
whether it's to learn how to not be a
bully or how to turn the other cheek or
how to deal with an abusive person in
your life then you might need an AI that
pushes your buttons right but in general
do you want an AI that pushes your
buttons mmm probably depends on your
personality I don't
I want one that's respectful that is
there to serve me and that is there to
extend my ability to do things I'm not
looking for a rival I'm looking for a
helper
and that's the kind of AI I'd put my money
on is your sense that for the majority of
people in the world in order to have a
rich experience that's what they're
looking for as well they're not
looking if you look at the movie Her
spoiler alert I believe the program
the woman in the movie Her leaves
the uh person for somebody else right
because they don't want to be dating
anymore
right so what is your sense if
Alexa said you know what I've actually
had enough of you for a while I'm
gonna shut myself off do you see that
I'd say you're trash cuz I paid
for you right okay yeah we've got to
remember and this is where this blending
of human and AI as if we're equals
is really deceptive because AI is
something at the end of the day that my
students and I are making in the lab and
we're choosing what it's allowed to say
when it's allowed to speak what it's
allowed to listen to what it's allowed
to act on given the inputs that we
choose to expose it to what outputs it's
allowed to have it's all something made
by a human and if we want to make
something that makes our lives miserable
fine I wouldn't invest in it as a
business you know unless it's just there
for self-regulation training but I think
we you know we need to think about what
kind of future we want and actually
I really like your question what is the
objective function is it to calm people
down sometimes is it to always make
people happy and calm them down well
there was a book about that right
Brave New World you know make everybody
happy
take your soma if you're unhappy take
your happy pill and if you refuse to
take your happy pill well we'll threaten
you by sending you to Iceland to live
there I lived in Iceland three years
it's a great place thank you so much
for the little TV commercial there I was
a child there for a few years it's a
wonderful place
so that part of the book never scared me
but really do we want AI to
manipulate us into submission into
making us happy well if you are a you
know like a power obsessed sick dictator
individual who only wants to control
other people to get your jollies in life
then yeah you want to use AI to extend
your power and your scale just to force
people into submission if you believe
that the human race is better off being
given freedom and the opportunity to do
things that might surprise you then you
want to use AI to extend people's
ability you want to build AI
that extends human intelligence that
empowers the weak and helps balance the
power between the weak and the strong
not that gives more power to the strong
so in this process of empowering
people and sensing people what is
your sense in terms of
recognizing emotion on the difference
between emotion that is shown and
emotion that is felt yeah
emotion that is expressed on the surface
through your face your body and not
various other things and what's actually
going on deep inside on the biological
level on the neuroscience level or some
kind of cognitive level yeah whoa no
easy questions here boy yeah I'm sure
there's no definitive answer
but what's your sense how far can we get
by just looking at the face we're very
limited when we just look at the face
but we can get further than most people
think we can get people think hey I have
a great poker face therefore all you're
ever gonna get from me is neutral well
that's naive we can read with the
ordinary camera on your laptop or on
your phone we can read from a neutral
face if your heart is racing we can read
from a neutral face if your breathing is
becoming irregular and showing signs of
stress we can read under some conditions
that maybe I won't give you details on
how your heart rate variability
power is changing that could be a
sign of stress even when your heart rate
is not necessarily accelerating
sorry this is from the video from the
face from the color changes that you
cannot even see but the camera can see
that's amazing so you can get a lot
of signal yes we get things people
can't see using a regular camera and
from that we can tell things about your
stress so if you are just sitting there
with a blank face thinking nobody can
read my emotion well you're wrong
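Picard's point that an ordinary camera can recover your pulse from color changes too subtle to see rests on a well-known idea, remote photoplethysmography: skin reflectance varies slightly with each heartbeat, so the dominant frequency of the per-frame skin color, restricted to the plausible human pulse band, gives the heart rate. The sketch below is purely illustrative (synthetic data, a brute-force frequency scan), not her group's actual pipeline:

```python
import math

def dominant_freq_bpm(green_means, fps):
    """Estimate pulse rate in beats per minute from per-frame mean
    green-channel values, by scanning the plausible human pulse band
    (0.7-3.0 Hz) with a brute-force discrete Fourier projection."""
    n = len(green_means)
    mean = sum(green_means) / n
    x = [v - mean for v in green_means]  # remove the DC (baseline) component
    best_f, best_power = 0.0, -1.0
    steps = int((3.0 - 0.7) / 0.01) + 1
    for i in range(steps):
        f = 0.7 + i * 0.01  # candidate frequency in Hz
        re = sum(x[t] * math.cos(2 * math.pi * f * t / fps) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * f * t / fps) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_f, best_power = f, power
    return best_f * 60.0  # Hz -> beats per minute

# Synthetic 10-second "face video" trace at 30 fps: a 1.2 Hz (72 bpm)
# pulse riding on a constant skin-tone baseline.
fps = 30.0
trace = [100.0 + 0.5 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(300)]
bpm = dominant_freq_bpm(trace, fps)
```

A real system would first detect and track the face region, average its pixels per frame, and filter out motion and lighting artifacts before this frequency-analysis step.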
so that's really interesting but that's
from sort of visual information from the
face that's almost like cheating your
way to the physiological state of the
body by being very clever with
computational signal
processing which is really impressive
but if you just look at the stuff we
humans can see the smiles the
smirks all the subtleties of the face
you can hide that on your face for a
limited amount of time now if
you're just going in for a brief
interview and you're hiding it that's
pretty easy for most people if you are
however surveilled constantly everywhere
you go then it's gonna say gee you know
Lex used to smile a lot and now I'm not
seeing so many smiles and Roz used to
you know laugh a lot and smile a lot
very spontaneously and now I'm only
seeing these not so spontaneous looking
smiles and only when she's asked these
questions you know that's something we
could look at too so now I have to
be a little careful too when I say we
you think we can't read your emotion and
we can it's not that binary what we're
reading is more some physiological
changes that relate to your
activation now that doesn't mean that we
know everything about how you feel in
fact we still know very little about how
you feel your thoughts are still private
your nuanced feelings are still
completely private we can't read any of
that so there's some relief that we
can't read that even brain
imaging can't read that wearables can't
read that however as we read your body
state changes and we know what's going
on in your environment and we look at
patterns of those over time we can start
to make some inferences about what you
might be feeling and that is where it's
not just the momentary feeling but it's
more your stance towards things and that
could actually be a little bit more
scary with certain kinds of governmental
control freak people who want to know
more about are you on their team or are
you not and getting that information
over time so you're saying
there's a lot you can learn by looking at
the change over time yeah so you've done
a lot of exciting work both in computer
vision and physiological sensing like
wearables what do you think is the best
modality what's the best
window into the emotional soul is
it the face is it the voice is it the
body it depends what you want to know
everything is informative everything we
do is informative so for health and
well-being and things like that
I find the wearable
measuring physiological signals is the
best for health based stuff so here I'm
gonna answer empirically with data and
studies we've been doing we've been
doing studies now these are currently
running with lots of different kinds of
people but where we've published data
and I can speak publicly to it the data
are limited right now to New England
college students so that's a small group
among New England college students when
they are wearing a wearable like
the Empatica Embrace here that's measuring
skin conductance movement temperature
and when they are using a smartphone
that is collecting their time of day of
when they're texting who they're texting
their movement around it their GPS the
weather information based upon their
location and when it's using machine
learning and putting all of that
together and looking not
just at it right now but looking at your
rhythm of behaviors over about a week
when we look at that we are very
accurate at forecasting tomorrow's
stress mood and happy sad mood and
health and when we look at which pieces
of that are most useful first of all if
you have all the pieces you get the best
results if you have only the wearable
you get the next best results and that's
still better than 80% accurate at
forecasting tomorrow's levels that's
exciting because the wearable stuff with
physiological information it feels like
it violates privacy less than the
non-contact face based methods yeah it's
it's interesting I think what people
sometimes don't realize you know in the
early days people would say oh wearing
something or giving blood is invasive
right whereas a camera is less invasive
because it's not touching you I think on
the contrary the things that are not
touching you are maybe the scariest
because you don't know when they're on
or off and you don't know
who's behind it right
a wearable depending upon what's
happening to the data on it if it's just
stored locally or if it's streaming and
what it is being attached to in a
sense you have the most control over it
because it's also very easy to just take
it off take it off right now it's not
sensing me so if I'm uncomfortable with
what it's sensing now I'm free yeah
right if I'm comfortable with what it's
sensing then and I happen to know
everything about this one what it's
doing with it so I'm quite comfortable
with it then I'm you know I have control
I'm comfortable control is one of the
biggest factors for an individual in
reducing their stress if I have control
over it if I know all there is to know
about it then my stress is a lot lower
and I'm making an informed choice about
whether to wear it or not or when to wear
it or not I wanna wear it sometimes
maybe not others right this is that
control that I'm with you that control
even if you had the ability to turn it
off yeah is really important and we
need to maybe you know if there's
regulation maybe that's number one to
protect people's ability to opt
out as easily as to opt in right so you've
studied a bit of neuroscience as well
how has looking at our own minds the
biological stuff the neurobiology
the neuroscience the signals in
our brain helped you understand the
problem and the approach of affective
computing so originally I was a
computer architect I was building
hardware and computer designs and I
wanted to build ones that work like the
brain so I've been studying the brain as
long as I've been studying how to build
computers have you figured out anything
yet it's so amazing you know they used
to think like oh if you remove this
chunk of the brain and you find this
function goes away well that's the part
of the brain that did it and then later
they realize if you remove this other
chunk of the brain that function comes
back and oh no we really don't
understand it brains are so interesting
and changing all the time and able to
change in ways that will probably
continue to surprise us when we were
measuring stress you may know the story
where we found an unusually big skin
conductance pattern on one wrist in one
of our kids with autism and in trying to
figure out how on earth you could be
stressed on one wrist and not the other
you know like how can you get sweaty on
one wrist right when you get
stressed with that sympathetic
fight-or-flight response you
kind of should sweat more in some
places than others but not more on one
wrist than the other that didn't make
any sense we learned that what had
actually happened was a part of his
brain had unusual electrical activity
and that caused an unusually large sweat
response on one wrist and not the other
and since then we've learned that
seizures caused this unusual electrical
activity and depending where the seizure
is if it's in one place and it's staying
there you can have a big electrical
response we can pick up with a wearable
at one part of the body you can also
have a seizure that spreads over the
whole brain generalized grand mal
seizure and that response spreads and we
can pick it up pretty much anywhere as
we learned this and then later built
Embrace that's now FDA cleared for
seizure detection we have also built
relationships with some of the most
amazing doctors in the world who not
only help people with unusual brain
activity or epilepsy but some of them
are also surgeons and they're going in
and they're implanting electrodes not
just to momentarily read the strange
patterns of brain activity that we'd
like to see return to normal but also to
read out continuously what's happening
in some of these deep regions of the
brain during most of life when these
patients are not seizing most of the
time they're not seizing most of the
time they're fine and so we are now
working on mapping those deep brain
regions that you can't even usually get
with EEG scalp electrodes because the
changes deep inside don't reach the
surface but interestingly when some of
those regions are activated we see a big
skin conductance response who would have
thunk it right like nothing here but
something here in fact right after
seizures that we think are the most
dangerous ones that precede what's
called SUDEP sudden unexpected death in
epilepsy there's a period where the
brain waves go flat and it looks like
the person's brain has stopped but it
hasn't the activity has gone deep
into a region that can make the cortical
activity look flat like a quick shutdown
signal here it can unfortunately cause
breathing to stop if it progresses long
enough before that happens we see a big
skin conductance response in the data
that we have the longer this flattening
the bigger our response here so we have
been trying to learn you know initially
like why why are we getting a big
response here when there's nothing here
well it turns out there's something much
deeper so we can now go inside the
brains
of some of these individuals fabulous
people who usually aren't seizing and
get this data and start to map it so
that's active research that we're doing
right now with top medical partners
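as a rough illustration of the kind of signal processing being described here, this is a minimal sketch of flagging unusually large skin conductance responses in an electrodermal activity trace; the sampling rate, thresholds, and sample data are all hypothetical, and this is not Empatica's actual seizure-detection algorithm

```python
# minimal sketch of flagging large skin conductance responses (SCRs) in an
# electrodermal activity (EDA) trace -- all numbers here are hypothetical
# illustrations, not Empatica's actual detection method

def detect_scrs(eda, fs=4.0, min_rise=0.5, window_s=10.0):
    """Return (start_index, amplitude) pairs where the signal rises at least
    min_rise microsiemens above the minimum of the preceding window."""
    win = max(1, int(window_s * fs))  # baseline window in samples
    events = []
    i = 0
    n = len(eda)
    while i < n:
        baseline = min(eda[max(0, i - win):i + 1])
        if eda[i] - baseline >= min_rise:
            # walk forward to the local peak of this response
            j = i
            while j + 1 < n and eda[j + 1] >= eda[j]:
                j += 1
            events.append((i, eda[j] - baseline))
            i = j + 1  # skip past the peak before searching again
        else:
            i += 1
    return events

# synthetic trace: flat baseline, then one large ramping response
signal = [1.0] * 40 + [1.0 + 0.1 * k for k in range(1, 21)] + [3.0] * 20
print(detect_scrs(signal))
```

a real pipeline would also correct for motion artifacts and temperature, which is part of why the wearable measures those alongside skin conductance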
so this wearable sensor
looking at skin conductance can capture
sort of the ripples of the complexity of
what's going on in our brain so with
this little device you have a hope that
you can start to get the signal from
the interesting things happening in
the brain yeah we've already published
the strong correlations between the size
of this response and the flattening that
happens afterwards and unfortunately
also in a real SUDEP case where the
patient died well we don't
know why we don't know if somebody was
there it would have definitely prevented
it but we know that most SUDEPs happen
when the person's alone and
SUDEP is an acronym S-U-D-E-P and it stands
for the number two cause of years of
life lost actually among all
neurological disorders
stroke is number one SUDEP is number
two but most people haven't heard of it
actually I'll plug my TED talk it's on
the front page of TED right now that
talks about this and we hope to change
that I hope everybody who's heard of
SIDS and stroke will now hear of SUDEP
because we think in most cases it's
preventable if people take their meds
and aren't alone when they have a
seizure not guaranteed to be preventable
there are some exceptions but we think
most cases probably are so you have this
Embrace now in the version two wristband
right for epilepsy management that's the
one that's FDA approved yes and which is
kind of a weird term yes that's okay it
essentially means it's approved for
marketing got it just a side note how
difficult is that to do essentially
getting FDA clearance for computer
science technology it's so agonizing
it's much harder than publishing
multiple papers and top medical journals
yeah we published peer-reviewed top
medical journal Neurology best results
and that's not good enough for the FDA
is that a problem with the system
so if we look at the peer review of
medical journals there's flaws there's
strengths the FDA approval process
how does it compare to the peer review
process does it have strengths I
would take peer review over FDA any day
but is that a good thing is that a good
thing for FDA are you saying does it
stop some amazing technology from
getting through yeah it does the FDA
performs a very important good role in
keeping people safe
they put you through tons of safety
testing and that's wonderful and that's
great I'm all in favor of the safety
testing sometimes they put you through
additional testing that they don't have
to explain why they put you through it
and you don't understand why you're
going through it and it doesn't make
sense and that's very frustrating and
maybe they have really good reasons and
it would just do people a
service to articulate those reasons and be
more transparent so as part of
Empatica you have sensors so what kind of
problems can we crack what kind of
things from seizures to autism I think
I've heard you mentioned depression and
what kind of things can we alleviate can
we detect what's your hope of how
we can make the world a better place with
this wearable tech I would really like
to see my you know fellow brilliant
researchers step back and say you know
what are what are the really hard
problems that we don't know how to solve
that come from people maybe we don't
even see in our normal life because
they're living in the poorer places
they're stuck on the bus they
can't even afford the Uber or the Lyft
or the data plan or all these other
wonderful things we have that we keep
improving on meanwhile there's all these
folks left behind in the world and
they're struggling with horrible
diseases with depression with epilepsy
with diabetes with just awful stuff that
maybe a little more time and attention
hanging out with them and learning what
are their challenges in life what are
their needs how do we help them have job
skills how do we help them have a hope
and a future and a chance to have the
great life that so many of us building
technology have and then how would that
reshape the kinds of AI that we build
how would that reshape the new you know
apps that we build or maybe we need
to focus on how to make things more
low-cost and green instead of
thousand-dollar phones I mean come on
you know why can't we be thinking more
about things that do more with less for
these folks quality of life is not
related to the cost of your phone you
know it's
been shown that above about
seventy-five thousand dollars of income
happiness is the same okay however I
can tell you you get a lot of happiness
from helping other people and get a lot
more than seventy-five thousand dollars
buys so how do we connect up the people
who have real needs with the people who
have the ability to build the future and
build the kind of future that truly
improves the lives of all the people
that are currently being left behind so
let me return just briefly to a point
made in the movie Her you said
much of the benefit from making our
technology more empathetic to us human
beings would make them better tools
empower us make our lives better
well if we look farther into the future
do you think we'll ever create an AI
system that we can fall in love with and
loves us back on the level that is
similar to human to human interaction
like in the movie Her or beyond I think
we can simulate it in ways that could
you know sustain engagement for a while
would it be as good as another person I
don't think so
not if you're used to good people
now if you've just grown up with nothing
but abuse and you can't stand human
beings can we do something that helps
you there that gives you
something through a machine yeah but
that's pretty low bar right if you've
only encountered pretty awful people
if you've encountered wonderful amazing
people we're nowhere near building
anything like that
and I would not bet on building it I
would bet instead on building the kinds
of AI that helps kind of raise
all boats that helps all people be
better people helps all people figure
out if they're getting sick tomorrow and
it helps give them what they need to
stay well tomorrow that's the kind of AI
I want to build that improves human lives
not the kind of AI that just walks on
The Tonight Show and people go wow look
how smart that is you know really like
and then it goes back in a box you know
so on that point if we continue looking
a little bit into the future do you
think an AI that's empathetic and does
improve our lives needs to have a
physical presence of body and even let
me cautiously say the C word
consciousness and even fear of mortality
so some of those human characteristics
do you think it needs to have those
aspects or can it remain simply a
machine learning tool that learns from
data of behavior that learns to
make us feel better based on previous
patterns or does it need those elements of
consciousness it depends on your
goals if you're making a movie it needs
a body it needs a gorgeous body it needs
to act like it has consciousness it
needs to act like it has emotion right
because that's what sells that's what's
gonna get me to show up and enjoy the
movie okay in real life
does it need all that well if you've
read Orson Scott Card Ender's Game
Speaker for the Dead you know it could
just be like a little voice in your
earring right and you could have an
intimate relationship and it could get
to know you and it doesn't need to be a
robot but that doesn't make as
compelling of a movie right I mean we
already think it's kind of weird when a
guy looks like he's talking to himself
on the train you know even though he's
wearing earbuds so
embodied is more powerful when
you compare interactions with an
embodied robot versus a video of a robot
versus no robot the robot is more
engaging the robot gets our attention
more the robot when you walk in your
house is more likely to get you to
remember to do the things that you asked
it to do because it's kind of got a
physical presence you can avoid it if
you don't like it it can see you're
avoiding it there's a lot of power to
being embodied there will be embodied
AIS they have great power and
opportunity and potential there will
also be AIs that aren't embodied that
are just little software assistants that
help us with different things that may
get to know things about us will they be
conscious there will be attempts to
program them to make them appear to be
conscious we can already write programs
that make it look like what do you mean
of course I'm aware that you're there
right I mean it's trivial to say stuff
like that it's it's easy to fool people
but does it actually have conscious
experience like we do nobody has a clue
how to do that yet that seems to be
something that is beyond what any of us
knows how to build now will it have to
have that I think you can get pretty far
with a lot of stuff without it will we
accord it rights well that's more a
political game than it is a question of
real consciousness
yeah can you go to jail for turning off
Alexa that's a question for an
election maybe a few decades from now well Sophia
the robot's already been given rights as a
citizen in Saudi Arabia right even
before women have full rights then the
robot was still put back in the box to
be shipped to the next place where it
would get a paid appearance right yeah
dark and almost
comedic if not absurd so I've heard you
speak about your journey in finding
faith and how you discovered some
wisdom about life and beyond from
reading the Bible you say that
scientists often assume that nothing
exists beyond what can be currently
measured materialism materialist and
scientism yes in some sense this
assumption enables the near term
scientific method assuming that we can
uncover the mysteries of this world by
the mechanisms of measurement that we
currently have but we easily forget that
we've made this assumption so what do
you think we missed out on by making
that assumption hmm it's fine to
limit the scientific method to things we
can measure and reason about and
reproduce that's fine I think we have to
recognize that sometimes we scientists
also believe in things that happen
historically you know like I believe the
Holocaust happened I can't prove events
from past history scientifically you
prove them with historical evidence
right with the impact they had on
people with eyewitness testimony and and
things like that so a good thinker
recognizes that science is one of many
ways to get knowledge it's not the only
way and there's been some really
bad philosophy and bad thinking recently
you can call it scientism where people
say science is the only way to get to
truth and it's not it just isn't there
are other ways that work also like
knowledge of love with someone you
don't prove your love through
science right so history philosophy love
a lot of other things in life show us
that there's more ways to gain knowledge
and truth if you're willing to believe
there is such a thing and I believe
there is
than science I do I am a scientist
however and in my science I do limit my
science to the things that the
scientific method can do but I
recognize that it's myopic to say that
that's all there is right there's just
like you listed there's all the why
questions and really we know for being
honest with ourselves the percent of
what we really know is basically zero
relative to the full mystery of this
measure theory a set of measure zero if
I have a finite amount of knowledge
which I do so you said that you believe
in truth so let me ask that old question
what do you think this thing is all
about
life on Earth life the universe and
everything that was Douglas
Adams yeah 42 my favorite number my
street address my husband and I picked
the exact same number for our house we
got to pick it there's a reason we
picked 42 yeah so is it just 42 or
do you have other words that you
can put around it well I think there's a
grand adventure and I think this life is
a part of it I think there's a lot more
to it than meets the eye and the heart
and the mind and the soul here I think
we see but through a glass dimly in
this life we see only a part of all
there is to know if people haven't
read the Bible they should if they
consider themselves educated you
could read Proverbs and find immense
wisdom in there that cannot be
scientifically proven but when you read
it there's something in you like a
musician knows when the instrument's
played right and it's beautiful there's
something in you that comes alive and
knows that there's a truth there that
like your strings are being plucked by
the master instead of by me right
playing when I pluck it but probably
when you play it it sounds spectacular right
and when you when you encounter those
truths there's something in you that
sings and knows that there is more than
what I can prove mathematically or
program a computer to do don't get me
wrong the math is gorgeous the
computer programming can be brilliant
it's inspiring right we want to do more
none of that squashes my desire to do
science or to get knowledge through
science I'm not dissing the
science at all I grow even more in awe
of what the science can do because I'm
more in awe of all there is we don't
know and really at the heart of science
you have to have a belief that there's
truth that there's something greater to
be discovered and some scientists may
not want to use the faith word but it's
faith that drives us to do science it's
faith that there is truth that there's
something to know that we don't know
that it's worth knowing that it's worth
working hard and that there is meaning
that there is such a thing as meaning
which by the way science can't prove
either we have to kind of start with
some assumptions that there's things
like truth and meaning and these are
really questions philosophers own right
this is their space of philosophers and
theologians at some level so when
people claim that science will tell you
all truth there's a name for that
it's its own kind of faith it's
scientism and it's very myopic yeah
there's a much bigger world out there to
be explored in in ways that science may
not at least for now allow us to explore
yeah and there's meaning and purpose and
hope and joy and love and all these
awesome things that make it all
worthwhile too
I don't think there's a better way to
end it right thank you so much for
talking today pleasure great questions