Judea Pearl: Human-Level AI and the Test of Free Will | AI Podcast Clips
0rd3TB_dIQo • 2019-12-13
Kind: captions
Language: en
I know you're not a futurist, but are you excited... have you, when you look back at your life, longed for the idea of creating a human-level intelligence?

Yeah, I'm driven by that. All my life I've been driven just by one thing. But I go slowly: I go from what I know to the next step, incrementally.

So without imagining what the end goal looks like... do you imagine what the end goal is going to be?

A machine that can answer sophisticated questions: counterfactuals, regret, compassion,
responsibility, and free will.

So what is a good test? Is there a Turing test for free will? Presumably free will doesn't exist yet, so how would you test free will?

So far we know only one thing, meaning: if robots can communicate with reward and punishment among themselves, hitting each other on the wrist and saying "you shouldn't have done that,"
and then playing better soccer, because they can do that.

What do you mean, "because they can do that"?

Because they can communicate among themselves.

Because of the communication?

Because of the communication, they can do what we do: reward and punishment. "You didn't pass the ball at the right time, and therefore you're going to sit on the bench for the next two..." If they start communicating like that, the question is: will they play better soccer, as opposed to what they do now, without this ability to reason about reward and punishment, and responsibility?
And to be clear, I can only think about communication... communication in natural language?

Not necessarily natural language; just communication.

Just communication.

And that's important: to have a quick and effective means of communicating
knowledge. If the coach tells you "you should have passed the ball," he conveys so much knowledge to you, as opposed to having to go down and change your software. That's the alternative. But the coach doesn't know your software, so how can the coach tell you "you should have passed the ball"? But our language is so effective: "you should have passed the ball" means you know your software, you tweak the right module, and next time you don't do it.

Now, that's for playing soccer, where the rules are well defined.

Well, not well defined. When you should pass the ball is not well defined. No, it's very soft; there is no...

Yes,
it's art.

But in terms of aligning values between computers and humans, do you think this cause-and-effect type of thinking is important? To align the values, morals, and ethics under which the machines make decisions, is cause and effect where the two can come together?

It's a necessary component. To build an ethical machine, the machine has to empathize, to understand what's good for you, to build a model of you as a recipient. What is compassion? It's imagining that you suffer pain as much as me, as much as I do. I already have a model of myself, so it's very easy for me to map you onto myself; I don't have to rebuild the model. It's much easier to say, "oh, you are like me, okay, therefore I would not hate you."

And the machine has to imagine, has to try to fake being human, essentially, so it can imagine that "you are like me," right?
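Pearl's self-projection idea, reusing one's own self-model instead of building a fresh model of the other agent, can be rendered as a toy sketch. This is purely illustrative: the `Agent` class, its pain table, and the event names are all invented for this example, not anything described in the conversation.

```python
# Toy illustration of empathy by self-projection (all names invented):
# instead of building a new model of the other agent, I map them onto
# my existing model of myself.

class Agent:
    def __init__(self, name):
        self.name = name
        # Coarse self-model: how much various events would hurt *me*.
        self.self_model = {"kicked": 0.9, "benched": 0.4}

    def imagine_pain(self, other, event):
        # "You are like me": project my own self-model onto `other`
        # rather than rebuilding a model from scratch.
        return self.self_model.get(event, 0.0)

me = Agent("robot_1")
pain = me.imagine_pain("robot_2", "kicked")  # my own pain, projected onto you
```

The design point is only that the model of the other agent comes for free: it is a lookup in my model of myself.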
And moreover, who is "me"? Let's define it: that's consciousness, to have a model of yourself. Where do you get this model? You look at yourself as if you are a part of the environment. If you build a model of yourself versus the environment, then you can say, "I need to have a model of myself; I have abilities, I have desires, and so forth." Okay, I have a blueprint of myself, though not in full detail, because I cannot get the full detail (the halting problem, right?), but I have a blueprint. At the level of that blueprint I can modify things. I can look at myself in the mirror and say, "hmm, if I change this model, tweak this one, I'm going to perform differently." That is what we mean by free will and consciousness.

Gorgeous.
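The blueprint-and-tweak idea in this closing passage can be sketched as a toy program, again purely illustrative: the module names, the coach's advice, and the numbers are invented, not anything from the conversation. The agent holds a coarse model of its own modules, predicts the effect of a change on that blueprint, and only then modifies itself.

```python
# Toy sketch of a "blueprint of myself" (all names invented): a coarse
# self-model the agent can inspect, simulate changes on, and then edit.

class SelfModelingAgent:
    def __init__(self):
        # "My software": internal modules, opaque to the coach.
        self.modules = {"pass_timing": 0.3, "aggression": 0.5}

    def blueprint(self):
        # A coarse copy of myself -- not the full detail.
        return dict(self.modules)

    def predict_after_tweak(self, module, delta):
        # "If I tweak this one, I'm going to perform differently":
        # reason on the blueprint, leaving the real self untouched.
        imagined = self.blueprint()
        imagined[module] += delta
        return imagined

    def tweak(self, module, delta):
        # Knowing my own software, I map the coach's advice
        # ("you should have passed the ball") onto the right module.
        self.modules[module] += delta

agent = SelfModelingAgent()
imagined = agent.predict_after_tweak("pass_timing", 0.2)
agent.tweak("pass_timing", 0.2)
```

The point of the `blueprint` copy is that imagining a change leaves the running self untouched; only `tweak` edits the actual modules.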
file updated 2026-02-13 13:24:15 UTC