Transcript
y0c1yrPDLN8 • Peter Norvig: We Are Seduced by Our Low-Dimensional Metaphors | AI Podcast Clips
Kind: captions
Language: en
Any time you use neural networks, any time you learn from data and form representations from data in an automated way, it's not very explainable, it's not introspectable to us humans in terms of how this neural network sees the world: why does it succeed so brilliantly in so many cases, and fail so miserably in surprising and small ways? So what do you think: is the future simply more data, better data, more organized data solving that problem, or are there elements of symbolic systems that need to be brought in, which are a little bit more explainable?
Yeah, so I prefer to talk about trust, and about validation and verification, rather than just about explainability; explanations are one tool that you use towards those goals. I think it is an important issue that we don't want to use these systems unless we trust them, and we want to understand where they work and where they don't work, and an explanation can be part of that. So: I apply for a loan and I get denied, and I want some explanation of why. In Europe we have the GDPR, which says you're required to be able to get that. But on the other hand, the explanation alone is not enough. We're used to dealing with people, and with organizations and corporations and so on, and they can give you an explanation, but you have no guarantee that the explanation relates to reality. The bank can tell me, well, you didn't get the loan because you didn't have enough collateral, and that may be true, or it may be that they just didn't like my religion, or something else. I can't tell from the explanation, and that's true whether the decision was made by a computer or by a person.
So I want more. I do want to have the explanations, and I want to be able to have a conversation, to go back and forth and say: you gave this explanation, but what about this? What would have happened if this had happened? What would I need to change? So I think a conversation is a better way to think about it than an explanation as a single output. And I think we need testing of various kinds. In order to know whether the decision was really based on my collateral, or on my religion or skin color or whatever, I can't tell if I'm only looking at my case; but if I look across all the cases, then I can detect the pattern.

That's right.
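What's being described here, looking across all the cases rather than a single one, is in essence an aggregate audit. A minimal sketch in Python, with entirely hypothetical decision data and group labels, might look like this:

```python
# Minimal sketch of an aggregate audit: one denial tells you nothing,
# but approval rates compared across groups can reveal a pattern.
# All data here is hypothetical.
from collections import defaultdict

# (group, approved) pairs for many past loan decisions.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # bool counts as 0 or 1

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"{group}: approval rate {rate:.0%} over {totals[group]} cases")

# A large, persistent gap between groups with otherwise similar
# collateral is the kind of pattern no single explanation can show.
```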
So you want to have that kind of capability, and you want to have this adversarial testing. We thought we were doing pretty well at object recognition in images; we said, look, we're at or pretty close to human-level performance on ImageNet and so on. And then you start seeing these adversarial images, and you say: wait a minute, this is nothing like human performance. You can mess with it really easily.

You can mess with it really easily.

Right. And, yeah, you can do that to humans too, in a different way perhaps. Humans don't know what color the dress was, right? So they're vulnerable to certain attacks that are different from the attacks on the machines. But the attacks on the machines are so striking that they really change the way you think about what we've done.
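One standard way to construct the kind of adversarial images being described is the fast gradient sign method (FGSM). Below is a minimal PyTorch sketch; the toy model, random input, and epsilon value are stand-ins for illustration, not anything from the conversation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for a real image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

# Stand-in "image" and its assumed true label.
x = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(x), label)
loss.backward()

# Step a small amount in the direction that most increases the loss.
epsilon = 0.05
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

At small epsilon the perturbed image looks essentially identical to the original to a human, yet it can flip the model's prediction, which is the "step a little bit off the path" effect discussed next.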
And the way I think about it is: I think part of the problem is that we're seduced by our low-dimensional metaphors.

Yeah. So, you know, whether or not you like that phrase: you look in a textbook and you say, okay, now we've mapped out the space, and a cat is here and a dog is here, and maybe there's a tiny little spot in the middle where you can't tell the difference, but mostly we've got it all covered. If you believe that metaphor, then you say, well, we're nearly there, and there are only going to be a couple of adversarial images. But I think that's the wrong metaphor. What you should really say is: it's not a 2D flat space that we've got mostly covered; it's a million-dimensional space, and "cat" is this string that goes out along this crazy path, and if you step a little bit off the path in any direction, you're in no-man's-land and you don't know what's going to happen. So I think that's where we are, and now we've got to deal with that. So it wasn't so much an explanation as an understanding of what the models are and what they're doing, and now we can start exploring how to fix that.
Yeah, validating the robustness of the system. But let me take you back to this word "trust." Do you think we're a little too hard on our robots in terms of the standards we apply? You know, there's a dance of nonverbal and verbal communication between humans, and if we apply the same kind of standard: with humans, we trust each other pretty quickly. You and I haven't met before, and there's some degree of trust that nothing's going to go crazy wrong. And yet with AI systems, we seem to approach them with skepticism, always; it's like they have to prove, through a lot of hard work, that they're worthy of even an inkling of our trust. What do you think about that? How do we break that barrier, close that gap?

I think that's right; I think that's a big issue.
Just listening: my friend Mark Moffett is a naturalist, and he says the most amazing thing about humans is that you can walk into a coffee shop, or onto a busy street in a city, and there are lots of people around you that you've never met before, and you don't kill each other.

Yeah.

He says chimpanzees cannot do that. Right: if you put chimpanzees in a situation where here are some that are from my tribe and some that aren't, things happen.

Especially in a coffee shop, with delicious food around.

Yeah, yeah. But we humans have figured that out. And, you know, for the most part, for the most part; we still go to war, we still do terrible things, but for the most part we've learned to trust each other and live together.
So that's going to be important for our AI systems as well. And also, I think a lot of the emphasis is on AI, but in many cases AI is part of the technology and isn't really the main thing. A lot of what we've seen is due more to communications technology than to AI technology. You want to make these good decisions, but the reason we're able to have any kind of system at all is that we've got the communications, so that we're collecting the data and so that we can reach lots of people around the world. I think that's the bigger change that we're dealing with.