Transcript
owGn_BS--Hs • Daniel Kahneman: How Hard is Autonomous Driving? | AI Podcast Clips
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/lexfridman/.shards/text-0001.zst#text/0289_owGn_BS--Hs.txt
Kind: captions
Language: en
It seems that almost every robot-human collaboration system is a lot harder than people realize. So do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi-autonomous vehicles, like in the Tesla Autopilot, but in tasks in general: if you think, we talked about current neural networks being kind of System 1, do you think those same systems can borrow humans for System 2-type tasks and collaborate successfully?
Well, I think that in any system where humans and the machine interact, the human will be superfluous within a fairly short time. That is, if the machine is advanced enough that it can really help the human, then it may not need the human for long. Now, it would be very interesting if there are problems that for some reason the machine cannot solve but that people could. Then you would have to build into the machine an ability to recognize that it is in that kind of problematic situation and to call the human. That cannot be easy without understanding; that is, it must be very difficult to program a recognition that you are in a problematic situation without understanding the problem.

But that's very true: in order to understand the full scope of situations that are problematic, you almost need to be smart enough to solve all those problems.

It's not clear to me how much the machine will need the human. I think the example of chess is very instructive. I mean, there was a time at which Kasparov was saying that human-machine combinations would beat everybody. Even Stockfish doesn't need people.

Yeah,
and AlphaZero certainly doesn't need people.

The question is, just like you said, how many problems are like chess and how many are not like chess. Well, every problem, probably, in the end is like chess; the question is how long that transition period is.

I mean, you know, that's a question I would ask you: in terms of autonomous vehicles, just driving is probably a lot more complicated than Go to solve.

Yes.
And that's surprising, because it's open...

No, I mean, you know, that's not surprising to me, because there is a hierarchical aspect to this, which is recognizing a situation and then, within the situation, bringing up the relevant knowledge. And for that hierarchical type of system to work, you need a more complicated system than we currently have.

A lot of people think, and as human beings this is probably one of the cognitive biases, they think of driving as pretty simple because they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI, because they evaluate how hard a particular problem is based on very limited knowledge, basically on how hard it is for them to do the task.
Yeah.

And then they take it for granted. I mean, maybe you can speak to that, because most people tell me driving is trivial, and that humans in fact are terrible at driving; that's what people tell me. And I look at humans, and humans are actually incredible at driving, and driving is really terribly difficult. So is that just another element of the effects that you've described in your work on the psychology side?

Oh no, I mean, I haven't really... I would say that my research has contributed nothing to understanding the ecology and the structure of situations and the complexity of problems. So all we know, it's very clear that, like Go, it's endlessly complicated but very constrained, and in the real world there are far fewer constraints and many more potential surprises.
That's obviously... because it's not always obvious to people, right? So when you think about...

Well, I mean, you know, people thought that reasoning was hard and perceiving was easy, but they quickly learned that actually modeling vision was tremendously complicated, and that even proving theorems was relatively straightforward.

To push back a little bit on the "quickly" part: it took several decades to learn that, and most people still haven't learned it. I mean, our intuition... of course AI researchers have, but if you drift a little bit outside the specific AI field, the intuition is still perceptible.

Yes. Oh no, I mean,
that's true. The intuitions of the public haven't changed radically, and, as you said, they're evaluating the complexity of problems by how difficult it is for them to solve those problems, and that's got very little to do with the complexity of solving them in AI.