Sertac Karaman (MIT) on Motion Planning in a Complex World - MIT Self-Driving Cars
0fLSf3NO0-s • 2017-12-13
Kind: captions
Language: en
First, we have shirts, and we'll give those out tomorrow and Friday, whether you're here for the shirts or for the knowledge. Today our speaker is Sertac Karaman. He is a professor here at MIT in the AeroAstro department. He builds and studies autonomous vehicles that move on land and in the air; that includes ones that have 18 wheels and two wheels and everything in between, robots that move fast and aggressively and robots that move slowly and safely. He takes both the formal optimization-based approach and the data-driven deep learning approach to robotics. He's a mentor to me and many other researchers here at MIT and beyond, and while he is one of the leading experts in the world in building autonomous vehicles, for the nerds out there, he still programs: he programs on a Kinesis keyboard and uses Emacs, which is how you know he's legit. So please give a warm welcome to Sertac.

Thank you, thanks a lot. Thanks a lot, Lex.
I really had the pleasure of working with Lex for some time, and it seems like he and the TAs have put together an amazing class. I'm really happy to be here; thank you so much for joining. He gave me this title, past, present, and future of motion planning, or something like that; hopefully that's not quite exactly what you were expecting. So I took a whole bunch of slides from different talks and put them together, and I'm hoping to just go through as much as I can, tell you some of the interesting things I think are happening in this domain, and touch upon motion planning at some point. Maybe a starting point
would be to tell you a little bit about my background. It is probably exactly a decade ago today that I shook John Leonard's hand, whom you've met before; I shook his hand as a graduate student and joined the DARPA Urban Challenge team. It's been exactly a decade. We worked through it with a number of people; some of them are in the audience, I can count some. At the time we were doing these kinds of things, back in the day, it was an academic project. You can look at the DARPA Urban Challenge teams and you'll recognize they're all university teams, at least all the finishers, and it went from an academic project to the thing that's going to change the world in ten years. So I hope to give you a bit of a history and then some thoughts on that as well. OK,
let me start with my background. I started graduate school with this; we built these beasts that I'm going to talk to you about a little bit. I wonder if John talked about it at all, but I'll give you some details. This was our entry to the DARPA Urban Challenge: a Land Rover LR3 that we made autonomous, that navigated through that course, and it was one of the six finishers. A number of my friends went out and did their own careers; with a number of others we stayed here at MIT and built a number of other autonomous vehicles. Let me show you one thing that we did, that I was the motion planning lead for: this autonomous forklift. It was a forklift that you could literally take a megaphone and speak to. You could say, forklift, go to X Y Z, and it would go to that location.
Here it's trying to go to receiving, which happens to be an area where trucks pull up with pallets on them, so that you can pick these pallets up and put them out back. So it's going to go there. It has a front camera; it looks through that camera and beams the camera image to a handheld tablet device made by Nokia. Back in the day there was a company called Nokia; they would make these phones and handheld devices. So you could see what it's seeing. You didn't have tapping back then, but you had these pen gestures; you could circle something and the thing would scan it and take a look at it. Let me just go through this quickly, because it's a bit slow: it'll scan the pallet, it'll pick it up. But one thing I would like to show you is that once that's done, you can also talk to the tablet; the tablet would recognize your voice and then command the robot to do that kind of thing. This was before autonomous cars, before the iPhone, before Alexa, before Siri and things like that.
I spent a couple of years doing this type of project; it really shaped my PhD thesis. Later, when I started as a faculty member, I also worked on a number of things; let me show you one. We built autonomous golf carts in Singapore, on the National University of Singapore campus, to do mobility on demand and so on. Throughout these projects I focused mainly on motion planning. The one algorithm I was working on was called the rapidly-exploring random tree. The idea is quite simple. This is the area that you're looking at; there's that orange dot that you're starting from, you want to go to the magenta goal region, and there are these red obstacles. You want to find a path that starts from the initial condition and goes to the goal. That's the very basic motion planning problem.
It turns out this problem is computationally pretty challenging, especially as the number of dimensions grows. This instance is two-dimensional, but if you increase the number of dimensions, you can prove that any complete algorithm, meaning any algorithm that returns a solution whenever one exists and returns failure otherwise, will scale exponentially in computation time. So at some point you're going to run out of memory or time to do these things. The algorithm that I was working on was called the rapidly-exploring random tree. The idea is simple: you just draw a bunch of random samples, and every time you place a random sample you connect it to the nearest node in a tree of trajectories that you're building. In this way you rapidly explore the state space to find a whole bunch of paths; some of these paths may reach the goal, and that's the path you pick. So it's going to run in a second. As you can see, it's just sampling the environment, trying to build this set of trajectories that don't collide with the obstacles. If your trajectory collides with an obstacle, you just delete it and move on with other samples, and in this way you build this kind of tree.
OK, it's an algorithm that's pretty widely used, and it goes well beyond these simple cases. For example, in our Urban Challenge entry we were using this algorithm. Here you're seeing the algorithm in action: we're trying to park at a location during what DARPA called the National Qualification Event. You can see a whole bunch of cars that our vehicle is seeing, generating this map; red are obstacles, black is the drivable region. It's going to try to park and then unpark. You're seeing something hairy here: that's the set of trajectories generated by the robot, by the RRT algorithm. It's trying to unpark now and go there, so as you can see, the trajectories go back and then toward those obstacles; it's generating trajectories and picking the best one. We used this algorithm throughout the race and it worked OK. You can see the performance as it's running; this is a video sped up about 30 times, showing you how the thing works. When
we switched to the forklift platform, I started working on this, and one thing we realized is that the forklift tries to go here, to park in front of our truck, and it finds this trajectory; at some point it discovers there's an obstacle here, finds this looping trajectory, and it never gets out of that loop. It's trying to minimize path length, so you would think it would be easy to come up with something that just turns left and aligns, but it turns out that once you have that loop, even if you add more samples, you're stuck with it, and you never improve this type of trajectory. Back in the day, Professor Seth Teller, who unfortunately passed away a couple of years ago, really pushed me. He was telling me: this doesn't work, and every time it just makes this loop right in front of the army generals who are the sponsor, and it just looks ridiculous; you need to fix this kind of thing. Trying to find the fix for it, we realized that the algorithm actually has some fundamental flaws in it.
Specifically, we were able to write down a formal proof that the RRT algorithm fails to converge to optimal solutions, which is kind of interesting. You would think that if you add more samples you'll get better and better trajectories, but it turns out that the first few trajectories you find constrain you: they close off the space that you want to search, and you're stuck with bad trajectories. This almost always happens. Sometimes you're lucky and your bad trajectory is good enough, but most of the time it's pretty bad. We were able to come up with another algorithm that we called RRT*, which does just a little bit more work but guarantees asymptotic optimality, meaning it will always converge to optimal solutions, and the computational difference between the two is very little. If you were to run them side by side, RRT*'s tree would look like this. What it's doing is looking at the paths locally and correcting them locally, just a little bit, and that little bit of correction is enough to converge to the optimal trajectory.
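The "little bit more work" amounts to two extra steps whenever a node is added. Here is a toy sketch of just those steps on a tiny hand-built tree; the coordinates, radius, and cost bookkeeping are illustrative, and collision checks are omitted:

```python
import math

# Toy sketch of the two extra steps RRT* performs when adding a node:
# (1) connect to the cheapest nearby parent, (2) "rewire" neighbours
# through the new node. Tree and numbers are made up for illustration.
nodes  = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (1.0, 2.0)]
parent = {0: None, 1: 0, 2: 1, 3: 2}
cost   = {0: 0.0}
for i in (1, 2, 3):                      # cost-to-come along the tree
    cost[i] = cost[parent[i]] + math.dist(nodes[parent[i]], nodes[i])

def add_node_rrt_star(new, radius=1.6):
    near = [i for i in range(len(nodes)) if math.dist(nodes[i], new) <= radius]
    # Step 1: plain RRT attaches to the *nearest* node; RRT* attaches to
    # the neighbour that gives the cheapest path to `new`.
    best = min(near, key=lambda i: cost[i] + math.dist(nodes[i], new))
    j = len(nodes)
    nodes.append(new)
    parent[j], cost[j] = best, cost[best] + math.dist(nodes[best], new)
    # Step 2: rewire. If routing a neighbour through `new` is cheaper,
    # change its parent. This local correction is what lets the tree
    # converge toward optimal paths as more samples arrive.
    for i in near:
        c = cost[j] + math.dist(new, nodes[i])
        if c < cost[i]:
            parent[i], cost[i] = j, c
    return j

j = add_node_rrt_star((1.0, 1.0))
# Node 3 is rewired through the new node: its cost drops from 3.0 to ~2.41.
```

Without step 2, node 3 would stay on its old, longer path forever, which is exactly the looping-forklift failure mode described above.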
That turned out to be my doctoral thesis back in 2011, and we applied it to a number of things. Let me show you one simulation scenario: imagine a race car coming into a turn. It has to turn very quickly, and it generates these trajectories. The right thing to do is to slow down a little, start skidding, hit one end of the road, then start speeding up and go as fast as possible so that you hit the other end of the road and complete the turn. These kinds of things come out naturally from the algorithm; you don't have to program these behaviors, you just run the algorithm, and this is the best trajectory it finds. It would be impossible to get something like this from an RRT. We applied it to a number of other robots as well, PR2-type robots or this autonomous forklift, and got good results out of it.
So that maybe gives you a bit of an idea of my background, meaning my graduate school experience and the PhD. Let me quickly tell you what my research group does. We do a lot of things, in a fortunate and unfortunate way, so it's hard to find the focus sometimes, admittedly, but I usually tell people that we work on autonomous vehicles. The problem is quite interesting both at the vehicle level, meaning how you're going to build these autonomous vehicles individually, and also at the systems level: when you think about it, autonomous vehicles are most valuable if you put them into a system in which they can work together. Let me give you some examples. A system of autonomous vehicles would be,
for example, the Kiva Systems scenario. Nowadays you buy something from Amazon, say you buy two books; the way it's packed is that the books are brought by robots to a picker, and the picker just puts them into the same box and sends it to you. This is done by 500 autonomous vehicles, for example; that would be a good example of a system. Another one: there are ports around the world that are working completely with autonomous vehicles and cranes. If you project a little bit forward, you can think, maybe you can have drone delivery systems, and maybe the drones don't have enough battery, so they have to relay packages to one another; so you need to build a system of some vehicles. Or if you have autonomous cars, maybe it's best to use them in an Uber-like scenario, so you can have autonomous taxis that work together, and such. So let me tell
you a bit more about the vehicle-level problems and the system-level problems, some of the crazy things that we try to do. On the vehicle level, we're interested in all aspects of perception and planning. Usually the challenge is either computational complexity, it's just very hard computationally, or the system becomes very complex, and we need to figure that out. For example, we've recently been motivated by really fast and agile vehicles and how we can build them. One thing that motivated us: imagine there's a drone flying and you want to catch it on the fly. I wonder if this is going to play. It turns out that for the Netherlands police, some people fly UAVs around and you somehow want to take them down; it's not like you can shoot at them, so people train eagles and things like that. We thought it would be great to actually build these types of robots, and that's what we try to do in our group. Once you start to do these kinds of things, you wonder how much you can push the boundaries of very, very agile vehicles and systems. Here you're
going to see a falcon diving for prey; you're going to see the goose right at the last split second. If you look at the scene from a 20 Hz camera, this is what you would see. So they are definitely much faster, and they do very complicated planning and maneuvering to be able to do these kinds of things. In the research group we look at a number of different perception problems where you have multiple agents and ultra-high-rate cameras; for example, we have drones with 200 Hz cameras on them, and you're trying to understand the thing you're tracking, its dynamics, its intentions. On the control level, you're trying to pull off really complicated maneuvers like the one you've seen with the race car, but now you want to do it in real time, probably at a kilohertz. So how can you do these types of things?
We use a lot of high-performance computing. For example, the drones that we have actually have GPUs on them; they fly GPUs, they fly teraflop-class computers, to be able to do these kinds of things. We also use computing offline, like the deep learning machines you would normally use; we have access to things like the DGX-1, and we use that to compute controllers. Here's an example of a GPU drone just passing through a window; this is from a long time ago, but these are the controllers that we would compute on supercomputers and then deploy. And on the perception side, for example, we're looking at things like visual odometry: you can just have a camera, look at the world through that camera, and try to understand your own position. We have certain algorithms to pick the features just right, so that you can do these things with just ten features or something like that.
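The flavor of "pick a few features just right" can be sketched with a toy greedy rule: keep the strongest candidates while forcing them to be well spread out, so ten features still cover the image. The scores, image size, spacing threshold, and selection rule below are all illustrative assumptions, not the actual algorithm:

```python
import random

# Toy sketch: keep ~10 strong, well-spread features out of many candidates.
random.seed(1)
# Candidate features: (score, x, y) in a 640x480 image (made-up data).
candidates = [(random.random(), random.uniform(0, 640), random.uniform(0, 480))
              for _ in range(500)]

def select_features(cands, k=10, min_spacing=80.0):
    chosen = []
    for score, x, y in sorted(cands, reverse=True):   # strongest first
        # Accept a candidate only if it keeps its distance from all
        # previously chosen features, so the selection covers the image.
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_spacing ** 2
               for _, cx, cy in chosen):
            chosen.append((score, x, y))
            if len(chosen) == k:
                break
    return chosen

features = select_features(candidates)
```

Tracking ten well-spread features instead of hundreds is what makes the pose estimate cheap enough to run at high rates on board.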
So they're just computationally very efficient. On the systems aspects of things, when you put them together... yes? So the question was: what do you mean by computing the controllers, you want to find the best constants? So, controllers are actually pretty complicated objects. You have a drone;
there are actually 12 degrees of freedom, but suppose there are six. A six-dimensional space is very, very large. Suppose you discretize every dimension with 200 points: six-dimensional position and orientation, 200 points each, and 200 to the sixth is about 64 trillion. You're looking at a lookup table over the state space: for every point in the state space, if I end up at that position and orientation, what action should I take? If you use one byte to write each entry in memory, it would take tens of terabytes for this controller. It's pretty large when you think about it, but you don't really need all of it. It would be very surprising if, described in information-theoretic terms, it really required trillions of parameters. I mean, how complicated is it really? Millions, maybe, but trillions? Seriously? So what we do, to be able to compute these things, is take very simple controllers, for example zero, don't do anything; we compress them, like in data compression, and then we work on the compressed versions, and the compressed version comes down to something like two megabytes. That's probably essentially what you would need, rather than terabytes.
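To make the sizing argument concrete, here is a rough sketch. The grid numbers follow the talk (200 points per dimension, six dimensions); the small 2-D "controller table" and the rank kept are purely illustrative of how an SVD-style low-rank compression pays off for smooth functions:

```python
import numpy as np

# Back-of-envelope sizing for a tabulated controller, plus a toy
# SVD-based low-rank compression. The test matrix is illustrative;
# the real controllers and compression pipeline differ.
points_per_dim, dims = 200, 6
table_entries = points_per_dim ** dims     # 200^6 = 6.4e13 grid cells
table_bytes = table_entries                # at one byte per cell: ~64 TB

# Toy "controller table" over a 2-D slice of the state space. Smooth
# functions like this are nearly low-rank, which compression exploits.
x = np.linspace(-1, 1, 200)
table = np.outer(np.sin(2 * x), np.cos(3 * x)) + 0.1 * np.outer(x, x)

U, s, Vt = np.linalg.svd(table, full_matrices=False)
rank = 3                                   # keep only 3 singular components
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

full_size = table.size                                    # 40000 values
compressed_size = rank * (U.shape[0] + Vt.shape[1] + 1)   # ~1203 values
error = np.abs(table - approx).max()       # tiny: the table is low-rank
```

The same idea in higher dimensions is where the larger savings come from, as the JPEG/video analogy in the talk suggests.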
For example, we use singular value decomposition-type techniques to do the compression. You may have seen the same thing with images: if you compress an image with JPEG, you save an order of magnitude, no surprise, right? If you compress video, you save two to three orders of magnitude, because video is three-dimensional; as you increase the dimensions, there's more to compress. So when you compress this way, it saves ten orders of magnitude, which honestly is no surprise when you think about the dimensionality. So those are the planning and control items. We use these supercomputers, so we compute them in, I don't know, five minutes; that gives you a lookup table that's two megabytes; you put it on board so that you can quickly execute it with lookups, essentially. If you want kilohertz control, you won't be able to compute a trajectory online with conventional techniques. OK, that question came in handy, and that's basically the whole talk in terms of the present of motion planning; I can show you some other stuff, and there's a lot to do, especially
in terms of agility. On the systems domain as well: this is not the only kind of stuff that we do, but I pulled up the most interesting, maybe the most crazy thing off of my hard disk. Imagine you have a whole bunch of vehicles coming to an intersection, and suppose they're fully autonomous. How would you make it so that they pass through the intersection as fast as possible? If you were to really utilize algorithms that do that, here is what it would look like: you would have vehicles coming in, and it looks like this. You probably don't want to sit in this vehicle; this is just to understand the fundamental limits of the situation, just to understand how far you can push these things. It looks like they're getting very lucky, but really what's happening is that they're speeding up and slowing down just a little, so that they can avoid one another. You can actually sit down and do some math and try to understand, given the dynamics, your acceleration and deceleration limits, how fast you can push these things. Maybe it doesn't immediately apply to self-driving cars, but certainly you could use it in warehouses and things like that, which would actually improve operations quite a bit. I wonder if any of you have seen Kiva Systems warehouses; if you look at them, most of the robots are stopped, just sitting there.
Yes? So the question is: is anyone working on the robustness aspects of distributed control? That's a good point, and it's very right. We have looked at things from the theoretical perspective, and it turns out that even in this case there's something like a critical density. Below the critical density, things are very simple: you're going to be robust, you're going to be able to find paths, and you're going to be able to execute them. Above the critical density, things are very hard; it's very fragile, and if something fails, the whole system will crash into one another. This is no surprise either; it's kind of the physics of many-body systems, you see it everywhere. It's the same thing as, I don't know, heating something: there's a critical temperature, above it it looks different, below it it looks like a liquid. You can use the same kind of thinking, the same theoretical arguments, to come up with these types of results. And I know that a lot of people work on specific controllers at the vehicle level to guarantee robustness and so on; those are probably the kinds of things one needs to do before implementing these types of algorithms in currently existing multi-vehicle setups, like Kiva Systems or ports and things like that. We are far away from this kind of thing. Some of the main problem is control, like we don't understand the control aspects, but we also don't trust our sensors and things like that, so that's another big problem.
So probably more of the remaining work is not research but implementation, on the sensor side, I'd say. OK, so we have been doing a number of other projects currently as well on autonomous vehicles; if you're interested in any one of them, let me know. I'm not going to show you videos, but let me just tell you with one slide and a few pictures; this was several slides, but I felt really bad. So we have an autonomous tricycle. That may sound funny, but it's actually pretty hard to test with autonomous vehicles; we currently have five of these and we're hoping to build 30 so that we can put them out. They're currently in a little enclosed robotic area in Taiwan, just driving around collecting data so that we can, for example, feed it into deep learning algorithms. We also have a warehouse robot; it's supposed to be kind of like, I'm sure you know Rethink Robotics, they make this robot that's supposed to be very easy to interact with. So imagine a warehouse robot like that: you can just talk to it, you can tell it stuff to do, and when it can't do it you can show it, you can hop on it and do it yourself, that type of thing. I am also a PI, working with Daniela Rus, on MIT's effort with Stanford and Toyota to build safer vehicles, and finally, I'm still a PI on the MIT-Singapore partnership. From golf carts we've moved into doing these electric vehicles, and we're working on basically integrating a lot of electric vehicles together to make them work nicely. We've also been looking into an autonomous wheelchair; that's also in that project, but I didn't show it. My group works
on a number of other projects in this domain. Admittedly, my group is a bit more on the theory side; maybe half the group is theory-oriented and the other half is more experimental. I usually say we have quite a spectrum in the group: we have mathematicians, people who don't have any engineering degrees; for example, we have one postdoctoral scholar who is a mathematician by training, and graduate students whose undergraduate degrees are in mathematics. On the other hand, we have mechanical engineers and so on who actually build these things. And we were funded by a number of sponsors throughout. OK,
that was supposed to be a quick summary and an entrance into what I was going to talk about. So let me tell you about our DARPA Urban Challenge effort, so that I can tell you a little bit more about how we implemented these motion planning algorithms. If time allows I could talk more broadly about motion planning algorithms, but I don't think we'll get a chance to. OK, I'm going to start with this effort, the DARPA Urban Challenge, which I'm sure many of you have heard of. People usually believe that it kick-started all of this autonomous vehicle activity that's been going on; let me introduce it a little bit. Before this, DARPA did things called the DARPA Grand Challenge one and two; I'll tell you in a second what they were, but this is the third one. Essentially, the idea is that you would take a street-legal vehicle, instrument it with sensors and computers, and enter this race to drive 60 miles in under six hours in urban traffic; there are other vehicles driving around as well. It was proposed back in 2006, and they staged the race in November 2007. It was pretty hard; you would have to do a lot of different things, like U-turns, and you'd have to be careful with stop signs and so on. It's pretty complicated, but if you won they would give you two million dollars, so there's good incentive. 89 teams entered the race. We usually say ours was MIT's serious entry, but MIT's non-serious entry was, I guess, the team that later turned into Cruise Automation, which GM ended up buying for a billion dollars.
So this was the serious one of the entries; the others just went there to have fun, I think, and then later they continued their interest in autonomous cars and built Cruise Automation, did a great job. We were not directly connected to that team. Our team had mainly MIT faculty, postdocs, and students; we had roughly eight full-time graduate students, and I was one of them. You can see me right here; I looked different back then. We had a lot of support from Draper Laboratory, mainly on system integration, vehicle integration, and support; some of them are in the audience. We also had some vehicle engineering support from Olin College: we had a first version of the vehicle where cables were coming out everywhere, and then Olin College came in and packaged it nicely.
We built a vehicle that looked like this. We took a Land Rover LR3; Land Rover was one of the sponsors, but it was also nice that the vehicle was pretty big. We put an EMC drive-by-wire system in it. This is a drive-by-wire system for people who are disabled; for example, if you can't use your legs, it gives you a little joystick-type device so that you can actuate the gas and brake. So it came in very handy; we used it to make our vehicle drive-by-wire. We needed to put a lot of sensors on it. I'm going to say, I wish this weren't recorded, but hey, I think our situation was the following: there were a lot of other teams out there, and they were very experienced; they had done the other Grand Challenges before and so on. We were not as experienced. I would say our team was talented but not experienced, and we had a lot of sponsors, so we had a lot of money. So our strategy turned into: if it fits on the vehicle, let's put it on the vehicle, and we'll figure out a way to use it; if we don't use it, it's dead weight, we'll just carry it. With that mindset, we
ended up with five cameras, sixteen radars, twelve planar laser scanners, one 3-D laser scanner, and one GPS/IMU unit. This was a lot of sensors; they generated a lot of data, and you had to process it. So we had to buy a 40-CPU, 40-gigabyte-RAM Quanta computer that at the time would normally run on something like a Google server; it was a server rack, ten computers essentially, that we had to put in. We used to joke that this was the fastest mobile computer on campus, both in terms of speed and compute power. Now, this requires a lot of energy, so we put in an internally mounted generator; and that generates a lot of heat, so we put an air conditioner on top. You can kind of see it here. So that became our vehicle. One
thing to note, though, is that while our number of sensors and computers was large, the sensor suite was very similar to that of the other teams that finished. One important sensor was this 3-D laser scanner that I'm going to show you in a second. This is the thing that sits on top of the vehicle and looks like a Kentucky Fried Chicken-type bucket. Essentially, and probably a lot of people here are familiar with it, it has sixty-four lasers that measure range, and those sixty-four lasers are stacked up in a vertical plane, and that plane turns at 15 Hz, so it gives you a 3-D point cloud. If you drive with it in Harvard Square, here is what the raw data looks like, colored by height.
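As a rough geometric sketch, a spinning stack of range-measuring beams turns into 3-D points like this. The beam angles, ranges, and azimuth sampling below are invented for illustration, not the sensor's actual calibration:

```python
import math

# Sketch of how a rotating stack of 64 range-measuring lasers yields a
# 3-D point cloud: fixed vertical beam angles, the stack spinning about
# the vertical axis. Angles and ranges here are made up.
N_BEAMS = 64
vertical_angles = [math.radians(-24.0 + i * 0.4) for i in range(N_BEAMS)]

def beam_to_point(r, v_angle, azimuth):
    """Convert one range return to (x, y, z) in the sensor frame."""
    horiz = r * math.cos(v_angle)        # projection onto the ground plane
    return (horiz * math.cos(azimuth),
            horiz * math.sin(azimuth),
            r * math.sin(v_angle))

# One full revolution, sampled coarsely; at 15 Hz each rotation produces
# one such sweep of 64-beam columns (real ranges vary per beam, of course).
cloud = [beam_to_point(10.0, v, math.radians(az))
         for az in range(0, 360, 10)
         for v in vertical_angles]
```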
You're just looking at raw data, and you can easily pick up, I don't know, a bus here, a building, maybe a person, a bunch of other things. That gives you great data already; you could work with this, right? So we thought, and other teams thought too, that this sensor, made by a company called Velodyne, came pretty much just in time for the Urban Challenge. My guess is that if you didn't have this 3-D point cloud, it would have been pretty hard to complete the challenge. There was only one finishing team that didn't have it, and they had a 2-D laser scanner that was turning; they essentially built their own Velodyne. OK, so we also had these twelve planar laser scanners. You need these kinds of things to cover the blind spots of the vehicle; the big sensor is on top, so you're not seeing the area nearby. We had five pushbrooms looking down and seven on the skirts. So
this is kind of what it would look like: you're seeing the curbs here and a bunch of other things, and when the vehicles are very close to you, you can still see them. We had 16 radars. Radars are great; they can see very far. Laser scanners would see 70 meters; radars would see twice as much. The problem is that they have a very narrow field of view, so we needed 16 of them to cover 270 degrees around the vehicle. You can park somewhere and, this is in meters per second, you can see a whole bunch of other vehicles coming through; it helps quite a bit. And finally, we had five cameras in this configuration. We were using the cameras to actually look at lane markings; I think we were the only finishing team that was using cameras for any purpose of any kind. The other vehicles were just working with the laser scanner, and we were mainly working with the laser scanner too, but we were picking up lane markings with this. And we bought this GPS/IMU unit; it was an expensive thing, but it would give you your position.
The algorithmic stack gets pretty complicated. I think by the end of the race, the active code that was running could be on the order of hundreds of thousands of lines of C code, so maybe two hundred thousand. I remember the forklift had about half a million lines of code; I think this was a bit less. We had around a hundred processes running, sending messages to one another, on that forty-core system you've seen, so that would generate a huge software diagram. I simplified it for you; it turned into this. You have some sensors; you get that data and process it through perception algorithms; you generate a map of the environment close to the robot; and you have this three-tier stack. You have a navigator, much like your Google Maps: it computes a route to your next goal, which may be kilometers away, and it also gives you the next waypoint that you should hit, which would hopefully be within your grid map. There's a motion planner that looks at the map, sees all the obstacles and everything, sees the goal point, and finds a path to get to the goal point using the RRT. And then once that trajectory is computed, it's passed to a controller that actually steers the vehicle that way. So I've already shown you how the
motion planner works it just kind of
computes these things so here's the goal
point our car finds the path to get
there and you can run these things
together to get like a good behavior it
It doesn't always go well; let me show you what doesn't work. [Audience question, inaudible]
Yes, so, honestly, here are a couple of things. One thing is that we had a pretty good simulation system going for motion planning and things like that, and it helped a lot. The day before the race, my 24/7 job was to keep simulating our algorithms: I had two computers, start a simulation here, look at it, if one fails, log it and send it out. So simulation really helped. We had done some testing, but I think the race itself was the farthest we had ever driven without any human intervention; before then we hadn't done that much. The race was about 60 miles, if I remember correctly, and we had done something like a 20-mile stretch, but not many of those. So admittedly we didn't have too much on the testing front, and the only reason was that we just didn't have time for that kind of thing. We started maybe a year before; we put together some of the infrastructure, like the message passing. But the vehicle itself, to test it in reality: the race was in November, and I think we got this vehicle, clean, in April (we had another vehicle before that), and then we put the sensors on. So really it was just the summer that we had to test, and admittedly we couldn't test much. Draper Laboratory helped out a lot with the testing; if we didn't have them, we probably wouldn't have tested much at all, and we probably would have just failed outright. Testing is very important, and it'll be very important for the future as well.
Simulation will be very important too. Simulation has come a long way; nowadays, I mean, you guys are working with a simulator, as you can see, and there's a lot more that people are going to put out in the next year or two. Nowadays you can render things and show them to people, and it's very hard to tell that it's rendered; it wasn't always like that back then. I think that would probably be the right thing to do right now, but back then we had this one platform where you could just run the whole software stack: if you started up a simulator, it would simulate all the sensor data and everything, and if you didn't start a simulator, the processes would just wait for the data to come in, so you could put the same stack on a real vehicle. Back then we thought that was the best thing to do. The question was whether our simulated environment and our development environment were separate or integrated: they were very integrated. Right now I think you would do things differently.
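That design, where the stack blocks on one sensor interface and doesn't care whether a simulator or a real driver is on the other end, can be sketched roughly like this (the process and message names are hypothetical):

```python
import queue
import threading

# The one interface the software stack reads from. Either a
# simulator or a real sensor driver publishes into it; the
# consumer cannot tell the difference.
sensor_q = queue.Queue()

def perception_process():
    # Blocks until *someone* provides sensor data. With no
    # simulator (and no real sensors) running, it just waits.
    scan = sensor_q.get(timeout=2.0)
    return "processed " + scan

def simulated_lidar():
    # Stand-in for a simulator process feeding fake sensor data.
    sensor_q.put("fake scan 0")

# Start the simulator in place of the real sensor driver.
threading.Thread(target=simulated_lidar).start()
result = perception_process()
```

Swapping `simulated_lidar` for a real driver changes nothing downstream, which is what let the same code run in simulation and on the vehicle.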
Yeah, there's a lot to talk about, so I thought it would be great to give you guys some examples, given that the course is on autonomous vehicles. So here's a situation we got into. What's happening here is we arrive at an intersection and there's another car, Cornell's car, just sitting right in the middle of the intersection, and they don't seem to be moving; I think they'd been sitting there for a few minutes before we even arrived. So DARPA decided that they should let us go: we were probably going to overtake them and do great, and it was going to be an important moment in robotics history, the first time a robot overtakes another robot while the other robot is stuck. So they decided to go forward with this. Here's how we're seeing things from inside our car. Our car is right here and wants to go there; the RRT generates trajectories; there's an object here, which is the car we're seeing. We're not seeing all of it, just a fraction of it. So we'll play it a little bit: we were actually able to turn around it. I need to stop it somewhere, but now let's look at here: we've seen the whole car, the new goal point is further away, we're regenerating these trajectories, and it looks great. It turns
out that this car is just somehow stuck for some reason. We later wrote a paper about it together with them. It's not entirely clear to them either, but my understanding is that they think an obstacle got registered on top of their own car. The way the algorithm is written, it just generates a trajectory and asks whether the trajectory is collision-free or not. The collision checker doesn't say which part of the trajectory is in collision; every time it's handed a trajectory whose root is in collision, it just says no, with nothing more. And they have another little piece that updates the map only when there's new information from the sensors: if there's no new information, there's no need to update. So they ended up stuck on this phantom obstacle, and they weren't refreshing their map, because nothing was moving. Up until we moved right next to them: then they refreshed again and said, oh, I'm actually not sitting on an obstacle, that was an error. So the next time a forward path came up, it said this is a great path, go forward with it.
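My reading of that failure mode, reduced to toy code (all names hypothetical): a yes/no collision check over the whole trajectory, combined with a map that only refreshes when the sensor data changes, leaves a phantom obstacle under the robot in place until something else in the scene moves.

```python
def trajectory_is_free(traj, occupancy):
    # Boolean verdict for the WHOLE trajectory: the planner never
    # learns WHICH cell collided (e.g. the robot's own cell).
    return all(cell not in occupancy for cell in traj)

class LazyMapper:
    """Refresh the occupancy map only when the scan changes."""
    def __init__(self, occupancy):
        self.occupancy = set(occupancy)
        self.last_scan = None

    def update(self, scan):
        if scan == self.last_scan:
            return  # nothing new from the sensors: skip the refresh
        self.last_scan = scan
        self.occupancy = set(scan)

# A phantom obstacle sits on the robot's own cell, (0, 0).
mapper = LazyMapper({(0, 0)})
traj = [(0, 0), (1, 0), (2, 0)]

# Static scene: the scan never changes, the phantom is never
# cleared, and every proposed trajectory is rejected wholesale.
mapper.update(frozenset({(0, 0)}))
stuck = not trajectory_is_free(traj, mapper.occupancy)

# Another vehicle drives past: the scan changes, the map refreshes,
# the phantom disappears, and planning suddenly succeeds.
mapper.update(frozenset({(5, 5)}))
moving = trajectory_is_free(traj, mapper.occupancy)
```

The lazy map refresh alone wasn't the bug; it was the combination with a collision check that gives no reason for its "no" that kept the planner from ever questioning the phantom.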
That happens right when we're passing. If you look at this blob right now as I play it, the blob starts to move: they are going in the direction that we are going. And a quick thing will happen: if our car at some point realizes that there are no paths, a collision is imminent, and there's nothing to do about it, it shows that wide circle around it, and that basically means we are headed for a crash, there's nothing we can do about it, we're just going to slam on the brakes and hope nothing too bad happens. So it starts to do that. I think at this point this camera is more fun to look at; you can take a look and see what happened. So this kind of collision happens, and we collide with the car. What DARPA did is they pulled the Cornell car back and restarted us; we finished, and they finished as well, so both of the teams finished.
You can see some of the things that are a little bit hard here. For example, if you yourself were driving up to an intersection where a car was just sitting there, you would probably stop your car, get out, and go ask if anything is wrong. Even if you don't do that, suppose you decide not to, you would still steer away from the car; you probably wouldn't get as close to it as we did. So there are some problems at the inference level, things we do without even thinking, that are actually quite hard for these types of cars, especially if you're going fast in a complicated environment and you're not expecting things; you might collide into things. We do kinds of inference that we can't even name: you look at the way a person walks on the sidewalk and you can say, well, this person is kind of dangerous, or maybe they will walk into the street, and you make that decision. That's actually a pretty complicated thing for a robot to do. Okay, so these are the results of the race; I'm not going to go too much into it.
Basically, 89 teams started and six finished, and we were one of the finishers. CMU came first, so they got the 2-million-dollar check, I believe; Stanford came second and got a 1-million-dollar check; Virginia Tech came third and got half a million. We came fourth, so we didn't get any money, but we got a lot of experience, and it was great to be a part of it. One note is that the Google car, which you may have heard a lot about, was essentially a spinoff from this race. If you look at the Google car, you will see that the sensing package is very similar: it's very laser-scanner oriented, it has a couple of radars it can utilize, and it works somewhat with cameras, but not so much. Essentially Google took this thing that we and all the other teams built independently, engineered it for ten years, and that's what they utilize nowadays. There's also this whole Tesla brand of camera-based cars, deep learning and so on, that's coming in just very recently. Ten years ago we knew about deep learning and so on, but it just didn't work; the moment somebody figured out doing it on a GPU, it started working pretty well. Okay, so there's a lot I can tell you about path planning, but here's maybe what I should do, if you don't mind. Rather than telling you about RRTs and making this into a lecture that I'm not sure you're going to like, let me talk a little bit more about self-driving vehicles, and I think that's something you might enjoy better.
So the question is: building it from scratch, what was the biggest challenge? Admittedly, I was a junior student back then, so my challenge was to get the controllers and some parts of the RRT working, and I had simulation systems and things like that, so life was good for me. I would say we ended up building pretty complicated hardware, so that was one of the challenges, and Olin College and Draper did all of that, which was great.
The other challenge we had is that nowadays there's infrastructure software, like the Robot Operating System that maybe you guys use; we had none of that, so we ended up building our own. I don't know if anybody here uses it, but there is this thing called Lightweight Communications and Marshalling, LCM, which ended up being built for this and then got spun out. LCM is utilized throughout the industry nowadays: for autonomous cars, for example, Toyota uses it and nuTonomy uses it. So it came out of this challenge, and I would say the first six or seven months were devoted to it, out of necessity; we wanted to do other things, but we just couldn't, because you needed something like this. There was another big challenge as well: I would say testing was a big challenge, things like that.
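LCM itself is the real library here (publish/subscribe over named channels, with encode/decode marshalling code generated from message type definitions); the snippet below is only a stdlib toy of those two ideas, not LCM's actual API.

```python
import struct
from collections import defaultdict

class ToyBus:
    """In-process stand-in for a channel-based message bus."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, channel, handler):
        self.subs[channel].append(handler)

    def publish(self, channel, payload):
        # Deliver the raw bytes to every subscriber on the channel.
        for handler in self.subs[channel]:
            handler(channel, payload)

# "Marshalling": pack a pose message into a fixed binary layout,
# the way LCM generates encode/decode code from a type definition.
POSE_FMT = "<qdd"  # timestamp (int64), x and y position (float64)

def encode_pose(t, x, y):
    return struct.pack(POSE_FMT, t, x, y)

def decode_pose(payload):
    return struct.unpack(POSE_FMT, payload)

bus = ToyBus()
received = []
bus.subscribe("POSE", lambda ch, msg: received.append(decode_pose(msg)))
bus.publish("POSE", encode_pose(123, 1.5, -2.0))
```

The real LCM runs over UDP multicast, so a hundred processes (plus a logger or a viewer) can tap the same channels without the publishers knowing.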
[Audience question: back then it was pretty collaborative, as far as I can tell; you published papers with other teams and things like that, and the field seems to have become much more isolated. Is that good or bad?] I guess it's good and bad; it's kind of hard to assess, and competition is always good. So the question was that back in the day we were really collaborative (it's very interesting that we actually wrote a paper with Cornell about our collision, just to teach the whole community why these kinds of things happen), but nowadays everybody is just doing their own thing and not much is coming out. There's a quick answer for that and a kind of broader answer. The quick answer is that, yeah, it became important: people invested a lot of money, they're expecting returns, and that affects the environment; that definitely drove it. I think in academia we're still trying to work on it and publish papers, but a lot of people are worried about competing with these huge companies, which I think is not a big worry, because there's a lot to do still.
But when you look at the industry, there's little collaboration, for some reason. The broader answer is that this became the norm. Fifty years ago you would look at the top company of the day (this goes back a century, to Bell, for example): they would form labs, and they would publish in Nature and Science and be very open. Nowadays the big companies of the day rather prefer secretive labs and things like that; I think Microsoft was the last big company of the day to do it the open way, and nowadays the Googles and Apples don't do that anymore. So there's a bit of that as well, good or bad, but it became that way. And sometimes competition is good, honestly: it's a good thing that people feel they don't know what the others are doing and want to compete; that makes you better and better. Okay, any other questions?
[Audience question: could there be another challenge, maybe a vision-only challenge or something like that?] I'm not sure. I don't think it's purely an industry problem, because it's still quite complicated, honestly. There may be things people can do, but I am wondering whether DARPA would be interested in doing a challenge. Setting DARPA aside for a moment: DARPA is a defense agency, and when they set up the challenge they honestly had defense problems in mind. For example, they didn't allow you to go out and drive the area with your sensors beforehand; the idea was that they would give you a map of the environment 24 hours in advance, and then five minutes beforehand they would give you a mission: hit this waypoint, hit that waypoint, and so on. That's a military setting. The whole thing really started with a US Congress mandate to get one third of combat vehicles autonomous by 2015, which didn't happen, but it was a military setting. So DARPA is usually that minded, and they did the DARPA Robotics Challenge.
There's also a program where the idea is to build a quadcopter that can fly at 20 meters a second, like 40 miles an hour, in indoor environments, that type of thing, and I think they'll do that. But there may be other things, other people coming in and pushing the boundary of research. Something with just cameras, for example, would be very interesting, and I think we are maybe a couple of years away from doing that very well; deep learning would probably be a lot of it. Okay, I don't know how much time we have, and I don't want to hold you here, but let me tell you a few things about autonomous cars in general, and let's see if we can fit something interesting into like 10 minutes.
Transportation is a very interesting thing; it actually defines how you live, quite a bit. If you look at the kind of cities that you may be living in today, they look like this, and they were produced thanks to one invention: the affordable car. If you look throughout the last century, in the 1950s cars were big, and everywhere these kinds of subways were being constructed for the first time. The reason was that cities were dirty; they were deemed disease-prone. Now you had the car, so you could move away into a better lifestyle, and it would improve things. That was the 20th-century invention, and it also changed the cities quite a bit. For example, this is Boston's central artery, built in the '50s, around that time, to service the cars coming in and out of the city. The cars generated this kind of thing. In some places, at the extreme, like Los Angeles in the United States, you will see the suburban sprawl. It's very different in other places: places that didn't have the time to expand, didn't have the resources to expand, or just didn't have the room to expand. It caused many problems. Here's the suburbia in Mexico City, and you can see the dirt it generates in the distance. Even if you're rich it doesn't really matter: even in rich countries this quick expansion just doesn't work, and it creates, if anything, just ugly environments. And in some places you need to be dense and you need to be big, so you have the cars, but you just have to build big buildings that you cannot even serve with cars.
So it generates these types of things. I was just going to say not only the congestion and pollution in the rest of the world, but it generated things like this: I don't know if you heard, but there was a traffic jam in China that lasted like nine days and was a hundred miles long. So that's just a quick introduction to the affordable car and what it did to the environment and the cities. Pollution is one problem, and so on, but if you look through it, a lot of the pollution and energy consumption actually comes from the cars, especially inside the cities. An interesting point is that if you look at the cars, the cars are actually pretty inefficient the way they currently sit.
If you look through, for example, BMWs over the years, you would see that they get heavier and they get faster, and these are very correlated: if you get faster, you have to become heavier, because you have to pass crash tests and things like that. So in order to pass these crash tests you build structure, and that ultimately makes the vehicle heavier. A BMW that you would buy in the '70s would weigh something like twenty-five hundred pounds; nowadays it's roughly four thousand pounds. If you look at the average passenger weight it's carrying, the car is about 25 times the weight of its passengers, and about ten times the size of the passengers it carries. In terms of parking spots, if you look through the cities: usually what we have in the United States is that for every car we have two parking spots, roughly, and in some places parking spots take up like half the city.
In LA, for example, on average it's about one third. So you might ask the question: this is the kind of environment the car created, and do you really want to live in this type of environment? To kind of give you an idea, I mean, if you walk out a lot of the