Chris Gerdes (Stanford) on Technology, Policy and Vehicle Safety - MIT Self-Driving Cars
LDprUza7yT4 • 2017-12-06
So today we have Chris Gerdes with us. He's a professor at Stanford University, where he studies how to build autonomous cars that perform at or beyond human levels, both on the racetrack and on public roads. That includes a race car that goes 120 miles an hour autonomously on the racetrack, which is awesome. He spent most of 2016 as the Chief Innovation Officer at the United States Department of Transportation and was part of the team that developed the Federal Automated Vehicles Policy, so he deeply cares about the role that artificial intelligence plays in our society, from both the technology side and the policy perspective. He is now, I guess you could say, a policy wonk, a world-renowned engineer, and, oh yes, a car guy. He told me that he did a Q&A session with a group of third graders last week and answered all of their hard-hitting questions, so I encourage you to continue on that thread and ask Chris questions after his talk. Please give a warm welcome to Chris.

Great, Lex, thanks for that great introduction, and thanks for having me here to talk to everybody today. This is sort of my first week back in a civilian role; I wrapped up at USDOT last week, so I'm no longer speaking for and officially representing the department, although some of these slides are very similar to ones I used when I did. As of Friday this was still fairly current, but I am talking in my own capacity here. I want to talk about both the technology side and the policy side of automated vehicles, and in particular how some of the techniques you're learning in this class around deep learning and neural networks pose real challenges for regulators and policymakers attempting to ensure vehicle safety.

Just a bit about some of the cars in my background: I am a car guy, and I've gotten a chance to work on a lot of cool ones. I actually have been
working in automated vehicles since 1992. The Lincoln Town Cars in the upper corner were part of an automated highway project I worked on as a PhD student at Berkeley. I then went to Freightliner heavy trucks at Daimler-Benz and worked with suspensions on heavy trucks before coming to Stanford and doing things like building P1, in the upper right corner there; that's an entirely student-built electric steer-by-wire, drive-by-wire vehicle. We've also instrumented vintage race cars, electrified a DeLorean, which I'll show a little bit later, and worked, as Lex mentioned, with Shelley, our self-driving Audi TTS, which is an automated race car. In addition to the Stanford work, I was a co-founder of Peloton Technology, a truck platooning firm looking at bringing vehicle-to-vehicle communication, which allows for shorter following distances, out onto the highway. So these are some of the things I've had a chance to work with.

To give you a little bit of a sense, this is Shelley going around the racetrack at Thunderhill. She can go up to about 120 miles an hour or so on that track; it's really just limited by the length of the straight. It's kind of fun to watch from the outside, and a little disconcerting occasionally, since as you see there's nobody in the car, although from inside it actually looks pretty chill. We've been working with Shelley for a while out on the track, and she's now able to get performance which exceeds the capability of anybody on the development team, and many of us are amateur racers; in fact, most of my PhD students have their novice racing license, and we make sure they get it before going out on the track and testing. So Shelley can beat anybody in the research group; she can even beat the president of the track, David Vodden, now. And we've had the opportunity to work recently with JR Hildebrand, the IndyCar driver who finished sixth this last year in the Indy 500. He's faster, but he's actually
only about a second or so faster on a one-minute-25-second lap, so we're approaching his performance, and he's actually helping us get there.

Now, the interesting thing is that we've approached this problem really from physics: force equals mass times acceleration. The car is out there calculating what it needs to do to brake down into the next corner, how much grip it thinks it has, and so forth as it goes around the track. It's not a learning approach at its core, although we've added a number of learning algorithms on top, because it turns out that the difference between the car's performance and the human performance comes down to getting that last little bit of capability out of the tires. Humans, the best of them at any rate, drive instinctively in a way that is constantly pushing to the limits of the car's capability, and if you prejudge what those limits are, you're not going to be quite as fast. So one of the things we've been working on with learning algorithms is figuring out how much friction I have in this particular corner, and how that changes as the tires warm up and as the track warms up from the morning to the afternoon. These are the things we need to be fast on the racetrack, but they're also the things you need to take into account to be safe in the real world, because what we're trying to do with this project is understand how the car can drive at the limits of the friction between the tire and the road. Race car drivers do that to be fast; as they say in racing, if you want to finish first, you first have to finish, so it's important that they be fast but also accident-free. We're trying to learn the same things so that on the road, when you may have unknown conditions ahead of you, the car can make the safest maneuver, using all the friction between the tire and the road, to avoid whatever it physically can.
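The physics framing here is easy to make concrete. Below is a minimal sketch of the two calculations described: the friction-limited speed through a corner and the braking point before it, plus a simple blended update of the friction estimate as tires and track warm up. The function names and the blending constant are my own illustrative assumptions, not the actual code running on Shelley.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def corner_speed(mu, radius):
    """Maximum speed (m/s) through a corner of given radius (m),
    limited by tire-road friction coefficient mu: mu*g = v^2/r."""
    return math.sqrt(mu * G * radius)


def braking_point(v_entry, mu, radius):
    """Distance before the corner (m) at which friction-limited braking
    must begin to slow from v_entry down to the corner's maximum speed."""
    v_corner = corner_speed(mu, radius)
    if v_entry <= v_corner:
        return 0.0  # already slow enough; no braking needed
    return (v_entry**2 - v_corner**2) / (2 * mu * G)


def update_mu(mu_est, mu_observed, alpha=0.2):
    """Blend a fresh friction observation into the running estimate,
    e.g. as the tires and track surface warm over a session."""
    return (1 - alpha) * mu_est + alpha * mu_observed
```

For example, entering a 50 m radius corner at 40 m/s with an estimated friction coefficient of 0.9 gives a corner speed of about 21 m/s and a braking point roughly 65 m before the corner.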
Ultimately, our goal is for the car to avoid any accident it is physically capable of avoiding.

So we've had a lot of fun with Shelley. We've gotten to drive the car up Pikes Peak and on the Bonneville Salt Flats, and Shelley actually appeared in an Audi commercial with Zachary Quinto and Leonard Nimoy; at the end of the commercial they both look at each other and declare it fascinating. If you're as big a science fiction fan as I am, you realize that once your work has been declared fascinating by two Spocks, there's nowhere left to go, so I had to take a stint and try something different in government. I spent the last year as the first Chief Innovation Officer at the US Department of Transportation, which I think honestly was the coolest gig in the federal government, because I didn't have any assigned day-to-day responsibilities but got to dive in and help with all manner of really cool projects, including the development of the first Federal Automated Vehicles Policy. It was a really great opportunity to see things from a different perspective, and so what I want to do, coming into this as an engineer, is give you a perspective on what it's like for somebody looking at the regulatory side of vehicle safety: how are they thinking about the technologies you're developing, and where does that leave opportunities for engineers to make big contributions to society?

Let's start with what vehicle safety is like today. Today we have a system of Federal Motor Vehicle Safety Standards. These are rules, minimum performance requirements, and each of them must have an objective test associated with it, so you can tell whether the vehicle meets the requirement or not. Now, interestingly, there is no federal agency that tests vehicles before they are sold; we rely in this country on a system of manufacturer self-certification. The government puts these rules out
there, and manufacturers say, we've got this, we can meet this, and then they self-certify and put the vehicles out on the market. The National Highway Traffic Safety Administration can then purchase vehicles and test them to make sure they comply, but we rely on manufacturer self-certification. This is a different system than in most of the rest of the world, which uses pre-market certification: before you can sell a vehicle, the government agency has to say, yes, we've checked it and it meets all the requirements. Aviation in this country works that way, for instance; aircraft require certification before they can be sold. Cars do not.

Now, where did that system come from? A quick history lesson: in 1965 Ralph Nader released a book entitled Unsafe at Any Speed. It's often thought of as a book about the Corvair; it's not, although the Corvair featured prominently in there as an example of a design Nader considered unsafe. What was very interesting about this book was that he was actually advocating for things like airbags and anti-lock brakes back in 1965, and these technologies didn't come along until much later. His argument was that the auto industry had failed, and it wasn't a failure of engineering but a failure of imagination. If you're interested in vehicle safety, I would really recommend you read this book, because it's fascinating. There are quotes from people in the 1960s basically saying: we believe that any collision at more than about forty or forty-five miles an hour is not survivable, therefore there's no reason for seat belts, there's no reason for collapsible steering columns. In fact, there's a quote from somebody who had made great advances in road safety saying, I can't conceive of what help a seat belt would give you beyond firmly bracing yourself with your hands. Those of you who have studied physics know that's patently ridiculous, but there was a common feeling that there was no sense in doing anything about vehicle crashworthiness,
because once you got above a certain speed a crash was considered inherently unsurvivable. I think it's interesting to look at that today, because if any of us were to be in a collision at around 40 miles an hour in a modern automobile, we'd probably expect to walk away; we wouldn't really be thinking about our survival. What this book did was lead to a lot of public outcry, and ultimately to the National Traffic and Motor Vehicle Safety Act of 1966, which established NHTSA and this set of Federal Motor Vehicle Safety Standards. Now, the process to get a new standard made, which is a rulemaking process in government, is very time-consuming. Optimistically, the minimum time it can possibly take is about two years; realistically, it's more like seven. If you think about going through that process, it's really problematic. Think about what we were talking about with automated vehicles two years ago, or seven years ago; think about trying to start seven years ago to make the rules that would govern how those vehicles operate on the road today. It's crazy, right? There's really no way to do that. The other thing is that our system evolved from this sense of a failure of imagination, the idea that the government needs to say: hey, industry, do this, stop slacking off, these are the requirements, get there. But I think it's hard to argue today, with all the advances in automation, that there's any failure of imagination on the part of industry. People are coming up with all sorts of ideas and concepts for new transportation and automation: tech companies, startup companies, large OEMs; there are all sorts of concepts being tested out on the road. It's hard to argue that there's still any lack of imagination. Now, the question is: are things like this legal? It's an interesting question, right? Can I actually legally do this? Well, at the federal level, there's an interesting report that came out about ten months ago
from the folks across the street at Volpe, who did a scan and asked: what are the things in the current Federal Motor Vehicle Safety Standards that might prevent you from putting an automated vehicle out on the road? The answer was, honestly, not much. If you start with and automate a vehicle that currently meets all the standards, then because there are no standards that relate specifically to automation, you can certify your vehicle as meeting the Federal Motor Vehicle Safety Standards; there's nothing at the federal level that prevents, in general, an automated vehicle from being put on the road. It makes sense: if there isn't a safety standard you have to meet, then you can put out a vehicle that meets all the existing standards and does something new, and there's no federal barrier to that. Now, there are a couple of exceptions; there were a few points in the standards that referenced a driver. In fact, NHTSA issued an interpretation of the rules, which is one of the things the agency can do: not making a new rule, but interpreting the ones they have. They said that these references to the driver could in fact refer to the AI system. That is now a policy statement from the department: many of the references to the driver in the Federal Motor Vehicle Safety Standards can be read as referring to your self-driving AI system, with the rules applied accordingly. So there's very little that prevents you from putting a vehicle out on the road if it meets the current standards; if it's an automated modern production car, the Federal Motor Vehicle Safety Standards don't stop that. Now, a lot of the designs I showed, things without a steering wheel and so on, are actually not compliant, because there are requirements that you have a steering wheel and that you have pedals. Again, these are best practices that evolved in the
days when people were not thinking of cars that could drive themselves, and so these designs would require an exemption from NHTSA: a process of saying, okay, this vehicle is allowed on the road even though it doesn't meet the current standards, because it meets some equivalent, and establishing that equivalence can be a bit of a challenge.

Okay, so the question then is: if the federal government is responsible, by the Traffic Safety Act, for safety on the roads, but it can't prevent people from putting anything out, what do you do? One approach is to say, let's get some Federal Motor Vehicle Safety Standards out there, but as we already said, that's probably about a seven-year process, and if you were to start setting best practices in stone now, what would that look like? So we've got this challenge: we want to encourage this technology to come out onto the roads and be tested, because that's the way you're going to learn, to get the real-world data and the real-world experience. At the same time, the federal government is responsible for safety on the nation's roads, and it can recall things that don't work. If you do put your automated system out on the highway and it's deemed to present an unreasonable risk to safety, even if you're an aftermarket manufacturer, the government can tell you to take it off the road. But the question is: how can you do better? How can you be proactive and have a discussion here? We know standards are maybe not the best way of doing that, because they're too slow. We'd like to make sure the public is protected while this technology gets tested, and so the approach taken to provide some encouragement for this innovation, while at the same time looking at safety, was the Federal Automated Vehicles Policy, which rolled out in September. This was an attempt to put out a different framework from the Federal Motor Vehicle Safety Standards: a system of voluntary
guidance. What NHTSA is doing is asking manufacturers to voluntarily follow certain guidance and submit to the agency a letter stating that they have followed a certain safety assessment. The interesting thing is that the way this is set up is not to tell manufacturers how to do something, but to say: these are the things we want you to address, and we want you to come to us and explain how you've addressed them, with the idea that best practices will emerge from this, and we'll be able to figure out in the future what really is the best way of ensuring some of these safety items. So this rolled out in September. We've got the MIT car here on the side; you can see the Massachusetts license plate, so thanks to Brian for bringing that. If you put gaudy stickers on your car, you get closer to the center of the slide, so that's something to consider for future reference. This was rolled out in Washington, DC by the Secretary and consists of multiple parts, but I think the most relevant to vehicle design is the 15-point safety assessment. These are the 15 points that are assessed, and I'd like to talk about a few of them in more detail. It starts with the concept of an operational design domain and minimal-risk, or fallback, conditions. What that means is that instead of trying to impose a taxonomy and say, well, your automation system could be an adaptive cruise control that works on the highway, or it could be fully self-driving, or it might be something that operates a low-speed shuttle, the guidance asks the manufacturers to define this themselves, and the definition is known as the operational design domain. In other words: you tell us where your system is supposed to work. Is it supposed to work on the highway? Is it supposed to work in restricted areas? Can it work in all weather, or is it something that operates only in daylight hours, in the sunshine, in one area of South Florida? All of those are fine.
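The operational design domain idea translates naturally into code. Here's a hedged sketch of what "declare where the system works, watch that it stays there, and fall back when it doesn't" might look like; the specific fields (road type, speed, daylight, weather) are examples drawn from the talk, and the structure is my own illustration, not anything taken from the policy text itself.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ODD:
    """One manufacturer's declared operational design domain (illustrative)."""
    road_types: frozenset     # e.g. {"highway"} or {"restricted"}
    max_speed_mps: float      # speed ceiling for the automation
    daylight_only: bool       # does the system require daylight?
    weather: frozenset        # allowed weather conditions


@dataclass
class Conditions:
    """What the vehicle currently observes about its situation."""
    road_type: str
    speed_mps: float
    is_daylight: bool
    weather: str


def within_odd(odd: ODD, now: Conditions) -> bool:
    """Check every declared boundary of the ODD against current conditions."""
    return (now.road_type in odd.road_types
            and now.speed_mps <= odd.max_speed_mps
            and (now.is_daylight or not odd.daylight_only)
            and now.weather in odd.weather)


def supervise(odd: ODD, now: Conditions) -> str:
    # Leaving the ODD triggers the declared fallback: ask a human driver
    # to retake control, or execute a minimal-risk maneuver.
    return "automation_active" if within_odd(odd, now) else "fallback"
```

A daylight-only, fair-weather, low-speed shuttle would then declare a tiny ODD, and the same supervision loop works whether the fallback is a takeover request or a pull-over maneuver.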
But it's incumbent upon the manufacturer or developer to define the operational design domain, and then, once you've defined where the system operates, to define how you make sure it only operates in those conditions: how do you make sure the system stays there, and what's your fallback in case it doesn't? That fallback can obviously differ between designs. If this is a car that is normally human-driven, as you see here from the Volvo Drive Me experiment, it might be reasonable to say we're going to ask the human driver to retake control, whereas clearly, if you're going to enable blind passengers, or you're going to have a vehicle with no steering wheel, you need a different fallback system. So the guidance really allows manufacturers to have a lot of different concepts of what they want their automation to be, so long as they can define where it works, what the fallback is in the event that it doesn't work, and how they have educated the consumer about what the technology does and doesn't do, so that people have a good understanding of the system's performance.

Going down the list, you also see validation methods and ethical considerations among the points, and validation methods are really interesting as they apply to AI. The idea is that there are lots of different ways you might test an automated vehicle. You might go out on the test track and run it through a series of standard maneuvers. You might accumulate a certain number of miles of experience driving in real-world traffic and figure out how the vehicle behaves in a limited environment. There are questions about a test track, obviously, because you don't have the sort of unknowns that can happen in the real-world environment; but if you test in one real-world environment, you also have the question of whether that information is transferable. If I've driven a certain number of miles in Mountain View, California, does that tell me anything about
how the vehicle is likely to behave in Cambridge, Massachusetts? Maybe, maybe not; it's sometimes a little hard to extrapolate. And then, finally, there's the idea of simulation and analysis: if I can record these situations, if I can create a virtual environment of the sorts of things I see on the road, maybe I can run the vehicle through many, many of these scenarios, perturbed in some way, and actually test the system much more robustly in simulation than I could ever do out on the road. The guidance is neutral on which of these techniques manufacturers use and allows them to approach it in different ways, and based on conversations about the way companies develop this, they do take all these different approaches. A company like Tesla, for instance, which is recording the data streams from all its vehicles, is basically able to run ideas or technologies silently in the vehicle; they can test systems out, get real-world data, and then decide whether or not to make a system active. Companies that don't have that access to data can't use that sort of development method and may rely much more heavily on simulation or test-track experience. So the guidance doesn't require a particular blend of these, and in fact it envisions that you might have over-the-air software updates in the future. It is interesting, though, to think about whether you have data-driven approaches, things like artificial neural networks, or whether you program in hard-and-fast rules, because as you start to think about requirements on a system, how do you set requirements on a system that has learned its behavior, where you don't necessarily know what the internal workings or algorithms look like?
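The perturb-and-replay idea can be sketched as a tiny Monte Carlo loop over one recorded scenario, here a lead vehicle braking ahead of the ego car. The scenario model, the parameter ranges, and the ego braking policy are all illustrative assumptions on my part; real simulation pipelines are vastly richer, but the structure — sample perturbations, run closed-loop, count failures — is the same.

```python
import random


def collision(gap0, v_ego, v_lead, a_lead, a_ego, reaction, dt=0.01, horizon=10.0):
    """Tiny forward simulation: the lead car brakes at a_lead; the ego car
    brakes at a_ego after a reaction delay. Returns True if the gap ever
    closes to zero. All units are meters, seconds, m/s, m/s^2."""
    gap, t = gap0, 0.0
    while t < horizon:
        v_lead = max(0.0, v_lead - a_lead * dt)
        if t >= reaction:
            v_ego = max(0.0, v_ego - a_ego * dt)
        gap += (v_lead - v_ego) * dt
        if gap <= 0.0:
            return True
        if v_ego == 0.0 and v_lead == 0.0:
            break  # both stopped without contact
        t += dt
    return False


def stress_test(n=1000, seed=0):
    """Perturb one recorded lead-braking scenario n times and report the
    fraction of perturbations that end in a collision."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        gap0 = rng.uniform(15.0, 40.0)    # perturbed initial gap
        a_lead = rng.uniform(3.0, 9.0)    # perturbed lead braking severity
        if collision(gap0, v_ego=30.0, v_lead=30.0,
                     a_lead=a_lead, a_ego=6.0, reaction=0.5):
            failures += 1
    return failures / n
```

Running thousands of perturbed variants like this, overnight and for free, is exactly the robustness argument for simulation over accumulating road miles one scenario at a time.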
There's another point that comes up, which is ethical considerations, so I'm going to pick on MIT for a moment here. This is an area I did a lot of work on at Stanford, together with some philosophers who joined our group. When people hear "ethical considerations" in automated vehicles, it often conjures up the trolley problem, in this sort of classic formulation: you have a self-driving car heading toward a group of ten people, and it can either plow in and kill those ten people or divert and kill the driver; what do you do? These are classic questions in philosophy. Look at the actual trolley problem: I have a runaway trolley, and I can either divert it to another track, where it will kill somebody who is wandering across that track, or the five people on the trolley are killed; what do I do? As this article puts it, before automated vehicles can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality. So if all this wasn't hard enough, you understand how tough the technology is to build, you have to get the regulations right, and now we also have to solve impossible philosophical questions. Well, I don't think that's actually true. I think it's good for engineers to work with philosophers, but not to be so literal about this. This is a question philosophers can ask, but engineers might ask a number of different questions, like: who's responsible for the brakes on this trolley? Why wasn't there a backup system? Why am I headed into a group of ten people without any capability to stop? An engineer would in fact have to answer this question, but might approach it much differently. If I look at the trolley problem, I might say: okay, my options are, I've got a trolley that's out of control. First of all, I'd like an emergency braking system, so let's make sure I have that. Well, there's a chance that could fail as well, so if my
base braking system goes and my emergency braking system goes, my next option would be to divert the trolley to the side track. Well, knowing that's my option, I should probably put up a fence with a warning sign that says: do not cross, runaway trolley track. Okay, now let's say I've done all of that: the brakes fail, the big emergency brakes fail, I have to divert the trolley, and somebody has ignored my sign, crossed over the fence, and is hit by the trolley. Do I feel a little differently about this whole scenario than I did at the beginning, when I was just trying to decide who lived and who died? The decision was made, but by thinking as an engineer trying to reduce risk, not by weighing levels of morality and who deserves to live or die. So I think this is a very important issue, and the reason it's in the guidance is not to have everybody solve trolley problems but to think about these larger issues. Ethics is not just about these sorts of dilemma situations, which in automated vehicles I think will be addressed much more by engineering principles than by trying to figure out on philosophical grounds who deserves to live and die. There are broader issues any time you have concern for human safety: how close do I get to pedestrians? How close do I get to bicycles? How much care should I take with other people in the environment? That's very much an ethical question, and it's one that manufacturers are already addressing today. If you look at the automatic emergency braking systems most manufacturers are putting on their vehicles, they will actually use a different algorithm depending on whether the obstacle in front is a vehicle or a human. They're already detecting and deciding that the impact of this vehicle with a human could be far worse than the impact of this vehicle with another vehicle, and so they're choosing to brake a little bit more heavily in that case.
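As a toy illustration of "a different algorithm depending on the obstacle class," here is what a class-dependent AEB trigger might look like. The time-to-collision thresholds and deceleration levels are made-up illustrative numbers, not any production calibration.

```python
def aeb_deceleration(obstacle_class, time_to_collision):
    """Return the commanded braking deceleration (m/s^2), or 0.0 if the
    automatic emergency braking system should not yet intervene.
    Vulnerable road users get an earlier trigger and harder braking,
    reflecting the asymmetric harm of a collision with a person."""
    # (trigger TTC in seconds, commanded deceleration in m/s^2) -- illustrative
    profiles = {
        "pedestrian": (2.5, 9.0),
        "cyclist":    (2.5, 9.0),
        "vehicle":    (1.5, 6.0),
    }
    trigger_ttc, decel = profiles.get(obstacle_class, (1.5, 6.0))
    return decel if time_to_collision <= trigger_ttc else 0.0
```

At a two-second time-to-collision, this policy is already braking hard for a pedestrian but doing nothing yet for a vehicle — which is exactly the ethical judgment, encoded as an engineering parameter, that the talk describes.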
That's actually where these ethical considerations come in, and the idea of the guidance is to begin to share and have an open discussion about how manufacturers are approaching this, with the goal of getting to a best practice where not only the people in automated vehicles but other road users feel that an appropriate level of care is taken for their well-being. That's one area where ethics is important. The other is that we have different objectives as we drive down the road. We have objectives for safety: we'd like to get there. We have objectives for mobility: we'd like to get there reasonably quickly. And we have the idea of legality: we'd like to follow the rules. But sometimes these come into conflict. Let's say you're driving down the road and there's a van parked where it has absolutely no business parking, and you've got a double yellow line. Is it okay to cross? Well, at least in California, there's no exception to the double yellow line as a lane boundary for a vehicle that's parked where it has no business being parked, so according to the vehicle code you're supposed to come to a stop here. I don't think any of us would, right? In fact, when you're in California riding through the hills and you come upon a cyclist, virtually every vehicle on the road deviates across the double yellow line to give extra room to the cyclist. That's also not what you're supposed to do by the vehicle code; you're supposed to stay on your side of the double yellow line but slow to an appropriate speed to pass. So there are behaviors where our desire for mobility or our desire for safety outweighs our desire for legality. This becomes a challenge when you think about how to program the self-driving car: should it be based on the way humans drive, or on the way the legal code tells me to drive? Of course, the legal code was
never actually anticipating a self-driving car. From a human standpoint, that double yellow line is a great shorthand that says maybe there's something coming up where you don't want to be in the other lane; but if I have a car with the sensing capability to make that determination itself, is the double yellow line actually all that meaningful anymore? These are things that have to be sorted out. Speed limits are another one: out on the highway, it's usually a little bit flexible. Do we give that same flexibility to the automated vehicle, or do we create these wonderful automated-vehicle roadblocks of cars going exactly the speed limit when nobody else around them is? Do we allow them to accelerate a little to merge into the flow of traffic? Do we allow vehicles to speed if they could avoid an accident? Is our desire for safety greater than our desire for legality? These are the sorts of ethical questions that I think are really important, and they need to be talked through, because I believe that if we have vehicles that strictly follow the law, nobody will want to drive with them. So we need to think about giving flexibility either to the vehicles, or to the law, in the sense that vehicles can drive like humans do. This brings up some really interesting areas with respect to learning and programming. Should our automated vehicles drive like humans and exhibit the same behavior that humans do, or should they drive like robots and execute exactly the way the law tells them to? Fixed rules can be one solution; behavior learned from human drivers could be another; or we might balance different objectives more analytically, in terms of how much we want to obey the double yellow line when there are other things influencing it in the environment.
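One way to make the safety/mobility/legality balance concrete is a weighted cost over candidate maneuvers, as in the parked-van example. Everything below — the weights, the penalty shapes, the candidate options — is an illustrative assumption; a real planner would score whole trajectories, but the trade-off structure is the same.

```python
def maneuver_cost(clearance_m, delay_s, crosses_double_yellow,
                  w_safety=10.0, w_mobility=1.0, w_legality=3.0):
    """Hypothetical weighted cost for one candidate maneuver.
    Safety penalizes small clearance to the obstacle, mobility penalizes
    time lost, legality penalizes crossing the double yellow line.
    Weights are illustrative, not calibrated values."""
    safety = max(0.0, 1.5 - clearance_m)   # penalty grows once clearance < 1.5 m
    mobility = delay_s                     # seconds of delay
    legality = 1.0 if crosses_double_yellow else 0.0
    return w_safety * safety + w_mobility * mobility + w_legality * legality


# Candidate behaviors when a van is parked in-lane ahead:
options = {
    "stop_and_wait":  maneuver_cost(clearance_m=2.0, delay_s=30.0,
                                    crosses_double_yellow=False),
    "squeeze_past":   maneuver_cost(clearance_m=0.4, delay_s=2.0,
                                    crosses_double_yellow=False),
    "cross_the_line": maneuver_cost(clearance_m=2.0, delay_s=3.0,
                                    crosses_double_yellow=True),
}
best = min(options, key=options.get)
```

With these weights the planner accepts the legality penalty and crosses the line with wide clearance, which is exactly the human behavior the talk describes; shifting weight onto legality would instead make it stop and wait, and that choice of weights is itself the ethical question.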
Now, what's interesting as you start to think about this is that there are limits to any of these approaches in the extreme. As we found with our self-driving race car, if you're not learning from experience, you're not making use of all the data, and you're not going to do as well; there's no way you can pre-program an automated vehicle for every scenario it's going to encounter, so somehow you have to think about interpolating, somehow you have to think about learning. At the same time, you could say, why don't we just imitate humans? Well, human error is actually a cause or a factor, the primary factor, in 94 percent of accidents: either a lack of judgment or a lack of perception on the part of the human. If we're simply imitating humans, we're only learning how well humans can do things, and we're leaving a lot on the table in terms of the potential of the car. So this is a really interesting discussion that I think will continue, on both the development side of these vehicles and the policy side: what is the right balance? What do I want to learn versus what do I want to program? How do I avoid leaving anything on the table?

Since I've had a bunch of slides with words here, I want to give people a little bit of a sense of what you could be leaving on the table if you don't adapt. This is Marty. Marty is a DeLorean we've been working with in my lab. Now, DeLoreans are really fantastic cars, unless you want to accelerate, brake, or turn; it really didn't do any of those things terribly well. There's no power steering, there's an underpowered engine, and there are very small brakes. All of these things are fixable. In fact, what's nice about the DeLorean is that it comes apart quite nicely: the whole fiberglass tub comes up, you can take out the engine, take out the brakes, make some modifications to the frame, stiffen the suspension, work with Renovo Motors, a startup in Silicon Valley, to put in a new electric drivetrain, and put it all back together. And when you do, you
come up with a car that's actually pretty darn fun, and we've programmed it to drive itself. This is Adam Savage from MythBusters going along for a drive. [Music] What you see is Marty doing something at a level of precision that we're pretty sure no human driver can match; Junior said there's no way he can do this. You see it going into a perfect drift, doing a perfect doughnut around this cone, and then launching itself through the next gate sideways toward the next cone. As it's doing this, you see it shoot through the gate, missing those cones, and then launch into a tight circle around the next cone. It's actually doing this with an algorithm similar to orbital mechanics, if you think about how it's orbiting these different points as it sets the trajectory. Now, the limit on this is tires: as you can see as it comes around here, the tires disintegrate into chunks flying at the camera as we do this, but the car is able to continue, even as the tires heat up, to execute this pretty nice trajectory. Here you see it going through the gates again and launching into a stable equilibrium, putting the tire tracks pretty much right over where they were in the previous run, and then finally ending. This is the sort of thing I think is possible as you look at these vehicles: there's a huge potential for them not just to drive about as well as an average human, but to far exceed human performance in their ability to use all the capabilities of the tires to do some amazing things. Maybe that's not the way you want your daily drive to go, although when we first posted some of this video, one of the commenters said, "I want this car so I can go into the store to buy donuts while it sits in the parking lot doing donuts." That wasn't a use case I had thought of, but it's one of the things that came up. The point is this: if you limit yourself to only thinking about what the
tires can do before they reach the saturation of friction with the road, you're only taking into account one class of trajectories. There's a lot more beyond that which could be very advantageous in some emergency situations; wouldn't it be great if the car had access to that? And that's not a capability we're going to get if we only monitor day-to-day driving. So one other aspect that came through in the policy, which I think is extremely important as we think about neural networks and learning, is this idea of data sharing. There's a huge potential to accelerate the development of automated vehicles if we can share information about edge-case scenarios in particular. If you think about trying to train a neural network to handle extreme situations, that's really much easier if your set of training data contains those extreme situations. So if you think about the weird things that can happen out on the road: if you had a database of those, and those comprised your training set, you'd have a head start in getting a neural net where you can begin to validate that it would work in these situations. The question is whether there is a way for the ecosystem around self-driving cars to share some of this information, so that different players can share information about the critical situations, and so that if you learn something, yes, you can make your cars safer, but actually all the cars out on the road get safer. Now clearly you need to balance this with other considerations: there are the intellectual property concerns of the companies, and there are privacy concerns of any individuals who might be involved. But it does seem to me that there's a big potential here to think about ways of sharing certain data that can contribute to safety, and this is a discussion that's going to be ongoing. I think academia can do a lot to help
broker this discussion, because at the first level people say, "Data sharing? I don't know, companies aren't going to share; we're not going to get the information we need." But most of the time people stay in the abstract, as opposed to asking: what information would be most helpful? What information is really going to give people confidence in the safety of these cars, let regulators understand how they operate, and at the same time protect the development effort that companies have put in? I think there is a solution here, and in fact if you look at aviation there's a really good example that already exists. It's known as the ASIAS system. It started with only four airlines that decided to share safety information with each other, through MITRE, which is a federally funded R&D center, and it's now up to 40 airlines; if companies get kicked out of the MITRE project, they try very hard to get back in. Now, this is anonymized data: anonymized so that companies actually get an assessment of what their safety record is like, and can compare it to other airlines in the abstract, but they can't compare it to any identifiable airline. There's no ranking, and it's not used for any enforcement. It took people a long time to build that up and begin to share, but now there's a huge amount of trust, and they're sharing more and more data and looking at ways they could perhaps start to code in things like weather and time of day, which had been removed for anonymization purposes in the original version of the system. So I think there are some good examples out there, and this is something that's very important to think about for automated vehicles. As this discussion goes forward, those of you who are interested in developing these vehicles using techniques that rely on data are going to be an important voice for the importance of data sharing. I
think there's a large role here in making people aware that this actually does have value in the larger ecosystem. This is something I was able to work on more broadly as well: I was the DOT representative on the National Science and Technology Council's Subcommittee on Machine Learning and Artificial Intelligence, and this was one of the recommendations that was really pushed forward there too, because AI has tended to make its great advances with the availability of good datasets, and in order to make those sorts of advances in transportation, that group is also advocating that such datasets be made broadly available. So this is a little bit of the vision behind the automated vehicle policy and what it was meant to achieve: the idea of moving towards a proactive safety culture; not necessarily putting in regulations prematurely and trying to set standards (honestly, we don't yet know the best way to develop automated vehicles), but allowing the government to get involved in discussions with manufacturers early and be comfortable with what's going out on the roadway; and actually helping the U.S.
to continue to play a leading role in this. Obviously, if vehicles were going to be banned from the roads, it would be very difficult for the country to continue to be a place where people could test and develop this technology. And then there's the belief that there can be an acceleration of the safety benefits through data sharing, so that each car doesn't have to encounter all the weird situations itself but can learn from what other vehicles experience. The idea is that this is meant to be an evolving framework: it comes out as guidance, it generates conversations, it generates best practices, and those can eventually evolve into standards and law. There's a huge opportunity here, because the belief isn't that the National Highway Traffic Safety Administration will be doing all of the development of these best practices, but that they'll really evolve from what companies do and what all of us at universities are able to do: generating ways to solve these problems creatively, ways to keep the innovation going while ensuring that we have safety. So as you start to think about all of the AI systems you're developing, flip it around a little bit and ask: how is a regulator going to get comfortable that this system isn't going to do something weird? These are great research questions, great practical questions, and things that will need to be worked out going forward. So I leave you with that as a challenge: as you take this course, think not only about the technology you're learning, but about how you communicate it to other people and where the gaps are that need to be filled, because I think you'll find some great opportunities for research, for startup companies, and ultimately for work with policy and government. So thanks for the opportunity to talk to all of you. I want to stop there, because probably the things that you want to talk about are more interesting than
the things that I wanted to talk about, so I'm happy to take questions. Good, we had a quick hand here. [Audience question, partially inaudible: if accident rates become extremely low, do you think some of these safety requirements may roll back?] I think that's a great question, and thanks for reminding me to repeat it. The question was whether, in the future when you have all vehicles automated, we would be able to roll back things like airbags, seatbelts, and the other passive safety devices we have in vehicles. I believe we will. In fact, one of the things I think is most extraordinary, if you look at this from a sustainability standpoint: when you look at the average mass of vehicles and the average occupancy of vehicles in the U.S., with single-occupant passenger cars we're using maybe about ninety percent of the energy to move the vehicle as opposed to moving the people inside. One of the reasons for that is crashworthiness standards, which are great, because they're what have enabled us to survive crashes at 40 miles an hour. But if we do have vehicles that are not going to crash, or vehicles with certain modes designed with very carefully engineered crush areas and the like, we could potentially take a lot of that mass out, particularly for low-speed vehicles designed only for the urban environment that aren't going to crash, because they drive somewhat conservatively or are in some ways separated from pedestrians. Then I think you can get a lot of the mass out, and you start to have transportation options that, from an environmental standpoint, are comparable to cycling. So I think that's a really good goal to strive for, although we either have to limit the environment or think about the far future with some of those
techniques to apply, which you guys are learning. Good, yeah, that's a great question: what are we doing with Shelley? Is our mission really just to drive as fast as possible, faster than a human, or are we trying to learn something we can apply to other automated vehicles? It really is a desire to learn for the development of other automated vehicles. We've often said that at the point where the difference between Shelley's performance and the human driver's comes down to really mundane things, like our shift pattern or something else that isn't applicable, we'll kind of lose interest. However, up to this point, every insight we've gotten from Shelley has been directly transferable. We've programmed the car to do emergency lane changes in situations where you don't have enough room to brake, and we've demonstrated in some cases that the car can do this much faster than even an expert human's response. So there are certain scenarios like that that we've done. From the bigger picture, what's really fascinating is that we originally started out with the idea of "let's find the best path around the track and track it as closely as we can." But in fact, when you look at human race car drivers, what they're doing is very different: they're pushing the car to the limits and then seeing what paths that opens up to them. It flips the problem on its head a bit, in a way that I think is very applicable for developing safety systems out on the road, but it's not a way people have looked at it, to the best of my knowledge, up to this point. So that's really what we're hoping: that the inspiration of trying to reproduce human performance there leads us to better safety algorithms. So far that's been the case, and when that ceases to be the case, I think we'll definitely be much less interested.
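The "not enough room to brake" point above has a simple kinematic core. Here is a back-of-the-envelope sketch of why an emergency lane change can succeed where braking cannot; all numbers (0.9 g of available friction, 25 m/s, one lane of lateral offset) are assumptions for illustration, not figures from the Stanford experiments:

```python
import math

# Assumed values for illustration; 0.9 g is a rough dry-asphalt estimate.
MU, G = 0.9, 9.81      # friction coefficient, gravity (m/s^2)
v = 25.0               # speed in m/s, roughly 56 mph
lane_width = 3.5       # meters of lateral offset for one lane change

# Distance needed to stop at the friction limit: v^2 / (2 * mu * g)
brake_dist = v**2 / (2 * MU * G)

# Distance covered while swerving one lane, treating all available grip
# as lateral acceleration: y = 0.5 * a * t^2  =>  t = sqrt(2 * y / a)
t_swerve = math.sqrt(2 * lane_width / (MU * G))
swerve_dist = v * t_swerve

print(f"brake: {brake_dist:.1f} m, swerve: {swerve_dist:.1f} m")
# With these assumptions the lane change needs noticeably less
# longitudinal distance than a full stop, which is why the maneuver
# can matter at highway speed.
```

The gap widens with speed: braking distance grows with v squared, while the swerve distance grows only linearly with v, so at higher speeds steering around an obstacle becomes increasingly attractive relative to stopping.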
Yeah, so liability is a good question: who is liable, if I can rephrase, for an accident involving an automated vehicle? On the one hand, that's an open question; on the other hand, we do have a court system, and whenever there are new technologies, these things generally get figured out in the courts, and the outcome can be different from state to state. So this is one aspect where some discussion, so that manufacturers aren't subject to different conditions in different states, would potentially be helpful. But the way it works now is that it's usually not binary: in the U.S. we have a notion of joint and several liability, so you can assign different portions of responsibility to different players. You've had companies like Volvo, and in fact Google, make statements that if their vehicles are involved in accidents, they would expect to be liable. So people have often talked about needing something really new for liability, but I'm not sure that's the case: we have a court system that can ultimately figure out who is liable with new technologies, and we have some manufacturers starting to make statements about assuming product liability. The one thing that really could be helpful, as I mentioned, is some harmonization, because right now insurance is set state by state, and the rules in one state as to who's at fault for an accident may be very different in another. Okay, so what if companies, as they send in their safety letters, are using criteria to set safety that may not be broadly acceptable to the public, if the public would like these vehicles to have greater safety? The nice thing about this process is, first of all, we would know that: we would have a sense that companies are developing with certain measures of safety in mind, and there could actually be a discussion as to
whether that is setting an acceptable level. It's a difficult question, because it's not clear that people really know what an acceptable level is. Does it have to be safer than humans drive now? My personal feeling is yes. Does it have to be much, much safer? That's hard to say. You then get into the situation where we're comfortable, to a certain extent, with our existing legal system and with the fact that humans can make errors that have fatal consequences; do we feel the same way about machines? We tend to think that machines really should have a higher level of perfection, so as a society we may be less tolerant. People will often say, "So long as the overall national figures go down, that's good," but that's really not going to matter much to the families who are impacted by an automated vehicle, particularly in a scenario with very, very bad optics. What do I mean by that? The failures of mechanical systems are different from the failures of human beings; they can often look really bad. If you think about a vehicle that doesn't detect something and just continues to plow ahead, visually that's really striking, and that's the sort of thing that would get replayed, stay in people's consciousness, and raise fears. So I think that's an issue that's going to have to be sorted out. [Audience question about exchanging research and technology between different parts of the world.] Yes, that's a good question: what's being done from a global standpoint to share ideas, share research, and work through some of these things, particularly on the policy side? Most of the auto manufacturers are global corporations, so a lot of this research is done in very different parts of the world. Renault-Nissan,
for instance, is doing a lot in Silicon Valley, in Europe, and in Japan, and I think you see that with many of the manufacturers. One of the cool things I got to do in my role was to go with the Secretary of Transportation to the G7 transportation ministers' meeting in Japan and address the ministers about the U.S. policy on automated vehicles. One part of that discussion was that the U.S. has a very different set of rules: we have manufacturer self-certification as opposed to pre-market certification. But testing, for instance, is something that has to be done regardless; either it's testing done by a manufacturer, or it's testing done by, for instance in Germany, the TÜV and the other agencies responsible for road safety. And so the idea is, maybe we should be