Driving is one of those tasks so tedious, dangerous, and demanding that it almost screams to be handled by robots. Recently, the technology has finally begun to catch up with common sense. The elevator pitch for self-driving cars is a no-brainer.
Roughly 1.2 million people die in car accidents every year, and millions more are seriously injured. We could save almost all of those lives. Millions of people waste billions of hours commuting; in a self-driving car, they could work, watch Netflix, or read a book instead. Robot cars would let us get rid of parking lots and traffic jams.
Blind people, the elderly, and people too young to drive would be able to move around freely without a human driver. The savings in lives, dollars, and productivity are incalculable. Machines don’t get drunk, tired, or distracted, and they follow traffic laws exactly. These are things that everybody wants, with far-reaching implications. The hundred-billion-dollar question is: how long will it take us to get there?
A World of Driverless Cars
Google describes the project in a recent blog update like this:
“Ever since we started the Google self-driving car project, we’ve been working toward the goal of vehicles that can shoulder the entire burden of driving. Just imagine: You can take a trip downtown at lunchtime without a 20-minute buffer to find parking. Seniors can keep their freedom even if they can’t keep their car keys. And drunk and distracted driving? History. […] they will take you where you want to go at the push of a button. And that’s an important step toward improving road safety and transforming mobility for millions of people.”
Autonomous cars have been something of a hot topic in recent years, with Google leading the charge. Google has driven its fleet of experimental robot cars more than 1.1 million kilometers without serious incident, and recently premiered a new low-speed electric prototype to fine-tune city driving – with no steering wheel or brakes whatsoever.
Outside Google, Toyota, Honda, and Ford all have their own self-driving car projects, although none of them are nearly as advanced as Google’s. In fact, several automakers have dismissed the idea of fully autonomous cars out of hand as too challenging, focusing instead on driver assistance features.
Google, for its part, has outlined an aggressive timeline to commercialization, hoping to partner with automakers to release autonomous vehicles, running Google software but manufactured by third parties, before the close of the decade. Google intends these vehicles to hit the market no later than 2018. So what’s standing in the way of that goal?
Google’s prototype is really, really good — but it isn’t perfect. Here’s how the car works now:
The primary sense organ of the robot is a spinning LIDAR turret on the roof of the car. The LIDAR turret paints the world around the car with an infrared laser beam at very high speed. By recording the position and intensity of laser light reflected back, a simple machine vision algorithm can quickly compute a three-dimensional map of the objects around the car many times a second, allowing it to identify objects like cars, pedestrians, sidewalks, and traffic cones.
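The geometry behind that map is simple enough to sketch: each laser return is a pair of angles plus a measured range, which converts directly into a point in 3D space. A minimal illustration in Python (the function name and angle conventions here are our own, not Google's actual pipeline):

```python
import math

def lidar_return_to_xyz(azimuth_deg, elevation_deg, range_m):
    """Convert one spherical LIDAR return into Cartesian coordinates
    relative to the sensor. Angles in degrees, range in meters."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# One sweep of the turret yields many thousands of such returns; together
# they form the point cloud that the vision software segments into objects.
points = [lidar_return_to_xyz(az, 0.0, 10.0) for az in range(0, 360, 90)]
```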
The car, as a secondary sense, has a number of cameras that it uses to gather additional information about the world around it (identifying signals from cyclists and other cars, and reading the status of traffic lights and signs). Finally, the car has a GPS, which tells it, to within a few meters’ accuracy, where it is located in space.
None of these senses is good enough on its own to direct the car, but by using clever software to fuse these data sources, the car is able to make intelligent driving decisions. To make the task easier, Google has been operating Street View cars with LIDAR turrets for years – cars that, along with providing you with weird journeys into the past, have been systematically 3D-mapping streets all over the world.
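One standard way to fuse noisy, independent measurements is inverse-variance weighting, the one-dimensional core of the Kalman filters widely used in robot localization. A sketch with invented numbers, assuming each sensor reports a position estimate and a variance:

```python
def fuse_estimates(estimates):
    """Fuse independent estimates of the same quantity by inverse-variance
    weighting: precise sensors (small variance) dominate, noisy sensors
    contribute little. Each estimate is a (value, variance) pair."""
    total_weight = sum(1.0 / var for _, var in estimates)
    fused_value = sum(val / var for val, var in estimates) / total_weight
    fused_variance = 1.0 / total_weight
    return fused_value, fused_variance

# Hypothetical lateral-position estimates (meters) for the same instant:
gps = (2.0, 4.0)     # GPS: roughly right, meters-level noise
lidar = (1.2, 0.01)  # LIDAR matched against a prior map: centimeter-level
pos, var = fuse_estimates([gps, lidar])  # pos lands very close to 1.2
```

The fused estimate is both closer to the precise sensor and more certain than either input alone, which is why combining mediocre senses can produce a good driver.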
All of this data has been meticulously tagged to let the car’s computer know the positions of traffic lights, and what the speed limits and lane designations are for each road.
The robot can fine-tune its GPS position by comparing its current LIDAR data to old 3D maps of the street it’s on, to ensure that it doesn’t drift out of its lane (this also allows it to navigate when GPS isn’t an option, like when it’s driving through a tunnel or a parking garage). Furthermore, the car can access the metadata for its local environment to tell it when the speed limit changes and where to look for traffic signals.
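Scan matching like this can be sketched in one dimension: slide the current range profile over the stored map profile and keep the offset with the smallest mismatch. The profiles and the `best_offset` helper below are invented for illustration; real systems match full 3D point clouds.

```python
def best_offset(current_scan, map_scan, max_shift=3):
    """Estimate sensor drift by sliding the current LIDAR profile over the
    stored map profile and picking the shift with the smallest mismatch
    (mean squared range difference). A 1D stand-in for real scan matching."""
    best, best_err = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        pairs = [(current_scan[i], map_scan[i + shift])
                 for i in range(len(current_scan))
                 if 0 <= i + shift < len(map_scan)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = shift, err
    return best

map_scan = [5, 5, 9, 9, 5, 5, 7, 7, 5, 5]  # stored street profile
current = [9, 9, 5, 5, 7, 7, 5, 5, 5, 5]   # same profile, drifted by 2
drift = best_offset(current, map_scan)     # → 2
```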
This combination of hardware and software can do a lot of remarkable things: it can see and predict the motions of cyclists and pedestrians. It can identify construction cones and roads blocked by detour signs, and deduce the intentions of traffic cops with signs.
It can handle four-way-stops, adjust its speed on the highway to keep up with traffic, and even adjust its driving to make the ride comfortable for its human payload. The software is also aware of its own blind spots, and behaves cautiously when there might be cross-traffic or a pedestrian hiding in them.
There are, unfortunately, also some things that the car can’t do. The biggest issue is weather: Google’s cars have mostly been tested in California. In a larger roll-out across the world, autonomous cars will need to cope gracefully with flash flooding, heavy fog, and deep snow. That’s a problem, because all of those conditions seriously interfere with the heavy lifter of the robot’s senses: the LIDAR.
Snow and standing water scatter the laser beam, making it difficult to collect reliable data, and fog or heavy rain can dramatically cut the distance the LIDAR can see. Without a reliable LIDAR, the robot is effectively blind.
Fixing the weather problem is still an open area of research. If we’re lucky, it may be possible to use clever noise-filtering algorithms to extract meaningful data even from weather-clouded LIDAR, or shift the burden to the cameras, allowing the robot to continue to maneuver, although probably at a reduced speed.
If not, it may be necessary to add a new suite of sensors (perhaps SONAR or RADAR) to give the robot 3d mapping capabilities even in the event of LIDAR failure. Either way, Google’s working on it.
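One of the simplest members of that noise-filtering family is a median filter: isolated spikes from rain or snow scatter stand out as outliers in an otherwise smooth range profile, and taking a local median discards them. A toy sketch with invented range data:

```python
def median_filter(ranges, window=5):
    """Replace each LIDAR range with the median of its neighborhood.
    Isolated scatter spikes are outliers in a smooth range profile,
    so the median drops them while preserving genuine edges."""
    half = window // 2
    out = []
    for i in range(len(ranges)):
        lo, hi = max(0, i - half), min(len(ranges), i + half + 1)
        neighborhood = sorted(ranges[lo:hi])
        out.append(neighborhood[len(neighborhood) // 2])
    return out

noisy = [10.0, 10.1, 0.3, 10.2, 10.1, 9.9, 55.0, 10.0, 10.1, 10.0]
clean = median_filter(noisy)  # the 0.3 and 55.0 spikes are gone
```

Real bad-weather perception is far harder than this, of course; the sketch just shows why a noisy sensor isn't automatically a useless one.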
A deeper problem, though, is what’s called the long tail. Think of it like this: the majority of driving that self-driving cars will be asked to do is on the freeway. For a robot, freeway driving is easy. The next use case would probably be low-speed city driving in good weather, which robots are also pretty good at.
Unfortunately, even though these represent probably 90% of all the driving situations the cars will ever face, they aren’t the only two possibilities. What about parades? What about ambulances? Rock slides? Car accidents? Flat tires? Jaywalking dogs? Road construction? Tornadoes? Getting pulled over by the police?
The point is that as you go down the list of cases the car has to handle, sorted by probability, you find that there are an almost infinite number of them, each with a tiny slice of the probability pie. You can’t hard-code behavior for every possibility.
You have to accept that eventually your robot car will encounter something you didn’t plan for, and will behave incorrectly. It might even get people killed. The best you can do is try to cover enough cases well enough that the robot is still safer to use than a human-directed car.
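The arithmetic of the long tail is easy to illustrate. If scenario probabilities follow a Zipf-like distribution (a common, though here purely illustrative, model for long-tailed data), a handful of cases covers most driving, but covering the last slice takes an order of magnitude more:

```python
def cases_needed(coverage, n_cases=1_000_000, s=2.0):
    """Count how many of the most common cases must be handled to cover
    a given fraction of all driving situations, assuming case k occurs
    with weight 1/k**s (a Zipf-like long tail; purely illustrative)."""
    weights = [1.0 / k ** s for k in range(1, n_cases + 1)]
    total = sum(weights)
    running, handled = 0.0, 0
    while running / total < coverage:
        running += weights[handled]
        handled += 1
    return handled

few = cases_needed(0.90)   # a few cases cover 90% of situations
many = cases_needed(0.99)  # the last 9% takes an order of magnitude more
```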
Right now, the Google car isn’t quite far enough down that list yet, but it is starting to get close, and Google is working on developing safe fallback behavior to ensure that the car won’t actively harm anyone, even in the case of software failure or unanticipated driving conditions.
Google’s method of building up these cases is clever: the company has a policy that when the car makes an error, or a human is forced to take control, the incident is logged, and the software is revised until it can pass simulated versions of the same scenario. Any large-scale change to the software is tested against this database of incidents to ensure that nothing has been inadvertently broken.
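That policy amounts to a regression test suite for driving behavior. A hypothetical sketch of the idea, with invented scenario names and a deliberately bad toy policy:

```python
def run_simulation(software, scenario):
    """Replay one logged scenario against a candidate driving policy and
    report whether it chose the action the reviewers deemed safe."""
    return software(scenario["situation"]) == scenario["safe_action"]

def regression_suite(software, incident_log):
    """Return the ids of logged incidents the candidate build fails."""
    return [s["id"] for s in incident_log
            if not run_simulation(software, s)]

incident_log = [
    {"id": 1, "situation": "cyclist_hand_signal", "safe_action": "yield"},
    {"id": 2, "situation": "ball_rolls_into_road", "safe_action": "brake"},
]

def candidate_build(situation):
    # Toy policy that brakes for everything: safe for the ball,
    # wrong for the cyclist, so the build fails incident 1.
    return "brake"

failures = regression_suite(candidate_build, incident_log)  # → [1]
```

The real system simulates full vehicle dynamics rather than comparing labels, but the workflow is the same: no build ships until the failure list is empty.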
There are softer limitations as well – the LIDAR turrets used by the robots currently clock in at more than $30,000. The good news here is that this is largely because those LIDAR turrets are a specialty item used for only a few applications. Mass production will certainly bring those costs down.
Furthermore, if self-driving cars are adopted under the cab model (likely provided by Google’s protégé, Uber), the needed ratio of cars to users will likely be low: people going to similar places can be carpooled together by centralized routing software in exchange for reduced fees, and cars can maintain more or less continuous usage. This reduces the cost per user dramatically, even if the cars themselves are very expensive.
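The back-of-the-envelope arithmetic bears this out: amortizing even an expensive vehicle over a cab's high utilization yields a small capital cost per ride (all figures below are invented, not market data):

```python
def amortized_cost_per_ride(vehicle_cost, rides_per_day, service_years):
    """Spread a shared vehicle's capital cost over its working life.
    Ignores fuel and maintenance; purely illustrative numbers."""
    total_rides = rides_per_day * 365 * service_years
    return vehicle_cost / total_rides

# A $150,000 robot cab doing 40 rides a day for 5 years:
per_ride = amortized_cost_per_ride(150_000, 40, 5)  # ≈ $2.05 per ride
```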
Self-driving cars read like a grocery list of things that scare regulators: autonomous robots with lethal force, disruptive new technology, mechanized unemployment, and large corporations putting millions of cameras all over the world.
Robot cars will probably kill people (though at a rate much lower than human drivers), they’ll displace millions of truck drivers and hundreds of thousands of cab drivers, and they’ll provide Google with an enormous amount of personal data about their users. Needless to say, there’s going to be some resistance to getting self-driving cars legalized, particularly since they require major overhauls to the regulatory infrastructure already in play.
In order for self-driving cars to become a legal, mainstream part of our lives, we’re going to have to give up some very old legal precepts, including the idea that the human being in the driver’s seat of a car is responsible for its actions.
The states that have issued preliminary regulation to allow for the testing of autonomous vehicles (including California and Nevada) have employed a variety of legal shortcuts to allow the research to take place.
In California, for example, the person who initiates the car’s journey is legally the operator, even if they aren’t actually in the car at the time. This is obviously inadequate as a long-term answer: it means, for example, that someone who dispatched a car while drinking could be charged with DUI even though they were nowhere near the vehicle.
California hopes to release more permanent regulation for such consumer vehicles by early 2015, but Consumer Watchdog, an independent advocacy group, is lobbying for them to delay the regulation for eighteen months to allow more thorough safety testing.
Google hopes to encourage lawmakers to place liability for the car’s actions with the manufacturers of the self-driving hardware, which they see as the fairest way to distribute blame: it seems silly for the law to hold a human operator responsible for behavior that they have no control over.
The regulators involved admit that legislating for autonomous vehicles is a difficult problem:
“We’re really good at licensing drivers and regulating vehicles and the car sales industry, but we don’t have a lot of expertise in developing those types of standards,” Soublet said. “So as we start approaching things like that, we have to back off. We don’t have the technical ability to do it. We have to come at this from a regulatory perspective of what we as a department are capable of.”
They do, however, agree that the field is worth the effort.
“It’s an issue that draws you in. It’s our future. We find it very exciting to work on […] Brian [Soublet] and I, we can’t believe that we’re working on this. It’s something that will change the way that we all live.”
Federal regulation is on its way, but may not arrive for several years. The National Highway Traffic Safety Administration released a preliminary statement on the issue, in which it expressed some enthusiasm for the prospect of fully autonomous vehicles.
“America is at a historic turning point for automotive travel. Motor vehicles and drivers’ relationships with them are likely to change significantly in the next ten to twenty years, perhaps more than they have changed in the last one hundred years.”
However, the NHTSA also seems unprepared to issue any clear regulation in the foreseeable future, and plans mostly to leave these regulatory issues in the hands of individual states. That raises the possibility of poorly regulated states becoming ‘dead zones’ that autonomous cars on cross-country road trips must avoid. This is where the good news starts. The hopeful mother of these machines is Google, which also happens to be one of the largest lobbying juggernauts in the United States (it ranks eighth, beating out Boeing and Lockheed Martin), and which is well prepared to guide regulation into a shape friendly to the future of autonomous vehicles.
The Road Ahead
If there’s a simple message to take away from the situation right now, it’s this: the challenges left to solve before autonomous vehicles can go mainstream are difficult and substantial. The technology and legal infrastructure are not currently in place to allow these vehicles to truly fulfill their potential. However, these problems are also well defined, solvable, and being investigated by some of the smartest people on the planet.
There is a very good chance that the technology, at least, will be ready to be deployed in test markets like California and Nevada by Google’s tentative 2018 date. There’s an even better chance that, ten years from now, the technology will have radically transformed the way that nearly everyone on Earth lives their lives.
These changes will touch everything from car culture (the end of automobile ownership as an adult rite of passage) to the way people work and socialize to the way we design our cities. If these challenges can be met, it’ll be the most significant change in transportation since the invention of the automobile.
Feature Image: “The Love Bug“, by JD Hancock
Images: “Google self driving car at the Computer History Museum“, by Don DeBold, “Google Self-Driving Car“, by Roman Boed, “Toyota self-driving car“, David Berkowitz, “Velodyne High-Def LIDAR“, Steve Jurvetson