The popular media are filled with stories of the imminent rise of the self-driving car, making life infinitely easier and putting everyone from taxi drivers to traffic cops to insurance lawyers out of work. For better or worse, the facts don’t match the hype. Yet.
The latest big news in autonomous vehicles (AVs) is that a car drove itself across the United States. Well, almost.
The automotive component manufacturer Delphi had a modified Audi Q5 crossover driven from San Francisco to New York in 9 days, and they report that “Nearly 3,400 miles were covered with 99 percent of the drive in fully automated mode.”
This is an amazing feat of engineering, and far exceeds what most would have thought possible just a few years ago. But that doesn’t mean we will be napping in traffic anytime soon.
If the car drove itself 99 percent of the time, that still leaves roughly 34 miles of road that the car could not handle by itself. Delphi didn't give details on what that last 1 percent consisted of. We don't know whether it was rain, construction, potholes or ducks crossing the road.
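The scale of that remaining 1 percent is easy to check from Delphi's own reported figures; a quick back-of-the-envelope calculation (using only the numbers in the press release, not any official breakdown):

```python
# Back-of-the-envelope check using Delphi's reported figures
total_miles = 3400          # "nearly 3,400 miles" for the cross-country trip
autonomous_share = 0.99     # "99 percent of the drive in fully automated mode"

manual_miles = total_miles * (1 - autonomous_share)
print(round(manual_miles))  # roughly 34 miles not driven autonomously
```

Thirty-odd miles spread across nine days doesn't sound like much, but those are, by definition, the miles the system found hardest.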
The most famous autonomous vehicle is the Google car, versions of which have collectively travelled hundreds of thousands of miles without causing an accident. What the hyped articles don't mention is that the vast majority of these miles were on test courses where the car had a fully defined map of the route. In the real world, maps are not so accurate: a few years ago, while I was on a newly built stretch of the Trans-Canada Highway, my GPS showed that I was driving through the forest. Detours due to construction, traffic and weather can change in an instant, so preloaded maps are not sufficient; the car has to be able to navigate unknown terrain, using a complex system of cameras and sensors to read the surface of the road, the flow of traffic, and any relevant obstacles, signals and signage.
It is no coincidence that the demonstrations so far have taken place in dry, sunny climates with clearly marked roads. Some stories report that AVs do not yet have the capacity to deal with heavy rain, let alone snow or the commonly misnamed “black ice” (the ice is clear; the road is black). It has been pointed out that AVs do not have to drive perfectly, just better than humans, and in most conditions they are already there. But before the public will trust AVs, we will need to be confident that the engineers and software developers have tackled that last one percent of conditions, and it is impossible to tell how long that will take.
Apart from road conditions, AVs will have to be able to judge people’s behaviour and intentions. At some point you’ve probably stopped at an intersection, waiting for an elderly person to cross the street, only to have them wave you on so they don’t hold up traffic. Will an AV be able to understand their intent, or will it have to sit and wait until the pedestrian crosses? Similar situations are even harder to judge. A vehicle in front of you is blocking the lane while waiting for a parking spot. The driver signals to you to go around. Apart from the difficulty of an AV camera seeing through a driver’s window well enough to recognize gestures, will it be able to “look around” the blocking car to judge when it is safe to go into the oncoming lane?
Will AVs be programmed to intentionally break traffic laws? In the above example, driving into an oncoming lane may be technically illegal. A more common example is a car attempting to enter a busy road from a side street. No one will stop and let them in if they stay behind the stop sign and wait. In many cases the only way to get onto the main road is to slowly ease your car out until some kind person sees you and leaves a gap. Again, this is technically illegal in most cases.
In addition to navigating, there are situations where an AV would have to make life-or-death decisions. Many conversations about AVs contain questions like “What if a child runs in front of the car and it has to choose between driving onto the sidewalk where there is another child or into oncoming traffic?” Manufacturers are in serious discussions about how to solve these problems, but appear to be far from a solution. Perhaps this is another reason that most testing has been done far from pedestrians.
Even if the cars could safely handle all driving conditions, the laws are not yet ready. I haven’t found an explanation for how it was legal for Delphi to have the driver give up control of the car. It seems to me that that would violate safe driving laws in most jurisdictions. California, Nevada and Florida have specific laws that allow AVs as long as there is a driver who can take control when necessary, but those are the only three states so far. I am not aware of any such laws in Canada.
Totally autonomous vehicles will not suddenly appear like a new model. Instead of waiting until vehicles are capable of complete autonomy, manufacturers are taking a piecemeal approach, adding features that enable cars to perform a limited subset of self-driving tasks. Ford advertises vehicles that can park themselves, and many makes offer features such as adaptive cruise control (which adjusts the car’s speed to match the vehicle ahead), lane-change warnings, and automatic braking to avoid collisions. Cadillac has announced that in 2018, some of its cars will be able to drive themselves on the highway. Similarly, Tesla has announced a software upgrade to its existing Model S vehicles that will enable an “autopilot” mode for highway driving only. BMW has advertised a preview of an upcoming vehicle that can be “summoned” by your smartphone to drive itself from the parking lot to pick you up.
Over the next few years, we will see these types of features expanded until the vehicles can handle everyday driving. The question remains as to when or whether we will change laws to allow a vehicle to be completely driverless.
We have seen why AVs will take longer than the hype suggests. In my next post, I will address the overhyped consequences of their use (Hint: delivery drivers have nothing to fear).