There has been considerable hype recently about automated cars and how they could improve road safety. However, newly released test data shows that significant safety issues still need to be addressed before these vehicles become available to the public.
Mark Harris reports for The Guardian:
Google’s self-driving cars might not yet have caused a single accident on public roads, but it’s not for want of trying.
Between September 2014 and November 2015, Google’s autonomous vehicles in California experienced 272 failures and would have crashed at least 13 times if their human test drivers had not intervened, according to a document filed by Google with the California Department of Motor Vehicles (DMV).
When California started handing out permits for the testing of self-driving cars on public roads, it had just a few conditions. One was that manufacturers record and report every “disengagement”: incidents when a human safety driver had to take control of a vehicle for safety reasons.
Google lobbied hard against the rule. Ron Medford, director of safety for the company’s self-driving car project, wrote at the time: “This data does not provide an effective barometer of vehicle safety. During testing most disengages occur for benign reasons, not to avoid an accident.”
The first annual reports were due on 1 January, and Google is the first company to share its data publicly. The figures show that during the 14-month period, 49 Google self-driving cars racked up over 424,000 autonomous miles and suffered 341 disengagements, when either the cars unexpectedly handed control back to their test drivers, or the drivers intervened of their own accord. The reports include both Google’s own prototype “Koala” cars and its fleet of modified Lexus RX450h.
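As a rough sanity check on these figures, the disengagement rate implied by the report can be worked out directly. This is a quick sketch using only the numbers quoted above; the per-1,000-mile rate is a derived figure, not one Google reports:

```python
# Figures from Google's DMV filing, as quoted above.
autonomous_miles = 424_000   # driven over the 14-month period by 49 cars
total_disengagements = 341   # 272 "immediate manual control" + 69 driver-initiated

# Derived rates (not stated in the report itself).
per_1000_miles = total_disengagements / (autonomous_miles / 1000)
miles_per_disengagement = autonomous_miles / total_disengagements

print(f"{per_1000_miles:.2f} disengagements per 1,000 autonomous miles")
print(f"roughly one disengagement every {miles_per_disengagement:,.0f} miles")
```

That works out to well under one disengagement per 1,000 autonomous miles, which is the headline statistic Google emphasises elsewhere in its filing.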
In 272 of those disengagements, the car detected a technology failure such as a communications breakdown, a strange sensor reading or a problem in a safety-critical system such as steering or braking.
Google calls these “immediate manual control” disengagements. As the name suggests, the test driver is given audio and visual signals to alert them that they should take over driving without delay. Google test drivers typically responded to these warnings in 0.8 seconds.
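For context on that 0.8-second figure, one can estimate how far a car travels while control is being handed over. The speeds below are illustrative assumptions, not figures from the report:

```python
# Distance covered during Google's reported 0.8 s average takeover time,
# at a few illustrative speeds (the speeds are assumptions, not from the report).
reaction_time_s = 0.8

for mph in (25, 45, 65):  # e.g. city street, arterial road, highway
    metres_per_second = mph * 1609.344 / 3600  # miles per hour -> metres per second
    distance_m = metres_per_second * reaction_time_s
    print(f"at {mph} mph the car covers {distance_m:.1f} m in {reaction_time_s} s")
```

At highway speed the car travels tens of metres before the human is fully in control, which is why the promptness of these handovers matters.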
In the remaining 69 disengagements, the human driver took control of the car on their own initiative, simply by grabbing the steering wheel or pressing the accelerator or brake pedal. The car automatically cedes control when this happens. Drivers do this fairly regularly if they suspect the car is doing (or is about to do) something hazardous or in response to other road users.
However, Google admits that its drivers actually took over from their vehicles “many thousands of times” during the period. The company is reporting only 69 incidents because Google thinks California’s regulations require it only to report disengagements where drivers were justified in taking over, and not those where the car would have coped on its own.
The company decides this by replaying each disengagement in its online simulator over and over again. Google says that its powerful software, which now drives over 3m virtual miles each day, can accurately predict the behaviour of other drivers, pedestrians and cyclists and can thus determine whether the test driver’s intervention was required for safety.
Bryant Walker Smith, assistant professor in the School of Law at the University of South Carolina, says the DMV could reasonably ask for more information. “Google could be clearer on how it draws the line between those driver-initiated disengagements that it reports and those that it does not,” he says. “The DMV is entitled to interpret its own rule, and it may have questions on this point.”
Consumer Watchdog, a California-based campaign group, said the report shows that self-driving cars still need a human driver behind the wheel. Privacy project director John M Simpson said: “It’s unfathomable that Google is pushing back against those sensible safety protecting regulations. How can Google propose a car with no steering wheel or brakes when its own tests show that in 15 months the robot technology failed and handed control to the driver 272 times and a driver decided to intervene 69 times?
“Release of the disengagement report was a positive step, but Google should also release any video it has of the disengagement incidents, as well as any technical data it collected.”
In 56 of the 69 driver disengagements reported to the DMV, Google calculated that its car would probably not have come into contact with another object. But, admits Google in its report, “we identified some aspect of the [car]’s behaviour that could be a potential cause of contacts in other environments or situations if not addressed. This includes proper perception of traffic lights, yielding properly to pedestrians and cyclists, and violations of traffic laws.”
Google classified the final 13 disengagements as “simulated contacts”: situations that would have resulted in a crash had the human driver not taken over. “In these cases, we believe a human driver could have taken a reasonable action to avoid the contact but the simulation indicated the [car] would not have taken that action,” the company says.
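Pulling the categories above together, the 341 reported disengagements break down as follows. This is a small tally using only numbers already quoted in the article:

```python
# Disengagement categories from Google's DMV report, as described above.
disengagements = {
    "technology failure (immediate manual control)": 272,
    "driver-initiated, car would likely have coped": 56,
    "driver-initiated, simulated contact (crash likely)": 13,
}

total = sum(disengagements.values())
print(f"total reported disengagements: {total}")
for category, count in disengagements.items():
    print(f"  {category}: {count} ({count / total:.1%})")
```

Only the 13 “simulated contact” cases, under 4% of the total, are ones Google itself concedes would probably have ended in a crash.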
The report could be seen as a blow to Google’s insistence that self-driving cars should be fully autonomous. Its latest prototypes are designed to operate without any driving controls for their human occupants to take over in an emergency (although those currently on public roads do have backup controls fitted).
“It demonstrates that it is valuable to have a safety driver in the vehicle while testing, which is something we’ve always believed,” said Chris Urmson, director of Google’s self-driving car program. “But if you look at [regular] drivers, they’re effectively untrained in America. Expecting them to vigilantly monitor a system that operates as well as this does is really a very challenging problem.”
Google is not the only company to have filed a disengagement report with the DMV. Volkswagen/Audi, Mercedes-Benz, Delphi, Tesla, Bosch and Nissan have all filed reports, which are currently under review by the department to confirm that they contain all the required information. The DMV told the Guardian that it does not currently have an expected date to complete its analysis of the data or draw conclusions from it.
While Google has been testing its self-driving cars since 2008, the company will not be releasing disengagement data from before 2014. “This is the period we’re required to share with the DMV. Any data we would have from before that is just outdated,” Urmson says.
Google notes that disengagements have been getting less common over the period of the report. However, Urmson cautions against expecting disengagements to drop regularly, year on year. “We’re continually adding capabilities to our vehicles, pushing them into more challenging situations,” he says. “Over the long view, we’d expect disengagements to be improving, but as we test in more challenging weather or driving situations, you could expect locally this to not look as good. And it really isn’t representative of where the technology will be when we’re ready to release it.”
Google’s parent company, Alphabet, is reported to be planning to spin out its self-driving car technology into its own business later this year.
So there is still a way to go before automated cars become an everyday reality, but will they ever be completely safe? The complications of driving involve more than traffic lights and pedestrians. What about driving conditions such as ice on the roads, or fog? Then there are emergency situations: perhaps you are close behind a collision and have to swerve to stay safe, or need to make way for a passing ambulance. An experienced driver can handle all of these situations safely, but can the same be said for a robot guided only by algorithms?