- Automated Vehicles
- Automated Driving Systems
- Public Safety
- Policy and Regulations
Self-Certification of Transportation Safety Is Bound to Fail
When transporting people and goods by vehicle, safety must be the paramount concern, whether moving on roads, on rails, or in the air. When humans operate these vehicles, we set expected standards of performance, and people are tested before they are licensed to operate. We don’t allow people to simply attest, “I know how to drive,” and then let them go. We certainly should not be doing this with companies developing transportation systems.
In some parts of the world, such as Europe, automakers are required to go through a type approval process where new vehicles or systems are evaluated by independent third parties to verify compliance with regulations before they go on sale. Unfortunately, many in the US have such an aversion to regulation that in most cases we allow businesses to self-certify that they are in compliance. In fact, many in positions of power don’t even want that. Elon Musk recently posted on his social media platform a call for “comprehensive deregulation” after revealing that Tesla is currently the subject of multiple Justice Department probes.
In recent years, we’ve seen the downside of self-certification for regulatory compliance. In 2018 and 2019, two Boeing 737 Max aircraft crashed, killing 346 people. The crashes were traced to modified flight control systems that Boeing had been allowed to self-certify as safe. In retrospect, they clearly were not.
But the 737 crashes occurred in an environment where at least regulations existed. There are plenty of safety regulations governing ground vehicles, but none of them really cover automated driving. Creators of automated driving systems (ADS) have been pressing ahead for more than a decade to develop technology that doesn’t need a human driver. There are plenty of potential benefits to ADS, including freedom of mobility for those who can’t drive, but it’s essential that we validate the safety of such systems.
While we have standards for evaluating humans who want to drive (however lax those may be in the US), there are generally no such regulations for software-based drivers. The US regulatory regime is reactive, with rules set only after something goes wrong. But we already know what can go wrong with drivers and vehicles, so regulators need to set performance standards (without prescribing specific technical solutions) and put in place a system of type approval before automated vehicles are released on public roads.
ADS developers also need to be required to share comprehensive data on all anomalous events, with no redactions for what the companies decide is confidential data. This is essential to evaluating safety and enabling researchers to determine what more needs to be done.
Cruise was recently granted a permit to operate 24-hour paid, driverless robotaxi services in San Francisco, California, despite many unexplained incidents of its cars blocking traffic and impeding emergency responders. Waymo has had similar issues, though apparently less frequently. Following a recent incident in which a pedestrian was struck by a hit-and-run human driver and then run over by a Cruise vehicle, California regulators determined that Cruise had withheld crucial data about that incident as well as others, and they suspended the company’s permit for driverless operations. Subsequently, Cruise suspended its driverless operations in all 10 cities where it was testing vehicles.
The desire for a profitable return on one’s work is at the core of a capitalist economy. But speed to market is generally in direct conflict with doing the job right. Cruise is not the first company to withhold important data, and it almost certainly won’t be the last. Those seeking profit in an arena where public safety is at stake should not be trusted to certify that safety without independent validation and data transparency, and deregulation should be the last thing policymakers consider.