- Automated Driving
- Automated Driving Systems
- AI
- Open Data
- Automotive Industry
We Need Better Access to Automated Driving Data
As the automotive world creeps ever closer to automated driving, it’s becoming clear that we need better access to data about what these systems are actually doing. If we are going to rely on either high-level automation or a blend of human and machine operation, data transparency is crucial. Unfortunately, what we have right now is near-total opacity, and regulators need to change that.
With human drivers, conventional forensic examination can generally determine whether a crash was triggered by mechanical failure, environmental conditions, human error, or some combination of those. The same is not at all true of software-driven automation, especially when it relies heavily on so-called AI. Meanwhile, an increasing number of vehicles incorporate assistive automation that depends on human supervision and a driver’s readiness to retake full control.
Root Cause Analysis
When something goes wrong with a vehicle, it’s important to understand what happened, not only to apportion blame but also to take corrective action so that similar accidents can be prevented. Modern vehicles generally include event data recorders (EDRs) that capture short snippets of data in the event of a crash. However, this data is limited to items such as speed, acceleration, and trajectory. EDRs don’t capture whether an assist system such as GM Super Cruise or Tesla Autopilot was active, or when it may have disengaged.
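To make that gap concrete, here is a minimal sketch of what an EDR record extended with assist-system state could look like. The field names, states, and sample values are purely illustrative assumptions, not any manufacturer’s actual format or any standard:

```python
from dataclasses import dataclass
from enum import Enum

class AssistState(Enum):
    """Illustrative assist-system states; real systems define their own."""
    OFF = "off"
    ACTIVE = "active"
    DISENGAGED = "disengaged"  # system handed control back to the driver

@dataclass
class EdrSample:
    """One pre-crash telemetry sample.

    Today's EDRs capture roughly the first three fields; the assist
    fields are the kind of data investigators currently lack.
    """
    t_before_impact_s: float   # seconds before impact
    speed_mps: float           # vehicle speed, m/s
    accel_mps2: float          # longitudinal acceleration, m/s^2
    assist_state: AssistState  # was an assist system engaged?
    driver_hands_on: bool      # hypothetical driver-monitoring result

# Hypothetical record: the assist system disengaged 2 s before impact
samples = [
    EdrSample(5.0, 31.0, 0.1, AssistState.ACTIVE, False),
    EdrSample(2.0, 31.2, 0.0, AssistState.DISENGAGED, False),
    EdrSample(0.5, 30.8, -4.5, AssistState.OFF, True),
]

# With such a log, an investigator could answer the key question:
# when was the assist system last active before impact?
last_active_s = max(s.t_before_impact_s for s in samples
                    if s.assist_state is AssistState.ACTIVE)
```

A log with even these few extra fields would let investigators reconstruct the handoff between machine and human in the seconds before a crash, which speed and acceleration alone cannot reveal.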
Such data is critical to understanding the efficacy of assist systems. It’s generally accepted that sensors and software should be able to help avoid or mitigate crashes, but to date, automakers have been unwilling to share that data. If regulators such as the National Highway Traffic Safety Administration (NHTSA) in the US are going to evaluate these Level 2 (L2) systems and upcoming Level 3 (L3) and Level 4 automation, they need visibility into much more of what happens in the moments leading up to incidents.
For example, on April 17, 2021, a Tesla crashed near Houston, Texas, killing two people. One was in the front passenger seat, the other was in the rear seat, and apparently no one was at the wheel. Tesla has telemetry data and logs stored in the vehicle about which functions were active. CEO Elon Musk posted on Twitter that Autopilot was not active at the time of the crash. However, Tesla has not revealed whether Autopilot was active at any point prior to impact or when it disengaged. This is the sort of data investigators need to understand the safety impacts of these systems.
The Entire Industry Is Automating
Guidehouse Insights’ Market Data: Automated Driving Vehicles report projects sales of more than 100 million vehicles with L2 or L3 capability by 2030. Though Tesla has garnered the most attention for crashes involving Autopilot, GM will be adding hands-free Super Cruise to more than 20 models by the end of 2022, and Ford is launching its own similar system, BlueCruise, in 2021. Most other automakers have comparable product plans.
In June 2020, the United Nations Economic Commission for Europe (UNECE) passed the first harmonized regulation covering L3 automated lane keeping systems. The 60 UNECE member countries across Europe and Asia are now deploying these rules, which include requirements for data logging with an open, standardized interface. The US is not a member, but NHTSA should move to adopt similar requirements that mandate recording of this data and access through an interface not controlled by manufacturers, which have a conflict of interest. If the public is to trust automated driving systems, there must be data transparency so that safety claims can be independently verified.