Automated Driving Isn’t Magic and Shouldn’t Be Treated as Such
More than 60 years ago, Arthur C. Clarke wrote “Any sufficiently advanced technology is indistinguishable from magic.” This was several years before he wrote about a rogue AI system in 2001: A Space Odyssey. AI has made some amazing strides in the past decade, and in the last year alone it has been pushed into all sorts of new applications. But as an October 2, 2023, crash involving a Cruise robotaxi in San Francisco proved, unsupervised AI is still a long way from being reliable enough to control physical systems.
The types of neural networks used in automated driving system (ADS) perception are conceptually similar (though not identical) to the large language models (LLMs) many of us have become familiar with since the late-2022 arrival of ChatGPT. At a high level, all of these systems are advanced prediction engines. They are trained by feeding them petabytes of data; the training process sets the model's parameters based on the sequential patterns in that data. Once trained, the model takes in new data, matches it against those learned patterns, and predicts what should come next in the sequence.
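To make the "prediction engine" idea concrete, here is a toy next-token predictor in Python. It is a deliberate oversimplification, a bigram counter rather than a neural network, trained on invented example text, but the core mechanism is the same: learn statistics from training sequences, then predict the most likely continuation.

```python
from collections import Counter, defaultdict

def train(tokens):
    """Count which token follows which -- a toy stand-in for training."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, token):
    """Predict the most frequent continuation seen in training."""
    counts = model.get(token)
    return counts.most_common(1)[0][0] if counts else None

tokens = "the car stopped the car turned the car stopped".split()
model = train(tokens)
print(predict_next(model, "car"))  # -> 'stopped', the most common pattern
```

The predictor never knows what a car is; it only knows which patterns followed which in its training data. Scale that up by billions of parameters and you have, in spirit, the systems described above.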
The problem is that these statistical models have no actual understanding of the information they process beyond the relationships between individual bits of data. This is why LLM chatbots are prone to producing responses that are syntactically and grammatically correct, and that can even mirror different styles, yet have no real meaning or are simply wrong. This failure is often referred to as an AI hallucination.
Similarly, in ADS perception, it can be easy to fool systems into mischaracterizing targets in the environment around the vehicle. Worse still, all sensors currently used on vehicles, including lidar, radar, and cameras, are limited to line of sight. Anything that is blocked by another object can’t be detected.
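The geometry behind that limitation is simple enough to sketch. The illustrative Python below (hypothetical names, not drawn from any real perception stack) treats objects as disks in the sensor's frame and flags a target as undetectable when a closer object covers its bearing; it ignores angle wraparound and sensor noise for brevity.

```python
import math

def is_occluded(target, obstacles):
    """Return True if a closer disk-shaped object covers the target's bearing.

    target:    (x, y) position in the sensor frame, in meters
    obstacles: list of (x, y, half_width) disks, in meters
    """
    target_bearing = math.atan2(target[1], target[0])
    target_range = math.hypot(target[0], target[1])
    for ox, oy, half_width in obstacles:
        obstacle_range = math.hypot(ox, oy)
        if obstacle_range >= target_range:
            continue  # the object is behind the target, so it cannot block it
        # Angular half-width of the disk as seen from the sensor
        half_angle = math.atan2(half_width, obstacle_range)
        if abs(math.atan2(oy, ox) - target_bearing) < half_angle:
            return True  # line of sight is blocked; the target is invisible
    return False

# A pedestrian 20 m ahead is hidden behind a car 10 m ahead (half-width 1 m).
print(is_occluded((20.0, 0.0), [(10.0, 0.5, 1.0)]))  # -> True
```

No amount of sensor fusion changes this geometry: if every sensor's line of sight is blocked, the object simply isn't in the data.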
Human senses, of course, have the same limitation. However, humans typically learn a concept called object permanence as infants. When a parent plays peekaboo with an infant, the goal is to teach them that even when the parent’s face is covered, they are still there.
Based on details revealed in the investigation of the October 2023 crash—when a pedestrian was struck by a human hit-and-run driver and then run over and dragged by a Cruise robotaxi—the Cruise ADS seems to completely lack object permanence. Once the pedestrian crossed out of sight in the path of the human-driven car, the robotaxi seemed to forget she ever existed. Moments later, when the victim was thrown back into the path of the robotaxi, it did not realize she was the same target it had just been tracking and could not properly classify her in time. As a result, she was run over; with only her legs visible beneath the vehicle, the ADS did not recognize her as a person.
Given that AI models cannot understand what they perceive, it is essential that developers build in deterministic rules, or guardrails, to mitigate the consequences when those systems err, as they inevitably will. Among those rules needs to be some way to replicate object permanence. Every ADS has some sort of prediction engine to estimate what each target in the environment will do over the next 3 to 5 seconds. Part of that needs to be continuing to track targets even when they drop out of view.
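One way to express that rule in software is "track coasting": rather than deleting a track the instant its object is occluded, the tracker propagates it forward for a bounded time, and the planner keeps treating it as a live obstacle. The sketch below is a minimal illustration with hypothetical names and an assumed constant-velocity motion model; production trackers are far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    x: float              # position along the road (m)
    v: float              # estimated speed (m/s)
    frames_unseen: int = 0

# Assumed budget: keep an unseen track alive for ~5 s at a 10-Hz update rate.
OCCLUDED_TRACK_TTL = 50

def coast_unmatched_tracks(tracks, dt):
    """Propagate occluded tracks instead of deleting them (object permanence).

    Any track that went unmatched this cycle is dead-reckoned forward with a
    constant-velocity model and kept as a live obstacle until its time-to-live
    expires. Real trackers use richer motion models and growing uncertainty.
    """
    kept = []
    for track in tracks:
        track.frames_unseen += 1
        if track.frames_unseen <= OCCLUDED_TRACK_TTL:
            track.x += track.v * dt  # predict where the hidden object should be
            kept.append(track)       # the planner still treats it as present
    return kept
```

The deterministic rule does what the learned model cannot: it remembers, for a few critical seconds, that a person who disappeared behind another car is still there.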
In 2017, Mobileye published a paper on a model called Responsibility-Sensitive Safety (RSS), which aims to provide exactly such guardrails, constraining what can happen as a result of an AI's lack of understanding. RSS concepts have since been incorporated into the IEEE 2846 standard, and something like this should be included in every ADS.

I had my first ride in an automated vehicle in 2008, and at the time it seemed almost magical. But as an engineer, I knew it wasn't, and I was well aware that software systems can make mistakes. We need to both expect those mistakes and do whatever we can to mitigate their consequences.
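For a sense of what an RSS-style guardrail looks like in practice, here is the model's minimum-safe-following-distance rule from the 2017 paper, sketched in Python. The formula is RSS's; the parameter values shown are illustrative assumptions, not figures from the paper or from any deployed system.

```python
def rss_min_following_distance(
    v_rear,                 # rear (following) vehicle speed, m/s
    v_front,                # front (lead) vehicle speed, m/s
    response_time=0.5,      # rho: worst-case reaction delay, s (assumed value)
    a_max_accel=2.0,        # max acceleration during the delay, m/s^2 (assumed)
    a_min_brake=4.0,        # braking the rear car guarantees, m/s^2 (assumed)
    a_max_brake=8.0,        # hardest braking by the front car, m/s^2 (assumed)
):
    """Minimum safe longitudinal gap, per the 2017 RSS paper.

    Worst case: the front car brakes as hard as possible while the rear car
    first accelerates for the full response time, then brakes at its
    guaranteed rate. The gap must absorb the difference in stopping distances.
    """
    v_after_delay = v_rear + response_time * a_max_accel
    d_min = (
        v_rear * response_time
        + 0.5 * a_max_accel * response_time**2
        + v_after_delay**2 / (2 * a_min_brake)
        - v_front**2 / (2 * a_max_brake)
    )
    return max(d_min, 0.0)

# Example: both cars at 20 m/s (~72 km/h) with the assumed parameters above.
print(round(rss_min_following_distance(20.0, 20.0), 1))  # -> 40.4 meters
```

The deterministic character is the point: whatever the learned perception stack believes, the planner can check the vehicle's behavior against a hard, physics-based bound before acting.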