Automated Driving Developers Need to Ask, "Should We?"

Sam Abuelsamid
Nov 27, 2019

In March 2018, Elaine Herzberg became the first person killed by an automated driving (AD) test vehicle when she was struck by one of Uber’s development vehicles as she crossed a street in Tempe, Arizona. In November 2019, the National Transportation Safety Board (NTSB) issued a scathing report on the crash and on Uber’s development program. Every AD company should read that report and take its lessons to heart. Expediency is simply not acceptable for safety-critical products.

As an engineer, I was taught the importance of anticipating all the ways a product might be used or abused. It’s impossible to anticipate everything, but many scenarios are obvious and should be accounted for. Chief among them: people will not behave the way you hope they will with your product.

Uber’s AD System Was Not Road-Ready

Many factors, both technical and human, combined to cause Herzberg’s death. On the technical side, Uber’s perception system was not mature enough for public road testing and did not function well enough to be on public streets. In the seconds leading up to the impact, the system alternated between classifying Herzberg as a pedestrian, a vehicle, and a bicycle, making it impossible to accurately predict her path. That in turn produced erratic path planning and, ultimately, the crash.
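
The NTSB found that each change in classification also reset the system’s tracking of the object, restarting its trajectory prediction from scratch. The following minimal Python sketch, written purely for illustration and not taken from Uber’s software, assumes a hypothetical design in which an object’s motion history is keyed to its current class label; it shows why such a design can never converge on a prediction for an object it keeps re-labeling:

from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    # Toy perception track: the motion history is tied to the current label.
    label: str
    history: list = field(default_factory=list)  # recent (x, y) observations

    def observe(self, label, position):
        # Hypothetical flaw for illustration: a re-classification throws
        # away the accumulated motion history.
        if label != self.label:
            self.label = label
            self.history.clear()
        self.history.append(position)

    def predicted_velocity(self):
        # Crude constant-velocity estimate; it needs at least two
        # observations under the same label to produce anything at all.
        if len(self.history) < 2:
            return None
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        return (x1 - x0, y1 - y0)

# A track that flip-flops between classes never accumulates two
# consecutive observations, so the planner never gets a trajectory.
track = TrackedObject(label="vehicle")
for label, pos in [("vehicle", (0.0, 0.0)), ("bicycle", (1.0, 0.0)),
                   ("pedestrian", (2.0, 0.0)), ("bicycle", (3.0, 0.0))]:
    track.observe(label, pos)
    print(label, track.predicted_velocity())  # prints None every cycle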

According to the NTSB, Uber also lacked proper safety protocols on both the engineering and operations sides of its program. There was no formal safety plan for the organization, and Uber was the only major AD company testing on public roads with a single safety operator in the vehicle. Other companies use two operators: one to monitor the vehicle and take over as needed, and a second focused on data collection.

This lack of a safety focus led to one of the most critical failures of both the technology and the humans involved. Uber’s engineers decided that the system would only look for pedestrians at designated crosswalks; it was not designed to respond to people jaywalking. Herzberg was crossing a wide street midway between crosswalks at a time of evening when there was almost no traffic.
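
Reconstructed purely for illustration (this is not Uber’s code; the function names and signatures are invented), that assumption amounts to a gate like the first function below, shown alongside a more defensive alternative:

def expects_crossing(label, in_crosswalk):
    # Hypothetical reconstruction of the flawed assumption: pedestrians
    # are only expected to cross at marked crosswalks.
    return label == "pedestrian" and in_crosswalk

def expects_crossing_defensively(label, in_crosswalk):
    # A defensive design anticipates a crossing by any person or cyclist
    # near the travel lane, with or without crosswalk markings.
    return label in ("pedestrian", "bicycle")

# Herzberg's situation: a pedestrian mid-block, outside any crosswalk.
print(expects_crossing("pedestrian", in_crosswalk=False))              # False
print(expects_crossing_defensively("pedestrian", in_crosswalk=False))  # True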

In 2018, Stanford University artificial intelligence researcher and AD investor Andrew Ng told Bloomberg, “What we tell people is, ‘please be lawful and please be considerate.’” Ng and others have argued that humans should be retrained not to jaywalk so that AD can be deployed faster. This naive attitude fails to anticipate what people will actually do.

Safety Should Be the Main Priority

Whether considering the behavior of safety operators in test vehicles or of other road users, everyone involved in AD development needs to step back and account for the full range of human behavior. If they cannot build a system that functions safely in that context, it’s time for them to find a new field.

In a 2016 interview with Business Insider, former Uber CEO Travis Kalanick said, "If we are not tied for first, then the person who is in first, or the entity that's in first, rolls out a ridesharing network that is far cheaper or far higher quality than Uber's, then Uber is no longer a thing." As the NTSB report describes, that attitude fostered a culture of corner-cutting in the AD program in which safety was sacrificed for speed to market, and it played a major role in Herzberg’s death. Uber has made many changes since the fatal crash, but others in the AD community still don’t seem to have asked the question, “Just because we can be first, should we?”