Researchers Demonstrate Why Most Automated Driving Systems Use HD Maps
At Tesla Autonomy Day in April 2019, CEO Elon Musk set out to show how the company would achieve fully automated driving (AD) capability with a fairly simple, low-cost suite of sensors and software. One of Musk's points was that Tesla's full self-driving system would not require the high definition (HD) maps that other AD systems rely on. A recent demonstration by an Israeli cybersecurity company shows why Musk is probably wrong.
Global navigation satellite systems (GNSS), which include the US GPS, Europe's Galileo, and Russia's GLONASS, all function similarly. A receiver picks up time-stamped radio signals from a constellation of satellites orbiting the Earth. The satellite clocks are synchronized, and the differences in arrival times, caused by the different distances each signal must travel, are used to calculate the receiver's position.
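To make the mechanics concrete, here is a minimal, simplified sketch of that calculation. It is not any vendor's receiver code: the satellite coordinates, receiver position, and clock bias are made-up numbers, and the problem is reduced to 2D, but it shows how a position and a receiver clock offset can be jointly estimated from time-stamped pseudoranges.

```python
# Simplified 2D illustration of GNSS-style positioning (hypothetical numbers,
# not a real receiver implementation): position and receiver clock bias are
# solved jointly from pseudoranges with Gauss-Newton least squares.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Assumed satellite positions (m) and the receiver's true state for the demo.
sats = np.array([[0.0, 20_200_000.0],
                 [15_000_000.0, 18_000_000.0],
                 [-12_000_000.0, 19_000_000.0],
                 [5_000_000.0, 21_000_000.0]])
true_pos = np.array([1_000.0, 2_000.0])
true_clock_bias_s = 1e-6  # receiver clock error: 1 microsecond

# Pseudorange = geometric range + c * (receiver clock bias)
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + C * true_clock_bias_s

def solve_position(sats, pseudoranges, iters=10):
    """Estimate [x, y, c*clock_bias] by iterating Gauss-Newton updates."""
    x = np.zeros(3)  # start from the origin with zero clock bias
    for _ in range(iters):
        diffs = x[:2] - sats
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = pseudoranges - (ranges + x[2])
        # Jacobian of the predicted pseudorange w.r.t. [x, y, c*bias]
        J = np.hstack([diffs / ranges[:, None], np.ones((len(sats), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:2], x[2] / C

pos, bias = solve_position(sats, pseudoranges)
print(pos, bias)  # ~[1000, 2000] m, ~1e-6 s
```

The key point for what follows is that the solver trusts whatever time stamps it receives; counterfeit signals carrying shifted timing simply move the computed position.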
This system works well in open areas, but it often gives erroneous results in dense urban areas, where the signal may bounce off multiple buildings before getting to a receiver. The signals are relatively weak as well, allowing a local ground-based transmitter to overwhelm the satellite signals.
Fooling Tesla
Regulus Cyber used this weakness to demonstrate how a Tesla with Navigate on Autopilot could be fooled into thinking it was somewhere other than its actual position. Local signals mimicking the satellite signals, but carrying altered time stamps, made the in-vehicle GPS believe the car was farther down the road than it really was; because the signals travel at the speed of light, a timing shift of even one microsecond corresponds to roughly 300 m of position error. Since Tesla uses only basic street-level maps, the system had no way to verify its true position, and it turned off the road into a rest area instead of the exit ramp it was supposed to take.
HD Maps Complement Sensors
Other AD developers use HD maps that record the locations of static landmarks such as buildings and overpasses. Cameras, lidar, and radar can detect these landmarks and triangulate the vehicle's position to within 1-2 cm. The same can be done with maps of subsurface features and ground-penetrating radar. This is valuable in bad weather, when roads are covered in snow and lane markings can't be seen, but it also serves as a verification check on GNSS localization. When the HD map and the GNSS position don't match, the vehicle can detect the problem and call back to base for assistance while bringing itself to a minimum risk condition.
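That cross-check can be thought of as a simple consistency test between two independent position estimates. The sketch below is only an illustration, with hypothetical helper names and an arbitrary 5 m threshold; a production system would fuse the estimates over time rather than compare single fixes.

```python
# Simplified illustration (hypothetical names and thresholds) of the cross-check
# described above: if the GNSS fix and the HD-map/landmark fix diverge, treat
# localization as untrustworthy and fall back to a minimum risk condition.
from dataclasses import dataclass
import math

@dataclass
class Fix:
    x_m: float  # east, metres in a local frame
    y_m: float  # north, metres in a local frame

def distance_m(a: Fix, b: Fix) -> float:
    return math.hypot(a.x_m - b.x_m, a.y_m - b.y_m)

# Assumed threshold for illustration only; a real system would tune it to
# sensor noise and filter over many samples.
MAX_DISAGREEMENT_M = 5.0

def localization_trusted(gnss_fix: Fix, landmark_fix: Fix) -> bool:
    """Return False when the two independent position estimates disagree."""
    return distance_m(gnss_fix, landmark_fix) <= MAX_DISAGREEMENT_M

# Example: a spoofed GNSS fix placed 150 m ahead of the map-verified position.
gnss = Fix(x_m=0.0, y_m=150.0)
landmarks = Fix(x_m=0.0, y_m=0.0)

if not localization_trusted(gnss, landmarks):
    # In the article's terms: flag the fault, alert the operations center,
    # and bring the vehicle to a minimum risk condition.
    print("Localization mismatch detected: initiating minimum-risk maneuver")
```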
HD maps are an important complementary sensor modality, just as cameras can be complemented by radar, lidar, infrared, and potentially other sensors. Because each sensor measures the world in a different way, the combination provides an extra layer of robustness and safety.
Musk is right that a vehicle can nominally navigate its environment on cameras alone. A human driver can do the same (although we use more than just our eyes), yet we still augment our senses with driver assist and alert systems. A camera-only system will always have limitations that make it brittle, just as a system relying solely on radar or lidar would. Similarly, GNSS and street-level maps are inadequate for localization at higher levels of automation.
Higher Risk of Failure Without HD Maps
If Musk insists on sticking with a low-cost approach that lacks sufficient safeguards, Tesla will be deploying a system with a higher risk of failure than its competitors. Musk describes features like HD maps and lidar as a crutch, but the reality is that machine vision and AI need that support and likely will for years to come. Prematurely deploying automated vehicles that aren't robust may turn consumers away and hurt adoption of a technology with real benefits.
Guidehouse Insights’ Using HD Maps to Navigate the Mobility Landscape report takes a deeper dive into how HD maps are built and used for mobility applications.