From Autonomy to ADASs with Robust Maps

Sam Abuelsamid
Jan 04, 2023

Guidehouse Insights

One of the running themes of my conference talks over the past 2 years has been the evolution from automated driving systems (ADSs) to advanced driver assistance systems (ADASs). This trend accelerated in 2022 and will likely continue in 2023 and beyond as automakers and suppliers seek to apply the lessons learned from ADS development, which is progressing far more slowly than hoped, to improving ADASs. A pair of recent ride evaluations demonstrated the importance of incorporating robust mapping into these more capable ADASs as they come to market.

A Tale of Two Systems

On a fundamental level, Mobileye’s SuperVision and Tesla’s Full Self-Driving (FSD) beta seem fairly similar. Both rely on a suite of cameras to allow a vehicle to navigate from point to point with minimal human interaction. But the reality of how well they accomplish this could not be more different.

Mobileye uses 11 cameras, most of them high-resolution 8-megapixel sensors, along with its Road Experience Management (REM) maps. SuperVision is a direct outgrowth of Mobileye's work on hands-off, eyes-off, brain-off level 4 (L4) ADSs. The company created SuperVision, a hands-off, eyes-on L2+ system, by using the same camera subsystem without the L4 radar and lidar. REM maps are crowdsourced from millions of vehicles from multiple automakers around the world that are equipped with Mobileye's EyeQ4-based ADAS. Using the cameras and other vehicle data, Mobileye builds maps that incorporate not only road information but also static features that can be used for precise localization, along with driver behavior data such as actual trajectories and stopping points at intersections. SuperVision is in production on the Chinese-market Zeekr 001, with more applications coming soon.
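To make the idea of a map layer that combines static localization features with observed driver behavior more concrete, here is a minimal sketch of what a single record in such a crowdsourced map might contain. The field names and structure are my own illustrative assumptions, not Mobileye's actual REM format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative sketch only: the field names and structure below are
# assumptions, not Mobileye's actual REM map format.

@dataclass
class StaticFeature:
    """A fixed landmark (sign, pole, lane-marking endpoint) usable for localization."""
    feature_type: str                     # e.g., "traffic_sign", "pole", "lane_marking"
    position_m: Tuple[float, float]       # (x, y) in the segment's local frame, meters

@dataclass
class DrivingPath:
    """Aggregated driver behavior: where human drivers actually travel and stop."""
    centerline_m: List[Tuple[float, float]]                      # path traced by many drivers
    typical_stop_point_m: Optional[Tuple[float, float]] = None   # observed stopping point, if any
    sample_count: int = 0                 # number of crowdsourced traversals behind this path

@dataclass
class RoadSegmentRecord:
    """One short road segment combining geometry, landmarks, and behavior data."""
    segment_id: str
    speed_limit_kph: float
    static_features: List[StaticFeature] = field(default_factory=list)
    driving_paths: List[DrivingPath] = field(default_factory=list)
```

The key point is that such a map carries both what the road looks like (for localization) and how people actually drive it (for planning), which is the kind of built-in knowledge that simpler street-level navigation maps lack.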

Tesla uses 8 lower-resolution cameras and more basic street-level navigation maps for its Autopilot and FSD systems. While Tesla owner documentation and government filings identify the FSD beta as a hands-on, eyes-on L2 ADAS, the name implies otherwise, and CEO Elon Musk frequently claims it will enable fully automated driving. Countless videos on YouTube and a recent drive in a friend's Model 3 have convinced me that the system is nowhere near ready for unsupervised use. In fact, my friend even put a student driver sticker on the back of the Model 3 to prompt other drivers to give the car extra clearance.

Built-In Knowledge Matters

FSD does fine at the most basic driving tasks of staying in a lane on a highway and even executing automatic overtaking. However, when it comes to dealing with more complex road architectures like the Michigan left, FSD is utterly hopeless. Even in other, more straightforward scenarios, like handling the split in Jefferson Avenue while approaching the Huntington Place convention center in Detroit, FSD is totally inconsistent. The lack of built-in knowledge and an overreliance on flawed inference algorithms make the system dangerous to use.

SuperVision, on the other hand, performs much more capably on a variety of main thoroughfares, residential side streets, and highways. This is due at least in part to its embedded knowledge of how humans traverse the same roads on a daily basis, as well as its superior ability to precisely locate itself on the road by triangulating from static features rather than relying on GPS. It’s not perfect, and on my ride it still misclassified some things the cameras saw, such as a construction barrier that was identified as a truck, but it never came close to a collision. In the absence of true general AI, building in this type of knowledge of the world is essential to making these new ADASs safer.
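To illustrate the idea of locating the vehicle from static map features rather than GPS, the sketch below estimates a 2D position by least-squares fitting against a few mapped landmarks. It uses range measurements for simplicity (a camera-based system would more likely work from bearing angles), and the landmark positions, noise values, and function names are hypothetical; this is not how SuperVision is actually implemented.

```python
import numpy as np

# Hypothetical, simplified sketch of landmark-based localization: estimate the
# vehicle's (x, y) position from distance measurements to static map features
# with known positions, using Gauss-Newton least squares.

def localize(landmarks, ranges, initial_guess, iterations=10):
    """Estimate position from ranges to known landmarks.

    landmarks: (N, 2) array of mapped feature positions, meters
    ranges:    (N,)  array of measured distances to those features, meters
    """
    pos = np.asarray(initial_guess, dtype=float)
    for _ in range(iterations):
        diffs = pos - landmarks                     # vectors from landmarks to vehicle
        predicted = np.linalg.norm(diffs, axis=1)   # ranges predicted at current guess
        residuals = predicted - ranges              # measurement error
        jacobian = diffs / predicted[:, None]       # d(range)/d(position)
        # Solve for the Gauss-Newton update step in a least-squares sense
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        pos -= step
    return pos

# Example: three mapped features (e.g., sign posts) and slightly noisy ranges
features = np.array([[0.0, 0.0], [50.0, 5.0], [20.0, 40.0]])
true_position = np.array([22.0, 11.0])
measured = np.linalg.norm(features - true_position, axis=1) + np.random.normal(0.0, 0.2, 3)

print(localize(features, measured, initial_guess=[10.0, 10.0]))
```

In principle, a handful of accurately mapped features in view can constrain position far more tightly than consumer-grade GPS, which is the advantage the article attributes to SuperVision's use of static features for localization.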