Enhancing Radar with Machine Learning for Improved Safety
While lidar has been getting much of the attention as a new sensor modality to support advanced driver assistance systems (ADAS) and automated driving systems (ADS) in recent years, radar is on the verge of making a huge leap forward. The combination of high resolution imaging radar sensors and machine learning (ML) to process radar data is expected to start making its way into production vehicles in the near future.
Radar is not new to the automotive scene. Police have used radar to catch speeders since 1954, and Toyota began installing radar on some vehicles in the late 1980s. Today, it’s not uncommon to find as many as five radar sensors on a vehicle, enabling features such as adaptive cruise control, blind spot monitoring, and hands-free driving assist. Unlike cameras, radar can sense through atmospheric obscurants such as fog, rain, and snow, and it works as well in total darkness as in daylight.
However, those sensors are relatively low resolution, with only 12 virtual channels formed from four transmit and three receive antennas. This setup allows accurate measurement of the speed and distance of objects but has limited ability to resolve closely spaced objects or determine their height.
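To see where the 12 comes from: in a MIMO radar, each transmit/receive antenna pair behaves like one virtual receive element, so the number of virtual channels is the product of transmit and receive antennas. The short sketch below illustrates the idea with an assumed half-wavelength geometry (real sensor layouts vary); the same arithmetic gives 48 × 48 = 2,304 for the imaging radars discussed below.

```python
import numpy as np

# MIMO "virtual channel" sketch: each TX/RX antenna pair acts as one
# virtual element located at the sum of the TX and RX positions, so
# virtual channels = n_tx * n_rx. (Illustrative geometry with assumed
# half-wavelength spacing; real sensor layouts vary.)

lam = 3e8 / 77e9                 # wavelength at 77 GHz
rx = np.arange(3) * lam / 2      # 3 receive antennas, lambda/2 apart
tx = np.arange(4) * 3 * lam / 2  # 4 transmit antennas, 3*lambda/2 apart

virtual = (tx[:, None] + rx[None, :]).ravel()
print(len(virtual))                               # 12 virtual channels
print(np.round(virtual / (lam / 2)).astype(int))  # contiguous grid 0..11
```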
Adding More Channels
New-generation imaging radar sensors use a variety of techniques to create hundreds to thousands of virtual channels. The brute-force approach is simply to add more antennas. Mobileye and Arbe are developing sensors with arrays of 48 transmit and 48 receive antennas, which produce 48 × 48 = 2,304 virtual channels, while Continental is launching a 192-channel sensor this year.
An alternative approach leverages software, either in the control of the sensor or in the post-processing of the data. Oculii, which was recently acquired by vision processing chip maker Ambarella, uses dynamic waveform generation in place of the static waveforms these sensors typically use. This adaptive beamforming effectively causes a radar sensor to generate 10 to 20 times as many returns over a wider range. Using a standard 12-channel radar of the kind already found on hundreds of millions of vehicles, Oculii is able to generate a 192-channel image. All of this channel generation is done in new firmware, which Oculii is expected to license for its first production program in 2023.
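Oculii's waveforms are proprietary, but the underlying principle can be sketched: if a stationary scene is illuminated from phase centers that shift from frame to frame, the stacked frames behave like one much longer array. The toy simulation below is a hypothetical simplification, not Oculii's algorithm; it stacks a 12-channel frame 16 deep to reach 192 effective channels and shows how much the angular response narrows.

```python
import numpy as np

# Toy "virtual aperture" sketch (hypothetical simplification, not
# Oculii's actual waveforms). A stationary target at angle theta puts a
# phase ramp across a uniform linear array; shifting the transmit phase
# center between frames makes the stacked frames act like one longer
# array, sharpening the angle FFT.

lam = 3e8 / 77e9              # wavelength at 77 GHz
d = lam / 2                   # half-wavelength element spacing
n_rx = 12                     # channels per frame
n_frames = 16                 # shifted frames -> 12 * 16 = 192 channels
theta = np.deg2rad(8.0)       # target angle of arrival

# Signal at each effective element position across all frames
positions = np.arange(n_rx * n_frames) * d
snapshots = np.exp(2j * np.pi * positions * np.sin(theta) / lam)

def angle_spectrum(x, n_fft=2048):
    spec = np.abs(np.fft.fftshift(np.fft.fft(x, n_fft)))
    return spec / spec.max()

single_frame = angle_spectrum(snapshots[:n_rx])  # 12-channel resolution
stacked = angle_spectrum(snapshots)              # 192-channel resolution

# The main lobe shrinks roughly by n_frames: count bins above half power
print("half-power bins, 12 channels: ", int((single_frame > 0.5).sum()))
print("half-power bins, 192 channels:", int((stacked > 0.5).sum()))
```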
Smarter Data Processing
New insights can be gained by post-processing radar data with ML algorithms. During CES 2022, Aptiv provided live, virtual demonstrations with a Ford Mustang Mach-E driving around Las Vegas. Using standard ADAS sensors, with the signals processed by an additional AI accelerator chip, Aptiv was able to accurately detect and classify moving and stationary vehicles, pedestrians, cyclists, and the drivable area of the road.
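As a rough illustration of what such ML post-processing can look like (a generic sketch, not Aptiv's production network), a small convolutional model can take per-channel range-Doppler maps from the radar front end and predict object classes. All names and dimensions below are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of ML-based radar post-processing (generic
# illustration, not Aptiv's network). Input is one range-Doppler map
# per radar channel; output is a class score such as vehicle,
# pedestrian, cyclist, or background.

class RadarClassifier(nn.Module):
    def __init__(self, n_channels: int = 12, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to one feature vector
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, rd_map: torch.Tensor) -> torch.Tensor:
        # rd_map: (batch, n_channels, n_range_bins, n_doppler_bins)
        return self.head(self.features(rd_map).flatten(1))

model = RadarClassifier()
dummy = torch.randn(1, 12, 256, 64)  # synthetic 256x64 range-Doppler frame
print(model(dummy).shape)            # torch.Size([1, 4])
```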
Mobileye, which is best known for vision-based ADAS, is also developing an ADS as well as in-house imaging radar and lidar sensors. Its 2,304-channel radar is among the highest-resolution automotive radar sensors available. Mobileye’s ML post-processing transforms the radar returns into something that looks just like a lidar point cloud. In the Mobileye ADS, the radar and lidar data are fused and compared against the environment model generated from camera data. The radar-to-point-cloud transformation allows the combined signal to be run through the same perception algorithms as the lidar, making verification much easier.
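The core of such a transformation is geometric: each radar detection carries a range, azimuth, and elevation, which convert directly to the Cartesian (x, y, z) points a lidar pipeline expects. A minimal sketch, assuming a vehicle coordinate frame with x forward, y left, and z up (an illustrative conversion, not Mobileye's pipeline):

```python
import numpy as np

# Turning radar detections into a lidar-style point cloud
# (illustrative conversion, not Mobileye's pipeline). Each detection is
# (range_m, azimuth_rad, elevation_rad); output is Cartesian (x, y, z).

def radar_to_point_cloud(detections: np.ndarray) -> np.ndarray:
    r, az, el = detections[:, 0], detections[:, 1], detections[:, 2]
    x = r * np.cos(el) * np.cos(az)  # forward
    y = r * np.cos(el) * np.sin(az)  # left
    z = r * np.sin(el)               # up
    return np.column_stack([x, y, z])

dets = np.array([
    [25.0, np.deg2rad(-5.0), np.deg2rad(1.0)],
    [60.0, np.deg2rad(12.0), np.deg2rad(0.5)],
])
print(radar_to_point_cloud(dets))
```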
Radar is a relatively low-cost, truly solid-state sensing technology that works very well in all weather conditions. These and other innovations in the radar industry have the potential to significantly improve the performance of ADAS/ADS and the safety of future vehicles.