The Challenge of Making Machines See
At the Frankfurt Motor Show this September, BMW displayed a one-off version of its new X6 SUV coated in a special paint called Vantablack from UK-based Surrey NanoSystems. The paint absorbs up to 99.965% of the visible light that strikes it, making photos of the vehicle look as though its body has been masked out in Photoshop. While this is an extreme example, it highlights one of the major challenges that developers of automated driving systems face.
During the show, lidar vendor Ouster brought one of its sensors into the display area to see how well it could detect the car. To an optical camera, the Vantablack vehicle looks like a colorless void with no discernible shape or texture. Despite the car's near-total lack of reflectivity, the lidar sensor was able to detect the vehicle and capture its general shape from a distance of about 10 meters, the limit of the display area. According to an Ouster spokesman, the extremely low reflectivity of the vehicle would likely limit detection range to about 20 meters.
Ouster Demonstrates that Its Lidar Can Detect the Blackest Car Ever Made
(Source: Ouster)
The carbon nanotube-based Vantablack paint is designed for specialized applications, and its fragility and cost would preclude its use on ordinary road vehicles. However, reflectivity is a major concern for a variety of sensor types in real-world applications. Matte finishes have become increasingly common in the past decade, and it is a known issue that dark-colored vehicles have significantly shorter detection ranges than vehicles with shiny, lighter finishes.
BMW X6 Vantablack
(Source: BMW)
Lidar vendors often specify detection range based on the ability to see a target of a given size and reflectivity. However, ambient conditions such as rain, snow, or dust can have a major effect on this performance.
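To make the reflectivity dependence concrete, here is a minimal sketch assuming an idealized lidar link budget in which the return from a diffuse target falls off as reflectivity over range squared, so maximum range scales with the square root of reflectivity. The reference spec (100 meters on a 10%-reflectivity target) is a hypothetical illustration, not any vendor's figure.

```python
# Illustrative only: how lidar detection range might scale with target
# reflectivity under an idealized link-budget model. For a diffuse
# (Lambertian) target, received power falls off roughly as
# reflectivity / range**2, so maximum detection range scales with the
# square root of reflectivity. Real sensors also depend on laser power,
# optics, ambient light, and weather, which this ignores.

import math

def scaled_range(spec_range_m: float, spec_reflectivity: float,
                 target_reflectivity: float) -> float:
    """Estimate range for a target of different reflectivity, given a
    spec'd range at a reference reflectivity (idealized model)."""
    return spec_range_m * math.sqrt(target_reflectivity / spec_reflectivity)

# Hypothetical sensor spec'd at 100 m on a 10%-reflectivity target.
spec_range, spec_refl = 100.0, 0.10

for refl in (0.80, 0.10, 0.05, 0.01):
    print(f"{refl:>4.0%} reflectivity -> ~{scaled_range(spec_range, spec_refl, refl):5.1f} m")
```

Under these simplified assumptions, a shiny 80%-reflectivity target is visible well beyond the quoted range, while a 1%-reflectivity matte-black surface cuts it to roughly a third, which is broadly consistent with the short detection range Ouster estimated for the Vantablack car.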
Difficulty on the Road
But it’s not just a car's color that matters. The color of the road surface, the surrounding area, and the lane markings all affect a sensor’s ability to determine where a vehicle is and where it needs to go. A recent test drive of a Tesla Model 3 on Dequindre Road in Madison Heights, Michigan, demonstrated one of these challenges. The concrete-paved roadway had recently been repaired, with regularly spaced sections of pavement running across the road removed and replaced. To the eye, the effect on an overcast day was a series of alternating bars, about 10 to 12 feet wide, of lighter, sun-bleached concrete and newer, slightly darker gray concrete. Tens of thousands of human drivers traverse this busy road every day without issue. But the Tesla’s Autopilot system could not seem to make sense of the lane markings against the alternating pavement and refused to engage.
This was a relatively straightforward challenge on a day with no inclement weather, but one of the most advanced machine vision systems in the world didn’t know what to do.
Companies developing automated driving systems still have a lot of work to do before these vehicles can become automated robo-taxis. Fusing multiple sensor modalities so that vehicles can see under a wide range of conditions will be essential. That may also mean adding sensors that aren’t commonly used today, such as infrared cameras, high-resolution radar, and ground-penetrating radar. Some of these are less sensitive to the optical reflectivity of targets, while others assist with localization. Localization based on high-definition maps would also overcome some of the challenge of not being able to reliably detect the road surface.
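To illustrate the fusion idea in the loosest possible terms, the toy sketch below combines per-sensor detection confidences under a naive independence assumption. The sensor names and numbers are hypothetical, and this is not how any production driving stack actually fuses data; it only shows why a second or third modality helps when one sensor is blinded.

```python
# Toy sketch of the sensor-fusion idea described above: combine
# independent detection confidences from several modalities so the
# system can still "see" when one sensor is degraded (for example, a
# camera facing a very dark, low-reflectivity car). All names, numbers,
# and the independence assumption are illustrative only.

from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    detected_confidence: float  # probability this sensor sees the object

def fused_confidence(readings: list[SensorReading]) -> float:
    """Probability that at least one sensor detects the object, assuming
    the sensors' errors are independent (a strong simplification)."""
    miss_probability = 1.0
    for r in readings:
        miss_probability *= (1.0 - r.detected_confidence)
    return 1.0 - miss_probability

# Hypothetical scene: a matte-black car at dusk.
readings = [
    SensorReading("camera", 0.15),  # nearly invisible to the camera
    SensorReading("lidar", 0.60),   # reduced range, but still a return
    SensorReading("radar", 0.90),   # paint reflectivity barely matters
]

print(f"fused detection confidence: {fused_confidence(readings):.2f}")
```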
While asking who is winning the race to self-driving is fundamentally the wrong question, if the racing metaphor must be used, I like to think of the field as heading into turn one of the 24 Hours of Le Mans. To finish first, first you must finish. No one is close to finishing yet.