- The sensor landscape, including redundant and complementary sensor applications
- How cameras have historically been used to gather depth in advanced driver assistance systems (ADAS) and automated vehicles (AVs)
- How multi-view perception works
- The benefits of multi-view perception (range, detail, design, cost)
Accurate Ranging Perception for Assisted and Automated Driving
For software to reliably assist a human driver, or to take over the driving task entirely, it needs accurate and dependable information on the location of other road users and objects in the local environment. Current vision-based solutions, whether single-camera or clustered multi-focal-length camera designs, must rely on machine learning to estimate the range and trajectory of road users, an inherently error-prone approach. Active sensors like lidar and radar can improve the data, but they are costly and consume additional power.
Wide-baseline multi-camera solutions that rely on physics rather than learned estimation can provide accurate range detection at distances that even more expensive active sensors cannot match. These solutions can also complement other sensors as part of a multi-modal sensing suite for fail-operational capability. This webinar discusses how multi-view cameras with physics-based measurement can improve advanced driver assistance system (ADAS) and automated driving performance, reliability, and safety.
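The physics behind multi-camera ranging is stereo triangulation: for a rectified camera pair, depth follows directly from the measured pixel disparity, the focal length, and the baseline between the cameras. A minimal sketch, assuming a rectified pair with known (hypothetical, illustrative) calibration values:

```python
# Physics-based stereo ranging: depth from disparity for a rectified camera pair.
# All numeric values below are illustrative assumptions, not calibrated data.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

f = 1200.0                # focal length in pixels (assumed)
narrow, wide = 0.12, 1.0  # narrow vs. wide baseline in metres (assumed)

z = 100.0                      # true depth of a target, metres
d_narrow = f * narrow / z      # disparity seen by the narrow-baseline pair
d_wide = f * wide / z          # disparity seen by the wide-baseline pair
print(d_narrow, d_wide)        # the wide pair sees a much larger disparity
```

The wider baseline produces a proportionally larger disparity for the same target, which is why wide-baseline designs retain usable range resolution at long distances where narrow clusters must fall back on learned depth estimates.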
- What are the redundant and complementary applications of ADAS/AV sensor technology?
- How do cameras gather depth in ADAS/AV applications?
- What is multi-view perception and how does it work?
- What are the benefits of multi-view perception in the short and long term?
- Automotive suppliers and auto component makers
- AV stakeholders
- Startup technology firms
- Safety groups
Thanks for registering!
You're all set to join our webinar: "Accurate Ranging Perception for Assisted and Automated Driving"
November 16, 2021
You'll receive an email with details on how to join.