For software to reliably assist a human driver or take over the driving task entirely, it needs accurate and reliable information on the location of other road users and objects in the local environment. Current vision-based solutions, whether using a single camera or a cluster of multi-focal-length cameras, must rely on machine learning to estimate the range and trajectory of road users, an inherently error-prone approach. Active sensors such as lidar and radar can improve the data, but they are costly and consume additional power.
Wide-baseline multi-camera solutions that use physics-based measurement can provide accurate range detection at distances that most of the more expensive active sensors cannot reach. These solutions can also complement other sensors as part of a multi-modal sensing suite for fail-operational capability. This webinar discusses how multi-view cameras with physics-based measurement can improve the performance, reliability, and safety of advanced driver assistance systems (ADAS) and automated driving.
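The physics behind this approach is the classic stereo triangulation relation Z = f·B/d: range is focal length times baseline divided by disparity. The sketch below illustrates why a wide baseline helps; all numeric values are illustrative assumptions, not figures from the webinar.

```python
# Physics-based range from a wide-baseline stereo pair (illustrative sketch).
# Z = f * B / d, where f is focal length (pixels), B is baseline (meters),
# and d is the pixel disparity between the two camera views.

def stereo_range_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range to a point from its stereo disparity (pinhole camera model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


def range_error_m(focal_px: float, baseline_m: float, range_m: float,
                  disparity_err_px: float = 0.25) -> float:
    """Approximate range uncertainty: dZ ~ Z^2 / (f * B) * dd.
    Error grows with range squared but shrinks as the baseline widens."""
    return (range_m ** 2) / (focal_px * baseline_m) * disparity_err_px


# Assumed example values: 1400 px focal length, 0.8 m baseline, 7 px disparity.
print(stereo_range_m(1400.0, 0.8, 7.0))   # -> 160.0 (meters)
print(range_error_m(1400.0, 0.8, 160.0))  # range uncertainty at 160 m
```

The quadratic error growth in `range_error_m` is the reason narrow-baseline camera clusters lose accuracy at long range, while widening the baseline extends usable detection distance without adding active sensors.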