
Grounding AI in Reality

Sam Abuelsamid
May 24, 2024

[Image: brain-like structure inside a glass bulb atop a circuit-board-like digital surface]

In March I attended the 2024 edition of Nvidia’s GPU Technology Conference (GTC) in San Jose, California. For the first time since 2019, thousands of hardware and software engineers descended on the event in person to learn about the latest Nvidia technologies and techniques for deploying them. As has been the case in the past, Nvidia concurrently hosted an industry analyst symposium where we got to hear from Nvidia engineers, executives, and partners. However, the highlight is always the Q&A session with company founder and CEO Jensen Huang.

Nvidia’s valuation has increased almost fourfold in the past year, driven by the fact that it is selling every H100 GPU it can produce to companies around the world that are developing anything related to AI. The Hopper GPU architecture in the H100 will soon be supplanted by the even more powerful Blackwell architecture in the new B200 platform announced at GTC.

Over the past 18 months, most people reading this have probably experimented with one or more of the generative AI tools that have been released, such as ChatGPT, Gemini, Stable Diffusion, and many others. Anyone who has experimented enough with these tools has probably also encountered what have come to be called AI hallucinations: the systems will simply make things up that have no relation to reality.

During the Q&A, Huang spoke about how, when humans learn something, we go back and reflect on those lessons, make connections to other things we know, and consider what could happen if we took certain actions. In effect, we simulate scenarios and generate ideas much the way generative AI does, combining elements we’ve learned into something new.

“So you’re now simulating these multi-multistep scenarios, and you’re doing it completely in your head, and then you do that enough times, then what happens? It becomes reality to you,” said Huang. “It becomes your reality. Now that reality, if you go on too long, if you’re in, like, a padded room, and you just sit here and you do this, when you come back out, you are insane. And now why is that? Because you were never grounded on truth.”

Therein lies the challenge with using AI tools and reinforcement learning. If a model is trained on an enormous data set without any grounding in what is real, it starts to generate results that are untethered from reality, because it has no actual understanding of what is real. That carries many dangers, including the potential to produce and disseminate misinformation.
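To make “grounding” a little more concrete, here is a deliberately simplified Python sketch (a toy, not how any production system works) in which a generated claim is only passed along if it can be matched against a small set of trusted reference facts; anything unsupported is withheld rather than presented as truth.

```python
# Toy "grounding" check: a generated claim is emitted only if it matches a
# trusted reference corpus. Real systems use retrieval, citations, and human
# review rather than exact string matching; this is purely illustrative.
TRUSTED_FACTS = {
    "water boils at 100 c at sea level",
    "the h100 is an nvidia gpu",
}


def grounded_output(candidate: str) -> str:
    """Return the candidate claim only if a trusted fact supports it."""
    normalized = candidate.strip().lower().rstrip(".")
    if any(fact in normalized or normalized in fact for fact in TRUSTED_FACTS):
        return candidate
    # An ungrounded model would emit the claim anyway -- that is the hallucination.
    return "[unsupported claim withheld]"


if __name__ == "__main__":
    print(grounded_output("The H100 is an Nvidia GPU"))        # passes the check
    print(grounded_output("The H100 was first sold in 1987"))  # withheld
```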

But AI is increasingly being used in safety-critical systems such as driver assistance and automated driving. In particular, many companies are using it for perception processing of sensor signals and for path planning. It is essential that these systems don’t simply fabricate results when they aren’t sure what is being detected.
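One way to think about that requirement is a perception output that is allowed to abstain. The hypothetical sketch below (not any real automotive stack; the labels and threshold are made up for illustration) reports a classification only when its confidence clears a threshold, and otherwise returns “unknown” so downstream planning can respond conservatively.

```python
from dataclasses import dataclass

# Illustrative threshold only; production ADAS stacks use far richer outputs
# (object tracks, fused sensors, calibrated uncertainty) than a single score.
CONFIDENCE_THRESHOLD = 0.85


@dataclass
class Detection:
    scores: dict[str, float]  # class label -> confidence score in [0, 1]


def classify_or_abstain(det: Detection) -> str:
    """Return the top class only if it clears the threshold, else 'unknown'."""
    label, score = max(det.scores.items(), key=lambda kv: kv[1])
    if score < CONFIDENCE_THRESHOLD:
        # Refusing to guess is the point: the planner can slow down or widen
        # its margins instead of acting on a label the model made up.
        return "unknown"
    return label


if __name__ == "__main__":
    confident = Detection({"vehicle": 0.97, "pedestrian": 0.02, "cyclist": 0.01})
    ambiguous = Detection({"vehicle": 0.40, "pedestrian": 0.35, "cyclist": 0.25})
    print(classify_or_abstain(confident))  # vehicle
    print(classify_or_abstain(ambiguous))  # unknown
```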

Huang’s comment about grounding any AI system is particularly insightful. While many have talked about the need for so-called guardrails for AI, building them is not necessarily simple, and for complex systems it may not scale easily. Training AI systems should probably include some degree of human quality assurance of both the model and the training data. In the automotive safety realm, Mobileye has developed its Responsibility-Sensitive Safety (RSS) model, which is intended to provide grounding for its automated driving decisions, while Nvidia has a similar concept called Safety Force Field. Whatever solution is used, it is critical to ensure that AI doesn’t simply iterate on itself forever, especially as more AI-generated content finds its way into future training data. Not grounding AI in reality is a surefire way for it to go insane.
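For a flavor of what that kind of grounding looks like in code, the sketch below computes the minimum safe longitudinal following distance from Mobileye’s published RSS formulation; the parameter values in the example are illustrative assumptions, not figures from any production system.

```python
def rss_min_longitudinal_gap(
    v_rear: float,         # speed of the following car (m/s)
    v_front: float,        # speed of the lead car (m/s)
    response_time: float,  # rho: time before the following car reacts (s)
    a_max_accel: float,    # worst-case acceleration of the follower during rho (m/s^2)
    a_min_brake: float,    # minimum braking the follower is guaranteed to apply (m/s^2)
    a_max_brake: float,    # maximum braking the lead car might apply (m/s^2)
) -> float:
    """Minimum safe following distance per the RSS longitudinal rule.

    If the actual gap stays above this value, the follower can always stop in
    time even if the lead car brakes as hard as physically possible.
    """
    v_after_response = v_rear + response_time * a_max_accel
    d_min = (
        v_rear * response_time
        + 0.5 * a_max_accel * response_time ** 2
        + v_after_response ** 2 / (2 * a_min_brake)
        - v_front ** 2 / (2 * a_max_brake)
    )
    return max(d_min, 0.0)


if __name__ == "__main__":
    # Illustrative highway scenario: both cars at ~100 km/h (27.8 m/s), a 0.5 s
    # response time, and assumed acceleration/braking limits.
    gap = rss_min_longitudinal_gap(27.8, 27.8, 0.5, 2.0, 4.0, 8.0)
    print(f"Minimum safe gap: {gap:.1f} m")
```

The value of a rule like this is that it is a closed-form physical constraint: whatever the learned components propose, the planner can check the proposal against something that is true in the real world rather than against the model’s own output.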