Waiting for AI perfection before deploying? Intel’s Lama Nachman says “don’t wait!”

Karina Estrada
2 min read · Jun 29, 2021

If you wait for AI to be perfect, you’ll never deploy it.

That’s one of the big takeaways from this fascinating conversation between Kimberly Nevala, SAS’s Strategic Advisor focused on AI and machine learning, and Lama Nachman, Intel Fellow and Director of the Anticipatory Computing Lab in Intel Labs, for our latest Pondering AI podcast. Nachman should know — she worked directly with Professor Stephen Hawking and a team of researchers to develop a software platform and sensing system to help Hawking communicate.

Today, in the Anticipatory Computing Lab, Nachman and her team are constantly deploying emerging AI capabilities, most of which are far from perfect, then poking and prodding them to find gaps and areas of improvement. Understanding the imperfections of AI systems is just part of the job. She’s not waiting around for the perfect conditions — she’s making AI happen.

In this episode of our podcast, Nachman shared insights from her team’s groundbreaking work in areas ranging from early childhood education to assistive computing, which helps people with a limited ability to speak or move communicate through AI-assisted systems. In every aspect of the Lab’s work, she is focused on the potential for unintended consequences from AI systems — and how to avoid them.

That’s where humans come in. “Humans are just very good at things AI is not good at,” Nachman says. “That’s why we need to be thinking about combined human-AI systems that bring out the best in AI.” For Nachman, this type of collaboration offers one of the fastest and most effective paths to improving on the imperfections of AI systems. The systems her team creates and deploys are notable for their numerous “on ramps” for human input and refinement.

Along the way, Nachman and her team grapple with questions that any team undertaking an AI initiative inevitably encounters, including:

· How can we build AI systems that are constantly learning and improving?

· How do we use human input?

· What level of uncertainty and imperfection is acceptable in AI systems?

· How can we account for uncertainty in the deployment of AI in a way that improves these systems over the long term?

· Which potentially negative implications should we account for in deploying AI — and what role do humans have in avoiding those possibilities?

This 40-minute podcast is packed with insights from Nachman on all these questions and more, and includes lots of fascinating practical examples from the Lab. If you’re actively engaged in AI development and deployment, or just interested in AI, this short conversation between Nachman and Nevala is a must-listen: https://www.sas.com/gms/redirect.jsp?detail=GMS176395_245884

Karina Estrada was born in Puerto Rico and currently works at SAS in Cary, NC, as a senior partner marketing manager.