As artificial intelligence (AI) continues to disrupt industries across the globe, Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and a member of Amazon's board of directors, offered a candid roadmap for business leaders looking to deploy AI effectively.
His message was clear: Don’t just adopt AI — experiment with it, understand its limitations and integrate it with human decision-making to unlock its full potential.
“Machine learning is going to reinvent many areas of work,” Huttenlocher said Tuesday (April 1) in a keynote address at the 2025 MIT AI Conference in Cambridge, Massachusetts. “If you don’t have skunk works projects, you’re going to be left behind.”
Huttenlocher said businesses do not necessarily need to make big upfront investments to gain a competitive edge with AI. Instead, they should focus on small, targeted experiments that lead to little successes and build from there — something he learned from Amazon.
“You can start something very small, doesn’t have to be expensive. But if you get a positive surprise, double down,” he advised. “The thing about doubling down repeatedly is then you end up with something big.”
Huttenlocher said it’s important for companies to remember that at the beginning, growth will be very slow. “It doesn’t get big until you’ve really had a bunch of positive surprises.”
To determine what counts as a positive surprise, companies have to identify "good" KPIs so they can measure it, he added.
Huttenlocher also urged companies to think beyond simply applying AI to improve existing processes.
The real opportunity lies in re-envisioning products, services and operations through the lens of AI, he added. That means focusing not just on optimization, but on innovation.
Four Inconvenient Truths About AI

Whenever a new technology emerges, it enters a hype cycle before maturing. Huttenlocher sees the same thing happening with AI. He pointed to four ideas about AI that are often overlooked.
1. GenAI models of today do not make good AI agents.
While advances in GenAI models in the past five years have been “nothing short of completely stunning,” Huttenlocher nevertheless said they are “not so great at being agents.”
They are trained to produce text and images, “but it’s much harder to get them to get things done,” he said. “It’s actually relatively challenging with today’s training techniques to get them to work very reliably.”
This insight is crucial for businesses hoping to use AI to drive real-world outcomes. GenAI is still useful for the right tasks.
In areas like robotics, Huttenlocher noted, training AI systems on both actions and observations is producing real results, especially when paired with simulation.
But the road to scalable, autonomous agents remains a work in progress.
This week, Amazon debuted a solution that it said makes AI agents more reliable. Called Nova Act, the AI model lets developers break down complicated workflows into a series of single acts. By structuring the process this way, Amazon said, Nova Act creates reliable building blocks for AI agents.
2. Despite the hype around reasoning models, AI does not reason like humans do.
Huttenlocher said AI models do not reason like people, and therefore standalone systems are not optimal, especially for highly regulated industries like health care.
“I don’t think it ever will deliver human reasoning,” he said. AI “may do very high-quality things that don’t align with the way humans would reason about something. … That’s why I’m very skeptical of standalone AI systems.”
For example, Huttenlocher said that AI used as a medical consultant can enhance physician decision-making, whereas AI acting as an autonomous expert can actually lead to worse outcomes.
Instead, he encouraged collaborative AI — designing AI systems to work with humans instead of taking over their jobs.
3. Think of cultural, not just digital transformation.
Successfully deploying AI requires cultural as well as technical transformation.
Businesses need teams that understand both the nuances of their industry and the capabilities of AI. That’s why cross-functional collaboration — between the business and technical side of the company — is essential.
MIT is modeling this approach through its interdisciplinary initiatives. Through its Schwarzman College of Computing, Huttenlocher said, the university has hired 50 new faculty, half in core computing fields and half embedded in other disciplines focused on “infusing” AI throughout.
“There’s huge … employer demand for people trained in computing. But it’s not so much compute,” he said. “It’s more (about) formulating and solving problems.”
4. AI is not inherently good or bad.
The narrative around AI today, unfortunately, veers from utopian to dystopian: Either it will save the world or destroy it, Huttenlocher said.
While these make the best stories, “it doesn’t necessarily mean that they’re going to be the reality of the future,” he said.
For companies and society at large, the advice on offer is similarly polarized: "We should be really timid about using it; we should be incredibly audacious about using it," Huttenlocher said.
The examples typically cited by both camps are that AI will either reinforce human biases and errors or lead to better and fairer decisions, he added.
“Much of what you see out there just takes one of those two positions, as if it’s gospel, and doesn’t even recognize the other. And I pretty much dismiss anything” this divergent, Huttenlocher said.
Instead, it’s best to take a balanced view and be “attentive” about how AI is being used, he said.
The post MIT’s Dean of Computing: 4 Inconvenient Truths About GenAI appeared first on PYMNTS.com.