Financial services companies are still in the early innings of deploying generative and agentic artificial intelligence (AI), despite the rhetoric coming from AI vendors, industry executives and academics said during a roundtable discussion held Thursday (March 27) by the Securities and Exchange Commission.
That’s not because firms underestimate AI’s benefits. Enterprises can use generative AI to raise back-office efficiency in areas like operations, compliance and human resources, and to scale customer-facing tasks such as wealth management, said Sarah Hammer, executive director of the Wharton School.
But the on-the-ground truth is that “a lot” of financial services companies are adopting the technology at a “much slower rate” than the tech companies developing and selling it, said Hardeep Walia, managing director and head of AI and personalization at Charles Schwab.
“All of us are in those early innings. We’re all experimenting, we’re all doing evaluations, we’re all trying to calculate the ROI,” Walia said. “Companies will continue to experiment, but I think right now most use cases … have a human in the loop.”
Hammer added that while generative AI is helpful, especially in inefficient processes like clearing and settlement, “companies are still thinking about value. … A lot of [firms] struggle to understand how to measure return on investment, because this is an incredibly expensive technology.”
But other panelists said costs have been coming down and open-source AI models like DeepSeek are putting a lid on expenditures.
Peter Slattery, a researcher at MIT’s FutureTech, said another hurdle is that GenAI suffers from “last mile” issues.
A large language model might match 90% of a human worker’s performance and accuracy, but it would need “some exponential increase to get up to the level of quality where you can use it to substitute for humans,” Slattery said. As such, it is “very unlikely” that such work will be fully automated “anytime soon.”
But Tyler Derr, CTO of Broadridge, said the main thrust of AI adoption is not about replacing people. “It’s more about enhancing human operations.”
Read more: SEC to Consider How to Harmonize AI-Related Disclosures Across Industries
MIT’s AI Risk Repository

There’s another challenge enterprises have to consider: new risks introduced by AI. As the technology takes on more roles at a company, traditional risk management frameworks may be insufficient.
Slattery would know. He leads MIT’s AI Risk Repository, which is updated every three months. In the most recent update, his team added 100 risks related to AI agents and started a new category for multi-agent risks.
For example, consider liability, a concept that may need to be rethought because it is tricky to apply to new technologies like agentic AI. “If my agent works with your agent, and there’s something [that happens] neither of us want, are we both responsible?” Slattery asked.
Developers already cannot understand what’s going on inside “black box” AI models, so what happens “if we have millions of instances of those models collaborating across different tech stacks?” Slattery added.
“There’s fundamentally new risks there, and a lot we need to think about,” Slattery said.
Hammer echoed the concern, pointing to the need for robust governance policies. “I have not seen companies pulling back on responsible AI,” she said, citing global regulatory frameworks like the EU AI Act and U.S.-based efforts to regulate AI without stifling innovation.
Broadridge’s Derr emphasized that risk policies and procedures must be dynamically updated. “It’s not a static process,” he said. “You’re going to have to continue to evaluate it as new use cases come up.”
Asked how firms can best work with the SEC on AI regulations, Derr said: “Like cybersecurity, this is a team sport. All of us working together is going to increase our chances of success because we all learn from each other.”
The post AI Adoption in Finance: Promise vs Reality — SEC Roundtable Reveals Cautious Approach appeared first on PYMNTS.com.