AI has set up shop across the corporate landscape, with goods, services and technology companies increasingly using the rapidly evolving software systems to automate certain tasks and reduce reliance on human workers. Nonetheless, AI and its increasingly human-devoid agentic iterations are far from in charge.
A forthcoming report from PYMNTS Intelligence suggests that while GenAI (defined as LLMs designed to tease out insights and produce novel text, images, audio and video) is enhancing productivity across industries, many of the routine jobs it performs require significant human guidance, input and review. Meanwhile, the vision of widespread use of truly autonomous AI agents, next-generation software that operates entirely without human oversight, such as chatbots and systems that reconfigure supply chains and distribution routes, remains largely a distant prospect.
‘Isn’t Here Yet’

The report, based on an April 2025 survey of 60 chief operating officers (COOs) at large U.S. firms across the goods, technology and services sectors, reveals a stark contrast between the ambitious potential of “agentic” systems capable of fully independent decision-making and action, and the day-to-day reality of how enterprises are deploying the technology.
Most business implementations of GenAI today still require employees to keep processes on track, correct faulty assumptions or biases, and ensure valid results and outcomes.
“Agentic AI in the pure sense isn’t here yet,” one of the surveyed sources told PYMNTS Intelligence.
Go, Humans

Tasks that would seem ripe for pure automation, such as generating feedback on product processes, managing cybersecurity systems and innovating new products and services, still rely heavily on human involvement. At technology firms surveyed, COOs indicated that nearly all of these functions require a human operator when GenAI is deployed. Across services and goods firms, human oversight was necessary for these tasks 60% to 100% of the time. Even more mundane uses, like assisting employees or generating summaries of emails and reports, still require human supervision in 50% to 100% of cases, depending on the industry.
The enduring reliance on humans stems from a fundamental reality: From reconfiguring global supply chains to cope with tariffs to designing market strategies five years out, most enterprise functions are complex, interdependent and steeped in specific context. These conditions, the report indicates, currently challenge the capabilities of today’s nascent agentic AI. While the software enhances speed and productivity, it is not (yet) displacing the need for creativity, judgment and multi-layered decision-making — the domain of people, not machines. In sectors where context, ethics or regulatory implications are significant, such as healthcare, finance and logistics, manual oversight is particularly non-negotiable.
Cruise Control = Risky Business

Where automation is increasing, it remains largely confined to narrow, clearly defined contexts and industry-specific use cases. Companies that make use of full autonomy currently limit it to structured, rule-based tasks, like generating software code and detecting fraud. Such tasks are described as repeatable, logic-driven and lower risk. For example, COOs at technology firms report that AI tools for identifying fraudulent behavior, errors or inconsistencies are mostly automated 100% of the time. Services firms reported the same level of automation for generating software code.
The biggest barrier to broader automation is not just technical limitations but also enterprise risk tolerance. COOs are cautious about ceding full autonomy for tasks that could impact brand trust, create legal exposure or negatively affect customer experience. This suggests that automation success depends critically on matching the AI tool precisely to the task and avoiding a “one-size-fits-all” approach that could lead to failed deployments.
Despite the operational limitations in achieving full autonomy, the financial picture for companies aggressively adopting automation is positive. Among the high-automation enterprises surveyed, none reported concerns about recouping their investment in AI tools. This contrasts sharply with lower-automation firms, where half still question whether GenAI can deliver meaningful value. The report suggests that automation validates itself financially over time: the longer a company uses GenAI, the more value it finds.
However, this financial confidence comes with a significant trade-off: increased adoption correlates directly with heightened risk, particularly concerning data security and privacy. Eight in 10 highly automated firms now cite data security and privacy as their top concerns. This is more than double the 39% reported by less-automated firms. As agentic AI systems interact with more sensitive workflows and datasets, the risk surface expands considerably.
Other notable drawbacks cited by COOs include issues with integration and implementation (62%) and concerns about the accuracy of AI-generated outputs (57%). Notably, regulatory compliance was a top concern for only a small minority, likely reflecting a lag in rulemaking as the technology rapidly evolves.
Read more:
Product Officers Embrace GenAI’s Role in Fast-Tracking Early-Stage Innovations
What Enterprise CFOs Want From GenAI
Walmart Embraces Agentic AI in New Era of Retail
The post Exclusive: Agentic AI Vision Meets Reality in New PYMNTS Report appeared first on PYMNTS.com.