Generative artificial intelligence (AI) is expanding workers’ capabilities far beyond their current skill sets, but experts warn that human oversight remains crucial to ensure quality and avoid pitfalls.
AI is transforming office work across industries. Employees in marketing, HR and software development now handle complex data analysis and coding tasks, often without additional technical training. This shift enables new capabilities and reshapes traditional job roles and skill requirements.
“This exciting new tech will have a profound impact on financial services, for both customers and businesses in the sector,” Scarlett Sieber, chief strategy officer at Money20/20, told PYMNTS. “Customers will receive hyper-personalized financial advice, faster customer support and a more seamless user experience. Financial service providers will benefit from reduced costs, improved decision-making and risk management, and potentially enhanced audit efficiency and regulatory compliance.”
A recent Boston Consulting Group study provides data to back up these observations. In an experiment involving over 480 consultants, researchers found that employees using generative AI (GenAI) achieved an average score of 86% of the benchmark set by data scientists when performing coding tasks — a 49-percentage-point improvement over those not using AI. Even more striking, consultants who had never written code reached 84% of the data scientist benchmark when using AI.
The Promise and Perils of AI Augmentation

While AI’s potential is clear, experts caution that its implementation is challenging. Gillian Laging, co-founder of Scopey, a scope management platform, told PYMNTS that GenAI outputs “often remain generic” and require human review, especially for nuanced or creative work.
This concern is particularly acute in fields where precision and creativity are paramount. In software development, for instance, there are growing reports of junior developers relying too heavily on AI-generated code without fully understanding its implications or potential flaws. This can lead to security vulnerabilities, performance issues, or code that simply doesn’t function as intended.
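As a hypothetical illustration of the kind of subtle flaw the experts describe, consider a database lookup of the sort an AI assistant might generate: it runs and passes a quick manual test, yet builds its SQL query through string interpolation, leaving it open to injection. The function and table names below are invented for illustration, not drawn from any specific tool.

```python
import sqlite3

# Hypothetical AI-generated helper: it works in a quick test, but interpolating
# user input directly into the SQL string leaves it open to injection attacks.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Reviewed version: a parameterized query lets the database driver handle
# escaping, closing the injection hole without changing the visible behavior.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

A reviewer who understands the code would catch the first version; a junior developer pasting it verbatim might not, which is precisely the oversight gap at issue.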
AI consultant Tom Hall agreed with the need for caution. He said he sees AI as “an enabler of our workers at all levels” but emphasized that it “needs to be socialized with workers as a tool that will help them, not something that can be ‘plug and played’ to solve all their problems.”
Hall’s experience highlights another critical aspect of AI integration: the human factor.
“For example, being able to quickly query a GenAI tool for document lookup or a process overview helps employees free up time to focus on their core role and/or reduce unnecessary stress,” he said.
Experts recommend a balanced approach to harness AI’s potential while mitigating risks. Laging advocated for built-in human oversight, citing her company’s practice of having users review AI-drafted scopes of work before sending them to clients.
This approach acknowledges that while AI can dramatically speed up certain processes, human judgment remains invaluable. For instance, an AI might draft a comprehensive project scope, but a seasoned professional can spot nuances, potential issues or client-specific considerations that the AI might miss.
Both Laging and Hall stressed the importance of clear guidelines and training for employees using AI tools, particularly when dealing with sensitive data or intellectual property. This is especially crucial as the line between personal and professional use of AI tools becomes increasingly blurred.
“Companies should assume that employees will use AI tools, so providing clear guidelines — especially in cases involving IP or sensitive data — will be more effective than restricting use altogether,” Laging said. This approach can help organizations avoid potential legal and ethical issues that might arise from unchecked AI use.
Beyond GenAI

While much of the current buzz surrounds GenAI, Laging highlighted the critical role of analytical AI in working with proprietary data to generate meaningful insights.
“For companies to unlock the true potential of AI, analytical AI is equally critical,” she said.
This type of AI, which focuses on parsing and interpreting large datasets, can provide companies with strategic advantages beyond task automation. Laging said that at Scopey, they use analytical AI to monitor conversations throughout a project and flag any client-requested changes. This allows companies to manage scope more effectively and quote for additional work where necessary.
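Scopey has not published how its monitoring works, so the following is only a minimal sketch of the general idea: screening project messages for phrases that often signal a scope-change request and surfacing them for human review. The phrase list, threshold-free matching and message format are all assumptions made for illustration.

```python
import re
from dataclasses import dataclass

# Phrases that often signal a client asking for work beyond the agreed scope.
# This list is illustrative only, not any vendor's actual model.
SCOPE_CHANGE_PATTERNS = [
    r"\bcan you also\b",
    r"\bone more thing\b",
    r"\bwhile you'?re at it\b",
    r"\bout of scope\b",
]

@dataclass
class Flag:
    message: str
    pattern: str

def flag_scope_changes(messages: list[str]) -> list[Flag]:
    """Return messages that look like scope-change requests for human review."""
    hits = []
    for msg in messages:
        for pattern in SCOPE_CHANGE_PATTERNS:
            if re.search(pattern, msg, flags=re.IGNORECASE):
                hits.append(Flag(message=msg, pattern=pattern))
                break
    return hits

if __name__ == "__main__":
    thread = [
        "Thanks for the draft, it looks great.",
        "While you're at it, can you also add a reporting dashboard?",
    ]
    for f in flag_scope_changes(thread):
        print(f"Possible scope change: {f.message!r}")
```

In practice a production system would rely on richer language models rather than keyword rules, but the design point is the same one Laging makes: the tool flags candidates, and a person decides whether to quote for additional work.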
As organizations navigate this new landscape, the focus remains on harnessing AI’s potential while maintaining human expertise at the core of decision-making.
“Companies need to approach AI from a people-first perspective rather than a tech-first perspective,” Laging said. “We’re working to enhance — not replace — the expertise and decision-making of the workforce.”
This sentiment is echoed in the broader industry conversation about AI integration. Rather than viewing AI as a replacement for human workers, businesses are exploring how AI can augment human capabilities, allowing workers to focus on higher-level strategic thinking, creativity and interpersonal skills that AI cannot replicate.