Google’s top security chief is warning that artificial intelligence (AI) is forcing companies to radically rethink how they protect digital operations, as traditional cybersecurity measures prove increasingly inadequate against sophisticated AI-powered attacks.
The stark assessment comes as retailers and payment processors rush to fortify their platforms in a technological arms race that could reshape the eCommerce industry. Experts say AI tools promise to slash the billions of dollars lost to online fraud each year.
“AI-powered systems can analyze billions of URLs, emails and messages in real time to detect and block sophisticated phishing attempts and social engineering attacks before they reach users,” J Stephen Kowski, field CTO at SlashNext, told PYMNTS.
“Advanced machine learning models can now understand the context and intent of communications, moving beyond simple pattern matching to identify threats that would bypass traditional security tools. This proactive approach represents a shift from reactive, signature-based detection to predictive threat prevention that adapts to new attack variations in real time.”
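To make the shift Kowski describes concrete, here is a minimal Python sketch contrasting a reactive signature check with a context-aware "intent" score. The keyword weights, thresholds and blocklist entries are illustrative stand-ins for a trained model, not any vendor's actual detection logic.

```python
# Minimal sketch: reactive signature matching vs. predictive, context-aware scoring.
# The "intent score" below is a hand-built heuristic standing in for a trained
# language model; all weights and thresholds are illustrative assumptions.
import re

SIGNATURES = ["paypa1.com", "free-giftcard.xyz"]  # classic blocklist entries (illustrative)

URGENCY_CUES = {"urgent": 0.3, "verify your account": 0.4, "suspended": 0.3,
                "wire transfer": 0.4, "gift card": 0.3}

def signature_match(message: str) -> bool:
    """Reactive check: flag only messages containing a known-bad indicator."""
    return any(sig in message.lower() for sig in SIGNATURES)

def intent_score(message: str) -> float:
    """Predictive check: score contextual cues of social engineering (0..1)."""
    text = message.lower()
    score = sum(weight for cue, weight in URGENCY_CUES.items() if cue in text)
    # A link paired with pressure language is a stronger signal than either alone.
    if re.search(r"https?://", text) and score > 0:
        score += 0.2
    return min(score, 1.0)

def is_suspicious(message: str, threshold: float = 0.5) -> bool:
    return signature_match(message) or intent_score(message) >= threshold

if __name__ == "__main__":
    msg = "URGENT: verify your account at http://example.test or it will be suspended"
    print(is_suspicious(msg))  # True: no known signature, but the context flags it
```

The point of the sketch is the second path: a message with no previously seen indicator still gets caught because its wording and structure look like social engineering.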
Observers say AI will force businesses to spend more on new security systems, but those same tools could pay for themselves by preventing online fraud. Companies large and small will be able to use AI to spot suspicious behavior and stop cyberattacks before they happen, making the technology itself a robust line of defense against online attacks.
AI Security Challenges

At a recent Cloud Security Alliance symposium, Google Cloud’s security head, Phil Venables, urged a sweeping overhaul of cybersecurity practices as AI creates new vulnerabilities and defensive opportunities. He emphasized that organizations can no longer rely on traditional security measures given AI’s unique risks, from data poisoning to model manipulation, VentureBeat reported.
He argued that companies must now implement end-to-end protections designed explicitly for AI systems, including rigorous data sanitization, comprehensive access controls, and “circuit breakers” that can halt harmful model outputs. While these new security frameworks require investment, they’re essential for organizations seeking to harness AI’s benefits safely while guarding against sophisticated threats.
Kowski said that modern security systems need to expand beyond traditional indicators of compromise to identify AI-generated content, particularly in phishing and social engineering attacks that leverage large language models.
“Real-time detection capabilities must evolve to analyze behavioral patterns, linguistic nuances, and contextual anomalies that could indicate AI-powered threats,” he added. “The integration of machine learning models into existing security frameworks can help identify sophisticated AI-generated attacks while maintaining high accuracy and low false positive rates.”
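As a rough illustration of the "linguistic nuances" such systems might weigh, the sketch below computes two crude proxies: sentence-length spread and vocabulary variety. Real detectors rely on trained models; these hand-picked signals are assumptions for demonstration only and are easy to fool.

```python
# Crude proxies sometimes associated with machine-generated text: very uniform
# sentence lengths and repetitive vocabulary. For illustration only.
import re
from statistics import pstdev

def generation_signals(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    return {
        # Humans tend to vary sentence length; low spread is a weak machine-like signal.
        "sentence_length_spread": pstdev(lengths) if len(lengths) > 1 else 0.0,
        # Ratio of unique words to total words; highly repetitive text scores low.
        "vocabulary_ratio": len(set(words)) / max(len(words), 1),
    }

print(generation_signals("We reviewed your account. We found an issue. We need your login."))
```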
Keeping Attackers at Bay

Brian Vlootman, CISO at Backbase, told PYMNTS that organizations need to develop new detection systems specifically designed to monitor and identify the behavior patterns of AI agents, a new type of autonomous software. Such systems watch for three key warning signs: strange patterns in how AI models are used, unusually high numbers of requests, and suspicious spikes in computing power.
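A minimal sketch of how those three signals could be monitored, using simple z-scores against a rolling baseline. The field names and the three-standard-deviation threshold are assumptions for illustration, not Backbase's actual implementation.

```python
# Flag an AI agent whose current behavior deviates sharply from its own baseline
# on any of three signals: request volume, compute use, or model-usage pattern.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class AgentSample:
    requests_per_min: float       # request volume
    cpu_seconds: float            # compute consumed in the window
    distinct_models_called: int   # proxy for unusual usage patterns

def z(value: float, history: list[float]) -> float:
    sigma = pstdev(history) or 1.0  # avoid division by zero on flat baselines
    return (value - mean(history)) / sigma

def flag_agent(current: AgentSample, history: list[AgentSample],
               threshold: float = 3.0) -> list[str]:
    """Return which of the three warning signs look anomalous versus the baseline."""
    alerts = []
    if z(current.requests_per_min, [h.requests_per_min for h in history]) > threshold:
        alerts.append("request volume spike")
    if z(current.cpu_seconds, [h.cpu_seconds for h in history]) > threshold:
        alerts.append("compute spike")
    if z(current.distinct_models_called, [h.distinct_models_called for h in history]) > threshold:
        alerts.append("unusual model-usage pattern")
    return alerts

# Example: an agent that normally makes ~20 requests/min suddenly makes 400.
baseline = [AgentSample(20 + i % 3, 5.0, 2) for i in range(30)]
print(flag_agent(AgentSample(400, 6.0, 2), baseline))  # ['request volume spike']
```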
Vlootman said his company has modernized its security infrastructure by integrating AI-powered defense systems. These tools can spot potentially fraudulent account access by monitoring multiple user behaviors simultaneously, evaluating indicators such as how users type, move their mouse and interact with the banking platform’s interface to identify suspicious activity patterns.
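The sketch below shows one way such behavioral indicators might be blended into a single risk score. The feature names, cutoffs and weights are illustrative assumptions; production systems learn them from labeled sessions rather than hand-coding them.

```python
# Toy behavioral-biometrics score combining typing cadence, mouse movement
# and interaction pace. Thresholds and weights are illustrative only.
def session_risk(typing_interval_ms: float,
                 mouse_path_straightness: float,   # 1.0 = perfectly straight (bot-like)
                 pages_per_minute: float) -> float:
    """Blend three behavioral indicators into a 0..1 risk score."""
    risk = 0.0
    if typing_interval_ms < 40:          # inhumanly fast, evenly spaced keystrokes
        risk += 0.4
    if mouse_path_straightness > 0.95:   # scripted cursor movement
        risk += 0.3
    if pages_per_minute > 30:            # racing through the banking UI
        risk += 0.3
    return min(risk, 1.0)  # cap at 1.0 (floating-point sums can overshoot slightly)

# A hurried but human session scores low; a scripted one maxes out the score.
print(session_risk(120, 0.6, 8))    # 0.0
print(session_risk(25, 0.99, 45))   # 1.0
```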
Andy Grayland, CISO at Silobreaker, told PYMNTS he believes AI has the potential to revolutionize cyber threat intelligence. He noted that identifying threats by collecting data from web sources including the surface web, deep web and dark web requires substantial human effort.
“Good analysts are expensive, and we are seeing practical applications in the development of AI to filter the unimaginably large dataset down to just the vital information needed to allow those analysts to focus on more of the important threats than the day-to-day drudgery of filtering out false positives,” he added.
“The increased use of AI will see smaller companies being able to afford cyber threat intelligence whilst larger companies will be able to use AI to multiply their output on the same annual budget massively.”
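A toy example of the kind of triage Grayland describes: scoring raw alerts so analysts see the few that matter first. The fields and scoring rules are hypothetical and stand in for the models a real threat-intelligence pipeline would use.

```python
# Hypothetical triage filter: rank raw threat-intel items so analysts review the
# highest-value ones first instead of wading through likely false positives.
from dataclasses import dataclass

@dataclass
class IntelItem:
    source: str           # e.g. "dark_web_forum", "paste_site", "news"
    mentions_company: bool
    has_exploit_code: bool
    duplicate_count: int  # how many near-identical items were already seen

SOURCE_WEIGHT = {"dark_web_forum": 0.5, "paste_site": 0.3, "news": 0.1}

def triage_score(item: IntelItem) -> float:
    score = SOURCE_WEIGHT.get(item.source, 0.1)
    score += 0.3 if item.mentions_company else 0.0
    score += 0.3 if item.has_exploit_code else 0.0
    score -= 0.05 * item.duplicate_count  # repeated chatter is usually noise
    return max(score, 0.0)

items = [
    IntelItem("news", False, False, 12),
    IntelItem("dark_web_forum", True, True, 0),
]
for item in sorted(items, key=triage_score, reverse=True):
    print(round(triage_score(item), 2), item.source)
```

In practice the scoring would come from models trained on analyst feedback, but the shape of the pipeline, collecting broadly, scoring automatically and surfacing a short list, is what makes the economics Grayland mentions work for smaller teams.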