The surge in artificial intelligence-powered fraud has placed identity verification in the spotlight.
Tools capable of generating photorealistic faces, synthetic voices and forged identity documents have lowered the barrier to entry for fraud rings. As generative AI makes it easier than ever to fabricate documents, faces and voices, the digital economy is confronting a foundational question: How do you actually know who is on the other side of the screen?
“It’s about being able to answer three questions with confidence. Are you who you say you are? Can you be trusted? And are you still the same person related to the account?” Veriff Chief Technology Officer Hubert Behaghel told PYMNTS during a discussion for the March edition of the “What’s Next In Payments” series, “How Will AI Change Identity?”
“The challenge is really instrumenting the full lifecycle of the customer with strong identity and authentication,” Behaghel said.
In other words, identity verification is no longer just a moment during onboarding. It’s becoming infrastructure.
“It’s not that identity is under attack,” Behaghel said. “It’s how we’ve been implementing identity that needs to go back to fundamentals.”
The Deepfake Problem and Its Limits

Financial services, online marketplaces and, increasingly, HR and recruiting platforms are among the sectors most frequently targeted by fraudsters using AI-powered spoofing tactics.
However, focusing too narrowly on deepfakes risks misunderstanding the real issue, Behaghel said. A convincing fake selfie might fool a human observer, but a properly designed identity system should be examining more than just an image.
“Deepfake is only attacking really one layer, which is the computer vision element,” he said. “It’s impressive because, as human beings, we can’t easily assess it. But if you treat identity seriously, identity is a much more dynamic and multidimensional concept.”
Instead of treating identity verification as a single technological checkpoint, companies are building layered identity architectures that combine signals from multiple sources.
Fraudsters, however, are evolving their tactics. As digital identity becomes central to everything from banking to healthcare and social platforms, infrastructure providers are becoming targets. For companies in the identity verification space, that means the problem isn’t just catching fraud attempts but protecting the massive volumes of sensitive identity data flowing through their systems.
“KYC data is highly valued by fraudsters,” Behaghel said. “The providers themselves become a surface of attack.”
Trust Isn’t Just Technology

The technical architecture behind Veriff’s identity verification platform illustrates how complex modern identity checks have become. At the foundation is device intelligence, or signals collected from the hardware and software used to access a service. That might include details about operating systems, sensors or device behavior.
Then comes network-level analysis, correlating location, IP data and connection patterns. Only after that does the system examine media inputs, such as selfies or videos. Even these visual signals are cross-checked against device characteristics. For example, a suspicious mismatch between a phone’s hardware profile and the format of an uploaded video could indicate manipulation.
But the real strength comes from correlating signals across layers rather than evaluating them independently, Behaghel said.
“If you create discrete systems for each layer, you’re going to limit your thinking to dozens or maybe hundreds of data points,” he said. “But when all of that data lives together, you can think in terms of thousands or tens of thousands of signals. That’s a completely different game.”
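The difference Behaghel describes can be sketched in a few lines of code. This is a minimal illustration with invented signal names and thresholds (nothing here reflects Veriff's actual system): two signals that each look clean when a layer is scored in isolation become suspicious once they are correlated across layers.

```python
from dataclasses import dataclass

# Hypothetical signals from the three layers described above.
# All field names, values and weights are invented for illustration.
@dataclass
class Signals:
    device_os: str        # device-intelligence layer
    video_container: str  # media layer: container of the uploaded video
    ip_country: str       # network layer
    doc_country: str      # media layer: issuing country on the ID document

def layer_scores(s: Signals) -> dict:
    """The 'discrete systems' approach: each layer scored alone.
    No single layer sees anything wrong with this session."""
    return {"device": 0.0, "network": 0.0, "media": 0.0}

def cross_layer_score(s: Signals) -> float:
    """Correlate signals ACROSS layers: mismatches invisible to any
    one layer surface when all the data lives together."""
    score = 0.0
    # An iPhone camera records MOV files; a webm upload from a device
    # reporting iOS hints the video never came from the camera.
    if s.device_os == "iOS" and s.video_container == "webm":
        score += 0.6
    # Document issued in one country, connection from another:
    # weak alone, meaningful combined with other mismatches.
    if s.ip_country != s.doc_country:
        score += 0.3
    return score

suspicious = Signals("iOS", "webm", "RU", "US")
print(layer_scores(suspicious))                 # every layer alone: 0.0
print(round(cross_layer_score(suspicious), 2))  # → 0.9, correlation flags it
```

Real systems would weigh thousands of such cross-layer features rather than two hand-written rules, which is the scale difference Behaghel points to.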
Contrary to the prevailing narrative that automation could eventually eliminate human oversight, Veriff intentionally integrates human analysts into its verification pipeline.
“Some people think the human in the loop is legacy,” Behaghel said. “For us, it’s actually what gives us an edge.”
Preparing for the Age of AI Agents

Another emerging challenge is the rise of AI agents, or autonomous software systems capable of performing tasks like buying goods, managing subscriptions or executing financial transactions.
The technology itself may be here, but trust systems aren’t ready. The bigger question is accountability. If an AI agent makes a decision, say, wiring $100,000 from a bank account, who is ultimately responsible?
“The first level of being ready for agents is actually to be ready to ban them because the premise is you are not ready,” Behaghel said, adding that identity verification will play a key role in creating a chain of accountability between humans and the agents acting on their behalf.
“We haven’t yet seen the level of ecosystem building that’s needed,” he said.
Despite the evolution of AI and biometric verification, the security industry is still grappling with a much simpler weakness: passwords. Credential reuse and large-scale data breaches continue to provide attackers with massive databases of personal information that can fuel everything from credential-stuffing attacks to targeted identity fraud.
Those same datasets can even help train AI systems designed to impersonate individuals more convincingly.
“We still see too many services online that just use login and password,” Behaghel said.
“We need to take identity seriously before we dream bigger,” he added. “The weak link is always the one that stops the whole system from progressing.”
The post Veriff Warns Deepfakes Are Distracting Firms From the Real Identity Problem appeared first on PYMNTS.com.