
Artificial intelligence has become the single largest uncontrolled channel for corporate data exfiltration, surpassing both shadow SaaS and unmanaged file sharing, according to a new report from AI and browser security company LayerX. The research, based on real-world enterprise browsing telemetry, indicates that the primary risk from AI in the enterprise is not a future threat, but a present-day reality unfolding in everyday workflows.
Sensitive corporate data is already flowing into generative AI tools like ChatGPT, Claude, and Copilot at high rates, primarily through unmanaged personal accounts and the copy-and-paste function.
The rapid, ungoverned adoption of AI
AI tools have achieved a level of adoption in just two years that took other technologies decades to reach. Nearly half of all enterprise employees (45%) already use generative AI, with ChatGPT alone reaching 43% penetration. AI now accounts for 11% of all enterprise application activity, rivaling file-sharing and office productivity apps.
This growth has largely occurred without corresponding governance. The report found that 67% of AI usage happens through unmanaged personal accounts, leaving security teams with no visibility into which employees are using which tools or what data is being shared.
Sensitive data is leaking through files and copy-paste
The research uncovered alarming trends in how sensitive data is being handled on AI platforms.
The report identifies copy-and-paste into generative AI as the number one vector for corporate data leaving enterprise control. Traditional security programs focused on scanning file attachments and blocking unauthorized uploads are missing this threat entirely.
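To make the blind spot concrete: file-centric DLP inspects attachments at upload time, but pasted text never touches a file. A browser-level control would have to inspect the text of the paste event itself. The sketch below is purely illustrative of that idea; the pattern names and regexes are assumptions for demonstration, not LayerX's actual detection logic or any specific product's API.

```python
import re

# Illustrative patterns a browser-level inspection layer might scan for
# before text is pasted into a generative AI prompt. These patterns are
# examples only, not a production-grade detection ruleset.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_paste(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in pasted text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# Example: a snippet of an internal config pasted into a chatbot prompt.
hits = classify_paste("key=AKIA1234567890ABCDEF, contact ops@example.com")
print(hits)  # ['email', 'aws_access_key']
```

The point of the sketch is where the check runs: on the paste payload inside the browser session, the only place this vector is visible, rather than on files crossing a network gateway.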
Other major security blind spots
The report highlights two other critical areas where corporate data is at risk.
The report offers several clear recommendations for security leaders.
The findings paint a clear picture: the enterprise security perimeter has shifted to the browser, where employees fluidly move sensitive data between sanctioned and unsanctioned tools. The report concludes that if security teams do not adapt to this new reality, AI will not just shape the future of work, but also the future of data breaches.