
A USENIX study by researchers from University College London, Mediterranea University of Reggio Calabria, and University of California, Davis, found that popular AI browser extensions collect sensitive user data including medical records, banking information, social security numbers, and social media activity during simulated real-world browsing scenarios.
AI-assisted web browsers emerged in 2025, offering features such as website summaries, refined searches, chatbots, and autonomous task execution. These browsers challenge established options like Google Chrome, Apple Safari, Microsoft Edge, and Mozilla Firefox. OpenAI’s Atlas, for instance, allows users to perform searches or interact with ChatGPT directly within the browsing interface.
Google Chrome commands approximately 70 percent of the global user base. AI browsers including Perplexity’s Comet, Opera Neon, Dai, and ChatGPT aim to attract users through advanced capabilities. Legacy browsers such as Firefox incorporate AI elements to expand their market presence. McKinsey & Company forecasts the browser industry will generate $750 billion in revenue by 2028.
AI browsers perform functions beyond internet access, such as completing forms, making Amazon purchases, or revising essays. They operate via an always-available chatbot that examines website content and agentic modes that handle complex tasks. These processes require analysis of open webpages and integration with prior requests, search histories, and interactions. Most AI browsers function autonomously, without explicit user instructions or consent.
Browser extensions serve as interfaces for generative AI models including OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama. Extensions access the large language model’s application programming interface on the browser’s backend to deliver personalized experiences. For autonomous operation, they inject content scripts into webpages, processed by background service workers. This setup distinguishes AI browsers from standard chatbots, which process only user-entered data. AI browsers automatically extract information from visited websites.
The August 2025 USENIX study focused exclusively on browser extensions, not full AI browsers, because leading products like OpenAI’s Atlas and Perplexity’s Comet launched afterward. The underlying data requirements remain identical for both types.
Delivered at the 2025 USENIX Security Symposium, the study examined extensions such as ChatGPT for Google, Sider, Monica, Merlin, MaxAI, Perplexity, HARPA, TinaMind, and Microsoft’s Copilot. Researchers replicated everyday browsing in private and public contexts, including news reading, YouTube viewing, pornography access, and tax form completion. Privacy tests involved specific prompts to identify collected data. Extensions captured images and text content encompassing medical diagnoses, social security numbers, and dating app preferences.
Merlin transmitted banking details and health records to external destinations. Merlin and Sider AI captured activity even in private browsing modes. Traffic decryption analysis showed multiple assistants forwarding webpage content to their servers and third-party trackers. Sider and TinaMind transmitted user prompts and identifying details like IP addresses to Google Analytics, facilitating cross-site user tracking.
Microsoft’s Copilot retained chat histories from prior sessions in the browser background, persisting across uses. Google, Copilot, Monica, ChatGPT, and Sider analyzed user activity to construct profiles based on age, gender, income, and interests, applying these for personalized responses over multiple sessions.
Among tested assistants, Perplexity demonstrated the strongest privacy measures. It lacked the ability to recall previous interactions, and its servers avoided accessing personal data within private browsing spaces. Perplexity still processed page titles and user location information.
Transitioning from extensions to standalone browsers highlights ongoing privacy issues. OpenAI’s Atlas and Perplexity’s Comet, the leading AI browsers, exhibit data collection practices. OpenAI states Atlas selectively analyzes content, but this selectivity excludes privacy considerations. The chatbot processes all website images and text. Atlas’s primary features rely on optional memory functions that retain browsing history depictions to customize user interactions. Users cannot specify which website elements the browser retrieves. OpenAI’s help page outlines mitigation steps: removing pages from the chat interface, blocking sensitive URLs from chatbot access, and clearing browsing history.
Perplexity’s Comet maintains search history on users’ local devices rather than company servers. It accesses URLs, text, images, search queries, download history, and cookies to support core operations. Comet’s agentic mode and Memory personal search tool leverage search history and preferences for task completion. The browser requests permissions for Google accounts, covering emails, contacts, settings, and calendars. It supports opt-in integrations with third parties. Security experts advise limiting the chatbot sidebar to non-sensitive webpages. Perplexity provides a detailed data settings explainer on its website.
Once stored on AI company servers, user data falls outside individual control. Providers repurpose this data to train large language models, frequently without explicit consent. Similar practices occur across social media, e-commerce, search engines, and messaging platforms through opaque agreements and default opt-ins. Browsers access particularly sensitive information. OpenAI fulfilled 105 U.S. government data requests in the first half of 2025.
Atlas offers two data usage options. The default “Improve the model for everyone” permits OpenAI to incorporate webpage information into ChatGPT training during chatbot queries. “Include web browsing” integrates full browsing history into training datasets. OpenAI anonymizes data prior to model use, though specifics on anonymization boundaries remain limited. Users may disable both settings.
AI browsers face substantial security vulnerabilities due to their operational design. Prompt injection attacks allow hackers to embed malicious instructions in backend systems. AI browsers struggle to differentiate these from legitimate inputs, risking extraction of login credentials, banking details, and personal data.
Brave’s October 2025 study, conducted by its privacy research team, identified prompt injections as a systemic challenge for AI browsers, increasing phishing risks. LayerX Security reported Perplexity Comet users face 85 percent higher vulnerability to such attacks compared to Google Chrome users.
OpenAI Chief Information Officer Dane Stuckey stated on X that prompt injection “remains a frontier, unsolved security problem.” Perplexity’s blog post called on AI companies to begin “rethinking security from the ground up.”