Cloudflare admits a bot filter bug caused its worst outage since 2019

DATE POSTED: November 19, 2025

Cloudflare’s content delivery network suffered a significant outage on Tuesday, November 18, 2025, after a misconfigured query in its Bot Management system disrupted internet services around the world.

Cloudflare co-founder and CEO Matthew Prince detailed the cause in a blog post, tracing the failure to the Bot Management system, which classifies automated crawler traffic. Prince called it Cloudflare’s “worst outage since 2019.”

Approximately 20 percent of all web traffic flows through Cloudflare’s network, according to figures the company reported last year. The outage knocked numerous services offline for several hours, including X, ChatGPT, and Downdetector, echoing recent disruptions at Microsoft Azure and Amazon Web Services.

Cloudflare’s bot controls address challenges such as crawlers scraping data for generative AI training. The company recently introduced “AI Labyrinth,” a mitigation that feeds AI-generated content to non-compliant AI crawlers and bots to slow them down.
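
For intuition, here is a toy sketch of the labyrinth idea in Python; the routing and page-rendering helpers are entirely hypothetical, not Cloudflare’s implementation:

```python
import hashlib

def render_decoy_page(seed: int) -> str:
    # Deterministic filler text plus a link to yet another decoy page,
    # so a non-compliant crawler burns its crawl budget in the maze.
    token = hashlib.sha256(str(seed).encode()).hexdigest()[:8]
    return f"<p>{token}</p><a href='/maze/{token}'>more</a>"

def render_real_page(path: str) -> str:
    return f"<h1>Real content for {path}</h1>"

def serve(path: str, is_noncompliant_crawler: bool) -> str:
    # Ordinary visitors get the real page; flagged crawlers get decoys.
    if is_noncompliant_crawler:
        return render_decoy_page(seed=hash(path))
    return render_real_page(path)

print(serve("/pricing", is_noncompliant_crawler=False))
print(serve("/pricing", is_noncompliant_crawler=True))
```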

The outage, however, stemmed from a change to a database permissions system, not from generative AI technology, DNS, or malicious activity such as the “hyper-scale DDoS attack” Cloudflare initially suspected.

Prince explained that the Bot Management system’s machine learning model, which generates bot scores for network requests, uses a frequently updated configuration file to identify automated requests. A “change in our underlying ClickHouse query behaviour that generates this file caused it to have a large number of duplicate ‘feature’ rows.”
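
That description points to a query-scoping failure. The sketch below is a hypothetical reconstruction (invented schema and names, not Cloudflare’s actual ClickHouse query) of how a metadata query that loses its database filter emits each feature row once per visible database:

```python
# Simulated metadata rows: the same table is visible from two databases
# after a permissions change makes "r0" readable alongside "default".
rows = [
    {"database": db, "table": "bot_features", "name": f"feature_{i}"}
    for db in ("default", "r0")
    for i in range(5)
]

def feature_names(scoped: bool) -> list[str]:
    if scoped:
        # Old behaviour: results implicitly limited to one database.
        return [r["name"] for r in rows if r["database"] == "default"]
    # New behaviour: rows from both databases come back, so every
    # feature name appears twice in the generated file.
    return [r["name"] for r in rows]

print(len(feature_names(scoped=True)))   # 5
print(len(feature_names(scoped=False)))  # 10 -- duplicate "feature" rows
```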

The altered query caused ClickHouse to return duplicate rows, and the configuration file quickly grew past a preset memory limit, bringing down “the core proxy system that handles traffic processing for our customers, for any traffic that depended on the bots module.” As a result, customers whose rules used the generated bot score to block bots ended up cutting off legitimate traffic, while customers whose rules did not use the score stayed online.
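
A minimal sketch of that failure mode, assuming a hypothetical fixed feature cap standing in for the memory limit:

```python
FEATURE_LIMIT = 8  # hypothetical preset cap standing in for the memory limit

class BotsModule:
    """Stand-in for a component that preallocates room for a fixed
    number of features and treats overflow as a fatal error."""

    def __init__(self, feature_names: list[str]):
        if len(feature_names) > FEATURE_LIMIT:
            raise RuntimeError(
                f"{len(feature_names)} features exceeds preset limit {FEATURE_LIMIT}"
            )
        self.features = feature_names

def handle_request(config_rows: list[str]) -> tuple[int, str]:
    try:
        module = BotsModule(config_rows)
    except RuntimeError as exc:
        # Any traffic that depends on the bots module fails outright.
        return 500, str(exc)
    return 200, f"scored against {len(module.features)} features"

normal = [f"feature_{i}" for i in range(5)]
duplicated = normal * 2  # the same file after duplicate rows creep in

print(handle_request(normal))      # (200, 'scored against 5 features')
print(handle_request(duplicated))  # (500, '10 features exceeds preset limit 8')
```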

Cloudflare has outlined four specific plans to prevent similar incidents:

  • Hardening ingestion: Reinforcing the ingestion of Cloudflare-generated configuration files to the same standard as user-generated input.
  • Enabling more global kill switches: Implementing additional options to disable individual features network-wide (see the sketch after this list).
  • Eliminating core dumps: Preventing core dumps or other error reports from overwhelming system resources.
  • Reviewing failure modes: Examining error condition failure modes across all core proxy modules.
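
As one illustration of the kill-switch idea, here is a minimal sketch; the environment variable and function names are hypothetical, not Cloudflare’s implementation:

```python
import os

def run_bot_model(path: str) -> float:
    # Placeholder for the real scoring model.
    return 0.5

def bot_score(path: str) -> float | None:
    # Hypothetical global kill switch: when operators disable the bots
    # module, requests proceed without a score instead of failing.
    if os.environ.get("BOTS_MODULE_DISABLED") == "1":
        return None
    return run_bot_model(path)

os.environ["BOTS_MODULE_DISABLED"] = "1"
print(bot_score("/checkout"))  # None: rules fall back to a safe default
```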
