Cloudflare experienced a 25-minute outage on December 5, 2025, affecting 28% of HTTP traffic. The incident was triggered by configuration changes addressing a critical React vulnerability, exposing a bug in Cloudflare's FL1 proxy software.
Major Cloudflare Outage Disrupts Global Internet Services
On December 5, 2025, Cloudflare experienced a significant network outage that affected approximately 28% of all HTTP traffic served by the company's global infrastructure. The incident, which lasted approximately 25 minutes from 08:47 to 09:12 UTC, caused widespread HTTP 500 errors across numerous websites and services that rely on Cloudflare's content delivery network and security services.
Root Cause: Security Patch Gone Wrong
The outage was triggered by configuration changes Cloudflare was implementing to protect customers against a critical React Server Components vulnerability, CVE-2025-55182. This vulnerability, rated CVSS 10.0 (the highest possible severity), allows remote code execution via insecure deserialization of malicious requests, and affects React versions 19.0-19.2 and Next.js versions 15-16.
Cloudflare was increasing its Web Application Firewall (WAF) buffer size from 128KB to 1MB to better protect customers using React applications. During this process, the company attempted to disable an internal WAF testing tool that didn't support the increased buffer size. This seemingly minor change, deployed through Cloudflare's global configuration system, exposed a previously unknown bug in the company's FL1 proxy software.
Technical Breakdown: The Lua Exception That Broke the Internet
When the killswitch was applied to disable the testing tool, it triggered a Lua exception in Cloudflare's rules module:
'[lua] Failed to run module rulesets callback late_routing: /usr/local/nginx-fl/lua/modules/init.lua:314: attempt to index field 'execute' (a nil value)'
This error occurred because, after the killswitch was applied, the code attempted to index a 'rule_result.execute' field that no longer existed. The bug had lain undetected for years in Cloudflare's FL1 proxy, which is scripted in Lua. Notably, the same error did not occur in Cloudflare's newer FL2 proxy, which is written in Rust, highlighting the benefits of strongly typed programming languages.
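The FL1/FL2 contrast can be illustrated with a minimal Rust sketch. The names below (RuleResult, ExecuteAction, late_routing) are hypothetical, not Cloudflare's actual code: the point is that where Lua lets code index a nil field and crash at runtime, Rust models a possibly-absent field as an Option, so the compiler forces every call site to handle the "killswitch removed it" case.

```rust
// Illustrative sketch only; names are invented, not Cloudflare's code.
struct ExecuteAction {
    ruleset_id: String,
}

struct RuleResult {
    // The field a killswitch might remove is an Option, so its absence
    // must be handled explicitly at compile time.
    execute: Option<ExecuteAction>,
}

fn late_routing(rule_result: &RuleResult) -> String {
    match &rule_result.execute {
        Some(action) => format!("routing to ruleset {}", action.ruleset_id),
        // The equivalent of Lua's "attempt to index a nil value" becomes
        // an explicit branch instead of a runtime exception.
        None => String::from("no execute action; falling through"),
    }
}

fn main() {
    // Simulate the killswitch: the execute action has been removed.
    let disabled = RuleResult { execute: None };
    println!("{}", late_routing(&disabled));
}
```

In the Lua version, forgetting the nil check is a latent runtime crash; in the Rust version, deleting the None arm is a compile error, which is the property the post-mortem credits for FL2 being unaffected.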
Impact and Scope
The outage affected customers whose web assets were served by Cloudflare's older FL1 proxy and who also had the Cloudflare Managed Ruleset deployed. Approximately 28% of all HTTP traffic passing through Cloudflare's network was impacted, causing HTTP 500 errors for affected websites. Major platforms including X (Twitter), LinkedIn, Zoom, Spotify, Discord, Canva, ChatGPT, and various cryptocurrency exchanges reported issues during the outage window.
'Any outage of our systems is unacceptable, and we know we have let the Internet down again following the incident on November 18,' stated Cloudflare CTO Dane Knecht in the company's official post-mortem blog post.
Second Major Incident in Two Weeks
This December 5 outage followed a similar incident on November 18, 2025, where Cloudflare experienced a longer availability disruption affecting nearly all customers. Both incidents shared concerning similarities: they were triggered by configuration changes intended to address security concerns, and both propagated rapidly through Cloudflare's global network.
Cloudflare, which according to Wikipedia serves approximately 19.3% of all websites, has become critical internet infrastructure. The company's position between users and origin servers means that when Cloudflare experiences issues, even fully functional applications appear broken to end users.
Planned Improvements and Industry Implications
Following both incidents, Cloudflare has committed to implementing several critical improvements:
- Enhanced Rollouts & Versioning: Implementing gradual deployment systems with health validation for configuration changes
- Streamlined Break Glass Capabilities: Ensuring critical operations remain possible during failures
- 'Fail-Open' Error Handling: Systems will default to known-good states rather than dropping requests when encountering errors
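The fail-open principle from the list above can be sketched in a few lines of Rust. This is a hypothetical illustration, not Cloudflare's design: the idea is simply that when a configuration update fails to validate, the proxy keeps serving with the last known-good configuration instead of turning the error into dropped requests.

```rust
// Hypothetical fail-open config handling; not Cloudflare's actual code.
#[derive(Clone, Debug, PartialEq)]
struct ProxyConfig {
    waf_buffer_bytes: usize,
}

// Parse a raw config update (here, just a buffer size in bytes).
fn parse_config(raw: &str) -> Result<ProxyConfig, String> {
    raw.trim()
        .parse::<usize>()
        .map(|n| ProxyConfig { waf_buffer_bytes: n })
        .map_err(|e| e.to_string())
}

// Fail-open: an invalid update leaves the known-good config in place
// rather than propagating an error into the request path.
fn apply_config(current: &ProxyConfig, raw_update: &str) -> ProxyConfig {
    match parse_config(raw_update) {
        Ok(new_cfg) => new_cfg,
        Err(_) => current.clone(),
    }
}

fn main() {
    let good = ProxyConfig { waf_buffer_bytes: 128 * 1024 };
    // A valid update (the 1MB buffer increase) is applied...
    let updated = apply_config(&good, "1048576");
    assert_eq!(updated.waf_buffer_bytes, 1_048_576);
    // ...while a corrupt update falls back to the known-good state.
    let fallback = apply_config(&good, "garbage");
    assert_eq!(fallback, good);
    println!("fail-open fallback works");
}
```

The trade-off is that fail-open can leave a protection temporarily disabled, which is why the rollout and break-glass improvements in the same list matter alongside it.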
'These kinds of incidents, and how closely they are clustered together, are not acceptable for a network like ours,' Cloudflare acknowledged in their official statement.
Timeline of Events
The incident unfolded rapidly: at 08:47 UTC, the configuration change was deployed and propagated to Cloudflare's network. By 08:48, full impact was felt across affected systems. Cloudflare declared an incident at 08:50 based on automated alerts. The change was reverted at 09:11, and by 09:12 UTC, all traffic was restored.
The outage highlights the delicate balance between security improvements and system stability in today's complex internet infrastructure. As Cloudflare works to implement its promised improvements, the internet community will be watching closely to ensure that critical infrastructure providers can deliver both security and reliability in an increasingly interconnected digital world.