
It began with a shrug and a refresh rather than a siren, as these contemporary mini-disasters always do. Someone taps Uber Eats again while the little loading wheel spins as if it’s thinking deeply. A bettor’s app freezes mid-gesture. A gamer gets kicked, blames their own connection, and the same complaint keeps surfacing in the group chat. By the time people acknowledge it’s not them, the atmosphere has already shifted from annoyance to that particular, helpless rage reserved for invisible systems.
On its status page, Cloudflare acknowledged “issues,” including reports from customers of elevated 403 errors on the one.one.one.one landing page, a curiously symbolic place for a public-facing crack to appear. Cloudflare noted that DNS resolution through the public 1.1.1.1 resolver was not affected, but that other functions were misbehaving, producing errors and timeouts. If you’ve ever watched a supposedly simple status page while your own service burns, you know how those carefully worded updates feel: calm on the surface, frantic underneath.
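That distinction, resolver fine but edge misbehaving, is checkable from any laptop. Here is a minimal triage sketch (assuming the third-party dnspython package; the hostname is simply the one from the status reports) that separates “DNS is broken” from “the edge answered, but with a 403 or a timeout”:

```python
# Minimal triage sketch. Assumes the third-party dnspython package;
# the hostname is just the one named in the status reports.
import dns.exception
import dns.resolver
import urllib.error
import urllib.request

def dns_ok(hostname: str) -> bool:
    """Resolve via the public 1.1.1.1 resolver directly."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["1.1.1.1"]
    try:
        resolver.resolve(hostname, "A")
        return True
    except dns.exception.DNSException:
        return False

def http_status(url: str) -> int | None:
    """Fetch the URL and report what the edge actually said."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code   # e.g. 403: DNS worked, the edge answered, but refused us
    except (urllib.error.URLError, TimeoutError):
        return None       # timeout or connection failure: a different layer entirely

host = "one.one.one.one"  # the landing page users reported 403s on
print("DNS resolves:", dns_ok(host))
print("HTTP status:", http_status(f"https://{host}/"))
```

If the first line says `True` and the second says `403`, you are seeing exactly what Cloudflare described: resolution working, something behind it refusing.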
| Category | Details |
|---|---|
| Company | Cloudflare, Inc. |
| What it does | Web infrastructure services (security, performance, routing/CDN, DNS, and edge networking) (cloudflarestatus.com) |
| Incident window (reported) | Friday, 20 Feb 2026 (evening UTC) |
| What users saw | Elevated 403 errors and timeouts; widespread app/site failures |
| Cloudflare’s official status page | https://www.cloudflarestatus.com |
| Specific affected component (per Cloudflare/community posts) | Impact to a subset of BYOIP prefixes / prefix advertisements (routing) (Cloudflare Community) |
| Why it matters | Cloudflare sits in front of a huge number of services; when it stumbles, failures “fan out” quickly (The Independent) |
When the internet breaks, it usually breaks sideways. According to reports, the disruption hit several well-known services, Uber Eats and Bet365 among them, along with games and workplace platforms that people treat as nightly routines. Part of the frustration was that the details differed by region and by dependency: while you’re staring at a blank screen, your friend across town might be ordering dinner just fine. That refusal to behave consistently is perhaps the most psychologically damaging aspect of outages like this one.
Some of the more insightful under-the-hood explanations concentrated on IP prefix advertisements, which are tied to Cloudflare’s BYOIP products and are something most customers never think about. An incident report from Laravel Cloud, for instance, stated that a subset of customers, Laravel Cloud included, had their IP prefix advertisements withdrawn as a result of Cloudflare’s incident. “Prefix advertisements withdrawn” sounds like white-paper language until you translate it into real-world terms: traffic that once knew where to go suddenly doesn’t, and a perfectly healthy service becomes unreachable even though the app behind it is functioning properly.
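If the mechanics feel abstract, a toy model helps. The sketch below invents a routing table and two prefixes purely for illustration (nothing here speaks real BGP): routers pick the most specific advertised prefix covering a destination, so when the covering advertisement is withdrawn, there is simply no entry left to match.

```python
# Toy model of "prefix advertisements withdrawn". Nothing here speaks
# real BGP; the prefixes and table entries are invented for illustration.
import ipaddress

# A simplified view of what the rest of the internet "knows":
# advertised prefixes mapped to where traffic for them should go.
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "via Cloudflare edge",  # a BYOIP prefix
    ipaddress.ip_network("198.51.100.0/24"): "via some other ISP",
}

def route_for(dst: str) -> str | None:
    """Longest-prefix match: the most specific prefix containing dst wins."""
    addr = ipaddress.ip_address(dst)
    matches = [p for p in routing_table if addr in p]
    if not matches:
        return None  # no covering prefix advertised: packets have nowhere to go
    return routing_table[max(matches, key=lambda p: p.prefixlen)]

print(route_for("203.0.113.10"))  # -> 'via Cloudflare edge'

# The incident in miniature: the BYOIP advertisement is withdrawn.
del routing_table[ipaddress.ip_network("203.0.113.0/24")]

# The servers behind 203.0.113.10 may be perfectly healthy, but the
# internet no longer holds a route to them.
print(route_for("203.0.113.10"))  # -> None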
The unsettling part is how rapidly the narrative shifts from Cloudflare to architecture. A comment I saw going around encapsulated it: the entire internet depends on “like three companies” because it’s less expensive, or “modern,” depending on how cynical you are. Watching this play out, it seems we have optimized so aggressively for convenience that we have also optimized for collective failure. Not necessarily a malicious failure; just a typical human configuration-and-routing error.
In recent months, Cloudflare has been remarkably open about how large networks can malfunction. The company reported in January 2026 that an automated routing policy configuration error in Miami had caused a route leak, affecting external parties as traffic was inadvertently diverted through Cloudflare’s location. Cloudflare also reported a major traffic outage in December 2025 that affected a subset of customers, beginning at 08:47 UTC and ending about 25 minutes later. Engineers quietly respect these postmortems, which also, unintentionally, highlight how narrow the margin for error can be.
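The Miami incident is the mirror image of a withdrawal: instead of a route disappearing, an unintended one shows up. One classic way diversion happens (sketched below with invented prefixes, and not necessarily the exact mechanism of Cloudflare’s leak) is a mistakenly advertised more-specific prefix, which wins longest-prefix match and pulls traffic toward the wrong location.

```python
# Same toy routing table idea, opposite failure: a leaked, more-specific
# advertisement appears and wins. All prefixes and labels are invented.
import ipaddress

routing_table = {
    ipaddress.ip_network("192.0.2.0/24"): "intended path via origin",
}

def best_route(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    best = max((p for p in routing_table if addr in p), key=lambda p: p.prefixlen)
    return routing_table[best]

print(best_route("192.0.2.50"))  # -> 'intended path via origin'

# The leak: an automated policy mistakenly advertises a more-specific
# half of the prefix from one location. More specific wins, so traffic
# for that half quietly diverts there.
routing_table[ipaddress.ip_network("192.0.2.0/25")] = "diverted via Miami"

print(best_route("192.0.2.50"))  # -> 'diverted via Miami'
```

Notice that nothing was withdrawn and nothing went down; traffic simply went somewhere it was never meant to go, which is why route leaks can affect parties far outside the network that made the mistake.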
The way people think about outages has evolved, too. Ten years ago, a bad day online could feel like a random storm. Now it feels like a hiccup in the supply chain. After instantly checking status pages, Downdetector, and social media, people start mentally mapping dependencies: Cloudflare in front of X, a CDN in front of a betting platform, a routing issue knocking out services that didn’t “do” anything wrong. You can practically see the invisible diagram being drawn in real time across the internet, arrows and question marks multiplying.
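That mental exercise can be made literal. The toy dependency graph below (every service name and edge is invented) shows why one provider’s bad evening fans out to unrelated-looking apps while a service on its own infrastructure stays up:

```python
# A toy dependency graph; every name and edge here is invented.
DEPENDS_ON = {
    "food-delivery-app": ["cloudflare"],
    "betting-platform":  ["cdn-x", "cloudflare"],
    "game-servers":      ["cloudflare"],
    "indie-blog":        ["small-vps"],
}

def affected_by(provider: str) -> list[str]:
    """Services whose dependency list includes the failed provider."""
    return [svc for svc, deps in DEPENDS_ON.items() if provider in deps]

print(affected_by("cloudflare"))
# -> ['food-delivery-app', 'betting-platform', 'game-servers']
# The indie blog on its own VPS is untouched, which is exactly why your
# friend across town can order dinner while your screen stays blank.
```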
Then comes the part that users remember and businesses detest: timing. Friday-night outages are painful because they interfere with the small routines of the week: ordering food, making weekend plans, having a quick flutter, playing a game with friends. It’s not high tragedy, but it is a stark reminder of how much infrastructure mediates modern leisure. People don’t just become less productive; they lose the sense that everything is working as it should. The irritation is mild, but it is persistent, and it accumulates.
The long-term implications of this specific Cloudflare incident are still unknown. Most people move on once the apps start loading again, and the web always bounces back. But the memory is likely to stick with developers and operators, particularly those who watched their prefix advertisements get withdrawn. Not because Cloudflare is especially irresponsible, but because the episode fits a pattern: the internet’s most “boring” layers are also its most consequential, and they politely, repeatedly remind us that resilience is not the same as uptime.

