zlacker

[parent] [thread] 87 comments
1. mixedb+(OP)[view] [source] 2025-12-05 22:54:13
This is an architectural problem. The Lua bug, the longer global outage last week, and a long list of earlier such outages only uncover the problem with the architecture underneath. The original distributed, decentralized web architecture, with heterogeneous endpoints managed by a myriad of organisations, is much more resistant to this kind of global outage. Homogeneous systems like Cloudflare will continue to cause global outages. Rust won't help; people will always make mistakes, in Rust too. A robust architecture addresses this by not allowing a single mistake to bring down a myriad of unrelated services at once.
replies(21): >>WD-42+t >>ivanje+T4 >>chicke+V4 >>cyanyd+I6 >>tobyjs+N7 >>3roden+M9 >>UltraS+2a >>NicoJu+rc >>rossju+Uf >>JumpCr+Hi >>termin+kk >>rekrsi+9n >>Klonoa+vw >>m00dy+Nz >>cbsmit+5C >>johnco+cH >>jonhes+uN >>delusi+W81 >>psycho+bm1 >>lxgr+e32 >>theold+0j2
2. WD-42+t[view] [source] 2025-12-05 22:57:38
>>mixedb+(OP)
In other words, the consolidation on Cloudflare and AWS makes the web less stable. I agree.
replies(1): >>amazin+83
◧◩
3. amazin+83[view] [source] [discussion] 2025-12-05 23:16:11
>>WD-42+t
Usually I am allergic to pithy, vaguely dogmatic summaries like this but you're right. We have traded "some sites are down some of the time" for "most sites are down some of the time". Sure the "some" is eliding an order of magnitude or two, but this framing remains directionally correct.
replies(2): >>PullJo+Y4 >>UltraS+na
4. ivanje+T4[view] [source] 2025-12-05 23:26:51
>>mixedb+(OP)
Robust architecture that is serving 80M requests/second worldwide?

My answer would be that no one product should get this big.

5. chicke+V4[view] [source] 2025-12-05 23:26:59
>>mixedb+(OP)
You're not wrong, but where's the robust architecture you're referring to? The reality of providing reliable services on the internet is far beyond the capabilities of most organizations.
replies(1): >>coderj+gn1
◧◩◪
6. PullJo+Y4[view] [source] [discussion] 2025-12-05 23:27:05
>>amazin+83
Does relying on larger players result in better overall uptime for smaller players? AWS is providing me better uptime than if I assembled something myself because I am less resourced and less talented than that massive team.

If so, is it a good or bad trade to have more overall uptime but when things go down it all goes down together?

replies(3): >>Vorpal+l9 >>Lampre+id >>Aeolun+kd
7. cyanyd+I6[view] [source] 2025-12-05 23:41:44
>>mixedb+(OP)
Bro, but how do we make shareholder value if we don't monopolize and enshittify everything
8. tobyjs+N7[view] [source] 2025-12-05 23:51:16
>>mixedb+(OP)
I’m not sure I share this sentiment.

First, let’s set aside the separate question of whether monopolies are bad. They are not good but that’s not the issue here.

As to architecture:

Cloudflare has had some outages recently. However, what’s their uptime over the longer term? If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

But there’s a more interesting argument in favour of the status quo.

Assuming cloudflare’s uptime is above average, outages affecting everything at once are actually better for the average internet user.

It might not be intuitive but think about it.

How many Internet services does someone depend on to accomplish something such as their work over a given hour? Maybe 10 directly, and another 100 indirectly? (Make up your own answer, but it’s probably quite a few).

If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.

On the other hand, if each service experiences the same hour per year of downtime but at different times, then the person is likely to be blocked for closer to 100 hours per year.
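A quick simulation sketch of that back-of-envelope claim (the 100 services and one hour of downtime each are made-up numbers, as in the paragraph above):

```python
import random

random.seed(0)
N_SERVICES = 100            # services a person depends on (hypothetical)
DOWN_HOURS = 1              # each service is down one hour per year
HOURS_PER_YEAR = 24 * 365   # 8760

# Correlated case: every service shares the same one-hour outage,
# so the person loses exactly that one hour.
blocked_correlated = DOWN_HOURS

# Uncorrelated case: each service picks its own outage hour independently.
# The person is blocked in every hour where at least one service is down.
outage_hours = {random.randrange(HOURS_PER_YEAR) for _ in range(N_SERVICES)}
blocked_uncorrelated = len(outage_hours)  # close to 100, minus rare overlaps

print(blocked_correlated, blocked_uncorrelated)
```

With 100 independent one-hour outages in 8760 hours, overlaps are rare, so the blocked time lands near the full 100 hours.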

It’s not really a bad end-user experience that every service uses cloudflare. It’s more so a question of why cloudflare’s stability seems to be going downhill.

And that’s a fair question. Because if their reliability is below average, then the value prop evaporates.

replies(18): >>gerdes+He >>embedd+Mh >>Nextgr+9j >>ccakes+ck >>randme+7o >>kjgkjh+Fr >>fallou+cu >>wat100+qB >>dfex+qF >>atmosx+BS >>geyser+aT >>sunrun+BT >>nialse+UU >>tonyhb+RV >>hector+zX >>lxgr+242 >>clicke+r62 >>chamom+p92
◧◩◪◨
9. Vorpal+l9[view] [source] [discussion] 2025-12-06 00:03:12
>>PullJo+Y4
From a societal view it is worse when everything is down at once. It leads to a less resilient society: it is not great if I can't buy essentials from one store because their payment system is down (this happened to one supermarket chain in Sweden due to a hacker attack some years ago; it took weeks to fully fix everything, and then there was that whole CrowdStrike debacle globally more recently).

It is far worse if all of the competitors are down at once. To some extent you can and should have a little bit of stock at home (water, food, medicine, ways to stay warm, etc) but not everything is practical to do so with (gasoline for example, which could have knock on effects on delivery of other goods).

replies(1): >>pas+be1
10. 3roden+M9[view] [source] 2025-12-06 00:08:19
>>mixedb+(OP)
Would you rather be attacked by 1,000 wasps or 1 dog? A thousand paper cuts or one light stabbing? Global outages are bad but the choice isn’t global pain vs local pleasure. Local and global both bring pain, with different, complicated tradeoffs.

Cloudflare is down and hundreds of well paid engineers spring into action to resolve the issue. Your server goes down and you can’t get ahold of your Server Person because they’re at a cabin deep in the woods.

replies(4): >>jchw+Wa >>gblarg+1b >>psunav+Sb >>Lampre+0g
11. UltraS+2a[view] [source] 2025-12-06 00:10:58
>>mixedb+(OP)
They badly need smaller blast radius and to use more chaos engineering tools.
◧◩◪
12. UltraS+na[view] [source] [discussion] 2025-12-06 00:12:47
>>amazin+83
AWS and Cloudflare can recover from outages faster because they can bring dozens (hundreds?) of people to help, often the ones who wrote the software and designed the architecture. Outages at smaller companies I've worked for have often lasted multiple days, including an Exchange server outage that lasted 2 weeks.
◧◩
13. jchw+Wa[view] [source] [discussion] 2025-12-06 00:17:02
>>3roden+M9
In most cases we actually get both local and global pain, since most people are running servers behind Cloudflare.
◧◩
14. gblarg+1b[view] [source] [discussion] 2025-12-06 00:17:43
>>3roden+M9
Why would there be a centralized outage of decentralized services? The proper comparison seems to be attacked by a dog or a single wasp.
◧◩
15. psunav+Sb[view] [source] [discussion] 2025-12-06 00:24:13
>>3roden+M9
If you've allowed your Server Person to be a single point of failure out innawoods, that's an organizational problem, not a technological one.

Two is one and one is none.

16. NicoJu+rc[view] [source] 2025-12-06 00:28:26
>>mixedb+(OP)
You should really look at Cloudflare more closely.

There is not a single company that makes their infrastructure as globally available as Cloudflare does.

Additionally, Cloudflare's downtime seems to be objectively less than the others'.

This time, it took 25 minutes for 28% of the network.

And that while being the only ones to fix a global vulnerability.

There is a reason other clouds can't match the responsiveness and innovation that Cloudflare brings.

◧◩◪◨
17. Lampre+id[view] [source] [discussion] 2025-12-06 00:35:33
>>PullJo+Y4
When only one thing goes down, it's easier to compensate with something else, even for people who are doing critical work but who can't fix IT problems themselves. It means there are ways the non-technical workforce can figure out to keep working, even if the organization doesn't have on-site IT.

Also, if you need to switchover to backup systems for everything at once, then either the backup has to be the same for everything and very easily implementable remotely - which to me seems unlikely for specialty systems, like hospital systems, or for the old tech that so many organizations still rely on (and remember the CrowdStrike BSODs that had to be fixed individually and in person and so took forever to fix?) - or you're gonna need a LOT of well-trained IT people, paid to be on standby constantly, if you want to fix the problems quickly, on account of they can't be everywhere at once.

If the problems are more spread out over time, then you don't need to have quite so many IT people constantly on standby. Saves a lot of $$$, I'd think.

And if problems are smaller and more spread out over time, then an organization can learn how to deal with them regularly, as opposed to potentially beginning to feel and behave as though the problem will never actually happen. And if they DO fuck up their preparedness/response, the consequences are likely less severe.

◧◩◪◨
18. Aeolun+kd[view] [source] [discussion] 2025-12-06 00:35:38
>>PullJo+Y4
> AWS is providing me better uptime than if I assembled something myself because I am less resourced and less talented than that massive team.

Is it? I can’t say that my personal server has been (unplanned) down at any time in the past 10 years, and these global outages have just flown right past it.

replies(1): >>Aperoc+Cq
◧◩
19. gerdes+He[view] [source] [discussion] 2025-12-06 00:48:26
>>tobyjs+N7
All of my company's hosted web sites have way better uptimes and availability than CF but we are utterly tiny in comparison.

With only some mild blushing, you could describe us as "artisanal" compared to the industrial monstrosities, such as Cloudflare.

Time and time again we get these sorts of issues with the massive cloudy chonks and they are largely due to the sort of tribalism that used to be enshrined in the phrase: "no one ever got fired for buying IBM".

We see the dash to the cloud and the shoddy state of in-house corporate IT as a result. "We don't need in-house knowledge, we have 'MS copilot 365 office thing' that looks after itself, and now it's intelligent - yay \o/"

Until I can't, I'm keeping it as artisanal as I can for me and my customers.

replies(1): >>foobar+0C1
20. rossju+Uf[view] [source] 2025-12-06 01:00:43
>>mixedb+(OP)
You have a heterogeneous, fault-free architecture for the Cloudflare problem set? Interesting! Tell us more.
◧◩
21. Lampre+0g[view] [source] [discussion] 2025-12-06 01:02:29
>>3roden+M9
It's not "1,000 wasps or 1 dog", it's "1,000 dogs at once" or "1 dog at a time, 1,000 different times". A rare but huge and coordinated siege, or a steady and predictable background radiation of small issues.

The latter is easier to handle, easier to fix, and much more survivable if you do fuck it up a bit. It gives you some leeway to learn from mistakes.

If you make a mistake during the 1000 dog siege, or if you don't have enough guards on standby and ready to go just in case of this rare event, you're just cooked.

replies(1): >>philip+4Q
◧◩
22. embedd+Mh[view] [source] [discussion] 2025-12-06 01:19:10
>>tobyjs+N7
> Cloudflare has had some outages recently. However, what’s their uptime over the longer term? If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

Why is that the only option? Cloudflare could offer solutions that let people run their software themselves, after paying some license fee. Or there could be many companies people use instead, instead of everyone flocking to one because of cargoculting "You need a CDN like Cloudflare before you launch your startup bro".

replies(2): >>Moto74+3j >>tobyjs+nj
23. JumpCr+Hi[view] [source] 2025-12-06 01:25:31
>>mixedb+(OP)
> Homogeneous systems like Cloudflare will continue to cause global outages

But the distributed system is vulnerable to DDOS.

Is there an architecture that maintains the advantages of both systems? (Distributed resilience with a high-volume failsafe.)

◧◩◪
24. Moto74+3j[view] [source] [discussion] 2025-12-06 01:28:50
>>embedd+Mh
What you’re suggesting is not trivial. Otherwise we wouldn’t use various CDNs. To do what Cloudflare does your starting point is “be multiple region/multiple cloud from launch” which is non-trivial especially when you’re finding product-market fit. A better poor man’s CDN is object storage through your cloud of choice serving HTTP traffic. Cloudflare also offers layers of security and other creature comforts. Ignoring the extras they offer, if you build what they offer you have effectively made a startup within a startup.

Cloudflare isn’t the only game in town either. Akamai, Google, AWS, etc all have good solutions. I’ve used all of these at jobs I’ve worked at and the only poor choice has been to not use one at all.

◧◩
25. Nextgr+9j[view] [source] [discussion] 2025-12-06 01:29:43
>>tobyjs+N7
> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

I disagree; most people need only a subset of Cloudflare's features. Operating just that subset avoids the risk of the other moving parts (that you don't need anyway) ruining your day.

Cloudflare is also a business and has its own priorities like releasing new features; this is detrimental to you because you won't benefit from said feature if you don't need it, yet still incur the risk of the deployment going wrong like we saw today. Operating your own stack would minimize such changes and allow you to schedule them to a maintenance window to limit the impact should it go wrong.

The only feature Cloudflare (or its competitors) offers that can't be done cost-effectively yourself is volumetric DDoS protection where an attacker just fills your pipe with junk traffic - there's no way out of this beyond just having a bigger pipe, which isn't reasonable for any business short of an ISP or infrastructure provider.

replies(1): >>Araina+xq
◧◩◪
26. tobyjs+nj[view] [source] [discussion] 2025-12-06 01:31:52
>>embedd+Mh
What do you think Cloudflare’s core business is? Because I think it’s two things:

1. DDoS protection

2. Plug n’ Play DNS and TLS (termination)

Neither of those make sense for self-hosted.

Edit: If it’s unclear, #2 doesn’t make sense because if you self-host, it’s no longer plug n’ play. The existing alternatives already serve that case equally well (even better!).

replies(1): >>stingr+mk
◧◩
27. ccakes+ck[view] [source] [discussion] 2025-12-06 01:38:59
>>tobyjs+N7
> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

The point is that it doesn’t matter. A single site going down has a very small chance of impacting a large number of users. Cloudflare going down breaks an appreciable portion of the internet.

If Jim’s Big Blog only maintains 95% uptime, most people won’t care. If BofA were at 95%.. actually same. Most of the world aren’t BofA customers.

If Cloudflare is at 99.95%, then the world suffers.

replies(5): >>sherma+Ry >>chii+9C >>johnco+YG >>rainco+FS >>esrauc+tg1
28. termin+kk[view] [source] 2025-12-06 01:40:23
>>mixedb+(OP)
It's not as simple as that. What will result in more downtime, dependency on a single centralized service or not being behind Cloudflare? Clearly it's the latter or companies wouldn't be behind Cloudflare. Sure, the outages are more widespread now than they used to be, but for any given service the total downtime is typically much lower than before centralization towards major cloud providers and CDNs.
◧◩◪◨
29. stingr+mk[view] [source] [discussion] 2025-12-06 01:40:41
>>tobyjs+nj
Cloudflare Zero-Trust is also very core to their enterprise business.
30. rekrsi+9n[view] [source] 2025-12-06 02:13:13
>>mixedb+(OP)
On the other hand, as long as the entire internet goes down when Cloudflare goes down, I'll be able to host everything there without ever getting flak from anyone.
◧◩
31. randme+7o[view] [source] [discussion] 2025-12-06 02:23:20
>>tobyjs+N7
> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

I’m tired of this sentiment. Imagine if people said, why develop your own cloud offering? Can you really do better than VMWare..?

Innovation in technology has only happened because people dared to do better, rather than giving up before they started…

◧◩◪
32. Araina+xq[view] [source] [discussion] 2025-12-06 02:47:30
>>Nextgr+9j
>The only feature Cloudflare (or its competitors) offers that can't be done cost-effectively yourself is volumetric DDoS protection

.... And thanks to AI everyone needs that all the time now since putting a site on the Internet means an eternal DDoS attack.

◧◩◪◨⬒
33. Aperoc+Cq[view] [source] [discussion] 2025-12-06 02:48:07
>>Aeolun+kd
Has your ISP never gone down? Or did it go down some night and you just never realized?
◧◩
34. kjgkjh+Fr[view] [source] [discussion] 2025-12-06 03:00:08
>>tobyjs+N7
That's an interesting point, but in many (most?) cases productivity doesn't depend on all services being available at the same time. If one service goes down, you can usually be productive by using an alternative (e.g. if HN is down you go to Reddit, if email isn't working you catch up with Slack).
replies(2): >>sema4h+Dw >>tobyjs+CB
◧◩
35. fallou+cu[view] [source] [discussion] 2025-12-06 03:24:58
>>tobyjs+N7
"My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting. Outsourcing that single point of failure doesn't cure my design of that flaw, especially when that architecture's intended use-case is to provide redundancy and fault-tolerance.

The problem with pursuing efficiency as the primary value prop is that you will necessarily end up with a brittle result.

replies(1): >>lockni+AT
36. Klonoa+vw[view] [source] 2025-12-06 03:48:45
>>mixedb+(OP)
> Rust won't help, people will always make mistakes, also in Rust.

They don't just use Rust for "protection", they use it first and foremost for performance. They have ballpark-to-matching C++ performance with a realistic ability to avoid a myriad of default bugs. This isn't new.

You're playing armchair quarterback with nothing to really offer.

◧◩◪
37. sema4h+Dw[view] [source] [discussion] 2025-12-06 03:50:10
>>kjgkjh+Fr
If HN, Reddit, email, Slack and everything else is down for a day, I think my productivity would actually go up, not down.
replies(1): >>zqna+yL
◧◩◪
38. sherma+Ry[view] [source] [discussion] 2025-12-06 04:17:47
>>ccakes+ck
Maybe worlds can just live without the internet for a few hours.

There are likely emergency services dependent on Cloudflare at this point, so I’m only semi serious.

replies(2): >>lockni+KS >>p-e-w+0m1
39. m00dy+Nz[view] [source] 2025-12-06 04:30:43
>>mixedb+(OP)
Obviously Rust is the answer to these kinds of problems. But if you are Cloudflare and have an important company at a global scale, you need to set high standards for your Rust code. Developers should dance and celebrate at the end of the day if their code compiles in Rust.
◧◩
40. wat100+qB[view] [source] [discussion] 2025-12-06 04:48:56
>>tobyjs+N7
That’s fine if it’s just some random office workers. What if every airline goes down at the same time because they all rely on the same backend providers? What if every power generator shuts off? “Everything goes down simultaneously” is not, in general, something to aim for.
replies(1): >>tazjin+pi1
◧◩◪
41. tobyjs+CB[view] [source] [discussion] 2025-12-06 04:50:57
>>kjgkjh+Fr
Many (I’d speculate most) workflows involve moving and referencing data across multiple applications. For example, read from a spreadsheet while writing a notion page, then send a link in Slack. If any one app is down, the task is blocked.

Software development is a rare exception to this. We’re often writing from scratch (same with designers, and some other creatives). But these are definitely the exception compared to the broader workforce.

Same concept applies for any app that’s built on top of multiple third-party vendors (increasingly common for critical dependencies of SaaS)
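A tiny sketch of why multi-app workflows are so fragile: if a task needs k independent services up simultaneously, the availabilities multiply (the 0.999 figure below is purely hypothetical):

```python
# A workflow that touches k services at once (spreadsheet + Notion + Slack,
# say) is only unblocked when all k are up simultaneously. With independent
# per-service availability p, that probability is p ** k.
p = 0.999  # hypothetical per-service availability

for k in (1, 3, 10):
    print(k, p ** k)
```

Even at three nines per service, a ten-service workflow drops to roughly 99% availability.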

42. cbsmit+5C[view] [source] 2025-12-06 04:56:30
>>mixedb+(OP)
I find this sentiment amusing when I consider the vast outages of the "good ol' days".

What's changed is a) our second-by-second dependency on the Internet and b) news/coverage.

◧◩◪
43. chii+9C[view] [source] [discussion] 2025-12-06 04:57:11
>>ccakes+ck
> If Cloudflare is at 99.95% then the world suffers

if the world suffers, those doing the "suffering" need to push that complaint/cost back up the chain - to the website operator, which would push the complaint/cost up to cloudflare.

The fact that nobody did - or just verbally complained without action - is evidence that they didn't really suffer.

In the meantime, BofA saved the cost of achieving 99.95% uptime themselves (presumably cloudflare does it cheaper than they could individually). So the entire system became more efficient as a result.

replies(2): >>yfw+ME >>lockni+6T
◧◩◪◨
44. yfw+ME[view] [source] [discussion] 2025-12-06 05:36:55
>>chii+9C
They didn't really suffer, or they don't have a choice?
◧◩
45. dfex+qF[view] [source] [discussion] 2025-12-06 05:46:52
>>tobyjs+N7
> If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.

> On the other hand, if each service experiences the same hour per year of downtime but at different times, then the person is likely to be blocked for closer to 100 hours per year.

Putting Cloudflare in front of a site doesn't mean that site's backend suddenly never goes down. Availability will now be worse - you'll have Cloudflare outages* affecting all the sites they proxy for, along with individual site back-end failures which will of course still happen.

* which are still pretty rare

◧◩◪
46. johnco+YG[view] [source] [discussion] 2025-12-06 06:18:01
>>ccakes+ck
Look at it as a user (or even operator) of one individual service that isn’t redundant or safety critical: if choice A has 1/2 the downtime of choice B, you can’t justify choosing choice B by virtue of choice A’s instability.
replies(1): >>moqmar+GL
47. johnco+cH[view] [source] 2025-12-06 06:20:30
>>mixedb+(OP)
Actually, maybe 1 hour downtime for ~ the whole internet every month is a public good provided by Cloudflare. For everyone that doesn’t get paged, that is.
◧◩◪◨
48. zqna+yL[view] [source] [discussion] 2025-12-06 07:31:27
>>sema4h+Dw
During the first Cloudflare outage, StackOverflow was down too.
◧◩◪◨
49. moqmar+GL[view] [source] [discussion] 2025-12-06 07:34:08
>>johnco+YG
That is exactly why you don't see Windows being used anymore in big corporations. /s
50. jonhes+uN[view] [source] 2025-12-06 08:09:09
>>mixedb+(OP)
Yeah, redundancy and efficiency are opposites. As engineers we always chase efficiency, but resilience depends on redundancy.
◧◩◪
51. philip+4Q[view] [source] [discussion] 2025-12-06 08:43:40
>>Lampre+0g
I don't quite see how this maps onto the situation. The "1,000 dog siege" was also resolved very quickly and transparently, so I would say it's actually better than even one of the "1 dog at a time" cases.
replies(1): >>rolisz+pj1
◧◩
52. atmosx+BS[view] [source] [discussion] 2025-12-06 09:17:51
>>tobyjs+N7
CloudFlare doesn’t have a good track record. It’s the third party that caused more outages for us than any other third party service in the last four years.
◧◩◪
53. rainco+FS[view] [source] [discussion] 2025-12-06 09:18:23
>>ccakes+ck
> A single site going down has a very small chance of impacting a large number of users

How? If Github is down how many people are affected? Google?

> Jim’s Big Blog only maintains 95% uptime, most people won’t care

Yeah, and in the world with Cloudflare people don't care if Jim's Blog is down either. So Cloudflare doesn't make things worse.

replies(1): >>dns_sn+TU
◧◩◪◨
54. lockni+KS[view] [source] [discussion] 2025-12-06 09:19:13
>>sherma+Ry
> Maybe worlds can just live without the internet for a few hours.

The world can also live a few hours without sewers, water supply, food, cars, air travel, etc.

But "can" and "should" are different words.

◧◩◪◨
55. lockni+6T[view] [source] [discussion] 2025-12-06 09:25:46
>>chii+9C
> The fact that nobody did - or just verbally complained without action - is evidence that they didn't really suffer.

What an utterly clueless claim. You're literally posting in a thread with nearly 500 posts of people complaining. Taking action takes time. A business just doesn't switch cloud providers overnight.

I can tell you in no uncertain terms that there are businesses impacted by Cloudflare's frequent outages that started work shedding their dependency on Cloudflare's services. And it's not just because of these outages.

◧◩
56. geyser+aT[view] [source] [discussion] 2025-12-06 09:26:13
>>tobyjs+N7
On the other hand, if one site is down you might have alternatives. Or, you can do something different until the site you needed is up again. Your argument that simultaneous downtime is more efficient than uncoordinated downtime because tasks usually rely on multiple sites being online simultaneously is an interesting one. Whether or not that's true is an empirical question, but I lean toward thinking it's not true. Things failing simultaneously tends to have worse consequences.
◧◩◪
57. lockni+AT[view] [source] [discussion] 2025-12-06 09:30:03
>>fallou+cu
> "My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting.

This is a simplistic opinion. Claiming services like Cloudflare are single points of failure is like complaining that your use of electricity to power servers is a single point of failure. Cloudflare sells a global network of highly reliable edge servers running services like caching, firewall, image processing, etc., and, more importantly, a global firewall that protects services against global distributed attacks. Until a couple of months ago, it was unthinkable to casual observers that Cloudflare could be such an utterly unreliable mess.

replies(2): >>fallou+5D1 >>kortil+9P1
◧◩
58. sunrun+BT[view] [source] [discussion] 2025-12-06 09:30:21
>>tobyjs+N7
> If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.

This doesn’t guarantee availability of those N services themselves though, surely? N services with a slightly lower availability target than N+1 with a slightly higher value?

More importantly, I’d say that this only works for non-critical infrastructure, and also assumes that the cost of bringing that same infrastructure back is constant or at least linear or less.

The 2025 Iberian Peninsula outage seems to show that’s not always the case.

◧◩◪◨
59. dns_sn+TU[view] [source] [discussion] 2025-12-06 09:48:08
>>rainco+FS
Terrible examples, Github and Google aren't just websites that one would place behind Cloudflare to try to improve their uptime (by caching, reducing load on the origin server, shielding from ddos attacks). They're their own big tech companies running complex services at a scale comparable to Cloudflare.
◧◩
60. nialse+UU[view] [source] [discussion] 2025-12-06 09:48:15
>>tobyjs+N7
Paraphrasing: We are setting aside the actual issue and looking for a different angle.

To me this reads as a form of misdirection, intentional or not. A monopolist has little reason to care about downstream effects, since customers have nowhere else to turn. Framing this as roll your own versus Cloudflare rather than as a monoculture CDN environment versus a diverse CDN ecosystem feels off.

That said, the core problem is not the monopoly itself but its enablers, the collective impulse to align with whatever the group is already doing, the desire to belong and appear to act the "right way", meaning in the way everyone else behaves. There are a gazillion ways of doing CDN, why are we not doing them? Why the focus on one single dominant player?

replies(1): >>citize+LX
◧◩
61. tonyhb+RV[view] [source] [discussion] 2025-12-06 10:00:42
>>tobyjs+N7
Cloudbleed. It’s been a fun time.
◧◩
62. hector+zX[view] [source] [discussion] 2025-12-06 10:22:07
>>tobyjs+N7
> On the other hand, if each service experiences the same hour per year of downtime but at different times, then the person is likely to be blocked for closer to 100 hours per year.

I think the parent post made a different argument:

- Centralizing most of the dependency on Cloudflare results in a major outage when something happens at Cloudflare; it is fragile because Cloudflare becomes the single point of failure. Like: oh, Cloudflare is down... oh, none of my SaaS services work anymore.

- In a world where this is not the case, we might see more outages, but they would be smaller and more contained. Like: oh, Figma is down? Fine, let me pick up another task and come back to Figma once it's back up. It's also easier to work around by having alternative providers as a fallback, as they are less likely to share the same failure point.

As a result, I don't think you'll be blocked 100 hours a year in scenario 2. You may observe 100 non-blocking inconveniences per year, vs a completely blocking Cloudflare outage.

And in observed uptime, I'm not even sure these providers ever won. We're running all our auxiliary services on a decent Hetzner box with a LB. Say what you want, but that uptime is looking pretty good compared to any services relying on AWS (Oct 20, 15 hours), Cloudflare (Dec 5 (half hour), Nov 18 (3 hours)). Easier to reason about as well. Our clients are much more forgiving when we go down due to Azure/GCP/AWS/Cloudflare vs our own setup though...

◧◩◪
63. citize+LX[view] [source] [discussion] 2025-12-06 10:26:13
>>nialse+UU
> Why the focus on one single dominant player?

I don’t know the answer to all the questions. But here I think it is just a way to avoid responsibility. If someone chooses CDN “number 3” and it goes down, business people *might* put the blame on this person for not choosing “the best”. I am not saying it is the right approach, I have just seen it happen too many times.

replies(1): >>nialse+H82
64. delusi+W81[view] [source] 2025-12-06 12:42:25
>>mixedb+(OP)
What you've identified here is a core part of what the banking sector calls the "risk based approach". Risk in that case is defined as the product of the chance of something happening and the impact of it happening. With this understanding we can make the same argument you're making, a little more clearly.

Cloudflare is really good at what they do, they employ good engineering talent, and they understand the problem. That lowers the chance of anything bad happening. On the other hand, they achieve that by unifying the infrastructure for a large part of the internet, raising the impact.

The website operator herself might be worse at implementing and maintaining the system, which would raise the chance of an outage. Conversely, it would also only affect her website, lowering the impact.

I don't think there's anything to dispute in that description. The discussion, then, is whether Cloudflare's good engineering lowers the chance of an outage happening more than it raises the impact. In other words, the things we can disagree about are the scaling factors; the core of the argument seems reasonable to me.
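A minimal sketch of that chance × impact framing (every number below is invented purely to illustrate the trade-off, not a real estimate for either party):

```python
# Risk = chance of failure x impact, per the "risk based approach" described
# above. A centralized provider fails less often but takes everyone down;
# a self-hosted site fails more often but only affects itself.
def risk(outages_per_year, sites_affected):
    return outages_per_year * sites_affected

centralized = risk(outages_per_year=2, sites_affected=1_000_000)
self_hosted = risk(outages_per_year=10, sites_affected=1)

print(centralized, self_hosted)
```

The disagreement in the thread is exactly about which of these made-up scaling factors is closer to reality.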

◧◩◪◨⬒
65. pas+be1[view] [source] [discussion] 2025-12-06 13:27:12
>>Vorpal+l9
it's not that simple, no?

Users want to do things. If their goal depends on a complex chain of functions (provided by various semi-independent services), then the ideal setup would be to have redundant providers, so users could simply "load balance" between them, and for the uptime states of separate high-level providers to be clustered (meaning that when Google is unavailable Bing is up, and when Random Site A goes down its payment provider goes down too, etc.).

So ideally sites would somehow sort themselves neatly into separate availability groups.

Otherwise simply having a lot of uncorrelated downtimes doesn't help (if we count the sum of downtime experienced by people). Though again it gets complicated by the downtime percentage, because there's likely a phase shift between the state where users can mostly complete their goals and the state where they cannot, due to too many cascading failures.

◧◩◪
66. esrauc+tg1[view] [source] [discussion] 2025-12-06 13:49:16
>>ccakes+ck
I'm not sure I follow the argument. If literally every individual site had an uncorrelated 99% uptime, that's still less available than a centralized 99.9% uptime. The "entire Internet" is much less available in the former setup.

It's like saying that Chipotle having an X% chance of tainted food is worse than local burrito places having a 2*X% chance. It's true through the lens that each individual event affects more people, but if you removed that Chipotle and replaced it with all local places, the total amount of illness would still be strictly higher; it's just tons of small events that are harder to write news articles about.
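The availability arithmetic here can be made concrete. A minimal sketch, using the uptime figures from the comparison above and an arbitrary, assumed user count, counting total expected person-hours of downtime per year:

```python
# Compare total expected downtime: one shared provider at 99.9% uptime vs.
# every user's site independently at 99% uptime. The uptime figures come from
# the comparison above; the user count is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365
USERS = 1_000_000

# Centralized: everyone is down together, but only 0.1% of the time.
centralized_downtime = USERS * HOURS_PER_YEAR * (1 - 0.999)

# Decentralized: outages are scattered, but each user loses 1% of the year.
decentralized_downtime = USERS * HOURS_PER_YEAR * (1 - 0.99)

print(round(centralized_downtime))    # 8760000 person-hours
print(round(decentralized_downtime))  # 87600000 person-hours
```

In raw person-hours the decentralized setup loses ten times more. The counterargument elsewhere in the thread is that correlation matters: ten scattered hours may hurt less than one hour in which everything is down at once.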

replies(2): >>psycho+Lm1 >>Akrony+ZE1
◧◩◪
67. tazjin+pi1[view] [source] [discussion] 2025-12-06 14:07:01
>>wat100+qB
That is literally how a large fraction of airlines work. It's called Amadeus, and it did have a big global outage not too long ago.
replies(1): >>wat100+Nt1
◧◩◪◨
68. rolisz+pj1[view] [source] [discussion] 2025-12-06 14:16:12
>>philip+4Q
Last week's Cloudflare outage was not resolved as quickly...
◧◩◪◨
69. p-e-w+0m1[view] [source] [discussion] 2025-12-06 14:40:19
>>sherma+Ry
The world dismantled landlines, phone booths, mail order catalogues, fax machines, tens of millions of storefronts, government offices, and entire industries in favor of the Internet.

So at this point no, the world can most definitely not “just live without the Internet”. And emergency services aren’t the only important thing that exists to the extent that anything else can just be handwaved away.

replies(1): >>171862+t92
70. psycho+bm1[view] [source] 2025-12-06 14:41:03
>>mixedb+(OP)
That's a reflection of social organisation. Pushing for hierarchical organisation with a few key centralising nodes will also shape business and technological decisions.

See also https://en.wikipedia.org/wiki/Conway%27s_law

◧◩◪◨
71. psycho+Lm1[view] [source] [discussion] 2025-12-06 14:48:16
>>esrauc+tg1
No, it's like saying that if one single point of failure in a global food supply chain fails, nobody's going to eat today. In contrast, if some supplier fails to deliver to a local food truck today, its customers will just go to the restaurant next door.
replies(1): >>esrauc+Tq1
◧◩
72. coderj+gn1[view] [source] [discussion] 2025-12-06 14:53:17
>>chicke+V4
I think it might be the organizational architecture that needs to change.

> However, we have never before applied a killswitch to a rule with an action of “execute”.

> This is a straightforward error in the code, which had existed undetected for many years

So they shipped an untested configuration change that triggered untested code straight to production. This is "tell me you have no tests without telling me you have no tests"-level facepalm. I work on safety-critical software, where if we had this type of quality escape, both internal auditors and external regulators would be breathing down our necks wondering how our engineering process failed and let this through. They need to rearchitect their org to put greater emphasis on verification and software quality assurance.

◧◩◪◨⬒
73. esrauc+Tq1[view] [source] [discussion] 2025-12-06 15:21:55
>>psycho+Lm1
Ah, OK. It is true that if there are a lot of fungible offerings, then worse-but-uncorrelated uptime can be more robust.

I think the question then is how much of the Internet has fungible alternatives, such that uncorrelated downtime can meaningfully reduce impact. If you keep a "to buy" shopping list somewhere, the existence of alternative shopping-list products doesn't help you: when the one you use is down, it's just down; the substitutes cannot substitute on short notice. Obviously some things have clear substitutes, but I actually think "has fungible alternatives" is mostly correlated with "being down for 30 minutes doesn't matter"; it seems that the things where you want one specific site are the ones where availability matters more.

replies(1): >>hunter+2z1
◧◩◪◨
74. wat100+Nt1[view] [source] [discussion] 2025-12-06 15:42:13
>>tazjin+pi1
Which should be a good example of why this should be avoided.
◧◩◪◨⬒⬓
75. hunter+2z1[view] [source] [discussion] 2025-12-06 16:23:32
>>esrauc+Tq1
The restaurant-next-door analogy, representing fungibility, isn't quite right. If BofA is closed and you want to do something in person with them, you can't go to an unrelated bank. If Spotify goes down for an hour, you're not likely to become a YT Music subscriber as a stopgap even though they're somewhat fungible. You'll simply wait, and the question is: can I shuffle my schedule instead of elongating it?

A better analogy is that if the restaurant you'll be going to is unexpectedly closed for a little while, you would do an after-dinner errand before dinner instead and then visit the restaurant a bit later. If the problem affects both businesses (like a utility power outage) you're stuck, but you can simply rearrange your schedule if problems are local and uncorrelated.

replies(1): >>psycho+KF1
◧◩◪
76. foobar+0C1[view] [source] [discussion] 2025-12-06 16:46:21
>>gerdes+He
Sorry about the downvotes, but this is true: many times, with some basic HA, you get better uptime than the big cloud boys. Yes, their stack and tech are fancier, but we also need to factor in how much CF messes with things vs. self-hosted. Anyway, the self-hosted wisdom is RIP these days, and I mostly just run CF Pages / KV :)
◧◩◪◨
77. fallou+5D1[view] [source] [discussion] 2025-12-06 16:55:40
>>lockni+AT
Electricity to your servers IS a single point of failure if all you do is depend on the power company to reliably feed you power. There is a reason that co-location centers have UPS and generator backups for power.

It may have been unthinkable to some casual observers that creating a giant single point of failure for the internet was a bad idea, but it was entirely thinkable to others.

◧◩◪◨
78. Akrony+ZE1[view] [source] [discussion] 2025-12-06 17:11:11
>>esrauc+tg1
Also, what about individual sites having 99% uptime while sitting behind CF with an uncorrelated uptime of 99.9%?

Just because CF is up doesn't mean the site is.
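A quick sketch of that compounding, assuming the two uptimes are independent: the user only gets a response when both the origin and the proxy are up, so the availabilities multiply.

```python
# Serial availability: a site fronted by a proxy is reachable only when both
# layers are up. The figures are the illustrative ones from the comment above.

site_uptime = 0.99    # the origin site itself
proxy_uptime = 0.999  # CF sitting in front of it

combined = site_uptime * proxy_uptime
print(f"{combined:.4f}")  # 0.9890 -- lower than either layer alone
```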

◧◩◪◨⬒⬓⬔
79. psycho+KF1[view] [source] [discussion] 2025-12-06 17:16:23
>>hunter+2z1
If utility power outage is put on the table, then the analogy is almost everyone solely relying on the same grid, in contrast with being wired to a large set of independent providers or even using their own local solar panel or whatever autonomous energy source.
◧◩◪◨
80. kortil+9P1[view] [source] [discussion] 2025-12-06 18:34:53
>>lockni+AT
You do know that data centers use backup generators because electricity is a single point of failure right? They even have multiple power supplies plugged into different circuits.
81. lxgr+e32[view] [source] 2025-12-06 20:30:04
>>mixedb+(OP)
Not too long ago, critical avionics were programmed by different software developers and the software was run on different hardware architectures, produced by different manufacturers. These heterogeneous systems produced combined control outputs via a quorum architecture – all in a single airplane.

Now half of the global economy runs on the same service provider, it seems…

◧◩
82. lxgr+242[view] [source] [discussion] 2025-12-06 20:38:16
>>tobyjs+N7
> If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.

The consequence of some services being offline is much, much worse than a person (or a billion) being bored in front of a screen.

Sure, it’s arguably not Cloudflare’s fault that these services are cloud-dependent in the first place, but even if service degrades somewhat gracefully in the ideal case, that’s a lot of global clustering of a lot of exceptional system behavior.

Or another analogy: every person probably passes out for a few minutes at one point or another in their life. Yet I wouldn’t want to imagine what happens if everybody got that over with at the very same time, without warning…

◧◩
83. clicke+r62[view] [source] [discussion] 2025-12-06 21:02:32
>>tobyjs+N7
If you’re using 10 services and 1 goes down, there’s a 9/10 chance you’re not using it, and you can switch to working on something else. If all 10 go down, you are actually blocked for an hour. Even 5 years ago, I can’t recall ever being actually impacted by an outage to the extent that I thought “well, might as well just go get something to eat, because everything is down”.
◧◩◪◨
84. nialse+H82[view] [source] [discussion] 2025-12-06 21:23:18
>>citize+LX
True. Nobody ever got fired for choosing IBM/Microsoft/Oracle/Cisco/etc. Likely an effect of stakeholder (executives/MBAs) brand recognition.
◧◩
85. chamom+p92[view] [source] [discussion] 2025-12-06 21:29:52
>>tobyjs+N7
When I’m working from home and the internet goes down, I don’t care. My poor private-equity owned corporation, think of the lost productivity!!

But if I was trying to buy insulin at 11 pm before benefits expire, or translate something at a busy train station in a foreign country, or submit my take-home exam, I would be freeeaaaking out.

The cloudflare-supported internet does a whole lot of important, time-critical stuff.

◧◩◪◨⬒
86. 171862+t92[view] [source] [discussion] 2025-12-06 21:30:31
>>p-e-w+0m1
In my opinion, the world actually should be able to live without the internet, but that's another matter.
replies(1): >>sherma+Oe2
◧◩◪◨⬒⬓
87. sherma+Oe2[view] [source] [discussion] 2025-12-06 22:20:31
>>171862+t92
That’s what I was getting at. There’s a lot of life that can be lived offline.
88. theold+0j2[view] [source] 2025-12-06 22:56:42
>>mixedb+(OP)
Notwithstanding that most people using Cloudflare aren't even benefiting from what it actually provides. They just use it...because reasons.