zlacker

[parent] [thread] 11 comments
1. olinge+(OP)[view] [source] 2022-07-08 20:44:20
While the naysayers will ask, "Why isn't this in the cloud?", I think the response times and uptime of Hacker News are really impressive. If anyone has a write-up of the infrastructure that runs HN, I would be interested. Maybe startups really can be run off of a Raspberry Pi.
replies(5): >>pyb+C >>tpmx+oo >>NKosma+xs >>lambda+sE >>clepto+f21
2. pyb+C[view] [source] 2022-07-08 20:46:13
>>olinge+(OP)
AWS has had more outages than HN in recent times
replies(3): >>mpyne+ob >>fartca+zp >>qwerty+VJ
3. mpyne+ob[view] [source] [discussion] 2022-07-08 21:32:11
>>pyb+C
AWS also operates at a significantly larger scale. When was the last AWS outage due to two critical disks failing at the same time?
replies(1): >>hosteu+al
4. hosteu+al[view] [source] [discussion] 2022-07-08 22:07:01
>>mpyne+ob
Failures due to increasing complexity are still failures.
replies(1): >>mpyne+vn
5. mpyne+vn[view] [source] [discussion] 2022-07-08 22:15:32
>>hosteu+al
Sure, but you're talking about all of AWS as if every customer is impacted when any part of AWS suffers a failure. But that's not the case, which makes it quite an apples/oranges comparison.

But even comparing the apples to the oranges, the HN status page someone else pointed out, https://hn.hund.io/, seems to show that HN has had more than one outage in just the past month. All but today's and last night's were quite short, but still. Sometimes you need some extra complexity if you want to get to zero downtime overall.

That's not something the HN website needs but I think AWS is doing fine even if that's your point of comparison.

6. tpmx+oo[view] [source] 2022-07-08 22:18:30
>>olinge+(OP)
It seems like it is in the cloud (AWS) now. See https://news.ycombinator.com/item?id=32027091.
7. fartca+zp[view] [source] [discussion] 2022-07-08 22:22:31
>>pyb+C
Agree. Same with places like GitHub.
8. NKosma+xs[view] [source] 2022-07-08 22:34:31
>>olinge+(OP)
If I were (re)designing this, I would keep the existing bare-metal server but also put in place double (or triple) cloud redundancy/failover. We all love HN so much that it should have zero downtime :-)
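[editor's note] The failover ordering described above could be sketched roughly as follows. This is a minimal illustration, not HN's actual setup; the backend names and health-check results are hypothetical.

```python
# Hypothetical failover selector: given health-check results for a primary
# bare-metal server and one or more cloud standbys, in priority order,
# return the first backend that is reporting healthy.
def choose_backend(health):
    """health: list of (name, is_up) tuples in priority order."""
    for name, is_up in health:
        if is_up:
            return name
    return None  # total outage: nothing healthy to route to

# Example: the bare-metal primary is down, so traffic would shift
# to the first healthy cloud standby.
print(choose_backend([
    ("bare-metal", False),
    ("cloud-a", True),
    ("cloud-b", True),
]))  # -> cloud-a
```

In practice the "switch" would be DNS or load-balancer reconfiguration, which is where most of the real complexity (and the comparison to AWS's own internal redundancy) lives.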
9. lambda+sE[view] [source] 2022-07-08 23:25:37
>>olinge+(OP)
HN was down for hours; no website hosted properly on a cloud provider is down for more than a few minutes a year. It's trivial to set up multiple providers and multiple regions. Instead there are a few servers with some admin swapping out disks, which is really embarrassing for a so-called tech site.
replies(1): >>coryrc+kr2
10. qwerty+VJ[view] [source] [discussion] 2022-07-08 23:52:29
>>pyb+C
Brief AWS outages were limited to us-east-1, where they appear to deploy canary builds, and I think they quickly learned from those missteps. OTOH I receive almost weekly emails about my Oracle Cloud instance's connectivity being down. I don't even understand who their customers are that can tolerate such frequent outages.

Edit - HN is on AWS now. https://news.ycombinator.com/item?id=32026571

11. clepto+f21[view] [source] 2022-07-09 02:17:54
>>olinge+(OP)
I made the comment that some of our web portals could run off of a Raspberry Pi perfectly fine. I wasn't necessarily suggesting we go do that, but merely trying to get the point across that we don't need 700 interwoven AWS systems to do what a single host with Apache + Postgres has been doing fine for years.
12. coryrc+kr2[view] [source] [discussion] 2022-07-09 15:45:44
>>lambda+sE
Hope you get a refund