zlacker

Tell HN: HN Moved from M5 to AWS

submitted by 1vuio0+(OP) on 2022-07-09 01:33:16 | 282 points 221 comments
[source] [links] [go to bottom]

After many years of remaining static, HN's IP address changed.^1

Old: 209.216.230.240

New: 50.112.136.166

Perhaps this is temporary.

Little known fact: HN is also available through Cloudflare. Unlike CF, AWS does not support TLS1.3.^2 This is not working while HN uses the AWS IP.

1. Years ago someone on HN tried to argue with me that IP addresses will never stay the same for very long. I used HN as an example of an address that does not change very often. I have been waiting for years. I collect historical DNS data. When I remind HN readers that most site addresses are more static than dynamic, I am basing that statement on evidence I have collected.
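
One simple way to keep that kind of record (a sketch; the log filename is arbitrary) is a daily cron job along these lines:

  $ # append today's date and HN's current A record to a running log
  $ echo "$(date -u +%F) $(dig +short news.ycombinator.com | head -n1)" >> hn-a-history.txt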

2. Across the board, so to speak. Every CF-hosted site I have encountered supports TLS1.3. Not true for AWS. Many (most?^3) only offer TLS1.2.

3. Perhaps a survey is in order.
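
One crude way to run such a survey (a sketch; hosts.txt is a hypothetical list of hostnames to check, and it assumes a reasonably recent openssl client):

  # print the protocol version each host negotiates by default
  while read h; do
    printf '%s: ' "$h"
    openssl s_client -connect "$h:443" -servername "$h" </dev/null 2>/dev/null |
      awk '/Protocol/ {print $3; exit}'
  done < hosts.txt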

replies(24): >>ethanw+a1 >>sillys+b1 >>fomine+i2 >>solard+14 >>matheu+j4 >>furyof+X4 >>dralle+65 >>wging+j6 >>deatha+A6 >>olalon+g7 >>betaby+r7 >>pelagi+s7 >>dangus+q9 >>throwo+C9 >>albert+5a >>rhacke+ha >>bushba+vb >>dang+Sb >>usrn+Wd >>metada+7e >>loxias+eg >>dubcee+oG >>ksec+Kk1 >>midisl+BH2
1. ethanw+a1[view] [source] 2022-07-09 01:42:46
>>1vuio0+(OP)
Is this why there was downtime?
replies(1): >>sillys+k1
2. sillys+b1[view] [source] 2022-07-09 01:42:46
>>1vuio0+(OP)
I was wondering how they got the server up so quickly when their actual hardware failed. Dropping a bunch of cash on a big virtual server while they rebuild makes sense.

It’s also a reminder that beefy virtual servers are pretty darn beefy nowadays. I wonder which tier they went with.

replies(1): >>jayd16+o6
◧◩
3. sillys+k1[view] [source] [discussion] 2022-07-09 01:43:58
>>ethanw+a1
Nah, the root cause was a double disk failure. Their primary server’s disk failed, and then their failover server failed. https://twitter.com/hnstatus/status/1545409429113229312?s=21...
4. fomine+i2[view] [source] 2022-07-09 01:53:23
>>1vuio0+(OP)
For those unaware: M5 means M5 Hosting (https://www.m5hosting.com/), not an EC2 M5 instance.
replies(3): >>SomeBo+33 >>benatk+W6 >>smn123+On1
◧◩
5. SomeBo+33[view] [source] [discussion] 2022-07-09 02:01:15
>>fomine+i2
It wouldn't be a move to AWS otherwise.
replies(1): >>infogu+74
6. solard+14[view] [source] 2022-07-09 02:08:55
>>1vuio0+(OP)
I'm surprised HN was on bare-metal to begin with. Is there a writeup about their infrastructure somewhere?
replies(3): >>sixoth+x6 >>colech+M7 >>eforti+i8
◧◩◪
7. infogu+74[view] [source] [discussion] 2022-07-09 02:09:11
>>SomeBo+33
It's fair to clarify. To someone immersed in AWS infra, the phrase "moved from M5 to AWS" could be puzzling.
8. matheu+j4[view] [source] 2022-07-09 02:10:22
>>1vuio0+(OP)
Have they disclosed how much traffic HN receives daily?
replies(1): >>boolea+m9
9. furyof+X4[view] [source] 2022-07-09 02:15:46
>>1vuio0+(OP)
That argument really got to you, did it?
replies(1): >>undowa+n7
10. dralle+65[view] [source] 2022-07-09 02:16:25
>>1vuio0+(OP)
I hope this is only temporary. Where else will we discuss AWS outages when AWS goes down?

Not even a joke.

replies(5): >>cherio+n5 >>f0e4c2+36 >>banana+v6 >>Pakdef+s9 >>lazyli+VJ
◧◩
11. cherio+n5[view] [source] [discussion] 2022-07-09 02:18:12
>>dralle+65
The IP appears to be us-west-2, so we will still be able to discuss us-east-1 outages alright!
replies(1): >>ignora+qa
◧◩
12. f0e4c2+36[view] [source] [discussion] 2022-07-09 02:22:45
>>dralle+65
If architected correctly, stuff deployed into aws stays up during all but the most extreme outages.
replies(2): >>oars+u7 >>fulafe+kD
13. wging+j6[view] [source] 2022-07-09 02:25:26
>>1vuio0+(OP)
> Unlike CF, AWS does not support TLS1.3. This is not working while HN uses the AWS IP.

This seemed implausible so I looked into it, and it's wrong as stated (at best, it needs to be made more precise to capture what you intended). First, you've mentioned Cloudflare, but the equivalent AWS product (CloudFront) does support TLS 1.3 (https://aws.amazon.com/about-aws/whats-new/2020/09/cloudfron...).

HN isn't behind CloudFront, though, so you probably mean their HTTP(s) load balancers (ALB) don't support TLS 1.3. Even that's an incomplete view of the load balancing picture, since the network load balancers (NLB) do support TLS 1.3, https://aws.amazon.com/about-aws/whats-new/2021/10/aws-netwo....
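
For anyone who wants to check a specific endpoint directly, a quick sketch (assumes OpenSSL 1.1.1+, which added the -tls1_3 flag):

  $ # force TLS 1.3; the handshake fails if the server does not offer it
  $ openssl s_client -connect news.ycombinator.com:443 -tls1_3 </dev/null 2>/dev/null | grep Protocol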

replies(3): >>Aeolun+z7 >>1vuio0+WM >>19h+Oe1
◧◩
14. jayd16+o6[view] [source] [discussion] 2022-07-09 02:25:43
>>sillys+b1
It was down all morning; I half expected a post about a trip to Best Buy.
replies(1): >>sillys+G8
◧◩
15. banana+v6[view] [source] [discussion] 2022-07-09 02:26:38
>>dralle+65
It appears to be pointing at a bare EC2 instance, no doubt a lift-and-shift.

Even during the most extreme AWS events, my EC2 instances running dedicated servers kept seeing Internet traffic.

replies(1): >>pmoria+LR
◧◩
16. sixoth+x6[view] [source] [discussion] 2022-07-09 02:26:52
>>solard+14
I vaguely remember hearing HN is a single file script.
replies(1): >>vortic+w7
17. deatha+A6[view] [source] 2022-07-09 02:27:22
>>1vuio0+(OP)
> When I remind HN readers that most site addresses are more static than dynamic, I am basing that statement on evidence I have collected.

Sure. But without seeing the other side's argument, I have to wonder if their point wasn't that they're not designed to be stable for the purpose of identifying a service/thing on the Internet; things can and do move and change. Hardware failure is a good example of that. Just like house addresses, those too are normally stable, but people can & do move. It's just that with software, it's as if we looked our friend up in the white pages¹ prior to every visit, which one might not do in real life.

¹oh God I'm dating myself here.

replies(2): >>phailh+r9 >>1vuio0+yf
◧◩
18. benatk+W6[view] [source] [discussion] 2022-07-09 02:29:29
>>fomine+i2
It also isn't a move to an unreleased Apple processor.

There's https://en.wikipedia.org/wiki/Apple_A5 and https://en.wikipedia.org/wiki/Apple_M1 https://en.wikipedia.org/wiki/Apple_M2

19. olalon+g7[view] [source] 2022-07-09 02:30:54
>>1vuio0+(OP)
> Little known fact: HN is also available through Cloudflare. Unlike CF, AWS does not support TLS1.3. This is not working while HN uses the AWS IP.

What does that mean? How do you access HN through CloudFlare and what do you mean by AWS not supporting TLS1.3? You can certainly run any https server on EC2, including one that supports TLS1.3.

replies(3): >>jffry+ia >>tedmis+Ns >>1vuio0+7L
◧◩
20. undowa+n7[view] [source] [discussion] 2022-07-09 02:31:30
>>furyof+X4
My first thought as well -- but it was phrased as, "of course there is some specific person here who, due to a long-running disagreement over facts, has been passively gathering relevant data over a period of years."

It's sort of like Rule 34, but for HN. "There is data of it"

replies(1): >>furyof+Md
21. betaby+r7[view] [source] 2022-07-09 02:32:14
>>1vuio0+(OP)
Consistently without IPv6, even though AWS supports IPv6.
replies(2): >>dang+eh >>Pakdef+7k1
22. pelagi+s7[view] [source] 2022-07-09 02:32:20
>>1vuio0+(OP)
I'm OK with whatever. I almost had an anxiety breakdown today... and realised how hooked I was on this site. I do not have any other social network accounts, and it was truly telling how much this has become my de facto 'twitter'/'facebook'.
replies(2): >>rpmism+6e >>riku_i+ef
◧◩◪
23. oars+u7[view] [source] [discussion] 2022-07-09 02:32:41
>>f0e4c2+36
And then where do we discuss these extreme outages?
replies(5): >>shawnz+48 >>rubyis+h8 >>static+3g >>VoidWh+Bi >>mekste+Ki
◧◩◪
24. vortic+w7[view] [source] [discussion] 2022-07-09 02:32:48
>>sixoth+x6
HN is written in Arc, I believe.
◧◩
25. Aeolun+z7[view] [source] [discussion] 2022-07-09 02:33:15
>>wging+j6
NLB’s support everything that goes over TCP or UDP, that’s not exceptionally surprising.
replies(1): >>WatchD+sa
◧◩
26. colech+M7[view] [source] [discussion] 2022-07-09 02:35:16
>>solard+14
I don’t remember the exact details but basically one step above “running on that one computer in the closet”.
◧◩◪◨
27. shawnz+48[view] [source] [discussion] 2022-07-09 02:37:46
>>oars+u7
where did you discuss outages of HN's previous provider?
replies(1): >>dralle+nd
◧◩◪◨
28. rubyis+h8[view] [source] [discussion] 2022-07-09 02:38:51
>>oars+u7
reddit.... shudders
replies(1): >>marioj+Cq
◧◩
29. eforti+i8[view] [source] [discussion] 2022-07-09 02:38:53
>>solard+14
https://news.ycombinator.com/item?id=28479595
◧◩◪
30. sillys+G8[view] [source] [discussion] 2022-07-09 02:42:10
>>jayd16+o6
Yeah. Bare metal failing is approximately the worst case scenario behind data loss. Being down all morning is an impressive recovery time, because they had to provision an EC2 server and transfer all data to it.
replies(1): >>pmoria+gV
◧◩
31. boolea+m9[view] [source] [discussion] 2022-07-09 02:45:34
>>matheu+j4
https://news.ycombinator.com/item?id=16076041

4 million requests per day in 2018.

replies(1): >>herpde+Ob
32. dangus+q9[view] [source] 2022-07-09 02:45:53
>>1vuio0+(OP)
This sort of post-outage discussion and speculation is kind of like debating where Justin Bieber gets his tires changed and what brand he’s going to use after he ran over a nail.

The only reason we are here doing this sort of thing is because we are “Justin Bieber” fans. We aren’t here because changing a tire is interesting or unique, nor will we learn anything from it – especially this particular tire (HN is like the Toyota Corolla of “vehicles” compared to the other complex mission-critical distributed systems that make up other popular web services).

replies(2): >>usrn+pe >>loxias+dl
◧◩
33. phailh+r9[view] [source] [discussion] 2022-07-09 02:45:55
>>deatha+A6
Aside: How did you get the superscript? Is that supported by HN's formatter or is that just a literal superscript character?
replies(1): >>deatha+ma
◧◩
34. Pakdef+s9[view] [source] [discussion] 2022-07-09 02:45:56
>>dralle+65
Twitter is nice for learning about outages...
35. throwo+C9[view] [source] 2022-07-09 02:47:29
>>1vuio0+(OP)
> I have been waiting for years. I collect historical DNS data.

Why do you collect this? And in what format?

replies(1): >>Amfy+Ie
36. albert+5a[view] [source] 2022-07-09 02:50:49
>>1vuio0+(OP)
Why the move?

Hopefully it’s simply M5 didn’t have a server ready and they’ll migrate back.

Vultr has a great assortment of bare metal servers.

https://www.vultr.com/products/bare-metal/#pricing

replies(2): >>jacoop+Bb >>dang+ec
37. rhacke+ha[view] [source] 2022-07-09 02:52:04
>>1vuio0+(OP)
It is noticeably slower now.
replies(3): >>metada+de >>dang+dr >>Tijdre+Gs1
◧◩
38. jffry+ia[view] [source] [discussion] 2022-07-09 02:52:08
>>olalon+g7
I too have no idea what OP means by saying it's "available through Cloudflare"

  $ dig +noall +answer A news.ycombinator.com
  news.ycombinator.com. 0 IN A 50.112.136.166
  
  $ nslookup 50.112.136.166
  166.136.112.50.in-addr.arpa name = ec2-50-112-136-166.us-west-2.compute.amazonaws.com.
◧◩◪
39. deatha+ma[view] [source] [discussion] 2022-07-09 02:52:42
>>phailh+r9
Unicode has code points for superscript/subscript digits. That one is U+00B9: https://www.compart.com/en/unicode/U+00B9 (So it's "normal text", as far as HN is concerned. Note that HN does filter some things, like emoji.)

I was on macOS when I typed it, there it's Control+Cmd+Space, and then search for "super" which gets close enough.

On my Linux machine, I can either do Compose, ^, 1, or Super+e and then search for it. (But both of these require configuration; either setting a key to be Compose (I sacrifice RAlt), or setting up whatever it is the IME I have is for Super+e.)

replies(1): >>virapt+fh
◧◩◪
40. ignora+qa[view] [source] [discussion] 2022-07-09 02:53:02
>>cherio+n5
but... the s3 buckets are in us-east-1, and postgres in ap-southeast-2. More regions better than one, for maximum impact with minimum effort.
replies(2): >>pojzon+5z >>pmoria+tQ
◧◩◪
41. WatchD+sa[view] [source] [discussion] 2022-07-09 02:53:08
>>Aeolun+z7
Yeah but NLB can offload TLS from the app, which is what the parent commenter linked to. It’s not just passing through the TLS from the app(which is also possible).
42. bushba+vb[view] [source] 2022-07-09 03:00:45
>>1vuio0+(OP)
Curious: why not Digital Ocean, which is a YC investment? I get why you chose AWS on a technical basis, but wouldn't it make sense to give the business to Digital Ocean to help their brand?
replies(3): >>dang+wc >>Godel_+Mp >>toast0+Qy
◧◩
43. jacoop+Bb[view] [source] [discussion] 2022-07-09 03:01:59
>>albert+5a
There is also Hetzner, their dedicated server pricing is very good.
replies(2): >>ta988+oc >>Amfy+Be
◧◩◪
44. herpde+Ob[view] [source] [discussion] 2022-07-09 03:03:41
>>boolea+m9
6 million per day as of 10 months ago: https://news.ycombinator.com/item?id=28479595
45. dang+Sb[view] [source] 2022-07-09 03:04:29
>>1vuio0+(OP)
It's temporary.
replies(3): >>edanm+6p >>herpde+mv >>futhey+c62
◧◩
46. dang+ec[view] [source] [discussion] 2022-07-09 03:08:17
>>albert+5a
> Why the move?

Our primary server died around 11pm last night (PST), so we switched to our secondary server, but then our secondary server died around 6am, and we didn't have a third.

The plan was always "in the unlikely event that both servers die at the same time, be able to spin HN up on AWS." We knew it would take us several hours to do that, but it seemed an ok tradeoff given how unlikely the both-servers-die-at-the-same-time scenario seemed at the time. (It doesn't seem so unlikely now. In fact it seems to have a probability of 1.)

Given what we knew when we made that plan, I'm pretty pleased with how things have turned out so far (fingers crossed—no jinx—definitely not gloating). We had done dry runs of this and made good-enough notes. It sucks to have been down for 8 hours, but it could have been worse, and without good backups (thank you sctb!) it would have been catastrophic.

Having someone as good as mthurman do most of the work is also a really good idea.

replies(8): >>omegal+Xc >>albert+vd >>nemoth+Df >>rstupe+Bg >>mwcamp+qt >>aditya+nv >>O_____+BM >>smn123+Zn1
◧◩◪
47. ta988+oc[view] [source] [discussion] 2022-07-09 03:10:01
>>jacoop+Bb
But they don't have servers in US do they?
replies(1): >>Amfy+Ee
◧◩
48. dang+wc[view] [source] [discussion] 2022-07-09 03:11:25
>>bushba+vb
Pretty sure Digital Ocean is not a YC-funded startup.

We wouldn't make HN decisions on that basis anyhow, though, I don't think. Maybe if all other things were literally equal.

◧◩◪
49. omegal+Xc[view] [source] [discussion] 2022-07-09 03:15:02
>>dang+ec
Do you have a postmortem on why both servers died so fast?
replies(1): >>dang+xd
◧◩◪◨⬒
50. dralle+nd[view] [source] [discussion] 2022-07-09 03:19:11
>>shawnz+48
Fewer people are interested in outages of HN's previous provider.
◧◩◪
51. albert+vd[view] [source] [discussion] 2022-07-09 03:19:56
>>dang+ec
Speaking for everyone, really appreciate you dang.

Question: so will HN be migrating back to M5 (or another hosting provider)?

replies(1): >>dang+Jf
◧◩◪◨
52. dang+xd[view] [source] [discussion] 2022-07-09 03:19:59
>>omegal+Xc
It was an SSD that failed in each case, and in a similar way (e.g. both were in RAID arrays but neither could be rebuilt from the array - but I am over my skis in reporting this, as I barely know what that means).

The disks were in two physically separate servers that were not connected to each other. I believe, however, that they were of similar make and model. So the leading hypothesis seems to be that perhaps the SSDs were from the same manufacturing batch and shared some defect. In other words, our servers were inbred! Which makes me want to link to the song 'Second Cousin' by Flamin' Groovies.

The HN hindsight consensus, to judge by the replies to https://news.ycombinator.com/item?id=32026606, is that this happens all the time, is not surprising at all, and is actually quite to be expected. Live and learn!

replies(4): >>whitep+Sf >>hoofhe+4i >>loxias+fk >>ksec+6j1
◧◩◪
53. furyof+Md[view] [source] [discussion] 2022-07-09 03:21:53
>>undowa+n7
I'm certainly pleased by the result
54. usrn+Wd[view] [source] 2022-07-09 03:23:29
>>1vuio0+(OP)
My mail server's IP has been the same for 4 years now. Even the machine I'm typing this on only changes IPs every couple years or so and that's a residential IP that isn't supposed to be static.

Of course you want everything in DNS and if your IP is supposed to be dynamic you should have provisions for automatically updating it (I have some shell script somewhere that calls nsupdate over ssh although I looked for it the other day and couldn't find it which is a bit disturbing.)
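
Roughly the kind of script described above, as a sketch (the hostnames, key path, and IP-echo service are all hypothetical):

  #!/bin/sh
  # push the current public IP to the authoritative server via nsupdate over ssh
  IP=$(curl -s https://ifconfig.me)
  printf 'update delete home.example.com A\nupdate add home.example.com 300 A %s\nsend\n' "$IP" |
    ssh ns.example.com nsupdate -k /etc/dyn.key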

replies(1): >>Tijdre+Lr1
◧◩
55. rpmism+6e[view] [source] [discussion] 2022-07-09 03:25:24
>>pelagi+s7
This is a special little corner of the internet, and I love it very much. I also have it blocked in /etc/hosts on every work computer I ever use.
56. metada+7e[view] [source] 2022-07-09 03:25:24
>>1vuio0+(OP)
I was surprised they only had one backup server, especially given the competitive price of rackmount hardware these days. More replicas needed.

Although this was a fun exercise to learn how lost I feel without HN. Damn.

replies(2): >>dang+8g >>andrea+My
◧◩
57. metada+de[view] [source] [discussion] 2022-07-09 03:26:26
>>rhacke+ha
It actually seems faster for me compared to the past two weeks. Not sure if I was being throttled or something, at times it felt like it.
replies(1): >>dang+or
◧◩
58. usrn+pe[view] [source] [discussion] 2022-07-09 03:28:02
>>dangus+q9
HN is written in a custom lisp dialect. I'd argue it's more like one of those designer cars than a corolla.
replies(1): >>dang+vh
◧◩◪
59. Amfy+Be[view] [source] [discussion] 2022-07-09 03:29:33
>>jacoop+Bb
They are not very good for hosting user-generated content; they suspend very fast for any sort of abuse complaint, and user-generated content will result in some abuse complaints.
replies(1): >>jacoop+ST
◧◩◪◨
60. Amfy+Ee[view] [source] [discussion] 2022-07-09 03:29:51
>>ta988+oc
they do, in Ashburn. I would not host production with them.
replies(2): >>whitep+Wf >>closep+pu
◧◩
61. Amfy+Ie[view] [source] [discussion] 2022-07-09 03:30:31
>>throwo+C9
There are commercial services that do this already: SecurityTrails.
◧◩
62. riku_i+ef[view] [source] [discussion] 2022-07-09 03:34:36
>>pelagi+s7
reddit has many interesting communities with good quality content and drama. But you probably know that already.
◧◩
63. 1vuio0+yf[view] [source] [discussion] 2022-07-09 03:37:32
>>deatha+A6
That was not the other side's "point". I routinely make the statement: Most sites submitted to HN have relatively static IP addresses, i.e., these addresses can change, but in fact they change only infrequently, if at all.^1 This is not an opinion. It is not a mindless regurgitation of something I read somewhere. I am looking at the data I have, not theorising. From where I sit, there is nothing to argue about.

1. Why do I state that. Because I kept reading about why DNS was created and always encountered the same parroted explanation, year after year. Something along the lines that IP addresses were constantly in flux. That may have been true when DNS was created and the www was young. But was it true today. I wanted to find out. I did experiments. I found I could use the same DNS data day after day, week after week, month after month, year after year.

Why would I care. Because by eliminating remote DNS lookups I was able to speed up the time it takes me to retrieve data from the www.^2 Instead of making the assumption that every site is going to switch IP addresses every second, minute, day or week, I assume that only a few will do that and most will not. I want to know about those sites that are changing their IP address. I want to know the reasons. When a site changes its IP address, I am alerted, as you see with today's change to HN's address. Whereas when people assume every site is frequently changing its IP address, they perform unnecessary DNS lookups for the majority of sites. That wastes time among other things. And, it seems, people are unaware when sites change addresses.

2. Another benefit for me is that when some remote DNS service goes down (this has happened several times), I can still use the www without interruption. I already have the DNS data I need. Meanwhile the self-proclaimed "experts" go into panic mode.

replies(4): >>iampim+Wh >>virapt+Yi >>loxias+6o >>bawolf+ow
◧◩◪
64. nemoth+Df[view] [source] [discussion] 2022-07-09 03:38:34
>>dang+ec
>The plan was always "in the unlikely event that both servers die at the same time, be able to spin HN up on AWS.

>We had done dry runs of this in the past,

Incredible. Actual disaster recovery.

◧◩◪◨
65. dang+Jf[view] [source] [discussion] 2022-07-09 03:39:38
>>albert+vd
We've only had 5 minutes to talk about it so far, but unless something changes, I don't see why we wouldn't go back to M5.
replies(1): >>booi+tA
◧◩◪◨⬒
66. whitep+Sf[view] [source] [discussion] 2022-07-09 03:41:17
>>dang+xd
Do you happen to know the make and model of SSD?
replies(1): >>wolfga+Oh
◧◩◪◨⬒
67. whitep+Wf[view] [source] [discussion] 2022-07-09 03:42:06
>>Amfy+Ee
Why?
replies(1): >>ev1+Zy
◧◩◪◨
68. static+3g[view] [source] [discussion] 2022-07-09 03:43:39
>>oars+u7
If there's an extreme AWS outage, you're pretty much stuck with in-person or POTS.
◧◩
69. dang+8g[view] [source] [discussion] 2022-07-09 03:45:28
>>metada+7e
Our thinking was: (1) keep a hot standby to fail over to when we need it—that keeps downtime to seconds in routine cases (like pre-planned maintenance) and minutes or an hour in most failure cases—for example, when our primary server died last night, HN was down for about an hour while we brought up the standby; and (2) In the unlikely event that both the primary and standby servers fail at the same time, be able to bring up a fresh server from backup within hours, not days. The latter case is what happened today, and in the end we were down for just under 8 hours. (Assuming we don't sink back into the pit of hell overnight.)

Assuming things don't fail again in the next day or two, since we still have a lot to take care of (fingers crossed—definitely not gloating), I feel like this was pretty reasonable. We don't have a lot of dev or ops resources—few people work on HN, and only me full-time these days. The more complex one's replica architecture, the higher the maintenance costs. The simplicity of our setup has served us well in the 9 years that we've been running it, and I feel like the tradeoff of "several hours downtime once a decade" is worth it if you draw one of those risk/cost managerial whiteboard things.

replies(3): >>metada+lq >>toast0+cz >>tannha+DD
70. loxias+eg[view] [source] 2022-07-09 03:46:20
>>1vuio0+(OP)
> someone on HN tried to argue with me that IP addresses will never stay the same for very long

Deeply deeply agree with you. Not whoever was arguing. :) My "dialup" VM in the cloud I use for assorted things has had the same IP for at least 7 years, probably longer. (Thanks Linode!) After a few years, it's honestly not that hard to remember an arbitrary 32bit int. :)

  $ w3m -dump http://846235814   # ;)
replies(2): >>1vuio0+8i >>bawolf+7m
◧◩◪
71. rstupe+Bg[view] [source] [discussion] 2022-07-09 03:49:36
>>dang+ec
If I can ask, what is an sctb?
replies(2): >>wolfga+ch >>quink+uh
◧◩◪◨
72. wolfga+ch[view] [source] [discussion] 2022-07-09 03:56:59
>>rstupe+Bg
Scott Bell is a former[1] HN moderator: https://news.ycombinator.com/user?id=sctb

[1]: https://news.ycombinator.com/item?id=25055115

◧◩
73. dang+eh[view] [source] [discussion] 2022-07-09 03:57:10
>>betaby+r7
It's on the list. We're slow.
replies(1): >>lma21+rV
◧◩◪◨
74. virapt+fh[view] [source] [discussion] 2022-07-09 03:57:17
>>deatha+ma
Not only super/subscript, there's also some convenient fractions: ½ ⅔ ⅜, etc
◧◩◪◨
75. quink+uh[view] [source] [discussion] 2022-07-09 03:59:36
>>rstupe+Bg
Scott Bell, user sctb.
◧◩◪
76. dang+vh[view] [source] [discussion] 2022-07-09 03:59:42
>>usrn+pe
With standard tires though.
replies(1): >>pvg+NU1
◧◩◪◨⬒⬓
77. wolfga+Oh[view] [source] [discussion] 2022-07-09 04:01:40
>>whitep+Sf
Just posted to the linked thread:

kabdib> Let me narrow my guess: They hit 4 years, 206 days and 16 hours . . . or 40,000 hours. And that they were sold by HP or Dell, and manufactured by SanDisk.

mikiem> These were made by SanDisk (SanDisk Optimus Lightning II) and the number of hours is between 39,984 and 40,032...

replies(1): >>dang+am
◧◩◪
78. iampim+Wh[view] [source] [discussion] 2022-07-09 04:02:53
>>1vuio0+yf
I call BS on your second point.

Just run a DNS server locally configured to serve stale records if upstream is unavailable.

As for your first point, the same local DNS server would also provide you with lower/no latency.

replies(1): >>1vuio0+Il
◧◩◪◨⬒
79. hoofhe+4i[view] [source] [discussion] 2022-07-09 04:04:22
>>dang+xd
I believe a more plausible scenario could be that each drive failed during the RAID rebuild and restriping process.

This is a known issue in NAS systems, and Freenas always recommended running two raid arrays with 3 disks in each array for mission critical equipment. By doing so, you can lose a disk in each array and keep on trucking without any glitches. Then if you happen to kill another disk during restriping, it would failover to the second mirrored array.

You could hotswap any failed disks in this setup without any downtime. The likelihood of losing 3 drives together in a server would be highly unlikely.

https://www.45drives.com/community/articles/RAID-and-RAIDZ/
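
For reference, one common way to express a resilient two-by-three layout in ZFS terms (a sketch, not necessarily the exact FreeNAS arrangement; pool and device names are hypothetical):

  # one pool built from two 3-way mirrors plus a hot spare
  zpool create tank mirror da0 da1 da2 mirror da3 da4 da5 spare da6
  zpool status tank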

replies(2): >>pmoria+MU >>aaaaaa+571
◧◩
80. 1vuio0+8i[view] [source] [discussion] 2022-07-09 04:05:04
>>loxias+eg
That's another myth that always irritated me: IPv4 addresses cannot be memorised. I have a few memorised, mainly some TLDs and Internic. With those I can "bootstrap" using DNS/FTP to get any other address. IPv4 addresses are no more difficult to remember than phone numbers. Maybe people are not memorising phone numbers anymore because of mobile phone contacts storage, but it does not mean we are incapable of doing so.

Thank you for stating the truth.

replies(3): >>loxias+gm >>bawolf+Bm >>dang+vp
◧◩◪◨
81. VoidWh+Bi[view] [source] [discussion] 2022-07-09 04:09:51
>>oars+u7
HAM Radio
replies(1): >>namech+Sl
◧◩◪◨
82. mekste+Ki[view] [source] [discussion] 2022-07-09 04:11:13
>>oars+u7
Better to discuss it the next day when it's over than to see a bunch of upset comments posted in real time.
replies(1): >>atmosx+th1
◧◩◪
83. virapt+Yi[view] [source] [discussion] 2022-07-09 04:13:10
>>1vuio0+yf
I think there's a case that's missed here though: IPs change when you move as well. There are a number of services which have different IPs depending on whether I use my home connection, or mobile, or go to the office in another city. It's not that they will stop working, but they'll be slightly less optimal.

> they perform unnecessary DNS lookups for the majority of sites

Is it actually unnecessary if the IPs can change? I'm fine with the extra 20ms on the access every once in a while in exchange for no mysterious failure every few years.

replies(1): >>1vuio0+po
◧◩◪◨⬒
84. loxias+fk[view] [source] [discussion] 2022-07-09 04:25:24
>>dang+xd
> So the leading hypothesis seems to be that perhaps the SSDs were from the same manufacturing batch and shared some defect.

Really sorry that you had to learn the hard way, but this is unfortunately common knowledge :/ Way back (2004) when I was shadowing-eventually-replacing a mentor that handled infrastructure for a major institution, he gave me a rule I took to heart from then forward: Always diversify. Diversify across manufacturer, diversify across make/model, hell, if it's super important, diversify across _technology stacks_ if you can.

It was policy within our (infrastructure) group that /any/ new server or service must be buildable from at least 2 different sources of components before going live, and for mission critical things, 3 is better. Anything "production" had to be multihomed if it connected to the internet.

Need to build a new storage server service? Get a Supermicro board _and_ a Tyan (or buy an assortment of Dell & IBM), then populate both with an assortment of drives picked randomly across 3 manufacturers, with purchases spread out across time (we used 3months) as well as resellers. Any RAID array with more than 4 drives had to include a hot spare. For even more peace of mind, add a crappy desktop PC with a ton of huge external drives and periodically sync to that.

He also taught me that it's not done until you do a few live "disaster tests" (yanking drives out of fully powered up servers, during heavy IO. Brutally ripping power cables out, quickly plugging it back in, then yanking it out again once you hear the machine doing something, then plug back in...), without giving anyone advance notice. Then, and only then, is a service "done".

I thought "Wow, $MENTOR is really into overkill!!" at the time, but he was right.

I credit his "rules for building infrastructure" for having a zero loss track record when it comes to infra I maintain, my whole life.

replies(3): >>erik_s+Wm >>dang+4q >>zimpen+Qr
◧◩
85. loxias+dl[view] [source] [discussion] 2022-07-09 04:34:26
>>dangus+q9
> We aren’t here because changing a tire is interesting or unique, nor will we learn anything from it

Speak for yourself, some of us (at least me) find "postmortem writeups" FASCINATING!!

I read them every chance I get. Most of the time the root cause wouldn't have affected me, but, I still occasionally will read one, think "oh crap! that could have bit me too!", then add the fix to my "Standard Operating Procedures" mental model, or whatnot. Some of us are still trying to "finish the game" with zero losses. :)

replies(1): >>dangus+Ap
◧◩◪◨
86. 1vuio0+Il[view] [source] [discussion] 2022-07-09 04:38:19
>>iampim+Wh
This is exactly the sort of comment to which I am referring. Maybe I am just getting trolled. I should just ignore this gibberish. How can something be "BS" if it works.^2 I am using this every day.

I used to serve DNS data over a localhost authoritative server. Now I store most DNS data in a localhost forward proxy.

If "upstream" means third party DNS service to resolve names piecemeal while accessing the www, I do not do that.^1

1. I do utilise third party DoH providers for bulk DNS data retrieval. Might as well, because DoH allows for HTTP/1.1 pipelining. I get DNS data from a variety of sources, rather than only one.

2. If it were "BS" then that would imply I am trying to mislead or deceive. The reverse is true. I kept reading sources of information about the internet that were meant to have me believe that most DNS RRs are constantly changing. I gathered DNS data. The data suggested those sources, whether intentionally or not, could be misleading and deceptive. Most DNS RRs did not change. BS could even mean that I am lying. But if I were lying and the DNS RRs for the sites I access were constantly changing, then the system I devised for using stored DNS data would not work. That is false. It works. I have been using it for years.

replies(3): >>bawolf+In >>loxias+1r >>1vuio0+J75
◧◩◪◨⬒
87. namech+Sl[view] [source] [discussion] 2022-07-09 04:40:29
>>VoidWh+Bi
People joke, but from the top of a decent mountain I can reach 45 miles into the Bay Area on a $150 2m radio in my truck with a rooftop antenna.
replies(2): >>loxias+Wr >>iasay+OF
◧◩
88. bawolf+7m[view] [source] [discussion] 2022-07-09 04:42:52
>>loxias+eg
Obviously a server VM getting renumbered is a rare occurrence.
◧◩◪◨⬒⬓⬔
89. dang+am[view] [source] [discussion] 2022-07-09 04:43:04
>>wolfga+Oh
https://news.ycombinator.com/item?id=32028511
◧◩◪
90. loxias+gm[view] [source] [discussion] 2022-07-09 04:43:40
>>1vuio0+8i
You're welcome. :)

FWIW, I still memorize phone numbers too! I also avoid phone calls like the plague, so in reality it's just a handful of numbers per year.

◧◩◪
91. bawolf+Bm[view] [source] [discussion] 2022-07-09 04:47:48
>>1vuio0+8i
Literally nobody thinks it's impossible to memorize 4 8-bit numbers. A bad user experience, sure.

> I have a few memorised, mainly some TLDs and Internic. With those I can "bootstrap" using DNS/FTP to get any other address.

Pretty sure that's true of 90% of everyone here, since all you really need to memorize is a public DNS resolver, and 8.8.8.8 or 1.1.1.1 is a particularly easy address to remember.

replies(1): >>andrea+Nw
◧◩◪◨⬒⬓
92. erik_s+Wm[view] [source] [discussion] 2022-07-09 04:51:27
>>loxias+fk
Didn’t Intel grant AMD some kind of license because the US government refused to buy x86 CPU models that only have one source?
replies(1): >>trasz+wj6
◧◩◪◨⬒
93. bawolf+In[view] [source] [discussion] 2022-07-09 04:58:53
>>1vuio0+Il
> How can something be "BS" if it works.

Nobody claimed it didn't work. The claim being disputed is that it is meaningfully faster.

◧◩◪
94. loxias+6o[view] [source] [discussion] 2022-07-09 05:04:03
>>1vuio0+yf
> Because I kept reading about why DNS was created and always encountered the same parroted explanation, year after year. Something along the lines that IP addresses were constantly in flux. That may have been true when DNS was created and the www was young.

Interesting! That runs in direct conflict with what I learned eons ago (pre-web) for "why DNS?". (Or maybe, it conflicts with what my faulty meat brain remembers.)

The gist was "we have DNS because without it, people would have to use numbers. People don't like numbers." DNS is primarily there to provide semantic meaning. The fact that it allows the numbers to change is... a secondary bonus.

DNS exists for the same reason as variable names instead of "variable numbers" (like a, b, c, d, &c): for us humans to provide semantic labels to things.

(an aside, "variable number" is exactly how things are still done in math and physics. This amuses me greatly.)

replies(3): >>1vuio0+nu >>bawolf+Jw >>ndrisc+ah1
◧◩◪◨
95. 1vuio0+po[view] [source] [discussion] 2022-07-09 05:06:20
>>virapt+Yi
In other words, DNS load balancing or something similar.

I am not really a fan because I like to choose the IP address, instead of letting someone else decide. I believe in user choice.

In some cases I have found the "most optimal" IP address for me is not always the one advertised based on the location of the computer sending the query.

It is like choosing a mirror when downloading open source software. I know which mirrors I prefer. The best ones for me are not necessarily always the ones closest geographically.

As for the question, the answer is yes. Because if it did not change then the query was not needed. If it does change then I will know and I will get the new address. The small amount of time it takes to get the new address and update a textfile is acceptable to me. I may also investigate why the address changed. Why did this HN submission go to the front page, why does it have so many points and comments. Some people are interested when stuff happens. I actually like "mysterious failures" because I want to know more about the sites I visit. Whereas an extra delay every time a TTL expires, for every name, again and again, over and over, every day, that is a lot of time cumulatively. Not to mention then I have to contend with issues of DNS privacy and security. When I started weaning myself off DNS lookups, there was no zone signing and encrypted queries.

The approach I take is not for everybody. I make HTTP requests outside the browser and I read HTML with a text-only browser. I do what works best for me.

replies(1): >>Shroud+7H
◧◩
96. edanm+6p[view] [source] [discussion] 2022-07-09 05:13:45
>>dang+Sb
Maybe you should pin this comment? I almost lost it in the shuffle.
replies(1): >>dang+gp
◧◩◪
97. dang+gp[view] [source] [discussion] 2022-07-09 05:15:24
>>edanm+6p
I thought about it, but it's fun to leave rewards for people who read entire threads.
replies(2): >>skywal+Ar >>boulos+os
◧◩◪
98. dang+vp[view] [source] [discussion] 2022-07-09 05:17:41
>>1vuio0+8i
I love that you are this passionate about static IP addresses. Static IPv4 addresses, to be precise. Illegitimi non carborundum!
replies(2): >>russel+Kv >>1vuio0+GG
◧◩◪
99. dangus+Ap[view] [source] [discussion] 2022-07-09 05:18:41
>>loxias+dl
What I’m saying is that other businesses/products end up with more interesting postmortems thanks to their more complex operating scenarios.
replies(1): >>loxias+Qu
◧◩
100. Godel_+Mp[view] [source] [discussion] 2022-07-09 05:21:21
>>bushba+vb
> help their brand

Who are the people that use HN and would notice it moved hosting, but haven’t heard of Digital Ocean?

◧◩◪◨⬒⬓
101. dang+4q[view] [source] [discussion] 2022-07-09 05:23:51
>>loxias+fk
> this is unfortunately common knowledge

This reminds me of Voltaire: "Common sense is not so common."

Thanks for the great comment—everything you say makes perfect sense and is even obvious in hindsight, but it's the kind of thing that tends to be known by grizzled infrastructure veterans who had good mentors in their chequered past—and not so much by the rest of us.

I fear getting karmically smacked for repeating this too often, but the more I think about it, the more I feel like 8 hours of downtime is not an unreasonable price to pay for this lesson. The opportunity cost of learning it beforehand would have been high as well.

replies(2): >>loxias+2x >>pmoria+sU
◧◩◪
102. metada+lq[view] [source] [discussion] 2022-07-09 05:25:42
>>dang+8g
@Dang, the state of affairs around here is already more than reasonable. In fact, it's incredible HN is almost never down, and the occasional 8-24 hour interruption every 5-7 years is actually a Good Thing for HN whores (like me) to reflect on how insanely much we are hooked on and love this stupid time sink technomancer site.

Cheers, you are the true and literal soul of the machine embodying the best spirit of the oftentimes beautiful thing that is Post-Paul-Graham HackerNews.

Please just promise to never die.

◧◩◪◨⬒
103. marioj+Cq[view] [source] [discussion] 2022-07-09 05:28:50
>>rubyis+h8
maybe not...

https://aws.amazon.com/solutions/case-studies/reddit-aurora-...

◧◩◪◨⬒
104. loxias+1r[view] [source] [discussion] 2022-07-09 05:32:28
>>1vuio0+Il
> I used to serve DNS data over a localhost authoritative server. Now I store most DNS data in a localhost forward proxy.

I run my own authoritative DNS on my router (though not localhost; interesting), and have for a long time (since I started traffic shaping to push the ACKs to the front). Like you, I've also enjoyed having superior performance over those using public servers. Everyone says "but you can use 8.8.8.8 or 1.1.1.1! they're fast!" and I (we?) smile and nod.

Just did a quick little test for this comment. Resolving with 8.8.8.8 is fast! And... also between 800% and 2500% slower than using my (and your) setup. high five

Also, the haters don't know something that we do, which is that... sometimes 8.8.8.8 doesn't work!!!

A few weeks ago there was a website I couldn't access from a computer using 8.8.8.8. I thought, "that's odd", used dig, and it didn't resolve. From the same network I tried a different resolver -- worked. Tried 8.8.8.8 again -- fail. sshed a few hundred miles away to check 8.8.8.8 again -- working. tcpdump on the router, watched 8.8.8.8 fail to resolve in front of my eyes. About 4 minutes later, back to normal. "yes, sometimes the internet so-called gods fail."

I'm quite curious why you changed from a full authoritative setup to a proxying one. I've skimmed a handful of your past posts and agreed entirely, so we're both "right", or both wrong/broken-brained in the same way. ;-)

Is there something I could be doing to improve my already fantastic setup?
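
For anyone curious to reproduce that comparison, a quick sketch (example.com stands in for any name not already cached):

  $ dig +noall +stats example.com @127.0.0.1 | grep 'Query time'
  $ dig +noall +stats example.com @8.8.8.8   | grep 'Query time'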

replies(1): >>1vuio0+kB
◧◩
105. dang+dr[view] [source] [discussion] 2022-07-09 05:34:05
>>rhacke+ha
I'm worried about that too. Don't have good data or even consistent gut feelings yet.

The big test will be when we hit peak load, perhaps on Monday or Tuesday.

replies(1): >>rhacke+eBc
◧◩◪
106. dang+or[view] [source] [discussion] 2022-07-09 05:35:15
>>metada+de
We don't throttle specific users*. At one time we did (it was called slowbanning) but that's been gone for close to a decade now.

It's a bit disconcerting how often I say "close to a decade now" now.

* Edit: we do rate-limit some accounts (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...), which throttles how much they can post to the site, but we don't throttle how quickly they can view the site.

replies(1): >>metada+dz
◧◩◪◨
107. skywal+Ar[view] [source] [discussion] 2022-07-09 05:36:33
>>dang+gp
In this kind of thread, I usually do CTRL+F "dang" ...
◧◩◪◨⬒⬓
108. zimpen+Qr[view] [source] [discussion] 2022-07-09 05:40:12
>>loxias+fk
> Way back (2004) [...] he gave me a rule [...]: Always diversify.

Annoyingly, in 2000-4, I was trying to get people to understand this and failing constantly because "it makes more sense if everything is the same - less to learn!" Hilariously*, I also got the blame when things broke even though none of them were my choice or design.

(Hell, even in 2020, I hit a similar issue with a single line Ruby CLI - lots of "everything else uses Python, why is it not Python?" moaning. Because the Python was a lot faffier and less readable!)

edit: to fix the formatting

◧◩◪◨⬒⬓
109. loxias+Wr[view] [source] [discussion] 2022-07-09 05:40:53
>>namech+Sl
That is so cool. If I had that (truck+radio) I'd be very tempted to scatter a few cheap relay nodes at good locations in the bay area. Monkeybrains+comcast is quite reliable.
◧◩◪◨
110. boulos+os[view] [source] [discussion] 2022-07-09 05:44:17
>>dang+gp
I upvoted your top-level comment! But what happens if the comments on this topic get long enough for pagination to kick in? Very few people click the new next button for the linked list :).
replies(1): >>andrea+Aw
◧◩
111. tedmis+Ns[view] [source] [discussion] 2022-07-09 05:48:26
>>olalon+g7
> > Little known fact: HN is also available through Cloudflare. ...

> What does that mean? How do you access HN through CloudFlare ...

As written, it seems to suggest HN is in the Cloudflare cache. But, I don't think there's a way to access the cached version if a site's not down. I wasn't around during today's outage, so I can't speak to whether a generic Cloudflare cached version of HN was available during the downtime.

◧◩◪
112. mwcamp+qt[view] [source] [discussion] 2022-07-09 05:54:36
>>dang+ec
If any of you have time to answer this, I'm curious about how you do backups. Are you using frequent rsyncs (or something functionally equivalent like BorgBackup), ZFS snapshots, or something custom? It looks like you must have had frequent backups, since when HN finally came back, it was quite close to the state that I remember it being in when it went down.
replies(1): >>dang+Uu
◧◩◪◨
113. 1vuio0+nu[view] [source] [discussion] 2022-07-09 06:02:53
>>loxias+6o
I did not study computer science so anything I know I learned from reading what was available in textbooks and on the internet itself. I learned DNS exists because the number of hosts was growing too quickly to keep updating a HOSTS file. The HOSTS file permits me to assign semantic meaning, i.e., names, to IP numbers. I can name hosts however I choose, and in practice I still do, because I like very short names. A simple analogy perhaps might be assigning names, images, sounds, etc. to different stored contact numbers on a mobile phone. The owner of the phone can control the semantic meaning assigned to the number, rather than delegating all control over this to someone else.

DNS, as I see it, lets someone else assign the names, i.e., the semantic meaning. Thus, assuming I am an internet user in the pre-DNS era, with the advent of DNS, I do not have to keep updating a HOSTS file when new hosts come online or change their address. This reduces administrative burden. The semantic meaning was already controllable pre-DNS, via the HOSTS file.

Many times I have read the criticisms of IP addresses as justifications for DNS. For example, IP addresses are (a) difficult to type or (b) difficult to remember. I simply cannot agree with such criticisms. As time goes on, and the www gets continually more nonsensically abstracted, I like IP addresses more and more.

◧◩◪◨⬒
114. closep+pu[view] [source] [discussion] 2022-07-09 06:03:05
>>Amfy+Ee
That’s a VPS offering only, they don’t do dedicated servers here.
◧◩◪◨
115. loxias+Qu[view] [source] [discussion] 2022-07-09 06:07:10
>>dangus+Ap
Ohhh. Gotcha. Agree. Agree.

Though... I've also read at least one "interesting" postmortem with complex operating scenarios and thought to myself (partially joking) that their failure isn't what they thought it was. The failure was having an unnecessarily complicated architecture in the first place, with too many abstractions and too much bloat. ;-) I would have "just" written it in C++ on Debian... ;-) (I'm exaggerating)

(I know that "my way" is fantastic.. until you need to scale across people)

◧◩◪◨
116. dang+Uu[view] [source] [discussion] 2022-07-09 06:07:33
>>mwcamp+qt
Custom, I guess (edit: that was an overstatement—let's say partly custom). We upload a snapshot every 24 hours to S3 and incremental updates every few kb (I think).

We use rsync for log files.
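
A rough sketch of that shape of backup, for anyone curious (the bucket name and paths are hypothetical, not HN's actual setup):

  # daily full snapshot to S3, logs via rsync
  tar czf "hn-$(date -u +%F).tar.gz" /srv/hn/data
  aws s3 cp "hn-$(date -u +%F).tar.gz" s3://example-hn-backups/
  rsync -az /var/log/hn/ backuphost:/srv/hn-logs/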

◧◩
117. herpde+mv[view] [source] [discussion] 2022-07-09 06:11:15
>>dang+Sb
How will you switch back to the new server once it's ready without losing database records?
replies(3): >>kijin+Sx >>brudge+xA >>dang+aD
◧◩◪
118. aditya+nv[view] [source] [discussion] 2022-07-09 06:11:25
>>dang+ec
I am surprised a forum so rich in tech people is not yet containerised. A simple Helm chart would make deployment to AWS (or any other Kubernetes) a 5-minute job.
replies(2): >>kijin+3x >>fulafe+eG
◧◩◪◨
119. russel+Kv[view] [source] [discussion] 2022-07-09 06:16:23
>>dang+vp
I'll save someone else the search.

https://en.m.wikipedia.org/wiki/Illegitimi_non_carborundum

replies(1): >>dang+aw
◧◩◪◨⬒
120. dang+aw[view] [source] [discussion] 2022-07-09 06:20:41
>>russel+Kv
https://news.ycombinator.com/item?id=4756677
replies(2): >>russel+Zx >>9wzYQb+IR
◧◩◪
121. bawolf+ow[view] [source] [discussion] 2022-07-09 06:23:48
>>1vuio0+yf
> Because I kept reading about why DNS was created and always encountered the same parroted explanation, year after year. Something along the lines that IP addresses were constantly in flux. That may have been true when DNS was created and the www was young. But was it true today. I wanted to find out. I did experiments. I found I could use the same DNS data day after day, week after week, month after month, year after year.

I have never in my life heard anyone claim this as the reason for dns.

The usual reason given is two fold:

Flat /etc/hosts files were getting large enough to be annoying.

The set of all DNS records as a whole changes constantly. Individual records don't change very much. But the time between at least one record changing is very small.

Both of these things are even more true today than they were when DNS was invented.

◧◩◪◨⬒
122. andrea+Aw[view] [source] [discussion] 2022-07-09 06:25:59
>>boulos+os
The "next" link is such a godsend, and a nice little surprise when I discovered it. Love it (and not getting an obnoxious "what's new" dialog box)
replies(1): >>porker+Qx
◧◩◪◨
123. bawolf+Jw[view] [source] [discussion] 2022-07-09 06:27:32
>>loxias+6o
> The gist was "we have DNS because without it, people would have to use numbers. People don't like numbers." DNS is primarily there to provide semantic meaning. The fact that it allows the numbers to change is... a secondary bonus.

This is before I was born, but that sounds more like the reason why /etc/hosts was invented, which predates DNS.

◧◩◪◨
124. andrea+Nw[view] [source] [discussion] 2022-07-09 06:28:28
>>bawolf+Bm
1.1 is even easier to type!
replies(1): >>jftuga+WY
◧◩◪◨⬒⬓⬔
125. loxias+2x[view] [source] [discussion] 2022-07-09 06:31:31
>>dang+4q
> it's the kind of thing that tends to be known by grizzled infrastructure veterans who had good mentors in their chequered past

And thanks right back at you.

I hadn't noticed before your comment that while not in the customary way (I'm brown skinned and was born into a working class family) I've got TONS of "privilege" in other areas. :D

My life would probably be quite different if I didn't have active Debian and Linux kernel developers just randomly be the older friends helping me in my metaphorical "first steps" with Linux.

Looking back 20+ years ago, I lucked into an absurdly higher than average "floor" when I started getting serious about "computery stuff". Thanks for that. That's some genuine "life perspective" gift you just gave me. I'm smiling. :) I guess it really is hard to see your own privilege.

> 8 hours of downtime is not an unreasonable price to pay for this lesson. The opportunity cost of learning it beforehand would have been high as well.

100% agree.

I'd even say the opportunity cost would have been much higher. Additionally, 8hrs of downtime is still a great "score", depending on the size of the HN organization. (bad 'score' if it's >100 people. amazing 'score' if it's 1-5 people.)

◧◩◪◨
126. kijin+3x[view] [source] [discussion] 2022-07-09 06:32:19
>>aditya+nv
I'd assume it takes more than 5 minutes to bring up a complete copy of the HN database on a different platform.

Deploying source code is trivial these days. Large databases, not so much, unless you're already using something like RDS.

replies(1): >>Kronis+sG
◧◩◪◨⬒⬓
127. porker+Qx[view] [source] [discussion] 2022-07-09 06:40:29
>>andrea+Aw
TIL what the "next" link does. Thanks!
◧◩◪
128. kijin+Sx[view] [source] [discussion] 2022-07-09 06:41:07
>>herpde+mv
Last time I checked, nearly every database worth using on a busy site supported some sort of real-time replication and/or live migration.

If that doesn't work, there's always the backup plan: say the magic words "scheduled maintenance", service $database stop, rsync it over, and bring it back up. The sky will not fall if HN goes down for another couple of hours, especially if it's scheduled ahead. :)
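
In shell terms, that backup plan is roughly this (a sketch; the service name, paths and target host are hypothetical):

  service mydb stop
  rsync -aHAX --numeric-ids /var/db/mydb/ newhost:/var/db/mydb/
  ssh newhost service mydb start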

replies(1): >>solare+sC
◧◩◪◨⬒⬓
129. russel+Zx[view] [source] [discussion] 2022-07-09 06:43:00
>>dang+aw
That's from a decade ago!! You must have some super secret admin search because the Algolia HN search barfs on the phrase for some reason.
replies(3): >>loxias+hA >>dang+VC >>O_____+jG1
◧◩
130. andrea+My[view] [source] [discussion] 2022-07-09 06:54:24
>>metada+7e
One backup server is apparently sufficient, given the primary held up for 4.5 years. The issue was correlation between the primary and the spare which wouldn't have been solved by more replicas anyway.
◧◩
131. toast0+Qy[view] [source] [discussion] 2022-07-09 06:55:22
>>bushba+vb
Digital Ocean recently ended their support for FreeBSD, and HN runs on FreeBSD (at least at M5), and while you don't really need to run an OS your vm provider supports, it's kind of nice.
◧◩◪◨⬒⬓
132. ev1+Zy[view] [source] [discussion] 2022-07-09 06:58:18
>>whitep+Wf
I host a lot of production there, but always some layers underneath - like a high-CPU or backing storage layer or database, or worker nodes that pick up tasks. Never the user-facing front.

Do not let any user-generated content be accessible from any Hetzner IP or you are one email away from shutdown, pretty much. Don't forget Germany's laws on speech too; they are nothing remotely similar to the US's. I would host, for example, a corporate site just fine, but the last thing ever would be a forum or image hosting site or w/e

replies(1): >>Amfy+uV
◧◩◪◨
133. pojzon+5z[view] [source] [discussion] 2022-07-09 06:59:21
>>ignora+qa
Is there at least one case where a whole region went down? ;))
replies(2): >>within+7A >>samspe+r02
◧◩◪
134. toast0+cz[view] [source] [discussion] 2022-07-09 07:00:54
>>dang+8g
It might be worth considering a way to get a "we're working on it" notice up quickly (HN status on Twitter worked, but it's kind of nicer when something loads at the main address). But an 8-hour outage once a decade for something that's not really critical is pretty good; no need to increase complexity, although try to get some storage diversity for the future, now that you've learned about that.
◧◩◪◨
135. metada+dz[view] [source] [discussion] 2022-07-09 07:01:04
>>dang+or
Thanks for the disclosure; if it gets crazy slow, maybe I'll shoot you an email next time.
◧◩◪◨⬒
136. within+7A[view] [source] [discussion] 2022-07-09 07:10:30
>>pojzon+5z
At least once, in 2012? I remember because just two days before we had fully switched over to AWS and hadn’t done multi-region yet. We got our first downtime in 10 years and there was nothing we could do, unlike when we had the servers in a colo. We were at the mercy of Amazon. After that, we moved everything back to real physical servers.
◧◩◪◨⬒⬓⬔
137. loxias+hA[view] [source] [discussion] 2022-07-09 07:11:42
>>russel+Zx
It's also not a HN specific phrase....
◧◩◪◨⬒
138. booi+tA[view] [source] [discussion] 2022-07-09 07:13:48
>>dang+Jf
Well, having a double server failure would definitely make me think twice about going back to the same solution...
replies(1): >>hdjjhh+XL
◧◩◪
139. brudge+xA[view] [source] [discussion] 2022-07-09 07:14:34
>>herpde+mv
Based on the “great rebuild” [0] (was it about a decade ago?), my understanding is that the database is text files in the file system hierarchy arranged rather like a btree.

Comment threads and comments each have a unique item number assigned monotonically.

The file system has a directory structure something like:

  |—1000000
  | |-100000
  | |-200000
  | |-…
  | |-900000
  |—2000000
  | |-100000
  | |-200000
  | |-…
  | |-900000
  |-…
I imagine that the comment threads (like this one), while text, are actually Arc code (or a dialect of it) that is parsed into a continuation for each user, to handle things like showdead, collapsed threads and hellbans.

To go further out on a wobbly limb of out-of-my-ass speculation, I suspect all the database credentialing is vanilla Unix user and group permissions, because that is the simplest thing that might work and is at least as robust as any in-database credentialing system running on Unix would be.

Though simple direct file system IO is about as robust as reads and writes get, since there’s no transaction semantics above the hardware layer, it is also worth considering that lost HN comments and stale reads don’t have a significant business impact.

I mean HN being down didn’t result in millions of dollars per hour in lost revenue for YC…if it stayed offline for a month, there might be a significant impact to “goodwill” however.

Anyway, just WAGing.

[0] before the great rebuild I think all the files were just in one big directory and one day there were suddenly an impractical quantity and site performance fell over a cliff.

replies(1): >>krapp+FN
◧◩◪◨⬒⬓
140. 1vuio0+kB[view] [source] [discussion] 2022-07-09 07:24:33
>>loxias+1r
Using a forward proxy and mapped addresses instead of doing DNS lookups is just a phase in a long series of steps to eliminate the use of third party DNS service, i.e., shared caches,^1 then eliminate unnecessary DNS queries,^2 and finally eliminate the use of DNS altogether. However there are other reasons I use the proxy, namely control over TLS and HTTP.

1. This goes back to 2008 and "DNS cache poisoning". Easiest way to avoid it was to not use shared caches.

2. I created a fat stub resolver^3 that stored all the addresses for the TLD nameservers, i.e., what is in root.zone,^4 inside the binary. This reduces the number of queries for any lookup by one. I then used this program to resolve names without using recursion, i.e., using only authoritative servers with the RD bit unset. Then I discovered patterns in the different permutations of lookups to resolve names, i.e., common DNS (mis)configurations. I found I could "brute force" lookups by trying the fastest or most common permutations first. I could beat the speed of 8.8.8.8 or a local cache for names not already in the cache.

3. Fat for the time. It is tiny compared to today's Go and Rust binaries.

4. Changes to root.zone were rare. Changes are probably more common today what with all the gTLDs but generally will always be relatively infrequent. Classic example of DNS data that is more static than dynamic.
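
The non-recursive lookups described above can be sketched with dig (example.com is used purely for illustration; the actual resolver keeps the root.zone addresses built in):

  $ dig +norecurse example.com A @a.gtld-servers.net   # the .com TLD server answers with the delegation (NS records)
  $ dig +norecurse example.com A @a.iana-servers.net   # the delegated authoritative server answers with the A record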

◧◩◪◨
141. solare+sC[view] [source] [discussion] 2022-07-09 07:35:25
>>kijin+Sx
You could even flush the DB to disk, take a ZFS snapshot, resume writes, and then rsync that snapshot to a remote system (or use zfs send/receive).
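
A minimal sketch of that flow (pool/dataset names and target host are hypothetical):

  zfs snapshot tank/hn@pre-migrate
  zfs send tank/hn@pre-migrate | ssh newhost zfs receive tank/hn
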
replies(1): >>pmoria+nS
◧◩◪◨⬒⬓⬔
142. dang+VC[view] [source] [discussion] 2022-07-09 07:39:20
>>russel+Zx
It just stuck in my memory randomly from back then.
replies(1): >>tptace+Op2
◧◩◪
143. dang+aD[view] [source] [discussion] 2022-07-09 07:41:15
>>herpde+mv
I guess the same way we switched to this one?

I wrote more about data loss at https://news.ycombinator.com/item?id=32030407 in case that's of interest.

replies(1): >>herpde+wZ1
◧◩◪
144. fulafe+kD[view] [source] [discussion] 2022-07-09 07:42:47
>>f0e4c2+36
Of those "correctly" architected apps, most are not properly tested for failover and won't actually work as architected (because of your own bugs, or because the AWS failover machinery has bugs and you can't even test it).

E.g., the app falls over due to steep traffic spikes caused by outages, when autoscaling mechanisms see previously unseen levels of load increase and enter some yo-yo oscillation pattern; or a whole AZ is overloaded because all the failovers from the other, failing AZ trigger at once; or you hit circuit breakers; or instances spin up too slowly to ever pass health checks; etc. Or the system can't detect something becoming glacially slow but not outright failing.

See eg https://www.theverge.com/2021/12/22/22849780/amazon-aws-is-d... & https://www.theverge.com/2020/11/25/21719396/amazon-web-serv... etc (many more examples are out there)

◧◩◪
145. tannha+DD[view] [source] [discussion] 2022-07-09 07:45:01
>>dang+8g
I'm guessing the several-hours outage figure from restoring a full backup could be reduced by restoring the most recent discussions first, then gradually restoring older discussions, then re-indexing for full-text search, all the while running in degraded mode but still having a front page. But tbh I'm just glad it wasn't an attack on free speech in tough times.
◧◩◪◨⬒⬓
146. iasay+OF[view] [source] [discussion] 2022-07-09 08:09:15
>>namech+Sl
The problem with that is that the only people you have to talk to are hams.

(Ex ham)

replies(1): >>nwh5jg+lO
◧◩◪◨
147. fulafe+eG[view] [source] [discussion] 2022-07-09 08:13:15
>>aditya+nv
Kubernetes is for making said tech people rich, not for running the 90s style forum web app that said tech people like to use.
148. dubcee+oG[view] [source] 2022-07-09 08:15:35
>>1vuio0+(OP)
The IP you resolve to for Hacker News may not have changed, i.e., it remained static, but that doesn't have to be the case for the rest of the world. Depending on the market, there may be one or many IPs a specific site resolves to.
◧◩◪◨⬒
149. Kronis+sG[view] [source] [discussion] 2022-07-09 08:15:48
>>kijin+3x
> I'd assume it takes more than 5 minutes to bring up a complete copy of the HN database on a different platform.

Hmm, that makes me wonder how big it would actually be. The nature of HN (not really storing a lot of images/videos like Reddit, for example) would probably lend itself to being pretty economical with regard to the space used.

Assuming a link of 1 Gbps, ideally you'd be able to transfer close to 125 MB/s. So that'd mean that in 5 minutes you could transfer around 37'500 MB of data to another place, though you have to account for overhead. With compression in place, you might just be able to make this figure a lot better, though that depends on how you do things.

In practice the link speeds will vary (a lot) based on what hardware/hosting you're using, where and how you store any backups, and what you use for transferring them elsewhere; if you can do that stuff incrementally, it's even better (scheduled full backups, incremental updates afterwards).

Regardless, in an ideal world where you have a lot of information, this would boil down to a mathematical equation, letting you plot how long bringing over all of the data would take for any given DB size (for your current infrastructure/setup). For many systems out there, 5 minutes would indeed be possible - but that becomes less likely the more data you store, or the more complicated components you introduce (e.g. separate storage for binary data, multiple services, message queues with persistence etc.).
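
As a back-of-the-envelope helper for that kind of estimate (the 50 GB figure is just an example, not HN's actual size, and overhead/compression are ignored):

  # seconds to move size_gb over link_gbps
  awk -v size_gb=50 -v link_gbps=1 \
      'BEGIN { s = size_gb * 8 / link_gbps; printf "~%d seconds (~%.1f minutes)\n", s, s/60 }'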

That said, in regards to the whole container argument: I think that there are definitely benefits to be had from containerization, as long as you pick a suitable orchestrator (Kubernetes if you know it well from working in a lab setting or with supervision under someone else in a prod setting, or something simpler like Nomad/Swarm that you can prototype things quickly with).

replies(2): >>kijin+OS >>pmoria+DT
◧◩◪◨
150. 1vuio0+GG[view] [source] [discussion] 2022-07-09 08:18:06
>>dang+vp
"Don't let the bastards grind you down."

bawolff is gonna keep trying.

I really do like static IPv4 addresses. I wish I owned one.

◧◩◪◨⬒
151. Shroud+7H[view] [source] [discussion] 2022-07-09 08:24:07
>>1vuio0+po
> I am not really I fan because I like to choose the IP address, instead of letting someone else decide. I believe in user choice.

Do you also object to anycast?

replies(1): >>1vuio0+kK
◧◩
152. lazyli+VJ[view] [source] [discussion] 2022-07-09 09:04:25
>>dralle+65
/r/sysadmin
◧◩◪◨⬒⬓
153. 1vuio0+kK[view] [source] [discussion] 2022-07-09 09:10:22
>>Shroud+7H
Why would I object? If it works, I will use it.

For example, I ping 198.41.0.4. I choose to ping that address over all the others, e.g., www.google.com or whatever other people use. That is what I mean by user choice. I know the address is anycasted. Where the packets actually go is not something I get to choose. It would be neat to be able to control that, e.g., if source routing actually worked on today's internet. But I have no such expectations.

How do Tor users know that an exit node IP address listed for a foreign country is not anycasted, with the server actually located somewhere else?

Maybe check against a list of anycast prefixes.

http://raw.githubusercontent.com/bgptools/anycast-prefixes/m...
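
For example, something along these lines, where anycast-v4.txt stands in for a locally saved copy of such a prefix list and grepcidr is one tool that can do the CIDR matching:

  # does a given exit-node address fall inside any known anycast prefix?
  echo 198.41.0.4 | grepcidr -f anycast-v4.txt && echo "listed as anycast"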

replies(1): >>1vuio0+VS4
◧◩
154. 1vuio0+7L[view] [source] [discussion] 2022-07-09 09:20:16
>>olalon+g7

   drill news.ycombinator.com @108.162.192.195
Those are CF IP addresses. Before HN switched from M5 to AWS, CF was an alternative way to access HN.

   echo|openssl s_client -connect 50.112.136.166:443 -tls1_3
◧◩◪◨⬒⬓
155. hdjjhh+XL[view] [source] [discussion] 2022-07-09 09:29:22
>>booi+tA
I'd say the opposite: having experienced this, they would make sure to significantly reduce the probability of it happening again.

In the past, I had a similar problem because of using hardware from the same batch. In retrospect, it's silly to be surprised they died at the same time.

◧◩◪
156. O_____+BM[view] [source] [discussion] 2022-07-09 09:39:56
>>dang+ec
Thanks mthurman!!

_____

Related:

It appears "mthurman" is Mark Thurman, a software engineer at Y Combinator since 2016; his HN profile has no obvious clues.

https://www.linkedin.com/in/markethurman

https://news.ycombinator.com/user?id=mthurman

replies(1): >>O_____+B61
◧◩
157. 1vuio0+WM[view] [source] [discussion] 2022-07-09 09:45:02
>>wging+j6

   echo|bssl s_client -connect 50.112.136.166:443 -min-version tls1.3

   Connecting to 50.112.136.166:443
   Error while connecting: TLSV1_ALERT_PROTOCOL_VERSION
   94922006718056:error:1000042e:SSL routines:OPENSSL_internal:TLSV1_ALERT_PROTOCOL_VERSION:/home/bssl/boringssl-refs-heads-master/ssl/tls_record.cc:594:SSL alert number 70
replies(2): >>pgCKIN+rP >>wging+QT1
◧◩◪◨
158. krapp+FN[view] [source] [discussion] 2022-07-09 09:54:27
>>brudge+xA
You don't have to speculate - the Arc forum code is available at http://arclanguage.org.
replies(1): >>dpifke+PL2
◧◩◪◨⬒⬓⬔
159. nwh5jg+lO[view] [source] [discussion] 2022-07-09 10:02:21
>>iasay+OF
Well same for computers & the internet :)
replies(1): >>iasay+nP
◧◩◪◨⬒⬓⬔⧯
160. iasay+nP[view] [source] [discussion] 2022-07-09 10:16:10
>>nwh5jg+lO
Fair point!
◧◩◪
161. pgCKIN+rP[view] [source] [discussion] 2022-07-09 10:16:31
>>1vuio0+WM
For a moment I thought about an SNI issue, but no, you are right:

  Version: 2.0.7  OpenSSL 1.1.1n  15 Mar 2022
  Connected to 50.112.136.166

  Testing SSL server news.ycombinator.com on port 443 using SNI name news.ycombinator.com

  SSL/TLS Protocols:
  SSLv2     disabled
  SSLv3     disabled
  TLSv1.0   enabled
  TLSv1.1   enabled
  TLSv1.2   enabled
  TLSv1.3   disabled
◧◩◪◨
162. pmoria+tQ[view] [source] [discussion] 2022-07-09 10:28:27
>>ignora+qa
The more regions a service is scattered over, the greater the odds that a single-region outage somewhere in AWS will take down the whole service (assuming the service needs all of those regions to be up).

Consider the extreme case where your service is scattered over every AWS region: here an outage of any AWS region is guaranteed to take down your service.

Compare that to the case where your service is bound to only one region: then the odds of a single-region outage taking down your entire service are reduced to 1 out of however many regions AWS has (assuming each region has an equal chance of suffering an outage).
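
A quick worked example, assuming 26 regions and that a "scattered" service needs every one of its k regions to be up:

  awk 'BEGIN { n = 26;
       for (k = 1; k <= 5; k++)
           printf "k=%d  P(a given single-region outage hits you) = %.0f%%\n", k, 100*k/n }'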

To guard against outages, the failover service has to be scattered over entirely different regions (or, even better, on an entirely different service provider... which is probably a good idea anyway).

replies(1): >>ignora+C61
◧◩◪◨⬒⬓
163. 9wzYQb+IR[view] [source] [discussion] 2022-07-09 10:42:20
>>dang+aw
semper ubi sub ubi
replies(1): >>dang+Oc2
◧◩◪
164. pmoria+LR[view] [source] [discussion] 2022-07-09 10:42:52
>>banana+v6
> Even during the most extreme AWS events, my EC2 instances running dedicated servers kept seeing Internet traffic.

You were just lucky enough not to have been affected by AWS outages, but many others were.

You can get a lot of resilience to failure on AWS, but simply spinning up a dedicated EC2 instance is not nearly enough.

◧◩◪◨⬒
165. pmoria+nS[view] [source] [discussion] 2022-07-09 10:49:33
>>solare+sC
I'd be curious to know if HN was on ZFS.
◧◩◪◨⬒⬓
166. kijin+OS[view] [source] [discussion] 2022-07-09 10:52:57
>>Kronis+sG
Network transfer is only a small part of the equation.

You can't just rsync files into a fully managed RDS PostgreSQL or Elasticsearch instance. You'll probably need to do a dump and restore, especially if the source machine has bad disks and/or has been running a different version. This will take much longer than simply copying the files.
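
For example, the PostgreSQL flavour of that dump-and-restore dance would look roughly like this (hostnames and database names are placeholders):

  pg_dump -Fc -h old-server -U hn hn_forum > hn_forum.dump
  pg_restore -h mydb.example.us-west-2.rds.amazonaws.com -U hn -d hn_forum hn_forum.dump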

Of course you could install the database of your choice in an EC2 box and rsync all you want, but that kinda defeats the purpose of using AWS and containerizing in the first place.

replies(1): >>Kronis+J31
◧◩◪◨⬒⬓
167. pmoria+DT[view] [source] [discussion] 2022-07-09 11:02:40
>>Kronis+sG
> Assuming a link of 1 Gbps, ideally you'd be able to transfer close to 125 MB/s. So that'd mean that in 5 minutes you could transfer around 37'500 MB of data to another place, though you have to account for overhead. With compression in place, you might just be able to make this figure a lot better, though that depends on how you do things.

Ideally, all this data would already have been backed up to AWS (or your provider of choice) by the time your primary service failed, so all you have to do is spin up your backup server and your data would be waiting for you.

(Looks like HN does just this: https://news.ycombinator.com/item?id=32032316 )

replies(1): >>Kronis+e41
◧◩◪◨
168. jacoop+ST[view] [source] [discussion] 2022-07-09 11:05:51
>>Amfy+Be
I host a forum on it and there are no problems. Of course, a site as big as HN would have different rules and treatment.
◧◩◪◨⬒⬓⬔
169. pmoria+sU[view] [source] [discussion] 2022-07-09 11:10:58
>>dang+4q
> it's the kind of thing that tends to be known by grizzled infrastructure veterans who had good mentors in their chequered past—and not so much by the rest of us

This is why your systems should be designed by grizzled infrastructure veterans.

replies(1): >>dang+Dp2
◧◩◪◨⬒⬓
170. pmoria+MU[view] [source] [discussion] 2022-07-09 11:15:29
>>hoofhe+4i
Ideally, there should be redundancy in servers, too: different hardware, on different sides of the planet, on different service providers.
replies(1): >>hoofhe+J21
◧◩◪◨
171. pmoria+gV[view] [source] [discussion] 2022-07-09 11:20:52
>>sillys+G8
> Being down all morning is an impressive recovery time, because they had to provision an EC2 server and transfer all data to it

It looks like the data was already in S3: [1]

The recovery probably would have been a lot faster had they had a fully provisioned EC2 image standing by... which I'd bet they will from now on.

[1] - https://news.ycombinator.com/item?id=32032316

◧◩◪
172. lma21+rV[view] [source] [discussion] 2022-07-09 11:23:02
>>dang+eh
it's cool to go slow and steady ! :-)
◧◩◪◨⬒⬓⬔
173. Amfy+uV[view] [source] [discussion] 2022-07-09 11:23:23
>>ev1+Zy
this. matches my experience and the ones in my circle
◧◩◪◨⬒
174. jftuga+WY[view] [source] [discussion] 2022-07-09 11:55:06
>>andrea+Nw
For IPv6, you can ping 2600::
◧◩◪◨⬒⬓⬔
175. hoofhe+J21[view] [source] [discussion] 2022-07-09 12:28:18
>>pmoria+MU
Correct. Your production server with dual mirrored arrays should have an identical warm spare. If it's in the same data center, then you need a separate offsite backup in case of a worst-case disaster such as a tornado, fire, or nuclear strike.
◧◩◪◨⬒⬓⬔
176. Kronis+J31[view] [source] [discussion] 2022-07-09 12:37:06
>>kijin+OS
> You can't just rsync files into a fully managed RDS PostgreSQL or Elasticsearch instance. You'll probably need to do a dump and restore, especially if the source machine has bad disks and/or has been running a different version. This will take much longer than simply copying the files.

That is true, albeit not in all cases!

An alternative approach (that has some serious caveats) would be to do full backups of the DB data directory, e.g. /var/lib/postgresql/data or /var/lib/mysql (as long as you can avoid capturing the data in an invalid state), and then just start up a container/instance with this directory mounted. Of course, that probably isn't possible with most if not all managed DB solutions out there.
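
A rough sketch of that approach (paths and image tag are examples; it only works if the files come from a cleanly stopped server of the same major version):

  rsync -a backup-host:/var/lib/postgresql/data/ /srv/pg/data/
  docker run -d --name restored-db \
      -v /srv/pg/data:/var/lib/postgresql/data postgres:14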

◧◩◪◨⬒⬓⬔
177. Kronis+e41[view] [source] [discussion] 2022-07-09 12:40:30
>>pmoria+DT
> Ideally, all this data would have been already backed up to AWS (or your provider of choice) by the time your primary service failed, so all your have to do is spin up your backup server and your data would be waiting for you.

Sure, though the solution where you back up the data probably won't be the same one where the new live DB will actually run, so some data transfer/IO will still be needed.

replies(1): >>pmoria+P81
◧◩◪◨
178. O_____+B61[view] [source] [discussion] 2022-07-09 12:58:27
>>O_____+BM
Thanks sctb!!

(sctb is Scott, former HN mod)

https://news.ycombinator.com/item?id=25055115

https://news.ycombinator.com/user?id=sctb

◧◩◪◨⬒
179. ignora+C61[view] [source] [discussion] 2022-07-09 12:58:30
>>pmoria+tQ
> The more regions a service is scattered over the greater the odds are that a single region outage somewhere in AWS will take down the whole service.

Agree. I think I should have suffixed a /s to my comment above.

> To guard against outages, the failover service has to be scattered over entirely different regions (or, even better, on an entirely different service provider... which is probably a good idea anyway).

Something, something... the greatest trick the devil (bigcloud) ever pulled...

◧◩◪◨⬒⬓
180. aaaaaa+571[view] [source] [discussion] 2022-07-09 13:02:51
>>hoofhe+4i
Unless it was due to end-of-life or a power-surge-related failure.

Then more than one failing simultaneously isn't so inconceivable.

◧◩◪◨⬒⬓⬔⧯
181. pmoria+P81[view] [source] [discussion] 2022-07-09 13:17:23
>>Kronis+e41
> the solution where you back up the data probably won't be the same one where the new live DB will actually run, so some data transfer/IO will still be needed

The S3 buckets HN is backed up to could themselves be continuously copied to other S3 buckets, which could be the buckets directly used by an EC2 instance were it ever needed in an emergency.

That would avoid on-demand data transfer from the backup S3 buckets themselves at the time of failure.

The backup S3 buckets could also be periodically copied to Glacier for long-term storage.
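
In its simplest form that could just be a periodic sync between buckets (names made up), with S3 replication rules doing it continuously and a lifecycle rule handling the Glacier tier:

  aws s3 sync s3://hn-backups s3://hn-backups-standby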

That's for an all-AWS backup solution. Of course you could do this with (for example) another datacenter and tapes, if you wanted to... or another cloud provider.

◧◩
182. 19h+Oe1[view] [source] [discussion] 2022-07-09 14:02:09
>>wging+j6
TLS 1.3 needs to be explicitly enabled in CloudFront
replies(1): >>Matthi+dT1
◧◩◪◨
183. ndrisc+ah1[view] [source] [discussion] 2022-07-09 14:15:57
>>loxias+6o
> an aside, "variable number" is exactly how things are still done in math and physics. This amuses me greatly.

Variable names are usually idiomatic within a field and carry some semantics: e.g., k is angular wavenumber, omega is angular frequency, r is displacement, etc. They just use short names to keep the name from distracting from the shape of the equations it's used in, so that it's easier to say things like "this behaves like a transport equation but with a source term that's proportional to the strength of the Foo field squared" or whatever.

Lots of phenomena have very similar governing equations, so downplaying the names of variables in favor of the structure/context they're used in allows for efficient transfer of intuition.

◧◩◪◨⬒
184. atmosx+th1[view] [source] [discussion] 2022-07-09 14:18:22
>>mekste+Ki
If the AWS status page weren't a static HTML page, I would agree.
◧◩◪◨⬒
185. ksec+6j1[view] [source] [discussion] 2022-07-09 14:30:18
>>dang+xd
I mean, to be fair, had this not been an SSD defect, the probability of four of them dying at the same time (or within extremely close proximity, a few hours) would indeed be very low. And choosing a different SSD vendor would have prevented this from happening even if one drive did have a counter overflow. Or, in this case, the second server could have been a different model or vendor.
replies(1): >>dang+8p2
◧◩
186. Pakdef+7k1[view] [source] [discussion] 2022-07-09 14:37:54
>>betaby+r7
There wouldn't be huge benefits from where I stand.
187. ksec+Kk1[view] [source] 2022-07-09 14:42:49
>>1vuio0+(OP)
I wonder if it is time for a server upgrade.

HN was running on a Xeon(R) E5-2637 v4 [1], which is Sandy Bridge era. A 2-core CPU serving 6M requests a day.

If the iPhone had more memory, the whole of HN could be served from it.

[1] https://www.intel.com/content/www/us/en/products/sku/64598/i...

replies(2): >>Tijdre+zp1 >>minima+EU3
◧◩
188. smn123+On1[view] [source] [discussion] 2022-07-09 15:06:19
>>fomine+i2
thanks for this!

Interesting feature comparisons: https://www.m5hosting.com/iaas-cloud/

so it's a private cloud, not M5-managed cloud environments across multiple public cloud providers

◧◩◪
189. smn123+Zn1[view] [source] [discussion] 2022-07-09 15:07:39
>>dang+ec
recovery with very little data loss, well done
◧◩
190. Tijdre+zp1[view] [source] [discussion] 2022-07-09 15:17:28
>>ksec+Kk1
I mean, if it's capable of handling the traffic, is there any reason to upgrade?
replies(1): >>namibj+eG3
◧◩
191. Tijdre+Lr1[view] [source] [discussion] 2022-07-09 15:31:50
>>usrn+Wd
Many dynamic residential IPs seem to be 'your IP changes after the connection has been offline for some amount of hours', i.e. only in the rare event of some long-lasting outage.
◧◩
192. Tijdre+Gs1[view] [source] [discussion] 2022-07-09 15:37:22
>>rhacke+ha
Where are you located?

Ever since I read someone's comment that 'HN is the fastest site they regularly visit', I've wondered if that's because they're in the western US (where HN is hosted).

I am in Western Europe and HN is decently fast, but tweakers.net and openstreetmap.org are faster.

◧◩◪◨⬒⬓⬔
193. O_____+jG1[view] [source] [discussion] 2022-07-09 16:45:22
>>russel+Zx
Agree, HN’s search returned nothing, possibly because the terms are not English, but no idea.

Try Google too next time if HN’s search doesn’t find what you’re looking for:

https://www.google.com/search?q=site:news.ycombinator.com+Il...

◧◩◪
194. Matthi+dT1[view] [source] [discussion] 2022-07-09 18:00:54
>>19h+Oe1
No - it's enabled by default for all available security policies. CloudFront lets you configure the minimum TLS version; the maximum is always TLS1.3.

https://docs.aws.amazon.com/AmazonCloudFront/latest/Develope...

However, HN is not using CloudFront, so this doesn't matter for evaluating why HN does not support TLS1.3.

◧◩◪
195. wging+QT1[view] [source] [discussion] 2022-07-09 18:04:22
>>1vuio0+WM
That still doesn't mean you can't use TLS 1.3 on AWS. For example, I have a Cloudfront-based site I haven't touched in years that works just fine with TLS 1.3.
replies(1): >>1vuio0+SW2
◧◩◪◨
196. pvg+NU1[view] [source] [discussion] 2022-07-09 18:09:14
>>dang+vh
Gotta get something better than those pizza cutters

https://www.youtube.com/watch?v=3d-OVlIYpuQ

◧◩◪◨
197. herpde+wZ1[view] [source] [discussion] 2022-07-09 18:41:11
>>dang+aD
You switched to the new one while the old one was down... that's not the same as switching between two live systems. Though, perhaps, in this particular case, the procedure might be.
replies(2): >>dang+So2 >>thegag+fp2
◧◩◪◨⬒
198. samspe+r02[view] [source] [discussion] 2022-07-09 18:48:52
>>pojzon+5z
Per the spreadsheet here https://awsmaniac.com/aws-outages/ :

There seem to have been multiple "full" outages in 2011-12 in AWS' us-east-1 region, which, granted, is the oldest AWS region and likely has a bunch of legacy stuff. By "full" outages I mean that only a few core services fell over, but the entire region became inaccessible due to those core failures.

replies(1): >>pojzon+TRc
◧◩
199. futhey+c62[view] [source] [discussion] 2022-07-09 19:31:34
>>dang+Sb
Interesting. If I had experienced this, I'd probably pick a random cloud provider, recreate the entire service in their cloud, image and document it, and have that documentation ready if I ever experienced another fire drill like the one yesterday.

Far easier to spin up a few large VMs on AWS for a few hours while you fix an issue than provision identical backup dedicated servers in a colo somewhere. And you can potentially just throw money at the issue while you fix the core service.

¯\_(ツ)_/¯

◧◩◪◨⬒⬓⬔
200. dang+Oc2[view] [source] [discussion] 2022-07-09 20:17:05
>>9wzYQb+IR
https://news.ycombinator.com/item?id=1352355
◧◩◪◨⬒
201. dang+So2[view] [source] [discussion] 2022-07-09 21:52:57
>>herpde+wZ1
Ah, I see. If we have to bring HN down for a few minutes, or (more likely) put it into readonly mode for a bit, we can do that–especially if it makes the process simpler and/or less risky.
replies(1): >>thegag+mp2
◧◩◪◨⬒⬓
202. dang+8p2[view] [source] [discussion] 2022-07-09 21:55:12
>>ksec+6j1
I'm not sure if 4 failed for that reason, or if only 2 failed for that reason and then the attempts to restore them from the RAID array failed for a different reason.

2 failures within a few hours is unlikely enough already though, unless there was a common variable (which there clearly was).

◧◩◪◨⬒
203. thegag+fp2[view] [source] [discussion] 2022-07-09 21:56:24
>>herpde+wZ1
He can choose to do it the easy way and just take an outage to move it, or possibly go into read-only mode while moving.

We tend to over-engineer things as if it's the end of the world to take a 10-minute outage… and end up causing longer ones because of the added complexity.

◧◩◪◨⬒⬓
204. thegag+mp2[view] [source] [discussion] 2022-07-09 21:56:59
>>dang+So2
Lol missed this comment while writing mine.
replies(1): >>dang+Ot2
◧◩◪◨⬒⬓⬔⧯
205. dang+Dp2[view] [source] [discussion] 2022-07-09 21:59:13
>>pmoria+sU
That reminds me of Jerry Weinberg's dictum: whenever you hear the word "should" on a software project, replace it with "isn't".

>>590075

replies(2): >>kqr+yI5 >>mst+t36
◧◩◪◨⬒⬓⬔⧯
206. tptace+Op2[view] [source] [discussion] 2022-07-09 22:01:38
>>dang+VC
Dan is passionate about HN item IDs the way the rest of us are passionate about IPv4 addresses.
◧◩◪◨⬒⬓⬔
207. dang+Ot2[view] [source] [discussion] 2022-07-09 22:32:13
>>thegag+mp2
Not a problem in a thread about replicas and redundancy :)
replies(1): >>dredmo+py2
◧◩◪◨⬒⬓⬔⧯
208. dredmo+py2[view] [source] [discussion] 2022-07-09 23:16:56
>>dang+Ot2
So there's this great joke about cache coherency.

I'm sure you'll get it, eventually.

209. midisl+BH2[view] [source] 2022-07-10 00:46:06
>>1vuio0+(OP)
You could run this site on a Pentium 3. Why even bother with virtual hosting?
◧◩◪◨⬒
210. dpifke+PL2[view] [source] [discussion] 2022-07-10 01:30:40
>>krapp+FN
That's long out of date, unfortunately.
◧◩◪◨
211. 1vuio0+SW2[view] [source] [discussion] 2022-07-10 04:04:20
>>wging+QT1
"Unlike CF, AWS does not support TLS1.3. This is not working while HN uses the AWS IP."

The context of the above statement was the HN site, not every site that uses AWS.

Specifically, I mean that if HN uses CF, then TLS1.3 will be supported. (Before the outage I accessed HN through CF so I could use TLS1.3, because the M5-hosted site did not support it.) Whereas if HN uses AWS, then TLS1.3 may or may not be supported. As it happens, there is no support.^1

Not being more clear is on me and I apologise that the statement was misinterpreted. Nevertheless, the fact that there are other sites accessed through AWS that support TLS1.3 does not help the HN user here who wants to use TLS1.3, namely, me. That is the context of the comment: accessing HN using TLS1.3. It is not a review of AWS. It is a statement about accessing HN with TLS1.3.

1. For example, those using Cloudfront CDN services.

◧◩◪
212. namibj+eG3[view] [source] [discussion] 2022-07-10 13:29:08
>>Tijdre+zp1
Energy efficiency.
◧◩
213. minima+EU3[view] [source] [discussion] 2022-07-10 15:08:04
>>ksec+Kk1
That's the wrong SKU; it's actually Haswell/Broadwell and 4c8t, not 2c4t: https://www.intel.com/content/www/us/en/products/sku/92983/i...
◧◩◪◨⬒⬓⬔
214. 1vuio0+VS4[view] [source] [discussion] 2022-07-10 21:22:23
>>1vuio0+kK
https://raw.githubusercontent.com/netravnen/well-known-anyca...

https://www.ietf.org/archive/id/draft-wilhelm-grow-anycast-c...

◧◩◪◨⬒
215. 1vuio0+J75[view] [source] [discussion] 2022-07-10 23:03:59
>>1vuio0+Il
"2. Another benefit for me is that when some remote DNS service does down (this has happened several times), I can still use the www without interruption. I already have the DNS data I need. Meanwhile the self-proclaimed "experts" go into panic mode."

Above are the specific claims that were called "BS". One has to do with enabling me to use the www without interruption if DNS stops working.^1 The other has to do with "experts" going into panic mode.^2

Neither claim relates to something being "meaningfully faster."

1. Because I use stored DNS data.

2. Because none of them advise anyone to store DNS data, let alone use it. They opt to promote and support a system that relies on DNS to work 100% of the time.

◧◩◪◨⬒⬓⬔⧯▣
216. kqr+yI5[view] [source] [discussion] 2022-07-11 04:58:30
>>dang+Dp2
That goes along with "almost never", which is a synonym for "sometimes", and "maintenance-free", which is a synonym for "throw it out and buy a new one when it breaks".
replies(1): >>duckmy+Js6
◧◩◪◨⬒⬓⬔⧯▣
217. mst+t36[view] [source] [discussion] 2022-07-11 08:41:55
>>dang+Dp2
This is brilliant and I suspect generalises to "won't" when reading RFCs.
◧◩◪◨⬒⬓⬔
218. trasz+wj6[view] [source] [discussion] 2022-07-11 11:25:12
>>erik_s+Wm
I believe it was IBM, not the government.
◧◩◪◨⬒⬓⬔⧯▣▦
219. duckmy+Js6[view] [source] [discussion] 2022-07-11 12:28:56
>>kqr+yI5
"Almost never" -> "more often than you would want"
◧◩◪
220. rhacke+eBc[view] [source] [discussion] 2022-07-13 03:46:58
>>dang+dr
I think the overall feeling of "slowness" last week may have been from everyone coming back all at once or something. It now seems just as fast as it was before.
◧◩◪◨⬒⬓
221. pojzon+TRc[view] [source] [discussion] 2022-07-13 07:22:30
>>samspe+r02
It's over 10 years ago tho. Are there any RECENT full-region outages?

I'm foreseeing full downtime in Frankfurt this winter tho. Germany is in a really bad position when it comes to electricity.

[go to top]