zlacker

[return to "Leaked grant proposal details high-risk coronavirus research"]
1. chrsw+t8 2021-09-24 16:55:00
>>BellLa+(OP)
I could be missing something, but this isn't exactly the smoking gun the title makes it seem. I'm sure there are proposals, plans, and applications for all types of things. What I'm waiting for, perhaps naively, is strong evidence, revealed by an independent investigation, that there was some foul business going on here. Until I see that, I'm more inclined to rely on the word of experts who have no connection to any of this. A novel aspect of a viral genome isn't enough for me to leap to the conclusion that it's human-made.
2. dang+sR 2021-09-24 20:59:11
>>chrsw+t8
The submitted title ("New Leaked Documents Point to Engineered Lab Origin for SARS‑CoV‑2") broke the site guidelines badly by editorializing. Submitters: please don't do that—it will eventually cause you to lose submission privileges on HN. Instead, follow the site guidelines, which include: "Please use the original title, unless it is misleading or linkbait; don't editorialize."

https://news.ycombinator.com/newsguidelines.html

(I'm assuming, of course, that it wasn't the article title that got subsequently changed. If that was the case, ignore the above.)

3. klyrs+N41 2021-09-24 22:40:31
>>dang+sR
> (I'm assuming, of course, that it wasn't the article title that got subsequently changed. If that was the case, ignore the above.)

Not the first time I've seen you say this. Would it be worthwhile to fetch articles when they're submitted, if only for your own sanity?
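
(A rough sketch of the fetch-on-submission idea, purely illustrative: HN itself runs on Arc, so the Python below, the helper name, and the details are all assumptions, not anything the site actually does.)

  # Hypothetical snapshot of a page's <title> at submission time, so later
  # title changes by the publisher could be noticed. Not HN's actual code.
  import re
  import urllib.request

  def snapshot_title(url, timeout=10.0):
      """Fetch the page once and record its <title>, or None on failure."""
      req = urllib.request.Request(url, headers={"User-Agent": "title-snapshot/0.1"})
      try:
          with urllib.request.urlopen(req, timeout=timeout) as resp:
              html = resp.read(65536).decode("utf-8", errors="replace")
      except Exception:
          return None  # dead link, paywall, timeout: keep the submitted title
      m = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
      return m.group(1).strip() if m else None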

4. xoa+q71 2021-09-24 22:59:30
>>klyrs+N41
The truly irritating thing is that even that wouldn't necessarily be enough, because so many sites run live A/B/C/[n] title tests, serving different titles simultaneously to randomized sets of users and then keeping whichever one gets the most clicks, or whatever metric they're optimizing, even without any manual shenanigans. So there's a window where merely refreshing, or browsing from a different IP, will yield a different title. Sometimes evidence is left in the URL or in interactions with older systems on the site, but that's all baroque. So, so many edge cases in grabbing titles.

Probably not worth the effort on HN to try to automate this vs. just treating it case by case. It doesn't usually seem to be a problem. "Premature optimization is the root of all evil" and all that.

Edit: or archive.org, as dang says, but I don't know whether even they see all versions of a page in a simultaneous-test situation. Regrettably, it seems to be pretty SOP even at reputable places.
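
(A toy illustration of the live title-testing behaviour described above; everything here, including the variant strings and the bucketing-by-IP choice, is invented for the example. The point is only that requests get pinned to a variant, so two fetches of the same URL during the test window can legitimately return different titles.)

  import hashlib

  HEADLINE_VARIANTS = [
      "Variant A: the sober headline",
      "Variant B: the clickier headline",
  ]

  def headline_for(client_ip):
      """Deterministically pin a client bucket (here, by IP) to one headline variant."""
      bucket = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
      return HEADLINE_VARIANTS[bucket % len(HEADLINE_VARIANTS)]

  # Different IPs can land in different buckets, hence different titles,
  # until the site picks a winner and serves it to everyone.
  print(headline_for("203.0.113.5"))
  print(headline_for("198.51.100.7"))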
