zlacker

[parent] [thread] 13 comments
1. stinkb+(OP)[view] [source] 2025-08-28 02:26:35
So you're the ones who have been training the robots.
replies(2): >>smt88+s7 >>pyman+9F
2. smt88+s7[view] [source] 2025-08-28 03:36:17
>>stinkb+(OP)
Reddit and HN are among the highest quality sources of training text and are probably weighted very heavily as "probably human" in the mainstream models.

Any source of text with huge amounts of automated and community moderation will be better quality than, say, Twitter.

replies(1): >>what+6d
3. what+6d[view] [source] [discussion] 2025-08-28 04:44:54
>>smt88+s7
Reddit is anything but high quality.
replies(4): >>mh-+ef >>jibal+fm >>Jepaco+Ot >>kelnos+DD
4. mh-+ef[view] [source] [discussion] 2025-08-28 05:08:19
>>what+6d
Old Reddit was.
replies(1): >>AuryGl+5p
5. jibal+fm[view] [source] [discussion] 2025-08-28 06:19:36
>>what+6d
"among the highEST" is comparative; it doesn't entail "high".
6. AuryGl+5p[view] [source] [discussion] 2025-08-28 06:49:37
>>mh-+ef
Oh man, someone should train an LLM on pre-Digg-death Reddit and modern Reddit and have them chat. It’d be a hoot.
7. Jepaco+Ot[view] [source] [discussion] 2025-08-28 07:36:59
>>what+6d
That depends heavily on the subreddits you browse. There absolutely are places with high quality content, though it feels like they are getting sparser and sparser.
8. kelnos+DD[view] [source] [discussion] 2025-08-28 09:18:46
>>what+6d
Not in that sense; high quality in the sense that there are a lot of actual, real people posting there, and those people tend to come from a pretty diverse set of backgrounds.
replies(1): >>Finite+xF
9. pyman+9F[view] [source] 2025-08-28 09:32:46
>>stinkb+(OP)
Although I'm sure @stinkbeatle was joking, I should clarify that most LLMs are trained on books and online articles written by professional writers. That's why they tend to have a rich vocabulary and use things like hyphens.

I agree, HN is an amazing community with brilliant people and top quality content, but it's not enough to train an LLM.

Last thing. An LLM is just a tool, it can clean up your writing the same way a photo app can enhance your pictures. It took a while for people to accept that grandma's photos looked professional because they had filters. Same will happen with text. With ChatGPT, anyone can write like a journalist. We're just not used to grandma texting like one, yet :)

replies(3): >>Arnt+lG >>Walter+QO >>Moru+qv6
10. Finite+xF[view] [source] [discussion] 2025-08-28 09:38:04
>>kelnos+DD
Perhaps on the smaller subreddits, but have a look at /r/all on any given day and it's obvious that real people, and diverse backgrounds, it is not. Every single subreddit that goes above a certain activity threshold collapses into the exact same state of astroturfed, mass-produced political slop targeted at low-IQ people.
replies(1): >>Alexey+Cp6
11. Arnt+lG[view] [source] [discussion] 2025-08-28 09:46:55
>>pyman+9F
I really like that I can use an LLM to change tone. "Change the following text to sound like bland American officespeak."

That said, this feature doesn't sound like a great leap for mankind.

12. Walter+QO[view] [source] [discussion] 2025-08-28 11:18:46
>>pyman+9F
> HN is an amazing community with brilliant people

Correction: bright people

13. Alexey+Cp6[view] [source] [discussion] 2025-08-30 06:12:05
>>Finite+xF
Yeah, there is still a lot of manosphere / rightoid-adjacent content on Reddit. It used to be worse though.
14. Moru+qv6[view] [source] [discussion] 2025-08-30 07:31:27
>>pyman+9F
> With ChatGPT, anyone can write like a journalist.

Minus the fact-checking, transparency, truth and social responsibility.
