zlacker

[return to "The New York Times is suing OpenAI and Microsoft for copyright infringement"]
1. DamnIn+IK 2023-12-27 18:22:04
>>ssgodd+(OP)
I have deeply mixed feelings about the way LLMs slurp up copyrighted content and regurgitate it as something "new." As a software developer who has dabbled in machine learning, I find it exciting to see the field progress. But I am also an author with a large catalog of writings, and my work has been captured by at least one LLM (according to a tool that can allegedly detect these things).

Overall, current LLMs remind me of those bottom-feeder websites that do no original research--those sites that just find an article they like, lazily rewrite it, introduce a few errors, then maybe paste in some baloney "sources" (which always seem to omit the actual original source). That mode of operation tends to be technically legal, but it's parasitic and lazy and doesn't add much value to the world.

All that aside, I tend to agree with the hypothesis that LLMs are a fad that will mostly pass. For professionals, it is really hard to get past hallucinations and the lack of citations. Imagine being a perpetual fact-checker for a very unreliable author. And laymen will probably mostly use LLMs to generate low-effort content for SEO, which will inevitably degrade the quality of the same LLMs as they breed with their own offspring. "Regression to mediocrity," as Galton put it.

2. MeImCo+8N 2023-12-27 18:35:10
>>DamnIn+IK
Ehh, LLMs have become a fundamental part of my workflow as a professional. GPT4 is absolutely capable of providing links to sources and citations. It is more reliable than most human teachers I have had, and it doesn't have an ego about its incorrect statements when challenged on them. It does become less useful as you get more technical or niche, but it's incredibly useful for learning in new areas or increasing the breadth of your knowledge on a subject.
3. neilv+aV 2023-12-27 19:20:25
>>MeImCo+8N
> LLMs have become a fundamental part of my workflow as a professional. GPT4 [...] doesn't have an ego about its incorrect statements when challenged on them.

To anthropomorphize it further: it's a plagiarizing bullshitter who apologizes quickly when any perceived error is called out (whether or not that particular bit of plagiarism or fabrication was in fact correct). It learns nothing, so its apology has no meaning, but at least it doesn't sound uppity about being a plagiarizing bullshitter.
