zlacker

[parent] [thread] 22 comments
1. DamnIn+(OP)[view] [source] 2023-12-27 18:22:04
I have deeply mixed feelings about the way LLMs slurp up copyrighted content and regurgitate it as something "new." As a software developer who has dabbled in machine learning, I find it exciting to see the field progress. But I am also an author with a large catalog of writings, and my work has been captured by at least one LLM (according to a tool that can allegedly detect these things).

Overall, current LLMs remind me of those bottom-feeder websites that do no original research--those sites that just find an article they like, lazily rewrite it, introduce a few errors, then maybe paste some baloney "sources" (which always seem to omit the actual original source). That mode of operation tends to be technically legal, but it's parasitic and lazy and doesn't add much value to the world.

All that aside, I tend to agree with the hypothesis that LLMs are a fad that will mostly pass. For professionals, it is really hard to get past hallucinations and the lack of citations. Imagine being a perpetual fact-checker for a very unreliable author. And laymen will probably mostly use LLMs to generate low-effort content for SEO, which will inevitably degrade the quality of the same LLMs as they breed with their own offspring. "Regression to mediocrity," as Galton put it.

replies(10): >>MeImCo+q2 >>logicc+j4 >>__loam+r4 >>seanmc+u4 >>asylte+W4 >>buckyf+z6 >>Shamel+q7 >>shrimp+yc >>throwa+5f2 >>dinvla+mm7
2. MeImCo+q2[view] [source] 2023-12-27 18:35:10
>>DamnIn+(OP)
Ehh, LLMs have become a fundamental part of my workflow as a professional. GPT4 is absolutely capable of providing links to sources and citations. It is more reliable than most human teachers I have had, and it doesn't have an ego about its incorrect statements when challenged on them. It does become less useful as you get more technical or niche, but it's incredibly useful for learning in new areas or increasing the breadth of your knowledge on a subject.
replies(2): >>cjbpri+G2 >>neilv+sa
◧◩
3. cjbpri+G2[view] [source] [discussion] 2023-12-27 18:37:10
>>MeImCo+q2
> GPT4 is absolutely capable of providing links to sources and citations.

Do you mean in the Browsing Mode or something? I don't think it is naturally capable of that, both because it is performing lossy compression, and because in many cases it simply won't know where the text that was fed to it during training came from.

4. logicc+j4[view] [source] 2023-12-27 18:46:46
>>DamnIn+(OP)
>All that aside, I tend to agree with the hypothesis that LLMs are a fad that will mostly pass. For professionals, it is really hard to get past hallucinations and the lack of citations.

For writers, maybe, but absolutely not for programmers; it's incredibly useful. I don't think anyone who's used GPT4 to improve their coding productivity would consider it a fad.

replies(1): >>thiht+cM1
5. __loam+r4[view] [source] 2023-12-27 18:47:06
>>DamnIn+(OP)
Finally a reasonable take on this site.
6. seanmc+u4[view] [source] 2023-12-27 18:47:20
>>DamnIn+(OP)
> Overall, current LLMs remind me of those bottom-feeder websites that do no original research--those sites that just find an article they like, lazily rewrite it, introduce a few errors, then maybe paste some baloney "sources" (which always seem to omit the actual original source). That mode of operation tends to be technically legal, but it's parasitic and lazy and doesn't add much value to the world.

Another way of looking at this is that bottom-feeder websites do work that could easily be done by an LLM. I've noticed a high correlation between "could be AI" and "is definitely a trashy clickbait news source" (before LLMs were even a thing).

To be clear, if your writing could be replaced by an LLM today, you probably aren't a very good writer. And... I doubt this technology will stop improving, so I wouldn't make the mistake of thinking that 2023 is the high point for LLMs and that they (or whatever replaces them) won't be much better in 2033.

replies(1): >>stefan+GW
7. asylte+W4[view] [source] 2023-12-27 18:49:36
>>DamnIn+(OP)
LLMs are not a fad for many things, especially programming. They improve my productivity by at least 100%. They're also useful for understanding specific, hard-to-Google questions or for parsing docs quickly. I think it's going to fizzle out for creative content, though, at least until these companies stop "aligning" it so much. Hard to be funny when you can't even offend a single molecule.
8. buckyf+z6[view] [source] 2023-12-27 18:57:49
>>DamnIn+(OP)
I don’t view LLMs as a fad. It’s like drummers and drum machines. Machines and drummers co-exist really well. I think drum machines, among other things, made drummers better.
replies(2): >>tremon+Fc >>fennec+ZY2
9. Shamel+q7[view] [source] 2023-12-27 19:03:27
>>DamnIn+(OP)
> (according to a tool that can allegedly detect these things).

Eh, I would trust my own testing before trusting a tool that claims to have somehow automated this process without having access to the weights. Really it’s about how unique your content is and how similar (semantically) an output from the model is when prompted with the content’s premise.

I believe you, in any case. Just wanted to point out that lots of these tools are suspect.

◧◩
10. neilv+sa[view] [source] [discussion] 2023-12-27 19:20:25
>>MeImCo+q2
> LLMs have become a fundamental part of my workflow as a professional. GPT4 [...] doesn't have an ego about its incorrect statements when challenged on them.

To anthropomorphize it further, it's a plagiarizing bullshitter who apologizes quickly when any perceived error is called out (whether or not that particular bit of plagiarism or fabrication was correct), learning nothing, so its apology has no meaning, but it doesn't sound uppity about being a plagiarizing bullshitter.

11. shrimp+yc[view] [source] 2023-12-27 19:33:38
>>DamnIn+(OP)
Anthropic made $200M in 2023 and is projected to make $1B in 2024. That's from a laggard, less-than-two-year-old startup. I don't think LLMs are a fad.
◧◩
12. tremon+Fc[view] [source] [discussion] 2023-12-27 19:34:14
>>buckyf+z6
It mainly made mediocre drummers sound better to the untrained ear.
replies(3): >>HaZeus+ch >>graphe+5x >>buckyf+593
◧◩◪
13. HaZeus+ch[view] [source] [discussion] 2023-12-27 19:57:23
>>tremon+Fc
Then it comes down to preference, but the craft and discipline objectively evolved as a result. Just as your trained ear may keep your preference for more refined percussion, a subject matter expert may care more for the native, untrained material on their topic. In either case, music progressed in spite of the trained ears, just as AI will advance all walks of life in spite of the subject matter experts.

Nonetheless, trained ears and subject matter experts can still pick their preference.

◧◩◪
14. graphe+5x[view] [source] [discussion] 2023-12-27 21:20:26
>>tremon+Fc
I agree. Hitting perfect notes constantly with little or no variation is pretty hard for a person to do. Now anything "live", anything with proof of humanity, sounds better since it's not as sterile.
replies(1): >>idonot+dF
◧◩◪◨
15. idonot+dF[view] [source] [discussion] 2023-12-27 22:07:23
>>graphe+5x
I agree with this. I prefer live music with the imperfections. And I like it when unmixed live recordings are leaked.
◧◩
16. stefan+GW[view] [source] [discussion] 2023-12-28 00:14:44
>>seanmc+u4
That's the joke: these sites have long been produced by LLMs. The result is obvious.
◧◩
17. thiht+cM1[view] [source] [discussion] 2023-12-28 09:31:02
>>logicc+j4
Copilot has been way more useful to me than GPT4. When I describe a complex problem where I want multiple solutions to compare, GPT4 is useless to me. The responses are almost always completely wrong or ignore half of the details I’ve written in the prompt. Or I have to write the prompts with a response already in mind, which kinda defeats why I would use it in the first place.

Copilot provides useful autocompletes maybe… 30% of the time? But it doesn’t waste too much time, as it’s more of a passive tool.

replies(2): >>rmorey+D83 >>NemoNo+mA4
18. throwa+5f2[view] [source] 2023-12-28 13:52:55
>>DamnIn+(OP)
We use LLMs for classification. When you have limited data, LLMs work better than standard classification models like random forests. In some cases, we found LLM-generated labels to be more accurate than human ones.

Labeling a few samples, LoRA-tuning an LLM, generating labels for millions of samples, and then training a standard classifier is an easy way to get a good classifier in a matter of hours or days (roughly the pipeline sketched below).
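
To make that concrete, here is a rough sketch of the kind of pipeline I mean, assuming Hugging Face transformers/peft/datasets plus scikit-learn; the model name, example data, and hyperparameters are placeholders, not our actual setup.

    # Rough sketch, assuming Hugging Face transformers/peft/datasets plus
    # scikit-learn. Model name, data, and hyperparameters are placeholders.
    import torch
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL = "roberta-base"  # placeholder; any model with a classification head works
    tok = AutoTokenizer.from_pretrained(MODEL)

    # 1) Small hand-labeled seed set (made-up examples).
    seed = Dataset.from_dict({"text": ["great product", "awful support"],
                              "label": [1, 0]})
    seed = seed.map(lambda b: tok(b["text"], truncation=True,
                                  padding="max_length", max_length=128),
                    batched=True)

    # 2) LoRA-tune the LLM on the seed set.
    base = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
    model = get_peft_model(base, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))
    Trainer(model=model,
            args=TrainingArguments("lora-out", num_train_epochs=3,
                                   per_device_train_batch_size=8),
            train_dataset=seed).train()

    # 3) Pseudo-label the large unlabeled corpus with the tuned LLM.
    unlabeled = ["love it", "broke after a day", "works as described"]  # stand-in
    model.eval()
    enc = tok(unlabeled, truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        pseudo_labels = model(**enc).logits.argmax(dim=-1).tolist()

    # 4) Train a cheap standard classifier on the pseudo-labels for production use.
    vec = TfidfVectorizer()
    clf = RandomForestClassifier().fit(vec.fit_transform(unlabeled), pseudo_labels)

The final classifier is what actually runs in production; the LLM is only used offline to manufacture labels, which is why the inaccuracy it introduces is tolerable.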

Basically, for any task where you can tolerate some inaccuracy, LLMs can be a great tool. So I don't think LLMs are a fad as such.

◧◩
19. fennec+ZY2[view] [source] [discussion] 2023-12-28 18:00:49
>>buckyf+z6
Neither, and NYT editors use all sorts of productivity tools, inspiration, references, etc. too. Same as artists will usually find a couple of references for whatever they want to draw, or for the style, etc.

I agree with the key point that paid content should be licensed for use in training, but the general argument has just spiralled into luddism from people who are fearful that these models could eventually take their jobs. And they will, just as machines have replaced humans in so many other industries; we all reap the rewards. Industrialisation isn't to blame for the 1%; our shitty flag-waving, vote-for-your-team politics is to blame.

◧◩◪
20. rmorey+D83[view] [source] [discussion] 2023-12-28 18:42:36
>>thiht+cM1
> When I describe a complex problem where I want multiple solutions to compare, GPT4 is useless to me

FWIW I don’t try to use it for this. Mostly I use it to automate writing code for tasks that are well specified, often transformations from one format to another. So yes, with a solution in mind. It mostly just saves typing, which is a minority of the work, but it is a useful time saver.
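
For illustration only (not a real task of mine), the kind of well-specified, typing-heavy transformation I mean is a throwaway script like this, with made-up file names and columns:

    # Hypothetical example of a "format transformation" task I'd hand to GPT4:
    # flatten a JSON export into a CSV with a fixed set of columns.
    import csv
    import json

    COLUMNS = ["id", "name", "created_at"]  # made-up schema

    with open("export.json") as f:          # made-up input file
        records = json.load(f)

    with open("export.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        for rec in records:
            writer.writerow({col: rec.get(col, "") for col in COLUMNS})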

◧◩◪
21. buckyf+593[view] [source] [discussion] 2023-12-28 18:44:12
>>tremon+Fc
It allowed people to see the difference between drum machines and humans. Drummers could practice to sound more like the ‘perfect’ machines, but more importantly the best drummers learned how to differentiate themselves from machines. The best drummers actually became more human. Listen to and watch Nate Smith: this guy plays with timing, feel, and audience reactions in ways that machines cannot. Sometimes tools let humans expand their creativity in ways previously unheard of. Just like LLMs are doing right now.
◧◩◪
22. NemoNo+mA4[view] [source] [discussion] 2023-12-29 07:17:23
>>thiht+cM1
Copilot is amazing. It single-handedly returned me to the Microsoft ecosystem and changed the way I use the Internet. Huggingface is another great AI; I've used Github's a bit, Codium a bit, and all of these things are amazing.

This is not a fad; this is the beginning of a world where we can just interact naturally to accomplish things we currently have to be educated in how to accomplish.

Haha, I love that people can't see the writing on the wall. I think this is a bigger invention than the smartphone that I'm typing this on now, fr, just wait and see ;)

23. dinvla+mm7[view] [source] 2023-12-30 06:06:43
>>DamnIn+(OP)
Very much so. And their popularity has already been in decline for several months, and it can't be explained away by kids going on summer vacation anymore.
[go to top]