Overall, current LLMs remind me of those bottom-feeder websites that do no original research: the ones that just find an article they like, lazily rewrite it, introduce a few errors, then maybe paste in some baloney "sources" (which somehow never include the actual original source). That mode of operation tends to be technically legal, but it's parasitic and lazy and doesn't add much value to the world.
All that aside, I tend to agree with the hypothesis that LLMs are a fad that will mostly pass. For professionals, it is really hard to get past hallucinations and the lack of citations. Imagine being a perpetual fact-checker for a very unreliable author. And laymen will probably mostly use LLMs to generate low-effort content for SEO, which will inevitably degrade the quality of the same LLMs as they breed with their own offspring. "Regression to mediocrity," as Galton put it.
Do you mean in the Browsing Mode or something? I don't think it is naturally capable of that, both because it is performing lossy compression, and because in many cases it simply won't know where the text that was fed to it during training came from.
For writers, maybe, but absolutely not for programmers: it's incredibly useful. I don't think anyone who's used GPT-4 to improve their coding productivity would consider it a fad.
Another way of looking at this is that bottom-feeder websites do work that could easily be done by an LLM. I've noticed a high correlation between "could be AI" and "is definitely a trashy clickbait news source" (even before LLMs were a thing).
To be clear, if your writing could be replaced by an LLM today, you probably aren't a very good writer. And I doubt this technology will stop improving, so I wouldn't make the mistake of assuming that 2023 is the high point for LLMs and that they won't be much better by 2033 (or whatever replaces them will be).
Eh, I would trust my own testing before trusting a tool that claims to have somehow automated this process without having access to the weights. Really it’s about how unique your content is and how similar (semantically) an output from the model is when prompted with the content’s premise.
I believe you, in any case. Just wanted to point out that lots of these tools are suspect.
To anthropomorphize it further: it's a plagiarizing bullshitter that apologizes instantly when any perceived error is called out (whether or not that particular bit of plagiarism or fabrication was actually wrong), learns nothing from the exchange, so its apology means nothing; but at least it doesn't sound uppity about being a plagiarizing bullshitter.
Nonetheless, trained ears and subject matter experts can still pick their preference.
Copilot provides useful autocompletes maybe… 30% of the time? But it doesn't waste too much time, as it's more of a passive tool.
Labeling a few samples, LoRA-finetuning an LLM on them, generating labels for millions of samples with it, and then training a standard classifier on those labels is an easy way to get a good classifier in a matter of hours or days.
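The pipeline above can be sketched as follows. The `llm_label` stub stands in for the LoRA-finetuned LLM (in practice you'd call the tuned model, e.g. via Hugging Face PEFT); the "standard classifier" here is a toy word-count scorer so the whole thing is self-contained. Every function and dataset below is illustrative, not a real API.

```python
from collections import Counter, defaultdict

def llm_label(text: str) -> str:
    # Stand-in for the LoRA-tuned LLM; a keyword rule for illustration only.
    return "positive" if "good" in text or "great" in text else "negative"

# Step 1: pseudo-label a large unlabeled corpus with the (stand-in) LLM.
unlabeled = [
    "the service was good and fast",
    "great experience overall",
    "terrible support, very slow",
    "bad product, broke quickly",
]
pseudo_labeled = [(t, llm_label(t)) for t in unlabeled]

# Step 2: train a cheap standard classifier (naive word-count scoring)
# on the pseudo-labels, so inference no longer needs the LLM at all.
word_counts = defaultdict(Counter)
for text, label in pseudo_labeled:
    word_counts[label].update(text.split())

def classify(text: str) -> str:
    scores = {lbl: sum(cnt[w] for w in text.split()) for lbl, cnt in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("good fast service"))
```

The point is the distillation pattern: the expensive model labels the data once, and a cheap classifier serves the traffic.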
For basically any task where you can tolerate some inaccuracy, LLMs can be a great tool. So I don't think LLMs are a fad as such.
I agree with the key point that paid content should be licensed before being used for training. But the general argument being made has spiralled into luddism from people who fear that these models could eventually take their jobs. And they will, just as machines have replaced humans in so many other industries; we all reap the rewards. Industrialisation isn't to blame for the 1%; our shitty flag-waving, vote-for-your-team politics are to blame.
FWIW, I don't try to use it for this. Mostly I use it to automate writing code for tasks that are well specified, often transformations from one format to another; so yes, with a solution in mind. It mostly just saves typing, which is a minority of the work, but it is a useful time saver.
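For concreteness, a "well specified transformation from one format to another" is the kind of thing I mean; here's a trivial made-up example (CSV in, JSON out) of the sort of code it writes for me:

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Convert CSV text with a header row into a JSON array of objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

print(csv_to_json("name,age\nada,36\ngrace,45"))
```

Nothing clever, fully specified by the input and output formats, and exactly the kind of typing I'm happy to delegate.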
This is not a fad. This is the beginning of a world where we can just interact naturally with computers to accomplish things that today require specialized education.
Haha, I love that people can't see the writing on the wall. I think this is a bigger invention than the smartphone I'm typing this on right now, for real; just wait and see ;)