But this has been going on for a long time, unrelated to AI. When Amazon first became available here in Spain (I don't remember exactly what year, but before LLMs for sure), the number of fraudulent reviews filling the platform was already noticeable.
That industry you're talking about might have gotten new wings with LLMs, but it wasn't spawned by LLMs; it existed long before that.
> the less people will trust content on those platforms.
Maybe I'm jaded from using the internet from a young age, but both my peers and I basically have a built-in mistrust of random stuff we see on the internet, at least compared to our parents and our younger peers.
"Don't believe everything you see on the internet" has been a mantra almost as long as the internet has existed; maybe people forgot and needed a reminder, but it was never not true.
When snail mail had a cost floor of $0.25 for the price of postage, email was basically free. You might get 2-3 daily pieces of junk mail in your house’s mailbox, but you would get hundreds or thousands in your email inbox. Slop comes at scale. LLMs didn’t invent spam, but they are making it easier to create more variants of it, and possibly ones that convert better than procedurally generated pieces.
There’s a difference between your cognitive brain and your lizard brain. You can tell yourself that mantra and still occasionally fall prey to spam content. The people who make spam have a financial incentive to abuse the heuristics/signals you use to judge the authenticity of a piece of content, in the same way makers of cheap knockoffs of Rolex watches, Cartier jewelry, or Chanel handbags have an incentive to make the knockoffs appear as authentic as possible.
Hence I suspect that quite a few of the interfaces now being spammed with AI crap will end up implementing some form of fee, paywall, or trustwall. That should keep armies of AI slop responses from being worthwhile.
How we do that without killing some communities remains to be seen.