That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems, and that’s not a product issue.
The AI model itself isn’t the product; the ChatGPT interface, for example, is the main product, layered on top of the core AI tech.
The issue is that trustworthiness isn’t solvable by applying standard product management techniques on a predictable schedule. It requires scientific research.
I don't know about that, it seems to work just fine at creating spam and clone websites.
For a lot of (very profitable) use cases, hallucinations and an 80/20 solution are actually more than good enough, especially when they’re replacing solutions that are even worse.
Producing spam has some margin on it, but is it really very profitable? And what else is?
Google or Meta (don't remember which) just put out a report about how many human-hours they saved last year using transformers for coding.