zlacker

[parent] [thread] 5 comments
1. anonym+(OP)[view] [source] 2024-05-15 18:43:13
> You can’t actually trust AI systems

For a lot of (very profitable) use cases, hallucinations and 80/20 output quality are actually more than good enough, especially when they are replacing solutions that are even worse.

replies(2): >>player+3m1 >>still_+Zo1
2. player+3m1[view] [source] 2024-05-16 07:19:27
>>anonym+(OP)
What use cases? This kind of thing gets stated all the time, but never with any examples.
replies(2): >>roguas+O12 >>jedber+Ws2
3. still_+Zo1[view] [source] 2024-05-16 08:01:56
>>anonym+(OP)
What are examples of these (very profitable) use cases?

Producing spam has some margin on it, but is it really very profitable? And what else?

4. roguas+O12[view] [source] [discussion] 2024-05-16 13:58:58
>>player+3m1
All the use cases we see. Take a look at Perplexity optimising short internet research. If it gets this mostly right, it's fine enough; it saved me 30 minutes of mindless clicking and reading, even if some errors are in there.
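
A rough sketch of what that looks like programmatically, assuming Perplexity's OpenAI-compatible API and its "sonar" model name (both taken from their public docs, so treat the endpoint and model as assumptions that may have changed):

    from openai import OpenAI

    # Assumption: Perplexity exposes an OpenAI-compatible chat endpoint.
    client = OpenAI(
        base_url="https://api.perplexity.ai",
        api_key="YOUR_PPLX_API_KEY",  # placeholder, set your real key
    )

    resp = client.chat.completions.create(
        model="sonar",  # assumed name of the search-grounded model
        messages=[{
            "role": "user",
            "content": "Summarise the main arguments for and against "
                       "static typing in Python, with sources.",
        }],
    )
    # Skim the answer and spot-check the sources; an occasional error
    # still beats 30 minutes of manual clicking for this kind of task.
    print(resp.choices[0].message.content)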
replies(1): >>CooCoo+y72
5. CooCoo+y72[view] [source] [discussion] 2024-05-16 14:28:37
>>roguas+O12
You make it sound like LLMs just make a few small mistakes when in reality they can hallucinate on a large scale.
6. jedber+Ws2[view] [source] [discussion] 2024-05-16 16:28:19
>>player+3m1
Any use case where you treat the output like the work of a junior person and check it: coding, law, writing. Pretty much anywhere you can replace a junior employee with an LLM.
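
A minimal sketch of that review loop, assuming the openai Python client as the draft generator (the model name and the checking step are illustrative, not anyone's actual pipeline):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft(task: str) -> str:
        # The LLM does the first pass, like handing a ticket to a junior.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": task}],
        )
        return resp.choices[0].message.content

    def reviewed(task: str) -> str:
        # Nothing ships until a human has looked at the draft.
        text = draft(task)
        print("--- DRAFT ---\n" + text)
        if input("Accept? [y/N] ").strip().lower() != "y":
            raise RuntimeError("Draft rejected; a human redoes the work.")
        return text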

Google or Meta (don't remember which) just put out a report about how many human-hours they saved last year using transformers for coding.
