zlacker

[parent] [thread] 8 comments
1. jimjim+(OP)[view] [source] 2025-04-15 20:55:52
A few hallucinations. It's right more times than it's wrong. Humans make mistakes as well. Cosmic justice.
replies(1): >>aarest+A
2. aarest+A[view] [source] 2025-04-15 21:00:54
>>jimjim+(OP)
Yes, but humans can be held accountable.
replies(4): >>jimjim+W1 >>Sparky+E3 >>mgracz+8h >>raphma+291
3. jimjim+W1[view] [source] [discussion] 2025-04-15 21:09:08
>>aarest+A
I probably should have added sarcasm tags to my post. My very firm opinion is that AI should only make suggestions to humans and not decisions for humans.
4. Sparky+E3[view] [source] [discussion] 2025-04-15 21:18:58
>>aarest+A
As annoying as it is when the human support tech is wrong about something, I'm not hoping they'll lose their job as a result. I want them to have better training/docs so it doesn't happen again in the future, just like I'm sure they'll do with this AI bot.
replies(2): >>rurp+s6 >>recurs+Of
5. rurp+s6[view] [source] [discussion] 2025-04-15 21:35:19
>>Sparky+E3
That only works well if someone is in an appropriate job though. Keeping someone in a position they are unqualified for and majorly screwing up at isn't doing anyone any favors.
replies(1): >>Sparky+gj
6. recurs+Of[view] [source] [discussion] 2025-04-15 22:37:23
>>Sparky+E3
> I'm not hoping they'll lose their job as a result

I have empathy for humans; it's not yet a thought crime to suggest that an LLM's existence should simply be ended. The analogy would make me afraid of the future if I thought about it too much.
7. mgracz+8h[view] [source] [discussion] 2025-04-15 22:48:06
>>aarest+A
How is this not an example of humans being held accountable? What would be the difference here if a help center article contained incorrect information? Would you go after the technical writer instead of the founders or Cursor employees responding on Reddit?
8. Sparky+gj[view] [source] [discussion] 2025-04-15 23:07:31
>>rurp+s6
Fully agree. My analogy fits here too.
9. raphma+291[view] [source] [discussion] 2025-04-16 07:56:48
>>aarest+A
I'd argue that humans also more easily learn from huge mistakes. Typically, we need only one training sample to avoid a whole class of errors in the future (also because we are being held accountable).