>>aarest+A
I probably should have added sarcasm tags to my post. My very firm opinion is that AI should only make suggestions to humans, not decisions for them.
>>aarest+A
As annoying as it is when the human support tech is wrong about something, I'm not hoping they'll lose their job as a result. I want them to have better training/docs so it doesn't happen again, just as I'm sure the company will do with this AI bot.
>>Sparky+E3
That only works well if someone is in a job they're suited to, though. Keeping someone in a position they're unqualified for, where they keep screwing up badly, isn't doing anyone any favors.
>>Sparky+E3
> I'm not hoping they'll lose their job as a result
I have empathy for humans. It's not yet a thought crime to suggest that an LLM's existence should be ended. The analogy would make me afraid of the future if I thought about it too much.
>>aarest+A
How is this not an example of humans being held accountable? What would be the difference here if a help center article contained incorrect information? Would you go after the technical writer instead of the founders or Cursor employees responding on Reddit?
>>aarest+A
I'd also argue that humans learn from huge mistakes more easily. Typically, we need only one training sample to avoid a whole class of errors in the future (in part because we are held accountable).