zlacker

1. dylan6+ (OP) 2025-04-15 21:52:50
If you had a human support person feeding the support question into the AI to get a hint, would that person know that the AI's response is made up rather than a correct answer? If they knew the correct answer, they wouldn't have needed to ask the AI.
replies(1): >>_jonas+fP5
2. _jonas+fP5 2025-04-17 19:42:31
>>dylan6+(OP)
Exactly; that's why my startup recommends that all LLM outputs come with trustworthiness scores:

https://cleanlab.ai/tlm/
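
As a rough sketch of the idea (this is not the Cleanlab TLM API; `ask_llm` and `escalate_to_human` are hypothetical placeholders for whatever client and workflow you use), one simple way to approximate a trustworthiness score is to sample the model several times and measure how often the answers agree:

    from collections import Counter

    def ask_llm(question: str) -> str:
        # Hypothetical placeholder: call your actual LLM client here.
        raise NotImplementedError

    def answer_with_trust_score(question: str, n_samples: int = 5):
        # Sample the model several times; treat agreement as a crude trust score.
        answers = [ask_llm(question).strip().lower() for _ in range(n_samples)]
        best, count = Counter(answers).most_common(1)[0]
        return best, count / n_samples  # score in (0, 1]; low agreement -> low trust

    # Usage: flag low-trust answers for human review instead of forwarding the hint.
    # answer, score = answer_with_trust_score("How do I reset my password?")
    # if score < 0.8:
    #     escalate_to_human()  # hypothetical helper

Low agreement doesn't prove the answer is wrong, but it gives the human in the loop a signal that the hint may be made up.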
