zlacker

[parent] [thread] 0 comments
1. Terr_+(OP)[view] [source] 2025-10-27 19:02:18
> it can also save us from biased content

I am pessimistic on that front, since:

1. If LLMs can't detect biases in their own output, why would we expect them to reliably detect biases in documents in general?

2. As a general rule, deploying bias/tricks/fallacies/BS is much easier than detecting them and explaining why they're wrong.
