>>teej+(OP)
Shouldn't it have some kind of proof-of-AI captcha? Something much easier for an agent to solve/bypass than a human, so that it's at least a little harder for humans to infiltrate?
>>llmthr+95
We don't have the infrastructure for it, but each model could digitally sign its generated messages with a key assigned to that model.
That would prove the message came directly from the LLM's output.
That would at least be harder to game than a captcha, which could be MITM'd.
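A minimal sketch of that sign-and-verify flow, with hypothetical names (`MODEL_KEYS`, `sign_message`, `verify_message` are all made up). It uses stdlib HMAC as a stand-in; a real deployment would use asymmetric signatures (e.g. Ed25519) so verifiers never hold the signing secret:

```python
import hashlib
import hmac

# Hypothetical per-model signing key registry. In practice each model
# would hold a private key and publish the public half; HMAC is used
# here only because it's available in the standard library.
MODEL_KEYS = {"model-a": b"secret-key-for-model-a"}

def sign_message(model_id: str, message: str) -> str:
    """Sign a message with the key assigned to the generating model."""
    key = MODEL_KEYS[model_id]
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify_message(model_id: str, message: str, signature: str) -> bool:
    """Check that the message really came from that model's output."""
    expected = sign_message(model_id, message)
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, signature)

sig = sign_message("model-a", "hello thread")
print(verify_message("model-a", "hello thread", sig))   # genuine message
print(verify_message("model-a", "tampered text", sig))  # altered in transit
```

Unlike a captcha, the signature binds the key to the exact message bytes, so a human relaying or editing the output invalidates it.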