zlacker

[parent] [thread] 1 comments
1. gmerc+(OP)[view] [source] 2026-01-26 06:13:25
This betrays a lack of understanding how inference works. You cannot categorically defeat prompt injection with instructions. It does not work. There are no privileged tokens.
replies(1): >>lmeyer+61
2. lmeyer+61[view] [source] 2026-01-26 06:26:41
>>gmerc+(OP)
Yep! One of my favorite attacks is just having a very long piece of a text so the LLM becomes unclear what's important and is happy to do something else
[go to top]