zlacker

[parent] [thread] 0 comments
1. hn_acc+(OP)[view] [source] 2026-01-30 19:59:35
Once we've solved social engineering scams, we can iterate 10x as hard and solve LLM prompt injection. /s

It's like having 100 "naive/gullible people" who are good at some math/english but don't understand social context, all with your data available to anyone who requests it in the right way..

[go to top]