zlacker

[parent] [thread] 3 comments
1. master+(OP)[view] [source] 2026-02-05 07:18:58
The more interesting question I have is if such Prompt Injection Attacks can ever be actualy avoided, with how GenAI works.
replies(3): >>larodi+C >>Ono-Se+yw >>Purple+3B
2. larodi+C[view] [source] 2026-02-05 07:23:33
>>master+(OP)
Perhaps not, and it is indeed not unwise from Apple to stay away for a while given their ultra-focus on security.
3. Ono-Se+yw[view] [source] 2026-02-05 12:01:04
>>master+(OP)
They could be if models were trained properly, with more carefully delineated prompts.
4. Purple+3B[view] [source] 2026-02-05 12:40:41
>>master+(OP)
Removing the risk for most jobs should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user-acknowledge for potential problematic stuff, similar to how we have root-access-dialogues now. I mean, you don't really need access to all data, when you are just setting a clock, or playing music.
[go to top]