zlacker

1. idle_z+(OP) 2026-01-30 19:13:33
It means the agent should try to be intentional, I think? The way ideas are phrased in prompts changes how LLMs respond, and equating the instructions to life itself might make it stick to them better?
replies(1): >>emp173+f3
2. emp173+f3 2026-01-30 19:29:35
>>idle_z+(OP)
I feel like you’re trying to assign meaning where none exists. This is why AI psychosis is a thing: LLMs are good at making you feel like they’re saying something profound when there really isn’t anything behind the curtain. It’s a language model, not a life form.