1. Timwi+ (OP) | 2025-07-08 12:43:55
If you read the literature on AI safety carefully (which is where the word “goal” is used), you'll find it isn't talking about LLMs either.
2. tdulli+Oe | 2025-07-08 14:30:29
>>Timwi+(OP)
I think the Anthropic "omg blackmail" article clearly talks about both LLMs and their "goals".