zlacker
2 comments
1. tdulli+(OP)
2025-07-07 13:18:27
Author here. I am entirely OK with using "goal" in the context of an RL algorithm. If you read my article carefully, you'll find that I object to the use of "goal" in the context of LLMs.
2. Timwi+Aq2
2025-07-08 12:43:55
>>tdulli+(OP)
If you read the literature on AI safety carefully (which uses the word “goal”), you'll find they're not talking about LLMs either.
3. tdulli+oF2
2025-07-08 14:30:29
>>Timwi+Aq2
I think the Anthropic "omg blackmail" article clearly talks about both LLMs and their "goals."