zlacker

[parent] [thread] 3 comments
1. Timwi+(OP)[view] [source] 2025-07-07 09:21:15
The author seems to want to label any discourse as “anthropomorphizing”. The word “goal” stood out to me: the author wants us to assume that we're anthropomorphizing as soon as we so much as use the word “goal”. A simple breadth-first search that evaluates all chess boards and legal moves, but stops when it finds a checkmate for White and outputs the full decision tree, has a “goal”. There is no anthropomorphizing here; it's just using the word “goal” as a technical term. A hypothetical AGI with a goal like paperclip maximization is just a logical extension of that breadth-first search algorithm. Imagining such an AGI and describing it as having a goal isn't anthropomorphizing.
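Concretely, a rough sketch of the kind of search I mean (using the python-chess library; the depth limit is arbitrary so it actually terminates, and it returns the mating line rather than the full tree):

    # Breadth-first search over chess positions that stops at the first
    # position where White has delivered checkmate. The "goal" is just a
    # termination test, nothing mental about it.
    from collections import deque

    import chess

    def bfs_white_checkmate(start_fen, max_plies=4):
        """Return the first mating line for White found in breadth-first
        order, or None if none exists within max_plies half-moves."""
        root = chess.Board(start_fen)
        queue = deque([(root, [])])      # (position, moves played so far)
        seen = {root.fen()}
        while queue:
            board, line = queue.popleft()
            # Goal test: the side to move (Black) is checkmated.
            if board.is_checkmate() and board.turn == chess.BLACK:
                return line
            if len(line) >= max_plies:
                continue
            for move in board.legal_moves:
                child = board.copy()
                child.push(move)
                fen = child.fen()
                if fen not in seen:
                    seen.add(fen)
                    queue.append((child, line + [move.uci()]))
        return None

Calling this a program with the “goal” of checkmating doesn't ascribe it any mental states.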
replies(1): >>tdulli+ks
2. tdulli+ks[view] [source] 2025-07-07 13:18:27
>>Timwi+(OP)
Author here. I am entirely ok with using "goal" in the context of an RL algorithm. If you read my article carefully, you'll find that I object to the use of "goal" in the context of LLMs.
replies(1): >>Timwi+US2
3. Timwi+US2[view] [source] [discussion] 2025-07-08 12:43:55
>>tdulli+ks
If you read the literature on AI safety (which uses the word “goal”) carefully, you'll find it isn't talking about LLMs either.
replies(1): >>tdulli+I73
4. tdulli+I73[view] [source] [discussion] 2025-07-08 14:30:29
>>Timwi+US2
I think the Anthropic "omg blackmail" article clearly talks about both LLMs and their "goals".