
1. SahAss+ (OP) 2026-01-13 23:01:37
I think I generally agree, but I also think that treating them like people means you expect reason, intelligence, and a way to interrogate their way of "thinking" (very broad quotes here).

I think LLMs should be treated as something completely separate from both predictable machines ("automatons") and people. They have different concerns, and a different fitness for any given use case, than either existing category.
