
1. ben_w+(OP) 2025-05-14 20:58:16
Given that LLMs are trained on text written by humans, who don't respond well to being dehumanised, I expect anthropomorphising them to work better than the opposite.

https://www.microsoft.com/en-us/worklab/why-using-a-polite-t...

replies(2): >>Schema+5B >>martin+1Z1
2. Schema+5B 2025-05-15 03:08:53
>>ben_w+(OP)
Aside from getting more useful responses back, I think it's just bad for your brain to treat something that acts like a person with disrespect. It becomes "it's just a chatbot", "it's just a dog", "it's just a low-level customer support worker".
replies(1): >>ben_w+6V
3. ben_w+6V 2025-05-15 07:30:10
>>Schema+5B
While I agree with you on that, there are also prompts that make them not act like a person at all, and prompts can be write-once-use-many, which lessens that impact.

This is why I tend to lead with the "quality of response" argument rather than the "user's own mind" argument.

4. martin+1Z1 2025-05-15 16:40:21
>>ben_w+(OP)
I am not talking about getting it to generate useful output; treating it extra politely or threatening it with fines does seem to give better results sometimes, so why not. I am talking about the phrase "gets it". It does not get anything.