zlacker

1. aaronb (OP) 2025-05-22 14:19:44
At one point I asked Grok, "I've heard that AIs are programmed to please the user, which could lead to prioritizing what the user wants to hear over telling the truth. Are you doing that?" It said it wasn't, and pointed to places in its earlier answers where it had given an objective assessment (as it saw it) and only then followed up with encouragement. Fair enough. So I told it to always prioritize giving me an objective viewpoint, and after that it started breaking its answers into an "objective facts" section followed by an "opinion" sort of section.

But I've noticed recently it's started slipping back into "That's a great idea" and "You've got this" cheerleading, so I'm going to have to tell it to knock that off again. It will definitely lean into confirmation bias if that's what you're looking for and you don't explicitly tell it not to worry about how you'll feel about the answer.

I find it useful for bouncing ideas off, while keeping in mind that I'm really bouncing them off myself and off a sort of hive mind made up of what's been said in certain mainstream corners of the Internet. I'm less creative than average, so I get more ideas that way than I would from just journaling, and that's worth something.
