zlacker

[return to "ChatGPT Is a Gimmick"]
1. danlit+ag[view] [source] 2025-05-22 07:43:31
>>blueri+(OP)
It is refreshing to see I am not the only person who cannot get LLMs to say anything valuable. I have tried several times, but the cycle "You're right to question this. I actually didn't do anything you asked for. Here is some more garbage!" gets really old really fast.

It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.

◧◩
2. badmin+jk[view] [source] 2025-05-22 08:22:12
>>danlit+ag
I mostly use it as a replacement for a search engine and for exploration, mostly for subjects I'm learning from scratch, where I don't yet have a good grasp of the official documentation or the right keywords. It competes with searching for guides in a traditional search engine, but that's easy to beat on today's SEO-infested web.

Its quality seems to vary wildly across subjects, but annoyingly it presents everything with uniform confidence.

◧◩◪
3. -__---+0C[view] [source] 2025-05-22 11:39:43
>>badmin+jk
I hate the confident obsequious waffling. The cultural origins of the tool are evident.

If you aren't already, I suggest remembering, every 3-5 prompts, to throw in: "no waffling", "no flattery", "no obsequious garbage", etc. You can make it as salty as you like. If the AI says "Have fun!" or "Let's get coding!", you know you need to get the whip out haha.

Also, "3 sentences max on ...", "1 sentence explaining ...", "1 paragraph max on ...".

Another improvement for me: say you want to do procedure x in situation y. Go with "I'm in situation y, I'm considering procedure x, but I know I've missed something. Tell me what I could have missed." Or "list specific scenarios in which procedure x will lead to catastrophe".

Accepting the tool as a fundamentally dumb synthesiser and summariser is the first step to it getting a lot more useful, I think.
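The "remind it every few prompts" routine above can be automated if you're driving the model through an API. A minimal sketch in plain Python (no real API calls; the message-dict shape, the reminder wording, and the every-4-turns interval are my own assumptions, not anything the thread specifies):

```python
# Hypothetical helper: re-inject style directives every few user turns,
# since models tend to drift back toward flattery as the context grows.

STYLE_REMINDER = (
    "No waffling. No flattery. No obsequious garbage. "
    "3 sentences max unless asked for more."
)

def with_reminder(history, user_msg, every=4):
    """Append user_msg to history as a user turn, prefixing the style
    reminder on every `every`-th user turn (interval is an assumption)."""
    user_turns = sum(1 for m in history if m["role"] == "user")
    if user_turns % every == 0:
        user_msg = STYLE_REMINDER + "\n\n" + user_msg
    history.append({"role": "user", "content": user_msg})
    return history
```

With `every=4`, the very first user turn gets the reminder (zero prior turns), then every fourth turn after that; you'd pass the resulting `history` to whatever chat endpoint you use.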

All that said, I use it pretty rarely. The revolution in learning we need lies with John Holt and similar thinkers from that period; it is still waiting to happen, and I fear it won't be provided by the next big tech thing.

◧◩◪◨
4. aaronb+TW[view] [source] 2025-05-22 14:19:44
>>-__---+0C
At one point I asked Grok, "I've heard that AIs are programmed to please the user, which could lead to prioritizing what the user wants to hear over telling the truth. Are you doing that?" It said it wasn't, and gave examples of places in its answers to me where it had given an objective assessment (as it saw it) and then followed it up with encouragement. Fair enough. So I told it to always prioritize giving me an objective viewpoint, and after that, it started breaking answers up with an "objective facts" section and then an "opinion" sort of section.

But I've noticed recently it's started slipping back into more "That's a great idea" and "You've got this" cheerleading, so I'm going to have to tell it to knock that off again. It will definitely lean into confirmation bias if that's what you're looking for and you don't explicitly tell it not to worry about how you'll feel about the answer.

I find it useful for bouncing ideas off of, while keeping in mind that I'm really bouncing them off myself and sort of a hive mind made up of what's been said in certain mainstream sectors of the Internet. I'm less creative than average, so I get more ideas that way than I'd get from just journaling, so that's worth something.
