zlacker

1. alkona (OP) 2025-05-22 09:12:55
I always found myself to be very good at Googling/searching, or at asking: like emailing an expert or colleague. I'm good at condensing what I'm trying to ask, and at knowing what they could be misunderstanding or what follow-up questions they might have, to save some back-and-forth. The corresponding skill on Google is predicting what I might see and adding negative search terms to exclude it.

BUT, and I think this is why some of us feel ChatGPT is poor: asking in the way that guides a human or a search engine makes ChatGPT produce worse answers(!).

If you say "What could be wrong with X? I'm pretty sure it's not Y or Z, which I ruled out; could it be Q, or perhaps W?", then ChatGPT and other language models quickly reinforce your beliefs instead of challenging them. They would rather give you an incorrect reason why you are right than point out an additional problem or challenge your assumptions. If LLMs could get over the bullshit problem, it would be so much better. Having a confidence level and being able to express it is invaluable. But somehow I doubt it's possible - if it were, they would be doing it already, as it's a killer feature. So I fear it's somehow not achievable with LLMs? In which case the title is correct.
