zlacker

1. jorvi+(OP) 2026-02-05 03:48:40
What do you mean, "are you sure"? I literally saw it happen, and still see it, right in front of my eyes. I just now tested it with slight variations of "ideal temperature waterfowl cooking", "best temperature waterfowl roasting", etc., and all of these questions yield different answers, with temperatures ranging from 47°C-57°C (ignoring the 74°C food-safety ones).

That's my entire point. Even adding an "is" or "the" can get you way different advice. No human would give you different info when you ask "what's the waterfowl's best cooking temperature" vs "what is waterfowl's best roasting temperature".
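
For anyone who wants to reproduce it, here is a minimal sketch of the kind of test I mean, assuming the OpenAI Python SDK and a placeholder model name (not necessarily the model or interface I actually used):

    # Send near-identical phrasings of the same question and compare the
    # temperatures that come back. Model name is a placeholder.
    import re
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = [
        "What is the ideal temperature for cooking waterfowl?",
        "What's the ideal waterfowl cooking temperature?",
        "Best temperature for roasting waterfowl?",
    ]

    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # Pull out anything that looks like a Celsius value
        temps = re.findall(r"(\d{2,3})\s*°?\s*C", answer)
        print(f"{prompt!r} -> {temps or answer[:80]}")

Run it a few times and you'll see the spread I'm describing.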

replies(1): >>cruffl+e4
2. cruffl+e4 2026-02-05 04:31:23
>>jorvi+(OP)
Did you point that out to one of them… like “hey bro, I’ve asked y’all this question in multiple threads and get wildly different answers. Why?”

And the answer is probably that there is no such thing as a single ideal temperature for waterfowl: the real answer is "it depends", and you didn't give it enough context to answer your question well.

Context is everything. Give it poor prompts and you'll get poor answers. LLMs are no different from programming a computer or anything else in this domain.

And learning how to give good context is a skill. One we all need to learn.
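
To make that concrete, here is a rough sketch of what I mean by giving context; the field names and wording are illustrative, not a recipe:

    # Same question, asked bare vs. with the details the answer actually
    # depends on (cut, method, doneness, units). Wording is illustrative.
    def build_prompt(question, context=None):
        if not context:
            return question
        lines = [f"{key}: {value}" for key, value in context.items()]
        return "Context:\n" + "\n".join(lines) + "\n\nQuestion: " + question

    bare = build_prompt("What is the ideal temperature for cooking waterfowl?")

    specific = build_prompt(
        "What internal temperature should I pull it at?",
        context={
            "bird": "duck breast, skin on",
            "method": "pan-sear, then finish in the oven",
            "doneness": "medium-rare",
            "units": "Celsius, internal temperature at the center",
            "note": "state the food-safety temperature separately if it differs",
        },
    )

    print(bare)
    print(specific)

The first one invites a coin flip between culinary and food-safety answers; the second pins down what "ideal" even means.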

replies(1): >>jhhh+I7
3. jhhh+I7 2026-02-05 05:14:15
>>cruffl+e4
If I made a new, non-AI tool called 'correct answer provider' which gave definitive, incorrect answers to things, you'd call it bad software. But because it is AI, we're going to blame the user for not second-guessing the answers or for holding it wrong, i.e. bad prompting.