zlacker

[return to "My AI skeptic friends are all nuts"]
1. gdubs+Z[view] [source] 2025-06-02 21:18:21
>>tablet+(OP)
One thing that I find truly amazing is just the simple fact that you can now be fuzzy with the input you give a computer, and get something meaningful in return. Like, as someone who grew up learning to code in the 90s, it always seemed like science fiction that we'd get to a point where you could give a computer some vague, human-level instructions and get it to more or less do what you want.
◧◩
2. csalle+z1[view] [source] 2025-06-02 21:22:05
>>gdubs+Z
It's mind-blowing. At least 1-2x/week I find myself shocked that this is the reality we live in.
◧◩◪
3. malfis+Y5[view] [source] 2025-06-02 21:45:03
>>csalle+z1
Today I had a dentist appointment and the dentist suggested I switch toothpaste lines to see if something else works for my sensitivity better.

I am predisposed to canker sores, and if I use a toothpaste with SLS in it I'll get them. But a lot of the SLS-free toothpastes are new-age hippy stuff and are also fluoride-free.

I went to chatgpt and asked it to suggest a toothpaste that was both SLS free and had fluoride. Pretty simple ask, right?

It came back with two suggestions. Its top suggestion had SLS; its backup suggestion lacked fluoride.

Yes, it is mind-blowing, the world we live in. Executives want to turn our code bases over to these tools.

◧◩◪◨
4. Game_E+kp[view] [source] 2025-06-02 23:49:35
>>malfis+Y5
What model and query did you use? I used the prompt "find me a toothpaste that is both SLS free and has fluoride" and both GPT-4o [0] and o4-mini-high [1] gave me correct first answers. The 4o answer used the newish "show products inline" feature, which made it easier to jump to each product and check it out (I am putting aside my fear that this feature will end up killing their web product with monetization).

0 - https://chatgpt.com/share/683e3807-0bf8-800a-8bab-5089e4af51...

1 - https://chatgpt.com/share/683e3558-6738-800a-a8fb-3adc20b69d...

◧◩◪◨⬒
5. wkat42+8F[view] [source] 2025-06-03 02:20:12
>>Game_E+kp
The problem is that the same prompt will yield good results one time and bad results another. "Get better at prompting" is often just an excuse for AI hallucination. Better prompting can help, but often the prompt is totally fine and the tech is just not there yet.
◧◩◪◨⬒⬓
6. Aeolun+8J[view] [source] 2025-06-03 03:02:03
>>wkat42+8F
If you want a correct answer the first time around, and give up if you don't get it, even if you know the thing can give it to you with a bit more effort (but still less effort than searching yourself), don't you think that's a user problem?
◧◩◪◨⬒⬓⬔
7. 3eb798+zL[view] [source] 2025-06-03 03:34:14
>>Aeolun+8J
If you are genuinely asking a question, how are you supposed to know the first answer was incorrect?
◧◩◪◨⬒⬓⬔⧯
8. socalg+vP[view] [source] 2025-06-03 04:25:01
>>3eb798+zL
The person who started this conversation verified that the answers were incorrect. So it sounds like you just do that: check the results. If they turn out to be false, tell the LLM or make sure you're not on a bad model. It's still likely to be faster than searching yourself.
◧◩◪◨⬒⬓⬔⧯▣
9. mtlmtl+EU[view] [source] 2025-06-03 05:20:49
>>socalg+vP
That's all well and good for this particular example. But in general, the verification can often be so much work it nullifies the advantage of the LLM in the first place.

Something I've been using perplexity for recently is summarizing the research literature on some fairly specific topic (e.g. the state of research on the use of polypharmacy in treatment of adult ADHD). Ideally it should look up a bunch of papers, look at them and provide a summary of the current consensus on the topic. At first, I thought it did this quite well. But I eventually noticed that in some cases it would miss key papers and therefore provide inaccurate conclusions. The only way for me to tell whether the output is legit is to do exactly what the LLM was supposed to do: search for a bunch of papers, read them and conclude on what the aggregate is telling me. And it's almost never obvious from the output whether the LLM did this properly or not.

The only way in which this is useful, then, is to find a random, non-exhaustive set of papers for me to look at (since the LLM also can't be trusted to accurately summarize them). Well, I can already do that with a simple search in one of the many databases for this purpose, such as PubMed, arXiv, etc. Any capability beyond that is merely an illusion. It's close, but no cigar. And in this case close doesn't really help reduce the amount of work.

This is why a lot of the things people want to use LLMs for require a "definiteness" that's completely at odds with the architecture. The fact that LLMs are good at pretending to do it well only serves to distract us from addressing the fundamental architectural issues that need to be solved. I don't think any amount of training of a transformer architecture is gonna do it. We're several years into trying that and the problem hasn't gone away.

◧◩◪◨⬒⬓⬔⧯▣▦
10. lazide+A01[view] [source] 2025-06-03 06:21:15
>>mtlmtl+EU
Yup, and worse since the LLM gives such a confident sounding answer, most people will just skim over the ‘hmm, but maybe it’s just lying’ verification check and move forward oblivious to the BS.
◧◩◪◨⬒⬓⬔⧯▣▦▧
11. fennec+bk1[view] [source] 2025-06-03 09:46:25
>>lazide+A01
People did this before LLMs anyway. Humans are selfish, apathetic creatures, and unless something pertains to someone's subject of interest, the human response is "huh, neat. I didn't know dogs could cook pancakes like that" and then a scroll to the next tiktok.

This is also how people vote, apathetically and tribally. It's no wonder the world has so many fucking problems, we're all monkeys in suits.

[go to top]