zlacker

[return to "My AI skeptic friends are all nuts"]
1. gdubs+Z[view] [source] 2025-06-02 21:18:21
>>tablet+(OP)
One thing that I find truly amazing is just the simple fact that you can now be fuzzy with the input you give a computer and get something meaningful in return. Like, as someone who grew up learning to code in the 90s, it always seemed like science fiction that we'd get to a point where you could give a computer some vague human-level instructions and have it more or less do what you want.
◧◩
2. csalle+z1[view] [source] 2025-06-02 21:22:05
>>gdubs+Z
It's mind blowing. At least 1-2x/week I find myself shocked that this is the reality we live in
◧◩◪
3. malfis+Y5[view] [source] 2025-06-02 21:45:03
>>csalle+z1
Today I had a dentist appointment and the dentist suggested I switch toothpaste lines to see if something else works for my sensitivity better.

I am predisposed to canker sores, and if I use a toothpaste with SLS in it I'll get them. But a lot of the SLS-free toothpastes are new-age hippie stuff and are also fluoride-free.

I went to chatgpt and asked it to suggest a toothpaste that was both SLS free and had fluoride. Pretty simple ask, right?

It came back with two suggestions. Its top suggestion had SLS; its backup suggestion lacked fluoride.

Yes, it is mind-blowing, the world we live in. And executives want to turn our code bases over to these tools.

◧◩◪◨
4. Game_E+kp[view] [source] 2025-06-02 23:49:35
>>malfis+Y5
What model and query did you use? I used the prompt "find me a toothpaste that is both SLS free and has fluoride" and both GPT-4o [0] and o4-mini-high [1] gave me correct first answers. The 4o answer used the newish "show products inline" feature, which made it easier to jump to each product and check it out (I am putting aside my fear that this feature will end up killing their web product with monetization).

0 - https://chatgpt.com/share/683e3807-0bf8-800a-8bab-5089e4af51...

1 - https://chatgpt.com/share/683e3558-6738-800a-a8fb-3adc20b69d...

◧◩◪◨⬒
5. wkat42+8F[view] [source] 2025-06-03 02:20:12
>>Game_E+kp
The problem is that the same prompt will yield good results one time and bad results another. The "get better at prompting" advice is often just an excuse for AI hallucination. Better prompting can help, but often the prompt is totally fine and the tech is just not there yet.
◧◩◪◨⬒⬓
6. Aeolun+8J[view] [source] 2025-06-03 03:02:03
>>wkat42+8F
If you want a correct answer the first time around, and give up if you don't get it, even if you know the thing can give it to you with a bit more effort (but still less effort than searching yourself), don't you think that's a user problem?
◧◩◪◨⬒⬓⬔
7. 3eb798+zL[view] [source] 2025-06-03 03:34:14
>>Aeolun+8J
If you are genuinely asking a question, how are you supposed to know the first answer was incorrect?
◧◩◪◨⬒⬓⬔⧯
8. worthl+BN[view] [source] 2025-06-03 04:02:30
>>3eb798+zL
This is the right question.
[go to top]