zlacker

[parent] [thread] 9 comments
1. mieubr+(OP)[view] [source] 2025-05-21 12:57:26
I was looking for exactly this comment. Everybody's gloating, "Wow, look how dumb AI is! Haha, schadenfreude!" but this seems like just a natural part of the evolution process to me.

It's going to look stupid... until the point it doesn't. And my money's on, "This will eventually be a solved problem."

replies(5): >>roxolo+73 >>grewso+67 >>Qem+xa >>Workac+eb >>spacem+Df1
2. roxolo+73[view] [source] 2025-05-21 13:20:53
>>mieubr+(OP)
The question, though, is what the time horizon of “eventually” is. Very different decisions should be made if it’s 1 year, 2 years, 4 years, 8 years, etc. To me it seems as if everyone is making decisions that are only reasonable if the time horizon is 1 year. Maybe they are correct and we’re on the cusp. Maybe they aren’t.

Good decision making would weigh the odds of 1 vs 8 vs 16 years. This isn’t good decision making.

replies(2): >>rsynno+X3 >>ecb_pe+16
3. rsynno+X3[view] [source] [discussion] 2025-05-21 13:26:59
>>roxolo+73
Or _never_, honestly. Sometimes things just don't work out. See, say, various 3D optical memory techs, which were constantly about to take over the world but never _quite_ made it to being actually useful.
4. ecb_pe+16[view] [source] [discussion] 2025-05-21 13:41:06
>>roxolo+73
> This isn’t good decision making.

Why is doing a public test of an emerging technology not good decision making?

> Good decision making would weigh the odds of 1 vs 8 vs 16 years.

What makes you think this isn't being done?

5. grewso+67[view] [source] 2025-05-21 13:46:28
>>mieubr+(OP)
Sometimes the last 10% takes 90% of the time. It'll be interesting to see how this pans out, and whether it will eventually get to something that could be considered a solved problem.

I'm not so sure they'll get there. If "solved" is defined as sub-standard but low-cost, then I wouldn't bet against that. A solution better than that, though? I don't think I'd put my money on it.

replies(1): >>disqar+QE1
6. Qem+xa[view] [source] 2025-05-21 14:09:24
>>mieubr+(OP)
> It's going to look stupid... until the point it doesn't. And my money's on, "This will eventually be a solved problem."

AI can remain stupid longer than you can remain solvent.

replies(1): >>disqar+nE1
7. Workac+eb[view] [source] 2025-05-21 14:12:58
>>mieubr+(OP)
To some people, it will always look stupid.

I have met people who believe that automobile engineering peaked in the 1960s, and they will argue that until you are blue in the face.

8. spacem+Df1[view] [source] 2025-05-21 20:11:21
>>mieubr+(OP)
People seem like they’re gloating because the message received in this period of the hype cycle is that AI is as good as a junior dev, without caveats, and is in no way supposed to be stupid.
9. disqar+nE1[view] [source] [discussion] 2025-05-21 23:33:38
>>Qem+xa
Haha, I like your take!

My variation was:

"Leadership can stay irrational longer than you can stay employed"

10. disqar+QE1[view] [source] [discussion] 2025-05-21 23:38:32
>>grewso+67
You just inspired a thought:

What if the goalpost is shifted backwards, to the 90% mark (instead of demanding that AI get to 100%)?

* Big corps could redefine "good enough" as "what the SotA AI can do" and call it good.

* They could then lay off even more employees, since the AI would be, by definition, Good Enough.

(This isn't too far-fetched, IMO, given the calls we're already seeing for copyright violation to be classified as legal-when-we-do-it.)
