zlacker

[parent] [thread] 5 comments
1. hypera+(OP)[view] [source] 2025-05-26 01:33:15
Reason about problems: sure. Independently solve novel ones without extreme amounts of guidance: I have yet to see it.

Granted, for most language and programming tasks, you don’t need the latter, only the former.

replies(1): >>Workac+a8
2. Workac+a8[view] [source] 2025-05-26 03:04:22
>>hypera+(OP)
99.9% of humans will never solve a novel problem. It's a bad benchmark to use here
replies(2): >>hypera+8g >>guappa+6l
3. hypera+8g[view] [source] [discussion] 2025-05-26 04:53:43
>>Workac+a8
I agree. But it’s worth being somewhat skeptical of ASI scenarios if you can, for example, give a well-formulated math problem to an LLM and it cannot solve it. Until we get a Riemann hypothesis calculator (or the equivalent for other hard, long-unsolved maths problems), it’s kind of silly to be debating the extreme ends of AI cognition theory.
replies(1): >>Camper+ms1
4. guappa+6l[view] [source] [discussion] 2025-05-26 05:54:31
>>Workac+a8
But they will solve a problem novel to them, since they haven't read all of the text that exists.
5. Camper+ms1[view] [source] [discussion] 2025-05-26 16:02:04
>>hypera+8g
"I'm taking this talking dog right back to the pound. It completely whiffed on both Riemann and Goldbach. And you should see the buffer overflows in the C++ code it wrote for me."
replies(1): >>hypera+iL7
6. hypera+iL7[view] [source] [discussion] 2025-05-29 03:44:35
>>Camper+ms1
A dog is in a very different category from a man-made godlike super-intelligence.