zlacker

[return to "AI just proved Erdos Problem #124"]
1. magica+Ao2[view] [source] 2025-12-01 03:40:02
>>nl+(OP)
The overhyped tweet from the Robinhood guy raising money for his AI startup is put into better perspective by Thomas Bloom (including the point that #124 is not the problem from the cited paper, "Complete sequences of sets of integer powers" [BEGL96]):

> This is a nice solution, and impressive to be found by AI, although the proof is (in hindsight) very simple, and the surprising thing is that Erdos missed it. But there is definitely precedent for Erdos missing easy solutions!

> Also this is not the problem as posed in that paper

> That paper asks a harder version of this problem. The problem which has been solved was asked by Erdos in a couple of later papers.

> One also needs to be careful about saying things like 'open for 30 years'. This does not mean it has resisted 30 years of efforts to solve it! Many Erdos problems (including this one) have just been forgotten about, and nobody has seriously tried to solve them.[1]

And, indeed, Boris Alexeev (who ran the problem through the AI) agrees:

> My summary is that Aristotle solved "a" version of this problem (indeed, with an olympiad-style proof), but not "the" version.

> I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference.[2]

Not to rain on the parade out of spite; it's just that this is neat, but not unusually neat compared to the last few months.

[1] https://twitter.com/thomasfbloom/status/1995083348201586965

[2] https://www.erdosproblems.com/forum/thread/124#post-1899

◧◩
2. smcl+Db3[view] [source] 2025-12-01 11:53:06
>>magica+Ao2
See, this is one of the reasons I struggle to get on board the AI hype train. Any time I've seen some breathless claim about its capabilities that feels a bit too good to be true, someone with knowledge of the domain has taken a closer look, and it has turned out to be exaggerated and meant to draw eyeballs and investors to some fledgling AI company.

I just feel like if we were genuinely on the cusp of an AI revolution, as is claimed, we wouldn't keep seeing this sort of thing. A lot of the industry feels full of flim-flam men trying to scam people; if the tech were as capable as we keep getting told, there'd be no need for dishonesty or sleight of hand.

◧◩◪
3. encycl+xV4[view] [source] 2025-12-01 20:47:01
>>smcl+Db3
I have commented elsewhere, but this bears repeating:

If you had enough paper and ink, and the patience to go through it, you could take all the training data and manually step through the same training process to produce the same model. Then, with even more pen and paper, you could step through the same prompts and arrive at the same answer. All of this would be a completely mechanical process. That really does bear thinking about. It's amazing what LLMs are able to achieve, but let's not kid ourselves and start throwing around terms like AGI or emergence just yet. That makes a mechanical process seem magical (as computers in general do).

I should add that it also makes sense why it works so well: just look at the volume of the training data. It is the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inference, language and intellect, that does the heavy lifting.
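
To make the "completely mechanical" point concrete, here is a minimal toy sketch in Python (hypothetical hard-coded weights and a three-word vocabulary, greedy decoding; a cartoon of a real model, differing from one mainly in scale). One prediction step is just a table lookup, a softmax and an argmax, all arithmetic you could carry out on paper:

    # Toy next-token "model" with hand-fixed weights (hypothetical values).
    # Every step below is ordinary arithmetic, doable with pen and paper.
    import math

    VOCAB = ["the", "cat", "sat"]
    W = [[0.1, 2.0, 0.3],   # scores for what follows "the"
         [0.2, 0.1, 1.8],   # scores for what follows "cat"
         [1.5, 0.4, 0.2]]   # scores for what follows "sat"

    def next_token(token: str) -> str:
        """One forward step: lookup, softmax, argmax."""
        logits = W[VOCAB.index(token)]
        exps = [math.exp(x) for x in logits]
        probs = [e / sum(exps) for e in exps]  # softmax
        return VOCAB[probs.index(max(probs))]  # greedy decode

    print(next_token("the"))  # -> "cat"
    print(next_token("cat"))  # -> "sat"

Scale the weight table up enormously and add attention layers, and nothing in the process stops being arithmetic; it just stops being feasible by hand.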

◧◩◪◨
4. jeeeb+uH9[view] [source] 2025-12-03 07:41:24
>>encycl+xV4
Couldn’t you say the exact same about the human mind though?
◧◩◪◨⬒
5. encycl+1ca[view] [source] 2025-12-03 11:24:23
>>jeeeb+uH9
No, you couldn't, because the human mind definitely does NOT work like an LLM, though how it does work is an open academic problem. As an example, please see the hard problem of consciousness. There are aspects of the brain/mind that we have a difficult time even defining, let alone understanding.

To give a quick example vis-à-vis LLMs: I can reason and understand well enough without having to be 'trained' on nearly the entire corpus of human literature. LLMs, of course, do not reason or understand, and their output is determined by human input. That alone indicates our minds work differently from LLMs.

I wonder how ChatGPT would fare if it were trained on birdsong and then asked for a rhyming couplet.
