We've had automated theorem proving since the 60s. What we need is automated theorem discovery. Erdős discovered these theorems even though he couldn't prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence. Instead, intelligence is a little game AI plays with Lean.
AI researchers keep trying to reduce intelligence to something tiny and approachable, like automated theorem proving. It's easy: you write the theorem you want proven and hope you get a proof. It works or it doesn't. Nice and benchmarkable.
Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space? Or how to suggest an alternative to Turing machines / lambda calculus that expresses the same underlying idea?
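To make that concrete: merely *stating* the axiom formally is the easy part. Here's a minimal Lean 4 sketch (the names `Point`, `Line`, `on_line` are illustrative, not mathlib's):

```lean
-- A minimal sketch of an incidence axiom as a structure field.
-- Names are illustrative, not taken from mathlib.
structure IncidenceGeometry where
  Point : Type
  Line : Type
  on_line : Point → Line → Prop
  -- "between any two points there is a line"
  line_through : ∀ p q : Point, p ≠ q → ∃ l : Line, on_line p l ∧ on_line q l
```

Nothing in those few lines explains why this property, out of all the properties one could write down, is the one worth singling out. That selection is the hard part.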
Yeah, that's the crux of the matter. How did AI do it? By using already existing math. If we need new math to prove Collatz, Goldbach, or Riemann, LLMs are simply SOL. That's what's missing, and it's what the hype crowd always avoids mentioning.
I've been thinking about how mathematicians have fun doing math: making discoveries, crafting proofs.
Did the Tour de France & Co. stop making sense once small, lightweight, powerful e-bikes appeared?
Using a computer as a helper is like riding a bicycle; using LLMs is more like riding an e-bike, and that's something else entirely.
By that logic, we've had LLMs since the 60s!
> What we need is automated theorem discovery.
I don't see any reason you couldn't train a model to do this. You'd have to focus it on generating follow-up questions to ask after reading a corpus of literature, playing around with some toy examples in Python and making a conjecture out of it. This seems much easier than training it to actually complete an entire proof.
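As a toy illustration of that last step, here is what "play with examples, then conjecture" could look like in Python. The pattern tested (Goldbach on small evens) is just a stand-in; the hard part a model would have to learn is proposing the pattern in the first place:

```python
# Toy version of the loop: test a pattern on small cases, then emit a
# conjecture if no counterexample turns up.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def sum_of_two_primes(n: int) -> bool:
    """Is n expressible as a sum of two primes?"""
    return any(is_prime(a) and is_prime(n - a) for a in range(2, n // 2 + 1))

counterexamples = [n for n in range(4, 10_000, 2) if not sum_of_two_primes(n)]
if counterexamples:
    print("Refuted, e.g.:", counterexamples[:5])
else:
    print("Conjecture: every even n > 2 is a sum of two primes (checked to 10^4)")
```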
> Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence.
Who says they don't? I wouldn't be surprised if HarmonicMath, DeepMind, etc have also thought about this kind of thing.
> Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space?
That's a good question! It would be interesting to see if this is an emergent property of multimodal LLMs trained specifically on this kind of thing. You would need mathematical reasoning, visual information and language encoded into some shared embedding space where similar things are mapped right next to each other geometrically.
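Sketching one plausible version of that training signal: a CLIP-style symmetric contrastive loss that pulls matched (formal statement, diagram) pairs together in the shared space. This is purely an assumption about the approach, not a description of any existing system; the embeddings below are random stand-ins for encoder outputs:

```python
# Symmetric InfoNCE over a batch of matched pairs: each row of `a`
# should be nearest to the same-index row of `b`, and vice versa.
import numpy as np

def contrastive_loss(a: np.ndarray, b: np.ndarray, temp: float = 0.07) -> float:
    a = a / np.linalg.norm(a, axis=1, keepdims=True)   # cosine-normalize
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temp                            # (batch, batch) similarities
    diag = np.arange(len(logits))                      # matched pairs on the diagonal

    def xent(lg: np.ndarray) -> float:
        lg = lg - lg.max(axis=1, keepdims=True)        # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return float(-logp[diag, diag].mean())

    return (xent(logits) + xent(logits.T)) / 2         # both retrieval directions

rng = np.random.default_rng(0)
statement_emb = rng.standard_normal((8, 64))           # stand-in encoder outputs
diagram_emb = rng.standard_normal((8, 64))
print("loss on random embeddings:", contrastive_loss(statement_emb, diagram_emb))
```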
> By that logic, we've had LLMs since the 60s!
From a bit earlier[0], actually:
> Progressing to the 1950s and 60s we saw the development of the first language models.

Were those "large"? I'm sure at the time they were thought to be so.

[0] https://ai-researchstudies.com/history-of-large-language-mod...
An unproved theorem now proved is by definition new math. Will LLMs get you to Collatz, Goldbach, or Riemann? Unclear.
But it's not like some magical "new math", entirely unrelated to existing math, was required to solve the big conjectures of the past. Those proofs proceeded, as always, by proving new theorems one by one.
I think eventually LLMs will also be used as part of systems that come up with new, broadly useful definitions, but we're not there yet.
You seem to be smuggling in the assumption that this problem was "important".
No. By _new math_ I mean new mathematical constructs and theories like (to mention the "newest" ones) category theory, information theory, homotopy type theory, etc. Something like Cantor inventing set theory, or Shannon with information theory, or Euler with graph theory.
AFAIK no new field of mathematics has been created _by_ AI. Feel free to correct me.