zlacker

1. agentu+(OP)[view] [source] 2023-05-16 15:02:10
And LLMs will never be able to reason about mathematical objects and proofs. You cannot learn the truth of a statement by reading more tokens.

A system that can will probably adopt a different acronym (and gosh that will be an exciting development... I look forward to the day when we can dispatch trivial proofs to be formalized by a machine learning algorithm so that we can focus on the interesting parts while still having the entire proof formalized).
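For concreteness, a toy Lean 4 sketch (my own illustration, assuming a recent Lean toolchain where the omega tactic is available by default, and not tied to any particular system) of the kind of routine obligation I'd love to hand off to such a tool:

    -- A routine arithmetic fact: the sort of "trivial" proof obligation
    -- one would like to dispatch to automated or ML-guided search.
    -- Here an off-the-shelf decision procedure closes it outright.
    theorem double_le_double (a b : Nat) (h : a ≤ b) : a + a ≤ b + b := by
      omega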

replies(1): >>chaxor+R
2. chaxor+R[view] [source] 2023-05-16 15:06:41
>>agentu+(OP)
You should read some of the papers referred to in the comments above before making that assertion. It may take a while to grasp the overall structure of the argument, how the category theory is used, and how it applies directly to LLMs, but if you are in ML it should be obvious. https://arxiv.org/abs/2203.15544
replies(1): >>agentu+nb
3. agentu+nb[view] [source] [discussion] 2023-05-16 15:49:57
>>chaxor+R
There are methods of proof that I'm not sure dynamic programming is suited to, but this is an interesting paper. However, even if it can only handle particular induction proofs, that would be a big help. Thanks for sharing.
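To be concrete about what I mean by "particular induction proofs", here is a toy Lean 4 example of my own (not taken from the paper): a statement whose proof is a routine structural induction, the kind of thing a search procedure could plausibly find on its own.

    -- A small structural-induction proof: the goal splits into a base case
    -- and a step case, each closed by rfl/simp plus the induction hypothesis.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => simp [Nat.add_succ, ih]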