zlacker

1. ants_e+ (OP) 2025-08-22 00:41:54
LLMs aren't good at rote memorization. They can't even get quotations from humans right.

It's easier for the LLM to rewrite an idiomatic computational geometry algorithm from scratch in a language it understands well, like Python. Entire computational geometry textbooks and research papers are in its knowledge base. It doesn't have to copy some proprietary implementation.
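
For concreteness, here's a rough sketch of the kind of textbook routine I mean (my own toy example, not any particular proprietary code): Andrew's monotone chain convex hull in Python, the sort of thing covered in every computational geometry text.

  # Andrew's monotone chain convex hull, O(n log n).
  def cross(o, a, b):
      # Cross product of OA and OB; positive for a counter-clockwise turn.
      return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

  def convex_hull(points):
      pts = sorted(set(points))
      if len(pts) <= 2:
          return pts
      lower, upper = [], []
      # Build the lower hull left to right, dropping clockwise turns.
      for p in pts:
          while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
              lower.pop()
          lower.append(p)
      # Build the upper hull right to left the same way.
      for p in reversed(pts):
          while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
              upper.pop()
          upper.append(p)
      # Concatenate, dropping the duplicated endpoints.
      return lower[:-1] + upper[:-1]

  print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
  # -> [(0, 0), (2, 0), (2, 2), (0, 2)]

Nothing in there needs to be copied from anyone's codebase; it falls straight out of the standard textbook description.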

replies(1): >>gugago+g1
2. gugago+g1 2025-08-22 00:53:34
>>ants_e+(OP)
A search for "LLM Harry Potter" would suggest that LLMs are widely understood to be proficient at rote memorization.

(In any case, I don't find the computational geometry algorithm very compelling as an example of clear-cut direct memorization.)
