1. godels+(OP) 2024-11-30 06:05:27
I haven't had a chance to read that, but that quote suggests I should (especially considering the author and the editors).

I often refer to "Elephant Fitting" w.r.t. these systems. I suspect you understand this, but most people take it to be just about overfitting. The problem isn't the number of parameters per se, but that parameters need to be justified, as Dyson explains here[0]. Vladimir's quote really reminds me of this. Fermi, likewise, was stressing the importance of theory.
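
(For anyone who hasn't seen it: "Elephant Fitting" refers to von Neumann's quip, relayed by Fermi to Dyson, that with four parameters he could fit an elephant, and with five make it wiggle its trunk. Mayer, Khoury & Howard later made it literal in "Drawing an elephant with four complex parameters" (Am. J. Phys., 2010). Here is a minimal sketch, using the parameter values as I recall them from the paper and a coefficient wiring borrowed from a commonly circulated Python reimplementation, so treat the details as illustrative rather than canonical.)

    import numpy as np
    import matplotlib.pyplot as plt

    # Four complex parameters (values as published by Mayer, Khoury & Howard);
    # a fifth "wiggle" parameter doubles as the eye position.
    p = [50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j]
    eye = 40 + 20j

    def fourier(t, coeffs):
        # Truncated Fourier series: real parts feed the cos terms,
        # imaginary parts feed the sin terms.
        return sum(c.real * np.cos(k * t) + c.imag * np.sin(k * t)
                   for k, c in enumerate(coeffs))

    # Wire each parameter's real/imaginary parts into specific coefficients.
    Cx = np.zeros(6, dtype=complex)
    Cy = np.zeros(6, dtype=complex)
    Cx[1] = p[0].real * 1j              # sin coefficient of x, k=1
    Cx[2] = p[1].real * 1j              # sin coefficient of x, k=2
    Cx[3] = p[2].real                   # cos coefficient of x, k=3
    Cx[5] = p[3].real                   # cos coefficient of x, k=5
    Cy[1] = p[3].imag + p[0].imag * 1j  # cos and sin coefficients of y, k=1
    Cy[2] = p[1].imag * 1j              # sin coefficient of y, k=2
    Cy[3] = p[2].imag * 1j              # sin coefficient of y, k=3

    t = np.linspace(0, 2 * np.pi, 1000)
    x, y = fourier(t, Cx), fourier(t, Cy)
    plt.plot(y, -x)                     # rotate so the elephant stands upright
    plt.plot(eye.imag, eye.real, 'ko')  # the eye, courtesy of parameter five
    plt.axis('equal')
    plt.show()

Four numbers really do trace an elephant, which is exactly the point: parameter count alone tells you little; what matters is whether each parameter is justified.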

I think it is a profound quote, and you were (are?) lucky to have that friendship. I do think abstraction is at the heart of intelligence. François Chollet discusses it a lot, and he's far from alone; it seems well agreed upon in the neuroscience and cognitive science communities. I think this is essential to understand on our path toward developing intelligent systems, because there are plenty of problems that need to be solved for which there is no algorithmic procedure: problems with no explicit density function, intractable and doubly intractable problems, and more. Maybe we're just too dumb, but it's clear there are plateaus where luck is needed to advance. I do not believe our current machines would be capable of closing such a gap.
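
(To unpack "doubly intractable" for anyone following along, here is the textbook setup, my example rather than anything from the thread: inference in an undirected model whose likelihood carries its own parameter-dependent normalizer,

    p(\theta \mid x) \;\propto\; \frac{f(x;\theta)\,\pi(\theta)}{Z(\theta)},
    \qquad Z(\theta) = \sum_{x'} f(x';\theta).

The evidence p(x) is the first intractable normalizer; Z(\theta) is the second, and with a symmetric proposal it survives into the Metropolis-Hastings acceptance ratio as Z(\theta)/Z(\theta'), so even plain MCMC is off the table without tricks like Murray, Ghahramani & MacKay's exchange algorithm.)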

[0] https://www.youtube.com/watch?v=hV41QEKiMlM

replies(1): >>mturmo+0f6
2. mturmo+0f6 2024-12-03 07:51:51
>>godels+(OP)
Thank you for the link.

Yeah, what we are doing now in ML isn’t really “engineering” in the best sense of the word. We don’t have theoretical machinery that can predict the performance of an ML design, the way you can for a coding scheme in communications theory. We have a lot of clever architectures and techniques, but not a design capability.
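
That comparison can be made concrete. The sketch below, my illustration rather than anything from the parent, uses Shannon's noisy-channel coding theorem: for a binary symmetric channel you can compute, before designing any code at all, the best rate any scheme could possibly achieve.

    import math

    def h2(p):
        # Binary entropy in bits.
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_capacity(p):
        # Capacity of a binary symmetric channel with crossover probability p.
        # Rates below C = 1 - H2(p) are achievable with vanishing error;
        # rates above C are provably not, whatever the code.
        return 1 - h2(p)

    print(bsc_capacity(0.11))  # ~0.50: a rate-1/2 code can be made reliable here

Nothing in ML plays that role today: there is no formula that takes an architecture and a dataset and tells you, before training, what performance is achievable.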
