This assumes that all classes of problems reduce to functions which can be approximated, right, per the universal approximation theorems?
Even in cases where the UAT applies (and it does not apply everywhere, as I show next), your caveat understates the case. There are dramatically better and worse algorithms for different problems.
But I think a lot of people (including the comment above) misunderstand or misapply the UATs. Think about the assumptions! UATs assume a fixed-length input, do they not? That breaks the correspondence with many classes of algorithms, which have to handle inputs of arbitrary size.*
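For concreteness, here is the classical one-hidden-layer statement (my paraphrase of Cybenko 1989 / Hornik et al. 1989, not a quote): for any continuous $f : [0,1]^n \to \mathbb{R}$ and any $\varepsilon > 0$, there exist $N$, weights $w_i \in \mathbb{R}^n$, and scalars $\alpha_i, b_i$ such that

$$\sup_{x \in [0,1]^n} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon,$$

where $\sigma$ is a fixed sigmoidal activation. Note that the input dimension $n$ is pinned down before the network is even constructed.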
## Example
Let's make a DNN that sorts a list of numbers, shall we? But we can't cheat and only have it do pairwise comparisons -- that is not the full sorting problem. We have to input the list of numbers and output the sorted list. At run time. With a variable-length list of inputs.
So no single DNN will do! For every input length, we would need a different DNN, would we not? Training this collection of DNNs will be a whole lot of fun! It will make Bitcoin mining look like a poster child of energy conservation. /s
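To make the fixed-length constraint concrete, here is a minimal PyTorch sketch; the architecture and layer sizes are hypothetical, chosen only for illustration:

```python
import torch
import torch.nn as nn

# A hypothetical fixed-size "sorter" MLP. The input length N is baked
# into the first Linear layer at construction time, so this network can
# only ever accept lists of exactly N numbers.
N = 8

sorter = nn.Sequential(
    nn.Linear(N, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, N),  # N outputs: the (hopefully) sorted list
)

x = torch.rand(1, N)       # a list of 8 numbers: shape (1, 8), works
print(sorter(x).shape)     # torch.Size([1, 8])

x9 = torch.rand(1, N + 1)  # a list of 9 numbers
# sorter(x9)  # RuntimeError: mat1 and mat2 shapes cannot be multiplied
```

The shape mismatch on the last line is the whole point: to handle another length you need another network (or padding up to some hard maximum), which is exactly the collection-of-DNNs problem above.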
* Or am I wrong? Is there a theoretical result I don't know about?
The grand goal of AI is a general learner that can tackle, at a minimum, any kind of problem we care about. Are DNNs the best-performing solution for every problem? No, and I agree on that. But they are applicable to a far wider range of problems. There is no question which is the better general-learner paradigm.
>* Or am I wrong? Is there a theoretical result I don't know about?
Thankfully, we don't need to get into the theory. Go ask GPT-4 to sort an arbitrary list of numbers. Change the length and try again.
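If you want to run that experiment programmatically, here is a minimal sketch using the OpenAI Python client; the prompt, the list lengths, and the correctness check are my own illustrative choices:

```python
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for n in (5, 20, 100):  # vary the input length
    nums = [random.randint(0, 999) for _ in range(n)]
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Sort this list ascending. Reply with the numbers "
                       f"only, comma-separated: {nums}",
        }],
    )
    reply = resp.choices[0].message.content
    try:
        # Parsing assumes the model actually follows the output format.
        parsed = [int(t) for t in reply.split(",")]
    except (ValueError, AttributeError):
        parsed = None
    print(f"n={n}: {'correct' if parsed == sorted(nums) else 'incorrect'}")
```

One network, arbitrary lengths at run time: whatever you conclude from the results, the variable-length input problem clearly isn't a hard architectural wall for sequence models the way it is for the fixed-size MLP above.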