Not linear algebra. Artificial neural networks can represent arbitrarily non-linear functions. That's the whole point of non-linear activation functions, and it's the subject of the universal approximation theorems I mentioned above.
An LLM "thinks" in the same way Excel thinks when you ask it to fit a curve.
So classes of functions (ANNs) that can approximate our desired function to arbitrary precision are exactly what we should expect to be working with.
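A minimal sketch of the point, in plain NumPy (the architecture and hyperparameters here are illustrative, not anything from the thread): a one-hidden-layer network with a ReLU activation fitting the non-linear target y = x². Drop the ReLU and the two linear layers collapse into a single linear map, which can't fit the curve at all.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = x ** 2  # a non-linear target no purely linear model can fit

hidden = 32
W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(3000):
    h = x @ W1 + b1
    a = np.maximum(h, 0.0)       # ReLU: the sole source of non-linearity
    pred = a @ W2 + b2
    err = pred - y
    loss = float(np.mean(err ** 2))

    # backpropagation by hand, full-batch gradient descent
    g_pred = 2.0 * err / len(x)
    g_W2 = a.T @ g_pred; g_b2 = g_pred.sum(0)
    g_a = g_pred @ W2.T
    g_h = g_a * (h > 0)          # ReLU gradient mask
    g_W1 = x.T @ g_h; g_b1 = g_h.sum(0)

    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final MSE: {loss:.5f}")
```

The fit isn't exact, which is the universal approximation story in miniature: more hidden units buy you more precision, to any tolerance you name, but never a closed-form match.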