But that overhead is a constant factor; more or less anything you can express well in Matlab can be expressed straightforwardly in APL too, if you have the right numerical routines. That's not true in the other direction, though: there's a lot of stuff in APL you cannot express adequately in Matlab at all. For example, J (and these days Dyalog as well, IIRC) has an operation called "under" which basically does this: u(f, g) = x => g^-1(f(g(x))). So you can write geometric_mean = u(mean, log), i.e. exp(mean(log(x))).
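A minimal sketch of how that reads in actual J (not Dyalog), using only primitives plus a mean verb defined inline: &.: is "under" and ^. is natural log, whose built-in inverse is exp.

    mean =: +/ % #            NB. arithmetic mean: sum divided by count
    geomean =: mean &.: ^.    NB. mean "under" natural log: exp(mean(log(x)))
    geomean 1 2 4 8           NB. 2.82843, the 4th root of 1*2*4*8

(&.: rather than &. so that mean sees the whole list instead of being applied at ^.'s rank of 0.)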
It is completely impossible to implement something like "under" in Matlab. Admittedly, the way J (at least) derives a generalized inverse for an arbitrary function f is a somewhat ill-defined hack, but it is still both conceptually and practically quite powerful. Also, whilst Matlab is really clunky for anything that is not a 2D array and hardcodes matrix multiplication as the one inner product, APL has more powerful abstractions for manipulating arrays of arbitrary rank and a more general concept of inner product.
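To make the inner-product point concrete, a small J sketch (APL spells the same idea f.g): the dot conjunction combines any two verbs into an inner product, with matrix multiplication just being the +/ . * special case.

    A =: 3 3 $ 1 2 3 4 5 6 7 8 9
    A +/ . * A      NB. ordinary matrix product: inner product over + and *
    A <./ . + A     NB. min-plus ("tropical") product, e.g. one step of all-pairs shortest paths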
Also, APL has some really dumb but cherished-by-the-community ideas that make the language less expressive and much more awkward to learn, e.g. extending the terrible defect of ordinary mathematical notation, where - is overloaded for both negation and subtraction, to every other function: every primitive symbol gets an unrelated one-argument and two-argument meaning.
I’m a little curious about this. Does J have a notion of the relationship between certain functions and their inverses? What is it that enables “under” in J that makes it impossible in Matlab?
Yes. Many built-in words have inverses assigned, and you can assign an inverse to your own words with the :. (obverse) conjunction: https://code.jsoftware.com/wiki/Vocabulary/codot
EDIT: and here's a table with predefined inverses: https://code.jsoftware.com/wiki/Vocabulary/Inverses
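Rough sketch of the mechanism, with a made-up verb f2c (Fahrenheit to Celsius) just for illustration: once :. attaches an obverse, both ^:_1 (inverse) and &. (under) will use it.

    f2c =: (3 : '(y - 32) * 5 % 9') :. (3 : '32 + y * 9 % 5')   NB. hypothetical F->C verb with explicit obverse
    f2c 212                NB. 100
    f2c^:_1 ] 100          NB. 212, via the assigned inverse
    (+/ % #)&.f2c 32 212   NB. mean "under" f2c: convert, average, convert back (122; same as the plain mean, since the map is affine)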
But interesting nonetheless.