I feel like n^10 attention can capture patterns that lower-complexity attention may not. So it seems arbitrary that we have n^2 attention.
The feedforward networks prior to the attention layer are effectively learning sophisticated kernels. For those unfamiliar: a kernel is just a generalization of the dot product, which is the most fundamental way of defining "similarity" between two points.
By learning a kernel, the transformer is learning the best way to define what "similar" means for the task at hand; attention then just applies some basic smoothing over the data using that kernel. This handles all sorts of interesting ways to compare points, and that comparison lets every point contribute a little bit of information.
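To make the kernel-smoothing view concrete, here's a minimal numpy sketch of a single attention head read as Nadaraya-Watson smoothing (all names, dimensions, and the random projections are illustrative, not from any particular model):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_model, d_k = 5, 8, 4          # sequence length, embedding dim, head dim
X = rng.normal(size=(n, d_model))  # n token embeddings

# Hypothetical learned projections: together these define the kernel
W_q = rng.normal(size=(d_model, d_k))
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Learned kernel: k(x_i, x_j) = exp(<W_q x_i, W_k x_j> / sqrt(d_k))
scores = Q @ K.T / np.sqrt(d_k)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1

# Kernel smoothing: each token's output is a kernel-weighted
# average of every token's value — every point contributes a little
out = weights @ V
```

The softmax row-normalization is exactly the normalization step in a Nadaraya-Watson estimator; "learning a better kernel" here means learning better W_q and W_k.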
Anything you could hope to achieve by performing more comparisons would be better solved by a better similarity function.
Also, simplicial attention is pretty much what the OP was going for, but the hardware lottery is such that it's gonna be pretty difficult to make it competitive in terms of engineering, not that people aren't trying (e.g. https://arxiv.org/pdf/2507.02754)