zlacker

Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation

submitted by fheins+(OP) on 2026-02-04 14:33:33 | 159 points 87 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only
1. spacew+ea[view] [source] 2026-02-04 15:20:21
>>fheins+(OP)
Github here: https://github.com/glassroom/sata_attention
8. thomas+Yc[view] [source] 2026-02-04 15:33:26
>>fheins+(OP)
There's a graveyard of hundreds of papers proposing "approximate near-linear-time attention."

They always hope the speed increase makes up for the lower quality, but it never does. The quadratic time seems inherent to the problem.

Indeed, there are lower bounds showing that sub n^2 algorithms can't work: https://arxiv.org/pdf/2302.13214

◧◩
16. fheins+Ri[view] [source] [discussion] 2026-02-04 15:59:50
>>thomas+Yc
As the error of the linear approximation approaches the magnitude of the numerical error in the quadratic computation, don't the two become comparable in practice?

I ask because in practice, for inference, attention is typically computed with low-precision (4-bit, 8-bit, 16-bit) floats.

Numerical error may, in fact, be a key factor in why quadratic attention exhibits context rot in practice as context grows longer, analogous to an RNN:

https://www.anthropic.com/engineering/effective-context-engi...
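
To make that concrete, here's a rough way to measure both error sources on random data (a sketch with made-up shapes, float16 standing in for low precision; the Taylor form is computed at quadratic cost here purely to check its error):

    # Rough sketch (not from the paper): compare the numerical error of exact
    # softmax attention computed in float16 against the truncation error of a
    # degree-P Taylor approximation computed in float64. Shapes are arbitrary.
    import math
    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 512, 64
    q, k, v = (rng.standard_normal((N, d)) for _ in range(3))

    def softmax_attention(q, k, v, dtype):
        q, k, v = (x.astype(dtype) for x in (q, k, v))
        s = (q @ k.T) / (d ** 0.5)
        w = np.exp(s - s.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)
        return (w @ v).astype(np.float64)

    def taylor_attention(q, k, v, order):
        # Quadratic-cost form of the truncated kernel, for error measurement only;
        # the point of the paper is computing this without the N x N matrix.
        s = (q @ k.T) / (q.shape[-1] ** 0.5)
        w = sum(s**n / math.factorial(n) for n in range(order + 1))
        w = w / w.sum(axis=-1, keepdims=True)
        return w @ v

    exact = softmax_attention(q, k, v, np.float64)
    fp16_err = np.abs(softmax_attention(q, k, v, np.float16) - exact).max()
    for order in (2, 4, 8, 16):
        t_err = np.abs(taylor_attention(q, k, v, order) - exact).max()
        print(f"order {order:2d}: taylor err {t_err:.2e}  vs  fp16 err {fp16_err:.2e}")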

◧◩◪
21. andy12+Zl[view] [source] [discussion] 2026-02-04 16:13:14
>>cubefo+hf
It really isn't sub-N^2. The main attention is O(Nk), but only thanks to a lightning indexer that still has O(N^2) complexity. So overall it has the same complexity, just with a smaller constant factor [1]

> DSA reduces the core attention complexity of the main model from O(L^2) to O(Lk), where k (<< L) is the number of selected tokens. Although the lightning indexer still has a complexity of O(L^2), it requires much less computation compared with MLA in DeepSeek-V3.1-Terminus

[1] https://arxiv.org/pdf/2512.02556
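
A quick back-of-envelope of the "smaller constant" point (my own illustrative head/index dimensions, not numbers from the paper): the quadratic indexer term has a tiny per-pair cost, while the main attention is linear in L for fixed k.

    # Rough FLOP estimates with made-up dimensions, just to show the shape of
    # the argument: same O(L^2) asymptotics, much smaller constant in front.
    def flops_dense(L, n_heads=128, d_head=128):
        return 2 * L * L * n_heads * d_head                # O(L^2) dense attention

    def flops_dsa(L, k=2048, n_heads=128, d_head=128, n_idx=4, d_idx=32):
        indexer = 2 * L * L * n_idx * d_idx                # still O(L^2), small constant
        sparse = 2 * L * k * n_heads * d_head              # O(L*k) main attention
        return indexer, sparse

    for L in (8_192, 65_536, 262_144):
        idx, sp = flops_dsa(L)
        print(f"L={L:>7}: dense/DSA ratio = {flops_dense(L) / (idx + sp):6.1f}  "
              f"(indexer share of DSA = {idx / (idx + sp):.0%})")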

◧◩
23. naaski+rn[view] [source] [discussion] 2026-02-04 16:19:31
>>thomas+Yc
I think any kind of innovation here will have to take advantage of some structure inherent to the problem, like eliminating attention in favour of geometric structures such as Grassmann flows [1].

[1] Attention Is Not What You Need, https://arxiv.org/abs/2512.19428

◧◩◪◨⬒
33. gpm+Bx[view] [source] [discussion] 2026-02-04 17:02:21
>>Subicu+Ps
Not as trivially as the forward direction; unsurprisingly, information is lost, but it works better than you might expect. See for example https://arxiv.org/pdf/2405.15012
35. NedCod+Az[view] [source] 2026-02-04 17:11:41
>>fheins+(OP)
Reference implementation: https://github.com/glassroom/sata_attention
38. rieman+gA[view] [source] 2026-02-04 17:15:35
>>fheins+(OP)
Neat result. The symmetry exploitation here reminds me of recent work connecting neural network training dynamics to renormalization group theory. Charles Martin's SETOL paper https://arxiv.org/abs/2507.17912 shows that well-trained layers converge to something like an RG fixed point—the eigenvalue spectrum of the weight matrix develops power-law tails with exponent α ≈ 2, which is the signature of scale invariance. At this fixed point, the "effective correlation space" is low-dimensional: you can truncate the SVD aggressively and recover nearly identical test accuracy.

I wonder if there's a connection to your Taylor truncation order. In RG terms, higher-order polynomial interactions are "irrelevant operators"—they get suppressed as you flow toward the fixed point. If trained attention heads are sitting near this fixed point, that might explain why modest truncation orders work: the network has already learned to concentrate its computation in the lower-order terms. A testable prediction: layers with α closer to 2 (measurable via weightwatcher https://github.com/CalculatedContent/WeightWatcher) might need fewer Taylor terms for accurate approximation than layers with α far from 2. If true, you could potentially use the spectral statistics to adaptively choose truncation order per-head.
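
Something like the following would be a cheap first pass at that prediction (hypothetical sketch: weightwatcher's analyze() reports a per-layer alpha, but the alpha-to-truncation-order mapping below is invented, and the model choice is just an example):

    # Hypothetical experiment sketch: measure each layer's power-law exponent
    # alpha with weightwatcher, then assign a (made-up) Taylor truncation order
    # based on distance from the alpha ~ 2 fixed point.
    import weightwatcher as ww
    from transformers import AutoModel

    model = AutoModel.from_pretrained("gpt2")            # any torch model
    details = ww.WeightWatcher(model=model).analyze()    # DataFrame, one row per analyzed layer

    for _, row in details.iterrows():
        alpha = row["alpha"]
        order = 4 if abs(alpha - 2.0) < 0.5 else 8       # invented mapping, illustration only
        print(f"layer {row['layer_id']}: alpha = {alpha:.2f} -> try truncation order {order}")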

◧◩◪◨
42. naaski+FE[view] [source] [discussion] 2026-02-04 17:34:38
>>findal+kw
Indeed, and I think natural language and reasoning will have some kind of geometric properties as well. Attention is just a sledgehammer that lets us brute force our way around not understanding that structure well. I think the next step change in AI/LLM abilities will be exploiting this geometry somehow [1,2].

[1] GrokAlign: Geometric Characterisation and Acceleration of Grokking, https://arxiv.org/abs/2510.09782

[2] The Geometry of Reasoning: Flowing Logics in Representation Space, https://arxiv.org/abs/2506.12284

◧◩
43. fheins+jG[view] [source] [discussion] 2026-02-04 17:41:45
>>amluto+Ix
This is a form of linear attention (https://arxiv.org/abs/2006.16236) that approximates standard scaled dot-product attention to arbitrary precision, by adding Taylor terms in an efficient manner. Each additional Taylor term improves the approximation. Efficiency is achieved by exploiting certain mathematical symmetries that become evident only after decomposing the standard formulation of attention into an expression over chains of tensor products. The github repository's README walks through examples. The first example is with 8 Taylor terms.
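
For the gist without the README: below is a deliberately naive sketch of the underlying idea only (not the repository's implementation). Each Taylor term of exp(q·k/√d) is an inner product of n-fold tensor powers, so each term can be accumulated in linear-attention style with running sums; the sketch does no symmetry exploitation, so its state grows as d^n, which is exactly the blow-up the decomposition into tensor-product chains avoids.

    import math
    import numpy as np

    def tensor_power(x, n):
        """Flattened n-fold outer (tensor) power of a vector; n = 0 gives [1.0]."""
        out = np.ones(1)
        for _ in range(n):
            out = np.outer(out, x).ravel()
        return out

    def taylor_causal_attention(Q, K, V, order=2):
        """Causal attention with exp(q.k/sqrt(d)) replaced by its degree-`order`
        Taylor polynomial, computed with per-token running sums (no N x N matrix).
        Naive version: state size is sum_n d**n, so keep d and order small."""
        N, d = Q.shape
        scale = 1.0 / math.sqrt(d)
        S = [np.zeros((d**n, V.shape[1])) for n in range(order + 1)]  # sums of k^{(x)n} v^T
        Z = [np.zeros(d**n) for n in range(order + 1)]                # sums of k^{(x)n}
        out = np.zeros_like(V, dtype=float)
        for i in range(N):
            num = np.zeros(V.shape[1])
            den = 0.0
            for n in range(order + 1):
                kn = tensor_power(K[i] * scale, n)        # fold the 1/sqrt(d) into k
                S[n] += np.outer(kn, V[i])
                Z[n] += kn
                qn = tensor_power(Q[i], n) / math.factorial(n)
                num += qn @ S[n]
                den += qn @ Z[n]
            out[i] = num / den                            # den can misbehave at low orders
        return out

On tiny inputs this matches a causal softmax reference to within the truncation error, and the per-token cost is independent of N; what blows up instead is the d**n state, which is what the symmetry-aware formulation is there to tame.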
◧◩◪
47. fheins+CK[view] [source] [discussion] 2026-02-04 17:58:12
>>csense+dI
[3] is linear attention, https://arxiv.org/abs/2006.16236, a well-known result with ~3K citations: https://scholar.google.com/scholar_lookup?arxiv_id=2006.1623...
◧◩
55. fheins+OW[view] [source] [discussion] 2026-02-04 18:47:49
>>rieman+gA
Yes, there must be a connection. While adaptive truncation may prove impractical, it should be possible to measure spectral statistics on sample data, and specify a different fixed truncation order per layer, per head, etc. The github repository lists many other possible improvements: https://github.com/glassroom/sata_attention#proof-of-concept
62. Kubuxu+G81[view] [source] 2026-02-04 19:45:08
>>fheins+(OP)
A paper on the same topic: On the Expressiveness of Softmax Attention: A Recurrent Neural Network Perspective, Gabriel Mongaras, Eric C. Larson, https://arxiv.org/abs/2507.23632

Video presentation if someone prefers it: https://www.youtube.com/watch?v=PN3nYBowSvM

Linear attention is a first-degree approximation of Softmax attention, and model performance gets better as you increase the degree of the Taylor approximation.

I'm thinking about adapting an existing model to Taylor-approximated attention. I think it should be possible with some model surgery and rehabilitation training.
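
The degree claim is easy to sanity-check at the kernel level (quick sketch of my own): degree 1 is the 1 + q·k kernel of plain linear attention, and the relative error against the true exp kernel drops as the degree grows.

    # Relative error of the degree-P Taylor polynomial of exp(s) over logits
    # s ~ N(0, 1), roughly what post-scaling attention logits look like.
    import math
    import numpy as np

    s = np.random.default_rng(1).standard_normal(100_000)
    for degree in (1, 2, 4, 8):
        approx = sum(s**n / math.factorial(n) for n in range(degree + 1))
        rel_err = np.max(np.abs(approx - np.exp(s)) / np.exp(s))
        print(f"degree {degree}: max relative kernel error = {rel_err:.2e}")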

◧◩◪◨⬒⬓
74. direwo+cU1[view] [source] [discussion] 2026-02-04 23:45:32
>>logicc+zB1
I think you meant commutative.

Attention also has some specific properties.

And sometimes results are just unexpected. Did you know that anything a Turing machine can do in t time steps, a different Turing machine can do in O(sqrt(t log t)) memory cells? >>44055347

◧◩◪◨⬒
81. nine_k+1c2[view] [source] [discussion] 2026-02-05 02:02:30
>>helloh+TB
Integer multiplication x * y can trivially be done in O(k) additions, where k = log₂(min(x, y)), because we can do addition in constant time by adding all bits in parallel.

By combining many more adding units, we can do (fixed-size) multiplication in constant time, too: https://en.wikipedia.org/wiki/Dadda_multiplier
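
For concreteness, the O(k)-additions version is just shift-and-add over the bits of the smaller operand (my sketch; whether each addition is O(1) is a hardware question, as the Dadda-multiplier link discusses, not something Python models):

    # Schoolbook shift-and-add multiplication: at most k = bit_length(min(x, y)) additions.
    def shift_and_add_mul(x: int, y: int) -> int:
        a, b = (x, y) if x >= y else (y, x)       # iterate over the smaller operand's bits
        acc = 0
        for shift in range(b.bit_length()):       # k iterations
            if (b >> shift) & 1:
                acc += a << shift                 # at most one addition per bit
        return acc

    assert shift_and_add_mul(12345, 678) == 12345 * 678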

85. korbip+xr2[view] [source] 2026-02-05 04:25:16
>>fheins+(OP)
This was already done here as well: https://arxiv.org/abs/2507.04239
[go to top]