zlacker

Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation
1. thomas+Yc 2026-02-04 15:33:26
>>fheins+(OP)
There's a graveyard of hundreds of papers promising "approximate near-linear-time attention."

They always hope the speed increase makes up for the lower quality, but it never does. The quadratic time seems inherent to the problem.

Indeed, there are (conditional) lower bounds showing that sub-n^2 algorithms can't compute attention to high accuracy: https://arxiv.org/pdf/2302.13214

2. kristj+sm 2026-02-04 16:15:41
>>thomas+Yc
> self-attention is efficiently computable to arbitrary precision with constant cost per token

This paper at least aspires to reproduce 'true' attention, which distinguishes it from many of the others. TBD whether it's successful at that.
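
For anyone who wants the intuition: the generic trick behind Taylor-approximated attention is that once you replace exp(q·k) with a low-order polynomial, the softmax sums factor into running state that updates in constant time per token. A minimal numpy sketch of the first-order version (my own illustration, not the paper's symmetry-aware scheme):

    import numpy as np

    def taylor_attention(Q, K, V):
        # Causal attention with exp(q.k) replaced by its first-order
        # Taylor expansion 1 + q.k. All the sums then factor into
        # running state whose update cost depends only on the head
        # dimension d, not on the sequence length n.
        n, d = Q.shape
        dv = V.shape[1]
        out = np.zeros((n, dv))
        S = np.zeros((d, dv))   # running sum of outer(k_i, v_i)
        z = np.zeros(d)         # running sum of k_i
        s = np.zeros(dv)        # running sum of v_i
        for t in range(n):
            S += np.outer(K[t], V[t])
            z += K[t]
            s += V[t]
            num = s + S.T @ Q[t]       # sum_i (1 + q.k_i) v_i
            den = (t + 1) + z @ Q[t]   # sum_i (1 + q.k_i)
            out[t] = num / den         # sketch assumes den stays positive
        return out

The catch is that 1 + q·k can go negative, so first order alone is a poor substitute for exp; presumably the higher-order, symmetry-aware expansion is the paper's actual contribution, and its accuracy is exactly the part that's TBD.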

3. logicc+qu 2026-02-04 16:47:42
>>kristj+sm
It can't be successful at that, any more than 1+1 can equal 3. Fundamentally, if every token is to look at every previous token without loss of information, the cost must be O(n^2): n tokens each attending to up to n tokens is quadratic. Any sub-quadratic attention must therefore lose some information and will be unable to support perfect recall on longer sequences.
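
To make the counting argument concrete: exact attention scores every query against every earlier key, so the score matrix alone is n×n. A minimal numpy sketch (names mine):

    import numpy as np

    def exact_causal_attention(Q, K, V):
        # The pairwise score matrix is n x n by construction:
        # every token attends to every token before it.
        n = Q.shape[0]
        scores = Q @ K.T                            # O(n^2) dot products
        scores[np.triu_indices(n, k=1)] = -np.inf   # causal mask: no future keys
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)           # row-wise softmax
        return w @ V

Any sub-quadratic method has to avoid touching most of those n^2 scores, and whatever it skips is information it can't recover.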
4. naaski+IV 2026-02-04 18:42:46
>>logicc+qu
Your argument just assumes there is no latent structure that can be exploited. That's a big assumption.
5. logicc+aA1 2026-02-04 21:53:45
>>naaski+IV
It's a necessary assumption for the universal approximation property: if you assume some structure, then your LLM can no longer solve problems that don't fit that structure as effectively.
6. direwo+ZU1 2026-02-04 23:52:14
>>logicc+aA1
Neural nets are structured as matrix multiplications, yet they are universal approximators.