zlacker

FlashAttention-T: Towards Tensorized Attention
1. jmward+zv 2026-02-04 00:06:10
>>matt_d+(OP)
I built guided window attn (it literally predicts the position of the window) a while ago and it works great. Why are we still stuck on forms of attn that look at the entire context in any meaningful way? Do humans work this way? Do I need a whole book to predict the next word? Who out there is working on really new, unique ways to deal with infinite history, other than me of course :)
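Rough sketch of the idea, for anyone curious (PyTorch; the module name, the linear guide head, and the soft Gaussian window below are illustrative assumptions for exposition, not the actual implementation):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GuidedWindowAttention(nn.Module):
        """Sketch: each query predicts where its attention window sits."""
        def __init__(self, dim: int, window: float = 64.0):
            super().__init__()
            self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
            # Hypothetical guide head: one window center per query token.
            self.center_head = nn.Linear(dim, 1)
            self.window = window          # soft window width, in positions
            self.scale = dim ** -0.5

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            B, T, D = x.shape             # x: (batch, seq, dim)
            q, k, v = self.q(x), self.k(x), self.v(x)
            scores = (q @ k.transpose(-2, -1)) * self.scale         # (B, T, T)
            # Predicted window center per query, in absolute key positions.
            center = torch.sigmoid(self.center_head(x)) * (T - 1)  # (B, T, 1)
            # A soft Gaussian bump around the center keeps the position
            # predictor differentiable; a hard window + gather saves compute.
            pos = torch.arange(T, device=x.device, dtype=x.dtype)
            dist2 = (pos.view(1, 1, T) - center) ** 2               # (B, T, T)
            attn = F.softmax(scores - dist2 / (2 * self.window ** 2), dim=-1)
            return attn @ v

    out = GuidedWindowAttention(dim=128)(torch.randn(2, 512, 128))
    print(out.shape)  # torch.Size([2, 512, 128])

The dense (B, T, T) mask here is only for clarity; a hard predicted window plus a gather is what actually avoids touching the full context.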
2. remexr+tg1 2026-02-04 06:36:45
>>jmward+zv
How does this compare to MoSA (arXiv:2505.00315)? Do you require a single contiguous window? And do you literally predict on position, or use a computed feature?