zlacker

[return to "FlashAttention-T: Towards Tensorized Attention"]
1. jmward+zv[view] [source] 2026-02-04 00:06:10
>>matt_d+(OP)
I built guided window attn (literally predict the position of the window) a while ago and it works great. Why are we still stuck on forms of attention that look at the entire context in any meaningful way? Do humans work this way? Do I need a whole book to predict the next word? Who out there is working on genuinely new ways to deal with infinite history, other than me of course :)
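
Rough sketch of the idea in PyTorch, just to show the shape of it (toy code with made-up names like center_head, not my actual implementation):

    import torch
    import torch.nn.functional as F

    def guided_window_attention(q, k, v, center_head, window=64):
        # q, k, v: (batch, seq_len, dim). center_head is any module mapping
        # dim -> 1 (e.g. nn.Linear(dim, 1)); it "guides" each query to a window.
        B, L, D = q.shape
        # Predict a window start in [0, L - window] for every query position.
        centers = torch.sigmoid(center_head(q)).squeeze(-1) * (L - 1)   # (B, L)
        starts = centers.long().clamp(0, max(L - window, 0))            # (B, L)
        # Gather the slice of keys/values each query is allowed to see.
        offsets = torch.arange(window, device=q.device)                 # (window,)
        idx = (starts.unsqueeze(-1) + offsets).clamp(0, L - 1)          # (B, L, window)
        idx = idx.unsqueeze(-1).expand(B, L, window, D)
        k_win = torch.gather(k.unsqueeze(1).expand(B, L, L, D), 2, idx)
        v_win = torch.gather(v.unsqueeze(1).expand(B, L, L, D), 2, idx)
        # Ordinary scaled dot-product attention, but only inside the window,
        # so cost is O(L * window) instead of O(L^2).
        scores = torch.einsum('bld,blwd->blw', q, k_win) / D ** 0.5
        attn = F.softmax(scores, dim=-1)
        return torch.einsum('blw,blwd->bld', attn, v_win)

The hard .long() index isn't differentiable, so to actually train the center predictor you need something like a soft gather or a straight-through trick.
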
2. mapont+wR[view] [source] 2026-02-04 02:35:08
>>jmward+zv
> Who out there is working on genuinely new ways to deal with infinite history, other than me of course :)

I'm working on a novel (I think) linear attention mechanism in my personal lab that's O(L) for effectively infinite context. I haven't yet decided how much of it is going to be open source, but I agree with you that it's important to figure this out.
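
For anyone who hasn't seen the general O(L) shape: the standard kernel-feature-map trick (a la Katharopoulos et al.) replaces softmax(QK^T)V with phi(Q)(phi(K)^T V), so you can carry a constant-size running state instead of a growing KV cache. A toy causal version (emphatically not my mechanism, and phi here is an arbitrary choice):

    import torch
    import torch.nn.functional as F

    def phi(x):
        # Positive feature map; elu(x) + 1 is the usual default.
        return F.elu(x) + 1.0

    def causal_linear_attention(q, k, v):
        # q, k, v: (seq_len, dim). Walks the sequence left to right carrying a
        # (dim, dim) state S and a (dim,) normalizer z, so per-token compute
        # and memory stay constant no matter how long the history gets.
        L, D = q.shape
        S = torch.zeros(D, D)          # running sum of phi(k_t) v_t^T
        z = torch.zeros(D)             # running sum of phi(k_t)
        out = torch.empty_like(v)
        for t in range(L):
            kt, qt = phi(k[t]), phi(q[t])
            S = S + torch.outer(kt, v[t])
            z = z + kt
            out[t] = (qt @ S) / (qt @ z + 1e-6)
        return out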

Was your work open? Is there some place I can read more about it? I'm trying to figure out what to do with my thing on the off-chance that it actually does turn out to work the way I want it to.
