zlacker

3 comments
1. chowel+(OP) 2025-05-21 22:12:51
Oh, that's not a problem. Just cache the retrieval lookups too.
replies(1): >>michae+57
2. michae+57 2025-05-21 23:20:25
>>chowel+(OP)
it's pointers all the way down
replies(1): >>drob51+qg
3. drob51+qg 2025-05-22 01:13:31
>>michae+57
Just add one more level of indirection, I always say.
replies(1): >>EGreg+Ch
4. EGreg+Ch 2025-05-22 01:28:44
>>drob51+qg
But seriously… the solution is often to cache or shard to a halfway point (the LLM weights, for instance) and then store that, giving you a nice approximation of the real problem space. That's basically what many AI algorithms do, including MCTS and LLMs.
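The "cache a halfway point" idea from this thread can be sketched in a few lines. This is a minimal, hypothetical Python illustration (the `embed` and `retrieve` names and the toy distance metric are invented for this example, not anything from the thread): the expensive intermediate step is memoized, so repeated retrieval lookups reuse the cached halfway point instead of recomputing from scratch.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def embed(token: str) -> tuple:
    # Stand-in for an expensive intermediate (e.g. computing an embedding).
    # Here it is just a cheap deterministic toy transform; the point is
    # that its result is cached after the first call.
    return tuple((ord(c) * 31) % 97 for c in token)

def retrieve(query: str, corpus: list[str]) -> str:
    # Each lookup reuses cached intermediates rather than redoing them.
    q = embed(query)
    return min(
        corpus,
        key=lambda doc: sum(abs(a - b) for a, b in zip(embed(doc), q)),
    )
```

After the first pass over a corpus, every subsequent query hits the `embed` cache for the corpus documents, which is exactly the "cache the retrieval lookups too" move the thread jokes about.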