zlacker

[parent] [thread] 4 comments
1. bee_ri+(OP)[view] [source] 2026-01-22 19:40:47
LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?
replies(2): >>causal+Z3 >>exe34+Kk
2. causal+Z3[view] [source] 2026-01-22 19:59:49
>>bee_ri+(OP)
Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.
replies(1): >>5d4140+Nw1
3. exe34+Kk[view] [source] 2026-01-22 21:38:03
>>bee_ri+(OP)
Claude Code with Opus is a completely different creature from aider with Qwen on a 3090.

The latter writes code. The former solves problems with code, and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).

4. 5d4140+Nw1[view] [source] [discussion] 2026-01-23 08:41:51
>>causal+Z3
If you have enough RAM, you can run Qwen A3B models on the CPU.
replies(1): >>quikoa+P72
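A minimal sketch of what that tip could look like in practice, assuming llama.cpp and a quantized GGUF build of the model (the filename, thread count, and context size below are illustrative, not prescriptive):

```shell
# Hypothetical invocation: run a quantized Qwen A3B-style MoE model
# entirely on CPU with llama.cpp's llama-cli. Only ~3B parameters are
# active per token, which is why CPU inference stays tolerable, but the
# full weights must still fit in RAM (a Q4_K_M quant of a 30B model
# needs roughly 20 GB).
./llama-cli \
  -m Qwen3-30B-A3B-Q4_K_M.gguf \
  -t 16 \
  -c 8192 \
  -p "Explain why MoE models run acceptably on CPU."
```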
5. quikoa+P72[view] [source] [discussion] 2026-01-23 13:37:49
>>5d4140+Nw1
RAM got a little more expensive lately for some reason.