zlacker

[parent] [thread] 3 comments
1. EagnaI+(OP)[view] [source] 2026-02-05 07:54:57
The secret is to not run out of quota.

Instead, have Claude know when to offload work to local models and which model is best suited for the job. It shapes the prompt for that model, then reviews the results when they come back. Massive reduction in costs.
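
To sketch the routing idea (the model names and the keyword heuristic below are placeholders; in practice Claude generates this for you):

    # Placeholder routing table: cheap task kinds -> local models.
    MODELS = {
        "summarize": "llama3.1:8b",        # small, fits comfortably in 32GB
        "code_review": "qwen2.5-coder:14b",
    }

    def pick_model(task: str) -> str | None:
        """Return a local model for cheap tasks, or None to keep it on Claude."""
        for kind, model in MODELS.items():
            if kind in task:
                return model
        return None  # too hard or too important: don't offload

Claude shapes the prompt for whichever model gets picked, then reviews the output before trusting it.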

btw, at least on MacBooks you can run good models with just an M1 and 32GB of memory.

replies(2): >>BuildT+V1 >>kilroy+3A
2. BuildT+V1[view] [source] 2026-02-05 08:12:43
>>EagnaI+(OP)
I don't suppose you could point to any resources on getting started? I have an M2 with 64GB of unified memory, and it'd be nice to put it to work rather than burning GitHub credits.
replies(1): >>EagnaI+K4
3. EagnaI+K4[view] [source] [discussion] 2026-02-05 08:36:15
>>BuildT+V1
https://ollama.com

Although I'm starting to like LM Studio more, as it has features that Ollama is missing.

https://lmstudio.ai

You can then get Claude to create an MCP server that talks to either one. Then add a CLAUDE.md that tells it to check which models you have downloaded, determine what each is good for, and decide when to offload. Claude will write all of that for you as well.
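
To make that concrete, here's a minimal sketch of what such an MCP server could look like, assuming the official mcp Python SDK and Ollama on its default port (the tool name is just an example):

    import requests
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("local-models")

    @mcp.tool()
    def run_local(model: str, prompt: str) -> str:
        """Run a prompt on a locally downloaded Ollama model."""
        r = requests.post(
            "http://localhost:11434/api/generate",  # Ollama's default endpoint
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        r.raise_for_status()
        return r.json()["response"]

    if __name__ == "__main__":
        mcp.run()  # stdio transport, which is how Claude connects

For LM Studio you'd point it at the OpenAI-compatible endpoint it serves on localhost:1234 instead. The CLAUDE.md then just lists your downloaded models and when to use each, in plain English.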

4. kilroy+3A[view] [source] 2026-02-05 13:00:48
>>EagnaI+(OP)
I really think you're on to something here. I wish Apple would invest heavily in something like this.

The big, powerful model thinks about the task, then offloads parts of it to a drastically cheaper cloud model or to a model running on your own hardware.
