zlacker

[parent] [thread] 4 comments
1. dabock+(OP)[view] [source] 2025-06-03 19:07:29
You can get 90%+ of the way there with a tiny “coder” LLM running on the Ollama backend with an extension like RooCode and a ton of MCP tools.
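
For anyone who wants to sanity-check the local model before wiring up an editor extension, here's a minimal sketch of hitting Ollama's REST API directly. The model tag is just an example of a small coder model; substitute whatever you've pulled.

    import requests

    # Ask a locally served model one question via Ollama's chat endpoint.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "qwen2.5-coder:14b",  # example tag; use whatever you've pulled
            "messages": [{"role": "user",
                          "content": "Write a Python function that reverses a string."}],
            "stream": False,  # one JSON response instead of a token stream
        },
        timeout=300,
    )
    print(resp.json()["message"]["content"])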

In fact, MCP is so groundbreaking that I consider it to be the actual meat and potatoes of coding AIs. Large models are too monolithic, and knowledge is forever changing. Better just to use a small 14b model (or even 8b in some cases!) with some MCP search tools, a good knowledge graph for memory, and a decent front end for everything. Let it teach itself based on the current context.
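
To make that concrete, here's a rough sketch of the loop a frontend like RooCode runs: the model asks for a tool, the client executes it (normally by routing to an MCP server), and the result is fed back into the context. The web_search function is a hypothetical stand-in for a real MCP search tool, and this assumes a model tag that supports Ollama's tool calling.

    import requests

    def web_search(query: str) -> str:
        # Hypothetical stand-in; in practice an MCP server handles this.
        return "...search results..."

    # Tool schema the model can call, in the JSON-schema style Ollama expects.
    tools = [{
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web for current information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    messages = [{"role": "user",
                 "content": "What changed in the latest Django release?"}]
    while True:
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={"model": "qwen2.5-coder:14b", "messages": messages,
                  "tools": tools, "stream": False},
            timeout=300,
        ).json()
        msg = resp["message"]
        messages.append(msg)
        if not msg.get("tool_calls"):
            break  # no tool requested: the model answered directly
        for call in msg["tool_calls"]:
            # Ollama returns arguments as a dict, so they can be splatted in directly.
            result = web_search(**call["function"]["arguments"])
            messages.append({"role": "tool", "content": result})
    print(messages[-1]["content"])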

And all of that can run on an off-the-shelf $1k gaming computer from Costco. It’ll be super slow compared to a cloud system (HDD vs. SSD levels of slowness), but it will actually run, and you’ll get *something* out of it.

replies(2): >>macrol+1e >>esaym+ai
2. macrol+1e[view] [source] 2025-06-03 20:27:00
>>dabock+(OP)
Which MCPs do you recommend?
replies(1): >>FinalD+Ur5
3. esaym+ai[view] [source] 2025-06-03 20:52:40
>>dabock+(OP)
Why don't you elaborate on your setup then?
replies(1): >>xandri+mu
4. xandri+mu[view] [source] [discussion] 2025-06-03 22:13:30
>>esaym+ai
Because you can easily look it up. Jan, GPT4All, etc.

It's not black magic anymore.

5. FinalD+Ur5[view] [source] [discussion] 2025-06-05 18:34:08
>>macrol+1e
DesktopCommander and Taskmaster are great to start with. With just these, you may start to see why OP recommends a memory MCP too (I don’t have a recommendation for that yet).
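
If you want to poke at what one of these servers actually exposes before pointing your client at it, here's a small sketch using the official `mcp` Python SDK to spawn a server over stdio and list its tools. The npx package name for DesktopCommander is my best guess from memory; check the project's README for the canonical install command.

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    params = StdioServerParameters(
        command="npx",
        args=["-y", "@wonderwhy-er/desktop-commander"],  # assumed package name
    )

    async def main():
        # Spawn the server as a subprocess and talk MCP over its stdio pipes.
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                for tool in tools.tools:
                    print(tool.name, "-", tool.description)

    asyncio.run(main())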