1. mark_l+(OP)[view] [source] 2026-02-03 22:47:46
I configured Claude Code to use a local model (ollama run glm-4.7-flash) that runs really well on a 32GB M2 Pro Mac mini. Maybe my standards are too low, but I was using that combination to clean up code, make improvements, and add docs and tests to a bunch of old experimental projects sitting in git repos.
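In case anyone wants to replicate it, Claude Code reads ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN, and ANTHROPIC_MODEL from the environment, so the rough shape is something like the sketch below. This assumes a local endpoint that speaks the Anthropic Messages API, either Ollama itself if your build exposes one, or a translating proxy in front of it; adjust the URL for your setup.

    # sketch only: point Claude Code at a local Anthropic-compatible endpoint
    # (whether Ollama exposes one natively depends on your version; otherwise
    #  run a translating proxy in front of it and point at that instead)
    export ANTHROPIC_BASE_URL=http://localhost:11434
    export ANTHROPIC_AUTH_TOKEN=ollama   # placeholder; local servers typically ignore it
    export ANTHROPIC_MODEL=glm-4.7-flash
    claude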
replies(1): >>redund+Gr
2. redund+Gr[view] [source] 2026-02-04 01:33:27
>>mark_l+(OP)
Did you have to do anything special to get it to work? I tried and it would just bug out: it would respond with JSON strings summarizing what I'd asked instead of acting on it, or just get things outright wrong. For example, I asked it to summarize what a specific .js file did and it gave me new code it had made up based on the file name...
replies(1): >>mark_l+os
3. mark_l+os[view] [source] [discussion] 2026-02-04 01:38:35
>>redund+Gr
Yes, I had to set the Ollama context size to 32K.
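Concretely, something like this (a sketch; PARAMETER num_ctx in a Modelfile is the standard mechanism, and newer Ollama builds also accept an OLLAMA_CONTEXT_LENGTH env var on the server):

    # raise the context window by baking num_ctx into a derived model
    cat > Modelfile <<'EOF'
    FROM glm-4.7-flash
    PARAMETER num_ctx 32768
    EOF
    ollama create glm-4.7-flash-32k -f Modelfile
    ollama run glm-4.7-flash-32k

Then point Claude Code at the 32K variant (glm-4.7-flash-32k here) instead of the base tag.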
replies(1): >>redund+sE
4. redund+sE[view] [source] [discussion] 2026-02-04 03:13:41
>>mark_l+os
Thank you, it's working as expected now!