zlacker

[parent] [thread] 4 comments
1. simonw+(OP)[view] [source] 2026-02-03 23:05:28
I got Codex CLI running against it and was sadly very unimpressed - it got stuck in a loop running "ls" for some reason when I asked it to create a new file.
replies(3): >>daniel+Kj >>mhitza+f02 >>Camper+qf4
2. daniel+Kj[view] [source] 2026-02-04 01:00:26
>>simonw+(OP)
Yes, sadly that sometimes happens. The issue is that Codex CLI / Claude Code were designed specifically for GPT / Claude models, so it's hard for OSS models to directly utilize the full spec / tools etc, and they can get stuck in loops sometimes. I would maybe try the MXFP4_MOE quant to see if it helps, and maybe try Qwen CLI (I was planning to make a guide for it as well)

I guess once we see the day OSS models truly utilize Codex / CC well, local models will really take off

3. mhitza+f02[view] [source] 2026-02-04 14:42:54
>>simonw+(OP)
I would recommend fiddling with the repeat penalty flags. I use local models often, and almost all of the ones I've tried needed that to prevent loops.

I'd also recommend dropping temperature down to 0. Any high temperature value feels like instructing the model "copy this homework from me but don't make it obvious".
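With llama.cpp's llama-server, both knobs are exposed as sampling flags. A minimal sketch (the model path is a placeholder; exact defaults vary by llama.cpp version):

```shell
# Sketch: serve a local GGUF with a repeat penalty and greedy sampling.
# --repeat-penalty > 1.0 penalizes recently generated tokens,
# which helps break "ls"-style tool-call loops;
# --temp 0 makes decoding effectively deterministic.
llama-server \
  --model ./your-model.gguf \
  --repeat-penalty 1.1 \
  --temp 0
```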

4. Camper+qf4[view] [source] 2026-02-05 02:47:35
>>simonw+(OP)
You've probably seen it by now, but there was a llama.cpp issue that was fixed earlier today(?) that caused looping and other sub-par results. You need to update llama-server as well as redownload the GGUFs (for certain quants).

https://old.reddit.com/r/unsloth/comments/1qvt6qy/qwen3coder...
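Roughly, the update amounts to rebuilding llama-server from latest master and re-fetching the affected quant. A sketch with placeholder repo and quant names (substitute the model you actually use):

```shell
# Sketch: rebuild llama.cpp from the latest source, then re-download
# the affected GGUF quant. Repo and quant names below are placeholders.
git -C llama.cpp pull
cmake -B llama.cpp/build -S llama.cpp
cmake --build llama.cpp/build --target llama-server --config Release
# Re-download only the quant you serve:
huggingface-cli download some-org/some-model-GGUF --include "*Q4_K_M*"
```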

replies(1): >>simonw+El4
5. simonw+El4[view] [source] [discussion] 2026-02-05 03:42:33
>>Camper+qf4
I hadn't seen that, thanks very much!