You've probably seen it by now, but a llama.cpp issue was fixed earlier today(?) to avoid looping and other sub-par results. You'll need to update llama-server and redownload the GGUFs (for certain quants).
https://old.reddit.com/r/unsloth/comments/1qvt6qy/qwen3coder...