zlacker

[return to "I was banned from Claude for scaffolding a Claude.md file?"]
1. bastar+Dj[view] [source] 2026-01-22 20:15:27
>>hugoda+(OP)
I've been doing something a lot like this, using a claude-desktop instance attached to my personal MCP server to spawn claude-code worker nodes, and for a month or two now it's been working great, with the main desktop chat acting as a project manager of sorts. I even started paying for the Max plan, as I've been using it effectively to write software now (I am NOT a developer).
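
Roughly, the glue is just an MCP tool that shells out to the claude CLI in non-interactive mode. Here's a minimal sketch (not my exact server; the server and tool names are made up, and it assumes the Python `mcp` SDK and the `claude` CLI's `-p` print mode are installed):

    import subprocess

    from mcp.server.fastmcp import FastMCP

    # Hypothetical server/tool names for illustration only.
    mcp = FastMCP("worker-spawner")

    @mcp.tool()
    def spawn_worker(prompt: str, cwd: str) -> str:
        """Run a one-shot claude-code task in the given directory and return its output."""
        result = subprocess.run(
            ["claude", "-p", prompt],  # print mode: run one prompt, then exit
            cwd=cwd,
            capture_output=True,
            text=True,
            timeout=600,
        )
        return result.stdout or result.stderr

    if __name__ == "__main__":
        mcp.run()  # serves over stdio so the desktop client can attach it

The desktop chat then calls spawn_worker for each task and collates the results, which is what makes it feel like a project manager.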

Lately it's gotten entirely flaky: chats will just stop working, silently ignoring new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow, like the author of this article did.

Even worse, Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, then tell you not to wait around, they'll contact you via email. After several attempts, that email has never come.

I'm assuming at this point that any real support is smoke and mirrors, meaning I'm paying for a service that has become almost unusable, with absolutely NO means of support to fix it. For all the cool tech, customer support is something they have not figured out.

I love Claude as it's an amazing tool, but when it starts to implode on itself to the point that you actually need some outside support, there is NONE to be had. Grok seems like the only real alternative, and over my dead body would I use anything from "him".

◧◩
2. syntax+Ek[view] [source] 2026-01-22 20:23:17
>>bastar+Dj
Serious question: why aren't Codex and Mistral (vibe) a real alternative?
◧◩◪
3. deaux+a81[view] [source] 2026-01-23 02:47:10
>>syntax+Ek
Codex: three reasons. I've used all of them extensively, for multiple months.

The main one is that it's ~3 times slower. This is the real dealbreaker, not quality. I can guarantee that if tomorrow we woke up and gpt-5.2-codex were the same speed as 4.5-opus without a change in quality, a huge number of people - not HNers, but everyone who is price sensitive - would switch to Codex because it's so much cheaper per usage.

The second one is that it's a little worse at using tools, though 5.2-codex is pretty good at it.

The third is that its knowledge cutoff is far enough behind both Opus 4.5 and Gemini 3 that it's noticeable and annoying when you're working with more recent libraries. This is irrelevant if you're not using those.

For Gemini 3 Pro, it's the same first two reasons as Codex, though the tool-calling gap is much bigger.

Mistral is of course so far removed in quality that it's apples to oranges.

◧◩◪◨
4. dudein+Wc3[view] [source] 2026-01-23 18:13:54
>>deaux+a81
Unpopular opinion, but I prefer slow and correct.

My experience on Claude Max (still on it until the end of the month) has been frequent incomplete assignments and troubling decision-making. I'll give you an example of each from yesterday.

1. Asked Claude to implement the features in a v2_features.md doc. It completed 8 of 10, but 3 of those incorrectly. I gave GPT-5.1-Codex-Max (high) the same tasks and it completed 10 of 10, but took perhaps 5-10x as long. Annoyingly, with LLM variability, I can't know for sure whether Claude would get them right if I tried again. The only thing I do know is that GPT-5.2 and 5.1 do a lot more "double-checking", both prior to executing and after.

2. I asked Claude to update a string displayed in the UI of my app to show something else instead. The string is powered by a JSON config. Claude searched the code, somehow assumed the string was being loaded from a db, did not find the JSON, and opted to write code that overwrites whatever comes out of the 'db' (an incorrect assumption) with what I asked for. This is... not desired behavior, and it's the source of a category of hidden bugs that Claude has created in the past (other models do this as well, but less often). Max took its time, found the source JSON file, and made the update in the correct place.

I can only "sit back and let an agent code" if I trust that it'll do the work right. I don't need it fast, I need it done right. It's already saving me hours where I can do other things in parallel. So, I don't get this argument.

That said, I have both a Claude Max and an OpenAI Pro subscription and use them both. I typically have Claude Opus work on UI and areas where I can (usually) visually confirm the logic quickly, and Codex on back-end code.

I often wonder how much the complexity of codebases affects how people discuss these models.
