zlacker

[return to "The Codex App"]
1. strong+J4[view] [source] 2026-02-02 18:25:02
>>meetpa+(OP)
Genuinely excited to try this out. I've started using Codex much more heavily in the past two months and honestly, it's been shockingly good. Not perfect, mind you, but it keeps impressing me with what it's able to "get". It often gets stuff wrong, and at times runs with faulty assumptions, but overall it's no worse than having average L3-L4 engs at your disposal.

That being said, the app is stuck at the launch screen, with "Loading projects..." taking forever...

Edit: A lot of links to documentation aren't working yet. E.g.: https://developers.openai.com/codex/guides/environments. My current setup involves having a bunch of different environments in their own VMs using Tart and using VS Code Remote for each of them. I'm not married to that setup, but I'm curious how it handles multiple environments.

Edit 2: Link is working now. Looks like I might have to tweak my setup to have port offsets instead of running VMs.
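For what it's worth, the "port offsets instead of VMs" idea can be sketched roughly like this; `BASE_PORT`, `OFFSET`, and `port_for` are all made-up names for illustration, not anything from Codex's environment config:

```shell
# Hypothetical sketch: give each local environment its own port block
# instead of a full VM, so several checkouts can run side by side.

BASE_PORT=3000   # where the app listens in "environment 0"
OFFSET=100       # each environment gets its own 100-port block

port_for() {
  # port_for <env-index> <service-slot-within-block>
  echo $(( BASE_PORT + $1 * OFFSET + $2 ))
}

# env 0: app on 3000, db proxy on 3001
# env 2: app on 3200, db proxy on 3201
port_for 2 0   # prints 3200
```

Each environment would then export its own ports (e.g. via a `.env` per checkout) rather than binding the same port inside its own VM.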

◧◩
2. raw_an+yi[view] [source] 2026-02-02 19:35:13
>>strong+J4
I have the $20 a month subscription for ChatGPT and the $200/year subscription to Claude (company reimbursed).

I have yet to hit usage limits with Codex; I constantly hit them with Claude. I use them both the same way: hands on the wheel and very interactive, small changes, and I tell them both to update a file with what's done and what's next as I test.

Codex gets caught in a loop more often when trying to fix an issue. I tell it to summarize the issue and what it's tried, and then I throw Claude at it.

Claude can usually fix it. Once it is fixed, I tell Claude to note the fix in the same file, and then I go back to Codex.

◧◩◪
3. strong+cr[view] [source] 2026-02-02 20:10:10
>>raw_an+yi
The trick to reaching the usage limit is to run many agents in parallel. Not that it's an explicit goal of mine, but I keep thinking of this blog post [0] and then try to get Codex to do as much for me as possible in parallel.

[0]: http://theoryofconstraints.blogspot.com/2007/06/toc-stories-...

◧◩◪◨
4. raw_an+Gt[view] [source] 2026-02-02 20:21:05
>>strong+cr
Telling a bunch of agents to do stuff is like treating each one as a senior developer you trust to take an ambiguous business requirement, use their best judgment, and come to you only if they have a question.

But doing that with AI feels like hiring an outsourcing firm for a project: they come back with an unmaintainable mess that's hard to reason through 5 weeks later.

I very much micromanage my AI agents and test and validate their output. I treat them like mid-level ticket-taker code monkeys.

◧◩◪◨⬒
5. boness+cy[view] [source] 2026-02-02 20:39:11
>>raw_an+Gt
My experience with good outsourcing firms is that they come back with heavily-documented solutions that are 95% of what you actually wanted, leaving you uncomfortably wondering if doing it yourself woulda been better.

I’m not fully sure what’s worse, something close to garbage with a short shelf life anyone can see, or something so close to usable that it can fully bite me in the ass…
