zlacker

[return to "The Codex App"]
1. strong+J4[view] [source] 2026-02-02 18:25:02
>>meetpa+(OP)
Genuinely excited to try this out. I've started using Codex much more heavily in the past two months and, honestly, it's been shockingly good. Not perfect, mind you, but it keeps impressing me with what it's able to "get". It often gets stuff wrong, and at times runs with faulty assumptions, but overall it's no worse than having average L3-L4 engs at your disposal.

That being said, the app is stuck at the launch screen, with "Loading projects..." taking forever...

Edit: A lot of links to documentation aren't working yet, e.g. https://developers.openai.com/codex/guides/environments. My current setup involves having a bunch of different environments in their own VMs using Tart, with VS Code Remote for each of them. I'm not married to that setup, but I'm curious how it handles multiple environments.

Edit 2: Link is working now. Looks like I might have to tweak my setup to have port offsets instead of running VMs.
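To illustrate, a port-offset scheme along these lines is what I have in mind (a hypothetical sketch - the variable names and the base/step numbers are my own invention, not anything Codex or Tart prescribes): each project gets an index, and its services bind to a reserved slice of ports on the shared host instead of identical ports inside separate VMs.

```shell
# Hypothetical port-offset scheme: each project on the host gets an
# index, and all its services bind to BASE_PORT + index * STEP + slot.
PROJECT_INDEX=2   # e.g. the third project on this machine
BASE_PORT=3000
STEP=100          # reserve 100 ports per project

APP_PORT=$((BASE_PORT + PROJECT_INDEX * STEP))       # app server
DB_PORT=$((BASE_PORT + PROJECT_INDEX * STEP + 1))    # database

echo "project $PROJECT_INDEX: app=$APP_PORT db=$DB_PORT"
```

The idea is just that no two projects can collide on a port, so they can all run side by side on one machine.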

◧◩
2. raw_an+yi[view] [source] 2026-02-02 19:35:13
>>strong+J4
I have the $20 a month subscription for ChatGPT and the $200/year subscription to Claude (company reimbursed).

I have yet to hit usage limits with Codex. I continuously hit them with Claude. I use them both the same way - hands on the wheel and very interactive, small changes, and I tell them both to update a file to keep track of what's done and what to do next as I test.

Codex gets caught in a loop more often when trying to fix an issue. I tell it to summarize the issue and what it's tried, and then I throw Claude at it.

Claude can usually fix it. Once it is fixed, I tell Claude to note the fix in the same file and then go back to Codex.

◧◩◪
3. strong+cr[view] [source] 2026-02-02 20:10:10
>>raw_an+yi
The trick to reaching the usage limit is to run many agents in parallel. Not that it's an explicit goal of mine, but I keep thinking of this blog post [0] and then try to get Codex to do as much for me as possible in parallel.

[0]: http://theoryofconstraints.blogspot.com/2007/06/toc-stories-...

◧◩◪◨
4. raw_an+Gt[view] [source] 2026-02-02 20:21:05
>>strong+cr
Telling a bunch of agents to do stuff is like treating the AI as a senior developer you trust to take an ambiguous business requirement and use their best judgment, asking you only if they have a question.

But doing that with AI feels like hiring an outsourcing firm for a project, and they come back with an unmaintainable mess that's hard to reason through 5 weeks later.

I very much micromanage my AI agents and test and validate their output. I treat them like mid-level ticket-taking code monkeys.

◧◩◪◨⬒
5. strong+iw[view] [source] 2026-02-02 20:31:27
>>raw_an+Gt
I fully believe that if I didn't review its output and ask it to clean it up, it would become unmaintainable real quick. The trick I've found, though, is to be detailed enough in the design at both a technical and non-technical level, sometimes iterating a few times on it with the agent before telling it to go for it (which can easily take 30 minutes).

That's how I used to deal with L4s, except Codex codes much faster (but sometimes in the wrong direction).

◧◩◪◨⬒⬓
6. raw_an+aA[view] [source] 2026-02-02 20:47:42
>>strong+iw
It's funny - over the years I went from:

1. I like being hands on keyboard and picking up a slice of work I can do by myself with a clean interface that others can use - a ticket-taking code monkey.

2. I like being a team lead/architect where my vision can be larger than what I can do in 40 hours a week, even if I hate the communication and coordination overhead of dealing with two or three other people.

3. I love being able to do large projects by myself, including dealing with the customer, where the AI can do the grunt work I used to have to depend on ticket-taking code monkeys for.

Moral of the story: if you are a ticket-taking "I codez real gud" developer - you are going to be screwed no matter how many B-trees you can reverse on the whiteboard.

◧◩◪◨⬒⬓⬔
7. AloysB+d21[view] [source] 2026-02-02 22:40:05
>>raw_an+aA
Moral of your story.

Each and every one of us is able to write their own story, and come up with their own 'moral'.

Settling for less (if AI is even a productivity booster, which is debatable) doesn't equal being screwed. There is wisdom in reaching your 'enough' point.

◧◩◪◨⬒⬓⬔⧯
8. raw_an+H81[view] [source] 2026-02-02 23:03:41
>>AloysB+d21
If you look at current hiring trends and how much longer it is taking developers to get jobs these days, a mid-level ticket-taker is definitely screwed between a flooded market, layoffs, and AI.

By definition, this is the worst AI coding will ever be, and it's pretty good now.

◧◩◪◨⬒⬓⬔⧯▣
9. AloysB+So1[view] [source] 2026-02-03 00:22:55
>>raw_an+H81
I am really not convinced yet.

From all the data I have seen, the software industry is poised for a lot more growth in the foreseeable future.

I wonder if we are experiencing a local minimum on a longer upward trend.

Those who do find a job in a few days aren't online to write about it, so based on what is online we are led to believe that it's all doom and gloom.

We also come out of a silly growth period where anyone who could sort a list and build a button in React would get hired.

My point is not that AI coding is to be avoided at all costs; it's more about taming the fear-mongering of "you must use AI or you will fall behind". I believe it's unfounded - use it as much or as little as you feel the need to.

P.S.: I do think that for juniors it's currently harder and requires intentional effort to land that first job - but that is the case in many other industries. It's not impossible, but it won't come on a silver platter like it did 5-7 years ago.
