I've found it more effective to give it only one or two complex requirements at a time, and then have it iterate.
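Something like this loop is roughly what I mean, just as a sketch (it assumes the OpenAI Python client; the model name and the requirements list are placeholders):

    # Feed one requirement per turn and let the model iterate on its own output.
    from openai import OpenAI

    client = OpenAI()

    requirements = [
        "Parse the input CSV into a list of records.",
        "Validate that every record has a non-empty email field.",
        "Write the valid records back out as JSON.",
    ]

    messages = [{"role": "system", "content": "You are a careful coding assistant."}]
    for req in requirements:
        messages.append({"role": "user", "content": f"Next requirement: {req}\nUpdate the code."})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})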
Most LLMs don't "think", so when asking an LLM something, I generally ask myself: "would I be able to do this without thinking, if I had all the knowledge, but just had to start typing and go?"
Maybe you could break your prompt down into separate prompts, like this: https://chatgpt.com/share/683eb7d7-e7ec-8012-8b3b-e34d523dc9...
I think it broke things down in a weird way, but I definitely can't analyse the correctness of anything it outputs in this domain :P
Coding-specific agents like Copilot might be better able to handle a complex initial prompt, since they take it and use LLMs to break it down into smaller steps, which ChatGPT doesn't do. They can sort of "think". Deep research AIs have a kind of thinking too, so they might do better.
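If I were to fake that behaviour by hand, it would look something like this sketch (same caveats: OpenAI Python client, made-up prompts and model name):

    # First ask the model for a plan, then work through the steps one at a time.
    from openai import OpenAI

    client = OpenAI()

    complex_prompt = "Build a CLI tool that syncs a local folder to S3 with retries and logging."

    plan = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Break this task into numbered steps, one per line:\n" + complex_prompt}],
    ).choices[0].message.content

    messages = [{"role": "system", "content": "Implement each step, building on earlier code."}]
    for step in plan.splitlines():
        if not step.strip():
            continue
        messages.append({"role": "user", "content": f"Implement: {step}"})
        result = client.chat.completions.create(model="gpt-4o", messages=messages)
        messages.append({"role": "assistant", "content": result.choices[0].message.content})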
One could also conclude that a large portion of software engineering is mostly implementation of things that have been implemented many times before, and that only a small portion consists of real software engineering, where you have to develop code for a problem nobody has solved before, or one that requires a deep understanding of the problem domain.