That'll be dope. The tokens used (input, output, total) are actually saved in Codex's JSONL session files.
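If the usage numbers are sitting in those JSONL files, tallying them up is a short script. A minimal sketch, assuming each line is a JSON object and that usage entries carry a `token_usage` field with `input`/`output`/`total` keys (the field names here are illustrative; the real Codex schema may differ):

```python
import json
import tempfile
from pathlib import Path

def sum_token_usage(jsonl_path):
    """Sum input/output/total token counts across a JSONL session file.

    Assumes each line is a JSON object; lines without a "token_usage"
    field (hypothetical name) are skipped.
    """
    totals = {"input": 0, "output": 0, "total": 0}
    with open(jsonl_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            usage = json.loads(line).get("token_usage")
            if usage:
                for key in totals:
                    totals[key] += usage.get(key, 0)
    return totals

# Demo against a fabricated session log with two usage entries.
sample = [
    {"type": "message", "token_usage": {"input": 120, "output": 45, "total": 165}},
    {"type": "message", "token_usage": {"input": 300, "output": 80, "total": 380}},
    {"type": "event"},  # no usage info -> skipped
]

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "session.jsonl"
    path.write_text("\n".join(json.dumps(e) for e in sample))
    print(sum_token_usage(path))  # {'input': 420, 'output': 125, 'total': 545}
```

Point it at the real session directory and you'd get per-session totals without any extra tooling.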
I've been very skeptical of the real usefulness of code assistants, in large part from my own experience. They work great on brand-new codebases but struggle with maintenance. Seeing your final result, I'm eager to see the process, especially the iteration.
I would happily use local models if I could get them to perform well, but they’re super slow if I bump their context window up, and I haven’t seen good orchestrators that keep the context small enough.