>>tehnub+(OP)
It strikes me that more tokens likely give the LLM more time/space to "think". Also, more redundant tokens, like local type declarations instead of type inference from far-away code, likely reduce how much of the codebase LLMs (and humans) have to read to understand any given line.
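A quick sketch of that second point in TypeScript (the names here are made up for illustration): a "redundant" local annotation puts the type at the use site, so a reader doesn't have to chase a signature defined somewhere else.

```typescript
// Imagine this lives "far away" in a real codebase:
type Config = { retries: number; verbose: boolean };

function parseConfig(raw: string): Config {
  const parsed = JSON.parse(raw);
  return { retries: parsed.retries ?? 3, verbose: parsed.verbose ?? false };
}

// With inference, knowing the type of `inferred` requires reading
// parseConfig's signature, wherever it is:
const inferred = parseConfig('{"retries": 5}');

// With a redundant local annotation, the type is visible right here,
// at the cost of a few extra tokens:
const annotated: Config = parseConfig('{"retries": 5}');

console.log(inferred.retries, annotated.verbose);
```

Both lines compile to identical behavior; the annotated one just makes the surrounding code self-describing.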
So I'm not convinced this is the right metric, or, even if it is, that it's one you want to minimize.