Wildly understating this part.
Even the best local models (the ones you run on beefy 128GB+ RAM machines) get nowhere close to the sheer intelligence of Claude/Gemini/Codex. At worst, these models will move you backward and just increase the amount of work Claude has to do once your limits reset.
I have Claude Pro ($20/mo) and sometimes run out. When that happens, I just set ANTHROPIC_BASE_URL to a local LLM API endpoint that routes to a cheaper OpenAI model, and I can keep going on smaller tasks without a problem. People have been doing this for a long time.
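For anyone curious, a minimal sketch of what that looks like: Claude Code reads ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN from the environment, so you point them at a local proxy that speaks the Anthropic API and forwards to your cheaper backend. The URL, port, and token below are placeholders, not a specific recommendation.

```shell
# Point Claude Code at a local OpenAI-compatible proxy instead of Anthropic's API.
# The address and token are hypothetical -- substitute whatever your proxy
# (e.g. a LiteLLM instance) actually listens on and expects.
export ANTHROPIC_BASE_URL="http://localhost:4000"   # placeholder proxy address
export ANTHROPIC_AUTH_TOKEN="dummy-local-key"       # placeholder; whatever your proxy wants

# then launch Claude Code as usual:
# claude
```

Unset both variables (or open a fresh shell) when your limits reset and you want to go back to the real API.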