The problem I have with everything around MCP is that the tech is developing so fast that it's hard for outsiders to catch up with what the best working setup actually is.
What would you do, for example, if you wanted to self-host this?
- which models (Qwen Coder?)
- which API/frontend (Ollama? Bolt? Aider? etc.)
- how to integrate PRs with a local GitLab/Gogs/Forgejo instance? Do you need another MCP agent for Git that handles that?
- what hardware do you need to run it?
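For concreteness, one plausible self-hosted stack for the questions above might look like the following. This is a sketch, not a recommendation: the hostname, repo path, and `$FORGEJO_TOKEN` are placeholders, the model tag is just one common choice, and the exact Aider flags may differ in your setup. The PR call uses the Gitea-style REST API that Forgejo inherits.

```shell
# Pull a local coding model (Qwen2.5-Coder is one common choice).
ollama pull qwen2.5-coder:7b

# Point a coding frontend such as Aider at the local Ollama server.
aider --model ollama/qwen2.5-coder:7b

# Open a PR on a self-hosted Forgejo/Gitea instance via its REST API
# (host, repo path, and token are placeholders).
curl -X POST "https://git.example.local/api/v1/repos/me/myrepo/pulls" \
  -H "Authorization: token $FORGEJO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title": "Agent change", "head": "agent/fix-linter", "base": "main"}'
```

Whether the PR step belongs in a dedicated Git MCP server or in plain scripts around the agent is exactly the open question.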
I am currently trying to figure out how to implement a practical workflow for this. So far I'm still using a synchronous MCP agent setup that runs on another machine in the network, because my laptop isn't powerful enough to run it locally.
But how would I get to the point of async MCP agents that can work on multiple things in my Go codebases in parallel? With the mentioned PR workflows, so that I can modify/edit/rework the changes before the merges?
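On the async part: independent tasks can at least be fanned out with goroutines, one per agent/branch. A minimal sketch, where `runAgentTask` is a hypothetical stand-in for whatever MCP/model call does the real work and just returns the branch a PR would be opened from:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// runAgentTask is a placeholder: in a real setup this would drive an
// MCP server / local model API for one task on its own branch.
func runAgentTask(task string) string {
	return "agent/" + task // branch name a PR would be opened from
}

func main() {
	tasks := []string{"fix-linter", "add-tests", "refactor-io"}

	var wg sync.WaitGroup
	results := make([]string, len(tasks))

	// Each task runs in its own goroutine, like independent agents
	// working on separate branches of the same codebase.
	for i, t := range tasks {
		wg.Add(1)
		go func(i int, t string) {
			defer wg.Done()
			results[i] = runAgentTask(t)
		}(i, t)
	}
	wg.Wait()

	// Stable output order for review.
	sort.Strings(results)
	for _, r := range results {
		fmt.Println(r)
	}
}
```

The hard part this sketch leaves out is exactly what I'm asking about: keeping the branches from conflicting and gating each result behind a PR I can rework before merging.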
The author makes a lot of claims and keeps saying that their opponents in the argument aren't talking about the same thing. But what exactly is that "same thing", reproducible locally for everyone?