For coding, interacting with the agent is best done via chat, especially if you’re running teams of agents: you’re not going to be looking at code all the time. The agent summarizes the changes, you click through the diffs, approve them, and move on. It’s a very different experience from being the only one coding.
Edit:
Here’s a hot take:
A quick note on SwiftUI: it’s a piece of garbage that’s so hard to use that native devs despise it. So far no AI has been able to one-shot it for me.
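To make that concrete, here’s one small, well-known gotcha: the order of modifiers silently changes the layout, so two views with the exact same modifiers render differently.

```swift
import SwiftUI

// The same two modifiers in different orders produce different results.
struct OrderDemo: View {
    var body: some View {
        VStack(spacing: 12) {
            Text("padding, then background")
                .padding()
                .background(Color.blue) // blue fills the padded box
            Text("background, then padding")
                .background(Color.blue) // blue hugs the text only
                .padding()
        }
    }
}
```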
Blender most likely uses an immediate-mode GUI, which is more resource-intensive and less efficient than a stateful, object-oriented (retained-mode) interface; a rough sketch of the difference is below. But Zed uses a similar approach with (I think) success.
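For anyone unfamiliar with the distinction, here’s a hand-wavy sketch. The `Label`, `Button`, `drawLabel`, and `drawButton` names below are toy stand-ins invented for illustration, not any real framework’s API.

```swift
// --- Retained mode: widgets are long-lived objects you mutate. ---
final class Label {
    var text: String
    init(text: String) { self.text = text }
}

final class Button {
    var onTap: () -> Void = {}
    let title: String
    init(title: String) { self.title = title }
}

final class CounterView {
    let label = Label(text: "Count: 0")
    let button = Button(title: "Increment")
    private var count = 0

    init() {
        // State lives in the widget tree; events mutate only the
        // nodes that changed.
        button.onTap = { [self] in
            count += 1
            label.text = "Count: \(count)"
        }
    }
}

// --- Immediate mode: the UI is re-described from scratch every frame. ---
func drawLabel(_ text: String) {
    // In a real immediate-mode library this would emit draw commands
    // for the current frame.
}

func drawButton(_ title: String) -> Bool {
    // Would draw the button and report whether it was clicked this frame.
    return false
}

func renderFrame(count: inout Int) {
    drawLabel("Count: \(count)")
    if drawButton("Increment") {
        count += 1 // app state is the only state; nothing to keep in sync
    }
}
```

The immediate-mode version redraws and re-hit-tests everything every frame, which is the resource cost being pointed at; the retained version only does work when state changes, at the price of keeping the widget tree and app state in sync.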
Then think about this: pre-AI, Google, with all its billions, used web interfaces for all of its desktop product GUIs :)
Apple, with all of its billions, created Xcode, which is inferior to every other IDE I have ever used. They still haven’t learned from Visual Studio. Microsoft is bad at a lot of things, but developer tooling isn’t one of them.
All that to say: even if you knew exactly what you wanted, taking that vision to reality is a difficult challenge. At least with AI, they could cheaply create a hundred different prototypes and pick the best direction from those. That’s what they should be doing, and they probably aren’t.