Bounds, bounds, bounds, bounds. The important part for humans seems to be maintaining boundaries for AI. If your well-tested codebase has its tests built through AI, it's probably not going to work.
I think it's somewhat telling that they can't share numbers for how they're using it internally. I want to know that Microsoft, the company famous for dogfooding, is using this day in and day out, with success. There's real stuff in there, and my brain has an insanely hard time separating the trillion dollars of hype from the usefulness.
So far, the agent has been used by about 400 GitHub employees in more than 300 of our repositories, and we've merged almost 1,000 pull requests contributed by Copilot.
In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)
(Source: I'm the product lead at GitHub for Copilot coding agent.)
Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates - and I really think we're heading to a world where AI can take that load off and free us up to work on the most interesting and complex problems.
What is the job for the developer now? Writing tickets and reviewing low-quality PRs? Isn't that the most boring and mundane job in the world?
Those orgs that value high-quality documentation won’t have undocumented codebases to begin with.
And let’s face it, like writing code, writing docs does have a lot of repetitive, boring, boilerplate work, which I bet is exactly why it doesn’t get done. If an LLM is filling out your API schema docs, then you get to spend more time on the stuff that’s actually interesting.
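To make the "boilerplate" concrete, here's a minimal sketch of the kind of parameter/return documentation an LLM can draft from a signature and the surrounding code. The endpoint and field names are hypothetical, not from any real project:

```python
# Hypothetical example: the repetitive docstring/API-schema work that
# tends to go unwritten, and that an LLM can draft for review.

def list_invoices(customer_id: int, status: str = "open", limit: int = 50) -> list[dict]:
    """Return invoices for a customer, filtered by status.

    Args:
        customer_id: Numeric ID of the customer whose invoices to fetch.
        status: Invoice status to filter by ("open", "paid", or "void").
        limit: Maximum number of invoices to return (default 50).

    Returns:
        A list of invoice dicts, newest first, each with "id",
        "amount_cents", "status", and "issued_at" keys.

    Raises:
        ValueError: If ``status`` is not one of the accepted values.
    """
    if status not in {"open", "paid", "void"}:
        raise ValueError(f"unknown status: {status}")
    # Placeholder implementation; a real version would query a datastore.
    return []
```

None of this is hard to write, it's just tedious at scale - which is exactly the part worth handing off, with a human reviewing the result.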
A good example of the kind of result I have in mind is the Laravel documentation[1] and its associated API reference[2]. I don't believe AI can help with this.