In any case, I think this is the best use case for AI in programming: as a force multiplier for the developer. It's in the best interest of both AI and humanity for AI to avoid diminishing the creativity, agency and critical thinking of its human operators. AI should be task-oriented; high-level decision-making and planning should always remain a human task.
So I think our use of AI for programming should remain heavily human-driven for the long term. Ultimately, it should be about enriching human capabilities rather than churning out features for profit, though there are obvious limits to that.
[0] https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...
Consider using Aider, and aggressively managing the context (via /add, /drop and /clear).
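A minimal session sketch (illustrative only; the file names and task are placeholders):

  $ aider main.py
  > /add utils.py     (pull a second file into the chat context)
  > refactor the parsing logic into utils.py
  > /drop utils.py    (drop it once it's no longer relevant)
  > /clear            (wipe the chat history before the next task)

Keeping the context this lean tends to give noticeably better results than letting it grow unbounded.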
Good to see an official way of doing this.
1 - https://github.com/plandex-ai/plandex
Also, a bit more on auto vs. manual context management in the docs: https://docs.plandex.ai/core-concepts/context-management
Copilot Workspace could take a task, implement it and create a PR, but it had a linear, highly structured flow, and it wasn't deeply integrated into the GitHub tools developers already use, like issues and PRs.
With Copilot coding agent, we're taking all of the great work on Copilot Workspace, and all the learnings and feedback from that project, and integrating it more deeply into GitHub and really leveraging the capabilities of 2025's models, which allow the agent to be more fluid, asynchronous and autonomous.
(Source: I'm the product lead for Copilot coding agent.)
But the upgraded Copilot was just in response to Cursor and Windsurf.
We'll see.
[0] >>43904611
The entire website was created by Claude Sonnet through Windsurf Cascade, but with the “Fair Witness” prompt embedded in the global rules.
If you regularly guide the LLM to “consult a user experience designer”, “adopt the multiple perspectives of a marketing agency”, etc., it will make rather decent suggestions.
I’ve been having pretty good success with this approach, granted mostly at the scale of starting the process with “build me a small educational website to convey this concept”.
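For the curious, the kind of global rule I mean looks roughly like this (paraphrased from memory, not the exact “Fair Witness” wording):

  When responding, separate observations, inferences and opinions.
  Before proposing UI or copy changes, adopt the perspective of a
  user experience designer and of a marketing agency, and state
  trade-offs explicitly rather than deciding silently.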
The Claude and Gemini models tend to be the slowest (yes, including Flash). 4o is currently the fastest, but still not great.
Edit: From TFA: "Using the agent consumes GitHub Actions minutes and Copilot premium requests, starting from entitlements included with your plan."
[0] https://docs.github.com/en/copilot/managing-copilot/monitori...
Really cool, thanks for sharing! Would you perhaps consider implementing something like these stats that aider keeps on "aider writing itself"? - https://aider.chat/HISTORY.html
Every bullet hole in that plane is one of the 1k PRs contributed by Copilot. The missing dots, and the whole missing planes, are unaccounted for. I.e., "AI ruined my morning."
AMAZING
https://developers.google.com/gemini-code-assist/docs/review...
Definitely not Google Code, but better than Cloud Source Repositories.
On an unrelated note, it also suggested I use the "Strobe" protocol for encryption and sent me to https://strobe.cool which is ironic considering that page is all about making one hallucinate.
Well, that's back-rationalization. I saw advances like meta sentiment analysis of medical papers back in the '00s. Deep learning was clearly just the beginning. [0]
> Who would've thought (except you)
You're othering me, which is rude, and you're speaking as though you speak for an entire group of people. Seems kind of arrogant.
[0] (2014) https://www.ted.com/talks/jeremy_howard_the_wonderful_and_te...
This is a popular workflow I first read about here[1].
This has been the most useful use case for LLMs for me. Actually getting them to implement the spec correctly is the hard part, and you'll have to take the reins and course-correct often.
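The gist of the workflow, roughly (prompt wording is mine, not verbatim from the post):

  1. "Ask me one question at a time until we have a thorough,
     step-by-step spec for this idea."
  2. "Turn the spec into a plan of small, independently testable
     chunks."
  3. Feed each chunk to the codegen tool, review the output, and
     course-correct before moving on.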
[1]: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
A good example of the kind of result I mean is the Laravel documentation[1] and its associated API reference[2]. I don't believe AI can help with this.
[0]: https://en.wikipedia.org/wiki/Docstring
https://docs.github.com/en/copilot/managing-copilot/managing...
And this one too: https://docs.github.com/en/site-policy/privacy-policies/gith...
https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-3...
I recently created a course for LinkedIn Learning on using generative AI to build SDKs[0]. When I was onsite with them to record it, I found my GitHub Copilot calls kept failing... with a network error. Wha?
Turns out that LinkedIn doesn't allow people onsite to access Copilot, so I had to put my MiFi in the window and connect to that to do my work. It's wild.
Btw, I love working with LinkedIn and have done 15+ courses with them over the last decade. This is the only issue I've ever had... but it was the least expected one.
0: https://www.linkedin.com/learning/build-with-ai-building-bet...
Our key differentiator is cross-platform support - we work with Jira, Linear, GitHub, and GitLab - rather than limiting teams to GitHub's ecosystem.
GitHub's approach is technically impressive, but our experience suggests organizations derive more value from targeted automation that integrates with existing workflows than from tooling that requires teams to change their processes. This is particularly relevant for regulated industries, where security considerations supersede feature breadth. Not everyone can just jump off of Jira at a moment's notice.
Curious about others' experiences with integrating AI into your platforms and tools. Has ecosystem lock-in affected your team's productivity or tool choices?
https://github.com/dotnet/runtime/pull/115733
https://github.com/dotnet/runtime/pull/115732
https://github.com/dotnet/runtime/pull/115762
We have invested plenty of money and time in nuclear fusion with little progress. The list of key achievements from CERN[1] is also meager compared to the investment put in, especially if you consider that their ultimate goal is to apply the research beyond pure theory.
That's not hallucination. That's just an optical illusion.
[1] https://notes.jessmart.in/My+Writings/Pair+Programming+with+...