I work on a team of 5 great professionals. Since Copilot launched in 2022, there hasn't been a single instance where anybody, for any single modification, failed to take full responsibility for what was committed.
I know we all use it, to different extents, but the quality of what we produce hasn't dipped a bit. I'd even argue it has improved, because LLMs can find answers more easily in complex codebases. We started adding `_vendor` directories with our main external dependencies as git subtrees, and it's super useful for finding information about those dependencies directly in their source code and tests.
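If you haven't used subtrees before, here's a minimal sketch of that workflow; the repository URL, library name, and `_vendor` layout are placeholders for illustration, not our actual setup:

```sh
# Hypothetical example: vendor a dependency's full source into _vendor/
# so an LLM (or a human) can grep its code and tests locally.
git subtree add --prefix=_vendor/somelib \
    https://github.com/example/somelib.git main --squash

# Later, pull upstream changes into the vendored copy:
git subtree pull --prefix=_vendor/somelib \
    https://github.com/example/somelib.git main --squash
```

Unlike submodules, a subtree puts the real files in your working tree, so any tool that searches your repo sees the dependency's sources too.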
It's really that simple. If your teammates are producing slop, that's a human and professional problem, and those people should be fired. If you use the tool correctly, it can help you a lot in finding information and connecting the dots.
Anyone with a brain can clearly see the huge benefit of these tools, but also the great danger of not reviewing their output line by line, and of forfeiting the constant work of resolving design tensions.
Of course, open source is a different beast. The people committing may not be professionals and have no real stakes, so they have little to lose by producing slop, whereas maintainers are already stretched thin in time and attention.
> Ghostty is written with plenty of AI assistance, and many maintainers embrace AI tools as a productive tool in their workflow. As a project, we welcome AI as a tool!
> Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It's the people, not the tools, that are the problem.
Basically: don't write slop, and if you want to contribute as an outsider, make sure your contribution is actually valid and works.