zlacker

1. kg+(OP)[view] [source] 2025-08-21 19:01:32
The OP seems to be coming from the perspective of "my time as a PR reviewer is limited and valuable, so I don't want to spend it coaching an AI agent or a thin human interface to an AI agent". From that perspective, it makes perfect sense to want to know how much a human is actually in the loop for a given PR. If the PR is good enough that it doesn't need much review, then whether AI wrote it matters less.

An angle not mentioned in the OP is copyright: depending on your jurisdiction, AI-generated text may not be copyrightable, which could call into question whether you can still enforce your open source license if the majority of the codebase was AI-generated with little human intervention.

replies(1): >>victor+E3
2. victor+E3[view] [source] 2025-08-21 19:23:37
>>kg+(OP)
As long as some of the code is written by humans it should be enforceable. If we assume AI code has no copyright (not sure that has been tested in court yet), then only the parts written by the AI would lack protection. So if AI writes 100 lines of code in Ghostty, then I guess yes, someone can "steal" those lines (but no other code in Ghostty). Why would anyone do that? 100 random lines of AI code in isolation aren't really worth anything...
replies(1): >>simonc+0E1
3. simonc+0E1[view] [source] [discussion] 2025-08-22 11:05:10
>>victor+E3
You might be interested in reading Part 2 of the US Copyright Office's report on Copyright and Artificial Intelligence: <https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...>