GitHub, the Microsoft code-hosting shop that popularized AI-assisted software development, is having some regrets about its Copilot infatuation.
Last week, product manager Camilla Moraes opened a GitHub community discussion to address “a critical issue affecting the open source community: the increasing volume of low-quality contributions that is creating significant operational challenges for maintainers.”
AI slop has come home to roost, and GitHub wants help from its community of software developers to figure out how to manage the mess.
“We’ve been hearing from you that you’re dedicating substantial time to reviewing contributions that do not meet project quality standards for a number of reasons – they fail to follow project guidelines, are frequently abandoned shortly after submission, and are often AI-generated,” Moraes wrote.
“As AI continues to reshape software development workflows and the nature of open source collaboration, I want you to know that we are actively investigating this problem and developing both immediate and longer-term strategic solutions.”
Moraes said GitHub is considering various options, including:

- the option for maintainers to disable pull requests entirely, or to restrict them to project collaborators;
- the ability to delete pull requests from the interface (to avoid having to look at AI slop);
- more granular permission settings for creating and reviewing pull requests;
- triage tools, possibly AI-based;
- and transparency and attribution mechanisms for signaling when AI tools are used.
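None of those controls exists yet. For illustration only – and not one of the features Moraes floated – the sketch below uses GitHub’s existing interaction-limits REST endpoint, which already lets maintainers temporarily restrict comments, issues, and pull requests to collaborators. The token, owner, and repository names are placeholders.

```python
# Illustrative sketch only: temporarily limit who can interact with a repository
# using GitHub's existing interaction-limits REST endpoint.
# The token, owner, and repo below are placeholders, not real values.
import requests

TOKEN = "ghp_your_token_here"            # personal access token (placeholder)
OWNER, REPO = "example-org", "example-repo"

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/interaction-limits",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    # "collaborators_only" blocks comments, issues, and pull requests from
    # everyone except collaborators; limits always expire, "six_months" at most.
    json={"limit": "collaborators_only", "expiry": "one_month"},
)
resp.raise_for_status()
print(resp.json())
```

The existing mechanism is temporary by design (limits expire after at most six months), which is presumably part of why the thread is asking for more permanent and granular controls.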
GitHub did not immediately respond to a request to quantify the scope of the problem, which can show up in subpar pull requests (PRs) – code changes submitted to a Git repo in the hope they will be reviewed and merged into the codebase – and in shoddy bug reports (which may be accompanied by a pull request to fix the flaw).
But several thread participants acknowledged that dealing with AI-generated code and comments has become a pressing problem.
According to Xavier Portilla Edo, head of cloud infrastructure at Voiceflow and a member of the Genkit core team, only “1 out of 10 PRs created with AI is legitimate and meets the standards required to open that PR.”
Other open source projects have been trying to deal with the tide of AI slop that has swelled over the past two years. Daniel Stenberg, founder and lead developer of curl, and the Python security developer Seth Larson have both been vocal in their objections to the maintenance burden created by low-quality AI-generated bug reports. Despite Stenberg’s acknowledgement that AI bug reports can be helpful if done properly, the curl project recently shut down its bug bounty program to remove the incentive to submit low-quality bug reports, whether authored by AI or otherwise.
Jiaxiao (Joe) Zhou, a software engineer on Microsoft’s Azure Container Upstream team and maintainer of Containerd’s Runwasi project and SpinKube, responded to Moraes about how AI code submissions are affecting open source maintainers.
“We held an internal session to talk about Copilot and there is a discussion on the topic where maintainers feel caught between today’s required review rigor (line-by-line understanding for anything shipped) and a future where agentic / AI-generated code makes that model increasingly unsustainable,” he said.
Zhou summarized these concerns as follows:
- Review trust model is broken: reviewers can no longer assume authors understand or wrote the code they submit.
- AI-generated PRs can look structurally “fine” but be logically wrong or unsafe, or interact with systems the reviewer doesn’t fully know.
- Line-by-line review is still mandatory for shipped code, but does not scale with large AI-assisted or agentic PRs.
- Maintainers are uncomfortable approving PRs they don’t fully understand, yet AI makes it easy to submit large changes without deep understanding.
- Increased cognitive load: reviewers must now evaluate both the code and whether the author understands it.
- Review burden is higher than pre-AI, not lower.
As noted by Nathan Brake, a machine learning engineer at Mozilla.ai, the open source community needs to figure out how to preserve community incentives to participate when AI is doing the coding work that traditionally earned recognition and the contributor is only writing up the issue description.
“[In my opinion,] much of open-source is really at risk because of this: we need to figure out a way to encourage knowledge sharing to keep alive what makes open source and GitHub so special: the community,” he said, pointing to a recent presentation at FOSDEM by Abby Cabunoc Mayes that addressed the issue.
Chad Wilson, primary maintainer for GoCD, expects that AI agents unleashed as a result of OpenClaw and Moltbook are going to make things worse.
In a post to the thread on Tuesday, he said that he had already dealt with one pull request related to documentation and realized that it was “plausible nonsense” only after spending significant time reviewing it.
With regard to AI disclosure requirements that have been endorsed by others, he said the risk is that the open source social compact will break if there’s no way to easily tell whether one is interacting with a human or an AI bot.
“I’m generally happy to help curious people in issues and guide them towards contributions/solutions in the spirit of social coding,” he wrote. “But when there is widespread lack of disclosure of LLM use and increasingly automated use – it basically turns people like myself into unknowing AI prompters. That’s insane, and is leading to a huge erosion of social trust.” ®