News Daily Nation Digital News & Media Platform


Is AI killing open source?

May 14, 2026  Twila Rosenbaum

Open source has always rested on a quiet paradox: most critical software is maintained by a tiny core of unpaid volunteers, while billions of dollars in enterprise value depend on it. That tension is now reaching a breaking point, not because of burnout or a lack of funding, but because of a new kind of digital noise: AI-generated pull requests.

Mitchell Hashimoto, the co-founder of HashiCorp and a revered figure in open source, recently revealed that he is considering closing external pull requests entirely. The reason? He is drowning in what he calls “slop” PRs — submissions churned out by large language models and their AI agent assistants. Hashimoto is not alone. Flask creator Armin Ronacher has coined the term “agent psychosis” to describe the addictive cycle in which developers deploy autonomous coding agents that run wild through repositories, producing code that feels plausible but lacks the deep context, trade-off awareness, and historical understanding that a human maintainer brings.

The problem centers on a brutal economic asymmetry. A developer can spend sixty seconds prompting an AI agent to fix typos, optimize loops, or add features across a dozen files. But the maintainer must spend an hour carefully reviewing those changes, verifying edge cases, and ensuring alignment with the project’s long-term vision. Multiply that by hundreds of contributors, each using their personal LLM assistant, and the result is not a better project — it is a maintainer who walks away.

The OCaml community recently experienced this firsthand when a 13,000-line AI-generated pull request was rejected. Maintainers cited copyright concerns, a lack of review resources, and the long-term burden of maintaining code that no one fully understands. One maintainer warned that such low-effort submissions risk bringing the entire pull request system to a halt. Even GitHub, the world’s largest code forge, is reportedly exploring tighter pull request controls and UI-level deletion options because maintainers are overwhelmed by AI-generated submissions.

The impact is even more acute for small open-source libraries. Nolan Lawson, author of the popular blob-util JavaScript library, has chronicled how AI is making small utility libraries obsolete. If a developer can simply ask Claude or GPT-5 to generate a working function in seconds, why take on a dependency? The library’s existence was premised on the friction of writing that code manually — a friction that AI has eliminated. Lawson fears the loss of the educational value that these libraries provided: developers learned by reading others’ code, internalizing patterns and trade-offs. Now they get instant answers without understanding.
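To make the point concrete, here is the kind of tiny, self-contained helper a developer might now ask an AI to generate instead of importing a utility library. This is a hypothetical sketch in the spirit of blob-util’s helpers, not that library’s actual API:

```typescript
// Hypothetical one-off helper: convert a Blob to a base64 string.
// Utilities like this were once worth packaging as a library; an AI
// assistant can now produce an equivalent on demand.
async function blobToBase64(blob: Blob): Promise<string> {
  const bytes = await blob.arrayBuffer(); // read the Blob's raw bytes
  return Buffer.from(bytes).toString("base64"); // Node-style base64 encoding
}
```

Written once, pasted in, and never maintained again — which is precisely the dynamic Lawson describes.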

Armin Ronacher has offered a provocative alternative: just build it yourself. He suggests that if pulling in a dependency means dealing with constant churn from AI-generated contributions, the logical response is to retreat toward fewer dependencies and more self-reliance. Use AI to help, but keep the code inside your own walls. This leads to a strange irony: AI may reduce demand for small libraries while simultaneously increasing the volume of low-quality contributions to the libraries that remain.

This structural shift is pushing open source toward a state of bifurcation. On one side are massive enterprise-backed projects like Linux or Kubernetes — the cathedrals. These have the resources to build sophisticated AI-filtering tools, dedicated maintainer teams, and the organizational weight to ignore the noise. On the other side are small projects run by individuals or tiny core teams who simply stop accepting external contributions. The barrier to entry for contributors is being raised, not lowered.

The original vision of open source, popularized by Eric Raymond in The Cathedral and the Bazaar, celebrated radical transparency and the ability of anyone to contribute. That vision relied on the assumption that contributions were human acts of care and understanding. AI has automated the act of contribution without automating the act of care. The result is a flood of code that is cheap to produce but expensive to review.

Historical context helps frame this moment. Open source has always been an uneven playing field. The vast majority of contributions have always come from a tiny minority of developers. But that minority was willing to invest time because they understood the codebase and cared about its direction. AI agents have no such investment. They do not learn, they do not remember, and they have no stake in the community. They are mercenaries that generate noise at near-zero cost.

The rise of large language models began quietly but accelerated quickly. Early adopters used AI to write code snippets for personal projects, benefiting from the same asymmetry that now plagues open source. But as AI agents became more capable — able to research codebases, execute commands, and submit pull requests autonomously — the problem scaled from a personal nuisance to a platform-level crisis. SemiAnalysis recently noted that we have moved beyond simple chat interfaces into agentic tools that live in the terminal, like Claude Code, which can independently navigate a repository and push changes.

The tragedy is that AI was supposed to make open source more accessible. In many ways, it has. But in lowering the barrier to contribution, it has also lowered the value of each contribution. When everyone can submit code, nobody’s submission is special. The only remaining scarce resource is the human judgment required to say no.

This scarcity is reshaping how projects operate. Some projects are experimenting with automated validation pipelines that reject any PR not accompanied by a human-written explanation of intent. Others are moving to invitation-only contribution models. Still others are simply closing their issue trackers and relying on a handful of trusted committers. These responses mirror what has already happened in large enterprise open source projects, where commit access is tightly controlled and external contributions are rare.

The future of open source, as the original article suggests, will be smaller, quieter, and much more exclusive. The era of the drive-by contributor is ending. The era of the verified human is beginning. For those who see open source as a collaborative utopia, this may seem like a loss. But for maintainers who have been buried under slop, it may be the only way to survive.
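The human-written explanation-of-intent requirement mentioned above can be approximated with a very small check. A minimal sketch, assuming a CI step receives the pull request description and that the project asks for an “Intent” section — both the heading name and the word threshold are illustrative choices, not any project’s actual policy:

```typescript
// Reject a PR description that lacks an "Intent" section containing at
// least a few words of free-form explanation. A crude filter, but it
// forces a human to state why the change exists before review begins.
function hasHumanIntent(prBody: string): boolean {
  const match = prBody.match(/##\s*Intent\s*\n([\s\S]*?)(?=\n##|$)/i);
  if (!match) return false;
  // A bare heading does not count; require a minimum of ten words.
  return match[1].trim().split(/\s+/).length >= 10;
}
```

A check like this does not judge code quality; it only reintroduces a small, deliberately human cost that an autonomous agent cannot trivially supply in good faith.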

In practice, this means that the projects that thrive will be those that demand a high level of human effort, human context, and human relationship. The bazaar was a fun idea while it lasted, but it cannot survive the arrival of the robots. The most successful open source projects will be the ones that are the hardest to contribute to. They will reject the slop loops and the agent psychosis in favor of slow, deliberate, and deeply personal development.

Some observers argue that AI itself can solve the problem it has created. Future AI tools may be able to review code with the same depth as humans, detecting subtle bugs and evaluating trade-offs. But that would require a different kind of AI — one that understands the historical context of a project, the preferences of its maintainers, and the unwritten rules of its community. Such an AI does not yet exist, and even if it did, it would raise profound questions about whose judgment it encodes.

In the meantime, the open source ecosystem is adapting in real time. Maintainers are becoming more aggressive in closing bad PRs. Projects are adopting CODEOWNERS files that limit who can approve changes. Some are even requiring contributors to have a history of substantive discussions before submitting code. The friction that AI removed is being reintroduced artificially.
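A CODEOWNERS file is the lightest-weight of the controls mentioned above: GitHub reads it from the repository root, .github/, or docs/ and, when branch protection is enabled, requires a matching owner’s review before merge. A minimal illustration, with hypothetical team and user names:

```text
# Hypothetical CODEOWNERS file; later matching rules take precedence.
# Default: the core team reviews everything.
*             @core-maintainers
# Documentation changes go to the docs team.
/docs/        @docs-team
# Only the two people who understand the parser can approve changes to it.
/src/parser/  @alice @bob
```

The effect is exactly the reintroduced friction the paragraph above describes: anyone can still open a PR, but a named human must approve it.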

The original article ends with a point that needs emphasis: we don’t need more code; we need more care. Care for the humans who shepherd the communities and create code that will endure beyond a simple prompt. That care is what open source has always been about, even if it was sometimes hidden behind the myth of mass participation. AI has stripped away the myth, revealing the core truth: open source is about people, not programs.


Source: InfoWorld News

