People & Community
Society, Ethics & Sustainability

AI Can Contribute. It Can’t Lead.

Session Abstract

Today, AI writes code, reviews PRs, answers questions. Some communities ban it, others label it. Most will accept it eventually. But AI won’t show up to community calls for two years. It won’t mentor your next maintainer. We’re losing maintainers faster than we’re replacing them. Stop fighting it. Start investing in what it can’t replace: people.

Session Description

AI is already doing real work in open source. Answering questions, reviewing PRs, writing patches. Some communities have banned it. Others slap a label on it and move on. Most are going to end up accepting it, because policing AI use is exhausting and the tooling is genuinely useful.

Here’s what bothers me. Everyone is arguing about whether to allow AI contributions, and nobody is talking about what we lose when we stop needing humans to do the work. AI can write code. Fine. But it can’t show up to a community call every week for two years. It can’t help someone push their first PR or RFC. It can’t sit with a burned-out maintainer and convince them to stay. Leadership isn’t a pull request. It’s a relationship.

We already have a leadership problem. Projects are losing maintainers faster than they're growing new ones. AI makes it worse by paving over the entry-level work we once used to get people involved.

And the policy landscape is all over the place. Apache requires disclosure. OpenTelemetry treats AI as a tool, not a contributor. The Linux Kernel won’t accept patches without a human standing behind them. Python is still figuring it out. What’s interesting isn’t the policies themselves. It’s what they reveal about how each community defines contribution, accountability, and who gets to belong.

There's also an equity angle nobody wants to touch. AI tools lower real barriers for non-native English speakers, newcomers, and first-time contributors. That's genuinely good. But if communities respond by raising the trust bar in ways that only benefit established insiders, we get a two-tier system: AI-assisted contributors who can never advance, and a shrinking group of people who decide who gets in.

My argument is simple. Stop spending energy fighting AI. Start spending it on what AI will never do. Mentoring people. Building trust. Growing the next generation of leaders. That’s what’s actually at risk.

Short Talk