AI as Civic Infrastructure

Why AI governance is an infrastructure problem, not a technology problem

Algorithms help decide who gets a mortgage, which patients receive follow-up care, and how police departments deploy officers. Most people affected by those decisions have no idea AI is involved. We have been thinking about this as a technology problem. It is an infrastructure problem, and the difference changes everything about how you would govern it.

We've been here before. Every major technological transition that reshaped public life - electricity, telecommunications, public health, transportation - eventually forced a recognition: when a technology becomes foundational to democratic participation, markets alone can't govern it. We didn't let private companies decide which cities the interstate system would connect. We didn't let profit motives alone determine who received vaccines. In each case, we built public institutions with the authority and capacity to act on behalf of everyone. Markets didn't fail entirely. The stakes just required a different kind of accountability.

AI has reached that threshold, and yet the dominant frame for governing it remains regulatory - rules imposed on companies after the fact, designed primarily to limit harm. Treat AI as infrastructure and a different set of questions emerges. Who has authority over how the system operates? Who does it serve by default, and who bears the cost when it doesn't work? What obligations attach to building and maintaining it? We ask these questions about water systems, electrical grids, and public health. The intellectual discipline for governing shared systems at scale already exists. It just hasn't been applied to AI yet.

The technology industry's economic model explains part of that gap. AI remaining a product - built by private companies, governed by private choices, distributed according to ability to pay - is a feature for the people who build it, not a bug. And that model has produced extraordinary technical progress. But it has also concentrated the authority to shape public systems in a very small number of institutions, almost none of which answer to the people those systems affect. Infrastructure carries obligations that products don't. You can choose not to buy a product. You can't opt out of the system that determines your credit score or whether your health insurance claim gets approved.

At the Patrick J. McGovern Foundation, we've committed over $500 million to organizations building AI as public infrastructure, and the work looks different from what most people imagine when they hear "AI governance." Newsrooms using AI to surface patterns in government records that would take human reporters years to find. Climate networks building predictive models that help communities prepare for drought before displacement begins. Health systems in the Global South designing diagnostic tools with patients and providers rather than in labs a continent away. In India, we've invested in AI-powered agricultural advisories reaching smallholder farmers and maternal health platforms extending care to women in communities where clinics are scarce. In the Caribbean, we supported the creation of a regional AI center in Trinidad and Tobago. The thread running through all of this is the same: the people affected by AI systems participate in their design and governance.

The policy work reinforces what the field work shows. When I served on the UN Secretary-General's High-Level Advisory Body on AI, the question of public authority over AI systems sat at the center of every recommendation we developed. In my current role as the U.S. government's nominated expert to the Global Partnership on AI, the same tension surfaces in every working session - the gap between how rapidly AI is being deployed and how slowly public institutions are building the capacity to govern it. That structural mismatch between pace and readiness is where the real risk lives. Not in any single misuse of AI, but in the accumulating distance between the technology and the democratic institutions that should have authority over it.

India is a case I follow closely because it tests the framework in real conditions. The country has built digital public infrastructure serving over a billion people, and that experience - designing systems at population scale with interoperability and public access as foundational principles - is precisely the institutional muscle that governing AI well requires. At the India AI Impact Summit in 2026, that vision took concrete form: shared compute infrastructure, open data frameworks, governance models that treat AI access as a public entitlement rather than a market outcome.

The countries that invest in governance alongside capacity, rather than treating governance as something you add later, will shape the terms of AI development for everyone else. And the window for building that architecture - while the underlying systems are still taking shape - is not unlimited.