Institutional Imagination

We build technologies faster than we build the institutions to govern them. That gap is not inevitable. It is a choice.

Every generation faces a version of the same challenge: a new technology arrives faster than the institutions designed to govern public life can adapt. The question is never whether to build institutions. It's whether we build them with the same creativity and urgency we bring to the technologies themselves.

We are building models faster than we are building institutions. That observation isn't a critique of the pace of AI development. It's a description of where the danger actually lies.

Institutions are slow by design - deliberate, accountable, resistant to capture by any single interest. They embody accumulated wisdom about how to make decisions that affect many people over long time horizons. That conservatism is a feature. But when technology moves at the pace AI is moving, the gap between capability and governance becomes a problem that compounds. Not because AI is inherently dangerous, but because ungoverned power - any ungoverned power - tends to concentrate, to exclude, to serve those who hold it rather than those who need it.

We know what that gap looks like because we've lived through it before. Two decades ago, social media promised connection and knowledge. We trusted that markets would deliver fairness and that governance could wait. By the time the consequences were clear, the damage was embedded. Connection had become commerce. Access had become advertising. The technology arrived before the rules, and the distance between them determined who benefited and who bore the costs. AI gives us another chance to get the sequencing right. But only if we take the institutional side of the equation as seriously as the technical one.

That's what I mean by institutional imagination: the capacity to envision and build governance structures that don't yet exist, at the speed and scale the moment requires. The term is deliberate. Designing institutions for a technology this consequential demands the same creativity we celebrate in model development - the same willingness to experiment, iterate, and think at scale. The difference is that institutional design has to optimize for legitimacy and accountability, not just performance. And those are harder problems.

History offers a partial guide. Nuclear governance emerged from catastrophe - the IAEA and the Non-Proliferation Treaty took shape only after Hiroshima, decades of testing, and a near-miss in Cuba. Financial regulation has followed a similar pattern: reforms arriving after crises that could have been prevented. The pattern repeats because building institutions is expensive, slow, and politically unrewarding until something breaks. AI offers a chance to break that cycle, precisely because the harms are already visible - in automated credit denials, in biased hiring systems, in predictive policing algorithms deployed without oversight - but the architecture is still being written.

Some of that institutional construction is already underway. In 2025, 193 countries agreed through a UN resolution to create two new bodies: an independent scientific panel to assess AI risks and opportunities, and a global dialogue where governments, companies, and civil society collaborate on governance. I served on the UN Secretary-General's High-Level Advisory Body on AI, where we developed those recommendations. From inside that process, I saw how often ambition gets lost in the machinery of politics. But I also saw something that surprised me: when the room includes voices from the Global South, from civil society, from communities that live with AI's consequences rather than profit from its deployment, the institutional designs that emerge are more durable and more legitimate.

At the Patrick J. McGovern Foundation, institutional capacity is what we fund. Not AI tools in isolation, but the organizations and systems that connect those tools to public accountability. We've supported the creation of AI governance centers in regions that lack them - from the Caribbean Artificial Intelligence Innovation Centre in Trinidad and Tobago to partnerships with Indian institutions building the technical expertise that governance demands. Fund.AI, our flagship convening, gathered more than 150 foundations and unlocked tens of millions in new investment directed at nonprofits building institutional infrastructure for AI. The work is unglamorous by design. Nobody holds a press conference to announce better coordination between regulators. But the difference between a governance framework that looks good on paper and one that actually protects people is almost always in the connective tissue - the feedback loops, the incident databases, the mechanisms that allow regulators in health to learn from failures in finance.

Philanthropy occupies a specific position in this work. Foundations can fund the institutional construction that governments are too slow to prioritize and that markets have no incentive to support. We can take the 30-year view, invest in capacity before it's needed, and support the civil society organizations that hold both government and industry accountable. By 2027, global private investment in AI startups is projected to reach $900 billion, with more than 80 percent of that capital originating from three countries. That kind of concentration doesn't self-correct. It requires deliberate countervailing investment in institutions that can represent the public interest at a scale that matches the technology itself.

The countries that get this right will be the ones that treat governance as load-bearing architecture rather than something you add after the building is up. We have plenty of AI governance in the world. What we don't have enough of is AI democracy - systems where the people affected by AI have ongoing authority to shape how it's used, not just a seat at a consultation table. Building those systems requires institutional imagination. And the window for it is still open.