The Future of Institutions

Democratic governance was designed for a world without AI. The question is whether we will build new institutions in time.

For about thirty years, we have handed over decisions about technology - and about how that technology shapes our society, our democratic participation, our conception of justice - to a very small set of people and institutions. Whether we did so intentionally or not, the world that has been shaped for us wasn't shaped for public purpose. It was shaped for power and profit.

I think about this as a problem of institutional lag. The institutions we rely on to govern public life - legislatures, regulatory agencies, international bodies, courts - were designed for a world without AI. They operate on timescales measured in years and decades. AI systems are being deployed on timescales measured in months. That mismatch isn't a temporary inconvenience. It's structural, and it will define who benefits from AI and who bears the costs unless we address it directly.

The challenge runs deeper than updating existing institutions. Many of the governance questions AI raises don't have an institutional home at all. When an algorithm denies a loan application across state lines using a model trained on data from three countries, which regulator has jurisdiction? When a public health system deploys a diagnostic tool developed by a private company using patient data from a different continent, who is accountable for errors? When an AI system used in policing produces biased outcomes, what mechanism exists for the affected community to challenge the system's design rather than an individual decision? These aren't hypotheticals. They're happening now, and the institutional architecture to address them is either absent or badly outdated.

Computation itself has become a form of sovereignty. In a piece for Frontiers Policy Labs, I made the case that a country's ability to participate in the AI era depends on whether it can build, evaluate, adapt, and govern AI systems on its own terms. That requires computational infrastructure, trained personnel, and governance capacity that many nations don't yet have. Countries that lack the ability to independently assess the AI systems being deployed in their health, education, and public administration systems aren't governing those systems - they're accepting them on someone else's terms. Regional compute initiatives like Europe's EuroHPC and India's AIRAWAT program are early attempts to address this, but the gap between what exists and what's needed remains enormous.

The international architecture is especially fragile. I served on the UN Secretary-General's High-Level Advisory Body on AI, a process that culminated in 193 countries agreeing to create new institutions for scientific review and global dialogue on AI governance. That was a remarkable act of collective will in a period when multilateral cooperation is under strain everywhere. But the institutions that emerge from that process will only work if they include the voices that are typically excluded from technology governance - civil society, communities from the Global South, the people who live with AI's consequences rather than profit from its development. From inside that process, I saw that institutional designs produced in diverse rooms are more durable and more legitimate than those produced in rooms of technical experts alone.

The question I keep returning to is whether democratic societies will build the institutional capacity to govern AI before the architecture of AI hardens around a set of choices made by a handful of private actors. The window for that institutional construction is open but narrowing. Every month that AI systems operate in domains like credit, healthcare, and criminal justice without adequate public oversight, the costs of changing course later increase. The pattern from prior technological transitions is consistent: the gap between when a technology arrives and when governance catches up determines who benefits and who bears the cost. Social media taught us what happens when that gap persists for a decade. We don't have to repeat the lesson.

At the McGovern Foundation, we invest in closing that gap. We fund organizations building the governance infrastructure that doesn't yet exist at the scale AI demands - AI governance centers in the Caribbean, technical capacity within Indian institutions, communities of practice connecting regulators, researchers, and civil society across borders. We support the ACLU's work on AI-driven discrimination. We fund the Paris Peace Forum's work on global AI governance norms. The through-line across all of it is the same conviction: the institutions that govern AI need to be as sophisticated, as well-resourced, and as intentional as the technology itself.

Many of our institutions, practices, and models of government were born in a world without AI. Updating them isn't enough. We need to build new ones - designed for the speed, scale, and complexity of a technology that is already reshaping public life. That requires the same creativity, ambition, and investment that we currently reserve for building models. The question isn't whether our institutions will change. It's whether they'll change because we designed them to, or because we waited until they broke.