Britain’s Lost Advantage: Relearning the Art of Institution Building in the Age of AI
Britain once excelled at building institutions that turned invention into transformation. From the civil service and the rule of law to financial markets and higher education, it created systems that made innovation scalable – and that others emulated. The Industrial Revolution was not driven by the steam engine alone, but by the legal, financial and professional frameworks that enabled new technologies to reshape production, industry and society.
That imagination has faded. The country still produces world-class research and entrepreneurial talent yet struggles to turn them into sustained productivity and broad-based growth. The weakness is structural rather than technological. Decades of policy fragmentation, short-termism and departmental churn have hollowed out the very capabilities that once gave the nation influence, coherence and direction.
Nowhere is this clearer than in artificial intelligence. The UK cannot – and need not – compete with the United States or China in building foundation models: the steam engines of the data economy. Its comparative advantage lies in applying and governing them. This means integrating AI responsibly into finance, health, education and professional services while building the institutional foundations for safe and scalable adoption. It demands coordination, foresight and trust, the very capacities that have eroded. The ambition to be “the best place to develop and use AI technologies” is sound in spirit, but without systems that learn, scale and diffuse what works, experimentation becomes theatre rather than strategy.
Government recognises AI’s potential but remains caught in a cycle of initiatives without continuity. A taskforce here; a summit there; a new pro-innovation framework that dissolves when ministers or budgets change. The 2023 White Paper on AI Regulation articulated an admirable vision to “drive growth and prosperity”. The 2025 AI Opportunities Action Plan renews that ambition, promising to “lead in both building and using AI”. Yet ambition without institutional follow-through is not strategy.
This pattern reflects a deeper malaise. Successive governments have treated technological change as a series of interventions rather than as a long game of capability building. Industrial strategy has been relaunched, renamed and repealed so often that the term itself has become politically radioactive. The result is a patchwork of pilots optimised for visibility rather than learning. Several other mid-sized economies, including South Korea, Canada, Singapore and the UAE, show similar tendencies. Their challenge, like Britain’s, is not aspiration but institutional realism: the discipline to sustain coordinated action over time.
AI exposes this gap. It is not simply another wave of automation but a test of national capability. Its value depends on how societies organise data, align incentives and build legitimacy around its use. These are not engineering problems but institutional ones.
Recent failures have made this dependence visible. The CrowdStrike outage of 2024, triggered by a faulty security software update, grounded flights and disrupted hospitals worldwide. What it revealed was not merely a coding error but a deeper vulnerability: the extent to which critical systems now depend on a handful of foreign cloud and cybersecurity providers. For Britain, this dependence risks replicating a “branch plant model” in the digital economy, a system in which core infrastructure and value capture are controlled elsewhere. It leaves the country exposed to shocks that it cannot mitigate. Resilience is no longer just a technical issue but a question of sovereignty and governance. Such vulnerabilities will only deepen in the AI era.
This is where the social sciences matter. They provide the frameworks and evidence to understand how societies coordinate, adapt and sustain trust in the face of technological disruption. Economists analyse incentives and spillovers; sociologists examine adoption, resistance and legitimacy; management scholars study strategy, organisation, governance and leadership during transitions. Together, these (and other) disciplines explain how innovation becomes capability, and how systems learn and evolve rather than merely react.
Rebuilding that capability does not mean resurrecting bureaucracy. It means restoring the discipline of institutional design by creating systems that learn, coordinate and adapt while remaining coherent. The UK Vaccine Taskforce showed how cross-sector collaboration can deliver under pressure. The Financial Conduct Authority’s Regulatory Sandbox demonstrated how controlled experimentation can inform wider reform. The task now is to make such coordination systematic rather than exceptional, and to extend it beyond policy into the governance of sectors, data infrastructures and firms.
Three institutional mechanisms could start that process.
1. Turn experimentation into national learning.
AI experimentation is valuable only if it generates cumulative learning. Every major pilot across departments, regulators or sectors should conduct and publish open evaluations of outcomes, lessons and diffusion. Renewal of funding should depend on evidence of institutional learning, not political visibility. Social scientists should play a leading role in ensuring these evaluations are rigorous, systematic and balanced.
2. Secure data and compute as strategic infrastructure.
Britain’s dependence on foreign cloud providers limits both resilience and innovation. Managing this dependency in the AI era is strategically vital. The country needs a coordinated effort to develop and govern shared data and compute infrastructures as public goods, ensuring transparent access, interoperability and long-term stewardship. This is not about digital autarky but about securing the foundations of an independent capability base.
3. Create permanent coordination platforms.
Effective governance of AI requires ongoing dialogue between regulators, industries and research communities. Permanent, empowered coordination bodies with representation from business, academia and the public sector can bridge silos, align standards and turn regulatory experimentation into system-level reform.
These are not grandiose reforms but pragmatic mechanisms to rebuild capability. They demand modest budgets but strong institutions, and above all, continuity.
A credible AI strategy must also acknowledge Britain’s distinct geopolitical position. The country remains deeply linked to the United States while maintaining economic and historical ties with China, especially through Hong Kong. Mirroring either superpower would be unwise. Britain’s opportunity lies in pragmatic engagement. It must anchor within US technological ecosystems while developing structured partnerships with Asia’s innovation networks. In a polarising world, the UK can act as a bridge only if its institutions are competent and trusted.
Some argue that scale matters more than design. In truth, scale created power, but design sustained it. Britain’s influence rested on vast scale – of trade, empire, finance and military reach – but it endured because of the institutions that organised and projected that scale effectively. As scale declined, so did its reach. Rebuilding institutional capability is no substitute for scale, but it remains the most credible way to turn ambition into lasting advantage.
Rebuilding institutional capacity is slow, unglamorous work. It offers no moonshots or quick wins. Yet without it, AI will amplify existing weaknesses – fragmented governance, short investment horizons and political volatility. With it, Britain could once again ensure that technological progress strengthens society, instead of subordinating it.
The Industrial Revolution turned steam into lasting prosperity because Britain built the systems around it. The AI Revolution will do the same only if we rebuild those systems – modern, inclusive and grounded in evidence. That is not nostalgia; it is strategy, and it requires leadership with the kind of long-term vision and quiet resolve Britain once took for granted.
About the author
Professor Feng Li is Associate Dean for Research & Innovation and Chair of Information Management at Bayes Business School. His research investigates how digital technologies can be used to facilitate strategic innovation and organisational transformation across different sectors and domains. He advises senior business leaders and policy makers on how to manage the transition to new technologies, new business models, and new organisational designs in the digital economy.