Here John Penrose makes the case for stronger evaluation of public policy and for drawing on the evidence of what works when devising and implementing policies to increase productivity and economic growth. Of course, the kind of rigorous evaluation John advocates, and the evidence of what has or is likely to work in growth-related policy, will naturally come from the social sciences.

Knowing ‘What Works’ Will Help Good Growth
The hugely successful 19th-century American business leader John Wanamaker famously confessed that he wasted half the money he spent on advertising, but didn’t know which half. Today’s Government Ministers and officials are in a similar position, spending billions on public programmes which they know are important and which could almost certainly be improved, but with remarkably little evidence about how to do it safely, without throwing the baby out with the bathwater.
If we could improve the outcomes of those programmes, the effects on our economy would be electric, particularly for left-behind or less-well-off individuals and communities. Even marginal improvements in the outcomes of back-to-work programmes at the Department for Work and Pensions (DWP), or prisoner rehabilitation schemes at the Ministry of Justice (MoJ), would help more people to lead proudly independent lives, provide employers with a deeper pool of talent to recruit from, and give taxpayers smaller benefits bills to fund as well. Plus it would drive much-needed improvements in public sector productivity, which has been stagnating for years.
The positive effects wouldn’t stop there. Understanding which public policy levers work well or badly to revive local economies in hard-up towns and cities, or to create successful economic clusters in the industries of tomorrow, would turn ‘Levelling Up’ from a zero-sum competition for scarce Whitehall infrastructure funds into a much bigger, broader and more effective project to redress Britain’s over-reliance on London and the southeast.
The good news is that today’s leaders can call on much better techniques than the 19th-century tools which John Wanamaker could have used to spot which parts of each programme aren’t pulling their weight. The medical profession has industrialised the use of Randomised Controlled Trials (RCTs) to figure out which treatments work or don’t, and then used the evidence to create a continuous performance-improving ratchet which has upgraded life expectancies and economic outcomes almost everywhere. Online advertising and website design use real-time A/B or ‘split’ testing to measure how tiny changes in wording, colour or screen layout nudge you or me more or less effectively to buy something, or to stay engaged with their sites.
This evidence-based, outcome-evaluation tide has started to lap around parts of public policymaking too. Probably the leading UK examples are the National Institute for Health and Care Excellence (NICE), which assesses whether new drugs are good enough to be used in the NHS, and the Education Endowment Foundation (EEF), which evaluates which teaching techniques work well or badly and should therefore be copied widely or abandoned completely. Plus there are seven other ‘what works’ centres, covering everything from crime reduction to local economic growth and strengthening families. And international interest is growing, with academics at Canada’s McMaster University Health Forum and Australia’s Monash University advising their Governments about it too.
But that’s as far as it goes. There are still huge gaps where the results achieved (or not) by enormous Government programmes aren’t independently assessed at all: the outcomes of all those billions which the DWP spends on back-to-work programmes aren’t covered, for example, and nor are the MoJ’s prisoner rehab schemes.
Yawning omissions aren’t the only problem, either. Most parts of Government will claim they already assess how taxpayers’ money is being spent, but they usually mean something which falls well short of a ‘gold-standard’ RCT. These less-than-perfect assessments suffer from a variety of flaws:
- Some aren’t independent, so Ministers and officials can mark their own homework.
- Others don’t make their results public, so politically inconvenient or embarrassing ones can be delayed or ignored.
- Many don’t assess real outcomes and instead report on whether a process has been followed regardless of whether it worked; or they track a different measure (Key Performance Indicators are a favourite) that’s either more subjective, or less complete, or more easily ‘gameable’.
- Still others don’t state what outcome a programme or organisation was trying to achieve in advance (so successes or failures can’t be measured at all); or they have multiple goals and only assess the ones that worked while ignoring the failures; or they change the intended outcome afterwards if the original one wasn’t achieved.
What’s the answer? A single, simple but fundamental change to Government procurement rules, so that every public grant, subsidy and procurement contract must state in advance the outcomes (not outputs) it is intended to achieve, followed by a prompt, independent and public post-completion evaluation of whether it has delivered them, including a one-word recommendation on whether it should be renewed or repeated in future. Any public body intending to ignore a negative recommendation would have to publish its reasons before signing similar deals.
The evaluations would have to satisfy the Government’s existing Evaluation Standards (set out in the ‘Magenta Book’) to maintain quality, and any which aren’t on time or up to scratch would be presumed to have failed. There should be a proportionate ‘de minimis’ exemption for small-scale contracts, and evaluations of sensitive national security contracts would be scrutinised in private by Parliament’s Intelligence and Security Committee instead.
It will be politically easiest for a newly elected Government to introduce this change at the start of its mandate, to inform its first spending review and to justify any big changes it has to make.
The effects of this single change should be profound, making it objectively and publicly clearer which Government programmes really make a positive difference, and which are well-intentioned but expensive failures. That clarity will harness sustained, long-term public and democratic pressure on each public body to phase out its least-effective contracts, grants and subsidies and replace them with ones that work better. And it will cut regulatory burdens too, by replacing the ever more intrusive, expensive and baroque output-reporting processes which increasingly festoon public-sector programmes with something much cheaper, simpler and more powerful.
Most fundamentally, this change will create a performance ratchet which improves the quality and value for money of public policy delivery, and public sector productivity, year after year. Our economy will grow faster, with left-behind towns and cities outside London and the southeast starting to catch up for the first time in decades. The life chances of disadvantaged or vulnerable people in those communities will improve as the public services which are supposed to serve them work better. If he were alive today, John Wanamaker would be rubbing his hands with glee.
About the author
John Penrose is Chair of the Conservative Policy Forum and the Founder &amp; Director of the Centre for Small-State Conservatives. John was MP for Weston-super-Mare between 2005 and 2024, and served as a minister in the Department for Culture, Media and Sport, as a Lord Commissioner of Her Majesty’s Treasury, and as Minister of State for Northern Ireland.
Image credit: Paul Silvan on Unsplash