Can we count on our economic statistics?
Economic statistics are the viewfinder through which we see our economy. When they are timely and accurate, they give a clear, comprehensive picture of how our economic world is evolving. But if they give us only a partial or, worse, an inaccurate perspective on the economy, we will be ill-equipped to spot economic problems, and in the dark about whether we’ve solved them.
Although the UK economy, and our experience of it, is changing all the time, one thing has remained constant in the post-war period: GDP is the go-to summary economic statistic, and the yardstick by which all governments are judged. When the economy is growing strongly, the government ‘holding the reins’ is considered to be doing well (and certainly takes the credit); when it tips into recession, it is the government that tends to shoulder the blame.
This is a problem if one is concerned with improving outcomes for the majority of people, as GDP has become an increasingly unsatisfying proxy for fair economic outcomes. As the IPPR Commission on Economic Justice discussed in its interim report, Time for Change, the gains from growth increasingly flow to the top 10 per cent of the income distribution, entrenching inequality and weakening the link between economic growth as we currently track it, and daily life as most people experience it.
To understand why growth no longer delivers broadly shared prosperity, we need to unpack our productivity performance – without productivity growth, sustainable wage growth is impossible. But as Diane Coyle, David Pilling and others have argued very eloquently, our method for calculating productivity is much better suited to counting cars rolling off production lines and other manufacturing outputs than to valuing the services that now largely comprise our economic output. Crucial to measuring the value created by the provider of a service (the value-added, in economic language) is an assessment of the quality of that service. But often this is unobservable, bespoke to the individual receiving the service, or, as is especially the case for data services, improving beyond all recognition from one year to the next.
This matters because if we get our estimate of quality wrong, we get the price wrong, and prices are fundamental to the calculation of economic growth, ‘real’ wages (which measure purchasing power) and productivity. It looks increasingly likely that we habitually under-estimate the quality of goods and services, which means we over-estimate their price, and therefore over-deflate nominal values when we try to track them over time. This means that real GDP growth, real wage growth and productivity growth are all likely to be higher than we currently believe.
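The arithmetic behind this claim can be sketched with deliberately hypothetical numbers (these are illustrative, not official statistics): real values are obtained by dividing nominal values by a price deflator, so a deflator that overstates price rises mechanically understates real growth.

```python
# Illustrative only: all figures are hypothetical, not official statistics.
# Real growth = (nominal value / price deflator), compared across years.

nominal_year1 = 100.0   # nominal output in year 1 (index)
nominal_year2 = 130.0   # nominal output in year 2 (index)

# Suppose quality improvements mean true prices rose only 5 per cent,
# but the official deflator (missing the quality gain) records 15 per cent.
true_deflator = 1.05
official_deflator = 1.15

true_real_growth = nominal_year2 / true_deflator / nominal_year1 - 1
measured_real_growth = nominal_year2 / official_deflator / nominal_year1 - 1

print(f"true real growth:     {true_real_growth:.1%}")      # ~23.8%
print(f"measured real growth: {measured_real_growth:.1%}")  # ~13.0%
# Over-deflating nominal values makes measured real growth lower than
# true real growth -- the direction of bias described in the text.
```

The same mechanics apply to real wages and productivity, since both are deflated nominal quantities.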
It is important to note at this point that our productivity problem is real: the UK has experienced a flattening off of productivity growth since the 2009 recession that cannot be attributed to measurement error. Mis-measurement also cannot account for the UK’s inferior performance relative to our European neighbours, when we all use the same method. But at present, we are unlikely to be getting an accurate reading of where in the economy the productivity issues lie.
To take an example: a recent study undertaken for the ONS to improve estimates of price change in data services found that real prices were likely to have fallen by between 35 and 90 per cent between 2010 and 2015, much more than the price deflator currently in use suggests. This means the telecommunications sector is much more productive than we had thought. Although it doesn’t alter the overall size of the economy (the higher productivity in one sector is netted off against lower productivity in others) it shifts the productivity around in quite a profound way (and solves the riddle of low measured productivity in what by all accounts looks to be a phenomenally productive sector). And this was just a review of producer prices – if the same process is applied to consumer prices, it would reduce inflation for the period, and boost estimates of real wage growth.
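The netting-off point can be made concrete with a toy two-sector economy (the sector names and all figures here are hypothetical, chosen only to show the mechanism): aggregate nominal output is measured directly, so revising one sector’s deflator reallocates real output between sectors rather than changing the total.

```python
# Illustrative only: hypothetical figures, not ONS data.
# Nominal output is observed directly; real output depends on the deflator.
nominal = {"telecoms": 50.0, "other": 950.0}

# Hold aggregate real output fixed, as in the text: revising the telecoms
# deflator shifts measured productivity between sectors, not the total.
total_real = sum(nominal.values())          # 1000.0, unchanged by the revision

new_telecoms_deflator = 0.50                # revised: telecoms prices fell sharply
telecoms_real = nominal["telecoms"] / new_telecoms_deflator   # 100.0 (was 50.0)
other_real = total_real - telecoms_real                       # 900.0 (was 950.0)
implied_other_deflator = nominal["other"] / other_real        # rises above 1.0

print(telecoms_real, other_real, round(implied_other_deflator, 3))
# Telecoms real output doubles; the offset appears as lower measured
# real output (and productivity) elsewhere in the economy.
```

This is the sense in which the revision "shifts the productivity around" without changing the size of the economy.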
Government makes policy decisions (one would hope) on the basis of evidence of where its intervention is needed. If this evidence is flawed, policy decisions will necessarily be flawed too. It’s all too easy to imagine the Government’s new Industrial Strategy committing to spend £X million ‘to boost the productivity of the telecoms sector’ – which would, we now suspect, be a waste of public money.
The quality of our regional productivity estimates is even more questionable. The ONS only compiles regional price data every six years – not frequently enough to be able to use them in the calculation of regional productivity growth. It therefore uses national prices to deflate all regional output. This means productivity is likely to be under-stated in cheaper parts of the country, like the North East, and over-stated in the more expensive places, like London. Again, one can imagine that £X million being spent on a fund to boost regional productivity, which may or may not be needed – and the productivity data giving little indication of how well such expenditure was working.
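A stylised example (with invented price levels, purely for illustration) shows why deflating every region’s output with national prices biases the comparison: regions with below-average prices look less productive than they are, and vice versa.

```python
# Illustrative only: hypothetical figures, not official regional statistics.
# Nominal output per hour, deflated by a national price level of 1.00,
# versus the "true" real figure using each region's own price level.
nominal_output_per_hour = {"North East": 30.0, "London": 45.0}
regional_price_level = {"North East": 0.90, "London": 1.15}  # national = 1.00

results = {}
for region, nominal in nominal_output_per_hour.items():
    measured = nominal / 1.00                         # national deflator
    true_real = nominal / regional_price_level[region]  # regional deflator
    results[region] = (measured, true_real)
    print(region, measured, round(true_real, 1))

# North East: measured 30.0 vs true ~33.3 -- productivity under-stated
# London:     measured 45.0 vs true ~39.1 -- productivity over-stated
```

The gap between measured and true values here is exactly the distortion the infrequent regional price data cannot correct for.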
Again, it’s highly likely that productivity does differ across regions – measurement can’t account for all of the gap between London and the rest. The point is that if the data isn’t accurate, our diagnosis of the problem is likely to be inaccurate too. And that increases the risk that the cure we prescribe isn’t the right one – which is far from ideal in a context where we need to squeeze value from every penny of our public spending.
Debates about the quality of our economic statistics aren’t likely to get top billing on BBC Question Time anytime soon. But they could scarcely be more important if we really want to understand people’s experience of the economy, and have the aim of making it work for everyone.