Monday, October 16, 2017

Diane Coyle — Economic observation

On Friday all the researchers in the new Economic Statistics Centre of Excellence (ESCoE) met at its home in the National Institute to catch up on the range of projects, and it was terrific to hear about the progress and challenges across the entire span of the research programme.
One of the projects is concerned with measuring uncertainty in economic statistics and communicating that uncertainty. The discussion sent me back to Oskar Morgenstern’s 1950 On the Accuracy of Economic Observations (I have the 2nd, 1963, edition). It’s a brilliant book, too little remembered. Morgenstern is somewhat pessimistic about both how meaningful economic statistics can be and whether people will ever get their heads around the inherent uncertainty.
“The indisputable fact that our final gross national product or national income data cannot possibly be free of error raises the question whether the computation of growth rates has any value whatsoever,” he writes, after showing that even small errors in levels data imply big margins of error in growth rates.
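Morgenstern's point about levels and growth rates can be illustrated with a small numerical sketch (the figures here are hypothetical, chosen only to show the arithmetic): if two successive GDP levels each carry even a 1% measurement error, the implied growth rate can be off by its entire magnitude.

```python
# Hypothetical illustration of Morgenstern's point: a small error in
# the *levels* of a series produces a large error in its *growth rate*.

def growth(y0, y1):
    """Period-on-period growth rate between two level observations."""
    return y1 / y0 - 1

true_y0, true_y1 = 100.0, 102.0   # true levels, implying 2% growth
eps = 0.01                        # assume +/-1% measurement error in each level

# Worst-case bounds on the measured growth rate, when the level errors
# push in opposite directions
low = growth(true_y0 * (1 + eps), true_y1 * (1 - eps))
high = growth(true_y0 * (1 - eps), true_y1 * (1 + eps))

print(f"true growth:    {growth(true_y0, true_y1):.2%}")
print(f"measured range: {low:.2%} to {high:.2%}")  # roughly 0% to 4%
```

A true growth rate of 2% could thus be measured as anything from essentially zero to about 4%: the error in the growth rate is as large as the growth rate itself, even though each level was measured to within 1%.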
This is a huge problem scientifically, and one that is acute in economics, since data are the evidence against which hypotheses, generated as theorems from the axioms of a theory, are tested.

There are two foundational issues in economics. The first is data collection and processing in specific cases. The second is the worth of historical data.

Even using contemporary methods there is a lot of uncertainty and fuzziness. But when it comes to historical data and its comparative use, the question arises whether such "data" have any real value at all.

A big problem arises from the rationalistic bent of conventional economics, which places great emphasis on formal modeling even though formal consistency by itself has no bearing on semantic truth. Scientific models have to be tethered to reality through definitions and then tested against evidence. This requires accurate data.

Model output can be no more accurate than the precision and reliability of the underlying measurement, which in turn requires replicability of empirical testing.

Big Data might help overcome this, and in real time, but Big Data doesn't generate theory. Without theory there is no predictive capacity based on understanding in terms of causal explanation, which is generally considered a necessary condition for scientific explanation.

The Enlightened Economist
Diane Coyle is a freelance economist and a former advisor to the UK Treasury. She is a member of the UK Competition Commission and is acting Chairman of the BBC Trust, the governing body of the British Broadcasting Corporation.
