More on the folly of drawing sweeping conclusions from DSGE modeling, this time based on the nature of statistical reasoning.
If one follows über-statistician Nate Silver's blog, FiveThirtyEight, at The New York Times, one has a grasp of how difficult statistical modeling of social behavior is. Politics is one of the most researched and polled fields in existence, with huge amounts spent on data gathering and processing. Nate's record has been good so far, but he explains the difficulty of coming to any tight conclusions owing to a variety of factors, and he always qualifies his statements.
No one trying to figure out political outcomes would dream of assuming a representative agent or perfect knowledge to simplify the modeling problem. Where do economists get a pass to do this without being called out for it, especially when their predictions turn out to be wildly off the mark? After all, this flawed reasoning goes into the grinder of policy making.
Lars sums it up regarding DSGE:
The root of our problem ultimately goes back to how we look upon the data we are handling. In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium (DSGE), New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and on which we therefore have access to heaps of historical time-series. If we do not assume that we know the “data-generating process” – if we do not have the “true” model – the whole edifice collapses.
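The point can be illustrated with a toy simulation (a minimal sketch in Python, not anything from Syll's post; the numbers and the structural break are invented for illustration): if the assumed "data-generating process" misses something as simple as an unannounced regime change, the estimated model confidently implies a long-run mean that was never true in either regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" data-generating process: an AR(1) whose mean shifts halfway
# through the sample -- a structural break the modeller does not know about.
n = 200
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    mean = 0.0 if t < n // 2 else 3.0          # unannounced regime change
    y[t] = mean + 0.7 * (y[t - 1] - mean) + rng.normal(scale=0.5)

# The modeller assumes a single, stable AR(1) process generated the whole
# series and estimates it by least squares on the full sample.
X = np.column_stack([np.ones(n - 1), y[:-1]])
(const, phi), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
implied_mean = const / (1 - phi)

print(f"estimated AR coefficient: {phi:.2f}")
print(f"implied long-run mean:    {implied_mean:.2f}")
print("true long-run mean:       0.0 before the break, 3.0 after")
```

The estimation step runs fine and returns tidy numbers; nothing in the output warns that the assumed process is not the one that generated the data. That is the sense in which the whole edifice rests on knowing the "true" model.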
Economic actors behave no more predictably than voters; they are the same people performing a similar function, namely choosing. The actors are just as diverse and volatile. A big difference is that voters face a very limited set of choices and have well-recognized motives, and yet prediction is still dicey. Economic actors, on the other hand, have a wide range of choices and motives, and yet their behavior is supposed to be predictable with relative precision?
The unknown knowns of modern macroeconomics
by Lars P. Syll