Much of my writing about macroeconomic theory is of the hand-wringing variety: it cannot be "scientific" because (useful) forecasting is essentially impossible to do. This is a negative (non-constructive) argument; but that does not mean that we cannot be rigorous.
As a comment on my previous article ("Science and Economics"), André asked, "If we are unable to test macroeconomic theory, how will we know that it works?" If we use a wide definition of "test," we are able to do so. However, this notion of "testing" would probably raise eyebrows among physical scientists, who perhaps assume that "forecasting" and "testing" would be the same thing in this context. It is possible to look at macro in a rigorous way, but we need to drop the embedded assumption that "rigorous" means the same thing as "acting like physicists."
My arguments here should not actually be surprising to economists, as they are effectively a hidden background assumption in their worldview. Instead, this viewpoint is aimed at non-economists who want to treat macroeconomics like other fields of knowledge.
Brian ventures into the philosophy of macroeconomics, a subject that most economists avoid. Avoiding it means presupposing the foundations of the discipline, which in turn means imposing a view based on hidden assumptions. Foundational studies attempt to clarify these matters.
I think a good approach to scientific rigor comes from Richard Feynman's observation that a key purpose of science as rigorous thinking is to keep from fooling ourselves. See his "Cargo Cult Science."
Generally speaking, these days "scientific" means using a formal model to represent the actual change of events over time in accordance with some invariance that allows for prediction and therefore for testing hypotheses. The rigor comes both from the rigorous thinking that formalization provides and from the ability to compare a model with reality in order to determine its degree of representation. Since models are simplifications, few models will be exact replicas of events. They don't need to be in order to be useful for the purpose for which they were constructed.
So the first step is to generate a model based on assumptions. At the macro level such models are usually understood to be explanatory models in the sense of modeling some mechanism or transmission process that captures some useful level of invariance in changing events.
This implies that the properties of models must reflect the actual pattern of changing events.
This enables testing the model against what is modeled through observational checking.
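To make the idea of observational checking concrete, here is a minimal sketch in Python (all data and parameters are hypothetical, invented for illustration, not anything from the post): a simple linear model is fitted to past observations and then compared against held-out later observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-generating process: a stable linear relation plus noise.
x = np.arange(40, dtype=float)
y = 2.0 * x + 5.0 + rng.normal(scale=3.0, size=x.size)

# Fit a candidate model on the "past" (the first 30 observations).
slope, intercept = np.polyfit(x[:30], y[:30], deg=1)

# Observational check: compare predictions against the held-out "future" data.
predictions = slope * x[30:] + intercept
rmse = np.sqrt(np.mean((predictions - y[30:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.2f}")  # modest, because the process is stable
```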
A fundamental assumption is that the future resembles the past in the area under consideration; otherwise, no invariance can be identified. Analysis of data from observing the past is indicative but not definitive, and all models are contingent on future observations. Science is therefore tentative.
The greater the degree of ergodicity, the more representational models can be. As uncertainty increases, models necessarily become less rigorous, in the sense that correct reasoning from the assumptions no longer guarantees a mapping onto future events.
In the case of non-ergodicity, no amount of rigor in model construction or reasoning can guarantee that the future will continue to resemble the past in the way that such models suggest.
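A toy illustration of this point, under an assumed (entirely hypothetical) structural break: a model fitted carefully on the old regime fails once the underlying relation shifts, through no fault of the reasoning.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(60, dtype=float)

# Hypothetical non-ergodic process: the underlying relation shifts at t = 40.
y = np.where(x < 40, 2.0 * x + 5.0, -1.0 * x + 125.0)
y += rng.normal(scale=3.0, size=x.size)

# Fit carefully on the old regime; nothing in this data anticipates the break.
slope, intercept = np.polyfit(x[:40], y[:40], deg=1)

# The rigor of the fit is no protection: post-break errors blow up.
pred = slope * x[40:] + intercept
rmse = np.sqrt(np.mean((pred - y[40:]) ** 2))
print(f"post-break RMSE: {rmse:.1f}")  # large: the past no longer binds the future
```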
The greatest degree of rigor is provided by deterministic functions, which necessitate the ability to measure variables. The next degree of rigor is provided by stochastic functions, which allow for estimation based on sampling, for example.
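A rough sketch of the two grades (my own illustration, with invented numbers): a deterministic function is evaluated exactly from measured inputs, while a stochastic one can only be estimated, for instance by sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Deterministic: given measured inputs, the output is exact and repeatable.
def compound(principal, rate, years):
    return principal * (1.0 + rate) ** years

print(compound(100.0, 0.05, 10))  # always the same answer

# Stochastic: if the rate is uncertain, the output is a distribution; we can
# only estimate its mean by sampling, with error shrinking as the sample grows.
rates = rng.normal(loc=0.05, scale=0.02, size=10_000)
draws = compound(100.0, rates, 10)
print(draws.mean(), draws.std() / np.sqrt(draws.size))  # estimate and its standard error
```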
Biology accounts for a degree of apparent determinism in human behavior. Custom, habit, and path dependence account for some observed degree of patterned behavior in human affairs, but this tends to be local rather than universal.
However, where radical uncertainty exists, contingent models are needed, including conceptual models that take into consideration matters that are difficult or impossible to model formally. For instance, science is presumed to be consilient, so assumptions that conflict with other areas of knowledge are suspect.
Moreover, there is also a tendency to overgeneralize, fit curves, fudge and nudge, and even see faces in clouds. One form of overgeneralization is projecting oneself and one's in-group onto humanity and concluding that local characteristics are universal. This is the basis of much that is assumed about "human nature."
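The curve-fitting and faces-in-clouds worry can be made concrete with an overfitting sketch (pure noise, invented for illustration): a flexible enough curve will always "find" a pattern, and the pattern evaporates against fresh data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pure noise: by construction there is no pattern to find.
x = np.linspace(0.0, 1.0, 12)
y = rng.normal(size=x.size)

# A degree-9 polynomial finds a "pattern" anyway, hugging the noise closely.
coeffs = np.polyfit(x, y, deg=9)
fit = np.polyval(coeffs, x)
print(f"in-sample error:     {np.sqrt(np.mean((fit - y) ** 2)):.3f}")  # small

# But the discovered face in the clouds evaporates against fresh noise.
y_new = rng.normal(size=x.size)
print(f"out-of-sample error: {np.sqrt(np.mean((fit - y_new) ** 2)):.3f}")  # much larger
```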
The result has been that in the social sciences, including economics, a distinction has been drawn between the micro and macro levels of scope and scale. The micro has tended to assume dominance, since its scope and scale permit a greater degree of rigor that can be confirmed, at least statistically.
Grand theories that explain behavior at the societal level have fallen out of favor because they are difficult to construct formally and also difficult to test observationally. So the usefulness of such theories is questionable, and they fall victim to the charge of being speculative rather than scientific.
Grace O. Okafor's "Grand Theories and Their Critiques: From C. Wright Mills to Post Modernism" explores this in the history of sociology. It is not difficult to find parallels in the history of economics.
Gary Becker's rational choice approach has spread from economics to the other social sciences as a framework for modeling social, political, and economic behavior. This has led to criticism from several angles: bounded rationality, cognitive-affective bias, different types of decision making, contextual asymmetries, reflexivity and emergence, and uncertainty, for example.
Bond Economics
Rigour And Macroeconomics
Brian Romanchuk