Thanks to Stefan A. for pointing out the Wikipedia entry “All models are wrong”: This post is important because traditional philosophical issues are being raised in economics at present, and most economists don't reveal much familiarity with the debate about these issues that has raged literally for millennia. Most of them remain unsettled for lack of criteria that can be justified adequately as either absolute or so universally applicable as to qualify as natural.
This is a welcome turn for philosophers who have been bystanders to the economic debates, scratching our heads at the baggage that is being dragged along not only unacknowledged but apparently unnoticed. And when it is noticed, the approach is often uninformed and simplistic. Cognitive and affective biases often sneak in, too.
From the viewpoint of critical thinking, such an approach is uncritical.
"All models are wrong" is nonsense. Models are neither right nor wrong, nor true nor false, in a way that can be known, for the simple reason that they are constructed of general propositions. A general proposition can be disconfirmed by a single negative instance, but it cannot be confirmed short of testing all possible instances and getting a positive result in each case. And that is just one general proposition about an indeterminate set.
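The asymmetry described above can be sketched in code. This is a hypothetical illustration, not anything from the post: the functions, the "all swans are white" claim, and the observation list are all invented for the example.

```python
# Hypothetical sketch of the asymmetry between disconfirming and
# confirming a general proposition ("all swans are white").

def disconfirmed(claim, observations):
    """A single negative instance is enough to disconfirm."""
    return any(not claim(obs) for obs in observations)

def confirmed(claim, observations, all_possible_instances_tested):
    """Confirmation requires a positive result for every possible
    instance -- impossible for an indeterminate (open-ended) set."""
    return all_possible_instances_tested and all(claim(obs) for obs in observations)

all_swans_white = lambda swan: swan == "white"
observed = ["white", "white", "black"]

# One black swan disconfirms the claim...
print(disconfirmed(all_swans_white, observed))
# ...but no finite run of white swans confirms it, because the set
# of possible instances has not been (and cannot be) exhausted.
print(confirmed(all_swans_white, observed[:2], all_possible_instances_tested=False))
```

The point of the sketch is only that the two tests are not symmetric: disconfirmation needs one case, confirmation needs all of them.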
All models are simplifications, which is inherent in the purpose of theoretical modeling. Simplified models are not exact replicas of what they model. The requirement is rather that they be true enough to be useful for the purpose for which they are being used.
This implies that nothing can be deduced from the model that is not included in the premises, that is, the assumptions. If the assumptions are restrictive, as assumptions usually are for economy of explanation, then the explanation cannot be legitimately projected beyond the scope of the assumptions.
Other factors might not be relevant, but then again they may be. For example, in statistical reasoning it is necessary to eliminate confounding variables. It is also important in experimental design in the sciences to eliminate bias introduced by the designers and observers.
Yet, prior to these issues, which most economists understand, there are also logical issues having to do with hidden assumptions and biases introduced by the use of language, for instance. This is where the approach is especially likely to be uncritical.
See, for example, Lars Syll's post today, Utility — an almost vacuous concept. Gary Becker's rational choice theory based on it has been adopted not only in conventional economics, but in other social sciences, and through these it influences policy. Another example is the Cambridge capital debate over the meaning of "capital" in the MIT models. Paul Samuelson and Robert Solow were bested by Joan Robinson and Piero Sraffa when they could not provide an operational definition that withstood criticism. Even so, they did not change the model, nor did they give it up. Is it "true enough" for the use it is put to?
9 comments:
Even so, they did not change the model, nor did they give it up.
They need to be sacked and the institutions they work for have to be reformed.
"The requirement is rather that they are true enough to be useful for the purpose for which they are being used."
Correct.
This is the weakness of the mainstream that should be attacked... leave the ideology out of it.
I often say that "all models are wrong", but what I really mean is what you say - that models are never right nor wrong. I like the approach that says it is really a question of how useful a model is. A model will show how something might work, for example how phenomena X and Y might be related. We then need to decide whether the insight we gain from understanding the model helps us understand what is going on in the real world.
Meteorology is a field where 'all models are wrong'. Yet unlike economic forecasts, weather forecasts are becoming more reliable. This is progress.
Science = progress and Ideology = stagnation
Better to say that about theoretical models. Not all models are of that kind: some models function descriptively and are either true or false based on factual evidence.
As Wittgenstein shows in the Tractatus, a descriptive proposition that asserts a possible state of affairs in the world can be conceived as a model, picture, or map that can be checked for veracity by comparing it with the world. An elementary proposition describes a state of affairs and asserts it to be the case. The propositional calculus is derived from a set of elementary propositions and logical operators serving as formation and transformation rules.
Hypotheses are theorems derived deductively from the starting point of the theoretical model as a general description of the world, and are therefore logically necessary. If contradictory theorems can be derived, then the logical structure of the theory is not consistent.
But the logical necessity of a theorem is logical only. While theorems are "true" syntactically, that cannot guarantee correspondence with the world when they are used as hypotheses assumed to be true, if the theory is representational.
But a hypothesis is a descriptive model of a possible state of affairs, which makes it possible to compare the model with the evidence. This determines whether the hypothesis is true or false, and tends toward confirmation or disconfirmation of the theory.
Generally speaking, a single instance does not disconfirm an established theory: first, because the experiment needs to be replicated. But even after replication, this just sends a signal that something is wrong that needs to be explained and fixed.
If that is not possible, it is called an anomaly, and anomalies reduce the usefulness of a theory as an explanatory and predictive device. This is how science advances, one small step at a time and sometimes in a leap, when a new theory is elaborated that is more useful in explanation and prediction than the previous one.
So we need to be clear about the kind of modeling involved and how it functions. All models are not of the same kind.
These are fluid dynamic models, as far as I know. They provide a description of conditions and predicted ranges for several measurements such as temperature, humidity, wind speed, etc.
Several models are looked at before a forecast is derived.
Economic models should produce similar forecasts that include range estimates. If the future cannot be reliably predicted, then that should be admitted. The reliability of every forecast should always be stated.
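The ensemble approach described above can be sketched minimally. Everything here is invented for illustration: the model names, the forecast numbers, and the idea that spread proxies reliability are assumptions, not anything from the comments.

```python
# Hypothetical sketch: deriving a central forecast plus a range from
# several models, as meteorologists do with ensembles. The model
# names and the GDP-growth figures below are made up.
model_forecasts = {
    "model_a": 1.8,  # next-quarter GDP growth, % (illustrative)
    "model_b": 2.4,
    "model_c": 2.1,
    "model_d": 1.5,
}

values = sorted(model_forecasts.values())
low, high = values[0], values[-1]
central = sum(values) / len(values)
# A wide spread across models signals low reliability -- which,
# per the comment above, should be stated alongside the forecast.
spread = high - low

print(f"forecast: {central:.2f}% (range {low:.1f}% to {high:.1f}%)")
```

Reporting the range rather than a single point estimate is the whole design choice here: it makes the disagreement among models, and hence the forecast's reliability, visible to the reader.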
Theoretical models are for research purposes. Or in the case of economics, for ideological aims.
Scientifically speaking, there are conjectures, hypotheses, and theories. A conjecture cannot be tested, while a hypothesis can. A theory is the closest thing science accepts as a 'truth' or 'sure thing'. What we claim to know about the physical world should not go past this milestone. There isn't any practical reason to do so.
Large corporations and governments (militaries, central banks, treasuries, etc.) conduct detailed contingency planning that models different conditions and scenarios. This is complicated, requires a lot of resources (it is expensive), and is never finished.
This is not how academic economists operate, nor should they be expected to without being provided with the necessary resources. They probably never will be, owing to the scope of the task.
In this respect it is somewhat similar to weather prediction, which is resource intensive (satellites, computing power) and beyond the scope of an individual to monitor continuously.
Not enough contingency planning is done, except for the military (which shows what our priorities are).
Meteorology is short to mid term and is important to the economy. Bad forecasts can cost money and people's lives. So it is taken seriously.
If economics is similarly important then it should be cleaned up. Make it as rigorous as meteorology, and invest in it accordingly.
Note that 'business economics' does not have nearly as many issues. You can't equip owners and managers with theoretical nonsense and expect them to survive in a competitive market.