Showing posts with label philosophy of science.

Friday, December 6, 2019

Lars P. Syll — The ergodicity problem in economics (wonkish)


Less wonkishly, the basic problem here can be viewed in terms of the logical fallacy of hasty generalization. Hasty generalization involves extending one's position, or that held by one's group, universally. In philosophy, this results in claims of naturalism extended to humanity as a whole. For example, natural law is often reducible to a particular set of Western values that is generalized. The "laws" of economics are largely of this sort, and homo economicus as a rational agent that carries them out is basically a reflection of the economists who posited them, assuming all to be like themselves.

This fallacy has been a temptation from ancient times, but it culminated in the scientific age with the discovery of invariant laws of nature, in physics and astronomy in particular, since these discoveries could be rendered universally using mathematical expressions. Subsequently, this formalism became a criterion of truth that, for formalists, prevailed over empirical observation. Owing to the success and prestige of the natural sciences, would-be scientists in other fields, and philosophers as well, sought to emulate the formalism of the natural sciences.

However, the great success of the natural sciences in discovering invariant laws lies in the ergodicity of the subject matter, which renders the mathematical expressions and formal models time-invariant. Where the subject matter lacks ergodicity, this does not strictly apply. There is a significant difference between a general case and specific cases. In Economics Rules: The Rights and Wrongs of The Dismal Science, Dani Rodrik argues that the art or craft of economics is being able to discern which model applies in which case. The natural sciences are not concerned with this kind of decision in the same way. There is a clear difference among theoretical science, experimental science, and engineering.
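
To see what non-ergodicity means concretely, here is a minimal Python sketch (my own illustration, with made-up parameters) of the standard multiplicative-gamble example from the ergodicity literature:

```python
import numpy as np

# Multiplicative gamble (made-up parameters): each period, wealth is
# multiplied by 1.5 on heads or 0.6 on tails, p = 0.5. The ensemble
# average grows 5% per period, but the time-average growth rate of a
# typical trajectory is sqrt(1.5 * 0.6) < 1. The process is non-ergodic:
# the expectation over many agents says nothing about the fate of any
# one agent followed through time.
rng = np.random.default_rng(42)
n_agents, n_periods = 100_000, 100

factors = rng.choice([1.5, 0.6], size=(n_agents, n_periods))
wealth = factors.prod(axis=1)  # terminal wealth of each agent, starting from 1

print("ensemble growth per period:", 0.5 * 1.5 + 0.5 * 0.6)  # 1.05
print("typical growth per period: ", (1.5 * 0.6) ** 0.5)     # ~0.949
print("expected terminal wealth:  ", 1.05 ** n_periods)      # ~131.5
print("median terminal wealth:    ", np.median(wealth))      # ~0.005
print("share of agents below 1:   ", (wealth < 1).mean())    # ~0.86
```

The expectation grows while the median agent is nearly wiped out. Generalizing from the ensemble average to the individual case is precisely the hasty generalization at issue.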

There is an old joke about some engineers and an economist shipwrecked with nothing to eat other than canned food. The engineers set about trying to figure out how to open the cans by applying their theoretical expertise and practical experience. The economist chimed in with, "Let's just assume a can opener."

This actually happened in a less dramatic way. In effecting his synthesis of Keynesian and neoclassical thought, Paul Samuelson was confronted with the fact that Keynes had posited uncertainty about the future at the foundation of the "moral sciences," which we now call social science, including economics.

Samuelson solved the difficulty by assuming ergodicity as a methodological convenience for tractability, as neoclassical economics had done in assuming equilibrium. This view became orthodox in conventional economics. The follow-up retort to heterodox objections then became, "The methodological debate is already settled." As Paul Krugman asserted, equilibrium and maximization constitute the framework.

This doesn't mean that economics or the other social sciences are not scientific or cannot be scientific. It just means that they are not the same as the natural sciences and that claims approaching this are unjustified.

Moreover, there is a difference between being a science and being scientific. Being scientific just means observing the scientific method. Engineering is scientific in its approach, but it is applied science.

Being a science assumes a framework in terms of which theories can compete. For example, the framework of physics includes the conservation laws, which are universal and independent. Of course, there is change over time in physics owing to motion and entropy, for example. But these phenomena are explained using models that data support. The explanation (formula) is time-invariant, even though the data change.

Economics has no such framework, which is why there are competing views of how to approach economics in the first place. "The law of supply and demand" is not the same as the conservation laws in physics, and the assumption of equilibrium is not ergodicity.

Not being ergodic, economics is not a natural science. This is not the same as saying that economics cannot discover invariances that data support regardless of time series; it is to say that economics, like the other social sciences, is historical. Moreover, social systems are complex adaptive systems subject to reflexivity (learning from feedback) and emergence (change that is unforeseeable based on priors).

So the next time someone says, "Where's your model," ask them, "Which one?" 😀

No one in business or finance takes economic forecasts to be the same as, or even similar to, those of physics, and no one confuses weather forecasts with astronomical invariances. But economic forecasts and the reasoning on which they are based are often treated as dogma in policy circles. That's a problem.

Lars P. Syll’s Blog
The ergodicity problem in economics (wonkish)
Lars P. Syll | Professor, Malmo University

Wednesday, November 13, 2019

Primer: Causality In Models — Brian Romanchuk


Science is about identifying regularity in change and developing theories that explain the causation, ideally in terms of variables and linear functions. This is a challenge even in complicated systems, e.g., in the natural sciences, and it is a huge challenge when dealing with complex adaptive systems in the life and social sciences.

Complex adaptive systems are synergistic, meaning that they are greater than the sum of the parts, so that examining the parts alone is insufficient for analyzing the system as a whole, that is, the parts and their relationships.

Complex adaptive systems are also subject to emergence, that is, the appearance of properties that are unforeseeable based on analysis of the existing system. For example, systems with the capability to learn from feedback and to change behavior based on that learning are unpredictable, since the discovery of new knowledge and its application cannot be anticipated. There is as yet no logic of discovery that formalizes the process and no scientific theory that has penetrated the causality so as to be able to influence it.

Furthermore, a framework for approaching causality must be assumed and ideally defined operationally in science, as Brian does in the first sentence of the following:
One important consideration for indicator construction is the notion of causality (using systems engineering terminology). A non-causal model is a model where the output depends upon the future values of inputs. In the absence of access to a time machine, such a model cannot be directly implemented in the real world. In practice, a non-causal model output is “revised” as new datapoints are added to input series. The result is that we cannot use the latest values of the series to judge the quality of previous “predictions” of the model.
The use of non-causal model might be acceptable for the analysis of a historical episode, or an earlier economic regime (such as various Gold Standard periods). Since new data will not arrive, there will be no revisions....
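To make the engineering distinction concrete, here is a minimal sketch (my own illustration, not Brian's code) of a causal trailing moving average versus a non-causal centered one; the non-causal output at the end of the sample can only be computed, and hence gets "revised," as new datapoints arrive:

```python
import numpy as np

def causal_ma(x, window=3):
    """Trailing moving average: the output at t uses only x[0..t]."""
    out = np.full(len(x), np.nan)
    for t in range(window - 1, len(x)):
        out[t] = x[t - window + 1 : t + 1].mean()
    return out

def noncausal_ma(x, window=3):
    """Centered moving average: the output at t also uses future values."""
    half = window // 2
    out = np.full(len(x), np.nan)
    for t in range(half, len(x) - half):
        out[t] = x[t - half : t + half + 1].mean()
    return out

x = np.array([1.0, 2.0, 4.0, 3.0, 5.0])
print(noncausal_ma(x))       # [nan 2.33 3.0 4.0 nan]; t=4 needs x[5], not yet observed

x_new = np.append(x, 6.0)    # a new datapoint arrives...
print(noncausal_ma(x_new))   # ...and the output at t=4 only now appears: (3+5+6)/3
print(causal_ma(x_new))      # causal outputs at earlier t are unchanged by new data
```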
However, causation is still an open question in the philosophy of science.•

Bond Economics
Primer: Causality In Models
Brian Romanchuk

• Causality can be defined as the "causal" connection between cause and effect, e.g., in terms of conditionality (sufficient condition, necessary condition, necessary and sufficient condition). Causality is established through a scientific theory that accounts for the connection, since "correlation is not causation."

Causation is the entire scope of the subject, which includes "causality" as just defined but is not limited to it. There are ontological and epistemological issues regarding causation that are not settled.

Generally speaking, modern science assumes 1) ontological monism in assuming naturalism, that is, that "everything" can be explained by natural causes as observables (as in a theory of everything). It also assumes 2) epistemological realism in the sense that the mind (subjectivity) mirrors reality (objectivity) when the scientific method is correctly applied.

However, these assumptions regarding the framework for gaining knowledge are more presuppositions than stated assumptions. Philosophy of science attempts to bring clarity to this by examining the various issues that arise.

Wednesday, August 7, 2019

Book Review: Marxism and the Philosophy of Science — George Martin Fell Brown

Helena Sheehan’s Marxism and the Philosophy of Science, originally written in 1985, but reprinted at the end of 2017, recounts a wide history of serious Marxist thought on science starting with Marx and Engels themselves, and going up to the mass workers’ movements of the 1930s and 1940s. In keeping with a dialectical conception of science, Marxist ideas aren’t presented as static but evolving through debate and experiment in the face of new scientific and political challenges....
This review gives a brief historical account of how Marxism grew out of the naturalistic assumption of the scientific method that became the dominant worldview of the liberal Western elite, replacing the traditional theologically based worldview of the great chain of being. However, Marxism departed from the "standard" analysis based on Newtonian mechanism, e.g., as followed by neoclassical economics, by including factors not reducible to assumptions resembling, indeed imitating, those of the natural sciences — physics and chemistry.
Materialist ideas in science predate Marx and Engels by quite a bit, with forms of materialism going as far back as ancient Greece and being a significant part of the philosophy of the enlightenment in the eighteenth century. But the application of materialist methods for understanding the internal workings of society was a revolutionary contribution in more ways than one. Not only did it point to direct social and political revolution, it pointed to a different understanding of materialism itself. The materialism of the enlightenment philosophers was a highly mechanical conception, reducing nature and society to fixed objects either existing in stasis or confined to simple motion. The materialist conception of history put forward by Marx and Engels didn’t adhere to that approach.
In hindsight, nineteenth-century thought was influenced chiefly by Darwin in life science, Freud in psychology, Nietzsche in philosophy, and Marx in economics, sociology, and political theory (which were just emerging fields in his day). I make this claim from the point of view of their subsequent influence, both derivative and reactive. I know this is a provocative claim, but defending it is beyond the scope of this comment. Think about it.

This is not to say that the great scientists working in physics and chemistry were not also key contributors to the dominant 20th-century worldview in the West. Conventional economics tended to hew to this trend, while many heterodox thinkers branched out to include other rising influences.

Marx and Engels, Marxism as a tradition that follows them, and Marxianism as a tradition that is influenced by them constitute an important strain in Western intellectual history. This review is interesting as an exploration of this.

Incidentally, I think this is exaggerated.
This [the dialectical method] was Hegel’s approach, but Hegel saw these contradictions and transformations as taking place only within the world of ideas. From Marx and Engels’ materialist perspective, these contradictions and transformations are part of nature itself.
I think that "only" is too strong. It is reading Hegel chiefly from the point of view of the Logic. Hegel's point was that nature is rational. He tried to account for this. Virtually all scientists agree that nature is rational in the sense that causal explanation in natural science is cannily mathematical. There is no way for science to account for this in terms of the assumptions of its model. It is just assumed to be the case. This conundrum goes back at least to the Greeks, who struggled with it, and were somewhat freaked out by the existence of irrational numbers that figured in scientific (to them) explanation.

On the other hand, Hegel was largely a traditionalist whereas Marx was a materialist. They are both contributors to German liberalism, which is different from Anglo-American liberalism. Contemporary economics is chiefly Anglo-American, and even most heterodox economists are working from within this worldview and its ideology. They agree pretty much on the worldview, with American capitalism at its foundation, but differ on ideology.

Another contribution Sheehan makes is recognizing the position of Engels in Marxism as not merely a contributor or even just a collaborator. Engels was an important thinker in this strain of thought being birthed, although he is now greatly overshadowed by Marx in recognition and reputation. He was a first-class thinker and researcher.
Sheehan discusses three key works of Engels on the question: Anti-Dühring, The Dialectics of Nature, and Ludwig Feuerbach and the End of Classical German Philosophy. The first of these was a polemic against Eugen Dühring, a briefly popular figure in the socialist movement, who put forward a crudely mechanical and schematic approach to science and politics. The Dialectics of Nature was an unfinished work, inspired by Marx’s own desire to write a work salvaging what was rational in Hegel’s thought. And Ludwig Feuerbach was a historical account of the philosophical road leading from Hegel to Marx.
In these works Engels grappled with a number of scientific questions of his day, from a dialectical perspective. He pointed to a number of the laws of development Hegel had put forward, such as the transformation of quantity into quality, and pointed out how they arise in nature and not simply in thought, as Hegel had put forward. He looked into how social conditions shaped scientific discovery....
Sheehan adds an interesting tidbit.
Ironically, when Stalin waged his war on genetics, he was actually putting forward the very neo-Lamarckian ideas Engels polemicized against [in Anti-Dühring: The Dialectics of Nature].
Warning: Longish.

Socialist Alternative
Book Review: Marxism and the Philosophy of Science
George Martin Fell Brown

Friday, June 28, 2019

Racism is a framework, not a theory — Andrew Gelman


I am posting this for its relevance to philosophy of logic and philosophy of science rather than specifics. It draws a useful distinction between frameworks, which generate theories and are themselves not testable (hence not falsifiable), and the theories a framework is used to generate.

This is obviously relevant to philosophy of science, but why philosophy of logic? In his later work, Ludwig Wittgenstein sought to show that the overarching frameworks of a culture as a way of life are deeply embedded in the structure and function of ordinary language.

Such frameworks are "world pictures" (Weltbilder) that function as a world view. Although many of the propositions they generate appear to be descriptive, many are normative in operation. For example, the fundamentals of a world view are criteria for valuation and judgment; hence, they are stipulations that cannot be falsified from within that world view. For example, in doing science, methodological naturalism is fundamental.

As a result, many make the illogical jump from a methodological assumption to a metaphysical assertion (materialism). In neoclassical economics, methodological individualism, microfoundations, rational maximization, market forces, and equilibrium are key fundamentals that are assumed as criteria. This is a reason that neoclassical economists reject "heterodox economics" out of hand.

World views and ideologies are similar and need to be distinguished. A world view is a way of seeing the world that is embedded in ordinary language based on interpretation of context. World views are subconscious and very difficult to articulate, since they are the basis for using a language to communicate. This applies even to formalizations to the degree that assumptions are stipulated that link the symbols to life. (Pure math says nothing about the world. It is about how the rules for sign-use work in a particular syntactical system.)

Conversely, ideology is articulated at least in outline, and people within a single world view can recognize differences of expression based on competing ideologies. Politics functions in this way, for instance. American liberals and conservatives disagree, but they function within the same framework.

So the application of the point that statistics professor Andrew Gelman is making goes far beyond the context of racism.

Statistical Modeling, Causal Inference, and Social Science
Racism is a framework, not a theory
Andrew Gelman | Professor of Statistics and Political Science and Director of the Applied Statistics Center, Columbia University

Monday, May 20, 2019

Brian Romanchuk — Comments On Turning Points And Recessions

As the Canadian experience shows, deciding which episodes qualify as "recessions" can be debated. The safest course of action is to make it clear what definition you are using, and apply it consistently across regions.
This is logic and critical thinking 101, and also fundamental to applying math. While it is basic for thinking critically, it is key for doing science.

This is a big problem for economics, since economics is regarded not as a pure science but as an applied science. Macroeconomics is considered a policy science. Ambiguity then becomes a hazard.

Terms like "inflation" and "recession," which are defined arbitrarily in terms of data selection, are especially slippery since measurement differs by locale and definitions are often revised, making historical comparison tricky.

Bond Economics
Comments On Turning Points And Recessions
Brian Romanchuk

Saturday, February 9, 2019

Andrew Gelman — Our hypotheses are not just falsifiable; they’re actually false.


On the practical side of philosophy of science. Adding nuance to Karl Popper on falsification.

Further argument for the view that theories are useful but not "true." This may seem to contradict the realist view that theories are general descriptions of causal relationships. But I don't think that is what is implied. Rather, useful theories can be viewed as fitting the data because they reveal underlying structures that are not observed directly but only indirectly.

There is often a tendency to transfer simple analogies to complicated and complex situations and events. Some causal relationships are observable, as with a hammer driving a nail, with the physical theory explaining it in terms of simple variables related in a function.

But most interesting issues are much more complicated and nuanced, and may be complex, e.g., subject to emergence owing to synergy. There may be a constellation of factors involved, and these may be difficult to order in a hierarchy. Some factors may be catalysts that are necessary for an operation but do not themselves enter into it. These may be presumptions that are hidden assumptions.

In addition, statistics is by definition "inexact" in that it deals with probabilities, unlike deterministic functions in which the variables are all known and measurable, and are expressible in terms of a simple function.

While physics is mostly tractable other than at the edges, the life sciences are less so, and the social sciences and psychology even less. Economics, especially macroeconomics and political economy, combines social science and psychology. Economic sociology and economic anthropology take this into account; global economic history also demonstrates it.

This is coming to the fore now as some critics of MMT, the Green New Deal, and "socialism" demand to see data-based models that "prove" proposed solutions have worked in the past. Of course, the record is important, but the demand for "proof" requires a degree of stringency that is not applied in social science and psychology because it is unattainable. Nor is this standard applied to conventional economics either, its econometric approach being based on formalism rather than empirics.

Another important point that Andrew Gelman makes is the futility of pitting theories against each other. That is a recipe for disagreement in that the party that determines the framing wins. Whose assumptions are going to set the criteria? Why?
And, no, I don’t think it’s in general a good idea to pit theories against each other in competing hypothesis tests. Instead I’d prefer to embed the two theories into a larger model that includes both of them.
This is a good suggestion, but it is general. Often, the disagreement is over fundamental criteria that determine a frame of reference. This should be obvious in the different approaches to economic theory and economic practice, e.g., econometric and institutional, static and dynamic, simple and complex, natural and historical.

Obviously, a short post like this can only suggest matters that need deeper reflection, open inquiry, and sincere debate aimed at solutions to pressing design problems. This is no longer just "theoretical." Humanity has to get this right to survive, let alone prosper. We have seemingly dug ourselves into a hole based on policy that has turned out to be impractical in the extreme, such as socializing negative externalities that have led to environmental degradation and threaten ecological collapse if not addressed successfully in a timely fashion. So, let's get with it.

Statistical Modeling, Causal Inference, and Social Science
Our hypotheses are not just falsifiable; they’re actually false.
Andrew Gelman | Professor of Statistics and Political Science and Director of the Applied Statistics Center, Columbia University

Thursday, September 13, 2018

George H. Blackford — Economists Should Stop Defending Milton Friedman’s Pseudo-science


Recommended reading on the history and philosophy of science, the philosophy of economics, and Milton Friedman's instrumentalism.

Evonomics
Economists Should Stop Defending Milton Friedman’s Pseudo-science
George H. Blackford | former Chair of the Department of Economics at the University of Michigan-Flint

Saturday, August 18, 2018

Andrew Gelman — The fallacy of the excluded middle — statistical philosophy edition


Some philosophy of statistics. Short read. Not wonkish.

Statistical Modeling, Causal Inference, and Social Science
The fallacy of the excluded middle — statistical philosophy edition
Andrew Gelman | Professor of Statistics and Political Science and Director of the Applied Statistics Center, Columbia University

Wednesday, June 27, 2018

Lars P. Syll — The main reason why almost all econometric models are wrong

Since econometrics doesn’t content itself with only making optimal predictions, but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — most important of these are additivity and linearity. Important, simply because if they are not true, your model is invalid and descriptively incorrect. And when the model is wrong — well, then it’s wrong....
Simplifying assumptions versus oversimplification.
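
A minimal sketch of the point, with made-up data: fit a linear model to a process that actually has a threshold, and the single estimated slope is descriptively wrong everywhere, with the misspecification showing up as systematic structure in the residuals:

```python
import numpy as np

# Made-up data-generating process: the effect of x on y has a threshold
# at x = 5, but we estimate the linear model y = a + b*x anyway.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = np.where(x < 5, 0.2 * x, 2.0 * x - 9.0) + rng.normal(0, 0.5, 500)

b, a = np.polyfit(x, y, 1)  # least-squares slope and intercept
resid = y - (a + b * x)

# The single fitted slope (~1.1) is descriptively wrong everywhere:
# the true marginal effect is 0.2 below the threshold and 2.0 above it.
print("fitted slope:", round(b, 2))

# The misspecification leaves systematic structure in the residuals:
print("corr(resid, |x - 5|):", round(np.corrcoef(resid, np.abs(x - 5))[0, 1], 2))
```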

Lars P. Syll’s Blog
The main reason why almost all econometric models are wrong
Lars P. Syll | Professor, Malmo University

Friday, May 18, 2018

Jason Smith — A list of macro meta-narratives

In my macro critique, I mentioned "meta-narratives" — what did I mean by that? Noah Smith has a nice concise description of one of them today in Bloomberg that helps illustrate what I mean: the wage-price spiral. The narrative of the 1960s and 70s was that the government fiscal and monetary policy started pushing unemployment below the "Non-Accelerating Inflation Rate of Unemployment" (NAIRU), causing inflation to explode. The meta-narrative is the wage-price spiral: unemployment that is "too low" causes wages to rise (because of scarce labor), which causes prices to rise (because of scarce goods for all the employed people to buy). In a sense, the meta-narrative is the mechanism behind specific stories (narratives). But given that these stories are often just-so stories, the "mechanism" behind them (despite often being mathematically precise) is frequently a one-off model that doesn't really deserve the moniker "mechanism". That's why I called it a "meta-narrative" (it's the generalization of a just-so story for a specific macro event).

Now just because I call them meta-narratives doesn't mean they are wrong. Eventually some meta-narratives become true models. In a sense, the "non-equilibrium shock causality" (i.e. macro seismographs) is a meta-narrative I've developed to capture the narrative of women entering the workforce and 70s inflation simultaneously with the lack of inflation today.

Below, I will give a (non-exhaustive) list of meta-narratives and example narratives that are instances of them. I will also list some problems with each of them. This is not to say these problems can't be overcome in some way (and usually are via additional just-so story elements). None have yielded a theory that describes macro observables with any degree of empirical accuracy, so that's a common problem I'll just state here at the top.
The difference among just-so stories, handwaving, modeling for effect, and data-based modeling....

Worth looking at for the weekend — and thinking about.

Information Transfer Economics
A list of macro meta-narratives
Jason Smith

Monday, April 30, 2018

Jason Smith — The ability to predict


Another good one on foundations of science and economics, specifically macroeconomics — if you are into this sort of thing.
These papers also fail to make any empirical predictions or really engage with data at all. I get the impression that people aren't actually interested in making predictions or an actual scientific approach to macro- or micro-economics, but rather in simply using science as a rhetorical device....
Information Transfer Economics
The ability to predict
Jason Smith

Thursday, March 15, 2018

Lars P. Syll — Abduction – the induction that constitutes the essence​ of scientific reasoning


Abduction in this sense is reasoning to the best explanation based on the relevant information available. (The use of "abduction" by C. S. Peirce, the originator of the term, is somewhat different. See abductive reasoning.)

Math is an instrument of deduction. Deductive reasoning proceeds logically from a stipulated starting point, e.g., axioms, postulates, using deductive logic or mathematics.

Abduction involves constructing conceptual or mathematical models based on what is given. To simplify, abduction begins as a "word problem" involving observation and conceptual understanding. From this a model as a candidate for providing best explanation is developed and then tested against that which is being modeled.
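
As a toy illustration of that loop (my own sketch, with made-up data), candidate models can be proposed, tested against the observations, and the best explanation retained provisionally:

```python
import numpy as np

# Made-up observations generated by a quadratic process.
rng = np.random.default_rng(3)
x = np.linspace(0, 4, 60)
y = 1.0 + 0.5 * x**2 + rng.normal(0, 0.4, x.size)

def aic(degree):
    """AIC of a polynomial fit: a crude score for 'best explanation'."""
    coeffs = np.polyfit(x, y, degree)
    rss = ((y - np.polyval(coeffs, x)) ** 2).sum()
    k = degree + 1  # number of fitted parameters
    return x.size * np.log(rss / x.size) + 2 * k

# Propose candidate models, test each against the data, retain the best.
for d in (1, 2, 3):
    print(f"degree {d}: AIC = {aic(d):.1f}")
# The quadratic should score best (the cubic's extra parameter buys
# little fit), and it is retained only provisionally, subject to
# revision if new observations favor another candidate.
```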

Abduction stands in contrast to the intuitive approach of stipulating axioms as the basis for a deductive system. Conventional economics, based on assuming equilibrium and maximization, is intuitively based rather than abductive.

Induction is reasoning based on observation of particulars and assuming that the past resembles the future, e.g., path dependence, hysteresis, ergodicity. Abduction may employ induction, usually thought of in terms of probability and statistics.

Lars P. Syll’s Blog
Abduction — the induction that constitutes the essence​ of scientific reasoning
Lars P. Syll | Professor, Malmo University

Wednesday, January 17, 2018

Jason Smith — What to theorize when your theory's rejected

I was part of an epic Twitter thread yesterday, initially drawn in to a conversation about whether the word "mainstream" (vs "heterodox") was used in natural sciences (to which I said: not really, but the concept exists). There was one sub-thread that asked a question that is really more a history of science question (I am not a historian of science, so this is my own distillation of others' work as well a couple of my undergrad research papers).
Useful relative to philosophy of science and history of science, as well as foundations of economics. Philosophy of science makes use of the history of science.

It is also relevant to the orthodox and heterodox debate in economics.

Information Transfer Economics
What to theorize when your theory's rejected
Jason Smith

Wednesday, December 13, 2017

Jason Smith — On these 33 theses

The other day, Rethinking Economics and the New Weather Institute published "33 theses" and metaphorically nailed them to the doors of the London School of Economics. They're re-published here. I think the "Protestant Reformation" metaphor they're going for is definitely appropriate: they're aiming to replace "neoclassical economics" — the Roman Catholic dogma in this metaphor — with a pluralistic set of different dogmas — the various dogmas of the Protestant denominations (Lutheran, Anabaptist, Calvinist, Presbyterian, etc). For example, Thesis 2 says:
2. The distribution of wealth and income are fundamental to economic reality and should be so in economic theory.
This may well be true, but a scientific approach does not assert this and instead collects empirical evidence that we find to be in favor of hypotheses about observables that are affected by the distribution of wealth. A dogmatic approach just assumes this. It is just as dogmatic as neoclassical economics assuming the market distribution is efficient.
In fact, several of the theses are dogmatic assertions of things that either have tenuous empirical evidence in their favor or are simply untested hypotheses. These theses are not things you dogmatically assert, but rather should show with evidence:
I wonder whether economics should be taught as a science, especially since conventional economists seem to think that economics is more like physics than the social sciences.

There are problems with assuming that, which I won't repeat. But to my mind, the most obvious difficulty is well-known among the public. Perhaps the most powerful argument for "science" is demonstrated not in words, or through experiment, but rather in the success of technology that everyone uses all the time to change the world.

Is there anything like this with respect to economics? Not only no, but also the opposite in many cases.

The study of economics is not even required in most business schools, because business schools have discovered that time is better spent in getting results. If it got results, business schools would be hiring the top economists. They are not.

The teaching of economics needs to be rethought in light not only of the failure of economists to deliver results but also of their making bad situations worse. The dismal handling of the aftermath of the global financial crisis is a case in point. In addition, conventional economists and policymakers have literally laid waste to entire European countries and their economies.

A lot of people are likely thinking: if this is science, we want none of it. Monkeys throwing darts could probably do better.

And ironically, Western economists and policymakers were put to shame by the positive result that China showed in using a command economy to address the issues promptly and avoid contraction. But Western economists explain this away as "cheating."

Information Transfer Economics
On these 33 theses
Jason Smith

Tuesday, December 12, 2017

Lars P. Syll — On the non-applicability of statistical models


Math is purely formal, involving the relation of signs based on formation and transformation rules. Signs are given significance based on definitions. Math is applicable to the world through science to the degree that the definitions are amenable to measurement and the model assumptions approximate real world conditions (objects in relation to others) and events (patterned changes in these relations). Methodological choices determine the scope and scale of the model, which in turn determines the fitness of formal modeling for explanation of real world conditions and events.

Contemporary science is chiefly about applying formal modeling to theoretical explanation that covers a wide enough range of phenomena worth explaining to be of interest. The scientific project is about designing useful models for explaining phenomena and also designing experiments to test the model against observation. This involves measurement.

A further challenge is identifying parameters that can be measured to produce data and constructing models based on assumptions of how parameters are related with respect to states and how they change over time.

Then, there are also presumptions that are not stated. For example, it is presumed that science is consilient and therefore, any theoretical explanation that violates the conservation laws is ruled out automatically.

Beyond that, philosophical foundations relating to metaphysics, epistemology, ethics, social and political philosophy, philosophy of science, the philosophy of the particular discipline, etc., also come into play.

Quite evidently, there is a lot of room for mistakes and slip-ups in the process of "doing science."

Formalization and data are not magic wands, and assuming they are leads to magical thinking. Formalization is only rigorous — necessary based on application of rules — with respect to models. How models relate to what is modeled is contingent and depends on data. Data is dependent on observation and measurement.

All this is difficult enough in the natural sciences, but it is more difficult in the life sciences and much more so in the social sciences.

The philosophy of economics, or foundations of economics if one prefers, needs to take all this into consideration and there needs to be lively debate about it. Is there?

Lars P. Syll’s Blog
On the non-applicability of statistical models
Lars P. Syll | Professor, Malmo University

Thursday, November 23, 2017

Lars P. Syll — Randomization — a philosophical device gone astray

When giving courses in the philosophy of science yours truly has often had David Papineau’s book Philosophical Devices (OUP 2012) on the reading list. Overall it is a good introduction to many of the instruments used when performing methodological and science theoretical analyses of economic and other social sciences issues.
Unfortunately, the book has also fallen prey to the randomization hype that scourges sciences nowadays....
Lars P. Syll’s Blog
Randomization — a philosophical device gone astray
Lars P. Syll | Professor, Malmo University

Wednesday, September 27, 2017

Lars P. Syll — Time to abandon statistical significance

As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only say a 10 % probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 % result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.
We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!
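A minimal simulation of that last point (illustrative numbers, my own sketch): apply a standard t-test, whose p-value assumes independent observations, to autocorrelated data whose true mean is zero, and the nominal 5% test rejects far more often than 5%:

```python
import numpy as np
from scipy import stats

def ar1(n, rho, rng):
    """AR(1) series with mean zero: x[t] = rho * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(1)
n_sims, rejections = 2_000, 0
for _ in range(n_sims):
    sample = ar1(100, 0.9, rng)                # true mean is exactly 0
    p = stats.ttest_1samp(sample, 0.0).pvalue  # test assumes i.i.d. data
    rejections += p < 0.05

# The p-values are computed under the wrong model (independence),
# so a nominal 5% test rejects a true null roughly half the time.
print("rejection rate:", rejections / n_sims)  # far above 0.05
```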
Lars P. Syll’s Blog
Time to abandon statistical significance
Lars P. Syll | Professor, Malmo University

Tuesday, September 26, 2017

Abandon Statistical Significance — Blakeley B. McShane, David Gal, Andrew Gelman, Christian Robert, and Jennifer L. Tackett

Abstract

In science publishing and many areas of research, the status quo is a lexicographic decision rule in which any result is first required to have a p-value that surpasses the 0.05 threshold and only then is consideration—often scant—given to such factors as prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain. There have been recent proposals to change the p-value threshold, but instead we recommend abandoning the null hypothesis significance testing paradigm entirely, leaving p-values as just one of many pieces of information with no privileged role in scientific publication and decision making. We argue that this radical approach is both practical and sensible.
Uncritically adopting universal rules and criteria is a sign of lazy thinking, and likely of ideological thinking, aka dogmatism, as well.

Since this move would overturn the existing scientific publishing model, it is unlikely to happen without considerable opposition. This model is key in establishing reputational credibility and advancement in the profession. Players like set rules. This is especially true in formal subjects, where training focuses on producing "the right answer" based on customary application of formal methods. The downside is groupthink and imposition of a consensus reality.

Abandon Statistical Significance
Blakeley B. McShane, David Gal, Andrew Gelman, Christian Robert, and Jennifer L. Tackett

Monday, September 4, 2017

Andrew Gelman — Rosenbaum (1999): Choice as an Alternative to Control in Observational Studies

Paul Rosenbaum’s 1999 paper “Choice as an Alternative to Control in Observational Studies” is really thoughtful and well-written. The comments and rejoinder include an interesting exchange between Manski and Rosenbaum on external validity and the role of theories....
Importantly, most studies in social science, including economics, are necessarily observational rather than experimental. The question is how to design observational studies to make them as close as possible to experimental studies, where tight control of variables is available.

Design involves choices that are implicit assumptions. Designers need to choose (assume) consciously and intentionally rather than presume, which runs the risk of hidden assumptions that might have been avoided through greater advertence.

A good example is the Reinhart and Rogoff historical study on the effects of public debt, which was vitiated by inadvertence to the different consequences of public debt under different monetary systems. MMT economists immediately pointed out that the presumption that all public debt is the same in its effects is false, owing to operational differences under different monetary regimes historically. This is actually more significant than the computational errors that were discovered subsequently and highly publicized in the media.

The R&R study was highly influential in policy formulation even though MMT economists had pointed out its flaws at the time of its release, and this led to very damaging effects when policy based on the study was implemented. This should not have happened in a professional environment.

Statistical Modeling, Causal Inference, and Social Science
Rosenbaum (1999): Choice as an Alternative to Control in Observational Studies
Andrew Gelman | Professor of Statistics and Political Science and Director of the Applied Statistics Center, Columbia University

Wednesday, August 30, 2017

Daniel Little — New thinking about causal mechanisms


Everyone is familiar with the dictum, "correlation is not causality." Simply put, correlation can potentially identify input-output relationships with a certain degree of probability. But the relationship is a "black box."

Causal explanation involves opening the box and examining the contents. Correlation shows that something happens; causality in science explains how it happens, elucidating transmission in terms of operations. In formal systems the operators are rules, e.g., expressible by mathematical functions.
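
A minimal simulation of the black-box point (made-up variables, my own sketch): a hidden common cause induces a strong correlation between two variables that have no causal link, and the correlation vanishes once the confounder is controlled for:

```python
import numpy as np

# Made-up variables: z causes both a and b; a and b have no causal
# link, yet they are strongly correlated through the common cause.
rng = np.random.default_rng(7)
n = 10_000
z = rng.normal(size=n)            # the hidden common cause
a = 2.0 * z + rng.normal(size=n)
b = -1.5 * z + rng.normal(size=n)

print("corr(a, b):", np.corrcoef(a, b)[0, 1])  # strong, ~ -0.74

# "Opening the box": remove z's contribution (here using the true
# coefficients; in practice one would regress a and b on z).
a_given_z = a - 2.0 * z
b_given_z = b + 1.5 * z
print("corr given z:", np.corrcoef(a_given_z, b_given_z)[0, 1])  # ~ 0
```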

Generally speaking correlation is probabilistic, whereas causality is deterministic. Causes are logically antecedent to effects, but arguments based on prior occurrence are post hoc ergo propter hoc fallacies.

There is also probabilistic causation.
Informally, A probabilistically causes B if A's occurrence increases the probability of B. This is sometimes interpreted to reflect imperfect knowledge of a deterministic system but other times interpreted to mean that the causal system under study has an inherently indeterministic nature.
Causality in philosophy involves provision of an account of why something happens based on principles.

Causation is at the heart of the fundamental problems in the philosophy of science. Its exploration began in the West in earnest with Aristotle, and it has become one of the enduring questions.

Understanding Society
New thinking about causal mechanisms
Daniel Little | Chancellor of the University of Michigan-Dearborn, Professor of Philosophy at UM-Dearborn and Professor of Sociology at UM-Ann Arbor