
Tuesday, May 21, 2019

Michelle Starr — Tomorrow The Definition of The Kilogram Will Change Forever. Here's What That Really Means

This is a big deal, even though most people won't notice it. Precise measurement is essential to science, and measurement involves applying metrics defined by criteria. The units and their criteria are arbitrary: there was no such thing as a kilogram prior to the development and introduction of the metric system, and the same is true of other measurement systems. The "trick" is to establish a constant criterion in a relative universe. That is as close to an absolute as humans can construct. This post explains how the issue has been approached in physics.
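The gist, as I understand it: instead of defining the kilogram by a metal artefact in Paris, the SI now fixes the numerical value of Planck's constant and lets the unit be whatever mass makes that value come out exactly. A toy sketch of the logic (the constants are the official 2019 SI values; the code itself is purely illustrative):

```python
# The 2019 SI redefinition in miniature: constants are fixed by fiat,
# and the units are derived from them.

CAESIUM_HZ = 9_192_631_770     # Hz; fixes the second (caesium-133 transition)
LIGHT_C = 299_792_458          # m/s; fixes the metre, given the second
PLANCK_H = 6.62607015e-34      # J*s = kg*m^2/s; fixes the kilogram, given both

# Rearranging h = 6.62607015e-34 kg*m^2*s^-1:
#   1 kg = h / (6.62607015e-34 m^2 * s^-1)
# The kilogram is now whatever mass makes a Kibble-balance measurement
# of h return exactly the fixed value. By construction:
one_kg = PLANCK_H / 6.62607015e-34
print(one_kg)   # 1.0
```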

Of course, a great deal more precision can be achieved in physics than in the other sciences, which strive to use the measurements developed by physics insofar as possible. But physical measurement is applicable only to quantity. Measuring quality presents greater challenges, as do psychological "dimensions."

Science Alert
Tomorrow The Definition of The Kilogram Will Change Forever. Here's What That Really Means
Michelle Starr

Thursday, April 11, 2019

Michael Emmett Brady — Keynes’s Theory of Measurement is contained in Chapter III of Part I and in Chapter XV of Part II of the A Treatise on Probability

Abstract
Professor Yasuhiro Sakai (see 2016; 2018) has argued that there is a mysterious problem in the A Treatise on Probability (1921) in chapter 3 on page 39 (page 42 of the 1973 CWJMK edition). He argues that there is an unsolved mystery involving this diagram that has remained unexplained in the literature.
The mystery is that Keynes does not explain what he is doing in the analysis involving the diagram starting on the lower half of page 38 and ending on page 40 of chapter III. In fact, the mystery is solved for any reader of the A Treatise on Probability who reads page 37 and the upper half of page 38 carefully and closely. Keynes explicitly states on those pages that he will give only a brief discussion of the results of his approach to measurement on pages 38-40, but will provide a detailed discussion of his approach to measurement in Part II, after which the brief discussion of the results presented on pp.38-40 will be strengthened.
The Post Keynesian (Joan Robinson, G L S Shackle, Sydney Weintraub, Paul Davidson) and Fundamentalist (Donald Moggridge, Robert Skidelsky, Gay Meeks, Anna Carabelli, Athol Fitzgibbons, Rod O’Donnell, Tony Lawson, Jochen Runde) schools of economics, as well as economists in general, such as Jan Tinbergen and Lawrence Klein, have ignored chapter XV of the A Treatise on Probability. Keynes demonstrates on pp. 161-163 of the A Treatise on Probability in chapter XV that his approach to measurement is an inexact approach using approximation to define interval valued probability, which is based on the upper-lower probabilities approach of George Boole, who discussed this approach in great detail in chapters 16-21 of his 1854 The Laws of Thought. Therefore, the only conclusion possible is that the “mysterious” diagram presented on page 39 of the A Treatise on Probability is an illustration of Keynes’s approximation technique using interval valued probability, since the problem on pages 162-163 of the A Treatise on Probability explicitly works with seven “non numerical” probabilities while the illustration of Keynes’s approach using the diagram on page 39 works with six “non numerical” probabilities and one numerical. It is impossible for the diagram on page 39 to support the claim, made repeatedly for the last 45 years by the Post Keynesian and Keynesian Fundamentalist schools, that Keynes’s theory was an ordinal theory that could only be applied some of the time. That claim leads to the wrong conclusion that Keynes was arguing that macroeconomic measurement, in general, was impossible in economics, which was G L S Shackle’s conclusion.
An understanding of chapter XV of the A Treatise on Probability explains the conflict that existed between J M Keynes and J Tinbergen on the pages of the Economic Journal of 1939-1940. The major point of discussion, underlying all of Keynes’s major points, was that Tinbergen’s exact measurement approach, taken from macroscopic physics, using the Normal probability distribution’s precise, exact, definite, linear, additive, and independent probabilities, was not possible given the type of data available in macroeconomics. Only an inexact approach to measurement using imprecise and indeterminate interval valued probability was tenable.
An understanding of chapter XV of Part II of the TP explains the fundamental point of disagreement between J M Keynes and J Tinbergen over the issue of measurement. Tinbergen brought his physics background with him to the study of economics. Tinbergen believed that the exact measurement approach that he had absorbed in his study of statistical physics, using additive, linear, exact, precise, definite probability distributions like the Normal or lognormal, could be used in the study of macroeconomics and would provide a precise and exact explanation of business cycles. Keynes, of course, had vast experience in academia, industry, business, government, the stock markets, bond markets, money markets, banking, finance, and commodity futures markets that Tinbergen, an academic only, did not have. Keynes saw that Tinbergen’s application was the wrong one, although the technique would be applicable to studies of consumption and inventories.
Wonkish.
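For readers who want the flavor of "interval valued probability" without wading into Boole: probabilities are treated as intervals rather than points, and bounds on compound events come from inequalities rather than from an assumed precise joint distribution. A minimal sketch using the classical Fréchet bounds (my illustration, not Keynes's or Brady's notation):

```python
# Interval-valued ("non numerical") probability in the Boole/Keynes spirit:
# each probability is known only up to bounds [lo, hi], and a conjunction
# is bounded by the classical Frechet inequalities:
#   max(0, p + q - 1) <= P(A and B) <= min(p, q)

from dataclasses import dataclass

@dataclass
class IntervalProb:
    lo: float   # lower probability
    hi: float   # upper probability

    def conj(self, other: "IntervalProb") -> "IntervalProb":
        return IntervalProb(
            lo=max(0.0, self.lo + other.lo - 1.0),
            hi=min(self.hi, other.hi),
        )

a = IntervalProb(0.6, 0.8)   # only bounds are known, not a point value
b = IntervalProb(0.5, 0.7)
print(a.conj(b))             # bounds on P(A and B): roughly [0.1, 0.7]
```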

SSRN
Keynes’s Theory of Measurement is contained in Chapter III of Part I and in Chapter XV of Part II of the A Treatise on Probability (1921; 1973 CWJMK Edition): Keynes Stated That the Exposition in Chapter III of the A Treatise on Probability Was 'Brief', While the Exposition in Chapter XV, Part II, of the A Treatise on Probability Was 'Detailed'
Michael Emmett Brady | California State University, Dominguez Hills

Monday, June 25, 2018

Austin Clemens — Policymakers can’t tackle inequitable growth if it isn’t measured

What we need is to disaggregate growth and report on the progress of all Americans. Instead of the one-number-fits-all approach of GDP growth, this new system would report growth for Americans along the income curve, much as the graphs above do. It might indicate, for example, that the bottom 50 percent of Americans experienced growth of 1.3 percent while Americans in the top 1 percent of earners experienced 4.5 percent income growth.
Unfortunately, such a system is not currently possible. The graphs above were created using academic datasets for which no federally produced analog exists. GDP growth is reported by the U.S. Commerce Department’s Bureau of Economic Analysis, but the BEA is currently incapable of creating a system of distributional national accounts because it lacks the necessary data to do so. This problem is outlined in our recent report on the issue. Correcting it will require action from Congress and the executive branch. Without it, policymakers and pundits will continue to trumpet a measure of economic progress that does not tell the real story of the economy.
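To make the proposal concrete, here is a toy version of what "growth along the income curve" reporting amounts to. All numbers below are synthetic; the federal data needed to do this properly don't yet exist, which is the authors' point:

```python
# Illustrative only: disaggregated growth reporting on made-up incomes.
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.lognormal(mean=10.5, sigma=1.0, size=100_000)       # year-1 incomes
y2 = y1 * rng.normal(loc=1.02, scale=0.05, size=y1.size)     # year-2 incomes

# Mean income growth within bands of the year-1 distribution,
# instead of a single economy-wide growth number.
for q_lo, q_hi in [(0.0, 0.5), (0.5, 0.9), (0.9, 0.99), (0.99, 1.0)]:
    lo, hi = np.quantile(y1, [q_lo, q_hi])
    mask = (y1 >= lo) & (y1 <= hi)    # inclusive edges; good enough for a sketch
    growth = y2[mask].mean() / y1[mask].mean() - 1.0
    print(f"percentiles {q_lo:.0%}-{q_hi:.0%}: {growth:+.1%}")
```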
WCEG — The Equitablog
Policymakers can’t tackle inequitable growth if it isn’t measured
Austin Clemens | Computational Social Scientist at WCEG

Monday, October 16, 2017

Diane Coyle — Economic observation

On Friday all the researchers in the new Economic Statistics Centre of Excellence (ESCoE) met at its home in the National Institute to catch up on the range of projects, and it was terrific to hear about the progress and challenges across the entire span of the research programme.
One of the projects is concerned with measuring uncertainty in economic statistics and communicating that uncertainty. The discussion sent me back to Oskar Morgenstern’s 1950 On the Accuracy of Economic Observations (I have the 2nd, 1963, edition). It’s a brilliant book, too little remembered. Morgenstern is somewhat pessimistic about both how meaningful economic statistics can be and whether people will ever get their heads around the inherent uncertainty.
“The indisputable fact that our final gross national product or national income data cannot possibly be free of error raises the question whether the computation of growth rates has any value whatsoever,” he writes, after showing that even small errors in levels data imply big margins of error in growth rates.
This is a huge problem scientifically, and one that is acute in economics, since data are the evidence against which hypotheses, generated as theorems from the axioms of a theory, are tested.
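Morgenstern's warning about growth rates is simple arithmetic, and worth seeing with numbers (the figures below are mine, for illustration only):

```python
# If GDP levels carry even a 1% measurement error, a ~3% growth rate
# computed from two levels is only pinned down to within roughly
# +/- 2 percentage points.

def growth_bounds(level1, level2, rel_err):
    """Best- and worst-case growth when both levels have relative error rel_err."""
    low = level2 * (1 - rel_err) / (level1 * (1 + rel_err)) - 1
    high = level2 * (1 + rel_err) / (level1 * (1 - rel_err)) - 1
    return low, high

lo, hi = growth_bounds(100.0, 103.0, 0.01)   # "true" growth 3%, 1% level error
print(f"{lo:+.2%} to {hi:+.2%}")             # about +0.96% to +5.08%
```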

There are two foundational issues in economics. The first is data collection and processing in specific cases. The second is the worth of historical data.

Even using contemporary methods there is a lot of uncertainty and fuzziness. But when it comes to historical data and its comparative use, questions arise as to whether this "data" has any actual value at all.

A big problem arises from the rationalistic bent of conventional economics, which places great emphasis on formal modeling even though formal consistency has no bearing on semantic truth. Scientific models have to be tethered to reality through definitions and then tested against evidence. This requires accurate data.

Output can be no more accurate than the precision and reliability of measurement. This requires replicability of empirical testing.

Big Data might help overcome this, and in real time, but Big Data doesn't generate theory. Without theory there is no predictive capacity based on understanding in terms of causal explanation, which is considered a necessary condition of scientific explanation.

The Enlightened Economist
Economic observation
Diane Coyle | freelance economist and a former advisor to the UK Treasury. She is a member of the UK Competition Commission and is acting Chairman of the BBC Trust, the governing body of the British Broadcasting Corporation

Thursday, March 30, 2017

Austin Clemens — The once and future measurement of economic inequality in the United States

A slew of research into economic inequality replete with serious looking graphs may give the impression that measuring inequality in the United States is a solved problem. This is misleading. Inequality is still measured incompletely because existing U.S. government statistics do not attempt to match their estimates to the National Income and Product Accounts. NIPA is the source of the most reported and well-understood economic statistics such as the nation’s Gross Domestic Product and quarterly GDP growth figures.
Because existing estimates of economic inequality are not pegged to NIPA, they don’t account for all sources of income. They may exclude, for example, fringe benefits provided by employers such as employer-provided health insurance and retirement benefits, government transfers such as supplemental nutrition assistance or the child tax credit, government services such as public education, and tax expenditures such as the home mortgage tax deduction and tax breaks for employer-provided insurance. These exclusions, big and small, make many existing estimates of inequality fundamentally incomparable to our most well-established measures of economic growth....
Important analysis from the POV of stock-flow consistency follows. Efforts are underway to improve measurement by bringing estimates of income in line with national income accounting, in order to remove the inconsistencies and arrive at a better understanding of income and wealth distribution in the US.
The ability to look at the geographic distribution of inequality and at slices of income within different income groups teases the possibilities of a more robust project to disaggregate the National Income and Product Accounts statistics that are currently the most referenced statistics of economic progress in the nation. Devoting federal resources to the project could allow us to track inequality not only by income bands, but also by age, geographic location, gender, ethnicity, and type of income.
WCEG — The Equitablog
The once and future measurement of economic inequality in the United States
Austin Clemens

Wednesday, October 22, 2014

Robert Oak — Inflation Deceptive, CPI 0.1% Yet Some Items Off the Charts


Everything you want to know about CPI (almost).

My take is that judging economic performance on figures like per capita GDP for "growth," CPI for "inflation," and U3 for "unemployment" is myopic and misleading. But it's these numbers that make news headlines and dominate "popular economics."

BTW, unleaded was $2.77 today at Costco, Iowa City/Coralville. That's a pretty huge "tax cut."

The Economic Populist
Inflation Deceptive, CPI 0.1% Yet Some Items Off the Charts
Robert Oak

Saturday, July 26, 2014

Merijn Knibbe — Estimating capital. Robert Gallman edition

In economics, there is an unfortunate rift between academics and the economists who actually measure the economy. This means that academic economists give little attention to the extremely important question of how economic concepts relate to actual measurements – one reason why so much of their work is naïve (‘Ricardian’ households which spend more when taxes go up and the like). Fortunately, economic historians, who often have to do the measurements themselves, often bridge part of the gap. Robert Gallman has some highly relevant remarks about different ways to measure (nineteenth century USA) capital – and how these relate to the future, the past, uncertainty, savings, consumption foregone and replacement costs. This still leaves out important parts of the concept of capital like liquidity, ownership and the ‘overlapping generations’ problem – which however does not make these remarks less valuable.
"Naïve" or BS?

Real-World Economics Review Blog
Estimating capital. Robert Gallman edition
Merijn Knibbe

Sunday, April 6, 2014

Peter Dorman — GDP and Well-Being, Positive and Normative

...It all goes back to the primordial distinction between positive and normative analysis. Positive analysis is explanatory, predictive, or simply descriptive: what and why. Normative analysis is evaluative: should. We economists beat the heads of our poor charges each year in introductory classes with this distinction. Positive analysis, we say, can be validated by reasoning and evidence, while normative analysis is ineluctably conditional on the values of whoever is doing the evaluating.
Yes and no. The distinction is important, but it is not ironclad. There are lots of ways the two types of analysis are connected, and I won’t get into the philosophical issues here, but it is obvious, just from paying attention, that economics wants to have a single analytical framework to answer both positive and normative questions.
Economists don’t want one model to predict what the equilibrium outcome will be and another, using completely different elements and based on different assumptions, to rank that outcome against others according to how beneficial it is. Most models in economics do double-duty: they support positive and normative analysis equally...
And where does that leave us? The distinction between positive and normative analysis is important and needs to be maintained. There should be no presumption that the concepts and models that work for one will work for the other. We should not sacrifice the fit between model and purpose in one realm in order to be able to shoehorn it into the other. I think, though I will not follow it up here, that welfare economics has suffered mightily from attempts to squeeze its analysis into the same models that work well for positive—explanatory and predictive—work.

So let’s not visit the same damage on our properly-functioning positive models, like GDP. Keep and even improve GDP as a measure of the size of monetary flows within an economy, and look elsewhere for appropriate indicators of human well-being. (I have a hunch that economists, who are good at the first task, will prove to be less well-suited to the second.) Do positive well, and do normative well, and don’t let either get in the way of the other.
EconoSpeak
GDP and Well-Being, Positive and Normative
Peter Dorman | Professor of Economics, Evergreen College

Neoliberalism is a conflation of the social and political with the economic at the opposite end of the spectrum from Marx, who also assumed that economics is foundational to the social and political.

At bottom both these views are materialistic and deterministic in contrast to the humanistic and evolutionary.

In addition, the absolute dichotomies between positive and normative, public and private, and other such distinctions in economics are products of erroneous essentialist thinking that assumes that words denote essences or classes, thereby conflating special cases with a general case.

Words have a range of meaning dependent on context. At their extremes, concepts like positive and normative or public and private are dichotomous, but not as they approach the mid-range. For example, economics is law-based, and positive law is grounded in the ethical norms of a society. In addition, markets depend on a unit of account, and in modern economies the unit of account is the currency established by government.

This is not merely a feature of the richness of language. Cognitive science reveals that the positive and the normative overlap in brain functioning, where perceiving, reasoning, and feeling are entangled. This is obvious at the surface level when words have a neutral denotation but a positive or negative connotation. Often, though, the valence is not obvious at all: it is embedded and remains implicit rather than explicit.

Saturday, September 7, 2013

Brian Romanchuk — Inherent Problems With Single Good Economic Models (Wonkish)

A typical technique used in economic models is to base the analysis on the assumption that there is a single consumer good in the economy*. I argue that this causes some analytical blind spots for mainstream analysis of the economy. Since not everyone is interested in mathematical modeling, I’ll state my conclusions up front, and then explain in more detail later. This post was triggered by this article on Mike Norman Economics by Tom Hickey which discusses the problems with price indices....
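The price-index problem Brian refers to is easy to see with two baskets and the same prices. With more than one good, "the" inflation rate depends on whose basket you weight by; a single-good model assumes this problem away. A sketch with invented numbers:

```python
# Laspeyres index: cost of a fixed base-period basket at new prices.
prices_y1 = {"rent": 1000.0, "food": 300.0, "electronics": 500.0}
prices_y2 = {"rent": 1100.0, "food": 330.0, "electronics": 400.0}

def laspeyres(p0, p1, basket):
    cost0 = sum(p0[g] * q for g, q in basket.items())
    cost1 = sum(p1[g] * q for g, q in basket.items())
    return cost1 / cost0 - 1.0

renter = {"rent": 1.0, "food": 2.0, "electronics": 0.1}       # rent-heavy basket
gadget_buyer = {"rent": 0.5, "food": 1.0, "electronics": 2.0}

print(f"renter:       {laspeyres(prices_y1, prices_y2, renter):+.1%}")        # +9.1%
print(f"gadget buyer: {laspeyres(prices_y1, prices_y2, gadget_buyer):+.1%}")  # -6.7%
# Same prices, opposite "inflation" -- an aggregation fact that a
# single-good model cannot represent.
```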
Bond Economics
Inherent Problems With Single Good Economic Models (Wonkish)
Brian Romanchuk