Thursday, May 3, 2018

Jason Smith — Three sigma deviation in the 10-year rate

Now you might wonder how raising interest rates to only about 2% could trigger a recession today in the same way raising interest rates to 14% did in the 80s. I admit I don't have a good answer to this except to say that increasing labor force participation in the 80s probably provided a sufficient tailwind that the Fed had to do much more.

In any case, this makes for an excellent test of the model. Interest rates should come back down in the near term (about 6 months). A possible mechanism to bring them down is recession. The longer they stay at the 99.9% level of their range or beyond, the more likely it is that the model can be rejected.
Mirabile dictu! An economic model that is testable.

Information Transfer Economics
Three sigma deviation in the 10-year rate
Jason Smith
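
For readers wondering what a three sigma deviation amounts to in practice, here is a minimal sketch in Python (the numbers and the normal-error assumption are hypothetical, not Jason Smith's actual model) of how such a deviation is scored and why it corresponds to the "99.9% of their range" language:

# Minimal sketch: score an observed 10-year rate against a model prediction,
# assuming normally distributed model errors (hypothetical numbers throughout).
from scipy.stats import norm

model_mean = 2.3    # hypothetical model prediction for the 10-year rate (%)
model_sigma = 0.25  # hypothetical standard error of the prediction (%)
observed = 3.05     # hypothetical observed 10-year rate (%)

z = (observed - model_mean) / model_sigma  # number of sigmas above the prediction
percentile = norm.cdf(z) * 100             # one-sided percentile of the model's range
print(f"z = {z:.2f} sigma, at the {percentile:.2f}th percentile")
# z = 3 lands at roughly the 99.87th percentile (one-sided), which is where
# the "99.9% of their range" language comes from.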

12 comments:

Matt Franko said...

This from Warren’s latest:

“Another chart that indicates we could already be in recession:”

He thinks we are already in recession....

Tom Hickey said...

Advisor Perspectives
ECRI Weekly Leading Index Update
Jill Mislinski, 4/27/18

Matt Franko said...

So the system is regulated via the ratio of the 10-yr rate vs. the one-year rate? And not the ratio of capital to assets at the depository institutions?

So if we examine the regulations at the OCC and at the Fed there will be no reference to capital or assets at the depositories? Just the ratio of interest rates on government securities?

Matt Franko said...

"Interest rates should come back down in the near term (about 6 months)."

Yo... the Fed is currently saying they are going to RAISE RATES over the next 6 months...

a +0.25% hike for June is at 95% and another 0.25% by December is over 80%...

http://www.cmegroup.com/trading/interest-rates/countdown-to-fomc.html
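
For context, percentages like those are backed out of 30-day fed funds futures. A simplified sketch of the calculation in Python (CME FedWatch uses a more careful, meeting-date-weighted version; the prices and rates below are assumed for illustration):

# Simplified sketch: back an implied hike probability out of a fed funds
# futures price (hypothetical numbers; FedWatch weights by meeting date).
futures_price = 98.0625               # assumed futures price for the meeting month
implied_rate = 100 - futures_price    # market-implied average fed funds rate (%)
current_rate = 1.70                   # assumed effective rate before the meeting (%)
hike_size = 0.25                      # assumed size of a single hike (%)

prob_hike = (implied_rate - current_rate) / hike_size
print(f"Implied probability of a +{hike_size:.2f}% hike: {prob_hike:.0%}")
# With these assumed numbers the implied probability works out to about 95%.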

Brian Romanchuk said...

Let me get this straight: everyone else in economics (except Jason Smith) is an ignorant buffoon that has no understanding of the data, and then his big idea is: the shape of the yield curve is related to recession probability. Wow, the Sveriges Riksbank better warm up the “Nobel” prize reception room.

Any fool with an econometrics package and internet access can generate *testable* predictions. Anyone familiar with market commentary knows that this is what has been going on in volume over the last couple of decades. The question to ask: why do most seasoned economy watchers ignore those prediction efforts?

Tom Hickey said...

Brian, I took it as his saying that the empirics are the test of his model. If the anomaly persists with no recession, then the model is suspect.

I haven't seen him making excessive claims about his information transfer approach, and he admits that he is an amateur in economics. He decided against becoming a quant, a decision that likely involved considerable opportunity cost, showing that markets are not determinative.

ECRI had an excellent track record with its model, but it recently made a bad call that brought the model into question.

Brian Romanchuk said...

Sorry for the sarcasm, but he recently asserted that economists ignore data, and then he puts forth a model in one of the most heavily tested areas of economics. And seriously, where does he think bond yields come from? They don’t magically appear on the Fed’s H.15 report; they are determined by investors — who are busy assessing recession risks. So you would expect from first principles that you can find a relationship between the curve and recession probability.

Tom Hickey said...

That's true, but I don't see it as an issue, as I will explain. I think what he is saying about information transfer is that you don't have to know much about the contents of the black box, so to speak. The information speaks for itself once the data is available. (This is the approach of Big Data using proto-AI.)

For example, Feynman once said in effect that physics would be a lot harder if electrons had feelings. Jason is saying, not necessarily. The information in information transfer speaks for itself. The feelings, etc., may not be relevant within the limitations of the scope and scale of the model to yield useful results in terms of regularities. We just need to be given some simple data, observe the rules it may be obeying, and attempt to glean something from that. Jason is very circumspect here, as far as I can see. He admits that the output may not be huge, but it doesn't need to be huge to yield some useful results. The criterion, at this point in the development of the approach anyway, is being non-trivial.

In other words we may not need a lot of assumptions, whether conventional, behavioral or "heterodox," or to introduce multiple parameters in order to find some useful regularities.

For example, populations vary based on different institutional arrangements. We don't have to know a lot about a population or its institutional arrangements, formal and informal. We can find distributional effects just by plotting the data as it is collected without knowing anything about the arrangements behind them. Then we might examine the actual conditions to determine why the data are so distributed, although in the case of a normal distribution that would not be a big issue — it's "natural."

In this case, the how involves the behavior of the 10 yr. rate. The 10 yr. rate is a benchmark for firms in planning, along with the prime rate and the rate on blue chip corporate bonds, and this affects investment decisions. As I read what he is saying, the deviation is not forecasting a recession, but it is an aberration that needs explaining if it persists. An obvious reason would be impending recession. Then the question is why? He doesn't answer that, but the obvious reason is decreasing investment owing to shifting expectations revealed in market action. We don't know expectations, but markets signal them, albeit non-specifically, leaving us guessing somewhat.

On the other hand, the result may be due to the design of the model, in which case the model needs to be rejected as presently designed. Time will tell.

So the assertions here seem to me to be modest, and the tentative nature of models is accepted, unlike intuitively based models that claim to be based on "natural" principles (sound like philosophy?).

continued

Tom Hickey said...

continuation

But most of us here are probably more interested in the why rather than the how, although the how is also significant. We know from the data that there is a lot of inequality (how things stand). But there is little agreement over why, or even if it matters. Of course, it matters socially and politically, but that is not of interest in economics per se. The conventional view of economics is that distribution is irrelevant. It is just a natural outcome of market choices based on participants' rational preferences (assuming near perfect markets). That strains credulity.

I am not saying Jason is correct, and I don't have a horse in this race. I do suspect that an informational approach to economics may provide advantages and should be explored, which is what Jason is doing from his vantage as a physicist. That gives him the advantages of a different perspective, but it also involves the disadvantages of being out of field. So we need to cut him some slack there, and he has been pretty quick to change course when he discovers a blind alley or an obvious mistake. In addition, he is one of the few people who has bothered to look at the wider field, including heterodox economics. He is clearly into this with both feet after his day job.

I don't see that Jason is claiming that he is correct at this point, and I don't see him as being arrogant either. He has pointed out some alleged issues with existing approaches to economics and provided some good reasons for it. He has also critiqued proposed fixes. In addition, he has ventured a proposal for research based on information transfer and is sharing his research. All positives for me.

Jason is not alone in taking up Hayek's view of markets as information processors that spew out data that may contain regularities, even though the myriad transactions may appear to occur somewhat randomly, e.g., EMH. Evolutionary biologists like David Sloan Wilson have also suggested the information approach as possibly more productive than the intuitive approach of conventional economics or the behavioral approach that is gaining ground. But Jason is actually producing and testing models.

Since I am more interested in policy than economics per se, I look through the lens of political economy, sociology, economic sociology, economic anthropology, political theory, and history. If I had to identify with a particular form of economics it would be cultural economics (economic anthropology) and institutional economics, along with Boulding's approach to general systems theory, taking the broader sweep, both globally now and historically.

Jason has looked to this somewhat for explanation, too. For example, he appealed to the apparent effect of the entry of women into the work force in large numbers, owing to historical conditions (WWII) and shifting cultural norms, to explain a previous blip in the curve. That's not happening again now. So the question is, what is behind it? Recession? Or is it just the model? We'll see.

Tom Hickey said...

BTW, Chris Dillow criticizes Jason Smith on the score of faulting ecumenist for not valuing data.

Thirdly, all this is a counter-example to what Jason Smith says. He claims of both heterodox and mainstream economics that “neither appear to value empirical data. Neither appear to make accurate forecasts.” But this is not true in this case. There’s a vast library of research on empirical asset pricing, some of which – momentum and low-beta – survives the replicability crisis. And we do have some (so far!) successfully accurate forecasts, that momentum and low-beta stocks generally out-perform.

http://stumblingandmumbling.typepad.com/stumbling_and_mumbling/2018/05/progress-in-economics.html

Tom Hickey said...

Ha ha. "Ecumenist" should be "economists." Overzealous auto correction I didn't catch.

Matt Franko said...

“trigger a recession today in the same way raising interest rates to 14% did in the 80s”

It’s not the rates per se, it’s the effect the rate policy has on the asset composition of the depositories....

If the depositories have to increase their % of risk free assets in response to the rate policy, then credit to the productive sectors (risk assets) of the economy has to be reduced...

Fed’s Quarles out today (finally!) starting to admit this aspect of what they do (although still not understanding the full negative effects):

https://www.federalreserve.gov/newsevents/speech/quarles20180504a.htm