In what follows, I want to draw out some implications of an interesting post by Greg Obenshain at Verdad Capital. In the post, Obenshain laid out data showing a number of things about Treasury bonds – most notably, that they are a great investment if you are worried about the prospect of a recession or depression, and that this is so no matter the starting yield at which you invest.
One of the exhibits Obenshain showed, however, did not get sufficient attention. I think that it may have something to tell us about how we can start to think about his findings in an actual investment context.
Let us frame the discussion in terms of a standard 60/40 portfolio. But let us ignore the 60 for the moment and focus on the 40. Typically, the 40 can be disaggregated into cash and bonds. Usually, we are talking 30 in bonds and 10 in cash – although there are no firm rules for this.
Okay, so how might we start thinking about how to allocate between cash and bonds? One way to do this might be to take a familiar model from monetary economics and merge it with a familiar model from financial economics.
Below we have two such models. On the left is the liquidity preference (LP) model from Macro 101. The red LP curve shows that at higher/lower rates of interest (i), lower/higher amounts of cash (M) will be held by investors. The intuition is simple: a higher interest rate means a higher return on invested savings, and this means less cash held.
On the right, we have something resembling a CAPM model. I have simply called it the risk-return (RR) curve, and it shows that, in theory, a higher yield – in this case, the interest rate (i), as we are dealing with bonds – should compensate for higher risk, measured as volatility.
When we add the RR curve to the old liquidity preference model, we get what economists call an equilibrium outcome. What we find is that a decision-maker using such a model would balance their desire for the higher-yielding asset against their aversion to the higher levels of volatility entailed.
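The balancing act can be sketched numerically. Everything below is illustrative: the cash rate, the risk-aversion parameter `gamma`, and the linear RR curve are assumptions I've made up for the sketch, not estimates from the data.

```python
# Minimal sketch of the merged model, with assumed (not fitted) parameters.
# Cash yields i_cash; bonds yield i but carry volatility sigma which, per
# the upward-sloping RR curve, rises with the yield. A Merton-style
# mean-variance investor then holds the bond fraction
#   w* = (i - i_cash) / (gamma * sigma(i)**2),
# balancing the pull of yield against aversion to volatility.
def bond_weight(i, i_cash=0.01, gamma=8.0):
    sigma = 0.04 + 1.5 * max(i - i_cash, 0.0)  # assumed RR curve: vol rises with yield
    return (i - i_cash) / (gamma * sigma ** 2)

print(f"bond fraction at 3% yield: {bond_weight(0.03):.2f}")  # remainder sits in cash
```

A higher yield raises the numerator, but – via the RR curve – it raises the denominator too; the equilibrium weight is where those two forces net out.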
Now, here is where it gets interesting. Obenshain shows that volatility is not actually an increasing function of the interest rate. Chart below.
Instead what we see is that the one-year forward change in yields is ‘contained’ up until the interest rate rises to around 9%. After this, the market becomes more volatile. We can modify our little model to capture this.
Here I have included two separate boxes, A and B. In box A there is no risk-reward trade-off, while in box B there is. What this means is that when we are in box A – roughly below 9% nominal interest rates on a 10-year bond – our allocation to bonds is purely driven by our desire for yield. While in box B we have to also consider volatility.
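The two-regime idea can be sketched as a simple piecewise rule. The 9% threshold comes from the chart above, but the flat volatility level, the slope above the threshold, and the risk-aversion parameter are all hypothetical numbers chosen for illustration.

```python
# Hypothetical sketch of the box A / box B regimes. Below an assumed 9%
# threshold (box A) forward yield changes are 'contained', so volatility
# is treated as flat; above it (box B) volatility rises with the yield.
# The 'appeal' score is a mean-variance style yield-to-variance ratio.
def bond_appeal(i, threshold=0.09, gamma=8.0,
                sigma_low=0.05, sigma_slope=2.0):
    if i <= threshold:
        sigma = sigma_low                                   # box A: yield alone matters
    else:
        sigma = sigma_low + sigma_slope * (i - threshold)   # box B: vol kicks in
    return i / (gamma * sigma ** 2)

# Within box A, more yield is unambiguously better; past the threshold,
# rising volatility can more than offset the extra yield.
for i in (0.05, 0.08, 0.12):
    print(f"yield {i:.0%}: appeal {bond_appeal(i):.2f}")
```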
What does all this mean in plain English? We must be clear that this all rests on the assumptions embedded in the model. But if we accept them, then it means that on a purely mechanical asset allocation basis, lower yield Treasuries (sub 9%) are actually more attractive than higher yield Treasuries (above 9%).
“No way!” you might say, “When markets and/or the economy get rocky, holding higher-yielding Treasuries is a boon because their yields will fall further and provide higher returns.” But Obenshain’s data shows that this is not the case.
Below is a regression plot of Obenshain’s two samples. The regressions show the relationship between the starting Treasury yield and the total return over the period. The blue series are NBER recessions, the orange are S&P 500 drawdowns greater than 10%. (I have removed the extreme outlier in 1981-82 as it throws the regression off.)
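For readers who want to replicate the exercise, the method is just a simple OLS fit. The data below is a made-up stand-in for Obenshain’s samples – the point is the procedure and the near-zero slope, not the numbers.

```python
import numpy as np

# Hypothetical stand-in for the recession-period samples: starting
# 10-year yields vs total returns over each episode. With a slope
# near zero, the starting yield tells you little about crisis-period
# returns -- the 'indifference' result discussed in the text.
start_yield = np.array([2.1, 3.5, 4.2, 5.0, 6.3, 7.8])   # percent, assumed
total_ret   = np.array([8.0, 7.5, 9.1, 7.9, 8.6, 8.2])   # percent, assumed

slope, intercept = np.polyfit(start_yield, total_ret, 1)
print(f"slope = {slope:.3f} pp of return per pp of starting yield")
```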
There you have it. On a purely empirical basis we should be ‘indifferent’ to the starting Treasury yield insofar as we are seeking protection against recessions or stock market drawdowns. Yet on a theoretical basis – assuming we do not like volatility – we should be more inclined to invest in lower-yielding, sub-9% Treasuries.
It can therefore be said that it is, on balance, better to invest in lower-yielding than higher-yielding Treasury bonds. Pretty counterintuitive.
What are the time parameters of the data set?
In particular, if the data set extends back only to, say, the 1970s, it would be heavily skewed by the generation-long structural decline in interest rates, as well as by the policies of the Volcker shock.
In fact, a quick look at a graph of historical interest rates in America from 1798 onward shows the Volcker shock as the only period in which interest rates exceeded 9%.
If this is accurate, then the conclusions above are based on an exogenous event – not any kind of structural mechanism – and any conclusions derived therefrom are suspect.
I don’t see how Volcker is any more or less exogenous than any other interest rate hikes (it being called a ‘shock’ aside).
Could the conclusion be wrong due to limited sample? Sure. That’s always the case when dealing with econ and finance data.
We have one sample that shows my conclusion and zero that show the opposite of my conclusion. Evaluate as you will.
I would say that having a single period – the Volcker shock – as the only source of data concerning Treasury “value” at interest rates above 9% vs. below would absolutely constitute a small sample size, and any conclusions drawn from it are equally suspect.
So it isn’t just that you have “proof” based on a single period, but that interest rates exceeded 9% in only one period in the entire 220+ year history of the United States.
Could be. No hard and fast rules for weighing probabilities with small sample sizes. Up to you what you take from the evidence.
Personally, I find the data relevant within its time frame and potentially a leading indicator, all things considered – albeit not relevant going forward, given the challenges covid has presented after some tumultuous geopolitical antics in the lead-up. Yet it does denote a challenge to the orthodoxy of the period, and that is something to consider.
On another note, tell your ugly-shirt mate I’m ready for a showdown anytime ….