Econometricians, Financial Markets and Uncertainty: An Anthropological View

I recently read a paper by the anthropologist David Graeber entitled ‘The Sword, The Sponge, and the Paradox of Performativity: Some Observations on Fate, Luck, Financial Chicanery, and the Limits of Human Knowledge’. Graeber sent it to me because we are hoping to write an article on the emergence of probability theory and its application in the financial markets.

The working title of our paper is ‘The Betrayal of Freedom and the Rise of the Future Machines’. The basic idea is to show that the predictive powers of social sciences — including economics and finance — were shown to be fairly vacuous in the 1960s from a variety of different directions. The response by the horrified professions was to bury the evidence and double down on probabilistic prediction. This coincided with the rise of finance and the whole thing produced the weird world of meaningless numbers and extreme instability that we face today.

Anyway, here I will provide a gloss on Graeber’s excellent paper as an accompaniment to my recent piece on the anthropology of the economics profession. The paper is not available online except behind a paywall, but I urge readers to seek it out. It is one of the best psychological/anthropological descriptions of how and why people — from village elders to econometricians — try to use arcane and difficult methods to predict the future and dictate how people should organise themselves socially and economically. In order to discuss the paper I must first introduce two key terms to non-anthropologists.

The first is ‘mana’. This is a difficult term to pin down as it has many different dimensions in many different cultures. But Graeber’s main angle of attack in the paper is that mana is a power that people believe they can gain control over in order to predict and influence the direction of future events.

The second is ‘performativity’. Performativity is a sort of ‘thinking/doing makes it so’ phenomenon. For example, the Queen of England is the Queen of England because everyone believes her to be the Queen of England. If everyone in the world stopped believing tomorrow that she was the Queen of England she would cease to be the Queen of England. Her social position is literally only real insofar as we believe it. Her royal actions and symbolism are thus a way to ‘perform’ this belief and reinforce it.

Now, onto the essay. Graeber thinks that many of the phenomena that anthropologists know as mana are actually very similar to concepts that we in the West employ, such as fate, luck, chance and probability. Graeber notes that we as a society have been taught to think of events in terms of probabilities. But this way of thinking is not present in non-Westernised cultures. He cites the following conversation he had with an educated Malagasy while he was in Madagascar doing fieldwork:

David: What do you think the chance is that a bus will come in the next five minutes?
Zaka: Huh?
David: I was thinking of running up the hill to get some cigarettes. I figure it’ll take maybe five minutes. What do you think the chance is that a bus will come before I’m back?
Zaka: I don’t know. A bus might come.
David: But is it likely to?
Zaka: What do you mean?
David: You know, what’s the chance? Is there a very large chance it will come? Or just a small chance?
Zaka: A chance can be big or small?
David: Well, is it more like 1 in 10? Or more like 50–50?
Zaka: How would I know? I don’t know when the bus is going to arrive. (p32)

It is clear that Zaka the Malagasy finds David the American’s questions absolutely bizarre. They simply do not make any sense to him. Despite being educated, he does not even think to ascribe a numerical estimate to chance. Graeber concludes:

Even when my Malagasy did become fluent, I never heard people employing language in the way that people would do so in America, for example, “I’d say 3 to 1 the cops won’t even notice that I’m parked here.” In fact, I discovered not only that such a way of thinking was unknown to most Malagasy, but also that, once explained, it seemed just as peculiar, exotic, and ultimately unfathomable as any of those classic anthropological concepts, such as mana, baraka, or śakti, regularly employed in other parts of the world to put a name on the play of chance or to explain otherwise inexplicable conjunctures or events. Once I began to think about it, I realized that this puzzlement was a pretty reasonable response. Chance actually is a very peculiar concept. Zaka was right: the main thing is that we do not know when the bus is going to arrive. This is the only thing that we can say for certain. Anything could happen—the bus might break down, there might be a strike, an earthquake might hit the city. Of course, all these things are, from a statistical perspective, very unlikely, million-to-one chances, really. But it is that very application of numbers to the unknowable that struck my Malagasy interlocutors as bizarre—and not without reason. What a statistical perspective proposes is that we can make a precise quantification based on our lack of knowledge, that is, we can specify the precise degree to which we do not know what is going to happen. (pp32-33)

I am entirely in agreement with Graeber here. We can assume that the bus was not on a strict timetable because we are not dealing with an advanced and well-organised society. Thus putting a numerical estimate on the chance that the bus would arrive while Graeber went to get cigarettes was a pretty mystical thing to do altogether. (I do find it amusing that Graeber said that the chances of a strike, the bus breaking down or the earthquake hitting were ‘a million-to-one’ though… it seems that even he finds it difficult to get away from his preconceived cultural way of thinking about such things!)

Think about this. If I have a fair, balanced coin I can give an objective probability estimate: the chance of the coin landing on either heads or tails is 0.5, or 50%. This number is not mystical. It is objective. But when Graeber asks what the chance is of the bus turning up, or of the cops noticing that he is parked illegally, these estimates are not objective. They are entirely mystical. They are, in fact, as Graeber rightly points out, the same sorts of magical thinking that many so-called primitive cultures use to try to grapple with the future.

The amusing thing, however, is that we actually employ people and give them social prestige to engage in these mana-like numerical concoctions. This is just like in supposedly more primitive societies where soothsayers and astrologers are given status as village elders and power to decide how the society should be organised. In our Western societies we hire economists and financial experts. Graeber writes:

Almost invariably, too, there are certain specialists who claim privileged, exclusive knowledge. In very hierarchical societies, elites will either try to monopolize such matters themselves (e.g., Azande princes maintain exclusive rights to officiate over the most important oracles) or attempt to forbid them as forms of impiety (both Catholic and Sunni authorities have been known to do this at one time or another). There is also a frequent, although not universal, tendency for these techniques to draw on forms of knowledge seen as foreign and exotic: the Arabic lunar calendar in Madagascar, Chinese numerology in Cuba, Babylonian zodiacs in China, and so on. (It is not the past, perhaps, but the future that really is a foreign country.)
From this perspective, it is quite easy to see that economic science has become, in contemporary North America above all, but in most of the industrialized world (or, perhaps better said, financialized world), exactly this sort of popular ‘technology of the future’. There are specialists who try to keep a monopoly on certain forms of arcane knowledge that allow them to predict what is to come, although in a way that, insofar as the situation becomes political, inevitably slips into performativity. At the same time, fluctuations in the financial markets, speculation on stocks, investments, and the machinations of commodities traders or central bankers, all these have become the stuff of everyday arguments over coffee or beer or around water coolers everywhere—just as they have become the veritable obsessions of certain cable watchers and denizens of Internet chat pages. There is also a tendency—quite typical of such popular technologies of the future as well—for idiosyncratic (‘crackpot’) theories to proliferate on the popular level. (pp39-40)

Graeber is absolutely correct here. This is where we come to the notion of ‘performativity’. If we examined the situation objectively we would quickly see that these people are mostly engaged in soothsaying but we do not. Why? Because the performance buttresses our hierarchical social and economic structures and lends them credibility and weight. We do not want to know the reality: namely, that our social structure is determined in line with political and social power. And thus we create fictions that ground it as being somehow ‘objective’.

The statistical estimations are mostly about performance in this regard. They are similar to a symbolic ceremony performed by the Queen of England. And in the financial markets this performance generates dynamics of its own. For example, when everybody is convinced that the markets will be calm they will get statistical read-outs saying that the markets will be calm, precisely because they are acting as if the markets will be calm. But when they start to panic because something unforeseen happens, their statistical read-outs will suddenly change.

In truth their ‘future machines’ are just feeding back to them their own activity. It is like an animal looking in a mirror and thinking that another animal is staring back at it when, in reality, it is just seeing its own reflection. The whole situation would be hilarious were these dynamics not wreaking havoc on our societies. They are also trapping us as political actors because they give us a sense of fatalism about the future. They encourage us to think that the ‘experts’ have the whole thing figured out and that ‘politics’ should be structured in line with this. This is the true poison of modern economics and it is what makes modern economists such dangerous clowns. The unfortunate thing is that almost every single one of them, wrapped in their socially-sanctioned delusion of scientificity, has absolutely no idea what they are doing.


Economists: An Anthropological View


‘Life Among The Econ’ is a satirical paper written by the economist Axel Leijonhufvud and published in 1973. In the paper Leijonhufvud refers directly to that great work of cultural anthropology, The Savage Mind, by the French structuralist anthropologist Claude Levi-Strauss. Before moving on to the paper it is probably best to understand something about Levi-Strauss’ work, as I think the content of the paper would otherwise be lost on many economics-oriented readers.

Levi-Strauss argued that cultural life in primitive societies rested on haphazard and rather arbitrary organisation. People would create symbols and systems of organisation the only purpose of which was to facilitate smooth social relationships. While the members of a tribe might imbue the symbols with seemingly enormous value — and people who violated them might be severely punished — looked at from the outside they seemed rather arbitrary and changed from tribe to tribe mainly based on chance.

Anthropologists had long recognised that tribal structures — indeed, all cultural structures — require norms and myths to live by. Myths are stories that give life meaning, while norms are somewhat like laws or prohibitions. Perhaps the best way to think about this is to take a law from our own societies that now seems antiquated but which was taken seriously only a few decades ago: namely, the laws against homosexuality.

Why were there laws against homosexuality throughout most of the 20th century? Does homosexuality harm anyone? Today most people would say that it does not. So, why the laws? Simply because our cultures developed in that way. Other cultures did not. In Ancient Greece, for example, homosexuality was by no means against the law. The laws that remained in place until the end of the 20th century mostly derived from the Judeo-Christian tradition that we inherited. There was nothing functional about them. Things just happened that way. (Indeed, readers of cultural history will know that our own culture is basically unique in attributing to homosexuality an actual sexual identity. In most cultures sexual activity is dealt with based on acts, not on proclivities that are supposed to be immutable.)

The point is that much of cultural organisation is arbitrary. It often serves no real purpose. Evolutionary psychologists might tell you otherwise, but they are just modern-day myth-makers: they tell stories that try to give us meaning and, ultimately, justify certain cultural patterns that we hold dear by imposing the narrative structure of evolutionary biology on cultural development metaphorically, much as marginalist economics transferred metaphors from physics to the social sciences. Levi-Strauss introduced the idea of the ‘bricoleur’ as the person who engages in such constructions.

The bricoleur is not a person conceived of as acting in line with a plan or toward a goal. For example, if I go to the shop, buy foodstuffs and cook food I obviously have a plan and a goal. A bricoleur — or, more accurately, a person partaking in the process of ‘bricolage’ — just throws things together as he or she sees fit. There is no real point to the activity, but it persists in all human cultures and makes up a key component of our cultural organisation. Perhaps the easiest way to picture this is to think of a child playing with Lego bricks or an artist painting an abstract piece.

Society bestows upon bricoleurs important roles. In primitive societies shamans or priests or some other caste are typically anointed to serve this role. They come up with stories of various kinds, contact the spirit world and even engage in fake healing in societies without medicine, in which people feel they need to do something in the face of illness and disease. Basically their role is to give meaning to those around them. In order to do so they are imbued with a certain aura that we do not find in, for example, a modern dentist or an advertiser.

This aura allows their interpretations of the world to be accepted largely without question as these men are supposed to possess abilities and traits that the lay person could never understand. Obviously, modern day religion serves basically the same function, as do cults and even con-men who sell fake medicine to desperate and gullible people.

Leijonhufvud’s paper is written as a satire. But like all the best satires it serves a serious purpose. He makes up a tribe that he calls the ‘Econs’. He is referring to economists, of course, and he is perfectly correct in doing so: economists do indeed serve basically the same function in our society as shamans do in primitive societies — namely, they tell stories about how the economy works and about how society should organise itself. He then goes on to say that the economics profession has actually developed as a sort of micro-tribe within society. He particularly notes the fetishisation of the model — which he jokingly refers to as the ‘modl’. He writes:

[Excerpt from ‘Life Among the Econ’ on the ‘modl’]

Anyone who has dealt with economists will be all too familiar with this. The model is actually part of the person’s identity. They internalise it and, when they engage in rivalry with other economists, they compare their models. Viewed from the outside with a critical eye it is a very strange process. But if you have an appreciation for anthropology you will quickly see what is going on. The whole thing is about group formation and social alliances. Because economists have long turned away from the real world (unless it is viewed through the model, of course!) they need another way to argue about things (after all, aren’t academics supposed to argue about things?). So, what they have done is form into various social groups that then build and compare models. It is really rather amusing when you have a sense of irony about the whole thing.

Leijonhufvud also noted something more ominous though. He saw clearly that the tendency was actually moving further and further away from empirical reality. He wrote:

[Excerpt from ‘Life Among the Econ’ on the drift away from empirical work]

For a while economics survived the turn to weird ‘savage mind’ style behavior. People who were more inclined toward the reality of doing practical work had a set of tools that were at least somewhat workable. But Leijonhufvud could see this changing, especially with the rise of the general equilibrium theorists. Today this tendency has completely taken over. DSGE models are claimed to be cutting edge for policy analysis and some even believe that Real Business Cycle models say something about the real world. Whereas the likes of Frank Hahn knew well that the general equilibrium framework was just an intellectual game, their students took it literally. That is when we entered what we might call the ‘dark age of economics’, and that is where we remain today.

You need only read the altogether strange defences of model-based forecasting to see that something properly weird is going on in the land of the Econ. It is well-known that the models don’t do much better than ‘intelligent guesswork’. (I would say they do worse than ‘intelligent guesswork’ but then I do not apparently have the authority to decide who is doing guesswork and whether they are ‘intelligent’ enough to be considered…). Wren-Lewis, for example, comes up with the following justification:

Take output for example. Output tends to go up each year, but this trend like behaviour is spasmodic: sometimes growth is above trend, sometimes below. However output tends to gradually revert to this trend growth line, which is why we get booms and recessions: if the level of output is above the trend line this year, it is more likely to be above than below next year. Using this information can give you a pretty good forecast for output. Suppose someone at the central bank shows that this forecast is as good as those produced by the bank’s model, and so the bank reassigns its forecasters and uses this intelligent guess instead.

This intelligent guesswork gives the bank a very limited story about why its forecast is what it is. Suppose now oil prices rise. Someone asks the central bank what impact will higher oil prices have on their forecast? The central bank says none. The questioner is puzzled. Surely, they respond, higher oil prices increase firms’ costs leading to lower output. Indeed, replies the central bank. In fact we have a model that tells us how big that effect might be. But we do not use that model to forecast, so our forecast has not changed. The questioner persists. So what oil price were you assuming when you made your forecast, they ask? We made no assumption about oil prices, comes the reply. We just looked at past output.

Whoa! What the hell just happened there? If I were working for a central bank and someone said to me “what will happen to output if oil prices rise substantially?” I would dutifully go and examine the relevant statistics. I would look to see whether the data showed that large oil price increases had large effects on output. If the data showed that they did I would return to the person and say “after examining the evidence it looks to me like there is a good chance that a substantial rise in the oil price will affect output”. If I had a little more time I would then go and try to get data for other countries and see what it said, to substantiate my findings.
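
To make this concrete, here is a minimal sketch of the sort of evidence-based check I have in mind, assuming Python with pandas_datareader and statsmodels. The FRED series IDs (GDPC1 for real output, WTISPLC for the oil price), the quarterly frequency and the one-quarter lag are all my own choices for illustration; nothing here comes from Wren-Lewis or from any central bank procedure.

```python
# A deliberately simple, evidence-based check: do large oil price rises
# coincide with weaker output growth in the following quarter?
# The series IDs, sample start and lag structure are assumptions.
import pandas as pd
from pandas_datareader import data as pdr
import statsmodels.api as sm

gdp = pdr.DataReader("GDPC1", "fred", "1960-01-01")    # real GDP, quarterly
oil = pdr.DataReader("WTISPLC", "fred", "1960-01-01")  # WTI spot oil price, monthly

df = pd.concat(
    [gdp["GDPC1"].pct_change(),                           # quarterly output growth
     oil["WTISPLC"].resample("QS").mean().pct_change()],  # quarterly oil price change
    axis=1, keys=["gdp_growth", "oil_change"],
).dropna()

# Regress output growth on the previous quarter's oil price change
X = sm.add_constant(df["oil_change"].shift(1).dropna())
y = df["gdp_growth"].loc[X.index]
print(sm.OLS(y, X).fit().summary())
```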

Wren-Lewis, on the other hand, would consult his model which would give him whatever result that he himself had built into it. Do you see the difference here? I hope you do. What I would be doing would be evidence-based research. It would be by no means perfect for any number of different reasons. But it would at least be evidence-based. Wren-Lewis would consult a model that comes pre-built with an estimate of how oil price increases should affect output. This is absolutely ‘savage mind’ behavior. Wren-Lewis would have us construct what anthropologists call ‘totems‘ and then consult these totems when we need insights into our social problems.

This is what I mean when I say that the economists have come to believe their own fictions. It is very strange stuff altogether. They build the models based on the a priori assumptions that they hold. Seemingly they then forget these assumptions. Then, when they need an answer, they consult the model, which spits back at them what they already built into it. This output is then assumed to be Truth because it comes imbued with a sort of aura. In more practical, real-world sciences this has a name: it’s called GIGO, which stands for Garbage In, Garbage Out. In more primitive societies this is similar to constructing altars to supposed oracles and then going to these altars to find out about the future, only to find a Truth that you yourself have already built into the altar. (For a more colloquial example think of when people read images into clouds.)

Sometimes after a few beers some of my friends — many of whom have PhDs — ask me about this modelling stuff and how it all works. They really do view it as being a sort of opaque practice, albeit one that they are inclined not to trust. When I explain it to them they literally don’t believe me a great deal of the time. There is one exception though — and he is a rather well-known anthropologist. Make of that what you will.


Confusing Accounting Identities With Behavioral Equations


Here’s an interesting little debate from earlier this year that I came across yesterday evening. It is between a number of market analysts over whether the current stock market is overvalued. Why is that interesting? Because the argument is focused on one of the best known foundational stones of heterodox economics: the Levy-Kalecki profit equation.

Weird? Not really. James Montier, a well-known investment analyst at GMO, has made the profit equation central to his forecasting work for a number of years. You can see his latest offering here (page 5). Montier’s argument is that profits in the US at the moment are heavily reliant on the still rather large budget deficits being run there. I made a similar argument on the Financial Times Alphaville blog over a year ago.

This is actually a non-controversial point. Private sector saving net of investment is equal, to the penny, to the budget deficit minus net imports. This is intuitively obvious: when the government spends money, that money accrues either to a private sector institution within the country or to a foreigner abroad. If we then divide the private sector into households and firms we quickly see that, for given levels of investment, household savings and net imports, every extra penny of budget deficit shows up as… you got it: profits.
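
For readers who want the accounting spelled out, here is a minimal sketch of the identities involved, written in the same symbols the post uses further down (S for private saving, Sh for household saving, P for profits, I for investment, G – T for the budget deficit and X – M for net exports). Treating ‘profits’ simply as corporate saving is my own simplification.

```latex
% National income from the expenditure side:  Y = C + I + G + (X - M)
% National income from the income side:       Y = C + S + T
% Subtracting one from the other gives the sectoral balances identity,
% and splitting S into household saving S_h and profits P (S = S_h + P)
% and rearranging gives the Kalecki-Levy profit equation used below.
\begin{align*}
  (S - I) &= (G - T) + (X - M) \\
  P       &= (I - S_h) + (G - T) + (X - M)
\end{align*}
```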

All of this is just basic accounting. The above cannot be in any sense ‘untrue’ because this is how the accounting apparatus works. So, why is David Bianco from Deutsche Bank disputing this? Basically he confuses an accounting identity with a behavioral equation. He writes:

This construct assumes that no savings are recycled as investment. This is not a small matter. It represents a major conceptual flaw in this framework, which taints the entire analysis. The equation above would only be correct if all savings were stuck in a Keynesian liquidity trap.

Um… no. It does not assume that. The above equation says nothing about how an increase in savings will affect the level of investment. It is just an accounting identity. As Keynes’ protégée Joan Robinson argued many years ago in her book An Introduction to Modern Economics (co-authored with John Eatwell):

For purposes of theoretical argument we are interested in causal relationships, which depend on behavior based on expectations, ex ante, whereas the statistics necessarily record what happened ex-post… An ex-post accounting identity, which records what has happened over, say, the past year, cannot explain causality; rather it shows what has to be explained. Keynes’ theory did not demonstrate that the rate of saving is equal to the rate of investment, but explained through what mechanism the equality is brought about. (pp216-217)

If Bianco were clearer about what he was arguing he would say something like this:

“Montier may be correct on the accounting but he is wrong on the causality. An increase in savings by firms and households will lead to an increase in investment by those same firms and households.”

This is, of course, the mainstream (pre-Keynesian) argument that assumes that, as savings increase, interest rates fall across the board and investment increases.

Note that this is a causal argument. At the point in Montier’s analysis when he summons Kalecki, he is not making a causal argument. He is just stating accounting facts. But Bianco’s argument implies that when we see increases in savings in the economy we should also see increases in investment. Does the data reflect this? Short answer: no. Below is a regression plotting gross private savings against gross private domestic investment (all data from FRED).

[Scatter plot: gross private savings against gross private domestic investment, 1948–2013 (FRED data)]

As you can see, the relationship is extremely weak. If I were Bianco I wouldn’t be buying stocks based on a causal argument that yields an R-squared of 0.1095.

But perhaps, it will be argued, it takes a while for the savings to transmit into investment. Nope. If we lag the investment variable by a year the outcome is even worse: we get a slightly downward-sloping line (negative relationship) with an R-squared of 0.0022.

Even more interesting is what happens if we turn to the past 15 years. In the above regression we can at least see a positive correlation between the two variables, however weak it might be. But if we take the last 15 years we see a negative correlation, implying that as savings increase, investment decreases, and vice versa.

[Scatter plot: gross private savings against gross private domestic investment, 1998–2013 (FRED data)]

We even get an R-squared that is a bit higher than in the first regression — although I’d be somewhat reluctant to bet the house on it.
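
For anyone who wants to poke at these numbers themselves, here is a rough sketch of how the three regressions might be reproduced in Python. The FRED series IDs (GPSAVE for gross private saving, GPDI for gross private domestic investment), the annual frequency and the exact sample windows are my guesses at what lies behind the charts, not something taken from the post itself.

```python
# Rough reproduction of the three savings-vs-investment regressions above.
# Series IDs, annual frequency and sample windows are assumptions.
import pandas as pd
from pandas_datareader import data as pdr
import statsmodels.api as sm

raw = pdr.DataReader(["GPSAVE", "GPDI"], "fred", "1948-01-01", "2013-12-31")
df = raw.resample("A").mean().dropna()
df.columns = ["S", "I"]  # gross private saving, gross private domestic investment

def r_squared(y, x):
    """OLS of y on a constant and x; return the R-squared."""
    return sm.OLS(y, sm.add_constant(x), missing="drop").fit().rsquared

print("Contemporaneous, 1948-2013:", r_squared(df["I"], df["S"]))
print("Investment lagged one year:", r_squared(df["I"].shift(-1), df["S"]))
print("Contemporaneous, 1998-2013:", r_squared(df.loc["1998":, "I"], df.loc["1998":, "S"]))
```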

Now, some people with some basic macro training may be a little confused about all this. After all, doesn’t savings equal investment? Isn’t this an identity that we are taught in the first pages of our macro textbook? This is, after all, precisely what Bianco complained about when he wrote of Montier’s analysis:

This is neither the point of the argument nor the general condition, thus the [Kalecki] equation above fails to recognize that: Investment = Savings.

 
But again, there is a failure to understand what is being discussed here. When we say that S = I in macroeconomics we are referring to a hypothetical closed economy with no government sector. In an economy with a government sector and an external sector this equation becomes the well-known sectoral balances identity which we can rearrange to read S = I + (G – T) + (X – M).
 
This explains the seeming contradiction in the above regressions. Savings do not equal investment in an open economy with a government sector at all. Rather it depends on the balances of the other sectors. But then we’re right back to where we started. We break savings down into household savings (Sh) and profits (P), rearrange and we get P = (I – Sh) + (G – T) + (X – M).
 
Now, if Bianco wants to bet on the US stock market because he thinks that US investment is going to increase substantially in the next few years as the government deficits decline, that is his prerogative. But in doing so he must recognise that he is making a causal argument (one that is totally unsupported by the data, by the way). He believes that the large increase in savings by firms and households since 2008 will trigger a wave of investment spending in the coming years, whereas Montier thinks that this is unlikely. In his latest offering Montier writes:
 
Given that the deficit is “forecast” by such august bodies as the Congressional Budget Office to decline significantly over the next few years, it will take either a remarkable recovery in investment spending or a significant re-leveraging by the household sector to hold margins at the levels we have witnessed of late. Thus, embedding such high margins in a valuation seems optimistic to me.(p4)
 
Given that corporations are currently using the cash they are accumulating to engage in share buybacks, and given that the household sector has been rebuilding its balance sheet over the past few years — after which we can only assume it will be a tad more conservative in its borrowing — I think that Bianco’s argument is something of a leap of faith. But if he is correct then a major US recovery is just around the corner. Watch this space.
________________
** It is also not true that such a situation would only occur in a so-called “liquidity trap”. Even by the definition that the neo-Keynesians have given this term — which I think is incorrect — a liquidity trap is when changes in monetary policy have no effect on the interest rate. But within the ISLM model investment may also be unresponsive to changes in the interest rate, in which case changes in savings would not affect investment. This is what happens when the IS-curve becomes vertical and the economy is said to be stuck in an “investment trap”.

Taxation, Government Spending, the National Debt and MMT


The other day my friend Rohan Grey — a lawyer and one of the key organisers behind the excellent Modern Money Network (bringing Post-Keynesian economics to Columbia Law School, yes please!) — directed me to an absolutely fascinating piece of writing. It is called ‘Taxes For Revenue Are Obsolete’ and it was written in 1945 by Beardsley Ruml. Ruml was the director of the New York Federal Reserve Bank from 1937-1947 and also worked on issues of taxation at the Treasury during the war.

The article lays out the case that taxation should not be focused on revenue generation. Rather, Ruml argues, it should be thought of as serving other purposes entirely. He writes:

[Excerpt from ‘Taxes For Revenue Are Obsolete’]

Basically Ruml is making the same case that the Modern Monetary Theorists (MMTers) make: a country that issues its own sovereign currency and is unconstrained by a gold standard does not require tax revenue in order to fund spending. This is because the central bank always stands ready and able to buy any sovereign debt issued that might otherwise lead to the interest rate rising. Indeed, it does this automatically in the way that it conducts its interest rate policy. Ruml then outlines what taxation is really for in such a country.

[Excerpt from ‘Taxes For Revenue Are Obsolete’]

This is a fantastic summary and I really couldn’t put it better myself. The interesting question, however, is why people were making such statements at this moment in history. It should be remembered that the economist Abba Lerner had published a paper entitled ‘Functional Finance and the Federal Debt’ just two years earlier which made a very similar case. In that classic paper he wrote:

[Excerpt from ‘Functional Finance and the Federal Debt’]

So, what was it about this moment in history that allowed for such a clear-eyed view of government spending and taxation policies? The answer is simple: the war. World War II allowed economists, bankers and government officials to see clearly how the macroeconomy worked because the government was basically controlling the economy. World War II was perhaps the only time in history when capitalist economies were run on truly Keynesian principles. (You can make a case that the Nazi economy in the 1930s was also run on these principles, so perhaps it is better to say: the only time a capitalist economy in a democratic state was run on such principles.)

This meant that those working in government institutions and banks could see exactly what was happening and why it was happening. Because the central banks were exercising full control over the market for government debt and because the governments were running massive fiscal deficits it became crystal clear what the taxation system was really doing: first and foremost it was suppressing aggregate demand for goods and services in certain parts of the economy. In doing so it had two broad functions: an anti-inflation function and a redistribution function.

The experience of the war, I would argue, was the main reason why the neo-Keynesian economists in the US actually understood macroeconomic policy in a clear-sighted way. I do not believe that their theories would have allowed them to properly understand the economy. But their experiences in the war — from reading the daily newspapers to working in economic institutions — left a lasting impression that allowed them to properly understand the macroeconomic policy tools in the 1950s and 1960s. The textbooks that they were teaching said one thing but their experiences in the war told them another. (An exception to this might be James Tobin whose theoretical writings do reflect some of the war experiences).

When the younger generation came of age in the 1970s the mainstream economic theory ensured that they had absolutely no idea what they were talking about. They only had what they were being taught in the classroom and did not have the real-world experience that the older generation had. Everything went downhill from there and that, I think, is where the seeds were sown for the economic turmoil and confusion we live with today. It is also the key reason why the economists of the next generation must be taught in an entirely different way from the previous generation.

People like Rohan Grey and his colleagues at Columbia Law School, as well as the blogs, are having an enormous effect in this regard. But the mainstream institutions simply cannot respond because they are filled with dinosaurs who sense their underlying irrelevance. This makes them defensive and basically impossible to deal with. Nevertheless, events today are also having their own effect, even if these are not as pronounced as the effects that World War II had on the economists of the day. The failure of the QE programs to generate employment was a key step in giving people a clear-sighted view of what monetary policy is and how it works (or doesn’t!). Meanwhile, the clear stagnation that has set in after the crisis may help to loosen the idea of long-run full employment equilibrium that the mainstream holds so dear. Again, the mainstream will likely retain such ideas in theory, but they will probably be easier to deal with in practice. We’ve seen this movie before. It doesn’t end well. But it makes our lives easier in the short run and puts the mainstream on the defensive with regard to theory.

Meanwhile, there are a couple of people who might be considered the Rumls of our time. Former Deputy Secretary of the Treasury Frank Newman is a good example. His book Freedom From the National Debt is a document on a par with Ruml’s excellent 1945 article. He has also had some play in the national news media — though not nearly enough, of course. I will leave the reader with a short clip from Fox News in which Newman makes very clear that the national debt in the US is a misnomer and not a huge concern (for a more in-depth analysis, try this talk he gave at Columbia Law School)**.

**Note that the case Newman makes in the Fox News clip is the same one that I was making in this post. It is an argument that basically allows us to show that the endogenous money argument largely works even if we ignore central bank action, and it runs directly contrary to the narrative embedded in the ISLM model.


Does the Central Bank Control Long-Term Interest Rates?: A Glance at Operation Twist


Although it is less talked about today, many economists assume that while the central bank has control over the short-term rate of interest, the long-term rate of interest is set by the market. When Post-Keynesians make the case that, in a country that issues its own sovereign currency, the rate of interest is controlled by the central bank and the government never faces a financing constraint, some economists deny this and point to the long-term rate of interest, which they claim is under the control of the market. They say that if market participants decide to put the squeeze on the government they can raise the long-term rate of interest.

Keynes himself was wholly convinced that the central bank had full control over the long-term rate of interest. In a 1933 open letter to US President Franklin Roosevelt Keynes wrote:

The turn of the tide in Great Britain is largely attributable to the reduction in the long-term rate of interest which ensued on the success of the conversion of the War Loan. This was deliberately engineered by means of the open-market policy of the Bank of England. I see no reason why you should not reduce the rate of interest on your long-term Government Bonds to 2½ per cent or less with favourable repercussions on the whole bond market, if only the Federal Reserve System would replace its present holdings of short-dated Treasury issues by purchasing long-dated issues in exchange. Such a policy might become effective in the course of a few months, and I attach great importance to it.

What Keynes was advocating was what has since been referred to as Operation Twist. This was a policy that was first initiated in the US during the Keynesian heyday under President John F. Kennedy. The Wikipedia page provides a nice overview of how it worked — note how it is identical to Keynes’ suggestion in his 1933 letter:

The Fed utilized open market operations to shorten the maturity of public debt in the open market. It performs the ‘twist’ by selling some of the short term debt (with three years or less to maturity) it purchased as part of the quantitative easing policy back into the market and using the money received from this to buy longer term government debt.

The policy basically did nothing. Below are the interest rates of the era.

[Chart: US short-term and long-term interest rates in the early 1960s]

As we can see, the long-term Treasury yield responded to the lowering of the Fed funds rate in 1960, but we can detect no change in the spread between the short-term and long-term yields in 1961. The spread begins to close in 1962, but this is a result of increases in the Fed funds rate.

Recently the Federal Reserve Bank of San Francisco released a study claiming that the program had actually worked. I won’t get into the methodology of the study but I think it’s basically rubbish. The fact is that the stated aim of the program did not come to pass in any meaningful way. The reason the Fed probably commissioned the study is that it tried Operation Twist once more in 2011. The Fed described the program thus after it had been completed:

Under the maturity extension program, the Federal Reserve sold or redeemed a total of $667 billion of shorter-term Treasury securities and used the proceeds to buy longer-term Treasury securities, thereby extending the average maturity of the securities in the Federal Reserve’s portfolio. By putting downward pressure on longer-term interest rates, the maturity extension program was intended to contribute to a broad easing in financial market conditions and provide support for the economic recovery.

So, did it work? Not unless the Fed were lying about when they started the program. The press release at the time dates the program to September 21st 2011. Keeping that in mind let’s look at the long-term interest rates in that period. (We do not bother showing the short-term interest rates here because, as everyone knows, they are basically zero throughout the period).

[Chart: US ten-year Treasury yield, 2011–2012]

Do you see that significant drop in long-term interest rates of about 1%? Well, that occurs in July 2011 and reaches its bottom in September 2011. This opens the possibility that the Fed actually undertook the program two months before they announced it. Unfortunately, there is no hard evidence of this and unless such evidence emerges we must assume that the second attempt at Operation Twist was indeed a failure.

Does this mean that Keynes was wrong and that the central bank does not control the long-term rate of interest? No. Keynes was actually confusing two distinct things in his letter to Roosevelt; namely, whether the central bank controlled the long-term rate of interest and whether it controlled the spread between the short-term and the long-term rate of interest. There is no evidence that the central bank has any meaningful control over the latter — although I am open to being proved wrong on this front should it ever turn out that Operation Twist II was actually initiated in the summer of 2011. But if we zoom out it is quite clear that the central bank has full control over the long-term rate of interest.

[Charts: all interest rates graphed together (left) and a regression of the Fed funds rate against the ten-year bond yield (right)]

On the left I have graphed all the interest rates together. The pattern should be clear to the reader. But in order to be concrete I have also included a regression of the Fed funds rate against the ten-year bond yield. As we see, the relationship is positive and quite statistically significant. It is quite clear that the central bank controls the long-term rate of interest through its short-term interest rate policy. Indeed, the fact that the regression does not produce a perfect fit is mainly due to the fact that the spread between the long-term rate and the short-term rate widens whenever the Fed drops the short-term rate significantly — this can be seen quite clearly in the graph on the left.

Some will claim that the long-term interest rate is actually tracking inflation. That is, when inflation rises the long-term interest rate rises. The central bank then merely reacts to this inflation by raising the short-term rate, thus giving the statistical illusion of control. But this is not the case. If you look at the data carefully it is clear that it is the short-term rate driving the long-term rate and not inflation. There are many ways to illustrate this but perhaps the easiest is to run a regression of the long-term interest rate against the CPI, which I have done below.

[Chart: regression of the ten-year Treasury yield against CPI inflation]

As we can see, the fit is far weaker than when we ran the regression of the short-term interest rate against the long-term interest rate. This shows quite clearly that, although the short-term rate may be raised by the central bank in response to inflation, it is the short-term rate that is driving the long-term rate and not the rate of inflation.
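
Here is a minimal sketch of how one might run both comparisons. The series IDs (GS10, FEDFUNDS, CPIAUCSL), the 1955 sample start and the use of year-on-year CPI inflation are my assumptions about what the charts show, not the post’s own specification.

```python
# Compare how much of the ten-year yield is associated with the Fed funds
# rate versus with CPI inflation. Series IDs and the inflation measure
# are assumptions.
from pandas_datareader import data as pdr
import statsmodels.api as sm

data = pdr.DataReader(["GS10", "FEDFUNDS", "CPIAUCSL"], "fred", "1955-01-01")
data["INFLATION"] = data["CPIAUCSL"].pct_change(12) * 100  # year-on-year CPI inflation
data = data.dropna()

def fit(y, x):
    return sm.OLS(y, sm.add_constant(x)).fit()

on_fedfunds = fit(data["GS10"], data["FEDFUNDS"])
on_inflation = fit(data["GS10"], data["INFLATION"])

print("10y yield on Fed funds rate: R^2 =", round(on_fedfunds.rsquared, 3))
print("10y yield on CPI inflation:  R^2 =", round(on_inflation.rsquared, 3))
```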

So, does the central bank control the long-term interest rate? Yes. Does it control the spread between the long-term rate and the short-term rate? There is no evidence to confirm this and the evidence that we do have — taking the Fed at its word — suggests that it does not. But regardless, the next time some economist tells you that the markets control the long-term rate of interest you can safely tell them that they have absolutely no idea what they are talking about.


Financial Times Contributors Understand ‘Liquidity Trap’ Better Than Neo-Keynesians Like Krugman


I have long complained that the likes of Paul Krugman have grossly misinterpreted the meaning of the term ‘liquidity trap’. These economists seem to think that we are currently in a liquidity trap despite the fact that yields on bonds are extremely low across the board.

I have also long insisted that this is not simply an issue of “what Keynes really said” (it rarely is with me, but there is little point in shouting at those without ears to hear). Rather this is an eminently practical issue: the Keynesian idea of a liquidity trap is an extremely useful one when examining financial markets. It allows us to discuss interest rate dynamics at a granular level — something mainstreamers (and even some Post-Keynesians) are unable to do in any consistent way.

People working in financial markets often have a far better intuitive sense of this than economists. Yesterday the head of macro credit research at RBS, Alberto Gallo, did us all a great favour by utilising the term ‘liquidity trap’ properly in a column in the Financial Times entitled ‘Unwary Yield Hunters at Risk of Liquidity Trap’. First of all he highlighted the very non-liquidity-trap environment that we are currently in. He wrote:

During the past five years, credit markets have attracted less experienced investors who switched from low-yielding Treasury bonds and money market funds to investment-grade and high-yield debt.

That’s right: in the current environment interest rates on risky assets have fallen, not risen as they would in a liquidity trap. In the article Gallo highlights that liquidity might flee the market if the Fed tightens monetary policy. He correctly points out that this could lead to a liquidity trap as liquidity flees the market and yields on risky bonds get stuck, or trapped, at a higher level. He writes:

High-yield bonds have had a record run. With a cumulative return of more than 150 per cent since 2009, they have beaten stocks in three out of the past six years. But the market is now stumbling, and regulators have highlighted signs of frothiness… Yields are near record lows and liquidity in secondary markets is declining, making it harder to exit swiftly… Low liquidity can “trap” sellers, accelerating price falls. This makes credit markets vulnerable to an exit from loose policy. (My Emphasis)

You see? Gallo considers the ‘trap’ a situation in which the price of the bonds falls and thus the yields rise. Just to drive that home he finishes his excellent article thus:

Liquidity in secondary markets is evaporating, and policy makers are shifting their focus to credit markets… Yield hunters should consider selling or they could get caught by the credit liquidity trap.

I’m not even necessarily endorsing Gallo’s views on the state of the market — I’m not as confident as he seems to be that the Fed will reverse course on its monetary easing. But I am endorsing his intuitive and correct use of the term ‘liquidity trap’. Gallo understands that a liquidity trap is what occurs when money flees risky asset markets; the prices of these assets then decline and yields rise. This is precisely what Keynes and Minsky meant by a liquidity trap.

While Minsky and Keynes generally discussed the liquidity trap as a general phenomenon that occurred across markets when the demand for risky bonds dried up and everyone fled to money and money substitutes, Gallo is perfectly correct that we can apply this term to specific markets. I see no reason why we cannot talk about a liquidity trap in a specific market. To be wholly consistent the Fed would have to lose control over the rate of interest in this single market — and it is not altogether clear that this would be the case in Gallo’s scenario — but I think that Gallo, who intuitively senses that a liquidity trap occurs when money evacuates a market and yields spike, is at least on the right track.

Is Gallo the Talmudic Keynes scholar that some people insist I am, always using terms in the ‘correct’ manner due to some sense of duty or impulse for dogmatism? I doubt it. It is far more likely that he is an experienced and capable financial market analyst who has come to the conclusion that this is a useful term to describe a certain phenomenon. As I have argued in the past, however, the term as used by Krugman et al is not a useful term. Rather it is synonymous with ‘zero interest rates’ and seems to me to be only used to hide the fact that they are saying something banal under the cloak of a term familiar in neo-Keynesian economics.

Don’t expect the neo-Keynesians to understand this though. They are so clueless about financial markets it is amusing to the point of almost being embarrassing. I sent one of the well known neo-Keynesian bloggers the link to the last post on Twitter. His response? “Someone tell this guy that yields and interest rates are the same thing”. You can’t even make this stuff up. And this guy brags about teaching… financial economics. Excuse me while I rush for the escape hatch.

I am beginning to suspect that economists this lacking in financial diction probably can barely even read the financial press — and if they do try you can be sure that after the material has been swallowed without chewing, almost nothing is digested. Lord help us if they ever get in charge or gain influence by training policymakers… oh wait…


On Meta-Analysis in Economics


Did you know that if you are male and eat beans every Tuesday morning at exactly 8.30am you are more likely to marry a supermodel? No. That’s not true. I just made that up. But I hear of statistical studies in the media that sound only slightly less ridiculous all the time. Often these have to do with diet, sexual psychology or… economics.

All three of these spheres are, of course, the sorts of things you find dealt with in the religious and mythological texts of old. This is because they are key psychological aspects of how we as humans form our identities. The manner in which we eat, what would today be called our sexual orientation or preferences (it should be noted that this was treated very differently prior to the 19th century…) and how we organise our societies all constitute key components of our personal identities.

These are slippery aspects of existence. Because they are effectively moral issues we as humans need to feel that they are constant throughout time and space. But anyone with any historical or cultural understanding knows that these shift this way and that over time. Diet fads fluctuate rapidly, while cuisines of various types go in and out of fashion. Sexual norms change from decade-to-decade (homosexuality was considered a mental disorder in the West until 1973!). And if you need to be told that fads in economic policies are historically contingent and reflective of the politics of the day then you probably shouldn’t be reading this blog.

Science dreams of reducing all of this to Reason. It has done so since at least the 19th century, when religion fell by the wayside and science tried to fill the void. In every era there is some hocus pocus thrown up wearing the clothes of the scientist and handing down Moral Truths: about how we should eat, how we should conduct ourselves sexually and how we should run our societies. In the past 40 or so years these questions have increasingly fallen to the social science disciplines (and dieticians), who use statistical techniques.

The problem is that the material they are dealing with is not suited to the techniques they are using. The nature of the material is that it changes and evolves through time, and we cannot anticipate these changes to any large extent either. Doing so would be like trying to predict what style of dress will be popular in 2080. This leads to the statistical literature generally being a mess. Indeed, the literature itself seems to evolve through time together with the data and the ideological fads that emerge and die off. I increasingly think that the statistical literature is coming to mirror the trends themselves, but with a lag.

The latest attempt to impose some order on this chaos is the practice of so-called ‘meta-regression’. The idea is to take all of the studies showing all of the contradictory results, aggregate them and run regressions on them. In sciences where the material is suited to statistical study — that is, in sciences where causality does not change and evolve through time — this is quite sensible. But where the material does not accommodate this, such analysis likely only amplifies the underlying problems.
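
To give a feel for the mechanics, here is a toy meta-regression on invented data. It shows only the generic idea of pooling reported effect sizes and regressing them on study characteristics; it is not Stanley’s specific procedure, and the numbers mean nothing.

```python
# A toy meta-regression: pool reported effect sizes from many studies and
# ask whether study characteristics predict the size of the reported effect.
# The data are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_studies = 40
studies = pd.DataFrame({
    "effect":      rng.normal(0.5, 0.3, n_studies),    # reported effect sizes
    "std_error":   rng.uniform(0.05, 0.4, n_studies),  # their standard errors
    "sample_size": rng.integers(50, 5000, n_studies),
    "year":        rng.integers(1980, 2014, n_studies),
})

# Weight each study by its precision (inverse variance), as meta-analyses
# typically do, then regress effect sizes on study characteristics.
weights = 1.0 / studies["std_error"] ** 2
X = sm.add_constant(studies[["sample_size", "year"]])
meta_reg = sm.WLS(studies["effect"], X, weights=weights).fit()
print(meta_reg.summary())
```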

Take, for example, the paper ‘Wheat From Chaff: Meta-Analysis As Quantitative Literature Review‘ by T.D. Stanley. In the paper Stanley says that we should use meta-regressions to do our literature reviews. The problem is that this assumes that the underlying regressions on which we run the meta-regressions have some validity in the first place: that is, that they can give us information about certain causal laws that will hold into the future.

Some of the examples that Stanley gives where meta-analyses have been applied in the past seem reasonable, others do not.

There are many examples where meta-analyses clarified a controversial area of research. For example, meta-analysis has been used to establish a connection between exposure to TV violence and aggressive behavior (Paik and Comstock, 1994), the efficacy of coronary bypass surgery (Held, Yusuf and Furberg, 1989), the risk of secondhand smoke (He et al., 1999), and the effectiveness of spending more money on schools (Hedges, Laine and Greenwald, 1994; Krueger, 1999). (p133)

The efficacy of coronary bypass surgery seems very reasonable. We know the mechanism through which this is supposed to work. But there still arises the question of environment. I should hope, for example, that the meta-analysis is being run on people in countries with similar diets and climates, and on people who come from similar income groups. This raises an issue that we shall encounter more critically in a moment.

The risk of second-hand smoke is slightly more dubious. This, as is well known, is not something that is particularly easy to prove. I do not know how they do these studies but I would assume that they look for instances of lung and heart disease in non-smoking people who cohabit with smokers. Something along these lines would be a reasonable approach. Again, this is because we know the mechanism through which smoking causes these diseases and we know that this has relative constancy through time and space.

Spending money on schools is far more difficult. First of all, Stanley doesn’t say what spending more money on schools is effective for. We can only assume that it has to do with educational outcomes. Personally I believe that spending more money on schools is generally effective in this regard simply due to intuition and personal experience. But it is not quite clear that we can meaningfully test it in statistical terms, nor is it clear that we should ever make such claims except in a very general sense. The causal mechanism is not clear here. There are many ways in which this money can be spent. It is also not clear that spending money will fix problems in all schools. Some schools may have issues related to funding. But some may have issues that have little to do with this: the class background of the children who attend or the structure of the testing regime come to mind as issues that may not be related to funding. Here we are beginning to see that the causes and effects become murky. While every smoker suffers from basically the same cause and effect mechanisms, this seems less likely in the case of schools.

The study linking TV violence and aggression sounds the alarm for me. That sounds like garbage. The causal link here seems highly abstract and based on some crude mechanistic stimulus-response view of human psychology. The methodological issues also seem problematic: is this a lab experiment or is it based on survey results? Both suffer from serious problems. I also see no way to establish causation: do people with violent tendencies watch violent TV programs or vice versa? If we cannot establish causation any information we do glean from the study — even if we believe in the study itself — will be largely useless.

I could look at all the studies individually, but I — like you, dear reader — have limited time. We all need some sort of filtering system to sort sense from nonsense, and what I have just demonstrated above is how I tend to think about these issues, in economics as well as when I’m reading the newspaper. And I think it is pretty functional.

Anyway, back to meta-regressions. The problem with these is that they aggregate even more than the studies themselves. This is fine when we are dealing with material that is homogeneous through time — that is, material where the causality is fairly stable — but it will not work where the causality is slippery. Among the above examples I would again highlight the studies linking TV violence to aggression.

I have dealt with this question on here before. But let me give a practical example: that of the multiplier. Let’s say that I need to give a politician a number for the fiscal multiplier in their country. Now, many economists — assuming that causality is constant through time — would get as much time-series data as possible for the country in question and run regressions. But let’s say that some extreme event had happened in the past five years like, oh I don’t know, a financial crisis. I would think that the multiplier would likely have changed after this crisis. Thus the question is raised whether we should estimate the multiplier using the whole time series or using only the data from after the crisis. My gut says that we should probably use the data from after the crisis, but there are ways to look into this in more depth.
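
As a deliberately naive illustration of why the sample choice matters, here is a sketch that estimates a crude ‘multiplier’ twice, once on the full sample and once only on post-crisis data. The series IDs (GDPC1 for real GDP and GCEC1 for real government consumption and investment) are my assumptions, and the method ignores every identification problem going; the only point is that the two samples need not give the same answer.

```python
# A crude "multiplier": regress real GDP growth on real government spending
# growth over the full sample and over post-2008 data only. Series IDs are
# assumptions and the method ignores all the usual identification problems;
# the point is only that the two samples can give different answers.
import pandas as pd
from pandas_datareader import data as pdr
import statsmodels.api as sm

gdp = pdr.DataReader("GDPC1", "fred", "1960-01-01")  # real GDP, quarterly
gov = pdr.DataReader("GCEC1", "fred", "1960-01-01")  # real gov. consumption & investment

df = pd.concat([gdp.pct_change(), gov.pct_change()], axis=1).dropna()
df.columns = ["gdp_growth", "gov_growth"]

def slope(sample):
    res = sm.OLS(sample["gdp_growth"], sm.add_constant(sample["gov_growth"])).fit()
    return res.params["gov_growth"]

print("Full-sample coefficient:   ", round(slope(df), 3))
print("Post-2008 coefficient only:", round(slope(df.loc["2009":]), 3))
```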

The point is that we at least need to raise the question. But economists often do not. They aggregate, aggregate, aggregate. They choose datasets willy-nilly. They assume constant, homogenous causes. Why? Because, I think, they are more often than not already sure of what they are going to say and they use the empirical techniques to dress this up. There is a risk then that using meta-analyses will only give us a reflection of the average opinion of the economics community at any given moment in time. But these opinions are extremely prone to fads because the economics community is insular, pretentious, consensus-driven and ultimately insecure. Today NAIRU, yesterday monetarism. Tomorrow? God knows. Beans and supermodels probably wouldn’t be far off.
