Economists: An Anthropological View

[Image: tribal ceremony]

‘Life Among the Econ’ is a satirical paper written by the economist Axel Leijonhufvud and published in 1973. In the paper Leijonhufvud refers directly to the great work of cultural anthropology, The Savage Mind, by the French Structuralist anthropologist Claude Levi-Strauss. Before moving on to the paper it is probably best to understand something about Levi-Strauss’ work, as I think the content of the paper would otherwise be lost on many economics-oriented readers.

Levi-Strauss argued that cultural life in primitive societies rested on haphazard and rather arbitrary organisation. People would create symbols and systems of organisation the only purpose of which was to facilitate smooth social relationships. While the members of a tribe might imbue the symbols with seemingly enormous value — and people who violated them might be severely punished — looked at from the outside they seemed rather arbitrary and changed from tribe to tribe mainly based on chance.

Anthropologists had long recognised that tribal structures — indeed, all cultural structures — require norms and myths to live by. Myths are stories that give life meaning, while norms are somewhat like laws or prohibitions. Perhaps the best way to think about this is to take a recent law in our own societies that now seems antiquated but which was taken seriously only a few decades ago: namely, laws against homosexuality.

Why were there laws against homosexuality throughout most of the 20th century? Does homosexuality harm anyone? Today most people would say that it does not. So, why the laws? Simply because our cultures developed in that way. Other cultures did not. In Ancient Greece, for example, homosexuality was by no means against the law. The laws that remained in place until the end of the 20th century mostly derived from the Judeo-Christian tradition that we inherited. There was nothing functional about them. Things just happened that way. (Indeed, readers of cultural history will know that our own culture is basically unique in attributing to homosexuality an actual sexual identity. In most cultures sexual activity is dealt with based on acts, not on proclivities that are supposed to be immutable.)

The point is that much of cultural organisation is arbitrary. It often serves no real purpose. Evolutionary psychologists might tell you otherwise, but they are just modern-day myth-makers: they tell stories that try to give us meaning and, ultimately, justify certain cultural patterns that we hold dear by appealing to the narrative structure of evolutionary biology and imposing it on cultural development metaphorically, much in the same way as marginalist economics transferred metaphors from physics to the social sciences. Levi-Strauss introduced the idea of the ‘bricoleur’ as the person who engages in such constructions.

The bricoleur is not a person conceived of as acting in line with a plan or toward a goal. For example, if I go to the shop, buy foodstuffs and cook food I obviously have a plan and a goal. A bricoleur — or, more accurately, a person partaking in the process of ‘bricolage’ — just throws things together as he or she sees fit. There is no real point to the activity but it persists in all human cultures and makes up a key component of our cultural organisation. Perhaps the easiest way to grasp this is to think of a child playing with Lego bricks or an artist painting an abstract piece of art.

Society bestows upon bricoleurs important roles. In primitive societies shamans, priests or some other caste are typically anointed to serve this role. They come up with stories of various kinds, contact the spirit world and even engage in fake healing in societies without medicine, in which people feel like they need to do something in the face of illness and disease. Basically their role is to give meaning to those around them. In order to do so they are imbued with a certain aura that we do not find in, for example, a modern dentist or an advertiser.

This aura allows their interpretations of the world to be accepted largely without question as these men are supposed to possess abilities and traits that the lay person could never understand. Obviously, modern day religion serves basically the same function, as do cults and even con-men who sell fake medicine to desperate and gullible people.

Leijonhufvud’s paper is written as a satire. But like all the best satires it serves a serious purpose. He makes up a tribe that he calls the ‘Econs’. He is referring to economists, of course. And he is perfectly correct to do so: economists do indeed serve basically the same function in society as shamans do in primitive society; namely, they tell stories about how the economy works and about how society should organise itself. He then goes on to say that the economics profession has actually developed as a sort of micro-tribe within society. He particularly notes the fetishisation of the model — which he jokingly refers to as the ‘modl’. He writes:

[Image: quoted passage from ‘Life Among the Econ’]

Anyone who has dealt with economists will be all too familiar with this. The model is actually part of the person’s identity. They internalise it and, when they engage in rivalry with other economists, they compare their models. Viewed from the outside with a critical eye it is actually a very strange process. But if you have an appreciation for anthropology you will quickly see what is going on. The whole thing is about group formation and social alliances. Because economists have long turned away from the real world (unless it is viewed through the model, of course!) they need another way to argue about things (after all, aren’t academics supposed to argue about things?). So, what they have done is form into various social groups that then build and compare models. It is really rather amusing when you have a sense of irony about the whole thing.

Leijonhufvud also noted something more ominous though. He saw clearly that the tendency was actually moving further and further away from empirical reality. He wrote:

[Image: quoted passage from ‘Life Among the Econ’]

For a while economics survived the turn toward weird ‘savage mind’ style behavior. People who were more inclined toward the reality of doing practical work had a set of tools that were at least somewhat workable. But Leijonhufvud could see this changing, especially with the rise of the general equilibrium theorists. Today this has completely taken over. DSGE models are claimed to be cutting-edge tools for policy analysis and some even believe that Real Business Cycle models say something about the real world. Whereas the likes of Frank Hahn knew well that the general equilibrium framework was just an intellectual game, their students took it literally. That is when we entered what we might call the ‘dark age of economics’ and that is where we remain today.

You need only read the altogether strange defences of model-based forecasting to see that something properly weird is going on in the land of the Econ. It is well-known that the models don’t do much better than ‘intelligent guesswork’. (I would say they do worse than ‘intelligent guesswork’ but then I do not apparently have the authority to decide who is doing guesswork and whether they are ‘intelligent’ enough to be considered…). Wren-Lewis, for example, comes up with the following justification:

Take output for example. Output tends to go up each year, but this trend like behaviour is spasmodic: sometimes growth is above trend, sometimes below. However output tends to gradually revert to this trend growth line, which is why we get booms and recessions: if the level of output is above the trend line this year, it is more likely to be above than below next year. Using this information can give you a pretty good forecast for output. Suppose someone at the central bank shows that this forecast is as good as those produced by the bank’s model, and so the bank reassigns its forecasters and uses this intelligent guess instead.

This intelligent guesswork gives the bank a very limited story about why its forecast is what it is. Suppose now oil prices rise. Someone asks the central bank what impact will higher oil prices have on their forecast? The central bank says none. The questioner is puzzled. Surely, they respond, higher oil prices increase firms’ costs leading to lower output. Indeed, replies the central bank. In fact we have a model that tells us how big that effect might be. But we do not use that model to forecast, so our forecast has not changed. The questioner persists. So what oil price were you assuming when you made your forecast, they ask? We made no assumption about oil prices, comes the reply. We just looked at past output.

Woah! What the hell just happened there!? If I were working for a central bank and someone said to me “what will happen to output if oil prices rise substantially?” I would dutifully go and examine the relevant statistics. I would look to see if the data showed that large oil price increases had large effects on output. If the data showed that they did I would return to the person and say “after examining the evidence it looks to me like there is a good chance that a substantial rise in the oil price will affect output”. If I had a little more time I would then go and get data for other countries and see what it said, to substantiate my findings.
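As a rough sketch of the kind of evidence check described above, one might compute the correlation between oil price changes and output growth. The annual figures below are invented purely for illustration; with real data one would pull the relevant series from a statistical agency:

```python
import numpy as np

# Invented annual figures, purely for illustration (not real data).
oil_price_change = np.array([5.0, 40.0, -10.0, 3.0, 60.0, -5.0, 2.0, 35.0])  # % change in oil price
output_growth = np.array([3.1, -0.5, 3.4, 2.9, -1.2, 3.0, 2.8, 0.2])         # % growth in real output

# First pass at the question: do large oil price rises coincide
# with weak output growth in the record?
corr = np.corrcoef(oil_price_change, output_growth)[0, 1]
print(round(corr, 2))
```

A strongly negative correlation would support the claim that large oil price rises tend to depress output; one would then want to check other countries and control for other influences, as described above.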

Wren-Lewis, on the other hand, would consult his model which would give him whatever result that he himself had built into it. Do you see the difference here? I hope you do. What I would be doing would be evidence-based research. It would be by no means perfect for any number of different reasons. But it would at least be evidence-based. Wren-Lewis would consult a model that comes pre-built with an estimate of how oil price increases should affect output. This is absolutely ‘savage mind’ behavior. Wren-Lewis would have us construct what anthropologists call ‘totems‘ and then consult these totems when we need insights into our social problems.

This is what I mean when I say that the economists have come to believe their own fictions. It is very strange stuff altogether. They build the models based on the a priori assumptions that they hold. Seemingly they then forget these assumptions. Then when they need an answer they consult the model, which spits back at them what they already built into it. This output is then assumed to be Truth because it comes imbued with a sort of aura. In more practical, real-world sciences this has a name: it’s called GIGO, which stands for Garbage In, Garbage Out. In more primitive societies this is similar to constructing altars to supposed oracles and then going to these altars to find out about the future, only to find a Truth that you yourself have already built into the altar. (For a more colloquial example think of when people read images into clouds.)

Sometimes after a few beers some of my friends — many of whom have PhDs — ask me about this modelling stuff and how it all works. They really do view it as being a sort of opaque practice, albeit one that they are inclined not to trust. When I explain it to them they literally don’t believe me a great deal of the time. There is one exception though — and he is a rather well-known anthropologist. Make of that what you will.

Posted in Psychology | 7 Comments

Confusing Accounting Identities With Behavioral Equations

[Image: Economics 101 cartoon]

Here’s an interesting little debate from earlier this year that I came across yesterday evening. It is between a number of market analysts over whether the current stock market is overvalued. Why is that interesting? Because the argument focuses on one of the best-known foundation stones of heterodox economics: the Levy-Kalecki profit equation.

Weird? Not really. James Montier, a well-known investment analyst at GMO, has made the profit equation central to his forecasting work for a number of years. You can see his latest offering here (page 5). Montier’s argument is that profits in the US at the moment are heavily reliant on the still rather large budget deficits that are being run there. I made a similar argument on the Financial Times Alphaville blog over a year ago.

This is actually a non-controversial point. Private sector savings, net of investment, are equal, to the penny, to the budget deficit minus net imports. This is intuitively obvious: when the government spends money, that money accrues either to a private sector institution within the country or to a foreigner abroad. We then divide the private sector into households and firms and we quickly see that budget deficits flow, again to the penny, into net imports, household savings and… you got it: profits.

All of this is just basic accounting. The above cannot be in any sense ‘untrue’ because this is how the accounting apparatus works. So, why is David Bianco from Deutsche Bank disputing this? Basically he confuses an accounting identity with a behavioral equation. He writes:

This construct assumes that no savings are recycled as investment. This is not a small matter. It represents a major conceptual flaw in this framework, which taints the entire analysis. The equation above would only be correct if all savings were stuck in a Keynesian liquidity trap.

Um… no. It does not assume that. The above equation says nothing about how an increase in savings will affect the level of investment. It is just an accounting identity. As Keynes’ protégé Joan Robinson argued many years ago in her book An Introduction to Modern Economics (co-authored with John Eatwell):

For purposes of theoretical argument we are interested in causal relationships, which depend on behavior based on expectations, ex ante, whereas the statistics necessarily record what happened ex-post… An ex-post accounting identity, which records what has happened over, say, the past year, cannot explain causality; rather it shows what has to be explained. Keynes’ theory did not demonstrate that the rate of saving is equal to the rate of investment, but explained through what mechanism the equality is brought about. (pp. 216-217)

If Bianco were clearer about what he was arguing he would say something like this:

“Montier may be correct on the accounting but he is wrong on the causality. An increase in savings by firms and households will lead to an increase in investment by those same firms and households.”

This is, of course, the mainstream (pre-Keynesian) argument that assumes that, as savings increase, interest rates fall across the board and investment increases.

Note that this is a causal argument. At the point in Montier’s analysis when he summons Kalecki, he is not making a causal argument. He is just stating accounting facts. But Bianco’s argument implies that when we see increases in savings in the economy we should also see increases in investment. Does the data reflect this? Short answer: no. Below is a regression plotting gross private savings against gross private domestic investment (all data from FRED).

[Chart: regression of gross private savings against gross private domestic investment, 1948–2013]

As you can see, the relationship is extremely weak. If I were Bianco I wouldn’t be buying stocks based on a causal argument that yields an R-squared of 0.1095.

But perhaps, it will be argued, it takes a while for the savings to transmit into investment. Nope. If we lag the investment variable by a year the outcome is even worse: we get a slightly downward-sloping line (negative relationship) with an R-squared of 0.0022.

Even more interesting is what happens if we turn to the past 15 years. In the above regression we can at least see a positive correlation between the two variables, however weak that might be. But if we take the last 15 years we see a negative correlation, implying that as savings rise investment falls, and vice versa.

[Chart: regression of gross private savings against gross private domestic investment, 1998–2013]

We even get an R-squared that is a bit higher than the first regression — although I’d be somewhat reticent to bet the house on it.
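For anyone wishing to replicate this kind of exercise, the R-squared computation can be sketched in a few lines. The two series below are randomly generated stand-ins rather than the actual FRED data, so the printed numbers will not match those quoted in the text:

```python
import numpy as np

def r_squared(x, y):
    """R-squared of a simple OLS regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

# Randomly generated stand-ins for the savings and investment series.
rng = np.random.default_rng(42)
savings = rng.normal(1000.0, 100.0, 60)
investment = 0.2 * savings + rng.normal(800.0, 90.0, 60)  # only weakly related, as in the data

# Contemporaneous regression, then with investment lagged one period.
print(round(r_squared(savings, investment), 4))
print(round(r_squared(savings[:-1], investment[1:]), 4))
```

With the real data one would substitute the annual FRED series for the two arrays; the mechanics of the fit are the same.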

Now, some people with some basic macro training may be a little confused about all this. After all, doesn’t savings equal investment? Isn’t this an identity that we are taught in the first pages of our macro textbook? This is, after all, precisely what Bianco complained about when he wrote of Montier’s analysis:

This is neither the point of the argument nor the general condition, thus the [Kalecki] equation above fails to recognize that: Investment = Savings.

 
But again, there is a failure to understand what is being discussed here. When we say that S = I in macroeconomics we are referring to a hypothetical closed economy with no government sector. In an economy with a government sector and an external sector this equation becomes the well-known sectoral balances identity which we can rearrange to read S = I + (G – T) + (X – M).
 
This explains the seeming contradiction in the above regressions. Savings do not equal investment in an open economy with a government sector at all. Rather it depends on the balances of the other sectors. But then we’re right back to where we started. We break savings down into household savings (Sh) and profits (P), rearrange and we get P = (I – Sh) + (G – T) + (X – M).
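The rearrangement can be checked with a few lines of arithmetic. The figures are invented solely to illustrate the identity:

```python
# Hypothetical sector balances; the numbers are invented for illustration.
I = 200.0           # gross private investment
Sh = 80.0           # household saving
G_minus_T = 150.0   # budget deficit
X_minus_M = -50.0   # net exports (a trade deficit here)

# Sectoral balances identity: S = I + (G - T) + (X - M)
S = I + G_minus_T + X_minus_M

# Split private saving into households and firms, S = Sh + P,
# and rearrange to get the profit equation: P = (I - Sh) + (G - T) + (X - M)
P = (I - Sh) + G_minus_T + X_minus_M

assert S == Sh + P  # both forms are the same accounting identity
print(P)  # 220.0
```

Note that this verifies an accounting identity, not a causal claim: the code says nothing about which side drives which.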
 
Now, if Bianco wants to bet on the US stock market because he thinks that US investment is going to increase substantially in the next few years as the government deficits decline that is his prerogative. But in doing so he must recognise that he is making a causal argument (one that is totally unsupported by the data, by the way). He believes that this large increase of savings by firms and households since 2008 will trigger a wave of investment spending in the coming years whereas Montier thinks that this is unlikely. In his latest offering Montier writes:
 
Given that the deficit is “forecast” by such august bodies as the Congressional Budget Office to decline significantly over the next few years, it will take either a remarkable recovery in investment spending or a significant re-leveraging by the household sector to hold margins at the levels we have witnessed of late. Thus, embedding such high margins in a valuation seems optimistic to me. (p. 4)
 
Given that corporations are currently using the cash they are accumulating to engage in share buybacks and given that the household sector has been rebuilding its balance sheet over the past few years after which we can only assume that they will be a tad more conservative in their borrowing, I think that Bianco’s argument is something of a leap of faith. But if he is correct then a major US recovery is just around the corner. Watch this space.
________________
** It is also not true that such a situation would only occur in a so-called “liquidity trap”. Even by the definition that the neo-Keynesians have given this term — which I think is incorrect — a liquidity trap occurs when changes in monetary policy have no effect on the interest rate. But within the ISLM model investment may also be unresponsive to changes in the interest rate, in which case changes in savings would not affect investment. This is what happens when the IS-curve becomes vertical, and it is said that the economy is stuck in an “investment trap”.
Posted in Economic Theory, Market Analysis | 25 Comments

Taxation, Government Spending, the National Debt and MMT

[Image: American national debt]

The other day my friend Rohan Grey — a lawyer and one of the key organisers behind the excellent Modern Money Network (bringing Post-Keynesian economics to Columbia Law School, yes please!) — directed me to an absolutely fascinating piece of writing. It is called ‘Taxes For Revenue Are Obsolete’ and it was written in 1945 by Beardsley Ruml. Ruml was the director of the New York Federal Reserve Bank from 1937-1947 and also worked on issues of taxation at the Treasury during the war.

The article lays out the case that taxation should not be focused on revenue generation. Rather, Ruml argues, it should be thought of as serving other purposes entirely. He writes:

[Image: quoted passage from Ruml’s article]

Basically Ruml is making the same case that the Modern Monetary Theorists (MMTers) make: a country that issues its own sovereign currency and is unconstrained by a gold standard does not require tax revenue in order to fund spending. This is because the central bank always stands ready and able to buy any sovereign debt whose issuance might otherwise push up the interest rate. Indeed, it does this automatically in the way that it conducts its interest rate policy. Ruml then outlines what taxation is really for in such a country.

[Image: quoted passage from Ruml’s article]

This is a fantastic summary and I really couldn’t put it better myself. The interesting question, however, is why people were making such statements at this moment in history. It should be remembered that the economist Abba Lerner had published a paper entitled ‘Functional Finance and the Federal Debt’ just two years earlier which made a very similar case. In that classic paper he wrote:

[Image: quoted passage from Lerner’s paper]

So, what was it about this moment in history that allowed for such a clear-eyed view of government spending and taxation policies? The answer is simple: the war. World War II allowed economists, bankers and government officials to see clearly how the macroeconomy worked because the government was basically controlling the economy. World War II was perhaps the only time in history when capitalist economies were run on truly Keynesian principles. (You can make a case that the Nazi economy in the 1930s was also run on these principles, however, so perhaps it is better to say: a capitalist economy in a democratic state.)

This meant that those working in government institutions and banks could see exactly what was happening and why it was happening. Because the central banks were exercising full control over the market for government debt and because the governments were running massive fiscal deficits it became crystal clear what the taxation system was really doing: first and foremost it was suppressing aggregate demand for goods and services in certain parts of the economy. In doing so it had two broad functions: an anti-inflation function and a redistribution function.

The experience of the war, I would argue, was the main reason why the neo-Keynesian economists in the US actually understood macroeconomic policy in a clear-sighted way. I do not believe that their theories would have allowed them to properly understand the economy. But their experiences in the war — from reading the daily newspapers to working in economic institutions — left a lasting impression that allowed them to properly understand the macroeconomic policy tools in the 1950s and 1960s. The textbooks that they were teaching said one thing but their experiences in the war told them another. (An exception to this might be James Tobin whose theoretical writings do reflect some of the war experiences).

When the younger generation came of age in the 1970s the mainstream economic theory ensured that they had absolutely no idea what they were talking about. They only had what they were being taught in the classroom and did not have the real-world experience that the older generation had. Everything went downhill from there and that, I think, is where the seeds were sown for the economic turmoil and confusion we live with today. It is also the key reason why the economists of the next generation must be taught in an entirely different way from the previous generation.

People like Rohan Grey and his colleagues in Columbia Law School as well as the blogs are having an enormous effect in this regard. But the mainstream institutions simply cannot respond because they are filled with dinosaurs who sense their underlying irrelevance. This makes them defensive and basically impossible to deal with. Nevertheless, events today are also having their own effect, even if these are not as pronounced as the effects that World War II had on the economists of the day. The failure of the QE programs to generate employment was a key step in giving people a clear-sighted view of what monetary policy is and how it works (or doesn’t!). Meanwhile, the clear stagnation that has set in since the crisis may help to loosen the idea of long-run full employment equilibrium that the mainstream holds so dear. Again, the mainstream will likely retain such ideas in theory, but they will probably be easier to deal with in practice. We’ve seen this movie before. It doesn’t end well. But it makes our lives easier in the short-run and puts the mainstream on the defensive with regard to theory.

Meanwhile, there are a couple of people who might be considered the Rumls of our time. Former Deputy Secretary of the Treasury Frank Newman is a good example. His book Freedom From the National Debt is a document on a par with Ruml’s excellent 1945 article. He has also got some play in the national news media. But not nearly enough, of course. I will leave the reader with a short clip from Fox News where Newman makes very clear that the national debt in the US is a misnomer and not a huge concern (for a more in-depth analysis try this talk given by Newman at the Columbia Law School)**.

**Note that the case Newman makes in the Fox News clip is the same one that I was making in this post. It is an argument that basically allows us to show that the endogenous money argument largely works even if we ignore central bank action, and it runs directly contrary to the narrative embedded in the ISLM.

Posted in Economic Policy, Economic Theory | 23 Comments

Does the Central Bank Control Long-Term Interest Rates?: A Glance at Operation Twist

[Image: central bank logo]

Although it is less talked about today, many economists assume that while the central bank has control over the short-term rate of interest, the long-term rate of interest is set by the market. When Post-Keynesians make the case that in a country that issues its own sovereign currency the rate of interest is controlled by the central bank and the government never faces a financing constraint, some economists deny this and point to the long-term rate of interest, which they claim is under the control of the market. They say that if market participants decide to put the squeeze on the government they can raise the long-term rate of interest.

Keynes himself was wholly convinced that the central bank had full control over the long-term rate of interest. In a 1933 open letter to US President Franklin Roosevelt Keynes wrote:

The turn of the tide in Great Britain is largely attributable to the reduction in the long-term rate of interest which ensued on the success of the conversion of the War Loan. This was deliberately engineered by means of the open-market policy of the Bank of England. I see no reason why you should not reduce the rate of interest on your long-term Government Bonds to 2½ per cent or less with favourable repercussions on the whole bond market, if only the Federal Reserve System would replace its present holdings of short-dated Treasury issues by purchasing long-dated issues in exchange. Such a policy might become effective in the course of a few months, and I attach great importance to it.

What Keynes was advocating was what has since been referred to as Operation Twist. This was a policy that was first initiated in the US during the Keynesian heyday under President John F. Kennedy. The Wikipedia page provides a nice overview of how it worked — note how it is identical to Keynes’ suggestion in his 1933 letter:

The Fed utilized open market operations to shorten the maturity of public debt in the open market. It performs the ‘twist’ by selling some of the short term debt (with three years or less to maturity) it purchased as part of the quantitative easing policy back into the market and using the money received from this to buy longer term government debt.

The policy basically did nothing. Below are the interest rates of the era.

[Chart: Fed funds rate and 10-year Treasury yield in the 1960s]

As we can see, the long-term treasury yield responded to the lowering of the Fed funds rate in 1960, but we can detect no change in the spread between the short-term and the long-term yield in 1961. The spread begins to close in 1962, but this is a result of increases in the Fed funds rate.

Recently the Federal Reserve Bank of San Francisco released a study claiming that the program had actually worked. I won’t get into the methodology of the study but I think it’s basically rubbish. The fact is that the stated aim of the program did not come to pass in any meaningful way. But the reason the Fed probably commissioned the study was that they tried Operation Twist once more in 2011. The Fed described the program thus after it had been completed:

Under the maturity extension program, the Federal Reserve sold or redeemed a total of $667 billion of shorter-term Treasury securities and used the proceeds to buy longer-term Treasury securities, thereby extending the average maturity of the securities in the Federal Reserve’s portfolio. By putting downward pressure on longer-term interest rates, the maturity extension program was intended to contribute to a broad easing in financial market conditions and provide support for the economic recovery.

So, did it work? Not unless the Fed were lying about when they started the program. The press release at the time dates the program to September 21st 2011. Keeping that in mind let’s look at the long-term interest rates in that period. (We do not bother showing the short-term interest rates here because, as everyone knows, they are basically zero throughout the period).

[Chart: 10-year Treasury yield, 2011–2012]

Do you see that significant drop in long-term interest rates of about 1%? Well, it occurs in July 2011 and reaches its bottom in September 2011. This opens the possibility that the Fed actually undertook the program two months before they announced it. Unfortunately, there is no hard evidence of this, and unless such evidence emerges we must assume that the second attempt at Operation Twist was indeed a failure.

Does this mean that Keynes was wrong and that the central bank does not control the long-term rate of interest? No. Keynes was actually confusing two distinct things in his letter to Roosevelt; namely, whether the central bank controlled the long-term rate of interest and whether it controlled the spread between the short-term and the long-term rate of interest. There is no evidence that the central bank has any meaningful control over the latter — although I am open to being proved wrong on this front should it ever turn out that Operation Twist II was actually initiated in the summer of 2011. But if we zoom out it is quite clear that the central bank has full control over the long-term rate of interest. (Click for a larger picture)

[Charts: all interest rates graphed together (left); regression of the Fed funds rate on the 10-year bond yield (right)]

On the left I have graphed all the interest rates together. The pattern should be clear to the reader. But in order to be concrete I have also included a regression of the Fed funds rate on the ten-year bond yield. As we can see, the relationship is positive and quite statistically significant. It is quite clear that the central bank controls the long-term rate of interest through its short-term interest rate policy. Indeed, the fact that the regression does not produce a perfect fit is mainly due to the fact that the spread between the long-term rate and the short-term rate widens whenever the Fed drops the short-term rate significantly — this can be seen quite clearly in the graph on the left.

Some will claim that the long-term interest rate is actually tracking inflation. That is, when inflation rises the long-term interest rate rises, and the central bank merely reacts to this inflation by raising the short-term rate, thus giving the statistical illusion of control. But this is not the case. If you look at the data carefully it is clear that it is the short-term rate driving the long-term rate and not inflation. There are many ways to illustrate this but perhaps the easiest is to run a regression of the long-term interest rate against the CPI, which I have done below.

[Chart: regression of the ten-year bond yield against the CPI]

As we can see the fit is far less statistically significant than when we ran the regression of the short-term interest rate against the long-term interest rate. This shows quite clearly that, although the short-term rate may be raised by the central bank in response to inflation, it is clearly the short-term rate that is driving the long-term rate and not the rate of inflation.
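For readers who want to see the logic of this comparison mechanically, here is a minimal sketch of the two regressions using synthetic data (hypothetical numbers built for illustration, not the actual FRED series used above):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 200

# Hypothetical series: a short-term policy rate and an inflation rate.
short_rate = rng.uniform(0.0, 8.0, n)
inflation = rng.uniform(0.0, 6.0, n)
# Suppose the long rate tracks the short rate closely and inflation loosely.
long_rate = 1.5 + 0.8 * short_rate + 0.1 * inflation + rng.normal(0.0, 0.5, n)

fit_short = linregress(short_rate, long_rate)
fit_infl = linregress(inflation, long_rate)

print(f"R^2 against short rate: {fit_short.rvalue ** 2:.2f}")
print(f"R^2 against inflation:  {fit_infl.rvalue ** 2:.2f}")
```

If the long rate is built mainly from the short rate, the first R-squared dominates the second — which is the shape of the result described in the text.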

So, does the central bank control the long-term interest rate? Yes. Does it control the spread between the long-term rate and the short-term rate? There is no evidence to confirm this and the evidence that we do have — taking the Fed at its word — suggests that it does not. But regardless, next time some economist tells you that the markets control the long-term rate of interest you can safely tell them that they have absolutely no idea what they are talking about.

Posted in Uncategorized | 14 Comments

Financial Times Contributors Understand ‘Liquidity Trap’ Better Than Neo-Keynesians Like Krugman


I have long complained that the likes of Paul Krugman have grossly misinterpreted the meaning of the term ‘liquidity trap’. These economists seem to think that we are currently in a liquidity trap despite the fact that yields on bonds are extremely low across the board.

I have also long insisted that this is not simply an issue of “what Keynes really said” (it rarely is with me, but there is little point in shouting at those without ears to hear). Rather this is an eminently practical issue: the Keynesian idea of a liquidity trap is an extremely useful one when examining financial markets. It allows us to discuss interest rate dynamics at a granular level — something mainstreamers (and even some Post-Keynesians) are unable to do in any consistent way.

People working in financial markets often have a far better intuitive sense of this than economists. Yesterday the head of macro credit research at RBS, Alberto Gallo, did us all a great favour by utilising the term ‘liquidity trap’ properly in a column in the Financial Times entitled ‘Unwary Yield Hunters at Risk of Liquidity Trap’. First of all he highlighted the very non-liquidity trap environment that we are currently in. He wrote:

During the past five years, credit markets have attracted less experienced investors who switched from low-yielding Treasury bonds and money market funds to investment-grade and high-yield debt.

That’s right: in the current environment interest rates on risky assets have fallen, not risen as they would in a liquidity trap. In the article he highlights that liquidity might flee the market if the Fed tightens monetary policy. He correctly points out that this could lead to a liquidity trap as liquidity flees the market and yields on risky bonds get stuck or trapped at a higher level. He writes:

High-yield bonds have had a record run. With a cumulative return of more than 150 per cent since 2009, they have beaten stocks in three out of the past six years. But the market is now stumbling, and regulators have highlighted signs of frothiness… Yields are near record lows and liquidity in secondary markets is declining, making it harder to exit swiftly… Low liquidity can “trap” sellers, accelerating price falls. This makes credit markets vulnerable to an exit from loose policy. (My Emphasis)

You see? Gallo considers the ‘trap’ a situation in which the price of the bonds falls and thus the yields rise. Just to drive that home he finishes his excellent article thus:

Liquidity in secondary markets is evaporating, and policy makers are shifting their focus to credit markets… Yield hunters should consider selling or they could get caught by the credit liquidity trap.

I’m not even necessarily endorsing the views of Gallo on the state of the market — I’m not as confident as he seems to be that the Fed will reverse course on their monetary easing. But I am endorsing his intuitive and correct use of the term ‘liquidity trap’. Gallo understands that a liquidity trap is what occurs when money flees risky asset markets; the prices of these assets then decline and the yields rise. This is precisely what Keynes and Minsky meant by liquidity trap.
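The inverse price-yield relation that Gallo's ‘trap’ turns on can be sketched with a toy bond calculation (made-up numbers, assuming a plain bond with annual coupons):

```python
# Toy bond pricing: present value of annual coupons plus the principal.
def bond_price(face, coupon_rate, ytm, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + ytm) ** years

# A 10-year 5%-coupon bond at two hypothetical yields.
price_at_4pct = bond_price(100, 0.05, 0.04, 10)
price_at_7pct = bond_price(100, 0.05, 0.07, 10)
print(f"price at 4% yield: {price_at_4pct:.2f}")
print(f"price at 7% yield: {price_at_7pct:.2f}")
```

The price at the higher yield comes out lower than at the lower yield: when sellers push the price down, the yield is pushed up, which is exactly the sense in which yields can get ‘trapped’ at a higher level.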

While Minsky and Keynes generally discussed the liquidity trap as a general phenomenon that occurred across markets when the demand for risky bonds dried up and everyone fled to money and money substitutes, Gallo is perfectly correct that we can apply this term to specific markets. I see no reason why we cannot talk about a liquidity trap in a specific market. To be wholly consistent the Fed would have to lose control over the rate of interest in this single market — and it is not altogether clear that this would be the case in Gallo’s scenario — but I think that Gallo, who intuitively senses that a liquidity trap occurs when money evacuates a market and yields spike, is at least on the right track.

Is Gallo the Talmudic Keynes scholar that some people insist I am, always using terms in the ‘correct’ manner due to some sense of duty or impulse for dogmatism? I doubt it. It is far more likely that he is an experienced and capable financial market analyst who has come to the conclusion that this is a useful term to describe a certain phenomenon. As I have argued in the past, however, the term as used by Krugman et al is not a useful term. Rather it is synonymous with ‘zero interest rates’ and seems to me to be only used to hide the fact that they are saying something banal under the cloak of a term familiar in neo-Keynesian economics.

Don’t expect the neo-Keynesians to understand this though. They are so clueless about financial markets that it is amusing almost to the point of being embarrassing. I sent one of the well known neo-Keynesian bloggers the link to the last post on Twitter. His response? “Someone tell this guy that yields and interest rates are the same thing”. You can’t even make this stuff up. And this guy brags about teaching… financial economics. Excuse me while I rush for the escape hatch.

I am beginning to suspect that economists this lacking in financial diction probably can barely even read the financial press — and if they do try you can be sure that after the material has been swallowed without chewing, almost nothing is digested. Lord help us if they ever get in charge or gain influence by training policymakers… oh wait…

Posted in Economic Theory | 6 Comments

On Meta-Analysis in Economics


Did you know that if you are male and eat beans every Tuesday morning at exactly 8.30am you are more likely to marry a supermodel? No. That’s not true. I just made that up. But I hear of statistical studies in the media that sound only slightly less ridiculous all the time. Often these have to do with diet, sexual psychology or… economics.

All three of these spheres are, of course, the sorts of things you find dealt with in the religious and mythological texts of old. This is because they are key psychological aspects of how we as humans form our identities. The manner in which we eat, what would today be called our sexual orientation/preferences (it should be noted that this was treated very differently prior to the 19th century…) and how we organise our societies are things that constitute key components of our personal identities.

These are slippery aspects of existence. Because they are effectively moral issues we as humans need to feel that they are constant throughout time and space. But anyone with any historical or cultural understanding knows that these shift this way and that over time. Diet fads fluctuate rapidly, while cuisines of various types go in and out of fashion. Sexual norms change from decade-to-decade (homosexuality was considered a mental disorder in the West until 1973!). And if you need to be told that fads in economic policies are historically contingent and reflective of the politics of the day then you probably shouldn’t be reading this blog.

Science dreams of reducing all of this to Reason. It has since at least the 19th century when religion fell by the wayside and science tried to fill the void. In every era there is some hocus pocus thrown up wearing the clothes of the scientist and handing down Moral Truths: about how we should eat, how we should conduct ourselves sexually and how we should run our societies. In the past 40 or so years these questions have increasingly fallen to social science disciplines (and dieticians) who use statistical techniques.

The problem is that the nature of the material that they are dealing with is not suited to the techniques they are using. The nature of the material is that it changes and evolves through time. We cannot anticipate these changes to any large extent either. Doing so would be like trying to predict what style of dress will be popular in 2080. This leads to the statistical literature generally being a mess. Indeed, the literature itself seems to evolve through time together with the data and the ideological fads that emerge and die off. I increasingly think that the statistical literature is coming to mirror the trends themselves but with a lag.

The latest attempt to impose some order on this chaos is the practice of so-called ‘meta-regression’. The idea is to take all of the studies showing all of the contradictory results, aggregate them and run regressions on them. In sciences where the material is suited to statistical study — that is, in sciences where causality does not change and evolve through time — this is quite sensible. But where the material doesn’t accommodate this, such analysis likely only amplifies the underlying problems.
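The simplest form of this kind of aggregation is inverse-variance pooling of effect sizes, and a sketch of it makes the buried assumption visible (the numbers here are made up for illustration):

```python
# Hypothetical effect-size estimates and standard errors from four 'studies'.
estimates = [0.42, 0.35, 0.51, 0.10]
std_errors = [0.10, 0.08, 0.12, 0.05]

# Fixed-effect pooling: weight each study by the inverse of its variance.
# Note the assumption baked in: a single true effect common to all studies.
weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled estimate: {pooled:.3f} (s.e. {pooled_se:.3f})")
```

The pooled number looks reassuringly precise, but that precision is only meaningful if the studies really are estimating one constant underlying effect, which is exactly what is in question when the material changes through time.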

Take, for example, the following paper ‘Wheat From Chaff: Meta-Analysis As Quantitative Literature Review‘ by T.D. Stanley. In the paper Stanley says that we should use meta-regressions to do our literature reviews. The problem is that this assumes that the studies on which we run the meta-regressions have some underlying validity in the first place: that is, that they can give us information about certain causal laws that will hold into the future.

Some of the examples that Stanley gives where meta-analyses have been applied in the past seem reasonable, others do not.

There are many examples where meta-analyses clarified a controversial area of research. For example, meta-analysis has been used to establish a connection between exposure to TV violence and aggressive behavior (Paik and Comstock, 1994), the efficacy of coronary bypass surgery (Held, Yusuf and Furberg, 1989), the risk of secondhand smoke (He et al., 1999), and the effectiveness of spending more money on schools (Hedges, Laine and Greenwald, 1994; Krueger, 1999). (p133)

The efficacy of coronary bypass surgery seems very reasonable. We know the mechanism through which this is supposed to work. But there still arises the question of environment. I should hope, for example, that the meta-analysis is being run on people in countries with similar diets and weather and who come from similar income groups. This raises an issue that we shall encounter more critically in a moment.

The risk of second-hand smoke is slightly more dubious. This, as is well-known, is not something that is particularly easy to prove. I do not know how they do these studies but I would assume that they would look for instances of lung and heart disease in non-smoking people who co-habit with smokers. Something along these lines would be a reasonable approach. Again, this is because we know the mechanism through which smoking causes these diseases and we know that this has relative constancy through time and space.

Spending money on schools is far more difficult. First of all, Stanley doesn’t say what spending more money on schools is effective for. We can only assume that it has to do with educational outcomes. Personally I believe that spending more money on schools is generally effective in this regard simply due to intuition and personal experience. But it is not quite clear that we can meaningfully test it in statistical terms, nor is it clear that we should ever make such claims except in a very general sense. The causal mechanism is not clear here. There are many ways in which this money can be spent. It is also not clear that spending money will fix problems in all schools. Some schools may have issues related to funding. But some may have issues that have little to do with this: the class background of the children who attend or the structure of the testing regime come to mind as issues that may not be related to funding. Here we are beginning to see that the causes and effects become murky. While every smoker suffers from basically the same cause and effect mechanisms, this seems less likely in the case of schools.

The study linking TV violence and aggression sounds the alarm for me. That sounds like garbage. The causal link here seems highly abstract and based on some crude mechanistic stimulus-response view of human psychology. The methodological issues also seem problematic: is this a lab experiment or is it based on survey results? Both suffer from serious problems. I also see no way to establish causation: do people with violent tendencies watch violent TV programs or vice versa? If we cannot establish causation any information we do glean from the study — even if we believe in the study itself — will be largely useless.

I could look at all the studies individually, but I — like you, dear reader — have limited time. We all need some sort of filtering system to sort sense from nonsense, and what I just demonstrated above is how I tend to think about these issues; in economics, as well as when I’m reading the newspaper, this is how I usually deal with such questions. And I think it is pretty functional.

Anyway, back to meta-regressions. The problem with these is that they aggregate even more than the studies themselves. This is fine when we are dealing with material that is homogenous through time — that is, material where the causality is fairly stable — but it will not work where the causality is slippery. In the above examples again I would highlight the studies linking TV violence to aggression.

I have dealt with this question on here before. But let me give a practical example: that of the multiplier. Let’s say that I need to give a politician a number for the fiscal multiplier in their country. Now, many economists — assuming that causality is constant through time — would get as much time-series data as possible for the country in question and run regressions. But let’s say that some extreme event had happened in the past five years like, oh I don’t know, a financial crisis. I would think that the multiplier would likely have changed from before this crisis. Thus the question is raised whether we should estimate the multiplier using the whole time-series or using only the data from after the crisis. My gut says that we should use the data after the crisis, but there are ways to look into this in more depth.
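The stakes of that choice can be made concrete. Assuming, purely for illustration, synthetic quarterly data in which the multiplier shifts after a crisis, a full-sample regression blends the two regimes into a number that describes neither:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

# Synthetic quarterly changes in government spending (x) and output (y).
x_pre = rng.normal(0.0, 1.0, 80)
y_pre = 0.6 * x_pre + rng.normal(0.0, 0.3, 80)    # pre-crisis multiplier ~0.6
x_post = rng.normal(0.0, 1.0, 20)
y_post = 1.5 * x_post + rng.normal(0.0, 0.3, 20)  # post-crisis multiplier ~1.5

full_sample = linregress(np.concatenate([x_pre, x_post]),
                         np.concatenate([y_pre, y_post])).slope
post_only = linregress(x_post, y_post).slope

print(f"full-sample estimate: {full_sample:.2f}")
print(f"post-crisis estimate: {post_only:.2f}")
```

The full-sample estimate sits well below the post-crisis one because the longer pre-crisis period dominates the pooled regression, which is the sense in which ‘more observations’ can make the answer worse rather than better.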

The point is that we at least need to raise the question. But economists often do not. They aggregate, aggregate, aggregate. They choose datasets willy-nilly. They assume constant, homogenous causes. Why? Because, I think, they are more often than not already sure of what they are going to say and they use the empirical techniques to dress this up. There is a risk then that using meta-analyses will only give us a reflection of the average opinion of the economics community at any given moment in time. But these opinions are extremely prone to fads because the economics community is insular, pretentious, consensus-driven and ultimately insecure. Today NAIRU, yesterday monetarism. Tomorrow? God knows. Beans and supermodels probably wouldn’t be far off.

Posted in Statistics and Probability | 6 Comments

Is There Such a Thing as an ‘Economics-Based Psychotic Delusion’?

[Image: James Tilly Matthews’ drawing of the ‘air loom’]

There is a somewhat well-known phenomenon called Jerusalem Syndrome that has gained some currency in popular culture (you can see some TV clips from the 1990s here). The folk legend goes something like this: people who are perfectly well-balanced psychologically take a trip to Jerusalem where they become psychotic; that is, they start producing severe psychiatric symptoms such as hallucinations, delusions and a general loss of contact with reality. The psychosis is, as one would expect, typically religious in nature.

Why do I say that this is a folk legend? Because it strikes me as being likely untrue. It seems far more likely that people experiencing incipient psychosis undertake a trip to Jerusalem as part of their delusion. The psychosis then becomes manifest when they arrive in the city. It is well-known that famous people who became psychotic often traveled to places of significance that they integrated into their delusions. The French playwright Antonin Artaud, for example, when he began to manifest serious psychotic symptoms, came to believe that he had stumbled upon a walking stick that had once belonged to Lucifer, Jesus Christ and St. Patrick. This prompted a trip to Ireland from which he left in a straitjacket.

The Jerusalem Syndrome brings up something that goes under-appreciated in popular culture but which psychologists and psychiatrists have long known: namely, that psychosis draws its material from the cultural material available at any given moment in time. One might say that the essence of psychosis is when certain themes from a person’s culture are pushed to their absolute extreme. Thus, for example, a devout Christian that becomes psychotic will begin to believe that they are the second coming of Jesus Christ. Many will even experience hallucinations of being crucified and so forth.

What is interesting about this, and what has been recognised by many since at least Freud, is that many cultural manifestations closely resemble psychotic symptoms. Religions and myths, for example, look extraordinarily similar to psychotic delusions — indeed, this has led some psychiatrists and philosophers to suggest that these cultural forms are often established by people experiencing what psychiatry now calls psychosis. For people interested in culture and the history of ideas this suggests that highly articulate psychotic delusions can often be extremely instructive in telling us where a society or a culture is at any given moment in time. It can tell us a lot about the underlying systems of belief that are present in society in any historical period.

Until the Enlightenment era psychosis mostly manifested as religious delusions. People that were then designated as witches and so on were often experiencing what would now be considered psychosis. (This is in contrast to people who were ‘possessed by demons’ who would be suffering from what would now be considered non-psychotic psychiatric symptoms and would be given the pre-modern version of psychotherapy: namely, exorcism). But when the Enlightenment began to spread through culture new types of delusions came with it.

The most famous and interesting of these was the case history of a man which is now known as the first description of paranoid schizophrenia. His name was James Tilly Matthews and he was a tea merchant from Great Britain who took part in the French Revolution. Matthews, who was clearly versed in the emergent sciences of his day, came to conclude that a gang of conspirators had constructed a machine called an ‘air loom’ which they were using to control his thoughts, feelings and behaviors (the picture at the top is Matthews’ drawing of the air loom). This was the first recorded delusion in which the fantasy of what would later be called an ‘influencing machine‘ appeared. In Matthews’ time he drew on the emergent discipline of engineering, today many people experiencing psychosis would likely draw on the computer sciences.

There is a very particular trait that ties the influencing machine to the old religious delusions. I shall quote from the Wikipedia page to highlight this:

The delusion often involves their being influenced by a ‘diabolical machine’, just outside the technical understanding of the victim, that influences them from afar. (My Emphasis)

Note that the influencing machine must not be understandable to the person experiencing the delusion. The delusion then becomes an effort to “endeavor to discover the construction of the apparatus by means of their technical knowledge”. In the past — and even today among more religiously minded people — delusions would involve some sort of connection with God. At first, as with the machine, the psychotic person is unable to quite grasp what God is communicating. He has some ineffable knowledge that the psychotic person must then ‘figure out’ — perhaps, for example, by undertaking a trip to Jerusalem. The psychotic episode then becomes that of figuring out what God is communicating (for a very famous instance of this see the Schreber case), just as in the instance of the influencing machine it is the process of figuring out how the machine works.

Now, why am I writing all of this on a blog about economics? For the simple reason that I have never seen a documented case of psychosis that involves metaphors derived from economics. And yet, I am certain that many such cases exist. Indeed, they must. Discourse today, especially after the 2008 crisis, is so saturated with technical economic terms of reference that they have completely permeated popular culture. Books such as the absolutely awful Freakonomics have widely publicised ideas of rational choice and all that sort of thing.

But perhaps most of all: economics furnishes pre-made structures that can easily be integrated into delusions. For example, the idea of a utility-maximising agent who can effectively see into the future. This is an extremely similar concept to the idea of an omniscient God-figure — indeed, I think you can trace a direct lineage from Adam Smith’s ‘Invisible Hand’ (an explicit reference to Divinity), to Walras’ auctioneer, through Frank Ramsey’s central planner, right down to today’s rational agent with rational expectations. I have made this point many times before: contemporary mainstream economics discourse is the closest thing we have in academia today to a secular theology.

So, why haven’t we seen such a case emerge? Or does it exist in the literature and I have not been able to locate it? If it is indeed the former then I would imagine that people are simply not looking hard enough. But with the introduction of David Tuckett’s work into the field perhaps clinical psychologists might become more attuned to the cultural impact that ideas about economics are likely having. If anyone ever comes across anything like this please let me know. It would make for a lovely essay.

Posted in Psychology | 14 Comments

INET YSI Discussion of a Chapter of My Forthcoming Book


Earlier this week Amogh Sahu set up an INET YSI discussion group to deal with the fourth chapter of my forthcoming book, which is entitled ‘Schemata: Abstraction and Modelling’. This is not available yet but I shared a draft with the group so that they could discuss it. I did this because I believe that the chapter represents a new approach to methodology and epistemology in economics (big claim, I know, but others will have to decide when the book comes out).

I hope that the chapter will at least allow people to discuss issues that are currently not discussed. Economists today simply do not know what they are saying with their models. I mean by that both that they are not clear about the relationship between models and the real world and that they have allowed a level of abstraction to creep into their modelling that leaves them unsure what the modelling is actually saying. So, the disconnect is twofold: (1) economists are not clear on the relationship between their models and the real world and (2) economists are not clear on what the models are saying due to the highly abstract and speculative components out of which they are built.

The discussion is structured as follows. After some preliminary introductions Amogh gives a breakdown of the chapter from the beginning of the clip until the 27.30 mark. I then give my input on what has been said, mainly clarifying the key points, from 27.30 until about 41.00. The conversation then opens up to general discussion, structured as questions and statements addressed by Amogh and me. There are some echoes in the discussion that are my fault — I didn’t have a headset — but I think it is pretty listenable.

Posted in Economic Theory, Philosophy | Leave a comment

So-Called ‘Long-Run’ Monetarist Correlations and Non-Ergodicity


Just doing some quick house-cleaning on some previous posts. When I ran regressions plotting the money supply against the CPI I was told that I should average them over 5-year periods because this would supposedly iron out ‘volatility’ and show long-run trends. I responded in a post saying that I was dubious of this practice because I did not believe it would show long-run trends at all. Rather I thought it was being used to screw with the data. Here is a concrete example, drawing on the M3 and the CPI, showing why it is wise to be on your guard against this sort of aggregation.

Here is the M3 money supply mapped against the CPI from 1960 until 2005 (I previously thought this data was available only from 1981 but I dug a bit more in FRED and found this longer dataset):

[Chart: regression of 5-year averaged M3 against the CPI, 1960-2005]

Looks good, right? Nice upward-sloping trend, right? Not really. Check that R-squared. It’s quite low. What might that be hiding? Well something rather serious actually. You see we had VERY different relationships between these two variables from 1960 to 1985 than we did from 1985 to 2005. The 1960-1985 results skew the other results. Here are the 1960-1985 results:

[Chart: regression of 5-year averaged M3 against the CPI, 1960-1985]

Wow! Now that is a tight relationship! Really statistically significant! If we accept this methodology of averaging to show long-run trends then this might have some purchase. But let’s maintain some healthy skepticism rather than dancing for joy just yet. After all, we wouldn’t just want to find what we wanted to find would we?

You see, in the period 1985-2005 this relationship went completely in the other direction. Here is the data plotted from 1985-2005.

[Chart: regression of 5-year averaged M3 against the CPI, 1985-2005]

What does all this show? It certainly shows that between 1960 and 1985 there were positive correlations between the M3 money supply and the CPI in the US when we average both of those variables over 5 years. But it also shows that between 1985 and 2005 that relationship turns negative — and quite strongly so**.

The typical complaint is that the latter two regressions do not have enough observations. This is ironic coming from the same people who told us to run the so-called long-run averages in the first place as these significantly reduce the observations. But it is also highly misleading. Clearly we have two entirely different historical time periods here. When we lump them together what we get is that one time-period dominates the other and we do not get a nuanced, useful picture.

While I’d typically agree that you want as many observations as possible you can’t just throw them in the proverbial statistical blender. This can allow certain periods — or even certain datapoints — to dominate others and this can lead to a fake homogeneity in the dataset. This is one of the many dangers of averaging and aggregating.

It is a bit like if we had two countries. One had short people and the other had tall people. Then we add the average heights together from both countries and say that on average the two countries have medium-height people. But this is highly misleading. In fact, the relevant information is that one country has short people and the other tall people. Ditto in the above. The relevant observation for anyone really interested in the data and not just interested in proving their pet thesis is that the monetarist correlation held in one period and ceased to hold in the second.
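The pooling problem can be put in statistical terms with a toy example (made-up numbers): two regimes with opposite relationships, lumped into a single regression that reports a strong positive fit.

```python
import numpy as np
from scipy.stats import linregress

# Regime 1: y rises one-for-one-ish with x at low levels of x.
x1 = np.linspace(0.0, 5.0, 25)
y1 = 1.0 + 0.9 * x1
# Regime 2: at higher x, y falls with x, but from a higher level.
x2 = np.linspace(6.0, 11.0, 25)
y2 = 12.0 - 0.9 * (x2 - 6.0)

r1 = linregress(x1, y1).rvalue
r2 = linregress(x2, y2).rvalue
pooled_r = linregress(np.concatenate([x1, x2]),
                      np.concatenate([y1, y2])).rvalue

print(f"regime 1 r: {r1:+.2f}")
print(f"regime 2 r: {r2:+.2f}")
print(f"pooled r:   {pooled_r:+.2f}")
```

The pooled correlation comes out strongly positive even though the second regime is perfectly negative, because the gap between the two clusters dominates the within-regime movements — the statistical blender at work.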

What can we draw from this exercise? A number of things actually. First of all, averaging statistics or taking big long-run averages can produce misleading results. Imagine we had gone with the first chart. Well, we would predict that in the next 5 years the 5 year average of the M3 would be correlated with the 5 year average of the CPI. But that was only due to a spurious aggregation. If we were forced to place a bet we would be on far better ground with the last chart plotting a negative relationship in the years running up to the most recent observation.

Seriously, think about this. Suppose I put a gun to your head and said that you had to bet your life on a horse, that you were only allowed to bet on that single horse over and over again until he won or lost and that, for the sake of argument, you could not change your bet after the exercise had started. Then suppose I gave you data showing that while he typically lost races in the first leg of his career, which lasted two and a half years, he typically won them in the second leg, which lasted two years. Would you go and calculate an average, show that he lost more than he won and stake your life on him losing? If you did you’d be a fool. Clearly something had likely changed in the physiology or the training of the horse in the meantime. Now if you had the time you might investigate what this change might have been. That would certainly be the research question that the data raises.

This ties into the second question: what do these results mean? What they indicate is that the relationship between M3 and CPI is non-homogenous over time. It changes. And, what’s more, it changes drastically. In one twenty-five year period it is positive. In the twenty year period that follows it is negative. This further leads to a provisional conclusion: it might be that heterogeneity is inherent in most economic statistics. But that is scary to the mainstream because they believe in timeless economic laws. In order to generate these laws they engage in methods like the spurious aggregation we see above.

The truth is, of course, that economic data is non-ergodic and the future does not reflect the past. But that implies that economists cannot formulate timeless laws. And that makes mainstream economists, who are typically inflexible and against true empirics in this regard, very sad indeed. Poor mainstream economists.

_________________

** The positive relationship begins to break down in the post-1985 period. But it really comes unwound in the post-1990 period. If we run regressions from 1960-1990 and from 1990-2005 we get interesting results which further buttress our findings. The 1960-1990 relationship remains positive but the R-squared drops from 0.81 to 0.76. Meanwhile, the 1990-2005 negative correlation increases dramatically and becomes extraordinarily well fitted, with an R-squared of 0.97. Almost a perfect fit. This, of course, to some extent reflects the smaller number of observations being used. But it still shows the dynamics of the change in the underlying nature of the data.

Of course, this is all rather silly from another perspective. If we just plot the two data series together on a simple line graph we can clearly see when the correlation changed by eyeballing the chart. But then we will be accused by our mainstream friends of being ‘primitive’ and so forth. This ignores the fact that people working with data for a living who actually have to get things right and are not allowed the plush luxuries of speculation in academia use such eyeballing techniques all the time to great effect. Undertaking an exercise such as the above with open eyes shows you exactly why this is.

Posted in Economic Theory, Statistics and Probability | 15 Comments

The Backward World of ‘Arguments From Authority’ in Economics

6_Appeal_to_Authority

Unlike almost all the other human sciences economics suffers from chronically poor scholarship. Bad habits of citation and scholarship have become so ingrained in the discipline that to not adhere to them is often considered to be poor form or a sort of sin.

Every heterodox economist has probably heard this facile accusation thrown around a million times: if you cite old sources or bring up old debates you are said to be 'arguing from authority'. Here I hope to show that it is actually the mainstream that typically argues from authority.

First of all, what is an argument by appeal to authority? Here is an example from the Wikipedia page on the fallacy:

A says P about subject matter S.

A should be trusted about subject matter S.

Therefore, P is correct.

Note here the key point: it is asserted that P is correct because the authority figure says it. Here is a variant of the same type of argument again from the Wikipedia page:

B has provided evidence for position T.

A says position T is incorrect.

Therefore, B‘s evidence is false.

Again, the key point is that the evidence is said to be false because the authority figure says so. So, for example, the palaeontologist's finding of very old dinosaur bones is incorrect because the Bible says that the world is not that old. But it is obvious that neither I nor most other heterodox economists argue like this (and I invite readers to find one instance in which I have). In order to do so I would have to explicitly say: "Keynes said that position T is incorrect, therefore position T is incorrect."

Here is a typical instance from a commenter on this blog purporting to show that I appeal to authority when making arguments. I paraphrase the first part but it captures the essence of the accusation.

[When you make an argument you] appeal to authority by printing some irrelevant quote.

Now, why would I print a quote from a source? In academia we call this 'citation'. Wikipedia has this to say about citation:

Citation has several important purposes: to uphold intellectual honesty (or avoiding plagiarism), to attribute prior or unoriginal work and ideas to the correct sources, to allow the reader to determine independently whether the referenced material supports the author’s argument in the claimed way, and to help the reader gauge the strength and validity of the material the author has used.

While these are all valid reasons to engage in citation, I think the Wikipedia page leaves out one other important reason: namely, to contextualise the discussion in terms of historical debates. If a historical debate can be said to be ongoing then it is useful to ground the present discussion within it. In academia this typically takes the form of a literature review, and these are usually thought better the broader the scope of the sources used. For example, if I write an essay on Free Will my literature review will be stronger if I start with the debate between Luther and Erasmus in the 16th century rather than with the discussions of free will by the existentialists in the 1940s and 1950s.

Now, this is all basic scholarly practice. One of the things noted by the employers supporting the students currently protesting the economics curriculum is that graduates come out of their economics training unable to write essays properly. This reflects the fact that these students are not taught basic academic standards at university and have to be retrained in the workplace. This can create enormous problems: unintentional plagiarism, an inability to communicate ideas properly and an inability to give a balanced view of a debate. These are problems that mainstream economists often display in my experience, most especially the latter. Past debates that retain relevance are buried and forgotten as if they had never happened. Mainstream economics moves forward not through logical development and integration, but through forgetting.

I also said that it is typically mainstream economists who engage in arguments by appeal to authority. What do I mean by this? Well, simply that they appeal to the authority of the hard sciences as justification for what they are doing. This is a very crude 19th century mode of appeal to authority that has been distrusted in the social sciences since the beginning of the 20th century.

It runs like this: hard scientists reason in manner X, therefore manner X is the correct mode of reasoning to apply to social discipline Y. This particular appeal to authority grew up in the 17th and especially the 18th century, but I will not give a history of it here. Needless to say, it is not considered proper scholarship in the humanities today. But economists who do not have proper methodological training engage in it all the time. Here is an example from the same comment left on my blog that I quoted from before:

Moving averages and time series filters are not inventions of economists. They are standard and well accepted part of statistics and engineering…

Here the question of whether the method is actually appropriate to the material being studied is avoided; instead it is said that these methods are well accepted in statistics and engineering. "Statistics and engineering" are brought in as authorities on the matter, and so the question is assumed to be closed. But what is really at issue is whether these methods are appropriate for the particular purpose under discussion. A general appeal to authority is made to lay the issue to rest without dealing with it. This is then buttressed by a fairly overt appeal to authority:

Climate science tells us there's a long-term relationship between CO2 (and other) emissions and temperature increase. Would you deny this just because yearly changes in CO2 and temperature are noisy and have low correlation? I hope not.

Here the authority rests with the climate scientists. The implication is that if I say that certain ways of calculating supposed long-term relationships in economic statistics may not be appropriate then I am also saying that the findings of the climate statisticians are incorrect. The “I hope not” at the end translates roughly as “I hope you wouldn’t violate that particular authority figure”.

The specific structure of this argument by appeal to authority is rife in economics. It runs like this: hard science discipline X uses method Y therefore if you say that method Y is inappropriate when used in soft science discipline Z then you are arguing against its use in discipline X. After this fallacy is deployed the authority of discipline X is invoked to shut down debate.

Another, simpler variant on this theme runs as follows: hard science discipline X uses this method, therefore soft science discipline Y is de facto justified in using it without further methodological justification. Again the authority of discipline X is used to close down debate.

I have what I think is a rather humorous paragraph on this argument by appeal to authority in my forthcoming book. It runs as follows:

In the late-19th century the marginalists found that if they reduced the world in such a way the thought experiments that they asserted were valid representations of the world could be treated using mathematical methods derived from the physics of the era. They took on these methods – methods which are still taught in the classroom today – in order to lend their ideology a cloak of mystical objectivity borrowed from the physical sciences (Mirowski 1991). People then assumed that because they were using a similar mathematical method to manipulate their variables as physics was using, the truth-value of the content of marginalism was the same as the truth-value of the content of physics. This reasoning, if one can call it such, is a bit like assuming that when a person dresses up like a policeman they thereby gain the powers of arrest. Just because a discourse wears the same clothes as a sister discourse does not mean that the first discourse thereby gains the truth-value of the second discourse. To think otherwise is ridiculous in the extreme and is based on a most fundamental logical fallacy. But in the world of ideology the ridiculous is often elevated to the status of the sublime.

 
