How to Forecast the Stock Market from a Macro Perspective


My letter to the Financial Times, responding to the recent article on stock market fragility and the debate between Robert Shiller and Jeremy Siegel over how to interpret the data, has been published. In the letter I try to draw attention to an article I wrote for the FT Alphaville blog back in July showing how we could combine the Levy-Kalecki profit equation with Shiller’s index to make medium-term projections about the trajectory of the stock market.

While the FT Alphaville post lays out my argument in more detail, this is basically the crux of it:

The robustness of the stock market is generally gauged by looking at the price-earnings ratio (P/E Ratio). While this is by no means a perfect measure, it gives us a good idea of how solid stock market rallies might be. Now, the P/E Ratio has two components: prices, which are dictated by the buying mood of the markets, and earnings, which are largely driven by macroeconomic profits.

In the piece I point out that the best measure of the drivers of macroeconomic profits is the Levy-Kalecki equation. Thus, by studying the evolution of macroeconomic profits in the economy, we can make projections about future corporate earnings. This, in turn, allows us to guess in what direction the P/E Ratio is heading: a fall in macro profits will cause it to rise, while a rise in macro profits will cause it to fall.
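To make the mechanics concrete, here is a minimal sketch of how the ratio moves when earnings change while prices hold steady. All figures are invented for illustration, not real index values:

```python
# Illustrative sketch: how the P/E ratio responds to changes in
# earnings (driven by macro profits) when prices hold steady.
# All figures are invented for illustration.

def pe_ratio(price, earnings):
    """Price-earnings ratio: market price divided by earnings."""
    return price / earnings

price = 1650.0     # hypothetical index price level
earnings = 70.0    # hypothetical earnings figure

base = pe_ratio(price, earnings)

# A fall in macro profits drags earnings down, so the ratio rises...
after_profit_fall = pe_ratio(price, earnings * 0.9)

# ...while a rise in macro profits pushes earnings up, so the ratio falls.
after_profit_rise = pe_ratio(price, earnings * 1.1)

print(round(base, 2))            # 23.57
print(after_profit_fall > base)  # True
print(after_profit_rise < base)  # True
```

The point of the sketch is only the direction of movement: holding prices fixed, the ratio and earnings move in opposite directions.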

Personally, I think that this is the best way to make medium-term macroeconomic projections about the stock market. It does not provide us with a ready-made answer — indeed, one would be hard-pressed to find the level that the P/E Ratio must reach before a crash definitely happens — but it does allow us to keep the likely evolution, given the macroeconomic fundamentals, at the back of our minds.

At the time of writing the piece I noted that macroeconomic profits were likely on a downward course due to the sequestration at the beginning of the year and the subsequent fall in government deficits (the key driver of macro profits after the 2008 crisis). The figures for the P/E Ratio as they stood then were as follows:

January 2013: 21.90

July 2013: 23.49

And today they are as follows:

September 2013: 23.67

That’s a pretty slow crawl, of course, but the ratio certainly hasn’t fallen since I wrote that piece. Let’s have a look at where it is this coming January.

For now, as I pointed out in my letter, one really needs to keep an eye on the government deficit in the US as this is the main driver of macro profits. This is simplified advice for working investors, but one would do much better with proper data on all drivers of macro profits.

Posted in Market Analysis, Media/Journalism

A Response to Matheus Grasselli on Probability and Law


Matheus Grasselli has responded once more to one of my posts. The unfortunate part is that he has dragged some other poor souls into the quagmire of misunderstanding and poor reading. I suppose it’s now on me — having brought his attention to these issues — to clear up his misunderstandings. As we will quickly see, Grasselli is not arguing with me or with the other authors; rather, he is arguing with himself.

First, his response to my own piece. This consists of two parts. The first is that Grasselli thinks that there is only one version of probability. He writes:

To say that there are alternative probabilities, one preferred by trained statisticians and another adopted by lawyers and judges is akin to say that there are alternative versions of chemistry, one suitable for the laboratory and another, more subtle and full of nuances, adopted by the refined minds of cooks and winemakers, honed in by hundreds or even thousands of years of experience. Clear nonsense, of course: the fact that a cook or winemaker uses tradition, taste, and rules of thumb, does not change the underlying chemistry.

As we will see throughout this response, there are actually two different types of probability: those that can be given a numerical estimate and those that cannot. If I flip a balanced coin I can give it a numerical estimate: the chance of flipping heads is 0.5, as is the chance of flipping tails. If, however, I say that I think that Rand Paul will become the next Republican presidential nominee, I cannot give this a numerical estimate. I can make a good argument for why I think this. But I cannot give it a numerical estimate, as any estimate would be arbitrary.

Grasselli will respond that I can be a good Bayesian and give it an arbitrary estimate and then test my model against the data until I get a proper numerical estimate. But alas I cannot. Because Rand Paul’s nomination or lack of nomination is a unique event. It only happens once. By the time I know whether he has been nominated or not the estimate will be meaningless and prior to his nomination or lack of nomination I cannot assign a proper numerical value for the aforementioned reason that it is a unique event.

So, contrary to what Grasselli claims, there are indeed two types of probability: those that can be numerically estimated and those that cannot.
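The distinction can be made concrete with a small sketch. For a repeatable event, even an arbitrary Bayesian prior gets corrected by the data; for a one-off event there is no data to do the correcting. The numbers below are illustrative only:

```python
import random

# Beta-binomial updating for a repeatable event: start from a
# deliberately bad prior about a fair coin and let the data wash it out.
# All numbers are illustrative.

random.seed(42)

alpha, beta = 8.0, 2.0   # bad prior: "heads is 80% likely"

for _ in range(1000):
    heads = random.random() < 0.5   # flip a fair coin
    if heads:
        alpha += 1
    else:
        beta += 1

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 1))   # converges toward 0.5

# For a unique event -- a particular nomination that happens only once --
# there is no sequence of trials like this to run, so whatever arbitrary
# prior we start with is the number we are stuck with.
```

The repeatable case converges no matter how bad the starting prior; the unique case never gets the chance.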

The second part of his response to my post was similar to this. He seems to have misread my post to mean that Bayesian statistics have nothing to do with “degrees of belief”. This was simply not in the text. What I said was that Bayesian statistics require quantitative measures of said degrees of belief, while the more Keynesian approach does not. As we have already seen, this is quite obviously true.

Next Grasselli launches a particularly misguided attack against Lars Syll. Here he simply has not read Syll’s interesting piece at all. He has merely scanned it to pick out easy targets — targets he himself constructs. You see, Syll is discussing a very particular application of Bayesian statistics in his piece; namely, that which mainstream economists use in order to model so-called rational agents. Syll is trying to make the case that, and I quote, “it’s not self-evident that rational agents really have to be probabilistically consistent”. This is where his example of an agent that moves country comes in. This runs as follows:

Say you have come to learn (based on own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no own experience and no data) you have no information on unemployment and a fortiori nothing to help you construct any probability estimate on. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1, if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 10% to becoming unemployed and 90% of becoming employed.

What he is talking about is an agent in a model. All the agent can do, because of their Bayesian programming, is use the prior formed from their experience in “Sweden” and apply it to the new environment in which they find themselves. Syll’s point is that this is not what an actual rational person would do. Rather they would say “I don’t know what the unemployment situation is here”.

Grasselli takes Syll’s criticism of certain rational agent models and mistakes it for a naive criticism of Bayesian statistics. He then objects that we could estimate unemployment in the new environment by applying arbitrary priors and running tests. But that was not Syll’s point at all. Syll was talking about a model of human behaviour. He claimed that in certain rational agent models the agents simply project their previous priors forward and this is considered rational. But the unemployment example shows that this need not be rational at all. So, these models are not really modelling what a rational agent would do. This criticism has implications for how we treat genuine uncertainty in economic models.

Grasselli missed this, presumably, because he didn’t bother reading the piece. He just scanned it looking for easy targets; and found them, but not in the text.

Grasselli’s final comments about Kay are obscure. It is not clear whether he thinks that a court could be run using Bayesian statistics or not. Perhaps he might further enlighten us on this point — which was, by the way, the main point of my piece — and then we can have a debate rather than him attacking strawmen and me having to clean up the mess he makes.

Posted in Statistics and Probability

Probabilities: Keynesian Legal Versus Bayesian Mathematical


Lars Syll has (once again) directed me to a fascinating piece, this time by John Kay. Kay starts the piece by noting that in a recent legal case in Britain the judge was asked to define the term “beyond reasonable doubt” and, as is typical in such cases, refused to do so. The question, however, as Kay notes, was not a silly one: in English law criminal cases are decided only if the evidence is “beyond reasonable doubt”, while civil cases are decided on “the balance of probabilities”.

Kay also notes, however, that these terms are not being used in the manner a trained statistician might use them. He writes:

Scientists think in terms of confidence intervals – they are inclined to accept a hypothesis if the probability that it is true exceeds 95 per cent. “Beyond reasonable doubt” appears to be a claim that there is a high probability that the hypothesis – the defendant’s guilt – is true. Perhaps criminal conviction requires a higher standard than the scientific norm – 99 per cent or even 99.9 per cent confidence is required to throw you in jail. “On the balance of probabilities” must surely mean that the probability the claim is well founded exceeds 50 per cent.

And yet a brief conversation with experienced lawyers establishes that they do not interpret the terms in these ways. One famous illustration supposes you are knocked down by a bus, which you did not see (that is why it knocked you down). Say Company A operates more than half the buses in the town. Absent other evidence, the probability that your injuries were caused by a bus belonging to Company A is more than one half. But no court would determine that Company A was liable on that basis.

Clearly this is not in line with, say, a Bayesian interpretation of these terms. Rather such reasoning is much closer, as Syll notes in his post, to Keynes’ theories of probability which involve the weighting of an argument based on degrees of belief that are ultimately not subject to quantitative measurement.
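Kay’s bus example can be put into numbers to show what a purely mechanical reading of “the balance of probabilities” would imply. The market share below is invented for illustration:

```python
# A mechanical "balance of probabilities" verdict from base rates alone,
# as in Kay's bus example. The market share is invented for illustration.

def naive_verdict(prob_claim_true, threshold=0.5):
    """Find for the claimant whenever the bare probability that the
    claim is well founded exceeds the threshold."""
    return prob_claim_true > threshold

company_a_share = 0.6   # suppose Company A runs 60% of the town's buses

# Absent other evidence, the probability the bus belonged to Company A
# equals its market share: 0.6 > 0.5, so the mechanical rule says
# "liable" -- yet, as Kay notes, no court would decide the case this way.
print(naive_verdict(company_a_share))   # True
```

That the mechanical rule and actual legal practice disagree is exactly the point: the courts are weighing arguments, not thresholding base rates.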

Kay says in his post, correctly I think, that when those trained in statistics are confronted with the manner in which our legal institutions function they immediately think that the jurors are simply not educated sufficiently in modern mathematical methods. But, Kay says, to think this would be wrong. Rather we must understand that these legal institutions have evolved over centuries and have come to regard their approach to the weighting of evidence to be superior to any other. They have come to realise what some who are trained in statistics often simply will not concede: when faced with the extremely nuanced and complex aspects of a legal case quantitative probability estimates simply do not cut it.

Personally, I remember thinking this when I used to hang around courts while training as a journalist. The court system is by no means perfect and one can often detect bias, but it is an extremely functional institution that I have always been impressed by. Knowing a little about economic theory, I remember sitting in the courts thinking “what if we replaced the judge with a computer?”. I did not think this as just some sci-fi fantasy but because I had seen mandatory minimum sentences in action and, not only did I think them grotesquely unfair, but I also noted that judges tended to issue them with a sigh.

At some level many judges knew that such methods of sentencing — relying as they did wholly on rigid rules that judges had very little discretion in enforcing — were often unjust. It was in this context that I thought about the computer replacing the judge. In such a circumstance pretty much all sentences would be mandatory sentences because computers lack the nuances that humans possess. The legal system thereby produced would be nothing short of grotesque — perhaps reminding some of the Star Trek episode in which a computer allocated medical resources.

Now, here’s where the economics comes in: in my judgement economics is, in fact, more correctly thought of as an extension of legal theory rather than a science — something that has not gone unnoticed by legal scholars. It is, in fact, a sort of Art of Governance, and legal institutions are one of the founding pillars of our systems of governance. I think that Keynes was getting at something similar when he said that economics was a distinctly moral science. Here perhaps it is worth quoting his letter to Harrod of the 4th of July 1938 in the original:

It seems to me that economics is a branch of logic, a way of thinking; and that you [i.e. Harrod] do not repel sufficiently firmly attempts à la Schultz to turn it into a pseudo-natural-science… economics is essentially a moral science and not a natural science. That is to say, it employs introspection and judgments of value.

I think that this goes back to Keynes’ work on probability. His probability theories were, I think, supposed to be understood as how we should weigh up arguments in everyday life and come to a decision — as would a judge. He also thought that economics should be done along the same lines.

This seems to me the best way to apply economics. In fact, it seems to me, that the attempts to apply various statistical methods — I refer to econometrics and the like — to economic reasoning is not unlike replacing judges with computers. All critical economists should know the arbitrary policies that can be enforced through reliance on such methods and they should equally realise that, at the end of the day, the econometric studies chosen are chosen on the basis of their political usefulness and not vice versa. (Think, for example, of the IMF changing its multiplier estimates after the countries most closely associated with running it were hit with economic crises while having ignored such crises in the developing world for years).

Indeed, it would seem that economics really is subject to political judgements regardless of whether most economists use mathematical methods or not. We would all be a lot better off if we simply admitted what economics really is and what it is really used for. Then we can have a proper discussion of how it should be applied rather than people bickering over the supposed objectivity of their particular study.

The goal of a training in economics would then have to change too. Rather than attempting to turn economists into engineer-like functionaries as is currently the case, the goal would be to try to teach economics as part of a general Art of Governance. In this it would be closer to a training in law rather than a training in engineering. Much of the opacity and irrelevance of the discipline, I think, would then fall away. This is, of course, unlikely to happen because the engineer-like functionaries currently run the profession. But one can dream, right?

Posted in Economic Theory, Statistics and Probability

Marginalist Microeconomics as Authoritarian Poison


“The world itself is the will to power — and nothing else! And you yourself are the will to power — and nothing else!” – Friedrich Nietzsche

Lars Syll has directed me to an interesting post by Ole Rogeberg. It is about an interview with the labour economist Daniel Hamermesh. In the interview Hamermesh explains that the problems with economics today are mainly to do with the macro side, not the micro side. It is the macro side, Hamermesh explains, that failed to see the crisis, not the micro side.

First of all, this is an entirely dubious argument. Anyone who knows their history of economic thought knows that contemporary macro is basically an outgrowth of micro and has been so since the Lucas critique, which effectively said that a real economic theory requires microfoundations (a topic I have written on before, here and here). Since contemporary macro — of both the New Keynesian and New Classical varieties — builds itself on the principles of micro, any errors that the former falls into must rest in part on the structure of the latter.

That aside though, I think even the focus here is wrong. Economics is currently under fire for not just missing the financial crisis but also for providing a Panglossian opiate during the run-up to said crisis. This opiate came in two forms. The first was contemporary macro which essentially says that markets will sort themselves out and that any problems that arise will only have short-run effects.

The second was contemporary finance theory — most notably the Efficient Market Hypothesis and its “noise trader” derivatives. Finance theory made similar claims to contemporary macro — almost identical, in fact — but it also effectively gave trading advice to financial types in that they should trust the information that the market was feeding them and not try to think outside the box. This, some claim, led to a myopia on the part of those trading complex derivatives; a myopia that was then subject to a “surprise event” during the 2008 meltdown.

I don’t buy this story. I won’t get into why I don’t, but let’s just say that Ireland had a far, far bigger housing bubble than the US and we didn’t need sophisticated derivatives to hide the obvious from ourselves. The derivatives that went sour in the US banks in 2008 were a symptom, not a cause of the crisis.

Anyway, back to micro and the disrepute of economics. For me, micro is the worst part of contemporary economics by far. Any screw-ups that macro guys or finance guys make can ultimately be traced back to the biases ingrained in them by a training in micro. All notions of rational agents, of perfect foresight and information, of prices balancing supply and demand and not being subject to speculative pressures come directly from micro.

It is thus micro that is the real poison in economics; it is the assumptions that micro rests on that drag the rest of the discipline down into the swamp. And if you create a theory that doesn’t rest on the biases ingrained in the profession’s collective unconscious by micro it is these very biases that keep your work from being considered “serious”.

On that note I found it particularly interesting that Hamermesh cited Gary Becker as one of the “good guys” of contemporary economics. The reason he says this is that Becker is one of the micro guys who tried to apply the autistic vision of contemporary micro to all sorts of other situations; for example, family members are seen as selfish rational individuals who scrutinise each other in line with the paranoiac vision of game theory.

This shows clearly how the micro poison spreads. You see, it’s very versatile in its uselessness. While it says nothing of interest about how people actually behave and what motivates them, it nevertheless provides a framework that can be applied to almost everything (something that should make us suspicious from the outset). This means that savvy microeconomists can search out new applications in everything from family bargaining to workplace bargaining and so on.

This, in turn, can inspire actual attempts to apply such models in real life. A good example of this is the performance targets that the NHS used to try to boost employee performance — known in some academic literature as “targets and terror”. The targets led to Soviet-like absurdities within the hospital system. Here is some flavour:

Targets have been blamed for distorting clinical priorities. The Conservative party has claimed that the four-hour target for waiting times in accident and emergency (A&E) has led to distortions such as holding emergency patients in trolley waiting areas. And media reports based on internal ambulance service documents suggest that some patients have been held in ambulances outside emergency departments to avoid ‘starting the clock’ (Guardian 2008, Telegraph 2009).

Just as in the old Soviet management system people figure out ways to get around the targets and this causes all sorts of chaos and disruption. This is because people do not act in line with the rigid rules that the targets — and contemporary micro — assume and so, as they manipulate the rules to suit their own motivations, the system falls into chaos.

The NHS targets are a nice example of the authoritarian heart of contemporary micro (something I have called elsewhere a “blueprint for social control”). Like all authoritarian systems of thought contemporary micro gives its proponents a theory of man that functions for them as a means of control. They claim that man acts in a certain fashion and this ultimately leads to the setting up of structures to accommodate these supposedly innate behaviours. The reality, of course, is that man does not act in this way and so the structures turn authoritarian. It is only one or two steps from said authoritarianism to absurdism.

It is also for this reason that economists are so reticent about giving up their beloved micro. When you criticise it you often hear “but what better tools do we have?”. What that really means is “what better tools do we have to assert our authority over others?”. Thus much of the debate over micro is not a normal, rational debate. Much of it is a debate over who holds authority and on what basis. It is for this reason that contemporary micro is a malign spirit that is unlikely to be exorcised through rational argument; rather it is likely to remain in place so long as Western civilisation continues to be subjugated to the scientism it today kneels before.

Posted in Economic Theory

What Do IQ Tests Really Measure?


IQ. What is it all about? In our society it is generally seen as a sort of symbol of social status. So much so that some join groups like Mensa in order to hang out with other high-IQ people, while others sign up for dating websites supposedly geared toward one’s IQ level.

Yes, the more democratically minded amongst us might get something of a sniff of elitism off the whole thing. Scratch the surface and you might even begin to smell something even more unseemly; I refer of course to the 19th and 20th century eugenics movement which was intimately bound up with the emerging idea of IQ.

Well, a recent study has come up with extremely interesting results that, to my mind, raise yet more questions as to what IQ tests truly measure. The study tracks a group of farmers over the harvest season, which allowed the researchers to compare IQ against current level of income. The BBC article summarises it as follows:

The farmers go through three crucial stages in this cycle; before the harvest, when they have taken out loans to grow the crops and thus are extremely poor; after the harvest, but before being paid, when farmers are at the greatest extent of their poverty; and after being paid.

Or, as Dr. Anandi Mani summarises the aim of the study:

“With the sugarcane farmers, we are comparing the same person when he has less money to when he has more money. We’re finding that when he has more money he is more intelligent, as defined by IQ tests,” said Dr Mani.

The results are dramatic. By testing against a control group the researchers find that when the farmers are poorer they have lower IQs! The BBC summarises as such:

The study concludes that those in poverty, by having more constant and extensive financial worries, expend more of their mental capacity on these concerns, so that less can be used for other tasks.

The question this raises, of course, is whether IQ tests are really measuring something called “intelligence” in any meaningful way. The present study brings to mind another which found that children who were less motivated to take IQ tests did worse on them than more highly motivated children.

Kids who score higher on IQ tests will, on average, go on to do better in conventional measures of success in life: academic achievement, economic success, even greater health, and longevity. Is that because they are more intelligent? Not necessarily. New research concludes that IQ scores are partly a measure of how motivated a child is to do well on the test. And harnessing that motivation might be as important to later success as so-called native intelligence.

This strikes me as being correct. I remember a guy I used to know in secondary school who never did any work because he found it extremely boring. When the school counsellor had him take an IQ test to discern whether he had a learning disability, he purposely played up his ignorance of certain questions in the hope of being labelled with a learning disability, which he could then use as an excuse to opt out of certain classes and lighten his workload. (I know he wasn’t lying because he told me before taking the test.) The ploy didn’t work: he was never diagnosed with a learning disability but merely got a test score that didn’t reflect his probable ability (whatever that means…).

Then there are the anecdotes of those who score extremely high on these tests having taken them multiple times. Rick Rosner, for example, who was interviewed by Errol Morris in the excellent TV series First Person, comes across as someone who achieved most of his unusual feats through sheer persistence (one might even say: pathological and obsessive persistence). One wonders if his extremely high IQ scores have something to do with this also.

While IQ is generally thought in contemporary society to mean something like “raw intelligence” there seems very little evidence that this is the case. Some of what the tests pick up might be along these lines (indeed, I am not saying that a person with Down’s Syndrome can increase their IQ to normal levels through sheer effort), but much of what they pick up is probably not intelligence at all. And for that reason the social status of this measure should, I think, be called firmly into question.

Intelligence is an extraordinarily vague entity and people tend to be “intelligent” in vastly different ways. To single out one measure — which seems completely context-dependent anyway — as having special status really does appear misleading. I really don’t mean this in a wishy-washy “everybody’s special” way either. It seems to me rather that the test is designed by people who would, frankly, do fairly well on the test. Indeed, the scent of elitism which some might catch a whiff of may be easier to explain than many might think.

Posted in Psychology

Clarity and Obfuscation in the Use of Mathematics for Economic Reasoning


The Tony Lawson paper discussed on this blog the other day seems already to have begun to cause ripples in the heterodox community. The Real World Economics Review Blog has run a piece by Lars Syll on the paper and the responses have been rather varied.

One of the interesting claims that I noticed was that some people were saying that mathematics, due to its formal nature, provided economists with clarity. This was then typically followed up with appeals to how economics might become a science by increasingly mathematising. This argument seems entirely dubious to anyone who has ever investigated how science functions. But I do not here wish to either discuss whether economics is a science or if scientists working in other fields really do aspire to mathematical clarity rather than creative innovation.

Instead I would like to consider in what sense mathematics can provide economists with clarity in thinking through certain issues and in what sense it might do just the opposite. I think that a good example of mathematics providing clarity is the case of the Keynesian multiplier, which I have discussed on this blog before. The multiplier, when both imports and consumption are taken into account, generally looks something like this:

Yt = C0 + c·Yt + I + G + X − M0 − m·Yt

We can then manipulate this algebraically to get equilibrium income as such:

Yt* = (C0 + I + G + X − M0) / (1 − c + m)

Here is an example of a mathematical presentation providing clarity. Even without putting any numbers into the equation we can immediately discern the factors that will generate equilibrium income, Yt*. It will be composed of autonomous consumption, C0, investment, I, government spending, G, and exports, X, minus autonomous imports, M0. It will also be positively affected by the consumption multiplier, c, and negatively affected by the import multiplier, m.

We know this because the components of income, as just laid out a moment ago, are in the numerator of the equation, while the multipliers sit in the denominator, where they are subtracted from or added to 1. The larger the denominator, the smaller the resulting income, and vice versa. So, anything that “subtracts” from the denominator — e.g. the consumption multiplier — will increase equilibrium income, while anything that “adds” to the denominator — e.g. the import multiplier — will decrease it.
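The algebra can be checked numerically with a short sketch. All figures are invented for illustration:

```python
# Equilibrium income from the open-economy multiplier relationship:
#   Y* = (C0 + I + G + X - M0) / (1 - c + m)
# All figures are invented for illustration.

def equilibrium_income(C0, I, G, X, M0, c, m):
    """Autonomous spending divided by (1 - c + m)."""
    return (C0 + I + G + X - M0) / (1 - c + m)

base = equilibrium_income(C0=50, I=100, G=120, X=80, M0=30, c=0.8, m=0.2)
print(round(base, 1))   # 320 / 0.4 = 800.0

# A larger consumption multiplier shrinks the denominator, raising Y*...
higher_c = equilibrium_income(C0=50, I=100, G=120, X=80, M0=30, c=0.9, m=0.2)

# ...while a larger import multiplier grows the denominator, lowering Y*.
higher_m = equilibrium_income(C0=50, I=100, G=120, X=80, M0=30, c=0.8, m=0.3)

print(higher_c > base)   # True
print(higher_m < base)   # True
```

The sketch confirms the verbal reading: consumption leakage back into spending raises equilibrium income, while import leakage lowers it.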

As we can see, even though this piece of algebra looks somewhat mysterious to someone not familiar with it, to the trained eye it is actually possible to interpret it intuitively in a very tangible way. This, I think, is why it is an example of a use of mathematics that provides both clarity and insight into the underlying relationships.

Compare this presentation, however, with your typical econometric study. Such studies contain innumerable “black boxes” in that the reasoning behind the assumptions made is often entirely unclear. When it is not unclear and is made explicit (Wynne Godley’s forecasts at the Levy Institute are a model of econometric clarity, for example), one quickly sees that such assumptions are entirely arbitrary — often calling the entire endeavour into question.

One thus spends hours attempting to interpret and reconstruct such a study and, all too often, one comes away realising that the assumptions lead one inevitably to interpret the results as being almost entirely arbitrary. Keynes noted this well in his critique of Tinbergen when he wrote:

The labour it involved must have been enormous. The book is full of intelligence, ingenuity and candour; and I leave it with sentiments of respect for the author. But it has been a nightmare to live with, and I fancy that other readers will find the same. (p. 568)

Indeed, one recognises the labour that goes into such studies — especially if you have undertaken one yourself — but at the same time untangling it becomes “a nightmare to live with”. Why? Because such studies do not promote clarity at all. Instead they promote complete and total obscurantism. The mathematical symbols and manipulations become like a dense fog which the reader has to concentrate the depths of their attention and intelligence upon in order to dissipate, only to find that there is often nothing of substance there in any case.

This is not to say that econometrics is entirely useless. As Keynes says in the Tinbergen critique:

This does not mean that economic material may not supply more elementary cases where the method will be fruitful. Take, for instance, Prof. Tinbergen’s third example — namely, the influence on net investment in railway rolling-stock of the rate of increase in traffic, the rate of profit earned by the railways, the price of pig iron and the rate of interest. Here there seems a reasonable prima facie case for expecting that some of the necessary conditions are satisfied. (pp. 567-568)

What Keynes is saying is that if we have a number of variables that we can assume to be very closely and immediately related then the econometric method may prove fruitful. I’ve always thought that a nice example of a paper that does this correctly is Basil Moore’s classic Unpacking the Post-Keynesian Black Box: Bank Lending and the Money Supply, where Moore is extremely careful to lay out and justify the causal relationships before he engages in any econometric analysis.

Alas, however, the question of what is the “correct” manner in which to undertake such a study remains impossibly hard to define. That gives users of the technique who have not bothered (or have not been able) to think through its methodological problems free rein to engage in nonsense. The reason is precisely that these mathematical techniques tend not toward clarity but toward obscurantism, and the moment one gives people a ticket allowing them to engage in obscurantist practices one runs the risk of spiking the proverbial punch.

The same points could be made in a slightly different manner about mathematical models. But the results are clear: while in certain instances mathematics can be used to increase clarity, in others it can be used to engage in obscurantism. The reason why I think there should be only a limited place for mathematics in economics is that the risks in allowing it a prominent place are too great: it is the usefulness and relevance of economics as a discipline that is at stake.

It is far, far more difficult to engage in obfuscation and magical nonsense when using plain English than it is when using mathematics; not to mention the fact that it is far easier to catch people out. And as a general rule-of-thumb it is probably not unfair to say that as the number of equations grows, the lack of clarity tends to increase and so too do the difficulties in sorting the wheat from the chaff. It is thus the multiplication and proliferation of equations that tends to give rise to nightmares. I think that is what Lawson, Syll and others are getting at when they express skepticism over too heavy a use of mathematics in economics.

Posted in Economic Theory, Philosophy | 8 Comments

Playing Humpty Dumpty: More on the Definition of “Balance of Payments Crisis”


“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”

My previous piece on the Kaldorians and their insistence on misusing well-established economic terminology to argue that the Eurozone crisis is a balance of payments crisis (BoP crisis) met with all sorts of calls for definitions. No matter how many I provided, the calls continued, like a small child asking “why” over and over again.

Anyway, for the sake of posterity, here are some more instances where a BoP crisis is clearly defined as a situation in which an entity with a sovereign currency is unable to meet its obligations to another entity with a sovereign currency. On page 230 of the IMF’s Balance of Payments and International Investment Position Manual: Sixth Edition it is written:

14.39 There are many situations in which it may not be feasible to rely on private and official resources to finance a current account deficit on a sustained basis. If a deficit is unsustainable, the adjustment will necessarily happen through change in the willingness of market participants to provide financing or depletion of reserves and other financial assets, or a combination of both. Such adjustments may be abrupt and painful (up to the possibility of a balance of payments crisis).

Or again:

14.42 If large amortization payments are due in the near future and expected financial inflows are not sufficient to cover payments falling due, it may be necessary to undertake adjustment measures beforehand to avoid more drastic measures required for dealing with a subsequent balance of payments crisis.

Clearly the IMF considers a BoP crisis to be one in which the funding of a current account deficit is at issue. This, of course, can no more happen in the Eurozone than it can between states in the US, because of the existence of the Target2 mechanism for balancing payments.

Fortunately or unfortunately, I really cannot be any clearer than by presenting the IMF’s own manual on the issue. Of course, those who want to continue to play Humpty Dumpty and make up their own definitions of words will not relent. Why? Because when smart people make such stupid mistakes they often feel the need to dig in their heels rather than issue a mea culpa. But in doing so, and in continuing to play at being Humpty Dumpty, it is only their own credibility they will damage.

Update: Serial Humpty Dumpty Ramanan has responded to my original post. In his latest post he gives us a nice definition of what a “balance of payments” is, but no definition of what a “balance of payments crisis” is. Of course, from his highly idiosyncratic use of the term toward the end of the piece we can see that he does not use normal vocabulary. This is enormously ironic given that, in the past, Ramanan has been insistent that others use standard definitions of terms because if they do not “one gets hodgepodge and/or endless redefinitions”. Indeed. Of course, when it comes to trying to debunk MMT Ramanan’s double standards slip into the mix and we get “hodgepodge and/or endless redefinitions”.

Update II: Here is more evidence that the IMF recognises a BoP crisis to be one in which the country in question cannot receive funding, leading to substantial downward pressure on their currency:

Why do balance of payments problems occur?

Bad luck, inappropriate policies, or a combination of the two may create balance of payments difficulties in a country—that is, a situation where sufficient financing on affordable terms cannot be obtained to meet international payment obligations. In the worst case, the difficulties can build into a crisis. The country’s currency may be forced to depreciate rapidly, making international goods and capital more expensive, and the domestic economy may experience a painful disruption.

The Irish euro, of course, has yet to depreciate against the German euro, signalling once again that the Eurozone crisis — despite being in part due to trade imbalances — is not a BoP crisis.

Posted in Economic Theory | 11 Comments

Can a Country Without a Currency Have a Currency Crisis?


It is often the case that whole debates rest on a misunderstanding or lack of clarity over the definition of key terms. Take the ongoing debates between the group I label as the Kaldorians and the Modern Monetary Theorists (MMTers) over whether or not the Eurozone crisis is a balance-of-payments (BoP) crisis. A good example is a recent paper by Sergio Cesaratto. Cesaratto’s understanding of the debate is well summarised by the following quotation:

If one looks at the EMU through European lenses, one sees it as a collection of independent states with a currency in common and the crisis as a BoP crisis, albeit with some idiosyncrasies related to the monetary union. If one looks at the EMU through American lenses, one sees a flawed federal state with an irreversible commitment to the CU and the financial crisis appears due to the absence of adequate federal prevention and resolution mechanisms. If the EZ were a viable federal union, there would be no such crisis, the American argues. The disenchanted Euro-sceptic replies that the EZ is not a full federal union and is unlikely to evolve in that direction. Thus, the first perspective seems more realistic and the EZ crisis closer to a traditional BoP crisis. (Pp8)

From the perspective of this disenchanted European Euro-sceptic, the whole debate rests on a poor definition of terms. You see, a BoP crisis has an alternative name: it’s called a “currency crisis”. The Wikipedia article is representative of this:

A BOP crisis, also called a currency crisis, occurs when a nation is unable to pay for essential imports and/or service its debt repayments.

You see, a BoP crisis requires that an entity have its own currency; otherwise it’s not a BoP crisis. It is as simple as that. If this were not the case then we could interpret anything as a BoP crisis. The poverty in the neighborhood down the street? Oh, that’s a BoP crisis; their current account deficit is too large. The bankruptcy of Detroit? BoP crisis. And so on.

In the literature a BoP crisis, as distinct from other crises, is distinguished by a rise in the current account deficit leading to pressure on a country’s currency. But if a country doesn’t have its own currency then this cannot happen. The situation is reflected in the fact that Irish and Spanish euros can still buy as many BMWs from Germany as they could before the crisis. While in a real BoP crisis this would not be the case; the BMWs would become more expensive.

So, why are good economists making such manifestly silly mistakes? Well, this goes back to the work of what I call the Kaldorians — who, by the way, do not accurately reflect the work of Nicholas Kaldor himself. The Kaldorians are obsessed with BoP crises; they see them anywhere and everywhere. Because their models are so focused on trade imbalances they forget the meaning of the term “BoP crisis” and interpret almost everything as one. When all you have is a hammer, everything quickly starts to look like a nail.

Am I saying that trade imbalances in the Eurozone are unimportant? No. They are very important in that they have resulted in dangerous imbalances of effective demand. But that is not the same as saying that the Irish or Spanish crisis is a BoP crisis. So, let’s be categorical about this: a country without its own currency cannot suffer a currency/BoP crisis with its neighbors who share its currency. To say otherwise is to abuse the lexicon of economics and, frankly, to look a little silly.

Update: As always happens when you call people out on their misuse of words, there has been some bickering. Here I will provide links that support my definition of a BoP crisis as requiring the entity undergoing said crisis to possess its own currency. If people want to dispute this definition they must provide credible sources to the contrary. I think they will quickly find that mine is the consensus view and theirs is the idiosyncratic view that seems to have grown out of the work of Thirlwall.

(1) Wikipedia article on currency crisis notes that it is the same as a BoP crisis.

(2) Wikipedia article on balance of payments crisis notes that it is the same as a currency crisis and involves an attack on a country’s currency.

(3) Economicshelp.org article states that the current account deficit must be unsustainable, which is not the case in a currency union.

(4) Financial Dictionary also states that the current account must be unsustainable.

(5) The IMF, while it distinguishes between a BoP crisis and a currency crisis, notes that a BoP crisis is one that “involves a shortage of reserves to cover balance of payments needs” which is clearly not the case in a monetary union.

I think that is enough evidence for now. I welcome commenters to provide credible sources to the contrary. And no, articles by Krugman and Martin Wolf that use the wrong definition are not sources. A source must actually lay out and justify the definition of the term.

Posted in Economic Theory | 70 Comments

Empty Theory: A Response to James R McLean on Michael Emmet Brady’s Purported Theories of Decision-Making


James R McLean has written a fairly coherent piece on my challenge to Michael Emmet Brady. He has also given me a rather nice point of departure with which to make my case against Brady. He has done so by providing a misinterpretation of Freudian theory — something which, as readers of this blog will know, tickles my fancy somewhat. So, I should be able to kill two birds with one stone; one being what I see to be Brady’s grandiose claims and the other being a misunderstanding of Freudian theory.

Regarding Freudian theory McLean writes:

One is that any theory about human behavior could be, to the same degree, accused of tacitly assuming that everyone it purports to describe “knows” the theory. Freudian psychology, likewise, “assumes people have first read and understood Freud”—for surely, for someone to make decisions in line with Freud’s work on psychology they would have to have first read and understood this work.

This is in response to my claim that Brady’s decision theory relies on the implicit assumption that people have read either Brady’s work or Keynes’ Treatise on Probability (and interpreted the latter in the same manner as Brady). Thus, for McLean such a complaint “does not work as a rebuttal”. The problem with this? McLean’s reading of Freudian theory is wrong.

Freudian theory, in fact, does make claims that people will act in line with said theory despite not understanding it. Indeed, since the popularisation of Freud many psychoanalysts have complained that people’s superficial understanding of his work has made their job a lot tougher, because everyone tries to vulgarly interpret their own symptoms and dreams and this detracts from the analyst’s ability to do so. (This is somewhat similar to the contemporary doctor’s complaint about patients who try to self-diagnose using online sources.)

Let me take a recent example I came across which shows how Freudian psychology can be used to understand aspects of people’s desires and actions despite their not understanding said psychology. From there we can then go on to show why Brady’s theory of decision-making is no such thing.

Some time ago I was talking to a girl I know. The conversation turned to children being influenced by a certain cultural trend. She said “why would I care if my children were influenced by…” said trend. She then quickly bit her tongue and said, “I mean why would anyone care that their children were influenced by…” said trend. The girl in question did not have any children, of course. A Freudian would conclude that her slip of the tongue manifested a desire to have children and that perhaps this was what was going through her subconscious mind at the time of the conversation.

In fact, this should be fairly obvious. Slips of the tongue always have such meanings. They manifest desires on the part of the person making them that either they do not consciously recognise or they are trying to keep hidden for whatever reason. In this way Freudian psychology can tell us things about people despite their not being aware of Freudian psychology — something I can confirm about the girl in question. That is what makes the theory useful.

Something similar can be said of decision-making theories in economics. Take the Efficient Market Hypothesis (EMH) as an example. The EMH tells us that, given what it claims about how people make decisions, an individual investor cannot persistently beat the market. If we believe this theory — which, by the way, I do not, but that is irrelevant to this discussion — then we can take investment advice from it: we would buy into index funds that track the market. The EMH does not rest on the assumption that everyone in the market understands and adheres to the EMH. That is what, in the view of its proponents, makes it a useful theory.

Or take the competing theory: behavioral finance. Behavioral finance also purports to tell us information that we can use in the real world. As the popular manual on technical trading Technical Analysis: The Complete Resource for Financial Market Technicians puts it:

Behavioral finance is a quickly growing subfield of the finance discipline. This branch of inquiry focuses on social and emotional factors to understand investor decision making. Behavioral finance studies have pointed to cognitive biases, such as mental accounting, framing, and overconfidence, which impact investors’ decisions. These studies suggest that investors act irrationally, at times, and can drive prices away from the EMH true value. Investor sentiment and price anomalies, either as trends or patterns, have been the bulk of technical analysis study. Sentiment and psychological behavior have always been the unproven but suspected reason for these trends and patterns, and human bias has always been in the province of trading system development and implementation. (P49-50)

Again, behavioral finance does not assume that everyone in the market need understand behavioral finance in order for it to produce valid results about human decision-making. This is, like the EMH or Freudian psychology, what makes it a useful theory in the eyes of its proponents; it is the fact that it can purportedly tell us something about what makes people tick regardless of whether they understand the theory or not that provides us with relevant and useful information.

This is the question I raised with regard to Brady’s decision theory: can it provide us with such useful information? If it can tell us how people will behave then we should be able to better understand financial markets with it — after all, this is what rival theories like the EMH and behavioral finance purport to do. If this is the case then Brady, or someone familiar with his theories, should be able to lay out a manner in which we can apply the theory to financial market data — again, this is what proponents of the EMH and behavioral finance actually do with their theories.

The problem with Brady’s theory is that I do not believe that it is such a theory of decision-making. Rather I think that it pretends to be but is actually something rather different; it is actually an idiosyncratic probability theory that really tells us nothing useful about how people make decisions. In this sense, Brady’s theory is useless.

McLean, who seems like a thoughtful person, actually gave this some consideration when he wrote at the end of his piece:

So James, in what sense is Michael Emmett Brady circulating a useful decision theory? In what sense does it have something to say about the real world? And I would have to say that Mr. Brady has always discussed its value in the realm of public policy…

The problem with this? He never specified the novel insights that Brady’s theories give us with regard to public policy. Do Brady’s theories give us insights into public policy that standard Post-Keynesian theory cannot? I have certainly not been made aware of any. But even if such insights do exist, Brady’s theories are still not what they claim to be; that is, decision-making theories in the proper sense of the term.

While I disagree with the EMH and am cautiously skeptical of the claims of behavioral finance, I nevertheless recognise that they can actually be applied to the real world — and thus falsified. No one has shown me that Brady’s theories can be. I don’t believe there is any there there; I believe Brady’s theories to be basically empty. They claim to be a theory of human decision-making, but no one has shown me how they can be applied to study human decision-making. And that is my fundamental problem with them.

Posted in Economic Theory, Philosophy, Psychology | 5 Comments

Oil Speculation and Syria: A Microstudy


There is nothing more irritating than an economist looking at you skeptically when you talk about speculation in certain markets and saying: “go on then, cook up a study and prove it is taking place”. What they mean is that you should work your ass off producing a largely meaningless and irrelevant econometric study, one that they will then be able to dispute by messing with lags and filters until they get a result contrary to your own. Meanwhile, everyone in the markets knows that speculation is taking place. Why? Because it’s intuitively obvious to anyone who follows these markets.

A good example of this is the recent rally in the oil markets on the back of the announcement that the US will probably intervene in some way in the Syrian conflict. I have provided more long-term evidence on this blog before that such speculation is taking place, but now we have a solid, concrete event with which to show why what is going on is so obvious to market participants and why econometric studies cannot capture such intuitions.

In an article misleadingly entitled Why Small Producer Syria Matters to Oil Markets Reuters lay out a number of facts about the Syrian oil market. I will here provide only the ones I think most important:

– Syria has not exported any oil since late 2011, when international sanctions came into force.

– “Syria is not a major oil producer (as was Libya), nor is it a major transit point for oil and gas exports (as is Egypt),” said Julian Jessop, head of commodities research at Capital Economics.

– Syria’s current production is estimated at just 50,000 bpd, all of which is refined domestically.

A pretty small-time player, right? And they haven’t exported anything for two years. Also consider that the country has been in civil war for months now; are US airstrikes or something similar really going to have that much of an effect on oil production and consumption? Doubtful.

But the effect that the news is having on market sentiment is enormous. Via the FT:

Some argue it has further to go. Société Générale analysts say Brent is likely to reach $125 a barrel if the west does launch airstrikes, and could touch $150 a barrel if such strikes disrupt production.

Such hysteria is all too familiar to anyone who follows financial markets. But the strangest thing is that it’s not as if I or any other skeptic have a monopoly on the facts in this case. Nor are these facts hidden from anyone. At the very beginning of the above article the FT journalist writes:

By Middle Eastern standards the Syrian oil industry is a tiddler.

Indeed. Add in the fact that they haven’t been exporting since 2011, and that they’ve already been in civil war for months, and there seems very little tangible reason for the recent price rises. Most analysts and market bulls are putting it down to the idea that US intervention will escalate tensions in the Middle East. That might be true, but it is completely unclear why this would translate into an immediate shortfall of oil supply. Rather, this is a projection of a general sense of fear onto the region, and people are making their purchases accordingly.

This is speculation pure and simple, and the level of sophistication is not far off making a decision based on what a fortune-teller has read on your palm at a funfair. There’s a bit of anxiety in the markets and speculators are looking to pick up on that anxiety. They’re looking at those around them and mimicking what they’re doing. When the headlines disappear the market will subside. The bulls will pull out and look elsewhere for a quick buck and the bears will pile in. This is how financial markets really work, and you don’t need any econometric studies to tell you so; indeed, such studies tend to impede understanding by distracting from the truth of the matter.
Posted in Market Analysis | 3 Comments