Econometraps: The Repetition of a Major Confusion in Six Major Econometrics Textbooks


In the recent issue of the Real World Economics Review there was a rather interesting, if somewhat dense, article by Judea Pearl and Bryant Chen entitled Regression and Causation: A Critical Examination of Six Econometrics Textbooks. Lars Syll, who was earlier sent the paper, has weighed in here. It is a heavy and technical paper, but I think that the underlying results are of great interest to anyone concerned with applying econometrics to economics — or, conversely, to anyone who is, as I am, skeptical of such applications.

The paper appears to turn on a single dichotomy. The authors point out that there is a substantial difference between what they refer to as the “conditional-based expectation” and the “interventionist-based expectation”. The first is given the notation:

$$E(Y \mid X = x)$$

While the second is given the notation:

$$E(Y \mid do(X = x))$$

The difference between these two relationships is enormous. The first notation — that is, the “conditional-based expectation” — basically means that the value Y is statistically dependent on the value X. So, given a mass of past data points for the values Y and X, we can make a purely statistical prediction about the relationship between the two.

The reasoning runs something like this: “Since we know from past data that when the variable X changes by a given amount we then see a change in Y by a given amount, we can then assign a certain probability that such a relationship will carry into the future.”

The second notation — that is, the “interventionist-based expectation” — refers to something else entirely. It means that the value Y is causally dependent on the value X. This means that if we undertook an experiment in which we altered the value of X by some amount we would then see a fixed change in the value of Y.

All that may seem somewhat dense and confusing, so let us consider the example that the authors lay out (p3, footnote 3). They ask us to consider the case in which an employee’s earnings, let’s say X, are related to their expected performance, let’s say Y. Now, if we simply go out and take a statistical measure of earnings and expected performance we will find a certain relationship — this will be the conditional-based expectation and it will be purely a statistical relationship.

If, however, we take a group of employees and raise their earnings, X, by a given amount will we see the same increase in performance, Y, as we would expect from a study of the past statistics? Obviously not. This example, of course, is the interventionist-based expectation and is indicative of a causal relationship between the variables.
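To make the dichotomy concrete, here is a minimal simulation sketch of the salary example (the structural model, the ‘ability’ confounder and all numbers are my own illustrative assumptions, not taken from the paper). The regression slope recovers the conditional expectation; assigning earnings by fiat recovers the interventional one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural model (my assumption, for illustration): unobserved
# ability U drives both earnings X and performance Y; X itself has
# only a small direct causal effect (0.1) on Y.
U = rng.normal(size=n)                        # unobserved ability
X = 2.0 * U + rng.normal(size=n)              # earnings track ability
Y = 0.1 * X + 3.0 * U + rng.normal(size=n)    # performance mostly reflects ability

# Conditional expectation E(Y | X = x): the regression slope picks up
# the confounded statistical association.
slope_conditional = np.cov(X, Y)[0, 1] / np.var(X)

# Interventional expectation E(Y | do(X = x)): assign earnings by fiat,
# severing their dependence on U, and watch how Y responds.
X_do = rng.normal(size=n)
Y_do = 0.1 * X_do + 3.0 * U + rng.normal(size=n)
slope_interventional = np.cov(X_do, Y_do)[0, 1] / np.var(X_do)

print(f"statistical slope E(Y|X):  {slope_conditional:.2f}")    # roughly 1.3
print(f"causal slope E(Y|do(X)):   {slope_interventional:.2f}")  # roughly 0.1
```

A naive reading of the regression slope would suggest that raising earnings lifts performance by far more than it actually does; only the intervention recovers the true causal effect of 0.1.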

Now, what the authors of the paper find is that, when they survey six popular econometrics textbooks, these distinctions are not adequately outlined at all. Indeed, the authors of the textbooks usually don’t even distinguish these two vastly different relationships by using different mathematical notation. The effect is that students mistake statistical relationships for causal relationships.

Obviously, this is deeply problematic from the point-of-view of applied economics. In economics we are mainly interested in causal rather than statistical relationships. If we want to estimate, for example, the multiplier, it is from a causal rather than a statistical point-of-view. Yet the training that many students receive leads to confusion in this regard. Indeed, we may go one further and ask whether such a confusion also sits in the mind of the textbook writers themselves.

This confusion between statistical relationships and causal ones has long been a problem in econometrics. Keynes, for example, writing his criticism of the econometric method in his seminal paper Professor Tinbergen’s Method noted that Tinbergen had made precisely this error.

In his book Tinbergen is trying to account for the fluctuations in investment using econometric techniques. But, as Keynes notes, such an approach does not account for causality at all — i.e. it cannot tell us the causal relations between the variables, but only the past statistical relations. This can clearly be seen in the fact that in the period of Tinbergen’s study the rate of interest varied little, which leads to the rather murky conclusion that the rate of interest was not having much of an impact on the rate of investment. Keynes writes,

For, owing to the wide margin of error, only those factors which have in fact shown wide fluctuations come into the picture in a reliable way. If a factor, the fluctuations of which are potentially important, has in fact varied very little, there may be no clue to what its influence would be if it were to change more sharply. There is a passage in which Prof. Tinbergen points out (p. 65), after arriving at a very small regression coefficient for the rate of interest as an influence on investment, that this may be explained by the fact that during the period in question the rate of interest varied very little. (p567)

Of course, this does not mean that the rate of interest had no potential causal relationship with fluctuations of investment. Rather it means that, due to the lack of substantial fluctuations in the interest rate in the period of observation, whatever effects it may or may not have had were simply not realised. Thus, from the given evidence we simply do not know.

Many would interpret Tinbergen’s results to say something like: “The rate of interest has very little effect on fluctuations in investment”. From a statistical point-of-view that statement is perfectly true for the period given. But from a causal point-of-view — which is what we as economists are generally interested in — it is completely hollow.
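Keynes’s point about limited variation can also be put in code. The following sketch regresses investment on an interest rate under an assumed data-generating process (the true coefficient, noise levels and sample sizes are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60

# Assumed data-generating process: investment responds strongly
# (true coefficient -2.0) to the interest rate r, plus other noise.
def investment(r):
    return -2.0 * r + rng.normal(scale=5.0, size=len(r))

r_flat = 3.0 + rng.normal(scale=0.05, size=n)  # rate barely moves
r_wide = 3.0 + rng.normal(scale=2.00, size=n)  # rate swings widely

for label, r in [("flat-rate sample", r_flat), ("volatile-rate sample", r_wide)]:
    I = investment(r)
    beta = np.cov(r, I)[0, 1] / np.var(r)              # OLS slope
    resid = I - I.mean() - beta * (r - r.mean())       # residuals from the fit
    se = np.sqrt(resid.var(ddof=2) / (n * np.var(r)))  # slope standard error
    print(f"{label}: slope = {beta:+.2f}, std error = {se:.2f}")
```

In the flat-rate sample the standard error swamps the true coefficient of -2.0: the data contain no clue as to what the rate’s influence would be were it to change more sharply, which is exactly Keynes’s complaint.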

The question then arises: why, after over 70 years, are econometrics textbooks engaged in the same oversights and vaguenesses as some of the pioneering studies in the field? I think there is a simple explanation for this. Namely, that if econometricians were to be clear about the distinction between statistical and causal relations it would become obvious rather quickly that the discipline holds far less worth for economists than it is currently thought to possess.

Let me be clear about this: I am not saying that econometricians are engaged in some sort of conspiracy. I am not saying that they get together in smoke-filled rooms and conspire to bamboozle students into confusing statistical relationships with causal relationships. Rather I think that they succumb to this confusion themselves because they are trying to walk the line between being a statistician and being an economist.

The two disciplines are generally interested in entirely different forms of relationships and trying to locate the relationships that are of interest to an economist in the techniques of the statistician — which is what econometricians try to do — is probably a far more fruitless endeavor than popular opinion would think. So it is no surprise that in trying to mix oil and water — that is, statistical and causal inference — econometricians often come out with a terrible mess on their hands.

Indeed, it seems to me that such a mess actually provides the foundations on which the discipline rests. If it were ever cleared up sufficiently well, many would question the use of econometrics in economics altogether. Perhaps, for the econometrician, it is better to be confused than to be potentially unemployed.


What Will The Conventional Wisdom in Economics Be After 2008?


I am currently rereading JK Galbraith’s The Affluent Society. It was one of the first books on economics that I ever read and I must say that it is well worth a reread, as there is much in it that I only now appreciate. It is a very rich book filled with interesting insights, not only regarding economics and the nature of economics but also regarding the nature of economists.

In the famous second chapter in which Galbraith coins the now commonplace term ‘conventional wisdom’ we find what I think to be a fascinating discussion. Galbraith starts by, in his own way, telling the reader clearly and in no uncertain terms that economics, together with the other social sciences, is a non-ergodic discipline and cannot be properly approached with the assumption that there exist fixed and timeless relationships. This leads Galbraith to the interesting conclusion that such a discipline allows ample space for emotional or ideological thinking.

Economic, like other social, life does not conform to a simple and coherent pattern. On the contrary, it often seems incoherent, inchoate and intellectually frustrating. But one must have an explanation or interpretation of economic behavior. Neither man’s curiosity nor his inherent ego allows him to remain contentedly oblivious to anything that is so close to his life. Because economic and social phenomena are so forbidding, or at least so seem, and because they yield few hard tests of what exists and what does not, they afford to the individual a luxury not given by physical phenomena. Within a considerable range, he is permitted to believe what he pleases. He may hold whatever view of this world he finds most agreeable or otherwise to his taste. (p17)

There is a lot of truth in this statement. But it is what comes later that is particularly interesting. Galbraith ties this strange status of economics and its inherent enmeshment in what he calls the conventional wisdom with the way scholarship is structured within the discipline. He writes,

The conventional wisdom having been made more or less identical with sound scholarship, its position is virtually impregnable. The skeptic is disqualified by his very tendency to go brashly from the old to the new. Were he a sound scholar, he would remain with the conventional wisdom. (p19)

This is, of course, perfectly true and in this statement many will recognise how heterodox ideas have been pushed to one side for decades in the profession; they have been disqualified on the basis that “this is not how things are done”. One might also think of the profession’s tendency — and in this the heterodox profession is not completely innocent — to favour econometric inquiry over ordinary empirical inquiry. This, I think, constrains economic discourse in a very extreme manner, as the results of such inquiries are often basically meaningless and their structure ensures that real debate does not take place.

Galbraith then goes on to make an interesting statement: he claims that it is not ideas that successfully challenge the conventional wisdom, but rather events. He writes,

The enemy of the conventional wisdom is not ideas but the march of events. As I have noted, the conventional wisdom accommodates itself not to the world that it is meant to interpret but to the audience’s view of the world. Since the latter remains with the comfortable and the familiar while the world moves on, the conventional wisdom is always in danger of obsolescence. This is not immediately fatal. The fatal blow to the conventional wisdom comes when the conventional ideas fail signally to deal with some contingency to which obsolescence has made them palpably inapplicable. (p22)

With this in mind Galbraith goes on to make what I think is his most interesting point in the whole chapter. He writes,

This, sooner or later, must be the fate of ideas which have lost their relation to the world. At this stage, the irrelevance will often be dramatized by some individual. To him will accrue the credit for overthrowing the conventional wisdom and for installing the new ideas. In fact, he will have only crystallized in words what the events have made clear, although this function is not a minor one. (p22)

Here Galbraith has in mind Keynes — or, at least, Keynes as a figure rather than as a thinker — whose individual person became symbolic of the change of ideas brought about by the Great Depression and the Second World War. He makes this clear in what follows.

In [1936], John Maynard Keynes launched his formal assault in The General Theory of Employment Interest and Money. Thereafter, the conventional insistence on the balanced budget under all circumstances and at all levels of economic activity was in retreat, and Keynes was on his way to being the new fountainhead of conventional wisdom. By the very late sixties a Republican President would proclaim himself a Keynesian. It would be an article of conventional faith that the Keynesian remedies, when put in reverse, would be a cure for inflation, a faith that circumstances would soon undermine. (p25)

Naturally, Galbraith thinks this whole drama rather amusing. Keynes, who truly went against the conventional wisdom of his time, was soon installed as the “new fountainhead of conventional wisdom”. In this, his ideas were, of course, formalised and watered down. They became congealed and useless in the hands of, for example, Paul Samuelson and other American neo-Keynesians.

We are now in a period in which the conventional wisdom that ruled between the late 1970s and 2008 is crumbling. Economists, at least the younger ones and the more sensible ones, are changing their tune to a very large degree. The question then remains: who will be the fountainhead of the new conventional wisdom?

Will it be someone like Paul Krugman, who is trying to resurrect the old Samuelsonian neo-Keynesianism? This, I doubt. Krugman is a good populariser and much loved by liberals in the US for his politics, but he does not strike me as having the originality of Samuelson. So, who do we turn to then? I will give what I think are the two most likely possibilities.

The first is that economists turn en masse to some watered down version of Hyman Minsky’s economics. Although this seems like a radical proposal now, I see more and more young economists interested in Minsky’s work. I also see ample possibility that it might be sanitised into some sort of dominant paradigm. In a world obsessed with debt, it would be an easy sell. I can even guess what form it might take: it will focus on the idea that debt drives economic growth — since debt = money, this is not far from a sort of quasi Post-Keynesian form of monetarism.

The other possibility is that Keynes becomes, once again, the new fountainhead of the conventional wisdom. This may be even more appealing than when he became the fountainhead in 1936, because he is dead and can no longer speak; so people can do what they like with his ideas. This is a very strong possibility, I think. And most of the ideas that need to be picked up — like the ISLM — are already intact and waiting to be updated by, say, the insertion of a Taylor Rule. (In fact, David Romer has already done this in a paper where he laments that this has not yet penetrated the textbooks. Watch this space…).

Ultimately, however, these are just speculations. But I think there is a fair chance that I am correct. I am also almost certain that any ideas that truly disturb the conventional wisdom of economics — that is, how it is structured as a discipline — will be scrupulously avoided. And with that in mind I leave the reader with a final quote from Galbraith.

With so extensive a demand, it follows that a very large part of our social comment — and nearly all that is well regarded — is devoted at any time to articulating the conventional wisdom. To some extent, this has been professionalized. Individuals, most notably the great television and radio commentators, make a profession of knowing and saying with elegance and unction what their audience will find most acceptable. But, in general, the articulation of the conventional wisdom is a prerogative of academic, public or business position. (pp20-21)


Models, Myths and Underpants Gnomes: Should Economics Be Dominated By Modelling?


In the latest issue of the Real World Economics Review Gustavo Marqués has an interesting paper entitled A Plea for Reorienting Philosophical Attention From Models to Applied Economics. In the paper Marqués examines some of the philosophical justifications that have been provided for the practice of economic modelling. In his survey he deals with three authors: Cartwright, Colander and Alexandrova. We will deal here with each in turn.

Cartwright, who I wrote about on this blog recently, provides a rather strange defence of abstract modelling. She claims that we should view models as ‘parables’. Marqués summarises her argument as such:

Parables, however, shed (or perhaps it would be better to say “suggest”) a lesson, that is not contained in the model itself, but must somehow be built from the outside taking into account relevant portions of available background knowledge. This means that models can have a “correct” lesson within them, but it must be partly construed out of the materials provided by the model on the basis of theoretical and extra theoretical knowledge. (p35)

Actually, I think that this is a rather common defence of modelling. Yes, a few philosophically unsophisticated modellers still today try to test their models against data using a variety of mathematical methods based on probability theories — whether Bayesian or frequentist. But these are usually those lower down in the intellectual pecking order. The more sophisticated theorists generally do assume something like a ‘parable’ defence of modelling — I think of Frank Hahn’s use of the general equilibrium framework to justify the use of Keynesian economics for real world policy.

This comes with various problems that Marqués points out: the model might not deliver the right ‘abstract’ lesson; there may be a variety of differing, even contradictory, lessons to be drawn from a given model, and the modeller may pick the one that best suits their purpose; and, most damningly I think, if models are just there to give out such parables, why on earth need they be so mathematically bulky and precise when the lesson they impart is straightforward and imprecise?

But there is another serious objection to such an interpretation of models. That is, it sets models up as ‘myths’ in the anthropological sense. Anthropologists recognise that such myths provide the foundations upon which structuring principles for societies are built. They also recognise that those who interpret such myths — usually soothsayers, shamans or priests of some sort — are imbued with a strange sort of aura. If we start interpreting models in the same way as shamans interpret myths, not only do we give the models a quasi-religious weight that seems rather primitivistic, but we also imbue economists with a mystical aura similar to that of a shaman or priest. I would imagine that, articulated in this way, many would be very cautious about turning economics into a sort of primitive religion.

Colander’s argument is more interesting. He points out that those using the dominant modelling framework prior to the crisis — that is, the DSGE framework — were actually blinded to the problems building up in the sub-prime mortgage market. The model, on this reading, acted as a sort of myopic filter that hid what was ultimately a very obvious reality — i.e. that mortgages were being handed out to less than creditworthy customers — that anyone not using the model would likely have picked up on.

While Colander’s criticism is interesting and important, his proposals for solutions are weak. First, he advocates that even more complex models are needed — a typical refrain heard from the modelling community. Secondly, he believes that economists should be trained better to pick the correct model given what is being observed. Colander, of course, misses entirely that these two goals may be completely in conflict with one another.

Contemporary models like the DSGE are unwieldy enough and require a lot of investment to get one’s head around (personally, I have never bothered with the intricacies). If we make the models even more complex, it seems pretty fantastic to assume that economists can both spend their time building and understanding such constructions while at the same time focusing on how to apply them.

Colander also doesn’t take into account the rather obvious fact that some people are just not suited to one or the other task. Some people cannot think in terms of highly abstract models, while others cannot think outside of them, simply due to the way their minds work. Very few people can do both sufficiently well to take Colander’s approach — especially if we consider that many people must walk away with only an undergraduate or masters-level training in economics to work in the world.

Colander’s rather fantastic and idealistic expectations of future economists hide what is a simple binary choice: do we want a profession dominated by abstract modellers whose ability to do applied work is seriously sub-par; or do we want a profession dominated by people who eschew precision and perfection in order to deal with the nitty-gritty of applied economics? We cannot, as Colander thinks, have it both ways and if we try the modellers will likely win out, as they become myth-makers carrying unwieldy contraptions that only they can ‘read’.

Alexandrova’s ideas suffer similar problems. She wants to use models as hypothesis-generating machines that we can then apply. But she provides no means by which we can make this transition. Thus, rather than being a solution, her paper merely provides a sort of taxonomy of what needs to be done — Step 1: build model to obtain an hypothesis; Step 2: make sure that the assumptions built in the model are applicable in the real-world… and so on.

Alexandrova’s approach suffers from what might be referred to as the “underpants gnomes profit plan problem” from the South Park cartoon. The gnomes’ business plan runs: Phase 1: collect underpants; Phase 2: ?; Phase 3: profit. The missing middle step is precisely the problem here.

As Marqués writes,

Unless a clear connection between both programs can be exhibited (something that Alexandrova’s paper fails to show) to get busy in building models diverts resources from the technological approach of directly “building” in practice the desired result. This construction, it seems, does not need at all any of the solutions offered within the model. (p41)

It appears that Marqués runs into effectively the same problem over and over again. That is, each approach assumes some knowledge on the part of the users of the models that does not come (a) from the practice of modelling itself or (b) from the models once they are complete. This knowledge, which is identical to the ‘???’ in the underpants gnomes profit plan, is what I have referred to above as the ‘aura’ given to shamans and priests when they interpret myths. The myths here are, of course, the models and their interpretation is undertaken by the modeller.

The reason that this problem keeps arising is because each of the three authors surveyed by Marqués wants to keep the structure in which models become like myths and their interpretation carries with it a certain aura intact. Put more simply: none of the authors surveyed want to conclude that economics as a discipline should probably de-emphasise the role played by models in economics and instead train economists to do applied work in which they lay out clearly and explicitly their process of reasoning.

Such an approach would not only ensure that economics does not become a modern system of myth-making, but it would also allow economists to have rational arguments, where today they instead try to batter each other over the head with their models in a sort of “my model is better than your model” display of silliness that rarely leads an argument to any conclusion. One is thus reminded of a well-known quote from Keynes:

If economists could manage to get themselves thought of as humble, competent people on a level with dentists, that would be splendid.

Metaphor and Meaning: What is a Downward-Sloping Demand Curve?


All too often a debate, or what sometimes passes for one, turns on one side not knowing the definitions of terms. There are many reasons for this — sometimes, for example, people are just ignorant — but one common reason is that a term that is semantically very general becomes congealed in a single usage. This perversion of language was noted by George Orwell in his seminal essay Politics and the English Language. Orwell called this linguistic error a ‘dying metaphor’. In his essay he explains the error as such:

A newly invented metaphor assists thought by evoking a visual image, while on the other hand a metaphor which is technically “dead” (e.g. iron resolution) has in effect reverted to being an ordinary word and can generally be used without loss of vividness. But in between these two classes there is a huge dump of worn-out metaphors which have lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves. Examples are: Ring the changes on, take up the cudgel for, toe the line, ride roughshod over, stand shoulder to shoulder with, play into the hands of, no axe to grind, grist to the mill, fishing in troubled waters, on the order of the day, Achilles’ heel, swan song, hotbed. Many of these are used without knowledge of their meaning (what is a “rift,” for instance?). (Emphasis Original)

Economics (indeed, all social science, as Orwell hints in the essay) is particularly prone to the abuse of dying metaphors. This is due to the nature of the discourse. Economics, for example, often uses mathematical or geometrical metaphors and, because of the seemingly concrete form that they take, it is easy for the non-thinking student who learns by rote to ‘reify‘ the metaphors and mistake them for the actual relationships they describe; the linguistic counterpart of this error is that the mathematical or geometrical metaphor ‘dies’, as Orwell put it.

A good example of this is the downward-sloping demand curve. What is a downward-sloping demand curve? Well, it is, as are all linguistic terms, exactly what it says that it is: it is a demand curve that slopes downwards.

The best known of such demand curves is, of course, the one represented in the classic supply and demand graph that first year undergraduates are taught and which can be seen below.

[Figure: the classic supply and demand diagram, with a downward-sloping demand curve]

This curve, when applied in basic microeconomics, depicts a number of relationships (diminishing marginal utility, substitution effects etc.) that do not here concern us. More important is that the curve is said to represent the Law of Demand. The Law of Demand can be summarised by the mathematical relationship:

$$Q_d = f(P), \qquad \frac{dQ_d}{dP} < 0$$

What that means is that quantity demanded is a function of price and this is understood to be an inverse relationship. These are really the only true mathematical properties of the above curve. The economic rationale for why these relationships hold is completely separate from the mathematical metaphor being used to describe them.

The downward-sloping demand curve in this regard is the metaphor; it is not the relationships that the metaphor seeks to describe. This is how all metaphors function. If I say that a man is boiling under the surface like a pressure cooker I mean that he is extremely angry and that this anger is likely to be released at some point. The metaphor — in this case, the boiling pressure cooker — is not identical with the relationship it seeks to explain — in this case, the uncontainable anger. For this reason the metaphor can be transferred: I might equally say that a lover is boiling under the surface like a pressure cooker. The metaphor remains the same, but the underlying relationship it seeks to describe is almost entirely the opposite of the previous one.

Now, that’s all fine and dandy, but why does this lead to confusion in economic discourse? Simple. Because some people begin to mistake the mathematical or geometrical metaphors for the relationships they describe. Thus a downward-sloping demand curve becomes, in the minds of unthinking and unreflective people, wholly synonymous with, for example, the relationships that underlie the microeconomic demand curve taught in undergraduate classes (i.e. diminishing marginal utility, substitution effects etc.).

Far from being a simple example of poor thinking and poor language use, this error can cause all sorts of confusions. You see, we find downward-sloping demand curves all over the place in (marginalist) economics. Take, for example, the money demand curve (L) in the ISLM model as laid out below:

[Figure: the money demand curve (L) in the ISLM model]

Is this a demand curve? Certainly. Is it downward-sloping? Evidently so. Does this make it a ‘downward-sloping demand curve’? Of course! But are the relationships that it describes the same as those underlying the microeconomic demand curve previously discussed? Absolutely not. But again, the mathematical or geometric metaphor should not be confused with the relationships it describes — just like the boiling pressure cooker should not be confused with the uncontainable anger; to do so is a logical fallacy.

Let us bring this further still, because I think there is some insight into economics in the above discussion. Take again our Law of Demand as expressed in algebraic form:

$$Q_d = f(P), \qquad \frac{dQ_d}{dP} < 0$$

Does the money demand curve in the above graph conform to this? Yes, it does. In the graphical representation it can clearly be seen that the quantity of money demanded is inversely related to the interest rate, which is, of course, the price of money. Thus, the conception of money in the ISLM framework can be said to depend not just on a downward-sloping demand curve for money (this is self-evident) but also on the Law of Demand.

Here is a hypothesis I want to lay in front of the reader: marginalist economics rests on its ability to sneak the Law of Demand, in a variety of different and sometimes unrecognisable forms, into many, many economic relationships. Indeed, this is perhaps one of the key pillars of the entire discourse.

With that considered, let us once again reinforce our original point: economists and mathematicians, often not being the most critical thinkers, have a terrible tendency to confuse their metaphorical representations with the relationships that underlie said metaphorical representations. This can lead to all sorts of embarrassing confusions — confusions that can, on occasion, be very destructive and stifle thinking altogether. If economists can train themselves to avoid the pitfall of dying metaphors, their ability to think about economic arguments and relationships and to present these to themselves and others will be greatly improved.

Update: There has been much discussion on various Facebook pages about the so-called Law of Demand in light of this post. I should point out that I was using it both as a point of departure and as a means to point out an inherent structural feature of marginalist economics. I do not in any way conceive it to be an actual “law” and think that its name is just another instance of the extreme pseudo-scientific pretentiousness of marginalist economics. For a very compact, dense and pointed overview of its actual epistemological status see the following post by Lord Keynes.


Nancy Cartwright’s Defense of and Attack on Economic Modelling


Recently I thought it might be interesting to give the other side in the modelling debate a chance. Frankly, I have not found the debates online to be particularly stimulating or interesting, so I thought I’d go to what is supposed to be a prime source. Some digging led me to the philosopher Nancy Cartwright (no, not the Simpsons voice actress!). I thought this would be a particularly interesting source because, apparently, she has recently waned in her support for modelling. So, I cracked open her book Hunting Causes and Using Them: Approaches in Philosophy and Economics.

Before delving into the particulars, I’ll be blunt: Cartwright’s argument is not interesting and it can be refuted by simply pointing out her poor use of analogy. I have some sympathy for why she built her argument upon such a poor analogy, and I will discuss this below. However, if this is the best the field has to offer then the modellers should be rather concerned.

The relevant chapter to this discussion is the 15th of her book, entitled The Vanity of Rigour in Economics: Theoretical Models and Galilean Experiments. The first half of that title sounded interesting to me, the second set off the flashing red light; and rightly so. You see, Cartwright defends economic models that oversimplify and make unrealistic assumptions — she calls these, following Lucas (yes, that Lucas), “analogue economies” — because she says the experiments in, for example, physics do the same thing. Her paradigmatic case is, of course, Galileo dropping the weight off the tower.

Cartwright thinks — again, following Lucas, so far as I can tell — that economic models are analogous to experiments in physics. Why? Because they both oversimplify. Here is Cartwright’s characterisation of Galileo’s experiment:

Galileo’s experiments aimed to establish what I have been calling a tendency claim. They were not designed to tell us how any particular falling body will move in the vicinity of the earth; nor to establish a regularity about how bodies of a certain kind will move. Rather, the experiments were designed to find out what contribution the motion due to the pull of the earth will make, with the assumption that that contribution is stable across all the different kinds of situations falling bodies will get into. How did Galileo find out what the stable contribution from the pull of the earth is? He eliminated (as far as possible) all other causes of motion on the bodies in his experiment so that he could see how they move when only the earth affects them. That is the contribution that the earth’s pull makes to their motion. (p223 — My Emphasis)

Frankly, I don’t think I even need tell the informed reader what the problem with this analogy is. Cartwright assumes that because, in physics, we can establish “stable” relationships then we can do the same in economics. But, of course, this is simply not the case and the reason for this is because when Galileo drops weight after weight off the tower the results follow an ergodic pattern: the pull of the earth does not change. In economics, however, we deal with non-ergodic data; so, to use an example familiar to readers of this blog, a rise in the interest rate in the US in 1928 will have wildly different results to a rise in the interest rate in the US in 1980.

When we deal with actual stable relations in economics we have a name for them; they are called “accounting identities”. The GDP equation is a stable relation, but only because it is also an identity and, ultimately, a tautology. Apart from this, stable relationships simply do not exist any more than they do in the study of history. But Cartwright, drawing her extremely poor analogy, uses Galileo as a justification for abstract models:

Now I should like to argue that a great many of the unrealistic assumptions we find in models and experiments alike are not a problem. To the contrary, they are required to do the job; without them the experiment would be no experiment at all. For we do not need to assume that the aim of the kind of theorizing under discussion is to establish results in our analogue economies that will hold outside them when literally interpreted. Frequently what we are doing in this kind of economic theory is not trying to establish facts about what happens in the real economy but rather, following John Stuart Mill, facts about stable tendencies. (p221)

And there you have it. By confusing a cat with a dog, Cartwright makes her case. I said above that I have some sympathy for why she falls into this trap. I have such sympathy because, on the one hand, the rest of Cartwright’s book deals with sciences that study ergodic data and have been very successful using the experimental method. On the other hand, Cartwright has — and I do not know why — bought into the rhetoric of economists like Lucas that economics must be a hard science.

Put two and two together and you can understand fairly well why Cartwright has structured her argument in such a manner. Basically, I think that Cartwright got sucked into the black hole that the likes of Lucas have been occupying for years; and given that her background was in studying successful experiments using ergodic data, the moment she bought into the economist’s rhetoric she entirely ignored the difference in the nature of the data being scrutinised and wholly immersed herself in the wild, out-of-this-world thought experiments characteristic of the New Classicals and their ilk.

As I said at the beginning, Cartwright has by the end of the chapter turned on the economic models she once defended. But she has done so for only the most shallow reasons. She writes:

My claim then is that it is no surprise that individual analogue economies come with such long lists of assumptions. The model-specific assumptions can provide a way to secure deductively validated results where universal principles are scarce. But these create their own problems. For the validity of the conclusions appears now to depend on a large number of very special interconnected assumptions. If so, the validation of the results will depend then on the detailed arrangement of the structure of the model and is not, prima facie at least, available otherwise. We opt for deductive verification of our claims in order to achieve clarity, rigour and certainty. But to get it we have tied the results to very special circumstances; the problem is how to validate them outside. (p229)

Cartwright is now skeptical of the models because their results can only be tied “to very special circumstances”. This is a long, long way from recognising that the nature of non-ergodic, historical time is such that every economic constellation is literally unique. But the intuition is there, if only as a murky shadow of what it should be.

I’m sure I will get some pushback on this post; commenters will likely say something like: “Phil! You hold up a defender of Lucas and the New Classicals to attack even Post-Keynesian modelling!? That is absurd. We try to make our models with realistic, not unrealistic, assumptions. You simply cannot draw a comparison!”

First of all, this is not actually true. Your typical Kaleckian or Marxian model, for example, makes an entirely unrealistic assumption that there are classes called “capitalists” and “workers” with specific saving rates and behaviours and so forth. In an age with various pension plans tied to all sorts of capital markets this is an absurd oversimplification; but then it always was. Secondly, even if it were true that Post-Keynesians go out of their way to be realistic — which I don’t think they do, unless we take the term “realistic” to mean something highly idiosyncratic — the above defense and criticisms still apply.

Such models, even if they were “realistic” where Lucas’ are not, still aspire to the same discovery of stable relations that Cartwright discusses above. They also seek to “isolate” these supposedly stable relations in the manner that Cartwright compares to Galileo’s experiment. And they ultimately fall whether we take my criticism — i.e. that all economic constellations are absolutely unique and historical time is non-ergodic — or even Cartwright’s cruder criticism — i.e. that they are so specific that they can only deal with very specific circumstances.

Again, I will say what I always say: models are didactic tools only. They should be judged on the basis of whether they teach something of real interest — as the SFC models do — or they teach completely irrelevant nonsense — as the New Classical models do. I’ll also say this: any economic relation that was stable enough for these models to capture would likely not be worth studying because it would likely just be an identity and, thus, a tautology. And it is only on very rare occasions that we need new tautologies.


How Does QE Work and What Does It Really Do?


After my previous post on the QE taper there was some discussion on the INET YSI Commons Facebook page about the QE programs. I think it might be constructive to sum up what was discussed there, as it provides a very good overview of what the QE programs do, what they do not do and how they work.

First of all, however, it should be noted that the objectives of the QE programs appear to have changed over time. When the program was initially put in place by the Fed in 2008 — or, if one wants to go further back in time, when it was first enacted by the Bank of Japan in 2001 — it was generally thought that the program would boost investment in the real economy and thus create employment.

This would be the truly monetarist component of the QE program. The idea is that you increase the amount of money in the system, this money flows from the banks into the real economy and thus increases investment and employment.  A variant on this — what might be called the New Keynesian variant — is very similar: the QE program increases the amount of money in the system, this drives interest rates down across the board and these lower interest rates force businesses to invest in tangible assets.

Such ideas were the initial impetus for the QE programs and by these criteria they failed spectacularly — as MMT and endogenous money proponents suspected they would. Over time, however, the commentariat have revised their estimations of the QE programs in line with what they have actually done and attempted to justify their results post factum; the Fed, in turn, has endorsed such maneuvers with glee and run with them.

These maneuvers are why I think the QE program to be a largely false distraction, ultimately cooked up for the amusement of economists and economic commentators; a soporific ingested in order to distract oneself from the true determinants of our current problems. If the initial impetus for the QE programs has proved false and pundits nevertheless judge it based on some of the things it has accomplished in a peripheral manner one is certainly hard pushed to think that it is not largely a nonsense policy — a mask, covering up the impotence of central banks since 2008.

All that said, let us now consider briefly what QE actually does and how it does it.

As everyone knows, the program works by central banks creating money and using this newly created money to purchase various assets — mostly government bonds, but also other securities. When these assets are purchased two things effectively occur: their value increases and their yield falls.

Another way to think about this process is that the central banks are limiting the supply of assets. Imagine that there are a set number of such assets in a given market — say, 1,000 — and that the average price is also set — say, at $1,000. Now, say that the central bank steps in and uses newly created money to buy up half the assets and this, in turn, doubles the price. The supply of assets in the market falls by half and the price doubles. (This is not an accurate portrayal of either the extent of central bank purchases or of the price dynamics, but just a stylised example to help us understand the dynamics at work.)

At the same time that the prices of the assets rise, however, their yields fall. Thus, again for the sake of example, imagine that the assets initially had an average yield of 5%. After the central bank purchases, their yield falls by half to 2.5%.

The situation is now clear. Holders of the assets have seen an increase in their net worth as the value of the asset has doubled, but they have seen a fall in their income streams as the amount that their assets yield has halved. In the economics jargon we might say that as their stock of wealth has increased, their income flows have diminished. This is what we might call the “primary effect” of QE.
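For what it is worth, the stylised numbers above follow from treating the asset as a perpetuity paying a fixed coupon, so that yield = coupon / price. A quick sketch (the figures are the illustrative ones from the example, not real market data):

```python
# Stylised perpetuity: a fixed coupon, so yield = coupon / price.
coupon = 50.0            # fixed annual payment per asset
price_before = 1_000.0   # average price before central bank purchases
price_after = 2_000.0    # price doubles after half the supply is bought up

print(f"yield before QE: {coupon / price_before:.1%}")  # 5.0%
print(f"yield after QE:  {coupon / price_after:.1%}")   # 2.5%
```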

There are, however, secondary effects. Because the yield on their assets has fallen, and because some people who used to hold assets but sold them to the central bank now hold cash, which basically yields 0% (actually, the policy rate is 0.25% in the US and 0.50% in the UK), these investors will now search out other markets in which to make money. In doing so, they will drive up the value of the assets in these markets and drive down their yields. Thus we have a sort of cascade effect in which the initial burst of value given by the QE is translated from one market into another. These are the secondary effects.

Neil Lancastle has reminded me that there is also a tertiary effect. When companies see the value of their assets rise, they also see the value of their equity rise. This encourages them to take on more leverage and spend this on more assets. This, in turn, boosts the value of the assets even more.

As we can see, however, this cascade process eventually comes to an end. And what we finish with is a market in which asset prices are substantially higher but yields are suppressed. This proves to be something of a double-edged sword for savers and investors. On the one hand, they are glad to see that their assets are worth more money; on the other, their income streams are substantially diminished. After a while this becomes tedious for investors, as it eats into their income — this is why some in the financial markets, especially pension funds, get annoyed by the QE programs: they make it extremely difficult to accrue steady flows of income.

While the effects the programs have on investors are mixed, the effects they have on asset prices and leverage are not: they increase them beyond what they would be if the programs did not exist. This can be enormously problematic. Many, like Chris Cook and Izabella Kaminska (and myself), have argued that the programs have led to a run-up in commodities prices, for example. Such a run-up in asset prices may also — and on this point the central banks are more than aware — lead to bubbles. I have, for example, discussed how current dynamics in the stock market may become fragile if government spending is cut and the real economy stagnates.

As we can see then, there are risks and downsides to the QE programs. They hurt as well as help investors and savers and they may generate instability and fragility in financial markets. The program, from this perspective, basically provides a sugar rush for the market, and it is in no way clear that, as the impetus added to asset prices wears off, the overall level of income will not fall due to decreased interest income.

Meanwhile, however, the program has extremely positive effects for debtors. As the cascade drives down yields across the markets, interest rates on all sorts of loans for consumers fall. This reduces the amount that these debtors have to pay in terms of interest on debt. The following chart shows the amount of their disposable income US households are using to service their debts:

[Chart: US household debt service payments as a percentage of disposable income]

As we can see, this fell from some 13.5% before the QE programs to some 10.4% after, freeing up some 3.1% of US households’ disposable income. This is nothing to be sniffed at and, because households have more income to spend on other goods and services, it undoubtedly adds to demand and GDP.

It should be stressed, however, that these positive effects can only go so far. The first two rounds of QE were initiated between November 2008 and April 2011. As we can see, in this period debt service payments as a percent of disposable income for households fell substantially, to 10.4%. The third round of QE, initiated in September 2012, has not had much of an effect at all; indeed, from the data we do have for that quarter it appears that debt service payments as a percent of disposable income actually rose by about 0.2 percentage points. Talk about diminishing returns!

And that is it really. As we can see the effects of the QE programs are mixed. They can boost asset prices — but this is largely offset by the decline in yields that results. This also may have undesirable results and lead to fragility. The programs also alleviate the interest payment burden on debtors. This has a positive effect, but is subject to seriously diminishing returns.

Again we must stress though, the original idea behind QE was that it should increase real investment and employment. This simply has not happened and by this standard — which is the standard by which QE should be judged — the program has been an abysmal failure. A key lesson should be taken from this: business investment is first and foremost demand-led and does not simply respond to lower interest rates or increases in the base money supply. If the customers are not there to buy the goods and services, the companies will not invest  and hire in the real economy.

It is for that reason that the QE programs are largely a distraction talked about by an economic establishment that feels powerless when confronted with the confusing facts of the post-2008 world. Rather than recognising the serious structural problems we face — from moribund governments undertaking failed austerity programs to ever-widening income inequality — the establishment prefers to chatter over the QE programs, as if the wonks still had some control over the system. Well, the harsh reality is that they don’t.


The Fed’s QE Taper: Downsides and Upsides

Izabella Kaminska over at FT Alphaville recently ran an interesting piece on QE. Her main argument for QE was that it attempted to put the squeeze on rich folks, and that might not be bad for an unbalanced economy. Her point is well taken, as I’m sure she knows the downside risks QE carries with regard to asset bubbles.

Let’s deal with this first. Kaminska quotes Brad Delong to make the case that such asset bubbles are not turning up. For example, Delong discusses equities about which he says:

A world in which U.S. equities currently have an earnings yield of 6%/year does not appear to be a world in which there is ample risk-bearing capacity going begging in the marketplace. If there were, there might be an argument that the Federal Reserve’s pulling Treasury duration risk out of the marketplace increases the risk of bubbleicious overleverage by feckless risk-lovers. But if there are feckless risk-lovers, why haven’t they bid equities up to more substantial price-earnings multiples? (Emphasis Mine)

Well, as I have argued before, the P/E ratio in the US stock market is being held down, to a large extent, by a surge in corporate profits. This is also what the likes of Robert Shiller are concerned with. These profits have two broad sources: student loans and, more importantly, the federal budget deficit.

These two sources are also what is driving the US economy more generally and while student loans appear to be piling up to the moon — with no thought given to the negative consequences of loans with a ‘no default’ clause burdening the nation’s youth for the rest of their lives — the federal budget deficit continues to go lower and lower. The latest round of cuts, for example, which included a $40bn cut in the highly stimulatory food stamp program are eventually going to be reflected in lower corporate profits.

This fall in corporate profits is then going to feed through to the P/E ratio in the form of lower earnings. Then economists like Delong might start getting itchy about the stock market. All this is already on the cards, as it were, and it’s only a matter of time before it happens.

Moving on to the effects QE has on growth: with all the taper talk we got a nice experiment in what QE actually does. Almost instantly after Bernanke uttered the ‘T-word’, mortgage rates spiked by a whole percentage point, as can be seen in the graph below.

[Chart: US mortgage rates, showing the roughly one-percentage-point spike after the taper announcement]

How much of an effect does such a spike have on the economy? Well, first and foremost it lowers debtors’ disposable income by increasing their interest repayments. This can have an effect on the demand generated by said debtors.

The spike would also, one would presume, slow the housing market, the recovery of which has been a key component in keeping the US economy out of recession so far. This, however, may not be as true as many might think, as it was recently found that over half of homes are being purchased with cash.

Yes, rich folks in the US are seeing the lows in the property market as something of an investment opportunity. This is undoubtedly happening, to some extent, because of the low yield QE environment and while I do not currently fear a bubble in the US housing market, such cash purchases do raise serious questions about what type of society the US is becoming with rich and old folks owning lots of houses which they presumably rent to the poor and young, who are likely burdened by student loans on which they cannot default.

So, what is my estimation of QE? It is as good as it is bad. Beyond that it provides something for economists to talk about when the reality on the ground is that one of the fundamental problems in the US economy — that is, horrendously skewed income distribution — is getting ever worse. At the same time the only thing really keeping the whole thing ticking over — from the economy itself, to the earnings in the stock market — that is, the federal budget deficit, is silently fading away. Meanwhile, all one hears in the financial press is nattering over QE.

That, I think, is the true function of QE. It is a distraction for policy wonks, economists and market participants to talk about by the water cooler. And, dear reader, you need not point out that I too have just added my voice to that choir.


How Do Stock-Flow Relations Work in Economics and Are They Inappropriate for Price Dynamics?

[Diagram: the bathtub metaphor for stock-flow relations]

The other day I did a post on the work of Katharina Pistor and included some very broad details of something related that I’m working on. Despite my saying clearly and multiple times in the piece that I was massively oversimplifying, I nevertheless got some responses pointing to supposed “errors” and “banalities” in what I was saying.

The most substantive criticism was that prices cannot be understood in terms of stock-flow equilibria. What followed was a rather muddled argument, but one that I think can be summarised by quoting two comments that were left on that post. The first ran like this:

At any moment, price in financial exchanges is determined by current orders placed by participants. There is no reason why those should be linked to past purchases in a mechanistic way implied by stock-flow relationship. Also, stock variables evolve, by definition, continuously, and thus cannot jump, while prices jump all the time.

So, the criticism here is that whereas with, say, income the stock rises in lockstep with the accumulation of flows and there are no “jumps”, in the case of prices we sometimes see such “jumps”. The latter part of this statement is obvious nonsense and rests on taking the term “flow” too literally. What the commenter was presumably imagining was a bathtub with a tap turned on, the steady “flow” of the water out of the tap adding to the “stock” of water in the tub.

But this metaphor, taken too literally, can blind us to what we’re really dealing with here. In economics stocks can indeed “jump”. Imagine we have a flow of income that adds $5 to the stock every second. After ten seconds have passed the stock will have risen by $50 in a smooth, flow-like manner. But now let’s say that we alter this so that $50 is added at every ten-second interval. Then we will no longer see the same steady flow dynamic that we saw before; rather, we will see what appears to be a “jump” in the stock every ten seconds.

Of course, this is actually an illusion because the second scenario is no more a “jump” than the first scenario, it just appears that way due to our having first conceived of the flow relation as being the accumulation of $5 every second and then later changing the nature of the flow relation.
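A minimal sketch of the two flow regimes just described, using the same figures as the example above:

```python
# The same average flow of $5 per second, booked two different ways.
def stock_continuous(t):
    return 5 * t                # $5 added every second

def stock_lumpy(t):
    return 50 * (t // 10)       # $50 added at each ten-second interval

for t in [9, 10, 19, 20]:
    print(f"t={t:2d}s  continuous=${stock_continuous(t)}  lumpy=${stock_lumpy(t)}")
# Both stocks accumulate at the same average rate; the second merely
# *looks* like it jumps because the flow arrives in lumps.
```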

The second criticism was tied to this and made slightly more sense, but was nevertheless based on another misconception. Here it is:

If asset price was stock variable driven by investment flows, that would mean that whenever I invest 1$ in the asset, the price always goes up by 1$ (or some multiple of it, but always by the same amount). That’s how a stock-flow relationship is defined, and it’s also how financial markets DO NOT work, period.

The ambiguity here is in the brackets. In the Keynesian multiplier, for example, income rises by the initial injection times the multiplier, which is determined by the marginal propensity to consume (MPC). So, if we have an injection of $1 and the MPC is 0.2, the multiplier is 1/(1 − 0.2) = 1.25 and we will have a total rise in income of $1.25. As the commenter says, this is indeed “always the same amount”, but only over a set period.
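As a quick check on that arithmetic, here is a small sketch of the multiplier as a geometric series, using just the numbers from the example above:

```python
# The Keynesian multiplier as a geometric series: a $1 injection with
# an MPC of 0.2 yields 1 + 0.2 + 0.04 + ... = 1/(1 - 0.2) = $1.25.

mpc = 0.2
injection = 1.0

total, round_income = 0.0, injection
for _ in range(50):        # fifty rounds is plenty for convergence
    total += round_income
    round_income *= mpc

print(f"summed rounds: ${total:.4f}")                  # ~1.2500
print(f"closed form:   ${injection / (1 - mpc):.4f}")  # 1.2500
```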

So, what is the corollary when thinking of price dynamics? Simple. It’s a Marshallian construction called the price elasticity of demand. If a financial asset has a very low price elasticity of demand any buying/selling of this asset will have substantial price effects, while if there is a high price elasticity of demand any buying/selling of this asset will have far less significant price effects.

In marginalist economics, of course, it is thought that there is a negative relationship between, for example, price and quantity demanded. In my framework this is not necessarily true, yet the basic insight in the price elasticity of demand holds nevertheless — if in slightly modified form.

For an excellent real world example of this with reference to the gold market see this post by hedge fund manager Mark Dow.

Thus, if we imagine that we know the price elasticity of demand at any point in time — just as we imagine that we know the MPC in the multiplier relation — we can posit fixed relations between investment flows and price, as mediated by the price elasticity of demand. Just as the MPC provides the “bridge” between income received and total income in the multiplier relationship, the price elasticity of demand provides the “bridge” between investment flows into a financial asset and its price.

This is, of course, not a very heterodox idea at all, but it seems that some mainstream economists, so used to their downward-sloping demand curves and their efficient markets, have never really thought through in detail how prices work in the real world. But then, that is what my work is trying to remedy. That everyone will tell me that what I am saying is “sooooo obvious” is a given. But I’ve seen too many mistakes and misconceptions to be convinced that what I’m doing is unimportant; and, ironically, it seems to be those who tell me how banal and unimportant my work is who could do with being exposed to it the most. But is that not always the case in economics?

Addendum: We can lay this argument out mathematically with reference to the equations from the previous post. The first of these was as follows:

$P_{f,t} = P_{f,t-1} + I_{f,t} + G_{f,t}$

We can modify this by adding a price elasticity of demand term, d, as follows:

$P_{f,t} = P_{f,t-1} + \frac{1}{d}(I_{f,t} + G_{f,t})$

We can then take the second equation we laid out in our previous post, which was:

$\Delta I_{f,t} = E(\Delta P_{f,t+1})$

And we can substitute this in to get:

$P_{f,t} = P_{f,t-1} + \frac{1}{d}\left(I_{f,t-1} + E(\Delta P_{f,t+1}) + G_{f,t}\right)$

Now we can see that if the price elasticity of demand, d, is a small number, say below 1, then the effects of increased/decreased investment on price will be greater, while if it is a large number, say above 1, then the effects will be lesser.
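A minimal numerical sketch of the modified equation, with hypothetical investment flows invented purely for illustration, shows the elasticity acting as the “bridge”:

```python
# Sketch of P_t = P_{t-1} + (1/d) * (I_t + G_t): the same investment
# flows move the price a lot when the price elasticity of demand, d,
# is low, and much less when it is high.

flows = [2.0, 3.0, -1.0, 4.0, -2.0]  # hypothetical net flows, I_t + G_t

def price_path(d, p0=100.0):
    prices = [p0]
    for f in flows:
        prices.append(prices[-1] + f / d)
    return prices

print("d = 0.5 (inelastic):", [round(p, 2) for p in price_path(0.5)])
print("d = 2.0 (elastic):  ", [round(p, 2) for p in price_path(2.0)])
```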

Posted in Economic Theory, Market Analysis, Toward a General Theory of Pricing | 5 Comments

Anti-Equilibrium: Katharina Pistor and the Need for a Non-Market Equilibrium Framework for Understanding Financial Markets

Recently a friend of mine, Rohan Grey (founder of the excellent Modern Money Network), directed me to the work of Columbia law professor Katharina Pistor. Pistor’s work seems to be getting a lot of attention, having recently won the Max Planck Research Award. Having looked into it a bit, I can safely say that Pistor “gets it” to a far greater extent than most economists.

To get an idea of what her work is all about, I suggest watching this relatively short INET presentation (Pistor’s presentation starts around the 5-minute mark):

Pistor thinks that the manner in which we are regulating financial markets — with its emphasis on reducing transaction costs and information disclosure — is entirely wrong. I totally agree and what I find so interesting about Pistor’s work is that she recognises at a very fundamental level why regulators and economists are inclined to think otherwise.

At the start of her talk Pistor presents a basic supply and demand graph and says that this is how regulators and economists think of financial markets. She is absolutely correct. Even those theories that allow for disequilibria in markets — like the noise trader theory that I discussed in this post — take as their underlying structural principle the idea that markets are basically or even “naturally” efficient and will tend toward market equilibrium.

As I argued in that post, this bias built into the theory blinds economists, regulators and policymakers to the actual nature of such markets, which is that they are, as Pistor says, inherently crisis-prone and hierarchical. Regulations that focus on trying to allow markets to reach their supposedly “natural” equilibrium position will thus likely fail spectacularly.

What economic theory needs to do is to exorcise this specter of market equilibrium altogether. I have discussed how this might be done on this blog before, but let me elaborate a little further taking as a starting point Pistor’s presentation.

There are, as alluded to above, two components to Pistor’s criticism: (i) markets are inherently crisis-prone and (ii) markets are inherently hierarchical. Let’s deal with each aspect in turn.

If we concede that markets are inherently crisis-prone we must admit that they do not tend toward efficient market equilibrium positions. Think about that for a moment. Yes, theories like the noise trader theory — and there are other lesser known but similar theories which I will not discuss here — can account for bubbles and fluctuations in markets, but they tend to conceive of these as small ripples on an otherwise calm ocean.

This is because, in contrast to Pistor, they conceive of markets as being inherently stable and any crises that arise are anomalies (likely caused by some sort of anti-market process that can be regulated away). If we take as our basic precept, however, that markets are inherently unstable then we have to drop the assumption that they tend toward an efficient equilibrium.

The problem is that economists have internalised the efficient equilibrium bias to such an extent that they tend to view it simply as a methodological tool and not as what it actually is: an a priori assumption about the nature of markets. A few economists have recognised that the underlying idea of efficient equilibrium is far more than a methodological tool, but they are few and far between. In her excellent The Accumulation of Capital, Joan Robinson, for example, wrote:

[We cannot] apply the metaphor of a balance which is seeking or tending towards a position of equilibrium though prevented from actually reaching it by constant disturbances. In economic affairs the fact that disturbances are known to be liable to occur makes expectations about the future uncertain and has an important influence upon any conduct (which is, in fact, all economic conduct) directed toward future results. For instance, [financial asset] owners (and their professional advisers) are always on the look-out to buy what will rise in value. A belief that a particular share is going to rise causes people to offer to buy it and so raises the price… This element of ‘thinking makes it so’ creates the situation in which a cunning guesser who can guess what the other guessers are going to guess is able to make a fortune. There are no solid weights to give us an analogy of a pair of scales in balance. (p59).

The first step that needs to be taken if we are to reform the theory of financial markets is that we must throw out the idea of efficient market equilibrium. If we do not, it will continuously come back to haunt us. The problem is that very few economists seem to be able to understand that this idea is not merely a methodological tool but rather a normative bias about the supposed “nature” of markets that deeply affects how we conceive of them.

The second aspect of Pistor’s criticism is tied to this. What she means when she says that markets are inherently hierarchical is that there exists a network of institutions that affects the capacity of various players in financial markets. She takes as her example the Federal Reserve and claims, rightly, that we can understand the hierarchy of the financial markets by looking at its balance sheet since the 2008 crisis. Here is the balance sheet:

[Image: Federal Reserve assets since the 2008 crisis]

As Pistor says in her talk, an underwater homeowner cannot hand their mortgage over to the Fed in order to ensure their solvency, but a big bank about to go under has all sorts of options at hand.

Although at first glance this has little to do with the idea of equilibrium, I would argue that it does. What the Fed does in times of crisis is choose certain asset classes and then use its money creation abilities to pump the value of these assets up to what it considers an appropriate price. There is an equilibrium process at work here, but it is not a market equilibrium process.

What is happening is that private sector investment in some assets is falling while central bank investment is piling in to stabilise the price. We can represent this algebraically in a manner very similar to the Keynesian income-determination process. Consider that the price is set by investment flows coming either from the private sector, $I_f$, or from the central bank, which we will here call the government sector, $G_f$. Then:

$P_{f,t} = P_{f,t-1} + I_{f,t} + G_{f,t}$

To be crystal clear, that says that the price, $P_{f,t}$, is determined by the price in the previous time period, $P_{f,t-1}$, private sector investment, $I_{f,t}$, and government investment, $G_{f,t}$.

Now, let’s further say that the change in private investment is determined by expectations of future price changes. Then:

$\Delta I_{f,t} = E(\Delta P_{f,t+1})$

Again, what that equation says is that the change in private investment is determined by the expected future change in the price. I.e. if the future price is expected to fall, private sector investment will fall, and if the future price is expected to rise, private investment will rise.

I have to say that I am oversimplifying here massively. The process is actually much more complicated than this, but I’m really just interested in bringing out the basic structure of what is going on here.

Okay, that said, let’s lay out what determines the equilibrium price, Pft*.

$P_{f,t}^{*} = P_{f,t-1} + I_{f,t-1} + E(\Delta P_{f,t+1}) + G_{f,t}$

As we can see — and again, we are massively oversimplifying here — the equilibrium price is determined by (a) the expectations among investors of future price increases/decreases and (b) government, in this case central bank, action. Here is the key point: the equilibrium outcome is not a market equilibrium outcome; rather, it is a stock-flow equilibrium outcome. The equilibrium price is determined purely by investment flows into the asset, just as income in the Keynesian system is determined by investment and consumption flows.
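To see how these equations hang together, here is a minimal simulation sketch; the extrapolative expectations rule (investors expect last period’s price change to repeat) and the central bank’s target price of 100 are my own illustrative assumptions, not part of the framework itself:

```python
# Sketch of the stock-flow price process P_t = P_{t-1} + I_t + G_t, with
# the change in private investment driven by expected price changes
# (dI_t = E(dP), proxied here by last period's observed change) and the
# central bank buying just enough to hold the price at its target.

TARGET = 100.0
prices = [100.0, 98.0]   # an initial fall in price sets expectations going
private_flow = 0.0       # net private investment flow, I_t
intervene = True         # set to False to watch the price spiral away

for t in range(2, 10):
    expected_change = prices[-1] - prices[-2]    # E(dP_{t+1})
    private_flow += expected_change              # dI_t = E(dP_{t+1})
    g = (TARGET - prices[-1]) - private_flow if intervene else 0.0
    prices.append(prices[-1] + private_flow + g)
    print(f"t={t}  I={private_flow:+7.2f}  G={g:+7.2f}  P={prices[-1]:8.2f}")
```

With intervene set to False, the expectations feedback makes the price fall without limit; with it set to True, the government flow offsets the private flight and the price settles at the target, a stock-flow equilibrium rather than a market one.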

This is what I’m working on (in a far more complex way) for my dissertation. What I think is key to expanding our understanding of financial markets is to overthrow the bias inherent in the market equilibrium view. By introducing a stock-flow equilibrium framework we can see that these markets are (1) inherently unstable and dependent on expectations and (2) determined by investment flows that have an irreducible institutional element. This, I think, is how economists can begin to integrate and talk about work like Pistor’s.

Posted in Economic Theory, Toward a General Theory of Pricing | 9 Comments

Economic Modelling and Artificial Intelligence: Is Economic Reasoning Always Based on a “Hidden” Model?

[Image: HAL 9000]

There’s a trope one hears from economists all too often when one discusses the usefulness (or uselessness) of models. The argument usually runs like this: the person questioning the use of models says, for example, that all the useful predictions over the past X number of years have not relied on formal models; the person defending the models then replies that all these predictions were in fact made using models, it was just that the models were not explicitly articulated.

There are a few variations on this trope, but the underlying assumption is always the same: people have, locked inside their heads, models of the world that they apply without even knowing it. The same, on this view, goes for economists who think that they work from trained intuition. They are just being naive because they are, in fact, working from a model; it is just one that is, as yet, unconscious to them.

Epistemologically, this is a very slippery argument. But rather than getting into the nitty-gritty, I am going to point out that this argument has already been played out in the computer sciences, and economists who claim that we all walk around with models in our heads might do well to pay it some attention.

To get an idea of what this debate was all about we have to rewind to the 1960s. At this time research into cybernetics and artificial intelligence (AI) had reached a level of optimism never seen before or since. Yes, there were some — like Stanley Kubrick and Norbert Wiener — who painted a dark picture of where cybernetics and AI might lead, but there was a general consensus that we were heading firmly in the direction of AI, of HAL 9000 and so on.

Then an obscure philosopher named Hubert Dreyfus, working in a philosophical tradition completely alien to his native MIT, published a paper entitled Alchemy and Artificial Intelligence, which he wrote while working at the RAND Corporation, a hotbed of AI research and a center of US strategic thinking in the Cold War. The paper started with a sober assessment of the AI movement at the time, written in true RAND style.

Early successes in programming digital computers to exhibit simple forms of intelligent behavior, coupled with the belief that intelligent activities differ only in their degree of complexity, have led to a conviction that the information processing underlying any cognitive performance can be simulated on a digital computer. Attempts to simulate cognitive processes have, however, run into greater difficulties than anticipated. (p.iii)

Dreyfus was expressing a deeply felt skepticism that touched a nerve with the optimists in the AI community. This was probably not helped, of course, by the air of ridicule in the paper, in which Dreyfus compared AI research to alchemy. Dreyfus was soon isolated from his peers, who attacked both his ideas and his person. His criticisms suffered a setback when, after he had pointed out that a computer could not beat a ten-year-old at chess, AI proponents had Dreyfus play against the Mac Hack chess program and he lost, suggesting that his criticisms had, perhaps, gone too far.

Nevertheless, Dreyfus’ criticisms were not the products of an angry crank. Rather they were an epistemological attack on the foundations of AI. AI assumed, as Dreyfus notes in the above quote, that people think essentially in terms of symbols and rules — as do computers. Dreyfus, however, came from the phenomenological tradition in philosophy and insisted that this was not the case. He summarised his position in the introduction to the MIT edition of What Computers Still Can’t Do:

My work from 1965 on can be seen in retrospect as a repeatedly revised attempt to justify my intuition, based on my study of Martin Heidegger, Maurice Merleau-Ponty, and the later Wittgenstein, that the GOFAI [Good Old Fashioned AI] research program would eventually fail. My first take on the inherent difficulties of the symbolic information-processing model of the mind was that our sense of relevance was holistic and required involvement in ongoing activity, whereas symbol representations were atomistic and totally detached from such activity. By the time of the second edition of What Computers Can’t Do in 1979, the problem of representing what I had vaguely been referring to as the holistic context was beginning to be perceived by AI researchers as a serious obstacle. In my new introduction I therefore tried to show that what they called the commonsense-knowledge problem was not really a problem about how to represent knowledge; rather, the everyday commonsense background understanding that allows us to experience what is currently relevant as we deal with things and people is a kind of know-how. The problem precisely was that this know-how, along with all the interests, feelings, motivations, and bodily capacities that go to make a human being, would have had to be conveyed to the computer as knowledge — as a huge and complex belief system — and making our inarticulate, preconceptual background understanding of what it is like to be a human being explicit in a symbolic representation seemed to me a hopeless task. (p.xi-xii — Emphasis Original)

Regular readers of this blog will recognise in this my own critique of applied economic modelling and the use of econometric techniques, as well as those of Lars Syll, Tony Lawson and, ultimately, Keynes. The problems are the same. Whereas certain people in the economics community are trying to model, in symbols, processes so complex that they cannot be captured in those symbols alone, the AI community were attempting an even more daunting but not unrelated task: to use symbolic forms to model human consciousness itself. The AI modellers were, in a very real sense, trying to play God.

They basically failed, of course. Today AI research is much more humble and, although many laymen and some futurist-types still hold fast to a sci-fi view of the possibilities of AI, most of Dreyfus’ substantive predictions have been vindicated. AI failed spectacularly at mimicking the processes of human consciousness through the manipulation of symbols in computer programs, and the likelihood that this will take place at any point in the future is remarkably slim. The AI community have run into the problems that Dreyfus thought they would — problems such as how to simulate non-symbolic reasoning, or what Dreyfus calls “know-how” — and although there is still some optimism, the level of difficulty that these problems pose has made the AI community far more cautious in their claims. Much of the investment in boundless visions of the possibilities of AI now appears mere emotionally-charged fantasy — backed, undoubtedly, by an all-too-human desire to play at being God.

What can economists learn from this? A great deal actually. When dealing with economic data we use processes of reasoning that do not conform to systems of symbols — i.e. to models. This is why basically all interesting and relevant predictions come from intuitive empirical work and why none are generated by applying models. We do not, contrary to what the modellers believe, all carry models around with us in our heads that are just waiting to be discovered and applied. And anyone who thinks so will likely prove to be sub-par at actual applied work.

Human processes of reasoning are enormously complex and it is very difficult — if not impossible — to get an “outside” or “God’s eye” view of them. Thus, attempting to replicate the processes of reasoning inherent in economic thinking in models will only be useful for didactic purposes — and even then it will only be useful if students are made aware that these models cannot be directly applied and do not directly simulate how economics is done.

With that in mind I leave you with a nice quote from one of the late Wynne Godley’s students, Nick Edmonds, who does a great deal of modelling but who is nevertheless reflective enough to recognise its limits.

I think it is very important to recognise the limits to what models can do. It is easy to get seduced into thinking that a model is some kind of oracle. This is a mistake. Any model is necessarily a huge simplification. The results depend critically on the assumptions made. However complex and detailed they are, all they really reflect is the theories of the modeller. The model is not revealing any new truth, it is simply reflecting our own ideas, helping us to visualise how a massively complex system fits together. (My Emphasis)

Update: Here is an excellent film made about the philosophical tradition that Dreyfus comes from, which explains in far greater detail than I can here why it is wrong to conceive of human thought and action in terms of models. It also features Dreyfus and includes an extensive discussion of the AI debate from about the 14-minute mark on.

Update II: The film has been taken down from Youtube. I’ve linked to the trailer instead. The film is worth seeking out though.

Posted in Economic Theory, Philosophy | 9 Comments