Short-Period and Long-Period Analysis: Neoclassical Versus Historical

[Image: Keynes quote, “In the long run we are all dead”]

In my previous post I was concerned with summarising Lawson’s argument regarding the term “neoclassical” for an audience that was not going to read his paper in full. Thus, I did not wish to insert too many of my own thoughts on the matter. However, I would now like to deal with something which I only mentioned in passing in yesterday’s post.

Toward the end of the paper Lawson makes a distinction between three different groups. He writes:

In short, I am suggesting that there are three basic divisions of modern economics that can be discerned in the actual practices of modern economists. These are:

1) those who both (i) adopt an overly taxonomic approach to science, a group dominated in modern times by those that accept mathematical deductivism as an orientation to science for us all, and (ii) effectively regard any stance that questions this approach, whatever the basis, as inevitably misguided;
2) those who are aware that social reality is of a causal-processual nature as elaborated above, who prioritise the goal of being realistic, and who fashion methods in the light of this ontological understanding and thereby recognise the limited scope for any taxonomic science, not least any that relies on methods of mathematical deductive modelling; and
3) those who are aware (at some level) that social reality is of a causal-processual nature as elaborated above, who prioritise the goal of being realistic, and yet who fail themselves fully to recognise or to accept the limited scope for any overly-taxonomic approach including, in particular, one that makes significant use of methods of mathematical deductive modelling.

Lawson points out that it is the first group that makes up much of the mainstream; the second group that makes up what Lawson calls the “core of heterodoxy”; and the third group that would fall into the neoclassical camp if we understand Veblen’s original use of the term.

Now, although any labels we apply to these groups will be to some extent arbitrary, I think that considering the three groups from a different historical angle might help us get a grip on the new taxonomy which Lawson has unearthed. This historical angle will be that of Joan Robinson who, in retrospect, was a serial offender in the misuse of the term “neoclassical” but who also had an extremely good intuition regarding the differences Lawson has uncovered.

In her book Economic Heresies: Some Old-Fashioned Questions in Economic Theory Robinson tries to give some coherence to what she sees as the different trends in economics over the course of its history (since Ricardo, anyway). In this book she draws a clear distinction between the doctrines of Walras and those of Marshall and indicates that it is out of the latter that Keynesianism proper (what we would today call “Post-Keynesianism”) emerged. Robinson finds the key difference in this regard in how Walras and Marshall treat time. She writes:

The essential idea is that a short-period situation [in Marshall] is one in which productive capacity happens to be whatever it is. But a situation with a specific plant in existence today is not to be identified with the Walrasian concept of a given stock of factors of production; its role in analysis is quite different. Unlike the Walrasian concept, Marshall’s short-period is a moment in a stream of time in which expectations about the future are influencing present conduct, and it belongs to a monetary economy in which the division of proceeds between wages and profits emerges from the relation of money prices to money-wage rates. With the aid of this concept, we can analyze price policy in imperfect competition, the effects in the present of uncertainty about the future, and the meaning of equilibrium in a process of growth, all of which are ruled out by the assumption of a Walrasian market. (Pp17-18)

A number of comments are in order here. Clearly Robinson is recognising in Marshall what Veblen thought to be the hallmark of neoclassical economics: namely, its partial recognition of the “stream of time in which expectations about the future are influencing present conduct” but its remaining in the realm of static analysis, where various equilibria must be compared. This is, of course, what is often called the Marshallian partial equilibrium approach. Walras, by contrast, is no good even for partial equilibrium analysis. The Walrasian system is one of general, not partial, equilibrium.

Thus we can recognise in Walras a characteristic that Veblen assigned to the classicals: a tendency to think in terms of pure teleology. And we can recognise in Marshall the very thing that led Veblen to coin the term “neoclassical”: the uneasy coexistence of a recognition of historical time with the use of a methodology suited to static analysis.

We also see this “neoclassical” aspect of Marshall in how he thinks of the “long-period”. Robinson continues:

We can make use of the distinction between the long- and the short-period concepts without being committed to any faith in equilibrium being established in the long run. Indeed, it is absurd to talk of “being in the long period,” or “reaching the long period,” as though it were a date in history. (Marshall himself thought of the economy as tending toward long-run equilibrium but never actually being there.) It is better to use the expressions “short period” and “long period” as adjectives, not as substantives. The “short period” is not a length of time but a state of affairs. Every event that occurs, occurs in a short-period situation; it has short-period and long-period consequences. The short-period consequences consist of reactions on output, employment and, perhaps, prices; the long-period consequences concern changes in productive capacity. (Pp18)

This was written in 1971, at a period in her life when Robinson was coming to reject the vast bulk of the economic theory that she had produced in the past. Disappointed by the results and effects of the Capital Debates, she began to realise that she had merely been fighting symptoms and that the sickness in economic theory ran much deeper. You get a sense of this especially in the first of the lectures she gave at Stanford in 1974.

I think in this passage you see Robinson breaking away from what Veblen would have called “neoclassical” analysis. This is especially so when she says that it is better to understand “the expressions ‘short period’ and ‘long period’ as adjectives, not as substantives”. Here she is highlighting the “short” and “long” aspects rather than the “periods” per se. These concepts should not be thought of as short or long in the sense that we can assign them a numerical unit of time — say, one year and ten years respectively — but merely as short and long relative to each other, just as a “short man” is only short in relation to his peers (among the Inuit, for example, he might be rather tall). As we can see, Robinson develops such arguments out of the Marshallian tradition and suggests that such analyses would be impossible in Walras.

So, back to Lawson’s taxonomy. Where does all this fit in? I would suggest that the Walrasian system fits into Lawson’s first group. Most economics practiced and taught today is, in this sense, Walrasian. Marshall broadly fits into Lawson’s third group, which we might call neoclassical. I would suggest that much actual policy analysis undertaken today fits into this category, wherein the policy analyst keeps at the back of their mind an ultimately fixed or static model and then analyses policy based on this. All mathematical deductive analysis that is not strictly general equilibrium/Walrasian, whether modelling or econometrics, also fits into this category. Finally, Robinson’s own mature views fit nicely into Lawson’s second group. Robinson is engaged in exorcising any notion of non-historical time from the concepts she is using; something only hinted at in Marshall.

The moment I finished reading Lawson’s paper I knew that the distinctions he had made were not simply of rhetorical value. In fact, they provide a taxonomy with which we can sort out various paradigms based on their underlying metaphysics (Lawson would say: ontology). This is enormously advantageous. For example, we know that both Marshall and Walras are marginalists, but only the former is neoclassical in the proper sense of the term. There is a lot of fruitful research that can be conducted along these lines. I, for one, will certainly be using the term “neoclassical” differently — or should I say: correctly? — from now on.


What is Neoclassical Economics? And Are Many Heterodox Economists Actually Neoclassical?


After my recent post on a paper by Tony Lawson I was corresponding with the author and he suggested that I might want to take a look at a paper he has written that will be coming out in the September issue of the Cambridge Journal of Economics. The paper, which is entitled What is this ‘School’ Called Neoclassical?, can be downloaded from the advance access page of the CJE — just scroll down to June 20th.

In the previous post on Lawson I wrote the following:

I should say that while I agree with Lawson’s arguments against the ideology crowd, which I shall lay out shortly, I do not agree that the term “neoclassical” does not denote a tendency in economics that has fed into the very formalism that, as we shall see, Lawson despises. I think that a great deal of the problem with modern economics lies in its marginalist and, ultimately, as I have shown before, teleological tendencies. These are hallmarks of the neoclassical approach and it is these tendencies that give rise to the problems that Lawson identifies.

The present paper can be seen, in some sense, as a response to that comment. Of course, Lawson was not actually responding to me directly but it seems that others have made similar comments and this is what Lawson is dealing with in the present paper.

Lawson starts by making the point that, as is at least somewhat well-known, the term “neoclassical economics” was coined by Thorstein Veblen in a 1900 paper entitled The Preconceptions of Economic Science. The paper deals, interestingly enough, with the metaphysical preconceptions of economics — something that I discuss quite often on this blog (many examples can be found here) and in other writing (see here, for example). Lawson highlights that Veblen’s target is much like that which is found in his own work, except that what Veblen calls “metaphysics” Lawson generally refers to as “ontology”.

This, as we shall see, is extremely important because Veblen, in coining the term “neoclassical”, is not actually taking aim at the marginalists — if we may call them that — because of anything to do directly with utility theory or perfect markets or anything like that; more important for Veblen is their underlying metaphysics. Veblen sees this metaphysics as being caught between two different paradigms: what he calls the “taxonomic” and the “evolutionary”.

Veblen is contrasting this contradictory stance of what he calls the neoclassicals with that of the classicals. In the classicals Veblen finds an animistic, teleological metaphysics in its most crude form which he then contrasts with the newer neoclassical view. He writes:

The earlier, more archaic metaphysics of the science, which saw in the orderly correlation and sequence of events a constraining guidance of an extra-causal, teleological kind, in this way becomes a metaphysics of normality which asserts no extra-causal constraint over events, but contents itself with establishing correlations, equivalencies, homologies, and theories concerning the conditions of an economic equilibrium. (Veblen, 1900, Pp255)

What Veblen refers to as the “constraining guidance of an extra-causal, teleological kind” behind the “orderly correlation and sequence of events” is basically Adam Smith’s “invisible hand”, which exerts a teleological pull on the direction of economic development. Veblen recognised in the invisible hand the teleological pull of God’s Will; something which I have noted elsewhere before.

In the newer neoclassical tradition, however, Veblen noted that much of this “archaic” religious or animistic metaphysics becomes instead a “metaphysics of normality” that tries to ground itself without any reference to an “extra-causal, teleological” force. The reason Veblen notes this is that he finds in the work of certain British neoclassical authors a recognition of what he calls the “evolutionary” approach — something Post-Keynesians may more readily understand if we say that this is an approach that recognises the presence of historical rather than logical time. Lawson summarises:

In identifying this specific strain (which shows unmistakable adaptation to the historical or evolutionary approach) Veblen proceeds merely by illustrating it with reference to two of its developers. One is the philosopher of science John Neville Keynes (the father of John Maynard Keynes), the other is the economist (and Keynes family friend) Alfred Marshall. (Lawson, 2013, Pp21)

In this approach Veblen finds the modern neoclassicals to be gradually moving away from the archaic, animistic tradition of the classicals and toward something more fruitful. Lawson again summarises:

It is precisely this tension, which is first illustrated using the contributions of [John Neville] Keynes and Marshall, that I take to be the essence of neoclassical economics, according to Veblen. In other words, the defining feature of all neoclassical economics is basically an inconsistent blend of the old and the new; it is in effect an awareness of the newer metaphysics of processual cumulative or unfolding causation, combined with a failure to break away from methods of the older taxonomic view of science that are in tension with this modern ontology. Neoclassical economists are classical in their acceptance of a taxonomic orientation to science that does not rely on the design of God, albeit a taxonomic stance now primarily revealed at the level of method. But at that level of explicit ontological or metaphysical preconception, neoclassical economists reveal unmistakable adaptation to the viewpoints of the evolutionary sciences, warranting the qualifier ‘neo’. (Lawson, 2013, Pp23)

This is the underlying tension apparent in what Veblen calls the neoclassical school. It has little to do with clearing markets, an automatic tendency to full employment or anything else — indeed, these are properly seen as mere symptoms of a more fundamental sickness — rather it is the fact that these neoclassicals do recognise that they are dealing with “processual” historical time while nevertheless retaining methods that are only suited to the study of logical time. In a more familiar lexicon this is usually identified as the Marshallian partial equilibrium approach, which economists such as Joan Robinson recognised as having moved somewhat away from the ahistorical Walrasian approach but as nevertheless retaining problems of its own.

At the time Veblen thought that such a contradiction would prove short-lived and that it was only a matter of time before the neoclassicals adopted the correct “evolutionary” ontological position in their study of the economy. Lawson points out, of course, that the exact opposite has been the case. He argues that this was in large part due to changes that were taking place in the field of mathematics at the time. Here I will quote Lawson at length as it is a very important argument and draws on previous work:

However, in the early part of the twentieth century changes occurred in the interpretation of the very nature of mathematics, changes that caused the classical reductionist programme itself to fall into disarray. With the development of relativity theory and especially quantum theory, the image of nature as continuous came to be re-examined in particular, and the role of infinitesimal calculus, which had previously been regarded as having almost ubiquitous relevance within physics, came to be re-examined even within that domain. The outcome, in effect, was a switch away from the long-standing emphasis on mathematics as an attempt to apply the physics model, and specifically the mechanics metaphor, to an emphasis on mathematics for its own sake. Mathematics, especially through the work of David Hilbert, became increasingly viewed as a discipline properly concerned with providing a pool of frameworks for possible realities. No longer was mathematics seen as the language of (non-social) nature, abstracted from the study of the latter. Rather, it was conceived as a practice concerned with formulating systems comprising sets of axioms and their deductive consequences, with these systems in effect taking on a life of their own. The task of finding applications was henceforth regarded as being of secondary importance at best, and not of immediate concern…

This emergence of the axiomatic method removed at a stroke various hitherto insurmountable constraints facing those who would mathematise the discipline of economics. Researchers involved with mathematical projects in economics could, for the time being at least, postpone the day of interpreting their preferred axioms and assumptions. There was no longer any need to seek the blessing of mathematicians and physicists or of other economists who might insist that the relevance of metaphors and analogies be established at the outset. In particular it was no longer regarded as necessary, or even relevant, to economic model construction to consider the nature of social reality, at least for the time being. Nor, it seemed, was it possible for anyone to insist with any legitimacy that the formulations of economists conform to any specific model already found to be successful elsewhere (such as the mechanics model in physics). Indeed, the very idea of fixed metaphors or even interpretations, came to be rejected by some economic ‘modellers’ (albeit never in any really plausible manner)…

The result was that in due course deductivism in economics, through morphing into mathematical deductivism on the back of developments within the discipline of mathematics, came to acquire a new lease of life, with practitioners (once more) potentially oblivious to any inconsistency between the ontological presuppositions of adopting a mathematical modelling emphasis and the nature of social reality. The consequent rise of mathematical deductivism has culminated in the situation we find today. (Lawson, 2013, Pp27-28 — My Emphasis)

This, of course, will strike many of us as familiar; especially those who have read recent criticisms of the mathematical method as applied to economics that I have published on this blog (here and here). Indeed, on this view it would appear that if we wish to stick to the proper use of the term we must reserve “neoclassical” for those who at once recognise that economics is the domain of historical time, and thus largely not conducive to mathematical modelling, and at the same time use said mathematical modelling as a tool to understand the economy. Lawson summarises this conclusion nicely:

Somewhat ironically, then, albeit particularly advantageously, if the suggested interpretation of the term ‘neoclassical’ is accepted, usage of the category would serve to draw attention to precisely that inconsistency (of preconceptions of certain modelling practices with otherwise revealed ontological commitments) which the manner of its current usage helps obfuscate. The effect, in short, would be to reverse the term’s current role in the discipline; its usage would contribute to identifying, revealing and/or signalling the tension in question, rather than, as at present, serving to mask or otherwise divert attention from it. (Lawson, 2013, Pp28-29)

The implications of this, of course, are enormous, as anyone familiar with the heterodox community and their use of the term “neoclassical” will realise. It means that any heterodox economist who admits that economic processes take place in historical rather than logical time but who nevertheless utilises improper mathematical modelling methods should be identified as “neoclassical”, because this is precisely the group for which the term was coined.

So, is this really Lawson’s goal? Does he really want to tag those in the heterodox community who use mathematical modelling extensively and unreflectively as neoclassicals? Yes and no. Here I will again quote Lawson to allow him to make his own case.

To return to a question already posed but not really answered, am I seriously suggesting that we employ the term ‘neoclassical’ to refer to the third of the identified groups of economists, which will clearly include many who self-identify as heterodox? I repeat that I am certainly suggesting that to use the term ‘neoclassical’ in this fashion is the most appropriate, and a coherent, use of the category for the reasons already given; although a better categorisation might be non-dogmatic taxonomists or non-dogmatic deductivists, in contrast with the dogmatic (mathematical) taxonomists/deductivists that are the mainstream… All things considered, however, in the end I do not really think it reasonable to distinguish or identify any group on the grounds of a shared fundamental inconsistency. My aim here, in reporting my findings, is, in the end, partly rhetorical, namely, to point out that if coherence in use is required, then according to the seemingly most sustainable conception, many of those who use the term ‘neoclassical’ as an ill-defined term of abuse can be viewed ultimately as engaged in unwitting self-critique. But I am hoping, more fundamentally, that it is enough in this manner to communicate (in a yet further way) that in modern economics there prevails largely unrecognised a basic tension between ontology and method, one that hinders serious attempts at overcoming the real problems of the discipline. (Lawson, 2013, Pp33)

The implications of this are nothing short of enormous. If one agrees to stick to a clear and well-defined meaning of words then Lawson is wholly correct in his use of the term “neoclassical”. (That said, I think there are other good reasons to hold discussions about the shortcomings of the marginalist approach.) So, if we want to be consistent we must today use the term neoclassical to refer to a group of economists who at once recognise that economic processes take place in historical time but nevertheless use inappropriate mathematical modelling tools to try to capture this.

Given the recent trend toward econometric research, a good many working heterodox economists fall under that heading; not to mention anyone using extensive mathematical modelling techniques. Food for thought, at the very least.


The Idiocy of the “Evolutionary” Paradigm in Psychology


In the wake of the financial meltdown a lot of economists are turning to the discipline of evolutionary psychology for answers. Evolutionary psychology basically attempts to explain human psychology in terms of adaptive evolutionary principles. So, a person does X because it is part of their desire to reproduce and thus partake in the evolutionary game and so forth.

I’ve long thought this paradigm to be a crock. Human psychology is not some crude reflection of an abstraction called “evolution” — which is today worshipped in the scientific community as a sort of deity. Human psychology is far more complex and nuanced than that.

The educated public at large, who these days like their deities sanctioned by a man in a lab-coat, have generally embraced the evolutionary paradigm with open arms. They can’t seem to get enough of the latest gimmicky explanation of some type of behavior. This is especially so when the explanation revolves around titillating details about sex and sexuality which it so often does.

Recently, however, I came across a hilarious story in which a very thorough evolutionary psychologist called Satoshi Kanazawa argued that women with higher IQs tend to be less inclined to have children. From an evolutionary standpoint this seems absurd because, presumably, the human race will evolve more quickly with more intelligent members. Kanazawa writes:

If any value is deeply evolutionarily familiar, it is reproductive success. If any value is truly unnatural, if there is one thing that humans (and all other species in nature) are decisively not designed for, it is voluntary childlessness. All living organisms in nature, including humans, are evolutionarily designed to reproduce. Reproductive success is the ultimate end of all biological existence.

Hence anyone who is not reproducing is “wrong” or “stupid” at some basic, “natural” level.

Of course, within its own frame Kanazawa’s argument makes complete sense. But the people who usually lap this stuff up don’t like what he’s saying, and so they get mad at him. The whole thing is ridiculous.

Wait a few weeks and you will see another evolutionary psychology study released which the up-market, liberal press will then salivate over. But the second the research programme conflicts with their cultural values they are up in arms.

That said, the media pundits, despite being completely inconsistent, are ultimately right about this particular case. The argument is ridiculous. It is also, however, entirely consistent and follows perfectly well from the underlying structure of the paradigm in question. That is why the argument should not simply be called into question but should raise questions about the normative metaphysics underlying the evolutionary point-of-view in psychology.

I assume, however, that in the coming months some other evolutionary psychologist will come up with obtuse arguments “refuting” Kanazawa’s work. I can see now the form that the argument will take: the psychologist will bend the theory into pretzels to accommodate, from an evolutionary perspective, high-IQ women not reproducing. The argument will then look correct to those within the paradigm and simultaneously placate the educated public.

“Oh, don’t worry,” they will say, “Kanazawa is just silly, he didn’t really think the whole thing through; look, we can accommodate your cultural values perfectly well. Please continue to talk about our studies when we release them.” And it is thus that psychological theory itself is made to reflect cultural norms at any given moment in history, showing clearly what status the discipline truly possesses.


Tending to His Own Garden: Has Krugman Finally Turned on the ISLM?


In a recent post Paul Krugman, as part of an ongoing debate with MMT/MMR advocate Cullen Roche, has said that the ISLM is not a good approach to macroeconomics. Hurrah! Right? Well, maybe not.

In fact, New Keynesians do not generally use the ISLM in its original form any more. A good example of this is a paper by David Romer entitled Keynesian Macroeconomics Without the LM Curve. What Romer does in that paper is essentially replace the classic LM curve in the ISLM with a Taylor Rule interest rate target.
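For readers who have not seen it written down, a standard textbook statement of the Taylor Rule runs along the following lines (this is the generic form, not necessarily the exact specification Romer uses):

\[ i_t = r^{*} + \pi_t + a_{\pi}(\pi_t - \pi^{*}) + a_{y}(y_t - \bar{y}_t) \]

Here \(i_t\) is the policy interest rate, \(\pi_t\) is inflation, \(\pi^{*}\) is the inflation target, \(y_t - \bar{y}_t\) is the output gap, and the coefficients \(a_{\pi}\) and \(a_{y}\) were both set at 0.5 in Taylor’s original formulation. Note the \(r^{*}\) term: it is the assumed equilibrium real rate of interest, and the rule is only coherent if such a rate exists for the central bank to anchor on. That is precisely the “natural rate” assumption discussed below.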

From a Post-Keynesian/MMT perspective this is certainly more accurate than the classic ISLM, but it raises considerable problems of its own. As the Post-Keynesian monetary economist Marc Lavoie writes in his paper Money, Credit and Central Banks in Post-Keynesian Economics, this is just “old wine in new bottles”.

The problem with the Taylor Rule is that it rests on the implicit idea of a natural rate of interest, an idea which was rejected by Keynes in the General Theory when he wrote:

In my Treatise on Money I defined what purported to be a unique rate of interest, which I called the natural rate of interest — namely, the rate of interest which, in the terminology of my Treatise, preserved equality between the rate of saving (as there defined) and the rate of investment. I believed this to be a development and clarification of Wicksell’s “natural rate of interest”, which was, according to him, the rate which would preserve the stability of some, not quite clearly specified, price-level. I had, however, overlooked the fact that in any given society there is, on this definition, a different natural rate of interest for each hypothetical level of employment. And, similarly, for every rate of interest there is a level of employment for which that rate is the “natural” rate, in the sense that the system will be in equilibrium with that rate of interest and that level of employment. Thus it was a mistake to speak of the natural rate of interest or to suggest that the above definition would yield a unique value for the rate of interest irrespective of the level of employment. I had not then understood that, in certain conditions, the system could be in equilibrium with less than full employment.

I have provided a separate criticism of the New Keynesians and the natural rate here which shows that theorists like Krugman are completely incoherent on this point. While this is the direction that this debate should take, I will deal here with another problem in Krugman’s reasoning, one which shows that he has not moved away from the mainstream quantity of money/exogenous money position as some may have hoped from reading the post. In his latest piece Krugman writes:

So why am I bringing IS-LM into the discussion? First of all, I should have been much clearer than I have been that the LM curve I’ve been drawing is for a given monetary base, not a given M1, M2, or whatever. I guess I haven’t said that clearly, although it’s implicit in my old Japan paper (pdf), where I do state clearly the point that in the liquidity trap the central bank, while it can control the monetary base, generally can’t control broader monetary aggregates. (My emphasis)

Krugman’s statement implies that outside of what he calls “liquidity trap” conditions the central bank can indeed control “broader monetary aggregates”. This is simply false. We had experiments to this effect in the late 1970s and early 1980s, when the “mad monetarists” rose to power in central banks across the world, and they were a complete failure. In Britain, for example, where the mad monetarists — sanctioned by that lunatic Thatcher — had more power to experiment with their quantity theory than perhaps anywhere else, the monetarists completely failed to exert any control over the broad monetary aggregates.

Below is a graph taken from the excellent paper by the late Wynne Godley and Ken Coutts entitled The British Economy Under Mrs. Thatcher which shows clearly how the monetary aggregates behaved during the reign of the mad monetarists.

[Chart: growth of the British M3 money stock, reproduced from Godley and Coutts]

To give that some context, the monetarist experiment is usually dated as having taken place between 1979 and 1983. During this period the mad monetarists at the Bank of England pulled out all the stops in trying to control the broad monetary aggregates and, as we can see, failed completely. During the period 1979-1983 the M3 in Britain grew at a far faster pace than it did in the period 1976-1979 before the experiment took place. It also grew at a rate that was not far off the rate of growth during the first major inflationary burst of 1970-1973.

All this indicates that, contrary to what Krugman seems to be implying, the central bank never controls the broad monetary aggregates — it merely sets interest rates. This is exactly what the MMT/Post-Keynesian endogenous money argument tells us. As Godley famously wrote:

Governments can no more “control” stocks of either bank money or cash than a gardener can control the direction of a hosepipe by grabbing at the water jet.

Indeed. It is time the mainstream got this through their heads so that a real debate can take place over whether there exists a natural rate of interest. Provided, of course, that the mainstream is confident their theory stands up to scrutiny; because, frankly, I don’t think that it does.


Mathesis Universalis: Lawson’s Criticisms Fall Short of their Real Target


Lars Syll has linked to a really interesting paper by Tony Lawson amidst a discussion about maths and modelling in economics. The paper is worth a read in its entirety. It is entitled Mathematical Modelling and Ideology in the Economics Academy: Competing Explanations of the Failings of the Modern Discipline? and can be found for free download here. In it Lawson deals with what makes mainstream economics so desperately poor, and he ultimately undertakes an examination of what I have elsewhere called “Brain-Slug Economics”.

He notes that his critics often chastise him for not arguing that neoclassical economics is inherently ideological. Lawson concedes that he rarely or never uses either the term “neoclassical” or the term “ideological” because he does not think that they are needed to critique the fundamental problems with the way much modern economics is done.

Before getting into this I should say that while I agree with Lawson’s arguments against the ideology crowd, which I shall lay out shortly, I do not agree that the term “neoclassical” does not denote a tendency in economics that has fed into the very formalism that, as we shall see, Lawson despises. I think that a great deal of the problem with modern economics lies in its marginalist and, ultimately, as I have shown before, teleological tendencies. These are hallmarks of the neoclassical approach and it is these tendencies that give rise to the problems that Lawson identifies.

Anyway, back to his discussion of ideology. He identifies ideology as meaning two different things to two different camps of people who make the case that mainstream economics is ideological. I will here quote Lawson in the original:

1) Ideology1: a relatively unchallenged set of (possibly distorted or misleading) background ideas that every society or community possesses which forms the basis of, or significantly informs, general opinion or ‘common sense’, a basis that remains somewhat invisible to most of its members, appearing as ‘neutral’, resting on preconceptions that are largely unexamined. A consequence is that viewpoints significantly out of line with these background beliefs are intuitively seen as radical, nonsensical or extreme no matter what may be the actual content of their vision.

2) Ideology2: a set of ideas designed, or anyway intentionally employed, in order to justify, preserve or reinforce some existing state of affairs, where this state of affairs is preferred, perhaps because it facilitates or legitimates various advantages for some dominant or privileged group, and where these ideas mostly work in the manner described by way of intentionally masking or misrepresenting the nature of reality.

The two concepts of ideology lead to different viewpoints of the mainstream. Those who adhere to the first definition see the mainstream as dupes who cannot see beyond their own noses. They live in a society with a given set of rules (market, capitalist etc.) and this blinds them to the truth of the system. When confronted with the Red Pill and the Blue Pill they opt for the Blue Pill.

Those who adhere to the second definition actually seem to believe, as Lawson points out, in some sort of malign conspiracy. In this view, the economists are like paid hands who piece together a system of ideas for the Powers That Be. While the first viewpoint stems from a sort of crude and half-understood social constructivism, the second stems from Marxism plain and simple; especially, as Lawson points out, from Gramsci’s concept of “hegemony”.

Lawson says that such a view is completely untenable. Why? Because many of the leading theorists actively warn against interpreting their models in an ideological fashion (whatever way you want to define ideology). He quotes Frank Hahn to this effect:

[…] it cannot be denied that there is something scandalous in the spectacle of so many people refining the analyses of economic [equilibrium] states which they give no reason to suppose will ever, or have ever, come about. It probably is also dangerous. Equilibrium economics […] is easily convertible into an apologia for existing economic arrangements and it is frequently so converted.

Here we see that Hahn has a perfect amount of detachment from the society he inhabits, so he doesn’t fall into the category of Ideologist Mark I (or, at least, it appears so on a first vulgar reading). He also doesn’t seem to be at all apologetic for the system, which would make him a rather poor agent for the bourgeoisie. No, Lawson says that what is really going on is something altogether different. What these economists are chasing is the ghost of a perfectly balanced deductive theory. Lawson writes:

In truth in those cases where mainstream assumptions and categories are couched in terms of economic systems as a whole they are mainly designed to achieve consistency at the level of modelling rather than coherence with the world in which we live.

There is a level of narcissism operating here. Mathematics and deductive logic are inherently narcissistic systems of thought which give narcissistic psychological gains. To have created a model is to have created a self-contained toy world that “works” perfectly. It is like playing one of those online farming games where you try to build the best farm that you can — meaningless as this may be. In both cases you create your own little sandbox world in which you can play peacefully away from the bad smells and loud noises of reality, in all its complexity and confusion.

A particularly amazing instance where such narcissism shines through in all its glory is in the case of econometricians. Here I will again quote Lawson in the original:

If we focus on empirical contributions, specifically, it is clear that there are few attempts to repeat the results of others, progress the results of others, or even acknowledge the results of others. Even econometricians using identical, or almost identical, data sets are regularly found to produce quite contrasting conclusions, usually with little attempt at explanation. The systematic result here, as the econometrician Edward Leamer (1983) observes, is that: “hardly anyone takes anyone else’s data analysis seriously”.

I can absolutely confirm this. But I would go one further: the reason that econometricians don’t take each other’s data analysis seriously is that, being econometricians, they know how the game is played. And if you know how the game is played you know how much room for arbitrariness and manipulation there is within the process. Working economists and econometricians who actually believe that the method works always consider themselves the exception to the rule. “Oh yes, he has no chance of having done it right, but of course I know that I have!”

The level of narcissism here is primitive. But I should be clear: I am not saying that people in these fields are particularly narcissistic. No, what I am saying is that the method itself generates a narcissistic response from the people who undertake it. Just as war turns civilised men into rapists and murderers, these models turn modellers into hermetically sealed units with little real link to the outside world (at least, insofar as their work goes). No communication is needed; either with reality or with others. The Self is all that matters. The modeller is a God within his own sandbox. King of the castle in the little world he has built.

Lawson finally pinpoints this whole situation as a different form of ideology. He writes:

First and foremost, I want briefly to indicate an alternative ideology, a version of ideology1 (a set of background views manifest unquestioningly as if normal or neutral) that I believe does pervade the economics academy, one that is extremely widespread and indeed plays a significant contributory role in the failings of the discipline. But this is a set of beliefs that bears not directly upon the nature of the underlying economic system at all. Rather it is precisely the doctrine that all serious economics must take the form of mathematical modelling. [Emphasis original]

I agree entirely; I could not agree more. But here is where I will take Lawson’s argument to the logical conclusion that I believe he will not take it to himself because, frankly, he is to some extent within this ideology. The name for the ideology that Lawson identifies is one that we will all be familiar with: it is called Enlightenment and, as I have shown before, it is precisely this ideology that has buried any criticisms of itself and remained dominant in its blindness for centuries.

It is an ideology that pervades all our language, all our conceptions of how to organise our societies and even all our perceptions of ourselves. What we see unfolding on the blackboards of the mainstream economists is the project of Enlightenment in all its glory; it is the dream of the Mathesis Universalis that was never really locked up and instead went into hiding underground. And until its critics are willing to call it by its name they will continue to find that it slips through their fingers, as their fingers are ultimately made of the same slippery stuff.

That is, however, a rather broad topic and I do not hope to broach it here. Rather I will leave the last word to Lawson because, although he and I think that the problem is rooted in different places, we nevertheless agree on almost every issue of substance. After considering his own attendance at the INET conference he writes:

In fact so apparently compelling is the belief system in question (that mathematical deductive formalism is the proper way to do economics) that many heterodox economists too seemingly fall under its sway. Although heterodox modellers do not follow the mainstream in dogmatically insisting that we all everywhere adopt a formalistic orientation, it remains the case that many heterodox economists fail to recognise that the conceptions they find inadequate in mainstream theory owe something to the mainstream modelling emphasis; and these heterodox economists continue excessively to explore alternative mathematical models and forms of formalism in the face of explanatory failures and unrealistic formulations.

Indeed. And Lawson also goes on to say something in passing which I believe deserves more attention:

Also, the pattern of behaviour in question seems to be gendered, with formalistic economics, and the concern with prediction, being relentlessly pursued largely by gendered males.

Hmmm… What was that I said about King of the castle again?

Update: Here is a video clip of Lawson which I found on YouTube that is worth spreading. It’s hard to tell, but I think I detect a bit of an Irish accent in Lawson’s speech. What is it with the Irish feeling the need to critique the pretensions of mathematical science? I do wonder… Perhaps it was due to the many years the country spent under the boot of the priesthood, after which the Irish learned an innate distrust of those who base their claims to authority on mystical principles. Who knows?


Swimming Against the Tide of History: Krugman-Galbraith ’96


Lars Syll has brought my attention to a very interesting exchange between Jamie Galbraith and Paul Krugman from 1996 that is archived on the latter’s website (or a website created for him, I cannot tell). Much of the discussion concerns long-dead arguments from the 1990s that now look as antiquated as early episodes of Friends. But there are two things that are interesting about the debate.

First of all, in retrospect, it looks to me like Krugman is on the wrong side of pretty much every issue. He seems cavalier about the rising income inequality in the US — something he has since renounced (but only due to reality really beating him over the head). He shrugs off a falling labour share in national income — something which has since become so obvious that not even the flashiest debater could cover it up. At one point his support of Pete Peterson’s Social Security privatisation campaign is mentioned (yep, old Pete Pete was selling that hoary “Social Security is imminently broke” line back in ’96 and Krugman fell for it).

There also seems to be a sense with Krugman that the Washington Consensus would lead to general prosperity, although it is hard to pin down — this was one year before the Asian crisis and the subsequent fracturing of said consensus. And then there’s the move toward a budget surplus by the Clinton administration; something now argued by many to have precipitated the rise in private sector debt that ultimately led to the financial crisis of 2008.

Krugman really was swimming one way while the tide of history was washing the other. And the confidence with which he asserts himself, the sureness that he is right and that all his opponents are just making “simple arithmetic mistakes” (seriously, read the exchange), is nothing short of embarrassing in retrospect. Hell, it was even embarrassing at the time! Krugman readily admits he fell for the Peterson propaganda!

Another aspect of the exchange that is interesting, however, is the manner in which Krugman deals with those he disagrees with. If you read the exchange carefully you’ll see that Krugman is engaged in a sort of “divide and conquer” strategy. What he does is pick out mistakes, not that his opponent (Galbraith) has made, but that his opponent’s friends have allegedly made. He then tries to have Galbraith denounce these people under the implicit threat that if he doesn’t he will fall into the same category as them.

This denunciation, this Judas kiss, then becomes a prerequisite for Galbraith being allowed the privilege of debating Krugman. Seriously, I’m not making this up. Read it yourself:

I often regret the feeling of obligation that led me to take on that unpopular role. For one thing, there is no better way to make a man hate you than to tell him that while he was walking around priding himself on being at the intellectual cutting edge, in fact he was merely insisting that two and two must add up to at least 25. And I would much rather have interesting arguments with people who might be right than spend my time trying to explain freshman-level concepts to unwilling listeners. I would hope that Galbraith will agree with me that all these doctrines are silly. If so, he and I can go on to discuss the “real” issues.

Well, that’s certainly an, erm, interesting debating strategy. No, actually let’s call it what it is: it’s a dirty war tactic. The goal is not to debate the opposition, but to carve them up and cause chaos within their ranks. It’s sneaky. It’s unpleasant. Frankly, I don’t think I’ve ever seen it before; even on the blogs.

Of course, as we said, Krugman was on the wrong side of history. Now we all know that inequality is a root cause of our problems and that labour’s share in the national income was on a serious downward trend. Now we know that the Washington Consensus tended to lead to enormous bubbles, unemployment and the disintegration of monetary systems. And although the particulars of the debates contained in these exchanges are now long buried and lack for me any familiar freshness, one wonders in retrospect whether it was the strength of the push of the historical tide that necessitated Krugman’s guerrilla war tactics.

Nowadays, of course, Krugman is reformed. His columns echo much of what Galbraith is saying. There is talk of inequality, of the need for government deficits and of the dangers of serious economic instability due to unregulated financial markets. There is silence, however, with regard to those few who were persistently making these arguments. Krugman will drop the names of dead men from time to time — Keynes, Minsky and now Kalecki — but his silence on the names of the living who had been pushing these ideas all along is nothing short of deafening.

Now, perhaps, I am beginning to understand why this is the case; why Krugman is desperate not to concede any ground to the Post-Keynesians while at the same time gradually sliding — almost seamlessly — into the positions that they hold. The difference is that, as Galbraith argues in his letters, back in ’96 the opposition were completely marginalised and had no means by which to put forward their views to the general public, while today there is a whole host of platforms for such people.

One wonders if that tide is pulling once again, and Krugman might again be caught swimming the wrong way.


Naked and Free, Economics Without Models: A Response to Grasselli


Well, Grasselli has responded to my previous post. His response, full as it is of personal attacks, bile and insinuations, is pretty embarrassing; Grasselli hasn’t yet learned that a rapid, heated exchange on Facebook is a little different from a blog post that is supposed to be carefully thought through. But I won’t engage with Grasselli’s clear seething dislike of me because I think doing so would be pointless — infinitely amusing, especially his chess club insults (small brain etc.), but pointless.

The best way to approach this is by pulling out the substantive points in an orderly manner.

(1) Grasselli says that I am a hypocrite because I say that economists should make predictions but that they should not engage in crass games comparing various models. This is apparently a contradiction because, according to Grasselli, you need models to make predictions. Well, I make economic forecasts all the time and I never use models. This appears to blow Grasselli’s mind, but only because, as discussed in the first post, he is a mathematician and not an economist, so he doesn’t really know the trade.

Let me state this point clearly so that everyone understands what I’m saying: I do not use models for prediction; I do not believe that models are a good means by which to make predictions; and ultimately I think that models are only really didactic tools, “classroom gadgets” as John Hicks once said. (I have, by the way, also built models, so I’m not saying this from the standpoint of someone who eschews them; I just think them to be no more than didactic tools). Grasselli will likely not understand this point but that is on him, not on me.

(2) More importantly, Grasselli has misunderstood what I meant when I said that his models cannot be used in meaningful empirical work. He says that his models can make precise predictions. He then discusses frequentist and Bayesian statistics. (He also, in a feat of admirable narcissism, seems to think that I refer to this as “the Grasselli approach”, although a close scrutiny of the text reveals no such statement.)

Look, I’m not going to get into a debate over what Bayesian theory is or isn’t. There are only two ways that you can use a model to generate a prediction. One way is simply to assume that the model represents reality. For example, a New Classical economist might use their model to say that an increase in government spending will lead to inflation in the medium-to-long run. The other approach is to feed historical data into the model and try to extrapolate based on this — again there is the assumption that the model is correct, but it is being used to “process” data and project past empirical trends forward rather than make a priori predictions. The latter approach, which as Grasselli notes is the “frequentist” approach, fails because economic data is non-ergodic: past frequency distributions cannot be assumed to carry over into the future.

Grasselli instead does something more in line with the former approach. As he writes:

In Bayesian statistics, the modeler is free (in fact encouraged) to come up with her own priors, based on a combination of past experience, theoretical understanding, and personal judgment.

I was well aware of this when I wrote up my criticisms. This is precisely what I meant when I said that Grasselli’s aim was not what I consider real empirical work but rather the testing of his model. He comes up with a model based on what he calls “priors” and then tests this against the data. Why do I not think that this is real empirical work? Because it is ass-backwards. The point of Grasselli’s approach is not to discover novel facts that might tell you how the economy is evolving through time but instead to confirm or disprove already-held knowledge. Since I do a good deal of empirical work I know that this approach is basically useless; the scope is much too general.
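For readers unfamiliar with the jargon, the updating being described here is just Bayes’ theorem applied to the parameters of a model; schematically:

\[ p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)} \]

where \(p(\theta)\) is the prior (the modeller’s initial beliefs about the model parameters \(\theta\)), \(p(D \mid \theta)\) is the likelihood of the observed data \(D\) given those parameters, and \(p(\theta \mid D)\) is the updated, posterior belief. The theorem itself is uncontroversial; the point at issue is that everything it delivers is conditional on the model being an adequate representation of the economy in the first place, which is precisely what I am disputing.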

A good applied economist is always in the process of discovering new facts buried in the data; not banging his model against the wall until he breaks through. Grasselli will, of course, not understand this criticism because, as we have already seen, he thinks that you need a model to make predictions. But I cannot help him here. It is not his field. This is why I don’t think that mathematicians should be invited into the tent. I don’t pretend to know how to do good mathematical theory, but for some reason a lot of mathematicians seem to think they know how to do applied work in economics. And then their jaws drop to the floor when I tell them that the approach to empirical work to which I adhere is not model-based but rather, to again quote Keynes (I’m not just doing this for fun…), an “organised and orderly method of thinking out particular problems” with which we then scrutinise the relevant facts and data.

(3) With regard to the next substantive point I think that I should quote Grasselli in full because, whereas I am fairly confident that on the last point Grasselli really did not understand what I was saying, on this point I think that he is being evasive:

Pilkington seems to think that the only way to measure something is to go out with an instrument (a ruler, for example) and take a measurement. The problem is that risk, almost by definition, is a property of future events, and you cannot take a measurement in the future. ALL you can do is to create a model of the future and then “measure” the risk of something within the model. As Lady Gaga would say “oh there ain’t no other way”. For example, when you drive along the Pacific Coast Highway and read a sign on the side of the road that says “the risk of forest fire today is high”, all it means is that someone has a model (based on previous data, the theory of fire propagation, simulations and judgment) that takes as inputs the measurements of observed quantities (temperature, humidity, etc) and calculates probabilities of scenarios in which a forest fire arises. As time goes by and the future turns into the present you then observe the actual occurrence of forest fires and see how well the model performs according to the accuracy of the predictions, at which point you update the model (or a combination of models) based on, you guessed it, Bayes’s theorem.

Again, Grasselli is telling me things that I already know. Of course you cannot measure the future. What you can do, however, is make predictions about the future based on an analysis of past probability distributions (provided that the data is ergodic, which in this case it is not, but I won’t get into that right now). My criticism of Grasselli was simple: he does not have such a measure of risk. The article title that I quoted said that Grasselli had devised a “better way to measure systemic risk” but when pressed on it Grasselli could not give me any such measure. Indeed, Grasselli later claimed that one should “estimate nothing”.

The title of the article was completely misleading in this regard. When an investment professional hears someone claim that they can measure systemic risk they will naturally assume that this measure can be applied in some concrete way. But Grasselli’s cannot. All he can do, as we have seen, is continuously test his model over and over again to prove or disprove it. Anything that Grasselli says about the level of systemic risk in the real world will simply be based on his “own priors, based on a combination of past experience, theoretical understanding, and personal judgment”. But given that Grasselli is not actually an economist and clearly does not understand how to do robust empirical work, I do not see why we should trust such “priors”.

This gets to the heart of what I think a lot of this modelling does. I think it provides a “wow” factor. A modeller flashes their model in the face of an investment professional or a politician and they are entranced by its complexity. The modeller then proceeds to give them advice. The politician or investment professional then believes that the advice is coming out of the model — which is now imbued with a mystical aura — when in fact the advice is coming from the modeller’s “own priors, based on a combination of past experience, theoretical understanding, and personal judgment”. This process, one that has been noted many times before by Post-Keynesians, strikes me personally as extremely phony.

(4) Let's wrap this up with Grasselli's discussion of the Keynes quote I put forward. I laid it out because it is very well known in Post-Keynesian circles as a statement on methodology. I fear that Grasselli has missed the key part, where Keynes says "the object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems". This is not a criticism of mathematics, as Grasselli seems to think, but a criticism of modelling as such. It is a recognition that models are limited didactic tools that cannot be applied directly to the data. When we approach the data we do so not with models but with an "organised and orderly method of thinking out particular problems".

What Grasselli does is approach the data purely through the model. The model becomes a sort of stand-in or prosthetic limb with which he handles the data. This is not the generally accepted method of empirical economic work among Post-Keynesians (although there are a couple of people who do this).

What frustrates me so much about Grasselli is that he doesn’t really understand what Post-Keynesian economics is all about. He has raided it for a few ideas but he has not engaged it in any deep and meaningful way. Every Post-Keynesian who reads this post will understand precisely what I am talking about (even though some, for whatever reason, might side with Grasselli) but there is a very good chance that Grasselli will not because he is simply not familiar with the way we do economics.

At the beginning of his post Grasselli says that I fear he will destroy Post-Keynesian economics. I fear no such thing. I think that his work will fade rather quickly. What I do fear is that he will mislead younger aspiring Post-Keynesians and waste their time when they could be learning how to do economics. And I also fear that he might become a flashy, mathematicised representative of the discipline, making claims that I consider grandiose and applying ideas poorly. That's what I fear.

Posted in Economic Theory, Market Analysis | 1 Comment

Inflation and Income Distribution: A Reply to the Vulgar Keynesian Policy Enthusiasts

inflation

Noah Smith has a recent post on inflation in which he urges economists to learn to love inflation. We live, as everyone knows, in a period of low inflation and high unemployment, and I suppose Smith's point is that we shouldn't worry much about inflation and should instead focus on income growth. Fair enough. But would-be Keynesian policy enthusiasts really should temper their language.

Real people — and their representatives in government — don't like inflation much, you see. "Irrational nonsense!" says the vulgar Keynesian policy enthusiast. "When the price of everything goes up, your wage should rise as well. Why? Because on average, we are all sellers of something. If you work in a tea shop and the price of tea goes up, your wage can be expected to go up as well, and so forth. Remember, every dollar that one person spends becomes the income of another person!" (The latter part of that quote is Smith's.)

That sounds great but things are actually a little more complicated. In fact, the man in the street's distrust of inflation is probably better grounded than Smith's abstractions allow. Why? Because inflation has a distributive element: it does not affect all income groups equally.

Last year a friend and colleague, Javier López Bernardo, and I put together a mock report on just this issue for a class we were taking. We broke down the CPI and RPI accounts for the UK and reconstructed them to gauge how inflation might affect different income groups (actually, Javier did a good deal of the dirty work of reconstruction, so he should probably get the credit for that).

The results, while a bit rough and ready, suggest that inflation almost certainly hits lower income groups harder than higher income groups. The main reason is that food and energy inflation tends to be rather high, while the prices of cars, electronic equipment and the like tend to be either steady or falling over time. You can see this with respect to the UK in the graph below:

Inflation1

Food and energy, of course, make up a greater part of the basket of lower income households than of higher income households. So we came up with new weights for the RPI which we thought more accurately reflected the baskets of lower income households. Since we did not have access to survey data we basically had to make up the weights ourselves, but I think they are at least somewhat reasonable. Here is a list of the old weights versus the new weights:

Inflation2

And here are the results we got by comparing the RPI with the new lower income group inflation measure:

Inflation3

What we see is that the lower income group basket is more sensitive to price fluctuations than the standard inflation measure. Had we constructed a high income group basket for comparison, the lower income basket would have looked more sensitive still. This means that in times of high inflation lower income groups tend to see their incomes eroded more rapidly than higher income groups.
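
For anyone who wants to replicate the flavour of the exercise, the mechanics come down to a weighted average of category inflation rates. The rates and weights in the sketch below are invented stand-ins (the actual figures are in the charts above), but the computation is exactly the same:

    # Hypothetical category inflation rates and weights, for illustration only.
    annual_inflation = {"food": 0.05, "energy": 0.08, "durables": -0.01, "other": 0.02}

    standard_weights   = {"food": 0.12, "energy": 0.08, "durables": 0.20, "other": 0.60}
    low_income_weights = {"food": 0.25, "energy": 0.15, "durables": 0.10, "other": 0.50}

    def basket_inflation(weights):
        # An index's inflation rate is the weighted average of its components.
        return sum(weights[c] * annual_inflation[c] for c in weights)

    print(f"standard basket:   {basket_inflation(standard_weights):.2%}")    # 2.24%
    print(f"low-income basket: {basket_inflation(low_income_weights):.2%}")  # 3.35%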

It is probably not unreasonable to view this in a sort of Duesenberry fashion. What I mean by that is that people generally care not so much about their absolute income level, but rather their income level relative to others. So, if they see, for example, their income being eroded at a rate of 12% in 1990 while the average rate of erosion is about 10% and the rate of erosion of wealthier incomes is maybe 8%, then they are likely to be pretty annoyed.
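
To put the same point in explicit arithmetic: with the hypothetical figures above, the ratio of the lower income to the wealthier income falls in a single year to

$$\frac{1 - 0.12}{1 - 0.08} = \frac{0.88}{0.92} \approx 0.957,$$

a relative loss of a bit over 4%, on top of the absolute losses both households suffer.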

Of course inflation has other effects too. It redistributes income from creditors to debtors as the real value of debt is eroded. That is generally good for lower income households. But these households do not generally see this or understand it. They do see, however, that their costs are rising while the wealthy family down the road are not experiencing the same pain and this is likely to irritate them.

Keynesian policy enthusiasts should always remember that we live in a democracy, not a technocracy run by them. If they want their policies put in place they require the consensus of elected leaders, which ultimately means the consensus of the mass of citizens. For this reason it is probably not such a good idea to go around "celebrating" inflation. Indeed, it often comes across as elitist and lacking in any populist appeal, which makes it an easy target for libertarian types who prey on people's fears and misunderstandings to spread ignorance and bad ideas.

It is for this reason that I have always admired the Modern Monetary Theory (MMT) crowd. Their motto is "full employment and price stability" and they have designed a program, the Jobs Guarantee Program, to ensure just that. Theirs is an infinitely more saleable, nuanced and thought-through program than that of the vulgar money-pump Keynesian policy enthusiasts like Smith (together with many of his New Keynesian colleagues).

Posted in Economic Policy, Politics | 1 Comment

Brain-Slug Economics: Grasselli’s Project to Turn Post-Keynesian Economics into Mathematical Formalism

Brain_Slugs

The danger when mathematicians try to do economic modelling is twofold. The first problem is that they often do not have a clue about what they are doing or the object that they are trying to model. The second problem is that they often begin to mistake the model for reality and make grandiose claims about what they have achieved or will potentially achieve that ring hollow when scrutinised.

Both of these problems are amplified because economists, who are often not very strong mathematicians, assume that since the mathematician is more mathematically savvy than they are, whatever he is doing must be correct, and that any doubts they themselves harbour must be miscomprehensions. It is in this way that bad but highly mathematised economics becomes like a brain slug sitting on the forehead of good economists, leading them down blind alleys and wayward paths.

I've recently got into something of a spat with one Matheus Grasselli on the INET YSI Facebook page, a page where young economists looking for alternative approaches congregate. Grasselli is an associate professor of mathematics and is currently part of the burgeoning industry of Minsky modelling that has sprung up since the crisis. You can see a presentation of Grasselli's work here.

I heard about the work Grasselli and others were doing at the Fields Institute some time ago and I was instantly skeptical: isn't there an established tradition in Post-Keynesian economics, stretching back over 80 years, of using mathematics only in a very presentational manner? Indeed, doesn't Post-Keynesian economics generally follow the spirit laid out by Keynes (himself a mathematician by training) in the following quote from his General Theory?

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error. It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish. Too large a proportion of recent “mathematical” economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.

Keynes, and those who followed him, were aware that the mathematics used by economists should only ever be part of establishing "an organised and orderly method of thinking out particular problems". Once it was used to build giant formal models it risked moving away from the real world entirely and becoming a fetish game of "my maths is bigger than your maths". The result is schoolyard academic squabbling of the most boring and irrelevant kind.

Grasselli has fallen into both of the sins laid out at the beginning. First, he has made the claim that the Post-Keynesian concern with the non-ergodicity of economic systems, and the implications of this for modelling and empirical mathematical research, is without foundation. On the Facebook page he writes the following about what he refers to as the "ergodicity nonsense":

OK, this ergodicity nonsense gets thrown around a lot, so I should comment on it.  You only need a process (time series, system, whatever) to be ergodic if you are trying to make estimates of properties of a given probability distribution based on past data. The idea is that enough observations through time (the so called time-averages) give you information about properties of the probability distribution over the sample space (so called ensemble averages). So for example you observe a stock price long enough and get better and better estimates of its moments (mean, variance, kurtosis, etc). Presumably you then use these estimates in whatever formula you came up with (Black-Scholes or whatever) to compute something else about the future (say the price of an option). The same story holds for almost all mainstream econometric models: postulate some relationship, use historical time series to estimate the parameters, plug the parameters into the relationship and spill out a prediction/forecast.

Of course none of this works if the process you are studying is non-ergodic, because the time averages will NOT be reliable estimates of the probability distribution. So the whole thing goes up in flames and people like Paul Davidson go around repeating "non-ergodic, non-ergodic" ad infinitum. The thing is, none of this is necessary if you take a Bayes's theorem view of prediction/forecast. You start by assigning prior probabilities to models (even models that have nothing to do with each other, like an IS/LM model and a DSGE model with their respective parameters), make predictions/forecasts based on these prior probabilities, and then update them when new information becomes available. Voila, no need for ergodicity. Bayesian statistics could not care less if the prior probabilities change because they are time-dependent, the world changed, or you were too stupid to assign them to begin with. It is only a narrow frequentist view of prediction that requires ergodicity (and a host of other assumptions like asymptotic normality of errors) to be applicable. Unfortunately, that's what's used by most econometricians. But it doesn't need to be like that. My friend Chris Rogers from Cambridge has a t-shirt that illustrates this point. It says: "Estimate Nothing!". I think I'll order a bunch and distribute them to my students.
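
Mechanically, the Bayesian procedure Grasselli is describing looks something like the sketch below. The two "models", their parameters and the data point are placeholders of my own; the only substantive part is the updating step: assign priors to models, score each model's likelihood on incoming data, and renormalise.

    from math import exp, pi, sqrt

    def normal_pdf(x, mean, sd):
        return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

    # Each hypothetical model predicts next quarter's growth as a normal distribution.
    models = {"model_A": (0.5, 1.0), "model_B": (2.0, 1.5)}  # (mean, sd)
    priors = {"model_A": 0.5, "model_B": 0.5}

    observed_growth = 1.8  # an invented new observation

    # Bayes' theorem: posterior weight is proportional to prior times likelihood.
    unnormalised = {name: priors[name] * normal_pdf(observed_growth, mean, sd)
                    for name, (mean, sd) in models.items()}
    total = sum(unnormalised.values())
    posteriors = {name: weight / total for name, weight in unnormalised.items()}
    print(posteriors)  # model_B gains weight after the observation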

It is not clear that Grasselli's approach here can be used in any meaningful way in empirical work. What we are concerned with as economists is making claims about the future, ranging from the likely effects of policy to movements in markets worldwide. What Grasselli is interested in here is the robustness of his model. He wants to engage in schoolyard posturing of the form "my model is better than your model because it made better predictions". This is a very tempting path for academics because it allows them to engage in some sort of competition. That the competition is largely irrelevant matters little, so long as it provides a distracting stage on which to show off the various tricks one has learned.

Indeed, in misunderstanding the object of economic analysis — one so eloquently laid out by Keynes in the above quote — Grasselli sets up Post-Keynesian economics to become yet another stale classroom discipline with no bearing on real-world analysis whatsoever. He also risks turning out a new generation of students who cannot do any real empirical work and instead show off their mathematical prowess rather than their results. If Grasselli does order the "Estimate Nothing!" t-shirts, perhaps he should have "And Become Completely Irrelevant!" printed on the back.
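
Since the ergodicity point is so often waved away, it is worth making it concrete before moving on. The toy simulation below uses a driftless random walk, the textbook example of a non-ergodic process: the ensemble average at any date is zero, yet the time average of any single realised history is itself random and refuses to settle down. Average over the one history the world gives you and you learn very little about the underlying distribution.

    import random

    def time_average_of_one_path(steps=10_000):
        # One realised history of a driftless random walk.
        position, running_sum = 0.0, 0.0
        for _ in range(steps):
            position += random.choice([-1.0, 1.0])
            running_sum += position
        return running_sum / steps

    # Five "histories": note how wildly the time averages disagree,
    # even though the ensemble mean at every date is exactly zero.
    print([round(time_average_of_one_path(), 1) for _ in range(5)])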

The second sin Grasselli has committed has to do with the claims he makes for his models. As we will see, this sin is committed for largely the same reasons as the first. Recently — and this is what sparked off the debate on the INET page — Grasselli had his work written up in an article by a fellow mathematician. The article was aimed at investment advisers and ran with the impressive title A Better Way to Measure Systemic Risk.

As anyone familiar with the investment community will know, that title promises rather a lot. If you can measure systemic risk you can adjust your portfolio accordingly and gain a distinct advantage over the other guy. That's a big promise for investment guys; a bit of a Holy Grail, actually. I pointed out to Grasselli, however, that nowhere in the article could I see any method for measuring systemic risk actually discussed. This did not surprise me: I do not think such a thing is possible using Minsky's work, a question to which I gave quite a bit of thought about a year ago when choosing my dissertation topic.

Now, I assume that Grasselli did not himself choose the title of the article. But he must at least have given an impression to the person who did — who, remember, is a mathematician himself and not some starry-eyed journalist. So, I called Grasselli out on this and said that I didn't think he had such a measure. He countered that he did and laid out his approach. I said that he was just comparing models with one another and that this meant nothing. Here is his response (which I think is the clearest explanation of what he is doing):

I'm not comparing models, I'm comparing systems within the same model. Say System 1 has only one locally stable equilibrium, whereas System 2 has two (a good one and a bad one). Which one has more systemic risk? There's your first measure. Now say for System 2 you have two sets of initial conditions: one well inside the basin of attraction for the good equilibrium (say low debt) and another very close to the boundary of the basin of attraction (say with high debt). Which set of initial conditions poses higher systemic risk? There's your second measure. Finally, you are monitoring a parameter that is known to be associated with a bifurcation, say the size of the government response when employment is low, and the government needs to decide between two stimulus packages, one above and one below the bifurcation threshold. Which policy leads to higher systemic risk? There's your third measure.

What Grasselli is doing here is creating a model within which he can simulate various scenarios to see which produce high-risk and which low-risk environments, always within said model. But is this really "measuring systemic risk"? I don't think that it is. To call it a means of measuring systemic risk would be like my claiming to have found a way to measure the size of God, only for the incredulous people who came around to my house to see my technique to find a computer simulation I had created of what I imagine God to be, inside which I could measure Him/Her/It.
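
To see what the exercise amounts to mechanically, here is a deliberately trivial one-dimensional analogue of my own devising (not Grasselli's model) with a "good" and a "bad" stable equilibrium. "Measuring systemic risk" then reduces to asking which basin of attraction the starting point sits in:

    def step(x, dt=0.01):
        # dx/dt = -(x - 1)(x - 3)(x - 5): stable equilibria at 1 ("good", low debt)
        # and 5 ("bad", high debt), with an unstable boundary at 3.
        return x + dt * -(x - 1) * (x - 3) * (x - 5)

    def long_run(x, iterations=10_000):
        for _ in range(iterations):
            x = step(x)
        return round(x, 2)

    print(long_run(2.9))  # just inside the good basin -> settles at 1.0
    print(long_run(3.1))  # just inside the bad basin  -> settles at 5.0

Within the simulation the comparison is perfectly well defined; my point is that nothing in it touches the actual economy.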

What sounds impressive on the outside is actually rather mediocre and banal (it would also be a bit weird if it were not socially sanctioned, but I digress). It is also a largely irrelevant way to determine policy. Again, as economists what we need is a toolbox with which we can analyse particular problems, not a "maze of pretentious and unhelpful symbols" that leads the economist "to lose sight of the complexities and interdependencies of the real world", as Keynes wrote in 1936. Grasselli's technique can be used to "wow" politicians and investors, but it cannot be used to make real choices, which will always be carried out by practical, down-to-earth people with better or worse economic training.

So will the brain slug be passed around? Will Grasselli's approach be adopted by Post-Keynesians? Will the debates over ergodicity evaporate from the journals as simple misunderstandings? Will those same pages overflow with complex mathematical formulations and simulations? I doubt it. Some will buy into Grasselli's program as a passing gimmick with good funding. Some will be drawn to it thinking that by having "bigger maths" than the neoclassicals we will win the debate — like a boys' school bathroom game of a similar kind. But it will likely peter out when the results are seen to be what they will likely be: the navel-gazing of model-builders basking in self-admiration at the constructions they have built.

Or I may be completely wrong and my skepticism may be misplaced. Perhaps Grasselli's program will produce wondrous new insights into economic systems that the likes of me have never imagined. Maybe it will produce predictions about markets and economies that I could never have hoped to make without the models. If so, I will be the first to say that I was wrong and I will give Grasselli all the praise he will undoubtedly deserve. In the meantime, however, I continue to register not just my skepticism but my extreme skepticism, and I plead with those who do engage in such exercises to tone down the claims they are making lest they embarrass the Post-Keynesian community at large.

Posted in Economic Theory | 5 Comments

Micro to Macro: A Note on the Kahn Wage-Profit Multiplier

micro-macro-1

Just a quick added note on the Kahn wage-profit multiplier. As shown in the last piece, Kahn’s multiplier was concerned with the generation of employment by each extra man employed. Here once again is the complete multiplier relation (read the last post to decode the variables):

Multiplier Equation 1.5

Here's a thought, though: what if we replaced the number of men employed due to the wage-profit multiplier, k, with aggregate income, Y? And what if we replaced the wage and the profit per man employed, W and P respectively, with aggregate wages and aggregate profits? Then we replace the two multiplier terms, b and n, with the average propensity to consume out of wages and the average propensity to consume out of profits respectively. Finally, we replace imports per extra man employed, M, with imports as a function of income which, to use the standard multiplier algebra, might be mY, where m is the average propensity to import, and hey presto we have a macro wage-profit multiplier ready to use!
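
To write this out (and I stress the notation is mine, reconstructed from the substitutions above rather than lifted from Kahn), assume for illustration a constant wage share $\omega$, so that $W = \omega Y$ and $P = (1-\omega)Y$, with imports $M = mY$ and autonomous expenditure $A$. Then income must satisfy

$$Y = A + b\,\omega Y + n\,(1-\omega)Y - mY,$$

which solves to

$$Y = \frac{A}{1 - b\,\omega - n\,(1-\omega) + m}.$$

This is recognisably the Kaleckian form in which the distribution of income between wages and profits governs the size of the multiplier.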

As I said in the last post, Kahn derived the multiplier from Keynes’ equations in his Treatise on Money. The above shows not simply how close that work was to being a true work of macroeconomics, but also how close its framework was to that of Kalecki.


Posted in Economic Theory | Leave a comment