Brain-Slug Economics: Grasselli’s Project to Turn Post-Keynesian Economics into Mathematical Formalism


The danger when mathematicians try to do economic modelling is twofold. The first problem is that they often do not have a clue about what they are doing or the object that they are trying to model. The second problem is that they often begin to mistake the model for reality and make grandiose claims about what they have achieved or will potentially achieve that ring hollow when scrutinised.

Both of these problems are amplified when economists, who are often not very strong mathematicians, assume that, because the mathematician is more mathematically savvy than they are, what he is doing must be correct and that any doubts the economist has must be miscomprehensions. It is in this way that bad but highly mathematised economics becomes a potential brain slug sitting on the forehead of good economists, leading them down blind alleys and wayward paths.

I’ve recently got into something of a spat with one Matheus Grasselli on the INET YSI Facebook page, a page where young economists looking for alternative approaches congregate. Grasselli is an associate professor of mathematics and is currently part of the burgeoning industry of Minsky modelling that has sprung up since the crisis. You can see a presentation of Grasselli’s work here.

I heard about the work Grasselli and others were doing at the Fields Institute some time ago and I was instantly skeptical: isn’t there an established tradition in Post-Keynesian economics that goes back over 80 years that uses mathematics only in a very presentational manner? Indeed, doesn’t Post-Keynesian economics generally follow the spirit laid out by Keynes (a mathematician himself by training) in the following quote from his General Theory?

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error. It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish. Too large a proportion of recent “mathematical” economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.

Keynes, and those who followed him, were aware that the mathematics used by economists should only be part of establishing “an organised and orderly method of thinking out particular problems”. Once it was used to build giant formal models it risked moving away from the real world entirely and becoming a fetish game of “my maths is bigger than your maths”. The result is schoolyard academic squabbles of the most boring and irrelevant kind.

Of the two sins laid out at the beginning, Grasselli has fallen into both. First, he has made the claim that the Post-Keynesian concern with the non-ergodicity of economic systems, and the implications of this for modelling and empirical mathematical research, is without foundation. On the Facebook page he writes the following about what he refers to as the “ergodicity nonsense”:

OK, this ergodicity nonsense gets thrown around a lot, so I should comment on it.  You only need a process (time series, system, whatever) to be ergodic if you are trying to make estimates of properties of a given probability distribution based on past data. The idea is that enough observations through time (the so called time-averages) give you information about properties of the probability distribution over the sample space (so called ensemble averages). So for example you observe a stock price long enough and get better and better estimates of its moments (mean, variance, kurtosis, etc). Presumably you then use these estimates in whatever formula you came up with (Black-Scholes or whatever) to compute something else about the future (say the price of an option). The same story holds for almost all mainstream econometric models: postulate some relationship, use historical time series to estimate the parameters, plug the parameters into the relationship and spill out a prediction/forecast.

Of course none of this works if the process you are studying in non-ergodic, because the time averages will NOT be reliable estimates of the probability distribution. So the whole thing goes up in flames and people like Paul Davidson goes around repeating “non-ergodic, non-ergodic” ad infinitum.

The thing is, none of this is necessary if you take a Bayes’s theorem view of prediction/forecast. You start by assigning prior probabilities to models (even models that have nothing to do with each other, like an IS/LM model and a DSGE model with their respective parameters), make predictions/forecasts based on these prior probabilities, and then update them when new information becomes available. Voila, no need for ergodicity. Bayesian statistics could not care less if the prior probabilities change because they are time-dependent, the world changed, or you were too stupid to assign them to begin with.

It is only a narrow frequentist view of prediction that requires ergodicity (and a host of other assumptions like asymptotic normality of errors) to be applicable. Unfortunately, that’s what’s used by most econometricians. But it doesn’t need to be like that.

My friend Chris Rogers from Cambridge has a t-shirt that illustrates this point. It says: “Estimate Nothing!”. I think I’ll order a bunch and distribute to my students.
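It is easy to see in a toy simulation just how stark the failure Grasselli describes actually is. The sketch below is my own illustration, not anything from Grasselli’s work: it compares an i.i.d. process, where the time average along any single path is a good estimate of the ensemble mean, with a random walk, where it is not.

```python
# A toy illustration (mine, not Grasselli's) of time averages vs ensemble
# averages. An i.i.d. process is ergodic: one path observed long enough
# reveals the distribution. A random walk is not: each path wanders off
# on its own, so its time average says little about the ensemble.
import numpy as np

rng = np.random.default_rng(42)
T, N = 10_000, 500  # length of each path, number of simulated paths

# Ergodic case: i.i.d. normal shocks with true mean 0.
iid = rng.normal(0.0, 1.0, size=(N, T))
time_avg_iid = iid.mean(axis=1)          # one time average per path

# Non-ergodic case: cumulate the same shocks into random walks.
walk = iid.cumsum(axis=1)
time_avg_walk = walk.mean(axis=1)

# For the i.i.d. process the time averages cluster tightly around the
# ensemble mean; for the random walk they scatter wildly across paths.
print("i.i.d.: spread of time averages =", round(time_avg_iid.std(), 4))
print("walk:   spread of time averages =", round(time_avg_walk.std(), 1))
```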

It is not clear that Grasselli’s approach here can be used in any meaningful way in empirical work. What we are concerned with as economists is trying to make predictions about the future, ranging from the likely effects of policy to the moves in markets worldwide. What Grasselli is interested in here is the robustness of his model. He wants to engage in schoolyard posturing, saying “my model is better than your model because it made better predictions”. This is a very tempting path for academics because it allows them to engage in some sort of competition. That the competition is completely irrelevant matters little so long as it provides a stage on which to show off the various tricks one has learned.
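For concreteness, the Bayesian procedure Grasselli invokes looks roughly like the following sketch. This is my reconstruction of the generic recipe, not his actual code or models: rival models are given prior weights, and each new observation reweights them by likelihood. No ergodicity assumption appears anywhere, which is his point; whether the exercise means anything when none of the models on the menu resembles the world is precisely mine.

```python
# A minimal sketch (my reconstruction, not Grasselli's procedure) of
# Bayesian model comparison: two rival models of a data stream start
# with prior weights, and each observation reweights them by likelihood.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=50)    # observations arriving one by one

# Model A says the data are N(0, 1); Model B says N(1, 1).
weights = np.array([0.5, 0.5])          # prior probabilities over models
for x in data:
    likes = np.array([norm.pdf(x, 0.0, 1.0), norm.pdf(x, 1.0, 1.0)])
    weights *= likes
    weights /= weights.sum()            # posterior after this observation

print("posterior weights (A, B):", weights.round(4))  # B should dominate
```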

Indeed, in misunderstanding the object of economic analysis — one so eloquently laid out by Keynes in the above quote — Grasselli sets up Post-Keynesian economics to become yet another stale classroom discipline with no bearing on real-world analysis whatsoever. He also risks turning out a new generation of students who cannot do any real empirical work and instead show off their mathematical prowess rather than their results. If Grasselli does order the “Estimate Nothing!” t-shirts, perhaps he should have “And Become Completely Irrelevant!” printed on the back.

The second sin Grasselli has committed has to do with the claims he makes for his models. As we will see, this sin is committed for largely the same reasons as the first. Recently — and this is what sparked off the debate on the INET page — Grasselli had an article on his work written up by a fellow mathematician. The article was aimed at investment advisers and ran with the impressive title A Better Way to Measure Systemic Risk.

As anyone familiar with the investment community will know, that title promises rather a lot. If you can measure systemic risk you can adjust your portfolio accordingly and gain a distinct advantage over the other guy. That’s a big promise for investment guys; a bit of a Holy Grail, actually. I pointed out to Grasselli, however, that nowhere in the article could I see any method discussed for how to measure systemic risk. This did not surprise me, as I do not think such a thing is possible using Minsky’s work, something I gave quite a bit of thought to about a year ago when choosing my dissertation topic.

Now, I assume that Grasselli did not himself choose the title of the article. But he must at least have given that impression to the person who did, a person who, remember, is a mathematician himself and not some starry-eyed journalist. So, I called Grasselli out on this and said that I didn’t think he had such a measure. He countered that he did and laid out his approach. I said that he was just comparing models with one another and that this meant nothing. Here is his response (which I think is the clearest explanation of what he is doing):

I’m not comparing models, I’m comparing systems within the same model.

Say System 1 has only one locally stable equilibrium, whereas System 2 has two (a good one and a bad one). Which one has more systemic risk? There’s your first measure.

Now say for System 2 you have two sets of initial conditions: one well inside the basin of attraction for the good equilibrium (say low debt) and another very close to the boundary of the basin of attraction (say with high debt). Which set of initial conditions pose higher systemic risk? There’s your second measure.

Finally, you are monitoring a parameter that is known to be associated with a bifurcation, say the size of the government response when employment is low, and the government needs to decide between two stimulus packages, one above and one below the bifurcation threshold. Which policy lead to higher systemic risk? There’s your third measure.
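To see what these three “measures” amount to in practice, consider a deliberately crude sketch. The system below is a toy of my own making, not Grasselli’s actual model: a one-dimensional dynamic with a “good” equilibrium at 1 and a “bad” one at 0, separated by a basin boundary at a policy-dependent threshold theta.

```python
# Toy sketch (mine, not Grasselli's model). The system
#     dx/dt = x * (x - theta) * (1 - x)
# has stable equilibria at x = 0 ("bad") and x = 1 ("good"), separated
# by an unstable equilibrium at x = theta: the basin boundary. The three
# "measures" then read: count the stable equilibria, check how close the
# initial condition sits to theta, and watch the basins as policy moves theta.

def settle(x, theta, dt=0.01, steps=20_000):
    """Crude Euler integration until the state settles at an equilibrium."""
    for _ in range(steps):
        x += dt * x * (x - theta) * (1 - x)
    return round(x, 3)

theta = 0.5                         # basin boundary under current policy
print(settle(0.45, theta))          # just inside the bad basin  -> 0.0
print(settle(0.55, theta))          # just inside the good basin -> 1.0
print(settle(0.45, theta=0.3))      # policy shifts the boundary -> 1.0
```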

What Grasselli is doing here is creating a model in which he can simulate various scenarios to see which will produce high-risk and which low-risk environments within said model. But is this really “measuring systemic risk”? I don’t think that it is. To say that it is would be like me claiming to have found a way to measure the size of God, only for the incredulous people who came around to my house to see my technique to find a computer simulation I had created of what I think God to be, within which I could measure Him/Her/It.

What sounds impressive on the outside is actually rather mediocre and banal (it would also be a bit weird if it were not socially sanctioned, but I digress). It is also a largely irrelevant way to determine policy. Again, as economists what we need is a toolbox with which we can analyse certain problems, not a “maze of pretentious and unhelpful symbols” that leads the economist “to lose sight of the complexities and interdependencies of the real world”, as Keynes wrote in 1936. Grasselli’s technique can be used to “wow” politicians and investors, but it cannot be used to make real choices, which will always be made by practical, down-to-earth people with better or worse economic training.

So will the brain slug be passed around? Will Grasselli’s approach be adopted by Post-Keynesians? Will the debates over ergodicity evaporate from the journals as simple misunderstandings? Will the same pages fill with complex mathematical formulations and simulations? I doubt it. Some will buy into Grasselli’s program as a passing gimmick with good funding. Some will be drawn to it thinking that by having “bigger maths” than the neoclassicals we will win the debate, like a schoolboys’ bathroom game of a similar kind. But it will likely peter out when the results are seen for what they will likely be: the navel-gazing of model-builders basking in self-admiration at the constructions they have built.

Or I may be completely wrong and my skepticism may be misplaced. Perhaps Grasselli’s program will produce wondrous new insights into economic systems that the likes of me have never imagined. Maybe it will produce predictions about markets and economies which I could never have hoped to make without the models. If such is the case I will be the first to say that I was wrong and I will give Grasselli all the praise that he will undoubtedly deserve. In the meantime, however, I continue to register not just my skepticism but my extreme skepticism, and plead with those who do engage in such exercises to tone down the claims they make lest they embarrass the Post-Keynesian community at large.

About pilkingtonphil

Philip Pilkington is a macroeconomist and investment professional writing about all things macro and investment. Views his own. You can follow him on Twitter at @philippilk.

5 Responses to Brain-Slug Economics: Grasselli’s Project to Turn Post-Keynesian Economics into Mathematical Formalism

  1. PeterP says:

    I had a long “debate” with Mr Grasselli. I think he doesn’t understand accounting so his math will lead him astray. Not his fault, he wrote the paper with Steve Keen who SHOULD know better. The key equation in their paper is:

    expenditure = income + change in debt.

    which makes no sense at all, because income = expenditure in every transaction. Debt *finances* the expenditure but cannot change how transactions are accounted for with double entry bookkeeping.
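    One way to put the objection in symbols (a restatement of the double-entry point, not a quote from the paper):

    ```latex
    % Every transaction i records a single amount, once as the buyer's
    % expenditure and once as the seller's income, so over any period:
    \[
      E_i \equiv Y_i \ \text{for each transaction } i
      \quad\Longrightarrow\quad
      E = \sum_i E_i = \sum_i Y_i = Y,
    \]
    % leaving no room for an extra \Delta D term in the recorded flows:
    % debt changes how expenditure is financed, not the identity itself.
    ```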

    http://fieldsfinance.blogspot.com/2012/10/of-course-its-model-duh-final-post-on.html

    • Matheus Grasselli says:

      Good to hear from you again PeterP. I remember our conversation very well and the positive impact that it had on my work with Steve. Namely, Steve came to me with a very specific question: how to come up with a model in which change in debt had a role in the interplay between currently received income and currently planned expenditure (as suggested by Minsky) and yet satisfied the accounting identity that over any finite period any recorded income has to equal recorded expenditure. What we came up with was a toy example where this happened, based on a discontinuous injection of debt into an otherwise continuous flow of income. Both the mathematics (integrals and all) and the accounting (recorded income = recorded expenditure) check out, but you (and Ramanan) objected to the modelling methodology, whereby you argue that a model has to take into account (no pun!) not only the accounting identities over a finite period, but also each individual transaction. This is a valid point, but it’s a methodological one, not something that validates superficial statements of the sort “he doesn’t understand the accounting so his math will lead him astray”.
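      Schematically, and glossing over the details of the actual paper, the construction was of the following kind:

      ```latex
      % A rough gloss (not the paper's actual equations): a continuous
      % income flow y(t) plus a one-off debt injection D at time t_0 in
      % [0, T] gives expenditure over the period
      \[
        E_{[0,T]} \;=\; \int_0^T y(t)\,dt \;+\; D ,
      \]
      % and since the injected D is spent, and hence received as income,
      % within the same period, recorded income over [0, T] sums to the
      % same total: the identity E = Y holds for the period even though
      % planned expenditure momentarily exceeded the income flow at t_0.
      ```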

      It’s like saying that someone cannot use a mean-field approximation in statistical mechanics because in reality all interactions occur between a particle and all its neighbours (close or distant), and when you look at things on a particle-by-particle basis there is no such thing as a mean field, which is a modelling construct. Right or wrong, this is a methodological point, and should not lead to the conclusion that someone using a mean-field approximation “does not understand particles”.

      As it happens, I do think that the example we had in the paper is too artificial and should not be used as the basis for an entire theory of investment as Minsky had hoped, so Steve, myself, and others (Bezemer, Hudson, etc.) are actively working on alternative modelling approaches.

      So again, thank you for the insight, but please refrain from easy shots like “he doesn’t understand accounting”. This sounds great in a shouting match, but it’s not how scholarly discussions should be conducted.

  2. Ramanan says:

    I was also in the same debate and agree with PeterP.

    In the end the usage of Lebesgue Integration proves nothing.

    For starters, when economists say aggregate demand, it is aggregate demand (in whichever definition) for goods and services. It is incorrect to change the definition to include financial assets and insist that the new thing is aggregate demand, because aggregate demand was demand for goods and services in the first place. So Grasselli’s definitions are wrong to begin with.

    I mention that because, as you directly or indirectly say, economic intuition should come first – rightly so.

    In this case there was a misunderstanding of what “aggregate demand” is in the first place, and hence a lack of appreciation of economic intuition.

    I don’t think he and Keen have changed the definitions despite people repeatedly pointing out the errors.

    • Matheus Grasselli says:

      Hello to you too Ramanan, and thank you also for your contribution to my own understanding of stock-flow consistent models. I have used your blog multiple times as a source for definitions and examples, and now make a habit of using transaction matrices/balance sheets/flow-of-funds tables in all my papers and presentations, many of them based on yours, so credit where credit is due.

      But as in my reply to PeterP, an easy shot like accusing me of a “lack of appreciation of economic intuition” is neither true nor helpful. I place great emphasis on gaining as much intuition as I can, and you will not find ANY published work of mine where I say that I don’t appreciate it.

      Regarding the specific example you gave: where you draw the line in the list of possible transactions to say what counts and what does not count as “aggregate demand” (i.e., the place in your tables where you decide to put the [accounting memo – GDP]) is a matter of convention, not of economic intuition. In fact, card-carrying economists like Michael Hudson are doing active work to change the way one interprets NIPA data to highlight the weight of the FIRE sector in the economy.

      So as I said to PeterP, thank you for the insight, but please refrain from easy shots like “lack of appreciation for economic intuition” as if I was the only imbecile out there trying to change definitions at will without paying attention to what economists are saying.

      • Ramanan says:

        Matheus,

        Thanks for your response.

        In national accounts, aggregate demand is a short-form for “aggregate demand for goods and services”. Now, you may not like it, but why insist that aggregate demand (which is a short-form for aggregate demand for goods and services) includes demand for financial assets? It is true that debt is important, but you can’t simply add debt to something which has a specific meaning.

        Your approach comes across as something like: “the standard definitions are wrong, here is the right definition”, rather than saying “in my approach, aggregate demand is this”.

        [An important point – and it is these small niceties which are sometimes important – is that aggregate demand is defined so that it is not equal to aggregate supply, the difference being the change in inventories. This definition is from Kaldor/Hicks/Godley, but this is a digression here.]

        An analogy might help, since you have a physics background. Velocity is defined as dx/dt (in the vector form) and a body under a force changes velocity, but you don’t add force to the fundamental definition of velocity. Velocity is still dx/dt. A particular solution to the equations may involve the force, such as for a charged particle under the influence of an electric force. The solution has v on the left-hand side and F or E on the right. But v is still fundamentally dx/dt. Also, and equally importantly, the solution involves things such as mass and charge (for a given electric field). Back to macro: debt influences demand, but you need some parameters around it rather than simply adding debt. Now switch back to physics: what you are saying is like saying velocity is dx/dt plus (a parameter times) force.
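        In symbols:

        ```latex
        % The definition never changes: velocity is the time derivative
        % of position.
        \[ \vec{v} \equiv \frac{d\vec{x}}{dt} \]
        % A particular solution may involve the force, e.g. a charge q of
        % mass m in a constant electric field E, with v on the left and
        % E on the right:
        \[ m\,\frac{d\vec{v}}{dt} = q\vec{E}
           \quad\Rightarrow\quad
           \vec{v}(t) = \vec{v}_0 + \frac{q\vec{E}}{m}\,t \]
        % The move being objected to is the analogue of redefining
        % (lambda here is just an arbitrary parameter)
        \[ \vec{v} \stackrel{?}{=} \frac{d\vec{x}}{dt} + \lambda\,\vec{F}, \]
        % i.e. folding a behavioural relation into a definition.
        ```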

        I know analogies are rarely good but doesn’t stop anyone from giving them 🙂

        A slightly bad thing about accounting models is that you really need to get the accounting right. A small error can produce strange results. Also, I remember you saying you are not doing any accounting, but these concepts are ultimately accounting concepts. Accounting is not simply bean-counting – it is a framework for being rigorous. Of course too much rigour is not necessarily good, but some amount is required, else two people can talk past each other.

        The reason I mentioned economic intuition is that while modelling one has to first understand what the definition is and then use behavioural assumptions. Your approach came across as (sorry, analogy again) a physicist changing some mathematical theory. You can do that, but first you have to carefully change the mathematical structure so that it is self-consistent. The SNA people over the past 60-80 years have been very careful to make things self-consistent.
