Proud to Be a Nihilist: Bill Mitchell on Econometrics and Numerical Prediction


Someone said to me a while back: “Phil, you are always railing against econometrics but some of the MMT guys and quite a few Post-Keynesians maintain that these techniques are useful and valid”. I recognise this full well, actually; it preoccupies me perhaps more than it should. Indeed, Post-Keynesians are using econometrics with increasing frequency — and at the very same time they are becoming increasingly interested in highly abstract modelling. I’m not a big fan of this trend, as readers of this blog will probably have guessed.

Recently I had the good fortune to stumble upon a paper by the MMT economist Bill Mitchell entitled Econometrics, Realism and Policy in Post-Keynesian Economics. It is a defence of econometrics that is not only very good but also takes a number of different perspectives on the matter. So, I thought it might be productive to deal with it in some detail here.

In the paper Mitchell confronts the realist ontology. This ontology is best summed up, I think, by distinguishing closed from open systems. A good example of a closed system is a controlled scientific experiment. By setting the experiment up so that its behaviour is stable through time (ergodic) and it is not interfered with by outside forces, the experimenter ‘closes’ the system upon itself. For realists, any data then generated by this experiment can reliably be used to make inferences about the future.

An open system, on the other hand, is open to change, fluctuation and the emergence of new trends. Nor is it sealed off from interference by outside forces. The realists think that open systems are what we generally deal with in the social sciences, including economics. We cannot reliably use data generated in such open systems to make predictions about the future because, for example, although inflation and wages may be strongly correlated over a certain time period they may not be in the next time period.

In a closed system experiment this would discredit the view that there is a relationship between wages and inflation because we would have data that falsified this hypothesis. But in economics we cannot falsify the statement that there is such a relationship because such a relationship may exist in some historical time periods but not in others. Clearly we are dealing with very different materials in such open systems than we are in closed systems.

In open systems we cannot establish any timeless laws, for example, and we also cannot definitively disprove an economic theory because the relationship it posited did not hold in a given historical time period. More extreme still, we cannot make valid inferences about the future based on past data. This is the nature of non-ergodic, open systems according to the realists.

Mitchell then goes on to say that realists like Tony Lawson do not understand what is being done when econometricians study time series data to draw inferences. He characterises Lawson’s position by quoting him in the original,

Econometricians seem universally to report their results as if they interpret themselves as working within the falsificationist bold predictions framework. (p10)

Mitchell responds to this by saying that econometrics does not test theories which, he seems to agree, are untestable. He writes,

Applied econometrics does not test economic theories. Economic theories are untestable. Intriligator (1978: 14) says an econometric model ‘is any representation of an actual phenomenon such as an actual system or process … Any model represents a balance between reality and manageability.’ While the data generating process (DGP) is held to be a true process, a model is considered fallible, it cannot be true because to make the process tractable a marginalisation of the DGP has to be made. A theory of the DGP might be true, but such speculation is futile because there is no way of telling. A model of the theory is false by definition. Hendry (1983: 70) distinguishes the DGP (the mechanism) which is true and unique, from the simplified representation of the DGP (the model) which is non-unique.

An econometric model is specified in terms of theoretically-motivated variables and applied to some data. These specific representations contain hypotheses which can be tested. Based on visible criteria, a particular representation can claim to be the most adequate current picture of the DGP. There can be an ordering of representations, some more adequate than others. All representations are tentative and time dependent. A Post Keynesian econometrician would only aspire to empirically adequate and hence tentative representations of theoretical posits which have satisfied a range of currently accepted diagnostic criteria. (p10)

I would characterise this statement as a fairly honest attempt at using econometrics. Many, however, do not use econometrics in this manner. Rather they seek True models that remain True perhaps forever into the future (there are now some people doing Post-Keynesian work who adhere to such a view, incidentally). Rather strangely, Mitchell later claims that the realists themselves are the ones seeking “ultimate laws” (p14) — I think he is here confusing the realist approach to a closed system, which according to them will have ultimate laws, with the realist approach to an open system, which will not.

In the above paragraph Mitchell seems to eschew such an approach and say that it is impossible. He opts instead for a “tentative and time dependent” approach. But later in the paper he will claim that models can be used to make accurate numerical predictions that will remain valid in the future. This is, in fact, philosophically identical to the claim that you can test an economic theory — it is just a matter of degree.

Mitchell writes,

When an econometrician thinks of an estimated model, he/she does not think in terms of a natural or physical model established in an experimental context. Hendry (1983: 72) says that we rather attempt to establish a ‘conjectural’ degree of stability for the sample of available data. Clearly, if we find evidence of instability, then the conjecture is problematic. However, we tentatively accept a concept of time dependent stability if our model displays within-sample stability (defined by conventional likelihood tests). (p15)

This stability, Mitchell says, can then be projected into the future. But this merely avoids the problem.

Imagine for a moment that we had ten years of time-series data on interest rates and investment. In these ten years the interest rate had risen on average 1% every two years. Now, it is found that the interest rate is not having a great deal of effect on investment — say, every 1% rise is correlated with a 0.25% fall in investment. Following this, Mitchell would say that, assuming stability, if interest rates rise in the future they will not have a great deal of impact on investment. Further still, he could also make a numerical claim that for every 1% rise in interest rates there would be a 0.25% fall in investment moving into the future.

Now imagine that the central bank raises interest rates by 15% over a very short time period. By Mitchell’s calculations this would cause a 3.75% fall in investment. But what we actually see is, say, a 20% fall in investment. This is intuitively obvious, of course, because while small increases in interest rates may not have significant impacts on investment, large increases do. The magnitude of the increase matters — and this cannot be understood by looking at time series data that does not contain such a large rise.
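To make the logic of this objection concrete, here is a minimal sketch in Python. All of the numbers and the ‘true’ response function are invented for illustration — this is not Mitchell’s procedure, just a toy showing how a linear coefficient estimated on small movements says nothing about large ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'true' world: small rate rises barely dent investment,
# large rises hit it hard (a nonlinear response, invented for illustration).
def true_investment_response(rate_change):
    return -0.25 * rate_change - 0.08 * np.maximum(rate_change - 2.0, 0.0) ** 2

# Ten years of observed data in which rates only ever moved a little.
observed_changes = rng.uniform(0.0, 1.0, size=40)                # % point rises
observed_responses = (true_investment_response(observed_changes)
                      + rng.normal(0.0, 0.05, size=40))          # noise

# The kind of linear model an econometrician might estimate in-sample.
slope, intercept = np.polyfit(observed_changes, observed_responses, 1)
print(f"estimated effect of a 1% rise: {slope:.2f}% change in investment")

# Extrapolate to a 15% rise that never appears anywhere in the sample.
big_rise = 15.0
predicted = intercept + slope * big_rise
actual = true_investment_response(big_rise)
print(f"model predicts {predicted:.1f}%, 'true' response is {actual:.1f}%")
```

The in-sample fit is perfectly respectable; the out-of-sample prediction is wildly wrong, simply because the sample contains no information about what large movements do.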

Put more theoretically, the reason for this is that the time series data is made up of non-homogeneous events. It is an open system. In an experiment we could just tinker with the control variables, raising interest rates now by 1%, now by 15%, now by 100%, and we could crunch out a numerical prediction as to what future rate rises will do — perhaps within different bands and so forth.

But we cannot do this with the economic data as it is of a completely different nature. Feeding economic data through a computer will not be able to capture what the controlled experiment captured. In fact, these are only the beginnings of the problems with this approach but I don’t have space to cover them here. Keynes knew this well, writing in his famous screed against econometrics,

For, owing to the wide margin of error, only those factors which have in fact shown wide fluctuations come into the picture in a reliable way. If a factor, the fluctuations of which are potentially important, has in fact varied very little, there may be no clue to what its influence would be if it were to change more sharply. There is a passage in which Prof. Tinbergen points out (p. 65), after arriving at a very small regression coefficient for the rate of interest as an influence on investment, that this may be explained by the fact that during the period in question the rate of interest varied very little. (p567)

Mitchell goes on to argue that if we don’t agree that econometrics can churn out numerical estimates we are being “nihilistic”. I have heard this claim many times before. It is a strange use of the term indeed — which is supposed to mean “a belief in nothing”. But one of the key things that critics are insisting on is simply that numerical prediction is pretentious. This does not constitute a “belief in nothing”.

If I am starting a business or a relationship can I be certain it will work? No. Can I make numerical estimates of future cash-flows or time spent together? No. Does this make me a “nihilist”? Of course not. Likewise, if I advocate a policy by a government the exact effects of which I don’t know am I being a fool? I don’t think that I am.

In all of these cases I weigh up the arguments as best I can and proceed. Will I always be correct? No, I am far from infallible. But if I am open-minded and able to scrutinise my beliefs in light of new evidence I would like to think that I will be correct a great deal more of the time than the other guy. Obviously, if I spot an opportunity to build housing at the start of a housing boom I am in a far stronger position to make money than I am if I have a dream one night that people will buy into a new and obscure fashion trend where people wear large rubber ducks on their heads (although you never know!).

When dealing with non-ergodic time we simply have to use our judgement. Mitchell does this all the time on his own blog. And most of the time it is, so far as I can see, pretty spot on. Is his or my judgment science? No. But why do people today feel the need to mimic science in spheres of life where it does not apply? That, I would say, has something to do with the ideology of our time. But that is a topic that would take us too far today.

I would say though, that when you hear the word “nihilist” thrown around in such discussions perhaps what it really means is something like “someone who denies the Truth of the current cultural value system under which we live, one that values opinions expressed in quasi-scientific and numerical terms over opinions expressed more contingently and, perhaps, more honestly”. If that is the meaning we should give the word in such a context I would say, paraphrasing the late Martin Luther King, that I’m proud to be a nihilist.


Ergodicity Versus History: A Critical Commentary on the Work of Ole Peters


Lars Syll linked to a fantastic interview with the mathematician Ole Peters the other day that dealt with the topic of ergodicity and how it relates to economic and financial markets. First, a comment on the source.

The interview was conducted by Michael Mauboussin who is currently the Managing Director and Head of Global Financial Strategies at Credit Suisse but who was working with a hedge fund called Legg Mason Capital Management at the time of the interview. The latter firm were the ones who published the interview.

The reason I call attention to this is that I think people working in the financial industry ‘get’ the fact that economic processes are non-ergodic far, far better than most economists. This is because they follow market trends on a daily basis, and anyone with any honest experience of this will appreciate that there is simply no way to believe that financial markets are ergodic if you experience them in real time.

Take a simple example: what was the biggest news for the financial markets in the past 12 months? Undoubtedly, the Fed’s tapering program. Now, I’ve argued before that while the tapering program will likely have little impact on the real economy it has a massive impact on financial markets. But here’s the thing: you cannot quantify the taper. When Bernanke announced the taper the financial market reacted instantaneously. And yet there is no immediate data-point for a Fed announcement so we cannot include this in a model except in the most arbitrary and farcical manner.

Further than this, and more fundamentally, the taper is a properly unique historical event. It is like, say, a defeat in an historical battle or the results of an election. It is not part of some deterministic, mechanical process — i.e. it is not subject to some law-like dynamic. Rather it is something that happens once, at one moment in time, has highly contingent effects and cannot be reversed. It is not a shock to a system otherwise in equilibrium — as when a pendulum is disturbed, swings erratically for a time and then returns to equilibrium — rather it is part of an unfolding, diverse and novel process. In short, such events are more like moments in a narrative — in a film or a novel — than they are like logical moments in a mechanical system.

At the heart of this is, of course, the question of ergodicity. In this regard Peters gave what is perhaps the clearest exposition of ergodicity I have seen. I shall, for this reason, repeat it in full here.

In an ergodic system time is irrelevant and has no direction. Nothing changes in any significant way; at most you will see some short-lived fluctuations. An ergodic system is indifferent to its initial conditions: if you re-start it, after a little while it always falls into the same equilibrium behavior. For example, say I gave 1,000 people one die each, had them roll their die once, added all the points rolled, and divided by 1,000. That would be a finite-sample average, approaching the ensemble average as I include more and more people. Now say I rolled a die 1,000 times in a row, added all the points rolled and divided by 1,000. That would be a finite-time average, approaching the time average as I keep rolling that die. One implication of ergodicity is that ensemble averages will be the same as time averages. In the first case, it is the size of the sample that eventually removes the randomness from the system. In the second case, it is the time that I’m devoting to rolling that removes randomness. But both methods give the same answer, within errors. In this sense, rolling dice is an ergodic system. I say “in this sense” because if we bet on the results of rolling a die, wealth does not follow an ergodic process under typical betting rules. If I go bankrupt, I’ll stay bankrupt. So the time average of my wealth will approach zero as time passes, even though the ensemble average of my wealth may increase. A precondition for ergodicity is stationarity, so there can be no growth in an ergodic system. Ergodic systems are zero-sum games: things slosh around from here to there and back, but nothing is ever added, invented, created or lost. No branching occurs in an ergodic system, no decision has any consequences because sooner or later we’ll end up in the same situation again and can reconsider. The key is that most systems of interest to us, including finance, are nonergodic. (p2)
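Peters’ dice example is easy to reproduce. The following sketch computes the two averages side by side and then shows wealth becoming non-ergodic under a multiplicative betting rule; the betting odds are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ensemble average: 1,000 people each roll one die once.
ensemble_avg = rng.integers(1, 7, size=1_000).mean()

# Time average: one person rolls a die 1,000 times in a row.
time_avg = rng.integers(1, 7, size=1_000).mean()

print(ensemble_avg, time_avg)   # both close to 3.5 — rolling dice is ergodic

# Now bet on the rolls under a multiplicative rule (odds invented for
# illustration): wealth rises 50% on a roll of 4-6 and falls 40% on 1-3.
rounds, players = 50, 10_000
rolls = rng.integers(1, 7, size=(players, rounds))
factors = np.where(rolls >= 4, 1.5, 0.6)
final_wealth = factors.prod(axis=1)

# The expected (ensemble) growth factor per round is 0.5*1.5 + 0.5*0.6 = 1.05,
# so average wealth grows; but the time-average factor is sqrt(1.5*0.6) ~ 0.95,
# so the typical individual trajectory shrinks. Wealth here is not ergodic.
print(final_wealth.mean())       # well above 1, pulled up by a lucky few
print(np.median(final_wealth))   # well below 1: the typical player loses
```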

Although every word in Peters’ passage is valuable, if I were forced to highlight one sentence it would be the following: “Ergodic systems are zero-sum games: things slosh around from here to there and back, but nothing is ever added, invented, created or lost.” Now, let’s compare that with my characterisation of the structure of economic processes and financial markets as being similar to narratives.

In a narrative something is always added, invented, created and lost. Processes that have a narrative structure — that is, any and all historical processes — are by their very nature novel. They are the unfolding of novel events. If they were not, and they were simply instances of past happenings projected into the future — as are ergodic processes — we could generate narrative structures by applying modelling techniques. So, we could use sophisticated modelling programs to generate historical and even fictional narratives. Computer programs could be built that would both make films and produce new historical research.

I have no doubt that some people think that this can be done — or will eventually be able to be done. But they are crackpots and cranks. And, as I have pointed out before, those who tackled this problem most directly — that is, those working in Artificial Intelligence (AI) research — have basically conceded today that their ability to produce systems that mimic human intellectual processes is extremely limited. With that, a comment or two on the rest of the Peters interview (I fear I will lose most of my audience from here, but however…).

Peters, quite sensibly, says that when building a portfolio in finance we should not use techniques that utilise “parallel universes” but rather techniques that try to imagine behavior through time in relation to given risks. The problem with the parallel universes picture, as I’ve pointed out before, is that it assumes normal or Gaussian distributions while financial markets do not look like this. Peters thinks that the solution lies in time-average maximisation — his way of trying to imagine behavior through time in relation to given risks,

In contrast [to the parallel universes idea], time-average maximization (geometric mean for multiplicative dynamics) doesn’t assume anything about the distributions. You stick in whatever distribution you like, crank the handle, and out comes your optimal investment strategy. (p8)
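To give a flavour of what this ‘crank the handle’ recipe amounts to in the simplest case, here is a sketch: one picks the leverage that maximises the time-average (i.e. expected logarithmic) growth rate of wealth, Kelly-style. The return distribution below is entirely made up; the point is only the logic of maximising growth along a single trajectory through time rather than across an ensemble of parallel outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up distribution of per-period returns on some risky asset.
returns = rng.normal(loc=0.05, scale=0.20, size=100_000)

def time_average_growth(leverage, returns):
    """Expected log growth per period at a given leverage (Kelly-style)."""
    wealth_factors = 1.0 + leverage * returns
    wealth_factors = np.clip(wealth_factors, 1e-12, None)  # ruin is ruin
    return np.mean(np.log(wealth_factors))

leverages = np.linspace(0.0, 3.0, 301)
growth = [time_average_growth(l, returns) for l in leverages]
best = leverages[int(np.argmax(growth))]
print(f"leverage that maximises time-average growth: {best:.2f}")
```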

I think that there’s a problem with this. As I noted in the above linked piece the distribution of any given financial market is going to be different every time you take a reading of it. So, it cannot be projected forward. This gets back to the fact that financial markets and economic processes are characterised by novel events. The past cannot be projected forward to give us a picture, as if on a cinema screen, of the future.

This is the same problem encountered by enthusiasts of Bayesian methods in economic modelling: which priors do they use to confront the future if the future does not mirror the past? And if they posit arbitrary priors and keep updating them will they simply find themselves engaged in a perpetual and interminable regress, constantly searching for a formula that does not yet exist?

In actual fact, Peters recognises this to some extent. He says that the whole discussion of Gaussian distribution is an enormous distraction (I said the same thing in the above linked piece…). But then he gets bogged down in a rather technical issue — namely, the question of optimal leverage — and loses sight of the bigger picture. I don’t blame Peters in this regard. He is, after all, simply trying to navigate the market and build portfolios and I suspect that his advice on optimal leverage is sound. But he does not really deal in the interview with the Pandora’s Box that he has opened.

At the end of the interview Peters suggests that his thoughts on leverage — which are purely technical — have some bearing on the question of the housing market. We should read this, I assume, to imply that he is saying something meaningful about the 2008 financial crisis when he discusses optimal leveraging. Let me be categorical here: he is not. Peters is really not saying much of interest with regard to the financial crisis.

I don’t say this to denigrate Peters’ work; indeed, I am really glad to see a clearly articulate and intelligent mathematician talking about issues of ergodicity. But the fact is that the 2008 financial crisis was one generated at a properly macroeconomic level. The excess of leverage that we saw in the banking and financial system was truly an effect, a symptom, not a cause.

The real causes were to do with the new trade regimes that were established in the past three decades and the income inequality that accompanied them together with extremely confused government policies geared toward running public sector surpluses — all set against the backdrop of a deregulated credit system that expanded enormously to fill a macroeconomic black hole. These processes cannot be described by modelling. They are inherently historical and political.

Again, I am not saying that Peters thinks that fancy mathematical techniques can answer these questions. But I do think that less intelligent people will try to draw this conclusion from his work. Economists loathe the idea that their discipline is not a clean science but rather an historical discipline intimately intertwined with politics. Most of them would love to hear that the whole thing can be explained by some silly geese mistaking a time-average for an ensemble-average.

Equally well, this is music to the ears of certain financial technicians who think that they can provide a quick fix for the moribund financial industry. But these are merely illusions that will distract from the real issues facing us today. And I suspect that even finance, with its recent macro-political turn in the light of the chaos in the world economy since 2008, is beginning to wake up to this.


What is the Monetarist Position on Fiscal Deficits and is it Similar to Krugman’s?


In my previous post I showed that Krugman’s recent piece on Argentina completely glossed over the data in its assertions that the inflation in that country was due to fiscal deficits**. I also, somewhat offhandedly, referred to his argument as being ‘monetarist’. This caused some degree of confusion so I thought I should probably clear it up.

Okay, so let’s first try to get a grip on the monetarist position on deficits. This was most clearly brought out in the debates during the late-70s and early-80s under Thatcher in Britain. Note that in this period the fiscal deficit was referred to as the PSBR (Public Sector Borrowing Requirement).

In his seminal account of the monetarist era in Britain, The Scourge of Monetarism, Nicholas Kaldor lays out clearly the monetarist position on the relationship between fiscal deficits, inflation and the money supply.

In the Green Paper on Monetary Control of March 1980 it is asserted that “it is sometimes helpful to examine how a particular control will affect items on the asset side of the banking system”. The Paper then proceeds to state an accounting identity which shows the change in the money stock (£M3) in a given period as the sum of five separately identified items, of which the PSBR is one… The main monetarist thesis is that the net dissaving of the public sector is ‘inflationary’ in so far as it is ‘financed’ by the banking system and not by the sale of debt (bonds or gilts) to the public. (pp48-49)
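For readers unfamiliar with the accounting being referred to, the ‘counterparts’ identity in question was roughly of the following form — this is my reconstruction of the standard presentation from the period, not a quotation from the Green Paper:

```latex
\Delta(\pounds M3) \;=\; \text{PSBR}
  \;-\; \text{net sales of public sector debt to the non-bank private sector}
  \;+\; \text{sterling bank lending to the private sector}
  \;+\; \text{external and foreign currency counterparts}
  \;-\; \text{increase in banks' non-deposit liabilities}
```

The monetarist claim was then that the first term feeds into £M3 growth to the extent that it is not offset by the second.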

Kaldor’s account is a rather nice and clear view of the monetarist theory of the fiscal deficit. If the deficit is financed by selling debt to the public it ‘crowds out’ private investment by driving up interest rates. But if it is funded by the banking system — i.e. the central bank — it is inflationary. Sadowski, for example, completely missed this distinction when he wrote in response to my piece,

Pilkington is evidently drawing from a peculiar rewriting of the history of the Thatcher years, when the UK’s public sector borrowing requirement (PSBR) was targeted for reduction as a means of holding down interest rates. No form of monetarism, not even an imaginary special Thatcherian variation, believes that inflation is a fiscal phenomenon.

Both parts of Sadowski’s arguments are completely incorrect. First of all, in the Thatcher years the monetarists closely watched to what extent the fiscal deficits were being funded by the central bank and chalked these up as a major cause of inflation. Second of all, Really Existing Monetarism under Thatcher did indeed believe that the (monetised) fiscal deficits added — by identity — to the M3 money supply and thus to inflation. It is Sadowski that is rewriting history by asserting otherwise.

Now, how does this monetarist argument regarding the relationship between (monetised) fiscal deficits, the money supply and inflation square with Krugman’s? Well, let’s turn to his piece on Argentina to see.

Running deficits and printing lots of money are inflationary and bad in economies that are constrained by limited supply; they are good things when the problem is persistently inadequate demand. (My Emphasis)

Do you see that? That is identical to the monetarist argument made in the late-70s and early-80s. And that is why I called Krugman’s argument ‘monetarist’… because it is monetarist!

So, what was the Keynesian position in those years? I don’t want to run through this in too much detail as it is enormously complex but Kaldor gives us a taste.

[The monetarist view] ignores the fact that the net saving, or net acquisition of financial assets of the private sector will be the same irrespective of whether it is held in the form of bank deposits or bonds. The part of the current borrowing of the public sector which is directly financed by net purchases of public debt by the banking system — and which has its counterpart in a corresponding increase in bank deposits held by the non-banking sector — is just as much part of the net saving of the private sector as the part which is financed by the sale of gilts to the private sector. (p49)

Now, I don’t want to get into a debate regarding the truth of the above statement, that is not my point here. My point is simply that Krugman’s argument is far closer to the monetarist position than to the Keynesian position. Anyone who states otherwise is either ignorant of these debates or does not understand the implications of endogenous money theory sufficiently well. And that is the end of the story.

Update: Thanks to Nick Edmonds who pointed to the following document in the comments section. The document reads:

It is not the intention to achieve this reduction in monetary growth by excessive reliance on interest rates. The Government is therefore planning for a substantial reduction over the medium-term in the Public Sector Borrowing Requirement (PSBR) as a percentage of Gross Domestic Product (GDP). The relationship between the PSBR and the growth of money supply is important but is not a simple one; it is affected by the economic cycle, the rate of inflation and the structure of the tax and public expenditure flows generating the borrowing requirement. But although the relationship between the PSBR and £M3 is erratic from year to year, there is no doubt that public sector borrowing has made a major contribution to the excessive growth of the money supply in recent years. (p16 — My Emphasis)

________

** Note that Mark Sadowski questioned the data I provided and supplied IMF data that was somewhat different. Although I am suspicious of the IMF data — since Argentina basically told the institution to shove it in 2001-2002, it has every incentive to exaggerate the fiscal deficit — even if we take it at face value it makes the same point that I originally made: namely, that inflation and fiscal deficits are not correlated. You can read the exchange between Sadowski and me here.


Paul Krugman Pushes Factually Inaccurate Arguments About Argentina to Support His Discredited Monetarist Ideas


Well, Paul Krugman is out again waving his true colours in the wind while his die-hard followers try desperately to look the other way and pretend that he’s not making stuff up. Basically Krugman is saying, following that pundit Yglesias, that Argentina’s inflation problems have to do with its fiscal balance. Here is the quote from Krugman,

Matthew Yglesias says what needs to be said about Argentina: there’s no contradiction at all between saying that Argentina was right to follow heterodox policies in 2002, but it is wrong to be rejecting advice to curb deficits and control inflation now. I know some people find this hard to grasp, but the effects of economic policies, and the appropriate policies to follow, depend on circumstances. (My Emphasis)

Of course, Krugman — instead of engaging in tough guy rhetoric (“doing what needs to be done” etc.) — could have done two quick Google searches to see if Argentina had been running major deficits in the years when it was suffering from inflation. If he had, he would have found that for many of the years after the 2001 default Argentina ran substantial fiscal surpluses. The stats are pretty hard to track down in the original (the website is in Spanish) but Trading Economics has pulled them and their statistics are typically accurate.

[Chart: Argentina government budget balance, via Trading Economics]

As we can see, the government ran substantial deficits in 2001-2003. This was at a time when GDP was shrinking at upwards of 6%. But once the economy left that major recession the government budget balance swung back into surplus and remained there until a brief deficit in 2010.

Now, if Krugman’s story were accurate we would expect to see inflation come down between 2004 and 2010, right? Do we? Nope. Not at all.

The following graph is the official inflation rate. Note that even though these statistics are well known to understate inflation they nevertheless track the unofficial measures insofar as their trends go — i.e. while they are not useful to give us a real picture of the rate of inflation they do give us a good picture of when the inflation accelerates and decelerates.

[Chart: Argentina official inflation rate (CPI), via Trading Economics]

As we can see, inflation soared in 2002. This would seem to overlap with the enormous budget deficit of that year. But the correlation is spurious. The inflation soared that year because the Argentinian peso was devalued to such an extent that it was worth about 25% of what it was worth the previous year. The spike in inflation was due to a sharp, fourfold increase in import prices that were then passed through to the rest of the economy. Something very similar happened in Iceland after the banking crisis there in 2008.

Between 2004 and 2010, however, we see consistent levels of high inflation (the real figures would probably be close to 20-25% a year rather than the government’s 10%) — and this was in a period when the country was running substantial government surpluses. What does this suggest? Simply that the inflation is likely due to the value of the Argentinian peso together with a host of other factors. These other factors are basically a classic wage-price spiral with the unions demanding that their standards of living keep up with rising prices, while firms raise their prices to pass on the cost of higher wages.

Since 2011 the peso has continued to devalue and is today worth about half what it was worth back then. Again, this is probably the root cause driving the inflation in Argentina. But it has very little to do with the government deficit. Rather it has to do with the fact that Argentina has been faced with these inflationary problems since at least as far back as the late-1980s when the country experienced a bout of hyperinflation proper.

During the 1990s the government tried to wring the inflation out of the system with a misguided currency board arrangement that fell apart in 2001. And I don’t think anyone would openly advocate that they try that again.

So, what are the solutions? Unfortunately, there are no easy solutions. In an ideal world the government would allow the burst of inflation that is going to accompany the recent devaluation of the peso to run through the system and then step in with well-enforced wage and price controls. Such controls, if history is to be any guide, are often less popular than inflation — with both trade unions and companies feeling that their rights are being encroached upon.

So, the likely path that Argentina will have to take is to try to keep economic growth buoyant while navigating the inflation. By not allowing incomes to fall too much the government can ensure that people do not experience their loss of purchasing power as an all-out impoverishment. Meanwhile, the government should bring the trade unions and the management of the firms to the table and try to make them gradually see reason. But again, that’s a tough game indeed.

The last thing that Argentina needs, however, is the likes of Paul Krugman with his silly Neo-Monetarist models of inflation telling them to cut government spending. Argentina is already extremely unpopular in the financial press because of the bitterness that still surrounds the 2001 default. When so-called ‘friends of the left’ like Krugman jump on the bandwagon as an easy way to outline their primitive theories of inflation it just adds fuel to the fire.

With their silly and discredited money-supply growth ideas — which, of course, are entirely backward (as I argued here, inflation typically causes money growth and not vice versa) — they will only encourage the Argentinean public to vote in another bunch of lunatics who will try some idiotic arrangement like they did in the 1990s. Such a regime may temporarily put a lid on inflation but only at the cost of wrecking the economy and causing much suffering.

Yes, the Argentinian financial elite will be pleased that their money is temporarily sound, but it will only be a matter of time before the riots kick into high gear and the whole thing falls to pieces in an ugly and perhaps bloody mise-en-scène orchestrated by economists who think that their doctrines and their little geometric toys fly in some heavenly space above political and social realities.

Update: There has been some push-back on my interpretation of Krugman’s position as ‘monetarist’ in the above post. As I show here it is quite clear that Krugman falls on the monetarist side of the debate over fiscal policy and its effects on inflation.


Abstraction, Language and Modelling in Economics


Alciphron is the title of the book by the philosopher George Berkeley that was most popular in his own time and is probably his least popular in ours. The reason is that the book deals with atheism and religion, and many would suppose that this has little bearing on unrelated questions.

Most of the book is rather enjoyable on its own terms. It is written in dialogue form in a prose style that is easily among the best that you will encounter among Anglo-Saxon philosophers. The Alciphron of the title is the representative of what at the time was called ‘free-thinking’ but what Berkeley renames ‘minute philosophy’ — basically, skeptical atheism, hyper-rationalism, distrust of authority in general and what would later develop into Jacobinism through the political ideas of the likes of Rousseau.

What makes the bulk of the dialogue so entertaining is the vacuity of Alciphron who is apt to use empty phrases and tautologies in place of religious ideas. In his desperation to tear himself away from the morality of the day Alciphron tries to articulate his own moral principles. But given that these are cast in a framework of skepticism he finds it increasingly difficult when pressed on it to give them grounding. Any time this point is driven home Alciphron tends to hide behind empty words like Virtue and Honour which he reflects upon in a sort of poetic and mysterious way — the typical posture of what would later become the sentimentalism of the Romantics.

All of this is interesting in its own way, but personally I have never been much taken by moral philosophy. It is in the Seventh Dialogue, however, that Berkeley sketches out some very interesting ideas on human language — something that he recognised as being an absolutely central philosophical question.

He begins with a discussion of how words stand in place of ideas like chips stand in for money-values in a card game — this is the standard conception of the day and, to a large extent, remains so today. He then goes on to show that once we have agreed upon a definition of a word it need not call to mind that definition immediately for us to communicate the idea lying behind it. Just in the same way as once we agree upon a money-value for a chip in a card game we need not call to mind the actual coins that stand behind it every time we use it. He then writes,

From hence it seems to follow that words may not be insignificant, although they should not, every time they are used, excite the ideas they signify in our minds, it being sufficient, that we have it in our power to substitute things or ideas for their signs when there is occasion. It seems also to follow, that there may be another use of words, besides that of marking and suggesting distinct ideas, to wit, the influencing our conduct and actions; which may be done either by forming rules for us to act by, or by raising certain passions, dispositions, and emotions in our minds. A discourse, therefore, that directs how to act or excites to the doing or forbearance of an action may, it seems, be useful and significant, although the words whereof it is composed should not bring each a distinct idea into our minds. (p222)

Berkeley starts from a rather nuanced conception of language. Words not only stand for ideas, they also exert influence on our behaviors and form rules by which we act. Words can also be used rhetorically to exert emotional effects.

What’s more, words need not, Berkeley maintains, stand for distinct ideas. He gives the example of consciousness or the ‘I’. In truth I have no conception, no distinct idea of myself as an ‘I’, and yet the word is required to explain all sorts of things. Another example is that of the word ‘number’ as a general abstract idea — i.e. not signifying a particular number (1, 2, 3, 4, etc.).

Do but try now whether you can frame an idea of number in abstract, exclusive of all signs, words, and things numbered. I profess for my own part I cannot. (p223)

These words are abstractions. They are not representative of clear ideas but they are functional and necessary nevertheless.

Berkeley then goes on to show how numbers, for example, allow us greater ease in performing certain operations due to the clearness of their notation.

But here lies the difference: the one, who understands the notation of numbers, by means thereof is able to express briefly and distinctly all the variety and degrees of number, and to perform with ease and despatch several arithmetical operations, by the help of general rules. Of all which operations as the use in human life is very evident, so it is no less evident, that the performing them depends on the aptness of the notation… Hence the old notation by letters was more useful than words written at length: and the modern notation by figures, expressing the progression or analogy of the names by their simple places, is much preferable to that for ease and expedition, as the invention of algebraical symbols is to this, for extensive and general use. (pp304-305)

This is one of the main reasons why we use abstract language. It is not the only reason, of course, but it is a very important one. This leads people to the use of analogy and metaphor — even to the analogical or metaphorical use of models and diagrams.

We substitute things imaginable, for things intelligible, sensible things for imaginable, smaller things for those that are too great to be comprehended easily, and greater things for such as are too small to be discerned distinctly, present things for absent, permanent for perishing, and visible for invisible. Hence the use of models and diagrams. (pp232-233)

But it is all too easy, if we get too bogged down within these abstractions and forget how they are connected with the practical realities that we are actually dealing with, to begin talking nonsense and trying to solve pseudo-problems with no real meaning.

Be the science or subject what it will, whensoever men quit particulars for generalities, things concrete for abstractions, when they forsake practical views, and the useful purposes of knowledge, for barren speculation, considering means and instruments as ultimate ends, and labouring to obtain precise ideas, which they suppose indiscriminately annexed to all terms, they will be sure to embarrass themselves with difficulties and disputes. (p308)

This is where economics has erred since at least the late nineteenth century. The early marginalists fell into two groups. One group was the Walrasians who, following Leon Walras, were perfectly content to confine themselves to barren speculation on unrealistic nonsense provided it was done in a nice, formal mathematical manner. The other group was the Marshallians who tried to bring such abstract speculation down to earth. Consider the following quote from Alfred Marshall from a letter he wrote in 1906,

[I had] a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics: and I went more and more on the rules — (1) Use mathematics as a shorthand language, rather than an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in (4), burn (3). This last I did often.

Clearly Marshall was becoming ever more concerned about the use of formal modelling in economics. He could see that it was apt to get out of hand. Marshall’s followers in this sense were, of course, Keynes and the early Post-Keynesians. But they lost the battle. By the 1950s Walrasianism was in the ascendant. Even Neo-Keynesians like Solow and Samuelson displayed a penchant for abstractionism that was apt to get carried away with itself and only produce irrelevant dross.

Today the situation has become entirely absurd and nonsensical. Economists who should be trying to articulate a new way forward are, like Don Quixote battling his windmills, spending most of their time and effort trying to refute DSGE models and insisting that New Classical economics is irrelevant. This rather mundane and largely useless behavior is what currently passes for the avant-garde of economic discourse.

This is not so much the fault of these economists as it is that they have been educated in a discipline in which, to a very large extent, this is the only discourse going. Frankly, I wouldn’t want them coming up with any ‘positive’ approaches because it is all too likely that they would end up resembling the Walrasian muck, with its groundless abstractions and useless rubbish.

Economics can only change when its object of inquiry has changed. But this cannot happen with the present practitioners because those who currently make up the bulk of the discipline are more comfortable with engineering and physics than they are with history. They are not people who can articulate themselves in a manner suited to the material that they are dealing with. And those that are suited to the material are generally driven away from economics courses because of the ridiculous formal demands placed on them by insulated and irrelevant lecturers who teach nothing but second-rate mathematics.

In this regard it is difficult to see how the discipline can change its spots. My reckoning is that the change will have to come from outside of academia. It seems to me that academic economics is only producing students whose modus operandi is picking holes in the status quo while being unable to articulate an alternative. That, to me, indicates a profession rotting from the inside out. Those who really care about how economics should be done will likely look elsewhere to learn their trade. They will be able to ironically navigate the economics classroom while learning how to really do economics elsewhere.

Addendum — Modelling as an ‘I’: I mentioned above that probably the most primitive of our abstract general ideas is that of the ‘I’. While I have no clear idea of myself as an ‘I’, I nevertheless maintain that it exists based on the fact that I have consciousness. This ‘I’ then comes to encompass my will, my experience, my opinions and so forth.

Now, it is quite obvious to anyone working in economics that economists are very testy about their models. Often one feels that when you attack a model its builder will react as if you are attacking his or her person. I think that this is actually explainable if we come to understand that for many economists their model becomes a sort of stand-in or representation of their knowledge of the economy. Thus, a model becomes a sort of representation of the economist as an economist. To attack the model then becomes to attack the adequacy of the economist.

I think that this simple observation explains a lot about the weirdness you often encounter in economists when you discuss their models — or the models of their leaders that they adhere to. Because this model represents them as an economist — the ‘I-economist’, as it were — they often feel that in attacking their model you are attacking their person. This, I think, explains a great deal of anxiety you encounter among economists when you raise the issue that models might be largely useless.

Responses such as the typical “But what is YOUR alternative model” and so forth should be read in this way — i.e. as an attempt to avoid the truth-content of the statement that models might be useless and channel the discussion into a battle between models, between ‘I’s. In this way the economist seeks to neutralise the statement that might undermine them as an ‘I-economist’ and instead tries to re-frame the debate in such a way that the ‘I-economist’ is maintained and the conversation becomes simply a fight between such ‘I-economists’.

This also goes a long way to explaining why, as some feminist economists have noted, modelling tends to be a male practice. Males in contemporary society have a far greater need to assert themselves as static ‘Is’ than females. An attack on models can easily be taken as an attack on one’s masculinity while, more radical still, an attack on modelling in general can be taken as an attack on contemporary representations of masculinity. While I am not generally inclined to explain intellectual constructions in such a manner I believe that this is indeed of relevance here.


Disturbing Distributions in Economic Statistics


Lars Syll has recently published an excellent post on the dilemma of probability theory when applied to the social sciences in general and economics in particular. Syll argues that in order to apply probability theory — which is deeply embedded not simply in mainstream economic models but also in econometric techniques — we must first be sure that the underlying system being studied conforms to certain presuppositions of probability theory.

Syll illustrates this nicely by comparing the economy with a roulette wheel — something that, if ‘fairly balanced’, will actually yield outcomes to which probability theory can be applied.

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data generating processes or structures – something seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution etc? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

I think that Syll could have buttressed his piece by providing some empirical evidence in this regard. One requirement that data must meet in order to be studied properly using techniques based on probability theory is associated with the Central Limit Theorem.

The Central Limit Theorem basically states the conditions under which a variable — one that is the sum or average of many independent influences — will be ‘normally distributed’. What this means is that the variable can be predicted using the well-known ‘bell curves’ or ‘Gaussian functions’. If a variable is normally distributed we can use standard probabilistic techniques to analyse it. If it is not then we cannot.

So, what about economic variables? Are they normally distributed? Short answer: no, they are not.

The most well-studied of these is, of course, the stock market. After the crash it is often said that financial markets have ‘fat tails’ and that this implies that they are not predictable using probability theory. It means that events that in a normally distributed curve would be almost impossibly rare actually occur with some frequency in these markets. The best illustration of this is to plot the actual stock market against a normally distributed curve to see how far they diverge. Here is such a graph taken from the book Fractal Market Analysis: Applying Chaos Theory to Investment and Economics:

[Chart: frequency distribution of Dow Jones returns plotted against a normal distribution, from Fractal Market Analysis]

As we can see the actual returns of the Dow Jones diverge substantially from the normal distribution curve. The implications for this, of course, are enormous and I will allow the author of the book to explain them himself.

What does this mean? The risk of a large event’s occurring is much higher than the normal distribution implies. The normal distribution says that the probability of a greater-than-three standard deviation event’s occurring is 0.5%, or 5 in 1,000. Yet the above shows us that the actual probability is 2.4%, or 24 in 1,000. Thus, the probability of a large event is almost five times greater than the normal distribution implies. As we measure still larger events the gap between theory and reality becomes even more pronounced. The probability of a four standard deviation event is actually 1% instead of 0.01%, or 100 times greater. (p26)
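The gap the author describes is easy to reproduce with any heavy-tailed sample. Here is a sketch using a Student-t distribution as a stand-in for fat-tailed returns — the parameters are invented, not fitted to the Dow Jones data discussed above:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# A stand-in for daily returns with fat tails: a Student-t distribution with
# 3 degrees of freedom, rescaled to unit standard deviation. The parameters
# are invented for illustration, not fitted to the Dow Jones data.
df = 3
returns = rng.standard_t(df, size=1_000_000) / math.sqrt(df / (df - 2))

for k in (3, 4):
    normal_p = math.erfc(k / math.sqrt(2))          # P(|Z| > k) for a Gaussian
    sample_p = float(np.mean(np.abs(returns) > k))  # empirical tail frequency
    print(f"beyond {k} sigma: normal {normal_p:.4%} vs fat-tailed {sample_p:.4%}")
```

The exact figures depend on the distribution chosen, but the pattern is the same as in the quote: the further out in the tail you look, the more the Gaussian understates the risk.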

But surely this is just the ever-skittish stock market, right? Nope! We find similar properties when studying many economic variables. For example, in a paper entitled Are Output Growth-Rate Distributions Fat-Tailed? Some Evidence from OECD Countries the authors find that GDP growth rates and other economic variables show very similar properties to stock market data. They write,

The foregoing evidence brings strong support to the claim that fat tails are an extremely robust stylized fact characterizing the time series of aggregate output in most industrialized economies… As mentioned, fat tails have been indeed discovered to be the case not only for cross-sections of countries, but also for plants, firms and industries in many countries. In other words, the general hint coming from this stream of literature is in favor of an increasingly “non-Gaussian” economics and econometrics. (p17)

So, what do these authors say in response? Well, the response is usually some vague allusion to some new statistical or mathematical approach. This seems to me to be wholly unfounded. What the authors are actually encountering in the data is not simply a failing of the normal distribution curve but instead actual uncertainty or non-ergodicity.

This could probably be shown if different time periods were taken and the probability distributions calculated out. For example, in the Dow Jones data plotted above we could break the data down into, say, five year periods and plot each one. It would quickly become obvious that the curve for, say, 1927-1932 would look vastly different from that of 1955-1960.
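A rough version of this check can be sketched in a few lines. Simulated data stands in for the actual Dow series here (which I am not reproducing), so the numbers mean nothing in themselves; the point is simply that summary statistics computed block by block need not resemble one another:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a long daily return series whose character shifts over time:
# a calm five-year stretch followed by a turbulent one (purely illustrative).
calm = rng.normal(0.0003, 0.006, size=5 * 252)
turbulent = rng.normal(-0.001, 0.025, size=5 * 252)
series = np.concatenate([calm, turbulent])

# Split into five-year blocks and compare each block's distribution.
block_len = 5 * 252
threshold = 3 * series[:block_len].std()   # '3 sigma' as judged by block 0
for i, block in enumerate(series.reshape(-1, block_len)):
    print(f"block {i}: mean {block.mean():+.4f}, std {block.std():.4f}, "
          f"days beyond block-0 3-sigma: {np.mean(np.abs(block) > threshold):.2%}")
```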

What this implies is that events like the 1929 stock market crash are not simply ‘outliers’ that sit neatly off the normal distribution curve. Rather they are properly uncertain events. They cannot be tamed with better models and they cannot be fitted with better statistical equipment. They are simply uncertain — at least when viewed from the point-of-view of a single time-series.

Some, like Taleb, have taken this to mean that predictions are impossible. I entirely disagree with this point-of-view. Many of the uncertain events we see in economics can indeed be predicted. But they can only be predicted by looking at the data within a given historical context or constellation. They cannot be predicted using models or probability estimates or anything of the sort. Rather economists must learn how to read data properly; how to not be sucked in by silly trends; and above all how to appreciate that they are not dealing with material that they can just feed into a computer and expect a neat outcome.


New Blog Dealing With Devolution/Scottish Independence


A colleague and I are currently in the process of launching a new think tank called ‘Gradualis’. The think tank will deal with the political and economic aspects of devolution. Our first focus will be on Scotland, which votes on independence later this year.

We think that such issues will probably prove the flash point for much political change that we will see across Europe over the next two decades or so. We also think that heterodox economic ideas — particularly those related to Modern Monetary Theory (MMT) and monetary regimes — will be important in considering the key issues surrounding devolution. Indeed, I am currently designing an alternative monetary framework for Scotland that will be published upon launching the think tank.

We have already started a blog that we hope to integrate into the website when it launches in February. We think that readers will find much of interest there. A good deal of the present commentary on Scottish devolution in particular and devolutionary/nationalist movements across Europe in general is short-sighted at the best of times and downright ideologically blinded at the worst of times. We will try to maintain a detached, analytical view.

So, check out the first post on the blog now and feel free to comment and share. This is a real opportunity to get heterodox ideas considered by policymakers but we can only do so by generating sufficient buzz around our project.

Gradualis — The Spanish Armada Set Sail… For Scotland


A Tangential But Rather Interesting Interview With…


Amogh Sahu was kind enough to do an extended interview with me for his podcast. The aim was to try to tie in the work I have been doing on the philosophy of science with economic theory. I think that Sahu did a fantastic job drawing this out and I’m very pleased with the result.

Unfortunately, the sound quality is quite poor — even extremely poor at times. But if you turn up the volume I think that you can hear most of it.

There are some tangents in the interview. We discuss, for example, the 1970s in some detail and have a rather long back-and-forth on the work of the filmmaker Adam Curtis. But I think that it all sort of fits together in a strange sort of way. Anyway, see how far you make it through. Personally I thought some of the best stuff was toward the end.

Entitled Thoughts: Philip Pilkington on Economics, Science and Philosophy


Behavioral Economics as Victorian Moralising


The other day I wrote a post that was about some awful load of nonsense that was coming from behavioural economics under the self-contradictory name of ‘libertarian paternalism’. Since then I’ve been looking into behavioural economics in some detail and what I’ve found, rather predictably, is no less awful.

You see, while behavioural economics has done the profession an enormous service in pointing out the obvious — i.e. that perfectly optimised utility functions are rubbish — it has nevertheless sought to maintain intact the utility calculus framework, albeit in a more experimental guise. Thus behavioural economics continues to chase the ends of rainbows for pots of gold of various kinds.

Given that measures of ‘objective happiness’ based on well-ordered, timeless preferences are obvious nonsense, it is nevertheless interesting to investigate in some detail how the behavioural economists go about constructing them. What you find when you look into it is all sorts of moral judgements slipped in under the radar. I take as my example Daniel Kahneman’s chapter ‘Objective Happiness’ in the book Well-Being: Foundations of Hedonic Psychology.

Okay, so here’s how the basic framework functions. The behavioural economist approaches an experimental subject in the process of undergoing a state of pleasure or pain. In the chapter Kahneman gives the example of a patient undergoing a colonoscopy.

The researcher then gives the patient a scale of one to ten and tells them to register how painful (or pleasurable) their experience is. The researcher must also recognise that the intervals may not be homogeneous: the pain-interval from 1 to 2, for example, may not represent as much pain as the pain-interval from 8 to 9. This is remedied, somewhat vaguely, by a ‘knowledgeable observer’ who re-orders the intervals to capture this dynamic. Kahneman does not give precise details of how this is done, which renders even his basic framework suspect.

It is no accident that the venture already looks fragile and shaky at this point. Let us imagine, for example, that a person can realistically convey the pain they are feeling using a scale of one to ten. I am somewhat skeptical of this, but let us nevertheless proceed as if it were the case. Now imagine that a person must calculate that, for example, one minute of pain at Level 7 is equivalent to two minutes of pain at Level 6.
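To make concrete what is actually being asked of the patient, here is a minimal sketch of the calculation in my own notation, not Kahneman’s. Suppose the ‘knowledgeable observer’ supplies some adjustment function $v(\cdot)$ that converts a reported level into a comparable intensity of pain. The ‘objective’ disutility of an episode running from time $0$ to $T$ is then taken to be the temporal integral

\[ U = \int_0^T v\big(r(t)\big)\, dt \]

where $r(t)$ is the level the patient reports at time $t$. On this accounting one minute at Level 7 is equivalent to two minutes at Level 6 only if $v(7) = 2\,v(6)$, which gives some idea of just how much work the unexplained adjustment procedure is being asked to do.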

I would say that anyone who finds this remotely convincing is fooling themselves because the whole thing sounds appealingly neat and tidy. But regardless, let us ignore these scruples and continue along.

The next oddity that arises in the Kahneman framework is the slipping in of various moral judgements. Take his ‘principle of temporal integration’, for example. This, he writes, is “consistent with the intuition that it is imprudent to seek short and intense pleasures that are paid for by prolonged mild distress” (p6). Note even the language applied here: it is ‘imprudent’. This is Victorian morality at its purest, passing itself off as an abstract principle.
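To see how the moral judgement falls straight out of the arithmetic, consider a stylised example of my own; the numbers are illustrative and not taken from Kahneman. Five minutes of intense pleasure at a rescaled intensity of $+8$ followed by two hours of mild distress at $-0.5$ gives

\[ U = 5 \times 8 + 120 \times (-0.5) = 40 - 60 = -20 \]

The episode comes out negative on the temporally integrated measure, and so the framework brands it ‘imprudent’, whatever the person who lived through it might say about the matter.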

Of course, in reality people engage in such ‘imprudent’ activities all the time; when they drink too much alcohol, for example, or when they play hazardous sports. But the behavioural economic Moral Majority does not want to hear about any of this. These activities, after all, violate the sacred principle of temporal integration!

The entire utility calculus then falls prey to what we might call, renaming the ‘principle of temporal integration’ in a similarly pretentious fashion, the tyranny of the average. What we find is that Kahneman thinks we need to homogenise human experience in order to properly understand the ‘utility’ it leads to. Extremes of pleasure are avoided in favour of a sort of numbing ‘happiness’.

Again, we’re on moral terrain here. This sounds much like the old Victorian cant that we need to lead a moderate existence and to engage in temperance so forth. I must be clear: if some people wish to lead boring lives where decisions are made based on some sort of ‘smoothing’ of pleasures over time, that is their prerogative. But I would plead with these people: keep it to yourselves and don’t attempt to impose it on others wearing your lab coat as a priest wears his collar.

If humanity had engaged in such ‘moderation’, in such ‘temporal integration’, it is likely that we would not be where we are today. Would Copernicus and Galileo have made the pronouncements they did regarding the motion of the earth — extreme pronouncements at the time — if they had adhered to the principle of temporal integration? Of course not. They would have undertaken a quick utility calculus and decided that they would gain more ‘utility’ by burning their work. The same could be said of the great explorers, like Columbus, or a whole host of others who took risks and went against the grain.

Real working psychologists know this too. You don’t advertise a product by saying that it will provide the maximum of temporally integrated utility. No, you advertise it by saying that it will whisk you off on some exciting adventure. What this shows is how human desire is actually structured. When faced with a choice between ‘smoothing utility across time’ and going on an adventure, the majority of the population choose the adventure. What gives the likes of Kahneman the right to say that he is correct and the majority of the population are wrong? After all, this is an arbitrary judgement on his part and seems to me to stem from his particular preferences and opinions.

Behavioural economics is nothing but Victorian morality passed down, through figures like Edgeworth, to the modern age. What is laughed at in the works of the Victorian moralists is codified into forbidding and difficult-to-understand terminology in behavioural economics. But it is all the same thing. It is a program that seeks to domesticate mankind and destroy what the vast majority of people hold dear.

And what is the great irony of this program? Namely, that it will be rejected by the marketplace. While the behaviourists’ ideas may find practitioners in the more authoritarian halls of government, the marketplace will continue to sell dreams of breaking boundaries, pushing extremes and engaging in short-term pleasures. This will put the behaviourists in a somewhat awkward position because, believing as they do that markets are basically a good means of allocating resources, they will find themselves hard pressed to explain why people engage in all sorts of naughtiness therein. But perhaps that is when the mask will slip and oxymoronic doublethink terms like ‘libertarian paternalism’ will creep in.

Posted in Economic Theory, Psychology | 1 Comment

A Revolution in Economic Textbooks


For the past few months Bill Mitchell has been posting drafts of what should become his and Randy Wray’s Modern Monetary Theory textbook for economics students. From what I have seen so far this looks set to become the perfect textbook.

When I did a Masters last year, one of the main problems that my macro lecturer — the excellent Post-Keynesian economist Engelbert Stockhammer — had was trying to explain the monetary system to students. Even pretty good Keynesian textbooks like Richard Froyen’s Macroeconomics were completely stuck in the past with regard to how the monetary system operates.

Mitchell and Wray’s book looks set to fix this. MMT integrates Post-Keynesian endogenous money theory into a more comprehensive system that shows students how governments borrow and spend. Indeed much of the complaints against MMT from Post-Keynesians — i.e. that the Treasury and Central Bank should not be consolidated — is precisely against the element of the theory that simplifies sufficiently to teach the monetary system in an adequately didactic manner. While those who are highly prejudiced against the consolidation will continue to dig their heels in I think that others will appreciate the didactic power of this simplification when the book comes out.

Another thing missing from books like Froyen’s is a good theory of growth. Regular readers of this blog will know that I am deeply skeptical of modelling in economics, but I have always said that models are a good means of educating students provided their limitations are clearly explained.

In textbooks like Froyen’s the old Solow model is generally used. This is a truly awful, static model of growth that incorporates the worst excesses of marginalist abstraction from the real world. Indeed, in my opinion, it is one of the gravest sins that the neo-Keynesians committed — one that haunts us in so many ways today.

Mitchell, however, introduces the student to the supposedly more outdated Harrod-Domar model — the model that the Solow model was created in response to. The Harrod-Domar model is a far better starting point for the student of economics. Like the Solow model it is not so complex as to be useless for teaching (I’m afraid that the Godley-Lavoie models fall into this category for everyone except PhD candidates), but it also introduces the student to dynamics and, with them, contingency.

The model shows how there is no “natural” overlap between the rate of productivity growth and the rate of income growth. It also allows Mitchell to raise important distributional issues — indeed, the chapter is entitled “Distribution and Growth”, echoing the likes of Joan Robinson. But more important to me is that the student will be able to recognise that the real world is messy. There is no tendency for everything to just “work out”.
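For readers who want the bare bones, the relation at the heart of the Harrod-Domar framework can be stated roughly as follows; this is the standard textbook presentation and not necessarily the notation Mitchell uses in the draft chapters. The warranted rate of growth, the rate at which firms’ investment plans are validated by demand, is

\[ g_w = \frac{s}{v} \]

where $s$ is the saving rate and $v$ the capital-output ratio, while the natural rate $g_n$ is the sum of labour force growth and productivity growth. Nothing in the model guarantees that $g_w$ equals $g_n$, and divergences between the actual and warranted rates tend to be self-reinforcing rather than self-correcting. That is precisely the contingency and messiness that the Solow model assumes away.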

Different rates of productivity growth certainly occur in really-existing capitalist economies, and these do not automatically generate a sufficient level of aggregate demand to sustain full employment of capital and labour. As can clearly be seen today, distribution can also affect this process — when profits are high and wages are low the economy can enter a period of protracted secular stagnation.

Today the mainstream talk vacuously about such issues. When you read their statements — about secular stagnation, for example — with a critical eye they really have nothing to say. I hope that Mitchell and Wray’s book comes out as soon as possible. It will be a much needed antidote to much of this.

Posted in Economic Theory | 14 Comments