Someone said to me a while back: “Phil, you are always railing against econometrics, but some of the MMT guys and quite a few Post-Keynesians maintain that these techniques are useful and valid”. I recognise this full well; it preoccupies me perhaps more than it should. Indeed, Post-Keynesians are using econometrics with increasing frequency — and at the very same time they are becoming increasingly interested in highly abstract modelling. I am not a big fan of this trend, as readers of this blog will probably have guessed.
Recently I had the good fortune to stumble upon a paper by the MMT economist Bill Mitchell entitled Econometrics, Realism and Policy in Post-Keynesian Economics. It is a defence of econometrics that is not only very good but also takes a number of different perspectives on the matter. So, I thought it might be productive to deal with it in some detail here.
In the paper Mitchell confronts the realist ontology. This ontology is best summed up, I think, by distinguishing closed from open systems. A good example of a closed system is a controlled scientific experiment. By setting the experiment up so that its structure is stable through time (ergodic) and is not interfered with by outside forces, the experimenter ‘closes’ the system upon itself. For realists, any data then generated by this experiment can reliably be used to make inferences about the future.
An open system, on the other hand, is open to change, fluctuation and the emergence of new trends. Nor is it sealed off from interference by outside forces. The realists think that open systems are what we generally deal with in the social sciences, including economics. We cannot reliably use data generated in such open systems to make predictions about the future because, for example, although inflation and wages may be strongly correlated over a certain time period, they may not be in the next.
In a closed-system experiment such a result would discredit the view that there is a relationship between wages and inflation, because we would have data that falsified the hypothesis. But in economics we cannot falsify the statement that there is such a relationship, because the relationship may exist in some historical time periods but not in others. Clearly we are dealing with very different materials in such open systems than we are in closed systems.
In open systems we cannot establish any timeless laws, for example, and we also cannot definitively disprove an economic theory because the relationship it posited did not hold in a given historical time period. More extreme still, we cannot make valid inferences about the future based on past data. This is the nature of non-ergodic, open systems according to the realists.
Mitchell then goes on to say that realists like Tony Lawson do not understand what is being done when econometricians study time series data to draw inferences. He characterises Lawson’s position by quoting him directly:
Econometricians seem universally to report their results as if they interpret themselves as working within the falsificationist bold predictions framework. (p10)
Mitchell responds to this by saying that econometrics does not test theories which, he seems to agree, are untestable. He writes,
Applied econometrics does not test economic theories. Economic theories are untestable. Intriligator (1978: 14) says an econometric model ‘is any representation of an actual phenomenon such as an actual system or process … Any model represents a balance between reality and manageability.’ While the data generating process (DGP) is held to be a true process, a model is considered fallible, it cannot be true because to make the process tractable a marginalisation of the DGP has to be made. A theory of the DGP might be true, but such speculation is futile because there is no way of telling. A model of the theory is false by definition. Hendry (1983: 70) distinguishes the DGP (the mechanism) which is true and unique, from the simplified representation of the DGP (the model) which is non-unique.
An econometric model is specified in terms of theoretically-motivated variables and applied to some data. These specific representations contain hypotheses which can be tested. Based on visible criteria, a particular representation can claim to be the most adequate current picture of the DGP. There can be an ordering of representations, some more adequate than others. All representations are tentative and time dependent. A Post Keynesian econometrician would only aspire to empirically adequate and hence tentative representations of theoretical posits which have satisfied a range of currently accepted diagnostic criteria. (p10)
I would characterise this statement as a fairly honest attempt at using econometrics. Many, however, do not use econometrics in this manner. Rather, they seek True models that remain True perhaps forever into the future (there are now some people doing Post-Keynesian work who adhere to such a view, incidentally). Rather strangely, Mitchell later claims that the realists themselves are the ones seeking “ultimate laws” (p14) — I think he is here confusing the realist approach to a closed system, which according to them will have ultimate laws, with the realist approach to an open system, which will not.
In the above paragraph Mitchell seems to eschew such an approach and say that it is impossible. He opts instead for a “tentative and time dependent” approach. But later in the paper he will claim that models can be used to make accurate numerical predictions that will remain valid in the future. This is, in fact, philosophically identical to the claim that you can test an economic theory — it is just a matter of degree.
When an econometrician thinks of an estimated model, he/she does not think in terms of a natural or physical model established in an experimental context. Hendry (1983: 72) says that we rather attempt to establish a ‘conjectural’ degree of stability for the sample of available data. Clearly, if we find evidence of instability, then the conjecture is problematic. However, we tentatively accept a concept of time dependent stability if our model displays within-sample stability (defined by conventional likelihood tests). (p15)
This stability, Mitchell says, can then be projected into the future. But this merely avoids the problem.
Imagine for a moment that we had ten years of time-series data on interest rates and investment. In these ten years the interest rate had risen on average 1% every two years. Now, suppose the interest rate is found not to have had a great deal of effect on investment — say, every 1% rise is correlated with a 0.25% fall in investment. Following this, Mitchell would say that, assuming stability, if interest rates rise in the future they will not have a great deal of impact on investment. Further still, he could make the numerical claim that for every 1% rise in interest rates there will be a 0.25% fall in investment moving into the future.
Now imagine that the central bank raises interest rates by 15% over a very short time period. By Mitchell’s calculations this would cause a 3.75% fall in investment. But what we actually see is, say, a 20% fall in investment. This is intuitively obvious, of course, because while small increases in interest rates may not have significant impacts on investment, large increases do. The magnitude of the increase matters — and this cannot be understood by looking at time series data that does not contain such a large rise.
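For what it is worth, the extrapolation failure in this example can be sketched in a few lines of Python. Everything here is made up for illustration: the sample figures are the ones invented above, and the nonlinear “true” response is a purely hypothetical stand-in for whatever actually happens when rates jump by 15%.

```python
# Illustrative sketch only: the data and the "true" response below are
# invented to match the hypothetical example in the text.

# "Observed" sample: ten years in which rates rose roughly 1% every two
# years, each 1% rise coinciding with a 0.25% fall in investment.
rate_changes = [1.0, 1.0, 1.0, 1.0, 1.0]            # five two-year periods
investment_changes = [-0.25, -0.25, -0.25, -0.25, -0.25]

# Within-sample slope: with identical observations this collapses to the
# simple ratio of -0.25 percentage points of investment per 1% of rates.
slope = sum(i / r for i, r in zip(investment_changes, rate_changes)) / len(rate_changes)

def predict_linear(rate_rise):
    """Extrapolate the within-sample relationship to any rate rise."""
    return slope * rate_rise

def true_response(rate_rise):
    """A hypothetical nonlinear 'reality' in which large rises bite much
    harder than small ones. The quadratic term is pure assumption."""
    if rate_rise <= 2.0:
        return -0.25 * rate_rise
    return -0.25 * rate_rise - 0.072 * rate_rise ** 2

shock = 15.0  # a sudden 15% rise, far outside the observed range
print(predict_linear(shock))   # -3.75: the model's confident prediction
print(true_response(shock))    # about -19.95: roughly the 20% fall in the text
```

The point is not the particular numbers but that the linear, within-sample relationship says nothing about the shape of the response outside the narrow range the data happens to cover.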
Put more theoretically, the reason for this is that the time series data is made up of non-homogeneous events. It is an open system. In an experiment we could simply tinker with the control variables, raising interest rates now by 1%, now by 15%, now by 100%, and we could crunch out a numerical prediction as to what future rate rises will do — perhaps within different bands and so forth.
But we cannot do this with economic data, as it is of a completely different nature. Feeding economic data through a computer will not capture what the controlled experiment captured. In fact, these are only the beginnings of the problems with this approach, but I don’t have space to cover them here. Keynes knew this well, writing in his famous screed against econometrics,
For, owing to the wide margin of error, only those factors which have in fact shown wide fluctuations come into the picture in a reliable way. If a factor, the fluctuations of which are potentially important, has in fact varied very little, there may be no clue to what its influence would be if it were to change more sharply. There is a passage in which Prof. Tinbergen points out (p. 65), after arriving at a very small regression coefficient for the rate of interest as an influence on investment, that this may be explained by the fact that during the period in question the rate of interest varied very little. (p567)
Mitchell goes on to argue that if we don’t agree that econometrics can churn out numerical estimates we are being “nihilistic”. I have heard this claim many times before. It is a strange use of a term that is supposed to mean “a belief in nothing”. One of the key things the critics are insisting on is simply that numerical prediction is pretentious. This does not constitute a “belief in nothing”.
If I am starting a business or a relationship, can I be certain it will work? No. Can I make numerical estimates of future cash-flows or time spent together? No. Does this make me a “nihilist”? Of course not. Likewise, if I advocate a government policy whose exact effects I cannot know, am I being a fool? I don’t think that I am.
In all of these cases I weigh up the arguments as best I can and proceed. Will I always be correct? No, I am far from infallible. But if I am open-minded and able to scrutinise my beliefs in light of new evidence I would like to think that I will be correct a great deal more of the time than the other guy. Obviously, if I spot an opportunity to build housing at the start of a housing boom I am in a far stronger position to make money than I am if I have a dream one night that people will buy into a new and obscure fashion trend where people wear large rubber ducks on their heads (although you never know!).
When dealing with non-ergodic time we simply have to use our judgement. Mitchell does this all the time on his own blog. And most of the time it is, so far as I can see, pretty spot on. Is his or my judgement science? No. But why do people today feel the need to mimic science in spheres of life where it does not apply? That, I would say, has something to do with the ideology of our time. But that is a topic that would take us too far afield today.
I would say though, that when you hear the word “nihilist” thrown around in such discussions perhaps what it really means is something like “someone who denies the Truth of the current cultural value system under which we live, one that values opinions expressed in quasi-scientific and numerical terms over opinions expressed more contingently and, perhaps, more honestly”. If that is the meaning we should give the word in such a context I would say, paraphrasing the late Martin Luther King, that I’m proud to be a nihilist.