Everyone complains about his memory, and no one complains about his judgment.
— François de La Rochefoucauld
In the comments to my post yesterday on monetarism the notion of the ‘long-run trend’ came up. A regular commenter, ivansml, insisted that monetarist theory ‘worked’ if you averaged the data over five-year periods. It does not; or, at least, it sometimes works and it sometimes does not, which to my mind means that it does not work, because either the monetarist ‘law’ holds or it does not, and science does not allow for a vague middle ground. But here I want to focus on what is and what is not a ‘long-run trend’.
Mainstreamers like ivansml seem to think that you obtain a long-run trend by reducing the number of observations through some sort of averaging. So, if you have a monthly time series running over thirty years, which yields 360 observations, you reduce it to five-year averages and thus obtain six observations instead. This is a bizarre concept of a long-run trend. It is widespread in the mainstream and, to my mind, just represents a poor methodological understanding of what statistics say and do not say. When we average some data set and reduce the number of observations we do not obtain any new trend. Rather we just… reduce the number of observations.
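To make the arithmetic concrete, here is a minimal sketch in Python; the monthly series is randomly generated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Thirty years of monthly observations: 30 * 12 = 360 data points.
monthly = rng.normal(loc=2.0, scale=1.0, size=360)

# Collapse into five-year averages: 360 / (5 * 12) = 6 observations.
five_year_averages = monthly.reshape(6, 60).mean(axis=1)

print(len(monthly))             # 360
print(len(five_year_averages))  # 6
```

No new information appears in those six numbers; they are simply the original 360 observations with the within-period variation thrown away.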
The effects that this has on our conclusions will depend on (a) the particular structure of the data we are using; (b) the averaging period that we choose, e.g. five years, two years, ten years; and (c) the year in which we start averaging, e.g. if I choose 1990 as my first observation for a three-year average my data points will look different than if I choose 1991. Obviously all these effects are in some sense arbitrary, most especially the latter two.
Since (c) is perhaps the least intuitively obvious, let us take a closer look at it. Take an arbitrary dataset and average it over three-year periods, in one case using 1990 as the starting point of the average and in the other using 1991. Look at how different the results we get are!
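A minimal sketch of this in Python; the annual figures below are made up, and the 1990/1991 start dates are just those used above:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# An arbitrary annual dataset, 1990-1999, purely for illustration.
years = np.arange(1990, 2000)
data = pd.Series(rng.normal(loc=100.0, scale=10.0, size=len(years)), index=years)

# Three-year averages starting in 1990: (1990-92), (1993-95), (1996-98).
start_1990 = data.loc[1990:1998].groupby(lambda y: (y - 1990) // 3).mean()

# Three-year averages starting in 1991: (1991-93), (1994-96), (1997-99).
start_1991 = data.loc[1991:1999].groupby(lambda y: (y - 1991) // 3).mean()

print(start_1990)
print(start_1991)
```

Same data, same averaging period; only the starting year differs, and every single averaged data point changes.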
This is why we typically try to get as many observations as possible when we do statistical work. We rarely, if ever, want to reduce the number of observations, as this can lead us far astray. When we try to reduce the number of observations we can engage in manipulation even if this is not our intention. If we are not very careful we may find ourselves averaging the data in such a way as to produce the results that we a priori want to produce. I am of the opinion that this happens all the time in so-called empirical work in economics. Keynes seemed acutely aware of something similar in his famous critical work on Tinbergen when he wrote:
Although there may be many factors with different trends, there is only one trend line, and I have not understood the process by which this single trend is evolved. The use of rectilinear trend (in post-war years) means, apparently, that a straight line is drawn between the first year of the series and the last. The result is, of course, that it makes a huge difference at what date you stop. In the case of the United States (p. 56) the series runs from 1919 to 1933, which, as a result of the abnormal circumstances of the first and last years, involves the paradox that the United States was in a severe downward trend throughout the whole period, including the period ending in 1929, amounting in all to 20 per cent.; whereas if Prof. Tinbergen had stopped in 1929, he would have used a sharply rising trend line instead of a sharply falling one for the same years. This looks to be a disastrous procedure. Prof. Tinbergen is quite aware of the point. In a footnote to p. 47 he mentions that “the trend chosen for the American figures (post-war period) may be somewhat biased by the fact that the period starts with a boom year and ends with a slump year.” But he is not disturbed, since he has persuaded himself, if I follow him correctly, that it does not really make any difference what trend line you take. (pp. 565-566, my emphasis)
So, if we might be misled by simply averaging periods in order to arrive at ‘long-run trends’, what meaning can we ascribe to this term? After all, it does seem intuitively plausible that ‘long-run trends’ exist. I think that they certainly do. But they are far vaguer than the marginalists seem to have in mind. Here is a long-run trend.
This trend states precisely this: “Over the 1,000 years from 1000AD to 2000AD, the population, measured in terms of men, women and children, rises at the same time as production measured in 1985 dollars rises”. That’s a trend. It’s long-run. It requires basically no statistical tomfoolery. It’s a long-run trend! It’s quite a humble trend and gives us no conjectural knowledge of any sort of ‘laws’. It just is what it is: an historical long-run trend that may or may not continue into the future.
This is what we should mean by ‘long-run trend’. A long-run trend is a trend that appears to hold good over a number of years. It is not captured by some arbitrary process of averaging. That is likely just statistical tomfoolery by which the researcher tricks themselves into magically obtaining, after a long process of trial and error, the results they were looking for.
It should be mentioned that the old monetarists never, to my knowledge, tried this particular trick. What they did instead was argue that there might be lags in how long it took for the money supply to affect prices. This allowed them to produce testable causal relationships. For example, they might say that an x% increase in the money supply would lead to a y% increase in inflation after a period of, say, 36 months.
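Such a claim is at least checkable. Here is a hedged sketch of what a naive check might look like; the monthly growth series and the 36-month lag are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly year-on-year growth rates over twenty years.
money_growth = rng.normal(loc=5.0, scale=2.0, size=240)
inflation = rng.normal(loc=2.0, scale=1.0, size=240)

# Test the claimed relationship: money growth leads inflation by 36 months.
LAG = 36
corr = np.corrcoef(money_growth[:-LAG], inflation[LAG:])[0, 1]
print(f"Correlation of money growth with inflation {LAG} months later: {corr:.3f}")
```

A fixed, pre-stated lag like this can at least be wrong. The trouble begins when the lag itself becomes a free parameter.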
While this is not nearly as bad a form of data manipulation as reducing observations through arbitrary averaging, it is not without its own problems. Again, such lagging can allow the researcher to alter the lags over and over until they find what they want. Keynes accused Tinbergen of precisely this in his famous criticism. He wrote:
The treatment of time-lags and trends deserves much fuller discussion if the reader is to understand clearly what it involves. To the best of my understanding, Prof. Tinbergen is not presented with his time-lags, as he is with his qualitative analysis, by his economist friends, but invents them for himself. This he seems to do by some sort of trial-and-error method. That is to say, he fidgets about until he finds a time-lag which does not fit in too badly with the theory he is testing and with the general presuppositions of his method. No example is given of the process of determining time-lags which appear, when they come, ready-made (cf. p. 48). But there is another passage (p. 39) where Prof. Tinbergen seems to agree that time-lags must be given a priori. (p. 565)
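To see how easily such ‘fidgeting about’ produces a fit, consider a sketch in which two series of pure random noise, with no relationship between them by construction, are searched over candidate lags until the best-fitting one turns up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two series of pure noise: by construction there is no relationship.
x = rng.normal(size=240)
y = rng.normal(size=240)

# 'Fidget about' over candidate lags and keep whichever fits best.
best_lag, best_corr = max(
    ((lag, np.corrcoef(x[:-lag], y[lag:])[0, 1]) for lag in range(1, 61)),
    key=lambda pair: abs(pair[1]),
)
print(f"Best-fitting lag: {best_lag} months (correlation {best_corr:.3f})")
```

The lag found this way will often look respectable on a naive significance test, because the test knows nothing about the fifty-nine other lags that were tried and discarded.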
Historical statistics are a delicate beast and must be handled with care. Otherwise the researcher risks finding precisely what they wish to find in those columns of numbers. As the Bible says (Matthew 7:7): “Ask and it will be given to you; seek and you will find; knock and the door will be opened to you.” When in doubt, and we should always be in doubt when handling historical statistics, clarity of thought and simplicity of assumptions are the only prophylactic against error. The idea of arbitrarily averaging a dataset to reduce the number of observations is murky in the extreme, hides more than it illuminates and is not at all clear in its assumptions. Lagging at least has the advantage of clarity. But it is clearly a practice that is very much open to abuse.