Everyone complains about his memory, and no one complains about his judgment.
— François de La Rochefoucauld
In the comments to my post yesterday on monetarism the notion of the ‘long-run trend’ came up. A regular commenter, ivansml, insisted that monetarist theory ‘worked’ if you averaged the periods over five years. It does not — or, at least, it sometimes works and it sometimes does not, which means, to my mind, that it does not work: either the monetarist ‘law’ holds or it doesn’t, and science does not allow for a vague middle ground. But I want to focus instead on what is and what is not a ‘long-run trend’.
Mainstreamers like ivansml seem to think that you obtain a long-run trend by reducing the number of observations through some sort of averaging. So, if you have a monthly time series running over thirty years, which yields 360 observations, you reduce this to five-year averages and thus obtain six observations instead. This is a bizarre concept of a long-run trend. It is widespread in the mainstream and, to my mind, just represents poor methodological understanding of what statistics say and do not say. When we average some data set and reduce the number of observations we do not obtain any new trend. Rather we just… reduce the number of observations.
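To see how drastic that reduction is, here is a minimal sketch in plain Python. The data are random noise, purely for illustration; the point is simply that the operation described above leaves you with six numbers where you had 360.

```python
import random

# Purely illustrative: 30 years of monthly data (360 observations)
# collapsed into six five-year averages, as described above.
random.seed(0)
monthly = [random.gauss(0, 1) for _ in range(360)]  # 30 years x 12 months

block = 60  # five years' worth of monthly observations
five_year_avgs = [sum(monthly[i:i + block]) / block
                  for i in range(0, len(monthly), block)]

print(len(monthly), "->", len(five_year_avgs))  # 360 -> 6
```

No new information has been created here; the same overall mean is simply carried by far fewer points.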
The effects that this has on our conclusions will depend (a) on the particular structure of the data we are using, (b) on the averaging period that we choose — i.e. five years, two years, ten years etc. — and (c) on the year in which we start averaging — i.e. if I choose 1990 as my first observation for a three year average my data points will look different from those I would get if I chose 1991. Obviously all these effects are in some sense arbitrary — most especially the latter two.
Since (c) is perhaps the least intuitively obvious let us take a closer look at it. In the table below we have an arbitrary dataset. We have averaged over the course of three years. But in one column we have used 1990 as the starting point of the average and in the other column we have used 1991. Look at how different the results we get are!
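The same point can be made as a minimal sketch in plain Python. The yearly values below are made up purely for illustration; the only thing that matters is that shifting the starting year of the three-year blocks from 1990 to 1991 produces a visibly different set of averages.

```python
# Hypothetical yearly values, invented purely for illustration.
values = {1990: 2.0, 1991: 8.0, 1992: 3.0, 1993: 9.0, 1994: 1.0,
          1995: 7.0, 1996: 4.0, 1997: 10.0, 1998: 2.0}

def three_year_averages(data, start):
    """Average consecutive three-year blocks beginning at `start`."""
    years = sorted(y for y in data if y >= start)
    usable = len(years) - len(years) % 3          # drop any incomplete block
    blocks = [years[i:i + 3] for i in range(0, usable, 3)]
    return [round(sum(data[y] for y in b) / 3, 2) for b in blocks]

print(three_year_averages(values, 1990))  # [4.33, 5.67, 5.33]
print(three_year_averages(values, 1991))  # [6.67, 4.0]
```

Same data, same averaging period, and yet the two columns of ‘long-run’ figures tell quite different stories.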
This is why we typically try to get as many observations as possible when we do statistical work. We rarely, if ever, want to reduce the number of observations as this can lead us far astray. When we try to reduce the number of observations we can engage in manipulation even if this is not our intention. If we are not very careful we may find ourselves averaging the data in such a way as to produce the results that we a priori want to produce. I am of the opinion that this happens all the time in so-called empirical work in economics. Keynes seemed acutely aware of something similar in his famous critical work on Tinbergen when he wrote:
Although there may be many factors with different trends, there is only one trend line, and I have not understood the process by which this single trend is evolved. The use of rectilinear trend (in post-war years) means, apparently, that a straight line is drawn between the first year of the series and the last. The result is, of course, that it makes a huge difference at what date you stop. In the case of the United States (p. 56) the series runs from 1919 to 1933, which, as a result of the abnormal circumstances of the first and last years, involves the paradox that the United States was in a severe downward trend throughout the whole period, including the period ending in 1929, amounting in all to 20 per cent.; whereas if Prof. Tinbergen had stopped in 1929, he would have used a sharply rising trend line instead of a sharply falling one for the same years. This looks to be a disastrous procedure. Prof. Tinbergen is quite aware of the point. In a footnote to p. 47 he mentions that “the trend chosen for the American figures (post-war period) may be somewhat biased by the fact that the period starts with a boom year and ends with a slump year.” But he is not disturbed, since he has persuaded himself, if I follow him correctly, that it does not really make any difference what trend line you take. (pp565-566 — My Emphasis)
So, if we might be misled by simply averaging periods in order to arrive at ‘long-run trends’ what meaning can we ascribe to this term? After all, it does seem intuitively plausible that ‘long-run trends’ exist. I think that they certainly do. But they are far vaguer than the marginalists seem to have in mind. Here is a long-run trend.
This trend states precisely this: “Over the 1,000 years from 1000AD to 2000AD, the population, measured in terms of men, women and children, rises at the same time as production measured in 1985 dollars rises”. That’s a trend. It’s long-run. It requires basically no statistical tomfoolery. It’s a long-run trend! It’s quite a humble trend and gives us no conjectural knowledge of any sort of ‘law’. It just is what it is: an historical long-run trend that may or may not continue into the future.
This is what we should mean by ‘long-term trend’. A long-term trend is a trend that appears to hold good over a number of years. It is not captured by some arbitrary process of averaging. That is likely just statistical tomfoolery with which the researcher tricks themselves, magically obtaining the desired results after a long process of trial and error.
It should be mentioned that the old monetarists never, to my knowledge, tried this particular trick. What they did instead was argue that there might be lags in how long it took for the money supply to affect prices. This allowed them to produce testable causal relationships. For example, they might say that an x% increase in the money supply would lead to a y% increase in inflation after a period of, say, 36 months.
While this is not nearly as bad a form of data manipulation as reducing observations through arbitrary averages, it is not without its own problems. Again, such lagging can allow the researcher to alter the lags over and over until they find what it is they want. Keynes accused Tinbergen of precisely this in his famous criticism. He wrote:
The treatment of time-lags and trends deserves much fuller discussion if the reader is to understand clearly what it involves. To the best of my understanding, Prof. Tinbergen is not presented with his time-lags, as he is with his qualitative analysis, by his economist friends, but invents them for himself. This he seems to do by some sort of trial-and-error method. That is to say, he fidgets about until he finds a time-lag which does not fit in too badly with the theory he is testing and with the general presuppositions of his method. No example is given of the process of determining time-lags which appear, when they come, ready-made (cf. p. 48). But there is another passage (p. 39) where Prof. Tinbergen seems to agree that time-lags must be given a priori. (p565)
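The danger Keynes is pointing at is easy to reproduce. Take two series of pure noise, entirely unrelated by construction, and ‘fidget about’ over a few dozen candidate lags until one of them fits least badly. A sketch with simulated data (plain Python; every number here is invented):

```python
import random

random.seed(1)
n = 120  # ten years of monthly data
money = [random.gauss(0, 1) for _ in range(n)]
prices = [random.gauss(0, 1) for _ in range(n)]  # unrelated to `money` by construction

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# "Fidget about": scan lags of 1..48 months and keep whichever
# makes the correlation look best in absolute value.
best = max(range(1, 49),
           key=lambda k: abs(correlation(money[:-k], prices[k:])))
print(best, round(correlation(money[:-best], prices[best:]), 3))
```

With enough candidate lags the winning correlation will typically look far more impressive than the relationship, which is nonexistent by construction, deserves.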
Historical statistics are a delicate beast and must be handled with care. Otherwise the researcher risks finding precisely what they wish to find in those columns of numbers. As the Bible says (Matthew 7:7): “Ask and it will be given to you; seek and you will find; knock and the door will be opened to you.” When in doubt — and we should always be in doubt handling historical statistics — clarity of thought and simplicity of assumptions are the only prophylactic against error. The idea of arbitrarily averaging a dataset to reduce the number of observations is murky in the extreme, hides more than it illuminates and is not at all clear in its assumptions. Lagging at least has the advantage of clarity. But it is clearly a practice that is very much open to abuse.
Moving averages and time series filters are not inventions of economists. They are a standard and well-accepted part of statistics and engineering, and the reason we use them is that time series have inherent internal dynamics which produce cyclical components of varying frequencies, some of which we are not interested in. Add to this the possibility of nonstationarity and cointegration, and it becomes clear one has to be careful in analyzing such data. It’s not about reducing the number of observations, but about extracting relevant information from the data (and in the end, all statistics is about data reduction).
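For instance, even the crudest such filter, a simple moving average, does exactly this kind of extraction. A sketch with simulated data (the drift and the noise are both made up):

```python
import random

random.seed(2)
n = 120
trend = [0.05 * t for t in range(n)]                        # slow drift we care about
series = [trend[t] + random.gauss(0, 1) for t in range(n)]  # drift buried in noise

window = 12  # a twelve-month moving average
smoothed = [sum(series[t:t + window]) / window
            for t in range(n - window + 1)]

# `smoothed` tracks the underlying drift far more closely than the raw
# series does; the high-frequency noise is averaged away.
```

The filtered series is shorter, yes, but it carries the component of the data one is actually interested in.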
Here’s an example – climate science tells us there’s a long-term relationship between CO2 (and other) emissions and temperature increase. Would you deny this just because yearly changes in CO2 and temperature are noisy and have low correlation? I hope not.
And, of course, this luddite approach of yours is rather self-serving. You present some evidence (which is not raw data either, or else you’d be plotting levels, not changes), and if anyone presents contrary evidence, you dismiss it as data mining and/or appeal to authority by printing some irrelevant quote. The fact that you pulled a quote of Keynes in which he’s factually wrong (Tinbergen didn’t estimate trends by joining first and last data point) just adds an additional layer of irony. Sorry, if you’re not willing to learn the tools, then maybe don’t analyze data, or at least stop making strong unsupported claims.
So the physical scientists who deal with homogeneous, ergodic systems use these tools, and that allows you — who deals with something entirely different — to use them too? You don’t think that maybe there might be a difference there? Hmm…
Haha! Keep at it, buddy. No wonder there is so much agreement regarding empirical results in econometrics. Oh wait… no there isn’t! Lol! It’s a big fraud and everyone knows it. The level of cynicism about these methods among even mainstream economists is shocking and opportunistic. And you know this, ivansml. Everyone who is remotely self-conscious in economics does. People routinely shrug off the econometric results of other people because they know them to be shaky. But you won’t admit it. Because you don’t have the balls.
But hey, I’ll tell you what. Come up with some nice results that allow us to make predictions and we’ll use them to make some investments in the markets. Your money, of course… Like, let’s use M2 and M3 measures to make calls on future inflation and we’ll go long or short T-bills based on those calls. Are you confident enough to do this? Will you put your money where your mouth is?
Actually, I really like this idea. If you’re confident in your theories let’s apply them in the market. If you can give me future forecasts of inflation and other variables I’m sure I can figure out where to place the money.
We’ll call it the ‘ivansml challenge’ and we’ll show how economists should all be absolutely minted if they only took their research seriously! It’ll be revolutionary.
Are you game? We can run the challenge on this very blog. We can update your progress from month to month. It might be illuminating. You know, scientifically. And you seem confident in the “economic laws” you hold. So, why not? Or, is that confidence only a front for a sneaking underlying suspicion that you don’t have access to these supposed laws? Surely not. Surely all the scepticism toward economists that I encounter in the financial community is just negativism. Tell me it is, ivansml! Tell me it is! Prove it to me!
As an economics noob, something tells me ivansml teaches at a very prestigious institution. Would love to see the above challenge take place. Those boat races must be getting boring by now. 😛
. . .and (c) on the year in which we start averaging — i.e. if I choose 1990 as my first observation for a three year average my data points will look different from those I would get if I chose 1991.
This is, incidentally, why those who claim a global cooling trend always begin their analysis at the year 1998. Starting from an anomalous point can be used to deliberately skew results.
It’s remarkably easy to do. I suspect most of the statistical tests in economics are bogus. There are so many tricks and in my experience economists apply them almost habitually and unself-consciously.