Keynes on the Use and Abuse of Statistics and Probability

Bullshit stats

Much of Keynes’ A Treatise on Probability appears to have been written with the popularisation of the study of statistics that was emerging at the turn of the 20th century in mind. This makes it a rather remarkable document because it provides, if not a virgin eye, then at the very least a critical one that was not blinded by the haze of statistics with which we are bombarded today.

Therefore, I find the passages in which he discusses the use of statistics in general, and the application of probability to these statistics in particular, not only extremely interesting but also of contemporary relevance. Just to be straight before proceeding, this is not my attempt to “prove” that Keynes had all the answers. Indeed, I am not that interested in what Keynes said per se — I hate hero-worship. Rather I think that many of the critical insights he makes bear on the problems faced in economics today.

In A Treatise on Probability there is a passage in which Keynes discusses how statistics should be deployed — and how they should not. It reads as follows:

Generally speaking, therefore, I think that the business of statistical technique ought to be regarded as strictly limited to preparing the numerical aspects of our material in an intelligible form, so as to be ready for the application of the usual inductive methods. Statistical technique tells us how to ‘count the cases’ when we are presented with complex material. It must not proceed also, except in the exceptional case where our evidence furnishes us from the outset with data of a particular kind, to turn results into probabilities; not, at any rate, if we mean by probability a measure of rational belief. (p392)

This is similar, of course, to certain critiques of the Bayesian method that I have recently published on this blog, although it seems to me that it goes one step further. Where I was objecting to turning non-numerical data — i.e. qualitative data — into numerical probabilities, Keynes is objecting to turning statistical data — i.e. quantitative data — into numerical probabilities.

Why does he object to this? I believe the answer lies in the fact that Keynes saw probability theory as a rather limited tool that should only be deployed in very specific cases. In A Treatise on Probability Keynes makes very clear that the theory of probability plays a secondary role in human reasoning. The primary role is allotted to the twin processes of Induction and Analogy (I shall write a separate post on these in the coming days). Keynes recognised that if these were not given the primary role, and if that role were instead handed over to probabilistic reasoning, the results would be a mess. This is indicated, for example, when he writes:

To argue that from the mere fact that a given event has occurred invariably in a thousand instances under observation, without any analysis of the circumstances accompanying the individual instances, that it is likely to occur invariably in future instances, is a feeble inductive argument, because it takes no account of the Analogy… But to argue, without analysis of the instances, from the mere fact that a given event has a frequency of 10 per cent in the thousand instances under observation, or even in a million instances, that its probability is 1/10 for the next instance, or that it is likely to have a frequency near to 1/10 in a further set of observations is a far feebler argument; indeed it is hardly an argument at all. Yet a good deal of statistical argument is not free from this reproach — though persons of common sense often conclude better than they argue, that is to say, they select for credence, from amongst arguments similar in form, those in favour of which there is in fact other evidence tacitly known to them though not explicit in the premisses stated. (pp407-408)
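The inference Keynes calls “hardly an argument at all” can be made concrete in a few lines. The sketch below (the function name and the particular counts are my own illustration, not anything in the Treatise) simply treats an observed frequency as the probability of the next instance — precisely the move he objects to, since nothing in the calculation involves any analysis of the circumstances of the individual cases:

```python
# The inference Keynes criticises: treat a raw observed frequency
# as the probability of the next instance, with no analysis of the
# circumstances accompanying the individual cases.

def naive_frequency_probability(observations):
    """Return the raw frequency of an event as its supposed 'probability'."""
    return sum(observations) / len(observations)

# A given event with a frequency of 10 per cent in a thousand instances:
observations = [1] * 100 + [0] * 900
p_next = naive_frequency_probability(observations)
print(p_next)  # 0.1 -- but nothing here licenses applying this to the next case
```

The arithmetic is trivially correct; Keynes’ point is that its correctness as arithmetic is exactly what it has going for it, and no more.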

Indeed, things have not changed so much since Keynes’ day. Today too many who use econometric techniques largely circumvent proper argument — what Keynes would have called an argument from induction and analogy — and are instead more interested in projecting current trends forward. Only the Post-Keynesians, with their insistence on the non-ergodicity of economic data, really carry the torch for Keynes’ “common sense” view of how to handle data.

Some Bayesians will, of course, claim that they avoid this criticism. They say that they update their priors in line with new evidence. Maybe, but they continue to project the accumulated evidence into the future. Even when Bayesians test for robustness they are doing this. They are assuming “since the accumulated evidence so far says that my model is correct then I can rationally assume that it will be correct tomorrow”. This is old wine in new bottles.
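The Bayesian updating described above can be sketched with a standard Beta-Binomial model (the function names and the uniform prior are my own illustrative choices). The point of the sketch is that after enough evidence the posterior predictive probability is, for practical purposes, just the accumulated frequency projected onto the next instance — updating the prior does not escape the criticism, it formalises it:

```python
# A minimal Beta-Binomial sketch of prior updating: the posterior
# predictive probability converges to the accumulated frequency, so the
# forecast for the next instance is still a projection of past evidence.

def update_beta(alpha, beta, successes, failures):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior on a proportion."""
    return alpha + successes, beta + failures

def posterior_predictive(alpha, beta):
    """Probability that the next instance is a 'success' (the posterior mean)."""
    return alpha / (alpha + beta)

# Uniform Beta(1, 1) prior, then 100 successes in 1,000 observations:
a, b = update_beta(1.0, 1.0, 100, 900)
print(posterior_predictive(a, b))  # 101/1002, about 0.1008 -- essentially the raw frequency
```

The prior’s influence washes out as evidence accumulates, which is usually advertised as a virtue; in Keynes’ terms it means the machinery ends up doing exactly what the naive frequency argument does, only with more steps.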

What Keynes was really criticising was intellectual laziness. It’s nice to think that we can build computers and formulae that do our work for us. For some reason today such activity is given the mantle of Science — why, I don’t know. But in reality every problem we approach, every economic formation we study at any given moment in time, is unique in its particularity and we have to use our wits to unravel it as best we can. There are no easy solutions and it is a sad fact of our age that the harder, more challenging solutions are being daily degraded as being less scientific than the easy ones.

But there is also another hurdle: in economics there is a strong desire to emulate. Economists often want a strict set of rules that they apply to data that their peers are also following. I am not sure why this is the case as it does not appear to me to be the case in either the other social sciences or in the hard sciences. My guess, however, is that this is because economists try (unsuccessfully, I might add) to straddle the fence between the hard sciences and the social sciences.

Because their discipline is not actually a hard science and cannot mimic its methodology what we end up with is a discipline full of rigid, arbitrary rules that stifle creativity and ensure intellectual stagnation. In their quest to try to convince the world that they have a science as objective as chemistry or physics, the economists impose upon themselves an iron cage of capricious and silly rules that lead them down one garden path after another. One knows how to wake a lazy cat, but perhaps it is more difficult to make a fool stop dancing.


About pilkingtonphil

Philip Pilkington is a London-based economist and member of the Political Economy Research Group (PERG) at Kingston University. You can follow him on Twitter at @pilkingtonphil.
This entry was posted in Statistics and Probability.

2 Responses to Keynes on the Use and Abuse of Statistics and Probability

  1. Lars P Syll says:

    Good read, Philip. It also shows why reading Treatise is so important for understanding General Theory — something very few economists fully appreciate!

  2. Francis says:

    Wittgenstein once said: ‘Nonsense. Nonsense, – because you are making assumptions instead of simply describing. If your head is haunted by explanations here, you are neglecting to remind yourself of the most important facts’.

    The idea of method, i.e. scientific method, is somewhat chimerical anyhow. What exactly, for example, beyond the most mundane conception of investigation, do a paleontologist and a particle physicist have in common?

    Faddish imitation of what one presupposes to be the approach or method that others employ strikes me as being woefully unscientific and largely a priori in nature.
