## Keynes and the “Fallacy of Aggregation” in Probability Theory

I recently came across a very nice lecture series by the philosopher Patrick Maher on Keynes’s discussions of probability (scroll down to the three Keynes lectures in this link — the other lectures are also worth a browse for those interested in the philosophy of probability). The first lecture has some nice quotes from Keynes’s Treatise on Probability about non-measurable probabilities and probabilities that cannot even be compared ordinally. But it is the second and third lectures that I want to focus on here.

In these two lectures Maher lays out, in a particularly clear manner, a problem that Keynes found with the so-called Principle of Indifference. The Principle of Indifference basically states that if there are, say, two possibilities and I have no further knowledge, then I should rank these possibilities as equally probable. Say there are two cups in front of me and one has a ball under it. Given that I have no further knowledge, I can assign each cup a 50% probability of having the ball under it.

The Principle of Indifference follows a very simple formula. For n possibilities about which we have no further knowledge, we assign each a probability of 1/n. So in our cup example we have two possibilities — two cups — so n = 2, and the Principle of Indifference tells us to assign each cup a probability of 1/2, or 50%.
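The rule can be sketched in a few lines of Python, using exact fractions so no rounding obscures the arithmetic (the function name `indifference` is my own label, not anything from Keynes or Maher):

```python
from fractions import Fraction

def indifference(outcomes):
    """Assign each of the n outcomes the same probability, 1/n."""
    n = len(outcomes)
    return {outcome: Fraction(1, n) for outcome in outcomes}

# Two cups, one ball, no further knowledge: each cup gets 1/2.
print(indifference(["cup 1", "cup 2"]))
```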

But Keynes found that this very intuitively reasonable proposition ran into contradictions. To see this, consider an urn that contains two balls. We know that each ball is either white or black, but we do not know the ratio of white to black. Since there are only two balls, we can lay out the possible ratios as follows:

1. 100% of the balls are white.

2. 50% of the balls are white.

3. 0% of the balls are white.

Now we have three possibilities — so n = 3. Since we have no other information, we apply the Principle of Indifference and find that the probability of each ratio being true is 1/3, or 33.33%. That is, each ratio has a one-in-three chance of being true.

But, Keynes says, look what happens if we reformulate the statement. Let’s call the two balls A and B. Now let us lay out the possibilities.

1. A and B are both white.

2. A is white, B is black.

3. A is black, B is white.

4. A and B are both black.

Again we have no further information, so we apply the Principle of Indifference and find that each statement has a 1/4 chance of being true.

Now apply the findings of the latter example to those of the former — recall that the two setups are identical; we have just stated the problem in a different manner. Note that both (2) and (3) in the second example correspond to (2) in the first example. Therefore we get the modified result:

1. 100% of the balls are white. (1/4)

2. 50% of the balls are white. (2/4 or 1/2)

3. 0% of the balls are white. (1/4)

We get entirely different answers by simply framing the question in a different manner!
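The contradiction is mechanical enough to check by enumeration. The sketch below (plain Python; the ratio labels are my own choices) applies the Principle of Indifference under both framings and then translates the second framing back into ratios:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Framing 1: indifference over the three possible ratios of white balls.
ratios = ["100% white", "50% white", "0% white"]
by_ratio = {r: Fraction(1, len(ratios)) for r in ratios}

# Framing 2: indifference over the four colourings of the named balls A and B.
colourings = list(product(["white", "black"], repeat=2))
by_colouring = {c: Fraction(1, len(colourings)) for c in colourings}

# Translate framing 2 back into ratios by counting the white balls.
label = {2: "100% white", 1: "50% white", 0: "0% white"}
from_colourings = Counter()
for (a, b), p in by_colouring.items():
    from_colourings[label[[a, b].count("white")]] += p

print(by_ratio)               # each ratio gets 1/3
print(dict(from_colourings))  # 1/4, 1/2, 1/4: a different answer
```

The two dictionaries disagree on every ratio, which is exactly Keynes’s point: the same principle gives different answers depending on how the possibilities are carved up.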

As Maher notes, Keynes asserts that the latter approach is the correct one. Maher criticises Keynes, saying that this choice is entirely arbitrary. But I think that he’s wrong. The reason that the second application of the Principle of Indifference is the correct one is that it breaks the argument down into its most elementary parts.

By using the ratio method in the first approach we do not actually examine some of the possibilities, and this gives us an incorrect measure of the probability. What happens is that we wrongly aggregate the components of the argument — i.e. the two independently existing balls — into a single ratio (50% of balls are white). The correct approach is to ask of each ball in turn, “what is the likelihood that this ball is black or white?” We might call the error made in the ratio example the “fallacy of aggregation”.

I think that the manner in which we interpret this is actually at odds with some of the quotes that Maher provides from Keynes in his first lecture. Consider the following:

A proposition is not probable because we think it so. When once the facts are given which determine our knowledge, what is probable or improbable in these circumstances has been fixed objectively, and is independent of our opinion… When we argue that Darwin gives valid grounds for our accepting his theory of natural selection, we do not simply mean that we are psychologically inclined to agree with him… We believe that there is some real objective relation between Darwin’s evidence and his conclusions, which is… just as real and objective, though of a different degree, as that which would exist if the argument were… demonstrative.

As we can see Keynes is very fussed about the idea of some “objective reality” underlying the object of study — one that is “independent of our opinion”. I would be less concerned with this. As Keynes himself shows, it is the manner in which we cast the argument that yields two very different outcomes. It really is the manner in which we formulate our statements that leads to different conclusions. This has very little to do with “objective reality”.

It is not so much that the second formulation is better at aiming at some “objective reality” but rather that it breaks down the components of the argument in a more satisfactory manner. It is not because our reasoning is closer in line with the “real world” that we get better information in the second formulation of the problem but rather that our reasoning is better thought out.

Finally, a word on how this relates to economics. This example shows just how easy it is to fudge an econometric study by simply choosing, say, the wrong method of aggregation. It is a lot easier to detect these logical errors when the argument is made out in the open, in plain English. Trying to unearth them in an econometric study that is replete with tables containing t-statistics and r-squareds is tedious to the point of being unworthy of one’s time.

Keynes himself came across something very similar in his review of Tinbergen’s early work in econometrics. He wrote,

It will be observed that Prof. Tinbergen includes profits earned and the rate of interest as amongst the factors influencing investment. But, as Prof. Tinbergen himself points out, some economists would argue that it is the difference between these two factors which matters, rather than their absolute amounts. How does that affect matters? Moreover, they would mean the difference between profits measured as a percentage on current cost of capital goods and the rate of interest. Now, Prof. Tinbergen does not seem to care in what unit he measures profit. For the pre-war United States it is the share price index, for the pre-war United Kingdom non-labour income, for pre-war Germany dividends earned as a percentage of capital, for the post-war United States the net income of corporations, and for the post-war United Kingdom net profits earned as a percentage of capital. Thus it is sometimes a rate and sometimes an absolute quantity; and when in the final outcome he multiplies this hotch-potch, sometimes by a large coefficient and sometimes by a small one, and then subtracts from it the rate of interest multiplied (usually) by a small coefficient, I do not know whether there is room here for the theory that investment may be governed by the difference between the rate of profit on cost and the rate of interest on loans, or whether we have merely reached the number of the Beast. (pp. 562-563)

Of course, the reader interested in defending econometrics will say “oh, but this is just an irresponsible use of the data… we would never tolerate that!”. To which I would say: lies! Almost every econometric study that I have ever examined in detail makes these mistakes. And trying to pick them apart is, as Keynes says later in the paper, “a nightmare to live with”. The formal presentation of econometrics studies can hide all sorts of nonsense in this way.

Keynes even notes — and again, this will chime with anyone who has ever read such studies in depth — that Tinbergen is perfectly aware that all of this is problematic but carries on regardless.

Prof. Tinbergen is by no means unaware of what a difference the way he measures profit can make. He gaily points out as a matter of some interest, but not of any concern, that the series which he takes to represent profits in Germany leads to a regression coefficient for that factor twice as great as the series he takes for the United States, and the series he takes for Great Britain to a coefficient nearly four times as great. (This is an extraordinary example of the candid way in which, if only he is allowed to get on with all this arithmetic unhindered, he is ready to admit at the end of it what must seem to the reader to be devastating inconsistencies.) He insists that his factors must be measurable, but about the units in which he measures them he remains singularly care-free, in spite of the fact that in the end he is going to add them all up. (p. 563)

The form of the argument takes over the content in this regard. All sorts of weird stuff slips in under the radar, which is probably why econometric studies are very rarely repeatable. If economists were primarily made to formulate their empirical arguments in plain English and only use econometrics as a presentational supplement to bolster certain very specific points — the applied work of Wynne Godley is outstanding in this respect — then an awful lot less nonsense would pass through the journals and into the halls of power. But econometrics often hides simple incompetence, and those who engage in it would be very reticent to see that exposed.

If Keynes’s fallacy of aggregation shows us nothing else, it should at least show us that when it comes to applied probability theory (i.e. econometrics) it is not so much the tools that are important as the person doing the work. And if the tools become a fetish in and of themselves, I see no good reason not to get rid of them to a very large extent.

Philip Pilkington is a macroeconomist and investment professional. Writing about all things macro and investment. Views my own. You can follow him on Twitter at @philippilk.

### 8 Responses to Keynes and the “Fallacy of Aggregation” in Probability Theory

1. GermanAngst says:

This corresponds to the sure-thing principle, which requires that, in choosing between acts, the decision-maker does not take into account those states in which the acts yield the same consequence. This axiom corresponds to the “independence axiom” in von Neumann-Morgenstern.
In the example above it is obvious to follow Keynes, and I agree with you on the application to e.g. econometrics, but in the real world can you act without some sort of sure-thing principle? I don’t mean that there is in fact an objective and independent reality, but one which we work to in practice and which then can become reality?
Is some “fallacy of aggregation” necessary?

• I think you can act without a sure-thing principle in place. You just take a bet, basically. That bet is (a) rarely based on numerical probabilities, (b) often based on non-numerical (ordinal) probabilities and also (c) often based on total uncertainty. I think that we basically do this every day.

I think the really relevant question is how we formulate the bets that fall into (b). We don’t come “hard-wired” with the ability to do this; we must learn by experience/induction. I think the learning-by-doing — I’m thinking of the work of the philosopher Hubert Dreyfus here — is actually more interesting than the fact that we rank the probabilities ordinally. But decision theory etc. is so obsessed with the rankings that it loses sight of the really interesting questions.

• GermanAngst says:

I agree, especially on the different categories. I always wonder about the relationship between (a), (b) and (c), or degrees of uncertainty. From this perspective total uncertainty is never really out of the question, and therefore an assumed (b) bet could actually be a (c) bet.
But if I really have to bet on something, it seems, at least from a behavioural/emotional perspective, that it is difficult to act based on total uncertainty. What makes the learning-by-doing possible when you can’t go back in time?

• Some people learn by doing better than others, I suppose. I’m fairly convinced that a large portion of the population do not learn-by-doing in many of their activities after a certain age (maybe their mid-20s or so). People probably continue to learn by doing in some limited fields — work, mainly, and also in newly acquired hobbies — but in their general behavior (and investment/saving/consumption decisions!) I think that many do not and that invalidates most marginalist theory pretty quickly.

As to acting under total uncertainty, it’s an interesting question. We do it every day without realising it. I think that it’s when it becomes conscious that it causes anxiety. You can explain the gold price after 2008 that way, for example: people trying to grasp at certainty in the face of conscious uncertainty.

Politicians have to act under conditions of true uncertainty a good deal of the time too. This explains their love of arbitrary rules (like the 3% budget deficit rule under Maastricht). Indeed, in the face of true uncertainty people will often generate arbitrary rules. That is what anthropologists who study “totem and taboo” systems generally find. Economists do this too, by the way. Monetarism is a prime example.

• GermanAngst says:

Again, I totally agree. But where does the good “learning-by-doing” start and the bad “totem and taboo” end? The difference between (b) and (c) is of great importance. I think you are right when you say that we often don’t realise that we act under total uncertainty, and that economists do it too. But is it really just pure rationalisation, or rather one way of dealing with anxiety? I always have that in the back of my head when people talk about a sure thing on, e.g., stock markets.
Also, I don’t really know if this fits here, but there is an ethnographic case study on the use of wampum and beaver as money in the North American colonies. It’s interesting because both the Indians and the Europeans thought of their own money as superior but still used both over quite some time.

• Actually I think about that a lot. Formulating that coherently would be the Holy Grail of epistemology. That’s what Kant was shooting for, I think. Frankly, I don’t think we can define it. We just have to trust that we can tell good from bad.

I know that sounds a little disheartening. But I can’t come up with much else. Not for now, anyway.

• GermanAngst says:

Ultimately we probably can’t define it, but I think one can say more about how emotions work. There is a lot in philosophy and sociology on emotions, and you mentioned Dreyfus, who seems to rely on Heidegger. He understood the importance of emotions. The question then turns from “knowing that” to “knowing how”, which sheds some light on the intuitions of people. At least to me it seems that the combination of reason and emotion is what lets us act.
something like this.

2. NeilW says:

Beware Economists bearing Aggregate Stats.