Matheus Grasselli has responded once more to one of my posts. The unfortunate part is that he has dragged some other poor souls into the quagmire of misunderstanding and poor reading. I suppose it’s now on me — having brought his attention to these issues — to clear up his misunderstandings. As we will quickly see, Grasselli is not arguing with me or with the other authors; rather, he is arguing with himself.

First, his response to my own piece. This consists of two parts. The first is that Grasselli thinks that there is only one version of probability. He writes:

To say that there are alternative probabilities, one preferred by trained statisticians and another adopted by lawyers and judges is akin to say that there are alternative versions of chemistry, one suitable for the laboratory and another, more subtle and full of nuances, adopted by the refined minds of cooks and winemakers, honed in by hundreds or even thousands of years of experience. Clear nonsense, of course: the fact that a cook or winemaker uses tradition, taste, and rules of thumb, does not change the underlying chemistry.

As we will see throughout this response, there are actually two different types of probability: those that can be given a numerical estimate and those that cannot. If I flip a balanced coin, I can give it a numerical estimate: the chance of flipping heads is 0.5 and the chance of flipping tails is 0.5. If, however, I say that I think that Rand Paul will become the next Republican presidential nominee, I cannot give this a numerical estimate. I can make a good argument for why I think this. But I cannot give it a numerical estimate, as any estimate would be arbitrary.

Grasselli will respond that I can be a good Bayesian, give it an arbitrary estimate and then test my model against the data until I get a proper numerical estimate. But alas, I cannot, because Rand Paul’s nomination or lack of nomination is a unique event. It only happens once. By the time I know whether he has been nominated, the estimate will be meaningless; and prior to his nomination or lack thereof, I cannot assign a proper numerical value, for the aforementioned reason that it is a unique event.

So, contrary to what Grasselli claims, there are indeed two types of probability: those that can be numerically estimated and those that cannot.

The second part of his response to my post was similar to this. He seems to have misread my post to mean that Bayesian statistics has nothing to do with “degrees of belief”. This was simply not in the text. What I said was that Bayesian statistics requires quantitative measures of said degrees of belief and the more Keynesian approach does not. As we have already seen, this is quite obviously true.

Next, Grasselli launches a particularly misguided attack against Lars Syll. Here he simply has not read Syll’s interesting piece at all. He has merely scanned it to pick out easy targets — targets he himself constructs. You see, Syll is discussing a very particular application of Bayesian statistics in his piece; namely, the one that mainstream economists use in order to model so-called rational agents. Syll is trying to make the case that, and I quote, “it’s not self-evident that rational agents really have to be probabilistically consistent”. This is where his example of an agent that moves country comes in. It runs as follows:

Say you have come to learn (based on own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no own experience and no data) you have no information on unemployment and a fortiori nothing to help you construct any probability estimate on. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1, if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 10% to becoming unemployed and 90% of becoming employed.

What he is talking about is an agent in a model. All the agent can do, because of their Bayesian programming, is use the prior they have formed from their experience in “Sweden” and apply it to the new environment in which they find themselves. Syll’s point is that this is not what an actual rational person would do. Rather, they would say: “I don’t know what the unemployment situation is here”.

Grasselli takes Syll’s criticism of certain rational agent models and thinks that it is a naive criticism of Bayesian statistics. He then complains that we could estimate the unemployment in the new environment by applying arbitrary priors and running tests. But that was not Syll’s point at all. Syll was talking about a model of human behavior. He claimed that in certain rational agent models the agents simply project their previous priors forward and this is considered rational. But the unemployment example shows that this need not be rational at all. So, these models are not really modelling what a rational agent would do. This criticism has implications for how we treat genuine uncertainty in economic models.

Grasselli missed this, presumably, because he didn’t bother reading the piece. He just scanned it looking for easy targets; and found them, but not in the text.

Grasselli’s final comments about Kay are obscure. It is not clear whether he thinks that a court could be run using Bayesian statistics or not. Perhaps he might further enlighten us on this point — which was, by the way, the main point of my piece — and then we can have a debate rather than him attacking strawmen and me having to clean up the mess he makes.

OK, since you asked for clarification, here it is:

(1) there are no such things as probabilities that cannot be quantified. You can choose not to quantify one, but that does not mean it cannot be. In your Rand Paul example, all a Bayesian has to do is come up with a prior right now for how likely she thinks he will get the nomination, and update the probabilities based on each new piece of evidence (TV coverage, money coming into the campaign, etc.). At each step of the way, the probabilities that she assigns mean nothing more than the odds that she would put on a bet. It doesn’t mean that the event needs to be repeated. In fact, Bayesian statistics is especially suited for unique events. Look, Nate Silver (and countless bookies) does this on a daily basis, so saying it cannot be done is to close your eyes to reality.
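The updating procedure described in this point can be sketched in a few lines of code. This is only an illustration: the prior, the pieces of evidence, and the likelihoods below are all invented numbers, not real estimates of anything.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' theorem:
    P(H | E) = P(E | H) P(H) / P(E)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

p = 0.20  # an arbitrary prior that the candidate wins the nomination

# Each pair: (P(evidence | nominated), P(evidence | not nominated)).
# These likelihoods are invented purely for illustration.
evidence_stream = [
    (0.7, 0.4),  # e.g. a strong fundraising quarter
    (0.6, 0.5),  # e.g. favourable TV coverage
]

for l_true, l_false in evidence_stream:
    p = update(p, l_true, l_false)

print(round(p, 4))  # 0.3443
```

Whether the resulting number means anything is, of course, exactly what is in dispute in this thread; the mechanics themselves are uncontroversial.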

(2) Syll says “A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1, if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 10% to becoming unemployed and 90% of becoming employed.” This is about as clear a statement as he could make about what he thinks Bayesian statistics is. He’s not talking about agents in a model with some sort of “Bayesian programming”. He wrote “A Bayesian would, however, argue…”. The fact that this is wrong is unfortunate (i.e. no Bayesian would argue this), but also undeniable.

(3) Similarly, in my comment on Kay, I say “the court should look at further evidence and recalculate its belief that a bus from Company A actually hit the person. For example they could hear testimony from eye witnesses or look at video footage and use Bayes theorem to find the posterior probabilities”. Again, I don’t know how much clearer I can make that the court should, indeed, be run using Bayesian statistics.
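The courtroom procedure described in point (3) is a single application of Bayes’ theorem per piece of evidence. The sketch below uses hypothetical numbers (a made-up market share for Company A and a made-up witness reliability), not figures from Kay’s actual example.

```python
prior_A = 0.8               # hypothetical: Company A runs 80% of local buses
p_testimony_if_A = 0.6      # hypothetical chance a witness names A if A did it
p_testimony_if_not_A = 0.3  # hypothetical chance the witness names A anyway

# Bayes' theorem: P(A | testimony) =
#   P(testimony | A) P(A) / [P(testimony | A) P(A) + P(testimony | not A) P(not A)]
posterior_A = (p_testimony_if_A * prior_A) / (
    p_testimony_if_A * prior_A + p_testimony_if_not_A * (1 - prior_A)
)
print(round(posterior_A, 3))  # 0.889
```

Under these invented numbers the testimony raises the court’s degree of belief from 0.8 to roughly 0.89; further evidence would be folded in the same way.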

(1) Okay, I could argue with the Rand Paul example because it actually doesn’t make sense when thought through. But let’s go simpler: how can I assign a probability to my getting a phone call from a woman tomorrow morning? I cannot assign a numerical probability simply because it’s a unique event that takes place at a set time. In order to set a numerical probability I would have to set an arbitrary prior and then run tests every morning. But because I’m only concerned with tomorrow morning, I cannot do this. Therefore I cannot set a numerical probability.

(2) No, you’re still misreading. He is referring to what a Bayesian modeller would have their rational agents do. Look: “That is, in this case – and based on symmetry – A RATIONAL INDIVIDUAL would have to assign probability 10% to becoming unemployed and 90% of becoming employed.” He is talking about how Bayesians approach the modelling of rational agents in certain models. Read the damn post again, Grasselli. This is embarrassing.

(3) This requires another post. But you are wrong and the hint as to why is in point (1).

Excellent!

“Why I don’t like Bayesian statistics” by Andrew Gelman, professor of statistics and political science at Columbia University, April 2008: http://andrewgelman.com/2008/04/01/problems_with_b/

and further by Gelman: http://andrewgelman.com/2008/07/31/responses_to_my/

and

“Critique of Bayesianism” by John D. Norton, Director, Center for Philosophy of Science, and Professor, Department of History and Philosophy of Science, University of Pittsburgh, in Induction and Confirmation: http://www.pitt.edu/~jdnorton/homepage/research/ind_crit_Bayes.html

“The Subjectivity of Scientists and the Bayesian Approach” by James Press, Distinguished Professor of Statistics, Emeritus, University of California, and Judith M. Tanur, Department of Sociology, Stony Brook University

This post (just as the one by Syll) is based on a misunderstanding of subjective expected utility (SEU) theory, so Grasselli is mostly right. The theory doesn’t require that my probabilistic beliefs be estimated from some real data, or quantifiable from objective evidence. The theory simply states that if I have preferences over random outcomes (that is, offered two bets, I can always say which I prefer), and those preferences satisfy certain properties (as enumerated e.g. by Savage, or Anscombe & Aumann), I act as if my preferences were represented by a utility function and a subjective probability distribution. Or, in the words of Keynes:

“Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.”

The theory doesn’t say where the beliefs come from, just as neoclassical consumer theory doesn’t explain where indifference curves come from – that’s left for psychologists to determine in general, or for applied economists in particular situations. Sometimes macroeconomists assume that subjective beliefs are equal to the objective probabilities generated by the model (rational expectations) – but that’s an additional assumption that’s not part of the core theory.
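As a toy illustration of the SEU representation just described, an agent with a utility function and subjective beliefs ranks two bets by expected utility. The utility function, the states, and the beliefs below are all invented for the sake of the example; SEU itself does not say where they come from.

```python
import math

def expected_utility(bet, beliefs, utility):
    """bet maps each state of the world to a payoff; beliefs maps the
    same states to subjective probabilities summing to 1."""
    return sum(beliefs[s] * utility(x) for s, x in bet.items())

utility = math.sqrt                   # a risk-averse utility, u(x) = sqrt(x)
beliefs = {"boom": 0.3, "bust": 0.7}  # purely subjective; no data behind them

bet_a = {"boom": 100, "bust": 0}      # pays 100 only in a boom
bet_b = {"boom": 25, "bust": 25}      # pays 25 for certain

eu_a = expected_utility(bet_a, beliefs, utility)  # 0.3 * 10 = 3.0
eu_b = expected_utility(bet_b, beliefs, utility)  # 5.0
print(eu_a < eu_b)  # True: the agent prefers the sure thing
```

The representation theorem runs in the other direction, of course: it starts from consistent preferences over such bets and recovers the utility and the beliefs, rather than assuming them.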

Now, there is a hint of valid criticism of SEU and the Bayesian paradigm here, in the sense that they don’t allow one to express ignorance. But that’s not news – the criticism goes all the way back to the Ellsberg (1961) paradox (if not earlier), and in past decades decision theorists have been quite busy developing more general theories that account for ambiguity aversion (see e.g. Gilboa & Marinacci [1] for a survey). Of course, all this research has received pretty much zero attention from heterodox economists.

[1] http://itzhakgilboa.weebly.com/uploads/8/3/6/3/8363317/gilboa_marinacci_ambiguity_and_bayesian_paradigm.pdf

This really isn’t the debate we’re having here. I was just pointing out that Grasselli didn’t even recognise that Syll was discussing SEU models and thought that, due to this, Syll did not understand that Bayesians can assign arbitrary priors. Grasselli often doesn’t read the stuff he’s criticising and that is what I’m pointing out here.

On another note, the problem with SEU lies here: “that is, offered two bets, I can always say which I prefer”. This isn’t just a problem because you cannot express ignorance, it is also a problem because — and this is what my pieces are about — there are many events that cannot be assigned numerical estimates. So, it is not realistic to “offer two bets” to our rational agent because there are many unique events that cannot be assigned numerical estimates. See: my woman on the phone example in my response to Grasselli’s comment above.

Re: Syll’s example – I think the point is that there is no single mainstream model of “rational agents”. Syll simply makes up an absurd example that relies on the implicit assumption that the agent applies the same prior to the unemployment rate in all countries, even though a Bayesian framework easily allows different priors for different countries – and agents in such a model would be no less “rational”.

“there are many events that cannot be assigned numerical estimates. ”

What does it mean that something “cannot be assigned a numerical estimate”? Sure it can. I assign a probability of 0.574% to the event that a woman calls you tomorrow morning. My belief is not backed by any data, but SEU doesn’t require it to be. In the case of Rand Paul and the presidential elections, there are websites where people can make bets, right now, on this exact outcome even though it’s a one-time event.

(1) Syll’s point is that, in the actual case where the worker is moved and has to make a one-time decision, they simply do not know. Assigning different priors is arbitrary and simply a tactic to cover up the fundamental uncertainty.

(2) Your probability that a woman will call me tomorrow is arbitrary and meaningless. You can make up any probability estimate you want for this, but no matter what it will be arbitrary and meaningless.

(3) The Rand Paul example is also wrong. What the bookies estimate is the desire of various bettors to bet based on priors that they have formulated in a fairly arbitrary manner. This does not produce some sort of actual probability estimate that such an event will happen. It merely reflects the arbitrary beliefs of gamblers.

If ten gamblers walk into a casino most of them are likely to lose the money that they have brought in. Otherwise the casino would go out of business. But every gambler walks in thinking that he will win. When he engages in bets he is assigning priors based on this belief. This belief, however, is objectively wrong because the house always wins.

The point is that gamblers betting on Rand Paul’s nomination need not be correct. And so the results of the betting community in the aggregate tell us nothing about the probability of Rand Paul being nominated. This is instinctively obvious because, if they did, Rand Paul could just watch the odds the bookies were giving out to know for sure whether he will be nominated.

Even if priors are chosen in an arbitrary way (say, let the prior over the unemployment rate be a beta distribution, because then the math works out nicely), the theory can still produce qualitative predictions – e.g. how the worker’s choice changes as the precision of his prior varies. Maybe such predictions turn out to be wrong, but they would at least be testable, unlike repeatedly crying out “fundamental uncertainty”.
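The conjugate beta-prior setup mentioned here is easy to sketch. In the illustration below (all numbers invented), two workers hold beta priors with the same mean unemployment rate but different precision; after seeing identical data, the less precise prior moves further. This is one example of the kind of qualitative prediction referred to above.

```python
def beta_posterior_mean(a, b, unemployed, employed):
    """A Beta(a, b) prior is conjugate to Bernoulli data: the posterior
    is Beta(a + unemployed, b + employed), whose mean is computed here."""
    return (a + unemployed) / (a + b + unemployed + employed)

# Same prior mean a / (a + b) = 0.10, but different precision a + b.
tight = (10, 90)  # a confident prior
vague = (1, 9)    # same mean, far less confident

# Both workers observe the same (made-up) data: 5 unemployed out of 20.
m_tight = beta_posterior_mean(*tight, 5, 15)  # 15 / 120 = 0.125
m_vague = beta_posterior_mean(*vague, 5, 15)  # 6 / 30  = 0.2
print(m_tight, m_vague)
```

The tight prior barely budges from 0.10, while the vague prior jumps most of the way to the sample frequency of 0.25: a testable difference in behaviour driven purely by prior precision.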

As for your other points – SEU doesn’t require that subjective beliefs are correct, or non-arbitrary, or meaningful, or reveal “true” odds when aggregated. It only requires that they sum to 1 and are updated according to Bayes theorem – so nothing you wrote is actually an argument against SEU. Again, lumping together Bayesian decision theory with rational expectations is just incorrect.

Well then you’ve confirmed what I’ve thought from the beginning. SEU theory is more meaningless babble.

Grasselli and I are having a different discussion. We’re concerned with whether Bayesian theory can be used to say anything interesting about unique events. It cannot, of course (which I think you’ve just acknowledged), but it’s fun to watch Grasselli twist in the breeze in this regard.

I liked the original post, but this response to Grasselli has muddied the waters. His response is technically correct on the details, but does not engage with the original main point. According to Jack Good, there are thousands of different versions of Bayesianism, so whatever counter-example you construct, there is likely to be a version for which your criticism is not valid.

It would be good to have a very specific (ideally, mathematical) version of Bayesianism in a form in which its application was not controversial, together with a justification. The attempts that I have seen justify Bayesianism in terms of other things that also seem ‘mostly right’ but not universal. Ellsberg and Allais would be a good place to start. Or maybe the financial crisis, or the case against the Assad regime?

Most of Grasselli’s writing is characterised, as you sort of note, by misreading those he has criticised because he assumes that they are somehow less clever than him or missing the point they are actually making because he thinks that his other, made up issues are more pressing. This is unfortunate, but I cannot do much about it. It is nice to riff off his stuff if only to show the blindspots of the typical math guys who invade economics every few years — and Grasselli is nothing if not that.

But no, as far as my unique-event example goes, there is no form of Bayesianism that can deal with it. Bayesianism still rests on the repetition of events; it treats these in a different manner to frequentism, but it requires that the events be repeatable regardless; otherwise the priors it assigns are arbitrary and useless. Thus it cannot deal with unique events.

Phil, As I understand them, the various arguments for Bayesianism largely assume that when there is no principled way of assigning priors one need only be concerned with consistency. Yes, this is arbitrary, but necessarily so. Yes, this is useless, but (they claim) so is everything else. So calling the priors arbitrary and useless is unpersuasive. I find the arguments of Keynes, and also those of Cedric Smith, persuasive.

Suppose you visit Hong Kong and are invited to bet on a two-horse race, being offered better than even odds on whichever horse you choose. Is this a no-lose bet?
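Whether or not the Hong Kong bet is truly no-lose in practice is the question being posed; the arithmetic behind it, at least, can be checked. If the bookmaker will accept stakes on both horses at better than even money (the decimal odds below, both above 2.0, are invented for illustration), then splitting the bankroll across the two horses yields the same payout in either outcome, and that payout exceeds the stake:

```python
def guaranteed_return(decimal_odds, bankroll=1.0):
    """Stake each horse in inverse proportion to its decimal odds; the
    payout is then identical whichever horse wins (a Dutch book against
    the bookmaker when the implied probabilities sum to less than 1)."""
    implied = [1.0 / o for o in decimal_odds]  # implied win probabilities
    total = sum(implied)                       # < 1 means arbitrage exists
    stakes = [bankroll * q / total for q in implied]
    payouts = [s * o for s, o in zip(stakes, decimal_odds)]
    return payouts[0]  # the same for every outcome

odds = [2.2, 2.1]  # illustrative: better than even money on either horse
print(guaranteed_return(odds) > 1.0)  # True: a sure profit on a $1 outlay
```

The catch, presumably, is the one the commenter is hinting at: in practice you may only be allowed to back one horse, and the bookmaker offering such odds likely knows something you do not.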

Agreed on most points. The Bayesians like Grasselli that I have encountered claim that if the original arbitrary priors can be tested and altered over and over again in line with Bayes’ theorem then we can meaningfully test their complex and, I think, clumsy mathematical models. I hold that this is not the case because the vast majority of material we deal with in economics should be considered “unique events” that are not repeated over and over again. Or, put another way: historical data is heterogeneous and non-ergodic. That is what these posts are about and that is why Grasselli keeps responding.