Finance Panglossian: A Eulogy to Lawrence Henry Summers


Larry Summers has dropped out of the race for Fed chair — and the stock market has rallied! The irony should be noted, of course, because Summers has come to exemplify the Panglossian Wall Street liberal of the Clinton era and, one would hope, with the rather dramatic capsizing of his political boat amidst a storm of opposition, such an archetypal figure is now fading into the dusk; a dinosaur from more primitive and regrettable years, now long since past.

I will not, however, engage in a screed against Summers’ questionable career trajectory, his dubious opinions of women or anything else which I’m sure more capable people than I are eulogising today now that Summers has, in a sense, passed on. Rather, I would prefer to turn to a paper that Summers co-wrote in 1990 entitled Noise Trader Risk in Financial Markets. This paper, I think, gives us insight into the mind of the Panglossian Wall Street liberal of the Clinton era as it was in the process of formation.

The paper, as the title suggests, is part of the “noise trader” literature and is generally associated with the so-called New Keynesian school. The noise trader literature builds on the well-known Efficient Markets Hypothesis (EMH) literature and, in characteristic New Keynesian style, adds some frictions. The idea of “noise” lying behind the noise traders’ views, commonly associated with the name Fischer Black whose bogus nonsense I have written on before (here and here), basically states that markets cannot exist without noise because otherwise nobody would trade. Here is Black from his characteristically poorly written paper Noise:

Noise makes financial markets possible, but also makes them imperfect. If there is no noise trading, there will be very little trading in individual assets. People will hold individual assets, directly or indirectly, but they will rarely trade them.

The noise trader theorists then build models in which those traders that are trading on noise — which is effectively “bad information” — get the upper hand and, through a sort of process of intimidation, drive the typical EMH trader out of the market causing all sorts of chaos. Thus we have a sort of Gresham’s Law dynamic: bad traders drive out the good. The moral overtones so typical of theories based on mainstream microeconomics should be noted, and noted well. (It is also pallid nonsense based on the poor use of metaphor, as I have pointed out before).

The noise trader theory, however, is only used to show frictions in otherwise harmonious markets. Although the paper in question does entertain the idea that noise traders might take over the market and even that “rational” investors might then spend their time trying to anticipate said noise traders, the story is still a goodies and baddies narrative in which it is only when the baddies win out entirely that the market can become unstable. The proof, of course, is in the pudding: only a few years later Summers became a zealous and infamous deregulator, crushing those who disagreed with him, thus proving beyond a shadow of a doubt that the noise trader theories were constructed to describe only minor deviations and “special cases”.

Thus, since the general case for financial markets, according to the New Keynesian Panglossians, is that they are fit and healthy, noise is assumed to be an exception that will not spread too far. One can see this clearly in the blog post that Summers’ co-author on the paper in question, Brad DeLong, published in the wake of the 2008 crisis.

Among the things that DeLong says he was not expecting from the crisis were “the discovery that banks and mortgage companies had made no provision for how the loans they made would be renegotiated or serviced in the event of a housing-price downturn”; “the discovery that the rating agencies had failed in their assessment of lower-tail risk to make the standard analytical judgment: that when things get really bad all correlations go to one”; and “the panic flight from all risky assets – not just mortgages – upon the discovery of the problems in the mortgage market”.

Put simply, DeLong assumed that even if there were noise in the financial markets, such noise was, at the end of the day, probably just a small ripple on an otherwise calm ocean. Even if housing did take a downturn in the late 2000s, the assumption that not just DeLong but basically all New Keynesians made was that, since the rest of the financial system was efficient and thus robust, there would be no serious crisis. This would perhaps be forgivable if one ignored the many crises that had arisen during the Clinton years — such as the East Asian crisis, the Russian debt crisis, and the implosion and subsequent bailout of Long-Term Capital Management.

Summers and many of his colleagues have, of course, had their Road to Damascus moment and have come to see the light — but only gradually, as Summers was the key figure in the Obama Administration blocking a larger and much needed stimulus after the crisis. Yet despite their conversion their legacy, as indicated by Summers’ fall from grace, may already be fading — and fast. Their contributions proved to be not merely irrelevant to real-world economic conditions, but potentially blinding myths responsible in part for the deregulatory zeal of the Clinton years — deregulation that is now proving so hard to reverse.

When Summers stands in front of those pearly gates I truly hope that St. Peter forgives him and lets him in, because it appears that we mere mortals here on earth have no such capacity. Rest in peace, Lawrence Henry Summers.


Keynes’ Philosophy: Induction, Analogy and Probability


In a recent post I dealt with Keynes’ opinions on the application of statistics and theories based on probability (e.g. econometrics). There I noted that Keynes thought that much applied work failed because it improperly deployed Analogy and Induction. The natural question, which some then asked, was “what on earth are Analogy and Induction?” In this post I will deal with Keynes’ views on these two processes of reasoning — again, this is not hero-worship; Keynes was fallible and I disagree with him on many points, but I think that were contemporary economists to have a better understanding of these issues much of the discipline’s irrelevance would begin to fade away.

According to Keynes all science is basically a process of Analogy and Induction. In his A Treatise on Probability Keynes draws on an argument laid out by Hume in the latter’s Enquiry Concerning Human Understanding. It might be worth quoting from Hume at length here in order to move the argument along:

In reality, all arguments from experience are founded on the similarity which we discover among natural objects, and by which we are induced to expect effects similar to those which we have found to follow from such objects… Nothing so like as eggs; yet no one, on account of this appearing similarity, expects the same taste and relish in all of them. It is only after a long course of uniform experiments in any kind, that we attain a firm reliance and security with regard to a particular event. Now where is that process of reasoning which, from one instance, draws a conclusion so different from that which it infers from a hundred instances that are nowise different from that single one?

What Hume refers to as the similarity between objects — in this case, eggs — Keynes refers to as “Analogy”. We look at an egg, eat it, taste it and then look at another expecting the same taste simply because we equate the two by the way they look, feel and so on. This is a fundamental part of our reasoning that often goes unnoticed and it is what Keynes calls Analogy. It is the comparison of objects that we think to be alike.

Induction refers to the latter part of Hume’s argument; namely, increasing the number of instances of observation. In fact, Keynes refers to this increase in the number of experiments as Pure Induction. And he refers to any argument that combines Pure Induction and Analogy as an “inductive argument”.

An inductive argument is thus one like that undertaken by Hume with his eggs. He draws an Analogy between what he thinks to be like objects and then undertakes experiments — by eating them — to see if they yield the same “taste and relish”. The argument comes to a close after Hume has eaten sufficient eggs to convince himself that he does indeed get said “taste and relish”.

Keynes, however, is not satisfied with Hume’s presentation. He thinks it too simple. So, he introduces some more distinctions that help us understand further the nature and form of inductive arguments. The most important of these is what he calls Negative Analogy. What is a Negative Analogy? Well, say we said to Hume “Mr. Hume, your egg experiment is very good but perhaps you should vary it somewhat. Why not take them…” — and this is Keynes’ example — “…to the country and the city and try them there to see if they still taste the same? Then why not try them first in June and then in July?”

If Hume engaged in such an experiment he would, of course, find that the same eggs tasted the same in the country and the city, but that, whereas they were delicious in June, they had gone rotten by July.

Keynes says that when we increase the instances of Negative Analogy we increase certainty because we eliminate the possibility that other factors might be affecting our experimental results. The more variations we can show to have no effect on the underlying relationship we are trying to establish, the firmer the Negative Analogy in question becomes.

This framework then allows us to evaluate the Probability that our argument might be true. Keynes considers this in light of a hypothetical case where we try to find Analogy based on two variables — perhaps imagine that Hume not only wanted to see if the objects called “eggs” tasted the same but also whether they would taste bad after leaving them for a month. In his Treatise on Probability Keynes writes:

In an inductive argument, therefore, we start with a number of instances similar in some respects AB, dissimilar in others C. We pick out one or more respects A in which the instances are similar, and argue that some of the other respects B in which they are also similar are likely to be associated with the characteristics A in other unexamined cases. The more comprehensive the essential characteristics A, the greater the variety amongst the non-essential characteristics C, and the less comprehensive the characteristics B which we seek to associate with A, the stronger the likelihood or probability of the generalisation we seek to establish. (pp219-220)

So, let’s apply that to our egg example. We want to establish that eggs taste the same and that they all go bad when left for a month. We can test this, and thus affirm the Positive Analogy through experiment. But we must further strengthen the argument through Negative Analogy — so we must, for example, eat the eggs in different places or use different means to eat them or undertake different postures while eating them to ensure that these aspects are not affecting the experiment. As Keynes writes:

These are the three ultimate logical elements on which the probability of an empirical argument depends — the Positive Analogies and the Negative Analogies and the scope of generalisation. (p220)
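
For readers who find the A/B/C notation abstract, here is a small toy rendering of the egg example in Python (my own construction, not anything Keynes wrote): A is the respect in which the instances are alike (they look like eggs), B is what we hope to generalise (they taste good), and C collects the non-essential circumstances we deliberately vary. A variation that leaves B intact strengthens the Negative Analogy; one that breaks it exposes a circumstance that was relevant all along.

```python
# Toy illustration of Positive and Negative Analogy (my own sketch, not Keynes').
# Baseline instance: an egg eaten in the city in June tastes good, so B holds.
baseline = {"place": "city", "month": "June", "tastes_good": True}

# Each variation changes one non-essential circumstance (the C's) at a time.
variations = [
    {"place": "country", "month": "June", "tastes_good": True},   # place varied
    {"place": "city",    "month": "July", "tastes_good": False},  # month varied: gone rotten
]

for instance in variations:
    changed = [k for k in ("place", "month") if instance[k] != baseline[k]]
    if instance["tastes_good"] == baseline["tastes_good"]:
        print(f"Varying {changed} left B intact: the Negative Analogy is strengthened.")
    else:
        print(f"Varying {changed} broke B: {changed} was a relevant circumstance after all.")
```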

Keynes then distinguishes two different types of generalisation. On the one hand, we have Universal Inductions in which the relationship will always and invariably hold true. And on the other we have Inductive Correlations where the relationship holds good some of the time — possibly with a given probability (1 in 50 swans are black and so on).

What should be stressed, however, is that Keynes does not, like those before him, seek out absolutely true inductions; rather, inductions provide us only with the probability that a given generalisation is true. Keynes again:

An inductive argument affirms, not that a certain matter of fact is so, but that relative to certain evidence there is a probability in its favour. The validity of the induction, relative to the original evidence, is not upset, therefore, if, as a fact, the truth turns out to be otherwise. (p221)

“Aha!” the econometrician will say, “We know all this already. This is precisely what we try to do in econometrics. Keynes was one of us, after all!”

While it is true that econometrics aims at the same goal as Keynes’ probability theory of induction, this is not to say that what is being done is the same. Keynes’ problem with econometrics was that it was a poor means to undertake an inductive argument. One example of this is the assumption of the homogeneity of historical time in econometric studies. Keynes discusses this in his Professor Tinbergen’s Method:

Put broadly, the most important condition is that the environment in all relevant respects, other than the fluctuations in those factors of which we take particular account, should be uniform and homogeneous over a period of time. (p566)

Let’s say that we are undertaking an investigation of the relationship between interest rates and the rate of investment. And let’s say that we are taking a twenty year time period. We cannot, as econometricians typically do, study this by simply running regressions on twenty years of historical data. Instead we must examine it on a case-by-case basis — we must study each change in the rate of interest and see if each change had relatively constant effects on the rate of investment. As Keynes says:

The first step, therefore, is to break up the period under examination into a series of sub-periods, with a view to discovering whether the results of applying our method to the various sub-periods taken separately are reasonably uniform. If they are, then we have some ground for projecting our results into the future. (p567)

This may sound like nitpicking but I think we would get wildly different results from the two methods. It is likely that the twenty-year regression would produce a result hinting that there might be some relatively continuous effect of changes in interest rates on the rate of investment. Whereas if we broke the period up into sub-periods corresponding to each time the interest rate moved, we would likely find that it had highly indeterminate effects on the rate of investment.
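
To make the contrast concrete, here is a minimal sketch in Python (my own toy illustration with invented numbers, not Keynes’ or Tinbergen’s actual procedure). The “true” effect of the interest rate on investment is deliberately made different in the two halves of a simulated twenty-year sample: the single whole-period regression reports one apparently stable coefficient, while the sub-period check immediately shows that the relationship was never uniform.

```python
# A toy contrast between one long regression and Keynes' sub-period check.
# All data are simulated; the "effect" changes sign halfway through on purpose.
import numpy as np

rng = np.random.default_rng(0)

n = 80                                            # 20 "years" of quarterly data
interest = rng.normal(5.0, 1.5, n)
effect = np.where(np.arange(n) < 40, -2.0, 0.5)   # the relationship shifts after 10 years
investment = 100 + effect * interest + rng.normal(0.0, 3.0, n)

def ols_slope(x, y):
    """OLS slope of y on x, with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    coefficients, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefficients[1]

print("whole period:", round(ols_slope(interest, investment), 2))
for label, idx in [("first half", slice(0, 40)), ("second half", slice(40, 80))]:
    print(label + ":", round(ols_slope(interest[idx], investment[idx]), 2))
```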

After all, if an economist cannot see the wildly different effects that raising interest rates had in, say, 1928 in the US versus the effects they had in 1980 then I would suggest that a new career is in order. The simple fact is that context matters in such cases and Keynes understood that well. To assume homogeneity of historical time when dealing with economic relationships displays a poor and hobbled capacity for inductive argument.

This is merely one example, but there are many more. Keynes’ more general point is that induction is an extremely difficult procedure and to think that we can automate it in some way is completely fallacious. Instead he advocates a more intuitive approach based on the weighing up of arguments. Not necessarily numerically, as Keynes recognises that many arguments cannot be given numerical weights, even those whose material is numerical or quantitative data.

What I have outlined above is merely the skeleton of the framework he provides to engage in such weighting. But it at least gives the reader a flavour of how Keynes thought that arguments should be made and why this is so far from what much of the economics profession does today.


Long Live Hydraulic Keynesianism: Krugman on Godley and Vernengo on Krugman


The other day I commented on a piece that was run in the NYT on Wynne Godley and other Levy Institute scholars. Since then Paul Krugman has weighed in on the debate and Matias Vernengo has responded. Even though I’ve been known to be somewhat harsh on Krugman I think that the piece he wrote actually contains the seeds of a constructive conversation — unlike his typical approach to heterodox economists who are still alive, which is to dismiss them out of hand and ignore them. Krugman, it seems to me, is only comfortable debating the dead; not a particularly difficult task, mind you.

First of all, however, it should be noted that many of the errors that Vernengo points out in Krugman’s piece are indeed rather egregious. Krugman’s characterisation of what he refers to as “hydraulic Keynesians” as relying on a stable consumption function — that is, on the idea that consumption will rise and fall in line with income in a stable fashion — is entirely false. I have seen this mistake made many times before. In the General Theory Keynes lays out this argument, but it is clear from the context that it is a ceteris paribus condition that should be subject to empirical scrutiny (although Keynes mistakenly does say that this a priori condition can be relied on with “great confidence”). Here is the passage in the original:

Granted, then, that the propensity to consume is a fairly stable function so that, as a rule, the amount of aggregate consumption mainly depends on the amount of aggregate income (both measured in terms of wage-units), changes in the propensity itself being treated as a secondary influence, what is the normal shape of this function? (GT, Chapter 8, III)

As we can see, this really is just a ceteris paribus argument laid out to make a more general point. And as Vernengo correctly points out, the Keynesian economist James Duesenberry updated this argument with his relative income hypothesis, which is far superior to Friedman’s permanent income hypothesis as championed by Krugman. (I should note in passing that I am currently waiting on data on consumption by income group from the Post-Keynesian economist Steven Fazzari, on which I have promised to write a post for FT Alphaville. The data, from what I have seen, provides interesting insights into Duesenberry’s hypothesis. Watch this space).

Krugman’s other mistake is to discuss Godley’s work as if he adhered to the old Phillips Curve and was thus proved wrong by the inflation of the 1970s. As Vernengo points out, Godley came from the Cambridge tradition which, in contrast to the neoclassical-Keynesians in the US, held inflation to be primarily wage-led. From this it is crystal clear that Krugman has not read any of Godley’s work (which leads one to wonder what gives him the authority to pass comment on it). For example, in the book Monetary Economics, co-authored with Marc Lavoie, Godley devotes a whole chapter to inflation, which they introduce as follows:

Three propositions are central to the argument of this chapter. First, as we are now describing an industrial economy which produces goods as well as services, we must recognize that production takes time. As workers have to be paid as soon as production starts up, while firms cannot simultaneously recover their costs through sales, there arises a systemic need for finance from outside the production sector. Second, when banks make loans to pay for the inventories which must be built up before sales can take place, they must simultaneously be creating the credit money used to pay workers which they, and the firms from which they buy goods and services, find acceptable as a means of payment. Third, we are about to break decisively with the standard assumption that aggregate demand is always equal to aggregate supply. Aggregate demand will now be equal to aggregate supply plus or minus any change in inventories. (p284)

Such a view, which combines endogenous money with wage-led inflation, tells a very different story to the old Phillips Curve. Lavoie and Godley write:

Inflation under these assumptions does not necessarily accelerate if employment stays in excess of its ‘full employment’ level. Everything depends on the parameters and whether they change. Inflation will accelerate if the value of [the reaction parameter related to real wage targeting] rises through time or if the interval between settlements shortens. If [the reaction parameter related to real wage targeting] turns out to be constant then a higher pressure of demand will raise the inflation rate without making it accelerate. An implication of the story proposed here is that there is no vertical long-run Phillips curve. There is no NAIRU. When employment is above its full-employment level, unless [the reaction parameter related to real wage targeting] moves up there is no acceleration of inflation, only a higher rate of inflation. (p304)
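
To see what this means in practice, here is a toy simulation in Python. It is a sketch of the mechanism being described, with made-up parameter values, and emphatically not the Godley-Lavoie model itself: workers have a real wage target that rises with the pressure of demand, nominal wages close a fixed fraction of the gap each period, and prices are a constant markup on unit labour costs. With the reaction parameter held constant, a higher pressure of demand produces a higher but steady inflation rate; only if the reaction parameter itself rises over time does inflation accelerate.

```python
# Toy wage-led inflation process (a sketch of the idea, not the Godley-Lavoie model).
def simulate(pressure, omega, periods=20, markup=0.25, productivity=1.0):
    """Return the price inflation rate in each period."""
    wage = 1.0
    price = (1.0 + markup) * wage / productivity
    inflation = []
    for _ in range(periods):
        real_wage = wage / price
        target = 0.7 + 0.2 * pressure                     # target real wage rises with demand pressure
        wage *= 1.0 + omega * (target - real_wage) / real_wage
        new_price = (1.0 + markup) * wage / productivity  # constant markup on unit labour costs
        inflation.append(new_price / price - 1.0)
        price = new_price
    return inflation

# With omega constant, more demand pressure means higher, but still steady, inflation:
print([round(x, 3) for x in simulate(pressure=1.0, omega=0.5)[-3:]])   # roughly 6% each period
print([round(x, 3) for x in simulate(pressure=1.5, omega=0.5)[-3:]])   # roughly 12%, not accelerating
```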

Clearly, it is Krugman’s lack of familiarity with Godley’s work, together with his assumption that only one type of Keynesianism (the neoclassical synthesis) existed in the post-war era, that has led to his confusion. Once again, Krugman shows poor scholarship and makes embarrassing mistakes in print that, I think, will one day come back to haunt him.

In spite of these rather egregious oversights, however, the main thrust of Krugman’s discussion is one that I think deserves some attention. I think he is basically correct in calling the Godley approach “hydraulic Keynesianism” — even if he is only right accidentally, because he is clearly not familiar with the work — and he is also correct when he writes:

So why did hydraulic macro get driven out? Partly because economists like to think of agents as maximizers — it’s at the core of what we’re supposed to know — so that other things equal, an analysis in terms of rational behavior always trumps rules of thumb.

It was indeed the obsession with marginalism, rational agents and market equilibrium that drove out the far superior “hydraulic” approach to economics. Hydraulic approaches rely on stock-flow equilibrium outcomes rather than market equilibrium outcomes. As I have written before, the latter stinks of a determinism and a teleology that exist only in the minds of economists. As Krugman notes, this whole belief system — for it is a belief system — is “at the core” of what economists are “supposed to know”. That Krugman says this with some degree of skepticism is refreshing indeed, because this is, in my opinion, the key problem with economics today: it makes the discipline less a framework for understanding the economy and more a doctrine based, ultimately, on an a priori, moral vision of man.

It is for this reason that Godley and Lavoie are far more cautious about saying that, for example, any level of employment past some arbitrary estimate of “full employment” will definitely lead to inflation, while the NAIRU crowd — whom, I believe, Krugman follows — say that it will. Godley and Lavoie do not want to make definitive statements about human behavior in an a priori manner, which is why, in the quotes presented above, they leave it up in the air.

Mainstream economists will huff and puff about this and claim that Godley and Lavoie are thus saying nothing relevant because they are saying nothing determinate. But is this really the case? In reality, we simply do not know whether, when unemployment reaches a certain level, wage increases will drive inflation up. This is why, for example, the NAIRU was revised downwards in the 1990s during the Clinton/Greenspan boom. In this period Krugman was arguing that the NAIRU was some 5.5-6.0% unemployment, but he was proved wrong when, in 2000, the unemployment rate touched 4% with no substantial increase in inflation.

The key here is context. We need to contextualise such forecasts by taking into account, for example, the strength of labour market institutions among other things. This is not hard to do and can be left up to our judgement at any given moment in time. This is the advantage of the Cambridge tradition of hydraulic Keynesianism: it does not insist that the model has to tell us everything, but rather lays it out as a framework for intuitive empirical inquiry. This is far superior to pretentiously trying to build silly little models with all the answers, as both the Phillips Curve neoclassical-Keynesians and the NAIRU folks do.

The Cambridge tradition impels us, as economists, not to take modelling assumptions at face value and instead to apply our judgement and good sense in making forecasts and projections. In this regard, Krugman avoids what is perhaps the most salient point of the whole debate: using their good sense, Godley and the Levy crowd got the fragility of the Clinton boom and the coming crash right, while, using his models, Krugman got it wrong. That should, if economics even pretends to be remotely scientific, be the end of the story. Yet Krugman has a widely-read column in the NYT and Godley remains obscure and subject to the misreadings of people like Krugman. This raises the question of what status economics actually holds in contemporary discourse.

Keynes on the Use and Abuse of Statistics and Probability


Much of Keynes’ A Treatise on Probability appears to have been written with the popularisation of the study of statistics that was emerging at the turn of the 20th century in mind. This makes it a rather remarkable document because it provides, if not a virgin eye, then at the very least a critical one that was not blinded by the haze of statistics with which we are bombarded today.

Therefore, I find the passages in which he discusses the use of statistics in general and the application of probability to these statistics in particular to be not only extremely interesting but also of contemporary relevance. Just to be straight before proceeding, this is not my attempt to “prove” that Keynes had all the answers. Indeed, I am not that interested in what Keynes said per se — I hate hero-worship. Rather I think that many of the critical insights he makes bear on the problems faced in economics today.

In his A Treatise on Probability there is a passage in which Keynes discusses how statistics should be deployed — and how they should not. It reads as such:

Generally speaking, therefore, I think that the business of statistical technique ought to be regarded as strictly limited to preparing the numerical aspects of our material in an intelligible form, so as to be ready for the application of the usual inductive methods. Statistical technique tells us how to ‘count the cases’ when we are presented with complex material. It must not proceed also, except in the exceptional case where our evidence furnishes us from the outset with data of a particular kind, to turn results into probabilities; not, at any rate, if we mean by probability a measure of rational belief. (p392)

This is similar, of course, to certain critiques of the Bayesian method that I have recently published on this blog, although it would seem to me that it goes one further. Where I was objecting to turning non-numerical data — i.e. qualitative data — into numerical probabilities, Keynes is objecting to turning statistical data — i.e. quantitative data — into numerical probabilities.

Why does he object to this? I believe the answer lies in the fact that Keynes saw probability theory as a rather limited tool that should only be deployed in very specific cases. In A Treatise on Probability Keynes makes very clear that the theory of probability plays a secondary role in human reasoning. The primary role is allotted to the twin processes of Induction and Analogy (I shall write a separate post on these in the coming days). Keynes recognised that if these were not allotted the primary role, and that role were instead given over to probabilistic reasoning, the results would be a mess. This is indicated, for example, when he writes:

To argue from the mere fact that a given event has occurred invariably in a thousand instances under observation, without any analysis of the circumstances accompanying the individual instances, that it is likely to occur invariably in future instances, is a feeble inductive argument, because it takes no account of the Analogy… But to argue, without analysis of the instances, from the mere fact that a given event has a frequency of 10 per cent in the thousand instances under observation, or even in a million instances, that its probability is 1/10 for the next instance, or that it is likely to have a frequency near to 1/10 in a further set of observations is a far feebler argument; indeed it is hardly an argument at all. Yet a good deal of statistical argument is not free from this reproach — though persons of common sense often conclude better than they argue, that is to say, they select for credence, from amongst arguments similar in form, those in favour of which there is in fact other evidence tacitly known to them though not explicit in the premisses stated. (pp407-408)

Indeed, things have not changed so much since Keynes’ day. Today too many of those using econometric techniques largely circumvent proper argument — what Keynes would have called an argument from induction and analogy — and are instead more interested in projecting current trends forward. Only the Post-Keynesians, with their insistence on the non-ergodicity of economic data, really carry the torch for Keynes’ “common sense” view of how to handle data.

Some Bayesians will, of course, claim that they avoid this criticism. They say that they update their priors in line with new evidence. Maybe, but they continue to project the accumulated evidence into the future. Even when Bayesians test for robustness they are doing this. They are assuming “since the accumulated evidence so far says that my model is correct then I can rationally assume that it will be correct tomorrow”. This is old wine in new bottles.
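
The point can be made with a deliberately crude toy example in Python (the numbers are invented and this is not a claim about any particular series): if the process generating the data shifts, a frequency estimated ever so carefully on the accumulated record is a poor guide to what comes next, no matter how long that record is.

```python
# Toy illustration of why projecting an accumulated frequency forward can fail
# when the underlying process is not stable (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)

# The event occurs with probability 0.10 over a long stretch of "history",
# then conditions quietly change and it occurs with probability 0.40.
record = rng.random(1000) < 0.10
future = rng.random(100) < 0.40

print(f"frequency in the accumulated record: {record.mean():.2f}")
print(f"frequency after the unnoticed shift:  {future.mean():.2f}")
```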

What Keynes was really criticising was intellectual laziness. It’s nice to think that we can build computers and formulae that do our work for us. For some reason today such activity is given the mantle of Science — why, I don’t know. But in reality every problem we approach, every economic formation we study at any given moment in time, is unique in its particularity and we have to use our wits to unravel it as best we can. There are no easy solutions and it is a sad fact of our age that the harder, more challenging solutions are being daily degraded as being less scientific than the easy ones.

But there is also another hurdle: in economics there is a strong desire to emulate. Economists often want a strict set of rules to apply to data, rules that their peers are also following. I am not sure why this is, as the same does not appear to hold in either the other social sciences or the hard sciences. My guess, however, is that it is because economists try (unsuccessfully, I might add) to straddle the fence between the hard sciences and the social sciences.

Because their discipline is not actually a hard science and cannot mimic its methodology, what we end up with is a discipline full of rigid, arbitrary rules that stifle creativity and ensure intellectual stagnation. In their quest to convince the world that they have a science as objective as chemistry or physics, economists impose upon themselves an iron cage of capricious and silly rules that lead them down one garden path after another. One knows how to wake a lazy cat, but perhaps it is more difficult to make a fool stop dancing.


The Model that Maketh the Man? Wynne Godley in the NYT


The New York Times recently ran an article appraising the work of Wynne Godley and his colleagues and followers. This is fantastic. It is great to see this approach to economics, which the NYT rightly notes predicted the 2008 crash, get the proper media attention it deserves. The article, however, while extremely well-written and well-informed, is indicative of a danger that I have long been pointing out on this blog.

The article in question is keen to point out the importance it, and others, attach to the fact that Wynne Godley not only predicted the crash but also built models. This is on the back of a comment by Dirk Bezemer, which I find rather misleading, to the effect that although quite a few economists predicted the crash Godley “was the most scientific in the sense of having a formal model”. Spurred on by this rather questionable remark the author goes on to write:

Why does a model matter? It explicitly details an economist’s thinking, Dr. Bezemer says. Other economists can use it. They cannot so easily clone intuition.

Mainstream models assume that, as individuals maximize their self-interest, markets move the economy to equilibrium. Booms and busts come from outside forces, like erratic government spending or technological dynamism or stagnation. Banks are at best an afterthought.

The Godley models, by contrast, see banks as central, promoting growth but also posing threats. Households and firms take out loans to build homes or invest in production. But their expectations can go awry, they wind up with excessive debt, and they cut back. Markets themselves drive booms and busts.

Why do I find this misleading? Simple. Because as everyone knows Godley didn’t predict the crash because his models told him so. He predicted it — together with the Eurozone crisis — based on a combination of intuition and informal logical reasoning. Indeed, the author of the NYT article actually goes on to note that the Godley models cannot generate a financial crisis. Quoting a rather unfair appraisal from Charles Goodhart he writes:

For all Mr. Godley’s foresight, even economists who are doubtful about traditional economic thinking do not necessarily see the Godley-Lavoie models as providing all the answers. Charles Goodhart of the London School of Economics called them a “gallant failure” in a review. He applauded their realism, especially the way they allowed sectors to make mistakes and correct, rather than assuming that individuals foresee the future. But they are still, he wrote, “insufficient” in crises.

Gennaro Zezza of the University of Cassino in Italy, who collaborated with Mr. Godley on a model of the American economy, concedes that he and his colleagues still need to develop better ways of describing how a financial crisis will spread. But he said the Godley-Lavoie approach already is useful to identify unsustainable processes that precede a crisis.

While I think that Goodhart’s comment is too harsh, I think Zezza’s is too generous. It gives the impression that the Godley models are the entities that predicted that the economy was moving toward a crash, not Godley himself. But this is not true. As everyone familiar with Godley’s work at the Levy Institute knows, it was the sectoral financial balances framework that Godley used to predict the crash; this framework tips off the person using it as to the possibility that the private sector in general and the household sector in particular are becoming indebted and that this process is likely unsustainable.

While it is true that the Godley models will take into account whether this process is occurring, they are in no way needed to arrive at the intuitive insights that the sectoral financial balances framework gives. A person who takes a glance at these balances — and who has broadly the same perspective on the economy as Godley — will have just as much relevant information about the unsustainability of the processes at work as the person toying with the models. Indeed, they may have even more, as their thinking is not being clouded with irrelevant details.
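
For readers unfamiliar with it, the framework rests on a simple accounting identity: the private, government and foreign balances sum to zero, so the private sector’s net saving equals the government deficit plus the current account balance. The sketch below, in Python and with invented figures rather than actual US data, shows how little machinery is needed to spot a private sector running persistent deficits.

```python
# The sectoral financial balances identity (invented figures, not actual data):
# private balance = government deficit + current account balance, as % of GDP.
# A persistently negative private balance flags a debt build-up in the private sector.
years        = [1996, 1998, 2000, 2002]
gov_deficit  = [ 1.5, -0.5, -1.5,  2.0]   # negative means a government surplus
current_acct = [-1.5, -2.5, -4.0, -4.5]   # current account balance

for year, g, ca in zip(years, gov_deficit, current_acct):
    private = g + ca                      # accounting identity, no model required
    flag = "  <- private sector in deficit" if private < 0 else ""
    print(f"{year}: private balance {private:+.1f}% of GDP{flag}")
```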

This brings us back to something noted regarding models in the NYT article. Namely, that they, as Bezemer says, “explicitly detail an economist’s thinking” and that this means that “other economists can use them”. This statement is misleading in two ways. First of all, I do not think that models detail an economist’s thinking at all. Rather they give, at best, an idea of the framework being used and, at worst, a misleading outline of the processes of reasoning involved in thinking through a particular economic problem.

Secondly, and tied to this, is the idea that provided an economist has built us a model we can become, in a sense, his or her clone by learning said model. Again, and for the reasons just mentioned, this is extremely misleading. It is an appealing idea to both the modeller and the student of the model in that, for the modeller, it assures a certain immortality and, for the student, it assures immediate access to the wisdom of previous economists. But none of this is true and, frankly, I don’t think that Godley, who did so much intuitive empirical work, would claim that it was.

This piece touched on a personal note for me, however, which is probably what led me to write this post. As already noted above by Zezza, Godley’s models have not yet integrated a means by which they can produce the financial crises they hint at when private sector debts build up. I have recently been toying with the idea that this might be accomplished by integrating the theory of prices that I am currently working on. I do not want to make any promises, as I have not had time to think this through in any great detail (I have to finish the theory of prices first!), but I think that there is a fair chance that it could be done.

This, naturally, leaves me torn. One of the reasons I started trying to build an alternative theory of prices was that I thought we needed an intuitive and teachable alternative to the stodgy old market equilibrium framework. My approach is very much inspired by Godley’s “economics without equilibrium or disequilibrium” approach, which he laid out in a paper entitled Macroeconomics Without Equilibrium or Disequilibrium — this, in turn, was inspired by Kaldor, to whom I also owe an enormous debt. (I should mention that I don’t think this description is entirely accurate, however, and I shall be updating it somewhat to take into account different types of equilibrium rather than throwing the concept out the window altogether). However, I now feel that my approach may not be properly appreciated unless it is integrated into a popular model in order to show its utility.

What I am doing has always, in my own mind, been in line with Keynes’ comment that the aim of economic theory is not to build cumbersome models but rather to provide “an organised and orderly method of thinking out particular problems”. So, if I am correct and I can integrate my approach to pricing into the Godley models in order to generate the possibility of financial crises, I think that I will only do so with great trepidation and reluctance.


The Origins of Both Endogenous Money and the Industrial Revolution


The latest issue of the Review of Keynesian Economics (ROKE) is out and it looks like this publication is taking off fast. It includes, among other things, an introduction by the president of the Argentinian central bank (which is available free online) and a book review by me (which is not). But here I want to focus on one paper in particular. It is by William E. McColloch and is entitled A Shackled Revolution? The Bubble Act and Financial Regulation in Eighteenth Century England (it is available free in working form here).

The paper in question is arguing against the story told by some economists that the Bubble Act of 1720 was passed in response to the infamous South Sea Bubble getting out of hand and that this then constrained the ability of private firms to borrow and incorporate until its repeal in 1825. McColloch argues that this was not the case. On careful examination it appears that the Bubble Act was passed in order to keep the South Sea Bubble afloat as it ensured that competitors could not soak up the liquidity that the South Sea Company required to avoid falling into the abyss.

In this post, however, I want to focus on the latter aspect as it ties into debates regarding the endogeneity of money and the role of governments and central banks in the economy. First, however, I should probably give some quick historical background as many may be unfamiliar with the relationship between the South Sea Bubble and contemporary forms of credit creation.

The South Sea Company came into existence in 1711. The main reason for its birth was that government borrowing was becoming increasingly costly. Even though the Bank of England had been established in 1694, its powers were not yet fully realised. Add to this the loss of public confidence in government debt due to the funding of the War of the Spanish Succession (1701-1714) and you can explain the spiking interest rates at the time.

The following graph gives a good idea of this by showing the total debt service cost of public debt in England over the 18th century (apologies for the quality, I had to take a photo on my phone).

[Figure: Debt service on English public debt, 1700-1800]

The South Sea Company, then, was basically set up to soak up this debt and issue stock in its place. Effectively, the South Sea Company was a shell company being used to transform distrusted public debt into trusted South Sea stock. And as we can see, with regard to getting interest rates down it worked rather well. Even after the collapse of the company in 1720 the problems with public debt largely evaporated.

This is precisely when the Bank of England came into its own. As McColloch writes:

First… the Bank did not maintain anything like a fixed ratio between its bullion reserves and its note issue. Particularly in the latter half of the eighteenth century, the Bank appears to have significantly expanded its discount activities during periods of crisis, acting, in the view of some scholarship, as a lender of last resort. Second, while the private banks of London were formally prohibited from discounting directly with the Bank, a number of partners of some of the major London houses maintained drawing accounts with the Bank which likely came with informal access to the Bank’s discount facilities on occasion. Further, Clapham records that while the volume of discounts remains relatively stable between 1720 and 1750, there was, by the early 1760s, a ‘gigantic increase’ in volumes. This increase plainly coincides with the rise of the country banks, and the emerging process of industrialization generally. (p310)

In other words, at the same time as the government debt situation normalised, money appears to have become largely endogenous. The South Sea debacle can then be read as a minor blip on the way to the modern central banking system, in which the central bank provides discounting on demand and credit is extended in line with economic development.

This can be clearly seen in the fact that, as Matias Vernengo has pointed out, interest rates in 18th century England fell substantially across the board. In addition to this, McColloch notes, throughout the 18th century government involvement in the English economy grew immensely, averaging 12% of GDP in the 100 year period.

The industrial revolution (usually dated 1760-1820/1840) thus had its origins in the same period in which the pillars of the state/central bank architecture of our modern mixed economies were put in place. Never did there really exist any pure “free market” that was then perverted by government and central bank usurpation. Rather, the process of industrialisation and industrial revolution was buttressed by bigger government and more extensive central bank involvement in the economy.


Holey Models: A Final Response to Matheus Grasselli


Matheus Grasselli has responded to some of my previous posts (in the comments on this post) and some comments I made on the Facebook Young INET page. His points were as I thought they would be — indeed, I have already dealt with them in this post. For the sake of tying up loose ends, however, I will repeat my points here and make a few general comments about the use of mathematical modelling and numerical probabilistic reasoning in economics.

Grasselli had previously given me an example of how Bayesian probability can be used in the case of HIV testing. I fully agree with his approach in that case: HIV testing and the results derived therefrom are based on experimental data that can be interpreted statistically because the data are ergodic. I pointed out to Grasselli, however, that the material economics deals with is non-ergodic. For example, if I want to try to figure out how the Fed’s upcoming “tapering program” (i.e. the ceasing of the QE programs) will affect the US economy I cannot meaningfully use numerical probability estimates.

This is because the Fed taper is a unique event similar to the example I gave before of the probability of a woman calling me tomorrow morning. Grasselli takes up this example in his comment and concedes what I have been saying all along: there is no immediate way we can assign this a numerical probability and so it must be conceived of in a different manner. He writes:

The woman-calling example can be entirely laid out without numbers, exclusively with “degrees of belief” that get weaker or stronger as evidence comes forward. Think of a colour gradation, or the position of a dial, or don’t think about any metaphor at all, just stick with the difference between weak and strong belief and sense of it getting weaker or stronger. You might choose to assign numbers and they’ll be arbitrary as long as all the evidence you are gathering is also qualitative (e.g yes or no answers) and does not have any statistical regularity.

Fantastic. I’ve been saying this all along and I agree. Now, here’s the kicker: the vast majority of what we deal with as economists is data that must be interpreted in this way — i.e. that is not subject to numerical estimation. This means that we cannot use models that rely on numerical inputs. “But,” the smart reader will say, “this whole debate between you and Grasselli began because you criticised him for using such numerical estimates in his models. You said that economic data is non-ergodic and cannot use such estimates and thus his models were doomed to fail.”

So, what is Grasselli’s response to this? It is what I thought it might be: he intends to assign these non-quantitative aspects numerical values regardless — or, at least, so I gather from his comment. He writes:

The purpose of assigning numbers is to deal with evidence that is quantitative in nature and comes from phenomena with some degree of regularity. It is only when you have to combine both that you need to use numbers, and in this case the numbers will be anchored by the bits of data that are quantitative.

Take Nate Silver’s election-prediction model. It incorporates both quantitative data with a lot of regularity (polls, etc) and a tons of qualitative stuff, for which he has to come up with numbers just so that they can be incorporated in the model. Even though the priors that he assigns to these bits of the model are not based on statistical regularities, they get meaningfully integrated with others bits of data that do, and in the end produce a probabilistic estimate for the outcome of the election that is far from being pretentious, arbitrary, or meaningless. (My Emphasis)

But this is precisely my problem. You cannot assign this qualitative data numbers. As I said in my last post regarding the woman on the phone example (I quote this at length because I see no reason to retype an argument that I have already made elsewhere; note that we can avoid the whole “betting” thing here and just stick with my criticisms of assigning numerical probabilities):

Let’s take a real example that I used in comments of my last post: what are the chances that a woman will call me tomorrow morning between 9am and 11am and can I assign a numerical probability to this? I would say that I cannot. “Ah,” the Bayesian will say, “but you can. We will just offer you a series of bets and eventually you will take one and from there we will be able to figure out your numerical degree of belief in probabilistic terms!”

I think this is a silly load of old nonsense. The assumption here is that locked inside my head somewhere — in my unconscious mind, presumably — is a numerical degree of belief that I assign to the probability of an event happening. Now, I am not consciously aware of it, but the process of considering possible wagers brings it out into the open.

Why do I think that this is nonsense? Because I do not believe there is such a fixed degree of belief with a numerical value sealed into my skull. Rather I think that the wager I eventually accept will be largely arbitrary and subject to any number of different variables; from my mood, to the manner in which the wagers are posed, to the way the person looks proposing the wager (casinos don’t hire attractive women for nothing…).

Back to my example: what are the chances that a woman will call me tomorrow morning between 9am and 11am? Well, not insignificant because I am supposed to be meeting a woman tomorrow morning at 11.30am. Can I give this a numerical estimate? Well, it certainly would not be 0.95. Nor would it be 0.0095. But to ask me to be any more accurate would be, in my opinion, an absurd undertaking. And if you convinced me to gamble on it the wager I would be willing to accept would be extraordinarily arbitrary.

“But Phil,” some economists will say, “while it may be true that this is a questionable enterprise surely we can forgive it if it makes the model work. After all, surely we can get some idea of the problem by, say, assigning your woman on the phone example a numerical probability of, say, 0.08. I mean, if we are doing this when we are modelling surely these proxies for qualitative evaluation will work. They can’t be THAT far off the mark and besides they will likely only be small parts of a bigger model.”

I will likely not convince the economists who think this way that they are wrong. After all, they likely have a stake in this game — possibly a financial stake in the form of funding — and it is in their interest to produce a model that “works” (note that I do not mean here that a model “works” in the sense that it “produces relevant results” but rather that it functions on its own enclosed terms).

For those on the fence, however, I would only say this: when you start getting relaxed about undertaking even minor dubious actions in your thinking you will soon find that your entire intellectual edifice has been completely shot through and compromised by such actions. By the time you realise this it will be too late and your work will be so full of tiny holes and microfractures that it will be unable to produce either relevant results or relevant insights.

Since Keynes’ critique of Tinbergen in 1938 the statisticians, mathematicians and econometricians have been knocking on our door every few years saying that they have solved all the problems with the use of mathematical models that integrate statistical data. But it’s never true. And when you dig down it’s always the same problems that arise. This is because such problems are epistemological or even ontological; they have to do with the nature of the data we approach as economists. They are not simply due to errors or lack of sophistication, and they will plague anyone who takes such an approach to the point of making their work redundant with regard to the real world.

But those advertising this approach will always find followers and funding. Nothing I or anyone else can say will prevent this from occurring. The best I can do is make those who go down what I consider (and Keynes considered) a dark path full of hocus pocus and black magic at least somewhat aware of the problems of their approach. With that, I think the debate is over, because both my position and Grasselli’s have been articulated. And so it is up to you, dear reader, to decide.


Probability and The Principle of Indifference in Applied Economic Reasoning


Carola Binder has brought to my attention a very commendable post she wrote regarding how people perceive their chances of being laid off. The results, from the point of view of the application of probability theory to economics, were very interesting. Here I quote Binder in full:

Prior to answering the question, survey takers are given this brief intro to help them understand probabilities: “Your answers can range from zero to one hundred, where zero means there is absolutely no chance, and one hundred means that it is absolutely certain. For example, when weather forecasters report the chance of rain, a number like 20 percent means ‘a small chance’, a number around 50 percent means ‘a pretty even chance,’ and a number like 80 percent means ‘a very good chance.'” Nonetheless, most people seem to have tremendous difficulty quantifying their probability of job loss. Over half of people choose 0% or 50% as their response.

People intuitively knew when asked this question that they had no idea how to calculate such a probability, so many of them just shrugged their shoulders and called it 50/50 — not because they actually thought that there was a 50/50 chance of being laid off but because they knew they could not come up with a numerical estimate of such a probability. Binder goes on to connect this with the Principle of Indifference, which she lays out as follows:

The Principle of Insufficient Reason, or Principle of Indifference, says that “if we are ignorant of the ways an event can occur (and therefore have no reason to believe that one way will occur preferentially compared to another), the event will occur equally likely in any way.”

She goes on to note that Keynes, in his Treatise on Probability, was highly critical of the Principle of Indifference. This is where I would like to say a few words of my own, which I think are in keeping with Keynes’ thoughts on the matter.

It is clear that the researchers in Binder’s example had a statistical bent. But by gearing their study in such a way they ended up coming up with evidence that should be treated, as Binder says, with “a large grain of salt”. I can imagine a journalist coming at the same problem from a non-statistical angle. A good journalist would have formulated a series of open-ended questions with which they could probe the people with whom they were discussing the issue. By doing so, not only would they bring out nuance, but they would also encourage the people they were surveying to actually think about their fear of being unemployed in more detail.

A similar method could have been used by those doing the research in this case. After having discussed the fear of becoming unemployed with their respondents they could have given them non-quantitative options which they could choose with respect to their fear of becoming unemployed relative to before the economic crisis. For example, they could have had the options (1) “I fear becoming unemployed a great deal more than before the crisis”; (2) “I fear becoming unemployed a bit more…”; (3) “I fear becoming unemployed about the same…”; (4) “I fear becoming unemployed less…”.

Indeed, they could even have given them a numerical, but non-probability based, estimate of how much they feared losing their job — say, on a scale of 1 to 10 — and asked them to choose one number for how they felt prior to the crisis and one for how they felt after it. Such methods would have produced far better results than the use of probability estimates, which only led, it would seem, to confusion and to people falling back on the intuitive “safe place” of the Principle of Indifference.

I think that this is one of the things that made Keynes suspicious of the Principle of Indifference. In his Treatise on Probability he writes:

In short, the Principle of Indifference is not applicable to a pair of alternatives, if we know that either of them is capable of being further split up into a pair of possible but incompatible alternatives of the same form as the original pair. (p60)

This is precisely what we see in our example above. The manner in which people understand whether they might be laid off or not is not a clear-cut X versus Not-X — “laid off” versus “not laid off” — which can then be given a probability estimate. Rather, the question needs to be broken down further and, ultimately, is not properly approached using the framework of probability. I will leave the last word on this to Keynes, who, it would seem, had a properly nuanced view of how we actually conceive of alternatives in the real world.

This rule commends itself to common sense. If we know that the two alternatives are compounded of a different number or of an indefinite number of sub-alternatives which are in other respects similar, so far as our evidence goes to the original alternatives, then this is a relevant fact of which we must take account. And as it affects the two alternatives in differing and unsymmetrical ways, it breaks down the fundamental condition for the valid application of the Principle of Indifference… It is worthwhile to add that [this] qualification… is fatal to the practical utility of the Principle of Indifference in those cases only in which it is possible to find no ultimate alternatives that satisfy the conditions. For if the original alternatives each comprise a definite number of indivisible and indifferent sub-alternatives, we can compute their probabilities. It is often the case, however, that we cannot by any process of finite subdivision arrive at indivisible sub-alternatives, or that, if we can, they are not on the evidence indifferent. (p61-62 — My emphasis)

In such cases, which are the rule rather than the exception in economics, a different approach is needed; if such an approach nevertheless attempts to utilise a numerical probability framework, it will do so only in the most pretentious and misleading of ways.


Bayesianism and Non-Ergodicity in Economics


The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

— John Maynard Keynes

Bayesianism is rather irritating because it allows adherents to try to sidestep the Post-Keynesian criticisms regarding the heterogeneous nature of historical data, which gives rise to its non-ergodic character and the consequent problems of fundamental uncertainty. Because the Post-Keynesian critiques are usually aimed at frequentist interpretations of probability, they can appear, superficially, to be overcome when one is arguing with a Bayesian. This, however, is categorically not the case.

For the past few days I’ve been trying to find a rather “clean”, simple critique of Bayesianism that could be applied from a Post-Keynesian perspective. I think that I have now found such a critique.

A Bayesian named Andrew Gelman has written up a summary of the criticisms thrown at Bayesians by their detractors and asked his colleagues to respond. The most important criticism that Gelman raises, from a Post-Keynesian perspective, involves the selection of “priors”. In Bayesian statistics a “prior” is the probability distribution one assigns before any evidence is taken into account.

Sticking to an example I’ve used before, let’s say that I am interested in the probability, P, that a woman will call me in the morning between the hours of 9am and 11am. Now, since I am only beginning my experiment, I have literally no idea what the probability is that a woman will call me tomorrow, as I have no experimental data. The somewhat arbitrary probability that I then cook up is called my “prior”. Gelman, adopting the voice of a critic (i.e. me), puts the objection as follows:

Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence. To put it another way, why should I believe your subjective prior? If I really believed it, then I could just feed you some data and ask you for your subjective posterior. That would save me a lot of effort! (p447)

Just to clarify, a “posterior” is the probability that is assigned once evidence has been fed in. Anyway, Gelman’s version of the criticism seems to me rather weak, and nothing like as strong as the Post-Keynesian criticism I will make of this method insofar as it is applied in economics. But it elicited a fairly clear response from Joseph Kadane. He wrote:

“Why should I believe your subjective prior?” I don’t think you should. It is my responsibility as an author to explain why I chose the likelihood and prior that I did. If you find my reasons compelling, you may decide that your prior and likelihood would be sufficiently close to mine that it is worth your while to read my papers. If not, perhaps not. (p455)

So what Kadane is saying is that, to go back to my example, when I assign the first prior probability as to whether a woman will call me tomorrow morning I should make an argument for it, and if someone else doesn’t like that argument they should throw my paper in the bin. The prior assigned, however, will always be in some sense arbitrary, in that it will not be formed, as a posterior would be, on the basis of data.
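For readers unfamiliar with the machinery under discussion, here is a minimal sketch of what the prior-to-posterior step looks like in practice. It is a textbook Beta-Bernoulli example of my own and does not stand for Gelman’s or Kadane’s own modelling.

```python
# The "prior" Beta(a, b) encodes an initial guess about the chance of a morning
# call; observed mornings then update it into a "posterior".

def update(a, b, calls, mornings):
    """Posterior Beta parameters after observing `calls` calls over `mornings` mornings."""
    return a + calls, b + (mornings - calls)

a, b = 1.0, 1.0                            # an arbitrary prior: Beta(1, 1) is flat, i.e. "no idea"
a, b = update(a, b, calls=3, mornings=10)  # hypothetical data: 3 calls in 10 mornings

print(f"posterior mean: {a / (a + b):.2f}")   # (1 + 3) / (2 + 10) = 0.33
```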

Now, here’s where the Post-Keynesian critique comes in. In economics we deal with heterogeneous historical data that is non-ergodic. Another way of putting this is that such data is composed of complex and unique events. An interest rate hike in 1928 is very different from an interest rate hike in 1979. The future, you see, does not mirror the past when we are talking about historical time.
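What non-ergodicity means can be illustrated with a toy example (my own sketch, not drawn from any of the literature discussed here): a process in which each history draws a permanent regime once and keeps it, so that the average along one history never converges to the average taken across histories.

```python
import random

def one_history(length, seed):
    rng = random.Random(seed)
    regime = rng.choice([-1.0, 1.0])                    # a permanent, history-specific mean
    return [regime + rng.gauss(0, 0.5) for _ in range(length)]

histories = [one_history(5000, seed) for seed in range(200)]

time_avg = sum(histories[0]) / len(histories[0])                 # along one history: close to -1 or +1
ensemble_avg = sum(h[0] for h in histories) / len(histories)     # across histories at one date: close to 0

print(f"time average (one history): {time_avg:.2f}")
print(f"ensemble average (one date): {ensemble_avg:.2f}")
```

The point of the toy example is only that statistics gathered across imagined repetitions need not tell you anything about the one history that actually unfolds.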

Let’s go back to our example. What a Post-Keynesian economist is interested in is whether a woman will call tomorrow morning, not the probability that a woman will call on any given morning. But, of course, all we can then do is posit an argument to form a prior, and you can, as Kadane says, accept or reject it. Great. That is what Post-Keynesians do. They lay out an argument. And everything stands or falls on that alone.

For Post-Keynesians there is no interest in positing a prior and then waiting for data to update it, because the argument, by design, only works once. Post-Keynesian arguments are, in a sense, disposable. They are thrown out as historical time unfolds and new ones are constructed. The only way to do this is through induction and the application of a skill-set that one acquires over one’s career. This is also, by the way, how historians and others, such as lawyers, work.

The idea that you can find one True model that you then update with posteriors over and over again is wrong simply because the nature of the data is non-ergodic. To exaggerate slightly, but not much, there is a new argument for every new dawn. It is by wrestling with the changing nature of the economy that we come to understand it. Any other method is doomed to failure.


The Bayesian Cult: Threatening Those Who Refuse to Fall Into Line


I thought it might be worthwhile to follow up my previous posts on probabilities with one in which I first clarify one or two points and then show why certain people are attracted to Bayesian statistics. I draw on a paper by the philosopher Clark Glymour to which Lars Syll (who else?) has drawn my attention. The following post is an extended comment on Glymour’s excellent paper.

Bayesian statistics rests on a subjective interpretation of probability. What that means is that it assumes we hold degrees of belief about certain things and that the probabilities reside in these degrees of belief — they are not, as it were, “out there” and objective. The key to the method, however, is the claim that we can articulate these degrees of belief in numerical terms. Glymour writes (note that he refers to “Bayesian subjectivists” as “personalists” in what follows):

We certainly have grades of belief. Some claims I more or less believe, some I find plausible and tend to believe, others I am agnostic about, some I find implausible and far-fetched, still others I regard as positively absurd. I think everyone admits some such gradations, although descriptions of them might be finer or cruder. The personalist school of probability theorists claim that we also have degrees of belief, degrees that can have any value between 0 and 1 and that ought, if we are rational, to be representable by a probability function. Presumably, the degrees of belief are to covary with everyday gradations of belief, so that one regards a proposition as preposterous and absurd just if his degree of belief in it is somewhere near zero, and he is agnostic just if his degree of belief is somewhere near a half, and so on. According to personalists, then, an ideally rational agent always has his degrees of belief distributed so as to satisfy the axioms of probability, and when he comes to accept a new belief he also forms new degrees of belief by conditionalizing on the newly accepted belief. There are any number of refinements, of course, but that is the basic view. (p69)

Okay, so how do we come up with the numerical estimate that transforms what Glymour calls a non-numerical “grade of belief” into a properly numerical “degree of belief” between 0 and 1? Simple. We imagine that we are given the opportunity to bet on the outcome. Such an opportunity, the argument runs, forces us to “show our cards”, as it were, and assign a properly numerical degree of belief to some potential event.
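To spell out the mechanics (my own gloss on the standard personalist set-up, not Glymour’s wording): a ticket costs q and pays 1 if the event happens; if my degree of belief in the event is p, my expected gain from buying it is p(1 - q) - (1 - p)q = p - q, so the highest price at which I still judge the bet fair is read off as my numerical degree of belief.

```python
# A ticket costs q and pays out 1 if the event occurs. With degree of belief p,
# the expected gain of buying it is p*(1 - q) - (1 - p)*q, which simplifies to p - q.
# The q at which I am indifferent is then read off as my "degree of belief".

def expected_gain(p, q):
    """Expected gain of paying q for a ticket worth 1 if the event occurs."""
    return p * (1 - q) - (1 - p) * q

for q in (0.2, 0.5, 0.8):
    print(f"ticket price {q}: expected gain {expected_gain(p=0.5, q=q):+.2f}")
# Only at q = 0.5 is the expected gain zero -- hence the claim that my betting
# behaviour "reveals" a degree of belief of 0.5.
```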

Let’s take a real example that I used in comments of my last post: what are the chances that a woman will call me tomorrow morning between 9am and 11am and can I assign a numerical probability to this? I would say that I cannot. “Ah,” the Bayesian will say, “but you can. We will just offer you a series of bets and eventually you will take one and from there we will be able to figure out your numerical degree of belief in probabilistic terms!”

This is similar to a game teenage boys play. They come up with a disgusting or dangerous act and then ask their friend how much money he would want to do it for. Through a sort of bargaining process they arrive at the amount for which the person in question would undertake the act. They then discuss amongst themselves the relative price each puts on said act.

I think this is a silly load of old nonsense. The assumption here is that locked inside my head somewhere — in my unconscious mind, presumably — is a numerical degree of belief that I assign to the probability of an event happening. I am not consciously aware of it, the story goes, but the process of considering possible wagers brings it out into the open.

Why do I think that this is nonsense? Because I do not believe there is any such fixed degree of belief with a numerical value sealed inside my skull. Rather, I think that the wager I eventually accept will be largely arbitrary and subject to any number of different variables: from my mood, to the manner in which the wagers are posed, to the way the person proposing the wager looks (casinos don’t hire attractive women for nothing…).

Back to my example: what are the chances that a woman will call me tomorrow morning between 9am and 11am? Well, not insignificant, because I am supposed to be meeting a woman tomorrow morning at 11.30am. Can I give this a numerical estimate? It certainly would not be 0.95. Nor would it be 0.0095. But to ask me to be any more precise would be, in my opinion, an absurd undertaking. And if you convinced me to gamble on it, the wager I would be willing to accept would be extraordinarily arbitrary.

Having got this far in the argument, I already suspect that there is some emotional trickery at play. What follows only confirms this. Let us play along with our Bayesian here for a moment, despite what they are saying being, to my mind, obvious and psychologically deficient nonsense. Glymour writes:

Let us suppose, then, that we do have degrees of belief in at least some propositions, and that in some cases they can be at least approximately measured on an interval from 0 to 1. There are two questions: why should we think that, for rationality, one’s degrees of belief must satisfy the axioms of probability, and why should we think that, again for rationality, changes in degrees of belief ought to proceed by conditionalization? One question at a time. In using betting quotients to measure degrees of belief it was assumed that the subject would act so as to maximize expected gain. The betting quotient determined the degree of belief by determining the coefficient by which the gain is multiplied in case that P is true in the expression for the expected gain. So the betting quotient determines a degree of belief, as it were, in the role of a probability. But why should the things, degrees of belief, that play this role, be probabilities? Supposing that we do choose those actions that maximize the sum of the product of our degrees of belief in each possible outcome of the action and the gain (or loss) to us of that outcome. Why must the degrees of belief that enter into this sum be probabilities? Again there is an ingenious argument: if one acts so as to maximize his expected gain using a degree-of-belief function that is not a probability function, and if for every proposition there were a possible wager (which, if it is offered, one believes will be paid off if it is accepted and won), then there is a circumstance, a combination of wagers, that one would enter into if they were offered, and in which one would suffer a net loss whatever the outcome. That is what the Dutch Book argument shows; what it counsels is prudence. (p71 — My Emphasis)

Yep, that’s right. The Bayesian assumes that if one tries to maximise one’s gains in the betting game while deploying degrees of belief that are not probabilities (my mea culpa above would suggest that I would likely be guilty of this), then one gets one’s money taken over and over again, like a sucker at a three-card Monte stall. The Bayesian edifice rests on a threat dressed up as a thought experiment. Glymour continues:

The Dutch Book argument does not succeed in showing that in order to avoid absurd commitments, or even the possibility of such commitments, one must have degrees of belief that are probabilities. But it does provide a kind of justification for the personalist viewpoint, for it shows that if one’s degrees of belief are probabilities, then a certain kind of absurdity is avoided. (p72)

Yes, that’s right. What the Bayesians do is set up a thought experiment that probably doesn’t correspond at all to the real world and then say: “Well, within this experiment, if you don’t align your degrees of belief with probabilities you will end up in absurd situations in which you are constantly robbed and you’ll look like a total clown.” This is a rhetorical trick, very similar to the one played by marginalists who claim, for example, that one must be selfish to maximise one’s gain or that firms that do not align with market forces always capsize under the weight of competition.
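For what it is worth, the “absurd situation” in question is easy enough to make concrete. The numbers below are the standard textbook ones, not Glymour’s: suppose my degrees of belief in “rain” and “no rain” are both 0.6, which sum to 1.2 and so are not probabilities.

```python
# Treating each degree of belief as a fair betting quotient, I pay 0.6 for a
# ticket worth 1 if it rains and 0.6 for a ticket worth 1 if it does not.
price_rain, price_no_rain = 0.6, 0.6
outlay = price_rain + price_no_rain          # 1.2 paid up front for the two tickets

for outcome in ("rain", "no rain"):
    payoff = 1.0                             # exactly one ticket pays out, whatever happens
    print(f"{outcome}: net result {payoff - outlay:+.2f}")   # -0.20 either way: a guaranteed loss
```

That guaranteed loss, however small, is the whole of the “absurdity” with which the non-conformist is threatened.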

More than that, though, it is a particularly neurotic fantasy. Where the tired old marginalist fables play to a person’s selfishness, Bayesianism plays to a person’s insecurities. It assumes that the world is effectively out to rob you if you don’t fall into line with the Bayesian mode of thinking — as manifestly unrealistic as this mode of thinking is and as unproductive as it may prove. The world becomes a “bad place” and only by thinking in line with the Bayesian doctrine can one avoid its evils. I have in the past heard some compare Bayesianism to a religion. Now I understand why. Although it is less a religion and more a cult, as these are precisely the sorts of tricks that cults use to brainwash their followers.
