Epicurean Philosophy as Progenitor to Marginalism

As is often pointed out, marginalist economics tends to be characterised primarily by a few distinct axioms that operate ‘under the surface’ to produce its key results. Varoufakis and Arnsperger neatly characterise these as: the axiom of methodological individualism; the axiom of methodological instrumentalism; and the axiom of methodological equilibration.

Having dealt with the third of these axioms at some length on this blog before, I wish to focus here on the first two and try to show that they are intimately intertwined with one another and ultimately grounded in a materialist ontology. In order to do this, we must go right back to the philosopher who, I think, more than any other laid the groundwork for marginalist economics; namely, the Ancient Greek philosopher Epicurus.

This may seem like a strange statement. After all, isn’t the axiom of methodological instrumentalism — which Varoufakis and Arnsperger characterise as the assumption that “all behaviour is preference-driven or, more precisely, it is to be understood as a means for maximising preference-satisfaction” — ultimately derived from the utilitarian philosophy developed by the likes of Jeremy Bentham?

I would argue that this is simply not the case. In fact, very similar arguments can be found in Epicurean philosophy, which also aimed to maximise pleasure and avoid pain. In his Principal Doctrines, for example, Epicurus wrote,

The magnitude of pleasure reaches its limit in the removal of all pain. When such pleasure is present, so long as it is uninterrupted, there is no pain either of body or of mind or of both together.

Among the Epicureans — who, by the way, were extremely cult-like — this was then elevated to a sort of ethical maxim: in order to live the good life, one must pursue the maximum amount of pleasure while avoiding pain as best one can. Note that such a selfish doctrine is also at the heart of marginalist economics.

As is well-known, Epicurus’ ethical doctrines were intimately intertwined with his philosophy of materialistic atomism. What is less typically noticed is that such a philosophy is basically identical to the marginalist focus on atomistic individuals — that is, the first axiom that Varoufakis and Arnsperger lay out: the axiom of methodological individualism.

John King, who explored the history of the concept in his excellent The Microfoundations Delusion (which I reviewed for ROKE and discussed in an extended post for Naked Capitalism), notes that the principle of methodological individualism was implicit in the marginalist revolution at the end of the 19th century but that the term itself was only coined by Joseph Schumpeter in 1908 (pp52-53). The principle, according to Varoufakis and Arnsperger, is basically “the idea that socio-economic explanation must be sought at the level of the individual agent”.

This is, in a very strong sense, an atomistic doctrine, as Thorstein Veblen noted in a very famous passage from his Why is Economics Not an Evolutionary Science?:

The hedonistic conception of man is that of a lightning calculator of pleasures and pains, who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift him about the area, but leave him intact. He has neither antecedent nor consequent. He is an isolated, definitive human datum.

Clearly what Veblen has in mind is that, in marginalist economics, man is seen not only as an instrumental agent in pursuit of pleasure (Axiom 2), but also as a self-contained atom (Axiom 1).

This, of course, is identical to the Epicurean ontology, which holds that the entire universe is made up of independent material atoms moving through an empty void known as ‘space’. This can be read in Epicurus’ Letter to Herodotus,

[E]ach atom is separated from the rest by void, which is incapable of offering any resistance to the rebound; while it is the solidity of the atom which makes it rebound after a collision, however short the distance to which it rebounds, when it finds itself imprisoned in a mass of entangling atoms.

Each atom is seen, like the individual in marginalist economics, to be completely cut off and self-contained. It is no surprise, then, that Epicurean ethics involves individuals maximising pleasure and minimising pain — or, as the marginalists would put it, maximising utility and minimising disutility. It simply follows from the basic ontological position that Epicurus puts forward.

This is where the notion of materialism also comes in. As I have pointed out on this blog before, materialism is really characterised by an attempt to get an impossible third-person, God’s eye perspective on both oneself and the universe. We might call this the ‘objectifying tendency’ inherent in the philosophy of materialism — it seeks to turn everything, including ourselves, into objects rather than subjects.

Such a philosophy is actually incoherent at the most basic level, but nevertheless we can now see clearly that it is inherently tied up with the atomistic conception of the universe which was embedded in the 19th century physical sciences and which was copied from them by the marginalist economists. It is also, it should be said, entirely useless for economic analysis. As Keynes himself put it in his Treatise on Probability,

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending. (pp286-287)

What many didn’t realise — indeed, I don’t think that Keynes did himself — is that such a criticism is implicitly a criticism of the materialist ontological position itself.


Did Irish Austerity Really Lead to an Increase in Competitiveness?


Ever since the austerity programs began in Europe after the crisis of 2008, Ireland has been a poster boy. Even though the economy has crumbled, enthusiasts still point to the improved current account balance, which they claim is due to an increase in exports, which in turn is due to an increase in competitiveness. In actual fact this is a myth: much of the improvement in the current account balance is due to foreign corporations washing profits through the country, and the rest is due to a fall in consumption.

In order to get a grasp on this we first have to understand what has happened to the Irish current account balance since 2008, which means breaking it down by sector. In the graph below we see the balances on merchandise and services, the two main components of the current account that have changed most substantially since the austerity began.

[Graph: Irish current account balances by sector]

As we can see, the story of the recovery of the Irish current account has two parts. The first runs from 2008 to 2009. In this period net merchandise exports increased substantially. This was not, however, due to any obvious increase in competitiveness. Rather, as we see from the graph below, it was due to a fall in the consumption of imported merchandise as incomes diminished.

[Graph: Irish current account balance — merchandise]

But even after net merchandise exports increased, the current account was not yet in balance, let alone in surplus. To push it over the line, net services exports had to swing dramatically from deficit to surplus between 2010 and 2012.

So the question arises: what caused this dramatic swing? To see this we must look at the chart below, which breaks the services balance down into its components.

[Graph: Irish current account balance — services, by component]

As can be seen, the main component that swung the services balance from a deficit in 2010 to a surplus in 2012 was ‘Computer Services’ which increased by about €10bn in just two years.
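To make the arithmetic of that decomposition concrete, here is a minimal sketch in Python. The figures are hypothetical round numbers chosen only to illustrate the mechanics of summing component balances; they are not the actual CSO balance-of-payments data.

```python
# Hypothetical round numbers (EUR bn) illustrating how a swing in one services
# component can flip the overall services balance; NOT the actual CSO figures.
services_2010 = {"computer_services": 20.0, "business_services": -12.0,
                 "royalties_and_licences": -25.0, "other_services": 8.0}
services_2012 = {"computer_services": 30.0, "business_services": -10.0,
                 "royalties_and_licences": -26.0, "other_services": 9.0}

def balance(components):
    """Sum the component balances to get the overall services balance."""
    return sum(components.values())

swing = balance(services_2012) - balance(services_2010)
computer_swing = services_2012["computer_services"] - services_2010["computer_services"]

print(f"Services balance 2010: {balance(services_2010):+.1f}bn")   # in deficit
print(f"Services balance 2012: {balance(services_2012):+.1f}bn")   # in surplus
print(f"Total swing: {swing:+.1f}bn, of which computer services: {computer_swing:+.1f}bn")
```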

The problem with this, however, is that a good deal of this component is likely due to international corporations like Microsoft, Google and Facebook washing their profits through Ireland to avoid paying high corporation taxes.

The Irish business and financial website Finfacts broke this down extensively last year. One of the examples that it cites is Google but there are many more:

In 2003, Google Ireland had no revenue but in 2010 it reported revenues of €10.1bn (US$12.9bn), up €2.2bn on 2009. Google Inc reported worldwide revenues of $29.3bn. Most of Google Ireland’s revenues relate to advertising revenue raised in countries outside Ireland. It’s booked as an Irish export but it’s not an export as traditionally understood. Today, companies like Google can carve the world into a small number of single markets with revenues booked in low-tax jurisdictions that are unrelated to the economic activities in those locations.

Looked at this way, it seems that it is not competitiveness that brought the Irish current account back into surplus at all. Rather it was a combination of decreased consumption of foreign merchandise imports as Irish income fell and foreign corporations washing their profits through the country to avoid paying high rates of corporation tax.


The Economics of the Royal Mail Privatisation

The privatisation of Royal Mail will begin tomorrow, when the final launch price of the shares and their allocation will be announced. How are these shares likely to perform? Well, if history is any guide, they will perform fabulously well.

Have a look at the graph below, which ran in the Financial Times on Tuesday. It shows the amount by which the share prices of the British companies privatised under Thatcher rose.

[Graph: Financial Times chart of post-privatisation share price gains under Thatcher]

As we can see, most of these companies did rather well. After only a year British Telecom’s share price was up over 80%, British Airways’ was up over 20% and British Gas’ was also up over 20%. The only one that didn’t do so well was British Steel, but that was a moribund industry that had to compete on the world market, while the other three companies were domestic; as is Royal Mail.

What’s more, from what we currently know the markets are valuing Royal Mail far higher than any of the companies privatised under Thatcher. The Guardian reports:

The frenzy for the shares – fuelled by expectations of an immediate paper profit of 20% to 30% when trading begins on Friday – is more intense than demand for British Gas or British Telecom was at the height of the privatisation drive in the 1980s and 1990s.

The likelihood that these shares will thus result in windfall returns for private investors is very high.

Why then are the UK government privatising the company? Two main reasons, I think. (1) Ideology. (2) Because the UK government are engaged in austerity and so they don’t want to pony up the money that would be needed to modernise Royal Mail; instead they’re going to leave it to private investors. What will the results of this be? Higher prices for consumers, obviously.

Here’s an interesting question though: if we abstract away from ideology and the UK government’s obviously absurd austerity politics, what are the real economics behind the privatisation? Well, let’s try a counter-factual first. Let’s imagine that the government were to modernise Royal Mail by engaging in increased deficit spending. What would happen?

First of all, the interest rates on government debt would not budge given the current ZIRP environment. Secondly, there would be an immediate boost in demand for real goods and services as the investment drive began. The UK economy today is nowhere near full employment or full capacity, so this would not lead to inflation; rather it would just increase output and employment (probably not by very much, but still). The new capacity brought online by the investment drive would then increase productivity. Finally, prices for postage would remain stable; they might even fall if the modernisation produced especially good productivity gains.

Now, let’s lay out the economics of the privatisation.

Let us imagine that after the shares are sold in the market private investors undertake an investment drive, and let us further imagine that most of the money for this investment is borrowed by the company. Again, interest rates would not rise in the ZIRP environment; so, no difference there. Likewise, there would be no inflationary pressure due to this increased investment; it would all contribute to increased employment and output. Again, the investment drive would increase productivity. Prices for postage, however, would likely rise.

So, what are the net effects? Simple. In the case of privatisation postage prices will increase and this will feed into the CPI — i.e. it will contribute to inflation. Whereas if the government had done the investment itself there would be no price increases and the CPI would remain as it would otherwise have been; indeed, in time the modernisation might even contribute to a fall in postage prices. In addition to this, the price increases will accrue as profits to investors, so there will be a redistribution from consumers to investors. That’s it really.
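As a rough back-of-the-envelope sketch of that CPI effect — the postal-services weight in the index and the size of the price rise below are hypothetical placeholders, not official figures:

```python
# Hypothetical illustration of how a postage price rise feeds into the CPI.
# Neither the CPI weight nor the assumed price increase is an official figure.
cpi_weight_postal = 0.005     # assume postal services are ~0.5% of the CPI basket
postage_price_rise = 0.10     # assume a 10% rise in postage prices after privatisation

cpi_contribution = cpi_weight_postal * postage_price_rise
print(f"Direct contribution to CPI inflation: {cpi_contribution:.2%}")

# Under the public-investment counterfactual the assumed price rise is zero, so the
# contribution is zero (or negative, if modernisation eventually lowers postage prices).
```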

Now, doesn’t it seem like the privatisation is an eminently stupid thing to do? Of course. But that’s what austerity politics and budget deficit taboos are all about: stupidity. Good news for those who buy up the shares though.


Paul Krugman Misses Key Component of His Own Model


John McHale over at Irish Economy has written a post on the possibility of a default by a sovereign currency issuer. In the post he discusses Paul Krugman’s stripped-down Mundell-Fleming model, in which Krugman shows that a country issuing a sovereign currency cannot default, and that if there is a run on that country’s government bonds the result will be a devaluation, which will in turn lead to an expansion of output as exports increase.

Krugman’s model — put together in late 2012 — emulates the argument that MMT and other Post-Keynesian economists have been making for years: if a country has a sovereign currency then the government cannot default, and any loss of confidence in the government bonds of that country will only affect the exchange rate. Before turning to the model itself I want to deal with some points made by McHale which strike me as misguided.

The first is that McHale seems to think that Krugman’s result only holds in a so-called liquidity trap. But this is simply not true. Even in “normal” times — whatever those might be — the central bank sets a target rate of interest (represented in Krugman’s model by a Taylor Rule). If there is a run on bonds the central bank will not change the rate of interest unless inflation rises. So, even in “normal” times the effects will largely be the same: the central bank will soak up bonds offloaded by foreign investors and all the effects will fall on the currency which, ceteris paribus, will depreciate.

The second point McHale makes that is problematic is the following,

So what then is left out?   I think one problem comes with the assumption that that the central bank can still set the domestic interest rate following its usual monetary policy rule (constrained by the zero lower bound).   What the stripped down model leaves out is that central bank is only directly affecting the overnight lending rate between banks, and also possibly future expectations of this rate.

This is where the experience of Ireland and other peripheral euro zone economies is relevant.   Stressed banks cannot access funding at the central bank’s policy rate.   This is what has led to the fragmentation of bank funding markets across the euro zone.   The weak creditworthiness of the sovereign is one of the factors affecting the creditworthiness of the banks, as the sovereign remains the primary capital backstop to the banking system.   And the high funding costs of the banks is keeping up lending rates to the real economy.

This passage seems to me rather confused for two reasons. First of all, take the assertion that the “weak creditworthiness of the sovereign is one of the factors affecting the creditworthiness of the banks”. This is true to some extent. But in order for this to affect the creditworthiness of the banks the value of bonds has to fall. In Krugman’s model, however, when bonds are offloaded by foreign investors they are soaked up by the central bank, so their nominal value does not fall. Given that the repo operations the central bank engages in rely on this nominal value, the fall in the real value of these bonds (through the fall in the exchange rate) will not make any difference to the banks’ ability to get funding.

Tied to this, McHale again seems to be confusing a sovereign currency issuer with a user of a currency. In the case of the former, if there is a funding problem with banks the central bank will step in as lender of last resort. The European situation that McHale refers to only arose because in the Eurozone the ECB was reluctant to act as a lender of last resort. In the US and the UK after the crisis the central banks stepped in to allow stressed banks access to funding. So even in the case where an offloading of bonds by foreign investors did affect the creditworthiness of banks — which it would not — the central bank would use its money creation powers to fill the gap.

So, does that mean that Krugman’s model is correct? No, it is wrong. Even on its own terms Krugman has missed something in his model, as has McHale. The key to this is in the Taylor Rule that Krugman uses to determine the interest rate.

As is well known, the Taylor Rule sets interest rates in line with (a) inflation and (b) unemployment (some variations leave out the latter, but we can ignore this for our present argument). Now, a devaluation of the currency has two effects: it makes exports cheaper and imports more expensive. The latter effect can, of course, cause price inflation. So even in Krugman’s own model there is a possibility that a fall in the exchange rate would lead to higher inflation which would, in turn, lead the central bank to hike interest rates to meet its Taylor Rule.
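To make the mechanism explicit, here is a minimal sketch of a Taylor-type rule with an assumed pass-through from the depreciation to import prices. The rule coefficients, the import share and the pass-through parameter are illustrative assumptions, not Krugman’s calibration.

```python
# Toy Taylor rule: i = r* + pi + a*(pi - pi_target) + b*(output gap), floored at zero.
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0, a=0.5, b=0.5):
    return max(0.0, r_star + inflation + a * (inflation - pi_target) + b * output_gap)

def inflation_after_depreciation(base_inflation, depreciation, import_share=0.3, pass_through=0.5):
    # Assumed: a share of the consumption basket is imported and a fraction of the
    # depreciation passes through to those prices within the period.
    return base_inflation + import_share * pass_through * depreciation

pi_before = 1.5                                                        # inflation before the bond sell-off (%)
pi_after = inflation_after_depreciation(pi_before, depreciation=10.0)  # after a 10% depreciation

print(taylor_rate(pi_before, output_gap=-2.0))   # rule-implied rate before: 2.25
print(taylor_rate(pi_after, output_gap=-2.0))    # rule-implied rate after: 4.50
```

Even with this modest assumed pass-through, the depreciation raises inflation enough to push the rule-implied policy rate up by more than two percentage points — which is precisely the channel at issue.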

Am I saying that I agree with this interpretation? No, because I don’t think that central banks set the interest rate in line with a Taylor Rule at all. I also don’t think that exchange rate movements have linear effects on imports and exports — which is why I would be sceptical about the increase in output that Krugman assumes will follow from the devaluation. Regardless, however, Krugman has missed a key component of his own model because he ignored the effect that price rises due to rising import costs might have on the interest rate through his Taylor Rule function.

The lesson? Better to throw this whole New Keynesian approach out altogether. It merely leads to confusion. The Post-Keynesian and MMT approach is far more informative. And what’s more, its proponents arrived at Krugman’s results years before him.


Matter and Models


What if all the world’s inside of your head
Just creations of your own?
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the woods
While you’re hiding in the trees

— Nine Inch Nails, Right Where It Belongs

In two previous posts (here and here) I have been dealing, directly or indirectly, with the philosophy of Bishop George Berkeley. With the second post in particular I became a little uneasy that my aim may have seemed to move too far away from the general content of this blog. In order to remedy this I hope to tie those discussions back into these more general concerns here.

In doing so, however, I must assume an assent on the part of the reader that is not likely to be forthcoming except in the rarest of cases. The assent that is assumed is that the reader, like me, agrees with Berkeley’s ‘immaterialist’ or ‘subjective idealist’ philosophical stance. Or, put another way: they agree with Berkeley and me that matter does not in fact exist. (The best breakdown of this argument, which I think iron-clad, is still Berkeley’s Three Dialogues Between Hylas and Philonous.)

Now what, I ask, is the key objection to the notion that matter does not exist? The strongest, I think, is this: if I take an instrument of some form and destroy certain parts of my brain, my perceptions will be entirely altered. Does this not then prove that matter trumps ideas/perceptions? Not really.

In fact, this is identical to the arguments that Berkeley uses in his texts — I think, for example, of those in which he asks whether the hotness of a fire is internal to the object itself or merely a sensation. Examined properly, of course, it is merely a sensation. This is most clearly shown in the following passage from the Three Dialogues:

Phil: Can any doctrine be true if it necessarily leads to absurdity?

Hyl: Certainly not.

Phil: Isn’t it an absurdity to think that a single thing should be at the same time both cold and warm?

Hyl: It is.

Phil: Well, now, suppose that one of your hands is hot and the other cold, and that they are both at once plunged into a bowl of water that has a temperature between the two. Won’t the water seem cold to one hand and warm to the other?

Hyl: It will.

Phil: Then doesn’t it follow by your principles that the water really is both cold and warm at the same time — thus believing something that you agree to be an absurdity?

Hyl: I admit that that seems right.

Phil: So the principles themselves are false, since you have admitted that no true principle leads to an absurdity.

From this simple thought experiment we can see clearly that ‘hotness’ and ‘coldness’ are not, in fact, present in any object itself but are rather ideas caused by sensations. Yes, we can formulate a manner of measuring temperature neutrally or quasi-objectively by deploying, for instance, a thermometer. But this is a rather artificial construction. It gives us a reading, but it does not actually tell us about hotness or coldness, which are only ideas formed as the result of sensations and which are primary with respect to any numerical value we try to give to temperature.

Back to our example of brain damage. Is this not entirely the same? In truth, if we use an instrument to cause ourselves brain damage, what we are doing is altering our capacity to produce sensations and ideas. But there is no reason to suppose that something called ‘matter’ exists in order to accept this. This is because, as Berkeley stresses time and again, the denial of the existence of matter is not the denial of the existence of what we might call ‘reality’.

It is perfectly true that damaging certain centers of the object we perceive as the brain will lead to sensory and ideational deprivation. But we no more have to assume that matter exists to accept this than to accept that matter exists when the object we perceive as the flame of a fire causes a sensation of hotness when the object we perceive as our hand is placed in it.

This gets to the heart of what the common prejudice in favour of matter is really all about — and this, in turn, is where we can tie Berkeley’s philosophy back into the general concerns of this blog.

What is really going on when people think of matter is that they are attempting to model their reality. When I think of a flame burning my hand by heating up the atoms and molecules within it, I am really constructing an imaginary model of what I think to be going on. I am trying to detach myself from my immediate perceptions and look at myself, as it were, from a third person or God’s eye perspective. I am, to reduce this somewhat, trying the impossible task of turning myself into the object of my own perception.

This may, in some circumstances, be rather useful. But we should recognise that it is a wholly artificial construction and purely an exercise in imagination. In truth, no such third person perspective can be attained that is not a construction of our imagination. If we put our hand in the fire, it burns; if we damage our brain, we lose certain sensory and ideational capacities. That is all. Any attempt to understand this from a third person perspective — that is, any attempt to model this — is a completely secondary construction that is based wholly in our imagination.

The common misconception that matter actually exists is to be explained in this way: it arises from a cognitive bias in the Western mind. Westerners have been taught — brainwashed, in a sense — to think in terms of models. They have been taught to conceive of their reality not immediately, but mediately. This, in and of itself, can lead to scientific progress, but it can also lead to mass delusion: to a tendency to mistake imagination for experience; models for reality.

‘Matter’ is but the imaginary construct that Westerners have come up with to ground their model-oriented tendencies of mind in some sort of objective reality. But when examined carefully, as Berkeley so masterfully did in his Three Dialogues, it turns out to be an empty delusion; an artificial construction set up and built into our cultural prejudices in order to have us assent to a certain, highly imaginary way of thinking.

Thus all the errors that we find in, for example, economics and the social sciences are really grounded in this belief in matter. What we refer to as ‘scientism’ is actually the mistaken belief in matter which is wholly synonymous with our attempts to gain a third person or God’s eye view of ourselves. Such acts of imagination, again, can be useful and constructive but when we fall into habits of thought that accept these acts of imagination as objective reality we completely and utterly delude ourselves.

It is such delusions that generate the most truly absurd aspects of economics today. It is the belief — for it is nothing but a belief with no ground — in the existence of matter that leads us to try to construct models and then mistake these for reality.


The Theory of Relativity: Anticipated at the Turn of the Eighteenth Century by George Berkeley


Bishop George Berkeley is, in my opinion, the most profound philosopher ever to have written. He came up with many ideas in the early modern period — that is, around the beginning of the 18th century — that were only integrated into modern science around the beginning of the 20th century. What is more, most Anglophone philosophy today still operates under notions that have been stale in scientific discourse since Mach and Einstein and which should have been overturned by Berkeley nearly three hundred years ago.

Modern Anglophone philosophy, for example, usually operates under Kantian notions of space and time as a priori givens. Later in a philosophy course it is often admitted that Kant was operating in a Newtonian paradigm that has since been overthrown, but this is simply a handwave; it is rarely, if ever, integrated into the teaching of philosophy. Thus Anglophone philosophy today operates in a strange slipstream: these philosophers know that absolute notions of space and time are incorrect, yet they teach philosophy as if these notions were still valid.

Here I present a particularly clear exposition of the relativity of space and time as given by Berkeley in his seminal Three Dialogues Between Hylas and Philonous. In the dialogues Hylas is representative of Newtonian ideas together with a belief in the existence of matter whereas Philonous is representative of Berkeley’s position on the relativity of space and time together with an affirmation that matter does not exist. These two ideas are inherently tied up with one another.

In what follows I use a version of the Three Dialogues Between Hylas and Philonous that has been updated by translating it into modern English. Such a translation is perfectly in keeping with Berkeley’s common sense view of how thought should be structured.

So, first let us turn to the relativity of space. Here is the relevant part of the dialogue in full,

Phil: A tiny insect, therefore, must be supposed to see its own foot, and other things of that size or even smaller, seeing them all as bodies of considerable size, even though you can see them — if at all — only as so many visible points.

Hyl: I can’t deny that.

Phil: And to creatures even smaller than that insect they will seem even bigger.

Hyl: They will.

Phil: So that something you can hardly pick out because it is so small will appear like a huge mountain to an extremely tiny animal.

Hyl: I agree about all this.

Phil: Can a single thing have different sizes at the same time?

Hyl: It would be absurd to think so.

Phil: But from what you have said it follows that the true size of the insect’s foot is the size you see it having and the size the insect sees it as having, and all the sizes it is seen as having by animals that are even smaller. That is to say, your own principles have led you into an absurdity.

Hyl: I seem to be in some difficulty about this.
As we can see, Berkeley overturns the idea of absolute space by reference to the fact that the world we experience is always based on our relative position within it. Thus the notion of space as existing ‘out there’ as some objective given becomes meaningless. In the appendix to his Relativity Einstein makes a similar point — although he wrongly attributes the overthrow of absolute space to David Hume. He writes,

Mach, in the nineteenth century, was the only one who thought seriously of an elimination of the concept of space, in that he sought to replace it by the notion of the totality of the instantaneous distances between all material points.

For more information on Mach’s theory of space the following article is a good summary. As can be seen, his concerns are identical to Berkeley’s.

Let us now move on to Berkeley’s criticisms of the notion of absolute time. Again, I quote in full the relevant section of the dialogue.

Phil: Can a real motion in any external body be at the same time both very swift and very slow?

Hyl: It cannot.

Phil: Isn’t the speed at which a body moves inversely proportional to the time it takes to go any given distance? Thus a body that travels a mile in an hour moves three times as fast as it would if it travelled only a mile in three hours.

Hyl: I agree with you.

Phil: And isn’t time measured by the succession of ideas in our minds?

Hyl: It is.

Phil: And isn’t it possible that ideas should succeed one another twice as fast in your mind as they do in mine, or in the mind of some kind of non-human spirit?

Hyl: I agree about that.

Phil: Consequently the same body may seem to another spirit to make its journey in half the time that it seems to you to take. (Half is just an example; any other fraction would make the point just as well.) That is to say, according to your view that both of the perceived motions are in the object, a single body can really move both very swiftly and very slowly at the same time. How is this consistent either with common sense or with what you recently agreed to?

Hyl: I have nothing to say to it.

Once again, it might be fruitful to compare this to Einstein’s argument. In the ninth chapter of Relativity Einstein writes,

Events which are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every reference-body (co-ordinate system) has its own particular time; unless we are told the reference-body to which the statement of time refers, there is no meaning in a statement of the time of an event.

As we can see, Berkeley’s arguments are basically identical to Einstein’s. The key difference, however, is that Berkeley is trying to do away with the existence of matter and ‘external’ space altogether, as was also the case, I think, with Mach. Einstein was eventually to reject this.

Berkeley’s philosophical position survives today in the Continental tradition of phenomenology and existentialism, whose most important figures include Edmund Husserl, Martin Heidegger, Jean-Paul Sartre and Maurice Merleau-Ponty. His insights have been all but purged from Anglophone philosophy and only turn up in the most roundabout ways. What’s more, materialism has become a sort of crude Faith-based ideology of our time.

It seems that, as belief in religion and God has declined, people who would have been among the Faithful in other times have fashioned for themselves a Faith-based materialism that is as groundless as it was in the early 18th century. But old habits die hard, and those inclined toward blind Faith will rarely change their minds when presented with a contrary argument.

Faith-Based Arguments in Empirical, Causal and Probabilistic Reasoning


David Hume is today the philosopher most often associated with what might be termed ‘radical empiricism’. The problem, of course, as I have pointed out on this blog before, is that he was not the originator of what should properly be recognised as a conceptual revolution. More than this, the thought of the person from whom he took his ideas was completely perverted in the process.

The person whose ideas have been tampered with was, of course, the Irish philosopher George Berkeley. It was George Berkeley who laid the groundwork for radical empiricism with his observation that all abstract general ideas are merely instances of particular ideas. To get a handle on this we may as well follow Berkeley’s argument closely. In his Principles of Human Knowledge he quotes John Locke on what an abstract general idea is.

Abstract ideas are not so obvious or easy to children or the yet unexercised mind as particular ones. If they seem so to grown men it is only because by constant and familiar use they are made so. For, when we nicely reflect upon them, we shall find that general ideas are fictions and contrivances of the mind, that carry difficulty with them, and do not so easily offer themselves as we are apt to imagine. For example, does it not require some pains and skill to form the general idea of a triangle (which is yet none of the most abstract, comprehensive, and difficult); for it must be neither oblique nor rectangle, neither equilateral, equicrural, nor scalenon, but all and none of these at once? In effect, it is something imperfect that cannot exist, an idea wherein some parts of several different and inconsistent ideas are put together. It is true the mind in this imperfect state has need of such ideas, and makes all the haste to them it can, for the conveniency of communication and enlargement of knowledge, to both which it is naturally very much inclined. But yet one has reason to suspect such ideas are marks of our imperfection. At least this is enough to show that the most abstract and general ideas are not those that the mind is first and most easily acquainted with, nor such as its earliest knowledge is conversant about.

But Berkeley goes on to contest this. Indeed, he claims that such abstract general ideas are really only so much nonsense. He writes,

If any man has the faculty of framing in his mind such an idea of a triangle as is here described, it is in vain to pretend to dispute him out of it, nor would I go about it. All I desire is that the reader would fully and certainly inform himself whether he has such an idea or no. And this, methinks, can be no hard task for anyone to perform. What more easy than for anyone to look a little into his own thoughts, and there try whether he has, or can attain to have, an idea that shall correspond with the description that is here given of the general idea of a triangle, which is “neither oblique nor rectangle, equilateral, equicrural nor scalenon, but all and none of these at once?”

It was this move that allowed Hume to do away with the abstract general idea of causality. For Hume, as is well-known, causes are only known in their particularity. Just because the sun rose today does not mean it will rise tomorrow. This is known as Hume’s ‘scepticism’ and it leads to the conclusion that we can only know (a) what is immediately given to our senses and (b) what is inscribed in our memory by previous sense impressions. Thus there is no rational reason to argue that the sun will rise tomorrow based on past instances of the sun rising and so forth.

Later on Hume would argue against his own theoretical scepticism in his A Treatise of Human Nature as follows,

Shou’d it here be ask’d me, whether I sincerely assent to this argument, which I seem to take such pains to inculcate, and whether I be really one of those sceptics, who hold that all is uncertain, and that our judgment is not in any thing possest of any measures of truth and falshood; I shou’d reply, that this question is entirely superfluous, and that neither I, nor any other person was ever sincerely and constantly of that opinion. Nature, by an absolute and uncontroulable necessity has determin’d us to judge as well as to breathe and feel; nor can we any more forbear viewing certain objects in a stronger and fuller light, upon account of their customary connexion with a present impression, than we can hinder ourselves from thinking as long as we are awake, or seeing the surrounding bodies, when we turn our eyes towards them in broad sunshine. Whoever has taken the pains to refute the cavils of this total scepticism, has really disputed without an antagonist, and endeavour’d by arguments to establish a faculty, which nature has antecedently implanted in the mind, and render’d unavoidable. (Part IV, Section I)

In actual fact we are back to Berkeley’s position on the matter; it’s just that Hume has not explicitly stated what he is doing. For Berkeley, any order in the chaos of particular ideas that we experience as humans is due to the fact that God puts this order there for us to find. So Berkeley’s answer to the scepticism that he knew was implicit in his work was that people should have Faith.

Now, look again at that passage from Hume: is he not telling us the same thing? I think it rather obvious that he is. He is just replacing the term ‘God’ in Berkeley’s argument with the term ‘Nature’. Rather than saying that God is responsible for the order that we encounter in the world, Hume claims that Nature has “implanted” a faculty “in the mind” that gives rise to this order. What is this metaphysical entity that Hume calls Nature? One would be forgiven for thinking that it is, in fact, a synonym for ‘God’.

Regardless, however, the implications are perfectly clear: for Hume, just as for Berkeley, the continuity of our perceptions is based on Faith — whether that Faith be in a mysterious entity called ‘God’ or a mysterious entity called ‘Nature’ is really a secondary question.

This was again recognised by the French phenomenologist Maurice Merleau-Ponty. He recognised it in what he referred to as ‘perceptual faith’, by which he meant a Faith in our perceptions that comes, as it were, built into them and which is not subject to further reflection. In his work The Visible and the Invisible he writes,

It is a question not of putting the perceptual faith in place of reflection, but on the contrary of taking into account the total situation, which involves reference from the one to the other. What is given is not a massive and opaque world, or a universe of adequate thought; it is a reflection which turns back over the density of the world in order to clarify it, but which, coming second, reflects back to it only its own light. (p35)

What Merleau-Ponty realised, as had Berkeley (and as Hume had carelessly neglected), was that Reason as such is based on Faith. There is a point at which our belief in certain rationalistic ideas is actually wholly Faith-based and, lest we fall into total scepticism (which, as Hume pointed out, we never actually do), we must accept this aspect of Faith that undergirds our Reason. What meaning we give to this is, of course, entirely personal.

All of this has a bearing on, for example, the use of probabilistic reasoning in economics. The use of such techniques is always, when you strip it right back, based on a certain Faith in the validity of the causal chain under scrutiny. This Faith is usually placed both in some probability distribution and in the person of the researcher, who trusts himself to have chosen the ‘correct’ criteria for testing. (And, conversely, does not ‘trust’ other researchers who come up with different results; this is the point at which such Faith-based arguments become hermetic, sealed and not open to criticism.) There is usually another Leap of Faith in assuming that the future will mirror the past in some sense — i.e. a Faith that the data under scrutiny are ergodic.
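To illustrate the kind of assumption at stake — this is a toy simulation, not drawn from any actual econometric study — here is a simple multiplicative process for which the ensemble average and the typical time path tell opposite stories. That the future will mirror the past is exactly the sort of thing such data cannot establish on their own.

```python
import numpy as np

# Toy non-ergodic process: a quantity is multiplied by 1.5 or 0.6 with equal probability
# each period. The ensemble average grows (mean factor 1.05 > 1), but the typical
# individual path decays (mean log factor < 0), so time and ensemble averages diverge.
rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 100
factors = rng.choice([1.5, 0.6], size=(n_paths, n_steps))
outcomes = factors.prod(axis=1)

print("Ensemble average of final outcomes:", outcomes.mean())        # large
print("Median (typical path) final outcome:", np.median(outcomes))   # close to zero
print("Expected per-step factor:", 0.5 * 1.5 + 0.5 * 0.6)            # 1.05
print("Expected per-step log factor:", 0.5 * np.log(1.5) + 0.5 * np.log(0.6))  # negative
```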

Most researchers using such techniques are completely blind to this: they just manipulate the symbols and have no need to question their Faith in them. This, of course, is deeply problematic, but it is built into the structure of the discourse of probability and how it relates to economics through econometrics. What these discourses do is provide a firm means by which those using them may, in a very real sense, have Faith in their causal inferences. The danger is that such discourses also allow researchers to evade arguments from the outside and never question their Faith in their ventures; simply for the fact that, as in dogmatic religion, Faith is given from the outside.

All the criticisms one sees of probabilistic reasoning are simply a questioning of certain of its tenets of Faith. While I have no real problem with such Faith-based arguments per se — indeed, I recognise them as necessary — I do think that cheap rubbish is often sold to unquestioning people. As we know from cults, when dealing with issues of Faith it is possible to convince vulnerable people of any nonsense so long as it provides a firm ground for their life and their activities. Econometric reasoning is mainly nonsense, and the reason it is sold so easily is that economists, whose practice is so uncertain and so open to error, are in an insecure ontological position that has them fall back uncritically on any garbage that provides them with a Faith that comes complete with a false mantle of scientificity.


The Economist Magazine, Marxism and the Conventional Wisdom


This morning I came across a rather awful piece on The Economist‘s website entitled A Marxist Theory is (Sort Of) Right. The piece is indicative of what I think to be a far more general trend in contemporary intellectual life: namely, the fact that Marxism exists as a sort of weird counterpart to what we generally call the ‘conventional wisdom’.

The other day I wrote a post dealing with JK Galbraith and what he called the ‘conventional wisdom’ but perhaps I should again provide a nice quote from his The Affluent Society that lays out once more what the conventional wisdom is.

Because familiarity is such an important test of acceptability, the acceptable ideas have great stability. They are highly predictable. It will be convenient to have a name for the ideas which are esteemed at any time for their acceptability, and it should be a term that emphasizes this predictability. I shall refer to these ideas henceforth as the Conventional Wisdom. (p18)

That’s a rather nice summary: the conventional wisdom is characterised by ideas that are stable, predictable and, above all, familiar. With this in mind we can approach The Economist article but first a word on the publication.

The Economist magazine is perhaps the prime organ that disseminates the conventional wisdom that exists in the economics profession today. It is geared toward a popular audience — unlike the far more sophisticated and specialist Financial Times — and can thus regularly be found, for example, in the dentist’s waiting-room. Whereas the Financial Times is a serious organ that seeks to provide real, tangible information in fairly concentrated form to an audience that actually uses such information in their professional lives, The Economist is better thought of as a sort of upmarket glossy magazine providing whimsy for a middle manager or a lawyer awaiting a filling or a root canal.

Many — on the political left, for example — inaccurately portray the magazine as being right-wing. But it is only as right-wing as the politics of the day; no more, no less. If the politics were to take a left turn, The Economist would follow. Likewise many in the heterodox economics community mistake the magazine for being neoclassical or marginalist. This is slightly more accurate but nevertheless somewhat misleading: The Economist is as neoclassical or marginalist as the economics profession of the day. If economics were to take a more Keynesian turn — which it does on occasion these days — the magazine would follow.

The Economist is really a sort of mirror reflecting back on society the economic orthodoxy of the day and it is for this reason — and almost no other, barring perhaps the occasional attractively presented graph — that it is interesting.

But back to Marxism. Many — both on the political left and in the heterodox community — get a bit jumpy when The Economist uses the ‘M-word’, seeing in it a sort of victory. While the political left may be somewhat correct in that a dominant ideology on that side of the fence is getting the spotlight, the heterodox community are being wholly misled. This is because they misunderstand Marxism as being some sort of remedy to the conventional wisdom. But it is, in fact, no such thing.

Think about this for a moment. You’re in your late teens or very early twenties. You’re attending university. How difficult is it for you to be exposed to Marxist ideas? Not very! In fact, it is almost a rite of passage. In the humanities department there are always a few Marxist professors; pamphleteering in the halls are members of some Marxian socialist faction or other; and heading up the protests against, say, cuts to the university budget is a Trotskyist from the socialist or Green party.

As I said above, Marxism exists as a sort of weird flipside to the conventional wisdom. “Don’t be fooled by all that,” the Marxists of various stripes will say, “it’s really the opposite of the propaganda they’re feeding you!” In the world of Marxism many of the tenets of the conventional wisdom are literally overturned rather than properly interrogated. What is good in the conventional wisdom becomes bad on the Marxist reading.

Free trade is a salient example, as this is what The Economist article above is dealing with. In the conventional wisdom free trade is universally a good thing. In economics this is backed up by what can only be called a dogma in the form of the Ricardian idea of the so-called law of comparative advantage. On the Marxist reading free trade is all about exploitation and imperialism plain and simple. The trick here is often not to really analyse the truth content of, for example, a given trade policy or the theory of comparative advantage, but rather to just label it ‘evil’ rather than ‘good’; thus inverting what one thinks to be the moral consensus in Western capitalist democracies.

Those that generally adhere to the conventional wisdom then latch onto this and begin to associate, in the discourses of economics and politics, any position taken that seems to run contrary to the conventional wisdom with Marxism. That is precisely what the anonymous writer of The Economist article in question has done.

In the article the author discusses an IMF study that tests the Singer-Prebisch hypothesis against data going back to 1650. The hypothesis states that the terms of trade of those who produce commodities (primary goods) deteriorate over time relative to those who produce manufactured goods (secondary goods). This, of course, leads to a questioning of the so-called law of comparative advantage, which would generally encourage developing countries to produce primary goods for the developed world as it is in their comparative advantage to do so.

Is there anything Marxist about this idea? Certainly not at a theoretical level. Although a Marxist may use the Singer-Prebisch hypothesis as part of a more general assertion that the developed world is ‘exploiting’ the developing world and extracting surplus value from them, the hypothesis does not contain within it any such moral judgments. It is merely a hypothesis about empirical facts. (And one which, it would seem from both common sense and the IMF study in question, contains a great deal of truth.)
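For readers who want the hypothesis in quantitative terms, here is a minimal sketch with made-up price indices — not the IMF’s data going back to 1650. The terms of trade are simply the ratio of export prices to import prices, and the claim is that for primary-commodity exporters this ratio trends downward.

```python
import numpy as np

# Hypothetical price indices for illustration only (base year = 100).
years = np.arange(2000, 2011)
primary_export_prices = np.array([100, 101, 99, 102, 100, 98, 99, 97, 96, 95, 94])
manufactured_import_prices = np.array([100, 102, 104, 105, 107, 109, 110, 112, 114, 115, 117])

# Terms of trade: export price index relative to import price index.
terms_of_trade = 100 * primary_export_prices / manufactured_import_prices

# A simple linear trend; a negative slope is what the hypothesis predicts
# for commodity exporters.
slope = np.polyfit(years, terms_of_trade, 1)[0]
print(f"Average change in the terms-of-trade index per year: {slope:.2f}")
```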

The only reason that it is seen as Marxist by those so heavily sedated by the conventional wisdom is that they know nothing else. Anything in economics and politics that falls outside of the conventional wisdom leads the adherent of said conventional wisdom on a trip down memory lane to their university days; to their Marxist sociology professor and their encounter with socialist pamphleteers. “If it sounds like it goes against free trade and comparative advantage,” reasons the adherent of the conventional wisdom, “then it must be Marxism.”

Such a view is completely bizarre to anyone with an ounce of knowledge of the history of free trade. By such a reading, of course, the founding father and first US Treasury Secretary Alexander Hamilton would be a Marxist, given that he entirely rejected free trade and comparative advantage in his seminal Report on Manufactures. Keynes too rejected the doctrine of laissez faire and free trade as early as 1926 in his The End of Laissez-Faire. Does that make Keynes a Marxist? Well, that would certainly be rather odd since in the aforementioned essay he writes:

But Marxian socialism must always remain a portent to the historians of opinion — how a doctrine so illogical and so dull can have exercised so powerful and enduring an influence over the minds of men and, through them, the events of history.

Obviously Keynes did not see scepticism with regard to free trade and laissez faire as being synonymous with Marxism.

One could think up numerous other examples; the German historical school would be a case in point. But I think, at this stage, the reader gets my point.

Marxism is the inverse of the conventional wisdom and in its own strange way it insulates and protects it. Marxists themselves are often just spouting the conventional wisdom in its inverted form and, by doing so, they give those who adhere to the conventional wisdom a perfect label which they can tack onto anything that doesn’t fit with their preconceived notions. Thus far from being anathema to the conventional wisdom, Marxism is a sort of negative foundation upon which it rests.

Marxism provides a nice dichotomy: if you reject the moral consensus, invert everything you’re taught and become a Marxist; if you support the moral consensus, adhere to everything you’re taught and label anything that doesn’t fit the bill as Marxism, so that your intellectual circuits don’t become scrambled and you don’t have to think through the merits and truth content of your ideas. Such a “good” versus “evil” battle serves everyone nicely, it would seem. That is, I think, largely the function of Marxism today. As to why it rose to such a position, this is tied up with the history of the 20th century — with the Soviet Union and the labour movements — and that is another day’s discussion.


The Holy Grail: Distractions in Econometrics and Economic Modelling


Lars Syll ran an interesting piece today on the “confounder” problem in econometrics. This is basically the problem of how we can tell that a relationship between, say, A and B is not, in actual fact, being caused by an entirely separate variable, C.

The problems raised by this are innumerable. Syll’s main point is that in order to test for this we must know all possible C’s: C1, C2, C3 … Cn. That means that we must control for every variable that we want to prove is not causing the correlation. Assuming, then, that we actually manage to isolate every possible “confounder”, we can then say that there is a causal relationship between A and B.

This raises all sorts of problems. As Syll points out, the main one is that the argument is somewhat circular: if we convince ourselves that we know every possible confounder — that is, every possible C — we have, in a strong sense, already convinced ourselves of the truth of our model.

Think about this: if I say that I know every possible confounder that means that I have already established the true causal relation between A and B deductively. Why? Because if I am wrong and I have missed a potential confounder then the correlation is spurious and the test meaningless.

So, what I am saying is really the following:

“I am certain that I have excluded every potential C because I have isolated every possible C and tested them. Thus if there is a relationship between A and B then it must be causal.”

The critic would then say:

“Well, how do you know that you have isolated every possible C?”

To which I would have to reply:

“Because I know that my model is correct and thus by process of negation I know of all the possible C’s which are incorrect.”

But then the critic would say:

“Why then are you undertaking the test if you already know your model is correct?”

To which I would have to respond:

“Because I want to test the correlation between A and B and assure myself that it is causal.”

And the critic would then point out:

“But that’s silly because by claiming that you know every possible C you are already claiming that your theory is correct by the negation of every other possible theory.”

Finally, I would throw my hands up and say:

“Okay, fine. Maybe I don’t know every possible C. But at least I tried all the ones I do know so that I can at least be sure that they are not causing the correlation and so I can say that, given the evidence, I think that the most probable causal relation between the variables is due to A and B.”

What this proves, beyond a shadow of a doubt, is that, as Syll says, econometrics cannot work in isolation. It must apply deductive theories to the data it handles. This is important because some people who do econometric work don’t, in fact, use deductive models at all.

It also proves something else which, while Syll mentions it with reference to Keynes’ critique of econometrics toward the end, I don’t think he emphasises enough: it implies that all relevant C’s must be data points that we can use — i.e. all relevant confounders must be quantifiable data points.
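To make the confounding problem concrete, here is a minimal simulation sketch using purely synthetic data (and assuming numpy and statsmodels are available): a variable C drives both A and B, a regression of B on A alone finds a strong ‘effect’, and that effect disappears once C is included. If C were one of the non-quantifiable factors discussed below, there would be nothing to include.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000

# Synthetic data: C causes both A and B; A has no direct effect on B at all.
C = rng.normal(size=n)
A = 2.0 * C + rng.normal(size=n)
B = 3.0 * C + rng.normal(size=n)

# Naive regression of B on A alone picks up a spurious 'effect' of A.
naive = sm.OLS(B, sm.add_constant(A)).fit()
print("Coefficient on A, ignoring C:   ", round(naive.params[1], 2))       # ~1.2, not 0

# Controlling for the confounder: the apparent effect of A vanishes.
controlled = sm.OLS(B, sm.add_constant(np.column_stack([A, C]))).fit()
print("Coefficient on A, controlling C:", round(controlled.params[1], 2))  # ~0
```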

This, I think, is an absolutely key problem with using econometrics for economic analysis. Anyone who follows the markets and the economy on a day-by-day basis knows the importance of non-quantifiable relationships to their trajectory. With reference to recent events, for example, how do we measure Bernanke’s move to taper the QE program back in June; and, in turn, how do we measure his flip-flop on this back in September? These events have been all that people in the markets have been talking about over the past few months, yet they cannot be represented as data points and thus cannot be tested as confounding variables (or as causal variables, for that matter).

Another problem that Syll briefly raises, but on which I don’t think he places enough emphasis, is the supposed solution to confounding variables of selecting different ‘populations’ — that is, different ‘control groups’. The problem here is the assumption that what economists are looking for are some sort of iron laws that apply across different groups homogeneously. But this is not, of course, true; good economists do not believe in such iron laws.

Take the example of the multiplier. Imagine that we are trying to estimate a multiplier across, say, five different control groups. Should it bother us if we get entirely different readings? Of course not. We would just conclude that the different groups have different multipliers and that we could aggregate them to get the average.
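A trivial sketch of that aggregation, with invented group multipliers and spending shares:

```python
# Hypothetical multipliers estimated for five different groups, together with the share
# of total spending attributable to each group. The aggregate multiplier is just the
# spending-weighted average; the dispersion across groups is itself informative.
multipliers = [0.6, 0.9, 1.1, 1.4, 1.8]
spending_shares = [0.10, 0.25, 0.30, 0.20, 0.15]

aggregate = sum(m * w for m, w in zip(multipliers, spending_shares))
print(f"Aggregate multiplier: {aggregate:.2f}")
```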

This is how good economics is done. It’s not about establishing iron laws, but rather about identifying trends at any given moment in time. This is why the application of catch-all models — which is what Syll is implicitly dealing with — is the wrong approach. It’s a waste of time; a search for a Holy Grail that simply doesn’t exist.

This is where the critique of econometrics overlaps with the critique of economic modelling more generally: this is simply not what we should be doing as economists. Trying to build a Holy Grail — one that gives oneself some sort of immortal knowledge of The Economy — and then continuously testing this against ever new data-sets (which are potentially as infinite as time itself) is a complete and utter waste of time. As Keynes said in a similar context (I’m paraphrasing slightly),

The labour involved is enormous, but it is a nightmare to live with.

It is the tendency of economists to try to build their Holy Grail models that leads them to defend what is an indefensible discipline — econometrics — and it is not until the structure of the profession has changed sufficiently to move away from the former that the latter will cease. And with that I leave the reader with a positive rather than a negative quote from Keynes.

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems.


The (Brief) Rise and (Inevitable) Fall of Gold


Gold fell rather precipitously yesterday and it appears that many in the markets are scratching their heads. “Why,” they think, “is gold falling in price in the face of a looming government shutdown and a recent indication by the Fed that they are halting the QE taper?”

The best way to answer this is probably to lay out the events sequentially. Okay, so the Fed announced its intention to taper on June 19th of this year. As can be seen from the graph below, gold fell immediately afterwards, as we might expect. From that graph, however, it appears that the shiny metal was already in a downward trend prior to the announcement of the taper, leading one to suspect that it might not simply have been the announcement that led to the fall.

[Chart: gold price around the June 19th taper announcement]

On July 18th Bernanke’s Humphrey-Hawkins speech was pre-released and analysts noted its dovish tone. Bernanke seemed to imply that should an economic recovery not be forthcoming the Fed might increase rather than decrease its bond purchases. The gold market failed to react meaningfully; it spiked for literally an hour or two and then fell to new lows.

Then came the announcement that the Fed might not taper after all which reached the collective ears of the market on September 18th. Once again, as can be seen from the chart below, gold failed to react. It continued on its downward trend.

[Chart: gold price around the September 18th announcement that the taper was on hold]

All of this, of course, raises the question: if gold prices were not reacting to talk of a taper, then why did they rise substantially from their bottom at the end of June to their peak at the end of August? The answer, it seems, is rather simple: it was a dead cat bounce. Gold prices had fallen so far by the end of June that they were bound to recover at least some of their losses.

The reason for this, it seems to me, is built into the structure of the gold market itself. As I have pointed out elsewhere before, the gold market has two distinct characteristics which, if properly understood, explain a great deal about its ups and downs.

The first of these is that there is what we might call a ‘hard core’ of gold investors, and these account for the massive increase in ‘bar and coin investments’ since 2007. This hard core are, of course, the gold bugs and, as I pointed out before, their investments are driven purely by (lack of) confidence — i.e. fear of hyperinflation and the like. They are a somewhat problematic class of investor: on the one hand they remain loyal customers, in that provided their fears are being stoked they will hold onto their gold; on the other hand, they are probably rather skittish.

This leads to the second defining characteristic: there is an extremely low price elasticity of demand for gold. This means that if even a small amount of gold is dumped onto the market the price must fall substantially before a buyer is found. This indicates that the above-mentioned hard core are adopting a defensive posture; they are not really interested in buying more gold, but rather in defending their current gold hoardings.
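As a stylised illustration of what a low price elasticity of demand implies — the elasticity values here are assumptions for illustration, not estimates for the gold market:

```python
# With price elasticity of demand e (in absolute value), absorbing an extra quantity
# equal to x% of the market requires, to a first linear approximation, a price fall
# of roughly x / e percent before buyers are found.
def required_price_fall(quantity_dumped_pct, elasticity):
    return quantity_dumped_pct / elasticity

for e in (0.1, 0.5, 1.0):   # assumed elasticities; lower means more inelastic demand
    print(f"elasticity {e}: a 2% dump implies roughly a {required_price_fall(2.0, e):.0f}% price fall")
```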

Taken together these two defining characteristics suggest that the gold market is in an extremely fragile state. Any rush for the exit by investors means precipitous price declines. The dead cat bounce we saw from the end of June to the end of August was likely the result of gold bug investment managers convincing their wary clients to take advantage of the price declines. But this is likely subject to diminishing returns as those clients become ever more defensive moving into a future in which the likelihood of serious inflation seems less and less.
