Purchasing Power Parity (PPP) and the Exchange Rate

PPP

There is a theory that floats around out there called the ‘Purchasing Power Parity theory of the Exchange Rate’ — or something to that effect, the name seems to change depending on what source you go to. The theory, stripped right down, amounts to something like this: the ‘correct’ value of the exchange rate will be the old exchange rate times the change in the price level in one of the two countries involved divided by the change in the price level of the other of the two countries involved.

Let’s take a concrete example to be a bit clearer: the exchange rate between the yen and the US dollar (USD). So,

$$E^{*}_{t+1} = E_t \times \frac{P^{US}_{t+1}/P^{US}_{t}}{P^{JP}_{t+1}/P^{JP}_{t}}$$

where $E$ is the USD-per-yen exchange rate and $P^{US}$ and $P^{JP}$ are the two countries' price levels.

On the left hand side of the equation, denoted by a *, is the ‘equilibrium’ exchange rate between the USD and the yen in the future — i.e. time t+1. As we can see it is determined by the actually existing exchange rate in the here and now — i.e. time t — together with the inflation differential between the two countries.

Intuitively that means that if we are trying to figure out what the USD/yen exchange rate should be a year from now and we know that the inflation rate is going to be 10% higher in the US than in Japan, we will expect the USD to depreciate by 10% relative to the yen. Again, if the inflation differential between the US and Japan is positive then the USD should fall relative to the yen.

Let’s do an example to illustrate this. Let’s say that the USD/yen exchange rate starts at parity — i.e. 1USD = 1yen. Now, let’s say that the inflation rate in the US is 10% more than the inflation rate in Japan. We denote this through two price indices, the US index being 10% higher than the Japanese index. So,

$$P^{US}: 100 \to 110, \qquad P^{JP}: 100 \to 100$$

We do the calculations and we get,

$$E^{*}_{t+1} = 1 \times \frac{110/100}{100/100} = 1.10$$

As we can see, the USD has devalued by 10% against the yen. Whereas before 1 USD could buy 1 yen, it now takes 1.10 USD to buy 1 yen. The yen has become 10% more expensive.
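For readers who prefer to see the arithmetic spelled out, here is a minimal sketch in Python; the function and variable names are mine, purely for illustration:

```python
def ppp_rate(spot_usd_per_yen, us_price_growth, jp_price_growth):
    # Relative PPP: new rate = old rate * (US price growth / Japanese price growth)
    return spot_usd_per_yen * (us_price_growth / jp_price_growth)

# Start at parity; US prices rise 10%, Japanese prices are flat
print(ppp_rate(1.0, 1.10, 1.00))  # 1.10 -- it now takes 1.10 USD to buy 1 yen
```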

The problems with this theory, which is often used as a rule-of-thumb, are varied. A handy list of them can be found here. Obviously I cannot run through all of them here but the one that stands out most to me is the following:

The theory assumes that changes in price levels could bring about changes in exchange rates not vice versa, that is, changes in exchange rates cannot affect domestic price levels of the countries concerned. This is not correct. Empirical evidence has shown that exchange rate governs price rather than the latter governing the former. Prof. Halm opines that the national price levels follow rather than precede the movements of exchange rates. He states: “A process of equalisation through arbitrage takes place so automatically that the national prices of commodities seem to follow rather than to determine the movements of the exchange rates.”

This strikes me as a particularly poignant criticism as it gets to the root of the causality involved in the hypothesis behind the theory.

In an interesting paper by Rudi Dornbusch entitled simply Purchasing Power Parity the author points out that the status of the theory may be compared to the old Quantity Theory of Money. He writes,

The PPP theory has somewhat the same status as the Quantity Theory of Money (QT): by different authors and at different points in time it has been considered an identity, a truism, an empirical regularity or a grossly misleading simplification. The theory remains controversial, as does the QT, because strict versions are demonstrably wrong while soft versions deprive it of any useful content. (p1)

This strikes me as a very astute analogy. The PPP theory of exchange rates seems to try to make the same reductive assumptions about causality as the stronger forms of the QTM.

But let’s turn to some empirical evidence to see how it holds up. Given our above examples let’s take the yen/USD exchange rate during a period of turbulence between 1974 and 1990. The below chart lays out two variables: the yen/USD exchange rate and the inflation differential. The latter is calculated by subtracting the Japanese inflation rate from the US inflation rate. So, a positive differential indicates a higher rate of inflation in the US and a negative differential indicates a higher inflation rate in Japan.

[Chart: yen/USD exchange rate vs. the US-Japan inflation differential, 1974-1990]

As we can see, between 1974 and the beginning of 1978 inflation in Japan was substantially higher than inflation in the US. According to the PPP theory of the exchange rate this should mean that the USD should have appreciated against the yen by a similar amount to the differential. Alas, such was not the case. Rather, between 1976 and 1978 the USD fell in value against the yen.

Conversely, between 1978 and 1983 inflation in the US was higher than inflation in Japan. According to the PPP theory this should have resulted in a depreciation of the USD relative to the yen. What we actually saw was, again, precisely the opposite. The USD appreciated. So much so that in 1985 the Reagan administration had to request that they be allowed to forcibly depreciate their currency against the yen in what has since come to be known as the Plaza Accord.

After the Plaza Accord the rate of inflation stayed higher in the US than in Japan. But this raises the question: since the depreciation was forced, would it not be more logical that it was the depreciation itself that was causing the higher inflation, due to higher import costs in the US, rather than vice versa?

Let’s lay this out in plain English: if the PPP theory were correct we would expect to see the blue line on the graph pushed downward whenever the red bars climbed upwards and vice versa. In actual fact, we do not see this; indeed, as just highlighted, we often see exactly the opposite. And when we do see the blue line fall (i.e. the dollar devalue) and the red bars climb (i.e. higher US inflation) it appears that the causality is precisely the reverse: the devaluation takes place first and the uptick in inflation results from this.
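For those who want to check this sort of claim themselves, here is a rough sketch of the standard test in Python. The file name and column names are placeholders for whatever annual series you have to hand; under relative PPP the slope coefficient should come out near -1 (higher US inflation should go with a falling yen-per-USD rate):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("fx.csv")  # placeholder; columns: yen_per_usd, cpi_us, cpi_jp

data = pd.DataFrame({
    "fx_change": df["yen_per_usd"].pct_change() * 100,
    "infl_diff": (df["cpi_us"].pct_change() - df["cpi_jp"].pct_change()) * 100,
}).dropna()

# Regress the annual % change in the exchange rate on the inflation differential
fit = sm.OLS(data["fx_change"], sm.add_constant(data["infl_diff"])).fit()
print(fit.params, fit.rsquared)
```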

Naturally, some will say that the PPP theory must be handled in a more nuanced way to make accurate predictions about exchange rate moves. But if it performs this poorly — I would say: counter-factually — when used in a simplistic way, it is doubtful that any more nuanced application amounts to anything more than hocus-pocus; much like the old Monetarist attempts to explain output and inflation with reference to expansions and contractions in the money supply.

On that note, I leave the reader with the opinion of Paul Samuelson as quoted in Dornbusch’s excellent paper,

Unless very sophisticated indeed, the PPP is a misleading, pretentious doctrine, promising what is rare in economics, detailed numerical prediction.

Indeed. Although, given the state of the expanded models, we might add: as the model is expanded and becomes ever more cumbersome it becomes ever more dubious.


An Excellent Guide to Using Gretl


For those that don’t know, Gretl is a free, open-source econometrics package. Despite not costing anything I’ve found it to be a very useful econometrics program that can do pretty much anything — or, at least, anything that I’ve ever wanted it to do.

Gretl can be a bit daunting to use, however. This is especially so given its ‘stripped down’ presentational format (which I rather like, but others may not). Anyway, the author Hishamh over at the Economics Malaysia blog has put together a series of posts that guides the user through all the major uses of Gretl. The posts, complete with screenshots, are indispensable and I will here run quickly through what they show.

In the first post the author shows how to input and format data.

In the second post the author shows how to run and interpret a regression.

In the third post the author shows how to ensure that the test that has been run is robust — in this he shows the reader how to test for independence and normal distribution.

Finally, in the fourth post the author shows how to introduce dummy variables to control for seasonal variation in the data.
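The series itself works through Gretl’s menus, but for readers who want to see the same four steps as explicit commands, here is a rough equivalent sketched in Python with statsmodels. The file name and column names are placeholders, and the imports-on-exports regression is borrowed from the series author’s own example:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera

# Step 1: input and format the data
df = pd.read_csv("macro.csv", parse_dates=["date"], index_col="date")

# Step 2: run and interpret a regression of imports on exports
fit = sm.OLS(df["imports"], sm.add_constant(df[["exports"]])).fit()
print(fit.summary())

# Step 3: robustness checks on the residuals
print("Durbin-Watson:", durbin_watson(fit.resid))        # ~2 suggests independence
print("Jarque-Bera stat, p:", jarque_bera(fit.resid)[:2])  # normality test

# Step 4: add quarterly dummies to control for seasonal variation
dummies = pd.get_dummies(df.index.quarter, prefix="q", drop_first=True).astype(float)
dummies.index = df.index
X = sm.add_constant(pd.concat([df[["exports"]], dummies], axis=1))
print(sm.OLS(df["imports"], X).fit().summary())
```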

The series is not a substitute for actually understanding the issues involved in econometric modelling; these need to be understood independently. But I’m sure that the resourceful reader can make ample use of Google, Wikipedia and, if necessary, an econometrics textbook to supplement the series.

Of course, as readers of this blog will know, I’m rather skeptical of econometrics techniques when used on economic data. I have not since changed my opinion and continue to think that these techniques do more harm than good.

Just as Keynes, however, in his critique of Tinbergen suggested that the econometric method was useful for finding out the effect that an increase in railway traffic, the rate of profit on railways, the price of pig-iron and the rate of interest had on net investment in railway rolling-stock (that is, railway vehicles), I would suggest that there are some rather obvious relationships that can be tested. (Although the issue then arises as to how useful such tests are when the relationships reach a certain level of obviousness…).

As well as this there are some more contentious relationships that can be tested and used very provisionally in making forecasts. The author of the series’ own example — that is, the lagged effect of exports on imports in a small open economy — is a good example of this. As would be, for example, the import/export elasticities of demand (i.e. how exports/imports are affected by income and the exchange rate) or multiplier estimations. But again, such relationships likely miss more than they capture and they are sure to be unable to incorporate anything of real interest. For that reason I would continue to strongly endorse David Freedman’s ‘shoe leather’ approach to economic statistics.

But if you’re going to do econometrics, and you’re going to use Gretl in particular, the above series is great for either the beginner or those that want a refresher course.

Update: The new version of the excellent Gretl manual by Lee Adkins is out now and can be downloaded here.


Further Problems With the Static Framework of the ISLM


Yesterday I did a short post on how the ISLM model misrepresents how interest rates function because it views them as static. Today I would like to make a further, if more difficult point: namely, that the very way in which the interest rate stimulates investment is inherently limited in that it cannot produce cyclical upswings in effective demand — and thus, cannot produce cyclical upswings in output. In doing this I will be drawing on Jan Kregel’s excellent paper Of Prodigal Sons and Bastard Progeny which in turn draws on some of Joan Robinson’s own writings on the ISLM.

As Kregel shows in the paper, Robinson had a very clear-sighted view about how interest rates function. The first part of Robinson’s article is quoted by Kregel as such,

Relatively to given expectations of profit, a fall in interest rates will stimulate investment somewhat, and by putting up the Stock Exchange value of placements [i.e. share prices], it may encourage expenditure for consumption. These influences will increase effective demand and so increase employment… But even when the rate of interest can be moved in the required direction, it may not have much effect. The dominant influence on the swings of effective demand is swings in the expectation of profits. (My Emphasis)

Here then there are two channels through which a fall in the interest rate works. On the one hand, it increases investment directly — presumably by lowering the cost of borrowing somewhat and giving a temporary boost to the animal spirits. On the other, it increases consumption expenditure through the wealth-effect as the net worth of those that hold shares rises. The increase in effective demand then arises due to the increase in investment and consumption that arises due to the fall in interest rates.

However, as I highlighted in the above quote, Robinson takes a properly Keynesian/Kaleckian view of how such increases in investment may or may not prove self-reinforcing: that is, in order for a cyclical upswing to be maintained expected profits must increase. Robinson is skeptical that this will occur under the influence of monetary policy because she thinks that the fall in interest rates will lead to,

…a boom which will not last because after some time the growth in the stock of productive capacity competing in the market will overcome the increase in total expenditure and so bring a fall in the current profits per unit of capacity, with a consequent worsening of the expected rate of profit on further investment.

The increased investment creates a bigger pool of productive capacity which then competes in the market for profits. This, in turn, drags down the profit on each unit of productive capacity. Since profit expectations are now somewhat dampened and since the fall in the interest rate has run its course further investment will fall off and the boom created will fizzle out. In the book from which Kregel draws the quotes, Economic Heresies, Robinson makes crystal clear that this is the Keynesian view proper.

[Keynes’] account of a boom is to say that a high rate of investment causes a fall in expected profits as the supply of productive capacity increases… one thing he would never have said is that a permanently lower level of the rate of interest would create a permanently higher rate of investment.

This ties into Kalecki’s argument that if central banks try to control the level of effective demand through the interest rate they will find that they will have to drop the interest rate over and over again as each boom peters out until, ultimately, they end up at the zero lower bound. As Steve Randy Waldman of Interfluidity notes, this appears rather prescient if we look at the period after 1980 when central banks moved toward trying to steer the economy by using the interest rate alone. He presents the following graph which shows precisely this dynamic,

[Chart: US interest rates falling to ever lower levels with each cycle since the 1980s]

As we can see, after each recession the central bank had to drop the interest rate ever lower to ensure that an expansion could take place. This is precisely what Keynes, Robinson and Kalecki would have expected.

The problem here, as in my original post on the ISLM, is that its adherents do not seem to understand the difference between statics and dynamics. Kregel states this quite clearly, all the while drawing on Robinson,

[U]sing Hicks’ IS curve, “a permanently lower level of the rate of interest would cause a permanently higher rate of investment”. This Keynes “could never have said” for it confused equilibrium with a process of change: “Keynes’ contention was that a fall in the rate of interest relatively to given expectations of profit would, in favourable circumstances, increase the rate of investment”. But, this would cause expectations to change and the marginal efficiency of capital curve to shift, and presumably the IS curve with it. An IS schedule could not be built upon the static relation between interest and investment.

Where many Post-Keynesians, and even some New Keynesians like David Romer, today focus on the LM-curve when critiquing the ISLM framework, Robinson showed how the IS-curve was simply not compatible with Keynes’ own theory. At the same time she showed how it was inapplicable to any world in which investment was based on the expectation of future profits.

Again, we should emphasise the underlying problem here: the ISLM model is based on a static framework that simply cannot conceptualise dynamics. What is more, the framework is not some ‘provisional’ outline sketched out prior to a more sophisticated dynamic analysis; indeed, it cannot be as it falls apart under dynamic conditions. Nor are the misinterpretations it produces ‘innocent’ in the sense that the errors only exist in the abstractions of the model; it is quite clear that those who use the model will likely come to extremely wrong-headed policy prescriptions (the call for a negative rate of interest today as the cure of our ills being one that comes to mind…).

The ISLM discourages economists from thinking in the very manner that they should think: that is, in terms of the historical time in which we all live, historical time in which expectations formed under uncertainty are of the utmost importance. It trains economists into thinking that they are like social engineers who understand the economy as one might understand the functioning of a piston-engine. But economists are no such thing and any economist who convinces themselves otherwise will soon find themselves running headfirst into the double-glazed window that the rest of us call ‘reality’.

Addendum: For a further critique of the idea of using the interest rate to control economic fluctuations see here for a completely different but equally powerful and, I think, somewhat novel argument by yours truly.


Problems with Static Interest Rates in the ISLM


The ISLM takes quite a beating from Post-Keynesians — and, I would argue, rightly so. There are any number of reasons for this but let me just here highlight one that is not very regularly talked about.

As is well known, and as can be seen in the diagram below, the ISLM considers output to be a function of the interest rate. At a higher level of interest rates output is thought to be lower and at a lower level of interest rates output is thought to be higher.

[Diagram: ISLM, showing an increase in the interest rate]

The problem with this presentation? It is not true. You see, even if we allow that interest rates have a substantial effect on output, it is not so much the absolute level of interest rates that matters as the relative rate. Relative to what? To itself, of course. What I mean is that if interest rates have effects on output it is the change in the interest rate rather than the absolute level that leads to expansions or contractions in output.

This can clearly be seen by simply looking at data for a wide variety of countries. If the ISLM were correct we would assume that countries with high interest rates would have low output, but this is simply not the case. Take Brazil as an example. Its interest rate has ranged between 18% and 8% since 2006,

[Chart: Brazil central bank interest rate, 2006 onward]

From the perspective of the ISLM these extremely high interest rates should translate into low output growth, but this is simply not the case. Throughout the period — barring an interruption by the worldwide recession in 2008 — output growth in Brazil has been fairly high, running between about 2% and 9% annually,

[Chart: Brazil annual GDP growth]

This is not simply a case of the nominal interest rate being high while the real interest rate is low either. If we look at the inflation rate over this period it is not particularly high at all — at least, for a developing country — and has proved relatively stable,

[Chart: Brazil CPI inflation]

I would say that over this period we are seeing an average interest rate of about 12% or so and an average rate of inflation of about 5% or so. Real interest rates, on average, then are about 7% — not to be sniffed at — and yet GDP growth averaged maybe 4-5% if we control for the recession.
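In rough Fisher-equation terms, as a back-of-the-envelope approximation using the numbers just cited:

$$r \approx i - \pi \approx 12\% - 5\% = 7\%$$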

The lesson here is obvious: it is not the absolute level of the interest rate that has an effect on output but rather the change in the interest rate. Large changes can cause expansions and contractions in output, but the effects of the absolute level is far less clear.

In this regard, the ISLM — even if we take it on its own terms — is somewhat misleading. The relationship between interest rates and output is not a mechanical one that can be represented in two dimensions; rather, it is one that rests on changes taking place in the variables.

Some will now say: “Oh, but we’re not stupid… everyone already knows that…”. Actually, I think that this is far from clear. When economists — like Krugman, for example — talk about a natural rate of interest that would result in full employment they tend to talk in absolute terms; “The natural rate is –x%…”.

Now, I obviously do not believe in this natural rate but even if I did I would point out that it is the pace at which the interest rate is moved from the present rate to the natural rate that is important, not the absolute level of the natural rate per se. Even if the so-called natural rate were assumed to function it would likely not do so if the interest rate were lowered very gradually to this rate, over the course of, say, 5 years. Rather it would have to be done rather quickly; maybe over the course of a few weeks. Central banks are fully aware of this, of course, which is why they always try to time their interest rate changes precisely so as to ensure the intended effects.

So, the ISLM framework, with its static ideas about interest rates and output, does indeed lead to confusion among those who use it. One more reason, among many, to throw it out.


Steve Keen’s AS-AD Curves and a Suggestion For a New Stock-Flow Equilibrium Approach


A commenter on Lord Keynes’ blog recently called my attention to something rather interesting; namely, that Steve Keen seems to be using some sort of supply and demand framework to determine price in the macroeconomy in his models. Let me just say that I do not follow Steve’s work all that closely and so I apologise if this is old news and has since been overcome. With that caveat, a few comments.

The moment I heard this I thought, “Ah, Steve must be using the old aggregate supply/aggregate demand (AS-AD) framework…”; indeed, I responded as such to the commenter. He then directed my attention to the following article in which Keen explains that when he was integrating prices into his debt model he found himself with a number of paths that he could take.

He could take the neoclassical path and equate marginal revenue with marginal cost. But that would, as Steve well knows, be cheating. Alternatively, he could adopt a Post-Keynesian perspective and use a mark-up pricing framework. He says that this would have been a ‘fudge’ but does not say why. I would imagine that he thought that this would be a fudge because it would have been very difficult to get substantial price movements in, for example, financial markets if there was a crash.

He instead opted for a third path which he describes as such:

The one way I could do that was to argue that the price level would adjust under the pressure from the flow of monetary demand on one side, and the pressure of physical supply on the other.
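For concreteness, one way such an adjustment rule might be formalised (this is my own sketch, not Keen's published equation) is as a lagged convergence of the price level toward the ratio of the monetary demand flow to the physical supply flow:

$$\frac{dP}{dt} = \frac{1}{\tau}\left(\frac{D}{Q} - P\right)$$

where $D$ is nominal demand per period, $Q$ is physical output per period and $\tau$ is an adjustment lag.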

This seems to me to be identical to the old AS-AD framework which can be seen in the diagram below.

[Diagram: AS-AD curves]

What the AS-AD framework shows is a trade-off between prices and output. The idea is that as aggregate demand increases both output and prices will rise. As we can see, the aggregate supply curve is quite flat at low levels of output while it is basically vertical at high levels. This indicates that at low levels of output an increase in aggregate demand will lead to large increases in output with very small increases in prices because the economy is assumed to have significant excess capacity, while at high levels of output an increase in aggregate demand will only affect prices as the economy is assumed to be at full capacity.

The AS-AD is not the worst framework imaginable. Indeed, some Post-Keynesians, like Paul Davidson, have advocated its use for didactic purposes. But there are many problems with it. For one, the downward-sloping aggregate demand curve that you see in the above diagram is derived from the ISLM model, which assumes a linear relationship between the interest rate and output and a fixed supply of money set by the central bank.

It is assumed that as prices rise the interest rate increases because the real value of the money supply falls, and so higher prices lead to a fall in real output via the rise in the interest rate; this is what the downward-sloping AD curve depicts. Of course, this rests on the crucial assumption that the central bank sets a fixed supply of money. Yes, we can shift the supply of money in the model by shifting the money supply curve, which will then shift the AD curve, but we cannot avoid the simple fact that the AS-AD model is not compatible with Post-Keynesian endogenous money theory.
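Schematically, with the money stock $\bar{M}$ held fixed by assumption, the mechanism behind the downward slope runs:

$$P \uparrow \;\Rightarrow\; \bar{M}/P \downarrow \;\Rightarrow\; r \uparrow \;\Rightarrow\; I \downarrow \;\Rightarrow\; Y \downarrow$$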

Basically, any objections to the ISLM framework that Post-Keynesians might hold equally apply to the AS-AD framework. There is simply no getting away from this. If we want to imagine the price level in the macroeconomy as simply being based on the interaction of a giant aggregate supply and aggregate demand curve we must accept the ISLM. My sense is that Keen will not be very happy about doing this.

I would also say that the AS-AD framework does not properly incorporate expectations. Surely this is an enormous problem for anyone trying to model speculative dynamics as I assume that Keen is. So, what is the alternative? Well, as I’ve said before on this blog, I tried to create a new theory of pricing for my dissertation that avoids these problems. Perhaps it would be better suited to such models.

Here, then, is a pricing equation for the macroeconomy in line with the theory laid out in my dissertation. As will shortly be seen it provides us with many advantages which I shall run through shortly. (I have omitted some more complex properties of the final equation as they would take up too much time to discuss here and will be discussed in the full working paper that is hopefully to be published soon. I should note, however, that these properties lead to some very interesting conclusions when seen in the context of the work of Hyman Minsky).

[Equation: the aggregate pricing equation]

Those terms may seem a little obscure to most people so here is a table laying them out,

[Table: definitions of the terms in the pricing equation]

As can be seen this is a framework that is not reliant on any notion of market equilibrium or supply and demand curves. Rather it is more akin to a Keynesian multiplier relationship. Or, to put that another way, it views price as the outcome of a stock-flow equilibrium process. We simply ‘plug in’ certain variables and we get price outcomes.

In the full framework the components of price are also broken down into various sectors — for example, financial asset prices are determined by both the government sector (central banks etc. — think QE) and the private sector (financial market investors etc.) — which gives the framework even more flexibility.

The aggregate price level incorporates both asset prices and real prices, the distinction drawn in the paper being between prices that contribute to Gross Domestic Product and those that do not (yes, the framework is consistent with the national accounts). It is determined both by the quantity of assets, real and financial, supplied (qZ) and by the amount demanded. The latter is then broken down into the quantity of financial assets demanded based on future expected prices (Pef), the quantity of real assets demanded based on future expected price increases (Per) — these are the speculative components of demand — and, finally, the quantity of assets demanded for ‘real’ consumption and investment (Pr).

The framework incorporates the best parts of the supply and demand framework given allowances for quantity rather than price adjustments. This means that it can incorporate Post-Keynesian administered pricing theory without totally ditching supply and demand. It also incorporates Marshallian price elasticity concerns and expectations as they exist in both Post-Keynesian and behavioral economics. Taken together I believe that we can reduce price formation purely to these variables and to no more. I also believe that this framework can be applied to any price formation that takes place in any type of economy.

In the paper in which I present the theory I refer to it as a ‘general theory of pricing’ and in that regard I have in mind the following quotation from Keynes, which he wrote in the foreword to the German edition of the General Theory (and which I discuss in further detail here),

This is one of the reasons that justifies the fact that I call my theory a general theory. Since it is based on fewer hypotheses than the orthodox theory, it can accommodate itself all the easier to a wider field of varying conditions.

As I said above, I don’t know if Keen has already done something different with his model. But so far as I can see he might well be better off with the above framework as it is extremely flexible both in the phenomena that it can explain and in the manner in which it can be used to simulate economic dynamics.

I think that Keen will appreciate this to an even greater extent if he considers the expanded version of this framework which includes a novel insight that I think has very important implications for Minskyian analysis. This insight I call the ‘paradox of speculative profits’ and I think it goes a long way toward explaining why financial fragility can become so acute while investors remain entirely oblivious. (Hint: this problem is built into the structure of asset markets just as the ‘paradox of thrift’ is built into the macroeconomy — i.e. it is structural). I will leave that insight, however, to emerge into the light of day when the paper finally appears in full. If Steve would like a look at the paper prior to its coming out I would be more than happy to send it to him.

Addendum: I would imagine that some commenters are going to jump on me here. “But Phil,” they will say, “a good part of this blog is taken up with your critiques of abstract modelling, so why do you seem to be promoting it here?” To this I would give two responses that are inherently linked.

First of all, I do not disagree with modelling per se. Rather I disagree with applied modelling as is done when models are applied to data for forecasting and so forth using Bayesian or objectivist probability methods (i.e. using econometric techniques). I have no problem with using modelling as a didactic tool provided that students are made to understand clearly that modelling and actual applied economic thinking are as different from one another as dressing up and playing fireman as a five-year-old is from actually working as a fireman.

Secondly, and tied to this, I think that the most valuable aspects of models are their components. If you dig into almost any worthwhile macroeconomic model, for example, you will find a Keynesian multiplier equation. I propose that this component is actually more important than the model itself in that it teaches something absolutely concrete about a very real economic relationship whereas the model likely drifts off into abstractions. What gives models their value is that they impart knowledge of these components to those who study them. In this regard I fully agree with Keynes who wrote in Chapter 21 of the General Theory,

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems.

It is thus, to my mind, far more important to get the various components of a model right than it is to construct a model. But if others prefer to construct models I see it as my job merely to try to give them the correct components. Recall that it was the microchip that gave such awesome power to the modern day computer and not vice versa.


Capital Sins: To What End Should Economic Life Be Directed?


Victoria Chick published an interesting paper in the journal Economic Thought on the World Economic Association website entitled Economics and the Good Life: Keynes and Schumacher. In it she explores what both men thought the end goal of economics should be. As she says in the paper, she finds rather a lot of overlap but also some differences in approach. I will run through both of these here.

Both men share the ideal of bringing economic life closer to how they think people should live. Broadly speaking, both think that people should engage in less stultifying work and spend more time doing things that provide some sort of inner contentment or enjoyment. The two thinkers differ, however, on how this might be achieved and, consequently, on the form it should take.

Keynes is more of an optimist about the capitalist system. He thinks that if we let it run its course and continue to accumulate capital we will eventually reach a point of “capital satiety” where no more capital need be accumulated and resources can be directed to other ends. The economy will then reach a sort of steady-state where accumulation stops, people work less and time is devoted to more satisfying ends. Keynes has what seem to be in retrospect some rather unusual ideas about when this state will be reached, at one point suggesting that it will come about in the mid-1960s!

Schumacher is more pessimistic about the system. He believes that the system itself encourages traits which both Keynes and he find unseemly, immoral even; think here of greed, envy and all those other Evils. In one passage Schumacher is particularly clear about this,

The modern economy is propelled by a frenzy of greed and indulges in an orgy of envy, and these are not accidental features but the very causes of its expansionist success. (p38)

For Schumacher, as for Keynes, it was greed, envy and the desire to ‘get one up’ on others that drove the system. But where Keynes said that we should simply let it run its course, Schumacher said that the whole thing needed to be restructured. He believed, for example, that rather than cutting down on working hours, as Keynes suggested, we should change the nature of work itself; make it less stultifying, smaller-scale and have those who engage in it actually stake some of themselves in it.

These are nice ideas, certainly. They are also, as Chick hints at toward the end of her article, clearly at odds with much marginalist doctrine which tends to consider wants as insatiable, the drive for profits as a de facto social good and work a burden. Here I will take what I think to be a somewhat unpopular stance, however, and say that I think both men envisage something that probably cannot be done; that is: they seek to change human nature.

Today I believe that we in the West largely live in an era of what Keynes would have considered capital satiety. Indeed, overproduction seems to be one of our key problems (together with income inequality, which seems to be overproduction’s not-so-strange bedfellow). Yet we have not reached Keynes’ nirvana and people have not tried to incorporate Schumacher’s conception of the good life into their daily routine on any large-scale. Why is this?

Sure, we can play Marxist and blame Capitalism-in-the-abstract. “Oh, capital is impelling us forward and we cannot stop the train,” we might say in a Marxian mood. But that is not an explanation. Keynes and Schumacher were more incisive than Marx because they were not materialists. They recognised that Capital is not a real agent, rather it is the product of human psychological traits; ones that many find repulsive. Marx’s genius was that he, like many religious prophets before him, externalised Evil — what the theologians called ‘Sin’, Marx called ‘Capital’. (Indeed, his major work could easily have been entitled ‘Die Sünde’ rather than ‘Das Kapital’ without the work losing much meaning).

Marx’s dictum about the capitalist class, that “they don’t know it, but they are doing it”, is straight from the Bible. (“Father, forgive them, for they do not know what they are doing.” Luke 23:34). This is not a coincidence. Jesus thought that those who had nailed him to the cross were being driven by Sin and required forgiveness; Marx thought that those who accumulated capital on the backs of others were being driven by Capital and required emancipation. The terms ‘Capital’ and ‘Sin’ are here structurally identical; they externalise negative psychological or ontological traits of human beings and give them context in a larger interpretive system (i.e. Marxism and Christianity).

Again, Keynes and Schumacher were more honest; they recognised psychological or ontological traits for what they are. So, again, because with Keynes and Schumacher we can shed our metaphysical baggage and ask the question straight: why do people continue engaging in the capital Sin of sinful Capital accumulation long after it is necessary? Here is a thought: because human beings can be rather unpleasant creatures. They love power, they are inhabited by envy, they have an innate drive to control and their wants are so insatiable that they will create new desires any time the old ones are met.

Some religious and metaphysical systems attempt to control these drives, but they are probably doomed to fail because these are drives proper. They are innate in our species. I am not saying that such drives are ‘biological’ — that word means little to me in this context — rather I am saying that they are ontological truths of our being-human. To try to banish them from our constitution is like trying to banish murder from our population; it cannot be done, regardless of what some social reformers may have thought.

What’s more, people, to a far larger extent than I think is generally recognised, often love the things they claim to hate. The person who complains about the ten hours they spent at work before going back to the office for another ten-hour shift is no different from the smoker who says that they want to quit as they light up their twelfth cigarette of the day. Such activities are a deep part of their being and the complaints only accentuate the ‘pleasure-in-pain’ dimension of the activity.

Does this mean that everything is hopeless? No, I would not say that. There are good traits in humans also and these should be encouraged. The property developer will destroy the nature reserve to build cheap apartments if he is not stopped, make no mistake about that, and there is every chance that he can be stopped. Income inequality and overproduction are outgrowths of very negative traits in people — namely, envy and insatiability respectively — but they can be curbed; we know how to do this, it is just a matter of being given a chance to do so by having people recognise them for what they are: destructive outcomes of negative ontological traits.

But it seems to me that shooting for the moon and imagining, as Keynes and Schumacher did, that there is some end-point, some nirvana, when all is right with the world and the dark-side of Man subsides is altogether naive. (Don’t even talk to me about Marx and his religious doctrines!). Maybe others will disagree, but I still think that they’re fighting against the unstoppable tide of human nature. Perhaps, even, good things will get done because some seek out Utopia. But I’m still convinced that energy is better focused on particular goals.


What Happened to Science and Research Funding?


I’ll never forget the reaction of a scientist I once met, a chemist who had transitioned into corporate management, when I told her that I was an economist. “Oh,” she said, “so you’d know something about corporations and how they structure scientific research then, right?” I was somewhat surprised at the question as it’s one that I’ve literally never been asked before. I said that I knew a little bit about it and asked her why she was so curious.

“Well,” she said, “one of the reasons I went from being a researcher to being a corporate manager was because I wanted to know what on earth was going on with scientific research in these places.” I asked her if she had any success. She replied in the negative, she just couldn’t get her head around it, but she said to me that her impression was that Western countries were having serious problems while China seemed to be developing in a positive direction at a rapid pace.

As I spoke to her about it the problems became obvious; and they were exactly what I expected them to be. China had massive amounts of state-intervention in the way they allowed corporations to pursue R & D, while Western countries had a far more laissez-faire attitude. In China knowledge was seen as a public good and the government took the attitude that corporations had a certain amount of responsibility in producing knowledge for the public at large; if you were a corporation and wanted to benefit from what China had to offer you had to make a contribution to society at large. In the West, however, where the ideology of the market reigned, the corporations called the shots.

In what follows I will lay out a brief periodisation of the regimes of science that dominated in the US over the course of the 20th century. In doing so I draw on the third chapter of Philip Mirowski’s excellent book Science-Mart. Finally, I will consider what role economists play in our current regime and try to answer the question as to why my chemist friend was so befuddled.

Regime I (1890-1939)

The first regime of science dominated between around 1890 and the beginning of the Second World War (1939). Mirowski calls it, using a phrase borrowed from Thorstein Veblen, the ‘Captains of Erudition’ regime. The backdrop of this regime was the emergence of the great corporate merger movement that came at the end of the Long Depression and the deflation that came after it (1873-1896). During this period of deflation firms tended to gobble each other up, presumably as the excesses of the prior boom, which was based mainly on railroad speculation, unwound.

As the corporations formed into ever more concentrated units their legal status began to become more solidified in the laws of the time with access to patents being an extremely important prerequisite for this regime of science. This, combined with their increased size and access to financial capital, led to the proliferation of in-house R & D labs. But these labs were less interested in producing novel gadgetry. Rather they were geared toward market control.

In the wake of the deflation US lawmakers knew well the dangers of cartels and trusts as it was these that had led to the “robber baron” boom era that precipitated the depression. Thus even though this was the time in which the US corporation began its first steps to maturity it was also a time of strongly enforced anti-trust laws. The corporations turned instead to patents in particular and intellectual property rights in general to try to control their market share and ensure that competition was adequately suppressed.

The academy, on the other hand, was not so concerned with the new in-house R & D departments. This was the era when American universities favoured mainly a liberal arts education regime. Although in the later period of this era corporate funding would begin to leak into the university it was more or less a philanthropic exercise and was not so interested in gaining control over what went on in universities per se.

Regime II (1939-1980)

The second regime of science arose during and after the Second World War. Mirowski refers to it as the ‘Cold War’ regime. During the war it was quickly recognised that science could play a significant role in giving the US a tactical advantage over its military enemies — the Manhattan Project and the production of the atomic bomb being the most obvious manifestation of this. Thus the military became the main agent funding R & D in the sciences.

This was not exactly a state-controlled regime of science, at least not in the centralised manner that emerged in the Soviet Union. Rather it was decentralised in that purpose-built corporate research institutions came into being that were plugged directly into Federal government funding and worked closely with the military — think, in this regard, of the infamous RAND Corporation or Bell Laboratories. Some universities, most notably MIT, also became closely tied up with the military in these years.

In this period military funding became a sort of de facto industrial policy in the US and intellectual property rights were weakened so that technology could flow more freely through the economic system. This latter point may seem surprising but it actually made a good deal of sense at the time. Only a small amount of research carried out created material that was required to be classified and the military strategists wanted to ensure that they did not come to rely on a single contractor because in the case of nuclear war this could prove massively problematic for the mobilisation of the US war machine.

In addition to this, research was less commercialised. The military in its contracts was more inclined to leave the scientists to their own devices and this encouraged them to follow paths of research with no immediate purpose but which might pay off in a big way down the line in some unanticipated manner. Again, the promotion of this by the military must be interpreted in the context that they thought it would eventually give them an edge over their enemies.

Notably also in this period, peer review was a somewhat secondary manner of controlling scientific output. The primary control mechanism was direct intervention by the military who effectively rubber-stamped projects or rejected them. This system gave rise, somewhat paradoxically, to a good deal of openness as scientists felt at ease proposing grandiose visions to their managers while universities felt no immediate pressure to put the squeeze on their professors and researchers.

Regime III (1980-Present)

The third regime seems to many to have come into being with the fall of the Berlin Wall. In actual fact, however, it is less tied up with the end of the Cold War than it is with the new age of globalisation. For this reason Mirowski calls it the ‘Globalised Privatisation’ regime.

This regime is characterised by corporations outsourcing their R & D to external micro-labs and university departments. The funding that flows therefrom is then used to control what the researchers do. This is a carrot and stick approach. Where the military contractors dumped vast amounts of Federal money on research institutions with more long-term goals in mind, corporations are far more interested in immediate results. Thus research should be tightly controlled and monitored.

A key component of this regime is the weakening of anti-trust legislation and the enforcement of intellectual property rights. But according to Mirowski this is not the actual cause of the privatisation of the university and the commodification of knowledge; these are better seen as catalysts speeding up the process. Rather the cause is to be found in the fact that corporations were given ample scope to offshore and outsource. This removed their previous ties to their domestic nation-state in a way that fundamentally undermined the existing structure of the university. Mirowski writes,

It is access to lower-wage labor in the context of an academic infrastructure, disengaged from any corporate obligations to provide ongoing structural support for local educational infrastructure, that explains the shift in research funding to countries like China, India, Brazil, and the Czech Republic. (p127)

The university then tries desperately to counteract this exodus by engaging in various cost-cutting exercises — which typically end up being simply an expansion of a redundant and intrusive management bureaucracy — and attempting to line up its research agendas with what corporations desire. It is thus that outsourcing and globalisation lead to the privatisation of the environment of the university; one that is so obvious to anyone working in or around these institutions today that it need barely even be mentioned as more than a fundamental truth of everyday life.

The High Priests of the New Regime

The ideology used to justify this shift — provided, of course, by the economics profession — is one that we are all familiar with. What has supposedly been created is a sort of global ‘market for knowledge’. What some interpret as a rigid system of control stifling creativity and enforcing short-sightedness is glorified as being the most ‘efficient’ use of resources. Cue some dim marginalist argument by some brainwashed functionary wherein something called a ‘market’ distributes knowledge in some ‘Pareto optimal’ fashion.

Yes that’s right, the economists serve as the handmaidens of the new regime with their silly models and their nonsense being peddled to corporate managers in training who, once they leave the classroom and provided they do not completely lack critical faculties, become deeply confused about what on earth is going on.

“Why is China doing so well when their approaches seem so at odds with the diagrams they showed me in business school?” the curious corporate manager will ask themselves. But they will not receive an answer as the vast majority of people who are being paid to inform them as to what is going on are playing with stupid models completely detached from the real world.

Even those social scientists who try to extricate themselves from the regime find it difficult because universities and other institutions that provide funding try to push for more ‘scientific’ methodology in order to ensure (supposedly) that they are getting adequate bang for their buck. What this means in practice is usually rather straightforward: more maths, less critical reasoning. And what this leads to is a whole host of unreadable garbage studies with no real relevance but whose so-called ‘results’ can be dropped on the media outlets to give punters some immediate gratification. (“Did you know that sex is 78.6% better for people who recycle fruit skins and manage their budgets in an optimal manner?”… and so forth).

Welcome to the new Dark Age, folks, where researchers’ scopes are narrowed to an altogether dangerous degree while those who act as priests of the system busy themselves with constructions that can only be compared to the more unwieldy of the theological systems put together by the Scholastics in the Middle Ages. Welcome to the regime of globalised privatisation.


Is Real Communication Possible? Berkeley’s Particularism and Lacan’s Semantic Slippage


I’m currently rereading George Berkeley’s A Treatise Concerning the Principles of Human Knowledge as a friend of mine and I are considering writing a short book on Berkeley in the near future. In it we are hoping to discuss all of Berkeley’s work, including the little known fact that Berkeley was something of a Chartalist and advocated something very similar to Keynesian full employment policies. I’m hoping to also show that Berkeley’s Chartalism — which first and foremost views money as a mere symbol — is tied to his immaterialist philosophy that holds that material substance does not exist. I will return to this point at the end of this post.

The version of the Treatise that I am reading has an excellent critical introduction by the philosopher Jonathan Dancy. (I say this not because I agree with all the criticisms that Dancy raises in the introduction, many of which strike me as being deeply confused, but because as a survey of the work it is most useful.) In the introduction Dancy discusses many aspects of Berkeley’s thought but there is one that I want to focus on here; namely, Berkeley’s critique of Locke’s abstract notion of General Ideas. Dancy points out that a key component of Locke’s argument for General Ideas was communicability. If there were no General Ideas, Locke said, then your and my understanding of, for example, the term ‘city’ would be different.

Why? Because your understanding of the term ‘city’ would be merely a collection of all the individual cities that you have ever seen while mine would be a collection of all the individual cities that I have ever seen. Thus, if you have only seen Paris and London and I have only seen Tokyo and Sydney our conceptions of cities will be entirely different. This means that proper communication cannot take place in that when you and I hear ‘city’ we will understand different things by the term.

To avoid this Locke posited General Ideas. So, for Locke, the term ‘city’ contains something beyond Particular cities. It contains an abstraction that we all recognise as ‘city’. Berkeley denies this. He says that if we examine our own thoughts carefully it will be clear that any time we try to grasp at the General Idea of a city we will always encounter in our own imaginations merely particular experiences of certain cities. This is what might be called Berkeley’s ‘radical empiricist’ position. Dancy writes,

[Berkeley] holds that a perfectly non-abstract idea, the idea of a particular man, can stand for all men whatever; and he also, more contentiously, holds that thought does not require the constant occurrence of ideas in the thinker’s mind, and that therefore communication does not require the speaker to raise a matching idea in the hearer’s mind. (p32)

This is a very interesting argument that I don’t think is often recognised in the philosophy of Berkeley. It implies that communication does not occur in humans in the same way that, for example, communication takes place in computers. To put that another way: for Berkeley human communication is always imperfect. When I say or write certain words they call to mind different ideas in your mind than they do in mine.

This is actually a very similar argument to that made by the French psychoanalyst Jacques Lacan. Lacan started with the basic unit of semiotics; that is, the sign. The sign is made up of a signifier and a signified. So, for example, the word ‘tree’ calls to mind the object that we know to be a tree. The word ‘tree’ is the signifier and the object tree is the signified. Here is a diagrammatic representation,

[Diagram: the sign, with the signifier over the signified]

Semioticians usually assumed that there was a firm relationship between the signifier and the signified. Lacan, however, held that there was always a sort of slippage of meaning. In this, Lacan was following in Berkeley’s wake but was, I think, more radical. For Berkeley the signifier and the signified only slip when communication is taking place between two people that have different particular experiences of, say, trees. But for Lacan this slippage takes place within the single mind.

Think in this regard of a slip of the tongue. In Freudian psychoanalysis a slip of the tongue indicates a fundamental truth about our desire. Yet the person making the slip will usually claim that it was an error and insist that they meant something else. Thus their signifier is intended not to signal the signified that would be assumed if we interpret what they say literally but rather a different signified that, perhaps, has a similar phonetic make-up.

(Take the example of ‘tree’. A man might say in a slip of the tongue that he always liked the muscle cars that came from the ‘Big Tree’ US auto-manufacturers. The Freudian would then detect a whole host of phallic references in this slip given its context but the man would insist that his signifier, ‘tree’, did not refer to a large wooden object but rather to the number ‘three’. Thus, his signifier referred to a different signifier and not to the signified usually designated by it.)

I think that Berkeley’s radical empiricism is leading in this direction. His rejection of General Ideas makes clear that people do not think in the mechanistic way that contemporary science and Enlightenment thought often conceive of them as thinking. Rather meaning is far more open to interpretation and really depends on our own personal experiences; not to even mention the fact that communication is a very haphazard phenomenon that often breaks down. In his introduction to the Treatise Berkeley writes,

[T]here is no such thing as one precise and definite signification annexed to any general name, they all signifying indifferently a great number of particular ideas.

Later on, however, Berkeley takes a position even closer to that of the post-structuralists. Dancy summarises this as follows,

The physical world is a genuinely linguistic system, whose elements are variously combined and concatenated in much the sort of way that letters and words are, so that they should be capable of carrying detailed messages. Just as a limited number of letters can be used to create an infinite variety of messages, so a limited number of physical elements can be combined for the same purpose. (p52)

Here Berkeley gives language a certain primacy: he sees the world as a giant mass of signifiers which we try to interpret as best we can. At a crude level dark clouds, for example, signify rain, while at a more complex level unemployment signifies a deficiency of effective demand. It is in and through the linguistic systems that we construct that we understand why one idea or event might lead to another. The more precisely our linguistic systems align with empirical experience, the more useful they will prove to be and the better we can organise our actions toward our desired ends (whether that be carrying an umbrella to avoid getting wet or engaging in fiscal stimulus to keep output and employment at desired levels).
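The generative point in the Dancy quote, that a limited stock of elements yields an effectively unlimited variety of messages, can be checked with elementary arithmetic. A minimal sketch, assuming nothing beyond the counting rule that an alphabet of k elements yields k^n distinct strings of length n:

```python
# Elementary counting: a small fixed alphabet of k elements yields
# k ** n distinct strings of length n, so 'messages' multiply without
# practical limit as length grows.
k = 26  # e.g. the letters of the alphabet

for n in (1, 5, 10, 20):
    print(f"length {n:2d}: {k ** n:,} possible messages")

# length  1: 26
# length  5: 11,881,376
# length 10: 141,167,095,653,376
# length 20: roughly 2 * 10**28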

This is, of course, a far cry from the ‘realist’ ontology usually associated with Post-Keynesian economics, which holds that there are actual causes existing in something like an external material world. Rather it is more constructivist in that we actively participate in constituting these causes.

The example of fiat money — which Berkeley advocated — is instructive in this regard. If we agree with Chartalism that “taxes drive money” it should be clear that such a relationship does not truly exist “out there” but is rather a construction constituted in and through communicative language. You and I live under a state which subjects us to taxes, and this in turn dictates what we use as money; but the relationship is nothing material existing “out there”. Rather, it is a contract entered into by each of us which can potentially be broken at any time. Examined properly, it quickly becomes clear that all the relationships we deal with in economics, grounded as they are in some constructed accounting framework or other, are of a similar nature.

Posted in Philosophy, Psychology | 5 Comments

Dazed and Confused: Eugene Fama’s Bizarre 2010 Interview


I recently reread a 2010 interview with Eugene Fama, who has just been awarded the Nobel Prize for his work on the Efficient Markets Hypothesis (EMH). In the interview Fama tries to defend the position that the EMH held up “quite well” in the financial crisis of 2008. The interview is worth reading in the original as it is quite bizarre.

Here, however, I want to deal with two points. The first appears to me to be a willed contradiction on Fama’s part. The second concerns his highly contradictory statements on asset price bubbles.

Now, some may tell me that I am being unfair to Fama. They might say that in interviews we often say things in a more careless and sloppy manner than we might in writing. I accept that this is true, but nevertheless the oversights and contradictions in Fama’s arguments appear to me to paint a picture of a deeply confused man.

Let’s start with the second point; namely, how we define asset price bubbles. Fama starts by saying that he “doesn’t know what the term credit bubble means”. The interviewer then posits the following meaning,

I guess most people would define a bubble as an extended period during which asset prices depart quite significantly from economic fundamentals.

Although I don’t fully agree with that, I think it’s a fairly robust definition. And Fama seems to agree in that he says “That’s what I would think it is”. Now here’s where the whole thing gets a bit weird. First of all he says the following,

It’s easy to say prices went down, it must have been a bubble, after the fact. I think most bubbles are twenty-twenty hindsight. Now after the fact you always find people who said before the fact that prices are too high. People are always saying that prices are too high. When they turn out to be right, we anoint them. When they turn out to be wrong, we ignore them. They are typically right and wrong about half the time. (My emphasis)

Okay, that’s pretty clear, right? Fama is saying that people who make calls on asset bubbles are right about half the time and wrong about half the time. Well, I’m not sure if that’s true but I certainly agree that asset bubbles can be predicted. Indeed, Fama this year shared the Nobel with Robert Shiller, and Shiller called both the tech bubble and the housing bubble. But Fama then goes on, in literally the next breath, to assert that asset bubbles are not predictable phenomena. When asked whether he is saying that bubbles do not exist he says,

They have to be predictable phenomena. I don’t think any of this was particularly predictable.

Now, Fama just said that those making predictions about asset bubbles were right about half the time. That seems to me to imply that there is some predictability to these phenomena. So, why does Fama go on to say that they are not predictable? I have no idea. Frankly I think that he is talking rubbish.

Fama then goes on to tie himself further in knots. With regard to recessions he says,

That’s where economics has always broken down. We don’t know what causes recessions.

Well, by Fama’s own criteria that would surely mean that recessions do not exist, right? If, in order for economics to recognise a phenomenon, it must be predictable, then the discipline cannot, by Fama’s own criteria, recognise recessions. But in the second part of the interview Fama seeks to blame the financial collapse of 2008 on — you guessed it — a recession. The reason he posits that we cannot talk about asset bubbles is that they are, according to him, not predictable; but then he goes on to invoke recessions which, again according to him, are also not predictable. This is very strange stuff altogether.

The next oversight/contradiction in the interview is slightly more difficult to detect, but once recognised it seems to imply that Fama is not being honest with himself and is defending his theory out of a desire to do so rather than through any real thinking through of the evidence. When asked whether people were getting loans in credit markets that they shouldn’t have been getting, Fama wheels out the shaky old right-wing argument that this was due to government policy. He says,

That was government policy; that was not a failure of the market. The government decided that it wanted to expand home ownership. Fannie Mae and Freddie Mac were instructed to buy lower grade mortgages.

The interviewer then points out that this was only a small slice of the market. But we will ignore that and focus on the coherence of Fama’s own narrative: when pressed on this question he quickly points out, quite correctly, that you cannot blame subprime mortgages, because many countries saw housing bubbles (of course, Fama would not call them bubbles…) even though they didn’t have subprime markets.

You can blame subprime mortgages, but if you want to explain the decline in real estate prices you have to explain why they declined in places that didn’t have subprime mortgages. It was a global phenomenon. Now, it took subprime down with it, but it took a lot of stuff down with it.

Fair enough. I actually agree with this. Ireland had the biggest housing bubble in the world and subprimes were nowhere to be seen. But here’s the kicker: neither were Fannie Mae and Freddie Mac.

You see, Fama is picking and choosing arguments to suit his own purposes here. When it suits his argument he will pin the US housing bubble on Fannie Mae and Freddie Mac, but when he is pressed on this he will also highlight that rising house prices in other countries were not due to the same factors as were operating in the US. So why not follow this through to its logical conclusion, namely that Fannie Mae and Freddie Mac were no more the real cause of the US housing bubble than subprime mortgages were? Because, frankly, I don’t think Fama’s argument is in any way serious. I think that he is willing to say basically anything to defend his theory against criticism.

As I said though, the interview is worth reading in the original. It is a truly bizarre document which I’m sure will appear in many history books in the future. What the historians will have trouble explaining, I think, is why the person in the interview was given a Nobel Prize in Economics only three years later.


Posted in Economic History, Media/Journalism | 1 Comment

Veblen and Freud on Instincts


I recently got my hands on a paper by Bill Waller entitled Veblen and Instincts Reconsidered. In the paper Waller discusses the role of ‘instincts’ in Veblen’s work and argues that those working in the evolutionary tradition of economics should take them on board.

Before commenting on one aspect of this paper I should note that this is not something that I am particularly interested in. Nevertheless, it seems to me that Veblen’s taxonomy could do with a good deal of cleaning up, as it comes across as rather arbitrary and lacking in structure. I believe that the best way of doing this is to introduce certain Freudian terms. In doing so we can then highlight where we are dealing with instincts proper and where we are dealing with something less fundamental.

First we should briefly lay out the Freudian conception of ‘instincts’, or ‘drives’ (I prefer the latter term but I will use the former since it is the one that Veblen scholars seem to favour). Freud’s theory of the instincts basically arises out of a borrowing of metaphors from energetics and thermodynamics. He saw humans as having two broad instincts: the life instincts and the death instincts. The life instincts — or Eros — were those that gave rise to unifying social phenomena; that is, anything that aimed at reproduction. So, broadly, those aspects of life that would fall under the heading of ‘love’ — friendship, sexual relationships, the formation of social groups and so on. The death instincts — or Thanatos — were those that tended toward destruction; that is, any activity that aimed at annihilation. Broadly speaking, those aspects of life geared toward ‘aggression’ — envy, the destruction of groups, aimless violence against fellow creatures and so on.

A wonderful example of the death instinct, often quoted by the great French psychoanalyst Jacques Lacan, was one noted by St. Augustine, who thought it to be a manifestation of Evil proper,

The weakness then of infant limbs, not its will, is its innocence. I myself have seen and known even a baby envious; it could not speak, yet it turned pale and looked bitterly on its foster-brother.

What Augustine noted was that even newborn babies, who can barely control their own motor functions, still manifest bitter hatred. This is a good indication that we are dealing with instinct proper. An instinct is something that is stripped of all else. It is, in a sense, a baseline determinant of our very being.

The theory of instincts in Freud, as I said, has its roots in energetics and thermodynamics. The life instincts are broadly commensurate with the law of the conservation of energy while the death instincts are broadly commensurate with the entropy principle. What is nice about Freud’s theory is that it really reduces surface phenomena — friendship, violence etc. — right down to their most basic level. It judges such surface phenomena on the basis of whether they aim at reproduction and creation or deconstruction and destruction. Of course, acts are rarely totally constructive or totally destructive and so instinctual processes are seen, in this framework, to intermix. But keeping them analytically separate is nevertheless quite useful.
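For reference, and only as a gloss on the analogy, the two physical principles being borrowed can be stated in their minimal textbook form (the notation is standard thermodynamics, not anything of Freud’s):

```latex
% First law (conservation of energy), the analogue of the life instincts:
dU = \delta Q - \delta W
% Second law (entropy principle), the analogue of the death instincts,
% stated for an isolated system:
\Delta S \geq 0
```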

In his paper Waller lays out Veblen’s five basic instincts. As we will see, they are not very basic at all. Looked at from a Freudian point-of-view they can easily be broken down further and given analytical coherence.

The first of Veblen’s instincts that Waller lays out is ‘the parental bent’. Waller writes,

As noted above the parental bent is the last instinct introduced in Veblen’s work. This instinct is more than the motivation to procreate that Veblen thought was quasi-tropismatic.  Instead for Veblen it was the proclivity “to the achievement of children” and “a primary element in the practical working out of parental solicitude.” Veblen argued that this solicitude was extended beyond the scope of children to a general solicitude toward the well-being of the entire community:  “… this instinctive disposition has a large part in the sentimental concern entertained by nearly all persons for the life and comfort of the community at large, and particularly for the community’s future welfare.” (p4)

This is, of course, just the life instinct in naked form. It is a love relation geared toward successful reproduction, plain and simple. True, it is not exactly synonymous with “the motivation to procreate”, but it is a key part of the procreative process. Children who are born but not properly raised are not good candidates to carry on one’s genetic legacy.

The second of Veblen’s instincts that Waller lays out is ‘the instinct for workmanship’. He writes,

The character of this instinct “occupies the interest with practical expedients, ways and means, devices and contrivances of efficiency and economy, proficiency, creative work and technological master of facts.  Much of the functional content of the instinct of workmanship is a proclivity for taking pains.  The best or most finished outcome of this disposition is not had under the stress of great excitement or under extreme urgency from any of the instinctive propensities with which its work is associated or whose ends it serves. (p5)

Again, this is but another manifestation of the life instinct. It is analogous to the so-called ‘parental bent’ insofar as it promotes reproduction and advancement of the species. Just as sexual reproduction introduces new genetic formations into the population, ‘workmanship’ introduces new organisational formations into the community. The processes are conceptually identical.

The next instinct that Waller discusses is that of ‘idle curiosity’. This is effectively the tendency for humans to play: not just in the childish sense, but also in the sense that they might play with ideas and concepts and arrive at new discoveries. Again, what we see is the life instinct, but this time, rather than reorganising genetic sequences or organisational formations, it seeks to reorganise symbols and ideas to produce new insights.

The fourth of Veblen’s instincts that Waller introduces is the ‘predatory instinct’. He writes,

Veblen argued that the aversion to labor is a result of the predatory habits of thought.  He argues that “[w]hat meets unreserved approval is such conduct as furthers human life on the whole, rather than such a furthers the invidious or predatory interest of one as against another. “ From this he argued that contrary to much evidence and speculation human being are not naturally a predacious species.  Instead predation arises once humans have “outdistanced all competitors, and then it prevails only by sufferance and within limits set by [the instinct of workmanship].” (p6)

What we see here, of course, is the death instinct. This is the destructive component of human life, one that seeks not to form larger and more complex groups but rather to attack others in pursuit of some sort of immediate instinctual satisfaction. This is an instance of the death instinct that seeks to tear down structures without replacing them with anything else. It is the drive, for example, that humans have toward war and conquest.

Finally, there is the ‘emulatory instinct’. This is the instinct that Veblen thinks leads people to copy or emulate one another. I’m not convinced that this is an instinct at all. Rather, it seems closer to what Freud would have called ‘identification’. Identification is merely a process by which people identify with one another and through which they form their identities and personalities. It may be in the service of the life instinct or the death instinct; that simply depends on the ‘tone’ given to an identification. Thus, we might identify with a positive image our parents have of us or with a negative one. Identification or emulation is no more an instinct than is motor function or speech.

As we can see, by introducing the Freudian terminology — which is certainly the most precise in this regard — Veblen’s work can be systematised and organised in a much better fashion, as can the base determinants of human behaviour and activity.

Posted in Philosophy, Psychology | 1 Comment