Thirlwall’s Law in Historical Context

There has been some reticence on the blogs to discuss Thirlwall’s Law and I myself have also been somewhat reluctant to deal with it in any great detail (although I did hint at some problems with it in this post). I think it might be worth discussing it in more depth, however, because I think that the model it is based on is actually quite interesting — albeit misleading. I will rely for this exposition on a very succinct account of the model in a recent paper by Thirlwall entitled Kaldor’s 1970 Regional Growth Model Revisited.

At the beginning of the paper Thirlwall notes the assumptions made by the model. He writes,

The first proposition of the model is that regional growth is driven by export growth. Kaldor regarded exports as the only true autonomous component of aggregate demand, not just at the regional level but also at the national level because consumption and investment demand are largely induced by the growth of output itself. The more specialised regions are, the greater the importance of exports. (p3)

It is precisely in this assumption that the problems with the model become manifest. What the above assumption implies is highlighted by Thirlwall later in the paper when he writes “it is more difficult for a country to rectify an import-export gap than it is to rectify a savings-investment gap” (p5). Now, what exactly is the problem here? Well, let’s begin to examine this with reference to our standard GDP accounting identity.

Y = C + I + G + (X – M)

Okay, so we know from this that imports will be a drain on income growth and exports will add to income growth. But what does it mean to say that the trade balance imposes a constraint on income growth as Thirlwall thinks — and, indeed, as Kaldor in the late-60s and early 70s thought?

Surely we could simply assume that consumption, investment and/or government spending are able to offset any deterioration in the trade balance. What I mean to say is: while it is true, from the simple GDP accounting identity, that an external deficit subtracts from GDP, we must mean something quite different when we say that it imposes a constraint on potential income growth. In order to understand this constraint we must now turn to the period in which Kaldor was writing.
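
To make this concrete, here is a minimal sketch in Python (entirely my own illustration, with made-up numbers): a widening trade deficit that is matched pound for pound by higher government spending leaves measured income unchanged, so the accounting identity alone imposes no constraint.

```python
# Illustrative only: the expenditure identity places no constraint on income by
# itself, since a fall in net exports can be offset by any other demand component.
# All figures are hypothetical.

def gdp(c, i, g, x, m):
    """National income from the expenditure identity Y = C + I + G + (X - M)."""
    return c + i + g + (x - m)

baseline = gdp(c=600, i=200, g=250, x=150, m=200)  # trade deficit of 50
offset = gdp(c=600, i=200, g=300, x=150, m=250)    # deficit widens to 100, but G rises by 50

print(baseline, offset)  # 1000 1000 -- identical income, worse trade balance
```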

In the late-60s and early-70s the UK was experiencing what came to be known as ‘stop-go’ economic policies. The UK economy would experience a boom — often driven by government fiscal policies — which would then fade away as the balance of payments deteriorated when imports rushed ahead of exports due to growth in domestic income. In order to defend its gold reserves the UK government would then cut spending and raise taxes, thereby slowing the economy and dampening import demand.

Understood in this context the constraint that Kaldor referred to is clear: it is the balance of payments constraint associated with a drain on gold reserves in a fixed exchange rate system. So, what happens when we consider a floating exchange rate system? Does the constraint disappear? Yes and no.

In fact, the constraint shifts to the value of the currency. If a country were to run a substantial external deficit under a floating exchange rate system it could easily offset this by boosting domestic demand — either through increased consumption, investment or government spending. Since there are no gold reserves the only possible breaking point would therefore be the value of the currency.

Now, if there are sufficient capital inflows into the country then an external deficit can be sustained without affecting the value of the currency. In such a circumstance — i.e. one in which foreigners want to hold assets denominated in the domestic unit of account — there is quite literally no external constraint so long as the arrangement is maintained.

This is the case today, for example, in most Western countries. Foreigners wish to hold dollar and pound denominated assets for a whole host of reasons — not least because they have export-driven economies that rely on Western demand to grow, but also because of the large financial sectors in, for example, the US and the UK.

(The Eurozone is in an altogether different scenario where, rather ironically given that we are talking about balance of payments constraints, their zealous desire to run trade surpluses is part of the reason why the economy cannot grow — see Merijn Knibbe at the RWER blog on this).

In the case of a developing country, however, even with a flexible exchange rate this constraint could be very real. Such countries, should they run external deficits, face the potential for substantial currency depreciation. This can feed through as domestic price inflation which raises the price of domestic output relative to foreign output and exacerbates the balance of payments difficulties thereby creating a vicious cycle. In such a case, Thirlwall’s Law comes into its own and growth can be said to depend upon the external balance of a country.

What lessons can we draw from this? Well, for one we can say once again that there are no true Laws in economics. Economics, to repeat a point I never tire of making, is an historical discipline. And if we don’t understand political and institutional arrangements we will understand nothing of relevance or importance. Insights such as Thirlwall’s Law (which is not a law at all…) are of secondary importance when faced with the realities of actually existing economic institutions. I think that Kaldor was well aware of this and this is why he used to create new theories for every historical constellation he found himself faced with; that was his genius as an economist.

Those who wield such supposed Laws and insist on their timelessness and Absolute Truth are likely to make rather poor economic analysts. (I’m not referring to Thirlwall in this regard who, so far as I can see, recognises the contingencies involved in what he is saying). Such relationships need to be approached with an understanding that historical/institutional constellations come first and equations such as Thirlwall’s can only be made to generate insights within the framework of analysis provided by a given historical/institutional constellation.

Marginalist Microeconomics is a Highly Normative Ethical Doctrine

In a recent post Lord Keynes raises the question of the so-called ‘law’ of diminishing marginal utility. The ‘law’ states that we will derive ever diminishing satisfaction from the acquisition of a good or service. Lord Keynes notes that this is true for some goods — like washing machines — but may not be true of others. He gives a number of examples — such as addictive arcade games and drugs — that seem to defy the ‘law’.

I think that it is interesting to note that all the examples he gives might be considered in some way to be ‘pathologies’. I don’t mean that they would be taken to be pathologies by marginalist economic theory — although they undoubtedly would — but rather that they would generally be taken to be pathologies in the widest of senses; they would be manifestations of psychological, sociological and, ultimately, moral pathologies.

Actually, I would argue that it is the latter which is at the root of all this: such activities are properly seen, in any social discipline, as simply moral pathologies — this despite the fact that this term may be left out and replaced with other codewords (‘socially destructive’, ‘psychologically destructive’ and so on). At base, however, is a moral judgement: such activities are bad for either the individual, society or both. That is a moral judgement.

Viewed in this light the so-called ‘law’ of diminishing marginal utility is actually somewhat of a moral imperative. It does not so much tell us what we do but rather what we, at some level, should do. We can highlight this clearly by returning to the washing machine example.

Imagine for a moment that someone suffering from what we would consider to be a psychological disorder — perhaps some combination of OCD and hoarding — was obsessed with collecting broken washing machines. Imagine that they took them regularly from the dump and brought them home and filled their house and gardens with these objects. Although I am making up the washing machine example, this is a very real phenomenon and was dealt with in this psychological training film.

Now people suffering from this disorder clearly do not adhere to the ‘law’ of diminishing marginal utility. But what makes this a disorder? I would argue that the term ‘disorder’ here is moral in tone. I would argue that it is based on what we consider normal or good as a society. In short, I would say that when we apply the term we are in effect saying: “You are not engaged in activity that constitutes the Good Life”.

I do not want to diminish the fact that people suffering from such disorders are made unhappy by them. But what I am saying is that if these activities were looked upon as culturally normal and everyone did them then they would not be considered disorders and would not cause people pain. The determination of what does and does not constitute normal behavior is ultimately a rather arbitrary function of what is considered normal by a given society. What may be pathological in one society may be the path to the Good Life in another.

Perhaps the best recent example of this is the case of homosexuality in the 20th century. Although Freud and the early psychoanalysts had rather progressive views on homosexuality, later psychology pathologised it in the same manner it had been pathologised in the 19th century. The first two editions of the Diagnostic and Statistical Manual (DSM), which is used by working psychologists and psychiatrists as a guide to diagnosis, listed homosexuality as a sexual deviation. In these years most of those working in the mental health professions would have seen it as their duty to ‘cure’ homosexuals by helping them lead ‘normal’ lives.

Homosexuality was removed from the DSM in the early 1970s. Why? Because psychologists and psychiatrists had found new evidence indicating that it was not a sexual deviation? No. The fact is that culture was changing and the DSM was trying to get in line with changing social norms. Nowadays if a homosexual walked into a mental health clinic you can be sure that the attendant psychologist would be far more concerned that they might be repressing their sexuality rather than practicing it!

The point is that what was once considered to be a deviant behavior is now seen by many as being, for people with such urges, the path to the Good Life. Nothing has changed about the behavior at all. Nor has anything changed about the evidence (or lack thereof) that homosexuality is either ‘normal’ or ‘pathological’. Rather society has changed and has integrated homosexual behavior largely into the mainstream.

Now that we understand how normativity broadly works in the so-called social sciences let us turn to a notion that goes right back to the ancients and that relates to marginalist microeconomics in a most immediate way.

In Western societies — and indeed, I would think in most — the idea of temperance is an important one. We can find this in the writings of the early Greek philosophers. In their writings on ethics these philosophers tried to teach regimes of behavior that would lead to ‘eudaimonia’ which translates as (economists take note) ‘welfare’. A key component of reaching a state of eudaimonia was moderation or temperance. Another key component was the use of Reason to moderate and organise one’s existence — the similarity to the rational agents of modern economics is no coincidence, as these are part of similar intellectual projects.

(For an extensive discussion of ancient regimes of normative ethics I encourage the reader to pick up Volume II and Volume III of Michel Foucault’s excellent The History of Sexuality which, despite the titles, go far beyond simply dealing with sexuality).

This was an extensive intellectual and ethical tradition that encompassed most of Western philosophical thought in the following millennia. But there was another tradition that existed all the way back to ancient times.

In the 19th century Friedrich Nietzsche distinguished the two. The first tradition — that which championed Reason, eudaimonia and so forth and which ran from Plato through Aristotle to Bentham and Hegel — Nietzsche termed the ‘Apollonian’ tradition, after Apollo, the Greek god of Reason. The second tradition — which represented excess and intoxication and ran from the Greek tragedies like Antigone through the German Romantics to Freud — Nietzsche termed the ‘Dionysian’ tradition, after Dionysus, the Greek god of wine.

Nietzsche argued that this excessive element in culture — which would be termed the ‘passions’ in philosophies like those of Spinoza and Hume or ‘drives’ in Freud — was not only always present in culture but was required for culture to move forward and thrive. It was this excessive element that gave Western culture its dynamism; its tendency to break boundaries; and to champion the newly discovered.

Without getting too deeply into this, however, I think that the reader can now appreciate that what is contained in the ‘law’ of diminishing marginal utility is deeply tied up with certain notions of ethics, morality and what is the ‘right’ and ‘wrong’ way to live one’s life. My point here would be threefold:

(1) When discussing human behavior and social organisation the moment we begin to speak of ‘normal’ behavior we are implicitly designating other behaviors as ‘pathological’. This is ultimately a moral judgement.

(2) The notion of ‘normal’ behavior relies on its obverse. If there were not ‘pathological’ behavior the idea of ‘normal’ behavior would be semantically meaningless. Thus, the idea of ‘normal’ behavior cannot exist without the existence of ‘pathological’ behavior. This means that any ‘laws’ that seek to establish norms for behavior actually undermine themselves as their norms rely, by definition, on cases that do not fit into these norms.

(3) Human activity always contains both Apollonian and Dionysian aspects. Innovation and entrepreneurship, for example, are Dionysian behaviors that involve taking incalculable risks buttressed by ‘animal spirits’** while the calculations of profit and loss utilised in carrying them out are Apollonian. In trying to suppress the Dionysian aspects of human existence — which I would argue is the function of marginalist microeconomics — we only succeed in remaining ignorant of a key component of human culture and psychology.

Beyond that, I think that people should be very well aware of the fact that microeconomics is an ethical doctrine — as are many aspects of the so-called social sciences — and it should be judged accordingly. By positing it as a ‘science’ we are only engaged in ethical dogmatism — that is, we are giving its ethical proclamations a dimension of Absolute Truth which they simply do not possess. That is why so many are fooled by its form. Marginalist microeconomists convince themselves that they are engaged in ‘science’ when really all they are doing is applying a dogmatic ethical framework to the material they study. They are, rather humorously, priests who do not know that they are priests.

_______

** The clever reader will note here that Keynes’ theory of financial markets and investment is eminently Dionysian. Indeed, I think that a Dionysian tone resonates in all of Keynes’ work — and I think that his writings on probability should properly be seen as an attempt to insert Dionysian considerations into the all too Apollonian discipline of mathematical philosophy. Whereas economics before Keynes was based on the hokey Apollonian doctrine that the virtue of rational saving was what led to economic growth, Keynes turned this on its head and showed that such saving could be socially destructive and that economic growth was dependent on the Dionysian actions of investors taking action in the face of an unknown future.

Keynes and Loanable Funds

I was recently discussing econometrics and Keynes’ critique of it with Severin Reissl, a particularly clever student currently attending the University of Glasgow who is critical of mainstream economics. (You can find some examples of his writing here in which I am quoted criticising some of the assumptions in a mainstream macroeconomic textbook).

Anyway, I sent Reissl a copy of Keynes’ famous paper on econometrics entitled Professor Tinbergen’s Method and while we were discussing it Reissl pointed out the short piece that appeared below it. The paper, you see, was a book review published in The Economic Journal in 1939 and below it was another review by Keynes. This review was entitled The Process of Capital Formation and it dealt with an early statistical attempt to formalise the national accounts — something that Keynes would become deeply involved in during the war years and after.

I had never read the paper before but Reissl said that it contained a lucid critique of the loanable funds theory. I want to examine this here because I think it’s a very important point. By studying this paper I think that we can indeed better understand Keynes’ ideas about loanable funds.

The relevant discussion begins when Keynes discusses the savings/investment identity and clearly states that it is investment that drives savings, not savings that allow for investment. Using his typically brilliant ability for metaphor he writes,

For example, [the reader] might naturally suppose — for anything the Committee say to the contrary — that the right way to prepare for an increase of investment is to save more at an appropriately prior date. But the corollary shows that this is impossible. Saving at the prior date cannot be greater than the investment at that date. Increased investment will always be accompanied by increased saving, but it can never be preceded by it. Dishoarding and credit expansion provides not an alternative to increased saving, but a necessary preparation for it. It is the parent, not the twin, of increased saving. (p527 — My Emphasis)

Let us translate that into more formal terms before we move on. ‘Dishoarding’ would entail an increase in the velocity of money — as funds that were left dormant were reactivated they would begin the process of circulation. ‘Credit expansion’, while it clearly means some sort of borrowing, is at this point ambiguous and we shall deal with it in a moment.

Keynes then goes on to show that funds that are invested add to savings, so the idea that there is some fixed pool of savings out of which investment is drawn is misleading: the money that is invested is immediately added back to the pool of savings. If there is £1m in savings in a country (assuming a closed economy) and £100,000 of these funds are invested then at the end of the period there will still be £1m in savings because the money invested will accrue as savings. This process is instantaneous: when I take the £100,000 and spend it on wages and investment goods that money is instantaneously credited to the bank accounts of workers and other capitalists, where it then sits as savings waiting to be spent.
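
Here is a minimal sketch of that point (my own illustration; the £1m and £100,000 are the figures from the example above, while the split between wages and investment goods is arbitrary): the investment spending reappears instantly as deposits elsewhere, so total saving never falls.

```python
# Minimal sketch of the point above: in a closed economy, investment spending
# does not 'use up' a pool of savings, because the money spent is instantly
# credited back to other accounts. The 60/40 split is arbitrary.

deposits = {"investing_firm": 1_000_000, "workers": 0, "capital_goods_firms": 0}

def invest(amount, wages_share=0.6):
    """Spend `amount` on wages and investment goods; the spending reappears at
    once as deposits (i.e. saving) elsewhere in the banking system."""
    deposits["investing_firm"] -= amount
    deposits["workers"] += amount * wages_share
    deposits["capital_goods_firms"] += amount * (1 - wages_share)

invest(100_000)
print(sum(deposits.values()))  # still 1,000,000: total saving is unchanged
```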

Or, as Keynes puts it,

Prior saving has no more tendency to release funds available for subsequent investment than prior spending has. (pp572-573)

He then goes on to discuss the fact that what we are really dealing with when new investment is forthcoming is not the demand for savings but rather the demand for money. This ties in with his liquidity preference theory of the interest rate wherein the interest rate is determined as much by financial considerations — liquidity preference as a sort of ‘insurance’ against uncertainty — as by the considerations of the demand for money to finance investment.

At this point I think that Keynes has little to say that we can use today. The rest of the paper is written in a manner that implies that the quantity of money is somehow fixed at any moment in time. Thus, if the demand for money increases for any reason the interest rate should rise. This is the same assumption that underpins the upward-sloping LM curve in the ISLM model. But as we now know, in a regime where the central bank targets the interest rate rather than the quantity of money, the quantity of money will increase whenever there is upward pressure on the interest rate.
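
The contrast can be put in stylised terms (a sketch of my own, with invented numbers and functional forms, not anything in Keynes’ review): with a fixed money stock, extra money demand bids up the interest rate, whereas under a rate target the money stock simply expands.

```python
# Stylised contrast, for illustration only. Numbers and functional forms are invented.

def rate_with_fixed_money_stock(money_demand, money_supply, base_rate=0.02,
                                elasticity=0.001):
    """LM-style logic: with the money stock fixed, excess money demand pushes
    the interest rate above its initial level."""
    return base_rate + elasticity * (money_demand - money_supply)

def rate_with_interest_rate_target(money_demand, target_rate=0.02):
    """Rate-targeting regime: the central bank supplies whatever quantity of
    money holds the rate at its target, so the money stock accommodates."""
    return target_rate, money_demand

print(rate_with_fixed_money_stock(120, 100))  # 0.04 -- the rate is bid up
print(rate_with_interest_rate_target(120))    # (0.02, 120) -- rate held, money stock grows
```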

While Keynes did indeed strike the first blow against the loanable funds theory in this review in that he got rid of the silly notion that investment out of savings ‘used up’ said savings, he nevertheless did not overturn it completely. That would have to wait for Robinson and Kaldor’s early formulations of the endogenous money theory.

Minsky’s Theory of Asset Prices: Why Minsky Was NOT a Neo-Monetarist

In the comments section of a recent blogpost of mine there was some confusion regarding Hyman Minsky’s theories and their relationship to the phenomenon of rising asset prices. I have seen this confusion made many times before — even by some otherwise good Post-Keynesian economists — but I think that it is time to finally clear it up once and for all.

The confusion runs something like this: “Hyman Minsky’s theory of rising asset prices is that debt drives asset prices. If we want an explanation for rising asset prices we simply look at the levels of debt in the economy. This is tied to the fact that Minsky was a proponent of Post-Keynesian endogenous money theory and this theory states that private banks create money which, if this creation process is allowed to spiral out of control, will lead to rising asset prices.”

Okay, before I go into what Minsky actually wrote in this regard I think I should make one point crystal clear: the above argument is a monetarist argument. In Post-Keynesian endogenous money theory money = debt. So, if we say that debt is the cause of rising asset prices we are effectively saying that money is the cause of rising asset prices. That is a monetarist conception of how price levels in markets operate.

Monetarists maintain that rising prices — whether they be asset prices or the prices of goods and services — are due to increases in the money supply; and since saying that increases in the level of debt leads to rising prices is just another way of saying that increases in the level of money leads to rising prices, this is an absolutely identical argument to the monetarist one. All we have done is changed the words.

Post-Keynesian endogenous money theory never posits debt as the cause of anything. Rather it is decisions by economic agents that cause increases in the price levels of any given market. The debt that follows from these decisions is merely a residual.

We can see this clearly in the original statements of endogenous money theory. For example, in Basil Moore’s seminal paper Unpacking the Post-Keynesian Black Box: Bank Lending and the Money Supply he writes,

The behavior of the money wage rates, both as a component of companies’ demand for working capital finance and as determinants of disposable personal income, plays a central role in determining private demand for bank credit. (p555)

In that paper what Moore was describing as the ‘black box’ was the causal element that leads to increases in money lending and, hence, increases in the money supply. Through extensive empirical investigation he finds that the inflation of the 1970s was mainly caused by rising money wages.

As we can see, for Moore an increase in bank lending does not truly cause anything. Rather it is the phenomenon that must be explained through reference to a causal variable. In the case of the 1970s inflation this causal variable was money wages. If Moore were to explain the inflation of the 1970s by saying that it was caused by an increase in bank lending he would be making an identical argument to the monetarists except that rather than saying that the ‘money supply’ caused rising prices he would say ‘bank credit’; but since the two things are two sides of the same coin (change in bank lending = change in money supply) he would be saying the same thing in different words.

The exact same holds true in asset markets. If Minsky had been saying that the cause of rising asset prices was changes in bank credit he may as well have been saying that the cause of rising asset prices was changes in the money supply. The arguments are the same; only the words are different.

Okay, so all that said, what was Minsky’s argument? How did he explain rising asset prices? Well, he deals with this in a section of his book Stabilizing an Unstable Economy entitled ‘Quasi-Rents and Capital Asset Prices’. This section can be found on pages 200-205 in that book. In this section he argues that capital asset prices are dependent on (that is, caused by) two things: expected cash-flows and perceived liquidity. Minsky writes,

In a world with a wide variety of financial markets and in which capital assets can be sold piecemeal or as collected in firms, all financial and capital assets have two cash-flow attributes. One is the money that will accrue as the contract is fulfilled or as the capital asset is used in production; the second is the cash that can be received if the asset is sold or pledged. The ability of an asset to yield cash when needed and with slight variation in the amount is called its liquidity.

The price, PK , of any capital asset depends upon the cash flows that ownership is expected to yield and the liquidity embodied in the asset. The cash flows a capital asset will yield depend upon the state of a market and the economy, while the liquidity embodied in an asset depends upon the ease and the assuredness with which it can be transformed into money. The price of a financial asset such as a bond or even a savings account depends upon the same considerations as the price of a capital asset: the cash flow and the breadth, depth, and resilience of the market in which it can be negotiated. (p202)

Note that nowhere does Minsky mention debt in discussing how asset prices are determined. It is true that when assets are purchased debt is often incurred but this is only a residual of a decision taken by investors with respect to expected cash-flows and perceived liquidity**. Debt is not the explanatory or causal variable in Minsky’s theories of asset prices. And that is because Minsky is not a monetarist but rather a Keynesian.

Minsky’s theory, as he always insisted, built on Keynes’ theory of financial markets as laid out in the General Theory and the Treatise on Money. For Keynes too it was these two variables that caused asset prices to rise or fall. The money that was used to purchase them — whether debt financed or not — was a residual; an important residual, as Ponzi debts might lead to asset price deflations, but still a residual. In these theories debt does not cause anything any more than money causes anything — indeed, the two entities are actually two sides of the same coin.
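
Minsky gives no single formula in these pages, so the following is only an illustrative sketch of the dependence he describes: expected cash flows capitalised at some discount rate plus a premium for the liquidity embodied in the asset, with the financing of the purchase appearing nowhere in the function. The parameter names and numbers are mine, not Minsky’s.

```python
# Illustrative sketch only, not Minsky's own formula: the asset price depends on
# expected cash flows and perceived liquidity; debt does not enter the function.

def capital_asset_price(expected_cash_flows, discount_rate, liquidity_premium):
    """Present value of expected quasi-rents plus a premium for the liquidity
    embodied in the asset."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(expected_cash_flows, start=1))
    return pv + liquidity_premium

# A swing in expectations or in perceived liquidity moves the price; how the
# purchase is financed never enters the calculation.
print(round(capital_asset_price([100, 100, 100], 0.05, 20), 2))  # buoyant expectations
print(round(capital_asset_price([60, 60, 60], 0.05, 5), 2))      # expectations revised down
```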

_______

** Note that with a given state of expected cash-flows and perceived liquidity the rate of interest (which is chosen by how much reserve cash the central bank wishes to release to the banking system) will determine the price of assets. But in Minsky’s theories, as in Keynes’, we cannot expect a static state of expectations and thus the rate of interest will be of secondary importance to the determination of asset prices; the key driver is, as in Keynes, the animal spirits of investors or the state of confidence.

Basic Macroeconomics of Income Distribution Cannot Explain Today’s Rising Inequality

I was recently looking over the debates surrounding the Pasinetti theorem and I thought it might be worth writing a few words on it. Pasinetti formulated his theorem — which is dealt with in detail in a fantastically thorough Wikipedia article — in 1962 in response to Nicholas Kaldor’s seminal paper Alternative Theories of Distribution.

What Pasinetti’s theorem showed was that while the workers’ propensity to save has no long-run effect on the share of profits in the national income, it does have a long-run effect on the manner in which these profits are shared between workers and capitalists.

The Pasinetti theorem actually has very interesting implications for how we should approach really existing capitalist economies. In his equations Pasinetti assumes that workers’ savings earn the interest rate — which is assumed to be equal to the rate of profit. Given the assumptions of the entire model I think that this is quite reasonable. Now, if workers can manage to save more then more of the profits will accrue to them.

If we apply this logic to really existing capitalism it yields something interesting: in really existing capitalism we have different layers of the income distribution that save at different rates. At the top end are those who save copious amounts of their income — now colloquially known as the 1% — while at the bottom we have people who probably net dis-save — i.e. go into debt — a good deal of the time. By Pasinetti’s logic, the more savings become concentrated at the top of the pyramid the more the income distribution will diverge, due to a higher share of the profits going to those who save greater amounts.
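
To make that concrete, here is a toy simulation (not Pasinetti’s algebra; every parameter is invented) in which two income tiers save at different rates and returns accrue on accumulated wealth. The high-saving tier ends up holding nearly all of the wealth.

```python
# Toy simulation, for illustration only: different saving propensities plus
# returns on accumulated wealth produce a diverging distribution over time.

def wealth_shares(s_top=0.4, s_bottom=0.0, r=0.04,
                  income_top=30, income_bottom=70, years=40):
    """Accumulate savings for two income tiers; returns accrue on wealth at rate r."""
    w_top, w_bottom = 10.0, 10.0  # arbitrary starting wealth
    for _ in range(years):
        w_top += s_top * (income_top + r * w_top)              # high savers
        w_bottom += s_bottom * (income_bottom + r * w_bottom)  # save nothing
    total = w_top + w_bottom
    return round(w_top / total, 2), round(w_bottom / total, 2)

print(wealth_shares())  # the high-saving tier ends up with nearly all of the wealth
```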

Before I go on to criticise this a little bit let us first turn to the neo-Keynesian response from Paul Samuelson and Franco Modigliani. They tweaked the model a bit with some extremely unrealistic assumptions so that it showed that income would eventually become evenly distributed. While Pasinetti patiently criticised the assumptions that Samuelson and Modigliani employed, Kaldor undertook what I think to be the more relevant critique: he analysed the empirical data and showed just how widely savings propensities in different groups diverged. He concluded,

[U]nless they [Samuelson and Modigliani] make a more imaginative effort to reconcile their theoretical framework with the known facts of experience, their economic theory is bound to remain a barren exercise.

Whether it was politics or the soothing fairy-tales told by marginalist economics that motivated Samuelson and Modigliani, I do not know — although I suspect it was some measure of both (their idea that the conclusions of the British economists required the assumption of ‘hereditary barons’ indicates extreme political immaturity and an almost cartoon-cultural American view of the British economy on their part). Regardless, they won the day, as they so typically did. Anyway, that particular debate — which goes round and round and ends up in the infamous Capital Controversies — is somewhat stagnant; and so I want to raise some slightly different and more pressing points here.

From the extensive empirical work done by James Galbraith and his co-authors we now know that income inequality is strongly positively correlated with the size of the financial sector. Intuitively, of course, this is not surprising and the Occupy Wall Street movement certainly figured it out without needing the extensive empirical work done by Galbraith and company.

Now, this can be interpreted in two ways. The first is that savings of the rich built up to such an extent that an entire financial sector rose up to accommodate it. The second is that the financial sector was an autonomous creation — the result of certain historically contingent policies — and that the savings/profits that accrued to the higher income tiers came from the rise of finance.

Here’s the problem with the first interpretation: the financial sector can generate savings/profits with no reference to the real economy. If stock prices rise of their own accord then the savings/profits of those that hold them also rise.

Now, following Pasinetti we might respond: “Sure, that is true, but you need a prior level of unequal income distribution otherwise workers will receive the gains of the upswing in the stock market”. There is certainly some truth in this view. But I just don’t believe that it is the crux of the issue. Rather I think that runaway financial markets actively engage in income redistribution through the building of hoards via rising asset prices that then generate reinvestment in those assets which then in turn produces a growth in the hoards. And so on and so on… round and round we go. While I cannot really substantiate this here, I think that this is what most of the data points to.

None of this, of course, actually undermines Pasinetti’s work. After all, his work never really says anything about how the rate of profit — and, in the long-run, the rate of interest — is actually generated. If we include rising asset prices in Pasinetti’s framework and assume a given unequal distribution of income then we can easily come to the conclusion that asset prices will generate worsening income inequality. But to me this sidesteps the issue.

If the true causal variable is the rise in asset prices — and again, I think that this is what all the data suggests — then it is this aspect of the problem that should be at the forefront of the analysis. That leads to the question: what causes asset prices to rise? And unfortunately the answer to that is institutional and, to a very large degree, psychological. It is deregulation and swings in psychology — not to mention accommodative monetary policies — that lead to growth in the financial sector and massive upswings in the price of financial assets.

This is where the neat, formalised Keynesian models break down: they cannot explain the key determinants of the problems we face today. Yes, we can build countless models — of lesser or greater merit — to show that unequal income distribution can hurt economic growth (frankly, that should be obvious by simply assuming different marginal propensities to consume across different income groups), but we cannot really say anything about the cause of the problem.

Actually, we should be clear: economic theory has very little to say about this problem. But the economics of institutions — particularly of political and financial institutions — has rather a lot to say about this problem. So too do empirical economic studies showing clear correlations between the financial sector and income inequality.

But basic macroeconomics comes to a halt, as it did in the famous passages in the General Theory on financial markets and expectations all those years ago, against the rock of finance. And any descent deeper into the crevices of the finance industry and the political institutions that facilitate it requires altogether different tools.

Addendum: I am pulling a comment that I made to Neil Wilson in the comments on this blog because I think it is of general interest to consider when thinking about the above discussion.

If you hold a stock worth £1000 and I feel super-confident in the market and offer you £2000 and this leads others to then think that this particular stock is valued at £2000 I’ve just increased the net worth of every holder of that stock by (£1000 x amount of stock held). If, say, 5000 of those stocks exist in the economy I have just increased the net worth of the economy by £5,000,000!

Now assume that the economy had a total net worth of £100,000,000 prior to my optimistic bid. And that net worth is distributed in such a way that the top 10% hold £50m while the bottom 90% hold £50m. Now assume that all of this particular stock was owned by the top 10% of the population.

Well, after my little optimistic bid the top 10% now have £55m in net worth while the bottom 90% still have £50m. We’ve gone from an economy where the top 10% of the population by income distribution hold 50% of the wealth to an economy where the top 10% of the population by income distribution hold 52% of the total wealth! And this was done simply by changing the prices of assets with one bid!

Obviously this is an extreme example. But it highlights the dynamics that I’m talking about very clearly.
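
For what it’s worth, here is the addendum’s arithmetic written out as a short sketch, using the same hypothetical figures.

```python
# The addendum's arithmetic: one optimistic bid marks up every existing share,
# shifting measured wealth shares without any new saving taking place.

shares_outstanding = 5_000
old_price, new_price = 1_000, 2_000
revaluation = shares_outstanding * (new_price - old_price)  # £5,000,000

top_10, bottom_90 = 50_000_000, 50_000_000  # initial net worth of each group
top_10 += revaluation                       # the top 10% own all of this stock

total = top_10 + bottom_90
print(f"top 10% share of wealth: {top_10 / total:.1%}")  # roughly 52%
```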

Could a Russian Intervention in the Ukraine Burst the London Property Market?

Over and over again I’ve been asking myself: is there a housing bubble in London? Certainly house prices are booming but I was never comfortable calling a bubble in the typical sense of that word.

Why? Because this didn’t look like a standard asset price bubble. Bubbles are driven by credit. But the London market is being driven by cash flowing into the top-end of the market.

It is areas like Chelsea and Mayfair that are really driving the London property market. These houses are being bought, not as residential properties, but as financial assets.

In the current low-yield, QE environment cash is pushed into a wide variety of risky assets. That is why we see stock markets booming while the real economy crawls along at a snail’s pace. I reckon that financial advisers have long been pushing the extremely wealthy into very high-end property in various large cities.

In London a lot of the high-end property is bought up by Russian billionaires. At the start of this year The Guardian did a report on something that had long been folk knowledge in London. In the report entitled Inside ‘Billionaires Row’: London’s rotting, derelict mansions worth £350m Robert Booth wrote,

A third of the mansions on the most expensive stretch of London’s “Billionaires Row” are standing empty, including several huge houses that have fallen into ruin after standing almost completely vacant for a quarter of a century… Estate agents and property developers said the avenue was in transition, with apartments under construction that would bring life back to the area, but said high vacancy rates were inevitable in an international market such as London where buyers come from the Middle East, Russia and increasingly China.

The foreign money flowing into the high-end of the London market is what has really driven up prices. The lower end of the market, which has not increased in value at nearly the same rate, is following that trend, not leading it.

I always thought that the end to this would be when the low-yield zero interest rate environment was reversed by central banks — something that is unlikely to happen for a rather long time — but the recent threats of economic sanctions by the West against Russia raise the question: what if Russian assets in London were seized or Russian money prevented from entering the market? If this occurred could it hit high-end property prices in London to such an extent that the enthusiasm was driven out of the whole market?

It is certainly a question worth raising — even if we cannot give a definitive answer. In credit-driven property bubbles it is the bottom of the market that falls out first as low-quality loans to people who cannot afford repayments go sour. If the above scenario played out we would see quite the opposite; namely, a property market that goes rotten, like a fish, from the head down.

Bidding War: The Quantity Theory of Money and the Price Level

I was going to run a blog on Hans Albert’s critique of the quantity theory of money but it appears that Lord Keynes has gotten there ahead of me. I just wanted to pull out one point that he raised as it proved to be one of the most difficult I encountered when trying to formulate a general theory of pricing.

Lord Keynes notes that some versions of the quantity theory assume a linear, one-to-one relation between the increase in the quantity of money and the price level. So, if the quantity of money increases by, say, $1,000 then the price level must increase by the same amount. The assumption here appears to be twofold: first, that all the money is spent; and second, that prices are bid up by the same amount as the money spent.

I have outlined how such a bidding process occurs in algebraic form in the following post. In order to understand it we must imagine a market as an auction with a fixed supply of goods. Each buyer must bid up the goods using the cash reserves they have. The assumption that the price level rises in lockstep with the money supply becomes problematic straight away.

Imagine an auction with three buyers and one item. Each buyer is willing to spend all their money on the item, but they also want to get it for the minimum amount they can — a marginalist might place some utility maximising assumption here where money and the good both yield utility. Now, in equilibrium each buyer has $1,000 of cash reserves. In such a situation only two things can happen: (a) the good sells for $1,000 to the buyer who bids first or (b) two or more of the buyers engage in some sort of lending agreement with one another. Let’s pretend that (b) cannot happen because everyone is completely focused on getting the good.

Now, assume that a central banker walks into the auction with newly printed money. He gives an extra $1,000 to the first bidder — call him Buyer A. Buyer A now has $2,000 while the other buyers only have $1,000. So, what does he do? Well, assuming that he has perfect information about the other buyers he would simply bid up the item by the smallest possible increment — say, he would make a bid of $1,000.01.

In Keynesian economics the multiplier relieves us of this problem. We would simply posit a marginal propensity to consume out of the new income — in the case above it would be 0.00001. The rest of the newly acquired money would be conceived of as savings, with the marginal propensity to save being the complement of the marginal propensity to consume; i.e. 0.99999.
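
Here is the auction story written out as a short sketch (my own illustration, using the hypothetical figures above): the extra $1,000 of central bank money raises the winning bid by a single increment rather than dollar for dollar.

```python
# Sketch of the auction above: with perfect information, the richest bidder only
# needs to outbid the next-richest by the smallest possible increment.

INCREMENT = 0.01

def winning_bid(cash_reserves):
    """Return the price at which the buyer with the most cash wins the item."""
    top, second = sorted(cash_reserves, reverse=True)[:2]
    return second + INCREMENT if top > second else top

before = winning_bid([1_000, 1_000, 1_000])  # 1000: everyone equally constrained
after = winning_bid([2_000, 1_000, 1_000])   # 1000.01: one increment higher

print(before, after)
print((after - before) / 1_000)  # 1e-05 -- the 0.00001 'propensity' mentioned above
```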

A modified version of the quantity theory could posit a fall in the velocity of circulation. This would especially fit in with the Cambridge cash balances version of the theory — from which, I think, Richard Kahn got the inspiration for the multiplier. But since many variants of the quantity theory — like monetarism in particular — assume a constant velocity they have very little to say about the above-mentioned dynamics.

The Mystery of Matter: A Response to Lord Keynes on Berkeley’s Idealism

There has been a bit of a debate between myself and Lord Keynes over at his blog. Most of the relevant debate can be found in the following blog and in the comments of this blog. In response to my comments Lord Keynes has done another post that is quite long. Frankly, I don’t think that many of the complaints he raises against Berkeley’s arguments are nearly as problematic as the dogmatism and mysticism at the heart of his own doctrine. For that reason I will only deal with the kernel of the issue here.

The most important point that Lord Keynes raises is the following,

Epistemologically speaking, the idealist – unless he can provide a valid and sound deductive argument or arguments – must use inductive arguments on the basis of indirect evidence to prove his hypothesis of a god or “super-mind.”

But that means, epistemologically speaking, that the idealist and the realist are also on the same ground, since the indirect realist’s belief in an external world is also an inductive inference on the basis of indirect evidence.

So again in this respect the idealist position is not epistemologically superior to the realist one, but on a par.

I have tried to deal with this a number of times in the comments but I suppose I can repeat myself if I have not been sufficiently clear.

The question here boils down to this: which is a more robust argument for the constancy of ideas, matter or a ‘super-mind’? Berkeley contends that the word ‘matter’ is a word without content. If I try to describe what matter is I will never be able to do so. For example, I may say that matter is hard; but then Berkeley would say “no, the hardness is a perception, it is an idea”. Or I may say that matter is extended; to which Berkeley would reply “no, extendedness is only an idea”.

Eventually when you boil it right down the only thing we can say about matter is something like “matter is the cause that underlies the constancy of our perceptions”. But this is a tautology. If matter is being used to explain the constancy of our perceptions and is at the same time defined in such a way as to be the cause that underlies the constancy of our perceptions then it is clearly a tautology. The word ‘matter’ thus means nothing. It is an empty, self-referential term. A signifier with no signified.

Now, the ‘super-mind’ argument is different. I can reason by analogy using the super-mind argument. I can examine what constitutes the entire world that I know — that is, the world of ideas and minds — and I can divide the ideas into two groups: (1) ideas over which I have control (imaginations etc.) and (2) ideas over which I have no control (perceptions). If I allow that these, together with my mind, are the two base constituents of the world I experience I can then use my logical capacity to try to explain the ideas that fall into category (2) with reference to those that fall into category (1).

So, I can say that since all that I experience are ideas and minds then all that must exist are ideas and minds — this is an empiricist position. Now, I can posit that since there are ideas that my mind does not have control over — that is, ideas that fall into category (2) — then some other mind must have control over them. Thus we posit a super-mind that brings these ideas into being.

The difference between the materialist and the idealist position is that the materialist position relies on a tautology (the empty term ‘matter’) to explain the constancy of our ideas. This empty word has no content. Matter is never experienced and so it can never be explained. Thus materialism is anti-empiricist — it requires dogmatic or mystical foundations. But the idealist position is purely empirical. It uses the information we gain through induction and combines it with logical reasoning to explain the constitution of the world. There are no mysterious or dogmatic terms in idealism. The super-mind is formed through a combination of induction and logical reasoning.

This is why idealism is a superior philosophy. That is, unless one’s temperament is inclined toward dogmatic or mystical doctrines, in which case materialism is obviously preferable.

I do not find any of the problems Lord Keynes raises about Berkeley’s philosophy particularly compelling — for example, he asks why not many super-minds rather than one? I would answer: sure, if you believe in parallel universes! — but even if they are compelling, the central question remains: is empiricism preferable to mysticism? If so, then Berkeley’s philosophy is preferable to materialism. But if you wish to keep certain ‘mysteries’ of the metaphysical structure of the world intact — perhaps to give a sense of awe to what you see as a scientific Doctrine of Truth — then what makes your philosophy any better than a religion?

After all, the mystery of matter is no different to a mystery of faith — because what is at issue here is faith; faith in matter. For, as the Wiki article on Mysteries of Faith says,

In theology, an article of faith or doctrine which defies man’s ability to grasp it fully, something that transcends reason, is called “a mystery of the faith”.

Read in that light materialism is a theological doctrine that reflects on a mystery called ‘matter’.

Hans Albert Expands Robinson’s Critique of Marginal Utility Theory to the Law of Demand

A few days ago I wrote a post outlining Joan Robinson’s criticisms of the logical structure of marginal utility theory. It got quite a good response. Robinson’s point was that the manner in which the theory was constructed rendered it useless. Examined carefully it said or could say nothing of substance.

The theory posited that preferences must be fixed. Then we could attribute any change in consumer demand to price or income fluctuations. But if these preferences are not fixed in reality — as they certainly are not — then we could never be sure to what extent changes in consumer demand relied on price/income effects and to what extent they were due to a change in preferences.

I came across a very similar criticism of the Law of Demand in the philosopher Hans Albert’s 1963 paper Model Platonism: Neoclassical Thought in a Critical Light which was run on Lars Syll’s blog yesterday. I will first lay out the specific criticism, then highlight how it is analogous to Robinson’s and then allow Albert to provide something that Robinson did not: namely, an outline of what type of thinking gives rise to such errors.

Albert starts out by showing the tautology at the heart of the pure form of the Law of Demand. He writes:

The law appears prima facie to predicate a relatively simple and easily testable relationship and thus to have a fair amount of content. However, upon closer examination, this impression fades. As is well known, the law is usually tagged with a clause that entails numerous interpretation problems: the ceteris paribus clause. In the strict sense this must thus at least be formulated as follows to be acceptable to the majority of theoreticians: ceteris paribus – that is, all things being equal – the demanded quantity of a consumer good is a monotone decreasing function of its price. The ceteris paribus clause is not a relatively insignificant addition, which might be ignored. Rather, it can be viewed as an integral element of the law of demand itself. However, that would entail that theoreticians who interpret the clause differently de facto have different laws of demand in mind, maybe even laws that are incompatible with each other. Here, through an explicit interpretation of the ceteris paribus clause, the law of demand is made into a tautology. (p8)

What Albert is saying is that if we take the pure form of the Law of Demand — that is, the Law of Demand with an indeterminate ceteris paribus clause attached — then the law says nothing. If you adhere to it and I challenge you with a counterfactual you will just invoke the ceteris paribus clause. You will thus be insulated from criticism.

So, the logical response from a person who did not wish to speak in self-assertive tautologies would be to make explicit all the ceteris paribus clauses. But Albert outlines why this does not solve the problem.

Various widespread formulations of the law of demand contain an interpretation of the clause that does not result in a tautology, but that has another weakness. The list of the factors to be held constant includes, among other things, the structure of the needs of the purchasing group in question. This leads to a difficulty connected with the identification of needs. As long as there is no independent test for the constancy of the structures of needs, any law that is formulated in this way has an absolute ‘alibi’. Any apparent counter case can be traced back to a change in the needs, and thus be discounted. Thus, in this form, the law is also immunized against empirical facts… If the factors that are to be left constant remain undetermined, as not so rarely happens, then the law of demand under question is fully immunized to facts, because every case which initially appears contrary must, in the final analysis, be shown to be compatible with this law. The clause here produces something of an absolute alibi, since, for every apparently deviating behavior, some altered factors can be made responsible. This makes the statement untestable, and its informational content decreases to zero. (p9-10)

This is where Albert’s criticism ties back in with Robinson’s: if we cannot say anything tangible about the structure of preferences — and the stability of that structure — then we can always blame any counterfactuals on unobserved changes in preferences. If you adhere to the law and you articulate all the possible ceteris paribus clauses you believe to exist and I challenge you that the empirical results still do not conform to the Law of Demand you can then invoke a change in preferences. Again, you are fully insulated from criticism and are still speaking in tautologies.
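
Albert’s ‘absolute alibi’ can be put in almost mechanical terms. The following toy sketch (my own illustration, not Albert’s) shows a ‘test’ of the law that can never return a negative verdict once every counter-case is attributed to an unobserved shift in preferences.

```python
# Toy illustration of the 'absolute alibi': a law immunised in this way cannot
# fail, and so its informational content is zero.

def consistent_with_law_of_demand(price_change, quantity_change):
    """Naive test of the law: quantity demanded should move opposite to price."""
    return price_change * quantity_change <= 0

def consistent_with_immunised_law(price_change, quantity_change):
    """Immunised version: any apparent violation is put down to an unobserved
    change in preferences, so no observation can ever count against the law."""
    if consistent_with_law_of_demand(price_change, quantity_change):
        return True
    return True  # 'preferences must have changed' -- the absolute alibi

# Price and quantity both rise: a counter-case for the naive law, yet the
# immunised law is never contradicted.
print(consistent_with_law_of_demand(+1, +1))   # False
print(consistent_with_immunised_law(+1, +1))   # True
```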

The logical response to this is to then dig into the preferences themselves and try to formulate a theory of them explicitly. But this is never done. Albert writes,

To counter this situation, it is in fact necessary to dig deeper into the problem of needs and preferences; in many cases, however, this is held to be unacceptable, because it would entail crossing the boundaries into social psychology. (p10)

But of course the whole reason for the Law of Demand and marginal utility theory more generally was to try to escape from the realm of psychology and psychological observation altogether. Paul Samuelson, for example, writes in his famous textbook Economics:

But you should definitively resist the idea that utility is a psychological function or feeling that can be observed or measured. Rather, utility is a scientific construct that economists use to understand how rational consumers divide their limited resources among commodities that provide them with satisfaction. (P. 73 — My emphasis)

Once we introduce the realm of psychology we have to throw the whole theory out because now we must seek out the real causes of shifts in consumer demand outside of economics. We no longer need microeconomics to explain anything. In such a case people’s relationship to the prices and quantities of goods they purchase is subject to entirely different causes from those the marginalist economists focus on. In such a world people’s purchases become as infinitely complex as any of their other psychological motivations, and searching for some law-like relation in such a mire — not to mention a law-like relation that is Universal across individuals — becomes a completely fruitless endeavor; just as the early marginalists had recognised it to be.

So, what does Albert attribute these strange conceptions to? Remember, they are not only insulated according to the principles Albert lays out but, given that Albert’s paper is over 50 years old, we can only assume that they have also been insulated from his criticisms. Albert attributes this to the formal properties of mainstream economics more generally. He describes these formal properties as ‘model Platonism’ — which is to say that the models are pure thought experiments, are designed that way, and are constructed so as to avoid empirical reality. Albert writes,

The model Platonism of pure economics, which comes to expression in attempts to immunize economic statements and sets of statements (models) from experience through the application of conventionalist strategies… through conventionalist procedures, theories that certainly entail interesting ideas are often rendered insensitive to the facts and thus rendered useless. (pp9-10)

In short, this strange relationship to the real world is built into the methodology at the most basic of levels. It gives rise to strange constructions that speak to us in such generalities that they can never truly be applied to the real world.

John Hicks’ Book on Non-Ergodicity: A Forgotten Post-Keynesian Classic

Lars Syll recently provided an interesting quote from John Hicks’ 1979 book Causality in Economics. I thought that what Hicks said made an awful lot of sense, so I got my hands on a copy of the book. So far I have only scanned it, but I think that it is something of a masterpiece and I hope that someone sees fit to reissue it; it could easily be a standard textbook for Post-Keynesian methodology.

Take this quote from the preface to see just what Hicks wants to explain about economics,

I find that all experimental sciences are, in the economic sense, ‘static’. They have to be static, since they have to assume that it does not matter at what date an experiment is performed. There do exist some economic problems that can be discussed in those terms; but there are not many of them. The prestige of scientific method has led economists to attach importance to them, for this is the field where economics appears to be most ‘scientific’. The more characteristic economic problems are problems of change, of growth and retrogression, and of fluctuation. The extent to which these can be reduced to scientific terms is rather limited; for at every stage in an economic process new things are happening, things which have not happened before — at the most they are rather like what has happened before. We need a theory that helps us with these problems; but it is impossible to believe that it can ever be a complete theory. It is bound, by nature, to be fragmentary… As economics pushes beyond ‘statics’, it becomes less like science, and more like history. (p6)

It looks like the Post-Keynesians like Joan Robinson and Paul Davidson rubbed off on Hicks somewhat. From the above quote we can see that he truly absorbed this perspective — and that accounts, in part, for his later rejection of the ISLM model.

Hicks goes on to pursue a theme that Robinson also took up in her book Freedom and Necessity — a book which, judging from the present work, I think Hicks had read. He writes that while we can apply deterministic thought to the past in both history and economics, we cannot really apply it to the future.

There is no reason, when looking forward, to doubt that we are free, as we feel ourselves to be, to choose one course of action over another. But no decision made now can affect what has happened in the past… So, with respect to the past, one can be fully determinist… Determinism, applied to the future (in theological terms, pre-destination) is cramping; but determinism applied to the past is not cramping. It is liberating. (p17)

This is slightly contentious given the severe limitations of our knowledge about the past, but I think that the spirit of what Hicks is saying is correct. The past is, in a sense, frozen. It is not affected by our interpretations of it. Yes, we may colour the past through our interpretations — many historians are well aware of this — but that does not affect the actual content of the past. Determinism, however, cannot be applied to the future because the decisions we make today — which have an element of free will (Soros would call this ‘reflexivity’) — can send the future down any number of different trajectories.

For Hicks, this means that historical time — the time economists actually deal with — is in a state of flux. He writes,

One aspect of the difference between the sciences and economics [is that] the sciences are full of measurements which, over a wide field of application, can be regarded as constants… but there are no such constants in economics. There are indeed some price-ratios which for long periods have been some apparent constants or near-constants, such as the nine or ten-year length of the Trade Cycle, which for roundabout half a century, between 1820 and 1870, appeared to have become established (so established, indeed, that Jevons dared to associate it with the sunspot cycle, thus reducing it to purely physical terms); but regular fluctuation, on this pattern, has not persisted. The economic world, it has in our day become increasingly obvious, is inherently in a state of flux. (p40)

Hicks was writing in 1979, when the rational expectations theorists were seeking out timeless explanations for human behavior, so it is by no means clear that this was becoming “increasingly obvious”. Nevertheless, Hicks’ actual point stands: economics does not deal with timeless laws; rather, it deals with an economy moving through historical time in a state of flux.

The eighth chapter of the book is probably the most interesting. It is entitled ‘Probability and Judgement’ and is an extended discussion on the use of probability theory and econometrics. Hicks’ views on probability are, by his own admission, closely aligned with Keynes’. I do not think that this is widely known — at least, I did not know it.

Hicks’ discussion is long and rather in-depth. I would suggest that people read it themselves — I intend to reread it because it is quite dense. The ultimate conclusion he comes to, however, is rather simple.

When we cannot accept that the observations, along the time-series available to us, are independent, or cannot by some device be divided into groups that can be treated as independent, we get into much deeper water. For we have then, in strict logic, no more than one observation, all of the separate items having been taken together. For the analysis of that the probability calculus is useless; it does not apply. We are left to use our judgement, making sense of what has happened as best we can, in the manner of the historian. Applied economics does then [go] back to history after all. (p102)

This means that although statistical information may be interesting it does not generally explain anything, as the econometricians are wont to think it does. Rather, it is the information itself that must be explained. Hicks writes,

I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem at hand. Very often they are not. Thus it is not at all sensible to take a small number of observations (sometimes no more than a dozen observations) and to use the rules of probability to deduce from them a ‘significant’ general law. (p102)
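
To make Hicks’ two complaints (non-independent observations and tiny samples) concrete, here is a minimal simulation sketch of my own; nothing of the kind appears in the book, and the sample size, trial count and library calls are all illustrative assumptions. Two series are generated as independent random walks, so there is no true relationship between them and the observations are anything but independent; yet a regression on a dozen data points reports a ‘significant’ relationship far more often than the nominal 5% would suggest.

```python
# Illustrative only: two unrelated random walks regressed on each other.
# Because the observations are not independent, the "significance" test is
# answering a question its assumptions do not entitle it to answer.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_obs = 12          # "no more than a dozen observations"
n_trials = 5000
false_positives = 0

for _ in range(n_trials):
    x = np.cumsum(rng.normal(size=n_obs))   # random walk 1
    y = np.cumsum(rng.normal(size=n_obs))   # random walk 2, unrelated by construction
    if linregress(x, y).pvalue < 0.05:      # conventional 'significance' threshold
        false_positives += 1

print(f"'significant' relationships found in {false_positives / n_trials:.0%} of trials")
# Typically well above the nominal 5%: the probability calculus assumed an
# independence that the data simply do not have.
```

This is not a test of Hicks’ claim, of course; it is only a reminder of how easily the machinery manufactures ‘laws’ from a dozen dependent observations.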

Hicks then goes on to note something extremely important — something that I have noted many times before: namely, the tendency for economists to suppress relevant information (for example, non-quantitative information) just because it does not fit in with their tidy regression model.

For we are often assuming, if we do so, that the variations from one to another of the observations are random, so that if we had a larger sample (as we do not) they would by some averaging tend to disappear. But what nonsense this is when the observations are derived, as not infrequently happens, from different countries, or localities, or industries — entities about which we may well have relevant information, but which we have deliberately decided, by our procedure, to ignore. By all means let us plot the points on a chart, and try to explain them; but it does not help in explaining them to suppress their names. The probability calculus is no excuse for forgetfulness. (p102)
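
The point about suppressing the ‘names’ can be illustrated the same way. In the hypothetical sketch below (my construction, with made-up country labels and parameters) the relation between the two variables is negative within every country, but once the observations are pooled and the country labels thrown away the regression reports a confidently positive slope: the averaging does not make the between-country differences disappear, it lets them drive the result.

```python
# Illustrative only: pooling observations from hypothetical countries A, B, C
# while ignoring which country each observation came from.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
country_levels = {"A": 0.0, "B": 5.0, "C": 10.0}   # assumed country-specific levels

xs, ys = [], []
for level in country_levels.values():
    x = level + rng.normal(size=20)
    # within each country the true slope is negative (-0.8)
    y = level - 0.8 * (x - level) + rng.normal(scale=0.3, size=20)
    xs.append(x)
    ys.append(y)

pooled = linregress(np.concatenate(xs), np.concatenate(ys))
print(f"pooled slope (names suppressed): {pooled.slope:+.2f}")            # comes out positive
for name, x, y in zip(country_levels, xs, ys):
    print(f"within-country slope, {name}: {linregress(x, y).slope:+.2f}")  # each negative
```

Plotting the points with their names attached would give the game away immediately, which is exactly what Hicks suggests.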

All in all, John Hicks’ book is an excellent one and I cannot recommend it enough. It is absolutely written in the Post-Keynesian tradition and after reading it I cannot but say that John Hicks died a Post-Keynesian economist. Although I am somewhat surprised to be saying this, I think that Hicks has genuinely written one of the most comprehensive works on the theory of non-ergodicity available in Post-Keynesian economics. Indeed, his contribution — and his rejection of his own ISLM framework — is actually more Post-Keynesian than the work of some people who go by that label today. But given how unproductive those debates tend to be I will not try to elaborate on that here.

A Note on the Consumption Function: Although the above post was mainly a brief overview of Hicks’ argument, I came across one part of the book which I thought interesting to highlight on its own — for posterity, as it were. That is where Hicks discusses the consumption function — which many assume Keynes considered to be some sort of Universal Constant. Hicks writes of this,

Just how far Keynes himself regarded the saving function (or consumption function) as dependable is, however, a question worth considering. We know that he was a sceptic about econometrics; so he can hardly have fancied that it would be possible to calculate his function — the function to be applied to the analysis of some particular year (‘1975’) — by induction from the behavior of income and saving in the previous years (back to 1965, or 1955). So he would not have expected it to be usable, in the manner which later became fashionable, for projections, or even for ‘fine tuning’. I know myself, from my recollections, that it was nearly a decade after my first acquaintance with the General Theory before I realised that people were taking the function in this way. It was not natural to take it in this way when one first read the book.

It was natural to take the function as being theoretical; that is to say, as being based on reasoning, from rather obvious aspects of observed behavior, as is commonly done in other parts of economics. (pp67-68)

I entirely agree with Hicks here. It has always surprised me that interpreters assumed that Keynes was saying that the consumption function might be stable. But then I suppose that those who are inclined to look for timeless laws will find them anywhere and everywhere.

Posted in Economic Theory, Philosophy, Statistics and Probability | Leave a comment