Marc Lavoie’s New Book


Marc Lavoie’s new book on the foundations of Post-Keynesian economics is out, entitled Post-Keynesian Economics: New Foundations. I learned a lot from the previous version of this book and Marc has told me that he worked hard to update the new edition in light of more recent developments in both Post-Keynesian and mainstream economics.

The first chapter is available on the publisher’s website and it looks pretty good. It contains discussions of the Sonnenschein–Mantel–Debreu (SMD) theorem. He also discusses the fact that empirical tests of the aggregate production function have recently been exposed as being, in effect, regressions run on national accounting identities. If that sounds like gobbledegook to you, it basically means that a large chunk of the empirical results of mainstream economics since the 1960s are hokum — i.e. it’s a big f-ing deal! Lavoie writes:

Neoclassical economists are claiming to measure something, but are really measuring something entirely different. Their theories, such as the necessary negative relationship between real wages and employment, seem to be supported by the data, whereas the negative relationship arises straight from the identities of the national accounts, with no behavioural implication for the effect of higher real wages on employment. I have discussed these issues with a few of my neoclassical colleagues. The most genuine answers have been that without these elasticity estimates they could no longer say anything. But they would rather continue making policy proposals based on false information than make no propositions at all. In other words, they would rather be precisely wrong than approximately right. (p36)

Lavoie also discusses the empirical claims of the mainstream through a survey of a wide range of literature, especially the so-called ‘meta-regression’ literature, which seeks to detect data manipulation, conscious or unconscious. The findings are damning and support a claim often made on this blog: econometric regressions run on these models are, the vast majority of the time, largely meaningless.

Most post-Keynesians demonstrate scepticism when it comes to empirical and econometric research. Still, one cannot but be impressed by the huge quantity of empirical work that seems to provide support for orthodox theory. This section has shown that this cynicism with regard to orthodox econometric research is largely justified, as many of the studies that appear to verify or confirm orthodox theory are just artefacts. What is an ‘artefact’? The most common definition, relevant to science, says that an ‘artefact’, or ‘artifact’, is a spurious finding caused by faulty procedures. Meta-regression analysis has certainly demonstrated that many of the empirical proofs of orthodox theory were phoney and arising from defective procedures. The word ‘artefact’ is also used in the fantasy and sorcery literature. There, an ‘artefact’ is a magical tool with great power, like a magic wand. This definition seems to be particularly relevant to neoclassical production functions since all the predictions that can be drawn from a model of perfect competition cannot be refuted, even when we know that the required conditions do not hold. (p70)
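To give a flavour of what a meta-regression does: in the ‘funnel asymmetry’ variant, the reported effect estimates from many published studies are regressed on their standard errors; a strong association between the two is read as evidence of publication or reporting bias, while the intercept is read as the bias-corrected effect. Below is a minimal sketch with invented numbers (my own illustration, not any study Lavoie cites).

```python
import numpy as np

# Hypothetical meta-regression data: reported elasticity estimates and
# their standard errors collected from imaginary published studies.
effects  = np.array([-0.45, -0.30, -0.62, -0.15, -0.55, -0.20, -0.70, -0.10])
std_errs = np.array([ 0.20,  0.12,  0.30,  0.05,  0.25,  0.08,  0.35,  0.03])

# Funnel-asymmetry / precision-effect regression: effect_i = b0 + b1 * SE_i.
# A b1 far from zero suggests publication bias; b0 is read as the
# bias-corrected ('genuine') effect.
X = np.column_stack([np.ones_like(std_errs), std_errs])
b, *_ = np.linalg.lstsq(X, effects, rcond=None)
print(f"bias-corrected effect (b0): {b[0]:.3f}")
print(f"funnel-asymmetry term (b1): {b[1]:.3f}")
```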

Lavoie stops short of saying that the techniques should be given up altogether. But it seems to me difficult to draw any other conclusion. In High Finance these techniques, applied to macroeconomic models, are known to be GIGO (Garbage-In, Garbage-Out) which is why they are not taken seriously when there is money on the table. Why is it that policymakers and academics continue to insist upon using them? Perhaps because there is a certain amount of magical thinking going on in the highest echelons of the profession and the practitioners are not being honest about what they can and cannot say with relative surety?

This book will probably turn out to be a key reference text for the growing student movement against current economic teaching.

Post-Keynesian Economics: New Foundations


Moar Rethinking of Economics

Here’s an interview with Tony Lawson on economics, mathematics and ontology.


Rethinking Economics


The Rethinking Economics conference this weekend went very well. I think that this movement is only getting bigger. The level of organisation at the conference was extremely impressive. Something tells me that this will not be going away any time soon.

I’m still a bit busy writing my book. But in the meantime here is an interview with Philip Mirowski who appeared on the Sam Seder show last week.

You can also find a similar interview on This is Hell here.


Interest Rates and ‘Reserve Constraints’: Why Endogenous Money Works Without Central Bank Intervention


Endogenous money advocates often think that a central bank is required to offset increases in government borrowing and so prevent interest rates from rising. The story goes: the central bank targets the overnight interest rate by buying up government securities; if the government issues more debt in the form of securities to increase spending, the central bank will soak this debt up to maintain the target interest rate. Thus government spending cannot cause higher interest rates; rather, the interest rate is set by the central bank.

This is a nice story. I tell it myself sometimes. It is easy to communicate and it usually causes anyone arguing otherwise to pipe down. But there is a much simpler and more fundamental argument for endogenous money. It is the one that we can use to explain the historical statistics in various countries. Let us turn to these first.

Now, we know today that the central bank targets a rate of interest and buys government securities, but it is not so clear that this is what happened in, say, Britain in the 19th century. Britain operated under a gold reserve system and the central bank had not yet articulated its own role of setting the rate of interest. Yet the Bank of England has clearly shown, in a fantastic paper entitled The UK recession in context — what do three centuries of data tell us?, that government borrowing in the UK never really affected bond yields. Take a look at the graph below from the paper.

[Chart from the Bank of England paper: UK government borrowing plotted against long-term bond yields]

See that red circle? Well, that is an era in which government deficit-financing is rising enormously while the interest rate on bonds… falls! That seems somewhat at odds with the hoary old tale that rising government spending not backed by rising taxes leads to a ‘crowding out’ of investment and hence a rise in interest rates, now doesn’t it?

So, how do we account for this? Well, it is quite simple really: when the government sells securities to the private sector, the money that it receives gets spent back into the economy. This means that it accrues in someone else’s bank account. Thus the net amount of reserves in the system does not actually change. The effects on interest rates come from somewhere else entirely. Let us first look at the above statement in more detail.

Imagine that the government wants to borrow £100 to pay a policeman. It will issue £100 of government debt to a primary bond dealer and receive £100 in cash from him. It will then pay the policeman the £100, which will accrue to his bank account. So, the financial assets excluding cash in this period will look like this (with [-] denoting a liability and [+] an asset):

Financial assets not including cash

Government: -£100

Primary bond dealer: +£100

Policeman: -/+£0

As we can see the primary bond dealer’s financial assets (excluding cash) have increased while the government’s financial assets (excluding cash) have decreased. The policeman’s have stayed the same. What about the cash? Well, the balance sheets for cash will look like this:

Cash

Government: -/+£0

Primary bond dealer: -£100

Policeman: +£100

Now we know that the primary bond dealer’s cash has been depleted by £100 and the policeman’s has been increased by £100. Meanwhile the primary dealer’s financial assets (excluding cash) have increased by £100 and the government has incurred a liability of £100.

The total amount of private sector cash savings remains identical to what it was before the transaction. Because the government spent the money it borrowed back into the economy, it effectively just transferred cash savings from the primary bond dealer to the policeman. This means that if a private company wants to borrow, it will find the same amount of money in existence as before the government undertook the transaction. It will just have to approach the policeman (or his bank) rather than the primary bond dealer if it wants to borrow it.

Another way to put this: deposits remain the same. The primary dealer loses a bank deposit and the policeman gains one. And if the aggregate level of deposits in the banking system remains the same, why would there need to be an increase in bank reserves to accommodate it? Simple answer: there doesn’t. This is where the money multiplier runs into a sort of fallacy of composition. Reserves are held against deposits, so if deposits in the aggregate remain the same there is no need to increase reserves in the aggregate to accommodate the lending and borrowing. The example we have given shows that the borrowing by the government does not increase deposits. Deposits remain the same.
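To make the bookkeeping concrete, here is a toy sketch of the transaction (my own illustration; the account names and the two-step sequencing are assumptions for exposition, not anything drawn from a source). It simply checks that aggregate private sector deposits are the same before and after the bond sale and the spending:

```python
# Toy balance-sheet bookkeeping for the £100 bond sale and wage payment.
# Account names and sequencing are illustrative only.
deposits = {"primary_dealer": 100, "policeman": 0}   # bank deposits (cash)
bonds    = {"primary_dealer": 0,   "policeman": 0}   # holdings of government debt

def total_deposits():
    return sum(deposits.values())

print("private sector deposits before:", total_deposits())

# Step 1: the government sells £100 of debt to the primary dealer.
deposits["primary_dealer"] -= 100
bonds["primary_dealer"]    += 100

# Step 2: the government spends the £100 on the policeman's wages.
deposits["policeman"] += 100

print("private sector deposits after: ", total_deposits())   # unchanged: 100
print("bond holdings:                 ", bonds)               # the dealer now holds the debt
```

The only things that change are who holds the deposit and who holds the bond, which is exactly the point made above.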

Where do the potential effects on interest rates come in then? Simple. The stock of government debt has increased. In a typical supply and demand market, if the quantity of a good increases, its price falls. So, if the quantity of government debt rises, its price should fall and its interest rate should rise. This could, in principle, drag up interest rates on other securities and cause a general rise in the rate of interest (this is by no means clear but it is a possibility).
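The price-yield mechanics here are easiest to see for a perpetual bond, which is broadly what British government debt of the period (consols) was. For a fixed annual coupon $C$ and market price $P$ the yield is simply

$$ i = \frac{C}{P}, $$

so a greater supply of bonds pushing $P$ down would mechanically push $i$ up. The historical record shows that this is not what happened.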

So why then did the yield on government debt in the UK in the early 19th century fall as the stock of government debt rose? Quite simply: confidence. People had had confidence in British public debt since the Bank of England was established in 1694 (see here for a potted history). People didn’t much care that the stock of public debt was rising because they believed it would be paid off. So, even though the government ran up massive debts and massively increased the supply of government securities, the price of these securities did not fall. This was likely helped to some extent by increases in savings held by exporters (but the current account surplus in this era was by no means large enough to explain this alone).

The simple supply and demand story, then, is not how financial markets work. They are not price versus quantity markets; rather, they are price versus perceived risk markets. And this ties back into Keynes’ theory of financial markets. To put it very briefly: if people are highly confident in the financial markets, and there is a fixed amount of money in those markets, the velocity of that money can speed up to accommodate any increase in the flow of borrowing. This is actually very obvious, but mainstreamers generally don’t dig very deep into these things.

This blog post is long enough, but I have fully formalised Keynes’ theory in the book that I am nearly finished writing. I think that this is the first time it has been done. Since I have given more than enough content away on this blog for free, you will have to wait until it comes out and buy it, and then you can get a better grasp of Keynes’ theory, the only theory that really fits with the historical record in this regard.


‘Uncertainty’ in Contemporary DSGE Modelling: Not Even Wrong


Confusion of thought and feeling leads to confusion of speech.

— John Maynard Keynes

Readers will note that I very rarely discuss DSGE modelling on here. Frankly, I’m not enormously interested. The fad is one in which economists — or, we should rather say: mathematicians with some loose economic training — have come to mistake analogy for literal explanation.

What do I mean by that? Simply that they have taken certain contingent theoretical statements made by previous generations of economists as iron-clad laws and then used these as building blocks to construct ever more Byzantine towers that tell us nothing about how the economy actually operates. This has given rise to a funny game in which theorists no longer really build models to give us new insights. Rather they try to integrate things that have already happened, tailoring their models to fit the discourse of the day.

For example, after the 2008 financial crisis a whole proliferation of models incorporating aspects of What Happened appeared. Theorists began to pick up on words that started appearing in the newspapers and so forth and then tried to incorporate these into their models. The models became the manner in which the theorists articulated What Happened to themselves. It is a truly bizarre ritual indeed. The theorists gain a fuzzy understanding of what actually happened and then incorporate this fuzzy understanding in an even fuzzier manner into a DSGE model and then convince themselves that they are engaged in scientific activity when really all they are doing is following a fad or news-trend.

One such approach that might be of particular interest to readers of this blog is an attempt by two economists by the names of Bianchi and Melosi to integrate ‘uncertainty’ into a DSGE model. If that smells funny, it should, because the entire construction of such models seeks to eliminate uncertainty. But when you read the paper, entitled Modeling the Evolution of Expectations and Uncertainty in General Equilibrium, you quickly see that the authors do not understand the concepts they are dealing with.

When I read the title I thought to myself: “I bet they have mistaken risk for uncertainty” and indeed this is precisely what they have done. They basically turn the agents in the model into Bayesian calculators that tot up the probability of the economy entering various phases or states.
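Here is a stripped-down sketch of the kind of calculation this implies (my own toy illustration with invented parameters, not the authors’ actual model): the agent observes a growth figure each period and updates, via Bayes’ rule, the probability of being in a ‘high-growth’ rather than a ‘low-growth’ regime, both of whose statistical properties are fully specified in advance. That is risk, not uncertainty.

```python
import math

# Toy Bayesian regime filter: two regimes with fully known Gaussian
# distributions for observed growth. Everything here is quantifiable risk.
REGIMES = {"high": (3.0, 1.0), "low": (0.5, 1.0)}   # (mean, std dev) of growth

def likelihood(x, mean, sd):
    """Gaussian density of observation x under a regime."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def update(prior, observation):
    """One step of Bayes' rule over the two regimes."""
    post = {r: prior[r] * likelihood(observation, *REGIMES[r]) for r in REGIMES}
    norm = sum(post.values())
    return {r: p / norm for r, p in post.items()}

beliefs = {"high": 0.5, "low": 0.5}
for growth in [2.8, 3.1, 0.4, 0.2]:          # observed growth rates
    beliefs = update(beliefs, growth)
    print({r: round(p, 3) for r, p in beliefs.items()})
```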

This leads to some truly bizarre statements. As readers will probably know, a key component of uncertainty is the fact that we exist in a world where people exercise free choice. Thus we do not and cannot know what people are going to do in the future and must form judgements about the world by trying to anticipate what those around us will do. This is absolutely central to Keynes’ concepts of uncertainty and animal spirits and so on. The authors of the paper, on the other hand, write,

In our framework agents have always enough information to infer what the current state of the economy is or what other agents are doing: High or low growth, Hawkish or Dovish monetary policy, etc. Nevertheless, agents face uncertainty about the statistical properties of what they are observing. For example, agents could be uncertain about the persistence and the destination of a particular state. (p4 — My Emphasis)

After completely butchering the meaning of the term ‘uncertainty’ they proceed to violently attack other concepts. Next on the chopping block is ‘animal spirits’.

…the methods developed in this paper lay down a convenient framework for investigating the effects of changes in economic fundamentals or animal spirits on uncertainty and the feedback effects of such swings in uncertainty on the economic dynamics. (p37)

The authors genuinely think that in applying Bayes’ theorem to human decision-making they have constructed an approach for studying ‘animal spirits’. But of course the very paragraph in which Keynes, who knew more than a little about the philosophy of probability, introduces the term states crystal clearly that ‘animal spirits’ are unrelated to probability estimates.

Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as a result of animal spirits — of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities. (My Emphasis)

This is a very simple point indeed and one that the authors could easily have picked up from either the source itself or the literature that grew up from it. In Keynesian economics we do not assume (a) that certain aspects of the future are reducible to probability estimates, or (b) that people act in a manner in which they take various events in the future to be calculable in numerical probabilistic terms and then apply this to reality in the form of Bayes’ theorem.

The level of scholarship in contemporary economics is absolutely shocking. Contemporary theorists just pick up on buzzwords that they hear in the media and then assume that they have understood these. Then they scramble to build some arcane model or other in which they assure others that they have captured the meaning of the buzzword in question. The mathematics then becomes a cloak hiding the fact that they have never bothered to actually think through the concepts they are using. Poor scholarship and lack of real understanding is perpetuated behind a wall of mathematics. The trick is that anyone who understands the mathematics is very likely not to understand the ideas in question. And so the whole circus can go on indefinitely.

Economists today no longer debate economics at all. They no longer want to talk about, for example, what the meaning of the idea of animal spirits has for economic theorising. Rather they want to busy themselves as fast as possible with applying whatever new trick they have learned in maths class. And then they think that they are saying something about the real economy! DSGE modelling and those that practice it are truly in the territory of not only not being right but not even being wrong.

Reading papers such as this, one might well conclude that economics as an academic discipline is largely dead. Ideas are no longer actually debated. Words lose any clear meaning and become allusions to some mysterious and ever-changing x (did someone say ‘liquidity trap’!?). Mathematical consistency becomes confused with truth. The field becomes so garbled that anyone who wants to learn anything about economics has to tear themselves firmly away from it.


Housing Bubble Redux


A piece I did for Al Jazeera on the IMF warnings of a potential global housing bubble. Here we go again.

Will real estate bubbles again sink the global economy?


A Series of Interviews on My Forthcoming Book — Part Deux


The next part of the interview on my new book. This one deals less with methodological issues than the previous interview did and focuses on substantive issues. We discuss aggregate demand, theories of financial markets — including a discussion of what is and what is not a liquidity trap — and the distribution of income in capitalist economies.


A Series of Interviews on My Forthcoming Book


Amogh Sahu recently did a long series of interviews with me based on what will be in my forthcoming book. The first interview is up now and it deals broadly with methodology and epistemology. There’s also some discussion of equilibrium methods and so forth. It’s all a bit abstract but I think that we brought it down to earth to a large extent. At least I hope so.

Philip Pilkington on Modern Economic Methodology


The Eurozone: An Awful Mess


I have an article that went up on Al Jazeera yesterday. It’s on the future of the Eurozone and it’s got some good responses so far. Readers might be interested.

The European single currency system spirals further out of control


Tobin’s Q: A Wily Trickster and Slightly Vacuous


I was writing an article this morning for an internet website about the recent IMF proclamations that there may be a global housing bubble underway. The article, if it runs, should be online next week and I will link to it here.

While examining the potential impacts on the global economy if the IMF turns out to be correct I came across an interesting paper that the Bank of England published back in 2008 entitled Understanding Dwellings Investment. It’s a good paper, well put together and very coherent. Quite what I’d expect from the BoE. But it does show the Q-theory up to be rather vacuous, if only unintentionally.

Tobin’s Q for housing, as the paper lays it out, is as follows,

$$Q_t = \frac{Hn_{t+3}}{\frac{L_t}{D_t} + C_t + P_t + F_t + O_t}$$

Where: Hn is the price of new housing, L is the price of land per hectare, D is the density of development per hectare, C is the cost of construction, P is the cost of planning obligations, F is the cost of fees, and O is all other costs including the cost of finance.

Note that the house price variable enters with a forward lead rather than a lag: it is dated in the future. This is because, and this is where things start to break down, the authors note that it is not the house price now that matters to stimulate investment but rather the expected house price in the future. The authors write:

There is a lag between the decision to start construction work and the sale of a completed dwelling, so house builders must form an expectation about what house prices will be when the property is sold. (p396 — My Emphasis)

It seems to me better to note this explicitly in the algebra so that we all have a very clear conception of what is going on. So,

$$Q_t = \frac{E_t[Hn_{t+3}]}{\frac{L_t}{D_t} + C_t + P_t + F_t + O_t}$$

Where,

$$E_t[Hn_{t+3}] = f(Hn_t,\, Hn_{t-1},\, \psi_t)$$

Well, that third variable in that last equation, the ψ term standing for whatever it is that actually forms builders’ expectations, is a bit of a puzzle, now isn’t it? Indeed, if we had any concrete idea as to what it might be we might be better suited to property speculation than macroeconomics. Rather than engage in crystal ball-gazing — thank God — the ever practical authors simply proxy the expectation with the realised data. They write,

For simplicity, this article uses the actual value of house prices three quarters in the future as a proxy for builders’ expectations. This lead on house prices is based on data from the National House Builders Council (NHBC) that show the average time between the date builders notify the NHBC of an intention to start work and the date of completion is about ten months. (ibid)

No problem at all. But this all seems a bit retrospective, does it not? I mean, the authors take actual house prices as they developed three quarters out as a proxy for builders’ expectations. There is nothing inherently wrong with this. But it does raise the question of what the utility of the exercise is in the first place.
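To see just how retrospective it is, here is a minimal sketch of that proxy construction (hypothetical numbers and a simplification of the paper’s procedure, not its actual data): Q in quarter t is simply the house price that materialised three quarters later divided by current all-in building costs.

```python
# Hypothetical quarterly series: realised new-house prices and all-in
# building costs (land per dwelling, construction, planning, fees, other).
house_prices   = [200, 205, 212, 220, 231, 240, 238, 230]
building_costs = [180, 182, 185, 188, 192, 196, 199, 201]

LEAD = 3  # quarters between starting work and sale, per the NHBC figure

# The 'expected' price is proxied by the price that actually materialised
# three quarters later, which is what makes the exercise retrospective.
q_series = [house_prices[t + LEAD] / building_costs[t]
            for t in range(len(house_prices) - LEAD)]
print([round(q, 3) for q in q_series])
```

The ‘expectation’ in the numerator, in other words, is only ever known after the fact.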

Think of it this way: the independent or exogenous variable here is the expected house price three quarters out. But then what is the dependent variable? Well, it’s the Q, right? But what is the point of the Q? Well, generally it is seen as a means to approximate the future course of prices.

Feeling a bit dizzy yet? Yes? Well, you should. We’re firmly placed here in the land of tautology and retrospective 20/20 vision. And that is why Tobin’s Q is ultimately pretty vacuous. It is basically in line with Keynes’ marginal efficiency of capital, but while the latter never pretends to give expectations any concrete existence outside the mind of the investor, the former at least hints at such an existence; and when good practical economists try to use the tool they soon find it slightly… hollow.

By my reading the BoE economists basically recognised this. But they do seem to have made a rather odd omission. They produce the following graph measuring their estimation of the Q against the actual development of dwelling investment:

[Chart from the paper: the estimated Q for housing plotted against actual dwellings investment]

In the paper the authors note,

Q rose in the late 1980s as house prices rose faster than costs, prompting a house-building boom. Then, as housing market prospects deteriorated, so did the expected return and, with it, house building. Q quickly dropped and remained low until 2001, when rising house prices relative to costs again boosted the returns to house building. It is perhaps puzzling why house building did not rise more quickly given the increased returns. (p398)

The reason they give for the slower response of investment after 2001 has to do with increased planning regulations. But looking at the data they produce, this does not seem to explain the lag. Rather, the authors should probably have looked at what was driving expectations and hence investment.

In the late-1980s the UK was undergoing an investment boom due to the deregulation of the City of London under the Thatcher government. This was known as the ‘Big Bang‘. This led to a sort of mania in the City and ultimately to a housing and stock market boom followed by a slump.

We can see this clearly in the data, where in the late 1980s the rise in property investment closely followed the Q estimate: investment followed expectations of future price increases. By contrast, after 2001 investment lagged behind future price increases. The latter was probably a more ‘natural’ housing bubble than the former in this regard, as it was the result of investors eyeballing the price increases for a period before engaging in speculation, while in the 1980s everyone just went, well, a bit mad as Thatcher appeared on television telling the City that she was taking the leash off from around their collective neck.

What should we learn from all this? Well, simply that estimates like Tobin’s Q are slightly vacuous. In trying to bury uncertainty about the future they lead people to seek information in the data that can probably only be gleaned from a better comprehension of economic history. Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.

