To What Extent Is Economics an Ideology and to What Extent Is It a Useful Theory?

By Philip Pilkington, a macroeconomist working in asset management and author of the new book The Reformation in Economics: A Deconstruction and Reconstruction of Economic Theory. The views expressed in this interview are not those of his employer.

Ever since the Enlightenment many societies have moved away from justifying their existence and formulating their aims through recourse to religious language. Gone are the days of the ‘Great Chain of Being’ which justified the natural and social orders all the way from the plants and trees through the commoners, via the nobility and the King all the way up to God the creator. What replaced these ideologies were ideas about ‘Progress’ – how the good society was attained through Progress and what such Progress would look like. Progress, it was said, was to be grounded in the scientific method; what had worked so well to uncover natural processes could also be applied to engineer society.

It was in the 19th century, however, when the ideologies of Progress really began to blossom and flower. One was economics, of which we will have more to say below. Another was phrenology. Phrenology was a science that claimed that a person’s character – including his capacities and his dispositions – was contained within his skull and could be determined by studying his skull carefully. Today few take this seriously – although many still recognise that phrenology was an early progenitor to so-called ‘neuroscience’. But throughout the 19th century these ideas were enormously popular – one popular English work sold more than 300,000 copies!

What made phrenology so popular was what also made economics so popular at the time: it gave a rationale for a society based on Progress and also provided a blueprint for how this could be achieved. The phrenological doctrine, being so vague in its pronouncements, was highly malleable and could be used to justify whatever those in power needed justified. So, for example, in 19th century England phrenology was used to justify laissez faire economic policies by emphasising unequal natural capacities amongst the population, while in early 20th century Belgian Rwanda it was used to justify the supposed superiority of the Tutsis over the Hutus.

In my book The Reformation in Economics I take the position that modern economics is more similar to phrenology than it is to, say, physics. This is not at all surprising as it grew up in the same era and out of remarkably similar ideas. But what is surprising is that this is not widely noticed today. What is most tragic, however, is that there is much in economics that can and should be salvaged. While these positive aspects of economics probably do not deserve the title of ‘science’ they at least provide us with a rational toolkit that can be used to improve political and economic governance in our societies.

The Ideology at the Heart of Modern Economics

The curious thing about modern economics is its almost complete insularity. Its proponents appear to have very little notion of how it applies to the real world. This is not the case in normal sciences. Take physics, for example. It is extremely clear how, say, the inverse-square law applies to experienced reality. In the case of gravitation, the inverse-square law makes experimentally testable predictions about the force exerted by, say, the gravitational pull between the sun and the earth.

Modern economics – by which I mean neoclassical or marginalist economics which relies on the notion of utility-maximisation as its central pillar – completely lacks this capacity to map itself onto the real world. As philosophers of science like Hans Albert have pointed out, the theory of utility-maximisation rules out such mapping a priori, thus rendering the theory completely untestable. Since the theory is untestable it cannot be falsified and this allows economists to simply assume that it is true.

Once the theory is assumed to be true it can then be applied everywhere and anywhere in an entirely uncritical manner. Anything can then be interpreted in terms of utility-maximisation. This is most obvious in popular publications like Freakonomics: A Rogue Economist Explores the Hidden Side of Everything. Such books read in an almost identical way to the fashionable books of 19th century phrenology. The economists address everything from parenting to crime to the Ku Klux Klan by filtering it through the non-experimental theory of utility-maximisation – a theory that has not been and cannot be verified, and so the author and reader alike take it entirely on trust.

Such systems of ideas are ideological to the core. They are cooked up independently of the evidence and are then imposed upon the material of experienced reality. We are encouraged to ‘read’ the world through the interpretive lens of economics – and when we ask for evidence that this lens uncovers factually accurate information we are confounded with circular arguments from the economists.

Large-scale public policy is also filtered through this lens. This is done by constraining the study of macroeconomics – that is, GDP growth, unemployment, inflation and so on – by tying it to the theories of utility-maximisation. All macroeconomics today must be ‘microfounded’. This means that it must have microeconomic – read: ‘utility-maximising’ – foundations. In reality, as I show in the book, these foundations are anything but ‘micro’. Rather, the entire economy is treated as if it were dominated by a single uber-utility-maximiser and all the conclusions flow from there.

This may seem like odd stuff but it is built into the theory as a sort of foundational delusion. The arbitrary, non-empirical theory of utility-maximisation takes primacy over all considerations of actual statistical facts, intuitions about human motivations and even basic assumptions about what should constitute a properly moral view of man. What we end up with is not just a crushing, anti-inquiry ideology but also a lumbering failure of a system of ideas that has no hope of extracting relevant information about the real world.

What Is To Be Done?

Is economics then to be thought of as a failure? Must we scrap economics and try to find other ways to describe and address our economic and political problems? In this regard, my book claims to lay out a new path – albeit one that has been intuitively followed by some economists, most notably those in the heterodox camp. This new path is based on two key interrelated premises.

The first is that we have little insight into what actually motivates human beings. For this reason theories that rest on assumptions about human motivation – like utility-maximisation – must be thrown out and the study of the economy must be undertaken by examining large economic aggregates. In short, micro must be tossed off the throne and the crown must be handed to macro. The second premise is that we must not be overly concerned with highly precise ‘models’ of the economy. Instead we must take what I have come to call a ‘schematic’ approach. A schematic approach involves building tools that can be integrated into how we understand the world around us without assuming that these tools provide us with an exact description of this world. This schematic toolkit – which I begin to lay out in the later chapters of the book – can then be used to approach the study of actual economies.

These may seem like rather simple rules. But when applied to economic theory they generate rather radical results. At the same time they greatly constrain the amount of wisdom that we can assume economists to have; given these premises, no book like Freakonomics should ever be taken seriously – indeed, such books probably should never have been written in the first place. In that sense, they may appear to militate against Enlightenment optimism. This may well be so, but I would argue that they are arrived at through rational Enlightenment-style inquiry and so should be taken seriously even by proponents of Enlightenment Progress. After all, phrenology eventually fell in the face of rationalistic criticism.

In the book some of the issues around uncertainty and free will are also explored. Implicit in some of the book’s central criticisms is that societies are not to be understood in a deterministic manner. Unlike billiard balls, social forces are not subject to deterministic laws. In one sense this is unfortunate as it means that our understanding of social and economic processes must always be of a contingent and not-too-precise nature. But on the other hand it is optimistic in the sense that it attributes to human beings an agency to create the world around them that mainstream marginalist economics stripped away by imposing the limited utility-maximiser framework on everyone from Mother Teresa to Hitler.

This also creates an opening for a proper discussion of ethics and morality. Although this is not dealt with directly in the book – it would surely require another ten volumes – the framework does reopen awkward questions surrounding morality and ethics. Some self-professed social scientists, nervous that these questions have been passed to us from the world religions, would prefer to do away with any moral and ethical questions. But this was always a fantasy – even the most hardened anti-ethicist, unless they are serving life for serial-killing, has a system by which they determine right from wrong.

All that I have said here is rather abstract, but I do not want to give the impression that the book is: a good portion of it is not. It contains chapters that deal with inflation, profits, income distribution, income determination, financial markets, interest rates, investment and employment. It is not simply a book of methodology but rather one that also tries to provide the basic building blocks of a theory that can be applied to understand really-existing economies. In this sense, I hope that it is again more optimistic than many mainstream economics books, which leave the reader without any capacity to apply the supposed ideas that they have absorbed beyond mere chest-puffing at dinner parties and moral condemnations of the social safety net.



Why the Pollsters Totally Failed to Call a Trump Victory, Why I (Sort Of) Succeeded – and Why You Should Listen to Neither of Us

The views expressed in this article are the author’s own and do not reflect the views of his employer.


The election of Donald Trump as president of the United States will likely go down in history for any number of reasons. But let us leave this to one side for a moment and survey some of the collateral damage generated by the election. I am thinking of the pollsters. By all accounts these pollsters – specifically the pollster-cum-pundits – failed miserably in this election. Let us give some thought as to why – because it is a big question with large social and political ramifications.

Some may say that the polls were simply wrong this election. There is an element of truth to this notion. The day of the election the RCP poll average put Clinton some three points ahead of Trump which certainly did not conform to the victory that Trump actually won. But I followed the polls throughout the election and did some analysis of my own and I do not think that this explanation goes deep enough.

I have a very different explanation of why the pollsters got it so wrong. My argument is based on two statements which I hope to convince you of:

  1. That the pollsters were not actually using anything resembling scientific methodology when investigating the polls. Rather they were simply tracking the trends and calibrating their commentary in line with them. Not only did this not give us a correct understanding of what was going on but it also gave us no real new information other than what the polls themselves were telling us. I call this the redundancy argument.
  2. That the pollsters were committing a massive logical fallacy in extracting probability estimates from the polls (and whatever else they threw into their witches’ brew models). In fact they were dealing with a singular event (the election) and singular events cannot be assigned probability estimates in any non-arbitrary sense. I call this the logical fallacy argument.

Let us turn to the redundancy argument first. In order to explore the redundancy argument I will lay out briefly the type of analysis that I did on the polls during the election. I can then contrast this with the type of analysis done by pollsters. As we will see, the type of analysis that I was advocating produced new information while the type of approach followed by the pollsters did not. While I do not claim that my analysis actually predicted the election, in retrospect it certainly helps explain the result – while, on the other hand, the pollsters failed miserably.


Why I (Sort Of) Called The Election

My scepticism of the US election polling and commentary this year was generated by my analysis of the polls during the run-up to the Brexit referendum. All the pollsters claimed that there was no way that Brexit could go through. I totally disagreed with this assessment because I noticed that the Remain campaign’s numbers remained relatively static while the Leave campaign’s numbers tended to drift around. What is more, when the Leave campaign’s poll numbers rose the number of undecided voters fell. This suggested to me that all of those that were going to vote Remain had decided early on and the voters that decided later and closer to the election date were going to vote Leave. My analysis was borne out in the referendum, but I did not keep any solid, time-stamped proof that I had done it. So when the US election started not only did I want to see if a similar dynamic could be detected but I wanted to record its discovery in real time.
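The kind of diagnostic described above can be sketched mechanically. The snippet below uses invented illustrative numbers – not the actual referendum polling – to show the two signals: Remain’s support staying static while Leave’s drifts, and Leave’s gains coinciding with falls in the undecided share.

```python
# Illustrative sketch with invented poll numbers -- NOT the real Brexit series.
from statistics import pstdev, mean

remain    = [44, 44, 45, 44, 44, 45, 44]   # static: decided early
leave     = [40, 42, 41, 44, 43, 45, 46]   # drifts upwards over time
undecided = [16, 14, 14, 12, 13, 10, 10]   # shrinks as Leave rises

def changes(series):
    """Poll-to-poll changes in a series."""
    return [b - a for a, b in zip(series, series[1:])]

def correlation(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"Remain volatility: {pstdev(remain):.2f}")   # low: support is static
print(f"Leave volatility:  {pstdev(leave):.2f}")    # higher: support drifts
print(f"corr(change in Leave, change in Undecided): "
      f"{correlation(changes(leave), changes(undecided)):.2f}")  # negative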

When I examined the polls I could not find the same phenomenon. But I then realised that (a) it was too far away from election day and (b) this was a very different type of election than the Brexit vote, and because of this the polls were more volatile. The reason for (b) is that the Brexit vote was not about candidates, so there could be no scandal. When people thought about Brexit they were swung either way based on the issue and the arguments. If one of the proponents of Brexit had engaged in some scandal it would be irrelevant to their decision. But in the US election a scandal could cause swings in the polls. Realising this I knew that I would not get a straightforward ‘drift’ in the polls and I decided that another technique would be needed.

Then along came the Republican and Democratic conventions in July. These were a godsend: they allowed for a massive natural experiment, which can be summarised as a testable hypothesis. The hypothesis was as follows: assume that there are large numbers of people who take very little interest in the election until it completely dominates television; assume that these same people will ultimately carry the election but will not make up their minds until election day; now assume that these same people will briefly catch a glimpse of the candidates during the conventions due to the press coverage. If this hypothesis proved true then any bounce that we saw in the polls during the conventions should give us an indication of where these undecided voters would go on polling day. I could then measure the relative sizes of the bounces and infer what these voters might do on election day. Here are those bounces in a chart that I put together at the time:


Obviously Trump’s bounce was far larger than Clinton’s. While it may appear that Clinton’s lasted longer, this is only because the Democratic convention came five days after the Republican convention and so stole the limelight from Trump and focused it on Clinton. This led to his bump falling prematurely. It is clear that the Trump bounce was much more significant. This led me to believe that undecided voters would be far more likely to vote Trump than Clinton on election day – and it seems that I was correct.

In addition to this it appeared that Trump was pulling in undecideds while Clinton had to pull votes away from Trump. We can see this in the scatterplot below.


What this shows is that during the Republican National Convention (RNC) Trump’s support rose without much impacting Clinton’s support – if we examine it closely it even seems that Clinton’s poll numbers went up during this period. This tells us that Trump was pulling in new voters who had either not decided or had until then supported a third-party candidate. The story for Clinton was very different. During the Democratic National Convention (DNC) Clinton’s support rose at the same time as Trump’s support fell. This suggests that Clinton had to pull voters away from Trump in order to buttress her poll numbers. I reasoned that it is far more difficult to convince voters who like the other guy to join your side than it is to convince enthusiastic new voters. You have to fight for those swing voters and convince them not to support the other guy. But the new voters seemed to be attracted to Trump simply by hearing his message. That looked to me like advantage Trump.

“Aha!” you might think, “maybe you’re just faking it. How do I know that you didn’t just create that chart after the election?” Well, this is why I time-stamped my results this time around. Here are the results of my findings summarised on a piece of paper next to a Bloomberg terminal on August 9th.


I also sent this analysis to some of the editors that are handling this piece. So they have this analysis in their email archives and can check to see that I’m not just making this up.

The reader may note that I criticise Nate Silver’s analysis in the text in the picture. I was referring to his post-convention bounce analysis, in which he used the spread between the two candidates to gauge the bounces. This was an incorrect methodology because, as we have already seen, the Democratic convention came during the Trump bounce and ate away at it, which artificially inflated Clinton’s bounce in spread terms. The correct methodology was to consider the two bounces independently of one another while keeping in mind that the DNC stole the limelight from Trump five days after his bounce started and thereby brought it to a premature halt.
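The difference between the two methodologies can be shown with a toy calculation. The numbers below are invented for illustration only – they are not the actual 2016 polling – but they show how a spread-based measure double-counts when one candidate’s convention depresses the other’s numbers.

```python
# Toy illustration with invented numbers -- not the actual 2016 polling.
# Two ways to measure a convention bounce:
#   (1) independently: a candidate's peak minus their own pre-convention baseline;
#   (2) via the spread: the swing in (Clinton - Trump) around the DNC.
# Method (2) double-counts: Trump's slump during the DNC inflates Clinton's bounce.

trump_baseline, trump_peak = 40.0, 46.0  # Trump gains 6 pts during the RNC
clinton_baseline = 42.0
clinton_peak = 46.0                      # Clinton gains 4 pts during the DNC
trump_during_dnc = 41.0                  # Trump slips as the DNC steals the limelight

# Method 1: each bounce measured against the candidate's own baseline
trump_bounce = trump_peak - trump_baseline        # 6.0
clinton_bounce = clinton_peak - clinton_baseline  # 4.0

# Method 2: Clinton's bounce measured as the swing in the spread
spread_before = clinton_baseline - trump_peak            # -4.0, just after the RNC
spread_after = clinton_peak - trump_during_dnc           # +5.0, during the DNC
clinton_bounce_by_spread = spread_after - spread_before  # 9.0 -- inflated

print(f"Independent bounces: Trump {trump_bounce}, Clinton {clinton_bounce}")
print(f"Clinton's bounce measured via the spread: {clinton_bounce_by_spread}")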

This was a bad analytical error on Silver’s part but it is not actually what really damaged his analysis. What damaged his analysis significantly is that he did not pay more attention to this ‘natural experiment’ that was thrown up by the conventions. Rather he went back to his tweaked poll-tracking averages. This meant that while he was following trends I was looking for natural experiments that generated information additional to what the polls themselves provided. This led to Silver and other pollsters becoming trapped in the polls. That is, they provided no real information beyond that contained in the polls.

After this little experiment, as the polls wound this way and that based on whatever was in the news cycle, I constantly fell back on my analysis. What was so fascinating to me was that because the pollsters simply tracked this news cycle through their models their estimates were pretty meaningless. All they were seeing was the surface phenomenon of a tumultuous and scandal-ridden race. But my little experiment had allowed me a glimpse into the hearts of the voters who would only make up their minds on voting day – and they seemed to favour Trump.

Before I move on, some becoming modesty is necessary: I do not believe that I actually predicted the outcome of the election. I do not think that anyone can predict the outcome of any election in any manner that guarantees scientific precision or certainty (unless they rigged it themselves!). But what I believe I have shown is that if we can detect natural experiments in the polls we can extract new information from those polls. And what I also believe I have shown is that the pollsters do not generally do this. They just track the polls. And if they just track the polls then instead of listening to them you can simply track the polls yourself as the pollsters give you no new information. In informational terms pollsters are… simply redundant. That is the redundancy argument.


Why the Pollsters’ Estimates Are So Misleading

Note that while my little experiment gave me some confidence that I had some insight into the mind of the undecided voter – more than the other guy, anyway – I did not express this in probabilistic terms. I did not say: “Well, given the polls are at x and given the results of my experiment then the chance of a Trump victory must be y”. I did not do this because it is impossible. Yet despite the fact that it is impossible, the pollsters do indeed give such probabilities – and this is where I think they are being utterly misleading.

Probability theory requires that in order for a probability to be assigned an event must be repeated over and over again – ideally as many times as possible. Let’s say that I hand you a coin. You have no idea whether the coin is balanced or not and so you do not know the probability that it will turn up heads. In order to discover whether the coin is balanced or skewed you have to toss it a bunch of times. Let’s say that you toss it 1000 times and find that 900 times it turns up heads. Well, now you can be fairly confident that the coin is skewed towards heads. So if I now ask you the probability of the coin turning up heads on the next flip, you can tell me with some confidence that it is 9 out of 10 (900/1000) or 90%.
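The coin example can be sketched directly. This is a minimal frequentist simulation, with the bias of 0.9 chosen to mirror the 900-out-of-1000 example in the text:

```python
# Frequentist sketch of the coin example: estimating P(heads) from repeated
# trials. The probability only becomes estimable because the event repeats.
import random

random.seed(42)          # fixed seed so the sketch is reproducible
p_true = 0.9             # the coin's bias, unknown to the observer
n = 1000
heads = sum(random.random() < p_true for _ in range(n))
estimate = heads / n
print(f"{heads} heads in {n} tosses -> estimated P(heads) = {estimate:.3f}")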

Elections are not like this because they only happen once. Yes, there are multiple elections every year and there are many years but these are all unique events. Every election is completely unique and cannot be compared to another – at least, not in the mathematical space of probabilities. If we wanted to assign a real mathematical probability to the 2016 election we would have to run the election over and over again – maybe 1000 times – in different parallel universes. We could then assign a probability that Trump would win based on these other universes. This is silly stuff, of course, and so it is best left alone.

So where do the pollsters get their probability estimates? Do they have access to an interdimensional gateway? Of course they do not. Rather what they are doing is taking the polls, plugging them into models and generating numbers. But these numbers are not probabilities. They cannot be. They are simply model outputs representing a certain interpretation of the polls. Boil it right down and they are just the poll numbers themselves recast as a fake probability estimate. Think of it this way: do the odds on a horse at a horse race tell you the probability that this horse will win? Of course not! They simply tell you what people think will happen in the upcoming race. No one knows the actual odds that the horse will win. That is what makes gambling fun. Polls are not quite the same – they try to give you a snapshot of what people are thinking about how they will vote in the election at any given point in time – but the two are more similar than not. I personally think that this tendency for pollsters to give fake probability estimates is enormously misleading and the practice should be stopped immediately. It is pretty much equivalent to someone standing outside a betting shop and, having converted all the odds on the board into fake probabilities, telling you that he can tell you the likelihood of each horse winning the race.
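The horse-racing analogy can be made concrete. Converting decimal odds (invented here for illustration) into ‘implied probabilities’ and summing them exposes the bookmaker’s margin – the sum exceeds 1, a reminder that these numbers are prices reflecting opinion, not genuine probabilities:

```python
# Converting bookmakers' decimal odds into 'implied probabilities'.
# The odds below are invented for illustration.
decimal_odds = {"Horse A": 1.8, "Horse B": 3.0, "Horse C": 5.0}

implied = {horse: 1.0 / odds for horse, odds in decimal_odds.items()}
overround = sum(implied.values())

for horse, p in implied.items():
    print(f"{horse}: implied 'probability' {p:.3f}")
# The implied 'probabilities' sum to more than 1 -- the bookmaker's margin
# (the 'overround'). A set of true probabilities would sum to exactly 1.
print(f"Sum: {overround:.3f}")  # prints "Sum: 1.089"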

There are other probability tricks that I noticed these pollsters doing too. Take this tweet from Nate Silver the day before the election. (I don’t mean to pick on Silver; he’s actually one of the better analysts but he gives me the best material precisely because of this).


Now this is really interesting. Ask yourself: Which scenarios are missing from this? Simple:

  1. Epic Trump blowout
  2. Solid Trump win

Note that I am taking (c) to mean that if the election is close or tied Silver can claim victory due to his statement of ‘*probably*’.

Now check this out. We can actually assign these various outcomes probabilities using the principle of indifference: we simply assign them all equal probabilities. That means each scenario has a 20% chance of occurring. Do you see something awry here? You should. Silver has really covered his bases, hasn’t he? Applying the principle of indifference we can see that Silver has marked out 3 of the 5 possible scenarios. That means that even if we have no other information we can say that Silver has a 60% chance of ‘predicting the election’ using this statement. Not bad odds!

What is more, we can actually add in some fairly uncontroversial information. When this tweet was sent out the polls showed the candidates neck-and-neck. Surely this meant that a simple reading of the polls would tell us that it was likely to be a close call. Well, I don’t think it would be unfair to then weight the probability of (c) accordingly. Let’s say that the chance of a close call, based on the polls, was 50%. The rest of the possibilities then get assigned the rest of the probability equally – they get 12.5% each. Now Silver really has his bases covered. Without any other information he has a 75% chance of calling the election based on pure chance.
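The arithmetic in both cases can be checked in a few lines. This simply reproduces the 60% and 75% figures from the text:

```python
# Reproducing the principle-of-indifference arithmetic from the text.
scenarios = ["epic Clinton blowout", "solid Clinton win", "close call",
             "solid Trump win", "epic Trump blowout"]
covered = {"epic Clinton blowout", "solid Clinton win", "close call"}

# Uniform assignment: each of the five scenarios gets 1/5 = 20%
uniform = {s: 1 / len(scenarios) for s in scenarios}
p_covered_uniform = sum(uniform[s] for s in covered)
print(f"Uniform: chance the statement is 'correct' = {p_covered_uniform:.0%}")    # 60%

# Weighted: the close call gets 50%; the other four split the rest equally
weighted = {s: 0.5 if s == "close call" else 0.5 / 4 for s in scenarios}
p_covered_weighted = sum(weighted[s] for s in covered)
print(f"Weighted: chance the statement is 'correct' = {p_covered_weighted:.0%}")  # 75%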

The irony is, of course, he got unlucky. Yes, I mean ‘unlucky’. He rolled the dice and the wrong number came up. Though he lost the popular vote, Trump won the electoral votes needed by a comfortable margin. But that is not the key point here. The key point is that something else entirely is going on in the election-forecasting business than what many people think is happening. What really appears to be going on is that (i) pundits are arbitrarily converting poll numbers into fake probability estimates and (ii) these same pundits are making predictive statements that are heavily weighted towards being ‘probably’ correct – even if they are not conscious that they are doing this. This is really not much better than reading goat entrails or cold reading. Personally, I am more impressed by a good cold reader. The whole thing is based on probabilistic jiggery-pokery. That is the logical fallacy argument.


And Why You Should Listen to Neither of Us

Are you convinced? I hope so – because then you are being rational. But what is my point? My very general point is that we are bamboozling ourselves with numbers. Polls are polls. They say what they say. Sometimes they prove prescient; sometimes they do not. If we are thoughtful we can extract more information by analysing these polls carefully, as I did with my little experiment. But beyond this we can do little. Polls do not predict the future – they are simply a piece of information, a data point – and they cannot be turned into probability estimates. They are just polls. And they will always be ‘just polls’ no matter what we tell ourselves.

But beyond this we should stop fetishizing this idea that we can predict the future. It is a powerful and intoxicating myth – but it is a dangerous one. Today we laugh at the obsession of many Christian Churches with magic and witchcraft but actually what these institutions were counselling against is precisely this type of otherworldly prediction:

The Catechism of the Catholic Church, in discussing the first commandment, repeats the condemnation of divination: “All forms of divination are to be rejected: recourse to Satan or demons, conjuring up the dead or other practices falsely supposed to ‘unveil’ the future. Consulting horoscopes, astrology, palm reading, interpretation of omens and lots, the phenomena of clairvoyance, and recourse to mediums all conceal a desire for power over time, history, and, in the last analysis, other human beings, as well as a wish to conciliate hidden powers.” These practices are generally considered mortal sins.

Of course I am not here to convert the reader to the Catholic Church. I am just making the point that many institutions in the past have seen the folly in trying to predict the future and have warned people against it. Today all we need say is that it is rather silly. Although we would also not go far wrong by saying, with the Church, that “recourse to mediums all conceal a desire for power over time, history, and, in the last analysis, other human beings”. That is a perfectly good secular lesson.

I would go further still. The cult of prediction plays into another cult: the cult of supposedly detached technocratic elitism. I refer here, for example, to the cult of mainstream economics with their ever mysterious ‘models’. This sort of enterprise is part and parcel of the cult of divination that we have fallen prey to but I will not digress too much on it here as it is the subject of a book that I will be publishing in mid-December 2016 – an overview of which can be found here. What knowledge-seeking people should be pursuing are tools of analysis that can help them better understand the world around us – and maybe even improve it – not goat entrails in which we can read future events. We live in tumultuous times; now is not the time to be worshipping false idols.


Achtung! My Book is Coming Out Soon: Here Is a Brief Overview and Some Media Links



Hi everyone – or, at least, whoever is left out there. As you probably know, this blog has been dormant since October 2014 and I have pretty much fallen off the face of the planet. Actually I’ve been working in investment where I’ve found a job that allows me to pursue non-mainstream economic research.

Some of you may recall that I was writing a book during the last days of this blog. I’m happy to say that this book is now fully completed and has been accepted for publication by Palgrave Macmillan. The provisional publication date for the book will be October 2016 and the price will be around £19.50. The book’s title will be: ‘The Reformation in Economics: A Deconstruction and Reconstruction of Economic Theory’.

The book will not be a rehash of material that is available on this blog. I consciously avoided this as I thought that it would be rather boring. So the book is all brand new material. Some of the ideas were thrown around on this blog in more primitive form but I have tried to develop them properly in the book.

The idea for the book is to go right back to first principles. The more I engaged with economic theory the more I found two things.

First of all, much economic theory is in fact ideology. By ‘ideology’ I do not mean something resembling a political ideology – I do not mean ‘socialism’ versus ‘libertarianism’ or ‘left-wing’ versus ‘right-wing’ or anything like that. Rather I mean a manner of structuring how we view the world around us – how we frame things to ourselves and how we understand what is and what is not possible to accomplish. A good example of an ideology is the idea that the world is flat and that if you sail out beyond a certain point you will be devoured by monsters. These sorts of ideas provide a sort of semi-conscious map that we use to engage with the world that has no basis in rationalistic or scientific inquiry.

Secondly, most of the problems with economic theory are actually buried in its very foundations. In the book I argue that much of economic theory is not actually aimed at being applied to the real world and that most economists have never actually thought through how their theory applies to reality. A physicist, for example, will typically have some real-world object that they are trying to understand – say, a ball falling from a building – and they know exactly how to fit their abstract theory onto the phenomenon that they are studying. I do not think that the vast majority of economists have a clear conception of the real-world object that they are trying to approach and I do not think that they have the faintest clue of how they should apply their theory to the real world. In this they typically fall back on institutionalised norms such as econometric testing which they do not really understand.

The aim of the first half of the book is to interrogate these foundations – this is the act of deconstruction alluded to in the title. When we interrogate these foundations much of mainstream economic theory is shown to be entirely irrelevant – nothing more than a series of floating symbols with no parallel in the real world. By understanding this we also form a clear conception of what a good theory, one actually oriented to the real world, would look like.

The second half of the book attempts a reconstruction of what I call ‘stripped-down macroeconomics’. The first half of the book argues that any theoretical edifice that is overly precise or unwieldy will not function when applied to the real world. For this reason, economic theory is much better served by using very simple and clearly understood ideas. These ideas are then thought to serve as schemata – that is, “an organized pattern of thought or behavior that organizes categories of information and the relationships among them” – which can be mapped onto empirical material in order to gain an understanding of the world around us.

There is much else in the book that is dealt with along the way: critiques of the ISLM model; an examination of the different conceptions of equilibrium applied in economic theory; a critique of the EMH view of financial markets; reflections on the use of mathematics in economics; and much more. Although the book attempts to tackle the foundations of economic theory I had no interest in turning it into a dry, abstract tome full of needlessly big words and short on examples.

Anyway, I will be doing some media for the book in the coming weeks and months. I will update this blog post whenever new media appears. If you are interested in following this, just check back here from time to time. Of course, when the book comes out I will also provide links on this blog to purchase it. I have also included the table of contents for the book below.


  1. Philip Pilkington on Determinism and the Reformation in Economics. An interview with Frank Conway on the Economic Rockstar podcast.
  2. INET YSI Seminar Based on a Draft Version of Chapter 4 ‘Methodology, Modelling and Bias’.



The Reformation in Economics: A Deconstruction and Reconstruction of Economic Theory


By Philip Pilkington




Section I: Ideology and Foundations

  1. Economics: Ideology or Rationalistic Inquiry?
  2. The Limiting Principle: A Short History of Ideology in 20th Century Economics
  3. Deconstructing Marginalist Microeconomics
  4. Methodology, Modelling and Bias
  5. Differing Conceptions of Equilibrium

Section II: Stripped-Down Macroeconomics

  1. Theories of Money and Prices
  2. Profits, Prices, Distribution and Demand
  3. Finance and Investment

Section III: Approaching the Real-World

  1. Uncertainty and Probability
  2. Non-Dogmatic Approaches to the Economics of Trade

Conclusion and Appendices

  1. Conclusion
  2. Philosophical and Psychological Appendices
    1. Determinism and Free Will in Economics
    2. Between Personal Responsibility and Poor Theory
    3. Economic Modelling: A Psychologistic Explanation



The International Labour Organisation… Almost Correct


I have an article up on Al Jazeera this week. It may be the last journalistic article I write for some time as I start a new job next week. But this one deserves some brief discussion because the material it deals with is hugely important to the politics of the moment.

In the article I discuss a joint report by the ILO, the OECD and the World Bank. The ILO have clearly spearheaded this one. It has their fingerprints all over it. The ILO are pretty fantastic really. They are one of the only large-scale economic institutions talking sense today. Indeed, in the report the authors make a properly Post-Keynesian case for why the economy is stagnant: that is, it has to do with skewed income distribution and a low marginal propensity to consume among those in whose favour distribution is skewed.

The problem, however, is that, like the trades unions that they represent, the ILO fundamentally buys into the deficit-scaremongering stories. They reflect the party line that we see in social democratic governments across the world: deficits are a Bad Thing and governments should be aiming at winding down their supposedly dangerous debt-to-GDP ratios.

It is ridiculous that centre-left political parties, trade unions and the ILO often take this as their official line. Almost everyone I meet from these organisations knows that it is a pile of silliness. So, why do they spout it in public? Honestly, I think it has to do with appearing as a Very Serious Person in public. There is still a taboo in place that requires people in public to pontificate on the Evils of government debt. Even though a lot of people don’t believe in this moral tale, they have to do it regardless.

After the Second World War the taboo ran the other way: no government official who said that full employment was undesirable would be taken seriously. In order to be taken seriously in public, politicians and economists had to say that the primary economic problem was unemployment. Any scheme seen to generate unemployment was dismissed out of hand.

The question now is how we get back to that. Last century it required a war. If the Great Depression and the current stagnation have taught us anything it is that capitalist democracies do not respond to problems of unemployment through their in-built institutional mechanisms. Even with very high rates of unemployment — say, 10%+ — the people who are unemployed do not make up a large enough voting constituency to force parties to adopt full employment policies. What is more, lacking leadership, this constituency is not at all sure what it should be voting for, as it does not understand the nature of the problem.

Meanwhile, the left-wing and the workers’ organisations are weighed down by the stagnation of ideas to which they have succumbed. The left-wing, still believing that the early 20th century working class make up their key constituency, aim their rhetoric at anyone making upwards of £50,000 or £60,000 a year — when these are the very people they should be trying to win over. Indeed, the left-wing should just shut up about anyone earning less than about £120,000 if they want to sort out their electoral strategy. They would also do well to recognise that income distribution today is not so much to do with salaries as it is to do with asset holdings (CEOs paying themselves in stock options etc.).

In the meantime the labour unions have become over-bureaucratised and subject to ‘educated elite’ opinion through their hiring practices. Their ideology is derived from marginalist economics and typically envisions the union representative — now usually a well-heeled type from a major university who comes equipped with an economics degree — representing workers in a supposedly monopsonistic labour market. Yes, you can thank those lefties who bought into marginalist economics for much of the malaise in the ideology of today’s workers’ movement. Thanks boys!

Anyway, I rarely talk politics on this blog but these are the issues that we have to deal with if we want to get back to the old norms. Oh, and never trust a New Keynesian working in any nominally left-wing institution… ever. They do far more damage than you can possibly imagine. You only have to attend a few leftie trade union economic meetings to get a grip on that very quickly indeed.

Here is my article:

Global economic malaise driven by unemployment and low wages


The Economic Consequences of the Overthrow of the Natural Rate of Interest


For quite a few months I have, on this blog, been alluding to a paper I had written which showed that the natural rate of interest implicitly depends on the strong form of the EMH in order to be coherent. I have finally published this paper (in working paper form) with the Levy Institute and it can be read here:

Endogenous Money and the Natural Rate of Interest: The Reemergence of Liquidity Preference and Animal Spirits in the Post-Keynesian Theory of Capital Markets

Some notes on the paper.

The motivation for the paper was that, when reading up on endogenous money during my degree, I found that mainstream economists had largely integrated it into their more recent models. This integration, as the paper notes, usually took the form of a Taylor Rule. I should be clear that although this had become standard practice at some levels of the discipline, most mainstream economists remained ignorant or confused (the famous Krugman debates were highly illustrative of this). Nevertheless, I found that the mainstream had conceded endogenous money and yet, for some reason, they agreed with Post-Keynesians neither on its theoretical implications nor on important policy issues.
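For readers unfamiliar with the device: a Taylor Rule makes the policy interest rate, rather than the money stock, the central bank's instrument. A minimal Python sketch, using the textbook coefficients and 2% targets (these illustrative values are the standard ones from Taylor's 1993 formulation, not anything specific to my paper):

```python
def taylor_rate(inflation, output_gap, neutral_real_rate=2.0, inflation_target=2.0):
    """Policy rate (in percent) implied by the classic Taylor (1993) rule:
    the neutral real rate plus inflation, adjusted for the inflation gap
    and the output gap with the textbook 0.5 weights."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# With inflation on target and no output gap the rule returns the 'neutral' 4%:
print(taylor_rate(inflation=2.0, output_gap=0.0))  # 4.0
```

The point for what follows is simply that, in these models, money becomes endogenous: the central bank fixes a rate and lets the quantity of money adjust.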

What I found was that they were able to avoid the important implications of endogenous money theory by resurrecting the loanable funds theory in a different way. They did this by effectively becoming neo-Wicksellian and replacing the exogenous money proclamations with the idea of a ‘natural rate of interest’. This device allowed them to keep the rest of marginalist monetary theory intact and served as a justification for the dangerous idea that the economy could be steered to full employment and prosperity through vigilant manipulation of the central bank’s overnight interest rate (I deal with the track record of that dubious policy here).

In my paper I show that such ideas implicitly rely on a strong-form EMH view of capital markets. Think of it this way: the central bank sets a single rate of interest. Piled on top of this rate of interest are countless other rates of interest — the interest rate on mortgages, student loans, junk bonds, and so on. This ‘stack’ of interest rates will be affected by the central bank rate but, and this is crucial, the spreads between the central bank rate and these other interest rates are set by the market. The assumption of mainstream monetary theory is that the market will line each of these rates of interest up with its particular natural rate, so that the natural rate on each type of loan is hit automatically by the market.
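To make the ‘stack’ concrete, here is a minimal sketch; the spread numbers are invented for illustration. The point is only the division of labour it encodes: the central bank sets one rate, the market sets every spread on top of it:

```python
policy_rate = 2.0  # set by the central bank, in percent

# Spreads over the policy rate -- set by the market, not the central bank.
# These figures are purely illustrative.
market_spreads = {
    "mortgage": 1.5,
    "student_loan": 3.0,
    "junk_bond": 5.5,
}

# Each market rate is the policy rate plus its market-determined spread.
market_rates = {name: policy_rate + spread
                for name, spread in market_spreads.items()}

print(market_rates["junk_bond"])  # 7.5
```

The mainstream assumption is that each of these rates lands on its own ‘natural’ level; my argument is that nothing short of strong-form efficiency guarantees that the market-set spreads do so.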

It is clear that what is being assumed here is that the market will price in all relevant information objectively. That, of course, is the EMH view of capital markets and it is one that has been completely refuted and dismissed by all relevant economists since the 2008 financial crisis. But once this falls apart mainstream monetary theory goes out the window with it. What we end up with is Keynes’ own monetary theory; one in which liquidity preference determines interest rates across the markets and animal spirits drive the rate of investment in the economy. These two key economic variables are now subject to the vagaries of human psychology.

I have since had the opportunity to try the argument out on a few very senior economic policymakers and former economic policymakers. The results have been very encouraging. They seem to see instantly the logic of the approach and how much damage it does to the mainstream theoretical underpinnings. They also see that this has massive implications for policy: it completely changes how we should understand central banks to operate and how economic policy should be managed.

No longer should we use the interest rate to steer economic activity. This will not work. In the last boom we saw the interest rates on mortgages remain low even as the overnight rate was rising and we saw animal spirits in the housing market cause overly high rates of unsustainable investment in this market. This is what the theory would predict: using interest rates to steer the economy will only result in speculative excesses and destructive boom-bust cycles.

While I do not outline the policy conclusions in the paper they should be familiar to Post-Keynesians. First, the interest rate should be ‘parked’ at some permanently low level; somewhere between 0% and 2%. Secondly, central banks should have their role changed to (a) providing easy credit policies and (b) regulating excesses in potentially speculative asset and investment markets — I favour Tom Palley’s ABRR proposal here. Thirdly, the currency should be flexible but can be managed, should the need arise, through central bank intervention in the foreign exchange markets. Fourthly, shortfalls and excesses in effective demand should be managed by government expenditure and taxation.

This, of course, outlines an entirely different regime to the present inflation-targeting environment. It is somewhat similar to the post-war arrangements but would probably be more aggressive if implemented with full force.


Noah Smith Fumbles Argument, Endorses Post-Keynesian Endogenous Money Theory


Economists say the darnedest things sometimes. They often say things that are factually inaccurate. Noah Smith put his foot in it recently when he claimed in a Bloomberg article:

It seems like the only people who don’t instinctively believe in credit-fueled growth are academic economists.

Now, this seems odd to me. In the article he notes that Post-Keynesians and Austrians do in fact think that credit fuels economic growth. Given that many of these economists hold academic positions and publish in academic journals, are we to assume that they are not academic economists? We will give Smith the benefit of the doubt here and assume that, rather than belittling his colleagues, he is simply a fuzzy writer who would do well to sharpen his sentences before professional publication.

But when I read that I thought it more than a bit curious. After all, don’t monetarists believe that in the short-run economic growth is dictated by the growth in the money supply? Hey, don’t take my word for it, here is Milton Friedman himself in his famous article ‘A Theoretical Framework for Monetary Analysis‘:

I regard the description of our position as ‘money is all that matters for changes in nominal income and for short-run changes in real income’ as an exaggeration but one that gives the right flavor of our conclusions. (p217)

Or, if that isn’t clear enough for you try this quote from his paper co-authored with Anna Schwartz ‘Money and Business Cycles‘:

There seems to us, accordingly, to be an extraordinarily strong case for the propositions that (1) appreciable changes in the rate of growth of the stock of money are a necessary and sufficient condition for appreciable changes in the rate of growth of money income; and that (2) this is true both for long secular changes and also for changes over periods roughly the length of business cycles. (p53)

Now, we must assume that Smith — being an academic economist — knows the formula for money supply growth. For those readers outside the citadel of academic economics here it is:

Growth in £M3 = Public sector borrowing – non-bank purchases of government debt + bank lending to the private sector + net external private sector inflow – increase in non-deposit liabilities of banks.

You see the terms ‘public sector borrowing’ and ‘bank lending to the private sector’? Yeah… those would be credit growth. And since the monetarists thought that growth in M3 fueled — and, indeed, caused — real GDP growth in the short-run and nominal GDP growth in the long-run, we can only conclude that credit does indeed fuel economic growth in monetaristland. Indeed, it even causes economic growth for the monetarists.
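For the sake of illustration, here is the counterparts identity above worked through with invented figures (in £bn). The arithmetic is trivial, which is exactly the point: credit flows sit right inside the money supply formula:

```python
# Illustrative figures only -- the identity, not the data, is the point.
psbr = 10.0                      # public sector borrowing
non_bank_gilt_purchases = 4.0    # non-bank purchases of government debt
bank_lending_private = 30.0      # bank credit to the private sector
net_external_inflow = 2.0        # net external private sector inflow
non_deposit_liabilities = 3.0    # increase in banks' non-deposit liabilities

m3_growth = (psbr
             - non_bank_gilt_purchases
             + bank_lending_private
             + net_external_inflow
             - non_deposit_liabilities)

print(m3_growth)  # 35.0 -- the bulk of it here is bank lending, i.e. credit
```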

But here is where it gets even weirder: New Keynesians also believe that credit fuels economic growth! One of the defining features of New Keynesian economics is that it believes money is non-neutral in the short-run. You don’t have to be an ivory tower academic economist to figure this out either. You could just check the Wiki page for ‘New Keynesian economics’ which states in no uncertain terms:

New Keynesian economists fully agree with New Classical economists that in the long run, the classical dichotomy holds: changes in the money supply are neutral. However, because prices are sticky in the New Keynesian model, an increase in the money supply (or equivalently, a decrease in the interest rate) does increase output and lower unemployment in the short run. Furthermore, some New Keynesian models confirm the non-neutrality of money under several conditions.

This is what led leading New Keynesian economist Greg Mankiw to state that New Keynesian economics should more properly be called ‘New Monetarist’ economics. You see, if money is non-neutral in the short-run then money growth does fuel real GDP growth in the short-run. And if the key component of money supply growth is credit growth then it follows that credit growth fuels GDP growth in the short-run for New Keynesians! This is all basic stuff that is given on undergraduate macro exams. How on earth can an academic economist like Smith get it so shockingly wrong!?

Well, actually if we examine his article carefully we see that Smith is just not writing clearly and that is what is leading to his confusion. He writes:

Here’s an alternative idea: Maybe credit is a follower, not a driver, of the boom-bust cycle. Maybe credit grows when the economy is growing, because of the need to finance investment, and shrinks when the economy is shrinking, because of the lack of investment. In retrospect, looking at a chart of credit growth vs. GDP growth, it might look like credit caused the cycle, but in fact it was just a passive tag-along.

Um… what!? Smith said earlier that academic economists “don’t instinctively believe in credit-fueled growth” but what he is talking about here is clearly… credit-fueled growth. That is, the growth is caused by other factors and fueled by credit! A car is fueled by petrol but driven by a driver. I am fueled by carbohydrates absorbed through my digestive system but my actions have something to do with mad electrical stuff going on in my brain. Smith’s writing is deeply and chronically confused. He has completely fumbled his entire argument by confusing the terms ‘to fuel’ and ‘to cause’. ‘To fuel’ is not ‘to cause’.

Interestingly, the argument that credit fuels growth but that growth is determined by other factors like, as Smith says, “productivity changes, or changes in monetary policy, or changes in people’s sentiment and animal spirits” is the Post-Keynesian endogenous money argument. Here is a crystal clear statement from Post-Keynesian endogenous money theorist Basil Moore’s classic paper ‘Unpacking the Post-Keynesian Black Box‘:

The evidence suggests that the quantity of bank intermediation is determined primarily by the demand for bank credit. (pp538-539)

There you have it: the roots of Post-Keynesian endogenous money theory where credit/money is an endogenous variable. This is in contrast to, say, the ISLM where money/credit is an exogenous variable.

Smith is confused because, like most mainstream economists, he doesn’t know what he believes any more. Many of these people, for example, believed that the QE programs would drive (not fuel!) economic growth. But they were sorely mistaken. Now you see them fumbling around in the dark. Fortunately, they are arriving at the conclusions that heterodox economists arrived at decades ago. Welcome to the club, Noah, and please try not to insinuate that those academics who came to your own conclusions 40 years ago are not to be included under the heading ‘academic economists’. You may just be being fuzzy in your use of the English language but if this discussion has taught us anything it is that such fuzzy use of language can lead to substantial conceptual confusion.


Keynes’ Theory of the Business Cycle as Measured Against the 2008 Recession


In this post I will explore Keynes’ theory of the business cycle. He discusses his views in Chapter 22 of the General Theory and I think they hold up pretty well today. At the beginning of the chapter he notes that the business cycle — so-called, because it is not really a “cycle” at all despite what Keynes says in the chapter — is a highly complex phenomenon and that we can only really glean some very general features of it.

Keynes opens with a very clear quote on what he thinks to be the key determinant:

The Trade Cycle is best regarded, I think, as being occasioned by a cyclical change in the marginal efficiency of capital, though complicated and often aggravated by associated changes in the other significant short-period variables of the economic system.

Recall that the marginal efficiency of capital (MEC) is basically the expected profitability that investors think they will receive on their investments measured against the present cost of these investments. The key component in the MEC is, of course, investor expectations. Keynes is clear on this and distinguishes himself from those who claim that a rise in the rate of interest is the cause of the crisis. He writes:

Now, we have been accustomed in explaining the “crisis” to lay stress on the rising tendency of the rate of interest under the influence of the increased demand for money both for trade and speculative purposes. At times this factor may certainly play an aggravating and, occasionally perhaps, an initiating part. But I suggest that a more typical, and often the predominant, explanation of the crisis is, not primarily a rise in the rate of interest, but a sudden collapse in the marginal efficiency of capital.

This is extremely perceptive and, I think, entirely correct. A rise in the rate of interest will typically precipitate a recession. In the US, for example, it is well known that when the short-term rate of interest rises above the long-term rate of interest (i.e. when the yield curve is inverted) a recession is likely to follow. (This is probably not, however, the case in other countries.)
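The inversion signal is just arithmetic on the term spread. A trivial sketch, with illustrative rates:

```python
def term_spread(short_rate, long_rate):
    """Long rate minus short rate, in percentage points."""
    return long_rate - short_rate

def yield_curve_inverted(short_rate, long_rate):
    """The curve is 'inverted' when the short rate exceeds the long rate,
    i.e. when the term spread turns negative."""
    return term_spread(short_rate, long_rate) < 0

# A 3-month rate of 5.0% against a 10-year rate of 4.6% -- illustrative numbers:
print(yield_curve_inverted(5.0, 4.6))  # True
```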

But the actual cause of the crisis is, as Keynes says, a collapse in the MEC. Consider the case of the 2008 recession. This recession was initiated by a fall in house prices which led to a fall in housing construction. Below is the number of housing starts plotted against the interest rate.

[Chart: housing starts plotted against the interest rate]

Now Keynes would argue that the causal chain went as follows: interest rates began to rise => the MEC of investors began to fall => eventually the MEC reached a threshold point at which investors stopped building houses. A recession ensued.

This is extremely important because the alternative interpretation is that the interest rate reached a point that it choked off credit demand for new housing. But this is not empirically valid. Take a look at the following chart plotting the same variables in the 1990s.

[Chart: housing starts plotted against the interest rate, 1990s]

In this period we see interest rates rise continuously — and, what is more, from a higher base — and yet housing starts continue to rise. Clearly there is no mechanical relationship between housing starts and the interest rate. So Keynes’ interpretation bears out: for some reason — and we shall not get into it here because it is very complicated — the interest rate rises in 2006 triggered a collapse of the MEC among home-builders.
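To make the MEC concrete: it can be read as the discount rate that equates the present value of the expected returns on an asset with its supply price, i.e. an internal rate of return on *expected* cash flows. A minimal sketch, with invented cash flows; what matters is that the returns are expectations, so the MEC can collapse without any cost or interest rate moving:

```python
def present_value(rate, cash_flows):
    """Discount a list of yearly cash flows at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def mec(supply_price, expected_returns, lo=0.0, hi=1.0):
    """Find the discount rate at which the PV of expected returns equals
    the supply price, by bisection on [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid, expected_returns) > supply_price:
            lo = mid  # PV still too high: the discount rate must rise
        else:
            hi = mid
    return (lo + hi) / 2

# An asset costing 100, expected to pay 60 in each of the next two years:
rate = mec(100.0, [60.0, 60.0])
print(round(rate, 4))  # about 0.13, i.e. a 13% marginal efficiency
```

If expectations sour and the anticipated returns halve, re-running `mec(100.0, [30.0, 30.0])` gives a sharply lower figure: that collapse, not a mechanical interest-rate effect, is Keynes' crisis mechanism.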

What happens next? Keynes says that liquidity preference shoots up quickly after the MEC collapses, the economy enters recession and the capital markets get nervous. He writes:

The fact that a collapse in the marginal efficiency of capital tends to be associated with a rise in the rate of interest may seriously aggravate the decline in investment. But the essence of the situation is to be found, nevertheless, in the collapse in the marginal efficiency of capital, particularly in the case of those types of capital which have been contributing most to the previous phase of heavy new investment. Liquidity-preference, except those manifestations of it which are associated with increasing trade and speculation, does not increase until after the collapse in the marginal efficiency of capital.

The effects of this are actually more difficult to perceive today than they were in Keynes’ time. Today central banks will step in and quickly flood the capital markets with liquidity when liquidity preference rises. Nevertheless, in extreme cases — such as a proper liquidity trap, when the central bank loses control of interest rates — we will indeed see liquidity preference rise and interest rates on risky assets shoot up. This was precisely the case in 2008. Here is a graph showing interest rates on interbank loans shooting up vis-a-vis highly liquid treasury bills (which are money substitutes).

[Chart: TED spread, 2008]

Keynes is quick to emphasise that monetary policy alone will ease interest rates and this may help recovery, but it will not actually provoke the recovery. He writes:

It is this, indeed, which renders the slump so intractable. Later on, a decline in the rate of interest will be a great aid to recovery and, probably, a necessary condition of it. But, for the moment, the collapse in the marginal efficiency of capital may be so complete that no practicable reduction in the rate of interest will be enough. If a reduction in the rate of interest was capable of proving an effective remedy by itself, it might be possible to achieve a recovery without the elapse of any considerable interval of time and by means more or less directly under the control of the monetary authority. But, in fact, this is not usually the case; and it is not so easy to revive the marginal efficiency of capital, determined, as it is, by the uncontrollable and disobedient psychology of the business world. It is the return of confidence, to speak in ordinary language, which is so insusceptible to control in an economy of individualistic capitalism. This is the aspect of the slump which bankers and business men have been right in emphasising, and which the economists who have put their faith in a “purely monetary” remedy have underestimated.

Again, if we turn to the data from the 2008 slump this will prove the case beyond a shadow of a doubt. The following graph maps gross private investment, the unemployment rate and the central bank interest rate.

[Chart: gross private investment, the unemployment rate and the central bank interest rate]

Meanwhile, in the background, the government deficit opened up massively and Congress passed a large stimulus plan. After this investment picked up — very slowly — and unemployment started to fall — again, slowly. Because the stimulus spending did not fully plug the investment gap, six years on we are still not back to where we were in 2008 in terms of employment, and investment has only just clawed back its losses.

Some will point to previous recessions where the interest rate was lowered and investment shot up as proof that monetary policy alone might be sufficient to steer the economy. I would say to them: take a look at the government budget balance. In all the post-war recessions the budget balance opened up — usually through the automatic stabilisers — and it was this that propped up demand. In the absence of this, some of these recessions would likely have become depressions.

Keynes is aware that the Austrians might pick up on his theory and then add their own ideologically motivated analyses of what constitutes ‘good’ and ‘bad’ investments. He makes clear something that I have tried to emphasise in a paper that I will be publishing shortly: we cannot say that the private sector will allocate resources effectively if left alone, because investors are subject to irrational swings of mood and do not engage in rational calculation as the marginalists (and Austrians) assume.

It may, of course, be the case — indeed it is likely to be — that the illusions of the boom cause particular types of capital-assets to be produced in such excessive abundance that some part of the output is, on any criterion, a waste of resources; — which sometimes happens, we may add, even when there is no boom. It leads, that is to say, to misdirected investment. But over and above this it is an essential characteristic of the boom that investments which will in fact yield, say, 2 per cent. in conditions of full employment are made in the expectation of a yield of, say, 6 per cent., and are valued accordingly. When the disillusion comes, this expectation is replaced by a contrary “error of pessimism”, with the result that the investments, which would in fact yield 2 per cent. in conditions of full employment, are expected to yield less than nothing; and the resulting collapse of new investment then leads to a state of unemployment in which the investments, which would have yielded 2 per cent. in conditions of full employment, in fact yield less than nothing. We reach a condition where there is a shortage of houses, but where nevertheless no one can afford to live in the houses that there are.

The final point we should bring out is the policy implications of all this. Keynes favours having the central bank hold down the rate of interest while the government maintains full employment throughout the cycle. He writes:

Thus the remedy for the boom is not a higher rate of interest but a lower rate of interest! For that may enable the so-called boom to last. The right remedy for the trade cycle is not to be found in abolishing booms and thus keeping us permanently in a semi-slump; but in abolishing slumps and thus keeping us permanently in a quasi-boom.

I think that this is overly simplistic but certainly on the right track. We can hold down the general rate of interest so that money is cheap but then have the central bank exercise control over particular rates of interest in markets prone to speculative bubbles by using Tom Palley’s ABRR proposal. In this scheme the central bank reins in overactive investment markets but does not hold responsibility for ensuring that economic growth is maintained continuously. That is the role of fiscal policy.

Personally I think that democracies are seriously flawed and politicians generally stupid and short-sighted. For this reason I would recommend building institutions that automatically open up the fiscal deficit in times of unemployment. Many welfare state institutions do exactly that — and we have these institutions, not politicians, to thank for ensuring that we have not entered a serious depression between 1980 and today. My favourite of such institutions is the Job Guarantee program developed and supported by Abba Lerner, Hyman Minsky and the Modern Monetary Theorists. But I recognise that this should be an open debate.
