Economic Modelling and Artificial Intelligence: Is Economic Reasoning Always Based on a “Hidden” Model?


There’s a trope one hears from economists all too often when one discusses the usefulness (or uselessness) of models. The argument usually runs like this: the person questioning the use of models points out, for example, that none of the useful predictions of the past X years relied on formal models; the person defending the models then responds that all these predictions were made using models; it was just that the models were never explicitly articulated.

There are a few variations on this trope, but the underlying assumption is always the same: people have, locked inside their heads, models of the world that they apply without even knowing it. The same supposedly goes for economists who think that they work from trained intuition. They are just being naive because they are, in fact, working from a model; it is simply one that remains, as yet, unconscious to them.

Epistemologically, this is a very slippery argument. But rather than get into the nitty-gritty, I am going to point out that this argument has already been played out in computer science, and economists who claim that we all walk around with models in our heads might do well to pay it some attention.

To get an idea of what this debate was all about we have to rewind to the 1960s. At that time research into cybernetics and artificial intelligence (AI) had reached a level of optimism never seen before or since. Yes, there were some — like Stanley Kubrick and Norbert Wiener — who painted a dark picture of where cybernetics and AI might lead, but there was a general consensus that we were heading firmly in the direction of AI, of HAL 9000 and so on.

Then an obscure philosopher named Hubert Dreyfus, working in a tradition of philosophy completely alien to his native MIT, published a paper entitled Alchemy and Artificial Intelligence, which he wrote while working at the RAND Corporation, a hotbed of AI research and a center of US strategic thinking during the Cold War. The paper opened with a sober assessment of the AI movement of the time, written in true RAND style.

Early successes in programming digital computers to exhibit simple forms of intelligent behavior, coupled with the belief that intelligent activities differ only in their degree of complexity, have led to a conviction that the information processing underlying any cognitive performance can be simulated on a digital computer. Attempts to simulate cognitive processes have, however, run into greater difficulties than anticipated. (p.iii)

Dreyfus was expressing a deeply felt skepticism that touched a nerve with the optimists in the AI community. This was probably not helped, of course, by the air of ridicule in the paper, in which Dreyfus compared AI research to alchemy. Dreyfus was quickly isolated from his peers, who attacked both his ideas and his person. His criticisms suffered a setback when he pointed out that a computer could not beat a ten-year-old at chess: in response, AI proponents had Dreyfus play against the Mac Hack chess program and he lost, suggesting that his criticisms had, perhaps, gone too far.

Nevertheless, Dreyfus’ criticisms were not the products of an angry crank. Rather, they were an epistemological attack on the foundations of AI. AI assumed, as Dreyfus notes in the above quote, that people think essentially in terms of symbols and rules — as computers do. Dreyfus, however, came from the phenomenological tradition in philosophy and insisted that this was not the case. He summarised his position in the introduction to the MIT edition of What Computers Still Can’t Do:

My work from 1965 on can be seen in retrospect as a repeatedly revised attempt to justify my intuition, based on my study of Martin Heidegger, Maurice Merleau-Ponty, and the later Wittgenstein, that the GOFAI [Good Old Fashioned AI] research program would eventually fail. My first take on the inherent difficulties of the symbolic information-processing model of the mind was that our sense of relevance was holistic and required involvement in ongoing activity, whereas symbol representations were atomistic and totally detached from such activity. By the time of the second edition of What Computers Can’t Do in 1979, the problem of representing what I had vaguely been referring to as the holistic context was beginning to be perceived by AI researchers as a serious obstacle. In my new introduction I therefore tried to show that what they called the commonsense-knowledge problem was not really a problem about how to represent knowledge; rather, the everyday commonsense background understanding that allows us to experience what is currently relevant as we deal with things and people is a kind of know-how. The problem precisely was that this know-how, along with all the interests, feelings, motivations, and bodily capacities that go to make a human being, would have had to be conveyed to the computer as knowledge — as a huge and complex belief system — and making our inarticulate, preconceptual background understanding of what it is like to be a human being explicit in a symbolic representation seemed to me a hopeless task. (p.xi-xii — Emphasis Original)

Regular readers of this blog will recognise in this my own critique of applied economic modelling and the use of econometric techniques, as well as those of Lars Syll, Tony Lawson and, ultimately, Keynes. The problems are the same. Whereas certain people in the economics community are trying to capture in symbols processes so complex that symbols alone cannot contain them, the AI community were attempting an even more daunting, though not unrelated, task: using symbolic forms to model human consciousness itself. The AI modellers were, in a very real sense, trying to play God.

They basically failed, of course. Today AI research is much more humble and, although many laymen and some futurist types still hold fast to a sci-fi view of the possibilities of AI, most of Dreyfus’ substantive predictions have been vindicated. AI failed spectacularly at mimicking the processes of human consciousness through the manipulation of symbols in computer programs, and the likelihood that this will be achieved at any point in the future is remarkably slim. The AI community have run into the problems that Dreyfus thought they would — problems such as how to simulate non-symbolic reasoning, or what Dreyfus calls “know-how” — and although there is still some optimism, the level of difficulty these problems pose has made the AI community far more cautious in its claims. Much of the investment in boundless visions of the possibilities of AI now appears mere emotionally charged fantasy — backed, undoubtedly, by an all-too-human desire to play at being God.
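To make concrete what “manipulation of symbols” means here, below is a minimal sketch of a GOFAI-style rule system. This is a toy of my own construction; all the facts and rules are invented purely for illustration and come from neither Dreyfus nor the AI literature. On the GOFAI picture, cognition is a store of explicit symbolic facts plus if-then rules applied mechanically until nothing new follows:

```python
# A toy forward-chaining rule system: the GOFAI picture of cognition as
# explicit symbolic facts plus if-then rules. All facts and rules below
# are invented for illustration only.

facts = {"raining", "no_umbrella"}

# Each rule: if every premise is in the fact base, add the conclusion.
rules = [
    ({"raining", "no_umbrella"}, "gets_wet"),
    ({"gets_wet"}, "unhappy"),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# e.g. {'raining', 'no_umbrella', 'gets_wet', 'unhappy'} (set order varies)
```

Dreyfus’ point was precisely that the commonsense background, the know-how involved in walking across a room or playing a guitar, can never be exhaustively enumerated as facts and rules of this kind.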

What can economists learn from this? A great deal, actually. When dealing with economic data we use processes of reasoning that do not conform to systems of symbols — i.e. to models. This is why virtually all interesting and relevant predictions come from intuitive empirical work and why none are generated by applying models. We do not, contrary to what the modellers believe, all carry models around in our heads just waiting to be discovered and applied. And anyone who thinks we do will likely prove sub-par at actual applied work.

Human processes of reasoning are enormously complex and it is very difficult — if not impossible — to get an “outside” or “God’s eye” view of them. Thus, attempting to replicate in models the processes of reasoning inherent in economic thinking will only be useful for didactic purposes — and even then only if students are made aware that these models cannot be directly applied and do not directly simulate how economics is done.

With that in mind I leave you with a nice quote from one of the late Wynne Godley’s students, Nick Edmonds, who does a great deal of modelling but who is nevertheless reflective enough to recognise its limits.

I think it is very important to recognise the limits to what models can do. It is easy to get seduced into thinking that a model is some kind of oracle. This is a mistake. Any model is necessarily a huge simplification. The results depend critically on the assumptions made. However complex and detailed they are, all they really reflect is the theories of the modeller. The model is not revealing any new truth, it is simply reflecting our own ideas, helping us to visualise how a massively complex system fits together. (My Emphasis)
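As a minimal illustration of that point (my own toy example, not one of Edmonds’ models), consider a one-equation Keynesian income-expenditure model. The model’s headline “result”, the multiplier, is nothing but a restatement of the behavioural parameter we assumed going in:

```python
def equilibrium_income(a, investment, b):
    """Solve Y = a + b*Y + investment for Y, i.e. Y = (a + investment) / (1 - b)."""
    return (a + investment) / (1 - b)

# The "finding" moves one-for-one with the assumed propensity to consume b:
for b in (0.5, 0.8, 0.9):
    print(b, equilibrium_income(a=50, investment=100, b=b))
# Output (up to floating-point noise): 300, 750, 1500
```

The computer adds nothing that was not already contained in the assumptions; it merely makes their implications easier to see, which is exactly Edmonds’ point.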

Update: Here is an excellent film about the philosophical tradition that Dreyfus comes from, which explains in far greater detail than I can here why it is wrong to conceive of human thought and action in terms of models. It also features Dreyfus and includes an extensive discussion of the AI debate from about the 14-minute mark on.

Update II: The film has been taken down from YouTube. I’ve linked to the trailer instead. The film is worth seeking out, though.

About pilkingtonphil

Philip Pilkington is a macroeconomist and investment professional, writing about all things macro and investment. Views my own. You can follow him on Twitter at @philippilk.

9 Responses to Economic Modelling and Artificial Intelligence: Is Economic Reasoning Always Based on a “Hidden” Model?

  1. PGB says:

    I really don’t see what Godley’s quotation has to do with anything that’s been said in the post, and I personally disagree with the argument. However, for this discussion there is a lot to be gained from the Methodenstreit of the 1880s. This post is repeating, in more, mmmmm, fundamentalist clothes, the position of the German Historical School. I disagree with it. I have no problems with inductive methods per se, and I am certainly not an expert and have spent almost no time thinking about inductivism versus deductivism and so on and so forth, so my position has no authority behind it beyond my own understanding of the issue. Having said that, in my view a model is a stated relation between two or more “variables” or phenomena or whatever you want to call them. Godley’s quote reaffirms this point. The associations might be contradictory and inconsistent between themselves, which only reflects poor thinking on the part of the modeller, but that does not alter the fact that there is actually a model. Whenever you state a relation between two things, there is a model. There are many factors that can affect the way you formulate this association (in my view, Schumpeter’s “vision” concept reflects almost all I think about the issue), but that does not deny the fact that there is an association. Keynes presented a model in the General Theory, Ricardo presented a model in the Principles of Political Economy, and Kaldor presented his own. What inspires those models, how they came to build them, that’s a large discussion. But they are models. Everybody has models, from the moment they link one thing to another. It might be suggested by the data or by “abstract” reasoning or by hallucination or whatever, but that’s a model. Cheers.

  2. David says:

    I think there are two levels for these models: the individual person and the aggregated behavior of groups. At the level of the individual, these ‘models’ are infernally complex, as you say. But when aggregated together, as is fairly common in economics, why can’t an average-individual model be employed as an approximation? Of course, taking that population-averaged model back to the individual is flawed. The average model is only suited to aggregate behavior or casual description (and a few more entries on a CV).

    • That’s not what I meant. What I was saying was that the idea that even people who don’t use explicit models still have models “in their head” is a stupid argument. It is the same assumption that the AI researchers made when, for example, they assumed that people have a “model” in their head when they walk across a room or play a guitar. It’s a nonsense argument and the old AI research program crumbled because of it.

      • David says:

        To oversimplify, it’s sort of like a scientist assuming everyone thinks like a scientist?

      • Sort of. But I’m not convinced that scientists think in terms of models/symbols etc. If a scientist plays pool, does he do differential equations or does he use different methods of consciousness? I think the answer is pretty obvious.

  3. Francis says:

    This is perhaps tangential, but nevertheless relevant in a wider sense.

    ‘Computers, Minds and Conduct’ by Button, Coulter, Lee & Sharrock is perhaps even better than Dreyfus in this regard. There’s also Peter Winch to consider, and indeed Rupert Read, who takes on a notion that is unfortunately rather prevalent in the social sciences (or studies, as some prefer), namely that everyone has a theory, even if they don’t know it; cf. ‘There is No Such Thing as a Social Science’.

  4. Scott Hedlin says:

    Hi Phil, the only thing that would have made the movie better would have been popcorn-flavored gumbo. I’m going to add an example of theory modeling pertaining to the a priori assumptions of J. Schillinger regarding evolving musical tonal structures. He made observations to suggest that the music of one period could be transformed into that of another historical phase simply by applying principles of expansion or contraction to the most identifiable structural use of intervallic relationships that characterize a work, both its melodic and harmonic content.
    Using this theory one could reconstitute a work written hundreds of years ago and update it to current standards, or reverse the process: a theory for projecting a model forwards or backwards in time. As a student composer the concept was fascinating to me. I tested this theory and found the immediate results not to be as coherent as hoped for.
    I’d arrived at new content variation but the musical sense, by any subjective standard, didn’t hold up. Not yet prepared to admit failure, I looked for a way to explain what didn’t happen, so the effort wouldn’t be a complete waste. I settled on applying an expansion of the time values of durations (velocity of components) and considered the tweaked result listenable enough to produce a computer score of the transformed model.
    Many observations could be made after this type of experiment. One being that as interesting as a priori theories may be, their potential for managing select data only makes sense at the point one subjectively decides it does. This suggests that in addition to the problems created by using selective systems there is also the problem of managing and possibly combining various distinct types of selective orders, each unique in the quality of knowledge and belief they represent and incorporate as information.
    I’ve only recently come back to reading your writing after no longer finding it on NC. It’s challenging to catch up but I’m enjoying it greatly. Thanks.

    • Many observations could be made after this type of experiment. One being that as interesting as a priori theories may be, their potential for managing select data only makes sense at the point one subjectively decides it does.

      I couldn’t put it better myself. But unfortunately people tend to treat such theories as if they were Holy Grails. I don’t think we will ever move away from this, to be honest. People love to build False Idols, no matter what Moses and others might say.
