No, You Can't Build a Big Computer to Model the Financial Markets

This article is more than 10 years old.

I'm afraid this has to come under the title of one of the sillier ways of wasting the taxpayers' money that has been recently advanced. The idea is to build a really big computer model of the financial markets and thus work out where and when they're going to fail next. Sadly, the idea is going to fail itself, hopefully before anyone spends any money on it.

We conclude with an argument that, in the specific case of the global financial markets, there is an urgent need to develop major national strategic modelling and predictive simulation capabilities, comparable to national-scale meteorological monitoring and modelling capabilities. The intent here is not to predict the price-movements of particular financial instruments or asset classes, but rather to provide test-rigs for principled evaluation of systemic risk, estimating probability density functions over spaces of possible outcomes, and thereby identifying potential “black swan” failure modes in the simulations, before they occur in real life, by which time it is typically too late.

Felix tells us that this won't happen because the banks that might finance it won't do so: they don't want to be subject to whatever regulation the findings might suggest. Not a bad political argument, but not actually the correct one to be using.

As I've mentioned elsewhere, the proposal simply will not work anyway.

You cannot model the market because the only method we have of modelling the market is the market itself.

Too much knowledge is local; there is simply too much knowledge to attempt to capture; it is not possible to model something as complex as the financial markets using anything other than the financial markets themselves as the model. If you like: you cannot map the territory, you can only use the territory itself as the map.

Now, if it were just a few programmers playing with a box or two, well, leave them to it perhaps. But quite apart from that not being what will happen (they are already calling for a programme the size of the weather prediction business, supercomputers and all), and thus much more money than that being wasted, the programme itself will introduce horrible uncertainty into the system. For, you see, the regulators and the lawmakers, having spent tens, even hundreds, of millions to map the territory, to model the markets, will think that they actually understand them. That there are no little grey areas, no bottomless pits of ignorance into which they can fall, no relationships or linkages unknown.

And yet we know that this is impossible. They simply cannot manage to create a model which does not have these lacunae: and there's nothing on the planet more dangerous than a politician who thinks that they really do understand something.

Allow me to extend that idea a little bit here. The promoters of the scheme deliberately suggest, themselves, the comparison with meteorological computing. That itself comes in two flavours.

The first is trying to predict the weather. On one scale we can do that pretty well: it'll be colder in winter than in summer, that sort of thing. Although even that's not entirely correct, as Christmas Day in England this year was rather warmer than many an English summer's day. At the computing level that these researchers are talking about, we're not too bad at it over a 4- to 72-hour timespan. After that it all becomes too vague to be directly useful. Weather is simply too large a chaotic system for us to be able to predict it successfully.
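That limit isn't a matter of computing power; it's inherent to chaotic dynamics, where tiny errors in the starting conditions blow up exponentially. A minimal sketch (my illustration, not part of the original proposal) using the logistic map, a textbook chaotic system:

```python
def logistic(x, r=4.0):
    # The logistic map at r = 4 is fully chaotic: nearby starting
    # points diverge exponentially fast.
    return r * x * (1 - x)

x_a = 0.4            # the "true" initial state
x_b = 0.4 + 1e-10    # the same state, measured with a tiny error
max_gap = 0.0
for step in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    max_gap = max(max_gap, abs(x_a - x_b))

# A 1e-10 measurement error grows to order-one divergence within a few
# dozen steps; the detailed forecast is useless long before then.
print(max_gap)
```

This is exactly why weather forecasts degrade after a few days: however good the model, the initial-condition error roughly doubles each step until it swamps the signal, and a market model would face the same arithmetic.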

At the other end we've got what all those climate researchers are trying to do. Tell us how hot it's going to be after pumping another century's worth of CO2 into the atmosphere. At the large scale level this is OK. Obviously, more CO2 will lead to greater heat. But the level of knowledge is still pretty feeble: and you can't code what you don't know.

For example, climate sensitivity: how much warming should we expect from a doubling of CO2? Even the official determination is 2–4.5°C, and there are plenty of people arguing that that's too wide, too high, too low, too narrow and so on. Or feedbacks: we know very well that there are positive feedbacks, things that happen when warming does which then lead to greater warming (there must be, for sensitivity from CO2 alone would be only 0.7°C). We also know that there are negative feedbacks, say, greater plant growth and possibly greater carbon storage in the soil as a result.

Even in the scientific literature, while the general consensus is that total feedback is positive, there are those who aren't quite so sure. The range of views runs from believing that total feedback could be negative (an extreme view, true), through roughly even, to warming increasing only as the logarithm of CO2 levels, to the consensus view of 2–4.5°C per doubling.
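The logarithmic point is worth making concrete. Under the standard forcing approximation, equilibrium warming scales with the logarithm of the CO2 concentration ratio, so each doubling adds the same fixed increment. A quick sketch (my own illustration, using the 2–4.5°C range quoted above; the function name is mine):

```python
import math

def warming(co2_ratio, sensitivity_per_doubling):
    # Equilibrium warming in degrees C for a given ratio of CO2 to the
    # baseline concentration, assuming log2 scaling: each doubling of
    # CO2 contributes the same warming increment.
    return sensitivity_per_doubling * math.log2(co2_ratio)

low  = warming(2.0, 2.0)   # doubling at the low end of the range: 2.0 C
high = warming(2.0, 4.5)   # doubling at the high end: 4.5 C
quad = warming(4.0, 3.0)   # quadrupling at mid-range sensitivity: 6.0 C,
                           # i.e. only twice the doubling figure
```

Which is also why the spread in sensitivity estimates matters so much: that one uncertain parameter multiplies through every projection the model makes.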

Or of course there's the great puzzle of where the last decade's warming has gone. CO2 is up, average temperatures not so much. We're told the answer is that weather has interfered with climate which is just fine although it would have been more believable if we'd been told that 10 years ago.

Now no, I'm not trying to say that such climate modelling is all wet and most certainly not that global warming isn't happening/going to happen. I'm just, rather, trying to point out the problems we have with the modelling of large and complex, chaotic even, systems.

And yes, the global financial markets are as complex, as chaotic, as weather and the climate. That's why these researchers are saying that we need a plan on a comparable scale to meteorological computing to look at the financial markets.

Which is, as the above shows, where the plan goes wrong. We can do meteorological computing in detail for a couple of days into the future. We can do climate modelling out a century or more but there's still a terribly large amount that we don't know and we most certainly don't gain any useful detail (broad brush, quite possibly, but not in detail) from such modelling.

And since the proposal is that we are to model the financial markets in detail some substantial time into the future: it won't work, will it?

Or rather, all this was settled back in the 1920s. It's not about computing power, it's about information collection: the socialist calculation problem, that is. Too much of the knowledge you need is local; you cannot collect it in real time, and therefore you cannot use information you haven't got to model something as complex as an economy, or the financial markets.

It's not just that you can't in practice: you cannot in theory. So the idea rather fails, I'm afraid.