Please view this talk by Zak Greant. It's an important subject, especially for simulation, though so far there are some areas – even in government – where open source modelling seems to thrive alongside proprietary code.
Greant's talk is a historical survey of the growth of speech and writing over the last 50,000 years. Partly it's to do with freedom (censorship, copyright, translations of the Bible, etc.), but it's also about how software will shape us.
Greant describes the age of literate machines, dating from the Jacquard loom – the first machine to store knowledge and apply it directly – through to the computers that run our lives today.
It's a cool and very elegant statement of his position. I've tried to say the same sort of thing here and here, because the problem applies particularly to simulations, which
(a) by their nature are very complex
(b) are a major part of the social control technology we are developing. So many decisions are played out in virtual worlds before they are implemented in the actual world. When the Bank of England raises UK interest rates because of a simulation, am I in the same position (to use Greant's analogy) as a mediaeval peasant being told by a priest what it says in the Bible? The peasant can't read the Bible, so he has to believe what he is told.
I suppose, to extend my example, the NIESR Global Econometric Model (NiGEM) is partly open source. A series of equations are published here, and, though I don't know enough to understand them, this looks like a reasonably open description. It isn't exactly open-source software (since the model is proprietary and leased out to anyone for a basic £9,000 per year), but at least it looks as if its significant algorithms are exposed.
The counter-argument, of course, is that you can't take major policy decisions based on software that anyone can buy and run, as they might try to anticipate or counteract you. The UK Treasury has its own model, but allows this to be used by an independent group of outsiders.
The Bank of England has a further set of models, but again makes details public; for example, they are described at some length in a book that can be downloaded from the internet. Again, this looks like a full list of equations to me, though there's no actual code.
This looks like a sensible compromise. The officials expose their models to outside experts, the outside experts use their in-house model, and no doubt the two of them learn from and reality-check each other. Many if not all of the key algorithms are published and debatable. I may not know exactly what assumptions the Bank makes when it runs the model, but at least I can examine the modelling.