HPB, a blog on the “history & philosophy of biology, etc.”, has an interesting piece on underdetermination – the idea that “when faced with contravening evidence, it’s always possible in principle to save some favored hypothesis by making enough revisions in other parts of the system associated with it.” HPB’s theory is that scientists manipulate this to advance their favourite theories. This is relevant to simulation and modelling.
I think it’s fairly common ground that no theory can be conclusively proved, and that people adapt theories, even in the face of contrary evidence, in order to keep them alive. (The increasingly desperate attempts to defend the idea that the sun goes round the earth were good examples of this.)
Robert Skipper, who writes HPB, argues that
“evaluations are made from within a multidimensional framework (see Skipper 2000). There are numerous strategies that cut across social, pragmatic, and epistemic dimensions. The heart of theory assessment is the local evaluation of hypotheses against data, i.e., fit between model and data, independent support for assumptions of the model, and variety of evidence (from Lloyd 1988). But there are lots of others, including more practical experimental assessment strategies such as replication, control, and calibration (from Rudge 1998). And yet others, such as simplicity (e.g., Darden 1991). These strategies are used in different combinations and with different emphases as different strategies are given different weights during the assessment process. And the trade-offs involved in the assessment process can significantly affect claims about the degree of underdetermination and, so, affect the process of choosing a theory from among its rivals. Moreover, when scientists must choose one theory among a set of rivals in an underdetermination situation, there are no independent, objective measurements for weighting specific strategies and combinations of strategies.
With all of that said, scientists can manipulate underdetermination by shifting grounds for assessment and re-directing the course and terms of dispute.”
– this applies just as much to modelling. You can model the same system in several different ways, and the method you choose influences the result. To give a very simple example, look at the French avian flu simulation!
– simulation, massive memory availability, and fast computers allow you to take many more variables (or assessment strategies) into account. (This has been the experience of share-trading simulations: the packages now available can take 40 or more variables into account, whilst the average human trader can only juggle about ten. Can’t recall where I read that.) Would this allow you to minimise underdetermination by trying every possible approach to assessing a theory at once? I know it’s naive to assume you could take every approach into account, but it’s a target to aim for. A sort of huge Monte Carlo simulation.
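To make that Monte Carlo idea concrete, here is a minimal sketch – all the theory names, strategy names, and scores are invented for illustration, not drawn from any real assessment. It samples many random weightings of Skipper’s assessment strategies and counts how often each rival theory comes out on top; if one theory wins under almost every weighting, the choice between them is less underdetermined than it might look.

```python
import random

# Hypothetical scores for two rival theories on three of the
# assessment strategies mentioned above (fit to data, simplicity,
# variety of evidence). All numbers are made up for illustration.
scores = {
    "theory A": {"fit": 0.9, "simplicity": 0.4, "variety": 0.7},
    "theory B": {"fit": 0.7, "simplicity": 0.9, "variety": 0.6},
}
strategies = ["fit", "simplicity", "variety"]

def winner(weights):
    """Return the theory with the highest weighted total score."""
    totals = {
        name: sum(weights[s] * score[s] for s in strategies)
        for name, score in scores.items()
    }
    return max(totals, key=totals.get)

# Monte Carlo step: sample many random weightings (normalised to
# sum to 1) and count how often each theory comes out on top.
random.seed(0)
wins = {name: 0 for name in scores}
for _ in range(10_000):
    raw = [random.random() for _ in strategies]
    total = sum(raw)
    weights = dict(zip(strategies, (r / total for r in raw)))
    wins[winner(weights)] += 1

print(wins)
```

With these made-up numbers, which theory wins depends entirely on how the strategies are weighted – which is exactly the point about there being no independent, objective measurement for weighting them.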
The work of people like David Phillips and Steven Levitt already shows that taking a massive-statistics approach to life can bring out unexpected relationships.