(Subtitle: building on foundations of unknown shakiness?)
In a lot of fields, a new mathematical model is constructed for each problem, and the model is an encoding of the quantitative rules of the hypothesis you are testing. The model is the hypothesis.
But in cardiac electrophysiology, we often use ‘off the shelf’, previously published models (which implicitly encode many ‘hypotheses’ about how each current depends on voltage, and so on). Then we use these models to test a completely different sort of hypothesis with a simulation study: perhaps at the cell scale, something like ‘alternans occur at a lower pacing rate when we change action potential model parameters to look like failing hearts’; or perhaps at the tissue scale, e.g. ‘discontinuities in tissue properties make spiral waves more likely to break up’. I’ll call these examples of a secondary hypothesis.
The thing is, the secondary hypothesis is usually stated as the sole hypothesis that a particular study/article/paper is testing. Even if we decide that the secondary hypothesis is true in our simulation study, that finding is completely reliant on all of the underlying primary hypotheses (the assumptions baked into the published model) being true too. These might never have been tested (especially because we probably haven’t unpicked exactly which behaviours at the primary level we rely on to see the result at the secondary level).
No answers in this post on how we should deal with this! Just a thought that we need to acknowledge the primary hypotheses are still just hypotheses, there may be competing ones, and it would be nice to figure out (and record!) which ones we’re relying on in which ways… Until we can do this in a sensible way, multiscale model building in a plug-and-play way is going to be… er… hard!
Edit: I first wrote this draft quite a long time ago. Now I think some uncertainty quantification at the level of the primary hypotheses might be the way to go (assigning probabilities to them, and examining the consequences for secondary hypotheses), but I’m open to comments below!
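To make the uncertainty quantification idea concrete, here’s a deliberately toy sketch in Python. Everything in it is hypothetical: the named primary hypotheses, their probabilities, and the rule that the secondary conclusion only holds if every primary hypothesis it relies on is true are all made up for illustration, not taken from any real study.

```python
import random

random.seed(1)

# Hypothetical primary hypotheses, each with a subjective probability of
# being true. Names and numbers are invented for illustration only.
primary_hypotheses = {
    "I_Kr formulation is adequate": 0.8,
    "Ca handling formulation is adequate": 0.6,
    "cell-to-tissue coupling assumptions hold": 0.7,
}

def secondary_holds(truth):
    # Toy dependence rule: the secondary conclusion is supported only if
    # every primary hypothesis is true. In a real study you'd work out
    # which primary hypotheses the secondary result actually relies on.
    return all(truth.values())

# Monte Carlo: sample truth values for the primary hypotheses and see how
# often the secondary conclusion survives.
n = 100_000
count = 0
for _ in range(n):
    truth = {h: random.random() < p for h, p in primary_hypotheses.items()}
    if secondary_holds(truth):
        count += 1

print(f"P(secondary conclusion supported) ~ {count / n:.3f}")
# With independence and this all-must-hold rule, the analytic answer is
# 0.8 * 0.6 * 0.7 = 0.336, so the estimate should land near that.
```

Even this caricature makes the point: three individually plausible primary hypotheses leave the secondary conclusion resting on only about a one-in-three chance of solid foundations, and that’s before worrying about how they interact.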