## If computational biologists did astronomy…

Here is a cheeky post along the lines of Can a biologist fix a radio? (well worth a read). The discovery that Pluto wasn’t the size we expected got me thinking about how we would have tackled it. So in our case: Can a computational biologist do astronomy?

Our protagonists are a Mathematical / Computational Biologist (MCB), and a Noteworthy Astronomy Scholar with Antiquated ideas (NASA). (If anyone’s initials are MCB, I forgot and genuinely didn’t mean you!)

MCB: I am going to make a model of how Pluto moves under gravity.

NASA: we know that Pluto is subject to gravitational forces, so its orbit depends on its total mass according to Newton’s law of universal gravitation.

MCB: Great, so if I could measure the density and radius of Pluto then I could work out its mass with M = density x volume, and then I could work out the gravitational forces on it.

NASA: Well, if we say it is a uniform sphere, then I suppose so yes.

MCB: Good. I will do some calculations…

MCB: I have varied the radius and density of Pluto and discovered that Pluto can have almost any orbit you like.

NASA: By the way, we have watched the orbit, and it is very consistent.

MCB: Interesting! I will do some more calculations to see which densities and radii it uses.

NASA: Er, we don’t really have any evidence that Pluto is varying its radius and density. It might make sense just to work with its total mass for now.

MCB: That’s all very well, but we don’t have any evidence that it isn’t changing its radius and density all the time do we?

NASA: Er…

MCB: In fact this would make sense, because Pluto’s orbit must be robust to fluctuations in radius and density that occur when space dust hits it stochastically. It probably varies them continuously to adapt to getting hotter when it is closer to the sun too, and all sorts of other things. I will do some more calculations…

MCB: The radius and density of Pluto could take almost any values and still give that orbit! Pluto is robust to changes in its radius and density [as long as they fall near a line that looks a bit like 1/density is proportional to radius^3].

NASA: [You’ll probably find that constant of proportionality is 4/3 pi.] I think we might need new equipment to measure the radius and density; you should probably stick with a simple model with just total mass for now.

MCB: All the evidence we have is completely consistent with Pluto taking a wide range of radii and densities, I can make some interesting predictions for what happens to Pluto in new situations at different radii and densities and publish lots of papers about them.

NASA: Well it might take various values, but we’d better go and check.

[Some time later…]

NASA: We’ve just measured Pluto: its radius and density weren’t what we guessed, but they take constant values!

[The End]

In case you didn’t get my not-so-subtle point: I have read quite a few papers that treat unidentifiability of parameters in an inverse problem as evidence that the parameters really do take a wide range of values, backed up with an argument that they must do so to confer biological robustness, or to explain variability. Biological systems obviously do have to be robust to stochastic transcription, noise and a wide range of conditions. But lack of information about a quantity is not evidence that it varies in reality: certain things are very carefully controlled by numerous feedback mechanisms, e.g. blood temperature. Don’t get me wrong, I think understanding variability and uncertainty is absolutely crucial. But if you are relying on something varying a lot for your results, I would say the onus is on you to design an experiment where you can measure it as well as possible, and to do it lots of times to show how it varies.
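The Pluto version of this unidentifiability is easy to sketch numerically: any (radius, density) pair lying on the curve density = M / ((4/3) π radius³) gives exactly the same total mass, and hence the same orbit, so orbital data alone can never pin both parameters down. A quick sketch in Python (the “true” mass below is made up purely for illustration, not real Pluto data):

```python
import math

M_TRUE = 1.3e22  # kg; an illustrative value, not a measurement


def mass(radius_m, density_kg_m3):
    """Mass of a uniform sphere: M = density * (4/3) * pi * r^3."""
    return density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3


# Every (radius, density) pair on the curve density = M / ((4/3) pi r^3)
# gives exactly the same mass, hence the same orbit: the two parameters
# are unidentifiable from the orbit alone.
for radius in (0.5e6, 1.2e6, 5.0e6):  # metres
    density = M_TRUE / ((4.0 / 3.0) * math.pi * radius ** 3)
    print(radius, density, mass(radius, density))  # mass is always M_TRUE
```

Wildly different radii and densities all reproduce the observations perfectly, which is precisely why the observations say nothing about whether those parameters vary.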

A quick shameless plug for a job we are advertising.

This is a 3 year postdoctoral position to work with me in the Computational Biology Group in the Department of Computer Science, University of Oxford. It would best suit someone with a PhD in a subject such as: Bayesian inference; model selection; model calibration; uncertainty quantification; or optimal experimental design.

The idea will be to apply these kinds of techniques to various problems in electrophysiology modelling, to allow us to begin to assign probabilities to predictions. The post would suit someone who is looking to apply their statistical skills to real problems, and to learn about biophysically-based modelling and its applications for biological problems. No prior experience of modelling these kinds of things is necessary, or even desirable!

More details and a link to the official Oxford job portal can be found on the CS news feed. Please contact me at gary.mirams@cs.ox.ac.uk for any informal enquiries about the post.

## Triangles and Conductances

Just a little post on something that’s really simple, not new, and might seem obvious to some people – but might help when choosing how to explore the effect of altering parameters on action potential model behaviour.

The reason I thought I’d post this is that people have expressed surprise that they don’t see symmetric-sized effects on action potentials when they make a maximal conductance bigger or smaller. By which I mean that adding/subtracting a constant percentage (e.g. +/-50%) to a conductance doesn’t lead to the same ‘balanced’ changes in action potential properties. But I don’t think you should expect that, and why not is all to do with triangles!

Remembering our GCSE maths, the simple equation for a straight line is:
``` y(x) = mx + c. ...(1) ```

Figure 1: The equation of a straight line

Where `m` is the gradient of the line, or `dy/dx`. If you want to consider where the triangle in Figure 1 hits the x-axis (`y = 0`), just re-arrange:
``` x = -c/m ...(2) ```
which is positive in Figure 1 since `m` is a negative gradient.

If you think of a cardiac action potential model, we usually write something that looks like
``` dV/dt = -(g*open_probability*driving_voltage + same for each other current) ```
where the ‘g’ is the maximum conductance of a current, proportional to the number of available ion channels in the cell membrane.

So thinking of the triangle as a simplified action potential*, the gradient is given by `m = dV/dt` and looking at the intercept with the x-axis as the action potential duration (APD), we would write (2) as
``` APD = -c/(dV/dt) = c/(g*stuff) ...(3) ```
(I will drop the ‘stuff’ for simplicity). From this simple equation, you can see that to double APD you would have to halve ‘g’, and to halve APD you would have to double ‘g’. This has some consequences for exploring model behaviour.
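The asymmetry is easy to see numerically. Here is a quick Python sketch of equation (3) with the ‘stuff’ dropped and c = 1, so the baseline APD is 1 at g = 1 (all numbers illustrative):

```python
# Toy version of equation (3): APD = c / g, with c = 1 so that
# baseline APD = 1 when g = 1. Purely illustrative numbers.
def apd(g, c=1.0):
    """APD predicted by the triangle model for maximal conductance g."""
    return c / g


# Symmetric additive changes in g give asymmetric changes in APD:
print(apd(1.8))  # g +80% -> APD ~ 0.556 (about a 45% reduction)
print(apd(0.2))  # g -80% -> APD = 5.0 (a five-fold increase)

# Multiplicative scaling of g gives symmetric changes in APD:
print(apd(2.0))  # g doubled -> APD halved
print(apd(0.5))  # g halved  -> APD doubled
```

So scaling `g` by a factor explores the same relative change in APD in both directions, whereas +/- a fixed percentage does not.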

Figure 2. Left: the effect of altering the gradient by up to +80% (APD can be reduced by up to 45%). Right: the effect of altering the gradient by up to -80% (APD can increase five-fold). Red line is 1/x, black dashed line is the identity line `y=x`. You wouldn’t believe how long it took me to make Matlab do that overlapping shading thing.

In Figure 2 I’ve highlighted a couple of things to note from equation (3). I’ve plotted ‘APD’ as a function of `g` (a plot of `1/x`). If we increase `g` by 80%, by changing its value from 1 to 1.8, we can see the APD is reduced from 1 to 1/1.8, a change of about -45%. Decreasing `g` by 80%, from 1 to 0.2, has a much larger effect, with APD increasing from 1 to 1/0.2 = 5, a five-fold increase (+400%).

Figure 3: scaling a conductance by multiplying/dividing by the same factor allows you to explore the same changes to behaviour.

In Figure 3 I’ve shown an alternative way of exploring the space. By multiplying/dividing by a factor (of 2 in what I’ve shown) we get a symmetric plot, so we can see changes of 2x or 1/2x in APD by exploring the same range in conductance `g`.

By way of a more useful example I’ve varied the calcium current conductance `g_CaL` in the ten Tusscher 2006 (epi) model, and show the results in Figure 4.

Figure 4: An example with ten Tusscher 2006 (epi), in both plots black is the control AP, red is with decreased g_CaL and blue is with increased g_CaL. Left: g_CaL scaled by +/-50%. Right: g_CaL scaled by *2 or /2. Here, in contrast to my advice in this post, I’ve just looked at one pace of simulation to avoid too many other effects coming in to confound this!

Just as we expected from the triangle analogy, we get far more ‘symmetry’ in the response when scaling `g_CaL` rather than changing it by +/- a fixed percentage (not exactly symmetric because the nonlinear parts of the system come into play too, as different currents get affected by the alteration to voltage).

You might decide that biology has set up its machinery in such a way as to make changes in conductances of +/-50% more likely than +100%/-50%, so you are happy seeing the consequences of varying parameters like that, which is fine. But if you want to do an inverse problem, and see which parameters could describe a wide range of APDs, then I’d suggest it makes sense to explore the parameters by scaling instead of percentage changes.
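For what it’s worth, a simple way to explore by scaling is to sample the scale factor uniformly in log space, so that halving and doubling `g` are equally likely. A quick Python sketch (the bound of a factor of 2 either way is just an illustrative choice):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Sample a conductance scale factor f in [1/2, 2], uniformly in log space:
# f = 2^u with u uniform on [-1, 1], so scaling down and scaling up are
# equally likely. The factor-of-2 bound is an arbitrary illustrative choice.
f_max = 2.0
samples = [f_max ** random.uniform(-1.0, 1.0) for _ in range(100000)]

# Roughly half the samples scale g down and half scale it up:
frac_below_1 = sum(s < 1.0 for s in samples) / len(samples)
print(frac_below_1)  # close to 0.5
```

Sampling `g` uniformly on a linear interval like [0.5, 1.5] instead would make large APD increases (which need small `g`) much rarer than the corresponding decreases.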

*Of course, this is pretending there is only one (constant) ion current. But as we can see in the example in Figure 4, the principle governing the possible effects of conductance changes still applies with more currents. Incidentally, none of the above argument needs the triangle to be the whole action potential: if you zoom in on any little bit of the action potential it will look like a triangle, and the same argument therefore applies to any action potential property you care to look at…

## Hypotheses on hypotheses on …

(Subtitle: building on foundations of unknown shakiness?)

In a lot of fields, a new mathematical model is constructed for each problem, and the model is an encoding of the quantitative rules of the hypothesis you are testing. The model is the hypothesis.

But in cardiac electrophysiology, we often use ‘off the shelf’ previously published models (which implicitly encode many ‘hypotheses’ about how each current depends on voltage etc. etc.). Then we use these models to test a completely different sort of hypothesis with a simulation study: perhaps at the cell scale, something like ‘alternans occur at a lower pacing rate when we change action potential model parameters to look like failing hearts’; or perhaps at the tissue scale e.g. ‘discontinuities in tissue properties make spiral waves more likely to break up’. I’ll call these examples of a secondary hypothesis.

The thing is, the secondary hypothesis is usually stated as the sole hypothesis that a particular study/article/paper is testing. Even if we decide that the secondary hypothesis is true in our simulation study, that finding is completely reliant on the underlying primary hypotheses all being true too. These might not ever have been tested (especially because we probably haven’t unpicked exactly what behaviours at the primary level we rely on to see the result at the secondary level).

No answers in this post on how we should deal with this! Just a thought that we need to acknowledge the primary hypotheses are still just hypotheses, there may be competing ones, and it would be nice to figure out (and record!) which ones we’re relying on in which ways… Until we can do this in a sensible way, multiscale model building in a plug-and-play way is going to be… er… hard!

Edit: I first wrote this draft quite a long time ago. I think some uncertainty quantification at the level of the primary hypotheses might be the way to go (assigning probabilities to them, and examining the consequences for secondary hypotheses), but I’m open to comments below!

## Cyclamen purpurascens

Alex Hyde is a friend of mine and a professional photographer, so I put him on a “Yorkshireman’s commission” as he called it (two pints of beer), to capture me something heart-related for the blog header image.

So here it is, and considering he had to go all the way to some woods in Croatia to get it, a bargain I think. It is an Alpine Cyclamen or Cyclamen purpurascens.

Alpine cyclamen photographed by Alex Hyde

Have a look at Alex’s website and follow him on Twitter – particular specialities are amazing close-ups of bugs and camouflage shots. Buy some prints for your friends!

## ODE solving for cardiac action potential models

Just a quick post to highlight a new paper by Jon Cooper, Ray Spiteri and me, which describes the use of Chaste and CellML to solve cardiac action potential model equations.

The paper in Frontiers in Physiology is in a special issue covering Interoperability in Scientific Computing and Health Informatics. If you ever work with action potential models, or indeed any stiff* ODEs, then it is probably worth a read.

The paper aims to:

• Demonstrate Chaste’s capabilities for nice handling of CellML models, and the range of solvers that can be used;
• Show the tolerances/timesteps that are required for (i) stability and (ii) accuracy for each model and solver (and highlight the fact you might get plausible looking nonsense if you only check stability!);
• Present some speed benchmarking on the best ODE solver to use for a wide range of action potential models, in two settings: single-cell (the answer is CVODE); and tissue simulation (the answer is – it depends!). We describe some of the tricks and tips we’ve used to get things as fast as possible (analytic Jacobians, lookup tables, choice of compiler, etc.).

*A loose definition of ‘stiff’: ODEs with different timescales at different times/in different equations.
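To illustrate the stability point with a toy example (pure Python, nothing to do with the Chaste/CVODE machinery the paper actually benchmarks): for the stiff test equation dy/dt = -50y, explicit (forward) Euler is only stable for timesteps below 2/50 = 0.04, whereas implicit (backward) Euler is stable for any timestep.

```python
# Toy stiff test equation dy/dt = -LAMBDA * y, true solution exp(-LAMBDA*t).
LAMBDA = 50.0


def forward_euler(y0, dt, n):
    """Explicit Euler: unstable once dt > 2/LAMBDA (= 0.04 here)."""
    y = y0
    for _ in range(n):
        y = y + dt * (-LAMBDA * y)
    return y


def backward_euler(y0, dt, n):
    """Implicit Euler: solve y_new = y + dt*(-LAMBDA*y_new) each step."""
    y = y0
    for _ in range(n):
        y = y / (1.0 + dt * LAMBDA)
    return y


dt, n = 0.1, 20  # dt well above the explicit stability limit
print(forward_euler(1.0, dt, n))   # blows up (huge magnitude)
print(backward_euler(1.0, dt, n))  # decays towards 0, like the true solution
```

This is the stability half of the story; as the paper stresses, a stable solver can still give plausible-looking nonsense, so the tolerances needed for *accuracy* have to be checked separately.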

## In-silico Action Potential Prediction Web Portal

This post is to announce that we are making our in-silico action potential prediction web portal (AP-predict) available for public use.

The portal is free, you just need to register for an account. We’d be happy to have any feedback on it, to make improvements.

## What does it do?

The portal allows you to integrate information on multiple ion channel screens, using a range of mathematical action potential models (taken from the CellML repository), to see a compound’s expected effect on the whole cellular action potential, as suggested as part of the CiPA initiative. The portal is intended to make these electrophysiology simulations accessible to safety pharmacology teams.

Figure 1: flowchart of action potential prediction portal simulations. Ion channel High Throughput Screening (HTS) data are entered; these are used to calculate conductance changes for the channels in the electrophysiology model; steady pacing is simulated and the resulting action potentials are plotted and analysed.

The portal takes as its input data 50% Inhibitory Concentrations (IC50 – the concentration of a compound which blocks [peak] ion channel current by 50%), for any/each of the commonly-affected cardiac ion channels. The IC50 values are used to calculate a percentage ion channel block at a particular concentration, for each ion channel of interest (by assuming a standard Hill curve describes the full concentration-effect relationship). These percentage blocks are then used to reduce the maximum conductances of the ion channels, and the action potential model is paced to steady state to simulate the effect of the compound on a whole cardiac cell. The process is repeated for a range of concentrations; building up an action potential duration vs concentration curve.
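The block calculation described above can be sketched in a few lines of Python (illustrative only – the portal’s real calculations are done by the ApPredict C++ code; the Hill coefficient of 1 below is just a default assumption):

```python
# Standard Hill equation for fractional block of a current by a compound:
# block = C^h / (IC50^h + C^h) = 1 / (1 + (IC50/C)^h), for concentration C > 0.
def fraction_blocked(conc, ic50, hill=1.0):
    """Fraction of peak ion channel current blocked at concentration 'conc'."""
    return 1.0 / (1.0 + (ic50 / conc) ** hill)


def scaled_conductance(g_max, conc, ic50, hill=1.0):
    """Reduce a maximum conductance according to the fractional block."""
    return g_max * (1.0 - fraction_blocked(conc, ic50, hill))


# At conc == IC50, exactly half the current is blocked, so a (hypothetical,
# illustrative) conductance of 0.1 is scaled down to 0.05:
print(fraction_blocked(1.0, 1.0))         # 0.5
print(scaled_conductance(0.1, 1.0, 1.0))  # 0.05
```

Repeating this for each affected channel, then pacing the action potential model to steady state with the scaled conductances, gives one point on the APD-vs-concentration curve.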

## How does it work?

The portal provides a simple interface to the ApPredict simulation code (a Chaste bolt-on project) which is a C++ command-line program, running on a server here in Oxford. If you are interested you can install it yourself which provides a bit more flexibility, but this web portal provides a simple interface to the most widely used components. The portal also stores the simulation inputs and outputs for easy retrieval later on, and plots the outputs for you. You can download or view the source code for the portal too, which could be useful if you want to run it on your own system e.g. inside a firewall for data security.

## Evaluation and intended use

We have written a user guide which can be found on the About page of the portal, which is recommended reading. We have published a series of papers evaluating the predictive power of these simulations, based on various ion channel IC50 datasets. For more information see the user guide and my list of publications.

## Acknowledgements

The portal was developed by Geoff Williams and Gary Mirams, under our NC3Rs & EPSRC Maths in Toxicology Strategic Award, and a number of EPSRC Impact Acceleration / Knowledge Transfer awards.