Another short post, just to advertise a postdoctoral research associate (PDRA) position available to work with me. It’s a two year position, based at the Centre for Mathematical Medicine & Biology at the University of Nottingham, with the potential to visit labs around the world to get hands-on experimental experience.

The subject of the postdoc position will be designing new experiments to get as much information as we can on how pharmaceutical compounds bind to ion channels and affect the currents that flow through them. As part of this I would like to explore how to characterise and quantify model discrepancy, and design experiments for that, as well as model selection and parameterisation of the models.

We’ll then use the data generated by these experiments to build models of pharmaceutical drug interactions with ion currents, working with partners in pharmaceutical companies and international regulators to test out these new models. The project will involve learning some of the modelling behind electrophysiology and pharmacology, as well as data science/statistics behind designing experiments and choosing models and parameters. It will build on some of our recent work on novel experimental design, some of which is available in a preprint here.

There’s an effort underway to evaluate, improve and implement mathematical models of cardiac electrophysiology for pharmaceutical cardiac safety testing and regulatory practice. In particular, the aim is to be part of a more mechanistic and accurate in-vitro assessment of pro-arrhythmic risk than the existing human clinical Thorough QT trial. The intended replacement is called the Comprehensive In-vitro Pro-arrhythmia Assay, or CiPA for short.

The plan is for the FDA modelling team to present the work they have been doing to the cardiac modelling community, to get feedback, encourage work in this area, and to network and start new collaborations. Also note there are a handful of speaker slots we are keeping free for late-breaking abstracts – you can email me short abstracts to submit (details here) by 30th September.

Please pass this message on to anyone you think may be interested.

On the 15th March I presented at an FDA public Advisory Committee hearing on the proposals of the CiPA initiative. CiPA aims to replace the current testing for increased drug-induced Torsade de Pointes (TdP) arrhythmic risk, which is a human clinical trial, with earlier pre-clinical testing (including mathematical modelling) that could give a more accurate assessment without the need for a human trial. Most of my talk (available on Figshare) was about the rationale for using a biophysically-based mechanistic model to classify novel compounds’ TdP risk, the history of cardiac modelling, and how simulations might fit into the proposals.

The advisory committee asked some great questions, and I thought it was worth elaborating on one of my answers here. To summarise, quite a few of their questions came down to “Why don’t you include more detail of known risk factors?”. Things they brought up include:

long-short-long pacing intervals are often observed in the clinic prior to Torsade de Pointes starting – why not include that?;

heart failure is a known risk factor – would modelling these conditions help?

Mechanistic markers

Before answering, it’s worth considering where we are now in terms of ‘mechanistic’ markers of arrhythmic risk. Figure 1 shows how things are assessed at the moment.

Fig 1: block of hERG -> prolongation of action potential duration (APD) -> prolongation of QT interval on the body surface ECG. Taken from my 2012 BJP review paper.

It was observed that the risky drugs withdrawn from the market in the late 90s prolonged the QT interval, and for these compounds this was nicely mechanistically linked to block of the hERG channel/IKr (top panel of Fig 1). This all makes nice mechanistic sense – a prolonged QT is related to delayed repolarisation (as in bottom of Fig 1), which in turn is related to block of hERG/IKr.

There are a couple of reasons prolonged repolarisation is conceptually/mechanistically linked to arrhythmia. Firstly, if you delay repolarisation ‘a bit more’ (continuing to decrease the slope at the end of the action potential – middle panel of Fig 1), you’d get repolarisation failure, or after-depolarisations. Secondly, by delaying repolarisation you may cause regions of tissue to fail to be ready for the following wave, termed ‘functional block’.

As a result of the pathway from hERG block to QT prolongation, early ion channel screening focusses on checking compounds don’t block hERG/IKr. The clinical trials tried to avoid directly causing arrhythmias, for obvious reasons, but by looking for QT prolongation in healthy volunteers you would hopefully spot compounds that could have a propensity to cause arrhythmias in unhealthy heart tissue, people with ion channel mutations, people on co-medication with other slightly risky compounds, or other risk factors. This has been remarkably successful, and there have been very few surprises of TdP-inducing compounds sneaking past the QT check without being spotted.

But, there were some hERG blockers on the market that didn’t seem to cause arrhythmias. Our 2011 paper showed why that can happen – there are different mechanistic routes to get the same QT or APD changes (by blocking multiple ion channels rather than just hERG) and if you took multiple ion channel block into account you would get better predictions of risk than simply using the early hERG screening results. So multiple ion channel simulations of single cell APD are a very similar idea to clinical trials of QT (and comparing the two is a good check that we roughly understand a compound’s effects on ion channels).

So clinical QT/simulated APD is a mechanistically-based marker of arrhythmic risk, but we know it still isn’t perfect because some drugs with similar QT prolongation in our healthy volunteers have different arrhythmic risks (see CiPA papers for an intro).

These simulations definitely have their place in helping us understand the origin of TdP, how it is maintained/terminates, and possibly helping design clinical interventions to deal with it. If we want to go the whole hog and really assess TdP risk completely mechanistically, why not do patient-specific whole organ TdP simulations, with mechanics, individualised electrophysiology models, all the known risk factors, the right concentrations of compound, and variations of these throughout the heart tissue, etc. etc.?

Let’s imagine for a minute that we could do that, and got a model that was very realistic for individual patients, and we could run simulations in lots of different patients in different disease states, and observe spontaneous initiation of drug-induced TdP via the ‘correct’ mechanisms (this hypothetical situation ignores the not-inconsiderable extra uncertainties in how well we model regional changes in electrophysiology, what changes between individuals, blood pressure regulation models, individual fibre directions, accurate geometry, etc. etc. etc. which might mean we get more detail but less realism than a single cell simulation!). Let’s also say we could get these huge simulations to run in real time – I think the IBM Cardioid software is about the fastest, and runs at about a third of real-time speed on the USA’s biggest supercomputer.

That would be brilliant for risk assessment wouldn’t it?

Unfortunately not!

TdP is very rare – perhaps occurring once in 10,000 patient-years of dosing for something like methadone – which means ultra-realistic simulations would have to run for 10,000 years just to get an N=1 on the world’s biggest supercomputer! It’s going to be quite a while before Moore’s law makes this feasible…

Inevitably then, we are not looking to model all the details of the actual circumstances in which TdP arises, we’re looking for some marker that correlates well with the risk of TdP.

The other extreme: as simple as possible

The other extreme is perhaps forgetting about mechanism altogether, and simply using a statistical model, based on historical correlations of IC50s with TdP, to assess the risk. Hitesh Mistry did something a bit like this in this paper (although as I’ve said at conferences – it’s not really a simple statistical correlation model, it’s really a clever minimal biophysically based model, since it uses Hill equation and balance of depolarising and repolarising currents!). But for two or three ion channel block it works very well.

Why would I like something a bit more mechanistic than that then? I came up with the example in Figure 2 to explain why in the FDA hearing.

Fig 2: imagine dropping a mass of 1kg off a tower and timing how long it takes to fall to the ground. You might get the blue dots if you did it from between the second and third floors of a building. Left: if you were a statistician only you might then do a linear regression to estimate the relationship between height and time (think ion channel block and TdP risk). Right: a physicist would get out Newton’s II law and derive the relationship on the right. The one on the left would be dodgy for extrapolating outside the ‘training data’, the one on the right would be fairly reliable for extrapolating (not completely – as it doesn’t include air resistance etc.!)
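The point of Figure 2 is easy to reproduce numerically. Here is a sketch (made-up heights, no air resistance): both approaches agree inside the training range, but the straight line badly overestimates the fall time from a hypothetical 100 m tower, while the mechanistic model extrapolates reliably.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

# 'Training data': drop times from heights between the 2nd and 3rd floors
h_train = np.linspace(6.0, 9.0, 10)        # heights in metres
t_train = np.sqrt(2.0 * h_train / g)       # fall times from Newton's second law

# Statistician's model: straight-line fit of time against height
slope, intercept = np.polyfit(h_train, t_train, 1)

# Extrapolate both models well outside the training range
h_tower = 100.0
t_linear = slope * h_tower + intercept     # regression extrapolation
t_physics = np.sqrt(2.0 * h_tower / g)     # mechanistic extrapolation (~4.5 s)
```

Within the 6–9 m training range the two predictions are nearly identical; at 100 m the regression is out by a factor of roughly two.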

Why might ion channel block be like Fig 2? Well when you’re considering just two or three channels being blocked, then Hitesh’s method (which actually includes a bit of mechanistic Newton’s II law in my analogy!) will work very well, assuming it’s trained on enough compounds of various degrees of block of the three channels, as the blue dots will cover most of the space.

But you might want to predict the outcome of block (or even activation) up to seven or more different ionic currents (and combinations thereof) that could theoretically happen and cause changes relevant to TdP risk. In this case, any method that is primarily based on a statistical regression, rather than mechanistic biophysics, is going to struggle because of the curse of dimensionality. In essence, you’ll struggle to get enough historical compounds to ‘fill the space’ for anything higher than two or three variables/ion currents. You could think of the biophysical models as reducing the dimension of the problem here (in the same way as the biology does, if we’ve got enough bits of the models good enough), so they can output a single risk marker that is then suitable for this historical correlation with risk – without a huge number of compounds.
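A toy calculation makes the curse of dimensionality concrete (illustrative numbers only): if you wanted historical training compounds covering, say, 10 distinct block levels per ion current, the number needed to tile the input space grows exponentially with the number of currents.

```python
def compounds_to_tile_space(n_currents, levels_per_current=10):
    # Hypothetical: one training compound per grid point of the block space
    return levels_per_current ** n_currents

# 2 currents: 100 compounds; 3 currents: 1,000; 7 currents: 10,000,000
counts = {n: compounds_to_tile_space(n) for n in (2, 3, 7)}
```

Ten million well-characterised historical compounds is never going to happen, which is the argument for letting a biophysical model collapse the seven block levels into a single risk marker first.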

The right balance?

CiPA is pursuing single-cell action potential simulations, looking for markers of arrhythmic risk in terms of quantifying ‘repolarisation stability’ in some sense. I think this is a very sensible approach, aimed simply at improving one step beyond APD/QT alone.

In terms of including more risk factor details in here, as the committee asked originally at the top of this post, the real question is ‘does it improve my risk prediction?’. Hopefully I’ve explained why including all the detail we can think of isn’t obviously going to help. Your ranking of a new compound relative to the risk of established ones would have to change in order for a new simulated risk marker to make any difference.

To assess whether that difference really was an improved risk prediction, we would need to have faith that the risk factors were widely applicable to the people who get TdP, and that any changes to the models for introducing heart failure etc. are sufficiently well validated and trusted to rely on. I don’t think we are quite ready for this, as there is plenty to do at the moment trying to ensure there is an appropriate balance of currents in a baseline model (before any are blocked – two relevant papers: paper 1, paper 2), and that the kinetics of drug block of hERG are included, as these are probably important.

Another thought along the committee’s lines is TdP risk for different patient subgroups, instead of a one-size-fits-all approach. This would be very nice, but the same difficulties apply, multiplied! Firstly, getting models that we trust for all these subgroups, with well quantified levels/distributions of ion channel expression and other risk-factor-induced changes. Secondly, even sparser gold standard clinical risk categorisation for all subgroups to test our models on. Unfortunately, with such a rare side effect it is difficult enough to get an overall risk level, never mind risk tailored to individual subgroups. So at present, I think the CiPA proposal of a single cell model (give or take an additional stem-cell derived myocyte prediction perhaps!) and single risk marker is a very sensible first step.

Regular readers of this blog will know that I worry quite a lot about uncertainty in the numbers we are using to model drug action on electrophysiology – see our recent white paper for an intro, where we discuss the various sources of uncertainty in our simulations, and a 2013 paper on the variability observed when you repeat ion channel screening experiments lots of times. That paper studied variability in the averages of lots of experiments; in contrast, it is also possible to look at the uncertainty that remains when you just do one experiment (or one set of experiments).

We have been looking at screening data that were published recently by Crumb et al. (2016). This is an exciting dataset because it covers an unprecedented number of ion currents (7), for a good number of compounds (30 – a fair number anyway!). The ideal scenario for the CiPA initiative is that we can feed these data into a mathematical model, and classify compounds as high/intermediate/low arrhythmic risk as a result. Of course, the model simulation results are only ever going to be as accurate as the data going in; I’ve sketched what I mean by this in the plot below.

Figure 1: Uncertainty Quantification. Here we imagine we have two inputs (red, blue) to a nonlinear model, and we observe the resulting model outputs. Top: for just a single value (with no associated uncertainty). Middle: for two well-characterised (low uncertainty) inputs, these give us distinct probabilistic model predictions. Bottom: for two uncertain and overlapping inputs, inevitably giving us uncertain and overlapping output distributions.

In Figure 1 we can see the same scenario viewed three different ways. We have two inputs into the simulation – a ‘red’ one and a ‘blue’ one. You could think about these as any numbers, e.g. “% block of an ion current”.

Top row of Fig 1

If we ignored uncertainty, we might do what I’ve shown on the top row: plug in single values; and get out single outputs. Note that blue was higher than red and gives a correspondingly higher model output/prediction. But how certain were we that red and blue took those values? How certain are we that the output really is higher for blue?

Middle row of Fig 1

Instead of just the most likely values, it is helpful to think of probability distributions for red and blue, as I’ve shown in the middle row of Figure 1. Here, we aren’t quite sure of their exact values, but we are fairly certain that they don’t overlap (this step of working out distributions on inputs is called “uncertainty characterisation”), so their most likely values are the same as the ones we used before in the top row. When we map these through our model* (a step called “uncertainty propagation”) we get two probability distributions. N.B. these output distributions are not necessarily the same shape as the ones that went in, for a nonlinear model. In the sketch the output distributions don’t overlap much either. But this is only guaranteed to be true if the model output is a linear (with non-zero slope!) function of one input; in general, the output distributions could easily overlap more than the inputs [e.g. imagine the model output is a sine wave function of the input with red input = blue input + 2π]; which is why uncertainty propagation is an important exercise! The whole process of uncertainty characterisation and propagation is known as uncertainty quantification.
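The sine-wave example in brackets can be checked with a quick Monte Carlo sketch (arbitrary numbers): the two Gaussian inputs below are separated by 2π, so they are perfectly distinct, yet their output distributions through sin() are indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two clearly separated Gaussian inputs: blue is red shifted by 2*pi
red_in = rng.normal(0.0, 0.3, n)
blue_in = rng.normal(2.0 * np.pi, 0.3, n)

# Propagate each input sample through the nonlinear model
red_out = np.sin(red_in)
blue_out = np.sin(blue_in)

# Input means differ by ~6.28; the output distributions coincide
```

This is exactly the sampling-based propagation described in the footnote below: draw from the input distribution, run the model for each draw, and look at the resulting output distribution.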

Bottom row of Fig 1

In the bottom row of Figure 1 we see another scenario – here there is lots of overlap between the red and blue input distributions. So we think blue is higher than red, but it could easily be the other way around. Now we map these through our model, and get correspondingly large and overlapping probability distributions on our outputs. Even in the best-case scenario (i.e. a linear model output depending solely on this input), we would end up with the same distribution overlap as the inputs; and for non-linear models, where outputs are complicated functions of many inputs, the distributions could overlap much more. So bear in mind that it isn’t possible to get less-overlapping distributions out of the model than the ones that went in, but it is possible to get more overlapping ones. The uncertainty always increases if anything (I hadn’t really thought about it before, but that might be related to why we think about entropy in information theory?).

If we consider now that the output is something like ‘safety risk prediction for a drug’, could we distinguish whether red or blue is a more dangerous drug? Well, maybe we’ll be OK with something like the picture in the middle row of Figure 1. Here my imaginary risk indicator model output distinguishes between the red and blue compounds quite well. But we can state categorically “no” if the inputs overlap to the extent that they do in the bottom row, before we even feed them into a simulation. So we thought it would be important to do uncertainty characterisation and work out the uncertainty in ion channel block that we have from dose-response curves, before doing action potential simulations. This is the focus of our recent paper in Wellcome Open Research**.

Uncertainty Characterisation

In the new paper, Ross has developed a Bayesian inference method and code for inferring probability distributions for pIC50 values and Hill coefficients.

The basic idea is shown in Figure 2, where we compare the approach of fitting a single vs. distribution of dose-response curves:

Figure 2: best fit versus inference of dose-response curves. On the right we have plotted lots of samples from the distribution of possible curves, with each sample equally likely, so the darker regions are the more likely regions. Data from Crumb et al. (2016).

On the left of Figure 2 we see the usual approach that’s taken in the literature, fitting a line of best fit through all the data points. On the right, we plot samples of dose-response curves that may have given rise to these measurements.

The method works by inferring the pIC50 and Hill coefficients that describe the curve, but also the observation error that is present in the experiment. I.e. the ‘statistical model’ is:

data = curve(pIC50, Hill) + Normal noise (mean = 0, standard deviation = σ).

One of the nice features of using a statistical model like this is that it tries to learn how much noise is on the data by looking at how noisy the data are, and therefore generates dose-response curves spread appropriately for this dataset, rather than with a fixed number for the standard deviation of the observational noise.
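Ross’s actual method and code are in the paper; purely to illustrate the statistical model above, here is a minimal random-walk Metropolis sampler (flat priors with positivity constraints, and made-up step sizes) that infers pIC50, Hill and the noise level σ together:

```python
import numpy as np

def hill_curve(conc, pic50, hill):
    # % block of the current at concentration `conc` (in molar)
    ic50 = 10.0 ** (-pic50)
    return 100.0 / (1.0 + (ic50 / conc) ** hill)

def log_likelihood(params, conc, data):
    # data = curve(pIC50, Hill) + Normal(0, sigma) noise
    pic50, hill, sigma = params
    if hill <= 0.0 or sigma <= 0.0:
        return -np.inf
    residuals = data - hill_curve(conc, pic50, hill)
    return -conc.size * np.log(sigma) - 0.5 * np.sum(residuals ** 2) / sigma ** 2

def sample_posterior(conc, data, n_iter=20_000, seed=1):
    # Random-walk Metropolis; step sizes here are illustrative guesses
    rng = np.random.default_rng(seed)
    theta = np.array([6.0, 1.0, 5.0])          # initial pIC50, Hill, sigma
    ll = log_likelihood(theta, conc, data)
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, [0.05, 0.05, 0.2])
        ll_prop = log_likelihood(proposal, conc, data)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        samples.append(theta.copy())
    return np.array(samples[n_iter // 2:])     # discard first half as burn-in
```

Plotting `hill_curve` for many posterior samples of (pIC50, Hill) gives exactly the spread of plausible curves shown in the right panel of Figure 2 – and because σ is inferred too, that spread reflects how noisy this particular dataset is.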

At this point we calculate what the % block of the current (the inputs into the model) might look like from these inferred curves; this is shown in Figure 3:

Figure 3: uncertainty characterisation. When we go to do a simulation we’ll want the “% hERG block” as an input into the simulations. This figure shows how we work that out at a given concentration. Imagine drawing a line vertically in the top plot (same as right plot in Fig 2), now we just look at the distribution along this line, which is displayed in the bottom plot for two concentrations (green and pink-ish).

The new paper also extends this approach to hierarchical fitting (saying data from each experiment were generated with different pIC50 and Hill coefficients), but I will let you read the paper for more information on that.

So what can we conclude about the distribution of inputs for the Crumb dataset? Well, that’s a little bit tricky, since it’s 7 channels and therefore a seven-dimensional thing to examine. To have a look at it, we took 500 samples of each of the seven ion current % blocks. This gives a table where each row looks like:

Drug (1-30) | Sample (1-500) | % block current 1 | % block current 2 | ... | % block current 7 |

So the seven % blocks are the seven axes or dimensions of our input dataset.

I then simply did a principal component analysis, which separates out the data points as much as possible by projecting the data onto new axes that are linear combinations of the old ones. You can easily visualise up to 3D, as shown in the video below.

In the video above you see the samples for each compound plotted in a different colour (the PCA wasn’t told about the different compounds). So each cloud is in a position that is determined by what % block of each channel the compound produces at its free Cmax concentration (given in the Crumb paper). What we see is that about 10 to 12 of the compounds are in distinct distributions, so they block currents in a combination unlike other compounds, i.e. behave like the different inputs in the middle row of Figure 1. But the remaining ones seem to block in distributions that could easily overlap, like the inputs in the bottom row of Figure 1. As you might expect, these are clustered close to the origin (no block) co-ordinate.
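The projection step can be sketched with numpy alone (synthetic samples stand in here for the real table of inferred % blocks):

```python
import numpy as np

def pca_project(block_samples, n_components=3):
    # Centre each column (% block of one current) and project onto the
    # directions of greatest variance, found via the SVD
    X = block_samples - block_samples.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T               # coordinates on new axes
    explained = S ** 2 / np.sum(S ** 2)            # fraction of variance per axis
    return scores, explained[:n_components]

# Synthetic stand-in for the real table: 500 samples x 7 currents per drug
rng = np.random.default_rng(0)
samples = rng.normal(50.0, 5.0, size=(500, 7))
scores, explained = pca_project(samples)
```

The `explained` fractions are what gives the “first three components describe 94% of the variation” figure mentioned below.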

Here, these first three principal components happen to describe 94% of the variation in the full 7D dataset, so whilst it is possible that there is some more discrimination between the ‘clouds’ of samples for each compound in higher dimensions, your first guess would be that this is not likely (but this is a big caveat that needs exploring further before reaching firm conclusions). So it isn’t looking promising that there is enough information here, in the way we’ve used it anyway, to distinguish between the input values for half of these compounds.

Uncertainty Propagation

But once you’ve got this collection of inputs you can simply do the uncertainty propagation step anyway and see what comes out. We did this for a simple O’Hara 1Hz steady state pacing experiment, applying conductance block according to the video’s cloud values for each of the 30 compounds, to work out predicted output distributions of APD90 for each compound. The results are shown in Figure 4.

Figure 4: distributions of simulated APD90 in the O’Hara model (endocardial variant) at free Cmax for all 30 compounds in the Crumb dataset, using samples from the inferred dose-response curves (Figure 15 in the paper). Random colours for each of the 30 compounds.

Figure 4 is in line with what we expected from the video above, so it gives us some pause for thought. Perhaps five compounds have distinctly ‘long’ APD, and perhaps two have distinctly ‘short’ APD, but this leaves 23 compounds with very much overlapping ‘nothing-or-slight prolongation’ distributions. The arrhythmic risk associated with each of these compounds is not entirely clear (to me!) at present, so it is possible that we could distinguish some of them, but it is looking a bit as if we are in the scenario shown in the bottom right of Figure 1 – and this output overlaps to such an extent that it’s going to be difficult to say much.

So we’re left with a few options:

Do more experiments (the larger the number of data points, the smaller the uncertainty in the dose-response curves) – whether narrower distributions would allow us to classify the compounds according to risk remains to be seen.

Examine whether we actually need all seven ion currents as inputs (perhaps some are adding noise rather than useful information on arrhythmic risk).

Get more input data that might help to distinguish between compounds – the number one candidate here would be data on the kinetics of drug binding to hERG.

Examine other possible outputs (not just APD) in the hope that they distinguish drugs more, although in light of the closeness of the input distributions, this is perhaps unlikely.

So to sum up, it is really useful to do the uncertainty characterisation step, to check that you aren’t about to attempt something impossible or extremely fragile. I think we’ve done it ‘right’ for the Crumb dataset, and it suggests that we will need more, or less(!), or different data to distinguish pro-arrhythmic risk. Comments welcome below…

Footnotes:

* The simplest way to do this is to take a random sample of the input probability distribution, run a simulation with that value to get an output sample, then repeat lots and lots of times to build up a distribution of outputs. This is what’s known as a Monte Carlo method (using random sampling to do something that in principle is deterministic!). There are some cleverer ways of doing it faster in certain cases, but we didn’t use them here!

** By the way: Wellcome Open Research is a new open-access, open-data journal for Wellcome Trust funded researchers to publish any of their findings rapidly. It is a pre-print, post-publication review journal, so reviewers are invited after the manuscript is published and asked to help the authors revise a new version. The whole review process is online and open, and it follows the f1000research.com model. So it’s something I’m happy we’ve supported by publishing this work in the first issue.

This is quite a long post elaborating on a bit of work I’ve been doing with Greg Morley and his group, which was published in Scientific Reports. I hope it provides an accessible introduction to a fairly complicated set of experiments and simulations.

Background

Ablation is a treatment for some abnormal heart rhythms, which works by killing areas of the heart that we think are misbehaving and giving rise to abnormal rhythms. The idea behind ablation is that if you identify and kill a region of tissue that is allowing electrical signals to propagate ‘the wrong way’, then afterwards you will have created a non-conducting wall of scar tissue that prevents any abnormal electrical activity. As a treatment strategy, there’s clearly room for improvement; I hope we’ll soon look back on it as Dr McCoy does for dialysis in the scene in Star Trek IV. But at the moment, it’s the best we’ve got for certain conditions.

Usually, and ideally, you stop ablating when the arrhythmia terminates, so it is very effective in the short term. Unfortunately the long-term success rate isn’t great – after a single round of atrial ablation, only about 50% of people stay arrhythmia-free for at least 3 years^{1}. So why do people relapse after ablation? Well, it can be because electrical waves somehow get through the scar regions (see ^{1} for further references on this).

You also get similar scars forming after a heart attack – in this case the cardiac muscle itself doesn’t get a blood supply, and dies because of that, leaving a scar region. This damage leads to scars with larger border zones than in ablation, as these border zones have low but non-zero blood supply, so you get semi-functioning tissue where some of the muscle cells (myocytes) survive and others don’t. The problem now is the opposite of ablation – you’d like to restore conduction in these regions to get the rest of the heart beating as well as it can, but electrical activity is disturbed by the presence of these scar regions and their border zones, and arrhythmias become more likely (or even occur instantly in the case of big heart attacks).

Greg works on optical mapping for whole hearts in various species to study what happens around these scars. This is a clever technique where you put a dye into the tissue; it sticks to the membrane of cells and, when excited by an external light source, emits light at an intensity dependent on the voltage across the cell membrane. So you can excite the dye with light at one wavelength, and record the light emitted at another wavelength with a camera to create a map of electrical activity in your sample – and you can do this in real time to see how electrical waves move across the heart. Here’s a simulation of electrical activity that Pras did with our Chaste software; it includes a comparison with optical mapping at 39 seconds in the video below (this is for a completely different situation – just to show you how optical mapping works!):

Experimental Results

Greg found that in some intact hearts from mice recovering from ablation he could observe the optical mapping signal appearing to go straight through the middle of the scars. This is unexpected, to say the least, and required a lot of further investigation. I’ll let you read the full story in the paper, but suffice to say we confirmed that:

there aren’t any myocytes (normal heart muscle cells) left in the scars (shown by doing histology and using an electron microscope), and there are lots of cells left there that are largely fibroblasts – the cells in the scars are definitely ‘non-myocytes’ anyway;

the observations aren’t just optical mapping artefacts (shown by doing direct micro-electrode recordings, showing that local electrical pulses from a suction electrode diffuse in, and even putting black foil in the middle of the hearts!);

the observations depend on electrical coupling between myocytes and fibroblasts, as the effect goes away if you remove this (shown by specifically breeding a strain of mice where you could knock out Connexin43 in just the non-myocytes – clever stuff!). In fibroblasts, Connexin43 is thought to form gap junctions electrically coupling them only to myocytes, not to other fibroblasts. If you knock out Connexin43 proteins in the non-myocytes with this special mouse strain, then the optical mapping signal you record is much smaller in the scar region, and the electrical signal no longer diffuses in from a suction electrode, indicating that fibroblast-myocyte junctions are required to see conduction into the scar.*

To quote from the article, this is “the first direct evidence in the intact heart that the cells in myocardial scar tissue are electrically coupled to the surrounding myocardium and can support the passive conduction of action potentials.”

Where mathematical modelling comes in

When Greg presented the experimental results at conferences he had a hard time convincing people that the experimental results were real, and not some odd artefacts of the experimental set up! It seemed that nobody expected electrical waves to travel through these regions, even if they were populated with cells, as these cells weren’t cardiac myocytes. It was counter-intuitive to most cardiac electrophysiologists that any signal would get across this gap, because that’s the whole point of ablation! This is where we thought it would be quite interesting to see what you would expect, from the standard model of electrical conduction, if there are neutral non-myocyte cells in a lesion that are coupled together. By neutral we mean not providing any ‘active’ transmembrane currents, and just providing passive electrical resistance.

Our standard simplest model for the reaction-diffusion of voltage in cardiac tissue is the monodomain equation (so-called because there’s a two-domain extension called the bidomain equation):
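In standard form, consistent with the term definitions that follow, the monodomain equation is:

```latex
\chi \left( C_m \frac{\partial V}{\partial t} + I_{\text{ion}} \right)
  = \nabla \cdot \left( \sigma \nabla V \right) + I_{\text{stim}}
```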

There are a few terms here that need defining: V is the voltage across the cell membrane – which is the quantity that we consider to be diffusing around, rather than the individual types of ions themselves. I_{ion} is the current across a unit surface area of the cell membrane at any point in space (it varies across space), I_{stim} is any external stimulus current which is applied to a unit volume of the tissue, C_{m} is the capacitance of a unit surface area of the membrane (the bigger this is, the more current is required to charge up a bit of membrane the same amount), and σ is the conductivity of the tissue – how easy it is for voltage to diffuse (expressed as a tensor, as there can be preferential conduction in different directions). χ is a scaling factor of surface-area-of-membrane-per-unit-volume-of-tissue, required to change the current across a unit-of-surface-area of membrane into current across all-the-surface-area-of-membrane-in-a-unit-volume of tissue. So you can think of χ as the amount of membrane packed into a little cube of the tissue.

So, what will be different in the lesion?

𝜎, the cell–cell conductivity, could be varying. This would represent how much connexin was present to link the lesion cells together with gap junctions.

𝜒, the surface area of membrane in a unit volume of tissue, could also be varying. This would represent the non-myocyte membrane density in the lesion.

𝐶_{𝑚}, the capacitance of a unit of membrane area, probably won’t change (membrane made of same stuff). Beware though, a maths/biology language barrier means experimentalists might call 𝜒 ‘capacitance’ to confuse us all!

I_{ion} will be small compared with that of myocytes, as these cells aren’t actively trying to propagate electrical signals. So we did the study twice: once with this set to zero in the scar, and once with it set to a previously published model of fibroblast electrophysiology (which didn’t make any visual difference to the results).

Now in the scar region we aren’t applying an external stimulus, so I_{stim} = 0, and we can divide through by χ to get:

$$C_m \frac{\partial V}{\partial t} + I_{ion} = \nabla \cdot \left( \frac{\sigma}{\chi} \nabla V \right).$$

A nice property springs out – the entire behaviour of the scar region (in terms of its difference from the rest of the heart) is determined by the ratio σ/χ. So we introduce a factor ρ that scales this quantity relative to its value in normal tissue: σ/χ = ρ (σ/χ)_normal.

Even before we run any simulations, we’ve learnt something here by writing down the model and doing this “parameter lumping” (see nondimensionalisation for a framework in which to do this kind of thing rigorously!). Just by looking at ρ = 1 we see that there are an infinite number of ways we could get exactly the same behaviour. The scar cells could be just as well coupled (𝜎) and just as densely packed (χ) as the rest of the heart (incredibly unlikely to Greg), or they could be 1% as well coupled with 1% as much membrane present (more plausible to Greg); before we even do a simulation we can state definitively that this would give exactly the same behaviour: as ρ = 1 in both cases, and we’re solving the same equations! So this scaling is interesting, as we didn’t know whether the value of ρ would be increased or decreased in the scar, despite strong suspicion that the cells in the scar will have both reduced gap junctions and reduced membrane surface area available.
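This invariance is easy to check numerically. Below is a minimal sketch (my own toy example, not the Chaste code used for the paper): a passive 1D cable solved with explicit finite differences, run once with ‘normal’ σ and χ, and once with both scaled down to 1% of normal so that ρ is unchanged. The parameter values are illustrative rather than physiological.

```python
import math

def simulate_passive_cable(sigma, chi, c_m=0.01, n=50, dx=0.1, dt=0.001, steps=200):
    """Explicit finite-difference solve of chi*C_m*dV/dt = d/dx(sigma dV/dx)
    on a passive (I_ion = 0) cable with zero-flux ends."""
    v = [1.0 if i < n // 2 else 0.0 for i in range(n)]  # step initial condition
    d = sigma / (chi * c_m)  # lumped coefficient: only the ratio sigma/chi matters
    for _ in range(steps):
        new = v[:]
        for i in range(1, n - 1):
            new[i] = v[i] + dt * d * (v[i - 1] - 2.0 * v[i] + v[i + 1]) / dx**2
        new[0], new[-1] = new[1], new[-2]  # zero-flux boundaries
        v = new
    return v

# 'Normal' parameters, and both sigma and chi at 1% of normal: identical rho.
v_normal = simulate_passive_cable(sigma=1.0, chi=100.0)
v_scaled = simulate_passive_cable(sigma=0.01, chi=1.0)
print(all(math.isclose(a, b) for a, b in zip(v_normal, v_scaled)))  # True
```

Scaling σ and χ together leaves the lumped coefficient σ/(χC_m) unchanged, so the two solves follow identical trajectories – exactly the parameter-lumping argument above.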

So we did a simulation with a mouse ventricular myocyte model, on a realistically-sized piece of tissue (5 mm × 5 mm) with a 2 mm diameter lesion in the middle. There are no preferential fibre directions here, something that could easily make conduction more or less likely.

So what behaviours did we predict with different values of ρ? First, ρ = 1, where σ/χ takes the same value as in normal tissue (even though both σ and χ could be reduced/increased by any common factor):

In the video we see that conduction does indeed ‘get across’ the scar, even though it is not ‘driven’ as it is in the rest of the tissue, but instead simply diffuses across that region. We predict the wave will get across a 2 mm scar easily with this value of ρ.

What about with more membrane (or less conductivity): ρ = 0.1? (Remember this would happen whenever σ/χ is ten times less than in the normal muscle: for instance, when σ is 1% of normal and χ is 10% of normal, or when σ is 0.1% of normal and χ is 1% of normal.)

This time, you can see that the wave struggles to get across the lesion, but the membrane voltage still rises appreciably – probably still high enough to record in optical mapping.

What about less membrane density relative to the well-coupled-ness of the lesion cells (ρ = 10)? (Remember this would happen whenever σ/χ is ten times more than in the normal muscle: for instance, when σ is 10% of normal and χ is 1% of normal, or when σ is 1% of normal and χ is 0.1% of normal.) We see that the wave even appears to accelerate across the lesion region, and conduction sets off earlier than before at the far side, as there is less membrane to charge so it is easier to do so:

So our conclusion was that it’s perfectly possible that a voltage signal could be recorded in the lesion, that a voltage signal could effectively travel straight through the scar, and that conduction could carry on out the other side. We’re not entirely sure about the value that ρ should take – but this behaviour was fairly robust, and matched what we saw in the experiments.

This simple model predicted many of the features of the recordings that were made from the scar region – see Figure 7 in the paper, and compare with experiments in Figure 4. So, it helped Greg answer accusations of “This is impossible!” that he got when he presented stuff at conferences, as he could reply that “If the cells have gap junctions, even without any voltage-gated ion channels of their own, this is exactly what we’d expect – see!”.

As usual, all the code is up on the Chaste website if anyone wants to have more of a play with this.

References

1 “Long-term Outcomes of Catheter Ablation of Atrial Fibrillation: A Systematic Review and Meta-analysis” Ganesan et al. J Am Heart Assoc.(2013)2:e004549 doi:10.1161/JAHA.112.004549

*Incidentally, I’d like to do a sociology experiment where I give biologists and mathematicians logic problems to do mentally. My hypothesis is that biologists would beat mathematicians, as they are always carrying around at least ten “if this then that” assumptions in their heads. Then I’d give them a pen and paper and see if the situation reversed…

When you are expressing how much a drug inhibits something, it’s common to fit a Hill curve through a graph of concentration against % inhibition as shown here:

Figure 1: A concentration-effect curve showing % inhibition as a function of drug concentration. The Inhibitory Concentration 50% value is simply defined as the concentration that causes 50% block, here I’ve expressed it as the ratio of the drug concentration over IC50, so IC50 is defined to be 1 here. Notice we’ve plotted this on a log scale, which perhaps answers the question in the title already…

In our case this is often ‘% inhibition’ for a given ionic current, a consequence of a drug molecule binding to, and blocking, ion channels of a particular type on a cell’s membrane.
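As a quick sketch of the curve in Figure 1 (the function name is mine, and defaulting to a unit Hill coefficient is just a convenient choice):

```python
def percent_inhibition(conc, ic50, hill=1.0):
    """Hill curve: % inhibition of a current at drug concentration conc
    (in the same units as ic50)."""
    return 100.0 * conc**hill / (conc**hill + ic50**hill)

print(percent_inhibition(1.0, 1.0))             # 50.0 - by definition, at conc == IC50
print(round(percent_inhibition(10.0, 1.0), 1))  # 90.9
```

Plotting this against log(conc/IC50), as in Figure 1, gives the familiar sigmoid centred on 50% block at the IC50.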

A while back we were interested in what the distribution of IC50s would be if you repeated an experiment lots and lots of times. We asked AstraZeneca and GlaxoSmithKline if they had ever done this, and it turned out both companies had hundreds, if not thousands, of repeats, as they run positive controls as part of their ion channel screening programmes. We plotted histograms of the IC50 values and Hill coefficients from the concentration–effect curves they had fitted, and found these distributions:

Figure 2: Histograms of pIC50s and Hill coefficients fitted to over 12,000 repeats of a screen examining whether cisapride inhibits the hERG potassium channel current. The red lines are fitted distributions: logistic for the pIC50s and log-logistic for the Hill coefficients.

After some investigation, we found the standard probability distributions that both the IC50s and Hill coefficients from these experiments seemed to follow very well were Log-logistic.

Now, where do pIC50s come in? A pIC50 value is simply a transformation of an IC50 (expressed in molar), defined with a logarithm as:

$$\mathrm{pIC50} = -\log_{10}\left(\mathrm{IC50}\,[\mathrm{M}]\right).$$

At this point it is good to note that logarithms to the base 10 (also known as common logarithms) were invented by Henry Briggs who, like me, was born in Halifax!

The ‘pIC50’ is analogous to ‘pH’ in terms of its negative log relationship with hydrogen ion concentration, which is more familiar to most of us.
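In code the transformation is a one-liner either way (helper names are mine; note the IC50 must be expressed in molar, so an IC50 of 1 μM corresponds to a pIC50 of 6):

```python
import math

def pic50_from_ic50_molar(ic50_m):
    """pIC50 = -log10(IC50), with the IC50 in molar."""
    return -math.log10(ic50_m)

def ic50_molar_from_pic50(pic50):
    """Inverse transformation: back to an IC50 in molar."""
    return 10.0**(-pic50)

print(round(pic50_from_ic50_molar(1e-6), 6))  # 6.0 - a 1 uM IC50
```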

The logarithm means that pIC50s end up being distributed Logistically, as:

IC50 ~ Loglogistic(α, β) ...(1)

implies that

pIC50 = −log10(IC50) ~ Logistic(μ, σ), ...(2)

with μ = −log10(α) and σ = 1/(β ln 10),

as shown below:

Figure 3: Logistic pIC50 and Log-logistic IC50s. Histogram of samples from the same simulated distribution pIC50 ~ Logistic(6, 0.2), with red lines indicating fitted Logistic and Log-logistic distributions (see note on implementation below!).
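You can check this relationship with a quick standard-library-only simulation, sampling the logistic distribution by inverting its CDF (the numbers match Figure 3’s pIC50 ~ Logistic(6, 0.2); the sample size and seed are arbitrary):

```python
import math
import random

random.seed(42)
MU, SCALE = 6.0, 0.2  # pIC50 ~ Logistic(6, 0.2), as in Figure 3

def sample_logistic(mu, s):
    u = random.random()
    return mu + s * math.log(u / (1.0 - u))  # inverse-CDF sampling

pic50s = sorted(sample_logistic(MU, SCALE) for _ in range(100_000))
ic50s = sorted(10.0**(-p) for p in pic50s)  # these follow a Log-logistic

# The median survives the monotone transformation, so the sample medians agree:
print(round(pic50s[50_000], 1))              # 6.0
print(round(-math.log10(ic50s[50_000]), 1))  # 6.0
```

Histogramming `pic50s` and `ic50s` reproduces the symmetric and skewed shapes of Figure 3, respectively.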

You can see how nicely these distributions work across a lot of different ion currents and drug compounds in the Elkins et al. paper supplement.

More recently, we ran into an interesting question: “Given just a handful of IC50 values, should we take the mean value as ‘representative’, or the mean value once they are converted to pIC50s?”

Well, ideally, you’d try to do rigorous inference on the parameters of the underlying distribution – as we did in the original paper – and our ApPredict project will try to do this for you if you install it. To do this across multiple experiments you probably want to use mixed-effect or hierarchical inference models. But it’s fair enough to want a rough-and-ready answer in some cases, especially if you don’t know much about the spread of your data. A tool to infer the spread parameter σ from a decent number of repeated runs of the same experiment is something I’ll try to provide soon.

But, let’s just say you want to have this ‘representative’ effect of a drug, given a handful of dose-response curves. You’ve got a few options: you could take a load of IC50 values, and take the mean of those; or you could take a load of pIC50 values, and take the mean of those. Or perhaps the median values? But which distribution should you use? Which would be more representative, and what is the behaviour you’re looking for? Answering this was a bit more interesting and involved than I expected…

First off, let’s look at some of the properties of the distributions shown in Figure 3. Here are the theoretical properties of both distributions (N.B. there are analytic formulae for these entries, which you can get from the Wikipedia pages for each distribution: Loglogistic, Logistic). I converted the answers back to pIC50 units for easy comparison, but the IC50 statistics were really computed from the right-hand distribution in Figure 3, in IC50 units.

Theoretical Results

                      Mean     Median    Mode
From IC50 distbn      5.836    6 (μ)     6.199
From pIC50 distbn     6 (μ)    6 (μ)     6 (μ)

So as you might expect (from looking at it), there’s certainly a skew introduced into the Loglogistic distribution on the right of Figure 3 for the IC50 values.
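If you want to reproduce the table’s entries, they follow in closed form from the logistic distribution’s moment generating function (E[e^{tZ}] = πt/sin(πt) for a standard logistic Z) and the standard log-logistic mode formula; here is a sketch with μ = 6 and σ = 0.2 as in Figure 3:

```python
import math

mu, s = 6.0, 0.2        # pIC50 ~ Logistic(mu, s)
b = s * math.log(10.0)  # scale of ln(IC50); the log-logistic shape is beta = 1/b
beta = 1.0 / b

# Mean IC50 from the logistic MGF, converted back to pIC50 units:
mean_ic50 = 10.0**(-mu) * math.pi * b / math.sin(math.pi * b)
print(round(-math.log10(mean_ic50), 3))  # 5.836

# Mode of the log-logistic IC50 distribution, back in pIC50 units:
mode_ic50 = 10.0**(-mu) * ((beta - 1.0) / (beta + 1.0))**(1.0 / beta)
print(round(-math.log10(mode_ic50), 3))  # 6.199

# The median is preserved by the monotone transform, so it sits at mu = 6 exactly.
```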

So what to measure depends on what kind of ‘representative’ behaviour we are after. Or in other words, what is a drug really doing when we get these distributions for observations of its action? Well, you really want to be inferring the ‘centering’ distribution parameter (μ), which in this case would be the mean/median/mode pIC50 or the median IC50. You can already get the impression that it’s most useful to think in pIC50s – as the distribution is symmetric, the mean, median and mode are all the same.

But what about the more realistic case of the properties of a handful of samples of those distributions? I just simulated that and show the results in Figure 4 (I think you could probably do this analytically on paper, given time, but I haven’t yet!).

Figure 4: distributions of (top row) the mean, and (bottom row) the median, from N = 3 samples of (left) pIC50 and (right) IC50. Simulated by taking 3 samples from Figure 3, and repeating the process a million times to build up these histograms. Note that the mean can be taken from IC50 or pIC50 transformed variables, and these give different answers.

It seems the only statistic likely to give you an unbiased and consistent estimate is the mean/median of pIC50 values, since that distribution is symmetric. The IC50 distribution isn’t symmetric, so taking the mean of IC50 samples introduces a bias; it does, however, seem to give you a good estimate of the median of the IC50 distribution (better than the median of a sample of IC50s does!) for a low N – see the top right plot of Figure 4. As N increases you do eventually get a distribution whose peak is at the mean, but N needs to be quite a lot larger than your average (no pun intended) experimental N. As you might expect, the median pIC50 is not quite as good a measure as the mean for the centre of the pIC50 distribution (but that’s hard to see visually in Fig 4; it does almost as well here).
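A cut-down version of this simulation (standard library only, 50,000 repeats rather than a million, N = 3 as in Figure 4) shows the bias directly:

```python
import math
import random

random.seed(0)
MU, S, N = 6.0, 0.2, 3  # pIC50 ~ Logistic(6, 0.2); experiments of N = 3 repeats

def sample_pic50():
    u = random.random()
    return MU + S * math.log(u / (1.0 - u))  # inverse-CDF logistic sample

mean_pic50_ests, mean_ic50_ests = [], []
for _ in range(50_000):
    pic50s = [sample_pic50() for _ in range(N)]
    ic50s = [10.0**(-p) for p in pic50s]
    mean_pic50_ests.append(sum(pic50s) / N)             # mean in pIC50 space
    mean_ic50_ests.append(-math.log10(sum(ic50s) / N))  # mean in IC50 space

avg_a = sum(mean_pic50_ests) / len(mean_pic50_ests)
avg_b = sum(mean_ic50_ests) / len(mean_ic50_ests)
print(round(avg_a, 1))  # 6.0: the mean pIC50 is an unbiased estimate of mu
print(avg_b < avg_a)    # True: averaging IC50s always drags the estimate low
```

The second result is guaranteed by the AM–GM inequality: the arithmetic mean of the IC50s is always at least their geometric mean, and the geometric mean of IC50s corresponds exactly to the mean pIC50.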

You could point out that none of these “make that much difference” to the plots above, but you will introduce a bias if you use the wrong statistic, and for the semi-realistic distributions we’ve got as an example here, an estimate of pIC50 = 5.836 versus the true pIC50 = 6 gives a block error of almost 10% when you substitute it back into a Hill curve of slope 1 at 1 μM.

Importantly, pIC50s are much nicer numbers to deal with: they are always given in the same units of log M, and you can recognise at a glance for your average pharmaceutical compound whether you’re likely to have no block (<1 ish), very weak activity (2-3ish) which is perhaps just noise, low block (4-5ish) or strong block (>6ish) – give or take the concentration that the compound is going to be present at – see Figure 1! These easy-to-grasp numbers make it much easier to spot typos than it is when you’re looking at IC50s in different units. We’ve also seen parameter fitting methods that struggle with IC50s, but are happier working in log-space with pIC50s, as searching (in some sense ‘linearly’) for a pIC50 within say 9 to 2 is easier than searching for an IC50 in 1nM to 10,000μM.

So my conclusion is that I’ll try and work with pIC50s, and if you need a quick summary statistic, use the mean of pIC50s. They seem to occur in symmetric distributions with samples that behave nicely, and therefore are generally much easier to have sensible intuition about!

I’ve noticed more and more papers using the phrase “comprehensive model”. This phrase grates every time, and this post is about why.

I thought this might just be me getting old and grumpy, so I plotted the mentions of the phrases “comprehensive mathematical model” or “comprehensive model” in ‘Topic’ in Web of Science over the years in Figure 1. Sure enough, more papers in recent years feature models that are comprehensive*.

Figure 1: the rise of Comprehensive Models! Mentions of “Comprehensive Models” or “Comprehensive Mathematical Models” in Web of Science 1945 – 2015.

So why don’t I think a mathematical model is ever comprehensive?

Usually by comprehensive the authors mean something like this describes most of what we’ve seen well enough to say/predict something (or even to within the observational error bounds in some physics experiments). Perhaps their model integrates information/theories that weren’t part of a single conceptual framework/mathematical model before. That’s great – but it isn’t comprehensive!

This post is really an excuse to plug and discuss the following quotation from James Black (physiologist and Nobel Prize winner for developing the first beta-blockers). He summarises what mathematical models are and aren’t, and what they are for, beautifully:

[Mathematical] models in analytical pharmacology are not meant to be descriptions, pathetic descriptions, of nature; they are designed to be accurate descriptions of our pathetic thinking about nature. They are meant to expose assumptions, define expectations and help us to devise new tests.

Sir James Black (1924 – 2010) Nobel Prize Lecture, 1988**

A mathematical model isn’t supposed to be a comprehensive representation of a system – it’s always going to be a pathetic representation in many ways! Models make simplifying assumptions (by definition***), generally ignoring things that we think will make a smaller difference to the model’s predictions than the things that we have included (usually things that happen really fast/slow or that are really small/big).

What models do allow us to do is see exactly what we would expect to happen if the system works in the simple(ish!) way we think it does. Then that can teach us loads about whether the system does work as we thought it did, or whether something fundamental is missing from our understanding.

We’ll always be able to come back and add more detail as time goes on, we learn more, and can measure more things. So that means that a model is never finished, and never comprehensive. So I’d say let’s avoid using the word comprehensive to describe any kind of model!

A Large Confession: I thought the word comprehensive implied ‘includes everything’. Purely to back up my point I looked it up, and sure enough one of the OED’s definitions is “grasps or understands (a thing) fully” which a model never does; but another is “having the attribute of comprising or including much; of large content or scope” which might be OK! Hmmmm… given the ambiguity I still conclude we’d better avoid the word anyway! But I’ll let you make your own minds up:

* Yeah, I know I really need to divide by the total number of papers that mention models etc. but that’s harder to search sensibly – and WoS doesn’t summarise data for over 10,000 papers!

** You can watch his Nobel Prize lecture online. He’s quite a remarkable man and invented/discovered lots of classes of completely new drugs, interestingly – as you can see in the video – he used modelling a lot. His modelling would now be part of the trendy new field of Quantitative Systems Pharmacology (although I’d say we’ve been doing it for years 😉 ), and if QSP people ever have a prize for anything I think they should name it after him!

*** ‘Model’ means a simplification of reality, so it’s confusing to say they are “comprehensive” whilst there are lots of things we know about that aren’t included. I double-checked this one too, and indeed our kind of model is defined as “A simplified or idealized description or conception of a particular system, situation, or process, often in mathematical terms, that is put forward as a basis for theoretical or empirical understanding, or for calculations, predictions, etc.”