Numerical errors from ODE solvers can mess up optimisation and inference very easily

(Subtitle: “when you’ve got a lot of data points!”)

This post is for people interested in doing optimisation or inference with Differential Equation (DE) models.

If you are a statistician, you might be used to treating model simulators as black boxes where you can stick parameters in and get outputs out. This post is about why you need to be a bit careful with that. It examines one of the quirks of working with differential equations and optimisation/inference that my team have bumped into in a few distinct situations – including simulators given out for public optimisation competitions! I haven’t seen it referred to in any of the textbooks, but please let me know in the comments if you have.

Below in Figure 1 is a likelihood surface (or objective function) that we came across (more on the definition of it below), as a function of one of the parameters in a cardiac action potential model. We are trying to find the maximum in this case.


Figure 1: a likelihood function of one parameter in a cardiac model. Urgh! For those that are interested in the detail, this is comparing a 10kHz time-sampled action potential voltage recording with a realistic level of noise to a simulated action potential. Figure thanks to Ross!

Not all optimisers rely on a nice smooth gradient – but they do all enjoy them! This is a horrible surface and no matter what kind of optimiser you use it is going to struggle to move around and explore something that looks like this. The red line marks the data-generating value in this case, and the green is somewhere we got stuck. Remember this is only in one dimension, now imagine it in ten or more…

To make matters worse, we might want to run MCMC on this surface to get a posterior distribution for the parameter on the x-axis. We see that there are ‘spikes’ of about 40 log-likelihood units. What does that mean? Well, if we are talking about the probability of accepting a move from a spike down into a neighbouring trough in Figure 1 using a Metropolis-Hastings step, that equates to an acceptance probability of exp(-40) ≈ 4×10^-18! Our chains will certainly get stuck and never move across this space nicely.
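For concreteness, here is the arithmetic behind that acceptance probability as a minimal Python sketch (the 40-unit drop is just the approximate spike height read off Figure 1):

```python
import numpy as np

# A Metropolis-Hastings step with a symmetric proposal accepts a move with
# probability min(1, exp(proposed log-likelihood - current log-likelihood)).
delta_log_likelihood = -40.0  # stepping off a 'spike' into a neighbouring trough
acceptance_probability = min(1.0, np.exp(delta_log_likelihood))
print(acceptance_probability)  # ~4e-18: the chain will essentially never take this step
```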

Is the problem really so non-linear that it has got thousands of local optima, or modes in a posterior, as this suggests? Thankfully, the answer is ‘No’!

After a bit of detective work we figured out that this bumpy surface is entirely due to numerical error in our simulation, and it should be completely smooth! The example is from an Ordinary Differential Equation (ODE) solver but Partial Differential Equation (PDE) solvers will also give the same behaviour.

Most of the time we can’t derive exact analytic solutions to our models’ equations, so we have to use numerical solution techniques; the simplest of these is the Forward Euler method. These numerical methods give you only an approximation to the solution of your equations, which you try to ensure is accurate by spending more computational effort (taking finer time steps) and checking that the solution is converging to an answer. As you keep refining, the solution should change less and less.
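To make that concrete, here is a minimal Forward Euler loop for a generic ODE dy/dt = f(t, y), together with the kind of refinement check described above. This is just an illustrative Python sketch on a toy equation, not the solver we actually use:

```python
import numpy as np

def forward_euler(f, y0, t0, t_end, dt):
    """Approximate the solution of dy/dt = f(t, y) using fixed time steps of size dt."""
    t, y = t0, float(y0)
    while t < t_end:
        y += dt * f(t, y)  # one explicit Euler step
        t += dt
    return y

# Convergence check: keep halving the step size until the answer stops changing much.
f = lambda t, y: -y  # toy ODE with known solution y(t) = exp(-t)
for dt in [0.1, 0.05, 0.025, 0.0125]:
    approx = forward_euler(f, y0=1.0, t0=0.0, t_end=5.0, dt=dt)
    print(dt, approx, abs(approx - np.exp(-5.0)))  # error shrinks as dt is refined
```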

Broadly speaking we can classify ODE solvers into: fixed-step solvers, like the Forward Euler method, which take the same size time step as they go along; and adaptive-step solvers, which alter the length of the time steps, possibly on every step. When gradients are changing fast, adaptive solvers take lots of small steps to stay accurate; when gradients are changing more slowly, they take fewer but larger steps to keep the computation fast.

With an adaptive time-step solver you give a target tolerance (relative to the size of the variables (RelTol), or absolute (AbsTol), or typically both) and it refines the steps to try to maintain these tolerances on each step. In the example here we used CVODE, but another common one is Matlab’s ode15s stiff ODE solver. The same principle would also apply if you use a fixed-step solver: it would need smaller time steps rather than tighter tolerances.
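Here is roughly where those tolerances enter in code. We used CVODE via our own tools; the sketch below uses SciPy’s solve_ivp on a made-up toy ODE purely to show the rtol/atol knobs (the equivalent of RelTol/AbsTol), and how the answer shifts slightly as they are tightened:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff-ish ODE standing in for a much bigger cardiac model.
def rhs(t, y):
    return [-50.0 * (y[0] - np.cos(t))]

for rtol, atol in [(1e-4, 1e-6), (1e-7, 1e-9)]:
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0], method="BDF", rtol=rtol, atol=atol)
    # The end-point value changes a little as the tolerances are tightened.
    print(rtol, atol, sol.y[0, -1])
```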

In Figure 2 we show the shift in the likelihood surface as we tighten the ODE solver tolerances (Relative, Absolute in brackets above each plot):


Figure 2: tolerances tightening from Rel=1e-4, Abs=1e-6 through to 1e-7, 1e-9. We need solutions at the tighter end of these tolerances to get a nice smooth likelihood surface in this problem. Figure is taken from Ross’ PhD thesis. Note that, as well as getting smoother, the log-likelihood curve is shifting vertically (the y axes are different in these plots), and the difference in terms of probabilities would be huge.

In general RelTol = 10^-4 and AbsTol = 10^-6 are not unreasonable choices for a single ODE solve; indeed, Matlab’s defaults are RelTol = 10^-3 (less precise than Figure 1) and AbsTol = 10^-6 (the same).

So why is this effect so big?

Likelihoods

A very common assumption is that a ‘data generating process’ (the way that you end up with observations that some instrument records) is:

data = reality + observation noise on each data point

Another common assumption is that the noise here is Gaussian, independent across data points, and identically distributed (it comes from a Normal distribution with the same mean (often zero) and standard deviation at every data point); this is known as “i.i.d.” Gaussian noise.

A third assumption is that ‘reality’ in our equation above is given by the smooth, noise-less model output. This is obviously a bit shaky (because no model is perfect), but the idea is that you can still get useful information on the parameters within your model if it is close enough (N.B. you might get overconfident in the wrong answer – this is a good paper explaining why). So we then commonly have:

data = model output + i.i.d. Gaussian noise.

We can then write down a log-likelihood (log just because it is easier to work with numerically…) and we end up with a big sum of squared errors across the whole of our time trace:

$$\log L(\theta, \sigma \mid \mathbf{x}) = -\frac{N}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - f(t_i;\theta)\right)^2$$

(see the Wikipedia derivation from the Normal probability density function). Here the model output f(t_i; θ), given some parameter set θ, plays the role of the mean; the x_i are the observed data points; and σ is the i.i.d. noise parameter.
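In code this is just a few lines; a minimal Python version of the expression above (assuming the model output has already been evaluated at the data time points):

```python
import numpy as np

def gaussian_log_likelihood(data, model_output, sigma):
    """i.i.d. Gaussian log-likelihood: a constant minus a scaled sum of squared errors."""
    residuals = np.asarray(data) - np.asarray(model_output)
    n = residuals.size
    return -0.5 * n * np.log(2.0 * np.pi * sigma**2) - 0.5 * np.sum(residuals**2) / sigma**2
```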

The reason that we have come across this problem perhaps more than other people isn’t that we have been more sloppy with our ODE solving (we put some effort into doing that relatively well!), but that we are dealing with problems that consist of high frequency samples of time-series data. We commonly work with a few seconds of 10kHz time sampled recordings, so we can end up with around 100,000 data points.

Why is this important? Say that, because of numerical error, your simulation and data diverge by >=1.1 standard deviations of the noise level instead of >=1 standard deviation (roughly 0.86 versus 0.84 on a standard Normal statistics table). If this happens at 100 time points then those probabilities multiply: 0.86^100 ≈ 3×10^-7 versus 0.84^100 ≈ 3×10^-8 – a shift of roughly a factor of ten in the probability that your parameters gave rise to the data, caused by a numerical error that had a relatively small effect on the solution at each time point. As we have more and more data points, this effect is exaggerated until even tiny shifts in the solution have huge effects on probabilities, as we saw above.

There’s a slight subtlety here: you might have already checked that your solution is converging to within a pre-specified tolerance for a given parameter set. For example, a modeller might say “I don’t care about changes of less than 0.01% in these variables, so I set the solver tolerance accordingly”, and a statistician treating the simulator as a black box might just run with that. But what is important here is not the error bound on the individual variables at a given parameter set; it is the error bound that the likelihood transformation of these variables demands, if jumps in the likelihood as a function of the parameters are to be kept small. So the modeller and statistician need to talk to each other here to work out whether there might be problems…
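A back-of-envelope calculation, with hypothetical numbers, shows why the ‘acceptable’ error on the variables and the ‘acceptable’ error on the likelihood are very different things once you have ~100,000 data points:

```python
# Hypothetical numbers: suppose solver error shifts the simulated trace by just
# 0.1% of the noise standard deviation at each of 100,000 samples, at points
# where the residuals are already around one standard deviation.
n_points = 100_000
sigma = 1.0                     # noise standard deviation
per_point_shift = 1e-3 * sigma  # a 'negligible' error on each individual value

baseline_residual = 1.0 * sigma
shifted_residual = baseline_residual + per_point_shift

# Change in the sum-of-squares log-likelihood above caused by that tiny shift:
delta_log_likelihood = -0.5 * n_points * (shifted_residual**2 - baseline_residual**2) / sigma**2
print(delta_log_likelihood)  # about -100: plenty to create spikes like those in Figure 1
```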

Conclusions

I wouldn’t be surprised to find that this is one of the reasons people have found the need to use things like genetic algorithms in cardiac problems. But I suspect the information content, un-identifiability and parameter scalings are also very important factors in that.

So what should you do?

Examine 1D likelihood slices. We can fix all but one of the parameters and vary that one, plotting out the likelihood as above. Then tighten your solver tolerances until 1D slices of your likelihood are smooth enough for optimisers/MCMC to navigate easily. Whatever this extra accuracy costs in additional solver time will be repaid by far more efficient optimisation/inference (in the examples we have looked at, the worst cost is approximately just 10% more solve time for a solve with 10x tighter tolerances, resulting in a speed-up of thousands of times in optimisation).
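A sketch of that workflow in Python is below; simulate, log_likelihood and best_params are placeholders standing in for your own ODE simulator, noise model and best-fit parameters, not real library functions:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_1d_likelihood_slice(simulate, log_likelihood, params, index, values, tolerances):
    """Vary one parameter (all others fixed) and plot the likelihood slice at
    each solver tolerance setting; tighten until the slice looks smooth."""
    for rtol, atol in tolerances:
        lls = []
        for v in values:
            p = np.array(params, dtype=float)
            p[index] = v  # vary one parameter at a time
            lls.append(log_likelihood(simulate(p, rtol=rtol, atol=atol)))
        plt.plot(values, lls, label=f"rtol={rtol}, atol={atol}")
    plt.xlabel(f"parameter {index}")
    plt.ylabel("log-likelihood")
    plt.legend()
    plt.show()

# e.g. sweep parameter 0 over +/-20% of its best-fit value at three tolerance settings:
# plot_1d_likelihood_slice(simulate, log_likelihood, best_params, 0,
#                          np.linspace(0.8, 1.2, 200) * best_params[0],
#                          [(1e-4, 1e-6), (1e-6, 1e-8), (1e-8, 1e-10)])
```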

What about thinning the data? A way to get rid of this problem would be to remove a lot of data points – something that’s called ‘thinning’ in the MCMC literature (although there it usually refers to the MCMC chain afterwards rather than the data). I’m not a fan of doing it to the data. It will artificially throw away information and make your posteriors wider than they should be according to your noise model. You might not completely trust your likelihood/noise model, but thinning doesn’t automatically fix it either!

Finally, this post wouldn’t be complete without mentioning that there is a relatively new way to consider this effect, which explicitly admits that we have error from the solver, and treats it as a random variable (which can be correlated through time):

data = model + numerical approximation error + observation noise.

Dealing with this formulation is the field known as probabilistic numerics – see the homepage for this – and you can use it to make MCMC take account of numerical errors. In our case, I expect this approach could help by effectively warming up (cf. tempering methods) the likelihood and making the spikes relatively smaller and more jump-able. Interestingly, in the above plots you can see that this isn’t independent noise as you move through parameter space; I don’t know enough about the subject to say whether that has been handled or not! Whether it is worth the extra complication I’m not convinced. Maybe for big PDE models it will be worth the trouble, but for the reasonably lightweight ODEs involved in single-cell cardiac work it is probably just worth solving more accurately all the time.


A report on the Toronto CiPA in-silico modelling workshop

This is a long-awaited follow-up to this post advertising the workshop. Apologies it has taken so long – the fact that I wanted to write something about the next meeting, which we have just had, reminded me that I never posted all these talks.

On the 9th November 2017 the CiPA in-silico Working Group hosted a meeting in Toronto General Hospital that the Cardiac Physiome meeting kindly let us run as a satellite meeting – a big thanks to them for organising the logistics of room booking etc.

The in-silico aspects of CiPA are led by the FDA Center for Drug Evaluation and Research. You might find the background document that we put together useful if you haven’t heard of CiPA. I’ve also written a post on the idea before. The FDA team let me organise this long half day with the following aims:

  • To inform the cardiac modelling community about the CiPA initiative.
  • To get feedback on the FDA’s work to date.
  • To draw attention to other research in the area they might not have been familiar with.
  • To discuss the next steps.
  • To spark more research and collaborations in this area.

It was a fascinating and thought-provoking day, with plenty of work for us to do, as you’ll see from my summing-up slides at the end of the day. Here are links to all the talks, which you can also find in a Figshare Collection.

 

 

 

 


Short and rich voltage-clamp protocols

This is a quick post to tell you all about Kylie’s new paper on sinusoidal-wave-based voltage clamp protocols, which has been published in the Journal of Physiology with an associated commentary from Ele Grandi. In the paper, some ideas that we’ve been thinking about for almost 10 years – since I was working with Martin Fink and Denis Noble back in the Oxford Physiology department in 2008–2010 – have finally come to fruition.

In their 2009 simulation study comparing properties of Hodgkin Huxley vs. Markov Models (well worth a read) Martin and Denis discussed how an optimised short voltage step protocol might contain enough information to fit the parameters of models (termed an ‘identifiable’ model/protocol combination) in a relatively short amount of experimental time.

We picked up on these ideas when Kylie came to look at models of hERG. We originally wanted to study different modes of drug binding with hERG and design experiments to quantify that. Unfortunately, it rapidly became clear there was little consensus on how to model hERG itself, before even considering drug binding.

Existing literature models of IKr

Figure 1: seven existing model structures that described the 29 models for IKr (a.k.a.* hERG) that we could find in the literature.

OK, so we have lots of different structures, but does this matter? Or do they all give similar predictions? Unfortunately – as we show in Figure 1 of the paper – quite a wide variety of different current profiles are predicted, even by models for the same species, cell type and temperature.

So Kylie’s PhD project became a challenge of deciding where we should start! What complexity do we need in a model of hERG (for studying its role in the action potential and what happens when it is blocked), and how should we build one?

These questions link back to a couple of my previous posts – how complex should a model be, and what experiments do we need to do to build it? Kylie’s thesis looked at the question of how we should parameterise ion channel models, and even how to select the right ion channel model to use in the first place. We had quite a lot of fun designing new voltage clamp protocols and then going to a lab to test them out. The full story is in Kylie’s thesis, and we present a simpler version that just shows how well you can do with one basic model in the paper.

Kylie did a brilliant job, and as well as doing all the statistical inference and mathematical modelling work, she went and learnt how to do whole-cell patch clamp experiments herself, at Teun de Boer’s lab and also with Adam Hill and Jamie Vandenberg. Patch clamp is an amazing experimental technique where you effectively get yourself an electrode in the middle of a cell; my sketch of how it works is in Figure 2.

[Figure 2 images: the cell-attached (top) and whole-cell (bottom) patch clamp configurations]
Figure 2: the idea behind patch clamp. Top: you first make a glass pipette by pulling a glass tube whilst heating it, melting and stretching it until it breaks into two really fine pipettes (micrometres across at the tip). You put one electrode in a bath with the cells, then you put another electrode in your pipette with some liquid; attach it to a rig to get fine control of where it points; and lower it down under a microscope onto the surface of a single cell. You then apply suction, clamping the pipette to the cell – commonly done by literally sucking on a straw! Bottom: you then keep sucking and rupture the cell membrane; this has cunningly got you an electric circuit that effectively lets you measure voltages or currents across the cell membrane. You can clamp a certain voltage at the amplifier, which it does by injecting current to keep the voltage stable (or to follow any voltage as a function of time). The idea is that the current the amplifier has to inject is the exact opposite of what the cell itself is allowing across the membrane (give or take some compensation for the other electrical components I have put in my diagram). So you can now measure the current flowing through the cell’s ion channels as a function of voltage.

We decided that the traditional approach of specific fixed voltage steps (which neatly de-couples time- and voltage-dependence) was a bit slow and tricky to assemble into a coherent model. So we made up some new sinusoid-based protocols for the patch clamp amplifier, to rapidly probe the voltage- and time-dynamics of the currents (a toy sketch of this kind of protocol follows the list below). Things we learnt along the way:

  • Whilst it might work in theory for the model, you also might fry the cells in real life (our first attempts at protocols went up to +100mV for extended periods of time, which cells don’t really like).
  • HEK and CHO cells have their own voltage-dependent ion channels (which we call ‘endogenous’ voltage-dependent currents) which you can activate and mix up with the current you are interested in.
  • It’s really important to learn what all the dials on a patch clamp amplifier do(!), and adjust for things like liquid junction potential.
  • Synthetic data studies (simulating data, adding realistic levels of noise, and then attempting to recover the parameters you used) are a very useful tool for designing a good experiment. You can add in various errors and biases and see how sensitive your answers are to these discrepancies.
  • Despite conductance and kinetics being theoretically separable/identifiable, and practically so in synthetic studies, we ended up with some problems here when using real data (e.g. kinetics that make the channel ‘twice as open’ with ‘half the conductance’. You can imagine this is impossible if the channels are already over 50% open, but maybe quite likely if only 5% of the channels are open?). We re-designed the voltage clamp to include an activation step that provokes a nice large current with a large open probability, based on hERG currents people had observed before.
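To give a flavour of what ‘sinusoid-based’ means here, the toy Python sketch below builds a sum-of-sines voltage command between two holding periods. The amplitudes, frequencies and timings are made up for illustration – the published protocol was designed and tuned much more carefully than this:

```python
import numpy as np

def toy_sine_wave_protocol(t, holding=-80.0):
    """A toy sinusoid-based voltage command (in mV) on a time axis t (in seconds)."""
    v = holding * np.ones_like(t)
    active = (t > 0.5) & (t < 7.5)  # sinusoidal section between two holding periods
    ts = t[active] - 0.5
    v[active] = (-30.0
                 + 25.0 * np.sin(2 * np.pi * 0.7 * ts)   # slow component
                 + 15.0 * np.sin(2 * np.pi * 2.3 * ts)   # medium component
                 + 10.0 * np.sin(2 * np.pi * 5.1 * ts))  # fast component
    return v

t = np.linspace(0.0, 8.0, 80_001)  # 8 seconds sampled at 10 kHz
voltage_command = toy_sine_wave_protocol(t)
```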

But to cut a very long story short – it all worked better than we could have imagined. Figure 3 shows the voltage protocol we put into the amplifier, and the currents we recorded in CHO cells that were over-expressing hERG. We then fitted our simple Hodgkin-Huxley style model to the current – essentially by varying all of its parameters to get the best possible fit.


Figure 3: Model Training/Fitting/Calibrating our model to currents provoked under the sinusoidal voltage clamp. Top: the whole training dataset. Bottom: a zoom in on the highlighted region of the top plot.
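For anyone wondering what ‘fitting our simple Hodgkin-Huxley style model to the current’ looks like in practice, here is a hedged sketch. The two-gate structure and exponential voltage-dependent rates below are a generic HH-style form with placeholder parameters p1–p8 and conductance g, not a transcription of the model or fitting method in the paper; voltage_of_t is assumed to be an interpolant of the voltage command (e.g. scipy.interpolate.interp1d), and in reality the fitting used global optimisation rather than a single local least-squares run:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

E_K = -85.0  # assumed K+ reversal potential (mV)

def simulate_current(params, t, voltage_of_t):
    """Toy HH-style K+ current I = g * a * r * (V - E_K), with an activation gate a
    and an inactivation gate r whose rates are simple exponentials of voltage."""
    p1, p2, p3, p4, p5, p6, p7, p8, g = params

    def rhs(time, y):
        a, r = y
        V = voltage_of_t(time)
        k1, k2 = p1 * np.exp(p2 * V), p3 * np.exp(-p4 * V)  # activation / deactivation
        k3, k4 = p5 * np.exp(p6 * V), p7 * np.exp(-p8 * V)  # inactivation / recovery
        return [k1 * (1.0 - a) - k2 * a, k4 * (1.0 - r) - k3 * r]

    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 1.0], t_eval=t,
                    method="BDF", rtol=1e-8, atol=1e-10)  # note the tight tolerances!
    a, r = sol.y
    return g * a * r * (voltage_of_t(t) - E_K)

def fit_to_recording(observed_current, t, voltage_of_t, initial_guess):
    """Adjust all parameters to minimise the sum of squared errors to the recording."""
    residuals = lambda p: simulate_current(p, t, voltage_of_t) - observed_current
    return least_squares(residuals, initial_guess, method="trf").x
```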

So a great fit, but that doesn’t mean anything on its own – see my previous post on that. So we then tested the model in situations where we would like it to make good predictions: here, under cardiac action potentials and also slightly awry ones, see Fig 4.


Figure 4: Model Evaluation/Validation. The red current trace was recorded in the same cell as the sinusoidal data shown above. The blue trace is predicted using the parameters from the fitting exercise in Figure 3, and isn’t fitted to this recording.

We repeated this in a few different cells, which lets us look at cell-to-cell variability in the ion channel kinetics by examining changes in the model’s parameters. Anyway, that is hopefully enough to whet your appetite for reading the whole paper! As usual, all the data, code, and (perhaps unusually) fitting algorithms are available for anyone to play with too.

Wish list: if you can help with any of these, let’s collaborate! Please get in touch.

  • A better understanding of identifiability of conductance versus kinetic parameters, and how to ensure both.
  • A way to design voltage clamp protocols for particular currents (this was somewhat hand tuned).
  • A way to select between different model structures at the same time as parameterising them.
  • A way to say how ‘similar’ (in terms of model dynamics?) a validation protocol is to a training protocol. If validation was too similar to training, it wouldn’t really be validation… we think our case above is ‘quite different’, but could we quantify this?
  • A way to quantify/learn ‘model discrepancy’ and to put realistic probabilistic bounds on our model predictions when we are using the models “out in the wild” in future unknown situations.
Footnotes

*hERG is the gene that encodes for the mRNA that is translated into a protein that assembles into homotetramers (groups of four of the same thing stuck together) in the cell membrane. This protein complex forms the main part of a channel in the cell membrane (Kv11.1) that carries the ionic current known as the “rapid delayed rectifier potassium current” or IKr. So you can see why we abuse the term hERG and say things like “hERG current”!


Paper repository fatigue

(This is a non-cardiac-modelling rant, probably specific to UK researchers, so feel free to skip it!)

I am a massive fan of open access publication and open science in general. It is quite sensible that the public gets to read all of the research they are funding, and it has to be the best way to share ideas and let science happen without any barriers.

But I’m sure we aren’t doing it very efficiently at the moment; some very well-intentioned policies are making publishing a real nuisance in the UK.

Here’s a list of all the places where the papers we are publishing at the moment are ending up. When you google a paper title, you are likely to find hits for all of these; you have to hope that they all ended up being the same final version of the paper, and you aren’t really sure which is best to look at:

  • On ArXiv/BioRxiv – I think preprint servers are a great way to make a version of your paper open access, get it google-able, and get feedback on it. So we put up papers on BioRxiv, and try (but sometimes forget) to make sure they are updated to match the final accepted article in a journal.
  • In the actual journal – this is generally the nicest to look at version (but not always!). My funders like to have their articles under a CC-BY licence, which is a great idea, but it generally means a Gold route for open access with quite high fees.
  • On PubMed Central (PMC) or Europe’s version (or usually for us, both) – PMC is funded by the NIH, the USA’s main medical research agency, and any papers they funded also have to deposit a version with PMC. This applies even if it is open access – fair enough – I imagine it’s a good idea to have an ‘official backup’ in case a journal shuts down for any reason. Since my funders go for Gold open access it is somewhat redundant, and confuses people when they search on PubMed and have to choose which version to look at, but at least it is a big repository with almost all biomedical research in one place (give or take the European version – please just pool your resources EU and USA! Does Brexit mean we’ll also have to put a version in a UK-PMC too? Probably… groan).
  • On a university/institutional archive – the UK powers-that-be have (very sensibly) decided that (almost) all papers have to be available open access to be eligible for consideration as part of the next Research Excellence Framework which decides how much public money universities get. As far as I can work out, every single university has decided (very un-sensibly) that the only way to ensure this is to launch their own individual paper repository where they also host another open access version of the paper. Ours is called ePrints.
  • On a couple of other institutional archives – nearly all my papers have co-authors in other universities, who also have to submit the paper again to their own institutional repositories!

So, every single university in the UK is creating the IT databases, infrastructure and front-ends to host large numbers of research papers in perpetuity, as well as employing staff to curate and chase academics to put the right versions into the right forms at the right time with the right licences to keep everyone happy – mostly for papers that are already available open access elsewhere. This is an insane use of resources.

The only thing I can suggest is that UK REF people clarify that any paper that has a final open access accepted text in arXiv/bioRxiv/a journal/PMC/EuropePMC is automatically eligible. For papers that doesn’t cover, the universities need to get together to either beef up support for subject-specific repositories, or just fund a single central repository between them, with a good user interface, to cover any subjects that fall down the cracks between the reliable subject repositories above. Maybe the sort of thing our highly-paid VCs and UUK should be organising 😉

 

 


Postdoc position available

Another short post, just to advertise a postdoctoral research associate (PDRA) position available to work with me. It’s a two year position, based at the Centre for Mathematical Medicine & Biology at the University of Nottingham, with the potential to visit labs around the world to get hands-on experimental experience.

The subject of the postdoc position will be designing new experiments to get as much information as we can on how pharmaceutical compounds bind to ion channels and affect the currents that flow through them. As part of this I would like to explore how to characterise and quantify model discrepancy, and design experiments for that, as well as model selection and parameterisation of the models.

We’ll then use the data generated by these experiments to build models of pharmaceutical drug interactions with ion currents, working with partners in pharmaceutical companies and international regulators to test out these new models. The project will involve learning some of the modelling behind electrophysiology and pharmacology, as well as data science/statistics behind designing experiments and choosing models and parameters. It will build on some of our recent work on novel experimental design, some of which is available in a preprint here.

See http://www.nottingham.ac.uk/jobs/currentvacancies/ref/SCI308217 for details and links to apply. Closing date is Wed 4th October. Informal enquiries to me are welcome (but applications have to go through the official system on the link above).

Gary

 


CiPA in-silico modelling meeting

There’s an effort underway to evaluate, improve and implement mathematical models of cardiac electrophysiology for pharmaceutical cardiac safety testing and regulatory practice. In particular, the aim is for these models to be part of a more mechanistic and accurate in-vitro assessment of pro-arrhythmic risk than the existing human clinical Thorough QT trial. The intended replacement is called the Comprehensive In-vitro Pro-arrhythmia Assay, or CiPA for short.

This is just a short post to let everyone know that the CiPA in-silico Working Group is organising a meeting on November 9th 2017 in Toronto, dedicated to discussing the mathematical modelling aspects of CiPA. You can find more information about the meeting on this page, and register for the meeting on this page.

The plan is for the FDA modelling team to present the work they have been doing to the cardiac modelling community, to get feedback, encourage work in this area, and to network and start new collaborations. Also note that we are keeping a handful of speaker slots free for late-breaking abstracts, which you can email me short abstracts for (details here) by 30th September.

Please pass this message on to anyone you think may be interested.

Hope to see lots of you there!

Gary


Arrhythmic risk: regression, single cell biophysics, or big tissue simulations?

On the 15th March I presented at an FDA public Advisory Committee hearing on the proposals of the CiPA initiative. CiPA aims to replace the current testing for increased drug-induced Torsade de Pointes (TdP) arrhythmic risk, which is a human clinical trial, with earlier pre-clinical testing (including mathematical modelling) that could give a more accurate assessment without the need for a human trial. Most of my talk (available on Figshare) was about the rationale for using a biophysically-based mechanistic model to classify novel compounds’ TdP risk, the history of cardiac modelling, and how simulations might fit into the proposals.

The advisory committee asked some great questions, and I thought it was worth elaborating on one of my answers here. To summarise, quite a few of their questions came down to “Why don’t you include more detail of known risk factors?”. Things they brought up include:

  • long-short-long pacing intervals are often observed in the clinic prior to Torsade de Pointes starting – why not include that?;
  • should we model Purkinje cells rather than ventricular cells (perhaps ectopic beats or arrhythmias arise in the Purkinje fibre system)?;
  • heart failure is a known risk factor – would modelling these conditions help?

Mechanistic markers

Before answering, it’s worth considering where we are now in terms of ‘mechanistic’ markers of arrhythmic risk. Figure 1 shows how things are assessed at the moment.

Fig 1: block of hERG -> prolongation of action potential duration (APD) -> prolongation of QT interval on the body surface ECG. Taken from my 2012 BJP review paper.

It was observed that the risky drugs withdrawn from the market in the late 90s prolonged the QT interval, and for these compounds this was nicely mechanistically linked to block of the hERG channel/IKr (top panel of Fig 1). This all makes nice mechanistic sense – a prolonged QT is related to delayed repolarisation (as in bottom of Fig 1), which in turn is related to block of hERG/IKr.

There are a couple of reasons why prolonged repolarisation is conceptually/mechanistically linked to arrhythmia. Firstly, if you delayed repolarisation ‘a bit more’ (continuing to decrease the slope at the end of the action potential – middle panel of Fig 1), you’d get repolarisation failure, or after-depolarisations. Secondly, by delaying repolarisation you may cause regions of tissue to fail to be ready for the following wave, termed ‘functional block’.

As a result of the pathway from hERG block to QT prolongation, early ion channel screening focusses on checking compounds don’t block hERG/IKr. The clinical trials tried to avoid directly causing arrhythmias, for obvious reasons, but by looking for QT prolongation in healthy volunteers you would hopefully spot compounds that could have a propensity to cause arrhythmias in unhealthy heart tissue, people with ion channel mutations, people on co-medication with other slightly risky compounds, or other risk factors. This has been remarkably successful, and there have been very few surprises of TdP-inducing compounds sneaking past the QT check without being spotted.

But, there were some hERG blockers on the market that didn’t seem to cause arrhythmias. Our 2011 paper showed why that can happen –  there are different mechanistic routes to get the same QT or APD changes (by blocking multiple ion channels rather than just hERG) and if you took multiple ion channel block into account you would get better predictions of risk than simply using the early hERG screening results. So multiple ion channel simulations of single cell APD are a very similar idea to clinical trials of QT (and comparing the two is a good check that we roughly understand a compound’s effects on ion channels).

So clinical QT/simulated APD is a mechanistically-based marker of arrhythmic risk, but we know it still isn’t perfect because some drugs with similar QT prolongation in our healthy volunteers have different arrhythmic risks (see CiPA papers for an intro).

One extreme: as detailed as possible

At one end of the scale, some studies advocate whole-organ simulations of TdP in action to assess TdP risk. Here’s a video of the impressive UT-heart simulator from Tokyo that was used in that study.

These simulations definitely have their place in helping us understand the origin of TdP, how it is maintained/terminates, and possibly helping design clinical interventions to deal with it. If we want to go the whole hog and really assess TdP risk completely mechanistically why not do patient-specific whole organ TdP simulations, with mechanics, individualised electrophysiology models, all the known risk factors, the right concentrations of compound, and variations of these throughout the heart tissue, etc. etc.?

Let’s imagine for a minute that we could do that, and got a model that was very realistic for individual patients, and we could run simulations in lots of different patients in different disease states, and observe spontaneous initiation of drug-induced TdP via the ‘correct’ mechanisms (this hypothetical situation ignores the not-inconsiderable extra uncertainties in how well we model regional changes in electrophysiology, what changes between individuals, blood pressure regulation models, individual fibre directions, accurate geometry, etc. etc. etc. which might mean we get more detail but less realism than a single cell simulation!). Let’s also say we could get these huge simulations to run in real time – I think the IBM Cardioid software is about the fastest, and goes at about three times less than real time on the USA’s biggest supercomputer.

That would be brilliant for risk assessment wouldn’t it?

Unfortunately not!

TdP is very rare – perhaps occurring once in 10,000 patient-years of dosing for something like methadone. Which means ultra-realistic simulations would have to run for 10,000 years just to get an N=1 on the world’s biggest supercomputer! It’s going to be quite a while before Moore’s law makes this feasible…

Inevitably then, we are not looking to model all the details of the actual circumstances in which TdP arises, we’re looking for some marker that correlates well with the risk of TdP.

The other extreme: as simple as possible

The other extreme is perhaps forgetting about mechanism altogether, and simply using a statistical model, based on historical correlations of IC50s with TdP, to assess the risk. Hitesh Mistry did something a bit like this in this paper (although, as I’ve said at conferences, it’s not really a simple statistical correlation model – it’s really a clever minimal biophysically-based model, since it uses the Hill equation and a balance of depolarising and repolarising currents!). But for block of two or three ion channels it works very well.
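For reference, the Hill equation mentioned here gives the fraction of an ionic current blocked at a given drug concentration; a minimal version (with the usual IC50 and Hill coefficient parameters):

```python
def fraction_blocked(concentration, ic50, hill_coefficient=1.0):
    """Hill equation for the fraction of current blocked at a drug concentration."""
    return 1.0 / (1.0 + (ic50 / concentration) ** hill_coefficient)

print(fraction_blocked(1.0, ic50=1.0))  # a drug at its IC50 blocks half the current: 0.5
```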

Why would I like something a bit more mechanistic than that then? I came up with the example in Figure 2 to explain why in the FDA hearing.


Fig 2: imagine dropping a mass of 1kg off a tower and timing how long it takes to fall to the ground. You might get the blue dots if you did it from between the second and third floors of a building. Left: if you were only a statistician you might then do a linear regression to estimate the relationship between height and time (think ion channel block and TdP risk). Right: a physicist would get out Newton’s second law and derive the relationship on the right (a fall time of t = √(2h/g) from height h). The one on the left would be dodgy for extrapolating outside the ‘training data’; the one on the right would be fairly reliable for extrapolating (not completely – as it doesn’t include air resistance etc.!)
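A toy recreation of the Figure 2 analogy (with made-up heights for the ‘training data’), showing how far the two approaches diverge once you extrapolate:

```python
import numpy as np

g = 9.81                                   # gravitational acceleration, m/s^2
heights = np.array([6.0, 7.0, 8.0, 9.0])   # drops from between the 2nd and 3rd floors (m)
times = np.sqrt(2.0 * heights / g)         # noiseless fall times from Newton's second law

# 'Statistician': straight line fitted to the training data only.
slope, intercept = np.polyfit(heights, times, 1)

# Extrapolate both models to a much taller drop.
h_new = 100.0
print(slope * h_new + intercept)   # linear regression prediction (~9 s: way off)
print(np.sqrt(2.0 * h_new / g))    # mechanistic prediction t = sqrt(2h/g) (~4.5 s)
```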

Why might ion channel block be like Fig 2? Well, when you’re considering just two or three channels being blocked, then Hitesh’s method (which actually includes a bit of mechanistic Newton’s second law in my analogy!) will work very well, assuming it’s trained on enough compounds with various degrees of block of the three channels, as the blue dots will cover most of the space.

But you might want to predict the outcome of block (or even activation) of up to seven or more different ionic currents (and combinations thereof) that could theoretically happen and cause changes relevant to TdP risk. In this case, any method that is primarily based on a statistical regression, rather than mechanistic biophysics, is going to struggle because of the curse of dimensionality. In essence, you’ll struggle to get enough historical compounds to ‘fill the space’ for anything higher than two or three variables/ion currents. You could think of the biophysical models as reducing the dimension of the problem here (in the same way as the biology does, if we’ve got enough bits of the models good enough), so they can output a single risk marker that is then suitable for this historical correlation with risk – without a huge number of compounds.

The right balance?

CiPA is pursuing single-cell action potential simulations, looking for markers of arrhythmic risk that quantify ‘repolarisation stability’ in some sense. I think this is a very sensible approach, geared simply at improving one step beyond APD/QT alone.

In terms of including more risk factor details in here, as the committee asked originally at the top of this post, the real question is ‘does it improve my risk prediction?’. Hopefully I’ve explained why including all the detail we can think of isn’t obviously going to help. Your ranking of a new compound against the risk of established ones would have to change in order for a new simulated risk marker to make any difference.

To assess whether that difference really was an improved risk prediction, we would need to have faith that the risk factors were widely applicable to the people who are getting TdP, and that any changes to the models for introducing heart failure etc. are sufficiently well validated and trusted to rely on them. I don’t think we are quite ready for this, as there is plenty to do at the moment trying to ensure there is an appropriate balance of currents in a baseline model (before any are blocked – two relevant papers: paper 1, paper 2), and that the kinetics of drug block of hERG are included, as these are probably important.

Another thought along the committee’s lines is TdP risk for different patient subgroups, instead of a one-size-fits-all approach. This would be very nice, but the same difficulties apply, multiplied! Firstly, getting models that we trust for all these subgroups, with well quantified levels/distributions of ion channel expression and other risk-factor-induced changes. Secondly, even sparser gold standard clinical risk categorisation for all subgroups to test our models on. Unfortunately, with such a rare side effect it is difficult enough to get an overall risk level, never mind risk tailored to individual subgroups. So at present, I think the CiPA proposal of a single cell model (give or take an additional stem-cell derived myocyte prediction perhaps!) and single risk marker is a very sensible first step.

As usual, comments welcome below!
