A report on the Toronto CiPA in-silico modelling workshop

This is a long-awaited follow-up to this post advertising the workshop. Apologies it has taken so long; wanting to write something about the next meeting, which we just had, reminded me that I never posted all these talks.

On 9th November 2017 the CiPA in-silico Working Group hosted a meeting at Toronto General Hospital, which the Cardiac Physiome meeting kindly let us run as a satellite event – a big thanks to them for organising the logistics of room booking etc.

The in-silico aspects of CiPA are led by the FDA Center for Drug Evaluation and Research. You might find the background document that we put together useful if you haven’t heard of CiPA. I’ve also written a post on the idea before. The FDA team let me organise this long half day with the following aims:

  • To inform the cardiac modelling community about the CiPA initiative.
  • To get feedback on the FDA’s work to date.
  • To draw attention to other research in the area that the FDA team might not have been familiar with.
  • To discuss the next steps.
  • To spark more research and collaborations in this area.

It was a fascinating and thought-provoking day, and there is plenty of work for us to do, as you'll see in my summing-up slides at the end of the day. Here are links to all the talks, which you can also find in a Figshare Collection.

Posted in Action Potential Models, Drug action, Future developments, Ion Channel Models, Model Development, Safety Pharmacology

Short and rich voltage-clamp protocols

This is a quick post to tell you all about Kylie's new paper on sinusoidal-wave-based voltage-clamp protocols, which has been published in the Journal of Physiology along with an associated commentary from Ele Grandi. In the paper, some ideas that we've been thinking about for almost 10 years – since I was working with Martin Fink and Denis Noble in the Oxford Physiology department in 2008-2010 – have finally come to fruition.

In their 2009 simulation study comparing properties of Hodgkin-Huxley vs. Markov models (well worth a read), Martin and Denis discussed how an optimised short voltage-step protocol might contain enough information to fit the parameters of a model (termed an 'identifiable' model/protocol combination) in a relatively short amount of experimental time.

We picked up on these ideas when Kylie came to look at models of hERG. We originally wanted to study different modes of drug binding with hERG and design experiments to quantify that. Unfortunately, it rapidly became clear there was little consensus on how to model hERG itself, before even considering drug binding.

Existing literature models of IKr

Figure 1: seven existing model structures that described the 29 models for IKr (a.k.a.* hERG) that we could find in the literature.

OK, so we have lots of different structures, but does this matter? Or do they all give similar predictions? Unfortunately – as we show in Figure 1 of the paper – quite a wide variety of different current profiles are predicted, even by models for the same species, cell type and temperature.

So Kylie's PhD project became a challenge of deciding where we should start! What complexity do we need in a model of hERG (for studying its role in the action potential and what happens when it is blocked), and how should we build one?

These questions link back to a couple of my previous posts – how complex should a model be, and what experiments do we need to do to build it? Kylie's thesis looked at the question of how we should parameterise ion channel models, and even how to select the right ion channel model to use in the first place. We had quite a lot of fun designing new voltage clamp protocols and then going to a lab to test them out. The full story is in Kylie's thesis, and in the paper we present a simpler version that just shows how well you can do with one basic model.

Kylie did a brilliant job: as well as doing all the statistical inference and mathematical modelling work, she learnt how to do whole-cell patch clamp experiments herself, in Teun de Boer's lab and also with Adam Hill and Jamie Vandenberg. Patch clamp is an amazing experimental technique where you effectively get yourself an electrode in the middle of a cell; my sketch of how it works is in Figure 2.

Patch clamp attached

Patch clamp whole cell
Figure 2: the idea behind patch clamp. Top: you first make a glass pipette by pulling a glass tube whilst heating it, to melt and stretch it until it breaks into two really fine pipettes (micrometres across at the tip). You put one electrode in a bath with the cells, then you put another electrode in your pipette with some liquid; attach the pipette to a rig to get fine control of where it points; and lower it down under a microscope onto the surface of a single cell. You then apply suction, clamping the pipette to a cell – this is commonly done by literally sucking on a straw! Bottom: you then keep sucking and rupture the cell membrane. This has cunningly got you an electric circuit that effectively lets you measure voltages or currents across the cell membrane. You can clamp a certain voltage at the amplifier, which it does by injecting current to keep the voltage stable (or to follow any voltage as a function of time). The idea is that the current the amplifier has to inject is the exact opposite of what the cell itself is allowing across the membrane (give or take some compensation for the other electrical components I have put in my diagram). So you can now measure the current flowing through the cell's ion channels as a function of voltage.

We decided that the traditional approach of specific fixed voltage steps (which neatly de-couples time- and voltage-dependence) was a bit slow and tricky to assemble into a coherent model. So we made up some new sinusoid-based protocols for the patch clamp amplifier to rapidly probe the voltage- and time-dynamics of the currents. Things we learnt along the way:

  • Whilst a protocol might work in theory for the model, it might also fry the cells in real life (our first attempts went up to +100 mV for extended periods of time, which cells don't really like).
  • HEK and CHO cells have their own voltage-dependent ion channels (which we call ‘endogenous’ voltage-dependent currents) which you can activate and mix up with the current you are interested in.
  • It’s really important to learn what all the dials on a patch clamp amplifier do(!), and adjust for things like liquid junction potential.
  • Synthetic data studies (simulating data, adding realistic levels of noise, and then attempting to recover the parameters you used) are a very useful tool for designing a good experiment. You can add in various errors and biases and see how sensitive your answers are to these discrepancies (there's a little sketch of this idea just after this list).
  • Despite conductance and kinetics being theoretically separable/identifiable – and practically so in synthetic studies – we ended up with some problems here when using real data (e.g. a trade-off where the kinetics make the channel 'twice as open' with 'half the conductance'; you can imagine this is impossible if the channels are already over 50% open, but quite plausible if only 5% of the channels are open). We re-designed the voltage clamp to include an activation step that provokes a nice large current with a large open probability, based on hERG currents people had observed before.
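To illustrate the synthetic-data idea from the list above, here is a minimal sketch (not the study we actually ran) using a deliberately trivial ohmic current, I = g(V − E): simulate 'data' with known parameters, add noise and a small un-modelled voltage offset (like an uncorrected liquid junction potential), then refit and see how far the recovered parameters drift. All numbers are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def ohmic_current(V, g, E):
    """Deliberately trivial stand-in model: an ohmic current I = g * (V - E)."""
    return g * (V - E)

g_true, E_true = 0.5, -85.0            # 'true' parameters used to generate the synthetic data
V = np.linspace(-120, 40, 50)          # command voltages (mV)

for label, offset in [("no bias", 0.0), ("+5 mV junction-potential error", 5.0)]:
    # Generate synthetic current with noise; the offset is an error the fitting model doesn't know about
    I_data = ohmic_current(V + offset, g_true, E_true) + rng.normal(0, 0.5, V.size)
    (g_fit, E_fit), _ = curve_fit(ohmic_current, V, I_data, p0=[1.0, -80.0])
    print(f"{label}: recovered g = {g_fit:.3f} (true {g_true}), E = {E_fit:.1f} (true {E_true})")
```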

But to cut a very long story short – it all worked better than we could have imagined. Figure 3 shows the voltage protocol we put into the amplifier, and the currents we recorded in CHO cells that were over-expressing hERG. We then fitted our simple Hodgkin-Huxley-style model to the current, essentially by varying all of its parameters to get the best possible fit.
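For a feel of what 'fitting the model to the current' involves, here is a heavily simplified sketch – not the paper's actual code or protocol. It simulates a two-gate Hodgkin-Huxley-style IKr model, I = g·a·r·(V − E_K), under a sum-of-sines voltage clamp and refits the kinetic parameters and conductance by local least squares; the sine frequencies/amplitudes and parameter values are illustrative guesses at roughly the right orders of magnitude, not the published ones.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sine_clamp(t):
    """Illustrative sum-of-sines voltage clamp (mV, t in ms) - not the exact published protocol."""
    return -30 + 54 * np.sin(0.007 * t) + 26 * np.sin(0.037 * t) + 10 * np.sin(0.19 * t)

def simulate_ikr(theta, t, E_K=-86.0):
    """Two-gate Hodgkin-Huxley-style IKr: activation gate a, inactivation/recovery gate r."""
    p1, p2, p3, p4, p5, p6, p7, p8, g = theta
    def dydt(time, y):
        a, r = y
        V = sine_clamp(time)
        k1, k2 = p1 * np.exp(p2 * V), p3 * np.exp(-p4 * V)   # activation / deactivation rates
        k3, k4 = p5 * np.exp(p6 * V), p7 * np.exp(-p8 * V)   # inactivation / recovery rates
        return [k1 * (1 - a) - k2 * a, k4 * (1 - r) - k3 * r]
    sol = solve_ivp(dydt, (t[0], t[-1]), [0.0, 1.0], t_eval=t, method="LSODA", rtol=1e-6, atol=1e-8)
    a, r = sol.y
    return g * a * r * (sine_clamp(t) - E_K)

# Illustrative 'true' parameters, used to make a fake 'recorded' current with added noise
theta_true = np.array([2e-4, 0.07, 3e-5, 0.05, 0.09, 0.01, 5e-3, 0.03, 0.15])
t = np.linspace(0, 8000, 2000)
i_recorded = simulate_ikr(theta_true, t) + np.random.default_rng(1).normal(0, 0.02, t.size)

# Fit all nine parameters from a perturbed starting guess (real fits need global optimisation)
fit = least_squares(lambda th: simulate_ikr(th, t) - i_recorded,
                    x0=theta_true * 1.5, bounds=(0, np.inf))
print("recovered parameters:", fit.x)
```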

training

Figure 3: Model Training/Fitting/Calibrating our model to currents provoked under the sinusoidal voltage clamp. Top: the whole training dataset. Bottom: a zoom in on the highlighted region of the top plot.

So a great fit, but that doesn't mean anything on its own – see my previous post on that. We then tested the model in situations in which we would like it to make good predictions: here, under cardiac action potentials and also slightly awry ones (see Fig 4).

validation

Figure 4: Model Evaluation/Validation. The red current trace was recorded in the same cell as the sinusoidal data shown above. The blue trace is predicted using the parameters from the fitting exercise in Figure 3, and isn’t fitted to this recording.

We repeated this in a few different cells, and this lets us look at cell-to-cell variability in the ion channel kinetics by examining changes in the model's parameters. Anyway, that is hopefully enough to whet your appetite for reading the whole paper! As usual, all the data, code, and (perhaps unusually) fitting algorithms are available for anyone to play with too.

Wish list: if you can help with any of these, let’s collaborate! Please get in touch.

  • A better understanding of identifiability of conductance versus kinetic parameters, and how to ensure both.
  • A way to design voltage clamp protocols for particular currents (this was somewhat hand tuned).
  • A way to select between different model structures at the same time as parameterising them.
  • A way to say how ‘similar’ (in terms of model dynamics?) a validation protocol is to a training protocol. If validation was too similar to training, it wouldn’t really be validation… we think our case above is ‘quite different’, but could we quantify this?
  • A way to quantify/learn ‘model discrepancy’ and to put realistic probabilistic bounds on our model predictions when we are using the models “out in the wild” in future unknown situations.
Footnotes

*hERG is the gene that encodes the mRNA that is translated into a protein, which assembles into homotetramers (groups of four of the same thing stuck together) in the cell membrane. This protein complex forms the main part of a channel in the cell membrane (Kv11.1) that carries the ionic current known as the "rapid delayed rectifier potassium current" or IKr. So you can see why we abuse the term hERG and say things like "hERG current"!

Posted in Experimental Design, Future developments, Ion Channel Models, Model Development, Stats and Inference

Paper repository fatigue

(This is a non-cardiac-modelling rant, probably specific to UK researchers, so feel free to skip it!)

I am a massive fan of open access publication and open science in general. It is quite sensible that the public gets to read all of the research they are funding, and it has to be the best way to share ideas and let science happen without any barriers.

But I'm sure we aren't doing it very efficiently at the moment: some very well-intentioned policies are making publishing a real nuisance in the UK.

Here's a list of all the places where the papers we are publishing at the moment end up. When you google a paper title, you are likely to find hits for all of these; you have to hope that they all ended up being the same final version of the paper, and you aren't really sure which is best to look at:

  • On ArXiv/BioRxiv – I think preprint servers are a great way to make a version of your paper open access, get it google-able, and get feedback on it. So we put up papers on BioRxiv, and try (but sometimes forget) to make sure they are updated to match the final accepted article in a journal.
  • In the actual journal – this is generally the nicest to look at version (but not always!). My funders like to have their articles under a CC-BY licence, which is a great idea, but it generally means a Gold route for open access with quite high fees.
  • On PubMedCentral (PMC) or Europe's version (or, usually for us, both) – PMC is funded by the NIH, the USA's main medical research agency, and any papers they fund also have to deposit a version with PMC. This applies even if the paper is open access – fair enough – I imagine it's a good idea to have an 'official backup' in case a journal shuts down for any reason. Since my funders go for Gold open access it is somewhat redundant, and it confuses people when they search on PubMed and have to choose which version to look at, but at least it is a big repository with almost all biomedical research in one place (give or take the European version – please just pool your resources EU and USA! Does Brexit mean we'll also have to put a version in a UK-PMC too? Probably… groan).
  • On a university/institutional archive – the UK powers-that-be have (very sensibly) decided that (almost) all papers have to be available open access to be eligible for consideration as part of the next Research Excellence Framework, which decides how much public money universities get. As far as I can work out, every single university has decided (very un-sensibly) that the only way to ensure this is to launch their own individual paper repository where they also host another open access version of the paper. Ours is called ePrints.
  • On a couple of other institutional archives – nearly all my papers have co-authors in other universities, who also have to submit the paper again to their own institutional repositories!

So, every single university in the UK is creating the IT databases, infrastructure and front-ends to host large numbers of research papers in perpetuity, as well as employing staff to curate them and chase academics to put the right versions into the right forms at the right time with the right licences to keep everyone happy – mostly for papers that are already available open access elsewhere. This is an insane use of resources.

The only thing I can suggest is that the UK REF people clarify that any paper that has a final open-access accepted text in arXiv/BioRxiv/a journal/PMC/EuropePMC is automatically eligible. For papers that doesn't cover, the universities need to get together to either beef up support for subject-specific repositories, or just fund a single central repository between them, with a good user interface, to cover any subjects that fall down the cracks between the reliable subject repositories above. Maybe that's the sort of thing our highly-paid VCs and UUK should be organising 😉


Posted in Academia-in-general

Postdoc position available

Another short post, just to advertise a postdoctoral research associate (PDRA) position available to work with me. It’s a two year position, based at the Centre for Mathematical Medicine & Biology at the University of Nottingham, with the potential to visit labs around the world to get hands-on experimental experience.

The subject of the postdoc position will be designing new experiments to get as much information as we can on how pharmaceutical compounds bind to ion channels and affect the currents that flow through them. As part of this I would like to explore how to characterise and quantify model discrepancy, and to design experiments for that, as well as for model selection and parameterisation of the models.

We’ll then use the data generated by these experiments to build models of pharmaceutical drug interactions with ion currents, working with partners in pharmaceutical companies and international regulators to test out these new models. The project will involve learning some of the modelling behind electrophysiology and pharmacology, as well as data science/statistics behind designing experiments and choosing models and parameters. It will build on some of our recent work on novel experimental design, some of which is available in a preprint here.

See http://www.nottingham.ac.uk/jobs/currentvacancies/ref/SCI308217 for details and links to apply. Closing date is Wed 4th October. Informal enquiries to me are welcome (but applications have to go through the official system on the link above).

Gary

 

Posted in Drug action, Experimental Design, Future developments, Ion Channel Models, Model Development, Safety Pharmacology, Stats and Inference

CiPA in-silico modelling meeting

There's an effort underway to evaluate, improve and implement mathematical models of cardiac electrophysiology for pharmaceutical cardiac safety testing and regulatory practice – in particular, to make them part of a more mechanistic and accurate in-vitro assessment of pro-arrhythmic risk than the existing human clinical Thorough QT trial. The intended replacement is called the Comprehensive In-vitro Pro-arrhythmia Assay, or CiPA for short.

This is just a short post to let everyone know that the CiPA in-silico Working Group is organising a meeting on November 9th 2017 in Toronto, dedicated to discussing the mathematical modelling aspects of CiPA. You can find more information about the meeting on this page, and register for the meeting on this page.

The plan is for the FDA modelling team to present the work they have been doing to the cardiac modelling community, to get feedback, encourage work in this area, and to network and start new collaborations. Also note that we are keeping a handful of speaker slots free for late-breaking abstracts – you can email short abstracts to me (details here) by 30th September.

Please pass this message on to anyone you think may be interested.

Hope to see lots of you there!

Gary

Posted in Action Potential Models, Drug action, Future developments, Ion Channel Models, Model Development, Safety Pharmacology

Arrhythmic risk: regression, single cell biophysics, or big tissue simulations?

On the 15th March I presented at an FDA public Advisory Committee hearing on the proposals of the CiPA initiative. CiPA aims to replace the current testing for increased drug-induced Torsade de Pointes (TdP) arrhythmic risk, which is a human clinical trial, with earlier pre-clinical testing (including mathematical modelling) that could give a more accurate assessment without the need for a human trial. Most of my talk (available on Figshare) was about the rationale for using a biophysically-based mechanistic model to classify novel compounds' TdP risk, the history of cardiac modelling, and how simulations might fit into the proposals.

The advisory committee asked some great questions, and I thought it was worth elaborating on one of my answers here. To summarise, quite a few of their questions came down to "Why don't you include more detail on known risk factors?". Things they brought up included:

  • a long-short-long pattern of pacing intervals is often observed in the clinic prior to Torsade de Pointes starting – why not include that?
  • should we model Purkinje cells rather than ventricular cells (perhaps ectopic beats or arrhythmias arise in the Purkinje fibre system)?
  • heart failure is a known risk factor – would modelling these conditions help?

Mechanistic markers

Before answering, it’s worth considering where we are now in terms of ‘mechanistic’ markers of arrhythmic risk. Figure 1 shows how things are assessed at the moment.

Fig 1: block of hERG -> prolongation of action potential duration (APD) -> prolongation of QT interval on the body surface ECG. Taken from my 2012 BJP review paper.

It was observed that the risky drugs withdrawn from the market in the late 90s prolonged the QT interval, and for these compounds this was nicely mechanistically linked to block of the hERG channel/IKr (top panel of Fig 1). This all makes nice mechanistic sense – a prolonged QT is related to delayed repolarisation (as in bottom of Fig 1), which in turn is related to block of hERG/IKr.

There are a couple of reasons prolonged repolarisation is conceptually/mechanistically linked to arrhythmia. Firstly, if you delayed repolarisation 'a bit more' (continuing to decrease the slope at the end of the action potential – middle panel of Fig 1), you'd get repolarisation failure, or after-depolarisations. Secondly, by delaying repolarisation you may cause regions of tissue to fail to be ready for the following wave, termed 'functional block'.

As a result of the pathway from hERG block to QT prolongation, early ion channel screening focusses on checking that compounds don't block hERG/IKr. The clinical trials tried to avoid directly causing arrhythmias, for obvious reasons, but by looking for QT prolongation in healthy volunteers you would hopefully spot compounds that could have a propensity to cause arrhythmias in unhealthy heart tissue, people with ion channel mutations, people on co-medication with other slightly risky compounds, or people with other risk factors. This has been remarkably successful, and there have been very few surprises of TdP-inducing compounds sneaking past the QT check without being spotted.

But, there were some hERG blockers on the market that didn’t seem to cause arrhythmias. Our 2011 paper showed why that can happen –  there are different mechanistic routes to get the same QT or APD changes (by blocking multiple ion channels rather than just hERG) and if you took multiple ion channel block into account you would get better predictions of risk than simply using the early hERG screening results. So multiple ion channel simulations of single cell APD are a very similar idea to clinical trials of QT (and comparing the two is a good check that we roughly understand a compound’s effects on ion channels).

So clinical QT/simulated APD is a mechanistically-based marker of arrhythmic risk, but we know it still isn’t perfect because some drugs with similar QT prolongation in our healthy volunteers have different arrhythmic risks (see CiPA papers for an intro).

One extreme: as detailed as possible

At one end of the scale, some studies advocate whole-organ simulations of TdP in action to assess TdP risk. Here's a video of the impressive UT-heart simulator from Tokyo that was used in one such study.

These simulations definitely have their place in helping us understand the origin of TdP, how it is maintained/terminates, and possibly helping design clinical interventions to deal with it. If we want to go the whole hog and really assess TdP risk completely mechanistically, why not do patient-specific whole organ TdP simulations, with mechanics, individualised electrophysiology models, all the known risk factors, the right concentrations of compound, and variations of these throughout the heart tissue, etc. etc.?

Let's imagine for a minute that we could do that: we got a model that was very realistic for individual patients, we could run simulations in lots of different patients in different disease states, and we could observe spontaneous initiation of drug-induced TdP via the 'correct' mechanisms (this hypothetical situation ignores the not-inconsiderable extra uncertainties in how well we model regional changes in electrophysiology, what changes between individuals, blood pressure regulation models, individual fibre directions, accurate geometry, etc., which might mean we get more detail but less realism than a single cell simulation!). Let's also say we could get these huge simulations to run in real time – I think the IBM Cardioid software is about the fastest, and it runs at about a third of real time on the USA's biggest supercomputer.

That would be brilliant for risk assessment wouldn’t it?

Unfortunately not!

TdP is very rare – perhaps occurring once in 10,000 patient-years of dosing for something like methadone – which means ultra-realistic simulations would have to run for 10,000 years just to get an N=1 on the world's biggest supercomputer! It's going to be quite a while before Moore's law makes this feasible…

Inevitably then, we are not looking to model all the details of the actual circumstances in which TdP arises; we're looking for some marker that correlates well with the risk of TdP.

The other extreme: as simple as possible

The other extreme is perhaps forgetting about mechanism altogether, and simply using a statistical model, based on historical correlations of IC50s with TdP, to assess the risk. Hitesh Mistry did something a bit like this in this paper (although, as I've said at conferences, it's not really a simple statistical correlation model – it's really a clever minimal biophysically-based model, since it uses the Hill equation and the balance of depolarising and repolarising currents!). For block of two or three ion channels it works very well.

Why would I like something a bit more mechanistic than that, then? I came up with the example in Figure 2 to explain why at the FDA hearing.

statistical_vs_biophysical

Fig 2: imagine dropping a 1 kg mass off a tower and timing how long it takes to fall to the ground. You might get the blue dots if you did it from between the second and third floors of a building. Left: a pure statistician might then do a linear regression to estimate the relationship between height and time (think ion channel block and TdP risk). Right: a physicist would get out Newton's second law and derive the relationship t = √(2h/g). The one on the left would be dodgy for extrapolating outside the 'training data'; the one on the right would be fairly reliable for extrapolating (not completely – it doesn't include air resistance etc.!)
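A minimal numerical version of the analogy (assuming the drop heights above and ignoring air resistance): fit a straight line to drop times measured between roughly 6 and 9 m, then compare its extrapolation at 100 m with the physics-based prediction t = √(2h/g).

```python
import numpy as np

g = 9.81                                    # m/s^2
rng = np.random.default_rng(0)

# 'Training data': drops from between the 2nd and 3rd floors (roughly 6-9 m), with a little timing noise
h_train = np.linspace(6, 9, 8)              # m
t_train = np.sqrt(2 * h_train / g) + rng.normal(0, 0.02, h_train.size)

slope, intercept = np.polyfit(h_train, t_train, 1)   # the statistician's straight line

h_new = 100.0                               # extrapolate well outside the training range
print("linear regression:  ", slope * h_new + intercept, "s")   # badly wrong (~9 s)
print("physics, sqrt(2h/g):", np.sqrt(2 * h_new / g), "s")      # ~4.5 s
```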

Why might ion channel block be like Fig 2? Well, when you're considering just two or three channels being blocked, Hitesh's method (which actually includes a bit of mechanistic Newton's-second-law thinking in my analogy!) will work very well, assuming it's trained on enough compounds with various degrees of block of the three channels, as the blue dots will cover most of the space.

But you might want to predict the outcome of block (or even activation) of up to seven or more different ionic currents (and combinations thereof) that could theoretically happen and cause changes relevant to TdP risk. In this case, any method that is primarily based on statistical regression, rather than mechanistic biophysics, is going to struggle because of the curse of dimensionality. In essence, you'll struggle to get enough historical compounds to 'fill the space' for anything higher than two or three variables/ion currents. You could think of the biophysical models as reducing the dimension of the problem here (in the same way as the biology does, if we've got enough bits of the models good enough), so they can output a single risk marker that is then suitable for this historical correlation with risk – without needing a huge number of compounds.
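A rough counting argument for that curse of dimensionality (with an assumed, purely illustrative grid of five block levels per channel): the number of block combinations that historical compounds would need to cover grows exponentially with the number of channels.

```python
# Number of block-level combinations to cover, assuming an illustrative grid of 5 levels per channel
levels = 5                      # e.g. 0, 25, 50, 75, 100% block
for n_channels in (2, 3, 7):
    print(f"{n_channels} channels: {levels ** n_channels:>6} combinations")
# 2 channels:     25
# 3 channels:    125
# 7 channels:  78125
```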

The right balance?

CiPA is pursuing single-cell action potential simulations, looking for markers of arrhythmic risk that quantify 'repolarisation stability' in some sense. I think this is a very sensible approach, geared simply at improving one step beyond APD/QT alone.

In terms of including more risk-factor details, as the committee asked at the top of this post, the real question is 'does it improve my risk prediction?'. Hopefully I've explained why including all the detail we can think of isn't obviously going to help. Your ranking of a new compound relative to the risk of established ones would have to change in order for a new simulated risk marker to make any difference.

To assess whether that difference really was an improved risk prediction, we would need to have faith that the risk factors were widely applicable to the people who are getting TdP, and that any changes to the models for introducing heart failure etc. are sufficiently well validated and trusted to rely on. I don't think we are quite ready for this, as there is plenty to do at the moment trying to ensure there is an appropriate balance of currents in a baseline model (before any are blocked – two relevant papers: paper 1, paper 2), and that the kinetics of drug block of hERG are included, as these are probably important.

Another thought along the committee's lines is TdP risk for different patient subgroups, instead of a one-size-fits-all approach. This would be very nice, but the same difficulties apply, multiplied! Firstly, we would need models that we trust for all these subgroups, with well-quantified levels/distributions of ion channel expression and other risk-factor-induced changes. Secondly, there is even sparser gold-standard clinical risk categorisation for all the subgroups to test our models on. Unfortunately, with such a rare side effect it is difficult enough to get an overall risk level, never mind risk tailored to individual subgroups. So at present, I think the CiPA proposal of a single cell model (give or take an additional stem-cell-derived myocyte prediction perhaps!) and single risk marker is a very sensible first step.

As usual, comments welcome below!

Posted in Action Potential Models, Drug action, Future developments, Model Development, Safety Pharmacology, Tissue Simulations

Uncertainty quantification for ion channel screening and risk prediction

This post accompanies our new paper in Wellcome Open Research.

Regular readers of this blog will know that I worry quite a lot about uncertainty in the numbers we are using to model drug action on electrophysiology – see our recent white paper for an intro, where we discuss the various sources of uncertainty in our simulations, and a 2013 paper on variability observed when you repeat ion channel screening experiments lots of times. That paper studied variability in the averages of lots of experiments; in contrast, it is also possible to look at the uncertainty that remains when you just do one experiment (or one set of experiments).

We have been looking at screening data that were published recently by Crumb et al. (2016). This is an exciting dataset because it covers an unprecedented number of ion currents (7) for a good number of compounds (30 – a fair number anyway!). The ideal scenario for the CiPA initiative is that we can feed these data into a mathematical model, and classify compounds as high/intermediate/low arrhythmic risk as a result. Of course, the model simulation results are only ever going to be as accurate as the data going in; I've sketched what I mean by this in the plot below.

Uncertainty Quantification schematic plot

Figure 1: Uncertainty Quantification. Here we imagine we have two inputs (red, blue) to a nonlinear model, and we observe the resulting model outputs. Top: for just a single value (with no associated uncertainty). Middle: for two well-characterised (low uncertainty) inputs, these give us distinct probabilistic model predictions. Bottom: for two uncertain and overlapping inputs, inevitably giving us uncertain and overlapping output distributions.

In Figure 1 we can see the same scenario viewed three different ways. We have two inputs into the simulation – a ‘red’ one and a ‘blue’ one. You could think about these as any numbers, e.g. “% block of an ion current”.

Top row of Fig 1

If we ignored uncertainty, we might do what I've shown on the top row: plug in single values and get out single outputs. Note that blue was higher than red and gives a correspondingly higher model output/prediction. But how certain were we that red and blue took those values? How certain are we that the output really is higher for blue?

Middle row of Fig 1

Instead of just the most likely values, it is helpful to think of probability distributions for red and blue, as I’ve shown in the middle row of Figure 1. Here, we aren’t quite sure of their exact values, but we are fairly certain that they don’t overlap (this step of working out distributions on inputs is called “uncertainty characterisation“), so their most likely values are the same as the ones we used before in the top row. When we map these through our model* (a step called “uncertainty propagation“) we get two probability distributions. N.B. these output distributions are not necessarily the same shape as the ones that went in, for a nonlinear model. In the sketch the output distributions don’t overlap much either. But this is only guaranteed to be true if the model output is a linear (with non-zero slope!) function of one input; in general, the output distributions could easily overlap more than the inputs [e.g. imagine the model output is a sine wave function of the input with red input = blue input + 2π]; which is why uncertainty propagation is an important exercise! The whole process of uncertainty characterisation and propagation is known as uncertainty quantification.
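To see this numerically, here is a tiny version of the sine-wave example in square brackets above: two input distributions that barely overlap (centred 2π apart) give output distributions through sin() that are essentially identical.

```python
import numpy as np

rng = np.random.default_rng(0)
red = rng.normal(0.0, 0.3, size=100_000)          # two inputs that barely overlap...
blue = rng.normal(2 * np.pi, 0.3, size=100_000)

out_red, out_blue = np.sin(red), np.sin(blue)     # ...map to essentially identical outputs
print("input means: ", red.mean(), blue.mean())
print("output means:", out_red.mean(), out_blue.mean())   # both ~0
print("output stds: ", out_red.std(), out_blue.std())     # essentially equal
```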

Bottom row of Fig 1

In the bottom row of Figure 1 we see another scenario – here there is lots of overlap between the red and blue input distributions. So we think blue is higher than red, but it could easily be the other way around. Now we map these through our model, and get correspondingly large and overlapping probability distributions on our outputs. Even in the best-case scenario (i.e. a linear model output depending solely on this input), we would end up with the same distribution overlap as the inputs; and for non-linear models, where outputs are complicated functions of many inputs, the distributions could overlap much more. So bear in mind that it isn’t possible to get less-overlapping distributions out of the model than the ones that went in, but it is possible to get more overlapping ones. The uncertainty always increases if anything (I hadn’t really thought about it before, but that might be related to why we think about entropy in information theory?).

If we consider now that the output is something like 'safety risk prediction for a drug', could we distinguish whether red or blue is a more dangerous drug? Well, maybe we'll be OK with something like the picture in the middle row of Figure 1. Here my imaginary risk-indicator model output distinguishes between the red and blue compounds quite well. But we can state categorically "no" if the inputs overlap to the extent that they do in the bottom row, before we even feed them into a simulation. So we thought it would be important to do uncertainty characterisation and work out the uncertainty in ion channel block that we have from dose-response curves, before doing action potential simulations. This is the focus of our recent paper in Wellcome Open Research**.

Uncertainty Characterisation

In the new paper, Ross has developed a Bayesian inference method and code for inferring probability distributions for pIC50 values and Hill coefficients.

The basic idea is shown in Figure 2, where we compare the approach of fitting a single vs. distribution of dose-response curves:


Figure 2: best fit versus inference of dose-response curves. On the right we have plotted lots of samples from the distribution of possible curves, with each sample equally likely, so the darker regions are the more likely regions. Data from Crumb et al. (2016).

On the left of Figure 2 we see the usual approach that’s taken in the literature, fitting a line of best fit through all the data points. On the right, we plot samples of dose-response curves that may have given rise to these measurements.

The method works by inferring the pIC50 and Hill coefficient that describe the curve, but also the observation error that is present in the experiment, i.e. the 'statistical model' is:

data = curve(pIC50, Hill) + ε,   where ε ~ Normal(mean = 0, standard deviation = σ).

One of the nice features of using a statistical model like this is that it tries to learn how much noise is on the data by looking at how noisy the data are, and therefore generates dose-response curves spread appropriately for this dataset, rather than with a fixed number for the standard deviation of the observational noise.
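To make that concrete, here is a minimal sketch of this kind of inference (not the paper's actual code): a random-walk Metropolis sampler for (pIC50, Hill, σ) with broad uniform priors, run on some made-up dose-response measurements. The priors, concentrations and tuning choices are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def dose_response(conc, pIC50, hill):
    """Percentage block at concentration conc (uM); pIC50 is on the molar scale."""
    ic50_uM = 10.0 ** (6.0 - pIC50)
    return 100.0 / (1.0 + (ic50_uM / conc) ** hill)

def log_likelihood(theta, conc, data):
    pIC50, hill, sigma = theta
    if sigma <= 0 or hill <= 0:
        return -np.inf
    resid = data - dose_response(conc, pIC50, hill)
    return -0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

def log_prior(theta):
    pIC50, hill, sigma = theta
    # Broad uniform priors - ranges are assumptions for illustration, not the paper's choices
    return 0.0 if (1 < pIC50 < 12 and 0.1 < hill < 5 and 0 < sigma < 50) else -np.inf

def metropolis(conc, data, n_iter=20000):
    step = np.array([0.05, 0.05, 0.5])           # random-walk proposal widths
    theta = np.array([6.0, 1.0, 5.0])            # starting guess
    logp = log_prior(theta) + log_likelihood(theta, conc, data)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        logp_prop = log_prior(prop) + log_likelihood(prop, conc, data)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        samples.append(theta.copy())
    return np.array(samples[n_iter // 2:])       # discard the first half as burn-in

# Made-up noisy measurements at a few concentrations (uM)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
data = dose_response(conc, pIC50=6.5, hill=1.0) + rng.normal(0, 5, size=conc.size)
samples = metropolis(conc, data)
print("pIC50 mean +/- sd:", samples[:, 0].mean(), samples[:, 0].std())
```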

At this point we calculate what the % block of the current (the input into the model) might look like from these inferred curves; this is shown in Figure 3:

Input distribution characterisation

Figure 3: uncertainty characterisation. When we go to do a simulation we'll want the "% hERG block" as an input into the simulations. This figure shows how we work that out at a given concentration. Imagine drawing a vertical line in the top plot (same as the right plot in Fig 2); we then just look at the distribution of curves along this line, which is displayed in the bottom plot for two concentrations (green and pink-ish).
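In code, that 'vertical slice' is just the dose-response curve evaluated at the chosen concentration for every posterior sample. A self-contained sketch (the posterior samples here are made up for illustration; in practice they come from the inference step above, and the concentrations are illustrative too):

```python
import numpy as np

def percent_block(conc_uM, pIC50, hill):
    ic50_uM = 10.0 ** (6.0 - pIC50)        # convert a molar-scale pIC50 to an IC50 in uM
    return 100.0 / (1.0 + (ic50_uM / conc_uM) ** hill)

# Stand-in posterior samples of (pIC50, Hill) - made up for illustration
rng = np.random.default_rng(3)
pIC50_samples = rng.normal(6.5, 0.2, size=500)
hill_samples = rng.normal(1.0, 0.1, size=500)

for conc in (0.3, 3.0):                    # two illustrative concentrations (uM)
    block = percent_block(conc, pIC50_samples, hill_samples)
    print(f"{conc} uM: median {np.median(block):.1f}% block, 95% interval "
          f"[{np.percentile(block, 2.5):.1f}, {np.percentile(block, 97.5):.1f}]%")
```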

The new paper also extends this approach to hierarchical fitting (saying data from each experiment were generated with different pIC50 and Hill coefficients), but I will let you read the paper for more information on that.

So what can we conclude about the distribution of inputs for the Crumb dataset? Well, that's a little bit tricky since it covers 7 channels and is therefore a seven-dimensional thing to examine. To have a look at it we took 500 samples of each of the seven ion-current % blocks. This gives a table where each row looks like:

Drug (1-30) | Sample (1-500) | % block current 1 | % block current 2 | ... | % block current 7 |

So the seven % blocks are the seven axes or dimensions of our input dataset.

I then simply did a principal component analysis, which separates out the data points as much as possible by projecting them onto new axes that are linear combinations of the old ones. You can easily visualise up to 3D, as shown in the video below.

In the video above you see the samples for each compound plotted in a different colour (the PCA wasn't told about the different compounds). So each cloud is in a position that is determined by what % block of each channel the compound produces at its free Cmax concentration (given in the Crumb paper). What we see is that about 10 to 12 of the compounds are in distinct distributions, so they block currents in a combination unlike the other compounds, i.e. they behave like the different inputs in the middle row of Figure 1. But the remaining ones seem to block in distributions that could easily overlap, like the inputs in the bottom row of Figure 1. As you might expect, this cluster sits close to the origin (the no-block coordinate).

Here, these first three principal components happen to describe 94% of the variation in the full 7D dataset, so whilst it is possible that there is some more discrimination between the 'clouds' of samples for each compound in higher dimensions, your first guess would be that this is not likely (but this is a big caveat that needs exploring further before reaching firm conclusions). So it isn't looking promising that there is enough information here, in the way we've used it anyway, to distinguish between the input values for half of these compounds.
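For anyone wanting to try something similar, here is a minimal sketch of that projection step using scikit-learn. The array of block samples below is randomly generated just to show the shape of the data (30 drugs × 500 samples × 7 currents); the real samples come from the inferred dose-response curves.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_drugs, n_samples = 30, 500
centres = rng.uniform(0, 60, size=(n_drugs, 7))            # pretend per-drug mean % block of 7 currents
block = (np.repeat(centres, n_samples, axis=0)
         + rng.normal(0, 5, size=(n_drugs * n_samples, 7)))  # (15000, 7) array of sampled % blocks
drug_id = np.repeat(np.arange(n_drugs), n_samples)          # which compound each row belongs to

pca = PCA(n_components=3)
scores = pca.fit_transform(block)                           # 3D coordinates to plot, coloured by drug_id
print("variance explained by first 3 components:",
      pca.explained_variance_ratio_.sum())                  # ~94% for the real dataset (see text above)
```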

Uncertainty Propagation

But once you've got this collection of inputs you can simply do the uncertainty propagation step anyway and see what comes out. We did this for a simple O'Hara-model 1 Hz steady-state pacing experiment, applying conductance block according to the video's cloud values for each of the 30 compounds, to work out predicted output distributions of APD90 for each compound. The results are shown in Figure 4.

APD90 distributions

Figure 4: distributions of simulated APD90 in the O’Hara model (endocardial variant) at free Cmax for all 30 compounds in the Crumb dataset, using samples from the inferred dose-response curves (Figure 15 in the paper). Random colours for each of the 30 compounds.

Figure 4 is in line with what we expected from the video above, and it gives us some pause for thought. Perhaps five compounds have distinctly 'long' APD, and perhaps two have distinctly 'short' APD, but this leaves 23 compounds with very much overlapping 'nothing-or-slight-prolongation' distributions. The arrhythmic risk associated with each of these compounds is not entirely clear (to me!) at present, so it is possible that we could distinguish some of them, but it is looking a bit as if we are in the scenario shown in the bottom right of Figure 1 – and the outputs overlap to such an extent that it's going to be difficult to say much.

So we’re left with a few options:

  • Do more experiments (the larger the number of data points, the smaller the uncertainty in the dose-response curves) – whether narrower distributions would allow us to classify the compounds according to risk remains to be seen.
  • Examine whether we actually need all seven ion currents as inputs (perhaps some are adding noise rather than useful information on arrhythmic risk).
  • Get more input data that might help to distinguish between compounds – the number one candidate here would be data on the kinetics of drug binding to hERG.
  • Examine other possible outputs (not just APD) in the hope that they distinguish drugs more, but in light of the closeness of the input distributions, this is perhaps unlikely.

So to sum up, it is really useful to do the uncertainty characterisation step to check that you aren’t about to attempt to do something impossible or extremely fragile. I think we’ve done it ‘right’ for the Crumb dataset, and it suggests that we will need more, or less(!), or different data to distinguish pro-arrhythmic risk. Comments welcome below…

Footnotes:

* The simplest way to do this is to take a random sample of the input probability distribution, run a simulation with that value to get an output sample, then repeat lots and lots of times to build up a distribution of outputs. This is what’s known as a Monte Carlo method (using random sampling to do something that in principle is deterministic!). There are some cleverer ways of doing it faster in certain cases, but we didn’t use them here!
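A minimal sketch of that Monte Carlo loop, with a made-up one-input 'model' standing in for the action potential simulation (in the paper this would be the O'Hara model returning APD90, and the inputs would be the sampled % blocks):

```python
import numpy as np

rng = np.random.default_rng(5)

def run_model(block_fraction):
    """Hypothetical stand-in for an expensive simulation returning an APD90-like output (ms)."""
    return 270.0 * (1.0 + 0.6 * block_fraction**2)

input_samples = rng.beta(2, 5, size=1000)                # uncertain input on [0, 1], e.g. fractional block
output_samples = np.array([run_model(x) for x in input_samples])
print("output median:", np.median(output_samples),
      "95% interval:", np.percentile(output_samples, [2.5, 97.5]))
```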

** By the way: Wellcome Open Research is a new open-access, open-data journal for Wellcome Trust-funded researchers to publish any of their findings rapidly. It is a preprint, post-publication-review journal: reviewers are invited after the manuscript is published and asked to help the authors revise a new version. The whole review process is online and open, and it follows the f1000research.com model. So it's something I'm happy we've supported by being in the first issue with this work.

Posted in Drug action, Model Development, Safety Pharmacology, Stats and Inference