Job Available: Statistical inference for mechanistic models

N.B. Applications for this position are now closed.

We are offering a 3-year position as a research fellow in statistical inference for mechanistic models. This is offered at either postdoc or senior postdoc level (equivalent grade to Assistant Professor), here in the Centre for Mathematical Medicine & Biology and Statistics & Probability groups, based in Mathematical Sciences, University of Nottingham.

We have started a Wellcome Trust funded project entitled “Developing cardiac electrophysiology models for drug safety studies”. This is an exciting opportunity to get involved in a substantial research team that will consist of at least two new postdoctoral research associates, together with Dominic Whittaker, me and a dedicated research software engineer, Maurice Hendrix, in collaboration with colleagues in Statistics & Probability within Nottingham (in particular Simon Preston and Theo Kypraios).

Over the 10 years I’ve been doing cardiac modelling, I have come to think that our main problems in the field are related to reliably and reproducibly choosing and deriving biophysically-based mechanistic models from experimental data, and accounting for uncertainty whilst doing this. There are quite a few challenges involved: so many, in fact, that we held a month-long residential programme on them, the Fickle Heart, at the Newton Institute in Cambridge this past summer (videos from the final workshop are available here).

You’ll find a lot of our open questions discussed in various past blog posts, but here are a few that we will be tackling in this grant:

  • Deciding appropriate baseline models for the ion currents (see my talk at the Banff research station on this topic), and parameterising these models effectively, is a big prerequisite for our research, which we’ve been working on recently in these papers: sinusoidal protocols, high-throughput model building. There are open challenges in how to do model selection as well as parameterisation whilst accounting for all models being imperfect. Selecting appropriate noise models for use in likelihood-based methods is an interesting part of this.
  • Designing experiments to get information on drug binding to ion channels, and making sure that they can run on high-throughput automated machines. There are open challenges in how to design these for (global) parameter optimisation, model selection and model validation, and for assessing/capturing/modelling the discrepancies.
  • Tailoring mathematical action potential models to particular cell types, to make predictions of what drugs might do in different species and cell types. Again, we think that doing more informative experiments (working with the Christini lab to build on this) will help a lot.
  • Considering all of this in a probabilistic/statistical framework that accounts for uncertainty and variability in a lot of different aspects:
    • our datasets and the underlying biological systems,
    • model parameters,
    • model structures/equations themselves,
    • discrepancy between models and reality,
    • our subsequent drug safety predictions.

We’ll be working closely with: industry labs (in particular at GlaxoSmithKline and Roche); pharmaceutical regulators (including the FDA); and academic labs (in particular Teun de Boer’s lab at UMC Utrecht in the Netherlands and Adam Hill & Jamie Vandenberg’s labs at the Victor Chang Cardiac Research Institute, Sydney, Australia). So candidates must enjoy teamwork and collaborative inter-disciplinary projects, and be prepared to get into the lab for a few weeks to really get to grips with the experiments we are trying to infer things from.

If any of that sounds interesting to you – please do apply! Feel free to contact me with informal enquiries.

You can apply online here: https://www.nottingham.ac.uk/jobs/currentvacancies/ref/SCI362219. The deadline is Tuesday 5th November.


Hodgkin-Huxley models and Markov equivalents

Hodgkin & Huxley did some remarkable work in the 1930s-50s on ion currents flowing through biological membranes. Despite not knowing what an ion channel was, they managed to work out an incredibly accurate, predictive mathematical model for the currents that flow through them, and solved the differential equations numerically on a hand calculator. These models are still fundamental to a lot of electrophysiology work: we are still publishing Hodgkin-Huxley style models of particular currents (we just tried to parameterise them better)!

So there are a few points to raise about good old Hodgkin-Huxley models in this blog post!

  1. There’s a recent set of papers updating the original papers for modern conventions.
  2. I’ve sketched how (in the case of ‘powered’ gates) different equivalent Markov models can be written down for the same Hodgkin-Huxley model.
  3. A widely followed but rarely-expressed convention for the Markov diagrams.

Updated Papers

Recently my colleague Angus M Brown at the University of Nottingham published some updates to the landmark (and Nobel Prize-winning) series of papers that Hodgkin & Huxley published in the Journal of Physiology in the 1940s and 1950s.

I think this is great: there have been changes in conventions since the original papers were published (in particular the sign of voltage/membrane potential, which is also now measured relative to earth rather than relative to the resting potential). The changes are discussed in the editorial accompanying the papers.

So there is an updated set of equations for the squid axon action potential model in the ‘translated’ 1952 paper. I’d strongly recommend pointing students and colleagues towards this translation instead of the original paper, as I think it resolves a lot of the confusion that can arise when trying to do all these convention changes (which you might not even be aware of!) in your head.

Equivalent Markov Models for Hodgkin-Huxley Structures

Quite a few people have asked me how the equivalence between Hodgkin-Huxley gating variables and Markov models works. So I thought I’d sketch it out – my effort is in Figure 1.


Figure 1: Two equivalent Markov model structures for a Hodgkin-Huxley squared gate. Firstly, two Hodgkin-Huxley gates multiplied together can be represented as a square Markov diagram, with the second gating process acting identically regardless of whether the first gating process is ‘closed’ or ‘open’. The rest of the diagram shows a simplification that can be made, due to symmetry, if these gating processes are identical (i.e. a squared gate like m^2), reducing the Markov diagram to a linear chain with one fewer state and correspondingly adjusted rates.

The same procedure holds for higher powers, so that an m^3 Markov model has three closed states and rates of 3α, 2α, α on the top and β, 2β, 3β on the bottom.

Now, you haven’t really gained anything by re-casting the model like this (it adds an equation in the case I’ve shown above). But if you come to modify the model so that something (like a drug) interacts with just one of the states, breaking all the independence and symmetry in a Hodgkin-Huxley model, then being able to write out the equivalent Markov model is necessary to simulate what happens.
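If you want to convince yourself of the equivalence numerically, here is a minimal sketch in Python (with made-up, voltage-independent rates α and β) that integrates an m² Hodgkin-Huxley gate and the reduced three-state chain from Figure 1 side by side, and checks that the open probabilities agree:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 0.5, 0.2  # made-up, voltage-independent rates for illustration

# Hodgkin-Huxley: dm/dt = alpha*(1 - m) - beta*m, and the open probability is m^2.
def hh_rhs(t, y):
    m = y[0]
    return [alpha * (1.0 - m) - beta * m]

# Equivalent 3-state chain: C2 -(2a)-> C1 -(a)-> O, with b and 2b going back.
def markov_rhs(t, y):
    c2, c1, o = y
    return [-2 * alpha * c2 + beta * c1,
            2 * alpha * c2 - (alpha + beta) * c1 + 2 * beta * o,
            alpha * c1 - 2 * beta * o]

t = np.linspace(0.0, 20.0, 200)
hh = solve_ivp(hh_rhs, (0, 20), [0.0], t_eval=t, rtol=1e-8, atol=1e-10)
mk = solve_ivp(markov_rhs, (0, 20), [1.0, 0.0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)

# Open probabilities should match to within solver tolerance.
print(np.max(np.abs(hh.y[0] ** 2 - mk.y[2])))
```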

A convention for Markov diagrams of voltage-gated ion channels

It took me quite a while working in the field of cardiac electrophysiology to realise that an implicit (and, as such, not always followed!) convention in these diagrams is to have voltage underlying the axes, as I’ve sketched in Fig 2.


Figure 2: a convention to lay out these Markov diagrams such that rates which increase with increasing voltage go from left to right and bottom to top. This is useful as one can tell at a glance “if voltage is high, the states towards the top right will be more occupied” and “if voltage is low the states towards the bottom left will be more occupied”.

So if you have a choice* try to present the diagrams such that increasing voltage pushes you right and up!

*sometimes, for reasons of clarity, it is nice to present bits of the diagram as mirror images, in which case I’ll let you off.

Three potassium channel modelling papers

Just a quick note to tell you about our latest batch of papers in the “Heart By Numbers” special issue of Biophysical Journal, which arose from a meeting in Berlin last year. They are all about IKr, the rapid delayed rectifier potassium current carried by the hERG potassium ion channel, which is important in drug safety because pharmaceutical drugs can block it and lead to dangerous disturbances in your heart’s rhythm. The papers are all open access and supported with open code and datasets (see our GitHub site).

The first paper, led by Michael Clerx, is called “Four Ways to Fit an Ion Channel Model”. We thought that a recipe or tutorial on how to make a Hodgkin-Huxley model for a voltage-gated ion channel was needed: why the protocols for ‘activation’, ‘deactivation’, ‘inactivation’ etc. are designed the way they are; how you can assemble data from them to fit time-constant and steady-state curves; and which numerical schemes are sensible to use for doing this. In a lot of ways this paper is the partner to Kylie’s sinusoidal clamp paper that we published last year: it explains how people have typically done the fitting, and then weighs up the pros and cons by trying four different methods on the same dataset. The basic methods are:

  • Method 1: Fitting the model’s analytic equations for steady states and time constants (e.g. m∞ and τm for a gating variable m) directly to experimentally derived current-voltage (I-V) and time-constant-voltage (τ-V) curves. This is generally a bad idea if you have more than one gate: for IKr at least, the timescales of gating aren’t as independent as this approach really needs them to be (e.g. inactivation mucks up your measurement of activation). You can show the problem quite easily by running a model forwards with some assumed gating properties, then simulating the experiment and its post-processing to emulate the experimental measurement of steady-state and time-constant gating properties: you get back different properties than the underlying equations suggest! Yet this method remains widely used in the literature.
  • Method 2: One way round the problem above is to simulate the experiments, derive I-V and τ-V curves by post-processing the simulated currents, and then fit these to data. This gives a model whose parameters are consistent with the data, but the optimisation problem becomes very hard because the post-processing steps that derive the I-V and τ-V curves introduce jumps and discontinuities into the objective function. ‘Divide by a small number’ effects in the post-processing, or long time constants fitted to flattish data, can also put a lot of error on certain summary-curve data points.
  • Method 3: Simulating the traditional experiments and fitting directly to the experimental current traces. This works surprisingly well: we had thought this optimisation would be difficult and need careful weighting of the traces and suchlike, but even a simple approach worked well, and better than Methods 1 and 2 (a minimal sketch of this idea follows the list below).
  • Method 4: Kylie’s sinusoidal clamp method. The experiments are very short (hence easier than Method 3), and the fitting is simple and reliable.
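To make the whole-trace idea of Method 3 concrete, here is a hedged sketch in Python: it simulates a single Hodgkin-Huxley gate under a toy step protocol, adds synthetic noise, and then least-squares fits the rate parameters directly to that current trace. The protocol, rate parameterisation and noise level are all invented for illustration; this is not the model or data from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def voltage(t):
    """Toy protocol: hold at -80 mV, step to +20 mV between 100 and 400 ms."""
    return 20.0 if 100.0 <= t < 400.0 else -80.0

def rates(V, p):
    """Assumed exponential voltage dependence: a = p0*exp(p1*V), b = p2*exp(-p3*V)."""
    return p[0] * np.exp(p[1] * V), p[2] * np.exp(-p[3] * V)

def simulate(p, times, g=1.0, E_rev=-85.0):
    """Current = conductance * open probability * (V - E_rev) for one HH gate."""
    def rhs(t, y):
        a, b = rates(voltage(t), p)
        return [a * (1.0 - y[0]) - b * y[0]]
    sol = solve_ivp(rhs, (times[0], times[-1]), [0.0], t_eval=times,
                    method="LSODA", rtol=1e-7, atol=1e-9, max_step=1.0)
    v = np.array([voltage(t) for t in times])
    return g * sol.y[0] * (v - E_rev)

times = np.arange(0.0, 500.0, 0.5)                 # ms
true_p = np.array([2e-3, 0.05, 5e-3, 0.03])
rng = np.random.default_rng(0)
data = simulate(true_p, times) + rng.normal(0.0, 1.0, times.size)  # synthetic 'recording'

# Fit all four rate parameters directly to the noisy trace (Method 3 in spirit).
fit = least_squares(lambda p: simulate(p, times) - data,
                    x0=true_p * 2.0, bounds=(1e-7, 1.0))
print(fit.x, true_p)
```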

A few highlights to look out for in this one:

  • A phase-plane plot method really helps in understanding the traditional protocol design; see Fig 1:

Fig 1. A phase-portrait approach to understanding traditional voltage-clamp protocols for a two-gate (activation/deactivation and inactivation/recovery) Hodgkin-Huxley model. Panels A-F show one phase plane for each voltage-clamp protocol. See the paper supplement for a slightly more mad 3D version with voltage as the z-axis!

  • Some tips and tricks for reliable parameter optimisation, including the role of parameter transforms, sensible bounds for voltage-dependent ion channel rate parameters, and the solver tolerance issue discussed on this blog before (a tiny sketch of the log-transform idea follows below).
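As a flavour of the parameter-transform tip, here is a minimal hedged sketch of optimising rate parameters in log-space; the objective function is a stand-in (not anything from the paper), and the point is simply that working in log-parameters keeps the rates positive and puts their very different scales on a comparable footing.

```python
import numpy as np
from scipy.optimize import minimize

def sum_of_squares(p):
    """Stand-in objective; in practice this would be the error between a
    simulated and a recorded current trace for rate parameters p."""
    return float(np.sum((p - np.array([2e-3, 0.05, 5e-3, 0.03])) ** 2))

# The optimiser proposes log-parameters; we exponentiate before 'simulating'.
result = minimize(lambda x: sum_of_squares(np.exp(x)),
                  x0=np.log([1e-2, 1e-2, 1e-2, 1e-2]),
                  method="Nelder-Mead")
best_params = np.exp(result.x)
print(best_params)
```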

The second paper is by Chon Lok Lei. He created a new variant of Method 4 discussed above: instead of using the sinusoidal clamp, he made a similar ‘Staircase Protocol’ out of conventional steps and ramps, which can run on a high-throughput 384-well Nanion SyncroPatch machine. This has some distinct advantages: Chon Lok got 124 good cell recordings in one run of the machine, in about half an hour; compare this to manual patch, where getting 10 nice stable cell recordings might take a week or more! So we hope this will let people create models of variants of this channel (e.g. mutations), and of what happens to it under drug action, much more easily than before. Some important points:

  • Fitting the same model using the staircase method returned very similar parameter sets to our previous manual-patch sinusoidal study.
  • There are some nice, mostly automated quality-control criteria used in addition to the usual series resistance, seal resistance and cell capacitance checks.
  • There’s some work on leak current and drug-subtraction to isolate IKr that will be worth looking at if you want to repeat such escapades.
  • We did 8 validation protocols alongside the staircase protocol that we used for fitting, and got some excellent predictions; see for instance Fig. 2:


    Fig. 2: A zoom-in on one of the panels in Figure 4 of Chon’s paper. The black trace at the top is the applied voltage clamp (actually made out of a series of linear ramps and steps rather than a curve, due to machine constraints). The blue trace is the recorded data in one of the 384 wells, and the red trace is the model prediction based on a fit to the Staircase Protocol applied in the same well. The green areas zoom in on the ‘spikes’ at the start of the action potential, something the model captured extremely well, which we weren’t necessarily expecting.

  • A hierarchical statistical model allowed us to describe the variability that we saw fitting to data across all the wells.
  • The paper includes the invention (we think!) of the ‘reversal ramp’, a special bit of the protocol designed to estimate an artefact in the experiment: the error in the applied voltage clamp (a bit like a leak step does for leak current). Due to the parallel nature of the experiment and the shared solutions/temperature, this allows an estimate of this error in each individual well, as shown in Fig 10 of the paper (small, but as we’ll see, maybe now the main source of error…).
  • Some strong circumstantial evidence that the principal variability in kinetic parameters fitted to different cells is due to this voltage clamp error rather than extrinsic variability between cells (see Fig. 9 of the paper).

The third and final paper, again by Chon Lok, uses the Staircase Protocol method introduced above to examine the temperature dependence of the parameters we get back. In essence, we repeat the experiment at five distinct temperatures, so we can plot out how the parameters vary with temperature. We can then compare this with how they should ‘theoretically’ vary with temperature following one of:

  1. The commonly-used Q10 formulation.
  2. The sometimes-used (what we called ‘typical’) Eyring formulation (not quite as per Wikipedia: previous ion channel modelling adds a voltage-dependent term to the Wikipedia definition; see our paper for the typical definition).
  3. The Generalised Eyring formulation, never used in ion channel modelling before (as far as we know!).

The punchline is that there is strong evidence that Q10s are insufficient, as they don’t give you any temperature dependence on the parameter that governs the voltage dependence of each rate, yet a strong dependence certainly appears in our refitted models. The typically used Eyring formulation does confer some temperature dependence on this parameter, but is actually no better, as that dependence can sometimes be constrained to be in the wrong direction (increasing with temperature instead of decreasing, or vice versa). The Generalised Eyring formulation, previously used in some electrical-engineering battery applications, appears to work pretty well. So there are significant consequences for any work relying on previously used formulations for the temperature dependence of voltage-dependent rate parameters.
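For orientation, here is a hedged sketch of the three functional forms in Python, written from the standard textbook definitions rather than the exact parameterisation in the paper (which also carries voltage-dependent terms that are omitted here); all parameter names are illustrative.

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
kB_over_h = 2.08e10  # Boltzmann constant over Planck constant, 1/(K s)

def q10_rate(T, k_ref, Q10, T_ref=298.15):
    """Q10: the rate scales by a fixed factor for every 10 K change."""
    return k_ref * Q10 ** ((T - T_ref) / 10.0)

def typical_eyring_rate(T, dH, dS):
    """Textbook Eyring: k proportional to T * exp(dS/R - dH/(R*T))."""
    return kB_over_h * T * np.exp(dS / R - dH / (R * T))

def generalised_eyring_rate(T, a, b):
    """Generalised Eyring: ln(k/T) = a + b/T, with a and b left unconstrained."""
    return T * np.exp(a + b / T)
```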

If we are right, and the temperature dependence doesn’t follow a Q10, then we expect Q10 estimates to be protocol-dependent; and indeed we can explain some (but not all) of the “this paper’s Q10 estimate was higher than that paper’s” discrepancies by running previously used protocols from the literature through our full temperature model.

The practical upshot is that you probably need to do the experiments at 37°C to be confident you are predicting well at body temperature.

In the discussion there are some interesting questions raised about whether the model breaking the first-principles ‘typical Eyring’ formulation is actually a signpost that the model has some shortcomings/discrepancy: would a better mathematical model follow the first-principles trends? Or is the need for a Generalised Eyring relation just a sign that the ‘single energy barrier’ picture the first-principles rate equations assume is too simple for a large protein complex, which may shift its preferred conformations with temperature? Open question!

Anyway, hope you enjoy the papers: one, two, three; comments welcome below.


Job Available: Research Software Engineer

Edit: please note the deadline for applications has now passed.

Here’s a great opportunity to join our team at the University of Nottingham on a 5 year Research Software Engineering (RSE) position:
https://www.nottingham.ac.uk/jobs/currentvacancies/ref/IS046219
The deadline for applications is 3rd March 2019.

People might not be too familiar with the concept of an ‘RSE’: UK funders and universities have recently, and formally, recognised that there needs to be a career path for people specialising in software engineering for research codes (see the UK RSE homepage for more info). So if you really enjoy the coding aspect of your research, this might be the job for you! I am looking for someone with a PhD involving any aspect of computational research who still wants to be involved in doing cutting-edge research and publishing, whilst focussing on professional levels of software development to underpin our research and its application to real-world problems. You will spend 80% of your time working closely with a team of postdoctoral researchers in our group in the Centre for Mathematical Medicine & Biology within Mathematical Sciences, along with industrial collaborators and experimental academic groups. This is a unique arrangement: your job is based in the university’s Digital Research Team, so that you can continue to learn software skills there for the remaining 20% of your time. At the end of the 5 years your role will turn into a permanent job within the university’s RSE group.

Some of the software you’ll be involved in developing includes:

  • Chaste – our C++ cardiac electrophysiology simulator, the back-end simulation engine for a lot of our work.
  • libcellml – the API for the CellML markup language, giving access to hundreds of electrophysiology models in a standardised format.
  • The Cardiac Electrophysiology Web Lab – a web-based platform for documenting and reproducing the behaviour of models in different experimental simulations, comparing against experimental data, and documenting the process of deriving a model from data.
  • PINTS – Probabilistic Inference on Noisy Time Series (Python). Our main optimisation/inference software, which forms the statistical back end for the Web Lab.
  • ApPortal – a web portal for safety pharmacology that provides a user-friendly interface to a Chaste-based simulator to run simulations, store and view their results.

Please don’t be put off if you don’t know all those languages and tools inside out now; we are looking for someone who can learn them and has enthusiasm for open source software in research. Please get in touch with me if you have any questions about the job.

There are some other jobs being advertised in Nottingham’s Digital Research Team that you might like to have a look at too!


Postdoctoral Research Positions Available

N.B. 16th Jan 2019 – the applications for these positions are now closed.

This post is to let people know about some opportunities for postdoctoral research here in the Centre for Mathematical Medicine & Biology, based in Mathematical Sciences, University of Nottingham.

We are starting a new Wellcome Trust funded project in 2019 entitled “Developing cardiac electrophysiology models for drug safety studies”. As you’ll see from some of the previous blog articles, and associated work on the CiPA project, we’ve been working on ways to understand and predict how certain pharmaceutical drugs are associated with increased pro-arrhythmic risk by using mathematical models of ion channel currents and cardiac cells.

This is an exciting opportunity to get involved in a substantial research team that will consist of at least three postdoctoral research associates, together with me and a dedicated research software engineer. We’ll be working closely with industry labs (in particular at GlaxoSmithKline and Roche), pharmaceutical regulators (including the FDA), and academic labs (in particular Teun de Boer’s lab at UMC Utrecht in the Netherlands and Adam Hill & Jamie Vandenberg’s labs at the Victor Chang Cardiac Research Institute, Sydney, Australia). So candidates must enjoy teamwork and collaborative inter-disciplinary projects, and be prepared to get into the lab and do some of their own experimental work to really get to grips with what we are trying to simulate.

There are quite a few challenges in this area, aspects of which you’ll find discussed in various past blog posts, but here are a few that we will be tackling in this grant:

  • Designing experiments to get information on drug binding to ion channels, and making sure that they can run on high-throughput automated patch clamp machines.
  • Simulating drug effects at the whole-cell level.
  • Comparing whole-cell simulations with later safety test results, to see whether we quantitatively understand what the drugs are doing, or whether we see unexpected things.
  • Considering/building all of this in a probabilistic/statistical framework that accounts for uncertainty and variability in a lot of different aspects:
    • our datasets / biological systems,
    • model parameters,
    • model structures themselves,
    • discrepancy between models and reality,
    • our subsequent decisions / risk predictions.
  • And working on open source software tools that everyone can use for these tasks.

If any of that sounds interesting to you – please do apply! Feel free to contact me with informal enquiries.

There is a relevant job advert out now for fixed-term 3-year positions, available to start as soon as possible; details here: Research Associate/Fellow – up to two postdoctoral research positions (closing date for applications is 16th Jan 2019):

  • Either for people with experience in computational modelling of biological systems;
    OR
  • for people with experience in statistics/inference – in which case no previous experience of biological research is required.

There are also PhD positions available; see “Optimising experiments for developing ion channel models”, which is fully funded for UK and EU students. Details here: https://www.nottingham.ac.uk/mathematics/prospective/research/maml.aspx


Numerical errors from ODE solvers can mess up optimisation and inference very easily

(Subtitle: “when you’ve got a lot of data points!”)

This post is for people interested in doing optimisation or inference with Differential Equation (DE) models.

If you are a statistician, you might be used to treating model simulators as black boxes where you can stick parameters in and get outputs out. This post is about why you need to be a bit careful with that. It examines one of the quirks of working with differential equations and optimisation/inference that my team have bumped into in a few distinct situations – including simulators given out for public optimisation competitions! I haven’t seen it referred to in any of the textbooks, but please let me know in the comments if you have.

Below in Figure 1 is a likelihood surface (or objective function) that we came across (more on the definition of it below), as a function of one of the parameters in a cardiac action potential model. We are trying to find the maximum in this case.


Figure 1: A likelihood function of one parameter in a cardiac model. Urgh! For those who are interested in the detail, this is comparing a 10 kHz time-sampled action potential voltage recording, with a realistic level of noise, to a simulated action potential. Figure thanks to Ross!

Not all optimisers rely on a nice smooth gradient, but they do all enjoy one! This is a horrible surface: no matter what kind of optimiser you use, it is going to struggle to move around and explore something that looks like this. The red line marks the data-generating value in this case, and the green line is somewhere we got stuck. Remember this is only in one dimension; now imagine it in ten or more…

To make matters worse, we might want to run MCMC on this surface to get a posterior distribution for the parameter on the x-axis. We see that there are ‘spikes’ of about 40 log-likelihood units. What does that mean? Well, if we are talking about the probability of accepting a move from a spike down into a trough in Figure 1 using an MCMC Metropolis-Hastings step, a drop of 40 log-likelihood units equates to an acceptance ratio of exp(-40) ≈ 4×10^-18! Our chains will certainly get stuck and never move across this space nicely.
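To see just how fatal a 40-unit drop is, here is a toy Metropolis-Hastings accept/reject step (assuming a symmetric proposal; the function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_accept(log_lik_current, log_lik_proposed):
    """Accept with probability min(1, exp(proposed - current)); symmetric proposal assumed."""
    return rng.uniform() < np.exp(log_lik_proposed - log_lik_current)

print(np.exp(-40.0))  # ~4e-18: the acceptance ratio quoted above
print(sum(mh_accept(0.0, -40.0) for _ in range(100_000)))  # expect zero acceptances
```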

Is the problem really so non-linear that it has thousands of local minima, or modes in a posterior, as this suggests? Thankfully, the answer is ‘No’!

After a bit of detective work we figured out that this bumpy surface is entirely due to numerical error in our simulation, and it should be completely smooth! The example is from an Ordinary Differential Equation (ODE) solver but Partial Differential Equation (PDE) solvers will also give the same behaviour.

Most of the time we can’t derive exact analytic solutions to our models’ equations, so we have to use numerical solution techniques; the simplest of these is the Forward Euler method. These numerical methods give you only an approximation to the solution of your equations, which you try to ensure is accurate by spending more computational effort: taking finer time steps in your approximation and checking that the solution is converging to an answer. As you keep refining, the solution should change less and less.

Broadly speaking, we can classify ODE solvers into: fixed-step methods, like Forward Euler, that take the same size time step as they go along; and adaptive-step methods that alter the length of the time step, possibly on every step. When gradients are changing fast, adaptive solvers take lots of small steps to stay accurate; when gradients are changing more slowly, they take fewer, larger steps so the computation runs fast.

With an adaptive time-step solver you give a target tolerance (relative to the size of the variables (RelTol), or absolute (AbsTol), or typically both) and it refines the steps to try to maintain these tolerances on each step. In the example here we used CVODE, but another common one is Matlab’s ode15s stiff ODE solver. The same principle applies if you use a fixed-step solver; it would just need smaller time steps rather than tighter tolerances.
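For concreteness, here is how the same relative/absolute tolerance pair is passed to an adaptive stiff solver in SciPy (standing in for CVODE or ode15s purely for illustration, on a made-up toy equation):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return [-50.0 * (y[0] - np.cos(t))]   # a mildly stiff toy equation

loose = solve_ivp(rhs, (0.0, 10.0), [0.0], method="BDF", rtol=1e-4, atol=1e-6)
tight = solve_ivp(rhs, (0.0, 10.0), [0.0], method="BDF", rtol=1e-7, atol=1e-9)
print(len(loose.t), len(tight.t))  # tighter tolerances -> more, smaller steps
```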

In Figure 2 we show the shift in the likelihood surface as we tighten the ODE solver tolerances (Relative, Absolute in brackets above each plot):


Figure 2: Tolerances tightening from Rel = 1e-4, Abs = 1e-6 through to 1e-7, 1e-9. We need solutions under these tolerances to get a nice smooth likelihood surface in this problem. The figure is taken from Ross’s PhD thesis. Note that, as well as getting smoother, the log-likelihood curve shifts vertically (the y-axes are different in these plots), and the difference in terms of probabilities would be huge.

In general, RelTol = 10^-4 and AbsTol = 10^-6 are not unreasonable choices for a single ODE solve; indeed, Matlab’s defaults are RelTol = 10^-3 (less precise than Figure 1) and AbsTol = 10^-6 (the same).

So why is this effect so big?

Likelihoods

A very common assumption is that a ‘data generating process’ (the way that you end up with observations that some instrument records) is:

data = reality + observation noise on each data point

Another common assumption is that the noise here is Gaussian, independent across data points and identically distributed (it comes from a Normal distribution with the same mean (often zero) and standard deviation); this is known as “i.i.d.” Gaussian noise.

A third assumption is that ‘reality’ in our equation above is given by the smooth noise-less model output. This is obviously a bit shaky (because no model is perfect), but the idea is you can still get useful information on the parameters within your model if it is close enough (N.B. bear in mind you might get overconfident in the wrong answer – this is a good paper explaining why). So we then commonly have:

data = model output + i.i.d. Gaussian noise.

We can then write down a log-likelihood (log just because it is easier to work with numerically…) and we end up with a big sum of squared errors across our whole time trace:

log L(θ, σ | x) = −(N/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^{N} (x_i − μ_i(θ))²

(see the Wikipedia derivation from the Normal probability density function). Here μ_i(θ) is the model output at the i-th time point given some parameter set θ, x_i is the observed data point at that time, and σ is the i.i.d. noise parameter.
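In code, that log-likelihood is just a few lines. This sketch assumes a `simulate(params, times)` function that returns the model output at the sampled times; the name is illustrative, not from any particular package.

```python
import numpy as np

def log_likelihood(params, sigma, times, data, simulate):
    """i.i.d. Gaussian log-likelihood: sum of log Normal densities of the residuals."""
    residuals = data - simulate(params, times)
    n = len(data)
    return (-0.5 * n * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.sum(residuals ** 2) / sigma ** 2)
```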

The reason we have come across this problem perhaps more than other people isn’t that we have been sloppier with our ODE solving (we put some effort into doing that relatively well!), but that we are dealing with problems involving high-frequency samples of time-series data. We commonly work with a few seconds of 10 kHz time-sampled recordings, so we can end up with around 100,000 data points.

Why is this important? Say that, because of numerical error, your simulation and the data diverge by >=1.1 standard deviations of the noise level at some time point (P < 0.86 in a statistics table) instead of >=1 standard deviation (P < 0.84). Per point that is only a small change, but if it happens at 100 time points the probabilities multiply: 0.86^100 ≈ 5×10^-7 versus 0.84^100 ≈ 3×10^-8, a difference of more than a factor of ten. So the apparent likelihood that your parameters gave rise to the data has shifted by over an order of magnitude, purely because of a numerical error that had a relatively small effect on the solution at each time point. As we have more and more data points this effect is exaggerated, until even tiny shifts in the solution have huge effects on probabilities, as we saw above.

There’s a slight subtlety here: you might have already checked that your solution converges to within a pre-specified tolerance for a given parameter set. For example, a modeller might say “I don’t care about changes of less than 0.01% in these variables, so I set the solver tolerance accordingly”, and then a statistician treating the simulator as a black box might just run with that. But what is important here is not the error bound on the individual variables at a given parameter set, but the error bound that the likelihood transformation of those variables demands, in terms of reducing jumps in the likelihood as a function of the parameters. So the modeller and statistician need to talk to each other here to work out whether there might be problems…

Conclusions

I wouldn’t be surprised to find that this is one of the reasons people have found the need to use things like genetic algorithms in cardiac problems. But I suspect the information content, un-identifiability and parameter scalings are also very important factors in that.

So what should you do?

Examine 1D likelihood slices. We can fix all parameters and vary one at a time, plotting out the likelihood as above. Then tighten your solver tolerances until 1D slices of your likelihood are smooth enough for optimisers/MCMC to navigate easily. Whatever this extra accuracy costs in additional solver time will be repaid by far more efficient optimisation/inference (in the examples we have looked at, the worst case is approximately just 10% more solve time for a solve with 10x tighter tolerances, resulting in a thousands-of-times speed-up in optimisation). A minimal sketch of this kind of slice check follows below.
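Here is a minimal sketch of the 1D slice check, using a toy exponential-decay model so it runs standalone; the intention is that you swap in your own simulator and likelihood.

```python
import numpy as np
from scipy.integrate import solve_ivp

true_k, sigma = 1.0, 0.05
times = np.linspace(0.0, 5.0, 2000)          # lots of data points, on purpose
rng = np.random.default_rng(1)
data = np.exp(-true_k * times) + rng.normal(0.0, sigma, times.size)

def simulate(k, rtol, atol):
    sol = solve_ivp(lambda t, y: [-k * y[0]], (0.0, 5.0), [1.0],
                    t_eval=times, rtol=rtol, atol=atol)
    return sol.y[0]

def log_lik(k, rtol, atol):
    r = data - simulate(k, rtol, atol)
    return (-0.5 * len(r) * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.sum(r ** 2) / sigma ** 2)

ks = np.linspace(0.95, 1.05, 201)
loose = [log_lik(k, 1e-3, 1e-6) for k in ks]   # may show solver-induced jumps
tight = [log_lik(k, 1e-9, 1e-11) for k in ks]  # should look smooth
# Plot both against ks, and keep tightening until the slice looks smooth.
```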

What about thinning the data? A way to get rid of this problem would be to remove a lot of data points, something that’s called ‘thinning’ in the MCMC literature (although that usually refers to thinning the MCMC chain afterwards rather than the data). I’m not a fan of doing it to the data: it artificially throws away information and makes your posteriors wider than they should be according to your noise model. You might not completely trust your likelihood/noise model, but thinning doesn’t automatically fix that either!

Finally, this post wouldn’t be complete without mentioning that there is a relatively new way to consider this effect, which explicitly admits that we have error from the solver, and treats it as a random variable (which can be correlated through time):

data = model + numerical approximation error + observation noise.

Dealing with this formulation is the field known as probabilistic numerics (see the homepage for this), and you can use it to make MCMC take account of numerical errors. In our case, I expect this approach could help by effectively warming up the likelihood (c.f. tempering methods) and making the spikes relatively smaller and easier to jump across. Interestingly, in the plots above you can see that this is not independent noise as you move through parameter space; I don’t know enough about the subject to say whether that has been handled or not! Whether it is worth the extra complication I’m not convinced: maybe for big PDE models it will be worth the trouble, but for the reasonably lightweight ODEs involved in single-cell cardiac work it is probably just worth solving more accurately all the time.


A report on the Toronto CiPA in-silico modelling workshop

This is a long-awaited follow-up to this post advertising the workshop. Apologies that it has taken so long; wanting to write something about the next meeting, which we just had, reminded me that I never posted all these talks.

On the 9th November 2017 the CiPA in-silico Working Group hosted a meeting in Toronto General Hospital that the Cardiac Physiome meeting kindly let us run as a satellite meeting – a big thanks to them for organising the logistics of room booking etc.

The in-silico aspects of CiPA are led by the FDA Center for Drug Evaluation and Research. You might find the background document that we put together useful if you haven’t heard of CiPA. I’ve also written a post on the idea before. The FDA team let me organise this long half day with the following aims:

  • To inform the cardiac modelling community about the CiPA initiative.
  • To get feedback on the FDA’s work to date.
  • To draw attention to other research in the area they might not have been familiar with.
  • To discuss the next steps.
  • To spark more research and collaborations in this area.

It was a fascinating and thought-provoking day, with plenty of work for us to do, as you’ll see in my summing-up slides from the end of the day. Here are links to all the talks, which you can also find in a Figshare Collection.

