Job Available: Research Software Engineer

Here’s a great opportunity to join our team at the University of Nottingham in a 5-year Research Software Engineering (RSE) position:
https://www.nottingham.ac.uk/jobs/currentvacancies/ref/IS046219
The deadline for applications is 3rd March 2019.

People might not be too familiar with the concept of an ‘RSE’: UK funders and universities have recently, and formally, recognised that there needs to be a career path for people specialising in software engineering for research codes – see the UK RSE homepage for more info. So if you really enjoy the coding aspect of your research, this might be the job for you! I am looking for someone with a PhD involving any aspect of computational research who still wants to be involved in doing cutting-edge research and publishing, whilst focussing on professional levels of software development to underpin our research and its application to real-world problems. For 80% of your time you will be working closely with a team of post-doctoral researchers in our group in the Centre for Mathematical Medicine & Biology within Mathematical Sciences, but in a unique arrangement the job itself is based in the university’s Digital Research Team, so that you can continue to develop your software skills there. At the end of the 5 years the role will turn into a permanent job within the university’s RSE group.

Some of the software you’ll be involved in developing includes:

  • Chaste – our C++ cardiac electrophysiology simulator, the back-end simulation engine for a lot of our work.
  • libcellml – the API for the CellML markup language, giving access to hundreds of electrophysiology models in a standardised format.
  • The Cardiac Electrophysiology Web Lab – a web-based platform for documenting and reproducing the behaviour of models in different experimental simulations, comparing against experimental data, and documenting the process of deriving a model from data.
  • PINTS – probabilistic inference for noisy time series (Python). Our main optimisation/inference software, which forms the statistical back-end for the Web Lab.
  • ApPortal – a web portal for safety pharmacology that provides a user-friendly interface to a Chaste-based simulator to run simulations, store and view their results.

Please don’t be put off if you don’t know all of those languages and tools inside out now; we are looking for someone who can learn them and who has enthusiasm for open-source software in research. Please get in touch with me if you have any questions about the job.

There are some other jobs being advertised in Nottingham’s Digital Research Team that you might like to have a look at too!


Postdoctoral Research Positions Available

N.B. 16th Jan 2019 – the applications for these positions are now closed.

This post is to let people know about some opportunities for postdoctoral research here in the Centre for Mathematical Medicine & Biology, based in Mathematical Sciences, University of Nottingham.

We are starting a new Wellcome Trust funded project in 2019 entitled “Developing cardiac electrophysiology models for drug safety studies”. As you’ll see from some of the previous blog articles, and associated work on the CiPA project, we’ve been working on ways to understand and predict how certain pharmaceutical drugs are associated with increased pro-arrhythmic risk by using mathematical models of ion channel currents and cardiac cells.

This is an exciting opportunity to get involved in a substantial research team that will consist of at least three postdoctoral research associate positions, together with me and a dedicated research software engineer. We’ll be working closely with industry labs, in particular at GlaxoSmithKline and Roche; pharmaceutical regulators, including the FDA; and academic labs – in particular Teun de Boer’s lab at UMC Utrecht in the Netherlands and Adam Hill & Jamie Vandenberg’s labs at the Victor Chang Cardiac Research Institute, Sydney, Australia. So candidates must enjoy teamwork and collaborative inter-disciplinary projects, and be prepared to get into the lab and do some of their own experimental work to really get to grips with what we are trying to simulate.

There are quite a few challenges in this area, aspects of which you’ll find discussed in various past blog posts, but here are a few that we will be tackling in this grant:

  • Designing experiments to get information on drug binding to ion channels, and making sure that they can run on high-throughput automated patch clamp machines.
  • Simulating drug effects at the whole-cell level.
  • Comparing whole-cell simulations with later safety test results, to see whether we quantitatively understand what the drugs are doing, or whether we see unexpected things.
  • Considering/building all of this in a probabilistic/statistical framework that accounts for uncertainty and variability in a lot of different aspects:
    • our datasets / biological systems,
    • model parameters,
    • model structures themselves,
    • discrepancy between models and reality,
    • our subsequent decisions / risk predictions.
  • And working on open source software tools that everyone can use for these tasks.

If any of that sounds interesting to you – please do apply! Feel free to contact me with informal enquiries.

There is a relevant job advert out now for fixed-term, 3-year positions, available to start as soon as possible; details here: Research Associate/Fellow – up to two postdoctoral research positions (closing date for applications is 16th Jan 2019):

  • Either for people with experience in computational modelling of biological systems;
    OR
  • for people with experience in statistics/inference – in which case no previous experience of biological research is required.

There are also PhD positions available – see “Optimising experiments for developing ion channel models”, which is fully funded for UK and EU students; details here: https://www.nottingham.ac.uk/mathematics/prospective/research/maml.aspx


Numerical errors from ODE solvers can mess up optimisation and inference very easily

(Subtitle: “when you’ve got a lot of data points!”)

This post is for people interested in doing optimisation or inference with Differential Equation (DE) models.

If you are a statistician, you might be used to treating model simulators as black boxes where you can stick parameters in and get outputs out. This post is about why you need to be a bit careful with that. It examines one of the quirks of working with differential equations and optimisation/inference that my team have bumped into in a few distinct situations – including simulators given out for public optimisation competitions! I haven’t seen it referred to in any of the textbooks, but please let me know in the comments if you have.

Below in Figure 1 is a likelihood surface (or objective function) that we came across (more on the definition of it below), as a function of one of the parameters in a cardiac action potential model. We are trying to find the maximum in this case.


Figure 1: a likelihood function of one parameter in a cardiac model. Urgh! For those that are interested in the detail, this is comparing a 10kHz time-sampled action potential voltage recording with a realistic level of noise to a simulated action potential. Figure thanks to Ross!

Not all optimisers rely on a nice smooth gradient – but they all enjoy one! This is a horrible surface, and no matter what kind of optimiser you use it is going to struggle to move around and explore something that looks like this. The red line marks the data-generating value in this case, and the green line is somewhere we got stuck. Remember this is only in one dimension – now imagine it in ten or more…

To make matters worse, we might want to run MCMC on this surface to get a posterior distribution for the parameter on the x-axis. We can see that there are ‘spikes’ of about 40 log-likelihood units. What does that mean? Well, if we are talking about the probability of accepting a move from a spike down into a trough in Figure 1 using an MCMC Metropolis-Hastings step, that equates to an acceptance ratio of exp(−40) ≈ 4×10^-18! Our chains will certainly get stuck and never move across this space nicely.
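
For anyone who wants to check that arithmetic, here is a minimal sketch of the Metropolis-Hastings acceptance step (plain Python; the two log-likelihood values are hypothetical, chosen simply to reproduce the 40-unit gap described above):

```python
import numpy as np

# Hypothetical log-likelihoods: the chain currently sits on a 'spike' and the
# proposed parameter value lands in a neighbouring 'trough' 40 units lower.
log_like_current = -1000.0
log_like_proposed = -1040.0

# Metropolis-Hastings with a symmetric proposal: accept the move with
# probability min(1, exp(proposed - current)) when working in log-likelihoods.
acceptance_prob = min(1.0, np.exp(log_like_proposed - log_like_current))
print(acceptance_prob)   # ~4.2e-18, so the move is essentially never accepted
```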

Is the problem really so non-linear that it has thousands of local minima, or modes in a posterior, as this suggests? Thankfully, the answer is ‘No’!

After a bit of detective work we figured out that this bumpy surface is entirely due to numerical error in our simulation, and that it should be completely smooth! The example here is from an Ordinary Differential Equation (ODE) solver, but Partial Differential Equation (PDE) solvers will show the same behaviour.

Most of the time we can’t derive exact analytic solutions to our models’ equations, so we have to use numerical solution techniques; the simplest of these is the Forward Euler method. These numerical methods give you only an approximation to the solution of your equations, which you try to make accurate by spending more computational effort (taking finer time steps) and checking that the solution is converging to an answer. As you keep refining, the solution should change less and less.

Broadly speaking, we can classify ODE solvers into fixed-step methods, like the Forward Euler method, which take the same size time step as they go along; and adaptive-step methods, which alter the length of the time steps, possibly on every step. When gradients are changing fast, adaptive solvers take lots of small steps to stay accurate; when gradients are changing more slowly, they take fewer but larger steps to keep the computation fast.

With an adaptive time-step solver you specify a target tolerance – relative to the size of the variables (RelTol), absolute (AbsTol), or, typically, both – and it refines the steps to try to maintain these tolerances on each step. In the example here we used CVODE, but another common choice is Matlab’s ode15s stiff ODE solver. The same principle applies if you use a fixed-step solver; it would simply need smaller time steps rather than tighter tolerances.
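
As a concrete illustration of those knobs, here is a sketch using SciPy’s `solve_ivp` rather than CVODE or ode15s, with a toy two-variable ODE standing in for the cardiac model (all values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# A toy two-variable relaxation system standing in for the cardiac model.
def rhs(t, y, k):
    v, w = y
    return [k * (w - v), -w]

t_obs = np.linspace(0, 10, 1001)   # the times at which we 'record' the solution

# Loose tolerances, roughly like the Matlab defaults mentioned below...
loose = solve_ivp(rhs, (0, 10), [0.0, 1.0], args=(50.0,), method="LSODA",
                  rtol=1e-3, atol=1e-6, t_eval=t_obs)

# ...versus much tighter tolerances: more internal steps, smaller numerical error.
tight = solve_ivp(rhs, (0, 10), [0.0, 1.0], args=(50.0,), method="LSODA",
                  rtol=1e-7, atol=1e-9, t_eval=t_obs)

# The per-time-point difference looks harmless on its own...
print(np.max(np.abs(loose.y[0] - tight.y[0])))
```

The point of the rest of this post is that a per-point error which looks harmless can still wreck the likelihood once you have tens of thousands of data points.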

In Figure 2 we show the shift in the likelihood surface as we tighten the ODE solver tolerances (Relative, Absolute in brackets above each plot):


Figure 2: Tolerances tightening from Rel=1e-4, Abs=1e-6 through to 1e-7, 1e-9. We need solutions at these tighter tolerances to get a nice smooth likelihood surface in this problem. Figure is taken from Ross’ PhD thesis. Note that, as well as getting smoother, the log-likelihood curve is shifting vertically (the y-axes differ between the plots), and the difference in terms of probabilities would be huge.

In general, RelTol = 10^-4 and AbsTol = 10^-6 are not unreasonable choices for a single ODE solve; indeed, Matlab’s defaults are RelTol = 10^-3 (less precise than Figure 1) and AbsTol = 10^-6 (the same).

So why is this effect so big?

Likelihoods

A very common assumption is that a ‘data generating process’ (the way that you end up with observations that some instrument records) is:

data = reality + observation noise on each data point

Another common assumption is that the noise is Gaussian, independent on each data point, and identically distributed – that is, it comes from a Normal distribution with the same mean (often zero) and the same standard deviation at every point. This is known as “i.i.d.” Gaussian noise.

A third assumption is that ‘reality’ in our equation above is given by the smooth, noise-less model output. This is obviously a bit shaky (because no model is perfect), but the idea is that you can still get useful information on the parameters within your model if it is close enough (N.B. bear in mind that you might become overconfident in the wrong answer – this is a good paper explaining why). So we then commonly have:

data = model output + i.i.d. Gaussian noise.

We can then write down a log-likelihood (log just because it is easier to work with numerically…) and we end up with a big sum of squared errors across the whole of our time trace:

\log L(\theta, \sigma \mid \mathbf{x}) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(x_i - \mu_i(\theta)\right)^2

(see the Wikipedia derivation from the Normal probability density function). Here we take the mean \mu_i(\theta) to be the model output at the i-th time point given some parameter set \theta; the x_i to be the observed data points; and \sigma is the i.i.d. noise parameter.
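
In code, that likelihood is only a few lines. A sketch in Python, assuming you already have the data and the solver output at the same observation times (the function name is mine, not from any particular package):

```python
import numpy as np

def gaussian_log_likelihood(data, model_output, sigma):
    """i.i.d. Gaussian log-likelihood of `data` given simulated `model_output`."""
    residuals = np.asarray(data) - np.asarray(model_output)
    n = residuals.size
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(residuals**2) / sigma**2)
```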

The reason we have come across this problem perhaps more than other people isn’t that we have been sloppier with our ODE solving (we put some effort into doing that relatively well!), but that we are dealing with problems involving high-frequency samples of time-series data. We commonly work with a few seconds of 10 kHz time-sampled recordings, so we can end up with around 100,000 data points.

Why is this important? Say that, because of numerical error, your simulation sits 1.1 noise standard deviations away from a data point instead of 1 (per-point factors of about 0.86 versus 0.84 from a statistics table). A difference like that looks harmless for any single point, but if it happens at 100 time points the factors multiply: 0.86^100 ≈ 5×10^-7 versus 0.84^100 ≈ 3×10^-8. The plausibility of your parameters having given rise to the data has changed by roughly an order of magnitude, purely because of a numerical error that had a relatively small effect on the solution at each time point. As we get more and more data points this effect is exaggerated, until even tiny shifts in the solution have huge effects on probabilities, as we saw above.
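
A quick sanity check of that compounding, using the unrounded values from a normal table (Phi(1.1) ≈ 0.8643, Phi(1.0) ≈ 0.8413):

```python
print(0.8643 ** 100)   # ~4.6e-07 (rounds to the 5e-7 quoted above)
print(0.8413 ** 100)   # ~3.1e-08 (rounds to the 3e-8 quoted above)
```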

There’s a slight subtlety here: you might have already checked that your solution converges to within a pre-specified tolerance for a given parameter set. For example, a modeller might say “I don’t care about changes of less than 0.01% in these variables, so I set the solver tolerance accordingly”, and then a statistician treating the simulator as a black box might just run with that. But what matters here is not the error bound on the individual variables at a given parameter set; it is the error bound that the likelihood transformation of these variables demands, i.e. one tight enough to remove spurious jumps in the likelihood as a function of the parameters. So the modeller and the statistician need to talk to each other to work out whether there might be problems…

Conclusions

I wouldn’t be surprised to find that this is one of the reasons people have found the need to use things like genetic algorithms in cardiac problems. But I suspect the information content, un-identifiability and parameter scalings are also very important factors in that.

So what should you do?

Examine 1D likelihood slices. Fix all of the parameters except one, vary that one, and plot out the likelihood as above. Then tighten your solver tolerances until the 1D slices of your likelihood are smooth enough for optimisers/MCMC to navigate easily. Whatever this extra accuracy costs in additional solver time will be repaid by far more efficient optimisation/inference (in the examples we have looked at, the worst case was approximately 10% more solve time for a solve with 10x tighter tolerances, resulting in a thousands-of-times speed-up in optimisation).
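
Here is a minimal end-to-end sketch of that check (Python/SciPy; a toy exponential-decay model and illustrative tolerance values stand in for the real cardiac model, and the amount of jaggedness you see will depend on your solver and problem):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def simulate(k, times, rtol, atol):
    """Toy 'model': dv/dt = -k*v, solved numerically and returned at the recording times."""
    sol = solve_ivp(lambda t, y: [-k * y[0]], (times[0], times[-1]), [1.0],
                    t_eval=times, method="LSODA", rtol=rtol, atol=atol)
    return sol.y[0]

def log_likelihood(data, sim, sigma):
    n = data.size
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum((data - sim)**2) / sigma**2

# Synthetic 'recording': exact solution for k = 2 plus i.i.d. Gaussian noise,
# sampled at lots of time points (which is what makes the effect bite).
rng = np.random.default_rng(1)
times = np.linspace(0, 5, 100_000)
sigma = 0.02
data = np.exp(-2.0 * times) + rng.normal(0, sigma, times.size)

# 1D likelihood slice through the true value, at loose and at tight tolerances
# (a few hundred ODE solves, so this takes a little while to run).
ks = np.linspace(1.99, 2.01, 101)
loose = [log_likelihood(data, simulate(k, times, 1e-3, 1e-6), sigma) for k in ks]
tight = [log_likelihood(data, simulate(k, times, 1e-8, 1e-10), sigma) for k in ks]

# The loose-tolerance slice typically shows spurious bumps and a vertical offset;
# tighten until the slice looks smooth before trusting any optimiser or MCMC on it.
plt.plot(ks, loose, label="rtol=1e-3, atol=1e-6")
plt.plot(ks, tight, label="rtol=1e-8, atol=1e-10")
plt.xlabel("parameter k")
plt.ylabel("log-likelihood")
plt.legend()
plt.show()
```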

What about thinning the data? One way to get rid of this problem would be to remove a lot of the data points – something that is called ‘thinning’ in the MCMC literature (although there it usually refers to the MCMC chain afterwards rather than the data). I’m not a fan of doing it to the data: it artificially throws away information and makes your posteriors wider than they should be according to your noise model. You might not completely trust your likelihood/noise model, but thinning doesn’t automatically fix that either!

Finally, this post wouldn’t be complete without mentioning that there is a relatively new way to consider this effect, which explicitly admits that we have error from the solver, and treats it as a random variable (which can be correlated through time):

data = model + numerical approximation error + observation noise.

Dealing with this formulation is the field known as probabilistic numerics (see the homepage for this), and you can use it to make MCMC take account of numerical errors. In our case, I expect this approach could help by effectively warming up the likelihood (c.f. tempering methods), making the spikes relatively smaller and easier to jump over. Interestingly, in the plots above you can see that this is not independent noise as you move through parameter space; I don’t know enough about the subject to say whether that has been handled or not! I’m not convinced it is worth the extra complication, though. Maybe for big PDE models it will be worth the trouble, but for the reasonably lightweight ODEs involved in single-cell cardiac work it is probably just worth solving more accurately all the time.


A report on the Toronto CiPA in-silico modelling workshop

This is a long-awaited follow-up to this post advertising the workshop. Apologies that it has taken so long – wanting to write something about the next meeting, which we have just had, reminded me that I never posted all these talks.

On the 9th November 2017 the CiPA in-silico Working Group hosted a meeting at Toronto General Hospital, which the Cardiac Physiome meeting kindly let us run as a satellite meeting – a big thanks to them for organising the logistics of room booking etc.

The in-silico aspects of CiPA are led by the FDA Center for Drug Evaluation and Research. You might find the background document that we put together useful if you haven’t heard of CiPA. I’ve also written a post on the idea before. The FDA team let me organise this long half day with the following aims:

  • To inform the cardiac modelling community about the CiPA initiative.
  • To get feedback on the FDA’s work to date.
  • To draw attention to other research in the area they might not have been familiar with.
  • To discuss the next steps.
  • To spark more research and collaborations in this area.

It was a fascinating and thought-provoking day, with plenty of work for us to do, as you’ll see in my summing-up slides from the end of the day. Here are links to all the talks, which you can also find in a Figshare Collection.


Short and rich voltage-clamp protocols

This is a quick post to tell you all about Kylie’s new paper on sinusoidal-wave-based voltage-clamp protocols, which has been published in the Journal of Physiology, and there’s an associated commentary from Ele Grandi. In the paper, some ideas that we’ve been thinking about for almost 10 years – since I was working with Martin Fink and Denis Noble back in the Oxford Physiology department in 2008-2010 – have finally come to fruition.

In their 2009 simulation study comparing the properties of Hodgkin-Huxley vs. Markov models (well worth a read), Martin and Denis discussed how an optimised short voltage-step protocol might contain enough information to fit the parameters of models (termed an ‘identifiable’ model/protocol combination) in a relatively short amount of experimental time.

We picked up on these ideas when Kylie came to look at models of hERG. We originally wanted to study different modes of drug binding with hERG and design experiments to quantify that. Unfortunately, it rapidly became clear there was little consensus on how to model hERG itself, before even considering drug binding.


Figure 1: seven existing model structures that described the 29 models for IKr (a.k.a.* hERG) that we could find in the literature.

OK, so we have lots of different structures, but does this matter? Or do they all give similar predictions? Unfortunately – as we show in Figure 1 of the paper – quite a wide variety of different current profiles are predicted, even by models for the same species, cell type and temperature.

So Kylie’s PhD project became a challenge of deciding where we should start! What complexity do we need in a model of hERG (for studying its role in the action potential and what happens when it is blocked), and how should we build one?

These questions link back to a couple of my previous posts – how complex should a model be, and what experiments do we need to do to build it? Kylie’s thesis looked at the question of how we should parameterise ion channel models, and even how to select the right ion channel model to use in the first place. We had quite a lot of fun designing new voltage-clamp protocols and then going to a lab to test them out. The full story is in Kylie’s thesis, and in the paper we present a simpler version that just shows how well you can do with one basic model.

Kylie did a brilliant job: as well as doing all the statistical inference and mathematical modelling work, she went and learnt how to do whole-cell patch-clamp experiments herself, at Teun de Boer’s lab and also with Adam Hill and Jamie Vandenberg. Patch clamp is an amazing experimental technique where you effectively get yourself an electrode in the middle of a cell; my sketch of how it works is in Figure 2.

[Figure 2 images: cell-attached (top) and whole-cell (bottom) patch-clamp configurations]
Figure 2: the idea behind patch clamp. Top: you first make a glass pipette by pulling a glass tube whilst heating it, melting and stretching it until it breaks into two really fine pipettes (micrometres across at the tip). You put one electrode in a bath with the cells, then you put another electrode in your pipette with some liquid; attach the pipette to a rig to get fine control of where it points; and lower it down under a microscope onto the surface of a single cell. You then apply suction, clamping the pipette to the cell (this is commonly done by literally sucking on a straw!). Bottom: you then keep sucking and rupture the cell membrane; this has cunningly got you an electric circuit that effectively lets you measure voltages or currents across the cell membrane. You can clamp a certain voltage at the amplifier, which it does by injecting current to keep a stable voltage (or any voltage as a function of time). The idea is that the current the amplifier has to inject is the exact opposite of what the cell itself is allowing across the membrane (give or take some compensation for the other electrical components I have put in my diagram). So you can now measure the current flowing through the cell’s ion channels as a function of voltage.

We decided that the traditional approach of specific fixed voltage steps (which neatly de-couples time- and voltage-dependence) was a bit slow and tricky to assemble into a coherent model. So we made up some new sinusoid-based protocols for the patch clamp amplifier to rapidly probe the voltage- and time-dynamics of the currents. Things we learnt along the way:

  • Whilst it might work in theory for the model, you also might fry the cells in real life (our first attempts at protocols went up to +100mV for extended periods of time, which cells don’t really like).
  • HEK and CHO cells have their own voltage-dependent ion channels (which we call ‘endogenous’ voltage-dependent currents) which you can activate and mix up with the current you are interested in.
  • It’s really important to learn what all the dials on a patch clamp amplifier do(!), and adjust for things like liquid junction potential.
  • Synthetic data studies (simulating data, adding realistic levels of noise, and then attempting to recover the parameters you used) are a very useful tool for designing a good experiment. You can add in various errors and biases and see how sensitive your answers are to these discrepancies.
  • Despite conductance and kinetics being theoretically separable/identifiable – and practically so in synthetic studies – we ended up with some problems here when using real data (e.g. the kinetics can make the channel ‘twice as open’ while the conductance halves; you can see this is impossible if the channels are already over 50% open, but quite plausible if only 5% of the channels are open – see the short argument in symbols just after this list). We re-designed the voltage clamp to include an activation step that provokes a nice large current with a large open probability, based on hERG currents people had observed before.
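
To make that last point concrete, here is the scaling argument in symbols – a sketch written in the standard conductance-times-open-probability form of the current (the symbols g, P_open and E_K here are generic, not the paper’s exact notation):

I(t) = g \, P_{\mathrm{open}}(t) \, \bigl(V(t) - E_K\bigr), \qquad \tfrac{g}{2} \cdot \bigl[2\,P_{\mathrm{open}}(t)\bigr] \cdot \bigl(V(t) - E_K\bigr) = g \, P_{\mathrm{open}}(t) \, \bigl(V(t) - E_K\bigr),

so the recorded current only constrains the product g·P_open(t). The rescaled alternative requires 2 P_open(t) ≤ 1 at all times, which is only possible if the channel never opens beyond 50%; provoking a large open probability with the added activation step therefore rules out such rescalings and pins down the conductance.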

But to cut a very long story short – it all worked better than we could have imagined. Figure 3 shows the voltage protocol we put into the amplifier, and the currents we recorded in CHO cells that were over-expressing hERG. We then fitted our simple Hodgkin-Huxley-style model to the current, essentially by varying all of its parameters to get the best possible fit (there is a sketch of what this looks like in code below, after Figure 4).


Figure 3: Model Training/Fitting/Calibrating our model to currents provoked under the sinusoidal voltage clamp. Top: the whole training dataset. Bottom: a zoom in on the highlighted region of the top plot.

So, a great fit – but that doesn’t mean anything on its own (see my previous post on that). So we then tested the model in situations where we would like it to make good predictions: here, under cardiac action potentials and also slightly awry ones, see Figure 4.


Figure 4: Model Evaluation/Validation. The red current trace was recorded in the same cell as the sinusoidal data shown above. The blue trace is predicted using the parameters from the fitting exercise in Figure 3, and isn’t fitted to this recording.

We repeated this in a few different cells, which lets us look at cell-to-cell variability in the ion channel kinetics by examining changes in the model’s parameters. Anyway, that is hopefully enough to whet your appetite for reading the whole paper! As usual, all the data, code, and (perhaps unusually) the fitting algorithms are available for anyone to play with too.
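
For readers wondering what “varying all of its parameters to get the best possible fit” looks like in practice, here is a heavily simplified sketch (Python/SciPy; a generic one-gate Hodgkin-Huxley-style current, a toy step protocol and a plain least-squares fit, rather than the actual model, protocol or fitting algorithm from the paper, which are all in the repositories linked above):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

E_K = -85.0   # reversal potential in mV (illustrative value)

def v_clamp(t):
    """Toy voltage-clamp protocol: hold at -80 mV with a 2 s step to +20 mV."""
    return 20.0 if 1.0 <= t <= 3.0 else -80.0

def simulate_current(params, times):
    """One Hodgkin-Huxley gate x with a voltage-dependent steady state; I = g*x*(V - E_K)."""
    g, v_half, slope, tau = params
    def dxdt(t, x):
        x_inf = 1.0 / (1.0 + np.exp(-(v_clamp(t) - v_half) / slope))
        return [(x_inf - x[0]) / tau]
    sol = solve_ivp(dxdt, (times[0], times[-1]), [0.0], t_eval=times,
                    method="LSODA", rtol=1e-8, atol=1e-10, max_step=0.01)
    v = np.array([v_clamp(t) for t in times])
    return g * sol.y[0] * (v - E_K)

# Synthetic 'recording' standing in for the real data: true parameters plus i.i.d. noise.
times = np.linspace(0.0, 5.0, 2000)
true_params = np.array([0.1, -10.0, 8.0, 0.3])   # conductance, V_half, slope, time constant
data = simulate_current(true_params, times) + np.random.default_rng(0).normal(0, 0.5, times.size)

# Fit by minimising the sum of squared errors (equivalent to maximising the i.i.d.
# Gaussian log-likelihood from the previous post), starting from a perturbed guess.
def sum_of_squares(p):
    if p[3] <= 0:            # guard: a non-positive time constant is unphysical
        return np.inf
    return np.sum((simulate_current(p, times) - data) ** 2)

fit = minimize(sum_of_squares, x0=true_params * 1.3, method="Nelder-Mead")
print(fit.x)   # should end up close to true_params (takes a few seconds to run)
```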

Wish list: if you can help with any of these, let’s collaborate! Please get in touch.

  • A better understanding of identifiability of conductance versus kinetic parameters, and how to ensure both.
  • A way to design voltage clamp protocols for particular currents (this was somewhat hand tuned).
  • A way to select between different model structures at the same time as parameterising them.
  • A way to say how ‘similar’ (in terms of model dynamics?) a validation protocol is to a training protocol. If validation was too similar to training, it wouldn’t really be validation… we think our case above is ‘quite different’, but could we quantify this?
  • A way to quantify/learn ‘model discrepancy’ and to put realistic probabilistic bounds on our model predictions when we are using the models “out in the wild” in future unknown situations.
Footnotes

*hERG is the gene that encodes for the mRNA that is translated into a protein that assembles into homotetramers (groups of four of the same thing stuck together) in the cell membrane. This protein complex forms the main part of a channel in the cell membrane (Kv11.1) that carries the ionic current known as the “rapid delayed rectifier potassium current” or IKr. So you can see why we abuse the term hERG and say things like “hERG current”!


Paper repository fatigue

(This is a non-cardiac-modelling rant, probably specific to UK researchers, so feel free to skip it!)

I am a massive fan of open access publication and open science in general. It is quite sensible that the public gets to read all of the research they are funding, and it has to be the best way to share ideas and let science happen without any barriers.

But I’m sure we aren’t doing it very efficiently at the moment: some very well-intentioned policies are making publishing a real nuisance in the UK.

Here’s a list of all the places where the papers we are publishing at the moment are ending up. When you google a paper title, you are likely to find hits for all of these; you have to hope that they all ended up being the same final version of the paper, and you’re never really sure which one is best to look at:

  • On ArXiv/BioRxiv – I think preprint servers are a great way to make a version of your paper open access, get it google-able, and get feedback on it. So we put up papers on BioRxiv, and try (but sometimes forget) to make sure they are updated to match the final accepted article in a journal.
  • In the actual journal – this is generally the nicest to look at version (but not always!). My funders like to have their articles under a CC-BY licence, which is a great idea, but it generally means a Gold route for open access with quite high fees.
  • On PubMed Central (PMC) or Europe’s version (or, usually for us, both) – PMC is funded by the NIH, the USA’s main medical research agency, and any papers they funded also have to deposit a version with PMC. This applies even if the paper is open access – fair enough – I imagine it’s a good idea to have an ‘official backup’ in case a journal shuts down for any reason. Since my funders go for Gold open access it is somewhat redundant, and it confuses people when they search on PubMed and have to choose which version to look at, but at least it is a big repository with almost all biomedical research in one place (give or take the European version – please just pool your resources, EU and USA! Does Brexit mean we’ll also have to put a version in a UK-PMC too? Probably… groan).
  • On a university/institutional archive – the UK powers-that-be have (very sensibly) decided that (almost) all papers have to be available open access to be eligible for consideration as part of the next Research Excellence Framework, which decides how much public money universities get. As far as I can work out, every single university has decided (very un-sensibly) that the only way to ensure this is to launch its own individual paper repository, where it also hosts another open-access version of the paper. Ours is called ePrints.
  • On a couple of other institutional archives – nearly all my papers have co-authors in other universities, who also have to submit the paper again to their own institutional repositories!

So every single university in the UK is creating the IT databases, infrastructure and front-ends to host large numbers of research papers in perpetuity, as well as employing staff to curate them and to chase academics to put the right versions into the right forms at the right time with the right licences to keep everyone happy – mostly for papers that are already available open access elsewhere. This is an insane use of resources.

The only thing I can suggest is that the UK REF people clarify that any paper that has a final open-access accepted text in arXiv/BioRxiv/a journal/PMC/EuropePMC is automatically eligible. For papers that doesn’t cover, the universities need to get together to either beef up support for subject-specific repositories, or just fund a single central repository between them, with a good user interface, to cover any subjects that fall down the cracks between the reliable subject repositories above. Maybe this is the sort of thing our highly-paid VCs and UUK should be organising 😉


Postdoc position available

N.B. This position is now closed.

Another short post, just to advertise a postdoctoral research associate (PDRA) position available to work with me. It’s a two-year position, based at the Centre for Mathematical Medicine & Biology at the University of Nottingham, with the potential to visit labs around the world to get hands-on experimental experience.

The subject of the postdoc will be designing new experiments to get as much information as we can on how pharmaceutical compounds bind to ion channels and affect the currents that flow through them. As part of this I would like to explore how to characterise and quantify model discrepancy, and to design experiments for that, as well as looking at model selection and parameterisation of the models.

We’ll then use the data generated by these experiments to build models of pharmaceutical drug interactions with ion currents, working with partners in pharmaceutical companies and international regulators to test out these new models. The project will involve learning some of the modelling behind electrophysiology and pharmacology, as well as data science/statistics behind designing experiments and choosing models and parameters. It will build on some of our recent work on novel experimental design, some of which is available in a preprint here.

See http://www.nottingham.ac.uk/jobs/currentvacancies/ref/SCI308217 for details and links to apply. Closing date is Wed 4th October. Informal enquiries to me are welcome (but applications have to go through the official system on the link above).

Gary