Archive for the ‘mathematical biology’ Category

SMB conference in Utah

July 21, 2017

I have just attended the annual meeting of the Society for Mathematical Biology in Salt Lake City. Before reporting on my experiences there I will start with an apparently unrelated subject. I studied and did my PhD at the University of Aberdeen and I am very satisfied with the quality of the mathematical education I received there. There was one direction in mathematics in which I did not get as much training as I later needed, namely analysis and partial differential equations. This was not the fault of the lecturers, who had enough expertise and enthusiasm for teaching these things. It was the fault of the students. Since it was a small department the students chose which advanced courses were to be offered from a list of suggestions. Most of the students (all of them except me?) did not like analysis and so most of the advanced courses with a significant analysis content were not chosen. By the time I got my first postdoc position I had become convinced that in the area I was working in the research based on differential geometry was a region which was overgrazed and the thing to do was to apply PDE. Fortunately the members of the group of Jürgen Ehlers which I joined were of the same opinion. The first paper I wrote after I got there was on a parabolic PDE, the Robinson-Trautman equation. I had to educate myself for this from books and one of the sources which was most helpful was a book by Avner Friedman. Here is the connection to the conference. Avner Friedman, now 84 but very lively, gave a talk. Mathematically the subject was free boundary value problems for reaction-diffusion-advection equations, Friedman’s classic area. More importantly, these PDE problems came from the modelling of combination therapies for cancer. The type of therapies being discussed included antibodies to CTLA-4 and PD-1 and Raf inhibitors, subjects I have discussed at various places in this blog. 
I was impressed by how much at home Friedman seemed to be with these biological and medical themes. This is perhaps not so surprising in view of the fact that he founded the Mathematical Biosciences Institute in Ohio and was its director from 2002 to 2008. More generally I was positively impressed by the extent to which the talks I heard at this conference showed a real engagement with themes in biology and medicine and gave evidence of many collaborations with biologists and clinicians. There were also quite a number of people there who are employed at hospitals and have medical training. As an example I mention Gary An from the University of Chicago, who is trained as a surgeon and whose thoughtful comments about the relations between mathematics, biology and medicine I found very enlightening. There was a considerable thematic overlap with the conference on cancer immunotherapy I attended recently.

Subgroups are now being set up within the Society to concentrate on particular subjects. One of these, the Immunobiology and Infection Subgroup, had its inaugural meeting this week and of course I went. There I and a number of other people learned a basic immunological fact which we found very surprising. It is well known that the thymus decreases in size with age, so that presumably our capacity to produce new T cells is constantly decreasing. The obvious assumption, which I had made, is that this is a fairly passive process related to the fact that many systems in our bodies run down with age. We learned from Johnna Barnaby that the situation may be very different. It may be that the decrease in the size of the thymus is due to active repression by sex hormones. She is involved in work on therapy for prostate cancer and said it has been found that in men with prostate cancer who are given drugs to reduce their testosterone levels the thymus increases in size.

There were some recurrent themes at the conference. One was oncolytic viruses. These are genetically engineered viruses intended to destroy cancer cells. In modelling these it is common to use extensions of the fundamental model of virus dynamics, which is very familiar to me. For instance Dominik Wodarz talked about some ODE models for oncolytic viruses in vitro where the inclusion of interferon production in the model leads to bistability. (In response to a question from me he said that it is a theorem that without the interferon bistability is impossible.) I was pleased to see how, more generally, a lot of people were using small ODE models making real contact with applications. Another recurrent theme was that there are two broad classes of macrophages which may be favourable or unfavourable to tumour growth. I should find out more about that. Naveen Vaidya talked about the idea that macrophages in the brain may be a refuge for HIV. Actually, even after talking to him I am not sure whether it should not rather be microglia than macrophages. James Moore talked about the question of how T cells are eliminated in the thymus or become Tregs. His talk was more mathematical than biological but it has underlined once again that I want to understand more about positive and negative selection in the thymus and the related production of Tregs.
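For readers unfamiliar with it, the fundamental model of virus dynamics mentioned above is, in its standard form, a three-variable ODE system for uninfected cells, infected cells and free virus; the oncolytic virus models discussed in the talks are extensions of this core. Here is a minimal sketch (with illustrative parameter values, not fitted to any particular system) showing how the infection persists when the basic reproductive ratio exceeds one:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters (illustrative values only): production and death of uninfected
# cells, infection rate, death of infected cells, virus production, clearance.
lam, d, beta, a, k, u = 10.0, 0.1, 0.01, 0.5, 5.0, 3.0

def virus_dynamics(t, z):
    x, y, v = z  # uninfected cells, infected cells, free virus
    return [lam - d*x - beta*x*v,
            beta*x*v - a*y,
            k*y - u*v]

sol = solve_ivp(virus_dynamics, (0.0, 200.0), [100.0, 0.0, 1.0],
                rtol=1e-8, atol=1e-10)
x_end, y_end, v_end = sol.y[:, -1]

# Basic reproductive ratio; for R0 > 1 the solution converges to the
# positive (infected) steady state computed below.
R0 = lam*beta*k/(d*a*u)
x_star = a*u/(beta*k)
y_star = (lam - d*x_star)/a
v_star = k*y_star/u
```

The convergence to the infected steady state is by damped oscillations, which is one reason why extra mechanisms (such as the interferon term in Wodarz's models) are needed to get genuinely new behaviour like bistability.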

On a quite different subject there were two plenary talks related to coral reefs. A theme which is common in the media is that of the damage to coral due to climate change. Of course this is dominated by politics and usually not accompanied by any scientific information on what is going on. The talk of Marissa Blaskett was an excellent antidote to this kind of thing and now I have really understood something about the subject. The other talk, by Mimi Koehl, was less about the reefs themselves than about the way in which the larvae of snails which graze on the coral colonize the reef. I found the presentation very impressive because it started with a subject which seemed impossibly complicated and showed how scientific investigation, in particular mathematical modelling, can lead to understanding. The subject was the interaction of microscopic swimming organisms with the highly turbulent flow of sea water around the reefs. Investigating this involved, among other things, measuring the turbulent flow around the reef using Doppler velocimetry; reconstructing this flow in a wave tunnel containing an artificial reef in order to study the small-scale structure of the transport of chemical substances by the flow; going out and checking the results by following dye put into the actual reef; and many other things. Last but not least there was the mathematical modelling. The speaker is a biologist and she framed her talk with slides showing how many (most?) biologists hate mathematical modelling and how she loves it.


Conference on mathematical analysis of biological interaction networks at BIRS

June 9, 2017

I have previously written a post concerning a meeting at the Banff International Research Station (BIRS). This week I am at BIRS again. Among the topics of the talks were stochastic chemical reaction networks, using reaction networks in cells as computers and the area of most direct relevance to me, multiple steady states and their stability in deterministic CRN. Among the most popular examples occurring in the talks in the latter area were the multiple futile cycle, the MAPK cascade and the EnvZ/OmpR system. In addition to the talks there was a type of event which I had never experienced before called breakout sessions. There the participants split into groups to discuss different topics. The group I joined was concerned with oscillations in phosphorylation cycles.

In the standard dual futile cycle we have a substrate which can be phosphorylated up to two times by a kinase and dephosphorylated again by a phosphatase. It is assumed that the (de-)phosphorylation is distributive (the number of phosphate groups changes by one each time a substrate binds to an enzyme) and sequential (the phosphate groups are added in one order and removed in the reverse order). A well-known alternative to this is processive (de-)phosphorylation, where the number of phosphate groups changes by two in one encounter between a substrate and an enzyme. It is known that the double phosphorylation system with distributive and sequential phosphorylation admits reaction constants for which there are three steady states, two of which are stable. (From now on I only consider sequential phosphorylation here.) By contrast the corresponding system with processive phosphorylation always has a unique steady state. Through the talk of Anne Shiu here I became aware of the following facts. In a paper by Suwanmajo and Krishnan (J. R. Soc. Interface 12:20141405) it is stated that in a mixed model with distributive phosphorylation and processive dephosphorylation periodic solutions occur as a result of a Hopf bifurcation. The paper does not present an analytical proof of this assertion.
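To make the structure of the distributive sequential system concrete, here is a mass-action sketch of the dual futile cycle with substrate forms S0, S1, S2, kinase E, phosphatase F and four enzyme-substrate complexes. The rate constants below are placeholders chosen for illustration, not the values for which bistability is known to occur; the point is only the shape of the network and its conservation laws (total substrate, total kinase, total phosphatase).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rate constants (illustrative, not the bistable values from the literature)
k1, k2, k3 = 1.0, 1.0, 1.5   # E + S0 <-> ES0 -> E + S1
k4, k5, k6 = 1.0, 1.0, 1.5   # E + S1 <-> ES1 -> E + S2
h1, h2, h3 = 1.0, 1.0, 1.5   # F + S2 <-> FS2 -> F + S1
h4, h5, h6 = 1.0, 1.0, 1.5   # F + S1 <-> FS1 -> F + S0

def rhs(t, z):
    S0, S1, S2, E, F, ES0, ES1, FS1, FS2 = z
    v1 = k1*E*S0 - k2*ES0      # net binding of E and S0
    v2 = k3*ES0                # catalysis: ES0 -> E + S1
    v3 = k4*E*S1 - k5*ES1
    v4 = k6*ES1
    v5 = h1*F*S2 - h2*FS2
    v6 = h3*FS2
    v7 = h4*F*S1 - h5*FS1
    v8 = h6*FS1
    return [-v1 + v8,                    # S0
            v2 - v3 + v6 - v7,           # S1
            v4 - v5,                     # S2
            -v1 + v2 - v3 + v4,          # E
            -v5 + v6 - v7 + v8,          # F
            v1 - v2,                     # ES0
            v3 - v4,                     # ES1
            v7 - v8,                     # FS1
            v5 - v6]                     # FS2

z0 = [2.0, 0.0, 0.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0, 500), z0, rtol=1e-9, atol=1e-12)
S0, S1, S2, E, F, ES0, ES1, FS1, FS2 = sol.y[:, -1]
total_substrate = S0 + S1 + S2 + ES0 + ES1 + FS1 + FS2
total_E = E + ES0 + ES1
total_F = F + FS1 + FS2
```

The processive variant would replace the two kinase steps by a single encounter adding two phosphate groups, which is what destroys the multistationarity.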

It is a well-known open question whether there are periodic solutions in the case that the modifications are all distributive. It has been claimed in a paper of Errami et al. (J. Comp. Phys. 291, 279) that a Hopf bifurcation had been discovered in this system but the claim seems to be unjustified. In our breakout session we looked at whether oscillations might be exported from the mixed model to the purely distributive model. We have not yet obtained any definitive results. There were also discussions on effective ways of detecting Hopf bifurcations, for instance by using Hurwitz determinants. It is well known that oscillations in the purely distributive model, if they exist, do not persist in the Michaelis-Menten limit. I learned from Anne Shiu that it is similarly the case that the oscillations in the mixed model are absent from the Michaelis-Menten system. This result came out of some undergraduate research she supervised. Apart from these specific things I learned a lot just from being in the environment of these CRN people.
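The idea behind the Hurwitz determinant test is the following. For a characteristic polynomial \lambda^n + a_1\lambda^{n-1} + \dots + a_n, all roots have negative real parts if and only if all the Hurwitz determinants are positive, and a candidate for a Hopf bifurcation is a parameter value where the determinant of order n-1 vanishes while the remaining conditions hold. Here is a generic numerical sketch of this test (not the CRN-specific symbolic computations discussed at the meeting):

```python
import numpy as np

def hurwitz_determinants(coeffs):
    """Hurwitz determinants of the monic polynomial
    lambda^n + a1*lambda^(n-1) + ... + an, with coeffs = [a1, ..., an].
    Entry (i, j) of the Hurwitz matrix is a_{2j - i} (1-based indices),
    with a_0 = 1 and a_k = 0 outside the range 0..n."""
    n = len(coeffs)
    a = [1.0] + list(coeffs)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2*(j + 1) - (i + 1)
            if 0 <= k <= n:
                H[i, j] = a[k]
    return [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]

# (lambda^2 + 1)(lambda + 1): purely imaginary pair, so the second
# Hurwitz determinant a1*a2 - a3 vanishes -- a Hopf candidate.
dets_hopf = hurwitz_determinants([1.0, 1.0, 1.0])

# (lambda + 1)^3: all roots in the left half plane, all determinants positive.
dets_stable = hurwitz_determinants([3.0, 3.0, 1.0])
```

In the CRN setting the coefficients are polynomial functions of the reaction constants, so in practice one studies the vanishing of these determinants symbolically rather than numerically.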

Yesterday was a free afternoon and I went out to look for some birds. I saw a few things which were of interest to me, one of which was a singing Tennessee warbler. This species has a special significance for me for the following reason. Many years ago when I still lived in Orkney I got an early-morning phone call from Eric Meek, the RSPB representative. He regularly checked a walled garden at Graemeshall for migrants. On that day he believed he had found a rarity and wanted my help in identifying it and, if possible, catching it. We did catch it and it turned out to be a Tennessee warbler, the third ever recorded in Britain. That was big excitement for us. I had not seen Eric for many years and I was sad to learn now that he died a few months ago at a relatively young age. The name of this bird misled me into thinking that it was at home in the southern US. In fact the name just comes from the fact that the first individual to be described, a migrant, was found in Tennessee. The breeding range is much further north, especially in Canada. Thus it is quite appropriate that I should meet it here.

Mathematical models for T cell activation

May 2, 2017

The proper functioning of our immune system is heavily dependent on the ability of T cells to identify foreign substances and take appropriate action. For this they need to be able to distinguish the foreign substances (non-self) from those coming from substances belonging to the host (self). In the first case the T cell should be activated, in the second not. The process of activation is very complicated and takes days. On the other hand it seems that an important part of the distinction between self and non-self only takes a few seconds. A T cell must scan the surface of huge numbers of dendritic cells for the presence of the antigen it is specific for and it can only spare very little time for each one. Within that time the cell must register that there is something relevant there and be induced to stay longer, instead of continuing with its search.

A mathematical model for the initial stages of T cell activation (the first few minutes) was formulated and studied by Altan-Bonnet and Germain (PLoS Biol. 3(11), e356). They were able to use it successfully to make experimental predictions, which they could then confirm. The predictions were made with the help of numerical simulations. From the point of view of the mathematician a disadvantage of this model is its great complexity. It is a system of more than 250 ordinary differential equations with numerous parameters. It is difficult even to write the definition of the model on paper or to describe it completely in words. It is clear that such a system is difficult to study analytically. Later Francois et al. (PNAS 110, E888) introduced a radically simplified model for the same biological situation which seemed to show a comparable degree of effectiveness to the original model in fitting the experimental data. In fact the simplicity of the model even led to some new successful experimental predictions. (Altan-Bonnet was among the authors of the second paper.) This is the kind of situation I enjoy, where a relatively simple mathematical model suffices for interesting biological applications.

In their paper Francois et al. not only do simulations but also carry out interesting analytical calculations for their model. On the other hand they do not attempt to use these calculations to formulate and prove mathematical theorems about the solutions of the model. Together with Eduardo Sontag I have now written a paper where we obtain some rigorous results about the solutions of this system. In the original paper the only situation considered is that where the system has a unique steady state and any other solution converges to that steady state at late times. We have proved that there are parameters for which there exist three steady states. A numerical study of these indicates that two of them are stable. A parameter in the system is the number N of phosphorylation sites on the T cell receptor complex which are included in the model. The results just mentioned on steady states were obtained for N=3.

An object of key importance is the response function. The variable which measures the degree of activation of the T cell in this model is the concentration C_N of the maximally phosphorylated state of the T cell receptor. The response function describes how C_N depends on the important input variables of the system. These are the concentration L of the ligand and the constant \nu describing the rate at which the ligand unbinds from the T cell receptor. A widespread idea (the lifetime dogma) is that the quantity \nu^{-1}, the dissociation time, determines how strongly an antigen signals to a T cell. It might naively be thought that the response should be an increasing function of L (the more antigen present the stronger the stimulation) and a decreasing function of \nu (the longer the binding the stronger the stimulation). However both theoretical and experimental results lead to the conclusion that this is not always the case.

We proved analytically that for certain values of the parameters C_N is a decreasing function of L and an increasing function of \nu. Since these rigorous results give rather poor information on the concrete values of the parameters leading to this behaviour and on the global form of the function we complemented this analytical work by simulations. These show how C_N can have a maximum as a function of \nu within this model and that as a function of L it can have the following form in a log-log plot. For L small the graph is a straight line of slope one. As L increases it switches to being a straight line of slope 1-N/2 and for still larger values it once again becomes a line of slope one, shifted with respect to the original one. Finally the curve levels out as it must do, since the function is bounded. The proofs do not make heavy use of general theorems and are in general based on doing certain estimates by hand.
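For comparison it may help to recall what plain kinetic proofreading (in the spirit of McKeithan's scheme, not the adaptive model of Francois et al.) predicts. In the caricature below, with illustrative parameters, the bound receptor fraction is set by a binding equilibrium and each of the N phosphorylation steps races against unbinding, giving a response which is monotone: increasing in L and decreasing in \nu, with log-log slope one in L for small L. The content of the results described above is precisely that the full model can invert these monotonicities.

```python
import numpy as np

def proofreading_response(L, nu, N=3, phi=0.1, kappa=1.0, R_tot=1.0):
    """C_N in plain kinetic proofreading: receptors R_tot bind ligand L
    with on-rate kappa and off-rate nu; each of N modification steps
    proceeds at rate phi and is lost if the ligand unbinds first.
    All parameter values here are hypothetical placeholders."""
    C_tot = R_tot * kappa * L / (kappa * L + nu)   # total bound receptor
    return C_tot * (phi / (phi + nu))**N

# Monotone in both arguments, unlike the adaptive model discussed above
r_smallL = proofreading_response(0.002, 1.0) / proofreading_response(0.001, 1.0)
```

Doubling L at small L doubles the response (r_smallL is close to 2), which is the slope-one regime of the log-log plot; the slope 1-N/2 regime in the full model has no counterpart in this caricature.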

All of these results were of the general form ‘there exist parameter values for the system such that X happens’. Of course this is just a first step. In the future we would like to understand better to what extent biologically motivated restrictions on the parameters lead to restrictions on the dynamical behaviour.

Kestrels and Dirichlet boundary conditions

February 28, 2017

The story I tell in this post is based on what I heard a long time ago in a talk by Jonathan Sherratt. References to the original work by Sherratt and his collaborators are Proc. R. Soc. Lond. B269, 327 (more biological) and SIAM J. Appl. Math. 63, 1520 (more mathematical). There are some things I say in the following which I did not find in these sources and so they are based on my memories of that talk and on things which I wrote down for my own reference at intermediate times. If this has introduced errors they will only concern details and not the basic story. The subject is a topic in population biology and how it relates to certain properties of reaction-diffusion equations.

In the north of England there is an area called the Kielder Forest with a lake in the middle and the region around the lake is inhabited by a population of the field vole Microtus\ agrestis. It is well known that populations of voles undergo large fluctuations in time. What is less known is what the spatial dependence is like. There are two alternative scenarios. In the first the population density of voles oscillates in a way which is uniform in space. In the second it is a travelling wave of the form U(x-ct). In that case the population at a fixed point of space oscillates in time but the phase of the oscillations is different at different spatial points. In general there is relatively little observational data on this type of thing. The voles in the Kielder forest are an exception to this since in that case a dedicated observer collected data which provides information on both the temporal and spatial variation of the population density. This data is the basis for the modelling which I will now describe.

The main predators of the voles are weasels Mustela\ nivalis. It is possible to set up a model where the unknowns are the populations of voles and weasels. Their interaction is modelled in a simple way common in predator-prey models. Their spatial motion is described by a diffusion term. In this way a system of reaction-diffusion equations is obtained. These are parabolic equations and so the time evolution is non-local in space. The unknowns are defined on a region with boundary which is the complement of a lake. Because of this we need not only initial values to determine a solution but also boundary conditions. How should they be chosen? In the area around the lake there live certain birds of prey, kestrels. They hunt voles from the air. In most of the area being considered there is very thick vegetation and the voles can easily hide from the kestrels. Thus the direct influence of the kestrels on the vole population is negligible and the kestrels do not need to be included in the reaction-diffusion system. They do, however, have a striking indirect effect. On the edge of the lake there is a narrow strip with little vegetation and any vole which ventures into that area is in great danger of being caught by a kestrel. This means that the kestrels essentially enforce the vanishing of the population density of voles at the edge of the lake. In other words they impose a homogeneous Dirichlet boundary condition on one of the unknowns at the boundary. Note that this is incompatible with spatially uniform oscillations, since on the boundary oscillations are ruled out by the Dirichlet condition. When the PDE are solved numerically what is seen is that the shore of the lake generates a train of travelling waves which propagate away from it. This can also be understood theoretically, as explained in the papers quoted above.
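The mechanism can be illustrated with a one-dimensional toy computation. The sketch below uses generic predator-prey kinetics of Rosenzweig-MacArthur form and an explicit finite-difference scheme; it is not the calibrated Kielder model of the papers cited above, and all parameter values are illustrative. The prey density is forced to zero at one end of the domain (the lake shore), while the other end has a zero-flux condition.

```python
import numpy as np

# Grid and time step (explicit scheme; D*dt/dx^2 is well below 1/2)
nx, L_dom = 200, 100.0
dx = L_dom / (nx - 1)
dt = 0.01
D_u, D_w = 1.0, 1.0

u = np.full(nx, 0.6)          # voles (prey)
w = np.full(nx, 0.2)          # weasels (predators)
u += 0.01 * np.random.default_rng(0).standard_normal(nx)
u = np.clip(u, 0.0, None)

def laplacian(v):
    lap = np.empty_like(v)
    lap[1:-1] = (v[2:] - 2*v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2*(v[1] - v[0]) / dx**2     # zero-flux (overridden for u below)
    lap[-1] = 2*(v[-2] - v[-1]) / dx**2  # zero-flux at the far end
    return lap

for _ in range(20000):
    f = u*(1 - u) - u*w/(u + 0.2)        # prey: logistic growth, predation
    g = 2.0*u*w/(u + 0.2) - 0.8*w        # predator: conversion, death
    u = u + dt*(D_u*laplacian(u) + f)
    w = w + dt*(D_w*laplacian(w) + g)
    u[0] = 0.0                           # Dirichlet condition: the kestrels
```

With kinetics in the oscillatory regime, the Dirichlet boundary acts as a source of spatial structure: the solution cannot oscillate uniformly, and patterns propagate away from the shore, which is a crude version of the wave trains seen in the full simulations.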

Conference on cancer immunotherapy at EMBL

February 5, 2017

I just came back from a conference on cancer immunotherapy at EMBL in Heidelberg. It was very interesting for me to get an inside view of what is happening in this field and to learn what some of the hot topics are. One of the speakers was Patrick Baeuerle, who talked about a molecular construct which he introduced, called BiTE (bispecific T cell engager). It is the basis of a drug called blinatumomab. This is an antibody construct which binds both CD3 (characteristic of T cells) and CD19 (characteristic of B cells) so that these cells are brought into proximity. In its therapeutic use in treating acute lymphoblastic leukemia, the B cell is a cancer cell. More generally similar constructs could be made so as to bring T cells into proximity with other cancer cells. The idea is that the T cell should kill the cancer cell and in that context it is natural to think of cytotoxic T cells. It was not clear to me how the T cell is activated since the T cell receptor is not engaged. I took the opportunity to ask Baeuerle about this during a coffee break and he told me that proximity alone is enough to activate T cells. This can work not only for CD8 T cells but also for CD4 cells and even regulatory T cells. He presented a picture of a T cell always being ready to produce toxic substances and just needing a signal to actually do it. Under normal circumstances T cells search the surfaces of other cells for antigens and do not linger long close to any one cell unless they find their antigen. If they do stay longer near another cell for some reason then this can be interpreted as a danger sign and the T cell reacts. Baeuerle, who started his career as a biochemist, was CEO of a company called Micromet whose key product was what became blinatumomab. The company was bought by Amgen for more than a billion dollars and Baeuerle went with it to Amgen. When it came on the market it was the most expensive cancer drug ever up to that time. 
Later Baeuerle moved to a venture capital firm called MPM Capital, which is where he is now. In his previous life as a biochemical researcher Baeuerle did fundamental work on NF\kappaB  with David Baltimore.

In a previous post I mentioned a video by Ira Mellman. At the conference I had the opportunity to hear him live. One thing which became clear to me at the conference is the extent to which, among the checkpoint inhibitor drugs, anti-PD-1 is superior to anti-CTLA-4. It is successful in a much higher proportion of patients. I never thought much about PD-1 before. It is a receptor which is present on the surface of T cells after they have been activated and it can be stimulated by the ligand PD-L1, leading to the T cell being switched off. But how does this switching off process work? The T cell is normally switched on by the engagement of the T cell receptor and a second signal from CD28. In his talk Mellman explained that the switching off due to PD-1 is not due to signalling from the T cell receptor being stopped. Instead what happens is that PD-1 activates the phosphatase SHP2 which dephosphorylates and thus deactivates CD28. Even a very short deactivation of CD28 is enough to turn off the T cell. In thinking about mathematical models for T cell activation I thought that there might be a link to checkpoint inhibitors. Now it looks like models for T cell activation are not of direct relevance there and that instead it would be necessary to model CD28.

I learned some more things about viruses and cancer. One is that the Epstein-Barr virus, famous for causing Burkitt’s lymphoma, also causes other types of cancer, in particular other types of lymphoma. Another is that viruses are being used in a therapeutic way. I had heard of oncolytic viruses before but I had never really paid attention. In one talk the speaker showed a picture of a young African man who had been cured of Burkitt’s lymphoma by … getting measles. This gave rise to the idea that viruses can sometimes preferentially kill cancer cells and that they can perhaps be engineered so as to do so more often. In particular measles is a candidate. In that case there is an established safe vaccination and the idea is to vaccinate with genetically modified measles virus to fight certain types of cancer.

In going to this conference my main aim was to improve my background in aspects of biology and medicine which could be of indirect use for my mathematical work. In fact, to my surprise, I met one of the authors of a paper on T cell activation which is closely related to mathematical topics I am interested in. This was Philipp Kruger, who is in the group of Omer Dushek in Oxford. I talked to him about the question of what is really the mechanism by which signals cross the membrane of T cells. One possibility he mentioned was a conformational change in CD3. Another, which I had already come across, is that it could have to do with a mechanical effect by which the binding of a certain molecule brings the cell membranes of two interacting cells together and expels large phosphatases like CD45 from a certain region. In the paper of his which I had looked at, signalling in T cells is studied with the help of CAR T-cells, which have an artificial analogue of the T cell receptor which may have a much higher affinity than the natural receptor. In his poster he described a new project looking at the effect of using different co-receptors in CAR T-cells (not just CD28). In any case CAR T-cells were a subject which frequently came up at the conference. Something which was in the air was that this therapy may be associated with neurotoxicity in some cases but I did not learn any details.

As far as I can see, the biggest issue with all these techniques is the following. They can be dramatically successful, taking patients from the point of death to long-term survival. On the other hand they only work in a subset of patients (say, 40% at most) and nobody understands what success depends on. I see a great need for a better theoretical understanding. I can understand that when someone has what looks like a good idea in this area they quickly look for a drug company to do a clinical trial with it. These things can save lives.  On the other hand it is important to ask whether investing more time in obtaining a better understanding of underlying mechanisms might not lead to better results in the long run.

The importance of dendritic cells

October 30, 2016

I just realized that something I wrote in a previous post does not make logical sense. This was not just due to a gap in my exposition but to a gap in my understanding. I now want to correct it. A good source for the correct story is a video by Ira Mellman of Genentech. I first recall some standard things about antigen presentation. In this process peptides are presented on the surface of cells with MHC molecules, which are of two types, I and II. MHC Class I molecules are found on essentially all cells and can present proteins coming from viruses infecting the cell concerned. MHC Class II molecules are found only on special cells called professional antigen presenting cells. These are macrophages, B cells and dendritic cells. The champions of antigen presentation are the dendritic cells and those are the ones I will be talking about here. In order for a T cell to be activated it needs two signals. The first comes through the T cell receptor interacting with the peptide-MHC complex on an APC. The second comes from CD28 on the T cell surface interacting with B7.1 and B7.2 on the APC.

Consider now an ordinary cell, not an APC, which is infected with a virus. This could, for instance, be an epithelial cell infected with an influenza virus. This cell will present peptides derived from the virus with MHC Class I molecules. These can be recognized by activated {\rm CD8}^+ T cells which can then kill the epithelial cell and put an end to the viral reproduction in that cell. The way I put it in the previous post it looked like the T cell could be activated by the antigen presented on the target cell with the help of CD28 stimulation. The problem is that the cell presenting the antigen in this case is an amateur. It has no B7.1 or B7.2 and so cannot signal through CD28. The real story is more complicated. The fact is that dendritic cells can also present antigen on MHC Class I, including peptides from proteins which do not come from the dendritic cell itself. A possible mechanism explained in the video of Mellman (I do not know whether it is certain that this is the mechanism, or whether it is the only one) is that a cell infected by a virus is ingested by a dendritic cell by phagocytosis, so that proteins which were outside the dendritic cell are now inside and can be brought into the pathway of MHC Class I presentation. This process is known as cross-presentation. Dendritic cells also have tools of the innate immune system, such as toll-like receptors, at their disposal. When they recognise the presence of a virus by these means they upregulate B7.1 and B7.2 and are then in a position to activate {\rm CD8}^+ T cells. Note that in this case the virus will be inside the dendritic cell but not infecting it. There are viruses which use dendritic cells for their own purposes, reproducing there or hitching a lift to the lymph nodes where they can infect their favourite cells. An example is HIV. The main receptor used by this virus to enter cells is CD4 and this is present not only on T cells but also on dendritic cells.
Another interesting side issue is that dendritic cells can not only activate T cells but also influence the differentiation of these cells into various different types. The reason is that the detection instruments of the dendritic cell not only recognise that a pathogen is there but can also classify it to some extent (Mellman talks about a bar code). Based on this information the dendritic cell secretes various cytokines which influence the differentiation process. For instance they can influence whether a T-helper cell becomes of type Th1 or Th2. This is related to work which I did quite a long time ago on an ODE system modelling the interactions of T cells and macrophages. In view of what I just said it might be interesting to study an inhomogeneous version of this system. The idea is to include an external input of cytokines coming from dendritic cells. In fact the unknowns in the system are not the concentrations of cytokines but the populations of cells. Thus it would be appropriate to introduce an inhomogeneous contribution into the terms describing the production of different types of cells.
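To indicate the kind of modification meant here, the sketch below is a purely hypothetical two-population model (Th1-like and Th2-like cells with mutual cross-inhibition) in which the production terms receive a time-dependent input s1(t), s2(t) standing in for dendritic-cell-derived cytokine signals. Neither the kinetics nor the parameter values come from the original paper; the point is only where the inhomogeneous term enters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical external inputs from dendritic cells: a transient signal
# favouring the Th1-like population, and a weak constant signal for Th2.
def s1(t): return 0.5 if t < 10.0 else 0.0
def s2(t): return 0.1

def rhs(t, z):
    T1, T2 = z
    # Production of each cell type is boosted by its input and suppressed
    # by the other population; linear death terms. Illustrative form only.
    return [(1.0 + s1(t))/(1.0 + T2) - 0.5*T1,
            (1.0 + s2(t))/(1.0 + T1) - 0.5*T2]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.1], rtol=1e-8)
T1_end, T2_end = sol.y[:, -1]
```

In the autonomous version the inputs would be constants; making them time-dependent is exactly the "inhomogeneous contribution" to the production terms suggested above.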

Modern cancer therapies

October 28, 2016

I find the subject of cancer therapies fascinating. My particular interest is in the possibility of obtaining new insights by modelling and what role mathematics can play in this endeavour. I have heard many talks related to these subjects, both live and online. I was stimulated to write this post by a video of Martin McMahon, then at UCSF. It made me want to systematize some of the knowledge I have obtained from that video (which is already a few years old) and from other sources. First I should fix my terminology. I use the term ‘modern cancer therapies’ to distinguish a certain group of treatments from what I will call ‘classical cancer therapies’. The latter are still of central importance today and the characteristic feature of those I am calling ‘modern’ here is that they have only been developed in the last few years. I start by reviewing the ‘classical’ therapies: surgery, radiotherapy and chemotherapy. Surgery can be very successful when it works. The aim is to remove all the cancerous cells. There is a tension between removing too little (so that a few malignant cells could remain and restart the tumour) and too much (which could mean too much damage to healthy tissues). A particularly difficult case is that of the glioma, where it is impossible to determine the extent of the tumour by imaging techniques alone. An alternative is provided by the work of Kristin Swanson, which I mentioned in a previous post. She has developed techniques of using a mathematical model of the tumour (with reaction-diffusion equations) to predict its extent. The results of a simulation, specific to a particular patient, are given to the surgeon to guide his work. In the case of radiotherapy radiation is used to kill cancer cells while trying to avoid killing too many healthy cells. A problematic aspect is that the cells are killed by damaging their DNA and this kind of damage may lead to the development of new cancers.
In chemotherapy a chemical substance (poison) is used with the same basic aim as in radiotherapy. The substance is chosen to have the greatest effect on cells which divide frequently. This is the case with cancer cells but unfortunately they are not the only ones. A problem with radiotherapy and chemotherapy is their poor specificity.
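Returning for a moment to the glioma modelling mentioned above: the models in question are reaction-diffusion equations, in the simplest case of Fisher-KPP type, \dot c = D\Delta c + \rho c(1-c), where c is the tumour cell density. The following one-dimensional sketch is only meant to show the basic mechanism of a spreading tumour front; the parameter values and the crude explicit scheme are my own invention and have nothing to do with any patient-specific simulation.

```python
import numpy as np

def simulate_glioma_1d(D=0.1, rho=0.05, L=10.0, nx=201, t_end=50.0, dt=0.01):
    """Explicit finite differences for c_t = D*c_xx + rho*c*(1 - c),
    the Fisher-KPP (proliferation-invasion) equation; toy parameters."""
    dx = L / (nx - 1)
    assert D * dt / dx**2 < 0.5        # stability condition for the explicit scheme
    c = np.zeros(nx)
    c[nx // 2] = 1.0                   # small initial tumour seed in the middle
    for _ in range(int(t_end / dt)):
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c = c + dt * (D * lap + rho * c * (1 - c))
        c[0], c[-1] = c[1], c[-2]      # zero-flux boundary conditions
    return c

c = simulate_glioma_1d()
# the margin relevant for surgery is where c exceeds some detection threshold
visible_width = np.sum(c > 0.1) * (10.0 / 200)
```

The point of such a model for surgery is the last line: the region visible on a scan corresponds to the density exceeding a detection threshold, while the model also predicts cells beyond that margin.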

Now I come to the ‘modern’ therapies. One class of substances used is that of kinase inhibitors. The underlying idea is as follows. Whether cells divide or not is controlled by a signal transduction network, a complicated set of chemical reactions in the cell. In the course of time mutations can accumulate in a cell and when enough relevant mutations are present the transduction network is disrupted. The cell is instructed to divide under circumstances in which it would normally not do so. The cells dividing in an uncontrolled way constitute cancer. The signals in this type of network are often passed on by phosphorylation, the attachment of phosphate groups to certain proteins. The enzymes which catalyse the process of phosphorylation are called kinases. A typical problem then is that due to a mutation a kinase is active all the time and not just when it should be. A switch which activates the signalling network is stuck in the ‘on’ position. This can in principle be changed by blocking the kinase so that it can no longer send its signals. An early and successful example of this is the kinase inhibitor imatinib, which was developed as a therapy for chronic myelogenous leukemia (CML). It seems that this drug can even cure CML in many cases, in the sense that after a time (two years) no mutated cells can be detected and the disease does not come back if the treatment is stopped. McMahon talks about this while being understandably cautious about using the word cure in the context of any type of cancer. One general point about the ‘modern’ therapies is that they do not work for a wide range of cancers or even for the majority of patients with a given type of cancer. It is rather the case that cancer can be divided into more and more subtypes by analysing it with molecular methods, and the therapy only works in a very specific class of patients, those having a specific mutation. I have said something about another treatment using a kinase inhibitor, Vemurafenib, in a previous post.
An unfortunate aspect of the therapies using kinase inhibitors is that while they provide spectacular short-term successes their effects often do not last more than a few months due to the development of resistance. A second mutation can rewire the network and overcome the blockade. (Might mathematical models be used to understand better which types of rewiring are relevant?) The picture of this I had, which now appears to me to be wrong, was that after a while on the drug a new mutation appears which gives the resistance. The picture I got from McMahon’s video was a different one. It seems that the mutations which might lead to resistance are often there before treatment begins. They were in Darwinian competition with other cells without the second mutation which were fitter. The treatment causes the fitness of the cells without the second mutation to decrease sharply. This removes the competition and allows the population of resistant cells to increase.
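The competitive-release picture just described can be illustrated with a toy model (my own invention, not something from the video): a drug-sensitive clone S and a small pre-existing resistant clone R compete logistically for the same resources, and treatment adds a death term acting only on S.

```python
def competition(treated, t_end=300.0, dt=0.01):
    """Toy logistic competition with invented parameters: a drug-sensitive
    clone S and a small pre-existing resistant clone R share one carrying
    capacity; treatment adds a death term for S only."""
    rS, rR, K = 0.10, 0.08, 1.0      # untreated, S is the fitter clone
    kill = 0.2 if treated else 0.0   # drug-induced death rate of S
    S, R = 0.5, 0.005                # resistance present before treatment starts
    for _ in range(int(t_end / dt)):
        crowding = 1 - (S + R) / K
        S += dt * (rS * S * crowding - kill * S)
        R += dt * rR * R * crowding
    return S, R

S_untreated, R_untreated = competition(False)
S_treated, R_treated = competition(True)
```

Running it with and without treatment shows the reversal: untreated, the fitter sensitive clone keeps the resistant minority suppressed; under treatment the fitness of S collapses, the competition disappears and R expands to take over the population.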

Another drug mentioned by McMahon is Herceptin. This is used to treat breast cancer patients whose tumours overexpress a particular receptor, HER2. The drug is an antibody and binds to the receptor. As far as I can see it is not known why the binding of the antibody has a therapeutic effect but there is one idea on this which I find attractive. This is that the antibodies attract immune cells which kill the cells they are bound to. This gives me a perfect transition to a discussion of a class of therapies which started to become successful and popular very recently and go under the name of cancer immunotherapy, since they are based on the idea of persuading immune cells to attack cancer cells. I have already discussed one way of doing this, using antibodies to increase the activities of T cells, in a previous post. Rather than saying more about that I want to go on to the topic of genetically modified T cells, which was also mentioned briefly here.

I do not know enough to be able to give a broad review of cellular immunotherapy for cancer treatment and so I will concentrate on making some comments based on a video on this subject by Stephan Grupp. He is talking about the therapy of acute lymphocytic leukemia (ALL). In particular he is concerned with B cell leukemia. The idea is to make artificial T cells which recognise the surface molecule CD19 characteristic of B cells. T cells are taken from the patient and modified to express a chimeric antigen receptor (CAR). The CAR is made of an external part coming from an antibody fused to an internal part including a CD3 \zeta-chain and a costimulatory molecule such as CD28. (Grupp prefers a different costimulatory molecule.) The cells are activated and caused to proliferate in vitro and then injected back into the patient. In many cases they are successful in killing the B cells of the patient and producing a lasting remission. It should be noted that most of the patients are small children and that most cases can be treated very effectively with classical chemotherapy. The children being treated with immunotherapy are the ‘worst cases’. The first patient treated by Grupp with this method was a seven-year-old girl and the treatment was ultimately very successful. Nevertheless it did at first almost kill her and this is not the only such case. The problem was a cytokine release syndrome with extremely high levels of IL-6. Fortunately this was discovered just in time and she was treated with an antibody to IL-6 which not only existed but was approved for the treatment of children (with other diseases). It very quickly solved the problem. One issue which remains to be mentioned is that when the treatment is successful the T cells are so effective that the patient is left without B cells. Hence as long as the treatment continues immunoglobulin replacement therapy is necessary.
Thus the issue arises whether this can be a final treatment or whether it should be seen as a preparation for a bone marrow transplant. As a side issue from this story I wonder if modelling could bring some more insight into the IL-6 problem. Grupp uses some network language in talking about it, saying that the problem is a ‘simple feedback loop’. After I had written this I discovered a preprint on bioRxiv doing mathematical modelling of CAR T cell therapy of B-ALL and promising to do more in the future. It is an ODE model in which there is no explicit inclusion of IL-6 but rather a generic inflammation variable.
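To make the idea of such a model concrete, here is a minimal sketch of my own (a generic toy, not the model of the preprint): CAR T cells C expand on contact with leukaemic B cells B and kill them, and the killing drives a lumped inflammation variable I. All parameter values are invented for illustration.

```python
def car_t_model(t_end=60.0, dt=0.001):
    """Toy ODE model, solved by the Euler method: B = leukaemic B cells,
    C = CAR T cells, I = generic inflammation; invented parameters."""
    B, C, I = 1.0, 0.01, 0.0
    r, p, k = 0.1, 0.5, 1.0           # tumour growth, T cell expansion, kill rate
    delta, s, gamma = 0.05, 2.0, 0.3  # T cell decay, inflammation gain, clearance
    Bs, Cs, Is = [B], [C], [I]
    for _ in range(int(t_end / dt)):
        kill = k * C * B
        dB = r * B - kill              # tumour: growth minus killing
        dC = p * C * B - delta * C     # T cells: antigen-driven expansion, decay
        dI = s * kill - gamma * I      # inflammation driven by killing events
        B += dt * dB; C += dt * dC; I += dt * dI
        Bs.append(B); Cs.append(C); Is.append(I)
    return Bs, Cs, Is

Bs, Cs, Is = car_t_model()
```

Even this crude version reproduces the qualitative story of the video: the T cell population expands enormously while the B cells are present, the tumour burden collapses, and inflammation spikes during the kill phase before resolving, which is the feature one would want a serious model of the IL-6 feedback to capture.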

Models for photosynthesis, part 4

September 19, 2016

In previous posts in this series I introduced some models for the Calvin cycle of photosynthesis and mentioned some open questions concerning them. I have now written a paper where on the one hand I survey a number of mathematical models for the Calvin cycle and the relations between them and on the other hand I am able to provide answers to some of the open questions. One question was that of the definition of the Pettersson model. As I indicated previously this was not clear from the literature. My answer to the question is that this system should be treated as a system of DAE (differential-algebraic equations). In other words it can be written as a set of ODE \dot x=f(x,y) coupled to a set of algebraic equations g(x,y)=0. In general it is not clear that this type of system is locally well-posed. In other words, given a pair (x_0,y_0) with g(x_0,y_0)=0 it is not clear whether there is a solution (x(t),y(t)) of the system, local in time, with x(0)=x_0 and y(0)=y_0. Of course if the partial derivative of g with respect to y is invertible it follows by the implicit function theorem that g(x,y)=0 is locally equivalent to a relation y=h(x) and the original system is equivalent to \dot x=f(x,h(x)). Then local well-posedness is clear. The calculations in the 1988 paper of Pettersson and Ryde-Pettersson indicate that this should be true for the Pettersson model but there are details missing in the paper and I have not (yet) been able to supply these. The conservative strategy is then to stick to the DAE picture. Then we do not have a basis for studying the dynamics but at least we have a well-defined system of equations and it is meaningful to discuss its steady states.
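The reduction can be illustrated on a toy DAE which has nothing to do with the actual Pettersson system. Take f(x,y)=-x+y and g(x,y)=y^3+y-x. Here the partial derivative of g with respect to y is 3y^2+1, which never vanishes, so the implicit function theorem applies everywhere, y=h(x) exists globally and the DAE reduces to a well-posed ODE:

```python
def h(x, tol=1e-12):
    """Solve g(x, y) = y**3 + y - x = 0 for y by Newton's method;
    since dg/dy = 3*y**2 + 1 never vanishes the division is always safe."""
    y = 0.0
    for _ in range(100):
        step = (y**3 + y - x) / (3 * y**2 + 1)
        y -= step
        if abs(step) < tol:
            break
    return y

def reduced_rhs(x):
    """Right-hand side of the reduced ODE \\dot x = f(x, h(x)) = -x + h(x)."""
    return -x + h(x)

# Steady states of the DAE itself: f = 0 forces y = x, and then g = 0 gives
# x**3 = 0, so the only steady state is (x, y) = (0, 0).
```

For the Pettersson model the analogue of the invertibility of dg/dy is exactly the missing detail in the 1988 paper; without it one only has the DAE formulation, but as the last comment shows, steady states can be discussed directly at that level.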

I was able to prove that there are parameter values for which the Pettersson model has at least two distinct positive steady states. In doing this I was helped by an earlier (1987) paper of Pettersson and Ryde-Pettersson. The idea is to shut off the process of storage as starch so as to get a subnetwork. If two steady states can be obtained for this modified system we may be able to get steady states for the original system using the implicit function theorem. There are some more complications but a key step in the proof is the one just described. So how do we get steady states for the modified system? The idea is to solve many of the equations explicitly so that the problem reduces to a single equation for one unknown, the concentration of DHAP. (When applying the implicit function theorem we have to use a system of two equations for two unknowns.) In the end we are left with a quadratic equation and we can arrange for the coefficients in that equation to have convenient properties by choosing the parameters in the dynamical system suitably. This approach can be put in a wider context using the concept of stoichiometric generators but the proof is not logically dependent on using the theory of those objects.
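The elementary endgame of this argument, arranging for a quadratic to have two distinct positive roots, can be isolated as follows. The coefficients here are generic ones chosen for illustration, not the ones actually arising from the equation for the DHAP concentration.

```python
import math

def positive_roots(a, b, c):
    """Real positive roots of a*z**2 + b*z + c.  For a > 0 there are two
    distinct positive roots exactly when b < 0, c > 0 and b**2 - 4*a*c > 0,
    since by Vieta the roots then have positive sum -b/a and product c/a."""
    disc = b * b - 4 * a * c
    if disc <= 0:
        return []
    r = math.sqrt(disc)
    return [z for z in ((-b - r) / (2 * a), (-b + r) / (2 * a)) if z > 0]

# invented coefficients satisfying the criteria: z**2 - 3z + 2 = (z - 1)(z - 2)
roots = positive_roots(1.0, -3.0, 2.0)
```

Choosing the parameters of the dynamical system so that the coefficients satisfy these sign and discriminant conditions is what produces the two positive steady states of the subnetwork.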

Having got some information about the Pettersson model we may ask what happens when we go over to the Poolman model. The Poolman model is a system of ODE from the start and so we do not have any conceptual problems in that case. The method of construction of steady states can be adapted rather easily so as to apply to the system of DAE related to the Poolman model (let us call it the reduced Poolman model since it can be expressed as a singular limit of the Poolman model). The result is that there are parameter values for which the reduced Poolman model has at least three steady states. Whether the Poolman model itself can have three steady states is not yet clear since it is not clear whether the transverse eigenvalues (in the sense of geometric singular perturbation theory, GSPT) are all non-zero.

By analogy with known facts the following intuitive picture can be developed. Note, however, that this intuition has not yet been confirmed by proofs. In the picture one of the positive steady states of the Pettersson model is stable and the other unstable. Steady states on the boundary where some concentrations are zero are stable. Under the perturbation from the Pettersson model to the reduced Poolman model an additional stable positive steady state bifurcates from the boundary and joins the other two. This picture may be an oversimplification but I hope that it contains some grain of truth.

ECMTB 2016 in Nottingham

July 17, 2016

This past week I attended a conference of the ESMTB and the SMB in Nottingham. My accommodation was in a hall of residence on the campus and my plan was to take a tram from the train station. When I arrived it turned out that the trams were not running. I did not find out the exact reason but it seemed that it was a problem which would not be solved quickly. Instead of finding out what bus I should take and where I should take it from I checked out the possibility of walking. As it turned out it was neither unreasonably far nor complicated. Thus, following my vocation as pedestrian, I walked there.

Among the plenary talks at the conference was one by Hisashi Ohtsuki on the evolution of social norms. Although I am a great believer in the application of mathematics to many real world problems I do become a bit sceptical when the area of application goes in the direction of sociology or psychology. Accordingly I went to the talk with rather negative expectations but I was pleasantly surprised. The speaker explained how he has been able to apply evolutionary game theory to obtain insights into the evolution of cooperation in human societies under the influence of indirect reciprocity. This means that instead of the simple direct pattern ‘A helps B and thus motivates B to help A’ we have ‘C sees A helping B and hence decides to help A’ and variations on that pattern. The central idea of the work is to compare many different strategies in the context of a mathematical model and thus obtain ideas about what are the important mechanisms at work. My impression was that this is a case where mathematics has generated helpful ideas in understanding the phenomenon and that there remain a lot of interesting things to be done in that direction. It also made me reflect on my own personal strategies when interacting with other people. Apart from the interesting content the talk was also made more interesting by the speaker’s entertaining accounts of experiments which have been done to compare with the results of the modelling. During the talk the speaker mentioned self-referentially that the fact of his standing in front of us giving the talk was an example of the process of the formation of a reputation being described in the talk. As far as I am concerned he succeeded in creating a positive reputation both for himself and for his field.

Apart from this the other plenary talk which I found most interesting was by Johan van de Koppel. He was talking about pattern formation in ecology and, in particular, about his own work on pattern formation in mussel beds. A talk which I liked much less was that of Adelia Sequeira and it is perhaps interesting to ask why. She was talking about modelling of atherosclerosis. She made the valid point near the beginning of her lecture that while heart disease is a health problem of comparable importance to cancer in the developed world, the latter theme was represented much more strongly than the former at the conference. For me cancer is simply much more interesting than heart disease and this point of view is maybe more widespread. What could be the reason? One possibility is that the study of cancer involves many more conceptual aspects than that of heart disease and that this is attractive for mathematicians. Another could be that I am a lot more afraid of being diagnosed with cancer some day than of being diagnosed with heart disease although the latter may be no less probable and no less deadly if it happens. To come back to the talk I found that the material was too abundant and too technical and that many ideas were used without really being introduced. The consequence of these factors was that I lost interest and had difficulty not falling asleep.

In the case of the parallel talks there were seventeen sessions in parallel and I generally decided to go to whole sessions rather than trying to go to individual talks. I will make some remarks about some of the things I heard there. I found the first session I went to, on tumour-immune dynamics, rather disappointing but the last talk in the session, by Shalla Hanson, was a notable exception. The subject was CAR T-cells and what mathematical modelling might contribute to improving therapy. I found both the content and the presentation excellent. The presentation packed in a lot of material but rather than being overwhelmed I found myself waiting eagerly for what would come next. During the talk I thought of a couple of questions which I might ask at the end but they were answered in due course during the lecture. It is a quality I admire in a speaker to be able to anticipate the questions which the audience may ask and answer them. I see this less as a matter of understanding the psychology of the audience (which can sometimes be important) and more as a matter of really having got to the heart of the subject being described. There was a session on mathematical pharmacology which I found interesting, in particular the talk of Tom Snowden on systems pharmacology and that of Wilhelm Huisinga on multidrug therapies for HIV. In a session on mathematical and systems immunology Grant Lythe discussed the fascinating question of how to estimate the number of T cell clones in the body and what mathematics can contribute to this beyond just analysing the data statistically. I enjoyed the session on virus dynamics, particularly a talk by Harel Dahari on hepatitis C. In particular he told a story in which he was involved in curing one exceptional HCV patient with a one-off therapy using a substance called silibinin and real-time mathematical modelling.

I myself gave a talk about dinosaurs. Since this is work which is at a relatively early stage I will leave describing more details of it in this blog to a later date.

An eternal pedestrian

June 13, 2016

I am presently visiting Japan. My host is Atsushi Mochizuki who leads the Theoretical Biology Laboratory at RIKEN in Wako near Tokyo. RIKEN is a research organisation which was founded in 1917 using the Kaiser-Wilhelm-Gesellschaft as a model. Thus it is a kind of Japanese analogue of the Max Planck Society, which is the direct descendant of the Kaiser-Wilhelm-Gesellschaft. I had only been in Japan once before and looking at my records I see that that was in August 2005. At that time I attended a conference in Sendai, a place which I had never heard of before I went there. Since then it has become sadly famous in connection with the damage it suffered from the tsunami which also caused the Fukushima nuclear disaster. I had at least previously heard of Tohoku University, which is located in the city.

Yesterday, sitting by the river in Wako, I was feeling quite meditative. I was in an area where motor vehicles are not permitted. There were not many people around but most of those who were there were on bikes. I started thinking of how this is typical of what I have experienced in many places I have been. On a walk along the Rhine in Mainz or in the surrounding countryside most of the people you see are on bikes. Copenhagen is completely dominated by bikes. In the US cars dominate. For instance when I was in Miami for a conference and was staying at the Biltmore Hotel I had to walk quite a distance to get dinner for an affordable price. In general the only people I met walking on the streets there were other conference participants. When I visited the University of California at Santa Barbara bikes were not the thing on the campus but it was typical to see students with skateboards. Summing up, I have frequently had the experience that as a pedestrian I was an exception. It seems that for normal people just putting one foot in front of the other is not the thing to do. They need some device such as a car, a bike or a skateboard to accompany them. I, on the other hand, am an eternal pedestrian. I like to walk places whenever I can. I walk twenty minutes to work each day and twenty minutes back. I find that a good way of framing the day. When I lived in Berlin there was a long period when I had a one-way travelling time of 90 minutes by train. I am glad to have that behind me. I did not consciously plan being so near to work in Mainz but I am glad it happened. Of course being a pedestrian has its limits – I could not have come to Japan on foot.

My pedestrian nature is not limited to the literal interpretation of the term. I am also an intellectual pedestrian. An example of this is described in my post on low throughput biology. Interestingly this post has got a lot of hits, more than twice as many as any other post on my blog. This is related to the theme of simple and complex models in biology. Through the talks I have given recently in Copenhagen, Berlin and here in Japan and the resulting discussions with different people I have become conscious of how this is a recurring theme in those parts of mathematical biology which I find interesting. The pedestrian may not get as far as others but he often sees more in the places he does reach. He may also get to places that others do not. Travelling fast along the road may cause you to overlook a valuable shortcut. Or you may go a long way in the wrong direction and need a lot of time to come back. Within mathematics one aspect of being a pedestrian is calculating things by hand as far as possible and using computers as a last resort. This reminds me of a story about the physicist John Wheeler who had a picture of a computer on the wall in his office which he called ‘the big computer’. When he wanted to solve a difficult problem he would think about how he would programme it on the computer and when he had done that thoroughly he had understood the problem so well that he no longer needed the computer. Thus the fact that the computer did not exist except on paper was not a disadvantage.

This is the direction I want to (continue to) go. The challenges along the road are to achieve something essential and to make clear to others, who may be sceptical, that I have done so.