Immunotherapy for cancer

September 20, 2015

A promising and innovative approach to cancer therapy is to persuade the immune system to attack cancer cells effectively. The immune system does kill cancer cells and presumably removes many tumours whose existence we never suspect. At the same time, established tumours are in many cases able to resist this type of attack. The idea of taking advantage of the immune system in this way is an old one but it took a long time before it became successful enough to reach the stage of an approved drug. This goal was achieved with the approval of ipilimumab for the treatment of melanoma by the FDA in 2011. This drug is a monoclonal antibody which binds the molecule CTLA4, which occurs on the surface of T cells.

To explain the background to this treatment I first recall some facts about T cells. T cells are white blood cells which recognize foreign substances (antigens) in the body. The antigen binds to a molecule called the T cell receptor on the surface of the cell and this gives the T cell an activation signal. Since an inappropriate activation of the immune system could be very harmful there are built-in safety mechanisms. In order to be effective the primary activation signal has to be delivered together with a kind of certificate that action is really necessary. This is a second signal which is given via another surface molecule on the T cell, CD28. The T cell receptor only binds to an antigen when the latter is presented on the surface of another cell (an antigen-presenting cell, APC) in a groove within another molecule, an MHC molecule (major histocompatibility complex). On the surface of the APC there are under appropriate circumstances other molecules called B7.1 and B7.2 which can bind to CD28 and give the second signal. Once this has happened the activated T cell takes appropriate action. What this is depends on the type of T cell involved but for a cytotoxic T cell (one which carries the surface molecule CD8) it means that the T cell kills cells presenting the antigen. If the cell was a virus-infected cell and the antigen is derived from the virus then this is exactly what is desired. Coming back to the safety mechanisms, it is not only important that the T cell is not erroneously switched on. It is also important that when it is switched on in a justified case it should also be switched off after a certain time. Having it switched on for an unlimited time would never be justified. This is where CTLA4 comes in. This protein can bind to B7.1 and B7.2 and in fact does so more strongly than CD28. Thus it can crowd out CD28 and switch off the second signal. 
By binding to CTLA4, the antibody ipilimumab stops it from binding to B7.1 and B7.2, thus leaving the activated T cell switched on. In some cases cancer cells present unusual antigens and become a target for T cells. The killing of these cells can be increased by blocking CTLA4 via the mechanism just explained. At this point I should say that it may not be quite clear whether this is really the mechanism by which blocking CTLA4 causes tumours to shrink. Alternative possibilities are mentioned in the Wikipedia article on CTLA4.
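Purely as an aid to keeping track of the logic just described, here is a crude boolean caricature in Python (my own construction with invented function names, not a quantitative model): activation requires both signal 1 (antigen on MHC binding the T cell receptor) and signal 2 (B7 binding CD28), and CTLA4, which binds B7 more strongly than CD28 does, removes signal 2.

```python
def second_signal(b7_present, ctla4_present):
    # CTLA4 binds B7.1/B7.2 more strongly than CD28 does,
    # so when CTLA4 is present it crowds out CD28.
    return b7_present and not ctla4_present

def t_cell_activated(antigen_on_mhc, b7_present, ctla4_present):
    # Signal 1: antigen presented on an MHC molecule binds the T cell receptor.
    # Signal 2: B7 on the APC binds CD28 (unless blocked by CTLA4).
    signal1 = antigen_on_mhc
    signal2 = second_signal(b7_present, ctla4_present)
    return signal1 and signal2

print(t_cell_activated(True, True, False))   # both signals present: activated
print(t_cell_activated(True, True, True))    # CTLA4 crowds out CD28: switched off
print(t_cell_activated(True, False, False))  # no second signal: not activated
```

Blocking CTLA4, as ipilimumab does, corresponds to forcing the third argument to False, which leaves an activated T cell switched on.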

There are various things which have contributed to my interest in this subject. One is lectures I heard in the series ‘Universität im Rathaus’ [University in the Town Hall] in Mainz last February. The speakers were Matthias Theobald and Ugur Sahin and the theme was personalized cancer medicine. The central point of what they were talking about is one step beyond what I have just sketched. A weakness of the therapy using antibodies to CTLA4, or the related approach using antibodies to another molecule, PD-1, is that they are unspecific. In other words they lead to an increase not only in the activity of the T cells specific to cancer cells but in that of all T cells which have been activated by some antigen. This means that serious side effects are very likely. An approach which is theoretically better, but as yet in a relatively early stage of development, is to produce T cells which are specific for antigens belonging to the tumour of a specific patient and for an MHC molecule of that patient capable of presenting that antigen. From the talk I had the impression that doing this requires a lot of input from bioinformatics but I was not able to understand what kind of input it is. I would like to know more about that. Coming back to CTLA4, I have been interested for some time in modelling the activation of T cells and in that context it would be natural to think about also modelling the deactivating effects of CTLA4 or PD-1. I do not know whether this has been tried.

Oscillations in the MAP kinase cascade

September 10, 2015

In a recent post I mentioned my work with Juliette Hell on the existence of oscillations in the Huang-Ferrell model for the MAP kinase cascade. We recently put our paper on the subject on arXiv. The starting point of this project was the numerical and heuristic work of Qiao et al., PLoS Comp. Biol. 3, 1819. Within their framework these authors did an extensive search of parameter space and found Hopf bifurcations and periodic solutions for many parameters. The size of the system is sufficiently large that it represents a significant obstacle to analytical investigations. One way of improving this situation is to pass to a limiting system (MM system) by a Michaelis-Menten reduction. In addition it turns out that the periodic solutions already occur in a truncated system consisting of the first two layers of the cascade. This leaves one layer with a single phosphorylation and one with a double phosphorylation. In a previous paper we had shown how to do a Michaelis-Menten reduction for the truncated system. Now we have generalized this to the full cascade. In the truncated case the MM system is of dimension three, which is still quite convenient for doing bifurcation theory. Without truncation the MM system is of dimension five, which is already much more difficult. It is however possible to represent the system for the truncated cascade as a (singular) limit of that for the full cascade and thus transport information from the truncated to the full cascade.
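For readers unfamiliar with Michaelis-Menten reduction, the following sketch illustrates the idea in the simplest possible setting, a single enzymatic reaction E + S <-> C -> E + P, rather than the cascade itself. All rate constants are invented for illustration; the point is only that when the total amount of enzyme is small, the substrate concentration in the full mass action system stays close to the solution of the reduced equation dS/dt = -V_max S/(K_m + S).

```python
# Full mass action system for E + S <-> C -> E + P versus its
# Michaelis-Menten reduction, integrated with a simple Euler scheme.
k1, km1, k2 = 10.0, 1.0, 1.0   # illustrative rate constants
E0, S0 = 0.01, 1.0             # small total enzyme: quasi-steady-state regime
Km = (km1 + k2) / k1           # Michaelis constant
Vmax = k2 * E0

dt, T = 0.001, 100.0
S, C = S0, 0.0                 # full system state (free enzyme E = E0 - C)
S_mm = S0                      # state of the reduced (MM) equation
for _ in range(int(T / dt)):
    E = E0 - C
    dS = -k1 * E * S + km1 * C
    dC = k1 * E * S - (km1 + k2) * C
    S, C = S + dt * dS, C + dt * dC
    S_mm += dt * (-Vmax * S_mm / (Km + S_mm))

print(S, S_mm)   # the two substrate concentrations remain close
```

The MM system here is one-dimensional instead of two-dimensional; the analogous reduction of the truncated cascade brings the dimension down to three.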

Consider the MM system for the truncated cascade. The aim is then to find a Hopf bifurcation in a three-dimensional dynamical system with a lot of free parameters. Because of the many parameters it is not difficult to find a large class of stationary solutions. The strategy is then to linearize the right hand side of the equations about these stationary solutions and try to show that there are parameter values where a suitable bifurcation takes place. To do this we would like to control the eigenvalues of the linearization, showing that it can happen that at some point one pair of complex conjugate eigenvalues passes through the imaginary axis with non-zero velocity as a parameter is varied, while the remaining eigenvalue has non-zero real part. The behaviour of the eigenvalues can largely be controlled by the trace, the determinant and an additional Hurwitz determinant. It suffices to arrange that there is a point where the trace is negative, the determinant is negative and the Hurwitz quantity passes through zero with non-zero velocity. This we did. A superficially similar situation is obtained by modelling an in vitro system for the MAPK cascade due to Prabakaran, Gunawardena and Sontag, mentioned in a previous post, in a way strictly analogous to what was done in the Huang-Ferrell model. In that case the layers are in the opposite order and a crucial sign is changed. Up to now we have not been able to show the existence of a Hopf bifurcation in that system and our attempts so far suggest that there may be a real obstruction to doing so. It should be mentioned that the known necessary condition for a stable hyperbolic periodic solution, the existence of a negative feedback loop, is satisfied by this system.
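These Routh-Hurwitz conditions can be checked concretely. For a 3x3 linearization with characteristic polynomial \lambda^3 + a_1\lambda^2 + a_2\lambda + a_3 one has a_1 = -trace, a_3 = -determinant, and the relevant Hurwitz quantity is H = a_1 a_2 - a_3; a pair of complex conjugate eigenvalues sits exactly on the imaginary axis when a_1 > 0, a_3 > 0 and H = 0. The sketch below uses an artificial one-parameter family of companion matrices (not the actual Michaelis-Menten system) to show the pair crossing the axis as H changes sign.

```python
import numpy as np

def companion(a1, a2, a3):
    # Companion matrix of lambda^3 + a1*lambda^2 + a2*lambda + a3
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [-a3, -a2, -a1]])

def complex_pair_real_part(M):
    # Real part of the complex conjugate eigenvalue pair
    ev = np.linalg.eigvals(M)
    pair = [l for l in ev if abs(l.imag) > 1e-8]
    return pair[0].real

a1, a2 = 1.0, 1.0
for a3 in (0.9, 1.0, 1.1):       # moves H = a1*a2 - a3 through zero
    H = a1 * a2 - a3
    print(a3, H, complex_pair_real_part(companion(a1, a2, a3)))
# At a3 = 1 the polynomial factors as (l + 1)(l^2 + 1): eigenvalues
# -1 and +-i, so trace = -1 < 0, det = -1 < 0 and H = 0 at the crossing.
```

The real part of the pair is negative for H > 0 and positive for H < 0, which is the sign change exploited in the bifurcation argument.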

Now I will say some more about the model of Prabakaran et al. Its purpose is to obtain insights into the issue of network reconstruction. Here is a summary of some things I understood. The in vitro biological system considered in the paper is a kind of simplification of the Raf-MEK-ERK MAPK cascade. By the use of certain mutations a situation is obtained where Raf is constitutively active and where ERK can only be phosphorylated once, instead of twice as in vivo. This comes down to a system containing only the second and third layers of the MAPK cascade with the length of the third layer reduced from three to two phosphorylation states. The second layer is modelled using simple mass action (MA) kinetics with the two phosphorylation steps being treated as one, while in the third layer the enzyme concentrations are included explicitly in the dynamics in a standard Michaelis-Menten way (MM-MA). The resulting mathematical model is a system of seven ODEs with three conservation laws. In the paper it is shown that for given values of the conserved quantities the system has a unique steady state. This is an application of a theorem of Angeli and Sontag. Note that this is not the same system of equations as the system analogous to that of Huang-Ferrell mentioned above.

The idea now is to vary one of the conserved quantities and monitor the behaviour of two functions x and y of the unknowns of the system at steady state. It is shown that for one choice of the conserved quantity being varied x and y change in the same direction, while for a different choice they change in opposite directions. From a mathematical point of view this is not very surprising since there is no obvious reason forbidding behaviour of this kind. The significance of the result is that apparently biologists often use this type of variation in experiments to reach conclusions about causal relationships between the concentrations of different substances (activation and inhibition), which can be represented by certain signed oriented graphs. In this context ‘network reconstruction’ is the process of determining a graph of this type. The main conclusion of the paper, as I understand it, is that doing different experiments can lead to inconsistent results for this graph. Note that there is perfect agreement between the experimental results in the paper and the results obtained from the mathematical model. In a biological system, if two experiments give conflicting results, it is always possible to offer the explanation that some additional substance which was not included in the model is responsible for the difference. The advantage of the in vitro model is that there are no other substances which could play that role.
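To see how unsurprising this is mathematically, consider a toy example invented here, much simpler than the system of the paper: a single reversible binding reaction A + B <-> C with conserved totals T_A = A + C and T_B = B + C. Monitoring x = A and y = C at steady state, varying T_A moves them in the same direction while varying T_B moves them in opposite directions.

```python
import math

k1, k2 = 1.0, 1.0   # invented rate constants

def steady_state_C(TA, TB):
    # Equilibrium of A + B <-> C with A + C = TA, B + C = TB:
    # k1*(TA - C)*(TB - C) = k2*C, a quadratic in C; take the feasible root.
    b = k1 * (TA + TB) + k2
    disc = b * b - 4.0 * k1 * k1 * TA * TB
    return (b - math.sqrt(disc)) / (2.0 * k1)

def monitor(TA, TB):
    C = steady_state_C(TA, TB)
    return TA - C, C          # x = free A, y = complex C at steady state

x1, y1 = monitor(1.0, 1.0)
x2, y2 = monitor(2.0, 1.0)    # increase TA: x and y both increase
x3, y3 = monitor(1.0, 2.0)    # increase TB: x decreases while y increases
print(x2 - x1, y2 - y1)
print(x3 - x1, y3 - y1)
```

An experimenter reading co-variation as "A activates C" and anti-variation as "A inhibits C" would draw two inconsistent graphs from these two experiments, although the underlying network is the same.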

Models for photosynthesis, part 3

September 8, 2015

Here I continue the discussion of models for photosynthesis in two previous posts. There I described the Pettersson and Poolman models and indicated the possibility of introducing variants of these which use exclusively mass action kinetics. I call these the Pettersson-MA and Poolman-MA models. I was interested in obtaining information about the qualitative behaviour of solutions of these ODE systems. This gave rise to the MSc project of Dorothea Möhring, which she recently completed successfully. Now we have extended this work a little further and have written up the results in a paper which has just been uploaded to arXiv. The central issue is that of overload breakdown, which is related to the mathematical notion of persistence. We would like to know under what circumstances a positive solution can have \omega-limit points where some concentrations vanish and, if so, which concentrations vanish in that case. It seems that there was almost no information on the latter point in the literature, so that the question of what exactly overload breakdown is remained a bit nebulous. The general idea is that the Pettersson model should have a stronger tendency to undergo overload breakdown while the Poolman model should have a stronger tendency to avoid it. The Pettersson-MA and Poolman-MA models represent a simpler context to work in to start with.

For the Pettersson-MA model we were able to identify a regime in which overload breakdown takes place. This is where the initial concentrations of all sugar phosphates and inorganic phosphate in the chloroplast are sufficiently small. In that case the concentrations of all sugar phosphates tend to zero at late times with two exceptions. The concentrations of xylulose-5-phosphate and sedoheptulose-7-phosphate do not tend to zero. These results are obtained by linearizing the system around a simple stationary solution on the boundary and applying the centre manifold theorem. Another result is that if the reaction constants satisfy a certain inequality a positive solution can have no positive \omega-limit points. In particular, there are no positive stationary solutions in that case. This is proved using a Lyapunov function related to the total number of carbon atoms. In the case of the Poolman-MA model it was shown that the stationary point which was stable in the Pettersson case becomes unstable. Moreover, a quantitative lower bound for the concentration of sugar phosphates at late times is obtained. These results fit well with the intuitive picture of what should happen. Some of the results on the Poolman-MA model can be extended to analogous ones for the original Poolman model. On the other hand the task of giving a full rigorous definition of the Pettersson model was postponed for later work. The direction in which this could go has been sketched in a previous post.
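The idea behind a Lyapunov function of the 'total carbon' type can be seen in a deliberately trivial toy example (invented here, not the actual Calvin cycle model): if L is the carbon-weighted sum of the concentrations and some reaction exports carbon at a positive rate, then L is strictly decreasing along positive solutions, so no positive stationary solution can exist.

```python
# Toy network: X1 -> X2 (carbon-preserving), X2 -> exported (carbon leaves).
# Both species are taken to carry 3 carbon atoms, so L = 3*x1 + 3*x2.
k_conv, k_exp = 1.0, 0.5   # invented rate constants

def rhs(x1, x2):
    dx1 = -k_conv * x1
    dx2 = k_conv * x1 - k_exp * x2
    return dx1, dx2

def dL(x1, x2):
    dx1, dx2 = rhs(x1, x2)
    return 3.0 * dx1 + 3.0 * dx2   # = -3*k_exp*x2 < 0 whenever x2 > 0

# Along any positive solution L decreases strictly, so there can be no
# positive stationary solution; an Euler run shows the monotone decay.
x1, x2, dt = 1.0, 1.0, 0.01
L_values = []
for _ in range(1000):
    L_values.append(3.0 * (x1 + x2))
    dx1, dx2 = rhs(x1, x2)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
print(L_values[0], L_values[-1])
```

In the actual Pettersson-MA model the weights are the carbon numbers of the individual sugar phosphates and the inequality on the reaction constants is what guarantees the net export term dominates.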

There remains a lot to be done. It is possible to define a kind of hybrid model by setting k_{32}=0 in the Poolman model. It would be desirable to completely clarify the definition of the Pettersson model and then, perhaps, to show that it can be obtained as a well-behaved limiting system of the hybrid system in the sense of geometric singular perturbation theory. This might allow the dynamical properties of solutions of the different systems to be related to each other. The only result on stationary solutions obtained so far is a non-existence theorem. It would be of great interest to have positive results on the existence, multiplicity and stability of stationary solutions. A related question is that of classifying possible \omega-limit points of positive solutions where some of the concentrations are zero. This was done in part in the paper but what was not settled is whether potential \omega-limit points with positive concentrations of the hexose phosphates can actually occur. Finally, there are a lot of other models for the Calvin cycle on the market and it would be interesting to see to what extent they are susceptible to methods similar to those used in our paper.

Phosphorylation systems

September 1, 2015

In order to react to their environment living cells use signalling networks to propagate information from receptors, which are often on the surface of the cell, to the nucleus where transcription factors can change the behaviour of the cell by changing the rate of production of different proteins. Signalling networks often make use of phosphorylation systems. These are networks of proteins whose enzymatic activity is switched on or off by phosphorylation or dephosphorylation. When switched on they catalyse the (de-)phosphorylation of other proteins. The information passing through the network is encoded in the phosphate groups attached to specific amino acids in the proteins concerned. A frequently occurring example of this type of system is the MAPK cascade discussed in a previous post. There the phosphate groups are attached to the amino acids serine, threonine and tyrosine. Another type of system, which is common in bacteria, is the two-component system, where the phosphate groups are attached to histidine and aspartic acid.

There is a standard mathematical model for the MAPK cascade due to Huang and Ferrell. It consists of three layers, each of which is a simple or dual futile cycle. Numerical and heuristic investigations indicate that the Huang-Ferrell model admits periodic solutions for certain values of the parameters. Together with Juliette Hell we set out to find a rigorous proof of this fact. In the beginning we pursued the strategy of showing that there are relaxation oscillations. An important element of this is to prove that the dual futile cycle exhibits bistability, a fact which is interesting in its own right, and we were able to prove this, as has been discussed here. In the end we shifted to a different strategy in order to prove the existence of periodic solutions. The bistability proof used a quasistationary (Michaelis-Menten) reduction of the Huang-Ferrell system. It applied bifurcation theory to the Michaelis-Menten system and geometric singular perturbation theory to lift this result to the original system. To prove the existence of periodic solutions we used a similar strategy. This time we showed the presence of Hopf bifurcations in a Michaelis-Menten system and lifted those. The details are contained in a paper which is close to being finished. In the meantime we wrote a review article on phosphorylation systems. Here I want to mention some of the topics covered there.
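Each layer of the cascade is a futile cycle, and already a single futile cycle with Michaelis-Menten kinetics has an interesting input-output behaviour. The sketch below computes the steady state of the standard equation dx/dt = V_1(1-x)/(K_1+1-x) - V_2 x/(K_2+x), with x the phosphorylated fraction, by bisection; when the Michaelis constants are small the response to the kinase activity V_1 is switch-like (Goldbeter-Koshland zero-order ultrasensitivity). The numerical constants are purely illustrative.

```python
def steady_state(V1, V2=1.0, K1=0.01, K2=0.01):
    # Phosphorylated fraction x solves V1*(1-x)/(K1+1-x) = V2*x/(K2+x).
    def f(x):
        return V1 * (1 - x) / (K1 + 1 - x) - V2 * x / (K2 + x)
    lo, hi = 0.0, 1.0          # f(0) > 0 > f(1) and f is decreasing
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sharp transition of the steady state as V1 passes V2:
for V1 in (0.5, 0.9, 1.1, 2.0):
    print(V1, steady_state(V1))
```

Stacking such switch-like layers is what makes the stand-alone cascade so sensitive to its input, and it is in this regime that multistability and oscillations become possible.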

The MAPK cascade, which is the central subject of the paper, is not isolated in its natural biological context. It is connected with other biochemical reactions which can be thought of as feedback loops, positive and negative. As already mentioned, the cascade itself consists of layers which are futile cycles. The paper first reviews what is known about the dynamics of futile cycles and the stand-alone MAPK cascade. The focus is on phenomena such as multistability, sustained oscillations and (marginally) chaos and what can be proved about these things rigorously. The techniques which can be used in proofs of this kind are also reviewed. Given the theoretical results on oscillations it is interesting to ask whether these can be observed experimentally. This has been done for the Raf-MEK-ERK cascade by Shankaran et al. (Mol. Syst. Biol. 5, 332). In that paper it is found that the experimental results do not fit well to the oscillations in the isolated cascade but they can be modelled better when the cascade is embedded in a negative feedback loop. Two other aspects are also built into the models used – the translocation of ERK from the cytosol to the nucleus (which is what is actually measured) and the fact that when ERK and MEK are not fully phosphorylated they can bind to each other. It is also briefly mentioned in our paper that a negative feedback can arise through the interaction of ERK with its substrates, as explained in Liu et al. (Biophys. J. 101, 2572). For the cascade as treated in the Huang-Ferrell model with feedback added no rigorous results are known yet. (For a somewhat different system there is a result on oscillations due to Gedeon and Sontag, J. Diff. Eq. 239, 273, which uses the strategy based on relaxation oscillations.)

In our paper there is also an introduction to two-component systems. A general conclusion of the paper is that phosphorylation systems give rise to a variety of interesting mathematical problems which are waiting to be investigated. It may also be hoped that a better mathematical understanding of this subject can lead to new insights concerning the biological systems being modelled. Biological questions of interest in this context include the following. Are dynamical features of the MAPK cascade such as oscillations desirable for the encoding of information or are they undesirable side effects? To what extent do feedback loops tend to encourage the occurrence of features of this type and to what extent do they tend to suppress them? What are their practical uses, if any? If the function of the system is damaged by mutations, how can it be repaired? The last question is of special interest due to the fact that many cancer cells have mutations in the Raf-MEK-ERK cascade and there have already been many attempts to overcome their negative effects using kinase inhibitors, some of them successful. A prominent example is the Raf inhibitor Vemurafenib, which has been used to treat metastatic melanoma.

Rereading ‘To the Lighthouse’

August 23, 2015

There are some statements I started to believe at a certain distant time in my life and which I have continued to accept without further examination ever since. One of these is ‘the English-language author who I admire most is Virginia Woolf’. Another is obtained by replacing ‘English-language author’ by ‘author in any language’ and ‘Virginia Woolf’ by ‘Marcel Proust’. At one point in her diary Virginia Woolf writes that she has just finished reading the latest volume of ‘A la Recherche du Temps Perdu’ which had recently been published. Then she writes (I am quoting from memory here) that she despairs of ever being able to write as well as Proust. Perhaps she was being too modest at that point. Until very recently it was a long time since I had read anything by Woolf. I was now stimulated to do so again by the fact that Eva and I were planning a trip to southern England, including a visit to St. Ives. For me that town is closely associated with Woolf and it is because of the connection to her that I was motivated to visit St. Ives when I spent some time in Cornwall several years ago. (Here I rapidly pass over the fact, without further comment, that the author with the widest popular success whose books have an association with St. Ives is Rosamunde Pilcher.) The other aspect of my first trip to Cornwall which is most distinct in my memory is missing the last bus in Land’s End and having to walk all the way back to Penzance where I was staying. We visited Land’s End again this time but since I did not want to miss the bus again I did not have time to visit the ‘Shaun the Sheep Experience’ which is running there at the moment. As a consolation, during a later visit to Shaun’s birthplace, Bristol, I saw parts of the artistic event ‘Shaun in the City’ and had my photograph taken with some of the sculptures of Shaun.

When I go on a holiday trip somewhere I often like to take a book with me which has some special connection to the place I am going. Often I have little time to actually read the book during the holiday but that does not matter. For Cornwall and, in particular, St. Ives the natural choice was ‘To the Lighthouse’. That novel is set in the Isle of Skye but it is well known that the real-life setting which inspired it (and the lighthouse of the title) was in St. Ives. This lighthouse, Godrevy Lighthouse, cost a little over seven thousand pounds to build, being finished in 1859. In 1892, on one of two visits there, the ten year old Virginia signed the visitors book. The book was sold for over ten thousand pounds in 2011. So in a sense the little girl’s signature ended up being worth more money than the lighthouse she was visiting. Of course, due to inflation, this is not a fair comparison. Looking on my bookshelves at home I was surprised to find that I do not own a copy of ‘To the Lighthouse’. On those shelves I find ‘The Voyage Out’, ‘Jacob’s Room’, ‘Moments of Being’ and ‘Between the Acts’ but neither ‘To the Lighthouse’ nor ‘The Waves’. Perhaps I never owned them and only borrowed them from libraries. I have a fairly clear memory of having borrowed ‘To the Lighthouse’ from the Kirkwall public library. I do not remember why I did so. Perhaps it was just that at that time I was omnivorously consuming almost everything I found in the literature section in that library. Or perhaps it had to do with the fact that lighthouses always had a special attraction for me. An alternative explanation for the fact I do not own the book myself could be that I parted with it when I left behind the majority of the books I owned when I moved from Aberdeen to Munich after finishing my PhD. This was due to the practical constraint that I only took as many belongings with me as I could carry: two large suitcases and one large rucksack.
I crossed the English Channel on a ferry and I remember how hard it was to carry that luggage up the gangway due to the fact that the tide was high.

I find reading ‘To the Lighthouse’ now a very positive experience. Just a few paragraphs put me in a frame of mind I like. I have the feeling that I am a very different person than what I was the first time I read it but after more than thirty years that is hardly surprising. I also feel that I am reading it in a different way from what I did then. I find it difficult to give an objective account of what it is that I like about the book. Perhaps it is the voice of the author. I feel that if I could have had the chance to talk to her I would certainly have enjoyed it even if she was perhaps not always the easiest of people to deal with. Curiously I have the impression that although I would have found it extremely interesting to meet Proust I am not sure I would have found it pleasant. So why do I think that I may be appreciating aspects of the book now which I did not last time? A concrete example is the passage where Mrs Ramsay is thinking about two things at the same time, the story she is reading to her son and the couple who are late coming home. The possibility of this is explained wonderfully by comparing it to ‘the bass … which now and then ran up unexpectedly into the melody’. I feel, although of course I cannot prove it, that I would not have paid much attention to that passage during my first reading. The differences may also be connected to the fact that I am now married. Often when I am reading a book it is as if my wife was reading it with me, over my shoulder, and this causes me to pay more attention to things which would interest her. A contrasting example is the story about Hume getting stuck in a bog. I am sure I paid attention to that during my first reading and it now conjured up a picture of how I was then, perhaps eighteen years old and still keen on philosophy. 
After a little thought following the encounter with the story it occurred to me that I knew more of the story about Hume, that he was allegedly forced to say that he believed in God in order to persuade an old woman to pull him out. This extended version is also something I knew in that phase of my life, perhaps through my membership in the Aberdeen University philosophy society. On the other hand this story does come up (at least) two more times in the book and it is a little different from what I remember. What the woman forced him to do was to say the Lord’s Prayer.

I came back from England yesterday and although I did not have much time for reading the book while there I am on page 236 due to the head start I had by reading it before I went on the trip. The day we went to St. Ives started out rainy but the weather cleared up during the morning so that about one o’clock I was able to see Godrevy lighthouse and look at it through my binoculars. They also allowed me to enjoy good views of passing gannets and kittiwakes but I think I would have been disappointed if I had made that trip without seeing the lighthouse.

Calvin on the Calvin cycle

July 31, 2015

In a previous post I mentioned Calvin’s Nobel lecture. Now I read it again and since I had learned a lot of things in the meantime I could profit from it in new ways. The subject of the lecture is the way in which Calvin and his collaborators discovered the mechanisms of the dark reactions of photosynthesis. This involved years of experiments which I am not qualified to discuss. What I will do here is to describe some of the major conceptual components of this work. The first step was to discover which chemical substances are involved in the process. To make this a well-defined question it is necessary to fix a boundary between those substances to be considered and others. As their name suggests the dark reactions can take place in the dark and to start with the process was studied in the dark. It seems, however, that this did not lead to very satisfactory results and this led to a change of strategy. The dark reactions also take place in the light and the idea was to look at a steady state situation where photosynthesis is taking place in the presence of light. The dark reactions incorporate carbon dioxide into carbohydrates and the aim was to find the mechanism by which this occurs. At the end of the Second World War, when this work was done, carbon 14 had just become much more easily available due to the existence of nuclear reactors. Calvin also mentions that when doing difficult separations of compounds in his work on photosynthesis he used things he had learned when separating plutonium during the war. Given a steady state situation with ordinary carbon dioxide the radioactive form of the gas containing carbon 14 could be introduced. The radioactive carbon atoms became incorporated into some of the organic compounds in the plants used. (The principal subject of the experiment was the green alga Chlorella.) In fact the radioactive carbon atoms turned up in too many compounds – the boundary had been fixed too widely.
This was improved on by looking at what happened on sufficiently short time scales after the radioactive gas had been added, of the order of a few seconds. After this time the process was stopped, leading to a snapshot of the chemical concentrations. This meant that the labelled carbon had not had time to propagate too far through the system and was only found in a relatively small number of compounds. The compounds were separated by two-dimensional chromatography and those which were radioactive were located by the black spots they caused on photographic film. Calvin remarks ironically that the apparatus they were using did not label the spots with the names of the compounds giving rise to them. It was thus necessary to extract those compounds and analyse them by all sorts of techniques which I know very little about. It took about ten years. In any case, the endpoint of this process was the first major conceptual step: a set of relevant compounds had been identified. These are the carbon compounds which are involved in the reactions leading from the point where carbon dioxide enters the system and before too much of the carbon has been transferred to other systems connected to the Calvin cycle. While reading the text of the lecture I also had a modern picture of the reaction network in front of me and this was useful for understanding the significance of the elements of the story being told. From the point of view of the mathematician this step corresponds to determining the nodes of the reaction network. It remains to find out which compounds react with which others, and with which stoichiometry.

In looking for the reactions one useful source of information is the following. The carbon atoms in a given substance involved in the cycle are not equivalent to each other. By suitable experiments it can be decided which are the first carbon atoms to become radioactive. For instance, a compound produced in relatively large amounts right at the beginning of the process is phosphoglyceric acid (PGA) and it is found that the carbon in the carboxyl group is the one which becomes radioactive first. The other two carbons become radioactive at a common later time. This type of information provides suggestions for possible reaction mechanisms. Another type of input is obtained by simply counting carbon atoms in potential reactions. For instance, if the three-carbon compound PGA is to be produced from a precursor by the addition of carbon dioxide then the simple arithmetic relation 3=1+2 indicates that there might be a precursor molecule with two carbons. However this molecule was never found and it turns out that the relevant arithmetic is 2\times 3=1+5. The reaction produces two molecules of PGA from a precursor with five carbon atoms, ribulose bisphosphate (RuBP). Combining the information about the order in which the carbon atoms were incorporated with the arithmetic considerations allowed a large part of the network to be reconstructed. Nevertheless the nature of one key step, that in which carbon dioxide is incorporated into PGA, remained unclear. Further progress required a different type of experiment.
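The carbon bookkeeping in this kind of reasoning is easily mechanised. The sketch below records carbon counts per molecule and checks that the carboxylation arithmetic balances for RuBP + CO2 -> 2 PGA, but also that the naive guess of a two-carbon precursor giving one PGA would balance equally well; the name C2_precursor is of course hypothetical, which is exactly the point, since counting alone could not rule it out and it simply was never found experimentally.

```python
# Carbon atoms per molecule; C2_precursor is the hypothetical compound
# suggested by the naive relation 3 = 1 + 2.
carbons = {"CO2": 1, "PGA": 3, "RuBP": 5, "C2_precursor": 2}

def balanced(reactants, products):
    # A reaction is carbon-balanced if both sides carry the same
    # total number of carbon atoms.
    count = lambda side: sum(n * carbons[m] for m, n in side.items())
    return count(reactants) == count(products)

# The reaction actually found: 1 + 5 = 2 x 3
print(balanced({"CO2": 1, "RuBP": 1}, {"PGA": 2}))           # balances

# The naive guess 3 = 1 + 2 also balances, so arithmetic alone
# cannot decide between the two mechanisms.
print(balanced({"CO2": 1, "C2_precursor": 1}, {"PGA": 1}))   # also balances
```

This is why the ordering information from the labelling experiments was needed in addition to the counting.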

The measurements used up to now are essentially measurements of concentrations at one time point (or very few time points). The last major step was taken using measurements of the dynamics. Here the concentrations of selected substances are determined at sufficiently many time points so as to get a picture of the time evolution of the concentrations in certain circumstances. The idea is to first take measurements of PGA and RuBP in conditions of constant light. These concentrations are essentially time-independent. Then the light is switched off. It is seen that the concentration of PGA increases rapidly (it more than doubles within a minute) while that of RuBP decreases rapidly on the same time scale. This gives evidence that at steady state RuBP is being converted to PGA. This completes the picture of the reaction network. Further confirmation that the picture is correct is obtained by experiments where the amount of carbon dioxide available is suddenly reduced and the resulting transients in various concentrations monitored.
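This transient can be mimicked by a deliberately crude two-variable caricature in which carboxylation (RuBP to PGA) continues in the dark while the light-dependent consumption of PGA stops. The rate constants and the shortcut stoichiometry are invented purely for illustration and have nothing to do with the real measured kinetics.

```python
# Toy caricature of the light-off experiment: carboxylation
# (RuBP -> 2 PGA) runs regardless of light, while the reduction of PGA
# (which in this toy model regenerates RuBP directly) requires light.
def simulate(light, steps=600, dt=0.01, k_carb=1.0, k_red=0.5):
    rubp, pga = 1.0, 4.0                     # the light steady state of this toy model
    for _ in range(steps):
        v_carb = k_carb * rubp               # RuBP -> 2 PGA
        v_red = light * k_red * pga          # PGA -> ... -> RuBP, needs light
        rubp += dt * (0.5 * v_red - v_carb)  # hypothetical shortcut stoichiometry
        pga += dt * (2.0 * v_carb - v_red)
    return rubp, pga
```

With the light on the concentrations stay put; with the light off, PGA rises while RuBP falls, as in the experiment.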

Reaction networks in Copenhagen

July 9, 2015

Last week I attended a workshop on reaction network theory organized by Elisenda Feliu and Carsten Wiuf. It took place in Copenhagen from 1st to 3rd July. I flew in late on the Tuesday evening and on arrival I had a pleasant feeling of being in the north just due to the amount and quality of the light. Looking at the weather information for Mainz I was glad I had got a reduction in temperature of several degrees by making this trip. A lot of comments and extra information on the talks at this conference can be found on the blog of John Baez and that of Matteo Polettini. Now, on my own slower time scale, I will write a bit about things I heard at the conference which I found particularly interesting. The topic of different time scales is very relevant to the theme of the meeting and the first talk, by Sebastian Walcher, was concerned with it. Often a dynamical system of interest can be thought of as containing a small parameter and letting this parameter tend to zero leads to a smaller system which may be easier to analyse. Information obtained in this way may be transported back to the original system. If the parameter is a ratio of time scales then the limit will be singular. The issue discussed in the talk is that of finding a suitable small parameter in a system when one is suspected. It is probably unreasonable to expect to find a completely general method but the talk presented algorithms which can contribute to solving this type of problem.

In the second talk Gheorghe Craciun presented his proof of the global attractor conjecture, which I have mentioned in a previous post. I was intrigued by one comment he made relating the concept of deficiency zero to systems in general position. Later he explained this to me and I will say something about the direction in which this goes. The concept of deficiency is central in chemical reaction network theory but I never found it very intuitive and I feel safe in claiming that I am in good company as far as that is concerned. Gheorghe’s idea is intended to improve this state of affairs by giving the deficiency a geometric interpretation. In this context it is worth mentioning that there are two definitions of deficiency on the market. I had heard this before but never looked at the details. I was reminded of it by the talk of Jeanne Marie Onana Eloundou-Mbebi in Copenhagen, where it played an important role. She was talking about absolute concentration robustness. The latter concept was also the subject of the talk of Dave Anderson, who was looking at the issue of whether the known results on ACR for deterministic reaction networks hold in some reasonable sense in the stochastic case. The answer seems to be that they do not. But now I return to the question of how the deficiency is defined. Here I use the notation \delta for the deficiency as originally defined by Feinberg. The alternative, which can be found in Jeremy Gunawardena’s text with the title ‘Chemical reaction network theory for in silico biologists’ will be denoted by \delta'. Gunawardena, who seems to find the second definition more natural, proves that the two quantities are equal provided a certain condition holds (each linkage class contains precisely one terminal strong linkage class). This condition is, in particular, satisfied for weakly reversible networks and this is perhaps the reason that the difference in definitions is not often mentioned in the literature. 
In general \delta\ge\delta', so that deficiency zero in the sense of the common definition implies deficiency zero in the sense of the other definition.
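To make the first definition more concrete, the deficiency \delta = n - l - s (with n the number of complexes, l the number of linkage classes and s the dimension of the stoichiometric subspace) can be computed mechanically. The following sketch does this for toy networks of my own choosing; it has nothing to do with any particular network discussed at the conference.

```python
# Sketch of Feinberg's deficiency delta = n - l - s for a reaction
# network given as a list of (reactant complex, product complex) pairs,
# each complex being a dict {species: stoichiometric coefficient}.
from fractions import Fraction

def rank(vectors):
    """Rank of a list of rational vectors by Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r, col = 0, 0
    while r < len(rows) and col < len(rows[0]):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
        col += 1
    return r

def deficiency(species, reactions):
    key = lambda c: tuple(sorted(c.items()))
    complexes = {key(c) for r in reactions for c in r}
    n = len(complexes)
    # linkage classes: connected components of the undirected complex graph
    parent = {c: c for c in complexes}
    def find(c):
        while parent[c] != c:
            c = parent[c]
        return c
    for a, b in reactions:
        parent[find(key(a))] = find(key(b))
    l = len({find(c) for c in complexes})
    # stoichiometric subspace: span of the (product - reactant) vectors
    vecs = [[b.get(s, 0) - a.get(s, 0) for s in species] for a, b in reactions]
    s = rank(vecs)
    return n - l - s
```

For A\to B\to C\to A one finds n=3, l=1, s=2 and hence deficiency zero, while A\to B together with 2A\to 2B has n=4, l=2, s=1 and deficiency one.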

For a long time I knew very little about control theory. The desire to change this motivated me to give a course on the subject in the last winter semester, using the excellent textbook of Eduardo Sontag as my main source. Since then I had not taken the time to look back on what I learned by doing this, and it became clear to me only now. In Copenhagen Nicolette Meshkat gave a talk on identifiability in reaction networks. I had heard her give a talk on a similar subject at the SIAM life science conference last summer and not understood much. I am sure that this was not her fault but mine. This time around things were suddenly clear. The reason is that this subject involves ideas coming from control theory and through giving the course I had learned to think in some new directions. The basic idea of identifiability is to extract information on the parameters of a dynamical system from the input-output relation.
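As a toy illustration of identifiability, consider the scalar linear system \dot x=-ax+bu with output y=cx. The input-output relation determines a and the product bc but not b and c separately. The example is my own and was not in the talk.

```python
# Scalar linear system x' = -a x + b u with output y = c x, simulated
# with forward Euler.  The input-output map determines a and the product
# b*c only, so the two parameter sets below give identical outputs.
def output(a, b, c, u=lambda t: 1.0, T=5.0, dt=0.001):
    x, ys, t = 0.0, [], 0.0
    while t < T:
        ys.append(c * x)                 # measured output
        x += dt * (-a * x + b * u(t))    # Euler step for the state
        t += dt
    return ys

y1 = output(a=1.0, b=1.0, c=6.0)
y2 = output(a=1.0, b=2.0, c=3.0)   # same b*c: indistinguishable from outside
```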

There was another talk with a lot of control theory content by Mustafa Khammash. He had brought some machines with him to illustrate some of the ideas. These were made of Lego, driven by computers and communicating with each other via Bluetooth devices. One of these was a physical realization of one of the favourite simple examples in control theory, stabilization of the inverted pendulum. Another was a robot programmed to come to rest 30 cm in front of a barrier facing it. Next he talked about an experiment coupling living cells to a computer to form a control system. The output from a population of cells was read by a combination of GFP labeling and a FACS machine. After processing the signal the resulting input was applied by stimulating the cells with light. This got a lot of media attention under the name ‘cyborg yeast’. After that he talked about a project in which programmes can be incorporated into the cells themselves using plasmids. In one of the last remarks in his talk he mentioned how cows use integral feedback to control the calcium concentration in their bodies. I think it would be nice to incorporate this into popular talks or calculus lectures in the form ‘cows can do integrals’ or ‘cows can solve differential equations’. The idea would be to have a striking example of what the abstract things done in calculus courses have to do with the real world.
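The slogan ‘cows can do integrals’ can even be given a few-line demonstration: an integral controller accumulates the error over time and thereby returns a regulated quantity (say a calcium level) exactly to its set point despite a constant disturbance. All numbers below are invented for illustration.

```python
# Minimal integral feedback: the controller integrates the error and
# feeds it back, so a constant disturbance is rejected completely.
def regulate(setpoint=10.0, disturbance=-2.0, ki=0.5, dt=0.01, steps=5000):
    x, integral = setpoint, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += dt * error            # the 'integral' the cow computes
        # plant: relaxes toward the set point, pushed off by the
        # disturbance and corrected by the integral term
        x += dt * (-x + setpoint + disturbance + ki * integral)
    return x
```

With the integral term switched off (ki=0) the regulated quantity settles at an offset value; with it on, the offset is removed exactly.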

My talk at the conference was on phosphorylation systems and interestingly there was another talk there, by Andreas Weber, which had a possibly very significant relation to this. I only became aware of the existence of the corresponding paper (Errami et al., J. Comp. Phys. 291, 279) a few weeks ago and, since it involves a lot of techniques I am not too familiar with and has a strong computer science component, I have only had a limited opportunity to understand it. I hope to get deeper into it soon. It concerns a method of finding Hopf bifurcations.

This conference was a great chance to maintain and extend my contacts in the community working on reaction networks and to get various types of inside information on the field.

Computer-assisted proofs

June 26, 2015

My spontaneous reaction to a computer-assisted proof is to regard it as having a lesser status than one done by hand. Here I want to consider why I feel this way and if and under what circumstances this point of view is justified. I start by considering the situation of a traditional mathematical proof, done by hand and documented in journal articles or books. In this context it is impossible to write out all details of the proof. Somehow the aim is to bridge the gap between the new result and what experts in the area are already convinced of. This general difficulty becomes more acute when the proof is very long and parts of it are quite repetitive. There is the tendency to say that the next step is strictly analogous to the previous one and if the next step is written out there is the tendency for the reader to think that it is strictly analogous and to gloss over it. Human beings (a class which includes mathematicians) make mistakes and have a limited capacity to concentrate. To sum up, a traditional proof is never perfect and very long and repetitive proofs are likely to be less so than others. So what is it that often makes a traditional proof so convincing? I think that in the end it is its embedding in a certain context. An experienced mathematician has met with countless examples of proofs, his own and those of others, errors large and small in those proofs and how they can often be repaired. This is complemented by experience of the interactions between different mathematicians and their texts. These things give a basis for judging the validity of a proof which is by no means exclusively on the level of explicit logical argumentation.

How do things stand, by comparison, with computer-assisted proofs? The first point to be raised is what is meant by that phrase. Let me start with a rather trivial example. Suppose I use a computer to calculate the kth digit of n factorial, where k and n are quite large. If for given choices of the numbers a well-tested computer programme can give me the answer in one minute then I will not doubt the answer. Why is this? Because I believe that the answer comes from an algorithm which determines a unique answer. No approximations or floating point operations are involved. For me interval arithmetic, which I discussed in a previous post, is on the same level of credibility, namely that of a computer-free proof. There could be an error in the hardware, the software or the programme but this is not essentially different from the uncertainties connected with a traditional proof. So what might be the problem in other cases? One problem is that of transparency. If a computer-assisted proof is to be convincing for me then I must either understand what algorithm the computer is supposed to be implementing or at least have the impression that I could do so if I invested some time and effort. Thus the question arises to what extent this aspect is documented in a given case. There is also the issue of the loss of context which I mentioned previously. Suppose I believe that showing that the answer to a certain question is ‘yes’ in 1000 cases constitutes a proof of a certain theorem but that checking these cases is so arduous that a human being can hardly do so. Suppose further that I understand an algorithm which, if implemented, can carry out this task on a computer. Will I then be convinced? I think the answer is that I will, but I am still likely to be left with an uncomfortable feeling if I do not have the opportunity to see the details in a given case when I want to. 
In addition to the question of whether the nature of the application is documented there is the question of whether this has been done in a way that is sufficiently palatable that mathematicians will actually carefully study the documentation. Rather than remain on the level of generalities I prefer to now go over to an example.
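To make the factorial example concrete: with arbitrary-precision integer arithmetic the whole computation is exact, which is precisely why I am prepared to trust it.

```python
# The kth digit (counting from the right, starting at 1) of n!, computed
# with exact integer arithmetic only -- no floating point is involved.
from math import factorial

def kth_digit(n, k):
    return factorial(n) // 10 ** (k - 1) % 10
```

For instance the third digit from the right of 10! = 3628800 is 8.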

Perhaps the most famous computer-assisted proof is that of the four colour theorem by Appel and Haken. To fill myself in on the background on this subject I read the book ‘Four colors suffice’ by Robin Wilson. The original problem is to colour a map in such a way that no two countries with a common border have the same colour. The statement of the theorem is that it is always possible with four colours. This statement can be reformulated as a question in graph theory. Here I am not interested in the details of how this reformulation is carried out. The intuitive idea is to associate a vertex to each country and an edge to each common border. Then the problem becomes that of colouring the vertices of a planar graph in such a way that no two adjacent vertices have the same colour. From now on I take this graph-theoretic statement as basic. (Unfortunately it is in fact not a purely combinatorial statement, since we are talking about planar graphs.) What I am interested in is not so much the problem itself as what it can teach us about computer-assisted proofs in general. I found the book of Wilson very entertaining but I was disappointed by the fact that he consistently avoids going over to the graph-theoretic formulation which I find more transparent (that word again). In an article by Robin Thomas (Notices of the AMS 44, 848) the graph-theoretic formulation is used more but I still cannot say I understood the structure of the proof on the coarsest scale. Thomas does write that in his own simplified version of the original proof the contribution of computers only involves integer arithmetic. Thus this proof does seem to belong to the category of things I said above I would tend to accept as a mathematical proof, modulo the fact that I would have to invest the time and effort to understand the algorithm. There is also a ‘computer-checked proof’ of the four colour theorem by Georges Gonthier. 
I found this text interesting to look at but felt as if I was quickly getting into logical deep waters. I do not really understand what is going on there.
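For orientation, here is what the graph-theoretic statement asserts for an individual graph, checked by naive backtracking. Being able to test single graphs in this way is of course a world away from proving that every planar graph passes the test, which is what the computer-assisted proofs achieve.

```python
# Naive backtracking search for a proper colouring of a graph with four
# colours.  adj maps each vertex to the set of its neighbours.
def four_colour(adj):
    vertices = list(adj)
    colouring = {}
    def extend(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(4):
            # a colour is allowed if no already-coloured neighbour has it
            if all(colouring.get(u) != c for u in adj[v]):
                colouring[v] = c
                if extend(i + 1):
                    return True
                del colouring[v]
        return False
    return colouring if extend(0) else None
```

The complete graph K4, which is planar, is four-colourable; K5, which is not planar, is not.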

To sum up this discussion, I am afraid that in the end the four colour problem was not the right example for me to start with and that I need to take some other example which is closer to mathematical topics which I know better and perhaps also further from having been formalized and documented.

Models for photosynthesis, part 2

May 22, 2015

In my previous post on this subject I discussed the question of the status of the variables in the Poolman model of photosynthesis and in the end I was convinced that I had understood which concentrations are to be considered as dynamical unknowns and which as constants. The Poolman model is a modified version of the Pettersson model and the corresponding questions about the nature of the variables have the same answers in both cases. What I am calling the Pettersson model was introduced in a paper of Pettersson and Ryde-Pettersson (Eur. J. Biochem. 175, 661) and there the description of the variables and the equations is rather complete and comprehensible. Now I will go on to consider the second question raised in the previous post, namely what the evolution equations are. The evolution equations in the Poolman model are modifications of those in the Pettersson model and are described relative to those in the original paper on the former model. For this reason I will start by describing the equations for the Pettersson model. As a preparation for that I will treat a side issue. In a reaction network a reaction whose rate depends only on the concentrations of the substances consumed in the reaction is sometimes called NAC (for non-autocatalytic). For instance this terminology is used in the paper of Kaltenbach quoted in the previous post. The opposite of NAC is the case where the reaction rate is modulated by the concentrations of other substances, such as activators or inhibitors.

The unknowns in the Pettersson model are concentrations of substances in the stroma of the chloroplast. The substances involved are 15 carbohydrates bearing one or more phosphate groups, inorganic phosphate and ATP, thus 17 variables in total. In addition to ordinary reactions between these substances there are transport processes in which sugar phosphates are moved from the stroma to the cytosol in exchange for inorganic phosphate. For brevity I will also refer to these as reactions. The total amount of phosphate in the stroma is conserved and this leads to a conservation law for the system of equations, a fact explicitly mentioned in the paper. On the basis of experimental data some of the reactions are classified as fast and it is assumed that they are already at equilibrium. They are also assumed to be NAC and to have mass-action kinetics. This defines a set of algebraic equations. These are then used to reduce the 17 evolution equations which are in principle present to five equations for certain linear combinations of the variables. The details of how this is done are described in the paper. I will now summarize how this works. The time derivatives of the 16 variables other than inorganic phosphate are given in terms of linear combinations of 17 reaction rates. Nine of these reaction rates, which are not NAC, are given explicitly. The others have to be treated using the 11 algebraic equations coming from the fast reactions. The right hand sides F_i of the five evolution equations mentioned already are linear combinations of those reaction rates which are given explicitly. These must be expressed in terms of the quantities whose time derivatives are on the left hand side of these equations, using the algebraic equations coming from the fast reactions and the conservation equation for the total amount of phosphate. In fact all unknowns can be expressed in terms of the concentrations of RuBP, DHAP, F6P, Ru5P and ATP. Call these quantities s_i. 
Thus if the time derivatives of the s_i can be expressed in terms of the F_i we are done. It is shown in the appendix to the paper how a linear combination of the time derivatives of the s_i with coefficients only depending on the s_i is equal to F_i. Moreover it is stated that the time derivatives of the s_i can be expressed in terms of these linear combinations.
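The conservation law can be phrased in linear-algebra terms: the vector of phosphate contents of the species is a left null vector of the stoichiometric matrix, and this is what allows one concentration to be eliminated. The miniature network below, involving only ATP, ADP and inorganic phosphate, is my own invention to show the mechanism; it is far smaller than the Pettersson network.

```python
# A weight vector (here: phosphate content per species) gives a
# conservation law exactly when it annihilates every reaction column of
# the stoichiometric matrix.
def conserved(weights, matrix):
    """True if the weight vector is a left null vector of the matrix."""
    cols = range(len(matrix[0]))
    return all(sum(w * row[j] for w, row in zip(weights, matrix)) == 0
               for j in cols)

# species: ATP, ADP, inorganic phosphate P; phosphate contents 3, 2, 1.
phosphate = [3, 2, 1]
# columns are the reactions ATP -> ADP + P and ADP + P -> ATP
stoich = [
    [-1,  1],   # ATP
    [ 1, -1],   # ADP
    [ 1, -1],   # P
]
assert conserved(phosphate, stoich)
```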

Consider now the Poolman model. One way in which it differs from the Pettersson model is that starch degradation is included. The other is that while the kinetics for the ‘slow reactions’ (i.e. those which are not classified as fast in the Pettersson model) are left unchanged, the equilibrium assumption for the fast reactions is dropped. Instead the fast reactions are treated as reversible with mass action kinetics. In the thesis of Sergio Grimbs (Towards structure and dynamics of metabolic networks, Potsdam 2009) there is some discussion of the models of Poolman and Pettersson. It is investigated whether information about multistability in these models can be obtained using ideas coming from chemical reaction network theory. Since the results from CRNT considered require mass action kinetics it is implicit in the thesis that the systems being considered are those obtained by applying mass action kinetics to all reactions in the networks for the Poolman and Pettersson models. These systems are therefore strictly speaking different from those of Pettersson and Poolman. In any case it turned out that these tools were not useful in this example since the simplest results did not apply and for the more complicated computer-assisted ones the systems were too large.

In the Pettersson paper the results of computations of steady states are presented and a comparison with published experimental results looks good in a graph presented there. So why can we not conclude that the problem of modelling the dynamics of the Calvin cycle was pretty well solved in 1988? The paper contains no details on how the simulations were done and so it is problematic to repeat them. Jablonsky et al. set up simulations of this model on their own and found results very different from those reported in the original paper. In this context the advantage of the Poolman model is that it has been put into the BioModels database so that the basic data is available to anyone with the necessary experience in doing simulations for biochemical models. Forgetting the issue of the reliability of their simulations, what did Pettersson and Ryde-Pettersson find? They saw that depending on the external concentration of inorganic phosphate there is either no positive stationary solution (for high values of this parameter) or two (for low values) with a bifurcation in between. When there are two stationary solutions one is stable and one unstable. It looks like there is a fold bifurcation. There is a trivial stationary solution with all sugar concentrations zero for all values of the parameter. When the external phosphate concentration tends to zero the two positive stationary solutions coalesce with the trivial one. The absence of positive stationary solutions for high phosphate concentrations is suggested to be related to the concept of ‘overload breakdown’. This means that sugars are being exported so fast that the production from the Calvin cycle cannot keep up and the whole system breaks down. It would be nice to have an analytical proof of the existence of a fold bifurcation for the Pettersson model but that is probably very difficult to get.
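The prototype of a fold bifurcation is the equation \dot x=\mu-x^2, which has two steady states x=\pm\sqrt{\mu} for \mu>0, one stable and one unstable, and none for \mu<0. The sketch below only illustrates this normal form; in the Pettersson model the role of the parameter would be played by the external phosphate concentration, with high phosphate corresponding to the regime without positive steady states.

```python
# Steady states of the fold normal form x' = mu - x^2.  Stability is
# read off from f'(x) = -2x: negative (stable) at +sqrt(mu), positive
# (unstable) at -sqrt(mu).
import math

def steady_states(mu):
    if mu < 0:
        return []                     # no steady states past the fold
    root = math.sqrt(mu)
    return [(root, "stable"), (-root, "unstable")]
```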

Models for photosynthesis

May 15, 2015

Photosynthesis is a process of central importance in biology. There is a large literature on modelling this process. One step is to identify networks of chemical reactions involved. Another is to derive mathematical models (usually systems of ODE) from these networks. Here when I say ‘model’ I mean ‘mathematical model’ and not the underlying network. In a paper by Jablonsky et al. (BMC Systems Biology 5: 185) existing models are surveyed and a number of errors and inconsistencies in the literature are pointed out. This reminded me of the fact that a widespread problem in the biological literature is that the huge amount of data being generated these days contains very many errors. Here I want to discuss some issues related to this, concentrating on models for the Calvin cycle of photosynthesis and, in particular, on what I will call the Poolman model.

A point which might seem obvious and trivial to the mathematician is that a description of a mathematical model (I consider here only ODE models) should contain a clear answer to the following two questions. 1) What are the unknowns? 2) What are the evolution equations? One source of ambiguity involved in the first question is the impossibility of modelling everything. It is usually unreasonable to model a whole organism although this has been tried for some simple ones. Even if it were possible, the organism is in interaction with other organisms and its environment and these cannot be included either. In practice it is necessary to fix a boundary of the system we want to consider and cut there. One way of handling the substances outside the cut in a mathematical model is to set their concentrations to constant values, thus implicitly assuming that to a good approximation these are not affected by the dynamics within the system. Let us call these external species and the substances whose dynamics is included in the model internal species. Thus part of answering question 1) is to decide which species are to be treated as internal. In this post I will confine myself to discussing question 1), leaving question 2) for a later date.

Suppose we want to answer question 1) for a model in the literature. What are potential difficulties? In biological papers the equations (and even the full list of unknowns) are often banished to the supplementary material. In addition to being less easy to access and often less easy to read (due to typographical inferiority) than the main text I have the feeling that this supplementary material is often subjected to less scrutiny by the referees and by the authors, so that errors or incompleteness can occur more easily. Sometimes this information is only contained in some files intended to be read by a computer rather than a human being and it may be necessary to have, or be able to use, special software in order to read them in any reasonable way. Most of these difficulties are not absolute in nature. It is just that the mathematician embarking on such a process should ideally be aware of some of the challenges awaiting him in advance.

How does this look in the case of the Poolman model? It was first published in a journal in a paper of Poolman, Fell and Thomas (J. Exp. Botany, 51, 319). The reaction network is specified by Fig. 1 of the paper. This makes most of the unknowns clear but leaves the following cases where something more needs to be said. Firstly, it is natural to take the concentration of ADP to be defined implicitly through the concentration of ATP and the conservation of the total amount of adenosine phosphates. Secondly, it is explicitly stated that the concentrations of NADP and NADPH are taken to be constant so that these are clearly external species. Presumably the concentration of inorganic phosphate in the stroma is also taken to be constant, so that this is also an external variable, although I did not find an explicit statement to this effect in the paper. The one remaining possible ambiguity involves starch – is it an internal or an external species in this model? I was not able to find anything directly addressing this point in the paper. On the other hand the paper does refer to the thesis of Poolman and some internet resources for further information. In the main body of the thesis I found no explicit resolution of the question of external phosphate but there it does seem that this quantity is treated as an external parameter. The question of starch is particularly important since this is a major change in the Poolman model compared to the earlier Pettersson model on which it is based and since Jablonsky et al. claim that there is an error in the equation describing this step. It is stated in the thesis that ‘a meaningful concentration cannot be assigned to’ … ‘the starch substrate’ which seems to support my impression that the concentration of starch is an external species. Finally a clear answer confirming my suppositions above can be found in Appendix A of the thesis which describes the computer implementation. 
There we find a list of variables and constants and the latter are distinguished by being preceded by a dollar sign. So is there an error in the equation for starch degradation used in the Poolman model? My impression is that there is not, in the sense that the desired assumptions have been implemented successfully. The fact that Jablonsky et al. get the absurd result of negative starch concentrations is because they compute an evolution equation for starch, which is an external variable in the Poolman model. What could be criticised in the Poolman model is that the amount of starch in the chloroplast varies a lot over the course of the day. Thus a model with starch as an external variable could only be expected to give a good approximation to reality on timescales much shorter than one day.

