David Vetter, the bubble boy

October 17, 2015

T cells are a class of white blood cells without which a human being usually cannot survive. An exception to this was David Vetter, a boy who lived for 12 years without T cells. This was only possible because he spent all this time in a sterile environment, a plastic bubble. For this reason he became known as the bubble boy. The disease he suffered from is called SCID, severe combined immunodeficiency, and it is characterized by the absence of T cells. The most common form is due to a mutation on the X chromosome and as a result it usually affects males. The effects set in a few months after birth. The mutation leads to a lack of the \gamma chain of the IL-2 receptor. In fact this chain occurs in several cytokine receptors and is therefore called the ‘common gamma chain’. Probably the key to the negative effects caused by its lack in SCID patients is the resulting lack of the receptor for IL-7, which is important for T cell development. SCID patients have a normal number of B cells but very few antibodies, due to the lack of support by helper T cells. Thus in the end they lack both the immunity usually provided by T cells and that usually provided by B cells. This is the reason for the description ‘combined immunodeficiency’. The information on this theme which follows comes mainly from two sources. The first is a documentary film ‘Bodyshock – The Boy in the Bubble’ about David Vetter produced by Channel 4 and available on YouTube. (There are also less serious films on this subject, including one featuring John Travolta.) The second is the chapter on X-linked SCID in the book ‘Case Studies in Immunology’ by Raif Geha and Luigi Notarangelo. I find this book a wonderful resource for learning about immunology. It links general theory to the case histories of specific patients.

David Vetter had an older brother who also suffered from SCID and died of infection very young. Thus his parents and their doctors were warned. The brother was given a bone marrow transplant from his sister, who had the necessary tissue compatibility. Unfortunately this did not save him, presumably because he had already been exposed to too many infections by the time it was carried out. The parents decided to have another child, knowing that if it was a boy the chances of another case of SCID were 50%. Their doctors had a hope of being able to save the life of such a child by isolating him and then giving him a bone marrow transplant before he had been exposed to infections. The parents very soon had another child, it was a boy, he had SCID. The child was put into a sterile plastic bubble immediately after birth. Unfortunately it turned out that the planned bone marrow donor, David’s sister, was not a good match for him. It was necessary to wait and hope for an alternative donor. This hope was not fulfilled and David had to stay in the bubble. This had not been planned and it must be asked whether the doctors involved had really thought through what would happen if the optimal variant they had thought of did not work out.

At one point David started making punctures in his bubble as a way of attracting attention. It was then explained to him what his situation was and why he must not damage the bubble. Later NASA produced a kind of space suit for him which allowed him to move around outside his home. He used it only six times, since he was too afraid there could be an accident. His physical health was good but understandably his psychological situation was difficult. New ideas in the practice of bone marrow transplantation indicated that it might be possible to use donors with a lesser degree of compatibility. On this basis David was given a transplant with his sister as the donor. It was not noticed that her bone marrow was infected with Epstein-Barr virus. As a result David got Burkitt’s lymphoma, a type of cancer which can be caused by that virus. (Compare what I wrote about this role of EBV here.) He died a few months after the operation, at the age of 12. Since that time treatment techniques have improved. The patient whose case is described in the book of Geha and Notarangelo had a successful bone marrow transplant (with his mother as donor). Unfortunately his lack of antibodies was not cured, but this can be controlled with injections of immunoglobulin once every three weeks.

Siphons in reaction networks

October 8, 2015

The concept of a siphon is one which I have been more or less aware of for quite a long time. Unfortunately I never had the impression that I had understood it completely. Given the fact that it came up a lot in discussions I was involved in and talks I heard last week, I thought that the time had come to make the effort to do so. It is of relevance for demonstrating the property of persistence in reaction networks. This is the property that the \omega-limit points of a positive solution are themselves positive. For a bounded solution this is the same as saying that the infima of all concentrations at late times are positive. The most helpful reference I have found for these topics is a paper of Angeli, de Leenheer and Sontag in a proceedings volume edited by Queinnec et al.

There are two ways of formulating the definition of a siphon. The first is more algebraic, the second more geometric. In the first the siphon is defined to be a set Z of species with the property that whenever one of the species in Z occurs on the right hand side of a reaction one of the species in Z occurs on the left hand side. Geometrically we replace Z by the set L_Z of points of the non-negative orthant which are common zeroes of the elements of Z, thought of as linear functions on the species space. The defining property of a siphon is that L_Z is invariant under the (forward in time) flow of the dynamical system describing the evolution of the concentrations. Another way of looking at the situation is as follows. Consider a point of L_Z. The right hand side of the evolution equation of one of the concentrations belonging to Z is a sum of positive and negative terms. The negative terms automatically vanish on L_Z and the siphon condition is what is needed to ensure that the positive terms also vanish there. Sometimes minimal siphons are considered. It is important to realize that in this case it is Z which is minimal; correspondingly L_Z is maximal. The convention is that the empty set is excluded as a choice for Z and correspondingly the whole non-negative orthant as a choice for L_Z. What is allowed is to choose Z to be the whole set of species, which means that L_Z is the origin. Of course whether this choice actually defines a siphon depends on the particular dynamical system being considered.
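The algebraic form of the definition lends itself to a mechanical check. Here is a minimal Python sketch; the function name and the toy network A ⇌ B are my own illustration and are not taken from the references mentioned above.

```python
def is_siphon(Z, reactions):
    """Z: a set of species. reactions: a list of pairs
    (educts, products), each a set of the species occurring on that
    side of the reaction. Z is a siphon if every reaction which
    produces a species of Z also consumes a species of Z."""
    return all(not (Z & products) or bool(Z & educts)
               for educts, products in reactions)

# Toy network A <-> B:
reactions = [({"A"}, {"B"}), ({"B"}, {"A"})]
print(is_siphon({"A", "B"}, reactions))  # True: the origin is invariant
print(is_siphon({"A"}, reactions))       # False: B -> A produces A without consuming anything in Z
```

Here the full species set {A, B} is a siphon, corresponding to the invariance of the origin, while {A} alone is not.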

If x_* is an \omega-limit point of a positive solution but is not itself positive then the set of species whose concentrations vanish at that point is a siphon. In particular, each stationary solution on the boundary lies in the set L_Z of some siphon Z. It is remarked by Shiu and Sturmfels (Bull. Math. Biol. 72, 1448) that for a network with only one linkage class, if the set L_Z of a siphon contains one stationary solution then it consists entirely of stationary solutions. To see this let x_* be a stationary solution in L_Z. There must be some complex y belonging to the network which contains an element of Z. If y' is another complex then there is a directed path from y' to y. We can follow this path backwards from y and conclude successively that each complex encountered contains an element of Z. Thus y' contains an element of Z and since y' was arbitrary all complexes have this property. This means that the monomials corresponding to all complexes vanish on L_Z, so that every point of L_Z is a stationary solution.
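The backward propagation in this argument amounts to a small fixed-point computation over the complexes. The following Python sketch is my own illustration; the second example, with two linkage classes, shows where the computation fails without the single linkage class hypothesis.

```python
def all_complexes_meet_siphon(Z, complexes, reactions):
    """complexes: list of sets of species (one per complex).
    reactions: list of index pairs (i, j), educt complex -> product
    complex. Starting from the complexes which contain a species of
    the siphon Z, propagate backwards along reactions: if the product
    complex of a reaction contains an element of Z then, by the
    siphon property, so does its educt complex."""
    reached = {i for i, y in enumerate(complexes) if y & Z}
    changed = True
    while changed:
        changed = False
        for i, j in reactions:
            if j in reached and i not in reached:
                reached.add(i)
                changed = True
    return reached == set(range(len(complexes)))

# One linkage class (A <-> B): every complex meets Z = {A, B}.
print(all_complexes_meet_siphon({"A", "B"}, [{"A"}, {"B"}],
                                [(0, 1), (1, 0)]))  # True
# Two linkage classes (A -> B and C -> D): the propagation never
# reaches the second class, so the conclusion fails.
print(all_complexes_meet_siphon({"A", "B"}, [{"A"}, {"B"}, {"C"}, {"D"}],
                                [(0, 1), (2, 3)]))  # False
```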

Siphons can sometimes be used to prove persistence. Suppose that Z is a siphon for a certain network, so that the points of L_Z are potential \omega-limit points of solutions of the ODE system corresponding to this network. Suppose further that A is a conserved quantity for the system which is a linear combination of the coordinates with non-negative coefficients, not all zero. For a positive solution the quantity A has a positive constant value along the solution and hence also has the same value at any of its \omega-limit points. It follows that if A vanishes on L_Z then no \omega-limit point of that solution belongs to L_Z. If it is possible to find a conserved quantity A of this type for each siphon of a given system (possibly different conserved quantities for different siphons) then persistence is proved. For example this strategy is used in the paper of Angeli et al. to prove persistence for the dual futile cycle. The concept of persistence is an important one when thinking about the general properties of reaction networks. The persistence conjecture says that any weakly reversible reaction network with mass action kinetics is persistent (possibly with the additional assumption that all solutions are bounded). In his talk last week Craciun mentioned that he is working on proving this conjecture. If true it implies the global attractor conjecture. It also implies a statement claimed in a preprint of Deng et al. (arXiv:1111.2386) that a weakly reversible network has a positive stationary solution in every stoichiometric compatibility class. This result has never been published and there seems to be some doubt as to whether the proof is correct.
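Whether such a conserved quantity works for a given siphon reduces to a support check: A = \sum_i c_i x_i with non-negative coefficients vanishes at the points where all species of Z have concentration zero exactly when every species with c_i > 0 belongs to Z. A Python sketch of this check; names and the toy data are my own illustration.

```python
def support_criterion(Z, c):
    """c: dict mapping species to the non-negative coefficient of a
    conserved quantity A = sum_i c_i x_i. A vanishes on the set of
    points where all species of the siphon Z are zero exactly when
    every species with c_i > 0 belongs to Z. If A is not identically
    zero, this excludes omega-limit points of positive solutions
    from that set."""
    support = {s for s, ci in c.items() if ci > 0}
    return bool(support) and support <= Z

# Toy example: total amount x_A + x_B is conserved.
print(support_criterion({"A", "B"}, {"A": 1, "B": 1}))  # True: siphon excluded
print(support_criterion({"A"}, {"A": 1, "B": 1}))       # False: criterion not applicable
```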

Trip to the US

October 5, 2015

Last week I visited a few places in the US. My first stop was Morgantown, West Virginia where my host was Casian Pantea. There I had a lot of discussions with Casian and Carsten Conradi on chemical reaction network theory. This synergized well with the work I have recently been doing preparing a lecture course on that subject which I will be giving in the next semester. I gave a talk on MAPK and got some feedback on that. It rained a lot and there was not much opportunity to do anything except work. One day on the way to dinner while it was relatively dry I saw a Cardinal and I fortunately did have my binoculars with me. On Wednesday afternoon I travelled to New Brunswick and spent most of Thursday talking to Eduardo Sontag at Rutgers. It was a great pleasure to talk to an excellent mathematician who also knows a lot about immunology. He and I have a lot of common interests which is in part due to the fact that I was inspired by several of his papers during the time I was getting into mathematical biology. I also had the opportunity to meet Evgeni Nikolaev who told me a variety of interesting things. They concerned bifurcation theory in general, its applications to the kinds of biological models I am interested in and his successes in applying mathematical models to understanding concrete problems in biomedical research such as the processes taking place in tuberculosis. My personal dream is to see a real coming together of mathematics and immunology and that I have the chance to make a contribution to that process.

On Friday I flew to Chicago in order to attend an AMS sectional meeting. I had been in Chicago once before but that was many years ago now. I do remember being impressed by how much Lake Michigan looks like the sea, I suppose due to the structure of the waves. This impression was even stronger this time since there were strong winds whipping up the waves. Loyola University, the site of the meeting, is right beside the lake and it felt like home for me due to the combination of wind, waves and gulls. The majority of those were Ring-Billed Gulls which made it clear which side of the Atlantic I was on. There were also some Herring Gulls and although they might have been split from those on the other side of the Atlantic by the taxonomists I did not notice any difference. It was the first time I had been at an AMS sectional meeting and my impression was that the parallel sessions were very parallel, in other words in no danger of meeting. Most of the people in our session were people I knew from the conferences I attended in Charlotte and in Copenhagen although I did make a couple of new acquaintances, improving my coverage of the reaction network community.

In a previous post I mentioned Gheorghe Craciun’s ideas about giving the deficiency of a reaction network a geometric interpretation, following a talk of his in Copenhagen. Although I asked him questions about this on that occasion I did not completely understand the idea. Correspondingly my discussion of the point here in my blog was quite incomplete. Now I talked to him again and I believe I have finally got the point. Consider first a network with a single linkage class. The complexes of the network define points in the species space whose coordinates are the stoichiometric coefficients. The reactions define oriented segments joining the educt complex to the product complex of each reaction. The stoichiometric subspace is the vector space spanned by the differences of the complexes. It can also be considered as a translate of the affine subspace spanned by the complexes themselves. This makes it clear that its dimension s is at most n-1, where n is the number of complexes. The number s is the rank of the stoichiometric matrix. The deficiency is n-1-s. At the same time s\le m, where m is the number of species. If there are several linkage classes then the whole space has dimension at most n-l, where l is the number of linkage classes. The deficiency is n-l-s. If the spaces corresponding to the individual linkage classes have the maximal dimension allowed by the number of complexes in that class and these spaces are linearly independent then the deficiency is zero. Thus we see that the deficiency is the extent to which the complexes fail to be in general position. If the species and the number of complexes have been fixed then deficiency zero is seen to be a generic condition. On the other hand fixing the species and adding sufficiently many complexes will destroy the deficiency zero condition since then we are in the case n-l>m, so that the possibility of general position is excluded.
The advantage of having this geometric picture is that it can often be used to read off the deficiency directly from the network. It might also be used to aid in constructing networks with a desired deficiency.
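The quantity n-l-s can also be computed directly from the network data: count the complexes, find the linkage classes as connected components of the reaction graph, and take the rank of the matrix of reaction vectors. A Python sketch of this; the function name and the example A ⇌ 2B are my own illustration.

```python
import numpy as np

def deficiency(complexes, reactions):
    """complexes: list of stoichiometric vectors, one per complex.
    reactions: list of index pairs (i, j), educt complex -> product
    complex. Returns n - l - s."""
    n = len(complexes)
    # l: linkage classes = connected components of the (undirected)
    # reaction graph, found with a simple union-find.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in reactions:
        parent[find(i)] = find(j)
    l = len({find(i) for i in range(n)})
    # s: rank of the stoichiometric matrix, whose columns are the
    # differences y_j - y_i of product and educt complexes.
    cols = [np.array(complexes[j]) - np.array(complexes[i])
            for i, j in reactions]
    s = np.linalg.matrix_rank(np.column_stack(cols)) if cols else 0
    return n - l - s

# A <-> 2B: complexes (1,0) and (0,2), so n = 2, l = 1, s = 1.
print(deficiency([(1, 0), (0, 2)], [(0, 1), (1, 0)]))  # 0
```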

Immunotherapy for cancer

September 20, 2015

A promising innovative approach to cancer therapy is to try to persuade the immune system to attack cancer cells effectively. The immune system does kill cancer cells and presumably removes many tumours which we never suspect we had. At the same time established tumours are able to successfully resist this type of attack in many cases. The idea of taking advantage of the immune system in this way is an old one but it took a long time before it became successful enough to reach the stage of an approved drug. This goal was achieved with the approval of ipilimumab for the treatment of melanoma by the FDA in 2011. This drug is a monoclonal antibody which binds the molecule CTLA4 occurring on the surface of T cells.

To explain the background to this treatment I first recall some facts about T cells. T cells are white blood cells which recognize foreign substances (antigens) in the body. The antigen binds to a molecule called the T cell receptor on the surface of the cell and this gives the T cell an activation signal. Since an inappropriate activation of the immune system could be very harmful there are built-in safety mechanisms. In order to be effective the primary activation signal has to be delivered together with a kind of certificate that action is really necessary. This is a second signal which is given via another surface molecule on the T cell, CD28. The T cell receptor only binds to an antigen when the latter is presented on the surface of another cell (an antigen-presenting cell, APC) in a groove within another molecule, an MHC molecule (major histocompatibility complex). On the surface of the APC there are under appropriate circumstances other molecules called B7.1 and B7.2 which can bind to CD28 and give the second signal. Once this has happened the activated T cell takes appropriate action. What this is depends on the type of T cell involved but for a cytotoxic T cell (one which carries the surface molecule CD8) it means that the T cell kills cells presenting the antigen. If the cell was a virus-infected cell and the antigen is derived from the virus then this is exactly what is desired. Coming back to the safety mechanisms, it is not only important that the T cell is not erroneously switched on. It is also important that when it is switched on in a justified case it should also be switched off after a certain time. Having it switched on for an unlimited time would never be justified. This is where CTLA4 comes in. This protein can bind to B7.1 and B7.2 and in fact does so more strongly than CD28. Thus it can crowd out CD28 and switch off the second signal. 
By binding to CTLA4 the antibody in ipilimumab stops it from binding to B7.1 and B7.2, thus leaving the activated T cell switched on. In some cases cancer cells present unusual antigens and become a target for T cells. The killing of these cells can be increased by blocking CTLA4 via the mechanism just explained. At this point I should say that it may not be quite clear whether this is really the mechanism by which the antibody causes tumours to shrink. Alternative possibilities are mentioned in the Wikipedia article on CTLA4.

There are various things which have contributed to my interest in this subject. One is lectures I heard in the series ‘Universität im Rathaus’ [University in the Town Hall] in Mainz last February. The speakers were Matthias Theobald and Ugur Sahin and the theme was personalized cancer medicine. The central theme of what they were talking about is one step beyond what I have just sketched. A weakness of the therapy using antibodies to CTLA4 or the related approach using antibodies to another molecule PD-1 is that they are unspecific. In other words they lead to an increase not only in the activity of the T cells specific to cancer cells but of all T cells which have been activated by some antigen. This means that serious side effects are very likely. An approach which is theoretically better but as yet in a relatively early stage of development is to produce T cells which are specific for antigens belonging to the tumour of a specific patient and for an MHC molecule of that patient capable of presenting that antigen. From the talk I had the impression that doing this requires a lot of input from bioinformatics but I was not able to understand what kind of input it is. I would like to know more about that. Coming back to CTLA4, I have been interested for some time in modelling the activation of T cells and in that context it would be natural to think about also modelling the deactivating effects of CTLA4 or PD-1. I do not know whether this has been tried.

Oscillations in the MAP kinase cascade

September 10, 2015

In a recent post I mentioned my work with Juliette Hell on the existence of oscillations in the Huang-Ferrell model for the MAP kinase cascade. We recently put our paper on the subject on arXiv. The starting point of this project was the numerical and heuristic work of Qiao et al., PLoS Comp. Biol. 3, 1819. Within their framework these authors did an extensive search of parameter space and found Hopf bifurcations and periodic solutions for many parameters. The size of the system is sufficiently large that it represents a significant obstacle to analytical investigations. One way of improving this situation is to pass to a limiting system (MM system) by a Michaelis-Menten reduction. In addition it turns out that the periodic solutions already occur in a truncated system consisting of the first two layers of the cascade. This leaves one layer with a single phosphorylation and one with a double phosphorylation. In a previous paper we had shown how to do Michaelis-Menten reduction for the truncated system. Now we have generalized this to the full cascade. In the truncated case the MM system is of dimension three, which is still quite convenient for doing bifurcation theory. Without truncation the MM system is of dimension five, which is already much more difficult. It is however possible to represent the system for the truncated cascade as a (singular) limit of that for the full cascade and thus transport information from the truncated to the full cascade.

Consider the MM system for the truncated cascade. The aim is then to find a Hopf bifurcation in a three-dimensional dynamical system with a lot of free parameters. Because of the many parameters it is not difficult to find a large class of stationary solutions. The strategy is then to linearize the right hand side of the equations about these stationary solutions and try to show that there are parameter values where a suitable bifurcation takes place. To do this we would like to control the eigenvalues of the linearization, showing that it can happen that at some point one pair of complex conjugate eigenvalues passes through the imaginary axis with non-zero velocity as a parameter is varied, while the remaining eigenvalue has non-zero real part. The behaviour of the eigenvalues can largely be controlled by the trace, the determinant and an additional Hurwitz determinant. It suffices to arrange that there is a point where the trace and the determinant are negative and the Hurwitz quantity passes through zero with non-zero velocity. This we did. A superficially similar situation is obtained by modelling an in vitro model for the MAPK cascade due to Prabakaran, Gunawardena and Sontag mentioned in a previous post in a way strictly analogous to that used for the Huang-Ferrell model. In that case the layers are in the opposite order and a crucial sign is changed. Up to now we have not been able to show the existence of a Hopf bifurcation in that system and our attempts so far suggest that there may be a real obstruction to doing so. It should be mentioned that the known necessary condition for a stable hyperbolic periodic solution, the existence of a negative feedback loop, is satisfied by this system.
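The eigenvalue control just described can be made concrete. For a 3x3 Jacobian with characteristic polynomial \lambda^3+a_1\lambda^2+a_2\lambda+a_3 (so a_1 is minus the trace and a_3 minus the determinant), the relevant Hurwitz quantity is a_1a_2-a_3, which vanishes exactly when two eigenvalues sum to zero. The following Python sketch uses a generic one-parameter family of matrices of my own invention, not the MM system itself, to show the sign change of the Hurwitz quantity at a Hopf point.

```python
import numpy as np

def hurwitz_data(M):
    """Coefficients a1, a2, a3 of the characteristic polynomial
    lambda^3 + a1*lambda^2 + a2*lambda + a3 of a 3x3 matrix M
    (a1 = -trace, a3 = -determinant), together with the Hurwitz
    quantity H = a1*a2 - a3. H = -(l1+l2)(l1+l3)(l2+l3) in terms of
    the eigenvalues, so H = 0 means two eigenvalues sum to zero;
    with a2 > 0 such a pair is purely imaginary."""
    a1, a2, a3 = np.poly(M)[1:]  # np.poly(M) = [1, a1, a2, a3]
    return a1, a2, a3, a1 * a2 - a3

# Illustrative family: eigenvalues are mu +/- i and -1, so an
# imaginary pair crosses the axis exactly at mu = 0.
def J(mu):
    return np.array([[mu, -1.0, 0.0],
                     [1.0,  mu, 0.0],
                     [0.0,  0.0, -1.0]])

for mu in (-0.1, 0.0, 0.1):
    print(mu, hurwitz_data(J(mu))[3])  # H changes sign at mu = 0
```

Near mu = 0 the trace and determinant of J(mu) are both negative, matching the sufficient conditions stated above.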

Now I will say some more about the model of Prabakaran et al. Its purpose is to obtain insight into the issue of network reconstruction. Here is a summary of some things I understood. The in vitro biological system considered in the paper is a kind of simplification of the Raf-MEK-ERK MAPK cascade. By the use of certain mutations a situation is obtained where Raf is constitutively active and where ERK can only be phosphorylated once, instead of twice as in vivo. This comes down to a system containing only the second and third layers of the MAPK cascade with the length of the third layer reduced from three to two phosphorylation states. The second layer is modelled using simple mass action (MA) kinetics with the two phosphorylation steps being treated as one while in the third layer the enzyme concentrations are included explicitly in the dynamics in a standard Michaelis-Menten way (MM-MA). The resulting mathematical model is a system of seven ODEs with three conservation laws. In the paper it is shown that for given values of the conserved quantities the system has a unique steady state. This is an application of a theorem of Angeli and Sontag. Note that this is not the same system of equations as the system analogous to that of Huang-Ferrell mentioned above.

The idea now is to vary one of the conserved quantities and monitor the behaviour of two functions x and y of the unknowns of the system at steady state. It is shown that for one choice of the conserved quantity x and y change in the same direction while for a different choice of the conserved quantity they change in opposite directions when the conserved quantity is varied. From a mathematical point of view this is not very surprising since there is no obvious reason forbidding behaviour of this kind. The significance of the result is that apparently biologists often use this type of variation in experiments to reach conclusions about causal relationships between the concentrations of different substances (activation and inhibition), which can be represented by certain signed oriented graphs. In this context ‘network reconstruction’ is the process of determining a graph of this type. The main conclusion of the paper, as I understand it, is that doing different experiments can lead to inconsistent results for this graph. Note that there is perfect agreement between the experimental results in the paper and the results obtained from the mathematical model. In a biological system if two experiments give conflicting results it is always possible to offer the explanation that some additional substance which was not included in the model is responsible for the difference. The advantage of the in vitro model is that there are no other substances which could play that role.

Models for photosynthesis, part 3

September 8, 2015

Here I continue the discussion of models for photosynthesis in two previous posts. There I described the Pettersson and Poolman models and indicated the possibility of introducing variants of these which use exclusively mass action kinetics. I call these the Pettersson-MA and Poolman-MA models. I was interested in obtaining information about the qualitative behaviour of solutions of these ODE systems. This gave rise to the MSc project of Dorothea Möhring, which she recently completed successfully. Now we have extended this work a little further and have written up the results in a paper which has just been uploaded to arXiv. The central issue is that of overload breakdown which is related to the mathematical notion of persistence. We would like to know under what circumstances a positive solution can have \omega-limit points where some concentrations vanish and, if so, which concentrations vanish in that case. It seems that there was almost no information on the latter point in the literature so that the question of what exactly overload breakdown is remained a bit nebulous. The general idea is that the Pettersson model should have a stronger tendency to undergo overload breakdown while the Poolman model should have a stronger tendency to avoid it. The Pettersson-MA and Poolman-MA models represent a simpler context to work in to start with.

For the Pettersson-MA model we were able to identify a regime in which overload breakdown takes place. This is where the initial concentrations of all sugar phosphates and inorganic phosphate in the chloroplast are sufficiently small. In that case the concentrations of all sugar phosphates tend to zero at late times with two exceptions. The concentrations of xylulose-5-phosphate and sedoheptulose-7-phosphate do not tend to zero. These results are obtained by linearizing the system around a simple stationary solution on the boundary and applying the centre manifold theorem. Another result is that if the reaction constants satisfy a certain inequality a positive solution can have no positive \omega-limit points. In particular, there are no positive stationary solutions in that case. This is proved using a Lyapunov function related to the total number of carbon atoms. In the case of the Poolman-MA model it was shown that the stationary point which was stable in the Pettersson case becomes unstable. Moreover, a quantitative lower bound for the concentration of sugar phosphates at late times is obtained. These results fit well with the intuitive picture of what should happen. Some of the results on the Poolman-MA model can be extended to analogous ones for the original Poolman model. On the other hand the task of giving a full rigorous definition of the Pettersson model was postponed for later work. The direction in which this could go has been sketched in a previous post.

There remains a lot to be done. It is possible to define a kind of hybrid model by setting k_{32}=0 in the Poolman model. It would be desirable to completely clarify the definition of the Pettersson model and then, perhaps, to show that it can be obtained as a well-behaved limiting system of the hybrid system in the sense of geometric singular perturbation theory. This might allow the dynamical properties of solutions of the different systems to be related to each other. The only result on stationary solutions obtained so far is a non-existence theorem. It would be of great interest to have positive results on the existence, multiplicity and stability of stationary solutions. A related question is that of classifying possible \omega-limit points of positive solutions where some of the concentrations are zero. This was done in part in the paper but what was not settled is whether potential \omega-limit points with positive concentrations of the hexose phosphates can actually occur. Finally, there are a lot of other models for the Calvin cycle on the market and it would be interesting to see to what extent they are susceptible to methods similar to those used in our paper.

Phosphorylation systems

September 1, 2015

In order to react to their environment living cells use signalling networks to propagate information from receptors, which are often on the surface of the cell, to the nucleus where transcription factors can change the behaviour of the cell by changing the rate of production of different proteins. Signalling networks often make use of phosphorylation systems. These are networks of proteins whose enzymatic activity is switched on or off by phosphorylation or dephosphorylation. When switched on they catalyse the (de-)phosphorylation of other proteins. The information passing through the network is encoded in the phosphate groups attached to specific amino acids in the proteins concerned. A frequently occurring example of this type of system is the MAPK cascade discussed in a previous post. There the phosphate groups are attached to the amino acids serine, threonine and tyrosine. Another type of system, which is common in bacteria, is the two-component system, where the phosphate groups are attached to histidine and aspartic acid.

There is a standard mathematical model for the MAPK cascade due to Huang and Ferrell. It consists of three layers, each of which is a simple or dual futile cycle. Numerical and heuristic investigations indicate that the Huang-Ferrell model admits periodic solutions for certain values of the parameters. Together with Juliette Hell we set out to find a rigorous proof of this fact. In the beginning we pursued the strategy of showing that there are relaxation oscillations. An important element of this is to prove that the dual futile cycle exhibits bistability, a fact which is interesting in its own right, and we were able to prove this, as has been discussed here. In the end we shifted to a different strategy in order to prove the existence of periodic solutions. The bistability proof used a quasistationary (Michaelis-Menten) reduction of the Huang-Ferrell system. It applied bifurcation theory to the Michaelis-Menten system and geometric singular perturbation theory to lift this result to the original system. To prove the existence of periodic solutions we used a similar strategy. This time we showed the presence of Hopf bifurcations in a Michaelis-Menten system and lifted those. The details are contained in a paper which is close to being finished. In the meantime we wrote a review article on phosphorylation systems. Here I want to mention some of the topics covered there.
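The quasistationary (Michaelis-Menten) reduction mentioned here can be illustrated on the basic enzymatic mechanism E + S ⇌ C → E + P, which is not the cascade itself but the building block of each layer. The following sympy sketch, with symbol names of my own choosing, sets the enzyme-substrate complex to quasi-steady state and recovers the familiar reduced rate.

```python
import sympy as sp

k1, km1, k2, E0, S, C = sp.symbols('k1 km1 k2 E0 S C', positive=True)

# Mechanism E + S <-> C -> E + P, with total enzyme E0 = E + C conserved.
# Quasi-steady state for the enzyme-substrate complex C:
#   dC/dt = k1*(E0 - C)*S - (km1 + k2)*C = 0
Cqss = sp.solve(sp.Eq(k1*(E0 - C)*S - (km1 + k2)*C, 0), C)[0]

# The reduced production rate of P is v = k2*C at quasi-steady state.
v = sp.simplify(k2 * Cqss)

# It agrees with the Michaelis-Menten rate Vmax*S/(Km + S),
# where Vmax = k2*E0 and Km = (km1 + k2)/k1:
Km = (km1 + k2) / k1
print(sp.simplify(v - k2 * E0 * S / (Km + S)))  # 0
```

In the cascade the same step is carried out for each enzyme-substrate complex, which is what reduces the truncated system to dimension three and the full system to dimension five.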

The MAPK cascade, which is the central subject of the paper, is not isolated in its natural biological context. It is connected with other biochemical reactions which can be thought of as feedback loops, positive and negative. As already mentioned, the cascade itself consists of layers which are futile cycles. The paper first reviews what is known about the dynamics of futile cycles and the stand-alone MAPK cascade. The focus is on phenomena such as multistability, sustained oscillations and (marginally) chaos and what can be proved about these things rigorously. The techniques which can be used in proofs of this kind are also reviewed. Given the theoretical results on oscillations it is interesting to ask whether these can be observed experimentally. This has been done for the Raf-MEK-ERK cascade by Shankaran et al. (Mol. Syst. Biol. 5, 332). In that paper it is found that the experimental results do not fit well to the oscillations in the isolated cascade but they can be modelled better when the cascade is embedded in a negative feedback loop. Two other aspects are also built into the models used – the translocation of ERK from the cytosol to the nucleus (which is what is actually measured) and the fact that when ERK and MEK are not fully phosphorylated they can bind to each other. It is also briefly mentioned in our paper that a negative feedback can arise through the interaction of ERK with its substrates, as explained in Liu et al. (Biophys. J. 101, 2572). For the cascade as treated in the Huang-Ferrell model with feedback added no rigorous results are known yet. (For a somewhat different system there is a result on oscillations due to Gedeon and Sontag, J. Diff. Eq. 239, 273, which uses the strategy based on relaxation oscillations.)

In our paper there is also an introduction to two-component systems. A general conclusion of the paper is that phosphorylation systems give rise to a variety of interesting mathematical problems which are waiting to be investigated. It may also be hoped that a better mathematical understanding of this subject can lead to new insights concerning the biological systems being modelled. Biological questions of interest in this context include the following. Are dynamical features of the MAPK cascade such as oscillations desirable for the encoding of information or are they undesirable side effects? To what extent do feedback loops tend to encourage the occurrence of features of this type and to what extent do they tend to suppress them? What are their practical uses, if any? If the function of the system is damaged by mutations how can it be repaired? The last question is of special interest due to the fact that many cancer cells have mutations in the Raf-MEK-ERK cascade and there have already been many attempts to overcome their negative effects using kinase inhibitors, some of them successful. A prominent example is the Raf inhibitor Vemurafenib which has been used to treat metastatic melanoma.

Rereading ‘To the Lighthouse’

August 23, 2015

There are some statements I started to believe at a certain distant time in my life and which I have continued to accept without further examination ever since. One of these is ‘the English-language author who I admire most is Virginia Woolf’. Another is obtained by replacing ‘English-language author’ by ‘author in any language’ and ‘Virginia Woolf’ by ‘Marcel Proust’. At one point in her diary Virginia Woolf writes that she has just finished reading the latest volume of ‘A la Recherche du Temps Perdu’ which had recently been published. Then she writes (I am quoting from memory here) that she despairs of ever being able to write as well as Proust. Perhaps she was being too modest at that point. Until very recently it had been a long time since I had read anything by Woolf. I was now stimulated to do so again by the fact that Eva and I were planning a trip to southern England, including a visit to St. Ives. For me that town is closely associated with Woolf and it is because of the connection to her that I was motivated to visit St. Ives when I spent some time in Cornwall several years ago. (Here I rapidly pass over the fact, without further comment, that the author with the widest popular success whose books have an association with St. Ives is Rosamunde Pilcher.) The other aspect of my first trip to Cornwall which is most distinct in my memory is missing the last bus at Land’s End and having to walk all the way back to Penzance where I was staying. We visited Land’s End again this time but since I did not want to miss the bus again I did not have time to visit the ‘Shaun the Sheep Experience’ which is running there at the moment. As a consolation, during a later visit to Shaun’s birthplace, Bristol, I saw parts of the artistic event ‘Shaun in the City’ and had my photograph taken with some of the sculptures of Shaun.

When I go on a holiday trip somewhere I often like to take a book with me which has some special connection to the place I am going. Often I have little time to actually read the book during the holiday but that does not matter. For Cornwall and, in particular, St. Ives the natural choice was ‘To the Lighthouse’. That novel is set on the Isle of Skye but it is well known that the real-life setting which inspired it (and the lighthouse of the title) was in St. Ives. This lighthouse, Godrevy Lighthouse, cost a little over seven thousand pounds to build, being finished in 1859. In 1892, on one of two visits there, the ten-year-old Virginia signed the visitors’ book. The book was sold for over ten thousand pounds in 2011. So in a sense the little girl’s signature ended up being worth more money than the lighthouse she was visiting. Of course, due to inflation, this is not a fair comparison. Looking on my bookshelves at home I was surprised to find that I do not own a copy of ‘To the Lighthouse’. On those shelves I find ‘The Voyage Out’, ‘Jacob’s Room’, ‘Moments of Being’ and ‘Between the Acts’ but neither ‘To the Lighthouse’ nor ‘The Waves’. Perhaps I never owned them and only borrowed them from libraries. I have a fairly clear memory of having borrowed ‘To the Lighthouse’ from the Kirkwall public library. I do not remember why I did so. Perhaps it was just that at that time I was omnivorously consuming almost everything I found in the literature section in that library. Or perhaps it had to do with the fact that lighthouses always had a special attraction for me. An alternative explanation for the fact that I do not own the book myself could be that I parted with it when I left behind the majority of the books I owned when I moved from Aberdeen to Munich after finishing my PhD. This was due to the practical constraint that I only took as many belongings with me as I could carry: two large suitcases and one large rucksack.
I crossed the English Channel on a ferry and I remember how hard it was to carry that luggage up the gangway due to the fact that the tide was high.
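Returning to the comparison of the lighthouse with the signature: the inflation caveat can be made concrete with a back-of-the-envelope calculation. The 3% average annual rate assumed below is purely illustrative, not a historical figure.

```python
# Rough check of the inflation caveat: what would the lighthouse's 1859
# construction cost of about 7,000 pounds be in 2011 pounds? The assumed
# 3% average annual inflation is an illustrative guess, not historical data.
cost_1859 = 7_000
years = 2011 - 1859                 # 152 years
adjusted = cost_1859 * 1.03 ** years
print(f"~{adjusted:,.0f} pounds")   # several hundred thousand pounds
```

So at any plausible average rate the lighthouse comes out far ahead of the ten thousand pounds the visitors’ book fetched.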

I find reading ‘To the Lighthouse’ now a very positive experience. Just a few paragraphs put me in a frame of mind I like. I have the feeling that I am a very different person from the one I was the first time I read it but after more than thirty years that is hardly surprising. I also feel that I am reading it in a different way from the way I did then. I find it difficult to give an objective account of what it is that I like about the book. Perhaps it is the voice of the author. I feel that if I had had the chance to talk to her I would certainly have enjoyed it even if she was perhaps not always the easiest of people to deal with. Curiously I have the impression that although I would have found it extremely interesting to meet Proust I am not sure I would have found it pleasant. So why do I think that I may be appreciating aspects of the book now which I did not last time? A concrete example is the passage where Mrs Ramsay is thinking about two things at the same time, the story she is reading to her son and the couple who are late coming home. The possibility of this is explained wonderfully by comparing it to ‘the bass … which now and then ran up unexpectedly into the melody’. I feel, although of course I cannot prove it, that I would not have paid much attention to that passage during my first reading. The differences may also be connected to the fact that I am now married. Often when I am reading a book it is as if my wife were reading it with me, over my shoulder, and this causes me to pay more attention to things which would interest her. A contrasting example is the story about Hume getting stuck in a bog. I am sure I paid attention to that during my first reading and it now conjured up a picture of how I was then, perhaps eighteen years old and still keen on philosophy. 
After a little thought it occurred to me that I knew more of the story: Hume was allegedly forced to say that he believed in God in order to persuade an old woman to pull him out. This extended version is also something I knew in that phase of my life, perhaps through my membership in the Aberdeen University philosophy society. On the other hand this story does come up (at least) two more times in the book and it is a little different from what I remember. What the woman forced him to do was to say the Lord’s Prayer.

I came back from England yesterday and although I did not have much time for reading the book while there I am on page 236 due to the head start I had by reading it before I went on the trip. The day we went to St. Ives started out rainy but the weather cleared up during the morning so that about one o’clock I was able to see Godrevy lighthouse and look at it through my binoculars. They also allowed me to enjoy good views of passing gannets and kittiwakes but I think I would have been disappointed if I had made that trip without seeing the lighthouse.

Calvin on the Calvin cycle

July 31, 2015

In a previous post I mentioned Calvin’s Nobel lecture. Now I have read it again and since I had learned a lot of things in the meantime I could profit from it in new ways. The subject of the lecture is the way in which Calvin and his collaborators discovered the mechanisms of the dark reactions of photosynthesis. This involved years of experiments which I am not qualified to discuss. What I will do here is to describe some of the major conceptual components of this work. The first step was to discover which chemical substances are involved in the process. To make this a well-defined question it is necessary to fix a boundary between those substances to be considered and others. As their name suggests the dark reactions can take place in the dark and to start with the process was studied in the dark. It seems, however, that this did not lead to very satisfactory results and this led to a change of strategy. The dark reactions also take place in the light and the idea was to look at a steady state situation where photosynthesis is taking place in the presence of light. The dark reactions incorporate carbon dioxide into carbohydrates and the aim was to find the mechanism by which this occurs. At the end of the Second World War, when this work was done, carbon 14 had just become much more easily available due to the existence of nuclear reactors. Calvin also mentions that when doing difficult separations of compounds in his work on photosynthesis he used things he had learned when separating plutonium during the war. Given a steady state situation with ordinary carbon dioxide the radioactive form of the gas containing carbon 14 could be introduced. The radioactive carbon atoms became incorporated into some of the organic compounds in the plants used. (The principal subject of the experiment was the green alga Chlorella.) In fact the radioactive carbon atoms turned up in too many compounds: the boundary had been fixed too widely. 
This was improved on by looking at what happened on sufficiently short time scales after the radioactive gas had been added, of the order of a few seconds. After this time the process was stopped, leading to a snapshot of the chemical concentrations. This meant that the labelled carbon had not had time to propagate too far through the system and was only found in a relatively small number of compounds. The compounds were separated by two-dimensional chromatography and those which were radioactive were located by the black spots they caused on photographic film. Calvin remarks ironically that the apparatus they were using did not label the spots with the names of the compounds giving rise to them. It was thus necessary to extract those compounds and analyse them by all sorts of techniques which I know very little about. It took about ten years. In any case, the endpoint of this process was the first major conceptual step: a set of relevant compounds had been identified. These are the carbon compounds which are involved in the reactions leading from the point where carbon dioxide enters the system and before too much of the carbon has been transferred to other systems connected to the Calvin cycle. While reading the text of the lecture I also had a modern picture of the reaction network in front of me and this was useful for understanding the significance of the elements of the story being told. From the point of view of the mathematician this step corresponds to determining the nodes of the reaction network. It remains to find out which compounds react with which others, and with which stoichiometry.
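The state of knowledge at this stage can be sketched as follows: a list of species (the nodes of the network) together with a stoichiometric matrix encoding the reactions, once those have been found. The two reactions below are a lumped, hypothetical caricature chosen for brevity, not Calvin’s actual network.

```python
# The mathematician's view of this stage: the identified compounds are the
# nodes; what remains to be determined is the edge structure, i.e. which
# compounds react with which others and with which stoichiometry.
species = ["CO2", "RuBP", "PGA", "G3P"]

# A hypothetical partial answer, encoded as a stoichiometric matrix:
# rows = species, columns = reactions, entries = net production (+)
# or consumption (-).
reactions = {
    "carboxylation": {"CO2": -1, "RuBP": -1, "PGA": 2},  # CO2 + RuBP -> 2 PGA
    "reduction":     {"PGA": -1, "G3P": 1},              # PGA -> G3P (lumped)
}
S = [[r.get(sp, 0) for r in reactions.values()] for sp in species]
for sp, row in zip(species, S):
    print(f"{sp:>4}: {row}")
```

The matrix S is exactly the object one needs in order to write down the reaction network as a dynamical system later on.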

In looking for the reactions one useful source of information is the following. The carbon atoms in a given substance involved in the cycle are not equivalent to each other. By suitable experiments it can be decided which are the first carbon atoms to become radioactive. For instance, a compound produced in relatively large amounts right at the beginning of the process is phosphoglyceric acid (PGA) and it is found that the carbon in the carboxyl group is the one which becomes radioactive first. The other two carbons become radioactive at a common later time. This type of information provides suggestions for possible reaction mechanisms. Another type of input is obtained by simply counting carbon atoms in potential reactions. For instance, if the three-carbon compound PGA is to be produced from a precursor by the addition of carbon dioxide then the simple arithmetic relation 3=1+2 indicates that there might be a precursor molecule with two carbons. However this molecule was never found and it turns out that the relevant arithmetic is 2\times 3=1+5. The reaction produces two molecules of PGA from a precursor with five carbon atoms, ribulose bisphosphate (RuBP). Combining the information about the order in which the carbon atoms were incorporated with the arithmetic considerations allowed a large part of the network to be reconstructed. Nevertheless the nature of one key step, that in which carbon dioxide is incorporated into PGA, remained unclear. Further progress required a different type of experiment.
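The carbon-counting argument amounts to simple bookkeeping, which can be written out as follows. Note that both candidate reactions below are arithmetically balanced; as described above, it was the failure to find any two-carbon precursor experimentally, not the arithmetic, which ruled the first one out. The species list (including the hypothetical two-carbon precursor) is invented for this illustration.

```python
# Carbon bookkeeping for candidate reactions: 3 = 1 + 2 would require a
# two-carbon precursor (never found), while 2 x 3 = 1 + 5 corresponds to
# two PGA from CO2 plus the five-carbon RuBP.
carbons = {"CO2": 1, "PGA": 3, "RuBP": 5, "unknown_C2": 2}

def balanced(reactants, products):
    """Check that carbon atoms are conserved across a candidate reaction.
    Arguments are lists of species names, with repeats for stoichiometry."""
    return sum(carbons[s] for s in reactants) == sum(carbons[s] for s in products)

print(balanced(["CO2", "unknown_C2"], ["PGA"]))   # True: 1 + 2 = 3
print(balanced(["CO2", "RuBP"], ["PGA", "PGA"]))  # True: 1 + 5 = 2 x 3
print(balanced(["CO2", "RuBP"], ["PGA"]))         # False: 6 != 3
```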

The measurements used up to now are essentially measurements of concentrations at one time point (or very few time points). The last major step was taken using measurements of the dynamics. Here the concentrations of selected substances are determined at sufficiently many time points so as to get a picture of the time evolution of concentrations in certain circumstances. The idea is to first take measurements of PGA and RuBP in conditions of constant light. These concentrations are essentially time-independent. Then the light is switched off. It is seen that the concentration of PGA increases rapidly (it more than doubles within a minute) while that of RuBP rapidly decreases on the same time scale. This gives evidence that at steady state RuBP is being converted to PGA. This completes the picture of the reaction network. Further confirmation that the picture is correct is obtained by experiments where the amount of carbon dioxide available is suddenly reduced and the resulting transients in various concentrations monitored.
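A minimal caricature of the light-switch-off experiment shows why these transients point to RuBP being converted to PGA: carboxylation of RuBP continues in the dark, while the light-dependent steps which consume PGA and regenerate RuBP stop. All rate constants and the model structure below are invented for illustration; this is not a quantitative model of the Calvin cycle.

```python
# Toy model of the light-off experiment. Under light the system sits at a
# steady state; switching the light off removes the PGA-consuming and
# RuBP-regenerating reactions, so PGA rises and RuBP falls.

def simulate(light_off_at=5.0, t_end=10.0, dt=0.001):
    k_carb, k_red, k_regen = 1.0, 2.0, 1.0  # carboxylation, reduction, regeneration
    pga, rubp = 1.0, 1.0                    # steady state values under light
    for i in range(int(t_end / dt)):
        light = 1.0 if i * dt < light_off_at else 0.0
        dpga = 2 * k_carb * rubp - light * k_red * pga   # CO2 + RuBP -> 2 PGA
        drubp = light * k_regen * pga - k_carb * rubp
        pga += dpga * dt
        rubp += drubp * dt
    return pga, rubp

pga_end, rubp_end = simulate()
print(pga_end, rubp_end)   # PGA has risen well above 1, RuBP has almost vanished
```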

Reaction networks in Copenhagen

July 9, 2015

Last week I attended a workshop on reaction network theory organized by Elisenda Feliu and Carsten Wiuf. It took place in Copenhagen from 1st to 3rd July. I flew in late on the Tuesday evening and on arrival I had a pleasant feeling of being in the north just due to the amount and quality of the light. Looking at the weather information for Mainz I was glad I had got a reduction in temperature of several degrees by making this trip. A lot of comments and extra information on the talks at this conference can be found on the blog of John Baez and that of Matteo Polettini. Now, on my own slower time scale, I will write a bit about things I heard at the conference which I found particularly interesting. The topic of different time scales is very relevant to the theme of the meeting and the first talk, by Sebastian Walcher, was concerned with it. Often a dynamical system of interest can be thought of as containing a small parameter and letting this parameter tend to zero leads to a smaller system which may be easier to analyse. Information obtained in this way may be transported back to the original system. If the parameter is a ratio of time scales then the limit will be singular. The issue discussed in the talk is that of finding a suitable small parameter in a system when one is suspected. It is probably unreasonable to expect to find a completely general method but the talk presented algorithms which can contribute to solving this type of problem.
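The kind of reduction at issue can be illustrated with a toy fast-slow system, x' = -y together with eps y' = x - y: setting eps = 0 gives the quasistationary relation y = x and the reduced equation x' = -x. The sketch below, with invented numbers, checks numerically that for small eps the full system stays close to the reduced one.

```python
import math

# Toy singular perturbation example: x' = -y (slow), eps * y' = x - y (fast).
# For eps -> 0 the fast variable relaxes to y = x and the slow dynamics
# reduce to x' = -x. The system and numbers are purely illustrative.

def simulate_full(eps, t_end=2.0, dt=1e-4):
    x, y = 1.0, 1.0   # start (approximately) on the slow manifold y = x
    for _ in range(int(t_end / dt)):
        dx = -y
        dy = (x - y) / eps
        x += dx * dt
        y += dy * dt
    return x

x_full = simulate_full(eps=0.01)
x_reduced = math.exp(-2.0)   # the reduced system x' = -x gives x(2) = e^(-2)
print(x_full, x_reduced)     # close for small eps
```

Geometric singular perturbation theory is what turns this kind of numerical closeness into rigorous statements about the full system.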

In the second talk Gheorghe Craciun presented his proof of the global attractor conjecture, which I have mentioned in a previous post. I was intrigued by one comment he made relating the concept of deficiency zero to systems in general position. Later he explained this to me and I will say something about the direction in which this goes. The concept of deficiency is central in chemical reaction network theory but I never found it very intuitive and I feel safe in claiming that I am in good company as far as that is concerned. Gheorghe’s idea is intended to improve this state of affairs by giving the deficiency a geometric interpretation. In this context it is worth mentioning that there are two definitions of deficiency on the market. I had heard this before but never looked at the details. I was reminded of it by the talk of Jeanne Marie Onana Eloundou-Mbebi in Copenhagen, where it played an important role. She was talking about absolute concentration robustness. The latter concept was also the subject of the talk of Dave Anderson, who was looking at the issue of whether the known results on ACR for deterministic reaction networks hold in some reasonable sense in the stochastic case. The answer seems to be that they do not. But now I return to the question of how the deficiency is defined. Here I use the notation \delta for the deficiency as originally defined by Feinberg. The alternative, which can be found in Jeremy Gunawardena’s text with the title ‘Chemical reaction network theory for in silico biologists’ will be denoted by \delta'. Gunawardena, who seems to find the second definition more natural, proves that the two quantities are equal provided a certain condition holds (each linkage class contains precisely one terminal strong linkage class). This condition is, in particular, satisfied for weakly reversible networks and this is perhaps the reason that the difference in definitions is not often mentioned in the literature. 
In general \delta\ge\delta', so that deficiency zero in the sense of the common definition implies deficiency zero in the sense of the other definition.
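For concreteness, here is a small computation of the deficiency in the sense of Feinberg’s original definition, \delta = n - l - s: the number of complexes minus the number of linkage classes minus the rank of the stoichiometric subspace. The example networks are my own, chosen for brevity.

```python
from fractions import Fraction

def deficiency(species, complexes, reactions):
    """Feinberg deficiency delta = n - l - s. Complexes are dicts of species
    counts; reactions are (reactant, product) pairs of complex indices."""
    n = len(complexes)
    # Linkage classes: connected components of the undirected complex graph.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for a, b in reactions:
        parent[find(a)] = find(b)
    l = len({find(i) for i in range(n)})
    # Rank of the reaction vectors (product minus reactant complex), by
    # exact Gaussian elimination over the rationals.
    rows = [[Fraction(complexes[b].get(sp, 0) - complexes[a].get(sp, 0))
             for sp in species] for a, b in reactions]
    s = 0
    for c in range(len(species)):
        piv = next((i for i in range(s, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[s], rows[piv] = rows[piv], rows[s]
        for i in range(len(rows)):
            if i != s and rows[i][c] != 0:
                f = rows[i][c] / rows[s][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[s])]
        s += 1
    return n - l - s

# Network 1: A <-> B, A+B <-> C  (n = 4, l = 2, s = 2, delta = 0)
species = ["A", "B", "C"]
complexes = [{"A": 1}, {"B": 1}, {"A": 1, "B": 1}, {"C": 1}]
reactions = [(0, 1), (1, 0), (2, 3), (3, 2)]
print(deficiency(species, complexes, reactions))   # 0

# Network 2: 2A -> A+B -> 2B -> 2A  (n = 3, l = 1, s = 1, delta = 1)
print(deficiency(["A", "B"],
                 [{"A": 2}, {"A": 1, "B": 1}, {"B": 2}],
                 [(0, 1), (1, 2), (2, 0)]))        # 1
```

Both example networks are weakly reversible, so for them the two definitions of deficiency agree.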

For a long time I knew very little about control theory. The desire to change this motivated me to give a course on the subject in the last winter semester, using the excellent textbook of Eduardo Sontag as my main source. Since then I had not taken the time to look back on what I learned in the course of doing this, and what I had gained became clear to me only now. In Copenhagen Nicolette Meshkat gave a talk on identifiability in reaction networks. I had heard her give a talk on a similar subject at the SIAM life science conference last summer and not understood much. I am sure that this was not her fault but mine. This time around things were suddenly clear. The reason is that this subject involves ideas coming from control theory and through giving the course I had learned to think in some new directions. The basic idea of identifiability is to extract information on the parameters of a dynamical system from the input-output relation.

There was another talk with a lot of control theory content by Mustafa Khammash. He had brought some machines with him to illustrate some of the ideas. These were made of Lego, driven by computers and communicating with each other via bluetooth devices. One of these was a physical realization of one of the favourite simple examples in control theory, stabilization of the inverted pendulum. Another was a robot programmed to come to rest 30 cm in front of a barrier facing it. Next he talked about an experiment coupling living cells to a computer to form a control system. The output from a population of cells was read by a combination of GFP labelling and a FACS machine. After processing of the signal the resulting input was applied by stimulating the cells with light. This got a lot of media attention under the name ‘cyborg yeast’. After that he talked about a project in which programmes can be incorporated into the cells themselves using plasmids. In one of the last remarks in his talk he mentioned how cows use integral feedback to control the calcium concentration in their bodies. I think it would be nice to incorporate this into popular talks or calculus lectures in the form ‘cows can do integrals’ or ‘cows can solve differential equations’. The idea would be to have a striking example of what the abstract things done in calculus courses have to do with the real world.
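The point about integral feedback can be illustrated with a scalar toy model: a regulated quantity subject to a constant disturbance, controlled either proportionally or by integrating the error. The model and all numbers are invented; the point is only the principle that integral feedback removes the steady-state error while proportional feedback does not.

```python
# Toy comparison of proportional and integral feedback for a scalar
# "calcium level" x with dynamics x' = u - x + disturbance, where u is
# the control input. All parameters are illustrative.

def simulate(controller, t_end=200.0, dt=0.01, setpoint=1.0, disturbance=0.3):
    x = 0.0   # regulated quantity
    z = 0.0   # integrator state, used only by the integral controller
    for _ in range(int(t_end / dt)):
        err = setpoint - x
        if controller == "proportional":
            u = 2.0 * err
        else:                  # integral feedback
            z += err * dt      # z(t) = integral of the error up to time t
            u = z
        x += (u - x + disturbance) * dt
    return x

x_p = simulate("proportional")
x_i = simulate("integral")
print(x_p, x_i)   # proportional leaves a steady-state offset; integral reaches 1
```

At a steady state of the integral controller the error must be exactly zero, since otherwise the integrator would keep changing; this is the sense in which the cow’s integral achieves perfect adaptation.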

My talk at the conference was on phosphorylation systems and interestingly there was another talk there, by Andreas Weber, which had a possibly very significant relation to this. I only became aware of the existence of the corresponding paper (Errami et al., J. Comp. Phys. 291, 279) a few weeks ago and since it involves a lot of techniques I am not too familiar with and has a strong computer science component I have only had a limited opportunity to understand it. I hope to get deeper into it soon. It concerns a method of finding Hopf bifurcations.

This conference was a great chance to maintain and extend my contacts in the community working on reaction networks and to get various types of inside information on the field.

