## Rereading ‘To the Lighthouse’

August 23, 2015

There are some statements I started to believe at a certain distant time in my life and which I have continued to accept without further examination ever since. One of these is ‘the English-language author whom I admire most is Virginia Woolf’. Another is obtained by replacing ‘English-language author’ by ‘author in any language’ and ‘Virginia Woolf’ by ‘Marcel Proust’. At one point in her diary Virginia Woolf writes that she has just finished reading the latest volume of ‘À la Recherche du Temps Perdu’ which had recently been published. Then she writes (I am quoting from memory here) that she despairs of ever being able to write as well as Proust. Perhaps she was being too modest at that point. Until very recently a long time had passed since I had last read anything by Woolf. I was now stimulated to do so again by the fact that Eva and I were planning a trip to southern England, including a visit to St. Ives. For me that town is closely associated with Woolf and it is because of the connection to her that I was motivated to visit St. Ives when I spent some time in Cornwall several years ago. (Here I rapidly pass over the fact, without further comment, that the author with the widest popular success whose books have an association with St. Ives is Rosamunde Pilcher.) The other aspect of my first trip to Cornwall which is most distinct in my memory is missing the last bus at Land’s End and having to walk all the way back to Penzance where I was staying. We visited Land’s End again this time but since I did not want to miss the bus again I did not have time to visit the ‘Shaun the Sheep Experience’ which is running there at the moment. As a consolation, during a later visit to Shaun’s birthplace, Bristol, I saw parts of the artistic event ‘Shaun in the City’ and had my photograph taken with some of the sculptures of Shaun.

When I go on a holiday trip somewhere I often like to take a book with me which has some special connection to the place I am going. Often I have little time to actually read the book during the holiday but that does not matter. For Cornwall and, in particular, St. Ives the natural choice was ‘To the Lighthouse’. That novel is set on the Isle of Skye but it is well known that the real-life setting which inspired it (and the lighthouse of the title) was in St. Ives. This lighthouse, Godrevy Lighthouse, cost a little over seven thousand pounds to build, being finished in 1859. In 1892, on one of two visits there, the ten-year-old Virginia signed the visitors’ book. That visitors’ book was sold for over ten thousand pounds in 2011. So in a sense the little girl’s signature ended up being worth more money than the lighthouse she was visiting. Of course, due to inflation, this is not a fair comparison. Looking on my bookshelves at home I was surprised to find that I do not own a copy of ‘To the Lighthouse’. On those shelves I find ‘The Voyage Out’, ‘Jacob’s Room’, ‘Moments of Being’ and ‘Between the Acts’ but neither ‘To the Lighthouse’ nor ‘The Waves’. Perhaps I never owned them and only borrowed them from libraries. I have a fairly clear memory of having borrowed ‘To the Lighthouse’ from the Kirkwall public library. I do not remember why I did so. Perhaps it was just that at that time I was omnivorously consuming almost everything I found in the literature section in that library. Or perhaps it had to do with the fact that lighthouses always had a special attraction for me. An alternative explanation for the fact that I do not own the book myself could be that I parted with it when I left behind the majority of the books I owned when I moved from Aberdeen to Munich after finishing my PhD. This was due to the practical constraint that I only took as many belongings with me as I could carry: two large suitcases and one large rucksack.
I crossed the English Channel on a ferry and I remember how hard it was to carry that luggage up the gangway due to the fact that the tide was high.

I came back from England yesterday and although I did not have much time for reading the book while there I am on page 236 due to the head start I had by reading it before I went on the trip. The day we went to St. Ives started out rainy but the weather cleared up during the morning so that about one o’clock I was able to see Godrevy lighthouse and look at it through my binoculars. They also allowed me to enjoy good views of passing gannets and kittiwakes but I think I would have been disappointed if I had made that trip without seeing the lighthouse.

## Calvin on the Calvin cycle

July 31, 2015

In looking for the reactions, one useful source of information is the following. The carbon atoms in a given substance involved in the cycle are not equivalent to each other. By suitable experiments it can be decided which are the first carbon atoms to become radioactive. For instance, a compound produced in relatively large amounts right at the beginning of the process is phosphoglyceric acid (PGA) and it is found that the carbon in the carboxyl group is the one which becomes radioactive first. The other two carbons become radioactive at a common later time. This type of information provides suggestions for possible reaction mechanisms. Another type of input is obtained by simply counting carbon atoms in potential reactions. For instance, if the three-carbon compound PGA is to be produced from a precursor by the addition of carbon dioxide then the simple arithmetic relation $3=1+2$ indicates that there might be a precursor molecule with two carbons. However this molecule was never found and it turns out that the relevant arithmetic is $2\times 3=1+5$. The reaction produces two molecules of PGA from a precursor with five carbon atoms, ribulose bisphosphate (RuBP). Combining the information about the order in which the carbon atoms were incorporated with the arithmetic considerations allowed a large part of the network to be reconstructed. Nevertheless the nature of one key step, that in which carbon dioxide is incorporated into PGA, remained unclear. Further progress required a different type of experiment.
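The carbon arithmetic can be phrased as a mechanical bookkeeping check. Here is a trivial sketch (carbon counts only, with the species identities suppressed; the representation of reactions is my own, not anything from Calvin's work):

```python
# Check carbon conservation for candidate reactions in the cycle.
# Each side of a reaction is a list of (multiplicity, carbon count) pairs.

def carbons(side):
    """Total number of carbon atoms on one side of a reaction."""
    return sum(count * atoms for count, atoms in side)

def balanced(inputs, outputs):
    return carbons(inputs) == carbons(outputs)

# Rejected hypothesis: a 2-carbon precursor plus CO2 gives one PGA (3 C).
# Arithmetically fine (3 = 1 + 2), but the 2-carbon precursor was never found.
assert balanced([(1, 2), (1, 1)], [(1, 3)])

# Actual carboxylation: RuBP (5 C) + CO2 (1 C) -> 2 PGA, i.e. 1 + 5 = 2 x 3.
assert balanced([(1, 5), (1, 1)], [(2, 3)])
```

Of course this only filters candidates; it cannot distinguish between different mechanisms with the same carbon count, which is why the labelling-order data was needed as well.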

The measurements used up to now are essentially measurements of concentrations at one time point (or very few time points). The last major step was taken using measurements of the dynamics. Here the concentrations of selected substances are determined at sufficiently many time points so as to get a picture of the time evolution of concentrations in certain circumstances. The idea is to first take measurements of PGA and RuBP in conditions of constant light. These concentrations are essentially time-independent. Then the light is switched off. It is seen that the concentration of PGA increases rapidly (it more than doubles within a minute) while that of RuBP rapidly decreases on the same time scale. This gives evidence that at steady state RuBP is being converted to PGA. This completes the picture of the reaction network. Further confirmation that the picture is correct is obtained by experiments where the amount of carbon dioxide available is suddenly reduced and the resulting transients in various concentrations monitored.
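A toy two-species model reproduces the qualitative transient (the rate constants and the form of the model are invented for illustration; only the signs of the responses matter). Carboxylation RuBP → 2 PGA continues in the dark, while the light-driven regeneration PGA → RuBP stops, so PGA rises and RuBP falls:

```python
# Toy model of the light-off experiment: forward Euler integration of
#   RuBP' = light * k_regen * PGA - k_carb * RuBP
#   PGA'  = 2 * k_carb * RuBP - light * k_regen * PGA - k_export * PGA
# With all constants equal to 1 the state (1, 1) is a steady state in the light.

def simulate(light, steps=500, dt=0.001):
    k_carb, k_regen, k_export = 1.0, 1.0, 1.0
    rubp, pga = 1.0, 1.0                 # steady-state values under constant light
    for _ in range(steps):
        v_carb = k_carb * rubp           # RuBP + CO2 -> 2 PGA (runs in the dark)
        v_regen = light * k_regen * pga  # PGA -> RuBP (needs light)
        v_export = k_export * pga
        rubp += dt * (v_regen - v_carb)
        pga += dt * (2 * v_carb - v_regen - v_export)
    return rubp, pga

print(simulate(light=1.0))   # stays at the steady state (1.0, 1.0)
print(simulate(light=0.0))   # in the dark PGA has risen and RuBP has fallen
```

This is of course a caricature of the real network, but it shows why the observed transient is evidence for a steady-state conversion of RuBP into PGA.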

## Reaction networks in Copenhagen

July 9, 2015

Last week I attended a workshop on reaction network theory organized by Elisenda Feliu and Carsten Wiuf. It took place in Copenhagen from 1st to 3rd July. I flew in late on the Tuesday evening and on arrival I had a pleasant feeling of being in the north just due to the amount and quality of the light. Looking at the weather information for Mainz I was glad I had got a reduction in temperature of several degrees by making this trip. A lot of comments and extra information on the talks at this conference can be found on the blog of John Baez and that of Matteo Polettini. Now, on my own slower time scale, I will write a bit about things I heard at the conference which I found particularly interesting. The topic of different time scales is very relevant to the theme of the meeting and the first talk, by Sebastian Walcher, was concerned with it. Often a dynamical system of interest can be thought of as containing a small parameter and letting this parameter tend to zero leads to a smaller system which may be easier to analyse. Information obtained in this way may be transported back to the original system. If the parameter is a ratio of time scales then the limit will be singular. The issue discussed in the talk is that of finding a suitable small parameter in a system when one is suspected. It is probably unreasonable to expect to find a completely general method but the talk presented algorithms which can contribute to solving this type of problem.

In the second talk Gheorghe Craciun presented his proof of the global attractor conjecture, which I have mentioned in a previous post. I was intrigued by one comment he made relating the concept of deficiency zero to systems in general position. Later he explained this to me and I will say something about the direction in which this goes. The concept of deficiency is central in chemical reaction network theory but I never found it very intuitive and I feel safe in claiming that I am in good company as far as that is concerned. Gheorghe’s idea is intended to improve this state of affairs by giving the deficiency a geometric interpretation. In this context it is worth mentioning that there are two definitions of deficiency on the market. I had heard this before but never looked at the details. I was reminded of it by the talk of Jeanne Marie Onana Eloundou-Mbebi in Copenhagen, where it played an important role. She was talking about absolute concentration robustness. The latter concept was also the subject of the talk of Dave Anderson, who was looking at the issue of whether the known results on ACR for deterministic reaction networks hold in some reasonable sense in the stochastic case. The answer seems to be that they do not. But now I return to the question of how the deficiency is defined. Here I use the notation $\delta$ for the deficiency as originally defined by Feinberg. The alternative, which can be found in Jeremy Gunawardena’s text with the title ‘Chemical reaction network theory for in silico biologists’ will be denoted by $\delta'$. Gunawardena, who seems to find the second definition more natural, proves that the two quantities are equal provided a certain condition holds (each linkage class contains precisely one terminal strong linkage class). This condition is, in particular, satisfied for weakly reversible networks and this is perhaps the reason that the difference in definitions is not often mentioned in the literature. 
In general $\delta\ge\delta'$, so that deficiency zero in the sense of the common definition implies deficiency zero in the sense of the other definition.
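Feinberg's deficiency $\delta=n-\ell-s$ (the number of complexes minus the number of linkage classes minus the rank of the stoichiometric subspace) can at least be computed mechanically, even if that does not by itself make it intuitive. A minimal sketch (the example network $A+B \rightleftharpoons C \to D$ is my own, not one from the talks):

```python
def rank(mat, tol=1e-9):
    """Matrix rank by Gaussian elimination (floats suffice for small examples)."""
    m = [row[:] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > tol:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def deficiency(complexes, reactions):
    """Feinberg deficiency n - l - s.
    complexes: list of species-coefficient tuples; reactions: (i, j) index pairs."""
    n = len(complexes)
    # linkage classes = connected components of the undirected reaction graph
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in reactions:
        parent[find(i)] = find(j)
    l = len({find(i) for i in range(n)})
    # s = rank of the span of the reaction vectors (product minus reactant complex)
    vecs = [[b - a for a, b in zip(complexes[i], complexes[j])]
            for i, j in reactions]
    return n - l - rank(vecs)

# A + B <-> C, C -> D over the species ordering (A, B, C, D)
cplx = [(1, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
print(deficiency(cplx, [(0, 1), (1, 0), (1, 2)]))  # prints 0
```

Here $n=3$, $\ell=1$ and $s=2$, so the deficiency is zero.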

For a long time I knew very little about control theory. The desire to change this motivated me to give a course on the subject in the last winter semester, using the excellent textbook of Eduardo Sontag as my main source. Since then I had never taken the time to look back on what I learned in the course of doing this, and it only became clear to me now. In Copenhagen Nicolette Meshkat gave a talk on identifiability in reaction networks. I had heard her give a talk on a similar subject at the SIAM life science conference last summer and not understood much. I am sure that this was not her fault but mine. This time around things were suddenly clear. The reason is that this subject involves ideas coming from control theory and through giving the course I had learned to think in some new directions. The basic idea of identifiability is to extract information on the parameters of a dynamical system from the input-output relation.

There was another talk with a lot of control theory content by Mustafa Khammash. He had brought some machines with him to illustrate some of the ideas. These were made of Lego, driven by computers and communicating with each other via bluetooth devices. One of these was a physical realization of one of the favourite simple examples in control theory, stabilization of the inverted pendulum. Another was a robot programmed to come to rest 30 cm in front of a barrier facing it. Next he talked about an experiment coupling living cells to a computer to form a control system. The output from a population of cells was read by a combination of GFP labeling and a FACS machine. After processing of the signal the resulting input was applied by stimulating the cells with light. This got a lot of media attention under the name ‘cyborg yeast’. After that he talked about a project in which programmes can be incorporated into the cells themselves using plasmids. In one of the last remarks in his talk he mentioned how cows use integral feedback to control the calcium concentration in their bodies. I think it would be nice to incorporate this into popular talks or calculus lectures in the form ‘cows can do integrals’ or ‘cows can solve differential equations’. The idea would be to have a striking example of what the abstract things done in calculus courses have to do with the real world.
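The point about integral feedback can be made concrete in a few lines (all numbers invented; the plant is a generic first-order system, nothing to do with calcium physiology specifically): an integral controller drives the steady-state error to zero even in the presence of a constant disturbance, which is the property the cow exploits.

```python
# Integral feedback on a first-order plant x' = u + d - x, with control
# u = ki * integral of (setpoint - x).  The integral term guarantees that
# the only possible steady state has x equal to the setpoint, whatever
# the constant disturbance d is.

def run(setpoint=2.0, disturbance=-0.5, ki=1.0, steps=5000, dt=0.01):
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += dt * error                        # the 'cow integral'
        x += dt * (ki * integral + disturbance - x)   # plant dynamics
    return x

print(run())   # settles at the setpoint 2.0 despite the disturbance
```

A proportional controller alone would settle at an offset determined by the disturbance; it is precisely the integration that removes the offset.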

My talk at the conference was on phosphorylation systems and interestingly there was another talk there, by Andreas Weber, which had a possibly very significant relation to this. I only became aware of the existence of the corresponding paper (Errami et al., J. Comput. Phys. 291, 279) a few weeks ago and since it involves a lot of techniques I am not too familiar with and has a strong computer science component I have only had a limited opportunity to understand it. I hope to get deeper into it soon. It concerns a method of finding Hopf bifurcations.

This conference was a great chance to maintain and extend my contacts in the community working on reaction networks and get various types of inside information on the field.

## Computer-assisted proofs

June 26, 2015

My spontaneous reaction to a computer-assisted proof is to regard it as having a lesser status than one done by hand. Here I want to consider why I feel this way and if and under what circumstances this point of view is justified. I start by considering the situation of a traditional mathematical proof, done by hand and documented in journal articles or books. In this context it is impossible to write out all details of the proof. Somehow the aim is to bridge the gap between the new result and what experts in the area are already convinced of. This general difficulty becomes more acute when the proof is very long and parts of it are quite repetitive. There is the tendency to say that the next step is strictly analogous to the previous one and if the next step is written out there is the tendency for the reader to think that it is strictly analogous and to gloss over it. Human beings (a class which includes mathematicians) make mistakes and have a limited capacity to concentrate. To sum up, a traditional proof is never perfect and very long and repetitive proofs are likely to be less so than others. So what is it that often makes a traditional proof so convincing? I think that in the end it is its embedding in a certain context. An experienced mathematician has met with countless examples of proofs, his own and those of others, errors large and small in those proofs and how they can often be repaired. This is complemented by experience of the interactions between different mathematicians and their texts. These things give a basis for judging the validity of a proof which is by no means exclusively on the level of explicit logical argumentation.

Perhaps the most famous computer-assisted proof is that of the four colour theorem by Appel and Haken. To fill myself in on the background on this subject I read the book ‘Four colors suffice’ by Robin Wilson. The original problem is to colour a map in such a way that no two countries with a common border have the same colour. The statement of the theorem is that it is always possible with four colours. This statement can be reformulated as a question in graph theory. Here I am not interested in the details of how this reformulation is carried out. The intuitive idea is to associate a vertex to each country and an edge to each common border. Then the problem becomes that of colouring the vertices of a planar graph in such a way that no two adjacent vertices have the same colour. From now on I take this graph-theoretic statement as basic. (Unfortunately it is in fact not just a graph-theoretic, in particular combinatorial, statement since we are talking about planar graphs.) What I am interested in is not so much the problem itself as what it can teach us about computer-assisted proofs in general. I found the book of Wilson very entertaining but I was disappointed by the fact that he consistently avoids going over to the graph-theoretic formulation which I find more transparent (that word again). In an article by Robin Thomas (Notices of the AMS, 44, 848) he uses the graph-theoretic formulation more but I still cannot say I understood the structure of the proof on the coarsest scale. Thomas does write that in his own simplified version of the original proof the contribution of computers only involves integer arithmetic. Thus this proof does seem to belong to the category of things I said above I would tend to accept as a mathematical proof, modulo the fact that I would have to invest the time and effort to understand the algorithm. There is also a ‘computer-checked proof’ of the four colour theorem by Georges Gonthier. 
I found this text interesting to look at but felt as if I was quickly getting into logical deep waters. I do not really understand what is going on there.
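The graph-theoretic statement is at least easy to check by brute force for tiny graphs. A sketch (this has nothing to do with the structure of the actual proof, which must handle all planar graphs at once):

```python
from itertools import product

def four_colourable(n_vertices, edges):
    """Brute-force search for a proper 4-colouring of a small graph."""
    for colouring in product(range(4), repeat=n_vertices):
        if all(colouring[u] != colouring[v] for u, v in edges):
            return True
    return False

# K4, the complete graph on 4 vertices, is planar and needs all four colours.
k4_edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(four_colourable(4, k4_edges))   # True

# K5 is not planar, and indeed it admits no proper 4-colouring.
k5_edges = [(u, v) for u in range(5) for v in range(u + 1, 5)]
print(four_colourable(5, k5_edges))   # False
```

The whole difficulty of the theorem is of course that no such exhaustive search over all planar graphs is possible, which is where the reducible configurations and the computer come in.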

To sum up this discussion, I am afraid that in the end the four colour problem was not the right example for me to start with and that I need to take some other example which is closer to mathematical topics which I know better and perhaps also further from having been formalized and documented.

## Models for photosynthesis, part 2

May 22, 2015

In my previous post on this subject I discussed the question of the status of the variables in the Poolman model of photosynthesis and in the end I was convinced that I had understood which concentrations are to be considered as dynamical unknowns and which as constants. The Poolman model is a modified version of the Pettersson model and the corresponding questions about the nature of the variables have the same answers in both cases. What I am calling the Pettersson model was introduced in a paper of Pettersson and Ryde-Pettersson (Eur. J. Biochem 175, 661) and there the description of the variables and the equations is rather complete and comprehensible. Now I will go on to consider the second question raised in the previous post, namely what the evolution equations are. The evolution equations in the Poolman model are modifications of those in the Pettersson model and are described relative to those in the original paper on the former model. For this reason I will start by describing the equations for the Pettersson model. As a preparation for that I will treat a side issue. In a reaction network a reaction whose rate depends only on the concentrations of the substances consumed in the reaction is sometimes called NAC (for non-autocatalytic). For instance this terminology is used in the paper of Kaltenbach quoted in the previous post. The opposite of NAC is the case where the reaction rate is modulated by the concentrations of other substances, such as activators or inhibitors.

The unknowns in the Pettersson model are concentrations of substances in the stroma of the chloroplast. The substances involved are 15 carbohydrates bearing one or more phosphate groups, inorganic phosphate and ATP, thus 17 variables in total. In addition to ordinary reactions between these substances there are transport processes in which sugar phosphates are moved from the stroma to the cytosol in exchange for inorganic phosphate. For brevity I will also refer to these as reactions. The total amount of phosphate in the stroma is conserved and this leads to a conservation law for the system of equations, a fact explicitly mentioned in the paper. On the basis of experimental data some of the reactions are classified as fast and it is assumed that they are already at equilibrium. They are also assumed to be NAC and to have mass-action kinetics. This defines a set of algebraic equations. These are to be used to reduce the 17 evolution equations which are in principle there to five equations for certain linear combinations of the variables. The details of how this is done are described in the paper. I will now summarize how this works. The time derivatives of the 16 variables other than inorganic phosphate are given in terms of linear combinations of 17 reaction rates. Nine of these reaction rates, which are not NAC, are given explicitly. The others have to be treated using the 11 algebraic equations coming from the fast reactions. The right hand sides $F_i$ of the five evolution equations mentioned already are linear combinations of those reaction rates which are given explicitly. These must be expressed in terms of the quantities whose time derivatives are on the left hand side of these equations, using the algebraic equations coming from the fast reactions and the conservation equation for the total amount of phosphate. In fact all unknowns can be expressed in terms of the concentrations of RuBP, DHAP, F6P, Ru5P and ATP. Call these quantities $s_i$. 
Thus if the time derivatives of the $s_i$ can be expressed in terms of the $F_i$ we are done. It is shown in the appendix to the paper how a linear combination of the time derivatives of the $s_i$ with coefficients only depending on the $s_i$ is equal to $F_i$. Moreover it is stated that the time derivatives of the $s_i$ can be expressed in terms of these linear combinations.
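The general pattern of this reduction, replacing fast reversible reactions by algebraic equilibrium conditions and evolving only suitable combinations of variables, can be illustrated on a toy network (all rate constants invented; this is not the Pettersson system itself). Take a fast reversible reaction $A \rightleftharpoons B$ coupled to a slow reaction $B \to C$; the fast equilibrium gives the algebraic condition $b = Ka$ and one slow equation for the total $T = a + b$:

```python
# Full stiff system versus the fast-equilibrium reduction, both by forward Euler.

def full(T0=1.0, kf=1000.0, kr=500.0, k_slow=1.0, t_end=1.0, dt=1e-5):
    a, b = T0, 0.0
    for _ in range(int(t_end / dt)):
        v_fast = kf * a - kr * b      # A <-> B, fast
        v_slow = k_slow * b           # B -> C, slow
        a += dt * (-v_fast)
        b += dt * (v_fast - v_slow)
    return a + b                      # the slowly varying total T

def reduced(T0=1.0, kf=1000.0, kr=500.0, k_slow=1.0, t_end=1.0, dt=1e-3):
    K = kf / kr                       # equilibrium constant: b = K * a
    T = T0
    for _ in range(int(t_end / dt)):
        b = T * K / (1 + K)           # algebraic condition from the fast reaction
        T += dt * (-k_slow * b)       # single evolution equation for T
    return T

print(full(), reduced())   # the two totals agree closely
```

The reduced system needs a time step a hundred times larger and one equation instead of two, which is exactly the kind of saving the Pettersson reduction from 17 to five equations achieves on a larger scale.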

Consider now the Poolman model. One way in which it differs from the Pettersson model is that starch degradation is included. The other is that while the kinetics for the ‘slow reactions’ (i.e. those which are not classified as fast in the Pettersson model) are left unchanged, the equilibrium assumption for the fast reactions is dropped. Instead the fast reactions are treated as reversible with mass action kinetics. In the thesis of Sergio Grimbs (Towards structure and dynamics of metabolic networks, Potsdam 2009) there is some discussion of the models of Poolman and Pettersson. It is investigated whether information about multistability in these models can be obtained using ideas coming from chemical reaction network theory. Since the results from CRNT considered require mass action kinetics it is implicit in the thesis that the systems being considered are those obtained by applying mass action kinetics to all reactions in the networks of the Poolman and Pettersson models. These systems are therefore strictly speaking different from those of Pettersson and Poolman. In any case it turned out that these tools were not useful in this example since the simplest results did not apply and for the more complicated computer-assisted ones the systems were too large.

In the Pettersson paper the results of computations of steady states are presented and a comparison with published experimental results looks good in a graph presented there. So why can we not conclude that the problem of modelling the dynamics of the Calvin cycle was pretty well solved in 1988? The paper contains no details on how the simulations were done and so it is problematic to repeat them. Jablonsky et al. set up simulations of this model on their own and found results very different from those reported in the original paper. In this context the advantage of the Poolman model is that it has been put into the BioModels database so that the basic data is available to anyone with the necessary experience in doing simulations for biochemical models. Forgetting the issue of the reliability of their simulations, what did Pettersson and Ryde-Pettersson find? They saw that depending on the external concentration of inorganic phosphate there is either no positive stationary solution (for high values of this parameter) or two (for low values) with a bifurcation in between. When there are two stationary solutions one is stable and one unstable. It looks like there is a fold bifurcation. There is a trivial stationary solution with all sugar concentrations zero for all values of the parameter. When the external phosphate concentration tends to zero the two positive stationary solutions coalesce with the trivial one. The absence of positive stationary solutions for high phosphate concentrations is suggested to be related to the concept of ‘overload breakdown’. This means that sugars are being exported so fast that the production from the Calvin cycle cannot keep up and the whole system breaks down. It would be nice to have an analytical proof of the existence of a fold bifurcation for the Pettersson model but that is probably very difficult to get.
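The qualitative picture just described, two positive steady states for weak export and none for strong export with a fold in between, can be reproduced in a scalar caricature (my own toy model, not the Pettersson system): autocatalytic production saturating as $x^2/(1+x^2)$ against linear export $ex$, with $x=0$ always a trivial steady state.

```python
# Steady states of x' = x^2/(1+x^2) - e*x.  For e < 1/2 there are two positive
# steady states (one stable, one unstable); at e = 1/2 they coalesce in a fold;
# for e > 1/2 none remain -- a cartoon of 'overload breakdown'.

def positive_steady_states(e, grid=9999, x_max=20.0):
    """Locate sign changes of f(x) = x^2/(1+x^2) - e*x on (0, x_max]."""
    f = lambda x: x * x / (1 + x * x) - e * x
    roots = []
    prev = f(1e-9)
    for i in range(1, grid + 1):
        x = i * x_max / grid
        cur = f(x)
        if prev * cur < 0:
            roots.append(x)
        prev = cur
    return roots

print(len(positive_steady_states(0.4)))   # 2 (roots near x = 0.5 and x = 2)
print(len(positive_steady_states(0.6)))   # 0: export outruns production
```

For $e=0.4$ the roots can be checked by hand from $0.4x^2-x+0.4=0$, giving $x=1/2$ and $x=2$; the discriminant vanishes exactly at $e=1/2$, which is the fold.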

## Models for photosynthesis

May 15, 2015

Photosynthesis is a process of central importance in biology. There is a large literature on modelling this process. One step is to identify networks of chemical reactions involved. Another is to derive mathematical models (usually systems of ODE) from these networks. Here when I say ‘model’ I mean ‘mathematical model’ and not the underlying network. In a paper by Jablonsky et al. (BMC Systems Biology 5: 185) existing models are surveyed and a number of errors and inconsistencies in the literature are pointed out. This reminded me of the fact that a widespread problem in the biological literature is that the huge amount of data being generated these days contains very many errors. Here I want to discuss some issues related to this, concentrating on models for the Calvin cycle of photosynthesis and, in particular, on what I will call the Poolman model.

A point which might seem obvious and trivial to the mathematician is that a description of a mathematical model (I consider here only ODE models) should contain a clear answer to the following two questions. 1) What are the unknowns? 2) What are the evolution equations? One source of ambiguity involved in the first question is the impossibility of modelling everything. It is usually unreasonable to model a whole organism although this has been tried for some simple ones. Even if it were possible, the organism is in interaction with other organisms and its environment and these things cannot also be included. In practice it is necessary to fix a boundary of the system we want to consider and cut there. One way of handling the substances outside the cut in a mathematical model is to set their concentrations to constant values, thus implicitly assuming that to a good approximation these are not affected by the dynamics within the system. Let us call these external species and the substances whose dynamics is included in the model internal species. Thus part of answering question 1) is to decide on which species are to be treated as internal. In this post I will confine myself to discussing question 1), leaving question 2) for a later date.
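In ODE terms the split looks like this (a toy chain $A \to X \to Y \to B$ with invented names and rate constants): the external species A and B enter only as fixed parameters, while the internal species X and Y get evolution equations.

```python
# Internal/external split in a toy reaction chain A -> X -> Y -> B.
# A and B lie outside the cut: their concentrations are constants, not unknowns.

A_EXT, B_EXT = 2.0, 0.5          # external species: fixed boundary values
k1, k2, k3 = 1.0, 1.0, 1.0

def simulate(steps=2000, dt=0.01):
    x, y = 0.0, 0.0              # internal species: the actual unknowns
    for _ in range(steps):
        v1 = k1 * A_EXT          # influx across the boundary, constant in time
        v2 = k2 * x              # X -> Y
        v3 = k3 * y              # efflux to the external pool B
        x += dt * (v1 - v2)
        y += dt * (v2 - v3)
    return x, y

print(simulate())   # approaches the steady state (2.0, 2.0)
```

Note that B_EXT never even appears in the equations: an external product with no back-reaction influences nothing, which is one reason the internal/external status of a species (starch, in the Poolman model below) can be hard to read off from a paper.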

Suppose we want to answer question 1) for a model in the literature. What are potential difficulties? In biological papers the equations (and even the full list of unknowns) are often banished to the supplementary material. In addition to being less easy to access and often less easy to read (due to typographical inferiority) than the main text, I have the feeling that this supplementary material is often subjected to less scrutiny by the referees and by the authors, so that errors or incompleteness can occur more easily. Sometimes this information is only contained in some files intended to be read by a computer rather than a human being and it may be necessary to have, or be able to use, special software in order to read them in any reasonable way. Most of these difficulties are not absolute in nature. It is just that the mathematician embarking on such a process should ideally be aware of some of the challenges awaiting him in advance.

How does this look in the case of the Poolman model? It was first published in a journal in a paper of Poolman, Fell and Thomas (J. Exp. Botany, 51, 319). The reaction network is specified by Fig. 1 of the paper. This makes most of the unknowns clear but leaves the following cases where something more needs to be said. Firstly, it is natural to take the concentration of ADP to be defined implicitly through the concentration of ATP and the conservation of the total amount of adenosine phosphates. Secondly, it is explicitly stated that the concentrations of NADP and NADPH are taken to be constant so that these are clearly external species. Presumably the concentration of inorganic phosphate in the stroma is also taken to be constant, so that this is also an external variable, although I did not find an explicit statement to this effect in the paper. The one remaining possible ambiguity involves starch – is it an internal or an external species in this model? I was not able to find anything directly addressing this point in the paper. On the other hand the paper does refer to the thesis of Poolman and some internet resources for further information. In the main body of the thesis I found no explicit resolution of the question of external phosphate but there it does seem that this quantity is treated as an external parameter. The question of starch is particularly important since this is a major change in the Poolman model compared to the earlier Pettersson model on which it is based and since Jablonsky et al. claim that there is an error in the equation describing this step. It is stated in the thesis that ‘a meaningful concentration cannot be assigned to’ … ‘the starch substrate’ which seems to support my impression that the concentration of starch is an external species. Finally a clear answer confirming my suppositions above can be found in Appendix A of the thesis which describes the computer implementation.
There we find a list of variables and constants and the latter are distinguished by being preceded by a dollar sign. So is there an error in the equation for starch degradation used in the Poolman model? My impression is that there is not, in the sense that the desired assumptions have been implemented successfully. The fact that Jablonsky et al. get the absurd result of negative starch concentrations is because they compute an evolution for starch which is an external variable in the Poolman model. What could be criticised in the Poolman model is that the amount of starch in the chloroplast varies a lot over the course of the day. Thus a model with starch as an external variable could only be expected to give a good approximation to reality on timescales much shorter than one day.

## The species-reaction graph

May 14, 2015

In the study of chemical reaction networks important features of the networks are often summarised in certain graphs. Probably that most frequently used is the species graph (or interaction graph), which I discussed in a previous post. The vertices are the species taking part in the network and the edges are related to the non-zero entries of the Jacobian matrix of the vector field defining the dynamics. Since the Jacobian matrix depends in general on the concentrations at which it is evaluated there is a graph (the local graph) for each set of values of the concentrations. Sometimes a global graph is defined as the union of the local graphs. A sign can be attached to each edge of the local graph according to the sign of the corresponding partial derivative. In the case, which does occur quite frequently in practice, that the signs of the partial derivatives are independent of the concentrations the distinction between local and global graphs is not necessary. In the general case a variant of the species graph has been defined by Kaltenbach (arXiv:1210.0320). In that case there is a directed edge from vertex $i$ to vertex $j$ if there is any set of concentrations for which the corresponding partial derivative is non-zero and instead of being labelled with a sign the edge is labelled with a function, namely the partial derivative itself.
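The local graph at a given point can be read off from a numerically approximated Jacobian. A sketch (the two-species vector field at the end is invented purely for illustration):

```python
# Build the local species graph at a point: a signed directed edge j -> i
# for each non-zero partial derivative df_i/dx_j, estimated by central
# differences.

def local_species_graph(f, conc, eps=1e-6, tol=1e-6):
    """Return {(j, i): sign} for the non-zero Jacobian entries at conc."""
    n = len(conc)
    edges = {}
    for j in range(n):
        up = list(conc); up[j] += eps
        dn = list(conc); dn[j] -= eps
        fu, fd = f(up), f(dn)
        for i in range(n):
            d = (fu[i] - fd[i]) / (2 * eps)
            if abs(d) > tol:
                edges[(j, i)] = '+' if d > 0 else '-'
    return edges

# Toy system in which x0 activates x1 and x1 inhibits x0.
f = lambda x: [1.0 - x[1] * x[0], x[0] - x[1]]
print(local_species_graph(f, [1.0, 1.0]))
```

For this particular example the sign pattern happens to be independent of the evaluation point, so local and global graphs coincide; in general one would have to take the union over points, or pass to Kaltenbach's function-labelled version.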

Another more complicated graph is the species-reaction graph or directed species-reaction graph (DSR graph). As explained in detail by Kaltenbach the definition (and the name of the object) are not consistent in the literature. The species-reaction graph was introduced in a paper of Craciun and Feinberg (SIAM J. Appl. Math. 66, 1321). In a parallel development which started earlier Mincheva and Roussel (J. Math. Biol. 55, 61) developed results using this type of graph based on ideas of Ivanova which were little known in the West and for which published proofs were not available. In the sense used by Kaltenbach the DSR graph is an object closely related to his version of the interaction graph. It is a bipartite graph (i.e. there are two different types of vertices and each edge connects a vertex of one type with a vertex of the other). In the DSR graph the species define vertices of one type and the reactions the vertices of the other type. There is a directed edge from species $i$ to reaction $j$ if species $i$ is present on the LHS of reaction $j$. There is a directed edge from reaction $i$ to species $j$ if the net production of species $j$ in reaction $i$ is non-zero. The first type of edge is labelled by the partial derivative of flux $j$ with respect to species $i$. The second type is labelled by the corresponding stoichiometric coefficient. The DSR graph determines the interaction graph. The paper of Soliman I mentioned in a recent post uses the DSR graph in the sense of Kaltenbach.
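To make the definition concrete, here is a minimal sketch (my own hypothetical code, following the description above rather than the conventions of any particular paper) which builds the labelled edge set of a DSR graph from the reactant stoichiometry and the net stoichiometric matrix:

```python
import numpy as np

def dsr_graph(reactant_matrix, stoich_matrix):
    """DSR graph as described above. reactant_matrix[i, j] is the
    stoichiometry of species i on the LHS of reaction j;
    stoich_matrix[i, j] is the net production of species i in reaction j.
    Species vertices are ('S', i), reaction vertices are ('R', j)."""
    edges = {}
    n_species, n_reactions = stoich_matrix.shape
    for i in range(n_species):
        for j in range(n_reactions):
            if reactant_matrix[i, j] != 0:
                # species -> reaction, labelled (symbolically here) by the
                # partial derivative of flux j with respect to species i
                edges[(('S', i), ('R', j))] = f"dv{j}/dx{i}"
            if stoich_matrix[i, j] != 0:
                # reaction -> species, labelled by the stoichiometric coefficient
                edges[(('R', j), ('S', i))] = int(stoich_matrix[i, j])
    return edges

# Single reaction A + B -> C (species 0, 1, 2; reaction 0)
reactants = np.array([[1], [1], [0]])
stoich = np.array([[-1], [-1], [1]])
print(dsr_graph(reactants, stoich))
```

Since every non-zero entry of the Jacobian arises by composing a species-to-reaction edge with a reaction-to-species edge, this object determines the interaction graph, as stated above.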

A type of species-reaction graph has been used in quite a different way by Angeli, de Leenheer and Sontag to obtain conditions for the monotonicity of the equations for a network written in reaction coordinates.

## Proof of the global attractor conjecture

May 14, 2015

In a previous post I discussed the global attractor conjecture which concerns the asymptotic behaviour of solutions of the mass action equations describing weakly reversible chemical reaction networks of deficiency zero (or more generally complex balanced systems). Systems of the latter class are sometimes called toric dynamical systems because of relations to the theory of toric varieties in algebraic geometry. I just saw a paper by Gheorghe Craciun which he put on ArXiv last January (arxiv.org/pdf/1501.0286) where he proves this conjecture, thus solving a problem which has been open for more than 40 years. The result says that for reaction networks of this class the long-time behaviour is very simple. For given values of the conserved quantities there is exactly one positive stationary solution and all other solutions converge to it. What needs to be done beyond the classical results on this problem is to show that a positive solution can have no $\omega$-limit points where some concentration vanishes. This property is sometimes known as persistence.
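For readers who want the objects pinned down (these are the standard definitions, not taken from Craciun's paper): for a network with reactions $y \to y'$ between complexes, with rate constants $k_{y \to y'}$, the mass-action equations read

$$\dot{x} = \sum_{y \to y'} k_{y \to y'}\, x^{y}\, (y' - y), \qquad x^{y} := \prod_i x_i^{y_i},$$

and a stationary point $\bar{x}$ is complex balanced if for every complex $z$ the total flux into $z$ equals the total flux out of it,

$$\sum_{y \to z} k_{y \to z}\, \bar{x}^{y} = \sum_{z \to y'} k_{z \to y'}\, \bar{x}^{z}.$$

The conjecture asserts that such a $\bar{x}$ attracts all positive solutions with the same values of the conserved quantities.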

A central concept used in the proof of the result is that of a toric differential inclusion. This says that the time rate of change of the concentrations is contained in a special type of subset. The paper contains a lot of intricate geometric constructions. These are explained consecutively in dimensions one, two, three and four in the paper. This should hopefully provide a good ladder to understanding.

## Arrival in Bretzenheim

May 1, 2015

Since I moved to Mainz two years ago my wife has remained in Berlin and we have been searching for a suitable place to live in Mainz or its surroundings. The original plan was to buy a piece of land on which we could build a house. This turned out to be much more difficult than we expected. The only land within our financial horizons was in small villages with almost no infrastructure or had other major disadvantages from our point of view. Eventually, after wasting a lot of time and effort, we stopped searching in the surroundings and looked for something in Mainz itself. Of course this was not easy but eventually we decided to buy a house (still to be built) within a small housing scheme in the district of Mainz called Bretzenheim. There is very little land available in Mainz and the scheme where we will be living is an example of the way in which the few remaining open spaces are being filled up, driven by the high property prices. The house was finished in March and I moved in on a provisional basis on 1st April, giving up my apartment, exactly two years after starting my job in Mainz. (Goodbye Jackdaws! The first birds I saw in what will be our garden were a Carrion Crow, which I am not taking as a bad omen, and a Black Redstart.) Our belongings left Berlin on 16th April and arrived in Mainz on 17th. Eva came to Mainz definitively on the 16th. So now there can be no doubt that a new era has begun for us.

The new house is conveniently situated. From there I can walk to the mathematics institute in 20 minutes and to the main train station in half an hour. It is near the end of a tram line coming from the station. Very close to where the houses have been built a Merovingian grave was found. When the garden centre a little further up the hill was being built the grave of a Merovingian warrior was found who had been buried with his horse. It is just as well for us that the archaeologists did not dig too deep or we might have had to wait a long time before we could move in. Whenever you dig a hole in Mainz you are in danger of encountering a distant past and potential problems with the archaeology department of the city. To show what can happen I will give an example. In the district of Gonsenheim there is a small stream, the Gonsbach. At some time when progress was fashionable the winding stream was straightened. Later an EU directive came into force which said that streams like this which had been straightened had to be made winding again, for ecological reasons, within a certain number of years. By now the deadline has been reached, or almost reached for the Gonsbach, and the city has started measures to attempt to comply with the directive. They started to dig and found … a Roman settlement. The work had to be stopped, for archaeological reasons. So now, as far as I know, the archaeological and ecological regulations and the city officials representing them are in deadlock.

Since I am British it is a curiosity for me that one suggested origin of the name Bretzenheim is that it is named after the Britons. It has been suggested that it could be identified with a certain vicus Brittaniae where the Roman Emperor Severus Alexander was murdered in the year 235. The evidence for this seems limited and an alternative hypothesis says that it was named after a locally important man called Bretzo. I cannot imagine what the Britons would have been doing there. Another theory is that the emperor was murdered in Britain and not in Mainz.

## Systems and synthetic biology in Strasbourg

March 26, 2015

This past week I was at a conference entitled ‘Advances in Systems and Synthetic Biology’ in Strasbourg. The first talk was by Albert Goldbeter and anyone who has read many of the posts on my blog will realize that I went there with high expectations. I was not disappointed. It was a broad talk with the main examples being glycolysis, circadian rhythms and the cell cycle. A lot of the things he talked about were already rather familiar. It was nevertheless rewarding to hear the inside story. There were also enough themes in the talk which were quite new to me. For instance he mentioned that oscillations have been observed in red blood cells, where transcription is ruled out. I enjoyed listening to him, perhaps even more than I did reading his book. Another talk on Monday was by Jacques Demongeot. I am sure that he is a brilliant, versatile and highly knowledgeable person. Unfortunately he made no concessions in his talk for the benefit of non-experts. He jumped into the talk without saying where it was going and I did not have the background knowledge to be able to supply that information on my own. I felt as if I was flattened by a blast wave of information and unfortunately I understood essentially nothing.

The first talk on Tuesday was by Nicolas Le Novère and it had to do with engineering-type approaches to molecular biology. This is very far from my world but gave me some fascinating glimpses as to how this kind of thing works. Incidentally, I found out that Le Novère has a blog with a number of contributions which I found enlightening. The next talk was by François Fages and was focussed on computer science issues. It nevertheless contained more than one aspect which I found very interesting. At this point I should give some background. There are influential ideas on the relation of feedback loops to the qualitative behaviour of the solutions of dynamical systems due to René Thomas. They have been developed over many years by several people and a number of them are at this conference (El Houssine Snoussi, Marcelle Kaufman, Jacques Demongeot) and today I attended a tutorial held by Gilles Bernot on related themes. The basic idea is ‘positive feedback loops are necessary for bistability, negative feedback loops for periodic solutions’. I will not get into this more deeply here but I will just mention that some of the conjectures of Thomas have been made into theorems over the years. For instance a rather definitive version of the result on multistability was proved by Christophe Soulé. In his talk Fages mentioned a recent generalization of this result due to Sylvain Soliman. In the past I had the suspicion that the interest of the conjectures of Thomas was severely limited by the fact that the hypotheses rule out certain configurations which are very widespread in reaction networks of practical importance. It seemed to me that Fages made exactly this point and was saying that the improved results of Soliman overcome this difficulty. I must go into the matter in more detail as soon as I have time. Another point mentioned in the talk was an automated way to find siphons. 
This is a concept in reaction networks which I should know more about and I have the impression that in a couple of cases I have discovered these objects in examples without realizing that they were instances of this general concept.

On Wednesday there was an extended presentation by Oliver Ebenhöh. One speaker had cancelled and Oliver extended his presentation to fill the resulting extra time. I felt that listening to the presentation was time well spent and I did not feel my attention waning. He explained many things related to plant science and, in particular, the use of starch by plants. One key topic was the way in which a particular enzyme acts on chains of glucose monomers (generalizations of maltose, which is the case of two units). It creates a kind of maximal entropy distribution of chain lengths. The talk presented both a theoretical analysis and precise experimental results. The theoretical part involved an application of elementary thermodynamic ideas. I liked this and it brought me to a realization about my relation to physics. In the past I have been exposed to too much theoretical physics of a very pure kind, remote from applications to phenomena close to everyday life. It was refreshing to see basic physical ideas being applied in a down-to-earth way to the analysis of real experiments, in this case in biology.
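The details of the analysis are in Ebenhöh's papers; purely as an illustration of the kind of entropy argument involved (my own sketch, with the invented constraint of a fixed mean chain length), maximising the entropy of a length distribution subject to normalisation and a fixed mean gives a geometric distribution $p(n) \propto e^{-\lambda n}$, where the Lagrange multiplier $\lambda$ can be found numerically by bisection:

```python
import math

def maxent_length_distribution(mean_length, n_max=200):
    """Maximum-entropy distribution over chain lengths n = 1..n_max with a
    prescribed mean: p(n) proportional to exp(-lam * n), with lam fixed by
    bisection so that the mean constraint holds."""
    lengths = range(1, n_max + 1)

    def mean_for(lam):
        w = [math.exp(-lam * n) for n in lengths]
        return sum(n * wn for n, wn in zip(lengths, w)) / sum(w)

    lo, hi = 1e-6, 10.0   # mean_for is decreasing in lam on this bracket
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_length:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * n) for n in lengths]
    z = sum(w)
    return [wn / z for wn in w]

p = maxent_length_distribution(5.0)
```

The resulting exponential decay in chain length is the qualitative 'maximal entropy' shape referred to above.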

In his talk on Thursday Joel Bader talked about his work on engineering yeast in such a way as to find out which combinations of genes are essential for survival. The aim is to look for a kind of minimal genome within yeast. One of the techniques of gathering information is to do a random recombination using a Cre-Lox system and looking to see which of the mutants produced are viable. The analysis of these experiments leads to consideration of self-avoiding or non-self-avoiding random walks and at this point I had a strange feeling of déjà vu. A few weeks ago I gave a talk in the Hausdorff colloquium in Bonn. In this event two speakers are invited on one afternoon and their themes are not necessarily correlated. On the day I was there the other speaker was Hugo Duminil-Copin and he was talking about self-avoiding random walks, a topic which I knew very little about. Now I was faced with (at least superficially) similar ideas in the context of DNA recombination. At the end of his talk Bader spent a few minutes on a quite different topic, namely bistability in the state of M. tuberculosis. I would have liked to have heard more about that. He is collaborating on this together with Denise Kirschner whose work on modelling tuberculosis I have discussed in a previous post.

This meeting had the advantages of relatively small conferences (in this case of the order of 50 participants) and has served the purpose of opening new perspectives for me.