July 31, 2015
In a previous post I mentioned Calvin’s Nobel lecture. Recently I read it again and, since I had learned a lot of things in the meantime, I could profit from it in new ways. The subject of the lecture is the way in which Calvin and his collaborators discovered the mechanisms of the dark reactions of photosynthesis. This involved years of experiments which I am not qualified to discuss. What I will do here is to describe some of the major conceptual components of this work. The first step was to discover which chemical substances are involved in the process. To make this a well-defined question it is necessary to fix a boundary between those substances to be considered and others. As their name suggests the dark reactions can take place in the dark and to start with the process was studied in the dark. It seems, however, that this did not produce very satisfactory results and this led to a change of strategy. The dark reactions also take place in the light and the idea was to look at a steady state situation where photosynthesis is taking place in the presence of light. The dark reactions incorporate carbon dioxide into carbohydrates and the aim was to find the mechanism by which this occurs. At the end of the Second World War, when this work was done, carbon 14 had just become much more easily available due to the existence of nuclear reactors. Calvin also mentions that when doing difficult separations of compounds in his work on photosynthesis he used things he had learned when separating plutonium during the war. Given a steady state situation with ordinary carbon dioxide the radioactive form of the gas containing carbon 14 could be introduced. The radioactive carbon atoms became incorporated into some of the organic compounds in the plants used. (The principal subject of the experiment was the green alga Chlorella.) In fact the radioactive carbon atoms turned up in too many compounds – the boundary had been fixed too widely.
This was improved on by looking at what happened on sufficiently short time scales after the radioactive gas had been added, of the order of a few seconds. After this time the process was stopped, leading to a snapshot of the chemical concentrations. This meant that the labelled carbon had not had time to propagate too far through the system and was only found in a relatively small number of compounds. The compounds were separated by two-dimensional chromatography and those which were radioactive were located by the black spots they caused on photographic film. Calvin remarks ironically that the apparatus they were using did not label the spots with the names of the compounds giving rise to them. It was thus necessary to extract those compounds and analyse them by all sorts of techniques which I know very little about. It took about ten years. In any case, the endpoint of this process was the first major conceptual step: a set of relevant compounds had been identified. These are the carbon compounds which are involved in the reactions leading from the point where carbon dioxide enters the system and before too much of the carbon has been transferred to other systems connected to the Calvin cycle. While reading the text of the lecture I also had a modern picture of the reaction network in front of me and this was useful for understanding the significance of the elements of the story being told. From the point of view of the mathematician this step corresponds to determining the nodes of the reaction network. It remains to find out which compounds react with which others, with which stoichiometry.
In looking for the reactions one useful source of information is the following. The carbon atoms in a given substance involved in the cycle are not equivalent to each other. By suitable experiments it can be decided which are the first carbon atoms to become radioactive. For instance, a compound produced in relatively large amounts right at the beginning of the process is phosphoglyceric acid (PGA) and it is found that the carbon in the carboxyl group is the one which becomes radioactive first. The other two carbons become radioactive at a common later time. This type of information provides suggestions for possible reaction mechanisms. Another type of input is obtained by simply counting carbon atoms in potential reactions. For instance, if the three-carbon compound PGA is to be produced from a precursor by the addition of carbon dioxide then the simple arithmetic relation 1 + 2 = 3 indicates that there might be a precursor molecule with two carbons. However this molecule was never found and it turns out that the relevant arithmetic is 1 + 5 = 2 × 3. The reaction produces two molecules of PGA from a precursor with five carbon atoms, ribulose bisphosphate (RuBP). Combining the information about the order in which the carbon atoms were incorporated with the arithmetic considerations allowed a large part of the network to be reconstructed. Nevertheless the nature of one key step, that in which carbon dioxide is incorporated into PGA, remained unclear. Further progress required a different type of experiment.
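The carbon counting can be made completely mechanical. The following sketch (purely illustrative bookkeeping of my own, not anything from the lecture) checks that the carboxylation of RuBP balances, and also that the rejected two-carbon hypothesis balances equally well, which is why arithmetic alone could not decide the question.

```python
# Toy carbon bookkeeping for the carboxylation step. The carbon counts per
# molecule are standard; the code itself is just illustration.
carbons = {"CO2": 1, "RuBP": 5, "PGA": 3, "C2": 2}  # atoms of carbon each

def carbon_count(side):
    """Total carbon atoms on one side of a reaction, given stoichiometry."""
    return sum(coeff * carbons[species] for species, coeff in side.items())

# The reaction that actually occurs: CO2 + RuBP -> 2 PGA  (1 + 5 = 2 x 3)
assert carbon_count({"CO2": 1, "RuBP": 1}) == carbon_count({"PGA": 2}) == 6

# The rejected hypothesis CO2 + C2 -> PGA balances just as well (1 + 2 = 3),
# so counting alone cannot rule it out; the C2 precursor was simply never found.
assert carbon_count({"CO2": 1, "C2": 1}) == carbon_count({"PGA": 1}) == 3
```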
The measurements used up to now are essentially measurements of concentrations at one time point (or very few time points). The last major step was taken using measurements of the dynamics. Here the concentrations of selected substances are determined at sufficiently many time points so as to get a picture of the time evolution of concentrations in certain circumstances. The idea is to first take measurements of PGA and RuBP in conditions of constant light. These concentrations are essentially time-independent. Then the light is switched off. It is seen that the concentration of PGA increases rapidly (it more than doubles within a minute) while that of RuBP rapidly decreases on the same time scale. This gives evidence that at steady state RuBP is being converted to PGA. This completes the picture of the reaction network. Further confirmation that the picture is correct is obtained by experiments where the amount of carbon dioxide available is suddenly reduced and the resulting transients in various concentrations monitored.
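The logic of the light-off experiment can be reproduced in a toy two-variable caricature (rate constants invented by me, nothing like the real kinetics): carboxylation of RuBP is a dark reaction and continues, while RuBP regeneration and PGA consumption depend on products of the light reactions and stop when the light goes off.

```python
# A minimal caricature of the light-off experiment. Hypothetical rates:
#   d(RuBP)/dt = k_regen*light - k_carb*RuBP       (regeneration needs light)
#   d(PGA)/dt  = 2*k_carb*RuBP - k_red*light*PGA   (reduction needs light)
# Carboxylation RuBP -> 2 PGA keeps running in the dark.

def simulate(light, rubp0, pga0, k_carb=1.0, k_regen=2.0, k_red=1.0,
             dt=0.001, t_end=1.0):
    rubp, pga = rubp0, pga0
    for _ in range(int(t_end / dt)):     # simple explicit Euler steps
        d_rubp = k_regen * light - k_carb * rubp
        d_pga = 2.0 * k_carb * rubp - k_red * light * pga
        rubp += dt * d_rubp
        pga += dt * d_pga
    return rubp, pga

# Steady state in the light: RuBP = k_regen/k_carb = 2, PGA = 2*k_carb*RuBP/k_red = 4.
rubp_ss, pga_ss = 2.0, 4.0
rubp, pga = simulate(light=0.0, rubp0=rubp_ss, pga0=pga_ss)
assert pga > pga_ss      # PGA accumulates after the light is switched off
assert rubp < rubp_ss    # while RuBP is depleted on the same time scale
```

The qualitative behaviour (PGA up, RuBP down) is exactly the signature that points to RuBP being converted to PGA at steady state.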
July 9, 2015
Last week I attended a workshop on reaction network theory organized by Elisenda Feliu and Carsten Wiuf. It took place in Copenhagen from 1st to 3rd July. I flew in late on the Tuesday evening and on arrival I had a pleasant feeling of being in the north just due to the amount and quality of the light. Looking at the weather information for Mainz I was glad I had got a reduction in temperature of several degrees by making this trip. A lot of comments and extra information on the talks at this conference can be found on the blog of John Baez and that of Matteo Polettini. Now, on my own slower time scale, I will write a bit about things I heard at the conference which I found particularly interesting. The topic of different time scales is very relevant to the theme of the meeting and the first talk, by Sebastian Walcher, was concerned with it. Often a dynamical system of interest can be thought of as containing a small parameter and letting this parameter tend to zero leads to a smaller system which may be easier to analyse. Information obtained in this way may be transported back to the original system. If the parameter is a ratio of time scales then the limit will be singular. The issue discussed in the talk is that of finding a suitable small parameter in a system when one is suspected. It is probably unreasonable to expect to find a completely general method but the talk presented algorithms which can contribute to solving this type of problem.
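To illustrate the kind of singular limit involved, here is the standard scaled Michaelis-Menten system (a textbook example of my own choosing, not something from the talk): for a small parameter ε the fast variable rapidly collapses onto the slow manifold given by the quasi-steady-state reduction.

```python
# Fast-slow toy system in the standard Michaelis-Menten scaling:
#   ds/dt = -s + (s + kappa - lam) * c
#   eps * dc/dt = s - (s + kappa) * c
# For small eps, after a short initial layer c tracks the slow manifold
# c = s / (s + kappa). Parameter values here are arbitrary illustrations.

def integrate(eps, s0=1.0, c0=0.0, kappa=1.0, lam=0.5, dt=1e-5, t_end=0.5):
    s, c = s0, c0
    for _ in range(int(t_end / dt)):   # explicit Euler with a small step
        ds = -s + (s + kappa - lam) * c
        dc = (s - (s + kappa) * c) / eps
        s += dt * ds
        c += dt * dc
    return s, c

s, c = integrate(eps=0.01)
# After the initial layer the fast variable sits close to the slow manifold
# (the deviation is of order eps):
assert abs(c - s / (s + 1.0)) < 0.01
```

Information obtained from the reduced equation ds/dt = -λs/(s + κ) can then be transported back to the full system, which is the theme Walcher's talk addressed.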
In the second talk Gheorghe Craciun presented his proof of the global attractor conjecture, which I have mentioned in a previous post. I was intrigued by one comment he made relating the concept of deficiency zero to systems in general position. Later he explained this to me and I will say something about the direction in which this goes. The concept of deficiency is central in chemical reaction network theory but I never found it very intuitive and I feel safe in claiming that I am in good company as far as that is concerned. Gheorghe’s idea is intended to improve this state of affairs by giving the deficiency a geometric interpretation. In this context it is worth mentioning that there are two definitions of deficiency on the market. I had heard this before but never looked at the details. I was reminded of it by the talk of Jeanne Marie Onana Eloundou-Mbebi in Copenhagen, where it played an important role. She was talking about absolute concentration robustness. The latter concept was also the subject of the talk of Dave Anderson, who was looking at the issue of whether the known results on ACR for deterministic reaction networks hold in some reasonable sense in the stochastic case. The answer seems to be that they do not. But now I return to the question of how the deficiency is defined. Here I use the notation δ for the deficiency as originally defined by Feinberg. The alternative, which can be found in Jeremy Gunawardena’s text with the title ‘Chemical reaction network theory for in silico biologists’, will be denoted by δ′. Gunawardena, who seems to find the second definition more natural, proves that the two quantities are equal provided a certain condition holds (each linkage class contains precisely one terminal strong linkage class). This condition is, in particular, satisfied for weakly reversible networks and this is perhaps the reason that the difference in definitions is not often mentioned in the literature.
In general δ′ ≤ δ, so that deficiency zero in the sense of the common definition implies deficiency zero in the sense of the other definition.
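For concreteness, the common (Feinberg) definition δ = n − l − s, with n the number of complexes, l the number of linkage classes and s the rank of the stoichiometric subspace, can be computed mechanically. Here is a sketch for a toy network of my own (A ⇌ B ⇌ C), unrelated to any of the talks.

```python
from fractions import Fraction

# Feinberg deficiency delta = n - l - s for the toy network A <-> B <-> C.
species = ["A", "B", "C"]
complexes = [{"A": 1}, {"B": 1}, {"C": 1}]          # complexes as species -> coefficient
reactions = [(0, 1), (1, 0), (1, 2), (2, 1)]        # (source index, target index)

n = len(complexes)

# l: linkage classes = connected components of the undirected complex graph
parent = list(range(n))
def find(i):
    while parent[i] != i:
        i = parent[i]
    return i
for a, b in reactions:
    parent[find(a)] = find(b)
l = len({find(i) for i in range(n)})

# s: rank of the stoichiometric subspace, via Gaussian elimination over Q
def vec(c):
    return [Fraction(c.get(sp, 0)) for sp in species]
rows = [[t - src for src, t in zip(vec(complexes[a]), vec(complexes[b]))]
        for a, b in reactions]
rank, col = 0, 0
while rank < len(rows) and col < len(species):
    pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
    if pivot is None:
        col += 1
        continue
    rows[rank], rows[pivot] = rows[pivot], rows[rank]
    for r in range(len(rows)):
        if r != rank and rows[r][col] != 0:
            f = rows[r][col] / rows[rank][col]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[rank])]
    rank += 1
    col += 1

delta = n - l - rank
assert (n, l, rank, delta) == (3, 1, 2, 0)   # a weakly reversible deficiency zero network
```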
For a long time I knew very little about control theory. The desire to change this motivated me to give a course on the subject in the last winter semester, using the excellent textbook of Eduardo Sontag as my main source. Since then I had not taken the time to look back on what I learned in the course of doing this, and its value became clear to me only now. In Copenhagen Nicolette Meshkat gave a talk on identifiability in reaction networks. I had heard her give a talk on a similar subject at the SIAM life science conference last summer and not understood much. I am sure that this was not her fault but mine. This time around things were suddenly clear. The reason is that this subject involves ideas coming from control theory and through giving the course I had learned to think in some new directions. The basic idea of identifiability is to extract information on the parameters of a dynamical system from the input-output relation.
There was another talk with a lot of control theory content by Mustafa Khammash. He had brought some machines with him to illustrate some of the ideas. These were made of Lego, driven by computers and communicating with each other via bluetooth devices. One of these was a physical realization of one of the favourite simple examples in control theory, stabilization of the inverted pendulum. Another was a robot programmed to come to rest 30 cm in front of a barrier facing it. Next he talked about an experiment coupling living cells to a computer to form a control system. The output from a population of cells was read by a combination of GFP labeling and a FACS machine. After processing the signal the resulting input was applied by stimulating the cells with light. This got a lot of media attention under the name ‘cyborg yeast’. After that he talked about a project in which programmes can be incorporated into the cells themselves using plasmids. In one of the last remarks in his talk he mentioned how cows use integral feedback to control the calcium concentration in their bodies. I think it would be nice to incorporate this into popular talks or calculus lectures in the form ‘cows can do integrals’ or ‘cows can solve differential equations’. The idea would be to have a striking example of what the abstract things done in calculus courses have to do with the real world.
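Integral feedback is easy to demonstrate in a few lines. The following toy model (invented numbers, certainly not the physiology of a real cow) shows the defining property: because the controller integrates the error, the steady state returns exactly to the setpoint whatever constant disturbance is applied.

```python
# Toy integral-feedback loop for a regulated concentration x:
#   x' = ki * I - x - disturbance,   I' = setpoint - x
# The integral state I accumulates the error ("the cow's integral") and at
# steady state forces the error to zero exactly. All numbers hypothetical.

def run(disturbance, setpoint=2.5, ki=0.5, dt=0.01, t_end=200.0):
    x = setpoint
    integral = 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - x
        integral += dt * error
        # production driven by the integral, first-order loss, plus a
        # constant disturbance (say, calcium drained into milk production)
        x += dt * (ki * integral - x - disturbance)
    return x

# Perfect adaptation: the steady state is the setpoint for any disturbance.
assert abs(run(disturbance=0.0) - 2.5) < 1e-3
assert abs(run(disturbance=1.0) - 2.5) < 1e-3
```

A purely proportional controller would settle at an offset depending on the disturbance; the integral term is what removes that offset, which is the point of the ‘cows can do integrals’ slogan.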
My talk at the conference was on phosphorylation systems and interestingly there was another talk there, by Andreas Weber, which had a possibly very significant relation to this. I only became aware of the existence of the corresponding paper (Errami et al., J. Comp. Phys. 291, 279) a few weeks ago and since it involves a lot of techniques I am not too familiar with and has a strong computer science component I have only had a limited opportunity to understand it. I hope to get deeper into it soon. It concerns a method of finding Hopf bifurcations.
This conference was a great chance to maintain and extend my contacts in the community working on reaction networks and get various types of inside information on the field.
June 26, 2015
My spontaneous reaction to a computer-assisted proof is to regard it as having a lesser status than one done by hand. Here I want to consider why I feel this way and whether, and under what circumstances, this point of view is justified. I start by considering the situation of a traditional mathematical proof, done by hand and documented in journal articles or books. In this context it is impossible to write out all details of the proof. Somehow the aim is to bridge the gap between the new result and what experts in the area are already convinced of. This general difficulty becomes more acute when the proof is very long and parts of it are quite repetitive. There is a tendency for the author to say that the next step is strictly analogous to the previous one, and even if the next step is written out there is a tendency for the reader to assume that it is strictly analogous and to gloss over it. Human beings (a class which includes mathematicians) make mistakes and have a limited capacity to concentrate. To sum up, a traditional proof is never perfect, and very long and repetitive proofs are likely to be less perfect than others. So what is it that often makes a traditional proof so convincing? I think that in the end it is its embedding in a certain context. An experienced mathematician has met with countless examples of proofs, his own and those of others, errors large and small in those proofs and how they can often be repaired. This is complemented by experience of the interactions between different mathematicians and their texts. These things give a basis for judging the validity of a proof which is by no means exclusively on the level of explicit logical argumentation.
How is it, by comparison, with computer-assisted proofs? The first point to be raised is what is meant by that phrase. Let me start with a rather trivial example. Suppose I use a computer to calculate the kth digit of n factorial where k and n are quite large. If for given choices of the numbers a well-tested computer programme can give me the answer in one minute then I will not doubt the answer. Why is this? Because I believe that the answer comes from an algorithm which determines a unique answer. No approximations or floating point operations are involved. For me interval arithmetic, which I discussed in a previous post, is on the same level of credibility, namely that of a computer-free proof. There could be an error in the hardware or the software or the programme but this is not essentially different from the uncertainties connected with a traditional proof. So what might be the problem in other cases? One problem is that of transparency. If a computer-assisted proof is to be convincing for me then I must either understand what algorithm the computer is supposed to be implementing or at least have the impression that I could do so if I invested some time and effort. Thus the question arises to what extent this aspect is documented in a given case. There is also the issue of the loss of the context which I mentioned previously. Suppose I believe that showing that the answer to a certain question is ‘yes’ in 1000 cases constitutes a proof of a certain theorem but that checking these cases is so arduous that a human being can hardly do so. Suppose further that I understand an algorithm which, if implemented, can carry out this task on a computer. Will I then be convinced? I think the answer is that I will but I am still likely to be left with an uncomfortable feeling if I do not have the opportunity to see the details in a given case if I want to.
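The factorial example can be made completely explicit. In Python, where integer arithmetic is exact and unbounded, the whole computation is a chain of deterministic integer operations, which is exactly why I find this kind of result trustworthy.

```python
import math

# The kth decimal digit of n!, computed by exact integer arithmetic only.
# No approximations or floating point operations are involved at any step.

def digit_of_factorial(n, k):
    """kth digit of n!, counting from 1 at the least significant end."""
    f = math.factorial(n)          # exact big-integer factorial
    return (f // 10 ** (k - 1)) % 10

# 10! = 3628800, so the digits from the least significant end are 0,0,8,8,2,6,3
assert digit_of_factorial(10, 3) == 8
assert digit_of_factorial(10, 7) == 3

# A much larger instance still runs in well under a minute.
assert 0 <= digit_of_factorial(10000, 12345) <= 9
```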
In addition to the question of whether the nature of the application is documented there is the question of whether this has been done in a way that is sufficiently palatable that mathematicians will actually carefully study the documentation. Rather than remain on the level of generalities I prefer to now go over to an example.
Perhaps the most famous computer-assisted proof is that of the four colour theorem by Appel and Haken. To fill myself in on the background on this subject I read the book ‘Four colors suffice’ by Robin Wilson. The original problem is to colour a map in such a way that no two countries with a common border have the same colour. The statement of the theorem is that it is always possible with four colours. This statement can be reformulated as a question in graph theory. Here I am not interested in the details of how this reformulation is carried out. The intuitive idea is to associate a vertex to each country and an edge to each common border. Then the problem becomes that of colouring the vertices of a planar graph in such a way that no two adjacent vertices have the same colour. From now on I take this graph-theoretic statement as basic. (Unfortunately it is in fact not just a graph-theoretic, in particular combinatorial, statement since we are talking about planar graphs.) What I am interested in is not so much the problem itself as what it can teach us about computer-assisted proofs in general. I found the book of Wilson very entertaining but I was disappointed by the fact that he consistently avoids going over to the graph-theoretic formulation which I find more transparent (that word again). In an article by Robin Thomas (Notices of the AMS, 44, 848) he uses the graph-theoretic formulation more but I still cannot say I understood the structure of the proof on the coarsest scale. Thomas does write that in his own simplified version of the original proof the contribution of computers only involves integer arithmetic. Thus this proof does seem to belong to the category of things I said above I would tend to accept as a mathematical proof, modulo the fact that I would have to invest the time and effort to understand the algorithm. There is also a ‘computer-checked proof’ of the four colour theorem by Georges Gonthier. 
I found this text interesting to look at but felt as if I was quickly getting into logical deep waters. I do not really understand what is going on there.
To sum up this discussion, I am afraid that in the end the four colour problem was not the right example for me to start with and that I need to take some other example which is closer to mathematical topics which I know better and perhaps also further from having been formalized and documented.
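Although I did not get to the bottom of the structure of the proof, the graph-theoretic statement itself is easy to experiment with on tiny instances. Here is a brute-force sketch on toy graphs of my own choosing (nothing to do with the actual reducible configurations of the proof); it also shows why the planarity hypothesis matters.

```python
from itertools import product

# Brute-force test of 4-colourability: try every assignment of four colours
# to the vertices and check that no edge joins two vertices of equal colour.
# Only feasible for tiny graphs, but completely transparent.

def four_colourable(n_vertices, edges):
    """Return a valid 4-colouring as a tuple, or None if none exists."""
    for colouring in product(range(4), repeat=n_vertices):
        if all(colouring[u] != colouring[v] for u, v in edges):
            return colouring
    return None

# K4 (complete graph on 4 vertices) is planar and needs all four colours.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
c = four_colourable(4, k4_edges)
assert c is not None and len(set(c)) == 4

# K5 is not planar, and indeed it is not 4-colourable, which shows the
# theorem really is about planar graphs and not graphs in general.
k5_edges = k4_edges + [(0, 4), (1, 4), (2, 4), (3, 4)]
assert four_colourable(5, k5_edges) is None
```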
May 22, 2015
In my previous post on this subject I discussed the question of the status of the variables in the Poolman model of photosynthesis and in the end I was convinced that I had understood which concentrations are to be considered as dynamical unknowns and which as constants. The Poolman model is a modified version of the Pettersson model and the corresponding questions about the nature of the variables have the same answers in both cases. What I am calling the Pettersson model was introduced in a paper of Pettersson and Ryde-Pettersson (Eur. J. Biochem. 175, 661) and there the description of the variables and the equations is rather complete and comprehensible. Now I will go on to consider the second question raised in the previous post, namely what the evolution equations are. The evolution equations in the Poolman model are modifications of those in the Pettersson model and are described relative to those in the original paper on the former model. For this reason I will start by describing the equations for the Pettersson model. As a preparation for that I will treat a side issue. In a reaction network a reaction whose rate depends only on the concentrations of the substances consumed in the reaction is sometimes called NAC (for non-autocatalytic). For instance this terminology is used in the paper of Kaltenbach quoted in the previous post. The opposite of NAC is the case where the reaction rate is modulated by the concentrations of other substances, such as activators or inhibitors.
The unknowns in the Pettersson model are concentrations of substances in the stroma of the chloroplast. The substances involved are 15 carbohydrates bearing one or more phosphate groups, inorganic phosphate and ATP, thus 17 variables in total. In addition to ordinary reactions between these substances there are transport processes in which sugar phosphates are moved from the stroma to the cytosol in exchange for inorganic phosphate. For brevity I will also refer to these as reactions. The total amount of phosphate in the stroma is conserved and this leads to a conservation law for the system of equations, a fact explicitly mentioned in the paper. On the basis of experimental data some of the reactions are classified as fast and it is assumed that they are already at equilibrium. They are also assumed to be NAC and to have mass-action kinetics. This defines a set of algebraic equations. These are to be used to reduce the 17 evolution equations which are in principle there to five equations for certain linear combinations of the variables. The details of how this is done are described in the paper. I will now summarize how this works. The time derivatives of the 16 variables other than inorganic phosphate are given in terms of linear combinations of 17 reaction rates. Nine of these reaction rates, which are not NAC, are given explicitly. The others have to be treated using the 11 algebraic equations coming from the fast reactions. The right hand sides of the five evolution equations mentioned already are linear combinations of those reaction rates which are given explicitly. These must be expressed in terms of the quantities whose time derivatives are on the left hand side of these equations, using the algebraic equations coming from the fast reactions and the conservation equation for the total amount of phosphate. In fact all unknowns can be expressed in terms of the concentrations of RuBP, DHAP, F6P, Ru5P and ATP. Call these quantities x_i.
Thus if the time derivatives of the x_i can be expressed in terms of the x_i we are done. It is shown in the appendix to the paper how a linear combination of the time derivatives of the x_i with coefficients depending only on the x_i is equal to an explicitly known function of the x_i. Moreover it is stated that the time derivatives of the x_i can be expressed in terms of these linear combinations.
Consider now the Poolman model. One way in which it differs from the Pettersson model is that starch degradation is included. The other is that while the kinetics for the ‘slow reactions’ (i.e. those which are not classified as fast in the Pettersson model) are left unchanged, the equilibrium assumption for the fast reactions is dropped. Instead the fast reactions are treated as reversible with mass action kinetics. In the thesis of Sergio Grimbs (Towards structure and dynamics of metabolic networks, Potsdam 2009) there is some discussion of the models of Poolman and Pettersson. It is investigated whether information about multistability in these models can be obtained using ideas coming from chemical reaction network theory. Since the results from CRNT considered require mass action kinetics, it is implicit in the thesis that the systems being considered are those obtained by applying mass action kinetics to all reactions in the networks for the Poolman and Pettersson models. These systems are therefore strictly speaking different from those of Pettersson and Poolman. In any case it turned out that these tools were not useful in this example since the simplest results did not apply and for the more complicated computer-assisted ones the systems were too large.
In the Pettersson paper the results of computations of steady states are presented and a comparison with published experimental results looks good in a graph presented there. So why can we not conclude that the problem of modelling the dynamics of the Calvin cycle was pretty well solved in 1988? The paper contains no details on how the simulations were done and so it is problematic to repeat them. Jablonsky et al. set up simulations of this model on their own and found results very different from those reported in the original paper. In this context the advantage of the Poolman model is that it has been put into the BioModels database so that the basic data is available to anyone with the necessary experience in doing simulations for biochemical models. Forgetting the issue of the reliability of their simulations, what did Pettersson and Ryde-Pettersson find? They saw that depending on the external concentration of inorganic phosphate there is either no positive stationary solution (for high values of this parameter) or two (for low values) with a bifurcation in between. When there are two stationary solutions one is stable and one unstable. It looks like there is a fold bifurcation. There is a trivial stationary solution with all sugar concentrations zero for all values of the parameter. When the external phosphate concentration tends to zero the two positive stationary solutions coalesce with the trivial one. The absence of positive stationary solutions for high phosphate concentrations is suggested to be related to the concept of ‘overload breakdown’. This means that sugars are being exported so fast that the production from the Calvin cycle cannot keep up and the whole system breaks down. It would be nice to have an analytical proof of the existence of a fold bifurcation for the Pettersson model but that is probably very difficult to get.
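An analytical proof for the full model may be out of reach, but the qualitative picture is easy to reproduce in a one-variable caricature (entirely my own toy system, not the Pettersson equations): saturating autocatalytic production together with linear export gives a trivial steady state for all parameter values, two positive steady states for weak export and none beyond a fold.

```python
import math

# A one-variable caricature of the fold. Production of the sugar pool x is
# autocatalytic but saturating; export is linear with rate p, standing in
# for the external phosphate level:
#   dx/dt = x**2 / (1 + x**2) - p * x
# x = 0 is always stationary. Positive steady states solve x/(1+x**2) = p,
# i.e. p*x**2 - x + p = 0: two positive roots for p < 1/2, one at the fold
# p = 1/2, none for p > 1/2 (the caricature of overload breakdown).

def positive_steady_states(p):
    disc = 1 - 4 * p * p
    if disc < 0:
        return []
    roots = [(1 - math.sqrt(disc)) / (2 * p), (1 + math.sqrt(disc)) / (2 * p)]
    return sorted(set(roots))

assert len(positive_steady_states(0.4)) == 2   # weak export: two positive states
assert len(positive_steady_states(0.5)) == 1   # the fold point
assert len(positive_steady_states(0.6)) == 0   # overload: no positive state
```

In this toy system, as in the reported simulations, the lower positive state is unstable and the upper one stable, and the whole positive branch disappears when the export rate crosses the fold.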
May 15, 2015
Photosynthesis is a process of central importance in biology. There is a large literature on modelling this process. One step is to identify networks of chemical reactions involved. Another is to derive mathematical models (usually systems of ODE) from these networks. Here when I say ‘model’ I mean ‘mathematical model’ and not the underlying network. In a paper by Jablonsky et al. (BMC Systems Biology 5: 185) existing models are surveyed and a number of errors and inconsistencies in the literature are pointed out. This reminded me of the fact that a widespread problem in the biological literature is that the huge amount of data being generated these days contains very many errors. Here I want to discuss some issues related to this, concentrating on models for the Calvin cycle of photosynthesis and, in particular, on what I will call the Poolman model.
A point which might seem obvious and trivial to the mathematician is that a description of a mathematical model (I consider here only ODE models) should contain a clear answer to the following two questions. 1) What are the unknowns? 2) What are the evolution equations? One source of ambiguity involved in the first question is the impossibility of modelling everything. It is usually unreasonable to model a whole organism although this has been tried for some simple ones. Even if it were possible, the organism is in interaction with other organisms and its environment and these things cannot also be included. In practice it is necessary to fix a boundary of the system we want to consider and cut there. One way of handling the substances outside the cut in a mathematical model is to set their concentrations to constant values, thus implicitly assuming that to a good approximation these are not affected by the dynamics within the system. Let us call these external species and the substances whose dynamics is included in the model internal species. Thus part of answering question 1) is to decide on which species are to be treated as internal. In this post I will confine myself to discussing question 1), leaving question 2) for a later date.
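A minimal sketch of what this internal/external split means in practice (a hypothetical toy chain of my own, not any model from the literature): the external species enters the right hand sides only as a fixed number and gets no evolution equation of its own.

```python
# Toy chain E -> A -> B -> (export), where E is an external species: its
# concentration is clamped to a constant instead of getting its own ODE.
# All rate constants are hypothetical.

E_EXTERNAL = 2.0   # external species: a fixed parameter, no equation

def rhs(a, b, k1=1.0, k2=0.5, k3=0.25):
    """Right hand sides for the two internal species A and B only."""
    da = k1 * E_EXTERNAL - k2 * a   # E appears only as a constant source term
    db = k2 * a - k3 * b
    return da, db

# Steady state of the internal species: a = k1*E/k2, b = k2*a/k3.
a_ss = 1.0 * E_EXTERNAL / 0.5   # = 4
b_ss = 0.5 * a_ss / 0.25        # = 8
da, db = rhs(a_ss, b_ss)
assert abs(da) < 1e-12 and abs(db) < 1e-12
```

The implicit assumption, of course, is that the dynamics of A and B do not feed back appreciably on E, which is exactly the approximation being made when a species is declared external.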
Suppose we want to answer question 1) for a model in the literature. What are potential difficulties? In biological papers the equations (and even the full list of unknowns) are often banished to the supplementary material. Besides being less easy to access, and often less easy to read (due to typographical inferiority) than the main text, supplementary material is, I feel, often subjected to less scrutiny by the referees and by the authors, so that errors or incompleteness can occur more easily. Sometimes this information is only contained in some files intended to be read by a computer rather than a human being and it may be necessary to have, or be able to use, special software in order to read them in any reasonable way. Most of these difficulties are not absolute in nature. It is just that the mathematician embarking on such a process should ideally be aware of some of the challenges awaiting him in advance.
How does this look in the case of the Poolman model? It was first published in a journal in a paper of Poolman, Fell and Thomas (J. Exp. Botany, 51, 319). The reaction network is specified by Fig. 1 of the paper. This makes most of the unknowns clear but leaves the following cases where something more needs to be said. Firstly, it is natural to take the concentration of ADP to be defined implicitly through the concentration of ATP and the conservation of the total amount of adenosine phosphates. Secondly, it is explicitly stated that the concentrations of NADP and NADPH are taken to be constant so that these are clearly external species. Presumably the concentration of inorganic phosphate in the stroma is also taken to be constant, so that this is also an external variable, although I did not find an explicit statement to this effect in the paper. The one remaining possible ambiguity involves starch – is it an internal or an external species in this model? I was not able to find anything directly addressing this point in the paper. On the other hand the paper does refer to the thesis of Poolman and some internet resources for further information. In the main body of the thesis I found no explicit resolution of the question of external phosphate but there it does seem that this quantity is treated as an external parameter. The question of starch is particularly important since this is a major change in the Poolman model compared to the earlier Pettersson model on which it is based and since Jablonsky et al. claim that there is an error in the equation describing this step. It is stated in the thesis that ‘a meaningful concentration cannot be assigned to’ … ‘the starch substrate’ which seems to support my impression that the concentration of starch is an external species. Finally a clear answer confirming my suppositions above can be found in Appendix A of the thesis which describes the computer implementation.
There we find a list of variables and constants and the latter are distinguished by being preceded by a dollar sign. So is there an error in the equation for starch degradation used in the Poolman model? My impression is that there is not, in the sense that the desired assumptions have been implemented successfully. The fact that Jablonsky et al. get the absurd result of negative starch concentrations is because they compute an evolution for starch, which is an external variable in the Poolman model. What could be criticised in the Poolman model is that the amount of starch in the chloroplast varies a lot over the course of the day. Thus a model with starch as an external variable could only be expected to give a good approximation to reality on timescales much shorter than one day.
May 14, 2015
In a previous post I discussed the global attractor conjecture which concerns the asymptotic behaviour of solutions of the mass action equations describing weakly reversible chemical reaction networks of deficiency zero (or more generally complex balanced systems). Systems of the latter class are sometimes called toric dynamical systems because of relations to the theory of toric varieties in algebraic geometry. I just saw a paper by Gheorghe Craciun which he put on ArXiv last January (arxiv.org/pdf/1501.0286) where he proves this conjecture, thus solving a problem which has been open for more than 40 years. The result says that for reaction networks of this class the long-time behaviour is very simple. For given values of the conserved quantities there is exactly one positive stationary solution and all other solutions converge to it. What needs to be done beyond the classical results on this problem is to show that a positive solution can have no ω-limit points where some concentration vanishes. This property is sometimes known as persistence.
A central concept used in the proof of the result is that of a toric differential inclusion. This says that the time rate of change of the concentrations is contained in a special type of subset. The paper contains a lot of intricate geometric constructions. These are explained consecutively in dimensions one, two, three and four in the paper. This should hopefully provide a good ladder to understanding.
May 1, 2015
Since I moved to Mainz two years ago my wife has remained in Berlin and we have been searching for a suitable place to live in Mainz or its surroundings. The original plan was to buy a piece of land on which we could build a house. This turned out to be much more difficult than we expected. The only land within our financial horizons was in small villages with almost no infrastructure or had other major disadvantages from our point of view. Eventually, after wasting a lot of time and effort, we stopped searching in the surroundings and looked for something in Mainz itself. Of course this was not easy but eventually we decided to buy a house (still to be built) within a small housing scheme in the district of Mainz called Bretzenheim. There is very little land available in Mainz and the scheme where we will be living is an example of the way in which the few remaining open spaces are being filled up, driven by the high property prices. The house was finished in March and I moved in on a provisional basis on 1st April, giving up my apartment, exactly two years after starting my job in Mainz. (Goodbye Jackdaws! The first birds I saw in what will be our garden were a Carrion Crow, which I am not taking as a bad omen, and a Black Redstart.) Our belongings left Berlin on 16th April and arrived in Mainz on 17th. Eva came to Mainz definitively on the 16th. So now there can be no doubt that a new era has begun for us.
The new house is conveniently situated. From there I can walk to the mathematics institute in 20 minutes and to the main train station in half an hour. It is near the end of a tram line coming from the station. Very close to where the houses have been built a Merovingian grave was found. When the garden centre a little further up the hill was being built the grave of a Merovingian warrior was found who had been buried with his horse. It is just as well for us that the archaeologists did not dig too deep or we might have had to wait a long time before we could move in. Whenever you dig a hole in Mainz you are in danger of encountering a distant past and potential problems with the archaeology department of the city. To show what can happen I will give an example. In the district of Gonsenheim there is a small stream, the Gonsbach. At some time when progress was fashionable the winding stream was straightened. Later an EU directive came into force which said that streams like this which had been straightened had to be made winding again, for ecological reasons, within a certain number of years. By now the deadline has been reached, or almost reached, for the Gonsbach, and the city has started measures to attempt to comply with the directive. They started to dig and found … a Roman settlement. The work had to be stopped, for archaeological reasons. So now, as far as I know, the archaeological and ecological regulations and the city officials representing them are in deadlock.
Since I am British it is a curiosity for me that one suggested origin of the name Bretzenheim is that it is named after the Britons. It has been suggested that it could be identified with a certain vicus Brittaniae where the Roman Emperor Severus Alexander was murdered in the year 235. The evidence for this seems limited and an alternative hypothesis says that it was named after a locally important man called Bretzo. I cannot imagine what the Britons would have been doing there. Another theory is that the emperor was murdered in Britain and not in Mainz.
March 26, 2015
This past week I was at a conference entitled ‘Advances in Systems and Synthetic Biology’ in Strasbourg. The first talk was by Albert Goldbeter and anyone who has read many of the posts on my blog will realize that I went there with high expectations. I was not disappointed. It was a broad talk with the main examples being glycolysis, circadian rhythms and the cell cycle. A lot of the things he talked about were already rather familiar to me. It was nevertheless rewarding to hear the inside story. There were also plenty of themes in the talk which were quite new to me. For instance he mentioned that oscillations have been observed in red blood cells, where transcription is ruled out. I enjoyed listening to him, perhaps even more than I did reading his book. Another talk on Monday was by Jacques Demongeot. I am sure that he is a brilliant, versatile and highly knowledgeable person. Unfortunately he made no concessions in his talk for the benefit of non-experts. He jumped into the talk without saying where it was going and I did not have the background knowledge to be able to supply that information on my own. I felt as if I were flattened by a blast wave of information and unfortunately I understood essentially nothing.
The first talk on Tuesday was by Nicolas Le Novère and it had to do with engineering-type approaches to molecular biology. This is very far from my world but gave me some fascinating glimpses as to how this kind of thing works. Incidentally, I found out that Le Novère has a blog with a number of contributions which I found enlightening. The next talk was by François Fages and was focussed on computer science issues. It nevertheless contained more than one aspect which I found very interesting. At this point I should give some background. There are influential ideas on the relation of feedback loops to the qualitative behaviour of the solutions of dynamical systems due to René Thomas. They have been developed over many years by several people and a number of them are at this conference (El Houssine Snoussi, Marcelle Kaufman, Jacques Demongeot) and today I attended a tutorial held by Gilles Bernot on related themes. The basic idea is ‘positive feedback loops are necessary for bistability, negative feedback loops for periodic solutions’. I will not get into this more deeply here but I will just mention that some of the conjectures of Thomas have been made into theorems over the years. For instance a rather definitive version of the result on multistability was proved by Christophe Soulé. In his talk Fages mentioned a recent generalization of this result due to Sylvain Soliman. In the past I had the suspicion that the interest of the conjectures of Thomas was severely limited by the fact that the hypotheses rule out certain configurations which are very widespread in reaction networks of practical importance. It seemed to me that Fages made exactly this point and was saying that the improved results of Soliman overcome this difficulty. I must go into the matter in more detail as soon as I have time. Another point mentioned in the talk was an automated way to find siphons. 
This is a concept in reaction networks which I should know more about and I have the impression that in a couple of cases I have discovered these objects in examples without realizing that they were instances of this general concept.
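The first half of the Thomas rule mentioned above can be seen in a one-variable toy model (a standard textbook-style example of my own, not something from the talks): a species activating its own production through a Hill function constitutes a positive feedback loop, and it exhibits bistability.

```python
# Toy illustration of Thomas' rule: positive autoregulation via a Hill
# function (a positive feedback loop) produces bistability. All
# parameter values are invented for illustration. Steady states:
# x = 0 (stable), x = 0.5 (unstable threshold), x = 2 (stable).

def integrate(x0, k_max=1.0, K=1.0, k_deg=0.4, dt=0.01, steps=10000):
    """Forward Euler for dx/dt = k_max * x^2 / (K + x^2) - k_deg * x."""
    x = x0
    for _ in range(steps):
        dx = k_max * x**2 / (K + x**2) - k_deg * x
        x += dt * dx
    return x

low = integrate(0.4)    # just below the threshold: falls to the off state
high = integrate(0.6)   # just above it: climbs to the on state near x = 2
print(low, high)
```

Two initial conditions on either side of the unstable steady state converge to different stable states, which is the defining signature of bistability; without the positive feedback term the linear degradation alone would allow only one steady state.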
On Wednesday there was an extended presentation by Oliver Ebenhöh. One speaker had cancelled and Oliver extended his presentation to fill the resulting extra time. I felt that listening to the presentation was time well spent and I did not feel my attention waning. He explained many things related to plant science and, in particular, the use of starch by plants. One key topic was the way in which a particular enzyme acts on chains of glucose monomers (a generalization of maltose, which is the case of two units). It creates a kind of maximal entropy distribution of different lengths. The talk presented both a theoretical analysis and precise experimental results. The theoretical part involved an application of elementary thermodynamic ideas. I liked this and it brought me to a realization about my relation to physics. In the past I have been exposed to too much theoretical physics of a very pure kind, remote from applications to phenomena close to everyday life. It was refreshing to see basic physical ideas being applied in a down-to-earth way to the analysis of real experiments, in this case in biology.
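Here is my rough reconstruction of the entropy idea (an assumption on my part, not Ebenhöh's actual calculation): if the enzyme shuffles glucose units freely, subject only to a fixed mean chain length, then the maximum entropy distribution over lengths n = 1, 2, … is geometric.

```python
# Hedged sketch: the maximum entropy distribution on chain lengths
# n = 1, 2, ... with a prescribed mean is geometric, p(n) ~ q**n.
# The mean length used below is an arbitrary illustrative value.

def max_entropy_lengths(mean_length, n_max=200):
    """Geometric length distribution (truncated at n_max) whose exact
    untruncated mean is mean_length; q solves 1/(1-q) = mean_length."""
    q = (mean_length - 1.0) / mean_length
    z = sum(q**n for n in range(1, n_max + 1))  # normalisation constant
    return [q**n / z for n in range(1, n_max + 1)]

p = max_entropy_lengths(mean_length=5.0)
mean = sum((n + 1) * pn for n, pn in enumerate(p))
print(round(mean, 3))  # recovers the prescribed mean length
```

The point of such a calculation is that one constraint (conservation of the total number of glucose units) plus entropy maximization already pins down the whole length distribution, which can then be compared with chromatographic measurements.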
In his talk on Thursday Joel Bader talked about his work on engineering yeast in such a way as to find out which combinations of genes are essential for survival. The aim is to look for a kind of minimal genome within yeast. One of the techniques of gathering information is to do a random recombination using a Cre-Lox system and to look to see which of the mutants produced are viable. The analysis of these experiments leads to consideration of self-avoiding or non-self-avoiding random walks and at this point I had a strange feeling of déjà vu. A few weeks ago I gave a talk in the Hausdorff colloquium in Bonn. In this event two speakers are invited on one afternoon and their themes are not necessarily correlated. On the day I was there the other speaker was Hugo Duminil-Copin and he was talking about self-avoiding random walks, a topic which I knew very little about. Now I was faced with (at least superficially) similar ideas in the context of DNA recombination. At the end of his talk Bader spent a few minutes on a quite different topic, namely bistability in the state of M. tuberculosis. I would have liked to have heard more about that. He is collaborating on this with Denise Kirschner, whose work on modelling tuberculosis I have discussed in a previous post.
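For readers who, like me, knew little about self-avoiding walks: the definition is elementary, and small cases can be enumerated directly (this is the standard definition on the square lattice, nothing specific to either talk).

```python
# A self-avoiding walk of length n on the square lattice is a path of
# n unit steps from the origin that never revisits a site. Counting
# them by brute-force enumeration is easy for small n; no closed
# formula for the counts is known, which is part of what makes the
# subject hard.

def count_saw(n, pos=(0, 0), visited=None):
    """Number of self-avoiding walks of length n starting at pos."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            total += count_saw(n - 1, nxt, visited | {nxt})
    return total

# Known counts for n = 1..4 are 4, 12, 36, 100 (OEIS A001411).
print([count_saw(n) for n in range(1, 5)])
```

The counts grow roughly like μ^n for a lattice-dependent connective constant μ, and it is results about this growth (such as Duminil-Copin's celebrated computation of μ for the hexagonal lattice) that the colloquium talk concerned.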
This meeting had the advantages of relatively small conferences (in this case of the order of 50 participants) and has served the purpose of opening new perspectives for me.
December 5, 2014
Having recently written about Harald zur Hausen I now had the opportunity to see him live, since he gave a talk in Mainz today. One main theme of his talk was colon cancer. He discussed the different frequencies of this disease in different countries and how these are changing over time. The disease is increasing in Europe and decreasing in the US. He suggested that the latter is due to the increasing success of colonoscopy in identifying and removing pre-cancerous states. There has been a particularly strong increase in Japan and Korea, which correlates with a much increased consumption of red meat. Places where this disease is relatively rare, despite considerable meat consumption, are Bolivia and Mongolia. One popular theory about the link between meat consumption and colon cancer is that the process of cooking at high temperatures produces carcinogens. A problem with this theory is that cooking chicken and fish at high temperatures produces the same carcinogens, and there is no corresponding correlation with colon cancer in those cases. Thus there is no specificity to red meat. Zur Hausen’s suggestion is that what favours the development of colon cancer is a combination of two factors. One of them is the carcinogens just mentioned but the other is specific to red meat. In fact the study of the geographical distribution suggests that it is even more specific than that: it is specific to cattle, and even to the subtype of cattle common in Europe. The types of cattle or related animals in Bolivia and Mongolia do not have the same effect. The idea is that the causative agent could be a virus which is present just in the type of cattle prevalent in ‘western’ countries. No specific virus has been incriminated, but zur Hausen and his collaborators have isolated a lot of candidates from cattle. If this idea is correct then the highest danger would come from raw or lightly cooked meat, and this is indeed popular in Japan and Korea.
Another main theme, which was quite unexpected for me, was MS. Here there is also a suggested cattle connection. The idea is that consumption of cows’ milk at a young age, and in particular consumption of non-pasteurized milk, may carry a risk of developing MS. The model, at present rather speculative, is that there could be an interaction between some factor present in cows’ milk and some kind of virus, for instance EBV. The implication of virus infections in general, and EBV in particular, in causing MS is not new, but here it is integrated into a more complicated suggestion. One problem with linking EBV and MS is that such a high percentage of the population has been infected with EBV. I cannot judge how solid these ideas about colon cancer and MS are, but they are certainly interesting and original.