Geometric singular perturbation theory

November 29, 2013

I have already written two posts about the Michaelis-Menten limit, one not very long ago. I found some old results on this subject and I was on the look-out for some more modern accounts. Now it seems to me that what I need is something called geometric singular perturbation theory, which goes back to a paper of Fenichel (J. Diff. Eq. 31, 53). An interesting aspect of this is that it involves using purely geometric statements to help solve analytical problems. If we take the system of two equations given in my last post on this subject, we can reformulate it by introducing a new time coordinate \tau=t/\epsilon, called the fast time, and adding the parameter as a new variable with zero time derivative. This gives the equations x'=\epsilon f(x,y), y'=g(x,y) and \epsilon'=0, where the prime denotes the derivative with respect to \tau. We are interested in the situation where the equation g(x,y)=0, which follows from the equations written in terms of the original time coordinate t, is equivalent to y=h_0(x) for a smooth function h_0. The linearization of the system in \tau along the zero set of g automatically has at least two zero eigenvalues. For Fenichel’s theorem it should be assumed that it does not have any further zero (or purely imaginary) eigenvalues. Then each point on that manifold has a two-dimensional centre manifold. Fenichel proves that there exists one manifold which is a centre manifold for all of these points. This is sometimes called a slow manifold. (Sometimes the part of it for a fixed value of \epsilon is given that name.) Its intersection with the plane \epsilon=0 coincides with the zero set of g. The original equations have a singular limit as \epsilon tends to zero, because \epsilon multiplies the time derivative in one of the equations. The remarkable thing is that the restriction of the system to the slow manifold is regular.
This makes it possible to show that qualitative properties of the dynamics of solutions of the system with \epsilon=0 are inherited by the system with \epsilon small but non-zero.
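To make this concrete for myself I tried a toy example. Everything in it (the choice f(x,y)=-y, g(x,y)=x-y, the step sizes) is my own illustration and not taken from Fenichel’s paper; for this system the critical manifold is y=h_0(x)=x and the reduced equation on it is \dot x=-x.

```python
import math

# A toy fast-slow system in the slow time t (my own example, not one
# from Fenichel's paper), chosen so that the slow manifold is explicit:
#   dx/dt = f(x, y) = -y
#   eps * dy/dt = g(x, y) = x - y
# Here g = 0 is equivalent to y = h0(x) = x and the reduced system on
# the slow manifold is dx/dt = -x.

def rhs(state, eps):
    x, y = state
    return (-y, (x - y) / eps)

def rk4_step(state, dt, eps):
    def shift(u, k, c):
        return (u[0] + c * k[0], u[1] + c * k[1])
    k1 = rhs(state, eps)
    k2 = rhs(shift(state, k1, dt / 2), eps)
    k3 = rhs(shift(state, k2, dt / 2), eps)
    k4 = rhs(shift(state, k3, dt), eps)
    return (state[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            state[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

eps = 1e-3
state = (1.0, 0.0)           # the initial data lie off the slow manifold
dt, steps = 2e-5, 100000     # integrate to t = 2, resolving the fast scale
for _ in range(steps):
    state = rk4_step(state, dt, eps)
x, y = state
# After a fast transient the solution tracks y = h0(x) = x and x follows
# the reduced equation, so x(2) should be close to exp(-2).
```

After the fast transient the numerical solution tracks the slow manifold to within O(\epsilon) and the slow variable decays like the solution of the reduced equation, which is exactly the regular limiting behaviour described above.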

Due to my growing interest in this subject I invited Peter Szmolyan from Vienna, who is a leading expert in this field, to come and give a colloquium here in Mainz, which he did yesterday. One of his main themes was that in many models arising in applications the splitting into the variables x and y cannot be done globally. Instead it may be necessary to use several splittings to describe different parts of the dynamics of one solution. He discussed two examples in which these ideas are helpful for understanding the dynamics better and establishing the existence of relaxation oscillations. The first is a model of Goldbeter and Lefever (Biophys. J. 12, 1302) for glycolysis. It is different from the model I mentioned in a previous post but is also an important part of the chapter of Goldbeter’s book which I discussed there. The model of Goldbeter and Lefever was further studied theoretically by Segel and Goldbeter (J. Math. Biol. 32, 147). On this basis a rigorous analysis of the dynamics including a proof of the existence of relaxation oscillations was given in a recent paper by Szmolyan and Ilona Kosiuk (SIAM J. Appl. Dyn. Sys. 10, 1307). The other main example in the talk was a system of equations due to Goldbeter which is a kind of minimal model for the cell cycle. It is discussed in chapter 9 of Goldbeter’s book.

I have the feeling that GSPT is a body of theory which could be very useful for my future work and so I will do my best to continue to educate myself on the subject.

Higher dimensional bifurcations

September 9, 2013

Here I return to a subject which has been mentioned in this blog on several occasions, bifurcation theory. The general set-up is that we have a dynamical system of the form \dot x=f(x,\mu) where x\in R^m denotes the unknowns and \mu\in R^k stands for parameters. A central aim of the theory is to find conditions under which the system is topologically equivalent to a simple model system. In other words it differs from the model system only by invertible continuous changes of variables and parameters. It is natural to start in the case where m=1 and k=1. By choosing the coordinates appropriately we can focus on the case x=0 and \mu=0. The subject can be explored by going to successively more complicated cases. If f(0,0)=0 and f'(0,0)\ne 0 we have the case that there is no bifurcation. It follows from the implicit function theorem that for parameters close to zero there is exactly one stationary solution close to zero and it has the same character (source or sink) as the stationary solution for \mu=0. The case where f(0,0)=0, f'(0,0)=0 and f''(0,0)\ne 0 is the fold bifurcation, which was discussed in a previous post. In this case these conditions together with the condition f_\mu(0,0)\ne 0 imply topological equivalence to a standard system. The case where f(0,0)=0, f'(0,0)=0, f''(0,0)=0 and f'''(0,0)\ne 0 is the cusp bifurcation. Here topological equivalence to a standard case does not hold in a context with only one parameter. It can be obtained by passing to the case k=2 and requiring that a suitable combination of derivatives with respect to \mu_1 and \mu_2 does not vanish. I was able to use this to prove the existence of more than one stable stationary solution in a model for the competition between Th1 and Th2 dominance in the immune system, cf. a previous post on this subject. The system of interest in that case is of dimension four and the fact that I could obtain results using a system of dimension one resulted from exploiting symmetry properties.
Later I was able to prove the existence of more stable stationary solutions (four of them) of that system using other methods.
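For the cusp the role of the two parameters can be seen directly on the standard form. The following sketch (my own, with arbitrarily chosen parameter values and grid) counts the stationary solutions of \dot x=\mu_1+\mu_2 x-x^3 and confirms that the number jumps from one to three when the parameters cross into the cusp region 4\mu_2^3>27\mu_1^2.

```python
# Counting stationary solutions of the cusp normal form
#   x' = mu1 + mu2*x - x**3
# by counting sign changes of the right hand side on a fine grid.

def num_stationary(mu1, mu2, lo=-5.0, hi=5.0, n=20000):
    f = lambda x: mu1 + mu2 * x - x**3
    count, prev = 0, f(lo)
    for i in range(1, n + 1):
        cur = f(lo + (hi - lo) * i / n)
        if prev == 0.0 or prev * cur < 0.0:
            count += 1   # a zero of f lies at or just before this grid point
        prev = cur
    return count

inside = num_stationary(0.0, 1.0)    # 4*mu2**3 = 4 > 0 = 27*mu1**2: three
outside = num_stationary(1.0, 0.0)   # 4*mu2**3 = 0 < 27 = 27*mu1**2: one
```

Only in the two-parameter family can both behaviours be reached, which is one way of seeing why topological equivalence to a standard form needs k=2 here.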

It is possible to talk about fold and cusp bifurcations in higher dimensional systems. This can be done in the case that the linearization of the system at the bifurcation point (which is necessarily singular) has a zero eigenvalue with a corresponding eigenspace of dimension one and no other eigenvalues with real part zero. Then the reduction theorem tells us that near the stationary point the dynamical system is topologically equivalent to the product of a standard saddle with the restriction of the system to a one-dimensional centre manifold. This centre manifold is by definition tangent to the eigenspace corresponding to the zero eigenvalue of the linearization at the bifurcation point. It is now rather clear what has to be done in order to analyse this type of situation. It is necessary to determine an approximation of sufficiently high order to the centre manifold and to carry out a qualitative analysis of the dynamics on the centre manifold. In practice this leads to cumbersome calculations and so it is worth thinking carefully about how they can best be organized. A method of doing both calculations together in a way which makes them as simple as possible is described in Section 8.7 of Kuznetsov’s book on bifurcation theory. A number of bifurcations, including the fold and the cusp, are treated in detail there. One way of understanding why the example from immunology I mentioned above was relatively easy to handle is that in that case the centre manifold could be written down explicitly. I did not look at the problem in that way at the time but with hindsight it seems to be an explanation of why certain things could be done.

In the case of higher dimensional systems the quantities which should vanish or not in order to get a certain type of bifurcation are replaced by more complicated expressions. In the fold or the cusp f''(0,0) is replaced by [L_i({\partial^2 f^i}/{\partial x_j\partial x_k})R^jR^k](0,0) where L and R are left and right eigenvectors of the linearization corresponding to the eigenvalue zero. Naively one might hope that for the cusp f'''(0,0) would be replaced by [L_i({\partial^3 f^i}/{\partial x_j\partial x_k\partial x_l})R^jR^kR^l](0,0) but unfortunately, as explained in Kuznetsov’s book, this is not the case. There is an extra correction term which involves the second derivatives and which is somewhat inconvenient to calculate. We should be happy that a topological normal form can be obtained at all in these cases. Going more deeply into the landscape of bifurcations reveals cases where this is not possible. An example is the fold-Hopf bifurcation where there is one zero eigenvalue and one pair of non-zero imaginary eigenvalues. There it is possible to get a truncated normal form which is a standard form for the terms of the lowest orders. It is, however, in general the case that adding higher order terms to this gives topologically inequivalent systems. A simple kind of mechanism behind this is the breaking of a heteroclinic orbit. It is also possible that things can happen which are much more complicated and not completely understood. There is an extended discussion of this in Kuznetsov’s book.
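The quadratic coefficient can at least be evaluated numerically without much pain. Here is a small sketch (the planar system is invented for illustration, not taken from Kuznetsov’s book) which approximates [L_i({\partial^2 f^i}/{\partial x_j\partial x_k})R^jR^k](0,0) by a second difference along the direction R.

```python
# Finite-difference evaluation of a = L_i (d^2 f^i / dx_j dx_k) R^j R^k
# for a made-up planar system with a fold at the origin when mu = 0:
#   f1 = mu + x**2 + x*y,   f2 = -y + x**2
# The Jacobian at the origin is diag(0, -1), so the left and right null
# eigenvectors are L = R = (1, 0) and the exact value is d^2 f1/dx^2 = 2.

def F(x, y, mu=0.0):
    return (mu + x**2 + x * y, -y + x**2)

R = (1.0, 0.0)   # right eigenvector for the eigenvalue zero
L = (1.0, 0.0)   # left eigenvector for the eigenvalue zero

h = 1e-4         # step for the second difference along R
Fp = F(h * R[0], h * R[1])
F0 = F(0.0, 0.0)
Fm = F(-h * R[0], -h * R[1])
a = sum(L[i] * (Fp[i] - 2.0 * F0[i] + Fm[i]) for i in range(2)) / h**2
```

Since in this toy example the Jacobian at the origin is diagonal, the eigenvectors can be written down by hand; in general they would have to be computed, and for the cubic cusp coefficient the extra correction term mentioned above would also be needed.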

The practical use of proofs

July 31, 2013

A question which has come up in this blog from time to time is that of the benefits of mathematical proofs. Here I want to return to it. The idea of a proof is central in mathematics. A key part of research papers in mathematics is generally stating and proving theorems. In many mathematics lectures for students a lot of the time is spent stating and proving theorems. In practical applications of mathematics, to theoretical parts of other sciences or to real life problems, proofs play a much less prominent role. Very often theoretical developments go more in the direction of computer simulations or use heuristic approaches. A scientist used to working in this way may be sceptical about what proofs have to contribute. Thus it is up to us mathematicians to understand what mathematics has to offer and then to explain it to a wider circle of scientists.

Thinking about this subject I remembered some examples from my earlier research area in mathematical relativity. Some years ago, at a time when dark energy and accelerated cosmological expansion were not yet popular research subjects, a paper was published containing numerical simulations of a model of the universe with oscillating volume. In other words in this model the expansion of the universe alternately increased and decreased. This can only happen if there is accelerated expansion at some time. In the model considered there was no cosmological constant and no exotic matter with negative pressure. In this situation it has been well known for a very long time that cosmic acceleration is impossible. Proving it is a simple argument using ordinary differential equations. Apparently this was not known to the authors since they presented numerical results blatantly contradicting this fact. This is an example of the fact that once something has been proved it can sometimes be used to show immediately that certain results must be wrong, although it does not always indicate what has gone wrong. So what was wrong in the example? In the problem under consideration there is an ordinary differential equation to be solved, but that is not all. To be physically relevant an extra algebraic condition (the Hamiltonian constraint) must also be satisfied. There is an analytical result which says that if a solution of the evolution equation satisfies the constraint at some time it satisfies it for ever. Numerically the constraint will only be satisfied approximately by the initial data, and the theorem just mentioned does not rule out that this small but non-zero violation grows rapidly, so that at later times the constraint is far from being satisfied. Thus the situation was probably as follows: the calculation successfully simulated the solution of an ordinary differential equation, even the right ordinary differential equation, but it was not the physically correct solution.
Incidentally the error in the constraint tends to act like exotic matter.
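The mechanism is easy to reproduce in a toy calculation. The example below is not the cosmological system, just the simplest conserved-quantity analogue I could think of: a harmonic oscillator evolved by forward Euler, with the ‘constraint’ imposed only in the initial data.

```python
# For the oscillator x' = v, v' = -x the quantity C = x**2 + v**2 - 1
# plays the role of the constraint: if it vanishes initially it vanishes
# forever for the exact solution.  A forward Euler "free evolution"
# imposes C = 0 only in the initial data, and each step multiplies
# x**2 + v**2 by exactly 1 + dt**2, so the violation grows steadily.

x, v = 1.0, 0.0          # constraint satisfied exactly at t = 0
dt, steps = 0.01, 10000  # integrate to t = 100
for _ in range(steps):
    x, v = x + dt * v, v - dt * x
C = x**2 + v**2 - 1.0
# (1 + dt**2)**steps is about e, so C ends up near e - 1, nowhere
# close to the conserved value 0.
```

By t=100 the ‘constraint’ has drifted to order one, so any conclusion drawn from this numerical solution about long-time behaviour would be worthless, exactly as in the examples above; re-imposing the constraint along the way is the cure, as in the Berger-Moncrief story.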

A similar problem came up in a more sophisticated type of numerical calculation. This was work by Beverly Berger and Vincent Moncrief on the dynamics of inhomogeneous cosmological models near the big bang. The issue was whether the approach of these solutions to the limit is monotone or oscillatory. The expectation was that it should be oscillatory but at one point the numerical calculations showed monotone behaviour. The authors were suspicious of the conclusion although they did not understand the problem. Being careful and conscientious they avoided publishing these results for a year (if I remember correctly). This was good since they found the explanation for the discrepancy. It was basically the same explanation as in the previous example. They were using a free evolution code where the constraint is used for the initial data and never again. Accumulating violation of the constraint was falsifying the dynamics. In this case the sign of the error was different so that it tended to damp oscillations rather than creating them. Changing the code so as to solve the constraint equation repeatedly at later times restored the correct dynamics.

Another example concerns a different class of homogeneous cosmological models. Here the equations, which are ordinary differential equations, can be formulated as a dynamical system on the interior of a rectangle. This system can be extended smoothly to the rectangle itself. Then the sides of the rectangle are invariant and the corners are stationary solutions. The solutions in the boundary tend to one corner in the past and another corner in the future. They fit together to form a cycle, a heteroclinic cycle. I proved together with Paul Tod that almost all solutions which start in the interior converge to the rectangle in the approach to the singularity, circling infinitely many times around it. The top right hand corner of the rectangle, call it N, plays an important role in the story which follows. A relatively straightforward numerical simulation of this system suggested that there were solutions which started close to N and converged to it directly, in contradiction to the theorem. A more sophisticated simulation, due to Beverly Berger, replaced this by the scenario where some solutions starting close to N go once around the rectangle before converging to N. Solutions which go around more than once could not be found, despite trying hard. (The results I am talking about here were never published – I heard about them by word of mouth.) So what is the problem? The point N is non-hyperbolic. The linearization of the system there has a zero eigenvalue. The point is what is called a saddle node. On one side of a certain line (corresponding to one side of the rectangle) the solutions converge to N. If, due to inaccuracies in the calculation, the solution gets on the wrong side of the line we get the first erroneous result. On the physical side of the line the point N is a saddle and all physical solutions pass it without converging to it.
The problem is that, due to the non-hyperbolic nature of N, a solution which starts near the rectangle and goes once around it comes back extremely close to it. In fact it is so close that a program using a fixed number of decimal digits can no longer resolve the behaviour. Thus, as was explained to me by Beverly, this type of problem can only be solved using a program which is capable of implementing an arbitrarily high number of digits. At the time it was known how to do this but Beverly did not have experience with that and as far as I know this type of technique has never been applied to the problem. (The reader interested in details of what was proved is referred to the paper in Class. Quantum Grav. 16, 1705 (1999).)

The general conclusions which are drawn here are simple and commonplace. The computer can be a powerful ally but the reliability of what comes out is strongly dependent on how appropriate the formulation of the problem was. This cannot always be known in advance and so continuous vigilance is highly recommended. Sometimes theorems which have been proved can be powerful tools in discovering flaws in the formulation. In the case of arguments which are analytical but heuristic the same is true but in an even stronger sense. The reliability of the conclusions depends crucially on the individual who did the work. At the same time even the best intuition should be subjected to careful scrutiny. The most prominent example of this in general relativity is that of the singularity theorems of Penrose and Hawking, starting in the mid-1960s, which led spacetime singularities to be taken seriously on the basis of rigorous proofs contradicting the earlier heuristic work of Khalatnikov and Lifshitz.

The general questions discussed here are ones I will certainly return to. Here I have highlighted two dangerous situations: simulations which do not exactly implement conservation laws in the system being modelled and non-hyperbolic steady states of a dynamical system.

The Michaelis-Menten limit

July 2, 2013

In a previous post I wrote about the Michaelis-Menten reduction of reactions catalysed by enzymes in which a single equation (effective equation) is the limit of a system of two equations (extended equations) as a parameter \epsilon tends to zero. What I did not talk about is the sense in which solutions of the effective equation approximate those of the extended ones. I was sure that this must be well-known but I did not know a source for it. Now I discovered that what I had been seeking is to be found in a very nice form in a book which had been standing on a shelf in my office for many years. This is the book ‘Asymptotic Expansions for Ordinary Differential Equations’ by Wolfgang Wasow and the part of relevance to Michaelis-Menten reduction starts on p. 249. Michaelis-Menten is not mentioned there but the key mathematical result is exactly what is needed for that application. The theorem is due to Tikhonov but the original paper is in Russian and so not accessible to me. For convenience I repeat the equations from the previous post on this subject: \dot u=f(u,v), \epsilon\dot v=g(u,v). This is the type of system treated in Tikhonov’s theorem, including the possibility that u and v are vector-valued.

The statement of the theorem is as follows. On any finite time interval [0,T] the function u in the extended system converges uniformly to the solution of the reduced system as \epsilon\to 0. Given a solution of the reduced system it is possible to compute a corresponding function v. On the time interval (0,T] the function v in the extended system converges to the function v coming from the reduced system, uniformly on compact subsets. Of course this conclusion requires some hypotheses on the functions f and g. The key hypothesis is that for each fixed value of u the corresponding stationary solution of the equation for v (with \epsilon\ne 0) is asymptotically stable.
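A toy instance of this setup (my own choice of f and g, not an example from Wasow’s book) makes the statement easy to test numerically, since the reduced equation can be solved explicitly.

```python
# A toy instance of Tikhonov's setup:
#   du/dt = f(u, v) = -u*v,    eps * dv/dt = g(u, v) = u - v.
# Here g = 0 gives v = h0(u) = u and the reduced equation du/dt = -u**2
# with u(0) = 1 has the explicit solution u(t) = 1/(1 + t), so the
# uniform error on [0, T] can be measured directly.

def sup_error(eps, T=1.0, dt=1e-5):
    u, v = 1.0, 0.0               # v starts off the manifold v = u
    err = 0.0
    for i in range(int(T / dt)):
        # explicit Euler, with dt small enough to resolve the fast scale
        u, v = u + dt * (-u * v), v + dt * (u - v) / eps
        err = max(err, abs(u - 1.0 / (1.0 + (i + 1) * dt)))
    return err

errors = [sup_error(eps) for eps in (1e-2, 1e-3)]
# The uniform error in u shrinks with eps, as Tikhonov's theorem predicts.
```

The error in u decreases roughly in proportion to \epsilon, while v has a boundary layer near t=0, which is why the convergence of v is only uniform on compact subsets of (0,T].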

With this result in hand it is possible to compute higher order corrections in \epsilon. This was first done by Vasil’eva and is also explained in the book of Wasow. The result was extended to a statement global in t by Hoppensteadt, Trans. Amer. Math. Soc. 123, 521. I expect that there are more modern treatments of these things in the literature but I find the sources quoted here very helpful for a beginner like myself. There remains the question of the relation to the usual Michaelis-Menten procedure. This is nicely discussed in a paper by Heineken et al., Math. Biosci. 1, 95.

Population dynamics and chemical reactions

June 21, 2013

The seminar which I mentioned in a recent post has caused me to go back and look carefully at a number of different models in biology and chemistry. It has happened repeatedly that I felt I could glimpse some mathematical relations between the models. Now I have spent some time pursuing these ideas. One aspect is that many of the systems of ODEs coming from biological models can be thought of as arising from chemical reaction networks with mass action kinetics, even when the unknowns are not chemical concentrations. In this context it should be mentioned that if an ODE system arises in this way the chemical network which leads to it need not be unique.

The first example I want to mention is the Lotka-Volterra system. Today it is usually presented as a model of population dynamics. Often the example of lynx and hares is used and this is natural due to the intrinsic attractiveness of furry animals. The story of Volterra and his son-in-law also has a certain human interest. The fact that Lotka found the equations earlier is usually just a side comment. In any case, the population model is equivalent to an ODE system coming from a reaction network which was described by Lotka in a paper in 1920 (J. Amer. Chem. Soc. 42, 1595). The network is defined by the reactions A_1\to 2A_1, A_1+A_2\to 2A_2, A_2\to 0 and A_1+A_2\to 0. The last entry in the list can be thought of as an alternative reaction producing another substance which is not included explicitly in the model. A simpler version, also considered in Lotka’s paper, omits this last reaction. In his book ‘Mathematical aspects of reacting and diffusing systems’ Paul Fife looks at this simpler system from the point of view of chemical reaction network theory. He computes its deficiency \delta in the sense of CRNT to be one. It has three linkage classes, all of deficiency zero, and so the deficiency one theorem does not apply. The chemical system introduced by Lotka was not supposed to correspond to a system of real reactions. He was just looking for a hypothetical reaction which would exhibit sustained oscillations.

Next I consider the fundamental model of virus dynamics as given in the book of Nowak and May which has previously been mentioned in this blog. Something which I only noticed now is that in a sense there is a term missing from the model. This represents the fact that when a virion enters a cell to infect it that virion is removed from the virus population. This fact is apparently not mentioned in the book. In an alternative model discussed in a paper of Perelson and Nelson (SIAM Rev. 41, 3) they also omit this term and discuss possible justifications for doing so. The fundamental model as found in the book of Nowak and May can be interpreted as the equations coming from a network of chemical reactions. This is also true of the modified version where the missing term is replaced. Both systems (at least the ones I found) have deficiency two.

Several well-known models in epidemiology can also be obtained from chemical networks. For instance the SIR model can be obtained from the reactions S+I\to 2I and I\to 0. This network has deficiency zero and is not weakly reversible. The deficiency zero theorem applies and tells us that there is no positive stationary solution. Of course this fact is nothing new for this example. The SIS model is similar but in that case the system has deficiency one and a positive stationary solution exists for certain parameter values. You might complain that the games I am playing do not lead to useful insights and you may be right. Nevertheless, seeing analogies between apparently unrelated things is a well-known strength of mathematics. There is also one success story related to the things I have been talking about here, namely the work of Korobeinikov on the standard model of virus dynamics mentioned in a previous post. He imported a Lyapunov function of a type known for epidemiological models in order to prove the global asymptotic stability of stationary solutions of the fundamental model of virus dynamics.
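The bookkeeping \delta=n-l-s (number of complexes, minus number of linkage classes, minus the rank of the stoichiometric subspace) can be checked mechanically. The following sketch of mine does this for the SIR network above.

```python
from fractions import Fraction

# Deficiency delta = n - l - s for the SIR network S + I -> 2I, I -> 0.
# Complexes are tuples of stoichiometric coefficients in the species
# order (S, I); n is the number of complexes, l the number of linkage
# classes and s the rank of the stoichiometric subspace.

reactions = [((1, 1), (0, 2)),   # S + I -> 2I
             ((0, 1), (0, 0))]   # I -> 0

complexes = sorted({c for r in reactions for c in r})
n = len(complexes)

# linkage classes = connected components of the (undirected) reaction graph
parent = {c: c for c in complexes}
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c
for a, b in reactions:
    parent[find(a)] = find(b)
l = len({find(c) for c in complexes})

# rank of the span of the reaction vectors, by Gaussian elimination
vectors = [[Fraction(b[i] - a[i]) for i in range(2)] for a, b in reactions]
s = 0
for col in range(2):
    pivot = next((v for v in vectors if v[col] != 0), None)
    if pivot is None:
        continue
    vectors.remove(pivot)
    vectors = [[x - v[col] / pivot[col] * p for x, p in zip(v, pivot)]
               for v in vectors]
    s += 1

delta = n - l - s   # 4 complexes, 2 linkage classes, rank 2: delta = 0
```

Running the same computation on any of the other networks mentioned in this post only requires changing the list of reactions.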

Guillain-Barré syndrome

May 9, 2013

Yesterday I went to a talk by Hans-Peter Hartung about autoimmune diseases of the peripheral nervous system. To start with he gave a summary of similarities and differences between the peripheral and central nervous systems and their relations to the immune system. Of the diseases he later discussed one which played a central role was Guillain-Barré syndrome. In fact he emphasized that this ‘syndrome’ is phenomenologically defined and consists of several diseases with different underlying mechanisms. There is one form which is sporadic in its occurrence and predominant in the western world and another which can take an epidemic form and occurs in China. At a time when medical services in China were very poor this kind of epidemic had very grave consequences. Now, however, I want to return to the ‘classical’ form of Guillain-Barré.

GBS is a disease which is fascinating for the outside observer and no doubt terrifying for the person affected by it. I first learned about it in an account – I do not remember where I read it – of the case of a German doctor. He was on holiday in Tenerife when he fell ill. He recognized the characteristic pattern of symptoms, suspected GBS and got on the first plane home. He wanted to optimize the treatment he got by going to the best medical centre he knew to get treated. The treatment was successful. In GBS the immune system attacks peripheral nerves and this leads to a rapidly progressive paralysis over the course of a few days. In a significant proportion of patients this leads to failure of control of the muscles responsible for breathing and thus to death. For this reason it is important for the patient to quickly reach a place where the disease will be recognized and they can be put on a ventilator when needed. The disease can then also be treated by plasmapheresis or immunoglobulins. In the talk it was mentioned that in the epidemics in China it was often necessary to put patients on a manual ventilator which was operated by their relatives. If this acute phase can be overcome the patient usually recovers rather completely, although some people have lasting damage. It is typical that in a single patient the disease does not recur although there are a small number of cases where there are several relapses and disability accumulates.

It has been suggested that influenza infections, or influenza vaccinations, can lead to an increased risk of developing GBS. This has been an important element of controversies surrounding vaccinations, including those against H1N1. I wrote briefly about this in a previous post. In the talk the speaker mentioned a recent Canadian study indicating a slight risk of GBS due to vaccination against influenza. Nevertheless this risk was still a lot less than that due to actually becoming infected with influenza. There has also been a German study with similar results which, however, has not yet been published. There is another kind of infection which appears to carry a much higher risk, namely that with the bacterium Campylobacter jejuni. I actually mentioned this in my previous post but had completely forgotten about it. In the talk it was pointed out that this infection is quite common while GBS is very rare. So the question arises of why GBS is not more frequent. A possible explanation is that the bacterium is rather variable. The suggested mechanism is molecular mimicry (and it seems that GBS is the first case where molecular mimicry was precisely documented). In other words, certain molecules of the bacterium are similar to molecules belonging to the nervous system. Then it happens that antibodies against the bacterium cause damage to the nerves. Depending on the variant of the bacterium this similarity of the two types of molecules is more or less strong, so that the effect is more or less pronounced. In this case there is some idea of exactly which molecules show this similarity. They are so-called gangliosides, a type of glycolipid.

This has reminded me of an issue which fascinated me before. Is there a simple explanation of why some autoimmune diseases show repeated relapses while others show a single episode (like typical GBS), a continuous progression or a combination of relapsing and progressive phases at different times? Has anyone collected data on these patterns over a variety of autoimmune diseases?

Hello Mainz

April 18, 2013

This post is in some sense dual to the earlier one ‘goodbye to Berlin’. To start with I can confirm that there is no shortage of Carrion Crows (and no Hooded Crows) in Mainz. When I arrived here and was waiting for my landlord to come and let me into my flat I saw some small and intensely green spots of colour in a row of trees in front of me. I knew the source of these – they were what I could see of Ring-Necked Parakeets. I have known for a long time that these birds live wild in England but it was only relatively recently, in the course of my activity looking for jobs, that I realised they were so common in parts of Germany. While in Heidelberg for an interview I observed a large number of them making a lot of noise in a small wood opposite the main railway station. I also saw some of them when I came to Mainz for the interview which eventually led to my present job. In my old institute in Golm I often used to see Red Kites out of my office window. It occurred to me that these might be replaced by Black Kites in Mainz. During my first weekend here I was walking across the campus of the university when I saw a large and unfamiliar bird of prey approaching me. When it came closer I realised that it was a Black Kite. I enjoyed the encounter. Since then I have also seen one from my office window. The Red Kite is a beautiful bird but for some reason I feel closer to its dark relative. It gives me a feeling of the south since the first place I saw these birds many years ago was in the Camargue.

Eva and I have been using Skype to maintain contact. I feel that this big change in our life has not been without benefits for our communication with each other and when I was home last weekend it was a richer experience than many weekends in the last few years. I appreciate the warm welcome I have had from my colleagues here in Mainz and my first days here, while sometimes a bit hectic, have been rewarding. Breaking the routine of years opens up new possibilities. I assured myself that I will not completely have to do without interesting biological talks here by going to a lecture by Alexander Steinkasserer on CD83. This taught me some more about dendritic cells for which this surface molecule is an important marker.

This is the first week of lectures here and yesterday brought the first concrete example of the new direction in my academic interests influencing my teaching, with the start of my seminar on ‘ordinary differential equations in biology and chemistry’. The first talk was on Lotka-Volterra equations. The subjects to be treated by other students in later lectures include ones a lot further from classical topics.

Modelling the Calvin cycle

March 18, 2013

Some years ago the Max Planck Institute for Molecular Plant Physiology organized a conference on metabolic networks. I decided to see what was going on in the institute next to the one where I work and I went to some of the talks. The one which I found most interesting was by Zoran Nikoloski. His subject was certain models for the Calvin cycle, which is part of photosynthesis. A motivating question was whether photosynthesis can work in two different stable steady states. If that were the case it might be possible to influence the plant to move from one state to another and, in the best case, to increase its production of biomass. This is of interest for biotechnology. Mathematically the question is that of multistationarity, i.e. whether a system of evolution equations admits more than one stationary solution. Beyond this it is of interest whether there can be more than one stable stationary solution. In fact in this context the issue is not that of absolute uniqueness of stationary solutions but of uniqueness within a given stoichiometric compatibility class. This means that the solution is unique when certain conserved quantities are fixed. One thing I found attractive about the presentation was that the speaker was talking about rigorous mathematical results on the dynamics and not just about numerically calculating a few solutions.

If the system is modelled deterministically and diffusion is neglected there results a system of ordinary differential equations for the concentrations of the substances involved as functions of time. It is necessary to choose which substances should be included in the description. In a basic model of the Calvin cycle there are five substances. In the work discussed in the talk of Nikoloski and in a paper he wrote with Sergio Grimbs and others (Biosystems 103, 212) various ODE systems based on this starting point are considered. They differ by the type of kinetics used. They consider mass action kinetics (MA), extended Michaelis-Menten kinetics where the enzymes catalysing the reactions are included explicitly (MM-MA) and effective Michaelis-Menten (MM) obtained from the system MM-MA by a singular limit. The systems MA and MM consist of five equations while the system MM-MA consists of nineteen equations. In the paper of Grimbs et al. it is shown among other things that the system MM never admits a stable stationary solution, whatever the reaction constants, while the system MM-MA can exhibit two different stationary solutions.
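To illustrate the relation between these types of kinetics, here is a minimal numerical sketch of how an effective Michaelis-Menten rate arises as a singular limit of a mass-action system with an explicit enzyme, for a single reaction E + S ⇌ C → E + P. This is not the Calvin cycle model itself (whose equations are not reproduced here) and all rate constants are invented for illustration.

```python
# Michaelis-Menten as a singular limit of mass action for E + S <-> C -> E + P.
# Toy illustration, not the Calvin cycle model; all constants are invented.
k1, km1, k2 = 10.0, 1.0, 1.0   # binding, unbinding, catalysis rates
e0, s0 = 0.01, 1.0             # total enzyme small compared to substrate
Km = (km1 + k2) / k1           # Michaelis constant

def substrate_full(t_end, dt=1e-3):
    """Integrate the full mass-action system (forward Euler)."""
    s, c = s0, 0.0
    for _ in range(int(t_end / dt)):
        e = e0 - c                       # conservation of total enzyme
        ds = -k1 * e * s + km1 * c
        dc = k1 * e * s - (km1 + k2) * c
        s += dt * ds
        c += dt * dc
    return s

def substrate_mm(t_end, dt=1e-3):
    """Integrate the reduced (effective Michaelis-Menten) equation."""
    s = s0
    for _ in range(int(t_end / dt)):
        s += dt * (-k2 * e0 * s / (Km + s))
    return s

# With e0 small the two descriptions agree closely on the slow timescale.
print(substrate_full(100.0), substrate_mm(100.0))
```

The free substrate in the full model differs from the reduced variable by terms of the order of the total enzyme concentration (the substrate bound in the complex), which is why the agreement improves as e0 shrinks.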

After the talk I started reading about this subject and I also talked to Nikoloski about it. Later I began doing some research on these systems myself. Some technical difficulties which arose (which I wrote about in a previous post) led me to consult Juan Velázquez and he joined me in this project. Now we have written a paper on models for the Calvin cycle. In a case where there is only one stationary solution and it is unstable it is of interest to consider the final fate of general solutions of the system. For some initial conditions the concentrations of all substances tend to zero at late times. For other data (a whole open set) we were able to show that all concentrations tend to infinity as t\to\infty. We called the latter class runaway solutions. These do not seem to be of direct biological relevance but they might be helpful in choosing between alternative models which are more or less appropriate. The proof of the existence of runaway solutions for the MA system is somewhat complicated since this turns out to be a system with two different timescales. The system MM-MA also admits runaway solutions. Although the system is larger than MA the existence proof is simpler and in fact can be carried out in the context of a larger class of systems. Runaway solutions are also found for the system MM.
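The dichotomy between solutions tending to zero and runaway solutions can be caricatured by a single scalar equation. This is only an illustration of the type of behaviour, not one of the systems in the paper, and the right-hand side is not of mass action form.

```python
# Caricature of decay vs. runaway: x' = x(x-1)/(x+1).
# Below the unstable steady state x = 1 solutions decay to zero; above it
# they grow without bound as t -> infinity (roughly exponentially, since
# the right-hand side behaves like x for large x).
def evolve(x0, t_end=10.0, dt=1e-3):
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * x * (x - 1.0) / (x + 1.0)
    return x

print(evolve(0.5))   # decays towards zero
print(evolve(2.0))   # runs away
```

In the actual MA system the analogous statement holds for a whole open set of initial data and its proof is much harder, in part because of the two timescales mentioned above.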

In the paper of Grimbs et al. one system is considered which includes the effect of diffusion. Restricting to homogeneous solutions of this system gives a system of ODE called MAdh which is different from the system MA. The difference is that while the concentration of ATP is a dynamical variable in MAdh it is taken to be constant in MA. We showed that the system MAdh has zero, one or two stationary solutions depending on the values of the parameters and that all solutions are bounded. Thus runaway solutions are ruled out. Intuitively this is due to the fact that the supply of energy is bounded but this heuristic argument is far from providing a proof. There are many other models of the Calvin cycle in the literature. In general they consider the reactions between a larger class of substances. It is an interesting task for the future to extend the results obtained up to now to these more general models. This post has been very much concerned with the mathematics of the problem and has not said much about the biology. The reactions making up the Calvin cycle were determined experimentally by Melvin Calvin and I can highly recommend his Nobel lecture as a description of how this was achieved.

Absolute concentration robustness

February 20, 2013

In the past years I have been on the committees for many PhD examinations. A few days ago, for the first time, I was on the committee for a thesis on a subject belonging to the area of mathematical biology. This was the thesis of Jost Neigenfind and it was concerned with a concept called absolute concentration robustness (ACR).

The concentration of a given substance in cells of a given type varies widely between the individual cells. (Cf. also this previous post.) It is of interest to identify mechanisms which can ensure that the steady state concentration of a particular substance is independent of initial data. (This is a way in which the output of a system can be independent of background variation.) In saying this I am assuming implicitly that more general solutions converge to steady states. A more satisfactory formulation can be obtained as follows. In a chemical reaction network there are usually a number of conserved quantities, say C_\alpha. These define affine subspaces of the state space, the stoichiometric compatibility classes. For many systems there is exactly one stationary solution in each stoichiometric compatibility class. The condition of interest here is that the value of one of the concentrations, call it x_1, in the steady state solution is independent of the parameters C_\alpha. (The other concentrations x_i,i>1 will in general depend on the C_\alpha.) This property is ACR. I first heard of this in a talk by Uri Alon at the SMB conference in Krakow in the summer of 2011. The basic idea is explained clearly in a paper of Shinar and Feinberg (Science 327, 1389). They present a general theoretical approach but also describe some biological systems where ACR (in a suitable approximate sense) has been observed experimentally. In the terminology of Chemical Reaction Network Theory (CRNT) the examples they discuss have deficiency one. They mention that ACR is impossible in systems of deficiency zero. There is no reason why it should not occur in systems of deficiency greater than one, but in those cases more complicated dynamics make it more difficult to decide whether the property holds.
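A standard toy illustration of ACR (simpler than the biological examples just mentioned) is the two-species mass-action network A + B → 2B, B → A. At any positive steady state the rate balance k_1 ab = k_2 b forces a = k_2/k_1, independently of the conserved total a + b. A quick numerical check, with arbitrary rate constants:

```python
# Toy network with absolute concentration robustness: A + B -> 2B, B -> A.
# At any positive steady state k1*a*b = k2*b, so a = k2/k1 regardless of
# the conserved total a + b. Rate constants here are arbitrary.
k1, k2 = 2.0, 1.0

def steady_a(a0, b0, t_end=50.0, dt=1e-3):
    """Integrate the mass-action system and return the concentration of A."""
    a, b = a0, b0
    for _ in range(int(t_end / dt)):
        flux = k1 * a * b - k2 * b   # net rate of A + B -> 2B minus B -> A
        a += dt * (-flux)            # conservation: a + b is preserved exactly
        b += dt * flux
    return a

# Different conserved totals a + b, same steady-state value of A:
print(steady_a(1.5, 0.5), steady_a(4.0, 1.0))   # both approach k2/k1 = 0.5
```

The concentration of B, by contrast, does depend on the total: it takes up whatever is left over, b = (a + b) - k_2/k_1.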

The result of Shinar and Feinberg only covers a class of reaction networks which is probably very restricted. What Neigenfind does in his thesis is to develop more general criteria for ACR and computer algorithms which can check these criteria for given systems. The phenomenon of ACR is interesting since it is a feature which may be more common in reaction systems coming from biology than in generic systems. At least there is a good potential reason why this might be the case.

Goodbye to Berlin

February 1, 2013

For this post I could not resist the temptation to borrow the title of Christopher Isherwood’s novel although what I am writing about here has very little to do with his book. The connection of the title to the content is that I will soon be leaving Berlin after living here more than fifteen years. I have accepted a professorship at the University of Mainz and I will move there in April. The first time I came to Berlin I landed at Tegel airport and I interpreted the Hooded Crow I saw beside the runway as a good omen. This requires some explanation. In those days the Hooded Crow (Corvus corone cornix) was a subspecies of the Carrion Crow. In the meantime it has been promoted to the rank of a species but that will not concern me here – I am not sure whether I feel I should congratulate it on receiving this honour. It differs from the nominate form (according to the old classification) by having a grey body while Corvus corone corone (Carrion Crow in the narrower sense) is all black. These two forms have the classical property of subspecies that they are allopatric. In other words they occur in more or less disjoint regions. On the boundary between the regions there is little interbreeding. The Orkney Islands where I grew up belong to the land of the Hooded Crow. Most of Great Britain and in fact most of Western Europe belong to the domain of the Carrion Crow. Even Aberdeen, where I studied and did my PhD, belongs to the land of the Carrion Crow. This helps to explain why I associate the Hooded Crow with ‘home’ and the Carrion Crow with ‘foreign parts’. It also has to do with the fact that there was a Hooded Crow which nested regularly in a garden near where I grew up in Orkney. I would climb the tree from time to time to keep an eye on the development of the brood and ring the chicks at the right moment. For these reasons the bird in Tegel seemed to tell me I was coming home. 
Now I am daring to venture once again (and probably for most of the rest of my life) into the land of the Carrion Crow.

When leaving a place it is natural to think about the good things which you experienced there. What were the best things about Berlin for me? The best thing of all is that Berlin was where I met my wife Eva. Eichwalde, where she lived at that time and where we both live now, has a very special feeling for me which will never go away. (Just for the record, we do not really live in Eichwalde but that is where the nearest train station is, with the result that the platform of the station there has something of the gates of Paradise for me.) Of course I cannot fail to mention the Max Planck Institute for Gravitational Physics which has provided me with a scientific home during all that time. I am grateful to the successive leaders of the mathematical group there, Jürgen Ehlers and Gerhard Huisken, for the working and social environment which they created and maintained. Another important thing about Berlin I will miss is the contact with its excellent research in biology and medicine. I have spent many valuable hours attending the Berlin Life Science Colloquium and I feel very attached to the Paul Ehrlich lecture hall where it usually takes place. The wooden seats are hard but the interest of the lectures was generally more than enough to make me forget that. I will also miss the stimulating atmosphere of the group of Bernold Fiedler at the Free University, which has been a source of a lot of intellectual input and a lot of pleasure.

This is perhaps the moment to say why I am leaving Berlin. Ever since I was a student I have felt a strong allegiance to mathematics. As a child I was concerned with metaphysical questions and later I got interested in physics as the most fundamental part of science. During my undergraduate study I realised that mathematics, and not physics, was the right intellectual environment for me. A key experience for me was that through my study plan I ended up doing two courses on Fourier series, a subject which was new to me, at the same time. One was in physics and one in mathematics. The contrast was like night and day. This may have had something to do with the abilities of the individual lecturers concerned but it was mainly due to essential differences between mathematics and physics. By the end of my studies I had specialized in mathematics and my commitment to that subject has remained constant ever since.

For a long time my strongest connection to mathematics concerned intrinsic aspects of the subject. The significance of applications for me was as a good source of mathematical problems. This has changed over the years and I have become increasingly fascinated by the interplay between mathematics and its applications. At the same time the focus of my interest has moved from mathematics related to fundamental physics to mathematics related to biology and medicine. This change has led to a discrepancy between the research I want to do and the research area of the institute where I work. A Max Planck Institute is by its very nature focussed on a certain restricted spectrum of subjects and this is not compatible with a major change of research direction of somebody working there. This is the reason that I started applying for jobs which fitted the directions of work where my new interests lie. The move to Mainz is the successful endpoint of this process. Moving from a Max Planck Institute to a university will naturally involve spending more time on teaching and less time on research. This does not dismay me. The most important thing is that I will be doing something I believe in. Teaching elementary mathematics and analysis, apart from establishing the basis needed for doing research, is something whose intrinsic value I am convinced of.

