The Hopf-Hopf bifurcation and chaos in ecological systems

February 11, 2014

This post arises from the fact that there seems to be some constructive interference between various directions I am pursuing at the moment. The first has to do with the course on dynamical systems I just finished giving. This course was intended not only to provide students in Mainz with an extended introduction to the subject but also to broaden my own knowledge. I wrote lecture notes for this in German and, having gone to the effort of producing this resource, I thought I should translate the notes into English so as to make them more widely available. The English version can be found here. Both versions are on the course web page. The second thing is that I will be organizing a seminar on bifurcation theory next semester and I want it to achieve a wide coverage, even at the risk that its waters, being broad, may be shallow (this is a paraphrase of a quote I vaguely remember from Nietzsche). The connections between these two things are that I treated simple bifurcation theory and a little chaos in the course and that going further into the landscape of bifurcations necessarily means that at some point chaos rears its ugly head. The third thing is the fact that it has been suggested that the MAPK cascade, a dynamical system I am very interested in from the point of view of my own research, may exhibit chaotic behaviour, as described in a paper of Zumsande and Gross (J. Theor. Biol. 265, 481). This paper attracted my attention when it appeared on arXiv but it is only now that I have understood some of the underlying ideas and, in particular, that the Hopf-Hopf bifurcation plays a central role. This in turn led me to a paper by Stiefs et al. on chaos in ecological systems (Math. Biosci. Eng. 6, 855). They consider models with predator-prey interaction and a disease of the predators.

A Hopf-Hopf (or double Hopf) bifurcation arises at a stationary point where the linearization has two pairs of non-vanishing purely imaginary eigenvalues. Of course it is necessary to have a system of dimension at least four in order for this to occur. The subset of parameter space where it occurs has codimension two and lies at the intersection of two hypersurfaces on which there are Hopf bifurcations. For this system there is an approximate normal form. In other words the system can be brought by a change of coordinates to a form given by simple explicit formulae plus higher order error terms. The dynamics of the model system ignoring error terms can be analysed in detail. For simple bifurcations a system in approximate normal form is topologically equivalent to the model system. For the Hopf-Hopf bifurcation (and for the simpler fold-Hopf bifurcation with one zero and one pair of non-zero purely imaginary eigenvalues) this is no longer the case and the perturbation leads to more complicated dynamics. For instance, a heteroclinic orbit in the model system can break as a result of the perturbation. A lot of information on these things can be found in the book of Kuznetsov. In the paper on ecological systems mentioned above a Hopf-Hopf bifurcation is found using computer calculations and this is described as ‘clear evidence for the existence of chaotic parameter regions’. My understanding of chaos is still too weak to be able to appreciate the precise meaning of this statement.
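For orientation, the truncated normal form can be written in polar coordinates, with r_1 and r_2 the amplitudes of the two oscillations. Schematically (this is the leading order amplitude system as presented, for instance, in Kuznetsov's book; the constants p_{ij} are determined by the nonlinear terms of the original system) it reads

\dot r_1 = r_1(\mu_1 + p_{11}r_1^2 + p_{12}r_2^2), \quad \dot r_2 = r_2(\mu_2 + p_{21}r_1^2 + p_{22}r_2^2), \quad \dot\varphi_1 = \omega_1, \quad \dot\varphi_2 = \omega_2.

At this order the amplitude equations decouple from the phases and define a planar system which can be analysed completely; it is the higher order corrections which destroy the topological equivalence and are responsible for the complicated dynamics mentioned above.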

Using computer calculations Zumsande and Gross find fold-Hopf bifurcations in the MAPK cascade (without explicit feedback), indicating the presence of complex dynamics. If chaos occurs in the ecological system and in the MAPK cascade, what biological meaning could this have? Ecosystems can often be thought of as spatially localized communities with their own dynamics which are coupled to each other. If the dynamics of the individual communities are of a simple oscillatory type then they may become synchronized and this could lead to global extinctions. If the local dynamics are chaotic this cannot happen so easily and even if a fluctuation which is too big leads to extinctions in one local community, these can be avoided in neighbouring communities, giving the ecosystem a greater global stability. One point of view on chaos in the MAPK cascade is that it is an undesirable effect which might interfere with the signalling function. It might be an undesirable side effect of other desirable features of the system. In reality MAPK cascades are usually embedded in various feedback loops and these might suppress the complex behaviour in the free cascade. Zumsande and Gross investigated this possibility with the conclusion that the feedback loops tend to make things worse rather than better.

Interval arithmetic

January 28, 2014

In a recent post I wrote about the relations between mathematics and simulations. In doing so I forgot about one fascinating theme in this area, that of computer-assisted proofs. Here I am thinking about the technique known as interval arithmetic. I got interested in this subject years ago and went as far as to order a book on the subject for the institute library. However I never applied the technique myself. I was reminded of all this since I wanted to say something about the Lorenz system and strange attractors at the end of the course on dynamical systems I am giving at the moment. Then I remembered that there was a well-known result of Warwick Tucker related to the existence of the Lorenz attractor which made use of interval arithmetic. The basic idea of this technique is simple enough. Say I want to calculate the value of a function at a certain point on the computer. A conventional calculation gives an object y which is a rational number together with a certain idea of how accurate this number is. It does not, however, give any rigorous inequality for that number, due to possible errors in the calculation, in particular rounding error. Doing the same calculation by interval arithmetic gives an interval [y_1,y_2]. It constitutes a proof that the desired value of the function is contained in the interval defined by the rational numbers y_1 and y_2. All intermediate steps in the calculation are carried out with rigorous control of the rounding, so that the enclosure obtained at the end is guaranteed. This kind of approach could be used to give a proof that a certain function has a zero, in other words an existence proof. It could also be used to prove inequalities satisfied by the point where that zero is. The technique can be implemented in solvers for differential equations and thus used to prove results about dynamical systems. For me a computer-assisted proof of this type has the same philosophical status as a proof done by hand by a mathematician. There might be a mistake in the computer programme and the input data given to the programme might have been incorrect but this is on the same level as the mistakes a mathematician makes in a manual calculation. This type of proof has a very different status from the result of a numerical simulation done by a probably reliable but not strictly controlled programme.
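To make the basic idea concrete, here is a toy sketch in Python. It is not a serious implementation: real interval libraries work with floating point numbers and outward rounding, while here I cheat by using exact rational endpoints, which makes every operation trivially rigorous.

from fractions import Fraction

class Interval:
    # A closed interval [lo, hi] with exact rational endpoints.
    def __init__(self, lo, hi=None):
        self.lo = Fraction(lo)
        self.hi = Fraction(hi if hi is not None else lo)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Enclose f(x) = x^2 + x for x = 1/3 by evaluating on an interval
# containing 1/3. The printed interval is proved to contain f(1/3) = 4/9.
x = Interval(Fraction(33, 100), Fraction(34, 100))
print(x * x + x)

Applied to a whole numerical method rather than a single function evaluation, the same principle yields the rigorous enclosures used in proofs like Tucker's.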

Why is this technique not more popular? I can think of several reasons. The first is a lack of interest in rigorous proofs among many potential users. The second is that in practice the intervals obtained may be too large to solve the problems of interest. The third is that the calculations may be too slow. If the second or third reason is the main problem then it should be possible to improve the situation by using better algorithms and more computing power.

Monotone dynamical systems

December 29, 2013

In previous posts I have written a little about monotone dynamical systems, a class of systems which in some sense have simpler dynamical properties than general dynamical systems. Unfortunately this subject was always accompanied by some confusion in my mind. This results from the necessity of a certain type of bookkeeping which I was never really able to get straight. Now I think my understanding of the topic has improved and I want to fix this knowledge here. There are two things which have led to this improvement. One is that I read an expository article by Eduardo Sontag which discusses monotone systems in the context of biochemical networks (Systems and Synthetic Biology 1, 59). The other is that I had the chance to talk to David Angeli who patiently answered some of my elementary questions as well as providing other insights. In what follows I only discuss continuous dynamical systems. Information on the corresponding theory in the case of discrete dynamical systems can be found in the paper of Sontag.

Consider a dynamical system \dot x=f(x) defined on an open subset of R^n. The system is called monotone if \frac{\partial f_i}{\partial x_j}\ge 0 for all i\ne j. This is a rather restrictive definition – we will see alternative possibilities later – but I want to start in a simple context. There is a theorem of Müller and Kamke which says that if two solutions x and y of a monotone system satisfy x_i(0)\le y_i(0) for all i then they satisfy x_i(t)\le y_i(t) for all i and all t\ge 0. This can be equivalently expressed as the fact that for each t\ge 0 the time t flow of the dynamical system preserves the partial order defined by the condition that x_i\le y_i for all i. This can be further reexpressed as the condition that y-x belongs to the positive convex cone in R^n defined by the conditions that the values of all Cartesian coordinates are non-negative. This shows the way to more general definitions of monotone flows on vector spaces, possibly infinite dimensional. These definitions may be useful for the study of certain PDE such as reaction-diffusion equations. The starting point is the choice of a suitable cone. This direction will not be followed further here except to consider some other simple cones in R^n.

A monotone system in the sense defined above is also sometimes called cooperative. The name comes from population models where the species are beneficial to each other. Changing the sign in the defining conditions leads to the class of competitive systems. These can be transformed into cooperative systems by changing the direction of time. However for a given choice of time direction the competitive systems need not have the pleasant properties of cooperative systems. Another simple type of coordinate transformation is to reverse the signs of some of the coordinates x_i. When can this be used to transform a given system into a monotone one? Two necessary conditions are that each partial derivative of a component of f must have a (non-strict) sign which is independent of x and that the signs of \frac{\partial f_i}{\partial x_j} and \frac{\partial f_j}{\partial x_i} agree. What remains is a condition which can be expressed in terms of the so-called species graph. This has one node for each variable x_i and an arrow from node i to node j if \frac{\partial f_j}{\partial x_i} is not identically zero. If the derivative is positive the arrow bears a positive sign and if it is negative a negative sign. Alternatively, the arrows with positive sign have a normal arrowhead while those with negative sign have a blunt end. In this way the system gives rise to a labelled oriented graph. To each (not necessarily oriented) path in the graph we associate a sign which is the product of the signs of the individual edges composing the path. The graph is said to be consistent if signs can be associated to the vertices in such a way that the sign of an edge is always the product of the signs of its endpoints. This is equivalent to the condition that every closed loop in the graph has a positive sign. In other words, every feedback in the system is positive. Given that the other two necessary conditions are satisfied the condition of consistency characterizes those networks which can be transformed by changes of sign of the x_i to a monotone system. A transformation of this type can also be thought of as replacing the positive orthant by another orthant as the cone defining the partial order.
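The consistency condition is easy to check algorithmically. Here is a small sketch in Python (the edge data are made up for illustration) which tries to assign signs to the vertices; it returns the vertex signs if the graph is consistent and None if it finds a negative loop.

from collections import deque

def vertex_signs(n, signed_edges):
    # signed_edges: list of (i, j, s) with s = +1 or -1; orientation is
    # irrelevant for consistency, so the graph is treated as undirected.
    adj = {i: [] for i in range(n)}
    for i, j, s in signed_edges:
        adj[i].append((j, s))
        adj[j].append((i, s))
    sign = {}
    for start in range(n):
        if start in sign:
            continue
        sign[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                expected = sign[u] * s
                if v not in sign:
                    sign[v] = expected
                    queue.append(v)
                elif sign[v] != expected:
                    return None  # a closed loop with negative sign exists
    return sign

print(vertex_signs(3, [(0, 1, 1), (1, 2, -1), (0, 2, -1)]))  # consistent
print(vertex_signs(3, [(0, 1, 1), (1, 2, 1), (0, 2, -1)]))   # not consistent

In the consistent case the signs returned say exactly which coordinates x_i have to be reversed to obtain a monotone system.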

Next I consider some examples. Every one-dimensional system is monotone. In a two-dimensional system we can have the sign patterns (+,+), (-,-) and (+,-). In the first case the system is monotone. In the second case it is not but can be made so by reversing the sign of one of the coordinates. This is the case of a two-dimensional competitive system. In the third case the system cannot be made monotone. A three-dimensional competitive system cannot be made monotone. The species graph contains a negative loop. Higher dimensional competitive systems are no better since their graphs all contain copies of that negative loop.

A general message in Sontag’s paper is that consistent systems tend to be particularly robust to various types of disturbances. Large biochemical networks are in general not consistent in this sense but they are close to being consistent in the sense that removing a few edges from the network makes them consistent. This also means that they can be thought of as a few consistent subsystems joined together. Since biological systems need robustness this suggests a topological property which biochemical networks should have compared to random networks. Sontag presents an example where this has been observed in the transcription network of yeast.

A more sophisticated method which can often be used to obtain monotone systems from systems of chemical reactions by a change of variables has been discussed in a previous post. The advantage of this is that together with other conditions it can be used to show that generic solutions, or sometimes even all solutions, of the original system converge to stationary solutions.

Geometric singular perturbation theory

November 29, 2013

I have already written two posts about the Michaelis-Menten limit, one not very long ago. I found some old results on this subject and I was on the look-out for some more modern accounts. Now it seems to me that what I need is something called geometric singular perturbation theory which goes back to a paper of Fenichel (J. Diff. Eq. 31, 53). An interesting aspect of this is that it involves using purely geometric statements to help solve analytical problems. If we take the system of two equations given in my last post on this subject, we can reformulate them by introducing a new time coordinate \tau=t/\epsilon, called the fast time, and adding the parameter as a new variable with zero time derivative. This gives the equations x'=\epsilon f(x,y), y'=g(x,y) and \epsilon'=0, where the prime denotes the derivative with respect to \tau. We are interested in the situation where the equation g(x,y)=0, which follows from the equations written in terms of the original time coordinate t, is equivalent to y=h_0(x) for a smooth function h_0. The linearization of the system in \tau along the zero set of g automatically has at least two zero eigenvalues. For Fenichel’s theorem it should be assumed that it does not have any more zero (or purely imaginary) eigenvalues. Then each point on that manifold has a two-dimensional centre manifold. Fenichel proves that there exists one manifold which is a centre manifold for all of these points. This is sometimes called a slow manifold. (Sometimes the part of it for a fixed value of \epsilon is given that name.) Its intersection with the plane \epsilon=0 coincides with the zero set of g. The original equations have a singular limit as \epsilon tends to zero, because \epsilon multiplies the time derivative in one of the equations. The remarkable thing is that the restriction of the system to the slow manifold is regular. This allows one to show that qualitative properties of the dynamics of the system with \epsilon=0 are inherited by the system with \epsilon small but non-zero.
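To fix the notation, the various systems involved can be summarized as follows (this is just a compact restatement of the above, with the slow system written in the form used in the earlier post):

slow time t: \dot x=f(x,y), \epsilon\dot y=g(x,y);
fast time \tau=t/\epsilon: x'=\epsilon f(x,y), y'=g(x,y), \epsilon'=0;
critical manifold: g(x,y)=0, i.e. y=h_0(x);
reduced (slow) system: \dot x=f(x,h_0(x)).

Roughly speaking, Fenichel's theorem says that for small non-zero \epsilon there is an invariant slow manifold close to the critical manifold on which the dynamics is a regular perturbation of the reduced system.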

Due to my growing interest in this subject I invited Peter Szmolyan from Vienna, who is a leading expert in this field, to come and give a colloquium here in Mainz, which he did yesterday. One of his main themes was that in many models arising in applications the splitting into the variables x and y cannot be done globally. Instead it may be necessary to use several splittings to describe different parts of the dynamics of one solution. He discussed two examples in which these ideas are helpful for understanding the dynamics better and establishing the existence of relaxation oscillations. The first is a model of Goldbeter and Lefever (Biophys J. 12, 1302) for glycolysis. It is different from the model I mentioned in a previous post but is also an important part of the chapter of Goldbeter’s book which I discussed there. The model of Goldbeter and Lefever was further studied theoretically by Segel and Goldbeter (J. Math. Biol. 32, 147). On this basis a rigorous analysis of the dynamics including a proof of the existence of relaxation oscillations was given in a recent paper by Szmolyan and Ilona Kosiuk (SIAM J. Appl. Dyn. Sys. 10, 1307). The other main example in the talk was a system of equations due to Goldbeter which is a kind of minimal model for the cell cycle. It is discussed in chapter 9 of Goldbeter’s book.

I have the feeling that GSPT is a body of theory which could be very useful for my future work and so I will do my best to continue to educate myself on the subject.

Higher dimensional bifurcations

September 9, 2013

Here I return to a subject which has been mentioned in this blog on several occasions, bifurcation theory. The general set-up is that we have a dynamical system of the form \dot x=f(x,\mu) where x\in R^m denotes the unknowns and \mu\in R^k stands for parameters. A central aim of the theory is to find conditions under which the system is topologically equivalent to a simple model system. In other words it only differs from the model system by invertible continuous changes of variables and parameters. It is natural to start in the case where m=1 and k=1. By choosing the coordinates appropriately we can focus on the case x=0 and \mu=0. The subject can be explored by going to successively more complicated cases. If f(0,0)=0 and f'(0,0)\ne 0 we have the case that there is no bifurcation. It follows from the implicit function theorem that for parameters close to zero there is exactly one stationary solution close to zero and it has the same character (source or sink) as the stationary solution for \mu=0. The case where f(0,0)=0, f'(0,0)=0 and f''(0,0)\ne 0 is the fold bifurcation, which was discussed in a previous post. In this case these conditions together with the condition f_\mu\ne 0 imply topological equivalence to a standard system. The case where f(0,0)=0, f'(0,0)=0, f''(0,0)=0 and f'''(0,0)\ne 0 is the cusp bifurcation. Here topological equivalence to a standard case does not hold in a context with only one parameter. It can be obtained by passing to the case k=2 and requiring that a suitable combination of derivatives with respect to \mu_1 and \mu_2 does not vanish. I was able to use this to prove the existence of more than one stable stationary solution in a model for the competition between Th1 and Th2 dominance in the immune system, cf. a previous post on this subject. The system of interest in that case is of dimension four and the fact I could obtain results using a system of dimension one resulted from exploiting symmetry properties. Later I was able to prove the existence of more (four) stable stationary solutions of that system using other methods.
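For concreteness, the standard topological normal forms in these one-dimensional cases are, up to signs and in the notation of Kuznetsov's book,

fold: \dot x=\mu+x^2;
cusp (with two parameters): \dot x=\mu_1+\mu_2 x-x^3.

The second parameter in the cusp is exactly what is needed to unfold the bifurcation, which is why topological equivalence to a standard form cannot be achieved when only one parameter is available.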

It is possible to talk about fold and cusp bifurcations in higher dimensional systems. This is possible in the case that the linearization of the system at the bifurcation point (which is necessarily singular) has a zero eigenvalue with a corresponding eigenspace of dimension one and no other eigenvalues with real part zero. Then the reduction theorem tells us that near the stationary point the dynamical system is topologically equivalent to the product of a standard saddle with the restriction of the system to a one-dimensional centre manifold. This centre manifold is by definition tangent to the eigenspace of the linearization corresponding to the zero eigenvalue of the linearization at the bifurcation point. It is now rather clear what has to be done in order to analyse this type of situation. It is necessary to determine an approximation of sufficiently high order to the centre manifold and to carry out a qualitative analysis of the dynamics on the centre manifold. In practice this leads to cumbersome calculations and so it is worth thinking carefully about how they can best be organized. A method of doing both calculations together in a way which makes them as simple as possible is described in Section 8.7 of Kuznetsov’s book on bifurcation theory. A number of bifurcations, including the fold and the cusp, are treated in detail there. One way of understanding why the example from immunology I mentioned above was relatively easy to handle is that in that case the centre manifold could be written down explicitly. I did not look at the problem in that way at the time but with hindsight it seems to be an explanation why certain things could be done.

In the case of higher dimensional systems the quantities which should vanish or not in order to get a certain type of bifurcation are replaced by more complicated expressions. In the fold or the cusp f''(0,0) is replaced by [L_i({\partial^2 f^i}/{\partial x_j\partial x_k})R^jR^k](0,0) where L and R are left and right eigenvectors of the linearization corresponding to the eigenvalue zero. Naively one might hope that for the cusp f'''(0,0) would be replaced by [L_i({\partial^3 f^i}/{\partial x_j\partial x_k\partial x_l})R^jR^kR^l](0,0) but unfortunately, as explained in Kuznetsov’s book, this is not the case. There is an extra correction term which involves the second derivatives and which is somewhat inconvenient to calculate. We should be happy that a topological normal form can be obtained at all in these cases. Going more deeply into the landscape of bifurcations reveals cases where this is not possible. An example is the fold-Hopf bifurcation where there is one zero eigenvalue and one pair of non-zero imaginary eigenvalues. There it is possible to get a truncated normal form which is a standard form for the terms of the lowest orders. It is, however, in general the case that adding higher order terms to this gives topologically inequivalent systems. A simple kind of mechanism behind this is the breaking of a heteroclinic orbit. It is also possible that things can happen which are much more complicated and not completely understood. There is an extended discussion of this in Kuznetsov’s book.

The practical use of proofs

July 31, 2013

A question which has come up in this blog from time to time is that of the benefits of mathematical proofs. Here I want to return to it. The idea of a proof is central in mathematics. A key part of research papers in mathematics is generally stating and proving theorems. In many mathematics lectures for students a lot of the time is spent stating and proving theorems. In practical applications of mathematics, to theoretical parts of other sciences or to real life problems, proofs play a much less prominent role. Very often theoretical developments go more in the direction of computer simulations or use heuristic approaches. A scientist used to working in this way may be sceptical about what proofs have to contribute. Thus it is up to us mathematicians to understand what mathematics has to offer and then to explain it to a wider circle of scientists.

Thinking about this subject I remembered some examples from my earlier research area in mathematical relativity. Some years ago, at a time when dark energy and accelerated cosmological expansion were not yet popular research subjects, a paper was published containing numerical simulations of a model of the universe with oscillating volume. In other words in this model the expansion of the universe alternately increased and decreased. This can only happen if there is accelerated expansion at some time. In the model considered there was no cosmological constant and no exotic matter with negative pressure. In this situation it has been well known for a very long time that cosmic acceleration is impossible. Proving it is a simple argument using ordinary differential equations. Apparently this was not known to the authors since they presented numerical results blatantly contradicting this fact. This is an example of the fact that once something has been proved it can sometimes be used to show immediately that certain results must be wrong, although it does not always indicate what has gone wrong. So what was wrong in the example? In the problem under consideration there is an ordinary differential equation to be solved, but that is not all. To be physically relevant an extra algebraic condition (the Hamiltonian constraint) must also be satisfied. There is an analytical result which says that if a solution of the evolution equation satisfies the constraint at some time it satisfies it for ever. Numerically the constraint will only be satisfied approximately by the initial data, and the theorem just mentioned does not prevent this small initial violation from growing rapidly, so that at later times the constraint is far from being satisfied. Thus the situation was probably as follows: the calculation successfully simulated the solution of an ordinary differential equation, even the right ordinary differential equation, but it was not the physically correct solution. Incidentally the error in the constraint tends to act like exotic matter.
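The underlying numerical phenomenon can be seen in a much simpler toy example (this is of course not the cosmological system itself, just an illustration of how a quantity conserved by the exact dynamics can drift in a simulation). Integrating the harmonic oscillator \dot x=v, \dot v=-x with the explicit Euler method multiplies the conserved quantity E=x^2+v^2, which plays the role of the constraint, by 1+h^2 at every step, so the violation grows exponentially even though each individual step looks harmless.

# Explicit Euler for x' = v, v' = -x; monitor the drift of E = x^2 + v^2,
# which the exact solution conserves.
h = 0.01
x, v = 1.0, 0.0
E0 = x**2 + v**2
for n in range(1, 100001):
    x, v = x + h * v, v - h * x
    if n % 20000 == 0:
        print(f"t = {n*h:7.1f}   E/E0 = {(x**2 + v**2)/E0:.2f}")

By the end of this run the conserved quantity has drifted by about four orders of magnitude, even though the step size looks respectable.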

A similar problem came up in a more sophisticated type of numerical calculation. This was work by Beverly Berger and Vincent Moncrief on the dynamics of inhomogeneous cosmological models near the big bang. The issue was whether the approach of these solutions to the limit is monotone or oscillatory. The expectation was that it should be oscillatory but at one point the numerical calculations showed monotone behaviour. The authors were suspicious of the conclusion although they did not understand the problem. Being careful and conscientious they avoided publishing these results for a year (if I remember correctly). This was good since they found the explanation for the discrepancy. It was basically the same explanation as in the previous example. They were using a free evolution code where the constraint is used for the initial data and never again. Accumulating violation of the constraint was falsifying the dynamics. In this case the sign of the error was different so that it tended to damp oscillations rather than creating them. Changing the code so as to solve the constraint equation repeatedly at later times restored the correct dynamics.

Another example concerns a different class of homogeneous cosmological models. Here the equations, which are ordinary differential equations, can be formulated as a dynamical system on the interior of a rectangle. This system can be extended smoothly to the rectangle itself. Then the sides of the rectangle are invariant and the corners are stationary solutions. The solutions in the boundary tend to one corner in the past and another corner in the future. They fit together to form a cycle, a heteroclinic cycle. I proved together with Paul Tod that almost all solutions which start in the interior converge to the rectangle in the approach to the singularity, circling infinitely many times around it. The top right hand corner of the rectangle, call it N, plays an important role in the story which follows. A relatively straightforward numerical simulation of this system suggested that there were solutions which started close to N and converged to it directly, in contradiction to the theorem. A more sophisticated simulation, due to Beverly Berger, replaced this by the scenario where some solutions starting close to N go once around the rectangle before converging to N. Solutions which go around more than once could not be found, despite trying hard. (The results I am talking about here were never published – I heard about them by word of mouth.) So what is the problem? The point N is non-hyperbolic. The linearization of the system there has a zero eigenvalue. The point is what is called a saddle node. On one side of a certain line (corresponding to one side of the rectangle) the solutions converge to N. If, due to inaccuracies in the calculation, the solution gets on the wrong side of the line we get the first erroneous result. On the physical side of the line the point N is a saddle and all physical solutions pass it without converging to it. The problem is that, due to the non-hyperbolic nature of N, a solution which starts near the rectangle and goes once around it comes back extremely close to it. In fact it is so close that a program using a fixed number of decimal digits can no longer resolve the behaviour. Thus, as was explained to me by Beverly, this type of problem can only be solved using a program which is capable of implementing an arbitrarily high number of digits. At the time it was known how to do this but Beverly did not have experience with that and as far as I know this type of technique has never been applied to the problem. (The reader interested in details of what was proved is referred to the paper in Class. Quantum Grav. 16, 1705 (1999).)

The general conclusions which are drawn here are simple and commonplace. The computer can be a powerful ally but the reliability of what comes out is strongly dependent on how appropriate the formulation of the problem was. This cannot always be known in advance and so continuous vigilance is highly recommended. Sometimes theorems which have been proved can be powerful tools in discovering flaws in the formulation. In the case of arguments which are analytical but heuristic the same is true but in an even stronger sense. The reliability of the conclusions depends crucially on the individual who did the work. At the same time even the best intuition should be subjected to careful scrutiny. The most prominent example of this in general relativity is that of the singularity theorems of Penrose and Hawking starting in the mid 1960’s which led spacetime singularities to be taken seriously on the basis of rigorous proofs which contradicted the earlier heuristic work of Khalatnikov and Lifshitz.

The general questions discussed here are ones I will certainly return to. Here I have highlighted two dangerous situations: simulations which do not exactly implement conservation laws in the system being modelled and non-hyperbolic steady states of a dynamical system.

The Michaelis-Menten limit

July 2, 2013

In a previous post I wrote about the Michaelis-Menten reduction of reactions catalysed by enzymes in which a single equation (effective equation) is the limit of a system of two equations (extended equations) as a parameter \epsilon tends to zero. What I did not talk about is the sense in which solutions of the effective equation approximate those of the extended ones. I was sure that this must be well-known but I did not know a source for it. Now I discovered that what I had been seeking is to be found in a very nice form in a book which had been standing on a shelf in my office for many years. This is the book ‘Asymptotic Expansions for Ordinary Differential Equations’ by Wolfgang Wasow and the part of relevance to Michaelis-Menten reduction starts on p. 249. Michaelis-Menten is not mentioned there but the key mathematical result is exactly what is needed for that application. The theorem is due to Tikhonov but the original paper is in Russian and so not accessible to me. For convenience I repeat the equations from the previous post on this subject: \dot u=f(u,v), \epsilon\dot v=g(u,v). This is the type of system treated in Tikhonov’s theorem, including the possibility that u and v are vector-valued.

The statement of the theorem is as follows. On any finite time interval [0,T] the function u in the extended system converges uniformly to the solution of the reduced system as \epsilon\to 0. Given a solution of the reduced system it is possible to compute a corresponding function v. On the time interval (0,T] the function v in the extended system converges to the function v coming from the reduced system uniformly on compact subsets. Of course this conclusion requires some hypotheses on the functions f and g. The key thing is that for each fixed value of u the corresponding value of v is an asymptotically stable stationary solution of the equation for v, with u treated as a constant.
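To see the theorem in action one can compare the two systems numerically. The sketch below is my own illustration and uses the standard dimensionless form of the Michaelis-Menten equations, \dot u=-u+(u+\kappa-\lambda)v, \epsilon\dot v=u-(u+\kappa)v, whose reduced equation is \dot u=-\lambda u/(u+\kappa); the parameter values are arbitrary.

from scipy.integrate import solve_ivp

kappa, lam, eps = 1.0, 0.5, 1e-3   # 'lam' since lambda is a reserved word

def extended(t, z):
    u, v = z
    return [-u + (u + kappa - lam) * v,
            (u - (u + kappa) * v) / eps]

def reduced(t, z):
    u = z[0]
    return [-lam * u / (u + kappa)]

T = 10.0
full = solve_ivp(extended, (0, T), [1.0, 0.0], method="Radau",
                 rtol=1e-8, atol=1e-10, dense_output=True)
slow = solve_ivp(reduced, (0, T), [1.0],
                 rtol=1e-8, atol=1e-10, dense_output=True)

for t in (1.0, 5.0, 10.0):
    print(f"t = {t:4.1f}:  u extended = {full.sol(t)[0]:.5f},"
          f"  u reduced = {slow.sol(t)[0]:.5f}")

Away from t=0 the difference in u is of order \epsilon, while v needs a short initial layer, of duration of order \epsilon, to relax onto v=u/(u+\kappa).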

With this result in hand it is possible to compute higher order corrections in \epsilon. This was first done by Vasileva and is also explained in the book of Wasow. The result was extended to a statement global in t by Hoppensteadt, Trans. Amer. Math. Soc. 123, 521. I expect that there are more modern treatments of these things in the literature but I find the sources quoted here very helpful for a beginner like myself. There remains the question of the relation to the usual Michaelis-Menten procedure. This is nicely discussed in a paper by Heineken et al., Math. Biosci. 1, 95.

Population dynamics and chemical reactions

June 21, 2013

The seminar which I mentioned in a recent post has caused me to go back and look carefully at a number of different models in biology and chemistry. It has happened repeatedly that I felt I could glimpse some mathematical relations between the models. Now I have spent some time pursuing these ideas. One aspect is that many of the systems of ODE coming from biological models can be thought of as arising from chemical reaction networks with mass action kinetics, even when the unknowns are not chemical concentrations. In this context it should be mentioned that if an ODE system arises in this way the chemical network which leads to it need not be unique.

The first example I want to mention is the Lotka-Volterra system. Today it is usually presented as a model of population dynamics. Often the example of lynx and hares is used and this is natural due to the intrinsic attractiveness of furry animals. The story of Volterra and his son-in-law also has a certain human interest. The fact that Lotka found the equations earlier is usually just a side comment. In any case, the population model is equivalent to an ODE system coming from a reaction network which was described by Lotka in a paper in 1920 (J. Amer. Chem. Soc. 42, 1595). The network is defined by the reactions A_1\to 2A_1, A_1+A_2\to 2A_2, A_2\to 0 and A_1+A_2\to 0. The last entry in the list can be thought of as an alternative reaction producing another substance which is not included explicitly in the model. A simpler version, also considered in Lotka’s paper, omits this last reaction. In his book ‘Mathematical aspects of reacting and diffusing systems’ Paul Fife looks at the second system from the point of view of chemical reaction network theory. He computes its deficiency \delta in the sense of CRNT to be one. It has three linkage classes. The second model also has deficiency one. All the linkage classes have deficiency zero and so the deficiency one theorem does not apply. The chemical system introduced by Lotka was not supposed to correspond to a system of real reactions. He was just looking for a hypothetical reaction which would exhibit sustained oscillations.
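Concretely, writing x_1 and x_2 for the concentrations of A_1 and A_2 and k_1,\dots,k_4 for the rate constants, mass action kinetics applied to the above network gives

\dot x_1=k_1x_1-k_2x_1x_2-k_4x_1x_2, \quad \dot x_2=k_2x_1x_2-k_3x_2-k_4x_1x_2.

Setting k_4=0 (the simpler version) gives exactly the Lotka-Volterra equations \dot x_1=x_1(k_1-k_2x_2), \dot x_2=x_2(k_2x_1-k_3), and with k_4\ne 0 the system is still of the same form, only with modified coefficients.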

Next I consider the fundamental model of virus dynamics as given in the book of Nowak and May which has previously been mentioned in this blog. Something which I only noticed now is that in a sense there is a term missing from the model. This represents the fact that when a virion enters a cell to infect it that virion is removed from the virus population. This fact is apparently not mentioned in the book. In an alternative model discussed in a paper of Perelson and Nelson (SIAM Rev. 41, 3) they also omit this term and discuss possible justifications for doing so. The fundamental model as found in the book of Nowak and May can be interpreted as the equations coming from a network of chemical reactions. This is also true of the modified version where the missing term is replaced. Both systems (at least the ones I found) have deficiency two.
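For reference, in a common notation (x the uninfected cells, y the infected cells and v the free virus; the letters may differ from those used in the book) the fundamental model reads

\dot x=\lambda-dx-\beta xv, \quad \dot y=\beta xv-ay, \quad \dot v=ky-uv,

and the modified version with the missing term restored replaces the last equation by \dot v=ky-uv-\beta xv, the extra term accounting for the virions lost by entering cells.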

Several well-known models in epidemiology can also be obtained from chemical networks. For instance the SIR model can be obtained from the reactions S+I\to 2I and I\to 0. This network has deficiency zero and is not weakly reversible. The deficiency zero theorem applies and tells us that there is no positive stationary solution. Of course this fact is nothing new for this example. The SIS model is similar but in that case the system has deficiency one and a positive stationary solution exists for certain parameter values. You might complain that the games I am playing do not lead to useful insights and you may be right. Nevertheless, seeing analogies between apparently unrelated things is a well-known strength of mathematics. There is also one success story related to the things I have been talking about here, namely the work of Korobeinikov on the standard model of virus dynamics mentioned in a previous post. He imported a Lyapunov function of a type known for epidemiological models in order to prove the global asymptotic stability of stationary solutions of the fundamental model of virus dynamics.

Guillain-Barré syndrome

May 9, 2013

Yesterday I went to a talk by Hans-Peter Hartung about autoimmune diseases of the peripheral nervous system. To start with he gave a summary of similarities and differences between the peripheral and central nervous systems and their relations to the immune system. Of the diseases he later discussed one which played a central role was Guillain-Barré syndrome. In fact he emphasized that this ‘syndrome’ is phenomenologically defined and consists of several diseases with different underlying mechanisms. There is one form which is sporadic in its occurrence and predominant in the western world and another which can take an epidemic form and occurs in China. At a time when medical services in China were very poor this kind of epidemic had very grave consequences. Now, however, I want to return to the ‘classical’ form of Guillain-Barré.

GBS is a disease which is fascinating for the outside observer and no doubt terrifying for the person affected by it. I first learned about it in an account – I do not remember where I read it – of the case of a German doctor. He was on holiday in Tenerife when he fell ill. He recognized the characteristic pattern of symptoms, suspected GBS and got on the first plane home. He wanted to optimize the treatment he got by going to the best medical centre he knew to get treated. The treatment was successful. In GBS the immune system attacks peripheral nerves and this leads to a rapidly progressive paralysis over the course of a few days. In a significant proportion of patients this leads to the control of the muscles responsible for breathing failing and thus to death. For this reason it is important for the patient to quickly reach a place where the disease will be recognized and they can be put on a ventilator when needed. The disease can then also be treated by plasmapheresis or immunoglobulins. In the talk it was mentioned that in the epidemics in China it was often necessary to put patients on a manual ventilator which was operated by their relatives. If this acute phase can be overcome the patient usually recovers rather completely, although some people have lasting damage. It is typical that in a single patient the disease does not recur although there are a small number of cases where there are several relapses and disability accumulates.

It has been suggested that influenza infections, or influenza vaccinations, can lead to an increased risk of developing GBS. This has been an important element of controversies surrounding vaccinations, including those against H1N1. I wrote briefly about this in a previous post. In the talk the speaker mentioned a recent Canadian study indicating a slight risk of GBS due to vaccination against influenza. Nevertheless this risk was still a lot less than that due to actually becoming infected with influenza. There has also been a German study with similar results which, however, has not yet been published. There is another kind of infection which appears to carry a much higher risk, namely that with the bacterium Campylobacter jejuni. I actually mentioned this in my previous post but had completely forgotten about it. In the talk it was pointed out that this infection is quite common while GBS is very rare. So the question arises of why GBS is not more frequent. A possible explanation is that the bacterium is rather variable. The suggested mechanism is molecular mimicry (and it seems that GBS is the first case where molecular mimicry was precisely documented). In other words, certain molecules of the bacterium are similar to molecules belonging to the nervous system. Then it happens that antibodies against the bacterium cause damage to the nerves. Depending on the variant of the bacterium this similarity of the two types of molecules is more or less strong so that the effect is more or less pronounced. In this case there is some idea of exactly which molecules show this similarity. They are so-called gangliosides, a type of glycolipid.

This has reminded me of an issue which fascinated me before. Is there a simple explanation of why some autoimmune diseases show repeated relapses while others show a single episode (like typical GBS), a continuous progression or a combination of relapsing and progressive phases at different times? Has anyone collected data on these patterns over a variety of autoimmune diseases?

Hello Mainz

April 18, 2013

This post is in some sense dual to the earlier one ‘goodbye to Berlin’. To start with I can confirm that there is no shortage of Carrion Crows (and no Hooded Crows) in Mainz. When I arrived here and was waiting for my landlord to come and let me into my flat I saw some small and intensely green spots of colour in a row of trees in front of me. I knew the source of these – they were what I could see of Ring-Necked Parakeets. I have known for a long time that these birds live wild in England but it was only relatively recently, in the course of my activity looking for jobs, that I realised they were so common in parts of Germany. While in Heidelberg for an interview I observed a large number of them making a lot of noise in a small wood opposite the main railway station. I also saw some of them when I came to Mainz for the interview which eventually led to my present job. In my old institute in Golm I often used to see Red Kites out of my office window. It occurred to me that these might be replaced by Black Kites in Mainz. During my first weekend here I was walking across the campus of the university when I saw a large and unfamiliar bird of prey approaching me. When it came closer I realised that it was a Black Kite. I enjoyed the encounter. Since then I have also seen one from my office window. The Red Kite is a beautiful bird but for some reason I feel closer to its dark relative. It gives me a feeling of the south since the first place I saw these birds many years ago was in the Camargue.

Eva and I have been using Skype to maintain contact. I feel that this big change in our life has not been without benefits for our communication with each other and when I was home last weekend it was a richer experience than many weekends in the last few years. I appreciate the warm welcome I have had from my colleagues here in Mainz and my first days here, while sometimes a bit hectic, have been rewarding. Breaking the routine of years opens up new possibilities. I assured myself that I will not completely have to do without interesting biological talks here by going to a lecture by Alexander Steinkasserer on CD83. This taught me some more about dendritic cells for which this surface molecule is an important marker.

This is the first week of lectures here and yesterday brought the first concrete example of the new direction in my academic interests influencing my teaching, with the start of my seminar on ‘ordinary differential equations in biology and chemistry’. The first talk was on Lotka-Volterra equations. The subjects to be treated by other students in later lectures include ones a lot further from classical topics.

