Archive for the ‘dynamical systems’ Category

The existence proof for Hopf bifurcations

September 22, 2014

In a Hopf bifurcation a pair of complex conjugate eigenvalues of the linearization of a dynamical system \dot x=f(x,\alpha) at a stationary point passes through the imaginary axis. This has been discussed in a previous post. Textbook results (e.g. Theorem 3.3 in Kuznetsov’s book) often concentrate on the generic case where two additional conditions are satisfied. One of these is that the first Lyapunov coefficient is non-vanishing. The other is that the eigenvalues pass through the imaginary axis with non-zero velocity. The existence of periodic solutions can be obtained if only the second of these conditions is satisfied. This was already included in the original paper of Hopf in 1942. Hopf states his results only in the case of analytic systems but this should perhaps be seen as a historical accident. A similar result holds with much weaker regularity assumptions. It is proved under the assumption of C^2 dependence on x and C^1 dependence on \alpha in Hale’s book on ordinary differential equations. This has consequences for the case where the second genericity assumption is not satisfied. Let \lambda be an eigenvalue which passes through the imaginary axis for \alpha=0 and suppose that the derivatives of {\rm Re}\,\lambda with respect to \alpha vanish up to order 2k for an integer k but that the derivative of order 2k+1 does not vanish. Then it is possible to replace \alpha by \alpha^{2k+1} as parameter and after this change the second genericity assumption is satisfied. Even if the original right hand side was analytic in \alpha the transformed right hand side is in general not C^2. It is, however, C^1 and so the version of the theorem in Hale’s book applies to give the existence of periodic solutions. The theorem as stated applies to a two-dimensional system but it then evidently applies in general by a centre manifold reduction.
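As a concrete illustration of this reparametrization (my own toy example, not taken from Hopf or Hale), take k=1 and suppose that

{\rm Re}\,\lambda(\alpha)=c\alpha^3+o(\alpha^3),\qquad c\ne 0.

With the new parameter \mu=\alpha^3, i.e. \alpha=\mu^{1/3}, one gets {\rm Re}\,\lambda=c\mu+o(\mu), so that the eigenvalues cross the imaginary axis with the non-zero velocity c with respect to \mu. The price paid is the loss of regularity in the parameter just mentioned.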

The theorem is proved as follows. The problem is transformed to polar coordinates (\rho,\theta) and then \rho is written as a function of \theta. In this way a non-autonomous scalar equation with 2\pi-periodic coefficients is obtained and the aim is to find a 2\pi-periodic solution. The first step is to reformulate the task as a fixed-point problem with the property that a fixed point which is periodic is a solution of the original problem. Then it is shown using the Banach fixed point theorem (in a minor variant of the local existence theorem for ODE using Picard iteration) that there always exists a fixed point depending on a certain new parameter a. This fixed point is only periodic if the result of substituting it into the right hand side of the original equation has mean value zero. This condition can be written as G(\alpha,a)=0. Applying the implicit function theorem to G gives a function \alpha(a), defined for a small, with G(\alpha(a),a)=0. This completes the proof.
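Schematically, and in my notation rather than that of Hale's book, the structure is as follows. After passing to polar coordinates and using \theta as independent variable one has an equation

\frac{d\rho}{d\theta}=R(\rho,\theta,\alpha)

with R being 2\pi-periodic in \theta. The fixed-point argument produces, for all sufficiently small a and \alpha, a solution \rho(\theta;a,\alpha) with \rho(0;a,\alpha)=a. Since \rho(2\pi;a,\alpha)-a=\int_0^{2\pi}R(\rho(\theta;a,\alpha),\theta,\alpha)\,d\theta, this solution is 2\pi-periodic precisely when the mean value

G(\alpha,a)=\frac{1}{2\pi}\int_0^{2\pi}R(\rho(\theta;a,\alpha),\theta,\alpha)\,d\theta

vanishes, and the assumption that the eigenvalues cross the imaginary axis with non-zero velocity is what allows the implicit function theorem to be applied to G to solve for \alpha as a function of a.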

Summing up, there are two types of theorem about Hopf bifurcations: a ‘coarse’ theorem of the type just sketched, with weak hypotheses and a weak but still very interesting conclusion, and a ‘fine’ theorem which gives stronger conclusions (determined by the sign of the Lyapunov coefficient) but needs the stronger hypothesis that this coefficient is non-vanishing. In his original paper Hopf proved both types. Are there also ‘coarse’ versions of theorems about other bifurcations?

SIAM Conference on the Life Sciences in Charlotte

August 7, 2014

This week I have been attending the SIAM Conference on the Life Sciences in Charlotte. Here I want to mention some highlights from my personal point of view. First I will mention some of the plenary talks. John Rinzel talked about mathematical modelling of certain perceptual phenomena. We are all familiar with the face-vase picture, whose perceived form switches repeatedly between two alternatives. I had never considered the question of trying to predict how often this switching occurs. Rinzel presented models for this and for other related auditory phenomena which he demonstrated in the lecture. I find it remarkable that such apparently subjective phenomena can be brought into such close connection with precise mathematical models. Kristin Swanson talked about her work on modelling the brain cancer known as glioma and its various deadly forms. I had heard her talk on the same theme at the meeting of the Society for Mathematical Biology in Dundee in 2003. Of course there has been a lot of progress since then. This was long before I started this blog but if the blog had existed I would certainly have written about the topic. I will not try to resurrect the old stories from that distant epoch. Instead I will just say that Kristin is heavily involved in using computer simulations to optimize the treatment (surgery, radiotherapy, chemotherapy) of individual patients. One of the main points in her talk this week was that it seems to be possible to divide patients into two broad categories (with nodular or diffuse growth of the tumour) and that this alone may have important implications for therapeutic decisions. Oliver Jensen talked about a multiscale model for predicting plant growth, for instance the way in which a root manages to sense gravity and move downwards. This involves some very sophisticated continuum mechanics which the speaker illustrated by everyday examples in a very effective and sometimes humorous way. The talk was both impressive and entertaining. Norman Mazer talked about the different kinds of cholesterol (LDL, HDL etc.). According to what he said lowering LDL levels is an effective means of reducing the risk of cardiovascular disease but the alternative strategy of raising HDL levels has not been successful. He explained how mathematical modelling can throw light on this phenomenon. My understanding is that the link between high HDL levels and lower cardiovascular risk is a correlation and not a sign of a causal influence of HDL levels on risk. The last talk was by James Collins, a pioneer of synthetic biology. The talk was full of good material, both mathematical and non-mathematical. Maybe I should invest some time into learning about that field.

One very interesting topic was not the subject of a talk at the conference (at least not of one I heard – it was briefly referred to in the talk of Collins mentioned above) but was a subject of conversation. It is a paper called ‘Paradoxical Results in Perturbation-Based Signaling Network Reconstruction’ by Sudhakaran Prabakaran, Jeremy Gunawardena and Eduardo Sontag which appeared in Biophys. J. 106, 2720. It suggests that the ways in which biologists deduce the influence of substances on each other on the basis of experiments are quite problematic. The mathematical content of the paper is rather elementary but its consequences for the way in which theoretical ideas are applied in biology may be considerable. The system studied in the paper is an in vitro reconstruction of part of the MAP kinase cascade and so not so far from some of my own research.

Among the parallel sessions those which were most relevant for me were one entitled ‘Algebra in the Life Sciences’ and organized by Elisenda Feliu, Nicolette Meshkat and Carsten Wiuf and one called ‘Developments in the Mathematics of Biochemical Reaction Networks’ organized by Casian Pantea and Maya Mincheva. My talk was in the second of these. These sessions were very valuable for me since they allowed me to meet a considerable number of people working in areas close to my own research interests, including several whose papers were well known to me but whom I had never met. I think that this will bring me to a new level in my work in mathematical biology due to the various interactions which took place. I will not discuss the contents of individual talks here. It is rather the case that what I learned from them will flow into my research effort and hence indirectly influence future posts in this blog. I feel that this conference has given me entry into a (for me) new research community which could be the natural habitat for my future research. I am very happy about that. The whole conference was an enjoyable and stimulating experience for me. I noticed no jet lag at all but I must be suffering from a lack of sleep, since the many things going on here just did not leave me the eight hours of sleep per night I am used to.

The Higgins-Selkov oscillator

May 14, 2014

In a previous post I wrote about glycolytic oscillations and mentioned a mathematical model for them, the Higgins-Selkov oscillator. Higgins introduced this as a chemical model while Selkov also wrote about some mathematical aspects of modelling this system. When I was preparing my course on dynamical systems I wanted to present an example where the existence of periodic solutions can be concluded by using the existence of a confined region in a two-dimensional system and Poincaré-Bendixson theory. An example which is frequently treated in textbooks is the Brusselator and I wanted to do something different. I decided to try the Higgins-Selkov oscillator. Unfortunately I came up against difficulties since that model has unbounded solutions and it is hard to show that there are any bounded solutions except a stationary solution which can be calculated explicitly. For the purposes of the course I switched to the Schnakenberg model, a modification of the Higgins-Selkov oscillator for which it is not hard to see that all solutions are bounded.
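For readers who want to experiment numerically, here is a minimal sketch using the form of the model which is usually quoted, \dot S=v_0-k_1SP^2, \dot P=k_1SP^2-k_2P; the parameter values are illustrative choices of mine and are not taken from the papers of Higgins or Selkov.

import numpy as np
from scipy.integrate import solve_ivp

v0, k1, k2 = 1.0, 1.0, 1.05    # illustrative values, not from the original papers

def higgins_selkov(t, u):
    S, P = u
    dS = v0 - k1 * S * P**2        # substrate: constant supply, autocatalytic consumption
    dP = k1 * S * P**2 - k2 * P    # product: produced from S, removed linearly
    return [dS, dP]

sol = solve_ivp(higgins_selkov, (0.0, 200.0), [1.0, 1.0], max_step=0.01)
print(sol.y[:, -5:])    # late-time behaviour: oscillation or escape to infinity, depending on parameters

Playing with k_2 illustrates the difficulty described above: for some parameter values solutions spiral towards what looks like a limit cycle, while for others the substrate concentration appears to grow without bound.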

More recently I decided to try to finally find out what happens with the Higgins-Selkov oscillator itself. Reading Selkov’s paper I originally had the impression that he had proved the essential properties of the solutions. This turned out to be mistaken. One obstacle for me was that Selkov cited a theorem from a famous Russian textbook of Andronov et al. and I did not know what the theorem was. An English translation of the book exists in the university library here but since Selkov only cites a page number I did not know how to find the theorem. I was able to get further when Jan Fuhrmann got hold of a copy of the page in question from the Russian original. This page has an easily identifiable picture on it and this allowed me to identify the corresponding page of the English translation and hence the theorem. I found that, as far as it is applicable to the oscillator problem, the theorem was something I could prove myself by a simple centre manifold argument. Thus I realized that the results quoted by Selkov only resolve some of the simpler issues in this problem.

At this stage I decided to follow the direction pointed out by Selkov on my own. The first stage, which can be used to obtain information about solutions which tend to infinity, is to do a Poincaré compactification. This leads to a dynamical system on a compact subset of Euclidean space. In general it leads to the creation of new stationary points on the boundary which are not always hyperbolic. In this particular example two new stationary points are created. One of these has a one-dimensional centre manifold and it is relatively easy to determine its qualitative nature. This is what Selkov was quoting the result of Andronov et al. for. The other new stationary point is more problematic since the linearization of the system at that point is identically zero. More information can be obtained by transforming to polar coordinates about that point. This creates two new stationary points. One is hyperbolic and hence unproblematic. The linearization about the other is identically zero. Passing to polar coordinates about that point creates three new stationary points. One of them is hyperbolic while the other two have one-dimensional centre manifolds. At this point the process comes to an end. When trying this kind of thing in the past I was haunted by the nightmare that the process would never stop. Is there a theorem which forbids that? In any case, in this example it is possible to proceed in this way and determine the qualitative behaviour near all points of the boundary. The problem is that this does not seem to help with the original issue. I see no way in which, even using all this information, it is possible to rule out that every solution except the stationary solution tends to infinity as t tends to infinity.
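For orientation, for a planar polynomial system \dot x=P(x,y), \dot y=Q(x,y) of degree d the directional charts of the Poincaré compactification can be written down explicitly; the following formulas are the standard ones (they can be found, up to a positive rescaling factor, in the book of Dumortier, Llibre and Artés) and are recorded here for convenience rather than taken from Selkov. In the chart covering the end x\to+\infty one sets u=y/x, v=1/x and multiplies the vector field by v^{d-1}, which gives

u'=v^d\left[Q(1/v,u/v)-uP(1/v,u/v)\right],\qquad v'=-v^{d+1}P(1/v,u/v).

The points with v=0 are the new stationary points on the circle at infinity, and it is their frequently non-hyperbolic linearizations which have to be analysed by centre manifolds or by blowing up, as described above.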

Given that this appeared to be a dead end I decided to try an alternative strategy in order to at least prove that there are some parameter values for which there exists a stable periodic solution. It is possible to do this by showing that a generic supercritical Hopf bifurcation occurs and I went to the trouble of computing the Lyapunov coefficient needed to prove this. I am not sure how much future there is for the Higgins-Selkov oscillator since there are more modern and more complicated models for glycolysis on the market which have been studied more intensively from a mathematical point of view. More information about this can be found in a paper of Kosiuk and Szmolyan, SIAM J. Appl. Dyn. Sys. 10, 1307.
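One standard planar formula for the first Lyapunov coefficient, recorded here for reference (this is the version in the book of Guckenheimer and Holmes; other books, including that of Kuznetsov, use different normalizations), is the following. If the system at the bifurcation point is brought to the form \dot x=-\omega y+f(x,y), \dot y=\omega x+g(x,y) with f and g vanishing to second order at the origin then

a=\frac{1}{16}\left(f_{xxx}+f_{xyy}+g_{xxy}+g_{yyy}\right)+\frac{1}{16\omega}\left[f_{xy}(f_{xx}+f_{yy})-g_{xy}(g_{xx}+g_{yy})-f_{xx}g_{xx}+f_{yy}g_{yy}\right],

evaluated at the origin, and a<0 together with the transversal crossing of the eigenvalues gives a supercritical bifurcation with a stable periodic orbit. I do not reproduce the actual computation for the Higgins-Selkov system here.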

Finally I want to say something about the concept of feedback, something I find very confusing. Often it is said in the literature that oscillations are related to negative feedback. On the other hand the oscillations in glycolysis are often said to result from positive feedback. How can this be consistent? The most transparent definition of feedback I have seen is the one from a paper of Sontag which I discussed in the context of monotone systems. In that sense the feedback in the Higgins-Selkov oscillator is definitely negative. An increase in the concentration of the substrate leads to an increase in the rate of production of the product. An increase in the concentration of the product leads to an increase of the rate of consumption of the substrate. The combination of a positive and a negative sign gives a negative loop. The other way of talking about this seems to be related to the fact that an increase in the concentration of the product leads to an increase in the reaction rate between substrate and product. This is consistent with what was said before. The difference is what aspects of the system are being regarded as cause and effect, which can lead to a different assignment of the labels positive and negative. The problem as I see it is that feedback is frequently invoked but rarely defined, with the implicit suggestion that the definition should be obvious to anyone with an ounce of understanding. I seem to be lacking that ounce.

Proofs of dynamical properties of the MAPK cascade

April 3, 2014

The MAP kinase cascade, which I mentioned in a previous post, is a biochemical network which has been the subject of a lot of theoretical and experimental study. Although a number of results about mathematical models for this network have been proved, many widely accepted results are based on numerical and/or heuristic approaches. Together with Juliette Hell I set out to extend the coverage of rigorous results in this area. Our first results on this can be found in a paper we just posted on the arXiv (in the q-bio section).

The system of equations which is fundamental for this work is that of Huang and Ferrell discussed in my previous post on the subject. I call it the MM-MA system (for Michaelis-Menten via mass action). When this system can be reduced to a smaller system by means of a quasistationary approximation the result will be called the MM system (for Michaelis-Menten) (cf. this post). With a suitable formulation the MM system is a singular limit of the MM-MA system. The MAPK cascade consists of three coupled layers. The first main result of our paper concerns the dual futile cycle, which can be thought of as the second layer of the cascade in isolation (cf. this post). We proved that the MM system for the dual futile cycle exhibits a generic cusp bifurcation and hence that for suitable values of the parameters there exist two different stable stationary solutions (bistability). Using the fact that this system is a singular limit of the system arising from the MM-MA description of the same biological system we then used geometric singular perturbation theory (cf. this post) to conclude that the MM-MA system also shows bistability.

The second main result concerns the system obtained by truncating that of Huang-Ferrell by keeping only the first two layers. It is subtle to find a useful quasistationary approximation for this system and we were put on the right track by a paper of Ventura et al. (PLoS Comp. Biol. 4(3):e1000041). This allowed us to obtain an MM system which is a limit of the MM-MA system in a way which allows geometric singular perturbation theory to be applied. This leads to the following relative statement: if the MM system for the truncated MAPK cascade has a hyperbolic periodic solution then the same is true for the MM-MA system. To get an absolute statement it remains to prove the existence of periodic solutions of the MM system, which in this case is of dimension three. That there are solutions of this kind is indicated by numerical work of Ventura et al.

Lyapunov exponents

March 25, 2014

In the literature on chaos and strange attractors one concept which plays a prominent role is that of Lyapunov exponents. I came across this repeatedly but never understood the definition. I think I have now understood the reason that I had problems. The fact that I have spent so much time with differential geometry in the past years sometimes makes me see the mathematical world through ‘differential geometric spectacles’. I felt that dynamical systems were objects which naturally live on smooth manifolds and that the definition of important concepts should not be dependent on coordinates or the presence of a preferred metric. The usual definitions of Lyapunov exponents appeared to me strongly tied to Euclidean space although I had seen a couple of comments in the literature (without further justification) that the definition was coordinate independent. In what follows I will give a manifestly coordinate independent definition. Incidentally, the definition of Lyapunov exponents is not invariant under changes of the time coordinate and at one time this led to confusion among people studying chaos in cosmological models. The source of the problem was the lack of a preferred time coordinate in general relativity.

Consider a dynamical system defined by a smooth vector field on a manifold M. Let A be a compact subset of the manifold which is invariant under the flow \phi generated by the vector field. The aim here is to define the maximum Lyapunov exponent of a point x_0\in A. The derivative of the flow, J_t=D_x\phi (t,x_0), is a linear mapping from T_{x_0}M to T_{\phi (t,x_0)}M. In the Euclidean space picture J_t is treated as a matrix and this matrix is multiplied by its transpose. What is this transpose in an invariant setting? It could be taken to be the mapping from T^*_{\phi (t,x_0)}M to T^*_{x_0}M naturally associated to J_t by duality. The product of the matrices could be associated with the composition of the linear mappings but unfortunately the domains and ranges do not match. To overcome this I introduce a Riemannian metric g on a neighbourhood of A. It is then necessary to show at the end of the day that the result does not depend on the metric. The key input for this is that since A is compact the restrictions of any two metrics g_1 and g_2 to A are uniformly equivalent. In other words, there exists a positive constant C such that C^{-1}g_1(v,v)\le g_2(v,v)\le Cg_1(v,v) for all tangent vectors v at points of A. Once the metric g has been chosen it can be used to identify the tangent and cotangent spaces with each other at the points x_0 and \phi (t,x_0) and thus to compose J_t and its ‘transpose’ to get a linear mapping B(t) on the vector space T_{x_0}M. This vector space does not depend on t. The eigenvalues \lambda_i (t) of the mapping B(t) are easily shown to be positive. The maximum Lyapunov exponent is the maximum over i of the limit superior, as t\to\infty, of \frac{1}{t}\log\sqrt{\lambda_i (t)}. Note that the ambiguity of a multiplicative constant in the definition of B(t) becomes an ambiguity of an additive constant in the logarithms and because of the factor \frac{1}{t} this has no effect on the end result.
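In practice the maximum exponent is usually estimated numerically rather than directly from the definition. The following sketch is my own illustration, using the standard Lorenz system, the flat Euclidean metric in the role of g and the usual renormalization trick: a tangent vector is propagated with the variational (linearized) equations and its average logarithmic growth rate is accumulated.

import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 28.0, 8.0 / 3.0    # classical Lorenz parameters

def extended(t, u):
    # u = (x, y, z, dx, dy, dz): the flow together with the linearized (variational) flow
    x, y, z, dx, dy, dz = u
    return [sigma * (y - x),
            x * (r - z) - y,
            x * y - b * z,
            sigma * (dy - dx),
            (r - z) * dx - dy - x * dz,
            y * dx + x * dy - b * dz]

state = np.array([1.0, 1.0, 1.0])
v = np.array([1.0, 0.0, 0.0])
T, blocks, total_log = 1.0, 400, 0.0
for _ in range(blocks):
    sol = solve_ivp(extended, (0.0, T), np.concatenate([state, v]), rtol=1e-9, atol=1e-9)
    state, v = sol.y[:3, -1], sol.y[3:, -1]
    total_log += np.log(np.linalg.norm(v))    # logarithmic growth of the tangent vector over one block
    v = v / np.linalg.norm(v)                 # renormalize to avoid overflow
print(total_log / (blocks * T))               # estimate of the maximum Lyapunov exponent, roughly 0.9

The value of roughly 0.9 for these parameters is the figure usually quoted in the literature.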

In general if the maximum Lyapunov exponent at a point x_0 is positive this is regarded as a sign of instability of the solution starting at that point (sensitive dependence on initial conditions) and if the exponent is negative this is regarded as a sign of stability. Unfortunately in general these criteria are not reliable, a fact which is known as the Perron effect. This is connected with the question of reducing the study of the asymptotic behaviour of a non-autonomous linear system of ODE to that of the autonomous systems obtained by freezing the coefficients at fixed times.

The Hopf-Hopf bifurcation and chaos in ecological systems

February 11, 2014

This post arises from the fact that there seems to be some constructive interference between various directions I am pursuing at the moment. The first has to do with the course on dynamical systems I just finished giving. This course was intended not only to provide students in Mainz with an extended introduction to the subject but also to broaden my own knowledge. I wrote lecture notes for this in German and having gone to the effort of producing this resource I thought I should translate the notes into English so as to make them more widely available. The English version can be found here. Both versions are on the course web page. The second thing is that I will be organizing a seminar on bifurcation theory next semester and I want it to achieve a wide coverage, even at the risk that its waters, being broad, may be shallow (this is a paraphrase of a quote I vaguely remember from Nietzsche). The connections between these two things are that I treated simple bifurcation theory and a little chaos in the course and that going further into the landscape of bifurcations necessarily means that at some point chaos rears its ugly head. The third thing is the fact that it has been suggested that the MAPK cascade, a dynamical system I am very interested in from the point of view of my own research, may exhibit chaotic behaviour, as described in a paper of Zumsande and Gross (J. Theor. Biol. 265, 481). This paper attracted my attention when it appeared on arXiv but it is only now that I have understood some of the underlying ideas and, in particular, that the Hopf-Hopf bifurcation plays a central role. This in turn led me to a paper by Stiefs et al. on chaos in ecological systems (Math. Biosci. Eng. 6, 855). They consider models with predator-prey interaction and a disease of the predators.

A Hopf-Hopf (or double Hopf) bifurcation arises at a stationary point where the linearization has two pairs of non-vanishing purely imaginary eigenvalues. Of course it is necessary to have a system of at least dimension four in order for this to occur. The subset of parameter space where it occurs has codimension two and lies at the intersection of two hypersurfaces on which there are Hopf bifurcations. For this system there is an approximate normal form. In other words the system is topologically equivalent to a system given by simple explicit formulae plus higher order error terms. The dynamics of the model system ignoring error terms can be analysed in detail. For simple bifurcations a system in approximate normal form is topologically equivalent to the model system. For the Hopf-Hopf bifurcation (and for the simpler fold-Hopf bifurcation with one zero and one pair of non-zero purely imaginary eigenvalues) this is no longer the case and the perturbation leads to more complicated dynamics. For instance, a heteroclinic orbit in the model system can break as a result of the perturbation. A lot of information on these things can be found in the book of Kuznetsov. In the paper on ecological systems mentioned above a Hopf-Hopf bifurcation is found using computer calculations and this is described as ‘clear evidence for the existence of chaotic parameter regions’. My understanding of chaos is still too weak to be able to appreciate the precise meaning of this statement.
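In terms of the amplitudes r_1, r_2 of the two oscillations the truncated normal form, ignoring the phase equations and the error terms and following the conventions of Kuznetsov's book only roughly, looks like

\dot r_1=r_1(\mu_1+p_{11}r_1^2+p_{12}r_2^2),\qquad \dot r_2=r_2(\mu_2+p_{21}r_1^2+p_{22}r_2^2),

a planar system whose phase portraits can be classified in terms of the signs and relative sizes of the coefficients p_{ij}. The subtleties mentioned above arise when the discarded terms are put back in.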

Using computer calculations Zumsande and Gross find fold-Hopf bifurcations in the MAPK cascade (without explicit feedback), indicating the presence of complex dynamics. If chaos occurs in the ecological system and the MAPK cascade, what biological meaning could this have? Ecosystems can often be thought of as spatially localized communities with their own dynamics which are coupled to each other. If the dynamics of the individual communities are of a simple oscillatory type then they may become synchronized and this could lead to global extinctions. If the local dynamics are chaotic this cannot happen so easily and even if a fluctuation which is too big leads to extinctions in one local community, these can be avoided in neighbouring communities, giving the ecosystem a greater global stability. One point of view is that chaos in the MAPK cascade is an undesirable effect which might interfere with the signalling function, arising as a side effect of other, desirable, features of the system. In reality MAPK cascades are usually embedded in various feedback loops and these might suppress the complex behaviour in the free cascade. Zumsande and Gross investigated this possibility with the conclusion that the feedback loops tend to make things worse rather than better.

Interval arithmetic

January 28, 2014

In a recent post I wrote about the relations between mathematics and simulations. In doing so I forgot about one fascinating theme in this area, that of computer-assisted proofs. Here I am thinking about the technique known as interval arithmetic. I got interested in this subject years ago and went as far as to order a book on the subject for the institute library. However I never applied the technique myself. I was reminded of all this since I wanted to say something about the Lorenz system and strange attractors at the end of the course on dynamical systems I am giving at the moment. Then I remembered that there was a well-known result of Warwick Tucker related to the existence of the Lorenz attractor which made use of interval arithmetic. The basic idea of this technique is simple enough. Say I want to calculate the value of a function at a certain point on the computer. A conventional calculation gives an object y which is a rational number together with a certain idea of how accurate this number is. It does not, however, give any rigorous bounds on the true value, due to possible errors in the calculation, in particular rounding errors. Doing the same calculation by interval arithmetic gives an interval [y_1,y_2]. It constitutes a proof that the desired value of the function is contained in the interval defined by the rational numbers y_1 and y_2, because the rounding in every intermediate step of the calculation is rigorously accounted for. This kind of approach could be used to give a proof that a certain function has a zero, in other words an existence proof. It could also be used to prove inequalities satisfied by the point where that zero is. The technique can be implemented in solvers for differential equations and thus used to prove results about dynamical systems. For me a computer-assisted proof of this type has the same philosophical status as a proof done by hand by a mathematician. There might be a mistake in the computer programme and the input data given to the programme might have been incorrect but this is on the same level as the mistakes a mathematician makes in a manual calculation. This type of proof has a very different status from the result of a numerical simulation done by a probably reliable but not strictly controlled programme.
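To make the idea concrete, here is a toy version written by hand. It deliberately ignores the crucial point that a serious implementation must round lower endpoints down and upper endpoints up in floating point arithmetic, so it illustrates the bookkeeping without itself being rigorous.

class Interval:
    # toy interval arithmetic; a real implementation would also round outwards
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __repr__(self):
        return "[{}, {}]".format(self.lo, self.hi)

def f(x):
    # enclosure of f(x) = x*x - 2 over the whole input interval
    return x * x - Interval(2.0, 2.0)

print(f(Interval(1.0, 1.4)))    # entirely negative, so f has no zero on [1, 1.4]
print(f(Interval(1.5, 2.0)))    # entirely positive, so f has no zero on [1.5, 2]
# together with the sign change between 1.4 and 1.5 this brackets the zero at sqrt(2)

The same pattern, applied to a rigorous enclosure of the time evolution of a set of initial data, is what makes computer-assisted proofs for dynamical systems possible.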

Why is this technique not more popular? I can think of several reasons. The first is a lack of interest in rigorous proofs among many potential users. The second is that in practice the intervals obtained may be too large to solve the problems of interest. The third is that the calculations may be too slow. If the second or third reason is the main problem then it should be possible to improve the situation by using better algorithms and more computing power.

Monotone dynamical systems

December 29, 2013

In previous posts I have written a little about monotone dynamical systems, a class of systems which in some sense have simpler dynamical properties than general dynamical systems. Unfortunately this subject was always accompanied by some confusion in my mind. This results from the necessity of a certain type of bookkeeping which I was never really able to get straight. Now I think my understanding of the topic has improved and I want to fix this knowledge here. There are two things which have led to this improvement. One is that I read an expository article by Eduardo Sontag which discusses monotone systems in the context of biochemical networks (Systems and Synthetic Biology 1, 59). The other is that I had the chance to talk to David Angeli who patiently answered some of my elementary questions as well as providing other insights. In what follows I only discuss continuous dynamical systems. Information on the corresponding theory in the case of discrete dynamical systems can be found in the paper of Sontag.

Consider a dynamical system \dot x=f(x) defined on an open subset of R^n. The system is called monotone if \frac{\partial f_i}{\partial x_j}\ge 0 for all i\ne j. This is a rather restrictive definition – we will see alternative possibilities later – but I want to start in a simple context. There is a theorem of Müller and Kamke which says that if two solutions x and y of a monotone system satisfy x_i(0)\le y_i(0) for all i then they satisfy x_i(t)\le y_i(t) for all i and all t\ge 0. This can be equivalently expressed as the fact that for each t\ge 0 the time t flow of the dynamical system preserves the partial order defined by the condition that x_i\le y_i for all i. This can be further reexpressed as the condition that y-x belongs to the positive convex cone in R^n defined by the conditions that the values of all Cartesian coordinates are non-negative. This shows the way to more general definitions of monotone flows on vector spaces, possibly infinite dimensional. These definitions may be useful for the study of certain PDE such as reaction-diffusion equations. The starting point is the choice of a suitable cone. This direction will not be followed further here except to consider some other simple cones in R^n.

A monotone system in the sense defined above is also sometimes called cooperative. The name comes from population models where the species are beneficial to each other. Changing the sign in the defining conditions leads to the class of competitive systems. These can be transformed into cooperative systems by changing the direction of time. However for a given choice of time direction the competitive systems need not have the pleasant properties of cooperative systems. Another simple type of coordinate transformation is to reverse the signs of some of the coordinates x_i. When can this be used to transform a given system into a monotone one? Two necessary conditions are that each off-diagonal partial derivative \frac{\partial f_j}{\partial x_i} (i\ne j) must have a (non-strict) sign which is independent of x and that the signs of \frac{\partial f_j}{\partial x_i} and \frac{\partial f_i}{\partial x_j} must agree. What remains is a condition which can be expressed in terms of the so-called species graph. This has one node for each variable x_i and an arrow from node i to node j if \frac{\partial f_j}{\partial x_i} is not identically zero. If the derivative is positive the arrow bears a positive sign and if it is negative a negative sign. Alternatively, the arrows with positive sign have a normal arrowhead while those with negative sign have a blunt end. In this way the system gives rise to a labelled oriented graph. To each (not necessarily oriented) path in the graph we associate a sign which is the product of the signs of the individual edges composing the path. The graph is said to be consistent if signs can be associated to the vertices in such a way that the sign of an edge is always the product of the signs of its endpoints. This is equivalent to the condition that every closed loop in the graph has a positive sign. In other words, every feedback in the system is positive. Given that the other two necessary conditions are satisfied, the condition of consistency characterizes those networks which can be transformed by changes of sign of the x_i into a monotone system. A transformation of this type can also be thought of as replacing the positive orthant by another orthant as the cone defining the partial order.
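The consistency condition is easy to check algorithmically: it amounts to two-colouring the species graph so that positive edges join vertices of equal sign and negative edges join vertices of opposite sign. Here is a small sketch of my own, with a made-up sign pattern as input.

from collections import deque

def consistent(n, edges):
    # edges: list of (i, j, sign) with sign +1 or -1 for the edge between species i and j
    # returns signs for the species making every edge sign the product of its endpoint signs,
    # or None if the graph is not consistent (i.e. it contains a negative loop)
    colour = [0] * n
    adjacency = [[] for _ in range(n)]
    for i, j, s in edges:
        adjacency[i].append((j, s))
        adjacency[j].append((i, s))
    for start in range(n):
        if colour[start] != 0:
            continue
        colour[start] = 1
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j, s in adjacency[i]:
                expected = colour[i] * s
                if colour[j] == 0:
                    colour[j] = expected
                    queue.append(j)
                elif colour[j] != expected:
                    return None
    return colour

# example: edge 1-2 positive, edges 2-3 and 3-1 negative; every loop has positive sign
print(consistent(3, [(0, 1, 1), (1, 2, -1), (2, 0, -1)]))    # for instance [1, 1, -1]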

Next I consider some examples. Every one-dimensional system is monotone. In a two-dimensional system the two off-diagonal derivatives can have the sign patterns (+,+), (-,-) and (+,-). In the first case the system is monotone. In the second case it is not but can be made so by reversing the sign of one of the coordinates. This is the case of a two-dimensional competitive system. In the third case the system cannot be made monotone. A three-dimensional competitive system cannot be made monotone, since its species graph contains a loop of three negative edges, whose sign is negative. Higher dimensional competitive systems are no better since their graphs all contain copies of that negative loop.

A general message in Sontag’s paper is that consistent systems tend to be particularly robust to various types of disturbances. Large biochemical networks are in general not consistent in this sense but they are close to being consistent, in the sense that removing a few edges from the network makes them consistent. This also means that they can be thought of as a few consistent subsystems joined together. Since biological systems need robustness this suggests a topological property which should distinguish biochemical networks from random networks. Sontag presents an example where this has been observed in the transcription network of yeast.

A more sophisticated method which can often be used to obtain monotone systems from systems of chemical reactions by a change of variables has been discussed in a previous post. The advantage of this is that together with other conditions it can be used to show that generic solutions, or sometimes even all solutions, of the original system converge to stationary solutions.

Geometric singular perturbation theory

November 29, 2013

I have already written two posts about the Michaelis-Menten limit, one not very long ago. I found some old results on this subject and I was on the look-out for some more modern accounts. Now it seems to me that what I need is something called geometric singular perturbation theory which goes back to a paper of Fenichel (J. Diff. Eq. 31, 53). An interesting aspect of this is that it involves using purely geometric statements to help solve analytical problems. If we take the system of two equations given in my last post on this subject, we can reformulate it by introducing a new time coordinate \tau=t/\epsilon, called the fast time, and adding the parameter as a new variable with zero time derivative. This gives the equations x'=\epsilon f(x,y), y'=g(x,y) and \epsilon'=0, where the prime denotes the derivative with respect to \tau. We are interested in the situation where the equation g(x,y)=0, which follows from the equations written in terms of the original time coordinate t, is equivalent to y=h_0(x) for a smooth function h_0. The linearization of the system in \tau along the zero set of g automatically has at least two zero eigenvalues. For Fenichel’s theorem it should be assumed that it does not have any more zero (or purely imaginary) eigenvalues. Then each point on that manifold has a two-dimensional centre manifold. Fenichel proves that there exists one manifold which is a centre manifold for all of these points. This is sometimes called a slow manifold. (Sometimes the part of it for a fixed value of \epsilon is given that name.) Its intersection with the plane \epsilon=0 coincides with the zero set of g. The original equations have a singular limit as \epsilon tends to zero, because \epsilon multiplies the time derivative in one of the equations. The remarkable thing is that the restriction of the system to the slow manifold is regular. This allows statements to be made that qualitative properties of the dynamics of solutions of the system with \epsilon=0 are inherited by the system with \epsilon small but non-zero.
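As a toy illustration of a slow manifold, here is a simulation of the van der Pol equation in Liénard coordinates for small \epsilon (a standard textbook fast-slow example, not one of the glycolysis models discussed below). Solutions collapse quickly onto a neighbourhood of the attracting branches of the critical manifold, drift slowly along them and jump between them, producing a relaxation oscillation.

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01    # small parameter; the singular limit is eps = 0

def vdp(t, w):
    u, v = w
    du = (v - (u**3 / 3.0 - u)) / eps    # fast variable: relaxes quickly to v = u^3/3 - u
    dv = -u                              # slow variable: drifts along the critical manifold
    return [du, dv]

sol = solve_ivp(vdp, (0.0, 20.0), [2.0, 0.0], max_step=1e-3)
u = sol.y[0, len(sol.t) // 2:]           # discard the initial transient
print(u.min(), u.max())                  # roughly -2 and 2: the two outer branches are visited in turn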

Due to my growing interest in this subject I invited Peter Szmolyan from Vienna, who is a leading expert in this field, to come and give a colloquium here in Mainz, which he did yesterday. One of his main themes was that in many models arising in applications the splitting into the variables x and y cannot be done globally. Instead it may be necessary to use several splittings to describe different parts of the dynamics of one solution. He discussed two examples in which these ideas are helpful for understanding the dynamics better and establishing the existence of relaxation oscillations. The first is a model of Goldbeter and Lefever (Biophys. J. 12, 1302) for glycolysis. It is different from the model I mentioned in a previous post but is also an important part of the chapter of Goldbeter’s book which I discussed there. The model of Goldbeter and Lefever was further studied theoretically by Segel and Goldbeter (J. Math. Biol. 32, 147). On this basis a rigorous analysis of the dynamics including a proof of the existence of relaxation oscillations was given in a recent paper by Szmolyan and Ilona Kosiuk (SIAM J. Appl. Dyn. Sys. 10, 1307). The other main example in the talk was a system of equations due to Goldbeter which is a kind of minimal model for the cell cycle. It is discussed in chapter 9 of Goldbeter’s book.

I have the feeling that GSPT is a body of theory which could be very useful for my future work and so I will do my best to continue to educate myself on the subject.

Higher dimensional bifurcations

September 9, 2013

Here I return to a subject which has been mentioned in this blog on several occasions, bifurcation theory. The general set-up is that we have a dynamical system of the form \dot x=f(x,\mu) where x\in R^m denotes the unknowns and \mu\in R^k stands for parameters. A central aim of the theory is to find conditions under which the system is topologically equivalent to a simple model system. In other words it differs from the model system only by an invertible continuous change of coordinates and parameters. It is natural to start in the case where m=1 and k=1. By choosing the coordinates appropriately we can focus on a bifurcation at x=0 and \mu=0. The subject can be explored by going to successively more complicated cases. If f(0,0)=0 and f'(0,0)\ne 0 we have the case that there is no bifurcation. It follows from the implicit function theorem that for parameters close to zero there is exactly one stationary solution close to zero and it has the same character (source or sink) as the stationary solution for \mu=0. The case where f(0,0)=0, f'(0,0)=0 and f''(0,0)\ne 0 is the fold bifurcation, which was discussed in a previous post. In this case these conditions together with the condition f_\mu\ne 0 imply topological equivalence to a standard system. The case where f(0,0)=0, f'(0,0)=0, f''(0,0)=0 and f'''(0,0)\ne 0 is the cusp bifurcation. Here topological equivalence to a standard case does not hold in a context with only one parameter. It can be obtained by passing to the case k=2 and requiring that a suitable combination of derivatives with respect to \mu_1 and \mu_2 does not vanish. I was able to use this to prove the existence of more than one stable stationary solution in a model for the competition between Th1 and Th2 dominance in the immune system, cf. a previous post on this subject. The system of interest in that case is of dimension four and the fact that I could obtain results using a system of dimension one resulted from exploiting symmetry properties. Later I was able to prove the existence of more (four) stable stationary solutions of that system using other methods.
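For orientation, the standard model systems (topological normal forms) for these one-dimensional cases, as they appear for instance in Kuznetsov's book, are

\dot x=\mu\pm x^2 \ \ \mbox{(fold)},\qquad \dot x=\mu_1+\mu_2 x\pm x^3 \ \ \mbox{(cusp)},

where in the cusp case both parameters are needed for a full unfolding, in line with the remarks above.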

It is possible to talk about fold and cusp bifurcations in higher dimensional systems. This is possible in the case that the linearization of the system at the bifurcation point (which is necessarily singular) has a zero eigenvalue with a corresponding eigenspace of dimension one and no other eigenvalues with real part zero. Then the reduction theorem tells us that near the stationary point the dynamical system is topologically equivalent to the product of a standard saddle with the restriction of the system to a one-dimensional centre manifold. This centre manifold is by definition tangent to the eigenspace of the linearization corresponding to the zero eigenvalue of the linearization at the bifurcation point. It is now rather clear what has to be done in order to analyse this type of situation. It is necessary to determine an approximation of sufficiently high order to the centre manifold and to carry out a qualitative analysis of the dynamics on the centre manifold. In practice this leads to cumbersome calculations and so it is worth thinking carefully about how they can best be organized. A method of doing both calculations together in a way which makes them as simple as possible is described in Section 8.7 of Kuznetsov’s book on bifurcation theory. A number of bifurcations, including the fold and the cusp, are treated in detail there. One way of understanding why the example from immunology I mentioned above was relatively easy to handle is that in that case the centre manifold could be written down explicitly. I did not look at the problem in that way at the time but with hindsight it seems to be an explanation why certain things could be done.
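The calculation referred to here can be summarized as follows (standard material, to be found for instance in the books of Carr or Kuznetsov, written in my own notation). Near the bifurcation point split the variables as (u,v), where u is a coordinate on the centre eigenspace and v collects the remaining directions, so that the system takes the form

\dot u=Au+f(u,v),\qquad \dot v=Bv+g(u,v),

where A is the (here one-dimensional and zero) centre part, B has no eigenvalues on the imaginary axis and f, g contain the nonlinear terms. The centre manifold is a graph v=h(u) with h(0)=0 and h'(0)=0, and inserting this ansatz into the equations gives the invariance condition

h'(u)\left[Au+f(u,h(u))\right]=Bh(u)+g(u,h(u)),

which can be solved order by order in u. The dynamics on the centre manifold is then \dot u=Au+f(u,h(u)) and it is the low order coefficients of this scalar equation which enter the bifurcation conditions.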

In the case of higher dimensional systems the quantities which should vanish or not in order to get a certain type of bifurcation are replaced by more complicated expressions. In the fold or the cusp f''(0,0) is replaced by [L_i({\partial^2 f^i}/{\partial x_j\partial x_k})R^jR^k](0,0) where L and R are left and right eigenvectors of the linearization corresponding to the eigenvalue zero. Naively one might hope that for the cusp f'''(0,0) would be replaced by [L_i({\partial^3 f^i}/{\partial x_j\partial x_k\partial x_l})R^jR^kR^l](0,0) but unfortunately, as explained in Kuznetsov’s book, this is not the case. There is an extra correction term which involves the second derivatives and which is somewhat inconvenient to calculate. We should be happy that a topological normal form can be obtained at all in these cases. Going more deeply into the landscape of bifurcations reveals cases where this is not possible. An example is the fold-Hopf bifurcation where there is one zero eigenvalue and one pair of non-zero imaginary eigenvalues. There it is possible to get a truncated normal form which is a standard form for the terms of the lowest orders. It is, however, in general the case that adding higher order terms to this gives topologically inequivalent systems. A simple kind of mechanism behind this is the breaking of a heteroclinic orbit. It is also possible that things can happen which are much more complicated and not completely understood. There is an extended discussion of this in Kuznetsov’s book.

