The simplest type of stationary solution of a dynamical system is a hyperbolic one, which means that all eigenvalues of the linearization about the given point have non-zero real parts. If smooth one-parameter families of dynamical systems are considered, the simplest type of loss of hyperbolicity is when some eigenvalue hits the imaginary axis at an isolated value of the parameter $\alpha$, say $\alpha = 0$. The most generic examples of this are when a real eigenvalue passes through the origin or when a pair of complex conjugate eigenvalues reaches the imaginary axis away from the origin. The latter case is the scenario of the Hopf bifurcation and is the one I will discuss in what follows. For the moment only the two-dimensional case will be considered. The parameter will be chosen such that the real part of the eigenvalues has the same sign as $\alpha$. Thus as the parameter passes through zero while increasing, the stationary point loses stability. It will be assumed that the eigenvalues cross the axis with non-zero velocity. My primary source of information on this subject is the book of Kuznetsov already mentioned in a previous post. Figures 3.5 and 3.7 of that book are useful for visualizing what is going on. If a further genericity assumption is made, the phase portrait of the bifurcation can be shown to be topologically equivalent to that of a simple explicit model. This assumption is the non-vanishing of a quantity $l_1$ called the first Lyapunov number. When this number is non-zero its sign can be used to distinguish two different kinds of Hopf bifurcation, called super- and subcritical. In the supercritical case ($l_1 < 0$) periodic solutions exist for all small positive values of $\alpha$ and they are stable. In the subcritical case ($l_1 > 0$) small periodic solutions exist for all small negative values of $\alpha$ and are unstable. Kuznetsov gives an intuitive interpretation of this difference, calling the first case a soft loss of stability and the second a catastrophic one.
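The soft loss of stability can be seen in a toy calculation (my own sketch, not taken from Kuznetsov). The radial part of the Hopf normal form is $\dot r = \alpha r + l_1 r^3$; in the supercritical case $l_1 < 0$ and $\alpha > 0$, solutions starting both inside and outside the limit cycle are attracted to the circle of radius $\sqrt{\alpha/|l_1|}$:

```python
import math

def rk4_step(f, r, dt):
    """One classical Runge-Kutta step for a scalar ODE dr/dt = f(r)."""
    k1 = f(r)
    k2 = f(r + 0.5 * dt * k1)
    k3 = f(r + 0.5 * dt * k2)
    k4 = f(r + dt * k3)
    return r + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def evolve(r0, alpha, l1, t_end=200.0, dt=0.01):
    """Integrate the radial Hopf normal form dr/dt = alpha*r + l1*r^3."""
    f = lambda r: alpha * r + l1 * r ** 3
    r = r0
    for _ in range(int(t_end / dt)):
        r = rk4_step(f, r, dt)
    return r

alpha, l1 = 0.1, -1.0                  # supercritical case: l1 < 0, alpha > 0
target = math.sqrt(alpha / abs(l1))    # limit-cycle radius
print(evolve(0.05, alpha, l1), target)  # trajectory starting inside the cycle
print(evolve(0.8, alpha, l1), target)   # trajectory starting outside the cycle
```

Both trajectories converge to the same amplitude, which grows continuously from zero as $\alpha$ increases through zero, which is exactly the "soft" character of the supercritical case.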

The first Lyapunov number depends not only on the linearization of the system about the bifurcation point but also on the second and third order derivatives there. Calculating this quantity is elementary but usually lengthy, even for simple systems. A trick used to simplify this calculation is the introduction of a suitable complex coordinate. When I first saw this I did not like it, but if it reduces cumbersome calculations there is a clear motivation for proceeding in this way. It is not even that unnatural, given the fact that a pair of purely imaginary eigenvalues is at the heart of the Hopf bifurcation. In the book the example of the Brusselator is discussed in an exercise. Although this is a very simple example the reader is encouraged to use computer algebra to do the calculations. I did some of them by hand and saw that it is not impossible, but it is tedious. An alternative approach is to use a formula which has been derived once and for all by someone. A formula of this type is given on p. 353 of Perko's book "Differential equations and dynamical systems". I tried using this in the case of the Brusselator and it seemed easier than the alternative mentioned above. If the first Lyapunov number vanishes it is possible to go further by assuming that another quantity $l_2$, the second Lyapunov number, is non-zero. This gives rise to what is called a Bautin bifurcation. There is a natural generalization of the Hopf bifurcation to higher dimensions. It is just necessary to assume that all the eigenvalues except for the pair responsible for the bifurcation have non-zero real parts. Under this assumption the problem can be reduced to a centre manifold. This is a centre manifold adapted to the bifurcation problem rather than just a centre manifold for the system with fixed values of the parameter. In this way the two-dimensional setting can be recovered.
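To give a flavour of such once-and-for-all formulas: when the linear part at the bifurcation point has already been brought to the form $\dot x = -\omega y + f(x,y)$, $\dot y = \omega x + g(x,y)$, with $f$ and $g$ containing only the nonlinear terms, a standard expression (essentially formula (3.4.11) of Guckenheimer and Holmes, a close relative of the one in Perko) gives a coefficient $a$ whose sign agrees with that of the first Lyapunov number. The sketch below evaluates it numerically by finite differences, so no computer algebra is needed; the test system is a hypothetical cubic focus for which $a = -1$ can be checked by hand.

```python
def lyapunov_coefficient(f, g, omega, h=1e-2):
    """Coefficient a (same sign as the first Lyapunov number) for
        x' = -omega*y + f(x, y),   y' = omega*x + g(x, y),
    where f, g are the nonlinear terms, using the Guckenheimer-Holmes
    formula with derivatives at the origin taken by central differences."""
    def dxx(F):  return (F(h, 0) - 2 * F(0, 0) + F(-h, 0)) / h**2
    def dyy(F):  return (F(0, h) - 2 * F(0, 0) + F(0, -h)) / h**2
    def dxy(F):  return (F(h, h) - F(h, -h) - F(-h, h) + F(-h, -h)) / (4 * h**2)
    def dxxx(F): return (F(2*h, 0) - 2*F(h, 0) + 2*F(-h, 0) - F(-2*h, 0)) / (2 * h**3)
    def dyyy(F): return (F(0, 2*h) - 2*F(0, h) + 2*F(0, -h) - F(0, -2*h)) / (2 * h**3)
    def dxyy(F): return (F(h, h) - 2*F(h, 0) + F(h, -h)
                         - F(-h, h) + 2*F(-h, 0) - F(-h, -h)) / (2 * h**3)
    def dxxy(F): return (F(h, h) - 2*F(0, h) + F(-h, h)
                         - F(h, -h) + 2*F(0, -h) - F(-h, -h)) / (2 * h**3)
    a = (dxxx(f) + dxyy(f) + dxxy(g) + dyyy(g)) / 16.0
    a += (dxy(f) * (dxx(f) + dyy(f)) - dxy(g) * (dxx(g) + dyy(g))
          - dxx(f) * dxx(g) + dyy(f) * dyy(g)) / (16.0 * omega)
    return a

# Test system x' = -y - x(x^2+y^2), y' = x - y(x^2+y^2): here a = -1 exactly.
a = lyapunov_coefficient(lambda x, y: -x * (x**2 + y**2),
                         lambda x, y: -y * (x**2 + y**2), omega=1.0)
print(a)  # close to -1, i.e. supercritical
```

This is only a sketch under the stated assumptions; for a system like the Brusselator one must first translate the stationary point to the origin and transform the linear part to the rotation form before applying it.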

Up to this point I have just presented the Lyapunov numbers as the result of messy calculations which may be diagnostic for certain things. This gives no intuition about what they mean. To obtain this kind of intuition, note first that the Lyapunov numbers are characteristics of the dynamical system at the bifurcation value $\alpha = 0$. They do not involve any derivatives with respect to $\alpha$. In fact they arise in the study of two-dimensional dynamical systems which have a focus at some point, say the origin. This means that solutions starting near the origin spiral towards the origin as $t \to \infty$ or $t \to -\infty$, depending on whether the focus is stable or unstable. I now restrict to the stable case for definiteness. In this situation it is possible to define a Poincaré map which is similar to that in the more familiar setting of periodic solutions. Consider a short radial line segment ending at the origin. If it is short enough then following solutions through one rotation defines a mapping from the line segment to itself. Call it $P$ and define the displacement function $d(s) = P(s) - s$, where $s$ is the distance from the origin along the segment. It is always true that $d(0) = 0$. If $d'(0) \neq 0$ then its sign determines the stability, with the negative sign corresponding to the stable case. This is the case where the eigenvalues have non-zero real part and the stationary point is hyperbolic. The case of interest in connection with the Hopf bifurcation is that where $d'(0) = 0$. It can be shown that in general the first non-vanishing derivative of $d$ at the origin is of odd order. If its order is three then it corresponds to the first Lyapunov number. So in a sense that number measures the leading order deviation of the solution from a circle. If the first Lyapunov number vanishes then the fifth derivative of $d$ gives the second Lyapunov number. An account of this can be found in section 3.4 of Perko's book.
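The cubic leading term of $d$ can be seen numerically in a toy example of my own (not from Perko). For the focus $\dot x = -y - x(x^2+y^2)$, $\dot y = x - y(x^2+y^2)$, polar coordinates give $\dot r = -r^3$, $\dot\theta = 1$, so the Poincaré map on the positive $x$-axis is obtained by integrating $dr/d\theta = -r^3$ over one full rotation, and the displacement satisfies $d(s) \approx -2\pi s^3$ to leading order:

```python
import math

def return_map(r0, steps=2000):
    """Integrate dr/dtheta = -r^3 from theta = 0 to 2*pi with RK4.
    This is the Poincare map P on the positive x-axis for the focus
    x' = -y - x(x^2+y^2), y' = x - y(x^2+y^2)."""
    f = lambda r: -r ** 3
    h = 2.0 * math.pi / steps
    r = r0
    for _ in range(steps):
        k1 = f(r)
        k2 = f(r + 0.5 * h * k1)
        k3 = f(r + 0.5 * h * k2)
        k4 = f(r + h * k3)
        r += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return r

def displacement(s):
    """Displacement function d(s) = P(s) - s of the Poincare map."""
    return return_map(s) - s

s = 0.01
print(displacement(s) / s ** 3)  # close to -2*pi: the leading term is cubic
```

Here $d(0) = 0$ and $d'(0) = 0$ (the eigenvalues at the origin are $\pm i$), and the negative cubic coefficient reflects the fact that this focus is stable, in line with the sign convention for the first Lyapunov number described above.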

There are very many applications where the Hopf bifurcation plays a role. A first example is the Brusselator mentioned above. This is a schematic two-dimensional model for a chemical reactor. When I hear the name I get a mental picture of Brussels sprouts. This is of course nonsense. The name comes from the fact that the model was developed in Brussels, in analogy to a three-dimensional model called the Oregonator, which was developed in Oregon. The latter name was also influenced by the fact that it describes a kind of oscillator. The Oregonator is nothing other than the Field-Noyes model discussed in a recent post. As mentioned there, the Field-Noyes model also exhibits Hopf bifurcations. Hopf bifurcations occur in the FitzHugh-Nagumo and Hodgkin-Huxley systems. Thus they are potentially relevant for electrical signalling by neurons. They may also come up in another kind of biological signalling, namely that by calcium. For an extensive review of this subject I refer to a paper of Martin Falcke (Adv. Phys. 53, 255). In section 5 of that paper the author discusses experimental evidence indicating that certain calcium oscillations cannot be modelled using Hopf bifurcations and that it might be better to use other types of bifurcation. On the other hand he suggests that the evidence for this is not conclusive. Oscillations in glycolysis are modelled by the Higgins-Selkov oscillator, a two-dimensional system bearing a superficial resemblance to the Brusselator. The unknowns are the concentrations of ADP and of the substrate of the enzyme phosphofructokinase. This simple system describing a part of glycolysis exhibits a Hopf bifurcation. More information on this and related systems can be found in the book of Klipp et al. on systems biology quoted in a previous post.
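For the Brusselator the location of the Hopf bifurcation can be found by a short calculation. In a standard form of the model, $\dot x = a - (b+1)x + x^2 y$, $\dot y = bx - x^2 y$, the unique stationary point is $(a, b/a)$; the Jacobian there has trace $b - 1 - a^2$ and determinant $a^2 > 0$, so a pair of complex conjugate eigenvalues crosses the imaginary axis as $b$ passes through $1 + a^2$. A small sketch checking this numerically:

```python
import math

def brusselator_jacobian(a, b):
    """Jacobian of x' = a - (b+1)x + x^2*y, y' = b*x - x^2*y
    evaluated at the stationary point (x, y) = (a, b/a)."""
    x, y = a, b / a
    return [[-(b + 1) + 2 * x * y, x ** 2],
            [b - 2 * x * y, -x ** 2]]

def eigen_real_parts(J):
    """Real parts of the eigenvalues of a 2x2 matrix, via trace and determinant."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr ** 2 - 4 * det
    if disc < 0:                      # complex conjugate pair
        return (tr / 2.0, tr / 2.0)
    root = math.sqrt(disc)
    return ((tr + root) / 2.0, (tr - root) / 2.0)

a = 1.0
b_crit = 1.0 + a ** 2                 # Hopf bifurcation at b = 1 + a^2
for b in (b_crit - 0.5, b_crit, b_crit + 0.5):
    print(b, eigen_real_parts(brusselator_jacobian(a, b)))
```

The real parts are negative below $b = 1 + a^2$, zero at the critical value and positive above it, so the stationary point loses stability there exactly as in the two-dimensional picture described earlier. The sign of the first Lyapunov number, and hence the super- or subcritical character of the bifurcation, still has to be determined separately.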
