Bifurcation and stability

A first step in studying solutions of the dynamical system \dot x=f(x,\lambda) is to look at the set of steady states, the solutions of the equations f(x,\lambda)=0. In the simplest cases these equations can be solved for the unknowns x as functions of the parameters \lambda. A criterion for when this can be done, at least in principle, is given by the implicit function theorem. If f(0,0)=0 then the condition is that A=D_x f(0,0) is invertible. A slightly less favourable situation is one where the original system of n equations for n variables can be reduced to a single equation for a single variable, in such a way that once this one equation has been solved the remaining unknowns at the steady state can be calculated from its solution. This is related to the case where D_x f(0,0) has rank n-1, and the appropriate analogue of the implicit function theorem is Lyapunov-Schmidt reduction. A brief general description of this technique can be found in a previous post. Since the kernel of A is then one-dimensional, in the absence of non-zero purely imaginary eigenvalues of A the system has a one-dimensional centre manifold, and there is a relation between Lyapunov-Schmidt reduction and centre manifold reduction. There are a lot of similarities between these two techniques but also some important differences. In the case of centre manifold theory we obtain the existence of a one-dimensional submanifold, which may be non-unique and may be less regular (in the sense of being smooth or analytic) than the system itself. In the case of Lyapunov-Schmidt reduction we obtain a one-dimensional quotient manifold which is unique and as regular as the system itself. Note that when the latter method is applied some choices must be made, but the essential results are independent of those choices.
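In the rank n-1 case the reduction can be sketched as follows (the particular splitting written here is my own choice of notation, and is one instance of the non-essential choices just mentioned):

```latex
% Lyapunov-Schmidt reduction when \dim\ker A = 1 (a sketch).
% Choose a complement M of \ker A and write
\mathbb{R}^n = \ker A \oplus M, \qquad x = v + w, \quad v \in \ker A,\ w \in M.
% Let Q denote a projection onto \operatorname{im} A. The system
% f(x,\lambda) = 0 splits into the pair of equations
Q f(v+w,\lambda) = 0, \qquad (I-Q) f(v+w,\lambda) = 0.
% Since A maps M isomorphically onto \operatorname{im} A, the implicit
% function theorem solves the first equation for w = W(v,\lambda) near the
% origin. Substituting into the second gives the one-dimensional
% bifurcation equation
g(v,\lambda) := (I-Q) f\bigl(v + W(v,\lambda), \lambda\bigr) = 0.
```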

When it is possible to reduce the equations for steady states to a single equation p(X,\alpha)=0 as discussed above it may still be difficult to determine how many solutions the equation has for fixed \alpha. The function has been denoted by p since in many applications it is a polynomial. The issue of the number of solutions is not my concern here. Instead I assume I already know something about how many solutions exist and I would like to know something about their stability. The question is under what circumstances a reduction of the existence question to one dimension also leads to a reduction of the stability question to one dimension. Here I discuss a result on this question which was obtained in a paper of Crandall and Rabinowitz with the title ‘Bifurcation, perturbation of simple eigenvalues and linearized stability’ (Arch. Rat. Mech. Anal. 52, 161 (1973)). The exposition I will give here is based not on that paper (which I have not read) but on that in the book ‘Singularities and Groups in Bifurcation Theory’ by Golubitsky and Schaeffer.

I will discuss the result in the simplest case I can think of. This is where x is a point in the plane and \alpha is a scalar. I assume that the kernel of A is the x_1-axis and its image the x_2-axis. I assume further that the non-zero eigenvalue of A is -a with a>0. In this situation the equation f_2(x_1,x_2,\alpha)=0 can be written in the form x_2=h(x_1,\alpha). Substituting this into the equation f_1(x_1,x_2,\alpha)=0 gives an equation of the form g(x_1,\alpha)=0, where g(x_1,\alpha)=f_1(x_1,h(x_1,\alpha),\alpha). Here x_1 plays the role of X above. Steady states close to the origin are in one-to-one correspondence with zeroes of g. The main result is that the stability of the steady state corresponding to the point X is determined by the sign of the derivative g' of g with respect to X at that point. When that sign is negative the steady state is asymptotically stable and when the sign is positive it is a saddle.
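This can be checked numerically on a toy example of my own choosing (it is not taken from the paper): f_1=\alpha x_1-x_1^3 and f_2=-x_2+x_1^2, so that at the origin A=diag(0,-1), the kernel is the x_1-axis, the image is the x_2-axis and a=1. Here h(x_1,\alpha)=x_1^2 and g(x_1,\alpha)=\alpha x_1-x_1^3, a pitchfork. A minimal sketch comparing the sign of g' with the eigenvalues of the full Jacobian:

```python
# Toy example (my own choice): f1 = alpha*x1 - x1**3, f2 = -x2 + x1**2.
# Then x2 = h(x1, alpha) = x1**2 and g(x1, alpha) = alpha*x1 - x1**3.
import numpy as np

def jacobian(x1, x2, alpha):
    # D_x f at (x1, x2): rows are gradients of f1 and f2
    return np.array([[alpha - 3 * x1**2, 0.0],
                     [2 * x1, -1.0]])

def g_prime(x1, alpha):
    # derivative of the reduced function g(x1, alpha) = alpha*x1 - x1**3
    return alpha - 3 * x1**2

alpha = 1.0
# steady states for alpha > 0: x1 in {0, +sqrt(alpha), -sqrt(alpha)}, x2 = x1**2
for x1 in [0.0, np.sqrt(alpha), -np.sqrt(alpha)]:
    eigs = np.linalg.eigvals(jacobian(x1, x1**2, alpha))
    # the criterion: g' < 0 means asymptotically stable, g' > 0 means saddle
    print(f"x1 = {x1:+.3f}: g' = {g_prime(x1, alpha):+.3f}, "
          f"eigenvalues = {sorted(eigs.real)}")
```

For \alpha=1 the branch x_1=0 has g'=1>0 and Jacobian eigenvalues 1 and -1 (a saddle), while the branches x_1=\pm 1 have g'=-2<0 and eigenvalues -2 and -1 (asymptotically stable), in agreement with the result.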

