A first step in studying solutions of the dynamical system $\dot{x}=f(x,\alpha)$ is to look at the set of steady states, the solutions of the equations $f(x,\alpha)=0$. In the simplest cases these equations can be solved for the unknowns $x$ as a function of the parameters $\alpha$. A criterion for when this can be done, at least in principle, is given by the implicit function theorem. If $f:{\bf R}^n\times{\bf R}^k\to{\bf R}^n$ then the condition is that $D_xf$ is invertible. A slightly less favourable situation is one where the original system of $n$ equations for $n$ variables can be reduced to a single equation for a single variable in such a way that when this one equation has been solved the other unknowns can be calculated at steady state. This is related to the case where $D_xf$ has rank $n-1$ and the appropriate analogue of the implicit function theorem is Lyapunov-Schmidt reduction. A brief general description of this technique can be found in a previous post. In the absence of non-zero purely imaginary eigenvalues of $D_xf$ the system has a one-dimensional centre manifold and there is a relation between Lyapunov-Schmidt reduction and centre manifold reduction. There are a lot of similarities between these two techniques but also some important differences. In the case of centre manifold theory we obtain the existence of a one-dimensional submanifold, which may be non-unique and may be less regular (in the sense of being smooth or analytic) than the system itself. In the case of Lyapunov-Schmidt reduction we obtain a one-dimensional quotient manifold which is unique and as regular as the system itself. Note that when the latter method is applied some choices must be made but the essential results are independent of those choices.
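The reduction just described can be sketched schematically as follows (the labels $L$, $P$, $v$, $s$ and $w$ are my own choices for this sketch, not fixed by anything above):

```latex
% Lyapunov--Schmidt reduction for f(x,\alpha)=0 when L := D_x f(0,0) has rank n-1
\begin{aligned}
&\ker L = \operatorname{span}\{v\}, \qquad P : {\bf R}^n \to \operatorname{im} L
  \ \text{a projection onto the image of } L,\\
&\text{write } x = sv + w \ \text{with } w \ \text{in a complement of } \ker L,\\
&P f(sv+w,\alpha) = 0 \ \xrightarrow{\ \text{implicit function theorem}\ } \ w = w(s,\alpha),\\
&g(s,\alpha) := (I-P)\, f\bigl(sv + w(s,\alpha),\, \alpha\bigr) = 0
  \quad \text{(the reduced scalar equation)}.
\end{aligned}
```

The choices referred to at the end of the paragraph are the choices of $v$, of the projection $P$ and of the complement of the kernel; different choices change $g$ but not the qualitative conclusions drawn from it.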

When it is possible to reduce the equations for steady states to a single equation as discussed above it may still be difficult to determine how many solutions the equation $p(x,\alpha)=0$ has for fixed $\alpha$. The function occurring in this equation has been denoted by $p$ since in many applications it is a polynomial. The issue of the number of solutions is not my concern here. Instead I assume that I already know something about how many solutions exist and I would like to know something about their stability. The question is under what circumstances a reduction of the existence question to one dimension also leads to a reduction of the stability question to one dimension. Here I discuss a result on this question which was obtained in a paper of Crandall and Rabinowitz with the title ‘Bifurcation, perturbation of simple eigenvalues and linearized stability’ (Arch. Rat. Mech. Anal. 52, 161 (1973)). The exposition I will give here is based not on that paper (which I have not read) but on that in the book ‘Singularities and Groups in Bifurcation Theory’ by Golubitsky and Schaeffer.

I will discuss the result in the simplest case I can think of. This is where the unknown $(x,y)$ is a point in the plane and $\lambda$ is a scalar. I assume that the kernel of the linearization $L=Df(0,0,0)$ is the $x$-axis and its image the $y$-axis. I assume further that the non-zero eigenvalue of $L$ is $\mu$ with $\mu<0$. In this situation the equation $f_2(x,y,\lambda)=0$ can be written in the form $y=h(x,\lambda)$, using the implicit function theorem and the fact that $\mu\ne 0$. Substituting this into the equation $f_1(x,y,\lambda)=0$ gives an equation of the form $g(x,\lambda)=0$. Here $g$ plays the role of $p$ above. Steady states close to the origin are in one-to-one correspondence with zeroes of $g$. The main result is that the stability of the steady state at the point $(x,h(x,\lambda))$ is determined by the sign of the derivative of $g$ with respect to $x$. When that sign is negative the steady state is asymptotically stable and when the sign is positive it is a saddle.
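As a concrete illustration, here is a small numerical check of the criterion on a planar system of my own choosing (it is a hypothetical example, not one taken from the paper or the book): $\dot{x}=\lambda x-x^3+y^2$, $\dot{y}=-y+x^2$. At the origin with $\lambda=0$ the linearization is $\mathrm{diag}(0,-1)$, so the kernel is the $x$-axis, the image is the $y$-axis and $\mu=-1<0$, as in the setting above.

```python
# Hypothetical planar example (my choice, not from the post):
#   x' = f1(x, y, lam) = lam*x - x**3 + y**2
#   y' = f2(x, y, lam) = -y + x**2
# Here f2 = 0 solves exactly for y = h(x) = x**2, so the reduced function is
#   g(x, lam) = f1(x, x**2, lam) = lam*x - x**3 + x**4.

def f1(x, y, lam): return lam * x - x**3 + y**2
def f2(x, y, lam): return -y + x**2

def g(x, lam): return f1(x, x**2, lam)          # reduced scalar function
def g_x(x, lam): return lam - 3 * x**2 + 4 * x**3  # its derivative in x

def jacobian(x, y, lam):
    """Analytic Jacobian of the full planar system at (x, y)."""
    return [[lam - 3 * x**2, 2 * y], [2 * x, -1.0]]

def classify(J):
    """Classify a 2x2 equilibrium by trace and determinant."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    if det < 0:
        return "saddle"
    return "stable" if tr < 0 else "unstable"

lam = 0.25

def bisect(a, b, tol=1e-12):
    """Locate a zero of g in [a, b] given a sign change."""
    fa = g(a, lam)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (g(m, lam) > 0) == (fa > 0):
            a, fa = m, g(m, lam)
        else:
            b = m
    return 0.5 * (a + b)

# Coarse sign scan of g on [-2, 2], refined by bisection.
roots = []
xs = [i * 0.01 - 2.0 for i in range(401)]
for a, b in zip(xs, xs[1:]):
    if g(a, lam) == 0.0:
        roots.append(a)
    elif g(a, lam) * g(b, lam) < 0:
        roots.append(bisect(a, b))

for x0 in roots:
    slope = g_x(x0, lam)
    kind = classify(jacobian(x0, x0**2, lam))
    # The sign of dg/dx decides stability in the full system:
    assert (slope < 0) == (kind == "stable")
    assert (slope > 0) == (kind == "saddle")
    print(f"x = {x0:+.4f}  dg/dx = {slope:+.4f}  ->  {kind}")
```

For $\lambda=0.25$ the reduced equation has two zeroes, one near $x\approx-0.42$ with $g_x<0$ (asymptotically stable) and one at $x=0$ with $g_x=\lambda>0$ (a saddle), and in each case the trace–determinant classification of the full two-dimensional Jacobian agrees with the one-dimensional criterion.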
