I discussed the process of Lyapunov-Schmidt reduction in a previous post. Here I give an extension of that to treat the question of stability. I again follow the book of Golubitsky and Schaeffer. My interest in this question has the following source. Suppose we have a system of ODE $\dot x=f(x,\lambda)$ depending on a parameter $\lambda$. The equation for steady state solutions is then $f(x,\lambda)=0$. Sometimes we can eliminate all but one of the variables to obtain an equation $g(y,\lambda)=0$ for a real variable $y$ whose solutions are in one to one correspondence with those of the original equation for steady states. Clearly this situation is closely related to Lyapunov-Schmidt reduction in the case where the linearization has corank one. Often the reduced equation is much easier to treat than the original one and this can be used to obtain information on the number of steady states of the ODE system. In particular, it can be used to study multistationarity in systems of ODE arising as models in molecular biology. In that context we would like more refined information related to multistability. In other words, we would like to know something about the stability of the steady states produced by the reduction process. Stability is a dynamical property and so it is not a priori clear that it can be investigated by looking at the equation for steady states on its own. Different ODE systems can have the same set of steady states. Note, however, that in the case of hyperbolic steady states the stability of a steady state is determined by the eigenvalues of the linearization of the function $f$ at that point. Golubitsky and Schaeffer prove the following remarkable result. (It seems clear that results of this kind were previously known in some form but I have not yet found an earlier source with a clear statement of this result free from many auxiliary complications.) Suppose that we have a bifurcation point $(x_0,\lambda_0)$ where the linearization of $f$ has a zero eigenvalue of multiplicity one and all other eigenvalues have negative real part. Let $(y_0,\lambda_0)$ be the corresponding zero of $g$. The result is that if $g(y,\lambda)=0$ and $g_y(y,\lambda)\ne 0$ then for $(y,\lambda)$ close to $(y_0,\lambda_0)$ the linearization of $f$ about the corresponding steady state has a unique eigenvalue close to zero and its sign is the same as that of $g_y(y,\lambda)$. Thus the stability of steady states arising at the bifurcation point is determined by the function $g$.
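To make the statement concrete, here is a toy example of my own (it is not taken from the book, and everything in it is chosen purely for illustration): a two-dimensional system in which the second variable can be eliminated explicitly, so that the reduced function and the eigenvalue closest to zero can both be computed and their signs compared.

```python
import numpy as np

# Toy system (my own illustration, not from Golubitsky-Schaeffer):
#   x' = lam*x - x**3 + y - x**2
#   y' = -y + x**2
# Setting the second equation to zero gives y = x**2; substituting into
# the first gives the reduced equation g(x, lam) = lam*x - x**3 = 0.
# The bifurcation point is (x, y, lam) = (0, 0, 0), where the
# linearization has the simple eigenvalue 0 and the eigenvalue -1.

def jacobian(x, y, lam):
    """Linearization of the full two-dimensional system at (x, y)."""
    return np.array([[lam - 3 * x**2 - 2 * x, 1.0],
                     [2 * x, -1.0]])

def g_x(x, lam):
    """Derivative of the reduced function g(x, lam) = lam*x - x**3."""
    return lam - 3 * x**2

def near_zero_eigenvalue(x, y, lam):
    """Real part of the eigenvalue of the linearization closest to zero."""
    evals = np.linalg.eigvals(jacobian(x, y, lam))
    return min(evals.real, key=abs)

lam = 0.01
steady_states = [0.0, np.sqrt(lam), -np.sqrt(lam)]  # zeros of g
for x in steady_states:
    mu = near_zero_eigenvalue(x, x**2, lam)
    print(f"x = {x:+.3f}: mu = {mu:+.5f}, g_x = {g_x(x, lam):+.5f}")
    # The signs of mu and g_x agree, as the theorem predicts.
```

Running this for $\lambda=0.01$ shows the unstable middle branch ($\mu>0$, $g_x>0$) and the two stable outer branches ($\mu<0$, $g_x<0$) of the pitchfork.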
I found the proof of this theorem hard to follow. I can understand the individual steps but I feel that I am still missing a global intuition for the strategy used. In this post I describe the proof and present the partial intuition I have for it. Close to the bifurcation point the unique eigenvalue close to zero, call it $\mu$, is a smooth function of $y$ and $\lambda$ because it is of multiplicity one. The derivative $g_y$ is also a smooth function. The aim is to show that they have the same sign. This would be enough to prove the desired stability statement. Suppose that the gradient of $g_y$ at $(y_0,\lambda_0)$ is non-zero. Then the zero set of $g_y$ is a submanifold in a neighbourhood of $(y_0,\lambda_0)$. It turns out that $\mu$ vanishes on that manifold. If we could show that the gradient of $\mu$ is non-zero there then it would follow that the sign of $\mu$ off the manifold is determined by that of $g_y$. With suitable sign conventions they are equal and this is the desired conclusion. The statement about the vanishing of $\mu$ is relatively easy to prove. Differentiating the basic equations arising in the Lyapunov-Schmidt reduction shows that at points where $g_y$ vanishes the derivative of $f$ applied to the derivative of the mapping from the reduced variable to the full phase space arising in the reduction process is zero. Thus the derivative of $f$ has a zero eigenvalue and it can only be equal to $\mu$. For by the continuous dependence of eigenvalues no other eigenvalue can come close to zero in a neighbourhood of the bifurcation point.
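The computation behind this vanishing statement can be sketched as follows, in notation of my own choosing (which may well differ from the book's):

```latex
% Write L = D_xf(x_0,\lambda_0), let E be the projection onto the range
% of L, let v_0 span \ker L and let v_0^* annihilate the range of L.
% The reduction solves E f(x_0 + y v_0 + w(y,\lambda),\lambda) = 0 for w
% and sets g(y,\lambda) = \langle v_0^*, f(x_0 + y v_0 + w,\lambda)\rangle.
% Differentiating both equations with respect to y gives
\begin{align*}
  E\,D_xf\,(v_0 + \partial_y w) &= 0, \\
  g_y &= \langle v_0^*,\, D_xf\,(v_0 + \partial_y w)\rangle .
\end{align*}
% By the first equation D_xf(v_0 + \partial_y w) has no component in the
% range of L, and v_0^* pairs non-degenerately with the one-dimensional
% complement of that range. So if g_y = 0 the remaining component also
% vanishes, D_xf(v_0 + \partial_y w) = 0, and the linearization of f has
% a zero eigenvalue.
```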
After this the argument becomes more complicated since in general the gradients of $\mu$ and $g_y$ could be zero. This is got around by introducing a deformation of the original problem depending on an additional parameter $\tau$ and letting $\tau$ tend to zero at the end of the day to recover the original problem. The deformed problem is defined by the function $\tilde f(x,\lambda,\tau)=f(x,\lambda)+\tau(x-x_0)$. Lyapunov-Schmidt reduction is applied to $\tilde f$ to get a function $\tilde g(y,\lambda,\tau)$. Let $\tilde\mu$ be the eigenvalue of the linearization of $\tilde f$ which is analogous to the eigenvalue $\mu$ of $f$. From what was said above it follows that, in a notation which is hopefully clear, $\tilde g_y=0$ implies $\tilde\mu=0$. We now want to show that the gradients of these two functions are non-zero. Lyapunov-Schmidt theory includes a formula expressing $\tilde g_y$ in terms of $\tilde f$ and the mapping arising in the reduction process. This formula allows us to prove that the derivative of $\tilde g_y$ with respect to $\tau$ at the bifurcation point is equal to one, so that its gradient is non-zero there. Next we turn to the gradient of $\tilde\mu$, more specifically to the derivative of $\tilde\mu$ with respect to $\tau$. First it is proved that the point of the reduction corresponding to $(y_0,\lambda_0)$ is equal to $x_0$ for all $\tau$. I omit the proof, which is not hard. Differentiating $\tilde f$ with respect to $x$ and evaluating at the point $(x_0,\lambda_0,\tau)$ shows that a null eigenvector of the linearization of $f$ at the bifurcation point is an eigenvector of the linearization of $\tilde f$ there with eigenvalue $\tau$. Hence $\tilde\mu(y_0,\lambda_0,\tau)=\tau$ for all $\tau$. Putting these facts together shows that the gradients of $\tilde\mu$ and $\tilde g_y$ are both non-zero and the derivative of $\tilde\mu$ with respect to $\tau$ at the bifurcation point is equal to one.
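The two derivative computations can be sketched as follows, again in conventions of my own choosing: write $\tilde f$ for the deformed function, $\tilde g$ for its reduced function and $\tilde\mu$ for the deformed eigenvalue, and parametrize the reduction by $x=x_0+yv_0+w(y,\lambda,\tau)$, where $v_0$ spans the kernel of $L=D_xf(x_0,\lambda_0)$ and $v_0^*$ annihilates the range of $L$ with $\langle v_0^*,v_0\rangle=1$.

```latex
% One formula from Lyapunov-Schmidt theory is
\begin{align*}
  \tilde g_y(y,\lambda,\tau)
    &= \langle v_0^*,\, D_x\tilde f\,(v_0 + \partial_y w)\rangle .
\end{align*}
% Since D_x\tilde f = D_xf + \tau I, since \partial_y w vanishes at the
% bifurcation point, and since v_0^* annihilates the range of L (which
% kills the term involving \partial_\tau\partial_y w), this gives
\begin{align*}
  \partial_\tau \tilde g_y\big|_{(y_0,\lambda_0,0)}
    &= \langle v_0^*, v_0\rangle = 1, &
  \tilde\mu(y_0,\lambda_0,\tau) = \tau
    \;&\Rightarrow\;
  \partial_\tau \tilde\mu\big|_{(y_0,\lambda_0,0)} = 1 .
\end{align*}
```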
We now use the following general fact. If $u$ and $v$ are two smooth functions, $u$ vanishes whenever $v$ does and the gradients of both functions are non-zero then the quotient $u/v$ extends smoothly to the zero set of $v$ and the value of the extension there is given by the ratio of the gradients (which are necessarily proportional to each other). In our example we get $\tilde\mu=a\tilde g_y$ with a smooth function $a$ satisfying $a(y_0,\lambda_0,0)=1$. Setting $\tau=0$ in the first equation then gives the desired conclusion: near the bifurcation point $\mu$ is a positive multiple of $g_y$ and so these two functions have the same sign.
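In symbols, under my reading the general fact (a version of Hadamard's lemma for a ratio of two functions) and its application here look like this:

```latex
% If u and v are smooth, u vanishes on \{v = 0\}, and \nabla u, \nabla v
% are non-zero there, then u = a v with a smooth, and on the zero set a
% equals the ratio of directional derivatives in any direction transverse
% to it. Applied with u = \tilde\mu and v = \tilde g_y:
\begin{align*}
  \tilde\mu = a\,\tilde g_y, \qquad
  a(y_0,\lambda_0,0)
    = \frac{\partial_\tau \tilde\mu}{\partial_\tau \tilde g_y}
      \Big|_{(y_0,\lambda_0,0)} = 1 .
\end{align*}
% Setting \tau = 0 gives \mu = a(\,\cdot\,,\,\cdot\,,0)\,g_y with a > 0
% near (y_0,\lambda_0), so \mu and g_y have the same sign there.
```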