At an early age we learn how to tackle the problem of solving $n$ linear equations for $n$ unknowns. What about solving $n$ nonlinear equations for $n$ unknowns? In general there is not much which can be said. One promising strategy is to start with a problem whose solution is known and perturb it. Consider an equation $\Phi(x,\alpha)=0$ where $x\in\mathbb{R}^n$ should be thought of as the unknown and $\alpha\in\mathbb{R}^m$ as a parameter. The mapping $\Phi:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^n$ defined by these equations is assumed smooth. Suppose that $\Phi(0,0)=0$, so that we have a solution for $\alpha=0$. It is helpful to consider the derivative $N=D_x\Phi(0,0)$ of $\Phi$ with respect to $x$ at the origin. If the linear map $N$ is invertible then we are in the situation of the implicit function theorem. The theorem says that there exists a smooth mapping $x(\alpha)$ from a neighbourhood of zero in $\mathbb{R}^m$ to $\mathbb{R}^n$ such that $\Phi(x(\alpha),\alpha)=0$. It is also (locally) unique. In other words the system of $n$ equations has a unique solution for any parameter value near zero.
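To make the invertible case concrete, here is a minimal numerical sketch. The system $\Phi(x,\alpha)=(x_1^2+x_2-\alpha,\ x_1+x_2)$ is a toy example of my own, not taken from any source: it vanishes at the origin, and $D_x\Phi(0,0)=\begin{pmatrix}0&1\\1&1\end{pmatrix}$ is invertible, so Newton's method started at zero converges to the unique nearby solution $x(\alpha)$ whose existence the implicit function theorem guarantees.

```python
# Toy example (invented for illustration): Phi(x, alpha) = 0 with
#   Phi_1 = x1**2 + x2 - alpha,   Phi_2 = x1 + x2.
# Phi(0, 0) = 0 and D_x Phi(0,0) = [[0, 1], [1, 1]] is invertible,
# so for small alpha Newton's method from x = 0 finds x(alpha).

def newton_solve(alpha, steps=30):
    x1, x2 = 0.0, 0.0
    for _ in range(steps):
        f1 = x1 ** 2 + x2 - alpha
        f2 = x1 + x2
        # Jacobian [[2*x1, 1], [1, 1]]; solve the 2x2 linear system by hand
        a, b, c, d = 2 * x1, 1.0, 1.0, 1.0
        det = a * d - b * c          # equals -1 at the origin, stays nonzero
        dx1 = (d * f1 - b * f2) / det
        dx2 = (a * f2 - c * f1) / det
        x1, x2 = x1 - dx1, x2 - dx2
    return x1, x2

x1, x2 = newton_solve(0.1)
print(x1, x2)   # the branch of solutions through the origin at alpha = 0.1
```

Here the solution can be checked by hand: eliminating $x_2=-x_1$ gives $x_1^2-x_1-\alpha=0$, and the root near zero is $x_1=(1-\sqrt{1+4\alpha})/2$.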
What happens if $N$ is degenerate? This is where Lyapunov-Schmidt reduction comes in. Suppose for definiteness that the rank of $N$ is $n-1$. Thus the kernel $K$ of $N$ is one-dimensional. We can do linear transformations independently in the copies of $\mathbb{R}^n$ in the domain and range so as to simplify things. Let $e_1,\ldots,e_n$ be the standard basis in a particular coordinate system. It can be arranged that $K$ is the span of $e_1$ and the range of $N$ is the span of $e_2,\ldots,e_n$. Now define a mapping from $\mathbb{R}^n\times\mathbb{R}^m$ to $\mathbb{R}^{n-1}$ by $(x,\alpha)\mapsto E\Phi(x,\alpha)$, where $E$ is the projection onto the range of $N$ along the space spanned by $e_1$. Things have now been set up so that the implicit function theorem can be applied to the equation $E\Phi=0$, solved for $x_2,\ldots,x_n$ as functions of $x_1$ and $\alpha$. It follows that there is a smooth mapping $\psi$ such that $E\Phi(x_1,\psi(x_1,\alpha),\alpha)=0$. In other words the points with $(x_2,\ldots,x_n)=\psi(x_1,\alpha)$ satisfy $n-1$ of the $n$ equations. It only remains to solve one equation which is given by $(I-E)\Phi(x_1,\psi(x_1,\alpha),\alpha)=0$. The advantage of this is that the dimensionality of the problem to be solved has been reduced drastically. The disadvantage is that the mapping $\psi$ is not known – we only know that it exists. At first sight it may be asked how this could possibly be useful. One way of going further is to use the fact that information about derivatives of $\Phi$ at the origin can be used to give corresponding information on derivatives of $\psi$ at the origin. Under some circumstances this may be enough to show that the problem is equivalent to a simpler problem after a suitable diffeomorphism, giving qualitative information on the solution set. The last type of conclusion belongs to the field known as singularity theory.
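The reduction can be mimicked numerically on a toy example of my own, chosen so that everything is explicit: take $n=2$, $m=1$ and $\Phi(x_1,x_2,\alpha)=(\alpha x_1 - x_1 x_2,\ x_2 - x_1^2)$. Then $N$ has rank one, its kernel is spanned by $e_1$ and its range by $e_2$, so $E$ is the projection onto the second component. Solving $E\Phi=0$ for $x_2$ and substituting gives a reduced equation in $x_1$ alone, which here works out to the pitchfork $\alpha x_1 - x_1^3 = 0$.

```python
# Numerical sketch of Lyapunov-Schmidt reduction on an invented example:
#   Phi(x1, x2, alpha) = (alpha*x1 - x1*x2,  x2 - x1**2),  n = 2, m = 1.
# N = D_x Phi(0,0) = [[0, 0], [0, 1]] has rank 1; its kernel is spanned
# by e_1, its range by e_2, so E projects onto the second component.

def phi(x1, x2, alpha):
    return (alpha * x1 - x1 * x2, x2 - x1 ** 2)

def solve_range_equation(x1, alpha, tol=1e-12):
    """Solve E*Phi = 0, i.e. Phi_2(x1, x2, alpha) = 0, for x2 by Newton's
    method; this is the step the implicit function theorem guarantees."""
    x2 = 0.0
    for _ in range(50):
        f = x2 - x1 ** 2      # Phi_2
        df = 1.0              # d(Phi_2)/d(x2), invertible near the origin
        x2 -= f / df
        if abs(f) < tol:
            break
    return x2                 # this is psi(x1, alpha)

def reduced(x1, alpha):
    """The one remaining equation (I - E)*Phi = 0 after substituting the
    implicitly defined x2 = psi(x1, alpha)."""
    x2 = solve_range_equation(x1, alpha)
    return phi(x1, x2, alpha)[0]

# Algebraically reduced(x1, alpha) = alpha*x1 - x1**3, a pitchfork: besides
# x1 = 0 there are two branches x1 = ±sqrt(alpha) for alpha > 0.
for x1 in (0.0, 0.5, -0.5):
    print(x1, reduced(x1, 0.25))
```

In real applications $\psi$ is of course only evaluated implicitly like this, or its Taylor coefficients at the origin are computed from those of $\Phi$; the point of the sketch is just that the two-dimensional problem has collapsed to a single scalar equation.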
My main source of information for the above account was the first chapter of the book ‘Singularities and Groups in Bifurcation Theory’ by M. Golubitsky and D. Schaeffer. I did reformulate things to correspond to my own ideas of simplicity. In that book there is also a lot of more advanced material on Lyapunov-Schmidt reduction. In particular the space $\mathbb{R}^n$ may be replaced by an infinite-dimensional Banach space in some applications. An example of this is discussed in Chapter 8 of the book. This is the Hopf bifurcation, which describes a way in which periodic solutions of a system of ODE can arise from a stationary solution as a parameter is varied. This is then applied in Case Study 2, immediately following that chapter, to study the space-clamped Hodgkin-Huxley system mentioned in a previous post.
March 26, 2009 at 10:32 am |
[…] is it must be infinite-dimensional. In the most optimistic case some kind of centre manifold or Lyapunov-Schmidt reduction might be used to get back to a finite-dimensional (even low-dimensional) system. Another problem, […]
March 27, 2009 at 9:44 am |
I have corrected some typos in the original post. I thank Roger Bieli for pointing them out to me.
February 2, 2011 at 1:55 pm |
A terrific read, thanks! I was trying to understand the LS reduction and found your explanation excellent.
Just one more technical remark: in general, I guess, the range and the nullspace of N are not complementary (this is true however if N^2 = N, i.e. N is a kind of projection). That would mean that if e_1 spans the nullspace, that wouldn’t automatically imply that e_2, …, e_n span the whole range. But I guess you’re interested just in e_1 anyway. Please correct me if I’m wrong.
Yours.
February 2, 2011 at 3:26 pm |
Hi,
What I wrote is not wrong but I have been a little sloppy with the notation. I said ‘do linear transformations independently in the domain and range’, which implies that I am using two different bases. However I did not choose different notations for them. It would have been more precise (and less confusing?) if I had put primes on the basis vectors with indices 2 to n.
August 6, 2020 at 2:04 pm |
[…] is Lyapunov-Schmidt reduction. A brief general description of this technique can be found in a previous post. In the absence of non-zero purely imaginary eigenvalues of the system has a one dimensional […]
January 1, 2022 at 3:22 pm |
[…] about the existence proof for Hopf bifurcations. Here I want to explain another proof which uses Lyapunov-Schmidt reduction. This is based on the book ‘Singularities and Groups in Bifurcation Theory’ by […]
August 30, 2022 at 8:53 am |
[…] discussed the process of Lyapunov-Schmidt reduction in a previous post. Here I give an extension of that to treat the question of stability. I again follow the book of […]