## Lyapunov-Schmidt reduction

At an early age we learn how to tackle the problem of solving $n$ linear equations for $n$ unknowns. What about solving $n$ nonlinear equations for $n$ unknowns? In general not much can be said. One promising strategy is to start with a problem whose solution is known and perturb it. Consider an equation $F(x,\lambda)=0$, where $x\in {\bf R}^n$ should be thought of as the unknown and $\lambda\in {\bf R}$ as a parameter. The mapping defined by $F$ is assumed smooth. Suppose that $F(0,0)=0$, so that we have a solution for $\lambda=0$. It is helpful to consider the derivative $N=D_x F(0,0)$ of $F$ with respect to $x$ at the origin. If the linear map $N$ is invertible then we are in the situation of the implicit function theorem. The theorem says that there exists a smooth mapping $g$ from a neighbourhood of zero in ${\bf R}$ to ${\bf R}^n$ such that $F(g(\lambda),\lambda)=0$, and that $g$ is (locally) unique. In other words, the system of $n$ equations has a unique solution for every parameter value near zero.
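The situation described by the implicit function theorem can be illustrated numerically. The following sketch uses a hypothetical two-dimensional system (not taken from any particular source) with $F(0,0)=0$ and invertible $N=D_x F(0,0)$, and computes the solution $g(\lambda)$ for a small parameter value by Newton's method:

```python
# A hypothetical example: F(x, lam) = 0 with F(0,0) = 0 and an
# invertible Jacobian D_x F(0,0) = [[1, 1], [1, -1]], so the implicit
# function theorem gives a unique solution branch x = g(lam) near 0.

def F(x, lam):
    x1, x2 = x
    return (x1 + x2 + lam + x1**2,
            x1 - x2 + lam * x2)

def jacobian(x, lam):
    x1, x2 = x
    return ((1 + 2 * x1, 1),
            (1, -1 + lam))

def newton(lam, x=(0.0, 0.0), tol=1e-12, max_iter=50):
    """Newton's method starting from the known solution at lam = 0."""
    for _ in range(max_iter):
        f1, f2 = F(x, lam)
        (a, b), (c, d) = jacobian(x, lam)
        det = a * d - b * c              # invert the 2x2 Jacobian directly
        dx1 = (f1 * d - b * f2) / det
        dx2 = (a * f2 - f1 * c) / det
        x = (x[0] - dx1, x[1] - dx2)
        if abs(f1) + abs(f2) < tol:
            break
    return x

# For small lam the iteration converges to the unique solution g(lam)
# near the origin, as the theorem predicts.
x = newton(0.01)
```

Of course the theorem guarantees far more than the computation shows (existence, uniqueness and smoothness of the whole branch), but the Newton iteration makes the content concrete: near a solution with invertible linearization, the perturbed problem remains uniquely solvable.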

What happens if $N$ is degenerate? This is where Lyapunov-Schmidt reduction comes in. Suppose for definiteness that the rank of $N$ is $n-1$, so that the kernel $L$ of $N$ is one-dimensional. We can do linear transformations independently in the copies of ${\bf R}^n$ in the domain and range so as to simplify things. Let $e_1,\ldots,e_n$ be the standard basis in a particular coordinate system. It can be arranged that $L$ is the span of $e_1$ and the range of $N$ is the span of $e_2,\ldots,e_n$. Now define a mapping from ${\bf R}^{n-1}\times {\bf R}^2$ to ${\bf R}^{n-1}$ by $G(x_2,\ldots,x_n,x_1,\lambda)=P(F(x_1,\ldots,x_n,\lambda))$, where $P$ is the projection onto the range of $N$ along the space spanned by $e_1$. Things have now been set up so that the implicit function theorem can be applied to $G$. It follows that there is a smooth mapping $h$ such that $G(h(x_1,\lambda),x_1,\lambda)=0$. In other words, setting $(x_2,\ldots,x_n)=h(x_1,\lambda)$ satisfies $n-1$ of the $n$ equations. It only remains to solve the single equation $H(x_1,\lambda)=F^1(x_1,h(x_1,\lambda),\lambda)=0$. The advantage of this is that the dimension of the problem to be solved has been reduced drastically. The disadvantage is that the mapping $h$ is not known; we only know that it exists. At first sight it may be asked how this could possibly be useful. One way of going further is to use the fact that information about derivatives of $F$ at the origin yields corresponding information about derivatives of $H$ at the origin. Under some circumstances this may be enough to show that, after a suitable diffeomorphism, the problem is equivalent to a simpler one, giving qualitative information about the solution set. This last type of conclusion belongs to the field known as singularity theory.
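The reduction can be carried out explicitly for a small example. The sketch below uses a hypothetical system with $n=2$ (chosen for illustration, not from the book) in which $N=D_x F(0,0)$ has rank one, with kernel spanned by $e_1$ and range spanned by $e_2$. The projected equation $G=F^2=0$ is solved for $x_2=h(x_1,\lambda)$ by Newton's method, and the reduced equation $H$ is then evaluated:

```python
# A minimal numerical sketch of Lyapunov-Schmidt reduction for n = 2.
# The system is hypothetical, arranged so that N = D_x F(0,0) =
# [[0, 0], [0, 1]] has kernel span{e_1} and range span{e_2}:
#   F^1(x1, x2, lam) = lam*x1 - x1**3 + x2**2
#   F^2(x1, x2, lam) = x2 - x1**2

def F(x1, x2, lam):
    return (lam * x1 - x1**3 + x2**2, x2 - x1**2)

def h(x1, lam, tol=1e-12):
    """Solve the projected equation G = F^2 = 0 for x2 by Newton's
    method. This is the mapping whose existence the implicit function
    theorem guarantees; here it happens to be h(x1, lam) = x1**2."""
    x2 = 0.0
    for _ in range(50):
        g = x2 - x1**2           # F^2 evaluated at the current iterate
        dg = 1.0                 # derivative of F^2 with respect to x2
        x2 -= g / dg
        if abs(g) < tol:
            break
    return x2

def H(x1, lam):
    """Reduced (bifurcation) equation H(x1,lam) = F^1(x1, h(x1,lam), lam);
    its zeros are in one-to-one correspondence with solutions of F = 0
    near the origin."""
    return F(x1, h(x1, lam), lam)[0]

# Here H(x1, lam) = lam*x1 - x1**3 + x1**4, so x1 = 0 is always a root,
# and for small lam > 0 a sign change of H reveals a nontrivial branch.
```

In this toy case $h$ is known in closed form, so the reduction can be checked by hand; in general one only knows that $h$ exists and must extract information about $H$ from derivatives of $F$, as described above.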

My main source of information for the above account was the first chapter of the book ‘Singularities and groups in bifurcation theory’ by M. Golubitsky and D. Schaeffer. I did reformulate things to correspond to my own ideas of simplicity. In that book there is also a lot of more advanced material on Lyapunov-Schmidt reduction. In particular the space ${\bf R}^n$ may be replaced by an infinite-dimensional Banach space in some applications. An example of this is discussed in Chapter 8 of the book. This is the Hopf bifurcation, which describes a way in which periodic solutions of a system of ODE can arise from a stationary solution as a parameter is varied. This is then applied in Case Study 2, immediately following that chapter, to the space-clamped Hodgkin-Huxley system mentioned in a previous post.

### 4 Responses to “Lyapunov-Schmidt reduction”

1. Shilnikov’s theorems on bifurcation from a homoclinic orbit « Hydrobates Says:

[…] is it must be infinite-dimensional. In the most optimistic case some kind of centre manifold or Lyapunov-Schmidt reduction might be used to get back to a finite-dimensional (even low-dimensional) system. Another problem, […]

2. hydrobates Says:

I have corrected some typos in the original post. I thank Roger Bieli for pointing them out to me.

3. Grigory Bordyugov Says:

A terrific read, thanks! I was trying to understand the LS reduction and found your explanation excellent.

Just one more technical remark: in general, I guess, the range and the nullspace of N are not complementary (this is true however if N^2 = N, i.e. N is a kind of projection). That would mean that if e_1 spans the nullspace, that wouldn’t automatically imply that e_2, …, e_n span the whole range. But I guess you’re interested just in e_1 anyway. Please correct me if I’m wrong.

Yours.

• hydrobates Says:

Hi,

What I wrote is not wrong but I have been a little sloppy with the notation. I said ‘do linear transformations independently in the domain and range’ which implies that I am using two different bases. However I did not choose different notations for them. It would have been more precise (and less confusing?) if I had put primes on the basis vectors with indices 2 to n.