The pole-shifting theorem

Here I discuss a theorem of linear algebra whose interest comes from its applications in control theory. The discussion follows that in the book ‘Mathematical Control Theory’ by Eduardo Sontag. Let A be an n\times n matrix and B an n\times m matrix. We consider the expression A+BF for an m\times n matrix F. The game is now to fix A and B and attempt, by means of a suitable choice of F, to give the eigenvalues of A+BF specified values. The content of the theorem is that this is always possible, provided a suitable genericity assumption, called controllability, is satisfied. In fact this statement has to be modified slightly. I want to work with real matrices, and then the non-real eigenvalues automatically come in complex conjugate pairs. Thus the correct statement concerns candidates for the set of eigenvalues which satisfy that restriction.

Where does the name of the theorem come from? Its source is the fact that the eigenvalues of a matrix M can be thought of as poles of the function (\det (M-\lambda I))^{-1} or of the matrix-valued function (M-\lambda I)^{-1}. This is a picture popular in classical control theory. The primary importance of this result for control theory is that the stability of a control system is in many cases determined by a matrix of the form A+BF. If we can choose F so that the eigenvalues of A+BF are all real and negative then we have shown how a system can be designed for which the desired state is asymptotically stable. When the state is perturbed it returns to the desired state. It even does so in a monotone manner, i.e. without any overshoot.
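As a small numerical illustration (not from Sontag's book; the matrices below are ones I made up for the example), one can experiment with this using scipy. Note that scipy.signal.place_poles works with the convention A-BK, so F=-K in the notation used here.

```python
# A minimal sketch of pole shifting on a made-up example.
import numpy as np
from scipy.signal import place_poles

# A has an unstable eigenvalue (1); B is a single input column (m = 1).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

# Desired eigenvalues: real and negative, so the closed loop is asymptotically stable.
desired = np.array([-1.0, -2.0])

# place_poles uses the convention A - B K, so F = -K in the notation A + B F.
K = place_poles(A, B, desired).gain_matrix
F = -K

print(np.linalg.eigvals(A + B @ F))  # approximately [-1., -2.]
```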

What is the genericity condition? It is simplest to explain in the case m=1. Then B is a column vector and we can consider the vectors B, AB, A^2B, \ldots. By the Cayley–Hamilton theorem A^nB, and every higher power of A applied to B, lies in the span of B, AB, \ldots, A^{n-1}B, so after at most n-1 steps the span of these vectors stops growing. Controllability is the condition that the vectors generated in this way span the whole space. This condition can be reformulated as follows. We identify a set of n vectors in n dimensions with a point of the Euclidean space of n^2 dimensions. To put it another way, we place the vectors side by side as the columns of a matrix. Then the condition of controllability is nothing other than the condition that the rank of the resulting matrix is n. The path to the general definition is then simple. For general m the objects listed before are no longer vectors but n\times m matrices; we place them side by side to get an n\times mn matrix R(A,B), the reachability or controllability matrix. The condition for controllability is then that this matrix has rank n.
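This rank test is easy to carry out in practice. The sketch below is my own illustration of the definition, reusing the example matrices from above.

```python
# Sketch: build the reachability matrix R(A, B) and test the rank condition.
import numpy as np

def reachability_matrix(A, B):
    """Return R(A, B) = [B, AB, ..., A^{n-1}B], an n x (n*m) matrix."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """The pair (A, B) is controllable iff R(A, B) has rank n."""
    return np.linalg.matrix_rank(reachability_matrix(A, B)) == A.shape[0]

# Example with the matrices used above (m = 1):
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))  # True
```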

There are also other equivalent conditions for controllability, known under the name of the Hautus Lemma. This says that the rank condition (call this condition (i)) is equivalent to the condition (call it condition (ii)) that the rank of the matrix obtained by placing \lambda I-A next to B is n for all complex numbers \lambda. It is easily seen that it suffices to assume this in the case that \lambda is an eigenvalue of A, since for any other \lambda the block \lambda I-A is invertible and so already has rank n on its own. The proof that (i) implies (ii) is elementary linear algebra. The converse is more complicated and relies on the concept of the Kalman controllability decomposition. The proof of the pole-shifting theorem itself is rather involved and I will not discuss it here.
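The Hautus test is also straightforward to check numerically. The sketch below (again my own code, not taken from Sontag) verifies condition (ii) at each eigenvalue of A, which as just noted is enough.

```python
# Sketch: Hautus test, rank([lambda*I - A, B]) == n for every eigenvalue lambda of A.
import numpy as np

def hautus_controllable(A, B):
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M) < n:
            return False
    return True

A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
print(hautus_controllable(A, B))  # True, agreeing with the rank test above
```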
