Archive for March, 2019

The pole-shifting theorem, part 2

March 25, 2019

In the last post I wrote about the pole-shifting theorem but said almost nothing about the proof. Now I want to sketch that proof, following the account in Sontag's book. It is convenient to talk about prescribing the characteristic polynomial rather than prescribing the eigenvalues; the characteristic polynomial encodes the set of eigenvalues together with their multiplicities. It is helpful to use changes of basis in the state space to simplify the problem. A change of basis leads to a similarity transformation of A and so does not change the characteristic polynomial. It also does not change the rank of R(A,B), and hence the property of controllability is unchanged. The set of polynomials which can be obtained from matrices of the form A+BF also does not change, since the change of basis can be used to transform F in an obvious way. Putting these things together shows that, when proving the theorem for given matrices, it is permissible to pass to new matrices by a change of basis whenever convenient.
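These invariance statements are concrete enough to check numerically. The following sketch (in Python with NumPy, using a made-up 2×2 example that is not from the post) verifies that a similarity transformation leaves both the characteristic polynomial of A and the rank of R(A,B) unchanged.

```python
import numpy as np

def reach(A, B):
    """R(A, B) = [B, AB, ..., A^(n-1)B]."""
    return np.hstack([np.linalg.matrix_power(A, k) @ B
                      for k in range(A.shape[0])])

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # an invertible change of basis
A2 = np.linalg.inv(T) @ A @ T         # similarity transformation of A
B2 = np.linalg.inv(T) @ B

# The characteristic polynomial coefficients agree ...
assert np.allclose(np.poly(A), np.poly(A2))
# ... and so does the rank of the reachability matrix.
assert np.linalg.matrix_rank(reach(A, B)) == np.linalg.matrix_rank(reach(A2, B2))
```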

The first step in the proof is to look at the theorem in the case of one control variable (m=1). I will use the notation (A,b). In this case the system can be brought into a special form, the controller form, by a change of basis, and then it is elementary to solve for the unique feedback which does the job. The next step is to reduce the case of general m to a modified control problem with m=1. Let v be any vector with Bv non-zero and set b=Bv. The idea is to show that there is an F_1 such that (A+BF_1,b) is controllable. If this can be done then the result for m=1 gives, for a given polynomial \chi, a 1\times n matrix f such that the characteristic polynomial of (A+BF_1)+bf is \chi. But (A+BF_1)+bf=A+B(F_1+vf), and so taking F=F_1+vf solves the original problem.
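The post's route to the single-input feedback goes through the controller form. An equivalent closed-form expression for the same unique feedback is Ackermann's formula, which is not mentioned in the post but gives the same answer; here is a sketch, assuming (A,b) is controllable.

```python
import numpy as np

def ackermann(A, b, poles):
    """Single-input pole placement: return the 1 x n feedback f such that
    A + b f has characteristic polynomial with the prescribed roots."""
    n = A.shape[0]
    R = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    chi = np.poly(poles)                  # desired characteristic polynomial
    chi_A = np.zeros_like(A)              # evaluate chi(A) by Horner's rule
    for c in chi:
        chi_A = chi_A @ A + c * np.eye(n)
    e_n = np.zeros(n)
    e_n[-1] = 1.0                         # last row of R^{-1} picks out f
    return -(e_n @ np.linalg.inv(R) @ chi_A).reshape(1, n)

# Hypothetical example: the double integrator, with poles moved to -1, -2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([[0.0],
              [1.0]])
f = ackermann(A, b, [-1.0, -2.0])
print(np.sort(np.linalg.eigvals(A + b @ f).real))   # [-2. -1.]
```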

It remains to find F_1. For this purpose a sequence of vectors is constructed as follows. First choose a vector v such that Bv is non-zero and let x_1=Bv. Then x_1 is a non-zero element of the state space. Next choose x_2=Ax_1+Bu_1 for some input vector u_1, in such a way that x_1 and x_2 are linearly independent. If this succeeds, continue to choose x_3=Ax_2+Bu_2 in a similar way. The idea is to construct a maximal chain of linearly independent vectors \{x_1,\ldots,x_k\} of this type. The claim is now that if (A,B) is controllable then k=n. Consider the space spanned by the x_i. It is of dimension k. Since the chain cannot be extended, Ax_k+Bu must belong to this space for every choice of u. Taking u=0 shows that Ax_k belongs to the space, and subtracting shows that Bu does too, so that the image of B is contained in the space. The definition of the x_i then implies that Ax_i belongs to the space for all i, so that the space is invariant under A. Putting these facts together shows that the image of R(A,B) is contained in this space. By controllability it must therefore be the whole n-dimensional Euclidean space, i.e. k=n. Next define F_1x_i=u_i for i=1,2,\ldots,n-1 and F_1x_n arbitrarily. Then (A+BF_1)x_i=Ax_i+Bu_i=x_{i+1}, so that the columns of R(A+BF_1,x_1) are x_1,\ldots,x_n. These are linearly independent, which completes the proof.
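This construction is effective and can be implemented directly. In the sketch below (my own illustration, not from the post) a greedy search over u in \{0,e_1,\ldots,e_m\} suffices: if Ax_k and all columns of B lie in the span then so does Ax_k+Bu for every u, so the chain is maximal exactly when none of these candidates extends it.

```python
import numpy as np

def chain_feedback(A, B, tol=1e-9):
    """Build the chain x_1,...,x_n and a feedback F1 with F1 x_i = u_i,
    so that (A + B F1, x_1) is controllable, assuming (A, B) is."""
    n, m = B.shape
    j = int(np.flatnonzero(np.linalg.norm(B, axis=0) > tol)[0])
    xs = [B[:, j]]                    # x_1 = B v with v = e_j
    us = []                           # inputs with x_{i+1} = A x_i + B u_i
    while len(xs) < n:
        for u in [np.zeros(m)] + list(np.eye(m)):
            cand = A @ xs[-1] + B @ u
            if np.linalg.matrix_rank(np.column_stack(xs + [cand]), tol) > len(xs):
                xs.append(cand)
                us.append(u)
                break
        else:
            raise ValueError("(A, B) is not controllable")
    us.append(np.zeros(m))            # F1 x_n is arbitrary; take it to be 0
    X = np.column_stack(xs)
    F1 = np.column_stack(us) @ np.linalg.inv(X)
    return F1, xs[0]

# Hypothetical example with m = 2.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
F1, x1 = chain_feedback(A, B)
M = A + B @ F1
R1 = np.column_stack([x1, M @ x1, M @ M @ x1])
print(np.linalg.matrix_rank(R1))   # 3: (A + B F1, x1) is controllable
```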

In fact this theorem can be extended to one which describes which polynomials can be assigned when (A,B) is not controllable. They are the polynomials of the form \chi_1\chi_u, where \chi_1 is an arbitrary monic polynomial of degree r, with r the rank of R(A,B), and \chi_u is a polynomial of degree n-r determined by (A,B), called the uncontrollable part of the characteristic polynomial of A. What this means is that some poles (the uncontrollable ones) are fixed once and for all, while the others can be shifted arbitrarily.
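A small numerical illustration of a fixed pole (my own made-up example, not from the post): in the pair below the input never reaches the second state, so -3 is an uncontrollable pole and appears among the eigenvalues of A+BF no matter how F is chosen.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -3.0]])
B = np.array([[1.0],
              [0.0]])      # the input never touches the second state

rng = np.random.default_rng(0)
for _ in range(5):
    F = rng.standard_normal((1, 2))
    ev = np.linalg.eigvals(A + B @ F)
    # A + B F is upper triangular here, so -3 is always an eigenvalue.
    assert np.min(np.abs(ev - (-3.0))) < 1e-9
```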

The pole-shifting theorem

March 22, 2019

Here I discuss a theorem of linear algebra whose interest comes from its applications in control theory. The discussion follows that in the book ‘Mathematical Control Theory’ by Eduardo Sontag. Let A be an n\times n matrix and B an n\times m matrix. We consider the expression A+BF for an m\times n matrix F. The game is now to fix A and B and attempt, by means of a suitable choice of F, to give the eigenvalues of A+BF specified values. The content of the theorem is that this is always possible, provided a suitable genericity assumption, called controllability, is satisfied. In fact this statement has to be modified slightly. I want to work with real matrices and thus the eigenvalues automatically come in complex conjugate pairs. Thus the correct statement concerns candidates for the set of eigenvalues which satisfy that restriction. Where does the name of the theorem come from? Its source is the fact that the eigenvalues of a matrix M can be thought of as poles of the function (\det (M-\lambda I))^{-1} or of the matrix-valued function (M-\lambda I)^{-1}. This is a picture popular in classical control theory. The primary importance of this result for control theory is that the stability of a control system is in many cases determined by a matrix of the form A+BF. If we can choose F so that the eigenvalues of A+BF are all real and negative then we have shown how a system can be designed for which the desired state is asymptotically stable. When the state is perturbed it returns to the desired state. It even does so in a monotone manner, i.e. without any overshoot.
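As a minimal concrete illustration (a made-up example, not from the text): for the double integrator one can write down such an F by hand and check the eigenvalues.

```python
import numpy as np

# Double integrator x'' = u, written as a first-order system.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
# Choosing F = [-2, -3] gives A + B F = [[0, 1], [-2, -3]], whose
# characteristic polynomial is s^2 + 3s + 2 = (s + 1)(s + 2).
F = np.array([[-2.0, -3.0]])
ev = np.linalg.eigvals(A + B @ F)
print(np.sort(ev.real))   # [-2. -1.]: both eigenvalues real and negative
```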

What is the genericity condition? It is simplest to explain in the case m=1. Then B is a column vector and we can consider the vectors B, AB, A^2B, \ldots. By the Cayley–Hamilton theorem the span of these vectors stops growing after at most n-1 steps, so it suffices to consider the first n of them. Controllability is the condition that the vectors generated in this way span the whole space. This condition can be reformulated as follows. We can identify a set of n vectors in n dimensions with a point in the Euclidean space of n^2 dimensions; to put it another way, we place the vectors side by side as the columns of a matrix. Then the condition of controllability is nothing other than the condition that the rank of the resulting matrix is n. The path to the general definition is then simple. The blocks listed before are no longer vectors but we place them side by side to get an n\times mn matrix R(A,B), the reachability or controllability matrix. The condition for controllability is again that this matrix has rank n.
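The construction of R(A,B) is mechanical. A short sketch in Python with NumPy, using a hypothetical single-input example (a chain of three integrators):

```python
import numpy as np

def reachability_matrix(A, B):
    """R(A, B): the blocks B, AB, ..., A^(n-1)B placed side by side."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# m = 1: B is a column vector, so R(A, B) is square.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
b = np.array([[0.0], [0.0], [1.0]])
R = reachability_matrix(A, b)
print(np.linalg.matrix_rank(R))   # 3: (A, b) is controllable
```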

There are also other equivalent conditions for controllability, known under the name of the Hautus Lemma. This says that the rank condition (call this condition (i)) is equivalent to the condition (call it condition (ii)) that the rank of the matrix obtained by placing \lambda I-A next to B is n for all complex numbers \lambda. It is easily seen that it suffices to assume this when \lambda is an eigenvalue of A, since for any other \lambda the block \lambda I-A alone already has rank n. The proof that (i) implies (ii) is elementary linear algebra. The converse is more complicated and relies on the concept of the Kalman controllability decomposition. The proof of the pole-shifting theorem itself is rather involved and I will not discuss it here.
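Condition (ii) is also easy to check numerically, precisely because it only needs to be tested at the eigenvalues of A. A sketch (my own illustration, with two made-up examples):

```python
import numpy as np

def hautus_controllable(A, B, tol=1e-9):
    """Condition (ii): the rank of [lambda I - A, B] equals n at every
    eigenvalue lambda of A (the only values of lambda that matter)."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n, dtype=complex) - A, B])
        if np.linalg.matrix_rank(M, tol) < n:
            return False
    return True

# Controllable: the double integrator.
print(hautus_controllable(np.array([[0.0, 1.0], [0.0, 0.0]]),
                          np.array([[0.0], [1.0]])))          # True
# Not controllable: the input misses the second eigendirection.
print(hautus_controllable(np.array([[1.0, 0.0], [0.0, 2.0]]),
                          np.array([[1.0], [0.0]])))          # False
```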