## Archive for October, 2011

### The Perron-Frobenius theorem

October 20, 2011

The Perron-Frobenius theorem is a result in linear algebra which I have known about for a long time. On the other hand, I had never taken the time to study a proof carefully and think about why the result holds. I was now motivated to change this by my interest in chemical reaction network theory (CRNT) and the realization that the Perron-Frobenius theorem plays a central role there. In particular, it lies at the heart of the original proof of the existence part of the deficiency zero theorem. Here I will review some facts related to the Perron-Frobenius theorem and its proof.

Let $A$ be a square matrix all of whose entries are positive. Note how this condition makes no sense for an endomorphism of a vector space in the absence of a preferred basis. Then $A$ has a positive eigenvalue $\lambda_+$ which is strictly greater than the magnitude of any other eigenvalue. The generalized eigenspace corresponding to this eigenvalue is one-dimensional. The eigenspace contains a vector all of whose components are positive. Let $C_i$ be the sum of the entries in the $i$th column of $A$. Then $\lambda_+$ lies between the minimum and the maximum of the $C_i$.
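These statements are easy to check numerically. The following sketch uses a small positive matrix of my own choosing (not taken from any particular source) and verifies the dominance of the Perron eigenvalue, the positivity of the eigenvector, and the column-sum bounds:

```python
import numpy as np

# A small positive matrix chosen for illustration (my own example).
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 4.0, 1.0],
              [2.0, 1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)

# The Perron eigenvalue is the eigenvalue of largest magnitude; for a
# positive matrix it is real, positive and simple.
i = np.argmax(np.abs(eigvals))
lam = eigvals[i].real
others = np.delete(eigvals, i)

# The corresponding eigenvector can be scaled so that all of its
# components are positive (numpy may return it with either sign).
v = eigvecs[:, i].real
v *= np.sign(v[0])

# lambda_+ lies between the smallest and largest column sums C_i.
C = A.sum(axis=0)
print(lam, C.min(), C.max())
```

For this matrix the column sums are 5, 6 and 5, so the Perron eigenvalue is pinned between 5 and 6 before any eigenvalue computation is done.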

If the assumption on $A$ is weakened to requiring only non-negative entries then most of the properties listed above are lost. However analogues can be obtained if the matrix is irreducible. This means by definition that the matrix has no invariant coordinate subspace. In that case $A$ has a positive eigenvalue which is at least as big as the magnitude of any other eigenvalue. As in the positive case this eigenvalue has multiplicity one. There is a vector in the eigenspace all of whose components are positive. In general there are other eigenvalues of the same magnitude as the maximal positive eigenvalue and they are related to it by multiplication with powers of a root of unity. The estimate for the maximal positive eigenvalue in terms of column sums remains true. The last statement follows from the continuous dependence of the eigenvalues on the matrix.
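The simplest example of this phenomenon, which I give here as my own illustration, is a cyclic permutation matrix: it is non-negative and irreducible but not positive, and every eigenvalue lies on the circle of radius equal to the Perron eigenvalue:

```python
import numpy as np

# A cyclic permutation matrix: non-negative and irreducible (no
# coordinate subspace is invariant) but not positive.
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

eigvals = np.linalg.eigvals(P)

# The maximal positive eigenvalue is 1, but all three eigenvalues have
# magnitude 1: they are the cube roots of unity, i.e. the Perron
# eigenvalue multiplied by powers of a root of unity.
roots = np.exp(2j * np.pi * np.arange(3) / 3)
print(np.sort_complex(eigvals))
```

The positive eigenvector here is $(1,1,1)$, and the column-sum estimate is sharp since every column sums to 1.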

Suppose now that a matrix $B$ has the properties that its off-diagonal elements are non-negative and that the sum of the elements in each of its columns is zero. Then the sum of the elements in each column of a matrix of the form $B+\lambda I$ is $\lambda$. On the other hand for $\lambda$ sufficiently large the entries of the matrix $B+\lambda I$ are non-negative. If $B$ is irreducible then it can be concluded that the Perron eigenvalue of $B+\lambda I$ is $\lambda$. Since $Bv=0$ is equivalent to $(B+\lambda I)v=\lambda v$, it follows that the kernel of $B$ is one-dimensional and that it is spanned by a vector all of whose components are positive. In the proof of the deficiency zero theorem this is applied to certain restrictions of the kinetic matrix. The irreducibility property of $B$ follows from the fact that the network is weakly reversible.
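The shift argument can be sketched concretely. The matrix $B$ below is a toy example of my own with non-negative off-diagonal entries and zero column sums, as for a kinetic matrix; it is irreducible since all off-diagonal entries are positive:

```python
import numpy as np

# A toy "kinetic-like" matrix: non-negative off-diagonal entries and
# zero column sums (my own example, not taken from CRNT literature).
B = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 1.0,  2.0, -3.0]])

# Shifting by lambda*I with lambda large enough (here lambda >= 3, the
# largest absolute diagonal entry) gives a non-negative matrix whose
# columns all sum to lambda, so its Perron eigenvalue is exactly lambda.
lam = 4.0
M = B + lam * np.eye(3)

eigvals, eigvecs = np.linalg.eig(M)
i = np.argmax(eigvals.real)

# The Perron eigenvector of M, scaled positive, spans the kernel of B,
# since Bv = 0 is equivalent to (B + lam*I)v = lam*v.
v = eigvecs[:, i].real
v *= np.sign(v[0])
print(eigvals[i].real, B @ v)
```

For this $B$ the kernel is spanned by $(1,1,1)$, which one can also see directly since $B$ happens to be symmetric with zero row sums.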

The Perron-Frobenius theorem is proved in Gantmacher's book on matrices. He proves the non-negative case first and uses that as a basis for the positive case. I would have preferred to see a proof for the positive case in isolation. I was not able to extract from the proof a simple conceptual picture which I found useful. I have seen some mention of the possibility of applying the Brouwer fixed point theorem but I did not find a complete treatment of this kind of approach written anywhere.

There is an infinite-dimensional version of the theorem (the Krein-Rutman theorem). It applies to compact operators on a Banach space which satisfy a suitable positivity condition. In fact this throws some light on the point raised above concerning a preferred basis. Some extra structure is necessary but it does not need to be as much as a basis. What is needed is a positive cone. Let $K$ be the set of vectors in $n$-dimensional Euclidean space all of whose components are non-negative. A matrix is non-negative if and only if it leaves $K$ invariant, and this is something which can reasonably be generalized to infinite dimensions. Thus the set $K$ is the only extra structure which is required.
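The fixed-point idea mentioned above can at least be sketched numerically, though this is an illustration of the mechanism rather than a proof. For a positive matrix the map $x\mapsto Ax/\|Ax\|_1$ sends the standard simplex (the intersection of the cone $K$ with a hyperplane) to itself, so Brouwer's theorem guarantees a fixed point, which is a positive eigenvector. Iterating this map is just power iteration:

```python
import numpy as np

# A positive 2x2 matrix (my own example).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Start at the centre of the simplex {x >= 0, sum(x) = 1} and iterate
# the map x -> Ax / sum(Ax), which sends the simplex to itself.
x = np.full(2, 0.5)
for _ in range(100):
    x = A @ x
    x /= x.sum()

# The iteration converges to a fixed point of the map, i.e. a positive
# Perron eigenvector; the eigenvalue is recovered as a component ratio.
lam = (A @ x)[0] / x[0]
print(lam, x)
```

Here the column sums are 3 and 4, and indeed the computed eigenvalue $(5+\sqrt{5})/2\approx 3.618$ lies between them.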