I discussed the deficiency zero theorem of chemical reaction network theory (CRNT) in a previous post. (Some further comments on this can be found here and here.) This semester I am giving a lecture course on chemical reaction network theory. Lecture notes are growing with the course and can be found in German and English versions on the web page of the course. The English version can also be found here. Apart from introductory material the first main part of the course was a proof of the Deficiency Zero Theorem. There are different related proofs in the literature and I have followed the approach in the classic lecture notes of Feinberg on the subject closely. The proof in those notes is essentially self-contained apart from one major input from a paper of Feinberg and Horn (Arch. Rat. Mech. Anal. 66, 83). In this post I want to give a high-level overview of the proof.

The starting point of CRNT is a reaction network. It can be represented by a directed graph where the nodes are the complexes (left or right hand sides of reactions) and the directed edges correspond to the reactions themselves. The connected components of this graph are called the linkage classes of the network and their number is usually denoted by $l$. If two nodes can be connected by oriented paths in both directions they are said to be strongly equivalent. The corresponding equivalence classes are called strong linkage classes. A strong linkage class is called terminal if there is no directed edge leaving it. The number of terminal strong linkage classes is usually denoted by $t$. Starting from the network, the assumption of mass action kinetics allows a system of ODE $\dot c=f(c)$ to be obtained in an algorithmic way. The quantity $c$ is a vector of concentrations as a function of time. Basic mathematical objects involved in the definition of the network are the set $\mathcal S$ of chemical species, the set $\mathcal C$ of complexes and the set $\mathcal R$ of reactions. An important role is also played by the vector spaces of real-valued functions on these finite sets, which I will denote by $F(\mathcal S)$, $F(\mathcal C)$ and $F(\mathcal R)$, respectively. Using natural bases they can be identified with ${\mathbb R}^m$, ${\mathbb R}^n$ and ${\mathbb R}^r$, where $m$, $n$ and $r$ are the numbers of species, complexes and reactions. The vector $c$ is an element of $F(\mathcal S)$. The mapping $f$ from $F(\mathcal S)$ to itself can be written as a composition of three mappings, two of them linear, $f=YA_k\Psi$. Here $Y$, the complex matrix, is a linear mapping from $F(\mathcal C)$ to $F(\mathcal S)$. $A_k$ is a linear mapping from $F(\mathcal C)$ to itself. The subscript $k$ is there because this matrix depends on the reaction constants, which are typically denoted by $k$. It is also possible to write $f$ in the form $Nv$, where $v\in F(\mathcal R)$ describes the reaction rates and $N$ is the stoichiometric matrix. The image of $N$ is called the stoichiometric subspace and its dimension, the rank of the network, is usually denoted by $s$. The additive cosets of the stoichiometric subspace are called stoichiometric compatibility classes and are clearly invariant under the time evolution.
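To make the decomposition $f=YA_k\Psi=Nv$ concrete, here is a small numerical sketch. The network $A\rightleftharpoons 2B$ and the rate constants are my own toy example, not taken from the notes:

```python
import numpy as np

# Toy mass-action network (an assumed example): species {A, B},
# complexes {A, 2B}, reactions A -> 2B (rate k1) and 2B -> A (rate k2).
k1, k2 = 2.0, 1.0

# Complex matrix Y: each column gives the stoichiometry of one complex.
Y = np.array([[1.0, 0.0],    # species A
              [0.0, 2.0]])   # species B; columns: complexes A, 2B

# A_k: Laplacian-type matrix of the reaction graph weighted by the
# rate constants (each column sums to zero).
Ak = np.array([[-k1,  k2],
               [ k1, -k2]])

def Psi(c):
    """Monomial map: the component for complex y is prod_i c_i^{y_i}."""
    return np.array([np.prod(c ** Y[:, j]) for j in range(Y.shape[1])])

def f(c):
    """Right-hand side of the mass-action ODE, dot c = Y A_k Psi(c)."""
    return Y @ Ak @ Psi(c)

# The same vector field written as N v(c): stoichiometric matrix N
# (one column per reaction) times the vector of reaction rates.
N = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

def v(c):
    return np.array([k1 * c[0], k2 * c[1] ** 2])

c = np.array([0.5, 1.5])
print(f(c))        # both factorizations give the same vector
print(N @ v(c))
```

Both factorizations produce the same vector field; the structure of $A_k$ (off-diagonal entries nonnegative, columns summing to zero) is what makes the later analysis of its kernel possible.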
Finally, $\Psi$ is a nonlinear mapping from $F(\mathcal S)$ to $F(\mathcal C)$. It is a generalized polynomial mapping in the sense that its components are products of powers of the components of $c$: the component corresponding to a complex $y$ is $\Psi_y(c)=\prod_i c_i^{y_i}$. This means that $\log\Psi(c)$ depends linearly on the logarithms of the components of $c$. The condition for a stationary solution can be written as $YA_k\Psi(c)=0$. The image of $\Psi$ is got by exponentiating the image of a linear mapping. The matrix of this linear mapping in natural bases is $Y^T$. Thus in looking for stationary solutions we are interested in finding the intersection of the manifold which is the image of $\Psi$ with the kernel of $YA_k$. The simplest way to define the deficiency of the network is to declare it to be $\delta=n-l-s$. A fact which is not evident from this definition is that $\delta$ is always non-negative. In fact $\delta$ is the dimension of the vector space $\ker Y\cap{\rm span}\,\Delta$, where $\Delta=\{\omega_{y'}-\omega_y:y\to y'\in\mathcal R\}$ and $\omega_y$ denotes the characteristic function of the complex $y$. An alternative concept of deficiency, which can be found in lecture notes of Gunawardena, is $\delta'=\dim(\ker Y\cap{\rm im}\,A_k)$. Since this vector space is a subspace of the other we have the inequality $\delta'\le\delta$. The two spaces are equal precisely when each linkage class contains exactly one terminal strong linkage class, i.e. when $t=l$. This is, in particular, true for weakly reversible networks. The distinction between the two definitions is often not mentioned since they are equal for most networks usually considered.
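The formula $\delta=n-l-s$ is directly computable. The following sketch counts linkage classes with a union-find over the undirected reaction graph and obtains the rank from the reaction vectors $y'-y$; the network $A\rightleftharpoons 2B$, $A+B\rightleftharpoons C$ is again an assumed toy example:

```python
import numpy as np

# Hypothetical example network, species ordered (A, B, C):
# complexes A, 2B, A+B, C; reactions A <-> 2B and A+B <-> C.
Y = np.array([[1, 0, 1, 0],    # A
              [0, 2, 1, 0],    # B
              [0, 0, 0, 1]])   # C
reactions = [(0, 1), (1, 0), (2, 3), (3, 2)]  # (source complex, product complex)

n = Y.shape[1]                 # number of complexes

# Linkage classes l: connected components of the undirected reaction graph,
# via a small union-find structure.
parent = list(range(n))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i
for a, b in reactions:
    parent[find(a)] = find(b)
l = len({find(i) for i in range(n)})

# Rank s: dimension of the span of the reaction vectors y' - y.
S = np.array([Y[:, b] - Y[:, a] for a, b in reactions]).T
s = np.linalg.matrix_rank(S)

delta = n - l - s
print(n, l, s, delta)          # here: 4 complexes, 2 linkage classes, rank 2
```

For this network $\delta=4-2-2=0$, so the Deficiency Zero Theorem applies to it.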

If $c$ is a stationary solution then $A_k\Psi(c)$ belongs to $\ker Y$. If $\delta'=0$ (and in particular if $\delta=0$) then this means that $A_k\Psi(c)=0$, since $A_k\Psi(c)$ also lies in ${\rm im}\,A_k$. In other words $\Psi(c)$ belongs to the kernel of $A_k$. Stationary solutions of this type are called complex balanced. It turns out that if $c^*$ is a complex balanced stationary solution then the stationary solutions are precisely those points $c$ for which $\log c-\log c^*$ lies in the orthogonal complement of the stoichiometric subspace. It follows that whenever we have one solution we get a whole manifold of them of dimension $m-s$. It can be shown that each manifold of this type meets each stoichiometric compatibility class in precisely one point. This is proved using a variational argument and a little convex analysis.
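This characterization can be checked numerically. For the toy network $A\rightleftharpoons 2B$ from above (my own example, with assumed rate constants), $c^*=(\tfrac12,1)$ is complex balanced, and shifting $\log c^*$ by a vector orthogonal to the stoichiometric subspace yields another stationary point:

```python
import numpy as np

# Toy network A <-> 2B (assumed example), k1 = 2, k2 = 1.
k1, k2 = 2.0, 1.0
Y = np.array([[1.0, 0.0], [0.0, 2.0]])
Ak = np.array([[-k1, k2], [k1, -k2]])

def Psi(c):
    return np.array([np.prod(c ** Y[:, j]) for j in range(Y.shape[1])])

# c* = (1/2, 1) satisfies k1 c_A = k2 c_B^2 and is complex balanced:
cstar = np.array([0.5, 1.0])
print(Ak @ Psi(cstar))        # zero vector: A_k Psi(c*) = 0

# The stoichiometric subspace is spanned by (-1, 2); its orthogonal
# complement is spanned by (2, 1).  Shifting log c* in that direction
# should give another stationary solution:
t = 0.7
c = np.exp(np.log(cstar) + t * np.array([2.0, 1.0]))
print(Y @ Ak @ Psi(c))        # approximately [0, 0]
```

Here $m-s=2-1=1$, so the stationary solutions form a one-parameter family, parametrized by $t$ in the sketch.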

It is clear from what has been said up to now that it is important to understand the positive elements of the kernel of $A_k$. This kernel has dimension $t$ and it has a basis each of whose elements is positive on one of the terminal strong linkage classes and zero elsewhere. Weak reversibility is equivalent to the condition that the union of the terminal strong linkage classes is the set of all complexes. It can be concluded that when the network is not weakly reversible there exists no positive element of the kernel of $A_k$. Thus for a network which is not weakly reversible and has deficiency zero there exist no positive stationary solutions. This is part of the Deficiency Zero Theorem. Now consider the weakly reversible case. There a key statement of the Deficiency Zero Theorem is that there exists a complex balanced stationary solution $c^*$. Where does this come from? We sum the vectors in the basis of $\ker A_k$ and due to weak reversibility this gives something which is positive, say $\psi$. Then we take the logarithm of the result. When $\delta=0$ the vector $\log\psi$ can be represented as a sum of two contributions, one of which is of the form $Y^T\log c^*$ and the other of which is constant on each linkage class. Then $\Psi(c^*)$ differs from $\psi$ by a factor which is constant on each linkage class, so that $\Psi(c^*)$ again belongs to the kernel of $A_k$ and $c^*$ is complex balanced. A further part of the Deficiency Zero Theorem is that the stationary solution in the weakly reversible case is asymptotically stable. This is proved using the fact that for a complex balanced stationary solution $c^*$ the function $h(c)=\sum_i\left[c_i(\log c_i-\log c^*_i-1)+c^*_i\right]$ is a Lyapunov function which vanishes for $c=c^*$.
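The Lyapunov property can be illustrated numerically for the toy network $A\rightleftharpoons 2B$ used above (again my own example with assumed rate constants): starting in the stoichiometric compatibility class of $c^*$, the function $h$ decreases along a numerically integrated trajectory and tends to zero.

```python
import numpy as np

# Toy network A <-> 2B (assumed example), k1 = 2, k2 = 1; the point
# c* = (1/2, 1) is a complex balanced stationary solution.
k1, k2 = 2.0, 1.0
cstar = np.array([0.5, 1.0])

def f(c):
    """Mass-action vector field: r = net rate of A -> 2B."""
    r = k1 * c[0] - k2 * c[1] ** 2
    return np.array([-r, 2.0 * r])

def h(c):
    """Lyapunov function sum_i [c_i(log c_i - log c*_i - 1) + c*_i]."""
    return np.sum(c * (np.log(c) - np.log(cstar) - 1.0) + cstar)

# Forward-Euler integration from a positive initial condition lying in
# the same compatibility class as c* (2 c_A + c_B = 2).
c = np.array([0.9, 0.2])
dt = 1e-3
values = []
for _ in range(20000):
    values.append(h(c))
    c = c + dt * f(c)

print(values[0], values[-1])   # h decays towards 0
print(all(b <= a + 1e-12 for a, b in zip(values, values[1:])))
```

The monotone decrease of $h$ is what yields asymptotic stability of $c^*$ within its stoichiometric compatibility class.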
