January 23, 2016

Here I discuss another tool for analysing chemical reaction networks of deficiency greater than one. This is the Advanced Deficiency Algorithm developed by Feinberg and Ellison. It seems that the only direct reference for the mathematical proofs is Ellison’s PhD thesis. There is a later PhD thesis by Haixia Ji in which she introduces an extension of this called the Higher Deficiency Algorithm and where some of the ideas of Ellison are also recapitulated. In my lecture course, which ends next week, I will only have time to discuss the structure of the algorithm and give an extended example without proving much.

The Advanced Deficiency Algorithm has a general structure similar to that of the Deficiency One Algorithm. In some cases it can rule out multistationarity. Otherwise it gives rise to several sets of inequalities. If one of these has a solution then there is multistationarity and if none of them does there is no multistationarity. It is not clear to me whether this is really an algorithm which is guaranteed to give a diagnostic test in all cases. I think that this is probably not the case and that one of the themes of Ji’s thesis is trying to improve on this. An important feature of this algorithm is that the inequalities it produces are in general nonlinear and thus may be much more difficult to analyse than the linear inequalities obtained in the case of the Deficiency One Algorithm.

Now I have come to the end of my survey of deficiency theory for chemical reaction networks. I feel I have learned a lot and now is the time to profit from that by applying these techniques. The obvious next step is to try them out on some of my favourite biological examples. Even if the result is only that the techniques give nothing interesting in these cases, it will be useful to understand why. Of course I hope that I will also find some positive results.

Elementary flux modes

January 16, 2016

Elementary flux modes are a tool which can be used to take a reaction network of deficiency greater than one and produce subnetworks of deficiency one. If it can be shown that one of these subnetworks admits multistationarity then it can sometimes be concluded that the original network also does so. There are two conditions that need to be checked in order to allow this conclusion. The first is that the network under consideration must satisfy the condition $t=l$. The second is that genericity conditions must be satisfied which consist of requiring the non-vanishing of the determinants of certain matrices. If none of the subnetworks admits multistationarity then no conclusion is obtained for the full network. The core element of the proof is an application of the implicit function theorem. These conclusions are contained in a paper of Conradi et al. (Proc. Nat. Acad. Sci. USA 104, 19175).

One of the ways of writing the condition for stationary solutions is $Nv(c)=0$, where as usual $N$ is the stoichiometric matrix and $v(c)$ is the vector of reaction rates. Since $v(c)$ is positive this means that we are looking for a positive element of the kernel of $N$. This suggests that it is interesting to look at the cone $K$ which is the intersection of the kernel of $N$ with the non-negative orthant. According to a general theorem of convex analysis $K$ consists of the linear combinations with non-negative coefficients of a finite set of vectors which have a minimum (non-zero) number of non-zero components. In the case of reaction networks these are the elementary flux modes. Recalling that $N=YI_a$ we see that positive vectors in the kernel of the incidence matrix $I_a$ are elementary flux modes of a special type. Those which are not in the kernel of $I_a$ are called stoichiometric generators. Each stoichiometric generator defines a subnetwork, obtained by setting to zero the reaction constants of those reactions of the full network which are not in the support of the generator. It is these subnetworks which are the ones mentioned above in the context of multistationarity. The application of the implicit function theorem involves using a linear transformation to introduce adapted coordinates. Roughly speaking the new coordinates are of three types. The first are conserved quantities for the full network. The second are additional conserved quantities for the subnetwork, complementing those of the full network. Finally the third type represents quantities which are dynamical even for the subnetwork.

Here are some simple examples. In the extended Michaelis-Menten description of a single reaction there is just one elementary flux mode (up to multiplication by a positive constant) and it is not a stoichiometric generator. In the case of the simple futile cycle there are three elementary flux modes. Two of these, which are not stoichiometric generators, correspond to the binding and unbinding of one of the enzymes with its substrate. The third is a stoichiometric generator and the associated subnetwork is obtained by removing the reactions where a substrate-enzyme complex dissociates into its original components. The dual futile cycle has four elementary flux modes of which two are stoichiometric generators. In the latter case we get the (stoichiometric generators of the) two simple futile cycles contained in the network. Of course these are not helpful for proving multistationarity. Another type of example is given by the simple models for the Calvin cycle which I have discussed elsewhere. The MA system considered there has two stoichiometric generators with components $(3,6,6,1,3,0,1)$ and $(3,5,5,1,3,1,0)$. I got these by working backwards from corresponding modes for the MM-MA system found by Grimbs et al. This is purely on the level of a vague analogy. I wish I had a better understanding of how to get this type of answer more directly. Those authors used those modes to prove bistability for the MM-MA system, so this is an example where this machinery produces real results.
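The counts for the simple futile cycle can be checked by direct computation. The following sketch (my own illustration, not taken from the papers cited) finds the minimal-support non-negative kernel vectors of $N$ by brute force over supports, which is only feasible for very small networks; the species and reaction ordering in the comments is one arbitrary choice.

```python
import itertools
import numpy as np

# Stoichiometric matrix N for the simple futile cycle, with species
# ordered (S, P, E, F, SE, PF) and reactions ordered
#   r1: S+E -> SE,  r2: SE -> S+E,  r3: SE -> P+E,
#   r4: P+F -> PF,  r5: PF -> P+F,  r6: PF -> S+F.
N = np.array([
    [-1,  1,  0,  0,  0,  1],   # S
    [ 0,  0,  1, -1,  1,  0],   # P
    [-1,  1,  1,  0,  0,  0],   # E
    [ 0,  0,  0, -1,  1,  1],   # F
    [ 1, -1, -1,  0,  0,  0],   # SE
    [ 0,  0,  0,  1, -1, -1],   # PF
])

def nullspace(A, tol=1e-10):
    # Orthonormal basis of ker A computed via the SVD.
    _, sigma, vt = np.linalg.svd(A)
    rank = int(np.sum(sigma > tol))
    return vt[rank:].T

def elementary_flux_modes(N):
    # Brute-force search over supports for the minimal-support
    # non-negative kernel vectors of N.
    r = N.shape[1]
    modes = []
    for size in range(1, r + 1):
        for supp in itertools.combinations(range(r), size):
            # A support containing an already-found mode is not minimal.
            if any(set(s) <= set(supp) for s, _ in modes):
                continue
            ker = nullspace(N[:, supp])
            if ker.shape[1] != 1:
                continue
            v = ker[:, 0]
            if v.sum() < 0:
                v = -v
            if np.all(v > 1e-10):  # strictly positive on its support
                mode = np.zeros(r)
                mode[list(supp)] = v / v.min()  # scale smallest entry to 1
                modes.append((supp, mode))
    return modes

for supp, mode in elementary_flux_modes(N):
    print(supp, np.round(mode, 3))
```

The output consists of the two binding/unbinding modes with supports of size two and the stoichiometric generator with support of size four, as described above.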

The German Dr. House

January 14, 2016

The central figure in the American TV series Dr. House is a doctor who is brilliant in the diagnosis of unusual medical conditions but personally very difficult. When I first saw this series I found the character so unpleasant that I did not want to watch the programme. However in the course of time I was drawn into watching it by the interest of the medical content. While some aspects of this series are quite exaggerated and far from reality the medical parts are very accurate and well researched. As I learned yesterday, even details seen there, like the numbers on heart monitors, accurately reflect the situation being portrayed. I have this information from a lecture I attended yesterday at the Medizinische Gesellschaft Mainz [Mainz Medical Society]. The speaker was Professor Jürgen Schäfer, a man who has become known in the media as the German Dr. House. I am pleased to report that I detected no trace of the social incompetence of Dr. House in Dr. Schäfer.

Jürgen Schäfer is trained as a cardiologist. He and his wife, who is a gastroenterologist, got so interested by the series Dr. House that they would spend time discussing the details of the diagnoses and researching the background after they had seen each programme. Then Schäfer had the idea that he could use Dr. House in his lectures at the University of Marburg. The first obstacle was to know if he could legally make use of this material. After a casual conversation with one of his patients who is a lawyer he contacted the necessary people and signed a suitable contract. At this time his project attracted considerable attention in the media even before it had started. In the lectures he analyses the cases occurring in the series. The students are encouraged to develop their own diagnoses in dialogue with the professor. These lectures are held in the evenings and are very popular with the students. In the evaluations the highest score was obtained for the statement that ‘the lectures are a lot of fun’.

This is only the start of the story. During a consultation in one of the episodes of Dr. House he suddenly makes a deep cut with a scalpel in the body of the patient (one of the melodramatic elements), opens the wound and shows that the flesh inside is black. The diagnosis is cobalt poisoning. After seeing this it occurred to Dr. Schäfer that this diagnosis might also apply to one of his own patients and this turned out to be true. In addition to serious heart problems this patient was becoming blind and deaf. He had had a hip joint replacement with an implant made of a ceramic material. At some point this became damaged and was replaced. In order to try to avoid the implant breaking again the new one was made of metal. The old implant fragmented and left splinters in the body. These had acted like sandpaper on the new joint and at the time of removal it had been reduced to 70% of its original size by this process. As a result large quantities of cobalt were released, resulting in the poisoning. The speaker showed a picture of the operation of another of his patients with a similar problem where the wound could be seen to be filled with a black oily liquid. Together with colleagues Schäfer published an account of this case in The Lancet with the title ‘Cobalt intoxication diagnosed with the help of Dr. House’. Not all his coauthors were happy with this title but Schäfer wanted to acknowledge his debt to the series. At the same time it was a great piece of advertising for him which led to a lot of attention in the international media.

Due to his growing fame Schäfer started to get a lot of letters from patients with mysterious illnesses. This was more than he could handle. He informed the administration of the university clinic where he worked that he was going to start sending back letters of this type unopened, since he just did not have the time to cope with them. To his surprise they wanted him to continue with this work and arranged for him to be relieved from other duties. They set up a new institute for him called Zentrum für unerkannte Krankheiten [centre for unrecognized diseases]. This was perhaps particularly surprising since this is a privately funded clinic and the work of this institute costs money rather than making money. The techniques used there include toxicological and genomic analyses.

Here is another example from the lecture. Schäfer’s institute uses large scale DNA analysis to screen for a broad range of parasites in patients with unclear symptoms. In one patient they found DNA of the parasite causing schistosomiasis. This disease is usually acquired by bathing in infected water in tropical or subtropical areas. The patient tested negative for the parasite and had never been to a place where this disease occurs. The mystery was cleared up with the help of a vet of Egyptian origin. He was familiar with schistosomiasis and due to his experience with large animals he was not afraid of analysing very large stool samples. He succeeded in finding eggs of the parasite in the patient’s stool. The difficulty was that the numbers of eggs were very low and that for certain reasons they were difficult to recognise in this case, except by an expert. The patient was treated for schistosomiasis as soon as the genetic results were available but it was very satisfying to have a confirmation by more classical techniques. The mystery of how the patient got infected was solved as follows. As a hobby he kept lots of fish and he imported these from tropical regions. The infection presumably came from the water in his aquarium. We see that in the modern world it is easy to import tropical diseases by express delivery after placing an order on the internet.

I do not want to end before mentioning that Schäfer said something nice about how mathematicians can help medical doctors. He had a patient who is a mathematics professor and had the following problem. From time to time he would collapse and was temporarily paralysed although fully conscious. A possible explanation for this would have been an excessively high level of sodium in the body. On the other hand measurements showed that the concentration of sodium in his blood was normal, even after an attack. The patient then did a calculation (just simple arithmetic). On the basis of known data he worked out the amount of sodium and potassium in different types of food and noted a correlation between negative effects of a food on his health and the ratio of the sodium to potassium concentrations. This supported the hypothesis of sodium as a cause and encouraged the doctors to look more deeply into the matter. It turned out that in this type of disease the sodium is concentrated near the cell membrane and cannot be seen in the blood. A genetic analysis revealed that the patient had a mutation in a little-known sodium channel.

I think that this lecture was very entertaining for the audience, including my wife and myself. However this is not just entertainment. With his institute Schäfer is providing essential help for many people in very difficult situations. He has files on over 4000 patients. This kind of work requires a high investment in time and money which is not possible for a usual university clinic, not to mention an ordinary GP. It is nevertheless the case that Schäfer is developing resources which could be used more widely, such as standard protocols for assessing patients of this type. As he emphasized, while by definition a rare disease only affects a small number of patients the collection of all rare diseases together affects a large number of people. If more money were invested in this kind of research it could result in a net saving for the health system since it would reduce the number of people running from one doctor to another because they do not have a diagnosis.

The deficiency one algorithm

January 7, 2016

Here I continue the discussion of Chemical Reaction Network Theory with the Deficiency One Algorithm. As its name suggests this is an algorithm for investigating whether a reaction network allows multistationarity or not. It is also associated to a theorem. This theorem says that the satisfaction of certain inequalities associated to a reaction network is equivalent to the existence of a choice of reaction constants for which the corresponding system of equations with mass action kinetics has two distinct positive stationary solutions in a stoichiometric compatibility class. This is true in the context of networks of deficiency one which satisfy some extra conditions. The property of satisfying these, called regularity, seems to be relatively weak. In fact, as in the case of the Deficiency One Theorem, there is a second related result where a pair of zeroes of the right hand side $f$ in a stoichiometric compatibility class is replaced by a vector in the kernel of $Df$ which is tangent to that class.

An example where this theory can be applied is double phosphorylation in the processive case. The paper of Conradi et al. cited in the last post contains the statement that in this system the Deficiency One Algorithm implies that multistationarity is not possible. For this it refers to the Chemical Reaction Network Toolbox, a software package which implements the calculations of the theorem. In my course I decided for illustrative purposes to carry out these calculations by hand. It turned out not to be very hard. The conclusion is that multistationarity is not possible for this system but the general machinery does not address the question of whether there are any positive stationary solutions. I showed that this is the case by checking by hand that $\omega$-limit points on the boundary of the positive orthant are impossible. The conclusion then follows by a well-known argument based on the Brouwer fixed point theorem. This little bit of practical experience with the Deficiency One Algorithm gave me the impression that it is really a powerful tool. At this point it is interesting to note a cross connection to another subject which I have discussed in this blog. It is a model for the Calvin cycle introduced by Grimbs et al. These authors noted that the Deficiency One Algorithm can be applied to this system to show that it does not allow multistationarity. They do not present the explicit calculations but I found that they are not difficult to do. In this case the equations for stationary solutions can be solved explicitly so that using this tool is a bit of an overkill. It nevertheless shows the technique at work in another example.

Regularity consists of three conditions. The first (R1) is a necessary condition for there to be any positive stationary solutions at all, so if it fails we get a strong conclusion immediately. The second (R2) is the condition $l=t$ familiar from the Deficiency One Theorem. (R3) is a purely graph theoretic condition on the network. A weakly reversible network which satisfies (R3) is reversible. Reading this backwards, if a network is weakly reversible but not reversible then the theorem does not apply. The inequalities in the theorem depend on certain partitions of a certain set of complexes (the reactive complexes) into three subsets. What is important is whether the inequalities hold for all partitions of a network or whether there is at least one partition for which they do not hold. The proof of the theorem uses a special basis of the kernel of $YA_\kappa$ where $\kappa$ is a function on $\cal C$ constructed from the reaction constants. In the context of the theorem this space has dimension $t+1$ and $t$ of the basis vectors come from a special basis of the kernel of $A_\kappa$ of the type which already comes up in the proof of the Deficiency Zero Theorem.

An obvious restriction on the applicability of the Deficiency One Algorithm is that it is only applicable to networks of deficiency one. What can be done with networks of higher deficiency? One alternative is the Advanced Deficiency Algorithm, which is implemented in the Chemical Reaction Network Toolbox. A complaint about this method which I have seen several times in the literature is that it is not able to treat large systems – apparently the algorithm becomes unmanageable. Another alternative uses the notion of elementary flux modes which is the next topic I will cover in my course. It is a way of producing certain subnetworks of deficiency one from a given network of higher deficiency. The subnetworks satisfy all the conditions needed to apply the Deficiency One Algorithm except perhaps $t=l$.

The deficiency one theorem

December 2, 2015

Here I continue with the discussion of chemical reaction network theory begun in the previous post. After having presented a proof of the Deficiency Zero Theorem in my course I proceeded to the Deficiency One Theorem following the paper of Feinberg in Arch. Rat. Mech. Anal. 132, 311. If we have a reaction network we can consider each linkage class as a network in its own right and thus we can define its deficiency. If we denote the deficiency of the full network by $\delta$ and the deficiency of the $i$th linkage class by $\delta_i$ then in general $\delta\ge\sum_i \delta_i$. The first hypothesis of the Deficiency One Theorem is that the deficiencies of the linkage classes are no greater than one. The second is that equality holds in the inequality relating the deficiency of the network to those of its linkage classes. The stoichiometric subspace $S$ of the full network is the sum of the corresponding spaces $S_i$ for the linkage classes. The sum is direct precisely when equality holds in the inequality for the deficiencies. The third condition is that there is precisely one terminal strong linkage class in each linkage class ($t=l$). The first conclusion is that if there exists a positive stationary solution there is precisely one stationary solution in each stoichiometric compatibility class. The second is that if the network is weakly reversible there exists a positive stationary solution. Thus two of the conclusions of the Deficiency Zero Theorem have direct analogues in this case. Others do not. There is no statement about the stability of the stationary solutions. Networks which are not weakly reversible but satisfy the three conditions of the Deficiency One Theorem may have positive stationary solutions. In the paper of Feinberg the proof of the Deficiency One Theorem is intertwined with that of another result. 
It says that the linearization about a stationary solution of the restriction of the system to a stoichiometric compatibility class has trivial kernel. A related result proved in the same paper says that for a weakly reversible network of deficiency zero each stationary solution is a hyperbolic sink within its stoichiometric compatibility class. Statements of this type ensure that these stationary solutions possess a certain structural stability.

In the proof of the Deficiency One Theorem the first step (Step 1) is to show that when the assumptions of the theorem are satisfied and there is a positive stationary solution $c^*$ then the set of stationary solutions is equal to the set of points for which $\log c-\log c^*$ lies in the orthogonal complement of the stoichiometric subspace. From this the conclusion that there is exactly one stationary solution in each stoichiometric compatibility class follows just as in the proof of the Deficiency Zero Theorem (Step 2). To complete the proof it then suffices to prove the existence of a positive stationary solution in the weakly reversible case (Step 3). In Step 1 the proof is reduced to the case where the network has only one linkage class by regarding the linkage classes of the original network as networks in their own right. In this context the concept of a partition of a network $(\cal S,\cal C,\cal R)$ is introduced. This is a set of subnetworks $({\cal S},{\cal C}^i,{\cal R}^i)$. The set of species is unchanged. The set of reactions $\cal R$ is a disjoint union of the ${\cal R}^i$ and ${\cal C}^i$ is the set of complexes occurring in the reactions contained in ${\cal R}^i$. The partition is called direct if the stoichiometric subspace of the full network is the direct sum of those of the subnetworks. The most important example of a partition in the present context is that given by the linkage classes. That it is direct is the second condition of the Deficiency One Theorem. The other part of Step 1 of the proof is to show that the statement holds in the case that there is only one linkage class. The deficiency is then either zero or one. Since the case of deficiency zero is already taken care of by the Deficiency Zero Theorem we can concentrate on the case where $\delta=1$. Then the dimension of ${\rm ker} A_k$ is one and that of ${\rm ker} (YA_k)$ is two.
The rest of Step 1 consists of a somewhat intricate algebraic calculation in this two-dimensional space. It remains to discuss Step 3. In this step the partition given by the linkage classes is again used to reduce the problem to the case where there is only one linkage class. The weak reversibility is preserved by this reduction. Again we can assume without loss of generality that $\delta=1$. The subspace $U={\rm im} Y^T+{\rm span}\omega_{\cal C}$ is a hyperplane in $F({\cal C})$. We define $\Gamma$ to be the set of functions of the form $\log a$ with $a$ a positive element of ${\rm ker} (YA_k)$. The desired stationary solution is obtained as a point of the intersection of $U$ with $\Gamma$. To show that this intersection is non-empty it is proved that there are points of $\Gamma$ on both sides of $U$. This is done by a careful examination of the cone of positive elements of ${\rm ker} (YA_k)$.

To my knowledge the number of uses of the Deficiency Zero Theorem and the Deficiency One Theorem in problems coming from applications is small. If anyone reading this has other information I would like to hear it. I will now list the relevant examples I know. The Deficiency Zero Theorem was applied by Sontag to prove asymptotic stability for McKeithan’s kinetic proofreading model of T cell activation. I applied it to the model of Salazar and Höfer for the dynamics of the transcription factor NFAT. Another potential application would be the multiple futile cycle. This network is not weakly reversible. The simple futile cycle has deficiency one and the dual futile cycle deficiency two. The linkage classes have deficiency zero in both cases. Thus while the first and third conditions of the Deficiency One Theorem are satisfied the second is not. Replacing the distributive phosphorylation in the multiple futile cycle by processive phosphorylation may simplify things. This has been discussed by Conradi et al., IEE Proc. Systems Biol. 152, 243. In the case of two phosphorylation steps the system obtained is of deficiency one but the linkage classes have deficiency zero so that condition (ii) is still not satisfied. It seems that the Deficiency One Algorithm may be more helpful and that will be the next subject I cover in my course.
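For the dual futile cycle the failure of the second condition can be checked directly from the definition $\delta=n-l-s$ applied to the network and to its linkage classes. Here is a sketch (my own illustration; the encoding of species, complexes and reactions is just one possible choice).

```python
import numpy as np

species = ["S0", "S1", "S2", "E", "F", "S0E", "S1E", "S2F", "S1F"]

# Complexes of the dual (distributive) futile cycle, written as
# species: coefficient dictionaries.
complexes = {
    "S0+E": {"S0": 1, "E": 1}, "S0E": {"S0E": 1}, "S1+E": {"S1": 1, "E": 1},
    "S1E": {"S1E": 1}, "S2+E": {"S2": 1, "E": 1},
    "S2+F": {"S2": 1, "F": 1}, "S2F": {"S2F": 1}, "S1+F": {"S1": 1, "F": 1},
    "S1F": {"S1F": 1}, "S0+F": {"S0": 1, "F": 1},
}

reactions = [
    ("S0+E", "S0E"), ("S0E", "S0+E"), ("S0E", "S1+E"),
    ("S1+E", "S1E"), ("S1E", "S1+E"), ("S1E", "S2+E"),
    ("S2+F", "S2F"), ("S2F", "S2+F"), ("S2F", "S1+F"),
    ("S1+F", "S1F"), ("S1F", "S1+F"), ("S1F", "S0+F"),
]

def deficiency(reactions):
    # delta = n - l - s for the network spanned by the given reactions.
    cs = sorted({c for r in reactions for c in r})
    n = len(cs)
    # Linkage classes: connected components of the undirected reaction graph,
    # found with a small union-find structure.
    parent = {c: c for c in cs}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for a, b in reactions:
        parent[find(a)] = find(b)
    l = len({find(c) for c in cs})
    # Rank s of the stoichiometric subspace.
    vec = lambda c: np.array([complexes[c].get(sp, 0) for sp in species])
    N = np.column_stack([vec(b) - vec(a) for a, b in reactions])
    s = np.linalg.matrix_rank(N)
    return n - l - s

print(deficiency(reactions))      # full network
print(deficiency(reactions[:6]))  # phosphorylation linkage class
print(deficiency(reactions[6:]))  # dephosphorylation linkage class
```

The full network has deficiency two while both linkage classes have deficiency zero, so $\delta>\sum_i\delta_i$ and the second condition fails, as stated above.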

Feinberg’s proof of the deficiency zero theorem

November 23, 2015

I discussed the deficiency zero theorem of chemical reaction network theory (CRNT) in a previous post. (Some further comments on this can be found here and here.) This semester I am giving a lecture course on chemical reaction network theory. Lecture notes are growing with the course and can be found in German and English versions on the web page of the course. The English version can also be found here. Apart from introductory material the first main part of the course was a proof of the Deficiency Zero Theorem. There are different related proofs in the literature and I have followed the approach in the classic lecture notes of Feinberg on the subject closely. The proof in those notes is essentially self-contained apart from one major input from a paper of Feinberg and Horn (Arch. Rat. Mech. Anal. 66, 83). In this post I want to give a high-level overview of the proof.

The starting point of CRNT is a reaction network. It can be represented by a directed graph where the nodes are the complexes (left or right hand sides of reactions) and the directed edges correspond to the reactions themselves. The connected components of this graph are called the linkage classes of the network and their number is usually denoted by $l$. If two nodes can be connected by oriented paths in both directions they are said to be strongly equivalent. The corresponding equivalence classes are called strong linkage classes. A strong linkage class is called terminal if there is no directed edge leaving it. The number of terminal strong linkage classes is usually denoted by $t$. Starting from the network, the assumption of mass action kinetics allows a system of ODEs $\dot c=f(c)$ to be obtained in an algorithmic way. The quantity $c(t)$ is a vector of concentrations as a function of time. Basic mathematical objects involved in the definition of the network are the set $\cal S$ of chemical species, the set $\cal C$ of complexes and the set $\cal R$ of reactions. An important role is also played by the vector spaces of real-valued functions on these finite sets which I will denote by $F({\cal S})$, $F({\cal C})$ and $F({\cal R})$, respectively. Using natural bases they can be identified with $R^m$, $R^n$ and $R^r$. The vector $c(t)$ is an element of $F({\cal S})$. The mapping $f$ from $F({\cal S})$ to itself can be written as a composition of three mappings, two of them linear, $f=YA_k\Psi$. Here $Y$, the complex matrix, is a linear mapping from $F({\cal C})$ to $F({\cal S})$. $A_k$ is a linear mapping from $F({\cal C})$ to itself. The subscript $k$ is there because this matrix is dependent on the reaction constants, which are typically denoted by $k$. It is also possible to write $f$ in the form $Nv$ where $v$ describes the reaction rates and $N$ is the stoichiometric matrix.
The image of $N$ is called the stoichiometric subspace and its dimension, the rank of the network, is usually denoted by $s$. The additive cosets of the stoichiometric subspace are called stoichiometric compatibility classes and are clearly invariant under the time evolution. Finally, $\Psi$ is a nonlinear mapping from $F({\cal S})$ to $F({\cal C})$. The mapping $\Psi$ is a generalized polynomial mapping in the sense that its components are products of powers of the components of $c$. This means that $\log\Psi$ depends linearly on the logarithms of the components of $c$. The condition for a stationary solution can be written as $\Psi(c)\in {\rm ker} (YA_k)$. The image of $\Psi$ is obtained by exponentiating the image of a linear mapping. The matrix of this linear mapping in natural bases is $Y^T$. Thus in looking for stationary solutions we are interested in finding the intersection of the manifold which is the image of $\Psi$ with the kernel of $YA_k$. The simplest way to define the deficiency of the network is to declare it to be $\delta=n-l-s$. A fact which is not evident from this definition is that $\delta$ is always non-negative. In fact $\delta$ is the dimension of the vector space ${\rm ker} Y\cap {\rm span}(\Delta)$, where $\Delta$ is the set of reaction vectors $\omega_{y'}-\omega_y$ in $F({\cal C})$ corresponding to the reactions $y\to y'$ of the network. An alternative concept of deficiency, which can be found in lecture notes of Gunawardena, is the dimension $\delta'$ of the space ${\rm ker} Y\cap{\rm im} A_k$. Since this vector space is a subspace of the other we have the inequality $\delta'\le\delta$. The two spaces are equal precisely when each linkage class contains exactly one terminal strong linkage class. This is, in particular, true for weakly reversible networks. The distinction between the two definitions is often not mentioned since they are equal for most networks usually considered.
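The objects just described can be made concrete on a small example. The following sketch uses my own toy network $2A\rightleftharpoons A+B\rightleftharpoons 2B$ with arbitrarily chosen reaction constants; it builds $Y$, $I_a$ and $A_k$, checks numerically that $YA_k\Psi(c)=Nv(c)$, and computes the deficiency $\delta=n-l-s$ (here there is a single linkage class).

```python
import numpy as np

# Network 2A <-> A+B <-> 2B with mass action kinetics.
# Species (A, B); complexes (2A, A+B, 2B); reactions
#   r1: 2A -> A+B, r2: A+B -> 2A, r3: A+B -> 2B, r4: 2B -> A+B.
Y = np.array([[2, 1, 0],    # complex matrix: columns are the complexes
              [0, 1, 2]])
source = [0, 1, 1, 2]       # source complex of each reaction
target = [1, 0, 2, 1]       # target complex of each reaction
k = np.array([1.0, 2.0, 0.5, 1.5])   # arbitrarily chosen reaction constants

n_c, n_r = Y.shape[1], len(source)
Ia = np.zeros((n_c, n_r))            # incidence matrix
Ia[source, range(n_r)] = -1
Ia[target, range(n_r)] += 1
P = np.zeros((n_r, n_c))             # picks out the source complex
P[range(n_r), source] = 1

A_k = Ia @ np.diag(k) @ P            # linear mapping from F(C) to itself
N = Y @ Ia                           # stoichiometric matrix

def Psi(c):
    # Generalized monomial mapping: Psi(c)_y = prod_s c(s)^Y[s, y].
    return np.prod(c[:, None] ** Y, axis=0)

c = np.array([0.7, 1.3])
f = Y @ A_k @ Psi(c)                 # f = Y A_k Psi
v = k * Psi(c)[source]               # vector of reaction rates
assert np.allclose(f, N @ v)         # the same f written as N v(c)

# Deficiency delta = n - l - s, with l = 1 for this network.
delta = n_c - 1 - np.linalg.matrix_rank(N)
print(delta)
```

This network has $n=3$, $l=1$ and $s=1$, so its deficiency is one.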

If $c^*$ is a stationary solution then $A_k\Psi (c^*)$ belongs to ${\rm ker} Y\cap{\rm im} A_k$. If $\delta'=0$ (and in particular if $\delta=0$) then this means that $A_k\Psi (c^*)=0$. In other words $\Psi (c^*)$ belongs to the kernel of $A_k$. Stationary solutions of this type are called complex balanced. It turns out that if $c^*$ is a complex balanced stationary solution the stationary solutions are precisely those points $c$ for which $\log c-\log c^*$ lies in the orthogonal complement of the stoichiometric subspace. It follows that whenever we have one solution we get a whole manifold of them of dimension $n-s$. It can be shown that each manifold of this type meets each stoichiometric compatibility class in precisely one point. This is proved using a variational argument and a little convex analysis.

It is clear from what has been said up to now that it is important to understand the positive elements of the kernel of $A_k$. This kernel has dimension $t$ and a basis each of whose elements is positive on a terminal strong linkage class and zero otherwise. Weak reversibility is equivalent to the condition that the union of the terminal strong linkage classes is the set of all complexes. It can be concluded that when the network is not weakly reversible there exists no positive element of the kernel of $A_k$. Thus for a network which is not weakly reversible and has deficiency zero there exist no positive stationary solutions. This is part of the Deficiency Zero Theorem. Now consider the weakly reversible case. There a key statement of the Deficiency Zero Theorem is that there exists a complex balanced stationary solution $c^*$. Where does this $c^*$ come from? We sum the vectors in the basis of ${\rm ker} A_k$ and due to weak reversibility this gives something which is positive. Then we take the logarithm of the result. When $\delta=0$ this can be represented as a sum of two contributions where one is of the form $Y^T z$. Then $c^*=e^z$. A further part of the deficiency zero theorem is that the stationary solution $c^*$ in the weakly reversible case is asymptotically stable. This is proved using the fact that for a complex balanced stationary solution the function $h(c)=\sum_{s\in\cal S}[c(s)(\log c(s)-\log c^*(s)-1)+c^*(s)]$ is a Lyapunov function which vanishes for $c=c^*$.
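The Lyapunov property can be illustrated numerically on the simplest weakly reversible network of deficiency zero, $A\rightleftharpoons B$ with both reaction constants equal to one (my own toy example, not taken from the lecture notes). The stationary solution $c^*=(1,1)$ is complex balanced on the stoichiometric compatibility class $a+b=2$, and $h$ decreases along a solution computed with the explicit Euler method.

```python
import numpy as np

# Toy network A <-> B with both reaction constants equal to 1;
# c* = (1, 1) is a complex balanced stationary solution on the
# stoichiometric compatibility class a + b = 2.
c_star = np.array([1.0, 1.0])

def h(c):
    # h(c) = sum_s [ c(s) (log c(s) - log c*(s) - 1) + c*(s) ]
    return np.sum(c * (np.log(c) - np.log(c_star) - 1.0) + c_star)

def rhs(c):
    # Mass action right hand side: da/dt = -a + b, db/dt = a - b.
    a, b = c
    return np.array([b - a, a - b])

c = np.array([1.5, 0.5])
values = [h(c)]
dt = 0.01
for _ in range(2000):
    c = c + dt * rhs(c)               # explicit Euler step
    values.append(h(c))

# h decreases monotonically along the solution towards h(c*) = 0.
assert all(v2 <= v1 + 1e-12 for v1, v2 in zip(values, values[1:]))
print(round(values[0], 4), round(values[-1], 6))
```

Note that the sum $a+b$ is preserved exactly by the Euler steps, so the numerical solution stays in the stoichiometric compatibility class of the initial data.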

Conference on biological oscillators at EMBL in Heidelberg

November 17, 2015

EMBL, the European Molecular Biology Laboratory, is an international institution consisting of laboratories at five sites, two in Germany, one in the UK, one in France and one in Italy. I recently attended a meeting on the theme ‘Biological Oscillators’ at the site in Heidelberg. The impressive building is in the form of a double helix. There are two spiral ramps over several storeys which are linked by bridges (‘hydrogen bonds’, in German Wasserstoffbrücken). This helix provides an original setting for the poster sessions. The building is reached by ascending steep hills in the area behind the castle. I took the comfortable option of using the bus provided by the institute. This meeting had about 130 participants but I think that the capacity is much greater.

One of the most interesting talks on the first day from my point of view was by Petra Schwille from the Max Planck Institute for Biochemistry. She talked about the Min system which is used by bacteria to determine their plane of division. The idea is that certain proteins (whose identity is explicitly known) oscillate between the ends of the cell and that the plane of division is the nodal surface of the concentration of one of these. The speaker and her collaborators have been able to reconstitute this system in a cell-free context. A key role is played by the binding of the proteins to the cell membrane. Diffusion of bound proteins is much slower than that of proteins in solution and this situation of having two different diffusion constants in a coupled system is similar to the classical scenario known from the Turing instability. It sounds like modelling this system mathematically can be a lot of fun and that there is no lack of people interested in doing so.

There was also a ‘Keynote Lecture’ by Jordi Garcia-Ojalvo which lived up to the promise of its special title. The topic was the growth of a colony of Bacillus subtilis. (The published reference is Nature 523, 550.) In fact, to allow better control, the colony is constrained to be very thin and is contained in a microfluidic system which allows its environment to be manipulated precisely. A key observation is that the colony does not grow at a constant rate. Instead its growth rate is oscillatory. The speaker explained that this can be understood in terms of the competition between the cells near the edge of the colony and those in the centre. The colony is only provided with limited resources (glycerol, glutamate and salts). It may be asked which resource limits the growth rate. It is not the glycerol, which is the primary carbon source. Instead it is the glutamate, which is the primary source of nitrogen. An important intermediate compound in the use of glutamate is ammonium. If cells near the boundary of the colony produced ammonium it would be lost to the surroundings. Instead they use ammonium produced by the interior cells. It is the exterior cells which grow and they can deprive the inner cells of glutamate. This prevents the inner cells from producing ammonium, which is then lacking for the growth of the outer cells. This establishes a negative feedback loop which can be seen as the source of the oscillations in growth rate. The feasibility of this mechanism was checked using a mathematical model. The advantage of the set-up for the bacteria is that if the colony is exposed to damage from outside it can happen that only the exterior cells die and the interior cells generate a new colony. The talk also included a report on further work (Nature 527, 59) concerning the role of ion channels in biofilms. There are close analogies to the propagation of nerve signals and the processes taking place can be modelled by equations closely related to the Hodgkin-Huxley system.

I will now mention a collection of other topics at the conference which I found particularly interesting. One recurring theme was NF$\kappa$B. This transcription factor is known to exhibit oscillations. A key question is what their function is, if any. One of the pioneers in this area, Mike White, gave a talk at the conference. There were also a number of other people attending working on related topics. I do not want to go any deeper here since I think that this is a theme to which I might later devote a post of its own, if not more than one. I just note two points from White’s talk. One is that this substance is a kind of hub or bow-tie with a huge number of inputs and outputs. Another is that the textbook picture of the basic interactions of NF$\kappa$B is a serious oversimplification. Another transcription factor which came up to a comparable extent during the conference is Hes1, which I had never previously heard of. Jim Ferrell gave a talk about the coordination of mitosis in Xenopus eggs. These are huge cells where communication by means of diffusion would simply not be fast enough. The alternative proposed by Ferrell is trigger waves, which can travel much faster. Carl Johnson talked about mechanisms ensuring the stability of the KaiABC oscillator. He presented videos showing the binding of individual KaiA molecules to KaiC. I was amazed that these things can be seen directly and are not limited to the cartoons to be found in biology textbooks. Other videos I found similarly impressive were those of Alexander Aulehla showing the development of early mouse embryos (segmentation clock) where it could be seen how waves of known chemical events propagating through the tissues orchestrate the production of structures in the embryo. These pictures brought the usual type of explanations used in molecular biology to a new level of concreteness in my perception.

David Vetter, the bubble boy

October 17, 2015

T cells are a class of white blood cells without which a human being usually cannot survive. An exception to this was David Vetter, a boy who lived 12 years without T cells. This was only possible because he lived all this time in a sterile environment, a plastic bubble. For this reason he became known as the bubble boy. The disease which he suffered from is called SCID, severe combined immunodeficiency, and it corresponds to having no T cells. The most common form of this is due to a mutation on the X chromosome and as a result it usually affects males. The effects set in a few months after birth. The mutation leads to a lack of the $\gamma$ chain of the IL-2 receptor. In fact this chain occurs in several cytokine receptors and is therefore called the ‘common chain’. Probably the key to the negative effects caused by its lack in SCID patients is the resulting lack of the receptor for IL-7, which is important for T cell development. SCID patients have a normal number of B cells but very few antibodies due to the lack of support by helper T cells. Thus in the end they lack both the immunity usually provided by T cells and that usually provided by B cells. This is the reason for the description ‘combined immunodeficiency’. The information on this theme which follows comes mainly from two sources. The first is a documentary film ‘Bodyshock – The Boy in the Bubble’ about David Vetter produced by Channel 4 and available on YouTube. (There are also less serious films on this subject, including one featuring John Travolta.) The second is the chapter on X-linked SCID in the book ‘Case Studies in Immunology’ by Raif Geha and Luigi Notarangelo. I find this book a wonderful resource for learning about immunology. It links general theory to the case history of specific patients.

At one point David started making punctures in his bubble as a way of attracting attention. Then it was explained to him what his situation was and why he must not damage the bubble. Later there was a kind of space suit produced for him by NASA which allowed him to move around outside his home. He only used it six times since he was too afraid there could be an accident. His physical health was good but understandably his psychological situation was difficult. New ideas in the practice of bone marrow transplantation indicated that it might be possible to use donors with a lesser degree of compatibility. On this basis David was given a transplant with his sister as the donor. It was not noticed that her bone marrow was infected with Epstein-Barr virus. As a result David got Burkitt’s lymphoma, a type of cancer which can be caused by that virus. (Compare what I wrote about this role of EBV here.) He died a few months after the operation, at the age of 12. Since that time treatment techniques have improved. The patient whose case is described in the book of Geha and Notarangelo had a successful bone marrow transplant (with his mother as donor). Unfortunately his lack of antibodies was not cured but this can be controlled with injections of immunoglobulin once every three weeks.

Siphons in reaction networks

October 8, 2015

The concept of a siphon is one which I have been more or less aware of for quite a long time. Unfortunately I never had the impression that I had understood it completely. Given the fact that it came up a lot in discussions I was involved in and talks I heard last week, I thought that the time had come to make the effort to do so. It is of relevance for demonstrating the property of persistence in reaction networks. This is the property that the $\omega$-limit points of a positive solution are themselves positive. For a bounded solution this is the same as saying that the infima of all concentrations at late times are positive. The most helpful reference I have found for these topics is a paper of Angeli, de Leenheer and Sontag in a proceedings volume edited by Queinnec et al.

There are two ways of formulating the definition of a siphon, one more algebraic and one more geometric. In the first the siphon is defined to be a set $Z$ of species with the property that whenever one of the species in $Z$ occurs on the right hand side of a reaction one of the species in $Z$ occurs on the left hand side. Geometrically we replace $Z$ by the set $L_Z$ of points of the non-negative orthant which are common zeroes of the elements of $Z$, thought of as linear functions on the species space. The defining property of a siphon is then that $L_Z$ is invariant under the (forward in time) flow of the dynamical system describing the evolution of the concentrations. Another way of looking at the situation is as follows. Consider a point of $L_Z$. The right hand side of the evolution equation of one of the concentrations belonging to $Z$ is a sum of positive and negative terms. The negative terms automatically vanish on $L_Z$ and the siphon condition is what is needed to ensure that the positive terms also vanish there. Sometimes minimal siphons are considered. It is important to realize that in this case it is $Z$ which is minimal; correspondingly $L_Z$ is maximal. The convention is that the empty set is excluded as a choice for $Z$ and correspondingly the whole non-negative orthant as a choice for $L_Z$. What is allowed is to choose $Z$ to be the set of all species, which means that $L_Z$ is the origin. Of course whether this choice actually defines a siphon depends on the particular dynamical system being considered.
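The algebraic form of the definition translates directly into a short test. The following sketch (my own helper, with a hypothetical encoding of reactions as educt/product pairs of coefficient dictionaries) checks the siphon condition for a given set of species:

```python
# Hypothetical helper (not from the post): check the algebraic siphon
# condition for a network given as (educt, product) pairs, each complex
# being a dict mapping species names to stoichiometric coefficients.
def is_siphon(Z, reactions):
    for educt, product in reactions:
        # If a species of Z occurs on the right hand side ...
        if any(product.get(s, 0) > 0 for s in Z):
            # ... some species of Z must occur on the left hand side.
            if not any(educt.get(s, 0) > 0 for s in Z):
                return False
    return True

# Example: A -> B, B -> A.  {A, B} is a siphon; {A} alone is not, since
# A occurs on the right of B -> A but not on its left.
reactions = [({"A": 1}, {"B": 1}), ({"B": 1}, {"A": 1})]
print(is_siphon({"A", "B"}, reactions))  # True
print(is_siphon({"A"}, reactions))       # False
```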

If $x_*$ is an $\omega$-limit point of a positive solution but is not itself positive then the set of species whose concentrations vanish at that point is a siphon. In particular stationary solutions on the boundary are contained in siphons. It is remarked by Shiu and Sturmfels (Bull. Math. Biol. 72, 1448) that for a network with only one linkage class, if a siphon contains one stationary solution it consists entirely of stationary solutions. To see this let $x_*$ be a stationary solution in the siphon $Z$. There must be some complex $y$ belonging to the network which contains an element of $Z$. If $y'$ is another complex then there is a directed path from $y'$ to $y$. We can follow this path backwards from $y$ and conclude successively that each complex encountered contains an element of $Z$. Thus $y'$ contains an element of $Z$ and since $y'$ was arbitrary all complexes have this property. This means that all complexes, and hence all reaction rates, vanish on $L_Z$, so that every point of $L_Z$ is a stationary solution.

Siphons can sometimes be used to prove persistence. Suppose that $Z$ is a siphon for a certain network, so that the points of $L_Z$ are potential $\omega$-limit points of solutions of the ODE system corresponding to this network. Suppose further that $A$ is a conserved quantity for the system which is a linear combination of the coordinates with non-negative coefficients, not all zero. For a positive solution the quantity $A$ has a positive constant value along the solution and hence also has the same value at any of its $\omega$-limit points. It follows that if $A$ vanishes on $L_Z$ then no $\omega$-limit point of that solution belongs to $L_Z$. If it is possible to find a conserved quantity $A$ of this type for each siphon of a given system (possibly different conserved quantities for different siphons) then persistence is proved. For example this strategy is used in the paper of Angeli et al. to prove persistence for the dual futile cycle. The concept of persistence is an important one when thinking about the general properties of reaction networks. The persistence conjecture says that any weakly reversible reaction network with mass action kinetics is persistent (possibly with the additional assumption that all solutions are bounded). In his talk last week Craciun mentioned that he is working on proving this conjecture. If true it implies the global attractor conjecture. It also implies a statement claimed in a preprint of Deng et al. (arXiv:1111.2386) that a weakly reversible network has a positive stationary solution in any stoichiometric compatibility class. This result has never been published and there seems to be some doubt as to whether the proof is correct.
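This strategy amounts to a simple linear-algebra test. The sketch below (my own illustration; the enzyme network $E+S\rightleftharpoons ES\rightarrow E+P$ and its encoding are assumed for the example, not taken from the paper) checks that a candidate conservation law has non-negative coefficients supported inside the siphon $\{E, ES\}$, which rules that siphon out as a source of boundary $\omega$-limit points:

```python
import numpy as np

# Example network (assumed for illustration): E + S <-> ES -> E + P.
# Species order: E, S, ES, P.  Columns: E+S->ES, ES->E+S, ES->E+P.
S = np.array([
    [-1,  1,  1],   # E
    [-1,  1,  0],   # S
    [ 1, -1, -1],   # ES
    [ 0,  0,  1],   # P
])

Z = {0, 2}                            # the siphon {E, ES}
w = np.array([1.0, 0.0, 1.0, 0.0])    # candidate conservation law E + ES

# w is a conservation law iff w^T S = 0; it vanishes on L_Z iff its
# support lies inside Z.  Both together discard the siphon.
is_conserved = np.allclose(w @ S, 0)
supported_in_Z = all(w[i] == 0 for i in range(len(w)) if i not in Z)
print(is_conserved and supported_in_Z)  # True
```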

Trip to the US

October 5, 2015

Last week I visited a few places in the US. My first stop was Morgantown, West Virginia where my host was Casian Pantea. There I had a lot of discussions with Casian and Carsten Conradi on chemical reaction network theory. This synergized well with the work I have recently been doing preparing a lecture course on that subject which I will be giving in the next semester. I gave a talk on MAPK and got some feedback on that. It rained a lot and there was not much opportunity to do anything except work. One day on the way to dinner while it was relatively dry I saw a Cardinal and I fortunately did have my binoculars with me. On Wednesday afternoon I travelled to New Brunswick and spent most of Thursday talking to Eduardo Sontag at Rutgers. It was a great pleasure to talk to an excellent mathematician who also knows a lot about immunology. He and I have a lot of common interests which is in part due to the fact that I was inspired by several of his papers during the time I was getting into mathematical biology. I also had the opportunity to meet Evgeni Nikolaev who told me a variety of interesting things. They concerned bifurcation theory in general, its applications to the kinds of biological models I am interested in and his successes in applying mathematical models to understanding concrete problems in biomedical research such as the processes taking place in tuberculosis. My personal dream is to see a real coming together of mathematics and immunology and that I have the chance to make a contribution to that process.

On Friday I flew to Chicago in order to attend an AMS sectional meeting. I had been in Chicago once before but that was many years ago now. I do remember being impressed by how much Lake Michigan looks like the sea, I suppose due to the structure of the waves. This impression was even stronger this time since there were strong winds whipping up the waves. Loyola University, the site of the meeting, is right beside the lake and it felt like home for me due to the combination of wind, waves and gulls. The majority of those were Ring-Billed Gulls which made it clear which side of the Atlantic I was on. There were also some Herring Gulls and although they might have been split from those on the other side of the Atlantic by the taxonomists I did not notice any difference. It was the first time I had been at an AMS sectional meeting and my impression was that the parallel sessions were very parallel, in other words in no danger of meeting. Most of the people in our session were people I knew from the conferences I attended in Charlotte and in Copenhagen although I did make a couple of new acquaintances, improving my coverage of the reaction network community.

In a previous post I mentioned Gheorghe Craciun’s ideas about giving the deficiency of a reaction network a geometric interpretation, following a talk of his in Copenhagen. Although I asked him questions about this on that occasion I did not completely understand the idea. Correspondingly my discussion of the point here in my blog was quite incomplete. Now I talked to him again and I believe I have finally got the point. Consider first a network with a single linkage class. The complexes of the network define points in the species space whose coordinates are the stoichiometric coefficients. The reactions define oriented segments joining the educt complex to the product complex of each reaction. The stoichiometric subspace is the vector space spanned by the differences of the complexes. It can also be considered as a translate of the affine subspace spanned by the complexes themselves. This makes it clear that its dimension $s$ is at most $n-1$, where $n$ is the number of complexes. The number $s$ is the rank of the stoichiometric matrix. The deficiency is $n-1-s$. At the same time $s\le m$, where $m$ is the number of species. If there are several linkage classes then the stoichiometric subspace has dimension at most $n-l$, where $l$ is the number of linkage classes. The deficiency is $n-l-s$. If the spaces corresponding to the individual linkage classes have the maximal dimension allowed by the number of complexes in each class and these spaces are linearly independent then the deficiency is zero. Thus we see that the deficiency is the extent to which the complexes fail to be in general position. If the species and the number of complexes have been fixed (with $n-l\le m$) then deficiency zero is seen to be a generic condition. On the other hand fixing the species and adding sufficiently many complexes will destroy the deficiency zero condition, since then we end up in the case $n-l>m$ so that the possibility of general position is excluded.
The advantage of having this geometric picture is that it can often be used to read off the deficiency directly from the network. It might also be used to aid in constructing networks with a desired deficiency.
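This recipe is easy to automate. The following sketch (my own code; the three-species example network is chosen purely for illustration) computes $n$, $l$ and $s$ from a reaction list and reads off the deficiency $n-l-s$:

```python
import numpy as np

# Complexes are tuples of stoichiometric coefficients; reactions are
# (educt, product) pairs.  Example network: A <-> B, C -> 2B.
reactions = [((1, 0, 0), (0, 1, 0)),   # A -> B
             ((0, 1, 0), (1, 0, 0)),   # B -> A
             ((0, 0, 1), (0, 2, 0))]   # C -> 2B

complexes = sorted({c for r in reactions for c in r})
n = len(complexes)

# Linkage classes are connected components of the (undirected) reaction
# graph; count them with a simple union-find over the complexes.
parent = {c: c for c in complexes}
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c
for y, yp in reactions:
    parent[find(y)] = find(yp)
l = len({find(c) for c in complexes})

# s is the rank of the stoichiometric matrix, whose columns are y' - y.
N = np.array([np.array(yp) - np.array(y) for y, yp in reactions]).T
s = np.linalg.matrix_rank(N)

print(f"n={n}, l={l}, s={s}, deficiency={n - l - s}")
```

Here the two linkage classes each span the maximal dimension allowed by their complexes and the spans are independent, so the deficiency comes out as zero, in line with the geometric picture above.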