Among the plenary talks at the conference was one by Hisashi Ohtsuki on the evolution of social norms. Although I am a great believer in the application of mathematics to many real-world problems, I do become a bit sceptical when the area of application goes in the direction of sociology or psychology. Accordingly I went to the talk with rather negative expectations, but I was pleasantly surprised. The speaker explained how he has been able to apply evolutionary game theory to obtain insights into the evolution of cooperation in human societies under the influence of indirect reciprocity. This means that instead of the simple direct pattern ‘A helps B and thus motivates B to help A’ we have ‘C sees A helping B and hence decides to help A’, and variations on that pattern. The central idea of the work is to compare many different strategies in the context of a mathematical model and thus obtain ideas about which mechanisms at work are the important ones. My impression was that this is a case where mathematics has generated helpful ideas for understanding the phenomenon and that there remain a lot of interesting things to be done in that direction. It also made me reflect on my own personal strategies when interacting with other people. Apart from the interesting content, the talk was made more interesting by the speaker’s entertaining accounts of experiments which have been done for comparison with the results of the modelling. During the talk the speaker mentioned self-referentially that the fact of his standing in front of us giving the talk was an example of the process of reputation formation being described in the talk. As far as I am concerned he succeeded in creating a positive reputation both for himself and for his field.
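Out of curiosity, the kind of mechanism described in the talk can be sketched in a few lines of code. The following is a toy simulation of indirect reciprocity by ‘image scoring’ in the spirit of Nowak and Sigmund, not the speaker’s actual model; the reputation rule, the mix of strategies and all parameter values are my own illustrative assumptions.

```python
import random

def simulate(n_agents=50, frac_defectors=0.2, rounds=20000, seed=1):
    """Toy indirect-reciprocity simulation: discriminators help
    recipients with a good public reputation, and helping is observed
    and updates the donor's own reputation."""
    rng = random.Random(seed)
    is_defector = [rng.random() < frac_defectors for _ in range(n_agents)]
    good = [True] * n_agents  # public reputation, visible to all
    helps = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        if is_defector[donor]:
            good[donor] = False        # observed refusing to help
        elif good[recipient]:
            helps += 1                 # discriminator helps the good
            good[donor] = True
        # refusing a bad recipient leaves reputation unchanged
        # (a crude version of a "standing" rule)
    return helps / rounds

rate = simulate()
```

With these invented numbers the defectors quickly acquire a bad reputation and are refused help, so the cooperation rate settles well below 1 but well above 0, which is the qualitative point of the mechanism.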

Apart from this the other plenary talk which I found most interesting was by Johan van de Koppel. He was talking about pattern formation in ecology and, in particular, about his own work on pattern formation in mussel beds. A talk which I liked much less was that of Adelia Sequeira and it is perhaps interesting to ask why. She was talking about the modelling of atherosclerosis. She made the valid point near the beginning of her lecture that while heart disease is a health problem of comparable importance to cancer in the developed world, the latter theme was represented much more strongly than the former at the conference. For me cancer is simply much more interesting than heart disease and this point of view may be widespread. What could be the reason? One possibility is that the study of cancer involves many more conceptual aspects than that of heart disease and that this is attractive for mathematicians. Another could be that I am much more afraid of being diagnosed with cancer some day than of being diagnosed with heart disease, although the latter may be no less probable and no less deadly if it happens. To come back to the talk, I found that the material was too abundant and too technical and that many ideas were used without really being introduced. The consequence was that I lost interest and had difficulty not falling asleep.

In the case of the parallel talks there were seventeen sessions in parallel and I generally decided to go to whole sessions rather than trying to go to individual talks. I will make some remarks about some of the things I heard there. I found the first session I went to, on tumour-immune dynamics, rather disappointing, but the last talk in the session, by Shalla Hanson, was a notable exception. The subject was CAR T-cells and what mathematical modelling might contribute to improving therapy. I found both the content and the presentation excellent. The presentation packed in a lot of material but rather than being overwhelmed I found myself waiting eagerly for what would come next. During the talk I thought of a couple of questions which I might ask at the end but they were answered in due course during the lecture. It is a quality I admire in a speaker to be able to anticipate the questions which the audience may ask and to answer them. I see this less as a matter of understanding the psychology of the audience (which can sometimes be important) and more as a matter of really having got to the heart of the subject being described. There was a session on mathematical pharmacology which I found interesting, in particular the talks of Tom Snowden on systems pharmacology and of Wilhelm Huisinga on multidrug therapies for HIV. In a session on mathematical and systems immunology Grant Lythe discussed the fascinating question of how to estimate the number of T cell clones in the body and what mathematics can contribute to this beyond just analysing the data statistically. I enjoyed the session on virus dynamics, particularly a talk by Harel Dahari on hepatitis C. In particular he told a story in which he was involved in curing one exceptional HCV patient with a one-off therapy using a substance called silibinin and real-time mathematical modelling.

I myself gave a talk about dinosaurs. Since this is work at a relatively early stage I will leave a more detailed description of it on this blog to a later date.


Yesterday, sitting by the river in Wako, I was feeling quite meditative. I was in an area where motor vehicles are not permitted. There were not many people around but most of those who were there were on bikes. I started thinking of how this is typical of what I have experienced in many places I have been. On a walk along the Rhine in Mainz or in the surrounding countryside most of the people you see are on bikes. Copenhagen is completely dominated by bikes. In the US cars dominate. For instance when I was in Miami for a conference and was staying at the Biltmore Hotel I had to walk quite a distance to get dinner for an affordable price. In general the only people I met walking on the streets there were other conference participants. When I visited the University of California at Santa Barbara bikes were not the thing on the campus but it was typical to see students with skateboards. Summing up, I have frequently had the experience that as a pedestrian I was an exception. It seems that for normal people just putting one foot in front of the other is not the thing to do. They need some device such as a car, a bike or a skateboard to accompany them. I, on the other hand, am an eternal pedestrian. I like to walk places whenever I can. I walk twenty minutes to work each day and twenty minutes back. I find that a good way of framing the day. When I lived in Berlin there was a long period when I had a one-way travelling time of 90 minutes by train. I am glad to have that behind me. I did not consciously plan being so near to work in Mainz but I am glad it happened. Of course being a pedestrian has its limits – I could not have come to Japan on foot.

My pedestrian nature is not limited to the literal interpretation of the term. I am also an intellectual pedestrian. An example of this is described in my post on low throughput biology. Interestingly this post has got a lot of hits, more than twice as many as any other post on my blog. This is related to the theme of simple and complex models in biology. Through the talks I have given recently in Copenhagen, Berlin and here in Japan, and the resulting discussions with different people, I have become conscious of how this is a recurring theme in those parts of mathematical biology which I find interesting. The pedestrian may not get as far as others but he often sees more in the places he does reach. He may also get to places that others do not. Travelling fast along the road may cause you to overlook a valuable shortcut. Or you may go a long way in the wrong direction and need a lot of time to come back. Within mathematics one aspect of being a pedestrian is calculating things by hand as far as possible and using computers only as a last resort. This reminds me of a story about the physicist John Wheeler, who had a picture of a computer on the wall of his office which he called ‘the big computer’. When he wanted to solve a difficult problem he would think about how he would programme it on the computer, and when he had done that thoroughly he had understood the problem so well that he no longer needed the computer. Thus the fact that the computer existed only on paper was not a disadvantage.

This is the direction I want to (continue to) go. The challenges along the road are to achieve something essential and to make clear to others, who may be sceptical, that I have done so.


Hepatitis C is transmitted by blood to blood contact. According to Zeuzem the main cause of the spread of this disease in developed countries is intravenous drug use. Before there was a test for the disease it was also spread via blood transfusions. (At one time the risk of infection with hepatitis due to a blood transfusion was 30%. This was mainly hepatitis B, and by the time of the discovery of hepatitis C, when the risk from hepatitis B had essentially been eliminated, it had dropped to 5%.) He also mentioned that there is a very high rate of infection in certain parts of Egypt due to the use of unsterilized needles in the treatment of other diseases. Someone asked how the disease could have survived before there were injections. He did not give a definitive answer but he did mention that while heterosexual contacts generally carry little risk of infection with this virus, homosexual contacts between men do carry a significant risk. The disease typically becomes chronic and has few if any symptoms for many years. It does have dramatic long-term effects, namely cirrhosis and cancer of the liver. He showed statistics illustrating how public health policies have influenced the spread of the disease in different countries. The development in France has been much more favourable (with fewer cases) than in Germany, apparently due to a publicity campaign arising from political motives with no direct relevance to the disease. The development in the UK has been much less favourable than even in Germany, due to an almost complete lack of publicity on the theme for a long time. The estimated number of people infected in Germany is 500,000. The global number is estimated at 170 million.

There has been a dramatic improvement in the treatment of hepatitis C in the past couple of years and this was the central theme of the talks. A few years ago the situation was as follows. Drugs (a combination of ribavirin and interferon) could be used to eliminate the virus in a significant percentage of patients, particularly for some of the sub-types of the virus. The treatment lasted about a year and was accompanied by side effects so severe that there was a serious risk of patients breaking it off. Now the treatment lasts only a few weeks, it cures at least 95% of patients and in many situations 99% of them, and the side effects of the new treatments are moderate. There is just one problem remaining: the drugs for the best available treatment are sold at extremely high prices. The order of magnitude is 100,000 euros for a treatment. Zeuzem explained various aspects of the dynamics which have led to these prices and the circumstances under which they might be reduced in the future. In general this gave a rather depressing picture of the politics of health care relating to the approval and prescription of new drugs.

Let me get back to the scientific aspects of the theme, as explained by Bartenschlager. An obvious question to ask is: if hepatitis C can essentially be cured, why does HIV remain essentially incurable despite the huge amount of effort and money spent on trying to find a treatment? The simple answer seems to be that HIV can hide while HCV cannot. Both these viruses have an RNA genome. Since the copying of RNA is relatively imprecise they both have a high mutation rate. This leads to a high potential for the development of drug resistance. This problem has nevertheless been overcome for HCV. Virus particles are continually being destroyed by the immune system and for the population to survive new virus particles must be produced in huge numbers. This is done by the liver cells. This heavy burden kills the liver cells after a while but the liver is capable of regenerating, i.e., replacing these cells. The liver has an impressive capability to survive this attack but every system has its limits and eventually, after twenty or thirty years, the long-term effects already mentioned develop. An essential difference between HIV and HCV is that the RNA of HCV can be directly read by ribosomes to produce viral proteins. By contrast, the RNA of HIV is used as a template to produce DNA by the enzyme reverse transcriptase and this DNA is integrated into the DNA of the cell. This integrated DNA (known as the provirus) may remain inactive, not leading to production of protein. As long as this is the case the virus is invisible to the immune system. This is one way the virus can hide. Moreover the cell can divide, producing new cells which also contain the provirus. There is also another problem. The main targets of HIV are the T-helper cells. However the virus can also infect other cells such as macrophages or dendritic cells, and the behaviour of the virus in these other cells is different from that in T-helper cells.
It is natural that a treatment should be optimized for what happens in the typical host cell and this may be much less effective in the other cell types. This means that the other cells may serve as a reservoir for the virus in situations where the population is under heavy pressure from the immune system or drug treatment. This is a second sense in which the virus can hide.

Some of the recent drugs used to treat HCV are based on ideas developed for the treatment of HIV. For instance a drug of this kind may inhibit certain of the enzymes required for the reproduction of the virus. There is one highly effective drug in the case of HCV which works in a different way. The hepatitis C virus produces one protein which has no enzymatic activity and it is at first sight hard to see what use this could be for the virus. What it in fact does is to act as a kind of docking station which organizes proteins belonging to the cell into a factory for virus production.

The hepatitis C virus is a valuable example which illustrates the relations between various aspects of medical progress: improvement in scientific understanding, exploitation of that information for drug design, political problems encountered in getting an effective drug to the patients who need it. Despite the negative features which have been mentioned it is the subject of a remarkable success story.


I was thinking about my previous visits to Copenhagen and, in particular, that the first one was on a flying carpet. The background to this is that when I was seven years old I wrote a story in school with the title ‘The Magic Carpet’. I do not have the text any more but I know it appeared in the School Magazine that year. In my own version there was also a picture which I will say more about later. But first something about the story, of which I was the hero. I bought the carpet in Peshawar and used it to visit places in the world I was interested in. For some reason I no longer know I had a great wish at that time to visit Copenhagen. Perhaps it was due to coming into contact with stories of Hans Christian Andersen. In any case it is clear that having the chance this was one of the first places I visited using the magic carpet. The picture which I drew showed something closer to home. There I can be seen sitting on the carpet, wearing the blue jersey which was my favourite at that time, while the carpet bent upwards so as to just pass over the tip of the spire of St. Magnus Cathedral in Kirkwall. In the story it was also related that one of the effects of my journey was a newspaper article reporting a case of ‘mass hallucination’. I think my teachers were impressed at my using this phrase at my age. They might have been less impressed if they had known my source for this, which was a Bugs Bunny cartoon.

During my next visit to Copenhagen in 2008 (here I am not counting changing planes there on the way to Stockholm, which I did a few times) I was at a conference at the Niels Bohr Institute in my old research field of mathematical relativity and I gave a talk in that area. Little did I think I would return there years later and talk about something completely different. I remember that there was a photo in the main lecture room in which many of the founders of quantum mechanics are sitting in the first row. From my own point of view I am happy that another person who can be seen there is Max Delbrück, a shining example of a switch from physics to biology. My next visit to Copenhagen was for the conference which I wrote about in a previous post. It was at the University. Since then a lot has happened with chemical reaction network theory and with my understanding of it. The lecture course I gave means that some of the points I mentioned in my post at that time are things I have since come to understand in some depth. I look forward to working on projects in that area with people here in the coming days.


Let me explain the usual story about how NF-κB is activated. There are lots of animated videos on YouTube illustrating this but I prefer a description in words. Normally NF-κB is found in the cytosol bound to an inhibitor IκB. Under certain circumstances a complex of proteins called IKK forms. The last K stands for kinase and IKK phosphorylates IκB. This causes IκB to be ubiquitinated and thus marked for degradation (cf. the discussion of ubiquitin here). When it has been destroyed NF-κB is liberated, moves to the nucleus and binds to DNA. What are the circumstances mentioned above? There are many alternatives. For instance TNF binds to its receptor, or something stimulates a toll-like receptor. The details are not important here. What is important is that there are many different signals which can lead to the activation of NF-κB. What genes does NF-κB bind to when it is activated? Here again there are many possibilities. Thus there is a kind of bow tie configuration where many inputs and many outputs are connected by a single channel of communication. So how is it possible to arrange that when one input is applied, e.g. TNF, the right genes are switched on, while another input activates other genes through the same mediator NF-κB? One possibility is cross-talk, i.e. that this signalling pathway interacts with others. If this cannot account for all the specificity then the remaining possibility is that information is encoded in the signal passing through NF-κB itself. For example, one stimulus could produce a constant response while another causes an oscillatory one. Or two stimuli could cause oscillatory responses with different frequencies. Evidently the presence of oscillations in the concentration of NF-κB presents an opportunity for encoding more information than would otherwise be possible. To what extent this really happens is something of which I do not have an overview at the moment. I want to learn more.
In any case, oscillations have been observed in the NF-κB system. The primary thing which has been observed to oscillate is the concentration of NF-κB in the nucleus. This oscillation is a consequence of the movement of the protein between the cytosol and the nucleus. There are various mathematical models describing these oscillations. As usual in modelling phenomena in cell biology there are models which are very big and complicated. I find it particularly interesting when some of the observations can be explained by a simple model. This is the case for NF-κB, where a three-dimensional model and an explanation of its relations to the more complicated models can be found in a paper by Krishna, Jensen and Sneppen (PNAS 103, 10840). In the three-dimensional model the unknowns are the concentrations of NF-κB in the nucleus, IκB in the cytoplasm and mRNA coding for IκB. The oscillations in normal cells are damped but sustained oscillations can be seen in mutated cells or the corresponding models.
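To see how a three-variable negative feedback loop of this general shape can produce damped oscillations, here is a minimal numerical sketch. This is a generic Goodwin-type loop, not the actual equations of Krishna, Jensen and Sneppen; the variables only loosely play the roles of nuclear NF-κB, IκB mRNA and cytoplasmic IκB, and all rate constants are invented for illustration.

```python
import numpy as np

def integrate(a=4.0, T=60.0, dt=0.001):
    """Euler integration of a schematic negative feedback loop:
    x ~ nuclear NF-kB, y ~ IkB mRNA, z ~ cytoplasmic IkB.
    z represses x, x drives y, y drives z."""
    n = int(T / dt)
    xs = np.zeros(n)
    x, y, z = 0.0, 0.0, 0.0
    for i in range(n):
        dx = a / (1.0 + z**2) - x   # production repressed by IkB
        dy = x - y                  # NF-kB-driven transcription
        dz = y - z                  # translation and decay of IkB
        x += dt * dx
        y += dt * dy
        z += dt * dz
        xs[i] = x
    return xs

xs = integrate()
```

The trajectory overshoots and then relaxes in damped oscillations towards the steady state; making the repression steeper (a higher Hill coefficient) or adding mutations that break the feedback are standard ways to obtain sustained oscillations in such loops.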

What is the function of NF-κB? The short answer is that it has many. On a broad level of description it plays a central role in the phenomenon of inflammation. In particular it leads to production of the cytokine IL-17 which in turn, among other things, stimulates the production of anti-microbial peptides. When these things are absent the result is a serious immunodeficiency. In one variant of this there is a mutation in the gene coding for NEMO, which is one of the proteins making up IKK. A complete absence of NEMO is fatal before birth but people with less severe mutations in the gene do occur. There are symptoms due to things which took place during the development of the embryo and also immunological problems, such as the inability to deal with certain bacteria. The gene for NEMO is on the X chromosome, so that this disease is usually limited to boys. More details can be found in the book of Geha and Notarangelo mentioned in a previous post.


While there are quite a lot of results in the literature on the number of steady states in systems of ODE modelling biochemical systems, there is much less on the question of the stability of these steady states. It was a central motivation of our work to make some progress in this direction for the specific models of the Calvin cycle and to develop some ideas for approaching this type of question more generally. One key idea is that if it can be shown that there is a bifurcation with a one-dimensional centre manifold, this can be very helpful in getting information on the stability of the steady states which arise in the bifurcation. Given enough information on a sufficient number of derivatives at the bifurcation point this is a standard fact. What is interesting and perhaps less well known is that it may be possible to get conclusions without having such detailed control. One type of situation occurring in our paper is one where a stable solution and a saddle arise. This is roughly the situation of a fold bifurcation, but we do not prove that it is generic. Doing so would presumably involve heavy calculations.
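The situation where a stable steady state and a saddle appear together can be illustrated by the textbook normal form of a fold bifurcation in one dimension; in one dimension the unstable state plays the role of the saddle. This is only the caricature, not the Calvin cycle models of the paper.

```python
import math

def equilibria(mu):
    """Steady states of x' = mu - x**2 (fold normal form) together
    with their stability, read off from the linearization f'(x) = -2x."""
    if mu < 0:
        return []                      # no steady states before the fold
    if mu == 0:
        return [(0.0, "degenerate")]   # the bifurcation point itself
    x = math.sqrt(mu)
    # f'(x) = -2x: negative at +sqrt(mu) (stable), positive at -sqrt(mu)
    return [(x, "stable"), (-x, "unstable")]
```

In higher dimensions the same picture survives on a one-dimensional centre manifold, provided the remaining eigenvalues of the linearization have negative real parts, which is exactly the second ingredient discussed below.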

The centre manifold calculation only controls one eigenvalue and the other important input in order to see that there is a stable steady state for at least some choice of the parameters is to prove that the remaining eigenvalues have negative real parts. This is done by considering a limiting case where the linearization simplifies and then choosing parameters close to those of the limiting case. The arguments in this paper show how wise it can be to work with the rates of the reactions as long as possible, without using species concentrations. This kind of approach is popular with many people – it has just taken me a long time to get the point.


The Advanced Deficiency Algorithm has a general structure similar to that of the Deficiency One Algorithm. In some cases it can rule out multistationarity. Otherwise it gives rise to several sets of inequalities. If one of these has a solution then there is multistationarity, and if none of them does there is no multistationarity. It is not clear to me whether this is really an algorithm which is guaranteed to give a diagnostic test in all cases. I think that this is probably not the case and that one of the themes of Ji’s thesis is trying to improve on this. An important feature of this algorithm is that the inequalities it produces are in general nonlinear and thus may be much more difficult to analyse than the linear inequalities obtained in the case of the Deficiency One Algorithm.

Now I have come to the end of my survey of deficiency theory for chemical reaction networks. I feel I have learned a lot and now is the time to profit from that by applying these techniques. The obvious next step is to try them out on some of my favourite biological examples. Even if the result is only that I see why the techniques do not give anything interesting in these cases, it will be useful to understand why. Of course I hope that I will also find some positive results.


One of the ways of writing the condition for stationary solutions is Nv(c)=0, where as usual N is the stoichiometric matrix and v is the vector of reaction rates. Since v is positive this means that we are looking for a positive element of the kernel of N. This suggests that it is interesting to look at the cone K which is the intersection of the kernel of N with the non-negative orthant. According to a general theorem of convex analysis, K consists of the linear combinations with non-negative coefficients of a finite set of vectors which have a minimum (non-zero) number of non-zero components. In the case of reaction networks these are the elementary flux modes. Recalling that the stoichiometric matrix factorizes as N = YI_a, where Y is the complex matrix and I_a the incidence matrix, we see that positive vectors in the kernel of the incidence matrix are a special type of elementary flux mode. Those which are not in the kernel of I_a are called stoichiometric generators. Each stoichiometric generator defines a subnetwork, where those reaction constants of the full network are set to zero for which the corresponding reactions are not in the support of the generator. It is these subnetworks which are the ones mentioned above in the context of multistationarity. The application of the implicit function theorem involves using a linear transformation to introduce adapted coordinates. Roughly speaking the new coordinates are of three types. The first are conserved quantities for the full network. The second are additional conserved quantities for the subnetwork, complementing those of the full network. Finally the third type represents quantities which are dynamical even for the subnetwork.

Here are some simple examples. In the extended Michaelis-Menten description of a single reaction there is just one elementary flux mode (up to multiplication by a positive constant) and it is not a stoichiometric generator. In the case of the simple futile cycle there are three elementary flux modes. Two of these, which are not stoichiometric generators, correspond to the binding and unbinding of one of the enzymes with its substrate. The third is a stoichiometric generator and the associated subnetwork is obtained by removing the reactions where a substrate-enzyme complex dissociates into its original components. The dual futile cycle has four elementary flux modes, of which two are stoichiometric generators. In the latter case we get the (stoichiometric generators of the) two simple futile cycles contained in the network. Of course these are not helpful for proving multistationarity. Another type of example is given by the simple models for the Calvin cycle which I have discussed elsewhere. The MA system considered there has two stoichiometric generators. I got these by working backwards from corresponding modes for the MM-MA system found by Grimbs et al. This is purely on the level of a vague analogy. I wish I had a better understanding of how to get this type of answer more directly. Those authors used those modes to prove bistability for the MM-MA system, so this is an example where this machinery produces real results.
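For the simple futile cycle these statements are easy to check numerically. The sketch below writes down the stoichiometric matrix N, the complex matrix Y and the incidence matrix I_a for the network S+E ⇌ C → P+E, P+F ⇌ D → S+F (the species ordering S, E, C, P, F, D and the candidate modes are my own bookkeeping), verifies the factorization N = YI_a, and checks which of the three elementary flux modes lie in the kernel of the incidence matrix.

```python
import numpy as np

# Reactions: r1 S+E->C, r2 C->S+E, r3 C->P+E,
#            r4 P+F->D, r5 D->P+F, r6 D->S+F
# Species rows: S, E, C, P, F, D
N = np.array([
    [-1,  1,  0,  0,  0,  1],
    [-1,  1,  1,  0,  0,  0],
    [ 1, -1, -1,  0,  0,  0],
    [ 0,  0,  1, -1,  1,  0],
    [ 0,  0,  0, -1,  1,  1],
    [ 0,  0,  0,  1, -1, -1],
])
# Complexes: c1 S+E, c2 C, c3 P+E, c4 P+F, c5 D, c6 S+F
Y = np.array([
    [1, 0, 0, 0, 0, 1],   # S
    [1, 0, 1, 0, 0, 0],   # E
    [0, 1, 0, 0, 0, 0],   # C
    [0, 0, 1, 1, 0, 0],   # P
    [0, 0, 0, 1, 0, 1],   # F
    [0, 0, 0, 0, 1, 0],   # D
])
Ia = np.array([
    [-1,  1,  0,  0,  0,  0],  # c1
    [ 1, -1, -1,  0,  0,  0],  # c2
    [ 0,  0,  1,  0,  0,  0],  # c3
    [ 0,  0,  0, -1,  1,  0],  # c4
    [ 0,  0,  0,  1, -1, -1],  # c5
    [ 0,  0,  0,  0,  0,  1],  # c6
])
assert np.array_equal(N, Y @ Ia)   # the factorization N = Y I_a

# The three elementary flux modes of the simple futile cycle
modes = [
    np.array([1, 1, 0, 0, 0, 0]),  # binding/unbinding of E
    np.array([0, 0, 0, 1, 1, 0]),  # binding/unbinding of F
    np.array([1, 0, 1, 1, 0, 1]),  # the productive cycle
]
for v in modes:
    assert not np.any(N @ v)       # all lie in the kernel of N
is_generator = [bool(np.any(Ia @ v)) for v in modes]
```

The first two modes are cycles of the reaction graph, i.e. they lie in the kernel of I_a, while the productive mode does not and is therefore the stoichiometric generator.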


Jürgen Schäfer is trained as a cardiologist. He and his wife, who is a gastroenterologist, became so interested in the series Dr. House that they would spend time discussing the details of the diagnoses and researching the background after they had seen each programme. Then Schäfer had the idea that he could use Dr. House in his lectures at the University of Marburg. The first obstacle was to know whether he could legally make use of this material. After a casual conversation with one of his patients, who is a lawyer, he contacted the necessary people and signed a suitable contract. At this time his project attracted considerable attention in the media even before it had started. In the lectures he analyses the cases occurring in the series. The students are encouraged to develop their own diagnoses in dialogue with the professor. These lectures are held in the evenings and are very popular with the students. In the evaluations the highest score was obtained for the statement that ‘the lectures are a lot of fun’.

This is only the start of the story. During a consultation in one of the episodes of Dr. House he suddenly makes a deep cut with a scalpel in the body of the patient (one of the melodramatic elements), opens the wound and shows that the flesh inside is black. The diagnosis is cobalt poisoning. After seeing this it occurred to Dr. Schäfer that this diagnosis might also apply to one of his own patients, and this turned out to be true. In addition to serious heart problems this patient was becoming blind and deaf. He had had a hip joint replacement with an implant made of a ceramic material. At some point this became damaged and was replaced. In order to try to avoid the implant breaking again the new one was made of metal. The old implant had fragmented and left splinters in the body. These had acted like sandpaper on the new joint, and at the time of removal it had been reduced to 70% of its original size by this process. As a result large quantities of cobalt were released, resulting in the poisoning. The speaker showed a picture of an operation on another of his patients with a similar problem, where the wound could be seen to be filled with a black oily liquid. Together with colleagues Schäfer published an account of this case in The Lancet with the title ‘Cobalt intoxication diagnosed with the help of Dr. House’. Not all his coauthors were happy with this title but Schäfer wanted to acknowledge his debt to the series. At the same time it was a great piece of advertising for him which led to a lot of attention in the international media.

Due to his growing fame Schäfer started to get a lot of letters from patients with mysterious illnesses. This was more than he could handle. He informed the administration of the university clinic where he worked that he was going to start sending back letters of this type unopened, since he simply did not have the time to cope with them. To his surprise they wanted him to continue with this work and arranged for him to be relieved of other duties. They set up a new institute for him called the Zentrum für unerkannte Krankheiten [Centre for Unrecognized Diseases]. This was perhaps particularly surprising since this is a privately funded clinic and the work of this institute costs money rather than making money. The techniques used there include toxicological and genomic analyses.

Here is another example from the lecture. Schäfer’s institute uses large-scale DNA analysis to screen for a broad range of parasites in patients with unclear symptoms. In one patient they found DNA of the parasite causing schistosomiasis. This disease is usually acquired by bathing in infected water in tropical or subtropical areas. The patient tested negative for the parasite and had never been to a place where the disease occurs. The mystery was cleared up with the help of a vet of Egyptian origin. He was familiar with schistosomiasis and, due to his experience with large animals, was not afraid of analysing very large stool samples. He succeeded in finding eggs of the parasite in the patient’s stool. The difficulty was that the numbers of eggs were very low and that for certain reasons they were difficult to recognise in this case, except by an expert. The patient was treated for schistosomiasis as soon as the genetic results were available but it was very satisfying to have a confirmation by more classical techniques. The mystery of how the patient got infected was solved as follows. As a hobby he kept lots of fish and he imported these from tropical regions. The infection presumably came from the water in his aquarium. We see that in the modern world it is easy to import tropical diseases by express delivery after placing an order on the internet.

I do not want to end before mentioning that Schäfer said something nice about how mathematicians can help medical doctors. He had a patient who is a mathematics professor and had the following problem. From time to time he would collapse and be temporarily paralysed, although fully conscious. A possible explanation would have been an excessively high level of sodium in the body. On the other hand, measurements showed that the concentration of sodium in his blood was normal, even after an attack. The patient then did a calculation (just simple arithmetic). On the basis of known data he worked out the amounts of sodium and potassium in different types of food and noted a correlation between the negative effects of a food on his health and the ratio of its sodium to potassium concentrations. This supported the hypothesis of sodium as a cause and encouraged the doctors to look more deeply into the matter. It turned out that in this type of disease the sodium is concentrated near the cell membrane and cannot be seen in the blood. A genetic analysis revealed that the patient had a mutation in a little-known sodium channel.

I think that this lecture was very entertaining for the audience, including my wife and myself. However it is not just entertainment. With his institute Schäfer is providing essential help for many people in very difficult situations. He has files on over 4000 patients. This kind of work requires a high investment of time and money which is not possible for a usual university clinic, not to mention an ordinary GP. It is nevertheless the case that Schäfer is developing resources which could be used more widely, such as standard protocols for assessing patients of this type. As he emphasized, while by definition a rare disease only affects a small number of patients, the collection of all rare diseases together affects a large number of people. If more money were invested in this kind of research it could result in a net saving for the health system, since it would reduce the number of people running from one doctor to another because they lack a diagnosis.


An example where this theory can be applied is double phosphorylation in the processive case. The paper of Conradi et al. cited in the last post contains the statement that in this system the Deficiency One Algorithm implies that multistationarity is not possible. For this it refers to the Chemical Reaction Network Toolbox, a software package which implements the calculations of the theorem. In my course I decided for illustrative purposes to carry out these calculations by hand. It turned out not to be very hard. The conclusion is that multistationarity is not possible for this system, but the general machinery does not address the question of whether there are any positive stationary solutions. I showed that this is the case by checking by hand that ω-limit points on the boundary of the positive orthant are impossible. The conclusion then follows by a well-known argument based on the Brouwer fixed point theorem. This little bit of practical experience with the Deficiency One Algorithm gave me the impression that it is really a powerful tool. At this point it is interesting to note a cross-connection to another subject which I have discussed in this blog, a model for the Calvin cycle introduced by Grimbs et al. These authors noted that the Deficiency One Algorithm can be applied to this system to show that it does not allow multistationarity. They do not present the explicit calculations but I found that they are not difficult to do. In this case the equations for stationary solutions can be solved explicitly, so that using this tool is a bit of an overkill. It nevertheless shows the technique at work in another example.
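The bookkeeping behind such calculations, computing the deficiency δ = n − l − s, is simple enough to sketch in a few lines. The following is a minimal illustration; the toy Michaelis-Menten network, the encoding of complexes and the function name `deficiency` are my own choices for the example, not the system from the paper:

```python
import numpy as np

# Toy network A + E <-> AE -> B + E with species ordered (A, E, AE, B).
# Each complex is recorded as a vector of species counts.
complexes = np.array([
    [1, 1, 0, 0],  # A + E
    [0, 0, 1, 0],  # AE
    [0, 1, 0, 1],  # B + E
], dtype=float)

# Reactions as (source, target) indices into the list of complexes.
reactions = [(0, 1), (1, 0), (1, 2)]

def deficiency(complexes, reactions):
    """Compute delta = n - l - s: the number of complexes, minus the
    number of linkage classes, minus the rank of the stoichiometric
    subspace."""
    n = len(complexes)
    # Linkage classes are the connected components of the undirected
    # graph whose vertices are complexes and whose edges are reactions.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in reactions:
        parent[find(a)] = find(b)
    l = len({find(i) for i in range(n)})
    # The stoichiometric subspace is spanned by the reaction vectors.
    vectors = np.array([complexes[b] - complexes[a] for a, b in reactions])
    s = np.linalg.matrix_rank(vectors)
    return n - l - s

print(deficiency(complexes, reactions))  # 0: this toy network has deficiency zero
```

Here n = 3, there is a single linkage class, and the reaction vectors span a two-dimensional subspace, so the deficiency is zero.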

Regularity consists of three conditions. The first (R1) is a necessary condition for there to be any positive solutions at all; if it fails we get a strong conclusion. The second (R2) is the condition familiar from the Deficiency One Theorem. (R3) is a purely graph-theoretic condition on the network. A weakly reversible network which satisfies (R3) is reversible. Reading this backwards, if a network is weakly reversible but not reversible then the theorem does not apply. The inequalities in the theorem depend on partitions of a certain set of complexes (the reactive complexes) into three subsets. What is important is whether the inequalities hold for all partitions of a network or whether there is at least one partition for which they do not hold. The proof of the theorem uses a special basis of the kernel of a linear map, acting on functions on the set of complexes, which is constructed from the reaction constants. In the context of the theorem the dimension of this kernel is known, and part of the basis comes from a special basis of the type which already comes up in the proof of the Deficiency Zero Theorem.

An obvious restriction on the applicability of the Deficiency One Algorithm is that it only applies to networks of deficiency one. What can be done with networks of higher deficiency? One alternative is the Advanced Deficiency Algorithm, which is implemented in the Chemical Reaction Network Toolbox. A complaint about this method which I have seen several times in the literature is that it is not able to treat large systems; apparently the algorithm becomes unmanageable. Another alternative uses the notion of elementary flux modes, which is the next topic I will cover in my course. It is a way of producing certain subnetworks of deficiency one from a given network of higher deficiency. The subnetworks satisfy all the conditions needed to apply the Deficiency One Algorithm except perhaps one of them.
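To give an idea of what elementary flux modes are computationally: for a network with irreversible reactions they are the support-minimal nonnegative vectors in the kernel of the stoichiometric matrix. Serious implementations use the double description method; the brute-force enumeration sketched below, with a toy branched pathway and function names of my own invention, is only feasible for very small networks:

```python
import itertools
import numpy as np

# Stoichiometric matrix of a toy branched pathway (rows: species A, B, C;
# columns: reactions R1: -> A, R2: A -> B, R3: A -> C, R4: B ->, R5: C ->).
N = np.array([
    [ 1, -1, -1,  0,  0],
    [ 0,  1,  0, -1,  0],
    [ 0,  0,  1,  0, -1],
], dtype=float)

def nullspace(M, tol=1e-10):
    """Orthonormal basis of ker M, returned as columns, via the SVD."""
    _, sv, vt = np.linalg.svd(M)
    rank = int((sv > tol).sum())
    return vt[rank:].T

def elementary_flux_modes(N):
    """Brute force: scan reaction subsets by increasing size and keep
    those carrying a one-dimensional, strictly sign-definite kernel
    vector; minimality is enforced by skipping supersets of known
    supports."""
    m = N.shape[1]
    modes, supports = [], []
    for size in range(1, m + 1):
        for S in itertools.combinations(range(m), size):
            if any(t <= set(S) for t in supports):
                continue  # contains the support of a smaller mode
            K = nullspace(N[:, list(S)])
            if K.shape[1] != 1:
                continue  # kernel on these columns is not one-dimensional
            v = K[:, 0]
            if np.abs(v).min() < 1e-10:
                continue  # actual support is a proper subset of S
            if v.max() < 0:
                v = -v
            if v.min() <= 0:
                continue  # mixed signs: not a valid irreversible flux
            mode = np.zeros(m)
            mode[list(S)] = v / v.max()
            modes.append(mode)
            supports.append(set(S))
    return modes

modes = elementary_flux_modes(N)
# The two modes found are the two routes through the branch point:
# {R1, R2, R4} and {R1, R3, R5}.
```

For this matrix the enumeration returns exactly the two routes through the branch point, each corresponding to a candidate subnetwork in the sense described above.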
