Hahn’s minimal model for the Calvin cycle

December 4, 2018

As has been discussed elsewhere in this blog there are many mathematical models for the Calvin cycle of photosynthesis in the literature and it is of interest to try to understand the relations between these models. One extreme case which may be considered is to look at the simplest possible models of this system which still capture some of the essential biological features. In 1984 Brian Hahn published a model of the Calvin cycle with 19 chemical species. Like many of the models in the literature this does not include a description of photorespiration. In 1987 he extended this to a model which does include photorespiration and has 31 chemical species. Later (1991) Hahn followed the strategy indicated above, looking for simplified models. He found one two-dimensional model and one three-dimensional model. Like many models in the literature the three-dimensional model contains the fifth power of the concentration of GAP (glyceraldehyde phosphate). This arises because elementary reactions have been lumped together to give an effective reaction where five molecules of the three-carbon molecule GAP go in and three molecules of a five-carbon sugar come out. Assuming mass action kinetics for this effective reaction then gives the fifth power. In an attempt to simplify the analysis yet further Hahn replaced the fifth power by the second power in his two-dimensional model.

Sadly, Brian Hahn is no longer around to study models of photosynthesis. He was a mathematics professor at the University of Cape Town. As head of department he was asked to give an opinion on whether the position of one member of the department should be extended. He found he could not make a positive recommendation and the matter was referred to a committee. At the meeting of this committee the candidate became very aggressive and, at least partly as a consequence of this, it was decided that his position should not be extended. Some time later, in the mathematics building, the candidate beat Hahn to death. At his trial it was decided that he was mentally ill and therefore could not be convicted of murder. After less than two years he was released from confinement. From what I have read it seems that the public discussion of this case has been influenced by issues of racism: Hahn was white and the man who killed him was black.

Let me return to photosynthesis. I thought that the small models of Hahn should be subjected to detailed scrutiny. This became the project of my PhD student Hussein Obeid. Now we have written a paper on the two-dimensional model of Hahn (with quadratic nonlinearity) and the related model with the fifth power. These can be studied with and without photorespiration. Without photorespiration it turns out that there are solutions which tend to infinity at late times. These are similar to solutions previously discovered in another model of the Calvin cycle by Juan Velazquez and myself, which we called runaway solutions. The difference is that in the two-dimensional model we can see how the runaway solutions fit into the global dynamics of the system. In this case there is a unique positive steady state which is unstable. The one-dimensional stable manifold of this point divides the space into two regions. In one region all solutions tend to infinity and have the same late-time asymptotics. In the other region all solutions tend to zero. The existence of these solutions is related to the phenomenon of overload breakdown studied by Dorothea Möhring and myself in another paper. When photorespiration is added a stable steady state comes in from infinity. Convergence to the stable steady state replaces the runaway asymptotics and all solutions are bounded.
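
For readers who want to play with this numerically, here is only a schematic sketch. The right-hand side is a placeholder with the general shape described above (a lumped step consuming five molecules of GAP, hence a fifth-power mass-action term, carbon input driven by the sugar pool, and an optional linear loss standing in for photorespiration); these are not Hahn's actual equations, and to reproduce the results of the paper one would have to substitute his system.

```python
# Schematic only: NOT Hahn's equations.  A placeholder two-species system with
# a lumped reaction 5 GAP -> 3 pentose (fifth-power mass-action rate), carbon
# input proportional to the sugar pool, and an optional loss term standing in
# for photorespiration.  The sketch probes numerically for runaway versus
# bounded solutions.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, k_fix, k_lump, k_photo, power):
    gap, sugar = z
    lump = k_lump * gap**power                 # mass action rate of the lumped step
    dgap = 2.0 * k_fix * sugar - 5.0 * lump    # carboxylation feeds GAP, lumped step consumes 5
    dsugar = 3.0 * lump - k_fix * sugar - k_photo * sugar
    return [dgap, dsugar]

def hit_cap(t, z, *args):                      # stop the integration if concentrations explode
    return max(z) - 1.0e4
hit_cap.terminal = True

for k_photo in (0.0, 0.5):                     # without / with the photorespiration-like loss
    for z0 in ([0.05, 0.05], [1.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, 100.0), z0, args=(1.0, 1.0, k_photo, 5),
                        events=hit_cap, rtol=1e-8, atol=1e-10)
        tag = "hit the cap (runaway?)" if sol.t_events[0].size else "no blow-up before t=100"
        print(f"k_photo={k_photo}, z0={z0}: {tag}, max value {sol.y.max():.3g}")
```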

We were able to show that the dynamics for the system with the fifth power is qualitatively similar to that with the second power. The only differences are that it is not possible to obtain such detailed information about the parameter ranges for which certain qualitative behaviour occurs and that it is not ruled out that non-hyperbolic behaviour might happen in exceptional cases. Hahn derives his two-dimensional model by an informal reduction of the three-dimensional one. We showed how this can be made rigorous using geometric singular perturbation theory. This allows some limited information to be obtained about the dynamics of solutions of the three-dimensional system. We hope that this paper will serve as a good foundation for later investigations of the three- and higher-dimensional models of Hahn.

Broken foot

November 17, 2018

Last Saturday morning, while getting up from my desk at home, I tripped over something. I fell on the ground but my foot was stuck under the desk and could not accompany me properly in my fall. After that my foot was rather painful. Nevertheless the pain was not extreme and I did not take the matter very seriously. I limped around and even gave my lectures at the blackboard as normal on Tuesday and Thursday morning. On Wednesday I went to my GP to get his opinion on a non-acute matter. Actually I did not even really know him, since I had not been to the doctor in five years apart from getting a tetanus booster. The pain in my foot had been getting less from day to day and I imagined I was on the way to recovery. At the same time the foot looked a bit funny, with some strange bruises. For this reason I showed the doctor my foot. He said that these bruises could be a sign of an injury deeper in the foot and suspected it was broken. He sent me to get it X-rayed to make sure. This is the first time in my life that I have broken a bone, so I might be excused for not thinking of the possibility.

When I got to the place where the X-ray was to be done it turned out that their machine was broken and would only be working again the next day. Having already wasted so much time in attending to the foot I decided to look for another radiologist. By this time it was lunchtime and the first radiologist I tried was closed for lunch. The next one I tried did not accept me as a patient, for reasons I did not quite understand. The next one was closed the whole afternoon for a training course. In the late afternoon the fifth attempt was finally successful. The X-ray revealed that my foot was broken. Technically it is what is called a Jones fracture, which is a certain kind of break of the fifth metatarsal bone, a bone of whose existence I had known nothing up to that point in my life. It has the reputation of healing rather badly due to the poor blood supply in that area. I was told that I should see a surgeon as soon as possible. Since it was already very late in the afternoon that had to wait for the next day.

On Thursday I went back to my GP to discuss the further strategy. His practice was closed in the morning for a training course (where did I hear that phrase recently?). It opened again at three in the afternoon. Once I managed to see the doctor he called up a surgeon’s practice to check whether someone could see me that day. He got a positive answer and so I went there as quickly as possible. When I arrived the ‘friendly’ lady at the desk greeted me with the sceptical phrase, ‘what, you expect the surgeon to see you today?’. I just replied ‘That was the idea.’ Fortunately he did agree to see me. He explained the mechanism of the fracture. The bone is attached to a tendon and when that tendon is pulled with a large enough force the bone breaks apart. Fortunately it was a clean break and the two pieces had moved very little from the place they would normally be. I left the practice with crutches and wearing a surgical boot which looks very futuristic, like the first instalment of a space suit. I took a taxi home.

Now I have learned something about what it is like to have limited mobility. It just happens that Eva is away for a week, so she was not there to help me. (On Monday she was still there and took me to the university and back by car.) It is an interesting mental training. You realize how often you needlessly go from one place to another under normal circumstances, even within the home. Now it is necessary to plan an optimal route and minimise the number of times I go up and down the stairs. Yesterday I went to the university by tram to reduce the necessary walking distance to one I could manage. I also had the interesting experience of giving myself an anti-thrombosis injection. Actually it was not as bad as I expected and I suppose it will soon become routine. The professionals gave me the first one on Thursday to show me how it works. This morning I did some necessary shopping at the supermarket and took the tram one stop to get there. A neighbour saw me leaving home and was kind enough to transport me in the one direction in her car. Now I am looking forward to a weekend at home where I will have no bigger physical obstacle to overcome than occasionally climbing the stairs.

Canards

September 22, 2018

This post follows on from the last one. For the mathematics I want to describe here my main source is the book ‘Multiple Time Scale Dynamics’ by Christian Kuehn. In the last post I discussed the transitions from slow to fast dynamics in a relaxation oscillator. A point where this takes place is a fold point. More insight into the transformations which can be used to analyse the dynamics near a point of this kind can be obtained using geometric singular perturbation theory (GSPT), and this is described in Chapter 7 of Kuehn’s book. The point can be blown up using quasihomogeneous directional blow-ups similar to those which I used in my work with Pia Brechmann on the Selkov oscillator, described here. The main conceptual difference in comparison to our work is that in the case of the fold point there is a small parameter \epsilon involved and it is also rescaled. In this context it is necessary to cover a neighbourhood of the fold point by three charts, in each of which there is a highly non-trivial dynamics. With a suitable analysis of these dynamics it is possible to get information about the transition map from a section before the fold point to one after it. Here the cube roots already seen in the previous post come up again. With this in hand it becomes relatively easy to prove the existence of a relaxation oscillation in the van der Pol system and also that it is stable and hyperbolic. In particular, the existence statement follows from the fact that the return map, obtained by following a solution from a section back to itself around the singular cycle for \epsilon=0, is a contraction. There are other ways of getting the existence result but they rely on special features, in particular the fact that the system is two-dimensional. The proof using GSPT is more powerful, since it can be more easily adapted to other situations, such as higher dimensions, and it gives more detailed results, such as the expansion for the period. For instance in the book it is explained how this works for a case with one fast and two slow variables.
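
As a concrete illustration of the relaxation oscillation, and of where the cube roots show up numerically, here is a small sketch of my own (a toy computation, not taken from Kuehn's book). It integrates the van der Pol system in its slow-fast Liénard form \epsilon x'=y+x-x^3/3, y'=-x and compares the measured period with the leading-order value 3-2\log 2 obtained from the slow flow on the critical manifold.

```python
# Integrate the van der Pol system in slow-fast (Lienard) form,
#     eps * dx/dtau = y + x - x^3/3,   dy/dtau = -x,
# and compare the measured period with the leading-order value 3 - 2*log(2)
# coming from the slow flow on the critical manifold y = x^3/3 - x.
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, z, eps):
    x, y = z
    return [(y + x - x**3 / 3.0) / eps, -x]

def upward_crossing(t, z, eps):        # Poincare section: x = 0 with x increasing
    return z[0]
upward_crossing.direction = 1.0

eps = 1.0e-3
sol = solve_ivp(vdp, (0.0, 20.0), [2.0, 0.0], args=(eps,), method="Radau",
                events=upward_crossing, rtol=1e-9, atol=1e-11)
periods = np.diff(sol.t_events[0])     # times between successive section crossings
print("measured period      :", periods[-1])
print("leading-order period :", 3.0 - 2.0 * np.log(2.0))
# the difference shrinks like eps^(2/3) as eps is decreased -- the cube roots again
```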

I have not yet mentioned the concept in the title of this post. (I did once mention it briefly in a recent post.) A canard, apart from being the French word for a duck, is an idea in dynamical systems which has intrigued me for a long time but about which I understood very little. With the help of Chapter 8 of Kuehn’s book I have now been able to change this. What I will not do here is to try to explain the origin of the word canard in this context. It has led to a considerable number of humorous contributions of varying quality and I do not want to add to that literature here. I recall that at a fold point a non-degeneracy condition f(0,0,0)\ne 0 holds. Here f(x,y,\epsilon) is the right hand side of the evolution equation for the fast variable. This means that the slow flow does not stand still at the fold point. A canard is obtained if we assume that f(0,0,0)=0 at the fold point while two other non-degeneracy assumptions are made. In this case the fold point is called a fold singularity. This word is used in the sense that sometimes a steady state of a dynamical system is referred to as a singularity. In this case the fold point is a steady state of the slow flow. The first non-degeneracy assumption is that f_y(0,0,0)\ne 0. This means that the steady state is hyperbolic. For the other condition the setting has to be extended by introducing an additional parameter \lambda. Then we have a function f(x,y,\epsilon,\lambda) and it is assumed that f_\lambda(0,0,0,0)\ne 0. In the simplest situation, such as the van der Pol oscillator, the slow dynamics takes place on a critical manifold which is normally hyperbolic and stable. The curious thing about a canard is that the slow dynamics can follow an unstable critical manifold for a relatively long time before jumping off. More precisely, it can remain within a distance \epsilon of a repelling part of the slow manifold for a time which is of order one on the slow time scale. Information can be obtained on the dynamics of this type of situation by doing blow-ups. A surprising feature of this type of point is that it is associated with the production of small periodic solutions in a scenario called a singular Hopf bifurcation. Some intuition for this can be obtained by thinking about a periodic solution which starts near the fold singularity, moves a short distance along an unstable branch of the slow manifold (canard), jumps to the stable branch and then returns to its starting point along that branch. A simple example where a canard occurs is the van der Pol system with constant forcing, in other words a system obtained by modifying the basic van der Pol system by introducing an additive constant at an appropriate place on the right hand side.
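
To see this on the computer one can take the constantly forced van der Pol system in the form \epsilon x'=y+x-x^3/3, y'=a-x (this particular way of inserting the constant a is my choice for the sketch; other placements are equivalent up to transformations) and record the amplitude of the attractor as a is varied near the fold value a=1. The transition from small cycles to full relaxation oscillations takes place in an extremely narrow parameter window, so a coarse scan shows an abrupt jump, the so-called canard explosion, rather than the intermediate canard cycles themselves.

```python
# Scan the forced van der Pol system  eps*dx/dt = y + x - x^3/3,  dy/dt = a - x
# for values of the forcing parameter a near the fold value 1 and record the
# amplitude of the attractor: small cycles on one side, full relaxation
# oscillations on the other, with an abrupt jump in between.
import numpy as np
from scipy.integrate import solve_ivp

def forced_vdp(t, z, eps, a):
    x, y = z
    return [(y + x - x**3 / 3.0) / eps, a - x]

eps = 0.01
for a in (1.001, 0.9995, 0.999, 0.998, 0.995, 0.99):
    # start near the steady state (x, y) = (a, a^3/3 - a) and let transients die out
    sol = solve_ivp(forced_vdp, (0.0, 200.0), [a + 0.01, a**3 / 3.0 - a],
                    args=(eps, a), method="Radau", rtol=1e-8, atol=1e-10,
                    dense_output=True)
    x_late = sol.sol(np.linspace(100.0, 200.0, 4000))[0]   # discard the transient
    print(f"a = {a:7.4f}   amplitude of x on the attractor ~ "
          f"{x_late.max() - x_late.min():.3f}")
```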

Jumping off a critical manifold

September 21, 2018

In a previous post I discussed the concept of relaxation oscillations and the classical example, the van der Pol oscillator for large values of the parameter \mu. The periodic orbit consists of two phases of fast motion and two of slow motion. The slow motion is on the critical manifold and in the transition from slow to fast the solution jumps off the critical manifold. This is of course only a heuristic description of what happens when the parameter \epsilon=\mu^{-2} is zero. What we would like to understand is what happens for \epsilon small and positive. Then the non-differentiability related to the jumping off must be smoothed out. To get an expression for the period of the oscillation in leading order (i.e. order zero in \epsilon) it suffices to compute the time taken to complete one of the slow phases. Since the critical manifold is one-dimensional this can be reduced to computing an integral and in the case of the van der Pol oscillator the integral can be computed explicitly. The fast phases make no contribution – in this limit they are instantaneous. In order to get higher order corrections we need to be able to control what happens near the corners where the motion changes from slow to fast. What does the next correction look like? A naive guess would be that it might be an integer power of \epsilon or that at worst this might be corrected by some expression involving \log\epsilon. In reality that corner is genuinely singular and it produces more exotic phenomena. To be specific, it produces integer powers of \epsilon^{1/3}. How can we understand the origin of such terms? The jumping-off point is what is called a fold point since it is analogous to a fold bifurcation. Suppose we have a system \dot y=g(x,y) which undergoes a generic fold bifurcation at the origin. We can extend the system by adding the trivial equation \dot x=0 for the parameter x. Now we modify this by replacing the equation for x by the equation \dot x=\epsilon f(x,y,\epsilon) with f(0,0,0)\ne 0. Thus for \epsilon=0 we have something which looks like a fold bifurcation for the fast subsystem. For \epsilon non-zero the quantity x, which was constant and could be called a parameter, becomes dynamical and starts to move slowly. For this kind of situation there is an approximate normal form. The system is topologically equivalent to a simple system, up to higher order corrections.

What does all this have to do with the cube roots in the expansion? The remarkable fact is that the normal form (i.e. the leading part of the approximate normal form) can be reduced by means of a rescaling of x, y and t to a system where the parameter \epsilon is eliminated. The price to be paid for this is that the domain in which this can be done is very small for small \epsilon. The magical rescaling is to replace (x,y,t) by (\epsilon^{1/3}X,\epsilon^{2/3}Y,\epsilon^{2/3}T). The model equation which comes out is (up to signs) \frac{dX}{dY}=X^2+Y. This is a Riccati equation whose solutions can be analysed further by classical methods. There are solutions with three different types of behaviour and for one of these three types there is precisely one solution, which is the solution relevant for the relaxation oscillation. It describes the correct smoothing of the corner.
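
The ‘classical methods’ here boil down to the observation that the substitution X=-U'/U turns \frac{dX}{dY}=X^2+Y into the Airy-type equation U''+YU=0, so that the general solution is X(Y)=\frac{c_1 Ai'(-Y)+c_2 Bi'(-Y)}{c_1 Ai(-Y)+c_2 Bi(-Y)}. Here is a small sketch checking this numerically; the constants chosen are arbitrary and only for illustration.

```python
# Check numerically that X = -U'/U with U'' + Y*U = 0 solves dX/dY = X^2 + Y.
# General solution: X(Y) = (c1*Ai'(-Y) + c2*Bi'(-Y)) / (c1*Ai(-Y) + c2*Bi(-Y));
# the constants below are an arbitrary illustrative choice.
import numpy as np
from scipy.special import airy
from scipy.integrate import solve_ivp

def riccati(Y, X):
    return X * X + Y

def closed_form(Y, c1, c2):
    ai, aip, bi, bip = airy(-Y)        # Ai(-Y), Ai'(-Y), Bi(-Y), Bi'(-Y)
    return (c1 * aip + c2 * bip) / (c1 * ai + c2 * bi)

Y0, c1, c2 = -3.0, 1.0, 0.5
sol = solve_ivp(riccati, (Y0, 0.0), [closed_form(Y0, c1, c2)],
                rtol=1e-10, atol=1e-12, dense_output=True)
Ys = np.linspace(Y0, 0.0, 5)
print("numerical  :", sol.sol(Ys)[0])
print("closed form:", closed_form(Ys, c1, c2))
# A generic solution blows up where c1*Ai(-Y) + c2*Bi(-Y) vanishes; the
# distinguished solution relevant for the relaxation oscillation corresponds
# to a special choice of (c1, c2).
```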

Visit to Aachen

September 13, 2018

In the past few days I was in Aachen. I had only been there once before, and that was for a job interview. It was just before Christmas. I only remember that it was dark, that there was a big Christmas market which I had trouble navigating through and that the building of the RWTH where the job was located was large and imposing. This time the visit was more relaxed. I was there for a small meeting organized by Sebastian Walcher centred around the topic of singular perturbation theory and with an emphasis on models coming from biology.

I visited the cathedral, a major part of which (eighth century) is older than any comparable building in northern Europe. Even to someone with as rudimentary a knowledge of architecture as mine it appeared very different from other cathedrals, with an oriental look. If I had been asked beforehand to imagine what the church of Charlemagne might look like I would never have guessed anything like the reality.

I gave a talk about my work with Juliette Hell about the existence of periodic solutions of the MAP kinase cascade. The emphasis was not only on the proof of this one result but also on explaining some general techniques with the help of this particular example. A theme running through this meeting was how, given a system of ODE depending on parameters, it is possible to find a small parameter \epsilon such that the limit \epsilon\to 0, while singular, nevertheless has some nice properties. In particular the aim is to obtain a limiting system with fewer equations and fewer parameters. Then we would like to lift some features of the dynamics of the limiting system to the full system. One talk which I would like to mention was by Satya Samal of the Joint Research Centre for Computational Biomedicine on tropical geometry. This is a term which I have often heard without understanding what it means intrinsically or how it can be applied to the study of chemical reaction networks. A lot of what was in the introduction of the talk can also be found in a paper by Radulescu et al. (Math. Modelling of Natural Phenomena, 10, 124). There was one thing in the talk which I found enlightening, which is not in the paper and which I have not seen elsewhere. I will try to explain it now. In this tropical business an algebra is introduced with ‘funny’ definitions of addition and multiplication for real numbers. It seems that often the formal properties of these operations are explained and then the calculations start. This is perhaps satisfactory for people with a strong affinity for algebra. I would like something more, an intuitive explanation of where these things come from. Let us consider multiplication first. If I start with two real numbers a and b, multiply each of them by \log\epsilon and exponentiate, giving the powers of \epsilon with these exponents, then multiply the results and finally take the logarithm and divide by \log\epsilon, I get the sum a+b. This should explain why the tropical equivalent of the product is the sum. Now let us consider the sum. I start with numbers a and b, take powers of a quantity \epsilon with these exponents as before, then sum the results and take the logarithm. Thus we get \log (\epsilon^a+\epsilon^b). Now we need an extra step, namely to take the limit \epsilon\to 0. If b>a then the second term is negligible with respect to the first for \epsilon small and the logarithm behaves like a\log\epsilon. If a>b then it behaves like b\log\epsilon. Thus after dividing by \log\epsilon and taking the limit I get \min\{a,b\}. This quantity is the tropical equivalent of the sum. I expect that this discussion might need some improvement but it does show on some level where the ‘funny’ operations come from. It also has the following intuitive interpretation related to asymptotic expansions. If I have two functions whose leading order contributions in the limit \epsilon\to 0 are given by the powers a and b then the leading order of the product is the power given by the tropical product of a and b while the leading order of the sum is the power given by the tropical sum of a and b.
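
Here is a tiny numerical illustration of this heuristic (my own toy example, not from the talk): encoding a number a as the power \epsilon^a, ordinary multiplication of the encodings decodes to the sum a+b exactly, while ordinary addition of the encodings decodes to \min\{a,b\} in the limit \epsilon\to 0.

```python
# Encode a real number a as the power eps**a.  Multiplying the encodings and
# decoding gives a + b exactly; adding the encodings and decoding tends to
# min(a, b) as eps -> 0.
import numpy as np

def decode(x, eps):
    return np.log(x) / np.log(eps)

a, b = 2.0, 3.0
for eps in (1e-2, 1e-4, 1e-8):
    prod = decode(eps**a * eps**b, eps)   # 'tropical product' -> a + b
    summ = decode(eps**a + eps**b, eps)   # 'tropical sum'     -> min(a, b)
    print(f"eps = {eps:.0e}:  product decodes to {prod:.6f},  sum decodes to {summ:.6f}")
print("a + b =", a + b, "   min(a, b) =", min(a, b))
```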

The technique of matched asymptotic expansions has always seemed to me like black magic or, to use a different metaphor, tightrope walking without a safety net. At some point I guessed that there might be a connection between (at least some instances of) this technique and geometric singular perturbation theory. I asked Walcher about this and he made an interesting suggestion. He proposed that what is being calculated in leading order is the point at which a solution of a slow-fast system hits the critical manifold (asymptotically), thus providing an initial condition for the slow system. I must try to investigate this idea more carefully.

SIAM Life Science Conference in Minneapolis

August 11, 2018

Here are some impressions from the SIAM Life Science Conference. On Monday I heard a talk by Nathan Kutz about the structure and function of neural networks in C. elegans and a moth whose name I have forgotten. Coincidentally, when I was in Boston I heard a presentation by someone in the Gunawardena group about genes and transcription factors related to different types of neurons in C. elegans. On that occasion I asked how many of the few hundred neurons of C. elegans are regarded as being of different types and I was surprised to hear that it is of the order of one hundred. Returning to the talk of Kutz, the key thing is that all the neurons in C. elegans and all their connections are known. Thus it is possible to simulate the whole network and reproduce central aspects of the behaviour of the worm. Another simplifying circumstance is that the motion of the worm can be described by only four parameters. A central message of the talk was that the behaviour of the nervous system of the worm itself can be reduced to a low-dimensional dynamical system. This reminded me, with how much justification I do not know, of what I heard about the beaks of Darwin’s finches in Lisbon recently. As for the moth, the question described in the talk was how it learns to identify odours. A lot is known about the architecture of the neurons in the moth’s olfactory system. The speaker has compared the learning ability of this biological neural network with that of artificial neural networks used in machine learning. In reaching quite a high accuracy (70%) on the basis of a small amount of training data the moth network beat all the artificial networks clearly. When given more data the results of the moth network hardly improved while those of the artificial networks got better and better and eventually overtook the moth. The speaker suggested that the moth system is very effectively optimized for the task it is supposed to perform. I also heard a nice talk by Richard Bertram about canards in a three-dimensional ODE system describing the electrical activity of heart cells. I found what he explained about canards enlightening and this has convinced me that it is time for me to finally understand this subject properly. In particular he explained that the oscillations associated with a canard have to do with twisting of the stable and unstable manifolds. I also found what he said about the relations between the canard and concrete biological observations (the form of electrical signals from neurons) very interesting.

On Wednesday I heard a talk by Benjamin Ribba, who after an academic past now works with Roche. One of the main themes of his talk was cancer immunotherapy but I did not feel I learned too much there. What I found more interesting was what he said in the first part of his talk about chemotherapy for low-grade glioma. This is a disease which only develops very slowly, so that even after it has been discovered the tumour may grow very slowly over a period of several years. He showed patient data for this. He explained why it was reasonable to leave patients so long without therapy. The reason is that the standard course of therapy at that time was something which could only be given to a patient once in a lifetime (presumably due to side effects). Thus it made sense to wait with the therapy until the time which was likely to increase the patient’s lifetime as much as possible. During the slow growth phase the disease is typically asymptomatic, so that it is not a question of years of physical suffering for the patient. The speaker developed a mathematical model for the evolution of the disease (a model with a few ODE) and this could be made to fit the data well. The time course of the tumour growth is as follows. Before treatment it grows steadily. After the treatment is started the tumour starts to shrink. After the treatment is stopped it continues to shrink for a long time but usually takes off again after a few years. In the model it was possible to try out alternatives to the standard treatment protocol and one was found which performed a lot better. The talk did not include any indication of whether this information had been used in practice. I asked that question at the end of the talk. The answer was that by the time the analysis had been done the standard treatment had been replaced by a very different one and so the theory was too late to have clinical consequences.

On Thursday I heard a talk by Dominik Wodarz. One of his themes was oncolytic viruses but here I want to concentrate on another one. This had to do with chronic lymphocytic leukemia. This is a B cell malignancy and is treated using ibrutinib, an inhibitor of BTK (Bruton’s tyrosine kinase). Most of the malignant B cells form a tumour in the bone marrow while a small percentage of them circulate in the blood. Measuring the latter is the easiest way of monitoring the course of the disease. When the drug is given the tumour shrinks but it was not clear whether this is because the cells leave the bone marrow for the blood, where they die, or whether a large proportion die in the marrow. Using a simple mathematical model it could be shown that the second of these is the correct explanation. There are several things I like about this work. First, very simple models are used. Second, after comparison with data, these give convincing explanations for what is happening. Third, these conclusions may actually influence clinical practice. This is a good example of what mathematical biology should be like.
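
To give a flavour of this kind of modelling, here is a generic two-compartment toy of my own construction (it is not the model from the talk, and the parameter values are made up): malignant cells in the tissue leave for the blood at one rate and die in the tissue and in the blood at other rates, and one can compare the blood counts predicted under the two competing hypotheses.

```python
# Toy sketch only, NOT the model from the talk: T(t) = malignant B cells in the
# tissue (bone marrow), B(t) = cells circulating in the blood.  Under treatment,
# cells leave the tissue for the blood at rate r, die in the tissue at rate d_T
# and die in the blood at rate d_B.  Comparing simulated blood counts under a
# "redistribution only" and a "death mainly in tissue" scenario shows how blood
# measurements can in principle distinguish the two.
import numpy as np
from scipy.integrate import solve_ivp

def cll(t, z, r, d_T, d_B):
    T, B = z
    return [-(r + d_T) * T, r * T - d_B * B]

scenarios = {
    "redistribution only   ": dict(r=0.10, d_T=0.00, d_B=0.05),
    "death mainly in tissue": dict(r=0.01, d_T=0.09, d_B=0.05),
}
t_eval = np.linspace(0.0, 100.0, 6)                # days after start of treatment
for name, p in scenarios.items():
    sol = solve_ivp(cll, (0.0, 100.0), [1e12, 1e10], t_eval=t_eval,
                    args=(p["r"], p["d_T"], p["d_B"]))
    print(name, " blood counts (x 1e10):", np.round(sol.y[1] / 1e10, 2))
```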

Visit to Boston

August 5, 2018

At the moment I am visiting Boston for the first time in my life. On Thursday I gave a talk at Harvard Medical School, at the invitation of Jeremy Gunawardena. The fact was not lost on me that this address was a quite exceptional one among the long list of places where I have given talks in my life. The subject of my talk was T cell signalling and, in particular, my work on this with Eduardo Sontag. I had a look around in the surrounding area and was impressed to see the variety of prestigious medical institutions which I knew by reputation, such as the Boston Children’s Hospital, the Dana-Farber Cancer Institute and the Brigham and Women’s Hospital. I was happy to see how many pedestrians there were, indicating that at least in Boston the pedestrian is not an endangered species.

On Friday I gave a talk on T cell activation at Northeastern University at the invitation of Eduardo Sontag. This is an institution which has recently jumped up in the rankings by cutting its student numbers drastically and increasing its fees correspondingly. The basic subject of the talk was the same as on Thursday but it was modified in order to try to cater for a different audience. The talk on Friday included more mathematics (since I expected that on average the Friday audience would have a stronger mathematical background than that on Thursday) and more biology (since on Thursday I left out more of the biology which I assumed would be known to most of the audience). There were two other talks by Michael Margaliot and Yoram Zarai. They were on the subject of certain models for the way that ribosomes are allocated to mRNA. A key idea is that many ribosomes moving along an RNA molecule could get into traffic jams. Indeed there is a close relation between these and models for traffic jams in the literal sense. The question is then how the machinery of the cell should be organized to avoid such traffic jams. Typical mathematical techniques which were applied in the work explained in the talk were theorems about monotone systems of ODE (or corresponding control systems) satisfying some extra conditions leading to a simplification of their dynamics.
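
For readers who have not seen these models, here is a small sketch of a ribosome-flow-type model of the kind studied by Margaliot and coworkers (this is my paraphrase from memory, and the precise conventions should be checked against their papers): the occupancy of each site along the mRNA evolves according to the flow of ribosomes hopping from site to site, and a single slow site produces exactly the kind of traffic jam mentioned above.

```python
# x[i] in [0, 1] is the occupancy of site i along the mRNA and lam[i] the rate
# of hopping from site i to site i+1, with lam[0] the initiation rate and
# lam[n] the exit rate; boundary conventions x_0 = 1 and x_{n+1} = 0.
# A slow site in the middle produces a traffic jam: occupancies upstream of it
# pile up at steady state.
import numpy as np
from scipy.integrate import solve_ivp

def rfm(t, x, lam):
    xe = np.concatenate(([1.0], x, [0.0]))
    flow = lam * xe[:-1] * (1.0 - xe[1:])     # flow[i]: hopping rate from site i to i+1
    return flow[:-1] - flow[1:]

n = 8
lam = np.ones(n + 1)
lam[4] = 0.1                                  # one slow elongation step
sol = solve_ivp(rfm, (0.0, 500.0), np.zeros(n), args=(lam,), rtol=1e-8)
print("steady-state occupancies:", np.round(sol.y[:, -1], 3))
print("protein production rate :", round(float(lam[-1] * sol.y[-1, -1]), 4))
```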

Today, Saturday, I spent most of the day walking around in Boston and Cambridge. In the morning it rained quite a bit but with an umbrella that was no problem. I was blissfully unaware of the fact that there had been a tornado warning, and I only heard later that a tornado did hit a place not too far from Boston that day. In Cambridge I observed a turkey sitting on the pavement and looking into a shop window as if it was considering what it might buy. After a while it crossed the road and seemed to be well educated, since it was very careful to stay on the zebra crossing. By chance and without expecting anything special I happened to enter the palatial Boston Public Library. Later in the afternoon I was on the waterfront near the aquarium and the pleasant experience of sitting by the sea was heightened by the fact that the gull on top of the lamppost was a Ring-billed Gull, a treat for my European eyes. I also spontaneously decided to take a boat trip around the harbour. In coming to Boston I had no special expectations about the city itself. Despite my short and superficial acquaintance with the place I can say that it has gained a secure place on my personal list of the most attractive cities I know.

ECMTB 2018 in Lisbon, part 2

July 27, 2018

In the mini-workshops at the conference related to chemical reaction network theory the most striking new result to be announced was by Balazs Boros. His preprint on this is arXiv:1710.04732. In fact it is necessary to say in what sense this is new but I will postpone that point and first discuss the mathematics. This result is very easy to formulate and I will try to make the discussion here as self-contained as possible. We start with a chemical reaction network consisting of reactions and complexes (the expressions on the left and right hand sides of the reactions, like X+Y). This network can be represented by a directed graph where the nodes are the complexes and the edges are the reactions, oriented from the complex on the LHS to that on the RHS. The network is called weakly reversible if whenever we can get from node X to node Y by following directed edges we can also get from Y to X. If we assume mass action kinetics and choose a positive reaction rate for each reaction we get a system of ODE describing the evolution of the concentrations of the substances belonging to the network in a standard way. Because of the interpretation we are interested in positive solutions, i.e. solutions for which all concentrations are positive. The theorem proved by Boros says: if the network is weakly reversible then the corresponding ODE system with mass action kinetics has at least one positive steady state. Actually he proves the stronger (and more natural) statement that there is a positive steady state in each positive stoichiometric compatibility class. Evidently the hypotheses only involve the graph of the network and require no details of the form of the complexes or the values of the reaction constants. Thus it is a remarkably strong result. In contrast to the statement of the theorem the proof is not at all easy. It involves reducing the desired statement to an application of the Brouwer fixed point theorem. Returning to the question of the novelty of the result, it was announced in a preprint of Deng et al. in 2011 (arXiv:1111.2386). That preprint has never been published and it seems that the proof proposed by the authors is far from complete. Furthermore, the authors do not seem to be willing or able to repair the argument. Thus this result has been blocked for seven years. For anyone else it is a thankless task to provide a proof since a positive reaction from the authors of the original paper is doubtful. Furthermore other people not familiar with the background may also fail to give due credit to the author of the new paper. I think that with this work Balazs has done a great service to the reaction network community and we who belong to this community should take every opportunity to express our gratitude for it.
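
Since weak reversibility is a purely graph-theoretic property, it is easy to check by computer. Here is a small sketch using the networkx library (the example networks are made up for illustration); the criterion used is that every connected component of the graph of complexes must be strongly connected. By the theorem of Boros, the first network below then has a positive steady state in each positive stoichiometric compatibility class for any choice of mass action rate constants, while the theorem says nothing about the second.

```python
# Weak reversibility depends only on the directed graph of complexes: every
# connected component of the graph must be strongly connected.  The complexes
# are just labels here.
import networkx as nx

def is_weakly_reversible(reactions):
    """reactions: iterable of (source complex, product complex) pairs."""
    g = nx.DiGraph(reactions)
    return all(nx.is_strongly_connected(g.subgraph(comp))
               for comp in nx.weakly_connected_components(g))

# a cycle of complexes is weakly reversible although no individual reaction is reversible
print(is_weakly_reversible([("X+Y", "2Z"), ("2Z", "W"), ("W", "X+Y")]))        # True
# one extra reaction with no way back destroys weak reversibility
print(is_weakly_reversible([("X+Y", "2Z"), ("2Z", "W"), ("W", "X+Y"),
                            ("W", "U")]))                                      # False
```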

There was a nice talk by Ilona Kosiuk on her work with Peter Szmolyan on NF\kappaB. She expressed doubts about the derivation of the three-dimensional system mentioned in a previous post from the four-dimensional system. The work she explained in some detail concerned the four-dimensional system and used GSPT to investigate the existence of periodic solutions of that system.

I feel that I got a lot more out of this conference than I did out of the one in Nottingham two years ago. I found more points of contact with my own research. This fact perhaps has less to do with the conference itself than with me. It is simply that I have penetrated a lot more deeply into mathematical biology during the last two years.

ECMTB 2018 in Lisbon

July 24, 2018

I am attending a meeting of the ESMTB in Lisbon. I am happy that the temperatures are very moderate, much more moderate than those at home in Mainz. The highest temperatures I have encountered so far were due to body heat, for the following reason. The first two sessions on reaction networks were in rooms much too small for the number of participants, with people sitting on extra chairs brought in from outside, on windowsills and on the floor, as available. This caused a rise in temperature despite the open windows. In any case, the size of those audiences is a good sign for the field.

The first plenary talk, by Samuel Kou, was about something called Google Flu Trends. The idea of this initiative, which started in 2008, was to predict the development of epidemics such as influenza by using data on the number of Google searches for phrases related to cases of flu. The idea is that between the time someone goes to the doctor because they are suffering from an infectious disease and the time the data finally come together on a national level there can be a lag of about two weeks, even in countries like the US with a highly developed reporting system. Thus the hope was that using data on Google searches could enable predictions two weeks earlier than other methods. The first test of this system looked very successful and obtained a lot of positive media coverage. After about a year the predictions started to be very wrong (perhaps due to the swine flu prevalent at that time being atypical). After modifications were made to the method the predictions were even further off, and in the opposite direction. Then there was a lot of negative media coverage. Google did not make the raw data available, which would have allowed experts to look into what had gone wrong. However a paper published in Nature revealed some things. The number of search terms used was chosen by optimization, with the quality of the predictions showing a clear local maximum as a function of this number. The search terms were not made public but one striking jump in the distribution was revealed to be due to a search term related to the Oscar ceremony. It was later suggested that what the analysis was predicting was not the incidence of flu but the beginning of winter. The mathematics used in the analysis was made public and was extremely elementary, although that by itself is not necessarily a bad thing. The group which Kou belonged to was able to make a low-grade reconstruction of the raw data using the freely available Google services called Google Trends and Google Correlate. They could then apply sophisticated statistical techniques to these data, creating a model called ARGO which performed a lot better than Google Flu Trends. Later Google stopped publishing their own analyses and made at least part of their data available to external researchers.

Another plenary talk, by Eörs Szathmáry, was about relationships between learning processes in the brain and evolution. I only understood a limited amount of the content but I nevertheless enjoyed the presentation, which combined a wide variety of different ideas. One specific point I did pick up concerns Darwin’s finches. The speaker explained that the molecular (genetic) basis of the different beak shapes of these birds is now understood. This evolution takes place in a three-parameter space and it was indicated how this kind of space can be established. A similar process has been modelled in the case of RNA by Uri Alon. In the talk there was a nice, partly implicit joke. One of the speaker’s main collaborators in this work is called Watson and he hinted at the idea of himself as Sherlock Holmes. Apart from the content I found the delivery of the talk very accomplished. I found it good in a way I cannot make precise.

The method of averaging

June 24, 2018

Techniques of averaging in the theory of differential equations have interested me for a long time. It happens that when determining the asymptotics of certain solutions it is important to show that certain integrals are finite although these integrals are not absolutely convergent. For this there must be a suitable cancellation of positive and negative contributions. Back in 2007 I published a paper (Class. Quantum Grav. 24, 667) where I studied this kind of phenomenon in certain inflationary cosmological models. There I did not use any general techniques but instead I just derived estimates by hand. More recently I have spent some time learning what general techniques there are. One famous method is the Krylov-Bogoliubov averaging method. Next semester I will organize a seminar on the subject of the method of averaging.

An iconic example which is a good starting point for discussing this subject is the Kapitza pendulum. Suppose that a rigid rod is attached to a support about which it can rotate freely in a vertical plane. If the support is fixed we get an ordinary pendulum. The steady state where the rod is vertically above the support is obviously unstable. The Kapitza pendulum is obtained by supposing that instead the support undergoes oscillations in the vertical direction with small amplitude and large frequency. For suitable choices of the parameters this stabilizes the unstable steady state of the ordinary pendulum.

How can the situation just described be understood mathematically? I follow here the discussion in Hale’s book on ordinary differential equations. The basic equation of motion is second order. If friction is ignored the equations can be put in Hamiltonian form, which means reducing them to a system of two first order equations in a certain way. Introducing a rescaled time coordinate leads to a system of the form x'=\epsilon f(t,x)+\epsilon h(\epsilon t,x), which is a standard form for the method of averaging. The system contains two time scales, t and \epsilon t. We now average over the fast time t, defining f_0(x)=\frac{1}{T}\int_0^T f(t,x) dt, where T is the period of the oscillation. Then the original equation is replaced by the averaged equation x'=\epsilon f_0(x)+\epsilon h(\epsilon t,x). The definition of f_0 given by Hale is more complicated since he wants to allow more general motions of the support which might be only almost periodic. The question is now to what extent solutions of the original equation can be approximated by solutions of the averaged equation for \epsilon small. Write the right hand side of the averaged equation in the form \epsilon G(\epsilon t,x). Suppose that the averaged equation has a periodic solution. We linearize the averaged equation about that solution. This gives rise to characteristic exponents, which are the exponential growth rates of linearized perturbations. If none of these characteristic exponents is purely imaginary then we obtain an existence theorem for solutions of the original equation which are small perturbations of the periodic solution of the averaged equation. If in addition all characteristic exponents have negative real parts then the perturbed solution is asymptotically stable, and if there is a characteristic exponent with positive real part it is unstable. It is a separate problem to prove the existence of a periodic solution of the averaged equation.
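
To see the stabilization on the computer one does not even need the averaging theory; it suffices to integrate the full equation of motion numerically. Here is a small sketch with toy parameters of my own choosing; the angle is measured from the inverted position, and the classical stability criterion quoted in the comments is the one I remember from the textbook treatment, so it should be checked there.

```python
# phi measures the angle from the UPRIGHT position and the pivot oscillates
# vertically as a*cos(omega*t), so the equation of motion is
#     phi'' = ((g + a*omega^2*cos(omega*t)) / l) * sin(phi).
# The textbook stability criterion for the inverted state is, as I recall,
# (a*omega)^2 > 2*g*l; the sketch compares one parameter set on each side of it.
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.81, 0.2                                  # gravity (m/s^2), pendulum length (m)

def kapitza(t, z, a, omega):
    phi, phidot = z
    return [phidot, (g + a * omega**2 * np.cos(omega * t)) / l * np.sin(phi)]

def max_angle(a, omega, t_end=20.0):
    sol = solve_ivp(kapitza, (0.0, t_end), [0.05, 0.0], args=(a, omega),
                    rtol=1e-8, atol=1e-10, max_step=2e-3)
    return np.abs(sol.y[0]).max()

for a, omega in ((0.02, 500.0), (0.02, 50.0)):    # fast drive / too slow a drive
    ok = (a * omega)**2 > 2.0 * g * l
    print(f"a*omega = {a*omega:5.1f}  criterion satisfied: {ok}  "
          f"max |phi| over 20 s: {max_angle(a, omega):.3f} rad")
```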