Herd immunity

February 14, 2019

I have a long-term interest in examples where mathematics has contributed to medicine. Last week I heard a talk at a meeting of the Mainzer Medizinische Gesellschaft about vaccination. The speaker was Markus Knuf, director of the pediatric section of the Helios Clinic in Wiesbaden. In the course of his talk he mentioned the concept of ‘herd immunity’ several times. I was familiar with this concept and I have even mentioned it briefly in some of my lectures and seminars. It never occurred to me that in fact this is an excellent example of a case where medical understanding has benefited from mathematical considerations.

Suppose we have a population of individuals who are susceptible to a particular disease. Suppose further that there is an endemic state, i.e. that the disease persists in the population at a constant non-zero level. It is immediately plausible that if we vaccinate a certain proportion \alpha of the population against the disease then the proportion of the population suffering from the disease will be lower than it would have been without vaccination. What is less obvious is that if \alpha exceeds a certain threshold \alpha_* then the constant level will be zero. This is the phenomenon of herd immunity. The value of \alpha_* depends on how infectious the disease is. A well-known example with a relatively high value is measles, where \alpha_* is about 0.95. In other words, if you want to get rid of measles from a population then it is necessary to vaccinate at least 95% of the population. Of course this picture is very schematic, since measles does not occur at a constant rate; instead it occurs in large waves. The idea is nevertheless useful when making public health decisions. Perhaps a better way of looking at it is to think of the endemic state as a steady state of a dynamical model.
The important thing is that this state is asymptotically stable in the dynamic set-up so that it recovers from any perturbation (infected individuals coming in from somewhere else). It just so happens that in the usual mathematical models for this type of phenomenon whenever a positive steady state (i.e. one where all populations are positive) exists it is asymptotically stable. Thus the distinction between the steady state and dynamical pictures is not so important. After I started writing this post I came across another post on the same subject by Mark Chu-Carroll. I am not sad that he beat me to it. The two posts give different approaches to the same subject and I think it is good if this topic gets as much publicity as possible.

Coming back to the talk I mentioned, a valuable aspect of it was that the speaker could report on things from his everyday experience in the clinic. This makes things much more immediate than if someone is talking about the subject on an abstract level. Let me give an example. He showed a video of a small boy with an extremely persistent cough. (He had permission from the child’s parents to use this video for the talk.) The birth was a bit premature but the boy left the hospital two weeks later in good health. A few weeks after that he returned with the cough. It turned out that he had whooping cough, which he had caught from an adult (non-vaccinated) relative. The man had had a bad cough, but its cause was not recognized and it was attributed to side effects of a drug he was taking for a heart problem. The doctors did everything to save the boy’s life but the infection soon proved fatal. It is important to realize that this is not an absolutely exceptional case but a scenario which happens regularly. It brings home what getting vaccinated (or failing to do so) really means. Of course an example like this has no statistical significance but it can nevertheless help to make people think.

Let me say some more about the mathematics of this situation. A common type of model is the SIR model. The dependent variables are S, the number of individuals who are susceptible to infection by the disease, I, the number of individuals who are infected (or infectious; this model ignores the incubation time) and R, the number of individuals who have recovered from the disease and are therefore immune. These three quantities depend on time and satisfy a system of ODE containing a number of parameters. There is a certain combination of these parameters, usually called the basic reproductive rate (or ratio) and denoted by R_0, whose value determines the outcome of the dynamics. If R_0\le 1 the infection dies out – the solution converges to a steady state on the boundary of the state space where I=0. If, on the other hand, R_0>1 there exists a positive steady state, an endemic equilibrium. The stability of this steady state can be examined by linearizing about it. In fact it is always stable. Interestingly, more is true. When the endemic steady state exists it is globally asymptotically stable. In other words all solutions with positive initial data converge to that steady state at late times. For a proof of this see a paper by Korobeinikov and Wake (Appl. Math. Lett. 15, 955), who use a Lyapunov function. At this point it is appropriate to mention that my understanding of these things has been improved by the hard work of Irena Vogel, who recently wrote her MSc thesis on the subject of Lyapunov functions in population models under my supervision.
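For the standard SIR model with births and deaths the threshold discussed above is \alpha_*=1-1/R_0. The following minimal numerical sketch (my own illustration; the parameter values are arbitrary and not fitted to any real disease) integrates such a model, with a fraction p of newborns vaccinated, and shows the infection persisting when p=0 and dying out when p exceeds the threshold:

```python
def simulate_sir(R0, p, gamma=0.1, mu=0.01, t_end=2000.0, dt=0.05):
    """Normalized SIR model with births and deaths at rate mu, where a
    fraction p of newborns is vaccinated. Returns the final infected
    fraction I. Parameter values are illustrative only."""
    beta = R0 * (gamma + mu)

    def f(S, I):
        dS = mu * (1.0 - p) - beta * S * I - mu * S
        dI = beta * S * I - (gamma + mu) * I
        return dS, dI

    S, I = 1.0 - p - 1e-3, 1e-3
    for _ in range(int(t_end / dt)):
        # classical fourth-order Runge-Kutta step
        k1 = f(S, I)
        k2 = f(S + 0.5 * dt * k1[0], I + 0.5 * dt * k1[1])
        k3 = f(S + 0.5 * dt * k2[0], I + 0.5 * dt * k2[1])
        k4 = f(S + dt * k3[0], I + dt * k3[1])
        S += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        I += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return I

R0 = 15.0
alpha_star = 1.0 - 1.0 / R0   # herd immunity threshold, here about 0.93
print(simulate_sir(R0, p=0.0))   # endemic level, stays well above zero
print(simulate_sir(R0, p=0.97))  # vaccination above threshold: dies out
```

With R_0=15 the threshold is about 0.93, of the same order as the measles figure quoted above.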



The probability space as a fiction

February 12, 2019

I have always had the impression that I understood probability theory very poorly. I had a course on elementary probability theory as an undergraduate and I already had difficulties with that. I was very grateful that in the final exam there was a question on the Borel-Cantelli Lemma which was about the only thing I did understand completely. More recently I have taught elementary probability myself and I do now have a basic understanding there. As a source I used the book of Feller which was the text I had as an undergraduate. I nevertheless remained without a deeper understanding of the subject. In the more recent past I have often been to meetings on reaction networks and on such occasions there are generally talks about both the deterministic and stochastic cases. I did learn some things in the stochastic talks but I was missing the mathematical background, the theory of continuous time Markov chains. My attempts to change this by background reading met with limited success. Yesterday I found a book called ‘Markov Chains’ by J. R. Norris and this seems to me more enlightening than anything I had tried before.

Looking at this book also led to progress of a different kind. I started thinking about the question of why I found probability theory so difficult. One superficial view of the subject is that it is just measure theory except that the known objects are called by different names. Since I do understand measure theory and have a strong affinity for language, if that were the only problem I should have been able to overcome it. Then I noticed a more serious difficulty, which had previously only been hovering on the edge of my consciousness. In elementary probability the concept of a probability space is clear – it is a measure space with total measure one. In more sophisticated probability theory it seems to vanish almost completely from the discussion. My impression in reading texts or listening to talks on the subject is that there is a probability space around in the background but that you never get your hands on it. You begin to wonder if it exists at all and this is the reason for the title of this post. I began to wonder if it is like the embedding into Euclidean space which any manifold in principle has but which plays no role in large parts of differential geometry. An internet search starting from this suspicion led me to an enlightening blog post of Terry Tao called ‘Notes 0: A review of probability theory’. There he reviews ‘foundational aspects of probability theory’. Fairly early in this text he compares the situation with that in differential geometry. He compares the role of the probability space to that of a coordinate system in differential geometry, probably a better variant of my thought about the embeddings. He talks about a ‘probabilistic way of thinking’ as an analogue of the ‘geometric way of thinking’. So I believe that I have now discovered the basic thing I did not understand in this context – I have not yet understood the probabilistic way of thinking.
When I consider how important the geometric way of thinking is when doing differential geometry, I see what an enormous problem it is to be without the probabilistic analogue. It is the key to understanding the questions of ‘what things are’ and ‘where things live’. For instance, to take an example from Tao’s notes, Poisson distributions are probability measures (‘distribution’ is the probabilistic translation of the word ‘measure’) on the natural numbers, the latter being thought of as a potential codomain of a random variable. Tao writes ‘With this probabilistic viewpoint, we shall soon see the sample space essentially disappear from view altogether …’ Why am I thinking of the Cheshire cat?
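The disappearance of the sample space can be made concrete in a toy discrete setting (entirely my own construction for illustration): once the law of a random variable, a measure on its codomain, has been extracted, expectations can be computed without ever referring to the underlying space again.

```python
from collections import Counter

# A toy finite sample space with the uniform probability measure.
N = 10_000
omega = range(N)

def X(w):
    """An arbitrary random variable Omega -> natural numbers."""
    return (w * w) % 7

# The distribution (law) of X: the pushforward of the uniform measure,
# a probability measure living on the codomain of X.
law = {k: c / N for k, c in Counter(X(w) for w in omega).items()}

# The expectation computed on the sample space ...
EX_omega = sum(X(w) for w in omega) / N
# ... agrees with the expectation computed from the law alone;
# the sample space has disappeared from the calculation.
EX_law = sum(k * p for k, p in law.items())
print(EX_omega, EX_law)
```

The second computation is the one a probabilist would write down; the explicit space omega plays the role of a discarded coordinate system.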

In a sequel to the blog post just mentioned Tao continues to discuss free probability. This is a kind of non-commutative extension of ordinary probability. It is a subject I do not feel I have to learn at this moment but I do think that it would be useful to have an idea how it reduces to ordinary probability in the commutative case. There is an analogy between this and non-commutative geometry. The latter subject is one which fascinated me sufficiently at the time I was at IHES to motivate me to attend a lecture course of Alain Connes at the College de France. The common idea is to first replace a space (in some sense) by the algebra of (suitably regular) functions on that space with pointwise operations. In practice this is usually done in the context of complex functions so that we have a * operation defined by complex conjugation. This then means that continuous functions on a compact topological space define a commutative C^*-algebra. The space can be reconstructed from the algebra. This leads to the idea that a C^*-algebra can be thought of as a non-commutative topological space. I came into contact with these things as an undergraduate through my honours project, supervised by Ian Craw. Non-commutative geometry has to do with extending this to replace the topological space by a manifold. Coming back to the original subject, this procedure has an analogue for probability theory. Here we replace the continuous functions by L^\infty functions, which also form an algebra under pointwise operations. In fact, as discussed in Tao’s notes, it may be necessary to replace this by a restricted class of L^\infty functions which are in particular in L^1. The reason for this is that a key structure on the algebra of functions (random variables) is the expectation. In this case the * operation is also important. The non-commutative analogue of a probability space is then a W^*-algebra (von Neumann algebra).
Comparing with the start of this discussion, the connection here is that while the probability space fades into the background the random variables (elements of the algebra) become central.

Minimal introduction to Newton polygons

January 24, 2019

In my work on dynamical systems I have used Newton polygons as a practical tool but I never understood the theoretical basis for why they are helpful. Here I give a minimal theoretical discussion. I do not yet understand the link to the applications I just mentioned but at least it is a start. Consider a polynomial equation of the form p(x,y)=0. The polynomial p can be written in the form p(x,y)=\sum_{i,j}a_{ij}x^iy^j. Suppose that p(0,0)=0, i.e. that a_{00}=0. I look for a family y=u(x) of solutions satisfying u(x)=Ax^\alpha+\ldots. We have p(x,u(x))=0. Intuitively the zero set of p is an algebraic variety which near the origin is the union of a finite number of branches. The aim is to get an analytic approximation to these branches. Substituting the ansatz into the equation gives \sum_{i,j}a_{ij}x^iu(x)^j=\sum_{i,j}a_{ij}A^jx^{i+\alpha j}+\ldots=0. If we compare the size of the summands in this expression then we see that summands have the same order of magnitude if they satisfy the equation i+\alpha j=C for the same constant C. Let S be the subset of the plane with coordinates (i,j) for those cases where a_{ij}\ne 0. For C=0 the line L with equation i+\alpha j=C does not intersect S. If we increase C then eventually the line L will meet S. If it meets S in exactly one point then the ansatz is not consistent, since a single dominant term cannot cancel against anything. A given value of \alpha is only possible if the line meets S in more than one point for some C. Let \tilde S be the set of points with coordinates (k,l) such that k\ge i and l\ge j for some (i,j)\in S and let K be the convex hull of \tilde S. Then for an acceptable value of \alpha the line L must have a segment in common with K. There are only finitely many values of \alpha for which this is the case. A case which could be of particular interest is that of the smallest branch, i.e. that for which \alpha takes the smallest value. Consider for simplicity the case that only two points of S lie on L. Call their coordinates (i_1,j_1) and (i_2,j_2).
Then the coefficient A is determined by the relation A^{j_2-j_1}=-\frac{a_{i_1j_1}}{a_{i_2j_2}}. Further questions which arise are whether the formal asymptotic expansion whose leading term has been calculated can be extended to higher order and whether there is a theorem asserting the existence of a branch for which this is actually an asymptotic approximation. These will not be pursued here.
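As a small illustration, note that with the convention y=Ax^\alpha a term a_{ij}x^iy^j contributes A^jx^{i+\alpha j}, so the admissible positive exponents \alpha are exactly those for which a line i+\alpha j=C supports at least two points of the support S. This little brute-force script (my own illustration, not taken from any reference) computes them with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def newton_polygon_slopes(terms):
    """terms: dict mapping (i, j) -> coefficient a_ij (with a_00 absent).
    Returns the admissible exponents alpha > 0: those for which the line
    i + alpha*j = C touches the support S in at least two points while
    no point of S lies below it (a supporting line)."""
    S = [point for point, a in terms.items() if a != 0]
    slopes = set()
    for (i1, j1), (i2, j2) in combinations(S, 2):
        if j1 == j2:
            continue
        alpha = Fraction(i2 - i1, j1 - j2)
        if alpha <= 0:
            continue
        C = i1 + alpha * j1
        if all(i + alpha * j >= C for (i, j) in S):
            slopes.add(alpha)
    return sorted(slopes)

# The cusp y^2 - x^3 = 0: the branches y = +-x^(3/2) give alpha = 3/2.
print(newton_polygon_slopes({(0, 2): 1, (3, 0): -1}))
# y^3 - xy - x^3 = 0 has two admissible exponents, alpha = 1/2 and 2.
print(newton_polygon_slopes({(0, 3): 1, (1, 1): -1, (3, 0): -1}))
```

For the cusp the two support points are (0,2) and (3,0), so 2\alpha=3, and the formula above then gives A^2=1, i.e. A=\pm 1.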

Mathematics enters the canon of molecular biology

December 18, 2018

On the shelf in my office I have a heavy red tome. It is the fifth edition of ‘Molecular Biology of the Cell’ by Alberts et al., a standard work. While reading an editorial by Arup Chakraborty I learned that the sixth edition of the book contains a section on mathematical models. Its title is ‘Mathematical Analysis of Cell Functions’ and it is almost 20 pages long. It is perhaps not very deep mathematically but I am nevertheless delighted that it is there. I find it very important that a textbook of central importance in cell biology takes time to discuss mathematics in that context and presents arguments why mathematics is valuable for biology. I wonder how much of what is written there would be understood by a mathematician with no background in molecular biology. Of course that is not the intended audience of the book and it is just my idle curiosity that makes me ask the question.

Let me list some of the main themes treated in the section. I give them as informal statements. (1) Negative feedback can stabilize a steady state. (2) Negative feedback plus delay can give rise to sustained oscillations. (3) Positive feedback plus cooperativity can give rise to multistability. (4) An incoherent feed-forward loop can give rise to a transient response to a signal. (5) A coherent feed-forward loop can give rise to a delay. I should emphasize that in all cases mentioned here the models involved use ODE. The delay in (2) does not have to do with a delay equation but just with a sufficiently large number of steps which take place one after another.
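As a toy illustration of (4), here is a minimal ODE model of an incoherent feed-forward loop (the equations and parameter values are my own choices for illustration, not taken from the book): an input s activates both y and z, while y represses z. After the input switches on, z rises and then falls back, a transient pulse.

```python
def simulate_iffl(t_end=15.0, dt=0.001, K=0.5, n=4):
    """Incoherent feed-forward loop: input s -> y, s -> z, y -| z.
    At t=0 the input s jumps from 0 to 1; y and z start at 0.
    Returns the time series of z (explicit Euler, illustrative only)."""
    s = 1.0
    y, z = 0.0, 0.0
    zs = []
    for _ in range(int(t_end / dt)):
        dy = s - y                          # y slowly follows the input
        dz = s / (1.0 + (y / K) ** n) - z   # z activated by s, repressed by y
        y += dt * dy
        z += dt * dz
        zs.append(z)
    return zs

zs = simulate_iffl()
# z overshoots: its peak lies well above its final steady value 1/17.
print(max(zs), zs[-1])
```

The pulse arises because z responds to the signal before the repressor y has had time to build up, which is exactly the mechanism described informally in (4).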

I would like to make some connections between the points (1)-(5) above and mathematical theorems. I will start with (2) since that seems the easiest case. I will replace (2) with another statement which is even easier: (2a) negative feedback is a necessary condition for the existence of an attracting periodic solution. This is related to ideas of Rene Thomas which I discussed in a previous post. Suppose we have a system of ODE \dot x=f(x) where the signs of the elements of the linearization Df(x) are independent of x. Here a ‘sign’ may be positive, negative or zero. As explained elsewhere a system of this kind defines a graph called the species graph or influence graph. A feedback loop is an oriented cycle in this graph and its sign is the product of the signs of the edges making up the cycle. The system exhibits negative feedback if its influence graph contains a negative feedback loop. The claim is then that in the absence of negative feedback there exist no attracting periodic solutions. Before I go further I should acknowledge my debt to Frederic Beck, who wrote his MSc under my supervision on the subject of feedback. The discussion which follows has benefitted a lot from his exposition of this subject. A system without negative feedback loops as defined above is called coherent. It was proved by Angeli, Hirsch and Sontag (J. Diff. Eq. 246, 3058) that by reversing the signs of some of the variables the system can be made quasicooperative. This means that along any feedback loop all signs are equal. Note that this does not mean that signs need be equal along an unoriented cycle. This can be illustrated by the case of an incoherent feedforward loop. It follows from the results of Angeli et al. that in a coherent system there exist no attracting periodic solutions. This result can be strengthened by replacing ‘attracting’ by ‘stable’. This has been proved by Richard and Comet (J. Math. Biol. 63, 593).
They also claim that if the condition that the signs of the entries in the linearization are spatially constant is dropped then the analogous statement is false. Their proof of this latter statement contained an error, as noticed by Frederic Beck. They repaired it in an erratum (J. Math. Biol. 70, 957).

This discussion shows that a lot is understood about negative feedback as a necessary condition for periodic solutions although there may still be more to be discovered on that subject. I think that much less is understood about the possibility of negative feedback being a sufficient condition for periodic solutions. As a concrete example let us consider the Selkov oscillator. It does contain a negative feedback loop consisting of one positive and one negative edge. For some parameter values it exhibits periodic solutions but for others there are only damped oscillations. Can the assumption of a negative feedback loop be supplemented in some way by an assumption of delay to give a sufficient condition? What should that assumption of delay be?
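For the Selkov oscillator the two regimes can be read off from the linearization at its unique positive steady state. In the scaling \dot x=-x+ay+x^2y, \dot y=b-ay-x^2y (the form used in Strogatz's textbook, which I use here only for illustration) the steady state is (x,y)=(b,b/(a+b^2)); a short computation shows the determinant of the linearization equals a+b^2>0, so stability is decided by the sign of the trace:

```python
def selkov_trace(a, b):
    """Trace of the Jacobian of the Selkov system
    x' = -x + a*y + x**2*y, y' = b - a*y - x**2*y
    at its positive steady state (x, y) = (b, b/(a + b**2)).
    The determinant equals a + b**2 > 0, so the steady state is
    stable precisely when this trace is negative."""
    d = a + b ** 2
    return -1.0 + 2.0 * b ** 2 / d - d

# With a = 0.08 the steady state is unstable for b = 0.6 (sustained
# oscillations on a limit cycle) and stable for b = 0.9 (damped
# oscillations), consistent with the two regimes mentioned above.
print(selkov_trace(0.08, 0.6))  # positive
print(selkov_trace(0.08, 0.9))  # negative
```

So the negative feedback loop is present in both regimes; what changes with the parameters is whether the linearization is destabilizing enough to sustain the oscillation.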

Hahn’s minimal model for the Calvin cycle

December 4, 2018

As has been discussed elsewhere in this blog there are many mathematical models for the Calvin cycle of photosynthesis in the literature and it is of interest to try to understand the relations between these models. One extreme case which may be considered is to look at the simplest possible models of this system which still capture some of the essential biological features. In 1984 Brian Hahn published a model of the Calvin cycle with 19 chemical species. Like many of the models in the literature this does not include a description of photorespiration. In 1987 he extended this to a model which does include photorespiration and has 31 chemical species. Later (1991) Hahn followed the strategy indicated above, looking for simplified models. He found one two-dimensional model and one three-dimensional model. Like many models in the literature the three-dimensional model contains the fifth power of the concentration of GAP (glyceraldehyde phosphate). This arises because elementary reactions have been lumped together to give an effective reaction where five molecules of the three-carbon molecule GAP go in and three molecules of a five-carbon sugar come out. Assuming mass action kinetics for this effective reaction then gives the fifth power. In an attempt to simplify the analysis yet further Hahn replaced the fifth power by the second power in his two-dimensional model.

Sadly, Brian Hahn is no longer around to study models of photosynthesis. He was a mathematics professor at the University of Cape Town. As head of department he was asked to give an opinion on whether the position of one member of the department should be extended. He found he could not make a positive recommendation and the matter was referred to a committee. At the meeting of this committee the candidate became very aggressive and, at least partly as a consequence of this, it was decided that his position should not be extended. Some time later, in the mathematics building, the candidate beat Hahn to death. At his trial it was decided that he was mentally ill and therefore could not be convicted of murder. After less than two years he was released from confinement. From what I have read it seems that the public discussion of this case has been influenced by issues of racism: Hahn was white and the man who killed him was black.

Let me return to photosynthesis. I thought that the small models of Hahn should be subjected to detailed scrutiny. This became the project of my PhD student Hussein Obeid. Now we have written a paper on the two-dimensional model of Hahn (with quadratic nonlinearity) and the related model with the fifth power. These can be studied with and without photorespiration. Without photorespiration it turns out that there are solutions which tend to infinity at late times. These are similar to solutions previously discovered in another model of the Calvin cycle by Juan Velazquez and myself, which we called runaway solutions. The difference is that in the two-dimensional model we can see how the runaway solutions fit into the global dynamics of the system. In this case there is a unique positive steady state which is unstable. The one-dimensional stable manifold of this point divides the space into two regions. In one region all solutions tend to infinity and have the same late-time asymptotics. In the other region all solutions tend to zero. The existence of these solutions is related to the phenomenon of overload breakdown studied by Dorothea Möhring and myself in another paper. When photorespiration is added a stable steady state comes in from infinity. Convergence to the stable steady state replaces the runaway asymptotics and all solutions are bounded.

We were able to show that the dynamics for the system with the fifth power is qualitatively similar to that with the second power. The only differences are that it is not possible to obtain such detailed information about the parameter ranges for which certain qualitative behaviour occurs and that it is not ruled out that non-hyperbolic behaviour might happen in exceptional cases. Hahn derives his two-dimensional model by an informal reduction of the three-dimensional one. We showed how this can be made rigorous using geometric singular perturbation theory. This allows some limited information to be obtained about the dynamics of solutions of the three-dimensional system. We hope that this paper will serve as a good foundation for later investigations of the three- and higher-dimensional models of Hahn.

Broken foot

November 17, 2018

Last Saturday morning, while getting up from my desk at home, I tripped over something. I fell on the ground but my foot was stuck under the desk and could not accompany me properly in my fall. After that my foot was rather painful. Nevertheless the pain was not extreme and I did not take the matter very seriously. I limped around and even gave my lectures at the blackboard as normal on Tuesday and Thursday morning. On Wednesday I went to my GP to get his opinion on a non-acute matter. Actually I did not even really know him since I have not been to the doctor for five years apart from getting a tetanus booster. The pain in my foot had been getting less from day to day and I imagined I was on the way to recovery. At the same time the foot looked a bit funny, with some strange bruises. For this reason I showed the doctor my foot. He said that these bruises could be a sign of an injury deeper in the foot and suspected it was broken. He sent me to get it X-rayed to make sure. This is the first time in my life I have broken a bone and so I might be excused for not thinking of the possibility.

When I got to the place where the X-ray was to be done it turned out that their machine was broken and would only be working again the next day. Having already wasted so much time in attending to the foot I decided to look for another radiologist. By this time it was lunchtime and the first radiologist I tried was closed for lunch. The next one I tried did not accept me as a patient, for reasons I did not quite understand. The next one was closed the whole afternoon for a training course. In the late afternoon the fifth attempt was finally successful. The X-ray revealed that my foot was broken. Technically it is what is called a Jones fracture, which is a certain kind of break of the fifth metatarsal bone, a bone of whose existence I had known nothing up to that point in my life. It has the reputation of healing rather badly due to the poor blood supply in that area. I was told that I should see a surgeon as soon as possible. Since it was already very late in the afternoon that had to wait for the next day.

On Thursday I went back to my GP to discuss the further strategy. His practice was closed in the morning for a training course (where did I hear that phrase recently?). It opened again at three in the afternoon. Once I managed to see the doctor he called up a surgeon’s practice to check whether someone could see me that day. He got a positive answer and so I went there as quickly as possible. When I arrived the ‘friendly’ lady at the desk greeted me with the sceptical phrase, ‘what, you expect the surgeon to see you today?’. I just replied ‘That was the idea.’ Fortunately he did agree to see me. He explained the mechanism of the fracture. The bone is attached to a tendon and when that tendon is pulled with a large enough force the bone breaks apart. Fortunately it was a clean break and the two pieces had moved very little from the place they would normally be. I left the practice with crutches and wearing a surgical boot which looks very futuristic, like the first installment towards a space suit. I took a taxi home.

Now I learned something about how it is to have limited mobility. It just happens that Eva is away for a week so that she was not there to help me. (On Monday she was still there and took me to the university and back by car.) It is an interesting mental training. You realize how often you needlessly go from one place to another under normal circumstances, even within the home. Now it is necessary to plan an optimal route and minimise the number of times I go up and down the stairs. Yesterday I went to the university by tram as a method to reduce the necessary walking distance to one I could manage. I also had the interesting experience of giving myself an anti-thrombosis injection. Actually it was not so bad as I expected and I suppose it will soon become routine. The professionals gave me the first one on Thursday to explain how it works. This morning I did some necessary shopping at the supermarket and took the tram one stop to get there. A neighbour saw me leaving home and was kind enough to transport me in the one direction in her car. Now I am looking forward to a weekend at home where I will have no bigger physical obstacle to overcome than occasionally climbing the stairs.


Canards

September 22, 2018

This post follows on from the last one. For the mathematics I want to describe here my main source is the book ‘Multiple Time Scale Dynamics’ by Christian Kuehn. In the last post I discussed the transitions from slow to fast dynamics in a relaxation oscillator. A point where this takes place is a fold point. More insight into the transformations which can be used to analyse the dynamics near a point of this kind can be obtained using geometric singular perturbation theory and this is described in Chapter 7 of Kuehn’s book. The point can be blown up using quasihomogeneous directional blow-ups similar to those which I used in my work with Pia Brechmann on the Selkov oscillator, described here. The main conceptual difference in comparison to our work is that in the case of the fold point there is a small parameter \epsilon involved and it is also rescaled. In this context it is necessary to cover a neighbourhood of the fold point by three charts, in each of which there is a highly non-trivial dynamics. With a suitable analysis of these dynamics it is possible to get information about the transition map from a section before the fold point to one after it. Here the cube roots already seen in the previous post again come up. With this in hand it becomes relatively easy to prove the existence of a relaxation oscillation in the van der Pol system and also that it is stable and hyperbolic. In particular, the existence statement follows from the fact that the return map, obtained by following a solution from a section back to itself around the singular cycle for \epsilon=0, is a contraction. There are other ways of getting the existence result but they rely on special features, in particular the fact that the system is two-dimensional. The proof using GSPT is more powerful since it can be more easily adapted to other situations, such as higher dimensions, and it gives more detailed results, such as the expansion for the period.
For instance in the book it is explained how this works for a case with one fast and two slow variables.
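To make the leading-order period computation concrete: in Liénard coordinates the van der Pol system can be written \epsilon\dot x=y-(x^3/3-x), \dot y=-x. On the slow branches y=x^3/3-x one has dt=-\frac{x^2-1}{x}dx, and integrating over the two slow phases gives the leading-order period T_0=3-2\ln 2\approx 1.61 in slow time. The following crude numerical sketch (explicit Euler, my own illustration; a stiff integrator would be preferable for very small \epsilon) is consistent with this value plus a small positive correction:

```python
import math

def vdp_period(eps=0.001, dt=0.0002, t_end=12.0):
    """Estimate the period of the van der Pol relaxation oscillation in
    Lienard form eps*x' = y - (x**3/3 - x), y' = -x, by timing successive
    upward zero crossings of x after the transient has died away."""
    x, y = 2.0, 0.0
    crossings = []
    t = 0.0
    for _ in range(int(t_end / dt)):
        x_new = x + dt * (y - (x ** 3 / 3.0 - x)) / eps
        y_new = y + dt * (-x)
        if x < 0.0 <= x_new:          # upward crossing: the fast jump
            crossings.append(t)
        x, y = x_new, y_new
        t += dt
    return crossings[-1] - crossings[-2]

T = vdp_period()
print(T, 3 - 2 * math.log(2))  # measured period vs singular limit
```

The measured period exceeds 3-2\ln 2 slightly, in line with the fact that the first correction in the expansion is a positive multiple of \epsilon^{2/3}.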

I have not yet mentioned the concept in the title of this post. (I did once mention it briefly in a recent post.) A canard, apart from being the French word for a duck, is an idea in dynamical systems which has intrigued me for a long time but which I understood very little about. With the help of Chapter 8 of Kuehn’s book I have now been able to change this. What I will not do here is to try to explain the origin of the word canard in this context. It has led to a considerable number of humorous contributions of varying quality and I do not want to add to that literature here. I recall that at a fold point a non-degeneracy condition f(0,0,0)\ne 0 holds. Here f(x,y,\epsilon) is the right hand side of the evolution equation for the fast variable. This means that the slow flow does not stand still at the fold point. A canard is obtained if we assume that f(0,0,0)=0 at the fold point while two other non-degeneracy assumptions are made. In this case the fold point is called a fold singularity. This word is used in the sense that sometimes a steady state of a dynamical system is referred to as a singularity. In this case the fold point is a steady state of the slow flow. The first non-degeneracy assumption is that f_y(0,0,0)\ne 0. This means that the steady state is hyperbolic. For the other condition the setting has to be extended by introducing an additional parameter \lambda. Then we have a function f(x,y,\epsilon,\lambda) and it is assumed that f_\lambda(0,0,0,0)\ne 0. In the simplest situation, such as the van der Pol oscillator, the slow dynamics takes place on a critical manifold which is normally hyperbolic and stable. The curious thing about a canard is that the slow dynamics can follow an unstable critical manifold for a relatively long time before jumping off. More precisely it can remain within distance \epsilon of a repelling part of the slow manifold for a time which is of order one on the slow time scale.
Information can be obtained on the dynamics of this type of situation by doing blow-ups. A surprising feature of this type of point is that it is associated with the production of small periodic solutions in a scenario called a singular Hopf bifurcation. Some intuition for this can be obtained by thinking about a periodic solution which starts near the fold singularity, moves a short distance along an unstable branch of the slow manifold (canard), jumps to the stable branch and then returns to its starting point along that branch. A simple example where a canard occurs is the van der Pol system with constant forcing, in other words a system obtained by modifying the basic van der Pol system by introducing an additive constant at an appropriate place on the right hand side.
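For the van der Pol system with constant forcing, which can be written \epsilon\dot x=y-(x^3/3-x), \dot y=\lambda-x, the unique steady state sits at x=\lambda, and a direct check of the linearization (my own sketch) shows that it changes stability in a Hopf bifurcation exactly when \lambda crosses the fold at x=1:

```python
import cmath

def steady_state_eigenvalues(lam, eps=0.01):
    """Eigenvalues of the linearization of eps*x' = y - (x**3/3 - x),
    y' = lam - x at the steady state x = lam, y = lam**3/3 - lam.
    The Jacobian is [[(1 - lam**2)/eps, 1/eps], [-1, 0]]."""
    tr = (1.0 - lam ** 2) / eps     # trace of the Jacobian
    det = 1.0 / eps                 # determinant, always positive
    disc = cmath.sqrt(tr ** 2 - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# The real part of the eigenvalues changes sign as lam crosses 1:
# unstable steady state (relaxation oscillation) for lam just below 1,
# stable steady state for lam just above 1.
for lam in (0.9, 1.1):
    print(lam, steady_state_eigenvalues(lam)[0].real)
```

Since the determinant 1/\epsilon is always positive, the bifurcation occurs where the trace (1-\lambda^2)/\epsilon vanishes, i.e. precisely at the fold, which is where the canards and the small periodic solutions of the singular Hopf scenario appear.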

Jumping off a critical manifold

September 21, 2018

In a previous post I discussed the concept of relaxation oscillations and the classical example, the van der Pol oscillator for large values of the parameter \mu. The periodic orbit consists of two phases of fast motion and two of slow motion. The slow motion is on the critical manifold, and in the transition from slow to fast the solution jumps off the critical manifold. This is of course only a heuristic description of what happens when the parameter \epsilon=\mu^{-2} is zero. What we would like to understand is what happens for \epsilon small and positive. Then the non-differentiability related to the jumping off must be smoothed out. To get an expression for the period of the oscillation at leading order (i.e. order zero in \epsilon) it suffices to compute the time taken to complete one of the slow phases. Since the critical manifold is one-dimensional this can be reduced to computing an integral, and in the case of the van der Pol oscillator the integral can be computed explicitly. The fast phases make no contribution – in this limit they are instantaneous. In order to get higher order corrections we need to be able to control what happens near the corners where the motion changes from slow to fast. What does the next correction look like? A naive guess would be that it might be an integer power of \epsilon, or that at worst this might be corrected by some expression involving \log\epsilon. In reality the corner is genuinely singular and it produces more exotic phenomena. To be specific, it produces integer powers of \epsilon^{1/3}. How can we understand the origin of such terms? The jumping-off point is what is called a fold point, since it is analogous to a fold bifurcation. Suppose we have a system \dot y=g(x,y) which undergoes a generic fold bifurcation at the origin. We can extend the system by adding the trivial equation \dot x=0 for the parameter x.
Now we modify this by replacing the equation for x by the equation \dot x=\epsilon f(x,y,\epsilon) with f(0,0,0)\ne 0. Thus for \epsilon=0 we have something which looks like a fold bifurcation for the fast subsystem. For \epsilon non-zero the quantity x, which was constant and could be called a parameter, becomes dynamical and starts to move slowly. For this kind of situation there is an approximate normal form: the system is topologically equivalent to a simple system, up to higher order corrections.
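The leading-order period computation for the van der Pol oscillator can be checked numerically. The sketch below assumes the Liénard form \dot x=\mu(y-(x^3/3-x)), \dot y=-x/\mu (an equivalent rewriting of \ddot x-\mu(1-x^2)\dot x+x=0) with an illustrative value of \mu, and compares the measured period with the classical leading-order value (3-2\log 2)\mu:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 40.0  # illustrative large value of the parameter

def vdp(t, u):
    # van der Pol oscillator in Lienard coordinates
    x, y = u
    return [mu * (y - (x**3 / 3 - x)), -x / mu]

# Integrate long enough to pass the transient and cover several cycles.
sol = solve_ivp(vdp, (0, 500), [2.0, 0.0], method='LSODA',
                rtol=1e-8, atol=1e-8, dense_output=True)

# Measure the period from upward zero crossings of x after the transient.
t = np.linspace(100, 500, 200000)
x = sol.sol(t)[0]
crossings = t[:-1][(x[:-1] < 0) & (x[1:] >= 0)]
period = np.mean(np.diff(crossings))

print(period / mu)  # close to 3 - 2*log(2) ~ 1.6137, plus the
                    # mu**(-4/3) correction discussed in the text
```

The measured ratio exceeds the leading-order constant by a small amount, which is exactly the kind of correction (powers of \epsilon^{1/3}=\mu^{-2/3}) that the analysis of the fold point is needed to explain.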

What does all this have to do with the cube roots in the expansion? The remarkable fact is that the normal form (i.e. the leading part of the approximate normal form) can be reduced by means of a rescaling of x, y and t to a system in which the parameter \epsilon is eliminated. The price to be paid for this is that the domain in which this can be done is very small for small \epsilon. The magical rescaling is to replace (x,y,t) by (\epsilon^{1/3}X,\epsilon^{2/3}Y,\epsilon^{2/3}T). The model equation which comes out is (up to signs) \frac{dX}{dY}=X^2+Y. This is a Riccati equation whose solutions can be analysed further by classical methods. There are solutions with three different types of behaviour, and for one of these three types there is precisely one solution, which is the solution relevant for the relaxation oscillation. It describes the correct smoothing of the corner.
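The elimination of \epsilon can be verified symbolically. In the sketch below I take as a model the fold system written in the slow time, \epsilon\dot x=x^2+y, \dot y=1 (the signs and the choice f=1 are my own illustrative assumptions), substitute the rescaling and check that \epsilon drops out of the equation for X:

```python
import sympy as sp

X, Y, eps = sp.symbols('X Y epsilon', positive=True)

# Model fold system in the slow time tau (signs chosen for illustration):
#   eps * dx/dtau = x**2 + y   (fast variable x at a generic fold)
#   dy/dtau       = 1          (slow variable y, taking f(0,0,0) = 1)
# Rescale: x = eps**(1/3)*X, y = eps**(2/3)*Y, tau = eps**(2/3)*T.
x = eps**sp.Rational(1, 3) * X
y = eps**sp.Rational(2, 3) * Y

# Chain rule: eps * dx/dtau = eps * eps**(1/3) * (dX/dT) * eps**(-2/3)
#           = eps**(2/3) * dX/dT, so dX/dT = (x**2 + y) / eps**(2/3).
dXdT = sp.simplify((x**2 + y) / eps**sp.Rational(2, 3))
print(dXdT)  # X**2 + Y -- epsilon has dropped out

# Similarly dy/dtau = eps**(2/3 - 2/3) * dY/dT, so dY/dT = 1
# and hence dX/dY = X**2 + Y, the Riccati equation in the text.
```

The same bookkeeping shows why the region of validity shrinks: the rescaled variables are of order one only where x is of order \epsilon^{1/3} and y of order \epsilon^{2/3}.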

Visit to Aachen

September 13, 2018

In the past few days I was in Aachen. I had only been there once before, for a job interview just before Christmas. I only remember that it was dark, that there was a big Christmas market which I had trouble navigating through, and that the RWTH building where the interview took place was large and imposing. This time the visit was more relaxed. I was there for a small meeting organized by Sebastian Walcher centred around the topic of singular perturbation theory, with an emphasis on models coming from biology.

I visited the cathedral of which a major part is older than all comparable buildings in northern Europe (eighth century). Even for someone with such rudimentary knowledge of architecture as myself it did appear very different from other cathedrals, with an oriental look. If I had been asked beforehand to imagine what the church of Charlemagne might look like I would never have guessed anything like the reality.

I gave a talk about my work with Juliette Hell on the existence of periodic solutions of the MAP kinase cascade. The emphasis was not only on the proof of this one result but also on explaining some general techniques with the help of this particular example. A theme running through this meeting was how, given a system of ODEs depending on parameters, it is possible to find a small parameter \epsilon such that the limit \epsilon\to 0, while singular, nevertheless has some nice properties. In particular the aim is to obtain a limiting system with fewer equations and fewer parameters. Then we would like to lift some features of the dynamics of the limiting system to the full system. One talk which I would like to mention was by Satya Samal of the Joint Research Centre for Computational Biomedicine on tropical geometry. This is a term which I have often heard without understanding what it means intrinsically or how it can be applied to the study of chemical reaction networks. A lot of what was in the introduction of the talk can also be found in a paper by Radulescu et al. (Math. Modelling of Natural Phenomena, 10, 124). There was one thing in the talk which I found enlightening, which is not in the paper and which I have not seen elsewhere. I will try to explain it now. In this tropical business an algebra is introduced with ‘funny’ definitions of addition and multiplication for real numbers. It seems that often the formal properties of these operations are explained and then the calculations start. This is perhaps satisfactory for people with a strong affinity for algebra. I would like something more: an intuitive explanation of where these things come from. Let us consider multiplication first. If I start with two real numbers a and b, multiply them by \log\epsilon and exponentiate, giving the powers of \epsilon with these exponents, then multiply the results and finally take the logarithm and divide by \log\epsilon, I get the sum a+b.
This should explain why the tropical equivalent of the product is the sum. Now let us consider the sum. I start with numbers a and b and take powers of a quantity \epsilon with these exponents as before, then sum the results and take the logarithm, giving \log (\epsilon^a+\epsilon^b). Now we need an extra step, namely to take the limit \epsilon\to 0. If b>a then the second term is negligible with respect to the first for \epsilon small and the limit is a\log\epsilon. If a>b then the limit is b\log\epsilon. Thus after dividing by \log\epsilon I get \min\{a,b\}. This quantity is the tropical equivalent of the sum. I expect that this discussion might need some improvement but it does show on some level where the ‘funny’ operations come from. It also has the following intuitive interpretation related to asymptotic expansions. If I have two functions whose leading order contributions in the limit \epsilon\to 0 are given by the powers a and b, then the leading order of the product is the power given by the tropical product of a and b, while the leading order of the sum is the power given by the tropical sum of a and b.
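The limiting procedure just described is easy to check numerically. In the sketch below (the function names are my own) the ‘funny’ operations are addition-as-product and minimum-as-sum, and the \epsilon\to 0 limit for the sum is verified for sample exponents:

```python
import numpy as np

def tropical_product(a, b):
    return a + b      # tropical 'times' is ordinary addition

def tropical_sum(a, b):
    return min(a, b)  # tropical 'plus' is the minimum

# The limit log(eps**a + eps**b) / log(eps) -> min(a, b) as eps -> 0+:
a, b = 2.0, 3.0
for eps in (1e-2, 1e-4, 1e-8):
    approx = np.log(eps**a + eps**b) / np.log(eps)
    print(eps, approx)  # approaches tropical_sum(a, b) = 2.0
```

Note that for \epsilon<1 the dominant term in \epsilon^a+\epsilon^b is the one with the smaller exponent, which is why the minimum, rather than the maximum, appears.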

The technique of matched asymptotic expansions has always seemed to me like black magic or, to use a different metaphor, tight-rope walking without a safety net. At some point I guessed that there might be a connection between (at least some instances of) this technique and geometric singular perturbation theory. I asked Walcher about this and he made an interesting suggestion. He proposed that what is being calculated in leading order is the point at which a solution of a slow-fast system hits the critical manifold (asymptotically) thus providing an initial condition for the slow system. I must try to investigate this idea more carefully.

SIAM Life Science Conference in Minneapolis

August 11, 2018

Here are some impressions from the SIAM Life Science Conference. On Monday I heard a talk by Nathan Kutz about the structure and function of neural networks in C. elegans and a moth whose name I have forgotten. Coincidentally, when I was in Boston I heard a presentation by someone in the Gunawardena group about genes and transcription factors related to different types of neurons in C. elegans. On that occasion I asked how many of the few hundred neurons of C. elegans are regarded as being of different types and I was surprised to hear that it is of the order of one hundred. Returning to the talk of Kutz, the key thing is that all the neurons in C. elegans and all their connections are known. Thus it is possible to simulate the whole network and reproduce central aspects of the behaviour of the worm. Another simplifying circumstance is that the motion of the worm can be described by only four parameters. A central message of the talk was that the behaviour of the nervous system of the worm itself can be reduced to a low dimensional dynamical system. This reminded me, with how much justification I do not know, of what I heard about the beaks of Darwin’s finches in Lisbon recently. As for the moth, the question described in the talk was how it learns to identify odours. A lot is known about the architecture of the neurons in the moth’s olfactory system. The speaker has compared the learning ability of this biological neural network with that of artificial neural networks used in machine learning. In reaching quite a high accuracy (70%) on the basis of a small amount of training data the moth network beat all the artificial networks clearly. When given more data the results of the moth network hardly improved, while those of the artificial networks got better and better and eventually overtook the moth. The speaker suggested that the moth system is very effectively optimized for the task it is supposed to perform.
I also heard a nice talk by Richard Bertram about canards in a three-dimensional ODE system describing the electrical activity of heart cells. I found what he explained about canards enlightening and this has convinced me that it is time for me to finally understand this subject properly. In particular he explained that the oscillations associated with a canard have to do with twisting of the stable and unstable manifolds. I also found what he said about the relations between the canard and concrete biological observations (the form of electrical signals from neurons) very interesting.

On Wednesday I heard a talk by Benjamin Ribba, who after an academic past now works at Roche. One of the main themes of his talk was cancer immunotherapy but I did not feel I learned too much there. What I found more interesting was what he said in the first part of his talk about chemotherapy for low-grade glioma. This is a disease which only develops very slowly, so that even after it has been discovered the tumour may grow very slowly over a period of several years. He showed patient data for this. He explained why it was reasonable to leave patients so long without therapy. The reason is that the standard course of therapy at that time was something which could only be given to a patient once in a lifetime (presumably due to side effects). Thus it made sense to wait with the therapy until the time at which it was likely to increase the patient’s lifetime as much as possible. During the slow growth phase the disease is typically asymptomatic, so that it is not a question of years of physical suffering for the patient. The speaker developed a mathematical model for the evolution of the disease (a model with a few ODEs) and this could be made to fit the data well. The time course of the tumour growth is as follows. Before treatment it grows steadily. After the treatment is started the tumour starts to shrink. After the treatment is stopped it continues to shrink for a long time but usually takes off again after a few years. In the model it was possible to try out alternatives to the standard treatment protocol and one was found which performed a lot better. The talk did not include any indication of whether this information had been used in practice. I asked that question at the end of the talk. The answer was that by the time the analysis had been done the standard treatment had been replaced by a very different one, and so the theory was too late to have clinical consequences.

On Thursday I heard a talk by Dominik Wodarz. One of his themes was oncolytic viruses but here I want to concentrate on another one. This had to do with chronic lymphocytic leukemia. This is a B cell malignancy and is treated using ibrutinib, an inhibitor of BTK (Bruton’s tyrosine kinase). Most of the malignant B cells form a tumour in the bone marrow, while a small percentage of them circulate in the blood. Measuring the latter is the easiest way of monitoring the course of the disease. When the drug is given the tumour shrinks, but it was not clear whether this is because the cells leave the bone marrow for the blood, where they die, or whether a large proportion die in the marrow. Using a simple mathematical model it could be shown that the second of these is the correct explanation. There are several things I like about this work. First, very simple models are used. Second, after comparison with data, these give convincing explanations for what is happening. Third, these conclusions may actually influence clinical practice. This is a good example of what mathematical biology should be like.