Archive for February, 2019

Herd immunity

February 14, 2019

I have a long-term interest in examples where mathematics has contributed to medicine. Last week I heard a talk at a meeting of the Mainzer Medizinische Gesellschaft about vaccination. The speaker was Markus Knuf, director of the pediatric section of the Helios Clinic in Wiesbaden. In the course of his talk he mentioned the concept of ‘herd immunity’ several times. I was familiar with this concept and have even mentioned it briefly in some of my lectures and seminars. It had never occurred to me that it is in fact an excellent example of a case where medical understanding has benefited from mathematical considerations. Suppose we have a population of individuals who are susceptible to a particular disease. Suppose further that there is an endemic state, i.e. that the disease persists in the population at a constant non-zero level. It is immediately plausible that if we vaccinate a certain proportion \alpha of the population against the disease then the proportion of the population suffering from the disease will be lower than it would have been without vaccination. What is less obvious is that if \alpha exceeds a certain threshold \alpha_* then the constant level will be zero. This is the phenomenon of herd immunity. The value of \alpha_* depends on how infectious the disease is. A well-known example with a relatively high threshold is measles, where \alpha_* is about 0.95. In other words, if you want to get rid of measles from a population then it is necessary to vaccinate at least 95% of the population. Admittedly this picture is very schematic, since measles does not occur at a constant rate; instead it comes in large waves. The idea is nevertheless useful when making public health decisions. Perhaps a better way of looking at it is to think of the endemic state as a steady state of a dynamical model.
The important thing is that this state is asymptotically stable in the dynamical set-up, so that it recovers from any perturbation (such as infected individuals coming in from somewhere else). It just so happens that in the usual mathematical models for this type of phenomenon, whenever a positive steady state (i.e. one where all populations are positive) exists it is asymptotically stable. Thus the distinction between the steady-state and dynamical pictures is not so important. After I started writing this post I came across another post on the same subject by Mark Chu-Carroll. I am not sad that he beat me to it. The two posts give different approaches to the same subject and I think it is good if this topic gets as much publicity as possible.
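In the basic models the threshold has a simple closed form, \alpha_* = 1 - 1/R_0, where R_0 is the basic reproductive ratio discussed below. A minimal sketch (the function name is mine, and the R_0 values for measles are the commonly quoted illustrative range, not data):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Critical vaccination fraction alpha_* = 1 - 1/R_0 in the simple SIR picture."""
    if r0 <= 1.0:
        return 0.0  # the disease dies out even without any vaccination
    return 1.0 - 1.0 / r0

# Measles is often quoted with R_0 somewhere in the range 12-18,
# which already puts the threshold above 90%:
for r0 in (12, 15, 18):
    print(r0, round(herd_immunity_threshold(r0), 3))
```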

Coming back to the talk I mentioned, a valuable aspect of it was that the speaker could report on things from his everyday experience in the clinic. This makes things much more immediate than if someone is talking about the subject on an abstract level. Let me give an example. He showed a video of a small boy with an extremely persistent cough. (He had permission from the child’s parents to use this video for the talk.) The birth was a bit premature but the boy left the hospital two weeks later in good health. A few weeks after that he returned with the cough. It turned out that he had whooping cough, which he had caught from an adult (non-vaccinated) relative. The man had had a bad cough but its cause was not recognised and was attributed to side effects of a drug he was taking for a heart problem. The doctors did everything to save the boy’s life but the infection soon proved fatal. It is important to realize that this is not an absolutely exceptional case but a scenario which happens regularly. It brings home what getting vaccinated (or failing to do so) really means. Of course an example like this has no statistical significance but it can nevertheless help to make people think.

Let me say some more about the mathematics of this situation. A common type of model is the SIR model. The dependent variables are S, the number of individuals who are susceptible to infection by the disease, I, the number of individuals who are infected (or infectious; this model ignores the incubation time) and R, the number of individuals who have recovered from the disease and are therefore immune. These three quantities depend on time and satisfy a system of ODE containing a number of parameters. There is a certain combination of these parameters, usually called the basic reproductive rate (or ratio) and denoted by R_0, whose value determines the outcome of the dynamics. If R_0\le 1 the infection dies out – the solution converges to a steady state on the boundary of the state space where I=0. If, on the other hand, R_0>1 there exists a positive steady state, an endemic equilibrium. The stability of this steady state can be examined by linearizing about it. In fact it is always stable. Interestingly, more is true. When the endemic steady state exists it is globally asymptotically stable. In other words, all solutions with positive initial data converge to that steady state at late times. For a proof of this see a paper by Korobeinikov and Wake (Appl. Math. Lett. 15, 955), who use a Lyapunov function. At this point it is appropriate to mention that my understanding of these things has been improved by the hard work of Irena Vogel, who recently wrote her MSc thesis under my supervision on the subject of Lyapunov functions in population models.
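The convergence to the endemic equilibrium can be seen numerically. The sketch below integrates the SIR equations with vital dynamics (a birth/death rate \mu is what sustains the endemic state; plain SIR without demography just burns out). The parameter values are purely illustrative, not fitted to any real disease. With a normalized population, R_0 = \beta/(\gamma+\mu) and the endemic equilibrium has S^* = 1/R_0, I^* = \mu(R_0-1)/\beta:

```python
# Illustrative parameters (not taken from any real disease); mu is the
# birth/death rate that replenishes susceptibles and sustains endemicity.
beta, gamma, mu = 0.5, 0.1, 0.01    # infection, recovery, birth/death rates
r0 = beta / (gamma + mu)            # basic reproductive ratio, here > 1

s, i = 0.99, 0.01                   # susceptible and infected fractions
dt, steps = 0.01, 300_000           # crude forward Euler integration
for _ in range(steps):
    ds = mu - beta * s * i - mu * s
    di = beta * s * i - (gamma + mu) * i
    s, i = s + ds * dt, i + di * dt

s_star = 1.0 / r0                   # endemic equilibrium values
i_star = mu * (r0 - 1.0) / beta
print(f"s -> {s:.4f} (s* = {s_star:.4f}), i -> {i:.4f} (i* = {i_star:.4f})")
```

The solution spirals in through damped epidemic waves, consistent with the asymptotic stability discussed above; a production computation would of course use a proper ODE solver rather than this Euler loop.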


The probability space as a fiction

February 12, 2019

I have always had the impression that I understood probability theory very poorly. I had a course on elementary probability theory as an undergraduate and I already had difficulties with that. I was very grateful that in the final exam there was a question on the Borel-Cantelli Lemma which was about the only thing I did understand completely. More recently I have taught elementary probability myself and I do now have a basic understanding there. As a source I used the book of Feller which was the text I had as an undergraduate. I nevertheless remained without a deeper understanding of the subject. In the more recent past I have often been to meetings on reaction networks and on such occasions there are generally talks about both the deterministic and stochastic cases. I did learn some things in the stochastic talks but I was missing the mathematical background, the theory of continuous time Markov chains. My attempts to change this by background reading met with limited success. Yesterday I found a book called ‘Markov Chains’ by J. R. Norris and this seems to me more enlightening than anything I had tried before.

Looking at this book also led to progress of a different kind. I started thinking about the question of why I found probability theory so difficult. One superficial view of the subject is that it is just measure theory, except that the known objects are called by different names. Since I do understand measure theory and have a strong affinity for language, if that were the only problem I should have been able to overcome it. Then I noticed a more serious difficulty, which had previously only been hovering on the edge of my consciousness. In elementary probability the concept of a probability space is clear – it is a measure space with total measure one. In more sophisticated probability theory it seems to vanish almost completely from the discussion. My impression when reading texts or listening to talks on the subject is that there is a probability space around in the background but that you never get your hands on it. You begin to wonder if it exists at all, and this is the reason for the title of this post. I began to wonder if it is like the embedding into Euclidean space which any manifold in principle has but which plays no role in large parts of differential geometry. An internet search starting from this suspicion led me to an enlightening blog post of Terry Tao called ‘Notes 0: A review of probability theory‘. There he reviews ‘foundational aspects of probability theory’. Fairly early in this text he compares the situation with that in differential geometry. He compares the role of the probability space to that of a coordinate system in differential geometry, probably a better variant of my thought about embeddings. He talks about a ‘probabilistic way of thinking’ as an analogue of the ‘geometric way of thinking’. So I believe that I have now discovered the basic thing I did not understand in this context – I have not yet understood the probabilistic way of thinking.
When I consider what a difference understanding (or not understanding) the geometric way of thinking makes when doing differential geometry, I see what an enormous problem this is. It is the key to understanding the questions of ‘what things are’ and ‘where things live’. For instance, to take an example from Tao’s notes, Poisson distributions are probability measures (‘distribution’ is the probabilistic translation of the word ‘measure’) on the natural numbers, the latter being thought of as a potential codomain of a random variable. Tao writes ‘With this probabilistic viewpoint, we shall soon see the sample space essentially disappear from view altogether …’ Why am I thinking of the Cheshire cat?
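Tao’s example can be made concrete: the Poisson distribution with parameter \lambda is simply a measure on the natural numbers assigning mass e^{-\lambda}\lambda^k/k! to each k, and one can compute with it directly, with no sample space in sight. A small sketch (function name mine):

```python
import math

def poisson_pmf(lam: float, k: int) -> float:
    """Mass assigned to the natural number k by the Poisson(lam) measure."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 3.0
# A probability measure on the naturals: the masses sum to 1 ...
total = sum(poisson_pmf(lam, k) for k in range(100))
# ... and expectations are integrals against the measure, sample space unseen:
mean = sum(k * poisson_pmf(lam, k) for k in range(100))
print(total, mean)
```

(Truncating the sum at k = 100 is harmless here, since the tail of Poisson(3) beyond that point is negligible.)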

In a sequel to the blog post just mentioned, Tao goes on to discuss free probability. This is a kind of non-commutative extension of ordinary probability. It is not a subject I feel I have to learn at this moment, but I do think it would be useful to have an idea how it reduces to ordinary probability in the commutative case. There is an analogy between this and non-commutative geometry. The latter subject fascinated me sufficiently at the time I was at IHES to motivate me to attend a lecture course of Alain Connes at the Collège de France. The common idea is to first replace a space (in some sense) by the algebra of (suitably regular) functions on that space with pointwise operations. In practice this is usually done in the context of complex functions, so that we have a * operation defined by complex conjugation. This means that the continuous functions on a compact topological space form a commutative C^*-algebra, and the space can be reconstructed from the algebra. This leads to the idea that a C^*-algebra can be thought of as a non-commutative topological space. I came into contact with these things as an undergraduate through my honours project, supervised by Ian Craw. Non-commutative geometry has to do with extending this so as to replace the topological space by a manifold. Coming back to the original subject, this procedure has an analogue for probability theory. Here we replace the continuous functions by L^\infty functions, which also form an algebra under pointwise operations. In fact, as discussed in Tao’s notes, it may be necessary to restrict to a class of L^\infty functions which are in particular in L^1, the reason being that a key structure on the algebra of functions (random variables) is the expectation. In this case the * operation is also important. The non-commutative analogue of a probability space is then a W^*-algebra (von Neumann algebra). Comparing with the start of this discussion, the connection here is that while the probability space fades into the background, the random variables (elements of the algebra) become central.
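The shift of emphasis from the space to its algebra of functions can be illustrated in a toy finite case (all the names below are mine, chosen for illustration, not any library’s API): random variables on a finite sample space form a commutative algebra under pointwise operations, and the expectation is the distinguished linear functional on it.

```python
# Toy finite sample space with its probability measure: two fair coin tosses.
omega = {"HH": 0.25, "HT": 0.25, "TH": 0.25, "TT": 0.25}

def expectation(f):
    """The distinguished linear functional on the algebra of random variables."""
    return sum(p * f(w) for w, p in omega.items())

heads = lambda w: w.count("H")                 # random variable: number of heads
ind = lambda w: 1.0 if w[0] == "H" else 0.0    # indicator: first toss is heads

# The algebra structure is pointwise: the product of two random variables is
# again a random variable, and the expectation acts on the whole algebra.
product = lambda w: heads(w) * ind(w)
print(expectation(heads), expectation(ind), expectation(product))
```

Once everything is phrased in terms of the pair (algebra, expectation), the dictionary `omega` could be discarded: this is exactly the sense in which the probability space becomes a fiction, and dropping commutativity of the algebra is the starting point of free probability.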