## Archive for the ‘immunology’ Category

### Dynamics of the activation of Lck

January 26, 2021

The enzyme Lck (lymphocyte-specific protein tyrosine kinase) is of central importance in the function of immune cells. I hope that mathematics can contribute to the understanding and improvement of immune checkpoint therapies for cancer. For this reason I have looked at the relevant mathematical models in the literature. In doing this I have realized the importance in this context of obtaining a better understanding of the activation of Lck. I was already familiar with the role of Lck in the activation of T cells. There are two tyrosines in Lck, Y394 and Y505, whose phosphorylation state influences its activity. Roughly speaking, phosphorylation of Y394 increases the activity of Lck while phosphorylation of Y505 decreases it. The fact that these influences go in opposite directions already indicates complications. In fact the kinase Lck catalyses its own phosphorylation, especially on Y394. This is an example of autophosphorylation in trans, i.e. one molecule of Lck catalyses the phosphorylation of another molecule of Lck. It turns out that autophosphorylation tends to favour complicated dynamics. Even in a protein with a single phosphorylation site the occurrence of autophosphorylation can lead to bistability. Normally bistability in a chemical reaction network means the existence of more than one stable positive steady state and this is the definition I usually adopt. The definition may be weakened to the existence of more than one non-negative stable steady state. That autophosphorylation can produce bistability in this weaker sense was already observed by Lisman in 1985 (PNAS 82, 3055). He was interested in this as a mechanism of information storage in a biochemical system. In 2006 Fuss et al. (Bioinformatics 22, 158) found bistability in the strict sense in a model for the dynamics of Src kinases. Since Lck is a typical member of the family of Src kinases these results are also of relevance for Lck.
In that work the phosphorylation processes are embedded in feedback loops. In fact the bistability is present without the feedback, as observed by Kaimachnikov and Kholodenko (FEBS J. 276, 4102). Finally, it was shown by Doherty et al. (J. Theor. Biol. 370, 27) that bistability (in the strict sense) can occur for a protein with only one phosphorylation site. This is in contrast to more commonly considered phosphorylation systems. These authors have also seen more complicated dynamical behaviour such as periodic solutions.
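To convey the flavour of these results, here is a toy sketch of how a single autophosphorylation site can produce bistability. The model and all parameter values are my own illustrative choices, not those of any of the papers cited above: phosphorylation has a small basal rate plus a contribution proportional to the amount of already-phosphorylated protein (autophosphorylation in trans), while dephosphorylation is of Michaelis-Menten type.

```python
# Toy model of autophosphorylation for a protein with one phosphorylation
# site.  p is the phosphorylated fraction.  Phosphorylation has a small
# basal rate k0 plus a rate k1*p contributed by already-phosphorylated
# enzyme; dephosphorylation follows Michaelis-Menten kinetics.
# All parameter values are illustrative, chosen only to exhibit bistability.

def f(p, k0=0.01, k1=4.0, V=1.0, Km=0.1):
    return (k0 + k1 * p) * (1.0 - p) - V * p / (Km + p)

def steady_state(p0, dt=0.01, steps=20000):
    """Integrate dp/dt = f(p) by the explicit Euler method."""
    p = p0
    for _ in range(steps):
        p += dt * f(p)
    return p

low = steady_state(0.05)   # converges to a low-activity steady state
high = steady_state(0.5)   # converges to a high-activity steady state
print(low, high)
```

Initial data on either side of an unstable intermediate steady state converge to two different stable positive steady states, which is bistability in the strict sense.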

All these results on the properties of solutions of reaction networks are on the level of simulations. Recently Lisa Kreusser and I set out to investigate these phenomena on the level of rigorous mathematics and we have just put a paper on this subject on arXiv. The model developed by Doherty et al. is one-dimensional and therefore relatively easy to analyse. The first thing we do is to give a rigorous proof of bistability for this system together with some information on the region of parameter space where this phenomenon occurs. We also show that it can be lifted to the system from which the one-dimensional system is obtained by timescale separation. The latter system has no periodic solutions. To obtain bistability the effect of the phosphorylation must be to activate the enzyme. It does not occur in the case of inhibition. We show that when an external kinase is included (in the case of Lck there is an external kinase Csk which may be relevant) and we do not restrict to the Michaelis-Menten regime, bistability is restored.

We then go on to study the dynamics of the model of Kaimachnikov and Kholodenko, which is three-dimensional. These authors mention that it can be reduced to a two-dimensional model by timescale separation. Unfortunately we did not succeed in finding a rigorous framework for their reduction. Instead we used another related reduction which gives a setting which is well-behaved in the sense of geometric singular perturbation theory (normally hyperbolic) and can therefore be used to lift dynamical features from two to three dimensions in a rather straightforward way. It then remains to analyse the two-dimensional system. It is easy to deduce bistability from the results already mentioned. We go further and show that there exist periodic and homoclinic solutions. This is done by showing the existence of a generic Bogdanov-Takens bifurcation, a procedure described more generally here and here. This system contains an abundance of parameters and the idea is to fix these so as to get the desired result. First we choose the coordinates of the steady state to be fixed simple rational numbers. Then we fix all but four of the parameters in the system. The four conditions for a BT bifurcation are then used to determine the values of the other four parameters. To get the desired positivity for the computed values the choices must be made carefully. This was done by trial and error. Establishing genericity required a calculation which was complicated but doable by hand. When a generic BT bifurcation has been obtained it follows that there are generic Hopf bifurcations nearby in parameter space and the nature of these (sub- or supercritical) can be determined. It turns out that in our case they are subcritical so that the associated periodic solutions are unstable. Having proved that periodic solutions exist we wanted to see what a solution of this type looks like by simulation. We had difficulties in finding parameter values for which this could be done.
(We know the parameter values for the BT point and that those for the periodic solution are nearby but they must be chosen in a rather narrow region which we do not know explicitly.) Eventually we succeeded in doing this. In this context I used the program XPPAUT for the first time in my life and I learned to appreciate it. I see this paper as the beginning rather than the end of a path and I am very curious as to where it will lead.

### My first virtual conference (SMB 2020)

August 20, 2020

At the moment I am attending the annual conference of the Society for Mathematical Biology, which is taking place online. This is my first experience of this kind of format. The conference has many more participants than in any previous year, more than 1700. It takes place in a virtual building which is generated by the program Sococo. I find this environment quite disorienting and a bit stressful. This reaction probably has to do with the facts that I am no longer so young and that I have always tried to avoid social media as much as possible. I am sure that younger generations (and members of older generations with an enthusiasm for new technical developments) have far fewer problems getting used to it. In advance I was a bit worried about setting up the necessary computer requirements to be able to give my talk or even to go to others. In the end it worked out and my talk, given via Zoom, went smoothly. I got some good feedback, I am already convinced that it was worth joining this meeting and I may be less sceptical about joining others of this type in the future. There have been technical hitches. For instance the start of one big talk was delayed by about 20 minutes for a reason of this kind. Nevertheless, many things have gone well. Of course it is much preferable to meet people personally but when that is not possible virtual meetings with old friends are also pleasant.

I also want to mention an interesting conversation I had in the poster session. The poster concerned was that of Daniel Koch. The theme of his work (which has already been published in a paper in J. Theor. Biol.) is that the formation of oligomers of proteins (or their posttranslationally modified variants) can lead to interesting dynamics. At first sight this may sound too simple to be interesting but in fact in mathematics it is often the careful consideration of apparently simple situations which leads to fundamental progress. I imagine that this principle also applies to other disciplines (such as biology) but it is perhaps strongest in mathematics. In any case, I am strongly motivated to study this work carefully. The only question is when that will be, given the many other directions I want to pursue.

### Monotone systems revisited

December 4, 2019

There are some topics in mathematics and physics which are a lasting source of dissatisfaction for me since I feel that I have not properly understood them despite having made considerable efforts to do so. In the case of physics the reason is often that the physicists who understand the subject are not able to explain it in a way which provides what a mathematician sees as a comprehensible account. In mathematics the problem is a different one. Mathematicians frequently have a tendency (often justified) to discuss things on a level which is as general as possible. This leads to theorems which are loaded down with detail and where the many technical conditions make it difficult to see the wood for the trees. When confronted with such things I sometimes feel exhausted and give up. I prefer an account which builds up ideas step by step from simple beginnings. Here I return to a subject which I have written about more than once in this blog before but where the sense of dissatisfaction remains. I hope to reduce it here.

I start with a system of ordinary differential equations $\dot x_i=f_i(x)$. It should be defined on the $n$-dimensional Euclidean space or on one of its orthants. (An orthant is the subset of Euclidean space defined by making a choice of the signs of its components. It generalises a quadrant in the two-dimensional case.) The system is said to be cooperative if $\frac{\partial f_i}{\partial x_j}\ge 0$ for all $i\ne j$. The name comes from the fact that the equations for the population dynamics of a set of species have this property if each species benefits the others. Suppose we now have two solutions $x$ and $\bar x$ of the system and that $x_i(t_0)\le\bar x_i(t_0)$ for all $i$ at some time $t_0$. We may abbreviate this relation by $x(t_0)\le\bar x(t_0)$. Here we see a partial order on Euclidean space defined by the ordering of the components. A theorem of Müller and Kamke says that if the initial data for two solutions of a cooperative system at time $t_0$ satisfy this relation then $x(t)\le\bar x(t)$ for all $t\ge t_0$. Another way of saying this is that the time-$t$ flow of the system preserves the partial order. A system of ODE with this property is called monotone. Thus the Müller-Kamke theorem says that a cooperative system is monotone.
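The theorem is easy to check numerically in examples. Here is a small illustration with a cooperative system of my own choosing (the system and the initial data are purely illustrative); for a small enough time step the explicit Euler map itself preserves the order, so the comparison can be made along the whole discrete orbit.

```python
# Numerical illustration of the Mueller-Kamke theorem for the cooperative
# system  dx1/dt = -x1 + tanh(x2),  dx2/dt = -x2 + tanh(x1),
# whose off-diagonal partial derivatives sech^2 are positive everywhere.
# The example system and initial data are mine, chosen only for illustration.
import math

def f(x):
    return [-x[0] + math.tanh(x[1]), -x[1] + math.tanh(x[0])]

def euler_orbit(x, dt=0.01, steps=500):
    orbit = [list(x)]
    for _ in range(steps):
        dx = f(x)
        x = [x[i] + dt * dx[i] for i in range(2)]
        orbit.append(list(x))
    return orbit

a = euler_orbit([0.0, 0.0])   # (0,0) is a steady state of this system
b = euler_orbit([0.5, 1.0])   # initial data dominating those of a
# The partial order x <= xbar is preserved along the whole orbit:
ordered = all(pa[i] <= pb[i] for pa, pb in zip(a, b) for i in range(2))
print(ordered)
```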

The differential condition for monotonicity can be integrated. If $x$ and $\bar x$ are two points in Euclidean space with $x_i=\bar x_i$ for a certain $i$ and $x_j\le\bar x_j$ for $j\ne i$ then $f_i(x)\le f_i(\bar x)$. To see this we join $x$ to $\bar x$ by a piecewise linear curve where the coordinates other than the $i$th are increased successively from $x_j$ to $\bar x_j$. On each segment of this curve the value of $f_i$ does not decrease, as a consequence of the fundamental theorem of calculus. Hence its value at the end of the entire path is at least as big as its value at the beginning. We now want to prove that a certain inequality holds at all times $t\ge t_0$. In order to do this we would like to consider the first time $t_*>t_0$ where the inequality fails and get a contradiction. Unfortunately there might be no such time – in principle the condition might fail immediately. To get around this we deform the system for the solution $\bar x$ to $\frac{d\bar x_i}{dt}=f_i(\bar x)+\epsilon$. If we can prove the result for the deformed system then the result for the original system follows by letting $\epsilon$ tend to zero, using continuous dependence of the solution on $\epsilon$. For the deformed system let $t_*$ be the supremum of the times where the desired inequality holds. If the inequality does not hold globally then the solutions are still defined at $t=t_*$. At $t=t_*$ we have $x_i=\bar x_i$ for some $i$ and we can assume w.l.o.g. that $x_j<\bar x_j$ for some $j$, since otherwise the two solutions would be equal and the result trivial. The integrated form of the cooperativity condition implies that at $t_*$ the right hand side of the evolution equation for $\bar x_i-x_i$ is positive. On the other hand the fact that this quantity has just reached zero coming from positive values implies that the right hand side of its evolution equation is non-positive, and we get a contradiction.

A key source of information about monotone dynamical systems is the book of Hal Smith with this title. I have repeatedly looked at this book but always got bogged down quite quickly. Now I realise that for my purposes it would have been much better if I had started with chapter 3. The Müller-Kamke theorem is discussed in section 3.1. The range of application of this theorem can be extended considerably by the following trick, discussed in section 3.5. Suppose that we define $y_i=(-1)^{m_i}x_i$ where each $m_i$ is zero or one. This transforms the signs of $Df$ in a certain way and so cooperativity of the system for $y$ corresponds to a certain sign pattern for the entries of $Df$. A first important condition is that each off-diagonal element of $Df(x)$ should be either non-negative or non-positive. Next, the sign of $\frac{\partial f_i}{\partial x_j}\frac{\partial f_j}{\partial x_i}$ is not changed by the transformation and must thus be non-negative. In the context of population models this can be interpreted as saying that there is no pair of species which are in a predator-prey relationship. Given that these two conditions are satisfied we consider a labelled graph where the nodes are the numbers from $1$ to $n$ and there is an edge between two nodes if at least one of the corresponding partial derivatives is non-zero at some point. The edge is then labelled with the sign of this non-zero value. A loop in the graph can be assigned the sign which is the product of those of its edges. It turns out that a system can be transformed to a cooperative system in the way indicated if and only if the graph contains no negative loops. I will call a system of this type ‘cooperative up to sign reversal’. Such a system can be transformed by a permutation of the variables into one where $Df$ has diagonal blocks with non-negative entries and off-diagonal blocks with non-positive entries.
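The graph criterion translates into a simple algorithm: a positive edge forces $m_i=m_j$ and a negative edge forces $m_i\ne m_j$, so the required exponents exist exactly when this two-colouring problem is consistent. Here is a sketch of such a test (the function name and the example sign patterns are my own, chosen for illustration):

```python
# Decide whether a system with the given off-diagonal sign pattern is
# cooperative up to sign reversal.  edges is a list of triples (i, j, s)
# with s = +1 or -1 the sign label of the edge between nodes i and j.
# A positive edge forces m_i = m_j, a negative edge forces m_i != m_j;
# the exponents exist iff the signed graph contains no negative loop.
from collections import deque

def sign_reversal(n, edges):
    adj = [[] for _ in range(n)]
    for i, j, s in edges:
        adj[i].append((j, s))
        adj[j].append((i, s))
    m = [None] * n
    for start in range(n):
        if m[start] is not None:
            continue
        m[start] = 0
        queue = deque([start])
        while queue:                      # breadth-first two-colouring
            i = queue.popleft()
            for j, s in adj[i]:
                want = m[i] if s > 0 else 1 - m[i]
                if m[j] is None:
                    m[j] = want
                    queue.append(j)
                elif m[j] != want:
                    return None           # negative loop: no reversal exists
    return m                              # exponents m_i of (-1)^{m_i}

print(sign_reversal(3, [(0, 1, 1), (1, 2, -1), (0, 2, -1)]))  # [0, 0, 1]
print(sign_reversal(3, [(0, 1, 1), (1, 2, 1), (0, 2, -1)]))   # None
```

The first example has only positive loops and admits the reversal $y_2=-x_2$; the second contains a negative loop and admits none.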

If all off-diagonal elements of $Df$ are required to be non-positive we get the class of competitive systems. It should be noted that being competitive leads to fewer restrictions on the dynamics of a system (towards the future) than being cooperative. We can define a class of systems which are competitive up to sign reversal. An example of such a system is the basic model of virus dynamics. In that system the unknowns are the populations of uninfected cells $x$, infected cells $y$ and virus particles $v$. The transformation $y\mapsto -y$ makes it into a competitive system. In various models of virus dynamics including the immune response the target cells of the virus and the immune cells are in a predator-prey relationship and so these systems can be neither cooperative up to sign reversal nor competitive up to sign reversal.
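To make this concrete, here is the computation, assuming the standard form of the basic model of virus dynamics, $\dot x=\lambda-dx-\beta xv$, $\dot y=\beta xv-ay$, $\dot v=ky-uv$. The non-zero off-diagonal derivatives are $\frac{\partial \dot x}{\partial v}=-\beta x\le 0$, $\frac{\partial \dot y}{\partial x}=\beta v\ge 0$, $\frac{\partial \dot y}{\partial v}=\beta x\ge 0$ and $\frac{\partial \dot v}{\partial y}=k\ge 0$. After the transformation $\tilde y=-y$ the equations become $\dot x=\lambda-dx-\beta xv$, $\dot{\tilde y}=-\beta xv-a\tilde y$, $\dot v=-k\tilde y-uv$, with off-diagonal derivatives $-\beta x$, $-\beta v$, $-\beta x$ and $-k$, all non-positive, so the transformed system is indeed competitive.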

### SMB meeting in Montreal

July 27, 2019

This week I have been attending the SMB meeting in Montreal. There was a minisymposium on reaction networks and I gave a talk there on my work with Elisenda Feliu and Carsten Wiuf on multistability in the multiple futile cycle. There were also other talks related to that system. A direction which was new to me and was discussed in a talk by Elizabeth Gross was using a sophisticated technique from algebraic geometry (the mixed volume) to obtain an upper bound on the number of complex solutions of the equations for steady states for a reaction network (which is then of course also an upper bound for the number of positive real solutions). There were two talks about the dynamics of complex balanced reaction networks with diffusion. I have the impression that there remains a lot to be understood in that area.

At this conference the lecture rooms were usually big enough. An exception was the first session ‘mathematical oncology from bench to bedside’ which was completely overfilled and had to move to a different room. In that session there was a tremendous amount of enthusiasm. There is now a subgroup of the SMB for cancer modelling which seems to be very active with its own web page and blog. I should join that subgroup. Some of the speakers were so full of energy and so extrovert that it was a bit much for me. Nevertheless, it is clear that this is an exciting area and I would like to be part of it. There was also a session on cancer immunotherapy led by Vincent Lemaire from Genentech. He and two others described the mathematical modelling being done in cancer immunotherapy in three major pharmaceutical companies (Genentech, Pfizer and GlaxoSmithKline). These are very big models. Lemaire said that at the moment there are 2500 clinical trials going on for therapies related to PD-1. A recurring theme in these talks was the difference between mice and men.

This morning there was a talk by Hassan Jamaleddine concerning nanoparticles used to present antigen. These apparently stimulate Tregs more strongly than effector T cells and can thus be used as a therapy for autoimmune diseases. He showed some impressive pictures illustrating clearance of EAE using this technique. A central theme was interference between attempts to use the technique in animals with two autoimmune diseases in different organs, e.g. brain and liver. I was interested by the fact that for what he was doing steady state analysis was insufficient for understanding the biology.

This afternoon, the conference being over, I took the opportunity to visit Paul Francois at McGill, a visit which was well worthwhile.

### Book on cancer therapy using immune checkpoints, part 2

April 20, 2019

I have now finished reading the book of Graeber I wrote about in the last post. Here are some additional comments. Chapter 7 is about CAR T cells, a topic which I wrote about briefly here. I also mentioned in that post that there is a mathematical model related to this in the literature but I have not got around to studying it. Chapter 8 is a summary of the present state of cancer immunotherapy while the last chapter is mainly concerned with an individual case where PD-1 therapy showed a remarkable success but the patient, while still alive against all odds, is not cancer-free. It should not be forgotten that the impressive success stories in this field are accompanied by numerous failures and the book also reports at length on what these failures can look like for individual patients.

For me the subject of this book is the most exciting topic in medicine I know at the moment. It is very dynamic with numerous clinical studies taking place. It is suggested in the book that there is a lot of redundancy in this and correspondingly a lot of waste, financial and human. My dream is that progress in this area could be helped by more theoretical input. What do I mean by progress? There are three directions which occur to me. (1) Improving the proportion of patients with a given type of cancer who respond by modifying a therapy or replacing it by a different one. (2) Identifying in advance which patients with a given type of cancer will respond to which therapy, so as to allow rational choices between therapies in individual cases. (3) Identifying new types of cancer which are promising targets for a given therapy. By theoretical input I mean getting a better mechanistic understanding of the ways in which given therapies work and using that to obtain a better understanding of the conditions needed for success. The dream goes further with the hope that this theoretical input could be improved by the formulation and analysis of mathematical models.

What indications are there that this dream can lead to something real? I have already mentioned one mathematical model related to CAR T-cells. I have mentioned a mechanistic model for PD-1 by Mellman and collaborators here. This has been made into a mathematical model in a 2018 article by Arulraj and Barik (PLoS ONE 13(10): e0206232). There is a mathematical model for CTLA-4 by Jansson et al. (J. Immunol. 175, 1575) and it has been extended to model the effects of related immunotherapy in a 2018 paper of Ganesan et al. (BMC Med. Inform. Decis. Mak. 18,37).

I conclude by discussing one topic which is not mentioned in the book. In Mainz (where I live) there is a company called BioNTech with 850 employees whose business is cancer immunotherapy. The CEO of the company is Ugur Sahin, who is also a professor at the University of Mainz. I have heard a couple of talks by him, which were on a relatively general level. I did not really understand what his speciality is, only that it has something to do with mRNA. I now tried to learn some more about this and I realised that there is a relation to a topic mentioned in the book, that of cold and hot tumours. The most favourable situation for immune checkpoint therapies is where a tumour does in principle generate a strong immune response and has adapted to switch that off. Then the therapy can switch it back on. This is the case of a hot tumour, which exhibits a lot of mutations and where enough of these mutations are visible to the immune system. By contrast for a cold tumour, with no obvious mutations, there is no basis for the therapy to work on. The idea of the type of therapy being developed by Sahin and collaborators is as follows (my preliminary understanding). First analyse DNA and RNA from the tumour of a patient to identify existing mutations. Then try to determine by bioinformatic methods which of these mutations could be presented effectively by the MHC molecules of the patient. This leads to candidate proteins which might stimulate the immune system to attack the tumour cells. Now synthesise mRNA coding for those proteins and use it as a vaccine. The results of the first trials of this technique are reported in a 2017 paper in Nature 547, 222. It has 295 citations in Web of Science which indicates that it has attracted some attention.

### Book on cancer therapy using immune checkpoints

April 19, 2019

In a previous post I wrote about cancer immunotherapy and, in particular, about the relevance of immune checkpoints such as CTLA-4. For the scientific work leading to this therapy Jim Allison and Tasuku Honjo were awarded the Nobel Prize for Medicine in 2018. I am reading a book on this subject, ‘The Breakthrough. Immunotherapy and the Race to Cure Cancer’ by Charles Graeber. I did not feel in harmony with this book, due to some notable features which distanced it from me. One was the use of words and concepts which are typically American and whose meanings I as a European do not know. Of course I could go out and google them but I do not always feel like it. A similar problem arises from the fact that I belong to a different generation than the author. It is perhaps important to realise that the author is a journalist and not someone with a strong background in biology or medicine. One possible symptom of this is the occurrence of spelling mistakes or unconventional names (e.g. ‘raff’ instead of ‘raf’, ‘Mederex’ instead of ‘Medarex’ for the company which played an essential role in the development of antibodies for cancer immunotherapy, ‘dendrites’ instead of ‘dendritic cells’). As a consequence I think that if a biological statement made in the book looks particularly interesting it is worth trying to verify it independently. For example, the claim in one of the notes to Chapter 5 that penicillin is fatal to mice is false. This is not only of interest as a matter of scientific fact since it has also been used as an (unjustified) argument by protesters against medical experiments in animals. More details can be found here.

Chapter four is concerned with Jim Allison, the discoverer of the first type of cancer immunotherapy using CTLA-4. I find it interesting that in his research Allison was not driven by the wish to find a cancer therapy. He wanted to understand T cells and their activation. While doing so he discovered CTLA-4, as an important ‘off switch’ for T cells. It seems that from the beginning Allison liked to try certain experiments just to see what would happen. If what he found was more complicated than he expected he found that good. In any case, Allison did an experiment where mice with tumours were given antibodies to CTLA-4. This disables the off switch. The result was that while the tumours continued to grow in the untreated control mice they disappeared in the treated mice. The 100% response was so unexpected that Allison immediately repeated the experiment to rule out having made some mistake. The result was the same.

Chapter six comes back to the therapy with PD-L1 with which the book started. The treatments with antibodies against PD-1 and PD-L1 have major advantages compared to those with CTLA-4. The success rate with metastatic melanoma can exceed 50% and the side effects are much less serious. The latter aspect has to do with the fact that in this case the mode of action is less to activate T cells in general than to sustain the activation of cells which are already attacking the tumour. This does not mean that treatments targeting CTLA-4 have been superseded. For certain types of cancer they can be better than those targeting PD-1 or PD-L1 and combinations may be better than either type of therapy alone. For the second class of drugs getting them on the market was also not easy. In the book it is described how this worked in the case of a drug developed by Genentech. It had to be decided whether the company wanted to develop this drug or a more conventional cancer therapy. The first was more risky but promised a more fundamental advance if successful. There was a showdown between the oncologists and the immunologists. After a discussion which lasted several hours the person responsible for the decision said ‘This is enough, we are moving forward’ and chose the risky alternative.

This post has already got quite long and it is time to break it off here. What I have described already covers the basic discussion in the book of the therapies using CTLA-4 and PD-1 or PD-L1. I will leave everything else for another time.

### Herd immunity

February 14, 2019

I have a long term interest in examples where mathematics has contributed to medicine. Last week I heard a talk at a meeting of the Mainzer Medizinische Gesellschaft about vaccination. The speaker was Markus Knuf, director of the pediatric section of the Helios Clinic in Wiesbaden. In the course of his talk he mentioned the concept of ‘herd immunity’ several times. I was familiar with this concept and I have even mentioned it briefly in some of my lectures and seminars. It never occurred to me that in fact this is an excellent example of a case where medical understanding has benefited from mathematical considerations. Suppose we have a population of individuals who are susceptible to a particular disease. Suppose further that there is an endemic state, i.e. that the disease persists in the population at a constant non-zero level. It is immediately plausible that if we vaccinate a certain proportion $\alpha$ of the population against the disease then the proportion of the population suffering from the disease will be lower than it would have been without vaccination. What is less obvious is that if $\alpha$ exceeds a certain threshold $\alpha_*$ then the constant level will be zero. This is the phenomenon of herd immunity. The value of $\alpha_*$ depends on how infectious the disease is. A well-known example with a relatively high value is measles, where $\alpha_*$ is about $0.95$. In other words, if you want to get rid of measles from a population then it is necessary to vaccinate at least 95% of the population. It occurs to me that this idea is very schematic since measles does not occur at a constant level. Instead it occurs in large waves. This idea is nevertheless one which is useful when making public health decisions. Perhaps a better way of looking at it is to think of the endemic state as a steady state of a dynamical model.
The important thing is that this state is asymptotically stable in the dynamic set-up, so that the population returns to it after small perturbations (infected individuals coming in from somewhere else). It just so happens that in the usual mathematical models for this type of phenomenon whenever a positive steady state (i.e. one where all populations are positive) exists it is asymptotically stable. Thus the distinction between the steady state and dynamical pictures is not so important. After I started writing this post I came across another post on the same subject by Mark Chu-Carroll. I am not sad that he beat me to it. The two posts give different approaches to the same subject and I think it is good if this topic gets as much publicity as possible.
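In the simplest SIR-type models the threshold has an explicit form, $\alpha_*=1-1/R_0$, where $R_0$ is the basic reproductive ratio discussed below. A minimal illustration in code (the $R_0$ values are rough textbook figures, used only for illustration and not to be taken as authoritative):

```python
# Herd immunity threshold alpha* = 1 - 1/R0 from the standard SIR picture.
# The R0 values below are rough textbook figures, for illustration only.

def herd_immunity_threshold(r0):
    if r0 <= 1.0:
        return 0.0   # the disease dies out even without vaccination
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 20.0), ("influenza", 2.0)]:
    print(disease, herd_immunity_threshold(r0))
# for measles this gives 0.95, i.e. the 95% figure quoted above
```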

Coming back to the talk I mentioned, a valuable aspect of it was that the speaker could report on things from his everyday experience in the clinic. This makes things much more immediate than if someone is talking about the subject on an abstract level. Let me give an example. He showed a video of a small boy with an extremely persistent cough. (He had permission from the child’s parents to use this video for the talk.) The birth was a bit premature but the boy left the hospital two weeks later in good health. A few weeks after that he returned with the cough. It turned out that he had whooping cough which he had caught from an adult (non-vaccinated) relative. The man had had a bad cough but the cause was not realised and it was attributed to side effects of a drug he was taking for a heart problem. The doctors did everything to save the boy’s life but the infection soon proved fatal. It is important to realize that this is not an absolutely exceptional case but a scenario which happens regularly. It brings home what getting vaccinated (or failing to do so) really means. Of course an example like this has no statistical significance but it can nevertheless help to make people think.

Let me say some more about the mathematics of this situation. A common type of model is the SIR model. The dependent variables are $S$, the number of individuals who are susceptible to infection by the disease, $I$, the number of individuals who are infected (or infectious, this model ignores the incubation time) and $R$, the number of individuals who have recovered from the disease and are therefore immune. These three quantities depend on time and satisfy a system of ODE containing a number of parameters. There is a certain combination of these parameters, usually called the basic reproductive rate (or ratio) and denoted by $R_0$, whose value determines the outcome of the dynamics. If $R_0\le 1$ the infection dies out – the solution converges to a steady state on the boundary of the state space where $I=0$. If, on the other hand, $R_0>1$ there exists a positive steady state, an endemic equilibrium. The stability of this steady state can be examined by linearizing about it. In fact it is always stable. Interestingly, more is true. When the endemic steady state exists it is globally asymptotically stable. In other words all solutions with positive initial data converge to that steady state at late times. For a proof of this see a paper by Korobeinikov and Wake (Appl. Math. Lett. 15, 955). They use a Lyapunov function to do so. At this point it is appropriate to mention that my understanding of these things has been improved by the hard work of Irena Vogel, who recently wrote her MSc thesis on the subject of Lyapunov functions in population models under my supervision.
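As an illustration, here is a minimal simulation of an SIR model with vital dynamics (births and deaths at rate $\mu$), the variant which possesses an endemic steady state. The parameter values are invented for illustration and the explicit Euler scheme is only meant as a sketch, not as a serious numerical method:

```python
# SIR model with vital dynamics, total population normalised to 1:
#   S' = mu - beta*S*I - mu*S
#   I' = beta*S*I - (gamma + mu)*I
#   R' = gamma*I - mu*R
# with R0 = beta/(gamma + mu).  For R0 > 1 the solution converges to the
# endemic equilibrium S* = 1/R0, I* = (mu/beta)*(R0 - 1).
# The parameter values are illustrative, not fitted to any real disease.

mu, gamma, beta = 1.0, 1.0, 6.0      # gives R0 = 3
S, I, R = 0.9, 0.1, 0.0
dt, steps = 0.001, 30000             # integrate to t = 30 by explicit Euler
for _ in range(steps):
    dS = mu - beta * S * I - mu * S
    dI = beta * S * I - (gamma + mu) * I
    dR = gamma * I - mu * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

R0 = beta / (gamma + mu)
print(S, I, R)                        # close to the endemic equilibrium
print(1.0 / R0, (mu / beta) * (R0 - 1.0))
```

With these values the endemic equilibrium is $(1/3,1/3,1/3)$ and the simulated solution spirals into it, consistent with the global stability result of Korobeinikov and Wake.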

### T cell triggering

April 6, 2018

When reading immunology textbooks I had the feeling that one important point was not explained. The T cell receptor is almost entirely outside the cell and so when it encounters its antigen it cannot transmit this information into the cytosol the way a transmembrane receptor does. But since the activation of the cell involves the phosphorylation of the cytoplasmic tails of proteins associated to the receptor (CD3 and the $\zeta$-chains) the information must get through somehow. So how does this work? This process, which precedes the events relevant to the models for T cell activation I discussed here, is referred to as T cell triggering. I had an idea about how this process could work. If the T cell receptor and the coreceptor CD8 both bind to a peptide-MHC complex they are brought into proximity. As a consequence CD3 and the $\zeta$-chains are then close to CD8. On the other hand the kinase Lck is associated to CD8. Thus Lck is brought into proximity with the proteins of the T cell receptor complex and can phosphorylate them. I had never seen this described in detail in the literature. Now I found a review article by van der Merwe and Dushek (Nature Reviews in Immunology 11, 47) which explains this mechanism (and gives it a name, co-receptor heterodimerization) together with a number of other alternatives. It is mentioned that this mechanism alone does not suffice to explain T cell triggering since there are experiments where T cells lacking CD4 and CD8 were triggered. The authors of this paper do not commit themselves to one mechanism but instead suggest that a combination of mechanisms may be necessary.

I will describe one other mechanism which I find particularly interesting and which I already mentioned briefly in a previous post. It is called kinetic segregation and was proposed by Davis and van der Merwe. One way of imagining the state of a T cell before activation is that Lck is somehow inactive or that the phosphorylation sites relevant to activation are not accessible to it. A different picture is that of a dynamic balance between kinase and phosphatase, between Lck and CD45. Both of these enzymes are active and pushing the system in opposite directions. In a cell which has not been activated CD45 wins this struggle. When the TCR binds to an antigen on an antigen-presenting cell the membranes of the cells are brought together and there is no longer room for the bulky extracellular domain of CD45. Thus the phosphatase is pushed away from the TCR complex and Lck can take control. This could also represent a plausible mechanism for the function of certain artificial constructs for activating T cells, as discussed briefly here.

This mechanism may be plausible but what direct evidence is there that it really works? Some work which is very interesting in this context is due to James and Vale (Nature 487, 64). The more general basic issue is how to identify which molecules are involved in a particular biochemical process and which are not. The method used by these authors is to introduce selected molecules (including T cell receptors) into a non-immune cell and to see under what circumstances triggering can occur. Different combinations of molecules can be used in different experiments. With these techniques it is shown that the kinetic segregation mechanism can work and more is learned about the details of how it might work.

### SMB conference in Utah

July 21, 2017

Subgroups are now being set up within the Society for Mathematical Biology to concentrate on particular subjects. One of these, the Immunobiology and Infection Subgroup, had its inaugural meeting this week and of course I went. There I and a number of other people learned a basic immunological fact which we found very surprising. It is well known that the thymus decreases in size with age so that presumably our capacity to produce new T cells is constantly decreasing. The obvious assumption, which I had made, is that this is a fairly passive process related to the fact that many systems in our bodies run down with age. We learned from Johnna Barnaby that the situation may be very different. It may be that the decrease in the size of the thymus is due to active repression by sexual hormones. She is involved in work on therapy for prostate cancer and said that in men with prostate cancer who are given drugs to reduce their testosterone levels the thymus has been found to increase in size.

There were some recurrent themes at the conference. One was oncolytic viruses. These are genetically engineered viruses intended to destroy cancer cells. In modelling these it is common to use extensions of the fundamental model of virus dynamics which is very familiar to me. For instance Dominik Wodarz talked about some ODE models for oncolytic viruses in vitro where the inclusion of interferon production in the model leads to bistability. (In response to a question from me he said that it is a theorem that without the interferon bistability is impossible.) I was pleased to see how, more generally, a lot of people were using small ODE models making real contact with applications. Another recurrent theme was that there are two broad classes of macrophages which may be favourable or unfavourable to tumour growth. I should find out more about that. Naveen Vaidya talked about the idea that macrophages in the brain may be a refuge for HIV. Actually, even after talking to him I am not sure whether it should not rather be microglia than macrophages. James Moore talked about the question of how T cells are eliminated in the thymus or become Tregs. His talk was more mathematical than biological but it has underlined once again that I want to understand more about positive and negative selection in the thymus and the related production of Tregs.

On a quite different subject there were two plenary talks related to coral reefs. A theme which is common in the media is that of the damage to coral due to climate change. Of course this is dominated by politics and usually not accompanied by any scientific information on what is going on. The talk of Marissa Blaskett was an excellent antidote to this kind of thing and now I have really understood something about the subject. The other talk, by Mimi Koehl, was less about the reefs themselves than about the way in which the larvae of snails which graze on the coral colonize the reef. I found the presentation very impressive because it started with a subject which seemed impossibly complicated and showed how scientific investigation, in particular mathematical modelling, can lead to understanding. The subject was the interaction of microscopic swimming organisms with the highly turbulent flow of sea water around the reefs. Investigating this involved among other things the following. Measuring the turbulent flow around the reef using Doppler velocimetry. Reconstructing this flow in a wave tunnel containing an artificial reef in order to study the small-scale structure of the transport of chemical substances by the flow. Going out and checking the results by following dye put into the actual reef. And many other things. Last but not least there was the mathematical modelling. The speaker is a biologist and she framed her talk with slides showing how many (most?) biologists hate mathematical modelling and how she loves it.

### Mathematical models for T cell activation

May 2, 2017

The proper functioning of our immune system is heavily dependent on the ability of T cells to identify foreign substances and take appropriate action. For this they need to be able to distinguish foreign substances (non-self) from those belonging to the host (self). In the first case the T cell should be activated, in the second not. The process of activation is very complicated and takes days. On the other hand it seems that an important part of the distinction between self and non-self only takes a few seconds. A T cell must scan the surface of huge numbers of dendritic cells for the presence of the antigen it is specific for and it can only spare very little time for each one. Within that time the cell must register that there is something relevant there and be induced to stay longer, instead of continuing with its search.

A mathematical model for the initial stages of T cell activation (the first few minutes) was formulated and studied by Altan-Bonnet and Germain (PLoS Biol. 3(11), e356). They were able to use it successfully to make experimental predictions, which they could then confirm. The predictions were made with the help of numerical simulations. From the point of view of the mathematician a disadvantage of this model is its great complexity. It is a system of more than 250 ordinary differential equations with numerous parameters. It is difficult even to write the definition of the model on paper or to describe it completely in words. It is clear that such a system is difficult to study analytically. Later Francois et al. (PNAS 110, E888) introduced a radically simplified model for the same biological situation which seemed to show a comparable degree of effectiveness to the original model in fitting the experimental data. In fact the simplicity of the model even led to some new successful experimental predictions. (Altan-Bonnet was among the authors of the second paper.) This is the kind of situation I enjoy, where a relatively simple mathematical model suffices for interesting biological applications.

In their paper Francois et al. not only do simulations but also carry out interesting analytical calculations for their model. On the other hand they do not follow the direction of attempting to use these calculations to formulate and prove mathematical theorems about the solutions of the model. Together with Eduardo Sontag I have now written a paper where we obtain some rigorous results about the solutions of this system. In the original paper the only situation considered is that where the system has a unique steady state and any other solution converges to that steady state at late times. We have proved that there are parameters for which there exist three steady states. A numerical study of these indicates that two of them are stable. A parameter in the system is the number $N$ of phosphorylation sites on the T cell receptor complex which are included in the model. The results just mentioned on steady states were obtained for $N=3$.

An object of key importance is the response function. The variable which measures the degree of activation of the T cell in this model is the concentration $C_N$ of the maximally phosphorylated state of the T cell receptor. The response function describes how $C_N$ depends on the important input variables of the system. These are the concentration $L$ of the ligand and the constant $\nu$ describing the rate at which the ligand unbinds from the T cell receptor. A widespread idea (the lifetime dogma) is that the quantity $\nu^{-1}$, the dissociation time, determines how strongly an antigen signals to a T cell. It might naively be thought that the response should be an increasing function of $L$ (the more antigen present the stronger the stimulation) and a decreasing function of $\nu$ (the longer the binding the stronger the stimulation). However both theoretical and experimental results lead to the conclusion that this is not always the case.
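To make the naive expectation concrete, here is a back-of-the-envelope sketch based on classical kinetic proofreading without feedback. This is a deliberate simplification, not the model of Francois et al.: the forward rate $\phi$ and the linear dependence on $L$ are assumptions made only for illustration. If each of $N$ phosphorylation steps proceeds at rate $\phi$ and unbinding at rate $\nu$ resets the chain, then a complex completes each step before unbinding with probability $\phi/(\phi+\nu)$, so the output scales like $L(\phi/(\phi+\nu))^N$.

```python
# Classical kinetic proofreading WITHOUT feedback (illustrative only; the
# rate phi and the linear dependence on L are assumptions, not taken from
# the Francois et al. model).

def c_n(L, nu, phi=0.1, N=3):
    """Steady-state output of an N-step proofreading chain: binding is
    proportional to the ligand concentration L, and each phosphorylation
    step is completed before unbinding with probability phi/(phi + nu)."""
    return L * (phi / (phi + nu)) ** N

# Monotonicity of the naive picture: more ligand, or slower unbinding
# (smaller nu, i.e. longer dissociation time), always gives more output.
low_nu, high_nu = c_n(L=10.0, nu=0.05), c_n(L=10.0, nu=0.5)
few, many = c_n(L=10.0, nu=0.1), c_n(L=20.0, nu=0.1)
```

In this simple chain the output is always increasing in $L$ and decreasing in $\nu$; it is exactly this monotonicity which the feedback in the full model can break.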

We proved analytically that for certain values of the parameters $C_N$ is a decreasing function of $L$ and an increasing function of $\nu$. Since these rigorous results give rather poor information on the concrete values of the parameters leading to this behaviour and on the global form of the function we complemented this analytical work by simulations. These show how $C_N$ can have a maximum as a function of $\nu$ within this model and that as a function of $L$ it can have the following form in a log-log plot. For $L$ small the graph is a straight line of slope one. As $L$ increases it switches to being a straight line of slope $1-N/2$ and for still larger values it once again becomes a line of slope one, shifted with respect to the original one. Finally the curve levels out as it must do, since the function is bounded. The proofs do not make heavy use of general theorems and are mostly based on doing certain estimates by hand.

All of these results were of the general form ‘there exist parameter values for the system such that $X$ happens’. Of course this is just a first step. In the future we would like to understand better to what extent biologically motivated restrictions on the parameters lead to restrictions on the dynamical behaviour.