Archive for the ‘immunology’ Category

Mathematical models for T cell activation

May 2, 2017

The proper functioning of our immune system depends heavily on the ability of T cells to identify foreign substances and take appropriate action. For this they need to be able to distinguish foreign substances (non-self) from those belonging to the host (self). In the first case the T cell should be activated, in the second not. The process of activation is very complicated and takes days. On the other hand it seems that an important part of the distinction between self and non-self takes only a few seconds. A T cell must scan the surfaces of huge numbers of dendritic cells for the presence of the antigen it is specific for, and it can spare only very little time for each one. Within that time the cell must register that there is something relevant there and be induced to stay longer, instead of continuing with its search.

A mathematical model for the initial stages of T cell activation (the first few minutes) was formulated and studied by Altan-Bonnet and Germain (PLoS Biol. 3(11), e356). They were able to use it successfully to make experimental predictions, which they could then confirm. The predictions were made with the help of numerical simulations. From the point of view of the mathematician a disadvantage of this model is its great complexity. It is a system of more than 250 ordinary differential equations with numerous parameters. It is difficult even to write the definition of the model on paper or to describe it completely in words. It is clear that such a system is difficult to study analytically. Later François et al. (PNAS 110, E888) introduced a radically simplified model for the same biological situation which seemed to show a comparable degree of effectiveness to the original model in fitting the experimental data. In fact the simplicity of the model even led to some new successful experimental predictions. (Altan-Bonnet was among the authors of the second paper.) This is the kind of situation I enjoy, where a relatively simple mathematical model suffices for interesting biological applications.

In their paper François et al. not only do simulations but also carry out interesting analytical calculations for their model. On the other hand they do not go in the direction of attempting to use these calculations to formulate and prove mathematical theorems about the solutions of the model. Together with Eduardo Sontag we have now written a paper where we obtain some rigorous results about the solutions of this system. In the original paper the only situation considered is that where the system has a unique steady state and every other solution converges to that steady state at late times. We have proved that there are parameters for which there exist three steady states. A numerical study of these indicates that two of them are stable. A parameter in the system is the number N of phosphorylation sites on the T cell receptor complex which are included in the model. The results just mentioned on steady states were obtained for N=3.

An object of key importance is the response function. The variable which measures the degree of activation of the T cell in this model is the concentration C_N of the maximally phosphorylated state of the T cell receptor. The response function describes how C_N depends on the important input variables of the system. These are the concentration L of the ligand and the constant \nu describing the rate at which the ligand unbinds from the T cell receptor. A widespread idea (the lifetime dogma) is that the quantity \nu^{-1}, the dissociation time, determines how strongly an antigen signals to a T cell. It might naively be thought that the response should be an increasing function of L (the more antigen present the stronger the stimulation) and a decreasing function of \nu (the longer the binding the stronger the stimulation). However both theoretical and experimental results lead to the conclusion that this is not always the case.

We proved analytically that for certain values of the parameters C_N is a decreasing function of L and an increasing function of \nu. Since these rigorous results give rather poor information on the concrete values of the parameters leading to this behaviour and on the global form of the function, we complemented the analytical work by simulations. These show how C_N can have a maximum as a function of \nu within this model and that as a function of L it can have the following form in a log-log plot. For L small the graph is a straight line of slope one. As L increases it switches to being a straight line of slope 1-N/2 and for still larger values it once again becomes a line of slope one, shifted with respect to the original one. Finally the curve levels out, as it must do, since the function is bounded. The proofs do not make heavy use of general theorems and are mostly based on doing certain estimates by hand.
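To make the shape of the response function a little more concrete, here is a small Python sketch of a bare kinetic proofreading chain of the general type underlying these models. I emphasize that it is not the model of François et al.: there is no feedback module, all parameter values are invented for illustration, and so it only reproduces the slope-one regime at small L, the saturation at large L and the decrease with \nu; the intermediate regime of slope 1-N/2 needs the adaptive part which is absent here.

```python
# Bare kinetic proofreading chain (no feedback): ligand L binds receptor
# at rate kappa, the complex is phosphorylated through states C_0..C_N at
# rate phi, and the ligand unbinds from every state at rate nu.
# All parameter values below are illustrative assumptions.

def c_n_steady_state(L, nu, N=3, kappa=1e-4, phi=0.1, R_total=3e4):
    """Steady-state level of the maximally phosphorylated complex C_N,
    with receptor conservation but without any feedback."""
    x = phi / (phi + nu)                      # survival per proofreading step
    # total complex per unit C_0: states 0..N-1 plus the terminal state C_N
    s = sum(x**i for i in range(N)) + x**(N - 1) * phi / nu
    c0 = kappa * L * R_total / (phi + nu + kappa * L * s)
    return c0 * x**(N - 1) * phi / nu

# C_N grows linearly in L at low ligand, saturates at high ligand,
# and decreases as the unbinding rate nu increases.
low = c_n_steady_state(1.0, 0.1)
mid = c_n_steady_state(10.0, 0.1)
```

This already shows the two outer regimes of the log-log plot; the point of the full model is precisely the extra structure in between.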

All of these results were of the general form ‘there exist parameter values for the system such that X happens’. Of course this is just a first step. In the future we would like to understand better to what extent biologically motivated restrictions on the parameters lead to restrictions on the dynamical behaviour.

New hope for primary progressive multiple sclerosis?

April 12, 2017

Multiple sclerosis is generally classified into three forms. The relapsing-remitting (RR) form is the most common initial form. It is characterized by periods when the symptoms get much worse, separated by periods when they get better. The second form is the primary progressive form, where the symptoms slowly and steadily get worse. It is generally thought to have a worse prognosis than the relapsing-remitting form. In many cases the relapsing-remitting form converts to a progressive form at some time. This is then the secondary progressive form. In the meantime there is a wide variety of drugs on the market which are approved for the treatment of the RR form of MS. They cannot stop the disease but they can slow its progression. Until very recently there was no drug approved for the treatment of progressive MS. This has now changed with the approval of ocrelizumab, an antibody against the molecule CD20, which is found on the surface of B cells. It has been approved for both the RR form and some cases of the progressive form of MS.

Ocrelizumab acts by causing B cells to be killed. It has been seen to have strong positive effects in combating MS in some cases. This emphasizes the fact that T cells, usually regarded as the main culprit causing damage during MS, are not alone. B cells also seem to play an important role, although exactly what role is not so clear. There previously existed an antibody against CD20, rituximab, which was used in the therapy of diseases other than MS. Ocrelizumab has had problematic side effects, with a high frequency of infections and a slightly increased cancer risk. For this reason it has been abandoned as a therapy for rheumatoid arthritis. On the other hand the trial for MS had fewer problems with side effects.

One reason not to be too euphoric about this first treatment for progressive MS is the following. It has been shown to be effective for patients in the first few years of illness and for those where there are clear signs of inflammatory activity in MRI scans. This suggests a certain suspicion to me. The different types of MS are not clearly demarcated. Strong activity in MRI scans is typical of the RR form. So I wonder if the patients in whom this drug is effective are perhaps individuals with an atypical RR form, where the disease activity just does not cross the threshold to becoming manifest on the symptomatic level for a certain time. This says nothing against the usefulness of the drug in this class of patients but it might be a sign that its applicability will not extend to a wider class of patients with the progressive form in the future. It also suggests caution in hoping that the role of B cells in this therapy might help to understand the mechanism of progressive MS.

Conference on cancer immunotherapy at EMBL

February 5, 2017

I just came back from a conference on cancer immunotherapy at EMBL in Heidelberg. It was very interesting for me to get an inside view of what is happening in this field and to learn what some of the hot topics are. One of the speakers was Patrick Baeuerle, who talked about a molecular construct which he introduced, called BiTE (bispecific T cell engager). It is the basis of a drug called blinatumomab. This is an antibody construct which binds both CD3 (characteristic of T cells) and CD19 (characteristic of B cells), so that these cells are brought into proximity. In its therapeutic use in treating acute lymphoblastic leukemia, the B cell is a cancer cell. More generally, similar constructs could be made so as to bring T cells into proximity with other cancer cells. The idea is that the T cell should kill the cancer cell, and in that context it is natural to think of cytotoxic T cells. It was not clear to me how the T cell is activated, since the T cell receptor is not engaged. I took the opportunity to ask Baeuerle about this during a coffee break and he told me that proximity alone is enough to activate T cells. This can work not only for CD8 T cells but also for CD4 cells and even regulatory T cells. He presented a picture of a T cell as always being ready to produce toxic substances and just needing a signal to actually do so. Under normal circumstances T cells search the surfaces of other cells for antigens and do not linger long close to any one cell unless they find their antigen. If they do stay longer near another cell for some reason then this can be interpreted as a danger sign and the T cell reacts. Baeuerle, who started his career as a biochemist, was CEO of a company called Micromet, whose key product was what became blinatumomab. The company was bought by Amgen for more than a billion dollars and Baeuerle went with it to Amgen. When blinatumomab came on the market it was the most expensive cancer drug ever up to that time. Later Baeuerle moved to a venture capital firm called MPM Capital, which is where he is now. In his previous life as a biochemical researcher Baeuerle did fundamental work on NF\kappaB with David Baltimore.

In a previous post I mentioned a video by Ira Mellman. At the conference I had the opportunity to hear him live. One thing which became clear to me at the conference is the extent to which, among the checkpoint inhibitor drugs, anti-PD-1 is superior to anti-CTLA-4. It is successful in a much higher proportion of patients. I never thought much about PD-1 before. It is a receptor which is present on the surface of T cells after they have been activated and it can be stimulated by its ligand PD-L1, leading to the T cell being switched off. But how does this switching-off process work? The T cell is normally switched on by the engagement of the T cell receptor and a second signal from CD28. In his talk Mellman explained that the switching off due to PD-1 is not due to signalling from the T cell receptor being stopped. Instead what happens is that PD-1 activates the phosphatase SHP2, which dephosphorylates and thus deactivates CD28. Even a very short deactivation of CD28 is enough to turn off the T cell. In thinking about mathematical models for T cell activation I had thought that there might be a link to checkpoint inhibitors. Now it looks as if models for T cell activation are not of direct relevance there and that instead it would be necessary to model CD28.

I learned some more things about viruses and cancer. One is that the Epstein-Barr virus, famous for causing Burkitt’s lymphoma, also causes other types of cancer, in particular other types of lymphoma. Another is that viruses are being used in a therapeutic way. I had heard of oncolytic viruses before but I had never really paid attention. In one talk the speaker showed a picture of a young African man who had been cured of Burkitt’s lymphoma by … getting measles. This gave rise to the idea that viruses can sometimes preferentially kill cancer cells and that they can perhaps be engineered so as to do so more often. In particular measles is a candidate. In that case there is an established safe vaccination and the idea is to vaccinate with genetically modified measles virus to fight certain types of cancer.

In going to this conference my main aim was to improve my background in aspects of biology and medicine which could be of indirect use for my mathematical work. In fact, to my surprise, I met one of the authors of a paper on T cell activation which is closely related to mathematical topics I am interested in. This was Philipp Kruger, who is in the group of Omer Dushek in Oxford. I talked to him about the question of what is really the mechanism by which signals actually cross the membrane of T cells. One possibility he mentioned was a conformational change in CD3. Another, which I had already come across, is that it could have to do with a mechanical effect, by which the binding of a certain molecule brings the cell membranes of two interacting cells together and expels large phosphatases like CD45 from a certain region. In the paper of his which I had looked at, signalling in T cells is studied with the help of CAR T cells, which have an artificial analogue of the T cell receptor which may have a much higher affinity than the natural receptor. In his poster he described a new project looking at the effect of using different co-receptors in CAR T cells (not just CD28). In any case CAR T cells were a subject which frequently came up at the conference. Something which was in the air was that this therapy may be associated with neurotoxicity in some cases, but I did not learn any details.

As far as I can see, the biggest issue with all these techniques is the following. They can be dramatically successful, taking patients from the point of death to long-term survival. On the other hand they only work in a subset of patients (say, 40% at most) and nobody understands what success depends on. I see a great need for a better theoretical understanding. I can understand that when someone has what looks like a good idea in this area they quickly look for a drug company to do a clinical trial with it. These things can save lives.  On the other hand it is important to ask whether investing more time in obtaining a better understanding of underlying mechanisms might not lead to better results in the long run.

The importance of dendritic cells

October 30, 2016

I just realized that something I wrote in a previous post does not make logical sense. This was not just due to a gap in my exposition but to a gap in my understanding. I now want to correct it. A good source for the correct story is a video by Ira Mellman of Genentech. I first recall some standard things about antigen presentation. In this process peptides are presented on the surface of cells by MHC molecules, which are of two types, I and II. MHC Class I molecules are found on essentially all cells and can present proteins coming from viruses infecting the cell concerned. MHC Class II molecules are found only on special cells called professional antigen presenting cells. These are macrophages, B cells and dendritic cells. The champions in antigen presentation are the dendritic cells and those are the ones I will be talking about here. In order for a T cell to be activated it needs two signals. The first comes through the T cell receptor interacting with the peptide-MHC complex on an APC. The second comes from CD28 on the T cell surface interacting with B7.1 and B7.2 on the APC.

Consider now an ordinary cell, not an APC, which is infected with a virus. This could, for instance, be an epithelial cell infected with an influenza virus. This cell will present peptides derived from the virus with MHC Class I molecules. These can be recognized by activated {\rm CD8}^+ T cells, which can then kill the epithelial cell and put an end to the viral reproduction in that cell. The way I put it in the previous post, it looked as if the T cell could be activated by the antigen presented on the target cell with the help of CD28 stimulation. The problem is that the cell presenting the antigen in this case is an amateur. It has no B7.1 or B7.2 and so cannot signal through CD28. The real story is more complicated. The fact is that dendritic cells can also present antigen on MHC Class I, including peptides which do not come from proteins of the dendritic cell itself. A possible mechanism, explained in the video of Mellman (I do not know if it is certain that this is the mechanism, or whether it is the only one), is that a cell infected by a virus is ingested by a dendritic cell by phagocytosis, so that proteins which were outside the dendritic cell are now inside and can be brought into the pathway of MHC Class I presentation. This process is known as cross presentation. Dendritic cells also have tools of the innate immune system, such as toll-like receptors, at their disposal. When they recognise the presence of a virus by these means they upregulate B7.1 and B7.2 and are then in a position to activate {\rm CD8}^+ T cells. Note that in this case the virus will be inside the dendritic cell but not infecting it. There are viruses which use dendritic cells for their own purposes, reproducing there or hitching a lift to the lymph nodes where they can infect their favourite cells. An example is HIV. The main receptor used by this virus to enter cells is CD4 and this is present not only on T cells but also on dendritic cells.
Another interesting side issue is that dendritic cells can not only activate T cells but also influence the differentiation of these cells into various types. The reason is that the detection instruments of the dendritic cell not only recognise that a pathogen is there but can also classify it to some extent (Mellman talks about a bar code). Based on this information the dendritic cell secretes various cytokines which influence the differentiation process. For instance they can influence whether a T-helper cell becomes of type Th1 or Th2. This is related to work which I did quite a long time ago on an ODE system modelling the interactions of T cells and macrophages. In view of what I just said it might be interesting to study an inhomogeneous version of this system. The idea is to include an external input of cytokines coming from dendritic cells. In fact the unknowns in the system are not the concentrations of cytokines but the populations of cells. Thus it would be appropriate to introduce an inhomogeneous contribution into the terms describing the production of different types of cells.
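To indicate what I have in mind, here is a toy sketch in Python. It is not the actual system from my earlier work: it is a generic pair of cross-inhibiting populations with invented rates, and the hypothetical function input_th1(t) stands for the dendritic-cell-driven contribution to the production term of one population.

```python
# Toy two-population system with an inhomogeneous input: x and y are
# Th1-like and Th2-like cell populations with constant production,
# linear decay and mutual inhibition. All rates are invented.

def step(x, y, t, dt, input_th1):
    """One forward-Euler step; input_th1(t) is the hypothetical external
    dendritic-cell input added to the production term of x."""
    dx = 1.0 + input_th1(t) - 0.5 * x - 0.2 * x * y
    dy = 1.0 - 0.5 * y - 0.2 * x * y
    return x + dt * dx, y + dt * dy

def run(input_th1, T=50.0, dt=0.01):
    x = y = t = 0.0
    while t < T:
        x, y = step(x, y, t, dt, input_th1)
        t += dt
    return x, y

x0, y0 = run(lambda t: 0.0)   # no input: the symmetric state
x1, y1 = run(lambda t: 0.5)   # constant input biases the outcome towards x
```

The point of the sketch is only that an external cytokine signal enters as an inhomogeneous term in the production of cells, and even a constant input breaks the symmetry between the two fates.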

Modern cancer therapies

October 28, 2016

I find the subject of cancer therapies fascinating. My particular interest is in the possibility of obtaining new insights by modelling and in the role mathematics can play in this endeavour. I have heard many talks related to these subjects, both live and online. I was stimulated to write this post by a video of Martin McMahon, then at UCSF. It made me want to systematize some of the knowledge I have obtained from that video (which is already a few years old) and from other sources. First I should fix my terminology. I use the term ‘modern cancer therapies’ to distinguish a certain group of treatments from what I will call ‘classical cancer therapies’. The latter are still of central importance today, and the characteristic feature of those I am calling ‘modern’ here is that they have only been developed in the last few years. I start by reviewing the ‘classical’ therapies: surgery, radiotherapy and chemotherapy. Surgery can be very successful when it works. The aim is to remove all the cancerous cells. There is a tension between removing too little (so that a few malignant cells could remain and restart the tumour) and too much (which could mean too much damage to healthy tissues). A particularly difficult case is that of glioma, where it is impossible to determine the extent of the tumour by imaging techniques alone. An alternative is provided by the work of Kristin Swanson, which I mentioned in a previous post. She has developed techniques of using a mathematical model of the tumour (with reaction-diffusion equations) to predict its extent. The results of a simulation, specific to a particular patient, are given to the surgeon to guide the operation. In the case of radiotherapy, radiation is used to kill cancer cells while trying to avoid killing too many healthy cells. A problematic aspect is that the cells are killed by damaging their DNA and this kind of damage may lead to the development of new cancers. In chemotherapy a chemical substance (a poison) is used with the same basic aim as in radiotherapy. The substance is chosen to have the greatest effect on cells which divide frequently. This is the case for cancer cells but unfortunately they are not the only ones. A problem shared by radiotherapy and chemotherapy is their poor specificity.
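The models of the type Swanson uses are reaction-diffusion equations of Fisher-KPP type, schematically u_t = Du_{xx}+\rho u(1-u) for the tumour cell density u. Here is a minimal one-dimensional finite-difference sketch, with made-up parameters rather than anything patient-specific, which shows the characteristic travelling invasion front.

```python
# Explicit finite-difference scheme for the Fisher-KPP equation
# u_t = D u_xx + rho*u*(1-u) on [0, 20] with no-flux boundaries.
# D, rho, the grid and the initial data are illustrative assumptions.

def simulate(n=200, dx=0.1, dt=0.001, D=1.0, rho=1.0, steps=5000):
    u = [1.0 if i < 5 else 0.0 for i in range(n)]   # tumour seeded at the left end
    for _ in range(steps):
        new = u[:]
        for i in range(1, n - 1):
            lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx**2
            new[i] = u[i] + dt * (D * lap + rho * u[i] * (1.0 - u[i]))
        new[0], new[-1] = new[1], new[-2]           # no-flux boundaries
        u = new
    return u

profile = simulate()   # by t = 5 a front has formed: density near one
                       # behind it, near zero ahead of it
```

The front invades at a speed which asymptotically scales like 2\sqrt{D\rho}, which is exactly the kind of quantitative handle on the invisible margin of a glioma that such models are meant to provide; note also the stability constraint dt \le dx^2/(2D) for this explicit scheme.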

Now I come to the ‘modern’ therapies. One class of substances used is that of kinase inhibitors. The underlying idea is as follows. Whether cells divide or not is controlled by a signal transduction network, a complicated set of chemical reactions in the cell. In the course of time mutations can accumulate in a cell and when enough relevant mutations are present the transduction network is disrupted. The cell is instructed to divide under circumstances in which it would normally not do so. The cells dividing in this uncontrolled way constitute the cancer. The signals in this type of network are often passed on by phosphorylation, the attachment of phosphate groups to certain proteins. The enzymes which catalyse the process of phosphorylation are called kinases. A typical problem then is that due to a mutation a kinase is active all the time and not just when it should be. A switch which activates the signalling network is stuck in the ‘on’ position. This can in principle be changed by blocking the kinase so that it can no longer send its signals. An early and successful example of this is the kinase inhibitor imatinib, which was developed as a therapy for chronic myelogenous leukemia (CML). It seems that this drug can even cure CML in many cases, in the sense that after a time (two years) no mutated cells can be detected and the disease does not come back if the treatment is stopped. McMahon talks about this while being understandably cautious about using the word cure in the context of any type of cancer. One general point about the ‘modern’ therapies is that they do not work for a wide range of cancers or even for the majority of patients with a given type of cancer. It is rather the case that cancer can be divided into more and more subtypes by analysing it with molecular methods, and a given therapy only works in a very specific class of patients, having a specific mutation. I have said something about another treatment using a kinase inhibitor, vemurafenib, in a previous post.
An unfortunate aspect of the therapies using kinase inhibitors is that while they provide spectacular short-term successes, their effects often do not last more than a few months due to the development of resistance. A second mutation can rewire the network and overcome the blockade. (Might mathematical models be used to understand better which types of rewiring are relevant?) The picture of this I had, which now appears to me to be wrong, was that after a while on the drug a new mutation appears which confers the resistance. The picture I got from McMahon’s video was a different one. It seems that the mutations which might lead to resistance are often there before treatment begins. The cells carrying them were in Darwinian competition with cells lacking the second mutation, which were fitter. The treatment causes the fitness of the cells without the second mutation to decrease sharply. This removes the competition and allows the population of resistant cells to increase.
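This competitive-release picture can be illustrated by a toy model, with all numbers invented: a sensitive clone and a small pre-existing resistant clone grow logistically towards a shared carrying capacity, and the drug acts as an extra death rate on the sensitive clone only.

```python
# Toy Darwinian competition between a drug-sensitive clone s and a
# pre-existing, intrinsically less fit resistant clone r sharing one
# carrying capacity (normalized to 1). All rates are illustrative.

def grow(drug, T=100.0, dt=0.01):
    s, r = 0.5, 0.005          # the resistant mutant is present before treatment
    gs, gr = 1.0, 0.8          # resistant clone grows more slowly
    for _ in range(int(T / dt)):
        total = s + r
        ds = s * (gs * (1.0 - total) - drug)   # drug adds a death rate to s only
        dr = r * gr * (1.0 - total)
        s += dt * ds
        r += dt * dr
    return s, r

s0, r0 = grow(0.0)   # untreated: the fitter sensitive clone dominates
s1, r1 = grow(0.5)   # treated: competition is released, the resistant clone takes over
```

Nothing here mutates during treatment; the resistant population simply expands into the space vacated by the sensitive one, which is exactly the point of the picture described above.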

Another drug mentioned by McMahon is Herceptin. This is used to treat breast cancer patients with a mutation affecting a particular receptor. The drug is an antibody and binds to the receptor. As far as I can see it is not known why the binding of the antibody has a therapeutic effect, but there is one idea on this which I find attractive. This is that the antibodies attract immune cells which kill the cell carrying the mutation. This gives me a perfect transition to a discussion of a class of therapies which have started to become successful and popular very recently and go under the name of cancer immunotherapy, since they are based on the idea of persuading immune cells to attack cancer cells. I have already discussed one way of doing this, using antibodies to increase the activities of T cells, in a previous post. Rather than saying more about that I want to go on to the topic of genetically modified T cells, which was also mentioned briefly here.

I do not know enough to be able to give a broad review of cellular immunotherapy for cancer treatment and so I will concentrate on making some comments based on a video on this subject by Stephan Grupp. He is talking about the therapy of acute lymphoblastic leukemia (ALL). In particular he is concerned with B cell leukemia. The idea is to make artificial T cells which recognise the surface molecule CD19 characteristic of B cells. T cells are taken from the patient and modified to express a chimeric antigen receptor (CAR). The CAR is made of an external part coming from an antibody fused to an internal part including a CD3 \zeta-chain and a costimulatory molecule such as CD28. (Grupp prefers a different costimulatory molecule.) The cells are activated and caused to proliferate in vitro and then injected back into the patient. In many cases they are successful in killing the B cells of the patient and producing a lasting remission. It should be noted that most of the patients are small children and that most cases of this leukemia can be treated very effectively with classical chemotherapy. The children being treated with immunotherapy are the ‘worst cases’. The first patient treated by Grupp with this method was a seven year old girl and the treatment was ultimately very successful. Nevertheless it did at first almost kill her, and this is not the only such case. The problem was a cytokine release syndrome with extremely high levels of IL-6. Fortunately this was discovered just in time and she was treated with an antibody against the IL-6 receptor which not only existed but was approved for the treatment of children (with other diseases). It very quickly solved the problem. One issue which remains to be mentioned is that when the treatment is successful the T cells are so effective that the patient is left without B cells. Hence as long as the treatment continues immunoglobulin replacement therapy is necessary.
Thus the issue arises whether this can be a final treatment or whether it should be seen as a preparation for a bone marrow transplant. As a side issue from this story I wonder if modelling could bring some more insight into the IL-6 problem. Grupp uses some network language in talking about it, saying that the problem is a ‘simple feedback loop’. After I had written this I discovered a preprint on bioRxiv doing mathematical modelling of CAR T cell therapy of B-ALL and promising to do more in the future. It is an ODE model in which there is no explicit inclusion of IL-6 but rather a generic inflammation variable.
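To indicate the kind of feedback loop involved, here is a hypothetical three-variable sketch. It is emphatically not the model of the preprint: all terms and rates are invented. CAR T cells kill B cells, the killing releases a generic inflammation variable, and inflammation in turn drives further T cell expansion, which is the loop behind the cytokine release syndrome.

```python
# Hypothetical CAR T cell / B cell / inflammation sketch (invented rates):
# b: leukaemic B cells, c: CAR T cells, i: generic inflammation.

def simulate(T=30.0, dt=0.001):
    b, c, i = 1.0, 0.01, 0.0
    peak_i = 0.0
    for _ in range(int(T / dt)):
        kill = 2.0 * c * b                  # killing of B cells by CAR T cells
        db = -kill                          # tumour growth neglected on this timescale
        dc = c * (0.5 * b + 1.0 * i - 0.1)  # antigen- and inflammation-driven expansion, decay
        di = 5.0 * kill - 1.0 * i           # inflammation released by killing, then cleared
        b += dt * db
        c += dt * dc
        i += dt * di
        peak_i = max(peak_i, i)
    return b, c, i, peak_i

b_end, c_end, i_end, peak = simulate()   # B cells eliminated; inflammation
                                         # spikes transiently and then resolves
```

Even in this caricature the inflammation variable overshoots sharply during the killing phase before resolving, which is qualitatively the clinical picture; blocking the i-to-c link would correspond to interrupting the IL-6 feedback.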

In the beginning was the worm

September 29, 2016

In a previous post I mentioned the book by Andrew Brown whose title I have used here. I came across it in a second-hand bookshop in Berkeley when I was spending time at MSRI in 2009. I read it with pleasure then and now I have read it again. It contains the story of how the worm Caenorhabditis elegans became an important model organism. This came about because Sydney Brenner deliberately searched for an organism with favourable properties and promoted it very effectively once he had found it. It is transparent, so that it is possible to see what is going on inside it, it is easy to keep in the lab and it reproduces fast enough to allow genetic research to be done rapidly. The organism sought was also supposed to have a suitable sexual system. C. elegans is normally hermaphrodite but does also have males and so it is acceptable from that point of view. One further important fact about C. elegans is that it has a nervous system, albeit a relatively simple one. (More precisely, it has two nervous systems, but I have not looked into the details of that issue.) Brenner was looking to understand how genetics determines behaviour and C. elegans gave him an opportunity to make an attack on this problem in two steps. First understand how to get from genes to neurons and then understand how to get from neurons to behaviour. C. elegans has a total of 302 neurons. It has 959 cells in total, not including eggs and sperm. Among the remarkable things known about the worm are the complete developmental history of each of its cells and the wiring diagram of its neurons. There are about 6400 synapses but the exact number, unlike the number of cells or neurons, depends on the individual. For orientation, note that C. elegans is a eukaryotic organism (in contrast to phages or E. coli) which is multicellular (in contrast to Saccharomyces cerevisiae) and an animal (in contrast to Arabidopsis thaliana).
Otherwise, among the class of model organisms, it is as simple and fast reproducing as possible. In particular it is simpler than Drosophila, which was traditionally the favourite multicellular model organism of the geneticists.

In this blog I have previously mentioned Sydney Brenner and expressed my admiration for him. I have twice met him personally when he was giving talks in Berlin and I have also watched a number of videos of him which are available on the web and read various texts he has written. In this way I have experienced a little of the magnetism which allowed him to inspire gifted and risk-taking young scientists to work on the worm. Brenner spent 20 years at the Laboratory of Molecular Biology in Cambridge, a large part of it as director of that organization. In the pioneering days of molecular biology the lab was producing Nobel prizes in series. He had to wait until 2002 for his own Nobel prize (for physiology or medicine), shared with John Sulston and Robert Horvitz. In his Nobel speech Brenner said that he felt there was a fourth prizewinner, C. elegans, which, however, did not get a share of the money. My other favourite quote from that speech is his description of the (then) present state of molecular biology, ‘drowning in a sea of data, starving for knowledge’. Since then that problem has only got worse.

Now I will collect some ‘firsts’ associated with C. elegans. It was the first multicellular organism to have its whole genome sequenced, in 1998. This can also be seen as the point of departure for the human genome project. Here the worm people overtook the drosophilists and the Drosophila genome was only finished in 2000. Sulston played a central role in the public project to sequence the human genome and in the struggle with the commercial project of Craig Venter. It was only the link between the worm genome project and the human one which allowed enough money to be raised to finish the worm sequence. According to the book, Sulston was more interested in the worm project, since he wanted to properly finish what he had started. Martin Chalfie, coming from the worm community, introduced GFP (green fluorescent protein) into molecular biology. He first expressed it in E. coli and C. elegans. He got a Nobel prize for that in 2008. microRNA (miRNA) was first found in C. elegans. RNA interference (RNAi) was also first demonstrated in C. elegans, and this earned a Nobel prize in 2006. The genetics of the process of apoptosis (programmed cell death) was understood by studying C. elegans. When Sulston was investigating the cell lineage he saw that certain cells had to die as part of the developmental process. Exactly 131 cells die during this process.

To conclude I mention a couple of features of C. elegans going beyond the time covered by the book. I asked myself what we can learn about the immune system from C. elegans. Presumably every living organism needs an immune system to survive in a hostile environment. The adaptive immune system in the form known in humans only exists in vertebrates and hence, in particular, not in the worm. Some related comments can be found here. It seems that C. elegans has no adaptive immune system at all but it does have innate immunity. It has cells called coelomocytes which have at least some resemblance to immune cells. It has six of them in total. Compare this with more than 10^9 immune cells per litre in our blood. C. elegans eats bacteria. These days the human gut flora is a fashionable topic. A couple of weeks ago I heard a talk by Giulia Enders, the author of the book ‘Darm mit Charme’ which sold a million copies in 2014. I had bought and read the book and found it interesting although I was not really enthusiastic about it. Now TV advertising includes products aimed at the gut flora of cats. So what about C. elegans? Does it have an interesting gut flora? The answer seems to be yes. See for instance the 2013 article ‘Worms need microbes too’ in EMBO Mol. Med. 5, 1300.

ECMTB 2016 in Nottingham

July 17, 2016

This past week I attended a conference of the ESMTB and the SMB in Nottingham. My accommodation was in a hall of residence on the campus and my plan was to take a tram from the train station. When I arrived it turned out that the trams were not running. I did not find out the exact reason but it seemed that it was a problem which would not be solved quickly. Instead of finding out what bus I should take and where I should take it from I checked out the possibility of walking. As it turned out it was neither unreasonably far nor complicated. Thus, following my vocation as pedestrian, I walked there.

Among the plenary talks at the conference was one by Hisashi Ohtsuki on the evolution of social norms. Although I am a great believer in the application of mathematics to many real world problems I do become a bit sceptical when the area of application goes in the direction of sociology or psychology. Accordingly I went to the talk with rather negative expectations but I was pleasantly surprised. The speaker explained how he has been able to apply evolutionary game theory to obtain insights into the evolution of cooperation in human societies under the influence of indirect reciprocity. This means that instead of the simple direct pattern ‘A helps B and thus motivates B to help A’ we have ‘C sees A helping B and hence decides to help A’ and variations on that pattern. The central idea of the work is to compare many different strategies in the context of a mathematical model and thus obtain ideas about what are the important mechanisms at work. My impression was that this is a case where mathematics has generated helpful ideas in understanding the phenomenon and that there remain a lot of interesting things to be done in that direction. It also made me reflect on my own personal strategies when interacting with other people. Apart from the interesting content the talk was also made more interesting by the speaker’s entertaining accounts of experiments which have been done to compare with the results of the modelling. During the talk the speaker mentioned self-referentially that the fact of his standing in front of us giving the talk was an example of the process of the formation of a reputation being described in the talk. As far as I am concerned he succeeded in creating a positive reputation both for himself and for his field.
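The core mechanism of indirect reciprocity can be illustrated with a toy simulation (my own sketch, not taken from the talk; the strategy names and all parameters are invented for illustration). Discriminators help only recipients in good standing, refusing a recipient in good standing damages one's own reputation, and refusing someone in bad standing is 'justified', so unconditional defectors quickly lose access to help:

```python
import random

random.seed(1)

N = 20
# Two illustrative strategies: discriminators ('DISC') help only recipients
# in good standing; unconditional defectors ('ALLD') never help anyone.
strategy = ["ALLD"] * 4 + ["DISC"] * (N - 4)
good = [True] * N       # everyone starts with a good public reputation
received = [0] * N      # how often each agent was helped

for _ in range(5000):
    donor, recipient = random.sample(range(N), 2)
    if strategy[donor] == "DISC" and good[recipient]:
        received[recipient] += 1
        good[donor] = True      # helping maintains a good reputation
    elif good[recipient]:
        good[donor] = False     # refusing someone in good standing is condemned
    # refusing a recipient in bad standing is justified and costs nothing

help_to_disc = sum(received[i] for i in range(N) if strategy[i] == "DISC")
help_to_alld = sum(received[i] for i in range(N) if strategy[i] == "ALLD")
print(help_to_disc, help_to_alld)
```

Running this, the defectors lose their good standing as soon as they first refuse someone and thereafter receive essentially no help, while the discriminators keep both their reputation and the flow of help. This is of course only a caricature of one strategy pair; the work described in the talk compares many such strategies systematically.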

Apart from this the other plenary talk which I found most interesting was by Johan van de Koppel. He was talking about pattern formation in ecology and, in particular, about his own work on pattern formation in mussel beds. A talk which I liked much less was that of Adelia Sequeira and it is perhaps interesting to ask why. She was talking about modelling of atherosclerosis. She made the valid point near the beginning of her lecture that while heart disease is a health problem of comparable importance to cancer in the developed world the latter theme was represented much more strongly than the former at the conference. For me cancer is simply much more interesting than heart disease and this point of view is maybe more widespread. What could be the reason? One possibility is that the study of cancer involves many more conceptual aspects than that of heart disease and that this is attractive for mathematicians. Another could be that I am a lot more afraid of being diagnosed with cancer some day than of being diagnosed with heart disease although the latter may be no less probable and not less deadly if it happens. To come back to the talk I found that the material was too abundant and too technical and that many ideas were used without really being introduced. The consequence of these factors was that I lost interest and had difficulty not falling asleep.

In the case of the parallel talks there were seventeen sessions in parallel and I generally decided to go to whole sessions rather than trying to go to individual talks. I will make some remarks about some of the things I heard there. I found the first session I went to, on tumour-immune dynamics, rather disappointing but the last talk in the session, by Shalla Hanson, was a notable exception. The subject was CAR T-cells and what mathematical modelling might contribute to improving therapy. I found both the content and the presentation excellent. The presentation packed in a lot of material but rather than being overwhelmed I found myself waiting eagerly for what would come next. During the talk I thought of a couple of questions which I might ask at the end but they were answered in due course during the lecture. It is a quality I admire in a speaker to be able to anticipate the questions which the audience may ask and answer them. I see this less as a matter of understanding the psychology of the audience (which can sometimes be important) and rather of really having got to the heart of the subject being described. There was a session on mathematical pharmacology which I found interesting, in particular the talk of Tom Snowden on systems pharmacology and that of Wilhelm Huisinga on multidrug therapies for HIV. In a session on mathematical and systems immunology Grant Lythe discussed the fascinating question of how to estimate the number of T cell clones in the body and what mathematics can contribute to this beyond just analysing the data statistically. I enjoyed the session on virus dynamics, particularly a talk by Harel Dahari on hepatitis C. In particular he told a story in which he was involved in curing one exceptional HCV patient with a one-off therapy using a substance called silibinin and real-time mathematical modelling.

I myself gave a talk about dinosaurs. Since this is work which is at a relatively early stage I will leave describing more details of it in this blog to a later date.

NFκB

May 1, 2016

NF\kappaB is a transcription factor, i.e. a protein which can bind to DNA and cause a particular gene to be read more or less often. This means that more or less of a certain protein is produced and this changes the behaviour of the cell. The full name of this transcription factor is ‘nuclear factor, \kappa-light chain enhancer of B cells’. The term ‘nuclear factor’ is clear. The substance is a transcription factor and to bind to DNA it has to enter the nucleus. NF\kappaB is found in a wide variety of different cells and its association with B cells is purely historical. It was found in the lab of David Baltimore during studies of the way in which B cells are activated. It remains to explain the \kappa. B cells produce antibodies each of which consists of two symmetrical halves. Each half consists of a light and a heavy chain. The light chain comes in two variants called \kappa and \lambda. The choice which of these a cell uses seems to be fairly random. The work in the Baltimore lab had found out that NF\kappaB could skew the ratio. I found a video by Baltimore from 2001 about NF\kappaB. This is probably quite out of date by now but it contained one thing which I found interesting. Under certain circumstances it can happen that a constant stimulus causing activation of NF\kappaB leads to oscillations in the concentration. In the video the speaker mentions ‘odd oscillations’ and comments ‘but that’s for mathematicians to enjoy themselves’. It seems that he did not believe these oscillations to be biologically important. There are reasons to believe that they might be important and I will try to explain why. At the very least it will allow me to enjoy myself.

Let me explain the usual story about how NF\kappaB is activated. There are lots of animated videos on Youtube illustrating this but I prefer a description in words. Normally NF\kappaB is found in the cytosol bound to an inhibitor I\kappaB. Under certain circumstances a complex of proteins called IKK forms. The last K stands for kinase and IKK phosphorylates I\kappaB. This causes I\kappaB to be ubiquitinated and thus marked for degradation (cf. the discussion of ubiquitin here). When it has been destroyed NF\kappaB is liberated, moves to the nucleus and binds to DNA. What are the circumstances mentioned above? There are many alternatives. For instance TNF\alpha binds to its receptor, or something stimulates a toll-like receptor. The details are not important here. What is important is that there are many different signals which can lead to the activation of NF\kappaB. What genes does NF\kappaB bind to when it is activated? Here again there are many possibilities. Thus there is a kind of bow tie configuration where there are many inputs and many outputs which are connected to a single channel of communication. So how is it possible to arrange that when one input is applied, e.g. TNF\alpha, the right genes are switched on while another input activates other genes through the same mediator NF\kappaB? One possibility is cross-talk, i.e. that this signalling pathway interacts with others. If this cannot account for all the specificity then the remaining possibility is that information is encoded in the signal passing through NF\kappaB itself. For example, one stimulus could produce a constant response while another causes an oscillatory one. Or two stimuli could cause oscillatory responses with different frequencies. Evidently the presence of oscillations in the concentration of NF\kappaB presents an opportunity for encoding more information than would otherwise be possible. To what extent this really happens is something where I do not have an overview at the moment.
I want to learn more. In any case, oscillations have been observed in the NF\kappaB system. The primary thing which has been observed to oscillate is the concentration of NF\kappaB in the nucleus. This oscillation is a consequence of the movement of the protein between the cytosol and the nucleus. There are various mathematical models for describing these oscillations. As usual in modelling phenomena in cell biology there are models which are very big and complicated. I find it particularly interesting when some of the observations can be explained by a simple model. This is the case for NF\kappaB where a three-dimensional model and an explanation of its relations to the more complicated models can be found in a paper by Krishna, Jensen and Sneppen (PNAS 103, 10840). In the three-dimensional model the unknowns are the concentrations of NF\kappaB in the nucleus, I\kappaB in the cytoplasm and mRNA coding for I\kappaB. The oscillations in normal cells are damped but sustained oscillations can be seen in mutated cells or corresponding models.
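To illustrate why a negative feedback loop of this shape can oscillate, here is a minimal sketch of a generic three-variable loop. These are not the equations or parameter values of the Krishna-Jensen-Sneppen paper (those are in PNAS 103, 10840); all numbers here are purely illustrative. The variable x stands for nuclear NF\kappaB, y for I\kappaB mRNA and z for I\kappaB protein, with I\kappaB repressing the nuclear level of NF\kappaB:

```python
# Generic three-variable negative feedback loop (Goodwin-type), integrated
# with a hand-rolled RK4 scheme. Parameters are illustrative, not fitted.

def rhs(x, y, z):
    dx = 10.0 / (1.0 + z**12) - x   # nuclear entry of NFkB repressed by IkB
    dy = x - y                      # nuclear NFkB drives IkB transcription
    dz = y - z                      # IkB mRNA translated into IkB protein
    return dx, dy, dz

def rk4_step(state, h):
    def shift(u, k, a):
        return [ui + a * ki for ui, ki in zip(u, k)]
    k1 = rhs(*state)
    k2 = rhs(*shift(state, k1, h / 2))
    k3 = rhs(*shift(state, k2, h / 2))
    k4 = rhs(*shift(state, k3, h))
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state = [0.1, 0.1, 0.1]
h = 0.01
trace = []
for _ in range(20000):              # integrate up to t = 200
    state = rk4_step(state, h)
    trace.append(state[0])          # record nuclear NFkB

# count late-time local maxima of nuclear NFkB as a crude oscillation check
peaks = sum(1 for i in range(10000, len(trace) - 1)
            if trace[i - 1] < trace[i] > trace[i + 1])
print(peaks)
```

With this steep feedback (Hill exponent 12) the equilibrium is unstable and the loop settles into sustained oscillations; weakening the nonlinearity (a smaller exponent) gives only damped oscillations, which is at least qualitatively in line with the distinction between normal and mutated cells mentioned above.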

What is the function of NF\kappaB? The short answer is that it has many. On a broad level of description it plays a central role in the phenomenon of inflammation. In particular it leads to production of the cytokine IL-17 which in turn, among other things, stimulates the production of anti-microbial peptides. When these are absent the result is a serious immunodeficiency. In one variant of this there is a mutation in the gene coding for NEMO, which is one of the proteins making up IKK. A complete absence of NEMO is fatal before birth but people with a less severe mutation in the gene do occur. There are symptoms due to things which took place during the development of the embryo and also immunological problems, such as the inability to deal with certain bacteria. The gene for NEMO is on the X chromosome so that this disease is usually limited to boys. More details can be found in the book of Geha and Notarangelo mentioned in a previous post.

David Vetter, the bubble boy

October 17, 2015

T cells are a class of white blood cells without which a human being usually cannot survive. An exception to this was David Vetter, a boy who lived 12 years without T cells. This was only possible because he lived all this time in a sterile environment, a plastic bubble. For this reason he became known as the bubble boy. The disease which he suffered from is called SCID, severe combined immunodeficiency, and it corresponds to having no T cells. The most common form of this is due to a mutation on the X chromosome and as a result it usually affects males. The effects set in a few months after birth. The mutation leads to a lack of the \gamma chain of the IL-2 receptor. In fact this chain occurs in several cytokine receptors and is therefore called the ‘common chain’. Probably the key to the negative effects caused by its lack in SCID patients is the resulting lack of the receptor for IL-7, which is important for T cell development. SCID patients have a normal number of B cells but very few antibodies due to the lack of support by helper T cells. Thus in the end they lack both the immunity usually provided by T cells and that usually provided by B cells. This is the reason for the description ‘combined immunodeficiency’. I got the information on this theme which follows mainly from two sources. The first is a documentary film ‘Bodyshock – The Boy in the Bubble’ about David Vetter produced by Channel 4 and available on Youtube. (There are also less serious films on this subject, including one featuring John Travolta.) The second is the chapter on X-linked SCID in the book ‘Case Studies in Immunology’ by Raif Geha and Luigi Notarangelo. I find this book a wonderful resource for learning about immunology. It links general theory to the case history of specific patients.

David Vetter had an older brother who also suffered from SCID and died of infection very young. Thus his parents and their doctors were warned. The brother was given a bone marrow transplant from his sister, who had the necessary tissue compatibility. Unfortunately this did not save him, presumably because he had already been exposed to too many infections by the time it was carried out. The parents decided to have another child, knowing that if it was a boy the chances of another case of SCID were 50%. Their doctors had a hope of being able to save the life of such a child by isolating him and then giving him a bone marrow transplant before he had been exposed to infections. The parents very soon had another child: it was a boy, and he had SCID. The child was put into a sterile plastic bubble immediately after birth. Unfortunately it turned out that the planned bone marrow donor, David’s sister, was not a good match for him. It was necessary to wait and hope for an alternative donor. This hope was not fulfilled and David had to stay in the bubble. This had not been planned and it must be asked whether the doctors involved had really thought through what would happen if the optimal variant they had thought of did not work out.

At one point David started making punctures in his bubble as a way of attracting attention. Then it was explained to him what his situation was and why he must not damage the bubble. Later there was a kind of space suit produced for him by NASA which allowed him to move around outside his home. He only used it six times since he was too afraid there could be an accident. His physical health was good but understandably his psychological situation was difficult. New ideas in the practice of bone marrow transplantation indicated that it might be possible to use donors with a lesser degree of compatibility. On this basis David was given a transplant with his sister as the donor. It was not noticed that her bone marrow was infected with Epstein-Barr virus. As a result David got Burkitt’s lymphoma, a type of cancer which can be caused by that virus. (Compare what I wrote about this role of EBV here.) He died a few months after the operation, at the age of 12. Since that time treatment techniques have improved. The patient whose case is described in the book of Geha and Notarangelo had a successful bone marrow transplant (with his mother as donor). Unfortunately his lack of antibodies was not cured but this can be controlled with injections of immunoglobulin once every three weeks.

Trip to the US

October 5, 2015

Last week I visited a few places in the US. My first stop was Morgantown, West Virginia where my host was Casian Pantea. There I had a lot of discussions with Casian and Carsten Conradi on chemical reaction network theory. This synergized well with the work I have recently been doing preparing a lecture course on that subject which I will be giving in the next semester. I gave a talk on MAPK and got some feedback on that. It rained a lot and there was not much opportunity to do anything except work. One day on the way to dinner while it was relatively dry I saw a Cardinal and I fortunately did have my binoculars with me. On Wednesday afternoon I travelled to New Brunswick and spent most of Thursday talking to Eduardo Sontag at Rutgers. It was a great pleasure to talk to an excellent mathematician who also knows a lot about immunology. He and I have a lot of common interests which is in part due to the fact that I was inspired by several of his papers during the time I was getting into mathematical biology. I also had the opportunity to meet Evgeni Nikolaev who told me a variety of interesting things. They concerned bifurcation theory in general, its applications to the kinds of biological models I am interested in and his successes in applying mathematical models to understanding concrete problems in biomedical research such as the processes taking place in tuberculosis. My personal dream is to see a real coming together of mathematics and immunology and that I have the chance to make a contribution to that process.

On Friday I flew to Chicago in order to attend an AMS sectional meeting. I had been in Chicago once before but that is many years ago now. I do remember being impressed by how much Lake Michigan looks like the sea, I suppose due to the structure of the waves. This impression was even stronger this time since there were strong winds whipping up the waves. Loyola University, the site of the meeting, is right beside the lake and it felt like home for me due to the combination of wind, waves and gulls. The majority of those were Ring-Billed Gulls which made it clear which side of the Atlantic I was on. There were also some Herring Gulls and although they might have been split from those on the other side of the Atlantic by the taxonomists I did not notice any difference. It was the first time I had been at an AMS sectional meeting and my impression was that the parallel sessions were very parallel, in other words in no danger of meeting. Most of the people in our session were people I knew from the conferences I attended in Charlotte and in Copenhagen although I did make a couple of new acquaintances, improving my coverage of the reaction network community.

In a previous post I mentioned Gheorghe Craciun’s ideas about giving the deficiency of a reaction network a geometric interpretation, following a talk of his in Copenhagen. Although I asked him questions about this on that occasion I did not completely understand the idea. Correspondingly my discussion of the point here in my blog was quite incomplete. Now I talked to him again and I believe I have finally got the point. Consider first a network with a single linkage class. The complexes of the network define points in the species space whose coordinates are the stoichiometric coefficients. The reactions define oriented segments joining the educt complex to the product complex of each reaction. The stoichiometric subspace is the vector space spanned by the differences of the complexes. It can also be considered as a translate of the affine subspace spanned by the complexes themselves. This makes it clear that its dimension s is at most n-1, where n is the number of complexes. The number s is the rank of the stoichiometric matrix. The deficiency is n-1-s. At the same time s\le m, where m is the number of species. If there are several linkage classes then the whole space has dimension at most n-l, where l is the number of linkage classes. The deficiency is n-l-s. If the spaces corresponding to the individual linkage classes have the maximal dimension allowed by the number of complexes in that class and these spaces are linearly independent then the deficiency is zero. Thus we see that the deficiency is the extent to which the complexes fail to be in general position. If the species and the number of complexes have been fixed then deficiency zero is seen to be a generic condition. On the other hand fixing the species and adding more complexes will destroy the deficiency zero condition since then we are in the case n-l>m so that the possibility of general position is excluded.
The advantage of having this geometric picture is that it can often be used to read off the deficiency directly from the network. It might also be used to aid in constructing networks with a desired deficiency.
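As a sanity check of the definitions, the deficiency n-l-s can also be computed mechanically: count the complexes, find the linkage classes as connected components of the reaction graph, and take the rank of the stoichiometric matrix. Here is a small script doing this for a made-up example network (the network and all names in the code are mine, chosen for illustration, not taken from Craciun's talk):

```python
from fractions import Fraction

# Hypothetical example network: A -> 2B, 2B -> A, A + C -> D, D -> A + C
species = ["A", "B", "C", "D"]
# each complex as its vector of stoichiometric coefficients over (A, B, C, D)
complexes = {
    "A":   (1, 0, 0, 0),
    "2B":  (0, 2, 0, 0),
    "A+C": (1, 0, 1, 0),
    "D":   (0, 0, 0, 1),
}
reactions = [("A", "2B"), ("2B", "A"), ("A+C", "D"), ("D", "A+C")]

n = len(complexes)  # number of complexes

# linkage classes: connected components of the undirected reaction graph,
# found here with a tiny union-find structure
parent = {c: c for c in complexes}
def find(c):
    while parent[c] != c:
        parent[c] = parent[parent[c]]
        c = parent[c]
    return c
for a, b in reactions:
    parent[find(a)] = find(b)
l = len({find(c) for c in complexes})

# s = rank of the stoichiometric matrix, whose rows are the differences
# product complex minus educt complex; exact arithmetic via Fraction
rows = [[Fraction(pb - pa) for pa, pb in zip(complexes[a], complexes[b])]
        for a, b in reactions]
def rank(m):
    m = [row[:] for row in m]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r
s = rank(rows)

deficiency = n - l - s
print(n, l, s, deficiency)  # -> 4 2 2 0
```

For this network each of the two linkage classes spans a one-dimensional space, these spaces are linearly independent, and so the geometric picture predicts deficiency zero, in agreement with n-l-s = 4-2-2 = 0.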