Archive for the ‘mathematical biology’ Category

Kestrels and Dirichlet boundary conditions

February 28, 2017

The story I tell in this post is based on what I heard a long time ago in a talk by Jonathan Sherratt. References to the original work by Sherratt and his collaborators are Proc. R. Soc. Lond. B269, 327 (more biological) and SIAM J. Appl. Math. 63, 1520 (more mathematical). There are some things I say in the following which I did not find in these sources and so they are based on my memories of that talk and on things which I wrote down for my own reference at intermediate times. If this has introduced errors they will only concern details and not the basic story. The subject is a topic in population biology and how it relates to certain properties of reaction-diffusion equations.

In the north of England there is an area called the Kielder Forest with a lake in the middle and the region around the lake is inhabited by a population of the field vole Microtus agrestis. It is well known that populations of voles undergo large fluctuations in time. What is less well known is what the spatial dependence is like. There are two alternative scenarios. In the first the population density of voles oscillates in a way which is uniform in space. In the second it is a travelling wave of the form U(x-ct). In that case the population at a fixed point of space oscillates in time but the phase of the oscillations is different at different spatial points. In general there is relatively little observational data on this type of thing. The voles in the Kielder Forest are an exception since in that case a dedicated observer collected data which provides information on both the temporal and spatial variation of the population density. This data is the basis for the modelling which I will now describe.

The main predators of the voles are weasels, Mustela nivalis. It is possible to set up a model where the unknowns are the populations of voles and weasels. Their interaction is modelled in a simple way common in predator-prey models. Their spatial motion is described by a diffusion term. In this way a system of reaction-diffusion equations is obtained. These are parabolic equations and so the time evolution is non-local in space. The unknowns are defined on a region with boundary which is the complement of a lake. Because of this we need not only initial values to determine a solution but also boundary conditions. How should they be chosen? In the area around the lake there live certain birds of prey, kestrels. They hunt voles from the air. In most of the area being considered there is very thick vegetation and the voles can easily hide from the kestrels. Thus the direct influence of the kestrels on the vole population is negligible and the kestrels do not need to be included in the reaction-diffusion system. They do, however, have a striking indirect effect. On the edge of the lake there is a narrow strip with little vegetation and any vole which ventures into that area is in great danger of being caught by a kestrel. This means that the kestrels essentially enforce the vanishing of the population density of voles at the edge of the lake. In other words they impose a homogeneous Dirichlet boundary condition on one of the unknowns at the boundary. Note that this is incompatible with spatially uniform oscillations, since on the boundary oscillations are ruled out by the Dirichlet condition. When the PDE are solved numerically what is seen is that the shore of the lake generates a train of travelling waves which propagate away from it. This can also be understood theoretically, as explained in the papers quoted above.
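To make this concrete, here is a minimal numerical sketch of such a system. It is not the model of Sherratt and his collaborators: the kinetics (of Rosenzweig-MacArthur type) and all parameter values are invented for illustration. The only point being illustrated is how a homogeneous Dirichlet condition for the prey is imposed at one end of the domain, playing the role of the lake shore patrolled by kestrels.

```python
import numpy as np

def laplacian(w, dx):
    """Discrete second derivative; zero-flux (Neumann) ends."""
    l = np.empty_like(w)
    l[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
    l[0] = 2.0 * (w[1] - w[0]) / dx**2
    l[-1] = 2.0 * (w[-2] - w[-1]) / dx**2
    return l

def simulate_voles(nx=200, nt=20000, L=100.0, dt=0.005):
    """Explicit Euler for a toy predator-prey reaction-diffusion system."""
    dx = L / (nx - 1)
    u = np.full(nx, 0.5)   # prey density (voles)
    v = np.full(nx, 0.2)   # predator density (weasels)
    for _ in range(nt):
        fu = u * (1.0 - u) - u * v / (u + 0.2)   # logistic growth minus predation
        fv = 2.0 * u * v / (u + 0.2) - 0.8 * v   # predator gain minus mortality
        u = u + dt * (laplacian(u, dx) + fu)
        v = v + dt * (laplacian(v, dx) + fv)
        u[0] = 0.0   # homogeneous Dirichlet condition for the prey at the shore
    return u, v

u, v = simulate_voles()
```

With parameters in the oscillatory regime one would expect plots of u at successive times to show waves emanating from the Dirichlet end; the sketch makes no claim to reproduce the quantitative behaviour in the papers above.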

Conference on cancer immunotherapy at EMBL

February 5, 2017

I just came back from a conference on cancer immunotherapy at EMBL in Heidelberg. It was very interesting for me to get an inside view of what is happening in this field and to learn what some of the hot topics are. One of the speakers was Patrick Baeuerle, who talked about a molecular construct which he introduced, called BiTE (bispecific T cell engager). It is the basis of a drug called blinatumomab. This is an antibody construct which binds both CD3 (characteristic of T cells) and CD19 (characteristic of B cells) so that these cells are brought into proximity. In its therapeutic use in treating acute lymphoblastic leukemia, the B cell is a cancer cell. More generally similar constructs could be made so as to bring T cells into proximity with other cancer cells. The idea is that the T cell should kill the cancer cell and in that context it is natural to think of cytotoxic T cells. It was not clear to me how the T cell is activated since the T cell receptor is not engaged. I took the opportunity to ask Baeuerle about this during a coffee break and he told me that proximity alone is enough to activate T cells. This can work not only for CD8 T cells but also for CD4 cells and even regulatory T cells. He presented a picture of a T cell always being ready to produce toxic substances and just needing a signal to actually do it. Under normal circumstances T cells search the surfaces of other cells for antigens and do not linger long close to any one cell unless they find their antigen. If they do stay longer near another cell for some reason then this can be interpreted as a danger sign and the T cell reacts. Baeuerle, who started his career as a biochemist, was CEO of a company called Micromet whose key product was what became blinatumomab. The company was bought by Amgen for more than a billion dollars and Baeuerle went with it to Amgen. When it came on the market it was the most expensive cancer drug ever up to that time. 
Later Baeuerle moved to a venture capital firm called MPM Capital, which is where he is now. In his previous life as a biochemical researcher Baeuerle did fundamental work on NF-κB with David Baltimore.

In a previous post I mentioned a video by Ira Mellman. At the conference I had the opportunity to hear him live. One thing which became clear to me at the conference is the extent to which, among the checkpoint inhibitor drugs, anti-PD-1 is superior to anti-CTLA-4. It is successful in a much higher proportion of patients. I never thought much about PD-1 before. It is a receptor which is present on the surface of T cells after they have been activated and it can be stimulated by the ligand PD-L1, leading to the T cell being switched off. But how does this switching off process work? The T cell is normally switched on by the engagement of the T cell receptor and a second signal from CD28. In his talk Mellman explained that the switching off due to PD-1 is not due to signalling from the T cell receptor being stopped. Instead what happens is that PD-1 activates the phosphatase SHP2, which dephosphorylates and thus deactivates CD28. Even a very short deactivation of CD28 is enough to turn off the T cell. In thinking about mathematical models for T cell activation I thought that there might be a link to checkpoint inhibitors. Now it looks like models for T cell activation are not of direct relevance there and that instead it would be necessary to model CD28.

I learned some more things about viruses and cancer. One is that the Epstein-Barr virus, famous for causing Burkitt’s lymphoma, also causes other types of cancer, in particular other types of lymphoma. Another is that viruses are being used in a therapeutic way. I had heard of oncolytic viruses before but I had never really paid attention. In one talk the speaker showed a picture of a young African man who had been cured of Burkitt’s lymphoma by … getting measles. This gave rise to the idea that viruses can sometimes preferentially kill cancer cells and that they can perhaps be engineered so as to do so more often. In particular measles is a candidate. In that case there is an established safe vaccination and the idea is to vaccinate with genetically modified measles virus to fight certain types of cancer.

In going to this conference my main aim was to improve my background in aspects of biology and medicine which could be of indirect use for my mathematical work. In fact, to my surprise, I met one of the authors of a paper on T cell activation which is closely related to mathematical topics I am interested in. This was Philipp Kruger, who is in the group of Omer Dushek in Oxford. I talked to him about the question of what is really the mechanism by which signals actually cross the membrane of T cells. One possibility he mentioned was a conformational change in CD3. Another, which I had already come across, is that it could have to do with a mechanical effect by which the binding of a certain molecule brings the cell membranes of two interacting cells together and expels large phosphatases like CD45 from a certain region. In the paper of his which I had looked at, signalling in T cells is studied with the help of CAR T cells, which have an artificial analogue of the T cell receptor which may have a much higher affinity than the natural receptor. In his poster he described a new project looking at the effect of using different co-receptors in CAR T cells (not just CD28). In any case CAR T cells were a subject which frequently came up at the conference. Something which was in the air was that this therapy may be associated with neurotoxicity in some cases but I did not learn any details.

As far as I can see, the biggest issue with all these techniques is the following. They can be dramatically successful, taking patients from the point of death to long-term survival. On the other hand they only work in a subset of patients (say, 40% at most) and nobody understands what success depends on. I see a great need for a better theoretical understanding. I can understand that when someone has what looks like a good idea in this area they quickly look for a drug company to do a clinical trial with it. These things can save lives.  On the other hand it is important to ask whether investing more time in obtaining a better understanding of underlying mechanisms might not lead to better results in the long run.

The importance of dendritic cells

October 30, 2016

I just realized that something I wrote in a previous post does not make logical sense. This was not just due to a gap in my exposition but to a gap in my understanding. I now want to correct it. A good source for the correct story is a video by Ira Mellman of Genentech. I first recall some standard things about antigen presentation. In this process peptides are presented on the surface of cells with MHC molecules, which are of two types, I and II. MHC Class I molecules are found on essentially all cells and can present peptides derived from viruses infecting the cell concerned. MHC Class II molecules are found only on special cells called professional antigen presenting cells. These are macrophages, B cells and dendritic cells. The champions in antigen presentation are the dendritic cells and those are the ones I will be talking about here. In order for a T cell to be activated it needs two signals. The first comes through the T cell receptor interacting with the peptide-MHC complex on an APC. The second comes from CD28 on the T cell surface interacting with B7.1 and B7.2 on the APC.

Consider now an ordinary cell, not an APC, which is infected with a virus. This could, for instance, be an epithelial cell infected with an influenza virus. This cell will present peptides derived from the virus with MHC Class I molecules. These can be recognized by activated CD8+ T cells which can then kill the epithelial cell and put an end to the viral reproduction in that cell. The way I put it in the previous post it looked like the T cell could be activated by the antigen presented on the target cell with the help of CD28 stimulation. The problem is that the cell presenting the antigen in this case is an amateur. It has no B7.1 or B7.2 and so cannot signal through CD28. The real story is more complicated. The fact is that dendritic cells can also present antigen on MHC Class I, including peptides derived from proteins which are not their own. A possible mechanism explained in the video of Mellman (I do not know if it is certain whether this is the mechanism, or whether it is the only one) is that a cell infected by a virus is ingested by a dendritic cell by phagocytosis, so that proteins which were outside the dendritic cell are now inside and can be brought into the pathway of MHC Class I presentation. This process is known as cross presentation. Dendritic cells also have tools of the innate immune system, such as toll-like receptors, at their disposal. When they recognise the presence of a virus by these means they upregulate B7.1 and B7.2 and are then in a position to activate CD8+ T cells. Note that in this case the virus will be inside the dendritic cell but not infecting it. There are viruses which use dendritic cells for their own purposes, reproducing there or hitching a lift to the lymph nodes where they can infect their favourite cells. An example is HIV. The main receptor used by this virus to enter cells is CD4 and this is present not only on T cells but also on dendritic cells.
Another interesting side issue is that dendritic cells can not only activate T cells but also influence the differentiation of these cells into various different types. The reason is that the detection instruments of the dendritic cell not only recognise that a pathogen is there but can also classify it to some extent (Mellman talks about a bar code). Based on this information the dendritic cell secretes various cytokines which influence the differentiation process. For instance they can influence whether a T-helper cell becomes of type Th1 or Th2. This is related to work which I did quite a long time ago on an ODE system modelling the interactions of T cells and macrophages. In view of what I just said it might be interesting to study an inhomogeneous version of this system. The idea is to include an external input of cytokines coming from dendritic cells. In fact the unknowns in the system are not the concentrations of cytokines but the populations of cells. Thus it would be appropriate to introduce an inhomogeneous contribution into the terms describing the production of different types of cells.

Modern cancer therapies

October 28, 2016

I find the subject of cancer therapies fascinating. My particular interest is in the possibility of obtaining new insights by modelling and what role mathematics can play in this endeavour. I have heard many talks related to these subjects, both live and online. I was stimulated to write this post by a video of Martin McMahon, then at UCSF. It made me want to systematize some of the knowledge I have obtained from that video (which is already a few years old) and from other sources. First I should fix my terminology. I use the term ‘modern cancer therapies’ to distinguish a certain group of treatments from what I will call ‘classical cancer therapies’. The latter are still of central importance today and the characteristic feature of those I am calling ‘modern’ here is that they have only been developed in the last few years. I start by reviewing the ‘classical’ therapies: surgery, radiotherapy and chemotherapy. Surgery can be very successful when it works. The aim is to remove all the cancerous cells. There is a tension between removing too little (so that a few malignant cells could remain and restart the tumour) and too much (which could mean too much damage to healthy tissues). A particularly difficult case is that of the glioma, where it is impossible to determine the extent of the tumour by imaging techniques alone. An alternative is provided by the work of Kristin Swanson, which I mentioned in a previous post. She has developed techniques of using a mathematical model of the tumour (with reaction-diffusion equations) to predict its extent. The results of a simulation, specific to a particular patient, are given to the surgeon to guide the operation. In the case of radiotherapy, radiation is used to kill cancer cells while trying to avoid killing too many healthy cells. A problematic aspect is that the cells are killed by damaging their DNA and this kind of damage may lead to the development of new cancers.
In chemotherapy a chemical substance (poison) is used with the same basic aim as in radiotherapy. The substance is chosen to have the greatest effect on cells which divide frequently. This is the case with cancer cells but unfortunately they are not the only ones. A problem with radiotherapy and chemotherapy is their poor specificity.

Now I come to the ‘modern’ therapies. One class of substances used is that of kinase inhibitors. The underlying idea is as follows. Whether cells divide or not is controlled by a signal transduction network, a complicated set of chemical reactions in the cell. In the course of time mutations can accumulate in a cell and when enough relevant mutations are present the transduction network is disrupted. The cell is instructed to divide under circumstances under which it would normally not do so. The cells dividing in an uncontrolled way constitute cancer. The signals in this type of network are often passed on by phosphorylation, the attachment of phosphate groups to certain proteins. The enzymes which catalyse the process of phosphorylation are called kinases. A typical problem then is that due to a mutation a kinase is active all the time and not just when it should be. A switch which activates the signalling network is stuck in the ‘on’ position. This can in principle be changed by blocking the kinase so that it can no longer send its signals. An early and successful example of this is the kinase inhibitor imatinib, which was developed as a therapy for chronic myelogenous leukemia (CML). It seems that this drug can even cure CML in many cases, in the sense that after a time (two years) no mutated cells can be detected and the disease does not come back if the treatment is stopped. McMahon talks about this while being understandably cautious about using the word cure in the context of any type of cancer. One general point about the ‘modern’ therapies is that they do not work for a wide range of cancers or even for the majority of patients with a given type of cancer. It is rather the case that cancer can be divided into more and more subtypes by analysing it with molecular methods and the therapy only works in a very specific class of patients, those having a specific mutation. I said something about another treatment using a kinase inhibitor, vemurafenib, in a previous post.
An unfortunate aspect of the therapies using kinase inhibitors is that while they provide spectacular short-term successes their effects often do not last more than a few months, due to the development of resistance. A second mutation can rewire the network and overcome the blockade. (Might mathematical models be used to understand better which types of rewiring are relevant?) The picture of this I had, which now appears to me to be wrong, was that after a while on the drug a new mutation appears which confers the resistance. The picture I got from McMahon’s video was a different one. It seems that the mutations which might lead to resistance are often there before treatment begins. The cells carrying them were in Darwinian competition with cells lacking the second mutation, which were fitter. The treatment causes the fitness of the cells without the second mutation to decrease sharply. This removes the competition and allows the population of resistant cells to increase.
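This competition mechanism can be seen in a caricature model: two tumour clones share a logistic carrying capacity, the resistant clone is present from the start but slightly less fit, and the drug kills only the sensitive clone. The equations and all parameter values here are invented for illustration and have nothing to do with any particular data set.

```python
def simulate_clones(dt=0.01, t_pre=20.0, t_drug=30.0):
    """Euler integration of two competing tumour clones; drug starts at t_pre."""
    S, R = 0.5, 0.01               # sensitive and (pre-existing) resistant clones
    r_S, r_R, m = 1.0, 0.7, 0.1    # growth rates and background death rate
    history = []
    for step in range(int((t_pre + t_drug) / dt)):
        drug = 2.0 if step * dt >= t_pre else 0.0   # treatment switched on at t_pre
        N = S + R                                   # shared carrying capacity (K = 1)
        dS = S * (r_S * (1.0 - N) - m - drug)       # drug hits sensitive cells only
        dR = R * (r_R * (1.0 - N) - m)
        S, R = max(S + dt * dS, 0.0), max(R + dt * dR, 0.0)
        history.append((S, R))
    return history
```

In this caricature the resistant clone stays rare before treatment because it loses the competition; switching the drug on collapses the sensitive population and the resistant clone expands to fill the freed capacity.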

Another drug mentioned by McMahon is Herceptin. This is used to treat breast cancer patients whose tumours overexpress a particular receptor (HER2). The drug is an antibody and binds to the receptor. As far as I can see it is not known why the binding of the antibody has a therapeutic effect but there is one idea on this which I find attractive. This is that the antibodies attract immune cells which kill the cell carrying the receptor. This gives me a perfect transition to a discussion of a class of therapies which started to become successful and popular very recently and go under the name of cancer immunotherapy, since they are based on the idea of persuading immune cells to attack cancer cells. I have already discussed one way of doing this, using antibodies to increase the activities of T cells, in a previous post. Rather than saying more about that I want to go on to the topic of genetically modified T cells, which was also mentioned briefly here.

I do not know enough to be able to give a broad review of cellular immunotherapy for cancer treatment and so I will concentrate on making some comments based on a video on this subject by Stephan Grupp. He is talking about the therapy of acute lymphoblastic leukemia (ALL). In particular he is concerned with B cell leukemia. The idea is to make artificial T cells which recognise the surface molecule CD19 characteristic of B cells. T cells are taken from the patient and modified to express a chimeric antigen receptor (CAR). The CAR is made of an external part coming from an antibody fused to an internal part including a CD3 ζ-chain and a costimulatory molecule such as CD28. (Grupp prefers a different costimulatory molecule.) The cells are activated and caused to proliferate in vitro and then injected back into the patient. In many cases they are successful in killing the B cells of the patient and producing a lasting remission. It should be noted that most of the patients are small children and that most cases can be treated very effectively with classical chemotherapy. The children being treated with immunotherapy are the ‘worst cases’. The first patient treated by Grupp with this method was a seven year old girl and the treatment was finally very successful. Nevertheless it did at first almost kill her and this is not the only such case. The problem was a cytokine release syndrome with extremely high levels of IL-6. Fortunately this was discovered just in time and she was treated with an antibody to IL-6 which not only existed but was approved for the treatment of children (with other diseases). It very quickly solved the problem. One issue which remains to be mentioned is that when the treatment is successful the T cells are so effective that the patient is left without B cells. Hence as long as the treatment continues immunoglobulin replacement therapy is necessary.
Thus the issue arises whether this can be a final treatment or whether it should be seen as a preparation for a bone marrow transplant. As a side issue from this story I wonder if modelling could bring some more insight for the IL-6 problem. Grupp uses some network language in talking about it, saying that the problem is a ‘simple feedback loop’. After I had written this I discovered a preprint on bioRxiv doing mathematical modelling of CAR T cell therapy of B-ALL and promising to do more in the future. It is an ODE model where there is no explicit inclusion of IL-6 but rather a generic inflammation variable.
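Out of curiosity I sketched what such a model might look like. The following toy system (CAR T cells T, tumour B cells B, a generic inflammation variable I) is my own invention for illustration and is not the model of the preprint; all equations and parameter values are hypothetical.

```python
def simulate_cart(dt=0.01, t_end=40.0):
    """Explicit Euler for a toy CAR T / tumour / inflammation ODE system."""
    T, B, I = 0.1, 1.0, 0.0   # CAR T cells, tumour B cells, inflammation
    trace = []
    for _ in range(int(t_end / dt)):
        dT = 1.0 * T * B / (B + 0.1) - 0.1 * T   # antigen-driven expansion, decay
        dB = 0.2 * B - 1.0 * T * B               # tumour growth minus killing
        dI = 1.0 * T * B - 0.5 * I               # killing fuels inflammation; clearance
        T, B, I = T + dt * dT, B + dt * dB, I + dt * dI
        trace.append((T, B, I))
    return trace

trace = simulate_cart()
```

In this sketch the inflammation variable rises sharply while the tumour cells are being killed and then decays again, which is at least qualitatively reminiscent of the transient cytokine release syndrome described above.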

Models for photosynthesis, part 4

September 19, 2016

In previous posts in this series I introduced some models for the Calvin cycle of photosynthesis and mentioned some open questions concerning them. I have now written a paper where on the one hand I survey a number of mathematical models for the Calvin cycle and the relations between them and on the other hand I am able to provide answers to some of the open questions. One question was that of the definition of the Pettersson model. As I indicated previously this was not clear from the literature. My answer to the question is that this system should be treated as a system of DAE (differential-algebraic equations). In other words it can be written as a set of ODE \dot x=f(x,y) coupled to a set of algebraic equations g(x,y)=0. In general it is not clear that this type of system is locally well-posed. In other words, given a pair (x_0,y_0) with g(x_0,y_0)=0 it is not clear whether there is a solution (x(t),y(t)) of the system, local in time, with x(0)=x_0 and y(0)=y_0. Of course if the partial derivative of g with respect to y is invertible it follows by the implicit function theorem that g(x,y)=0 is locally equivalent to a relation y=h(x) and the original system is equivalent to \dot x=f(x,h(x)). Then local well-posedness is clear. The calculations in the 1988 paper of Pettersson and Ryde-Pettersson indicate that this should be true for the Pettersson model but there are details missing in the paper and I have not (yet) been able to supply these. The conservative strategy is then to stick to the DAE picture. Then we do not have a basis for studying the dynamics but at least we have a well-defined system of equations and it is meaningful to discuss its steady states.

I was able to prove that there are parameter values for which the Pettersson model has at least two distinct positive steady states. In doing this I was helped by an earlier (1987) paper of Pettersson and Ryde-Pettersson. The idea is to shut off the process of storage as starch so as to get a subnetwork. If two steady states can be obtained for this modified system we may be able to get steady states for the original system using the implicit function theorem. There are some more complications but a key step in the proof is the one just described. So how do we get steady states for the modified system? The idea is to solve many of the equations explicitly so that the problem reduces to a single equation for one unknown, the concentration of DHAP. (When applying the implicit function theorem we have to use a system of two equations for two unknowns.) In the end we are left with a quadratic equation and we can arrange for the coefficients in that equation to have convenient properties by choosing the parameters in the dynamical system suitably. This approach can be put in a wider context using the concept of stoichiometric generators but the proof is not logically dependent on using the theory of those objects.
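The last step of this argument can be illustrated in miniature. The following sketch has nothing to do with the detailed biochemistry of the Pettersson model; the coefficients are invented. It only shows the counting logic: once the steady-state conditions have been reduced to one quadratic equation, choosing the coefficients suitably yields two distinct positive roots, i.e. two candidate positive steady states.

```python
import math

def positive_steady_states(a, b, c):
    """Return the positive real roots of a*x**2 + b*x + c = 0."""
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []
    roots = [(-b - math.sqrt(disc)) / (2.0 * a),
             (-b + math.sqrt(disc)) / (2.0 * a)]
    return [r for r in roots if r > 0.0]

# With a > 0, c > 0 and b < -2*sqrt(a*c) both roots are real and positive.
states = positive_steady_states(1.0, -3.0, 1.0)
```

By Vieta’s formulas the product of the roots is c/a and their sum is -b/a, so the sign conditions above guarantee that both roots, when real, are positive.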

Having got some information about the Pettersson model we may ask what happens when we go over to the Poolman model. The Poolman model is a system of ODE from the start and so we do not have any conceptual problems in that case. The method of construction of steady states can be adapted rather easily so as to apply to the system of DAE related to the Poolman model (let us call it the reduced Poolman model since it can be expressed as a singular limit of the Poolman model). The result is that there are parameter values for which the reduced Poolman model has at least three steady states. Whether the Poolman model itself can have three steady states is not yet clear since it is not clear whether the transverse eigenvalues (in the sense of geometric singular perturbation theory, GSPT) are all non-zero.

By analogy with known facts the following intuitive picture can be developed. Note, however, that this intuition has not yet been confirmed by proofs. In the picture one of the positive steady states of the Pettersson model is stable and the other unstable. Steady states on the boundary where some concentrations are zero are stable. Under the perturbation from the Pettersson model to the reduced Poolman model an additional stable positive steady state bifurcates from the boundary and joins the other two. This picture may be an oversimplification but I hope that it contains some grain of truth.

ECMTB 2016 in Nottingham

July 17, 2016

This past week I attended a conference of the ESMTB and the SMB in Nottingham. My accommodation was in a hall of residence on the campus and my plan was to take a tram from the train station. When I arrived it turned out that the trams were not running. I did not find out the exact reason but it seemed that it was a problem which would not be solved quickly. Instead of finding out what bus I should take and where I should take it from I checked out the possibility of walking. As it turned out it was neither unreasonably far nor complicated. Thus, following my vocation as pedestrian, I walked there.

Among the plenary talks at the conference was one by Hisashi Ohtsuki on the evolution of social norms. Although I am a great believer in the application of mathematics to many real world problems I do become a bit sceptical when the area of application goes in the direction of sociology or psychology. Accordingly I went to the talk with rather negative expectations but I was pleasantly surprised. The speaker explained how he has been able to apply evolutionary game theory to obtain insights into the evolution of cooperation in human societies under the influence of indirect reciprocity. This means that instead of the simple direct pattern ‘A helps B and thus motivates B to help A’ we have ‘C sees A helping B and hence decides to help A’ and variations on that pattern. The central idea of the work is to compare many different strategies in the context of a mathematical model and thus obtain ideas about what are the important mechanisms at work. My impression was that this is a case where mathematics has generated helpful ideas in understanding the phenomenon and that there remain a lot of interesting things to be done in that direction. It also made me reflect on my own personal strategies when interacting with other people. Apart from the interesting content the talk was also made more interesting by the speaker’s entertaining accounts of experiments which have been done to compare with the results of the modelling. During the talk the speaker mentioned self-referentially that the fact of his standing in front of us giving the talk was an example of the process of the formation of a reputation being described in the talk. As far as I am concerned he succeeded in creating a positive reputation both for himself and for his field.

Apart from this the other plenary talk which I found most interesting was by Johan van de Koppel. He was talking about pattern formation in ecology and, in particular, about his own work on pattern formation in mussel beds. A talk which I liked much less was that of Adelia Sequeira and it is perhaps interesting to ask why. She was talking about modelling of atherosclerosis. She made the valid point near the beginning of her lecture that while heart disease is a health problem of comparable importance to cancer in the developed world, the latter theme was represented much more strongly than the former at the conference. For me cancer is simply much more interesting than heart disease and this point of view is maybe more widespread. What could be the reason? One possibility is that the study of cancer involves many more conceptual aspects than that of heart disease and that this is attractive for mathematicians. Another could be that I am a lot more afraid of being diagnosed with cancer some day than of being diagnosed with heart disease, although the latter may be no less probable and no less deadly if it happens. To come back to the talk, I found that the material was too abundant and too technical and that many ideas were used without really being introduced. The consequence of these factors was that I lost interest and had difficulty not falling asleep.

In the case of the parallel talks there were seventeen sessions in parallel and I generally decided to go to whole sessions rather than trying to go to individual talks. I will make some remarks about some of the things I heard there. I found the first session I went to, on tumour-immune dynamics, rather disappointing but the last talk in the session, by Shalla Hanson was a notable exception. The subject was CAR T-cells and what mathematical modelling might contribute to improving therapy. I found both the content and the presentation excellent. The presentation packed in a lot of material but rather than being overwhelmed I found myself waiting eagerly for what would come next. During the talk I thought of a couple of questions which I might ask at the end but they were answered in due course during the lecture. It is a quality I admire in a speaker to be able to anticipate the questions which the audience may ask and answer them. I see this less as a matter of understanding the psychology of the audience (which can sometimes be important) and rather of really having got to the heart of the subject being described. There was a session on mathematical pharmacology which I found interesting, in particular the talks of Tom Snowden on systems pharmacology and that of Wilhelm Huisinga on multidrug therapies for HIV. In a session on mathematical and systems immunology Grant Lythe discussed the fascinating question of how to estimate the number of T cell clones in the body and what mathematics can contribute to this beyond just analysing the data statistically. I enjoyed the session on virus dynamics, particularly a talk by Harel Dahari on hepatitis C. In particular he told a story in which he was involved in curing one exceptional HCV patient with a one-off therapy using a substance called silibinin and real-time mathematical modelling.

I myself gave a talk about dinosaurs. Since this work is at a relatively early stage I will leave a more detailed description of it on this blog to a later date.

An eternal pedestrian

June 13, 2016

I am presently visiting Japan. My host is Atsushi Mochizuki who leads the Theoretical Biology Laboratory at RIKEN in Wako near Tokyo. RIKEN is a research organisation which was founded in 1917 using the Kaiser-Wilhelm-Gesellschaft as a model. Thus it is a kind of Japanese analogue of the Max Planck Society, which is the direct descendant of the Kaiser-Wilhelm-Gesellschaft. I had only been in Japan once before and looking at my records I see that that was in August 2005. At that time I attended a conference in Sendai, a place which I had never heard of before I went there. Since then it has become sadly famous in connection with the damage it suffered from the tsunami which also caused the Fukushima nuclear disaster. At least I had, even then, already heard of Tohoku University, which is located in the city.

Yesterday, sitting by the river in Wako, I was feeling quite meditative. I was in an area where motor vehicles are not permitted. There were not many people around but most of those who were there were on bikes. I started thinking of how this is typical of what I have experienced in many places I have been. On a walk along the Rhine in Mainz or in the surrounding countryside most of the people you see are on bikes. Copenhagen is completely dominated by bikes. In the US cars dominate. For instance when I was in Miami for a conference and was staying at the Biltmore Hotel I had to walk quite a distance to get dinner for an affordable price. In general the only people I met walking on the streets there were other conference participants. When I visited the University of California at Santa Barbara bikes were not the thing on the campus but it was typical to see students with skateboards. Summing up, I have frequently had the experience that as a pedestrian I was an exception. It seems that for normal people just putting one foot in front of the other is not the thing to do. They need some device such as a car, a bike or a skateboard to accompany them. I, on the other hand, am an eternal pedestrian. I like to walk places whenever I can. I walk twenty minutes to work each day and twenty minutes back. I find that a good way of framing the day. When I lived in Berlin there was a long period when I had a one-way travelling time of 90 minutes by train. I am glad to have that behind me. I did not consciously plan being so near to work in Mainz but I am glad it happened. Of course being a pedestrian has its limits – I could not have come to Japan on foot.

My pedestrian nature is not limited to the literal interpretation of the term. I am also an intellectual pedestrian. An example of this is described in my post on low throughput biology. Interestingly this post has got a lot of hits, more than twice as many as any other post on my blog. This is related to the theme of simple and complex models in biology. Through the talks I have given recently in Copenhagen, Berlin and here in Japan, and the resulting discussions with different people, I have become conscious of how this is a recurring theme in those parts of mathematical biology which I find interesting. The pedestrian may not get as far as others but he often sees more in the places he does reach. He may also get to places that others do not. Travelling fast along the road may cause you to overlook a valuable shortcut. Or you may go a long way in the wrong direction and need a lot of time to come back. Within mathematics one aspect of being a pedestrian is calculating things by hand as far as possible and using computers as a last resort. This reminds me of a story about the physicist John Wheeler, who had a picture of a computer on the wall in his office which he called ‘the big computer’. When he wanted to solve a difficult problem he would think about how he would programme it on the computer, and when he had done that thoroughly he had understood the problem so well that he no longer needed the computer. Thus the fact that the computer did not exist except on paper was not a disadvantage.

This is the direction I want to (continue to) go. The challenges along the road are to achieve something essential and to make clear to others, who may be sceptical, that I have done so.

Flying to Copenhagen without a carpet

May 11, 2016

This semester I have a sabbatical and I am profiting from it by travelling more than I usually do. At the moment I am visiting the group of Carsten Wiuf and Elisenda Feliu at the University of Copenhagen for two weeks. The visit here also gives me the opportunity to discuss with people at the Niels Bohr Institute. Note that the authors of the paper I quoted in the post on NF\kappaB were at the NBI when they wrote it and in particular Mogens Jensen is still there now. I gave a talk on some of my work on the Calvin cycle at NBI today. Afterwards I talked to Mogens and one of his collaborators and found out that he is still very active in modelling this system.

I was thinking about my previous visits to Copenhagen and, in particular, that the first one was on a flying carpet. The background to this is that when I was seven years old I wrote a story in school with the title ‘The Magic Carpet’. I do not have the text any more but I know it appeared in the School Magazine that year. In my own version there was also a picture which I will say more about later. But first something about the story, of which I was the hero. I bought the carpet in Peshawar and used it to visit places in the world I was interested in. For some reason I no longer know I had a great wish at that time to visit Copenhagen. Perhaps it was due to coming into contact with stories of Hans Christian Andersen. In any case it is clear that having the chance this was one of the first places I visited using the magic carpet. The picture which I drew showed something closer to home. There I can be seen sitting on the carpet, wearing the blue jersey which was my favourite at that time, while the carpet bent upwards so as to just pass over the tip of the spire of St. Magnus Cathedral in Kirkwall. In the story it was also related that one of the effects of my journey was a newspaper article reporting a case of ‘mass hallucination’. I think my teachers were impressed at my using this phrase at my age. They might have been less impressed if they had known my source for this, which was a Bugs Bunny cartoon.

During my next visit to Copenhagen in 2008 (here I am not counting changing planes there on the way to Stockholm, which I did a few times) I was at a conference at the Niels Bohr Institute in my old research field of mathematical relativity and I gave a talk in that area. Little did I think I would return there years later and talk about something completely different. I remember that there was a photo in the main lecture room in which many of the founders of quantum mechanics are sitting in the first row. From my own point of view I am happy that another person who can be seen there is Max Delbrück, a shining example of a switch from physics to biology. My next visit to Copenhagen was for the conference which I wrote about in a previous post. It was at the University. Since then a lot has happened with chemical reaction network theory and with my understanding of it. The lecture course I gave means that some of the points I mentioned in my post at that time are things I have since come to understand in some depth. I look forward to working on projects in that area with people here in the coming days.

NFκB

May 1, 2016

NF\kappaB is a transcription factor, i.e. a protein which can bind to DNA and cause a particular gene to be read more or less often. This means that more or less of a certain protein is produced and this changes the behaviour of the cell. The full name of this transcription factor is ‘nuclear factor, \kappa-light chain enhancer of B cells’. The term ‘nuclear factor’ is clear. The substance is a transcription factor and to bind to DNA it has to enter the nucleus. NF\kappaB is found in a wide variety of different cells and its association with B cells is purely historical. It was found in the lab of David Baltimore during studies of the way in which B cells are activated. It remains to explain the \kappa. B cells produce antibodies each of which consists of two symmetrical halves. Each half consists of a light and a heavy chain. The light chain comes in two variants called \kappa and \lambda. The choice of which of these a cell uses seems to be fairly random. The work in the Baltimore lab had shown that NF\kappaB could skew the ratio. I found a video by Baltimore from 2001 about NF\kappaB. This is probably quite out of date by now but it contained one thing which I found interesting. Under certain circumstances it can happen that a constant stimulus causing activation of NF\kappaB leads to oscillations in the concentration. In the video the speaker mentions ‘odd oscillations’ and comments ‘but that’s for mathematicians to enjoy themselves’. It seems that he did not believe these oscillations to be biologically important. There are reasons to believe that they might be important and I will try to explain why. At the very least it will allow me to enjoy myself.

Let me explain the usual story about how NF\kappaB is activated. There are lots of animated videos on YouTube illustrating this but I prefer a description in words. Normally NF\kappaB is found in the cytosol bound to an inhibitor I\kappaB. Under certain circumstances a complex of proteins called IKK forms. The last K stands for kinase and IKK phosphorylates I\kappaB. This causes I\kappaB to be ubiquitinated and thus marked for degradation (cf. the discussion of ubiquitin here). When it has been destroyed NF\kappaB is liberated, moves to the nucleus and binds to DNA. What are the circumstances mentioned above? There are many alternatives. For instance TNF\alpha binds to its receptor, or something stimulates a toll-like receptor. The details are not important here. What is important is that there are many different signals which can lead to the activation of NF\kappaB. What genes does NF\kappaB bind to when it is activated? Here again there are many possibilities. Thus there is a kind of bow tie configuration where there are many inputs and many outputs which are connected to a single channel of communication. So how is it possible to arrange that when one input is applied, e.g. TNF\alpha, the right genes are switched on, while another input activates other genes through the same mediator NF\kappaB? One possibility is cross-talk, i.e. that this signalling pathway interacts with others. If this cannot account for all the specificity then the remaining possibility is that information is encoded in the signal passing through NF\kappaB itself. For example, one stimulus could produce a constant response while another causes an oscillatory one. Or two stimuli could cause oscillatory responses with different frequencies. Evidently the presence of oscillations in the concentration of NF\kappaB presents an opportunity for encoding more information than would otherwise be possible. To what extent this really happens is something where I do not have an overview at the moment.
I want to learn more. In any case, oscillations have been observed in the NF\kappaB system. The primary thing which has been observed to oscillate is the concentration of NF\kappaB in the nucleus. This oscillation is a consequence of the movement of the protein between the cytosol and the nucleus. There are various mathematical models for describing these oscillations. As usual in modelling phenomena in cell biology there are models which are very big and complicated. I find it particularly interesting when some of the observations can be explained by a simple model. This is the case for NF\kappaB where a three-dimensional model and an explanation of its relations to the more complicated models can be found in a paper by Krishna, Jensen and Sneppen (PNAS 103, 10840). In the three-dimensional model the unknowns are the concentrations of NF\kappaB in the nucleus, I\kappaB in the cytoplasm and mRNA coding for I\kappaB. The oscillations in normal cells are damped but sustained oscillations can be seen in mutated cells or corresponding models.
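The flavour of such oscillations can be conveyed by a minimal sketch. To be clear, the following is not the three-dimensional model of Krishna, Jensen and Sneppen, just a generic negative feedback loop of Goodwin type with three variables (nuclear NF\kappaB drives I\kappaB mRNA, the mRNA is translated into I\kappaB protein, and the protein keeps NF\kappaB out of the nucleus); all parameter values are illustrative, not fitted to anything.

```python
# A schematic three-variable negative feedback loop in the spirit of
# (but NOT identical to) published NF-kB models: nuclear NF-kB (N)
# drives IkB mRNA (R), the mRNA is translated into IkB protein (I),
# and IkB suppresses nuclear NF-kB. Parameters are illustrative.

def step(N, R, I, dt=0.01):
    """One forward-Euler step of the feedback loop."""
    dN = 1.0 / (1.0 + I**9) - 0.5 * N   # IkB keeps NF-kB out of the nucleus
    dR = 1.0 * N - 0.5 * R              # NF-kB activates IkB transcription
    dI = 1.0 * R - 0.5 * I              # translation and degradation of IkB
    return N + dt * dN, R + dt * dR, I + dt * dI

N, R, I = 0.1, 0.1, 0.1
trajectory = []
for _ in range(20000):
    N, R, I = step(N, R, I)
    trajectory.append(N)

# The nuclear concentration overshoots and rings rather than
# approaching its steady state monotonically.
print(min(trajectory), max(trajectory))
```

The delay built into the three-stage loop is what makes the response non-monotone; sharpening the feedback (the Hill exponent) or changing the degradation rates shifts the behaviour between damped and sustained oscillations, in rough analogy to the difference between normal and mutated cells mentioned above.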

What is the function of NF\kappaB? The short answer is that it has many. On a broad level of description it plays a central role in the phenomenon of inflammation. In particular it leads to production of the cytokine IL-17 which in turn, among other things, stimulates the production of anti-microbial peptides. When these things are absent the result is a serious immunodeficiency. In one variant of this there is a mutation in the gene coding for NEMO, which is one of the proteins making up IKK. A complete absence of NEMO is fatal before birth but people with a less severe mutation in the gene do occur. There are symptoms due to things which took place during the development of the embryo and also immunological problems, such as the inability to deal with certain bacteria. The gene for NEMO is on the X chromosome so that this disease is usually limited to boys. More details can be found in the book of Geha and Notarangelo mentioned in a previous post.

Stability of steady states in models of the Calvin cycle

April 25, 2016

I have just written a paper with Stefan Disselnkötter on stationary solutions of models for the Calvin cycle and their stability. There we concentrate on the simplest models for this biological system. There were already some analytical results available on the number of positive stationary solutions (let us call them steady states for short), with the result that this number is zero, one or two in various circumstances. We were able to extend these results, in particular showing that in a model of Zhu et al. there can be two steady states or, in exceptional cases, a continuum of steady states. This is at first sight surprising since those authors stated that there is at most one steady state. However they impose the condition that the steady states should be ‘physiologically feasible’. In fact for their investigations, which are done by means of computer calculations, they assume among other things that certain Michaelis constants which occur as parameters in the system have specific numerical values. This assumption is biologically motivated but at the moment I do not understand how the numbers they give follow from the references they quote. In any case, if these values are assumed our work gives an analytical proof that there is at most one steady state.

While there are quite a lot of results in the literature on the number of steady states in systems of ODE modelling biochemical systems there is much less on the question of the stability of these steady states. It was a central motivation of our work to make some progress in this direction for the specific models of the Calvin cycle and to develop some ideas for approaching this type of question more generally. One key idea is that if it can be shown that there is a bifurcation with a one-dimensional centre manifold this can be very helpful in getting information on the stability of steady states which arise in the bifurcation. Given enough information on a sufficient number of derivatives at the bifurcation point this is a standard fact. What is interesting and perhaps less well known is that it may be possible to get conclusions without having such detailed control. One type of situation occurring in our paper is one where a stable solution and a saddle arise. This is roughly the situation of a fold bifurcation but we do not prove that it is generic. Doing so would presumably involve heavy calculations.
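The picture of a stable solution and a saddle arising together can be illustrated by the fold normal form itself, extended by one trivially stable direction. This toy system is of course not one of the Calvin cycle models; it only shows how the eigenvalues of the linearization sort the two bifurcating steady states into a stable node and a saddle.

```python
import math

# Toy illustration of a fold (saddle-node) bifurcation:
#   dx/dt = mu - x**2,   dy/dt = -y
# For mu > 0 there are two steady states, (+sqrt(mu), 0) and
# (-sqrt(mu), 0). The Jacobian is diagonal, so its eigenvalues
# can be read off directly as (-2x, -1).

def steady_states(mu):
    """Both steady states of the toy system for mu > 0."""
    r = math.sqrt(mu)
    return (r, 0.0), (-r, 0.0)

def jacobian_eigenvalues(x, y):
    """Eigenvalues of the linearization at the point (x, y)."""
    return (-2.0 * x, -1.0)

node, saddle = steady_states(0.25)
print(jacobian_eigenvalues(*node))    # both negative: a stable node
print(jacobian_eigenvalues(*saddle))  # mixed signs: a saddle
```

In a real model the Jacobian is not diagonal and the eigenvalues must be controlled indirectly; the centre manifold argument plays the role of the explicit eigenvalue -2x near zero, while the remaining eigenvalues have to be handled separately, for example by the limiting-case argument described below.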

The centre manifold calculation only controls one eigenvalue and the other important input in order to see that there is a stable steady state for at least some choice of the parameters is to prove that the remaining eigenvalues have negative real parts. This is done by considering a limiting case where the linearization simplifies and then choosing parameters close to those of the limiting case. The arguments in this paper show how wise it can be to work with the rates of the reactions as long as possible, without using species concentrations. This kind of approach is popular with many people – it has just taken me a long time to get the point.