Archive for October, 2008

Th17 cells

October 26, 2008

In a previous post I mentioned T-cells of types Th1 and Th2 and discussed their possible role in multiple sclerosis and a mathematical model for their dynamics. Both of these are types of helper T-cells carrying the molecule CD4 on their surface. In the recent past another class of CD4+ T-cells, the Tregs (regulatory T-cells), has been widely studied. In a way they are a new and more precise incarnation of the old idea of suppressor T-cells, which had lived a rather shadowy existence. Tregs are associated with the surface molecule CD25 and express the transcription factor Foxp3. They may be of central importance in autoimmune disease. Now a new class of T-cells has come onto the scene. These are the Th17 cells. As far as I know there are no T-cells with the names Th4 to Th16. The name Th3 has been used for a type of regulatory T-cells but does not seem to have gained much prominence. The name Th17 has a different origin – it is due to the fact that these cells secrete the cytokine interleukin 17. Th17 cells have been implicated in a variety of immune-mediated diseases such as psoriasis, rheumatoid arthritis, multiple sclerosis, inflammatory bowel disease and asthma. These cells were only identified in 2006, when IL-17 had already been around for ten years.

In some cases Th1 and Th17 cells seem to work side by side with similar effects. For instance both seem to have harmful effects in rheumatoid arthritis (RA). The analogue of IL-17 for the Th1 cells is tumour necrosis factor \alpha. Antibodies against TNF\alpha are now used therapeutically in the treatment of RA and some other autoimmune diseases which are usually considered to be associated with Th1 cells. So what about MS? There both Th1 and Th17 cells seem to be at work. Surprisingly, antibodies against TNF\alpha seem to have a negligible or harmful effect on MS patients. This might give some new insight into the differences between diseases with superficially similar mechanisms. Of course one may think of using antibodies against IL-17 as treatments for autoimmune diseases. In which cases will the results be the same as when TNF\alpha is targeted, and in which cases different? My hope is that divergent results could lead to deeper insights into the workings of the immune system. Perhaps it would be possible to identify some essential networks of players (cells and/or cytokines) which are susceptible to theoretical analysis.

The polarized Gowdy equation

October 22, 2008

The polarized Gowdy solutions are the simplest solutions of the vacuum Einstein equations which are dynamical and not spatially homogeneous. Physically they represent a one-dimensional configuration of polarized gravitational waves propagating in a closed and otherwise empty universe. Here ‘closed’ means that space is compact. The simplest case is that in which space is a three-dimensional torus and here I will consider only that case. These solutions represent a simple model system for developing mathematical techniques for studying the Einstein equations. This system has accompanied me in my research for many years and now I want to take a minute to stand back and reflect on it.

The central equation involved is P_{tt}+t^{-1}P_t-P_{\theta\theta}=0. This equation is also known as the Euler-Poisson-Darboux equation, but this does not seem to have helped much in the study of Gowdy solutions. This is perhaps due to the fact that the side conditions (boundary and initial conditions) are different from those of interest in other applications. In the Gowdy case P is assumed to be periodic in the spatial variable \theta and initial data can be prescribed on a hypersurface t=t_0 for some positive real number t_0. The data are of the form (P_0,P_1) where P_0 and P_1 are the restrictions of P and P_t to the initial hypersurface t=t_0. This is a linear hyperbolic equation and it follows from standard results that there is a unique smooth (i.e. C^\infty) solution corresponding to any smooth initial data set. Thus all solutions can be parametrized by data on this initial hypersurface.
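
For readers who like to experiment numerically, here is a minimal sketch (my own illustration, not taken from the literature on Gowdy spacetimes) of how periodic data (P_0,P_1) prescribed at t=t_0 can be evolved with a simple explicit finite-difference scheme; the function name and all the numerical parameters are just choices made for the example.

```python
import numpy as np

def evolve_polarized_gowdy(P0, P1, t0, t_end, n_theta=256, cfl=0.5):
    """Evolve P_tt + P_t/t - P_thetatheta = 0 with periodic data on [0, 2*pi).

    P0 and P1 are callables giving P and P_t at t = t0.  This is a rough
    explicit second-order scheme, for illustration only.
    """
    theta = 2.0 * np.pi * np.arange(n_theta) / n_theta
    dtheta = 2.0 * np.pi / n_theta
    dt = cfl * dtheta                      # CFL restriction for the wave part

    def lap(u):                            # periodic second theta-derivative
        return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dtheta**2

    P_old = P0(theta)
    # first step by Taylor expansion, using P_tt(t0) = P_thetatheta - P_t/t0
    P_cur = P_old + dt * P1(theta) + 0.5 * dt**2 * (lap(P_old) - P1(theta) / t0)
    t = t0 + dt

    while t < t_end:
        # central differences in t and theta for P_tt + P_t/t = P_thetatheta
        a = 1.0 / dt**2 + 1.0 / (2.0 * t * dt)
        b = 1.0 / dt**2 - 1.0 / (2.0 * t * dt)
        P_new = (lap(P_cur) + 2.0 * P_cur / dt**2 - b * P_old) / a
        P_old, P_cur = P_cur, P_new
        t += dt

    return t, theta, P_cur

# Example: a single Fourier mode evolved from t = 1 towards late times.
t, theta, P = evolve_polarized_gowdy(lambda th: np.cos(th),
                                     lambda th: np.zeros_like(th),
                                     t0=1.0, t_end=10.0)
```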

The asymptotics of solutions in the limit t\to 0 are well understood. Any solution has an asymptotic expansion of the form P(t,\theta)=-k(\theta)\log t+\omega(\theta)+R(t,\theta) where R(t,\theta)=o(1) as t\to 0. Moreover R has an asymptotic expansion of the form R(t,\theta)\sim\sum_{j=1}^\infty (A_j(\theta)+B_j(\theta)\log t)t^j, where all the coefficients A_j and B_j are determined uniquely in terms of k and \omega. The asymptotic expansions may be differentiated term by term as often as desired. Conversely, given smooth functions k and \omega, there is a solution P for which the leading terms in the asymptotic expansion are exactly these functions. Thus k and \omega can be used to parametrize all solutions just as well as P_0 and P_1. After I wrote this I realized that published proofs of the statement about prescribing k and \omega apparently only cover the case where k is everywhere positive. This restriction is not hard to remove using known techniques.
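
To indicate where these coefficients come from (this is just the formal calculation, with the convention A_0=\omega and B_0=-k, primes denoting \theta-derivatives): a term (A_j+B_j\log t)t^j contributes (j^2A_j+2jB_j+j^2B_j\log t)t^{j-2} to P_{tt}+t^{-1}P_t, so inserting the expansion into the equation and comparing powers of t gives the recursion j^2B_j=B_{j-2}'' and j^2A_j+2jB_j=A_{j-2}'', with vanishing right-hand side for j=1. In particular all coefficients with odd j vanish and the first non-trivial ones are B_2=-\frac14 k'' and A_2=\frac14(\omega''+k'').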

What about the limit t\to\infty? It was proved by Thomas Jurke that any solution has an asymptotic expansion of the form P(t,\theta)=A\log t+B+t^{-1/2}\nu(t,\theta)+R(t,\theta) where A and B are constants, \nu satisfies the flat space wave equation \nu_{tt}=\nu_{\theta\theta} and R(t,\theta)=O(t^{-3/2}) as t\to\infty. It was proved by Hans Ringström that given constants A and B and a solution \nu of the flat-space wave equation there is a unique solution for which the leading terms in the asymptotic expansion are given by exactly these objects. In this way a third parametrization of the solutions is obtained. To mirror what is known for the limit t\to 0 it would be good to have an asymptotic expansion for the remainder R arising in the limit t\to\infty. An expansion of this type has apparently never been derived.
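
One way to see why the flat-space wave equation shows up here (a standard observation, not something new): writing P=t^{-1/2}Q turns P_{tt}+t^{-1}P_t-P_{\theta\theta}=0 into Q_{tt}-Q_{\theta\theta}+\frac14 t^{-2}Q=0, i.e. the flat-space wave equation with a potential which decays as t\to\infty. Together with the exact \theta-independent solutions A\log t+B this at least makes the form of the leading terms plausible, although converting this heuristic into the theorems quoted above naturally requires real estimates.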

Supposing that a full expansion at late times had been obtained, could it then be said that we knew essentially everything about solutions of the polarized Gowdy equations? I am not sure, since I think there might still be some kind of intermediate asymptotics to be discovered.

Schrödinger’s ‘What is life?’

October 12, 2008

After writing the previous post I discovered a web site where Sydney Brenner talks about his life on video. He discusses many interesting things which I cannot go into here. I just want to concentrate on one subject which he mentions there, Schrödinger’s book ‘What is life?’. A large part of this book, which was published in 1944 and is available online, is concerned with the mechanism of heredity. It had a strong influence on a number of important biologists, including Francis Crick and James Watson. Thus it may be said that, at least on the level of motivation, the book made an important contribution towards the discovery of the structure of DNA. Brenner’s attitude to Schrödinger’s book is rather negative. He is of the opinion that the one who really gained important insights into heredity from the point of view of mathematics or physics was John von Neumann, following up on ideas of Alan Turing on computing. Unfortunately the biologists paid little attention to (or simply did not know about) this work of von Neumann.

Most of Schrödinger’s book is about heredity. He starts by pointing out that because of the universal presence of fluctuations in the real world a mechanical system cannot work in a reliable way unless it consists of a very large number of atoms. A system consisting of only a few atoms is too sensitive to random disturbances. This Schrödinger presents as the answer to the question of why living organisms are so big on the atomic scale. Although the fluctuations arise from quantum phenomena, this is not crucially important for the discussion. What is important is that there is some prolific source of fluctuations. Now a gene can be estimated to consist of a number of atoms which is not enormous. The question then is how genetic information can be passed on so reliably from one generation of cells to the next. The answer (in my words) is that it is a digital system. The information is stored in discrete pieces and these cannot be routinely affected by small fluctuations. This has to do with quantum theory and the presence of potential barriers which must be overcome to change from one state to another. The information in the chromosomes is encoded (in Schrödinger’s picture, which we of course now know to be correct) in chemical bonds. The stability of this system rests on the stability of the chemical bond, and this in turn is really a consequence of the quantum nature of atoms.
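
Schrödinger makes this point quantitative with what he calls the \sqrt{n} rule: for a quantity built up from n independent atoms or molecules the relative size of the statistical fluctuations is of order n^{-1/2}. A device made of a million atoms is therefore subject to relative errors of the order of one in a thousand, while one made of a handful of atoms is completely at the mercy of the fluctuations.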

Brenner’s objection to Schrödinger’s presentation is that, while in reality the genetic material only contains the information needed to make a new individual, Schrödinger does not clearly distinguish between the information and the machinery required to implement that information to replicate the cell. In modern terms, he does not distinguish between the role of DNA and that of such things as messenger and ribosomal RNA. This, apparently, von Neumann did, without of course knowing anything about the detailed mechanisms. I would tend to say that Schrödinger’s picture was not wrong, its disadvantage being that it is at too low a resolution.

I enjoyed reading most of Schrödinger’s book but I felt less comfortable when I got to chapter 6, where the concept of entropy takes center stage. Schrödinger writes (on p. 30 of the online text) ‘Let me first emphasize that it [entropy] is not a hazy concept or idea …’ This is not enough to reassure me. I often feel that even if entropy is not ‘hazy’ in principle, it does have that character in the way it is used by many physicists, not to mention others. When, in the seventh and last chapter, Schrödinger seems to leave the realm of science in the direction of religion, I feel that it no longer concerns me. Chapter 6, on the other hand, does seem to concern me and leaves me with an uneasy feeling. There may be some unfinished business for me there.

Sydney Brenner on mice and men

October 8, 2008

On 27th September 2007 I had the privilege of hearing a talk by Sydney Brenner in Berlin. Brenner is a Nobel prize winner (2002) and known for his role in deciphering the genetic code, discovering messenger RNA and launching the humble worm Caenorhabditis elegans on its brilliant career as a model organism. I find Brenner an inspiring speaker and I had the impression of encountering a very special source of knowledge and an exceptional individual. The talk was seasoned with critical comments on various aspects of modern molecular biology, including a tough one-liner directed at systems biology. He also had something more general to say about the applications of mathematics to biology – unfortunately I do not remember the details. In any case, his style of scientific argumentation reminded me of some of the best things about the way mathematics works.

Yesterday I discovered online videos of two talks by Brenner. I watched them and was glad I did. The first begins by talking about the redshift as a tool which can be used to obtain information about the distant past of our universe and asking if there exists something similar which could be used to explore the distant evolutionary past of life. Brenner points out that the genomes of many organisms have now been deciphered and asks what kind of information can be obtained from them. He mentions the following ‘inverse problem’. Suppose we were given just the genome of an organism without knowing the organism itself. Could we then reconstruct that organism? It seems that this is far beyond what can be done at the present time. He then goes on to discuss the question of comparing the genomes of different organisms and trying to define a kind of evolutionary distance between them. As a concrete example he takes the case of mice and men.

One of the main themes of the lectures is finding ways of determining the speed at which the genome is evolving in different species. He points out that the genetic data contain no arrow of time. Thus they are equally consistent with fish evolving into human beings or fish having been formed by degeneration of previously existing human beings. The external facts that allow us to decide in which direction evolution goes come from the fossil record. The data show that the mouse genome is evolving much faster than the human genome. Getting this kind of information requires comparing the genomes of more than two species.

In what way is it possible to obtain information about the evolution of the genome? Certain types of statistical analysis of the occurrence of different bases in the genomes of different species can do this. Brenner emphasizes the importance of silent mutations, those where a change in one base replaces a codon by another one corresponding to the same amino acid. An advantage of studying these mutations is the absence of selective pressure on them. What this statistics involves is anything but the routine application of standard (perhaps powerful) methods of analysis. It rather has to do with having good ideas about what patterns to look for in the data. Brenner points out that since the genome data are freely available on the internet and the computing power required is modest, it would be possible to develop home genomics. That is to say: someone could develop important new ideas for the analysis of the dynamics of the genome by playing about on their home computer.
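
As a toy illustration of the sort of thing ‘home genomics’ could mean (my own sketch, not anything Brenner describes in the talks): given two codon-aligned coding sequences, one can count at each codon differing by a single base whether the change is silent or not.

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code with codons in the order TTT, TTC, TTA, ..., GGG.
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

def silent_vs_nonsilent(seq1, seq2):
    """Count single-base codon differences between two codon-aligned coding
    sequences, split into silent (synonymous) and non-silent ones.
    A rough sketch: gaps, ambiguity codes and multiple hits are ignored."""
    silent = nonsilent = 0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        c1, c2 = seq1[i:i + 3].upper(), seq2[i:i + 3].upper()
        if c1 not in CODON_TABLE or c2 not in CODON_TABLE:
            continue                       # skip anything that is not ACGT
        if sum(a != b for a, b in zip(c1, c2)) != 1:
            continue                       # only count single-base changes
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            silent += 1                    # same amino acid: silent mutation
        else:
            nonsilent += 1
    return silent, nonsilent

# Example: CTT -> CTC (Leu -> Leu, silent); GAT -> GCT (Asp -> Ala, not silent).
print(silent_vs_nonsilent("CTTGAT", "CTCGCT"))   # prints (1, 1)
```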

Mouse fur

October 1, 2008

The Turing instability has been a popular theme in mathematical biology. There is no doubt that it is nice mathematics, but how much does it really explain in biology? Recently a detailed proposal was made by researchers from the Max Planck Institute of Immunobiology and the University of Freiburg to explain the development of hair follicles in mice on the basis of a Turing mechanism. (S. Sick, S. Reinker, J. Timmer and T. Schlake. WNT and DKK determine hair follicle spacing through a reaction-diffusion mechanism. Science 314, 1447.)

Here I want to take this as a stimulus to think again about the status of these ideas. On my bookshelf at home I have the biography of Turing by Andrew Hodges (‘Alan Turing: the enigma’). I now reread the parts concerning biology, which are not very extensive. On the basis of this text it seems that one of the things in biology that fascinated Turing most was the occurrence of Fibonacci numbers in plants. This seems to have little to do with the contribution to biology for which he became famous. He himself seems to have hoped that there would be a connection. I looked at the original paper of Turing (‘The chemical basis of morphogenesis’) but I did not learn anything new compared to modern accounts of the same subject I had seen before. The basic mathematical input is a system of reaction-diffusion equations, as described briefly in a previous post. A homogeneous steady state is considered which is stable within the class of homogeneous solutions. Then a growing mode is sought which describes the beginning of pattern formation. This is similar to what is done for the Keller-Segel system. There is a mouse in Turing’s paper, but it has nothing to do with the mouse in the title of this post. Its role is to climb on a pendulum and thus illustrate ideas about instability.
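
To recall the linearized calculation in its standard textbook form (stated here for a general two-species system, not in Turing’s original notation): for u_t=f(u,v)+D_u u_{xx}, v_t=g(u,v)+D_v v_{xx} linearized about a homogeneous steady state, a perturbation proportional to e^{ikx+\lambda t} satisfies the linearized equations when \lambda is an eigenvalue of J-k^2D, where J is the Jacobian of (f,g) at the steady state and D={\rm diag}(D_u,D_v). Stability within the class of homogeneous solutions means {\rm tr}\,J<0 and \det J>0, and a growing mode with k\neq 0 then exists precisely when D_vf_u+D_ug_v>2\sqrt{D_uD_v\det J}, which in particular forces the two diffusion constants to be sufficiently different.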

Another book I have at home is ‘Endless forms most beautiful’ by Sean Carroll. In this book, which appeared in 2005, the author explains recent ideas about embryonic development and their connections to the evolution of organisms on geological timescales. Turing is mentioned once, on p. 123, but only to dismiss the relevance of his ideas to embryology, the central theme of his paper. Carroll writes, ‘While the math and models are beautiful, none of this theory has been borne out by the discoveries of the last twenty years’. As a remaining glimmer of hope for the Turing mechanism, the diagrams on pp. 104-105 of the book might fit a Turing-type picture but concern small-scale structures. The large-scale architecture of living bodies is claimed to arise in a quite different way. The picture of the development of individual organisms presented by Carroll has a character which strikes me as digital. I do not find it attractive. I should emphasize that this is an aesthetic judgement and not a scientific one. I suppose I am just in love with the continuum.

Now I return to the article of Sick et al. A key new aspect of what they have done in comparison to previous attempts in this direction is that they are able to identify definite candidates for the substances taking part in the reaction-diffusion scenario and obtain experimental evidence supporting their suggestion. These belong to the classes Wnt and Dkk (Dickkopf). An accompanying article by Maini et al. (Science 314, 1397) is broadly positive but also adds a cautionary note. It is pointed out that similar predictions can be produced by different mathematical models. A model of Turing type may produce something that looks a lot like what is provided by a model involving chemotaxis. This is a generic danger in mathematical biology. In a given application it is important to be on the lookout for experimental data which can help to resolve this type of ambiguity.

The reaction-diffusion model used in the modelling and numerical simulations is related to a classical model of Gierer and Meinhardt. The original paper from 1972 and a great deal of information on related topics are available from this web page.
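
For anyone who wants to see the instability at work on a home computer, here is a minimal sketch of a one-dimensional simulation of an activator-inhibitor system of Gierer-Meinhardt type; the particular nondimensionalized equations and parameter values are a common textbook-style choice made for this illustration, not those of the papers discussed above.

```python
import numpy as np

# An illustrative nondimensionalized activator-inhibitor system of
# Gierer-Meinhardt type (not the exact system of Sick et al.):
#   a_t = D_a a_xx + a^2/h - a
#   h_t = D_h h_xx + (a^2 - h)/tau
# The homogeneous steady state a = h = 1 is Turing unstable for small
# D_a, large D_h and tau < 1.

D_a, D_h, tau = 0.001, 0.05, 0.5
L, N, T = 2.0, 200, 100.0
dx = L / N
dt = 0.2 * dx**2 / D_h                    # explicit scheme, so keep dt small

def lap(u):                               # periodic one-dimensional Laplacian
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(N)   # small noise about the steady state
h = np.ones(N)

t = 0.0
while t < T:
    a_new = a + dt * (D_a * lap(a) + a**2 / h - a)
    h_new = h + dt * (D_h * lap(h) + (a**2 - h) / tau)
    a, h = a_new, h_new
    t += dt

# The activator a now shows a spatially periodic arrangement of peaks;
# plot it against np.linspace(0, L, N, endpoint=False) to see the pattern.
```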

