## Archive for September, 2009

### Formation of black holes in vacuum, part 2

September 22, 2009

I have just returned from a conference at the Mathematical Sciences Research Institute (MSRI) in Berkeley with the title ‘Hot topics: black holes in relativity’. The central theme of this conference was the work of Demetrios Christodoulou on the formation of black holes in vacuum, which I discussed in a previous post.

On the first day of the conference I gave a talk on the characteristic initial value problem in general relativity. This was based on a paper I wrote more than twenty years ago (Proc. R. Soc. Lond. A427, 221 – I find it difficult to believe that it has been so long). The result of this paper is used in Christodoulou’s work and this was the main justification for the talk. In the ordinary initial value problem (Cauchy problem) for a hyperbolic system, or for the Einstein equations, initial data are prescribed on a spacelike hypersurface. The idea of the characteristic initial value problem is instead to prescribe data on a characteristic hypersurface. In fact it is necessary to use either a singular characteristic hypersurface (such as a cone) or a pair of smooth hypersurfaces which intersect transversely. The result of Christodoulou is formulated in terms of the first of these possibilities, with data prescribed on a light cone. However, he assumes that these data coincide with flat space data near the vertex of the cone, which allows the problem to be reduced to the second, easier possibility, and it is the latter which I treated in my paper. In the result of that paper, which applies to smooth initial data, existence and uniqueness results for the characteristic initial value problem are deduced from the corresponding results for the Cauchy problem. In the ordinary Cauchy problem for the Einstein equations it is necessary to solve the constraint equations, which means solving an elliptic problem. In the characteristic case the constraints reduce to a hierarchical system of ordinary differential equations, which can be a big advantage.
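To give a flavour of this hierarchical structure (a schematic illustration of the general mechanism, not the precise system appearing in the paper): along the null geodesic generators of the characteristic hypersurface, with affine parameter $s$, the first constraint is the Raychaudhuri equation, which in vacuum takes the form

$\frac{d\theta}{ds} = -\frac{1}{2}\theta^2 - |\hat\chi|^2$

where $\theta$ is the expansion and $\hat\chi$ the shear of the generators. Once the shear has been determined from the freely prescribable data this is an ordinary differential equation for $\theta$ along each generator, and the remaining constraint quantities can then be obtained by integrating further ODEs one after another.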

During the conference Christodoulou gave five talks about his theorem and its proof and I found these very enlightening. I feel I have a much better understanding of the basics of this work now than I did before. One aspect of the result is that the data used are in one sense small (close to flat space data) and in another sense large. If they were small in a sufficiently strong sense then this should lead to a global existence result which in particular rules out the formation of black holes due to the theorem of Christodoulou and Klainerman on the stability of Minkowski space. On the other hand the interpretation of the result (formation of a trapped surface starting from a weak-field situation) requires that the data be small in some sense. Combining these two requirements (smallness and largeness in different senses) is a key feature of the theorem. It is also the case that the data are in some sense close to being spherically symmetric but in another sense far from spherical symmetry. Intuitively, it is necessary to have data which represent a sufficiently strong pulse of gravitational radiation. Spherical symmetry rules out gravitational radiation and this might be extrapolated to say that being close to spherical symmetry means restricting to a small amount of radiation.

In the proof of the theorem the solution is parametrized in the following way. The initial hypersurface is a null cone $C_0$. It can be foliated by surfaces which are of constant affine distance from the vertex. Through each of these there is a null hypersurface transverse to $C_0$ which is taken to be a level hypersurface of a function $\underline{u}$. This function agrees with the affine distance (suitably normalized) on $C_0$. A function $u$ is defined to be constant on the null cones of the points on a timelike curve passing through the vertex of the cone. Things are always set up so that these null hypersurfaces have no caustics. The two functions define a foliation by spheres by means of the intersections of their level hypersurfaces. This foliation is in a sense the analogue of that by symmetry orbits in a spherically symmetric problem. The fact that the problem is almost spherically symmetric is reflected in the fact that the Gaussian curvature of these spheres is almost constant. Note that the gradients of the functions $u$ and $\underline{u}$ do not commute as vector fields in general. Thus the planes they span are not tangent to a family of surfaces, and this is an important difference from spherical symmetry.
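To be a little more concrete, $u$ and $\underline{u}$ are optical functions, i.e. each is a solution of the eikonal equation

$g^{\alpha\beta}\partial_\alpha u\,\partial_\beta u = 0,$

so that its level hypersurfaces are null. The spheres of the foliation are then the intersections $S_{u,\underline{u}}$ of a level hypersurface of $u$ with one of $\underline{u}$.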

The initial data are such that a suitable energy density on the cone changes suddenly from zero to a sufficiently large value. This is the basis of the short pulse method, which is the central new technique in the proof of the theorem. What is this energy density? It is the squared norm of the trace-free part of the second fundamental form of the spheres in the direction along the cone.
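As I understand it, the ansatz can be summarized schematically in terms of a small parameter $\delta$ (the exact normalizations are those of Christodoulou's monograph). On an interval of affine parameter of length $\delta$ the freely prescribed data have amplitude of order $\delta^{1/2}$, so that the shear $\hat\chi$, which involves a derivative along the generators, has magnitude of order $\delta^{-1/2}$. The integrated energy density is then

$\int_0^\delta |\hat\chi|^2\, ds = O(1),$

uniformly as $\delta \to 0$. In this way the data are small in one sense (supported on a short interval, close to flat space data in a weak norm) and large in another (the derivatives, and hence the energy flux, are large).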

When Christodoulou had completed his last lecture someone in the audience asked, ‘What’s next?’ In reply he announced that this had been his last project in general relativity, which came as quite a shock to the audience. The word ‘announced’ is perhaps not appropriate since it sounds too formal – he just said it spontaneously. This is sad news for the field of mathematical relativity but perhaps it is less sad in a wider context. After all, Christodoulou has a number of fascinating projects he is working on in other areas. At the same time the theorem I have been talking about here will probably be a beginning rather than an end. At the conference Igor Rodnianski gave a talk on work he has been doing with Sergiu Klainerman aimed at generalizing this result while understanding it more deeply. I look forward to seeing where that will lead.

### The transcription factor T-bet

September 12, 2009

In the previous post I discussed tests for diseases. It is good to be able to get a high degree of certainty about whether an individual has a certain disease and this may have important implications for treatment. Under favourable circumstances a test may do more. Suppose that for a disease X there is a drug Y available which has the following properties. There is some, at least rough, quantitative measure for the severity of the disease at a given time. It is known that on average the severity of the disease is reduced by a significant percentage (e.g. 30%) when the drug is taken. It might, however, be that this average results from a few patients with a very large reduction in severity and a large number of patients with a small reduction in severity. Suppose that the drug is very expensive and has unpleasant side effects so that there is strong motivation not to prescribe it if it is not going to have a major benefit. Then it would be very useful to have a test which identifies those patients who are going to benefit before treatment is started.
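A toy computation (with entirely hypothetical numbers) illustrates how the same average effect can arise from very different distributions of individual responses:

```python
# Hypothetical severity reductions (in percent) for two cohorts of ten
# patients, both showing the same 30% average reduction under the drug.
import statistics

uniform = [30] * 10                # every patient improves moderately
skewed = [86, 86, 86] + [6] * 7    # three strong responders, seven weak ones

# The means are identical, but the typical (median) patient fares very
# differently in the two cohorts.
print(statistics.mean(uniform), statistics.mean(skewed))      # 30 30
print(statistics.median(uniform), statistics.median(skewed))  # 30.0 6.0
```

With data like the second cohort, a prognostic test that identifies the three strong responders before treatment starts would clearly be valuable.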

A concrete example of the above is where the disease X is multiple sclerosis and the drug Y is one based on interferon $\beta$. It has been suggested in a recent paper of Drulovic et al. in the Journal of Neuroimmunology that the expression of a substance called T-bet in mononuclear cells in the blood may be prognostic for a good response to interferon $\beta$ treatment of MS patients. (A mononuclear cell is a white blood cell which is not a polymorphonuclear granulocyte. To say that T-bet is expressed in these cells means that they produce it.) Two other potential candidates for substances which might be prognostic, interferon $\gamma$ and interleukin 17, do not give the same promising results.

So what is T-bet? The name, which was introduced in 2000, stands for ‘T-box expressed in T cells’. The name T-box denotes a class of genes which code for transcription factors. Note that the same name is often used for a gene and the protein it codes for. A transcription factor is a protein which binds to DNA and increases or decreases the extent to which certain genes are transcribed into RNA. It therefore influences the amount of the corresponding protein which the cell produces. Transcription factors are important for cell differentiation. All cells in the body have the same genes (almost – I do not want to get into exceptions here) and it is the expression of the genes in a given cell which determines which type of cell it is. As has been mentioned in previous posts T helper cells come in at least two types, Th1 and Th2, and a shift to a higher proportion of Th1 cells seems to be bad for MS. The substances T-bet and interferon $\gamma$ are involved in the process of differentiation, pushing the cells towards Th1. In particular, T-bet seems to be the master regulator of this process. The details of the process are complicated. A recent paper by Edda Schulz, Luca Mariani, Andreas Radbruch and Thomas Höfer (Immunity 30, 673) has studied this both experimentally and theoretically. The theoretical part uses a mathematical (ODE) model. As mentioned in a previous post there is also another class of T cells called Th17. The contrast between IL-17 and T-bet in MS described above seems to indicate that in this disease Th1 cells are more important than Th17 cells.
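The model of Schulz et al. is more elaborate than anything I can reproduce here, but the qualitative idea of a master regulator driving a switch-like differentiation decision can be illustrated by a minimal one-variable caricature (the Hill-type feedback and all parameter values below are hypothetical, chosen only to produce bistability):

```python
# Minimal caricature of a bistable "master regulator": basal production,
# positive feedback through a Hill term, and linear decay. Not the model
# of Schulz et al.; the functional form and parameters are illustrative.

def dxdt(x, a=0.05, b=4.0, K=1.0, n=2):
    """Rate of change of the regulator concentration x."""
    return a + b * x**n / (K**n + x**n) - x

def simulate(x0, t_end=50.0, dt=0.01):
    """Forward-Euler integration of the model from initial value x0."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * dxdt(x)
    return x

low = simulate(0.0)   # below the unstable threshold: settles in the low state
high = simulate(1.0)  # above the threshold: feedback drives the high state
print(round(low, 2), round(high, 2))
```

Two stable steady states separated by an unstable threshold is the standard mathematical picture behind an all-or-nothing commitment to a fate such as Th1, with the feedback loop playing the role of T-bet reinforcing its own expression.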