Archive for the ‘Uncategorized’ Category

My COVID-19 vaccination

May 14, 2021

In Germany vaccination against COVID-19 has been going slowly compared to some other places. One of the leading vaccines was developed by Biontech, a company based in Mainz, which is where I live. Because of this, and because of my impression that that vaccine is very good, my first choice would have been to get the Biontech vaccine. In the end things turned out differently. The Biontech vaccine is the one which has been given most often in Germany, but the number of vaccinations has been limited by the available supply. The vaccine developed by AstraZeneca has also been used here but has acquired a bad reputation. There were several reasons for that. One is the company's poor relations with the media. Another is the occurrence of rare side effects involving thromboses. A third is that the publicly quoted efficacy in preventing the illness is much lower for AstraZeneca than for Biontech (80% compared to 95%). A fourth is that the waiting time between the first and second injections is longer for AstraZeneca (12 weeks instead of 6). Despite these things my wife and I decided to get vaccinated with the product from AstraZeneca and had our first injections today. Here I want to explain why we did so and discuss some more general related issues.

I already mentioned that AstraZeneca became unpopular. In connection with the thromboses it was only recommended by the Paul Ehrlich Institute (the official body for such things in Germany) for people over sixty. The argument was that since people over sixty had a higher probability of serious consequences if they got infected, the net benefit was positive for them. Doctors were having problems getting rid of their doses. However the situation is very dynamic. Although AstraZeneca is still not recommended for people under 60 (like myself), and is not administered to them in the official vaccination centres, doctors (GPs and specialists) are allowed to vaccinate people under 60 with that product provided they inform them about the risks. They are also allowed to reduce the waiting time between the injections. These political moves have led to many people (particularly young people) wanting to get vaccinated with AstraZeneca. The result is that now, instead of being hard to get rid of, the AstraZeneca vaccine is scarce. Thus there is now a lot of competition and the whole process is quite chaotic. We were lucky that my wife was offered a vaccination with AstraZeneca at a time when this seemed like asking for a favour rather than offering to do one. She asked if I could also be vaccinated by the same doctor and got a positive answer. This was our good fortune.

It was still not clear to us whether we should accept the offer. It is in the nature of the situation that many things concerning the disease and the vaccination are simply not known. On the other hand the amount of information available is huge. All this makes a rational decision difficult. From my point of view the thromboses were so rare that they did not influence my decision at all. The combination of the earlier availability of AstraZeneca for us with the longer waiting time meant that it was not clear which option (taking AstraZeneca now or waiting for Biontech) would lead to being fully immunized sooner. We did not want to shorten the interval for the AstraZeneca vaccination (which our doctor would have accepted) for a reason I will discuss later. One reason for making the choice we did was that it gave us the feeling of finally making some progress. In comparing the efficacies of the two products it is important to know that if the criterion considered is the risk of serious illness or hospitalization, then according to the official figures AstraZeneca is at least as good as Biontech (both at least 95%). In any case, it was a good feeling to know that we had received one dose of the vaccine. A few hours later I have not noticed any side effects whatsoever.

Let me return to the question of shortening the time between the injections for AstraZeneca. There is some evidence that this decreases the efficacy of the vaccination, and I have read about two possible mechanisms for this which seem to me at least plausible. The first is connected with the phenomenon of affinity maturation. The first B cells which are activated in response to a certain antigen can undergo somewhat random mutations. This means that the population of B cells becomes rather heterogeneous. The different variants then compete with each other in such a way that those which bind most strongly to the antigen come to dominate the population. In this way the quality of the antibodies is increased. If a second vaccination is given too soon it can interrupt the affinity maturation initiated by the first vaccination before the antibodies have been optimized. The second mechanism is as follows. The immune response against the antigen remains active for some time after the vaccination. If a second vaccination is given while that is still the case, then the antibodies generated in response to the first vaccination can bind to the vector viruses coming from the second vaccination and prevent them from achieving their intended purpose. These are not established facts, but I prefer to have plausible hypotheses rather than no possible explanation at all.


Computer-assisted proofs

June 26, 2015

My spontaneous reaction to a computer-assisted proof is to regard it as having a lesser status than one done by hand. Here I want to consider why I feel this way and whether, and under what circumstances, this point of view is justified. I start by considering the situation of a traditional mathematical proof, done by hand and documented in journal articles or books. In this context it is impossible to write out all details of the proof. Somehow the aim is to bridge the gap between the new result and what experts in the area are already convinced of. This general difficulty becomes more acute when the proof is very long and parts of it are quite repetitive. There is a tendency for the author to say that the next step is strictly analogous to the previous one, and, even when the next step is written out, a tendency for the reader to assume that it is strictly analogous and to gloss over it. Human beings (a class which includes mathematicians) make mistakes and have a limited capacity to concentrate. To sum up, a traditional proof is never perfect, and very long and repetitive proofs are likely to be less perfect than others. So what is it that often makes a traditional proof so convincing? I think that in the end it is its embedding in a certain context. An experienced mathematician has met with countless examples of proofs, his own and those of others, errors large and small in those proofs, and ways in which they can often be repaired. This is complemented by experience of the interactions between different mathematicians and their texts. These things give a basis for judging the validity of a proof which is by no means exclusively on the level of explicit logical argumentation.

How does this compare with computer-assisted proofs? The first point to be raised is what is meant by that phrase. Let me start with a rather trivial example. Suppose I use a computer to calculate the kth digit of n factorial, where k and n are quite large. If for given choices of the numbers a well-tested computer programme can give me the answer in one minute, then I will not doubt the answer. Why is this? Because I believe that the answer comes from an algorithm which determines a unique answer. No approximations or floating point operations are involved. For me interval arithmetic, which I discussed in a previous post, is on the same level of credibility, namely the level of a computer-free proof. There could be an error in the hardware or the software or the programme, but this is not essentially different from the uncertainties connected with a traditional proof. So what might be the problem in other cases? One problem is that of transparency. If a computer-assisted proof is to be convincing for me then I must either understand what algorithm the computer is supposed to be implementing or at least have the impression that I could do so if I invested some time and effort. Thus the question arises to what extent this aspect is documented in a given case. There is also the issue of the loss of context which I mentioned previously. Suppose I believe that showing that the answer to a certain question is ‘yes’ in 1000 cases constitutes a proof of a certain theorem, but that checking these cases is so arduous that a human being can hardly do so. Suppose further that I understand an algorithm which, if implemented, can carry out this task on a computer. Will I then be convinced? I think the answer is that I will, but I am still likely to be left with an uncomfortable feeling if I do not have the opportunity to see the details in a given case when I want to.
In addition to the question of whether the nature of the application is documented, there is the question of whether this has been done in a way that is sufficiently palatable that mathematicians will actually study the documentation carefully. Rather than remain on the level of generalities I prefer now to go over to an example.
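A computation of the kind described above, the kth digit of n factorial, really does involve only exact integer arithmetic, and can be sketched in a few lines of Python (the function name is my own invention):

```python
import math

def kth_digit_of_factorial(n, k):
    """Return the k-th digit (counting from the right, 1-indexed) of n!.

    Only exact integer arithmetic is used, so the answer is determined
    by the algorithm alone -- no rounding or floating point is involved.
    """
    return (math.factorial(n) // 10 ** (k - 1)) % 10

# 10! = 3628800, so the third digit from the right is 8
print(kth_digit_of_factorial(10, 3))
```

This is exactly the situation where an error could only come from the hardware, the interpreter, or the programme itself, not from any approximation.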

Perhaps the most famous computer-assisted proof is that of the four colour theorem by Appel and Haken. To fill myself in on the background to this subject I read the book ‘Four Colors Suffice’ by Robin Wilson. The original problem is to colour a map in such a way that no two countries with a common border have the same colour. The statement of the theorem is that this is always possible with four colours. This statement can be reformulated as a question in graph theory. Here I am not interested in the details of how this reformulation is carried out. The intuitive idea is to associate a vertex to each country and an edge to each common border. Then the problem becomes that of colouring the vertices of a planar graph in such a way that no two adjacent vertices have the same colour. From now on I take this graph-theoretic statement as basic. (Unfortunately it is in fact not just a graph-theoretic, in particular combinatorial, statement, since we are talking about planar graphs.) What I am interested in is not so much the problem itself as what it can teach us about computer-assisted proofs in general. I found the book of Wilson very entertaining but I was disappointed by the fact that he consistently avoids going over to the graph-theoretic formulation, which I find more transparent (that word again). In an article in the Notices of the AMS (44, 848) Robin Thomas uses the graph-theoretic formulation more, but I still cannot say I understood the structure of the proof on the coarsest scale. Thomas does write that in his own simplified version of the original proof the contribution of computers only involves integer arithmetic. Thus this proof does seem to belong to the category of things I said above I would tend to accept as a mathematical proof, modulo the fact that I would have to invest the time and effort to understand the algorithm. There is also a ‘computer-checked proof’ of the four colour theorem by Georges Gonthier.
I found this text interesting to look at but felt as if I was quickly getting into logical deep waters. I do not really understand what is going on there.
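One part of such a case-checking proof is at least easy to mechanize and to inspect: verifying that a proposed colouring of a graph is proper. A minimal sketch (the graph and the names are invented for illustration):

```python
def is_proper_colouring(edges, colour):
    """Check that no edge joins two vertices of the same colour."""
    return all(colour[u] != colour[v] for u, v in edges)

# K4, the complete graph on four vertices, is planar and needs all four colours
edges_k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_proper_colouring(edges_k4, [0, 1, 2, 3]))  # True
print(is_proper_colouring(edges_k4, [0, 1, 2, 2]))  # False: vertices 2 and 3 clash
```

The hard and opaque part of the Appel-Haken argument is of course not this check but the reduction to finitely many configurations; the point of the sketch is only that the checking step is the kind of transparent algorithm I said I would accept.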

To sum up this discussion, I am afraid that in the end the four colour problem was not the right example for me to start with, and that I need to take some other example which is closer to mathematical topics I know better and perhaps also further from having been formalized and documented.

Organizing posts by categories

August 25, 2012

I have a tendency to use the minimum of technology needed to achieve a particular goal. So for instance, having been posting things on this blog for several years, I have made use of hardly any of the technical possibilities available. Among other things I did not assign my posts to categories, just putting them in one long list. I can well understand that not everyone who wants to read about immunology wants to read about general relativity, and vice versa. Hence it is useful to have a sorting mechanism which can help to direct people to what they are interested in. Now I have invested the effort to add information on categories to most of the posts. It was easy (though time-consuming) to do and I find the results useful. It helps me to navigate through the material myself and it is interesting to see at a glance how many posts there are on which subjects. From now on I will systematically assign (most) new posts to a category, and the effort of doing so should be negligible. This post is an exception since it does not really fit into any category I have.

Do you know these matrices?

March 9, 2012

I have come across a class of matrices with some interesting properties. I feel that they must be known but I have not been able to find anything written about them. This is probably just because I do not know the right place to look. I will describe these matrices here and I hope that somebody will be able to point out a source where I can find more information about them. Consider an n\times n matrix A with elements a_{ij} having the following properties. The elements with i=j (call them b_i) are negative. The elements with j=i+1\ {\rm mod}\ n (call them c_i) are positive. All other elements are zero. The determinant of a matrix of this type is \prod_i b_i+(-1)^{n+1}\prod_i c_i. Notice that the two terms in this sum always have opposite signs. A property of these matrices which I found surprising is that B=(-1)^{n+1}(\det A)A^{-1} is a positive matrix, i.e. all its entries b_{ij} are positive. In proving this it is useful to note that the definition of the class is invariant under cyclic permutation of the indices. Therefore it is enough to show that the entries in the first column of B are positive. Removing the first row and the first column from A leaves a matrix belonging to the class originally considered. Removing the first row and a column other than the first from A leaves a matrix where a_{n1} is alone in its column. Thus the determinant can be expanded about that element. The result is that we are left to compute the determinant of an (n-2)\times (n-2) matrix which is block diagonal, with the first diagonal block belonging to the class originally considered and the second diagonal block being the transpose of a matrix of that class. With these remarks it is then easy to compute the determinant of the (n-1)\times (n-1) matrix resulting in each of these cases. In more detail, b_{11}=(-1)^{n+1}b_2b_3\ldots b_n and b_{j1}=(-1)^{j}b_2b_3\ldots b_{j-1}c_j\ldots c_n for j>1.
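These claims are easy to test numerically. The following sketch checks the determinant formula and the positivity of B for one sample matrix of the class, using exact rational arithmetic; the entries are chosen arbitrarily and this is an illustration, not a proof:

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row; fine for small n
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def build(b, c):
    # a_ii = b_i (negative), a_{i,i+1 mod n} = c_i (positive), all else zero
    n = len(b)
    A = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = Fraction(b[i])
        A[i][(i + 1) % n] = Fraction(c[i])
    return A

b, c = [-2, -3, -1, -4], [5, 1, 2, 3]   # arbitrary sample entries
n = len(b)
A = build(b, c)

# determinant formula: prod_i b_i + (-1)^(n+1) prod_i c_i
prod_b = prod_c = Fraction(1)
for i in range(n):
    prod_b, prod_c = prod_b * b[i], prod_c * c[i]
assert det(A) == prod_b + (-1) ** (n + 1) * prod_c

def cofactor(M, i, j):
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
    return (-1) ** (i + j) * det(minor)

# Since A^(-1) = adj(A)/det(A) and adj(A)_{ij} is the (j,i) cofactor,
# the entries of B = (-1)^(n+1) (det A) A^(-1) need no division at all.
B = [[(-1) ** (n + 1) * cofactor(A, j, i) for j in range(n)] for i in range(n)]
assert all(entry > 0 for row in B for entry in row)
```

Running this for various choices of b and c (and various n) is a quick way to catch sign errors in the cofactor formulas.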

Knowing the positivity of (-1)^{n+1}(\det A)A^{-1} means that it is possible to apply the Perron-Frobenius theorem to this matrix. In the case that \det A has the same sign as (-1)^{n+1} it follows that A^{-1} has an eigenvector all of whose entries are positive. The corresponding eigenvalue is positive and larger in magnitude than any other eigenvalue of A^{-1}. This vector is also an eigenvector of A, with a positive eigenvalue. Looking at the characteristic polynomial it is easy to see that if (-1)^n(b_1b_2\ldots b_n+(-1)^{n+1}c_1c_2\ldots c_n)<0 then the matrix A has exactly one positive eigenvalue and none of its eigenvalues is zero.
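As a numerical sanity check (again not a proof), one can run inverse power iteration on a sample matrix of the class satisfying the sign condition and confirm that the limiting eigenvector of A is positive with a positive eigenvalue. The sample entries and the helper `solve` are my own:

```python
# A as in the text: negative diagonal b_i, positive cyclic superdiagonal c_i.
# Here det A = 24 - 30 = -6, which has the same sign as (-1)^(n+1) = -1.
b, c = [-2.0, -3.0, -1.0, -4.0], [5.0, 1.0, 2.0, 3.0]
n = len(b)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = b[i]
    A[i][(i + 1) % n] = c[i]

def solve(A, rhs):
    # naive Gaussian elimination with partial pivoting: returns x with A x = rhs
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# power iteration on A^(-1): repeatedly solve A w = v
v = [1.0] * n
for _ in range(500):
    w = solve(A, v)
    s = sum(abs(x) for x in w)
    v = [x / s for x in w]
v = [x if v[0] > 0 else -x for x in v]   # fix the overall sign

Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
mu = Av[0] / v[0]
assert all(x > 0 for x in v)   # positive eigenvector of A
assert mu > 0                  # with a positive eigenvalue
assert all(abs(Av[i] - mu * v[i]) < 1e-8 for i in range(n))
```

Power iteration on A^{-1} converges here precisely because, by Perron-Frobenius applied to B, the eigenvalue of A^{-1} of largest magnitude is simple.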

The Perron-Frobenius theorem

October 20, 2011

The Perron-Frobenius theorem is a result in linear algebra which I have known about for a long time. On the other hand I never took the time to study a proof carefully and think about why the result holds. I was now motivated to change this by my interest in chemical reaction network theory (CRNT) and the realization that the Perron-Frobenius theorem plays a central role there. In particular, it lies at the heart of the original proof of the existence part of the deficiency zero theorem. Here I will review some facts related to the Perron-Frobenius theorem and its proof.

Let A be a square matrix all of whose entries are positive. Note how this condition makes no sense for an endomorphism of a vector space in the absence of a preferred basis. Then A has a positive eigenvalue \lambda_+ and it is bigger than the magnitude of any other eigenvalue. The dimension of the generalized eigenspace corresponding to this eigenvalue is one. There is a vector in the eigenspace all of whose components are positive. Let C_i be the sum of the entries in the ith column of A. Then \lambda_+ lies between the minimum and the maximum of the C_i.
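These statements are easy to see in action on an example. The following sketch (entries chosen arbitrarily) approximates \lambda_+ by power iteration and checks the column-sum bounds and the positivity of the eigenvector:

```python
# a strictly positive sample matrix
A = [[1.0, 2.0, 0.5],
     [0.5, 1.0, 3.0],
     [2.0, 0.5, 1.0]]
n = len(A)

# power iteration, normalizing v to have sum 1; after convergence
# lam = sum(A v) approximates the Perron eigenvalue lambda_+
v = [1.0] * n
lam = 0.0
for _ in range(500):
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(w)
    v = [x / lam for x in w]

# lambda_+ lies between the smallest and largest column sums,
# and the eigenvector has positive components
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
assert min(col_sums) <= lam <= max(col_sums)
assert all(x > 0 for x in v)
```

The normalization by the sum works here because all the quantities stay positive, which is itself a manifestation of the positivity assumption on A.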

If the assumption on A is weakened to its having non-negative entries then most of the properties listed above are lost. However analogues can be obtained if the matrix is irreducible. This means by definition that the matrix has no proper non-trivial invariant coordinate subspace. In that case A has a positive eigenvalue which is at least as big as the magnitude of any other eigenvalue. As in the positive case it has multiplicity one. There is a vector in the eigenspace all of whose elements are positive. In general there are other eigenvalues of the same magnitude as the maximal positive eigenvalue, and they are related to it by multiplication with powers of a root of unity. The estimate for the maximal positive eigenvalue in terms of column sums remains true. The last statement follows from the continuous dependence of the eigenvalues on the matrix.
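Irreducibility can be rephrased in graph terms: the directed graph with an edge from i to j whenever a_{ij}>0 must be strongly connected. A sketch of such a check (the function names are my own):

```python
def is_irreducible(A):
    """Check irreducibility of a non-negative matrix via strong connectivity
    of the directed graph with an edge i -> j whenever A[i][j] > 0."""
    n = len(A)

    def reachable(start, has_edge):
        seen = {start}
        stack = [start]
        while stack:
            u = stack.pop()
            for w in range(n):
                if has_edge(u, w) and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    # strongly connected iff every vertex is reachable from vertex 0
    # in the graph and in the reversed graph
    forward = reachable(0, lambda u, w: A[u][w] > 0)
    backward = reachable(0, lambda u, w: A[w][u] > 0)
    return len(forward) == n and len(backward) == n

# a cyclic permutation matrix: irreducible, Perron eigenvalue 1 with
# eigenvector (1,1,1); the other eigenvalues are the remaining cube
# roots of unity, so of the same magnitude as the Perron eigenvalue
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
assert is_irreducible(P)
assert [sum(P[i][j] for j in range(3)) for i in range(3)] == [1, 1, 1]

# a reducible example: upper triangular, span(e_1) is invariant
assert not is_irreducible([[1, 1], [0, 1]])
```

The permutation matrix P also illustrates the root-of-unity phenomenon mentioned above in its purest form.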

Suppose now that a matrix B has the properties that its off-diagonal elements are non-negative and that the sum of the elements in each of its columns is zero. Then the sum of the elements in each column of a matrix of the form B+\lambda I is \lambda. On the other hand for \lambda sufficiently large the entries of the matrix B+\lambda I are non-negative. If B is irreducible then it can be concluded that the Perron eigenvalue of B+\lambda I is \lambda, that the kernel of B is one-dimensional and that it is spanned by a vector all of whose components are positive. In the proof of the deficiency zero theorem this is applied to certain restrictions of the kinetic matrix. The irreducibility property of B follows from the fact that the network is weakly reversible.
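A minimal numerical illustration of this argument, with an invented 2\times 2 matrix playing the role of the kinetic matrix:

```python
# B has non-negative off-diagonal entries and zero column sums
# (a toy "kinetic matrix"; the entries are invented for illustration)
B = [[-1.0, 2.0],
     [1.0, -2.0]]
assert all(sum(B[i][j] for i in range(2)) == 0.0 for j in range(2))

# B + lam*I is non-negative for lam large enough, with column sums lam,
# so its Perron eigenvalue is lam
lam = 3.0
M = [[B[i][j] + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
assert all(M[i][j] >= 0 for i in range(2) for j in range(2))

# the kernel of B is spanned by a vector with positive components,
# which is the Perron eigenvector of B + lam*I
v = [2.0, 1.0]
Bv = [sum(B[i][j] * v[j] for j in range(2)) for i in range(2)]
assert Bv == [0.0, 0.0]
Mv = [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
assert Mv == [lam * v[0], lam * v[1]]
```

In the deficiency zero theorem the matrix B is of course larger and comes from the reaction network, but the shift-by-\lambda trick is exactly the one carried out here.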

The Perron-Frobenius theorem is proved in Gantmacher’s book on matrices. He proves the non-negative case first and uses that as a basis for the positive case. I would have preferred to see a proof for the positive case in isolation. I was not able to extract a simple conceptual picture which I found useful. I have seen some mention of the possibility of applying the Brouwer fixed point theorem but I did not find a complete treatment of this kind of approach written anywhere. There is an infinite-dimensional version of the theorem (the Krein-Rutman theorem). It applies to compact operators on a Banach space which satisfy a suitable positivity condition. In fact this throws some light on the point raised above concerning a preferred basis. Some extra structure is necessary but it does not need to be as much as a basis. What is needed is a positive cone. Let K be the set of vectors in n-dimensional Euclidean space, all of whose components are non-negative. A matrix is non-negative if and only if it leaves K invariant and this is something which can reasonably be generalized to infinite dimensions. Thus the set K is the only extra structure which is required.

Me on TV

November 26, 2010

Recently I was interviewed by TV journalists for a documentary on the channel 3sat called “Rätsel Dunkle Materie” [The riddle of dark matter]. It was broadcast yesterday. Before I say more about my experience with this let me do a flashback to the only other time in my life I appeared on TV. On that occasion the BBC visited our school. I guess I was perhaps twelve at the time, although I do not know for sure. I was filmed reading a poem which I had written myself. I was seen sitting in a window of the Bishop's Palace in Kirkwall, looking out. I suppose only my silhouette was visible. I no longer have the text of the poem. All I know is that the first line was ‘Björn, adventuring at last’ and that later on there was some stuff about ravens. At that time I was keen on Vikings. The poem was no doubt very heroic, so that the pose looking out of the window was appropriate.

Coming back to yesterday, the documentary consisted of three main elements. There was a studio discussion with three guests – the only one I know personally is Simon White. There were some clips illustrating certain ideas. Thirdly there were short sequences from interviews with some other people, of whom I was one. They showed a few short extracts of the interview with me and I was quite happy with the selection they made. This means, conversely, that they nicely cut out things which I might not have liked so much. I was answering questions posed by one of the journalists, questions which were not heard on TV. They told me in advance that this would be the case and that for this reason I should not refer to the question during my answers. I found this difficult to do and I think I would need some practice to do it effectively. Fortunately it seems that they efficiently cut out these imperfections. I did not know the questions in advance of the filming and this led to some hesitant starts in my answers. This also did not come through too much in what was shown. Summing up, it was an interesting experience and I would do it again if I had the chance. Of course being a studio guest would be even more interesting …

I found the documentary itself not so bad. I could have done without the part about religion at the end. Perhaps the inclusion of this is connected with the fact that the presenter of the series, Gert Scobel, studied theology and also has a doctorate in hermeneutics. (I had to look up that word to have an idea what it meant.) An aspect of the presentation which was a bit off track was that it gave the impression that the idea of a theory unifying general relativity and quantum theory was solely due to Stephen Hawking. Before ending this post I should perhaps say something about my own point of view on dark matter and dark energy. Of course they are symptoms of serious blemishes in our understanding of reality. I believe that dark matter and dark energy are better approaches to explaining the existing observational anomalies than any other alternative which is presently available. In the past I have done some work related to dark energy myself. The one thing that I do not like about a lot of the research in this area is that while people are very keen on proposing new ‘theories’ (which are often just more or less vague ideas for models) there is much less enthusiasm for working out these ideas to obtain a logically sound proposal. Of course that would be more difficult. A case study in this direction was carried out in the diploma thesis of Nikolaus Berndt which was done under my supervision. The theme was to what extent the so-called Cardassian models (do not) deserve to be called a theory. We later produced a joint publication on this. It has not received much attention in the research community and as far as I know has only been cited once.

The principle of symmetric criticality

May 12, 2010

There are many interesting partial differential equations which can be expressed as the Euler-Lagrange equations corresponding to some Lagrangian. Thus they are equivalent to the condition that the action defined by the Lagrangian is stationary under all variations. Sometimes we want to study solutions of the equations which are invariant under some symmetry group. Starting from the original equations, it is possible to calculate the symmetry-reduced equations. This is what I and many others usually do, without worrying about a Lagrangian formulation. Suppose that in some particular case the task of doing a symmetry reduction of the Lagrangian is significantly easier than the corresponding task for the differential equations. Then it is tempting to take the Euler-Lagrange equations corresponding to the symmetry-reduced action and hope that, for symmetric solutions, they are equivalent to the Euler-Lagrange equations without symmetry. But is this always true? The Euler-Lagrange equations without symmetry are equivalent to stationarity under all variations, while the Euler-Lagrange equations for the symmetry-reduced action are equivalent to stationarity under symmetric variations only. The second property is a priori weaker than the first. This procedure is nevertheless often implicit in physics papers, where the variational formulation is more at the centre of interest than the equations of motion.

The potential problem just discussed is rarely if ever mentioned in the physics literature. Fortunately this question has been examined a long time ago by Richard Palais in a paper entitled ‘The principle of symmetric criticality’ (Commun. Math. Phys. 69, 19). I have known of the existence of this paper for many years but I never took the trouble to look at it seriously. Now I have finally done so. Palais shows that the principle is true if the group is compact or if the action is by isometries on a Riemannian manifold. Here the manifold is allowed to be an infinite-dimensional Hilbert manifold, so that examples of relevance to field theories in physics are included. The proof in the Riemannian case is conceptually simple and so I will give it here. Suppose that (M,g) is a Riemannian manifold and f a function on M. Let a group G act smoothly on M leaving g and f invariant. Let p be a critical point of the restriction of f to the set F of fixed points of the group action. It can be shown that F is a smooth totally geodesic submanifold. (In fact in more generality a key question is whether the fixed point set is a submanifold. If this is not the case even the definition of the principle may be problematic.) The gradient of f at p is orthogonal to F. Now consider the geodesic starting at p with initial tangent vector equal to the gradient of f. It is evidently invariant under the group action since all the objects entering into its definition are. It follows that this geodesic consists of fixed points of the action of G and so must be tangent to F. Hence the gradient of f vanishes.

When does the principle fail? Perhaps the simplest example is given by the action of the real numbers on the plane generated by the vector field x\frac{\partial}{\partial y} and the function x. This has no critical points but its restriction to the fixed point set, which is the y-axis, has critical points everywhere.

Induced pluripotent stem cells

January 18, 2010

The usual career of a living cell proceeds from its beginning as a stem cell in the embryo through a process of differentiation, in which it becomes more and more specialized until (in most cases) it finally takes its place in some tissue as a terminally differentiated cell. This process involves various genes being switched on or off. In the past this process has usually been thought of as being more or less irreversible. This explains the great interest in embryonic stem cells as a potential basis for the treatment of various illnesses by regeneration of certain types of cells. Unfortunately embryonic stem (ES) cells have two big problems associated with them. The first is that their use raises ethical concerns in many people, which act as a powerful inhibitor of the development of the technology. The other is that they may involve medical dangers. If the cells develop in the wrong direction they may lead to tumours, especially the type called a teratoma, in which cells are found which are of the wrong type of tissue (and often of many types) for the place they are in.

It was discovered in 2006 by Shinya Yamanaka and his associates that the usual development can be run backwards, producing stem cells from terminally differentiated cells, for instance skin cells. They named these cells induced pluripotent stem cells (iPS cells). On the web page of the National Institutes of Health, where they have videos of lectures, there is a talk given by Yamanaka on January 14th, 2010 which is inspiring and at the same time presented in an entertaining style. The introduction by Francis Collins, director of the NIH, suggests that Yamanaka will not have to wait long for his Nobel Prize. iPS cells are an ethically safe alternative to ES cells. Their medical safety does not look so good at the moment. Under some circumstances the safety profile of iPS cells is similar to that of ES cells. Under other circumstances a subset of the cells seems to be refractory to differentiation and can then produce teratomas at a later time. It is necessary to learn to control their development better before they can be used in regenerative medicine. Of course it would be important to know what characterizes this subset. Yamanaka suggests that this may have to do with epigenetic factors and this idea is being tested in his laboratory now. An application of iPS cells less risky than tissue regeneration is to use cells produced from iPS cells to test drugs which are toxic, or even lethal, for certain patients but not for the majority. The idea is to take skin cells from the patient, turn them into stem cells and test the drug on those cells. Unfortunately this process requires a lot of time and money.

The normal cells are turned into iPS cells by the application of certain transcription factors. This may be done by transferring genetic material or by using the proteins themselves directly. Originally four different transcription factors had to be combined. Recent work by Hans Schöler and collaborators indicates that one of these, Oct4, is enough in humans. The article is in Nature 461 (2009) 649.

Four-dimensional Lie algebras

January 1, 2010

In mathematical general relativity it is common to study solutions of the Einstein equations with symmetry. In other words, solutions are considered which are invariant under the action of a Lie group G. (In what follows I will restrict consideration to the vacuum case to avoid having to talk about matter. So a solution means a Lorentzian metric g satisfying {\rm Ric}(g)=0.) It is usual to concentrate on the four-dimensional case, corresponding to the fact that in everyday life we encounter one time and three space dimensions. One type of solution with symmetry is the spatially homogeneous one, where the orbits of the group action are three-dimensional and spacelike. Then the Einstein equations reduce from partial differential equations to ordinary differential equations. This is a huge simplification, although the solutions of the ODEs obtained are pretty complicated. Here I will make the further assumptions that the Lie group is of dimension three and that it is simply connected. The first of these assumptions is a real restriction but the second is not from my point of view, since it does not change the dynamics of the solutions, which is what I am mainly interested in. With these assumptions the unknown can naturally be considered as a one-parameter family of left-invariant Riemannian metrics on a three-dimensional Lie group. These Riemannian metrics are obtained as the metrics induced by the spacetime metric on the orbits of the group action. Any connected three-dimensional Lie group can occur. Connected and simply connected Lie groups are in one-to-one correspondence with their Lie algebras. Thus it is important to understand what three-dimensional Lie algebras there are. Fortunately there exists a classification, which was found by Bianchi in 1898. People working in general relativity call the spatially homogeneous solutions of the Einstein equations whose symmetry is defined by Lie groups in this way Bianchi models. They use the terminology of Bianchi, who distinguished types I-IX. A lot of work has been done on the dynamics of these solutions. Some more information on this can be found in a previous post on the Mixmaster model.
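As a small illustration of what it means to specify a Lie algebra by structure constants, the following sketch checks antisymmetry and the Jacobi identity for the structure constants of su(2), the Bianchi type IX algebra; the encoding is my own:

```python
from itertools import permutations

# structure constants of su(2) (Bianchi type IX): [e_i, e_j] = sum_k eps_ijk e_k
def eps(i, j, k):
    # Levi-Civita symbol on three indices
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def bracket(x, y):
    # [x, y]^k = sum_ij eps(i, j, k) x_i y_j
    return [sum(eps(i, j, k) * x[i] * y[j] for i in range(3) for j in range(3))
            for k in range(3)]

e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# antisymmetry on all pairs of basis vectors
for a in range(3):
    for c in range(3):
        assert bracket(e[a], e[c]) == [-t for t in bracket(e[c], e[a])]

# the Jacobi identity on all triples of basis vectors
for a, c, d in permutations(range(3)):
    lhs = [u + v + w for u, v, w in zip(
        bracket(e[a], bracket(e[c], e[d])),
        bracket(e[c], bracket(e[d], e[a])),
        bracket(e[d], bracket(e[a], e[c])))]
    assert lhs == [0, 0, 0]
```

Checks of exactly this kind are what make classifications such as Bianchi's (or Fubini's in four dimensions) verifiable case by case.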

For reasons of pure mathematical curiosity, or otherwise, it is interesting to ask what happens to all this in space dimensions greater than three. Recently Arne Gödeke has written a diploma thesis on some aspects of this question under my supervision and this has led me to go into the issue in some depth. One thing which naturally comes up is the question of classifying Lie algebras in n dimensions. As far as I can see there is not a useful complete classification in general dimensions but there is quite a bit of information available in low dimensions. Here I will concentrate on the case of four dimensions. In that case there is a classification which was found by Fubini in 1904 and since then other people have produced other versions. Having worked with Bianchi models for many years I feel very much at home with the three-dimensional Lie algebras. In contrast the four-dimensional classification appeared to me quite inhospitable and so I have invested some time in trying to fit the four-dimensional Lie algebras into a framework which I find more appealing. I record some of what I found here. The best guide I found was the work of Sigbjørn Hervik, in particular his paper in Class. Quantum Grav. 19, 5409 (cf. arXiv:gr-qc/0207079).

From the point of view of the dynamics of the Einstein equations one Bianchi type which is notably different from all others is type IX. The reason for this is that the corresponding Lie group (which is SU(2)) admits left-invariant metrics of positive scalar curvature. Is there a natural analogue for four-dimensional Lie algebras? A useful tool here is the Levi-Malcev theorem, which provides a way of splitting a general Lie algebra into two simpler pieces. More precisely, it says that each Lie algebra is the semidirect sum of a semisimple and a solvable Lie algebra. The semisimple part is called a Levi subalgebra and is unique up to isomorphism. It turns out that the information about whether there exists a metric of positive scalar curvature is contained in the Levi subalgebra. There are not many semisimple Lie algebras in low dimensions. In fact in dimension no greater than four there are only two, su(2) and sl(2,R). These correspond to Bianchi types IX and VIII respectively. The only possible non-trivial Levi decompositions are the semidirect sums of one of the two Lie algebras just mentioned with the real numbers. In fact it turns out that the semidirect sum of a semisimple Lie algebra and the real numbers is automatically a direct sum, because any derivation of a semisimple Lie algebra is an inner derivation. The corresponding simply connected Lie group is a direct product. It can be concluded from this that the only connected and simply connected four-dimensional Lie group which admits a left-invariant metric of positive scalar curvature is SU(2)\times R. This is the analogue of Bianchi type IX for n=4.
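One invariant which separates the two semisimple algebras concretely is the Killing form K(X,Y)={\rm tr}({\rm ad}_X\,{\rm ad}_Y), which is negative definite exactly for semisimple Lie algebras of compact type such as su(2), while for sl(2,R) it is indefinite. A small numerical sketch (the choice of basis is mine):

```python
import numpy as np

def killing_form(C):
    """Killing form K_ij = C[m, i, n] * C[n, j, m] built from structure
    constants C[k, i, j], where [e_i, e_j] = C[k, i, j] e_k."""
    return np.einsum('min,njm->ij', C, C)

def algebra(brackets, dim=3):
    """Assemble structure constants from a list of brackets
    (i, j, k, c) meaning [e_i, e_j] = c * e_k."""
    C = np.zeros((dim, dim, dim))
    for i, j, k, c in brackets:
        C[k, i, j], C[k, j, i] = c, -c
    return C

# su(2) (Bianchi IX): [e_0,e_1]=e_2, [e_1,e_2]=e_0, [e_2,e_0]=e_1
su2 = algebra([(0, 1, 2, 1.0), (1, 2, 0, 1.0), (2, 0, 1, 1.0)])
# sl(2,R) (Bianchi VIII): same brackets with one sign flipped
sl2 = algebra([(0, 1, 2, -1.0), (1, 2, 0, 1.0), (2, 0, 1, 1.0)])

print(np.linalg.eigvalsh(killing_form(su2)))  # [-2. -2. -2.], negative definite
print(np.linalg.eigvalsh(killing_form(sl2)))  # [-2.  2.  2.], indefinite
```

The sign of the Killing form is what distinguishes the compact group SU(2), and hence type IX, from the non-compact SL(2,R) of type VIII.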

It is common in general relativity to divide the three-dimensional Lie algebras into two disjoint classes, Class A and Class B. The first of these consists of the unimodular Lie algebras, i.e. those whose structure constants have vanishing trace. They are closely associated with the class of Lie groups whose left-invariant metrics can be compactified by taking the quotient by a discrete group of isometries. They also have the pleasant property that their dynamics can be reduced to the case where the matrix of components of the metric in a suitable basis of left-invariant one-forms is diagonal. This is important for the Wainwright-Hsu system, a dynamical system formulation of the Einstein equations for Class A Bianchi models which is the basis for most of the rigorous results on the dynamics of these solutions obtained up to now. If type IX is omitted there are five different Lie algebras in Class A. One way of getting unimodular Lie algebras of dimension four is to take the direct sum of one of the three-dimensional ones with the real numbers. Those which cannot be obtained in this way are called indecomposable. The indecomposable unimodular four-dimensional Lie algebras can be classified into six types. Four of these are individual Lie algebras while the other two are one-parameter families of non-isomorphic algebras. One way of putting these into a larger framework is to note that each of them has a three-dimensional Abelian subalgebra. They can therefore be considered as special cases of solutions with three commuting spacelike Killing vector fields. This generalizes the fact that all the Class A Bianchi types except VIII and IX can be considered as solutions with two commuting Killing vector fields. I do not have an overview of the questions of compactification and diagonalization for these metrics. It seems that calculations done by Isenberg, Jackson and Lu in their study of the Ricci flow on homogeneous four-dimensional manifolds (Commun. Anal. Geom. 14, 345) might be helpful in this context.
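The unimodularity condition is easy to test mechanically: a Lie algebra is unimodular precisely when {\rm tr}({\rm ad}_X)=0 for every X, i.e. when the contracted structure constants C^j_{ji} all vanish. A small sketch (the brackets are the standard ones for types IX and V, the helper names are my own):

```python
import numpy as np

def is_unimodular(C, tol=1e-12):
    """True if tr(ad_{e_i}) = sum_k C[k, i, k] vanishes for every i,
    where [e_i, e_j] = C[k, i, j] e_k."""
    return bool(np.abs(np.einsum('kik->i', C)).max() < tol)

def algebra(brackets, dim=3):
    """Assemble structure constants from a list of brackets
    (i, j, k, c) meaning [e_i, e_j] = c * e_k."""
    C = np.zeros((dim, dim, dim))
    for i, j, k, c in brackets:
        C[k, i, j], C[k, j, i] = c, -c
    return C

# Bianchi IX (su(2)), a Class A example
IX = algebra([(0, 1, 2, 1.0), (1, 2, 0, 1.0), (2, 0, 1, 1.0)])
# Bianchi V: [e_0, e_2] = e_0, [e_1, e_2] = e_1, a Class B example
V = algebra([(0, 2, 0, 1.0), (1, 2, 1, 1.0)])

print(is_unimodular(IX), is_unimodular(V))  # True False
```

The same trace test applies unchanged in dimension four, so it can be used to pick the unimodular algebras out of any tabulated classification.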

More details on some of the things mentioned in this post will be given in a forthcoming preprint by Gödeke and myself.

Manipulating cells using light

October 27, 2009

In what follows I describe another subject which was a theme in the talk of Orion Weiner mentioned in the previous post. By now I am familiar with the fact that there are techniques which allow us to see details of what is going on in cells. Here the most prominent protagonist is the green fluorescent protein (GFP), which was honoured by the Nobel Prize in Chemistry in 2008. It allows information to be exported from the cell. This is a passive process in the sense that once the system has been prepared we just watch what happens. A more active process which is sometimes shown on video is the one where a neutrophil follows the moving tip of a micropipette releasing a substance to which the cell is chemotactic. The subject of the present post is how it is possible to actively manipulate cells by sending in light of certain wavelengths. This may mean bathing the cell in light, illuminating certain precisely defined areas with a laser, or a combination of the two.

The first type of experiment involves proteins which can be located either at the cell membrane or in the cytosol and which are fluorescently labelled so that their position can be monitored. It is possible to cause these molecules to move rapidly from one localization to the other. This can be done on a time scale of a couple of seconds and it looks like switching a light on and off. It can be repeated many times in a row. Here the effect on the cell is global. The second type of experiment has to do with localizing this type of effect. It allows patterns chosen by the experimenter to be projected onto the cell. Here coloured patches are visible. Their interpretation is that concentrations of a certain substance have been fixed according to the pattern. The third type of experiment is the most striking. Here a spot of light is moved over the cell and away from it in a certain direction. The result is a long projection of the cell in that direction. On the video it looks as if the cell is being pulled by a sticky object. All these things are done by switching on certain proteins which have been made light-sensitive. The sensitivity to light is achieved by incorporating elements which are responsible for allowing certain plants to react to light. One of the plants which acts as a source here is the favourite model organism among plants, Arabidopsis thaliana. The reference to the paper describing these results is ‘Spatiotemporal control of cell signalling using a light-switchable protein interaction’, Nature 461, 997-1001 (15 October, 2009).