Archive for the ‘partial differential equations’ Category

Fuchsian equations

July 21, 2011

Fuchsian equations are a class of differential equations which have played a big role in my research and which have now found a niche in the mathematical relativity community. In fact they have a very wide range of applications. The story begins with Lazarus Fuchs in the mid-nineteenth century. He was interested in the case of a higher order linear scalar ODE for a complex-valued function whose coefficients are analytic except for isolated singularities. The cases which I have mainly been concerned with are first order nonlinear systems of PDE for real-valued functions whose coefficients may only be smooth. It sounds as if almost everything is different in the two cases, so what is the similarity? The basic idea is that a Fuchsian problem concerns a system of PDE with a singularity in the coefficients, where the aim is to describe the behaviour of solutions near the singularity. If certain structural conditions are satisfied then for any formal power series solution of the problem there is a corresponding actual solution. In the original situation of Fuchs the series converged and the solution was its limit. More generally there need be no convergence and the series is only asymptotic. A related task is to find singular solutions of a regular system. By suitable changes of variables this can sometimes be transformed to the problem of finding a regular solution of a singular system.

With hindsight, my first contact with a problem of Fuchsian type was in some work I did with Bernd Schmidt (Class. Quantum Grav. 8, 985) where we proved the existence of certain models for spherically symmetric stars in general relativity. Writing the equations in polar coordinates leads to a system of ODE with a singularity corresponding to the centre of symmetry. We proved the existence near the centre of solutions satisfying the appropriate boundary conditions by doing a direct iteration. In later years I was interested in the problem of mathematical models of the big bang in general relativity. I spent the academic year 1994-95 at IHES (I have written about this in a previous post) and in that period I often went to PDE seminars in Orsay. On one occasion the speaker was Satyanad Kichenassamy and his subject was Fuchsian equations. This is a long-term research theme of his and he has written about it extensively in his books ‘Nonlinear Wave Equations’ and ‘Fuchsian Reduction: Applications to Geometry, Cosmology and Mathematical Physics’. I found the talk very stimulating and after some time I realized that this technique might be useful for studying spacetime singularities. Kichenassamy and I cooperated on working out an application to Gowdy spacetimes and this resulted in a joint paper. A general theorem on Fuchsian systems proved there has since (together with some small later modifications) been a workhorse for investigating spacetime singularities in various classes of spacetimes.

One of the highlights of this development was the proof by Lars Andersson and myself (Commun. Math. Phys. 218, 479) that there are very general classes of solutions of the Einstein equations coupled to a massless scalar field whose initial singularities can be described in great detail. In later work with Thibault Damour, Marc Henneaux and Marsha Weaver (Ann. H. Poincare 3, 1049) we were able to generalize this considerably, in particular to the case of solutions of the vacuum Einstein equations in sufficiently high dimensions. For these results no symmetry assumptions were necessary. More recently these results were generalized in another direction by Mark Heinzle and Patrik Sandin (arXiv:1105.1643). Fuchsian systems have also been applied to the study of the late-time behaviour of cosmological models with positive cosmological constant. In the meantime there are more satisfactory results on this question using other methods (see this post) but this example does show that the Fuchsian method can be applied to problems in general relativity which have nothing to do with the big bang. In general this method is a kind of machine for turning heuristic calculations into theorems.

The Fuchsian method works as follows. Suppose that a system of PDE is given and write it schematically as F(u)=0. I consider the case that the equation itself is regular and the aim is to find singular solutions. Let u_0 be an explicit function which satisfies the equation up to a certain order in an expansion parameter and which is singular on a hypersurface defined by t=0. Look for a solution of the form u=u_0+t^\alpha v where t^\alpha is less singular than u_0. The original equation can be rewritten as G(v)=0, where G is singular. Now the aim is to show that there is a unique solution v of the transformed equation which vanishes as t\to 0. A theorem of this kind was proved in the analytic case in the paper mentioned above which I wrote with Kichenassamy. Results on the smooth case are harder to prove and fewer of them are known. A variant of the procedure is to define u_0 as a solution (not necessarily explicit) of an equation F_0(u_0)=0 which is a simplified version of the equation F(u)=0. In the cases of spacetime singularities which have been successfully handled the latter system is the so-called velocity-dominated system.
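
To make the reduction concrete, here is a small SymPy sketch (my own toy example, not taken from any of the papers mentioned) for the ODE u''=6u^2, which has the explicit singular solution u_0(t)=t^{-2}. Substituting the ansatz u=u_0+t^\alpha v and multiplying by a suitable power of t produces an equation for v which is singular at t=0 but has exactly the structure the Fuchsian theorems ask for: an Euler-type linear part plus a nonlinearity carrying a positive power of t.

```python
# Toy Fuchsian reduction for u'' = 6 u^2 with singular leading term u_0 = 1/t^2.
# The choice of example and all names are illustrative, not from the post.
import sympy as sp

t, a = sp.symbols('t a', positive=True)
v = sp.Function('v')

u0 = 1 / t**2                     # explicit singular solution of the toy equation
u = u0 + t**a * v(t)              # Fuchsian ansatz: correction less singular than u_0

F = sp.diff(u, t, 2) - 6*u**2     # the original, regular equation F(u) = 0
G = sp.powsimp(sp.expand(t**(2 - a) * F))
print(G)
# One finds  t^2 v'' + 2 a t v' + (a(a-1) - 12) v - 6 t^(a+2) v^2 = 0:
# a singular (Euler-type) linear part plus a nonlinearity carrying a positive
# power of t, which is the shape of equation the Fuchsian theorems handle.
```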

Weighted Sobolev spaces

February 22, 2011

In the study of elliptic systems of partial differential equations on compact manifolds it is often helpful to use Sobolev spaces. The Sobolev space H^s is the space of functions which are square integrable together with their derivatives up to order s. In general it is necessary to use distributions and distributional derivatives. Sobolev spaces can be defined in a similar way for sections of vector bundles but I will restrict myself to real-valued functions here. The Sobolev spaces H^s are Hilbert spaces. It is possible to replace L^2 by L^p, 1<p<\infty, in the definitions, with the resulting spaces being Banach spaces. On a manifold it is necessary to specify what kind of derivatives are used, how the lengths of these objects are defined and what measure is used for integration. One way of doing this is to use a Riemannian metric defined on the given manifold. Then covariant derivatives can be used, the length of tensors is naturally defined and the metric defines a volume form. In general using different metrics defines norms which are different but equivalent. An alternative approach is based on local coordinates. Introduce a partition of unity, each of whose members has support contained in a coordinate chart, and use it to write any function f as a sum of functions f_i which can be transported to Euclidean space using the charts. Define the square of the Sobolev norm of f to be equal to the sum of the squares of the corresponding flat space norms of the f_i. Different choices of the partition of unity and the charts lead to norms which are equivalent to each other and to the norms defined by means of Riemannian metrics. In the end the uniqueness of the Sobolev spaces as topological vector spaces (although not as normed spaces) is guaranteed by the compactness of the manifold.
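
As a small concrete illustration (my own, using the periodic case where everything can be computed by Fourier series), the H^s norm of a function on the circle can be evaluated with the Fourier multiplier (1+k^2)^{s/2}; summing the weighted squares of the Fourier coefficients is equivalent to summing the L^2 norms of the derivatives up to order s.

```python
# H^s norm of a 2*pi-periodic function via its Fourier coefficients.
# A rough NumPy sketch; the grid size and test function are arbitrary.
import numpy as np

def sobolev_norm(f_vals, s):
    """H^s norm of equally spaced samples of a 2*pi-periodic function."""
    n = len(f_vals)
    fhat = np.fft.fft(f_vals) / n            # Fourier coefficients c_k
    k = np.fft.fftfreq(n, d=1.0 / n)         # integer wave numbers
    weights = (1.0 + k**2)**s                # Sobolev weight (1 + |k|^2)^s
    return np.sqrt(2*np.pi*np.sum(weights*np.abs(fhat)**2))

x = np.linspace(0.0, 2*np.pi, 256, endpoint=False)
print(sobolev_norm(np.sin(x), s=0))          # L^2 norm of sin, i.e. sqrt(pi)
print(sobolev_norm(np.sin(x), s=1))          # H^1 norm of sin, i.e. sqrt(2*pi)
```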

Next some properties of Sobolev spaces on compact manifolds will be listed. If s_2>s_1 then the natural embedding from H^{s_2} to H^{s_1} is compact. A linear differential operator of order k whose coefficients belong to a Sobolev space of sufficiently high order defines a bounded linear operator from H^s to H^{s-k}. If the operator is elliptic then the corresponding mapping between Sobolev spaces is Fredholm. This means by definition that its kernel is finite dimensional and that its image is closed and of finite codimension. Elliptic regularity holds: if P is an elliptic operator of order k and Pu belongs to H^s then u belongs to H^{s+k}.

On a non-compact manifold things are more complicated. We have to expect that if there is any reasonable analogue of the theory just sketched it will depend on the asymptotics of the metrics used or the type of coordinate systems chosen. I am familiar with one theory of this type which plays a role in general relativity. This is the theory of weighted Sobolev spaces on manifolds with an asymptotically flat metric. Since I was interested in a generalization of this and a bit rusty on the known theory I decided to refresh my memory by rereading the basic references. These are a paper of Choquet-Bruhat and Christodoulou (Acta Math. 146, 129) and one of Bartnik (Commun. Pure Appl. Math. 39, 661). The weighted Sobolev spaces used in this theory differ from ordinary Sobolev spaces by the introduction of positive functions (weights) multiplying the volume form in the definition of the norms. The weights used depend on the choice of a real number \delta in addition to the differentiability index s. The notational conventions used in the two papers just quoted differ and to avoid any ambiguity let me state that in what follows I use Bartnik’s conventions. (They seem to be the ones most frequently encountered in the literature.)

A Riemannian manifold (M,g) is called asymptotically flat if the following conditions hold. There is a compact subset K of M such that the complement of K is diffeomorphic to the complement of a closed ball in a Euclidean space of the same dimension. If the restriction of the metric to the complement of K is transported to R^n using a suitable diffeomorphism of this type then its components in Cartesian coordinates tend to those of the flat Euclidean metric at infinity in a suitable sense. One of the ways of making this into a precise definition is to use weighted Sobolev spaces of functions on flat space. (For the experts I remark that I have restricted here to the case of one end.) The weights are powers of a function \sigma which is everywhere smooth and behaves at infinity like the distance r from a fixed point. These weights are chosen so that (with Bartnik’s conventions) the growth rate of a function in the space H^s_\delta at infinity is no greater than r^\delta and that the growth rate decreases by one for each derivative taken up to order s. These statements are to be understood in an L^2 sense and not pointwise. There do, however, exist Sobolev embedding theorems which relate them to pointwise statements. The definitions and a number of the basic properties of these spaces are valid for any complete Riemannian manifold. For instance, the embedding from H^{s_1}_{\delta_1} into H^{s_2}_{\delta_2} is compact if s_1>s_2 and \delta_1<\delta_2. The elliptic theory requires more special assumptions. It uses scaling properties of Euclidean space and the fundamental solution for the flat space Laplacian. It can be shown, for instance, that suitable second order elliptic operators with coefficients belonging to weighted Sobolev spaces are Fredholm as operators from H^{s+2}_\delta to H^s_{\delta-2} for all non-integer values of \delta. For integer values of \delta the Fredholm property is lost. This is connected to the following fact. Each Fredholm operator has an index, which is the difference between the dimensions of the kernel and cokernel. The index is invariant under continuous variations of the operator. Changing the parameter \delta for a fixed elliptic differential operator can be thought of as a continuous variation of the operator between Hilbert spaces. However the index in general changes when passing through an integer value of \delta and so the Fredholm property must be lost there.
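
The integer values of \delta at which things go wrong can be traced back to the homogeneous harmonic functions on flat space. Here is a small SymPy check (my own illustration, not from the papers cited): separating variables with a spherical harmonic of degree l, the condition that r^a times the harmonic be annihilated by the flat Laplacian forces a=l or a=2-n-l. These integer growth rates are the rates at which harmonic functions can enter or leave the weighted spaces as \delta is varied, and they mark the values where the Fredholm property fails.

```python
# Indicial exponents of the flat Laplacian acting on r^a Y_l, where Y_l is a
# spherical harmonic of degree l (Laplace-Beltrami eigenvalue -l(l+n-2)).
import sympy as sp

r, a, n, l = sp.symbols('r a n l', positive=True)
u = r**a
radial_part = sp.diff(u, r, 2) + (n - 1)/r*sp.diff(u, r) - l*(l + n - 2)/r**2*u
print(sp.factor(sp.expand(radial_part / r**(a - 2))))
# One finds (a - l)*(a + l + n - 2), so the homogeneous harmonics grow like
# r^l or like r^(2-n-l), which are the exceptional growth rates.
```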

Critical wave maps

January 20, 2010

At the moment I am attending a research programme called ‘Quantitative studies of nonlinear wave phenomena’ at the Erwin Schrödinger Institute in Vienna. This has stimulated me to think again about a topic in the study of nonlinear wave equations which has seen remarkable developments just recently. This concerns critical wave maps. Wave maps are nonlinear generalizations of the wave equation associated to a Riemannian manifold (N,h) called the target manifold. These days wave maps are one of the most important model problems in the study of nonlinear wave equations. If (M,g) is a pseudo-Riemannian manifold and \Phi a mapping from M to N, define a Lagrange function as the norm squared of the derivative of \Phi, with the norm being determined by g and h. In local coordinates L=g^{ij}\partial_i\Phi^I\partial_j\Phi^J h_{IJ}. Solutions of the corresponding Euler-Lagrange equations are called harmonic maps when g is Riemannian and wave maps when g is Lorentzian. Harmonic maps are elliptic while wave maps are hyperbolic. It is common to classify nonlinear evolution equations by the scaling of the energy into subcritical, critical and supercritical. The standard point of view is then that subcritical problems are relatively straightforward, supercritical problems are impossible at present and critical problems are on the borderline. It is therefore natural that critical problems are a hive of activity at the moment. For more information about this classification and its significance see this post of Terence Tao. Actually there are a couple of results around on problems which could be called marginally supercritical. One, due to Tao, concerns the nonlinear wave equation \nabla^\alpha\nabla_\alpha u=u^5 (\log (2+u^2)) in three space dimensions (arXiv:math/0606145). Note that the equation with nonlinearity u^5 is critical in three space dimensions. Another, due to Michael Struwe, concerns the equation \nabla^\alpha\nabla_\alpha u=ue^{u^2} in two space dimensions, where every power law nonlinearity is subcritical. Here criticality is decided by the value of the energy and Struwe’s result applies to the supercritical case, under the assumption of rotational symmetry. He gave a talk on this here on Monday.
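
As a concrete version of this classification (my own sketch, for the model equation \nabla^\alpha\nabla_\alpha u=u^p in n space dimensions rather than for wave maps), the scaling u_\lambda(t,x)=\lambda^{2/(p-1)}u(\lambda t,\lambda x) maps solutions to solutions and multiplies the energy by \lambda^{4/(p-1)+2-n}. The critical case is the one where this exponent vanishes:

```python
# Critical power for Box u = u^p in n space dimensions, from the requirement
# that the energy be invariant under the natural scaling.  Illustrative only.
import sympy as sp

n, p = sp.symbols('n p', positive=True)
energy_exponent = 4/(p - 1) + 2 - n           # energy of u_lambda scales by lambda**this
p_crit = sp.solve(sp.Eq(energy_exponent, 0), p)[0]
print(sp.simplify(p_crit))                    # (n + 2)/(n - 2)
print(p_crit.subs(n, 3))                      # 5, the quintic case mentioned above
```

For n=2 the condition 4/(p-1)+2-n=0 has no solution with finite p, which is another way of saying that every power-law nonlinearity is subcritical in two space dimensions.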

Getting back to wave maps, the criticality is decided by the dimension of the manifold M. If its dimension is two (i.e. space dimension n=1) then the problem is subcritical. The case n=2 is critical while the problem is supercritical for n>2. From now on I concentrate on the critical case. In this context there is a general guide for distinguishing between global smoothness of solutions and the development of singularities which is the curvature of the target manifold. Negative curvature tends to be good for regularity while positive curvature tends to encourage singularities. This is related to an analogous feature of harmonic maps which has been known for a long time. A good model problem for the negatively curved case is that where (N,h) is the hyperbolic plane. The thesis of Shadi Tahvildar-Zadeh played an important role in making this problem known in the mathematics community. In this work the manifold (M,g) is three-dimensional Minkowski space. It was done at the Courant Institute under the supervision of Jalal Shatah and Demetrios Christodoulou. To make the problem more tractable some symmetry assumptions were made. Consider the action of SO(2) on R^2 by rotations about a point. There is also a natural action of the rotation group on the hyperbolic plane. One symmetry assumption is invariance – applying a rotation to a point x\in R^2 leaves \Phi(x) invariant. Another symmetry assumption is equivariance – applying a rotation by an angle \theta to x leads to a rotation of \Phi(x) by an angle k\theta where k is a positive integer. It can be seen that there are different types of equivariance depending on the parameter k. The invariant case is related to the Einstein equations of general relativity. Cylindrically symmetric solutions of the vacuum Einstein equations lead to exactly this problem. On the other hand one of the key techniques used in the proofs originates in a paper of Christodoulou where he showed a stability result for an exterior region in a black hole spacetime in the presence of a scalar field. The results on the invariant case were published by Christodoulou and Tahvildar-Zadeh in 1993. They made no mention of the connection to the vacuum Einstein equations. The results on the equivariant case were published by Shatah and Tahvildar-Zadeh at about the same time. Interestingly the two papers use rather different terminology. While the work on the equivariant case uses the language of multipliers, traditional in PDE theory, the work on the invariant case uses the language of vector fields coming from the direction of general relativity.
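
For concreteness, here is a small SymPy sketch (mine, not from the papers cited) of what the k-equivariant ansatz does in the case of the hyperbolic plane target: writing the target metric in geodesic polar coordinates as du^2+\sinh^2 u\,d\phi^2 and taking \Phi(t,r,\theta)=(u(t,r),k\theta), the wave map Lagrangian reduces to a scalar Lagrangian for u whose Euler-Lagrange equation is a radial semilinear wave equation.

```python
# Reduced Euler-Lagrange equation for k-equivariant wave maps from
# 2+1 dimensional Minkowski space to the hyperbolic plane.  A sketch only;
# the coordinates and normalizations are my own choices.
import sympy as sp
from sympy.calculus.euler import euler_equations

t, r, k = sp.symbols('t r k', positive=True)
u = sp.Function('u')(t, r)

# reduced Lagrangian density (including the area element r from polar coordinates)
L_red = r*(-sp.diff(u, t)**2 + sp.diff(u, r)**2 + k**2*sp.sinh(u)**2/r**2)
print(euler_equations(L_red, [u], [t, r])[0])
# Up to an overall factor this is
#   u_tt - u_rr - u_r/r + k^2 sinh(u) cosh(u) / r^2 = 0;
# replacing sinh by sin gives the equation for the two-sphere target
# discussed further below.
```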

Without the symmetry the problem becomes much harder and was the subject of a major project of Tao (project heatwave). For more information see this post of Tao and the comments on it. By the time the project was finished there were already alternative approaches to solving the same problem by Sterbenz and Tataru and by Krieger and Schlag. Now this problem can be considered solved. The different papers use different methods and the detailed conclusions are different. Nevertheless they all include global regularity for wave maps from three-dimensional Minkowski space to the hyperbolic plane without any symmetry assumptions. The proof of Sterbenz and Tataru goes through an intermediate statement that if there were a singularity there would be a finite energy harmonic map into the given target. They actually concentrate on the case of a compact target manifold but the case of the hyperbolic plane can be handled by passing to the universal cover. They also give a more general statement for the case that there is a non-trivial harmonic map around. Then regularity is obtained as long as the energy is less than that of the harmonic map. These ideas are also related to results on the positive curvature case which show what some blow-up solutions look like when they occur. Consider the case that the target manifold is the two-sphere. Here it is possible, as in the case of the hyperbolic plane, to define invariant and equivariant wave maps. In recent work Raphaël and Rodnianski are able to describe the precise nature of the blow-up for an open set of initial data and for all values of the integer k describing the type of equivariance. The case k=1 is exceptional. They have also proved similar results for an analogous critical problem, the Yang-Mills equations in 4+1 dimensions. These results provide a remarkably satisfactory confirmation of conjectures made by Piotr Bizoń and collaborators on the basis of numerical and heuristic considerations which were an important stimulus for the field.

Bootstrap arguments

November 22, 2009

A bootstrap argument is an analogue of mathematical induction where the natural numbers are replaced by the non-negative real numbers. This type of argument is a powerful tool for proving long-time existence theorems for evolution equations. For instance, it plays a central role in the proof of the stability of Minkowski space by Christodoulou and Klainerman and the theorem on formation of trapped surfaces by Christodoulou discussed in previous posts. The name comes from a story where someone pulls himself up by his bootstraps, leather attachments to the back of certain boots. This story is often linked to the name of Baron Münchhausen. In another variant he pulls himself out of a bog by his pigtail. Münchhausen was a person who really lived and was known for telling tall tales. In later years people wrote various books about him and incorporated many other tall tales from various sources. The word ‘booting’ applied to computers is derived from ‘bootstrapping’ in the sense of this story. There are also bootstrap methods used in statistics. They involve analysing new samples drawn from a fixed sample. In some sense this means obtaining more knowledge about a system without any further input of information. It is this aspect of ‘apparently getting something for nothing’ which is typical of the bootstrap. In French the procedure in statistics has been referred to as ‘méthode Cyrano’. Unfortunately it seems that in PDE theory the French have just adopted the English term. I say ‘unfortunately’ because of a fondness for Cyrano de Bergerac. As in the case of Münchhausen there was a real person of this name, this time a writer. However the name is much better known as that of a fictional character, the hero of a play by Edmond Rostand. The non-fictional Cyrano wrote among other things about a trip to the moon. There is also a Münchhausen story where he uses a kind of inverse bootstrap (could there be a PDE analogue here?) to return from the moon. He constructs a rope which he attaches to one of the horns of the moon but it is much too short to reach down to the ground. He climbs to the bottom of the rope, reaches up, cuts off the part ‘which he does not need any more’ and ties it onto the bottom. He then repeats this process. Returning to Cyrano, he describes seven methods for getting to the moon of which the sixth is the one relevant to the bootstrap. He stands on an iron plate and throws a magnet into the air. The iron plate is attracted by the magnet and starts to rise. Then he rapidly catches the magnet and throws it into the air again. I should point out that Cyrano does not believe in the nonsensical stories he is telling – his aim is a practical one, holding the attention of the Duc de Guiche so as to delay him for a very specific reason.

Now I return to the topic of bootstrap arguments for evolution equations. I have given a discussion of the nature of these arguments in Section 10.3 of my book. Another description can be found in section 1.3 of Terry Tao’s book ‘Nonlinear dispersive equations: local and global analysis’. A related and more familiar concept is that of the method of continuity. Consider a statement P(t) depending on a parameter t belonging to the interval [0,\infty). Let S be the subset consisting of those t for which the statement P is true on the interval [0,t). If it can be shown that S is non-empty, open and closed then it can be concluded that the statement holds for all t, by the connectedness of the interval. The special feature of a bootstrap argument is the way in which openness is obtained. Suppose that, starting from P(t), we can prove a string of implications which ends again with P(t). This is nothing other than a circular argument and proves nothing. Suppose, however, that in addition this can be improved so that the statement at the end of the string is slightly stronger than that at the beginning. This improvement is something to work with and is a typical way of proving the openness needed to apply the continuity argument. It is more convenient here to work with the open interval (0,\infty) since we want to look at properties of solutions of an evolution equation defined on the interval (0,t). Let P(t) be the statement that a certain inequality (1) holds on the interval (0,t) and suppose that P(t) implies the statement Q(t) that a stronger inequality (2) holds on the same interval. Things are usually set up so that Q(t) implies by continuity that (2) holds at t and that the property of being ‘stronger’ then shows that P(t') holds for t' slightly greater than t. This shows the openness property. I think the best way to really understand what a bootstrap argument means is to write out a known example explicitly or, even better, to invent a new one to solve a problem which interests you. The key thing is to find the right choice of P and Q. What I have described here is only the simplest variant. In the work of Christodoulou mentioned above he uses a continuity argument on two-dimensional sets.
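
To see the mechanism in the simplest possible setting, here is a toy example of my own (not taken from the sources above). Consider the ODE u'=u^2 with u(0)=\epsilon>0 and let P(t) be the statement that u\le 2\epsilon on [0,t). If P(t) holds for some t\le (8\epsilon)^{-1} then integrating the equation gives u(s)\le\epsilon+4\epsilon^2 s\le 3\epsilon/2 on this interval, which is the stronger statement Q(t). By continuity the bound 3\epsilon/2 persists slightly beyond t, so the set of times where P holds is open; it is clearly non-empty and closed, and the conclusion is that the solution exists with u\le 2\epsilon at least up to the time (8\epsilon)^{-1}. The circular-looking step, assuming the bound 2\epsilon in order to prove the better bound 3\epsilon/2, is exactly the bootstrap.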

The Vlasov-Poisson system

June 4, 2009

The Vlasov-Poisson system is a system of partial differential equations which comes up in mathematical physics. I have been involved quite a bit with these equations and related systems for many years now. In this post I want to reflect a little on what is and is not known about solutions of this system. One of the things which has stimulated me to think more about these questions just now is a lecture course on kinetic equations which I am giving at the Free University in Berlin. Because of the physics motivation the Vlasov-Poisson system is usually studied in three space dimensions. Here I will allow the space dimension n to be general. For convenience I also introduce a parameter \gamma which can take the values +1 and -1. The equations are \partial_t f+v\cdot \nabla_x f+\gamma\nabla_x U\cdot\nabla_v f=0 and \Delta U=\rho where \rho=\int f dv. Here t is time and x and v denote position and velocity variables, each belonging to {\bf R}^n. \Delta is the Laplacian. Because of their most frequent applications the cases \gamma=1 and \gamma=-1 are often called the plasma physics case and the stellar dynamics case respectively. A natural problem here is the initial value problem (Cauchy problem) with data prescribed for f.
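
To get a feel for the system it can be worth running a quick numerical experiment. The following is a rough sketch of my own (not from the post) for the plasma case \gamma=1 in one space and one velocity dimension, on a spatial torus with a neutralizing background added so that the periodic Poisson equation is solvable; it uses the standard splitting of the transport into an x-advection and a v-advection along the characteristics.

```python
# Toy 1D-1V splitting scheme for the Vlasov-Poisson system, plasma case
# (gamma = +1), periodic in x.  Grid sizes, initial data and the neutralizing
# background are my own illustrative choices.
import numpy as np

nx, nv, L, vmax = 64, 128, 4*np.pi, 6.0
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dx, dv = L/nx, v[1] - v[0]
X, V = np.meshgrid(x, v, indexing='ij')

# weakly perturbed Maxwellian as initial data for f
f = (1 + 0.05*np.cos(2*np.pi*X/L))*np.exp(-V**2/2)/np.sqrt(2*np.pi)

k = 2*np.pi*np.fft.fftfreq(nx, d=dx)        # spatial wave numbers

def grad_U(f):
    """Solve U'' = rho (mean removed) on the torus and return dU/dx."""
    rho = f.sum(axis=1)*dv
    rho_hat = np.fft.fft(rho - rho.mean())
    U_hat = np.zeros_like(rho_hat)
    U_hat[1:] = -rho_hat[1:]/k[1:]**2
    return np.real(np.fft.ifft(1j*k*U_hat))

dt = 0.05
for _ in range(200):
    for j in range(nv):                      # half step of x-advection
        f[:, j] = np.interp(x - v[j]*dt/2, x, f[:, j], period=L)
    Ux = grad_U(f)
    for i in range(nx):                      # v-advection along dv/dt = +dU/dx
        f[i, :] = np.interp(v - Ux[i]*dt, v, f[i, :], left=0.0, right=0.0)
    for j in range(nv):                      # second half step of x-advection
        f[:, j] = np.interp(x - v[j]*dt/2, x, f[:, j], period=L)

print(f.sum()*dx*dv)    # total mass, conserved up to interpolation error
```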

Local existence in the Cauchy problem is known. For n\le 3 it is furthermore known that the local solution extends to a global in time solution, independently of the sign of \gamma. (The first proofs by two different methods were given by Pfaffelmoser and Lions/Perthame around 1991.) When n\ge 4 and \gamma=-1 there are large classes of smooth initial data for which global existence fails. More specifically, these equations have a conserved energy and when this energy is negative the corresponding smooth solution breaks down after finite time. The easiest way to realize that n=4 might be important is to look at scaling properties of the equations. For a discussion of the significance of scaling properties in general see Terry Tao’s post on the Navier-Stokes equations. In the case n=3 the potential and kinetic energies satisfy an inequality of the form |{\cal E}_{\rm pot}|\le C{\cal E}_{\rm kin}^{\frac12} and this plays an important role in the global existence proof. The essential feature is that the power on the right hand side is less than one. If similar arguments are carried out in the case n=4 then the power one half is replaced by the power one. Thus in a sense n=4 is the critical case. For n\ge 4 the global existence problem for the Vlasov-Poisson system with \gamma=1 is open. For n=4 it is a critical problem and might be solvable in the not too distant future. Similar remarks might be made about the relativistic Vlasov-Poisson system with massless particles in three space dimensions which is given by \partial_t f+\hat v\cdot \nabla_x f+\nabla_x U\cdot\nabla_v f=0 where \hat v=\frac{v}{|v|}. The analogue of this last system plays an important role in the recent work of Lemou, Méhats and Raphaël on the nature of singularities in solutions of the relativistic Vlasov-Poisson system with \gamma=-1.

Other open questions concern the behaviour of solutions of the Vlasov-Poisson system at late times. There are various results on this but they seem to be far from an exhaustive understanding of the asymptotics. Interesting questions include whether the density \rho is bounded for any solution with \gamma=-1 and whether \|\rho\|_{L^\infty}=O(t^{-3}) in the case \gamma=1, as is known to be the case for small initial data.

Reaction-diffusion equations, interfaces and geometric flows

November 24, 2008

The subject of this post is a reaction-diffusion equation of the form \partial_t u=\Delta u-\frac{1}{\epsilon^2} f'(u) for a real-valued function u with t\in {\bf R}^+ and x\in {\bf R}^n. Here \epsilon is a small parameter. This equation is often known as the Allen-Cahn equation and the phenomena discussed in the following were described in a 1979 paper of Allen and Cahn in Acta Metallurgica (which I have not seen). My primary source is a paper by Xinfu Chen (J. Diff. Eq. 96, 116) which includes both a nice introduction to the subject and rigorous proofs of some key theorems. The function f in the equation is a smooth function with certain qualitative properties: two minima u_1^- and u_2^- separated by a maximum u^+.

In physical terms there are two opposing effects at work in this system. There is reaction, which drives the evolution to resemble that of solutions of the corresponding ODE (the limit \epsilon\to 0, where the reaction term dominates), and diffusion, which causes the solution to spread out in space. Generic solutions of the ODE tend to one of the two minima of the potential at late times. If the diffusion is set to zero this leads to two spatial regions where the solution has almost constant values u_1^- and u_2^- respectively. They are separated by a narrow interface where the spatial derivatives of u are large and the values of u are close to u^+. These ideas are captured mathematically by proving asymptotic expansions in the parameter \epsilon for the solutions on time intervals whose lengths depend on \epsilon in a suitable way.

The following phenomena arise. Here I assume that the space dimension n is at least two. The case n=1 is special. The first statement is that on a suitable time interval, which is short when \epsilon is small, an interface forms. It can be described by a hypersurface. Without going into details it can be thought of as the level hypersurface u=u^+. The second describes how the interface moves in space on a longer timescale. If the function f has unequal values at the minima then the interface moves in the direction of its normal towards the region corresponding to the smaller minimum with a velocity which is proportional to the difference of the two values. On the timescale on which this motion takes place the interface stands still in the case that the two values are equal. It can, however, be shown that in the case of equal minima the interface moves in a definite way on an even longer timescale. The interface moves in the direction of its normal with a velocity which is proportional to its mean curvature at the given point. In other words, the hypersurface is a solution of mean curvature flow, one of the best-known geometric flows.
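
This prediction is easy to test numerically. Here is a rough sketch of my own (not from the papers cited) for the equal-well case f(u)=(1-u^2)^2/4, so that f'(u)=u^3-u and the minima are at \pm 1: starting from a circular interface of radius r_0, motion by mean curvature predicts that the enclosed area shrinks like \pi(r_0^2-2t).

```python
# Explicit finite-difference sketch of u_t = Lap(u) - (1/eps^2)(u^3 - u)
# on a periodic square, starting from a circular interface.  Parameters are
# arbitrary illustrative choices.
import numpy as np

n, box, eps = 128, 2.0, 0.05
h = box/n
x = np.linspace(-box/2, box/2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')

r0 = 0.6
# standard 1D profile tanh(d / (sqrt(2) eps)) across the interface
u = np.tanh((r0 - np.sqrt(X**2 + Y**2))/(np.sqrt(2)*eps))

def lap(u):
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4*u)/h**2

dt = 0.1*min(h**2/4, eps**2)         # crude stability restriction
t, t_end = 0.0, 0.05
while t < t_end:
    u += dt*(lap(u) - (u**3 - u)/eps**2)
    t += dt

area = np.sum(u > 0)*h**2            # area enclosed by the zero level set
print(area, np.pi*(r0**2 - 2*t_end)) # compare with the mean curvature prediction
```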

To finish I will state a question which I have no answer to, not even vaguely. Are there any useful analogues of these results for the hyperbolic equation \partial_t^2 u+\partial_t u=\Delta u-\frac{1}{\epsilon^2} f'(u)? Here there is a potential interaction between reaction, damping and dispersion.

Mouse fur

October 1, 2008

The Turing instability has been a popular theme in mathematical biology. There is no doubt that it is nice mathematics but how much does it really explain in biology? Recently a detailed proposal was made by researchers from the Max Planck Institute of Immunobiology and the University of Freiburg to explain the development of hairs in mice on the basis of a Turing mechanism. (S. Sick, S. Reinker, J. Timmer and T. Schlake. WNT and DKK determine hair follicle spacing through a reaction diffusion mechanism. Science 314, 1447.)

Here I want to take this as a stimulus to think again about the status of these ideas. On my bookshelf at home I have the biography of Turing by Andrew Hodges (‘Alan Turing: the enigma’). I now reread the parts concerning biology, which are not very extensive. On the basis of this text it seems that one of the things in biology that fascinated Turing most was the occurrence of Fibonacci numbers in plants. This seems to have little to do with the contribution to biology for which he became famous. He himself seems to have hoped that there would be a connection. I looked at the original paper of Turing (‘The chemical basis of morphogenesis’) but I did not learn anything new compared to modern accounts of the same subject I had seen before. The basic mathematical input is a system of reaction-diffusion equations, as described briefly in a previous post. A homogeneous steady state is considered which is stable within the class of homogeneous solutions. Then a growing mode is sought which describes the beginning of pattern formation. This is similar to what is done for the Keller-Segel system. There is a mouse in Turing’s paper but it has nothing to do with the mouse in the title of this post. Its role is to climb on a pendulum and thus illustrate ideas about instability.
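
To make the mode analysis concrete, here is a minimal numerical sketch (with an arbitrary illustrative Jacobian and diffusion matrix of my own choosing, not numbers from Turing's paper). The reaction Jacobian J is stable on its own, so the homogeneous state is stable against homogeneous perturbations, but with a sufficiently large ratio of diffusion constants the matrix J-k^2D governing a mode e^{ik\cdot x+\lambda t} acquires a positive eigenvalue for a band of wave numbers.

```python
# Minimal Turing dispersion-relation sketch (illustrative numbers only).
import numpy as np

J = np.array([[1.0, -2.0],
              [2.0, -3.0]])          # reaction Jacobian: tr<0, det>0, so stable alone
D = np.diag([1.0, 20.0])             # inhibitor diffuses much faster than activator

def growth_rate(ksq):
    return np.max(np.real(np.linalg.eigvals(J - ksq*D)))

ksq = np.linspace(0.0, 1.0, 201)
sigma = np.array([growth_rate(s) for s in ksq])
unstable = ksq[sigma > 0]
print(growth_rate(0.0) < 0)              # True: stable to homogeneous perturbations
print(unstable.min(), unstable.max())    # the Turing-unstable band of k^2
```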

Another book I have at home is ‘Endless forms most beautiful’ by Sean Carroll. In this book, which appeared in 2005, the author explains recent ideas about embryonic development and their connections to the evolution of organisms on geological timescales. Turing is mentioned once, on p. 123, but only to dismiss the relevance of his ideas to embryology, the central theme of his paper. Carroll writes, ‘While the math and models are beautiful, none of this theory has been borne out by the discoveries of the last twenty years’. As a remaining glimmer of hope for the Turing mechanism, the diagrams on pp. 104-105 of the book might fit a Turing-type picture but concern small-scale structures. The large-scale architecture of living bodies is claimed to arise in a quite different way. The picture of the development of individual organisms presented by Carroll has a character which strikes me as digital. I do not find it attractive. I should emphasize that this is an aesthetic judgement and not a scientific one. I suppose I am just in love with the continuum.

Now I return to the article of Sick et al. A key new aspect of what they have done in comparison to previous attempts in this direction is that they are able to identify definite candidates for the substances taking part in the reaction-diffusion scenario and obtain experimental evidence supporting their suggestion. These belong to the classes Wnt and Dkk (Dickkopf). An accompanying article by Maini et al. (Science 314, 1397) is broadly positive but does also add a cautionary note. It is pointed out that similar predictions can be produced by different mathematical models. A model of Turing type may produce something that looks a lot like what is provided by a model involving chemotaxis. This is a generic danger in mathematical biology. In a given application it is important to be on the lookout for experimental data which can help to resolve this type of ambiguity.

The reaction-diffusion model used for modelling and numerical simulation is related to a classical model of Gierer and Meinhardt. The original paper from 1972 and a great deal of information on related topics are available from this web page.

The Keller-Segel model

July 30, 2008

I mentioned the Keller-Segel model in a previous post on chemotaxis. In the past I have read, heard and thought a lot about this model but I had never actually carefully read the 1970 paper where it was introduced. (Keller, E. F. and Segel, L. A., Initiation of slime mold aggregation viewed as an instability. J. Theor. Biol. 26, 399). I have done so now. I also read a paper by Evelyn Fox Keller called ‘Science as a medium for friendship: how the Keller-Segel models came about’ (Bull. Math. Biol. 68, 1033) where she describes the history of the origin of the model. As indicated in the title of the paper the main object which the model was intended to describe was the formation of concentrations of the population density of the cellular slime mould Dictyostelium discoideum under certain circumstances. The title also indicates that the relevance of this model is to the very beginning of this process. Mathematical studies of the model which follow it up to the formation of singularities, and there are many of these in the literature, are probably of little relevance to Dictyostelium. They may, however, be of relevance to the population dynamics of other chemotactic organisms, for instance E. coli. References can be found for instance in a paper of Brenner, Levitov and Budrene (Physical mechanisms for chemotactic pattern formation in bacteria, Biophys. J., 74, 1677). In Dictyostelium, ‘stickiness’, as Keller and Segel call it (a more dignified-sounding name would be ‘adhesion’), comes into play before the cell density gets very high. The mathematical modelling of adhesion in this type of context does not seem to be well developed although some possibilities have been proposed.

Keller and Segel define a parabolic system of four coupled evolution equations. They then simplify this by assuming that some of the variables have already evolved to equilibrium. This results in a system of two equations which is the usual starting point of mathematical papers on the subject. The equations are linearized about a stationary and homogeneous state and a mode analysis is carried out for the resulting linear system. Growing modes indicate instability. The system contains a number of parameters and for certain choices of these parameters instability is found. This is interpreted as the genesis of the concentrations in the population density which are to be modelled. This procedure is very similar to what is done in many analyses of systems in physics. The authors refer to the analysis of the Bénard instability. Another source they quote is the work of Turing on pattern formation. The latter work has had a huge influence in mathematical biology. I am reminded of a mechanism in astrophysics, the Jeans instability, which is invoked to explain the formation of galaxies in the early universe.
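
Schematically the calculation looks as follows (this is a minimal two-equation chemotaxis model in my own notation, not the precise system of Keller and Segel): for n_t=D_n\Delta n-\chi\nabla\cdot(n\nabla c) and c_t=D_c\Delta c+an-bc, linearizing about a homogeneous state (n_0,an_0/b) and inserting a mode e^{ik\cdot x+\lambda t} gives a 2x2 matrix whose trace is negative, so growing modes appear exactly when its determinant becomes negative.

```python
# Mode analysis of a minimal Keller-Segel type system (illustrative notation).
import sympy as sp

Dn, Dc, chi, a, b, n0, k = sp.symbols('D_n D_c chi a b n_0 k', positive=True)

M = sp.Matrix([[-Dn*k**2, chi*n0*k**2],
               [a,        -Dc*k**2 - b]])
print(sp.factor(M.det()))
# One finds k**2*(D_c*D_n*k**2 + D_n*b - a*chi*n_0), which is negative for a
# range of small k**2 precisely when chi*a*n_0 > D_n*b.  The unstable band
# extends all the way down to k = 0, which fits the point made in the next
# paragraph that the linear analysis does not by itself select a spatial scale.
```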

A feature of the analysis which the authors see as unsatisfactory is that there is no prediction of the spatial scale on which the concentrations occur. They do make some suggestions for overcoming this. In experiments on Dictyostelium aggregation is seen to be accompanied by pulsations. It had been suggested that these are actively controlled by some pacemaker activity of the cells. Keller and Segel raise the idea that pulsations might arise from a system of the type they discuss without the need for additional input. In this context they mention the concept of overstability which as far as I can see just means the occurrence of an eigenvalue of the linearized problem with positive real part and non-zero imaginary part. In the absence of sufficient knowledge of the more recent literature on the subject I do not know whether, or to what extent, the issues raised in this paragraph have been resolved in the meantime.

