Archive for November, 2008

Reaction-diffusion equations, interfaces and geometric flows

November 24, 2008

The subject of this post is a reaction-diffusion equation of the form \partial_t u=\Delta u-\frac{1}{\epsilon^2} f'(u) for a real-valued function u with t\in {\bf R}^+ and x\in {\bf R}^n. Here \epsilon is a small parameter. This equation is often known as the Allen-Cahn equation and the phenomena discussed in the following were described in a 1979 paper of Allen and Cahn in Acta Metallurgica (which I have not seen). My primary source is a paper by Xinfu Chen (J. Diff. Eq. 96, 116) which includes both a nice introduction to the subject and rigorous proofs of some key theorems. The function f in the equation is a smooth function with certain qualitative properties: two minima u_1^- and u_2^- separated by a maximum u^+.

In physical terms there are two opposing effects at work in this system. There is reaction, which drives the evolution to resemble that of solutions of the corresponding ODE (the reaction term dominates in the limit \epsilon\to 0), and diffusion, which causes the solution to spread out in space. Generic solutions of the ODE tend to one of the two minima of the potential at late times. If the diffusion is set to zero this leads to two spatial regions where the solution has almost constant values u_1^- and u_2^- respectively. They are separated by a narrow interface where the spatial derivatives of u are large and the values of u are close to u^+. These ideas are captured mathematically by proving asymptotic expansions in the parameter \epsilon for the solutions on time intervals whose lengths depend on \epsilon in a suitable way.
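The reaction-dominated behaviour can already be seen in the ODE \dot u=-f'(u) (after rescaling time by \epsilon^2). A minimal sketch in Python, taking the standard double-well f(u)=\frac{1}{4}(u^2-1)^2 as an illustrative choice, so that u_1^-=-1, u_2^-=1 and u^+=0:

```python
def evolve(u0, t_end=20.0, dt=0.01):
    """Forward-Euler integration of u' = -f'(u) for the illustrative
    double well f(u) = (u^2 - 1)^2 / 4, so f'(u) = u^3 - u.
    The minima are at u = -1 and u = 1, the maximum at u = 0."""
    u = u0
    for _ in range(int(t_end / dt)):
        u += dt * (u - u**3)   # -f'(u) = u - u^3
    return u
```

Any initial value with u0 > 0 is driven to the minimum at 1 and any u0 < 0 to the minimum at -1; only the unstable equilibrium at the maximum u = 0 escapes the two wells, which is the sense in which generic solutions tend to a minimum.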

The following phenomena arise. Here I assume that the space dimension n is at least two. The case n=1 is special. The first statement is that on a suitable time interval, which is short when \epsilon is small, an interface forms. It can be described by a hypersurface. Without going into details it can be thought of as the level hypersurface u=u^+. The second statement describes how the interface moves in space on a longer timescale. If the function f has unequal values at the minima then the interface moves in the direction of its normal into the region corresponding to the larger of the two values, so that the energetically favoured phase expands, with a velocity proportional to the difference of the two values. On the timescale on which this motion takes place the interface stands still in the case that the two values are equal. It can, however, be shown that in the case of equal minima the interface moves in a definite way on an even longer timescale. The interface moves in the direction of its normal with a velocity which is proportional to its mean curvature at the given point. In other words, the hypersurface is a solution of mean curvature flow, one of the best-known geometric flows.
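For a round sphere of radius R in {\bf R}^n the mean curvature (taken as the sum of the principal curvatures) is (n-1)/R, so under mean curvature flow the radius satisfies \dot R=-(n-1)/R, with exact solution R(t)=\sqrt{R(0)^2-2(n-1)t}. A small Python check of this shrinking-sphere solution, purely as an illustration of the limiting flow and not a simulation of the Allen-Cahn equation itself:

```python
import math

def shrink_radius(r0, n, t_end, dt=1e-5):
    """Integrate dR/dt = -(n-1)/R, the mean curvature flow of a
    round sphere in R^n, by the forward Euler method."""
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * (-(n - 1) / r)
    return r

def exact_radius(r0, n, t):
    """Exact solution R(t) = sqrt(R0^2 - 2(n-1)t) of the same ODE."""
    return math.sqrt(r0**2 - 2 * (n - 1) * t)
```

The sphere becomes extinct in the finite time R(0)^2/(2(n-1)), one reason why the interface description can only be expected to hold on suitable time intervals.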

To finish I will state a question to which I have no answer, not even a vague one. Are there any useful analogues of these results for the hyperbolic equation \partial_t^2 u+\partial_t u=\Delta u-\frac{1}{\epsilon^2} f'(u)? Here there is a potential interaction between reaction, damping and dispersion.


Michaelis-Menten theory

November 15, 2008

Biochemical reactions are typically catalysed by enzymes. A common feature of these reactions is that the initial concentration of the enzyme is much smaller than that of the substrate. Their ratio defines a small dimensionless constant \epsilon. A model for the simplest reaction of this kind was introduced by Leonor Michaelis and Maud Menten in 1913. Following my habit of identifying links between the subjects of different posts on this blog whenever I can, I mention that Michaelis was an assistant to Paul Ehrlich in Berlin at one time.

My primary source for the following discussion is the well-known book of Murray on mathematical biology. Another useful source is a paper by Segel and Slemrod (SIAM Review 31, 446). When the basic chemical assumptions are translated into mathematics a system of four ordinary differential equations is obtained. One of these decouples and a conservation law can be used to eliminate another. The result is a system of two ODE. The unknowns are the concentrations of the substrate and of the complex made by the combination of substrate and enzyme. If these equations are written in dimensionless variables u (dimensionless concentration of substrate) and v (dimensionless concentration of complex) they take the schematic form \dot u=f(u,v),\epsilon\dot v= g(u,v). Recall that \epsilon is small. The sledgehammer method is then to say: let us set \epsilon equal to zero. If this is done the second equation becomes algebraic. If it is solved for v and the result substituted into the first equation then an ODE for u alone results. It is called the uptake equation. The question is now what solutions of the uptake equation have to do with solutions of the original system.
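As a concrete illustration, take the dimensionless form used by Murray, f(u,v)=-u+(u+K-\lambda)v and g(u,v)=u-(u+K)v with constants K>\lambda>0 (illustrative values below). Setting \epsilon=0 and solving g(u,v)=0 gives v=u/(u+K), and substitution yields the uptake equation \dot u=-\lambda u/(u+K). A sketch in Python:

```python
def uptake_rhs(u, lam=0.5, K=1.0):
    """Right-hand side of the uptake equation du/dt = -lam*u/(u+K),
    obtained by solving g(u,v) = u - (u+K)*v = 0 for v and
    substituting into f(u,v) = -u + (u+K-lam)*v."""
    return -lam * u / (u + K)

def solve_uptake(u0=1.0, t_end=5.0, dt=1e-3, lam=0.5, K=1.0):
    """Forward-Euler integration of the uptake equation."""
    u = u0
    for _ in range(int(t_end / dt)):
        u += dt * uptake_rhs(u, lam, K)
    return u
```

The substrate concentration decays monotonically to zero, at the Michaelis-Menten rate \lambda u/(u+K) which saturates for large u.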

There are natural initial conditions for the original system coming from its interpretation. Since it is a system of two first order equations there are two conditions. For the uptake equation, on the other hand, it is only possible to prescribe one condition, which is the initial condition for u. The original system together with the preferred initial data determine a unique solution for u and v for any fixed \epsilon. Nevertheless considering the limit \epsilon\to 0 may be useful for extracting interesting information from the solution. In fact there is a short time interval after the initial time where v varies very rapidly. This time, which is of order \epsilon, is so short that it is not possible to measure the evolution of the concentration experimentally in this interval. The quantity of most practical interest is in a sense the limit of the time derivative of u as t\to 0, but since the measurements are done at times greater than \epsilon this limit can be evaluated using the uptake equation. It is not equal to the corresponding derivative computed from the solution of the original system with the natural initial conditions. At the times of measurement, however, the solution of the uptake equation gives a good approximation to the genuine solution, although the two are very different near t=0. The mathematical techniques which play a role in analysing more precisely what is going on here, and why certain formal procedures give the right answer, are singular perturbation theory and matched asymptotic expansions.
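The mismatch at t=0 and the subsequent agreement can be checked numerically. A sketch, again with Murray's dimensionless system as an assumed concrete form and illustrative values \lambda=0.5, K=1, \epsilon=0.01, starting from the natural initial data u(0)=1, v(0)=0:

```python
def solve_full(eps=0.01, lam=0.5, K=1.0, t_end=1.0, dt=1e-4):
    """Forward-Euler integration of the full stiff system
       u' = -u + (u+K-lam)*v,   eps*v' = u - (u+K)*v
    with the natural initial data u(0) = 1, v(0) = 0.
    The step dt must be small compared with eps for stability."""
    u, v = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        du = -u + (u + K - lam) * v
        dv = (u - (u + K) * v) / eps
        u, v = u + dt * du, v + dt * dv
    return u, v
```

At t=0 the full system gives \dot u(0)=-u(0)=-1 since v(0)=0, whereas the uptake equation would give -\lambda/(1+K)=-0.25. After a transient of order \epsilon, however, v stays close to the quasi-steady-state value u/(u+K) and the two descriptions agree.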

Migrating ion channels

November 11, 2008

It is common in people suffering from multiple sclerosis that the distance they can walk without a rest is a lot less than what a healthy person can do. While walking a certain kind of fatigue arises which is different from the ordinary tiredness of muscles. Continuing for long enough leads to a state where the muscles just do not seem to respond any more. If this state has been approached too closely it can take many hours for normality to be restored. What is the mechanism of this kind of fatigue? It does not seem to be mentioned in most texts on the subject.

Some years ago I read an account of a patient suffering from the neuromuscular disease myasthenia gravis. She could walk normally in the morning but was confined to a wheelchair in the afternoon. Superficially this sounds like an extreme version of the fatigue in MS mentioned above. In this case more is known about what is going on. Myasthenia gravis is an autoimmune disease where the target of the attack mounted by the immune system is the acetylcholine receptors of muscles. Normally acetylcholine released by nerve cells carries the signal to the muscle cells that they should contract. This is interfered with by antibodies against the receptors. The result of this is that when the nerves responsible for directing the action of muscles are stimulated too often there is not enough acetylcholine present. It then takes a long time before the system can recover. It is clear that this cannot be the mechanism working in MS but I wondered whether in that case neurotransmitters could be depleted in some other way. In fact it seems that this is not the right explanation. In the book ‘McAlpine’s Multiple Sclerosis’ I found another alternative for which there is some evidence. It has to do with ion channels.

The mechanism of propagation of electrical signals in nerve cells was discovered in the early 1950s by Alan Hodgkin and Andrew Huxley. It got them a Nobel prize in 1963. The basis of the phenomenon is the flow of potassium and sodium ions across the cell membrane. In the resting state of the axon there is an electrical potential across the cell membrane which results from the difference in concentration of potassium ions on the two sides. When a nerve signal (action potential) passes, the permeability of the membrane to sodium and potassium ions changes in response to the changes in potential. This is a non-trivial dynamical process. A natural mathematical model would be a system of reaction-diffusion equations which admits travelling wave solutions. The study of this may be simplified by going to the “space-clamped” case. This comes down to studying solutions which only depend on time. It gives rise to a system of nonlinear ordinary differential equations. A possibility of studying this situation experimentally was found in the giant axon of the squid Loligo. This was used by Hodgkin and Huxley to get information about the coefficients of the ODE system. Once they had that information they had to solve the equations numerically in order to compare theory with experiment. In those days this numerical work had to be done by hand, although a couple of years later they were able to apply some of the first ever computers, then being developed in Cambridge. In the Nobel lecture of Huxley we find a vivid description of doing this calculation. He says: “This was a laborious business … a propagated action potential took a matter of weeks. But it was often quite exciting. … an important lesson I learned from these manual computations was the complete inadequacy of one’s intuition in trying to deal with a system of this degree of complexity.” The Hodgkin-Huxley model is one of the most notable successes of mathematical biology.
It is an example of the concept of an excitable system mentioned in a previous post. Now we know that the changes of permeability of the membrane are due to ion channels which regulate the movement of ions through the cell membrane. Ion channels are made by molecules embedded in the membrane which undergo conformational changes as a result of various stimuli. In the case of nerve conduction the stimuli are electrical. These changes are not deterministic. What changes with the applied potential is the probability of a channel being open, closed or inactivated.
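The space-clamped system can be written down explicitly. Below is a sketch in Python of the classical Hodgkin-Huxley equations with the standard textbook parameter values fitted to the squid axon (potentials in mV, time in ms); the injected current of 10 µA/cm² is an illustrative choice that produces repetitive firing.

```python
import math

# Membrane capacitance (µF/cm²) and maximal conductances (mS/cm²)
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
# Reversal potentials (mV)
E_NA, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates of the gating variables
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_end=50.0, dt=0.01):
    """Forward-Euler integration of the space-clamped Hodgkin-Huxley
    equations under a constant injected current i_ext (µA/cm²).
    Returns the membrane potential trace."""
    v = -65.0   # resting potential
    # Gating variables start at their steady-state values at rest
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = []
    for _ in range(int(t_end / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
        i_k = G_K * n**4 * (v - E_K)          # potassium current
        i_l = G_L * (v - E_L)                 # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace
```

With this stimulus the potential leaves the resting state and spikes, overshooting 0 mV before the potassium current restores it, which is the action potential Hodgkin and Huxley once computed by hand.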

The propagation speed of nerve impulses increases with the diameter of the axon and the unusually large diameter of the axon in the squid is its method of achieving fast signalling. Vertebrates like ourselves have found another method, which is to insulate the axon using myelin. It is the myelin which is damaged in MS. The rate of travel of nerve signals along the axon is not uniform. There are small regions along the axon, the nodes of Ranvier, where myelin is absent. The nerve impulse jumps from one node to the next. Associated to this is the fact that under normal circumstances most of the sodium and potassium channels are concentrated near the nodes of Ranvier. They are kept there by the oligodendrocytes, which are also the cells that produce myelin. In fact myelin consists of layers of cell membrane of the oligodendrocytes.

In MS myelin is removed from the axons and nerve conduction no longer works properly, if it works at all in a given axon. There may be remyelination but this generally only produces a thin layer of myelin whose insulating properties are limited. Now it seems, and here I come back to what I read in ‘McAlpine’s Multiple Sclerosis’, that the nerve cells have developed another strategy to restore conduction to some extent. This is that ion channels migrate along the axon from the nodes of Ranvier. The general mechanism of conduction is then a different one from that of the fully myelinated axon. One disadvantage is that the nerve conduction is not so fast. The other is that ions may accumulate on one side of the cell membrane and the resting potential cannot be reestablished in an efficient way. Each firing of the nerve cell results in only a small change in the concentration of ions after it has passed. If, however, these small changes are not corrected for regularly they can add up to a serious change. This is a possible explanation for the fatigue. I would not claim that this is a definitive explanation. On p. 628ff of ‘McAlpine’s Multiple Sclerosis’ a zoo of different ion channels is discussed. From the short section on fatigue on p. 643 it seems clear that there are a lot of theories around. I would like to penetrate further into the matter.