## Archive for July, 2009

### Metabolic networks

July 24, 2009

The processes of life involve systems of coupled chemical reactions. Often these are assumed to be homogeneous in space so that the concentrations of the substances involved are functions of time alone. The dynamics of these quantities are described by a system of ODE satisfied by the concentrations. I will call a system of this kind a metabolic system. What I want to do here is to say what is special about metabolic systems compared to general ODE systems. I also want to say something about features of these systems which are typically studied in theoretical work. A feature of metabolic systems which should be mentioned immediately is that they are often very large and moreover depend on a large number of parameters. This makes rigorous analysis difficult and can lead to a strong temptation to move to numerical approaches. On the basis of my personal mathematical preferences I would like to see analytical work taken as far as possible. My main source of information on this topic is the book ‘Systems Biology in Practice’ by E. Klipp et al.

Consider a system of $n$ substances taking part in $r$ reactions. The system is taken to be of the form $\frac{dS}{dt}=N\nu$ where $S$ is the vector of concentrations taking values in ${\bf R}^n$, $\nu$ is a vector of reaction rates taking values in ${\bf R}^r$ and $N$ is a constant matrix, the stoichiometric matrix. It is $n$ by $r$ and contains the information about how many molecules of each type are consumed or produced in each reaction. The whole system depends on $m$ parameters which form a vector $p$ in ${\bf R}^m$. Notice that any system of ODE can be put into this form – simply take $n=r$ and $N$ equal to the identity. One obviously interesting question about the system is how many stationary solutions it admits. Actually what is of interest is steady state solutions where $\nu$ is not identically zero. (Call these non-trivial.) The concentrations of all chemicals should be independent of time but there should be non-zero reaction rates, so that individual reactions are converting certain chemicals into others. Non-trivial steady states are only possible if the rank of $N$ is less than $r$. In general the possible steady-state reaction rates form a vector space, the kernel of $N$, of dimension $r$ minus the rank of $N$. Notice that the definition of ‘non-trivial’ here does not only use information about the ODE system – it also uses information about a particular splitting of the right hand side into two factors.
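As a small illustration, here is a sketch of this rank criterion for a made-up linear pathway (the network is hypothetical, chosen only so that the numbers are easy to check): two species $S_1, S_2$ with an inflow reaction, a conversion and an outflow. The space of steady-state fluxes is the kernel of $N$, of dimension $r$ minus the rank.

```python
import numpy as np

# Hypothetical pathway  -> S1 -> S2 ->  with reactions v1, v2, v3.
# N is n x r with n = 2 species and r = 3 reactions.
N = np.array([[1, -1,  0],    # S1: produced by v1, consumed by v2
              [0,  1, -1]])   # S2: produced by v2, consumed by v3

n, r = N.shape
rank = np.linalg.matrix_rank(N)
flux_dim = r - rank   # dimension of the space of steady-state reaction rates

# A basis of the kernel of N via the SVD: the rows of Vt beyond the rank.
_, s, Vt = np.linalg.svd(N)
null_basis = Vt[rank:]   # each row nu satisfies N @ nu = 0
```

Here `flux_dim` comes out as 1, and the kernel vector is proportional to $(1,1,1)$: at steady state all three reactions must run at the same rate, as expected for a single chain.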

Metabolic control analysis is an attempt to understand which changes in a metabolic system result in which changes in particular features of the solutions. The quantity $\nu$ depends on the variables $S$ and $p$. For certain purposes it may be helpful to consider the derivatives of $\nu$ with respect to these variables. In terms of components this means considering the partial derivatives $\frac{\partial\nu_i}{\partial S_j}$ or $\frac{\partial\nu_i}{\partial p_j}$. In fact it is common to consider normalized quantities such as $\frac{S_j}{\nu_i}\frac{\partial\nu_i}{\partial S_j}$, which is possible as long as the denominators do not vanish. The resulting quantities are called elasticities ($\epsilon$-elasticity for $S$ and $\pi$-elasticity for $p$). These relative quantities seem to require giving up the picture of certain quantities as vectors. Maybe they should be thought of as bunches of scalars or as points of a manifold admitting certain preferred transformations. A different type of coefficient is the control coefficient. Control coefficients are defined in terms of steady state solutions of the system. Steady states can be changed by changing parameters in the system. It seems to me that the definition of the control coefficients requires being able to define the rates of change of some aspect of the steady state solution (e.g. one of the concentrations) with respect to another. This appears to include the implicit requirement that the steady states are locally isolated for given values of the parameters. This means that both quantities of interest are functions of a common quantity (the parameter being varied) so that we have a chance to define the derivative of one with respect to the other.
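The normalized derivative defining an $\epsilon$-elasticity is easy to approximate numerically. The following sketch uses a hypothetical mass-action rate (the rate law and constants are invented for illustration); for mass-action kinetics the elasticities come out equal to the kinetic orders of the reaction.

```python
import numpy as np

# Hypothetical mass-action rate nu = k * S1**2 * S2 (kinetic orders 2 and 1).
def nu(S, k=3.0):
    return k * S[0]**2 * S[1]

# epsilon-elasticity (S_j / nu) * d(nu)/d(S_j), via a central difference.
def elasticity(rate, S, j, h=1e-6):
    Sp, Sm = S.copy(), S.copy()
    Sp[j] += h
    Sm[j] -= h
    dnu = (rate(Sp) - rate(Sm)) / (2 * h)
    return S[j] / rate(S) * dnu

S = np.array([0.7, 1.3])
eps = [elasticity(nu, S, j) for j in range(2)]   # approximately [2, 1]
```

The result is independent of both $k$ and the point $S$ at which it is evaluated, which illustrates why the normalized quantities are the natural ones to work with.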

There are certain types of nonlinearities which are typically used in modelling metabolic processes. The simplest is the mass action form. Here if $p$ molecules of a species with concentration $S_1$ and $q$ molecules of a species with concentration $S_2$ are the inputs for a certain reaction then the reaction rate is taken to be proportional to $S_1^pS_2^q$. This has a simple intuitive interpretation in terms of the probability that the necessary molecules meet. The other common type of nonlinearity results from the mass action form by a Michaelis-Menten reduction, a procedure described in a previous post. This leads to a non-polynomial nonlinearity but has the important advantage of reducing the number of equations in the system.
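The two rate laws can be written down in a few lines; the constants below ($k$, $V_{\max}$, $K_m$ and the exponents) are hypothetical placeholders, not values from any particular model.

```python
# Mass-action rate for a reaction consuming p molecules of S1 and q of S2.
def mass_action(S1, S2, k=1.0, p=1, q=2):
    return k * S1**p * S2**q

# Michaelis-Menten rate, the reduced form for an enzyme-catalysed reaction,
# with maximal rate Vmax and Michaelis constant Km.
def michaelis_menten(S, Vmax=2.0, Km=0.5):
    return Vmax * S / (Km + S)
```

Note the qualitative difference: the mass-action rate is polynomial and unbounded, while the Michaelis-Menten rate behaves like $(V_{\max}/K_m)S$ for small $S$ but saturates at $V_{\max}$ for large $S$.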

### Casimir invariants

July 13, 2009

For some time I have wanted to learn about the concept of Casimir invariants and I was not very satisfied with the information I found. Now I have made new efforts to learn about this topic and I will record some of what I learned here. Let $g$ be a finite-dimensional Lie algebra and let $G$ be the corresponding connected and simply connected Lie group. The Lie algebra $g$ can be identified with $T_e G$, the tangent space to $G$ at the identity. There is a one-to-one correspondence between elements of this tangent space and left-invariant vector fields on $G$. Let $T(g)$ be the tensor algebra over $g$. Let $U(g)$ denote the associative algebra which is the quotient of $T(g)$ by the two-sided ideal generated by elements of the form $x\otimes y-y\otimes x-[x,y]$. This is called the universal enveloping algebra of $g$. There is a natural embedding $i$ of $g$ into $U(g)$ and it is a Lie algebra homomorphism into $L(U(g))$, the Lie algebra obtained from $U(g)$ by using the commutator to define a Lie bracket. Given an associative algebra $A$ and a Lie algebra homomorphism $\phi$ from $g$ to $L(A)$ there exists an algebra homomorphism $\psi$ from $U(g)$ to $A$ such that $\phi=\psi\circ i$. This is the universal property which appears in the name of the object. Here the fact has been used implicitly that $A$ and $L(A)$ can be identified as sets. An important example is given by a representation $\rho$ of the Lie algebra $g$ on a vector space $V$, which can be thought of as a Lie algebra homomorphism from $g$ to $gl(V)=L({\rm End}(V))$.

A Casimir invariant, or Casimir element or Casimir operator of $g$ is an element of the centre of $U(g)$. What remains unclear to me is whether these three concepts are supposed to be equivalent, or just related. I am also not sure whether (in any of these cases) any element of the centre is allowed, or just a particular one or a particular type. One definition I have found is the following. Suppose that $g$ is semisimple. Then it has a Killing form $K$ which is a non-degenerate bilinear form. Let $X^i$ be a basis of $g$ and let $X_i$ be the basis of $g$ dual to it via $K$. (I.e. $K(X^i,X_j)=\delta^i_j$.) Then the Casimir invariant is defined to be the element of $U(g)$ given by $C=\sum _i X^i X_i$. This is independent of the basis chosen. Since $K$ is an invariant bilinear form it follows that $C$ commutes with all elements of $g$ and in fact lies in the centre of $U(g)$. As a consequence of the universal property it is possible to define an object $\rho (C)$. This is an operator on $V$ which commutes with all elements of the image of $\rho$. If the representation is irreducible then this implies that $\rho(C)$ is a multiple of the identity. The factor of proportionality is a real number which is an invariant of the representation.
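This construction can be checked concretely for $sl(2,{\bf R})$ in its defining representation. The sketch below computes the structure constants from the standard basis $H,E,F$, builds the Killing form $K(X,Y)={\rm tr}({\rm ad}\,X\,{\rm ad}\,Y)$, and forms $\rho(C)=\sum_{i,j}(K^{-1})_{ij}\,\rho(X^i)\rho(X^j)$; the choice of basis and the numerical route (least squares for the expansion coefficients) are just conveniences for illustration.

```python
import numpy as np

# Basis of sl(2,R) in the defining (2-dimensional) representation.
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
basis = [H, E, F]

def bracket(X, Y):
    return X @ Y - Y @ X

# Structure constants: [X_a, X_b] = sum_c C[a,b,c] X_c, obtained by expanding
# each bracket in the basis (a least-squares solve, exact here).
flat = np.array([B.flatten() for B in basis]).T   # 4 x 3
C = np.zeros((3, 3, 3))
for a in range(3):
    for b in range(3):
        C[a, b], *_ = np.linalg.lstsq(flat, bracket(basis[a], basis[b]).flatten(),
                                      rcond=None)

# Killing form K(X_a, X_b) = tr(ad X_a ad X_b), with (ad X_a)_{c,b} = C[a,b,c].
ad = np.array([C[a].T for a in range(3)])
K = np.array([[np.trace(ad[a] @ ad[b]) for b in range(3)] for a in range(3)])

# rho(C) = sum_i rho(X^i) rho(X_i) = sum_{i,j} (K^{-1})_{ij} rho(X^i) rho(X^j).
Kinv = np.linalg.inv(K)
Cas = sum(Kinv[i, j] * basis[i] @ basis[j]
          for i in range(3) for j in range(3))
# Since the defining representation is irreducible, Cas is a multiple of the
# identity; here the eigenvalue works out to 3/8.
```

One can verify by hand that this agrees with $C=\frac18 H^2+\frac14(EF+FE)$, using $K(H,H)=8$ and $K(E,F)=4$, which gives $\rho(C)=\frac38 I$.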

There is a theorem on the structure of the centre of the universal enveloping algebra of a semisimple Lie algebra which is associated with the term ‘Harish-Chandra homomorphism’. It can be used to list the number of elements required to generate the centre and their orders as polynomials in basis vectors of the Lie algebra. The number of these generators is the rank of the algebra. For instance for $SL(2,R)$ there is one generator of order two. For $SU(3)$ there are generators of order two and three and the rank is two. The same will be true for $SL(3,R)$ or $SU(2,1)$. My aim at the moment is not to learn the abstract theory in depth but rather to understand enough to do some calculations for a specific application. I plan to say more about the application in a later post.