Archive for August, 2020

Determinant of a block triangular matrix

August 26, 2020

I am sure that the fact I am going to discuss here is well known, but I do not know a good source and so I decided to prove it for myself and record the answer here. Suppose we have an n\times n matrix M with complex entries m_{ij} and that it is partitioned into four blocks by the index ranges 1\le i\le k and k+1\le i\le n for some positive integer k<n. Suppose that the lower left block is zero, so that the matrix is block upper triangular. Denote by A, B and C the upper left, upper right and lower right blocks. Then \det M=\det A\det C. This is already interesting in the block diagonal case B=0. To begin the proof I use the fact that over the complex numbers the matrices which are diagonalizable with distinct eigenvalues are dense in the space of all matrices. Approximate A and C by matrices of this kind, A_n and C_n respectively. Replacing A and C by A_n and C_n gives an approximating sequence M_n for M. If we can show that \det M_n=\det A_n\det C_n for each n then the desired result follows by continuity of the determinant. The reason I like this approach is that the density statement may be complicated to prove but it is very easy to remember and can be applied over and over again. The conclusion is that it suffices to treat the case where A and C are diagonalizable with distinct eigenvalues. Since the determinant of a matrix of this kind is the product of its eigenvalues, it is enough to show that every eigenvalue of A or C is an eigenvalue of M. In general the determinant of a matrix is equal to the determinant of its transpose, so a matrix and its transpose have the same eigenvalues. Put another way, left eigenvectors define the same set of eigenvalues as right eigenvectors. Let \lambda be an eigenvalue of A and x a corresponding eigenvector. Then [x\ 0]^T is an eigenvector of M corresponding to the same eigenvalue, since the lower left block of M is zero. Hence any eigenvalue of A is an eigenvalue of M.
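The identity itself is easy to check numerically. Here is a quick sanity check in NumPy; the sizes n=5, k=2 and the random blocks are arbitrary choices of mine for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2                              # arbitrary illustrative sizes
A = rng.standard_normal((k, k))          # upper left block
B = rng.standard_normal((k, n - k))      # upper right block
C = rng.standard_normal((n - k, n - k))  # lower right block

# Assemble the block upper triangular matrix: lower left block is zero.
M = np.block([[A, B], [np.zeros((n - k, k)), C]])

# det M should equal det A * det C, independently of B.
print(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(C))
```

The two printed numbers agree to rounding error, and changing B leaves them unchanged.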
Next let \lambda be an eigenvalue of C and y a corresponding left eigenvector. Then [0\ y] is a left eigenvector of M with eigenvalue \lambda. We see that all eigenvalues of A and C are eigenvalues of M. To see that we get all eigenvalues of M in this way it suffices to do the approximation of A and C in such a way that A_n and C_n have no common eigenvalues for any n. Then we just need to count eigenvalues: A_n and C_n contribute k and n-k distinct eigenvalues respectively, and M_n, being an n\times n matrix, has no room for any others. A consequence of this result is that the characteristic polynomial of M is the product of those of A and C. An application which interests me is the following. Suppose we have a partially decoupled system of ODEs \dot x=f(x,y), \dot y=g(y), where x and y are vectors. Then the derivative of the right hand side of this system at any point has the block triangular form considered above, since g does not depend on x. This is in particular true for the linearization of the system about a steady state and so the above result can be helpful in stability analyses.
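For the stability application the useful reformulation is that the spectrum of a block triangular matrix is the union of the spectra of its diagonal blocks, counted with multiplicity, so the linearization of the partially decoupled system is stable exactly when both blocks are. A small numerical illustration with random blocks (the sizes are again arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
A = rng.standard_normal((k, k))          # plays the role of D_x f
B = rng.standard_normal((k, n - k))      # plays the role of D_y f
C = rng.standard_normal((n - k, n - k))  # plays the role of D_y g
M = np.block([[A, B], [np.zeros((n - k, k)), C]])  # block triangular Jacobian

# Spectrum of M = spectrum of A together with spectrum of C.
eig_M = np.sort_complex(np.linalg.eigvals(M))
eig_blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                             np.linalg.eigvals(C)]))
print(np.allclose(eig_M, eig_blocks))
```

In particular the steady state of the full system is linearly stable if and only if all eigenvalues of both diagonal blocks have negative real parts.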


Talk by David Ho on COVID-19

August 22, 2020

On Thursday David Ho gave a keynote lecture at the SMB conference. He talked about work to develop monoclonal antibodies against SARS-CoV-2. He started by apologising, in view of his audience, for the fact that there would be no mathematics in his talk, but he did make clear his continuing belief in the importance of applying mathematics to biology. He has been leading an effort with a precise medical goal – to find effective neutralising antibodies against this new virus. Antibodies were obtained from five patients severely ill with COVID-19. Four of them survived while one later died of the disease. These antibodies were then analysed by biochemical and bioinformatic means to find those which bound best to the spike protein of the virus. In this context I learned some basic things about the virus. The spike, which is used by the virus to enter cells and is a trimer, is considered the number one target for antibodies which could be effective in combating the disease. More precisely there are two different subdomains which are possible targets, one more at the tip of the spike (the receptor-binding domain) and another more on the sides (the N-terminal domain). A number of antibodies were found which bind to the first subdomain or to one of the subunits of the second. Another was found whose binding site is somewhat less local. This whole process was carried out in just a few weeks, a remarkable achievement.

The antibodies just mentioned are the therapeutic candidates. The idea is to either produce monoclonal antibodies with these sequences or possibly versions which are improved so as to be longer-lived. Monoclonal antibodies are known to be extremely expensive when used to treat other diseases, such as cancer. They are also expensive in the present context, but the speaker said that the retail cost depends very much on the quantity produced. In other applications the number of patients is relatively small and the cost correspondingly high. If the antibodies were being used for a very large number of patients the cost would be lower. It would remain problematic for low and middle income countries. It has been discussed that the Gates Foundation might make it possible to offer this treatment in poorer countries for fifty dollars a dose. The main advantage of this method compared with that of trying to use antibodies from the serum of patients directly is that it is much more practical to apply on a very large scale. The effectiveness of the antibodies against the disease has been tested in hamsters. Ho gave the impression of someone tackling a major problem of humanity head on with some of the best tools available. He said that an article giving an account of the work had appeared in Nature (Potent neutralizing antibodies against multiple epitopes on SARS-CoV-2 spike, Nature 584, 450). In response to a question on one aspect of the treatment he said that the answer was not known, but that he would be continuing directly to a Zoom meeting of researchers leading the attempts to develop therapies which was to discuss exactly that question.

My first virtual conference (SMB 2020)

August 20, 2020

At the moment I am attending the annual conference of the Society for Mathematical Biology, which is taking place online. This is my first experience of this kind of format. The conference has many more participants than in any previous year, more than 1700. It takes place in a virtual building which is generated by the program Sococo. I find this environment quite disorienting and a bit stressful. This reaction probably has to do with the facts that I am no longer so young and that I have always tried to avoid social media as much as possible. I am sure that younger generations (and members of older generations with an enthusiasm for new technical developments) have far fewer problems getting used to it. In advance I was a bit worried about whether I would manage the technical set-up needed to give my talk, or even to attend others. In the end it worked out and my talk, given via Zoom, went smoothly. I got some good feedback, and I am already convinced that it was worth joining this meeting; I may be less sceptical about joining others of this type in the future. There have been technical hitches. For instance the start of one big talk was delayed by about 20 minutes for a reason of this kind. Nevertheless, many things have gone well. Of course it is much preferable to meet people personally but when that is not possible virtual meetings with old friends are also pleasant.

This meeting has been organized around the subgroups of the society. I am a member of the subgroups for immunology and oncology. In fact my talk got scheduled in the group for mathematical modelling. My greatest allegiance is to the immunology subgroup and so I was happy to see that it was represented so strongly at this conference. It has seven sessions of lectures, made up of 28 talks. If I had to choose a favourite among the talks of this subgroup I have heard so far (its sessions are not finished yet) then I would pick the talk of Ruy Ribeiro on CD8 cells in HIV infection. I did feel I was missing some necessary background but I nevertheless found the talk useful in introducing me to important ideas which were new to me. The immunology subgroup have managed to get a very prominent speaker for their keynote talk, David Ho. I wrote about his work and its relation to mathematics in one of my first ever posts on this blog, way back in 2008. I am looking forward to hearing his talk later today (which will be on COVID-19). I will now mention some of my other personal highlights from the conference so far. I noticed on the program that Stas Shvartsman was giving a talk. His work has made a positive impression on me in the past and I have had a little e-mail contact with him. On the other hand I had never met him personally and had never heard a talk by him. This was my chance and I went in with high expectations. They were not disappointed. He talked about developmental defects arising from mutations in a single gene. In particular he concentrated on mutations in the Raf-MEK-ERK MAPK cascade. I was familiar with the role of mutations in this cascade in cancer but I had never heard about this other role. Stas described experiments in Drosophila where one base in MEK is mutated. This produces flies with a particular small change in the pattern of the veins in their wings.
Interestingly, this does not occur in all flies with the mutation but only in 30% of them (I hope I am remembering the right number). Another talk yesterday which I appreciated was that by Robert Insall, who was talking about aspects of chemotaxis. He started with the phenomenon of the spread of melanoma. Melanoma cells have a very strong tendency to spread in space and the question is what controls that. Is it chemotaxis along a chemical gradient? He showed pictures of melanoma cells moving fast up a chemical gradient. Then he showed a similar picture showing them moving just as fast without the chemical gradient. So what is going on? This kind of experiment starts with cells concentrated on one side of a region and a spatially homogeneous distribution of a relevant substance. The cells consume this substance, create their own gradient and then undergo chemotaxis along it. The speaker explained that there are many instances of chemotaxis in biology which can only be explained by a self-created gradient and not by a pre-existing one. He also made some interesting remarks about how mathematical modelling can lead to insights in biology which would not be possible with the usual verbal approaches of the biologists.
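The self-generated gradient mechanism is easy to caricature in one space dimension. The following sketch is my own, not from the talk, and all parameter values are invented: cells diffuse and chemotax up the gradient of an attractant which starts out spatially uniform and is consumed by the cells, and the centre of mass of the population nevertheless moves steadily away from where the cells started.

```python
import numpy as np

# 1D domain [0, L] with no-flux boundaries, finite volume grid.
N, L = 100, 10.0
dx = L / N
x = (np.arange(N) + 0.5) * dx

rho = np.exp(-((x - 1.0) / 0.5) ** 2)  # cells concentrated on the left
c = np.ones(N)                         # attractant initially uniform: no gradient
D_rho, D_c, chi, k_cons = 0.05, 0.05, 1.0, 2.0  # invented parameters
dt, steps = 0.002, 5000

def laplacian(u):
    v = np.pad(u, 1, mode="edge")      # edge padding gives zero-flux boundaries
    return (v[:-2] - 2 * u + v[2:]) / dx**2

for _ in range(steps):
    # chemotactic flux chi * rho * dc/dx, evaluated at the cell faces
    grad_c = (c[1:] - c[:-1]) / dx
    flux = chi * 0.5 * (rho[1:] + rho[:-1]) * grad_c
    div = np.zeros(N)                  # conservative divergence of the flux
    div[:-1] += flux / dx
    div[1:] -= flux / dx
    rho = rho + dt * (D_rho * laplacian(rho) - div)
    c = c + dt * (D_c * laplacian(c) - k_cons * rho * c)  # cells consume attractant

# The cells deplete the attractant where they sit, so its gradient points
# away from them, and the population migrates although no gradient was imposed.
print(np.sum(x * rho) / np.sum(rho))   # centre of mass, initially about 1.0
```

The printed centre of mass ends up well to the right of its starting value, illustrating how consumption alone can create the gradient along which the cells then move.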

I also want to mention an interesting conversation I had in the poster session. The poster concerned was that of Daniel Koch. The theme of his work (which has already been published in a paper in J. Theor. Biol.) is that the formation of oligomers of proteins (or their posttranslationally modified variants) can lead to interesting dynamics. At first sight this may sound too simple to be interesting but in fact in mathematics it is often the careful consideration of apparently simple situations which leads to fundamental progress. I imagine that this principle also applies to other disciplines (such as biology) but it is perhaps strongest in mathematics. In any case, I am strongly motivated to study this work carefully. The only question is when that will be, given the many other directions I want to pursue.

Bifurcation and stability

August 6, 2020

A first step in studying solutions of the dynamical system \dot x=f(x,\lambda) is to look at the set of steady states, the solutions of the equations f(x,\lambda)=0. In the simplest cases these equations can be solved for the unknowns x as a function of the parameters \lambda. A criterion for when this can be done, at least in principle, is given by the implicit function theorem. If f(0,0)=0 then the condition is that A=D_x f(0,0) is invertible. A slightly less favourable situation is when the original system of n equations for n variables can be reduced to a single equation for a single variable in such a way that once this one equation has been solved the other unknowns can be calculated at the steady state. This is related to the case where D_x f(0,0) has rank n-1 and the appropriate analogue of the implicit function theorem is Lyapunov-Schmidt reduction. A brief general description of this technique can be found in a previous post. If, apart from the zero eigenvalue forced by the rank assumption, A has no purely imaginary eigenvalues then the system has a one-dimensional centre manifold and there is a relation between Lyapunov-Schmidt reduction and centre manifold reduction. There are a lot of similarities between these two techniques but also some important differences. In the case of centre manifold theory we obtain the existence of a one-dimensional submanifold, which may be non-unique and may be less regular (in the sense of being smooth or analytic) than the system itself. In the case of Lyapunov-Schmidt reduction we obtain a one-dimensional quotient manifold which is unique and as regular as the system itself. Note that when the latter method is applied some choices must be made but the essential results are independent of those choices.
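For readers who have not seen it, the reduction can be summarised schematically as follows (this is my own compressed notation; see Golubitsky and Schaeffer for the precise statements and hypotheses):

```latex
% Setting: f(x,\lambda)=0 with A = D_x f(0,0) of rank n-1.
% Split the variables along \ker A and the equations along \mathrm{im}\,A:
\mathbb{R}^n = \ker A \oplus V, \qquad
\mathbb{R}^n = \mathrm{im}\,A \oplus W, \qquad
x = X + v, \quad X \in \ker A, \; v \in V.
% With P the projection onto \mathrm{im}\,A along W the system splits as
P f(X+v,\lambda) = 0, \qquad (I-P) f(X+v,\lambda) = 0.
% The first part can be solved for v = \phi(X,\lambda) by the implicit
% function theorem, since A maps V isomorphically onto \mathrm{im}\,A.
% Substituting into the second part leaves the scalar bifurcation equation
g(X,\lambda) := (I-P) f\bigl(X+\phi(X,\lambda),\lambda\bigr) = 0.
```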

When it is possible to reduce the equations for steady states to a single equation p(X,\alpha)=0 as discussed above it may still be difficult to determine how many solutions the equation has for fixed \alpha. The function has been denoted by p since in many applications it is a polynomial. The issue of the number of solutions is not my concern here. Instead I assume that I already know something about how many solutions exist and I would like to know something about their stability. The question is under what circumstances a reduction of the existence question to one dimension also leads to a reduction of the stability question to one dimension. Here I discuss a result on this question which was obtained in a paper of Crandall and Rabinowitz with the title ‘Bifurcation, perturbation of simple eigenvalues and linearized stability’ (Arch. Rat. Mech. Anal. 52, 161 (1973)). The exposition I will give here is based not on that paper (which I have not read) but on that in the book ‘Singularities and Groups in Bifurcation Theory’ by Golubitsky and Schaeffer.

I will discuss the result in the simplest case I can think of. This is where x is a point in the plane and \alpha is a scalar. I assume that the kernel of A is the x_1-axis and its image the x_2-axis. I assume further that the non-zero eigenvalue of A is -a with a>0. In this situation \partial f_2/\partial x_2(0,0,0)=-a\ne 0 and so, by the implicit function theorem, the equation f_2(x_1,x_2,\alpha)=0 can be written in the form x_2=h(x_1,\alpha) near the origin. Substituting this into the equation f_1(x_1,x_2,\alpha)=0 gives an equation of the form g(x_1,\alpha)=0. Here x_1 plays the role of X above. Steady states close to the origin are in one-to-one correspondence with zeroes of g. The main result is that the stability of the steady state corresponding to the point X is determined by the sign of the derivative g' of g with respect to X. When that sign is negative the steady state is asymptotically stable and when the sign is positive it is a saddle.
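To see the criterion in action, here is a toy example which I made up to fit the assumptions above (it is not taken from Crandall and Rabinowitz or from Golubitsky and Schaeffer). Take f_1=\alpha x_1-x_1x_2 and f_2=-x_2+x_1^2, so that eliminating x_2 gives g(x_1,\alpha)=\alpha x_1-x_1^3, a pitchfork:

```python
import numpy as np

# Toy system: f1 = alpha*x1 - x1*x2, f2 = -x2 + x1**2.
# At the origin with alpha = 0 the linearization is [[0, 0], [0, -1]]:
# kernel = x1-axis, image = x2-axis, non-zero eigenvalue -1 (so a = 1).
# Eliminating x2 = x1**2 gives g(x1, alpha) = alpha*x1 - x1**3,
# hence g'(x1, alpha) = alpha - 3*x1**2.

def jacobian(x1, x2, alpha):
    return np.array([[alpha - x2, -x1],
                     [2.0 * x1,   -1.0]])

alpha = 1.0
steady_states = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]  # zeros of g, x2 = x1**2
for x1, x2 in steady_states:
    g_prime = alpha - 3.0 * x1**2
    max_re = np.linalg.eigvals(jacobian(x1, x2, alpha)).real.max()
    print(x1, g_prime, max_re)
```

At the origin g'=\alpha=1>0 and the Jacobian has an eigenvalue with positive real part (a saddle), while at (\pm 1,1) we get g'=-2<0 and both eigenvalues of the Jacobian have negative real part, exactly as the theorem predicts.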