Now subgroups are being set up within the Society to concentrate on particular subjects. One of these, the Immunobiology and Infection Subgroup, had its inaugural meeting this week and of course I went. There I and a number of other people learned a basic immunological fact which we found very surprising. It is well known that the thymus decreases in size with age, so that presumably our capacity to produce new T cells is constantly decreasing. The obvious assumption, which I had made, is that this is a fairly passive process related to the fact that many systems in our bodies run down with age. We learned from Johnna Barnaby that the situation may be very different. It may be that the decrease in the size of the thymus is due to active repression by sex hormones. She is involved in work on therapy for prostate cancer and said that it has been found that in men with prostate cancer who are given drugs to reduce their testosterone levels the thymus increases in size.

There were some recurrent themes at the conference. One was oncolytic viruses. These are genetically engineered viruses intended to destroy cancer cells. In modelling these it is common to use extensions of the fundamental model of virus dynamics which is very familiar to me. For instance Dominik Wodarz talked about some ODE models for oncolytic viruses in vitro where the inclusion of interferon production in the model leads to bistability. (In response to a question from me he said that it is a theorem that without the interferon bistability is impossible.) I was pleased to see how, more generally, a lot of people were using small ODE models making real contact with applications. Another recurrent theme was that there are two broad classes of macrophages which may be favourable or unfavourable to tumour growth. I should find out more about that. Naveen Vaidya talked about the idea that macrophages in the brain may be a refuge for HIV. Actually, even after talking to him I am not sure whether it should not rather be microglia than macrophages. James Moore talked about the question of how T cells are eliminated in the thymus or become Tregs. His talk was more mathematical than biological but it underlined once again that I want to understand more about positive and negative selection in the thymus and the related production of Tregs.
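
To indicate the kind of small ODE model meant here, the following is a minimal sketch of the fundamental model of virus dynamics (uninfected cells, infected cells and free virus), simulated with a hand-rolled Runge-Kutta integrator. The parameter values are arbitrary illustrative choices and not fitted to any oncolytic virus model.

```python
import numpy as np

def rhs(s, lam=10.0, d=0.1, beta=0.01, a=0.5, k=100.0, u=5.0):
    """Fundamental model of virus dynamics: uninfected cells x are
    produced at rate lam, die at rate d and are infected at rate beta*x*v;
    infected cells y die at rate a; virus v is produced at rate k*y and
    cleared at rate u."""
    x, y, v = s
    return np.array([lam - d * x - beta * x * v,
                     beta * x * v - a * y,
                     k * y - u * v])

def rk4_step(s, dt):
    k1 = rhs(s); k2 = rhs(s + dt / 2 * k1)
    k3 = rhs(s + dt / 2 * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Basic reproductive ratio R0 = (lam/d) * beta * k / (a * u) decides
# whether the infection takes off; here R0 = 40.
R0 = (10.0 / 0.1) * 0.01 * 100.0 / (0.5 * 5.0)

state = np.array([100.0, 0.0, 1.0])   # start with a little free virus
v_max, dt = 0.0, 1e-3
for _ in range(50_000):               # integrate to t = 50
    state = rk4_step(state, dt)
    v_max = max(v_max, state[2])
```

Since $R_0>1$ the virus load grows by orders of magnitude before settling towards the infected steady state.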

On a quite different subject there were two plenary talks related to coral reefs. A theme which is common in the media is that of the damage to coral due to climate change. Of course this is dominated by politics and usually not accompanied by any scientific information on what is going on. The talk of Marissa Blaskett was an excellent antidote to this kind of thing and now I have really understood something about the subject. The other talk, by Mimi Koehl, was less about the reefs themselves than about the way in which the larvae of snails which graze on the coral colonize the reef. I found the presentation very impressive because it started with a subject which seemed impossibly complicated and showed how scientific investigation, in particular mathematical modelling, can lead to understanding. The subject was the interaction of microscopic swimming organisms with the highly turbulent flow of sea water around the reefs. Investigating this involved, among many other things, measuring the turbulent flow around the reef using Doppler velocimetry, reconstructing this flow in a wave tunnel containing an artificial reef in order to study the small-scale structure of the transport of chemical substances by the flow, and going out and checking the results by following dye released into the actual reef. Last but not least there was the mathematical modelling. The speaker is a biologist and she framed her talk with slides showing how many (most?) biologists hate mathematical modelling and how she loves it.


A talk I found very interesting was by Sebastian Walcher. I already wrote briefly about a talk of his in Copenhagen in a previous post but this time I understood a lot more. The question he was concerned with is how to find interesting small parameters in dynamical systems which allow the application of geometric singular perturbation theory (GSPT). In GSPT the system written in the slow time (with the smallness parameter included as a variable) contains a whole manifold of steady states, the critical manifold. The most straightforward theory is obtained when the eigenvalues of the linearization of the system transverse to the critical manifold lie away from the imaginary axis. This corresponds to the situation of a transversely hyperbolic manifold of steady states. The first idea of Walcher's talk is that whenever we have a transversely hyperbolic manifold of steady states in a dynamical system this is an opportunity for identifying a small parameter. This may not sound very useful at first sight because it would seem reasonable that generic dynamical systems would never contain manifolds of steady states of dimension greater than zero. There is a reason why this observation is misleading for systems arising from reaction networks. In these systems the state space is defined by positivity conditions on the concentrations and there are also certain parameters (such as reaction constants and total amounts) which are required to be positive. To have a name, let us call the region defined by these positivity conditions the conventional region of the spaces of states and parameters. In the conventional region manifolds of steady states are not to be expected. On the other hand it frequently happens that they arise when we go to the boundary of that region. A familiar example is the passage to the Michaelis-Menten limit in the system describing a reaction catalysed by an enzyme.
This takes us from the extended mass action kinetics for substrate, free enzyme and substrate-enzyme complex to Michaelis-Menten kinetics for the substrate alone. Roughly speaking it is the limit where the amount of the enzyme is very small compared to the amount of the substrate. I have often wondered whether there could not be a kind of 'anti-Michaelis-Menten' limit where the amount of enzyme is very large compared to the amount of the substrate. I asked Walcher whether he knew how to do this and how it fitted into his general scheme. He gave me a positive answer to this question and some references and I must look into this in detail when I get time. The reason for being interested in this is that if we can obtain suitable information about a limiting case on the boundary it may be possible to obtain information on the part of the conventional region where a certain parameter is small but non-zero.
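
The Michaelis-Menten limit can be checked numerically. The sketch below compares the full mass action system for $S+E\rightleftharpoons C\to E+P$ (using the conservation law $e=e_0-c$) with the reduced Michaelis-Menten equation when the total enzyme $e_0$ is small compared to the substrate; the rate constants are arbitrary illustrative values.

```python
# Full mass action kinetics for S + E <-> C -> E + P, reduced by the
# conservation law e = e0 - c, against the Michaelis-Menten reduction.
k1, km1, k2 = 1.0, 1.0, 1.0
s0, e0 = 10.0, 0.1                    # enzyme much scarcer than substrate
Km, Vmax = (km1 + k2) / k1, k2 * e0   # Michaelis constant and max rate

dt, steps = 1e-3, 50_000              # explicit Euler up to t = 50
s_full, c = s0, 0.0                   # substrate and complex, full system
s_mm = s0                             # substrate, reduced system
for _ in range(steps):
    ds = -k1 * s_full * (e0 - c) + km1 * c
    dc = k1 * s_full * (e0 - c) - (km1 + k2) * c
    s_full += dt * ds
    c += dt * dc
    s_mm += dt * (-Vmax * s_mm / (Km + s_mm))
```

With $e_0/s_0=0.01$ the two substrate curves stay close; making $e_0$ comparable to $s_0$ instead makes the discrepancy obvious, which is the regime where one would want an 'anti-Michaelis-Menten' reduction.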

There was one talk which did have a connection to population biology in a way closer to what I had expected. It happens all the time that ecosystems are damaged by exotic species imported, deliberately or by accident, from other parts of the world. There are also well-known stories of the type that, to try to control exotic species number one, exotic species number two is introduced and is itself very harmful. It is nice to hear of an example where this kind of introduction of an exotic species was very successful. It is the case of the cassava plant, which was introduced from South America to Africa and became a staple food there. Then an insect from South America (species number one) called the mealybug was introduced accidentally and caused enormous damage. Finally an ecologist called Hans Herren introduced a parasitic wasp (species number two) from South America, restoring the food supply and saving numerous lives (often the number 20 million is quoted). More details of this story can be found here.

I want to mention one statement made in the talk of Gheorghe Craciun in Oberwolfach which I found intriguing. I might have heard it before but it did not stick in my mind properly. The statement is that the set of dynamical systems which possess a complex balanced steady state is a variety of codimension $\delta$, where $\delta$ is the deficiency. There seemed to be some belief in the audience that this variety is actually a smooth manifold. On one afternoon we had something similar to the breakout sessions in Banff. I suggested the topic for one of these, which was Lyapunov functions. The idea was to compare the classes of Lyapunov functions which people working on different classes of dynamical systems knew. This certainly did not lead to any breakthrough but I think it did lead to a useful exchange of information. I documented the discussion for my own use and I think I could profit by following up some of the leads there.
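
The deficiency itself is elementary to compute from the network data as $\delta=n-l-s$, where $n$ is the number of complexes, $l$ the number of linkage classes and $s$ the rank of the stoichiometric subspace. Here is a small sketch, using the reversible Michaelis-Menten network as an example (the network and its deficiency zero are standard; the data structures are my own ad hoc choices).

```python
import numpy as np

# Complexes as dicts mapping species to stoichiometric coefficients.
species = ["S", "E", "C", "P"]
complexes = [{"S": 1, "E": 1}, {"C": 1}, {"E": 1, "P": 1}]
# Reactions as (reactant complex index, product complex index):
# S+E -> C, C -> S+E, C -> E+P.
reactions = [(0, 1), (1, 0), (1, 2)]

def to_vector(cplx):
    return np.array([cplx.get(sp, 0) for sp in species], float)

# Linkage classes = connected components of the graph of complexes,
# found with a tiny union-find.
parent = list(range(len(complexes)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for a, b in reactions:
    parent[find(a)] = find(b)

n = len(complexes)
l = len({find(i) for i in range(n)})
# Rank of the stoichiometric subspace, spanned by product minus reactant.
vecs = [to_vector(complexes[b]) - to_vector(complexes[a]) for a, b in reactions]
s = int(np.linalg.matrix_rank(np.array(vecs)))
deficiency = n - l - s
```

For this network $n=3$, $l=1$, $s=2$, so the deficiency is zero.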

To finish I want to mention a claim made by Ankit Gupta in his talk. It did not sound very plausible to me but I expect that it at least contains a grain of truth. He said that these days more papers are published on than on all of mathematics.


In the standard dual futile cycle we have a substrate which can be phosphorylated up to two times by a kinase and dephosphorylated again by a phosphatase. It is assumed that the (de-)phosphorylation is distributive (the number of phosphate groups changes by one each time a substrate binds to an enzyme) and sequential (the phosphate groups are added in one order and removed in the reverse order). A well-known alternative to this is processive (de-)phosphorylation, where the number of phosphate groups changes by two in one encounter between a substrate and an enzyme. It is known that the double phosphorylation system with distributive and sequential phosphorylation admits reaction constants for which there are three steady states, two of which are stable. (From now on I only consider sequential phosphorylation here.) By contrast the corresponding system with processive phosphorylation always has a unique steady state. Through the talk of Anne Shiu here I became aware of the following facts. In a paper by Suwanmajo and Krishnan (J. R. Soc. Interface 12:20141405) it is stated that in a mixed model with distributive phosphorylation and processive dephosphorylation periodic solutions occur as a result of a Hopf bifurcation. The paper does not present an analytical proof of this assertion.

It is a well-known open question whether there are periodic solutions in the case that the modifications are all distributive. It has been claimed in a paper of Errami et al. (J. Comp. Phys. 291, 279) that a Hopf bifurcation had been discovered in this system but the claim seems to be unjustified. In our breakout sessions we looked at whether oscillations might be exported from the mixed model to the purely distributive model. We did not get any definitive results yet. There were also discussions on effective ways of detecting Hopf bifurcations, for instance by using Hurwitz determinants. It is well known that oscillations in the purely distributive model, if they exist, do not persist in the Michaelis-Menten limit. I learned from Anne Shiu that it is similarly the case that the oscillations in the mixed model are absent from the Michaelis-Menten system. This result came out of some undergraduate research she supervised. Apart from these specific things I learned a lot just from being in the environment of these CRN people.

Yesterday was a free afternoon and I went out to look for some birds. I saw a few things which were of interest to me, one of which was a singing Tennessee warbler. This species has a special significance for me for the following reason. Many years ago, when I still lived in Orkney, I got an early-morning phone call from Eric Meek, the RSPB representative. He regularly checked a walled garden at Graemeshall for migrants. On that day he believed he had found a rarity and wanted my help in identifying it and, if possible, catching it. We did catch it and it turned out to be a Tennessee warbler, the third ever recorded in Britain. That was big excitement for us. I had not seen Eric for many years and I was sad to learn now that he died a few months ago at a relatively young age. The name of this bird misled me into thinking that it was at home in the southern US. In fact the name just comes from the fact that the first one to be described was a migrant found in Tennessee. The breeding range is much further north, especially in Canada. Thus it is quite appropriate that I should meet it here.


What is more important is that there are now arguments in favour of doing so. With the EU showing signs of a possible disintegration, the chance that I could lose the privileges I have here as an EU citizen is not so small that it should be neglected. The referendum in which the Scots voted on the possibility of leaving the UK was the concrete motivation for my decision to start the application process. Scotland stayed in the UK, but then Brexit confirmed that I had made the right decision. At the moment there is no problem with keeping British citizenship when obtaining German citizenship and I am doing so. This may change sometime, meaning that I would have to give up my British citizenship to keep the German one, but I see this as of minor importance.

As prerequisites for my application I had to do a number of things. Of course it was necessary to submit a number of documents but I have the feeling that the amount of effort was less than when obtaining the documents needed to get married here. I had to take an examination concerning my knowledge of the German language, spoken and written. It was far below my actual level of German and so from that point of view it was a triviality. It was just a case of investing a bit of time and money. I also had to do a kind of general knowledge test on Germany and on the state where I live. This was also easy in the sense that the questions were not only rather simple for anyone who has lived in the country for some time but they are also taken from a list which can be seen in advance. Again it just meant an investment of time and money. At least I did learn a few facts about Germany which I did not know before. In my case these things were just formalities but I think it does make sense that they exist. It is important to ensure that other applicants with a background quite different from mine have at least a minimal knowledge of the language and the country before they are accepted.

After all these things had been completed and I had submitted everything it took about a year before I heard that the application had been successful. This time is typical here in Mainz – I do not know how it is elsewhere in Germany – and it results from the huge backlog of files. People are queueing up to become German citizens, attracted by the prospect of a strong economy and a stable political system. Yesterday I was invited to an event where the citizenship of the latest group of candidates was bestowed in a ceremony presided over by the mayor. There were about 60 new citizens there from a wide variety of countries. The most frequent nationality by a small margin was Turkish, followed by people from other middle eastern countries such as Iraq and Iran. There were also other people from the EU with the most frequent nationality in that case being British. My general feeling was one of being slightly uneasy that I was engaged in a futile game of changing horses. It is sad that the most civilised countries in the world are so much affected by divisive tendencies instead of uniting to meet the threats confronting them from outside.


Consider a polynomial $p(x)=a_0x^n+a_1x^{n-1}+\ldots+a_n$ with real coefficients. For a fixed value of $n$ the Hurwitz matrix is an $n$ by $n$ matrix defined as follows. The $i$th diagonal element is $a_i$, with $1\le i\le n$. Starting from a diagonal element and proceeding to the left along a row the index increases by one in each step. Similarly, proceeding to the right along a row the index decreases by one. In the ranges where the index is negative or greater than $n$ the element is replaced by zero. The leading principal minors of the Hurwitz matrix, in other words the determinants of the submatrices which are the upper left hand corner of the original matrix, are the Hurwitz determinants $\Delta_i$. The Hurwitz criterion says that the real parts of all roots of the polynomial are negative if and only if $a_0>0$ and $\Delta_i>0$ for all $i$. Note that a necessary condition for all roots to have negative real parts is that all $a_i$ are positive. Now $\Delta_n=a_n\Delta_{n-1}$ and so the last condition can be replaced by $a_n>0$. Note that the form of the $\Delta_i$ does not depend on $n$. For $n=2$ we get the conditions $a_0>0$, $a_1>0$ and $a_2>0$. For $n=3$ we get the conditions $a_0>0$, $a_1>0$, $a_1a_2-a_0a_3>0$ and $a_3>0$. Note that the third condition is invariant under the replacement of $a_i$ by $a_{n-i}$. When $a_0>0$, $a_3>0$ and $a_1a_2-a_0a_3>0$ then the conditions $a_1>0$ and $a_2>0$ are equivalent to each other. In this way the invariance under reversal of the order of the coefficients becomes manifest. For $n=4$ we get the conditions $a_0>0$, $a_1>0$, $a_1a_2-a_0a_3>0$, $a_3(a_1a_2-a_0a_3)-a_1^2a_4>0$ and $a_4>0$.
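
These definitions can be turned into a short computation. The sketch below builds the Hurwitz matrix in the convention where the $i$th diagonal element is $a_i$ and the index decreases by one per step to the right along a row (so the $(i,j)$ entry is $a_{2i-j}$, with coefficients outside the range $0,\ldots,n$ set to zero) and evaluates the Hurwitz determinants for two cubic examples.

```python
import numpy as np

def hurwitz_matrix(a):
    """Hurwitz matrix of p(x) = a[0] x^n + a[1] x^(n-1) + ... + a[n].
    The (i, j) entry (1-based) is a_{2i-j}, zero outside the range 0..n."""
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = 2 * i - j
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return H

def hurwitz_determinants(a):
    """Leading principal minors Delta_1, ..., Delta_n."""
    H = hurwitz_matrix(a)
    return [np.linalg.det(H[:i, :i]) for i in range(1, len(a))]

# p(x) = (x+1)^3 = x^3 + 3x^2 + 3x + 1: all roots at -1, so stable;
# the determinants are Delta_1 = 3, Delta_2 = 9 - 1 = 8, Delta_3 = 8.
stable = hurwitz_determinants([1, 3, 3, 1])
# p(x) = x^3 - x^2 + x + 1 has a negative coefficient, so some root
# has non-negative real part; indeed Delta_1 = a_1 = -1 < 0.
unstable = hurwitz_determinants([1, -1, 1, 1])
```

The first example also illustrates the identity $\Delta_3=a_3\Delta_2$ from above.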

Next we look at the issue of loss of stability. If $G$ is the region in matrix space where the Routh-Hurwitz criteria are satisfied, what happens on the boundary of $G$? One possibility is that at least one eigenvalue becomes zero. This is equivalent to the condition $a_n=0$. Let us look at the situation where the boundary is approached while $a_n$ remains positive, in other words the determinant of the matrix remains non-zero. Now $\Delta_n=a_n\Delta_{n-1}$ and so one of the quantities $\Delta_i$ with $i<n$ must become zero. In terms of eigenvalues what happens is that a number of complex conjugate pairs reach the imaginary axis away from zero. The generic case is where it is just one pair. An interesting question is whether and how this kind of event can be detected using the $\Delta_i$ alone. The condition for exactly one pair of roots to reach the imaginary axis is that $\Delta_{n-1}=0$ while the $\Delta_i$ remain positive for $i<n-1$. In a paper of Liu (J. Math. Anal. Appl. 182, 250) it is shown that the condition for a Hopf bifurcation that the derivative of the real part of the eigenvalues with respect to a parameter is non-zero is equivalent to the condition that the derivative of $\Delta_{n-1}$ with respect to the parameter is non-zero. In a paper with Juliette Hell (Math. Biosci. 282, 162), not knowing the paper of Liu, we proved a result of this kind in a special case.
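
This detection of a Hopf point via $\Delta_{n-1}$ can be watched numerically. In the sketch below the test family is a cubic chosen by hand to have roots $-1$ and $\mu\pm i$: its Hurwitz determinant $\Delta_2=a_1a_2-a_0a_3$ changes sign exactly at $\mu=0$, where the complex pair crosses the imaginary axis, while $a_3=\mu^2+1$ stays positive, so no eigenvalue passes through zero.

```python
import numpy as np

def delta2(mu):
    """Delta_2 = a1*a2 - a0*a3 for the cubic
    p(x) = (x + 1)(x^2 - 2*mu*x + mu^2 + 1),
    whose roots are -1 and mu +/- i."""
    a0 = 1.0
    a1 = 1.0 - 2.0 * mu
    a2 = mu**2 + 1.0 - 2.0 * mu
    a3 = mu**2 + 1.0
    return a1 * a2 - a0 * a3

mus = np.linspace(-0.5, 0.5, 1001)
vals = np.array([delta2(m) for m in mus])
# Delta_2 passes through zero precisely at the Hopf point mu = 0,
# with non-zero derivative there (Liu's transversality condition).
crossing = mus[np.argmin(np.abs(vals))]
```

For $\mu$ slightly negative $\Delta_2$ is positive (stable) and for $\mu$ slightly positive it is negative (unstable), so scanning the sign of $\Delta_{n-1}$ along a parameter path locates the bifurcation without computing any eigenvalues.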


A mathematical model for the initial stages of T cell activation (the first few minutes) was formulated and studied by Altan-Bonnet and Germain (PLoS Biol. 3(11), e356). They were able to use it successfully to make experimental predictions, which they could then confirm. The predictions were made with the help of numerical simulations. From the point of view of the mathematician a disadvantage of this model is its great complexity. It is a system of more than 250 ordinary differential equations with numerous parameters. It is difficult even to write the definition of the model on paper or to describe it completely in words. It is clear that such a system is difficult to study analytically. Later Francois et al. (PNAS 110, E888) introduced a radically simplified model for the same biological situation which seemed to show a comparable degree of effectiveness to the original model in fitting the experimental data. In fact the simplicity of the model even led to some new successful experimental predictions. (Altan-Bonnet was among the authors of the second paper.) This is the kind of situation I enjoy, where a relatively simple mathematical model suffices for interesting biological applications.

In the paper of Francois et al. they not only do simulations but also carry out interesting analytical calculations for their model. On the other hand they do not follow the direction of attempting to use these calculations to formulate and prove mathematical theorems about the solutions of the model. Together with Eduardo Sontag I have now written a paper where we obtain some rigorous results about the solutions of this system. In the original paper the only situation considered is that where the system has a unique steady state and any other solution converges to that steady state at late times. We have proved that there are parameters for which there exist three steady states. A numerical study of these indicates that two of them are stable. A parameter in the system is the number $N$ of phosphorylation sites on the T cell receptor complex which are included in the model. The results just mentioned on steady states were obtained for a particular value of $N$.

An object of key importance is the response function. The variable which measures the degree of activation of the T cell in this model is the concentration $x_N$ of the maximally phosphorylated state of the T cell receptor. The response function describes how $x_N$ depends on the important input variables of the system. These are the concentration $L$ of the ligand and the constant $\nu$ describing the rate at which the ligand unbinds from the T cell receptor. A widespread idea (the lifetime dogma) is that the quantity $\nu^{-1}$, the dissociation time, determines how strongly an antigen signals to a T cell. It might naively be thought that the response should be an increasing function of $L$ (the more antigen present the stronger the stimulation) and a decreasing function of $\nu$ (the longer the binding the stronger the stimulation). However both theoretical and experimental results lead to the conclusion that this is not always the case.
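
For orientation, here is a toy calculation which is emphatically not the model of Francois et al., only the plain kinetic proofreading chain underlying the naive intuition. Bound receptor complexes advance through phosphorylation states $C_0\to\ldots\to C_N$ at a rate $\phi$ and unbind at the rate $\nu$, and at low occupancy $C_0$ is proportional to $L$; the names $\phi$, $N$ and the proportionality constant are illustrative assumptions. The steady-state output is then proportional to $L(\phi/(\phi+\nu))^N$, which is exactly the monotone behaviour described above; the interest of the full model is that feedback destroys this monotonicity.

```python
def response(L, nu, phi=0.1, N=5, kappa=1.0):
    """Steady-state output of a plain kinetic proofreading chain:
    complexes advance C_0 -> ... -> C_N at rate phi and unbind at
    rate nu, with C_0 proportional to the ligand concentration L.
    (Toy illustration only, not the Francois et al. model.)"""
    alpha = phi / (phi + nu)      # probability of surviving one step
    return kappa * L * alpha ** N

# Naive monotone behaviour: more ligand, or slower unbinding,
# means more output.
r1 = response(L=1.0, nu=0.1)
r2 = response(L=2.0, nu=0.1)      # doubling the ligand doubles the output
r3 = response(L=1.0, nu=0.2)      # faster unbinding reduces the output
```

The exponent $N$ is what makes the chain discriminate sharply between ligands with different $\nu$, which is the point of kinetic proofreading.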

We proved analytically that for certain values of the parameters $x_N$ is a decreasing function of $L$ and an increasing function of $\nu$. Since these rigorous results give rather poor information on the concrete values of the parameters leading to this behaviour and on the global form of the function, we complemented this analytical work by simulations. These show how $x_N$ can have a maximum as a function of $\nu$ within this model and that as a function of $L$ it can have the following form in a log-log plot. For small $L$ the graph is a straight line of slope one. As $L$ increases it switches to being a straight line with a different slope and for still larger values it once again becomes a line of slope one, shifted with respect to the original one. Finally the curve levels out, as it must do, since the function is bounded. The proofs do not make heavy use of general theorems and are in general based on doing certain estimates by hand.

All of these results were of the general form 'there exist parameter values for the system such that a certain behaviour happens'. Of course this is just a first step. In the future we would like to understand better to what extent biologically motivated restrictions on the parameters lead to restrictions on the dynamical behaviour.


Ocrelizumab acts by causing B cells to be killed. It has been seen to have strong positive effects in combating MS in some cases. This emphasizes the fact that T cells, usually regarded as the main culprits causing damage during MS, are not alone. B cells also seem to play an important role, although what that role is is not so clear. There previously existed an antibody against CD20, rituximab, which was used in the therapy of diseases other than MS. Ocrelizumab has had problematic side effects, with a high frequency of infections and a slightly increased cancer risk. For this reason it has been abandoned as a therapy for rheumatoid arthritis. On the other hand the trial for MS has had fewer problems with side effects.

One reason not to be too euphoric about this first treatment for progressive MS is the following. It has been shown to be effective in patients in the first few years of illness and in those where there are clear signs of inflammatory activity in MRI scans. This suggests a certain suspicion to me. The different types of MS are not clearly demarcated. Strong activity in MRI scans is typical of the relapsing-remitting (RR) form. So I wonder if the patients in whom this drug is effective are perhaps individuals with an atypical RR form where the disease activity just does not cross the threshold to becoming manifest on the symptomatic level for a certain time. This says nothing against the usefulness of the drug in this class of patients but it might be a sign that its applicability will not extend to a wider class of patients with the progressive form in the future. It also suggests caution in hoping that the role of B cells in this therapy might help to understand the mechanism of progressive MS.


One important discovery of Poincaré was chaos. He discovered it in the context of his work on celestial mechanics and indeed that work was closely connected to his founding the subject of dynamical systems as a new way of approaching ordinary differential equations, emphasizing qualitative and geometric properties in contrast to the combination of complex analysis and algebra which had dominated the subject up to that point. The existence of chaos places limits on predictability and it is remarkable that these do not affect our ability to do science more than they do. For instance it is known that there are chaotic phenomena in the motion of objects belonging to the solar system. This nevertheless does not prevent us from computing the trajectories of the planets and those of space probes sent to the other end of the solar system with high accuracy. These space probes do have control systems which can make small corrections but I nevertheless find it remarkable how much can be computed a priori, although the system as a whole includes chaos.

This issue is part of a bigger question. When we try to obtain a scientific understanding of certain phenomena we are forced to neglect many effects, in particular when setting up a mathematical model. If I model something using ODE then I neglect spatial effects (which would require partial differential equations). Often the aim is also not to model one particular object but a population of similar objects, and I neglect the variation between these objects, which I do not have under control and for whose description a stochastic model would be necessary. And of course quantum phenomena are very often neglected. Here I will not try to address these wider issues but will concentrate on the following more specific question. Suppose I have a system of ODE which is a good description of the real-world situation I want to describe. The evolution of solutions of this system is uniquely determined by initial data. There remains the problem of sensitive dependence on initial data. To be able to make a prediction I would like to know that if I make a small change in the initial data the change in some predicted quantity is also small. What 'small' means in practice is fixed by the application. A concrete example is the weather forecast, whose essential limits are illustrated mathematically by the Lorenz system, one of the icons of chaos. Here the effective limit is a quantitative one: we can get a reasonable weather forecast for a couple of days but not more. More importantly, this time limit is not set by our technology (the amount of observational data collected, the size of the computer used, the sophistication of the numerical programs used) but by the system itself. This time limit will not be relaxed at any time in the future. Thus one way of getting around the effects of chaos is just to restrict the questions we ask by limits on the time scales involved.

Another aspect of this question is that even when we are in a regime where a system of ODE is fully chaotic there will be some aspects of its behaviour which are predictable. This is why it is possible to talk of 'chaos theory'. I know too little about this subject to say more about it here. One thing I find intriguing is the question of model reduction. Often it is the case that starting from a system of ODE describing something we can reduce it to an effective model with fewer variables which still includes essential aspects of the behaviour. If the dimension of the reduced model is one or two then chaos is lost. If there was chaos in the original model how can this be? Has there been some kind of effective averaging? Or have we restricted to a regime (a subset of phase space) where chaos is absent? Are the questions we tend to study somehow restricted to chaos-free regions? If the systems being modelled are biological, is the prevalence of chaos influenced by the fact that biological systems have evolved? I have seen statements to the effect that biological systems are often 'on the edge of chaos', whatever that means.

This post contains many questions and few answers. I just felt the need to bring them up.


In the north of England there is an area called the Kielder Forest with a lake in the middle, and the region around the lake is inhabited by a population of the field vole (Microtus agrestis). It is well known that populations of voles undergo large fluctuations in time. What is less well known is what the spatial dependence is like. There are two alternative scenarios. In the first the population density of voles oscillates in a way which is uniform in space. In the second it is a travelling wave of the form $u(x-ct)$. In that case the population at a fixed point of space oscillates in time but the phase of the oscillations is different at different spatial points. In general there is relatively little observational data on this type of thing. The voles in the Kielder Forest are an exception, since in that case a dedicated observer collected data which provides information on both the temporal and the spatial variation of the population density. This data is the basis for the modelling which I will now describe.

The main predators of the voles are weasels (Mustela nivalis). It is possible to set up a model where the unknowns are the populations of voles and weasels. Their interaction is modelled in a simple way common in predator-prey models. Their spatial motion is described by a diffusion term. In this way a system of reaction-diffusion equations is obtained. These are parabolic equations and so the time evolution is non-local in space. The unknowns are defined on a region with boundary which is the complement of a lake. Because of this we need not only initial values to determine a solution but also boundary conditions. How should they be chosen? In the area around the lake there live certain birds of prey, kestrels. They hunt voles from the air. In most of the area being considered there is very thick vegetation and the voles can easily hide from the kestrels. Thus the direct influence of the kestrels on the vole population is negligible and the kestrels do not need to be included in the reaction-diffusion system. They do, however, have a striking indirect effect. On the edge of the lake there is a narrow strip with little vegetation and any vole which ventures into that area is in great danger of being caught by a kestrel. This means that the kestrels essentially enforce the vanishing of the population density of voles at the edge of the lake. In other words they impose a homogeneous Dirichlet boundary condition on one of the unknowns at the boundary. Note that this is incompatible with spatially uniform oscillations, since on the boundary oscillations are ruled out by the Dirichlet condition. When the PDE are solved numerically what is seen is that the shore of the lake generates a train of travelling waves which propagate away from it. This can also be understood theoretically, as explained in the papers quoted above.
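
A minimal sketch of this kind of simulation is the following one-dimensional caricature. The kinetic terms and parameter values are generic predator-prey choices made up for illustration, not those of the actual vole-weasel models; the point is only the boundary conditions, a homogeneous Dirichlet condition for the voles at the shore and no-flux conditions otherwise.

```python
import numpy as np

# 1D caricature on [0, 100] with the lake shore at x = 0.
# Voles u: logistic growth and Holling type II predation.
# Weasels w: growth from predation, constant death rate.
nx, dx, dt = 200, 0.5, 0.01
D_u, D_w = 1.0, 1.0
rng = np.random.default_rng(0)
u = 0.5 + 0.1 * rng.random(nx)        # perturbed initial vole density
w = 0.1 * np.ones(nx)                 # initial weasel density

def lap(f):
    """Discrete Laplacian; the endpoints are set by the BCs below."""
    g = np.zeros_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return g

for _ in range(2000):                 # explicit Euler up to t = 20
    pred = 3.0 * u * w / (1.0 + u)
    du = D_u * lap(u) + u * (1.0 - u) - pred
    dw = D_w * lap(w) + pred - 1.2 * w
    u, w = u + dt * du, w + dt * dw
    u[0] = 0.0                        # kestrels: Dirichlet condition at shore
    w[0] = w[1]                       # no-flux for the weasels at the shore
    u[-1], w[-1] = u[-2], w[-2]       # no-flux at the far end
```

Even this crude scheme keeps the densities non-negative and pins the vole density to zero at the shore, which is the structural feature that forbids spatially uniform oscillations in the real models.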


In a previous post I mentioned a video by Ira Mellman. At the conference I had the opportunity to hear him live. One thing which became clear to me at the conference is the extent to which, among the checkpoint inhibitor drugs, anti-PD-1 is superior to anti-CTLA-4. It is successful in a much higher proportion of patients. I never thought much about PD-1 before. It is a receptor which is present on the surface of T cells after they have been activated and it can be stimulated by the ligand PD-L1, leading to the T cell being switched off. But how does this switching off process work? The T cell is normally switched on by the engagement of the T cell receptor and a second signal from CD28. In his talk Mellman explained that the switching off due to PD-1 is not due to signalling from the T cell receptor being stopped. Instead what happens is that PD-1 activates the phosphatase SHP2, which dephosphorylates and thus deactivates CD28. Even a very short deactivation of CD28 is enough to turn off the T cell. In thinking about mathematical models for T cell activation I had thought that there might be a link to checkpoint inhibitors. Now it looks like models for T cell activation are not of direct relevance there and that instead it would be necessary to model CD28.

I learned some more things about viruses and cancer. One is that the Epstein-Barr virus, famous for causing Burkitt’s lymphoma, also causes other types of cancer, in particular other types of lymphoma. Another is that viruses are being used in a therapeutic way. I had heard of oncolytic viruses before but I had never really paid attention. In one talk the speaker showed a picture of a young African man who had been cured of Burkitt’s lymphoma by … getting measles. This gave rise to the idea that viruses can sometimes preferentially kill cancer cells and that they can perhaps be engineered so as to do so more often. In particular measles is a candidate. In that case there is an established safe vaccination and the idea is to vaccinate with a genetically modified measles virus to fight certain types of cancer.

In going to this conference my main aim was to improve my background in aspects of biology and medicine which could be of indirect use for my mathematical work. In fact, to my surprise, I met one of the authors of a paper on T cell activation which is closely related to mathematical topics I am interested in. This was Philipp Kruger, who is in the group of Omer Dushek in Oxford. I talked to him about the question of what is really the mechanism by which signals cross the membrane of T cells. One possibility he mentioned was a conformational change in CD3. Another, which I had already come across, is that it could have to do with a mechanical effect by which the binding of a certain molecule brings the cell membranes of two interacting cells together and expels large phosphatases like CD45 from a certain region. In the paper of his which I had looked at, signalling in T cells is studied with the help of CAR T cells, which have an artificial analogue of the T cell receptor which may have a much higher affinity than the natural receptor. In his poster he described a new project looking at the effect of using different co-receptors in CAR T cells (not just CD28). In any case CAR T cells were a subject which frequently came up at the conference. Something which was in the air was that this therapy may be associated with neurotoxicity in some cases but I did not learn any details.

As far as I can see, the biggest issue with all these techniques is the following. They can be dramatically successful, taking patients from the point of death to long-term survival. On the other hand they only work in a subset of patients (say, 40% at most) and nobody understands what success depends on. I see a great need for a better theoretical understanding. I can understand that when someone has what looks like a good idea in this area they quickly look for a drug company to do a clinical trial with it. These things can save lives. On the other hand it is important to ask whether investing more time in obtaining a better understanding of underlying mechanisms might not lead to better results in the long run.
