In the past two weeks I attended two conferences. The first was the annual meeting of the Deutsche Mathematikervereinigung (DMV, the German mathematical society) in Berlin. The second was the joint annual meeting of the ESMTB (European Society for Mathematical and Theoretical Biology) and the SMB (Society for Mathematical Biology) in Heidelberg. I had the impression that the participation of the SMB was relatively small compared to previous years. (Was this mainly due to the pandemic or due to other problems in international travel?) There were about 500 participants in total who were present in person and about another 100 online.

I was disappointed with the plenary talks at both conferences. The only one I found reasonably good was that of Benoit Perthame. One reason I did not like them was the dominance of topics like machine learning and artificial intelligence. This brings me to the title of this post. I have the impression that mathematics (at least in applied areas) is becoming ever weaker and is being replaced by the development of computer programmes which could be applied (and sometimes are) to the masses of data our society produces these days. This was very noticeable at these two conferences. I would prefer that we human beings continue to learn something ourselves and not just leave it to the machines.

The idea that some day the work of mathematicians might be replaced by computers is an old one. Perhaps it is now happening, but in a different way from the one I would have expected. Computers are replacing humans, but not because they do everything better. There is no doubt that there are some things they can do better, but I think there are many things they cannot. The plenary talks at the DMV conference on topics of this kind were partly critical. They included examples of a type I had not encountered before. A computer is presented with a picture of a pig and recognizes it as a pig.
Then the picture is changed in a very specific way. The change is quantitatively small and hardly noticeable to the human eye, yet the computer identifies the modified picture as an aeroplane. In another similar example the starting picture is easily recognizable as a somewhat irregular seven and is recognized by the computer as such. After the modification the computer recognizes it as an eight. This seems to offer a huge potential for mistakes and wonderful opportunities for criminals.

I feel that the trend towards machine learning and related topics in mathematics is driven by fashion. It reminds me a little of the ‘successes’ of string theory in physics some years ago. Another aspect of the plenary talks at these conferences I did not like was that the speakers seemed to be showing off how much they had done instead of presenting something simple and fascinating. At the conference in Heidelberg there were three talks by young prizewinners which were shorter than the plenaries. I found that they were on average of better quality, and I know I was not the only one of that opinion.
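The seven-to-eight example is an instance of what is called an adversarial perturbation. As a minimal sketch of the underlying idea (a toy linear classifier with hypothetical class labels, not a real image model), the following shows how a change that is tiny in every pixel can nevertheless flip a prediction, because it is chosen in exactly the direction the classifier is most sensitive to:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: score(x) = w.x + b, class "seven" if score > 0.
# (Hypothetical weights; real image classifiers are deep networks,
# but the same gradient-based idea applies to them.)
d = 784                       # size of a flattened 28x28 image
w = rng.normal(size=d)
b = 0.0

def predict(x):
    return "seven" if w @ x + b > 0 else "eight"

# An input the model classifies as "seven".
x = rng.normal(size=d)
if w @ x + b <= 0:
    x = -x                    # flip so the clean input scores positive

# Perturb each pixel a tiny amount in the direction that lowers the
# score, choosing eps just large enough to cross the decision boundary.
eps = 2 * (w @ x + b) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(predict(x))                  # prints: seven
print(predict(x_adv))              # prints: eight
print(np.max(np.abs(x_adv - x)))   # per-pixel change is only eps (small)
```

For deep networks the attack uses the sign of the gradient of the loss with respect to the input in the same way (the "fast gradient sign method"); the perturbation stays invisible to the eye while the classification changes completely.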

In the end there were not many talks at these conferences that I liked much, but let me now mention some that I did. Amber Smith gave a talk on the behaviour of the immune system in situations where bacterial infections of the lung arise during influenza. In that talk I really enjoyed how connections were made all the way from simple mathematical models to insights for clinical practice. This is mathematical biology of the kind I love. In a similar vein Stanca Ciupe gave a talk about aspects of COVID-19 beyond those which are common knowledge. In particular she discussed experiments on hamsters which can be used to study the infectiousness of droplets in the air. A talk of Harsh Chhajer gave me a new perspective on the intracellular machinery for virus production used by hepatitis C, which is of relevance to my research. I had seen this machinery as something special to HCV, but I learned that it is in fact a feature of many positive-strand RNA viruses. I obtained another useful insight on in-host models for virus dynamics from a talk of James Watmough.

Returning to the issue of mathematics and computers, another aspect I want to mention is arXiv. For many years I have put copies of all my papers in preprint form on that online archive, and I have monitored the parts of it relevant to my research interests for papers by other people. When I was working on gravitational physics that was gr-qc; since I have been working on mathematical biology it has been q-bio (quantitative biology), which I saw as the natural place for papers in that area, interpreting the word ‘quantitative’ as relating to mathematics. Now the nature of the papers on that archive has changed and it too is dominated by topics strongly related to computers, such as machine learning. I no longer feel at home there. (To be fair I should say that there are still quite a lot of papers there on stochastic topics which are mathematics in the classical sense, just in a part of mathematics which is not my speciality.) In the past I often cross-listed my papers to dynamical systems, and maybe I should exchange the roles of these two in future – post to dynamical systems and cross-list to q-bio. If I succeed in moving further towards biology in my research, which I would like to do, I might consider sending things to bioRxiv instead of arXiv.

In this post I have written a lot which is negative. I feel the danger of falling into the role of a ‘grumpy old man’. Nevertheless I think it is good that I have done so. Talking openly about what you are dissatisfied with is a good starting point for setting out in new, positive directions.

September 28, 2022 at 9:01 am

Hello

Thanks for the nice post. I wholeheartedly agree with you, although I’d prefer to drop the term *old* in «grumpy old man».

When I joined the Department of Applied Mathematics more than 20 years ago, the main area of research was what I would call analysis; the second was PDE (existence and uniqueness theorems). Now, 20 years later, the analysis group has retired, the PDE group has shrunk, and a large part of the research is more applied science than mathematics. All sorts of computer-assisted results are presented, and in most of these cases even the numerics are not very sophisticated.

However, there is one area in which a computer might be handy, one that has fascinated me since I started with mathematics: computer-assisted proofs, either for checking proofs or even for generating them.

There are a couple of these projects; most, if not all, concentrate on what one could call «finite» mathematics (that is, not analysis).

The most spectacular result has been the successful verification of a proof by Peter Scholze:

https://www.quantamagazine.org/lean-computer-program-confirms-peter-scholze-proof-20210728/

As far as I understand it, there is still a long way to go before something similar can be done for analysis.
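To give a flavour of what such checking means, here is a toy two-theorem file in Lean 4 syntax (my own illustration, unrelated to the Scholze project). The system's kernel accepts a file only when every step is formally justified; there is no "obvious" step it will take on trust:

```lean
-- Arithmetic facts are checked by computation: `rfl` succeeds
-- only because both sides reduce to the same value.
theorem two_add_two : 2 + 2 = 4 := rfl

-- A logical statement: the proof term is itself checked, piece
-- by piece, against the statement it claims to prove.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

One reason analysis lags behind is that statements about limits, real numbers and measures require a large body of background material to be formalised in this fully explicit style before the theorems of interest can even be stated.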

Uwe Brauer