Director, Mathematical Sciences Institute,
Belagavi, India.
Has the style of mathematical research changed drastically in the past few decades? No, not really, says Yuri Manin in answer to a question from Mikhail Gelfand: people engaged in mathematical research today do it much the same way it was done two hundred years ago. This is partly because we do not choose mathematics as our profession; rather, it chooses us, he observes. Moreover, in doing so it chooses a certain type of person, of whom there are no more than a few thousand in each generation worldwide, and they all carry the stamp of the sort of people mathematics has chosen [1], [2].
The social style has changed, in the sense that the social institutions within which one studies mathematics have changed. This evolution was natural, though it should not be overlooked that the natural development was interrupted in Europe during the first thousand years of Christianity. There was the period of Newton, and later of Lagrange and so forth, when academies and universities were being formed, and individual amateurs studying mathematics, who had once studied alchemy or astrology in the same spirit, began to form social structures by exchanging letters. Then came scientific journals; all this was in place three hundred years ago. In the second half of the twentieth century, computers contributed to this development, consolidating the social system further. The academies, universities, and journals developed gradually, bit by bit, and assumed the form in which we now know them. For example, Crelle's journal (the Journal of Pure and Applied Mathematics) appeared in 1826, and we see no difference at all between its professional style and that of a contemporary journal. Abel's article on the insolvability in radicals of general equations of degree higher than four appeared there; it is a wonderful article, and a member of the editorial board of Crelle today would accept it with great pleasure.
In the last few decades, the interface between society and professional mathematicians has changed. It brought new acquaintances with computer professionals, with the people around them, and with the public relations that the profession came to need because of new methods of financing its work: proposals, grants, and things like that. In mathematics, unlike the experimental sciences, this looks odd and funny: you must first declare that what you are going to do is great, and only later give an account of what you have accomplished. There were cases in which a theorem was reported as fifty percent proven in a mid-term report; some people would even write that they planned to prove theorems they had in fact already proven in the past year, thereby buying themselves a whole year to work on other things. But these are all frivolous matters. So long as mathematics chooses us, and so long as there are people such as Grigory Perelman and Alexander Grothendieck, we will remember our ideals. The grant system puts us in a peculiar situation, but grants matter, and what other mechanisms would work? What do we need? Salaries for people and a budget for the institution. That the organizations providing these have decided to adopt the language of the marketplace is entirely misplaced, at least for mathematics. This viewpoint affects three basic areas: healthcare, education, and culture. Francis Bacon spoke keenly of the 'Idols of the Marketplace', and mathematics is part of culture in the broad sense of that term, not part of industry or services. Until now there has been no voice of dissent about investing funds in support of mathematical research, thanks to its being, by its very nature, an inexpensive science.
In that case, why should we put ourselves on the market at all? Mathematics costs almost nothing; it does not exploit natural resources or damage the environment. Give us a salary and leave us in peace to pursue our joyful occupation.
The role of computers in pure mathematics has begun to change significantly: they offer the unique possibility of conducting large-scale experiments in mental reality. They let us try the most improbable things; more exactly, things that Euler, Lagrange, Gauss, and a few others could do even without a computer. Now what Euler and Gauss could do, any mathematician can do at his or her desk. If one lacks the imagination to distinguish some feature of this Platonic reality, one can experiment: if a bright idea occurs that something is equal to something else, one can sit down, compute the values, and iterate several times. Moreover, people have now emerged who have mathematical minds but are computer oriented. More precisely, such people existed earlier, but without computers something was missing. In a sense Euler was like that, and would have taken to computers passionately; the same is true of Ramanujan, whose formal training did not go beyond the intermediate level. Don Zagier is an example of a natural and great mathematical mind that is at the same time ideally suited to working with computers effectively. Computers have also created new opportunities for collaboration in pure mathematics.
Relationships and Collaborations
An important development is the structural change in the relationship between mathematics and theoretical physics. During the time of Newton, Euler, Lagrange, and Gauss the relationship was close: the same people did research in both mathematics and theoretical physics. They might have considered themselves more mathematicians or more physicists, but they were the same people. This prevailed until about the end of the nineteenth century. The twentieth century revealed significant differences, and the development of the general theory of relativity is a striking example. When Einstein began working out the general theory of relativity in 1907 in his own brilliantly intuitive language, he not only did not know the mathematics he needed, he did not even know that such mathematics existed. After several years devoted to the study of quanta he returned to gravitation in 1912 and wrote to his friend Marcel Grossmann for help. The result was their joint article 'Outline of a Generalized Theory of Relativity and of a Theory of Gravitation', written in two parts: the physics part by Einstein himself, the mathematics part by Grossmann. This attempt was only half successful: they found the right language but not yet the right equations. The equations were found in 1915 by Einstein and David Hilbert. Hilbert derived them by finding the right Lagrangian density, the importance of which had for some time eluded Einstein. It stands as testimony to a great collaboration of two great minds, one that unfortunately led to silly fights about priority despite the two creators' grateful appreciation of each other's insights. This story also marks the period in which mathematics and physics parted ways; the divergence continued until about the 1950s.
The physicists dreamed up quantum mechanics, for which they needed Hilbert space, Schrödinger's equation, the quantum of action, the uncertainty principle, and the delta function. This was a completely new type of physics and a completely new type of philosophy. Whatever pieces of mathematics were necessary, they developed themselves.
The mathematicians, for their part, did analysis and geometry and began creating topology and functional analysis. An important influence at the beginning of the century was the pressure from philosophers and logicians trying to clarify and purify the insights of Cantor, Zermelo, Whitehead, et al. about sets and infinity. Paradoxically, this line of thought generated both what came to be known as the crisis in foundations and, later, computer science. Can a finite language give us information about infinite things? Formal languages, models and truth, consistency and incompleteness: important things were developed, but quite disjointly from the physicists' preoccupations of the time.
Then Alan Turing entered the scene to tell us that the model of a mathematical deduction is a machine, not a text. A machine. Brilliant. Within ten years we had von Neumann machines and the principle of separating programs from the rest, that is, software from hardware, and within the following two decades everything was ready. In the early years of the century only exceptional minds spanned both fields; von Neumann was undoubtedly both a physicist and a mathematician, and one knows of no other mind on that scale in the twentieth century. A new means of quantifying things came from Richard Feynman, who in the 1940s wrote down his wonderful path integral and worked with it in a way that, from a mathematical point of view, is like an Eiffel Tower hanging in the air with no foundation: it exists and works just right while standing on nothing we know of. This situation continues even to this day.
Then in the 1950s the quantum field theory of nuclear forces started to appear, and it turned out that mathematically the respective classical fields are connection forms, and that the classical equation of stationary action for them was already known in differential geometry. The Yang–Mills equation made its entry. Mathematicians began to look at the physicists suspiciously, and the physicists at them with an element of disapproval. Then, paradoxically and pleasantly, mathematicians began to learn more from the physicists than from their own community. With the help of quantum field theory and the apparatus of the Feynman integral, physicists developed cognitive tools that allowed them to discover one mathematical fact after another. These were not proofs, just discoveries. Later, mathematicians sat down, worked out the details in the form of theorems, and proved them honestly, which showed that what the physicists do is indeed mathematically meaningful. (And the physicists say: we always knew that.) In general, from physicists mathematicians learned what questions to ask and what answers to expect; as a rule these answers turn out to be correct. Freeman Dyson, renowned physicist and mathematician, in his Gibbs Lecture 'Missed Opportunities' beautifully described many cases where mathematicians and physicists lost chances of making discoveries by neglecting to talk to each other; he himself revealed how he missed discovering the deeper connection between modular forms and Lie algebras, just because the number theorist Dyson and the physicist Dyson were not speaking to each other.
Edward Witten appeared with this kind of 'Eiffel Tower hanging in the air' viewpoint. A physicist by training, equipped with an astonishing mind, he produces mathematics of unlikely strength and force arising from his physical insights. Strikingly, the starting point of his insight is not the physical world as described by experiment, but the mental machinery developed to explain that world by Feynman, Dyson, Schwinger, Tomonaga, and many other physicists: machinery that is entirely mathematical but has very weak mathematical foundations; an earth-shaking heuristic principle, not at all a triviality, but an enormous structure without a foundation, at least of the kind we are accustomed to. No attempt to supply such a foundation has succeeded in sufficient generality. Mathematicians have developed a few approximations to what we might call the Feynman integral; for example, Wiener integration, invented as early as the 1920s and used to study Brownian motion, where one finds a rigorous mathematical theory. There are also some interesting variants, but the theory is much narrower than what is required to cover all the varied applications of the Feynman integral. As a mathematical theory it is small; in strength and power it is not comparable to the machinery that now produces great mathematics. One cannot be sure what will happen to this machinery when Witten stops working on it, but it is very much to be hoped that it will permeate the mathematical world, as can already be seen in a small way in the proofs of theorems Witten guessed, in particular in so-called Topological Quantum Field Theory (TQFT), whose output is quite well known. Indeed, homotopical topology and TQFT have grown so close that one begins to feel they may turn out to be a language of new foundations.
Paradoxes and paradigm shifts
We have evidence of such things having occurred already. Cantor's theory of the infinite had no basis in older mathematics: it was a new mathematics and a new way to think about mathematics. In the end Cantor's universe was accepted by Bourbaki without prejudice. Bourbaki created pragmatic foundations, adopted for many decades by all working mathematicians, as opposed to the normative foundations that the logicists tried to impose upon us. What Bourbaki did was a historical step, just as what Cantor did. While it played an enormous role, it was not the creation of philosophical foundations for mathematics but the development of a universal common mathematical language that could be used by probabilists, topologists, and specialists in graph theory, functional analysis, or algebraic geometry. You take a few common elementary words such as set, element, and subset, and build up definitions of the basic structures you study, be it a group, a topological space, or a formal language. Their names form the second layer of your terminology, and there may come a third, fourth, or fifth layer, but the basic construction rules are common, and people using them can talk to each other with complete understanding: a formal language is a set of letters with a subset of well-formed words, terms plus connectives and quantifiers, deduction rules, and the like. From this perspective, Gödel's incompleteness theorem loses any sort of mystery. It gains mystery when you examine it philosophically, but in itself it is simply a theorem stating that a certain structure is not finitely generated, and it need not force us to dwell on the philosophical foundations of mathematics.
Then how should one imagine mathematics? One intelligent way is to take the stand of an emotional Platonist rather than a rational one. This can be explained with some good problems. A well-known one is Fermat's last theorem, where something can be formulated as the presence or absence of something. Looking at the equation x^2 + y^2 = z^2, it is amazing that we can write down all its integral solutions in one formula; in a certain sense this was known to Diophantus. Having seen this, a question arises: what about cubes? You keep searching for such integers, but the effort is futile: there are none. And what about fourth powers? The answer remains the same. Well, can it be that there is never anything further? You discover a difference between the second power and all higher exponents. The history of Fermat's last theorem is that sort of history. But when you pose a problem stating that this-and-this equals that-and-that, or that such-and-such never happens, you never know in advance whether you have a good problem or a bad one, not until it is solved or almost solved.
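The one-formula description of the integral solutions mentioned above is the classical Euclidean parametrization of Pythagorean triples. A minimal sketch in Python (the function name and the `limit` parameter are illustrative, not from the source):

```python
from math import gcd

def primitive_triples(limit):
    """All primitive Pythagorean triples (a, b, c) with a < b < c <= limit,
    via Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2,
    taken over coprime m > n > 0 of opposite parity."""
    triples = []
    m = 2
    while m * m + 1 <= limit:  # smallest c for this m is m^2 + 1
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    triples.append((min(a, b), max(a, b), c))
        m += 1
    return sorted(triples)

print(primitive_triples(30))
# [(3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (20, 21, 29)]
```

Every solution of x^2 + y^2 = z^2 in positive integers is a multiple of one of these primitive triples, which is exactly the sense in which all solutions can be written down in one formula.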
But what can be said about other problems, say those concerning perfect numbers or twin primes? Are there infinitely many perfect numbers? A number n is perfect if the sum of all its divisors other than n equals n: for example, 6 is perfect, and so is 28. Twin primes are pairs of primes that differ by 2: for example, 3 and 5, 11 and 13, 17 and 19. To this day no one has built any interesting theory around these problems, although their statements look no worse than that of Fermat's last theorem. As Platonists we may say that this is in some sense a property of the problems themselves, one that is not visible in the formulation of the problem but manifests itself in the process of historical development. For that reason one cannot be impartial about problems: solving a problem means finding a detail, without knowing in advance what edifice the detail belongs to. A program arises when a great mathematical mind sees something as a whole, or at least as something more than a single detail, sees it at first only vaguely, and then works to blow away the mists: to find appropriate telescopes, to seek analogies with edifices discovered before, to create a language for the things seen so vaguely. This is what one could tentatively call a program.
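The two definitions above are easy to test by direct computation, which also shows how quickly the examples thin out; a short Python sketch (helper names are my own, not from the source):

```python
def proper_divisor_sum(n):
    """Sum of the divisors of n other than n itself (0 for n <= 1)."""
    if n <= 1:
        return 0
    total = 1  # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:     # avoid double-counting a square divisor
                total += n // d
        d += 1
    return total

def is_perfect(n):
    """A number is perfect if it equals the sum of its proper divisors."""
    return proper_divisor_sum(n) == n

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

perfect = [n for n in range(2, 10000) if is_perfect(n)]
twin_pairs = [(p, p + 2) for p in range(2, 100) if is_prime(p) and is_prime(p + 2)]
print(perfect)     # [6, 28, 496, 8128]
print(twin_pairs)  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), ...]
```

Only four perfect numbers appear below 10000, while twin primes keep turning up; whether either list is infinite remains open.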
Cantor's theory of the infinite was such a program, and a rare event in being at once a program and a discovery: there are orders of infinity, and there is, say, the continuum hypothesis, the question of whether anything lies between the countable infinity and the continuum. It set the ball rolling, launching right away a whole program of investigation. The Weil conjectures, about how many solutions an equation has modulo p, were such a program: an analogy between number theory and geometry that could be seen but not yet made precise invited people like Grothendieck and Pierre Deligne to devote much of their lives to filling the gap. As a result the analogy became precise, modern algebraic geometry was born, and much more happened: set theory was gradually pushed aside, with categories and all their superstructures replacing sets.
In logic there was Hilbert's program, which he formulated too optimistically: he wanted to prove that everything true was provable. He saw the contours of the edifice inaccurately, but the program developed anyway; Gödel, Turing, Church, von Neumann, computers, and computer science to a great degree originated with Hilbert. Again, there was the Erlangen program for geometry, initiated by Felix Klein in 1872. Are there also bad problems, in the sense of problems that did not lead to a program? The four color problem in graph theory, whose proof was given with the aid of computers, stands as an example. The computer proof is not so important as the fact that until now no one has incorporated the problem into any sufficiently rich context; so it remains simply a means of training the mind. Some problems are isolated in this way, but when a problem arises within a program, when we know in advance to what edifice the detail belongs, it can be a good one. The Riemann hypothesis is undoubtedly a problem that originated within a program of Riemann's and remains open in number theory; even today, the right solution to this conjecture is expected to emerge in a wider context.
Are there hypotheses that everyone grew up with and assumed to be obviously correct, but for which counterexamples were then found? We do not think so. Suppose someone had found a counterexample to Fermat's last theorem rather than a proof; would that have been a great upset, or grounds to declare the problem not a good one? The problem would still be good: it stimulated the development of a context, someone then solved it within that context, and so it helped to establish an important context. Timing matters, though. If a counterexample had been found early, say in the 1960s, everyone would simply have scratched their heads. By the 1970s it was clear that the problem could be reduced to several other conjectures, equally complex and of more far-reaching character, related to the Langlands program: if those things were true, then so was Fermat's last theorem. A counterexample to Fermat's last theorem found at that point would have meant those things were false, destroying a much more fundamental and complex system of beliefs. It would also have evoked enormous interest and attempts to find what was amiss, and much of the edifice would have had to be rebuilt. All of that would have followed from the emergence of a counterexample.
Let us subject Gödel's theorem to this counterexample scrutiny. Before it, people supposed one could prove everything that is true; at least Hilbert believed this, and we have no idea how many others did. But this shows that one must view Hilbert's program correctly. Its first important outcome was the construction of a mathematical context in which questions about truth and provability in mathematics could be formulated as precise mathematical problems rather than vague philosophical ones. By the nature of this quest one must introduce self-reference, and the rest becomes a matter of inventiveness, brilliantly demonstrated by Tarski and Gödel. At the start of the program people made wrong guesses about where it would lead, and the counterexamples showed that these guesses were errors, wrong perceptions revealing a lack of human imagination. In the history of mathematics such things are usually treated not as counterexamples but as paradoxes. Consider the Banach–Tarski theorem, which can be described as follows: a ball can be cut into, say, five pieces which can then be rearranged and put back together to form two balls, each the same size as the initial one. It is not magic; the construction tells us a lot. To critics of the set-theoretic approach it means that if this view leads to such an assertion, then it is not mathematics but some sort of wild nonsense. For logicians it is an example of a paradoxical application of Zermelo's axiom of choice, and so an argument against accepting that axiom. Several such paradoxes were discovered during the transition from classical mathematics to set-theoretic mathematics. There was the theorem that a curve can fill a square, and many such things, and they taught us a lot.
Many people thought this was pure fantasy, but the newly trained imagination allowed one to recognize the paradoxical behavior of Fourier series, to understand Brownian motion, and then to invent wavelets; it turned out these were not fantasies at all but almost applied mathematics. One should not foresee any revolutionary changes, since none have occurred in the last three hundred years: every time new and powerful intuitions arose, mathematics retained its character, in some strange way.
References
1. Based on the interview of Yuri Manin by Mikhail Gelfand that appeared in the newspaper Troitsky Variant, September 2008.
2. Based on the translation of that interview published in the Notices of the AMS, November 2009.