## Posts Tagged ‘**mathematics**’

## Some playing with Python

A long time ago, Diophantus (sort of) discussed integer solutions to the equation

a^2 + b^2 = c^2

(solutions to this equation are called Pythagorean triples).

Centuries later, in 1637, Fermat made a conjecture (now called Fermat’s Last Theorem, not because he uttered it in his dying breath, but because it was the last one to be proved — in ~1995) that

a^n + b^n = c^n

has no positive integer solutions for n > 2. In other words, his conjecture was that none of the following equations has a solution:

a^3 + b^3 = c^3

a^4 + b^4 = c^4

a^5 + b^5 = c^5

… and so on. An nth power cannot be partitioned into two nth powers.

About a century later, Euler proved the n = 3 case of Fermat’s conjecture, but generalized it in a different direction: he conjectured in 1769 that an nth power cannot be partitioned into fewer than n nth powers, namely that

a_1^n + a_2^n + ⋯ + a_k^n = b^n

has no solutions with k < n. So his conjecture was that (among others) none of the following equations has a solution:

a^3 + b^3 = c^3

a^4 + b^4 + c^4 = d^4

a^5 + b^5 + c^5 + d^5 = e^5

… and so on.

This conjecture stood for about two centuries, until it was abruptly found to be false by Lander and Parkin, who in 1966 simply did a direct search on the fastest (super)computer at the time, and found this counterexample:

27^5 + 84^5 + 110^5 + 133^5 = 144^5

(It is still one of only three examples known, according to Wikipedia.)
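The identity itself is easy to verify directly:

```python
# Lander & Parkin's 1966 counterexample to Euler's conjecture for n = 5
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
print(lhs, rhs, lhs == rhs)  # both sides equal 61917364224
```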

Now, how might you find this solution on a computer today?

In his wonderful (as always) post at bit-player, Brian Hayes showed the following code:

```python
import itertools as it

def four_fifths(n):
    '''Return smallest positive integers ((a,b,c,d),e) such that
       a^5 + b^5 + c^5 + d^5 = e^5; if no such tuple exists with
       e < n, return the string 'Failed'.'''
    fifths = [x**5 for x in range(n)]
    combos = it.combinations_with_replacement(range(1, n), 4)
    while True:
        try:
            cc = combos.next()
            cc_sum = sum([fifths[i] for i in cc])
            if cc_sum in fifths:
                return(cc, fifths.index(cc_sum))
        except StopIteration:
            return('Failed')
```

to which, if you add (say) `print four_fifths(150)` and run it, it returns the correct answer fairly quickly: in about 47 seconds on my laptop.

The `if cc_sum in fifths:` line inside the loop is an O(n) cost each time it’s run (membership testing in a list is a linear scan), so with a simple improvement to the code (using a set instead) and rewriting it a bit, we can write the following full program:

```python
import itertools

def find_counterexample(n):
    fifth_powers = [x**5 for x in range(n)]
    fifth_powers_set = set(fifth_powers)
    for xs in itertools.combinations_with_replacement(range(1, n), 4):
        xs_sum = sum([fifth_powers[i] for i in xs])
        if xs_sum in fifth_powers_set:
            return (xs, fifth_powers.index(xs_sum))
    return 'Failed'

print find_counterexample(150)
```

which finishes in about 8.5 seconds.

Great!
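The gain comes almost entirely from replacing the linear list scan with a constant-time set lookup. A quick micro-benchmark (a sketch in Python 3 syntax; the variable names are mine) makes the difference visible:

```python
import timeit

fifths = [x ** 5 for x in range(150)]
fifths_set = set(fifths)
target = 144 ** 5  # near the end of the list, so the scan is long

# 'in' on a list scans elements one by one: O(n) per test
list_time = timeit.timeit(lambda: target in fifths, number=100000)
# 'in' on a set is a hash lookup: O(1) on average
set_time = timeit.timeit(lambda: target in fifths_set, number=100000)
print(list_time, set_time)
```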

But there’s something unsatisfying about this solution: it assumes there’s a solution with all four numbers on the LHS less than 150. After all, changing the function invocation to `find_counterexample(145)` makes it run even a second faster, but how could we know to do that without already knowing the solution? Besides, we don’t have a fixed 8- or 10-second budget; what we’d *really* like is a program that keeps searching till it finds a solution or we abort it (or it runs out of memory or something), with no other fixed termination condition.

The above program used the given “n” as an upper bound to generate the combinations of 4 numbers; is there a way to generate all combinations when we don’t know an upper bound on them?

Yes! One of the things I learned from Knuth volume 4 is that if you simply write down each combination in descending order and order them lexicographically, the combinations you get for each upper bound are a prefix of the list of the next bigger one, i.e., for any upper bound, all the combinations form a prefix of the same infinite list, which starts as follows (line breaks for clarity):

```
1111,
2111, 2211, 2221, 2222,
3111, 3211, 3221, 3222, 3311, 3321, 3322, 3331, 3332, 3333,
4111, ...
...
9541, 9542, 9543, 9544, 9551, ... 9555, 9611, ...
```
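This prefix property is easy to check experimentally against the library function, by reversing each combination into descending order (the helper name `combos_desc` is mine):

```python
import itertools

def combos_desc(n, r):
    # All r-element combinations (with repetition) of {1, ..., n-1},
    # each written in descending order, then listed lexicographically.
    return sorted(tuple(reversed(c))
                  for c in itertools.combinations_with_replacement(range(1, n), r))

small = combos_desc(5, 4)  # upper bound 5
big = combos_desc(8, 4)    # a bigger upper bound
assert small == big[:len(small)]  # the smaller list is a prefix of the bigger
assert small[:6] == [(1, 1, 1, 1), (2, 1, 1, 1), (2, 2, 1, 1),
                     (2, 2, 2, 1), (2, 2, 2, 2), (3, 1, 1, 1)]
```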

There doesn’t seem to be a library function in Python to generate these though, so we can write our own. If we stare at the above list, we can figure out how to generate the next combination from a given one:

- Walk backwards from the end, till you reach the beginning or find an element that’s less than the previous one.
- Increase that element, set all the following elements to 1s, and continue.

We could write, say, the following code for it:

```python
def all_combinations(r):
    xs = [1] * r
    while True:
        yield xs
        for i in range(r - 1, 0, -1):
            if xs[i] < xs[i - 1]:
                break
        else:
            i = 0
        xs[i] += 1
        xs[i + 1:] = [1] * (r - i - 1)
```

(The `else` block on a `for` loop is an interesting Python feature: it is executed if the loop wasn’t terminated with `break`.) We could even hard-code the `r=4` case, as we’ll see later below.
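For instance (restating the generator here so the snippet is self-contained, in Python 3 syntax), the first few values it yields match the start of the list above. Note the `tuple(xs)` copy: the generator mutates and re-yields the same list object.

```python
import itertools

def all_combinations(r):
    # Yields all descending r-tuples of positive integers in
    # lexicographic order, with no upper bound.
    xs = [1] * r
    while True:
        yield xs
        for i in range(r - 1, 0, -1):
            if xs[i] < xs[i - 1]:
                break
        else:
            i = 0
        xs[i] += 1
        xs[i + 1:] = [1] * (r - i - 1)

first = [tuple(xs) for xs in itertools.islice(all_combinations(4), 6)]
print(first)
# [(1, 1, 1, 1), (2, 1, 1, 1), (2, 2, 1, 1), (2, 2, 2, 1), (2, 2, 2, 2), (3, 1, 1, 1)]
```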

For testing whether a given number is a fifth power, we can no longer simply look it up in a fixed precomputed set. We can do a binary search instead:

```python
def is_fifth_power(n):
    assert n > 0
    lo = 0
    hi = n
    # Invariant: lo^5 < n <= hi^5
    while hi - lo > 1:
        mid = lo + (hi - lo) // 2
        if mid ** 5 < n:
            lo = mid
        else:
            hi = mid
    return hi ** 5 == n
```

but it turns out that this is slower than one based on looking up in a growing set (as below).
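(As a sanity check, the binary search can be cross-checked against directly computed fifth powers; this restates it as a self-contained Python 3 snippet, with `//` for integer division:)

```python
def is_fifth_power(n):
    # Binary search for the fifth root of n; True iff n is a perfect
    # fifth power.
    assert n > 0
    lo, hi = 0, n
    # Invariant: lo^5 < n <= hi^5
    while hi - lo > 1:
        mid = lo + (hi - lo) // 2
        if mid ** 5 < n:
            lo = mid
        else:
            hi = mid
    return hi ** 5 == n

# Every i^5 is recognized; no i^5 + 1 is (for i >= 2 these are never
# fifth powers, since i^5 + 1 < (i+1)^5).
assert all(is_fifth_power(i ** 5) for i in range(1, 200))
assert not any(is_fifth_power(i ** 5 + 1) for i in range(2, 200))
```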

Putting everything together, we can write the following (very C-like) code:

```python
largest_known_fifth_power = (0, 0)
known_fifth_powers = set()

def is_fifth_power(n):
    global largest_known_fifth_power
    while n > largest_known_fifth_power[0]:
        m = largest_known_fifth_power[1] + 1
        m5 = m ** 5
        largest_known_fifth_power = (m5, m)
        known_fifth_powers.add(m5)
    return n in known_fifth_powers

def fournums_with_replacement():
    (x0, x1, x2, x3) = (1, 1, 1, 1)
    while True:
        yield (x0, x1, x2, x3)
        if x3 < x2:
            x3 += 1
            continue
        x3 = 1
        if x2 < x1:
            x2 += 1
            continue
        x2 = 1
        if x1 < x0:
            x1 += 1
            continue
        x1 = 1
        x0 += 1
        continue

if __name__ == '__main__':
    tried = 0
    for get in fournums_with_replacement():
        tried += 1
        if (tried % 1000000 == 0):
            print tried, 'Trying:', get
        rhs = get[0]**5 + get[1]**5 + get[2]**5 + get[3]**5
        if is_fifth_power(rhs):
            print 'Found:', get, rhs
            break
```

which is both longer and slower (takes about 20 seconds) than the original program, but at least we have the satisfaction that it doesn’t depend on any externally known upper bound.

I originally started writing this post because I wanted to describe some experiments I did with profiling, but it’s late and I’m sleepy so I’ll just mention it.

```
python -m cProfile euler_conjecture.py
```

will print relevant output in the terminal:

```
         26916504 function calls in 26.991 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1   18.555   18.555   26.991   26.991 euler_conjecture.py:1(<module>)
 13458164    4.145    0.000    4.145    0.000 euler_conjecture.py:12(fournums_with_replacement)
 13458163    4.292    0.000    4.292    0.000 euler_conjecture.py:3(is_fifth_power)
      175    0.000    0.000    0.000    0.000 {method 'add' of 'set' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
```

Another way to view the same thing is to write the profile output to a file and read it with `cprofilev`:

```
python -m cProfile -o euler_profile.out euler_conjecture.py
cprofilev euler_profile.out
```

and visit http://localhost:4000 to view it.

Of course, simply translating this code to C++ makes it run much faster:

```cpp
#include <array>
#include <iostream>
#include <map>
#include <utility>

typedef long long Int;

constexpr Int fifth_power(Int x) { return x * x * x * x * x; }

std::map<Int, int> known_fifth_powers = {{0, 0}};

bool is_fifth_power(Int n) {
  while (n > known_fifth_powers.rbegin()->first) {
    int m = known_fifth_powers.rbegin()->second + 1;
    known_fifth_powers[fifth_power(m)] = m;
  }
  return known_fifth_powers.count(n);
}

std::array<Int, 4> four_nums() {
  static std::array<Int, 4> x = {1, 1, 1, 0};
  int i = 3;
  while (i > 0 && x[i] == x[i - 1]) --i;
  x[i] += 1;
  while (++i < 4) x[i] = 1;
  return x;
}

std::ostream& operator<<(std::ostream& os, std::array<Int, 4> x) {
  os << "(" << x[0] << ", " << x[1] << ", " << x[2] << ", " << x[3] << ")";
  return os;
}

int main() {
  while (true) {
    std::array<Int, 4> get = four_nums();
    Int rhs = fifth_power(get[0]) + fifth_power(get[1]) +
              fifth_power(get[2]) + fifth_power(get[3]);
    if (is_fifth_power(rhs)) {
      std::cout << "Found: " << get << " " << known_fifth_powers[rhs]
                << std::endl;
      break;
    }
  }
}
```

and

```
clang++ -std=c++11 euler_conjecture.cc && time ./a.out
```

runs in 2.43s, or 0.36s if compiled with `-O2`.

But I don’t have a satisfactory answer to how to make our Python program which takes 20 seconds as fast as the 8.5-second known-upper-bound version.

Edit [2015-05-08]: I wrote some benchmarking code to compare all the different “combination” functions.

```python
import itertools

# Copied from the Python documentation
def itertools_equivalent(iterable, r):
    pool = tuple(iterable)
    n = len(pool)
    if not n and r:
        return
    indices = [0] * r
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield tuple(pool[i] for i in indices)

# Above function, specialized to first argument being range(1, n)
def itertools_equivalent_specialized(n, r):
    indices = [1] * r
    yield indices
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield indices

# Function to generate all combinations of 4 elements
def all_combinations_pythonic(r):
    xs = [1] * r
    while True:
        yield xs
        for i in range(r - 1, 0, -1):
            if xs[i] < xs[i - 1]:
                break
        else:
            i = 0
        xs[i] += 1
        xs[i + 1:] = [1] * (r - i - 1)

# Above function, written in a more explicit C-like way
def all_combinations_clike(r):
    xs = [1] * r
    while True:
        yield xs
        i = r - 1
        while i > 0 and xs[i] == xs[i - 1]:
            i -= 1
        xs[i] += 1
        while i < r - 1:
            i += 1
            xs[i] = 1

# Above two functions, specialized to r = 4, using tuple over list.
def fournums():
    (x0, x1, x2, x3) = (1, 1, 1, 1)
    while True:
        yield (x0, x1, x2, x3)
        if x3 < x2:
            x3 += 1
            continue
        x3 = 1
        if x2 < x1:
            x2 += 1
            continue
        x2 = 1
        if x1 < x0:
            x1 += 1
            continue
        x1 = 1
        x0 += 1
        continue

# Benchmarks for all functions defined above (and the library function)
def benchmark_itertools(n):
    for xs in itertools.combinations_with_replacement(range(1, n), 4):
        if xs[0] >= n:
            break

def benchmark_itertools_try(n):
    combinations = itertools.combinations_with_replacement(range(1, n), 4)
    while True:
        try:
            xs = combinations.next()
            if xs[0] >= n:
                break
        except StopIteration:
            return

def benchmark_itertools_equivalent(n):
    for xs in itertools_equivalent(range(1, n), 4):
        if xs[0] >= n:
            break

def benchmark_itertools_equivalent_specialized(n):
    for xs in itertools_equivalent_specialized(n, 4):
        if xs[0] >= n:
            break

def benchmark_all_combinations_pythonic(n):
    for xs in all_combinations_pythonic(4):
        if xs[0] >= n:
            break

def benchmark_all_combinations_clike(n):
    for xs in all_combinations_clike(4):
        if xs[0] >= n:
            break

def benchmark_fournums(n):
    for xs in fournums():
        if xs[0] >= n:
            break

if __name__ == '__main__':
    benchmark_itertools(150)
    benchmark_itertools_try(150)
    benchmark_itertools_equivalent(150)
    benchmark_itertools_equivalent_specialized(150)
    benchmark_all_combinations_pythonic(150)
    benchmark_all_combinations_clike(150)
    benchmark_fournums(150)
```

As you can see, inside each benchmarking function I chose the same statement (`if xs[0] >= n: break`): it causes the unbounded generators like `all_combinations_pythonic` to terminate, and has no effect on the combination functions that already have an upper bound.

When run with

`python -m cProfile benchmark_combinations.py`

the results include:

```
  2.817  benchmark_combinations.py:80(benchmark_itertools)
  8.583  benchmark_combinations.py:84(benchmark_itertools_try)
126.980  benchmark_combinations.py:93(benchmark_itertools_equivalent)
 46.635  benchmark_combinations.py:97(benchmark_itertools_equivalent_specialized)
 44.032  benchmark_combinations.py:101(benchmark_all_combinations_pythonic)
 18.049  benchmark_combinations.py:105(benchmark_all_combinations_clike)
 10.923  benchmark_combinations.py:109(benchmark_fournums)
```

Lessons:

- Calling `itertools.combinations_with_replacement` is by far the fastest, taking about 2.8 seconds. It turns out that it’s written in C, so this would be hard to beat. (Still, wrapping it in a `try` block is seriously bad.)
- The “equivalent” Python code from the itertools documentation (`itertools_equivalent`) is about 50x slower.
- It gets slightly better when specialized to numbers (`itertools_equivalent_specialized`).
- Simply generating *all* combinations without an upper bound (`all_combinations_pythonic`) is actually faster.
- It can be made even faster by writing it in a more C-like way (`all_combinations_clike`).
- The tuples version with the loop unrolled manually (`fournums`) is rather fast when seen in this light, less than 4x slower than the library version.

## How does Tupper’s self-referential formula work?

*[I write this post with a certain degree of embarrassment, because in the end it turns out (1) to be more simple than I anticipated, and (2) already done before, as I could have found if I had internet access when I did this. :-)]*

The so-called “Tupper’s self-referential formula” is the following, due to Jeff Tupper.

Graph the set of all points (x, y) such that

1/2 < floor( mod( floor(y/17) · 2^(−17·floor(x) − mod(floor(y), 17)), 2 ) )

in the region

0 ≤ x < 106, N ≤ y < N + 17

where N is the following 544-digit integer:

```
48584506361897134235820959624942020445814005879832445494830930850619
34704708809928450644769865524364849997247024915119110411605739177407
85691975432657185544205721044573588368182982375413963433822519945219
16512843483329051311931999535024137587652392648746133949068701305622
95813219481113685339535565290850023875092856892694555974281546386510
73004910672305893358605254409666435126534936364395712556569593681518
43348576052669401612512669514215505395545191537854575257565907405401
57929001765967965480064427829131488548259914721248506352686630476300
```

The result is the following graph:

Whoa. How does this work?

At first sight this is rather too incredible for words.

But after a few moments we can begin to guess what is going on, and see that—while clever—this is perhaps not so extraordinary after all. So let us calmly try to reverse-engineer this feat.
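Before taking the formula apart by hand, the trick can be seen in code. Here is a small sketch (the function names are my own) that encodes a 17-pixel-tall bitmap into a constant k divisible by 17, then decodes it back using the integer form of the inequality; the left-right orientation quirks of Tupper’s actual plot are ignored:

```python
def tupper_pixel(x, y):
    # Integer form of: 1/2 < floor(mod(floor(y/17) * 2^(-17x - y mod 17), 2))
    # i.e., test bit (17x + y mod 17) of floor(y/17).
    return bool((y // 17 >> (17 * x + y % 17)) & 1)

def encode(rows):
    # rows: 17 strings of '0'/'1' (row index = y offset within the strip);
    # returns k, a multiple of 17, so that floor((k+b)/17) is constant.
    m = 0
    for x in range(len(rows[0])):   # column index
        for b in range(17):         # bit index within the column
            if rows[b][x] == '1':
                m |= 1 << (17 * x + b)
    return 17 * m

rows = ['10'] * 17                  # a tiny 2-pixel-wide test bitmap
k = encode(rows)
decoded = [''.join('1' if tupper_pixel(x, k + b) else '0' for x in range(2))
           for b in range(17)]
assert decoded == rows              # round-trip succeeds
```

So the “self-referential” constant is just the bitmap itself, read off column by column in binary and multiplied by 17.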

## Identify

What’s this?

Seeing more should help:

If that’s too easy, how about this?

Both are from *Rekhāgaṇita*, which is a c. 1720s translation by Jagannatha of Nasir al-Din al-Tusi’s 13th-century Arabic translation of Euclid’s *Elements*. It seems to be a straightforward translation of the Arabic — it even uses, to label vertices etc., letters in the Arabic order अ ब ज द ह व झ…. The text retains most of the structure and proposition numbers of Euclid, but in fact the Arabic world has considerably elaborated on Euclid. For instance, for the famous first example above, it gives sixteen additional proofs/demonstrations, which are not in the Greek texts.

The first volume (Books I to VI) is available on Google Books and the second volume (Books VII to XV) through wilbourhall.org (25 MB PDF).

Notes on the second: some technical vocabulary — a प्रथमाङ्कः is a prime number, and a रूप is a unit (one). The rest of the vocabulary, like “निःशेषं करोति” meaning “divides (without remainder)”, is more or less clear, and also the fact that as in Euclid, numbers are being conceived of as lengths (so दझ and झद mean the same).

It does sound cooler to say “इदमशुद्धम्” than “But this is a contradiction”. :-)

## On “trivial” in mathematics

One aspect of mathematicians’ vocabulary that non-mathematicians often find non-intuitive is the word “trivial”. Mathematicians seem to call a great many things “trivial”, most of which are anything but. Here’s a joke, pointed out to me by Mohan:

Two mathematicians are discussing a theorem. The first mathematician says that the theorem is “trivial”. In response to the other’s request for an explanation, he then proceeds with two hours of exposition. At the end of the explanation, the second mathematician agrees that the theorem is trivial.

Like many jokes, this is not far from the truth. This tendency has led others to say, for example, that

In mathematics, there are only two kinds of proofs: Trivial ones, and undiscovered ones.

Or as Feynman liked to say, “mathematicians can prove only trivial theorems, because every theorem that’s proved is trivial”. (The mathematicians to whom Feynman said this do not seem to have liked the statement.)

A little examination, however, shows that all this is not far from reality, and indeed not far from the ideal of mathematics. Consider these excerpts from Gian-Carlo Rota’s wonderful *Indiscrete Thoughts*:

“eventually every mathematical problem is proved trivial. The quest for ultimate triviality is characteristic of the mathematical enterprise.” (p.93)

“Every mathematical theorem is eventually proved trivial. The mathematician’s ideal of truth is triviality, and the community of mathematicians will not cease its beaver-like work on a newly discovered result until it has shown to everyone’s satisfaction that all difficulties in the early proofs were spurious, and only an analytic triviality is to be found at the end of the road.” (p. 118, in The Phenomenology of Mathematical Truth)

Are there definitive proofs?

It is an article of faith among mathematicians that after a new theorem is discovered, other simpler proofs of it will be given until a definitive one is found. A cursory inspection of the history of mathematics seems to confirm the mathematician’s faith. The first proof of a great many theorems is needlessly complicated. “Nobody blames a mathematician if the first proof of a new theorem is clumsy”, said Paul Erdős. It takes a long time, from a few decades to centuries, before the facts that are hidden in the first proof are understood, as mathematicians informally say. This gradual bringing out of the significance of a new discovery takes the appearance of a succession of proofs, each one simpler than the preceding. New and simpler versions of a theorem will stop appearing when the facts are finally understood. (p. 146, in The Phenomenology of Mathematical Proof)

For more context, the section titled *“Truth and Triviality”* is especially worth reading, where he gives the example of proofs of the prime number theorem, starting with Hadamard and de la Vallée Poussin, through Wiener, Erdős and Selberg, and Levinson. Also p. 119, where he feels this is not unique to mathematics:

Any law of physics, when finally ensconced in a proper mathematical setting, turns into a mathematical triviality. The search for a universal law of matter, a search in which physics has been engaged throughout this century, is actually the search for a trivializing principle, for a universal “nothing but.” […] The ideal of all science, not only of mathematics, is to do away with any kind of synthetic *a posteriori* statement and to leave only analytic trivialities in its wake. Science may be defined as the transformation of synthetic facts of nature into analytic statements of reason.

So to correct the earlier quote, there are *three* kinds of proofs from the mathematician’s perspective: those that are trivial, those that have not been discovered, and those niggling ones that are not **yet** trivial.

## Euler, pirates, and the discovery of America

Is there nothing Euler wasn’t involved in?!

That rhetorical question is independent of the following two, which are exceedingly weak connections.

**“Connections” to piracy**: Very tenuous connections, of course, but briefly, summarising from the article:

- Maupertuis: President of the Berlin Academy for much of the time Euler was there. His father got a license from the French king to attack English ships, made a fortune, and retired. Maupertuis is known for formulating the Principle of Least Action (but maybe it was Euler), and best known for taking measurements showing that the Earth bulges at the equator as Newton had predicted, thus becoming “The Man Who Flattened the Earth”.
- Henry Watson: English privateer living in India, lost a fortune to the scheming British East India Company. Wanted to be a pirate, but wasn’t actually one. Known for translating Euler’s *Théorie complette* [E426] from its original French: *A complete theory of the construction and properties of vessels: with practical conclusions for the management of ships, made easy to navigators*. (Yes, Euler wrote that.)
- Kenelm Digby: Not connected to Euler actually, just the recipient of a letter by Fermat in which a problem that was later solved by Euler was discussed. Distinguished alchemist, one of the founders of the Royal Society, did some pirating (once) and was knighted for it.
- Another guy, never mind.

Moral: The fundamental interconnectedness of all things. Or, connections don’t mean a thing.

**The discovery of America:** Columbus never set foot on the mainland of America, and died thinking he had found a shorter route to India and China, not whole new continents that were in the way. The question remained whether these new lands were part of Asia (thus, “Very Far East”) or not. The czar of Russia (centuries later) sent Bering to determine the bounds of Russia, and the Bering Strait separating the two continents was discovered and reported back: America was not part of Russia. At about this time, there were riots in Russia, there was nobody to make the announcement, and “Making the announcement fell to Leonhard Euler, still the preeminent member of the St. Petersburg Academy, and really the only member who was still taking his responsibilities seriously.” As the man in charge of drawing the geography of Russia, Euler knew a little, and wrote a letter to Wetstein, member of the Royal Society in London. So it was only through Euler that the world knew that the America that was discovered was new. This letter [E107], with others, is about the only work of Euler in English. That Euler knew English (surprisingly!) is otherwise evident from the fact that he translated and “annotated” a book on ballistics by the Englishman Benjamin Robins. The original was 150 pages long; with Euler’s comments added, it was 720. [E77, translated back into English as *New principles of gunnery*.]

Most or all of the above is from Ed Sandifer’s monthly column *How Euler Did It*.

**The works of Leonhard Euler online** has pages for all 866 of his works; 132 of them are available in English, including the translations from the Latin posted by graduate student Jordan Bell on the arXiv. They are very readable.

This includes his *Letters to a German Princess on various topics in physics and philosophy* [E343,E344,E417], which were bestsellers when reprinted as science books for a general audience. It includes his textbook, *Elements of Algebra* [E387,E388]. Find others on Google Books. The translations do *not* seem to include (among his other books) his classic textbook *Introductio in analysin infinitorum* [E101,E102, “the foremost textbook of modern times”], though there are French and German translations available.

Apparently, Euler’s Latin is (relatively) not too hard to follow.

## Keeping up with the Schwar(t)zes

For future reference, reminded by a recent webcomic:

- **There is no ‘t’ in Cauchy–Schwarz.** It is named after Hermann Schwarz (1843–1921), who rediscovered it in 1888. His other work includes some stuff in complex analysis: the Schwarz lemma, the Schwarz–Christoffel mapping, a theorem about symmetry of second derivatives, etc.
- **All other (mathematical) Schwar(t)zes have ‘t’s.**
  - There’s Jack Schwartz (1930–2009), mathematician and computer scientist, who was Gian-Carlo Rota’s advisor and co-wrote the monumental three-volume work on *Linear Operators* (“Dunford and Schwartz”). He’s also the name in the Schwartz–Zippel lemma (“Or the Schwartz–Zippel–DeMillo–Lipton Lemma.”)
  - There’s Laurent Schwartz (1915–2002), who was at ENS, worked on the theory of distributions, and won the Fields Medal in 1950. (“Schwartz space”)

Those are the major ones, I think.

Omissions? Errors?

## Is it important to “understand” first?

The Oxford University Press has been publishing a book series known as “Very Short Introductions”. These slim volumes are an excellent idea, and cover over 200 topics already. The volume Mathematics: A Very Short Introduction is written by Timothy Gowers.

Gowers is one of the leading mathematicians today, and a winner of the Fields Medal (in 1998). In addition to his research work, he has also done an amazing amount of service to mathematics in other ways. He edited the 1000-page *Princeton Companion to Mathematics*, getting the best experts to write, and writing many articles himself. He also started the Polymath project and the Tricki, the “tricks wiki”. You can watch his talk on The Importance of Mathematics (with slides) (transcript), and read his illuminating mathematical discussions, and his blog. His great article The Two Cultures of Mathematics is on the “theory builders and problem solvers” theme, and is a paper every mathematician should read.

Needless to say, “Mathematics: A Very Short Introduction” is a very good read. Unlike many books aimed at non-mathematicians, Gowers is quite clear that he does “presuppose some interest on the part of the reader rather than trying to drum it up myself. For this reason I have done without anecdotes, cartoons, exclamation marks, jokey chapter titles, or pictures of the Mandelbrot set. I have also avoided topics such as chaos theory and Gödel’s theorem, which have a hold on the public imagination out of proportion to their impact on current mathematical research”. What follows is a great book that particularly excels at describing what it is that mathematicians do. Some parts of the book, being Gowers’s personal views on the philosophy of mathematics, might not work very well when directed at laypersons, not because they require advanced knowledge, but because they assume a culture of mathematics. Doron Zeilberger thinks that this book “should be recommended reading to everyone and required reading to mathematicians”.

Its last chapter, “Some frequently asked questions”, carries Gowers’s thoughts on some interesting questions. With whole-hearted apologies for inserting my own misleading “summaries” of the answers in brackets, they are the following: “1. Is it true that mathematicians are past it by the time they are 30?” (no), “2. Why are there so few women mathematicians?” (puzzling and regrettable), “3. Do mathematics and music go together?” (not really), “4. Why do so many people positively dislike mathematics?” (more on this below), “5. Do mathematicians use computers in their work?” (not yet), “6. How is research in mathematics possible?” (if you have read this book you won’t ask), “7. Are famous mathematical problems ever solved by amateurs?” (not really), “8. Why do mathematicians refer to some theorems and proofs as beautiful?” (already discussed. Also, “One difference is that […] a mathematician is more anonymous than an artist. […] it is, in the end, the mathematics itself that delights us”.) As I said, you should read the book itself, not my summaries.

The interesting one is (4).

4. Why do so many people positively dislike mathematics?

One does not often hear people saying that they have never liked biology, or English literature. To be sure, not everybody is excited by these subjects, but those who are not tend to understand perfectly well that others are. By contrast, mathematics, and subjects with a high mathematical content such as physics, seem to provoke not just indifference but actual antipathy. What is it that causes many people to give mathematical subjects up as soon as they possibly can and remember them with dread for the rest of their lives?

Probably it is not so much mathematics itself that people find unappealing as the experience of mathematics lessons, and this is easier to understand. Because mathematics continually builds on itself, it is important to keep up when learning it. For example, if you are not reasonably adept at multiplying two-digit numbers together, then you probably won’t have a good intuitive feel for the distributive law (discussed in Chapter 2). Without this, you are unlikely to be comfortable with multiplying out the brackets in an expression such as (x + 2)(x + 3), and then you will not be able to understand quadratic equations properly. And if you do not understand quadratic equations, then you will not understand why the golden ratio is (1 + √5)/2.

There are many chains of this kind, but there is more to keeping up with mathematics than just maintaining technical fluency. Every so often, a new idea is introduced which is very important and markedly more sophisticated than those that have come before, and each one provides an opportunity to fall behind. An obvious example is the use of letters to stand for numbers, which many find confusing but which is fundamental to all mathematics above a certain level. Other examples are negative numbers, complex numbers, trigonometry, raising to powers, logarithms, and the beginnings of calculus. Those who are not ready to make the necessary conceptual leap when they meet one of these ideas will feel insecure about all the mathematics that builds on it. Gradually they will get used to only half understanding what their mathematics teachers say, and after a few more missed leaps they will find that even half is an overestimate. Meanwhile, they will see others in their class who are keeping up with no difficulty at all. It is no wonder that mathematics lessons become, for many people, something of an ordeal.

This seems to be exactly the right reason. No one would enjoy being put through drudgery that they were not competent at, and without the beauty at the end of the pursuit being apparent. (I hated my drawing classes in school, too.) See also Lockhart’s Lament, another article that *everyone* — even, or *especially*, non-mathematicians — should read.

As noted earlier, Gowers has some things to say about the philosophy of mathematics. As is evident from his talk “Does mathematics need a philosophy?” (also typeset as essay 10 of 18 Unconventional Essays on the Nature of Mathematics), he has rejected the Platonic philosophy (≈ mathematical truths exist, and we’re discovering them) in favour of a formalist one (≈ it’s all just manipulating expressions and symbols, just stuff we do). The argument is interesting and convincing, but I find myself unwilling to change my attitude. Yuri Manin says in a recent interview that “I am an emotional Platonist (not a rational one: there are no rational arguments in favor of Platonism)”, so it’s perhaps just as well.

Anyway, the anti-Platonist / formalist idea of Gowers is evident throughout the book, and of course it has its great side: “a mathematical object *is* what it *does*” is his slogan, and most of us can agree that “one should learn to think abstractly, because by doing so many philosophical difficulties disappear”, etc. The only controversial suggestion, perhaps, follows the excerpt quoted above (of “Why do so many people positively dislike mathematics?”):

Is this a necessary state of affairs? Are some people just doomed to dislike mathematics at school? Or might it be possible to teach the subject differently in such a way that far fewer people are excluded from it? I am convinced that any child who is given one-to-one tuition in mathematics from an early age by a good and enthusiastic teacher will grow up liking it. This, of course, does not immediately suggest a feasible educational policy, but it does at least indicate that there might be room for improvement in how mathematics is taught.

One recommendation follows from the ideas I have emphasized in this book. Above, I implicitly drew a contrast between being technically fluent and understanding difficult concepts, but it seems that almost everybody who is good at one is good at the other. And indeed, if understanding a mathematical object is largely a question of learning the rules it obeys rather than grasping its essence, then this is exactly what one would expect — the distinction between technical fluency and mathematical understanding is less clear-cut than one might imagine.

How should this observation influence classroom practice? I do not advocate any revolutionary change — mathematics has suffered from too many of them already — but a small change in emphasis could pay dividends. For example, suppose that a pupil makes the common mistake of thinking that x^{a+b} = x^a + x^b. A teacher who has emphasized the intrinsic meaning of expressions such as x^a will point out that x^{a+b} means a+b x’s all multiplied together, which is clearly the same as a of them multiplied together multiplied by b of them multiplied together. Unfortunately, many children find this argument too complicated to take in, and anyhow it ceases to be valid if a and b are not positive integers.

Such children might benefit from a more abstract approach. As I pointed out in Chapter 2, everything one needs to know about powers can be deduced from a few very simple rules, of which the most important is x^{a+b} = x^a x^b. If this rule has been emphasized, then not only is the above mistake less likely in the first place, but it is also easier to correct: those who make the mistake can simply be told that they have forgotten to apply the right rule. Of course, it is important to be familiar with basic facts such as that x^3 means x times x times x, but these can be presented as consequences of the rules rather than as justifications for them.

I do not wish to suggest that one should try to explain to children what the abstract approach is, but merely that teachers should be aware of its implications. The main one is that it is quite possible to learn to use mathematical concepts correctly without being able to say exactly what they mean. This might sound a bad idea, but the use is often easier to teach, and a deeper understanding of the meaning, if there is any meaning over and above the use, often follows of its own accord.

Of course, there is an instinctive reason to immediately reject such a proposal — as the MAA review by Fernando Q. Gouvêa observes, ‘I suspect, however, that there is *far too much* “that’s the rule” teaching, and far too little explaining of reasons in elementary mathematics teaching. Such a focus on rules can easily lead to students having to remember a huge list of unrelated rules. I fear Gowers’ suggestion here may in fact be counterproductive.’ Nevertheless, the idea that technical fluency may precede and lead to mathematical understanding is worth pondering.

(Unfortunately, even though true, it may not actually help with teaching: in practice, drilling-in “mere” technical fluency can be as unsuccessful as imparting understanding.)