## Posts Tagged ‘**mathematics**’

## Some playing with Python

A long time ago, Diophantus (sort of) discussed integer solutions to the equation

$$x^2 + y^2 = z^2$$

(solutions to this equation are called Pythagorean triples).

Centuries later, in 1637, Fermat made a conjecture (now called Fermat’s Last Theorem, not because he uttered it in his dying breath, but because it was the last one to be proved — in ~1995) that

$$x^n + y^n = z^n$$

has no positive integer solutions for $n > 2$. In other words, his conjecture was that none of the following equations has a solution:

$$x^3 + y^3 = z^3$$

$$x^4 + y^4 = z^4$$

$$x^5 + y^5 = z^5$$

… and so on. An $n$th power cannot be partitioned into two $n$th powers.

About a century later, Euler proved the $n = 3$ case of Fermat’s conjecture, but generalized it in a different direction: he conjectured in 1769 that an $n$th power cannot be partitioned into fewer than $n$ $n$th powers, namely

$$a_1^n + a_2^n + \dots + a_k^n = b^n$$

has no solutions with $1 < k < n$. So his conjecture was that (among others) none of the following equations has a solution:

$$a^4 + b^4 + c^4 = d^4$$

$$a^5 + b^5 + c^5 + d^5 = e^5$$

… and so on.

This conjecture stood for about two centuries, until abruptly it was found to be false, by Lander and Parkin who in 1966 simply did a direct search on the fastest (super)computer at the time, and found this counterexample:

$$27^5 + 84^5 + 110^5 + 133^5 = 144^5$$
(It is still one of only three examples known, according to Wikipedia.)
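In the spirit of the rest of this post, the arithmetic of the counterexample is a one-liner to confirm:

```python
# Verify the Lander-Parkin counterexample to Euler's conjecture
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
assert lhs == rhs == 61917364224
```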

Now, how might you find this solution on a computer today?

In his wonderful (as always) post at bit-player, Brian Hayes showed the following code:

```python
import itertools as it

def four_fifths(n):
    '''Return smallest positive integers ((a,b,c,d),e) such that
    a^5 + b^5 + c^5 + d^5 = e^5; if no such tuple exists with e < n,
    return the string 'Failed'.'''
    fifths = [x**5 for x in range(n)]
    combos = it.combinations_with_replacement(range(1, n), 4)
    while True:
        try:
            cc = combos.next()
            cc_sum = sum([fifths[i] for i in cc])
            if cc_sum in fifths:
                return (cc, fifths.index(cc_sum))
        except StopIteration:
            return 'Failed'
```

to which, if you add (say) `print four_fifths(150)` and run it, it returns the correct answer fairly quickly: in about 47 seconds on my laptop.

The `if cc_sum in fifths:` line inside the loop is an $O(n)$ cost each time it’s run, so with a simple improvement to the code (using a set instead) and rewriting it a bit, we can write the following full program:

```python
import itertools

def find_counterexample(n):
    fifth_powers = [x**5 for x in range(n)]
    fifth_powers_set = set(fifth_powers)
    for xs in itertools.combinations_with_replacement(range(1, n), 4):
        xs_sum = sum([fifth_powers[i] for i in xs])
        if xs_sum in fifth_powers_set:
            return (xs, fifth_powers.index(xs_sum))
    return 'Failed'

print find_counterexample(150)
```

which finishes in about 8.5 seconds.
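The speedup comes from the membership test: `in` on a list is a linear scan, while `in` on a set is a hash lookup. A quick sketch of the difference (using `timeit`; the exact numbers will vary from machine to machine):

```python
import timeit

fifths = [x**5 for x in range(150)]
fifths_set = set(fifths)

# Both containers give identical membership answers...
assert (100**5 in fifths) == (100**5 in fifths_set) == True
assert (100**5 + 1 in fifths) == (100**5 + 1 in fifths_set) == False

# ...but the list may scan all 150 elements per query, while the set
# does a constant-time hash lookup:
t_list = timeit.timeit('149**5 in fifths', globals=globals(), number=100000)
t_set = timeit.timeit('149**5 in fifths_set', globals=globals(), number=100000)
```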

Great!

But there’s something unsatisfying about this solution: it assumes there’s a solution with all four numbers on the LHS less than 150. After all, changing the function invocation to `find_counterexample(145)` makes it run even a second faster, but how could we know to do that without already knowing the solution? Besides, we don’t have a fixed 8- or 10-second budget; what we’d *really* like is a program that keeps searching till it finds a solution or we abort it (or it runs out of memory or something), with no other fixed termination condition.

The above program used the given “n” as an upper bound to generate the combinations of 4 numbers; is there a way to generate all combinations when we don’t know an upper bound on them?

Yes! One of the things I learned from Knuth volume 4 is that if you simply write down each combination in descending order and order them lexicographically, then the combinations for each upper bound form a prefix of the list for the next bigger bound. In other words, for any upper bound, all the combinations form a prefix of the same infinite list, which starts as follows (line breaks for clarity):

1111, 2111, 2211, 2221, 2222, 3111, 3211, 3221, 3222, 3311, 3321, 3322, 3331, 3332, 3333, 4111, ... ... 9541, 9542, 9543, 9544, 9551, ... 9555, 9611, ...
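This prefix property is easy to sanity-check with the library function, at least up to a chosen bound, by writing each combination in descending order and sorting lexicographically:

```python
import itertools

def descending_combos(bound, r):
    # All multisets of size r drawn from 1..bound-1, each written in
    # descending order, listed in lexicographic order.
    return sorted(tuple(reversed(c)) for c in
                  itertools.combinations_with_replacement(range(1, bound), r))

# The list for a smaller upper bound is a prefix of the list for a
# larger one -- the lists all agree on their common initial segment.
small = descending_combos(5, 4)
large = descending_combos(10, 4)
assert large[:len(small)] == small
assert small[0] == (1, 1, 1, 1)
assert small[1] == (2, 1, 1, 1)
```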

There doesn’t seem to be a library function in Python to generate these though, so we can write our own. If we stare at the above list, we can figure out how to generate the next combination from a given one:

- Walk backwards from the end, till you reach the beginning or find an element that’s less than the previous one.
- Increase that element, set all the following elements to 1s, and continue.

We could write, say, the following code for it:

```python
def all_combinations(r):
    xs = [1] * r
    while True:
        yield xs
        for i in range(r - 1, 0, -1):
            if xs[i] < xs[i - 1]:
                break
        else:
            i = 0
        xs[i] += 1
        xs[i + 1:] = [1] * (r - i - 1)
```

(The `else` block on a `for` loop is an interesting Python feature: it is executed if the loop wasn’t terminated with `break`.) We could even hard-code the `r=4` case, as we’ll see later below.

For testing whether a given number is a fifth power, we can no longer simply look up a fixed precomputed set. We can do a binary search instead:

```python
def is_fifth_power(n):
    assert n > 0
    lo = 0
    hi = n
    # Invariant: lo^5 < n <= hi^5
    while hi - lo > 1:
        mid = lo + (hi - lo) // 2
        if mid ** 5 < n:
            lo = mid
        else:
            hi = mid
    return hi ** 5 == n
```

but it turns out that this is slower than one based on looking up in a growing set (as below).

Putting everything together, we can write the following (very C-like) code:

```python
largest_known_fifth_power = (0, 0)
known_fifth_powers = set()

def is_fifth_power(n):
    global largest_known_fifth_power
    while n > largest_known_fifth_power[0]:
        m = largest_known_fifth_power[1] + 1
        m5 = m ** 5
        largest_known_fifth_power = (m5, m)
        known_fifth_powers.add(m5)
    return n in known_fifth_powers

def fournums_with_replacement():
    (x0, x1, x2, x3) = (1, 1, 1, 1)
    while True:
        yield (x0, x1, x2, x3)
        if x3 < x2:
            x3 += 1
            continue
        x3 = 1
        if x2 < x1:
            x2 += 1
            continue
        x2 = 1
        if x1 < x0:
            x1 += 1
            continue
        x1 = 1
        x0 += 1
        continue

if __name__ == '__main__':
    tried = 0
    for get in fournums_with_replacement():
        tried += 1
        if tried % 1000000 == 0:
            print tried, 'Trying:', get
        rhs = get[0]**5 + get[1]**5 + get[2]**5 + get[3]**5
        if is_fifth_power(rhs):
            print 'Found:', get, rhs
            break
```

which is both longer and slower (takes about 20 seconds) than the original program, but at least we have the satisfaction that it doesn’t depend on any externally known upper bound.

I originally started writing this post because I wanted to describe some experiments I did with profiling, but it’s late and I’m sleepy so I’ll just mention it.

```
python -m cProfile euler_conjecture.py
```

will print relevant output in the terminal:

```
26916504 function calls in 26.991 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1   18.555   18.555   26.991   26.991 euler_conjecture.py:1(<module>)
 13458164    4.145    0.000    4.145    0.000 euler_conjecture.py:12(fournums_with_replacement)
 13458163    4.292    0.000    4.292    0.000 euler_conjecture.py:3(is_fifth_power)
      175    0.000    0.000    0.000    0.000 {method 'add' of 'set' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
```

Another way to view the same thing is to write the profile output to a file and read it with `cprofilev`:

```
python -m cProfile -o euler_profile.out euler_conjecture.py
cprofilev euler_profile.out
```

and visit http://localhost:4000 to view it.

Of course, simply translating this code to C++ makes it run much faster:

```cpp
#include <array>
#include <iostream>
#include <map>
#include <utility>

typedef long long Int;

constexpr Int fifth_power(Int x) { return x * x * x * x * x; }

std::map<Int, int> known_fifth_powers = {{0, 0}};

bool is_fifth_power(Int n) {
  while (n > known_fifth_powers.rbegin()->first) {
    int m = known_fifth_powers.rbegin()->second + 1;
    known_fifth_powers[fifth_power(m)] = m;
  }
  return known_fifth_powers.count(n);
}

std::array<Int, 4> four_nums() {
  static std::array<Int, 4> x = {1, 1, 1, 0};
  int i = 3;
  while (i > 0 && x[i] == x[i - 1]) --i;
  x[i] += 1;
  while (++i < 4) x[i] = 1;
  return x;
}

std::ostream& operator<<(std::ostream& os, std::array<Int, 4> x) {
  os << "(" << x[0] << ", " << x[1] << ", " << x[2] << ", " << x[3] << ")";
  return os;
}

int main() {
  while (true) {
    std::array<Int, 4> get = four_nums();
    Int rhs = fifth_power(get[0]) + fifth_power(get[1]) +
              fifth_power(get[2]) + fifth_power(get[3]);
    if (is_fifth_power(rhs)) {
      std::cout << "Found: " << get << " " << known_fifth_powers[rhs]
                << std::endl;
      break;
    }
  }
}
```

and

```
clang++ -std=c++11 euler_conjecture.cc && time ./a.out
```

runs in 2.43s, or 0.36s if compiled with `-O2`.

But I don’t have a satisfactory answer to how to make our Python program which takes 20 seconds as fast as the 8.5-second known-upper-bound version.

Edit [2015-05-08]: I wrote some benchmarking code to compare all the different “combination” functions.

```python
import itertools

# Copied from the Python documentation
def itertools_equivalent(iterable, r):
    pool = tuple(iterable)
    n = len(pool)
    if not n and r:
        return
    indices = [0] * r
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield tuple(pool[i] for i in indices)

# Above function, specialized to first argument being range(1, n)
def itertools_equivalent_specialized(n, r):
    indices = [1] * r
    yield indices
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield indices

# Function to generate all combinations of 4 elements
def all_combinations_pythonic(r):
    xs = [1] * r
    while True:
        yield xs
        for i in range(r - 1, 0, -1):
            if xs[i] < xs[i - 1]:
                break
        else:
            i = 0
        xs[i] += 1
        xs[i + 1:] = [1] * (r - i - 1)

# Above function, written in a more explicit C-like way
def all_combinations_clike(r):
    xs = [1] * r
    while True:
        yield xs
        i = r - 1
        while i > 0 and xs[i] == xs[i - 1]:
            i -= 1
        xs[i] += 1
        while i < r - 1:
            i += 1
            xs[i] = 1

# Above two functions, specialized to r = 4, using tuple over list.
def fournums():
    (x0, x1, x2, x3) = (1, 1, 1, 1)
    while True:
        yield (x0, x1, x2, x3)
        if x3 < x2:
            x3 += 1
            continue
        x3 = 1
        if x2 < x1:
            x2 += 1
            continue
        x2 = 1
        if x1 < x0:
            x1 += 1
            continue
        x1 = 1
        x0 += 1
        continue

# Benchmarks for all functions defined above (and the library function)
def benchmark_itertools(n):
    for xs in itertools.combinations_with_replacement(range(1, n), 4):
        if xs[0] >= n:
            break

def benchmark_itertools_try(n):
    combinations = itertools.combinations_with_replacement(range(1, n), 4)
    while True:
        try:
            xs = combinations.next()
            if xs[0] >= n:
                break
        except StopIteration:
            return

def benchmark_itertools_equivalent(n):
    for xs in itertools_equivalent(range(1, n), 4):
        if xs[0] >= n:
            break

def benchmark_itertools_equivalent_specialized(n):
    for xs in itertools_equivalent_specialized(n, 4):
        if xs[0] >= n:
            break

def benchmark_all_combinations_pythonic(n):
    for xs in all_combinations_pythonic(4):
        if xs[0] >= n:
            break

def benchmark_all_combinations_clike(n):
    for xs in all_combinations_clike(4):
        if xs[0] >= n:
            break

def benchmark_fournums(n):
    for xs in fournums():
        if xs[0] >= n:
            break

if __name__ == '__main__':
    benchmark_itertools(150)
    benchmark_itertools_try(150)
    benchmark_itertools_equivalent(150)
    benchmark_itertools_equivalent_specialized(150)
    benchmark_all_combinations_pythonic(150)
    benchmark_all_combinations_clike(150)
    benchmark_fournums(150)
```

As you can see, inside each benchmark function I used the same termination test (`if xs[0] >= n: break`) that causes `all_combinations` to stop; for the other, already-bounded combination functions it has no effect.

When run with `python -m cProfile benchmark_combinations.py`, the results include:

```
  2.817  benchmark_combinations.py:80(benchmark_itertools)
  8.583  benchmark_combinations.py:84(benchmark_itertools_try)
126.980  benchmark_combinations.py:93(benchmark_itertools_equivalent)
 46.635  benchmark_combinations.py:97(benchmark_itertools_equivalent_specialized)
 44.032  benchmark_combinations.py:101(benchmark_all_combinations_pythonic)
 18.049  benchmark_combinations.py:105(benchmark_all_combinations_clike)
 10.923  benchmark_combinations.py:109(benchmark_fournums)
```

Lessons:

- Calling `itertools.combinations_with_replacement` is by far the fastest, taking about 2.8 seconds. It turns out that it’s written in C, so this would be hard to beat. (Still, wrapping it in a `try` block is seriously bad.)
- The “equivalent” Python code from the itertools documentation (`benchmark_itertools_equivalent`) is about 50x slower.
- It gets slightly better when specialized to numbers.
- Simply generating *all* combinations without an upper bound is actually faster.
- It can be made even faster by writing it in a more C-like way.
- The tuples version with the loop unrolled manually is rather fast when seen in this light: less than 4x slower than the library version.

## How does Tupper’s self-referential formula work?

*[I write this post with a certain degree of embarrassment, because in the end it turns out (1) to be more simple than I anticipated, and (2) already done before, as I could have found if I had internet access when I did this. :-)]*

The so-called “Tupper’s self-referential formula” is the following, due to Jeff Tupper.

Graph the set of all points $(x, y)$ such that

$$\frac{1}{2} < \left\lfloor \mathrm{mod}\!\left(\left\lfloor \frac{y}{17} \right\rfloor 2^{-17\lfloor x \rfloor - \mathrm{mod}(\lfloor y \rfloor,\, 17)},\ 2\right)\right\rfloor$$

in the region

$$0 \le x < 106, \qquad N \le y < N + 17,$$

where N is the following 544-digit integer:

48584506361897134235820959624942020445814005879832445494830930850619

34704708809928450644769865524364849997247024915119110411605739177407

85691975432657185544205721044573588368182982375413963433822519945219

16512843483329051311931999535024137587652392648746133949068701305622

95813219481113685339535565290850023875092856892694555974281546386510

73004910672305893358605254409666435126534936364395712556569593681518

43348576052669401612512669514215505395545191537854575257565907405401

57929001765967965480064427829131488548259914721248506352686630476300

The result is the following graph:

Whoa. How does this work?

At first sight this is rather too incredible for words.

But after a few moments we can begin to guess what is going on, and see that—while clever—this is perhaps not so extraordinary after all. So let us calmly try to reverse-engineer this feat.
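The reverse-engineering boils down to this: at integer points, the inequality simply reads off bit $17\lfloor x\rfloor + \mathrm{mod}(y, 17)$ of $\lfloor y/17\rfloor$. Assuming N is a multiple of 17 (as in the usual presentations of the formula), on the strip $N \le y < N + 17$ that quotient is the constant $k = N/17$, so the “graph” is nothing but the binary expansion of $k$ laid out on a 106×17 grid. A minimal decoder sketch (whether the plot comes out mirrored or flipped depends on drawing conventions):

```python
def tupper_bitmap(n, width=106, height=17):
    """Decode the bitmap Tupper's formula draws on the strip
    n <= y < n + 17, assuming n is a multiple of 17.
    Pixel (x, y - n) is on iff bit (17*x + (y - n)) of k = n // 17 is 1."""
    k = n // 17
    return [''.join('#' if (k >> (17 * x + y)) & 1 else '.'
                    for x in range(width))
            for y in range(height)]

# Round-trip on a tiny example: set exactly one bit and find its pixel.
n = 17 * (1 << (17 * 2 + 3))   # the bit for pixel x=2, y=3
rows = tupper_bitmap(n, width=4)
assert rows[3][2] == '#'
assert sum(row.count('#') for row in rows) == 1
```

Running the same decoder on the 544-digit N above should reproduce the formula’s famous picture of itself.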

## Identify

What’s this?

Seeing more should help:

If that’s too easy, how about this?

Both are from *Rekhāgaṇita*, which is a c. 1720s translation by Jagannatha of Nasir al-Din al-Tusi’s 13th-century Arabic translation of Euclid’s *Elements*. It seems to be a straightforward translation of the Arabic — it even uses, to label vertices etc., letters in the Arabic order अ ब ज द ह व झ…. The text retains most of the structure and proposition numbers of Euclid, but in fact the Arabic world has considerably elaborated on Euclid. For instance, for the famous first example above, it gives sixteen additional proofs/demonstrations, which are not in the Greek texts.

The first volume (Books I to VI) is available on Google Books and the second volume (Books VII to XV) through wilbourhall.org (25 MB PDF).

Notes on the second: some technical vocabulary — a प्रथमाङ्कः is a prime number, and a रूप is a unit (one). The rest of the vocabulary, like “निःशेषं करोति” meaning “divides (without remainder)”, is more or less clear, and also the fact that as in Euclid, numbers are being conceived of as lengths (so दझ and झद mean the same).

It does sound cooler to say “इदमशुद्धम्” than “But this is a contradiction”. :-)

## On “trivial” in mathematics

One aspect of mathematicians’ vocabulary that non-mathematicians often find non-intuitive is the word “trivial”. Mathematicians seem to call a great many things “trivial”, most of which are anything but. Here’s a joke, pointed out to me by Mohan:

Two mathematicians are discussing a theorem. The first mathematician says that the theorem is “trivial”. In response to the other’s request for an explanation, he then proceeds with two hours of exposition. At the end of the explanation, the second mathematician agrees that the theorem is trivial.

Like many jokes, this is not far from the truth. This tendency has led others to say, for example, that

In mathematics, there are only two kinds of proofs: Trivial ones, and undiscovered ones.

Or as Feynman liked to say, “mathematicians can prove only trivial theorems, because every theorem that’s proved is trivial”. (The mathematicians to whom Feynman said this do not seem to have liked the statement.)

A little examination, however, shows that all this is not far from reality, and indeed not far from the ideal of mathematics. Consider these excerpts from Gian-Carlo Rota’s wonderful *Indiscrete Thoughts*:

“eventually every mathematical problem is proved trivial. The quest for ultimate triviality is characteristic of the mathematical enterprise.” (p.93)

“Every mathematical theorem is eventually proved trivial. The mathematician’s ideal of truth is triviality, and the community of mathematicians will not cease its beaver-like work on a newly discovered result until it has shown to everyone’s satisfaction that all difficulties in the early proofs were spurious, and only an analytic triviality is to be found at the end of the road.” (p. 118, in The Phenomenology of Mathematical Truth)

Are there definitive proofs?

It is an article of faith among mathematicians that after a new theorem is discovered, other simpler proofs of it will be given until a definitive one is found. A cursory inspection of the history of mathematics seems to confirm the mathematician’s faith. The first proof of a great many theorems is needlessly complicated. “Nobody blames a mathematician if the first proof of a new theorem is clumsy”, said Paul Erdős. It takes a long time, from a few decades to centuries, before the facts that are hidden in the first proof are understood, as mathematicians informally say. This gradual bringing out of the significance of a new discovery takes the appearance of a succession of proofs, each one simpler than the preceding. New and simpler versions of a theorem will stop appearing when the facts are finally understood. (p. 146, in The Phenomenology of Mathematical Proof)

For more context, the section titled *“Truth and Triviality”* is especially worth reading, where he gives the example of proofs of the prime number theorem, starting with Hadamard and de la Vallée Poussin, through Wiener, Erdős and Selberg, and Levinson. Also p. 119, where he feels this is not unique to mathematics:

Any law of physics, when finally ensconced in a proper mathematical setting, turns into a mathematical triviality. The search for a universal law of matter, a search in which physics has been engaged throughout this century, is actually the search for a trivializing principle, for a universal “nothing but.” […] The ideal of all science, not only of mathematics, is to do away with any kind of synthetic *a posteriori* statement and to leave only analytic trivialities in its wake. Science may be defined as the transformation of synthetic facts of nature into analytic statements of reason.

So to correct the earlier quote, there are *three* kinds of proofs from the mathematician’s perspective: those that are trivial, those that have not been discovered, and those niggling ones that are not **yet** trivial.

## Euler, pirates, and the discovery of America

Is there nothing Euler wasn’t involved in?!

That rhetorical question is independent of the following two, which are exceedingly weak connections.

**“Connections” to piracy**: Very tenuous connections, of course, but briefly, summarising from the article:

- Maupertuis: President of the Berlin Academy for much of the time Euler was there. His father got a license from the French king to attack English ships, made a fortune, and retired. Maupertuis is known for formulating the Principle of Least Action (but maybe it was Euler), and best known for taking measurements showing the Earth bulges at the equator as Newton had predicted, thus “The Man Who Flattened the Earth”.
- Henry Watson: English privateer living in India, lost a fortune to the scheming British East India Company. Wanted to be a pirate, but wasn’t actually one. Known for translating Euler’s *Théorie complette* [E426] from its original French: *A complete theory of the construction and properties of vessels: with practical conclusions for the management of ships, made easy to navigators*. (Yes, Euler wrote that.)
- Kenelm Digby: Not connected to Euler actually, just the recipient of a letter by Fermat in which a problem later solved by Euler was discussed. Distinguished alchemist, one of the founders of the Royal Society, did some pirating (once) and was knighted for it.
- Another guy, never mind.

Moral: The fundamental interconnectedness of all things. Or, connections don’t mean a thing.

**The discovery of America:** Columbus never set foot on the mainland of America, and died thinking he had found a shorter route to India and China, not whole new continents that were in the way. The question remained whether these new lands were part of Asia (thus, “Very Far East”) or not. The czar of Russia (centuries later) sent Bering to determine the bounds of Russia, and the Bering Strait separating the two continents was discovered and reported back: America was not part of Russia. At about this time, there were riots in Russia, there was nobody to make the announcement, and “Making the announcement fell to Leonhard Euler, still the preeminent member of the St. Petersburg Academy, and really the only member who was still taking his responsibilities seriously.” As the man in charge of drawing the geography of Russia, Euler knew a little, and wrote a letter to Wetstein, member of the Royal Society in London. So it was only through Euler that the world knew that the America that was discovered was new. This letter [E107], with others, is about the only work of Euler in English. That Euler knew English (surprisingly!) is otherwise evident from the fact that he translated and “annotated” a book on ballistics by the Englishman Benjamin Robins. The original was 150 pages long; with Euler’s comments added, it was 720. [E77, translated back into English as *New principles of gunnery*.]

Most or all of the above is from Ed Sandifer’s monthly column *How Euler Did It*.

**The works of Leonhard Euler online** has pages for all 866 of his works; 132 of them are available in English, including the translations from the Latin posted by graduate student Jordan Bell on the arXiv. They are very readable.

This includes his *Letters to a German Princess on various topics in physics and philosophy* [E343,E344,E417], which were bestsellers when reprinted as science books for a general audience. It includes his textbook, *Elements of Algebra* [E387,E388]. Find others on Google Books. The translations do *not* seem to include (among his other books) his classic textbook *Introductio in analysin infinitorum* [E101,E102, “the foremost textbook of modern times”], though there are French and German translations available.

Apparently, Euler’s Latin is (relatively) not too hard to follow.

## Keeping up with the Schwar(t)zes

For future reference, reminded by a recent webcomic:

**There is no ‘t’ in Cauchy–Schwarz.** It is named after Hermann Schwarz (1843–1921), who rediscovered it in 1888. His other work includes some stuff in complex analysis: the Schwarz lemma, the Schwarz–Christoffel mapping, a theorem about symmetry of second derivatives, etc.

**All other (mathematical) Schwar(t)zes have ‘t’s.**

- There’s Jack Schwartz (1930–2009), mathematician-and-computer-scientist, who was Gian-Carlo Rota’s advisor and co-wrote the monumental three-volume work on *Linear Operators* (“Dunford and Schwartz”). And his is also the name in the Schwartz–Zippel lemma (“Or the Schwartz–Zippel–DeMillo–Lipton Lemma.”)
- There’s Laurent Schwartz (1915–2002), who was at ENS, worked on the theory of distributions, and won the Fields Medal in 1950. (“Schwartz space”)

Those are the major ones, I think.

Omissions? Errors?

## Is it important to “understand” first?

The Oxford University Press has been publishing a book series known as “Very Short Introductions”. These slim volumes are an excellent idea, and cover over 200 topics already. The volume Mathematics: A Very Short Introduction is written by Timothy Gowers.

Gowers is one of the leading mathematicians today, and a winner of the Fields Medal (in 1998). In addition to his research work, he has also done an amazing amount of service to mathematics in other ways. He edited the 1000-page *Princeton Companion to Mathematics*, getting the best experts to write, and writing many articles himself. He also started the Polymath project and the Tricki, the “tricks wiki”. You can watch his talk on The Importance of Mathematics (with slides) (transcript), and read his illuminating mathematical discussions, and his blog. His great article The Two Cultures of Mathematics is on the “theory builders and problem solvers” theme, and is a paper every mathematician should read.

Needless to say, “Mathematics: A Very Short Introduction” is a very good read. Unlike many books aimed at non-mathematicians, Gowers is quite clear that he does “presuppose some interest on the part of the reader rather than trying to drum it up myself. For this reason I have done without anecdotes, cartoons, exclamation marks, jokey chapter titles, or pictures of the Mandelbrot set. I have also avoided topics such as chaos theory and Gödel’s theorem, which have a hold on the public imagination out of proportion to their impact on current mathematical research”. What follows is a great book that particularly excels at describing what it is that mathematicians do. Some parts of the book, being Gowers’s personal views on the philosophy of mathematics, might not work very well when directed at laypersons, not because they require advanced knowledge, but because they assume a culture of mathematics. Doron Zeilberger thinks that this book “should be recommended reading to everyone and required reading to mathematicians”.

Its last chapter, “Some frequently asked questions”, carries Gowers’s thoughts on some interesting questions. With whole-hearted apologies for inserting my own misleading “summaries” of the answers in brackets, they are the following: “1. Is it true that mathematicians are past it by the time they are 30?” (no), “2. Why are there so few women mathematicians?” (puzzling and regrettable), “3. Do mathematics and music go together?” (not really), “4. Why do so many people positively dislike mathematics?” (more on this below), “5. Do mathematicians use computers in their work?” (not yet), “6. How is research in mathematics possible?” (if you have read this book you won’t ask), “7. Are famous mathematical problems ever solved by amateurs?” (not really), “8. Why do mathematicians refer to some theorems and proofs as beautiful?” (already discussed. Also, “One difference is that […] a mathematician is more anonymous than an artist. […] it is, in the end, the mathematics itself that delights us”.) As I said, you should read the book itself, not my summaries.

The interesting one is (4).

4. Why do so many people positively dislike mathematics?

One does not often hear people saying that they have never liked biology, or English literature. To be sure, not everybody is excited by these subjects, but those who are not tend to understand perfectly well that others are. By contrast, mathematics, and subjects with a high mathematical content such as physics, seem to provoke not just indifference but actual antipathy. What is it that causes many people to give mathematical subjects up as soon as they possibly can and remember them with dread for the rest of their lives?

Probably it is not so much mathematics itself that people find unappealing as the experience of mathematics lessons, and this is easier to understand. Because mathematics continually builds on itself, it is important to keep up when learning it. For example, if you are not reasonably adept at multiplying two-digit numbers together, then you probably won’t have a good intuitive feel for the distributive law (discussed in Chapter 2). Without this, you are unlikely to be comfortable with multiplying out the brackets in an expression such as $(x+2)(x+3)$, and then you will not be able to understand quadratic equations properly. And if you do not understand quadratic equations, then you will not understand why the golden ratio is $\frac{1+\sqrt{5}}{2}$.

There are many chains of this kind, but there is more to keeping up with mathematics than just maintaining technical fluency. Every so often, a new idea is introduced which is very important and markedly more sophisticated than those that have come before, and each one provides an opportunity to fall behind. An obvious example is the use of letters to stand for numbers, which many find confusing but which is fundamental to all mathematics above a certain level. Other examples are negative numbers, complex numbers, trigonometry, raising to powers, logarithms, and the beginnings of calculus. Those who are not ready to make the necessary conceptual leap when they meet one of these ideas will feel insecure about all the mathematics that builds on it. Gradually they will get used to only half understanding what their mathematics teachers say, and after a few more missed leaps they will find that even half is an overestimate. Meanwhile, they will see others in their class who are keeping up with no difficulty at all. It is no wonder that mathematics lessons become, for many people, something of an ordeal.

This seems to be exactly the right reason. No one would enjoy being put through drudgery that they were not competent at, and without the beauty at the end of the pursuit being apparent. (I hated my drawing classes in school, too.) See also Lockhart’s Lament, another article that *everyone* — even, or *especially*, non-mathematicians — should read.

As noted earlier, Gowers has some things to say about the philosophy of mathematics. As is evident from his talk “Does mathematics need a philosophy?” (also typeset as essay 10 of 18 Unconventional Essays on the Nature of Mathematics), he has rejected the Platonic philosophy (≈ mathematical truths exist, and we’re discovering them) in favour of a formalist one (≈ it’s all just manipulating expressions and symbols, just stuff we do). The argument is interesting and convincing, but I find myself unwilling to change my attitude. Yuri Manin says in a recent interview that “I am an emotional Platonist (not a rational one: there are no rational arguments in favor of Platonism)”, so it’s perhaps just as well.

Anyway, the anti-Platonist / formalist idea of Gowers is evident throughout the book, and of course it has its great side: “a mathematical object *is* what it *does*” is his slogan, and most of us can agree that “one should learn to think abstractly, because by doing so many philosophical difficulties disappear” , etc. The only controversial suggestion, perhaps, follows the excerpt quoted above (of “Why do so many people positively dislike mathematics?”):

Is this a necessary state of affairs? Are some people just doomed to dislike mathematics at school? Or might it be possible to teach the subject differently in such a way that far fewer people are excluded from it? I am convinced that any child who is given one-to-one tuition in mathematics from an early age by a good and enthusiastic teacher will grow up liking it. This, of course, does not immediately suggest a feasible educational policy, but it does at least indicate that there might be room for improvement in how mathematics is taught.

One recommendation follows from the ideas I have emphasized in this book. Above, I implicitly drew a contrast between being technically fluent and understanding difficult concepts, but it seems that almost everybody who is good at one is good at the other. And indeed, if understanding a mathematical object is largely a question of learning the rules it obeys rather than grasping its essence, then this is exactly what one would expect — the distinction between technical fluency and mathematical understanding is less clear-cut than one might imagine.

How should this observation influence classroom practice? I do not advocate any revolutionary change — mathematics has suffered from too many of them already — but a small change in emphasis could pay dividends. For example, suppose that a pupil makes the common mistake of thinking that

x^(a+b) = x^a + x^b. A teacher who has emphasized the intrinsic meaning of expressions such as x^a will point out that x^(a+b) means a+b xs all multiplied together, which is clearly the same as a of them multiplied together multiplied by b of them multiplied together. Unfortunately, many children find this argument too complicated to take in, and anyhow it ceases to be valid if a and b are not positive integers.

Such children might benefit from a more abstract approach. As I pointed out in Chapter 2, everything one needs to know about powers can be deduced from a few very simple rules, of which the most important is

x^(a+b) = x^a x^b. If this rule has been emphasized, then not only is the above mistake less likely in the first place, but it is also easier to correct: those who make the mistake can simply be told that they have forgotten to apply the right rule. Of course, it is important to be familiar with basic facts such as that x^3 means x times x times x, but these can be presented as consequences of the rules rather than as justifications for them.

I do not wish to suggest that one should try to explain to children what the abstract approach is, but merely that teachers should be aware of its implications. The main one is that it is quite possible to learn to use mathematical concepts correctly without being able to say exactly what they mean. This might sound a bad idea, but the use is often easier to teach, and a deeper understanding of the meaning, if there *is* any meaning over and above the use, often follows of its own accord.

Of course, there is an instinctive reason to immediately reject such a proposal — as the MAA review by Fernando Q. Gouvêa observes, ‘I suspect, however, that there is *far too much* “that’s the rule” teaching, and far too little explaining of reasons in elementary mathematics teaching. Such a focus on rules can easily lead to students having to remember a huge list of unrelated rules. I fear Gowers’ suggestion here may in fact be counterproductive.’ Nevertheless, the idea that technical fluency may precede and lead to mathematical understanding is worth pondering.

(Unfortunately, even though true, it may not actually help with teaching: in practice, drilling-in “mere” technical fluency can be as unsuccessful as imparting understanding.)

## Eugene Curtain and Max Washauer

If you came here because you were reading Peter Winkler’s “7 Puzzles You Think You Must Not Have Heard Correctly”, the names are supposed to be **Eugene Curtin** and **Max Warshauer**, and the paper is called “The locker puzzle”, published in *The Mathematical Intelligencer*, Volume 28, Number 1 (March 2006), pages 28–31.

[If not, you should read the amazing “7 Puzzles You Think You Must Not Have Heard Correctly”, spend several days trying the first problem, read the brilliant solution, and then come back here if you’re interested in learning why no other solution can do better.]

The paper is available here if your institution has access. If not, here’s a sketch of the proof that the strategy cannot be improved upon. [**Update 2010-01-06**: Oliver Nash has a post about the puzzle, explaining both the original solution and the proof of optimality, here. Just the original solution is also worked out by David MacKay here.]

First, let us modify the rules slightly so that each prisoner must continue looking in boxes until he finds the box containing his name. The prisoners win if no prisoner opens more than 50 (i.e., n/2) boxes. This change obviously makes no difference to the outcome. Let’s call this (modified) game **Game 1**.

A different game involves all the prisoners being in the room at the same time, and is played as follows. The first prisoner opens boxes until he finds his name (i.e., the number “1”). Then, the lowest-numbered prisoner whose name hasn’t been revealed starts opening boxes until *he* finds his name. Then the next lowest-numbered whose name hasn’t been revealed opens boxes, and so on. The prisoners win if no one opens more than 50 boxes. Call this **Game 2**.

Let’s say we observe the prisoners as they play Game 2, and record the order in which boxes were revealed. This completely specifies what happened. For example, (with 10 prisoners instead of 100) if we record the list 2,6,1,4,9,7,10,8,3,5, we know that the first prisoner revealed boxes containing 2, 6, 1, then the third (lowest unrevealed) prisoner opened boxes with 4,9,7,10,8,3, then prisoner 5 opened 5, and they lost because the third prisoner opened 6 > 5 boxes.

- **Prove:** No matter what strategy the prisoners follow, each permutation has the same probability (1/n!) of being the list recorded.
- **Prove:** “The classical records-to-cycles bijection” is indeed a bijection. It sends 2,6,1,4,9,7,10,8,3,5 to (2 6 1)(4 9 7 10 8 3)(5), for example.
- So the probability of the prisoners winning Game 2 (no matter what strategy they follow) is exactly the probability that a random permutation has no cycle of length greater than n/2.
- **Prove:** Any strategy for Game 1 corresponds to a strategy for Game 2 with the same probability of winning. (Trivial: the only change is that you don’t have to open boxes you’ve already seen opened.)

This proves that the pointer-chasing strategy is optimal for Game 1.
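The records-to-cycles bijection is easy to play with in code. Here is a sketch in Python (the function name `records_to_cycles` is mine): it replays Game 2 on a recorded list, splitting it into the segments opened by successive prisoners, which are exactly the cycles of the corresponding permutation; for n = 4 it then checks by brute force that the winning probability is the fraction of permutations with no cycle longer than n/2.

```python
from itertools import permutations

def records_to_cycles(record):
    """Split a Game-2 record into cycles: each segment ends when the
    active prisoner (lowest number not yet revealed) finds his own name."""
    n = len(record)
    cycles, current, opened = [], [], set()
    prisoner = None
    for x in record:
        if prisoner is None:
            # the next prisoner to play is the lowest not yet revealed
            prisoner = min(set(range(1, n + 1)) - opened)
        current.append(x)
        opened.add(x)
        if x == prisoner:          # segment complete: one cycle
            cycles.append(tuple(current))
            current, prisoner = [], None
    return cycles

# The example from the text:
print(records_to_cycles([2, 6, 1, 4, 9, 7, 10, 8, 3, 5]))
# -> [(2, 6, 1), (4, 9, 7, 10, 8, 3), (5,)]

# For n = 4 the prisoners win iff no segment exceeds 2; the permutations
# of S_4 with all cycles of length <= 2 are the 10 involutions.
wins = sum(1 for p in permutations(range(1, 5))
           if max(len(c) for c in records_to_cycles(list(p))) <= 2)
print(wins, "of 24")  # -> 10 of 24
```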

Here’s the puzzle as it was originally considered, still open: suppose there are n prisoners and 2n boxes, half of them empty. The prisoners can each examine n lockers. The pointer-chasing strategy doesn’t work as empty boxes point nowhere. Does the probability of winning go to 0 as n→∞?

## Giving credit

V. I. Arnold, *On teaching mathematics*:

What is a *group*? Algebraists teach that this is supposedly a set with two operations that satisfy a load of easily-forgettable axioms. This definition provokes a natural protest: why would any sensible person need such pairs of operations? […]

What is a smooth manifold? In a recent American book I read that Poincaré was not acquainted with this (introduced by himself) notion and that the “modern” definition was only given by Veblen in the late 1920s: a manifold is a topological space which satisfies a long series of axioms.

For what sins must students try and find their way through all these twists and turns? Actually, in Poincaré’s *Analysis Situs* there is an absolutely clear definition of a smooth manifold which is much more useful than the “abstract” one.

(Interesting talk, do read.)

Meanwhile…

Bill Poser at the Language Log:

Sir William Jones is incorrectly viewed as the discoverer of the Indo-European language family and founder of modern historical linguistics […]

The second and more important point is that Jones cannot be considered the founder of modern historical linguistics because he did not use the *comparative method*, the crucial innovation that distinguishes modern historical linguistics from its predecessors.

Sigh. Let’s not forget people who actually caused us to perceive the world differently, and leave it to pedantic types to define who invented what.

## Music and lyrics

I attended a talk today by Adriano Garsia, which was part of the MIT combinatorics seminar. It was called “A New Recursion in the Theory of Macdonald Polynomials”, and while I didn’t know what Macdonald polynomials were, I went to the talk anyway, because I like polynomials and I like recursion and I like combinatorics (but primarily because it was a way of procrastinating). :-)

Even though I understood almost nothing of the deep mathematics in the talk (and still don’t exactly know what Macdonald polynomials are), it was a very pleasant and refreshing talk, and I felt very good after hearing it. The reason is that it had, of all the talks I’ve attended in recent memory, probably the best “music”. What does that mean? As Prof. Doron Zeilberger, who coined the term, explains:

Human beings have bodies and souls. Computers have hardware and software, and math talks have lyrics and music. Most math talks have very hard-to-follow lyrics, […]

But like a good song, and a good opera, you can still enjoy it if the music is good. The “music” in a math talk is the speaker’s enthusiasm, body-language, and off-the-cuff heuristic explanations.

Sometimes you can recognize a familiar word, and relate it to something of your own experience, whether or not the meaning that you attribute to it is what the speaker meant, and this can also enhance your enjoyment.

(read more at Zeilberger’s Opinion 78)

And so it was with this talk. Prof. Garsia clearly *loved* the subject, and even someone like me who had no idea what’s going on felt compelled to listen, fascinated. He told us how the problem came about (“long relationship with Jim Haglund: he makes brilliant conjectures and I prove them”), of false proofs they had had, of how their current proof was driven by heuristics and unproven conjectures, he even posed a problem and offered a $100 reward for an elementary/combinatorial proof. :-)

Far better than the talks with bad music *and* bad lyrics. (It also helped that although I couldn’t understand the lyrics, they *sounded* nice: permutations, Young tableaux, polynomials defined in terms of them…)

**Edit**: See also the recent research ‘showing’ that gestures help students learn mathematics.

## Convex and concave

A mnemonic for ‘convex’ and ‘concave’:

Two mnemonics are better than one.

## “Every good theorem must have a good counterexample”

Abhyankar[1] attributes the quote to Severi.

[1]: Historical Ramblings in Algebraic Geometry and Related Algebra, Shreeram S. Abhyankar, The American Mathematical Monthly, Vol. 83, No. 6 (Jun. – Jul., 1976), pp. 409-448. Also available here, because it won a Lester R. Ford Award (“articles of expository excellence”) and also a Chauvenet Prize (“the highest award for mathematical expository writing”).

Abhyankar, after distinguishing between the flavours of “high-school algebra” (polynomials, power series), “college algebra” (rings, fields, ideals) and “university algebra” (functors, homological algebra) goes on to present his fundamental thesis (“obviously a partisan claim”):

The method of high-school algebra is powerful, beautiful and accessible. So let us not be overwhelmed by the groups-ring-fields or the functorial arrows of the other two algebras and thereby lose sight of the power of the explicit algorithmic processes given to us by Newton, Tschirnhausen, Kronecker, and Sylvester.

Perhaps for this reason, Dr. Z calls Abhyankar (“one of my great heroes”) “the modern prophet of high-school-algebra”.

Anyway, enough rambling. Back to “Every good theorem must have a good counterexample”. Discuss.

[**Edited to add**: The statement in its original context was referring to a phenomenon where a pleasing conjecture is found to have counterexamples, until it is resolved by realising that we must, say, count multiplicities the “right” way—the right way turning out to be whatever makes the conjecture true. Thus Bezout’s theorem, etc., and the quote just means, as he paraphrases, “don’t be deterred if your formula is presently invalid in some cases; it only means that you have not yet completely deciphered god’s mind”. On the other hand, what I (mis?)remembered was that one must know “counterexamples” to a theorem in the sense that one must know why the conclusion is not true if the hypotheses are weakened: thus one doesn’t really understand a theorem till one knows at least one “counterexample” (and at least two proofs).]

## Mathematics and notation: the Hindu-Arabic numeral system

Quick: What is CCXXXVII × CCCXXIX?

From page 15 of The Life of Pi by Jonathan Borwein:

The Indo-Arabic system came to Europe around 1000 CE. Resistance ranged from accountants who didn’t want their livelihood upset to clerics who saw the system as ‘diabolical,’ since they incorrectly assumed its origin was Islamic. European commerce resisted until the 18th century, and even in scientific circles usage was limited into the 17th century.

The prior difficulty of doing arithmetic is indicated by college placement advice given to a wealthy German merchant in the 16th century:

“If you only want him to be able to cope with addition and subtraction, then any French or German university will do. But if you are intent on your son going on to multiplication and division — assuming that he has sufficient gifts — then you will have to send him to Italy.” (George Ifrah, p. 577)

[The rest of the pages of the slides are also great and worth reading!]

Just to give some context of the time: The Hindu-Arabic system was introduced into Europe by Leonardo of Pisa (Fibonacci) — an Italian — in his *Liber Abaci*, written in 1202. Gutenberg (in Germany) invented the printing press around 1450. In Italy, Tartaglia lived 1500-1557, Cardano 1501-1576, Sturm 1507-1589, Giordano Bruno 1548-1600, and Ludovico Ferrari 1522-1565. (And outside Italy, Robert Recorde (as we’re talking about notation) (1510-1558) in Wales, François Viète (1540-1603) in France, etc. See this image.) Of course Galileo Galilei (1564-1642) was Italian too, but came later, as did Newton, Fermat, the Bernoullis, and all the others.

While on the topic of mathematics and notation, see also this post: Visual Clarity in the Naming of Variables.

[And while not exactly notation, Donald Knuth’s Calculus via O notation]

## A “Möbius” inversion formula

[Not *the* Möbius inversion formula, but something similar.]

As usual, define the Möbius function μ on the natural numbers by: μ(1) = 1; μ(n) = (−1)^k if n is a product of k distinct primes; and μ(n) = 0 if n is divisible by the square of a prime.

Let f be *any* function defined on the natural numbers, and let g be the function defined as g(n) = Σ_{k=1}^{n} f(⌊n/k⌋).

Then it is true that f(n) = Σ_{k=1}^{n} μ(k) g(⌊n/k⌋).

Note that f need not be multiplicative; it can be any arbitrarily defined function. I have no idea why it is true. Help?
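At least the identity is easy to check numerically. A sketch in Python, assuming the definitions in question are g(n) = Σ_{k=1}^{n} f(⌊n/k⌋) and the claim f(n) = Σ_{k=1}^{n} μ(k) g(⌊n/k⌋); the helper names are mine, and f is chosen to be deliberately arbitrary and non-multiplicative:

```python
def mobius(n):
    """Möbius function: 1 for n = 1; 0 if a squared prime divides n;
    otherwise (-1)^(number of distinct prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:                     # one prime factor left over
        result = -result
    return result

def g_from_f(f, n):
    # g(n) = sum over k = 1..n of f(floor(n/k))
    return sum(f(n // k) for k in range(1, n + 1))

def inverted(f, n):
    # claimed inversion: f(n) = sum over k = 1..n of mu(k) * g(floor(n/k))
    return sum(mobius(k) * g_from_f(f, n // k) for k in range(1, n + 1))

# an arbitrary, non-multiplicative f
f = lambda n: n * n - 3 * n + 7
print(all(inverted(f, n) == f(n) for n in range(1, 80)))  # -> True
```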

## Lattice points visible from the origin

*[A test of LaTeX-to-Wordpress conversion. Bugs remain, point them out. Original PDF]*

Here is a problem I love. It is simple to state, and it has a solution that is not trivial, but is easy to understand. The solution also goes through some beautiful parts, so I can promise it’s worth reading :-)

[The solution is not mine. Also, the question is just an excuse for the ideas in the solution. :P]

Question. Suppose you are standing on an infinite grid in the plane. You can see infinitely in all directions, but you cannot see through grid points: a point is hidden from view if some other grid point lies in your line of sight. What fraction of the grid points can you see?

Let us first imagine that we are standing at the origin, and that the grid is that of the lattice (integer) points.
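If you want to experiment numerically before reading the solution, here is a sketch in Python. It leans on one observation (a spoiler only of the very first step): (x, y) is visible exactly when gcd(|x|, |y|) = 1, since any common factor d > 1 puts the closer grid point (x/d, y/d) on the same line of sight. The function name is mine.

```python
from math import gcd

def visible_fraction(N):
    # fraction of the nonzero lattice points in the box [-N, N]^2
    # that are visible from the origin
    visible = total = 0
    for x in range(-N, N + 1):
        for y in range(-N, N + 1):
            if (x, y) == (0, 0):
                continue
            total += 1
            if gcd(abs(x), abs(y)) == 1:
                visible += 1
    return visible / total

print(visible_fraction(400))  # ≈ 0.608
```

The value the fraction converges to is, of course, the punchline of the solution.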

Read the rest of this entry »

## A theorem by Euler on partitions

There are so many beautiful facts in mathematics that are not as well known as they deserve to be. [Just today I couldn’t find an online reference for the Robinson-Schensted(-Knuth) correspondence.] Some efforts in the direction of communicating these to an online audience have been collected in the Carnival of Mathematics (41 so far) posts on various blogs. Another good idea I recently found is Theorem of the Day. [EtA: Found at Theorem of the Day: RSK correspondence.]

Anyway… today I attended a lecture on Euler by William Dunham (great talk!) where after a brief biography of Euler and a tour through some of his results, he showed Euler’s proof of a particular theorem about partitions. It will be familiar to anyone who has read about integer partitions, but it deserves to be familiar to everyone! So here is a sketch of the proof, for your reading pleasure. [It doesn’t do justice to present only a “sketch”, but I have assignments due ~~tomorrow~~today… feel free to take it and turn it into some form worthy of the content. :-)]

Now when I say “a theorem by Euler”, it is very ambiguous which one I mean, and even with the “on partitions” restriction, there are several. So let me state it:

The number of partitions of a number into odd parts is the same as the number of partitions of the number into distinct parts.

[If I remember correctly what was narrated in the talk, one of the Bernoullis wrote to Euler asking him a question about partitions, and within a few days (think of the postal system then) Euler replied with a proof of the theorem, along with an apology for the delay caused by the poor eyesight he had recently been suffering from. This is an outline of his proof; ask me about the details.]

Let D(n) denote the number of partitions into distinct parts, and O(n) denote the number of partitions into odd parts. Then we have:

Σ_n D(n)x^n = ∏_{k≥1} (1 + x^k) = ∏_{k≥1} (1 − x^{2k})/(1 − x^k) = ∏_{k odd} 1/(1 − x^k) = Σ_n O(n)x^n,

which proves the theorem.

The first equality, which gives the generating function for the D(n)s, can be seen as follows: when you “expand” and write out ∏_{k≥1} (1 + x^k) = (1 + x)(1 + x^2)(1 + x^3)⋯, you get Σ_n D(n)x^n, where the coefficient of any term x^n is exactly the number of ways of writing x^n as a product of distinct factors of the form x^k, which is D(n). Similarly, the last equality, about the generating function for the O(n)s, is because the coefficient of any term x^n in ∏_{k odd} 1/(1 − x^k) = ∏_{k odd} (1 + x^k + x^{2k} + ⋯) is exactly the number of ways of writing x^n as a product of (not necessarily distinct) factors x^k for odd k.

The result might not be terribly important, but this idea was the starting point for proving several observations about partitions, and the method has led to the discovery of several facts and is more widely applicable than you think!
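For the skeptical, the theorem itself is quick to sanity-check by counting both kinds of partitions with a standard dynamic-programming pass. This is just a verification sketch (not Euler's method), with function names of my own choosing:

```python
def count_distinct(n):
    # partitions of n into distinct parts: 0/1-knapsack-style counting,
    # iterating totals downward so each part is used at most once
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(n, part - 1, -1):
            ways[total] += ways[total - part]
    return ways[n]

def count_odd(n):
    # partitions of n into odd parts: unbounded counting, iterating
    # totals upward so each odd part may repeat
    ways = [1] + [0] * n
    for part in range(1, n + 1, 2):
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

print([count_distinct(n) for n in range(1, 9)])  # -> [1, 1, 2, 2, 3, 4, 5, 6]
print(all(count_distinct(n) == count_odd(n) for n in range(1, 60)))  # -> True
```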

If you want a combinatorial proof by bijection (“For Entertainment Only”) here is an outline of one:

Given a partition of n into odd parts, say

n = a_1·1 + a_3·3 + a_5·5 + ⋯,

write each multiplicity a_k “in binary”, by which I mean as a sum of distinct powers of two. So you have

n = (2^{i_1} + 2^{i_2} + ⋯)·1 + (2^{j_1} + 2^{j_2} + ⋯)·3 + ⋯.

Now just get rid of the brackets, and note that all terms are distinct.

E.g. for 11 = 3 + 3 + 3 + 1 + 1,

i.e. 11 = 3·3 + 2·1 = (2 + 1)·3 + (2)·1, you get the partition

11 = 6 + 3 + 2.

[Exercise: Why are all parts distinct?]

Conversely, given a partition into distinct parts, we can separate the “even part” from the “odd part” in each term, so for example with 11 = 6 + 3 + 2, we write

11 = 2·3 + 1·3 + 2·1, and collect the coefficients of each odd number, so 11 = (2 + 1)·3 + (2)·1 = 3 + 3 + 3 + 1 + 1, which was our original odd partition.

[Exercise: Why can’t we do the same thing starting with *any* partition?]

[**Question: Is the bijective proof related to the algebraic one?**]
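Both directions of the bijection are short enough to code up. A sketch in Python, following the binary-expansion description above; the function names are mine, and the test values are just one illustrative pair of partitions (3+3+3+1+1 ↔ 6+3+2):

```python
from collections import Counter

def odd_to_distinct(parts):
    """Odd parts -> distinct parts: write each odd part's multiplicity
    as a sum of distinct powers of two and multiply through."""
    out = []
    for odd, mult in Counter(parts).items():
        power = 1
        while mult:
            if mult & 1:              # this power of two appears
                out.append(power * odd)
            mult >>= 1
            power <<= 1
    return sorted(out, reverse=True)

def distinct_to_odd(parts):
    """Inverse map: split each part as 2^i * (odd) and collect
    the multiplicities of each odd number."""
    mult = Counter()
    for p in parts:
        power = 1
        while p % 2 == 0:
            p //= 2
            power *= 2
        mult[p] += power
    return sorted(mult.elements(), reverse=True)

print(odd_to_distinct([3, 3, 3, 1, 1]))  # -> [6, 3, 2]
print(distinct_to_odd([6, 3, 2]))        # -> [3, 3, 3, 1, 1]
```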

## A non-intuitive(?) fact

This seems counter-intuitive, at least to me, but if you can find an intuitive explanation for it, please comment!

Fact: Let G be a cycle (connected graph where every vertex has degree 2: you know what I mean) of length n (i.e., n vertices and n edges). Start a random walk on G from any node u. Then the probability that v is the last node visited (i.e., the random walk hits all other nodes before hitting v) is the **same for every v** other than u!

[Such a property is obviously true for the complete graph, and apparently it’s true only for cycles and complete graphs.]

I do not know a probabilist’s proof; what I could think of is a distinctly ComputerSciencey (Dynamic Programming / recurrence relation) proof.
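In lieu of a proof, a Monte Carlo check is easy to set up. This sketch (all names mine) simulates random walks on a 6-cycle and confirms that each vertex other than the start is last with probability close to 1/(n−1) = 1/5:

```python
import random

def last_visited(n, start, rng):
    """Run one random walk on the n-cycle from `start` until every
    vertex has been seen; return the last new vertex visited."""
    seen, cur, last = {start}, start, start
    while len(seen) < n:
        cur = (cur + rng.choice((-1, 1))) % n
        if cur not in seen:
            seen.add(cur)
            last = cur
    return last

rng = random.Random(12345)
n, trials = 6, 20000
counts = {v: 0 for v in range(1, n)}
for _ in range(trials):
    counts[last_visited(n, 0, rng)] += 1

print({v: round(counts[v] / trials, 3) for v in range(1, n)})
# each value should be near 1/(n - 1) = 0.2
```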

## African Institute of Mathematical Sciences

Neil Turok is a mathematician/physicist at Cambridge University.

He has founded an institute in Africa from which students have graduated to go on to great places, including for instance a student who is now pursuing a PhD at Cambridge under Stephen Hawking. They now want to start (a few) similar institutions in other sciences, and hope to replicate these successes.

Other notable highlights from the talk:

Maps from World Mapper. Maps in which countries’ areas correspond to a certain characteristic. Along with Gapminder, wonderfully illustrative visualisation tools.

“All known physics” equation: