The Lumber Room

"Consign them to dust and damp by way of preserving them"

Archive for the ‘mathematics’ Category

Multiple ways of understanding


In his wonderful On Proof and Progress in Mathematics, Thurston begins his second section “How do people understand mathematics?” as follows:

This is a very hard question. Understanding is an individual and internal matter that is hard to be fully aware of, hard to understand and often hard to communicate. We can only touch on it lightly here.

People have very different ways of understanding particular pieces of mathematics. To illustrate this, it is best to take an example that practicing mathematicians understand in multiple ways, but that we see our students struggling with. The derivative of a function fits well. The derivative can be thought of as:

  1. Infinitesimal: the ratio of the infinitesimal change in the value of a function to the infinitesimal change in a function.
  2. Symbolic: the derivative of x^n is nx^{n-1}, the derivative of \sin(x) is \cos(x), the derivative of f \circ g is f' \circ g * g', etc.
  3. Logical: f'(x) = d if and only if for every \epsilon there is a \delta such that when 0 < |\Delta x| < \delta,

    \left|\frac{f(x+\Delta x) - f(x)}{\Delta x} - d\right| < \epsilon.

  4. Geometric: the derivative is the slope of a line tangent to the graph of the function, if the graph has a tangent.
  5. Rate: the instantaneous speed of f(t), when t is time.
  6. Approximation: The derivative of a function is the best linear approximation to the function near a point.
  7. Microscopic: The derivative of a function is the limit of what you get by looking at it under a microscope of higher and higher power.

This is a list of different ways of thinking about or conceiving of the derivative, rather than a list of different logical definitions. Unless great efforts are made to maintain the tone and flavor of the original human insights, the differences start to evaporate as soon as the mental concepts are translated into precise, formal and explicit definitions.

I can remember absorbing each of these concepts as something new and interesting, and spending a good deal of mental time and effort digesting and practicing with each, reconciling it with the others. I also remember coming back to revisit these different concepts later with added meaning and understanding.

The list continues; there is no reason for it ever to stop. A sample entry further down the list may help illustrate this. We may think we know all there is to say about a certain subject, but new insights are around the corner. Furthermore, one person’s clear mental image is another person’s intimidation:

  37. The derivative of a real-valued function f in a domain D is the Lagrangian section of the cotangent bundle T^{\ast}(D) that gives the connection form for the unique flat connection on the trivial \mathbf{R}-bundle D \times \mathbf{R} for which the graph of f is parallel.

These differences are not just a curiosity. Human thinking and understanding do not work on a single track, like a computer with a single central processing unit. Our brains and minds seem to be organized into a variety of separate, powerful facilities. These facilities work together loosely, “talking” to each other at high levels rather than at low levels of organization.

The list has been extended at the MathOverflow question Different ways of thinking about the derivative, where you can find even more ways of thinking about the derivative. Two of the interesting pointers there are to this discussion on the n-Category Café, and to the book Calculus Unlimited by Marsden and Weinstein, which does calculus using a “method of exhaustion” that does not involve limits. (Its definition of the derivative is also mentioned at the earlier link, as the notion of the derivative closest to the idea of Eudoxus and Archimedes: “the tangent line touches the curve, and in the space between the line and the curve, no other straight line can be interposed”, or “the line which touches the curve only once”. This counts as another important way of thinking about the derivative.)
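As a small concrete illustration of a few of these perspectives (a sketch of my own, not from any of the sources above), here is how the logical/limit, symbolic, and approximation views of the derivative of f(x) = x^3 at x = 2 look side by side in Python:

def f(x):
    return x ** 3

x0 = 2.0  # we examine f at x0 = 2, where f'(2) = 12

# Logical/limit view: difference quotients approach 12 as dx -> 0.
for dx in (1e-1, 1e-3, 1e-6):
    print((f(x0 + dx) - f(x0)) / dx)  # 12.61, 12.006001, 12.000006...

# Symbolic view: the derivative of x^n is n*x^(n-1).
print(3 * x0 ** 2)  # 12.0

# Approximation view: the error of the linear approximation
# x -> f(x0) + 12*(x - x0) shrinks like h^2, i.e. faster than h.
for h in (1e-1, 1e-2, 1e-3):
    print(f(x0 + h) - (f(x0) + 12 * h))  # ~6.1e-2, ~6.0e-4, ~6.0e-6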

It has also been extended by Terence Tao, who in an October 2009 blog post on Grothendieck’s definition of a group gave several ways of thinking about a group:

In his wonderful article “On proof and progress in mathematics”, Bill Thurston describes (among many other topics) how one’s understanding of a given concept in mathematics (such as that of the derivative) can be vastly enriched by viewing it simultaneously from many subtly different perspectives; in the case of the derivative, he gives seven standard such perspectives (infinitesimal, symbolic, logical, geometric, rate, approximation, microscopic) and then mentions a much later perspective in the sequence (as describing a flat connection for a graph).

One can of course do something similar for many other fundamental notions in mathematics. For instance, the notion of a group {G} can be thought of in a number of (closely related) ways, such as the following:

  1. Motivating examples: A group is an abstraction of the operations of addition/subtraction or multiplication/division in arithmetic or linear algebra, or of composition/inversion of transformations.
  2. Universal algebraic: A group is a set {G} with an identity element {e}, a unary inverse operation {\cdot^{-1}: G \rightarrow G}, and a binary multiplication operation {\cdot: G \times G \rightarrow G} obeying the relations (or axioms) {e \cdot x = x \cdot e = x}, {x \cdot x^{-1} = x^{-1} \cdot x = e}, {(x \cdot y) \cdot z = x \cdot (y \cdot z)} for all {x,y,z \in G}.
  3. Symmetric: A group is all the ways in which one can transform a space {V} to itself while preserving some object or structure {O} on this space.
  4. Representation theoretic: A group is identifiable with a collection of transformations on a space {V} which is closed under composition and inverse, and contains the identity transformation.
  5. Presentation theoretic: A group can be generated by a collection of generators subject to some number of relations.
  6. Topological: A group is the fundamental group {\pi_1(X)} of a connected topological space {X}.
  7. Dynamic: A group represents the passage of time (or of some other variable(s) of motion or action) on a (reversible) dynamical system.
  8. Category theoretic: A group is a category with one object, in which all morphisms have inverses.
  9. Quantum: A group is the classical limit {q \rightarrow 1} of a quantum group.

etc.
One can view a large part of group theory (and related subjects, such as representation theory) as exploring the interconnections between various of these perspectives. As one’s understanding of the subject matures, many of these formerly distinct perspectives slowly merge into a single unified perspective.

From a recent talk by Ezra Getzler, I learned a more sophisticated perspective on a group, somewhat analogous to Thurston’s example of a sophisticated perspective on a derivative (and coincidentally, flat connections play a central role in both):

  1. Sheaf theoretic: A group is identifiable with a (set-valued) sheaf on the category of simplicial complexes such that the morphisms associated to collapses of {d}-simplices are bijective for {d > 1} (and merely surjective for {d \leq 1}).

The rest of the post elaborates on this understanding.

Again, in a Google Buzz post on Jun 9, 2010, Tao wrote the following:

Bill Thurston’s “On proof and progress in mathematics” has many nice observations about the nature and practice of modern mathematics. One of them is that for any fundamental concept in mathematics, there is usually no “best” way to define or think about that concept, but instead there is often a family of interrelated and overlapping, but distinct, perspectives on that concept, each of which conveys its own useful intuition and generalisations; often, the combination of all of these perspectives is far greater than the sum of the parts. Thurston illustrates this with the concept of differentiation, for which he lists seven basic perspectives and one more advanced perspective, and hints at dozens more.

But even the most basic of mathematical concepts admit this multiplicity of interpretation and perspective. Consider for instance the operation of addition, which takes two numbers x and y and forms their sum x+y. There are many ways to interpret this operation:

1. (Disjoint union) x+y is the “size” of the disjoint union X ∪ Y of an object X of size x, and an object Y of size y. (Size is, of course, another concept with many different interpretations: cardinality, volume, mass, length, measure, etc.)

2. (Concatenation) x+y is the size of the object formed by concatenating an object X of size x with an object Y of size y (or by appending Y to X).

3. (Iteration) x+y is formed from x by incrementing it y times.

4. (Superposition) x+y is the “strength” of the superposition of a force (or field, intensity, etc.) of strength x with a force of strength y.

5. (Translation action) x+y is the translation of x by y.

5a. (Translation representation) x+y is the amount of translation or displacement incurred by composing a translation by x with a translation by y.

6. (Algebraic) + is a binary operation on numbers that gives them the structure of an additive group (or monoid), with 0 being the additive identity and 1 being the generator of the natural numbers or integers.

7. (Logical) +, when combined with the other basic arithmetic operations, is part of a family of structures on numbers that obey a set of axioms such as the Peano axioms.

8. (Algorithmic) x+y is the output of the long addition algorithm that takes x and y as input.

9. etc.

These perspectives are all closely related to each other; this is why we are willing to give them all the common name of “addition”, and the common symbol of “+”. Nevertheless there are some slight differences between each perspective. For instance, addition of cardinals is based on perspective 1, while addition of ordinals is based on perspective 2. This distinction becomes apparent once one considers infinite cardinals or ordinals: for instance, in cardinal arithmetic, aleph_0 = 1+ aleph_0 = aleph_0 + 1 = aleph_0 + aleph_0, whereas in ordinal arithmetic, omega = 1+omega < omega+1 < omega + omega.

Transitioning from one perspective to another is often a necessary first conceptual step when the time comes to generalise the concept. In childhood, addition of natural numbers is usually taught initially using perspective 1 or 3, but to generalise to addition of integers, one must first switch to a perspective such as 4, 5, or 5a; similar conceptual shifts are needed when one then turns to addition of rationals, real numbers, complex numbers, residue classes, functions, matrices, elements of abstract additive groups, nonstandard number systems, etc. Eventually, one internalises all of the perspectives (and their inter-relationships) simultaneously, and then becomes comfortable with the addition concept in a very broad set of contexts; but it can be more of a struggle to do so when one has grasped only a subset of the possible ways of thinking about addition.

In many situations, the various perspectives of a concept are either completely equivalent to each other, or close enough to equivalent that one can safely “abuse notation” by identifying them together. But occasionally, one of the equivalences breaks down, and then it becomes useful to maintain a careful distinction between two perspectives that are almost, but not quite, compatible. Consider for instance the following ways of interpreting the operation of exponentiation x^y of two numbers x, y:

1. (Combinatorial) x^y is the number of ways to make y independent choices, each of which chooses from x alternatives.

2. (Set theoretic) x^y is the size of the space of functions from a set Y of size y to a set X of size x.

3. (Geometric) x^y is the volume (or measure) of a y-dimensional cube (or hypercube) whose sidelength is x.

4. (Iteration) x^y is the result of starting at 1 and multiplying by x, y times.

5. (Homomorphism) y → x^y is the continuous homomorphism from the domain of y (with the additive group structure) to the range of x^y (with the multiplicative structure) that maps 1 to x.

6. (Algebraic) ^ is the operation that obeys the laws of exponentiation in algebra.

7. (Log-exponential) x^y is exp( y log x ). (This raises the question of how to interpret exp and log, and again there are multiple perspectives for each…)

8. (Complex-analytic) Complex exponentiation is the analytic continuation of real exponentiation.

9. (Computational) x^y is whatever my calculator or computer outputs when it is asked to evaluate x^y.

10. etc.

Again, these interpretations are usually compatible with each other, but there are some key exceptions. For instance, the quantity 0^0 would be equal to one using some of these interpretations, but would be undefined in others. The quantity 4^{1/2} would be equal to 2 in some interpretations, be undefined in others, and be equal to the multivalued expression ±2 (or depend on a choice of branch) in yet further interpretations. And quantities such as i^i are sufficiently problematic that it is usually best to try to avoid exponentiation of one arbitrary complex number by another arbitrary complex number unless one knows exactly what one is doing. In such situations, it is best not to think about a single, one-size-fits-all notion of a concept such as exponentiation, but instead be aware of the context one is in (e.g. is one raising a complex number to an integer power? A positive real to a complex power? A complex number to a fractional power? etc.) and to know which interpretations are most natural for that context, as this will help protect against making errors when manipulating expressions involving exponentiation.

It is also quite instructive to build one’s own list of interpretations for various basic concepts, analogously to those above (or Thurston’s example). Some good examples of concepts to try this on include “multiplication”, “integration”, “function”, “measure”, “solution”, “space”, “size”, “distance”, “curvature”, “number”, “convergence”, “probability” or “smoothness”. See also my blog post below in which the concept of a “group” is considered.
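(To see concretely how a programming language is forced to choose among these interpretations of exponentiation, here is a small Python sketch of my own; the conventions shown are Python’s, not anything from Tao’s post:)

import math

print(0 ** 0)              # 1: integer ** follows the combinatorial/iteration view
print(math.pow(0.0, 0.0))  # 1.0: C's pow() makes the same choice
print(4 ** 0.5)            # 2.0: the positive square root, not the multivalued +-2

try:
    # Python 2 refuses to raise a negative real to a fractional power, while
    # Python 3 switches interpretation and returns the principal *complex*
    # cube root (1 + 1.732...j), not the real cube root -2.
    print((-8.0) ** (1.0 / 3))
except ValueError as e:
    print(e)

# i^i: the log-exponential interpretation, with the principal branch of log.
print((1j) ** (1j))            # (0.2078795763507...+0j)
print(math.exp(-math.pi / 2))  # 0.2078795763507..., since i^i = exp(-pi/2)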

I plan to collect more such “different ways of thinking about the same (mathematical) thing” in this post, as I encounter them.

Written by S

Sat, 2016-03-26 at 10:05:09

Posted in mathematics, quotes

The same in every country


(TODO: Learn and elaborate more on their respective histories and goals.)

The formula
\frac{\pi}{4} = 1 - \frac13 + \frac15 - \frac17 + \frac19 - \frac1{11} + \dots
(reminded via this post), a special case at x=1 of
\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \dots,

was found by Leibniz in 1673, while he was trying to find the area (“quadrature”) of a circle; as prior work he had the ideas of Pascal on infinitesimal triangles, and those of Mercator on the area of the hyperbola y(1+x) = 1 with its infinite series for \log(1+x). This was Leibniz’s first big mathematical work, before his more general ideas on calculus.

Leibniz did not know that this series had already been discovered earlier in 1671 by the short-lived mathematician James Gregory in Scotland. Gregory too had encountered Mercator’s infinite series \log(1+x) = x - x^2/2 + x^3/3 + \dots, and was working on different goals: he was trying to invert logarithmic and trigonometric functions.

Neither of them knew that the series had already been found two centuries earlier by Mādhava (1340–1425) in India (as known through the quotations of Nīlakaṇṭha c. 1500), who worked in a completely different mathematical culture, with very different goals and practices. The logarithm function doesn’t seem to have been known there, let alone an infinite series for it, though a calculus of finite differences for interpolating trigonometric functions seems to have been ahead of Europe by centuries (starting all the way back with Āryabhaṭa c. 500, and stated more clearly by Bhāskara II in 1150). Using a different approach (based on the arc of a circle) and geometric series and sums of powers, Mādhava (or the mathematicians of the Kerala tradition) arrived at the same formula.

[The above is based on The Discovery of the Series Formula for π by Leibniz, Gregory and Nilakantha by Ranjan Roy (1990).]
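(A quick numerical aside of my own: a few lines of Python show just how slowly this series converges.)

# Partial sums of 1 - 1/3 + 1/5 - ... converge to pi/4 very slowly:
# even after 1000 terms we get only about 3 correct decimal places of pi.
total = 0.0
for i in range(1000):
    total += (-1) ** i / float(2 * i + 1)
print(4 * total)  # 3.14059..., versus pi = 3.14159...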

This startling universality of mathematics across different cultures is what David Mumford remarks on, in Why I am a Platonist:

As Littlewood said to Hardy, the Greek mathematicians spoke a language modern mathematicians can understand, they were not clever schoolboys but were “fellows of a different college”. They were working and thinking the same way as Hardy and Littlewood. There is nothing whatsoever that needs to be adjusted to compensate for their living in a different time and place, in a different culture, with a different language and education from us. We are all understanding the same abstract mathematical set of ideas and seeing the same relationships.

The same thought was also expressed by Mean Girls: math is “the same in every country”.

Written by S

Tue, 2016-03-15 at 13:53:32

Posted in history, mathematics

The generating function for Smirnov words (or: time until k consecutive results are the same)


1. Alphabet

Suppose we have an alphabet {\mathcal{A}} of size {m}. Its generating function (using the variable {z} to mark length) is simply {A(z) = mz}, as {\mathcal{A}} contains {m} elements of length {1} each.

2. Words

Let {\mathcal{W}} denote the class of all words over the alphabet {\mathcal{A}}. There are many ways to find the generating function {W(z)} for {\mathcal{W}}.

2.1.

We have

\displaystyle \mathcal{W} = \{\epsilon\} + \mathcal{A} + \mathcal{A}\mathcal{A} + \mathcal{A}\mathcal{A}\mathcal{A} + \dots

so its generating function is

\displaystyle \begin{aligned} W(z) &= 1 + A(z) + A(z)^2 + A(z)^3 + \dots \\ &= 1 + mz + (mz)^2 + (mz)^3 + \dots \\ &= \frac{1}{1-mz} \end{aligned}

2.2.

To put it differently, in the symbolic framework, we have {\mathcal{W} = \textsc{Seq}(\mathcal{A})}, so the generating function for {\mathcal{W}} is

\displaystyle W(z) = \frac{1}{1 - A(z)} = \frac{1}{1-mz}.

2.3.

We could have arrived at this with direct counting: the number of words of length {n} is {W_n = m^n} as there are {m} choices for each of the {n} letters, so the generating function is

\displaystyle W(z) = \sum_{n \ge 0}W_n z^n = \sum_{n \ge 0} m^n z^n = \frac{1}{1-mz}.

3. Smirnov words

Next, let {\mathcal{S}} denote the class of Smirnov words over the alphabet {\mathcal{A}}, defined as words in which no two consecutive letters are identical. (That is, words {w_1w_2 \dots w_n} in which {w_i \in \mathcal{A}} for all {i}, and {w_i \neq w_{i-1}} for any {1 < i \le n}.) Again, we can find the generating function for {\mathcal{S}} in different ways.

3.1.

For any word in {\mathcal{W}}, by “collapsing” all runs of each letter, we get a Smirnov word. To put it differently, any word in {\mathcal{W}} can be obtained from a Smirnov word {w} by “expanding” each letter {w_i} into a nonempty sequence of that letter. This observation (see Analytic Combinatorics, pp. 204–205) lets us relate the generating functions of {\mathcal{W}} and {\mathcal{S}} as

\displaystyle W(z) = S(\frac{z}{1-z})

which implicitly gives the generating function {S(z)}: we have

\displaystyle S(z) = W(\frac{z}{1+z}) = \frac{1}{1-m\frac{z}{1+z}} = \frac{1+z}{1 - (m-1)z}.

3.2.

Alternatively, consider in an arbitrary word the first occurrence of a pair of repeated letters. Either this doesn’t happen at all (the word is a Smirnov word), or else, if it happens at position {i} so that {w_i = w_{i+1}}, then the part of the word up to position {i} is a nonempty Smirnov word, the letter at position {i+1} is the same as the previous letter, and everything after {i+1} is an arbitrary word. This gives

\displaystyle \mathcal{W} = \mathcal{S} + (\mathcal{S} \setminus \{ \epsilon \}) \cdot \mathcal{Z} \cdot \mathcal{W}

or in terms of generating functions

\displaystyle W(z) = S(z) + (S(z) - 1)zW(z)

giving

\displaystyle S(z) = \frac{W(z) (1 + z)}{1 + zW(z)} = \frac{1 + z}{(1-mz)(1 + \frac{z}{1-mz})} = \frac{1+z}{1 - (m-1)z}

3.3.

A minor variant is to again pick an arbitrary word and consider its first pair of repeated letters, happening (if it does) at positions {i} and {i+1}, but this time consider the prefix up to {i -1}: either it is empty, or the pair of letters is different from the last letter of the prefix, giving us the decomposition

\displaystyle \mathcal{W} = \mathcal{S} + m\mathcal{Z}^2 \cdot \mathcal{W} + (\mathcal{S}\setminus \{ \epsilon \}) \cdot (m-1)\mathcal{Z}^2 \mathcal{W}

and corresponding generating function

\displaystyle W(z) = S(z) + mz^2W(z) + (S(z) - 1)(m-1)z^2W(z)

so

\displaystyle S(z) = \frac{W(z)(1-z^2)}{1 + (m-1)z^2W(z)} = \frac{1-z^2}{1 - mz + (m-1)z^2} = \frac{(1-z)(1+z)}{(1-z)(1 - (m-1)z)}

which is the same as before after we cancel the {(1-z)} factors.

3.4.

We could have arrived at this result with direct counting. For {n \ge 1}, for a Smirnov word of length {n}, we have {m} choices for the first letter, and for each of the other {(n-1)} letters, as they must not be the same as the previous letter, we have {(m-1)} choices. This gives the number of Smirnov words of length {n} as {m (m-1)^{n-1}} for {n \ge 1}, and so the generating function for Smirnov words is

\displaystyle S(z) = 1 + \sum_{n \ge 1} m (m-1)^{n-1} z^n = 1 + mz \sum_{n \ge 1} (m-1)^{n-1}z^{n-1} = 1 + \frac{mz}{1-(m-1)z}

again giving

\displaystyle S(z) = \frac{1 + z}{1 - (m-1)z}
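(As a quick sanity check of my own on this result: brute-force enumeration in Python agrees with the coefficients {m(m-1)^{n-1}} read off above.)

from itertools import product

def count_smirnov(m, n):
    """Count length-n words over an m-letter alphabet with no two
    adjacent letters equal, by brute-force enumeration."""
    return sum(1 for w in product(range(m), repeat=n)
               if all(w[i] != w[i - 1] for i in range(1, n)))

m = 3
for n in range(7):
    by_formula = 1 if n == 0 else m * (m - 1) ** (n - 1)
    # For m = 3 both columns read 1, 3, 6, 12, 24, 48, 96.
    print("%d: brute force %d, formula %d" % (n, count_smirnov(m, n), by_formula))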

4. Words with bounded runs

We can now generalize. Let {\mathcal{S}_k} denote the class of words in which no letter occurs more than {k} times consecutively. ({\mathcal{S} = \mathcal{S}_1}.) We can find the generating function for {\mathcal{S}_k}.

4.1.

To get a word in {\mathcal{S}_k}, we can take a Smirnov word and replace each letter with a nonempty sequence of up to {k} occurrences of that letter. This gives:

\displaystyle S_k(z) = S(z + z^2 + \dots + z^k) = S(z\frac{1-z^{k}}{1-z})

so

\displaystyle S_k(z) = \frac{1 + z\frac{1-z^{k}}{1-z}}{1 - (m-1)z\frac{1-z^{k}}{1-z}} = \frac{1 - z^{k+1}}{1 - mz + (m-1)z^{k+1}}.

4.2.

Pick any arbitrary word, and consider its first occurrence of a run of {k+1} identical letters. Either such a run does not exist (which means the word we picked is in {\mathcal{S}_k}), or it occurs right at the beginning ({m} possibilities, one for each letter in the alphabet), or, if it occurs starting at position {i > 1}, then the part of the word up to position {i-1} (the “prefix”) is a nonempty word in {\mathcal{S}_k}, positions {i} to {i+k} are {k+1} occurrences of any of the {m-1} letters other than the last letter of the prefix, and what follows is an arbitrary word. This gives

\displaystyle \mathcal{W} = \mathcal{S}_k + m\mathcal{Z}^{k+1} \cdot \mathcal{W} + (\mathcal{S}_k \setminus \{ \epsilon \}) \cdot (m-1)\mathcal{Z}^{k+1} \cdot \mathcal{W}

or in terms of generating functions

\displaystyle W(z) = S_k(z) + mz^{k+1}W(z) + (S_k(z) - 1)(m-1)z^{k+1}W(z)

so

\displaystyle W(z)(1 - z^{k+1}) = S_k(z) (1 + (m-1)z^{k+1} W(z))

giving

\displaystyle S_k(z) = \frac{W(z)(1-z^{k+1})}{1 + (m-1)z^{k+1}W(z)} = \frac{1-z^{k+1}}{1-mz + (m-1)z^{k+1}}

4.3.

Arriving at this via direct counting seems hard.
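(Direct counting may be hard, but a brute-force check is easy. The following sketch of mine expands {S_k(z)} as a power series, using the standard recurrence for the coefficients of a rational function, and compares the result with explicit enumeration.)

from itertools import groupby, product

def series_coeffs(num, den, terms):
    """First coefficients of the power series num(z)/den(z), where num and
    den are coefficient lists (constant term first) and den[0] == 1."""
    out = []
    for n in range(terms):
        c = num[n] if n < len(num) else 0
        c -= sum(den[i] * out[n - i]
                 for i in range(1, min(n, len(den) - 1) + 1))
        out.append(c)
    return out

def count_bounded_runs(m, k, n):
    """Count length-n words over an m-letter alphabet in which no letter
    occurs more than k times consecutively."""
    return sum(1 for w in product(range(m), repeat=n)
               if all(len(list(run)) <= k for _, run in groupby(w)))

m, k = 3, 2
num = [1] + [0] * k + [-1]               # 1 - z^(k+1)
den = [1, -m] + [0] * (k - 1) + [m - 1]  # 1 - m*z + (m-1)*z^(k+1)
coeffs = series_coeffs(num, den, 7)
for n in range(7):
    # The two columns agree: 1, 3, 9, 24, 66, ... for m = 3, k = 2.
    print("%d: series %d, brute force %d" % (n, coeffs[n], count_bounded_runs(m, k, n)))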

5. Words that stop at a long run

Now consider words in which we “stop” as soon as we see {k} consecutive identical letters. Let the class of such words be denoted {\mathcal{U}} (not writing {\mathcal{U}_k} to keep the notation simple). As before, we can find its generating function in multiple ways.

5.1.

We get any word in {\mathcal{U}} by either immediately seeing a run of length {k} and stopping, or by starting with a nonempty prefix in {\mathcal{S}_{k-1}}, and then stopping with a run of {k} identical letters different from the last letter of the prefix. Thus we have

\displaystyle \mathcal{U} = m \mathcal{Z}^k + (\mathcal{S}_{k-1} \setminus \{\epsilon\}) \cdot (m-1)\mathcal{Z}^k

and

\displaystyle U(z) = m z^k + (S_{k-1}(z) - 1) (m-1) z^k

which gives

\displaystyle U(z) = z^k(1 + (m-1)S_{k-1}(z)) = z^k\left(1+(m-1)\frac{1-z^k}{1-mz+(m-1)z^k}\right) = \frac{m(1-z)z^k}{1 - mz + (m-1)z^k}

5.2.

Alternatively, we can decompose any word by looking for its first run of {k} identical letters. Either it doesn’t occur at all (the word we picked is in {\mathcal{S}_{k-1}}), or the part of the word until the end of the run belongs to {\mathcal{U}} and the rest is an arbitrary word, so

\displaystyle \mathcal{W} = \mathcal{S}_{k-1} + \mathcal{U} \cdot \mathcal{W}

and

\displaystyle W(z) = S_{k-1}(z) + U(z) W(z)

so

\displaystyle U(z) = 1 - \frac{S_{k-1}(z)}{W(z)} = 1 - \frac{(1-z^k)(1-mz)}{1-mz + (m-1)z^k} = \frac{m(1-z)z^k}{1 - mz + (m-1)z^k}

6. Probability

Finally we arrive at the motivation: suppose we keep appending a random letter from the alphabet, until we encounter the same letter {k} times consecutively. What can we say about the length {X} of the word thus generated? As all sequences of letters are equally likely, the probability of seeing any particular string of length {n} is {\frac{1}{m^n}}. So in the above generating function {U(z) = \sum_{n} U_n z^n}, the probability of our word having length {n} is {U_n / m^n}, and the probability generating function {P(z)} is therefore {\sum_{n} U_n z^n / m^n}. This {P(z)} can be obtained by replacing {z} with {z/m} in the expression for {U(z)}: we have

\displaystyle P(z) = U(z/m) = \frac{(m-z)z^k}{m^k(1-z) + (m-1)z^k}

In principle, this probability generating function tells us everything about the distribution of the length of the word. For example, its expected length is

\displaystyle \mathop{E}[X] = P'(1) = \frac{m^k - 1}{m - 1}

(See this question on Quora for other powerful ways of finding this expected value directly.)

We can also find its variance, as

\displaystyle \mathop{Var}[X] = P''(1) + P'(1) - P'(1)^2 = \frac{m^{2k} - (2k-1)(m-1)m^k - m}{(m-1)^2}

This variance is really too large to be useful, so what we would really like is the shape of the distribution… to be continued.
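(In the meantime, here is a quick Monte Carlo check of mine on the mean and variance formulas above:)

import random

def length_until_run(m, k):
    """Length of a random word over an m-letter alphabet when we stop
    upon seeing the same letter k times consecutively."""
    last, run, length = None, 0, 0
    while run < k:
        letter = random.randrange(m)
        length += 1
        run = run + 1 if letter == last else 1
        last = letter
    return length

m, k, trials = 3, 3, 100000
samples = [length_until_run(m, k) for _ in range(trials)]
mean = sum(samples) / float(trials)
var = sum((s - mean) ** 2 for s in samples) / float(trials)
print(mean)  # close to (m^k - 1)/(m - 1) = 13 for m = 3, k = 3
print(var)   # close to the variance formula above, which gives 114 here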

Written by S

Sun, 2016-01-03 at 03:06:23

Posted in mathematics

Some playing with Python


A long time ago, Diophantus (sort of) discussed integer solutions to the equation

\displaystyle  x^2 + y^2 = z^2

(solutions to this equation are called Pythagorean triples).

Centuries later, in 1637, Fermat made a conjecture (now called Fermat’s Last Theorem, not because he uttered it in his dying breath, but because it was the last one to be proved — in ~1995) that

\displaystyle x^n + y^n = z^n

has no positive integer solutions for n \ge 3. In other words, his conjecture was that none of the following equations has a solution:

\displaystyle x^3 + y^3 = z^3

\displaystyle x^4 + y^4 = z^4

\displaystyle x^5 + y^5 = z^5

\displaystyle x^6 + y^6 = z^6

… and so on. An nth power cannot be partitioned into two nth powers.

About a century later, Euler proved the n = 3 case of Fermat’s conjecture, but generalized it in a different direction: he conjectured in 1769 that an nth power cannot be partitioned into fewer than n nth powers, namely

\displaystyle z^n = \sum_{i = 1}^k x_i^n

has no solutions with k < n. So his conjecture was that (among others) none of the following equations has a solution:

\displaystyle z^3 = a^3 + b^3

\displaystyle z^4 = a^4 + b^4 + c^4

\displaystyle z^5 = a^5 + b^5 + c^5 + d^5

\displaystyle z^6 = a^6 + b^6 + c^6 + d^6 + e^6

… and so on.

This conjecture stood for about two centuries, until it was abruptly found to be false by Lander and Parkin, who in 1966 simply did a direct search on the fastest (super)computer at the time, and found this counterexample:

\displaystyle 27^5 + 84^5 + 110^5 + 133^5 = 144^5

(It is still one of only three examples known, according to Wikipedia.)

Now, how might you find this solution on a computer today?

In his wonderful (as always) post at bit-player, Brian Hayes showed the following code:

import itertools as it

def four_fifths(n):
    '''Return smallest positive integers ((a,b,c,d),e) such that
       a^5 + b^5 + c^5 + d^5 = e^5; if no such tuple exists
       with e < n, return the string 'Failed'.'''
    fifths = [x**5 for x in range(n)]
    combos = it.combinations_with_replacement(range(1,n), 4)
    while True:
        try:
            cc = combos.next()
            cc_sum = sum([fifths[i] for i in cc])
            if cc_sum in fifths:
                return(cc, fifths.index(cc_sum))
        except StopIteration:
            return('Failed')

to which, if you add (say) print four_fifths(150) and run it, it returns the correct answer fairly quickly: in about 47 seconds on my laptop.

The if cc_sum in fifths: line inside the loop is an O(n) cost each time it’s run, so with a simple improvement to the code (using a set instead) and rewriting it a bit, we can write the following full program:

import itertools

def find_counterexample(n):
  fifth_powers = [x**5 for x in range(n)]
  fifth_powers_set = set(fifth_powers)
  for xs in itertools.combinations_with_replacement(range(1, n), 4):
    xs_sum = sum([fifth_powers[i] for i in xs])
    if xs_sum in fifth_powers_set:
      return (xs, fifth_powers.index(xs_sum))
  return 'Failed'

print find_counterexample(150)

which finishes in about 8.5 seconds.

Great!

But there’s something unsatisfying about this solution: it assumes there’s a solution with all four numbers on the LHS less than 150. After all, changing the function invocation to find_counterexample(145) even makes it run a second faster, but how could we know to do that without already knowing the solution? Besides, we don’t have a fixed 8- or 10-second budget; what we’d really like is a program that keeps searching till it finds a solution or we abort it (or it runs out of memory or something), with no other fixed termination condition.

The above program used the given “n” as an upper bound to generate the combinations of 4 numbers; is there a way to generate all combinations when we don’t know an upper bound on them?

Yes! One of the things I learned from Knuth volume 4 is that if you write down each combination in descending order, and order the combinations lexicographically, then the combinations for each upper bound form a prefix of those for the next bigger one; i.e., for any upper bound, all the combinations form a prefix of the same infinite list, which starts as follows (line breaks added for clarity):

1111,
2111, 2211, 2221, 2222,
3111, 3211, 3221, 3222, 3311, 3321, 3322, 3331, 3332, 3333,
4111, ...
     ... 9541, 9542, 9543, 9544, 9551, ... 9555, 9611, ...

There doesn’t seem to be a library function in Python to generate these though, so we can write our own. If we stare at the above list, we can figure out how to generate the next combination from a given one:

  1. Walk backwards from the end, till you reach the beginning or find an element that’s less than the previous one.
  2. Increase that element, set all the following elements to 1s, and continue.

We could write, say, the following code for it:

def all_combinations(r):
  xs = [1] * r
  while True:
    yield xs
    for i in range(r - 1, 0, -1):
      if xs[i] < xs[i - 1]:
        break
    else:
      i = 0
    xs[i] += 1
    xs[i + 1:] = [1] * (r - i - 1)

(The else block on a for loop is an interesting Python feature: it is executed if the loop wasn’t terminated with break.) We could even hard-code the r=4 case, as we’ll see later below.

For testing whether a given number is a fifth power, we can no longer simply look up a fixed precomputed set. We can do a binary search instead:

def is_fifth_power(n):
  assert n > 0
  lo = 0
  hi = n
  # Invariant: lo^5 < n <= hi^5
  while hi - lo > 1:
    mid = lo + (hi - lo) / 2
    if mid ** 5 < n:
      lo = mid
    else:
      hi = mid
  return hi ** 5 == n

but it turns out that this is slower than one based on looking up in a growing set (as below).

Putting everything together, we can write the following (very C-like) code:

largest_known_fifth_power = (0, 0)
known_fifth_powers = set()
def is_fifth_power(n):
  global largest_known_fifth_power
  while n > largest_known_fifth_power[0]:
    m = largest_known_fifth_power[1] + 1
    m5 = m ** 5
    largest_known_fifth_power = (m5, m)
    known_fifth_powers.add(m5)
  return n in known_fifth_powers

def fournums_with_replacement():
  (x0, x1, x2, x3) = (1, 1, 1, 1)
  while True:
    yield (x0, x1, x2, x3)
    if x3 < x2:
      x3 += 1
      continue
    x3 = 1
    if x2 < x1:
      x2 += 1
      continue
    x2 = 1
    if x1 < x0:
      x1 += 1
      continue
    x1 = 1
    x0 += 1
    continue

if __name__ == '__main__':
  tried = 0
  for get in fournums_with_replacement():
    tried += 1
    if (tried % 1000000 == 0):
      print tried, 'Trying:', get
    rhs = get[0]**5 + get[1]**5 + get[2]**5 + get[3]**5
    if is_fifth_power(rhs):
      print 'Found:', get, rhs
      break

which is both longer and slower (takes about 20 seconds) than the original program, but at least we have the satisfaction that it doesn’t depend on any externally known upper bound.

I originally started writing this post because I wanted to describe some experiments I did with profiling, but it’s late and I’m sleepy so I’ll just mention it.

python -m cProfile euler_conjecture.py

will print relevant output in the terminal:

         26916504 function calls in 26.991 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1   18.555   18.555   26.991   26.991 euler_conjecture.py:1(<module>)
 13458164    4.145    0.000    4.145    0.000 euler_conjecture.py:12(fournums_with_replacement)
 13458163    4.292    0.000    4.292    0.000 euler_conjecture.py:3(is_fifth_power)
      175    0.000    0.000    0.000    0.000 {method 'add' of 'set' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

Another way to view the same thing is to write the profile output to a file and read it with cprofilev:

python -m cProfile -o euler_profile.out euler_conjecture.py
cprofilev euler_profile.out

and visit http://localhost:4000 to view it.

Of course, simply translating this code to C++ makes it run much faster:

#include <array>
#include <iostream>
#include <map>
#include <utility>

typedef long long Int;
constexpr Int fifth_power(Int x) { return x * x * x * x * x; }

std::map<Int, int> known_fifth_powers = {{0, 0}};
bool is_fifth_power(Int n) {
  while (n > known_fifth_powers.rbegin()->first) {
    int m = known_fifth_powers.rbegin()->second  + 1;
    known_fifth_powers[fifth_power(m)] = m;
  }
  return known_fifth_powers.count(n);
}

std::array<Int, 4> four_nums() {
  static std::array<Int, 4> x = {1, 1, 1, 0};
  int i = 3;
  while (i > 0 && x[i] == x[i - 1]) --i;
  x[i] += 1;
  while (++i < 4) x[i] = 1;
  return x;
}

std::ostream& operator<<(std::ostream& os, std::array<Int, 4> x) {
  os << "(" << x[0] << ", " << x[1] << ", " << x[2] << ", " << x[3] << ")";
  return os;
}

int main() {
  while (true) {
    std::array<Int, 4> get = four_nums();
    Int rhs = fifth_power(get[0]) + fifth_power(get[1]) + fifth_power(get[2]) + fifth_power(get[3]);
    if (is_fifth_power(rhs)) {
      std::cout << "Found: " << get << " " << known_fifth_powers[rhs] << std::endl;
      break;
    }
  }
}

and


clang++ -std=c++11 euler_conjecture.cc && time ./a.out

runs in 2.43s, or 0.36s if compiled with -O2.

But I don’t have a satisfactory answer for how to make our 20-second Python program as fast as the 8.5-second known-upper-bound version.


Edit [2015-05-08]: I wrote some benchmarking code to compare all the different “combination” functions.

import itertools

# Copied from the Python documentation
def itertools_equivalent(iterable, r):
    pool = tuple(iterable)
    n = len(pool)
    if not n and r:
        return
    indices = [0] * r
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield tuple(pool[i] for i in indices)

# Above function, specialized to first argument being range(1, n)
def itertools_equivalent_specialized(n, r):
  indices = [1] * r
  yield indices
  while True:
    for i in reversed(range(r)):
      if indices[i] != n - 1:
        break
    else:
      return
    indices[i:] = [indices[i] + 1] * (r - i)
    yield indices

# Function to generate all combinations of 4 elements
def all_combinations_pythonic(r):
  xs = [1] * r
  while True:
    yield xs
    for i in range(r - 1, 0, -1):
      if xs[i] < xs[i - 1]:
        break
    else:
      i = 0
    xs[i] += 1
    xs[i + 1:] = [1] * (r - i - 1)

# Above function, written in a more explicit C-like way
def all_combinations_clike(r):
  xs = [1] * r
  while True:
    yield xs
    i = r - 1
    while i > 0 and xs[i] == xs[i - 1]:
      i -= 1
    xs[i] += 1
    while i < r - 1:
      i += 1
      xs[i] = 1

# Above two functions, specialized to r = 4, using tuple over list.
def fournums():
  (x0, x1, x2, x3) = (1, 1, 1, 1)
  while True:
    yield (x0, x1, x2, x3)
    if x3 < x2:
      x3 += 1
      continue
    x3 = 1
    if x2 < x1:
      x2 += 1
      continue
    x2 = 1
    if x1 < x0:
      x1 += 1
      continue
    x1 = 1
    x0 += 1
    continue

# Benchmarks for all functions defined above (and the library function)
def benchmark_itertools(n):
  for xs in itertools.combinations_with_replacement(range(1, n), 4):
    if xs[0] >= n:
      break
def benchmark_itertools_try(n):
  combinations = itertools.combinations_with_replacement(range(1, n), 4)
  while True:
    try:
      xs = combinations.next()
      if xs[0] >= n:
        break
    except StopIteration:
      return
def benchmark_itertools_equivalent(n):
  for xs in itertools_equivalent(range(1, n), 4):
    if xs[0] >= n:
      break
def benchmark_itertools_equivalent_specialized(n):
  for xs in itertools_equivalent_specialized(n, 4):
    if xs[0] >= n:
      break
def benchmark_all_combinations_pythonic(n):
  for xs in all_combinations_pythonic(4):
    if xs[0] >= n:
      break
def benchmark_all_combinations_clike(n):
  for xs in all_combinations_clike(4):
    if xs[0] >= n:
      break
def benchmark_fournums(n):
  for xs in fournums():
    if xs[0] >= n:
      break

if __name__ == '__main__':
  benchmark_itertools(150)
  benchmark_itertools_try(150)
  benchmark_itertools_equivalent(150)
  benchmark_itertools_equivalent_specialized(150)
  benchmark_all_combinations_pythonic(150)
  benchmark_all_combinations_clike(150)
  benchmark_fournums(150)

As you can see, each benchmarking function uses the same termination check (xs[0] >= n): it is what causes the unbounded generators to stop, and it never triggers for the bounded ones (which simply run to exhaustion).
When run with

python -m cProfile benchmark_combinations.py

the results include:

    2.817 benchmark_combinations.py:80(benchmark_itertools)
    8.583 benchmark_combinations.py:84(benchmark_itertools_try)
  126.980 benchmark_combinations.py:93(benchmark_itertools_equivalent)
   46.635 benchmark_combinations.py:97(benchmark_itertools_equivalent_specialized)
   44.032 benchmark_combinations.py:101(benchmark_all_combinations_pythonic)
   18.049 benchmark_combinations.py:105(benchmark_all_combinations_clike)
   10.923 benchmark_combinations.py:109(benchmark_fournums)

Lessons:

  • Calling itertools.combinations_with_replacement is by far the fastest, taking about 2.8 seconds. It turns out that it’s written in C, so this would be hard to beat. (Still, driving it by hand with .next() inside a try block is seriously bad.)
  • The “equivalent” Python code from the itertools documentation (benchmark_itertools_equivalent) is about 45x slower (127 seconds).
  • It gets considerably faster (about 47 seconds) when specialized to numbers from range(1, n).
  • Simply generating all combinations without an upper bound is actually faster.
  • It can be made even faster by writing it in a more C-like way.
  • The tuples version with the loop unrolled manually is rather fast when seen in this light, less than 4x slower than the library version.

Written by S

Sun, 2015-02-08 at 00:03:38

Colliding balls approximate pi


Found via G+, a new physical experiment that approximates \pi, like Buffon’s needle problem: The Pi Machine.

Roughly, the amazing discovery of Gregory Galperin is this: when a ball of mass M collides with a ball of mass m, propelling it towards a wall, the number of collisions (assuming standard physics idealisms) is \lfloor \pi\sqrt{M/m} \rfloor, so by taking M/m = 10^{2n}, we can get the first n+1 digits of \pi. Note that this number of collisions is an entirely deterministic quantity; there’s no probability involved!

Here’s a video demonstrating the fact for M/m = 100 (the blue ball is the heavier one):

The NYT post says how this discovery came about:

Dr. Galperin’s approach was also geometric but very different (using an unfolding geodesic), building on prior related insights. Dr. Galperin, who studied under the well-known Russian mathematician Andrei Kolmogorov and had written extensively on ball collisions (some of it with Yakov Sinai), realized just before a talk in 1995 that a plot of the positions of a pair of colliding balls could be used to determine pi. (When he mentioned this insight in the talk, no one in the audience believed him.) The finding was ultimately published as “Playing Pool With Pi” in a 2003 issue of Regular and Chaotic Dynamics.

The paper, Playing Pool With π (The number π from a billiard point of view), is very readable. The post, despite having a “solution” section, contains essentially no explanation, but the two comments by Dave in its comments section explain it clearly. A reader also sent in a cleaned-up version of that explanation: here, by Benjamin Wearn, who teaches physics at Fieldston School.

Now someone needs to make a simulation / animation graphing the two balls in momentum phase space. :-)
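(In the meantime, here is a minimal Python sketch of mine that merely counts the collisions, using the standard elastic-collision formulas; no animation, and floating point is accurate enough for small mass ratios:)

def count_collisions(mass_ratio):
    """Collisions for a heavy ball (mass M = mass_ratio) pushed towards a
    light ball (mass 1) that sits between it and a wall; all collisions
    elastic, velocities measured positive away from the wall."""
    M, m = float(mass_ratio), 1.0
    V, v = -1.0, 0.0  # heavy ball moving towards the wall, light ball at rest
    count = 0
    while True:
        if V < v:  # the heavy ball is catching up with the light one
            V, v = (((M - m) * V + 2 * m * v) / (M + m),
                    ((m - M) * v + 2 * M * V) / (M + m))
            count += 1
        elif v < 0:  # the light ball reaches the wall and bounces back
            v = -v
            count += 1
        else:  # both move away from the wall, light one no faster: done
            return count

print(count_collisions(100))      # 31
print(count_collisions(10 ** 4))  # 314
print(count_collisions(10 ** 6))  # 3141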

I’d done something similar a while ago, to illustrate The Orbit of the Moon around the Sun is Convex!, here. I’d probably need to re-learn all that JavaScript stuff to make one for this. Leaving this post here as a placeholder.

Or maybe someone has done it already?

Written by S

Mon, 2014-06-23 at 23:03:18

Posted in mathematics, unfinished

Prefatory apprehension


Robert Recorde’s 1557 book is noted for being the first to introduce the equals sign =, and is titled:

The Whetstone of Witte: whiche is the seconde parte of Arithmeteke: containing the extraction of rootes; the cossike practise, with the rule of equation; and the workes of Surde Nombers.

Its title page (see http://www.maa.org/publications/periodicals/convergence/mathematical-treasures-robert-recordes-whetstone-of-witte, see also the full book at https://archive.org/stream/TheWhetstoneOfWitte#page/n0/mode/2up) contains this verse:

Original spelling

Though many ſtones doe beare greate price,
The
whetſtone is for exerſice
As neadefull, and in woorke as ſtraunge:
Dulle thinges and harde it will ſo chaunge,
And make them ſharpe, to right good vſe:
All arteſmen knowe, thei can not chuſe,
But uſe his helpe: yet as men ſee,
Noe ſharpeneſſe ſemeth in it to bee.

The grounde of artes did brede this ſtone:
His vſe is greate, and moare then one.
Here if you lift your wittes to whette,
Moche ſharpeneſſe thereby ſhall you gette.
Dulle wittes hereby doe greately mende,
Sharpe wittes are fined to their fulle ende.
Now proue, and praiſe, as you doe finde,
And to your ſelf be not vnkinde.

Modern spelling

Though many stones do bear great price,
The whetstone is for exercise
As needful, and in work as strange:
Dull things and hard it will so change
And make them sharp, to right good use:
All artsmen know they cannot choose
But use his help; yet as men see,
No sharpness seemeth in it to be.

The ground of arts did breed this stone;
His use is great, and more than one.
Here if you lift your wits to whet,
Much sharpness thereby shall you get.
Dull wits hereby do greatly mend,
Sharp wits are fined to their full end.
Now prove and praise as you do find,
And to yourself be not unkind.

Apparently the full title contains a pun (see http://www.pballew.net/arithm17.html): “the cossike practise” in the title refers to algebra, as the Latin cosa apparently meaning “a thing” was used to stand for an unknown, abbreviated to cos — but the Latin word cos itself means a grindstone.

The author again reminds readers not to blame his book, at the end of his preface:

To the curiouſe ſcanner.

If you ought finde, as ſome men maie,
That you can mende, I ſhall you praie,
To take ſome paine ſo grace maie ſende,
This worke to growe to perfecte ende.

But if you mende not that you blame,
I winne the praiſe, and you the ſhame.
Therfore be wiſe, and learne before,
Sith ſlaunder hurtes it ſelf moſte ſore.

Authors are either anxious about how their book is received, or make sure to be pointedly uncaring.

Sir Arthur Conan Doyle, in a mostly forgettable volume of poetry (Songs of the Road, 1911), begins:

If it were not for the hillocks
   You’d think little of the hills;
The rivers would seem tiny
   If it were not for the rills.
If you never saw the brushwood
   You would under-rate the trees;
And so you see the purpose
   Of such little rhymes as these.

Kālidāsa of course begins his Raghuvaṃśa with a grand disclaimer:

kva sūryaprabhavo vaṃśaḥ kva cālpaviṣayā matiḥ /
titīrṣur dustaram mohād uḍupenāsmi sāgaram // Ragh_1.2 //

mandaḥ kaviyaśaḥ prārthī gamiṣyāmy upahāsyatām /
prāṃśulabhye phale lobhād udbāhur iva vāmanaḥ // Ragh_1.3 //

atha vā kṛtavāgdvāre vaṃśe ‘smin pūrvasūribhiḥ /
maṇau vajrasamutkīrṇe sūtrasyevāsti me gatiḥ // Ragh_1.4 //

But the most nonchalant I’ve seen, thanks to Dr. Ganesh, is this gīti by Śrīkṛṣṇa Brahmatantra Yatīndra of the Parakāla Maṭha, Mysore:

nindatu vā nandatu vā
mandamanīṣā niśamya kṛtim etām
harṣaṃ vā marṣaṃ vā
sarṣapamātram api naiva vindema

Screw you guys. :-)

Written by S

Wed, 2014-05-28 at 23:56:11

Posted in history, mathematics

Big O() notation: a couple of sources


This post contains, just for future reference, a couple of primary sources relevant to the O (“Big O”) notation:

  1. Some introductory words from Asymptotic Methods in Analysis by de Bruijn
  2. A letter from Donald Knuth on an approach to teaching calculus using this notation.


Written by S

Thu, 2014-03-13 at 16:33:20
