Posts Tagged ‘maths’
Eugene Curtain and Max Washauer
If you came here because you were reading Peter Winkler’s “7 Puzzles You Think You Must Not Have Heard Correctly”, the names are supposed to be Eugene Curtin and Max Warshauer, and the paper is called “The locker puzzle”, published in The Mathematical Intelligencer, Volume 28, Number 1 (March 2006), pages 28–31.
[If not, you should read the amazing “7 Puzzles You Think You Must Not Have Heard Correctly”, spend several days trying the first problem, read the brilliant solution, and then come back here if you’re interested in learning why no other solution can do better.]
The paper is available here if your institution has access. If not, here’s a sketch of the proof that the strategy cannot be improved upon. [Update 2010-01-06: Oliver Nash has a post about the puzzle, explaining both the original solution and the proof of optimality, here. Just the original solution is also worked out by David MacKay here.]
First, let us modify the rules slightly so that each prisoner must continue looking in boxes until he finds the box containing his name. The prisoners win if no prisoner opens more than 50 (i.e., n/2) boxes. This change obviously makes no difference to the outcome. Let’s call this (modified) game Game 1.
A different game involves all the prisoners being in the room at the same time, and is played as follows. The first prisoner opens boxes until he finds his name (i.e., the number “1”). Then, the lowest-numbered prisoner whose name hasn’t been revealed starts opening boxes until he finds his name. Then the next lowest-numbered whose name hasn’t been revealed opens boxes, and so on. The prisoners win if no one opens more than 50 boxes. Call this Game 2.
Let’s say we observe the prisoners as they play Game 2, and record the order in which boxes were revealed. This completely specifies what happened. For example, (with 10 prisoners instead of 100) if we record the list 2,6,1,4,9,7,10,8,3,5, we know that the first prisoner revealed boxes containing 2, 6, 1, then the third (lowest unrevealed) prisoner opened boxes with 4,9,7,10,8,3, then prisoner 5 opened 5, and they lost because the third prisoner opened 6 > 5 boxes.
- Prove: No matter what strategy the prisoners follow, each permutation has the same probability (1/n!) of being the list recorded.
- Prove: “The classical records-to-cycles bijection”. It sends 2,6,1,4,9,7,10,8,3,5 to (2 6 1)(4 9 7 10 8 3)(5), for example.
- So the probability of the prisoners winning Game 2 (no matter what strategy they follow) is exactly the probability that a random permutation has no cycle of length greater than n/2.
- Prove: Any strategy for Game 1 corresponds to a strategy for Game 2 that wins at least as often. (Trivial: the only change is that you don't have to open boxes you've already seen opened, so each prisoner opens no more boxes than he would in Game 1.)
This proves that no Game 1 strategy can win with probability greater than the probability that a random permutation has no cycle longer than n/2, which is exactly the probability the pointer-chasing strategy achieves. So the pointer-chasing strategy is optimal for Game 1.
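[To make this concrete, here is a quick simulation sketch in Python — my addition, not from the paper. It demonstrates the records-to-cycles bijection on the example above, and checks the pointer-chasing strategy's empirical win rate against the exact probability that a random permutation of 100 has no cycle longer than 50.]

```python
import random
from fractions import Fraction

def records_to_cycles(record):
    """The records-to-cycles bijection: cut the recorded list just after
    each prisoner finds his own name; each segment is one cycle."""
    cycles, revealed, i = [], set(), 0
    while i < len(record):
        prisoner = min(set(range(1, len(record) + 1)) - revealed)  # lowest unrevealed
        j = record.index(prisoner, i)      # he opens boxes until finding his name
        cycles.append(tuple(record[i:j + 1]))
        revealed.update(record[i:j + 1])
        i = j + 1
    return cycles

print(records_to_cycles([2, 6, 1, 4, 9, 7, 10, 8, 3, 5]))
# [(2, 6, 1), (4, 9, 7, 10, 8, 3), (5,)]

def pointer_chasing_wins(n):
    """Game 1 with the pointer-chasing strategy: prisoner i opens box i, then
    the box named by its contents, etc.; wins iff every cycle has length <= n/2."""
    boxes = list(range(n))
    random.shuffle(boxes)
    for start in range(n):
        current, opened = start, 0
        while True:
            current = boxes[current]
            opened += 1
            if current == start:
                break                      # found his own name
            if opened >= n // 2:
                return False               # this prisoner's cycle is too long
    return True

def exact_win_probability(n):
    # A cycle of length k > n/2 is unique if it exists, and a random
    # permutation has one with probability 1/k; so P(win) = 1 - sum of 1/k.
    return 1 - sum(Fraction(1, k) for k in range(n // 2 + 1, n + 1))

n, trials = 100, 20000
wins = sum(pointer_chasing_wins(n) for _ in range(trials))
print(wins / trials, float(exact_win_probability(n)))  # both come out near 0.3118
```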
Here’s the puzzle as it was originally considered, still open: suppose there are n prisoners and 2n boxes, half of them empty. Each prisoner may examine n boxes. The pointer-chasing strategy doesn’t work, as empty boxes point nowhere. Does the probability of winning go to 0 as n→∞?
“Every good theorem must have a good counterexample”
Abhyankar[1] attributes the quote to Severi.
[1]: Historical Ramblings in Algebraic Geometry and Related Algebra, Shreeram S. Abhyankar, The American Mathematical Monthly, Vol. 83, No. 6 (Jun. – Jul., 1976), pp. 409-448. Also available here, because it won a Lester R. Ford Award (“articles of expository excellence”) and also a Chauvenet Prize (“the highest award for mathematical expository writing”).
Abhyankar, after distinguishing between the flavours of “high-school algebra” (polynomials, power series), “college algebra” (rings, fields, ideals) and “university algebra” (functors, homological algebra) goes on to present his fundamental thesis (“obviously a partisan claim”):
The method of high-school algebra is powerful, beautiful and accessible. So let us not be overwhelmed by the groups-ring-fields or the functorial arrows of the other two algebras and thereby lose sight of the power of the explicit algorithmic processes given to us by Newton, Tschirnhausen, Kronecker, and Sylvester.
Perhaps for this reason, Dr. Z calls Abhyankar (“one of my great heroes”) “the modern prophet of high-school-algebra”.
Anyway, enough rambling. Back to “Every good theorem must have a good counterexample”. Discuss.
[Edited to add: The statement in its original context was referring to a phenomenon where a pleasing conjecture is found to have counterexamples, until it is resolved by realising that we must, say, count multiplicities the “right” way—the right way turning out to be whatever makes the conjecture true. Thus Bezout’s theorem, etc., and the quote just means, as he paraphrases, “don’t be deterred if your formula is presently invalid in some cases; it only means that you have not yet completely deciphered god’s mind”. On the other hand, what I (mis?)remembered was that one must know “counterexamples” to a theorem in the sense that one must know why the conclusion is not true if the hypotheses are weakened: thus one doesn’t really understand a theorem till one knows at least one “counterexample” (and at least two proofs).]
Mathematics and notation: the Hindu-Arabic numeral system
Quick: What is CCXXXVII × CCCXXIX?
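[In positional notation this is a one-liner; the snippet below, mine and purely for illustration, just converts and multiplies. The point, of course, is how much harder the computation is within the Roman system itself.]

```python
def roman_to_int(s):
    """Convert a Roman numeral to an integer, handling subtractive pairs like IX."""
    values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for ch, nxt in zip(s, list(s[1:]) + [None]):
        v = values[ch]
        total += -v if nxt is not None and values[nxt] > v else v
    return total

print(roman_to_int('CCXXXVII'), roman_to_int('CCCXXIX'))   # 237 329
print(roman_to_int('CCXXXVII') * roman_to_int('CCCXXIX'))  # 77973
```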
From page 15 of The Life of Pi by Jonathan Borwein:
The Indo-Arabic system came to Europe around 1000 CE. Resistance ranged from accountants who didn’t want their livelihood upset to clerics who saw the system as ‘diabolical,’ since they incorrectly assumed its origin was Islamic. European commerce resisted until the 18th century, and even in scientific circles usage was limited into the 17th century.
The prior difficulty of doing arithmetic is indicated by college placement advice given a wealthy German merchant in the 16th century: “If you only want him to be able to cope with addition and subtraction, then any French or German university will do. But if you are intent on your son going on to multiplication and division — assuming that he has sufficient gifts — then you will have to send him to Italy.” (George Ifrah, p. 577)
[The rest of the pages of the slides are also great and worth reading!]
Just to give some context of the time: The Hindu-Arabic system was introduced into Europe by Leonardo of Pisa (Fibonacci), an Italian, in his Liber Abaci, written in 1202. Gutenberg (in Germany) invented the printing press around 1450. In Italy, Tartaglia lived 1500-1557, Cardano 1501-1576, Sturm 1507-1589, Giordano Bruno 1548-1600, and Ludovico Ferrari 1522-1565. (And outside Italy, as we're talking about notation, Robert Recorde (1510-1558) in Wales, François Viète (1540-1603) in France, etc. See this image.) Of course Galileo Galilei (1564-1642) was Italian too, but came later, as did Newton, Fermat, the Bernoullis, and all the others.
While on the topic of mathematics and notation, see also this post: Visual Clarity in the Naming of Variables.
[And while not exactly notation, Donald Knuth’s Calculus via O notation]
A “Möbius” inversion formula
[Not the Möbius inversion formula, but something similar.]
As usual, define the Möbius function $\mu$ on the natural numbers as

$$\mu(n) = \begin{cases} 1 & \text{if } n \text{ is squarefree with an even number of prime factors,} \\ -1 & \text{if } n \text{ is squarefree with an odd number of prime factors,} \\ 0 & \text{otherwise.} \end{cases}$$

Let $f$ be any function defined on the natural numbers, and let $g$ be the function defined as

$$g(n) = \sum_{k=1}^{n} f\left(\left\lfloor \frac{n}{k} \right\rfloor\right).$$

Then it is true that

$$f(n) = \sum_{k=1}^{n} \mu(k)\, g\left(\left\lfloor \frac{n}{k} \right\rfloor\right).$$
Note that f need not be multiplicative; it can be any arbitrarily defined function. I have no idea why it is true. Help?
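[In lieu of an explanation, here is at least a numerical sanity check (my addition, not in the original post): it verifies the identity as stated above for a couple of arbitrary, decidedly non-multiplicative functions f.]

```python
def mobius(n):
    """mu(n) = 0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0      # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def check(f, N=100):
    g = lambda n: sum(f(n // k) for k in range(1, n + 1))
    for n in range(1, N + 1):
        assert f(n) == sum(mobius(k) * g(n // k) for k in range(1, n + 1))

check(lambda n: n * n - 7)          # arbitrary...
check(lambda n: hash(str(n)) % 97)  # ...really arbitrary
print("identity verified for n up to 100")
```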
Lattice points visible from the origin
[A test of LaTeX-to-Wordpress conversion. Bugs remain, point them out. Original PDF]
Here is a problem I love. It is simple to state, and it has a solution that is not trivial, but is easy to understand. The solution also goes through some beautiful parts, so I can promise it’s worth reading :-)
[The solution is not mine. Also, the question is just an excuse for the ideas in the solution. :P]
Question. Suppose you are standing on an infinite grid in the plane. You can see infinitely in all directions, but you cannot see through grid points: a point is hidden from view if some other grid point lies in your line of sight. What fraction of the grid points can you see?
Let us first imagine that we are standing at the origin, and that the grid is that of the lattice (integer) points.
A theorem by Euler on partitions
There are so many beautiful facts in mathematics that are not as well known as they deserve to be. [Just today I couldn’t find an online reference for the Robinson-Schensted(-Knuth) correspondence.] Some efforts in the direction of communicating these to an online audience have been collected in the Carnival of Mathematics (41 so far) posts on various blogs. Another good idea I recently found is Theorem of the Day. [EtA: Found at Theorem of the Day: RSK correspondence.]
Anyway… today I attended a lecture on Euler by William Dunham (great talk!) where, after a brief biography of Euler and a tour through some of his results, he showed Euler’s proof of a particular theorem about partitions. It will be familiar to anyone who has read about integer partitions, but it deserves to be familiar to everyone! So here is a sketch of the proof, for your reading pleasure. [It doesn’t do justice to present only a “sketch”, but I have assignments due tomorrow, er, today… feel free to take it and turn it into some form worthy of the content. :-)]
Now when I say “a theorem by Euler”, it is very ambiguous which one I mean, and even with the “on partitions” restriction, there are several. So let me state it:
The number of partitions of a number into odd parts is the same as the number of partitions of the number into distinct parts.
[If I remember correctly what was narrated in the talk, one of the Bernoullis wrote to Euler asking him a question about partitions, and within a few days (think of the postal system then) Euler replied with a proof of the theorem, along with an apology for the delay, caused by the poor eyesight he had recently been suffering from. This is an outline of his proof; ask me about the details.]
Let $D(n)$ denote the number of partitions of $n$ into distinct parts, and $O(n)$ denote the number of partitions of $n$ into odd parts. Then we have:

$$\sum_{n \ge 0} D(n)x^n \;=\; \prod_{k \ge 1}\left(1+x^k\right) \;=\; \prod_{k \ge 1}\frac{1-x^{2k}}{1-x^k} \;=\; \prod_{k \text{ odd}}\frac{1}{1-x^k} \;=\; \sum_{n \ge 0} O(n)x^n,$$

which proves the theorem.

The first equality, which gives the generating function for the $D(n)$s, can be seen as follows: when you “expand” and write out $\prod_{k \ge 1}(1+x^k)$, the coefficient of any term $x^n$ is exactly the number of ways of writing $x^n$ as a product of distinct factors of the form $x^k$, which is $D(n)$. Similarly, the last equality, about the generating function for the $O(n)$s, holds because the coefficient of any term $x^n$ in $\prod_{k \text{ odd}}\frac{1}{1-x^k} = \prod_{k \text{ odd}}\left(1+x^k+x^{2k}+\cdots\right)$ is exactly the number of ways of writing $x^n$ as a product of (not necessarily distinct) factors $x^k$ for odd $k$, which is $O(n)$. The middle step is just writing $1+x^k = \frac{1-x^{2k}}{1-x^k}$ and cancelling: the numerators knock out exactly the even-index factors in the denominators, leaving the odd ones.
The result might not be terribly important, but this idea was the starting point for proving several observations about partitions, and the method is more widely applicable than you might think!
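[If you’d like to see the generating functions in action, here is a small sketch (mine, not from the talk): it multiplies out both products, truncated at degree 30, and checks that the coefficients agree.]

```python
def mul(p, q, N):
    """Multiply two polynomials given as coefficient lists, truncating past degree N."""
    r = [0] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                r[i + j] += a * b
    return r

N = 30
distinct = [1]   # will become prod of (1 + x^k):            counts distinct parts
odd = [1]        # will become prod of 1/(1 - x^k), k odd:   counts odd parts
for k in range(1, N + 1):
    distinct = mul(distinct, [1] + [0] * (k - 1) + [1], N)
    if k % 2 == 1:
        odd = mul(odd, [1 if i % k == 0 else 0 for i in range(N + 1)], N)

print(distinct == odd)   # True: D(n) = O(n) for all n <= 30
print(distinct[:11])     # [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
```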
If you want a combinatorial proof by bijection (“For Entertainment Only”) here is an outline of one:
Given a partition of n into odd parts, say

$$n = m_1 \cdot 1 + m_3 \cdot 3 + m_5 \cdot 5 + \cdots$$

(where $m_k$ is the number of times the part $k$ occurs), write each $m_k$ “in binary”, by which I mean as a sum of distinct powers of two. So you have

$$n = \sum_{k \text{ odd}} \left(2^{b_1} + 2^{b_2} + \cdots\right) k.$$

Now just get rid of the brackets, and note that all terms are distinct.

E.g. for $19 = 5 + 5 + 3 + 3 + 3$,

i.e. $19 = 2 \cdot 5 + 3 \cdot 3 = (2) \cdot 5 + (2 + 1) \cdot 3$, you get the partition

$19 = 10 + 6 + 3$.
[Exercise: Why are all parts distinct?]
Conversely, given a partition into distinct parts, we can separate the “even part” from the “odd part” in each term, so for example with $19 = 10 + 6 + 3$, we write

$19 = 2 \cdot 5 + 2 \cdot 3 + 1 \cdot 3$, and collect the coefficients of each odd number, so

$19 = 2 \cdot 5 + (2 + 1) \cdot 3 = 5 + 5 + 3 + 3 + 3$, which was our original odd partition.
[Exercise: Why can’t we do the same thing starting with any partition?]
[Question: Is the bijective proof related to the algebraic one?]
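[Here is the bijection in code (my sketch), together with a brute-force check that it really is a bijection for small n:]

```python
from collections import Counter

def odd_to_distinct(parts):
    """Send a partition into odd parts to one into distinct parts: write the
    multiplicity m of each odd part k in binary, giving parts (2^b * k)."""
    result = []
    for k, m in Counter(parts).items():
        b = 0
        while m:
            if m & 1:
                result.append((1 << b) * k)
            m >>= 1
            b += 1
    return sorted(result, reverse=True)

print(odd_to_distinct([5, 5, 3, 3, 3]))  # 2*5 + (2+1)*3  ->  [10, 6, 3]

def partitions(n, max_part, allowed):
    """All partitions of n (as non-increasing tuples) into parts satisfying `allowed`."""
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        if allowed(k):
            for rest in partitions(n - k, k, allowed):
                yield (k,) + rest

for n in range(1, 16):
    odd_ones = list(partitions(n, n, lambda k: k % 2 == 1))
    images = {tuple(odd_to_distinct(p)) for p in odd_ones}
    distinct_ones = {p for p in partitions(n, n, lambda k: True) if len(set(p)) == len(p)}
    assert len(images) == len(odd_ones)   # injective...
    assert images == distinct_ones        # ...and onto the distinct-part partitions
```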
A non-intuitive(?) fact
This seems counter-intuitive, at least to me, but if you can find an intuitive explanation for it, please comment!
Fact: Let G be a cycle (connected graph where every vertex has degree 2: you know what I mean) of length n (i.e., n vertices and n edges). Start a random walk on G from any node u. Then the probability that v is the last node visited (i.e., the random walk hits all other nodes before hitting v) is the same for every v other than u!
[Such a property is obviously true for the complete graph, and apparently it’s true only for cycles and complete graphs.]
I do not know a probabilist’s proof; what I could think of is a distinctly ComputerSciencey (Dynamic Programming / recurrence relation) proof.
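[For whatever it’s worth, here is a Monte Carlo sketch (mine) that exhibits the fact:]

```python
import random
from collections import Counter

def last_new_vertex(n, start=0):
    """Simple random walk on the n-cycle from `start`; return the vertex visited last."""
    pos, unvisited = start, set(range(n)) - {start}
    while unvisited:
        pos = (pos + random.choice((-1, 1))) % n
        unvisited.discard(pos)
    return pos  # the loop ends exactly when the last new vertex is hit

n, trials = 10, 50000
counts = Counter(last_new_vertex(n) for _ in range(trials))
for v in range(1, n):
    print(v, round(counts[v] / trials, 3))  # each close to 1/(n-1) = 0.111
```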
All functions are continuous
…or else they are uncomputable :)
See the discussions here and at What does topology have to do with computability?
Understanding monads
“Sigfpe” (Dan Piponi) has written some helpful posts about monads on his blog. Together with other stuff I’ve read, I understand them… somewhat. Or maybe I understand them perfectly, I don’t know.
Although they are in descending order of “simpleness”, this is the order in which I saw them:
First, read the definition of monads from here (only), and stop. It probably won’t make sense.
This excellent post:
You Could Have Invented Monads! (And Maybe You Already Have.)
explains monads with enough examples — they are a kind of “lift”. The definition makes sense now.
That, and Philip Wadler’s original paper should suffice.
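[To give a taste of the “lift” idea, here is the first example from sigfpe’s post, transcribed by me into Python: “debuggable” functions return a value together with a log string, and bind is exactly the plumbing that composes them.]

```python
# A "debuggable" function returns (result, log_message) instead of just a result.
def f(x):
    return (x + 1, "f was called. ")

def g(x):
    return (x * 2, "g was called. ")

def unit(x):
    """Lift a plain value into the (value, log) world, with an empty log."""
    return (x, "")

def bind(func):
    """Lift a debuggable function to accept (value, log) and accumulate the logs."""
    def lifted(pair):
        value, log = pair
        new_value, new_log = func(value)
        return (new_value, log + new_log)
    return lifted

# g expects a number but f returns a pair, so they don't compose directly;
# bind does the plumbing for us:
print(bind(g)(bind(f)(unit(3))))  # (8, 'f was called. g was called. ')
```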
There are too many monad tutorials, and quoting from Tell us why your language sucks:
So, things I hate about Haskell:
Let’s start with the obvious. Monad tutorials. No, not monads. Specifically the tutorials. They’re endless, overblown and dear god are they tedious. Further, I’ve never seen any convincing evidence that they actually help. Read the class definition, write some code, get over the scary name.
Finally, whether you’re trying to teach someone monads or anything else, you must read Brent Yorgey’s Abstraction, intuition, and the “monad tutorial fallacy”:
But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them. “Monads are easy,” Joe writes. “Think of them as burritos.” […] Of course, exactly the opposite is true, and all Joe has done is make it harder for people to learn about monads…
[Random interesting stuff:
This post looks at them as “expressions” v/s “commands”:
The IO Monad for People who Simply Don’t Care
There’s also some interesting stuff in the first half of this post.]
A nice theorem, and trying to invert a function
[Inspired by N, my first use of LaTeX on this blog… and it’s not as much of a pain as I had thought it would be.]
Here’s a nice theorem (“Levy–Desplanques theorem”):
Given an $n \times n$ matrix $A = (a_{ij})$, if for every row $i$ we have

$$|a_{ii}| > \sum_{j \ne i} |a_{ij}|,$$

then $\det A \neq 0$.
It’s quite easy to prove: Suppose $\det A = 0$; then there exists a nonzero vector $x$ such that $Ax = 0$. Pick a coordinate $x_i$ of maximum magnitude (i.e., let $|x_i| = \max_j |x_j| > 0$). Then look at row $i$: we have $\sum_j a_{ij}x_j = 0$, so

$$|a_{ii}||x_i| = \left|\sum_{j \ne i} a_{ij}x_j\right| \le \sum_{j \ne i} |a_{ij}||x_j| \le |x_i| \sum_{j \ne i} |a_{ij}| < |a_{ii}||x_i|,$$

which is a contradiction.
This theorem has apparently been discovered and rediscovered many times. According to Wikipedia:
This result has been independently rediscovered dozens of times. A few notable ones are Lévy (1881), Desplanques (1886), Minkowski (1900), Hadamard (1903), Schur, Markov (1908), Rohrbach (1931), Gershgorin (1931), Artin (1932), Ostrowski (1937), and Furtwängler (1936).
Olga Taussky has written a short paper, “A Recurring Theorem on Determinants”, in which she mentions 25 references. (This was in 1949… it’s probably been rediscovered a lot of times since.)
Applying this theorem to $A - \lambda I$ gives us the Gershgorin circle theorem: For a matrix $A$, let $R_i = \sum_{j \ne i} |a_{ij}|$; then for any eigenvalue $\lambda$ of $A$, there exists an $i$ such that

$$|\lambda - a_{ii}| \le R_i.$$

That is, consider the discs centred at $a_{ii}$ and of radius $R_i$. Then every eigenvalue lies inside one of these “Gershgorin discs”.
(This theorem can also be proved directly in a similar way, in which case the original theorem follows directly.)
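Here’s a quick numerical illustration (mine, with an arbitrarily chosen matrix):

```python
import numpy as np

# An arbitrary (row) diagonally dominant matrix: |a_ii| > sum of |a_ij| over j != i.
A = np.array([[ 5.0,  1.0, -1.0],
              [ 0.5, -4.0,  1.0],
              [ 1.0,  1.0,  3.0]])

print(np.linalg.det(A))  # nonzero, as Levy-Desplanques guarantees

radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))  # R_i = sum_{j != i} |a_ij|
centres = np.diag(A)
for lam in np.linalg.eigvals(A):
    # Gershgorin: every eigenvalue lies in some disc |lambda - a_ii| <= R_i
    assert any(abs(lam - c) <= r for c, r in zip(centres, radii))
print("all eigenvalues lie in the Gershgorin discs")
```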
Where I encountered this: in an economics paper, where we have a demand system, i.e., a function $f: \mathbb{R}^n \to \mathbb{R}^n$. That is, there are $n$ (substitutable) varieties of a product, and the demand for each variety is a function of all the $n$ prices. So for a given vector of prices $p$, there is a vector of demands: $f(p)$ is the vector whose $i$th component is the demand for the $i$th product when the prices are $p$. In this case, it is “natural” to assume (rather, not too implausible :-)) that the Jacobian of this function (the matrix of partial derivatives whose $(i,j)$th entry $\partial f_i/\partial p_j$ is the partial derivative of the demand for the $i$th product with respect to the $j$th price) satisfies:

- $\partial f_i/\partial p_i < 0$: If you increase your price, fewer people will want it.
- $\partial f_i/\partial p_j \ge 0$ for $j \ne i$: If someone else increases their price, you can expect to get at least a few converts to your product.
- $\sum_j \partial f_i/\partial p_j < 0$, i.e. $|\partial f_i/\partial p_i| > \sum_{j \ne i} |\partial f_i/\partial p_j|$: If everyone increases their price, the total demand in the market does not go up. It also says that the demand for your product depends more on your own price than on everyone else’s prices put together!
Let’s look at the function $-f$ instead, so that the three properties above say that its Jacobian has diagonal entries positive, off-diagonal entries nonpositive, and the dominant diagonal property in each row. It can be proved that any matrix with the positive dominant diagonal property is a P-matrix, i.e., it has all principal minors positive. And there is a “global univalence theorem” due to Gale-Nikaido-Inada: a differentiable function defined on a rectangular region, whose Jacobian is a P-matrix at every point, is globally one-to-one. (So it is invertible if we ignore points outside its range.) (I saw this theorem stated in the paper “The Jacobian matrix, global univalence and completely mixed games” by T. Parthasarathy and G. Ravindran; I vaguely remember TP mentioning something of this sort in class some time….)
So our demand system is globally invertible, and we can express the prices in terms of the demands. That was a lot of effort to get this!
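[To make the chain of reasoning concrete, here is a toy linear example, entirely made up by me: a demand system f(p) = a + Bp whose Jacobian B satisfies the three properties, so the system can be inverted to recover prices from demands.]

```python
import numpy as np

# Hypothetical linear demand system f(p) = a + B p for three substitutable products.
a = np.array([100.0, 120.0, 90.0])
B = np.array([[-5.0,  1.0,  2.0],
              [ 2.0, -6.0,  1.0],
              [ 1.0,  2.0, -4.0]])   # B is the (constant) Jacobian of f

n = len(a)
assert all(B[i, i] < 0 for i in range(n))                               # own-price effect negative
assert all(B[i, j] >= 0 for i in range(n) for j in range(n) if i != j)  # substitutes
assert all(B[i].sum() < 0 for i in range(n))                            # row-dominant diagonal

p = np.array([4.0, 5.0, 6.0])
q = a + B @ p                      # demands at prices p

# -B has a positive dominant diagonal, hence is a P-matrix, hence f is one-to-one;
# in the linear case, inverting the demand system is just solving a linear system:
print(np.linalg.solve(B, q - a))   # recovers [4. 5. 6.]
```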