The Lumber Room

"Consign them to dust and damp by way of preserving them"

Some incredibly beautiful things

with 2 comments

First, a totally, completely, absolutely incredible fact about convergents of continued fractions, the Khinchin–Lévy theorem: for almost all real numbers \alpha, the denominators q_n of the continued-fraction convergents of \alpha satisfy

\displaystyle \lim_{n \to \infty} q_n^{1/n} = e^{\pi^2/(12 \log 2)}
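
For whatever it’s worth, here is a rough numerical sanity check in Python (a sketch of mine, not remotely a proof; the function and parameter names are just for illustration): take many randomly chosen x in (0, 1), compute the denominator q_n of the n-th convergent, and average (1/n)\log q_n, which should come out near \pi^2/(12 \log 2) \approx 1.1866, i.e. near Lévy's constant e^{\pi^2/(12\log 2)} \approx 3.2758 after exponentiating.

import random
from fractions import Fraction
from math import exp, pi, log

def nth_convergent_denominator(x, n):
    """Denominator q_n of the n-th continued-fraction convergent of x,
    via the standard recurrence q_k = a_k * q_{k-1} + q_{k-2}."""
    q_prev, q = 1, 0                  # play the roles of q_{-2}, q_{-1}
    for _ in range(n + 1):            # consume the partial quotients a_0, ..., a_n
        a = int(x)
        q_prev, q = q, a * q + q_prev
        if x == a:                    # rational input with a terminating expansion
            break
        x = 1 / (x - a)
    return q

random.seed(0)
n, samples = 12, 2000                 # modest n; the limit is approached slowly
mean_log_q = sum(
    log(nth_convergent_denominator(Fraction(random.random()), n)) / n
    for _ in range(samples)
) / samples

print("exp(average of (1/n) log q_n):", exp(mean_log_q))
print("e^(pi^2 / (12 log 2))        :", exp(pi * pi / (12 * log(2))))

(Exact Fraction arithmetic sidesteps floating-point trouble in the expansion; since the theorem is asymptotic, expect only rough agreement at n = 12.)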

Why should such a thing be true?! Why should the limit even exist, why should it be the same for almost all real numbers, and where do e, \pi, \log 2 and 12 come from?!!

Intrigued by these questions, and embarrassed by my near-complete ignorance of continued fractions (each time I had encountered them before, I had been intimidated by the notation and skipped those pages), I started reading a couple of books on continued fractions (including Khinchin’s)… other things intervened and I had to give up for now, but I hope to return to their study soon. From what I’ve already seen, though, continued fractions are awesome, and it is a real shame that they seem to be out of fashion these days and are not more widely taught and known. Neither of these books actually proves the above theorem, but an editor’s footnote in Khinchin’s book gives the reference: Khinchin proved in 1935 that the limit exists and is the same for almost all real numbers, and Lévy found the constant soon after (published 1937).

I suspect (based on nothing at all) that it bears some relation to, among other things, (some generalization of) the following well-known fact, itself quite surprising (probably the most surprising fact I know a proof of):

\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} = \prod_{p \text{ prime}}{\left(1-\frac1{p^2}\right)^{-1}}

This fact, known as the Basel problem, was proved by (who else?) Euler, and fourteen proofs of it are collected here. But let this not distract you from reading about continued fractions!
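
Just to see the numbers line up, here is a quick Python illustration (an illustration only, nothing more) comparing a partial sum of 1/n^2, a partial Euler product over the primes, and \pi^2/6:

from math import pi

# partial sum of 1/n^2
N = 100_000
partial_sum = sum(1.0 / n ** 2 for n in range(1, N + 1))

# primes up to 1000 via a simple sieve of Eratosthenes
limit = 1000
is_prime = [True] * (limit + 1)
is_prime[0] = is_prime[1] = False
for p in range(2, int(limit ** 0.5) + 1):
    if is_prime[p]:
        for m in range(p * p, limit + 1, p):
            is_prime[m] = False
primes = [p for p in range(2, limit + 1) if is_prime[p]]

# partial Euler product: product over primes of 1 / (1 - 1/p^2)
partial_product = 1.0
for p in primes:
    partial_product *= 1.0 / (1.0 - 1.0 / p ** 2)

print("sum of 1/n^2, n <= 10^5  :", partial_sum)
print("Euler product, p <= 1000 :", partial_product)
print("pi^2 / 6                 :", pi * pi / 6)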


Second, pointed out in John Baez’s This Week’s Finds in Mathematical Physics (Week 230), and drawn from this paper by Sondow, are these (quoting Prof. Baez directly, just typeset in LaTeX):

eerily similar formulas for some of our favorite math constants. First, one for e:

\displaystyle e = \left(\frac{2}{1}\right)^{1/1} \left(\frac{2^2}{1\times3}\right)^{1/2} \left(\frac{2^3\times4}{1\times3^3}\right)^{1/3}  \left(\frac{2^4\times4^4}{1\times3^6\times5}\right)^{1/4} \dots

Then, one for π/2:

\displaystyle \frac{\pi}{2} = \left(\frac{2}{1}\right)^{1/2} \left(\frac{2^2}{1\times3}\right)^{1/4} \left(\frac{2^3\times4}{1\times3^3}\right)^{1/8} \left(\frac{2^4\times4^4}{1\times3^6\times5}\right)^{1/16} \dots

Then, one for e^γ, where γ is Euler’s constant:

\displaystyle e^\gamma = \left(\frac{2}{1}\right)^{1/2} \left(\frac{2^2}{1\times3}\right)^{1/3} \left(\frac{2^3\times4}{1\times3^3}\right)^{1/4} \left(\frac{2^4\times4^4}{1\times3^6\times5}\right)^{1/5} \dots

He also points out Wallis’ product for π/2 and Pippenger’s for e:

\displaystyle \frac{\pi}{2} = \left(\frac{2}{1}\right)^{1/1} \left(\frac{2\times 4}{3\times3}\right)^{1/1} \left(\frac{4\times6\times6\times8}{5\times5\times7\times7}\right)^{1/1} \dots

\displaystyle  e = 2\left(\frac{2}{1}\right)^{1/2} \left(\frac{2\times 4}{3\times3}\right)^{1/4} \left(\frac{4\times6\times6\times8}{5\times5\times7\times7}\right)^{1/8} \dots

What does it all mean? I haven’t a clue! Another mystery thrown down to us by the math gods, like a bone from on high… we can merely choose to chew on it or not, as we wish.

Indeed!
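
If you want to play with these, here is a small Python sketch (mine, purely illustrative). Checking against the displayed factors, the n-th “inner” number in all three Sondow-type products is \prod_{k=0}^{n}(k+1)^{(-1)^{k+1}\binom{n}{k}}; only the outer exponents differ: 1/n for e, 1/2^n for \pi/2, and 1/(n+1) for e^\gamma.

from math import comb, log, exp, pi

def log_inner(n):
    """Log of the n-th inner number, e.g. 2^3 * 4 / (1 * 3^3) for n = 3:
    sum over k of (-1)^(k+1) * C(n, k) * log(k + 1)."""
    return sum((-1) ** (k + 1) * comb(n, k) * log(k + 1) for k in range(n + 1))

N = 30                                   # number of factors to multiply
e_est   = exp(sum(log_inner(n) / n        for n in range(1, N + 1)))
pi_half = exp(sum(log_inner(n) / 2 ** n   for n in range(1, N + 1)))
e_gamma = exp(sum(log_inner(n) / (n + 1)  for n in range(1, N + 1)))
gamma = 0.5772156649                     # Euler's constant, for comparison

print("product for e      :", e_est,   "  vs e       =", exp(1))
print("product for pi/2   :", pi_half, "  vs pi/2    =", pi / 2)
print("product for e^gamma:", e_gamma, "  vs e^gamma =", exp(gamma))

The \pi/2 product converges quickly (its exponents shrink geometrically), while the products for e and e^\gamma creep along and are still noticeably short of their targets after thirty factors.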


Third, my favourite way to prove the Möbius inversion formula. Well, it’s actually the only way I know, besides induction, but I think it’s beautiful. (Ignore if you already know what Dirichlet convolutions are.) It’s from Herbert Wilf’s generatingfunctionology, one of my favourite books.

Let us consider a kind of generating function known as a Dirichlet generating function. (“A generating function is a clothesline on which we hang up a sequence of numbers for display.”) For a sequence a_n, the Dirichlet generating function is defined as the (formal) series (in s)

\displaystyle G_a(s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s} = a_1 + \frac{a_2}{2^s} + \frac{a_3}{3^s} + \frac{a_4}{4^s} + \dots.

(A formal series means that we can ignore issues of convergence, etc.) Now if we have the generating functions G_a(s) and G_b(s) for two sequences a_n and b_n, what is G_a(s)G_b(s)? Well, it’s easy to see that the coefficient of 1/n^s in G_a(s)G_b(s), which comes exactly from products of terms \frac{a_i}{i^s} and \frac{b_j}{j^s} with ij=n, is \displaystyle \sum_{ij=n} a_ib_j, or equivalently \displaystyle \sum_{d \vert n} a_db_{n/d}. So the product is the generating function for a special kind of convolution, the sequence c_n defined as

\displaystyle c_n = \sum_{d \vert n} a_db_{n/d}
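
Here is a small, concrete check of this in Python (purely illustrative, exact integer arithmetic): multiply two truncated Dirichlet series by brute force and confirm that the coefficient of 1/n^s agrees with the divisor-sum convolution.

def dirichlet_multiply(a, b, N):
    """Coefficients of G_a(s) * G_b(s) up to 1/N^s, with a, b given as dicts {n: a_n}."""
    c = {}
    for i, a_i in a.items():
        for j, b_j in b.items():
            if i * j <= N:
                c[i * j] = c.get(i * j, 0) + a_i * b_j
    return c

def divisor_convolution(a, b, n):
    """c_n = sum over d | n of a_d * b_{n/d}."""
    return sum(a[d] * b[n // d] for d in range(1, n + 1) if n % d == 0)

N = 30
a = {n: n * n for n in range(1, N + 1)}        # arbitrary example: a_n = n^2
b = {n: (-1) ** n for n in range(1, N + 1)}    # arbitrary example: b_n = (-1)^n

product = dirichlet_multiply(a, b, N)
assert all(product[n] == divisor_convolution(a, b, n) for n in range(1, N + 1))
print("coefficient of 1/n^s in G_a(s) G_b(s) matches sum_{d|n} a_d b_{n/d} for n <=", N)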

Let us call this fact “(*)” and put it aside for now; we will use it shortly. Instead, let us look at the generating function for a very special sequence, the sequence of all 1s. It is, by definition, \displaystyle \sum_{n\ge 1} \frac{1}{n^s}. While we could call this G_1(s), it goes by the more familiar name \zeta(s) — it is (when viewed as a function instead of a formal series) the Riemann zeta function. But we also know (this is the Euler product formula, and follows simply from the unique factorisation of integers):

\displaystyle \zeta(s) = \sum_{n\ge1}\frac{1}{n^s} = \prod_{p\text{ prime}}\left(1+\frac1{p^s} + \frac1{p^{2s}} + \frac1{p^{3s}} + \dots\right) = \prod_{p\text{ prime}}\frac1{\left(1-\frac1{p^s}\right)}.

This means that

\displaystyle \frac1{\zeta(s)} = \prod_{p\text{ prime}}{\left(1-\frac1{p^s}\right)} = G_\mu(s).

Why did I write G_\mu(s) on the right? Is this a generating function of anything? Why, yes! By expanding the product, you can see that it is the generating function for the sequence \mu(n), defined as

\displaystyle \mu(n) = \begin{cases} 0 \quad \text{if }n\text{ is not squarefree,} \\ (-1)^k \quad \text{if }n = p_1p_2\dots p_k\text{ is the product of }k\text{ distinct primes}\end{cases}.

(This function “mu” is known as the “Möbius function”.) With this background, the Möbius inversion formula falls out entirely naturally: if G_f(s) is the generating function of a sequence f(n), and the function F(n) is defined as

\displaystyle F(n) = \sum_{d\vert n} f(d),

then its generating function G_F(s) is, by (*), simply

\displaystyle G_F(s) = G_f(s)\zeta(s),

which means that

\displaystyle G_f(s) = G_F(s)\frac{1}{\zeta(s)} = G_F(s)G_\mu(s),

or in other words, again by (*),

\displaystyle f(n) = \sum_{d\vert n} F(d)\mu(n/d),

which is precisely the Möbius inversion formula. How cool is that?
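
And here is a throwaway Python check of the whole chain (f(n) = n is an arbitrary test function; any arithmetic function would do):

def mobius(n):
    """mu(n): 0 if n is not squarefree, else (-1)^(number of prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # p^2 divided the original n: not squarefree
                return 0
            result = -result
        p += 1
    if n > 1:                    # one prime factor left over
        result = -result
    return result

def f(n):                        # arbitrary test function
    return n

def F(n):                        # F(n) = sum_{d|n} f(d)
    return sum(f(d) for d in range(1, n + 1) if n % d == 0)

def f_recovered(n):              # sum_{d|n} F(d) * mu(n/d); should equal f(n)
    return sum(F(d) * mobius(n // d) for d in range(1, n + 1) if n % d == 0)

assert all(f_recovered(n) == f(n) for n in range(1, 200))
print("f(n) = sum_{d|n} F(d) mu(n/d) verified for n = 1, ..., 199")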


Written by S

Fri, 2010-04-30 at 01:12:02 +05:30

Posted in mathematics

2 Responses


  1. Haha, your introductory para reminded me of Ramanan’s “Thm. e^iπ + 1 = 0. Cor. God exists” and your article on “trivial” recently (or not-so-). :)

    Kshitij

    Tue, 2010-05-11 at 03:37:24 +05:30

