The Lumber Room

"Consign them to dust and damp by way of preserving them"

Archive for the ‘mathematics’ Category

Some playing with Python

with 2 comments

A long time ago, Diophantus (sort of) discussed integer solutions to the equation

\displaystyle  x^2 + y^2 = z^2

(solutions to this equation are called Pythagorean triples).

Centuries later, in 1637, Fermat made a conjecture (now called Fermat’s Last Theorem, not because he uttered it in his dying breath, but because it was the last one to be proved — in ~1995) that

\displaystyle x^n + y^n = z^n

has no positive integer solutions for n \ge 3. In other words, his conjecture was that none of the following equations has a solution:

\displaystyle x^3 + y^3 = z^3

\displaystyle x^4 + y^4 = z^4

\displaystyle x^5 + y^5 = z^5

\displaystyle x^6 + y^6 = z^6

… and so on. An nth power cannot be partitioned into two nth powers.

About a century later, Euler proved the n = 3 case of Fermat’s conjecture, but generalized it in a different direction: he conjectured in 1769 that an nth power cannot be partitioned into fewer than n nth powers, namely

\displaystyle z^n = \sum_{i = 1}^k x_i^n

has no solutions with k < n. So his conjecture was that (among others) none of the following equations has a solution:

\displaystyle z^3 = a^3 + b^3

\displaystyle z^4 = a^4 + b^4 + c^4

\displaystyle z^5 = a^5 + b^5 + c^5 + d^5

\displaystyle z^6 = a^6 + b^6 + c^6 + d^6 + e^6

… and so on.

This conjecture stood for about two centuries, until it was abruptly found to be false by Lander and Parkin, who in 1966 simply did a direct search on the fastest (super)computer at the time and found this counterexample:

\displaystyle 27^5 + 84^5 + 110^5 + 133^5 = 144^5

(It is still one of only three examples known, according to Wikipedia.)
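
Verifying the counterexample is a one-liner (finding it, of course, is the interesting part):

>>> 27**5 + 84**5 + 110**5 + 133**5 == 144**5
True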

Now, how might you find this solution on a computer today?

In his wonderful (as always) post at bit-player, Brian Hayes showed the following code:

import itertools as it

def four_fifths(n):
    '''Return smallest positive integers ((a,b,c,d),e) such that
       a^5 + b^5 + c^5 + d^5 = e^5; if no such tuple exists
       with e < n, return the string 'Failed'.'''
    fifths = [x**5 for x in range(n)]
    combos = it.combinations_with_replacement(range(1,n), 4)
    while True:
        try:
            cc = combos.next()
            cc_sum = sum([fifths[i] for i in cc])
            if cc_sum in fifths:
                return(cc, fifths.index(cc_sum))
        except StopIteration:
            return('Failed')

to which, if you add (say) print four_fifths(150) and run it, it returns the correct answer fairly quickly: in about 47 seconds on my laptop.

The if cc_sum in fifths: line inside the loop is an O(n) cost each time it’s run, so with a simple improvement to the code (using a set instead) and rewriting it a bit, we can write the following full program:

import itertools

def find_counterexample(n):
  fifth_powers = [x**5 for x in range(n)]
  fifth_powers_set = set(fifth_powers)
  for xs in itertools.combinations_with_replacement(range(1, n), 4):
    xs_sum = sum([fifth_powers[i] for i in xs])
    if xs_sum in fifth_powers_set:
      return (xs, fifth_powers.index(xs_sum))
  return 'Failed'

print find_counterexample(150)

which finishes in about 8.5 seconds.


But there’s something unsatisfying about this solution: it assumes there’s a solution with all four numbers on the LHS less than 150. After all, changing the function invocation to find_counterexample(145) makes it run even a second faster, but how could we know to do that without already knowing the solution? Besides, we don’t have a fixed 8- or 10-second budget; what we’d really like is a program that keeps searching till it finds a solution or we abort it (or it runs out of memory or something), with no other fixed termination condition.

The above program used the given “n” as an upper bound to generate the combinations of 4 numbers; is there a way to generate all combinations when we don’t know an upper bound on them?

Yes! One of the things I learned from Knuth volume 4 is that if you simply write down each combination in descending order and order them lexicographically, the combinations you get for each upper bound are a prefix of the list of the next bigger one, i.e., for any upper bound, all the combinations form a prefix of the same infinite list, which starts as follows (line breaks for clarity):

1111,
2111, 2211, 2221, 2222,
3111, 3211, 3221, 3222, 3311, 3321, 3322, 3331, 3332, 3333,
4111, ...
     ... 9541, 9542, 9543, 9544, 9551, ... 9555, 9611, ...

There doesn’t seem to be a library function in Python to generate these though, so we can write our own. If we stare at the above list, we can figure out how to generate the next combination from a given one:

  1. Walk backwards from the end, till you reach the beginning or find an element that’s less than the previous one.
  2. Increase that element, set all the following elements to 1s, and continue.

We could write, say, the following code for it:

def all_combinations(r):
  xs = [1] * r
  while True:
    yield xs
    for i in range(r - 1, 0, -1):
      if xs[i] < xs[i - 1]:
        break
    else:
      i = 0
    xs[i] += 1
    xs[i + 1:] = [1] * (r - i - 1)

(The else block on a for loop is an interesting Python feature: it is executed if the loop wasn’t terminated with break.) We could even hard-code the r=4 case, as we’ll see later below.

For testing whether a given number is a fifth power, we can no longer simply look it up in a fixed precomputed set. We can do a binary search instead:

def is_fifth_power(n):
  assert n > 0
  lo = 0
  hi = n
  # Invariant: lo^5 < n <= hi^5
  while hi - lo > 1:
    mid = lo + (hi - lo) / 2
    if mid ** 5 < n:
      lo = mid
    else:
      hi = mid
  return hi ** 5 == n

but it turns out that this is slower than one based on looking up in a growing set (as below).

Putting everything together, we can write the following (very C-like) code:

largest_known_fifth_power = (0, 0)
known_fifth_powers = set()
def is_fifth_power(n):
  global largest_known_fifth_power
  while n > largest_known_fifth_power[0]:
    m = largest_known_fifth_power[1] + 1
    m5 = m ** 5
    known_fifth_powers.add(m5)
    largest_known_fifth_power = (m5, m)
  return n in known_fifth_powers

def fournums_with_replacement():
  (x0, x1, x2, x3) = (1, 1, 1, 1)
  while True:
    yield (x0, x1, x2, x3)
    if x3 < x2:
      x3 += 1
      continue
    x3 = 1
    if x2 < x1:
      x2 += 1
      continue
    x2 = 1
    if x1 < x0:
      x1 += 1
      continue
    x1 = 1
    x0 += 1

if __name__ == '__main__':
  tried = 0
  for get in fournums_with_replacement():
    tried += 1
    if (tried % 1000000 == 0):
      print tried, 'Trying:', get
    rhs = get[0]**5 + get[1]**5 + get[2]**5 + get[3]**5
    if is_fifth_power(rhs):
      print 'Found:', get, rhs

which is both longer and slower (takes about 20 seconds) than the original program, but at least we have the satisfaction that it doesn’t depend on any externally known upper bound.

I originally started writing this post because I wanted to describe some experiments I did with profiling, but it’s late and I’m sleepy so I’ll just mention it.

python -m cProfile

will print relevant output in the terminal:

         26916504 function calls in 26.991 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1   18.555   18.555   26.991   26.991
 13458164    4.145    0.000    4.145    0.000
 13458163    4.292    0.000    4.292    0.000
      175    0.000    0.000    0.000    0.000 {method 'add' of 'set' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

Another way to view the same thing is to write the profile output to a file and read it with cprofilev:

python -m cProfile -o euler_profile.out
cprofilev euler_profile.out

and visit http://localhost:4000 to view it.
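
Or, staying within the standard library, the same dump can be inspected with the pstats module (a minimal sketch; the file name matches the one above):

import pstats

stats = pstats.Stats('euler_profile.out')
stats.sort_stats('cumulative').print_stats(10)   # show the 10 costliest entries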

Of course, simply translating this code to C++ makes it run much faster:

#include <array>
#include <iostream>
#include <map>
#include <utility>

typedef long long Int;
constexpr Int fifth_power(Int x) { return x * x * x * x * x; }

std::map<Int, int> known_fifth_powers = {{0, 0}};
bool is_fifth_power(Int n) {
  while (n > known_fifth_powers.rbegin()->first) {
    int m = known_fifth_powers.rbegin()->second + 1;
    known_fifth_powers[fifth_power(m)] = m;
  }
  return known_fifth_powers.count(n);
}

std::array<Int, 4> four_nums() {
  static std::array<Int, 4> x = {1, 1, 1, 0};
  int i = 3;
  while (i > 0 && x[i] == x[i - 1]) --i;
  x[i] += 1;
  while (++i < 4) x[i] = 1;
  return x;
}

std::ostream& operator<<(std::ostream& os, std::array<Int, 4> x) {
  os << "(" << x[0] << ", " << x[1] << ", " << x[2] << ", " << x[3] << ")";
  return os;
}

int main() {
  while (true) {
    std::array<Int, 4> get = four_nums();
    Int rhs = fifth_power(get[0]) + fifth_power(get[1]) + fifth_power(get[2]) + fifth_power(get[3]);
    if (is_fifth_power(rhs)) {
      std::cout << "Found: " << get << " " << known_fifth_powers[rhs] << std::endl;
    }
  }
}


clang++ -std=c++11 && time ./a.out

runs in 2.43s, or 0.36s if compiled with -O2.

But I don’t have a satisfactory answer for how to make our Python program, which takes 20 seconds, as fast as the 8.5-second known-upper-bound version.
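
One possible direction (a sketch only, new code not benchmarked carefully): keep the fast C-implemented itertools.combinations_with_replacement, but grow the upper bound geometrically, skipping the combinations already examined under the previous bound. The tuples it yields are non-decreasing, so the last element is the maximum and the skip test is cheap:

import itertools

def find_counterexample_unbounded():
  n = 50       # initial bound, chosen arbitrarily; doubled whenever the search fails
  lower = 1
  while True:
    # Powers up to 2n cover every possible sum, since 4*(n-1)^5 < (2n)^5.
    fifth_powers = [x**5 for x in range(2 * n)]
    fifth_powers_set = set(fifth_powers)
    for xs in itertools.combinations_with_replacement(range(1, n), 4):
      if xs[3] < lower:   # largest element already examined in an earlier round
        continue
      xs_sum = sum(fifth_powers[i] for i in xs)
      if xs_sum in fifth_powers_set:
        return (xs, fifth_powers.index(xs_sum))
    lower, n = n, 2 * n

Each round re-enumerates (and discards) everything below the previous bound, so this is not obviously a win; consider it a starting point rather than an answer.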

Edit [2015-05-08]: I wrote some benchmarking code to compare all the different “combination” functions.

import itertools

# Copied from the Python documentation
def itertools_equivalent(iterable, r):
    pool = tuple(iterable)
    n = len(pool)
    if not n and r:
        return
    indices = [0] * r
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield tuple(pool[i] for i in indices)

# Above function, specialized to first argument being range(1, n)
def itertools_equivalent_specialized(n, r):
  indices = [1] * r
  yield indices
  while True:
    for i in reversed(range(r)):
      if indices[i] != n - 1:
        break
    else:
      return
    indices[i:] = [indices[i] + 1] * (r - i)
    yield indices

# Function to generate all combinations of 4 elements
def all_combinations_pythonic(r):
  xs = [1] * r
  while True:
    yield xs
    for i in range(r - 1, 0, -1):
      if xs[i] < xs[i - 1]:
        break
    else:
      i = 0
    xs[i] += 1
    xs[i + 1:] = [1] * (r - i - 1)

# Above function, written in a more explicit C-like way
def all_combinations_clike(r):
  xs = [1] * r
  while True:
    yield xs
    i = r - 1
    while i > 0 and xs[i] == xs[i - 1]:
      i -= 1
    xs[i] += 1
    while i < r - 1:
      i += 1
      xs[i] = 1

# Above two functions, specialized to r = 4, using tuple over list.
def fournums():
  (x0, x1, x2, x3) = (1, 1, 1, 1)
  while True:
    yield (x0, x1, x2, x3)
    if x3 < x2:
      x3 += 1
      continue
    x3 = 1
    if x2 < x1:
      x2 += 1
      continue
    x2 = 1
    if x1 < x0:
      x1 += 1
      continue
    x1 = 1
    x0 += 1

# Benchmarks for all functions defined above (and the library function)
def benchmark_itertools(n):
  for xs in itertools.combinations_with_replacement(range(1, n), 4):
    if xs[0] >= n:
      break
def benchmark_itertools_try(n):
  combinations = itertools.combinations_with_replacement(range(1, n), 4)
  while True:
    try:
      xs = combinations.next()
      if xs[0] >= n:
        break
    except StopIteration:
      break
def benchmark_itertools_equivalent(n):
  for xs in itertools_equivalent(range(1, n), 4):
    if xs[0] >= n:
      break
def benchmark_itertools_equivalent_specialized(n):
  for xs in itertools_equivalent_specialized(n, 4):
    if xs[0] >= n:
      break
def benchmark_all_combinations_pythonic(n):
  for xs in all_combinations_pythonic(4):
    if xs[0] >= n:
      break
def benchmark_all_combinations_clike(n):
  for xs in all_combinations_clike(4):
    if xs[0] >= n:
      break
def benchmark_fournums(n):
  for xs in fournums():
    if xs[0] >= n:
      break

if __name__ == '__main__':
  # Assumed driver: run each benchmark with the same bound (150 used here).
  n = 150
  benchmark_itertools(n)
  benchmark_itertools_try(n)
  benchmark_itertools_equivalent(n)
  benchmark_itertools_equivalent_specialized(n)
  benchmark_all_combinations_pythonic(n)
  benchmark_all_combinations_clike(n)
  benchmark_fournums(n)

As you can see, inside each benchmarking function I chose the same statement, one that causes the unbounded generators (like all_combinations) to terminate and has no effect on the other combination functions.
When run with

python -m cProfile

the results include the following observations:

  • Calling itertools.combinations_with_replacement is by far the fastest, taking about 2.7 seconds. It turns out that it’s written in C, so this would be hard to beat. (Still, writing it in a try block is seriously bad.)
  • The “equivalent” Python code from the itertools documentation (benchmark_itertools_equivalent) is about 50x slower.
  • It gets slightly better when specialized to numbers (itertools_equivalent_specialized).
  • Simply generating all combinations without an upper bound is actually faster.
  • It can be made even faster by writing it in a more C-like way.
  • The tuples version with the loop unrolled manually is rather fast when seen in this light, less than 4x slower than the library version.

Written by S

Sun, 2015-02-08 at 00:03:38

Colliding balls approximate pi

leave a comment »

Found via G+, a new physical experiment that approximates \pi, like Buffon’s needle problem: The Pi Machine.

Roughly, the amazing discovery of Gregory Galperin is this: when a ball of mass M collides with a ball of mass m, propelling it towards a wall, the number of collisions (assuming standard physics idealisms) is \lfloor \pi\sqrt{M/m} \rfloor, so by taking M/m = 10^{2n}, we can get the first n+1 digits of \pi. Note that this number of collisions is an entirely deterministic quantity; there’s no probability involved!
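
Merely counting the collisions is easy on a computer; here is a minimal sketch (new code, not from the post or the paper, with the heavy ball given unit leftward speed) that reproduces the count:

def count_collisions(mass_ratio):
  M, m = float(mass_ratio), 1.0
  v_big, v_small = -1.0, 0.0   # negative velocity = moving towards the wall
  collisions = 0
  while True:
    if v_big < v_small:
      # The big ball is catching up to the small one: elastic ball-ball collision.
      v_small, v_big = (((m - M) * v_small + 2 * M * v_big) / (M + m),
                        ((M - m) * v_big + 2 * m * v_small) / (M + m))
    elif v_small < 0:
      # The small ball is heading for the wall: it bounces back.
      v_small = -v_small
    else:
      # Both balls move away from the wall, the big one at least as fast: done.
      break
    collisions += 1
  return collisions

print(count_collisions(100))    # 31
print(count_collisions(10**6))  # 3141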

Here’s a video demonstrating the fact for M/m = 100 (the blue ball is the heavier one):

The NYT post says how this discovery came about:

Dr. Galperin’s approach was also geometric but very different (using an unfolding geodesic), building on prior related insights. Dr. Galperin, who studied under the well-known Russian mathematician Andrei Kolmogorov and had recently written (with Yakov Sinai) extensively on ball collisions, realized just before a talk in 1995 that a plot of the ball positions of a pair of colliding balls could be used to determine pi. (When he mentioned this insight in the talk, no one in the audience believed him.) This finding was ultimately published as “Playing Pool With Pi” in a 2003 issue of Regular and Chaotic Dynamics.

The paper, Playing Pool With π (The number π from a billiard point of view), is very readable. The post has, despite a “solution” section, essentially no explanation, but the two comments by Dave in the comments section explain it clearly. And a reader sent in a cleaned-up version of that too: here, by Benjamin Wearn, who teaches physics at Fieldston School.

Now someone needs to make a simulation / animation graphing the two balls in phase space of momentum. :-)

I’d done something a while ago, to illustrate The Orbit of the Moon around the Sun is Convex!, here. Probably need to re-learn all that JavaScript stuff, to make one for this. Leaving this post here as a placeholder.

Or maybe someone has done it already?

Written by S

Mon, 2014-06-23 at 23:03:18

Posted in mathematics, unfinished

Prefatory apprehension

leave a comment »

Robert Recorde’s 1557 book is noted for being the first to introduce the equals sign =, and is titled:

The Whetstone of Witte: whiche is the seconde parte of Arithmeteke: containing the extraction of rootes; the cossike practise, with the rule of equation; and the workes of Surde Nombers.

Its title page contains this verse:

Original spelling

Though many ſtones doe beare greate price,
The whetſtone is for exerſice
As neadefull, and in woorke as ſtraunge:
Dulle thinges and harde it will ſo chaunge,
And make them ſharpe, to right good vſe:
All arteſmen knowe, thei can not chuſe,
But uſe his helpe: yet as men ſee,
Noe ſharpeneſſe ſemeth in it to bee.

The grounde of artes did brede this ſtone:
His vſe is greate, and moare then one.
Here if you lift your wittes to whette,
Moche ſharpeneſſe thereby ſhall you gette.
Dulle wittes hereby doe greately mende,
Sharpe wittes are fined to their fulle ende.
Now proue, and praiſe, as you doe finde,
And to your ſelf be not vnkinde.

Modern spelling

Though many stones do bear great price,
The whetstone is for exercise
As needful, and in work as strange:
Dull things and hard it will so change
And make them sharp, to right good use:
All artsmen know they cannot choose
But use his help; yet as men see,
No sharpness seemeth in it to be.

The ground of arts did breed this stone;
His use is great, and more than one.
Here if you lift your wits to whet,
Much sharpness thereby shall you get.
Dull wits hereby do greatly mend,
Sharp wits are fined to their full end.
Now prove and praise as you do find,
And to yourself be not unkind.

Apparently the full title contains a pun: “the cossike practise” in the title refers to algebra, as the Latin cosa, apparently meaning “a thing”, was used to stand for an unknown, abbreviated to cos — but the Latin word cos itself means a grindstone.

The author again reminds readers not to blame his book, at the end of his preface:

To the curiouſe ſcanner.

If you ought finde, as ſome men maie,
That you can mende, I ſhall you praie,
To take ſome paine ſo grace maie ſende,
This worke to growe to perfecte ende.

But if you mende not that you blame,
I winne the praiſe, and you the ſhame.
Therfore be wiſe, and learne before,
Sith ſlaunder hurtes it ſelf moſte ſore.

Authors are either anxious about how their book is received, or make sure to be pointedly uncaring.

Sir Arthur Conan Doyle, in a mostly forgettable volume of poetry (Songs of the Road, 1911), begins:

If it were not for the hillocks
   You’d think little of the hills;
The rivers would seem tiny
   If it were not for the rills.
If you never saw the brushwood
   You would under-rate the trees;
And so you see the purpose
   Of such little rhymes as these.

Kālidāsa of course begins his Raghuvaṃśa with a grand disclaimer:

kva sūryaprabhavo vaṃśaḥ kva cālpaviṣayā matiḥ /
titīrṣur dustaram mohād uḍupenāsmi sāgaram // Ragh_1.2 //

mandaḥ kaviyaśaḥ prārthī gamiṣyāmy upahāsyatām /
prāṃśulabhye phale lobhād udbāhur iva vāmanaḥ // Ragh_1.3 //

atha vā kṛtavāgdvāre vaṃśe ‘smin pūrvasūribhiḥ /
maṇau vajrasamutkīrṇe sūtrasyevāsti me gatiḥ // Ragh_1.4 //

But the most nonchalant I’ve seen, thanks to Dr. Ganesh, is this gīti by Śrīkṛṣṇa Brahmatantra Yatīndra of the Parakāla Maṭha, Mysore:

nindatu vā nandatu vā
mandamanīṣā niśamya kṛtim etām
harṣaṃ vā marṣaṃ vā
sarṣapamātram api naiva vindema

Screw you guys. :-)

Written by S

Wed, 2014-05-28 at 23:56:11

Posted in history, mathematics

Big O() notation: a couple of sources

with one comment

This post contains, just for future reference, a couple of primary sources relevant to the O (“Big O”) notation:

  1. Some introductory words from Asymptotic Methods in Analysis by de Bruijn
  2. A letter from Donald Knuth on an approach to teaching calculus using this notation.

Read the rest of this entry »

Written by S

Thu, 2014-03-13 at 16:33:20

Visualizing product of permutations

with 4 comments

A simple pedagogical trick that may come in handy: represent a permutation \sigma using arrows (curved lines) from k to \sigma(k) for each k. Then, the product of two permutations can be represented by just putting the two corresponding figures (sets of arrows) one below the other, and following the arrows.

Representing permutations and products of permutations.

The figure is from an article called Symmetries by Alain Connes, found via the Wikipedia article on Morley’s trisector theorem (something entirely unrelated to permutations, but the article covers both of them and more).

I’m thinking how one might write a program to actually draw these: if we decide that the “height” of the figure is some h, then each arrow needs to go from some (k, 0) to (\sigma(k), h) (using here the usual screen convention of x coordinate increasing from left to right, and y coordinate increasing from top to bottom). Further, each curve needs to have vertical slope at its two endpoints, so that successive curves can line up smoothly. The constraint on starting point, ending point, and directions at the endpoints defines almost a quadratic Bezier curve, except that here the two directions are parallel. So it’s somewhere between a quadratic and the (usual) cubic Bezier curve, which is given by the start point, end point, and derivatives at the start and end point. (Here we only care about the direction of the derivative; we can pick some arbitrary magnitude to fix the curve: the larger we pick, the more smooth it will look at the ends, at the cost of smoothness in the interior.)
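
Concretely, computing the control points is straightforward; here is a sketch (plain Python; the function names, the 0-indexing of the permutation, and the particular stiffness value are all arbitrary choices, not anything from the article):

def permutation_arrows(sigma, height=1.0, stiffness=0.6):
  """For each k, the four control points of a cubic Bezier curve from
  (k, 0) to (sigma[k], height), with vertical tangents at both endpoints.
  sigma is a list, with sigma[k] the image of k (0-indexed)."""
  arrows = []
  for k, sk in enumerate(sigma):
    p0 = (float(k), 0.0)
    p1 = (float(k), stiffness * height)         # directly "below" p0: vertical tangent there
    p2 = (float(sk), (1 - stiffness) * height)  # directly "above" p3: vertical tangent there
    p3 = (float(sk), height)
    arrows.append((p0, p1, p2, p3))
  return arrows

def bezier_point(p0, p1, p2, p3, t):
  """Point at parameter t (0 <= t <= 1) on the cubic Bezier curve."""
  s = 1.0 - t
  return (s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0],
          s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1])

# The cycle 0 -> 1 -> 2 -> 0, each arrow sampled at a few parameter values:
for p0, p1, p2, p3 in permutation_arrows([1, 2, 0]):
  print([bezier_point(p0, p1, p2, p3, t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)])

Sampling each curve finely enough gives points that can be handed to any drawing backend, which still leaves the question: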

Even knowing the curve, how do we generate an image?

Written by S

Thu, 2014-03-06 at 23:15:44

The idea of logarithms, and the first appearance of e

with 2 comments

[Incomplete post: about 10% written, about 90% left.]

The notion of the number e, the exponential function e^x, and logarithms \log x are often conceptual stumbling blocks even to someone who has an otherwise solid understanding of middle-school mathematics.

Just what is the number e? How was it first calculated / where did it first turn up? Premature exposure to its numerical value

\displaystyle e \approx 2.718281828459045\dots

only serves to deepen the mysteriousness and to make it seem arbitrary.

Here a historical perspective helps: as is often the case, here too, the first appearance is simpler and more well-motivated than the accounts in dry textbooks. This is from this account by Matthew P. Wiener (originally posted on USENET somewhere, as quoted by MJD). I’m just going to quote it directly for now, and edit it later:

Napier, who invented logarithms, more or less worked out a table of logarithms to base \frac1e, as follows:

     0  1  2  3   4   5   6    7    8    9    10 ...
     1  2  4  8  16  32  64  128  256  512  1024 ...

The arithmetic progression in the first row is matched by a geometric progression in the second row. If, by any luck, you happen to wish to multiply 16 by 32, that just happen to be in the bottom row, you can look up their “logs” in the first row and add 4+5 to get 9 and then conclude 16·32=512.

For most practical purposes, this is useless. Napier realized that what one needs to multiply in general is 1+\epsilon for a base—the intermediate values will be much more extensive. For example, with base 1.01, we get:

       0 1.00   1 1.01   2 1.02   3 1.03   4 1.04   5 1.05
       6 1.06   7 1.07   8 1.08   9 1.09  10 1.10  11 1.12
      12 1.13  13 1.14  14 1.15  15 1.16  16 1.17  17 1.18
      18 1.20  19 1.21  20 1.22  21 1.23  22 1.24  23 1.26
      24 1.27  25 1.28  26 1.30  27 1.31  28 1.32  29 1.33
      30 1.35  31 1.36  32 1.37  33 1.39  34 1.40  35 1.42
      50 1.64  51 1.66  52 1.68  53 1.69  54 1.71  55 1.73
      94 2.55  95 2.57  96 2.60  97 2.63  98 2.65  99 2.68
     100 2.70 101 2.73 102 2.76 103 2.79 104 2.81 105 2.84

So if you need to multiply 1.27 by 1.33, say, just look up their logs, in this case, 24 and 29, add them, and get 53, so 1.27·1.33=1.69. For two/three digit arithmetic, the table only needs entries up to 9.99.

Note that e is almost there, as the antilogarithm of 100. The natural logarithm of a number can be read off from the above table, as just [approximately] \frac1{100} the corresponding exponent.

What Napier actually did was work with base .9999999. He spent 20 years computing powers of .9999999 by hand, producing a grand version of the above. That’s it. No deep understanding of anything, no calculus, and e pops up anyway—in Napier’s case, \frac1e was the 10 millionth entry. (To be pedantic, Napier did not actually use decimal points, that being a new fangled notion at the time.)

Later, in his historic meeting with Briggs, two changes were made. A switch to a base > 1 was made, so that logarithms would scale in the same direction as the numbers, and the spacing on the logarithm sides was chosen so that \log(10)=1. These two changes were, in effect, just division by -\log_e(10).

In other words, e made its first appearance rather implicitly.
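
A quick numerical check of the quoted claims (a sketch in Python, nothing Napier would recognize):

import math

# Base-1.01 "logs": the log of x is the exponent k with 1.01^k close to x.
log_127 = round(math.log(1.27, 1.01))   # 24
log_133 = round(math.log(1.33, 1.01))   # 29
print(round(1.01 ** (log_127 + log_133), 2))   # 1.69, i.e. 1.27 * 1.33 ~ 1.69

# The antilogarithm of 100 is nearly e:
print(1.01 ** 100)          # 2.7048...

# And with Napier's base, the 10-millionth power of 0.9999999 is nearly 1/e:
print(0.9999999 ** 10**7)   # 0.3678..., vs 1/e = 0.36787...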

(I had earlier read a book on Napier and come to the same information though a lot less clearly, here.)

I had started writing a series of posts leading up to an understanding of the exponential function e^x (here, here, here), but it seems to have got abandoned. Consider this one a contribution to that series.

Written by S

Wed, 2013-11-27 at 10:52:51

Posted in mathematics

The functional equation f(x+y) = f(x)f(y)

with 2 comments

Suppose f: \mathbb{R} \to \mathbb{R} satisfies f(x+y) = f(x) f(y). What can we say about f?

Putting y = 0 gives

\displaystyle f(x) = f(x+0) = f(x)f(0),

which can happen if either f(x) = 0 or f(0) = 1. Note that the function f which is identically zero satisfies the functional equation. If f is not this function, i.e., if f(x) \neq 0 for at least one value of x, then plugging that value of x (say x^*) into the equation gives f(0) = 1. Also, for any x, the equation f(x^*) = f(x +x^* - x) = f(x)f(x^* - x) forces f(x) \neq 0 as well. Further, f(x) = f(x/2 + x/2) = f(x/2)^2 so f(x) > 0 for all x.

Next, putting y = x gives f(2x) = f(x)^2, and by induction f(nx) = f(x)^n. Putting \frac{x}{n} in place of x in this gives f(n\frac{x}{n}) = f(\frac{x}{n})^n which means f(\frac{x}{n}) = f(x)^{\frac1n} (note we’re using f(x) > 0 here). And again, f(\frac{m}{n}x) = f(x)^{m/n}. So f(\frac{m}{n}) = f(1)^{m/n}, which completely defines the function at rational points.
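
For example, to pin down f(\frac{3}{2}): we have f(3) = f(1)^3, and also f(3) = f(\frac{3}{2} + \frac{3}{2}) = f(\frac{3}{2})^2, so (using the positivity of f) f(\frac{3}{2}) = f(1)^{3/2}, as claimed.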

[As f(1) > 0, it can be written as f(1) = e^k for some constant k, which gives f(x) = e^{kx} for rational x.]

To extend this function to irrational numbers, we need some further assumptions on f, such as continuity. It turns out that being continuous at any point is enough (and implies the function is f(x) = f(1)^x everywhere): note that f(x + m/n) = f(x)f(m/n) = f(x)f(1)^{m/n}. Even being Lebesgue-integrable/measurable will do.

Else, there are discontinuous functions satisfying the functional equation. (Basically, we can define the value of the function separately on each “independent” part. That is, define the equivalence class where x and y are related if y = r_1x + r_2 for rationals r_1 and r_2, pick a representative for each class using the axiom of choice (this is something like picking a basis for \mathbb{R}/\mathbb{Q}, which corresponds to the equivalence class defined by the relation y = r_1x), define the value of the function independently for each representative, and this fixes the value of f on \mathbb{R}. See this article for more details.)

To step back a bit: what the functional equation says is that f is a homomorphism from (\mathbb{R}, +), the additive group of real numbers, to (\mathbb{R}, \times), the multiplicative monoid of real numbers. If f is not the trivial identically-zero function, then (as we saw above) f is in fact a homomorphism from (\mathbb{R}, +), the additive group of real numbers, to (\mathbb{R}_+^*, \times), the multiplicative group of positive real numbers. What we proved is that the exponential functions e^{kx} are precisely all such functions that are nice (nice here meaning either measurable or continuous at at least one point). (Note that this set includes the trivial homomorphism corresponding to k = 0: the function f(x) = 1 identically everywhere. If f is not this trivial map, then it is in fact an isomorphism.)

Edit [2013-10-11]: See also Overview of basic facts about Cauchy functional equation.

Written by S

Mon, 2013-04-08 at 11:24:08

Posted in mathematics

