## Posts Tagged ‘maths’

## All functions are continuous

…or else they are uncomputable :)

This is discussed here and at “What does topology have to do with computability?”.
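For intuition, here is a minimal Python sketch of my own (not taken from the linked posts) of why a discontinuous function like sign cannot be computable: a program only ever inspects finitely many digits of its input, and at a discontinuity no finite approximation can determine the answer.

```python
from fractions import Fraction

def approx_zero(n):
    """The real number 0, given as an approximation oracle:
    approx(n) is a rational within 2**-n of the real."""
    return Fraction(0)

def approx_third(n):
    """The real number 1/3, via dyadic approximations."""
    return Fraction(round(Fraction(1, 3) * 2 ** n), 2 ** n)

def try_sign(approx, max_n=64):
    """Attempt to compute the discontinuous function sign(x).

    Returns +1 or -1 once some finite approximation certifies the
    sign, or None when max_n precision levels are exhausted.  A real
    algorithm has no max_n, so on input 0 it would loop forever:
    no finite approximation ever rules out x > 0 or x < 0.
    """
    for n in range(1, max_n + 1):
        q = approx(n)
        eps = Fraction(1, 2 ** n)
        if q > eps:   # then x > q - eps > 0: sign certified
            return 1
        if q < -eps:  # then x < q + eps < 0: sign certified
            return -1
    return None
```

`try_sign(approx_third)` terminates with 1, but `try_sign(approx_zero)` exhausts its budget: that budget is exactly what a genuine algorithm doesn’t have.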

## Understanding monads

“Sigfpe” (Dan Piponi) has written some helpful posts on his blog. Together with other stuff, I understand it… somewhat. Or maybe I understand it perfectly, I don’t know.

Although they are in descending order of simplicity, this is the order in which I saw them:

First, read the definition of monads from here (only), and stop. It probably won’t make sense.

This excellent post:

You Could Have Invented Monads! (And Maybe You Already Have.)

explains monads with enough examples — they are a kind of “lift”. The definition makes sense now.
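As a tiny illustration of the “lift” idea, here is a Python sketch of my own, loosely modeled on the side-effect-free logging example from that post (the names `unit` and `bind` match the standard monad operations, but everything else here is my invention):

```python
# Functions that return a value *plus* a debug log can't be composed
# directly. "unit" lifts a plain value into the (value, log) world,
# and "bind" composes lifted functions, concatenating their logs.

def unit(x):
    return (x, "")

def bind(f, mx):
    x, log = mx
    y, new_log = f(x)
    return (y, log + new_log)

def halve(x):
    return (x / 2, f"halved {x}; ")

def double(x):
    return (x * 2, f"doubled {x}; ")

value, log = bind(double, bind(halve, unit(8)))
# value is 8.0, and log records both steps in order.
```

The point is that `halve` and `double` never mention logs of *previous* steps; `bind` does all the plumbing, which is exactly the boilerplate a monad abstracts away.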

That, and Philip Wadler’s original paper should suffice.

There are too many monad tutorials, and quoting from Tell us why your language sucks:

So, things I hate about Haskell:

Let’s start with the obvious. Monad tutorials. No, not monads. Specifically the tutorials. They’re endless, overblown and dear god are they tedious. Further, I’ve never seen any convincing evidence that they actually help. Read the class definition, write some code, get over the scary name.

Finally, whether you’re trying to teach someone monads or anything else, you *must* read Brent Yorgey’s Abstraction, intuition, and the “monad tutorial fallacy”:

But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them. “Monads are easy,” Joe writes. “Think of them as burritos.” […] Of course, exactly the opposite is true, and all Joe has done is make it harder for people to learn about monads…

[Random interesting stuff:

This post looks at them as “expressions” vs. “commands”:

The IO Monad for People who Simply Don’t Care

There’s also some interesting stuff in the first half of this post.]

## A nice theorem, and trying to invert a function

[Inspired by N, my first use of LaTeX on this blog… and it’s not as much of a pain as I had thought it would be.]

Here’s a nice theorem (“Levy–Desplanques theorem”):

Given an $n \times n$ matrix $A = (a_{ij})$, if for every row $i$, $|a_{ii}| > \sum_{j \neq i} |a_{ij}|$, then $\det A \neq 0$.

It’s quite easy to prove: Suppose $\det A = 0$; then there exists a nonzero vector $x$ such that $Ax = 0$. Pick a coordinate $i$ that has maximum magnitude (i.e., let $|x_i| = \max_j |x_j|$). Then look at row $i$. We have $\sum_j a_{ij}x_j = 0$, so $|a_{ii}||x_i| = \left|\sum_{j \neq i} a_{ij}x_j\right| \le \sum_{j \neq i} |a_{ij}||x_j| \le |x_i|\sum_{j \neq i} |a_{ij}| < |a_{ii}||x_i|$, which is a contradiction.
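A quick numerical sanity check of the theorem (a sketch of my own; the matrix is just an arbitrary strictly diagonally dominant example):

```python
# Verify the Levy-Desplanques theorem on one strictly diagonally
# dominant matrix: each |a_ii| exceeds the sum of the other |a_ij|
# in its row, so the determinant must be nonzero.

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = [[4, 1, 1],
     [1, 5, 2],
     [0, 1, 3]]

dominant = all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(3) if j != i)
               for i in range(3))
# dominant is True, and det(A) is nonzero, as the theorem guarantees.
```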

This theorem has apparently been discovered and rediscovered many times. According to Wikipedia:

This result has been independently rediscovered dozens of times. A few notable ones are Lévy (1881), Desplanques (1886), Minkowski (1900), Hadamard (1903), Schur, Markov (1908), Rohrbach (1931), Gershgorin (1931), Artin (1932), Ostrowski (1937), and Furtwängler (1936).

Olga Taussky has written a short paper, “A Recurring Theorem on Determinants”, in which she mentions 25 references. (This was in 1949… it’s probably been rediscovered a lot of times since.)

Applying this theorem to $A - \lambda I$ gives us the *Gershgorin circle theorem*: For an $n \times n$ matrix $A = (a_{ij})$, let $R_i = \sum_{j \neq i} |a_{ij}|$; then for any eigenvalue $\lambda$ of $A$, there exists an $i$ such that $|\lambda - a_{ii}| \le R_i$.

That is, consider discs centred at $a_{ii}$ and of radius $R_i$. Then every eigenvalue lies inside one of these “Gershgorin discs”.
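A small worked check of the circle theorem (my own sketch; the matrix is 2×2 so the eigenvalues come straight from the quadratic formula):

```python
import math

# For A = [[5, 1], [2, 6]] the Gershgorin discs are centred at 5 and 6
# with radii 1 and 2. The eigenvalues of a 2x2 matrix are the roots of
# x^2 - trace*x + det = 0.
A = [[5, 1],
     [2, 6]]
trace = A[0][0] + A[1][1]                        # 11
determinant = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 28
root = math.sqrt(trace ** 2 - 4 * determinant)   # sqrt(9) = 3
eigenvalues = [(trace - root) / 2, (trace + root) / 2]

centres = [A[0][0], A[1][1]]          # disc centres a_11, a_22
radii = [abs(A[0][1]), abs(A[1][0])]  # off-diagonal row sums R_1, R_2

# Each eigenvalue lies in at least one Gershgorin disc:
covered = all(any(abs(ev - c) <= r for c, r in zip(centres, radii))
              for ev in eigenvalues)
```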

(This theorem can also be proved directly in a similar way, in which case the original theorem follows directly.)

Where I encountered this: In an economics paper, where we have a function that is a *demand system*. That is, there are $n$ (substitutable) varieties of a product, and the demand for each product is a function of all the prices. So for a given vector of prices, there is a vector of demands; thus we have a function $f: \mathbb{R}^n \to \mathbb{R}^n$, such that $f(p)$ is a vector whose $i$th component is the demand for the $i$th product when the prices are $p$. In this case, it is “natural” to assume (rather, not too implausible :-)) that in the Jacobian of this function (the matrix of partial derivatives whose $(i,j)$th entry is the partial derivative of the demand for the $i$th product with respect to the $j$th price), we have:

- $\frac{\partial f_i}{\partial p_i} < 0$: If you increase your price, fewer people will want your product.
- $\frac{\partial f_i}{\partial p_j} > 0$ for $j \neq i$: If someone else increases their price, you can expect to get at least a few converts to your product.
- $\sum_j \frac{\partial f_i}{\partial p_j} < 0$, i.e. $\left|\frac{\partial f_i}{\partial p_i}\right| > \sum_{j \neq i} \left|\frac{\partial f_i}{\partial p_j}\right|$: If everyone (including you) increases their price, the demand for your product does not go up. It also implies that the demand for your product depends more on your price than on everyone else’s prices put together!

Let’s look at the function $g = -f$ instead, so that the three properties above say that the Jacobian of $g$ has diagonal entries positive, off-diagonal entries nonpositive, and the *positive dominant diagonal property* in each row. It can be proved that any matrix with the positive dominant diagonal property is a *P-matrix*, i.e., it has all principal minors positive. And there is a “global univalence theorem” due to Gale-Nikaido-Inada: a differentiable function defined on a rectangular region, whose Jacobian is a P-matrix at every point, is globally one-to-one. (So it is invertible if we ignore points outside its range.) (I saw this theorem stated in the paper “The Jacobian matrix, global univalence and completely mixed games” by T. Parthasarathy and G. Ravindran — I vaguely remember TP mentioning something of this sort in class some time….)
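A quick check of the P-matrix claim on one example (a sketch of my own; the matrix below is an arbitrary one matching the sign pattern described above: positive diagonal, nonpositive off-diagonal, dominant diagonal in each row):

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Positive diagonal, nonpositive off-diagonal, and each row's diagonal
# entry strictly dominating the sum of the other entries' magnitudes.
G = [[ 3, -1, -1],
     [-1,  4, -2],
     [ 0, -1,  2]]

def principal_minors(m):
    """Determinants of all submatrices on matching row/column index sets."""
    n = len(m)
    for size in range(1, n + 1):
        for idx in combinations(range(n), size):
            yield det([[m[i][j] for j in idx] for i in idx])

# Every principal minor is positive, so G is a P-matrix.
all_positive = all(minor > 0 for minor in principal_minors(G))
```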

So our demand system is globally invertible, and we can express the prices in terms of the demands. That was a lot of effort to get this!