# The Lumber Room

"Consign them to dust and damp by way of preserving them"

## Boolean blindness

(Just digesting the first page of Google search results.)

One of the lessons from functional programming is to encode as much information as possible into the types. Almost all programmers understand to some extent that types are helpful: they know not to store everything as void* (in C/C++) or as Object (in Java). They even know not to use, say, double for all numbers, or string for everything (as shell scripts and Tcl do); this is pejoratively called “stringly typed”.

A corollary, from slide 19 here, is that (when your type system supports richer types) a boolean type is almost always the wrong choice, as it carries too little information.

The name “Boolean blindness” for this seems to have been coined by Dan Licata while teaching a course at CMU as a PhD student.

There is no information carried by a Boolean beyond its value, and that’s the rub. As Conor McBride puts it, to make use of a Boolean you have to know its provenance so that you can know what it means.
[…]
Keeping track of this information (or attempting to recover it using any number of program analysis techniques) is notoriously difficult. The only thing you can do with a bit is to branch on it, and pretty soon you’re lost in a thicket of if-then-else’s, and you lose track of what’s what.
[…]

The problem is computing the bit in the first place. Having done so, you have blinded yourself by reducing the information you have at hand to a bit, and then trying to recover that information later by remembering the provenance of that bit. An illustrative example was considered in my article on equality:

fun plus x y = if x=Z then y else S(plus (pred x) y)

Here we’ve crushed the information we have about x down to one bit, then branched on it, then were forced to recover the information we lost to justify the call to pred, which typically cannot recover the fact that its argument is non-zero and must check for it to be sure. What a mess! Instead, we should write

 fun plus x y = case x of Z => y | S(x') => S(plus x' y) 

No Boolean necessary, and the code is improved as well! In particular, we obtain the predecessor en passant, and have no need to keep track of the provenance of any bit.

A commenter there says:

To the best of my knowledge, Ted Codd was the first to point out, in his relational model, that there is no place for Boolean data types in entity modeling. It is a basic design principle to avoid characterizing data in terms of Boolean values, since there is usually some other piece of information you are forgetting to model, and once you slam a Boolean into your overall data model, it becomes very hard to version towards a more exact model (information loss).

An example from Hacker News (on an unrelated post):

Basically, the idea is that when you branch on a conditional, information is gained. This information may be represented in the type system and used by the compiler to verify safety, or it can be ignored. If it is ignored, the language is said to have “boolean blindness”.
Example:


if (ptr == NULL) {
... a ...
} else {
... b ...
}


In branch a and branch b, different invariants about ptr hold. But the language/compiler are not verifying any of these invariants.

 data Maybe a = Nothing | Just a 

This defines a type “Maybe”, parameterized by a type variable “a”, and it defines two “data constructors” that can make a Maybe value: “Nothing” which contains nothing, and “Just” which contains a value of type “a” inside it.
This is known as a “sum” type, because the possible values of the type are the sum of all data constructor values.
We could still use this sum data-type in a boolean-blind way:


if isJust mx then
.. use fromJust mx .. -- UNSAFE!
else
.. can't use content of mx ..


However, using pattern-matching, we can use it in a safe way. Assume “mx” is of type “Maybe a”:

 case mx of
   Nothing -> ... -- there is no value of type "a" in our scope
   Just x  -> ... -- "x" of type "a" is now available in our scope!

So when we branch on the two possible cases for the “mx” type, we gain new type information that gets put into our scope.
“Maybe” is of course a simple example, but this is also applicable for any sum type at all.

Another from notes of someone called HXA7241:

A nullable pointer has two ‘parts’: null, and all-other-values. These can be got at with a simple if-else conditional: if p is not null then ... else ... end. And there is still nothing wrong with that. The problem arrives when you want to handle the parts – and you lack a good type system. What do you do with the non-null pointer value? You just have to put it back into a pointer again – a nullable pointer: which is what you started with. So where has what you did with the test been captured? Nowhere. When you handle that intended part elsewhere you have to do the test again and again.

A Reddit discussion: Don Stewart says

Not strongly typed, richly typed. Where evidence isn’t thrown away. Agda is about the closest thing to this we have.

### Relation to “object-oriented programming”

There’s also a good (and unintentionally revealing) example there by user munificent. Consider Harper’s example, which in more explicit Haskell could be:


data Num = Zero | Succ Num

plus :: Num -> Num -> Num
plus Zero     y = y
plus (Succ x) y = Succ (plus x y)


We might write it in Java as the following:

interface Num {}

class Zero implements Num {}

class Succ implements Num {
    public Succ(Num pred) { this.pred = pred; }
    public final Num pred;
}

Num plus(Num x, Num y) {
    if (x instanceof Succ) {        // (1)
        Succ xAsSucc = (Succ) x;    // (2)
        return new Succ(plus(xAsSucc.pred, y));
    } else {
        return y;
    }
}

Here instanceof returns a boolean (comment (1)), but doesn’t carry with it any information about what that boolean represents (namely that x is an instance of Succ) so when we get to the next line (2) we’re forced to do an unsafe cast. As programmers, we know it’s safe from context, but the compiler doesn’t.

There is a way in Java of avoiding this situation (where the programmer has context the compiler doesn’t):


interface Num {
    Num plus(Num other);
}

class Zero implements Num {
    public Num plus(Num other) {
        return other;
    }
}

class Succ implements Num {
    public Succ(Num pred) {
        this.pred = pred;
    }

    public Num plus(Num other) {
        return new Succ(pred.plus(other));
    }

    public final Num pred;
}


But see what has happened (by user aaronia):

There’s a rub though — in your first code snippet, “plus” was written as a non-member function; anyone can write it as a stand alone library. It was modular. But, as you demonstrated, it had the redundant type check.
However, your second code snippet has lost this modularity. You had to add new methods to the Num data type.

I think this is the pitfall of Object-Oriented Programming (which is like “Pants-Oriented Clothing”): objects can talk about themselves, but it’s harder to talk about objects.

If you want to write a function that does similar things for a bunch of types, you have to write similar function definitions in all those different classes. These function definitions do not stay together, so there is no way of knowing or ensuring that they are similar.

Written by S

Sat, 2015-01-31 at 14:55:00

Posted in computer science


## The “most natural” definition of the div and mod functions


[incomplete: must add examples and more discussion]

Most programming languages include a “remainder” or “modulo” function, and also an integer division (“quotient”) function. Given two integers $D$ and $d$, let’s call the results of these functions $r$ and $q$ respectively.

For positive $D$ and $d$, it is clear what $q$ and $r$ should be: $q = \left\lfloor D/d \right\rfloor$ is the largest integer $n$ such that $nd \le D$, and $r = D - qd$ is the remainder which therefore satisfies $0 \le r < d$.

What should we do when, as frequently happens, $D$ is negative, or (as less frequently happens) $d$ is negative?

For negative $D$ and positive $d$, there are two choices when $D$ lies between two multiples of $d$ (i.e. $q_1 d < D < q_2 d$):

(1) Set $q$ to the lesser value, so that $0 \le r < d$ continues to hold, or
(2) Set $q$ to the greater (and therefore smaller in magnitude) value.

There are very good reasons why (1) is preferable to (2): it ensures that $a \bmod m$ is always non-negative no matter what the sign of $a$, so that, for example, $a \equiv b \pmod m \iff (a \bmod m) = (b \bmod m)$.

And indeed that is what the more recent programming languages do. There is a table on Wikipedia: C, C++, Java, Go(!), OCaml(!), PHP, all have the “bad” behaviour, while Maple, Mathematica, Microsoft Excel, Perl, Python, Ruby have the “good” behaviour. Some languages have separate functions for both behaviours (e.g. Haskell has quotRem and divMod functions, similarly Common Lisp, Fortran, MATLAB).
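The difference is easy to see in Python, whose `//` and `%` have the “good” (floored) behaviour, by emulating C-style truncation alongside:

```python
import math

D, d = -7, 2

# Python: floored division; the remainder is non-negative for positive d.
q_floor, r_floor = D // d, D % d
print(q_floor, r_floor)  # -4 1

# C-style: quotient truncated toward zero; the remainder takes
# the sign of the dividend.
q_trunc = math.trunc(D / d)
r_trunc = D - q_trunc * d
print(q_trunc, r_trunc)  # -3 -1

# Both satisfy D == q*d + r; only the floored remainder is non-negative.
assert D == q_floor * d + r_floor == q_trunc * d + r_trunc
```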

There’s also the question of what to do when $d$ is negative, which turns out not to matter much (as long as it’s consistent with the above). One definition is to continue to have $q$ be the lesser value, and the other is to continue to insist that $r \ge 0$. Both are fine, though sometimes the latter is nicer.
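The latter choice — insist that $0 \le r < |d|$ whatever the sign of $d$ — can be sketched in a few lines of Python (a hypothetical helper, leaning on the fact that Python's own `%` already floors):

```python
def euclid_divmod(D: int, d: int) -> tuple[int, int]:
    """Quotient and remainder with 0 <= r < |d|, for any nonzero d."""
    r = D % abs(d)       # Python's % floors, so r is already in [0, |d|)
    q = (D - r) // d     # exact: D - r is a multiple of d
    return q, r

print(euclid_divmod(-7, 2))   # (-4, 1)
print(euclid_divmod(-7, -2))  # (4, 1)
```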

These are elaborated in The Euclidean Definition of the Functions div and mod by Raymond T. Boute, ACM Transactions on Programming Languages and Systems, Vol 14, No. 2, April 1992.

Written by S

Mon, 2015-01-12 at 09:30:35

## Big O() notation: a couple of sources

This post contains, just for future reference, a couple of primary sources relevant to the $O$ (“Big O”) notation:

1. Some introductory words from Asymptotic Methods in Analysis by de Bruijn
2. A letter from Donald Knuth on an approach to teaching calculus using this notation.

Written by S

Thu, 2014-03-13 at 16:33:20

## Visualizing product of permutations

A simple pedagogical trick that may come in handy: represent a permutation $\sigma$ using arrows (curved lines) from $k$ to $\sigma(k)$ for each $k$. Then, the product of two permutations can be represented by just putting the two corresponding figures (sets of arrows) one below the other, and following the arrows.

Representing permutations and products of permutations.

The figure is from an article called Symmetries by Alain Connes, found via the Wikipedia article on Morley’s trisector theorem (something entirely unrelated to permutations, but the article covers both of them and more).

I’m thinking how one might write a program to actually draw these: if we decide that the “height” of the figure is some $h$, then each arrow needs to go from some $(k, 0)$ to $(\sigma(k), h)$ (using here the usual screen convention of $x$ coordinate increasing from left to right, and $y$ coordinate increasing from top to bottom). Further, each curve needs to have vertical slope at its two endpoints, so that successive curves can line up smoothly. The constraint on starting point, ending point, and directions at the endpoints defines almost a quadratic Bezier curve, except that here the two directions are parallel. So it’s somewhere between a quadratic and the (usual) cubic Bezier curve, which is given by the start point, end point, and derivatives at the start and end point. (Here we only care about the direction of the derivative; we can pick some arbitrary magnitude to fix the curve: the larger we pick, the more smooth it will look at the ends, at the cost of smoothness in the interior.)

Even knowing the curve, how do we generate an image?

Written by S

Thu, 2014-03-06 at 23:15:44

## Free, defiant, and without a security label

James Mickens is a CS researcher (“Galactic Viceroy of Research Magnificence”) who among other things wrote for the online version of Usenix’s magazine ;login: (the column ;login: logout, published every other month). Here are some of his articles:

1. [May 2013] The Saddest Moment
2. [July 2013] Mobile Computing Research Is a Hornet’s Nest of Deception and Chicanery
3. [September 2013] The Slow Winter
4. [November 2013] The Night Watch
5. [January 2014] This World of Ours
6. [March 2014] To Wash It All Away

Reading these is an epiphany akin to one’s first encounter with Dave Barry or Airplane!