## Archive for **January 2015**

## Boolean blindness

*(Just digesting the first page of Google search results.)*

One of the lessons from functional programming is to encode as much information as possible into the types. Almost all programmers understand to some extent that types are helpful: they know not to store everything as `void*` (in C/C++) or as `Object` (in Java). They even know not to use, say, `double` for all numbers, or `string` for everything (as shell scripts / Tcl do). (This is pejoratively called being “stringly typed”.)
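As a hypothetical sketch of the failure mode (the function names and the `EmailAddress` type are invented for illustration), compare a stringly-typed signature with one that encodes the distinction in the types; a checker such as mypy can reject the swapped call only in the second version:

```python
from typing import NewType

EmailAddress = NewType("EmailAddress", str)

# Stringly typed: both parameters are strings, so accidentally
# swapping the arguments is not caught by any type checker.
def send_stringly(to: str, retries: str) -> str:
    return f"sending to {to} ({retries} retries)"

# Distinct types: the swapped call no longer type-checks under mypy.
def send_typed(to: EmailAddress, retries: int) -> str:
    return f"sending to {to} ({retries} retries)"

print(send_stringly("3", "bob@example.com"))  # arguments swapped, runs anyway
print(send_typed(EmailAddress("bob@example.com"), 3))
```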

A corollary, from slide 19 here, is that (when your type system supports richer types) a boolean type is almost always the wrong choice, as it carries too little information.

The name “Boolean blindness” for this seems to have been coined by Dan Licata when he taught a course at CMU as a PhD student.

From here (blog post by Robert Harper, his advisor at CMU):

There is no information carried by a Boolean beyond its value, and that’s the rub. As Conor McBride puts it, to make use of a Boolean you have to know its provenance so that you can know what it means.

[…]

Keeping track of this information (or attempting to recover it using any number of program analysis techniques) is notoriously difficult. The only thing you can do with a bit is to branch on it, and pretty soon you’re lost in a thicket of if-then-else’s, and you lose track of what’s what.

[…] The problem is computing the bit in the first place. Having done so, you have blinded yourself by reducing the information you have at hand to a bit, and then trying to recover that information later by remembering the provenance of that bit. An illustrative example was considered in my article on equality:

`fun plus x y = if x=Z then y else S(plus (pred x) y)`

Here we’ve crushed the information we have about x down to one bit, then branched on it, then were forced to recover the information we lost to justify the call to pred, which typically cannot recover the fact that its argument is non-zero and must check for it to be sure. What a mess! Instead, we should write

`fun plus x y = case x of Z => y | S(x') => S(plus x' y)`

No Boolean necessary, and the code is improved as well! In particular, we obtain the predecessor en passant, and have no need to keep track of the provenance of any bit.

A commenter there says:

To the best of my knowledge, Ted Codd was the first to point out, in his relational model, that there is no place for Boolean data types in entity modeling. It is a basic design principle to avoid characterizing data in terms of Boolean values, since there is usually some other piece of information you are forgetting to model, and once you slam a Boolean into your overall data model, it becomes very hard to version towards a more exact model (information loss).
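To illustrate the commenter's point with a hypothetical Python sketch (the `Post`/`Deletion` names are invented for illustration): a bare `deleted` boolean forgets *when* and *why*, whereas an optional record keeps that information and can still answer the yes/no question on demand:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Deletion:
    at: datetime
    reason: str

@dataclass
class Post:
    title: str
    # Instead of `deleted: bool`, model what the boolean was summarizing.
    deletion: Optional[Deletion] = None

    @property
    def is_deleted(self) -> bool:
        # The boolean is derived on demand, never stored.
        return self.deletion is not None

p = Post("Hello", Deletion(datetime(2015, 1, 1), "spam"))
print(p.is_deleted, p.deletion.reason)
```

Versioning towards a more exact model is now an ordinary change to the `Deletion` record rather than a rework of every place that stored the bit.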

An example from Hacker News (on an unrelated post):

Basically, the idea is that when you branch on a conditional, information is gained. This information may be represented in the type system and used by the compiler to verify safety, or it can be ignored. If it is ignored, the language is said to have “boolean blindness”.

Example: `if (ptr == NULL) { ... a ... } else { ... b ... }`

In branch a and branch b, different invariants about `ptr` hold. But the language/compiler is not verifying any of these invariants.

Instead, consider:

`data Maybe a = Nothing | Just a`

This defines a type “Maybe”, parameterized by a type variable “a”, and it defines two “data constructors” that can make a Maybe value: “Nothing” which contains nothing, and “Just” which contains a value of type “a” inside it.

This is known as a “sum” type, because the possible values of the type are the sum of all data constructor values.

We could still use this sum data-type in a boolean-blind way:

```
if isJust mx
then .. use fromJust mx ..  -- UNSAFE!
else .. can't use content of mx ..
```

However, using pattern-matching, we can use it in a safe way. Assume “mx” is of type “Maybe a”:

```
case mx of
  Nothing -> ... -- There is no value of type "a" in our scope
  Just x  -> ... -- "x" of type "a" is now available in our scope!
```

So when we branch on the two possible cases for the “mx” type, we gain new type information that gets put into our scope.

“Maybe” is of course a simple example, but this is also applicable for any sum type at all.

Another from notes of someone called HXA7241:

A nullable pointer has two ‘parts’: null, and all-other-values. These can be got at with a simple if-else conditional:

`if p is not null then ... else ... end`

And there is still nothing wrong with that. The problem arrives when you want to handle the parts – and you lack a good type system. What do you do with the non-null pointer value? You just have to put it back into a pointer again – a *nullable* pointer: which is what you started with. So where has what you did with the test been captured? Nowhere. When you handle that intended part elsewhere you have to do the test again and again.

A Reddit discussion: Don Stewart says

Not strongly typed, *richly* typed. Where evidence isn’t thrown away. Agda is about the closest thing to this we have.

### Relation to “object-oriented programming”

There’s also a good (and unintentionally revealing) example there by user `munificent`. Consider Harper’s example, which in more explicit Haskell could be:

```
data Num = Zero | Succ Num
plus :: Num -> Num -> Num
plus Zero y = y
plus (Succ x) y = Succ (plus x y)
```

We might write it in Java as the following:

```
interface Num {}

class Zero implements Num {}

class Succ implements Num {
  public Succ(Num pred) {
    this.pred = pred;
  }
  public final Num pred;
}

Num plus(Num x, Num y) {
  if (x instanceof Succ) { // (1)
    Succ xAsSucc = (Succ)x; // (2)
    return new Succ(plus(xAsSucc.pred, y));
  } else {
    return y;
  }
}
```

Here `instanceof` returns a boolean (comment (1)), but it doesn’t carry with it any information about what that boolean represents (namely that `x` is an instance of `Succ`), so when we get to the next line (2) we’re forced to do an unsafe cast. As programmers, we know it’s safe from context, but the compiler doesn’t.

There *is* a way in Java of avoiding this situation (where the programmer has context the compiler doesn’t):

```
interface Num {
  Num plus(Num other);
}

class Zero implements Num {
  public Num plus(Num other) {
    return other;
  }
}

class Succ implements Num {
  public Succ(Num pred) {
    this.pred = pred;
  }
  public Num plus(Num other) {
    return new Succ(pred.plus(other));
  }
  public final Num pred;
}
```

But see what has happened (as noted by user `aaronia`):

There’s a rub though — in your first code snippet, “plus” was written as a non-member function; anyone can write it as a stand alone library. It was modular. But, as you demonstrated, it had the redundant type check.

However, your second code snippet has lost this modularity. You had to add new methods to the Num data type.

I think this is the pitfall of Object-Oriented Programming (which is like “Pants-Oriented Clothing”): objects can talk about themselves, but it’s harder to talk about objects.

If you want to write a function that does similar things for a bunch of types, you have to write similar function definitions in all those different classes. These function definitions do not stay together, so there is no way of knowing or ensuring that they are similar.

## C++ vs Java terminology glossary

E.g. there is no such thing as a “method” in C++. As far as the C++ standard is concerned, functions inside classes are always called “member functions”.

Here is a handout from this course (taught by, AFAICT, Jeffrey S. Leon) that helps me understand the Java people.

## The “most natural” definition of the div and mod functions

[incomplete: must add examples and more discussion]

Most programming languages include a “remainder” or “modulo” function, and also an integer division (“quotient”) function. Given two integers *a* and *n*, let’s call the results of these functions *r* and *q* respectively.

For positive *a* and *n*, it is clear what *q* and *r* should be: *q* is the largest integer such that *qn* ≤ *a*, and *r* is the remainder *a* − *qn*, which therefore satisfies 0 ≤ *r* < *n*.

What should we do when, as frequently happens, *a* is negative, or (as less frequently happens) *n* is negative?

For negative *a* and positive *n*, there are two choices when *a* lies strictly between two multiples of *n* (i.e. *kn* < *a* < (*k* + 1)*n* for some integer *k*):

(1) Set *q* to the *lesser* value *k*, so that 0 ≤ *r* < *n* continues to hold, or

(2) Set *q* to the *greater* (and therefore smaller in magnitude) value *k* + 1, in which case *r* is negative.

There are very good reasons why (1) is preferable to (2): it ensures that the mod function is always non-negative no matter what the value of *a*, so that, for example, (−1) mod 2 = 1 rather than −1.

And indeed that is what the more recent programming languages do. There is a table on Wikipedia: C, C++, Java, Go(!), OCaml(!), PHP all have the “bad” behaviour, while Maple, Mathematica, Microsoft Excel, Perl, Python, Ruby have the “good” behaviour. Some languages have separate functions for both behaviours (e.g. Haskell has `quotRem` and `divMod` functions; similarly Common Lisp, Fortran, MATLAB).
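The difference is easy to demonstrate in Python, whose `//` and `%` follow the “good” behaviour (1), while truncating the quotient towards zero, as C does, gives behaviour (2); a small sketch:

```python
# Python: floored division, so the remainder is always non-negative
# when the divisor is positive.
assert (-7) // 2 == -4
assert (-7) % 2 == 1

# C-style truncation towards zero gives the other choice:
q = int(-7 / 2)   # int() truncates: -3, not -4
r = -7 - q * 2    # the remainder is then negative: -1
assert (q, r) == (-3, -1)

# Both conventions satisfy a == q*n + r.
assert (-4) * 2 + 1 == -7 and (-3) * 2 + (-1) == -7
```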

There’s also the question of what to do when *n* is negative, which turns out not to matter much (as long as it’s consistent with the above). One definition is to continue to have *q* be the lesser value (so that *r* takes the sign of *n*), and the other is to continue to insist that 0 ≤ *r* < |*n*|. Both are fine, though sometimes the latter is nicer.
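With a negative divisor the two definitions come apart; here is a sketch in Python, whose built-in `divmod` keeps *q* the lesser value (so *r* takes the sign of *n*), alongside a hypothetical `divmod_euclid` helper implementing the 0 ≤ *r* < |*n*| convention:

```python
# Built-in: q = floor(a/n), so r has the sign of the divisor.
assert divmod(7, -2) == (-4, -1)

# Euclidean convention: insist on 0 <= r < |n|.
def divmod_euclid(a: int, n: int) -> tuple[int, int]:
    q, r = divmod(a, n)
    if r < 0:          # can only happen when n < 0 in Python
        q, r = q + 1, r - n
    return q, r

assert divmod_euclid(7, -2) == (-3, 1)   # -3 * -2 + 1 == 7
assert divmod_euclid(-7, 2) == (-4, 1)   # same as built-in here
```

The adjustment preserves the invariant *a* = *qn* + *r*, since (*q* + 1)*n* + (*r* − *n*) = *qn* + *r*.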

These are elaborated in “The Euclidean Definition of the Functions div and mod” by Raymond T. Boute, *ACM Transactions on Programming Languages and Systems*, Vol. 14, No. 2, April 1992.