Monday, October 29, 2018

Book Review: A Philosophy of Software Design

I’m trying to read all the good writing about software design. This is very easy because not very much has been written: it turns out that it’s much easier to write an article about how to write a Tetris AI as a containerized Kotlin microservice than it is to shed insight on how to write good code. And so, when I heard about John Ousterhout’s new book “A Philosophy of Software Design,” I ordered my copy immediately.

I remember John Ousterhout from Stanford’s grad student visit day as the tall guy who introduced himself with a self-deprecating joke and invited all the Ph.D. admits over to dinner at his house. I know him also as the father of Kay Ousterhout, whom I recently met as a fellow speaker at Strange Loop, and Amy Ousterhout, who together are the first pair of sisters to both win the prestigious Hertz Fellowship.

At 170 pages, “A Philosophy of Software Design” (henceforth: PoSD) is a humble book. John’s background is in systems rather than in software engineering or programming languages, and he never claims special expertise. But his practitioner cred is immense. I enjoy tearing apart open-source projects and turning them into case studies of what not to do, so much so that my students have requested I write a case study about good code for once. RAMCloud, Ousterhout’s distributed in-memory storage system, is now on my shortlist: from a 5-minute glance, it’s among the cleanest and best-documented code I’ve seen. And, given that he’s a busy professor managing a large lab, he’s written a surprising amount of it himself. He’s had plenty of impact too: he’s the creator of the Tcl language and its Tk framework, which I learned in 2005 as The Way to Write GUIs(™).

PoSD is best read as a tactical guide of how-to’s. About a quarter of it is spent on naming and comments, and much of the rest is about specific patterns. His few attempts to jump from tactical advice to principles are either done by trying to blur together similar-sounding tips, or are hamstrung by his inability to see the meaning of a program beyond the code (more on that later). He demonstrates the lack of principles comically in Chapter 19, where he promises to apply the book’s “principles” to several software trends, and then fills the rest of the chapter with standard (but solid) advice on unit-testing and OOP, with nary a reference to the rest of the book. On the whole, the book’s advice is higher-level than beginner books like Clean Code, but most of its contents will be familiar to a senior software engineer, and the novel parts are hit-and-miss.

Following other books like Code Simplicity, PoSD starts with a high-minded explanation of the benefits of good code and the dangers of complexity. Its early chapters are a grand tour of the basic concepts of software organization: separating levels of abstraction, isolating complexity, and when to break up functions. Chapter 5 is one of the most approachable introductions I’ve seen to Parnas’s ideas about information hiding. But it’s Chapter 4 where he introduces the book’s central idea: deep modules. An interface, explains Ousterhout, is not just the function signatures written in the code. It also includes informal elements: high-level behavior, constraints on ordering; anything a developer needs to know to use it. Many modules are shallow: they take a lot to explain, but don’t actually do that much. A good module is deep: the interface should be much simpler than the implementation.

Beautiful, obvious, and impossible to disagree with. Unfortunately, it’s also objectively wrong.

When specifications are longer than the code

It sounds pretty nice to say “interfaces should be shorter than the implementation.” How do you test it?

To Ousterhout, the interface is just a comment and some discussion about whether it’s simple to use and think about. Intuition and experience are the sole arbiters here. And this reveals his major blind spot.

I’ve explained before that the important information of software design is not in the code (Level 2), but in the logic: the specifications and reasoning that are rarely written down concretely, but shape the code nonetheless. I group these artifacts into the aggregate “Level 3 constructs.” The “informal interface” Ousterhout describes is such a Level 3 construct; it’s just as real as the code, and, contrary to Ousterhout, there are plenty of programming languages that do let you write such interfaces down and check them.

Experience doing this gives us concrete grounding when we talk software design. It’s how we move into the post-rigorous stage of software engineering, and know what we mean when we use terms like “interface” and “complexity.” It defends us against making confused and contradictory statements. Ousterhout lacks this insight, and that’s how he gets burned.

I’m going to pause for a moment and tell you: I like this book overall. It’s well-written, and there’s a lot of advice in the book that I consider useful even though it’s on shaky ground, and more that doesn’t depend on this at all. Still, Ousterhout makes a big deal out of it, and so I’ll be taking a couple pages to explain why it’s wrong. These ideas are important, because they’re part of what leads to the higher levels of mastery.

My view is that Ousterhout’s “informal interface” is just the translation into English of a formal specification. Any question we have about interfaces can be answered by asking the question “what would a specification look like?” While I can’t prove the correspondence without peeking into Ousterhout’s head more than I’ve gotten to in our back-and-forth, I’ve found this lens unreasonably effective in helping to explain software design. And so, for the remainder of this post, I’ll be using the words “spec” and “interface” interchangeably.

I agree that the spec should usually be much simpler than the code. But anyone with experience actually formalizing specs can tell you that there are interesting cases where the specification is and should be more complicated than the implementation.

That’s right: there are times when it’s actually desirable to have a specification more complicated than the code. Two major reasons are ghost state and imprecision. Ghost state is a concept from verification that describes certain kinds of “subtle” code. It’s an interesting subject that deserves its own blog post; I won’t mention it again. (Short version: it’s when a simple action like flipping a bit actually represents something conceptually complicated.)

Imprecision is a bigger one. For example:

  • Specification: The temperature of the Fudarkameter will be between 60 and 90 degrees.
  • Implementation: The temperature of the Fudarkameter is 70 degrees.

The specification is longer precisely because it creates an abstraction barrier. If you design the rest of the system assuming the Fudarkameter is exactly 70 degrees, then the Fudarkameter becomes much harder to change or replace. By weakening the assumptions placed on a module, code becomes more evolvable.
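As a toy sketch of this point (the “Fudarkameter” is the post’s invented device; all names here are hypothetical), an interface can promise only a range while an implementation happens to be exact:

```java
// Hedged sketch: the interface states only a range, while one
// implementation returns an exact value. The spec is "bigger" than the code.
interface Fudarkameter {
    // Contract: the returned temperature is always in [60, 90] degrees.
    int temperature();
}

class FixedFudarkameter implements Fudarkameter {
    // Simpler than its spec: one exact value instead of a range. Clients
    // that assume "exactly 70" would break when this class is replaced.
    public int temperature() { return 70; }
}

public class FudarkameterDemo {
    public static void main(String[] args) {
        Fudarkameter f = new FixedFudarkameter();
        // Correct clients rely only on the weaker, range-based contract.
        System.out.println(f.temperature() >= 60 && f.temperature() <= 90); // true
    }
}
```

A client written against the range can be handed any conforming replacement; a client written against “exactly 70” cannot.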

On top of these, there’s another fundamental reason: It is much easier to describe something from the inside than from the outside. It is much easier to show you an apple than to answer every question you may ask of it. (Where are the seeds? How will it roll when I drop it?) And while there is more you can say about a single apple than all the apples in the world, there are more things that may be true about some apple than about a single apple.

As an example, let’s take a stack data structure, something I hope we can all agree is a useful abstraction. A stack is a sequence with push and pop operations, following the last-in-first-out ordering. The linked-list implementation is very short: just adding and removing elements off the front of the list. But if you use a stack, and you don’t want to use internal details of this implementation, then you need a way to think about it that doesn’t reference the underlying sequence. One solution is to use the stack axioms, which say things like “If you push something onto a stack and then pop from the stack, you get the old value back” and “If you’ve ever pushed something onto a stack, then it’s not empty.” We’ve gone from the internal view of explaining how the stack operations manipulate memory, to the external view of explaining their interactions and observable behaviors.
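To make the contrast concrete, here is a minimal linked-list stack sketch (hypothetical names, not from the book), with the axioms it satisfies noted as comments:

```java
import java.util.NoSuchElementException;

// A minimal linked-list stack: the implementation is a few lines, while an
// external description of its behavior (the stack axioms) takes more to state.
class LinkedStack<T> {
    private static final class Node<U> {
        final U value;
        final Node<U> next;
        Node(U value, Node<U> next) { this.value = value; this.next = next; }
    }

    private Node<T> top = null;

    void push(T value) { top = new Node<>(value, top); }

    T pop() {
        if (top == null) throw new NoSuchElementException("empty stack");
        T v = top.value;
        top = top.next;
        return v;
    }

    boolean isEmpty() { return top == null; }
}

public class StackDemo {
    public static void main(String[] args) {
        LinkedStack<Integer> s = new LinkedStack<>();
        // Axiom: popping right after pushing x yields x and restores the stack.
        s.push(42);
        System.out.println(s.pop());     // 42
        // Axiom: a stack that has just been pushed onto is not empty.
        s.push(7);
        System.out.println(s.isEmpty()); // false
    }
}
```

The implementation is just pointer-swapping on the front of a list; the axioms describe every observable interaction without mentioning that list at all.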

In my public correspondence with Prof. Ousterhout, I illustrated this by writing down an implementation and interface for a stack data structure, including the stack axioms. My implementation was 30 tokens; the interface was 54.

Perhaps you can find a shorter way to explain stacks, but this is not looking good. It seems that Ousterhout’s advice, held under a microscope, is actually telling us we should not use stacks in our code (or, at least, only use the more complicated implementations, like lock-free concurrent stacks).

A “Simple” API

It’s easy for the interface for stacks to be larger than the implementation because they’re so small. Now, let’s look at something larger. I don’t need to look very hard for an example, because Ousterhout gives me one.

The mechanism for file IO provided by the Unix operating system and its descendants, such as Linux, is a beautiful example of a deep interface. There are only five basic system calls for I/O, with simple signatures:

int open(const char* path, int flags, mode_t permissions);
ssize_t read(int fd, void* buffer, size_t count);
ssize_t write(int fd, const void* buffer, size_t count);
off_t lseek(int fd, off_t offset, int referencePosition);
int close(int fd);

The POSIX file API is a great example, but not of a deep interface. Rather, it’s a great example of how code with a very complicated interface may look deceptively simple when reduced to C-style function signatures. It’s a stateful API with interesting orderings and interactions between calls. The flags and permissions parameters of open hide an enormous amount of complexity, with hidden requirements like “exactly one of these five bits should be specified.” open may return 20 different error codes, each with their own meaning, and many with references to specific implementations.

The authors of SibylFS tried to write down an exact description of the open interface. Their annotated version of the POSIX standard is over 3000 words. Not counting basic machinery, it took them over 200 lines to write down the properties of open in higher-order logic, and another 70 to give the interactions between open and close.

For comparison, while it’s difficult to do the accounting for the size of a feature, their model implementation is a mere 40 lines.

Yes, the real versions in Linux are much longer, even if you don’t count the more general “inode” machinery it’s based on. And you can get by with only a partial understanding of the API. But, having looked at its semantics, a Level 3 artifact, we now have a much truer sense of the complexity of this API beyond the “simple” signature.

There’s a lot that could be done to improve this API, but there’s a fundamental reason why the implementation can be shorter. This is an interface meant to describe every possible implementation of open. Applications that follow it can work with any of them. And so how can it be simpler than the simplest implementation?

So, when coupled with its simpler implementations, open is indeed one of the shallow APIs that Ousterhout reviles. And given how much variety it’s meant to encapsulate, to some extent that’s inevitable.

(Ousterhout’s counterargument is: “You’re just talking about the specification, rather than how easy they are to use to write code that works.” Looking at what’s in the spec, I’d say that knowing how it interprets file paths and what the O_RDONLY flag does are both very much part of knowing how to use it.)

Perhaps a more penetrating example is this write function for a simple replicated disk. This is a system that acts like one disk, but copies everything to two underlying disks, so that it can still run even if one fails. Here’s the write function, transliterated from Coq into C:

void write(addr a, block *b) {
  disk1->write(a, b);
  disk2->write(a, b);
}

What’s a spec for this function? Well, it writes b to both disks, but does nothing to a disk if that disk is dead. And if the system crashes midway through, then either neither write succeeded, only the first write succeeded, or both writes succeeded. In Coq:

 {|
   pre :=
     disk0 state ?|= eq d /\
     disk1 state ?|= eq d;
   post :=
     fun r state' =>
       r = tt /\
       disk0 state' ?|= eq (diskUpd d a b) /\
       disk1 state' ?|= eq (diskUpd d a b);
   recovered :=
     fun _ state' => write_recover_condition d a b state'
   ;
 |})

[...]

Definition write_recover_condition d a b state' :=
  (disk0 state' ?|= eq d /\ disk1 state' ?|= eq d)
  \/ (disk0 state' ?|= eq (diskUpd d a b) /\ disk1 state' ?|= eq d)
  \/ (disk0 state' ?|= eq (diskUpd d a b) /\ disk1 state' ?|= eq (diskUpd d a b)).

Yes, that’s the internal API. The spec for the external API is simpler, but still longer than the code.

There’s a lot more fun elsewhere in that file. My specs for the recovery procedures total 70 complicated lines, compared to 29 simple lines for the implementation. This is because, when writing this kind of code, you need to constantly be asking “what happens if there’s a crash on this line?” It’s easy to miss that and think the code is simple, but the logic lays it all bare. Hence, the interfaces are much longer than the code.

So, Ousterhout’s big insight about deep modules is flawed, and advice based on it is unreliable. Using it, he attacks the common wisdom of making small classes/methods, but doesn’t give a way to distinguish when doing so is abstracting something vs. merely adding indirection.

And there are a lot of smaller flaws throughout the book that come from not engaging directly with Level 3 constructs. In an early discussion of coupling, for instance, he discusses how parsing and serialization code for a binary protocol may depend on each other, but it’s more accurate to say that they both depend on the protocol, which is Level 3 and exists outside the code. (Indeed, if you were to use a tool to synthesize a parser from a serializer, you’d do it by first inferring the protocol, and then generating the parsing code from the protocol.)

Into the weeds

After Chapter 9, the book moves away from trying to give broad coding principles into softer territory, as well as more specific coding practices. Chapter 10, “Define errors out of existence,” was the most unusual and thought-provoking chapter for me. I came in expecting some cousin of the “make invalid states unrepresentable” stuff that I teach. What I actually found was a pastiche of different tricks for changing the spec of a function to tolerate more inputs/situations.

When I was trying to pin down each piece of advice in this chapter, I found that some of it was actually the opposite of others. In Section 10.9, he implores us to “Design special cases out of existence.” Specifically, he explains how, in a text-editing application, modeling the application state as “a selection always exists, but may be empty” removes the need for special code to handle the case where there is no selection. In other words, take a conditional out of the spec for the function. But in Section 10.5, he tells us that we should add a conditional to the spec of a function, namely in making Java’s substring method defined for out-of-bounds indices. I’m not completely sure he’s wrong (as I discuss in my Strange Loop talk, it comes down to: is there a clean way to describe this behavior?), but I find his claims that this makes the code “simpler” only slightly more credible than his claims about the Unix file API.
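The Section 10.9 idea can be sketched in a few lines (hypothetical names, not the book’s code): model the selection as a range that always exists but may be empty, and the “no selection” conditional disappears from the operations:

```java
// Sketch of "a selection always exists, but may be empty": delete-selection
// needs no "is there a selection?" check, because an empty range is a no-op.
public class Editor {
    private final StringBuilder text = new StringBuilder("hello world");
    // Invariant: 0 <= start <= end <= text.length(). An empty selection is
    // simply start == end; there is no null "no selection" case.
    private int start = 0, end = 0;

    void select(int from, int to) { start = from; end = to; }

    void deleteSelection() {
        // Uniform behavior: deleting an empty range deletes nothing.
        text.delete(start, end);
        end = start;
    }

    String contents() { return text.toString(); }

    public static void main(String[] args) {
        Editor e = new Editor();
        e.deleteSelection();              // empty selection: nothing happens
        System.out.println(e.contents()); // hello world
        e.select(5, 11);
        e.deleteSelection();
        System.out.println(e.contents()); // hello
    }
}
```

The conditional hasn’t moved into the code; it’s been removed from the spec, which is exactly what makes this the opposite of the Section 10.5 substring advice.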

The next 7 chapters are the soft parts of the book. Chapter 11 argues that you should consider at least two designs for everything, an instance of multi-tracking in decision-making. The following chapters on comments are well-written if at times moralizing, and I don’t have a solid basis for the parts where I disagree. I strongly approve of his practice of writing “See comment in <other file>” whenever he implements an interface (breaking the hidden coupling between comments). Seeing it in action in the RAMCloud codebase was beautiful.

It’s not until the second-to-last chapter, “Designing for Performance,” that Ousterhout shifts from an enthusiast to an expert. The chapter centers on his “Design around the critical path” concept, reminiscent of Carmack’s comments on inlined code, and a clearly-written case study in RAMCloud. The chapter shines with battle-won experience, and I’d gladly read a book-length treatment from him on this topic. I only wish it had come earlier.

Of names and types

Throughout the second half of the book, I only found two notable pieces of advice that I regard as bad.

Near the end of the book, Ousterhout attacks code like the following:

private List<Message> incomingMessageList;

incomingMessageList = new ArrayList<Message>();

Why is it being declared as a List even though it’s an ArrayList, Ousterhout asks? Isn’t that making it less obvious? After all, ArrayLists have their own performance properties.

Yes, but it needlessly ties the code to the specific implementation of ArrayList, and can make the code harder to change. Joshua Bloch thoroughly argues for the opposite advice using an almost-identical example in Item 52, “Refer to objects by their interfaces,” in his book Effective Java.

After discussing this example with Ousterhout though, it sounds like he, Bloch, and I are all in agreement. Ousterhout tells people this on the assumption that they wouldn’t be using an ArrayList unless they needed the specific performance guarantees of ArrayList, which is a situation I’ve never encountered. (Conversely, I’ve gotten burned from having to conform to an interface that took ArrayList<Integer>, when I needed to write something that used less memory.) It’s cutting out details and caveats like this that gives Ousterhout his short book length, but also transmutes sound advice into something ripe for misuse, and destroys a lot of the value it has over raw intuition.
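Bloch’s side of the trade-off can be sketched in a few lines (hypothetical names): because only the initializer names the concrete class, swapping implementations is a one-line change:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "refer to objects by their interfaces": the field's type is the
// interface, so only the initializer mentions ArrayList. Changing it to,
// say, new LinkedList<>() touches no other line and no caller.
public class Inbox {
    private final List<String> incomingMessages = new ArrayList<>();

    void receive(String m) { incomingMessages.add(m); }

    int size() { return incomingMessages.size(); }

    public static void main(String[] args) {
        Inbox inbox = new Inbox();
        inbox.receive("hi");
        System.out.println(inbox.size()); // 1
    }
}
```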

The second piece of bad (well, misleading) advice comes out of a discussion of a pernicious bug in Sprite, a distributed operating system. On rare occasions, some data would be randomly overwritten. The culprit was when an integer representing a logical block within a file (called a “block”) was used as the address of a physical block on the disk (also called a “block”). Echoing Spolsky, Ousterhout recommends his fix: come up with the perfect name for each variable.

Taking names seriously, and never using the same name for two different concepts, is a good idea. And, as noted before by commenters on Spolsky, there’s a much better choice for the primary defense mechanism.

The designers of ART, the Java VM that runs on every Android device, had a similar problem. They had many different kinds of pointers which should not be assigned to each other. References to objects controlled by the garbage collector needed to be kept separate from objects internal to the runtime. Some were 64-bit pointers compressed to 32-bit, and couldn’t be dereferenced directly.

Their insight was that these values are already treated like different types, and there’s hence little complexity cost in making this explicit. Thus, their solution (see: here and here) was to create a separate type for each kind of pointer. Now there is no risk of confusing two such pointers. The compiler can use this information to overload assignment, making code shorter. And it can be done in C++ with zero runtime overhead.
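Applied to the Sprite bug above, the same technique looks like this (a hedged Java sketch with hypothetical names; ART’s actual wrappers are C++ and zero-overhead):

```java
// Sketch of "one type per kind of value": wrap each kind of block number in
// its own type, so a logical block can't silently be used as a physical one.
public class Blocks {
    record LogicalBlock(int index) {}
    record PhysicalBlock(int index) {}

    // The signature now states which kind it wants; passing a LogicalBlock
    // where a PhysicalBlock is expected is a compile-time error.
    static int readPhysical(PhysicalBlock b) { return b.index(); }

    public static void main(String[] args) {
        LogicalBlock logical = new LogicalBlock(3);
        PhysicalBlock physical = new PhysicalBlock(17);
        System.out.println(readPhysical(physical)); // 17
        // readPhysical(logical);  // would not compile
    }
}
```

The Sprite bug becomes unwritable: no naming convention to remember, because the compiler remembers for you.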

“Use more precise types” is the answer to a lot of software engineering problems.

More Discussion

Ousterhout created a Google group for the sole purpose of sharing feedback on his book, which I find admirable and hope more authors follow. We had several weeks of discussion about this review before I posted it, including big things such as the definitions of “abstraction” and “complexity,” and niche things like the history of the Waterfall method. If you want to see more details of his take on this review, you can read it here.

Summary

PoSD is one of three software design books I’ve read that I’d classify in the “intermediate” category, and the first such book with enough code examples to communicate clearly (though I have a few more candidates on my reading list). PoSD is not a flawless book nor especially original, but it is a good one. He’s collected a lot of advice that’s been roaming around into one thin book, along with some oddballs and handwaving at bigger things. There’s a lot for the junior engineer to learn, and a lot for the senior engineer to reflect on.

I think a lot of people could have written “A Philosophy of Software Design.” But Ousterhout actually did.

Overall Status: Recommend

It may not be groundbreaking, but “A Philosophy of Software Design” is a well-written book with clear examples and solid advice that deserves a place on any junior engineer’s bookshelf.

Quotes and Examples from the Book

Here is an extreme example of a shallow method, taken from a project in a software design class: 

private void addNullValueForAttribute(String attribute) {
  data.put(attribute, null);
}

From the standpoint of managing complexity, this method makes things worse, not better. The method offers no abstraction, since all of its functionality is visible through its interface. For example, callers probably need to know that the attribute will be stored in the data variable. It is no simpler to think about the interface than to think about the full implementation. If the method is documented properly, the documentation will be longer than the method’s code.

This is a great example of a case of small-class-itis gone wrong. I’ve seen plenty of examples like this myself.

Of course, I can think of a good reason to write a method like this: to hide the decision to use an in-memory map in a handful of functions. But that would be a time to show the abstraction barrier explicitly (e.g.: with an inner class), in accordance with the Embedded Design Principle. “Documentation longer than the code” is a smell of bad code, but not a criterion.
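A hedged sketch of what that explicit barrier might look like (all names hypothetical): give the map a small class of its own, so the storage decision has a named owner instead of leaking through one stray helper method:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the decision to store attributes in an in-memory map is a secret
// of AttributeStore alone; callers see only its small interface.
public class AttributeDemo {
    static class AttributeStore {
        private final Map<String, Object> data = new HashMap<>();

        void clear(String attribute) { data.put(attribute, null); }

        boolean isSet(String attribute) { return data.get(attribute) != null; }
    }

    public static void main(String[] args) {
        AttributeStore store = new AttributeStore();
        store.clear("color");
        System.out.println(store.isSet("color")); // false
    }
}
```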

Hiding variables and methods in a class by declaring them private isn’t the same thing as information hiding. Private elements can help with information hiding, since they make it impossible for the items to be accessed directly from outside the class. However, information about the private items can still be exposed through public methods such as getter and setter methods. When this happens the nature and usage of the variables are just as exposed as if the variables were public.

I’ve found this to be the most common misunderstanding of information-hiding, and it’s nice to have this written down. I’ll add that it’s also easy to leak information about a private member in more subtle ways. Common example: adding public methods to add/get from an internal data structure, in a fashion where there’s only one sensible implementation consistent with the interface. As Parnas taught us, if a decision can’t be changed without changing other modules, then it’s not a secret.
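The leak the quote describes can be made painfully concrete (a sketch with hypothetical names):

```java
// Sketch: the field is private, but a getter/setter pair exposes its nature
// and representation as thoroughly as a public field would.
public class LeakDemo {
    private int count;                          // nominally "hidden"...

    public int getCount() { return count; }     // ...but callers see an int
    public void setCount(int c) { count = c; }  // they can read and overwrite
                                                // at will, so the int is no
                                                // longer a changeable secret.

    public static void main(String[] args) {
        LeakDemo d = new LeakDemo();
        d.setCount(5);
        System.out.println(d.getCount()); // 5
    }
}
```

If the representation later needs to change (say, to a long, or a computed value), every caller that relied on the int-shaped getter/setter pair is coupled to the old decision.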

File deletion provides another example of how errors can be defined away. The Windows operating system does not permit a file to be deleted if it is open in a process. This is a continual source of frustration for developers and users. In order to delete a file that is in use, the user must search through the system to find the process that has the file open, and then kill that process. [...]

In Unix, if a file is open when it is deleted, Unix does not delete the file immediately. Instead, it marks the file for deletion, then the delete operation returns successfully. [...] Once the file has been closed by all of the accessing processes, its data is freed.

The Unix approach defines away two different kinds of errors. First, the delete operation no longer returns an error if the file is currently in use; the delete succeeds, and the file will eventually be deleted. Second, deleting a file that’s in use does not create exceptions for the processes using the file. [...]

This is a really interesting point by Ousterhout, and I don’t have anything like it in my current teachings on preventing errors. I see this as an instance of the more general operation of simplifying the rely of a function, which simplifies reasoning and eliminates errors. (The “rely” is like a concurrent version of a precondition; see here.)

Red Flag: Hard to Pick Name

If it’s hard to find a simple name for a variable or method that creates a clear image of the underlying object, that’s a hint that the underlying object may not have a clean design.

Best line in the entire book.

Acknowledgments

Thanks to John Ousterhout for his extensive comments on this review (and for writing the book!), as well as to Kieran Barry and Peter Ludemann. Thanks to Tej Chajed for pointing me to SibylFS, and to Adam Chlipala for discussion about when specs can be shorter than the implementation.