Not on the Shelves

Version 1

(This article originally appeared in Dr. Dobb's Journal in 1997.)

I was watching Field of Dreams again a couple of nights ago. When the ghostly voice whispered, "If you build it, they will come," I thought, "That's it! If I write reviews of the books I'd most like to read, maybe someone will write the books!"

The next morning, as I was nursing my hangover, I explained my plan to a friend. Her first response was, "Yeah, maybe," but when I persisted, she took the icepack off her forehead long enough to explain the idea of sympathetic magic to me. According to her, there's a tribe in north-eastern India whose land is occasionally stricken with drought. When the rain fails to arrive, the tribal elders get out a plow and a handful of seeds, and go out to till the soil anyway. The idea is that by doing what they would have done if the rain had fallen, they can force it to actually fall. That, she said, is sympathetic magic; by writing reviews of books that don't exist, you're trying to trick the universe into bringing the books into existence.

When she put it that way, it sounded kind of silly, but after reviewing more than fifty computer-related books in the last eight years, I'd be willing to sacrifice a rooster, or my neighbor's noisy first-born, in order to get someone to write these books. I'm constantly amazed by how few different books there actually are. My local bookstore, for example, has eight shelves of Java books, but if you were to do a set-union on their contents, you'd be left with only two or three books' worth of information. What's worse, a lot of things I really want to know wouldn't be there at all. It's almost as if our notion of what we should put into a book on a new programming language, or on user interfaces, or software engineering, somehow got stuck in the early 1980s.

These reviews of non-existent books are my attempt to point out the gaps in the computing literature, and, indirectly, the gaps in most programmers' education (including my own). If, by chance, one of these books already exists, please drop me a line; otherwise, if you'd like to try writing one of them, please drop me a line as well, so that I can review it for real when it comes out.

Real-World C++

Get a listing of any large program you've written recently, and count how much of it is devoted to error handling. If your code is anything like mine, you'll find that between a tenth and a quarter of your program is there to handle files that can't be opened, pointers that are null when they shouldn't be, and so on.

Now, flip through any book on C++, and count how much of it is devoted to error handling. Other than a few simple examples showing the syntax of exceptions, you'll probably draw a blank. A similar gap between what programmers put into programs, and what authors put into books, shows up in other areas as well, such as how to make user preferences persistent, how to save and restore objects in files, and so on. We all know how frustrating it is to have a parser throw out two hundred error messages because of a single missing semi-colon, for example, but most compiler textbooks offer little or no advice on how to recover from parsing errors.

Real-World C++ is a practical guide to those parts of real programs that language textbooks leave out. The first three chapters cover error handling: how to structure programs to deal with error codes from operating system calls, the design of an exception class hierarchy, and examples of when and why to throw exceptions. Chapter four covers error handling in threaded systems, while chapter five looks at sockets and other network protocols, and chapter six examines how to deal with parsing errors in both hand-written parsers and those generated by automatic tools such as yacc. A friend particularly liked chapter seven, which describes a class that throttles debug messages from particular sections of code using run-time controls, or compiles the whole expensive debugging machinery out entirely.
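
Since the book doesn't exist yet, the code is mine to imagine, but chapter seven's throttling class might look something like this sketch: a registry of named debug channels that can be flipped on and off at run time, wrapped in a macro so that the whole mechanism can be compiled away.

```cpp
#include <cstdio>
#include <map>
#include <string>

// Compile with -DNDEBUG_MSGS to make the whole mechanism vanish.
#ifdef NDEBUG_MSGS
#define DBG(channel, msg) ((void)0)
#else
#define DBG(channel, msg) DebugChannels::instance().say((channel), (msg))
#endif

// A run-time registry of named debug channels, each of which can be
// switched on or off independently while the program is running.
class DebugChannels {
public:
    static DebugChannels& instance() {
        static DebugChannels dc;
        return dc;
    }
    void enable(const std::string& channel, bool on) {
        enabled_[channel] = on;
    }
    bool is_enabled(const std::string& channel) const {
        auto it = enabled_.find(channel);
        return it != enabled_.end() && it->second;
    }
    void say(const std::string& channel, const std::string& msg) {
        if (is_enabled(channel))
            std::fprintf(stderr, "[%s] %s\n", channel.c_str(), msg.c_str());
    }
private:
    std::map<std::string, bool> enabled_;
};
```

Building with -DNDEBUG_MSGS turns every DBG into a no-op that the optimizer discards, which is the "compile it all out" half of the chapter.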

Chapter eight moves away from error handling to examine persistence. The author introduces the problem by looking at ways of saving window positions, font sizes, and the like between application sessions. The examples are all written in terms of the Windows NT registry, but the ideas could easily be applied to other systems. Chapter nine then describes a simple object persistence scheme, similar to the one used in the Microsoft Foundation Classes. This scheme allows arbitrary sets of objects, connected by pointers, to be saved to a file, and then restored. The author steps outside C++ in this chapter to compare her scheme with that used in Java. The last chapter of the book discusses interface issues: how to report errors to users, how to log non-fatal errors for later inspection, and so on.

Case Studies in User Interface Design

Some philosophers distinguish between knowing that, and knowing how. The former is mostly facts: what is the capital of Samoa, what is the airspeed of an unladen African swallow, and so on. The latter primarily consists of techniques, like how to ride a bicycle, or how to design a user interface. Unfortunately, knowledge of the second type is hard to put into books, since every general rule has exceptions, and none of the important ideas can be given exact definitions.

This book on user interfaces tries to teach the "how" by working through fourteen examples. The starting point for each of the first seven is an existing interface that needs improvement. The authors describe the interface, analyze its shortcomings, and then show how to improve it. In the last three of these examples, the authors then criticize the new interface, and improve on it again.

The examples in the second half of the book start with a blank canvas. The first two studies begin by describing applications whose existing interfaces are command-line based; as the authors point out, interface designers are often required to retro-fit GUIs to existing programs, so they might as well get used to it early on. In the last five of the studies, the application itself is up for grabs: given a vague specification that reads like an email message from someone in Marketing, the designers are required to figure out both what the program should do, and how it should appear.

The greatest strength of this book is that it shows how interfaces are developed, not just what they look like when they're done. Two of the studies, for example, devote as much space to blind alleys as to the final, finished interface. By doing this, the authors show how designers iterate over a design. Ideas such as balance and emphasis are taught by example, rather than by definition. Throughout, the authors are careful to draw examples from a variety of systems, including Windows'95, Macintosh, and Nintendo.

The authors are now apparently working on a companion text, whose working title is The World's Best Interfaces. This book will analyze and critique some classic interfaces using the ideas, terms, and analysis techniques built up in Case Studies.

Windows NT for Unix Programmers

We used to make jokes about it: a bunch of guys sitting in a circle, and one of them stands up and says, "Hello, my name is Greg, and I'm a Windows user." We knew Unix was better, and we were confident that, sooner or later, the rest of the world would come to its senses and start ls'ing and cat'ing along with us.

It hasn't quite worked out that way. Like a lot of Unix programmers, I've come to realize that Windows isn't going away, and that NT is actually a pretty good operating system. It has better support for security and threading than most flavors of Unix, and NT code doesn't have to be littered with hundreds of fragile, platform-dependent #ifdef's. As a friend of mine says, at least when Windows is broken, it's broken the same way everywhere...

The hardest part about making the transition from programming on Unix to programming on NT was learning the thousand and one things that Windows programmers take for granted. What exactly is OLE? And ActiveX? What's the registry for, and how do I use it? How does Excel work? (Having grown up with Unix, I'd never used a spreadsheet.) And, most importantly, how do I get a simple program up and running using Visual C++?

This book answers all of those questions, and more. Unlike some books on Windows, it assumes that its readers are intelligent and computer-literate. Unlike others, it doesn't assume that its readers were born and raised in Windowstan. The three chapters on Visual C++, for example, don't explain what classes and templates are, but cover precompiled header files in depth. Similarly, while the whole of the standard NT interface (mousing, window control, simple editing commands, the file system browser, and Internet Explorer) is covered in a single chapter, Excel—the tool most likely to be foreign to Unix users—gets a chapter of its own.

The parts of this book that I will use most are the chapters on NT's security model, and on system administration, with the chapter describing OLE and ActiveX running a close second. While I only skimmed the chapter on Visual Basic, it's nice to know that it's there if I ever need it. A comprehensive index, with a separate "how to" index for quick reference, should earn this book a place on every recovering Unixaholic's desk.

The Design and Implementation of Interpreted Languages

Whatever else it may be good for, Java is an excellent teaching language. It runs almost everywhere, is used in industry, is object-oriented, and checks for a lot of simple errors, including bad type casts, dangling pointers, and out-of-bounds array indices. By the end of the decade, I expect Java to be as widely used in first-year college courses as Pascal was in my youth.

The problem, however, is what to do with students in their second year. As a sophomore, I took two courses: one on data structures and algorithms, and one on machine architecture and assembly-language programming. The first is easy to translate into Java (see the review immediately following this one), but what about the second? Going from Pascal to PDP-11 assembler was bad enough; going from an interpreted, garbage-collected language to today's pipelined RISC architectures would be impossible.

This book's premise is that C has always really been a high-level assembly language, and that we should teach it to students in relation to Java as we taught assembler in relation to Pascal. Over the course of more than 500 pages (the book is clearly intended for use in a full-year course), the author shows how to build a simple interpreter for a subset of Java, called Javette, using C. Everything important is covered at least once, including pointer arithmetic and array indexing, allocating and freeing memory, garbage collection, signed vs. unsigned arithmetic, and machine-dependent data sizes. Like Kamin's Programming Languages: An Interpreter-Based Approach, this book builds up its interpreter in stages. Each stage adds another bit of syntax (such as arrays or inheritance), and then shows what has to go on under the hood to make it work. Along the way, students are introduced to the quirks of C, including the use of preprocessor directives to handle platform-dependent code.
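
Javette exists only in this review, but the heart of any such interpreter is a dispatch loop, and a sketch of one is short enough to show here. Four stack-machine opcodes are enough to convey the shape:

```cpp
#include <cstddef>
#include <vector>

// The heart of a software-executed language: a dispatch loop that
// runs a bytecode program on an operand stack.
enum Op { PUSH, ADD, MUL, HALT };

struct Instr {
    Op op;
    int arg;    // only PUSH uses this
};

// Execute until HALT and return the value left on top of the stack.
int run(const std::vector<Instr>& code) {
    std::vector<int> stack;
    for (std::size_t pc = 0; pc < code.size(); ++pc) {
        switch (code[pc].op) {
        case PUSH:
            stack.push_back(code[pc].arg);
            break;
        case ADD: {
            int b = stack.back(); stack.pop_back();
            stack.back() += b;
            break;
        }
        case MUL: {
            int b = stack.back(); stack.pop_back();
            stack.back() *= b;
            break;
        }
        case HALT:
            return stack.back();
        }
    }
    return stack.empty() ? 0 : stack.back();
}
```

Each new chapter's feature (arrays, method calls, inheritance) shows up as a few more cases in this switch, which is exactly where students see what goes on under the hood.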

Toward the end of the book, the author stops adding features to Javette, and starts building support tools. The most important of these is an execution profiler, which uses both sampling and instrumentation to collect statistics about program behavior. Results obtained from profiling are used to segue into a discussion of caching, virtual memory, and other aspects of machine architecture that haven't been covered earlier. The final chapter in the book then presents a simple sockets interface, and introduces the basics of network programming.
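
The profiler is also the author's to invent, but the instrumentation half can be sketched in a few lines: every instrumented function bumps a named counter on entry, and the hot spots are simply the names with the largest counts. (Sampling, which interrupts the program at fixed intervals, needs timer machinery I won't fake here.)

```cpp
#include <map>
#include <string>

// Instrumentation-based profiling in miniature: each instrumented
// function bumps a per-name counter, so after a run the hot spots
// are the entries with the largest counts.
class CallCounter {
public:
    void enter(const std::string& fn) { ++counts_[fn]; }
    long count(const std::string& fn) const {
        auto it = counts_.find(fn);
        return it == counts_.end() ? 0 : it->second;
    }
private:
    std::map<std::string, long> counts_;
};

// A function instrumented by hand; a real tool would insert the
// enter() calls automatically when compiling the program.
long fib(long n, CallCounter& prof) {
    prof.enter("fib");
    return n < 2 ? n : fib(n - 1, prof) + fib(n - 2, prof);
}
```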

A Second Course in Object-Oriented Programming: Design, Analysis, Data Structures, and Algorithms in Java

Like most professional programmers of my generation, I came to C++ from C, and spent my first year with the language using classes as if they were structs with constructors, and methods as if they were functions whose first argument was supplied by the compiler. This was partly laziness—I stopped learning C++ as soon as I knew enough to meet my deadlines—but I was also encouraged to take this attitude by the books I'd read, which concentrated on the syntax of the language, and skipped over analysis and design.

The aim of this new book is to teach analysis and design techniques as if they were as important as language syntax, or basic data structures. The book is aimed at sophomores who have learned the basics of Java in a first-year course. It begins by summarizing some basic data structures and their associated algorithms, and then presents some design strategies and notations appropriate to them. Right from the start, the author introduces both the idea of design patterns, and the Unified Modeling Language (UML) developed jointly by Booch and Rumbaugh, so that the recapitulation of standard data structures (in the first half of the book), and the case studies (in the second) have a uniform look and feel.

All of the examples are based on Web applications, and all of the implementations are in Java, but the real emphasis of the book is on how to analyze a problem methodically, and how to design a program, or set of programs, to meet users' needs. While the main text does not discuss the software development process per se, it does describe the place of design and analysis activities in the software lifecycle. The appendices discuss the problem of keeping designs in step with code as the latter evolves, and how to run a design meeting. The UML Workbook discussed below is the second in this series, and focuses more on the software engineering side of object-oriented programming.

Debuggers: Design and Implementation

There used to be a show on the BBC called Desert Island Discs. Each week, different guests were asked which eight records they would take along if they were going to be stranded for the rest of their lives. The equivalent game for programmers would probably be called Desert Island Development Tools. If you had to pick three, and just three, programming tools to take with you, what would you take? My list would be (in order) an editor, a compiler, and a debugger, and I think most other programmers would make the same choice. I can recompile programs by hand if I have to, and do version control by making backups, but without a symbolic debugger, my life would be very, very slow.

Despite their importance, debuggers are ignored by both educators and authors. In part, this is because there isn't a tidy theory to teach, as there is with (for example) parsers or databases, but in part there is also an element of disdain for something that is "just a tool". As this book shows, it might be just a tool, but it's a very complicated tool to get right.

The author starts with an overview of the software development lifecycle, and a catalog of well-known debuggers and their features. After prioritizing the entries in this catalog, she spends three chapters developing a simple interactive debugger for Javette, the Java subset introduced in The Design and Implementation of Interpreted Languages (reviewed earlier in this article). As she explains at the start of these chapters, it is much easier to write a debugger for a language that is executed by software, rather than directly by hardware, so most of the key ideas in debuggers are introduced in this forgiving environment. The debugger itself is written in a mix of Java and C.
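
The debugger is as imaginary as Javette itself, but the point about software execution being a forgiving environment is easy to demonstrate: when the "hardware" is a loop you wrote yourself, a breakpoint is nothing more than a set membership test. A sketch:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// A toy machine whose dispatch loop consults a set of breakpoints
// before "executing" each instruction.
struct Machine {
    std::vector<int> code;             // stand-ins for real instructions
    std::set<std::size_t> breakpoints;
    std::size_t pc = 0;

    // Run until a breakpoint is hit or the program ends; returns
    // true if we stopped at a breakpoint.
    bool run() {
        while (pc < code.size()) {
            if (breakpoints.count(pc))
                return true;           // hand control back to the debugger
            ++pc;                      // "execute" code[pc] and advance
        }
        return false;
    }

    // Single-step: execute one instruction, ignoring breakpoints.
    void step() {
        if (pc < code.size())
            ++pc;
    }
};
```

Debugging real machine code means patching trap instructions into memory instead, which is why the book saves that for its second half.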

The next five chapters of the book cover the same ground, but this time the debugging target is real machine code. While the Pentium processor is used for all working examples, the author discusses other common microprocessor architectures, including the Alpha, MIPS, SPARC, and PowerPC, in sidebars. The topics covered include setting breakpoints and watchpoints, displaying the contents of memory, tracing execution history by walking through a stack, and so on. Again, sidebars are used to compare debugging Javette with debugging other languages, such as C++, in which less information survives to runtime. The author's emphasis is always on "what" and "how", but there is some good discussion of user interface issues as well. The author also discusses ways of dealing with such things as out-of-order instruction execution, dynamically-loaded libraries, and functions with multiple entry and exit points.

The book's three appendices discuss how to debug multi-threaded code, how to debug remote applications, and (interestingly) how to test and debug a debugger. Taken as a whole, the book resembles Tanenbaum's Operating Systems: Design and Implementation (a.k.a. the Minix book) in its practical emphasis.

The Elements of Software Engineering Style

This book's subtitle, "A Software Development Process for Small Teams", is almost all the review it needs. Most books on software engineering are written as if the only important projects were those that required hundreds of programmers, and at least five years of uninterrupted work. Even the best of the exceptions, McConnell's Rapid Development, is a survey of practices that a small group of programmers could adopt, rather than a description of a complete, coherent development process.

This book fills the gap by describing how a small group of programmers (up to a dozen, according to the introduction, including both one technical writer and two testers) should tackle a project that will take from six months to two years to complete. After describing a few of the standard software lifecycle models, the first chapter outlines the "Small Team Process" described in the rest of the book. Each chapter then covers a single aspect of the process, including gathering requirements, architectural design, setting coding standards, code reviews, scheduling and progress reviews, testing, source code control, bug tracking, documentation, and preparing for release.

One of the book's strengths is its brevity. It has the same dimensions as a paperback novel, and individual chapters are very short: the longest is 22 pages, and the shortest (on coding standards) is only five, half of which is devoted to a humorous discussion of why so many programmers have wasted so much energy on religious wars over indentation.

The book is full of small tables and checklists, all of which are available for downloading via the Web. The authors clearly expect that readers will be using Windows as their development platform (the Gantt charts in the section on scheduling, for example, still have a Windows'95 frame around them), but the ideas themselves are platform-independent. I was particularly impressed by the discussion of how to integrate bug reports from both internal testing and external users with source code control, and with the authors' "What Can Go Wrong?" and "Have You Remembered?" lists.

How to Write Better Computer Games Faster

Most teenagers who teach themselves how to program do so because they want to write computer games. As the authors of this book point out in their introduction, however, anyone who learns how to program from the examples in games magazines will probably also learn a lot of bad habits, which they may or may not be able to shake off later in life. Since few teenagers are interested in anything that's good for them, this book tries to teach good programming practice by disguising it as games programming. "Get Your Games Running Faster!!" the cover proclaims (complete with a double exclamation mark). "Spend Less Time Debugging, and More Time Playing!!"

The material in the book is much the same as you would find in McConnell's Code Complete, or Maguire's Writing Solid Code. However, the presentation is aimed directly at fifteen-year-olds with "Nintendo Million-Point Champion" tattoos. Almost all of the examples involve graphics, and many show how sloppy style leads to either buggy code, or code that runs too slowly to be playable. The "before and after" style, with changes to the code carefully explained and highlighted, means that you can read the examples out of order. I particularly enjoyed the chapter, "50 Ways a Bomb Can Bomb", which finds 50 separate errors and inefficiencies in a small class representing a nuclear bomb.

The last two chapters in the book are devoted to handling joystick input. The tight constraints of real-time programming, and the near-impossibility of debugging device handlers with the kinds of tools available to amateurs, give the authors a chance to drive their "get it right the first time" message home. C++ and Windows'95 are used throughout the book, although the authors are careful to stick to the simpler features of both. Thankfully, though, the authors steer clear of examples involving sound cards...

Software Tools for Scientists and Engineers

Most scientists and engineers write programs when they should use packages, do by hand what could be done automatically, and make little use of advanced algorithms and data structures. One of the reasons for this is the rate at which computing technology is evolving. In 1980, when I started programming, I could get by knowing only a few editor commands and a couple of Fortran compiler options. Today, the editor I use is larger than most of the operating systems of that era, and my compiler has over a hundred different switches to control optimization alone. Mastering these more complex tools clearly takes more time, but that time is not available in most undergraduate curricula. As important as computing skills may seem, learning them is less important to a scientist or engineer than learning about her own discipline.

The aim of this book is to make modern software engineering practice accessible to numerical scientists. In the introduction, the author lays out three principles on which the book is built:

  1. Concentrate on the concrete, not the abstract.
  2. Be conservative: describe only things that have proved themselves and are unlikely to change.
  3. Focus on those platforms that scientists and engineers are most likely to use.

The result covers much the same material as The Elements of Software Engineering Style (reviewed above), but at greater length, with much more description and motivation, and with very different examples. Where ESES is written in terms of tools that computer scientists use (such as C++), STSE uses MATLAB 5.0 (a popular numerical scripting language) and FORTRAN-77. There is no discussion of object-oriented programming, but there is a full chapter on numerical precision, and another on how to test numerical programs in the face of round-off errors. As a running example, the author builds up a simple regression-testing framework called strafe, which (by the time it is completed) consists of an HTML interface and a few simple scripts to re-run tests, collate their results, and generate reports in which differences from previous runs are highlighted.
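
strafe exists only in this review, but its central operation, deciding whether a new run matches a reference run while forgiving round-off, might be sketched like this:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Compare one run's numbers against a reference run, treating values
// within a relative tolerance as unchanged -- the usual dodge for
// round-off differences between compilers and platforms. Returns the
// indices of the values that genuinely changed.
std::vector<std::size_t> diff_runs(const std::vector<double>& reference,
                                   const std::vector<double>& actual,
                                   double rel_tol) {
    std::vector<std::size_t> changed;
    std::size_t n = std::max(reference.size(), actual.size());
    for (std::size_t i = 0; i < n; ++i) {
        if (i >= reference.size() || i >= actual.size()) {
            changed.push_back(i);      // value added or dropped entirely
            continue;
        }
        double scale = std::max(std::fabs(reference[i]),
                                std::fabs(actual[i]));
        if (std::fabs(reference[i] - actual[i]) >
                rel_tol * std::max(scale, 1.0))
            changed.push_back(i);
    }
    return changed;
}
```

The reporting scripts would then highlight exactly the returned indices, rather than flagging every last-digit wobble as a regression.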

Large-Scale Visual Basic Software Design

The idea is simple, and at least twenty years old: programmers should use objects as if they were integrated circuits, and build programs by combining such "software ICs" with bits and pieces of glue logic. Despite the claims made by vendors of object-oriented systems, however, few programmers work this way. A typical C++ programmer might use stdlib or the Microsoft Foundation Classes extensively, but in the same "call 'em when you need 'em" way as her predecessor in the 1960s.

Visual Basic—lowly, sneered-at Basic—is the only industrially-significant language whose users have really embraced component-based programming. There is an enormous market for Visual Basic controls, and many large applications are written by combining these, rather than re-writing them. If history had been only slightly different, Visual Basic could have turned into the lingua franca of computing that Java is now poised to become.

However, Visual Basic does have some significant shortcomings. As the author of this book points out, the most important of these is that Visual Basic encourages a fragmented programming style: it can be almost impossible to reverse engineer the control flow of a mature VB application. The solution she advocates is to avoid getting into trouble in the first place. Over the course of 400 pages, she describes and illustrates programming practices intended to accomplish exactly that. Like Lakos' Large-Scale C++ Software Design, the aim is not to teach the language, but to show how to organize a large, complicated source base so that many programmers can work together productively over a period of months or years.

The author has no illusions about what it will take to get people to adopt the practices she advocates. "If you are reading this book," she says in the introduction, "it's probably because you've just watched a big project grind to an unproductive halt." While this book may not stop that from happening to everyone once, at least there's no longer an excuse for it to happen a second time.

Software Tools for the World-Wide Web

Software Tools was one of the most influential books in the history of computing, as it introduced a whole generation of programmers to the Unix philosophy of tool-based computing. In retrospect, one of the reasons the Unix tools were so successful was that they all worked with a single, universal data format, namely strings of ASCII text, terminated by newline characters. One of the reasons that tool-based computing hasn't taken root in other environments (such as Microsoft Windows) is that no such format exists. The internal structure of a Microsoft Word .doc file, for example, bears little resemblance to that of an Excel spreadsheet.

The author of this book begins by arguing that in fact a new universal data format does exist: HTML. Unlike newline-terminated ASCII, however, HTML has a nested structure, which is difficult for the streaming model of Unix to handle. In the first ten chapters of this book, the author therefore develops a suite of tree transformation utilities, which can be used to parse, re-arrange, and output HTML. These utilities are more sophisticated than the cat, grep, and sed of the original Software Tools, but, as their starting point is also more sophisticated (a Java class library that contains regular expressions and other parsing tools), the overall cognitive burden is about the same. Where the original Software Tools concentrated on parsing and text formatting, this book concentrates on page layout and data mining, i.e. on the things that make the World-Wide Web more than just "FTP with pictures".
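
None of these utilities exist outside this review, so the following is only my guess at their flavor: a "grep for trees" that recursively collects the subtrees whose root carries a wanted tag, plus a toy "wc".

```cpp
#include <cstddef>
#include <string>
#include <vector>

// A stripped-down parse-tree node: a tag plus children. (Text could
// be modelled as a node with a reserved tag and no children.)
struct Elem {
    std::string tag;
    std::vector<Elem> children;
};

// A tree-structured analogue of grep: collect pointers to every
// subtree whose root carries the wanted tag, searching recursively.
void find_tag(const Elem& root, const std::string& tag,
              std::vector<const Elem*>& hits) {
    if (root.tag == tag)
        hits.push_back(&root);
    for (std::size_t i = 0; i < root.children.size(); ++i)
        find_tag(root.children[i], tag, hits);
}

// And a toy "wc" for trees: how many elements in all?
std::size_t count_elems(const Elem& root) {
    std::size_t n = 1;
    for (std::size_t i = 0; i < root.children.size(); ++i)
        n += count_elems(root.children[i]);
    return n;
}
```

The pipelines described in the later chapters are just compositions of transformations like these, reading and writing trees instead of lines.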

The final two chapters of this book take the tools built in the previous ten, and construct a simple dataflow GUI for combining them. With a few mouse clicks, users can create pipelines (or even task farms, for those lucky enough to have multiprocessors) of HTML filters, then set the controls of each. With another mouse click, the resulting multi-filter can be turned into a standalone Java applet—the 1990s equivalent of a simple C-shell script.

The Future of the Java Programming Language

Few things in computing have become as popular, as quickly, as Java. While a decade passed between the introduction of C++ and its widespread adoption as the standard systems programming language, Java became a fact of life for thousands of programmers in a matter of months. One of the reasons for this success is that Java is a very conservative language. Its syntax is immediately familiar to anyone who has ever used C, its type system combines those of C++ and Smalltalk without moving beyond either, and its concurrency primitives are a formalization of the kinds of object-oriented wrappers that programmers have been using to encapsulate low-level thread packages for years.

The contributors to this collection all clearly believe that Java would be a better language if it incorporated features that have proved useful in other languages. Each chapter is the work of a different author. Those in the first half of the book focus on one extension that their authors particularly want. These range from anonymous functions (similar to those created by the "lambda" operator found in Lisp dialects), to distributed shared memory (on which there are two chapters), true multi-dimensional arrays (like C, Java provides only nested vectors), and strongly-typed templates. This last proposal owes a great deal to the work of Bank, Liskov, and Myers at MIT; its authors acknowledge that runtime casting allows generic containers to be built in Java, but argue persuasively that this is bad both from a software engineering standpoint, and because it makes compile-time optimization more difficult.
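
The templates chapter's argument is easiest to see from the C++ side, where a generic container is checked at compile time and needs no runtime casts. A throwaway example (mine, not the book's):

```cpp
#include <vector>

// A strongly-typed generic container: the element type is fixed at
// compile time, so no casts are needed and none can go wrong.
template <typename T>
class Stack {
public:
    void push(const T& v) { items_.push_back(v); }
    T pop() {
        T v = items_.back();
        items_.pop_back();
        return v;
    }
    bool empty() const { return items_.empty(); }
private:
    std::vector<T> items_;
};
```

A Stack<int> accepts only ints; pushing a string is a compile-time error, where Java's cast-based containers would only fail at run time.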

The chapters in the second half of the book discuss class libraries and support tools. Among the former is a description of a matrix class library that relies on Veldhuizen's expression templates to generate just-in-time execution kernels. Among the latter is a description of the Class Hierarchy Evolution Animator. This uses version control information, such as that stored by rcs or SourceSafe, to animate a class hierarchy diagram using Booch and Rumbaugh's Unified Modeling Language (UML). New classes sprout from old ones, while other classes fission like bacteria or grow fat with additional methods. While this may not have a lot to do with Java, it's a great toy.

Object-Oriented Systems Programming

By the end of the 1980s, C had become the language of choice for systems programming on both Unix and Windows machines. In fact, C had become so successful that, as far as most programmers were concerned, both operating systems were more-or-less defined by their C interfaces.

Since then, however, most C compilers have evolved into C++ compilers, and books that once used C for their examples, such as graphics and database textbooks, have switched to C++ as well. The only major holdouts are operating systems and networking books, which (according to the introduction in this book) have been held back by the lack of standard C++ bindings for Unix and Windows. As a result, almost everyone who uses C++ winds up defining classes to represent files, directories, sockets, threads, and so on, even for "new" (i.e. post-C++) operating systems such as Windows NT.

The aim of this book is accordingly to bring systems programming into the 1990s. In the course of more than 500 pages, the authors develop a C++ class library whose elements model everything from file system entries and network connections to user accounts and performance monitoring information. Some of this work is not new—the networking classes, for example, are based on Schmidt's ACE toolkit—but the parts are integrated cleanly. What is more important, the authors develop their class library in stages, examining alternative design possibilities and implementation strategies as they go along. They also borrow some ideas from Real-World C++ (reviewed earlier), and show how to use conditional compilation flags and proxy classes to manage the differences between their two target platforms (Solaris and Windows NT). The Web sites referenced at the end of each chapter contain implementations for other operating systems, including Linux and AIX (but not Windows'95 or MacOS). The result is a book that could, if supplemented by material on the theory of the subjects, be used as a text in a full-year course on operating systems and networking.
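
The book being imaginary, this is only my guess at its style, but the technique itself is standard: confine the conditional compilation to a single proxy class, so the #ifdef appears once instead of being sprinkled through the application.

```cpp
#include <string>

// The platform difference lives here, and only here.
#if defined(_WIN32)
const char PATH_SEP = '\\';
#else
const char PATH_SEP = '/';
#endif

// A proxy class hiding the platform behind one interface; client
// code never mentions the separator (or the platform) at all.
class Path {
public:
    explicit Path(const std::string& p) : path_(p) {}
    Path join(const std::string& component) const {
        return Path(path_ + PATH_SEP + component);
    }
    const std::string& str() const { return path_; }
private:
    std::string path_;
};
```

The same shape scales up to files, sockets, and threads: one class per operating system concept, with the #ifdef's buried in its implementation.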

Intellectual Property Made Simple

The legal aspects of the software business were complicated enough when the major problem was people using software without paying for it. The advent of the World-Wide Web has squared and cubed the problem. If you use a GIF image as a button in your home page, for example, and I download it for use in my page without asking your permission, am I breaking the law? What if you copied that button from someone whose page explicitly said that it wasn't in the public domain, but you didn't include a note to that effect? And what if I then printed out my page, GIF and all? Would that be illegal? In Ontario, the answers are (currently) "no", "no", and "yes", but other jurisdictions might not even officially recognize that there are issues to address.

Intellectual Property Made Simple presents this problem, and several like it, in its first chapter. The next three chapters then trace the historical development of property law from land, through patents and copyrights, to the invention of photocopying. As the authors say at the start of chapter five, "That's when hell quietly broke loose." Using example after example, the authors show how cheap reproduction, particularly digital reproduction, is reshaping the intellectual underpinnings of capitalism. Look and feel, free software, and litigation as intimidation are all discussed, and some folklore is put to rest (no, IBM doesn't employ more lawyers than engineers).

IPMS is aimed squarely at programmers, particularly programmers working in start-ups. There is practical advice on how to patent software, and how much protection that patent actually gives its holder. One section discusses what rights students have to the software they produce during their studies; another, who owns things that were produced by companies that no longer exist; and a third, the furore that surrounds encryption in the United States. Where they can, the authors concentrate on principles rather than particular statutes, as the latter are so often either non-existent or changing rapidly. This not only makes the book more readable, it also ensures that it won't quickly be outdated.

Computational Layout: An Object-Oriented Approach

My second non-trivial C program, back in 1982, read text from a file and set it in left- and right-justified paragraphs. I've been tripping over layout problems ever since: VLSI circuits, geometric objects in 3-dimensional landscapes, dependency graphs, floor plans, and many more. What I haven't been tripping over, until now, is a programmer's introduction to placing things in space.

Computational Layout is exactly that book. The author starts with the same problem I first encountered, that of justifying text using a fixed-width font (although she does it in the context of a web browser, rather than a formatter like troff or TeX). Having used this example to introduce some terms and algorithms, the author starts Chapter 2 by asking, "What if space was elastic?" She then spends almost 200 pages looking at the implications of stretchability, touching on things such as hyphenation, pagination, and widowing (the "dangling line" problem). By the time she is done, she has described a framework in which classes capture layout information, and class instances figure out where they should position their contents by evaluating themselves.
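
The book may not exist, but the problem its first chapter tackles certainly does. For readers who haven't tripped over it themselves, here is a minimal sketch, entirely my own rather than the imaginary author's, of the greedy fixed-width justification that the chapter would open with:

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Greedy line-breaking: pack as many words as fit on each line with
// single spaces, then pad the gaps so every line except the last
// exactly fills the measure (left- and right-justified).
std::vector<std::string> justify(const std::string& text, std::size_t width) {
    std::istringstream in(text);
    std::vector<std::string> words, lines;
    for (std::string w; in >> w; ) words.push_back(w);

    std::size_t i = 0;
    while (i < words.size()) {
        // Greedily extend the line while the next word still fits.
        std::size_t j = i, len = words[i].size();
        while (j + 1 < words.size() && len + 1 + words[j + 1].size() <= width) {
            ++j;
            len += 1 + words[j].size();
        }

        std::string line;
        std::size_t gaps = j - i;
        if (j + 1 == words.size() || gaps == 0) {
            // Last line (or a single long word): left-justified only.
            for (std::size_t k = i; k <= j; ++k) {
                if (k > i) line += ' ';
                line += words[k];
            }
        } else {
            // Spread the leftover spaces across the gaps, giving the
            // leftmost gaps one extra space each.
            std::size_t letters = len - gaps;       // characters in words
            std::size_t spaces  = width - letters;  // spaces to distribute
            for (std::size_t k = i; k <= j; ++k) {
                line += words[k];
                if (k < j) {
                    std::size_t share =
                        spaces / gaps + (k - i < spaces % gaps ? 1 : 0);
                    line.append(share, ' ');
                }
            }
        }
        lines.push_back(line);
        i = j + 1;
    }
    return lines;
}
```

Making space "elastic" amounts to replacing those fixed one-space gaps with stretchable ones that carry minimum, natural, and maximum widths—which is exactly where the review says Chapter 2 picks up.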

The second half of the book shows how this framework (or, more accurately, the ideas behind the framework) can be applied to other problems, such as labeling maps and laying out circle-and-arrow graphs so that lines are short, straight, and cross one another infrequently. While this is all useful in its own right, the book's real strength is the way in which it shows how to build an object-oriented framework for a particular problem domain.

From Key to Screen: How Computers Really Work

The introduction to this book says it best:

About two years ago, my daughter asked me how it was that she could press a key on our computer, and make a picture from Australia appear on the screen. I said I didn't know, but maybe we could find out together. Today, two years later, I know a lot more than I thought I ever would about packet switching, name servers, Unicode, JPEG, and a hundred other things.

The book's target audience is 14-year-olds who are interested in science and technology, but I suspect it will also be read by adults who dislike the wearying jokiness of books with titles like "XYZ for Cretins". What kind of signal is sent to a computer when a key is pressed? How does the computer know which program to give the character to? How does that program decide what to do with the character? How does it turn characters into names, and names into locations on the World-Wide Web? How does it ask other computers for pictures and HTML pages? How does it display them? These questions, and many more, are all answered in 200 pages' worth of simple line drawings and simple, but not simplistic, prose. A lot of other, less concrete, things are also explained: hierarchy and modularity, abstract machines, the difference between a program and an algorithm... The book closes with a URL for a site where the author's answers to readers' questions (and his daughter's as well) are regularly posted.

The UML Workbook

This book, a follow-on to A Second Course in Object-Oriented Programming (reviewed earlier), is a software engineering book with a difference. Like McConnell's Rapid Development, it focuses on small-team programming: half-a-dozen people, working for a year or so, on a single product. However, instead of surveying different working practices, or even trying to teach those practices explicitly, this book presents over two dozen partially-worked case studies. Readers are given extracts from reports, folders of email messages, hand-drawn sketches, and even a few audio clips, then asked to construct use case diagrams and other models using the Unified Modeling Language (UML). They can then compare their answers with those given on the CD that comes with this book (which usually has two or three different answers for each problem). Several of the examples—an on-line auction system, an automated teller machine, and a computational geometry toolkit—recur throughout the book, so that readers can see how various models should be layered on top of one another. While this book is probably not suitable by itself for a one-term course on software engineering, it would be an excellent adjunct to McConnell, or to Fowler and Scott's UML Distilled.

Software Tools: A Survey of Best Practice

Most programmers only ever master a single programming environment, such as Emacs and GNU on Unix, or Borland C++ on Microsoft Windows. As a result, most programmers have no context within which to evaluate their tools. How does your present debugger stack up against others, for example? The odds are that you don't know.

Don't know, that is, until you've read this book. In 11 chapters and 3 appendices, its contributors present a horizontal look at programming environment components, rather than a vertical look at a particular programming environment. Thus, instead of "Programming with GNU" or "Visual C++ Made Less Excruciating", the book devotes a chapter to each major category of tool, describing how it works, what the state of common use is (i.e. where's the floor), and what the state of the art is (i.e. where's the ceiling).

The book doesn't cover everything, of course: project management software, though just as important to most programmers' day-to-day work, is barely mentioned. While I expect much of what's in this book to become stale as new versions of the tools mentioned are produced, that's partly the point—as the authors say, having someone point out publicly where its offerings fall short is a big incentive for a tool vendor to fix things.

C++ As If C Never Happened

As C++ compilers approach the ANSI standard, C++ class libraries that exploit the complexity of the standard to give both expressiveness and efficiency are coming on line... while the library writer must basically use every obscure subclause in the standard, most of this complexity can be hidden from the library user.
Scott Haney

Apologizing for the length of one of his Provincial Letters, Blaise Pascal wrote, "If I'd had more time, I would have written a shorter letter." Sadly, programming language standardization committees seldom feel the same way: almost without exception, the length of the final standard is proportional to the amount of time spent deliberating. Add this to the reluctance of most committees to ever actually discard features (deprecation doesn't count), and life eventually becomes hell for both compiler writers and students.

This book goes a long way toward making C++ less hellish for newcomers. Pointer arithmetic, "naked" arrays, and other low-level features inherited from C are not mentioned until the third appendix. While references show up in chapter 1, along with vector, string, and other standard library classes, pointers don't appear until chapter 5, and it isn't until chapter 7 that students are shown how to write a class from scratch, instead of deriving from the reference-counted base class CRoot.
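
The book is imaginary, but the style it teaches is easy to picture: standard library classes do the memory management, and not a pointer, naked array, or call to malloc in sight. A sketch of my own invention of the kind of program its early chapters would make possible:

```cpp
#include <algorithm>
#include <sstream>
#include <string>
#include <vector>

// Return the distinct words of a text in sorted order, using only
// vector, string, and the standard algorithms -- no pointers, no
// naked arrays, no manual memory management.
std::vector<std::string> unique_sorted_words(const std::string& text) {
    std::istringstream in(text);
    std::vector<std::string> words;
    for (std::string w; in >> w; ) words.push_back(w);

    std::sort(words.begin(), words.end());
    // std::unique moves duplicates to the end; erase trims them off.
    words.erase(std::unique(words.begin(), words.end()), words.end());
    return words;
}
```

Every object here cleans up after itself when it goes out of scope, which is precisely why a book can defer pointers to chapter 5 without leaving its readers unable to write anything useful.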

Purists and power users will probably not like this book, and will probably express their indignation by saying that it describes C++ as if it were Java. However, that's not necessarily a bad thing: having seen how to get things done safely and quickly in C++, students will be better placed to learn how to handle the language's more dangerous features.