
New Edition of Computer Graphics: Principles and Practice

Nearly a decade in the writing, the new edition of Computer Graphics: Principles and Practice has finally been published (the first copy is shown here sitting on my somewhat messy desk). 

The book is 1209 pages, slightly shorter than the second edition, but it's in a larger format, which more than compensates for the difference. Several topics (the extensive discussion of user interfaces, the long chapters on spline curves and surfaces) have been substantially trimmed, since there are now whole fields (human-computer interaction, computer-aided design) in which those topics find their natural home. The discussion of rendering -- especially Monte Carlo methods -- has been enlarged a good deal. 

There's a big Brown CS representation in the book -- Andy van Dam and I here at Brown, my former Ph.D. student Morgan McGuire of Williams, former adjunct faculty member David Sklar of Vizify, Andy's former Ph.D. student Steve Feiner of Columbia -- along with Jim Foley of Georgia Tech and Kurt Akeley of Lytro. 

As the lead author on this edition, I'm (a) exhausted, and (b) very happy with the final product. The text is almost entirely new, although it's strongly influenced, of course, by the presentation and order of the earlier editions. 

What's Different?

Hardware by a world expert

The third edition contains a chapter on Modern Graphics Hardware by Kurt Akeley, the cofounder of Silicon Graphics, designer of the RealityEngine and of GL/OpenGL, and now CTO of Lytro. Kurt uses a recent NVIDIA GPU as a model for analyzing the tradeoffs involved in designing a graphics processor: the cost/benefit choices involved in parallelizing graphics tasks, and an extensive discussion of memory, concentrating on locality of reference, its relationship to caching, and the consequences of the differing constants in the Moore's-Law-like improvements in memory, computation, and bandwidth. He also discusses the tradeoff between implementation simplicity and the power provided to the user, and identifies a principle early in the chapter --- the art of architecture design includes identifying conflicts between the interests of implementors and users, and making the best tradeoffs --- which he then illustrates with numerous examples. 
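Locality of reference is easy to demonstrate even without a GPU. The sketch below (illustrative only, not from the chapter) sums the same 2D array in two traversal orders; both produce the same answer, but the row-major loop visits consecutive elements while the column-major loop strides across rows -- the access-pattern difference that caches reward or punish.

```python
# Illustration (not the book's code): the same 2D array summed in two
# traversal orders. The results are identical; only the memory access
# pattern differs, which is what locality of reference is about.
W, H = 512, 512
img = [[(x * 31 + y * 17) % 256 for x in range(W)] for y in range(H)]

def sum_row_major(a):
    # inner loop walks along a row: consecutive elements, cache-friendly
    return sum(a[y][x] for y in range(len(a)) for x in range(len(a[0])))

def sum_col_major(a):
    # inner loop walks down a column: large strides, poor locality
    return sum(a[y][x] for x in range(len(a[0])) for y in range(len(a)))
```

In a compiled language on real hardware, the second version can be many times slower even though it does the same arithmetic.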

Principles galore

That design tradeoff principle illustrates something about the book as well: as we designed and revised chapters, we found ourselves repeatedly explaining the same ideas in multiple contexts, and began to extract them as the principles we've relied on over the years. 

These principles range over many levels of detail. For example, the "average height principle" says that the average height of a point on the upper hemisphere of the unit sphere is 1/2. That seems pretty specific, but it's remarkable how often it comes up in discussing rendering topics. At the other extreme, the "meaning principle" --- which says that for every number that appears in a graphics program, you need to know the semantics, the meaning, of that number --- applies very widely. This principle might seem completely obvious to you -- of course you need to know what numbers mean! If you're thinking that, let me ask you this: suppose the top left pixel of your color image has the color (r, g, b) = (245, 13, 11). What does that "245" mean? If you think the pixel values are describing light as a physical phenomenon, what are the units? 
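Both principles can be made concrete in a few lines. The sketch below (my illustration, not the book's code) first checks the average-height principle by Monte Carlo, and then shows one common answer to the "what does 245 mean?" question: in most image files an 8-bit value is sRGB-encoded display intensity, not linear light, so it must be decoded through the standard sRGB transfer function before any physical interpretation.

```python
import math
import random

# Monte Carlo check of the "average height principle": sample directions
# uniformly on the unit sphere by normalizing Gaussian vectors, keep only
# the upper hemisphere, and average the z-coordinate.
def avg_hemisphere_height(n=100_000, seed=1):
    rng = random.Random(seed)
    total, count = 0.0, 0
    while count < n:
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0 or z < 0.0:
            continue            # degenerate sample, or lower hemisphere
        total += z / r
        count += 1
    return total / n            # should be close to 1/2

# "Meaning principle" example: decode an 8-bit sRGB value to a linear
# light fraction, using the standard sRGB transfer function.
def srgb_to_linear(v):
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
```

A stored "245" decodes to roughly 0.91 as a linear fraction of the display's white -- and even that is not a radiance until you know what the display emits.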

Writing a book in a new century

The world’s changed a lot since our last edition. Students are used to grabbing code from the internet. The language of choice has changed from Pascal and/or C to … well, to what? C++? Scheme? Java? C#? Haskell? OCaml? The great thing is that it doesn’t really matter. If you want to learn about, say, ray-intersect-plane computations, you can probably find implementations in any of those languages. That meant two things for us as authors:

We don’t actually have to include code for many algorithms. The student can grab code from the web in whatever language works best for him or her. 

When we do write code, we can feel free to do it in almost any language. In the book, there’s C, C++, C#, GLSL, pseudocode, and possibly some others I’ve forgotten. The C-like languages are all similar enough that a student who knows one can generally read the others. Much of the early part of the book introduces 2D and basic 3D graphics via Windows Presentation Foundation (WPF), a graphics library accessible through an XML-like format and via C# code, but essentially the same ideas are usable via other libraries. 

Together, these mean that if the main ideas are explained simply and clearly enough – which is, after all, our strength – then the student can make the most of them.
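The ray-intersect-plane computation mentioned above is a good example of code a student can find in any of those languages. Here is one typical form of it (a hypothetical helper for illustration, not code from the book):

```python
# Ray/plane intersection: the plane is given by a point p0 and normal n,
# the ray by origin o and direction d. Returns the hit point, or None.
def ray_plane(o, d, p0, n, eps=1e-9):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(d, n)
    if abs(denom) < eps:
        return None                       # ray parallel to the plane
    t = dot([p - q for p, q in zip(p0, o)], n) / denom
    if t < 0:
        return None                       # intersection behind the ray origin
    return [oi + t * di for oi, di in zip(o, d)]   # hit point o + t*d

# A ray from the origin along +z hits the plane z = 5 at (0, 0, 5):
hit = ray_plane([0, 0, 0], [0, 0, 1], [0, 0, 5], [0, 0, 1])
```

The same ten lines translate almost mechanically into C, C++, C#, Java, or GLSL, which is exactly the point.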


The second edition started with 2D graphics in great detail, including extensive coverage of low-level topics like scan-conversion. Since the modern version of scan-conversion, rasterization, is now generally done in the GPU, it’s no longer the central topic it once was. It’s also usually based on spatial subdivision approaches, which are most naturally delayed until later in the book. 
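The heart of that modern, GPU-style rasterization is a per-pixel coverage test rather than edge-walking scan-conversion. A minimal sketch of the half-space ("edge function") test, for illustration only:

```python
# Half-space coverage test used by modern rasterizers (illustrative, not
# the book's code): a sample point is inside a triangle iff it lies on
# the same side of all three edges.
def edge(ax, ay, bx, by, px, py):
    # twice the signed area of (a, b, p); the sign says which side of
    # edge a->b the point p is on
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covered(tri, px, py):
    (ax, ay), (bx, by), (cx, cy) = tri
    e0 = edge(ax, ay, bx, by, px, py)
    e1 = edge(bx, by, cx, cy, px, py)
    e2 = edge(cx, cy, ax, ay, px, py)
    # accept either winding order: all non-negative or all non-positive
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or \
           (e0 <= 0 and e1 <= 0 and e2 <= 0)

tri = [(1, 1), (8, 2), (4, 7)]
inside = covered(tri, 4, 3)     # a pixel center near the centroid
outside = covered(tri, 0, 0)    # a pixel center outside the triangle
```

Because every pixel is tested independently, this formulation parallelizes trivially -- one reason it displaced incremental scan-conversion on GPUs.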

In the new edition, we’ve taken a different approach, briefly describing in the first chapter many of the main ideas of graphics, which are then treated in successively greater detail and mathematical sophistication in multiple later chapters. 

A clock modeled in WPF in Chapter 2; the Durer engraving used in Chapter 3.

Pictures early!

We start with WPF’s 2D features, which give students a chance to make pictures – indeed, animated pictures – in the second chapter, and to learn about hierarchical modeling as they build a model of a clock face. The same 2D foundation is used, in Chapter 3, to produce output for a very basic raytracer based on the famous Durer engraving shown above. Almost immediately afterward the student learns about WPF 3D and its basic Blinn-Phong shading model, after which we describe a couple of test-bed programs in WPF that the student can use to perform exercises throughout the book. 
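The Blinn-Phong model the student meets here is small enough to write down directly. A scalar, single-light sketch (my illustration, not WPF's implementation): the diffuse term is n·l, and the specular term is (n·h) raised to a shininess exponent, where h is the half-vector between the light and view directions.

```python
import math

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return [x / m for x in v]

# Single-light Blinn-Phong intensity (illustrative): kd and ks are the
# diffuse and specular coefficients; n, l, v are the surface normal,
# light direction, and view direction.
def blinn_phong(n, l, v, kd=0.8, ks=0.2, shininess=32):
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize([a + b for a, b in zip(l, v)])   # half-vector
    dot = lambda a, b: max(0.0, sum(x * y for x, y in zip(a, b)))
    return kd * dot(n, l) + ks * dot(n, h) ** shininess

# Light and viewer both along the normal: full diffuse plus full specular.
peak = blinn_phong([0, 0, 1], [0, 0, 1], [0, 0, 1])
grazing = blinn_phong([0, 0, 1], [1, 0, 0.01], [0, 0, 1])
```

With light and eye aligned with the normal the intensity is kd + ks; as the light slides toward grazing incidence both terms fall away, which is the qualitative behavior the student sees in WPF 3D's materials.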

Onion peeling

At the end of the introduction we lay out a few basic facts about light, a little mathematics, and something about the representation of shape in graphics – just enough to let a student make a first renderer. As we work through the first several chapters, topics like clipping and transformations arise naturally, and efficiency considerations lead to a discussion of how best to represent shapes with meshes. A few chapters further along, we revisit many of these ideas with greater sophistication.

Morgan McGuire provides a wonderful mid-book chapter that summarizes the main current representations of light, of shape, and of light transport, covering each in enough detail to let the student begin to see the big picture of how efficiency in one area may complicate another. It’s the most “computer-sciency” chapter the students have seen at this point: it goes into detail on fixed- and floating-point representations of numbers, memory structure in z-buffers (and other buffers), precomputation and caching for geometric models, and so on.

The next chapter puts much of this information to use in building a slightly more sophisticated (but not recursive) raytracer, a rasterizing renderer, and a hardware-based renderer, and shows how the three produce identical results. This emphasizes the critical difference between raytracing and rasterization – the reordering of the two main loops – and its consequences for caching, memory access patterns, and so on. In later chapters, we return to raytracing in its recursive form, together with more sophisticated scattering models for light-surface interaction, and develop a path tracer and a photon-mapping renderer. And in the final chapter, on graphics hardware, we return to hardware-based rendering. 
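The loop reordering can be shown schematically (a sketch of the idea, not the book's renderers). Ray tracing puts pixels on the outer loop and scene objects on the inner one; rasterization swaps them, using a depth buffer to resolve visibility. With the same closest-hit rule, both produce the same image:

```python
import math

# hit(obj, pixel) is assumed to return the depth at which obj covers
# the pixel, or None if it doesn't -- a stand-in for intersection or
# coverage testing.

def raytrace(objects, pixels, hit):
    img = {}
    for p in pixels:                    # outer loop: pixels
        best = None
        for obj in objects:             # inner loop: scene objects
            t = hit(obj, p)
            if t is not None and (best is None or t < best[0]):
                best = (t, obj["color"])
        img[p] = best[1] if best else None
    return img

def rasterize(objects, pixels, hit):
    img = {p: None for p in pixels}
    depth = {p: math.inf for p in pixels}   # the z-buffer
    for obj in objects:                 # outer loop: scene objects
        for p in pixels:                # inner loop: pixels
            t = hit(obj, p)
            if t is not None and t < depth[p]:
                depth[p], img[p] = t, obj["color"]
    return img

# A toy "scene": each object lists the pixels it covers and its depth.
objects = [{"color": "red",  "z": 2.0, "pix": {(0, 0), (1, 0)}},
           {"color": "blue", "z": 1.0, "pix": {(1, 0), (1, 1)}}]
pixels = [(0, 0), (1, 0), (1, 1)]
hit = lambda o, p: o["z"] if p in o["pix"] else None
```

The outer loop determines what stays hot in memory: the ray tracer revisits every object per pixel, while the rasterizer streams each object once and revisits the frame and depth buffers -- which is exactly the caching consequence discussed above.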

This repeated treatment of the same topic allows the student to develop sophistication before facing the full complexities of the topic in its greatest generality. It also lets a teacher select how deeply to address a topic by including some chapters in the syllabus and omitting others.  

The Fourier transform of a box is a sinc.

Extra material

Another feature of writing a book in the internet age is that we can provide lots more to our readers. We’re working on releasing source code for many of the illustrations in the book; a number of them (like the one illustrating that the Fourier transform of a box filter is a sinc function, shamelessly adapted from Bracewell’s Fourier analysis book) were generated by programs in Matlab and other environments. We also provide example programs for download, and the basic ideas in WPF are explained using “Browser Apps” (created by David Sklar), in which the student can edit WPF 2D XAML code in a browser and get instant feedback on the results, without ever installing any software on his or her computer at all. 
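The box-to-sinc fact behind that illustration is easy to verify numerically. A sketch (not the book's Matlab code): the Fourier transform of a unit box on [-1/2, 1/2] is F(w) = sin(w/2)/(w/2), which we can check by integrating e^{-iwx} over the box's support.

```python
import cmath
import math

# Numerically integrate e^{-i w x} over [-1/2, 1/2] (the box's support)
# by the trapezoid rule; the box is real and even, so the result is real.
def box_ft(w, n=4000):
    a, b = -0.5, 0.5
    h = (b - a) / n
    s = 0.5 * (cmath.exp(-1j * w * a) + cmath.exp(-1j * w * b))
    for k in range(1, n):
        s += cmath.exp(-1j * w * (a + k * h))
    return (s * h).real

# The analytic answer: sinc, in the sin(w/2)/(w/2) normalization.
def sinc(w):
    return 1.0 if w == 0 else math.sin(w / 2) / (w / 2)

# The numeric transform matches sinc across a range of frequencies.
errors = [abs(box_ft(w) - sinc(w)) for w in (0.0, 1.0, math.pi, 10.0)]
```

This is the workhorse fact behind aliasing discussions: filtering with a box in space multiplies the spectrum by a slowly decaying sinc, which is why box filtering leaves visible ringing in the frequency domain.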

Andy hard at work signing books. McGuire, Sklar, Hughes, Akeley, van Dam, Foley, Feiner at the SIGGRAPH 2013 book-signing event.

Launching the book

The new edition was launched at SIGGRAPH 2013, with a launch party followed by a book-signing on the show floor. Judging from the lines at the signing, people seem eager to have the book, and our first review on Amazon gave us five stars…we’re off to a good start!