I read Yet Another Great Rant by Steve Yegge this morning. He’s had some great rants over the years, and today’s was as thought-provoking as any. His thesis, in a nutshell, is that there’s a liberal-conservative spectrum in software development, much as in political life. He goes on to explore that alleged continuum and to capture some of the essential philosophical tenets of each “side.”
Steve Yegge on Software Politics
When I read the two “creeds”, I found myself more in sympathy with the “liberal” end of the spectrum, emphasizing contingency, uncertainty, and the need to respond to change. That didn’t surprise me. What did surprise me a little was my impatience with some of the (admittedly rhetorical/hyperbolic) statements on both sides. Just as I have less and less patience with the rhetorical and hyperbolic statements from the regular political spectrum today.
Which is the “right” approach to software development? More conservative or more liberal? Here’s the TL;DR, the Zen secret of the greatest development teams:
It depends.
Let’s compare just one pair of statements from Yegge’s spectrum extremes, concerning bugs. Here’s the conservative:
Software should aim to be bug free before it launches. (Banner claim: “Debugging Sucks!”) Make sure your types and interfaces are all modeled, your tests are all written, and your system is fully specified before you launch. Or else be prepared for the worst!
OK, we know that’s wrong. Many systems are not, and more importantly CANNOT be, fully specified before construction begins. Why? Because the act of construction itself is essential to revealing requirements. Forcing full specification upfront significantly increases the chance of missed requirements and lowers user satisfaction.
So now let’s look at the “liberal” end. If we didn’t like the conservative view of bugs, this statement should be preaching to the choir:
Bugs are not a big deal. They happen anyway, no matter how hard you try to prevent them, and somehow life goes on. Good debuggers are awesome pieces of technology, and stepping through your code gives you insights you can’t get any other way. Debugging and diagnosing are difficult arts, and every programmer should be competent with them. The Christmas Eve Outage scenario never, ever happens in practice — that’s what code freeze is for. Bugs are not a big deal! (This belief really may be the key dividing philosophy between Conservative and Liberal philosophies.)
OK, we know this is wrong too. Bugs aren’t a big deal — except when they are. A bug just cost Knight Capital $440M. That seems pretty big to me. A bug on Apple’s website on a product launch day? Heads and other parts would roll.
A project’s sensitivity to bugs is just one of a family of attributes that business analysts call “nonfunctional requirements.” Functional requirements, as the saying goes, say what a program should DO: add up invoices, fly to the moon, simulate a volcano. Nonfunctional requirements specify what the program should BE: fast, exceptionally user-friendly, highly maintainable, bug-free, etc.
So quality, measured as freedom from errors, is in fact just another requirement, like system platform, user interface, or behavior. Making absolute statements about the importance or non-importance of bugs (implicitly across many or indeed all possible projects) is as nonsensical as making absolute statements about functional requirements across all projects. The statement “bugs are not a big deal” is as nonsensical as the statement “all programs should be localizable into thirty languages.” The relative importance of bugs is not a given — it’s a project requirement, and like other requirements, must be DISCOVERED, and understood.
If you are building a power tool for a small team of five highly knowledgeable users, used to generate intermediate results that get inspected by humans and used in other parts of a process, the program can probably tolerate a certain level of error. If errors manifest as slightly obscure dialogs, the power users can probably click through them, see that their results were unaffected, go on with their work, and shoot you a note asking you to clean that dialog up someday.
On the other hand, if you are building a customer-facing, business-to-consumer website for a few million of a multinational corporation’s closest friends, you need to be fanatical about trapping and recovering from errors before a user sees them. ANY disruption of the browsing experience will be enough to drive visitors from the site. Let a secure certificate error pop up in the browser during online banking, and that user will leave in a hurry and with prejudice.
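To make that second posture concrete, here is a minimal sketch in TypeScript of what “trap and recover before the user sees it” can look like. Everything in it is hypothetical — the function names (`fetchProductDetails`, `renderCachedFallback`, `logForOps`) and the fallback strategy are illustrative assumptions, not any particular site’s architecture.

```typescript
// A minimal sketch of the "fanatical about trapping errors" posture for a
// customer-facing site: any failure is caught, logged for the ops team, and
// replaced with a degraded-but-usable fallback, so the visitor never sees a
// raw error. All names here are hypothetical placeholders.

interface ProductPage {
  html: string;
  degraded: boolean; // true when we served a fallback instead of live data
}

async function fetchProductDetails(productId: string): Promise<string> {
  // Stand-in for a call to a real backend service; assumed to sometimes fail.
  throw new Error(`backend timeout fetching ${productId}`);
}

function renderCachedFallback(productId: string): string {
  // Stand-in for serving a cached or static version of the page.
  return `<main>Product ${productId} (cached copy; prices may be stale)</main>`;
}

function logForOps(context: string, err: unknown): void {
  // Stand-in for real telemetry; the point is the error goes to us, not the user.
  console.error(`[ops] ${context}:`, err);
}

async function renderProductPage(productId: string): Promise<ProductPage> {
  try {
    const html = await fetchProductDetails(productId);
    return { html, degraded: false };
  } catch (err) {
    // Trap the error before it reaches the browser and degrade gracefully.
    logForOps(`render product ${productId}`, err);
    return { html: renderCachedFallback(productId), degraded: true };
  }
}

// Usage: the caller always gets renderable HTML, never an exception.
renderProductPage("sku-1234").then((page) => {
  console.log(page.degraded ? "served fallback" : "served live page");
});
```

The power-tool project from the previous paragraph could skip nearly all of this ceremony; the B2C site cannot. Same technique, entirely different weight, depending on the nonfunctional profile.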
So, on bugs: conservative creed, WRONG. Liberal creed, WRONG again. ANY absolute statement: well, WRONG. A pattern begins to emerge.
By the way, this is not a criticism of Yegge’s post at all. I think these personality types absolutely do exist in software development.
I could go through the rest of the statements in the opposed creeds, but the pattern is the same. The right answer to each question depends heavily on the nonfunctional requirements of the project in question. Some projects are highly critical: errors or failures in the program can lead to loss of huge sums of money, human injury or death. If you don’t take a maximally conservative approach in building such software you’re … not being wise, at all.
Some projects are designed to support processes that are themselves poorly understood and can only be better understood by attempting to model them in software (or some other medium) and through iterative experimentation in actually running the process. Requirements fluidity is a third, and critical, axis of software project attributes. The more fluid the requirements, the more “liberal” an approach one must take. (But what about a project with highly fluid requirements, where errors can cause loss of money or life? Well, the best thing to do with such programs is not to build them, since that tension is nigh-unresolvable. The best answer I can give is that a successful approach will require extraordinary flexibility, an appetite for risk, and a decent amount of luck as well.)
So what does this make my outlook? I suppose this all makes me a Pragmatist à la Richard Rorty. And I believe the root of that pragmatism, in software development, must return for its answers to the nonfunctional dimensions of a project, including nonfunctional HUMAN dimensions, such as stakeholder tolerance for uncertainty and risk, technical sophistication, control of budget and the like. The “right” approach for a software project can only be determined empirically, by sampling the project’s nonfunctional dimensions and crafting an approach that meets the project on its own terms: correct level of error tolerance, correct amount of up-front requirements, correct level of design rigor, correct amount of QA (wait, the right amount of QA is negotiable? Yes. It is). If you attempt to shape a project approach based not on empirical factors proceeding from the project itself, but on absolute considerations (“political” ideas, if you will), you’re likely to get it wrong.
To bring it back down to earth: if you’re on a software project and you don’t have an intimate understanding of its nonfunctional profile, then any success you achieve with a principle-driven, non-empirical approach is essentially an accident.
I think Yegge is right to say that, in software development, and likely in life as well, our politics are shaped by bad experiences of the past and by those who have influenced us. But the project in front of you doesn’t care what burned you in the past, and it doesn’t care what Professor McGilbert, the greatest professor of all time (or Alan Perlis, in my case), thought. It has its own unique needs, and if you’re going to succeed with it, you need to shut off the voices in your head and assess the work in front of you on its own terms.
Or, in the words of Edna Mode (The Incredibles):
I never look back, dahling. It distracts from The Now.