Monday, May 19, 2008

The Prototype-Production Knob

Once you've seen the progression that software goes through from birth as a hacker's one-night-stand, to 3-man garage-startup's baby, to Small Corp's stubborn adolescent, to The-Next-Microsoft's bloated 1000-developer software-engineering nightmare... you simply can't ignore it and the programming language feature it seems to demand.


In the beginning, when you have an idea, you want a flexible medium to experiment with. You really don't know where you're going to end up, so you want your medium to just get out of the way and let your creative juices flow. It's the same in every industry, really, whether it be software engineering, architecture, painting, or writing. But once you have a product, and hundreds of other people besides you care about the outcome of every little detail, from which shade of gray its background is to what happens when you press the Tab key while the last textbox of the final dialog of your Import Wizard has the focus, you have to worry about things like quality assurance.1

Software, like concrete, hardens over time into a rigid, immovable mass. This happens as the original developers move on or simply forget about code they wrote and haven't touched in a while. Code gets pushed down into layers of abstraction, becoming black boxes that no one ever looks into unless something goes wrong. This is the natural progression as new building blocks are created by combining the functionality of older ones. The fringes of development churn like mad, but over time newer modules come to depend on older ones, weighing them down and discouraging change.

On top of that, sheer code size prevents change. Once you have a massive software system built from thousands upon thousands of man-hours, you simply can't throw it away and start from scratch. Maybe in an ideal world where you didn't have to worry about paying rent... but if you intend to make a living off of software, it simply isn't an option.

Once a software system has grown that large, you're stuck with it. Steve Yegge talked about this in a blog post, but I think most people who read it just skimmed it, voted it up on their favorite news site, and moved on to the next article. This is so fundamental: size! Not some theoretical cyclomatic metric. Size! And part of the reason size is so important is that once you have a sufficiently large code-base, re-writing it is no longer an option. Which means changing it is no longer an option.

The code literally solidifies!

The Knob

Concrete naturally hardens over time. But what if your concrete were rigid even while you wanted to constantly mold it? Or what if it never completely hardened, even after you found the perfect form? That is what programming languages are like today: you have to choose between a static language that's too rigid to prototype in and a dynamic language that never completely hardens, even in production.

HTML and PHP are good examples of languages that never completely harden. They were great at first; it was so easy to dive right in, and they blew up in popularity as a result. But years later we are stuck with large websites and code-bases which are living nightmares to maintain. Although this is partially the responsibility of the developers, as good developers can write good code in any language, the language itself should support this transition, not hinder it.

On the opposite side, we have languages like ML and Haskell, whose type-systems are so strict that most people give up on them before writing a single useful program.2 They are not flexible enough for constant molding. I, of all people, understand the benefits of static type-systems. But I'm beginning to realize that when you're prototyping, it's okay to have some runtime errors. In fact, it's desirable, because prototypes are by nature underspecified. Any error that is caught statically must necessarily be determined by analyzing the source code alone, not its execution, which means that I must write more code to single out those error cases. Like the None branch in a case expression that "shouldn't happen", it is literally error-handling code required by the compiler.
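To make that concrete, here is a minimal Python sketch of the same situation (the function names are mine, with Python's Optional standing in for ML's option type): a static checker such as mypy insists that the None case be handled, even at call sites where it "shouldn't happen".

```python
from typing import Optional

def find_index(items: list[str], target: str) -> Optional[int]:
    """Return the position of target, or None if it is absent."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return None

def describe(items: list[str], target: str) -> str:
    idx = find_index(items, target)
    # A static checker rejects using `idx` as an int unless the None case
    # is handled -- even when the caller "knows" target is present.
    if idx is None:
        raise ValueError(f"{target!r} not found")  # the branch that "shouldn't happen"
    return f"{target!r} is at position {idx}"
```

The `if idx is None` branch is exactly the compiler-mandated error handling described above: dead weight while prototyping, a welcome guarantee in production.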

Error-handling code is, by definition, code that deals with uncommon special cases. It's common knowledge that many code paths don't get exercised until perhaps years after the software is out in the wild. Why then should I care about catching them all statically in my prototype, or even in the first released version? It's naive to think I even can catch them all.

And the problem with writing all this extra code is not that it takes longer to write the first time, but that it takes longer to change every time you do, which is many, many times while you are still in the prototyping phase and the code is constantly churning. So the code-base starts out rigid and grows even more rigid, faster.

What we need is a dial, a knob, that can be tuned toward whichever end of the spectrum we're at: flexibility for a prototype, or rigidity for a production app.

Breaking Things

The problem stems from the fact that when you modify code you didn't write, you can't see the big picture. You only have a local view of the code you're modifying, so you don't completely understand the ramifications of your changes.

People fail to respect the great differences between writing new code and modifying (or maintaining) code they didn't write.

Sure, both require knowledge of programming, but they're completely different activities. In the film industry, the corresponding activities are called completely different things: directing and editing. Both require knowledge of film making, and experience doing one can help improve skills in the other, but they are fundamentally different tasks. When I am writing code from scratch, I start with a blank editor and combine language constructs that I am already intimately familiar with. When I am modifying code that I am not familiar with, my biggest concern is: will this change break anything? And most of the time, that's a difficult question to answer because I only have a local view of the code.3 I don't completely understand the entire system and can't see the big picture of what the change will affect. So I usually end up being extremely conservative, inevitably creating cruft that is otherwise unnecessary. Done over and over again, this can be extremely harmful to a code-base.

Basically, if you're modifying someone else's code, it's because that code cannot, for one reason or another, be re-written. That code is more rigid, closer to the production end of the spectrum. Now, a lot of effort (and resources) goes into making sure that production code works. So when you're adding to or modifying code written by someone else, you don't want to change anything that already works and undo all that effort, nullifying the resources already spent on it.

Today's PLs

It would be nice if our language allowed us to keep our code nimble as long as possible, and then, when we were ready to push code into an abstraction or let someone else maintain it, solidify the code on cue.

Perl's use strict allows you to adjust the amount of static checking done on a program. However, no sane programmer that I know of ever leaves it disabled for a program more than a few lines long. This seems to say that without the strict option enabled, the language is too flexible even for prototyping. Paul Graham even experimented with implicit variable declarations in Arc, a language designed specifically for prototyping, but decided against them.
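Python happens to exhibit the same hazard that use strict guards against: an assignment to a misspelled name silently creates a fresh variable instead of raising an error. A small illustrative sketch (the function names are hypothetical):

```python
def total_price(prices):
    total = 0.0
    for p in prices:
        total = total + p
    return total

def total_price_buggy(prices):
    total = 0.0
    for p in prices:
        totall = total + p  # typo: silently binds a brand-new variable `totall`
    return total            # the loop never updated `total`, so this is 0.0
```

With mandatory declarations (Perl's strict vars, or a lint pass) the typo is caught before the program runs; without them, it becomes a silent wrong answer.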

The closest feature I know of that resembles what I'm thinking of is optional type declarations. Languages which allow programmers to omit types, and to insert type-constraints when and where they please, are a step in this direction. They allow for flexibility during the prototyping phase, and a few more compiler-checked guarantees once the constraints are inserted. Additionally, the declarations document the code and allow the compiler to perform type-directed performance optimizations, two things more valuable toward the production side of the spectrum. When an app is a prototype, performance usually isn't as important as getting feedback on working features, and documentation is a waste because the code is more likely than not to change, rendering any documentation obsolete (and even misleading). Besides, you can always ask the developer who owns the code, as he's still working on it and it's fresh in his mind.
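Python's gradual type hints are a fair illustration of the idea (a sketch under that assumption, with hypothetical function names): annotations can be omitted entirely while prototyping and added later, at which point a checker like mypy can verify call sites and the signature doubles as documentation.

```python
# Prototype version: no annotations, anything goes at runtime.
def scale(point, factor):
    return (point[0] * factor, point[1] * factor)

# "Hardened" version: the same function with type constraints added.
# A static checker can now flag bad call sites before the code runs,
# and the signature documents exactly what the function expects.
def scale_typed(point: tuple[float, float], factor: float) -> tuple[float, float]:
    x, y = point
    return (x * factor, y * factor)
```

The runtime behavior is identical; only the amount of static checking, and the self-documentation, has been turned up.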

Lispers, I'm waiting for you to chime in right about now stating how Lisp has had this feature all along. And that's great. But if people don't understand why it's so great, they won't use or support it.

So how else can we tune a programming language from flexible to rigid? From dynamic to static?

Feature Flip-Flopping

I suppose that any feature that separates flexible languages from rigid ones is a candidate for being a knob in this regard. But I'm pretty sure this is fallow territory with lots of room for improvement.

For one thing, I think it would be useful to restrict which kinds of decisions are delayed until runtime. The more that is delayed until runtime, the more possibilities there are for errors that are uncatchable until the last moment, driving up their cost. If you can catch an error at compile-time, or even at edit-time with a little red squiggly underline directly in the editor, the cost is only a few moments of a developer's time to fix it. But if that error is not caught until the software is being run by a client (heaven forbid, on a client's 7-year-old desktop 273.1 miles away running Windows ME), not only is it extraordinarily difficult to reproduce and track down, but one of your paying customers is unhappy, and just might blog about how terrible your software is for all his friends to hear about it.

What kinds of decisions am I talking about? Ones that prevent reasoning about the code without executing it, like modifying the symbol table based on runtime values, calling eval, using reflection, or using dynamic dispatching. These things throw most, if not all, of your reasoning out the window. In general, it's not possible to determine what the effect of a call to eval will be, so any guarantees are shot. With dynamic dispatching, it's never quite clear at compile-time what code will be executed as a result of a function call, so again, just about anything could happen. All bets are off.
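In Python terms (the class and function names here are my own), both hazards are easy to demonstrate: with getattr, the method being called is chosen by a runtime string, and with eval, the code being run doesn't even exist until runtime, so no static tool can say what either call will do.

```python
class Greeter:
    def hello(self):
        return "hello"
    def goodbye(self):
        return "goodbye"

def dispatch(obj, method_name):
    # Reflection: the method is chosen by a runtime string. A static
    # analyzer cannot know which method runs, or whether it exists at all.
    return getattr(obj, method_name)()

print(dispatch(Greeter(), "hello"))   # -> hello

# eval: the "program" is itself a runtime value, so any static guarantee
# about what it does is impossible in general.
expr = "2 ** 10"
print(eval(expr))                     # -> 1024, but expr could have been anything
```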

Again, these features are great for prototyping. They reduce the amount of code you have to write, reducing the amount of time you have to spend changing it while the code is still churning. Additionally, you are probably the one who wrote all the code, so there's no issue of not being able to see the big picture to understand it.

However, at the same time, these features are bad for the maintainability of production code. It's true that less code is easier to maintain than more code, as there is simply less for maintainers to understand. But dynamic features actually make code more difficult to grok because they are more abstract. Calculating offsets of fields. Generating code. Modifying code. Data-flow analysis. Code-transforming optimizations. All of these are normal programming concepts. But if you add to the end of each phrase "in bed"... sorry, I mean, if you add to the end of each phrase "at runtime", they suddenly become horrors!4 In the same way that pointers are simply more abstract, so too are eval and dynamic features like it.

Am I suggesting that people should write code with eval and dynamic dispatching, and then when the code becomes stable, turn off those features and re-write the code without them? It does seem like the logical conclusion from the above observations.

This doesn't sit right with me though. For one, it would mean re-writing code just when you wanted to solidify it, undoing all the testing effort that went into it.

The first thing that comes to my mind is: is there a way we can compile these features away when they're switched off? Perhaps by collecting data about runtime values and then generating static code that is functionally equivalent for the observed scenarios, explicitly triggering a runtime error otherwise? I honestly don't know what the right answer is, but I hope I've raised some interesting questions for others to consider.
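One hedged sketch of what "compiling a dynamic feature away" might look like, in Python (all names here are hypothetical): record the types observed at a dynamic call site during prototype runs, then solidify it into static dispatch over exactly those cases, with an explicit runtime error for anything unobserved.

```python
observed_types = set()

def dynamic_area(shape):
    # Prototype phase: fully dynamic dispatch, but we log what we see.
    observed_types.add(type(shape).__name__)
    return shape.area()

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius * self.radius

# Prototype runs exercise the call site with two kinds of shape...
dynamic_area(Square(2))
dynamic_area(Circle(1))

# ...so the "solidified" version dispatches statically over exactly the
# observed cases and fails loudly on anything else.
def static_area(shape):
    if isinstance(shape, Square):
        return shape.side * shape.side
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius * shape.radius
    raise TypeError(f"unobserved type: {type(shape).__name__}")
```

Here the solidification step is done by hand; the open question above is whether a language could perform it automatically when the knob is turned toward production.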
1. Ironically, because you worry about quality assurance and have an entire process to ensure the quality of a product before releasing it, you increase the time it takes for a release iteration, thus increasing the cost of bugs or missing requirements. And this is on top of whatever extra cost the QA itself took in the first place. But I guess this is like car insurance.
2. One problem is that once a type-system becomes sufficiently complicated, it requires an intimate understanding of it to write programs in the language, which can be a barrier to learning at the least.
3. This is where unit- and regression-tests become imperative.
4. That is, they become horrors to all but compiler writers and metaprogrammers.
