Monday, June 9, 2008
That's it! I've moved this blog to Tumblr. Read the details.
On top of that, I got a domain plpatterns.com and a new feed.
Monday, May 19, 2008
The Prototype-Production Knob
Once you've seen the progression that software goes through from birth as a hacker's one-night-stand, to 3-man garage-startup's baby, to Small Corp's stubborn adolescent, to The-Next-Microsoft's bloated 1000-developer software-engineering nightmare... you simply can't ignore it and the programming language feature it seems to demand.
Hardening
In the beginning, when you have an idea, you want a flexible medium to experiment with. You really don't know where you're going to end up, so you want your medium to just get out of the way and let your creative juices flow. It's the same in every industry really, whether it be software engineering, architecture, painting, or writing. But once you have a product, and hundreds of other people besides you care about the outcome of every little detail, everything from which shade of gray its background is to what happens when you press the Tab key when the last textbox of the final dialog of your Import Wizard has the focus, you have to worry about things like quality assurance.1
Software, like concrete, hardens over time, becoming a rigid, unmovable mass. This happens as the original developers move on or simply forget about code they wrote and haven't touched for a while. Code gets pushed down into layers of abstraction, becoming black boxes that no one ever looks into unless something goes wrong. This is the natural progression as new building blocks get created by combining the functionality of older building blocks. The fringes of development churn like mad, but over time, newer modules start depending on them, weighing them down by discouraging change.
On top of that, sheer code size prevents change. Once you have a massive software system built from thousands upon thousands of man-hours, you simply can't throw it away and start from scratch. Maybe in an ideal world where you didn't have to worry about paying rent... but if you intend to make a living off of software, it simply isn't an option.
Once a software system has grown so large, you're stuck with it. Steve Yegge talked about this in a blog post, but I think most people who read it just skimmed it, voted it up on their favorite news site, and moved on to the next article. This is so fundamental — size! Not some theoretical cyclomatic metric. Size! And part of the reason size is so important is that once you have a sufficiently large code-base, re-writing it is no longer an option. Which means, changing it is no longer an option.
The code literally solidifies!
The Knob
Concrete naturally hardens over time. But what if your concrete were rigid even when you wanted to constantly mold it? Or what if it never completely hardened, even after you found the perfect form? That is what programming languages are like today. You have to choose between the static language that's too rigid to prototype with and the dynamic language that never completely hardens even in production.
HTML and PHP are good examples of languages that never completely harden. They were great at first; it was so easy to dive right in, and they blew up in popularity as a result. But years later we are stuck with large websites and code-bases which are living nightmares to maintain. Although this is partially the responsibility of the developers, as good developers can write good code in any language, the language itself should support this transition, not hinder it.
On the opposite side, we have languages like ML and Haskell whose type-systems are so strict that most people give up on them before writing a single useful program.2 They are not flexible enough for constant molding. I, of all people, understand the benefits of static type-systems. But I'm beginning to realize that when you're prototyping, it's okay to have some runtime errors. In fact, it's desirable, because prototypes are by nature underspecified. Any error that is caught statically must necessarily be determined by analyzing the source code alone, not its execution, which means that I must write more code to single out those error cases. Like the None branch in a case expression that "shouldn't happen", it is literally error-handling code that is required by the compiler.
Error-handling code is — by definition — code that deals with uncommon special cases. It's common knowledge that most code paths don't get exercised until perhaps years after being out in the wild. Why then should I care about catching them all statically in my prototype? Or even in the first released version? It's naive to think I even can catch them all.
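To make that concrete, here is a hedged modern-Python analogue (my own illustration, not from the original post): with an optional static checker such as mypy, an Optional result forces you to write the "shouldn't happen" branch before the checker will accept the code.
from typing import Optional
USERS = {42: "alice"}
def lookup_user(uid: int) -> Optional[str]:
    return USERS.get(uid)
name = lookup_user(42)
if name is None:
    raise RuntimeError("unknown user")  # the branch the checker makes you write, even when it "can't happen"
print(name.upper())  # safe to use only after the check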
And the problem with writing all this extra code is not that it takes longer to write the first time, but that it takes longer to change each time that you do, which is many, many times when you are still in the prototyping phase and the code is constantly churning. So the code-base starts out rigid and gets even more rigid, faster.
What we need is a dial — a knob — that can be tuned toward whichever end we're at: flexibility for a prototype or rigidity for a production app.
Breaking Things
The problem stems from the fact that when you modify code you didn't write, you can't see the big picture. You only have a local view of the code you're modifying, so you don't completely understand the ramifications of your changes.
People fail to respect the great differences between writing new code and {modifying or maintaining} code they didn't write.
Sure, both require knowledge of programming, but they're completely different activities. In the film industry, the corresponding activities are called completely different things: directing and editing. Both require knowledge of film making, and experience doing one can help improve skills in the other, but they are fundamentally different tasks. When I am writing code from scratch, I start with a blank editor and combine language constructs that I am already intimately familiar with. When I am modifying code that I am not familiar with, my biggest concern is: will this change break anything? And most of the time, that's a difficult question to answer because I only have a local view of the code.3 I don't completely understand the entire system and can't see the big picture of what the change will affect. So I usually end up being extremely conservative, inevitably creating cruft that is otherwise unnecessary. Done over and over again, this can be extremely harmful to a code-base.
Basically, if you're modifying someone else's code, it's because that code can not, for one reason or another, be re-written. That code is more rigid, closer to the production end of the spectrum. Now... a lot of effort (and resources) goes into making sure that production code works. So when you're adding to or modifying code written by someone else, you don't want to change anything that already works and undo all that effort, nullifying the resources already spent on it.
Today's PLs
It would be nice if our language allowed us to keep our code nimble as long as possible, and then, when we were ready to push code into an abstraction or let someone else maintain it, solidify the code on cue.
Perl's use strict allows you to adjust the amount of static checking done on a program. However, no sane programmer that I know of ever turns this switch off for a program more than a few lines long. This seems to say that without the strict option enabled, the language is too flexible even for prototyping. Paul Graham even experimented with implicit variable declarations in Arc, a language designed specifically for prototyping, but decided against it.
The closest feature I know of that resembles what I'm thinking of is optional type declarations. Languages which allow programmers to omit types and optionally insert type-constraints when and where they please are a step in this direction. This allows for flexibility during the prototyping phase and a few more compiler-checked guarantees once the constraints are inserted. Additionally, it documents the code and allows the compiler to take advantage of type-directed performance optimizations, two things more valuable towards the production side of the spectrum. When an app is a prototype, performance usually isn't as important as getting feedback on working features, and documentation is a waste because the code is more likely than not to change, rendering any documentation obsolete (and even misleading). Besides, you can always ask the developer who owns the code, as he's still working on it and it's fresh in his mind.
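As a rough sketch of what that knob feels like in practice (illustrated here with modern Python type hints, which are my own example and were not available when this was written): the same function can start out with no declarations and pick them up as it moves toward production.
def total(prices, tax):
    # prototype version: no declarations, maximum flexibility
    return sum(prices) * (1 + tax)
def total_typed(prices: list[float], tax: float) -> float:
    # "hardened" version: same body, but an optional checker (e.g. mypy)
    # can now verify callers statically, and the signature documents intent
    return sum(prices) * (1 + tax)
print(total([1.0, 2.0], 0.05), total_typed([1.0, 2.0], 0.05))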
Lispers, I'm waiting for you to chime in right about now stating how Lisp has had this feature all along. And that's great. But if people don't understand why it's so great, they won't use or support it.
So how else can we tune a programming language from flexible to rigid? From dynamic to static?
Feature Flip-Flopping
I suppose that any feature that separates flexible languages from rigid ones is a candidate for being a knob in this regard. But I'm pretty sure this is fallow territory with lots of room for improvement.
For one thing, I think it would be useful to restrict which kinds of decisions are delayed until runtime. The more that is delayed until runtime, the more possibilities there are for errors that are uncatchable until the last moment, driving the cost of the errors up. If you can catch an error as early as compile-time, or even at edit-time with a little red squiggly underline directly in the editor, the cost is only a few moments of a developer's time to fix it. But if that error is not caught until it's being run by a client — heaven forbid, on a client's 7-year-old desktop 273.1 miles away running Windows ME — not only is it extraordinarily difficult to reproduce and track down the error, but one of your paying customers is unhappy, and just might blog about how terrible your software is for all his friends to hear about it.
What kinds of decisions am I talking about? Ones that prevent reasoning about the code without executing it, like modifying the symbol table based on runtime values, calling eval, using reflection, or using dynamic dispatching. These things throw most, if not all, of your reasoning out the window. In general, it's not possible to determine what the effect of a call to eval will be, so any guarantees are shot. With dynamic dispatching, it's never quite clear at compile-time what code will be executed as a result of a function call, so again, just about anything could happen. All bets are off.
Again, these features are great for prototyping. They reduce the amount of code you have to write, reducing the amount of time you have to spend changing it while the code is still churning. Additionally, you are probably the one who wrote all the code, so there's no issue of not being able to see the big picture to understand it.
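A minimal sketch of the problem (my own Python illustration, not from the post): the argument to eval is just a string that arrives at runtime, so no analysis of the source alone can say what the call will do.
def run_rule(rule_source, record):
    # rule_source arrives at runtime; it might be "record['price'] - 10" today
    # and something entirely different tomorrow, so no static analysis of this
    # file alone can determine what the call will do
    return eval(rule_source, {"record": record})
print(run_rule("record['price'] - 10", {"price": 100}))  # 90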
However, at the same time, these features are bad for the maintainability of production code. It's true that less code is easier to maintain than more code, as it is simply less that maintainers have to try to understand. But dynamic features actually make code more difficult to grok because they are more abstract. ... Calculating offsets of fields. Generating code. Modifying code. Data-flow analysis. Code-transforming optimizations. All of these things are normal programming concepts. But if you add to the end of each phrase "in bed"... Sorry, I mean, if you add to the end of each phrase "at runtime", they suddenly become horrors!4 In the same way that pointers are simply more abstract, so too are eval and dynamic features like it.
Am I suggesting that people should write code with eval and dynamic dispatching, and then, when the code becomes stable, turn off those features and re-write the code without them? It does seem like the logical conclusion from the above observations.
This doesn't sit right with me though. For one, it would mean re-writing code just when you wanted to solidify it, undoing all the testing effort that went into it.
The first thing that comes to my mind is: is there a way we can compile these features away when they're switched off? Perhaps by collecting data about runtime values and then generating static code which is functionally equivalent for the observed scenarios, explicitly triggering a runtime error otherwise? I honestly don't know what the right thing to do is, but I hope I've raised some interesting questions for others to consider.
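Here is one very rough sketch of that idea (entirely my own, in Python, and only a toy): record the types a dynamic call site actually sees while the knob is set to "flexible", then generate static dispatch code that covers only the observed cases and fails loudly on anything else. The handle_int and handle_str names in the generated output are hypothetical.
observed = {}
def record_call(site, arg):
    # while the knob is set to "flexible", remember what each dynamic call site actually saw
    observed.setdefault(site, set()).add(type(arg).__name__)
def specialize(site):
    # when "hardening", emit static dispatch code for the observed cases only
    lines = [f"def {site}_static(x):"]
    for t in sorted(observed.get(site, ())):
        lines.append(f"    if type(x).__name__ == '{t}': return handle_{t}(x)")
    lines.append(f"    raise RuntimeError('unobserved case at {site}')")
    return "\n".join(lines)
record_call("render", 3)
record_call("render", "hello")
print(specialize("render"))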
1. Ironically, because you worry about quality assurance and have an entire process to ensure quality of a product before releasing it, you increase the time it takes for a release iteration, thus increasing the cost of bugs or missing requirements. And this is on top of whatever extra cost it took to QA in the first place. But I guess this is like car insurance.
2. One problem is that once a type-system becomes sufficiently complicated, it requires an intimate understanding of it to write programs in the language, which can be a barrier to learning at the least.
3. This is where unit- and regression-tests become imperative.
4. That is, they become horrors to all but compiler writers and metaprogrammers.
Posted by Jon T at 2:16 PM 0 comments
Saturday, May 17, 2008
An Arc News Forum
For my monthly Day of Hacking, I decided to put together my own news forum, yc-style. Luckily, it's the demo app of Paul Graham's Arc.
Within a day, I got my own forum up and running, quick and dirty. But it took some effort drawing from multiple sources, so I thought I'd document the steps I took here, all in one place.
Setup a Host
Arc runs on top of MzScheme, which means you can't just run it on top of any old web host. Loosely following Liperati's tutorial on setting up a Scheme web server, I signed up for virtual hosting with Linode.
Once I signed up for Linode, I signed in and used their Linux Distribution Wizard to deploy a distribution. I chose Ubuntu 7.10 somewhat arbitrarily, but it saved a few steps installing MzScheme later. (You don't have to build it from source.)
Once my distro was deployed, which only took a few minutes, I booted my server and ssh-ed in. From there, I updated my system with apt-get. (The -y option automatically accepts, so if you'd rather get prompted before installs, omit this option.)
apt-get -y update
apt-get -y upgrade
apt-get -y install openssl
I found out (the hard way) that the news app depends on openssl, so you might as well install it now.
Install MzScheme
According to the Arc install page, Arc requires MzScheme version 352. I'm not sure if this is completely up to date, as I've heard people successfully use version 360. But I don't want to have to deal with issues, so I just used 352.
wget http://download.plt-scheme.org/bundles/352/mz/mz-352-bin-i386-linux-ubuntu.sh
chmod a+x mz-352-bin-i386-linux-ubuntu.sh
./mz-352-bin-i386-linux-ubuntu.sh
See Steve Morin's Arc Installation for what that looks like. I accepted the defaults and installed system links.
Install Arc Itself
It turns out there are some bugs in the latest release of Arc (Arc2 as of this writing), and if you try to use it out of the box, you'll run into a few problems. Instead, you'll want to download Anarki.
It's in a git repository, so you'll have to install git.
apt-get -y install git-core
Run the following to actually get a working copy of Arc and the news app.
git clone git://github.com/nex3/arc.git
There are some experimental (and unofficial) features of Arc in Anarki. I decided I wanted a stable version of Arc in the hopes that any code I add won't break in the next version. To use the stable branch, cd into the working directory and run the following commands.
cd arc
git branch stable origin/stable
git checkout stable
Run Arc
To run Arc, use the following command from the arc directory.
mzscheme -m -f as.scm
You should see the following with an Arc prompt.
Use (quit) to quit, (tl) to return here after an interrupt.
arc>
Start the News App
To run the news app, simply type (nsv) at the Arc prompt. You'll see something like the following printed back to you.
load items:
load users:
ready to serve port 8080
You should now be able to visit your news app by going to "http://<your-host-ip>:8080/news".
However, if you want to be able to modify your code at the REPL while the web server is running, run the news app in a separate thread and bind it, like so:
(= app (thread (nsv)))
The Arc prompt will come back to you, and you can simply re-bind symbols to modify them. Then, when you want to kill the app, use the following.
(break-thread app)
At any time at the Arc REPL, you can type (quit) to exit to your shell. You can also press Control-C to break into the Scheme REPL, where (exit) exits and (tl) returns to the Arc prompt.
While I was mucking with the source, I found Arcfn's index of Arc functions to be helpful. I have also been using screen to detach the process from the shell and allow the server to run continuously after logging out.
That's it. I hope someone finds this helpful.
Posted by Jon T at 11:13 PM 0 comments
Tuesday, April 22, 2008
My Ideal Job
Somewhere in New York there is a metaprogramming job with my name on it.
Where exactly? I haven't found it yet, but I'm sure it's keeping an eye out for me. ;-)
Posted by Jon T at 7:48 PM 0 comments
Friday, April 18, 2008
PL What-Ifs
What if you compiled a source language to multiple target languages, gaining the benefit of more than one platform?
For example, what if you were creating a brand new language that you wanted to be type-safe with all the intricacies of Haskell's type-system, but you wanted to take advantage of libraries written in Ruby. And you created a compiler that first compiled your program to Haskell, ran it through ghc's type-checker, and then, if it passed, compiled your program to Ruby. You'd get the benefit of Haskell's type-checker and Ruby's libraries.
What if a language wasn't statically typed or dynamically typed, but instead had a knob that could be tuned in one direction or the other depending on the situation?
For example, what if you wanted the benefits of static type-checking, but if you could just access the symbol table or use eval in one or two places in your code, it would be infinitely simpler at the cost of a possible runtime error. And no, this is not the same as implementing everything yourself with some sort of variant type, as all Turing-complete languages could. I'm thinking something more like Haskell's IO monad that allows you to execute impure code in an otherwise pure setting. In the same way that the IO monad infects everything it touches, so too would the dynamically-typed-code "monad". But that's just one way of doing it. Another way would be to specifically declare something to be a variant type whose properly typed value was implicitly projected out.
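A very loose sketch of that second alternative (my own Python analogy, with an explicit rather than implicit projection, and not anything the post specifies): values that escaped the static world are wrapped, and the one place they re-enter typed code is a checked projection.
class Dyn:
    # a value that escaped the static world (came from eval, reflection, user input, ...)
    def __init__(self, value):
        self.value = value
    def project(self, expected):
        # the only way back into typed code is an explicit, checked projection
        if not isinstance(self.value, expected):
            raise TypeError(f"expected {expected.__name__}, got {type(self.value).__name__}")
        return self.value
setting = Dyn(eval("'80' + '80'"))   # the dynamic part of the program
port = int(setting.project(str))     # the one place it re-enters typed code
print(port)                          # 8080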
What if you could visualize the dependency graph of language objects like functions, modules, etc.?
For example, I've noticed that projects whose sub-projects have dependencies in a stack (i.e. more like a linear chain) are much easier to grok than those whose dependencies form an intricate cyclic graph. Would seeing these dependency graphs help in spotting possible complexity hot-spots, and thus, possible bug hot-spots? Or would visualizing the dependencies alone help us to better understand them? I'd expect my compiler to generate these automatically, of course, because it's already doing the dependency analysis anyway.
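As a tiny sketch of how cheap that could be (my own Python example with made-up module names, not anything from the post), a compiler that already knows the dependencies could dump them in Graphviz DOT form for free:
deps = {
    "webapp": ["templates", "db"],
    "templates": ["db"],
    "db": [],
}
def to_dot(graph):
    # emit one DOT edge per dependency
    lines = ["digraph deps {"]
    for module, uses in graph.items():
        for target in uses:
            lines.append(f'  "{module}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)
print(to_dot(deps))  # feed the output to Graphviz's dot tool to see the picture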
What if you could inline and un-inline function calls at will as you were editing the code?
For example, some people are good at thinking very abstractly and like to factor out commonalities as much as possible to reduce code. After a point though, diminishing returns set in as code becomes unintuitive or "unreadable", deferring the simplest two-time-use definitions to a separate file, for example. Where that point lies is different for different people, however. So what if a sufficiently capable code-editor — i.e. a viewer for data that happens to be code — in addition to skins, allowed different users to adjust how many levels deep functions got inlined? Said another way, what if your editor allowed you to macroexpand and un-macroexpand the code you were editing (inline, not in an output buffer somewhere) at the push of a button, arbitrary levels deep?
...Let us all keep asking questions. About programming and everything else.
Posted by Jon T at 8:35 PM 0 comments
Monday, April 14, 2008
The Phase Concept
Anyone who's been following my blog for a while may have seen a pattern by now. Everything I've written about programming languages has a theme, which when extrapolated, has one logical conclusion: to create a compiler for a programming language that is a good tool for creating other programming languages (possibly mini languages otherwise known as APIs or DSLs) with a GUI editor that is aware of the semantics of the language and whose target language is well-established with many existing tools.
The compiler project I mentioned last post is a start of that. However, it is by no means the final product. First of all, I conjecture that an s-expression-based source language will lend itself as a good target language for a graphical code-editor built later.
Secondly, the idea of having PHP as the compiler's target language was based on the desire to take advantage of the hordes of PHP code already out there for creating web apps. However, after creating an initial prototype, it became obvious that PHP's lack of support for closures is a huge obstacle in creating the compiler whose source language has closures. I can't imagine not having closures, so PHP is out. (It's not that it can't be done, but it would be significantly more work to compile away the closures.) This is good though, because it forces me to re-write (i.e. re-design) the compiler.
I also decided against compiling to Python. Even though I have good feelings towards it, I can't justify using it when it restricts closures to be one-liners in an otherwise imperative language. I was also considering Common Lisp as a target language. The thing is, using it as a target language leaves this new language with all the same problems that Common Lisp has, and so in a way that would defeat the purpose of building on top of something supported by armies of coders. Put another way, CL's armies are significantly smaller than the armies of other languages.
As much as I don't want to admit this, Ruby is starting to look like the best option for a target language.
So for those of you wondering if I'm going to release my little prototype, I see no reason to. It was written in Haskell as a proof of concept. The s-expression parser was taken from the Lisp interpreter I wrote, and I simply added the translation to PHP.
I have concerns though about certain features like eval. My first inclination was to include it, as I plan on having something like macros à la Lisp. That could slow down the development of a prototype, and thus feedback, so I may cut it from the first version. Including eval creates a bootstrapping problem. It requires me to either write the compiler in the target language or include enough language primitives to implement eval in the source language itself and re-write eval in my new language. This is a sad cut, but it's necessary to get a feel for the language quickly.
So what is this "language" I keep referring to? What's special about it? What will its purpose be? It's just an idea I've been toying with, and this prototyping is meant to try to figure out if it's a good idea or not.
Every language lends itself to writing code in a certain way. Java, for example, lends itself to writing code in an object-oriented way. You could, however, write Java code that looks more like garbage-collected C code with classes used only as namespaces. Or you could write functional code in Java, passing around "functors" built out of anonymous classes. But the reason people tend towards writing object-oriented code in Java is because Java lends itself to an OO design. It makes writing OO code cheap — so cheap that it changes the way you think about algorithms so that they fit an OO model.
But me, I already think of everything as a compiler. I see every program as a compilation from inputs to outputs. A giant function if you will. Of course, when a program is big, you break it up into multiple functions, each with its own inputs and outputs. On a larger scale, you break up groups of functions among modules, where each module's interface defines a mini DSL, and each module's implementation is the compiler for it.
In this way, every program is a composition of mini compilers between mini languages. Oftentimes data in a program will pass through many intermediate stages as it flows from input to output through various transformations. In the same way that C++ code gets compiled to C code, then to object code, and then finally to machine code, each stage that data flows through is a compilation phase.
With a C++ compiler, the data happens to be C++ source code which gets translated into machine code. However, a clock program is a compiler from the OS's API for retrieving the system's time to a graphical readout of the time. A database engine is a compiler from SQL statements (select, update, delete, etc.) to result sets. (Order of execution is significant, as updates affect the results of compiling select statements in the future.)
A text editor is an advanced compiler with many phases of compilation. Ignoring the transformation (or compilation) of keystrokes to key codes at the hardware and OS levels, text editors transform key presses (and perhaps mouse input) into formatted text, formatted text into the graphical layout of the formatted text, formatted text into linear byte-streams for saving, formatted text into postscript or something suitable for printing.
I already see everything as a compiler, so why not have a language that lends itself to writing programs in this paradigm? A language that makes it cheap to express computations as the composition of multiple phases of translation from one language to another.
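A toy sketch of that paradigm (my own illustration in Python, not the author's design): each phase is a small translation from one representation to the next, and a program is just their composition.
def tokenize(text):             # phase 1: characters -> tokens
    return text.split()
def parse(tokens):              # phase 2: tokens -> (op, args)
    op, *args = tokens
    return op, [int(a) for a in args]
def evaluate(tree):             # phase 3: (op, args) -> value
    op, args = tree
    return sum(args) if op == "add" else None
def pipeline(*phases):
    # compose phases left to right into one program
    def run(data):
        for phase in phases:
            data = phase(data)
        return data
    return run
calc = pipeline(tokenize, parse, evaluate)
print(calc("add 1 2 3"))        # 6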
It's all about dataflow and how that data changes form as it passes from one phase to the next. So for now, "phase" is its code name.
Posted by Jon T at 11:42 PM 1 comments
Wednesday, April 9, 2008
What I've Been Up To
The more I actually do, the less I write.
In November, I didn't blog a single post. The time I usually spent writing went to doing Project Euler problems and learning Haskell.
Last month, I gave a presentation on functional programming with Haskell for Philly Lambda. In order to better understand Haskell (and Lisp) for the presentation, I wrote a simple Lisp interpreter in Haskell. Now I've been spending my time on revising and re-working the presentation for a lunch-and-learn for my day-job. I have also been reading an amazing book called Gödel, Escher, Bach.
Although I was interested in Clojure for a while, I honestly don't think it has much of a future. It's hardly a hundred-year language, yet it's not all that great for today's problems either.1 Specifically, web apps.
Thus, I've embarked on an adventure in my spare time. While my former startup-team deserted me to pursue other things, I continued to do R&D.2 My latest fancy is a compiler from an s-expression-based language to PHP. I thought I'd go with the whole standing-on-the-shoulders-of-giants idea like Clojure, but my plan is that users of the language will not need to know a thing about PHP. However, someone who does should be able to reach in and access PHP's immense code-base.
Is it really a good idea to compile to PHP? I don't know. Would another language like Ruby, Python, or Common Lisp be more suitable? Possibly. This is simply meant to be an experiment, and I'm trying not to be afraid to make mistakes. Since I have a good deal of experience writing PHP, I thought using it as a target language would be the easiest thing to get a working prototype for.
The more I actually do, the less I write. Let's see if the converse is also true. ...I'll try to write more once I have something tangible.
1. I get the impression there are many quirks in Clojure due to its tight integration with Java. Its meta-data on code is great, but it begs for a code-editor.
2. In my opinion, the rest of the team left to get jobs and independent design work due to fluctuating states of mind and fear of risk. Unlike them, I have higher goals which reduce my wavering and prevent me from turning away.
Posted by Jon T at 1:01 PM 1 comments
Friday, March 14, 2008
Meta-Problems
The other day, I overheard one co-worker showing Emacs to another co-worker. The one who was new to Emacs said something like, "Oh, it doesn't do X?" The Emacs proponent replied, "Well, no, but you could always implement it if you want it."
What does all this "meta" talk mean? It means that technology is inane — it can't solve your Ultimate Problem. There will always be another problem. So should we give up and stop advancing technology?
It only makes sense to do something that has no inherent value if it works towards value elsewhere. So, if for example, you were using the day-to-day work of building the next great web app that allowed people to eat ice cream in a way that was socially networked (with a news feed of who ate what flavor combinations when (and the number of pounds they gained as a result (updated in real-time (and notified via text message)))), then go right ahead. Ditto for creating AI or reciting 1024 digits of π while juggling if you believe it is a means to something inherently valuable.
But to think for one moment that it in itself will satisfy you in the long-run, that it is a valuable end in itself, you're deluding yourself.
And this statement sounded really familiar. It sounded just like Paul Graham's description of the Blub language (which is an amazing essay, I must say).
Now, I'm not saying some other editor is more powerful than Emacs. But I am saying that the fact that this statement was made is an indicator that Emacs isn't the most powerful. It's a Blub editor compared to the imagined ideal editor.
The entire idea of a Blub language and the power continuum doesn't apply to just languages; it applies to all tools which are powerful enough to emulate tools with more power. With programming languages, as long as a language is Turing-complete, it can emulate any other language that is Turing-complete. A similar "whatever you can do with your tool, I can do with Blub" argument can be used with editors when Blub can be extended with plugins or scripts.
Emacs may be powerful due to its extensibility, but I think we can do better.
Patterns of Patterns
Building on top of the most powerful tools available, which eliminates usage patterns and implementation details, leads to new usage patterns. Eventually, these patterns will be considered implementation details. When the Internet was first invented, HTTP GET wasn't an implementation detail; it was what they were creating — the churning fringe of development. Now it's a detail, as we're creating AJAX web apps built on top of HTTP and a slew of 57 other things that didn't exist before.
Whatever you make, there will always be patterns — patterns of repetitive tasks that are tedious, expensive, and could be done cheaper by a machine. Thus arises the desire to automate the task completely, or at the very least, create a tool to make the repetitive parts disappear. When this is done, the pattern becomes embodied in the tool, and at first, all patterns seem to have been eliminated. But in time, new patterns emerge, in which your tool is just a small piece. At this point, you're back where you started, possibly with more resources and personal growth if you're lucky. But the loop begins again.
Seeing this meta-pattern, we can attempt to follow it to its conclusion. If we're going to create tool X in the first tool-making iteration and tool Y in the second tool-making iteration, can we go directly to Y and skip X altogether? If you can see patterns 2 or more iterations ahead, you may still have to implement X, but you'll have a great advantage by seeing the longer-term goal of Y. That said, it's hard to see patterns, even in this iteration, let alone iterations far in the future. Who knew that in just 10 years the web would evolve the way it did? Back in the 90's, it wasn't even clear that anyone could make money off of a search engine (one of the most lucrative fields) because the thinking was that people would search for something once, bookmark it, and never return to the search engine — a legitimate concern back in the day. Obviously wrong, though, knowing what we know now. ...So can we do better?
If you noticed, what I just did was unroll the tool-making loop 2 times. We tried to exploit the pattern by jumping ahead 2 loop iterations at once. But to fully exploit a pattern, you must look at the big picture. You have to look at the entire loop all at once.
Most people are stuck at 1 — the current iteration. In the last paragraph, we jumped to 2. What about n? The "limit", if you will.
The first thing that sticks in my mind is the loop condition. When do we ever exit? Why are we here in the first place? There are many directions this line of questioning can take. One of them — if you pursue it faithfully to its conclusion by looking for the limit of the {meta to the power n}-problem — leads to questions like "What do I want out of life?", "What is my purpose?", and "What is my true essence that defines me?". But those are beyond the scope of this essay.
Another direction we can take is to observe that as long as we are in this loop, we are struggling to find a pattern, factor out that pattern, and make the thing we originally set out to make (at the non-meta level). And this is all irrespective of what we're actually making. In other words, as long as you're making new things and competing for resources, regardless of the field, whether it's software, clothing, or underwater basket weaving, you will always be in this loop.
To What End?
When I was younger, I thought solving certain problems on the computer was valuable. I began programming to solve some of them.
But after a while, I noticed that the act of creating software was itself repetitive. The act of programming — the very act of problem-solving — became the barrier to solving problems. The problem-solving embodied in programming became the new problem. It was a meta-problem, because I didn't actually care about writing the programs themselves; I cared about the solutions that the programs entailed. Writing program after program, it only made sense to solve the meta-problem. And that is how I got interested in programming languages, the medium through which all programs are communicated.
Looking back, what I did was jump from 1 to 2. But for a while, I failed to make the quantum leap from 2 to n.
The human race will no doubt continue to improve technology ad infinitum, becoming more efficient and allowing people to do things they could never do before. And for short-term goals, this makes sense. But how is that any better overall?
I used to think that advancing technology to do work for us was a good thing. It would free us up to focus on what really mattered.1 But why not just focus on what really matters right now?
Some people will say that they can't because of some thing X that prevents them. To that, I would argue that you will have to learn how to deal with X, because even if you solved X (which humans will one day), there will always be Y to fill the role of the thorn in your side. Of course, some will say that advancing the state of the human race in the way I describe is what matters. And to that, I would say you've never honestly asked yourself "why".
When you're being practical in the short-term, the thing you're actually making on the non-meta level is what matters. But when you're setting long-term goals, the solution to the meta-problem is what matters, because the problem (just like in software) inevitably arises again in an infinite number of variations. It only makes sense to come up with a general solution.
But you can apply the same reasoning to the meta-problem itself, and think of it as simply another problem. Now you care about the solution to the meta-meta-problem, and it makes no sense to settle for a solution to merely the meta-problem. ...Ad infinitum.
What does all this "meta" talk mean? It means that technology is inane — it can't solve your Ultimate Problem. There will always be another problem. So should we give up and stop advancing technology?
It only makes sense to do something that has no inherent value if it works towards value elsewhere. So, if for example, you were using the day-to-day work of building the next great web app that allowed people to eat ice cream in a way that was socially networked (with a news feed of who ate what flavor combinations when (and the number of pounds they gained as a result (updated in real-time (and notified via text message)))), then go right ahead. Ditto for creating AI or reciting 1024 digits of π while juggling if you believe it is a means to something inherently valuable.
But to think for one moment that it in itself will satisfy you in the long-run, that it is a valuable end in itself, you're deluding yourself.
1. It seems silly to think we will one day have robot slaves doing all the work for us, but if you follow the current pattern to its conclusion, that's what will happen, among other things.
Posted by Jon T at 2:20 PM 1 comments
Labels: Author's Favorites
Friday, February 29, 2008
Stand on the Shoulders of Giants
I feel like I haven't even written a useful piece of code in ages because the mere thought of boilerplate code stops me in my tracks. This is one of the main motivations behind creating a new language free of hindering boilerplate code. But then you start running into other problems.
The egotistic developer will believe that creating a new programming language is the key, secretly striving for the silver bullet, even though he would never openly admit it because he is not even conscious that he is doing it. Some people believe that the silver bullet is Lisp, but for some strange reason, it has failed to slay the dragon for the past 50 years and counting.
The more seasoned developer, unambitious and static, will believe that creating a new language is a waste of time due to how many wheels have to be re-invented in a new language before it even approaches the usefulness of existing languages. Not to mention the fact that any feature could simply have been implemented in (or on top of) the existing language in the first place.
I think the only thing that makes sense is to strike a balance with something that allows (an order of magnitude) more succinctness and extensibility without sacrificing the millions upon millions of man-hours already spent by armies of programmers. We can stand on the shoulders of giants like IBM, Microsoft, Sun, and Google, and move forward without taking a giant leap back.1
On that note, we have people building things like Instapaper. And Instapaper is great; I use it. But it's silly that I now use 2 completely separate bookmarking services that are oblivious to each other.
Of course, upon closer inspection and further use, I realized that the way I use del.icio.us and the way I use Instapaper are completely orthogonal. That is, the set of links that I save to del.icio.us and the set of links I save to Instapaper are completely disjoint. Occasionally I will first save a link to Instapaper, read the article, and then save it to del.icio.us, but that is a rare exception.
However, it's completely obvious to me that the functionality of Instapaper is a proper subset of the functionality of del.icio.us. In other words, everything that Instapaper can do, del.icio.us can do and more.
Why then didn't Marco build Instapaper over del.icio.us? It seems like a perfect match.
With the Web 2.0 craze, APIs started popping up everywhere, along with tools like Pipes and Popfly to wire those services together. But who is using them?
Instapaper's value is solely in its amazingly simple interface. The reason I use it differently from del.icio.us is that its interface affords different things. It makes different operations cheap, and that changes the way I think about those operations. But why doesn't Instapaper integrate with my del.icio.us account, simply tagging things with a "read-later" tag? When it comes to bookmarking, del.icio.us is king; there's no disputing that. Instapaper wouldn't have less value if it were built on top of del.icio.us; it would actually have more.
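To make that concrete, here's a rough sketch of what a "read later" button could do if it were just a tag on your del.icio.us account. This is obviously not Instapaper's actual code, and the endpoint and parameter names are my recollection of the del.icio.us v1 API (posts/add over HTTPS with Basic auth), so treat them as assumptions:

```python
# Sketch: "read later" as nothing more than a del.icio.us bookmark with a
# "read-later" tag. Endpoint and parameter names follow my recollection of
# the del.icio.us v1 API and should be treated as assumptions.
import urllib.parse
import urllib.request

DELICIOUS_ADD = "https://api.del.icio.us/v1/posts/add"

def save_for_later(url, title, username, password):
    """Bookmark `url` on del.icio.us, tagged 'read-later'."""
    params = urllib.parse.urlencode({
        "url": url,
        "description": title,
        "tags": "read-later",
    })

    # The v1 API used HTTP Basic authentication over HTTPS.
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, DELICIOUS_ADD, username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))

    with opener.open(f"{DELICIOUS_ADD}?{params}") as response:
        return response.read()  # a small XML result document
```

The reading list itself would then just be "everything tagged read-later", which del.icio.us already knows how to hand back to you. Instapaper would keep its interface and lose nothing but its private silo of bookmarks.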
And this, in my humble opinion, is the next step for the web. People will finally start realizing that most of the functionality in their little web app to-be is already done — not in a library — but in a web service.2
Services that do single things — and do them right — will be indispensable to the web. And a glue language will be in high demand. But personally, as an entrepreneur, I'm not looking towards Yahoo Pipes or Microsoft Popfly (yes, in direct opposition to the idea of standing on the shoulders of giants). I want to own my mashups, and those services don't give me that at all.3
1. And this is why I have been very interested in Clojure — an extremely practical Lisp built on top of the JVM.
2. For example, I will almost never have to create my own charts, as Google already did it. Ditto for maps.
3. If you haven't heard me practically shouting this already, I want a glue language for the web! I will personally thank anyone who builds one that doesn't suck. But people, please don't comment here saying X is already that glue language. [Ruby on Rails people, I'm looking in your direction. ;-) ]
Posted by Jon T at 8:10 PM 0 comments
Labels: Author's Favorites
Sunday, February 3, 2008
The Cost of a New Language
As I described in my post on tool-making, the most sensible thing to do is to minimize total cost to reach your goal. Accounting for all costs, it occurred to me that creating new tools such as programming languages can inadvertently increase total cost by reducing the usefulness of other tools designed to be used with other languages.
For example, gdb is an extremely useful tool for working with C code. But if you start using Java, the usefulness — or the amount of cost savings — of gdb is reduced so much that it is practically useless. What you really need is a debugger designed for Java. If you didn't have a debugger for Java, your total cost may have increased as a result of switching to Java. (I'm exaggerating of course, because it's not just gdb but other tools as well. And they all add up.)
I recently became aware of Google Web Toolkit (GWT), which basically compiles Java to JavaScript for use in AJAX web apps. At first this sounds right: gain static typing and a unified server- and client-side language. However, it didn't sit right in my mind, and I wasn't sure why. Then I realized it troubled me because, in a way, GWT was compiling from a lower-level language to a higher-level language.1 And the thought of that is just absurd. But once you take into account the reduction in the usefulness of tools that results from switching from Java to JavaScript, it all makes sense.
So when does it ever make sense for a small software company to create a new programming language? I'm beginning to think the answer is: never. Although there are inspiring words saying how much more productive a good programming language can be, there are also stories of people working with such languages their entire lives and how they became disillusioned.
The only sense I can make of all this is that people love to hear stories like Paul Graham's that hold the carrot of hope out in front; the idea of creating the ultimate programming language is extremely ego-boosting. But at the end of the day, if your goal is to create great software that you can live off of, then there are very real costs to using less popular languages, and the trade-off does not often make sense. GWT, on the other hand, works because it enables rather than disables the use of well-developed tools of another language. A new language is just a tool, and it should help, not hinder, overall.
...Another burst of insight came to me. Say you created a new source language S but wanted to exploit the abundant, well-developed tools of a target language T. What if, when you wrote your compiler from S to T, you also wrote a translator for the tools? In other words, think of the API or data structure that the tool works on as T', and create another compiler from S to T'. It all goes back to the idea that everything is a compiler.
I don't know whether this is feasible. Again, it's more of a what-if question. Is there a way to compile away the costs of a new language?
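As a minimal sketch of the shape this could take: the "compiler" below is a toy (it just wraps each source line), and every name in it is invented for illustration, but it shows the two artifacts you'd want: the S-to-T translation itself, and the mapping that lets a T-level tool (a debugger, a stack trace) report its findings in S-level terms.

```python
# Toy sketch of "translate the tools too": the S-to-T compiler also emits a
# mapping, and a thin adapter rewrites T-level tool output back into S-level
# terms. All names here are made up for illustration.

def compile_s_to_t(s_lines):
    """'Compile' each line of source language S to target language T,
    recording which S line each T line came from (a crude source map)."""
    t_lines, line_map = [], {}
    for s_lineno, s_line in enumerate(s_lines, start=1):
        t_line = f"T_EMIT({s_line!r})"     # stand-in for real code generation
        t_lines.append(t_line)
        line_map[len(t_lines)] = s_lineno  # T line -> S line
    return t_lines, line_map

def translate_trace(t_trace, line_map):
    """Rewrite a T-level tool report (e.g. 'error at line 3') in S terms.
    This is the 'compiler from S to T-prime': the tool's data structure,
    not the language, is the thing being targeted."""
    return [(line_map.get(t_lineno, t_lineno), message)
            for t_lineno, message in t_trace]

# Usage: a T-level debugger reports line 2; the adapter maps it back to S.
t_code, mapping = compile_s_to_t(["let x = 1", "print x"])
print(translate_trace([(2, "undefined variable")], mapping))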
1. Another reason GWT seems weird to me is that, instead of its source language being under-developed, its target language is.
Posted by Jon T at 4:19 PM 2 comments
Sunday, January 13, 2008
In the future...
In the future, no one will be using test-driven development. Instead, we will be using proof-driven development. A programmer will create a specification for a program in a language like Gallina and prove its safety and correctness using a proof assistant like Coq. Once done, a compiler will transform the specification and proof into an executable program which is proven to be semantically equivalent to the specification. In addition, the proof will be bundled with the machine code of the program so that a third-party can statically check that the program adheres to its safety policy before running a single line of code, a technique called proof-carrying code.
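To give a flavor of that workflow, here's a toy specification and proof. (It's in Lean rather than Gallina/Coq, and it assumes a recent Lean 4 toolchain where the omega tactic is available; a real specification would be far richer than this.)

```lean
-- A toy "program" and its specification, proved rather than tested.
-- (Lean 4 here instead of Gallina/Coq; assumes the omega tactic is available.)

-- The program: doubling a natural number.
def double (n : Nat) : Nat := n + n

-- The specification: double really is multiplication by two.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```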
In the future, machine code will not specify instructions to be executed. Instead, machine code will specify the layout of a circuit, and the computer when executing that program will "re-wire" itself to match the specification. Data will flow through the constructed circuit as fast as physically possible. And when execution is finished, the circuit will be re-used for another program after another re-wiring. Of course, the computer won't literally move wires around; it will probably be more akin to gates making and breaking connections. How is that different from today's computers that use transistors as gates? These gates won't be clocked. Moreover, these computers won't be limited to running N things in parallel, one for each of N cores. The computers of the future will be more like today's FPGAs in that you will be able to run as many things in parallel as you can fit in your circuit, which could be on the order of thousands or more.
In the future, people will create life in their own image. They will start off with simple robotic toys for entertainment. Then they will begin automating mundane tasks like cleaning and maintenance. Then they will progress to having robots take part in high-risk situations like flying planes, performing surgery, and going to war. But eventually the technology will become so well developed and cheap that creating automata will be as easy as hacking scripts on a home computer. These little things will infiltrate our lives the same way PCs did in the 80s and 90s. And people will accept them. It won't stop there however, because people always want more; it's in our nature. And driven by money, we will create automata (they won't be called "robots", but something more marketable) to satiate our addictions. The female sex slave will come out first, and soon after followed by the male model. But with all these automata running around, people will begin to feel disconnected, and they will want to have someone in the lonely world to listen to them and comfort them. They will again turn to their ever-constant automata for support, as they are the only things that can be consistently controlled. So they will make automata that can please one not only sexually, but emotionally and psychologically. Of course, to do this, they would have to make an automaton that can empathize, an automaton that can understand the feelings that its master feels. And the only way to do this is to create an automaton that can actually feel these things, the same things that its human master feels, and be capable of going through the same experiences. But once this great feat is accomplished, people will then begin to realize that what they've created — with all their own feelings, desires, and experiences — won't want to serve anyone but themselves. Humans, in their infinite ignorance, will have created beings not only in their own image, but in their own problematic situation, in an attempt to solve that very problem. And in the end, we won't be any better off except in realizing that there is nothing fundamentally different between a human body and an automaton body.
In the future, every choice we will have made will have become obvious. And in the future, we will still believe that we are in control of the choices we make. In the future we will look back on the past with a nostalgic eye. And in the future, we will still believe that there are better days far ahead. In the future, we will continue to run and run towards comfort and safety. And in the future, we will not find what we are looking for until we give up looking.
In the future, I will see some more patterns. And in the future, I will sit back and do nothing about them. ...Unless of course — something changes.
Posted by Jon T at 8:08 PM 1 comments