It's the 1, 2, n rule in software development.
You do something once. You do something twice. And the next time you do it, you generalize it for all times henceforth.
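To make the rule concrete, here's a minimal sketch in Haskell (the names and the squares-vs.-cubes duplication are mine, purely for illustration):

    -- Once, twice: two near-identical specific versions.
    sumSquares :: [Int] -> Int
    sumSquares = sum . map (\x -> x * x)

    sumCubes :: [Int] -> Int
    sumCubes = sum . map (\x -> x * x * x)

    -- The third time you need one of these, you generalize,
    -- once, for all times henceforth.
    sumBy :: (Int -> Int) -> [Int] -> Int
    sumBy f = sum . map f

    sumSquares' :: [Int] -> Int
    sumSquares' = sumBy (\x -> x * x)

The generalization is trivial here; the point is the timing, not the code.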
Waiting longer to generalize is painful at best. In the worst case, it leads to maintenance nightmares: the same thing done 14 times in 14 different places, each slightly different from the rest.
Generalizing too soon, however, leads to other problems. Everyone's done this at one point in their coding career or another: Hey, this is a great idea! Why hasn't someone else done it before? I'll make it completely general the first time around, so that it'll be done right from the start. Then a few weeks of coding go by, you start to actually use it, and you find out the hard way that either it was a really bad idea or the API is so terrible that you have to re-write it. Because, basically, you didn't iterate on your design.
Code is all about design. And good design is re-design.
This is a big part of the reason why programming languages are so far behind. The amount of work it currently takes to create (a compiler for) a programming language is so high that the time it takes a language designer to get feedback on his design is measured in years. By that time (if he was lucky enough to actually get users), he's already invested so much into it that he is unlikely to throw it away and start from scratch, something a good designer must be willing to do. On top of that, even if he wanted to start from scratch, it would take many months to de-program his mind from thinking in the previous language. (Like the first time a Java programmer learns Haskell and tries to write a loop!)
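To make that parenthetical concrete, a sketch of my own: Haskell has no loop statement, so the Java reflex of `for (i = 0; i < n; i++) total += a[i];` has to be un-learned and re-expressed as recursion or a fold.

    import Data.List (foldl')

    -- The Java habit, transliterated: carry the "loop variable" yourself.
    sumLoop :: [Int] -> Int
    sumLoop = go 0
      where
        go total []     = total
        go total (x:xs) = go (total + x) xs

    -- The idiomatic version: the fold *is* the loop.
    sumFold :: [Int] -> Int
    sumFold = foldl' (+) 0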
An IDE is part of a language. Error messages are part of a language. Interacting with other systems, written in other languages, is part of a language. They're all part of the interface of the language. And the interface is everything.
It's a chicken-and-egg problem, though. History says that getting all these tools built requires a language to already be popular. But how do you get people to use a language that doesn't have even the most basic tools, like a debugger?
As a designer, I can't simply make a command-line compiler for my language. I have a vision of what editing code is like. And it's not with a text-editor. The closest thing I've ever seen to a true code-editor is Eclipse for editing Java code, but Eclipse has been around long enough for us to learn a few lessons. I can already hear the Emacs junkies crying about how Emacs is the ultimate editor. But again, it's still a glorified text-editor.
A true code-editor must absolutely be aware of the semantics of the language you are editing. It is inseparable from the compiler, because the compiler is what knows how to interpret your code, and when something is amiss, it must be able to feed that back to the programmer.
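What might that look like? Here's a hypothetical sketch, not any real compiler's API, of the shape of interface I mean: the editor hands the compiler the current source on every change and gets back structured, positioned diagnostics it can render in place.

    -- All names here are hypothetical; the point is the shape of the data.
    data Severity = ParseError | TypeError | LinkError
      deriving (Show)

    data Diagnostic = Diagnostic
      { diagLine     :: Int       -- where the editor draws the squiggle
      , diagColumn   :: Int
      , diagSeverity :: Severity
      , diagMessage  :: String    -- what the programmer is told, in place
      } deriving (Show)

    -- Called by the editor after every keystroke; a real front-end
    -- would analyze incrementally rather than from scratch.
    analyze :: String -> [Diagnostic]
    analyze source = parseErrors source ++ typeErrors source
      where
        parseErrors _ = []  -- stubs standing in for the compiler's phases
        typeErrors  _ = []

The structured result is the whole point: a batch log of text can only be read, but a list of diagnostics can be drawn, sorted, and jumped to.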
Writing code is all about design. And a designer needs feedback, as much feedback as possible, as quickly as possible, because he can't revise a design until he's seen what's wrong with the current one.
So any code-editor that doesn't give immediate feedback on parse errors and requires a full build is simply crap! How can you possibly work like that?! Ditto for type errors, link errors, and test-assertion errors. Everyone sees the absurdity when the old-timers tell stories about submitting compilation jobs to mainframes on punch-cards, only to find out the next day that they'd forgotten a semicolon and had to re-submit the entire job. Yet when it comes to where we are, here and now, people are blind.
Frankly, I'm getting tired of it. I want to see test data flow through my code as I write it, and the unit-test results change from red to green as I fix a bug. I want to be able to highlight two blocks of code and have my code-editor factor out a function, replacing both blocks with applications of it. I want bottlenecks highlighted in-editor based on profiling results. And compilation should happen completely in the background, so that I never have to wait between making a change and using it. And this read-eval-print loop that's all the rage nowadays might actually be useful if I could invoke it in the context of a particular point in my code.
Is it really that hard? Someone please tell me I'm a visionary because I can't believe that this can't be done. Nothing a little universal substitution and computing power can't solve.