Being thoroughly trained in ML, I understand the benefits of static typing. Here are some claimed benefits.
- Reduced CPU usage since types don't need to be checked at runtime
- Reduced memory usage since types don't need to be stored at runtime
- Other type-directed optimizations
- Discovery of errors as early as possible, for all my code
- Static typing is more powerful/expressive since dynamic typing is a special case, or subset, of static typing where everything is tagged

And here are some claimed benefits of dynamic typing.
- Easier to run and test since there are practically no compile-time or link-time errors
- Programs are not limited by the expressiveness of the type system of the language — e.g. heterogeneous data structures w/o explicit tagging
- Allows for implicit open recursion, late binding, dynamic scope, and duck typing
- Programs are simpler since I don't have to worry about types. Things are often coerced implicitly making my programs more concise, and I don't have to use really abstract things like type variables and higher-order types. In other words, they have been abstracted away by being pushed down to a place I don't have to think about.
- Faster prototyping due to not having to write down types which change often during prototyping
Myth: Static typing is more powerful/expressive since dynamic typing is a special case, or subset, of static typing where everything is tagged.
It's true that dynamic typing is a special case of static typing. This can be seen by simply creating a tagged union where each branch is a different possible runtime type. Then make every expression in your language a value of this type. Voilà — dynamic typing. [1]
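Here's a minimal sketch of that encoding in OCaml (the type and constructor names are just for illustration):

```ocaml
(* A "dynamic" value is a member of one big tagged union. *)
type dyn =
  | Int of int
  | Float of float
  | Str of string
  | Bool of bool
  | List of dyn list

(* Every expression of the embedded "dynamic" language has type dyn, so any
   value can show up anywhere -- exactly the behavior of dynamic typing. *)
let mixed : dyn list = [ Int 1; Str "two"; Bool true; List [ Float 3.0 ] ]
```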
However, it does not follow that this makes static typing more powerful. To see this more clearly, let's use a simpler example: C vs. assembly. C is more powerful than assembly. But there are things you can write in assembly that you simply cannot write in C, because you are restricted: restricted by the definition of the C language, which does not know anything about your cool new machine instruction to factor large numbers and break your grandma's closely-guarded RSA-encrypted banana nut bread recipes. However, anything you can write in C, you can obviously write in assembly. So C is a special case, or subset, of assembly.
In other words, the more powerful language is the one that is the subset.
And this does not depend on the specifics of the languages involved. So in the case of dynamic vs. static typing, dynamic typing is actually the more powerful of the two. I alluded to this already when I noted that dynamic typing abstracts away types by pushing them down into the runtime system.
Myth: Dynamic typing allows for faster prototyping due to not having to write down types which change often during prototyping.
This myth stems from the belief that types in dynamically typed languages don't have to be written down at all, while types always have to be written down in statically typed languages. And this is absolutely false!
First of all, even though you don't have to declare the types of variables and functions in dynamically typed languages, you still write down some types. Whenever you make a class, declare fields or methods of a class, or specify inheritance relationships, you are creating and/or specifying types.
Secondly, type information does not need to be specified in statically typed languages that include type inference. Many languages with type inference, like ML and Haskell, are specifically designed so that you don't have to write down any types. The only exceptions are enumeration-like data structures (i.e. tagged unions), which have runtime values. So the amount of type information you must write down depends greatly on the language itself, not on whether that language is statically or dynamically typed.
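For example, here's a small OCaml sketch (my own, not from any particular codebase): the functions carry no annotations at all, and the one place a type must be written down is the tagged-union declaration.

```ocaml
(* No annotations anywhere; the compiler infers
     compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b
     area    : shape -> float                        *)
let compose f g = fun x -> f (g x)

(* The exception: a tagged union has to be declared before it can be used. *)
type shape =
  | Circle of float
  | Rect of float * float

let area = function
  | Circle r -> 3.14159 *. r *. r
  | Rect (w, h) -> w *. h
```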
So we are down to the following for static typing.
- Reduced CPU usage since types don't need to be checked at runtime
- Reduced memory usage since types don't need to be stored at runtime
- Other type-directed optimizations
- Discovery of errors as early as possible, for all my code

And the following for dynamic typing.
- Easier to run and test since there are practically no compile-time or link-time errors
- Programs are not limited by the expressiveness of the type system of the language — e.g. heterogeneous data structures w/o explicit tagging
- Allows for implicit open recursion, late binding, dynamic scope, and duck typing
- Programs are simpler since I don't have to worry about types. Things are often coerced implicitly making my programs more concise, and I don't have to use really abstract things like type variables and higher-order types. In other words, they have been abstracted away by being pushed down to a place I don't have to think about.
Myth: The cost of runtime type information in speed or memory is not significant, especially with today's hardware.
For ordinary, keyboard-speed applications, the cost of runtime type information is not significant. But in core loops it can be huge, because it is not just a matter of checking the types, which is itself costly in a loop: the fact that you must store everything as a tagged value means that everything must be stored as a pair in memory. This means that in dynamically typed languages, there is no way to specify the computation of adding the values in two registers and storing the result in another register. The add operation in a dynamically typed language must load a tag from memory, conditionally branch on the value of the tag, load a second tag from memory, conditionally branch on the value of that tag, load the first number from memory, load the second number from memory, add the numbers, store the result in memory, and then store a new tag. And this ignores the case where the runtime system has to allocate memory for the new object; since the statically typed version may spill registers, call it even.
For such a simple operation, though, this is a huge cost.
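To make that concrete, here's a rough model in OCaml of what the dynamic add has to do, using a tagged union like the one above (my own sketch, not generated code):

```ocaml
type dyn = Int of int | Float of float

(* The dynamic add: load one tag and branch on it, load the other tag and
   branch on it, unbox both payloads, add, then box (and possibly allocate)
   the result. *)
let dyn_add a b =
  match a, b with
  | Int x, Int y -> Int (x + y)
  | Float x, Float y -> Float (x +. y)
  | _ -> failwith "add: expected two numbers of the same kind"

(* The statically typed add compiles to roughly a single machine instruction. *)
let static_add (x : int) (y : int) = x + y
```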
Myth: Type-checking is a substitute for testing.
Again, absolutely not true. Although some people may use type-checking (static type-checking in particular) for testing program correctness, this is neither its intended purpose nor the limit of its use. Type-checking is more about ensuring meaningful, well-defined execution than ensuring correctness. Even though you may hear strong static-typing advocates say, "When your program type-checks, you'll often find that it just works", this is simply not true for large, intricate programs. Although type-checking may help you find errors, it is not the same as testing, and thus it is not a suitable substitute for it.
Static type-checking will, however, discover one entire class (or a few classes) of errors at compile-time, early in the development cycle — something that, in a dynamically typed language, depends on executing every code path, which is simply infeasible.
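Here's a tiny, made-up OCaml example of the distinction: the function type-checks perfectly, and only a test reveals that it's wrong.

```ocaml
(* The type int -> int says nothing about which int comes back. *)
let abs_value x = if x < 0 then x else -x   (* bug: the branches are swapped *)

(* The type checker is satisfied; this test fails at runtime. *)
let () = assert (abs_value (-3) = 3)
```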
Other Issues
Open Recursion, Late Binding, Dynamic Scope
Open recursion and late binding are just special cases of dynamic scope. It's not clear whether having these features is a good thing or not. They make it easy to introduce bugs when extending code in a way that was originally not intended (which happens fairly often). They can definitely be useful, though, since they allow you to do things that can't be done easily without them. But as I explained earlier, more flexibility does not imply power. In fact it's the opposite, as long as the constraints are well-designed. (For example, except in bottlenecks where optimization is crucial, Duff's device is a bad feature for a language to support.) And although you can implement open recursion in a statically typed language, you can't implement dynamic scope in general (and be type-safe) without converting your entire program to use tagged values, which would amount to implementing the entirety of dynamic typing. So in this respect, dynamic typing is actually more flexible.
Some may argue that the language should not constrain the programmer: let him decide whether or not to use it. After all, exceptions are a special case of dynamic scoping, and many statically typed languages support them. The problem is that you will inevitably be using or extending someone else's code. And the person who wrote that code will inevitably make a judgment call that you consider to be poor, making the code non-obvious and unnecessarily complicated from your perspective — which can really be said about any powerful language feature, so maybe it's just me. But this one can make it especially difficult to reason about your programs. Again, it's not completely clear whether these features are good or bad.
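For what it's worth, here's what open recursion and late binding look like in a statically typed setting, using OCaml's object system (a toy example of my own):

```ocaml
(* Open recursion: `hello` calls self#greet, and the call is late-bound, so
   overriding `greet` in a subclass changes the behavior of the inherited
   `hello` without the subclass touching `hello` at all. *)
class greeter = object (self)
  method greet = "hello"
  method hello name = self#greet ^ ", " ^ name
end

class shouty_greeter = object
  inherit greeter
  method! greet = "HELLO"
end

let () =
  print_endline ((new greeter)#hello "world");        (* hello, world *)
  print_endline ((new shouty_greeter)#hello "world")  (* HELLO, world *)
```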
Type-Directed Optimizations
If you care about performance, which you will for just about any production app, type-directed optimizations can help you a great deal. I won't go into the details of how they work, but the basic idea is that the compiler uses type information to optimize your code. Theoretically, you could have a compiler for a dynamically typed language that performed the same optimizations. However, to do this at compile-time, it would have to collect the same information as a compiler for a statically typed language. I've never seen a compiler that can do this in general, but such a hybrid is the future of typing. An alternative is for the compiler to do the optimizations at run-time — i.e. just-in-time.
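As a small illustration (my own, in OCaml): because the element type below is statically known to be float, the compiler can use a flat, unboxed array representation and a tight loop with no tag checks.

```ocaml
(* With the type known statically, a.(i) is an unboxed float load and +. is a
   plain floating-point add; no tags are inspected anywhere in the loop. *)
let sum (a : float array) =
  let total = ref 0.0 in
  for i = 0 to Array.length a - 1 do
    total := !total +. a.(i)
  done;
  !total

let () = Printf.printf "%f\n" (sum [| 1.0; 2.0; 3.5 |])
```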
The problem with static typing is that there are always more type-directed optimizations that would be nice to do and always more type-expressiveness that would be nice to add, so more and more complicated type systems emerge, oftentimes complicating a language with quirks such as ML's value restriction. On the other hand, if dynamic typing were used, the problem causing such complication would disappear. A simple solution is often the best.
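For anyone who hasn't run into the value restriction, here it is in the OCaml toplevel (a standard example): an application like ref [] is not a syntactic value, so its type is not generalized.

```ocaml
# ref [];;                 (* an application, not a syntactic value *)
- : '_weak1 list ref = {contents = []}
# fun x -> x;;             (* a syntactic value, so fully polymorphic *)
- : 'a -> 'a = <fun>
```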
Types in the Source Code
Dynamic-typing advocates claim that writing down types is a hindrance, especially during prototyping. This stems from the assumption that you have to do additional work to think of and change the types of the expressions you're dealing with. I definitely understand the criticism of writing down types, because I don't usually write down loop invariants, so I could see that as being "extra work". But types are no different: both types and loop invariants are required to create the program, but not to execute the program.
Anyone who has become proficient in a language with a powerful type system knows that in order to even write a meaningful program, you have to use the correct types. And so when you're writing the program, you implicitly (and, for some, unconsciously) give a type to every single expression you write. So why not write down all this extra information that was used to create the program, so that it can be used not only by the compiler, but by you when you come back to your code after a week? The same goes for invariants — loop invariants in particular — but I'll save that for another post on theorem provers!
Knowing what I know about theorem proving and loop invariants, I believe writing down the types is the right thing to do. However, in a prototyping situation where the types are changing faster than you can type them, you shouldn't have to write them down; that's where type inference comes in. But as the system begins to stabilize — or as parts of it do, particularly those getting pushed down in the hierarchy — you should write the types down, because at that point you know what the types should be, and the extra information can be used by the compiler to check and optimize, and by those reading your code to understand your program's interfaces and semantics.
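Here's a sketch of that workflow in OCaml (a hypothetical function, just for illustration): lean on inference while prototyping, then pin the types down once the interface settles.

```ocaml
(* While prototyping: no annotations; the inferred type follows the code
   around as it changes. *)
let mean xs = List.fold_left ( +. ) 0.0 xs /. float_of_int (List.length xs)

(* Once stable: the same function with its type written out, so the compiler
   checks the interface and readers see it at a glance. *)
let mean_stable (xs : float list) : float =
  List.fold_left ( +. ) 0.0 xs /. float_of_int (List.length xs)
```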
Typing of the Future
Here's the typing hybrid I see as the future, as the answer is never on either end of a spectrum, but a balance somewhere in the middle — a pattern of the Universe.
- The flexibility to omit the vast majority of types, but consequently,
- a significant amount of type inference that uses
- a powerful type system that is so expressive that it is itself Turing-complete, which requires
- blurring the lines between compile-time and run-time to the point where the programmer cannot always tell (and should not always care) whether static or dynamic typing is being used by the compiler.
1. This is where the "un(i)typed language" joke comes from. A language that is "untyped" really has exactly one type for all expressions.
10 comments:
If you haven't already heard of it, you may want to look at Qi.
http://www.lambdassociates.org/aboutqi.htm
I think it's pretty close to your future language... it's got the dynamic/static blur, strong inference, and the Turing-complete type system, anyway.
As for the third claimed benefit of dynamic typing, don't you think that structural subtyping allows for late binding and something you could equate to duck typing?
In particular, it seems to me that things like OCaml's row variables, which extend the ordinary parametric polymorphism, allow you to code the same way you'd do in a dynamically typed language if you choose to, without relinquishing the other benefits of static typing (going back to OCaml, you can use one of the available OO wrappers of the structures in the stdlib).
It is true that the choice is to some extent determined by the style of the code you have to interact with, so in OCaml's case you'll normally use good old algebraic types without subtyping because the standard library is not object-oriented (and also in order to avoid slow method dispatching). However, this is a choice motivated by practical concerns, not an imposition of the type system.
Also, regarding your rebuttal of the fifth claimed benefit of static typing, it is manifest that static typing disallows many programs a dynamic typing discipline wouldn't, namely all those that would result in a run-time type error, and then some (those programs that don't type but would actually work). However, couldn't it be argued that in the presence of type inference there's some gain in conciseness since many invariants are documented by the type signatures? (This has got nothing to do with dynamic typing being "less powerful" because it's a special case of static typing. I agree that this argument is flawed, for the above stated reason.)
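A minimal OCaml sketch of the row-variable style described above (the names are made up): a function that accepts any object providing a quack method, checked statically.

```ocaml
(* make_noise works on any object with a quack method; its inferred type is
   < quack : 'a; .. > -> 'a, where `..` is the row variable that lets objects
   with extra methods through. *)
let make_noise d = d#quack

let duck  = object method quack = "quack" end
let robot = object method quack = "beep" method battery = 99 end

let () =
  print_endline (make_noise duck);
  print_endline (make_noise robot)   (* extra methods are fine: the row is open *)
```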
And as usual, when "Future of <something>" appears when talking about programming language design, LISP had all those for years... oh well.
You left out one interesting piece of the strong-typing and testing discussion. While strong-typing is certainly not a substitute for testing, it does make more comprehensive unit testing easier. QuickCheck for Haskell is an example of this.
-R Hayes
Actually, the value restriction is there for soundness, not optimization (http://www.smlnj.org//doc/Conversion/types.html).
Myth: Dynamic typing allows for faster prototyping due to not having to write down types which change often during prototyping
I am not sure that is a myth in fact. That's often said of dynamic languages meaning general purpose scripting languages like Python, Perl, Ruby, etc. In those languages I think it is a practical fact that you can prototype faster than in system languages in general.
But you could have a dynamic C, so to speak, a dynamically typed language, small and with very limited expressiveness. I think you wouldn't prototype faster there than in a richer systems language.
There are also more comments on reddit: http://programming.reddit.com/info/2nyne/comments
Your statements apply to Standard ML and not to OCaml or Haskell, of course.
I don't fully agree when you say dynamic typing is a special case of static typing. That may be true if you take the computer perspective, but if you take the expression perspective (which I consider much more useful and important) it's the other way round, static typing actually is a special case of dynamic typing.
When you go static you take expression power away from the programmer.
cheers.
I agree mostly with you. I just want to note that the fifth benefit of dynamic typing is a myth only to a certain degree. Most discussions are between advocates of C++ and Java versus Javascript, PHP and Ruby. In that context hardcoding functions to use short and int, or having to refactor class hierarchies or create additional interfaces when only a subset of methods from a class is needed in a new situation slows down prototyping. Some of that can be mitigated by the judicious use of templates and generics at the cost of additional boilerplate code. But it does not help me any when people on the web invoke a good type system for their arguments while in the real world coworkers later reuse those same arguments to force everyone to use C++, C# and Java over any dynamic language because "a language with static typing is better". That said, I really wish ML was more widespread in the industry; I would love to use it instead of C++ or Java.