Years of debates about static type annotations, endlessly regurgitating the same information, have led me to shy away from them. I've done pretty well, yet here I am in the gravity field of another one. I'm not sure how long this will last, but for today I am somewhat interested in continuing.
PJE writes in a comment...
"Um, just because a given use case can be accomplished without a given tool, doesn't mean the tool is useless, or isn't useful for that use case. If that were true, we'd never use anything but machine code."

I have not found any useful scenarios where I want static type annotations. There's not much more to say than that. I'd like to hear of scenarios others have found useful so I can compare them to my own.
"Meanwhile, I notice you haven't responded to the point that solid integration with statically typed languages often requires type annotation; e.g. jythonc's use of special patterns in docstrings. Integration with other languages, documentation, and runtime introspection/adaptation are my use cases for type annotation."

I'm not sure what you mean by "solid" integration. I have done production-quality integration between "agile" languages like Lisp, Smalltalk, and Python and "rigid" languages like Pascal, C, and C++.
There are situations where the more rigid language requires, say, a 32-bit integer or a specifically formatted sequence of values (i.e. a "struct" or "record"). "Agile" languages tend to provide integers that are not limited by the word size of the underlying hardware. In scenarios like this, the agile side of the integration needs a check that the integer will fit in the rigid 32-bit size, and so on. Likewise a conversion is necessary to format the scalar and structured data into the kind expected on the rigid side.
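In Python, for example, the check and the conversion might look like the following (a minimal sketch; the int32-plus-double record layout is just a hypothetical example of what a rigid side might demand):

```python
import struct

def to_int32(value):
    # An agile-side check: Python integers are unbounded, so verify the
    # value fits in a signed 32-bit word before crossing to the rigid side.
    if not -2**31 <= value <= 2**31 - 1:
        raise OverflowError("%d does not fit in 32 bits" % value)
    return value

def pack_record(count, price):
    # An agile-side conversion: format the data into the byte layout a
    # C struct would expect, here a little-endian int32 followed by a double.
    return struct.pack("<id", to_int32(count), price)

packed = pack_record(42, 19.95)   # 12 bytes, ready for the rigid side
```

Note that nothing here is a static type annotation; both the check and the conversion are ordinary expressions in the agile language.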
I would not call these expressions "static type annotations" in the agile language, though. Rather they are hoops I'd not have to jump through as much if the rigid language were more agile. The language systems I recall using have tended to let me express these checks and conversions in the agile language itself, rather than in a comment or doc string, but that's a minor point.
Languages like Java and C# are less rigid, and given their capabilities for reflection, integration with more agile languages tends to be more automatic. And so Jython and Python.Net, among others, tend to do most of the conversion automatically, without special annotations or coding.
Corner cases require more programmer attention, but that's also a minor point. Languages could agree on a common model for data exchange, as is done in CORBA and its descendants, the WS-xxx specifications. Doing integration via CORBA using Smalltalk is so easy that I believe, had more programmers been working this way, the WS-xxx fiasco would never have happened. Rather, improvements to CORBA's relationship with HTTP would have been getting all the attention. These are market-force issues, not software issues.
Another source of feedback on this is the Inter-Language Unification work done at Xerox. It is related to CORBA but more focused on integration aspects and RPC, at a time when CORBA was beginning to branch out into a variety of distributed programming models.
"I don't see how type inference helps with either documentation or introspection, either. If it's pure "duck typing", how is the system going to infer that I want an IAtomicStream, say, versus an IRawStream, if the two have the same methods? Duck typing isn't sufficient to distinguish different *semantics* of a type/interface, only its method signature -- and that is not sufficient for documentation!"

If you need an IAtomicStream on the rigid side, then there is probably enough information to infer that from the code on the agile side. If the rigid specification is ambiguous (well, then it's not really a "specification", is it?)... then, yes, you need to have some kind of an expression on the agile side to disambiguate.
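The ambiguity is easy to make concrete. In the sketch below (the two stream classes are hypothetical stand-ins for PJE's interfaces), the structural signatures are identical, so only some extra expression, on one side or the other, can pick between them:

```python
import inspect

class AtomicStream:
    # Hypothetical semantics: flush() commits every buffered write or none.
    def write(self, data): pass
    def flush(self): pass

class RawStream:
    # Hypothetical semantics: each write may reach the device immediately.
    def write(self, data): pass
    def flush(self): pass

def duck_type(cls):
    # The structural "duck type" of a class: just its public method names.
    return sorted(name for name, _ in inspect.getmembers(cls, inspect.isfunction)
                  if not name.startswith("_"))

# The two structural signatures are identical, so a purely structural
# check cannot tell the streams apart; the difference is semantic.
assert duck_type(AtomicStream) == duck_type(RawStream) == ["flush", "write"]
```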
The point is, this is a burden rather than a feature. Why would I *want* to work at this level of detail?
A good bit of information on the power of inference for agile languages can be found in the implementation of the Stalin Scheme compiler, see "Flow-Directed Lightweight Closure Conversion" by Jeffrey Mark Siskind.
"Documentation needs to present the semantics of the interface, so that if you are implementing that interface yourself, you know what semantics to implement."

The documentation I want to see is (1) executable examples or (2) technical memos that augment executable examples. Period.
"Likewise, for introspection, I don't see how an implicit signature helps. Let's say I want to create an XML schema for a document format that can contain messages that get converted to method calls on some object. If each of those methods uses a different set of methods on their arguments, I'm going to end up defining new types for each method, which might be okay for the computer to process, but for a human being it's going to be a mess to understand."

I am not sure what this example is about. I agree there are integration scenarios where something like an XML Schema is required to specify the kinds of messages that can be exchanged between language-independent processes. That does not in any way imply that static type annotations are a benefit in the languages I use to implement those independent processes.
"Type inference, IMO, is just type checking. It doesn't seem to me to be useful for anything else -- and like you, I'm not interested in type checking! (I suppose it could be used for optimization in some cases, but only if you can statically prove a concrete type - not just an operation signature.)"

Type inference is better than type annotation. The real benefit I see is in getting even more capable agile modeling and inference tools by extrapolating from recent work in model checking. Simple, even compound, type checks are uninteresting. We do need a lot of help with concurrent state machines, however. Tools that can tell me more about *real* behavior problems, rather than simple type checks, should be getting all this attention.
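For a flavor of inference without annotations, here is a toy sketch over a hypothetical three-form expression language; real inferencers, like the flow analysis in Stalin, are far more capable than this:

```python
def infer(expr, env):
    # A toy local type inferencer over tuples of the form
    # ("lit", value), ("var", name), and ("add", left, right).
    kind = expr[0]
    if kind == "lit":
        return type(expr[1]).__name__      # e.g. "int" or "str"
    if kind == "var":
        return env[expr[1]]                # type flows in from the binding
    if kind == "add":
        left, right = infer(expr[1], env), infer(expr[2], env)
        if left != right:
            raise TypeError("cannot add %s to %s" % (left, right))
        return left
    raise ValueError("unknown expression form: %r" % (kind,))

# No annotation on "x" is needed at the use site; its type is inferred
# from the environment and propagated through the addition.
assert infer(("add", ("var", "x"), ("lit", 1)), {"x": "int"}) == "int"
```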
"Now, perhaps I could be wrong, and there is some other way to accomplish these use cases that's just as clean. If so, I'd sure like to see them. I personally don't see that any of my uses for type annotation *can* be satisfied by inference; there's just not enough information. Heck, I don't see how a *human* could do the inference in the absence of added documentation, let alone a computer. It would be little more than blind guessing."

I think I see one use case here: how to integrate languages that have different data models. I agree system integration requires some kind of agreement on specifying data integration. I'm not sure what else I am seeing here.
OK. I am getting tired of writing about type checking. I think we're in agreement on this use case of language-independent data integration.