The controversy
Over the past few months, I've been seeing some Design Patterns backlash in the blogosphere; I guess it accompanies the agile anti-hype phase that's also going strong these days. The most rabid attack came from Mark Dominus, who makes the case that many innovations in programming languages over the past decades were in response to common practices that could have been described as patterns, had the pattern format been known back then. Take, for instance, what we now know as a procedure call: code to stack up register values and a return address, followed by a jump instruction. Before assemblers became available, programmers used to recode this every time they wanted a subroutine. Dominus sees this observation as a sign that something is wrong with the patterns movement:

"Identification of patterns is an important driver of progress in programming languages. As in all programming, the idea is to notice when the same solution is appearing repeatedly in different contexts and to understand the commonalities. This is admirable and valuable. The problem with the "Design Patterns" movement is the use to which the patterns are put afterward: programmers are trained to identify and apply the patterns when possible. Instead, the patterns should be used as signposts to the failures of the programming language. As in all programming, the identification of commonalities should be followed by an abstraction step in which the common parts are merged into a single solution."

Professor Ralph Johnson, one of the original "Gang of Four" members, responded:
"No matter how complicated your language will be, there will always be things that are not in the language. These things will have to be patterns. So, we can eliminate one set of patterns by moving them into the language, but then we'll just have to focus on other patterns. We don't know what patterns will be important 50 years from now, but it is a safe bet that programmers will still be using patterns of some sort."Dominus retorts with subtler arguments, and I encourage you to read his article to avoid any misinterpretations of my part. But I'll still quote what I believe is the core of his thinking:
"What I imagine is that when pattern P applies to language L, then, to the extent that some programmer on some project finds themselves needing to use P in their project, the use of P indicates a deficiency in language L for that project.
The absence of a convenient and simple way to do P in language L is not always a problem. You might do a project in language L that does not require the use of pattern P. Then the problem does not manifest, and, whatever L's deficiencies might be for other projects, it is not deficient in that way for your project.
(...)
But to the extent that some deficiency does come up in your project, it is a problem, because you are implementing the same design over and over, the same arrangement of objects and classes, to accomplish the same purpose. If the language provided more support for solving this recurring design problem, you wouldn't need to use a "pattern". Consider again the example of the "subroutine" pattern in assembly language: don't you have anything better to do than redesign and re-implement the process of saving the register values in a stack frame, over and over?"
My take
Mark seems to be worried that programming language innovation will be stifled by the patterns movement, on account of programmers being taught to mimic patterns in their code instead of applying that energy to incorporating them into programming languages. It is important to first acknowledge, as he himself put it, that it is valuable to identify commonalities in software design; one could even say this is a precondition for evolving a programming language. So, he sees the problem in not working to embody these commonalities in the language. From my exposure to the patterns literature, I believe the movement is not in any way against incorporating patterns as programming language features. It is implicit in this passage from GoF, page 4:

"Our patterns assume Smalltalk/C++-level language features, and that choice determines what can and cannot be implemented easily. If we assumed procedural languages, we might have included design patterns called "Inheritance," "Encapsulation," and "Polymorphism." Similarly, some of our patterns are supported directly by the less common object-oriented languages. CLOS has multi-methods, for example, which lessen the need for a pattern such as Visitor."

And explicit in Ralph Johnson's aforementioned blog post: "Many of the patterns will get subsumed by future languages, but probably not all of them". Now that we know that design patterns are contextual and can be incorporated as language features, we must ask whether the emphasis of the patterns movement is misplaced. Why go to the trouble of publishing large books describing at length things like intent, motivation, participants, consequences, related patterns, etc., when we could just inventory all patterns and shove our favorites into our programming languages?

One very simple reason is that not all patterns would benefit from being reified. Dominus says that people often mentioned MVC to him as an example of a pattern so complex that it could not be absorbed into a programming language. But, of course, they were all wrong and he was right, since novel systems like Ruby on Rails or Subtext do exactly that, definitively confirming his thesis that the only impediment was an atrocious lack of imagination on the part of his opponents! This is all very strange to me, since "MVC" came to life as pretty concrete software: it was the GUI development framework bundled with Smalltalk-80. Many years later, after several separate versions were developed, it was described and published as an (architectural) pattern.

Leaving object orientation's early history aside, let's get back to the matter of patterns that aren't targets for reification. One example is the Façade (185) design pattern. It is one of the most frequently applied GoF patterns, described as having as its Intent to "provide a unified interface to a set of interfaces in a subsystem", so there can be no doubt that it is indeed recurring. Looking at the Structure and Participants sections, we can see that it is very simple: just a Façade class that depends on many other (unidentified) subsystem classes inside a boundary. A consequence of this simplicity is that there would be little gain in incorporating this pattern into a programming language, as it doesn't really represent specific interactions that have to be recoded each time the pattern is applied.
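To make that structural simplicity concrete, here is a minimal Façade sketch in Python. The class names (AudioDecoder, VideoDecoder, Muxer, MediaConverterFacade) are hypothetical, invented purely for illustration; they are not taken from GoF or from any of the articles discussed here.

```python
# A minimal Facade sketch. The subsystem classes are hypothetical,
# chosen only to illustrate the structure the GoF book describes.

class AudioDecoder:
    def decode(self, path):
        return f"audio[{path}]"

class VideoDecoder:
    def decode(self, path):
        return f"video[{path}]"

class Muxer:
    def mux(self, audio, video):
        return f"muxed({audio}, {video})"

class MediaConverterFacade:
    """Unified interface to the decoding/muxing subsystem."""
    def __init__(self):
        self._audio = AudioDecoder()
        self._video = VideoDecoder()
        self._muxer = Muxer()

    def convert(self, path):
        # Clients call one method instead of wiring up three subsystem classes.
        return self._muxer.mux(self._audio.decode(path),
                               self._video.decode(path))

print(MediaConverterFacade().convert("clip.mp4"))
```

All the value is in the single entry point hiding the subsystem wiring; there is no intricate collaboration here for a language feature to capture.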
One could respond that if that is the case, then the pattern is useless: it describes a solution so simple that a programmer who has never heard of the "Façade design pattern" would probably be able to construct it by himself. The error in this judgment is that a pattern's merit isn't measured only by how novel or clever its solution to a problem is. Knowing that it's not hard to invent Façade does not detract from the fact that being able to talk about Façade is a big design win in itself. Patterns are as much an aid to communication as they are vehicles to disseminate knowledge:
"Naming a pattern immediately increases our design vocabulary. It lets us design at a higher level of abstraction. Having a vocabulary for patterns lets us talk about them with our colleagues, in our documentation, and even to ourselves. It makes it easier to think about designs and to communicate them and their trade-offs to others."[GoF, page 3]On the other hand, we already mentioned that some of the patterns are indeed amenable to reification. In this case there are two paths to choose from: coding it as a library atop usual language abstraction mechanisms or incorporating it as a basic language feature. Dominus second post argues that this choice is a mere implementation detail. Now, to paraphrase him, his point seems completely daft, which may I interpret to mean that there's something that went completely over my head. The difference is obvious: if some feature can be implemented as a library, any programmer that wants it can write it exactly one time and reuse as appropriate; no language modification required. As a contributor to CPAN , he should know well that nowadays much software reuse is done through open source libraries, lowering the recode count to zero for it's users. This is a crucial distinction, as adding something to the language has a huge impact on the whole technical ecosystem, from increased barrier to entry to pernicious interaction between features (non-orthogonality).
Let's take another look at Dominus' central proposition:
"(...) when pattern P applies to language L, then, to the extent that some programmer on some project finds themselves needing to use P in their project, the use of P indicates a deficiency in language L for that project. "I believe the most important question here is: what does it mean to "use [pattern] P"? If you take it to mean follow the steps from P's description, possibly copying some code from the "Implementation" section, then this reasoning makes sense. But patterns are not merely recipes, and a pattern language is not a cookbook! One way in that patterns differ from recipes is that, as we saw in Façade, the Solution section can be of little importance compared to the communication gain from just naming the pattern. Another difference is that there is wide latitude in implementation strategies for each pattern, not to mention cases where multiple patterns present themselves as design alternatives for a set of forces.
Speculating a bit, I believe the origin of this antagonism toward design patterns is that the idea of a recurrent structure is antithetical to a certain ethos present in the software development community. Mark Dominus summarizes this feeling well in this passage: "As in all programming, the identification of commonalities should be followed by an abstraction step in which the common parts are merged into a single solution." It can also be formulated as a call to arms: Don't repeat yourself! Or better yet, DRY! (the love for TLAs is another part of the programming ethos :). I don't dispute that eliminating repetition is an important design heuristic, but I sometimes wonder if it can be taken too far, becoming a sort of factorization fetishism that is detrimental to productivity.