Friday, December 14, 2007

Semantic Ramblings

This past Tuesday I attended a talk about the Semantic Web and social networks by Sun's Henry Story. I've posted my impressions, mixed with some uninformed commentary about semweb in general, on my work blog.

Sunday, December 09, 2007

REST Beyond the Obvious

Now that the heat of the REST vs. SOA battle seems to be dissipating, we can try to shed some light on how the choice of a resource-centric design affects overall enterprise architecture. We can start with a thought experiment. Imagine a company growing so fast that its HR staff is overwhelmed by the task of forwarding all the new job openings to recruiting agencies. Our job is to automate this problem away. Using REST.


The basics

As RESTful Web Services go, this one seems pretty simple. One resource per job opening, represented with a simple custom XML format; maybe a collection resource, listing all current openings; and possibly also a search resource, to look for openings with specific attributes. Something like this:


URI Template                       | Method | Representation
-----------------------------------|--------|-------------------------------------------------------------------
/job_opening/{id}                  | GET    | Job opening, in a custom XML schema
/job_openings                      | GET    | Links to all openings, as a custom schema or with XOXO
/job_openings/find?{query_params}  | GET    | Custom schema, XOXO, or an Atom feed, all with OpenSearch elements

So, how do we create the job opening resources? The obvious choice would be to apply the traditional RESTful collections pattern: a POST to the /job_openings collection with an entity body containing the XML representation of the opening triggers the creation of a new resource, whose URL is then returned in the Location header of the response.
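In Ruby, that creation flow could look something like the sketch below; the host name and the XML fields are made up for the example:

    # POST a new opening to the collection; expect 201 + Location back.
    require 'net/http'
    require 'uri'

    opening_xml = <<-XML
    <job_opening xmlns="http://example.com/ns/job_opening">
      <title>Senior Plumber</title>
      <department>Facilities</department>
    </job_opening>
    XML

    uri = URI.parse('http://hr.example.com/job_openings')
    response = Net::HTTP.start(uri.host, uri.port) do |http|
      http.post(uri.path, opening_xml, 'Content-Type' => 'application/xml')
    end

    # The URL of the freshly minted resource:
    puts response['Location'] if response.code == '201'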


Shaking things up

But there is an alternate model: let client departments simply publish job_opening resources themselves, on their departmental web servers. There is a vast array of options for doing such a thing, none of them requiring users to write XML, of course. IT could whip up a word processor macro to convert documents to our XML format and save them to a web server. In the Microsoft world, one could use Word's XML support to insert elements from the job_opening namespace into a document and publish it to a WebDAV-enabled server, maybe using SharePoint. In the OpenOffice universe, we could just as easily save a document as ODF, pass it through an XML transformation to the job_opening format, and then publish it to a web server. The publishing itself can be done with a number of techniques, from FTPing to a shared directory, to using an Atompub client with a compatible server, to the aforementioned WebDAV protocol. A completely different, and much more complex, option would be to have each department run an instance of a custom job openings application, maybe even something similar to the "obvious choice" outlined above, but I really don't see how that would be useful.

So far, all very cute and distributed, postmodern even, but does it work? I mean, the end goal is to send all the openings info to external recruiting companies, and just throwing the resources at the internal web doesn't quite accomplish that. The missing piece of the puzzle is a simple crawler: a small piece of software that scans the web by following links and looks for resources that accept being viewed as job_openings. A side effect of most publishing options mentioned in the previous paragraph is that a link to the newly available document is created in some sort of listing. Our crawler only needs to know how to get at those listings and how to parse them looking for links. If you think parsing is complicated, I urge you to consider the following code snippet: /(src|href)="(.*?)"/
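A toy version of that crawler, in Ruby; the listing URLs and the namespace-based check are stand-ins for whatever convention we would actually settle on:

    require 'net/http'
    require 'uri'

    LISTINGS = ['http://dept-a.example.com/openings/',
                'http://dept-b.example.com/jobs.html']
    JOB_NS   = 'http://example.com/ns/job_opening'

    LISTINGS.each do |listing_url|
      page = Net::HTTP.get(URI.parse(listing_url))
      # The dreaded parser, lifted straight from the paragraph above:
      page.scan(/(?:src|href)="(.*?)"/).flatten.each do |link|
        url = URI.join(listing_url, link).to_s   # resolve relative links
        doc = Net::HTTP.get(URI.parse(url))
        # Does this resource accept being viewed as a job opening?
        puts "found opening: #{url}" if doc.include?(JOB_NS)
      end
    end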




What If...

Since this is a thought experiment, we can get more, well, experimental. Let's see what could be gained from shunning XML altogether and using an HTML microformat for the job_opening resource representations. This would expand the publishing options even further. For instance, a plain WordPress blog, maybe one already used as a bulletin board of sorts by some department, could be repurposed as a recruiting server. Another benefit would be having every document in the system in human-readable form; not XML "human-readable", but really human-readable.
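For illustration, a hypothetical job_opening microformat could be as simple as this (the class names are invented, not a published microformat):

    <div class="job-opening">
      <h2 class="title">Senior Plumber</h2>
      <span class="department">Facilities</span>
      <p class="description">Experience with leaky abstractions a plus.</p>
    </div>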

Now, suppose the company decided it just wasn't growing fast enough, went ahead, and bought a smaller competitor. And of course, this competitor already had an in-house recruiting application. Being of a more conservative nature, which just might explain why they weren't market leaders, their IT department built it as a traditional web front-end / RDBMS-backed application. How do we integrate that with our REST-to-the-bone job openings system? First we note that there is no need to have the data available in real time; after all, no company needs to hire new people by the minute. Given that, the simplest solution would probably be a periodic process (someone said cron?) that extracts the data directly from the database, transforms it to our job_opening format, and shoves it onto some web server.
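The cron job could be as unglamorous as the following Ruby sketch; the connection string, table, and column names are all invented:

    # Nightly export: RDBMS rows out, job_opening documents in the web root.
    require 'dbi'   # the classic Ruby/DBI library, assuming it's installed

    OUTPUT_DIR = '/var/www/acquired_openings'

    dbh = DBI.connect('DBI:Mysql:recruiting:dbhost', 'hr', 'secret')
    dbh.select_all('SELECT id, title, department FROM job_openings').each do |row|
      xml = "<job_opening xmlns=\"http://example.com/ns/job_opening\">" +
            "<title>#{row['title']}</title>" +
            "<department>#{row['department']}</department></job_opening>"
      # "Shoves it onto some web server": here, a directory Apache serves.
      File.open("#{OUTPUT_DIR}/#{row['id']}.xml", 'w') { |f| f.write(xml) }
    end
    dbh.disconnect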



So what?

I can't say whether, if this scenario were real, I would choose such a distributed approach. Maybe a quick Rails app running on a single server would better fit the bill. But that's not the point of our little exercise in architectural astronautics; we are here to think about the effects of REST's constraints on overall architecture. So, let's recap some of them:

  • The line between publishing content and using an application blurs. “Static” documents and active programs have equal footing as members of the system.

  • Thanks to the uniform interface constraint, the mere definition of a data format allows for unplanned integration.

  • If we add a standard media type to the mix, we can take advantage of established infrastructure, as in the case of a blogging platform reused as a “service endpoint” simply by adoption of HTML microformats.

  • REST in HTTP encourages pull-based architectures (think of our HR crawler), which aren't all that common outside of, well, HTTP web applications.

  • The very idea of a service may fade away in a network of distributed connected resources. The closest thing to a service in our system is the crawler, but note that it does its work autonomously, no one ever actually calls the service.

  • Links (aka hypermedia) are a crucial enabler for loosely coupled distributed services. In some sense, everything is a service registry.
  • One of the forgotten buzzwords of the 90's, intranet, may come to have a new meaning.

Sunday, December 02, 2007

Metaprogramming and conjuring spells

"A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform."
- Abelson and Sussman — SICP


Among the plethora of metaphors that plague our field, I find computing as the incantation of spells one of the least annoying. Some fellow with a long beard writes odd-looking prose that will later, through some magical process, turn lead into gold or help vanquish some inept demon. Man, that show sucked, I don't know why anyone would watch that shit, especially the reruns Tuesday to Saturday at 04AM and Monday to Friday at 12PM on channel 49. Well, anyway, what's interesting about the analogy is that it shows the dual nature of software: it is both a magical process and a bunch of scribbled lines. To mix metaphors a bit, we can say that software is, at the same time, a machine and the specification for that machine.

This is what makes metaprogramming possible and, also, what makes it unnecessary. By the way, when I write "metaprogramming", I'm specifically thinking about mutable meta-structures, changing-the-tires-while-the-car-is-running metaprogramming, not mere introspection. Throwing away that mundane car analogy and getting back to our wizardry metaphor, metaprogramming would be like a spell that modifies itself while being cast. This raises the question: beyond fodder for bad TV show scripts, what would this be useful for? My answer: for very little, because if the wizard programmer already has all the information needed for the metaprogram, then he might as well just program it... To make this a little less abstract, take a simple example of Ruby metaprogramming:
01 class DrunkenActress
02   attr_writer :blood_alcohol_level
03 end
04
05 shannen_doherty = DrunkenActress.new
06 shannen_doherty.blood_alcohol_level = 0.13
By the way, I'm not a Rubyist, so please let me know if the example above is wrong. The only line with meta-stuff is line 2, where a method named "attr_writer" is called with an argument of :blood_alcohol_level. This call changes the DrunkenActress class definition to add a setter method for the blood_alcohol_level attribute. We can see that it worked in line 6, where we call the newly defined setter.

But the programmer obviously already knows that the DrunkenActress class needs to define a blood_alcohol_level attribute, so we see that meta-stuff is only applied here to save a few keystrokes. And that is not a bad motivation in itself; more concise code is often easier to understand. Then again, there are other ways to eliminate this kind of boilerplate without resorting to runtime metaprogramming, such as macros or even built-in support for common idioms (in this case, properties support like C#'s or Scala's).

There may be instances where the cleanest way to react to information available only at runtime is through some metaprogramming facility, but I have yet to encounter them. Coming back to Rubyland, ActiveRecord is often touted as a poster child for runtime metaprogramming, as it extracts metadata from the database to represent table columns as attributes in a class. But those attributes will be accessed by code that some programmer will write, and that means, again, that the information consumed by the metaprogram needs to be available at development time. And indeed it is, in the database. So ActiveRecord's metaprogramming facilities are just a means to delegate the definition of some static structure to an external store, with no real dynamicity involved. If it were not so, this kind of thing would be impossible. Also note that recent Rails projects probably use Migrations to specify schema info in yet another static format.
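For reference, the kind of code under discussion; this sketch assumes a `people` table with a `name` column already exists in some database:

    require 'rubygems'
    require 'active_record'

    ActiveRecord::Base.establish_connection(
      :adapter  => 'sqlite3',
      :database => 'example.db'
    )

    class Person < ActiveRecord::Base
      # Nothing declared here: at load time ActiveRecord reads the `people`
      # schema and metaprograms an accessor for each column it finds.
    end

    someone = Person.new
    someone.name = 'Shannen'   # works only because a `name` column exists
    someone.save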

To summarize, runtime mutable metaprogramming is like that bad purple translucent special effect: flashy, but ultimately useless. Anyway, that's my current thinking on the matter, but I still need to read more on staging.

[EDIT: corrected a mistake relating to the code sample]

Thursday, November 29, 2007

JSR-311 at USP

I gave a talk about RESTful Web Services and JSR-311 at our local USPJUG meeting; the slides are available here (in Portuguese).

Wednesday, November 14, 2007

Event "Conexão Java 2007"

This past week I attended an event called Conexão Java. I've posted my notes on the other blog.

Monday, October 15, 2007

Link-blog

I've set up an alternate feed that splices my bookmarks from del.icio.us with this blog's entries. You can find it here.

Saturday, October 13, 2007

Pondering

"Formal methods will never have any impact until they can be used by people that don’t understand them."
Tom Melham


"The original study that showed huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson and Grant (1968). They studied professional programmers with an average of 7 years' experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1, the ration of debugging times over 25 to 1, of program size 5 to 1, and of program execution speed about 10 to 1. They found no relationship between a programmer's amount of experience and code quality or productivity.

Although specific rations such as 25 to 1 aren't particularly meaningful , more general statements such as "There are order-of-magnitude differences among programmers'" are meaningful and have been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000)."


"If you look at the way software gets written in most organizations, it's almost as if they were deliberately trying to do things wrong. In a sense, they are. One of the defining qualities of organizations since there have been such a thing is to treat individuals as interchangeable parts. This works well for more parallelizable tasks, like fighting wars. For most of history a well-drilled army of professional soldiers could be counted on to beat an army of individual warriors, no matter how valorous. But having ideas is not very parallelizable. And that's what programs are: ideas."


"Software entities are more complex for their size than perhaps any other human construct because no two parts are alike (at least above the statement level). If they are, we make the two similar parts into a subroutine--open or closed. In this respect, software systems differ profoundly from computers, buildings, or automobiles, where repeated elements abound. [...]

The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence. For three centuries, mathematics and the physical sciences made great strides by constructing simplified models of complex phenomena, deriving properties from the models, and verifying those properties by experiment. This paradigm worked because the complexities ignored in the models were not the essential properties of the phenomena. It does not work when the complexities are the essence."
Fred Brooks
No Silver Bullet


"Architecture is intended to be facilitative, of course, in that a good architecture should enable developers to build applications quickly and easily, without having to spend significant amounts of time re-inventing similar infrastructure across multiple projects. [...]

But an architecture is also intended to be restrictive, in that it should channel software developers in a direction that leads to all of these successes, and away from potential decisions that would lead to problems later. In other words, as Microsoft's CLR architect Rico Mariani put it, a good architecture should enable developers to "fall into the pit of success", where if you just (to quote the proverbial surfer) "go with the flow", you make decisions that lead to all of those good qualities we just discussed."


"The more interesting your types get, the less fun it is to write them down!"


"If you don’t know the difference between a group, a ring, and a field, you have no business overloading operators.

Now I'm not saying that one has to take a course in abstract algebra before you can be a competent programmer. You don't, as long as the language you program in doesn't support operator overloading (or as long as you're wise enough not to use it if it does). However, since most programmers didn't get beyond ODEs in college (if indeed they got that far; some of my comp sci friends struggled mightily with calculus and had to retake it repeatedly), one can't responsibly design a language that requires mathematical sophistication in the 99th percentile for proper use."

Elliotte Rusty Harold
Operator Overloading: Trickier Than it Looks



"You used to start out in college with a course in data structures, with linked lists and hash tables and whatnot, with extensive use of pointers. Those courses were often used as weedout courses: they were so hard that anyone that couldn't handle the mental challenge of a CS degree would give up, which was a good thing, because if you thought pointers are hard, wait until you try to prove things about fixed point theory.

All the kids who did great in high school writing pong games in BASIC for their Apple II would get to college, take CompSci 101, a data structures course, and when they hit the pointers business their brains would just totally explode, and the next thing you knew, they were majoring in Political Science because law school seemed like a better idea. I've seen all kinds of figures for drop-out rates in CS and they're usually between 40% and 70%. The universities tend to see this as a waste; I think it's just a necessary culling of the people who aren't going to be happy or successful in programming careers."

Joel Spolsky
The Perils of JavaSchools

Saturday, October 06, 2007

Four books

Another blog post, still no inspiration for anything creative, so let's just rehash that old bloggers' recipe of stuffing some "cultural" reviews into a post and hoping it passes for content. Excited yet?

Concepts, Techniques, and Models of Computer Programming



First up is Peter Van Roy's "Concepts, Techniques, and Models of Computer Programming". Don't let that big title scare you away. The book is pretty hefty in itself, but don't let that scare you either; it is a great read. But what is it about, you may ask? Well, CTM - as it is affectionately called - could sit comfortably on the "programming paradigms" shelf, alongside Sebesta and Kamin: books that aim to take the reader on a stroll through the computing zoo, allowing him or her to gaze in awe at the strength of higher-order functions, be amused by the quirkiness of dataflow variables, marvel at the elegant logical predicates lying under a sunny...

OK, I took the metaphor too far, sorry about that. What I was trying to say is that CTM doesn't limit itself to enumerating paradigms, accompanying each with a brief description and a couple of examples and leaving it at that. Van Roy's text goes further by discussing in reasonable depth the programming techniques applicable to each computation model (the authors prefer to avoid the term "paradigm") and, more than that, advising the reader on how to best integrate them.

The technical approach that enables this even-handed treatment is to describe the models in terms of a kernel language that is expanded throughout the book. Each chapter shows how the kernel language needs to be augmented to support the required features, how it is interpreted by an abstract machine, and what syntactic sugar can be added on top of the kernel to ease programming.

It would not be a fair review if I didn't relate at least one negative point, but it is a minor one. I think the approach to logic/relational programming would be more representative of the usual intent if the language were more predicate-and-fact based. Or, to put it in other words, I like Prolog syntax better than the "Relational Oz" one. As the authors explain, both approaches are semantically equivalent at their core, so I'm nitpicking. Overall, I can safely say that I recommend this book. It is, if you pardon the cliché, an eye-opener, making it clear that the "mainstream" imperative and stateful programming model is but one of many equally significant alternatives.

Engines of Logic



If you've ever been subject to any formal instruction in computing (or "informatics" or Information Systems or whatever), you probably had to endure at least one lecture on the "history of computing", which usually amounts to a lengthy enumeration of machines. If you were particularly unlucky, it started with some blabber about the abacus back in who-the-fuck-cares AD, and it invariably went on to spend a great deal of time discussing punched cards and looms. Yeah, freaking looms! I'm sure Joseph Marie Jacquard is a swell guy and all, but is a rudimentary mechanical input system all that important in the grand scheme of things? My answer, of course, is no. As Dijkstra put it: "Computer science is as much about computers as astronomy is about telescopes". And that is why Engines of Logic is such a great little book, it seeks to give an account of the history of ideas that culminated in modern computing.

We see how Leibniz's utopia of a machine to automate human reasoning, up to the point of forever settling all disputes and intellectual arguments, evolved into a series of formal mathematical systems for "calculating with thoughts" (mathematical logic) at the hands of such great men as Boole, Frege, Cantor, Gödel, and others, culminating in the notion of "universal computers" and their actual realization. The book reads like a good popular science work, with amusing biographical anecdotes scattered throughout the nine chapters. Although, contrary to many works in this genre, Engines of Logic is not afraid of stating formulas and proving theorems when deeper insight is required*. Check out a small excerpt from the chapter on David Hilbert for a sample of the lighter side of the book:
During my own graduate student days in the late 1940s, anecdotes about Göttingen in the 1920s were still being repeated from one generation of students to the next. We heard about the endless cruel pranks that Carl Ludwig Siegel played on the hapless Bessel-Hagen, who remained ever gullible. My own favorite story was about the time that Hilbert was seen day after day in torn trousers, a source of embarrassment to many. The task of tactfully informing Hilbert of the situation was delegated to his assistant, Richard Courant. Knowing the pleasure Hilbert took in strolls in the countryside while talking mathematics, Courant invited him for a walk. Courant managed matters so that the pair walked through some thorny bushes, at which point Courant informed Hilbert that he had evidently torn his pants on one of the bushes. "Oh no," Hilbert replied, "they've been that way for weeks, but nobody notices".
Also of note in the paragraph I quoted is the personal touch given at times by the author, Martin Davis. He is a theoretical computer scientist, with the distinction of having been present at Princeton back in the 1950s, in the company of chaps like John von Neumann, Kurt Gödel, Hermann Weyl and Albert Einstein. As an author, Davis is probably best known for writing more technical books on computability and complexity. But please, make no mistake, this is emphatically not an academic textbook; it goes to great pains to clearly explain subtle concepts like Cantor's diagonal method, achieving a balance between rigor and ease that is hard to come by**.

1984



It is sad that I only got around to reading this book now. "Now" meaning late 2006, as these reviews are a little bit behind schedule... Anyway, as I'm having a hard time finding worthy adjectives, I guess something I could say is that after finishing 1984 I felt utterly stunned. It is powerful and it is important, so put it on your reading list if you haven't already.

A final observation is that the edition I'm linking to - a combined printing of Animal Farm and 1984 published by Harcourt - is cheap and pretty good. The preface is signed by Christopher Hitchens.

Snow Crash



I'm getting lazy (well, lazier) so this will be short: good book, so-so plot, so-so characters, awesome ambiance.


* To be fair, some of the trickiest proofs, on non-crucial topics, are left to endnotes. Still, those notes are far easier to read than most academic mathematical tomes.
** Off the top of my head, I can only think of Nagel and Newman's book on Gödel's proof.

Monday, September 03, 2007

Concurrency and the demand for computing

In my previous post I expressed some doubt about the "market" for rich internet applications. The skepticism was rooted in an observation that the demand side of the equation isn't looking so promising: what applications will emerge to benefit from those wonderful technologies?

I find myself thinking along similar lines with regard to concurrency. For the sake of analysis, let's split the space of applications into server-based and client-based. Members of the first group basically deal with responding to requests coming over the network. This means there is a naturally high degree of parallelism and, of course, this has been exploited for a long time. The typical scenario is some serial business application code atop a middleware platform that handles threading and I/O. So on the server front, the "multicore revolution" will have little impact on most software development efforts. Now, desktop software developers don't have such luck - the era of surfing on Moore's Law is really over. And so what? The way I see it*, raw computing has ceased to be an important bottleneck; long gone are the days of watching a crude hourglass animation while the CPU labored away. Not that we do any less waiting now, it's just that these days we spend our time waiting for the network.

Anyway, maybe the concurrency boogieman is less scary than we think.


*(this is a blog, after all)

Monday, August 27, 2007

The Wealth of Software

One of the hot topics in 2007, much like it was in 1997, is the rich versus thin client issue: will the future of software lie purely in the cloud, or will rich fat desktop apps see a revival? This old debate was rekindled earlier this year by the announcements of Adobe AIR, Microsoft Silverlight and Sun JavaFX, all fighting for the so-called Rich Internet Apps (RIA) market. I guess the neologism is marginally better than the previous buzzword, AJAX, which still makes me cringe a little bit. In fact, I blame Jesse James Garrett for the worsening of my case of bruxism, but I'm digressing...

What brought me to this weary debate is the historical perspective brought by Herb Sutter in this recent post. He argued that we are seeing a manifestation of a cycle where computing moves periodically between the center and the edges, a movement driven by an imbalance between resources:
"More specifically, it's the industry constantly rebalancing the mix of several key technology factors, notably:
  • computation capacity available on the edge (from motes and phones through to laptops and desktops) and in the center (from small and large servers through to local and global datacenters)
  • communication bandwidth, latency, cost, availability, and reliability"
While this seems reasonable enough, I think there is an element missing: he scarcely mentions the role of the applications that run on those systems. The article presents a purely supply-side analysis of the computing marketplace, to put it in "economic" terms. To illustrate the importance of the demand side, I have expanded Herb's chronology table with the important application classes of each epoch:

Era/Epoch   | The Center                 | The Edge                              | Apps
------------|----------------------------|---------------------------------------|-----------------------------------------------
Precambrian | ENIAC                      |                                       | Military calculations.
Cambrian    | Walk-up mainframes         |                                       | Huge business batch processing.
Devonian    |                            | Terminals and time-sharing            | Big business batch processing.
Permian     |                            | Minicomputers                         | Scientific computation, maybe? I don't know…
Triassic    |                            | Microcomputers, personal computers    | Spreadsheets, desktop publishing.
Jurassic    | File and print servers     |                                       | Departmental or small-business DB apps (think video rental service software).
Cretaceous  | Client/Server, server tier | Client/Server, middle tier            | OLTP (for instance, bank account management).
Paleocene   |                            | PDA                                   | PIM.
Eocene      | Web servers                |                                       | Web portals (Yahoo!, Excite!, …).
Oligocene   |                            | ActiveX, JavaScript; PDA phone        | Web-based apps (Hotmail, many ASP/JSP/PHP DB apps).
Miocene     | E-tailers                  |                                       | ?
Pliocene    |                            | Flash, AJAX                           | Fancy web apps (Flickr, Google Maps).
Pleistocene | Web services; data centers |                                       | Google Data, DabbleDB?
Holocene    |                            | Google Gears, Adobe AIR, Silverlight  | Now what?


Now, what do we fill in that last cell? What are the killer apps of the RIA platforms? There is no clear answer, but I see basically two niches that can be a good fit for the space: apps that handle audiovisual media (YouTube, Picnik, etc.) and apps that require rich modes of interaction (Google Earth). It's important to bear in mind that media-intensive operations are expensive all around, from server storage space to quality digital video cameras for the users. Also, in many cases AJAXian alternatives exist (see PXN8 or any other Web 2.0 reflective-logoed startup on TechCrunch). Regarding the other niche, applications using novel user interaction features, it seems cool in theory, but apart from a handful of HCI journals there is very little action in this space nowadays. And that's probably good, because most attempts at UI innovation fall flat on users' faces, as DHH eloquently argues in this podcast. All in all, skepticism is healthy as usual, but I don't see the door closed shut on a richer software landscape.

Sunday, August 19, 2007

Administrivia

  • I realized there is no point in keeping the blog bilingual, so from now on all posts will be in English. I may, some day, open another Portuguese-only blog, but not anytime soon (I can barely post here more than once a month).
  • But I do have another blog. I should have announced this a while ago, but better late than never, I guess. It is at http://blogs.sun.com/rafaelferreira. It is there because I've been working for Sun for the past few months - I'm a Sun Campus Ambassador at the University of São Paulo. That means, BTW, that if you're a student or a professor at USP and want to engage with Sun, I'm your guy (hint: my name is Rafael Ferreira and all email addresses at Sun are formed as Firstname.Lastname@sun.com). On a related note, we have set up a JUG for the USP community at https://uspjug.dev.java.net/
  • Oh, and obviously, all opinions expressed here are my own and not necessarily shared by Sun Microsystems or any of my co-workers. Duh.

Monday, June 25, 2007

Pull my Strings

My favorite Joel Spolsky article is The Law of Leaky Abstractions. It may not be the wittiest or the most controversial, but it makes up for that in sheer importance. Joel's site is down right now (something that doesn't happen frequently), but Wikipedia says the Law was worded as:

"All non-trivial abstractions, to some degree, are leaky."

Well, some pretty trivial abstractions are leaky too. What is the type you use most in your programs? Probably int, but after that? For me it is String (currently java.lang.String or scala.String to be annoyingly more precise).



That is a pretty shitty* abstraction, don't you think? String is supposed to be short for StringOfCharacters. Leaving aside the fact that shorthanding is a bad practice, the name may not be wrong, per se, but it does nothing to characterize the meaning of the type in our programs. Why not Text? It is semantically more accurate and actually shorter than "String".
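And since Ruby keeps coming up on this blog, note that a language with open-ended constants at least lets you alias the offender; a toy, not a recommendation:

    Text = String                    # install the better name
    motto = Text.new('names matter')
    puts motto.class                 # => String; the old name still leaks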



* To retain the plumbing analogy.

Monday, June 18, 2007

Futurology

My crystal ball sees two trends* for the programming languages of the future*:
  • Quantification. Just because PROLOG was a collective frustration doesn't mean its ideas are useless. By the way, quantification is the most interesting part of AOP, IMNSHO.
  • Deep reflective metaprogramming. Pretty, isn't it? I read it in a book. Seriously, though, I believe the importance of metaprogramming is obvious; what isn't so obvious are the qualifiers "reflective" and "deep". The first comes from the observation that if mutable state is a problem, then mutable metastate is an even bigger metaproblem. And "deep" because I think reflection has to reach all the way down to the AST level - Erik Meijer, in this InfoQ interview, talks about this well, about how "quoting" is a fundamental concept in formal systems.

* Definitions:
trend == something I want to happen.
future == a point in time far enough away that either (a) the aforementioned trend has come true or (b) nobody remembers this post anymore.

Sunday, May 27, 2007

IDLs for REST?


[Edit: Aristotle Pagaltzis covered this same topic in this post.]


Before anything else, a warning for anyone who uses that piece of junk Blogger: the "auto-save" doesn't work! As the reader has probably deduced, I discovered this problem in a rather unpleasant way, losing the result of a few hours of typing. Let's try again...

If I still remember correctly, I started by explaining that I was motivated by two posts I stumbled upon out there on the internet. The first was Roy Fielding, on the REST-discuss list, commenting that one of the style's principles is not well understood. The second was Daniel Quirino blogging about a shortcoming of REST: the absence of interface description languages. Quoting Daniel quoting himself:

"WSDL plays a very important role in this context of serving other systems, since there is a standard way to describe to other machines all the services being offered."

Sounds reasonable, doesn't it? The provider of a service writes a WSDL document describing the operations offered and the input and output data types of each one. From there, to develop a client for the service one only has to obtain a copy of the WSDL, use some tool to generate the corresponding artifacts, and invoke the operations as if they were local procedure calls(*).

This description does not apply to RESTful services. To begin with, the concept of an operation is very different: there is a single fixed set of them for all resources in the entire system. In the case of HTTP, the operations are the famous verbs GET, PUT, POST and DELETE(**). OK, but where does the semantics that used to live in the operations go? At first sight, it ends up in resource identification: a getStockQuote("SUNW") call becomes a GET on the URL http://example.com/stock/SUNW.
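A toy Ruby rendering of that shift; the RPC stub is imaginary, the URL is the one from the text:

    require 'net/http'
    require 'uri'

    # RPC mindset: the semantics lives in an operation name on some stub.
    #   quote = stock_service.getStockQuote('SUNW')
    # Resource mindset: one generic verb, the semantics in the identifier.
    quote = Net::HTTP.get(URI.parse('http://example.com/stock/SUNW'))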

But REST is a bit more complicated than that. In fact, the arguments for the style based on it being simpler than WS-* are slightly mistaken, IMHO. REST's main focus is scalability, in several senses, from a concern with controlling server load relative to demand(***) to an understanding of how to enable network effects so that each individual contribution increases the value of the whole network. In Roy Fielding's thesis this focus leads to four constraints that guide the architecture: "identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state". One of these is the principle Roy observed to be poorly understood; can you guess which?

The winner is "hypermedia as the engine of application state", which indeed looks a lot more obscure than the others. The idea here, as I understand it, is that application state is known only by the client but is driven by the server. Known only by the client because the style demands that the server keep no information whatsoever about "sessions" with a client. That's right: cookies with session IDs and the like are not RESTful. The "driven by the server" part calls for an example, so let's consider how HTML forms work. The browser receives a document containing a <form action="some_uri"> tag defining the destination, and <input ...> subtags specifying the data to be filled in. The user enters their data, hits submit, and the browser POSTs to the resource identified by some_uri, passing an application/x-www-form-urlencoded representation of the information. Notice that the client did not need to know a priori which URI to POST its data to; it was obtained from the server. In other words, the state transition happened on the client by following a link (hypermedia). Sam Ruby explains this much better than I do here.
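In HTML terms, the artifact doing the driving is just this kind of thing; the action URI and field names are whatever the server chose to send today:

    <form action="/some_uri" method="post">
      <input type="text" name="quantity" />
      <input type="submit" value="Order" />
    </form>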

This constraint helps different portions of the system evolve independently. For example, if a server decides to change its navigation structure, it only has to start serving different URIs in its links, and clients keep operating without any changes. Other characteristics of REST, like content negotiation based on standardized types, also help with this requirement. And of course, independent evolution is crucial for a system that intends to reach global scale. There is the link with scalability, no pun intended.

That is why many RESTafarians criticize interface description languages, WADL included. As we saw in the case of WSDL, they couple a client to a service definition at "compile time", so to speak. Moreover, some advocate the use of documents analogous to these interface description standards, but designed to be interpreted by clients at runtime. One initiative of this kind is RDF-forms, about which I know absolutely nothing.

This is still controversial, and the opponents have a strong argument - IMHO again - that programmatic communication between applications, EAI-style, is very different from the interactive global hypermedia system REST was conceived for. My personal opinion, orthodoxies aside: consider adopting REST for your public APIs, even if you don't rigorously follow every one of Dr. Fielding's precepts.





* A smart team will know that the reliability and latency of these calls are very different from local invocations, but that doesn't change the process much in broad terms.

** Plus HEAD, OPTIONS and other poor cousins.

*** Caching, caching, caching.

Friday, February 23, 2007

Design Patterns are not Recipes

The controversy

Over the past few months I've been seeing some Design Patterns backlash in the blogosphere; I guess it accompanies the agile anti-hype phase that's also strong these days. The most rabid attack came from Mark Dominus, who makes the case that many innovations in programming languages in the past decades were responses to common practices that could have been described as patterns, had the pattern format been known back then. Take, for instance, what we now know as a procedure call: code to stack up register values and a return address, followed by a jump instruction. Before assemblers became available, programmers used to recode this every time they wanted a subroutine. Dominus sees this observation as a sign that something is wrong with the patterns movement:
"Identification of patterns is an important driver of progress in programming languages. As in all programming, the idea is to notice when the same solution is appearing repeatedly in different contexts and to understand the commonalities. This is admirable and valuable. The problem with the "Design Patterns" movement is the use to which the patterns are put afterward: programmers are trained to identify and apply the patterns when possible. Instead, the patterns should be used as signposts to the failures of the programming language. As in all programming, the identification of commonalities should be followed by an abstraction step in which the common parts are merged into a single solution."
Professor Ralph Johnson, one of the original "Gang of Four" members, responded:
"No matter how complicated your language will be, there will always be things that are not in the language. These things will have to be patterns. So, we can eliminate one set of patterns by moving them into the language, but then we'll just have to focus on other patterns. We don't know what patterns will be important 50 years from now, but it is a safe bet that programmers will still be using patterns of some sort."
Dominus retorts with subtler arguments, and I encourage you to read his article to avoid any misinterpretations of my part. But I'll still quote what I believe is the core of his thinking:

"What I imagine is that when pattern P applies to language L, then, to the extent that some programmer on some project finds themselves needing to use P in their project, the use of P indicates a deficiency in language L for that project.

The absence of a convenient and simple way to do P in language L is not always a problem. You might do a project in language L that does not require the use of pattern P. Then the problem does not manifest, and, whatever L's deficiencies might be for other projects, it is not deficient in that way for your project.

(...)

But to the extent that some deficiency does come up in your project, it is a problem, because you are implementing the same design over and over, the same arrangement of objects and classes, to accomplish the same purpose. If the language provided more support for solving this recurring design problem, you wouldn't need to use a "pattern". Consider again the example of the "subroutine" pattern in assembly language: don't you have anything better to do than redesign and re-implement the process of saving the register values in a stack frame, over and over?"

My take

Mark seems to be worried that programming language innovation will be stifled by the patterns movement, on account of programmers being taught to mimic patterns in their code instead of applying that energy to incorporating them into programming languages. It is important to first acknowledge, as he himself put it, that it is valuable to identify commonalities in software design; one could even say this is a precondition for evolving a programming language. So, he sees the problem in not working to embody these commonalities in the language. From my exposure to the patterns literature, I believe the movement is not in any way against incorporating patterns as programming language features. It is implicit in this passage from GoF, page 4:
"Our patterns assume Smalltalk/C++-level language features, and that choice determines what can and cannot be implemented easily. If we assumed procedural languages, we might have included design patterns called "Inheritance," "Encapsulation," and "Polymorphism." similarly, some of our patterns are supported directly by the less common object-oriented languages. CLOS has multi-methods, for example, which lessen the need for a pattern such as Visitor"
And explicit in Ralph Johnson's aforementioned blog post: "Many of the patterns will get subsumed by future languages, but probably not all of them". Now that we know that design patterns are contextual and can be incorporated as language features, we must ask whether the emphasis of the patterns movement isn't misplaced. Why go to the trouble of publishing large books describing at length things like intent, motivation, participants, consequences, related patterns, etc., when we could just inventory all patterns and shove our favorites into our programming languages? One very simple reason is that not all patterns would benefit from being reified. Dominus says that people often mentioned MVC to him as an example of a pattern so complex that it could not be absorbed into a programming language. But, of course, they were all wrong and he was right, since novel systems like Ruby on Rails or Subtext do exactly that, definitively confirming his thesis that the only impediment was an atrocious lack of imagination on the part of his opponents! This is all very strange to me, since "MVC" came to life as pretty concrete software: it was the GUI development framework bundled with Smalltalk-80. Many years later, after several separate versions were developed, it was described and published as an (architectural) pattern.

Leaving object orientation's early history aside, let's get back to the matter of patterns that aren't targets for reification. One example is the Façade(185) design pattern. It is one of the most frequently applied GoF patterns, described as having the Intent to "provide a unified interface to a set of interfaces in a subsystem", so there can be no doubt that it is indeed recurring. Looking at the Structure and Participants sections, we can see that it is very simple: just a Façade class that depends on many other (unidentified) subsystem classes inside a boundary. A consequence of this simplicity is that there would be little gain in incorporating the pattern into a programming language, as it doesn't really represent specific interactions that have to be recoded each time the pattern is applied.
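To see how little mechanism there is to absorb, here is a minimal Ruby sketch of the pattern; the compiler-flavored names are made up for illustration:

    # Subsystem classes, each with one narrow job.
    class Scanner;   def tokenize(src) 'tokens';   end; end
    class Parser;    def parse(tokens) 'ast';      end; end
    class Generator; def emit(ast)     'bytecode'; end; end

    # The Façade: one unified interface in front of the subsystem.
    class Compiler
      def compile(source)
        tokens = Scanner.new.tokenize(source)
        ast    = Parser.new.parse(tokens)
        Generator.new.emit(ast)
      end
    end

    Compiler.new.compile('puts 1')  # callers never touch the subsystem

The value is in the shape and, mostly, in the name; there is no recurring interaction in there for a language feature to capture.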

One could respond that if that is the case, then the pattern is useless. It describes a solution so simple that a programmer who has never heard of the "Façade design pattern" would probably be able to construct it by himself. The error in this judgment is that a pattern's merit isn't measured only by how interesting or useful a solution it gives to a problem. Knowing that it's not hard to invent Façade does not detract from the fact that being able to talk about Façade is a big design win in itself. Patterns are as much an aid to communication as they are vehicles for disseminating knowledge:
"Naming a pattern immediately increases our design vocabulary. It lets us design at a higher level of abstraction. Having a vocabulary for patterns lets us talk about them with our colleagues, in our documentation, and even to ourselves. It makes it easier to think about designs and to communicate them and their trade-offs to others."[GoF, page 3]
On the other hand, we already mentioned that some of the patterns are indeed amenable to reification. In that case there are two paths to choose from: coding the pattern as a library atop the usual language abstraction mechanisms, or incorporating it as a basic language feature. Dominus' second post argues that this choice is a mere implementation detail. Now, to paraphrase him, his point seems completely daft, which I take to mean that there's something that went completely over my head. The difference is obvious: if some feature can be implemented as a library, any programmer who wants it can write it exactly one time and reuse it as appropriate; no language modification required. As a contributor to CPAN, he should know well that nowadays much software reuse is done through open source libraries, lowering the recode count to zero for their users. This is a crucial distinction, as adding something to a language has a huge impact on the whole technical ecosystem, from an increased barrier to entry to pernicious interactions between features (non-orthogonality).

Lets take another look at Dominus' central proposition:
"(...) when pattern P applies to language L, then, to the extent that some programmer on some project finds themselves needing to use P in their project, the use of P indicates a deficiency in language L for that project. "
I believe the most important question here is: what does it mean to "use [pattern] P"? If you take it to mean following the steps from P's description, possibly copying some code from the "Implementation" section, then this reasoning makes sense. But patterns are not merely recipes, and a pattern language is not a cookbook! One way in which patterns differ from recipes is that, as we saw with Façade, the Solution section can be of little importance compared to the communication gain from just naming the pattern. Another difference is that there is wide latitude in implementation strategies for each pattern, not to mention cases where multiple patterns present themselves as design alternatives for a given set of forces.

Speculating a bit, I believe the origin of this antagonism toward design patterns is that the idea of a recurrent structure is antithetical to a certain ethos present in the software development community. Mark Dominus summarizes the feeling well in this passage: "As in all programming, the identification of commonalities should be followed by an abstraction step in which the common parts are merged into a single solution." It can also be formulated as a call to arms: Don't repeat yourself! Or better yet, DRY! (the love for TLAs is another part of the programming ethos :). I don't dispute that eliminating repetition is an important design heuristic, but I sometimes wonder if it can be taken too far, becoming a sort of factorization fetishism that is detrimental to productivity.

Wednesday, January 10, 2007

Domain Specific Languages and Stratification by Stability

2007 has barely begun and the DSL meme is already stirring controversy. Eric Evans, in an interview with InfoQ, said:

"More out on the cutting-edge are the efforts in the area of domain-specific languages (DSLs), which I have long believed could be the next big step for DDD. To date, we still don't have a tool that really gives us what we need. But people are experimenting more than ever in this area, and that makes me hopeful."
Philip Calçado quoted Evans and went further, stating that "DSLs are imminent, but the tools simply aren't there yet". Writing in a more skeptical tone, Rodrigo Kumpera commented that DSLs still look like a one-example technology: just as every AOP talk is about logging, every introduction to DSLs talks about configuration. Quoting:

"Todos estão falando o tempo todo sobre como configuração é um problema a ser resolvido por linguagens de domínio. Até o Martin Fowler comete essa gafe em sua palestra na JAOO 2006! O problema é o mesmo que logging, eu diria, representa nenhum risco a complexidade de um projeto. Quantos realmente já tiveram problemas relacionados a dificuldade de colocar logging em uma aplicação? Configuração idem."
I think the analogy with AOP doesn't hold. Tracing/logging is indeed a use case (an almost trivial one, BTW) for AOP. Configuration needs a more careful analysis. Take, for example, a traditional B2C eCommerce application (you can imagine it is, perhaps, a store selling, I don't know... pets). In the first iteration of the software, each item for sale has a catalog price on record. This decision soon proves inflexible, since seasonal price variation is common in retail. The requirement is met with a new feature allowing a discount/markup percentage to be set per month for each product. The user interface for this feature lives on a screen titled "Settings". Further along, it becomes clear that flexibility is still short of ideal; customers want to define more complicated pricing rules, such as:

if it sold more than 500 units last week, add a 10% markup.

One can insert these rules (hard-coded) into the middle of the system, or decide to keep the parameterization external and use a DSL. In either case, can we still call this configuration?
Knowing that code and data are basically the same thing, I see the application of DSLs as a set of decisions about what kind of information is part of the system and what kind of information constitutes "external" parameterization. At the end of the day, it is a technique for honoring the Stable Dependencies Principle (SDP): one should isolate the most volatile portions of a piece of software, and a domain-specific language - perhaps even one editable by end users - is a good alternative for implementing the less stable modules.
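To make that concrete, here is a sketch of what such a pricing DSL could look like as an internal Ruby DSL; the whole vocabulary (rule, sold_last_week, markup) is invented for the example:

    # The stable engine: interprets rules against sales statistics.
    class PricingRules
      def initialize(stats)
        @stats, @changes = stats, []
      end

      def rule(&block); instance_eval(&block); end
      def sold_last_week; @stats[:sold_last_week]; end
      def markup(percent); @changes << percent; end

      def adjusted(price)
        @changes.inject(price) { |p, pct| p * (1 + pct / 100.0) }
      end
    end

    # The volatile module, stratified out into the DSL:
    rules = PricingRules.new(:sold_last_week => 620)
    rules.rule { markup 10 if sold_last_week > 500 }
    puts rules.adjusted(80.0)   # prints the adjusted price (~88.0)

The rule block is exactly the kind of less-stable module the SDP tells us to isolate, and it is one step away from being editable by an end user.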