Thursday, August 11, 2011

Speculating about Future Development Environments

This series of essays, making up a sort of ode to the command line, got me thinking again about the user interface we face every day, our software development environments. Remember when you were a kid and spent time imagining the ideal car (it should be able to fly and transform into a submarine), or the ideal house (it should have an underwater garage to park the plane-submarine-car)? No? Perhaps I was a weird kid. But that's what this post is about, conceiving what would be my ideal programming environment.

I can't get into this subject without first discussing the big rift: editors versus IDEs. I have to say I'm not a member of either faction. I love vim, and I get a kick out of learning new tricks (did you know that if you accidentally delete something and lose a previously yanked text, you can get it back with "0p?), but I'm also a happy user of modern IDEs. In fact, I don't know why we have these factions. I suspect the origin is in the name IDE, Integrated Development Environment. It suggests a large piece of software integrating lots of tools to be used in the making of software, stuff like stupid code generation wizards and the always dissatisfying WYSIWYG interface designers. This perspective isn't entirely misleading, but I think it fails to capture the most important aspect of IDEs: they are editors that understand the programming language being used. They analyse your code with regard to syntax, semantics and type structure to be able to realise feats such as jumping from a call to a definition site, changing a symbol definition and all its uses, listing all lines of code that call a certain method/function, etc.

I see absolutely no reason why any programmer would forgo these abilities, but I see why many avoid current IDEs. The user interface is a mess of toolbars, menus, tabs, docked windows, popup windows of different sorts and gargantuan settings dialogs. Only after some time spent living in these strange environments is one expected to reap the benefits of working with a smart editor. In a way, the current crop of IDEs reminds me of the web just before Google appeared. Trying to find anything with the search engines of the time was a guarantee of wasted time and probable frustration. The situation was so bleak that most of those search engines gave up on being starting points for the web and were becoming "portals", so users could stand a chance of finding what they were looking for within the sections and categories and subcategories of specifically produced pieces of information. Then came Google, with the right technology to extract relevance from the same sea of data everyone else was floundering in.

Aside from the immensely better ranking of search results, another striking difference was Google's minimalistic approach to user interfaces, showing little more than a search box and a list of results. That's where I'd start my ideal environment, a visually simple application to help the programmer focus on reading and writing code. Reading and writing code, not just text, and to be able to help with that, the application would need the code navigation and refactoring tools found in today's IDEs. The question then is, can this be done without all the cruft?

I think it can, and the way to do it is the same one Google took to beat the competition: relevance and minimalism. All the housekeeping functionality we are accustomed to in current IDEs should be cast aside in favor of a search-centric interface. And there is plenty of housekeeping to get rid of: modules, windows, tabs, workspaces, perspectives, views, projects, buffers, editors, etc. As I envision it, a minimum productive environment would consist of a few tiled panes for editing code, with no tabs. The last part deserves a little explaining. The purpose of tabs is to show the currently opened files. My thinking is that whether a file is currently open or closed should be an implementation detail invisible to users. In order for this to work well, changes in files should be continuously saved and we should have access to a full version history (this is an old idea that is being picked up in Apple's latest OS).

As the ubiquitous file-tree view would be gone, all navigation would be done either directly — following a reference in a source file — or via search. The key to make this work is to go to extra lengths to perfect search relevance, perhaps taking contextual clues enriched by the abstract model of the code. For instance, a recently opened file that is close in the call graph to the file currently being edited would spring to the top of the results.
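To make the idea concrete, here is a toy sketch of that kind of relevance ranking: candidate files are scored by a plain text match, then boosted by how recently they were opened and how close they sit to the current file in the call graph. All the names and the scoring formula are invented for illustration; a real implementation would need a proper code model behind it.

```ruby
# Hypothetical search candidates: a text-match score plus two contextual clues.
Candidate = Struct.new(:path, :text_score, :minutes_since_opened, :call_graph_distance)

def relevance(c)
  recency   = 1.0 / (1 + c.minutes_since_opened)  # recently opened files score higher
  proximity = 1.0 / (1 + c.call_graph_distance)   # files close in the call graph score higher
  c.text_score * (1 + recency + proximity)        # contextual clues boost the raw text match
end

candidates = [
  Candidate.new("billing/invoice.rb", 0.9, 600, 5),
  Candidate.new("billing/tax.rb",     0.8,   3, 1),  # just opened, called by the current file
]

# tax.rb outranks invoice.rb despite the weaker text match, because of context.
ranked = candidates.sort_by { |c| -relevance(c) }
```

The point of the sketch is only that context can reorder otherwise similar text matches, which is what would let a tab-less, tree-less interface still feel navigable.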

This minimalism isn't just an exercise in modernistic aesthetics; the problems caused by the litany of housekeeping features go beyond visual clutter. Each of those features generates work for the user, who has to spend time organising his environment and then remember where everything is. This is the kind of work we should delegate to computers; the user should deal only with the content he is currently working on and ask the environment for whatever he needs next. Don Norman makes a much better argument than I ever could for search-based user interfaces in his column for the CACM.

Ok, so much for editing. What about the rest of the services of a modern IDE? Stuff like version-control integration, build systems, debuggers, test runners, console runners, diagram editors, kitchen-sink explorers, etc. My answer is that while many of these tools benefit from graphical user interfaces, they don't necessarily need to be in the same application as the editor. Some of them could maybe reuse some code from the IDE, but there is no need for them to share window space with the code.

So ends my wish-list for a future development environment, a fitting time to restate that this is not a prediction, as I see no movement in this direction. Quite the opposite, actually, as most IDEs keep sprawling into ever larger feature matrices.

Monday, August 01, 2011

Experience report: Ruby

For a long time I've been curious about how the supposed benefits and liabilities of programming in a dynamically typed language[1] actually play out in practice. I'm now getting a chance to find out, since I'm involved in a largish project using Ruby (lots of Rails plus some Sinatra and a couple of random daemons). My professional background has largely been in Java, but I spend a good chunk of my free time learning about different programming languages (and some PL theory), mostly on the static side of the fence. Anyway, here are some notes:

The big question is whether dynamic typing allows more bugs to pass through. My answer to this has to be put in context: we are a medium-sized team (between half a dozen and a dozen devs), all experienced in, and practicing, TDD. As with the rest of this blog post, I don't have hard data to show, but my impression is that the unit tests do indeed catch all the bugs that the Java type system would help to catch; with the caveat that it's often harder to pinpoint the source of a regression. I don't have much real experience in languages whose type systems help to enforce strong guarantees[2], but I would imagine they would catch a larger fraction of the bugs that are caught by the unit tests, while not really avoiding further bugs. The reason is twofold: firstly, in my experience, many of the bugs are found in the interaction of separate pieces of software (such as Javascript in the browser talking to the web server); secondly, even when the bug is located within process boundaries, it is more often related to a forgotten invariant than to breaking an established one. But that's all conjecture.

On the matter of productivity, the abstraction mechanisms offered by the Ruby language help to structure code and avoid repetition and that's certainly noticeable in comparison to Java (though, in my opinion, not in comparison to Scala, and probably also not in comparison to Smalltalk, ML, Haskell or Oz). That gain is offset to a point by the lack of refactoring tools. There are some who argue that those tools are made necessary by the verbosity of Java, and aren't needed in better languages. That's nonsense. If a language has an abstraction mechanism, that mechanism is used to define an abstraction and to use the defined abstraction elsewhere. If we then want to change the abstraction we must change code at the use sites as well, and that's where such tools can help. This is all so obvious that I find it almost silly to have to write it down.
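A trivial illustration of the point above, with names invented for the example: however terse the language, renaming an abstraction still forces edits at every use site, which is exactly the mechanical work a rename refactoring automates.

```ruby
class Cart
  def initialize(prices)
    @prices = prices
  end

  # Suppose we rename this to `total_with_tax`: every caller below
  # must change too, no matter how concise the definition is.
  def total
    @prices.sum
  end
end

# Two independent use sites; a rename refactoring would update both for us.
cart     = Cart.new([10, 20, 5])
subtotal = cart.total
receipt  = "Total: #{cart.total}"
```

Nothing about Java's verbosity is involved; the cost scales with the number of use sites, not with the syntax of the definition.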

Many of the abstraction gains in Ruby come from metaprogramming techniques. I'm not completely sold that they are necessary to achieve the level of abstraction attained, and I'm sure that they hinder readability. It's much easier to gain a footing in a library written with straightforward composition mechanisms (be it functions or classes and objects) than in a mess of Strings and calls to define_method and cousins.
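A small contrast of the two styles (the class and method names are made up): both versions define the same predicate methods, but only in the first can you grep for `paid?` and land on its definition.

```ruby
# Straightforward: every method name appears literally in the source.
class OrderExplicit
  def initialize(state)
    @state = state
  end

  def pending?; @state == :pending; end
  def paid?;    @state == :paid;    end
end

# Metaprogrammed: shorter, but the names `pending?` and `paid?` are
# assembled from strings and never appear as definitions anywhere.
class OrderMeta
  def initialize(state)
    @state = state
  end

  [:pending, :paid].each do |s|
    define_method("#{s}?") { @state == s }
  end
end
```

The behaviour is identical; the difference is entirely in how discoverable the definitions are to a reader (or to a code-navigation tool).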

While on the subject of abstraction, I have to comment on Rails. It's a mature web framework that does a masterful job of making the common cases easy. This is a much harder feat than it sounds, as we can glean from the failure of JSF, WebForms and similar attempts to abstract the web. The hidden cost of the bargain is when we get to the uncommon cases. It isn't so much that there are specific application features that are hard to code in Rails, though it happens, but that the structure of the code that Rails mandates sometimes isn't a good fit for the problem being solved.

I'm sounding a little negative, so let me balance it out by saying that I believe Ruby and Rails were as adequate choices for our project as any. The main reason is the availability of decent quality libraries and tools (refactoring and code navigation aside), especially on the often overlooked front of deployment and configuration management.

Apologies for an opinionated blog post.

[1] Unityped, for the pedantic.
[2] Such as Agda or Coq, or some styles of programming in ML, Haskell or Scala.