Saturday, October 13, 2007

Pondering

"Formal methods will never have any impact until they can be used by people that don’t understand them."
Tom Melham


"The original study that showed huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson and Grant (1968). They studied professional programmers with an average of 7 years' experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1, the ration of debugging times over 25 to 1, of program size 5 to 1, and of program execution speed about 10 to 1. They found no relationship between a programmer's amount of experience and code quality or productivity.

Although specific ratios such as 25 to 1 aren't particularly meaningful, more general statements such as "There are order-of-magnitude differences among programmers" are meaningful and have been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000)."


"If you look at the way software gets written in most organizations, it's almost as if they were deliberately trying to do things wrong. In a sense, they are. One of the defining qualities of organizations since there have been such a thing is to treat individuals as interchangeable parts. This works well for more parallelizable tasks, like fighting wars. For most of history a well-drilled army of professional soldiers could be counted on to beat an army of individual warriors, no matter how valorous. But having ideas is not very parallelizable. And that's what programs are: ideas."


"Software entities are more complex for their size than perhaps any other human construct because no two parts are alike (at least above the statement level). If they are, we make the two similar parts into a subroutine--open or closed. In this respect, software systems differ profoundly from computers, buildings, or automobiles, where repeated elements abound. [...]

The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence. For three centuries, mathematics and the physical sciences made great strides by constructing simplified models of complex phenomena, deriving properties from the models, and verifying those properties by experiment. This paradigm worked because the complexities ignored in the models were not the essential properties of the phenomena. It does not work when the complexities are the essence."
Fred Brooks
No Silver Bullet


"Architecture is intended to be facilitative, of course, in that a good architecture should enable developers to build applications quickly and easily, without having to spend significant amounts of time re-inventing similar infrastructure across multiple projects. [...]

But an architecture is also intended to be restrictive, in that it should channel software developers in a direction that leads to all of these successes, and away from potential decisions that would lead to problems later. In other words, as Microsoft's CLR architect Rico Mariani put it, a good architecture should enable developers to "fall into the pit of success", where if you just (to quote the proverbial surfer) "go with the flow", you make decisions that lead to all of those good qualities we just discussed."


"The more interesting your types get, the less fun it is to write them down!"


"If you don’t know the difference between a group, a ring, and a field, you have no business overloading operators.

Now I’m not saying that one has to take a course in abstract algebra before you can be a competent programmer. You don’t as long as the language you program in doesn’t support operator overloading (or as long as you’re wise enough not to use it if it does). However, since most programmers didn’t get beyond ODEs in college (if indeed they got that far–some of my comp sci friends struggled mightily with calculus and had to retake it repeatedly), one can’t responsibly design a language that requires mathematical sophistication in the 99th percentile for proper use."

Elliotte Rusty Harold
Operator Overloading: Trickier Than it Looks
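
To see the algebra point in code: a minimal C++ sketch, with invented Money and Path types that are not from Harold's article. Money under '+' behaves like the group readers expect; Path joining hijacks '+' and quietly drops commutativity.

    #include <iostream>
    #include <string>

    // Overloading '+' here is defensible: amounts of money under
    // addition form a group -- associative, commutative, with an
    // identity (0 cents) and an inverse for every value (negation).
    struct Money {
        long cents;
        Money operator+(Money rhs) const { return {cents + rhs.cents}; }
        Money operator-() const { return {-cents}; }
    };

    // Overloading '+' as path joining breaks the expectations the
    // symbol carries: p + q != q + p, and there is no identity or inverse.
    struct Path {
        std::string value;
        Path operator+(const Path& rhs) const { return {value + "/" + rhs.value}; }
    };

    int main() {
        Money a{150}, b{250};
        std::cout << (a + b).cents << "\n";   // 400, just like arithmetic
        Path p{"usr"}, q{"local"};
        std::cout << (p + q).value << "\n";   // "usr/local"
        std::cout << (q + p).value << "\n";   // "local/usr" -- not commutative
    }

The second overload is legal everywhere and reads nicely in isolation; the trouble only shows up when a reader applies the algebraic laws the symbol silently promises.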



"You used to start out in college with a course in data structures, with linked lists and hash tables and whatnot, with extensive use of pointers. Those courses were often used as weedout courses: they were so hard that anyone that couldn't handle the mental challenge of a CS degree would give up, which was a good thing, because if you thought pointers are hard, wait until you try to prove things about fixed point theory.

All the kids who did great in high school writing pong games in BASIC for their Apple II would get to college, take CompSci 101, a data structures course, and when they hit the pointers business their brains would just totally explode, and the next thing you knew, they were majoring in Political Science because law school seemed like a better idea. I've seen all kinds of figures for drop-out rates in CS and they're usually between 40% and 70%. The universities tend to see this as a waste; I think it's just a necessary culling of the people who aren't going to be happy or successful in programming careers."
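
For anyone who missed "the pointers business": it looked roughly like this hand-built singly linked list, sketched in C++ (the names are illustrative, not from any particular course).

    #include <iostream>

    // A node owns one value and a raw pointer to the next node.
    struct Node {
        int value;
        Node* next;
    };

    // Push a new node onto the front; 'head' is a reference to a
    // pointer so the caller's head actually moves.
    void push_front(Node*& head, int value) {
        head = new Node{value, head};
    }

    int main() {
        Node* head = nullptr;
        for (int i = 3; i >= 1; --i) push_front(head, i);

        // Traverse and free in one pass. The brain-exploding part:
        // 'cur' is deleted mid-walk, so its next pointer must be
        // saved before the delete.
        for (Node* cur = head; cur != nullptr; ) {
            Node* next = cur->next;
            std::cout << cur->value << " ";
            delete cur;
            cur = next;
        }
        std::cout << "\n";
    }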
