Get to the point

Dave Christiansen over at Information Technology Dark Side has a good graphic representing what he calls the FSOP Cycle (Flying by the Seat of Your Pants). The basic idea is that when smart people do good things, someone will try to reproduce their success by doing the same thing that worked the first time.

The problem with trying to do this is that every time someone tries to document a “successful process”, they always leave off the first step: get smart people working on the project.

Dave outlines several reasons why capital-P Process will never solve some problems. Joel Spolsky described this same issue in his Hitting the High Notes article when he wrote, “Five Antonio Salieris won’t produce Mozart’s Requiem. Ever. Not if they work for 100 years.”

So if Process can’t solve your problem, what will? According to Dave, it’s simple:

Put a smart PERSON in the driver’s seat, and let them find the way from where you are to where you want to be. It’s the only way to get there, because process will never get you there on its own.

I’ll assume that when Dave says “smart” he really means “someone good at solving the current problem”. I could have the world’s smartest accountant and I wouldn’t want him to remove my appendix. Okay, I don’t want anyone to remove my appendix. But if I needed it done, I’d probably go find a doctor to do it. So what Dave is saying is that you’d rather have someone who’s good at solving your current type of problem, than have Joe Random Guy trying to follow some checklist.

This isn’t a complete answer, though. Some songs don’t have any high notes.

In the middle of describing why you should choose the most powerful programming language, Paul Graham writes:

But plenty of projects are not demanding at all. Most programming probably consists of writing little glue programs, and for little glue programs you can use any language that you’re already familiar with and that has good libraries for whatever you need to do.

Trevor Blackwell suggests that while that may be changing, it was still true at least until recently:

Before the rise of the Web, I think only a very small minority of software contained complex algorithms: sorting is about as complex as it got.

If it’s true that most programming doesn’t require the most powerful language, it seems fair to say most programming doesn’t require the best programmers, either.

You might notice at this point (as I just did) that Dave wasn’t talking about programming, or at least not only programming. The same principle seems to hold, though: Average people can do the most common things with the most common tools. Exceptional circumstances require exceptional tools and/or exceptional people. If only there were a way to predict when there will be exceptions …

The other problem

But let’s say we’re looking at genuinely exceptional people who have done great work. Should we ask them how they did it? After all, they’re the experts.

Well, not really. They’re only experts on what they did, not why it worked. They might have simply guessed right. Even if they didn’t think they were guessing.

Need another example of false authority? Have you ever heard someone describe a car accident, and attribute their survival to not wearing a seatbelt?

First, they don’t know that. They believe it. They obviously didn’t do a controlled experiment where the only difference was the seatbelt. Second, even if they happen to be right in this case, statistics show that it’s much more common for the seatbelt to save you than harm you.

So if you want to design a repeatable process for creating software, you can’t do it by asking people who are good at creating software.

The other other problem

One change I’d make in Dave’s FSOP diagram is in the circle labeled “Process Becomes Painful”. It’s not that the Process changes. Really what’s happening is that the project runs into some problem that isn’t addressed by the current Process.

Every practice you can name was originally designed to solve a specific problem. On very large projects, there’s the potential to encounter lots of problems, so extensive Process can simultaneously prevent many of those problems from appearing.

But attempting to prevent every conceivable problem actually causes the problem of too much time spent on the Process instead of the project.

That’s where Agile comes in. It solves the problem of too much process. Do you currently suffer from too much process? Then you could incorporate some ideas from Agile.

But make a distinction between using ideas to deal with specific problems, and adopting a whole big-M “Methodology”. Once you adopt a Methodology designed to solve the problem of too much Process, you face the danger of having too little Process.

Your expert programmers may be the best in the world at the problem you hired them for. But now you want them to do something different. And you have no way to recognize that they’re not making progress, because you’ve eliminated all the governance that came with the heavyweight Methodology.

So what’s the point, anyway?

Process is not something you can measure on a linear scale, having “more process” or “less process”. Always adding practices — even “best” practices — to your current Process is simply trying to solve all possible problems, whether you’re currently having them or not.

For each practice, consider what it was designed to solve: what was the point of it to begin with? Then don’t treat these practices as a checklist that must be completed every time; follow the principles behind them instead.

There’s a line in the movie Dogma that I think describes this really well. Just substitute “process” for “idea” and “Methodology” for “belief”:

You can change an idea. Changing a belief is trickier. Life should be malleable and progressive; working from idea to idea permits that. Beliefs anchor you to certain points and limit growth; new ideas can’t generate. Life becomes stagnant.

What conclusion would you like me to reach?

Why is it that the rest of the world has managed just fine at estimating projects of many sizes and scopes, then sticking to those estimates, while IT screams “Oh, WE can’t do that!”?

I’ll admit a large reason is because lots of people in IT are really bad at estimating. But part of the reason we’ve managed to stay so bad at estimating, and the main reason all estimates seem to be so far off, is that the business side wants the estimates to be low.

The common complaint is that IT projects are “always over time and over budget”. Since the IT budget (for software) is almost entirely salary, time and budget are synonymous. When you set the budget, you’ve just set the time.
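
To make that concrete, here’s a back-of-the-envelope sketch. The team size and loaded cost are made-up numbers, not figures from any real project, but the relationship holds regardless:

```python
# Hedged illustration: if the software budget is essentially salary,
# fixing the budget fixes the schedule. All numbers are hypothetical.

budget = 200_000          # dollars approved for the project
team_size = 4             # developers assigned
loaded_cost = 10_000      # dollars per developer per month (salary plus overhead)

burn_rate = team_size * loaded_cost       # dollars spent per month
months_available = budget / burn_rate     # schedule implied by the budget

print(f"Burn rate: ${burn_rate:,.0f}/month")
print(f"Implied schedule: {months_available:.1f} months")
# Burn rate: $40,000/month
# Implied schedule: 5.0 months
```

Divide the budget by the burn rate and the “estimate” is already decided, whatever the developers say.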

But most IT projects — and nearly every one I’ve worked on — have a budget set before the detailed design is ever done, or at least the clients have an idea in mind of what they’d like it to cost. So the clients often set the time estimate, without realizing they’re doing so, before they’ve ever talked to the developers.

I had to help out some less-experienced people who were being asked for estimates. I told them the trick is to figure out a diplomatic way to ask, “What number would you like me to say?”

The funny thing is that this is not just being cynical, either. There’s real value to it. If you’re thinking four months and the client says three days, there’s a good chance you don’t have the same idea in mind for what you plan to do.

For instance, they ask you to write a search engine “like Google” to put on your intranet. You’re thinking a couple of months. They’re thinking end of the week.

It turns out they want you to buy a Google search appliance and integrate it into the intranet. To the client, “write,” “create,” “implement,” “install,” and all those other words we use that mean different things … all mean the same thing: work that the IT guys do.

So if you want to get the work — or if you’re an employee and have no choice in the matter — see what number they have in mind already and tell them what they can have in that length of time. If you don’t promise to deliver something in the given time frame, they’ll go find someone who will. Not that they’ll deliver in that time, but they’ll promise to deliver in that time.

Compare this to construction. The client may get five quotes for pouring concrete. If four of the bids are close to $50k, but one of them is $20k, the client will likely assume the low bid is unrealistic and choose among the remaining contractors.

But if a programmer or independent software vendor says some work will cost $50k, the client can find someone who will promise to deliver it for $20k. The client will either accept the lower bid, or use it to negotiate the first vendor down to $25k. When it ends up costing $50k anyway, that project goes in the books as “over time and over budget”.
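
A quick sketch with hypothetical figures shows why the same $50k of work gets recorded so differently depending on which promise it’s measured against:

```python
# Hypothetical numbers: the work genuinely costs $50k either way.
actual_cost = 50_000

honest_bid = 50_000       # the realistic quote
negotiated_bid = 25_000   # the low-ball promise the client accepted

def overrun(promise, actual):
    """Percent over budget, relative to what was promised."""
    return (actual - promise) / promise * 100

print(f"Against the honest bid:     {overrun(honest_bid, actual_cost):.0f}% over budget")
print(f"Against the negotiated bid: {overrun(negotiated_bid, actual_cost):.0f}% over budget")
# Against the honest bid:     0% over budget
# Against the negotiated bid: 100% over budget
```

The cost didn’t change; only the promise did.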

What would it look like if construction bids were awarded the same way? If for instance the client were required by law to select the lowest bid? Then contractors would low-ball every bid to get the contract. You’d end up with construction that always ran over time and over budget. You would need an endless supply of money to stay in business. You would need to be … the government.

Installing software is not worth my time

My PC is managed according to corporate standards. I can’t install anything without an administrator signing on and authorizing it. I asked if I could install the software to link to my cell phone and load my contacts into it. Otherwise I’d be spending a couple of hours over the next week manually entering them all in via the keypad.

The day after I put the ticket in, someone called up and asked where the software was. I told him I had it on a CD. The local support guy came up the next morning, saw that it was an OEM disk and not some random thing I’d burned, and logged in as administrator so I could install it.

Sure, it was two days before I got what I needed, but it wasn’t a compelling gotta-have-it-today issue.

Everything else on here is available as a network install, and is set up in my profile. When I get a new PC — I’m due to be refreshed in the next month or so — I’ll go to a single application, check the boxes for everything I need, and go to lunch. When I come back, everything will be installed.

Every application that I install myself I’d have to reinstall when I get a new PC. How many hours would that take? Multiply that by the number of users in my office, which just relocated earlier this year. Most of us didn’t take the hardware from the old location, we just came to blank systems at our new desks and kicked off the install process.
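
Just as a rough illustration, with numbers I’m inventing for the sake of the arithmetic rather than pulling from the actual relocation:

```python
# Hypothetical figures for the office relocation scenario.
hours_per_user = 3        # time to manually reinstall one person's applications
users_in_office = 80      # people who came to blank systems at the new desks
loaded_hourly_rate = 60   # dollars per hour, salary plus overhead

total_hours = hours_per_user * users_in_office
payroll_cost = total_hours * loaded_hourly_rate

print(f"{total_hours} hours of reinstalling, roughly ${payroll_cost:,.0f} in payroll")
# 240 hours of reinstalling, roughly $14,400 in payroll
```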

If I’m paying for it, it is not worth my time to install software. If my alternatives are to load all my apps on a new PC I’ve just bought, or to work a billable hour and pay someone else to do the installs, I’ll pay the Geek Squad to click OK and reboot 14 times. Why should I expect the company I work for, who owns the PC I’m working on, to make a different decision?

The program is not the product

Managers want programs to be like the output of a factory. Install the right robots and tooling, start the process, and good software comes out the end.

WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG!!!!!! (Can you tell I disagree?)

Nearly everyone who makes the factory analogy misses a very fundamental point: the program is not the product. For one thing, unless you work for a software company selling shrink-wrapped software products, you aren’t ever selling the program. Conventional wisdom says most programmers actually work for internal IT, so it’s safe to say most programs are never sold.

Businesses don’t want programs, they want credit reports … and loan contracts … and title searches … and purchase orders … and claim forms …

So what is the right analogy? The program is the assembly line! So measuring bugs in the program is wrong. Even comparing compiling to manufacturing — which is still better than comparing programming to manufacturing — is wrong. What you should be looking at is the output produced by the program. Is your program a website? Measure the number of pages it can produce per unit time, without error. Is it an editor? Measure the number of pages it can spell-check per unit time, and with what accuracy.
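
As a rough sketch of what measuring the output might look like, here’s a hypothetical harness. The page-producing function is a stand-in, not any real product, but the idea is to count what comes off the assembly line and how much of it is defective:

```python
import time

def measure_output(produce_page, requests, time_budget_seconds):
    """Count how many pages a program produces within a time budget, and how many are wrong.

    `produce_page` is a hypothetical stand-in for whatever the program does:
    render a web page, spell-check a document, print a loan contract.
    """
    produced = 0
    defective = 0
    deadline = time.monotonic() + time_budget_seconds
    for request in requests:
        if time.monotonic() >= deadline:
            break
        try:
            page = produce_page(request)
            produced += 1
            if not page:          # stand-in check for "the output is wrong"
                defective += 1
        except Exception:
            defective += 1
    return produced, defective

# Usage: judge the assembly line by what comes off the end of it.
pages, bad = measure_output(lambda r: f"<html>{r}</html>", range(10_000), 1.0)
print(f"{pages} pages produced within the time budget, {bad} defective")
```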

When measuring the output of a manufacturing process, you literally shouldn’t care what the process looks like, nor what tools are used, so long as it consistently produces the same output. This is not to say the process and tools don’t matter. A bad process may be prone to unexpected failure. Tools may be harder to maintain or have a shorter service life. You may be locked into a service contract with the manufacturer. And coincidentally [ahem] all these factors apply to software.

So yes, programming can be compared to manufacturing. As long as you remember that the program is not the product, the program is the assembly line.

Can the FSF “Ban” Novell from selling Linux?

Novell Could Be Banned From Selling Linux: Group Claims

BOSTON – The Free Software Foundation is reviewing Novell Inc.’s right to sell new versions of Linux operating system software after the open-source community criticized Novell for teaming up with Microsoft Corp.

The problem is that the FSF wants all code to be free. Period.

That’s their preference, yes.

They want to make the GPL so darned viral that no one can include any copyrighted or patented components. Period.

No, they want all the components on which they hold the copyrights to be protected by those copyrights. And they want those components to be freely available to anyone who agrees to make their modifications available under the same terms.

You can’t modify and distribute Microsoft’s code without permission. You can’t modify and distribute GPL code without permission.
The way you get permission to distribute Microsoft’s code is to pay them a lot of money, or cross-license your own code. The way you get permission to distribute GPL code is to release your modifications under the GPL.
Microsoft can destroy your business model by bundling a version of what you make. GPL-using authors can destroy your business model by releasing a free version of what you make.
If you don’t want to be bound by Microsoft’s terms, write your own code. If you don’t want to be bound by the GPL, write your own code.

So how is the GPL “viral,” while Microsoft is just doing business?

How can the FSF “ban” Novell from selling “Linux” when Linux itself is not wholly licensed under the GPL and not wholly owned by the FSF? Sure, there are many GPL components within the typical Linux distro, but not all of them have to be.

According to Answers.com:

More Than a Gigabuck: Estimating GNU/Linux’s Size, a 2001 study of Red Hat Linux 7.1, found that this distribution contained 30 million source lines of code. … Slightly over half of all lines of code were licensed under the GPL. The Linux kernel was 2.4 million lines of code, or 8% of the total.

So the first point is that no, the FSF cannot ban Novell from selling a GNU/Linux-based distribution, as long as all the current license terms are followed.

However, the holder of the Linux trademark, Linus Torvalds, could choose to prohibit them from using that mark to describe what they’re selling. (See Microsoft / Sun / Java™.) Though I haven’t seen anything suggesting he plans to do so.

Next, the Linux kernel is covered under the GPL, so even if the FSF doesn’t hold the copyright, it’s entirely possible the kernel authors could ask the FSF to pursue any violations on their behalf. And I suspect Stallman and Moglen would be more than happy to do so.

The bottom line, I think, is that business people who don’t understand the technicalities will either see a deal with Microsoft as a reason to choose Novell for any Linux plans, or they will see the controversy as a reason to avoid Linux plans altogether. Either conclusion benefits Microsoft.

People who do understand the details will see that Novell offers them a conditional, time-limited right to use a specific version of Linux, which may or may not interoperate better with Windows systems, which can be effectively “end-of-lifed” at any time by Microsoft.

And this is bad why?

If you try hard enough, I suppose it’s possible to spin anything into an attack on your pet target. But the consistency with which Neil McAllister sounds the call of doom and gloom for all things open source is really quite astonishing. Especially when you consider he writes the Open Enterprise column for Infoworld.

Take his January 29th column about the formation of the Linux Foundation for example:

On the surface, the union of Open Source Development Labs (OSDL) and the Free Standards Group (FSG) seems like a natural fit. Open standards and open source software are two great ideas that go great together.

But wouldn’t it make more sense to call the merged organization the Open Source and Standards Lab, or the Free Software and Standards Group? Why did they have to go and call it the Linux Foundation?

On the one hand, it seems a shame that the group should narrow the scope of its activities to focus on a single project. Linux may be the open source poster child du jour, but it’s hardly the only worthwhile project around.

If Neil had bothered to read his own magazine’s newsletter the previous week, he would have known that:

With Linux now an established operating system presence for embedded, desktop and server systems, the primary evangelizing mission that the OSDL and FSG embarked upon in 2000 has come to an end, Zemlin said. The focus for the foundation going forward is on what the organization can do to help the Linux community more effectively compete with its primary operating system rival Microsoft.

The combination of the two Linux consortiums was “inevitable,” said Michael Goulde, senior analyst with Forrester Research. “The challenge Linux faces is the same one Unix faced and failed — how to become a single standard.”

So what’s wrong with focusing on Linux, anyway?

But then again, maybe it’s not so strange — not if you conclude that the Linux Foundation isn’t any kind of philanthropic foundation at all. It’s an industry trade organization, the likes of which we’ve seen countless times before. Judging by its charter, its true goal is little more than plain, old-fashioned corporate marketing.

As such, the Linux Foundation is a unique kind of hybrid organization, all right — but it’s not the union of open source and open standards that make it one. Rather, it stands as an example of how to combine open source with all the worst aspects of the proprietary commercial software industry. How noble.

This is really amazing. No one ever claimed that the partners in this merger were anything other than industry trade organizations, but the fact that the new foundation will continue the work of its members is somehow un-noble. And nobility is the standard by which we should judge those who are trying to make Linux more competitive in the market.

His grammar and spelling may be better than that of the stereotypical Linux fanboys, who famously attack less-rabid supporters for their lack of purity. Or maybe he just has a better editor. But all the craft in the world doesn’t disguise the fact that Neil’s opinions are rarely more useful than the ramblings of an anonymous Usenet troll.

Changing your development platform

There are certain milestones in the life of a product when developers are free to ask if it’s time to change the platform it’s developed on. Typically you’ve shipped a major version and gone into maintenance mode. Planning has started for the next version, and you wonder if you should stick with what you’ve got or if, knowing what you know now, it might be better to switch from .NET to PHP, or from PHP to Java.

You might think that checking Netcraft would be a good idea. You can see if your current platform is gaining or losing market share, and who doesn’t like market share? If you look at the latest chart you’ll see that Microsoft is gaining on Apache.

But keep in mind that while Apache’s market share has gone down marginally, the total number of sites has still gone up. Most of Microsoft’s gain is from new sites, not from existing sites switching. (The exception being large site-parking operations switching to IIS.)
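
Here’s a tiny sketch with invented numbers (not actual Netcraft figures) showing how a platform’s share can fall even while its absolute count of sites keeps growing:

```python
# Invented figures, not real Netcraft data: share can fall while the
# absolute count of sites still grows, because the whole pie grew faster.
before_total, before_apache = 80_000_000, 48_000_000    # 60% share
after_total,  after_apache  = 110_000_000, 62_000_000   # about 56% share

share_before = before_apache / before_total * 100
share_after  = after_apache / after_total * 100
sites_gained = after_apache - before_apache

print(f"Share: {share_before:.0f}% -> {share_after:.1f}%")
print(f"Yet the platform still gained {sites_gained:,} sites")
# Share: 60% -> 56.4%
# Yet the platform still gained 14,000,000 sites
```

The chart shows the ratio, not the growth, which is why “losing share” and “gaining sites” happily coexist.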

But really the important question is whether your preferred platform faces a reasonable possibility of becoming obsolete/unsupported. This is actually one place where the Unix world’s slower upgrade cycles help. You rarely have applications “sunsetted” by the manufacturer.

Am I arguing in favor of dropping .NET? Not at all. I think you should use what works for you. What I’m saying is that unless your chosen platform is in danger of becoming unsupported, and that would actually cause a problem for you, looking at market share charts should never convince you to switch.

Now if you hadn’t already chosen a platform, and you wanted to know what platform had a larger market, then you’d care about market share. But that’s a subject for another post.

Geeks still don’t know what normal people want

If you listen to geeks, locking out development of third-party applications will doom the iPhone in the market. But remember the now-famous review when the iPod was released:

No wireless. Less space than a nomad. Lame.

The market quickly decided they didn’t care about wireless and bought the things in droves. And current versions have more space than the Nomad did when the iPod came out. Now that the iPhone has been shown, geeks are again claiming that it’s going to fail. This time because it’s not going to be open to third-party applications.

Apple doesn’t care if you can extend it because they believe their target customer doesn’t want it extended. They want something that works well, the same way, every time. The iPod wins because it does pretty much what people want, close enough to how they want, without making them think about how to do it.

The iPhone may not be open to developers, but it’s upgradable. When Apple finishes writing software to make the Wi-Fi automatically pick up a hotspot and act as a VoIP phone, that functionality can be rolled out transparently. First-gen iPhones will become second-gen iPhones without the users having to do anything.

The upgrade path will be to higher HD capacity, so people can carry more movies with them. I see these things as hugely popular for people who take trains to work. If I could take a train where I work now, I’d already be on a waiting list for an iPhone.

Meet the new boss, same as the old boss v2

Sometimes you read something that you can’t summarize without losing a lot. I just can’t find any extra words in this post, so here it is in its entirety:

1) Whatever language is currently popular will be the target of dislike for novel and marginal languages.

2) Substitute technology or methodology for language in #1. In the case of methodology, it seems a straw man suffices.

3) Advocates will point to the success of toy projects to support claims for their language/methodology/technology (LMT).

4) Eventually either scale matters or nothing matters. Success brings scale. An LMT is worthy of consideration only after proving out at scale.

5) Feature velocity matters in early stage Web 2.0 startups with hyperbolic time to market, but that is only a popular topic on the Web for the same reason Hollywood loves to hand out Oscars.

6) Industry success brings baggage. Purity is the sign of an unpopular LMT. The volume of participants alone will otherwise muddy the water.

7) Popularity invites scrutiny. Being unfairly blamed for project failure signals a maturing LMT; unfairly claiming success, immature LMT. Advocates rarely spend much time differentiating success factors.

8 ) You can tell whether a LMT is mature by whether it is easier to find a practitioner or a consultant. Or by whether there is more software written *with* or prose written *about* the LMT.

9) If you stick around the industry long enough, the tech refresh cycle will repeat with different terminology and personalities. The neophytes trying to make their bones will accuse the old guard of being unable to adapt, when really we just don’t want to stay on this treadmill. That’s why making statements like “Java is the new COBOL” are ironic; given time, “N+1 is the new N” for all values of N. It’s the same playbook, every time — but as Harlan Ellison said of fiction, every story has already been told, but nobody was listening the first time.

10) Per #9, I could have written this same post, with little alteration, ten, twenty or thirty years ago. It seems to take ten years of practise to truly understand the value of any LMT. Early adopters do play the important role of exploring all the dead ends and limitations, at their cost. It’s cheaper to watch other people fail, just like it hurts less to watch other people get injured.

11) Lisp is older than I am. There’s a big difference between novel and marginal, although the marginal LMTs try to appear novel by inserting themselves into every tech refresh cycle. Disco will rise again!

12) If an LMT is truly essential, learning it is eventually involuntary. Early adopters assume high risks; on the plus side they generate a lot of fodder for blogs, books, courses and conferences.

13) I wonder if I can get rich writing a book called Agile Lisp for Web 2.0 SOA. At least the consulting and course revenue would be sweet. Maybe I can buy an island. Or at least afford the mortgage payments on a small semi-detached bungalow in the Bay area.

14) It requires support from a major industry player to bootstrap any novel LMT into popularity. The marginal LMTs often are good or even great, but lack sponsors.

15) C/C++ remain fundamental for historical reasons. C is a good compromise between portability and performance — in fact, a C compiler creates more optimal code than humans on modern machine architectures. Even if not using C/C++ for implementation, most advocates of new languages must at least acknowledge how much heavy lifting C/C++ does for them.

16) Ditto with Agile and every preceding iterative methodology. Winding the clock back to waterfall is cheating. I’m more sophisticated than a neanderthal, but that won’t work as a pick up line.

17) Per #13, I don’t think so, because writing this post was already a chore, let alone expanding the material to book length. Me an Yegge both need a good editor.

This covers the technology pretty well. All he left out was the reason so much is coming back.