Pay the man

IT people are frequently highly educated, with extensive formal and on-the-job training. And we all, if you look at our resumés, think that we're fast learners. That's probably because everything we work with keeps changing every couple of years, so anyone who's been doing this for very long has learned multiple generations of tools. Many of our jobs also require us to be generalists, with a broad range of knowledge across multiple unrelated fields.

It’s probably not surprising, then, that we tend to be DIY-ers. Never changed a light fixture? No problem. Give me a few minutes with a book and I’ll know enough to do it. House needs painting? Heck, I’ve always wanted an excuse to go get one of those power sprayers, I’m on it! That’s why we’re shocked to hear how much people pay to have someone do work that, after all, we could do ourselves with little or no training.

That was my frame of mind when I had to replace the shower door. The frame was mounted on tiled walls. I only cracked two of the tiles a little bit trying to get the old frame off, and lifted about a dozen away from the wall. No problem, just ran to the hardware store for some tile adhesive. And I only put the adhesive on a little too thick, so two of the tiles fell off the next day when I started mounting the frame. And I only cracked one more because I was unfamiliar with the mounting hardware.

I had to remove all the tiles and start over because the adhesive was actually nowhere near dry. This time I wanted to make sure it dried all the way, because I wasn't completely sure I'd done it right. When I tried again three days later, only one tile fell off, because I had gone too thin with the adhesive. After waiting a day for the grout on the rest to dry, I was able to scrape that space out, get the last tile up, and grout it. The caulk and grout I used to patch the cracks look mostly okay … for now … while they're still white.

All in all, it only took me a week and a half to hang that door. And the cracked and patched tiles will probably still look good when I go to sell the house. (At least I hope they will; the color was discontinued years ago, so I’d have to re-tile the whole damn bathroom otherwise.) I’m so glad I didn’t pay a hundred bucks to some barely-trained tradesman to do it for me.

Lipstick on a pig

If you've ever seen one of my project plans, there's a chance you've seen a task at the end that says "Add pretty". With good use of stylesheets, you can radically improve — or damage — the look of a website even after all the coding and most of the testing are done. A different person or group with a different skill set can take over from the programmers and work some magic with little interaction.

You might think, based on this, that other parts of development can be pushed to the end after “real” development is done. You’ll know someone was thinking that when you see a task late in a project plan that says “Add fast”. This is usually a sign of excessive specialization. People think that they just have to get the user interaction right and leave performance tuning to someone else.

I suppose I can live with the idea that there will be some performance tuning that’s best done once everything else is complete. And on some projects just throwing more hardware at the problem is cheaper than a programmer’s time to fix it. But actually improving the performance of an application is hard, and the changes pervasive.

Another side-effect of excessive specialization, one that always raises the brown flag, is when I see “Add security” at the end of a plan. It’s simply inexperience that allows anyone to think they can graft a security model onto a codebase after the fact without significant amounts of rewriting.

“But this is a quick hack, and we only need the numbers for this one meeting.” Sure, a report you’ll only ever need once. I guess such a thing could exist, but I’ve never seen it. In the first place, nothing lasts as long as a temporary fix that works well enough. And in the second place, many (most?) large, successful products started out as small, successful products.

End/begin dependencies look really great on a Gantt chart. Activities that invite and incorporate feedback don’t look so neat and clean. Treating security as something that can happen to a product after it’s already done is no better than … well, see the title of this post.

Design = function + aesthetics

Ask your local programmer if he knows how to design user interfaces and invariably he’ll say he does. Go ahead, ask. I’ll wait.

You’re back? Good. Now go look at the new iPhone. Has your guy ever made anything remotely that cool? Unless you’re reading this from Cupertino, odds are he hasn’t. The UI is more beautiful and, as near as I can tell from the demo movies, more usable than any other phone or music player I’ve seen. But I wonder, how much of the perceived usability is a response to the beauty?

It's becoming conventional wisdom that you don't want to make the demo look done. Excessive visual polish early in the process not only limits the feedback you get to comments about the superficial details, it also suggests equally finished interaction with the system. It literally makes the system look like it's doing more than it really is.

I’ve avoided this problem in my career by not being very good at graphics, and avoided realizing that by not working with any real visual artists to compare my work to. Yes, I used to think I was good at it, just like every programmer. Eventually I realized that consistency and predictability were a poor subset of what an artist can add.

Now, whenever I make up a project plan, there is a task at the end for "Add pretty". And my name isn't on that task.

The Digital Dark Ages

I’ve been paying my mortgage for about three years now. Unless I change something, I’m going to keep paying on it for another 27 years. I try not to think about the fact that although I have an actual physical copy of the mortgage agreement, with real pen-and-ink signatures, I don’t have any proof that I’ve ever made a payment.

At the risk of sounding like a Luddite, it bothers me that I have to trust the bank’s computer system to keep track of all 360 payments I’ll have made by the time it’s over. I’m not just being paranoid. I had an issue where a bank said my wife still owed money on a loan we had paid off three years earlier. We didn’t have anything in writing for each payment. The bank couldn’t even tell us the history of the loan; just that the computer showed we still owed money. And if a bank says you owe money, unless your lawyers are bigger than their lawyers, then you owe them money.

If you go to museums, you'll see ledgers from banks in the 1800s and earlier. Over two hundred years later, we still know who paid their bills and when. But five years in the past … it doesn't exist.

This could change with new regulations and retention requirements. But the big difference is what is standard vs. what you have to work at. A hundred years ago everything was written down. If you wanted to get rid of records you had to make an effort to identify what you wanted to delete, somehow separate it from the rest, and physically destroy it. Today, we only keep data as long as we have to. We only bother with long-term storage when the law or financial necessity makes us.

Let’s assume we have some data that we really want to keep “forever”. What is that going to take?

First, you’ll want to store it on something that doesn’t degrade quickly. Burning it to a CD or DVD seems to offer better longevity than VHS. Well, maybe. Second, you want to store it in a format that you’ll be able to read when you want to. This might be a harder problem than the physical longevity, when you start to consider how much data goes into a modern file format.

Look at the problem from the user's perspective: a document format (the same applies to music and video) is just a way of saving a document so that it can be opened and look the same way at a later time, maybe on the same computer, maybe not. When Word 97 handles table formatting and text reflow around images a certain way, for instance, the document format has a way of capturing the choices the user made.

If I open that Word 97 document in Word 2003, either the tables, text, and images look the same or they don't. If they look the same, it's because there's an import filter that understands what the old format means, and Word 2003 has a way of representing the same layout. If I then save it as Word 2003, the specific way of representing the layout changes, but the user neither sees nor cares about the difference.

If, on the other hand, that Word 97 document doesn't look the same in Word 2003, it really doesn't matter to the user whether the problem is a bad import filter or that Word 2003 doesn't support the same features as Word 97. (Maybe they used flame text.) So a format that technically captures all the information needed to exactly recreate a document is utterly useless without something that can render it the same way.
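
Since the whole argument rests on what an import filter does, here is a minimal sketch of one in Python. Everything in it is hypothetical: the dictionary layout, the field names, and the flame-text feature are invented for illustration, not taken from any real Word format.

    def import_v1(doc_v1):
        """Translate a made-up v1 document into a made-up v2 representation."""
        doc_v2 = {"version": 2, "paragraphs": [], "warnings": []}
        for para in doc_v1["paragraphs"]:
            # Features both formats support translate cleanly.
            doc_v2["paragraphs"].append({
                "text": para["text"],
                "style": para.get("style", "body"),
            })
            # Features the new format never supported must be approximated or
            # dropped, and that is exactly where the user sees the change.
            if para.get("flame_text"):
                doc_v2["warnings"].append("flame text dropped; rendered plain")
        return doc_v2

    old_doc = {"paragraphs": [{"text": "Hello", "flame_text": True}]}
    print(import_v1(old_doc)["warnings"])  # ['flame text dropped; rendered plain']

The filter can carry over every byte of information and the user can still lose, because fidelity also depends on the new program being able to render what was imported.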

Okay, so we need long-term media, and we need to choose a format that is popular enough that there will still be import filters for it in the foreseeable future. Eventually we’ll still reach the end of those paths. Either the disks will degrade, or the file format will be so out of date that no one makes import filters any more. When that happens, the only way to keep our data will be to copy it to new media, and potentially in a new format.

What should that format look like? We’ve already got PDF, which is based on how something looks in print. We’ve got various audio and video formats, which deal with playing an uninterrupted stream. But what about interactive/animated documents designed for online viewing?

Believe it or not, I’m going to suggest a Microsoft solution, though it’s one they haven’t thought to apply this way: PowerPoint. Today nearly everyone has a viewer, but not so long ago most of the slideshows I got were executables. If you had PowerPoint installed you could open the executable and edit the slideshow the same way you can edit a PDF if you have Acrobat.

As much as people complain about the bloat that Word adds to simple files, I think the future of file distribution will be to package the viewer along with the file. At some point storage becomes cheaper than the hassle of constantly updating all those obsolete file formats. The only question is how low a level the viewers will be written to: OS family, processor architecture, anything that runs C, etc.
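
As a sketch of what packaging the viewer along with the file might look like, here is a Python toy that writes a tiny viewer program and the document into a single archive the interpreter can run directly. The file names are invented for the example, and a real archival scheme would also have to pin down the interpreter itself.

    import zipfile

    # The "viewer": a script that reads the document back out of the very
    # archive it is running from.
    VIEWER = """\
    import sys, zipfile

    with zipfile.ZipFile(sys.argv[0]) as archive:
        print(archive.read("letter.txt").decode("utf-8"))
    """

    # Bundle the viewer and the document into one self-contained file.
    with zipfile.ZipFile("letter.pyz", "w") as bundle:
        bundle.writestr("__main__.py", VIEWER)  # Python runs this automatically
        bundle.writestr("letter.txt", "Dear future reader ...\n")

    # Anyone with an interpreter can now open the document: python letter.pyz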

Meet the new boss, same as the old boss

In case you haven’t noticed yet, we’re going through another round of power struggles in the IT industry. Oh, that might not look like what’s going on. On the surface what people are saying is that it’s a matter of web-based vs. desktop applications. Frequently these conversations are based on the premise that it’s a discussion of the technical merits.

Nope. It’s the return of the glass house. Peel back all the rationalizations about easier deployment, easier support, more consistency, and what it really comes down to is more control. If we can just keep the software out of the users’ hands then everything will be okay.

But what history shows us is that users like having control of their “stuff”. Taking that control away requires either redefining “their stuff” to be “our stuff”, or convincing them that they aren’t qualified to handle their stuff.

Is this what your customers are hearing from you?

The War on Laundry™

Let’s see:

  • Not a finite thing that can be destroyed, nor a group that can be defeated.
  • No one qualified to declare surrender for it.
  • There are better and worse ways to deal with it, none of which are able to completely eliminate it.
  • No matter how much you fight it, there will always be more soon.
  • No one really likes it, but the only way to avoid it is to change your lifestyle so profoundly that the alternative is worse.

Hmm, sounds about right.

Any relation to other Wars on Nouns is completely intentional.

The day I got a lot smarter

One sign of intelligence is the ability to learn from your mistakes. An even better sign is the ability to learn from someone else’s mistakes. Unfortunately, we don’t always have the luxury of watching someone else learn a valuable lesson, and we have to do it ourselves. But if we pay attention, sometimes we get to learn multiple lessons from one mistake. (Lucky us.)

Case in point: Dealing with a crisis. I was managing a group of web developers, and the project lead on an integration with our largest client was going on vacation. He assured me his backup was fully trained, and would be able to deal with any issues. He left on Friday, and we deployed some new code on Monday. Everything looked good.

Time passes …

On Wednesday at about 4 p.m., we got a call asking about an order. We couldn’t find it in our system. From what we could tell, the branch that placed the order wasn’t set up to use our system yet, so we shouldn’t have the order. At 5 I let the backup go home for the day while I worked on writing up what we’d found. I sent an internal email explaining what I believed had happened. I said that I would call the client and explain why we didn’t have the order, and that they should check their old system.

While double-checking the deployment plan, I discovered that the new branch actually was on our new system … as of that Monday. That’s part of what was included in the new code. That’s when I got the shiver down my spine. By that time the backup, whose house was conveniently in a patch of bad cell coverage, was gone. The lead was on vacation. “Okay,” I thought, “I’ve seen most of this code, in fact I’ve written a good bit of it. I can figure this out.”

Stop laughing. It sounded good at the time.

To make a long story short (Too late!), we hadn't been accepting orders from several branches for three days, but had been returning confirmations for them. It was somewhere around 3 a.m. when I finally thought I knew exactly how many orders we had dropped, though I hadn't yet found the actual bug in the code. I created a spreadsheet with the list of affected orders. At one point I used Excel's drag-to-copy feature to fill a range of cells with the branch number for a set of orders.

Did you know Excel will automatically increment a number if you drag to copy? Yes, I know that too. At 11:30 in the morning, I know it. At 3 a.m. that night, I apparently didn't. So I sent it to the client with non-existent branch numbers that I hadn't double-checked. "Oops" apparently doesn't quite cover it.
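
For anyone who wants the failure in miniature, here is a toy model in Python of the two fill behaviours. The branch number is made up, and this sketches the behaviour as I hit it that night, not Excel's actual fill rules, which depend on the cell contents and modifier keys.

    def fill_down(first_value, rows, as_series):
        """Simulate dragging a cell downward: plain copy vs. fill series."""
        if as_series:
            return [first_value + i for i in range(rows)]  # 4012, 4013, 4014, ...
        return [first_value] * rows                        # 4012, 4012, 4012, ...

    branch = 4012  # hypothetical branch number shared by a block of orders
    print(fill_down(branch, 5, as_series=False))  # what I meant to paste
    print(fill_down(branch, 5, as_series=True))   # what went to the client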

The reveal

The next morning on a conference call with the client, my boss, his boss, and several other people, we were going over the spreadsheet when someone noticed the problem. To me, it seemed obvious that it was a simple cut-and-paste error on the spreadsheet. But someone — a co-worker, believe it or not — decided to ask, “Are you sure? Because I don’t see those other two branches on here either.” After dumbly admitting that I didn’t know anything about any other two branches, I ended the call so I could go figure out what was happening.

Now I had apparently demonstrated that I didn't actually know what was wrong, that I had no idea of its scope, and that I was trying to cover it up. Yay me. We called in the lead (whose vacation was at home doing renovations) and started going through the code. I finally found the cause of the error, and it produced exactly the list of affected orders I had sent out early that morning, except for the cut-and-paste error. The "other two branches" turned out to be from the previous night's email, where I had specifically said those branches were not affected by the problem.

Within two hours, we had the code fixed and all the orders recovered. So everyone’s happy, right? If you think so, then you haven’t yet learned the lessons I did that day.

  1. No matter how urgently someone says they need an answer, the wrong answer won’t help.
  2. If it looks like the wrong answer, it might as well be the wrong answer. This doesn’t mean counter-intuitive answers can’t be right. It means that presentation and the ability to support your conclusion count.
  3. If you didn’t create the problem, always give the person who did the first chance to fix it.
  4. If someone knows more about a topic than you do, have them check your work.
  5. Don’t make important decisions on too little sleep.
  6. Before making a presentation to a client, review the materials with your co-workers.
  7. Don’t make important changes when key people are unavailable.

Looking at that list, I realize I already knew several of those lessons. So why did it take that incident to “learn” them? Because there’s a difference between knowing something, and believing it.

When design is not design

“How is software production like the car industry?”

Oh no, not again. Yeah, well, most people are getting it wrong. So here’s another shot at it.

There are aspects of car design that strictly deal with measurable quality: performance of the electrical system, horsepower, fuel economy, reliability. But the shape and style of the car are much more loosely coupled to hard-and-fast measurements. That facet of the design — the way it looks, the demographic it will appeal to — is not amenable to Six Sigma processes.

Granted, there are some cars that are strictly (or nearly so) utilitarian. Some people only care about efficiency and reliability. They buy Corollas by the boatload. But the FJ Cruiser is not the result of a logical, statistical analysis, with high conformance to the mean and low variation of anything.

I think what I’m trying to say is that marketing design is building the right thing, while production design is building the thing right. The auto industry is mature enough that you need both. Success in the software industry still relies more on building the right thing.

There are no IT projects … mostly

Whenever someone says something I’ve been thinking or saying for a while, it’s clear evidence of how smart they are. (Don’t laugh, you think so too.) So when Bob Lewis published the KJR Manifesto – Core Principles, he confirmed his intelligence when he wrote:

There are no IT projects. Projects are about changing and improving the business or what’s the point?

The variation that I’ve been telling people for years is that people don’t want software, they want the things they do with the software. So if you’re working on an IT project and can’t explain the benefits in terms that matter to the business, you probably shouldn’t be doing the project. Then in the middle of making this point to someone, I realized it’s not always true.

One case I thought of was a steel manufacturer that I interviewed with. While the factory was computer-controlled, the people who worked on those systems were in Engineering. The non-production computer system — email, financials, advertising, etc. — was IT. In that case, IT really was a support function, no more important to the company than telecom.

That doesn't mean it was unimportant. They could no more survive without their back-office system than they could do without phones. But that system really had no bearing on how they ran their business. It was something that was expected to Just Work™, like the electricity or plumbing.

The thing I don’t know is if this is the exception that proves the rule, or if it’s more common than I thought to find a place where IT really isn’t a strategic partner in the business.

Maybe I’m the one missing something

Magicians make a living at misdirection, getting you to look at their right hand while they're hiding the ball with their left. You'd think journalists would want to be a little more direct than that. But Neil McAllister pulled off a whopper of a sleight of hand recently, using more than half his column to summarize a Joel Spolsky post before jumping to a completely unrelated conclusion.

Joel’s point, and the first more-than-half of Neil’s summary, was shooting down the idea beloved of suits that programming can be reduced to a set of building blocks that can be snapped together by a non-programmer. (For a hysterically painful example of how wrong this is, and how far people will go to try to do it anyway, see The Customer-Friendly System at The Daily WTF.)

Joel covered the ground pretty well, so I was wondering where Neil was going with this. Once I got to it, I had to re-read the segue three times to see what connection I was missing:

Don’t you believe it. If, as Brooks wrote, the hard part of software development is the initial design, then no amount of radical workflows or agile development methods will get a struggling project out the door, any more than the latest GUI rapid-development toolkit will.

And neither will open source. Too often, commercial software companies decide to turn over their orphaned software to “the community” — if such a thing exists — in the naïve belief that open source will be a miracle cure to get a flagging project back on track. This is just another fallacy, as history demonstrates.

If there’s a fundamental connection between open source and “Lego programming” I don’t know about it. Maybe Neil makes the connection for us:

As Jamie Zawinski recounts, the resulting decision to rewrite [Netscape’s] rendering engine from scratch derailed the project anywhere from six to ten months.

Which, as far as I can see, has nothing to do with the fact that it was open source. In fact it seems more like what Lotus did when they delayed 1-2-3 for 16 months while they rewrote it to fit in 640k, by which time Microsoft had taken the market with Excel. Actually that’s another point that Joel made, sooner and better.

Is Neil trying to say that Lego programming assumes that code can be interchangeable, and man-month scheduling assumes that programmers are interchangeable? Maybe, and that’s even an interesting idea. But that’s not what he said, and if I flesh out the idea it won’t be in the context of a critique of someone else’s work.

Or maybe it was an opportunity to take a shot at the idea of "the community". And yet in his very next column he talks about the year ahead for the open source community, negative community reaction to the Novell/Microsoft deal, and praise from the community for Sun open-sourcing Java. Does he really dispute the existence of a community, or was it hit bait?

Okay, so where did I start? Right, with misdirection. So the formula seems to be: quote a better columnist making a point that I like, completely change the subject with the word “therefore”, summarize another author making my second point, and send it to InfoWorld. Am I ready to be a “real” pundit yet?