Design = function + aesthetics

Ask your local programmer if he knows how to design user interfaces and invariably he’ll say he does. Go ahead, ask. I’ll wait.

You’re back? Good. Now go look at the new iPhone. Has your guy ever made anything remotely that cool? Unless you’re reading this from Cupertino, odds are he hasn’t. The UI is more beautiful and, as near as I can tell from the demo movies, more usable than any other phone or music player I’ve seen. But I wonder, how much of the perceived usability is a response to the beauty?

It’s becoming conventional wisdom that you don’t want to make the demo look done. Excessive visual polish early in the process not only limits the feedback you get to comments about the superficial details, it also suggests equally finished interaction with the system. It literally makes it look like it’s doing more than it really is doing.

I’ve avoided this problem in my career by not being very good at graphics, and avoided realizing that by not working with any real visual artists to compare my work to. Yes, I used to think I was good at it, just like every programmer. Eventually I realized that consistency and predictability were a poor subset of what an artist can add.

Now, whenever I make up a project plan, there is a task at the end for “Add Pretty”. And my name isn’t on that task.

The Digital Dark Ages

I’ve been paying my mortgage for about three years now. Unless I change something, I’m going to keep paying on it for another 27 years. I try not to think about the fact that although I have an actual physical copy of the mortgage agreement, with real pen-and-ink signatures, I don’t have any proof that I’ve ever made a payment.

At the risk of sounding like a Luddite, it bothers me that I have to trust the bank’s computer system to keep track of all 360 payments I’ll have made by the time it’s over. I’m not just being paranoid. I had an issue where a bank said my wife still owed money on a loan we had paid off three years earlier. We didn’t have anything in writing for each payment. The bank couldn’t even tell us the history of the loan; just that the computer showed we still owed money. And if a bank says you owe money, unless your lawyers are bigger than their lawyers, then you owe them money.

If you go to museums, you’ll see ledgers from banks in the 1800s and earlier. Over two hundred years later, we still know who paid their bills and when. But five years in the past … it doesn’t exist.

This could change with new regulations and retention requirements. But the big difference is what is standard vs. what you have to work at. A hundred years ago everything was written down. If you wanted to get rid of records you had to make an effort to identify what you wanted to delete, somehow separate it from the rest, and physically destroy it. Today, we only keep data as long as we have to. We only bother with long-term storage when the law or financial necessity makes us.

Let’s assume we have some data that we really want to keep “forever”. What is that going to take?

First, you’ll want to store it on something that doesn’t degrade quickly. Burning it to a CD or DVD seems to offer better longevity than VHS. Well, maybe. Second, you want to store it in a format that you’ll be able to read when you want to. This might be a harder problem than physical longevity, once you start to consider how much data goes into a modern file format.

Look at the problem from the user’s perspective: the document format (the same applies to music and video) is just a way of saving the document so that it can be opened later and look the same, maybe on the same computer, maybe not. When Word 97 handles table formatting and text reflow around images a certain way, for instance, the document format has a way of capturing the choices the user made.

If I open that Word 97 document in Word 2003, either the tables, text, and images look the same or they don’t. If they look the same, it’s because there’s an import filter that understands what the old format means, and Word 2003 has a way of representing the same layout. If I then save as Word 2003, the specific way the layout is represented has changed, but the user neither sees nor cares about the difference.

If, on the other hand, that Word 97 document doesn’t look the same in Word 2003, it really doesn’t matter to the user whether the problem is a bad import filter or Word 2003 not supporting a feature from Word 97. (Maybe they used flame text.) So a format that technically captures all the information needed to exactly recreate a document is utterly useless without something that can render it the same way.
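
To make that concrete, here’s a toy sketch of the idea in Python. The feature names are made up and have nothing to do with Word’s actual internals; the point is only that fidelity is the intersection of what the old document used and what the new application can render:

```python
# Toy model: a "format" is the set of layout choices it can capture,
# and an import filter maps a document's choices onto what the new
# application can render. All feature names here are made up.

NEW_APP_FEATURES = {"tables", "text_reflow", "image_wrap"}

def import_document(old_doc_features: set[str]) -> set[str]:
    """Return the subset of the document the new application can reproduce.

    Whether a feature is mangled by a bad filter or simply isn't
    supported, the user sees the same thing: the document doesn't
    look the same.
    """
    lost = old_doc_features - NEW_APP_FEATURES
    if lost:
        print(f"warning: cannot reproduce {sorted(lost)}")
    return old_doc_features & NEW_APP_FEATURES

# A Word-97-era document that used tables and flame text:
print(import_document({"tables", "flame_text"}))
# -> warning: cannot reproduce ['flame_text']
# -> {'tables'}
```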

Okay, so we need long-term media, and we need to choose a format that is popular enough that there will still be import filters for it in the foreseeable future. Eventually we’ll still reach the end of those paths. Either the disks will degrade, or the file format will be so out of date that no one makes import filters any more. When that happens, the only way to keep our data will be to copy it to new media, and potentially in a new format.

What should that format look like? We’ve already got PDF, which is based on how something looks in print. We’ve got various audio and video formats, which deal with playing an uninterrupted stream. But what about interactive/animated documents designed for online viewing?

Believe it or not, I’m going to suggest a Microsoft solution, though it’s one they haven’t thought to apply this way: PowerPoint. Today nearly everyone has a viewer, but not so long ago most of the slideshows I got were executables. If you had PowerPoint installed you could open the executable and edit the slideshow the same way you can edit a PDF if you have Acrobat.

As much as people complain about the bloat that Word adds to simple files, I think the future of file distribution will be to package the viewer along with the file. At some point storage becomes cheaper than the hassle of constantly updating all those obsolete file formats. The only question is how low a level the viewers will be written to: OS family, processor architecture, anything that runs C, etc.
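
As a back-of-the-napkin illustration, bundling a document with its own viewer can be as crude as a zip file containing both. The file names and the trivial plain-text “format” here are invented for the example:

```python
# Crude sketch of "ship the viewer with the file": one archive holds
# the document plus a script that knows how to display it. The file
# names and the plain-text "format" are invented for the example.
import zipfile

VIEWER_SOURCE = """\
# viewer.py -- renders exactly this document format
with open("document.txt", encoding="utf-8") as f:
    print(f.read())
"""

with zipfile.ZipFile("bundle.zip", "w") as bundle:
    bundle.writestr("document.txt", "The archived document itself.")
    bundle.writestr("viewer.py", VIEWER_SOURCE)
```

Of course, this just pushes the question down a level: the bundle outlives the file format only as long as something can still run the viewer.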

Meet the new boss, same as the old boss

In case you haven’t noticed yet, we’re going through another round of power struggles in the IT industry. Oh, it might not look like that’s what’s going on. On the surface, people are saying it’s a matter of web-based vs. desktop applications, and frequently these conversations rest on the premise that it’s a debate about technical merits.

Nope. It’s the return of the glass house. Peel back all the rationalizations about easier deployment, easier support, more consistency, and what it really comes down to is more control. If we can just keep the software out of the users’ hands then everything will be okay.

But what history shows us is that users like having control of their “stuff”. Taking that control away requires either redefining “their stuff” to be “our stuff”, or convincing them that they aren’t qualified to handle their stuff.

Is this what your customers are hearing from you?

The War on Laundry™

Let’s see:

  • Not a finite thing that can be destroyed, nor a group that can be defeated.
  • No one qualified to declare surrender for it.
  • There are better and worse ways to deal with it, none of which can completely eliminate it.
  • No matter how much you fight it, there will always be more soon.
  • No one really likes it, but the only way to avoid it is to change your lifestyle so profoundly that the alternative is worse.

Hmm, sounds about right.

Any relation to other Wars on Nouns is completely intentional.

The day I got a lot smarter

One sign of intelligence is the ability to learn from your mistakes. An even better sign is the ability to learn from someone else’s mistakes. Unfortunately, we don’t always have the luxury of watching someone else learn a valuable lesson, and we have to do it ourselves. But if we pay attention, sometimes we get to learn multiple lessons from one mistake. (Lucky us.)

Case in point: Dealing with a crisis. I was managing a group of web developers, and the project lead on an integration with our largest client was going on vacation. He assured me his backup was fully trained, and would be able to deal with any issues. He left on Friday, and we deployed some new code on Monday. Everything looked good.

Time passes …

On Wednesday at about 4 p.m., we got a call asking about an order. We couldn’t find it in our system. From what we could tell, the branch that placed the order wasn’t set up to use our system yet, so we shouldn’t have the order. At 5 I let the backup go home for the day while I worked on writing up what we’d found. I sent an internal email explaining what I believed had happened. I said that I would call the client and explain why we didn’t have the order, and that they should check their old system.

While double-checking the deployment plan, I discovered that the new branch actually was on our new system … as of that Monday. That’s part of what was included in the new code. That’s when I got the shiver down my spine. By that time the backup, whose house was conveniently in a patch of bad cell coverage, was gone. The lead was on vacation. “Okay,” I thought, “I’ve seen most of this code, in fact I’ve written a good bit of it. I can figure this out.”

Stop laughing. It sounded good at the time.

To make a long story short (Too late!), we hadn’t been accepting orders from several branches for three days, but we had been returning confirmations for them. It was somewhere around 3 a.m. when I finally thought I knew exactly how many orders we had dropped, though I hadn’t found the actual bug in the code yet. I created a spreadsheet with the list of affected orders. At one point I used Excel’s drag-to-copy feature to fill a range of cells with the branch number for a set of orders.

Did you know Excel will automatically increment a number if you drag to copy? Yes, I know it too. At 11:30 in the morning, I know it. At 3 a.m. that night, I apparently didn’t. So I sent the client a spreadsheet full of non-existent branch numbers that I hadn’t double-checked. “Oops” apparently doesn’t quite cover it.
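
For what it’s worth, this is the kind of fill I now script rather than drag. A minimal sketch using the openpyxl library, where the file name, layout, and branch number are all hypothetical:

```python
# Filling a constant column in code instead of dragging: no fill
# handle, no surprise increments. Assumes the openpyxl library;
# the file name, layout, and branch number are hypothetical.
from openpyxl import Workbook

BRANCH = 4217                       # one branch, many orders
orders = ["A-1001", "A-1002", "A-1003"]

wb = Workbook()
ws = wb.active
ws.append(["Order", "Branch"])      # header row
for order in orders:
    ws.append([order, BRANCH])      # the same value every time

# Sanity check before anything goes to the client:
assert all(row[1].value == BRANCH for row in ws.iter_rows(min_row=2))
wb.save("affected_orders.xlsx")
```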

The reveal

The next morning on a conference call with the client, my boss, his boss, and several other people, we were going over the spreadsheet when someone noticed the problem. To me, it seemed obvious that it was a simple cut-and-paste error on the spreadsheet. But someone — a co-worker, believe it or not — decided to ask, “Are you sure? Because I don’t see those other two branches on here either.” After dumbly admitting that I didn’t know anything about any other two branches, I ended the call so I could go figure out what was happening.

Now I had apparently demonstrated that I didn’t actually know what was wrong, that I had no idea of its scope, and that I was trying to cover it up. Yay me. We called in the lead (whose vacation was at home doing renovations) and started going through the code. When I finally found the bug, it accounted for exactly the list of orders I had sent out early that morning, except for the cut-and-paste error. The “other two branches” turned out to be from the previous night’s email, where I had specifically said those branches were not affected by the problem.

Within two hours, we had the code fixed and all the orders recovered. So everyone’s happy, right? If you think so, then you haven’t yet learned the lessons I did that day.

  1. No matter how urgently someone says they need an answer, the wrong answer won’t help.
  2. If it looks like the wrong answer, it might as well be the wrong answer. This doesn’t mean counter-intuitive answers can’t be right. It means that presentation and the ability to support your conclusion count.
  3. If you didn’t create the problem, always give the person who did the first chance to fix it.
  4. If someone knows more about a topic than you do, have them check your work.
  5. Don’t make important decisions on too little sleep.
  6. Before making a presentation to a client, review the materials with your co-workers.
  7. Don’t make important changes when key people are unavailable.

Looking at that list, I realize I already knew several of those lessons. So why did it take that incident to “learn” them? Because there’s a difference between knowing something, and believing it.

When design is not design

“How is software production like the car industry?”

Oh no, not again. Yeah, well, most people are getting it wrong. So here’s another shot at it.

There are aspects of car design that strictly deal with measurable quality: performance of the electrical system, horsepower, fuel economy, reliability. But the shape and style of the car are much more loosely coupled to hard-and-fast measurements. That facet of the design — the way it looks, the demographic it will appeal to — is not amenable to Six Sigma processes.

Granted, there are some cars that are strictly (or nearly so) utilitarian. Some people only care about efficiency and reliability. They buy Corollas by the boatload. But the FJ Cruiser is not the result of a logical, statistical analysis, with high conformance to the mean and low variation of anything.

I think what I’m trying to say is that marketing design is building the right thing, while production design is building the thing right. The auto industry is mature enough that you need both. Success in the software industry still relies more on building the right thing.

There are no IT projects … mostly

Whenever someone says something I’ve been thinking or saying for a while, it’s clear evidence of how smart they are. (Don’t laugh, you think so too.) So when Bob Lewis published the KJR Manifesto – Core Principles, he confirmed his intelligence by writing:

There are no IT projects. Projects are about changing and improving the business or what’s the point?

The variation that I’ve been telling people for years is that people don’t want software, they want the things they do with the software. So if you’re working on an IT project and can’t explain the benefits in terms that matter to the business, you probably shouldn’t be doing the project. Then in the middle of making this point to someone, I realized it’s not always true.

One case I thought of was a steel manufacturer that I interviewed with. While the factory was computer-controlled, the people who worked on those systems were in Engineering. The non-production computer system — email, financials, advertising, etc. — was IT. In that case, IT really was a support function, no more important to the company than telecom.

That doesn’t mean it was unimportant. They could no more survive without their back-office system than they could do without phones. But that system really had no bearing on how they ran their business. It was something that was expected to Just Work™, like the electricity or plumbing.

The thing I don’t know is if this is the exception that proves the rule, or if it’s more common than I thought to find a place where IT really isn’t a strategic partner in the business.

Maybe I’m the one missing something

Magicians make a living at misdirection, getting you to look at their right hand while they’re hiding the ball with their left. You’d think journalists would want to be a little more direct than that. But Neil McAllister pulled a whopper of a sleight-of-hand recently, using more than half his column to summarize a Joel Spolsky post before jumping to a completely unrelated conclusion.

Joel’s point, and the first more-than-half of Neil’s summary, was shooting down the idea beloved of suits that programming can be reduced to a set of building blocks that can be snapped together by a non-programmer. (For a hysterically painful example of how wrong this is, and how far people will go to try to do it anyway, see The Customer-Friendly System at The Daily WTF.)

Joel covered the ground pretty well, so I was wondering where Neil was going with this. Once I got to it, I had to re-read the segue three times to see what connection I was missing:

Don’t you believe it. If, as Brooks wrote, the hard part of software development is the initial design, then no amount of radical workflows or agile development methods will get a struggling project out the door, any more than the latest GUI rapid-development toolkit will.

And neither will open source. Too often, commercial software companies decide to turn over their orphaned software to “the community” — if such a thing exists — in the naïve belief that open source will be a miracle cure to get a flagging project back on track. This is just another fallacy, as history demonstrates.

If there’s a fundamental connection between open source and “Lego programming” I don’t know about it. Maybe Neil makes the connection for us:

As Jamie Zawinski recounts, the resulting decision to rewrite [Netscape’s] rendering engine from scratch derailed the project anywhere from six to ten months.

Which, as far as I can see, has nothing to do with the fact that it was open source. In fact it seems more like what Lotus did when they delayed 1-2-3 for 16 months while they rewrote it to fit in 640k, by which time Microsoft had taken the market with Excel. Actually that’s another point that Joel made, sooner and better.

Is Neil trying to say that Lego programming assumes that code can be interchangeable, and man-month scheduling assumes that programmers are interchangeable? Maybe, and that’s even an interesting idea. But that’s not what he said, and if I flesh out the idea it won’t be in the context of a critique of someone else’s work.

Or maybe it was an opportunity to take a shot at the idea of “the community”. Although in his very next column he talks about the year ahead for the open source community, negative community reaction to the Novell/Microsoft deal, and praise from the community for Sun open-sourcing Java. Does he really dispute the existence of a community, or was it hit bait?

Okay, so where did I start? Right, with misdirection. So the formula seems to be: quote a better columnist making a point that I like, completely change the subject with the word “therefore”, summarize another author making my second point, and send it to InfoWorld. Am I ready to be a “real” pundit yet?

Blinded by experience

Don Norman knows more about design than I ever will. He “knows” a few things that aren’t even true. That’s some powerful knowing! Specifically:

You cannot successfully introduce a non-qwerty keyboard today, or reverse the window scroll bar convention, or suddenly require double-clicking on web links.

I think Don has spent too much time lately with experienced web users, and not enough time watching people like Lou, my father-in-law. Lou is a retired printing production manager who trained as a graphic artist and worked for several ad agencies. He got one of the first Macs so he could use PageMaker, and he was using Photoshop back when PageMaker still belonged to Aldus.

Lou “knows” that you have to double-click things with the mouse. He has known that for over 20 years. I’ve tried to point out that you don’t have to do that on the web. He does it anyway. His current computer runs Windows 95, so if I wanted to I could set the option to use single-click select globally. I would never set that preference on his system, though. It would drive him insane.

But Microsoft did introduce the new behavior. There is a generation that has grown up using it. Could someone do the same thing with web links? They’d have to have market penetration comparable to what Microsoft had with Windows 95, but yes, they could do it.

Here’s where it gets interesting. You could reasonably support either side of this decision: There are people who “know” that you double-click everything, and people who “know” that you single-click everything. Either one will be frustrated by a system that does the opposite. But knowing that both kinds of people exist, what should we do? In the Hippocratic tradition, first do no harm.

I can’t count the number of sites I’ve seen that instruct users to “Only click the Submit button once.” Or “Please wait for the page to load.” As soon as we start warning users what they should or shouldn’t do, two things should be apparent:

  1. Users, for some reason, think that they should do the very thing we’re telling them not to do.
  2. Some of them are going to do it despite our warnings.

If we can’t prevent users from doing the “wrong” thing with our applications, we owe it to them to at least make sure that they don’t break anything when they do.
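
On the server side, the usual trick is to make a duplicate submit harmless by design. Here’s a minimal sketch using Flask, with a one-time form token; the route, field names, and in-memory store are illustrative, not a production design:

```python
# Making a double submit harmless: the form carries a one-time token,
# and a repeated token replays the first result instead of placing a
# second order. A Flask sketch; the route, field names, and in-memory
# store are illustrative, not a production design.
from flask import Flask, request

app = Flask(__name__)
seen_tokens: dict[str, str] = {}    # token -> result of the first submit

@app.route("/order", methods=["POST"])
def place_order():
    token = request.form["form_token"]   # issued when the form was rendered
    if token in seen_tokens:
        # Second (or fifth) click: repeat the answer, change nothing.
        return seen_tokens[token]
    result = f"Order accepted for item {request.form['item']}"
    seen_tokens[token] = result
    return result
```

Disabling the button with a dab of JavaScript after the first click is polite, but only the server can guarantee that the second click doesn’t create a second order.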

Negotiation starts from confidence

It seems obvious when you think about it, but I don’t hear it said very often: the first step toward getting what you want is knowing what you want. You have to do more than recognize what you want; you have to be confident in it.

Here’s an example from my college days: I worked at a bar for a couple of years. On weekends, we’d have someone at the door checking ID. There were plenty of people who tried to talk their way in without any.

One night the regular doorman was late, and I covered for him. The first girl who showed up without ID smiled, batted her eyelashes and asked “pretty please?” I said something like, “Gee, I really wish I could, but rules are rules, etc.” She argued, I refused, she got mad … I’d like to chalk my behavior up to my youth and her smile, but the bottom line is I didn’t project confidence in what I wanted.

When the next girl showed up without ID (for some reason guys never tried this with me, I don’t know why), I just told her I needed ID or she couldn’t get in, sorry. She threw a quick pout, then left to find someplace else she could get into.

As I thought about this later I realized the first girl wasn’t mad because she wasn’t allowed in. She was mad because I gave the impression I might be open to negotiation but I wasn’t.

Moral of the story: Know which of your goals are negotiable and which ones aren’t. Never give the impression that one of the former might be one of the latter. Reasonable people can respect an honest difference of opinion, but will read indecisiveness as an opportunity to negotiate.

Update: I found another way of putting this on Scott Berkun’s site, where he explains why things suck:

It’s the things that tease us, making us think they’ll satisfy us but then failing, that hurt the most.

He was pointing out that people don’t complain about things they don’t care about, but it works really well to explain what happened to me when I was working at the bar.