How corporations spin a risk into a benefit

James Maguire, managing editor of Datamation, wrote in “Indian IT Firms: Is the Future Theirs?”:

“In the past, companies used to award IT outsourcing contracts that were longer, 7-10 years. They would hire one firm to do it, and that firm would have subcontractors,” Ford-Taggart says. Now, big clients split up major projects and request bids on individual components. “Then they’ll say, ‘Look, we can have this portion done in India for 30% less.’”

This might cause more managerial headaches for the client company, but in fact it’s less risk: clients have fewer eggs in a basket with any one IT firm, so if a project goes bad or creates cost overruns, the entire project won’t take such a big hit.

Emphasis added.

I hate this corporate executive definition of “risk”. And before you think I don’t get it: I understand what they mean, but I think they’re wrong.

When I talk about reducing risk, I mean that I’m making problems less likely to occur. What the execs mean when they take this position is that they’re diversifying the accountability. They want to be able to report that while 20% of the project is at risk, the other 80% is on track.

Well sure, the other 80% can’t be used without the 20% that’s missing. But in their view the focus should be that we’ve got 80% of the project on time and on budget!

If you admit up front that your process is increasing management headaches, you should realize you’re increasing the likelihood of problems. You may be mitigating the potential impact, but that’s not a given.

Any mitigation strategy that seeks to reduce the impact of a failure, and does so by increasing the likelihood of failure, is probably a bad idea.
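
To put numbers on that, here is a minimal sketch in Python. The probabilities and dollar amounts are invented purely for illustration; the point is that splitting the work can shrink the worst-case hit while raising the expected loss, once the extra management overhead makes each piece more likely to slip.

    # Treat risk as probability x impact, with made-up numbers.

    # One big contract: lower odds of trouble, bigger hit if it happens.
    single_vendor = {"p_failure": 0.10, "impact": 1_000_000}

    # Five split contracts: each piece is cheaper to lose, but more moving
    # parts (and more management overhead) make each piece likelier to slip.
    split_pieces = [{"p_failure": 0.25, "impact": 200_000}] * 5

    expected_loss_single = single_vendor["p_failure"] * single_vendor["impact"]
    expected_loss_split = sum(p["p_failure"] * p["impact"] for p in split_pieces)

    print(f"single vendor: {expected_loss_single:,.0f}")   # 100,000
    print(f"split project: {expected_loss_split:,.0f}")    # 250,000

With those made-up numbers, no single failure costs as much as before, but the total you should expect to lose has gone up, not down.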

How using people in your ads increases sales

We’ve all seen the light beer commercials with young, impossibly attractive people guzzling a product that, by all rights, should be making them fat and unhealthy. The obvious question is, “Do they think we’re really that stupid? That we think if we drink their beer we’ll become supermodels?”

Good question. But it gets the real thinking backwards.

When you show people using the product, you’re helping the prospect visualize themselves using it. It makes the product seem familiar and safe instead of new and unknown.

But since you can’t put each viewer into their own personalized version of the ad (yet), you have to use a stand-in. If you show someone the viewer would like to be, you increase their desire to recreate that image.

So you don’t want the prospect thinking, “If I use that product, I will become that cool, attractive person in the ad.” You want them thinking, “Because I am cool and attractive, I can see myself using that product.”

It’s the same thinking that leads to cliques and fads:

  • I’m not cool because I wear Air Jordans. I wear Jordans because I’m cool.
  • I’m not tough because I play rugby. I play rugby because I’m tough.
  • I’m not a redneck because I drive a truck. I drive a truck because I’m a redneck.

You don’t want your prospect to think your product will make them more attractive. You want to help them confirm what they already believe about themselves.

The difference between innovation and sloppiness

I took a couple of art classes in college. In Life Drawing we were supposed to look at the arrangement or person in front of us and put it on paper. In pencil, then charcoal, then ink … watercolor … oil … etc.

I was always good at photorealism. I might not have been fast, but I had some pencil drawings that could have passed (at a distance) for black-and-white photos.

There was another student, an art major who fancied himself an “artiste”. His work spanned the range from abstract to really abstract. He looked down on my mere technical facility.

But my grades were as good as his, sometimes better. It seemed that when he talked about his rejection of formal rules, what he really meant was that he wasn’t able to do realism. He didn’t have the command of the tools, the mere technical facility. So everything he did owed as much to chance as to intent.

He may have been right that I didn’t have the creativity to do modern art. I’ll admit that I appreciate paintings that simply look good, without high-flying pretension or socio-political overtones. I guess I’ll never have a SoHo gallery showing. C’est la vie.

But with all his fervor, and whatever glorious visions he had in his head, he couldn’t reliably get them onto the paper. He couldn’t create something specific, on purpose.

[Cue un-subtle segue … ]

But what does this have to do with the business world? Scott Berkun wrote recently about how constraints can help creative thinking. When a large corporation does “blue sky” thinking, they can wander aimlessly and never produce. Constraints set a direction.

But I think there’s another problem with “blue sky” thinking that goes beyond a lack of direction. It’s summed up by Voltaire’s famous maxim:

The perfect is the enemy of the good.

When a company tries to “blue sky” a problem, they are implicitly seeking perfection: With no limits, what would be possible?

But there are always limits. They may be self-imposed, inconsequential, misunderstood, overblown, or in any number of other ways not real limits. And it helps to know the difference.

When you start out by asking people what they would do if there were no constraints, don’t be surprised when they come back with a solution that can’t possibly work, and then convince themselves that theirs is the only possible solution. By then, you’ve already lost.

Is your resume hot or not?

Getting a job is like getting a date.

  1. There may or may not be rules to it.
  2. It may or may not be fair.
  3. They may be so desperate it doesn’t matter what you say. But do you want to hook up with someone who’s desperate?
  4. Or maybe it doesn’t matter what you say because they decided the answer was “no” before you even opened your mouth.
  5. The more in-demand you are, the more choosy you can afford to be. But if you want to get this job — or this date — then you’re probably going to have to say what they want to hear. Good luck figuring out what that is.

How to Kill a Project by Accident

In my last post I talked about perverse incentives in project management. Something I mentioned in passing was what happens when you don’t have positive incentives: the cases where there is simply no incentive to do the right thing. I realized there’s actually another way to get this wrong by trying too hard to get it right.

Let’s say you have just finished a project, gone into production, and it blew up. Data corruption, security problems, too slow, everything that can go wrong with a product. First you do an emergency project to fix it, then you do the after-action review to see what went wrong.

What you find is that you didn’t have good test coverage. There were whole modules that were never reviewed before going into production. It’s painfully obvious that there was a complete breakdown in the QA process.

Fixing yesterday’s problem

You’re not going to make that mistake again. You write up policies for demonstrating test coverage. You create reports to track test execution and results. You re-write employee performance expectations to align with the new methodology. (If you’ve read my stuff before, you should hear the alarm bells start ringing when you see the “m word”.)
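
A minimal sketch of the kind of gate a policy like that tends to produce, assuming coverage.py and an arbitrary 90% threshold (neither is from the original story):

    # Hypothetical coverage gate: fail the build if measured line coverage
    # drops below a policy threshold. Assumes coverage.py has already run
    # and written its .coverage data file.
    import sys
    import coverage

    THRESHOLD = 90.0  # arbitrary policy number, for illustration only

    cov = coverage.Coverage()
    cov.load()            # read the .coverage data file
    total = cov.report()  # prints the report and returns the total percentage

    if total < THRESHOLD:
        print(f"Coverage {total:.1f}% is below the {THRESHOLD}% policy")
        sys.exit(1)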

Your next project is going exactly according to plan. Test coverage is at 90% overall, with 100% of high priority use cases covered. You’re on schedule to execute all test cases before acceptance testing starts. Defects are identified and corrected.

Then the users get it. They hate it. It doesn’t do anything the way they wanted it to. Not only that, it doesn’t even do what they asked for. How could you be so far off?

You don’t get what you don’t measure

Look back at the incentives you created. You’re rewarding following the methodology and completing the checklist. Test coverage is great, but it’s not the goal of the project. The goal is to provide something of value to the users … or at least it should be. Did you include a line in the new process that explicitly says, “Check with the users that it’s doing what they need”?

So how do you create the right incentives? Just flip the emphasis. Instead of saying an employee’s performance evaluation is 80% following the methodology and 20% client satisfaction, turn the numbers around. Your users don’t care that you followed “best practices.” They care that the product does what they need. Where is that measured in your methodology?
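
As a toy illustration (the weights and scores here are invented, not from any real evaluation scheme), flipping the weights changes what the score actually rewards:

    # Invented numbers: an employee who followed the checklist perfectly
    # but shipped something the users hate.
    process_score = 0.95       # "followed the methodology"
    satisfaction_score = 0.30  # "users got what they needed"

    old_weighting = 0.8 * process_score + 0.2 * satisfaction_score  # 0.82
    new_weighting = 0.2 * process_score + 0.8 * satisfaction_score  # 0.43

    print(old_weighting, new_weighting)

Under the old weighting this person looks like a star; under the flipped weighting, the number finally notices that the users didn’t get what they needed.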

Installing software is not worth my time

My PC is managed according to corporate standards. I can’t install anything without an administrator signing on and authorizing it. I asked if I could install the software to link to my cell phone and load my contacts into it. Otherwise I’d be spending a couple of hours over the next week manually entering them all in via the keypad.

The day after I put the ticket in, someone called up and asked where the software was. I told him I had it on a CD. The local support guy came up the next morning, saw that it was an OEM disk and not some random thing I’d burned, and logged in as administrator so I could install it.

Sure, it was two days before I got what I needed, but it wasn’t a compelling gotta-have-it-today issue.

Everything else on here is available as a network install, and is set up in my profile. When I get a new PC — I’m due to be refreshed in the next month or so — I’ll go to a single application, check the boxes for everything I need, and go to lunch. When I come back, everything will be installed.

Every application I install myself, I’d have to reinstall when I get a new PC. How many hours would that take? Multiply that by the number of users in my office, which just relocated earlier this year. Most of us didn’t take the hardware from the old location; we just came to blank systems at our new desks and kicked off the install process.

Even if I were paying for it myself, installing software would not be worth my time. If my alternatives are to load all my apps on a new PC I’ve just bought, or to work a billable hour and pay someone else to do the installs, I’ll pay the Geek Squad to click OK and reboot 14 times. Why should I expect the company I work for, which owns the PC I’m working on, to make a different decision?

The program is not the product

Managers want programs to be like the output of a factory. Install the right robots and tooling, start the process, and good software comes out the end.

WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG!!!!!! (Can you tell I disagree?)

Nearly everyone who makes the factory analogy misses a very fundamental point: the program is not the product. For one thing, unless you work for a software company selling shrink-wrapped software products, you aren’t ever selling the program. Conventional wisdom says most programmers actually work in internal IT, so it’s safe to say most programs are never sold.

Businesses don’t want programs, they want credit reports … and loan contracts … and title searches … and purchase orders … and claim forms …

So what is the right analogy? The program is the assembly line! So measuring bugs in the program is wrong. Even comparing compiling to manufacturing — which is still better than comparing programming to manufacturing — is wrong. What you should be looking at is the output produced by the program. Is your program a website? Measure the number of pages it can produce per unit time, without error. Is it an editor? Measure the number of pages it can spell-check per unit time, and with what accuracy.
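
Here is a rough sketch of what measuring the output instead of the program can look like. The render_page function and the list of pages are stand-ins I have invented for illustration; the only point is that the metrics are pages per second and error rate, not anything about the code that produces them.

    import time

    def measure_output(render_page, pages):
        """Measure throughput and error rate of whatever the program produces.

        render_page and pages are hypothetical stand-ins for the "assembly
        line" and its inputs; only the output metrics matter here.
        """
        errors = 0
        start = time.perf_counter()
        for page in pages:
            try:
                render_page(page)
            except Exception:
                errors += 1
        elapsed = time.perf_counter() - start

        return {
            "pages_per_second": len(pages) / elapsed if elapsed else float("inf"),
            "error_rate": errors / len(pages) if pages else 0.0,
        }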

When measuring the output of a manufacturing process, you literally shouldn’t care what the process looks like, nor what tools are used, so long as it consistently produces the same output. This is not to say the process and tools don’t matter. A bad process may be prone to unexpected failure. Tools may be harder to maintain or have a shorter service life. You may be locked into a service contract with the manufacturer. And coincidentally [ahem] all these factors apply to software.

So yes, programming can be compared to manufacturing. As long as you remember that the program is not the product, the program is the assembly line.

Changing your development platform

There are certain milestones in the life of a product when developers are free to ask if it’s time to change the platform it’s developed on. Typically you’ve shipped a major version and gone into maintenance mode. Planning has started for the next version, and you wonder if you should stick with what you’ve got or if, knowing what you know now, it might be better to switch from .NET to PHP, or from PHP to Java.

You might think that checking Netcraft would be a good idea. You can see if your current platform is gaining or losing market share, and who doesn’t like market share? If you look at the latest chart you’ll see that Microsoft is gaining on Apache.

But keep in mind that while Apache’s market share has gone down marginally, the total number of sites has still gone up. Most of Microsoft’s gain is from new sites, not from existing sites switching. (The exception being large site-parking operations switching to IIS.)

But really the important question is whether your preferred platform faces a reasonable possibility of becoming obsolete or unsupported. This is actually one place where the Unix world’s slower upgrade cycles help: you rarely have applications “sunsetted” by the vendor.

Am I arguing in favor of dropping .NET? Not at all. I think you should use what works for you. What I’m saying is that unless your chosen platform is in danger of becoming unsupported, and that actually causes a problem for you, a market share chart should never be what gets you to switch.

Now if you hadn’t already chosen a platform, and you wanted to know what platform had a larger market, then you’d care about market share. But that’s a subject for another post.

Geeks still don’t know what normal people want

If you listen to geeks, locking out development of third-party applications will doom the iPhone in the market. But remember the now-famous review when the iPod was released:

No wireless. Less space than a nomad. Lame.

The market quickly decided it didn’t care about wireless and bought the things in droves. And current iPods have more space than the Nomad did when the original came out. Now that the iPhone has been shown, geeks are again claiming that it’s going to fail, this time because it’s not going to be open to third-party applications.

Apple doesn’t care if you can extend it because they believe their target customer doesn’t want it extended. They want something that works well, the same way, every time. The iPod wins because it does pretty much what people want, close enough to how they want, without making them think about how to do it.

The iPhone may not be open to developers, but it’s upgradable. When Apple finishes writing software to make the Wi-Fi automatically pick up a hotspot and act as a VoIP phone, that functionality can be rolled out transparently. First-gen iPhones will become second-gen iPhones without the users having to do anything.

The upgrade path will be to higher HD capacity, so people can carry more movies with them. I see these things being hugely popular with people who take trains to work. If I could take a train to where I work now, I’d already be on a waiting list for an iPhone.

Pay the man

IT people are frequently highly educated, with extensive formal and on-the-job training. And we all, if you look at our resumés, think we’re fast learners. That’s probably because everything we work with changes every couple of years, so anyone who’s been doing this for very long has learned multiple generations of tools. Many of our jobs also require us to be generalists, with a broad range of knowledge across multiple unrelated fields.

It’s probably not surprising, then, that we tend to be DIY-ers. Never changed a light fixture? No problem. Give me a few minutes with a book and I’ll know enough to do it. House needs painting? Heck, I’ve always wanted an excuse to go get one of those power sprayers, I’m on it! That’s why we’re shocked to hear how much people pay to have someone do work that, after all, we could do ourselves with little or no training.

That was my frame of mind when I had to replace the shower door. The frame was mounted on tiled walls. I only cracked two of the tiles a little bit trying to get the old frame off, and lifted about a dozen away from the wall. No problem, just ran to the hardware store for some tile adhesive. And I only put the adhesive on a little too thick, so two of the tiles fell off the next day when I started mounting the frame. And I only cracked one more because I was unfamiliar with the mounting hardware.

I had to remove all the tiles and start over because the adhesive was actually nowhere near dry. I wanted to make sure it dried all the way, because I wasn’t completely sure I had done it right this time. When I tried again three days later, only one tile fell off, because I had gone too thin with the adhesive. But after waiting a day for the grout on the rest to dry, I was able to scrape that space out, get the last tile up, and grout it. The caulk and grout I used to patch the cracks look mostly okay … for now … while they’re still white.

All in all, it only took me a week and a half to hang that door. And the cracked and patched tiles will probably still look good when I go to sell the house. (At least I hope they will; the color was discontinued years ago, so I’d have to re-tile the whole damn bathroom otherwise.) I’m so glad I didn’t pay a hundred bucks to some barely-trained tradesman to do it for me.