How corporations spin a risk into a benefit

James Maguire, managing editor of Datamation, wrote in “Indian IT Firms: Is the Future Theirs?”:

“In the past, companies used to award IT outsourcing contracts that were longer, 7-10 years. They would hire one firm to do it, and that firm would have subcontractors,” Ford-Taggart says. Now, big clients split up major projects and request bids on individual components. “Then they’ll say, ‘Look, we can have this portion done in India for 30% less.’”

This might cause more managerial headaches for the client company, but in fact it’s *less risk*: clients have fewer eggs in a basket with any one IT firm, so if a project goes bad or creates cost overruns, the entire project won’t take such a big hit.

Emphasis added.

I hate this corporate executive definition of “risk”. And before you think I don’t get it: I understand what they mean, but I think they’re wrong.

When I talk about reducing risk, I mean that I’m making problems less likely to occur. What the execs mean when they take this position is that they’re diversifying the accountability. They want to be able to report that while 20% of the project is at risk, the other 80% is on track.

Well sure, the other 80% can’t be used without the 20% that’s missing. But the focus here should be that we’ve got 80% of the project on time and on budget!

If you admit up front that your process is increasing management headaches, you should realize you’re increasing the likelihood of problems. You may be mitigating the potential impact, but that’s not a given.

Any mitigation strategy that seeks to reduce the impact of a failure, and does so by increasing the likelihood of failure, is probably a bad idea.
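To put numbers on that: if risk means expected loss (probability of failure times impact), splitting the work only reduces risk when the smaller impacts aren’t swamped by the higher odds of failure. Here’s a minimal sketch, with probabilities and dollar figures invented purely for illustration:

```python
# Back-of-the-envelope "risk as expected loss" comparison.
# All probabilities and dollar figures are invented for illustration.

def expected_loss(p_failure: float, impact: float) -> float:
    """Expected cost of a failure: probability times impact."""
    return p_failure * impact

# One long contract with one firm: the whole project is exposed,
# but there is one vendor to manage and fewer interfaces to break.
single_vendor = expected_loss(p_failure=0.10, impact=1_000_000)

# The same work split across five vendors: each piece is smaller,
# but the admitted "managerial headaches" raise the odds per piece.
split_vendors = 5 * expected_loss(p_failure=0.25, impact=200_000)

print(f"single vendor: ${single_vendor:,.0f}")   # single vendor: $100,000
print(f"split vendors: ${split_vendors:,.0f}")   # split vendors: $250,000
```

With these made-up numbers, the “less risky” split approach costs two and a half times as much in expected losses. The split only wins if you can keep the per-piece failure odds from climbing, which is exactly what the extra management headaches work against.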

The difference between innovation and sloppiness

I took a couple of art classes in college. In Life Drawing we were supposed to look at the arrangement or person in front of us and put it on paper. In pencil, then charcoal, then ink … watercolor … oil … etc.

I was always good at photorealism. I might not have been fast, but I had some pencil drawings that could have passed (at a distance) for black-and-white photos.

There was another student, an art major who fancied himself an “artiste”. His work spanned the range from abstract to really abstract. He looked down on my mere technical facility.

But my grades were as good as his, sometimes better. It seems that when he talked about his rejection of formal rules, he really meant he wasn’t able to do realism. He didn’t have the command of the tools, the mere technical facility. So everything he did owed as much to chance as to intent.

He may have been right that I didn’t have the creativity to do modern art. I’ll admit that I appreciate paintings that simply look good, without high-flying pretension or socio-political overtones. I guess I’ll never have a SoHo gallery showing. C’est la vie.

But with all his fervor, and whatever glorious visions he had in his head, he couldn’t reliably get them onto the paper. He couldn’t create something specific, on purpose.

[Cue un-subtle segue … ]

But what does this have to do with the business world? Scott Berkun wrote recently about how constraints can help creative thinking. When a large corporation does “blue sky” thinking, it can wander aimlessly and never produce. Constraints set a direction.

But I think there’s another problem with “blue sky” thinking that goes beyond a lack of direction. It’s summed up by Voltaire’s famous maxim:

The perfect is the enemy of the good.

When a company tries to “blue sky” a problem, it is implicitly seeking perfection: With no limits, what would be possible?

But there are always limits. They may be self-imposed, inconsequential, misunderstood, overblown, or in any number of other ways not real limits. And it helps to know the difference.

When you start out by asking people what they would do if there were no constraints, don’t be surprised when they come back with a solution that can’t possibly work, and then convince themselves that theirs is the only possible solution. By then, you’ve already lost.

How to Kill a Project by Accident

In my last post I talked about perverse incentives in project management. Something I mentioned in passing was what happens when you don’t have positive incentives: the cases where there is simply no incentive to do the right thing. I realized there’s actually another way to get this wrong by trying too hard to get it right.

Let’s say you’ve just finished a project, it’s gone into production, and it blew up. Data corruption, security problems, too slow: everything that can go wrong with a product. First you do an emergency project to fix it, then you do the after-action review to see what went wrong.

What you find is that you didn’t have good test coverage. There were whole modules that were never reviewed before going into production. It’s painfully obvious that there was a complete breakdown in the QA process.

Fixing yesterday’s problem

You’re not going to make that mistake again. You write up policies for demonstrating test coverage. You create reports to track test execution and results. You re-write employee performance expectations to align with the new methodology. (If you’ve read my stuff before, you should hear the alarm bells start ringing when you see the “m word”.)

Your next project is going exactly according to plan. Test coverage is at 90% overall, with 100% of high priority use cases covered. You’re on schedule to execute all test cases before acceptance testing starts. Defects are identified and corrected.

Then the users get it. They hate it. It doesn’t do anything the way they wanted it to. Not only that, it doesn’t even do what they asked for. How could you be so far off?

You don’t get what you don’t measure

Look back at what incentives you created. You reward doing the methodology, following the checklist. Test coverage is great, but it’s not the goal of the project. The goal is to provide something of value to the users … or at least it should be. Did you include a line in the new process that explicitly says, “Check with the users that it’s doing what they need”?

So how do you create the right incentives? Just flip the emphasis. Instead of saying an employee’s performance evaluation is 80% following the methodology and 20% client satisfaction, turn the numbers around. Your users don’t care that you followed “best practices.” They care that the product does what they need. Where is that measured in your methodology?
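To make the flip concrete, here’s a toy scoring sketch; the employees and 0-100 ratings are invented for illustration:

```python
# A toy sketch of flipping the evaluation weights.
# The employees and 0-100 ratings below are invented for illustration.

def performance_score(methodology: float, satisfaction: float,
                      w_methodology: float, w_satisfaction: float) -> float:
    """Weighted performance score from two 0-100 ratings."""
    return w_methodology * methodology + w_satisfaction * satisfaction

# Followed the checklist perfectly, but the users hate the product.
checklist_hero = dict(methodology=100, satisfaction=20)

# Cut some process corners, but delivered what the users needed.
user_champion = dict(methodology=60, satisfaction=95)

# 80/20 in favor of methodology rewards the checklist hero...
print(round(performance_score(**checklist_hero, w_methodology=0.8, w_satisfaction=0.2), 1))  # 84.0
print(round(performance_score(**user_champion, w_methodology=0.8, w_satisfaction=0.2), 1))   # 67.0

# ...while 20/80 points the incentive at the users instead.
print(round(performance_score(**checklist_hero, w_methodology=0.2, w_satisfaction=0.8), 1))  # 36.0
print(round(performance_score(**user_champion, w_methodology=0.2, w_satisfaction=0.8), 1))   # 88.0
```

Same two employees, same work; the only thing that changed is which one the organization tells everybody it values.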

How to Fail by Succeeding

Dave Christiansen over at Information Technology Dark Side is talking about perverse incentives in project management, which he defines as:

Any policy, practice, cultural value, or behavior that creates perceived or real obstacles to acting in the best interest of the organization.

One class of these perverse incentives comes from the methodology police. These departments exist to turn all processes into checklists. If you just follow the checklist, everything will work. But more importantly, if you follow the checklist, you can’t be blamed if you fail.

How’s that for perverse?

A great example is that there is rarely any incentive to not spend money. Before you decide I’m out of touch with reality, notice I didn’t say “save money.” I said “not spend money.” Here’s the difference.

IT by the numbers

Let’s say you do internal IT for a company that produces widgets. Someone from Operations says that they need a new application to track defects. If you follow the checklist, you:

  • engage a business analyst, who
  • documents the business requirements, including
  • calculating the Quantifiable Business Objective, then
  • writes a specification, which is
  • inspected for completeness and format, then
  • passed on to an architect, who
  • determines if there is an off-the-shelf solution, or if it needs custom development.

At this point you’re probably several weeks and tens of thousands of dollars into the Analysis Phase of your Software Development Lifecycle. Whose job is it to step in and point out that all Ops needs is a spreadsheet with an input form and some formulas to spit out a weekly report?

Let’s put some numbers to this thing.

Assume the new reporting system will identify production problems. With this new information, Operations can save $100,000 per month. A standard ROI calculation says the project should cost no more than $2.4-million (24 months × $100,000), so that it pays for itself within two years.

Take 25% of that for hardware costs and 25% for first-year licensing, and you’ve got $1.2-million for labor. If people are billed out at $100/hour – and contractors can easily go three to four times that in niche industries – that’s 300 man-weeks of labor. Get ten people on the project – a project manager, two business analysts, four programmers, two testers, one sysadmin – and that’s about seven months.

If everything goes exactly to plan, seven months after the initial request you’re $2.4-million in the hole and you start saving $100,000 per month in reduced production costs. Everyone gets their bonus.

And 31 months after the initial request, assuming nothing has changed and the new system has $0 support costs, you break even on the investment.
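Here’s that arithmetic spelled out; every figure comes from the scenario above, with an approximate weeks-to-months conversion:

```python
# The big-project arithmetic, spelled out. All figures come from the
# scenario above; the 4.33 weeks-per-month conversion is approximate.

monthly_savings = 100_000                      # Ops savings per month
payback_months = 24                            # "pay for itself within two years"
budget = monthly_savings * payback_months      # $2,400,000 project ceiling

hardware = 0.25 * budget                       # $600,000
licensing = 0.25 * budget                      # $600,000 first-year licensing
labor_budget = budget - hardware - licensing   # $1,200,000 for labor

labor_hours = labor_budget / 100               # $100/hour -> 12,000 hours
man_weeks = labor_hours / 40                   # 300 man-weeks
duration_weeks = man_weeks / 10                # ten people -> 30 weeks

duration_months = duration_weeks / 4.33        # about 7 months
break_even = round(duration_months) + payback_months
print(break_even)                              # 31 months after the request
```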

But what if …

Way back at the first step, you give a programmer a week to come up with a spreadsheet. Maybe the reports aren’t as good as what the large project would have produced, and you only enable $50,000 per month in savings. That week of work costs you $4,000 in labor, and $0 in hardware and licensing.

You are only able to show half the operational savings, so you don’t get a bonus. You don’t get to put “brought multi-million dollar project in on time and on budget” on your resume.

And 31 months after the initial request, the spreadsheet has enabled over $1.5-million in operational savings.
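For a rough side-by-side, here are both outcomes at the 31-month mark, using the same figures as above and rounding delivery times to whole months:

```python
# Rough side-by-side at the 31-month mark, using the same figures
# as above and rounding delivery times to whole months.

months = 31

# Big project: 7 months to build, then $100K/month in savings.
big_cost = 2_400_000
big_savings = (months - 7) * 100_000           # $2,400,000
print(big_savings - big_cost)                  # 0 -- just breaking even

# Spreadsheet: one week to build, then $50K/month in savings.
small_cost = 4_000
small_savings = months * 50_000                # $1,550,000
print(small_savings - small_cost)              # 1,546,000 ahead
```

The project that “succeeded” by every checklist measure is at zero, while the quick spreadsheet nobody got a bonus for is $1.5-million ahead.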