In my last post I talked about perverse incentives in project management. Something I mentioned in passing was what happens when you don’t have positive incentives: the cases where there is simply no incentive to do the right thing. I’ve since realized there’s another way to get this wrong: trying too hard to get it right.
Let’s say you’ve just finished a project, it’s gone into production, and it blew up. Data corruption, security holes, sluggish performance: everything that can go wrong with a product did. First you run an emergency project to fix it, then you hold the after-action review to see what went wrong.
What you find is that you didn’t have good test coverage. Whole modules went into production without ever being reviewed. It’s painfully obvious that there was a complete breakdown in the QA process.
Fixing yesterday’s problem
You’re not going to make that mistake again. You write up policies for demonstrating test coverage. You create reports to track test execution and results. You rewrite employee performance expectations to align with the new methodology. (If you’ve read my stuff before, you should hear the alarm bells start ringing when you see the “m word”.)
Your next project is going exactly according to plan. Test coverage is at 90% overall, with 100% of high-priority use cases covered. You’re on schedule to execute all test cases before acceptance testing starts. Defects are identified and corrected.
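To make that concrete, here’s a minimal sketch of the kind of quality gate this methodology tends to produce. The thresholds are the ones above; the function and names are hypothetical, not from any real tool. Notice what it measures, and what it doesn’t.

```python
# Hypothetical build gate enforcing the new methodology's coverage thresholds.
# Names and structure are illustrative only.

COVERAGE_FLOOR = 0.90       # 90% overall test coverage required
HIGH_PRIORITY_FLOOR = 1.00  # 100% of high-priority use cases must be covered

def quality_gate(overall_coverage: float, high_priority_coverage: float) -> None:
    """Fail the build when coverage drops below the mandated thresholds."""
    if overall_coverage < COVERAGE_FLOOR:
        raise SystemExit(f"FAIL: overall coverage {overall_coverage:.0%} is below 90%")
    if high_priority_coverage < HIGH_PRIORITY_FLOOR:
        raise SystemExit("FAIL: high-priority use cases are not fully covered")
    # Everything the methodology measures has passed. Nothing here asks
    # whether the product does what the users actually need.
    print("PASS")

quality_gate(overall_coverage=0.90, high_priority_coverage=1.00)  # prints PASS
```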
Then the users get it. They hate it. It doesn’t do anything the way they wanted it to. Not only that, it doesn’t even do what they asked for. How could you be so far off?
You don’t get what you don’t measure
Look back at the incentives you created. You rewarded following the methodology and checking off the checklist. Test coverage is great, but it’s not the goal of the project. The goal is to deliver something of value to the users … or at least it should be. Did you include a line in the new process that explicitly says, “Check with the users that it’s doing what they need”?
So how do you create the right incentives? Just flip the emphasis. Instead of saying an employee’s performance evaluation is 80% following the methodology and 20% client satisfaction, turn the numbers around. Your users don’t care that you followed “best practices.” They care that the product does what they need. Where is that measured in your methodology?
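Here’s a minimal sketch of that flip, just to make the arithmetic concrete. The 80/20 weights come from the example above; the scores and names are made up.

```python
# Weighted performance evaluation, sketched with hypothetical 0-100 scores.

def performance_score(methodology: float, client_satisfaction: float,
                      methodology_weight: float) -> float:
    """Blend process compliance and client satisfaction by the given weight."""
    return (methodology_weight * methodology
            + (1.0 - methodology_weight) * client_satisfaction)

# A project with flawless process compliance but unhappy users:
old = performance_score(methodology=95, client_satisfaction=40, methodology_weight=0.8)
new = performance_score(methodology=95, client_satisfaction=40, methodology_weight=0.2)
print(old, new)  # 84.0 vs 51.0
```

Under the old weights, that project scores 84 and everyone gets a bonus. Flip the weights and the same project scores 51, which is much closer to how the users experienced it.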