How corporations spin a risk into a benefit

James Maguire, managing editor of Datamation, wrote in “Indian IT Firms: Is the Future Theirs?”:

“In the past, companies used to award IT outsourcing contracts that were longer, 7-10 years. They would hire one firm to do it, and that firm would have subcontractors,” Ford-Taggart says. Now, big clients split up major projects and request bids on individual components. “Then they’ll say, ‘Look, we can have this portion done in India for 30% less.’”

This might cause more managerial headaches for the client company, but in fact it’s less risk: clients have fewer eggs in a basket with any one IT firm, so if a project goes bad or creates cost overruns, the entire effort won’t take such a big hit.

Emphasis added.

I hate this corporate-executive definition of “risk.” And before you think I don’t get it: I understand what they mean; I just think they’re wrong.

When I talk about reducing risk, I mean that I’m making problems less likely to occur. What the execs mean when they take this position is that they’re diversifying the accountability. They want to be able to report that while 20% of the project is at risk, the other 80% is on track.

Well sure, the other 80% can’t be used without the 20% that’s missing. But the focus here should be that we’ve got 80% of the project on time and on budget!

If you admit up front that your process is increasing management headaches, you should realize you’re increasing the likelihood of problems. You may be mitigating the potential impact, but that’s not a given.

Any mitigation strategy that seeks to reduce the impact of a failure, and does so by increasing the likelihood of failure, is probably a bad idea.

How using people in your ads increases sales

We’ve all seen the light beer commercials with young, impossibly attractive people guzzling a product that, by all rights, should be making them fat and unhealthy. The obvious question is, “Do they think we’re really that stupid? That we think if we drink their beer we’ll become supermodels?”

Good question. But it gets the real thinking backwards.

When you show people using the product, you’re helping the prospect visualize themselves using it. You make it seem familiar and safe instead of new and unknown.

But since you can’t put each viewer into their own personalized version of the ad (yet), you have to use a stand-in. If you show someone the viewer would like to be, it increases their desire to recreate that image.

So you don’t want the prospect thinking, “If I use that product, I will become that cool, attractive person in the ad.” You want them thinking, “Because I am cool and attractive, I can see myself using that product.”

It’s the same thinking that leads to cliques and fads:

  • I’m not cool because I wear Air Jordans. I wear Jordans because I’m cool.
  • I’m not tough because I play rugby. I play rugby because I’m tough.
  • I’m not a redneck because I drive a truck. I drive a truck because I’m a redneck.

You don’t want your prospect to think your product will make them more attractive. You want to help them confirm what they already believe about themselves.

Why no one wants software: a case study

No one wants software.

Really, no one.

What they want is documents … pictures … loan applications … insurance claims … Software is just another tool they can use to maybe produce more of the things they really want, better or cheaper.

What this means to legions of unhappy, cynical programmers is that no one cares about the quality of the code. Nope. They don’t. And odds are, they shouldn’t.

Here’s a little story to illustrate why. (By the way, this is the kind of thing you’ll see in an MBA course. If you don’t already do this kind of thinking, you should stop telling yourself there’s no value in getting an MBA.)

The pitch

I’m in charge at an insurance company. I have a manual, paper-based process that requires 100 people working full time to process all the claims.

Someone comes in and offers to build me a system to automate much of the process. He projects the new system could reduce headcount by half. It will take six months for a team of four people to build.

If you’re a programmer, and you think this probably sounds like a winner, try looking at the real numbers.

The direct cost

100 claims processors working at $8/hour. $1.6M per year in salaries. (Let’s leave benefits out of it. The insurance company probably does.)

Four people on the project:

  • architect/dev lead, $100/hour
  • junior dev, $60/hour
  • DBA, $80/hour
  • analyst/UI designer, $75/hour

Total $325/hour, or about $325k for six months’ work.

Still sounds like a winner, right? $325k for an $800k/year savings!

Except the savings doesn’t start for six months. So my first-year savings are at best $400k. Damn, now it’s barely breaking even in the first year. That’s OK though, it’ll start paying off nicely in year two.

The hidden costs

Oh wait, then I need to include training costs for the new system. Let’s figure four weeks of training before processors are back to their current efficiency. Maybe a short-term 20% bump in headcount through a temp agency to maintain current throughput during the conversion and training. Add the agency cut and you’re paying $15/hour for the temps. 20 temps x $15/hour x 40 hours x 4 weeks = $48k one-time cost. Now my first-year cost is up to $373k.

And don’t forget to add the cost of hiring a trainer. Say two weeks to create the training materials plus the four weeks of on-site training. Since this is a high-skill, short-term gig (possibly with travel) I’ll be paying probably $150/hour or more. $36k for the trainer.

So if everything goes perfectly, I’ll be paying $409k in the first year. And actually, I don’t get even the $400k savings. I can’t start cutting headcount until efficiency actually doubles. Generously assume that will be three months after the training finishes. Now I’ve got three months of gradually declining headcount, and only two months of full headcount reduction. Maybe $200k in reduced salary.

Of course you need to add a percentage for profit for the development company. Let’s go with 30%. So …

The balance sheet

Software $325k + 30% = $422.5k
Trainer $36k
Training (temps) $48k
Total Y1 cost $506.5k
Projected Y1 savings $200k
Shortfall $306.5k
Y2 savings $64k/month

The project breaks even near the end of the fifth month of year 2. And that’s if NOTHING GOES WRONG! The code works on time, it does exactly what it’s supposed to, I don’t lose all my senior processors as they see the layoffs starting, etc. etc. etc.
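If you want to check that balance sheet yourself, the arithmetic above fits in a back-of-the-envelope script. All the figures come from the post; the only assumption of mine is 160 working hours per month (40-hour weeks).

```python
# Back-of-the-envelope model for the big pitch. All dollar figures
# come from the post; HOURS_PER_MONTH is an assumed 40-hour-week rate.
HOURS_PER_MONTH = 160

software = 325_000 * 1.30   # dev cost plus the 30% profit margin
trainer = 36_000
temps = 48_000
year1_cost = software + trainer + temps           # $506,500

year1_savings = 200_000     # ramp-down buys only ~2-3 months of reduced salary
shortfall = year1_cost - year1_savings            # $306,500

# Halving headcount: 50 fewer processors at $8/hour.
monthly_savings = 50 * 8 * HOURS_PER_MONTH        # $64,000/month
months_into_y2 = shortfall / monthly_savings      # ~4.8 months

print(f"Y1 cost ${year1_cost:,.0f}, shortfall ${shortfall:,.0f}")
print(f"Breaks even {months_into_y2:.1f} months into year 2")
```

Run it and you land at roughly 4.8 months, i.e. near the end of the fifth month of year 2, exactly as the post says, and that is still the everything-goes-perfectly case.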

The other pitch

Then a lone consultant comes in and offers to build me a little Access database app. A simple data-entry form to replace the paper version, a printable claim form, and a couple quick management reports. Two months’ work, and I’ll see a 10% headcount reduction. The consultant will do the training, which will only take a week because the new app will duplicate their current workflow.

Software $200/hour x 8 weeks = $64k
Training $200/hour x 1 week = $8k
Total Y1 cost $72k
Savings $12.8k/month (starting in the fourth month)

The project breaks even about six months after delivery, so early in the ninth month of Y1. Since the scope was much less ambitious, the risk is also lower.
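The same sanity check for the consultant’s pitch, again with the post’s figures and an assumed 40-hour week:

```python
# The other pitch: small Access app. Figures from the post;
# 40-hour weeks and 160-hour months are assumed.
software = 200 * 40 * 8     # $200/hour for 8 weeks = $64,000
training = 200 * 40 * 1     # one more week = $8,000
total_cost = software + training                  # $72,000

# 10% headcount reduction: 10 fewer processors at $8/hour.
monthly_savings = 10 * 8 * 160                    # $12,800/month
months_of_savings = total_cost / monthly_savings  # ~5.6 months

# Savings start in the fourth month, so break-even lands
# early in the ninth month of year 1.
print(f"Cost ${total_cost:,}; breaks even after "
      f"{months_of_savings:.1f} months of savings")
```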

The obvious choice

Which sales pitch do you think I will go with? Does that mean I don’t respect “proper” software development practices? And, the bottom line: should I spend more money on the “better” solution? And why?

The difference between innovation and sloppiness

I took a couple of art classes in college. In Life Drawing we were supposed to look at the arrangement or person in front of us and put it on paper. In pencil, then charcoal, then ink … watercolor … oil … etc.

I was always good at photorealism. I might not have been fast, but I had some pencil drawings that could have passed (at a distance) for black-and-white photos.

There was another student, an art major who fancied himself an “artiste”. His work spanned the range from abstract to really abstract. He looked down on my mere technical facility.

But my grades were as good as his, sometimes better. It seems that when he talked about his rejection of formal rules, what he really meant was that he wasn’t able to do realism. He didn’t have the command of the tools, the mere technical facility. So everything he did owed as much to chance as to intent.

He may have been right that I didn’t have the creativity to do modern art. I’ll admit that I appreciate paintings that simply look good, without high-flying pretension or socio-political overtones. I guess I’ll never have a SoHo gallery showing. C’est la vie.

But with all his fervor, and whatever glorious visions he had in his head, he couldn’t reliably get them onto the paper. He couldn’t create something specific, on purpose.

[Cue un-subtle segue … ]

But what does this have to do with the business world? Scott Berkun wrote recently about how constraints can help creative thinking. When a large corporation does “blue sky” thinking, they can wander aimlessly and never produce. Constraints set a direction.

But I think there’s another problem with “blue sky” thinking that goes beyond a lack of direction. It’s summed up by Voltaire’s famous maxim:

The perfect is the enemy of the good.

When a company tries to “blue sky” a problem, they are implicitly seeking perfection: With no limits, what would be possible?

But there are always limits. They may be self-imposed, inconsequential, misunderstood, overblown, or in any number of other ways not real limits. And it helps to know the difference.

When you start out by asking people what they would do if there were no constraints, don’t be surprised when they come back with a solution that can’t possibly work, and then convince themselves that theirs is the only possible solution. By then, you’ve already lost.

Hate the game if you want, don’t pretend it isn’t being played

When I was in college I worked at a bar that had a pool room. I played a lot when it was slow and after hours. I got pretty good on those tables. Only “pretty good” and only on those tables … I knew where the dead spots were in the rails.

If I bet anything on the game, it was usually who bought the next round. Sometimes we couldn’t drink as fast as we played, and we’d go for a dollar a game. We were all friends, and it was just to make the game more interesting.

But every so often someone would come in who none of us recognized. You could usually tell pretty quickly who was more than just a casual player. Sometimes they’d see if they could get a game for $5. I’d always take them up on it. And I always won the first game.

Then they’d ask for a rematch … let them win their money back. I’d take that game, too. Sometimes I’d win, sometimes not. But I was breaking even so I didn’t care. It was usually after the second game that they’d look at their watch and realize they had somewhere they needed to be. But they had time for one more game.

How about one last round for $50?

No.

How about $30?

No.

Come on, give me a chance to win it back!

No, you’re a better pool player than I am. But you suck at reading people. You thought I didn’t know you were throwing the first two games.

That’s when they’d get pissed. It wasn’t fair that I took their money even though they could beat me without trying hard. I was just a punk-ass bitch that couldn’t carry their stick.

Yup. But I had their money, and they were leaving.

If I wanted to, I could have spent the equivalent of a full-time job becoming a professional level pool player. I would have run into diminishing returns as I was going up against ever stronger competition. It would have dominated my life, and the only way to make a steady living would be constant travel.

Plus there’d be no retirement plan. Your earnings stop the moment you stop playing. The skills don’t translate to anything else worthwhile.

So I stayed in school and learned to be a programmer, which doesn’t suffer from any of those negatives. </sarcasm>


Hmm, that came out a lot longer (and a lot faster) than I expected. It all started from one idea, though: You don’t win by being better, you win by playing better. And you start by knowing what game you’re playing.

When you’re in an interview, you’re playing the “get the job” game. Once you have the job you’re in the “impress the decision-makers” game. If you go the uISV route you’re in the “sell the most product” game.

Being better at coding is one of the plays in each of those playbooks. But it’s not the one they keep score with.

Is your resume hot or not?

Getting a job is like getting a date.

  1. There may or may not be rules to it.
  2. It may or may not be fair.
  3. They may be so desperate it doesn’t matter what you say. But do you want to hook up with someone who’s desperate?
  4. Or maybe it doesn’t matter what you say because they’ve decided the answer is “no” before you opened your mouth.
  5. The more in-demand you are, the more choosy you can afford to be. But if you want to get this job — or this date — then you’re probably going to have to say what they want to hear. Good luck figuring out what that is.

Drug Testing as a Job Prerequisite

This topic comes up often enough online that I figured it would be more convenient to just lay out my thoughts on the subject once and point people to it. So here goes:

  • Most companies use pre-employment drug testing to filter people out because they don’t have any good performance-based measurements they can use. If they knew how to directly evaluate a potential employee, they wouldn’t need artificial criteria.
  • Some companies have a legitimate need for pre-employment testing: hospitals, pharmacies, chemical labs. Jobs where people will have unusual access to drugs or their components.
  • There are some jobs where you clearly don’t want someone working under the influence: bus drivers, teachers, etc. But if you’re going to make that argument, doesn’t it make more sense to do random screenings after they’re employed?
  • Some jobs have an inherent conflict with drug use: law enforcement. Enough said on that one.
  • As a job candidate, I would prefer that a company’s policies aligned with my own. But I recognize that most HR policies are about balancing various legal liabilities, and have little to do with the day-to-day business culture. So yes, I’ll take a test if they want one.
  • As a potential employer or hiring manager, I prefer not to know what an employee does when they’re away from work. But if my employer’s HR department requires a test, that’s the policy I’ll pass along to candidates.

Please don’t believe anything in this post

In Yet Another Internet Forum Discussion About Offshoring (I hereby claim authorship of the acronym YAIFDAO) someone wrote:

A lot of decisions should NOT be left to developers to make. imho, the time to think out of the box is gone by the time it is TIME TO CODE. It’s not time to think about alternatives to what to do.

That is absolutely right. You never want developers talking to end users. They might suggest some other plan than what was painstakingly shepherded through four levels of approvals.

And let’s just squash the notion right now that sometimes there are trade-offs to consider. Just because the analyst’s solution will take three weeks of coding effort and a new application server, while the programmer knows of a reusable component that will take one hour and no new hardware, is no reason to institute the Change Control Process.

Alternatives should always be considered in isolation from the impact they cause. Implementation issues should never be allowed to intrude into the world of business decisions.

Next thing you know someone’s going to suggest that maybe mere programmers could have a meaningful contribution to make to the business process. What rubbish.

How to Kill a Project by Accident

In my last post I talked about perverse incentives in project management. Something I mentioned in passing was what happens when you don’t have positive incentives: the cases where there is simply no incentive to do the right thing. I realized there’s actually another way to get this wrong by trying too hard to get it right.

Let’s say you have just finished a project, gone into production, and it blew up. Data corruption, security problems, too slow, everything that can go wrong with a product. First you do an emergency project to fix it, then you do the after-action review to see what went wrong.

What you find is that you didn’t have good test coverage. There were whole modules that were never reviewed before going into production. It’s painfully obvious that there was a complete breakdown in the QA process.

Fixing yesterday’s problem

You’re not going to make that mistake again. You write up policies for demonstrating test coverage. You create reports to track test execution and results. You re-write employee performance expectations to align with the new methodology. (If you’ve read my stuff before, you should hear the alarm bells start ringing when you see the “m word”.)

Your next project is going exactly according to plan. Test coverage is at 90% overall, with 100% of high priority use cases covered. You’re on schedule to execute all test cases before acceptance testing starts. Defects are identified and corrected.

Then the users get it. They hate it. It doesn’t do anything the way they wanted it to. Not only that, it doesn’t even do what they asked for. How could you be so far off?

You don’t get what you don’t measure

Look back at what incentives you created. You reward doing the methodology, following the checklist. Test coverage is great, but it’s not the goal of the project. The goal is to provide something of value to the users … or at least it should be. Did you include a line in the new process that explicitly says, “Check with the users that it’s doing what they need”?

So how do you create the right incentives? Just flip the emphasis. Instead of saying an employee’s performance evaluation is 80% following the methodology and 20% client satisfaction, turn the numbers around. Your users don’t care that you followed “best practices.” They care that the product does what they need. Where is that measured in your methodology?

How to Fail by Succeeding

Dave Christiansen over at Information Technology Dark Side is talking about perverse incentives in project management, which he defines as:

Any policy, practice, cultural value, or behavior that creates perceived or real obstacles to acting in the best interest of the organization.

One class of these perverse incentives comes from the methodology police. These departments exist to turn all processes into checklists. If only you would follow the checklist, everything will work. But more importantly, if you follow the checklist you can’t be blamed if you fail.

How’s that for perverse?

A great example is that there is rarely any incentive to not spend money. Before you decide I’m out of touch with reality, notice I didn’t say “save money.” I said “not spend money.” Here’s the difference.

IT by the numbers

Let’s say you do internal IT for a company that produces widgets. Someone from Operations says that they need a new application to track defects. If you follow the checklist, you:

  • engage a business analyst, who
  • documents the business requirements, including
  • calculating the Quantifiable Business Objective, then
  • writes a specification, which is
  • inspected for completeness and format, then
  • passed on to an architect, who
  • determines if there is an off-the-shelf solution, or if it needs custom development.

At this point you’re probably several weeks and tens of thousands of dollars into the Analysis Phase of your Software Development Lifecycle. Whose job is it to step in and point out that all Ops needs is a spreadsheet with an input form and some formulas to spit out a weekly report?

Let’s put some numbers to this thing.

Assume the new reporting system will identify production problems. With this new information, Operations can save $100,000 per month. A standard ROI calculation says the project should cost no more than $2.4-million, so that it will pay for itself within two years.

Take 25% of that for hardware costs and 25% for first-year licensing, and you’ve got $1.2-million for labor. If people are billed out at $100/hour – and contractors can easily go three to four times that in niche industries – that’s 300 man-weeks of labor. Get ten people on the project – a project manager, two business analysts, four programmers, two testers, one sysadmin – and that’s about seven months.

If everything goes exactly to plan, seven months after the initial request you’re $2.4-million in the hole and you start saving $100,000 per month in reduced production costs. Everyone gets their bonus.

And 31 months after the initial request, assuming nothing has changed, you break even on the investment. Assuming the new system had $0 support costs.
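Here’s that checklist-project arithmetic as a sketch. The figures are from the post; the 4.33-weeks-per-month conversion is my assumption.

```python
# ROI arithmetic for the full checklist project. Figures from the post;
# WEEKS_PER_MONTH is an assumed calendar conversion.
WEEKS_PER_MONTH = 4.33

monthly_savings = 100_000
payback_window = 24                     # months: "pay for itself within two years"
max_budget = monthly_savings * payback_window      # $2.4M ceiling

labor_budget = max_budget * 0.50        # after 25% hardware + 25% licensing
bill_rate = 100                         # $/hour
man_weeks = labor_budget / (bill_rate * 40)        # 300 man-weeks
team_size = 10
build_months = man_weeks / team_size / WEEKS_PER_MONTH   # ~7 months

break_even_month = build_months + payback_window   # ~31 months after the request
print(f"{man_weeks:.0f} man-weeks, ~{build_months:.0f} months to build, "
      f"break-even ~{break_even_month:.0f} months after the request")
```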

But what if …

Way back at the first step, you gave a programmer a week to come up with a spreadsheet. Maybe the reports aren’t as good as what the large project would have produced. You only enable $50,000 per month in savings. That week to produce it costs you $4,000 in labor, and $0 in hardware and licensing.

You are only able to show half the operational savings, so you don’t get a bonus. You don’t get to put “brought multi-million dollar project in on time and on budget” on your resume.

And 31 months after the initial request, the spreadsheet has enabled over $1.5-million in operational savings.
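To put the two options side by side at the 31-month mark (figures from the post; the month-by-month timing is approximated the way the post describes it):

```python
# The one-week spreadsheet versus the seven-month project, 31 months
# after the initial request. Figures from the post; the savings windows
# (30 months for the spreadsheet, 24 for the project) follow its timeline.
spreadsheet_cost = 100 * 40             # one programmer-week = $4,000
spreadsheet_savings = 50_000 * 30       # half the savings, but starting month 2

project_cost = 2_400_000
project_savings = 100_000 * 24          # full savings, but only from month 8

print(f"Spreadsheet net: ${spreadsheet_savings - spreadsheet_cost:,}")
print(f"Project net:     ${project_savings - project_cost:,}")
```

The big project has just clawed its way back to zero while the week-long spreadsheet is roughly $1.5-million ahead, which is the whole point of the post.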