On Simplicity

This originally appeared on the Joel on Software discussion group. The issue was the value of simplicity in product design, and whether simplicity requires removing features.


There is a difference between features of the product and features exposed in the user interface.  Take automobile traction control as an example.  If you’re not familiar with it, the system uses sensors in each wheel to detect a slipping wheel, then either applies the brakes or limits power to that wheel until it stops slipping.  Many cars have this feature now, but most don’t have any user interface to it.

Some cars, though, have a button you can press when you’re driving in icy conditions.  This changes the response of the system to be more aggressive about stopping wheelspin.

Then there’s the Corvette.  The first version to have traction control had the option to completely disable it.  Under racetrack conditions, sometimes the fastest line requires intentionally sliding the rear end to set up for the next turn.  Traction control can’t anticipate that.

In the real world, how many people are actually good enough drivers that they can take advantage of that exceptional condition?  How often are they really able to take advantage of it?  How many ever have?

Now how many people want to believe that someday they’ll drive like that?  The feature makes the car better.  Needlessly exposing the feature in the user interface makes the car more marketable.  People like to believe they’re not average, but instead the elite for whom “good enough” just isn’t.

A related phenomenon leads to Jeep commercials touting the off-road prowess of vehicles that, for the most part, never leave the suburbs.  People want to believe that they someday will do something exceptional, and want their product to support that belief.

Nissan, back when they were called Datsun, once had a commercial that came right out and said it: “You won’t go 0-50 in 6 seconds flat but you know you could.  You won’t use its drag speed of over 100 mph but you know you could.  And when the light turns green you won’t flaunt your turbo power but knowing you could is awesome!”

https://www.youtube.com/watch?v=CgAxVtZbPZo

Now apply this to the washing machine that senses what kind of load it has.  Personally, I can’t think of many things I care less about than my skill in classifying loads of laundry.  But I would still have trouble accepting a salesman’s pitch that really, the machine is smarter than me about this.  If Consumer Reports backed it up, though, I’d take a one-button washer and a one-button dryer.

===

Now consider the first iPod. It didn’t play several popular audio file formats. Lack of this capability is not “simplicity”.  If the codecs were added, the user interface wouldn’t have to change.  More features, same simplicity.

===

Then there are cars with remote keyless entry that doesn’t depend on pressing a button. Simply approach the car with the fob in your pocket and the doors unlock. Very simple. But there are times you need to decide whether the car is locked or not:

  • If you have your spare keys in your gym bag in the trunk when you walk away, the car doesn’t lock. Oops.
  • You can’t leave the car unlocked while you and the kids make several trips to unload groceries.  You’d have to hand off the remote every time one of you goes back down to the car.

I think the process probably could be simplified, but I would want extensive usability testing and design work before I would take a car that decided for me when to be locked and when to be unlocked.

===

Finally, consider two different cars.

Vehicle #1:

  • Three-speed stick shift
  • Two-wheel drive
  • Solid rear axle

Vehicle #2:

  • Seven-speed automatic
  • Adaptive all-wheel drive
  • Positraction
  • Traction control

Which one is simpler?  Hmm, that depends.  Do you mean simpler in design, or simpler for the user?

Let’s make it closer, and say Vehicle #1 is now an automatic.  Now they’re equally simple for the user.  But Vehicle #2 does more, making the decisions for the user.

In “Choices = Headaches” Joel said the user interface should be simple.  Not that the features shouldn’t be there, just that the user shouldn’t have to choose when to use which.

How to negotiate a better contracting rate

In any transaction, the person with more information and more experience usually comes out ahead. That’s why the typical consumer negotiating with a full-time salesman is at a huge disadvantage. A car dealer, for example, might negotiate several sales every week, while you only do it every two to three years.

So people making big decisions — new car, new house, new job — do as much research as they can, trying to level the playing field just a little bit. And lots of the information they come up with is flat out wrong.

One of the most damaging pieces of advice to follow when looking for a job is to rely on a headhunter’s self-interest to get you the best rate. The idea – which seems quite reasonable on the surface – is that the headhunter’s commission is a percentage of your salary. Obviously they want this number to be as high as possible. It’s easy to believe that their self-interest lines up with yours.

The first flaw with this idea is that the headhunter doesn’t get anything if someone else gets the job. If there are multiple qualified applicants, you are on the wrong side of a bidding war. The headhunter doesn’t want to price you out of the running, so the incentive is to lowball your rate.

The second flaw is that every day the headhunter spends searching for your perfect job is a day they don’t spend finding a job for the dozen other people they’re working with. They make more money by placing more people than they do by placing fewer people at higher rates. 30% of $70k x 3 is more than 30% of $80k x 2. Their incentive favors the quick hit, not protecting your interests.
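
To make that incentive concrete, here is a minimal sketch of the arithmetic, using the same made-up salaries from the sentence above and an assumed 30% commission rate:

    # Rough sketch of the headhunter's incentive, using the figures above.
    # The 30% commission and both salaries are illustrative assumptions.
    commission_rate = 0.30

    quick_placements = 3 * commission_rate * 70_000    # three fast placements at $70k
    careful_placements = 2 * commission_rate * 80_000  # two slower placements at $80k

    print(f"Three quick placements: ${quick_placements:,.0f}")    # $63,000
    print(f"Two careful placements: ${careful_placements:,.0f}")  # $48,000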

So what do you do about it?

  • Stop thinking of the headhunter as your own personal agent.

    They’re doing a job for you, but they are more interested in getting you something than in getting you the best thing.

  • Know what you’ll accept before taking the interview.

    Have a bottom line that you won’t go below. Based on what you hear in the interview, you may decide to demand even more to accept the conditions. But your lower limit should never be negotiable.

  • Ask what the range is for the position up front.

    There’s no point in wasting time on a position that you’ll never take.

  • Never give up something for nothing.

    If they want you to travel and you don’t want to do it, ask for extra vacation in return. If they want you to be on call, ask for comp time. Never give up one of your demands without getting a concession in return.

  • Get it in writing.

    You can’t deposit a promise in the bank, or buy groceries with verbal assurances.

So are all headhunters ready to sell you out at a moment’s notice? Of course not, even if only to preserve their reputation. But if you want to avoid being disappointed, you should never forget that your best interest only sometimes matches up with the headhunter’s interests.

Hate the game if you want, don’t pretend it isn’t being played

When I was in college I worked at a bar that had a pool room. I played a lot when it was slow and after hours. I got pretty good on those tables. Only “pretty good” and only on those tables … I knew where the dead spots were in the rails.

If I bet anything on the game, it was usually who bought the next round. Sometimes we couldn’t drink as fast as we played, and we’d go for a dollar a game. We were all friends, and it was just to make the game more interesting.

But every so often someone would come in who none of us recognized. You could usually tell really fast who was better than just a casual player. Sometimes they’d see if they could get a game for $5. I’d always take them up on it. And I always won the first game.

Then they’d ask for a rematch … let them win their money back. I’d take that game, too. Sometimes I’d win, sometimes not. But I was breaking even so I didn’t care. It was usually after the second game that they’d look at their watch and realize they had somewhere they needed to be. But they had time for one more game.

How about one last round for $50?

No.

How about $30?

No.

Come on, give me a chance to win it back!

No, you’re a better pool player than I am. But you suck at reading people. You thought I didn’t know you were throwing the first two games.

That’s when they’d get pissed. It wasn’t fair that I took their money even though they could beat me without trying hard. I was just a punk-ass bitch that couldn’t carry their stick.

Yup. But I had their money, and they were leaving.

If I wanted to, I could have spent the equivalent of a full-time job becoming a professional level pool player. I would have run into diminishing returns as I was going up against ever stronger competition. It would have dominated my life, and the only way to make a steady living would be constant travel.

Plus there’d be no retirement plan. Your earnings stop the moment you stop playing. The skills don’t translate to anything else worthwhile.

So I stayed in school and learned to be a programmer, which doesn’t suffer from any of those negatives. </sarcasm>


Hmm, that came out a lot longer (and a lot faster) than I expected. It all started from one idea, though: You don’t win by being better, you win by playing better. And you start by knowing what game you’re playing.

When you’re in an interview, you’re playing the “get the job” game. Once you have the job you’re in the “impress the decision-makers” game. If you go the uISV route you’re in the “sell the most product” game.

Being better at coding is one of the plays in each of those playbooks. But it’s not the one they keep score with.

How to Fail by Succeeding

Dave Christiansen over at Information Technology Dark Side is talking about perverse incentives in project management, which he defines as:

Any policy, practice, cultural value, or behavior that creates perceived or real obstacles to acting in the best interest of the organization.

One class of these perverse incentives comes from the methodology police. These departments exist to turn all processes into checklists. If only you would follow the checklist, everything will work. But more importantly, if you follow the checklist you can’t be blamed if you fail.

How’s that for perverse?

A great example is that there is rarely any incentive to not spend money. Before you decide I’m out of touch with reality, notice I didn’t say “save money.” I said “not spend money.” Here’s the difference.

IT by the numbers

Let’s say you do internal IT for a company that produces widgets. Someone from Operations says that they need a new application to track defects. If you follow the checklist, you:

  • engage a business analyst, who
  • documents the business requirements, including
  • calculating the Quantifiable Business Objective, then
  • writes a specification, which is
  • inspected for completeness and format, then
  • passed on to an architect, who
  • determines if there is an off-the-shelf solution, or if it needs custom development.

At this point you’re probably several weeks and tens of thousands of dollars into the Analysis Phase of your Software Development Lifecycle. Whose job is it to step in and point out that all Ops needs is a spreadsheet with an input form and some formulas to spit out a weekly report?

Let’s put some numbers to this thing.

Assume the new reporting system will identify production problems. With this new information, Operations can save $100,000 per month. A standard ROI calculation says the project should cost no more than $2.4-million, so that it will pay for itself within two years.

Take 25% of that for hardware costs and another 25% for first-year licensing, and you’ve got $1.2-million for labor costs. If people are billed out at $100/hour – and contractors can easily go three to four times that for niche industries – that’s 300 man-weeks of labor. Get ten people on the project – a project manager, two business analysts, four programmers, two testers, one sysadmin – and that’s about seven months.
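
Here’s a quick back-of-the-envelope sketch of that math; every figure is the assumption from the paragraphs above, not data from a real project:

    # Back-of-the-envelope version of the project math above.
    # Every figure is an assumption from the text, not real project data.
    monthly_savings = 100_000                   # expected operational savings per month
    payback_months = 24                         # "pay for itself within two years"
    project_budget = monthly_savings * payback_months   # $2,400,000

    labor_budget = project_budget * 0.50        # 25% hardware + 25% licensing leaves half
    billing_rate = 100                          # dollars per hour
    labor_hours = labor_budget / billing_rate   # 12,000 hours
    person_weeks = labor_hours / 40             # 300 person-weeks
    team_size = 10
    duration_weeks = person_weeks / team_size   # 30 weeks, roughly seven months

    print(f"Project budget: ${project_budget:,.0f}")
    print(f"Labor budget:   ${labor_budget:,.0f}")
    print(f"Person-weeks:   {person_weeks:,.0f}")
    print(f"Duration:       {duration_weeks:.0f} weeks (~{duration_weeks / 4.33:.0f} months)")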

If everything goes exactly to plan, seven months after the initial request you’re $2.4-million in the hole and you start saving $100,000 per month in reduced production costs. Everyone gets their bonus.

And 31 months after the initial request, assuming nothing has changed, you break even on the investment. Assuming the new system had $0 support costs.

But what if …

Way back at the first step, you gave a programmer a week to come up with a spreadsheet. Maybe the reports aren’t as good as what the large project would have produced. You only enable $50,000 per month in savings. That week to produce it costs you $4,000 in labor, and $0 in hardware and licensing.

You are only able to show half the operational savings, so you don’t get a bonus. You don’t get to put “brought multi-million dollar project in on time and on budget” on your resume.

And 31 months after the initial request, the spreadsheet has enabled over $1.5-million in operational savings.
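
Putting the two outcomes side by side at the 31-month mark, with the same assumed figures, the gap is hard to miss:

    # Net position 31 months after the initial request, same assumptions as above.
    horizon = 31                                # months

    # The full "checklist" project: $2.4M spent, $100k/month savings starting at month 8.
    big_cost = 2_400_000
    big_savings = 100_000 * (horizon - 7)       # 24 months of savings
    big_net = big_savings - big_cost            # roughly break-even

    # The one-week spreadsheet: $4k of labor, $50k/month savings almost immediately.
    small_cost = 4_000
    small_savings = 50_000 * (horizon - 1)      # about 30 months of savings
    small_net = small_savings - small_cost

    print(f"Checklist project net at month {horizon}: ${big_net:,.0f}")    # about $0
    print(f"Spreadsheet net at month {horizon}:       ${small_net:,.0f}")  # about $1.5M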

How to Finish IT Projects Faster with Less Documentation

If you’re responsible for running an IT project you want things to be done on time and within budget. So how do you set your schedule and budget? Hopefully you define what you want to accomplish, and then ask the developers how long it’s going to take. If you’re putting out a request for proposal (RFP) you’ll have several different answers to that question. Typically the highest consideration in the selection is the total proposed cost. But really, the total time is a better choice.

Why that’s true is based not on ideas about processes, but on ideas about people.

Consultants live and die by billable hours. In the short term, they don’t have any incentive to finish their current project any faster. But in the long term, finishing faster should lead to more work as clients come to respect their ability to meet a deadline. If project managers and clients learn to value that behavior, that is.

How it Could Be

Let’s look at a production support issue for an example. Production support is completely different from most project work in one very important way: the problem is well defined. Something worked on Monday, it doesn’t work on Tuesday. Make it work just like Monday again.

For something with six-figure impact per hour of downtime – and if you think that’s an artificially-high number you’ve never worked with credit card processing – you don’t want a programmer with an impressive resume, dozens of certifications, and decades of experience with your primary programming language. You want Bob, the guy who wrote the system from scratch and demands $1k per hour with a four-hour minimum.

Once you’ve got Bob and his hand-picked team of support people, you get out of their way and let them work. Status reports might be no more than, “We’ve found the problem … We’ve identified the solution … We’re ready to test the fix … It’s live.”

How it Is

But when it comes to new development, companies play it “safe” and look for the best qualifications on paper. They hire based on keyword matching and offer rates based on industry standards for a given skill set. They require specific processes and deliverables (pet peeve: when did “deliverable” become a noun?), and status reporting becomes a significant percentage of the total budget.

Why the Difference?

There are two reasons expert teams get away with less formal process than is typical, but I can only prove one of them. The public answer business sponsors tell themselves to justify the exception to “official methodology” is that the experts have worked the methodology for so long that they can follow the same procedures without exhaustively documenting all the steps. And there is some truth to that.

But I suspect the larger reason is that experts get the work done so much faster there just isn’t enough time for documentation to build up.

The best athletes make things look easy that most people could not even do. A high jumper might clear six feet without even trying hard. Most people would never come close even with months to try.

The best IT people do the same thing. They complete projects in weeks that other people could never do. As “safe” projects drag on, specifications are refined, status reports are produced, contracts are negotiated, updates are requested and provided. Meanwhile the expert team has just released to production – so it must have been a small project.

The hard part for the client is to recognize the difference between a project that went smoothly because it was easy, and one that went smoothly because the team made it look easy. But here’s the secret: You don’t really need to recognize the difference.

How to Do Better

The reason you hired someone else to do the work is because you couldn’t do it yourself. Which means you can’t accurately judge which projects really are hard, and which ones just look hard. So don’t judge the project, judge the people.

The people who seem to always be working on small, simple projects – after all, they always go quickly with no major problems – are better at execution. They will be better no matter what the project is.

A Tale of Two Techies

You’ve just finished learning MS SQL 4.2 and VB 5 in school. You get a job with a company that has just upgraded to those languages. You get to learn the ins and outs of the languages along with the rest of your co-workers.

Two years later you want to upgrade from your entry-level salary. You know that big raises only come from job hopping, and you see that most of the job ads are for VB6, so you start studying it on your own. You find some cool new things you think would help in your current job and start pushing people to upgrade.

Your tech lead, Bob — who has been with the company for seven years — says the current applications are stable and it’s not worth the cost to upgrade. The boss listens to Bob instead of you. You say bad things about Bob on an internet forum.

You find a new contract gig working with VB6, and making almost as much as Bob does. Boy, Bob sure is dumb. If he had any balls he’d have jumped already. You start racking up frequent-flyer miles chasing the next gig. Bob evaluates and recommends a new third-party tool that uses VB6. He gets a VB6 class for his whole staff included in the project cost.

Five years later, you’re a .NET hired gun and you know which airports have the best frequent-flyer clubs. You’ve got a fat bank account and all the best buzzwords on your resume.

Bob is still with the same company, but now he’s the IT Director. He’s not making as much as you, but he’s vested and his 401(k) is looking pretty good. He hasn’t touched any code in a couple of years, but has a few long-term employees working for him whose opinions he trusts. He also hasn’t answered an after-hours page for a few years.

You meet Bob on a street corner one day and talk about old times, catch up on what’s been happening. Suddenly a car jumps the curb and puts you both in the hospital. Oops.

You were smart enough to get good medical insurance, but your income stops since you’re not billing hours any more. Bob goes on medical leave. His wife takes his two children out of junior high and comes in to visit. Your pregnant wife comes in to visit. (You spent your twenties traveling, so are just starting your family.)

Three months later, Bob goes back to work part-time while you sit at home, surfing the net, searching for a gig that will give you the flexibility you need to work around your physical therapy.

By the end of the year, your savings are gone. Microsoft has released the Next Big Thing after .NET, and you don’t have any work experience with it on your resume. You’re applying for maintenance gigs on “legacy” apps — two-year-old .NET apps written by guys straight out of school, who just left for their first not-entry-level jobs. Maybe in another year or two you’ll be able to climb back onto the leading edge.

Bob just accepted an internal transfer to run the division he’s been supporting for the last decade. He recommends as his replacement the long-term employee who filled in for him during his absence. The last division head held the job for 15 years until his retirement. Bob could do the same, and retire with a decent pension when he’s 60.

What conclusion would you like me to reach?

Why is it that the rest of the world has functioned fine by estimating projects of many sizes and scopes and then sticking to those estimates, while IT screams “OH WE can’t do that!”?

I’ll admit a large reason is that lots of people in IT are really bad at estimating. But part of the reason we’ve managed to stay so bad at estimating, and the main reason all estimates seem to be so far off, is that the business side wants the estimates to be low.

The common complaint is that IT projects are “always over time and over budget”. Since the IT budget (for software) is almost entirely salary, time and budget are synonymous. When you set the budget, you’ve just set the time.

But most IT projects — and nearly every one I’ve worked on — have a budget set before the detailed design is ever done. Or at least the clients have an idea in mind what they’d like it to cost. So without realizing they’re doing so, the clients sometimes set the time estimate before they’ve ever talked to the developers.

I had to help out some less-experienced people who were being asked for estimates. I told them the trick is to figure out a diplomatic way to ask, “What number would you like me to say?”

The funny thing is that this is not just being cynical, either. There’s real value to it. If you’re thinking four months and the client says three days, there’s a good chance you don’t have the same idea in mind for what you plan to do.

For instance, they ask you to write a search engine “like Google” to put on your intranet. You’re thinking a couple of months. They’re thinking end of the week.

It turns out they want you to buy a Google search appliance and integrate it into the intranet. To the client, “write”, “create”, “implement”, “install” and all those other words we use that mean different things … all mean the same thing: work that the IT guys do.

So if you want to get the work — or if you’re an employee and have no choice in the matter — see what number they have in mind already and tell them what they can have in that length of time. If you don’t promise to deliver something in the given time frame, they’ll go find someone who will. Not that they’ll deliver in that time, but they’ll promise to deliver in that time.

Compare this to construction. The client may get five quotes for pouring concrete. If four of the bids are close to $50k, but one of them is $20k, the client will likely assume the low bid is unrealistic and choose among the remaining contractors.

But if a programmer or independent software vendor says some work will cost $50k, the client can find someone who will promise to deliver for $20k. The sponsor will either accept the lower bid, or use it to negotiate the first vendor down to $25k. When it ends up costing $50k, that project goes in the books as “over time and over budget”.

What would it look like if construction bids were awarded the same way? If for instance the client were required by law to select the lowest bid? Then contractors would low-ball every bid to get the contract. You’d end up with construction that always ran over time and over budget. You would need an endless supply of money to stay in business. You would need to be … the government.

Can the FSF “Ban” Novell from selling Linux?

Novell Could Be Banned From Selling Linux: Group Claims

BOSTON – The Free Software Foundation is reviewing Novell Inc.’s right to sell new versions of Linux operating system software after the open-source community criticized Novell for teaming up with Microsoft Corp.

The problem is that the FSF wants all code to be free. Period.

That’s their preference, yes.

They want to make the GPL so darned viral that no one can include any copyrighted or patented components. Period.

No, they want all the components on which they hold the copyrights to be protected by those copyrights. And they want those components to be freely available to anyone who agrees to make their modifications available under the same terms.

You can’t modify and distribute Microsoft’s code without permission. You can’t modify and distribute GPL code without permission.
The way you get permission to distribute Microsoft’s code is to pay them a lot of money, or cross-license your own code. The way you get permission to distribute GPL code is to release your modifications under the GPL.
Microsoft can destroy your business model by bundling a version of what you make. GPL-using authors can destroy your business model by releasing a free version of what you make.
If you don’t want to be bound by Microsoft’s terms, write your own code. If you don’t want to be bound by the GPL, write your own code.

 
So how is the GPL “viral” while Microsoft is just “business”?

How can the FSF “ban” Novell from selling “Linux” when Linux itself is not wholly licensed under the GPL and not wholly owned by the FSF? Sure, there are many GPL components within the typical Linux distro, but not all of them have to be.

According to Answers.com:

More Than a Gigabuck: Estimating GNU/Linux’s Size, a 2001 study of Red Hat Linux 7.1, found that this distribution contained 30 million source lines of code. … Slightly over half of all lines of code were licensed under the GPL. The Linux kernel was 2.4 million lines of code, or 8% of the total.

So the first point is that no, the FSF can not ban Novell from selling a GNU/Linux-based distribution, as long as all the current license terms are followed.

However, the holder of the Linux trademark, Linus Torvalds, could choose to prohibit them from using that mark to describe what they’re selling. (See Microsoft / Sun / Java™.) Though I haven’t seen anything suggesting he plans to do so.

Next, the Linux kernel is covered under the GPL, so even if the FSF doesn’t hold the copyright it’s entirely possible the kernel authors could ask the FSF to pursue any violations on their behalf. And I suspect Stallman and Moglen would be more than happy to do so.

The bottom line, I think, is that business people who don’t understand the technicalities will either see a deal with Microsoft as a reason to choose Novell for any Linux plans, or they will see the controversy as a reason to avoid Linux plans altogether. Either conclusion benefits Microsoft.

People who do understand the details will see that Novell offers them a conditional, time-limited right to use a specific version of Linux, which may or may not interoperate better with Windows systems, which can be effectively “end-of-lifed” at any time by Microsoft.

And this is bad why?

If you try hard enough, I suppose it’s possible to spin anything into an attack on your pet target. But the consistency with which Neil McAllister sounds the call of doom and gloom for all things open source is really quite astonishing. Especially when you consider he writes the Open Enterprise column for Infoworld.

Take his January 29th column about the formation of the Linux Foundation for example:

On the surface, the union of Open Source Development Labs (OSDL) and the Free Standards Group (FSG) seems like a natural fit. Open standards and open source software are two great ideas that go great together.

But wouldn’t it make more sense to call the merged organization the Open Source and Standards Lab, or the Free Software and Standards Group? Why did they have to go and call it the Linux Foundation?

On the one hand, it seems a shame that the group should narrow the scope of its activities to focus on a single project. Linux may be the open source poster child du jour, but it’s hardly the only worthwhile project around.

If Neil had bothered to read his own magazine’s newsletter the previous week, he would have known that:

With Linux now an established operating system presence for embedded, desktop and server systems, the primary evangelizing mission that the OSDL and FSG embarked upon in 2000 has come to an end, Zemlin said. The focus for the foundation going forward is on what the organization can do to help the Linux community more effectively compete with its primary operating system rival Microsoft.

The combination of the two Linux consortiums was “inevitable,” said Michael Goulde, senior analyst with Forrester Research. “The challenge Linux faces is the same one Unix faced and failed — how to become a single standard.”

So what’s wrong with focusing on Linux, anyway?

But then again, maybe it’s not so strange — not if you conclude that the Linux Foundation isn’t any kind of philanthropic foundation at all. It’s an industry trade organization, the likes of which we’ve seen countless times before. Judging by its charter, its true goal is little more than plain, old-fashioned corporate marketing.

As such, the Linux Foundation is a unique kind of hybrid organization, all right — but it’s not the union of open source and open standards that make it one. Rather, it stands as an example of how to combine open source with all the worst aspects of the proprietary commercial software industry. How noble.

This is really amazing. No one ever claimed that the partners in this merger were anything other than industry trade organizations, but the fact that the new foundation will continue the work of its members is somehow un-noble. And nobility is the standard by which we should judge those who are trying to make Linux more competitive in the market.

His grammar and spelling may be better than that of the stereotypical Linux fanboys, who famously attack less-rabid supporters for their lack of purity. Or maybe he just has a better editor. But all the craft in the world doesn’t disguise the fact that Neil’s opinions are rarely more useful than the ramblings of an anonymous Usenet troll.

Meet the new boss, same as the old boss v2

Sometimes you read something that you can’t summarize without losing a lot. I just can’t find any extra words in this post, so here it is in its entirety:

1) Whatever language is currently popular will be the target of dislike for novel and marginal languages.

2) Substitute technology or methodology for language in #1. In the case of methodology, it seems a straw man suffices.

3) Advocates will point to the success of toy projects to support claims for their language/methodology/technology (LMT).

4) Eventually either scale matters or nothing matters. Success brings scale. An LMT is worthy of consideration only after proving out at scale.

5) Feature velocity matters in early stage Web 2.0 startups with hyperbolic time to market, but that is only a popular topic on the Web for the same reason Hollywood loves to hand out Oscars.

6) Industry success brings baggage. Purity is the sign of an unpopular LMT. The volume of participants alone will otherwise muddy the water.

7) Popularity invites scrutiny. Being unfairly blamed for project failure signals a maturing LMT; unfairly claiming success, immature LMT. Advocates rarely spend much time differentiating success factors.

8) You can tell whether an LMT is mature by whether it is easier to find a practitioner or a consultant. Or by whether there is more software written *with* or prose written *about* the LMT.

9) If you stick around the industry long enough, the tech refresh cycle will repeat with different terminology and personalities. The neophytes trying to make their bones will accuse the old guard of being unable to adapt, when really we just don’t want to stay on this treadmill. That’s why making statements like “Java is the new COBOL” are ironic; given time, “N+1 is the new N” for all values of N. It’s the same playbook, every time — but as Harlan Ellison said of fiction, every story has already been told, but nobody was listening the first time.

10) Per #9, I could have written this same post, with little alteration, ten, twenty or thirty years ago. It seems to take ten years of practise to truly understand the value of any LMT. Early adopters do play the important role of exploring all the dead ends and limitations, at their cost. It’s cheaper to watch other people fail, just like it hurts less to watch other people get injured.

11) Lisp is older than I am. There’s a big difference between novel and marginal, although the marginal LMTs try to appear novel by inserting themselves into every tech refresh cycle. Disco will rise again!

12) If an LMT is truly essential, learning it is eventually involuntary. Early adopters assume high risks; on the plus side they generate a lot of fodder for blogs, books, courses and conferences.

13) I wonder if I can get rich writing a book called Agile Lisp for Web 2.0 SOA. At least the consulting and course revenue would be sweet. Maybe I can buy an island. Or at least afford the mortgage payments on a small semi-detached bungalow in the Bay area.

14) It requires support from a major industry player to bootstrap any novel LMT into popularity. The marginal LMTs often are good or even great, but lack sponsors.

15) C/C++ remain fundamental for historical reasons. C is a good compromise between portability and performance — in fact, a C compiler creates more optimal code than humans on modern machine architectures. Even if not using C/C++ for implementation, most advocates of new languages must at least acknowledge how much heavy lifting C/C++ does for them.

16) Ditto with Agile and every preceding iterative methodology. Winding the clock back to waterfall is cheating. I’m more sophisticated than a neanderthal, but that won’t work as a pick up line.

17) Per #13, I don’t think so, because writing this post was already a chore, let alone expanding the material to book length. Me and Yegge both need a good editor.

This covers the technology pretty well. All he left out was the reason so much is coming back.