Can the FSF “Ban” Novell from selling Linux?

Novell Could Be Banned From Selling Linux: Group Claims

BOSTON – The Free Software Foundation is reviewing Novell Inc.’s right to sell new versions of Linux operating system software after the open-source community criticized Novell for teaming up with Microsoft Corp.

The problem is that the FSF wants all code to be free. Period.

That’s their preference, yes.

They want to make the GPL so darned viral that no one can include any copyrighted or patented components. Period.

No, they want all the components on which they hold the copyrights to be protected by those copyrights. And they want those components to be freely available to anyone who agrees to make their modifications available under the same terms.

You can’t modify and distribute Microsoft’s code without permission. You can’t modify and distribute GPL code without permission.
The way you get permission to distribute Microsoft’s code is to pay them a lot of money, or cross-license your own code. The way you get permission to distribute GPL code is to release your modifications under the GPL.
Microsoft can destroy your business model by bundling a version of what you make. GPL-using authors can destroy your business model by releasing a free version of what you make.
If you don’t want to be bound by Microsoft’s terms, write your own code. If you don’t want to be bound by the GPL, write your own code.

 
How is the GPL “viral” while Microsoft is just “business”?

How can the FSF “ban” Novell from selling “Linux” when Linux itself is not wholly licensed under the GPL and not wholly owned by the FSF? Sure, there are many GPL components within the typical Linux distro, but not all of them have to be.

According to Answers.com:

More Than a Gigabuck: Estimating GNU/Linux’s Size, a 2001 study of Red Hat Linux 7.1, found that this distribution contained 30 million source lines of code. … Slightly over half of all lines of code were licensed under the GPL. The Linux kernel was 2.4 million lines of code, or 8% of the total.

So the first point is that no, the FSF cannot ban Novell from selling a GNU/Linux-based distribution, as long as all the current license terms are followed.

However, the holder of the Linux trademark, Linus Torvalds, could choose to prohibit them from using that mark to describe what they’re selling. (See Microsoft / Sun / Java™.) Though I haven’t seen anything suggesting he plans to do so.

Next, the Linux kernel is covered under the GPL, so even if the FSF doesn’t hold the copyright, it’s entirely possible the kernel authors could ask the FSF to pursue any violations on their behalf. And I suspect Stallman and Moglen would be more than happy to do so.

The bottom line, I think, is that business people who don’t understand the technicalities will either see a deal with Microsoft as a reason to choose Novell for any Linux plans, or they will see the controversy as a reason to avoid Linux plans altogether. Either conclusion benefits Microsoft.

People who do understand the details will see that Novell offers them a conditional, time-limited right to use a specific version of Linux, one that may or may not interoperate better with Windows systems and that Microsoft can effectively “end-of-life” at any time.

Changing your development platform

There are certain milestones in the life of a product when developers are free to ask if it’s time to change the platform it’s developed on. Typically you’ve shipped a major version and gone into maintenance mode. Planning has started for the next version, and you wonder if you should stick with what you’ve got or if, knowing what you know now, it might be better to switch from .NET to PHP, or from PHP to Java.

You might think that checking Netcraft would be a good idea. You can see if your current platform is gaining or losing market share, and who doesn’t like market share? If you look at the latest chart you’ll see that Microsoft is gaining on Apache.

But keep in mind that while Apache’s market share has gone down marginally, the total number of sites has still gone up. Most of Microsoft’s gain is from new sites, not from existing sites switching. (The exception being large site-parking operations switching to IIS.)
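A toy calculation makes the point concrete. The figures below are invented purely for illustration (they are not Netcraft’s numbers), but they show how a platform’s share can shrink while its absolute site count keeps growing, as long as the total pie grows faster:

```python
# Hypothetical site counts, before and after one survey period.
# These are made-up numbers for illustration, not Netcraft data.
before = {"apache": 70_000_000, "iis": 30_000_000}   # 100M sites total
after  = {"apache": 75_000_000, "iis": 45_000_000}   # 120M sites total

def share(counts: dict, name: str) -> float:
    """Market share of one platform as a percentage of all surveyed sites."""
    return 100 * counts[name] / sum(counts.values())

print(f"Apache share: {share(before, 'apache'):.1f}% -> {share(after, 'apache'):.1f}%")
print(f"Apache sites: {before['apache']:,} -> {after['apache']:,}")

# Apache's share falls from 70.0% to 62.5% even though it gained 5 million sites.
# The "lost" share went to new sites, not to existing sites switching away.
```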

But really the important question is whether your preferred platform faces a reasonable possibility of becoming obsolete/unsupported. This is actually one place where the Unix world’s slower upgrade cycles help. You rarely have applications “sunsetted” by the manufacturer.

Am I arguing in favor of dropping .NET? Not at all. I think you should use what works for you. What I’m saying is that unless your chosen platform is in danger of becoming unsupported, and that actually causes a problem for you, looking at market share charts should never get you to switch.

Now if you hadn’t already chosen a platform, and you wanted to know what platform had a larger market, then you’d care about market share. But that’s a subject for another post.

Geeks still don’t know what normal people want

If you listen to geeks, locking out development of third-party applications will doom the iPhone in the market. But remember the now-famous review when the iPod was released:

No wireless. Less space than a nomad. Lame.

The market quickly decided they didn’t care about wireless and bought the things in droves. And current versions have more space than the Nomad did when the iPod came out. Now that the iPhone has been shown, geeks are again claiming that it’s going to fail, this time because it’s not going to be open to third-party applications.

Apple doesn’t care if you can extend it because they believe their target customer doesn’t want it extended. They want something that works well, the same way, every time. The iPod wins because it does pretty much what people want, close enough to how they want, without making them think about how to do it.

The iPhone may not be open to developers, but it’s upgradable. When Apple finishes writing software to make the Wi-Fi automatically pick up a hotspot and act as a VoIP phone, that functionality can be rolled out transparently. First-gen iPhones will become second-gen iPhones without the users having to do anything.

The upgrade path will be to higher HD capacity, so people can carry more movies with them. I see these things as hugely popular for people who take trains to work. If I could take a train where I work now, I’d already be on a waiting list for an iPhone.

Get your boxes in order

Everyone seems to have an opinion on downloading music and TV shows, everything from “Information wants to be free” to “Skipping commercials with your TiVo is theft.” Some of the views are self-serving, some are rationalizations, and some people have strong opinions based on what they believe is right and just.

Here’s the thing a lot of people are missing, though: Breaking the law does not count as civil disobedience unless you go out of your way to do it publicly. Obviously I’m referring to people who upload and download music, movies or software without permission from the copyright holders. Some of them are just in it for the free tunes. Some of them think the law is wrong. But the ones who believe copyright laws have gone too far damage their case when they quietly violate the law, expecting to protest the law if — and only if — they are caught.

Think the law has tilted too far in favor of the copyright industry? Great, so do I. Have you written to your congressman? If not, then don’t complain about the law when you get busted. It makes it look like you’re just trying to stay out of jail — which you are — and supports the MPAA and RIAA next time they try to get copyright extended.

Before you end up in a jury box, you should really try the ballot box. Time for me to get off my soapbox.

Pay the man

IT people are frequently highly-educated, with extensive formal and on-the-job training. And we all, if you look at our resumés, think that we’re fast learners. That’s probably because everything we work with keeps changing every couple of years, so anyone who’s been doing this for very long has learned multiple generations of tools. Many of our jobs also require us to be generalists, with a broad range of knowledge across multiple unrelated fields.

It’s probably not surprising, then, that we tend to be DIY-ers. Never changed a light fixture? No problem. Give me a few minutes with a book and I’ll know enough to do it. House needs painting? Heck, I’ve always wanted an excuse to go get one of those power sprayers, I’m on it! That’s why we’re shocked to hear how much people pay to have someone do work that, after all, we could do ourselves with little or no training.

That was my frame of mind when I had to replace the shower door. The frame was mounted on tiled walls. I only cracked two of the tiles a little bit trying to get the old frame off, and lifted about a dozen away from the wall. No problem, just ran to the hardware store for some tile adhesive. And I only put the adhesive on a little too thick, so two of the tiles fell off the next day when I started mounting the frame. And I only cracked one more because I was unfamiliar with the mounting hardware.

I had to remove all the tiles and start over because the adhesive was actually nowhere near dry. I wanted to make sure it dried all the way, because I wasn’t completely sure I did it right this time. When I tried again three days later, only one tile fell off because I had gone too thin with the adhesive. But after waiting a day for the grout on the rest to dry, I was able to scrape that space out, get the last tile up, and grout it. The caulk and grout I used to patch the cracks look mostly okay … for now … while they’re still white.

All in all, it only took me a week and a half to hang that door. And the cracked and patched tiles will probably still look good when I go to sell the house. (At least I hope they will; the color was discontinued years ago, so I’d have to re-tile the whole damn bathroom otherwise.) I’m so glad I didn’t pay a hundred bucks to some barely-trained tradesman to do it for me.

Lipstick on a pig

If you’ve ever seen one of my project plans, there’s a chance you’ve seen a task at the end that says “Add pretty”. With good use of stylesheets, you can radically improve — or damage — the look of a website even after all the coding and most of the testing are done. A different person or group with a different skill set can take over from the programmers and work some magic with little interaction.

You might think, based on this, that other parts of development can be pushed to the end after “real” development is done. You’ll know someone was thinking that when you see a task late in a project plan that says “Add fast”. This is usually a sign of excessive specialization. People think that they just have to get the user interaction right and leave performance tuning to someone else.

I suppose I can live with the idea that there will be some performance tuning that’s best done once everything else is complete. And on some projects just throwing more hardware at the problem is cheaper than a programmer’s time to fix it. But actually improving the performance of an application is hard, and the changes pervasive.

Another side-effect of excessive specialization, one that always raises the brown flag, is when I see “Add security” at the end of a plan. It’s simply inexperience that allows anyone to think they can graft a security model onto a codebase after the fact without significant amounts of rewriting.

“But this is a quick hack, and we only need the numbers for this one meeting.” Sure, a report you’ll only ever need once. I guess such a thing could exist, but I’ve never seen it. In the first place, nothing lasts as long as a temporary fix that works well enough. And in the second place, many (most?) large, successful products started out as small, successful products.

End/begin dependencies look really great on a Gantt chart. Activities that invite and incorporate feedback don’t look so neat and clean. Treating security as something that can happen to a product after it’s already done is no better than … well, see the title of this post.

The Digital Dark Ages

I’ve been paying my mortgage for about three years now. Unless I change something, I’m going to keep paying on it for another 27 years. I try not to think about the fact that although I have an actual physical copy of the mortgage agreement, with real pen-and-ink signatures, I don’t have any proof that I’ve ever made a payment.

At the risk of sounding like a Luddite, it bothers me that I have to trust the bank’s computer system to keep track of all 360 payments I’ll have made by the time it’s over. I’m not just being paranoid. I had an issue where a bank said my wife still owed money on a loan we had paid off three years earlier. We didn’t have anything in writing for each payment. The bank couldn’t even tell us the history of the loan; just that the computer showed we still owed money. And if a bank says you owe money, unless your lawyers are bigger than their lawyers, then you owe them money.

If you go to museums, you’ll see ledgers from banks in the 1800s and earlier. Over two hundred years later and we still know who paid their bills and when. But five years in the past … it doesn’t exist.

This could change with new regulations and retention requirements. But the big difference is what is standard vs. what you have to work at. A hundred years ago everything was written down. If you wanted to get rid of records you had to make an effort to identify what you wanted to delete, somehow separate it from the rest, and physically destroy it. Today, we only keep data as long as we have to. We only bother with long-term storage when the law or financial necessity makes us.

Let’s assume we have some data that we really want to keep “forever”. What is that going to take?

First, you’ll want to store it on something that doesn’t degrade quickly. Burning it to a CD or DVD seems to offer better longevity than VHS. Well, maybe. Second, you want to store it in a format that you’ll be able to read when you want to. This might be a harder problem than the physical longevity, when you start to consider how much data goes into a modern file format.

Look at the problem from the user’s perspective: The document format (the same applies to music and video) is just a way of saving the document so that it can be opened and look the same way at a later time, maybe on the same computer, maybe not. When Word 97 handles table formatting and text reflow around images a certain way, for instance, the document format has a way of capturing the choices the user made.

If I open that Word 97 document in Word 2003, either the tables, text and images look the same or they don’t. If they look the same, it’s because there’s an import filter that understands what the old format means, and Word 2003 has a way of representing the same layout. If I then save as Word 2003, the specific way of representing the layout has changed, but the user neither sees nor cares about the difference.

If, on the other hand, that Word 97 document doesn’t look the same in Word 2003, it really doesn’t matter to the user whether the problem is a bad import filter or Word 2003 not supporting the same features as Word 97. (Maybe they used flame text.) So a format that technically captures all the information needed to exactly recreate a document is utterly useless without something that can render it the same way.

Okay, so we need long-term media, and we need to choose a format that is popular enough that there will still be import filters for it in the foreseeable future. Eventually we’ll still reach the end of those paths. Either the disks will degrade, or the file format will be so out of date that no one makes import filters any more. When that happens, the only way to keep our data will be to copy it to new media, and potentially in a new format.

What should that format look like? We’ve already got PDF, which is based on how something looks in print. We’ve got various audio and video formats, which deal with playing an uninterrupted stream. But what about interactive/animated documents designed for online viewing?

Believe it or not, I’m going to suggest a Microsoft solution, though it’s one they haven’t thought to apply this way: PowerPoint. Today nearly everyone has a viewer, but not so long ago most of the slideshows I got were executables. If you had PowerPoint installed you could open the executable and edit the slideshow the same way you can edit a PDF if you have Acrobat.

As much as people complain about the bloat that Word adds to simple files, I think the future of file distribution will be to package the viewer along with the file. At some point storage becomes cheaper than the hassle of constantly updating all those obsolete file formats. The only question is how low a level the viewers will be written to: OS family, processor architecture, anything that runs C, etc.

Meet the new boss, same as the old boss

In case you haven’t noticed yet, we’re going through another round of power struggles in the IT industry. Oh, that might not look like what’s going on. On the surface what people are saying is that it’s a matter of web-based vs. desktop applications. Frequently these conversations are based on the premise that it’s a discussion of the technical merits.

Nope. It’s the return of the glass house. Peel back all the rationalizations about easier deployment, easier support, more consistency, and what it really comes down to is more control. If we can just keep the software out of the users’ hands then everything will be okay.

But what history shows us is that users like having control of their “stuff”. Taking that control away requires either redefining “their stuff” to be “our stuff”, or convincing them that they aren’t qualified to handle their stuff.

Is this what your customers are hearing from you?

Maybe I’m the one missing something

Magicians make a living at misdirection, getting you to look at their right hand while they’re hiding the ball with their left. You’d think journalists would want to be a little more direct than that. But Neil McAllister pulled off a whopper of a sleight-of-hand recently, using more than half his column to summarize a Joel Spolsky post before jumping to a completely unrelated conclusion.

Joel’s point, and the first more-than-half of Neil’s summary, was shooting down the idea beloved of suits that programming can be reduced to a set of building blocks that can be snapped together by a non-programmer. (For a hysterically painful example of how wrong this is, and how far people will go to try to do it anyway, see The Customer-Friendly System at The Daily WTF.)

Joel covered the ground pretty well, so I was wondering where Neil was going with this. Once I got to it, I had to re-read the segue three times to see what connection I was missing:

Don’t you believe it. If, as Brooks wrote, the hard part of software development is the initial design, then no amount of radical workflows or agile development methods will get a struggling project out the door, any more than the latest GUI rapid-development toolkit will.

And neither will open source. Too often, commercial software companies decide to turn over their orphaned software to “the community” — if such a thing exists — in the naïve belief that open source will be a miracle cure to get a flagging project back on track. This is just another fallacy, as history demonstrates.

If there’s a fundamental connection between open source and “Lego programming” I don’t know about it. Maybe Neil makes the connection for us:

As Jamie Zawinski recounts, the resulting decision to rewrite [Netscape’s] rendering engine from scratch derailed the project anywhere from six to ten months.

Which, as far as I can see, has nothing to do with the fact that it was open source. In fact it seems more like what Lotus did when they delayed 1-2-3 for 16 months while they rewrote it to fit in 640k, by which time Microsoft had taken the market with Excel. Actually that’s another point that Joel made, sooner and better.

Is Neil trying to say that Lego programming assumes that code can be interchangeable, and man-month scheduling assumes that programmers are interchangeable? Maybe, and that’s even an interesting idea. But that’s not what he said, and if I flesh out the idea it won’t be in the context of a critique of someone else’s work.

Or maybe it was an opportunity to take a shot at the idea of “the community”. Although in his very next column he talks about the year ahead for the open source community, negative community reaction to the Novell/Microsoft deal, and praise from the community for Sun open-sourcing Java. Does he really dispute the existence of a community, or was it hit bait?

Okay, so where did I start? Right, with misdirection. So the formula seems to be: quote a better columnist making a point that I like, completely change the subject with the word “therefore”, summarize another author making my second point, and send it to InfoWorld. Am I ready to be a “real” pundit yet?

Here’s a crazy thought

Corel should sue people who use Photoshop without paying.

Huh?

Think about it. Graphics professionals pay for Photoshop. Lots more people do some graphics, but not enough to make it worth $650. But many of those people might well agree that their use is worth $80.

So there are probably people using cracked copies of Photoshop who, if they didn’t have the crack, would instead have bought Paint Shop Pro.

It seems reasonable that if we’re going to report losses to “piracy” in terms of lost sales, we should be counting the potential sales that are really lost.
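Here is a back-of-the-envelope sketch of what that accounting might look like. Only the $650 and $80 price points come from above; the pirate count and the conversion rates are pure guesses on my part, there just to show the arithmetic:

```python
# Back-of-the-envelope "really lost sales" estimate.
# Prices come from the post above; the copy count and conversion
# rates are invented assumptions, included purely to show the arithmetic.
photoshop_price = 650        # professional price
paint_shop_pro_price = 80    # what a casual user might actually pay
pirated_copies = 10_000      # assumed number of unlicensed Photoshop users

frac_would_buy_photoshop = 0.05   # assumed: would really have paid $650
frac_would_buy_psp = 0.40         # assumed: would have settled for an $80 tool

headline_loss = pirated_copies * photoshop_price
adobe_loss = pirated_copies * frac_would_buy_photoshop * photoshop_price
corel_loss = pirated_copies * frac_would_buy_psp * paint_shop_pro_price

print(f"Headline 'piracy loss' at retail price: ${headline_loss:>12,.0f}")
print(f"Sales Adobe plausibly lost:             ${adobe_loss:>12,.0f}")
print(f"Sales Corel plausibly lost:             ${corel_loss:>12,.0f}")

# The headline figure ($6,500,000) dwarfs what Adobe plausibly lost ($325,000),
# and a comparable chunk of the genuinely lost sales ($320,000) belongs to Corel,
# who never appears in the piracy-loss press releases at all.
```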