GPLv3 and Corporate Contrarian Hype

The latest draft of the third GPL version is provoking a lot of argument, posturing, and controversy. I’m glad its careful drafting process is taking the amount of time it is. I think it’s useful to widen the sphere of public awareness on the issues the license addresses. Some of the most controversial issues, such as digital rights/restriction management (DRM) and patents, are going to impact our lives and culture in far-reaching ways (they’re not isolated from technical and business issues). Yet a lot of people discussing these issues don’t seem to apply the rigorous thinking that is required.

I’d argue the FSF has a track record of considering important issues like these, with foresight and the creative will to develop strong, practical solutions, staving off potential damage to our freedoms. Damage that would otherwise be carried out by imaginary legal entities armed with human bullets, which fly toward profit so quickly they miss all other practical and ethical issues. The solutions have also enabled a huge amount of innovation and positive change.

But some of the most visible contrarian reactions to this draft of the next GPL–opinions which are getting hyped–are mostly irrelevant in the greater scheme of things. I’m talking about the recent HP issues that were circulated around numerous web sites. To quote Christine Martino, vice president of Open Source and Linux with HP, from the article linked above,

“HP had hoped that the second draft would clarify the patent provision such as to ease concern that mere distribution of a single copy of GPL-licensed software might have significant adverse IP impact on a company…”

What does that mean, “significant adverse IP impact”? It’s removed from its context so I can’t be sure, but it sounds to me like the HP folks are taking issue with something the new GPL would prevent them from doing with their patent portfolio. Furthermore, referring to HP’s commentary, the article states

“The second draft of the GPL version 3 license is not even a day old and already one of the largest Linux vendors in the world is taking issue with its content.”

So what? The FSF is interested in freedom, and its foresight in ensuring and encouraging freedom was the practical basis for the IT business shifts now underway because of FOSS. Although the Open Source Initiative fairly claims the pragmatic approach under its rubric (as that is its stated goal), that doesn’t imply an either/or stance. Freedom doesn’t preclude pragmatism. Unfortunately, too many articles treat these notions as mutually exclusive.

While the previous GPL versions were produced with the goal of promoting freedom in software development, they also triggered important business and social developments. Why were they so successful? Because many, many individuals adopted these licenses. The freedom and collaboration they enabled for masses of individuals in free software development is key. A recent ZDNet blog post states

“But that’s not where the debate will really play out. It will really play out in the market. Will GPL companies switch to GPL v3, or explicitly demand retention of V2, which is frankly vague on the DRM question.”

Does it matter if major business entities like HP object to certain freedom-promoting aspects of the license? Does their willingness to switch really signify whether the third version will be successful in getting adopted? I don’t think it matters much at all. HP is a player in the free software community; it is not the player. And that is true of everyone else. So HP’s commentary should be considered for what it’s worth–nothing more.

So, I’d argue against Steven J. Vaughan-Nichols’s point that the GPLv3 will be dead on arrival. As he mentions the HP issue, he also mentions Linus Torvalds’s objections. This is fair; from what I’ve read, Torvalds has some clearly thought-out opinions, and some of them sound quite reasonable. As I said at the beginning, I like the debate this drafting process is raising, Torvalds and HP included. Nevertheless, from what I’ve read of Torvalds’s arguments, I have the impression he is single-mindedly focusing on technical issues and intentionally excluding debate on all else. I just don’t believe that’s OK. There are too many important, non-technical ramifications interconnected with information technology to ignore. It doesn’t mean everyone must think about these things, but how does it help to intentionally excise them from the debate?


A new story covering the feedback issue in this debate was just published at NewsForge. It provides a useful balance of the different sides involved.

Reference Site Visits, the Evidence

Yesterday I was editing a document for a project in which we’re helping an organization select its ERP system. The document covered practical reasons that the organization’s selection steering committee should take part in reference site visits. In other words (and this is a regular practice our company recommends) while evaluating the right system, the people that are responsible for overseeing its selection ought to visit real customer sites that have already implemented the system. (I suggested the author develop the document into an article as well, so we will likely be publishing a full article on the subject, at the TEC web site.)

I thought there were a few interesting points that seemed to jibe with a recent Strategy+Business magazine article I read concerning evidence-based management (I seem to be quoting them a lot lately). In the Strategy+Business article, Why Managing by Facts Works, the authors point out

“…we are convinced that when companies base decisions on evidence, they enjoy a competitive advantage. And even when little or no data is available, there are things executives can do that allow them to rely more on evidence and logic and less on guesswork, fear, faith, or hope. For example, qualitative data, such as that gathered on field trips to retail sites for the purpose of testing existing assumptions, can be an extremely powerful form of useful evidence for quick analysis.”

If a committee of people is going to be involved in a big selection project, it can analyze all of the business processes and software functionality possible, and it can see scripted vendor demonstrations, but it would still be pretty difficult to envision just how the system works out in the world, in actual production situations.

So if the steering committee of the selection project visits a few sites in their own industry, which have implemented the ERP system, they get the opportunity to see how unexpected issues arose and got resolved. They might witness benefits or problems that they didn’t expect or even consider beforehand. Finally, they have a chance to see how the system affects the people working with it. Because those issues would be important considerations for making a critical business and technical decision like selecting an ERP system, the site visit is a clear way to vaporize assumptions with real evidence.

Are Co-ops the Ideal FOSS Business Structure?

Free and open source software is a community affair. One would think it might be a perfect fit for a cooperative type of business entity. Businesses surviving and growing by virtue of FOSS ecosystems develop some interesting business models–the support and services model, for example (though becoming increasingly common), relies on the collaborative efforts of sometimes huge communities of people as the basis for its existence. Another model I once thought was pretty innovative came from Transgaming Technologies, which had the idea of letting users pay for a subscription to (among other things) vote on the company’s product roadmap (this was a while ago; I don’t know if they still operate this way). But in spite of these group-collaboration and voice-of-the-people aspects, most FOSS-related companies operate in a pretty regular corporate fashion. I may be ignorant of some company out there that is already doing this, but I cannot think of any FOSS-based companies that have organized themselves as cooperatives.

Sure, there may be many differences between the operation of FOSS-based companies and their proprietary counterparts. For example, Linux distributors have all kinds of organizations, processes, and ways to facilitate community participation, which they not only rely on for their well-being but also put great effort into nurturing. Doesn’t this go hand-in-hand with the idea behind a cooperative? I thought Strategy+Business Magazine’s recent article, A Cooperative Solution, was incredibly interesting and enlightening on just how successful co-ops can be (perhaps I’m naïve–the scale, power, and apparent efficiency that some have hadn’t dawned on me before). Not only did it explain, in depth, how massive organizations like the Dutch Rabobank or the Italian retail COOP thrive (sometimes even more so than their traditional counterparts), it also focused on how beneficial the co-op structure is to its communities of participants.

Here is a quote from the article concerning thousands of people involved with Rabobank. I’ll make two points about it.

“The members of the bank took part recently, for example, in voting on whether to merge some of its branches. That is the kind of crucial decision usually made by top management. But at Rabobank, it was the focus of long debate among all the members. It took Rabobank’s central organization nine months, many personal discussions, and two general assemblies to build consensus throughout its vast constituency on the consolidation issue.”

First, doesn’t that sound similar to how things often take place in Free and Open Source Software communities? Second, while it may sound like it took a long time, the article later details why some of these group decision-making processes, though on the surface they seem hugely inefficient or time-consuming, end up actually making the company, as a whole, much more lean and responsive down the road. The Strategy+Business article explores these co-op features:

  • Consensual decision making
  • Better communication
  • Leadership development in the company and community
  • Long-range planning and experimentation
  • Openness to learning best practices
  • The social dimension

The processes they evolve to facilitate these areas, and the effort they put into doing things “right” in the first place, tend to flatten out all kinds of problems that other, non-cooperative organizations face.

Co-ops’ greatest strength is their constituents. The co-op exists, by nature, for the interest of its members and the communities those members constitute. So while a public company might, for example, have to be constantly on guard to increase its earnings every quarter and thus satisfy shareholders (who are likely to have motives outside the scope of what is good for the employees and communities affected by the company), a co-op doesn’t face that problem. Even when they are very successful (as is the case of the co-ops profiled in the Strategy+Business article), co-op money flows to their communities, to their own success.

Lastly, in Tim O’Reilly’s recent blog post, Four Big Ideas About Open Source, his second point concerns the way open source companies have an ability to change the rules of the game. I suppose I argued for this advantage too when I wrote about what I thought a Linux distributor should do in order to win the typical mass-customer mindset for choosing an OS (change the whole rules of the game). Mr. O’Reilly said

“One of the most powerful things about open source is its potential to reset the rules of the game, to compete in a way that undercuts all of the advantages of incumbent players. Yet what we see in open source is that the leading companies have in many ways abandoned this advantage, becoming increasingly like the companies with which they compete.”

And he concludes that these companies should be Web 2.0 companies. OK, that might be the case; I haven’t formed any intelligent or unintelligent thoughts on that yet. However, I can’t help but think that maybe this is a chance for the dispersed, open development model to wed its counterpart in business, the co-op. It would certainly be unlike the competitors. I have a smirk on my face just imagining the article fallout with all the proprietary vendors crying “see, we told you FOSS is communism!”

Life, Staring Storage

Can we clearly see it all from one spot? In a Wired column, Momus discusses his pursuit of the absence of western-style storage living. He mentions a typical Tokyo apartment style, in which the center of the room is relatively sparse (object-wise) but the outer edges contain the information-storage of the inhabitants–things like closets of clothes, dishes, etc. The western style, on the other hand, I suppose is more likely to be arranged such that objects are placed throughout the living space and the center of a given room may not be relatively clear for, as Momus puts it, “processing.”

This all caught my interest, as my home office has been a source of frustration for me lately. It’s small, and in spite of my best efforts to adorn it with only a few of the totems I like to have near me while working on my creative endeavours, I think it’s impacting my senses too much, preventing the all-out focus I like to pursue. I’ve been feeling like the key to fixing all that is its arrangement. While I typically dislike metaphors that map human beings onto computer terminology, this one was compelling. It recalls the expression “out of sight, out of mind.”

I almost achieved this once. A number of years back, when I moved from the West coast of the US to the East of Canada, I got rid of most of my possessions and brought only what I could fit in a small VW hatchback. About 70 percent of that consisted of books and CDs. Momus notes the satisfaction this sort of exercise can bring, but he also mentions how now, oftentimes, a digital photograph is just as satisfying as the object itself.

Once in a while I get caught up, sometimes obsessively, trying to convert physical objects I own into digital representations so that I can store them on a hard drive and let go of the physical object. Or is that the reason? Maybe I’m just looking for an additional way to preserve them. Maybe their digitization is simply another layer of storage to deal with.

I mentioned my books. Momus’s article addresses books too (in an important way, though differently from what I’m about to say). I know other people for whom maintaining a physical library is very important, sometimes sacred (if you’re a literary sort). I like seeing the books I’ve read stacked around me. I like seeing the ones I intend to read stacked and ready too. Why? Most are not reference books, and I rarely go back to them to look up specific passages. I don’t often reread books, but I struggle to part with them. I do, however, sometimes sit and stare at their spines, looking from title to title, author to author. I remember what they were about and remember the characters that I lived with while reading them. I remember other things happening in my life during that time. This is in-sight, in-mind. Sometimes it inspires a new path of thinking about something or provokes a creative path. I don’t think this works the same with information stored digitally–where the ready-to-hand is not ready unless we can first envision it as such (onus: us). I suppose the tricky part is figuring out which objects need regular readiness as opposed to those that may be hidden away, lying in wait, for my need.

Linux TCO with Eyes Open

IBM published an overview of two recent Linux TCO studies. One of the studies was done by the Robert Frances Group and the other by a group called Pund-IT Inc. Unlike another recent attention-getting study, these found the cost results were in Linux’s favour. I haven’t seen the actual studies, so I don’t know much about the methodology they used, but it seems one was done by surveying twenty companies regarding their application servers, while the other was an in-depth review of three specific companies, each in a different industry. They concluded that the Linux deployments were significantly lower in TCO.

After the overview, the article provides an interview with the reports’ authors. One point that I thought was insightful came from RFG’s Chad Robinson. In discussing good and bad Linux deployments, he mentioned

“The people that go into Linux with their eyes open tend to be the most successful, because they don’t try to make Linux fit the old model. When you deploy Linux, it’s not enough to just to put a new operating system out there, because you’ve added an operating system to your mix, and that increases complexity. If you just drop Linux in as a replacement and you expect it to behave exactly the same way that your old operating system did, then you’re going to do a little worse than a little better.”

I think that makes a lot of sense. I frequently read articles that talk about advantages or disadvantages of deploying Linux; maybe whenever discussing these, there should also be a discussion of the ways they relate to and change the existing work environment. One might make a transportation analogy. Say I have a car that I sometimes drive to work, yet there is a cultural push to start riding bicycles instead. Perhaps this could be viewed as adding complexity, because the roads must accommodate cars and bikes. However, when I ride a bicycle, I never go to a gas station to guzzle at the pump–it would be pointless (well, if I was already feeling pointless I might make this a different story and have a sip or two). The two different modes of transportation do not have the same requirements. The advantages of one (it reduces pollution and saves money) would be counteracted if everyone stuck to the same old, unnecessary model by guzzling gas from atop their bicycles. Quite a catastrophe.

About the Evaluation Layer for Open Source Services

I just read Alex Fletcher’s first piece of the Open Source Software Bedrock. He delineates three layers, namely evaluation, adoption, and integration. Evaluation is what the other layers get stacked upon, and altogether these make what he describes as a supporting foundation for the policies, practices, and standards of the software’s life cycle. It seems to me that a guiding phenomenon inspiring the article is how FOSS changes the traditional selection/purchase process. Fletcher states:

“The traditional model of contacting a vendor, arranging a demo/sales pitch, wading through marketing fodder, etc. has been replaced with a model that shifts the balance of power from the vendor to the end user/customer.”

A point on this balance of power that I’m not sure surfaced in the article is that sometimes a potential customer of an open source software firm has already downloaded, installed, and sampled the software before contacting the vendor for support or other services or products. (This is less frequently true of totally proprietary software.) It means that the customer, on contacting a FOSS vendor, is doing so from a potentially more informed stance about what it requires from the vendor. This could also make it easier for the vendor to understand what is most valuable to provide the client. In other words, I’m not completely sure that this change in model necessarily shifts the balance of power. Perhaps instead, it shifts the needs-assessment and provision processes in a way that might benefit both sides.

Fletcher makes the point that this “…paradigm requires a more prepared and motivated end user/customer…”, which I appreciate, though I think in some ways it also aids that end. Another couple of points that I thought were well put, and would like to address, are Fletcher’s statements:

“It is a high priority to understand the exact support terms for a given piece of software, in line with any anticipated needs as revealed during the evaluation phase.” and “If the evaluation layer is done haphazardly the according adoption and integration layers will lack the proper support to be of any value.”

I believe this leads right into one of the greater points about evaluating an open source solution: how to ensure ongoing, stable, professional support. That seems to be a fear raised repeatedly by potential adopters of open source software. Yet support is the basis for many, if not the majority, of the vendors building their businesses around open source software. The support options are available, so the customer must make sure it identifies the proper ones, which may not be that simple. I think, as with other software evaluation practices, it is important to systematically identify business requirements from the different stakeholders within the company. Once those are well understood, the customer should thoroughly evaluate how potential vendors compare on all of the requirements.
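To make that comparison process concrete, here is a minimal sketch of a weighted-criteria comparison. The criteria names, weights, and ratings are all invented for illustration; a real evaluation would involve far more criteria and stakeholder input.

```python
# Minimal sketch of a weighted-criteria vendor comparison.
# All criteria, weights, and ratings below are invented for illustration.

def score_vendor(ratings, weights):
    """Weighted average of a vendor's ratings (0-10) over the criteria."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# Stakeholder-derived priorities for each (hypothetical) requirement.
weights = {
    "response_time_sla": 5,
    "kernel_patch_expertise": 3,
    "long_term_maintenance": 4,
}

# Ratings gathered during evaluation of each (hypothetical) vendor.
vendors = {
    "VendorA": {"response_time_sla": 8, "kernel_patch_expertise": 6,
                "long_term_maintenance": 7},
    "VendorB": {"response_time_sla": 6, "kernel_patch_expertise": 9,
                "long_term_maintenance": 8},
}

# Rank vendors by their overall weighted score, best first.
ranked = sorted(vendors, key=lambda v: score_vendor(vendors[v], weights),
                reverse=True)
print(ranked)
```

The point of weighting is that a vendor strong on the requirements the stakeholders actually prioritized can outrank one with a higher unweighted average.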

A resource for evaluating open source IT and Linux service providers is the FOSS Evaluation Center. It offers about a thousand criteria addressing different support requirements a customer might have of a vendor, and it lets people compare vendors on each point. I designed those criteria, so it’s a bit of a plug, but it can be accessed for free, and I hope it’s useful. Another resource that might be useful toward the evaluation end (though I don’t have any experience with it) is a site called Find Open Source Support.

A Real Year of the Linux Desktop–What’s Needed

They said it at LinuxWorld in Toronto a few months ago. They’ve buzzed it at analysts, and now the press is saying it to the public. Novell says this is the year of the Linux desktop, and I’m familiar with evidence showing gains in popularity for Linux. Yet I disagree that this is the year. Nothing is happening this year to make it, specifically, the year of the Linux desktop, and I’m going to hypothesize what could change that.

To me, there’s no contest: GNU/Linux systems have been offering more innovative, stable, easily productive, and pleasant desktop systems (KDE, for example) for years. However, that’s not enough to move Linux to a place where it challenges the automatic momentum both Microsoft and Apple enjoy within the mindset of the general population (at least in North America–perhaps elsewhere this is different). A shift in that user/customer mindset is what is needed to turn this into the year of the Linux desktop–and Novell isn’t making much of a dent in this regard.

Jem Matzen wrote about why specialized systems, as opposed to fancier eye candy, would be a better answer to move in this direction (that’s my very over-simplified paraphrase). I appreciate that notion in part; I’d like to suggest something else though, something which I think would give GNU/Linux and FOSS applications a really potent way to shift the public’s mindset toward their adoption. Even better, it’s a business model that could only really work in its entirety within a Free and open source ecosystem. What I’m suggesting is essentially like something James P. Womack and Daniel T. Jones recommend in their book, Lean Thinking, except applied within a FOSS ecosystem.

To catalyze the required mindset shift–and this may appear plain at first glance, so let me flesh it out–if a customer could easily buy a computer system stacked with the desired hardware, configured software, support expertise, an update service, and a backup service, in addition to having automatic access to a range of web services (like music stores or VoIP services) optionally pre-set up, it would be a completely compelling solution. What’s so special? Don’t we see that from the likes of Apple or Dell? Not really. No company that I’m aware of actually does this to the degree I’m proposing, but a GNU/Linux OS distribution is the one that would fit this model and allow it to work, now. I’ll continue by talking about what such a fictitious GNU/Linux solution provider would do, and I’m going to refer to this fictitious company as Fictux.

A full computing solution should come from a company that pre-bundles everything its customers want, consistently supporting it, for the duration of ownership. It should not require anxious intervention from the owner when the owner desires a new component or new system, and the new system should have all data and applications from the old system installed, setup, and accessible upon delivery.

1) Getting the computer. It’s not impossible to find a company on-line that will sell a computer set up with Linux. There are some hardware vendors offering compelling Ubuntu and Linspire preinstalled systems. Every now and then you even hear about a big-box store selling some Linux PCs. Some companies, like Dell, even let you pre-configure the hardware components to varying degrees. Fictux would make this selection easy; it would have pre-tested the hardware to be sure it all works together in combination with the applicable software. This is not a new idea, but it must be combined perfectly with the rest of the service.

2) The right software, configured right. The system cannot simply be preloaded with a Linux distro! Most average users probably don’t perceive anything extremely compelling about getting a system from an OEM with Linux preinstalled; they might as well have Windows. Worse, getting a new system with the standard OS leaves too much effort to the user, who must seek out and install all their desired applications (this is true of Windows, Macintosh, and Linux). Most standard Linux distributions get a running start (bundling thousands of apps) compared to Windows or Mac systems, but sometimes too many apps are a detriment. Worse is when the user gets apps targeting what s/he wants, but not the specific ones s/he wanted (say I want Kopete while my distro automatically gives me GAIM).

A long time ago, when I was a dedicated Mandrake (Mandriva) user, I remember suggesting (and I don’t recall if this was in a user forum, an e-mail, a comment form, or what) that they let users select every software package they want in advance of downloading an installation ISO. Then the user could download a totally custom version of the distribution. That’s to say that Fictux would offer custom versions of its distribution, tailored to exactly what the user wants the instant the system is turned on. This must be done at the time of purchasing the hardware.

Could Microsoft or Apple get agreements, permanently ongoing agreements, from the thousands of potential proprietary software vendors a customer might want to have installed? Could Microsoft or Apple charge a humane price for such a system? It doesn’t seem plausible. However, a Linux-based manufacturer can do this because of its FOSS ecosystem.

If I were the customer, obviously over the computer’s lifetime I’d want to occasionally install something new. But currently, when I install a Kubuntu system for the first time, for example, I have to search through a package repository interface (though it’s an easily unified one) for whatever I want to install, then tell it to install–the consequence is that every time I set up a new computer with the operating system, I spend half a day just adding the applications I want and configuring them. Yet a Linux distribution is already a carefully selected collection of Free software applications, tied and tested together into a whole system. Why is practically every distribution offering its common system (sometimes there is a server or business version) and then asking the user to install all the options? Fictux would ask about the options first and make the distribution the user’s distribution. It could be an audio-work-oriented distro, a desktop publishing distro, or a file server distro, immediately upon powering on, and according to the user’s taste. Furthermore, and I’ll expand on this when I get to backups, it should already be populated with the information about the user, his/her preferences, and files.
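The ask-the-options-first idea can be sketched as a simple manifest builder: union a base system with the user's chosen task profiles, then honour explicit swaps (like Kopete instead of GAIM). The profile and package names here are hypothetical, chosen only to mirror the examples in the text.

```python
# Sketch of how a "Fictux"-style builder might turn a user's choices
# into an install manifest. Profile and package names are hypothetical.

# Packages every system gets (hypothetical base set).
BASE_SYSTEM = {"kernel", "coreutils", "xorg", "kde-base", "gaim"}

# Task-oriented profiles the user can pick from at purchase time.
PROFILES = {
    "audio-work": {"ardour", "jack", "hydrogen"},
    "desktop-publishing": {"scribus", "gimp", "fontforge"},
}

def build_manifest(profiles, extra=(), exclude=()):
    """Union the base system with the chosen profiles, honouring the
    user's explicit picks (e.g. Kopete instead of the default GAIM)."""
    wanted = set(BASE_SYSTEM)
    for p in profiles:
        wanted |= PROFILES[p]
    wanted |= set(extra)   # packages the user asked for by name
    wanted -= set(exclude) # defaults the user explicitly rejected
    return sorted(wanted)

# An audio-work machine with Kopete swapped in for GAIM.
manifest = build_manifest(["audio-work"], extra={"kopete"}, exclude={"gaim"})
```

The resulting manifest is exactly what the user wanted the instant the system is turned on, rather than a generic default plus half a day of manual installs.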

3) Provide the support expertise. Plenty of companies, especially in the open source world, have chosen a business model of providing support services. Why is this often a company independent of the hardware, software, or other services? Of course they’re not all independent companies, but Fictux, in providing each point I’m detailing here, would also be the single point of contact for any support-related issue: software questions, hardware failures (even to the point of arranging pickup and delivery replacement service), possibly even in agreement with the ISP.

4) Manage the update service. If there is some sort of hardware recall, Fictux would be responsible. As new technology becomes available, Fictux stays on top of it and folds the new tech into its service. It’s got to preemptively know which hardware will best support new software and be able to let the user know, without requiring the user to research all kinds of options and configurations. I think the transparency of the many test releases in open source development might be especially helpful in this regard. As fixes for software bugs, security holes, and new versions become available, the company must manage these and make them simple for the user to be aware of and apply. This is essentially a no-brainer for Linux distributions; most of them already do this on the software side, so it’s a matter of making the process as effortless on the hardware side. For example, the current excitement is the Novell-sponsored Xgl/Compiz combo, which requires certain graphics hardware. Fictux would offer this alongside its software update service so that the user immediately and easily understood what would be needed to get the latest fun features. Linux systems are generally able to support the hardware I throw at them (often more easily than Windows), though some exceptions stand out–as Linux systems gain in popularity, I expect this issue will continue to decrease.

5) Make the backup service easy and more useful than just a data backup. A number of different Internet-based backup services have been sprouting up, both for businesses and regular home users, but these don’t interconnect as an integral part of the rest of the products and services I’ve mentioned for Fictux. Backing up data should be easy and automatic. It should be secure and accessible. But let it do more than just back up data. It could be used for preconfiguring a system. Save all the configuration data throughout users’ computers’ lifetimes, even as new applications are installed. When it’s time to buy a new system, the customer won’t have to reselect all of his/her applications (like the first time) because they would already be known to Fictux. Even better, the computer system that the user receives would include all of his/her data, settings, bookmarks, etc. Many of these could even be imported from non-Linux systems at the first order. This would be like a dynamic “ghosting” system for companies that continually have to order new computers for employees. I’m sure there are vendors that already deliver similar services for large organizations, but again, I’m not aware of a company that does it in conjunction with all of the rest of the items I’ve detailed, scaling from one to hundreds or thousands of units.
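The core of that dynamic "ghosting" idea is just capturing the user's selections and settings as data that round-trips losslessly, so a new machine can be provisioned from the snapshot. This toy sketch uses JSON for the snapshot; the profile fields are invented for illustration.

```python
# Toy sketch of the "dynamic ghosting" idea: capture a user's package
# selections and settings as data a new machine could be provisioned
# from. The profile fields below are invented for illustration.

import json

profile = {
    "user": "jane",
    "packages": ["kopete", "ardour", "firefox"],
    "settings": {
        "browser_homepage": "https://example.org",
        "keyboard_layout": "us",
    },
}

def snapshot(profile):
    """Serialize the profile so the vendor can pre-populate a new system."""
    return json.dumps(profile, sort_keys=True)

def restore(blob):
    """Rebuild the profile on delivery of the new machine."""
    return json.loads(blob)

# The snapshot must round-trip losslessly for the new machine to arrive
# with the same applications and settings as the old one.
restored = restore(snapshot(profile))
```

In a real service the snapshot would be updated continuously as applications are installed and settings change, so the latest one is always ready when the customer orders a replacement machine.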

6) Pre-set-up web services. Deals used to come bundled by some manufacturers: months of AOL at a discount, just click the icon to activate it. Instead, allow the user to select the web services they use or would like to use (say, VoIP services, on-line music stores, and even free services such as favourite Internet radio stations) in advance of receiving the computer. It would just be another configuration the company could easily arrange for its customers before they even start using their computers, and more importantly it would allow Fictux to include the appropriate hardware to support these services (an audio file player? a headset? etc.). It may be argued that these services are too vast to manage, but I think Fictux could find a way to bundle a service distribution in much the same manner it bundles the thousands of Free software applications in its repository.

Finally, as I said at the beginning, none of these ideas are necessarily new in and of themselves; they just haven’t all been offered together by one company. If each can be done by some company, why can’t they all be done by a single company? It should appeal from a business perspective because each provision of a service or product helps the company further its sales effort within its own solution chain. The more important point, however, is the customer/user. Each step of buying a computer, using it, managing to obtain and use software, hardware, and services, and finally, after a few years, buying a new one, is accompanied by anxiety, research efforts, and ultimately wasted time for the customer/user. A company should eliminate all of that extra effort. Most users only undertake these efforts because they have no choice (read: these steps themselves provide no value for the customer/user). As I mentioned in my second point, only a FOSS vendor can adequately offer such a solution. Furthermore, I think a FOSS vendor would be especially suited to doing the other steps well (such as the web services/hardware pre-configuration integration) because of its existing expertise in packaging complex and diverse software configurations.

A single vendor that can accomplish all of these steps would be offering something incredibly appealing to the masses (neophytes and computer experts alike) because it would be offering the only solution that is valuable from the start, with a minimum of wasted customer/user effort. I think this kind of solution would differentiate a company enough to challenge the automatic momentum Microsoft and Apple enjoy within the mindset of the general population. When it arrives, it might even shift the gradual gain in Linux adoption into something more pronounced: the year of the Linux desktop.

Sides of Subverting Open Source

Martin Schneider at The 451 Group commented on whether the collective “we” can be too jaded regarding some proprietary vendors’ apparent embrace of open source methods. This was in response to a piece by Dave Rosenberg and Matt Asay about subverting open source for the sake of certain marketing purposes. Rosenberg and Asay essentially say that Microsoft and SAP have a well-known history of speaking out against Free and open source software (FOSS) and its concepts.

Certainly, Microsoft and SAP have put effort and money into spreading fear, uncertainty, and doubt (FUD), and both have publicly made sometimes very strange statements about or against FOSS. Yet recently, both have been putting some effort into releasing bits under an open source model or else funding some open source development. Rosenberg and Asay seem to think there is an ulterior motive,

“Any outreach attempts from vendors who have worked for years to destroy open source should be taken with a grain of salt and a sharp eye cast on motivating factors.”

Or could this mean, as Schneider suggests, that these companies are beginning to join the community’s stance that open source “…is simply a better way to create and distribute software”? Rosenberg and Asay seem to take that into account by acknowledging that the project leaders for the open source initiatives within these companies probably are working in earnest. Still, I can’t help but lean toward a bigger picture: as a whole, something else, more involved, is taking place.

It makes perfect sense, if you’re a proprietary vendor, to delve deeply into your FOSS competitors, for several reasons. I believe there are serious reasons to be wary of such proprietary vendors’ forays into FOSS and, at the same time, to embrace them. Here is why.

First, any vendor has to know what it’s competing against. This is just standard good business practice; there is even an industry devoted to supporting this idea: competitive intelligence. What better way to understand the new models undoing your traditional strategy than to emulate them and find out how they work? The more you understand, the better you can build your products to compete and win. If the FOSS community innovates new technology, Microsoft wins by learning it and improving upon it for its own products, just as any good open source vendor would want to do (of course, an open source vendor would also participate by feeding those improvements back to the community).

Second, what about that often-cited Machiavellian notion of keeping your friends close and your enemies closer? If Microsoft can successfully attract an open source development community into its fold (so to speak), it gains a very powerful tool: a foothold in the “enemy’s” camp, which allows it to anticipate and prepare its proprietary strategies.

Third, does it hurt the proprietary vendor in any way? Its proprietary business and propaganda are in full swing, and everyone already knows about that. On the other hand, FOSS and Linux are gaining recognition. I’ll make an educated guess that FOSS and Linux are still not as well understood, in concept, by the majority of business decision-makers, much less the public in general. They still lack the massive public acceptance that most software vendors currently enjoy with their traditional proprietary business models. However, as that understanding and recognition grow in positive ways, it can only help companies like Microsoft and SAP to be able to show they’re just as involved in the leading edge of technology practices. It’s simply good PR. If Microsoft and SAP can manage this while maintaining their proprietary side, so much the better for them (from their perspective).

Fourth, let’s suppose there truly is an ulterior motive to subvert FOSS communities. In the shoes of a company like Microsoft, it makes sense to blur the lines of differentiation between your proprietary approach and real FOSS approaches (hence the shared-source initiative). The harder it is for critics, detractors, or enemies to clearly differentiate your approach from their own, the harder it will be for them to spotlight your weaknesses and their strengths, and thus the customer cannot act on clear information when making software selection decisions. Furthermore, if you actually do participate in some ways with the FOSS community, you may gain some supporters who will defend, in good conscience, your motives, and possibly even turn a blind eye toward some of your other, less savoury practices (this not only blurs more boundaries but again helps with grassroots PR, which is oh-so important on the Internet).

Finally, I’d like to say that there is already no clear side-versus-side here; we have to pay attention to the grey to really comprehend the situation. While I think we can see companies like Microsoft and SAP employing some intriguing strategies for subversion, and there are battles between models and methodologies, to a degree there is also some learning and the adoption of new and better practices. Because of the co-opetitive nature of FOSS models, gradual adoption by proprietary vendors may even, unexpectedly, end up subverting those vendors’ own models. We are not being too jaded when we remain constantly wary and suspect these companies of efforts to undermine FOSS, but we should, at the same time, cheer them on when they actually do participate in real FOSS processes.

Net Neutrality and Future Legacies

I’d like to comment quickly on the net neutrality issue. The Web has so far been a system that, from the beginning, essentially anyone could access in the same manner. A few companies have a strong interest in changing that, though, by creating what I understand to be something like tiers of accessibility. Considering the changes to life and society provoked by the new kinds of creative innovation the Web has fostered, I think changes limiting Net interoperation are incredibly bad ideas. A basic idea Tim Berners-Lee puts forward is

“Freedom of connection, with any application, to any party, is the fundamental social basis of the Internet, and, now, the society based on it.”

This may sound abstract to some, but Bob Frankston wrote an entertaining piece that illustrates the unsavoury results of losing such freedom. For a thorough and technical analysis, I find Daniel Weitzner’s text on The Neutral Internet: An Information Architecture for Open Societies interesting.

The thing is, whatever starts taking place now, technologically or in government policy, is going to be around for a while. People will adapt to, install, and use software that is based on, or otherwise enforces, such technologies and policies. That means we have to imagine the consequences of a future saddled with the legacies we’re creating now. I hope we act to keep our liberty intact.

PeopleSoft Nuisance in North Dakota

A Computerworld article covers some of the problems (and ends with a few happier notes) with a PeopleSoft (Oracle) ERP implementation taking place in North Dakota’s government and education sectors. Although the state agencies sound generally satisfied, the article focuses on the North Dakota University System’s unhappiness with the unexpected, massive cost and time overruns in getting its system implemented.

Why did they underestimate the costs, which ballooned from the extra time required for the (still) incomplete implementation? The article suggests the lesson to be learned is never to embark on a major project like this without employing a full-time project manager (which, surprisingly, it sounds like this implementation lacked from the start). But there is something else to learn from the article:

“The academic software modules, particularly a grants and contracts management application, also did not perform as expected and have required extensive customization, said Laura Glatt, vice chancellor of administrative affairs at the Bismarck-based university system.”

I wonder why they did not expect this. Perhaps their original RFI/RFP was not designed to request that information? Did they script demonstration scenarios for the vendor to show how the modules would accomplish the sort of functionality they needed? I’d think there could have been some way to prevent this issue–maybe the ghost of a full-time project manager would have thought of that during the selection and evaluation phases.