Slow Erosion toward Open

In the slew of posts today on the Microsoft/Novell agreement, I think one of the most interesting comes from David Berlind. David draws out how large companies like Google or Microsoft maneuver, in particular around disruptive technologies. The established companies have to, one way or another, embrace these disruptions quickly, and there are a number of ways to do that. He notes how Google acquired Jotspot in that regard. Microsoft is faced with a lot of disruptive technology issues, the biggest being the sea change of Free and open source software. So the hidden twist in the strategy may lie in furthering .NET via Novell's Mono project.

What I think makes Berlind's point so interesting is the following observation:

"Looking around at the many startups that [are] springing up all over the world, that trend to either build-on or be an open source company isn't slowing down. It's speeding up which means that, going forward, the only choice for closed-sourced companies to respond to disruption (or create it) may be to acquire open source companies."

And if that's the case, the result is that over time these proprietary companies will become open source companies. It feels a little like that stat about human cells regenerating entirely every seven years–you're a whole new person. Will the same hold true for the software industry?

Oracle’s Community

And another thing! ZDNet's Dana Blankenhorn points out one of the more interesting issues in the Oracle Linux new world order: the community's perception. I didn't comment on this in my previous post on the situation because, although it's crucially important, it wasn't within the scope of what I wanted to express. Dana notes that the Oracle support business for Linux "…is aimed, not at competition, but at domination."

That is the perception one usually gets from reading news about Oracle's moves. It is the perception selected from sound bites and media clips of Larry Ellison's talks. Oracle conquers and dominates; it positions itself to have the appearance of shrewd aggression. Its employees have commented on that feeling pervading their work environment. That approach has brought Oracle success in the proprietary software market. Can it carry the company into the FOSS world?

Blankenhorn is essentially pointing out that Oracle will have to undergo some real change to be an accepted part of the FOSS community. What if Oracle chose not to care? Would it be able to pull off its support model successfully? That remains to be seen. Quickly running through other companies in my mind, I cannot recall any that have successfully delivered and supported FOSS solutions without somehow striving to be members in good standing of the FOSS community. So once Oracle has initial customers on board with this initiative, will it be able to follow through down the road?

Maybe the conqueror will be changed by that which it has conquered.

FOSS Support and Differentiation

One of the old but recurring fears of those considering a move to open source is that there's nobody to call when they've got a problem. Yet I'd say most FOSS companies exist for exactly that reason.

I recently read a well-put example of an open source support process from a Sun blogger. In the example, Tim Quinn discusses how a Sun customer went to an open source community for help with his problem and, when a clear solution wasn't forthcoming, escalated the issue through a formal support agreement he had with Sun. That enabled Sun to put its resources into the issue and solve the customer's problem. So even though there was community help readily available, there was another layer of paid support that the customer could rely on to solve his problem.

When one looks at the different companies providing support services, one can see definite similarities. I'm saying that a hallmark of open source software companies is providing support services. What is becoming more interesting is the way in which these services will differ as different industries grow their own open source ecosystems. I expect the services will start appearing with unique characteristics, and a comparison of these among industries may propagate changes across open source support providers. A Computerworld article on open source health care applications makes an interesting point that I think speaks to the support issue. According to the article:

“With open-source technologies, development and adoption go hand in hand. The robust and growing HIT offerings did not emerge from vendors marketing to health care providers, but from HIT teams serving the providers themselves. Therefore, adoption has been organic, based on community ‘pull’ and not on commercial ‘push’.”

The article is talking about the way in which open source applications have been adopted by health care providers. What I find interesting is that development of the applications is driven by the requirements of those in the traditional customer role, while the provider role remains distinct. What will those providers (like WebReach or Uversa) be doing down the road? They know there is a solid group requiring exactly what has been developed. Surely they will continue development, but they're also going to need to support all this development output. It looks like the "community pull" is generating more professional support services, and thus I'd think those may mimic the pull model of development.

Innovation and Invention Query

This post might be innovative. I doubt it's inventive; mostly I'm reopening something that's long been open, and I want to get a better understanding of why.

I saw a lot written on the topic of innovation versus invention a few years ago. Now all I see in IT, and especially IT business-related articles, is the notion of innovation. For some time people were lamenting the industry's apparent all-out focus on invention, when what they were really calling for as the true path to business success was more innovation. Then it seemed there was a media round saying: no, really, all these successful tech companies are actually innovating, not inventing, let's not confuse the two. Then countries started getting labeled as having cultures of innovation as opposed to invention. Do a few searches, you'll see the topics I'm referring to. And that brings me back to the idea that everyone's concerned with innovating now.

The Wall Street Journal's recent article on the Right Way For Firms to Be Creative featured an interview with Nick Carr covering what was promoted as his somewhat unorthodox ideas about the innovation trend (though I'd take issue with that; they don't seem very controversial). While he acknowledges the value of innovation, he warns that American companies are too in love with the idea, stating "…that innovation isn't free, that innovation actually is quite expensive and quite risky." Carr goes on to offer some interesting examples, though they're quite similar (way to innovate, Carr ;-)) to those put forward by Bill Buxton in his piece on design innovation and invention (PDF).

On the difference between innovating and inventing, Buxton stated that "Too often the obsession is with 'inventing' something totally unique, rather than extracting value from the creative understanding of what is already known." Or, in other words, grasping what has already been introduced and building on, improving, and extending it. I can see why innovating appeals so much to business, especially in tech fields. For one thing, by simply changing or improving on the existing, companies have a more solid base for selling their products or services, they're introducing something that may already have a level of familiarity to the target audience, and they get a jumpstart on the product/service since they're not beginning from scratch. (And I might add, innovation is clearly built in to the Free and open source paradigm, as FOSS licenses explicitly encourage innovative activities–this is a topic I'd like to explore further.) Note, however, that Carr raises an issue with companies obsessing over innovation as well:

“They lose sight of the fact that innovation isn’t free, that innovation actually is quite expensive and quite risky… You want to make sure that you innovate in those few areas where innovation can really pay off and create a competitive advantage and not innovate in other areas where it won’t pay off.”

There is great incentive for companies to innovate as opposed to invent. Maybe the initial invention represents a risky value proposition, so innovation is seen as a safer bet; it can stand on the shoulders of known successes, or at least, the known. However, as Carr points out in the quote above, innovation has its own problems, which I think sound equally applicable to invention. I'd like to refer back to Buxton, who states that "…success in capitalizing upon design and innovation is primarily a cultural thing, and shaping corporate culture is an executive responsibility." So strategic innovation must be the phrase of the day–considering the risks involved.

Look, if there are similar risks to innovation and invention, can't we say that both require the same sort of corporate stewardship to be beneficial in business? I think that all too often discussions of innovation and invention pit the one against the other. Using a "versus" analysis as the crux of the questioning or exploration of this issue leads us down a looping and self-destructive path. In other words, why try to say one is better than the other?

Without invention, won't our platforms for innovation disappear? We need both; the real issue should be to understand how and when and where to devote energy to each. I'd like to see more thought about the interplay between invention and innovation and the creative, developmental, and business strategies for harnessing that interplay–I believe that would be a far more useful discussion.

RFI Collection Days Begin

Technology Evaluation Centers (TEC) has been working on a very large software selection project for an electric utility. We only take on a few specific projects a year (though lots of people/companies use our analysis tools and data for their software selection projects). After our team mapped the utility's business processes, quite diligently, to 6654 ERP/SCM/BI and utility-specific criteria (like billing systems, asset management, electricity generation, and fleet management), we finalized an RFI. So now I've been busy the last few days qualifying and corresponding with companies that want to participate in the RFI response process.

This is an exciting step because all the work everyone put into determining and designing the RFI criteria is finalized, so there is a "tangible" accomplishment in hand. Although we regularly send our standard RFIs to various companies for the research areas we cover, when we embark on custom projects like this we amass a lot of new information very quickly. It always amazes me, the variety of companies from countries around the world that take an interest in a call for an RFI. Oftentimes we discover successful software companies that simply don't turn up in regular (North American) IT publications. I end up talking with a lot of people about the RFI. It's not always a good fit for their products, but it turns out such companies sometimes fit nicely with other projects. My point is, I like the discovery process.

The next big step will take place in a month when we receive the responses to the RFIs. That's when it really starts to get interesting because we import the data to our evaluation system and start analyzing the different vendors' support capabilities.

More Noticings from LinuxWorld Expo SF

After attending LinuxWorld Expos for a few years, I noticed a certain trend starting… that is, desktop Linux usage appears to be visibly on the rise. Considering I was attending a Linux-centric show, one would think that's a given–but it's not. I've been paying attention, informally (this is no scientific survey), to what people use at these shows. At many of the previous LinuxWorld Expos and Open Source Business Conferences I saw that not just a lot but probably the majority of the attendees were using Windows-based laptops, and, more interesting, a significant number of the presenters were using Microsoft PowerPoint on Windows for their slides.

With the variety of companies present at these shows, not all of them should be expected to be pure-Linux or FOSS shops. A lot are providing a particular sort of software that happens to support Linux among other OSes, so it's not entirely surprising that they'd be running OSes other than Linux. Still, at a Linux show, they ought to put their best feet forward. Anyhow, this time there were quite a few presenters showing slides with OpenOffice running on Linux. As I spied attendees typing away on their laptops, I saw that a lot were also running Linux. The predominant distro appeared to be a version of SUSE. Does the SUSE dominance portend something in the future of Linux-based business desktops?

As an aside, Linspire tried pumping the exhibition center full of its pheromones and jumped everyone in sight, thrusting shiny boxed copies of its distro (with free manuals and other goodies). Always curious, I accepted. Linspire has no aspirations to the enterprise market, focusing solely on the home user, which makes it a bit of an anomaly among major Linux distros. Even other easy desktop distros like Ubuntu, Mandriva, and Xandros seem to have a few enterprise aims.

The Creative Commons had some great promotional items (and a well-done bit of DVD propaganda). My donation got me a snappy shirt that would please either Campbell's soup or Andy Warhol–it's hard to say which would have been more impressed. I was also instructed to take not one pin, but whole handfuls of pins until blood gushed from my palms. The trick now will be distributing them; maybe I'll place them on the seats of some Metro cars.

And what about the enterprise apps? It would be nice to see more variety in the way of FOSS enterprise applications at LinuxWorld Expos (IT management, security, and development-related apps feel dominant, although as Linux gains home users, I wonder how the show will change to cover their interests). A few of the enterprise app high points included Openbravo, an interesting new open source ERP system. I'll have more on them (via TEC) in a bit. CentricCRM was visible and trying to show how it could be more appropriate for a large enterprise than its popular open source sister, SugarCRM. Finally, I'd like to mention Adaptive Planning, which, although it has been in business for some time using an ASP model, released its business performance management software under an open source license just days before the show. Adaptive Planning now offers an on-line service as well as an on-site implementation with several different commercial options.

As a last note, I attended a presentation by Ingres's CTO and strategy VP, Dave Dargo, in which he made the point that something like up to 80% of most IT organizations' budgets goes toward maintenance. If I understood him correctly, he was making a point about how many existing proprietary vendors aren't really interested in new innovations with their customers in mind; rather, there is a lot of incentive to focus on all the clients spending money on old projects. I reflected on some of the recent acquisition press, and maybe I'm slow to have this really sink in, maybe it's been quite obvious, but if this is the case then it puts a certain perspective on the motivation behind the Oracles, the Infors, etc. gobbling up so many competing companies. Does it matter so much whether they can offer a better solution for the customers of these gobbled-up companies? Or whether they can integrate the wide range of systems? Is it mostly just important to them to milk a larger stable of maintenance customers?

A Few Noticings at LinuxWorld SF

I had a few small problems getting in. They don't care much for analysts masquerading as wizened party crashers with large yellow balloons depicting the faces of thousands of homeless Linux users who'd like to make Moscone Center their home.

Not that I’d do that.

Virtualization seems to be quite the theme these days. The last LinuxWorld I went to was in Toronto and it was a very different experience. The Toronto show seemed a bit lackluster in comparison… however it felt more personal and had one of the most interesting keynotes I've ever attended, namely the fascinating explanation of National Geographic and IBM's genographic mapping project (can't wait to participate). Of course that was rather loosely related. Back to LinuxWorld SF and virtualization… after listening to several talks I don't quite see what makes this such a hot topic… some cost savings on CPU licenses, etc., but what else?

There was a very interesting panel discussion on legal FUD with Eben Moglen, Christine Martino, and Stuart Cohen. It was a good combination, especially when it came to the GPL v3 issues. The commentary was more multi-faceted and less extreme-sounding than it's sometimes portrayed in popular media articles. In particular, it wasn't as though HP came out heavily in contradiction to the draft content… it was much more in the spirit of recognizing the collaborative work-in-progress nature of the process. In addition, none of the parties seemed to feel this draft process was causing any problems with the development community–noting FOSS dev was continuing just as rapidly as ever. Moglen noted that it was all going according to schedule and he expected things to stay on course for completion in 2007 as originally intended. Why has there been so much media fuss acting as though the drafting is moving too slowly? Maybe that signifies a lack of background research and too many people repeating one another.

Another interesting bit was the database situation… EnterpriseDB winning best DB (nice for them and PostgreSQL) but also the presence of Ingres. Ingres, now (mostly) independent from CA, is an open source database to watch. They have thousands of customers and a long, solid history to back them, but at the moment they're pretty much a CA-only solution. So it will be interesting to see how their new-found visibility and open source licensing will help them expand beyond CA's shadow and get supported by some of the other major enterprise solutions. When I talk to open source enterprise vendors, I'm still only hearing support for the major proprietary databases and PostgreSQL or MySQL.

Finally, before I wrap this up, I’ll note that this is my first post from my new Sharp Zaurus c3200, which arrived just before the show. It’s a great little Linux device! I’ll say more about it later and I’ll have another post on LinuxWorld SF noticings.

GPLv3 and Corporate Contrarian Hype

The latest draft of the third GPL version is provoking a lot of argument, posturing, and controversy. I'm glad its careful drafting process is taking the amount of time it is. I think it's useful to widen the sphere of public awareness on the issues the license addresses. Some of the most controversial issues, such as digital rights/restriction management (DRM) and patents, are going to impact our lives and culture in far-reaching ways (they're not isolated from technical and business issues). Yet a lot of people discussing these issues don't seem to apply the rigorous thinking that is required.

I’d argue the FSF has a track record of considering important issues like these, with foresight and the creative will to develop strong, practical solutions, staving off potential damage to our freedoms. Damage that would otherwise be carried out by imaginary legal entities armed with human bullets, which fly toward profit so quickly they miss all other practical and ethical issues. The solutions have also enabled a huge amount of innovation and positive change.

But some of the most visible contrarian opinions on this draft of the next GPL–opinions which are getting hyped–are mostly irrelevant in the greater scheme of things. I'm talking about the recent HP issues that were circulated around numerous web sites. To quote Christine Martino, vice president of Open Source and Linux at HP, from the Internetnews.com article linked above:

“HP had hoped that the second draft would clarify the patent provision such as to ease concern that mere distribution of a single copy of GPL-licensed software might have significant adverse IP impact on a company…”

What does that mean, "significant adverse IP impact"? It's removed from its context so I can't be sure, but it sounds to me like the HP folks are taking issue with something the new GPL would prevent them from doing with their patent portfolio. Furthermore, referring to HP's commentary, the Internetnews.com article states:

“The second draft of the GPL version 3 license is not even a day old and already one of the largest Linux vendors in the world is taking issue with its content.”

So what? The FSF is interested in freedom, and its foresight in ensuring and encouraging that freedom has proven eminently practical, giving rise to the IT business shifts underway because of FOSS. Although the Open Source Initiative fairly claims the pragmatic approach under its rubric (as that is its stated goal), that doesn't imply an either/or stance. Freedom doesn't preclude pragmatism. Unfortunately too many articles treat these notions as mutually exclusive.

While the previous GPL versions were produced with the goal of promoting freedom in software development, they also triggered important business and social developments. Why were they so successful? Because many, many individuals adopted these licenses. The freedom and collaboration they enabled for masses of individuals in free software development is key. A recent ZDNet blog post states:

“But that’s not where the debate will really play out. It will really play out in the market. Will GPL companies switch to GPL v3, or explicitly demand retention of V2, which is frankly vague on the DRM question.”

Does it matter if major business entities like HP object to certain freedom promoting aspects of the license? Does their wanting to switch really signify whether the third version will be successful in getting adopted? I don’t think it matters much at all. HP is a player in the free software community, it is not the player. And that is true of everyone else. So HP commentary should be considered for what it’s worth–nothing more.

So, I'd argue against Steven J. Vaughan-Nichols's point that the GPLv3 will be dead on arrival. As he mentions the HP issue, he also mentions Linus Torvalds's objections. This is fair; from what I've read, Torvalds has some clearly thought-out opinions, and some of them sound quite reasonable. As I said at the beginning, I like the debate this drafting process is raising, Torvalds and HP included. Nevertheless, from what I've read of Torvalds's arguments, I have the impression he is single-mindedly focusing on technical issues and intentionally excluding debate on all else. I just don't believe that's OK. There are too many important, non-technical ramifications interconnected with information technology to ignore. It doesn't mean everyone must think about these things, but how does it help to intentionally excise them from the debate?

——
Addendum

A new story covering the feedback issue in this debate was just published at NewsForge. It provides a useful balance among the different sides involved.

Reference Site Visits, the Evidence

Yesterday I was editing a document for a project in which we're helping an organization select its ERP system. The document covered practical reasons that the organization's selection steering committee should take part in reference site visits. In other words (and this is a regular practice our company recommends), while evaluating candidate systems, the people responsible for overseeing the selection ought to visit real customer sites that have already implemented the system. (I suggested the author develop the document into an article as well, so we will likely be publishing a full article on the subject at the TEC web site.)

I thought there were a few interesting points that seemed to jibe with a recent Strategy+Business magazine article I read concerning evidence-based management (I appear to be quoting them a lot lately). In the Strategy+Business article, Why Managing by Facts Works, the authors point out:

“…we are convinced that when companies base decisions on evidence, they enjoy a competitive advantage. And even when little or no data is available, there are things executives can do that allow them to rely more on evidence and logic and less on guesswork, fear, faith, or hope. For example, qualitative data, such as that gathered on field trips to retail sites for the purpose of testing existing assumptions, can be an extremely powerful form of useful evidence for quick analysis.”

If a committee of people is going to be involved in a big selection project, it can analyze all of the business processes and software functionality possible, and it can see scripted vendor demonstrations, but it would still be pretty difficult to envision just how the system works out in the world, in actual production situations.

So if the steering committee of the selection project visits a few sites in its own industry that have implemented the ERP system, it gets the opportunity to see how unexpected issues arose and got resolved. It might witness benefits or problems that it didn't expect or even consider beforehand. Finally, it has a chance to see how the system affects the people working with it. Because those issues would be important considerations for making a critical business and technical decision like selecting an ERP system, the site visit is a clear way to vaporize assumptions with real evidence.

A Real Year of the Linux Desktop–What’s Needed

They said it at LinuxWorld in Toronto a few months ago. They've buzzed it at analysts, and now the press is saying it to the public. Novell says this is the year of the Linux desktop, and I'm familiar with evidence showing gains in popularity for Linux. Yet I disagree that this is the year. Nothing is happening this year to make it, specifically, the year of the Linux desktop, and I'm going to hypothesize what could change that.

To me, there's no contest: GNU/Linux systems have been offering more innovative, stable, easily productive, and pleasant desktop systems (KDE, for example) for years. However, that's not enough to move Linux to a place where it challenges the automatic momentum both Microsoft and Apple enjoy within the mindset of the general population (at least in North America–perhaps elsewhere this is different). The mindset of the user/customer environment is what is needed to turn it into the year of the Linux desktop–and Novell isn't making much of a dent in this regard.

Jem Matzan wrote about why specialized systems, as opposed to fancier eye candy, would be a better answer for moving in this direction (that's my very over-simplified paraphrase). I appreciate that notion in part; I'd like to suggest something else though, something which I think would give GNU/Linux and FOSS applications a truly compelling way to shift the public's mindset toward their adoption. Even better, it's a business model that could only really work in its entirety within a Free and open source ecosystem. What I'm suggesting is essentially like something James P. Womack and Daniel T. Jones recommend in their book, Lean Thinking, except applied within a FOSS ecosystem.

To catalyze the required mindset shift–and this may appear plain at first glance, so let me flesh it out–if a customer could easily buy a computer system stacked with the desired hardware, configured software, support expertise, update service, and backup service, in addition to having automatic access to a range of web services (like music stores or VoIP services) optionally pre-set up, it would be a completely compelling solution. What's so special? Don't we see that from the likes of Apple or Dell? Not really. No company that I'm aware of actually does this to the degree I'm proposing, but a GNU/Linux OS distributor is the one that would fit this model and allow it to work, now. I'll continue by talking about what such a fictitious GNU/Linux solution provider would do, and I'm going to refer to this fictitious company as Fictux.

A full computing solution should come from a company that pre-bundles everything its customers want, consistently supporting it, for the duration of ownership. It should not require anxious intervention from the owner when the owner desires a new component or new system, and the new system should have all data and applications from the old system installed, setup, and accessible upon delivery.

1) Getting the computer. It's not impossible to find a company on-line that will sell a computer set up with Linux. There are some hardware vendors offering compelling Ubuntu and Linspire preinstalled systems. Every now and then you even hear about a big box store selling some Linux PCs. Some companies, like Dell, even let you pre-configure the hardware components to varying degrees. Fictux would make this selection easy; it would have pre-tested the hardware to be sure it all works together in combination with the applicable software. This is not a new idea, but it must be combined perfectly with the rest of the service.

2) The right software, configured right. The system cannot simply be preloaded with a Linux distro! From the point of view of most average users, there probably isn't any sense of getting something extremely compelling from an OEM with Linux preinstalled; they might as well have Windows. Worse, getting a new system with the standard OS leaves too much effort to the user to seek and install all their desired applications (this is true of Windows, Macintosh, and Linux). Most standard Linux distributions get a running start (bundling thousands of apps) compared to Windows or Mac systems, but sometimes too many apps are a detriment. Worse is when the user gets apps targeting what s/he wants but they're not necessarily the specific ones s/he wanted (say I want Kopete while my distro automatically gives me GAIM).

A long time ago, when I was a dedicated Mandrake (Mandriva) user, I remember suggesting (and I don't recall if this was in a user forum, an e-mail, a comment form, or what) that they let users select every software package they want in advance of downloading an installation ISO. Then the user could download a totally custom version of the distribution. Which is to say that Fictux would offer custom versions of its distribution, tailored to exactly what the user wants the instant the system is turned on. This must be done at the time of purchasing the hardware.

Could Microsoft or Apple get agreements, permanently ongoing agreements, from the thousands of potential proprietary software vendors a customer might want to have installed? Could Microsoft or Apple charge a humane price for such a system? It doesn’t seem plausible. However, a Linux-based manufacturer can do this because of its FOSS ecosystem.

If I were the customer, obviously over the computer's lifetime I'd want to occasionally install something new, but currently when I, for example, install a Kubuntu system for the first time, I have to search through a package repository interface (though it's an easily unified one) for whatever I want to install, then tell it to install–the consequence is that every time I set up a new computer with the operating system, I spend half a day just adding the applications I want and configuring them. Yet a Linux distribution is already a carefully selected collection of Free software applications, tied and tested together into a whole system. Why is practically every distribution offering its common system (sometimes there is a server or business version) and then asking the user to install all the options? Fictux would ask about the options first and make the distribution the user's distribution. It could be an audio work-oriented distro, a desktop publishing distro, or a file server distro, immediately upon powering on, and according to the user's taste. Furthermore, and I'll expand on this when I get to backups, it should already be populated with the information about the user, his/her preferences, and files.
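To make the point concrete, here is a minimal sketch of what applying a customer's application selection before delivery might look like on a Debian-based system like Kubuntu. The manifest format, file name, and package names are my own illustrative assumptions, not an actual Fictux specification.

```python
#!/usr/bin/env python3
"""Minimal sketch: apply a customer's package manifest on a Debian/Kubuntu-style
system so the machine ships with the applications the buyer actually chose.
The manifest path and package names are hypothetical examples."""

import subprocess
from pathlib import Path

MANIFEST = Path("customer_manifest.txt")  # one package name per line, e.g. "kopete"

def read_manifest(path):
    # Skip blank lines and comments so the manifest stays human-editable.
    return [line.strip() for line in path.read_text().splitlines()
            if line.strip() and not line.startswith("#")]

def install(packages):
    # One apt-get call resolves dependencies for the whole selection at once.
    subprocess.run(["apt-get", "install", "-y"] + packages, check=True)

if __name__ == "__main__":
    install(read_manifest(MANIFEST))
```

The interesting part isn't the script itself, it's who runs it: Fictux would run something like it against the customer's choices before the box ever leaves the warehouse, so the half-day of post-install setup disappears.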

3) Provide the support expertise. Plenty of companies, especially in the open source world, have chosen a business model of providing support services. Why is this so often a company independent from the hardware, software, or other services? Of course they're not all independent companies, but Fictux, in providing each point I'm detailing here, would also be the single point of contact for any support-related issue: software questions, hardware failures (even to the point of arranging pickup and replacement delivery service), and possibly even ISP-related issues, by agreement with the ISP.

4) Manage the update service. If there is some sort of hardware recall, Fictux would be responsible. As new technology becomes available, Fictux stays on top of it and folds the new tech into its service. It's got to preemptively know which hardware will best support new software and be able to let the user know, without requiring the user to research all kinds of options and configurations. I think the transparency of the many test releases in open source development might be especially helpful in this regard. As fixes for software bugs, security holes, and new versions become available, the company must manage these and make them simple for the user to be aware of and apply. This is essentially a no-brainer for Linux distributions; most of them already do this on the software side, so it's a matter of making the process just as effortless on the hardware side. For example, the current excitement is the Novell-sponsored Xgl/Compiz combo. It requires certain graphics hardware. Fictux would offer this alongside its software update service so that the user immediately and easily understood what would be needed to get the latest fun features. Linux systems generally are able to support the hardware I throw at them (often more easily than Windows), though some exceptions stand out–as Linux systems gain in popularity, I expect this issue will continue to decrease.
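On the software side the mechanics are already routine, which is the point. The sketch below shows the kind of update routine Fictux could run on each machine; it is an assumption-laden illustration using standard apt-get commands, not a description of any existing vendor's service.

```python
#!/usr/bin/env python3
"""Minimal sketch of the update-service idea on a Debian/Kubuntu-style system:
refresh package lists, preview what would change, then apply the updates.
How the preview is reported back to the customer is left as a stub."""

import subprocess

def refresh_and_preview():
    subprocess.run(["apt-get", "update"], check=True)
    # "-s" simulates the upgrade so the provider (or user) can review it first.
    preview = subprocess.run(["apt-get", "-s", "upgrade"],
                             capture_output=True, text=True, check=True)
    return preview.stdout

def apply_updates():
    subprocess.run(["apt-get", "-y", "upgrade"], check=True)

if __name__ == "__main__":
    print(refresh_and_preview())  # a real service would notify the user here
    apply_updates()
```

The hard part Fictux adds is the hardware half: knowing, before the customer asks, that something like Xgl/Compiz wants a particular class of graphics card, and folding that advice into the same update channel.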

5) Make the backup service easy and more useful than just a data backup. A number of different Internet-based backup services have been sprouting up, both for business and the regular home user, but these don't interconnect as an integral part of the rest of the products and services I've mentioned for Fictux. Backing up data should happen easily and automatically. It should be secure and accessible. But let it do more than just back up data. It could be used for preconfiguring a system. Save all the configuration data throughout users' computers' lifetimes, even as new applications are installed. When it's time to buy a new system, the customer won't have to reselect all of his/her applications (like the first time) because they would already be known to Fictux. Even better, the computer system the user receives would include all of his/her data, settings, bookmarks, etc. Many of these could even be imported from non-Linux systems with the first order. This would be like a dynamic "ghosting" system for companies that continually have to order new computers for employees. I'm sure there are vendors that already deliver similar services for large organizations, but again, I'm not aware of a company that does it in conjunction with all of the rest of the items I've detailed, scaling from one to hundreds or thousands of units.
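A rough sketch of the ingredients: copy the user's data off-site and record the installed-package selection so a replacement machine can be rebuilt the same way. The backup host and paths are made-up placeholders, and rsync plus dpkg's selection dump is just one plausible way to implement the "dynamic ghosting" idea, not the only one.

```python
#!/usr/bin/env python3
"""Minimal sketch of the backup / "dynamic ghosting" idea: sync the user's home
directory to an off-site target and record the installed-package selection, so a
replacement machine can arrive with the same applications, data, and settings.
The backup target is a hypothetical placeholder."""

import subprocess
from pathlib import Path

BACKUP_TARGET = "backup.example.net:/backups/customer42/"  # illustrative only
HOME = str(Path.home())

def backup_home():
    # rsync -a preserves permissions and timestamps; only changed files are sent.
    subprocess.run(["rsync", "-a", "--delete", HOME + "/", BACKUP_TARGET], check=True)

def record_package_selection(outfile="package-selection.txt"):
    # dpkg --get-selections dumps the installed-package list; replayed with
    # dpkg --set-selections on a new machine, it recreates the same load-out.
    with open(outfile, "w") as f:
        subprocess.run(["dpkg", "--get-selections"], stdout=f, check=True)

if __name__ == "__main__":
    backup_home()
    record_package_selection()
```

Run on a schedule by the vendor rather than left to the user, this is what would let the next computer show up already looking like the old one.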

6) Pre-setup web services. Deals used to come bundled by some manufacturers: months of AOL at a discount, just click the icon to activate it. Instead, allow the user to select the web services they use or would like to use (say VoIP services, on-line music stores, and even free services such as favourite Internet radio stations) in advance of receiving the computer. It would just be another configuration the company could easily arrange for its customers before they even start using their computers, and more importantly it would allow Fictux to include the appropriate hardware to support these services (an audio player? a headset? etc.). It may be argued that these services are too vast to manage, but I think Fictux could find a way to bundle a service distribution in much the same manner it bundles the thousands of Free software applications in its repository.

Finally, as I said at the beginning, none of these ideas is necessarily new in and of itself; they just haven't all been offered together by one company. If each can be done by some company, why can't they all be done by a single company? It should appeal from a business perspective because each provision of service or product helps the company further its sales effort within its own solution chain. The more important point, however, is the customer/user. Each step of buying a computer, using it, managing to obtain and use software, hardware, and services, and finally, after a few years, buying a new one, is accompanied by anxiety, research efforts, and ultimately wasted time for the customer/user. A company should eliminate all of that extra effort. Most users only undertake these efforts because they have no choice (read: these steps themselves provide no value for the customer/user). As I mentioned in my second point, only a FOSS vendor can adequately offer such a solution. Furthermore, I think a FOSS vendor would be especially suited to do the other steps well (such as the web services/hardware pre-configuration integration) because of its existing expertise in packaging complex and diverse software configurations.

A single vendor that can accomplish all of these steps would be offering something incredibly appealing for the masses (neophytes and computer experts alike) because it would be offering the only solution that is valuable from the start, with a minimum of wasted customer/user effort. I think this kind of solution would differentiate a company enough to challenge the automatic momentum Microsoft and Apple enjoy within the mindset of the general population. When it arrives, it might even shift the gradual gain in Linux adoption to a more pronounced, year of the Linux desktop.