A Real Year of the Linux Desktop–What’s Needed

They said it at LinuxWorld in Toronto a few months ago. They’ve buzzed it to analysts, and now the press is saying it to the public. Novell says this is the year of the Linux desktop, and I’m familiar with evidence showing gains in popularity for Linux. Yet I disagree that this is the year. Nothing is happening this year to make it, specifically, the year of the Linux desktop, and I’m going to hypothesize about what could change that.

To me, there’s no contest: GNU/Linux systems have been offering more innovative, stable, easily productive, and pleasant desktop systems (KDE, for example) for years. However, that’s not enough to move Linux to a place where it challenges the automatic momentum both Microsoft and Apple enjoy in the mindset of the general population (at least in North America; perhaps elsewhere this is different). It is that user/customer mindset that needs to shift to make this the year of the Linux desktop, and Novell isn’t making much of a dent in this regard.

Jem Matzan wrote about why specialized systems, as opposed to fancier eye candy, would be a better way to move in this direction (that’s my very oversimplified paraphrase). I appreciate that notion in part; I’d like to suggest something else though, something which I think would give GNU/Linux and FOSS applications a truly poignant way to shift the public’s mindset toward their adoption. Even better, it’s a business model that could only really work in its entirety within a Free and open source ecosystem. What I’m suggesting is essentially like something James P. Womack and Daniel T. Jones recommend in their book, Lean Thinking, except applied within a FOSS ecosystem.

To catalyze the required mindset shift (this may appear plain at first glance, so let me flesh it out): if a customer could easily buy a computer system stacked with the desired hardware, configured software, support expertise, an update service, a backup service, and automatic access to a range of optionally pre-setup web services (like music stores or VoIP services), it would be a completely compelling solution. What’s so special? Don’t we see that from the likes of Apple or Dell? Not really. No company that I’m aware of actually does this to the degree I’m proposing, but a GNU/Linux OS distribution is the one that would fit this model and allow it to work, now. I’ll continue by describing what such a fictitious GNU/Linux solution provider would do, and I’m going to refer to this fictitious company as Fictux.

A full computing solution should come from a company that pre-bundles everything its customers want, consistently supporting it, for the duration of ownership. It should not require anxious intervention from the owner when the owner desires a new component or new system, and the new system should have all data and applications from the old system installed, setup, and accessible upon delivery.

1) Getting the computer. It’s not impossible to find a company on-line that will sell a computer set up with Linux. There are some hardware vendors offering compelling systems with Ubuntu or Linspire preinstalled. Every now and then you even hear about a big box store selling some Linux PCs. Some companies, like Dell, even let you pre-configure the hardware components to varying degrees. Fictux would make this selection easy: it would have pre-tested the hardware to be sure it all works together in combination with the applicable software. This is not a new idea, but it must be combined perfectly with the rest of the service.

2) The right software, configured right. The system cannot simply be preloaded with a Linux distro! Most average users probably don’t perceive anything extremely compelling about getting a system from an OEM with Linux preinstalled; they might as well have Windows. Worse, getting a new system with the standard OS leaves too much effort to the user to seek out and install all their desired applications (this is true of Windows, Macintosh, and Linux). Most standard Linux distributions get a running start (bundling thousands of apps) compared to Windows or Mac systems, but sometimes too many apps are a detriment. Worse is when the user gets apps targeting what s/he wants but not necessarily the specific ones s/he wanted (say I want Kopete while my distro automatically gives me GAIM).

A long time ago, when I was a dedicated Mandrake (Mandriva) user, I remember suggesting (and I don’t recall if this was in a user forum, an e-mail, a comment form, or what) that they let users select every software package they want in advance of downloading an installation ISO. Then the user could download a totally custom version of the distribution. That’s to say that Fictux would offer custom versions of its distribution, tailored to exactly what the user wants the instant the system is turned on. This must be done at the time of purchasing the hardware.

Could Microsoft or Apple get agreements, permanently ongoing agreements, from the thousands of potential proprietary software vendors a customer might want to have installed? Could Microsoft or Apple charge a humane price for such a system? It doesn’t seem plausible. However, a Linux-based manufacturer can do this because of its FOSS ecosystem.

If I were the customer, obviously over the computer’s lifetime I’d want to occasionally install something new. But currently when I, for example, install a Kubuntu system for the first time, I have to search through a package repository interface (though it’s an easily unified one) for whatever I want to install, then tell it to install. The consequence is that every time I set up a new computer with the operating system, I spend half a day just adding the applications I want and configuring them. Yet a Linux distribution is already a carefully selected collection of Free software applications, tied and tested together into a whole system. Why is practically every distribution offering its common system (sometimes there is a server or business version) and then asking the user to install all the options? Fictux would ask for the options first and make the distribution the user’s distribution. It could be an audio work-oriented distro, a desktop publishing distro, or a file server distro, immediately upon powering on, and according to the user’s taste. Furthermore, and I’ll expand on this when I get to backups, it should already be populated with information about the user, his/her preferences, and files.
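That half-day of hunting through a package interface is exactly the part that could be captured once at purchase time and replayed automatically. A minimal sketch of the idea, assuming an apt-based system; the file name and package choices here are purely hypothetical examples:

```python
def build_install_command(selection_file):
    """Read one package name per line (blank lines and # comments skipped)
    and build a single apt-get command that would install them all."""
    with open(selection_file) as f:
        packages = [line.strip() for line in f
                    if line.strip() and not line.lstrip().startswith("#")]
    return ["apt-get", "install", "-y"] + packages

# Suppose the user picked Kopete (and friends) at order time.
with open("my-selection.txt", "w") as f:
    f.write("# my preferred desktop apps\nkopete\nk3b\namarok\n")

cmd = build_install_command("my-selection.txt")
print(" ".join(cmd))  # apt-get install -y kopete k3b amarok
# A real provisioning run would hand cmd to subprocess.run(cmd, check=True).
```

The point is that the selection file is made once, when the system is ordered, and Fictux runs it before the box ever ships.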

3) Provide the support expertise. Plenty of companies, especially in the open source world, have chosen a business model of providing support services. Why is this so often a company independent from the hardware, software, or other services? Of course they’re not all independent companies, but Fictux, in providing each point I’m detailing here, would also be the single point of contact for any support-related issue: software questions, hardware failures (even to the point of arranging pickup and delivery of replacements), possibly even connectivity problems by agreement with the ISP.

4) Manage the update service. If there is some sort of hardware recall, Fictux would be responsible. As new technology becomes available, Fictux stays on top of it and folds the new tech into its service. It’s got to preemptively know which hardware will best support new software and be able to let the user know, without requiring the user to research all kinds of options and configurations. I think the transparency of the many test releases in open source development might be especially helpful in this regard. As fixes for software bugs and security holes, and new versions, become available, the company must manage these and make them simple for the user to be aware of and apply. This is essentially a no-brainer for Linux distributions; most of them already do this on the software side, so it’s a matter of making the process as effortless on the hardware side. For example, the current excitement is the Novell-sponsored Xgl/Compiz combo, which requires certain graphics hardware. Fictux would offer this alongside its software update service so that the user immediately and easily understood what would be needed to get the latest fun features. Linux systems generally are able to support the hardware I throw at them (often more easily than Windows), though some exceptions stand out; as Linux systems gain in popularity, I expect this issue will continue to diminish.

5) Make the backup service easy and more useful than just a data backup. A number of different Internet-based backup services have been sprouting up, both for business and the regular home user, but these don’t interconnect as an integral part of the rest of the products and services I’ve mentioned for Fictux. Backing up data should happen easily and automatically. It should be secure and accessible. But let it do more than just back up data. It could be used for preconfiguring a system. Save all the configuration data throughout users’ computers’ lifetimes, even as new applications are installed. When it’s time to buy a new system, the customer won’t have to reselect all of his/her applications (like the first time) because they would already be known to Fictux. Even better, the computer system that the user receives would include all of his/her data, settings, bookmarks, etc. Many of these could even be imported from non-Linux systems at the first order. This would be like a dynamic “ghosting” system for companies that continually have to order new computers for employees. I’m sure there are vendors that already deliver similar services for large organizations, but again, I’m not aware of a company that does it in conjunction with all of the rest of the items I’ve detailed, scaling from one unit to hundreds or thousands.
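What would such a configuration-aware backup look like in practice? A toy sketch, under the assumption (common on Linux desktops of this era) that most application settings live in dotfiles under the home directory; the directory and file names below are invented for illustration:

```python
import json
import os

def snapshot(home, package_list):
    """Capture what makes the system 'the user's': the chosen package
    selection plus the configuration dotfiles in the home directory."""
    manifest = {"packages": sorted(package_list), "configs": {}}
    for name in os.listdir(home):
        if name.startswith("."):          # dotfiles hold most app settings
            path = os.path.join(home, name)
            if os.path.isfile(path):
                with open(path) as f:
                    manifest["configs"][name] = f.read()
    return manifest

# Build a tiny fake home directory to show the idea.
os.makedirs("demo_home", exist_ok=True)
with open(os.path.join("demo_home", ".kopeterc"), "w") as f:
    f.write("[General]\nnickname=me\n")

m = snapshot("demo_home", ["kopete", "amarok"])
print(json.dumps(m, indent=2))
```

Replaying such a manifest on a freshly ordered machine is what would let the new computer arrive already set up as the old one was.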

6) Pre-setup web services. Deals used to come bundled by some manufacturers: months of AOL at a discount, just click the icon to activate it. Instead, let the user select the web services they use or would like to use (say VoIP services, on-line music stores, and even free services such as favourite Internet radio stations) in advance of receiving the computer. It would just be another configuration the company could easily arrange for its customers before the customers even start using their computers, and more importantly it would allow Fictux to include the appropriate hardware to support these services (an audio file player? a headset?). It may be argued that these services are too vast to manage, but I think Fictux could find a way to bundle a service distribution in much the same manner it bundles the thousands of Free software applications in its repository.

Finally, as I said at the beginning, none of these ideas is necessarily new in-and-of itself; they just haven’t all been offered together by one company. If each can be done by some company, why can’t they all be done by a single company? It should appeal from a business perspective because each provision of a service or product helps the company further its sales effort within its own solution chain. The more important point, however, is the customer/user. Each step of buying a computer, using it, managing to obtain and use software, hardware, and services, and finally, after a few years, buying a new one, is accompanied by anxiety, research efforts, and ultimately wasted time for the customer/user. A company should eliminate all of that extra effort. Most users only undertake these efforts because they have no choice (read: these steps themselves provide no value for the customer/user). As I mentioned in my second point, only a FOSS vendor can adequately offer such a solution. Furthermore, I think a FOSS vendor would be especially suited to doing the other steps well (such as the web services/hardware pre-configuration integration) because of its existing expertise in packaging complex and diverse software configurations.

A single vendor that can accomplish all of these steps would be offering something incredibly appealing for the masses (neophytes and computer experts alike) because it would be offering the only solution that is valuable from the start, with a minimum of wasted customer/user effort. I think this kind of solution would differentiate a company enough to challenge the automatic momentum Microsoft and Apple enjoy within the mindset of the general population. When it arrives, it might even shift the gradual gain in Linux adoption to a more pronounced, year of the Linux desktop.

Sides of Subverting Open Source

Martin Schneider at The 451 Group commented on whether the collective “we” can be too jaded regarding some proprietary vendors’ apparent embrace of open source methods. This was in response to a piece by Dave Rosenberg and Matt Asay about subverting open source for sake of certain marketing purposes. Rosenberg and Asay essentially say that Microsoft and SAP have a well-known history of speaking out against Free and open source software (FOSS) and concepts.

Certainly, Microsoft and SAP have put effort and money into spreading fear, uncertainty, and doubt (FUD), and both have publicly made sometimes very strange statements about or against FOSS. Yet recently, both are putting some effort into releasing bits in an open source manner or else funding some open source development. Rosenberg and Asay seem to think there is an ulterior motive:

“Any outreach attempts from vendors who have worked for years to destroy open source should be taken with a grain of salt and a sharp eye cast on motivating factors.”

Or could this mean, as Schneider suggests, that these companies are beginning to join the community’s stance that open source “…is simply a better way to create and distribute software”? Rosenberg and Asay seem to take that into account by acknowledging that the project leaders for the open source initiatives within these companies probably are working in earnest. Still, I can’t help but lean toward a bigger picture: as a whole, there is something else, more involved, taking place.

It makes perfect sense, if you’re a proprietary vendor, to delve deeply into your FOSS competitors, and for several reasons. I believe there are serious reasons to be wary of such proprietary vendors’ forays into FOSS and, at the same time, to embrace them. Here is why.

First, any vendor has to know what it’s competing against. This is just standard good business practice; there are even industries devoted to supporting this idea, namely competitive intelligence. What better way to understand the new models undoing your traditional strategy than to emulate them and find out how they work? The more you understand, the better you can build your products to compete and win. If the FOSS community innovates new technology, Microsoft wins by learning it and improving upon it for its own products, just like any good open source vendor would want to do (of course an open source vendor would participate by feeding those improvements back to the community as well).

Second, what about that often-referred-to Machiavellian notion of keeping your friends close and your enemies closer? If Microsoft can successfully attract an open source development community into its fold (so to speak), it gains a very powerful tool: a foothold in the “enemy’s” camp, which allows it to anticipate and prepare its proprietary strategies.

Third, does it hurt the proprietary vendor in any way? They’ve got all their proprietary business and propaganda in full swing; everyone already knows about that. On the other hand, FOSS and Linux are gaining recognition. I’ll make an educated guess that FOSS and Linux are still not as well understood, in concept, by the majority of business decision-makers, much less the public in general. I think they still lack the massive public feeling of acceptance that most software vendors currently enjoy with their traditional proprietary business models. However, as that understanding and recognition grows in positive ways, it can only help companies like Microsoft and SAP to be able to show they’re just as involved in the leading edge of technology practices. It’s simply good PR. If Microsoft and SAP can manage this while maintaining their proprietary side, so much the better for them (from their perspective).

Fourth, let’s suppose there truly is an ulterior motive to subvert FOSS communities. In the shoes of a company like Microsoft, it makes sense to blur the lines of differentiation between your proprietary approach and real FOSS approaches (hence the shared-source initiative). The harder it is for critics, detractors, or enemies to clearly differentiate your approach from their own, the harder it will be for them to spotlight your weaknesses and their strengths, and thus the customer cannot act on clear information for his or her software selection decisions. Furthermore, if you actually do participate in some ways with the FOSS community, you may gain some supporters who will defend, in good conscience, your motives, and possibly even turn a blind eye toward some of your other, less savoury practices (this not only blurs more boundaries but again helps with grassroots PR, which is oh-so important on the Internet).

Finally, I’d like to say that there is already no clear side-versus-side here; we have to pay attention to the grey to really comprehend the situation. While I think we can see companies like Microsoft and SAP employing some intriguing strategies for subversion, and there are battles between models and methodologies, to a degree there is also some learning and the adoption of new and better practices. Because of the co-opetitive nature of FOSS models, gradual adoption by these proprietary vendors may even, unexpectedly, end up subverting those vendors’ own models. We’re not too jaded when we stay constantly wary and suspect these companies of efforts to undermine FOSS, but we should, at the same time, cheer them on when they actually do participate in real FOSS processes.

Net Neutrality and Future Legacies

I’d like to comment quickly on the net neutrality issue. The Web thus far is a system that, from the beginning, essentially anyone could access in a like manner. A few companies have a strong interest in changing that, though, by creating what I understand to be something like tiers of accessibility. Considering the life and social changes provoked by the new sorts of creative innovation the Web has fostered, I think changes limiting Net interoperation are incredibly bad ideas. A basic idea Tim Berners-Lee puts forward is

“Freedom of connection, with any application, to any party, is the fundamental social basis of the Internet, and, now, the society based on it.”

This may sound abstract to some, but Bob Frankston wrote an entertaining piece that illustrates the unsavoury results of losing such freedom. For a thorough and technical analysis, I find Daniel Weitzner’s text, The Neutral Internet: An Information Architecture for Open Societies, interesting.

The thing is, whatever starts taking place now, technologically or in government policy, is going to be around for a while. People will adapt, install, and use software that is based on or otherwise enforces such technologies and policies. That means we have to imagine the consequences of a future saddled with the legacies we’re creating now. I hope we act to keep our liberty intact.

PeopleSoft Nuisance in North Dakota

A Computerworld article covers some of the problems (and ends with a few happier notes) concerning a PeopleSoft (Oracle) ERP implementation taking place in North Dakota’s government and education sectors. Although the state agencies sound generally satisfied, the article focuses on the North Dakota University System’s unhappiness with the unexpected massive cost and time overruns in getting its system implemented.

Why did they underestimate the costs, which ballooned from the extra time required for the (still) incomplete implementation? The article suggests the lesson to be learned is never to embark on a major project like this without employing a full-time project manager (which, surprisingly, it sounds like this implementation lacked from the start). But there is something else to learn from the article:

“The academic software modules, particularly a grants and contracts management application, also did not perform as expected and have required extensive customization, said Laura Glatt, vice chancellor of administrative affairs at the Bismarck-based university system.”

I wonder why they did not expect this. Perhaps their original RFI/RFP was not designed to request that information? Did they script some demonstration scenarios for the vendor to show them how the modules would accomplish the sort of functionality they needed? I’d think there could have been some way to prevent this issue; maybe the ghost of a full-time project manager would have thought of that during the selection and evaluation phases.

Blog News Feed Versus Newsletter Usage

The Wall Street Journal Online has a short and slightly thought-provoking interview with Jakob Nielsen concerning newsfeeds and blogging.

I think the news feed reader is taking the place of both some browsing activity and some e-mail activity. People ought to be viewing blogging and news feeds not as the “extreme edge” mentioned in the interview but rather as a notable shift in the way people discover and retrieve information from web sites.

Lee Gomes (the interviewer) asked why Nielsen prefers an e-mail newsletter over a news feed. It brought up a few points on the focus of a newsletter, but Nielsen cautioned, “Unless a newsletter is very good, people will just say, ‘Oh no, more information.’” And I find that to be my case. There are a few newsletters I like reading, but the majority have too much garbage to wade through and simply clutter my e-mail inbox. I’m hesitant to subscribe to anyone’s newsletter now that I’m invariably offered the option during any web site registration. Over the years, site after site has reinforced the notion that once I subscribe, the subscription will balloon into unwanted mail and it will be difficult to remove myself from the lists. Even when that’s not the practice, there is that suspicion. Abuses have made that impression the general state.

Many years ago I attended a conference held by a local phone company, which was trying to convince its corporate clients to build corporate web sites (and hence buy fast Internet connections). The conference had a number of very informative sessions highlighting the benefits a web site could bring to a business. One of the points I recall was how much emphasis they put on having a clear and easy sign-up page for a company newsletter. They made the case that a well-designed newsletter would help a company stay in contact with its customers (which of course lends itself to all kinds of wonderful marketing activities).

Now I hear similar arguments for the business benefits of blogging. Except the nice thing about blogging and hence blog news feeds, is that the subscribing user has complete control over whether s/he subscribes to it or not (unlike what happens when you release your e-mail address to the clutches of some unfamiliar internal machinations of a company you probably have little reason to trust).

One last thing. On the conversational aspect of blogs (which seems to be, at least in part, commentary on who actually is reading/using them), Nielsen comments that it works for fanatics “…who are engaged so much that they will go and check out these blogs all the time.” I’m inclined to agree for now, but it is a shortsighted viewpoint if that is where it ends. True, most people I know haven’t got a clue what a news feed reader is, much less a blog, though since I’ve been using these for a while, they’re familiar concepts and tools to me. However, all technology uses tend to be that way. When I went to the conference I mentioned previously, an e-mail newsletter seemed like something only a small percentage of the population would ever use. That changed. Now that I regularly use a news feed reader to read articles, I use my web browser less frequently. That is a major shift in the way I access Web content.

Compiere Repots Itself for Growth

It seems that open source ERP provider Compiere is prepping itself for a lot of new growth. Today it announced (hot on the heels of bringing in Andre Boisvert as its Chairman of the Board and Chief Business Development Officer) that it would be moving its corporate headquarters to California’s Silicon Valley, and at the same time that it had secured a nice little VC nest egg of $6M (USD).

The company’s press release (linked above) has CEO Janke stating “…the market’s demand for our product has outgrown our capability to scale the business accordingly.” Boisvert then mentions that Compiere is in a good position with its “…modern architecture at a time when many proprietary legacy ERP systems are approaching the end of their intended life cycle.” That sort of growth should keep Compiere’s model vibrant.

In April the company announced seven new implementation partners, which I think bodes well for its business model: a model largely based on second-level support and training (in other words, supporting its partners). The more partners implementing it, the better. The company had a little over forty toward the end of 2004 and now claims more than seventy. Hopefully the additional funding truly will help it scale for those partners’ demands.

Open Source Database and OS Demand Stats

A few articles about open source database growth made the rounds recently. Mostly these discuss a rise in growth; for example, the EnterpriseDB survey notes

More than half of all survey respondents indicated that their respective companies had either already deployed an open source database or were more likely to deploy an open source database than any other open source application, including CRM, desktop productivity, and ERP. The survey was sponsored and administered by EnterpriseDB.

Gartner too published some stats on database growth, though of a slightly different nature.

“The combined category of open source database management systems vendors, which includes MySQL and Ingres, showed the strongest growth, although it was one of the smallest revenue bases,” said Colleen Graham, principal analyst at Gartner.

These are all interesting, so I thought I’d post a few stats TEC tracks about enterprise end-user demand. We find out what companies are looking for as requirements when implementing different enterprise systems (ERP, CRM, SCM, etc.). It might be valuable to compare these different sources and types of stats for an overall picture.

According to our tracking of about 3,000 different users, the following numbers signify the percentage of those users who selected each of these platforms as a technology requirement for their enterprise software selections (such as an ERP, CRM, or SCM system). Note that we ask about some other platforms too, but I’ve omitted those stats; they account for very small percentages.

DBMS                                                  Q1 2005   Q2 2005   Q1 2006
IBM DB2                                                   7.0       7.2       7.0
Microsoft SQL Server                                     35.9      37.6      36.4
MySQL                                                     8.9       9.6      12.7
Oracle                                                   20.4      21.0      20.6
PostgreSQL                                                3.1       2.8       3.5
Hosted solution (not installed on a customer server)      0.5       0.6       3.4

Server                                                Q1 2005   Q2 2005   Q1 2006
IBM iSeries (AS/400)                                      7.6       7.1       7.4
Linux (such as SUSE, Red Hat, or Debian)                 11.8      11.4      12.9
Unix (such as Solaris or AIX)                            13.3      12.7      11.5
Windows Server (such as NT/2000/XP)                      54.2      54.2      49.4
Hosted solution (not installed on a customer server)      0.7       1.7       5.4

It’s pretty clear that we have not seen great changes in demand for Oracle, Microsoft, and IBM systems, but MySQL certainly increased in 2006 over 2005, and PostgreSQL has been working its way up. I happen to know that so far in Q2 2006, the open source systems are set to surpass the previous quarters’ demand.
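To make that direction concrete, the year-over-year shift can be computed straight from the DBMS figures in the table above (a quick sketch; only three of the rows are carried over here):

```python
# Percentage of ~3,000 tracked users selecting each DBMS as a requirement
# (figures copied from the DBMS table above).
demand = {
    "MySQL":      {"Q1 2005": 8.9,  "Q2 2005": 9.6,  "Q1 2006": 12.7},
    "PostgreSQL": {"Q1 2005": 3.1,  "Q2 2005": 2.8,  "Q1 2006": 3.5},
    "Oracle":     {"Q1 2005": 20.4, "Q2 2005": 21.0, "Q1 2006": 20.6},
}

# Change from Q1 2005 to Q1 2006, in percentage points of demand.
change = {db: round(q["Q1 2006"] - q["Q1 2005"], 1) for db, q in demand.items()}
print(change)  # MySQL up 3.8 points; PostgreSQL and Oracle nearly flat
```

MySQL’s 3.8-point gain against Oracle’s 0.2 is the asymmetry the prose is pointing at.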

So while Gartner is calling attention to strong growth but small revenue bases, perhaps one could look at the direction demand is moving (based on the EnterpriseDB survey and TEC’s stats) and guess that the revenue base may be ready to change.

Verifying an RFI

Today, I had a conversation with a consulting firm that works with TEC’s decision support tools and knowledge bases (KBs) on enterprise software. In this case, they were engaged in an ERP selection project.

The consulting firm was asking me about the accuracy of the data (in our KB) regarding the functionality of some of the vendors they’d shortlisted. TEC researches and provides immense amounts of information on software products, so it is an incredibly tricky task for us to ensure that the data is accurate and timely. Considering the number of clients using our evaluation services for their projects, as well as the consultants using the same services for their clients, it amazes me when a software vendor either isn’t amenable to providing updated information about its products or, in a few cases, is less than truthful about its products’ capabilities. That’s what I want to talk about in this post, because I had to answer this consultant honestly and without bias, and what I explained to him about the way one abnormally naughty vendor treated the RFI response process seemed to slightly sour him toward its product.

First, vendors usually respond in earnest to our RFI inquiries; it’s in their best interest. I wonder, though: if a few vendors respond dishonestly while knowing TEC exposes its analysis data to thousands of customers (who may very well become sales for the vendor), how well are these vendors responding to the inquiries they receive from individual clients that don’t have many resources for vetting information? That is to say, if you’re working on a project to select some kind of enterprise software system, design your own custom RFI, and send it out to a bunch of vendors, how are you going to be sure that the responses are truly accurate? Even consultants won’t have expertise on every product out there.

It seems to me that until you get to a stage where you’ve already selected a few vendors to give scripted demonstrations, there isn’t much of a way to verify the accuracy of the responses; and how much time will have elapsed just to get to that point? I’m not suggesting that vendors are likely to act in bad faith; criteria are also commonly misunderstood. Even with a focussed team of subject matter experts, editors, and translators, we get inquiries from very knowledgeable and intelligent people who don’t understand criteria we use for our data collection.

Here’s a way that fails. I once worked for a company that had a slick on-line decision support/analysis tool called Compariscope. Our analyst team would actually get copies of the software from the vendors and set up test environments. This ensured accuracy in the data, but it also meant the scope of the analyses was extremely limited, and because of the significant time required, we were always playing catch-up to the latest software releases. Perhaps it could have worked if we’d had hundreds of analysts, vast supplies of equipment, and vendors all willing to give us copies of their software (often they responded to requests for software as though we’d handed them a cleaver and asked them to cut off their left leg for lunch). That business model quickly evaporated. So installing and testing every type of enterprise software application is not a feasible methodology for an analyst firm, much less the end-user company.

When I started working with TEC, we only covered discrete and process ERP systems, and at that point we only provided data for about ten vendors. Our ERP analyst, PJ, could check the information and have a decent idea whether the vendor understood the RFI and made an earnest response. But a single person cannot verify every one of over 3,000 criteria, and as we grew and started providing information on more software vendors and more subject areas (SCM, CRM, etc.), it became quite difficult to make sure all of the data were accurate. Even with additional analysts, nobody in the world really knows what every product is capable of.

I’m curious to know whether anyone who might read this (consultants, people who have worked on their own selection projects, etc.) has come up with good methodologies to verify data gathered through your own RFI process before spending serious time in product demonstrations. Please respond with your thoughts. Here is what we came up with.

TEC requires RFI responses from an official of the vendor who is responsible for replying to client RFIs. Then we take a few steps to vet the vendor’s data…

1) Once we receive a completed RFI, we have a team of people give it a quick review, checking for obvious errors and the like. If it passes that test, it moves on; if not, it goes back to the vendor for revision.

2) Our analysts start reviewing the information based on their own knowledge and experience, of course, but also on checks such as whether the RFI is internally consistent (if you’re careful, there are ways to structure an RFI so that related criteria corroborate one another), and benchmarks run with TEC decision analysis tools. Analysts also have to stay constantly aware of what’s going on in the field so that they can check for consistency with known customer results, peer findings, news, conference announcements, and vendor sources such as collateral, other products, services, and initiatives.
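To give a flavor of what an internal-consistency check might look like, here is a minimal sketch (not our actual tooling; the rule pairs and criterion names are made-up examples). The idea is that some criteria imply others, so a response claiming the first while denying the second deserves a closer look:

```python
# Hypothetical rules: if a vendor claims the first criterion is
# supported, the second (a prerequisite) should be supported too.
RULES = [
    ("multi-currency consolidation", "multi-currency general ledger"),
    ("automated RMA processing", "return merchandise authorization"),
]

def inconsistencies(response):
    """response: {criterion: True/False}. Return the rule pairs
    this vendor's response violates."""
    return [(a, b) for a, b in RULES
            if response.get(a) and not response.get(b)]

suspect = {"multi-currency consolidation": True,
           "multi-currency general ledger": False}
print(inconsistencies(suspect))
# → [('multi-currency consolidation', 'multi-currency general ledger')]
```

A flagged pair doesn’t mean the vendor is wrong, only that an analyst should follow up; that matches how we treat every automated check.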

3) I came up with a veridical comparison method that aggregates all our existing vendor responses to the criteria in a knowledge area (ERP, for example) and defines the likely level of support for each criterion. This lets analysts flag criteria where a particular vendor deviates far from the expected range and understand what the next most likely levels of support are. For example:

If we know that only two in thirty ERP vendors (at any tier) natively support a standard interface to CAD systems for direct data access, and we see a start-up vendor telling us this criterion is fully supported, our analysts know they’ve got to see the vendor demonstrate it. The reverse is true as well. Sometimes a vendor says it doesn’t support a criterion “out of the box,” but when we talk to the vendor, or it demonstrates how its system works, we realize the vendor simply misunderstood the criterion. That’s a great opportunity for us to learn how to clarify the criterion’s wording.
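The aggregation itself is simple to sketch. The following is a toy illustration of the idea, not our production code; the vendor names, support levels, and threshold are all invented for the example. It tallies how often each support level is claimed for a criterion, and flags any vendor whose claim is rarer than a chosen cutoff:

```python
from collections import Counter

# Hypothetical support levels, ordered from weakest to strongest.
LEVELS = ["not supported", "third party", "customization", "out of the box"]

def expected_profile(responses):
    """All vendors' responses for one criterion, ranked from the
    most common level to the least common."""
    return [level for level, _ in Counter(responses).most_common()]

def flag_outliers(vendor_responses, threshold=0.1):
    """vendor_responses: {criterion: {vendor: level}}.
    Flag (vendor, criterion, level) where the claimed level is
    reported by fewer than `threshold` of all responding vendors."""
    flags = []
    for criterion, by_vendor in vendor_responses.items():
        counts = Counter(by_vendor.values())
        total = len(by_vendor)
        for vendor, level in by_vendor.items():
            if counts[level] / total < threshold:
                flags.append((vendor, criterion, level))
    return flags

data = {
    "CAD direct interface": {
        "V1": "not supported", "V2": "not supported",
        "V3": "not supported", "V4": "not supported",
        "StartupERP": "out of the box",
    },
}
print(flag_outliers(data, threshold=0.25))
# → [('StartupERP', 'CAD direct interface', 'out of the box')]
```

A flag is a prompt for a demonstration, not a verdict: as the CAD example above shows, the outlier may be telling the truth, or may have misread the criterion.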

4) As I hinted above: the demonstration. All of these checks can only go so far. When an analyst actually sees the vendor demonstrate its capabilities, he or she can definitively verify the accuracy of an RFI.

Finally, even with all the checking, benchmarking, and reviewing, an error among the thousands of criteria sometimes falls through the cracks. Admittedly, sometimes even we are not quite fast enough. On occasion a consultant or VAR intimately familiar with a product alerts us to an error; other times a customer already using a product tells us about an error in its ratings on our system.

Perhaps it would have been best if we’d discovered these errors first. But, in my opinion, this is one of the great strengths of having information like this accessed by so many people with different perspectives via the Internet. As FOSS communities and other collaborative projects like Wikipedia have demonstrated, a little effort from many people can go a long way toward a common goal. In our case, that goal is maintaining accurate information on enterprise software products. I’ll admit this cross-checking process is probably not very transparent to the public. A more transparent and collaborative cross-checking process might be one way to further improve data accuracy.

E-mail Replacement Idea

In a previous post, I briefly commented on blogs as an e-mail replacement. It was an off-the-cuff remark but I started thinking about it more. Perhaps it could end spam?

This afternoon one of my colleagues came by my desk and commented on the RSS reader I had open. She wondered if it was a nice-looking skin for Outlook (it wasn’t, but it did look nice, because the reader was running on my Linux box rather than Windows). The comment struck me because, at first glance, an RSS reader looks essentially the same as an e-mail application. So, to repeat: what if we all began blogging instead of e-mailing?

It takes no effort to imagine everyone having a blog, and thus an RSS feed. Rather than sending response e-mails back and forth in conversation, trackbacks could accomplish the job. You’d only add a feed to your reader if it belonged to someone you wanted to communicate (trackback) with. Instead of e-mail addresses, we’d have feed subscriptions.
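In fact, a bare-bones “inbox” over feed subscriptions is almost trivial to build on top of RSS. Here is a minimal sketch of the idea, assuming RSS 2.0 feeds; the feed contents and names are invented, and a real reader would of course fetch each subscription from its URL, handle Atom, and follow trackbacks:

```python
import xml.etree.ElementTree as ET

def inbox(feeds_xml):
    """Merge the items of several RSS 2.0 feeds into one 'inbox'.
    feeds_xml: iterable of feed documents as strings (in practice,
    fetched from each subscribed person's feed URL)."""
    messages = []
    for doc in feeds_xml:
        channel = ET.fromstring(doc).find("channel")
        sender = channel.findtext("title")  # the blog stands in for the sender
        for item in channel.findall("item"):
            messages.append({
                "from": sender,
                "subject": item.findtext("title"),
                "link": item.findtext("link"),
            })
    return messages

alice = """<rss version="2.0"><channel><title>Alice's blog</title>
<item><title>Re: lunch on Friday?</title>
<link>http://example.org/alice/lunch</link></item>
</channel></rss>"""

for msg in inbox([alice]):
    print(msg["from"], "-", msg["subject"])
```

The point is that the plumbing already exists in every RSS reader; what’s missing is the social convention of using it this way.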

The reverse could be true as well: you could flag categories or posts within your feed as private, or visible only to particular acquaintances (friends, colleagues, etc.), perhaps using XFN features or some kind of social networking system. Conversations and subjects could be tracked by tagging them, instead of through the fungal-multiplication horror of e-mail folders.

Web sites already exist that let people aggregate feeds into custom pages, so that would take care of replacing web-based e-mail. A number of desktop applications serve as interfaces for posting to a blog without logging in to the blog’s web interface; perhaps those tools could be combined with the readers. Finally, instead of sending an e-mail, everyone would just post to their blogs.

What is appealing about this idea? Unless I’m missing something, it seems like a rather easy way to eradicate spam and, for users of some unfortunate operating systems, to reduce other contagions (unless of course you choose to subscribe to a spammer’s feed). The trick, I think, is getting enough people to blog in order to shift the dominant communication method from e-mailing to blogging.

On the Subject of Learning, Tools for LMS Purchasers

Niall at NetDimensions Insights wrote two nice pieces pointing out a few ways that people seeking a learning management system (LMS) can use low-cost tools to compare the different offerings out there. He mentioned both the Brandon Hall feature comparison document and the LMS RFI templates that Technology Evaluation Centers offers, which is what caught my attention. Niall comments that

Use of this template does not mean that you do not need to perform a thorough analysis of your organization’s learning management requirements. Each organization will have a unique set of requirements and you should ensure that any additional requirements identified are added to the template.

That’s exactly it. Templates like those can be helpful for researching functional and support requirements, and ultimately for soliciting proposals, but any spreadsheet-esque comparison grid will only take you so far when it comes to analysis.

In any case, I wanted to respond by pointing out another inexpensive tool a potential LMS purchaser could use: TEC’s online LMS evaluation tool. It has the same hierarchy of criteria as the spreadsheet, except that it lets you prioritize those criteria based on your organization’s learning requirements (and can include additional custom criteria). Then it analyzes your priorities to determine which vendors match your requirements (using rated vendor responses to all of those criteria). I think this supports, at least in part, Niall’s recommendation for a thorough analysis.
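The core of that kind of analysis is easy to illustrate. The following is a toy weighted-scoring sketch, not TEC’s actual algorithm; the criteria, weights, ratings, and vendor names are all invented. Each criterion gets a priority weight, each vendor a rating per criterion, and vendors are ranked by weighted average:

```python
def score(priorities, ratings):
    """Weighted-average score of one vendor.
    priorities: {criterion: weight}; ratings: {criterion: 0-100}."""
    total_weight = sum(priorities.values())
    return sum(priorities[c] * ratings.get(c, 0)
               for c in priorities) / total_weight

def rank_vendors(priorities, vendors):
    """Rank vendors (best first) against the prioritized criteria."""
    return sorted(vendors,
                  key=lambda v: score(priorities, vendors[v]),
                  reverse=True)

# An organization that cares most about SCORM compliance:
priorities = {"SCORM compliance": 9,
              "assessment reporting": 5,
              "classroom scheduling": 2}
vendors = {
    "VendorA": {"SCORM compliance": 100, "assessment reporting": 60,
                "classroom scheduling": 40},
    "VendorB": {"SCORM compliance": 60, "assessment reporting": 100,
                "classroom scheduling": 100},
}
print(rank_vendors(priorities, vendors))
# → ['VendorA', 'VendorB']
```

Notice that VendorB rates higher on two of the three criteria yet loses the ranking, because the organization weighted SCORM compliance so heavily; that sensitivity to priorities is exactly what a flat comparison spreadsheet can’t show.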