Would Gov’t Procurement Process Neglect FOSS?

The Canadian Association for Open Source (Clue) published a thought-provoking letter in response to an ITBusiness.ca article today. The Clue letter says that “…What is needed is for the government to separate the pricing and procurement of the source product from the various value-add services…” which is an interesting reflection on current musings about Public Works and Government Services Canada’s proposed changes to government procurement processes. It’s certainly feasible to evaluate these separate areas in a sophisticated way that still allows for a comprehensive decision.

The point here is that with Free and open source software, one is frequently at a loss trying to get a development community to respond to particular business issues when those issues are peripheral to the development of the software. The reason is not that the development community is basking under the dream of an intoxicating four-leaf clover high grown in artificial pleasure pods, but that such issues are outside the scope of what it is pursuing and toiling at, namely developing the software. These business functions are often taken up by other service organizations (which are generally a part of the development community too) that do focus on implementing, supporting, and customizing the software.

Why address the letter to an ITBusiness.ca article? I’m not entirely clear on that; however, I do see the sense in linking the issues. Various other ITBusiness.ca articles report on the change in procurement processes as involving a decrease in the number of qualified suppliers, a perceived increase of barriers to SMB providers, and the introduction of methods such as electronic reverse auctions, which some people claim would emphasize low initial costs at the expense of better long-term purchase strategies. If I understand correctly, what the Clue article proposes fits with what would work well for the channel partners of large vendors (whether they’re open source or not). It recognizes that channel partners provide valuable services, which risk being slashed from the procurement process (if the ITBusiness.ca articles’ various representative quotes are accepted). These services may actually save taxpayers’ money in ways that could not be accounted for if the government focuses on purchase costs with only a few vendors directly. So these channel partners are essentially the equivalent of the general open source type of business, which is about providing value-added services around a typically zero-cost product. They naturally share a goal here.

On the one hand, the proposed procurement process change sounds like it may favour FOSS solutions because of upfront cost factors. But if these solutions cannot even be considered, because the actual providers don’t get the opportunity to be part of the process, the point is moot.

Bias and Time

This is about consulting-in-the-world. :-) (excuse my weak philosophy in-joke)

Paul Murphy posted a thought-provoking piece concerning consulting bias (and more), called Corporate loyalties and the temporal disconnect. He calls attention to the idea that people cannot really claim to be unbiased. We are wise to disclose bias so that we know how to deal with it and how it affects the decisions we make. I understood Murphy’s point to address the way people compare, for example, a product now with their past experience of it, without taking into account that the context of the comparison is “now.”

“…the memories haven’t changed, but circumstances have – and basing actions on comparisons in which one side is frozen in time is therefore intellectually dishonest.”

He elucidates this conclusion with several situations in which types of temporal bias would affect a decision. The post is mostly asserted from the viewpoint of a consultant, and it clearly focuses on a particular type of bias (the temporal sort), but of course there are other types of bias to take into consideration. That’s why I liked his point that

“The whole bias issue generally represents a fundamental mis-understanding of the problem evaluation process: open bias is often a positive thing…”

So, in thinking about these points, I had to reflect on TEC (the company for which I work). Our site frequently proclaims that we’re “impartial” and that we attempt to present analysis of software data without bias. Is that really possible? On the one hand, I’d like to say yes, because the way we evaluate the functionality of, say, an open source ERP vendor against a proprietary one is based (this is the most rudimentary way of saying it; actually it’s more involved) on a program that calculates features supported against those not fully supported. In other words, this should take out the human bias that might be present in a consultant trying to recommend a system to a client. The consultant may be susceptible to the temporal situation pointed out by Mr. Murphy or, more likely, might be involved in a certain business relationship with vendors that provides an incentive for recommending those vendors’ solutions. Our company, on the other hand, has no allegiance to any particular vendor.
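To give a rough idea of what I mean by a program doing the comparison, here is a minimal sketch (the support levels, weights, and function names are hypothetical illustrations, not our actual engine):

```python
# Minimal sketch: score a vendor by tallying support levels across criteria,
# so the comparison is mechanical rather than a human judgment call.
SUPPORT_LEVELS = {"supported": 1.0, "partial": 0.5, "unsupported": 0.0}

def functionality_score(responses):
    """responses: dict mapping criterion name -> support level string."""
    if not responses:
        return 0.0
    total = sum(SUPPORT_LEVELS[level] for level in responses.values())
    return total / len(responses)  # fraction of functionality covered

open_source_erp = {"general ledger": "supported", "multi-currency": "partial"}
proprietary_erp = {"general ledger": "supported", "multi-currency": "supported"}
print(functionality_score(open_source_erp))   # 0.75
print(functionality_score(proprietary_erp))   # 1.0
```

The point of the sketch is only that the arithmetic itself plays no favourites; any bias has to enter through the criteria and the ratings that feed it, which is exactly where the next paragraph picks up.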

However, I have to think that using a program to weigh functional capability, and having no alignment with specific vendors, do not necessarily equate to a lack of bias. At some level we could probably discover some form of bias. For example, as our analysts model the criteria on which to evaluate vendors, they rely on their research and experience, and so probably introduce, however innocently, certain biases. I can think of one simple example right away. Before open source software became a well-known enterprise commodity, many of our analyses did not include criteria for open source database support; thus, in some ways, perhaps proprietary solutions had a form of advantage.

I would hesitate to call this criticism; rather, I think it highlights a reason why analysts, consultants, etc., have to be constantly self-critical, constantly trying to reflect on why they conclude certain criteria are applicable to software comparisons and merit further research or recommendation. Reflecting on the processes we undertake to form these analyses, comparisons, or conclusions may also be enlightening about the trends of the times. And that comes back to Murphy’s point on the changing circumstances of temporality.

Company Acquisition is Customer Acquisition

Here’s a lucid read from AMR Research (I found it by way of The ERP Graveyard) that nicely discusses the issues involved in enterprise software consolidation. I’m linking to it because, in my previous post, I mentioned the notion that the Infors (and Golden Gate Capitals) of the world may be buying all the enterprise software vendors they can in order to accumulate maintenance customers, and thus the revenue from those customers.

In some ways this may help customers, and AMR lists the reasons, but the strategy also presents many problems for them. According to the AMR article, it is likely concerned with packaging large customer bases (for future sale) rather than rounding out product functionality.

…this type of consolidation comes at a price that, unfortunately, customers must pay. By acquiring many competing products, aggregators tend to reduce competition within a market and cast the pall of acquisition over the remaining players. Because it is very expensive for customers to switch their enterprise applications, most companies have little choice but to stay put, pay maintenance, and hope for the best. While they may have had great leverage with the small vendor that originally sold them their software, they have little with their aggregator today.

So if the majority of a company’s IT budget is going to old projects, I suppose this could be quite a concern. Unfortunately, and this may have been beyond the scope of the article, AMR did not really comment on how this affects innovation in the products.

What’s Going on with SMB Linux Accounting?

WebCPA published an overview of some of the issues involved in Linux and open source deployments for the SMB crowd. It mostly focuses on some of the financial packages available, mentioning companies like ACCPAC/Sage, Open Systems, and InsynQ as offering their solutions for a Linux platform. The article, while providing what I thought was a pretty wide-ranging overview, seemed blurry on a few points (noting, for example, that there would soon be Macs able to run Linux natively, when actually that’s been possible for a long time, and mentioning freeware when I believe it probably meant free software).

The main thing that interested me was its discussion of the companies that provide accounting solutions for open source platforms; most of those recognized in the article actually seem to fall a bit short. Indeed, to quote a comment from an ACCPAC reseller presented in the article:

“…Beck says most Linux installations involve running IBM DB2 database on a Linux server while running the accounting application on a Microsoft system.”

I wonder why that is. Shortly after, the article states the following:

“Sage Accpac, which supports both Windows and Linux, estimates this year that as much as 20 percent of its installations were on Linux, up from 12 percent a year earlier.”

If that’s the case, I wonder why they’re not putting the complete system on Linux as opposed to setting up a mixed environment. Some of the arguments presented say that “the midmarket is not asking for it.” Is it that straightforward? I wonder if that means they’re only considering people who specifically say they want Linux. What if people want the benefits, costs, support levels, functionality, etc. that a Linux platform may provide, but have not thought to demand them via Linux specifically? Answering “the midmarket is not asking for it” seems a little incongruous with the rest of the stats about Linux deployment. I’d like to know how these match or do not match.

Finally, although the article highlights some proprietary financial apps that run with Linux or MySQL, it doesn’t mention a single financial application that is itself open source. They do exist, though; perhaps I’ll follow up on this in the future.

Linux TCO with Eyes Open

IBM published an overview of two recent Linux TCO studies. One of the studies was done by the Robert Frances Group and the other by a group called Pund-IT Inc. Unlike another recent attention-getting study, these found the cost results were in Linux’s favour. I haven’t seen the actual studies, so I don’t know much about the methodology they used, but it seems one was done by surveying twenty companies regarding their application servers, while the other was an in-depth review of three specific companies, each in a different industry. Both concluded that the Linux deployments were significantly lower in TCO.

After the overview, the article provides an interview with the reports’ authors. One point that I thought was insightful came from RFG’s Chad Robinson. In discussing good and bad Linux deployments, he mentioned

“The people that go into Linux with their eyes open tend to be the most successful, because they don’t try to make Linux fit the old model. When you deploy Linux, it’s not enough just to put a new operating system out there, because you’ve added an operating system to your mix, and that increases complexity. If you just drop Linux in as a replacement and you expect it to behave exactly the same way that your old operating system did, then you’re going to do a little worse than a little better.”

I think that makes a lot of sense. I frequently read articles that talk about advantages or disadvantages to deploying Linux; maybe whenever these are discussed, there should also be a discussion of the ways they relate to and change the existing work environment. One might make a transportation analogy. Say I have a car that I sometimes drive to work, yet there is a cultural push to start riding bicycles instead. Perhaps this could be viewed as adding complexity, because the roads must accommodate cars and bikes. However, when I ride a bicycle, I never go to a gas station to guzzle at the pump; it would be pointless (well, if I was already feeling pointless I might make this a different story and have a sip or two). The two different modes of transportation do not have the same requirements. The advantages of one (it reduces pollution and saves money) would be counteracted if everyone stuck to the same old, unnecessary model by guzzling gas from atop their bicycles. Quite a catastrophe.

About the Evaluation Layer for Open Source Services

I just read Alex Fletcher’s first piece of the Open Source Software Bedrock. He delineates three layers, namely evaluation, adoption, and integration. Evaluation is what the other layers get stacked upon, and altogether these make up what he describes as a supporting foundation for the policies, practices, and standards of the software’s life cycle. It seems to me that a guiding phenomenon inspiring the article is how FOSS changes the traditional selection/purchase process. Fletcher states:

“The traditional model of contacting a vendor, arranging a demo/sales pitch, wading through marketing fodder, etc. has been replaced with a model that shifts the balance of power from the vendor to the end user/customer.”

A point on this balance of power that I’m not sure surfaced in the article is that sometimes a potential customer of an open source software firm has already downloaded, installed, and sampled the software before contacting the vendor for support or other services or products. (This is likely to be true, though less frequently, of totally proprietary software.) It means that the customer, on contacting a FOSS vendor, is doing so from a potentially more informed stance about what it requires from the vendor. This could also make it easier for the vendor to understand what is most valuable to provide to the client. In other words, I’m not completely sure that this change in model necessarily shifts the balance of power. Perhaps instead, it shifts the needs assessment and provision processes in a way that might benefit both sides.

Fletcher makes a point that this “…paradigm requires a more prepared and motivated end user/customer…” which I appreciate, though I think the new model, in some ways, also aids that end. Another couple of points that I thought were well put and would like to address are Fletcher’s statements:

“It is a high priority to understand the exact support terms for a given piece of software, in line with any anticipated needs as revealed during the evaluation phase.” and “If the evaluation layer is done haphazardly the according adoption and integration layers will lack the proper support to be of any value.”

I believe this leads right into one of the greater points of evaluating an open source solution, which is how to ensure ongoing, stable, professional support. It seems to be a fear raised repeatedly by potential adopters of open source software. Yet that support is the basis on which many, if not the majority, of the vendors build their businesses around open source software. The support options are available, so the customer must make sure it identifies the proper ones, which may not be that simple. I think, as with other software evaluation practices, it is important to systematically identify business requirements from the different stakeholders within the company. Once those are well understood, the customer should thoroughly evaluate how potential vendors compare on all of the requirements.

A resource for evaluating open source IT and Linux service providers is the FOSS Evaluation Center. It offers about a thousand criteria addressing different support requirements a customer might have of a vendor, and it lets people compare vendors on each point. I designed those criteria, so it’s a bit of a plug, but it can be accessed for free, and I hope it’s useful. Another resource that might be useful toward the evaluation end (though I don’t have any experience with it) is a site called Find Open Source Support.

Sides of Subverting Open Source

Martin Schneider at The 451 Group commented on whether the collective “we” can be too jaded regarding some proprietary vendors’ apparent embrace of open source methods. This was in response to a piece by Dave Rosenberg and Matt Asay about subverting open source for the sake of certain marketing purposes. Rosenberg and Asay essentially say that Microsoft and SAP have a well-known history of speaking out against Free and open source software (FOSS) and its concepts.

Certainly, Microsoft and SAP have put effort and money into spreading fear, uncertainty, and doubt (FUD), and both have publicly made some very strange statements about or against FOSS. Yet recently, both have been putting some effort into releasing bits under an open source model or funding some open source development. Rosenberg and Asay seem to think there is an ulterior motive:

“Any outreach attempts from vendors who have worked for years to destroy open source should be taken with a grain of salt and a sharp eye cast on motivating factors.”

Or could this mean, as Schneider suggests, that these companies are beginning to join the community’s stance that open source “…is simply a better way to create and distribute software”? Rosenberg and Asay seem to take that into account by acknowledging that the project leaders for the open source initiatives within these companies are probably working in earnest. Still, I can’t help but lean toward a bigger picture: as a whole, there is something else, more involved, taking place.

It makes perfect sense, if you’re a proprietary vendor, to delve deeply into your FOSS competitors, and for several reasons. I believe there are serious reasons to be wary of such proprietary vendors’ forays into FOSS and, at the same time, to embrace them. Here is why.

First, any vendor has to know what it’s competing against. This is just standard good business practice; there is even an industry devoted to supporting this idea, namely competitive intelligence. What better way to understand the new models undoing your traditional strategy than to emulate them and find out how they work? The more you understand, the better you can build your products to compete and win. If the FOSS community innovates new technology, Microsoft wins by learning it and improving upon it for its own products, just like any good open source vendor would want to do (of course, an open source vendor would participate by feeding those improvements back to the community as well).

Second, what about that often-referred-to Machiavellian notion of keeping your friends close and enemies closer? If Microsoft can successfully attract an open source development community into its fold (so to speak), it gains a very powerful tool, a foothold in the “enemy’s” camp, which allows it to anticipate and prepare its proprietary strategies.

Third, does it hurt the proprietary vendor in any way? They’ve got all their proprietary business and propaganda in full swing; everyone already knows about that. On the other hand, FOSS and Linux are gaining recognition. I’ll make an educated guess that FOSS and Linux are still not as well understood, in concept, by the majority of business decision-makers, much less the public in general. I think they still lack the massive public feeling of acceptance that most software vendors currently enjoy with their traditional proprietary business models. However, as that understanding and recognition grow in positive ways, it can only help companies like Microsoft and SAP to show they’re just as involved in the leading edge of technology practices. It’s simply good PR. If Microsoft and SAP can manage this while maintaining their proprietary side, so much the better for them (from their perspective).

Fourth, let’s suppose there truly is an ulterior motive to subvert FOSS communities. In the shoes of a company like Microsoft, it makes sense to blur the lines of differentiation between your proprietary approach and real FOSS approaches (hence the shared-source initiative). The harder it is for critics, detractors, or enemies to clearly differentiate your approach from their own, the harder it will be for them to spotlight your weaknesses and their strengths, and thus the customer cannot act on clear information in his or her software selection decisions. Furthermore, if you actually do participate in some ways with the FOSS community, you may gain some supporters who will defend, in good conscience, your motives, and possibly even turn a blind eye toward some of your other, less savoury practices (this not only blurs more boundaries but also helps with grassroots PR, which is oh-so-important on the Internet).

Finally, I’d like to say that there is already no clear side-versus-side here; we have to pay attention to the grey to really comprehend the situation. While I think we can see companies like Microsoft and SAP employing some intriguing strategies for subversion, and there are battles between models and methodologies, to a degree there is also some learning and the adoption of new and better practices. Because of the co-opetitive nature of FOSS models, the gradual adoption by the likes of proprietary vendors may even, unexpectedly, end up subverting those vendors’ models. We’re not too jaded if we stay constantly wary and suspect these companies of efforts to undermine FOSS, but we should, at the same time, cheer them on when they actually do participate in real FOSS processes.

Blog News Feed Versus Newsletter Usage

The Wall Street Journal Online has a short and slightly thought-provoking interview with Jakob Nielsen concerning newsfeeds and blogging.

I think the news feed reader is taking the place of both some browsing activity and some e-mail activity. People ought to view blogging and news feeds not as the “extreme edge” mentioned in the interview but rather as a notable shift in the way people discover and retrieve information from web sites.

Lee Gomes (the interviewer) asked why Nielsen prefers an e-mail newsletter over a news feed. This brought up a few points on the focus of a newsletter, but Nielsen cautioned, “Unless a newsletter is very good, people will just say, ‘Oh no, more information.'” And I find that to be the case for me. There are a few newsletters I like reading, but the majority have too much garbage to wade through and simply clutter my e-mail inbox. I’m hesitant to subscribe to anyone’s newsletter now that I’m invariably offered the option during any web site registration. Over the years, site after site has reinforced the notion that once I subscribe, the subscription will balloon into unwanted mail and it will be difficult to remove myself from the lists. Even when that’s not the practice, there is that suspicion. Abuses have made that impression the general state.

Many years ago I attended a conference held by a local phone company, which was trying to convince its corporate clients to build corporate web sites (and hence buy fast Internet connections). The conference had a number of very informative sessions highlighting the benefits a web site could bring to a business. One of the points I recall was how much emphasis they put on having a clear and easy sign-up page for a company newsletter. They made the case that a well-designed newsletter would help a company stay in contact with its customers (which of course lends itself to all kinds of wonderful marketing activities).

Now I hear similar arguments for the business benefits of blogging. The nice thing about blogging, and hence blog news feeds, is that the user has complete control over whether s/he subscribes or not (unlike what happens when you release your e-mail address to the unfamiliar internal machinations of a company you probably have little reason to trust).

One last thing. On the conversational aspect of blogs (which seems to be, at least in part, commentary on who actually is reading/using them), Nielsen comments that it works for fanatics “…who are engaged so much that they will go and check out these blogs all the time.” I’m inclined to agree for the time being, but it is a shortsighted viewpoint if that is where it ends. True, most people I know haven’t got a clue what a news feed reader is, much less a blog, though since I’ve been using these for a while, they’re familiar concepts and tools to me. However, all technology uses tend to start that way. When I went to the conference I mentioned previously, an e-mail newsletter seemed like something only a small percentage of the population would ever use. That changed. Now that I regularly use a news feed reader to read articles, I use my web browser less frequently. That is a major shift in the way I access Web content.

Verifying an RFI

Today, I had a conversation with a consulting firm that works with TEC‘s decision support tools and knowledge bases (KBs) on enterprise software. In this case, they were engaged in an ERP selection project.

The consulting firm was asking me about the accuracy of the data (in our KB) regarding the functionality of some of the vendors they’d shortlisted. TEC researches and provides immense amounts of information on software products, so it is an incredibly tricky task for us to ensure that the data is accurate and timely. Considering the number of clients using our evaluation services for their projects, as well as the consultants using the same services for their clients, it amazes me when a software vendor either isn’t amenable to providing updated information about its products or, in a few cases, is less than truthful about its products’ capabilities. That’s what I want to talk about in this post, because I had to answer this consultant honestly, without bias, and what I explained to him about the way one abnormally naughty vendor treated the RFI response process seemed to slightly sour him toward its product.

First, vendors usually respond in earnest to our RFI inquiries; it’s in their best interest. I wonder, though: if a few vendors respond dishonestly while knowing TEC exposes its analysis data to thousands of customers (who may very well become sales for the vendor), how well are these vendors responding to the inquiries they receive from individual clients that don’t have many resources for vetting information? I mean to say, if you’re working on a project to select some kind of enterprise software system, design your own custom RFI, and send it out to a bunch of vendors, how are you going to be sure that the responses are truly accurate? Even consultants won’t have expertise on every product out there.

It seems to me that until you get to a stage where you’ve already selected a few vendors to give scripted demonstrations, there isn’t much of a way to verify the accuracy of the responses; and how much time will have elapsed just to get to that point? I’m not suggesting that vendors are likely to act in bad faith; criteria are also commonly misunderstood. Even with a focussed team of subject matter experts, editors, and translators, we get inquiries from very knowledgeable and intelligent people that don’t understand criteria we use for our data collection.

Here’s a way that fails. I once worked for a company that had a slick on-line decision support/analysis tool called Compariscope. Our analyst team would actually get copies of the software from the vendors and set up test environments. This ensured accuracy in the data, but it also meant the scope of the analyses was extremely limited, and because of the significant time required, we were always playing catch-up to the latest software releases. Perhaps it could have worked if we’d had hundreds of analysts, vast supplies of equipment, and vendors all willing to give us copies of their software (often they responded to requests for software as though we’d handed them a cleaver and asked them to cut off their left leg for lunch). That business model quickly evaporated. So installing and testing every type of enterprise software application is not a feasible methodology for an analyst firm, much less for the end user company.

When I started working with TEC, we only covered discrete and process ERP systems, and at that point, we only provided data for about ten vendors. Our ERP analyst, PJ, could check the information and have a decent idea whether the vendor understood the RFI and made an earnest response. But a single person cannot verify every one of over 3,000 criteria, and as we grew and started providing information on more software vendors and more subject areas (SCM, CRM, etc.), it became quite difficult to make sure all of the data were accurate. Even with additional analysts, nobody in the world really knows what every product is capable of.

I’m curious to know if anyone reading this (consultants, people who have worked on their own selection projects, etc.) has come up with good methodologies to verify the data gathered from your own RFI processes before spending serious time on product demonstrations. Please respond with your thoughts. Here is what we came up with.

TEC requires RFI responses to come from an official of the vendor who is responsible for replying to client RFIs. Then we take a few steps to vet the vendor’s data…

1) Once we retrieve a completed RFI, we have a team of people give it a quick review, checking for obvious errors and such. If it passes that test, it moves on; if not, it goes back to the vendor for revision.

2) Our analysts start reviewing the information based on their own knowledge and experience, of course, but also on things like whether the RFI is internally consistent (if you’re careful, there are ways to structure an RFI to enable this) and on benchmarks using TEC decision analysis tools. Analysts also have to be constantly aware of what’s going on in the field so that they can check for consistency with known customer results, peer findings, news, conference announcements, and vendor sources such as collateral, other products, services, and initiatives.

3) I came up with a veridical comparison method that aggregates all our existing vendor responses to the criteria in a knowledge area (ERP, for example) and defines what the likely level of support would be for each criterion. This lets analysts flag criteria where a particular vendor deviates far from an expected range, and understand what the next most likely levels of support are (there’s a rough sketch of this flagging idea after the list below). For example:

If we know that only two in thirty ERP vendors (at any tier) natively support a standard interface to CAD systems for direct data access, and we see a start-up vendor telling us this criterion is fully supported, our analysts know they’ve got to see the vendor demonstrate that. The reverse is true as well. Sometimes a vendor says it doesn’t support a criterion “out-of-the-box,” but when we talk to the vendor, or it demonstrates how its system works, we realize the vendor simply misunderstood the criterion. That’s a great opportunity for us to learn how to clarify the criterion’s wording.

4) As I hinted above: the demonstration. All of these checks can go only so far. When an analyst actually sees the vendor demonstrate its capabilities, he or she can definitely verify the accuracy of an RFI.
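Since I mentioned the veridical comparison method in step 3, here is the promised stripped-down sketch of the flagging idea (the data, support-level names, and threshold are hypothetical illustrations; our production logic is more involved):

```python
from collections import Counter

def flag_unlikely_responses(all_responses, vendor, threshold=0.4):
    """Flag the vendor's answers that few peers gave for the same criterion.

    all_responses: {vendor_name: {criterion: support_level}}
    Returns {criterion: (vendor's answer, most common peer answer)}.
    """
    flags = {}
    for criterion, level in all_responses[vendor].items():
        peers = [resp[criterion] for name, resp in all_responses.items()
                 if name != vendor and criterion in resp]
        if not peers:
            continue
        peer_counts = Counter(peers)
        if peer_counts[level] / len(peers) < threshold:
            # Rare answer relative to the field: ask for a demonstration.
            flags[criterion] = (level, peer_counts.most_common(1)[0][0])
    return flags

responses = {
    "MatureERP1": {"native CAD interface": "supported"},
    "MatureERP2": {"native CAD interface": "unsupported"},
    "MatureERP3": {"native CAD interface": "unsupported"},
    "StartupERP": {"native CAD interface": "supported"},
}
print(flag_unlikely_responses(responses, "StartupERP"))
# {'native CAD interface': ('supported', 'unsupported')}
```

The flag doesn’t say the vendor is wrong; it says the claim is unusual enough that an analyst should see it demonstrated, or should ask whether the criterion was misunderstood.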

Finally, even with the checking, benchmarking, and reviewing, sometimes, among the thousands of criteria, an error falls through the cracks. Sometimes, admittedly, we are not quite fast enough. On occasion a consultant or VAR who is intimately familiar with a product alerts us to an error. Other times a customer already using a product tells us about an error in its ratings on our system.

Perhaps it would have been best if we’d discovered these errors first. But, in my opinion, this is one of the great strengths of having information like this accessed by so many people with different perspectives via the Internet. As FOSS communities and other collaborative projects like Wikipedia have demonstrated, a little effort from many people can go a long way toward a common goal; in our case, maintaining accurate information on enterprise software products. I’ll admit this cross-checking process is probably not very transparent to the public. Perhaps a more transparent and collaborative cross-checking process would be a way to further improve data accuracy.

On the Subject of Learning, Tools for LMS Purchasers

Niall at NetDimensions Insights wrote two nice pieces pointing out a few ways that people seeking a learning management system can use low-cost tools to compare the different offerings out there. He mentioned both the Brandon Hall feature comparison document and the LMS RFI templates that Technology Evaluation Centers offers; the latter is what caught my attention. Niall comments that

Use of this template does not mean that you do not need to perform a thorough analysis of your organization’s learning management requirements. Each organization will have a unique set of requirements and you should ensure that any additional requirements identified are added to the template.

That’s exactly it, too. Templates like those can be helpful for researching functional and support requirements, and ultimately for soliciting proposals, but any spreadsheet-esque comparison grid will only do so much when it comes to analysis.

In any case, I wanted to respond by pointing out another inexpensive tool a potential LMS purchaser could use: TEC’s online LMS evaluation tool. It has the same hierarchy of criteria as the spreadsheet, except it lets you prioritize those criteria based on your organization’s learning requirements (and it can include additional custom criteria). It then analyzes your priorities to figure out which vendors match your requirements (of course, it uses rated vendor responses to all those criteria to do this). I think this supports, at least in part, Niall’s recommendation for a thorough analysis.
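For those who like to see the mechanics, the prioritized matching boils down to something like a weighted average over the criteria. Here is a toy sketch (the criteria, weights, ratings, and vendor names are made up for illustration; the actual tool rolls scores up a much larger hierarchy):

```python
def rank_vendors(priorities, vendor_ratings):
    """priorities: {criterion: weight};
    vendor_ratings: {vendor: {criterion: rating between 0 and 1}}."""
    total_weight = sum(priorities.values())
    scores = {}
    for vendor, ratings in vendor_ratings.items():
        weighted = sum(w * ratings.get(c, 0.0) for c, w in priorities.items())
        scores[vendor] = weighted / total_weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# An organization that cares most about SCORM compliance:
priorities = {"content authoring": 3, "SCORM compliance": 5, "reporting": 2}
vendor_ratings = {
    "LMS A": {"content authoring": 1.0, "SCORM compliance": 0.5, "reporting": 0.5},
    "LMS B": {"content authoring": 0.5, "SCORM compliance": 1.0, "reporting": 0.5},
}
print(rank_vendors(priorities, vendor_ratings))
# [('LMS B', 0.75), ('LMS A', 0.65)]
```

Change the weights and the ranking can flip, which is the whole point: your requirements drive the result, not the grid.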