Verifying an RFI

Today, I had a conversation with a consulting firm that works with TEC’s decision support tools and knowledge bases (KBs) on enterprise software. In this case, they were engaged in an ERP selection project.

The consulting firm was asking me about the accuracy of the data in our KB regarding the functionality of some of the vendors they’d shortlisted. TEC researches and provides an immense amount of information on software products, so ensuring that the data is accurate and timely is an incredibly tricky task. Considering the number of clients using our evaluation services for their projects, as well as the consultants using the same services on behalf of their clients, it amazes me when a software vendor either isn’t amenable to providing updated information about its products or, in a few cases, is less than truthful about its products’ capabilities. That’s what I want to talk about in this post. I had to answer this consultant honestly and without bias, and what I explained to him about the way one abnormally naughty vendor treated the RFI response process seemed to slightly sour him toward its product.

First, vendors usually respond in earnest to our RFI inquiries; it’s in their best interest. I wonder, though: if a few vendors respond dishonestly even knowing that TEC exposes its analysis data to thousands of customers (who may very well become sales for the vendor), how well are these vendors responding to the inquiries they receive from individual clients who don’t have many resources for vetting information? In other words, if you’re working on a project to select some kind of enterprise software system, design your own custom RFI, and send it out to a bunch of vendors, how can you be sure that the responses are truly accurate? Even consultants won’t have expertise on every product out there.

It seems to me that until you reach the stage where you’ve already selected a few vendors to give scripted demonstrations, there isn’t much of a way to verify the accuracy of the responses; and how much time will have elapsed just to get to that point? I’m not suggesting that vendors are likely to act in bad faith; criteria are also commonly misunderstood. Even with a focused team of subject matter experts, editors, and translators, we get inquiries from very knowledgeable and intelligent people who don’t understand the criteria we use for our data collection.

Here’s a way that fails. I once worked for a company that had a slick online decision support/analysis tool called Compariscope. Our analyst team would actually get copies of the software from the vendors and set up test environments. This ensured accuracy in the data, but it also meant the scope of the analyses was extremely limited, and because of the significant time required, we were always playing catch-up to the latest software releases. Perhaps it could have worked if we’d had hundreds of analysts, vast supplies of equipment, and vendors all willing to give us copies of their software (often they responded to requests for software as though we’d handed them a cleaver and asked them to cut off their left leg for lunch). That business model quickly evaporated. So installing and testing every type of enterprise software application is not a feasible methodology for an analyst firm, much less for the end-user company.

When I started working with TEC, we only covered discrete and process ERP systems, and at that point we only provided data for about ten vendors. Our ERP analyst, PJ, could check the information and have a decent idea of whether the vendor understood the RFI and made an earnest response. But a single person cannot verify every one of over 3,000 criteria, and as we grew and started providing information on more software vendors and more subject areas (SCM, CRM, etc.), it became quite difficult to make sure all of the data were accurate. Even with additional analysts, nobody in the world really knows what every product is capable of.

I’m curious to know whether anyone who might read this (consultants, people who have worked on their own selection projects, etc.) has come up with good methodologies to verify data gathered through your own RFI process before spending serious time on product demonstrations. Please respond with your thoughts. Here is what we came up with.

TEC requires RFI responses to come from an official of the vendor who is responsible for replying to client RFIs. Then we take a few steps to vet the vendor’s data…

1) Once we receive a completed RFI, we have a team of people give it a quick review, checking for obvious errors and the like. If it passes that test, it moves on; if not, it goes back to the vendor for revision.

2) Our analysts then review the information based on their own knowledge and experience, of course, but also on checks such as whether the RFI is internally consistent with itself (if you’re careful, there are ways to structure an RFI so that related criteria cross-check each other; a simple sketch of this idea follows the list) and benchmarks built with TEC decision analysis tools. Analysts also have to stay constantly aware of what’s going on in the field so that they can check a response for consistency with known customer results, peer findings, news, conference announcements, and vendor sources such as collateral, other products, services, and initiatives.

3) I came up with a veridical comparison method that aggregates all our existing vendor responses to the criteria in a knowledge area (ERP, for example) and defines the likely level of support for each criterion. This lets analysts flag criteria where a particular vendor deviates far from the expected range and see what the next most likely levels of support are (a rough sketch of this also appears after the list). For example:

If we know that only two in thirty ERP vendors (at any tier) natively support a standard interface to CAD systems for direct data access, and we see a start-up vendor telling us this criterion is fully supported, our analysts know they’ve got to see the vendor demonstrate that. The reverse is true as well. Sometimes a vendor says it doesn’t support a criterion “out-of-the-box,” but when we talk to the vendor, or it demonstrates how its system works, we realize the vendor simply misunderstood the criterion. That’s a great opportunity for us to learn how to clarify the criterion’s wording.

4) As I hinted above: the demonstration. All of these checks can go only so far. When an analyst actually sees the vendor demonstrate its capabilities, he or she can definitively verify the accuracy of an RFI.
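To make the internal-consistency check in step 2 a bit more concrete, here is a minimal sketch in Python. The criterion names, the “FS”/“MOD”/“NS” rating codes, and the dependency pairs are all invented for illustration; they are not TEC’s actual criteria or rating scheme. The only point is that an RFI can be structured so that some answers constrain others, which gives a reviewer something mechanical to check.

```python
# Hypothetical cross-check of paired RFI criteria whose answers constrain
# each other. Rating codes and criterion names are invented for illustration:
# "FS" = fully supported, "MOD" = supported via modification, "NS" = not supported.

# One vendor's response: criterion name -> rating.
response = {
    "Multi-currency general ledger": "NS",
    "Automatic currency revaluation": "FS",   # implies a multi-currency GL
    "Lot traceability": "FS",
    "Lot recall reporting": "FS",
}

# Pairs (prerequisite, dependent): if the dependent criterion is fully
# supported, the prerequisite should be too.
DEPENDENCIES = [
    ("Multi-currency general ledger", "Automatic currency revaluation"),
    ("Lot traceability", "Lot recall reporting"),
]

def find_inconsistencies(resp):
    """Return (prerequisite, dependent) pairs that look contradictory."""
    flagged = []
    for prereq, dependent in DEPENDENCIES:
        if resp.get(dependent) == "FS" and resp.get(prereq) == "NS":
            flagged.append((prereq, dependent))
    return flagged

for prereq, dependent in find_inconsistencies(response):
    print(f"Check with vendor: '{dependent}' is rated FS "
          f"but its prerequisite '{prereq}' is rated NS.")
```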
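And here is a rough sketch of the aggregation behind step 3, again in Python and again with invented data. For each criterion it tallies how the vendors already in the knowledge area responded, treats that distribution as the expected pattern, and flags a new response whose rating is rare among its peers, along with the ratings analysts would more typically expect to see. The 15% rarity threshold is an arbitrary assumption for the example, not a figure from our methodology.

```python
from collections import Counter

# Hypothetical ratings from vendors already in the knowledge base:
# criterion -> list of peer ratings. Data is invented for illustration.
existing_responses = {
    "Standard CAD interface for direct data access": ["NS"] * 24 + ["MOD"] * 4 + ["FS"] * 2,
    "Multi-site inventory visibility": ["FS"] * 22 + ["MOD"] * 6 + ["NS"] * 2,
}

def flag_outliers(new_response, existing, rarity_threshold=0.15):
    """Flag criteria where a new vendor's rating is rare among its peers."""
    flags = []
    for criterion, rating in new_response.items():
        peers = existing.get(criterion)
        if not peers:
            continue
        counts = Counter(peers)
        share = counts[rating] / len(peers)   # how common this rating is among peers
        if share < rarity_threshold:
            likely = [r for r, _ in counts.most_common()]  # most common ratings first
            flags.append((criterion, rating, share, likely))
    return flags

# A start-up vendor claims full native support for the CAD interface.
new_vendor = {
    "Standard CAD interface for direct data access": "FS",
    "Multi-site inventory visibility": "FS",
}

for criterion, rating, share, likely in flag_outliers(new_vendor, existing_responses):
    print(f"Verify in a demo: '{criterion}' rated {rating} "
          f"(only {share:.0%} of peers say the same; typical ratings: {likely})")
```

Run against this made-up data, the sketch flags the start-up’s claim of full native CAD support, which is exactly the kind of response we would ask the vendor to demonstrate.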

Finally, even with the checking, benchmarking, and reviewing, sometimes, among the thousands of criteria, an error falls through the cracks. Sometimes, admittedly, we are simply not quite fast enough. On occasion a consultant or VAR intimately familiar with a product alerts us to an error. Other times a customer already using a product tells us about an error in its ratings on our system.

Perhaps it would have been best if we’d discovered these errors first. But, in my opinion, this is one of the great strengths of having information like this accessed by so many people with different perspectives via the Internet. As FOSS communities and other collaborative projects like Wikipedia have demonstrated, a little effort from many people can go a long way toward a common goal. In our case, that goal is maintaining accurate information on enterprise software products. I’ll admit this cross-checking process is probably not very transparent to the public. Perhaps a more transparent and collaborative cross-checking process would further improve data accuracy.
