Blog News Feed Versus Newsletter Usage

The Wall Street Journal Online has a short and slightly thought-provoking interview with Jakob Nielsen concerning newsfeeds and blogging.

I think the news feed reader is taking the place of both some browsing activity and some e-mail activity. People ought to view blogging and news feeds not as the “extreme edge” mentioned in the interview but rather as a notable shift in the way people discover and retrieve information from web sites.

Lee Gomes (the interviewer) asked why Nielsen prefers an e-mail newsletter over a news feed. Nielsen made a few points about the focus of a newsletter but cautioned, “Unless a newsletter is very good, people will just say, ‘Oh no, more information.’” That matches my experience. There are a few newsletters I like reading, but the majority have too much garbage to wade through and simply clutter my e-mail inbox. I’m hesitant to subscribe to anyone’s newsletter now that I’m invariably offered the option during any web site registration. Over the years, site after site has reinforced the notion that once I subscribe, the subscription will balloon into unwanted mail and it will be difficult to remove myself from the lists. Even when that’s not the practice, the suspicion lingers; enough abuses have made it the default assumption.

Many years ago I attended a conference held by a local phone company, which was trying to convince its corporate clients to build corporate web sites (and hence buy fast Internet connections). The conference had a number of very informative sessions highlighting the benefits a web site could bring to a business. One point I recall was how much emphasis they put on having a clear and easy sign-up page for a company newsletter. They made the case that a well-designed newsletter would help a company stay in contact with its customers (which, of course, lends itself to all kinds of wonderful marketing activities).

Now I hear similar arguments for the business benefits of blogging. The nice thing about blogging, and hence blog news feeds, is that the user has complete control over whether to subscribe or unsubscribe (unlike what happens when you release your e-mail address into the internal machinations of a company you probably have little reason to trust).

One last thing. On the conversational aspect of blogs (which seems to be, at least in part, commentary on who actually reads and uses them), Nielsen comments that it works for fanatics “…who are engaged so much that they will go and check out these blogs all the time.” I’m inclined to agree for now, but it is a shortsighted viewpoint if that is where it ends. True, most people I know haven’t got a clue what a news feed reader is, much less a blog, though since I’ve been using both for a while, they’re familiar concepts and tools to me. New technologies tend to start that way, however. When I went to the conference I mentioned previously, an e-mail newsletter seemed like something only a small percentage of the population would ever use. That changed. Now that I regularly use a news feed reader to read articles, I use my web browser less frequently. That is a major shift in the way I access Web content.

Compiere Repots Itself for Growth

It seems that open source ERP provider Compiere is prepping itself for a lot of new growth. Today it announced (hot on the heels of bringing in Andre Boisvert as its Chairman of the Board and Chief Business Development Officer) that it will move its corporate headquarters to California’s Silicon Valley, and that it has secured a nice little VC nest egg of $6 million (USD).

The company’s press release (linked above) has CEO Janke stating that “…the market’s demand for our product has outgrown our capability to scale the business accordingly.” Boisvert adds that Compiere is in a good position with its “…modern architecture at a time when many proprietary legacy ERP systems are approaching the end of their intended life cycle.” If those legacy replacements materialize, that demand should keep Compiere’s model vibrant.

In April the company announced seven new implementation partners, which I think bodes well for its business model, a model largely based on second-level support and training (in other words, supporting its partners). The more partners implementing its software, the better. The company had a little over forty partners toward the end of 2004 and now claims more than seventy. Hopefully the additional funding truly will help it scale to meet those partners’ demands.

Verifying an RFI

Today, I had a conversation with a consulting firm that works with TEC’s decision support tools and knowledge bases (KBs) on enterprise software. In this case, the firm was engaged in an ERP selection project.

The consulting firm was asking me about the accuracy of the data (in our KB) regarding the functionality of some of the vendors it had shortlisted. TEC researches and provides immense amounts of information on software products, so ensuring that the data is accurate and timely is an incredibly tricky task. Considering the number of clients using our evaluation services for their projects, as well as the consultants using the same services for their clients, it amazes me when a software vendor either isn’t amenable to providing updated information about its products or, in a few cases, is less than truthful about its products’ capabilities. That’s what I want to talk about in this post. I had to answer this consultant honestly and without bias, and what I explained about the way one abnormally naughty vendor treated the RFI response process seemed to sour him slightly toward its product.

First, vendors usually respond in earnest to our RFI inquiries; it’s in their best interest. I wonder, though: if a few vendors respond dishonestly knowing that TEC exposes its analysis data to thousands of customers (who may very well become sales for the vendor), how honestly are those vendors responding to the inquiries they receive from individual clients who have few resources for vetting information? That is, if you’re working on a project to select some kind of enterprise software system, design your own custom RFI, and send it out to a bunch of vendors, how are you going to be sure that the responses are truly accurate? Even consultants won’t have expertise on every product out there.

It seems to me that until you get to the stage where you’ve already selected a few vendors to give scripted demonstrations, there isn’t much of a way to verify the accuracy of the responses; and how much time will have elapsed just to get to that point? I’m not suggesting that vendors are likely to act in bad faith; criteria are also commonly misunderstood. Even with a focused team of subject matter experts, editors, and translators, we get inquiries from very knowledgeable and intelligent people who don’t understand the criteria we use for our data collection.

Here’s a way that fails. I once worked for a company that had a slick online decision support/analysis tool called Compariscope. Our analyst team would actually get copies of the software from the vendors and set up test environments. This ensured accuracy in the data, but it also meant the scope of the analyses was extremely limited, and because of the significant time required, we were always playing catch-up to the latest software releases. Perhaps it could have worked if we’d had hundreds of analysts, vast supplies of equipment, and vendors all willing to give us copies of their software (often they responded to requests for software as though we’d handed them a cleaver and asked them to cut off their left leg for lunch). That business model quickly evaporated. Installing and testing every type of enterprise software application is simply not a feasible methodology for an analyst firm, much less for the end user company.

When I started working with TEC, we covered only discrete and process ERP systems, and at that point, we provided data on only about ten vendors. Our ERP analyst, PJ, could check the information and have a decent idea whether the vendor understood the RFI and made an earnest response. But a single person cannot verify every one of over 3,000 criteria, and as we grew and started providing information on more software vendors and more subject areas (SCM, CRM, etc.), it became quite difficult to make sure all of the data were accurate. Even with additional analysts, nobody in the world really knows what every product is capable of.

I’m curious to know whether anyone reading this (consultants, people who have worked on their own selection projects, etc.) has come up with good methodologies for verifying data gathered through your own RFI process before spending serious time on product demonstrations. Please respond with your thoughts. Here is what we came up with.

TEC requires that RFI responses come from a vendor official who is responsible for replying to client RFIs. Then we take a few steps to vet the vendor’s data…

1) Once we receive a completed RFI, we have a team of people give it a quick review, checking for obvious errors and the like. If it passes that test, it moves on; if not, it goes back to the vendor for revision.

2) Our analysts start reviewing the information based on their own knowledge and experience, of course, but they also check things like whether the RFI is internally consistent (if you’re careful, there are ways to structure an RFI so that related criteria corroborate one another), and they run benchmarks using TEC decision analysis tools. Analysts also have to stay constantly aware of what’s going on in the field so that they can check responses for consistency with known customer results, peer findings, news, conference announcements, and vendor sources such as collateral, other products, services, and initiatives.

3) I came up with a veridical comparison method that aggregates all our existing vendor responses to the criteria in a knowledge area (ERP, for example) and defines what the likely level of support would be for each criterion. This lets analysts flag criteria where a particular vendor deviates far from the expected range and see what the next most likely levels of support are (a rough sketch of the idea appears after this list). For example:

If we know that only two in thirty ERP vendors (at any tier) natively support a standard interface to CAD systems for direct data access, and we see a start-up vendor telling us this criterion is fully supported, our analysts know they’ve got to see the vendor demonstrate it. The reverse is true as well. Sometimes a vendor says it doesn’t support a criterion “out of the box,” but when we talk to the vendor, or it demonstrates how its system works, we realize the vendor simply misunderstood the criterion. That’s a great opportunity for us to learn how to clarify the criterion’s wording.

4) As I hinted above: the demonstration. All of these checks go only so far. When an analyst actually sees the vendor demonstrate its capabilities, he or she can definitely verify the accuracy of an RFI.
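For the curious, here is a rough sketch (in Python) of the comparison idea in step 3. To be clear, this is illustrative code I’m writing for this post, not the actual implementation behind TEC’s tools; the four-point support scale, the deviation threshold, and every name in it are my own assumptions.

```python
from collections import Counter
from statistics import mean, pstdev

# Hypothetical ordinal scale for RFI answers; TEC's real rating
# levels are richer than this four-point scale.
SUPPORT = {
    "not supported": 0,
    "third-party solution": 1,
    "customization": 2,
    "out of the box": 3,
}

def flag_deviations(responses, threshold=1.5):
    """responses maps criterion -> {vendor: answer string}.

    For each criterion, the aggregate of every *other* vendor's
    answer defines the expected range; a vendor whose answer sits
    far outside it gets flagged, along with the most common
    (i.e. most likely) levels of support for context.
    """
    flags = []
    for criterion, answers in responses.items():
        scores = {v: SUPPORT[a] for v, a in answers.items()}
        for vendor, score in scores.items():
            others = [s for v, s in scores.items() if v != vendor]
            if len(others) < 2:
                continue  # too little data to define an expected range
            mu, sigma = mean(others), pstdev(others)
            if abs(score - mu) > threshold * max(sigma, 0.5):
                likely = Counter(a for v, a in answers.items() if v != vendor)
                flags.append((criterion, vendor, answers[vendor],
                              likely.most_common(2)))
    return flags
```

Run against the CAD-interface example above, the twenty-eight “not supported” answers would define the expected range, so the start-up’s “out of the box” claim would come back flagged with “not supported” listed as the most likely level, which is exactly the cue an analyst needs to ask for a demonstration.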

Finally, even with all the checking, benchmarking, and reviewing, an error among the thousands of criteria sometimes falls through the cracks. Admittedly, sometimes we are not quite fast enough either. On occasion a consultant or VAR intimately familiar with a product alerts us to an error. Other times a customer already using a product tells us about an error in its ratings on our system.

Perhaps it would have been best if we’d discovered these errors first. But, in my opinion, this is one of the great strengths of having information like this accessed by so many people with different perspectives via the Internet. As FOSS communities and other collaborative projects like Wikipedia have demonstrated, a little effort from many people can go a long way toward a common goal; in our case, maintaining accurate information on enterprise software products. I’ll admit this cross-checking process is probably not very transparent to the public. A more transparent and collaborative cross-checking process might be one way to further improve data accuracy.

On the Subject of Learning, Tools for LMS Purchasers

Niall at NetDimensions Insights wrote up two nice pieces pointing out a few ways that people seeking a learning management system can use low-cost tools to compare the different offerings out there. He mentioned both the Brandon Hall feature comparison document and the LMS RFI templates that Technology Evaluation Centers offers, which is what caught my attention. Niall comments that

Use of this template does not mean that you do not need to perform a thorough analysis of your organization’s learning management requirements. Each organization will have a unique set of requirements and you should ensure that any additional requirements identified are added to the template.

That’s exactly it, too. Templates like these can be helpful for researching functional and support requirements, and ultimately for soliciting proposals, but any spreadsheet-esque comparison grid will only do so much when it comes to analysis.

In any case, I wanted to respond by pointing out another inexpensive tool a potential LMS purchaser could use: TEC’s online LMS evaluation tool. It has the same hierarchy of criteria as the spreadsheet, except it lets you prioritize those criteria based on your organization’s learning requirements (and lets you include additional custom criteria). It then analyzes your priorities, against rated vendor responses to all those criteria, to figure out which vendors best match your requirements. I think this supports, at least in part, Niall’s recommendation for a thorough analysis.
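The core of that kind of analysis, prioritized criteria driving a vendor ranking, can be sketched in a few lines. Again, this is a deliberately simplified illustration of my own, not how TEC’s tool actually works; its decision analysis goes well beyond a weighted average, and the criteria names and scores below are invented.

```python
def rank_vendors(priorities, ratings):
    """priorities: {criterion: weight} reflecting your organization's
    learning requirements; ratings: {vendor: {criterion: 0-100 score}}
    derived from rated vendor responses.

    Returns (vendor, fit) pairs, best fit first. A criterion weighted
    at zero drops out; custom criteria simply become new keys.
    """
    total = sum(priorities.values())
    ranked = []
    for vendor, scores in ratings.items():
        fit = sum(w * scores.get(c, 0) for c, w in priorities.items()) / total
        ranked.append((vendor, fit))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# e.g. a purchaser who cares most about SCORM support:
print(rank_vendors(
    {"SCORM compliance": 10, "classroom management": 3},
    {"LMS A": {"SCORM compliance": 90, "classroom management": 40},
     "LMS B": {"SCORM compliance": 55, "classroom management": 95}},
))
```

Even this toy version shows the point of prioritizing: swap the two weights and the same two vendors trade places, because the ranking follows your organization’s requirements rather than a one-size-fits-all feature count.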

The Start

I needed a place to post some thoughts on things taking place in the IT world, and I didn’t want to mix those issues into my other sites; hence my newest blog. Blogs appear to be a communication necessity now (I sort of wonder whether they may someday be able to replace e-mail; with private sections and private trackbacks we could use them that way). I’ve experimented with blogs in a number of ways over the past several years, but it wasn’t until I recently read Robert Scoble and Shel Israel’s book, Naked Conversations, that I was inspired to begin using an RSS newsreader (actually I’m using both akregator and liferea) to keep track of a lot of blogs. After religiously reading these blogs for the last few months, I can’t not be part of the conversation anymore.

The company I work for, TEC (which has blogging on its horizon too), is in the process of developing a new decision support knowledge base addressing health care information management systems. I’ve been working on its structure today and noticed the analyst used an interesting acronym, ADL, which stands for “activities of daily living.” I can’t help but wonder why the world needs such an acronym; can’t I just, err, live? It reminds me of an article I saw linked in a blog yesterday (unfortunately I’ve forgotten which blog now) discussing a new wave of corporate buzzwords.

In any case, I’m sure ADL is an important technical term to the HCIMS industry (in a short bit I’ll probably know why). For the time being, I think I’ll just note that blogging has become a part of my ADL.