Continuing the Bullying of Analysts Issue

Today I read a SageCircle post about threatening analysts by cancelling business, which seems like a variety of bullying and certainly an abuse. I discussed analyst abuse previously, in a situation that involved bullying an analyst. I looked at that situation as one that hampered both the analyst/vendor relationship and the quality of communications. SageCircle offers the following smart observation.

“First, it does not make business sense for an analyst at a major firm to change research that displeases a vendor, even one that is a client. If an analyst developed a reputation for being that malleable they would soon have no clients as what they sell in part is objectivity and independence.”

I completely agree with this statement. Unfortunately, it’s not always easy to show vendors that they’re not helping their cause when they try to undermine the objectivity of the analyst’s perspective. Occasionally a software vendor does try to upset this balance–I’ve felt the implicit, if not sometimes explicit, threat of cancelled business. TEC based its model on trying to be an “impartial advocate for the end user,” which is why our company has an audience that software vendors want to be in front of. That objectivity and independence is the wellspring of the audience the vendor seeks.

I tend to agree with most of the SageCircle points except I’m uneasy with the following.

“…analysts are not responsible for contract value so they don’t care if a vendor client cancels. Yes, the sales rep whose year just went down the drain will care, but the analyst just shrugs.”

But really, a cavalier attitude toward the work produced is unlikely to do anyone much good. Although the analyst may not be the one directly making the sale (in my company’s case we try to maintain a sort of church/state separation), all employees of a company do need to pull together in their work–after all, the analyst’s job is every bit as much on the line as the salesperson’s. Does this imply that no analyst can be entirely objective? Well, entire objectivity is a full topic in itself and covers a lot more ground than just where the money comes from.

So where am I going with that comment? Look, how could an analyst do his or her job well if s/he wasn’t attentive to a vendor’s concerns (even if they do involve threats or bullying)? There may be some underlying issue that has not been well understood, or another sort of misunderstanding. The analyst, conscientious toward his or her labours, ought to critically consider these possibilities rather than shrug. I’d argue that the analyst ought to have the intellectual capacity to separate the threat from the issues so that s/he can rise above a vendor’s unsatisfactory communication skills (which, in the end, is all that a threat boils down to) in order to deal with the issue at hand.

As for the rest of the SageCircle post, it continues with a series of other nicely made points on the topic of cancelled-business threats–I tend to agree with those and won’t comment further here. Software vendors, it’s worth a read!

Bullying Analysts isn’t the Best Way to Deal

I’ve enjoyed reading Robin Bloor’s series of posts on How to Deal with Analysts. The title of one called attention to analyst abuse, which set some thoughts meandering. Robin made a point under the heading of scruples, related to briefings.

“The fundamental balancing act lies in the interaction between analyst and vendor. The vendors are keen for the analysts to know and understand their products. The analysts treat briefings as occasions for relationship building and selling.”

Although the point of the post I’ve quoted differs from what I’m about to mention, I really liked that bit in relation to the title on analyst abuse. One form of analyst abuse that could be included in a taxonomy on the subject: bullying.

A few months ago, my job function changed with some corporate restructuring. I found myself taking on the management of several additional teams, including directing TEC’s research analyst group. It’s been an interesting and busy few months where (cue an excuse for my woeful lack of posting here) I’ve identified and set the groundwork for the year and focused attention on areas that needed it.

Yes, it’s an exciting time full of creative possibilities but when you take on a new job, role, or responsibility you quickly learn it comes ready to share its treasure trove of frustrations too. For example, analyst/vendor communications sometimes feel like an uplifting meeting of intelligent people, ready to help each other learn and spread useful information. Other times, communications go awry and it seems that one person after another dumps their grey matter onto a buzzing heap of rotting political motivation.

I think that Robin says a key thing in calling attention to the analyst and vendor interaction. Something important transpires (or ought to) between analyst and vendor, which involves building a relationship. That relationship (fundamentally if it’s good) requires understanding of the vendor’s product, direction, motivations, etc. Personally, if I don’t have a good relationship with someone, I find it more challenging to understand the person. Why? Pragmatically speaking, a poor relationship likely signifies that the people involved are not communicating well. Not communicating well certainly doesn’t improve understanding.

So, back to my frustrations… there I was, sitting with one of my analysts on a briefing. The briefing resulted from a third party that provides some sort of marketing/publicity/AR function for the software vendor. OK, that’s fine, a nice briefing facilitated through this person’s efforts. However, the underside of this is that the third party only facilitated the briefing after falsely accusing the TEC analyst of having erred by excluding the vendor from an article we published. We agreed to the briefing under the assumption that it was a good opportunity to find out more and get to know this vendor better–after all, what harm could learning more do? Sadly, it seems the third party presented it to the vendor in a rather different light, one in which the vendor was inaccurately led to believe we’d slighted them and owed them a fix.

The briefing was fine. Later, however, the third party began a strangely vehement and tentacled campaign to charge us with further (false) wrongdoings. The third party bestowed its unfounded opinions on a host of people including the vendor’s president–curiously shaping attitudes around a neglectful mythology. Coinciding with this were the third party’s demands that we publish new research about this particular vendor or mention it in unmerited ways.

To me, this is a case of bullying. Here, the third party muddied rather than fostered good interactions between a vendor and an analyst. I suspect this particular third party has an odd sort of motivation to appear as an important source for garnering publicity (thereby securing its position with the vendor). Of course this is just an example, not necessarily the norm.

I don’t know whether bullying works on many analysts but it doesn’t impress me. At best, it cannot influence an analysis of the vendor; at worst, provided I have to continue dealing with the marketing/publicity/AR third party, it doesn’t compel me to reach out more than required to do my job properly, and it certainly raises some questions about that vendor’s interactions with its customers, partners, etc. I mean, is that part of its corporate culture?

It’s striking that a person hired to facilitate understanding with analysts instead permeated vendor/analyst communications with misunderstanding. Bullying is analyst abuse; it fouls the relationship.

TEC’s Blog is Born!

The TEC Blog went live today. It’s been quite a while in the works but finally TEC is publishing its own analysis and corporate blog. My TEC colleagues and I will use it to regularly discuss enterprise software and selection issues, and augment the other research/articles we publish.

Although I’ll continue to blog here, I’ll be addressing FOSS, software selection issues, and TEC’s services, research, and products on the TEC blog.

The TEC blog is actually a multi-blogging site. We’ve begun by publishing in English and Spanish, with additional blogs to come in other languages, including French and perhaps Chinese. We’re starting small at the moment and will then look at expanding it into more blogs. I expect we’ll have fun working out a number of kinks over the coming weeks. I hope the blog will make it much easier to have an open line of communication with our regular users and other visitors.

Fronting Prim and Proper Research

A long-running debate at TEC: is it a good idea or a bad idea to enable public visitor comments on our research? I’m not referring to blogs, which by their very nature are intended to enable commentary. I’m thinking in the context of analyst firm research. I think there is a lot of room here to create an interesting and valuable research methodology (I’m sure I’m not the first to say so). Here’s some background on my query.

TEC has published articles and other research on the IT/enterprise software front since the early 90s. For the majority of that time we haven’t asked our visitors to pay for much of this research. I often compare what we offer (rightly and wrongly) to things available from other analyst companies like Gartner. Gartner, for example, has just about everything locked behind its e-walls. It’s almost all for sale over there. If you go to Gartner for a report or some other research, you won’t see commentary posted under the report by regular visitors debating/debasing that report. Should you? Haven’t we all seen that some of the most significant cultural, business, political, and other developments are based on the new communication and collaboration means enabled by Internet technologies?

Back to TEC. I pushed for a while to have a simple comment system on our site, something that our visitors could use to post thoughts about our articles, podcasts, reports, etc. It was implemented and people began using it. There was a mixture of comments. As you’d expect, some were nice, some were not; some were well thought out, others not so much. C’est la vie.

We didn’t implement a community moderation system like, say, Slashdot does. This, then, is where potential problems enter. I happen to be opposed to any electronic forum censorship (note: I don’t view a community moderation system as censorship; rather, it’s a peer-reviewed ranking device). Having waded through online censorship first-hand (dating all the way back to the days of BBSs), I’ve seen how censoring comments tends to destroy online communities, or at least ultimately drives their quality down (I’d make an exception for things like spam, which aren’t comments in the first place). But that’s another debate.
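To make the censorship-versus-ranking distinction concrete, here’s a minimal sketch of my own (an illustration, not Slashdot’s actual implementation). The key property: nothing is ever deleted. Comments accumulate a peer-assigned score, and each reader chooses a display threshold.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    score: int = 1  # every comment starts visible at a baseline score

    def moderate(self, delta: int) -> None:
        """Peers nudge the score up or down; clamp to a fixed range
        (Slashdot famously uses -1 to 5)."""
        self.score = max(-1, min(5, self.score + delta))

def visible(comments, threshold=0):
    """Nothing is removed; low-scored comments simply fall below the
    threshold each reader picks for themselves."""
    return [c for c in comments if c.score >= threshold]

comments = [Comment("alice", "Thoughtful critique of the report"),
            Comment("bob", "FIRST POST!!!")]
comments[0].moderate(+2)   # peers rate it insightful
comments[1].moderate(-2)   # peers rate it noise

print([c.author for c in visible(comments, threshold=1)])  # ['alice']
```

The point of the design is that the ranking is reversible and reader-controlled: set the threshold to -1 and every comment, however poorly rated, is still there.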

A portion of TEC vehemently opposed displaying negative or poorly written comments, with well-intentioned reasons. “Imagine if all analyst firms allowed such comments,” they figured (I’m paraphrasing the ideas). “Would they still be able to sell their research?” I think it’s a good question. Will people see commentary by other visitors and lose trust in what you publish? Does it detract from the professional image of the site? After all, sites like my Slashdot example never portray themselves as analyst firms–they aim for a different impression entirely. Can an analyst firm, often sought out as a subject matter expert, survive while fostering its own public criticism?

I think it could. A well-considered approach could enable that firm to take the reins and harness that criticism to improve. I think if you really are a subject matter expert, or even if you’re not an expert (I’m more of a generalist) but practice well-refined analysis and synthesis skills, you have nothing to hide and would welcome the opportunity to discuss your research publicly.

I would like to see greater online visitor participation. I think there is a lot of potential in getting all the different people related to aspects of the IT industry involved in voicing their activities, concerns, ideas, etc. around a specific body of research. It would probably make that research more valuable rather than detract from it. It could even give the firm totally new ideas for improving their products/services, just the way participation in FOSS development can.

Of course right now, we can all do this to some degree using blogs, but then aren’t we all just circling around the research, rather than assaulting it directly, in its home, where everyone else gets a chance to form some perspective? Maybe it’d be in an analyst firm’s interest to maintain that home? I’ve seen several peer-reviewed journals on the Web, like First Monday. A few sites, such as ITerating, seem to be making some sort of effort to approach certain forms of IT research from this angle. RedMonk is interesting in that they espouse a similar idea through blogging. As I mentioned at the start, however, the potential for reader participation is inherent to blogging; it’s not the same as offering a particular piece of research or report (or the methodology of developing it) to be ripped to shreds, lauded, or critically enhanced by its community of software users, consultants, vendors, developers, etc.

Whether or not it can be purchased is relevant to the business model, but not so much to the greater issue of what’s more useful–what can be done better? If you could derive a certain edge from opening up all your analyst research to public commentary, I think you might discover some very interesting competitive advantages. I’ve got ideas–but that’s for another time. In the end, I didn’t make a strong enough argument for the value of uncensored commentary and had to ask the dev team to remove the comment capability altogether. Maybe the implementation was too basic. Perhaps down the road we’ll find a way to make it more productive by implementing it differently. In any case, in the meantime, I’m happy to say that although our comment system died today, we simultaneously launched an official TEC blog. And that will be the subject of my next post.

Wiki While You Work

The Globe and Mail published an article about using wiki applications in the workplace. While not a new notion, this is the first time I’ve seen it in a regular newspaper and not an IT business rag. A point the article touches on is the wiki’s security. I think wiki security may be one of the more misunderstood issues about using a wiki for work and an important differentiating factor in determining when to use an enterprise content or document management system (CMS/DMS) and when to use a wiki. In fact, I think it’s hard to beat a wiki if you need an application to capture and disseminate employee knowledge.

“One drawback is security. Much of the hype around wikis concerns their ability to place everyone from the receptionists to clients to chief executive officers on the same virtual playing field.”

The key phrase above is that it puts people “on the same virtual playing field.” Useful things take place when people are uniformly able to document their activities, collaborative or otherwise. Simplicity is a defining aspect of wiki applications–they make it incredibly simple to collaborate on developing, publishing, or otherwise contributing to company information, documents, and in some cases products. I’ll talk about an internal wiki only, as I realize that one open to clients as well may present a slightly different set of issues. Still, I’d argue that in most cases the somewhat loose security is more of a benefit than a drawback. Let me illustrate this with how the company I work for uses one.

Some time ago, frustrated with the problems of repeatedly sending mass e-mails to everyone in our company, I set up an internal corporate wiki. A wiki is excellent for work that is in constant flux or must be accessible by everyone in the company. We use ours to:

  • communicate important news or announcements
  • inform about policies that must be adhered to
  • distribute documents
  • collaborate on work issues
  • capture and disseminate the day-to-day knowledge that employees develop

I think these things fail through e-mail but work with a wiki. Alternative systems, like a CMS or DMS, are usually (though not always) too encumbered with hierarchy structures, metadata entry, and access controls to be effective for the purposes I mentioned above. Even when people save e-mail messages, they must make repeated archaeological expeditions through their e-mail histories. If announcements need to be referred to in the future, there’s no guarantee people will be able to find them in an inbox. Policies and solved problems are likely to be forgotten if they’re not easily present and visible, as they are in a wiki. Ensuring that people always use the most up-to-date versions of documents means making them easily accessible, and that is nicely accomplished with a wiki. Using e-mail to collaborate on projects can become a nightmare of criss-crossing information, which often leaves people out of the loop. If people are in the habit of working with a wiki on all sorts of general day-to-day tasks, it becomes an automatic, company-wide storehouse of employee knowledge.

Using a wiki facilitates these activities. For example, at TEC we internally use the fantastic, open source Wikka Wiki application. It’s simple enough that people can be productive with it after about five or ten minutes of instruction. It doesn’t confuse with over-sparkly and burdensome features. It’s fast–it takes fractions of a second to access and edit a page in a web browser. It doesn’t require manipulating difficult access permissions. These are all important features because they make it at least on par with, if not easier than, sending an e-mail or accessing a DMS. If you want to change people’s work habits away from constant e-mail use, then I think the alternative ought to be at least as easy and efficient, or else offer something so incredibly good as to compel its use.

Before the wiki, people would forget what an important policy might be after six months. Now, even if forgotten, it can be easily found for reference. Before the wiki, frequently used documents were sometimes difficult to disseminate in their most up-to-date form. Now they’re updated, in short order, on their corresponding wiki page.

Before the wiki, information about projects that different groups in the company had to collaborate on was spread across different people’s e-mails. There was the risk that someone wouldn’t get all the information s/he needed. Now it gets collaboratively updated on pages that anyone within the company can see, which has the added benefit that sometimes people without an obvious, direct connection to the project can discover it and contribute or use it in positive ways that nobody would have imagined previously.

I don’t think a wiki replaces a DMS or vice versa. A DMS might sound like it is designed to capture and better enable such collaboration but I don’t believe that is necessarily its strongest point. I think a DMS is probably better-suited to developing documents that require tight version control, traditional hierarchy structures, and cannot necessarily be developed as content within web pages. A DMS might be more useful for archival purposes or for documents that are sensitive and absolutely must have special access controls. But a DMS tends to be more cumbersome in the security and access area, and thus loses utility in the area of capturing and disseminating employee knowledge.

Spreading the wiki. In the past, people would sometimes tell me about a project they needed to work on or information they wanted to store in an easily usable way. I’d recommend they try the wiki. They’d ask, of course, “what’s that?” and I’d spend five or ten minutes explaining it. The interesting thing is that they’d then go off and explain it to other people on their teams, the different teams would work on things with the wiki, and word of mouth would spread its use. I’m sure this isn’t a 100% effective way to promote it, but I was pleasantly surprised that after implementing the wiki and announcing it, people started pushing its use of their own accord.

A system that requires a lot of security, perhaps needing more of a top-down approach, wouldn’t permit this type of usage. Setting up access controls, accounts, and maybe designing structures for how a company uses its systems of collaboration and knowledge sharing may be time-consuming and ultimately fail to do the job for which it’s intended. A wiki, on the other hand, allows all this to self-organize. The chaos of knowledge that frequently gets developed and lost throughout a workplace gains a facility in which to reside, and that attracts use.

New BI and CRM Evaluations

At TEC, we recently launched several new knowledge bases for comparing and analyzing software vendors and products. I’ve noticed a number of new sites recently that are attempting tools similar to TEC’s evaluation system. I plan to write a little about these shortly. But for now, I thought I’d put the word out that our CRM and business intelligence (BI) segments have expanded considerably.

Our BI analyst refined the research so that we now model business intelligence systems on the one hand, and business performance management systems on the other. It seems those two areas were not previously well distinguished, in part because the applications share a lot of common functionality. Hopefully these new BI & BPM evaluation centers will help users better understand how to select and employ these applications. We have evaluation data for a number of well-known vendors such as Oracle | Hyperion and Applix, but I’d like to include some open source apps, like Pentaho, in our knowledge bases.

In addition, our CRM analysts not only improved our model of CRM applications but also branched out a new model of sales force automation (SFA) systems. We already let users analyze their requirements for SugarCRM. Perhaps as demand for open source CRM products increases, other FOSS vendors will want a chance to be introduced in a fair way alongside their proprietary competitors.

Due-diligence in the Selection Process

An article on voting machine selection in the Boston Globe caught my interest the other day. The infamous Diebold company seems to be suing the Commonwealth of Massachusetts for improperly selecting a competitor’s voting machines. Never mind my opinion on the quality of Diebold’s voting products; the article caught my interest because of my involvement in complex software selection projects.

According to the article, Diebold claims the office of the secretary of state failed to choose the best voting machine. My first inclination is to assume that of course Diebold would say that; it’s the competition. Presumably the office of the secretary conducted some sort of selection process and, based on whatever factors it defined for the decision, found that the AutoMARK machines were better suited to its needs.

Assuming there truly was a fair and accurate selection process that led to the purchasing decision, then Diebold appears out of line. However, what was that process? The article doesn’t discuss it. Wondering if it was made public, I did some searching but didn’t have much success finding any public record of what it might be.

Sometimes one can find public RFPs but there are many different methods used in government procurement processes. Perhaps if government selection processes are conducted well, with a verifiable trail of due-diligence, they should also be consistently made a matter of public record. That would ward off the impression that anything/anyone had undermined the selection process and improperly awarded a contract.

Chatting with one of my colleagues about this issue yesterday, he mused that our company often has an easier time offering our selection methodology services and tools to “developing” nations’ government organizations than to “developed” ones. He comes from a region that might still be considered a developing nation. I asked what he meant and, to paraphrase, he replied that in nations where the government has undergone a lot of upheaval, the public often has a strong perception that government corruption needs to be brought under control. So those government organizations may be more willing to implement selection methodologies with well-defined trails of due-diligence, whereas countries with long-standing stable governments may employ officials who don’t feel quite the same pressures.

I suppose that’s only one person’s speculation, but it reflects an important point: without a well-defined and documentable selection process, you open yourself to the impression of bias or corruption. Based on the information in the Boston Globe article, I don’t think we can assume anything corrupt necessarily happened within Massachusetts’ selection process. But it seems to illustrate a fine reason why organisations (government or not) should carefully document processes, priorities, factors for decision-making, and respondents’ capabilities during their complex selection processes.
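As a sketch of what a documentable process can look like, here’s a toy weighted-criteria scoring exercise. The criteria, weights, vendor names, and scores below are entirely hypothetical (not from the Massachusetts case or TEC’s actual methodology); the point is that every number behind the decision becomes part of a verifiable record.

```python
# Hypothetical criteria and agreed-upon priority weights (sum to 1.0).
# Recording these up front is what makes the decision auditable.
criteria = {"security": 0.4, "accessibility": 0.3, "cost": 0.2, "support": 0.1}

# Hypothetical raw scores (0-10) gathered from each respondent's RFP answers.
vendors = {
    "Machine A": {"security": 7, "accessibility": 9, "cost": 6, "support": 8},
    "Machine B": {"security": 8, "accessibility": 6, "cost": 7, "support": 7},
}

def weighted_score(scores, weights):
    """Multiply each criterion's raw score by its weight and sum.
    Both inputs are part of the documented due-diligence trail."""
    return sum(scores[c] * w for c, w in weights.items())

ranking = sorted(vendors, key=lambda v: weighted_score(vendors[v], criteria),
                 reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(vendors[name], criteria):.1f}")
```

With the weights and scores on record, a losing bidder can check the arithmetic rather than allege bias: here Machine A wins 7.5 to 7.1, and anyone can see exactly why.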

Redesign of TEC

Finally. TEC (the company I work for) launched its redesigned web site. Sometimes a web site redesign can be such a breath of fresh air. In spite of many people’s best and sincere efforts, our old site didn’t seem to convey the services the company offered. Of course part of that is that businesses evolve over time. In any case, while the new site will probably still have a few kinks, it’s good to have something more representative of our software evaluation research and consultative services.

Competitive Conquest–Linux or Windows

In a recent article from Harvard Business School’s Working Knowledge, Sean Silverthorne does some Q&A with Ramon Casadesus-Masanell and Pankaj Ghemawat about their research on the competition between Microsoft and Free and open source software (FOSS). It’s detailed and raises issues about FOSS versus proprietary distribution in relation to user adoption.

The article notes that “By lowering the price of Windows, the demand for Linux shrinks to the point where Linux is not a threat to the survival of Windows.” If I understood correctly, I don’t think that their study was intended to look at issues outside the scope of their economic model. Thus, the following is not criticism but rather some extra thought on the matter. I think there could be other issues that might make Linux a threat to the wide-spread survival of Windows.

For example, perhaps this is unlikely, but if Windows exploits, viruses, etc. increased to the point where nobody could realistically use the operating system safely, I would imagine totally different sorts of reasons compelling people to adopt Linux, namely privacy or safety concerns. Some time ago, I presented an article on how I thought a lean OS delivery strategy could really impact user adoption. I don’t see Microsoft able to do this; I think it is something that only a FOSS OS could accomplish because of the nature of the FOSS development/community/business models.

And what does the study discussed in the article illuminate?

The article discusses FOSS “demand-side learning” in which the development cycle is shorter because users have the opportunity to improve the software by modifying the code or contributing ideas. This may give the impression that FOSS, by virtue of increasing demand-side learning, would displace the position of proprietary software. However, the study’s authors note that their economic model does not show that to be the case.

They point out that “…the value of an operating system depends critically on the number of users, traditional software has an advantage…” that is, its usage is already spread far and wide. This first-mover advantage seems critical, according to their model, in what would prevent Linux from overtaking Windows.
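To illustrate that feedback loop, here’s a toy adoption model of my own (emphatically not the authors’ formal economic model): each user’s utility mixes the OS’s intrinsic quality with the size of its installed base, and users drift toward whichever OS offers more utility. All parameters are made up for illustration.

```python
def utility(quality, installed_base, network_weight=0.6):
    """A user's utility mixes intrinsic quality with a network term
    that grows with how many others already run the OS."""
    return (1 - network_weight) * quality + network_weight * installed_base

def simulate(q_incumbent, q_challenger, share_incumbent=0.95, steps=50):
    """Each step, a sliver of users drifts toward whichever OS offers
    higher utility; the installed base then feeds back into utility."""
    s = share_incumbent
    for _ in range(steps):
        incumbent_wins = utility(q_incumbent, s) > utility(q_challenger, 1 - s)
        s = max(0.0, min(1.0, s + (0.1 if incumbent_wins else -0.1)))
    return s

# The challenger is technically better (0.9 vs 0.6), yet the incumbent's
# installed base keeps its utility higher: first-mover advantage locks in.
print(simulate(0.6, 0.9))                       # 1.0
# If a strategic buyer hands the challenger a large installed base up
# front, the same dynamics run the other way.
print(simulate(0.6, 0.9, share_incumbent=0.3))  # 0.0
```

Crude as it is, the sketch shows why the study can find both that Linux won’t overtake Windows organically and that strategic buyers doing wide-scale roll-outs could change the outcome: the installed-base term dominates the quality term.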

In spite of demand-side learning, technically better software, and cost advantages, they note that without strategic buyers (such as governments or large organizations that choose to do wide-scale Linux roll-outs), it’s not likely that Linux will overtake Windows. Because of some of those factors, they also note that Microsoft ultimately gains from people copying and distributing the Windows OS, even when MS doesn’t receive payment for the copies.

I also thought their points on societal welfare were interesting, in that they find “…a monopoly of Linux is always preferable…to a Windows monopoly…” but that it’s “ambiguous whether a duopoly Linux-Windows is better than a Windows monopoly.” I don’t think this takes into account issues like the importance of freedom within the context of modern technical societies. I would like to know more about what they considered in societal welfare.

Finally, among their recommendations for Microsoft, if it wishes to remain competitive (and I guess “remain” is the correct word, since we do see sizable Linux increases in demand-side learning, as well as the key strategic buyers they identified taking action), is that MS increase its demand-side learning. The thing is, how could Microsoft do that? The study recommends a number of methods. However, I don’t see a way for proprietary vendors such as Microsoft to seriously increase their demand-side learning enough to be competitive with FOSS communities, unless they themselves go FOSS.

Even though some sites have reported on this study from the perspective that Linux cannot overtake Windows, I think that so long as MS stays proprietary, the study points in Linux’s direction.


Update: Dana Blankenthorn at ZDNet commented on this study as well. Dana brings up some other ideas that weren’t addressed in the study, like “…the idea that open source isn’t filled with clever entrepreneurs…” which I suppose heads toward something I was trying to show: that there are outside factors, which may be unexpected and could contribute to the competition in wholly different ways.

However, Dana seems to argue that the Harvard article is filled with conjecture and overreaching conclusions. I didn’t interpret it that way. I had the impression that the Harvard conclusions were drawn for only a very specific set of parameters as defined in their economic model. I don’t believe the conclusion was that “Microsoft will always beat open source.” In fact, while I saw that quote in Dana’s commentary, I can’t find it in the original article.

Rather, the original article seems to provide conditions that could lead to either side gaining or losing ground. The article says “Ultimately, the authors believe, neither side is likely to be forced from the battlefield,” which is a rather different conclusion. The authors also hypothesized the following if MS set the price of Windows to zero: “…Thus, we conjecture that even in this case, there would be people developing and using Linux…” and I don’t believe that conjecture supports Blankenthorn’s interpretation of the article.

Why do I bring this up? Simply because I don’t think the original article was prophesying the future so much as examining what might happen under different conditions.