Why Categorizing, Comparing, Rating, and Evaluating a CMS is Really, Really Hard

By Deane Barker

Every once in a while, I get the idea of coming up with a big list of CMS, then categorizing them. “It will be the master list!” I tell myself. (Yes, because no one has ever tried that before, right?)

It never works well because categorizing and comparing CMS is hard. For the sake of completeness, I’m going to explain some of the problems here to remind myself of them in case I ever try to embark on this again, and perhaps dissuade you from doing the same.

Now might be a good time to go read a prior post: Checking the Box: How CMS Feature Support Is Not a Binary Question. After reading that, you’ll hopefully realize that any consideration of a CMS is inherently difficult. Do you consider it out of the box? Do you allow basic configuration? Advanced configuration? Add-ons? Plugins? (looking at you, Drupal…) Expected development? Custom development?

It’s hard to draw a clear box around a CMS in order to evaluate it. A CMS as it’s delivered from the vendor and a CMS as it’s typically used are usually two very different things. Plus, the experience and competence of the integrator will have a huge impact on your system’s final capabilities – Episerver as Blend implements it and Episerver as a first-time integrator would implement it are going to be quite different.

But, for the sake of argument, let’s say we’ve somehow leveled that out and we now have some acceptable standard of build-out and implementation with which to compare systems.

So, now, how do we do that?

Well, we could do a feature-level comparison, a la CMS Matrix (not linking because I don’t like it). That seems like the most obvious plan, but I’ve already made my feelings on this method clear. Read the prior post I mentioned, or this one: Five Tips to Getting a Good Response to a Content Management RFP.

[…] in content management, there are few things that are either yes or no […] it’s often a question not of whether a feature exists, but how it works.

Even if we come to some agreement on how to decide if Feature X exists, it’s still tough. Just because two different systems implement workflow doesn’t tell the whole story – workflow in one of them might be elegant, intuitive, and powerful, while the other implementation is awkward, confusing, and anemic. Yes, they both technically have it, but one sucks and the other is awesome. How do you quantify that?

I’ve thought it might be helpful to try and place systems on a “range” for a particular feature or attribute. So, one of our attributes could be workflow, and the range would go something like this:

  1. No workflow. Wherever content is created, it is automatically published, and edits go live immediately.

  2. Content can be saved as “draft” then “published” at a later time.

  3. Published content can be moved to an “unpublished” state, effectively removing it from public view without deleting it.

  4. Content can be moved to a “Pending Approval” state, where other users are notified for review. When all users (or a specified fraction) have approved, the content is published.

  5. etc.

So, simple functionality is at the top, getting more and more complete as we move down the range. The idea is that you would evaluate a CMS to see “how far downrange” it gets, and then plot where it lands as a point between two extremes.

Sadly, the problem is that a system might not implement functionality in the way you expect: it might have a hole pretty far up the range (you can’t “unpublish” content, for instance) but fulfill a bunch of stuff below that hole. How do you handle that?
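To make that concrete, here’s a minimal sketch in Python of the “downrange” idea. The CMS names and their support sets are hypothetical, and the levels are the ones from the range above:

```python
from enum import IntEnum

class WorkflowLevel(IntEnum):
    """Points on the workflow range above. (Level 1, "no workflow,"
    is just the absence of all of these.)"""
    DRAFT = 2             # content can be saved as draft, published later
    UNPUBLISH = 3         # published content can be pulled from public view
    PENDING_APPROVAL = 4  # other users review and approve before publishing

# Which levels each (hypothetical) CMS supports.
support = {
    "CMS A": {WorkflowLevel.DRAFT, WorkflowLevel.UNPUBLISH},
    "CMS B": {WorkflowLevel.DRAFT, WorkflowLevel.PENDING_APPROVAL},  # no UNPUBLISH!
}

for cms, levels in support.items():
    furthest = max(levels)  # "how far downrange" it gets
    holes = [lvl.name for lvl in WorkflowLevel if lvl < furthest and lvl not in levels]
    print(f"{cms}: reaches {furthest.name}, holes at {holes or 'none'}")

# CMS A: reaches UNPUBLISH, holes at none
# CMS B: reaches PENDING_APPROVAL, holes at ['UNPUBLISH']
```

A single point on the range can’t represent CMS B honestly: it “reaches” the furthest level while missing one beneath it.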

This is when you start thinking that you just need to come up with a master menu of functionality, from which you can select what a CMS implements or not.

And…we’re back to CMS Matrix.

But what if we approached it differently, and made it more granular? What if we built a list of functionality so granular as to be binary, so that a CMS has no choice but to either (1) support, or (2) not support each feature? We’d break larger features apart into smaller “featurettes”:

  • New content can be saved without being published.

  • Changes to content can be saved without being published.

  • Published content can be moved to an unpublished state without being deleted.

These are pretty granular, and we might be onto something here. What I think we’re trying to do is atomize features until there’s no chance a CMS could fulfill just a fraction of one – we’re trying to apply the First Normal Form to our feature list, really.

The First Normal Form is predicated on breaking down elements to the smallest level that’s reasonable.

[…] values in the domains on which each relation is defined are required to be atomic with respect to the DBMS. […an atomic value] cannot be decomposed into smaller pieces by the DBMS (excluding certain special functions).

But how granular do you get with it? It would become apparent pretty quickly that the list would have to be breathtakingly long to encompass the entire feature set of CMS. Are we just creating CMS Matrix, writ large? (Answer: yes)

Assuming we had this, it would probably be so extensive as to be unworkable. The only way to derive value out of it would be to roll featurettes up into larger aggregates. So, we take 40 or so of the tiny featurettes and roll them up into an overall score for “workflow.”

To do this, we have to assign value to the featurettes. Since not all 40 will be created equal – some are core, some are esoteric – we have to weight them, giving each “points” that roll up into a “score” compared against the maximum for that aggregate.
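Here’s a rough sketch of that rollup in Python. The featurettes echo the examples in this post, but the point weights are numbers I’ve invented purely to show the arithmetic:

```python
# Hypothetical point weights; in reality, choosing these is the hard part.
workflow_featurettes = {
    # featurette: (weight, supported by this CMS?)
    "new content can be saved without being published":  (10, True),
    "changes can be saved without being published":      (10, True),
    "published content can be unpublished, not deleted": (8,  False),
    "drafts can be previewed as they will look live":    (6,  True),
    "arbitrary code can be executed on publish":         (3,  False),
}

earned  = sum(weight for weight, supported in workflow_featurettes.values() if supported)
maximum = sum(weight for weight, _ in workflow_featurettes.values())

print(f"Workflow score: {earned}/{maximum} ({earned / maximum:.0%})")
# Workflow score: 26/37 (70%)
```

The sum itself is trivial; every contentious decision hides inside those weight values.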

And…we’ve just opened another massive can of worms.

Consider these two sub-concepts of workflow:

  1. New content can be saved as draft and previewed as it will look when published.

  2. Arbitrary code can be executed when a page is published.

In general, I’m going to say that the first option there is more universally applicable and important to most people, so…more points? But the last item is more powerful, and when you need it, you really need it, so…more points for that?

You get my point here: the points need to indicate the relative importance of each sub-concept, and who chooses that? How important a feature is cannot be universally quantified – what you want out of a CMS and what I want out of a CMS may not be the same thing.

I’ve discussed this before in The Fallacy of the Best CMS.

All content management systems have sweet spots – those requirements for which they were designed. [you have to match] a CMS with some requirements. This is the only way you can ever approach a question like “what is the best CMS?”

Without some requirements and a clear target of what we’re trying to do, I can’t assign a relative value for a feature. I can say whether it exists or not – a binary value; yes or no – but I can’t say how important that fact is to your project. The only point system that matters is the one created from your requirements. (Almost; see the end of this post for a qualification on that point.)
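To see what that means in practice, here’s a toy sketch: identical yes/no feature data, scored against two different sets of requirement weights (both entirely made up), produces opposite winners:

```python
# Binary feature support for two hypothetical systems.
features = {
    "CMS A": {"draft preview": True,  "code on publish": False},
    "CMS B": {"draft preview": False, "code on publish": True},
}

# Two projects, two made-up sets of requirement weights.
marketing_project = {"draft preview": 10, "code on publish": 1}
developer_project = {"draft preview": 2,  "code on publish": 10}

def score(cms: str, weights: dict) -> int:
    """Sum the weights of the features this CMS supports."""
    return sum(w for feature, w in weights.items() if features[cms][feature])

for name, weights in [("marketing project", marketing_project),
                      ("developer project", developer_project)]:
    best = max(features, key=lambda cms: score(cms, weights))
    print(f"For the {name}, the 'best' CMS is {best}")

# For the marketing project, the 'best' CMS is CMS A
# For the developer project, the 'best' CMS is CMS B
```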

There’s also a point when you realize that a lot of the process of evaluation is subtle, vague, tacit, and predicated on experience. You can’t know X without knowing Y. Any statement assumes prior knowledge of a dozen other statements.

For example, when I’m talking with colleagues in the industry, the suffix “-ish” gets used a lot.

  • “Isn’t CMS X kinda platform-ish?” (what’s a platform, and what does it mean as opposed to something else?)

  • “Do you remember that decoupled-ish thing Vendor X was trying to do last year?” (what is decoupled?)

  • “They’ve tossed in a lot of marketing-ish features in this latest version.” (what did the last version look like?)

Things rarely exist in a purely binary form. Most everything sits at some point on a range between extremes, or exists as a variable in relation to some other absolute thing, and you have to know what those extremes are or what that other thing is to make sense of where the point is and what it means for the CMS you’re talking about.

Related to this, a lot of this industry is described in terms of other systems:

  • “CMS X does less formal modeling than CMS Y.”

  • “The workflow in CMS X is just like CMS Y.”

  • “They took that one thing from CMS Y and made it enterprise-ish.”

Note that none of that makes sense unless you’re familiar with CMS Y. Without that base of knowledge, you have no point of comparison.

And there’s one last thing that’s subtle, but important –

What you often want to do is read between the lines of a straight review and get down to the…tone of a system. Instead of specific features, what you often want to know is: what kind of system is it, at its core? This is something that’s often not easily quantified. It’s a quality that lurks in the gaps.

For example, we just worked with a system that did everything from the interface. There were no code files. There was no templating language as a developer would normally understand it (just token replacement). And all of the navigation controls were rolled up into pre-built, configurable elements that you had to work around.

We can individually quantify all these items – we can break them apart, assign points, put them on a range, whatever. But we still wouldn’t have gotten down to the core essence of this CMS and the really important thing you needed to know – that this CMS was not really designed for system integrators or hardcore developers, but rather was designed for end customers.

Additionally, it was what I’ve come to call an “exploratory” system. It didn’t wrap itself around hard, set requirements. Rather, it favored end customers who would explore its capabilities, find things it did particularly well, and then plan their implementations around those capabilities. (And, lest I sound too negative, it did this exceptionally well, and has a loyal end-user base that loves it for this.)

Tell me – how do you quantify that? How do you plot that on a range? How do you assign that a point value? How do you check that box?

It’s a subtle point (or, really, a collection of subtle individual points rolled into an even subtler aggregate point), but it’s arguably the single most important thing I could have learned about the CMS prior to deciding whether or not to work with it.

So, let’s review all the problems that present themselves when we try to categorize, rate, and compare CMS.

  1. How do you determine in which state to compare the CMS? Out of the box? Configured? Integrated?

  2. Do we compare an average implementation or an expert, one-in-a-million integration?

  3. How do we compare features? Just that they exist? Or do we qualify how well they’re done?

  4. How granular do we get with a feature comparison? To what extent do we break features apart into sub-features and compare them?

  5. How do we assign some universal relative importance to feature X when every user needs something different?

  6. How do we account for background, tacit knowledge or lack thereof? How do we position a CMS feature on a range when not everyone knows the extremes? How do we relate one CMS to another when not everyone is familiar with both of them?

  7. How do we read between the lines to convey the zen of a particular CMS? How do we quantify the critical intangibles which seem obvious only in hindsight?

So, where does this leave CMS shoppers? What do you do about this? If truly, fairly, and effectively evaluating a CMS is your problem, how do you do it well?

It’s tempting to say, “ignore everything not related to your own requirements, because that’s all that matters.” This isn’t bad advice on the surface, but you also have to be concerned about unwritten things you may want to do in the future, about reading between the lines of your requirements, and about asking end users the questions that maybe haven’t been asked. In this sense, even your own requirements might lead you down a bad path and paint you into a corner.

The bottom line: you need experience you don’t have. You need someone who has been down this road a hundred times before, yet has no financial investment in your decision.

For anything beyond a trivial CMS selection process, get a consultant. Find an unbiased analyst and have them guide you through this process, bringing all their experience to bear on it. They’ve likely forgotten more about CMS than the average buyer will ever know, and if they have a modicum of skill, they can explain hundreds of aspects of this process that have never even occurred to you.

I’ve worked with several analysts over the years in selection processes (I was a vendor, invited to pitch to their client), and I have never once met a client who regretted the expense. Even after the CMS was long since implemented, they still say that hiring a CMS selection consultant was one of the best investments they ever made.

After reading this post, I hope you can understand why.
