Management Versus Delivery
While almost everything a CMS does is lumped under the umbrella of “management” by default, the lifecycle of a piece of content can effectively be split at a hypothetical “Publish” button.
Everything that happens to content from the moment it’s created until the moment it dies is “management.” The subset of those things that happen to the published version of the content, from the moment it’s published onward, is “delivery.” The two disciplines are quite different.
Management is about security, control, and efficiency. It’s composed of functionalities like content modeling, permissions, versioning, and workflow. These are features that ease the creation of content, enable editorial collaboration, and keep content secure.
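To make those features a bit more concrete, here is a minimal sketch (in Python, using entirely hypothetical names; real systems are far more elaborate) of how a CMS might represent a content item with versioning, a simple permission check, and a workflow status:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A deliberately simplified, hypothetical content model: one "Article" type
# with a version history, a workflow status, and a crude permission check.

@dataclass
class Version:
    number: int
    body: str
    author: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Article:
    title: str
    status: str = "draft"  # workflow: draft -> in review -> published
    allowed_editors: set = field(default_factory=set)
    versions: list = field(default_factory=list)

    def edit(self, user: str, body: str) -> None:
        if user not in self.allowed_editors:
            raise PermissionError(f"{user} may not edit this article")
        self.versions.append(Version(len(self.versions) + 1, body, user))

article = Article("Annual Report", allowed_editors={"deane"})
article.edit("deane", "First draft of the report.")
article.status = "in review"
```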
Delivery is about optimization and performance. The features involved in delivery depend heavily on the capabilities of the CMS, and those capabilities are evolving quickly in the marketplace. Until recently, delivery simply meant making content available at a public location. Today, the modern CMS is highly concerned with the performance and optimization of the content it delivers.
In the commercial space we’ve seen a plethora of tools that enable advanced marketing during delivery. Features like personalization, A/B testing, and analytics have proliferated as different vendors try to set their systems apart. These features used to be provided by separate “marketing automation” software packages that operated solely in the delivery environment. More and more, these tools are being built into the CMS.
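As a rough illustration of what one of these delivery-side features involves, the sketch below (hypothetical, and not tied to any vendor’s API) assigns each visitor to an A/B test variant deterministically, so the same visitor always sees the same version of the content:

```python
import hashlib

# Deterministic A/B bucketing: hash the visitor ID together with the
# experiment name so the same visitor always lands in the same variant,
# without storing any per-visitor state.
def assign_variant(visitor_id: str, experiment: str, variants=("A", "B")) -> str:
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("visitor-42", "homepage-headline"))
```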
The unintended result is that core management tools have changed little in the last half-decade. These tools have reached maturity in many cases, and the focus is now clearly on marketing and optimization tools during delivery. Management is generally considered “good enough.”
Coupled Versus Decoupled
The “management vs. delivery” dichotomy manifests itself technically when considering the coupling level of a CMS. What hosting relationship does the management environment of a CMS have to the delivery environment?
In a coupled system, management and delivery occur on the same server (or farm of servers). Editors manage content on the same system where visitors consume it. Management and delivery are simply two sides of the same software.
This is an extremely common paradigm. Many developers and editors know of nothing else.
In a decoupled system, management and delivery are (wait for it) decoupled from one another. Content is managed in one environment (one server or farm) and then published to a separate environment (another server or farm). In these situations, the management functions are sometimes said to run on a “repository server,” while delivery of the content takes place on a “publishing server” or “delivery server.” Published content is transported to an entirely separate environment, which may or may not have any knowledge of how the content was created or how it is managed.
Fewer and fewer systems support this paradigm, and it’s normally seen in high-availability or distributed publishing environments, such as when a website is delivered from multiple servers spread across multiple data centers (though this may be changing, as we’ll discuss at the end of the book). It has the perceived benefits of security, stability, and some editorial advantage: editors can make large-scale changes to content without affecting the publishing environment, then “push” all the changes as a single batch when the content is ready (though this advantage is steadily finding its way into more and more coupled systems).
Actual technical benefits of decoupling include the ability to publish to multiple servers without the need to install the CMS on each (which lowers license fees, in the case of commercial CMSs), and the ability to publish to highly distributed environments (multiple data centers on multiple continents, for example). Additionally, the delivery environment could be running on an entirely different technology stack than the management environment, as some systems publish “inert” assets such as simple HTML files or database records, which have few environment restrictions.
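To give a sense of what publishing “inert” assets can look like, here’s a minimal, hypothetical sketch of a repository-side publish step that renders managed content into plain HTML files; the resulting directory could then be synced to delivery servers running any technology stack:

```python
from pathlib import Path
from string import Template

# Hypothetical publish step on the repository server: render content items
# into inert HTML files that carry no dependency on the management CMS.
PAGE = Template("<html><head><title>$title</title></head><body>$body</body></html>")

def publish(items: list, output_dir: str = "publish_target") -> None:
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    for item in items:
        html = PAGE.substitute(title=item["title"], body=item["body"])
        (out / f"{item['slug']}.html").write_text(html, encoding="utf-8")

publish([{"slug": "press-release", "title": "New Office",
          "body": "<p>We have opened a new office.</p>"}])
```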
The primary drawback to decoupling is that published content is separated from the repository, which makes “live” features like personalization and user-generated content more complicated. For example, accepting user comments is more difficult when those comments have to be transported “backward” from the delivery server to the repository server, and then the blog post on which they appear has to be republished (with the new comments displayed) “forward” to the delivery server.
To counter this, decoupled CMSs are moving toward publishing content directly into companion software running on the delivery servers that has some level of knowledge of the management CMS and can enable content delivery features. The result is a CMS that’s split in half, with management features running in one environment, and delivery features running in another.
Decoupled systems tend to be clustered on the Java technology stack. Some ASP.NET systems exist, but virtually no PHP systems use this paradigm.
We’ll discuss the differences between the two publishing models in Output and Publication Management.
Installed Versus Software-as-a-Service (SaaS)
More and more IT infrastructure is moving to “the cloud,” and CMSs are no different. While the norm used to be installation and configuration on your own server infrastructure, vendors are now offering hosted or SaaS solutions more often. It’s not uncommon for the software to be rented from the vendor and hosted in the vendor’s environment.
The purported benefit is a CMS that is hosted and supported by the vendor that developed it. Whether this provides actual benefit is up for debate. For many, “hosted” or “SaaS” just means “someone else’s headache,” and there are multiple other ways to achieve this outside of the vendors themselves.
Closely related to the installed vs. SaaS debate is whether or not the CMS supports multiple, isolated customers in the same environment. So-called “single-tenant” vs. “multitenant” systems are much like living in a house vs. an apartment building. Tenants of a multitenant system exist in the same shared runtime environment, isolated only by login. They occupy a crowded room, but each appears to be the only one there.
The purported benefit here is a “hands off” approach to technology. These systems are promoted as giving you instant access and allowing you to concentrate on your content, not on the technology running it. The trade-off is limits on your ability to customize, since you’re sharing the system with other clients.
We’ll discuss this dichotomy in greater detail in Acquiring a CMS.
Code Versus Content
The code behind a website is usually managed in a source code management system such as Git or Team Foundation Server. It’s usually tested in a separate environment (a test or integration server) prior to launch. Launching new code is usually a scheduled event. Depending on your IT policy, new code might have to have approved test and change plans, as well as failure and backout plans in the event that something goes wrong.
With code under source control, there’s always “another place” where it lives. The CMS installation where it’s executing and providing value is not its home; it’s just deployed there for the moment. If that copy was ever destroyed for some reason, it could be redeployed from source control.
Content, on the other hand, is developed by editors and lives in the CMS. In coupled systems, it’s often developed in the production CMS and just kept unpublished until it’s ready to launch. It might be reviewed via a formal or informal workflow process, but often isn’t otherwise “tested.” If an editor has sufficient permissions, it’s possible to make a content change, review it, and publish it all within the span of a few minutes with no oversight.
Content will almost always change vastly more often than code. An organization might publish and modify content several dozen times a day, but only adjust the programming code behind the website every few months. When this happens, it’s to fix a bug or fundamentally change how something on the website functions, not simply to change the information presented to visitors.
Code and content are sometimes confused because of the legacy of static HTML websites. For an organization that built its website with static HTML, an HTML file had to be modified for a single word to change. Thus, a code change and a content change were the same thing.
Decoupled content management can also blur the line between code and content. In a decoupled system, modified content is often published to a test sandbox where it’s reviewed for accuracy, then published to the production environment. The existence of an entirely separate environment is similar to how code is managed. Content starts to act like code.
In these situations, it’s sometimes mentally hard to separate the test environment for content from the test environment for code. You have two different testing paradigms, each with its own environment, each pushing changes into the production environment.
With a coupled CMS, by contrast, content changes without any code changing at all. The verbiage of a press release might be completely rearranged, but the template that renders it stays the same.
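A tiny, hypothetical example of that separation: the template below is code and would live in source control, while the press release text is content stored in the CMS. Editors can rewrite the text as often as they like without the template ever changing:

```python
from string import Template

# Code: a press-release template, deployed and versioned like any other code.
PRESS_RELEASE = Template("<article><h1>$headline</h1><p>$body</p></article>")

# Content: lives in the CMS and changes far more often than the template does.
content = {
    "headline": "Acme Corp Opens New Office",
    "body": "Acme Corp announced today that it has opened a new office in Springfield.",
}

print(PRESS_RELEASE.substitute(content))
```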
Organizations moving from static websites or decoupled systems sometimes have trouble adjusting to the idea of a “virtual” test/staging environment, where unpublished content is created on the production server and simply isn’t visible while it’s awaiting publication. Their past correlation of content with code tempts them to treat content the same way and intermix the two concepts.
Code Versus Configuration
Many features in a CMS can be implemented either by (1) developers writing code, whether core code or templating code, or (2) editors and administrators working from the interface.
Developers have complete freedom, up to the limits of the system itself. There’s generally no functionality that is not available from code, as code itself is the core underpinning of the system. The only limitation on a developer is how well the API is architected to allow access and manipulation. But even with a poorly implemented API, a developer has the full capabilities of a programming language to get around shortcomings.
Editors, on the other hand, have access to only a subset of what a developer can do from code. They are limited to the functionality that has been exposed from the interface, which varies greatly depending on the system. Some systems allow minor configuration options to be set, while others have elaborate module and plug-in architectures that allow new functionality to be created from the interface on a running, production system.
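As a contrived sketch of the difference (again, a hypothetical API rather than any specific product): an administrator might toggle simple settings from the interface, while a developer working in code can express logic that would be hard to reduce to checkboxes and dropdowns:

```python
from dataclasses import dataclass
from datetime import date

# Settings an editor or administrator could plausibly change from the
# interface: simple, discrete values, no deployment required.
settings = {"comments_enabled": True, "results_per_page": 20}

@dataclass
class Page:
    section: str
    published: date

# The same concern expressed in code by a developer: arbitrary logic that
# the interface was never designed to capture.
def comments_enabled(page: Page, today: date) -> bool:
    if page.section == "press-releases" and (today - page.published).days > 365:
        return False  # quietly close comments on old press releases
    return settings["comments_enabled"]

print(comments_enabled(Page("press-releases", date(2020, 1, 1)), date.today()))
```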
Why wouldn’t a system expose all functionality from the interface? Sometimes it’s because a particular setting or piece of functionality is changed too infrequently to justify the effort of building an interface for it. Other times it’s because the person using it is more likely to be a developer who would like to change it from code, to ensure it gets versioned as source code and deployed like other code changes.
However, the most common reason is that the feature is simply too complicated to manage from an interface. The ability to write code allows for the clear expression of extremely complex concepts. Developers are used to thinking abstractly about information and expressing those abstractions in code. Some features are simply too complex to build a management interface around.