Most organizations that struggle with content management blame their CMS for their woes. The only hope for relief they see is to replace the platform and start over. But without fully understanding the true nature of the issues, an expensive CMS replacement project is unlikely to improve anything; it could even make matters worse. It is highly likely that the CMS doesn't need to be replaced and that better results can be achieved by reconfiguring the current platform and adjusting roles and processes. As Ian Truscott so eloquently confesses:
Having seen plenty of examples over the years of web content management replacement projects and a common perception that any problem is a tools problem. Also, I confess having been in sales situations as a vendor that have preyed on the fact that an organization perceived that their incumbent product couldn’t do x, y or z when, in truth you can be fairly sure something has gone wrong along the way.
Before you go into CMS selection mode, consider a content management assessment that takes a holistic view of your content management tools and processes. It is highly likely that you could be using your current tools more effectively. If that is the case, your organization can solve its content management problems with much less expense and risk. A content management assessment covers much of the same ground as a CMS selection but leaves open the possibility that the best platform may be the one that you are already on. If it turns out that you do need to replace your platform, you have a head start on a selection because you will already understand your requirements and will have already made some operational improvements. Otherwise, you will have a solid list of enhancements that you can use to tune your current tools and processes and achieve better results.
This is the first in a series of articles that explains different aspects of a content management assessment. The following articles will dive into questions one should ask in a content management assessment and how to get to the correct answers. These topics will also help new implementations avoid problems before they occur. Stay tuned!
I recently read Richard Thompson's excellent post "Is content strategy biased towards the written word?" I have to admit that during my read, and during the many moments of reflection that followed, I found myself thinking "but there is a good reason to focus on text as the primary format of content." Other content formats are best used to enhance the written word. Heck, I even wrote a blog post extolling the superiority of the written word over other formats. All the while, however, I was nagged by the possibility (rather, probability) that I was projecting my own personal bias.
As I mentioned in my post, I don't have the patience to wait for information to be spooled out at a pace that I cannot control. I like to scan and search, and you can't do that very easily with video or audio. But I also agree with Rich's point that many of the most popular websites are filled with non-text content, so I would be foolish to deny its appeal. But then I think that these sites are primarily for entertainment purposes, and when I am in leisure mode, I am not so goal oriented. I like experiences like movies where I can sit back and let a story unfold.
As you can see with my flip-flopping, I haven't made up my mind on this issue but, like Rich, I am starting to be more aware of the circumstances when non-textual content would be preferable to text. Ideas that come to mind are:
Any form of entertainment (many people are more likely to watch a movie or listen to an audio book than read the book on which it was based)
Instruction on topics where technique is critical such as hitting a baseball or disarming a nuclear warhead
Things that you need to see to believe like pictures of a flooded town.
Can you think of other times when non-text content can go further than the written word? One thing is for certain; when you decide that you can communicate better with non-text content, you better be sure. Many organizations struggle to maintain their text-based content and non-text content is vastly harder to manage. You need more sophisticated skills and tools to update a video or audio track — even a picture. But maybe that is just my own bias peeking out again.
One of the main benefits of using a coupled (aka "frying") web content management system (WCMS) is that you get a web application development framework with which to build dynamic content-driven applications. Like nearly all modern web application development frameworks, a coupled CMS provides an implementation of the MVC (Model-View-Controller) pattern. For the less technical reader, the MVC pattern is used in just about all software that provides a user interface. The essence is that the model (the data), the view (how the information is presented and interacted with), and the controller (the business logic of the application) are separate concerns and keeping them separate makes the software more maintainable. I will let Wikipedia provide a further explanation of MVC.
The MVC implementation that comes with a typical WCMS is less flexible than a generic, all-purpose web application framework. Your WCMS delivery tier makes a lot of assumptions because it knows that you are primarily trying to publish semi-structured content to some form of document format — probably an HTML page, but possibly some XML-based syndication format. Your WCMS knows roughly what your model looks like (it is a semi-structured content item from the repository that has only the data types that the repository supports); it has a certain way of interpreting URLs; and the output is designed to be cached for rapid page loads. Those assumptions are handy because they mean less work for the developer. In fact, most of the work on a typical WCMS implementation is done in the templates. There is hardly any work done in the model or controller logic.
But there are times when the assumptions of the CMS are broken. Anyone who has implemented a relatively sophisticated website on a CMS has had the experience of either overloading or working around the MVC pattern that comes with the CMS. The approach that I want to talk about here is what I call the PAC (Placeholder Application Controller) pattern. In a nutshell, this pattern de-emphasizes the roles of the model and controller and overloads the view (template) to support logic for a mini-application. The content item goes from being the model to a placeholder on the website. The controller is used more like a switchboard to dial up the right template.
Here is a common example. Let's say that you are building a web site for an insurance company. Most of the pages on the site are pretty much static. But there is one page with a calculator where a visitor enters some information about what he wants to insure and gets back a recommended coverage amount and estimated premiums. It would be pretty silly to try to manage all the data that drives the coverage calculator as content in the CMS. Instead, you would probably write the calculator in client-side JavaScript, copy it into a presentation template, and then assign that presentation template to a blank content page with the title "Coverage Calculator." The Coverage Calculator page in the content repository is really just a placeholder that gives your JavaScript application a URL on the site.
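To make the example concrete, here is a minimal sketch of what such a calculator script might look like. Everything here is invented for illustration — the function name, the 20% buffer, and the 0.5% premium rate are hypothetical, not real actuarial logic:

```javascript
// Hypothetical coverage calculator that would live in the presentation
// template assigned to the "Coverage Calculator" placeholder page.
// All names and rates are illustrative only.
function estimateCoverage(propertyValue, deductible) {
  // Recommend coverage at replacement value plus a 20% buffer.
  const recommendedCoverage = propertyValue * 1.2;
  // Toy premium model: 0.5% of coverage per year, reduced by the
  // share of risk the customer takes on through the deductible.
  const annualPremium =
    recommendedCoverage * 0.005 * (1 - deductible / recommendedCoverage);
  return { recommendedCoverage, annualPremium };
}
```

In the template, this function would simply be wired to a form on the placeholder page — reading the visitor's inputs and writing the estimate back into the page. None of that data ever touches the content repository.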
To a lesser extent, home pages often implement the PAC pattern. In this case, the home page might be a simple empty placeholder page that is assigned a powerful template that queries and features content from across the site. When the controller grabs the template and model, it may only think that it is rendering the content that is managed in the home page asset. Little does the controller know, the template is going to take over and start to act like a controller — grabbing other models and applying other templates to them.
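That takeover can be sketched in plain JavaScript rather than any particular WCMS template language. The repository API, field names, and teaser markup below are all invented for illustration:

```javascript
// Hypothetical in-memory repository standing in for the WCMS content store.
const repository = {
  items: [
    { type: 'article', title: 'New Product Launch', date: '2011-01-10' },
    { type: 'article', title: 'Quarterly Results',  date: '2011-01-05' },
    { type: 'press',   title: 'Press Release',      date: '2011-01-02' },
  ],
  query({ type, limit }) {
    return this.items.filter(item => item.type === type).slice(0, limit);
  },
};

// A teaser "template" the home page template applies to other content items.
const teaserTemplate = item => `<h2>${item.title}</h2>`;

// The "template" for the empty home page placeholder. The real controller
// thinks it is rendering the home page asset, but this template acts like
// a controller itself: it grabs other models and applies other templates.
function renderHomePage() {
  const featured = repository.query({ type: 'article', limit: 2 });
  return featured.map(teaserTemplate).join('\n');
}
```

The controller's only contribution was routing the home page URL to this template; everything interesting happened inside the view.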
Placeholder Application Controller is one of those patterns that, once you think about it, you realize you use all the time. It is convenient and practical, but be careful with it because it is easy to get carried away. The main risk of the PAC pattern is that you are going against the grain of the WCMS architecture. Templates are supposed to be for formatting only. You may be pushing the templating language farther than it was intended to go, and your code may become unmanageable. You also may be short-circuiting the security controls provided by the controller. Some WCMS platforms have a pluggable architecture that allows third-party modules (programmed in something other than the template language) to step in and play the roles of model, view, and controller. This helps keep the architecture cleaner, but there will always be some limitations on how these modules are allowed to work. After a certain point, you will be better off going with a generic web application framework that affords you more flexibility and just using the WCMS to publish content into your custom web application. But that is a much larger undertaking.
Every organization should have a good hard look at their content strategy and bring in help if they need it. However, doing any kind of content strategy work (using external or internal resources) is a waste of time without first establishing a solid foundation.
Your content strategy cannot succeed unless:
You have information that a desirable audience wants
In order to create content, you need information. In order to create content that your target audience wants to consume, you need to have information that is of interest to them. A steady stream of irrelevant press releases cannot be the basis for a good content strategy. Nor can a random Twitter stream of links and retweets. The good news is that any organization with customers (product purchasers, potential buyers, subscribers, members, etc.) probably has information that is of interest to them. The challenge is figuring out what that information is, which requires an understanding of the audience.
The information you have is capable of driving desired behaviors
While your audience may find it intriguing, simply airing your dirty laundry isn't very strategic. For every bit of information you publish, think about expected and desired responses. If you work for a product company that has launched a new product, the content should contain information about who should buy the product and why. If you have no information that would compel a customer to buy your product (that is, your product stinks), there is really nothing worth saying and no content strategy can solve that problem. Lying is not a good content strategy. If you have information about a defect in your product, the content you produce should support customer loyalty by resolving the issue. If publishing is your business, your content should establish your organization as the authority on the topic (and topics like it) so that the audience subscribes and/or returns.
I call these the "First Principles of Content Strategy." Any content strategy work should be grounded on these principles. If you hire a content strategy consultant or staff an internal resource, testing these assertions first will help you start off in the right direction and save time. If your answer to either of these assertions is "no," put that content strategy project on hold. You have more pressing issues to address.
The other part of the announcement that intrigued me was why they went to the Eclipse Foundation and not the Apache Software Foundation. After all, Apache already has Chemistry, which is a collection of projects that implement the CMIS standard. My first reaction was that Nuxeo thinks the ASF is broken after its tiff with the Java Community Process. That may be so, but I think there are better explanations. First, Chemistry already has a number of projects and is still in the incubation stage. Perhaps Nuxeo feels that these aspects would dilute its influence. Second, Nuxeo has had a long relationship with Eclipse. Back in 2006, Nuxeo announced the Apogee project. Apogee is an Eclipse RCP (Rich Client Platform) application designed to be a universal thick CMS client. Apogee never really caught on in the general marketplace, but I know that Nuxeo has been very successful implementing it for their customers as an efficient user interface for their own CMS. With the advantages of influence and the Eclipse Foundation's community and resources, Nuxeo Core may have great potential as the Eclipse Enterprise Content Repository Project (ECR).
Drupal represents a middle ground between framework and CMS that we’ve chosen not to take. Drupal is far more capable than a CMS like WordPress, but also much less flexible than a pure framework. But more importantly, the facts that Drupal isn’t object-oriented, isn’t MVC/MTV, doesn’t have an ORM, and is generally less flexible than a pure framework, not to mention our preference for working in Python over PHP, all contribute to our decision not to use it.
I reached a similar conclusion on a recent project.
There is a mini-meme going around about the fragility of information in a digital society. The argument goes that, because we manage information in proprietary digital formats and on rapidly changing devices, we are increasing the risk of losing everything. Archeologists of the future will not be able to analyze MS Word 2010 files on old hard drives the way archeologists of today can read ancient texts on papyrus and stone. One of the better posts is Cheryl McKinnon's OpenSource.com article "From information overload to Dark Ages 2.0?" and the video embedded below.
I call this a mini-meme because it is not stirring up Y2K-bug-grade hysteria. I guess you need images of planes falling out of the sky to get that level of reaction. The concern is more like the constant low-level anxiety that all CIOs feel about the information overload problem — "this is a big problem that we are going to fix after we take care of all the urgent matters on our plate."
We do need to curate and preserve information that is important. We need to do it on an individual level so our descendants can know where they came from, and we need to do it on a societal level to teach future civilizations what we know. The problem is that we have a lot of crap to sift through. It used to be that producing information assets was hard and expensive. If you got an idea to create something, you would consider the effort and cost it would take to produce and distribute it, and then weigh that against the value. If something was created, you could assume that someone thought it was important. Now... not so much (cue the LOLCats).
In ancient times you had to have near-deity status for someone to go through the trouble of carving your likeness in stone. The cost of portraiture has been steadily going down to the point where now people don't even know when their picture is being taken. Remember, not long ago (before digital photography became the norm), you would really hesitate before snapping the shutter because each picture would cost you money whether or not it even came out? When I got my first digital camera, I would snap away but delete the photos off the card to save space. Now I don't even bother to delete the bad ones. You can even go onto Facebook and see that a grainy blur has been tagged as you.
The same thing happens in the office. When I came of age professionally, one of my first responsibilities was to produce and distribute a productivity report. This meant running the report, printing it, making copies, and then putting them in physical mail boxes. It took the better part of a morning. If there were even an outside chance that the report wasn't useful, that task would have disappeared. Ten years earlier, I would have had to type up the report and run it through a mimeograph. It would have taken more than a day and wouldn't have been worth it. Now email inboxes are flooded with automatically generated reports that nobody reads.
It will be difficult to go through all the digital information we are producing and decide what is worthy of preserving for the long term. We will probably put off this effort until some sort of scarcity or scare brings the threat of loss to the forefront. When we do get around to it, however, I think we are going to learn about ourselves in the process.
The one aspect that doesn't quite sit right with me is the implication that less investment in the product translates into less risk for the customers. In my opinion, there is extreme risk in products that are getting minimal development. Stasis can be the first sign of decline. The supplier (be it a commercial software vendor or an open source development community) may be losing interest in the product and reducing investment. You could say that this risk is captured in the vendor restructuring dimension as downsizing the group managing the product. Still, I think that risk on the product development axis is higher at both extremes. In the example of the diagram, if Microsoft invested even less in their WCM platform, I think risk would increase. Anyone remember Microsoft Content Management Server 2002?
BTW, I tried to add this as a comment to the post, but IntenseDebate appeared to be swallowing my comments :)
I have a complicated relationship with personal information management (PIM) tools. I love PIM software and services that promise to remember everything that I once knew. I am tempted by every new information gadget that comes down the pike. But I have also been burned lots of times. I spent hours on corporate intranets documenting my knowledge only to have them shut down. I used proprietary software like The Brain, which became inaccessible when I switched computing platforms (note: The Brain now supports Linux and Mac OS X as well as Windows). My Google notebooks are now barely supported by Google. And now we have news of Delicious's uncertain future.
Because of these experiences, I have been fighting my urges to use services like Evernote, SpringPad, Yojimbo, etc. Instead, I have been sticking with humble text files. In an earlier post, I described how I am using TextMate as a blogging tool. I have started to use it as a knowledge management tool as well.
I don't pretend that I have the features that products like Evernote do, but here is what I can do:
Create notebooks with multiple pages by using TextMate's project feature.
Organize pages into folders.
Create todo items and other tags that I can summarize by using the ToDo bundle.
Search using "Find in Project" or Spotlight.
Collaborate with others using GitHub or another code hosting service.
Store and organize binary files like diagrams created in OmniGraffle.
The truth for me is that the greatest benefit of any PIM tool is simply the act of recording and organizing the information. These activities help me remember and process information into actionable knowledge in my head. You can do that with any tool. The second most important aspect is search and recall. This is where the open format of text files really excels. It would be really frustrating to try to retrieve some information but not have access to the software to consume it.
While I generally don't enjoy 2011 prediction posts, I really loved James Hoskins's article "(20)11 predictions from the CMS coal face." I found it thoughtful, pragmatic, and meaningful. One of my favorite predictions is #8, Personalisation falters again. I have had similar challenges with personalization. Most content organizations do not have the maturity, discipline, and energy to fully leverage this kind of technology. Interest and investment inevitably subside after irrational expectations are not immediately gratified. Investment is higher than expected: you have to manage more complex technology, you need to develop more content, and you need to test a lot more. Return is lower than expected: most companies don't even know how to measure the value of the return.
There are, of course, exceptions. The importance of display and placement is ingrained in retail culture (online and offline), and small tweaks can lead to big returns. Customer extranets are inherently personalized, so I am not including them here.
Traditional media companies (magazines, newspapers, and television/radio stations) would like to personalize but they don't have quite the upside eCommerce sites do; on a media site, an extra click means a few more ad impressions rather than a potential sale. Most media clients find that the volume and turnover in content makes the cost of tuning personalization greater than the return. Showing articles that are related to the current article brings the biggest bang for the buck. Marketing sites are more campaign driven than personalized; you put up different pages with different URLs rather than creating a personalized user experience.
As problematic as it has been in the past, I think that succeeding with personalization is only going to get harder. First of all, multiple device support is going to eat up your development resources, so you will have even less time to test and tune complex personalized views. Second, unless you are lucky enough to be one of the premier social networking sites, most of your audience will only be on your site for a page or two — if at all. Most visitors will follow a deep link to your site, scan the content, then go back to the conversation about it. Some visitors will take a snapshot of the page using a service like Read It Later or Instapaper. Some visitors will just read the conversation and never click through. The upshot is that most visitors will not hang around long enough to implicitly or explicitly build a profile that can drive personalization logic.
I think that the greatest potential for personalization will be to use a service like Facebook Connect, as Levis is doing. When a visitor comes with Facebook Connect, they bring some additional context that can be used to drive personalization logic. Most web-savvy people find Facebook Connect creepy (and will try to avoid it), but the vast majority of web surfers are either unaware of or unconcerned with privacy issues. I would look for a major uptake of Facebook Connect. In particular, I expect to see recommended content display components that can be used in different presentation channels. In the near term, however, I think the greatest returns will come from the "make every page a home page" strategy, where each page promotes content that is related to the current page. That's not personalization. That's just good content re-use.