Content Here

Where content meets technology

Feb 15, 2010

How I use Twitter for Work

[Image: Publishing Decision Tree V2, originally uploaded by sggottlieb]

I just read Philippe Parker's thoughtful response to Janus Boye's provocative post "How I use Twitter for Work". Both these articles, plus my recent experience at PodCamp Western Mass, made me a little more conscious of my strategy and techniques for social media. As you can see from this geeky flow chart, I have put some thought into what I publish where. But I had thought less about who to follow.

My official Twitter policy was to follow only people who "inform and/or entertain me." Because I use Twitter mainly for work, my bias certainly leans toward the "inform" side. Although I do appreciate a good snark once in a while, I have un-followed people who fill the timeline with mostly personal stuff. If I found myself automatically skipping over someone's tweets because I was expecting something mundane, I un-followed him/her. If a Facebook friend re-published their Twitter stream into Facebook, I un-followed him/her on Twitter. These tactics kept my following count at a manageable 150. By manageable, I mean that I am not overwhelmed by the volume of updates, although I don't go back and read every tweet when I am away from Twitter for an extended period of time.

At PodCamp, I finally learned the value of lists. By using private lists for work, friends, fun, and news, I can follow more people but handle the traffic differently. When I am really busy, I just track my work list and my @replies. I glance at my friends and fun lists when I have more time but I never go back more than a few hours in the timeline. My news list takes the place of personal portals for the day's highlights. Since sorting this stuff out, my following count has grown to 164 with no real impact on time consumption.

The biggest change is in how I use RSS. With my new Twitter strategy, I check my reader less frequently and am able to skip over posts that I already found on Twitter. At this point, Twitter brings me timely posts (either because they are news or because everyone is talking about them) more quickly. The un-tweeted RSS entries are still important to me for general learning and background knowledge. I expect that, over time, people will promote everything they write on Twitter. This has already happened with sites like CMSWire, and now FeedBurner gives you the option to automatically tweet every entry in your RSS feed. When a Twitter feed becomes identical to the RSS feed, I un-follow or unsubscribe from one of them, depending on how timely the information tends to be.

This system is working well for me now but I am sure that it will continue to change as the medium evolves. I am interested in learning other people's techniques. The tag #howiusetwitter seems appropriate and free.


Feb 11, 2010

Does "Intranet" Need a New Name?

James Robertson has an excellent post, "Future principle: it’s more than the intranet," where he summarizes a movement to replace the term "intranet" with a word that reflects what an intranet could be. To quote:

There are some that would like to dump the “intranet” name, as it’s associated with the “old” vision of intranets as a publishing platform, a dumping ground for documents, and a place for the CEO to post his thoughts.

This narrow vision of the intranet must certainly die. In the process, intranet teams need to go from being custodians of an internal website, to facilitators for business improvements. In many ways, the word “intranet” has too much baggage, and is an anchor for much-needed changes.

I agree that many people hear the word intranet and immediately think "dumping ground," but one does wonder whether companies will sully the next name too through their continued failure to execute on the vision. The term "intranet" is actually pretty good and should be able to ride on the coattails of the internet. The name "internet" wasn't brought down by failures like GeoCities because there is so much innovation happening, and failure is a necessary by-product of innovation. The difference is that failure kills most corporate intranets. Many intranets are big waterfall I.T. projects that are "complete" after launch. There is no time or budget left to learn from mistakes and adjust: the equivalent of a failed internet start-up, but without the decency of shutting the servers down.

I don't expect companies will improve their execution of intranet projects until they start to change the way they build, launch, and manage internal products. The companies that are ahead of the curve should give their intranet an internal name to make users expect and work for more than the status quo.

BTW, I have a great replacement for the term "intranet" but I am not going to tell anyone because, sooner or later, it will be ruined by some comatose intranet initiative looking for some easy re-branding. :P

Feb 10, 2010

The Dead Zone of Software Pricing

A couple of weeks ago I subscribed to the Lean Startup Circle mailing list and I have been thoroughly enjoying the conversation ever since. If you have any entrepreneurial sensibilities lurking inside you, I highly recommend that you subscribe. The list participants have been in the trenches building companies and are happy to share what they have learned.

Recently, a thread on pricing caught my eye. This doesn't have to be strictly about software licensing fees; it could apply to subscriptions or services too. Jim Murphy wrote that there are four pricing bands: low (under $500), medium ($500-$5,000), dead-zone ($5,000-$20,000), and high ($20,000+). What is interesting is the "dead-zone." In this band, the buying cycle is long and complex but the price of the product doesn't quite compensate for the high cost of sale. I am sure that most successful software vendors understand this, either consciously or tacitly, and price their products accordingly. From a buyer's perspective, I was thinking that a $21,000 software product may have originally been $8,000 but is priced up by $13,000 to get out of the dead zone.

If you look at the CMS market, there are lots of commercial products with sticker prices that hover around $20,000-$25,000. This range is much smaller than the actual deal size because the list price of $20,000 may be to license a single CPU with little capacity and no fault tolerance. You will probably also need to add support and training, which will typically bring the deal size to the $50,000-$90,000 range. Those figures can certainly justify a sales investment from the vendor. But, because of the complexity, dependencies, and high stakes of web content management, the cost of sale can be very high (remember, you have to factor in the costs of the losses as well as the wins, and a flooded marketplace means lots of lost deals). Maybe the dead zone of WCM is even higher than the average. Maybe it's more like $10,000-$50,000.

$50,000 is a lot of money to many web initiatives that tend to have expectations of low costs and rapid results. Open source products like Drupal, Joomla!, and WordPress are doing quite well because they enable design studios (with minimal technical skills) to offer a complete website for half that price by taking a pre-existing theme and some modules and tweaking them just enough to make the site look original. ExpressionEngine, a free but non-open source product, is showing similar success. In these deals, the cost of the software sale is essentially zero because the customer is not buying software; they are buying a website. They are considering font/palette/imagery rather than feature/function/value. Plus, because hosting for these platforms is so ubiquitous, the customer doesn't even have to complicate the transaction by involving their I.T. organization.

Commercial CMS vendors that inflate their price to get above the dead zone are at real risk here. Unless they can demonstrate value, their outsized prices will really stick out against products in the bottom two tiers. I think their best strategy is to shrink the dead zone by reducing the cost of sales. This means improving their channel sales and giving more access to customers who can take on more of the burden of evaluating the software. They also need to figure out a way to reward low-touch sales with discounts and to charge prospective customers more when they demand the formality and overhead of the traditional enterprise sales cycle.

Feb 08, 2010

Developers and Designers

A few months ago I read Lukas Mathis' thought-provoking essay "Designers are not Programmers," where he makes the case for a separation between designers and developers. To summarize his argument: thinking about implementation details distracts the designer from the user and results in applications (and websites) that are easy to build but hard to use. He makes a very thorough case (you should definitely read the full essay) but something just doesn't sit well with me. In my practical experience, I find that teams are more efficient when roles overlap and people understand what is happening outside of their silo. Here are some reasons why:

  • A designer is often faced with lots of options for how to solve a user problem. When it is a coin toss between two solutions, why not choose the one that is easier to implement and apply the time and effort saved to something that really needs the additional complexity?

  • The static tools that pure designers use (e.g., Photoshop) have no way to express interactive functionality. All the details that the developer needs to know have to be captured in some sort of specification that can never be complete and is usually out of date. Making the developers wait until the specification is done is inefficient.

  • Good software cannot be achieved by brilliant designers alone. It takes iteration and feedback to get it right. A cold hand-off between the designers and developers lengthens the iteration cycle (so you get fewer of them in a fixed amount of time and budget) and creates more of an opportunity for information loss.

In an ideal world with infinite time and money (and omniscience too), it might be better to have designers whose minds are unencumbered by knowledge of implementation details. Anything that they dream of can be implemented... with enough time and resources, of course. But I don't live in that world. In the world I live in, product managers and publishers have to make lots of compromises. They also need to be able to react efficiently to correct bad design decisions so that the product (or website) can continually improve. For that, you need an agile team that solves problems directly, which means staying out of a designer-only loop.

The most effective teams that I have worked on have all had a talented front-end developer who can rapidly design in DHTML (leveraging JavaScript libraries and CSS) and knows enough server-side scripting to make most user interface changes unassisted. With this mix of skills, it is truly amazing how quickly a small team can get a product in front of users, where it can be improved by feedback.

Feb 03, 2010

The Myth of the Occasional CMS User

Not long ago, a university hired me to evaluate their CMS implementation. They were having doubts about their CMS selection because the implemented system was not living up to the lofty promises that got them the budget for the project. It turned out that they had made a reasonably good platform choice but had failed in two of the other critical content management success factors: expectation setting and project execution. The project execution wasn't horrible for a first try (they were working with a systems integrator that was new to the platform) and there were some remedial efforts already in progress to clean up the implementation. What was really killing them was that a primary goal for the CMS was the notion of "self service." They wanted their faculty to be able to edit their own profile pages. These faculty pages were more than a simple staff directory. Each entry was intended to showcase the experience and accomplishments of the professor and included a CV, a career narrative, and a history of publications. To maximize content reuse, the content model was highly structured, which increased the complexity of the content entry forms. They might have been able to scale back on the detail, but the amount of content they were looking for was not trivial, especially for an individual whose chief concern was not keeping a website up to date.

Often, one of the big justifications for a CMS is removing the webmaster bottleneck and delegating content entry to the people who have the information. The implicit assumption is that everyone wants to directly maintain their portion of the website but technology is standing in the way. But if you visit a CMS customer a while after implementation, you are likely to find that the responsibility of adding content is still concentrated in a relatively small proportion of the employee population. Either the CMS never gets rolled out to the anticipated user base, or it is rolled out and the user base fails to hold up their end of the bargain, so the core content managers take back ownership. There are plenty of exceptions to this generalization, especially when there is an individual who has taken responsibility for outward (or inward) communication for his/her sub-group. For example, you might have someone from the human resources department whose job depends on communicating corporate policies to the staff.

You could interpret failed adoption as evidence of poor usability. That was my client's inclination. But you need to consider the usability of the alternative: telling someone else to add your content for you. Prior to the CMS, a professor only had to send a terse, typo-riddled email containing some research references or a CV, and the marketing department would clean it up, fill in the gaps, put it in the right style, and handle all of the coordination. How can you beat that? You can't.

Usability can't take the place of ownership and responsibility when managing content. As long as the marketing department owns the website, they will have the motivation to maintain it to the standards that they have set. It would be unreasonable to expect a professor to put the website above his other responsibilities: teaching and publishing academic books and articles. When the marketing department needs to spend a lot of energy cajoling and coordinating a professor for content, it is usually easier for that marketing department to do the content entry too.

Generally speaking, you can't delegate content authorship beyond the perception of site ownership. A good strategy is to centralize content authorship and give a relatively long turnaround time (perhaps five business days) for site update requests. Those who do not feel ownership of the site will be fine with that lag. But those who are annoyed by the lag (or their staff) are good candidates for becoming content contributors. To them, using any CMS will be preferable to waiting a week. If they do complain, their criticism will be about needing more control to produce more elaborate content rather than needing simplicity for basic edits.

Once you scale back your expectation of distributed authorship, the role of the occasional user becomes less important. The occasional user is certainly less important than the primary content contributor role or the engaged users who prioritize website management in their list of responsibilities. You shouldn't buy a CMS for the occasional user. You should buy a CMS to maximize the effectiveness of your core users who are primarily responsible for the content and performance of the website.

Feb 01, 2010

In-Context and Power User Interfaces: One for the Sale, the Other for the Content Manager

A dirty little secret in the CMS industry is that, while in-context editing is often what sells a CMS, the "power user" interface is usually what winds up getting used after implementation. This phenomenon obviously creates problems in the selection process because, when the sales demo focuses on an interface that users will quickly grow out of, any usability impressions are irrelevant. This is also part of a bigger problem: the importance of in-context editing for sales has caused many CMS vendors to neglect their power user interfaces.

It is easy to understand why the sales demo gravitates to the in-context user interface: the audience finds it more intuitive. What is less obvious is why. In a typical CMS sales demonstration, the audience has the perspective of a site visitor. After all, this is not their site. They have no responsibility for it. As a site visitor, we think of editing the content that we see: "I see a typo;" "that sentence is hard to read;" "I would prefer to see another picture here." The user just wants to go in and fix it, like a Wikipedia article. Until it's fixed, that content issue is going to bug the user, so directness and immediacy are critical. Like with a wiki, the in-context interface is ideal for solving these kinds of problems.

The content manager, however, has an entirely different perspective. The content manager is thinking more about the whole web site than any one page. The content manager has to solve problems like re-organizing the website and making global content changes. Needing to manually change every single page of a website is not acceptable, so content reuse should be top of mind. From this perspective, the appearance of a page is less important than the actual content, which also includes information you can't see on the page but that drives the behavior of the site. You can even go so far as to say that the visible page (what the visitor sees) actively hides information that the content manager needs to see. The visitor shouldn't know where else a piece of content is featured on the web site or what caused the personalization logic to show this item in this particular case, but the content manager should. Incidentally, this is why you should make product demos focus on scenarios. Scenarios force you to think about what the content manager does, not just dream of being able to edit somebody else's web site.

Yes, you can make the argument that the occasional content contributor (who 80% of the time experiences the site as a visitor) needs a simplified user interface to fix the issues that they notice or keep a few bits of information up to date. But, as an organization gets more sophisticated with managing content, those cases of simplistically managed pages (with no reuse and no presentation logic) get less frequent. At that point, you are just talking about the "about us" page and some simple press releases. Are you surprised that this is what your basic generic CMS demo shows? Furthermore, I am beginning to believe that the occasional user is a myth (another blog post).

In-context editing interfaces are steadily getting more powerful by exposing functionality like targeting and A/B testing, but there inevitably comes a point when the content manager wants the full power of the application at his fingertips. As the in-context editors get better, that point gets pushed further out. But adding complexity and power to the in-context editing interface will no doubt steepen the learning curve for the occasional user and diminish the wow factor of the demo. And no CMS vendor will do anything to reduce the wow factor of their product demo.

Jan 26, 2010

Designing for Drupal

Nica Lorber, from Chapter Three, has a great post on their highly optimized Drupal design process. In the article, Nica shows how they start from a Drupal template that has roughly 25 common named elements (some native to Drupal, some not) that can be styled according to client specifications. A specialized Fireworks template calls out these elements and helps map them to a mockup to facilitate conversation between the designer and the developer. This creates a common language that binds the design to the code that the developer needs to work on. It also probably helps quickly identify visual design elements that are not part of the normal Drupal/Chapter Three build and therefore need special consideration for estimation and budgeting.

I think the efficiency of this process supports my philosophy that you should work with a partner that specializes in the CMS that you intend to implement. Chapter Three didn't invent this methodology for their first Drupal implementation. I am sure that it evolved over time. It also speaks to the efficiency of designing with the platform in mind. With this process, everybody on the project (from designers to developers) understands what is going to be easy and what is going to be hard. Designers are guided towards concepts that are more efficient to build. Yes, you could build any design on Drupal, but that doesn't mean that you should. With some site designs, you will be fighting the natural tendencies of the platform: increasing both the implementation and maintenance costs.

I should probably make the point here that I am not implying that all Drupal sites (or sites from any other CMS) look alike - particularly to the untrained eye. Nearly all of what the casual site visitor notices (font, palette, page layout, buttons, etc.) is totally customizable in Drupal and any other CMS platform. Guessing what CMS was used involves much more subtle characteristics that you wouldn't notice unless you worked with the platform extensively.

This all raises a chicken and egg problem that I have discussed before. If the product influences the design and the design defines the requirements that drive the product selection, where do you start? As I mentioned in "When it is not all about the software," the key is knowing enough about your requirements to define a short list of options that you can evaluate on more subjective levels (such as aspects of design).

Jan 25, 2010

CMS Architecture: Managing Presentation Templates

Another geeky post...

In my last post, I described the relative merits of managing configuration in a repository vs. in the file system but excluded presentation templates, even though how they are managed is just as interesting. Like configuration, presentation templates can be managed in the file system or in the content repository. And as with configuration, if you manage presentation templates in the repository, you need some way to deploy them from one instance of your site to another without moving the content over as well.

There are plenty of additional reasons why you would want to manage presentation templates on the file system. In particular, presentation templates are code, and you want to be able to use proven coding tools and techniques to manage them. Good developers will be familiar with using a source code management system to synchronize their local work areas and branch/tag the source tree. Development tools (IDEs and text editors) are designed to work on files in a local file system. If you manage presentation templates in the repository, you have to solve all sorts of problems like branching and merging and building a browser-based IDE or integrating with local IDEs. The latter can be done through WebDAV, and I have also seen customers use an Ant builder in Eclipse to push a file every time it has changed. Still, the additional complexity can create frustrating issues when the deployment mechanism breaks.

As much as it complicates the architecture, there is one very good case when you would want to manage presentation templates in the repository: when you have a centralized CMS instance that supports multiple, independently developed sub-sites. For example, let's say you are a university and each school or department has its own web developer who wants to design and implement his own site. This developer is competent and trustworthy but you don't want to give him access to deploy his own code directly to the filesystem of the production server. He could accidentally break another site or, worse, bring down the whole server. You could centralize the testing and deployment of code, but that would just create a bottleneck. You could do something like put the CSS and JS in the repository and have him go all CSS Zen Garden, but sooner or later he will want to edit the HTML in the presentation templates.

In this scenario of distributed, delegated development, presentation templates are like content in two very important respects:

  1. presentation templates need access control rules to determine who can edit what.

  2. presentation templates become user input (and user input should never be trusted).

The second point is really important. Just as you need to think twice before you allow a content contributor to embed potentially malicious JavaScript into pages, you need to worry that a delegated template developer can deploy potentially dangerous server-side code. Once that code is on the filesystem of an environment, it can create all sorts of mischief. It doesn't matter whether it was intentional or not: if a programmer codes an infinite loop or compromises security, you have a problem. Using templating languages (like Smarty or Velocity) rather than a full programming language (like PHP or Java in JSP) will mitigate that risk, but you still have to worry about the developer uploading a script that can run on your server. With staging and workflow, CMSs are good at managing semi-trusted content like presentation templates from distributed, independent developers. There is a clear boundary between the runtime of the site and the underlying environment.
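
To illustrate the templating-language point in code, here is a minimal sketch using Python and Jinja2 as a stand-in for Smarty or Velocity (an assumption; the principle is the same). The sandboxed environment renders ordinary variable access but refuses to evaluate expressions that reach outside the data you hand it:

```python
# A minimal sketch of sandboxed templating, assuming Python and Jinja2
# as a stand-in for the Smarty/Velocity approach described above.
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()

# A template submitted by a delegated sub-site developer: ordinary
# variable access works as expected.
safe = env.from_string("<h1>{{ page.title }}</h1>")
print(safe.render(page={"title": "News"}))  # -> <h1>News</h1>

# Reaching for Python internals is refused: the sandbox raises a
# SecurityError when the template tries to use an unsafe attribute.
unsafe = env.from_string("{{ page.__class__.__mro__ }}")
try:
    unsafe.render(page={"title": "News"})
except Exception as exc:  # jinja2.exceptions.SecurityError
    print("blocked:", exc)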

If your CMS uses file-system based presentation templates and you delegate sub-site development to the departments that own them, you should definitely put in place some sort of automated deployment mechanism that keeps FTP and SSH access out of the developers' hands and reduces the potential for manual error. The following guidelines are worth following:

  • Code should always be deployed out of a source code management system (via a branch or a tag). That way you will know what was deployed and you can redeploy the same tested code to different environments.

  • Deployments should be scripted (a minimal sketch follows this list). The scripts can manage the logic of what should be put where.

  • Every development team should have an integration environment where they can test code.
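
As a sketch of what such a scripted deployment might look like, here is a minimal example assuming Git for source control and rsync for transfer; the tag, repository, host, and paths are all hypothetical:

```python
# A minimal deployment sketch in Python; Git and rsync are assumptions,
# and every name below is hypothetical.
import subprocess
import tempfile

TAG = "templates-1.4.2"  # a tested, tagged release -- never a working copy
REPO = "git@scm.example.edu:www-templates.git"
TARGET = "deploy@www.example.edu:/var/www/templates/"

workdir = tempfile.mkdtemp()

# 1. Check the tagged code out of source control so you know exactly what
#    was deployed and can redeploy the same code to other environments.
subprocess.check_call(["git", "clone", REPO, workdir])
subprocess.check_call(["git", "checkout", TAG], cwd=workdir)

# 2. Push it to the target server; --delete keeps the target an exact
#    mirror of the tag, which removes a whole class of manual error.
subprocess.check_call([
    "rsync", "-av", "--delete", "--exclude", ".git",
    workdir + "/", TARGET,
])
```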

One of my clients uses a product called AnthillPro to deploy all of their web applications as well as presentation templates. It has taken a while to standardize and migrate all of the development teams, but now I don't see how you can have a decentralized development organization without it.

The other dimension to this problem is the coupling between the content model and the presentation templates. When you add an attribute to a content type, you need to update the presentation template to show it (or use it in some other way). The deployment of new presentation templates needs to be timed with content updates. Often content contributors will want to see the new attribute in preview while they are updating their content. Templates also need to fail gracefully when they request an attribute that does not exist yet or has not been populated yet (see the sketch below). Typically, presentation templates evolve more rapidly than content models. After all, a change in a content model usually involves some manual content entry. In my university scenario, there is a benefit to centralizing the ownership of the content model because it allows content sharing across sites: if one department defines a news item differently than another department, it is difficult to have a combined news feed. Centralizing the content model will further slow its evolution because there needs to be alignment between the different departments.
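
As a small illustration of failing gracefully, here is a sketch, again using Jinja2 as a stand-in for your CMS's templating language; the 'subtitle' attribute is a hypothetical field that was just added to the content model:

```python
# A sketch of a template degrading gracefully when a newly added
# attribute has not been populated yet (Jinja2 assumed; names hypothetical).
from jinja2 import Environment

env = Environment()
tmpl = env.from_string(
    "<h2>{{ item.title }}</h2>"
    "{% if item.subtitle %}<h3>{{ item.subtitle }}</h3>{% endif %}"
)

# Older content created before the model change simply renders without
# the new element instead of breaking the page.
print(tmpl.render(item={"title": "Old story"}))
print(tmpl.render(item={"title": "New story", "subtitle": "Now with more"}))
```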

Wow, two geeky posts in a row. I promise the next one will be less technical.

Jan 19, 2010

CMS Architecture: Managing Content Type Configurations

Warning: this post is highly technical. Non-programmers, please avert your eyes.

Deane Barker (from Blend Interactive) and I have a running conversation about CMS architectures. One of the recurring topics is how content models and other configurations are managed. There are two high-level approaches: inside the repository and outside the repository. Both have their advantages and disadvantages.

  • Managing content types outside the repository

    My preferred approach is to manage content type definitions in files that can be maintained in a source code management system. This way you can replicate a content type definition to different environments without moving the content. Developers can keep up to date with changes made by their colleagues. Configuration can be tested on development and QA before moving to production. There is no user interface to get in the way. No repetitive configuration tasks. Everything is scriptable and can be automated. I especially like it when content types are actual code classes so you can add helper methods in addition to traditional fields (see the first sketch after this list). Of course, when you get into this, it is a slippery slope into a tightly coupled display tier that can execute that logic.

    On the downside, it is often difficult to de-couple the content (which sits in the repository) from the content model (which defines the repository). When you push an updated content type to a site instance, you might need to change how the content is stored in the repository. This is more problematic in repositories that store content attributes as columns in a database. It is less of a problem in repositories that use XML or object databases (or name-value pairs in a relational database) where content from two different versions of the same model can coexist more easily.

    If you do manage content type definitions outside of the repository, a good pattern to follow is data migrations, a technique made popular by Ruby on Rails. I am using a similar migration framework for Django called South. Basically, each migration is a little program with two methods ("up" and "down" in Rails; "forwards" and "backwards" in South) that can add, remove, and alter columns and also move data around. The forward method updates the database; the backward method reverts it to the earlier version (see the second sketch after this list).

  • Managing content types within the repository

    Most CMSs follow the approach of managing the content type definitions inside the repository and provide an administrative interface to create and edit content types. This is really convenient when you have one instance of the application and you want to do something like add a new field. There is no syntax to know and application validation can stop you from doing anything stupid. Some CMSs allow you to version content type definitions so that you can revert an upgrade.

    When you have multiple instances of your site, managing content types can be tedious and error-prone if you need to go through the administrative interface of each instance and repeat your work. And of course, you can't just copy the entire repository from one instance to another unless you want to overwrite your content. If your CMS is designed in this way, you should look for a packaging system that allows you to export a content type definition (and other configurations) so that it can be deployed to another instance. Many CMSs allow an instance to push a package directly over to another instance. The packaging system may also do some data manipulation (like setting a default value for a required new field).
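
To make the first bullet concrete, here is a minimal sketch of a content type defined as a code class, in the Django style mentioned above; the type, its fields, and the helper method are all hypothetical:

```python
# A minimal sketch of a content type as a code class (Django assumed;
# all names hypothetical).
from django.db import models

class NewsItem(models.Model):
    title = models.CharField(max_length=200)
    summary = models.TextField(blank=True)
    body = models.TextField()
    publish_date = models.DateTimeField()

    def teaser(self):
        # A helper method beyond the raw fields: prefer the summary,
        # fall back to the first 140 characters of the body.
        return self.summary or self.body[:140]
```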
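
And here is a sketch of the forwards/backwards migration pattern described in the first bullet, written in the style of a South migration; the app, table, and column names are hypothetical:

```python
# A minimal South-style migration; app, table, and column names are
# hypothetical.
from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        # Add the new 'summary' column; existing rows get an empty default.
        db.add_column('news_newsitem', 'summary',
                      self.gf('django.db.models.fields.TextField')(default=''),
                      keep_default=False)

    def backwards(self, orm):
        # Revert the database to the earlier version of the model.
        db.delete_column('news_newsitem', 'summary')
```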

Unless you are building your own custom CMS, this all may seem like an academic question. It really is quite philosophical: is configuration content that is managed inside the application, or does it need to be managed as part of the application? The same goes for presentation templates (but that is another blog post). However, if you intend to select a CMS (as most people should), it is important to understand the choice that the CMS developers made and how they work around the limitations of their choice. If you are watching a demo and you see the sales engineer smartly adding fields through a UI, you should ask whether this is the only way to update the content model and whether you can push a content type definition from one instance to another. If the sales engineer is working in a code editor, you need to ask how the content is updated when a model update is deployed.

Jan 11, 2010

Writing Titles for SEO

[Image: SEO Unfriendly Pithy Titles, originally uploaded by sggottlieb]

Normally I don't worry too much about search engine optimization when I write blog posts. My writing is as much for organizing my own thoughts as it is for driving site traffic. My philosophy on search engine optimization is to produce good content and avoid hindering search engines from indexing my site. Good content is clear, well organized, and useful. Not hindering search engines means being text-rich, minimizing broken links, returning the appropriate status codes, and keeping HTML simple. I am not going to trick anyone into coming to www.contenthere.net, but if I have something that would be useful to someone, I want it to be found.

After reading the title of a recent blog post ("The biggest thing since wood pulp"), I realized that I was breaking my own very lax rules. My attempt at a pithy title was effectively hiding what the article was about: a possible consequence of the Internet's disruption of the newspaper business. I looked at my recent posts (see screenshot) and realized that I do this quite a lot. One of my worst offenses is "Doubt," which offers an alternative to matrix-based decision-making. Most people probably assume that I am talking about the movie of the same title. Another example is "Another Flower War," which is about a dispute between Magnolia the CMS and Magnolia the social bookmarking site. I know this title was misleading because I was getting comment spam from garden supply retailers.

Pithy titles may be effective in print media, where the reader has already made the investment to browse through the publication and is looking for things that spark his interest. They may be marginally effective at causing a curious RSS subscriber to click through. But they are totally counterproductive in a search result. Even if the search engine thinks that your article might be relevant to the query, the searcher is likely to assume that your article was listed in error as he scans the results. You have just done the searcher a disservice because you have hidden the answer to his problem.

To some extent, some open source projects share this problem. Sometimes open source project names are taken from an obscure (nerdy) cultural reference or something to do with the history of the project. Sometimes project names are just intended to be fun. I remember an Optaros colleague telling me how silly he felt when he was talking to a CIO and suggested that they use Wackamole for network monitoring. It takes a lot of insiders recommending a project with a silly name before it gains credibility in the mainstream.

Overly clever titles are an inside joke that excludes potential new readers. It's a little like giving a tourist directions that reference where a Dunkin' Donuts shop used to be. These names are useful for getting attention from the old guard, but they exclude the newbies. This may be an intentional community dynamic, where new members need to demonstrate their commitment in order to get accepted and longstanding members feel bonded by their shared knowledge. But if the goal is to bring outsiders in, the name of a project or an article should be clear rather than silly and obscure.

