
Where content meets technology

Jun 11, 2010

Never say "User"

Ever since I got into web content management, I have advised my clients to avoid the word "user." It's a useless word because you are never quite sure whether someone is talking about a user of the CMS or a user of the website. For this reason, I get my clients to adopt the words "contributor" and "visitor." A contributor is a person who contributes content or participates in the content workflow of the content management system. A visitor consumes content on the website.

The primary goal of a content management system is to mediate between these two populations. If a CMS were only to think of the contributor, the content would be poorly structured, cluttered, and hard to find. By the way, pretty much every organization has one of those systems — it's called a shared drive. If a CMS only represented the visitors (or other consumers of content such as other systems), it would insist on extremely fine-grained structure for maximum reuse, pristine HTML (sorry, no WYSIWYG editors), and perfect quality (hello, 10-step workflows). The CMS applies structure and rules to establish a compromise between these two groups.

Web 2.0-style sites jumble this model up a bit because the line between contributor and visitor is blurring. Visitors can potentially contribute. Web content management systems like Drupal, Plone, and many others merge the contributor interface into the externally facing website. These platforms tend to call registered users (contributors and visitors) "members" and, by default, allow members to register themselves. The distinction between a self-registered member and a "SuperUser" is just a matter of permissions.

I still think that the distinction between contributor and visitor is useful because members need to wear different hats. Sometimes they are visitors on the site simply trying to find something. Other times they are contributors wanting to post something. The CMS is still mediating, but instead of mediating between different types of users, it is potentially mediating between the same user in different contexts. It forces the contributor to put in a little extra effort to make life easier when he becomes a visitor. It would be a little like Clippy telling you, "Oh, you don't want to use that file name and place that document there because you will NEVER find it again! And while you are at it, maybe you should print that out because you messed with the margins and I bet it's going to look like hell." Clippy doesn't do that and we still find him annoying. And you wonder why nobody loves their content management system?

Jun 09, 2010

Jeff Cram: Your website is not a project

Jeff Cram started a blog series called the Post Launch Paradigm with a great post called "Your website is not a project." The article lists all the ways companies fail when they think of a website as a project to be completed.

If a website is not a project, what is it? Jeff calls it an "ongoing process." I call it a "product." Website product management is becoming an increasingly important service offering for Content Here, and it is a natural extension of the selection work that I have been doing over the first three years of the company. During a selection engagement, I create a road map of functionality to be implemented over time and set expectations for user adoption and incremental improvement. Recently, selection clients have been engaging Content Here after implementation to help them progress along that road map. This feels great on a number of levels: these clients realize that their websites are not projects, they have bought into the concept of continuous improvement, and I get to see the clients working with the products that they have selected. I even get to go through code once in a while!

May 11, 2010

Tips for Web Product Management

I am currently providing web product management services for two clients. One client is a start-up launching a new web-based product. The other is a 100-year-old newspaper. While at face value these two clients couldn't appear to be more different, they are actually quite similar. Both are trying to innovate a viable product. The startup is building a new concept. The newspaper is trying to re-imagine an old concept. In both cases the development backlog is a chaotic mess of items that range from little tweaks to major features. There is impatience for progress, but that urgency needs to be balanced against the need to build something that is scalable and sustainable if the business succeeds. The truth is that most websites operate under these conditions to some degree. It is just that the ambition of these two businesses raises the stakes and the stress level.

To be successful in these projects, I have had to draw on lots of different skills and experiences. Many of the concepts and techniques come from agile methodologies like Scrum and Lean software development. What follows is a list of principles and practices that I have found to be effective.

  • Establish a regular (2-3 week) release cycle. Everyone benefits from a regular release cycle. Stakeholders get the satisfaction of seeing progress, and they don't panic when one of their requests misses the current release because there is a good chance it will be addressed in a subsequent one. The sooner a new feature hits the production site, the sooner it can be measured and improved. Shorter development cycles also mean smaller releases that are easier to test. Site visitors perceive a constantly improving site as being vibrant.

  • Define and communicate prioritization criteria. In order to keep releases small, you need a clear and open scoping process. Enhancement requests need to be evaluated against the site goals (such as creating new revenue opportunities, cutting costs, maintaining credibility, etc.). Without this kind of guidance, development gets chaotic. Developer time is not concentrated on work that matters. The pipeline tends to get clogged with small tweaks; larger, more substantial improvements never get done.

  • Make each release a blend of stakeholder-focused improvements and code maintenance. When code is not regularly optimized and refactored, entropy takes over and it becomes less maintainable. Development teams that are exclusively driven by stakeholder requests don't have time to keep the codebase clean. A broken window effect causes messy code to beget messy code. For this reason every release milestone should contain a balance of improvements that stakeholders see (new functionality, presentation template changes, etc.) and maintenance tasks (refactoring code, improving management scripts and infrastructure, etc.). By maintaining this discipline, the quality of the application improves (rather than degrades) over time.

  • Don't forget the HotFix queue. Even though you might have a methodical development plan, emergencies happen. In addition to regularly spaced release milestones, I typically create a "HotFix" milestone with a rolling due date of "yesterday." Emergency requests go into the HotFix queue and get addressed and deployed immediately. Of course, only I can put things into the HotFix queue, and I base that decision on very specific criteria: current functionality is compromised, inaction is costing money (or some other measure of value like reputation), and it is a quick fix.

  • Write good tickets. Every change request gets entered in a ticket tracking system. Bug requests should be extremely descriptive: URLs, screenshots, steps to reproduce. Feature requests take the form of a full specification complete with annotated wireframes or mockups. Every new element shown needs an annotation describing the source of information and behavior. It is also a good idea to put in test conditions so that the QA staff know how to verify it is working.

  • Use your source code control system effectively. Create tags to remember milestones in the development history. Use branches only when you are simultaneously working on two versions of the application. The most likely reasons for branching are:

    • Having a production branch for hotfixes while development for the next release is done on trunk.

    • Using an experimentation branch for functionality that may or may not make it into the main code line.

    Don't use branches for personal work areas or to manage environment-specific configurations. Merging will be a pain and it will delay any integration testing you will need to do.

  • Automate deployments. Deployments should be simple and mindless. There should be one step to push the exact same code that was tested on the QA environment to the production environments. If someone needs to manually copy individual files, you are doing it wrong. At a previous client (a very large magazine publisher), we used AnthillPro for continuous integration and deployments. Each build of the application was stored in a build artifact library where it could be deployed to different environments with the push of a button. There were cool reports that showed you which build number was deployed where. But that was for managing 50+ applications across hundreds of servers. Now I am using lighter-weight tools like Fabric to script builds and deployments (a minimal sketch follows this list).

  • Build a talented and committed team. I strongly believe that there is no room for mediocrity on an agile development team. Working in this way requires a lot of trust. Stakeholders need to trust that developers are working efficiently and doing necessary things. Developers need to rely on each other to communicate and make good decisions. You don't get that trust unless developers know the technology and are passionate about their craft.
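
To make the deployment automation point concrete, here is a minimal Fabric sketch along the lines of what I use. The host name, repository URL, paths, and reload trick are hypothetical placeholders, and your build steps will differ; the point is simply that the QA-tested code reaches production with one local command.

    # fabfile.py -- a minimal deployment sketch (hypothetical host, repository,
    # paths, and reload mechanism; adapt to your own stack)
    from fabric.api import cd, env, local, run

    env.hosts = ['www.example.com']            # hypothetical production host
    REPO = 'https://svn.example.com/mysite'    # hypothetical repository URL

    def deploy(tag):
        """Push a tagged, QA-tested build to production in one step."""
        local('python manage.py test')         # never push an untested build
        with cd('/var/www/mysite'):            # hypothetical deployment path
            run('svn switch %s/tags/%s .' % (REPO, tag))
            run('pip install -r requirements.txt')    # sync dependencies
            run('python manage.py migrate')           # apply pending migrations
            run('touch site.wsgi')                    # hypothetical mod_wsgi reload

Running something like "fab deploy:release-42" from a workstation then performs the entire push, so there is never a reason to copy individual files by hand.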

If the website or web application that you manage is your product (or is critical to deliver your product), you need to manage it with this level of discipline and rigor. Otherwise the site will stagnate and you will be unprepared to respond to new market challenges and opportunities.

May 04, 2010

Another clue about Oracle's attitude toward Web Content Management

By now most industry analysts have grown skeptical of Oracle's commitment to web content management (WCM). Those analysts who are still in denial are either too focused on the document management side of enterprise content management (ECM) to even care or they are on Oracle's payroll. The writing has been on the wall for a while now. Before Oracle bought it, Stellent had (in my opinion) the best WCM functionality of any of the document-oriented ECM products. They were miles ahead of EMC/Documentum and IBM/FileNet. Stellent was even edging past traditional WCM products like Vignette and Interwoven, which were neglecting WCM to concentrate on their ECM offerings. The Stellent acquisition happened right before the release of a new version that introduced big WCM improvements. After the acquisition, Stellent got dumped into the "Fusion Middleware" (AKA "Neither Database nor ERP") division, which was a clear sign that Oracle didn't want to spend too much time understanding what it bought.

The reason why Oracle bought Stellent is pretty clear. For readers who are not CMS historians, many years ago Stellent bought a company called "Inso," which developed the filters that could convert documents into different formats. Microsoft has Inso to thank for breaking WordPerfect's and Lotus 1-2-3's holds on their respective markets. Because of OEM'ed Inso technology, an MS Word user could open a WordPerfect document. Stellent used the acquired Inso technology to lead the market in word-processing-to-web functionality. More than with any other ECM product, a Stellent UCM user could realistically use MS Word to maintain a structured web asset. Oracle's plan for Stellent was to use those filters to help its document repository story. At the time, Oracle was pitching its "ECM-light" vision that positioned its database as a step up from a file system for storing documents. The database could store metadata and provided a search interface that could list documents in different ways. Inso filters helped parse documents for better indexing and also introduced a capability for exporting into different formats. Plus, the Stellent user interface was a big improvement over anything that Oracle could cook up (no, knowledge workers do not want to work in SQL*Plus).

Wow, that was some rant. But why am I talking about that now? Well, I was just listening to an NPR Environment podcast that was underwritten by Oracle (thanks Oracle, BTW). When reading the Oracle underwriter statement, the presenter instructed listeners to "visit www.oracle.com/ironman2 to learn more." Now we all know that Oracle is a big company and is probably too busy to create marketing landing pages for all of its different advertising campaigns (no matter how easy it is to do). You can make the old "cobbler's son" excuse. But in this era, where the premium WCM vendors are selling on "interactive marketing" and "engagement" functionality, wouldn't you think that Oracle would make an effort? Wouldn't it be helpful to know whether traffic was coming from an NPR or Iron Man 2 advertising spot? Ironically, I seem to remember that A/B testing, marketing landing pages, and reporting functionality were all part of that mid-acquisition Stellent version. Apparently the Oracle marketing team has not discovered it.

Apr 30, 2010

Jeff Potts strikes out on his own with Metaversant

My friend and former Optaros colleague Jeff Potts recently announced that he has left Optaros to form a new company called Metaversant. Jeff was Optaros' superstar Alfresco guy. He put Optaros on the Alfresco map and contributed to the Alfresco community by writing a great book (The Alfresco Developer Guide), maintaining useful information on his blog, and also publicly pushing Alfresco in the right direction. Jeff is a charter member of my informal "Content Here Information Partner (CHIPs)" network and I have regular briefings with him to keep up to date on all things Alfresco.

Since Optaros has shifted its strategy to focus on the intersection of community, commerce, and content, Alfresco's position as a core offering has diminished. Alfresco is more oriented toward file-based collaboration, intranets, and digital asset management than social publishing and commerce. Metaversant will focus on training and advising Alfresco customers. I admire Jeff's expertise and passion, and I know that he will be successful in this new venture. He will certainly get referrals from me.

Apr 26, 2010

The Captcha and Mouse Game

There has been a lot of Twitter chatter about this New York Times article on offshore captcha circumvention. The article describes how link spammers are hiring cheap offshore labor to manually solve captchas and dump comment spam on websites. If you use captcha as the only way to prevent comment spam, you should worry. However, if you are like me and use captcha only to level the playing field (by taking robots out of the equation), this is not a problem. In fact, I am happy that it raises the cost of link spamming and that safe jobs are going to people who need them.

If you are a manual link spammer, don't accept less than a penny a comment. You deserve it!

Apr 26, 2010

World Plone Day 2010

Mark your calendars: World Plone Day is on April 28th. World Plone Day is a free, annual, international event designed to introduce the Plone content management system to people outside of the Plone community. This year it is being held in 36 locations in 29 countries. The agenda usually contains a balance of business and technical topics. I just had a look at the Boston World Plone Day agenda and it looks particularly good.

If you have not looked at Plone recently, you should. With the official release of version 4.0 right around the corner, a lot of changes have happened. The architecture leverages more of the new Zope 3 technologies, performance has improved, and development techniques have evolved. A considerable amount of work is being done to make theming easier using tools like Deliverance. Also, the NoSQL movement hype may make the underlying object database (ZODB) less intimidating to architects. From a user perspective, the team has focused on some subtle improvements such as switching the default rich text editor to TinyMCE and creating a new default theme.

Apr 14, 2010

Supporting Internet Explorer 6

IE6 not supported on Microsoft.com

Over the past few days, I have been involved in a number of conversations about supporting Internet Explorer 6. Arguing about when to drop support for outdated browsers is a sport that is as old as the web itself. There is nothing really new here but the IE6 support debate feels particularly emotional — not as charged as back when people were arguing for only supporting Internet Explorer, but close.

IE6 had a really long run. It was Microsoft's browser offering for 5 years (late 2001 through late 2006). Up to that point, Microsoft was releasing a major version of IE every year. Now it looks like they are settling into a pace of every other year. That means that IE 6 was installed on a lot of computers. In particular, a lot of computers that were bought when internet usage was starting to get really ubiquitous. In many businesses and households, these computers were bought as an internet appliance with a really long expected lifespan — like a refrigerator or a telephone. Companies are hanging onto their old IE6 computers. Vista's flop means that Windows XP is still the corporate standard and IE6 comes with XP. Unless you have a technical or information-intensive job or are working at a new company, chances are you are on a highly locked down, old Windows XP computer that your employer begrudgingly bought to give you access to email and the intranet. Your employer doesn't want to upgrade your machine unless absolutely necessary. That usage pattern has caused IE6 to linger longer than other browsers. See how IE8 seems to eat up more of IE7's market share than IE6's?

Internet Explorer Browser Share

Not only does the number of IE6 users continue to be significant, the types of users seem to be desirable as well: internet n00bs who click on ads and buy what they see (with the money that was not taken by Nigerian 419 scams).

Technical people have little empathy for these types of users. The first thing we do when we boot up a relative's computer for home tech support is stop the malware/adware processes, install Firefox, and hide the IE icon. As developers, we know that a requirement for IE6 support translates into maintaining two code bases: one that uses all the goodness of the latest HTML and CSS standards and fast Javascript engines; and another that is a bundle of hacks to compensate for IE6's quirks. Many web development firms I know are starting to charge an additional 20%-30% to include IE6 support. They are not price gouging. This is probably less than the actual cost. The customer will probably invest an even larger percentage of additional resources to maintain the application.

For this reason, an increasing number of websites are discontinuing support for IE6. They have done the calculations and have decided that the convenience for the IE6 hold-outs is not worth the additional cost and drag on innovation. I don't mean to sound like a jerk, but big web properties (like Google, Microsoft, and Content Here) dropping IE6 is a good thing for everyone (almost):

  • Visitors will have a greater incentive to upgrade. If they can't upgrade on their own, they can make the case to their employers that running a 9-year-old browser is not acceptable.
  • The more modern technology will increase overall security.
  • Web sites and applications can be developed more cheaply and with higher quality.
  • The spending to upgrade outdated equipment will be good for the economy. Companies and households don't have to buy $2,000 laptops; they can probably get away with cheap netbooks.

This site never supported IE6. If you are stuck on that browser, I am sorry for the inconvenience that I have caused. But, I figure you are used to browsing broken websites by now :)

Apr 05, 2010

Django Action Item Follow Up

While moderating a comment on my "10 Django Master Class action items" post, I was inspired to evaluate how I am doing on these action items and whether they are helping. Below is a brief summary of my progress; but first a little background. Recently, I had the rare opportunity to rebuild (from the ground up) an application that I wrote for a client. The context was that the first version of the application was a prototype that I built to help demonstrate an idea to potential investors and customers. The prototype served its purpose excellently. It was able to evolve alongside the idea as my client got feedback and refined the value proposition. We came out of the prototyping phase with a strong vision and an excited group of investors and beta customers. To minimize costs I avoided refactoring the application and cut a lot of corners. By the end of the prototype phase, the idea had changed so much that we were really faking functionality by overloading different features. Still, for a ridiculously small investment, my client was able to develop and market test an idea. And now I get to build the application for real and apply the best practices that I learned about in the Django master class. Here is what I am doing and how it is working out.

  1. Use South for database migrations (adopted). I have grown so attached to South that I find it hard to imagine life without it. This is especially important because I am managing different environments and the object model is changing as I add new features.

  2. Use PostgreSQL rather than MySQL (adopted). I am steadily getting more comfortable with PostgreSQL. pgAdmin has been really helpful as I get up to speed with the syntactical differences from MySQL. So far, the biggest differences have been in user management and permissions.

  3. Use VirtualEnv (adopted). VirtualEnv + VirtualEnv Wrapper has been great. For a little while I was working on both the prototype and the actual application. VirtualEnv made it easy for me to switch back and forth. This will also be helpful when I upgrade to Django 1.2.

  4. Use PIP (adopted). I really like how you can do a "pip freeze" to create a requirements file that you can use to build up an environment.

  5. Break up functionality into lots of small re-usable applications (adopted). The prototype had one app. The production app that I am building has six. One of the apps contains all the branding for the application and some tag libraries. Templates in other apps load a base template from my "skin" app. The best part of this strategy is in testing and database migrations because you can test and migrate a project one app at a time. The hardest thing to figure out has been how to manage inter-dependencies and coupling. One strategy that has worked well for me is to focus dependencies on just a couple of applications. For example, I have a profile application that manages user profiles (extending the base django.contrib.auth.User model). I have other apps that relate to people, but I am careful to create foreign key relationships to the User model rather than to my profile model (see the first sketch after this list).

  6. Use Fabric for deployments (adopted). One word. AWESOME! I have scripts to set up a server and deploy my project without having to ssh onto the server. The scripts were not that hard to write. I took inspiration from some great posts (here and here). Now I can reliably push code (and media) with one local command. I am managing the development of another site running a PHP CMS and I am strongly considering having the team use Fabric for that as well.

  7. Use Django Fixtures (adopted). Managing fixtures in JSON has turned out to be really easy. I typically have two fixtures for each app: initial_data.json and <app_name>_test_data.json. initial_data.json mainly contains data for lookup tables. It is loaded automatically when syncdb (the Django command that sets up the database schema) is run. I typically create these files with the dumpdata command and then edit them manually.

  8. Look into the Python Fixture module (not adopted). I looked into this module but, to be honest, editing the JSON files is pretty easy so I don't see the need for it.

  9. Use django.test.TestCase more for unit testing (adopted). I have been doing a considerable amount of test-driven development (TDD). It all started when I wanted to rewrite the core functionality but needed to wait for someone else to re-build the HTML in the presentation templates. Now I have around 130 unit tests that I run before I commit any code. Focusing on unit testing has made me write code that is more atomic and easier to test. Now I think "how will I test this?" before I write any code (a minimal example follows this list).

  10. Use the highest version of Python that you can get away with (adopted). A big motivator for me here was when I upgraded my workstation to Snow Leopard, which ships with Python 2.6.3. Getting 2.6.3 on my server was a little more complicated. I wound up using a host that comes with Ubuntu Karmic Koala, which also includes 2.6.3. I am really pleased with Ubuntu, and it seems like most of the Django community is going that way.
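
To illustrate the decoupling strategy in item 5, here is a minimal sketch of two apps where content relates to django.contrib.auth.User instead of to the profile model. The app, model, and field names are hypothetical; my real project is structured differently, but the foreign key choice is the point.

    # profiles/models.py -- hypothetical app that extends the built-in User
    from django.contrib.auth.models import User
    from django.db import models

    class Profile(models.Model):
        user = models.OneToOneField(User)
        bio = models.TextField(blank=True)

    # articles/models.py -- hypothetical content app
    from django.contrib.auth.models import User
    from django.db import models

    class Article(models.Model):
        # Relate to User, not Profile, so this app has no dependency
        # on the profiles app at all.
        author = models.ForeignKey(User, related_name='articles')
        title = models.CharField(max_length=200)

        def __unicode__(self):
            return self.title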
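
And in the spirit of items 7 and 9, here is a minimal django.test.TestCase sketch. The fixture file, app, URL, and model names are hypothetical; the fixtures attribute loads the app's test data into a fresh test database before each test method runs.

    # articles/tests.py -- hypothetical unit tests for the app sketched above
    from django.contrib.auth.models import User
    from django.test import TestCase

    class ArticleTest(TestCase):
        fixtures = ['articles_test_data.json']   # hypothetical fixture file

        def test_author_sees_own_articles(self):
            author = User.objects.get(username='testauthor')  # from the fixture
            self.assertTrue(author.articles.count() > 0)

        def test_article_list_page(self):
            # The built-in test client exercises URLs, views, and templates
            response = self.client.get('/articles/')
            self.assertEqual(response.status_code, 200)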

I feel really lucky for the opportunity to rewrite an application and apply lessons learned. Too often you are stuck managing code that you (or someone else) wrote before you knew what you were doing. That is, before the functionality of the application was fully understood; before a feature of the API was available or known; before a more elegant solution was discovered. I am sure that I will continue to learn new things and want to apply them and I plan to continually refactor as long as I am involved with this project. But this full-reset has been a great experience.

Apr 02, 2010

Pragmatic Thinking and Learning vs. Knowledge Management

While reading Andy Hunt's excellent book Pragmatic Thinking & Learning: Refactor Your Wetware, I couldn't help but return to a conclusion that I reached long ago: "knowledge management," as an enterprise practice and class of software, is a false promise. Furthermore, traditional corporate training programs are doomed to failure.

I was first struck by this realization around ten years ago when I was working on a project for a department of the federal government. The premise of the project was to "capture the knowledge" from a generation of experienced staff who were on the cusp of retirement. This department was structured so that knowledge was concentrated in a minority of senior employees. Underneath them was a thin layer of mid-level staff and then a large group of juniors. The strategy was to videotape pre-retirees reminiscing about their experiences so that the department could somehow do something with that "knowledge." The idea was dead on arrival and the prime contractor (that originally pitched it) knew it. I remember suggesting an alternative strategy of setting up an apprentice program where people could learn by doing rather than watching TV; but I was laughed out of the room. They had no interest in "capturing knowledge." Their primary business was hiring retirees and staffing them as consultants at the department. Failure was more profitable than success.

Ever since that experience, I have been keenly interested in the process of learning. As a technologist and a consultant, I am always learning so I have developed tactics that work for me. What surprised me in reading Pragmatic Thinking & Learning is that there is actual scientific theory that supports many of the tactics that I employ. What I like most about the book is that it talks about thinking and learning as a personal process that you have to do yourself. The most a teacher or a computer can do for you is provide information — data. To turn that information into knowledge, you have to internalize it into something that is meaningful to you. You need to put the information into context with other things you know.

Most corporate professional development programs ignore this truth about learning. They practice what the book calls "sheep-dip" training programs where training classes "dip" employees in information that quickly wears off. The only way that you learn from these classes is to apply what you heard right away. The learning happens after the class. This is why I like the idea of "drop-in labs" so much. I think the industry is starting to accept this "learn by doing" philosophy. Knowledge management experts are talking less about repositories and more about communities and workspaces. Emphasis seems to be shifting from the assets to the learning process.

On a personal level, being more conscious of these ideas is helping me be more deliberate about how I learn. The book advises setting SMART (specific, measurable, achievable, relevant, and time-boxed) objectives and using techniques like SQ3R (Survey, Question, Read, Recite (summarize), Review). But most importantly, learning has to be fun because we learn best through play. Yet another reason to work in a field that you love.

Pragmatic Thinking & Learning lives up to the high standards set by The Pragmatic Programmer: From Journeyman to Master and other books on the Pragmatic Bookshelf. It will be required reading for anyone that I hire.
