<!-- Content Here -->

Where content meets technology

Sep 13, 2010

Web content retention requirements

During a recent web content management system selection project for a client in a highly regulated industry, I ran across a rather advanced content retention requirement that I have not seen before — at least not in web content management. This requirement was also new to the vendors that we were working with. I am curious if anyone has encountered a similar requirement and, if so, how it was satisfied.

The general gist is that the client does not want to retain outdated versions of assets outside of its mandated retention window of ten years. I am familiar with requirements for purging assets based on particular rules but versions of assets. Here is the scenario:

Purge old versions

A monthly process searches through the content repository and deletes the following:

  • Unpublished or replaced versions of assets that have been not been on the site for ten (10) years.

  • Entire assets that have been removed from the site greater than ten (10) years ago.

Based on these rules, the following items are deleted on June 1st 2020:

  • A version of the terms of service that was superseded by another version on June 1st 2010.

  • A promotional block (and all versions) that expired on June 1st 2010.

  • An article (and all versions) that was archived on June 1st 2010.

Purged assets and versions of assets are not recoverable by any means.

I first wondered why I had not run into this requirement before. I rarely see content retention requirements in web content management. Retention is more of an issue when it comes to email archiving and records management. Is purging old content necessary for WCM? Is it reasonable for WCM? One thing that makes web content different from these other forms of content management is that web content is deliberately published to an audience whereas other content may contain private communications between individuals. Consequently, a company should have greater confidence in their web content as being official corporate information. Furthermore, once something is published, it is out there. Infinite copies are made. Destroying the original won't make a difference.

My second thought was about the feasibility of satisfying this requirement. It strikes me that in order to meet this requirement, the CMS's API must support the ability to query and manipulate versions of assets. The CMS should also record the archival date (the date that the version was superseded by a following version) of each version. Otherwise queries may have to look at the publish date of the following version to determine the archival date of a version.

If you have experience or ideas on this issue, I would love to hear from you in the comments or via email (seth at contenthere.net)

Aug 05, 2010

Deane Barker: Editors Live in the Holes

A few days ago I read Deane Barker's excellent post Editors Live in the Holes (go ahead and read the post and then come back) and I have been thinking about it ever since. I have had the same experience several times and it is a good reminder for developers to pay special attention to configuring and testing the rich text editor. As Deane points out, it is too easy for developers to disregard "the holes" as a contributor problem, not a system problem.

To get it right, the holes need to be jointly owned by the designers, developers, and content contributors. Designers need to design for flexibility. Developers need to do everything they can to make contributors successful. But this raises something of a chicken and egg problem — at least for new CMS implementations (as opposed to migrations). In these projects, content entry typically occurs after the system is considered complete. This means that the designer and developer need to anticipate what rich text capabilities (formatting controls and the styles that control the display of rich text) the contributors will need. This is particularly important in the ever-present "generic page" content type that is typically used for the many one-off (odd ball) pages that exist in any website.

I have found two good techniques to get around this problem. First, it is good to test the rich text editor with a few of the more challenging one-off pages on the site. Take a page with embedded images and objects (like perhaps a Google map) and formatting and try to reproduce it in the rich text editor. Don't disable the rich text editor and edit the source. That is cheating. If it turns out you can't do it without pulling your hair out, you need to come up with a work around. If it is a really important page, you might need to develop a special content type and/or presentation template that does some of the work. If you find that there are too many challenging one-off pages to choose from, you might step back and consider enforcing more uniformity between pages. Otherwise, you will probably not be getting all of the value (content reuse and manageability) out of a CMS.

The second technique is to build a "style guide" page and place it in some discrete area on the site. The style guide page is a generic page that contains examples of all the stylings that are available to the contributor. For example, every heading level, paragraphs, lists (ordered and unordered), tables, embedded images, etc. The content contributor can visit this page to get an idea of what is possible and then open it in edit mode to see how the formatting was executed. The process of building and reviewing the style guide page is a useful forum to get designers, developers, and contributors together to collaborate and align. The fact that it is so tangible grounds everyone in the real capabilities of the platform. The style guide page is also the first place to check updates or enhancements to styles after launch.

At the end of the day, designers, developers, and contributors all want the site to be a success. They can't just claim victory on their little piece ("the mockups were approved," "we got out of QA," or "I got my page to preview!"). Editors may live in the holes but everyone has to keep the holes clean.

Jul 28, 2010

Will Day stay committed to web standards under Adobe's ownership?

I JUST heard about Adobe's acquisition of Day Software and have to admit my first reaction was total disappointment. I always admired Day's commitment to architecture and standards. Day is one of the few upper upper tier web content management companies to stay focused on the web — not just as a place to dump files but as medium for information exchange and creativity. David Nuescheler and Roy Fielding seemed to have a vision for how systems could openly interoperate through lightweight architectures like REST and standards like the JCR. Day has also been a great contributor to the Java community by pushing lighter weight technologies like OSGi and server-side JavaScript to keep Java relevant in a trend toward dynamically typed, scripting languages like PHP, Python, and Ruby. Day promoted this vision through the products they sold and also by contributing to open source projects.

I feel the complete opposite about Adobe. Adobe seems more interested in conquering the web than improving it. While Adobe has contributed several technologies that lowered barriers to entry, I think the overall net impact has been negative. Yes we have more content on the web thanks to Adobe, but much of that content is locked in Adobe's PDF and Flash formats where it is less accessible (and maintainable) than plain old DHTML. Adobe customers tend to overuse Adobe technologies like PDF for online forms when HTML would have done quite well. Flash-based navigation is also a problem; I can't tell you how many restaurant websites I have been where you can't link to a specific page because the whole site is one Flash movie. As a web consumer, how many hours have I waited for Acrobat reader to install/upgrade plugins (which further degrade performance) before allowing me to read PDFs that I clicked on? Expert tip: disable the PDFViewer plugin for Safari. Don't even get me started on DreadWeaver.

As you can see, my frustration with Adobe has been building for quite some time. It felt good to let that out. I haven't talked to David or Roy about Adobe so I don't know their opinion of Adobe before or after the merger talks started. I hope that Adobe permits them (even better, supports them) to continue their good work in web-based architectures. More likely Adobe is buying Day for its CRX repository and CQ5's workflow and digital asset management (DAM) functionality to connect creative teams using Adobe Creative Suite (Why couldn't they have just bought vjoon or WoodWing?) If this is the case, I hope Adobe will invest more in web publishing than they did JRun.

Jul 22, 2010

Keeping your content DRY

After over 10 years of working in content management, I have come to realize that there is only one way to learn the value of managing structured information: the hard way — and that way is only 50% effective. People can intellectually accept concepts like content re-use and content/layout separation, but in the heat of the moment, few can resist the siren song of a word processor and the clipboard. Pasting in a bunch of text into a rich text area (and then re-formatting it) provides so much more instant gratification than data entry into the fields of a structured content form. It is only after a number of painful global content changes that people come to realize that the value of all that painstaking WYSIWYG work has a very short shelf life. It is not until a migration onto another platform that one becomes aware of all that semi-redundant content. But that realization only happens around half the time. The other half of the time the site's unmanageability is blamed on the CMS. A clear sign that the content manager didn't make the connection is when there is a requirement that the new CMS have a global search and replace feature.

As someone who has seen many companies succeed and fail (and really fail) with content management, it is easy for me to notice these patterns. But that doesn't mean that I can make anyone short-circuit his/her learning process. If I were able to forcefully impose a highly structured content model on a client, all they would notice was the complexity of the content entry forms. They would take for granted the downstream benefits. The best you can do is gently guide and hope that guidance will lead to recognition when the site becomes unmanageable. I don't get too worked up about it. If I get frustrated, I can just talk to my friends in the DITA/XML advocate community. Their pain in working with technical documentation teams is way worse.

In the software development world, we have the concept of DRY (Don't Repeat Yourself). The idea is "every piece of knowledge must have a single, unambiguous, authoritative representation within a system." I call the opposite of DRY WET (Write Everything Thrice) or DAMP (Developer Accepts Maintenance Problems. Hat tip to Brian Kelly). This means copying and pasting code (rather than referencing it) or writing the same data over and over again. Part of the development process is recognizing patterns and coming up with ways to reduce redundancy. Good developers are constantly thinking about maintaining the code they write because they will inevitably need to add a feature of fix a bug. And the feedback cycle is really short for developers. You write a bit of code, test it, fix it, write some more code, test that and the first code you wrote, fix it.... If you did anything stupid, the time you have to wait before suffering for it is usually short. I am not saying that all developers practice DRY, but they have a better track record than content contributors.

Most content contributors don't have that short feedback loop. Too often, content is considered a "set it and forget it" initiative. You publish and move on. But I am seeing two positive trends in the content management industry that may shorten the feedback loop. First, there has been some great thought leadership around solving the "post launch paradigm". Second, many CMS vendors are building in analytics and multivariate testing functionality that encourages the content manager to constantly tweak a website to maximum performance. My hope is that awareness of this functionality will compel buyers to think of their content in a more dynamic way — something that evolves and improves like software. Then maybe we will hear content managers talking about their websites being DRY, WET, or DAMP.

Jul 21, 2010

Work Breakdown Structure vs. Deadlines

One of the most common points of friction between project managers and developers is planning work. Most programmers hate creating work breakdown structures (WBS). You can't blame them, accurately predicting steps and effort required to build undesigned software is impossible. Yes, you heard that right. Software development planning is impossible — at least for someone who likes precision, which most programmers do.

The problem is that every software development project is a unique collection of thousands of tiny details that each have the potential to suck up enormous amounts of time. The traditional, PMI-sanctioned WBS technique forces developers to name all the activities that will be required, sequence them with dependancies, and then create an estimate of each one. The assumption is that if you did the planning right, you should just be able to follow the steps and come out the other end on time and on budget. This also implies that if you didn't blindly follow the steps, the project plan was wrong — or you were too incompetent to follow the steps correctly. But with the fluid nature of software development, the project plan is always wrong. I used to think that precision would increase with finer granularity. The more lines in the project plan, the more accurate it would be. But now I think the opposite is true. The more tasks you add, the more guesses you make and the greater the overall variance. Even if you guessed every task right, there were probably just as many tasks that you forgot to add. And there are also lots of steps that you find you didn't need to do too.

While predicting a WBS is impossible, developers can get better at setting and meeting deadlines. There is a small nuance between setting a deadline and estimating tasks in a WBS. On the outside, the difference is so small that no one will notice. Nobody will care because they just want to know when the work will get done. But there is a difference. The WBS technique forces a linear accounting of all the work that needs to be done. Creating a deadline is more like adding a constraint (that you hope is reasonable) to help guide and prioritize the work that you wind up doing. Comparing the two is like comparing launching a rocket to flying a plane. PMI-style planning is like shooting a rocket: doing all the calculation at the beginning and then hoping that you accounted for everything before ignition. Setting a deadline turns the rocket into an airplane by adding a pilot that can steer. Realizing you can make adjustments after take-off transforms the pre-flight calculations from a fixed flight path to a map that you can use to make in-flight decisions. A deadline (either the final deadline or an intermediate milestone) is where you think you can be at a certain point of time (or after a certain amount of effort). When creating a deadline for yourself, you don't try to think of every possible task it will take. It is more like eyeballing distances than counting steps.

I became conscious of this distinction the other day when I was on a bike ride. I take pride in the fact that I usually get home within a few minutes of the time I tell my wife I will be back. Lots of times I pull in right at the minute. Putting on my planner hat, if I was asked how long a bike ride would take, I would want to know the exact route and measure the distance and slope and windspeed and make assumptions about average speed. When I put on my cycling helmet, I realize that most of those variables are under my control. I can shorten the route. I can ride faster. I can take an alternate road to stay out of a headwind. Because I know my cycling ability and the terrain so well, I make these adjustments without even thinking about it.

I know you are thinking that software development is not like riding a bike. There are all these externally imposed requirements, constraints, and dependencies that need to be accounted for. But think back and ask yourself: how many of these factors are added specifically for the purpose of creating the WBS? I feel like developers work against themselves by asking for more and more estimation inputs and being more prescriptive of how they will work. There is no way that every detail can be accounted for and every detail that you do add will constrain your ability to make adjustments.

For estimation purposes, requirements should represent boundaries of an acceptable solution. With this understanding, a developer needs to produce a reasonable deadline based on similar work and explain any assumptions made. An overall deadline or intermediate milestone shouldn't be overly ambitious. It should account for unknowns. If a deadline is not acceptable, scale back the scope until an acceptable deadline can be achieved. Through the course of the project, new information is going to present itself: the client is more particular than he was able to articulate; the available components are not as good as expected; new features are added to the scope. When any of these things happen, you make adjustments. You might be able to work a little more efficiently. You might be able to scale down scope in other areas. You might be able to delegate work back to the client. Or, you might just have to extend the deadline.

These adjustments require a decent partnership between the developer and the client where the deadline is jointly owned. It doesn't work when one party feels like the other is obligated to deliver no matter what. In the bicycle analogy, when two people go for a ride, they decide where they want to go. Usually the conversation plays out where one rider asks the other what sort of ride he is up for. The second rider may say he needs to get back in 2 hours and wants to get in some climbing. The first rider will suggest a route that he is familiar with. When they encounter construction that makes a road impassable, they may be able to find an alternative route that is just as good; they can hammer home over a longer route in a paceline; or they can call home to say that they are going to be late. Whether the first rider should have known about the construction is debatable (Did the construction just start? Was the overall distance to ambitious? Did the route not allow for adjustments?) but debating is not going to get anyone home sooner.

With experience, you do get better at making more realistic deadlines. And, more importantly, you also get better with time management. You will build an awareness of where you are in the overall process and know early if you are falling behind schedule. In the cycling analogy, you periodically glance at the clock, your current speed, the slope of the road, and which way the wind is blowing. In software development, you are looking at things like the calendar, the productivity, and the rate of defect identification. With this information rolling around in your subconscious, you start thinking about options instinctively. The client perception is that you planned well. But you really didn't. You managed time well. The up front estimate was just one of the many constraints that you juggled when developing the solution.

Jul 19, 2010

Open source project filtering

Roberto Galoppini has an interesting case study on selecting an open source project management tool. In it, he describes his SOS Open Source methodology for filtering open source projects by looking at a number of factors organized into three categories: sustainability, industrial strength, and project strategy. The case study doesn't go into much detail but Roberto has built a tool that aggregates quantitative and qualitative project information from a number of disparate sources and builds scores. I saw a demo around 6 months ago and was impressed by the graphs he was able to create. While this technique cannot be expected to make a technology decision for you (you need to know your requirements and to have hands-on experience for that), it can be used to filter down the market and help you decide where to invest your evaluation energy.

Despite its ubiquity, open source software is still unchartered territory for most technology buyers. That is not to say that most companies don't use open source software, nearly all companies leverage at least open source utilities, libraries, and infrastructure (operating systems, databases, web servers, etc.). Many companies use open source business applications too. It is just that many companies adopt open source technologies in haphazard and spontaneous ways — at least not with the same level of conscientiousness put into an expensive commercial software purchase. While I don't think buyers should put much stock in Gartner's or Forrester's opinion of technology, it barely exists for open source technologies. That point was hammered home in a recent a Olliance webinar when one of the panelists said that Gartner and Forrester offer no value on open source. All the CIOs on the panel leveraged their peers and internal experts rather than their analyst subscriptions.

Ideally, technology procurement should be able to sense if there is something wrong going on with the project. The information is out there and you can get it in real time (as opposed to commercial software companies that only report quarterly). You just need to know where to look. Tools like SOS Open Source provide a useful high level picture to quickly highlight potential issues that should be investigated. It is unlikely that mainstream analysts will be able to develop this level of awareness for open source projects so I think there is great opportunity for these data aggregation tools.

Jul 02, 2010

Would anyone "Like" this blog?

One of my newspaper clients recently added the Facebook "Like" button to their site and saw large increases in traffic. I was thinking of doing the same thing for Content Here but then I started to wonder "would I Like Content Here?" Don't get me wrong. I LOVE writing this blog and I also find the posts tremendously useful as a resource. Re-reading old posts is a great way for me to recreate an idea that I once had in my head or re-use an explanation for one of my clients. Sometimes I catch myself sending link after link to a client.

So while I LOVE this blog, I am not sure that I LIKE it — at least not in a Facebook kinda way. I guess it all boils down to how I use Facebook: I use it for purely social purposes. I keep strict separation between my Facebook world (where I connect with friends and family, many of whom are not technical) and my professional (Twitter and LinkedIn) world. Some contacts span both worlds — mainly people who I know professionally but also hang out with outside of work. On Facebook, I don't post about anything work-related; just as I don't bore dinner guests with esoteric content management theory or programming stuff. There I talk about things that many of my friends and I are passionate about or would find amusing. On Twitter and this blog, I write about things that I find interesting professionally. I avoid personal subjects like my family, political views, and silly humor. I have a feeling that others either consciously or unconsciously maintain this kind of barrier. How many people would want to confuse their non-technical mother-in-law and the rest of their social network by "liking" the post Code moves forward. Content moves backward? Probably about as many people who want their boss to see their beach pictures which were taken on a sick day.

This probably infuriates Facebook because they want to manage the full social graph — not just half of it. But I don't think they have a great answer for people like me. Some of my friends are working around this issue by creating two Facebook accounts: one for business and one for social. My good friend Brice Dunwoodie has a Facebook profile called Brice Dunwoodie SMG for his "semi-public self." But this isn't really a good solution for Facebook because it fractures their social graph. In order to pull these social and professional aspects together, Facebook would need to get really clever about its privacy and filtering settings which are already overly complicated and controversial.

If Facebook can't have all the social graph, which half would they want? Are they be satisfied with the social side of the social graph which they already dominate? Or would they prefer the professional side (currently owned by LinkedIn)? Historically, Facebook ad revenue has been low considering their huge traffic volumes. This makes sense because general interest content (like news, entertainment, personal statuses, and other content that people might "like" in a Facebook kind of way) has notoriously low CPM rates; not like niche publications that have their audience in a buying state of mind and know what types of products they are interested in. Facebook's bet seems to be that, through their social graph, they can improve the targeting problem for general interest content. If they are successful, they will achieve that lucrative formula of high traffic volume AND high CPM. If they are not successful, they will probably need to think of some other way to monetize that large but distracted audience.

Jun 30, 2010

So, this business analyst walks into a car dealership

A customer struts into a car dealership, slams a 200 page requirements document down onto a salesman's desk, and triumphantly declares "I know exactly what kind of car I want to buy." The startled salesman opens the document to a random section and starts to flip through a few pages that describe a lug nut in excruciating detail. He looks at another random section and sees requirements about how the steering wheel should be joined to the steering column. After regaining his composure, the salesman looks up and says "from this document, I can definitely see that you are looking for a car. What do you want to use it for?" The business analyst suddenly looks confused and says "I don't know. I don't drive."

This is not just a lame joke. It describes a scenario happens all the time in CMS selections. There are two main problems here. First is the obvious problem that the customer believes himself an expert in cars because he has done a ton of research but he doesn't have the critical experience of having driven one. He can name all the features of a car and knows what they do but he hasn't had to use them. The second issue is more subtle. His 200 page requirements document is more like a design specification for a product that has already been built. It goes into details that are unnecessary like how the steering wheel must be connected to the steering column. What kind of penalty does he give if the steering wheel is connected in a different (and perhaps better) way? More importantly, there is no way his requirements document can be exhaustive. It would really have to be 20,000+ pages to cover every detail with same depth. So entire aspects of the car are probably omitted. Maybe it was something important like which side of the car the steering wheel is on. Rather than try to design your own car in a vacuum and then go around and see which one matches it, it would be better to draw up some coarse filters (price, intended usage, etc.) and then look at cars that passed the filter in their totality and see which one feels right.

This sounds obvious for car buying but you would be surprised how many CMS buyers collect requirements like that car customer; Or do the abridged version where they just name countless features (which in car shopping would produce a list like "6 cup holders, 1 gas pedal, 1 brake pedal, 1 clutch (optional), 5 gears, 3 windshield wipers, 6 windows, 4 wheels, 4 tires, 1 spare wheel/tire ... "). In many cases the requirements are gathered by people who have never used, nor intend to use, the CMS. They can't paint the bigger picture of the user, the task, and the content.

By focusing on how people work, rather than the features themselves, CMS evaluation criteria can identify features that are important and with enough context to understand which implementations of that feature will make it useful. In the car dealership story, if the customer walked into the dealership and said "I drive like a maniac and the wheels of my last three cars fell off," the salesman would not only know that the customer needed lug nuts but really beefy lug nuts, a good suspension, and perhaps a driving lesson.

Scenarios are the best way to capture this context for a software selection. A good scenario will describe the intent behind the task (what is the user trying to accomplish?), the context (what time, resources, and information does the user have?) and the flow (how does the person work? Who else does he need to collaborate with?). In the process of documenting a scenario, a number of features will be identified — features you might not even think of in a requirement brainstorming session. After writing a scenario, I typically list features at the bottom of the scenario to call out what functionality was used. Scenarios don't have to be long or comprehensive. Usually 1/2 to 1 page will capture enough of the story to understand what needs to happen.

To beat on the car buying analogy once again, you could think of a scenario like the route for a test drive. If you live in a city and rarely use a highway, the best test drive would be to drive in traffic and try to parallel park and park in your neighborhood parking garage. That would be more informative than driving on the interstate. If you test drove all the cars on the same route you would notice some big differences; like that you can park the Honda Fit in a compact car space but you can't even get the Ridgeline into the parking garage because the turning radius is too big. Your average car dealer probably will not give you much flexibility on your driving route, but your CMS vendor will (or should). Use that access to your advantage and create the most realistic driving conditions possible.

The car buying analogy breaks down in one key area. When you buy a car, you sign the paper work and then you drive it off the lot. Content management systems are not like that. Before you can use a CMS, you need to implement the software to support your content, processes, and web design. You need to configure, customize, and extend the platform. Scenarios will help this process because, once you buy the software, they turn into the user stories that will drive your implementation planning and long term road map. Some user stories will be achievable by configuring out of the box functionality; others will take more effort.

So when you find yourself slogging through a spreadsheet with hundreds of rows of requirements, think of that car buyer and ask yourself "are these requirements really going to help me find a CMS that I will be able to use to manage my website(s)?" If you are honest with yourself, the answer will probably be "no." If it is "no," put away the spreadsheet and start writing scenarios.

Jun 28, 2010

HTML production for CMS implementations

Most new site CMS implementations (as opposed to site migrations from one CMS to another) start off with a set of HTML mockups. This can be a convenient starting place because, in addition to showing how the pages should look and informing the content model, having the HTML gives a good head start to presentation template development. Ideally the template developer just has to replace the sample "Lorem Ipsum" text with a tagging syntax that retrieves real content from the repository. There are even some graphical tools that help a developer map regions on the mockup with content from the repository. However, often moving from HTML mockups to presentation templates isn't so smooth. Sometimes the HTML has to be re-written from the ground up.

The most common source of problems is HTML that is too specific. This usually occurs when the designer/developer who produces the mockups is accustomed to building static HTML websites where she has full control over everything. HTML and CSS for a CMS implementation have to account for the fact that control is shared between the template and the content contributor. While the template controls the overall layout, the content contributor controls the navigation, text, and images, and (with the help of a rich text editor) can even style body content. HTML code that is rigid and brittle breaks when stretched by unanticipated content. Here are some things to look out for.

  • Hard-coded height and width dimensions on image tags. Most content contributors don't know the first thing about aspect ratios. They upload a picture and don't understand why it is squished on the page. While most CMSes can automatically scale images (and even if they can't, the browser will), they can't all reshape them. While some CMSes support cropping functionality for thumbnails, few content contributors know how to use it to precisely shape an image. I usually recommend setting only one dimension (usually the width) and then letting the other dimension (usually the height) do what it needs to do. If you really need to control both, you can use this little background image trick:


    <div class="picture" style="background: url(<<horizontally scaled image path>>) no-repeat; height:150px;"></div>

    This uses the CMS's image scaling to set the width and then vertically crops the image after 150 pixels by making it a background image.

  • Overusing element IDs. When you are only building a few pages and you want very direct control over elements, there is a temptation to write CSS that references specific element IDs rather than classes. In some cases this makes sense: for example, when there is only one global left navigation component. However, it makes less sense for anything that a content contributor might have control over, like items in that navigational menu or anything else that repeats. I haven't used DreamWeaver (DreadWeaver, as I like to say) in years, but I suspect that its HTML/CSS auto-generation prefers IDs over classes because that is where I see it the most. The worst case I have seen was a sample search results page with every search result individually styled with element IDs.

  • Over-complicated HTML. HTML only gets more complicated when it is infused with template syntax, so it is best to start with HTML code that is as simple and terse as it can be. If a designer is still using nested tables to position things, have him work in Photoshop. The more styling you can do in CSS, the better. This will make templates cleaner, more efficient, and easier to manage. Plus, your CSS will survive a migration to another CMS better than your template code will.

  • Using images rather than text headings. While the font control afforded by images is nice, avoid using images for anything dealing with the navigation or page names. Otherwise, content contributors will not be able to create new pages or reorganize the navigation without a designer to produce images. If you have a top-level navigation that is unlikely to change, you can compromise by building images just for the top-level page names. A decent strategy is to code the HTML like


    <h1 class="section-heading <<dynamic section name in lowercase>>"><<section name>></h1>

    for example:


    <h1 class="section-heading about">About</h1>

    This way, if a content contributor introduces a new section that doesn't have an image or style yet, there is a decent fallback of styled text.

  • Too many layouts. Most web content management systems expect an overall page layout template (also known as a master page) that is used for nearly all of the pages of the site, plus content-type-specific templates that render in the "content area" in the center of the page. Things like the header, footer, and global navigation components go in the page layout template. In many systems, these two templates are largely unaware of each other because they are rendered at different times within the page generation process. The trick is to determine which portions of the page to put in the global template and which to put in the content-type-specific templates. The more you put in the content-specific templates, the more flexibility you have, but you also wind up with redundant code that adds management overhead. You also want to make sure that the design does not specify too many options for content presentation templates. In addition to adding maintenance overhead, this confuses users. When lots of variability is required, a good technique is to design the implementation to allow contributors to build pages with blocks of content. This way, the presentation template just has to define "slots" that contributors can fill (or not fill) with content.
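A rough sketch of that division of labor, reusing the mockup's `<<...>>` placeholder convention (the actual mechanism depends entirely on your CMS's template language):

```html
<!-- Global page layout (master page): header, footer, and navigation
     live here and are shared by nearly every page on the site -->
<body>
  <div id="header"><<global navigation component>></div>
  <div id="content">
    <!-- the content-type-specific template renders into this area;
         it defines slots that contributors fill (or leave empty) -->
    <div class="slot main"><<slot: main content blocks>></div>
    <div class="slot rail"><<slot: optional sidebar blocks>></div>
  </div>
  <div id="footer"><<global footer component>></div>
</body>
```

The fewer assumptions the content-area markup makes about what fills each slot, the fewer layout variants you need to build and maintain.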

Most of these tips will come more naturally to an advanced HTML developer who really knows his stuff than to a pure designer using design tools that can generate HTML. However, even the best HTML developers can have mental lapses when they get into a production groove. It is a good idea to understand the HTML producer's skill set before assigning the task of HTML production, and to set expectations. Otherwise, you will probably get a rude awakening when template development is scheduled to start. If this type of HTML production is new to your team and you would like them to learn it, account for the learning curve by holding frequent reviews of the HTML code as it is being produced. Start with the simplest content type (like a generic page) so you can focus on the global page layout and get alignment on static vs. variable components. Over time, your team will instinctively notice HTML code that works for the mockup but will be problematic in a presentation template.

Jun 24, 2010

Kindle needs a "lend" button

Whenever Amazon announces news about its Kindle product, like the recent Kindle price drop, I find myself referring to my reasons for not buying a Kindle. So far they are working out pretty well for me. The strongest argument has been the inability to share (first on the list). When I buy a physical book, which is usually not much more expensive than the digital version, I don't just buy the ability to read the book myself. I am also buying something that I can share with others. Frequently I mention a book to someone and grab my copy to lend. And roughly half of the books that I read are on loan from others. You don't get this experience from a digital book, and I would miss it.

Personally, I would reconsider my decision not to buy a Kindle if it had a "lend" feature. Here is how it would work. If I owned a digital copy of a book, I could click a "lend" button that would bring up a list of my friends. I would be able to set the length of the loan. During that period, the lendee would have access to the book but I would not. As the owner of the book, I could retrieve the book and, in doing so, remove it from the lendee's library. This feature could also be enabled for public and academic libraries.
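The rules above (one copy, exclusive access, owner can recall at will) fit in a few lines of code. This is a hypothetical sketch of the proposed feature's logic, not any real Kindle API; the `Book` class and its methods are invented for illustration, and loan expiry is omitted for brevity.

```javascript
// Hypothetical model of the proposed "lend" feature: while a book is
// on loan, the lendee can read it and the owner cannot; the owner can
// recall it at any time, removing it from the lendee's library.
class Book {
  constructor(owner) {
    this.owner = owner;
    this.lendee = null; // null means the book is on the owner's shelf
  }
  lend(friend) {
    if (this.lendee !== null) throw new Error("already on loan");
    this.lendee = friend;
  }
  recall() {
    this.lendee = null; // loan ends; lendee loses access
  }
  canRead(person) {
    // exactly one person has access at any moment
    return this.lendee !== null
      ? person === this.lendee
      : person === this.owner;
  }
}
```

The point of the model is that access transfers rather than copies: at no time can two people read the same purchased copy, which is what would make a digital loan behave like handing over a physical book.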

This move would be great for Amazon (or a competitor that did it first). It would encourage people to buy the reader device when their friends buy one; it makes the reader more valuable and viral. It would also soften feature/function competition: you would buy the reader your friends have, not the flashiest product with the best C|Net review. Publishers would probably not be so keen on the idea because they would see fewer eBook sales. I think this issue could be addressed by Amazon increasing the digital copy price and sharing more revenue with the publisher. For reference books and classics, the publisher could see sales to people who borrowed the book but wanted their own copy.

This reminds me of Kevin Kelly's classic post "Better than Free," where he lists characteristics of content that make it worth paying for. One of the characteristics is "Embodiment," which digital content lacks. Making a digital edition virtually transferable (and not copyable) would certainly add embodiment because it would make it behave more like a physical asset.

Amazon (or any other digital reader maker): please steal this idea (if you haven't already thought of it yourself). I would really like to see lending digital content happen.
