Content Here

Where content meets technology

Feb 25, 2010

Why CMS Vendor Acquisitions are Bad for Customers

It just occurred to me that my recent quotes on Fierce Content Management make me sound like the Statler and Waldorf of the content management industry. I really don't mean to sound so negative but, from where I sit, software company acquisitions are nearly always bad news. My clients are software customers and my job is to help them be better software customers by making better technology decisions. Mainstream software analysts, who are mainly speaking to the vendors, spin acquisitions in terms of what they mean for competitors. They talk about how the acquisition changes the landscape and competitive dynamics. Mainstream analysts get pretty excited by mergers and acquisitions. It means that there is movement that needs to be understood and explained. It means that they get to re-answer the question that their software vendor and investor customers always ask: "who is the best company in the market and why?"

The software customer has a different question: "which one of these products is the best for me right now and in the foreseeable future?" Market dominance plays a role, but customers should be more concerned about their requirements and about whether the vendor will stick around and continue to focus on what matters to them. Customers should care less about who acquired who (today and yesterday) and more about who is at risk of getting acquired tomorrow. Acquisition adds uncertainty and risk that a customer would do best to avoid.

The dirty little secret about software company acquisitions is that, in most cases, they have nothing to do with technology. When one software company buys another, it is usually buying customers. Implicit in the transaction is the understanding that the customers are worth something and the acquiring company can more effectively make a profit from them.
An acquired CMS customer is valuable because switching costs are really high. To switch to another product, a customer would need to rebuild its web site, migrate its content, and retrain its people. Unless the product is totally doomed, the customer will probably stick around and pay support and maintenance for a while. The acquiring company can increase profitability by cutting down on product development, support, and sales. Most of these cut-backs directly affect the customer. The vision for the product will get cloudy. Enhancements will come out slower. Technical support and professional services will be less knowledgeable. Market share will gradually decline. In some cases, the product will be totally retired in favor of an alternative offering by the same company. If there is an option of terminating support and maintenance, it is probably a good idea to exercise that option because the value of the service is likely to decline steadily.

The sad thing is that this dynamic is practically built into the traditional software company business model. A typical software company burns through investments to first build a product and then build a customer base. At a certain point, the growth trajectory will flatten (or decline) and investors will want to move their money into another growth opportunity. Some companies will be satisfied by having achieved a sustainable business. Others will cash out by being acquired. Others will look to grow by acquiring steady (but declining) revenue streams. The scariest companies to deal with are the acquiring companies — what I call "portfolio companies" that buy up products for their customer bases and then decide what to do with the technology. When you are a customer of one of these companies, the future of the software you bought is vulnerable to the shifting attention of the vendor. If the vendor decides to keep the product around, it means they can successfully drain revenue from you. If the vendor gives you a "migration plan," it means that they have another product that is in an earlier stage of the same destiny. Neither case is good.

Feb 23, 2010

NoSQL Deja Vu

Around thirteen years ago, I helped build a prototype for a custom CRM system that ran on an object database (ObjectStore). The idea isn't quite as crazy as it sounds. The data was extremely hierarchical with parent companies and subsidiaries and divisions and then people assigned to the individual divisions. It was the kind of data model where nearly every query had several recursive joins and there were concerns about performance. Also, the team was really curious about object databases so it was a pretty cool project.

One thing that I learned during that project is that (at least back then) the object database market was doomed. The problem was that when you said "database," people heard "tables of information." When you said "data" people wanted to bring the database administrator (DBA) into the discussion. An object database, which has no tables and was alien to most DBAs, broke those two key assumptions and created an atmosphere of fear, uncertainty and doubt. The DBA, who built a career on SQL, didn't want to be responsible for something unfamiliar. The ObjectStore sales guy told me that he was only successful when the internal object database champion positioned the product as a "permanent object cache" rather than a database. By hiding the word "data," projects were able to fly under the DBA radar.

Fast forward to the present and it feels like the same conflict is happening over NoSQL databases. All the same dynamics seem to be here. Programmers love the idea of breaking out of old-fashioned tables for their non-tabular data. Programmers also like the idea of data that is as distributed as their applications are. Many DBAs are fearful of the technology. Will this marginalize their skills? Will they be on the hook when the thing blows up?

I don't know if NoSQL databases will suffer the same fate as object databases did back in the 90's but the landscape seems to have shifted since then. The biggest change is that DBAs are less powerful than they used to be. It used to be that if you were working on any application that was even remotely related to data, you had to have at least a slice of the DBA's time allocated to your project. Now, unless the application/business is very data-centric (like accounting, ERP, CRM, etc.), there may not even be a DBA in the picture. This trend is a result of two innovations. The first is object-relational mapping (ORM) technology, where schemas and queries are automatically generated from the code that the programmer writes. With ORM, you work in an object model and the data model follows. This takes the data model out of the DBA's hands. The second innovation is cheap databases. When databases were expensive, they were centrally managed and tightly controlled. To get access to a database, you needed to involve the database group. Now, with free databases, the database becomes just another component in the application. The database group doesn't get involved.
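To make the ORM point concrete, here is a minimal sketch of the idea (in Python with SQLAlchemy, purely for illustration; the class, table, and file names are made up). The programmer defines classes and queries against the object model, and the relational schema follows from the code rather than from a DBA-designed data model:

```python
# Minimal ORM sketch (SQLAlchemy; names are hypothetical).
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"             # the table is derived from the class
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    city = Column(String(100))

engine = create_engine("sqlite:///crm.db")   # the database is just another component
Base.metadata.create_all(engine)             # schema generated, not hand-written DDL

with Session(engine) as session:
    session.add(Customer(name="Acme Corp", city="Boston"))
    session.commit()
    # the query is expressed against the object model; the SQL is generated
    boston_customers = session.query(Customer).filter_by(city="Boston").all()
```

Nobody in this workflow ever asks a DBA to review a schema or tune a query, which is exactly the shift I am describing.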

Now that the database is a decision made by the programmer, I think non-relational databases have a better chance of adoption. Writing non-SQL queries to modify data is less daunting for a programmer who is accustomed to working in different programming languages. Still, the programmer needs good tools to browse and modify data because he doesn't want to write code for everything. Successful NoSQL databases will have administration tools. The JCR has the JCR Explorer. CMIS has a cool Adobe AIR-based explorer. Both of these are repository standards that sit above a (relational or non-relational) database, but the explorer tools were critical for adoption. CouchDB has an administration client called Futon but most of the other NoSQL databases just support an API. You also want to have the data accessible to reporting and business intelligence tools. I think that a proliferation of administration/inspection/reporting tools will be a good signal that NoSQL is taking off.
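For a sense of what "just an API" means in practice, here is a rough sketch of storing and fetching a document through CouchDB's HTTP interface (Python with the requests library; it assumes a local CouchDB instance that accepts unauthenticated writes, and the database name and document are made up). Everything an administration tool like Futon does ultimately comes down to calls like these:

```python
# Rough sketch: talking to CouchDB directly over its HTTP API.
# Assumes CouchDB is running locally on the default port and allows
# unauthenticated writes; database name and document are hypothetical.
import json
import requests

BASE = "http://localhost:5984"
DB = "press_releases"

# create the database (CouchDB returns 412 if it already exists)
requests.put(f"{BASE}/{DB}")

# store a document -- no schema, just JSON
doc = {"title": "New product launch", "status": "draft", "tags": ["news"]}
resp = requests.put(f"{BASE}/{DB}/release-001", data=json.dumps(doc))
print(resp.json())   # e.g. {'ok': True, 'id': 'release-001', 'rev': '1-...'}

# read it back
print(requests.get(f"{BASE}/{DB}/release-001").json())
```

This is perfectly comfortable for a programmer, but it is not something you would hand to a business analyst who just wants to browse or report on the data.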

Another potential advantage is the trend toward distributed applications which breaks the model of having a centralized database service. Oracle spent so much marketing force building up their database as being the centralized information repository to rule the enterprise. In this world of distributed services talking through open APIs, that monolithic image looks primitive. What is more important is minimal latency, fault tolerance, and the ability to scale to very large data sets. A large centralized (and generalized) resource is at a disadvantage along all three of these dimensions. When you start talking about lots of independent databases, the homogeneity of data persistence becomes less of a concern. It's not like you are going to be integrating these services with SQL. If you did, your integration would be very brittle because these agilely-developed services are in a constant state of evolution. You just need to have strong, stable APIs to push and pull data in the necessary formats.

The geeky programmer in me (that loved working on that CRM project) is rooting for NoSQL databases. The recovering DBA in me cringes at the thought of battling data corruption with inferior, unfamiliar tools. In a perfect world, there will be room for both technologies: relational databases for relational data that needs to be centrally managed as an enterprise asset; NoSQL databases for data that doesn't naturally fit into a relational database schema or has volumes that would strain traditional database technology.

Feb 15, 2010

How I use Twitter for Work





[Image: Publishing Decision Tree V2, originally uploaded by sggottlieb]

I just read Philippe Parker's thoughtful response to Janus Boye's provocative post "How I use Twitter for Work". Both these articles, plus my recent experience at PodCamp Western Mass, made me a little more conscious of my strategy and techniques for social media. As you can see from this geeky flow chart, I have put some thought into what I publish where. But I had thought less about who to follow.

My official Twitter policy was to only follow people who "inform and/or entertain me." Because I use Twitter mainly for work, my bias certainly leans toward the "inform" side. Although I do appreciate a good snark once in a while, I have un-followed people who fill the timeline with mostly personal stuff. If I found myself automatically skipping over someone's tweets because I was expecting something mundane, I un-followed him/her. If a Facebook friend re-published their Twitter stream into Facebook, I un-followed him/her on Twitter. These tactics kept my following count at a manageable 150. When I say manageable, I mean I am not overwhelmed by the volume of updates but I don't go back and read every tweet when I am away from Twitter for an extended period of time.

At PodCamp, I finally learned the value of lists. By using private lists for work, friends, fun, and news, I can follow more people but handle the traffic differently. When I am really busy, I just track my work list and my @replies. I glance at my friends and fun lists when I have more time but I never go back more than a few hours in the timeline. My news list takes the place of personal portals for the day's highlights. Since sorting this stuff out, my following count has grown to 164 with no real impact on time consumption.

The biggest change is in how I use RSS. With my new Twitter strategy, I check my reader less frequently and am able to skip over posts that I already found on Twitter. At this point, Twitter brings me timely posts (either because they are news or because everyone is talking about them) more quickly. The un-tweeted RSS entries are still important to me for general learning and background knowledge. I expect that, over time, people will promote everything they write on Twitter. This has already happened with sites like CMSWire. Now FeedBurner gives you the option to automatically tweet every entry in your RSS feed. When a Twitter feed becomes identical to the RSS feed, I tend to un-follow or unsubscribe from one, depending on how timely the information tends to be.

This system is working well for me now but I am sure that it will continue to change as the medium evolves. I am interested in learning other people's techniques. The tag #howiusetwitter seems appropriate and free.


Feb 11, 2010

Does "Intranet" Need a New Name?

James Robertson has an excellent post, Future principle: it’s more than the intranet, where he summarizes a movement to replace the term "intranet" with a word that reflects what an intranet could be. To quote:

There are some that would like to dump the “intranet” name, as it’s associated with the “old” vision of intranets as a publishing platform, a dumping ground for documents, and a place for the CEO to post his thoughts.

This narrow vision of the intranet must certainly die. In the process, intranet teams need to go from being custodians of an internal website, to facilitators for business improvements. In many ways, the word “intranet” has too much baggage, and is an anchor for much-needed changes.

I agree that many people hear the word intranet and immediately think "dumping ground," but one does wonder whether companies will sully the next name too with their continued failure to execute on the vision. The term "intranet" is actually pretty good and should be able to ride on the coattails of the internet. The name "internet" wasn't brought down by failures like GeoCities because there is so much innovation happening; and failure is a necessary by-product of innovation. The difference is that failure kills most corporate intranets. Many intranets are big waterfall I.T. projects that are "complete" after launch. There is no time or budget left to learn from mistakes and adjust — the equivalent of a failed internet start-up but without the decency of shutting the servers down.

I don't expect companies will improve their execution of intranet projects until they start to change the way they build, launch, and manage internal products. The companies that are ahead of the curve should give their intranet an internal name to make users expect and work for more than the status quo.

BTW, I have a great replacement for the term "intranet" but I am not going to tell anyone because, sooner or later, it will be ruined by some comatose intranet initiative looking for some easy re-branding. :P

Feb 10, 2010

The Dead Zone of Software Pricing

A couple of weeks ago I subscribed to the Lean Startup Circle mailing list and I have been thoroughly enjoying the conversation ever since. If you have any entrepreneurial sensibilities lurking inside you, I highly recommend that you subscribe. The list participants have been in the trenches building companies and are happy to share what they have learned.

Recently, a thread on pricing caught my eye. The pricing doesn't have to be strictly software licensing fees; it could be subscriptions or services too. Jim Murphy wrote that there are four pricing bands: low (< $500), medium ($500-$5,000), dead-zone ($5,000-$20,000), and high ($20,000+). What is interesting is the "dead-zone." In this band the buying cycle is long and complex but the price of the product doesn't quite compensate for the high cost of sale. I am sure that most successful software vendors understand this either consciously or tacitly and price their products accordingly. From a buyer's perspective, I was thinking that a $21,000 software product may have originally been $8,000 but was priced up by $13,000 to get out of the dead zone.

If you look at the CMS market, there are lots of commercial products with sticker prices that hover around $20,000 - $25,000. This range is much smaller than the actual deal size because the list price of $20,000 may be to license a single CPU with little capacity and no fault-tolerance. You will probably also need to add support and training which will typically bring the deal size to the $50,000-$90,000 range. Those figures can certainly justify a sales investment from the vendor. But, because of the complexity, dependencies, and high stakes of web content management, the cost of sale can be very high (remember, you have to factor in the costs of the losses as well as the wins and a flooded marketplace means lots of lost deals). Maybe the dead-zone of WCM is even higher than the average. Maybe it's more like $10,000 - $50,000.

$50,000 is a lot of money to many web initiatives that tend to have expectations of low costs and rapid results. Open source products like Drupal, Joomla! and WordPress are doing quite well because they enable design studios (with minimal technical skills) to offer a complete website for half that price by taking a pre-existing theme and some modules and tweaking them just enough to make the site look original. Expression Engine, a free but non-open source product, is showing similar success. In these deals the cost of the software sale is essentially zero because the customer is not buying software; they are buying a website. They are considering font/palette/imagery rather than feature/function/value. Plus, because hosting for these platforms is so ubiquitous, the customer doesn't even have to complicate the transaction by involving their I.T. organization.

Commercial CMS vendors that are inflating their price to get above the dead zone are at a real risk here. Unless they can demonstrate value, their outsized prices will really stick out against products in the bottom two tiers. I think their best strategy is to shrink the dead zone by reducing the cost of sales. This means improving their channel sales and giving more access to customers who can take on more of the burden of evaluating the software. They also need to figure out a way to reward low-touch sales with discounts and charge prospective customers more when they demand the formality and overhead of the traditional enterprise sales cycle.

Feb 08, 2010

Developers and Designers

A few months ago I read Lukas Mathis' thought-provoking essay "Designers are not Programmers" where he makes the case for a separation between designers and developers. To summarize his argument, thinking about implementation details distracts the designer from the user and results in applications (and websites) that are easy to build but hard to use. He makes a very thorough case (you should definitely read the full essay) but something just doesn't sit well with me. In my practical experience, I find that teams are more efficient when roles overlap and people understand what is happening outside of their silo. Here are some reasons why:

  • A designer is often faced with lots of options of how to solve a user problem. When it is a coin toss between two solutions, why not choose the one that is easier to implement and apply the time and effort saved to something that really needs the additional complexity?

  • The static tools that pure designers use (e.g. Photoshop) have no way to express interactive functionality. All of the details that the developer needs to know have to be captured in some sort of specification that can never be complete and is usually out of date. Making the developers wait until the specification is done is inefficient.

  • Good software cannot be achieved by brilliant designers alone. It takes iteration and feedback to get it right. A cold hand-off between the designers and developers lengthens the iteration cycle (so you get fewer of them in a fixed amount of time and budget) and creates more of an opportunity for information loss.

In an ideal world with infinite time and money (and omniscience too), it might be better to have designers whose minds are unencumbered by knowledge of implementation details. Anything that they dream of can be implemented... with enough time and resources, of course. But I don't live in that world. In the world I live in, product managers and publishers have to make lots of compromises. They also need to be able to react efficiently to correct bad design decisions so that the product (or website) can continually improve. For that, you need an agile team that solves problems directly, which means staying out of a designer-only loop.

The most effective teams that I have worked on have all had a talented front-end developer who can rapidly design in DHTML (leveraging JavaScript libraries and CSS) and knows enough server-side scripting to make most user interface changes unassisted. With this mix of skills, it is truly amazing how quickly a small team can get a product in front of users where it can be improved by feedback.

Feb 03, 2010

The Myth of the Occasional CMS User

Not long ago, a university hired me to evaluate their CMS implementation. They were having doubts about their CMS selection because the implemented system was not living up to the lofty promises that got them the budget for the project. It turned out that they did make a reasonably good platform choice but they failed in two of the other critical content management success factors: expectation setting and project execution. The project execution wasn't horrible for a first try (they were working with a systems integrator that was new to the platform) and there were some remedial efforts already in progress to clean up the implementation. What was really killing them was that a primary goal for the CMS was the notion of "self service." They wanted their faculty to be able to edit their own profile pages. These faculty pages were more than a simple staff directory. Each entry was intended to showcase the experience and accomplishments of the professor and included a CV, a career narrative, and a history of publications. To maximize content reuse, the content model was highly structured — thereby increasing the complexity of the content entry forms. They might have been able to scale back on the detail but the amount of content they were looking for was not trivial: especially for an individual whose chief concern was not keeping a website up to date.

Often, one of the big justifications for a CMS is removing the webmaster bottleneck and delegating content entry to the people who have the information. The implicit assumption is that everyone wants to directly maintain their portion of the website but technology is standing in the way. But if you visit a CMS customer a while after implementation you are likely to find that the responsibility of adding content is still concentrated in a relatively small proportion of the employee population. Either the CMS never gets rolled out to the anticipated user base or it is rolled out and the user base fails to hold up their end of the bargain so the core content managers take back ownership. There are plenty of exceptions to this generalization — especially when there is an individual who has taken responsibility for outward (or inward) communication for his/her sub-group. For example, you might have someone from the human resources department whose job depends on communicating corporate policies to the staff.

You could interpret failed adoption as evidence of poor usability. That was my client's inclination. But you need to consider the usability of the alternative: telling someone else to add your content for you. Prior to the CMS, a professor would only have to send a terse, typo-riddled email containing some research references or a CV and the marketing department would clean it up, fill in the gaps, put it in the right style, and handle all of the coordination. How can you beat that? You can't.

Usability can't take the place of ownership and responsibility when managing content. As long as the marketing department owns the website, they will have the motivation to maintain it to the standards that they have set. It would be unreasonable to expect a professor to put the website above his other responsibilities: teaching and publishing academic books and articles. When the marketing department needs to spend a lot of energy cajoling and coordinating a professor for content, it is usually easier for that marketing department to do the content entry too.

Generally speaking, you can't delegate content authorship beyond the perception of site ownership. A good strategy is to centralize content authorship and give a relatively long turnaround time (perhaps 5 business days) for site update requests. Those who do not feel ownership of the site will be fine with that lag. But those who are annoyed by the lag (or their staff) are good candidates for becoming content contributors. To them, using any CMS will be preferable to waiting a week. If they do complain, their criticism will be about needing more control to produce more elaborate content rather than needing simplicity for basic edits.

Once you scale back your expectation of distributed authorship, the role of the occasional user gets less important. The occasional user is certainly less important than the primary content contributor role or the engaged users who prioritize website management in their list of responsibilities. You shouldn't buy a CMS for the occasional user. You should buy a CMS to maximize the effectiveness of your core users who are primarily responsible for the content and performance of the website.

Feb 01, 2010

In-Context and Power User Interfaces: One for the Sale, the Other for the Content Manager

A dirty little secret in the CMS industry is that, while in-context editing is often what sells a CMS, the "power user" interface is usually what winds up getting used after implementation. This phenomenon obviously creates problems in the selection process because, when the sales demo focuses on an interface that users will quickly grow out of, any usability impressions are irrelevant. This is also part of a bigger problem: the importance of in-context editing for sales has caused many CMS vendors to neglect their power user interface.

It is easy to understand why the sales demo gravitates to the in-context user interface: the audience finds it more intuitive. What is less obvious is why. In a typical CMS sales demonstration, the audience has the perspective of a site visitor. After all, this is not their site. They have no responsibility for it. As a site visitor, we think of editing the content that we see: "I see a typo;" "that sentence is hard to read;" "I would prefer to see another picture here." The user just wants to go in and fix it — like a Wikipedia article. Until it's fixed, that content issue is going to bug the user, so directness and immediacy are critical. As with a wiki, the in-context interface is ideal for solving these kinds of problems.

The content manager, however, has an entirely different perspective. The content manager is thinking more about the whole web site than any one page. The content manager has to solve problems like re-organizing the website and making global content changes. Needing to manually change every single page of a website is not acceptable so content reuse should be top of mind. From this perspective, the appearance of a page is less important than the actual content, which also includes information you can't see on the page but that drives the behavior of the site. You can even go so far as to say that the visible page (what the visitor sees) actively hides information that the content manager needs to see. The visitor shouldn't know where else a piece of content is featured on the web site or what caused the personalization logic to show this item in this particular case — but the content manager needs to. Incidentally, this is why you should make product demos focus on scenarios. Scenarios force you to think about what the content manager does - not just dream of being able to edit somebody else's web site.

Yes, you can make the argument that the occasional content contributor (who 80% of the time experiences the site as a visitor) needs a simplified user interface to fix the issues that they notice or keep a few bits of information up to date. But, as an organization gets more sophisticated with managing content, those cases of simplistically managed pages (with no reuse and no presentation logic) get less frequent. At that point, you are just talking about the "about us" page and some simple press releases. Are you surprised that this is what your basic generic CMS demo shows? Furthermore, I am beginning to believe that the occasional user is a myth (another blog post).

In-context editing interfaces are steadily getting more powerful by exposing functionality like targeting and A/B testing but there inevitably comes a point when the content manager wants the full power of the application at his fingertips. As the in-context editors get better, that point gets pushed further out. But adding complexity and power to the in-context editing interface will no doubt steepen the learning curve for the occasional user and minimize the wow factor of the demo. And no CMS vendor will do anything to reduce the wow factor of their product demo.

Jan 26, 2010

Designing for Drupal

Nica Lorbor, from Chapter Three, has a great post on their highly optimized Drupal design process. In the article, Nica shows how they start from a Drupal template that has roughly 25 common named elements (some native Drupal, some not) that can be styled according to client specifications. A specialized Fireworks template calls out these elements and helps map them to a mockup to facilitate conversation between the designer and the developer. This creates a common language that binds the design to the code that the developer needs to work on. This also probably quickly identifies visual design elements that are not part of the normal Drupal/Chapter Three build, which need special consideration for estimation and budgeting.

I think the efficiency of this process supports my philosophy that you should work with a partner that specializes in the CMS that you intend to implement. Chapter Three didn't invent this methodology for their first Drupal implementation. I am sure that it evolved over time. It also speaks to the efficiency of designing with the platform in mind. With this process, everybody on the project (from designers to developers) understands what is going to be easy and what is going to be hard. Designers are guided towards concepts that are more efficient to build. Yes, you could build any design on Drupal, but that doesn't mean that you should. With some site designs, you will be fighting the natural tendencies of the platform: increasing both the implementation and maintenance costs.

I should probably make the point here that I am not implying that all Drupal sites (or sites from any other CMS) look alike - particularly to the untrained eye. Nearly all of what the casual site visitor notices (font, palette, page layout, buttons, etc.) is totally customizable in Drupal and any other CMS platform. Guessing what CMS was used involves much more subtle characteristics that you wouldn't notice unless you worked with the platform extensively.

This all raises a chicken and egg problem that I have discussed before. If the product influences the design and the design defines the requirements that drive the product selection, where do you start? As I mentioned in "When it is not all about the software," the key is knowing enough about your requirements to define a short list of options that you can evaluate on more subjective levels (such as aspects of design).

Jan 25, 2010

CMS Architecture: Managing Presentation Templates

Another geeky post...

In my last post, I described the relative merits of managing configuration in a repository vs. in the file system but excluded presentation templates even though how they are managed is just as interesting. Like configuration, presentation templates can be managed in the file system or in the content repository. Like with configuration, if you manage presentation templates in the repository, you need some way to deploy them from one instance of your site to another without moving the content over as well.

There are plenty of additional reasons why you would want to manage presentation templates on the file system. In particular, presentation templates are code and you want to be able to use proven coding tools and techniques to manage them. Good developers will be familiar with using a source code management system to synchronize their local work areas and branch/tag the source tree. Development tools (IDEs and text editors) are designed to work on files in a local file system. If you manage presentation templates in the repository you have to solve all sorts of problems like branching and merging and building a browser-based IDE or integrating with local IDEs. The latter can be done through WebDAV and I have also seen customers use an Ant builder in Eclipse to push a file every time it changes. Still, the additional complexity can create frustrating issues when the deployment mechanism breaks.

As much as it complicates the architecture, there is one very good case when you would want to manage presentation templates in the repository: when you have a centralized CMS instance that supports multiple, independently developed sub-sites. For example, let's say you are a university and each school or department has its own web developer that wants to design and implement his own site design. This developer is competent and trustworthy but you don't want to give him access to deploy his own code directly to the filesystem of the production server. He could accidentally break another site or, worse, bring down the whole server. You could centralize the testing and deployment of code, but that would just create a bottleneck. You could do something like put the CSS and JS in the repository and have him go all CSS Zen Garden, but sooner or later he will want to edit the HTML in the presentation templates.

In this scenario of distributed, delegated development, presentation templates are like content in two very important respects:

  1. presentation templates need access control rules to determine who can edit what.

  2. presentation templates become user input (and user input should never be trusted).

The second point is really important. Just like you need to think twice when you allow a content contributor to embed potentially malicious JavaScript into pages, you need to worry that a delegated template developer can deploy potentially dangerous server side code. Once that code is on the filesystem of an environment it can create all sorts of mischief. It doesn't matter if it was intentional or not, if a programmer codes an infinite loop or compromises security, you have a problem. Using templating languages (like Smarty or Velocity) rather than a full programming language (like PHP or Java in JSP) will mitigate that risk but you still have to worry about the developer uploading a script that can run on your server. With staging and workflow, CMSs are good at managing semi-trusted content like presentation templates from distributed independent developers. There is a clear boundary between the runtime of the site and the underlying environment.
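To show what I mean by treating templates as untrusted input, here is a small sketch (in Python, with Jinja2 standing in for Velocity or Smarty; the template text and content item are made up) of rendering a contributor-supplied template in a sandbox so it can format content but cannot reach into the underlying system:

```python
# Sketch: render a contributor-supplied presentation template as untrusted
# input. Jinja2's sandbox stands in here for Velocity/Smarty; the template
# and the content item are hypothetical.
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()

# A delegated developer controls markup and simple display logic...
template_source = """
<h1>{{ item.title }}</h1>
{% for tag in item.tags %}<span class="tag">{{ tag }}</span>{% endfor %}
"""

item = {"title": "Faculty profile", "tags": ["research", "bio"]}
print(env.from_string(template_source).render(item=item))

# ...but attempts to reach unsafe attributes or internal objects from inside
# a template raise a SecurityError instead of executing on the server.
```

The point is not the specific library; it is that the template layer only gets a constrained vocabulary, which makes it much easier to stage and review like any other semi-trusted content.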

If your CMS uses file-system based presentation templates and you delegate sub-site development to the departments who own them, you should definitely put in place some sort of automated deployment mechanism that keeps FTP and SSH access out of the developers' hands and reduces the potential for manual error. The following guidelines are worth following (a sketch of such a deployment script appears after the list):

  • Code should always be deployed out of a source code system (via a branch or a tag). That way you will know what was deployed and you can redeploy the same tested code to different environments.

  • Deployments should be scripted. The scripts can manage the logic of what should be put where.

  • Every development team should have an integration environment where they can test code.
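Here is a minimal sketch of what a scripted deployment along these lines might look like (Python; the repository URL, paths, host names, and tag names are all hypothetical, and your source control and transfer tools may differ):

```python
# Minimal deployment sketch: export a tagged revision from source control,
# sanity-check it, and push it to the target environment. All names, paths,
# and hosts below are hypothetical.
import subprocess
import sys
import tempfile

def deploy(tag: str, target: str) -> None:
    workdir = tempfile.mkdtemp(prefix="deploy-")

    # 1. Export exactly what was tagged -- nothing from a developer's sandbox.
    subprocess.run(
        ["git", "clone", "--depth", "1", "--branch", tag,
         "git@scm.example.edu:templates.git", workdir],
        check=True,
    )

    # 2. Room for a sanity check (lint the templates, verify required files).
    subprocess.run(["ls", f"{workdir}/templates"], check=True)

    # 3. Push to the target environment. The script, not the developer, holds
    #    the credentials and knows what goes where.
    subprocess.run(
        ["rsync", "-av", "--delete", f"{workdir}/templates/",
         f"deploy@{target}:/var/www/site/templates/"],
        check=True,
    )

if __name__ == "__main__":
    # e.g. python deploy.py release-2.3 staging.example.edu
    deploy(sys.argv[1], sys.argv[2])
```

Whether you use a home-grown script like this or a product, the important thing is that the same tested artifact moves from environment to environment without anyone hand-copying files.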

One of my clients uses a product called AnthillPro for deployments of all web applications as well as presentation templates. It has taken a while to standardize and migrate all of the development teams but now I don't see how you can have a decentralized development organization without it.

The other dimension to this problem is the coupling between the content model and the presentation templates. When you add an attribute to a content type, you need to update the presentation template to show it (or use it in some other way). The deployment of new presentation templates needs to be timed with content updates. Often content contributors will want to see the new attribute in preview when they are updating their content. Templates also need to fail gracefully when they request an attribute that does not yet exist or has not been populated yet. Typically, presentation templates evolve more rapidly than content models. After all, a change in a content model usually involves some manual content entry. In my scenario of the university, there is a benefit of centralizing the ownership of the content model. This allows content sharing across sites: if one department defines a news item differently than another department, it is difficult to have a combined news feed. Centralizing the content model will further slow its evolution because there needs to be alignment between the different departments.
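On the "fail gracefully" point, here is a tiny sketch (again with Jinja2 standing in for whatever templating language your CMS uses; the "subtitle" attribute is made up) of a template that renders cleanly whether or not a newly added attribute has been populated:

```python
# Sketch of graceful degradation: the template asks for an attribute that
# older content items may not have yet. The "subtitle" field is hypothetical.
from jinja2 import Environment

env = Environment()
template = env.from_string(
    "<h1>{{ item.title }}</h1>"
    "<h2>{{ item.subtitle | default('') }}</h2>"   # new attribute, may be absent
)

old_item = {"title": "Department news"}             # entered before the model change
new_item = {"title": "Department news", "subtitle": "Spring 2010 round-up"}

print(template.render(item=old_item))   # renders with an empty subtitle
print(template.render(item=new_item))   # renders the new attribute
```

A small defensive habit like this lets the presentation templates evolve a little ahead of (or behind) the content model without breaking preview or the live site.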

Wow, two geeky posts in a row. I promise the next one will be less technical.
