Content Here

Where content meets technology

Nov 20, 2009

When it is not all about the software

When I help companies through a CMS selection, I focus on the whole solution rather than just the functionality of the software. Factors such as vendor compatibility and expertise availability (internal and external) also affect the sustainability of the solution — sometimes even more than the feature set. Several of my recent consulting projects have de-emphasized the software component of the solution even more. These selections were part of larger initiatives that required significant help from an outside partner. In one case there was a comprehensive site redesign that included digital strategy, re-branding, and information re-architecture as well as implementing new functionality. In another case, the client was shifting to an outsourced model where a partner was to maintain the full infrastructure and assume all development responsibilities. In situations like these, while the software is important, the biggest risk is choosing the wrong partner to work with.

A dual selection like this poses a real problem. If you focus on the partner, you have the software choice made for you. I know that there are systems integrators that claim technology agnosticism but I seriously doubt them. The truth is that it takes a couple of implementations on any platform to get it right. In some cases, it takes many (like 5) projects to run out of ways to mess things up. That is the downside of flexibility. So, when someone says "technology agnostic," I hear either "we don't have skills on any platform" or, like the waitress at Bob's Country Bunker said: "Oh, we got both kinds [of music]. We got Country, and Western." It could be even worse: the integrator who claims neutrality may be getting paid by a software vendor to recommend a solution.

The other option is to go with a pure design agency and select a product (and integrator) afterwards to implement that design. This can be inefficient because the designers can arbitrarily and unknowingly make decisions that make implementation harder. You will probably need to rescope and refactor the design based on the native capabilities of the platform you choose. A lot of time can be saved if you have someone to suggest more efficient options during the design process. I know I am going to get slammed here by developers and software companies saying their product can do anything. If they could, they would have 100% market share. Pure strategy and design companies also tend to underestimate the cost of implementation, so you might run out of money when it is time to implement the design.

It's a chicken and egg problem. You can't choose a product until you have had help with your requirements and you can't get help with your requirements until you choose a development agency (that implicitly comes with a product). I have found this variation of my standard process to be effective.

  1. Gather the requirements that are the most meaningful to a product selection. I call these leading requirements.

  2. Use these functional and non-functional requirements to filter down the software marketplace to a very short list of product options (probably no more than two).

  3. Find a couple of the best web agencies who specialize in working on either of these platforms — ideally, two for each of the two products.

  4. Invite these agencies to present a solution based on their preferred platform. As in my normal selection process, the presentation consists of demonstrations of the scenarios that you defined when you gathered your leading requirements.

  5. Evaluate the presentations across two dimensions: the product and the agency.

I have found this process to be extremely effective. The benefit that I didn't anticipate was that the preparation of the prototype tested three very important aspects of the integrators: their consultative process for turning customer-articulated requirements into a solution; their mastery over the platform; and their relationship with the software vendor. But the process is not perfect. Here are some issues:

  • A professional services company has lower margins than a software company. They don't have the prospect of an all-profit software license deal to justify a big bet on a sales initiative. Professional services companies will put up with a lot less runaround than a software company will — especially the good ones. So the process has to be efficient and they need to have a good shot at winning. This means that you need to: have a short list of no more than four integrators; have budgeting worked out before you contact them; and work together with them to get what you need. This collaboration is also useful in getting a feel for what it will be like to work with the firm.

  • Some integrators have dedicated sales teams that can prevent you from getting to know their consulting capabilities. You should do everything you can to work directly with real consultants. Not only is it important to learn about their style and capabilities, but delivery staff are also less likely to tell you what they think you want to hear.

  • Those filtering steps (1-3) are very hard if you are just learning about web development and don't know your way around the industry. You need to talk to lots of people and learn from their experiences, or work with someone who has already done so.

Despite all of these challenges, doing a dual selection is not impossible. In fact, it can be quite fruitful. You just need to be focused and disciplined in your approach and execution. If you are interested in learning more, I am teaching a workshop on Selecting a CMS at the Gilbane Conference in Boston on December 1st. I hope to see you there.

Nov 16, 2009

Deane Barker's tips on requests for proposals

While it may seem counter-intuitive to listen to a supplier telling you how to buy, you should definitely read Deane Barker's article "Five Tips to Getting a Good Response to a Content Management RFP." Deane is a co-founder of Blend Interactive, a web design and development firm. That may put him on the other side of the negotiation table but, as a potential partner, he wants you to be successful in your initiative as much as you do. That is actually not so out of the ordinary. As a consultant, you want to spend your time with clients who you have a great working relationship with. The better consultants can be more selective in the opportunities they pursue and nothing sends off more bad vibes than a dysfunctional selection process.

The article agrees with all the advice that I give on my blog (it even quotes me!). The one tip that buyers are going to question is openly stating the budget. I tend to go back and forth on that myself. The benefit is that budget is the best way to communicate what you think the size of the project is. Getting that piece of information out in the open early will help the vendor present a solution that is in line with what you had envisioned. It also reduces the risk of harboring unrealistic expectations of what you can do. The risk of communicating budget is that the integrator will inflate the price to maximize margin. My current thinking is that you shouldn't be working with a partner who you fear will take advantage of you. You should structure your selection process to verify the vendor's integrity as much as its skills and experience.

If you are in the market for a development partner, read these tips. If you can get to Boston next month, you should join me in my CMS Selection Workshop at the Gilbane conference. In fact, Deane is going to be there too.

Nov 11, 2009

Recovery.gov on Sharepoint now?

Nadav Schreibman just commented on my post "Is Drupal the right platform for whitehouse.gov?" to say that Recovery.gov is now running on Sharepoint. I was pretty shocked when I read the article and had to check the source on the site. Sure enough, I am seeing Sharepoint HTML in all of its postback glory.

The referenced article from Oh My Gov talks about how Recovery.gov needed a more portal-like framework. Both Drupal and Sharepoint can be described as development frameworks. Drupal leans more towards being a CMS product (with strong capabilities for developing content) while Sharepoint is more of a portal (with strong capabilities for pulling content and data from other sources). It seems like Recovery.gov, with its mandate to expose where recovery money is being spent, needed more portal and less CMS. This was an expensive change — the redesign project cost $18 million, which may not be a lot of money in government spending terms but is more than most of my clients have lying around. Thanks, Nadav, for the tip!

Nov 11, 2009

Hints of change at Alfresco

I am beginning to see hints of serious changes happening within Alfresco. Historically the company has essentially operated as a commercial software company with a closed development model (that is, an internal, opaque development team) and an open source version that was treated like shareware ("start using it and, if you like it, pay for the real product"). Gradually, Alfresco has been opening up to be more transparent and developer friendly. For example, now you can get the source of the Enterprise Edition. You just need to pay an annual subscription to use the compiled version if you want to get support.

Recent blog posts by Matt Asay (Alfresco's VP of Business Development) and John Newton (CTO) make me think there is more change to come. First, in April Matt wrote this post on how the (very permissive) Apache Software License is better than the GPL. That had me scratching my head because Alfresco uses the GPL, which is very strong at protecting IP. Alfresco had already loosened up a bit by providing a "FLOSS Exception" where a developer working on another project with another OSI-approved license can incorporate Alfresco under that license. But the full Apache Software License goes much further. If Alfresco were Apache-licensed, Oracle could embed Alfresco in one of their commercial software products for free.

Then John Newton wrote a post talking about the virtues of professional open source and described Alfresco as a company that made money entirely from support. At the time, I didn't really believe him because the terms felt like you needed to pay to use supportable software rather than pay for the support itself. I know this is a minor distinction but a support contract seems easier to walk away from than an annual subscription to use software. Still, I guess it would be possible to downgrade to a version of the Enterprise Edition that you compiled yourself.

More recently, Matt came out with this article that is critical of "fauxpen source": products that come out of a closed development process but are distributed under an open source license. He writes:

In the future, I think we'll see this "fauxpen-ness" fade as companies clearly separate their open-source efforts from their revenue models. Open source can provide a platform for monetization, but it isn't the best way to actually generate cash. Not for most companies, anyway.

I take this to mean that software companies will start to leverage the open source development model and get their revenue from sources other than renting out the IP of the software. Matt doesn't mention Day Software but that is clearly what Day is doing. Day sells commercial software products (CQ5 and CRX) but heavily invests in components (Jackrabbit and Sling) that they have donated to the Apache Software Foundation for open development. They use these Apache components in their products and encourage their competitors to do so too. Similarly, IBM invests in lots of Apache projects and Eclipse. Ex-Alfrescan Kevin Cochrane now works at Day and I am wondering if he is convincing his former co-workers of this strategy. I wonder if, now that there is a sufficient developer community, Alfresco will start to put development of some of their components (like their CIFS implementation or their Surf framework) out in the open where more people can contribute to it.

If this is what is happening, (and now I am really speculating) it could mean one of two things. One, Alfresco has reached a size and level of profitability that it can afford to let go of some immediate revenue to fuel some longer term growth. Two, Alfresco is less focused on creating a company with a tight grip on IP that it can quickly sell. Either way, I am very interested in how this plays out and will be watching for Alfresco components being released into an active development community.

Disclosure: I do not have any inside information on Alfresco and am speculating based on what I read on the web. I may be (and probably am) totally wrong.

Nov 10, 2009

Should you host your intranet and corporate website on one platform?

Often, by the time a client finds me, they have reached the point where they are ready to throw away their entire web infrastructure: both their corporate website and their intranet. They hope that one well executed product selection can solve both problems. When approached by this kind of prospective client, I am careful to set the expectation that this hope is probably not realistic — not impossible, just not realistic. That said, there are a few conditions under which putting your intranet and external website on one stack does work. Here are the criteria that I use.

  1. The intranet is informational rather than collaborative. Different people mean different things when they say the word "intranet." For some, an intranet is just a collection of web pages that only employees can see. Others think of tools and workspaces that allow employees to collaborate and get things done. If you had the latter in mind, chances are you will be looking for a document management, ECM, or portal system like Sharepoint for knowledge worker collaboration. These systems are technically capable of pure web publishing but that is not their strength. Your website management team will feel constrained by a platform that treats web publishing as an afterthought. If you think of your employees and customers/prospects simply as different audiences that you need to publish to, a single platform may work just fine.

  2. The company has other platforms on which to build specialized, dynamic web applications. Sooner or later (or probably already) your company is going to need to develop content-oriented web applications and your WCMS is a logical place to start. However, these web applications are likely to introduce the most demanding and specialized requirements. It is quite possible that the union of your intranet and external website application requirements filters out every viable CMS — or at least forces some painful compromises. If you have an alternative stack on which to build your fancy custom applications (possibly pulling content from your WCMS) and can let your WCMS focus on simple web publishing, you greatly increase the chance that you can comfortably support both internal and external publishing.

  3. The corporate website and intranet are owned by one communications group. Today you may be able to align your intranet and external publishing needs; but will they stay aligned? How will you arbitrate between the competing priorities of the intranet and the marketing website? What if those priorities don't just compete for resources but actually conflict with each other? Things can get ugly when two different departments with different goals argue over what feature needs to be added next. These decisions get a lot easier if both the intranet and the external website are owned by one communications group. Companies that are structured this way are usually small and can benefit the most from sharing the infrastructure cost between the intranet and the external website. Having the intranet owned by a communications group also pretty much ensures that criterion 1 is satisfied.

If your company meets these criteria, there is a good possibility that one instance of one CMS platform could serve you quite well. Otherwise, I would recommend doing one of your two projects and then putting the product that you used on the short list of products to consider for the other. If it turns out that the one platform supports the requirements of the second project, buy another instance of the software (hopefully at a discount) and start a new project to implement it. Take advantage of code and idea re-use but don't let that constrain the flexibility and agility that you need to achieve your goals.

Nov 04, 2009

The world's worst WCMS

I just read Philippe "@proops" Parker's tweet:

there's no "best" wcm, says @jarrodgingras, but is there a worst one? #fixwcm

My answer in 140 characters or less is "No" but I have more to say, so I will elaborate here...

There is no worst WCMS. In fact, I would go so far as to say that every WCMS is (or at least was) the best WCMS for someone. The reason is this: every WCMS product was built to someone's specifications. Some WCMS development projects fail in their mission but we rarely see the products of those projects in the marketplace. What we do see are the products that delivered so well on their specification that somebody had the bright idea to get into the business of repeating that success for other organizations.

So every WCMS, at least at one point in its lifetime, was someone's best WCMS. But software is not static. It changes over time (at least it should) and it is quite possible that poor product management can make software worse. I see that as a very real problem for many of the software products in the marketplace. It takes discipline to avoid feature bloat that clutters the application and makes it less suitable for its best use. Software companies that look to competitors and potential customers for guidance are more vulnerable than companies that listen to their customers who are already using the software.

There is a great philosophy in software product management called "Make it suck less." The idea is that, rather than add new features to draw in new kinds of customers, make life better for people who already use your software. That is, don't ruin the software by trying to please everyone.

Sadly, few companies take this approach and, as a result, the race to make worse software has a bigger, more competitive field than the race to make better software. Therefore, nobody is going to win the dubious distinction of being the developer of the worst WCMS: it's going to be a big tie.

Nov 03, 2009

10 Django Master Class action items

Edit: I wrote a follow-up post describing how I was doing with these action items. Enjoy!

A couple of weeks ago I attended Jacob Kaplan-Moss's Django Master Class in Springfield, Virginia. It was a great class and I walked out with a bunch of ideas for making better use of Django. What follows is a set of action items that I created for myself. Jacob was not this prescriptive in his presentation. These are just my personal decisions based on how he explained things.

  1. Use South for database migrations (complete). Unlike Rails, Django has no native system for synchronizing the database schema with code changes. Django will create your initial database schema for you but you need to modify the tables with SQL whenever your models change. South gives Django Rails-like migrations, which consist of methods to alter the database and also roll back changes. I ported a new application I am working on over to use South and am very impressed. Jacob gave some great advice: keep your schema migrations separate from your data migrations. For example, if you are renaming a field, you would create one migration to add the field; a second migration to move the data to the new field; and a third migration to delete the old field. Doing this will make your migrations safer and easier to roll back (a sketch of this pattern follows the list).

  2. Use PostgreSQL rather than MySQL (complete). Jacob didn't talk disparagingly about MySQL but it was clear to me that PostgreSQL is what the cool kids are using. That is not to say there are no disagreements over which DB is best. I have been using MySQL for years but two things won me over. In the class, I learned that table alterations in MySQL are not transactional, so if your South database migration fails, you can't roll back so easily. The second factor came after the class when I was reading all these blog posts panicking about what will come of MySQL now that Oracle owns it. I agree with most pundits that Oracle doesn't have a great reason to invest in MySQL. My comfort level working with PostgreSQL is growing but it's going to take a while to get as comfortable with the commands and syntax as I am with MySQL.

  3. Use VirtualEnv (complete). One thing about Python that always seemed hacky to me was the whole "site-packages" thing. I don't like how all your Python projects tend to share the same libraries. In Java, you are much more deliberate with your CLASSPATH. The class introduced me to virtualenv and its sister project virtualenvwrapper. These create a virtual sandbox where you can manage libraries separately from your main Python installation. It is brilliant.

  4. Use PIP (complete). I was pretty haphazard about what tools I used to install Python packages. I admit that I didn't really know the difference between setuptools and easy_install. The Master Class nicely explained the different options and it seems like PIP is emerging as the Python package manager of choice.

  5. Break up functionality into lots of small re-usable applications (in process). Much of the advice from the class is summarized in James Bennett's DjangoCon 2008 talk: Reusable Apps. Watch the video and be convinced.

  6. Use Fabric for deployments (not started). My normal m.o. for deploying code has been to shell over to a server and svn export from my Subversion server. In multiple server environments, I would usually have some kind of rsync setup. However, in one of my client projects (using Java), I started using AntHill Pro (plus Ant) for both continuous integration and deployment. From that experience, I saw the light on automated deployments. Fabric is primitive compared to AntHill Pro (it doesn't have a cool web-based UI) but it does allow you to run scripts remotely on other hosts. It's like Capistrano for Python (see the fabfile sketch after this list). In the next phase of development, I will definitely be using this.

  7. Use Django Fixtures (not started). I am really embarrassed to say that I have avoided using Fixtures for loading lookup and test data. Instead, I have been doing horrid things with SQL and objects.create(). I am looking forward to reforming my errant ways. Fixtures allow you to create a data file that Django will load for you. Django offers three format options: JSON, YAML, or XML. Jacob recommends YAML if you can be assured that you have access to PyYAML; otherwise go with JSON, which is nearly as readable (a fixture sketch follows the list).

  8. Look into the Python Fixture module (not started). This straight Python module seems to be an alternative to the Django fixtures system. It is more oriented towards test data and looks a little like using mock objects. I need to dig in a little more before I make up my mind about it.

  9. Use django.test.TestCase more for unit testing (not started). I need to do more with unit tests. I have had some good experiences with writing DocTests but I should use the Django unit test framework more. This will allow me to use fixtures more too! Plus with Django 1.1, startapp even creates a tests.py for you. How can I resist an empty .py file?

  10. Use the highest version of Python that you can get away with (in progress). In the class, Jacob made the good point that every version of Python gets feature and performance improvements. Why not go with the latest stable version like 2.6? Snow Leopard did it for me. I will try to upgrade my server as soon as I can get away with it.
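
To make item 1 concrete, here is a minimal sketch of the middle (data) step of the three-migration field rename described above, written against South's DataMigration class. The app, model, and field names (news, Article, old_title, title) are hypothetical; this illustrates the pattern rather than any code from the class.

```python
# Hypothetical sketch: renaming Article.old_title to Article.title in three
# South migrations. Migration 1 (a schema migration) adds the new column,
# this data migration copies the values, and migration 3 drops the old column.
from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Use the frozen ORM that South passes in, not the live models,
        # so this migration keeps working as the model definitions evolve.
        for article in orm['news.Article'].objects.all():
            article.title = article.old_title
            article.save()

    def backwards(self, orm):
        # Reverse the copy so the whole three-step chain can be rolled back.
        for article in orm['news.Article'].objects.all():
            article.old_title = article.title
            article.save()
```

The schema steps on either side would be generated with ./manage.py schemamigration news --auto, and ./manage.py migrate news walks through all three in order.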
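
For item 6, here is a rough sketch of what a fabfile for the svn-export style of deployment mentioned above might look like, assuming Fabric's fabric.api interface (env, run, cd). The host names, paths, and repository URL are all made up.

```python
# fabfile.py -- hypothetical deployment sketch, not a real project's script.
from fabric.api import cd, env, run

# Fabric runs each task once per host in env.hosts (made-up names here).
env.hosts = ['web1.example.com', 'web2.example.com']

def deploy(tag='trunk'):
    """Export the requested tag from Subversion onto each web server."""
    with cd('/var/www/myproject'):
        run('svn export --force http://svn.example.com/myproject/%s .' % tag)
        # Touching the WSGI file is a common way to nudge mod_wsgi to reload.
        run('touch apache/django.wsgi')
```

You would then run something like fab deploy:tag=tags/release-1.0 from your workstation and Fabric executes the commands over SSH on each host.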
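
Items 7 and 9 work well together, so here is a small sketch of a JSON fixture plus a django.test.TestCase that loads it. The news app, Category model, and URL are hypothetical.

```python
# news/fixtures/categories.json would contain hypothetical lookup data like:
# [
#   {"model": "news.category", "pk": 1, "fields": {"name": "Press Releases"}},
#   {"model": "news.category", "pk": 2, "fields": {"name": "Events"}}
# ]
# and could be loaded by hand with: ./manage.py loaddata categories

from django.test import TestCase

class CategoryListTest(TestCase):
    # Django loads these fixtures into the test database before every test.
    fixtures = ['categories']

    def test_category_list_page(self):
        response = self.client.get('/news/')
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, 'Press Releases')
```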

If you can make it to the next Django Master Class, I highly recommend you go. Otherwise, you should look into these resources and make your own educated decisions about whether to use them.

Nov 02, 2009

Should we start listening to Gartner now?

I just read Matt Asay's article "Time to upgrade open source perceptions of Gartner" where he gives Gartner credit for finally getting open source. His point is that Gartner has stopped steadfastly arguing that open source's impact was negligible. On that point, I have to agree. Gartner has jumped onto the open source bandwagon just as it was accelerating out of Gartner's reach. But what does that really mean? It means that Gartner's readership (predominantly technology vendors) has forced Gartner to cover a market segment that they are no longer able to ignore. It also means that some of the open core software vendors have gotten big enough that they now represent a decent market for Gartner reports and services.

From a technology customer's perspective, the value of Gartner's opinion (and that of most of the other major analysts, for that matter) is pretty much the same. Mainstream technology analysts focus on the business of technology (market share/cap/potential) — not the design or suitability of the products. Market analysts don't work with the technology. They don't talk to users or developers. Half of the people they talk to are CIOs who couldn't identify the user interface of the software they are discussing.

So, if you are a software company or an investor trying to figure out where to spend your money, Gartner's reports just got a little bit more useful. If you are in the market to buy technology, Gartner won't help you understand your requirements and what product is the best fit for your organization.

Oct 28, 2009

Is Drupal the right platform for whitehouse.gov?

By now, most people have heard that whitehouse.gov has been migrated over to Drupal. Apparently, the administration has built a level of comfort with the platform through its experience using it for recovery.gov. While the Drupal community is doing high fives all around and equating this with Drupal being the most powerful CMS in the free world, I wouldn't go that far. I would say that Drupal is a very good choice for powering the kind of website that the Obama administration is trying to foster: content rich, news oriented, faceted, and populist.

Apparently, Slate Magazine's Chris Wilson wasn't so generous with his critique of the choice. I discovered the Slate article by reading Conor McNamara's excellent post "Drupal misrepresented by Chris Wilson of slate.com." In my opinion, Conor's responses to Wilson's points are spot-on. Some of Wilson's criticisms reflect configuration choices, not limitations of the platform (like not being able to add Javascript to a content object; it's a bad idea to allow this, by the way). Slate.fr uses Drupal and the Today's Pictures section of Slate.com is Drupal powered. It seems like Chris has just enough knowledge to sound ignorant. Readers of my reports know that I can get critical of technologies. But whenever I criticize, I make sure to do my research because I know that I am going to get attacked by defenders of that technology. Chris is about to learn this lesson. If Slate.com's commenting system wasn't so bad, he would learn even faster.

Oct 28, 2009

The CMS Myth on the value of drop-in labs

The CMS Myth presents a nice strategy for training CMS users: drop-in labs. The idea is similar to a college professor's "office hours," where the professor schedules time in his office to field questions from students. In this case, it is a computer lab that an expert on the system staffs on a pre-set schedule. Like office hours, this is a complement, not a substitute, to traditional classroom training. A contributor attends a class, goes through classroom exercises there, and then comes back later to practice what he learned on his own work.

This seems like an effective strategy for large CMS rollouts where it is impractical to distribute expertly trained power users across the office. Small companies could benefit as well. I can see an advantage in allowing users to learn from each other as they passively listen to the instructor solving a co-worker's problem while doing their own work in the drop-in lab. As any CMS user knows, each piece of content has its own little nuances and there are so many different ways to use a CMS. Working with an expert nearby shortens the period of learning the most effective strategies for achieving the desired result. For example, "should I associate this image directly with this asset or put the image in a central image library and then reference it?"

Back in my in-house I.T. days, we used to make a point of "walking the halls" answering questions for weeks after a big system rollout. Drop-in labs seem much more efficient but you don't get quite the same amount of exercise.
