Jan 07, 2010
One of my favorite podcasts, Planet Money, recently did a segment on bias in journalism. Apparently, back in the 1870s, most newspapers were blatantly affiliated with a political party. In fact, their bias was openly stated in their mission statements and it was part of the newspaper industry's business model. In return for political support, a newspaper publisher would get lucrative contracts for printing government documents and cushy government posts like postmaster. You historians will remember that Ben Franklin, himself a publisher, was the first postmaster general. The newspaper publishers didn't do this because they were greedy or unscrupulous; it was hard to make a living publishing a paper any other way. The cost of printing and distributing a paper was more than the audience could afford. Given the options of the low return of serving an audience that can't afford your product or the high return of serving a political party, the choice was easy.
But then a big technology disruption happened that drastically cut the cost of publishing a newspaper and all of a sudden made the business more profitable. It wasn't the Internet; it was cheap paper made from wood pulp. Up until that point, the news was printed on rag paper like what our dollar bills are made of. With cheaper paper, a publisher could lower the price and get a higher volume of sales. You could sell even more papers if you had higher quality news. More newspapers entered the market but there were still barriers to entry to preserve profits. Even though the variable cost of the papers was low, the up-front fixed costs of the presses and distribution channels kept competition manageable.
Like wood pulp, the Internet also disrupted the news business, but in different ways. While the Internet has reduced the cost of distributing the news (especially over large geographic areas), it has also driven revenues down. Most importantly, the classifieds business has been eaten up by sites like Craigslist. Advertisers have other options for reaching a more targeted audience than general news can offer. And, of course, there are no barriers to entry. Even a schmo like me can have a blog.
Will the decline of profitability push publishers back to bias? There are some concerning signs. Not so much with the big media brands (although you could definitely make the argument that all publications have their leanings), but there are plenty of examples in the blogosphere (and on Twitter) where bloggers are paid to promote a product or political agenda. The distinction between professional and amateur media is blurring. We tend to follow people (professionals and amateurs) who share our passion (usually on very narrow topics) and we unsubscribe when they lose our interest or confidence. But very few people are getting rich on having us as an audience — not until they sell out and exploit our trust. Then we will leave as quickly as we came because there is bound to be a fresh new voice that values our attention, as unprofitable as it may be.
As an audience that wants to be informed, we need to do two things. First, we need to figure out a way to compensate for the value that we enjoy. There have been some interesting ideas of establishing non-profits and public-media style structures. Second, we need to become very good critical thinkers. We need to be able to filter and verify the information that is bombarding us. We need to become our own editors if we can't immediately trust anything in print. I am sure that is what people back in the 1860s did too.
Jan 04, 2010
I have finally gotten around to reading Clay Shirky's excellent book Here Comes Everybody. I love Clay's writing style and the way his perspectives make me think. One of the points that really resonated with me was about open source. But before I get into it, I should say that when Clay talks about open source, he is really talking about community developed open source (not commercial or institutional open source). Readers of this blog know that the designation "open source" simply means that the software is licensed under an OSI-certified license. "Open source" says nothing about how the software was developed. Given that disclaimer, Clay nails it when he describes the advantage of community developed software:
The bulk of open source projects fail, and most of the remaining successes are quite modest. But does that mean the threat from open systems generally is overrated and the commercial software industry can breathe easy? Here the answer is no. Open source is a profound threat, not because the open source ecosystem is outsuccessing commercial efforts but because it is outfailing them. Because the open source ecosystem, and by extension open social ecosystems generally, rely on peer production, the work on those systems can be considerably more experimental, at considerably less cost, than any firm can afford. (from page 245 of the hardcover version)
This quote comes from a chapter called "Failure for Free" that discusses the importance of failure for innovation. The key point is that traditional companies can't afford to really innovate because they must limit their bets to initiatives that they feel have a high likelihood of success. Innovation winds up being constrained to small tweaks to things that are well known. Real breakthroughs are rare. Community development doesn't make better ideas but it reduces the cost (and risk) of trying ideas that at first seem unlikely to succeed. Out of all this experimentation comes unexpected successes. Even the failures provide useful data that can be turned into future successes.
When open source critics see the volume of incomplete or dead open source projects as evidence against the sustainability of open source, they are missing the point. A low success rate is critical to real innovation, but that is hard to understand for someone whose failures are discouraged or not tolerated at all. It is widely accepted that the best design for an experiment is one with a 50% chance of failure: if you know what the outcome will be, the experiment is not worth doing. But if the cost (in time and resources) of the experiment is zero or close to it, the ideal failure rate is much higher.
While we have seen many examples of community developed software out-innovating its commercial peers, this phenomenon has very little to do with open source. An open source license can create an environment that invites contribution by reducing the risk that one will exploit and unduly profit from another's contribution. But what is more important is how the software is managed. Most successful open source projects have adopted an architecture consisting of a core that is tightly managed for stability and quality and is extensible by adding modules. The bar for module code is considerably lower than for core code. This allows an open and dynamic community to experiment with new ideas. The community not only bears the cost of failure but also brings in new perspectives that the core maintainers don't have. Some of these extensions will be absorbed into the core or become part of a commonly used bundle. Others will wither and die.
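To make the pattern concrete, here is a minimal sketch in Python. The names (`Core`, `register`) are mine, not any particular project's API; the point is that the extension point stays small and stable while individual modules are cheap to try and cheap to lose:

```python
class Core:
    """A tightly managed core with a deliberately small extension point."""

    def __init__(self):
        self._modules = {}  # the experimental fringe lives here

    def register(self, name, handler):
        # The bar for module code is low: anyone can register one.
        self._modules[name] = handler

    def render(self, content):
        # The core pipeline itself stays stable and predictable.
        for name, handler in self._modules.items():
            try:
                content = handler(content)
            except Exception:
                # A failed module costs the community little; the core
                # survives and the experiment simply withers.
                continue
        return content

core = Core()
core.register("shout", lambda text: text.upper())  # a module that thrives
core.register("buggy", lambda text: text / 0)      # a module that fails
print(core.render("hello, fringe"))                # -> HELLO, FRINGE
```

The design choice worth noticing is that a failed module costs the core nothing, which is exactly what makes cheap experimentation possible.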
This pattern of stable core and innovative fringe does not have to be unique to open source. It doesn't really matter how the core is licensed if entry into the fringe is open. The one area where open source licensing helps the core is in achieving a level of ubiquity that attracts a community. High licensing fees or the bottleneck of a traditional software sales process can limit the size of the user base and discourage contribution. The community developer needs the promise of a large user base to justify the time investment of contributing his work. But open source is not the only way to achieve a large user base and there are plenty of examples of commercial software and SaaS user communities with successful code sharing ecosystems. One company that is particularly successful is Salesforce.com. Salesforce has created a very powerful API and module packaging framework. More importantly, they have priced the product to penetrate small and medium-sized companies and non-profits. In particular, the Salesforce Foundation makes the product very inexpensive for non-profits. These strategies have infused the customer population with lots of companies that are highly motivated to share. These customers are active both within the AppExchange community and in integrating third-party open source projects like Plone (see Plone/Salesforce Integration) and Drupal (see the Salesforce module).
On the flip side, there are plenty of commercial open source software companies that have not been able to leverage community development. There are two primary ways an open source software product can prevent community development from flourishing. First, there can be factors that limit adoption of the core, as in the case of open core products that discourage the use of the open source version. Second, the architecture can be so monolithic that the only way for an outsider to add a feature is to fork the codebase. All a software producer has to do is solve those two problems. The supplier doesn't even really need to provide tools for sharing code. If the install base is large (which usually means the software is useful) and the architecture is modular, developers will find a way to communicate and share code. They can use services like Google Code, GitHub, or SourceForge to collaborate. Too often companies put code sharing tools in place without solving those two problems and then complain when they don't see any activity. In many cases, the user audience is too small to support a contributing ecosystem. In other cases, the incentives are not lined up appropriately. Shirky names three ingredients of a collaborative community: promise, tools, and bargain. The promise is what the contributor can expect to get out of participating; the bargain is the terms and rules for participating. Open source licensing helps with the promise and the bargain. The open source promise is that others may help improve your code. The open source bargain is that people can't take exclusive ownership of and profit from your work. Some communities, like Salesforce and Joomla!, have thriving commercial extension marketplaces. In those cases the promise and bargain are different but they are very motivating.
Any widely used software application, if it is appropriately designed, can benefit from community development and the benefits are not limited to the successful contributions. Perhaps the biggest value of community development is increased innovation through reducing the cost of failure. In order to harvest this value, a company needs to actively monitor, analyze and participate in its community. There is a goldmine of information sitting in the chaff of failed contributions as well as the modules that do gain traction. Companies that ignore this value do so at their own peril.
Jan 01, 2010
Thank you to all my clients, colleagues and readers for making 2009 another great year. I wish you all a healthy, fulfilling, and prosperous 2010!
Dec 18, 2009
This post was originally written as a comment on Jon Marks' excellent post Visions of Jon: WCM is for Losers but it got out of hand and I suspect that it is too long for a comment, so I am republishing it here.
Thanks for the great post, Jon! I have to agree with you that the term Web Content Management System is misleading because of its diverse focus on multiple publishing channels. You probably remember that in the old days (~1996), the term "CMS" was first used to describe products like Vignette, and what are now called ECM systems were just called Document Management Systems, Records Management Systems, etc. When the big DMS vendors started to covet the term "content," the (then) smaller WCM vendors had to slide over a bit and qualify their category with a "W." Then some of them started to ruin themselves by trying to expand into document management, records management, etc. - but that's another story.
But enough about the Ghosts of Christmas Past... I agree with the point that a WCMS has multiple aspects. I would name three: a management tier to edit semi-structured content, a repository to store that content, and a rendering tier to format it for delivery. Usually the repository is more tightly coupled to the management tier, so it is often tucked into the management application. In fact, the three components are often bundled for convenience.
In my mind, what sets WCM apart from the other forms of CMS is the C. I still think of Content as having more structure (and less embedded formatting) than what you typically find in an ECM repository. In the ECM world, the structured information is called metadata and is not considered part of the asset (typically an MS Office file that jumbles together information and presentation). A WCM asset needs to be rendered (given a format) to be useful to a consumer. This is why a WCMS needs a good rendering system.
Most ECM assets can just be downloaded but the range of formats they can take is limited. You can get a different file format (like a PDF) or a different scaling or cropping of an image but the output looks essentially the same. ECM has tricks to add structured information like metadata and embedded tags but that is going against the grain. WCM, which is inherently structured, knows what each of the different elements of an asset means. I like to say that ECM is documents pretending to be content and WCM is content pretending to be documents. That is, ECM starts with a document and tries to pull information out of it while WCM starts with structured information and renders it into a .html, .pdf, .xml, or any other kind of document.
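To illustrate, here is a toy sketch (the field names and functions are hypothetical, not from any product) of the WCM view of the world: the asset is structured data, and a "document" is just one of many renderings of it:

```python
# The asset is structure, not formatting: every field has meaning.
article = {
    "title": "WCM is for Losers",
    "author": "Jon Marks",
    "body": "The term is misleading...",
}

def to_html(item):
    # One rendering of the asset, for the web channel...
    return ("<article><h1>{title}</h1><p>By {author}</p>"
            "<p>{body}</p></article>").format(**item)

def to_xml(item):
    # ...and another, for syndication; the content itself never changes.
    return ("<article><title>{title}</title><author>{author}</author>"
            "<body>{body}</body></article>").format(**item)

print(to_html(article))
print(to_xml(article))
```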
So, at the end of it all, I would say that WCM should be renamed back to CM and ECM should be renamed to EDM: Enterprise Document Management.
Dec 17, 2009
I have been catching up on product demos recently and have seen some really elegant functionality for marketers. Several products have introduced modules that allow CMS users to plan, implement, and measure multivariate testing, search engine optimization, and personalization without the continued support of a developer. A developer has to put the tools in place but, after that, the CMS user can fully control the behavior of the site and optimize its business impact.
It is easy to get excited by this functionality. But then you think of the difficulty your average organization has with even the basic aspects of content production and you wonder if they are ready for these tools. How can you do an A/B test if getting someone to write option "A" is a struggle and option "B" would be a miracle? Of course, not all companies suffer from these issues. The more sophisticated publishers and eCommerce companies have been doing these advanced site management activities even when the technology stood in their way rather than facilitated them. But your average marketing site is still in the dark ages when it comes to managing content.
Will your average marketing group be able to leverage this functionality? Or will it be yet another unused feature that clutters the user interface? My hope is that once contributors and site managers start to experiment and see results, they will rapidly evolve to be more committed and proactive, in the same way social media participants started to embrace tagging when they saw their work surface in more prominent places. A tight and accurate feedback loop is important in any learning process, so maybe access to testing and metrics has been the missing feature rather than usability-oriented functionality like in-context editing. It is probably more effective to make a task interesting and rewarding than to make uninteresting work easier to do. The former strategy will cause a user to overcome any obstacle he is faced with; with the latter, there will always be some excuse not to do the work. I touched on this idea in an earlier post called "What your intranet needs is a publisher!"
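For what it's worth, the mechanics behind these testing modules are not the hard part. Here is a hypothetical sketch of how a CMS might deterministically assign visitors to test variants (the function name and scheme are mine, not any vendor's implementation); the organizational challenge of producing options "A" and "B" remains:

```python
import hashlib

def assign_variant(visitor_id, test_name, variants=("A", "B")):
    # Hash a stable visitor id together with the test name so the
    # same visitor always lands in the same bucket across requests.
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The CMS serves the assigned variant and records conversions against it.
print(assign_variant("visitor-42", "homepage-headline"))  # stable across calls
```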
That is not to say that all you need to do is drop in personalization functionality one day and your organization will immediately change. I have seen many organizations fail with this approach because they didn't have the content to support it. I saw a slide from a Sitecore demonstration that shows a gradual maturity process beginning with understanding your users and your content and ending with real-time personalization.
I very much agree with the model but I think that the trick to success is being able to see real results at every step of the way. Most organizations will not have the commitment to sustain a continuous effort through long periods of no results. Results are easy to see from the conversion tracking step onward. Getting to that point, however, will require organizations to aggressively push through traffic analysis, experience analysis, and content profiling. Perhaps there is room to get some intermediate outcomes along the way that can keep an organization hungry for more. These outcomes can't just be in the form of a spreadsheet that is out of date by the time it is presented. There needs to be some tangible business impact, like being able to make a small change to the website that yields some measurable result. If this is not possible, the best one can do is time-box this period of no feedback and have faith that the results will be worth that fixed investment.
These tools are just coming onto the market now but I am really looking forward to seeing the organizational response to them. The early adopters will certainly benefit from the functionality because, by being early adopters, they have already shown their commitment to improvement. It will be really interesting to see examples of this functionality transforming mainstream organizations that have struggled in the past.
Dec 07, 2009
I generally dislike the "map of the market" approach to describing a software market because the actual use of software is too specific to be generalized into abstract dimensions like "high and low end" or "innovative." However, during a recent Content Here leadership offsite (OK, I went on a bike ride), I was thinking about Content Here's positioning in the marketplace and I found a picture to be quite helpful. This is what I came up with.
The primary point of the diagram is to show that consumers enter the information marketplace with different types of questions and information providers offer different types of answers. The intent of the question appears on a continuum that ranges from problem focused to solution focused. Technology buyers are focused on their business problems. As you progress towards the "problem" end of the spectrum, the questions get more specific and require intimate knowledge of the context and domain to answer. On the other end of the spectrum, the consumers (software vendors and investors) are trying to understand trends that will inform a business strategy for managing (or investing in) a solution. The information on the solution side gets very abstract and speculative because the solutions themselves are designed to be re-usable across many different kinds of problems and software companies need to build products that will be sold in the future. In the center of the spectrum is where the interests of the buyers and sellers converge. Here the buyers are thinking a little beyond the specific problems they are trying to solve today to where they need to be in five years. Here vendors are thinking beyond how their solution measures up today to what buyers need going forward.
All of these questions are important and the marketplace for answers has evolved to serve them. On the sell side of the diagram, you have the major analyst firms, who primarily serve the technology vendors. Their information can also be useful to a CIO trying to understand trends but it will not be useful for a decision faced today. I didn't put too much thought into the positions of the "sell-side" analysts on the right side of the diagram. I would love input here.
I am more interested in the left side of the diagram — in particular, where Content Here is focused. My unofficial tag line for Content Here is "Content Here helps technology buyers be awesome at making decisions" (format borrowed from the Joel on Software article: "Figuring out what your company is all about"). Content Here tries to help clients answer the questions of what technology to buy and what to do with it. To play this role, I need a deep understanding of how the various technology products work — but not as deep as a systems integrator that specializes in that technology or the software vendor's technical support. Consequently, my reports are very technical compared to other analyst reports. I tend not to go as broad as CMS Watch because I stop keeping up with a product once I realize it is not relevant to my potential client base. Only when I hear that something has changed with the product do I check back (and this is why I don't publish a list of technologies that I follow or don't follow). Most importantly, I need to be able to rapidly discover a client's requirements through an efficient consultative process.
This hybrid approach puts Content Here in between a systems integrator and an analyst company in terms of detail. I am not surprised by the low number of competitors because this is a hard place to be. I need to dig into the many technologies I cover by building prototypes and talking with implementors. I also need to understand the business aspects of how the technologies are used. I am always on the challenging slope of the learning curve. I can neither sit back and pontificate on the abstract nor enjoy the luxury of knowing every detail. As difficult as it is to be in this position, I can't think of a more stimulating business to be in.
That is as far as I have thought things through. I would love to hear your feedback on the usefulness of the concept and the positioning of the different players. In particular, if you are a consumer of information, is the information marketplace serving you with the appropriate level of detail?
Dec 04, 2009
I am catching up from a whirlwind of activity at the Gilbane Conference in Boston this week. I gave three presentations (below), organized a breakfast for open source CMS software executives, and had a great time talking with so many industry friends. It was particularly nice to meet people like Scott Liewehr (@sliewehr), Scott Paley (@spaley), Jeffrey MacIntyre (@jeffmacintyre), and Lars Trieloff (@trieloff) who I had known only virtually before. I wished I could have stayed for the third day of the conference but I had to get back to work. Everyone seemed to feel very positive about business and where content management is going so I left brimming with enthusiasm for 2010.
What follows is a brief run through of the presentations I gave.
On Tuesday morning, I did a presentation called Open Source WCM and Standards for the CMS Professionals Summit. To summarize, open source really has nothing to do with open standards. "Open source" describes a license: any software can be open source if it is assigned an OSI-compliant license. Open standards are about software design — technology choices about which standards to support. That said, there are three areas where open source and open standards converge:
- when an open source project is started to create a reference implementation for an emerging standard (note: on slide 5 I didn't say that Alfresco was created as a reference implementation for CMIS; you had to be there);
- when there is a chicken-and-egg problem of value and adoption (like RSS and now RDF), some open source projects have the install base to easily create widespread support and a lower hurdle to creating an implementation;
- projects driven by developers tend to put a higher priority on aspects like integration and attention to technical detail than marketing-driven products, which are more feature-oriented.
How to Select a WCMS
My "How to select a WCMS" workshop is turning into a signature presentation for me. There was not too much difference from prior presentations of this workshop except this time I went into more detail on using doubt to make a decision. At that time, my friend Tim McLaughlin, from Siteworx, had popped into the room. He told me afterwards that he agrees with the approach and even read a scientific paper that found that the best decision makers use this method of elimination for choosing. Tim, if you are reading this, you owe me that link!
One particularly interesting part of that workshop was that one audience member doubted the necessity of content management systems in general, so I was put into the position of having to defend the industry. He was coming from an organization that was managing 100 very small, unique, independently managed, and unimportant websites. In this case, I had to agree, and I used the metaphor of a factory: you don't build a factory to produce fewer than 10 units. A CMS would not help him until he started to try to manage all those websites in a more uniform way. For the time being, I suggested that he look into Adobe Contribute, which handles basic things like deployment and library services without trying to manage reusable content.
I presented with Kathleen Reidy from The 451 Group on "The rise of Open Source WCM." Kathleen had some great slides talking about commercial open source vendors in the market. My presentation was from a buyer and implementer perspective. The general message was that buyers have the benefit of more choices and more information but they also have the responsibility to take a more active approach to selection. They can't expect an analyst firm or salesman to tell them what is the best product. They need to understand their requirements and implement a solution that solves their business problems. Open source software suppliers depend on customers doing more pre-sales work themselves and they pass that savings back in the form of no (or low) licensing fees.
The biggest disappointment was when Deane Barker misunderstood slide 5 and tweeted that I think that open source is like a free puppy. Of course, this was re-tweeted several times. As I have said, all CMSs are like puppies: some are free, some cost lots of money, but all require care and feeding. If you have the intention of owning a puppy and understand the costs involved, you appreciate that a free puppy is less expensive than a puppy that you have to buy. If you spontaneously come home with a puppy just because it was free, you might be in for an unpleasant surprise. Similarly, if you adopt a CMS just because it is free and have not budgeted for properly implementing a website, you will get yourself into trouble. Later, Deane and I had a great dinner together with David Hobbs before I headed back home.
Nov 23, 2009
Over the past few years we have witnessed the transfer of website ownership out of the technology organization and into the department that owns the information. This has been a positive trend. While technology is certainly necessary to run a website, what makes a website successful is the content and its ability to communicate to an audience. Even though most I.T. departments recognize this, they have other systems that compete for enhancement resources and they often find themselves blamed for latency on web initiatives. Furthermore, technology is an easy scapegoat for more embarrassing organizational deficiencies like not having skilled and motivated contributors. Removing I.T. from the equation puts power and accountability squarely in the hands of the people who own the content (and the conversation) and removes any and all ambiguity about who to blame when the website fails to produce desired results.
As attractive as these outcomes are, transferring ownership is not comfortable. It puts a business manager into a position that he or she has not been in before: an owner of technology. Growing up through the ranks of leadership, most managers didn't sign up for planning technology projects, directing technical staff, or taking shifts on "pager duty." They purposely avoided a technical career track and neglected to build those skills. On a good day, they can ignore technology but on a bad day... well, we try not to think about those. Instinctively, the business manager wants someone who he can shout at (or hover over) until the issue is fixed. But who is that person and who pays him?
The first step is to hire a web development shop (systems integrator) to do the initial implementation. Even if you are an I.T. organization, I recommend bringing in a systems integrator who has significant experience on the platform you plan to deploy. These systems are extremely flexible, which means that there are plenty of bad design choices that you can make; you want to work with someone who has ruled out at least a few of them. But what happens after launch? An optimistic (naive, really) approach is to believe that, once the system is implemented, business users can just run it themselves. That doesn't work. Even if you are lucky enough to have no bugs and contributors that don't make mistakes, a successful web initiative attracts ideas for making it better. Ignoring those ideas is demoralizing to the users and causes them to dislike the solution.
In many cases, the business unit will create a retainer relationship with the systems integrator where the systems integrator plays the role of outsourced I.T. This typically only works when there is an internal staff member who plays the role of "product manager" and directs the external development team. Otherwise, the relationship can get very costly as hundreds of uncoordinated change requests start to degrade the usability and maintainability of the solution. It is also costly to have this external partner provide the first line of support. You don't want to pay consultant rates to triage "the Internet is down" and "I can't find the 'any' key."
If it is critical to the business that the website continually adapt to changing requirements and opportunities, it will be expensive to maintain a team of consultants working on an endless enhancement queue. At some point, it will be cost effective to hire internal development resources to implement routine enhancements (like template changes). I think that more and more companies are falling into this category. The web is changing so fast; you need to try new things and continually improve.
Here are some staffing levels to consider:
- __Infrequently changing website:__ one website product manager managing an external developer. The product manager also does training, contributor mentorship, and first-line technical support (including pager duty to differentiate between an unplanned outage and "the AOL is broken"). The product manager should also maintain and interpret website analytics reports and Google Webmaster Tools.
- __Continually optimized website:__ a website product manager with web developer skills. In this case, the product manager has the skills to reliably execute routine template edits and basic system administration tasks (configuring accounts and roles on the system, unlocking files, etc.). It is a good idea for the third-party systems integration company to regularly review new code to suggest refactoring for quality, maintainability, and security. As with the previous staffing level, the systems integrator is also brought in for larger, more complex enhancement projects.
- __Steadily changing website:__ a website product manager managing an internal web developer. The web developer has solid DHTML development skills and is proficient in the template development syntax of the WCMS. As with the previous level, a third-party systems integrator does periodic code review and executes larger enhancement projects. The development queue consists of medium-sized (3 days to execute) enhancements plus a steady stream of tweaks.
- __Rapidly changing website:__ when I interviewed publishing companies for my Web Technologies for Publishers report series, I learned that most successful web publishing companies had a development team of 2-5 developers and deployed new code as frequently as once per week. If your business is your website, this is the staffing level you probably need.
Full outsourcing of all technology knowledge and ownership is not realistic. At the least, you need to have staff that understand the technology enough to know what to do and how to direct people to do it. How much you go beyond that depends on how aggressively you need to advance your platform.
Nov 20, 2009
When I help companies through a CMS selection, I focus on the whole solution rather than just the functionality of the software. Factors such as vendor compatibility and expertise availability (internal and external) also affect the sustainability of the solution — sometimes even more than the feature set. Several of my recent consulting projects have de-emphasized the software component of the solution even more. These selections were part of larger initiatives that required significant help from an outside partner. In one case there was a comprehensive site redesign that included digital strategy, re-branding, and information re-architecture as well as implementing new functionality. In another case, the client was shifting to an outsourced model where a partner was to maintain the full infrastructure and assume all development responsibilities. In situations like these, while the software is important, the biggest risk is choosing the wrong partner to work with.
A dual selection like this poses a real problem. If you focus on the partner, you have the software choice made for you. I know that there are systems integrators that claim technology agnosticism but I seriously doubt them. The truth is that it takes a couple of implementations on any platform to get it right. In some cases, it takes many (like 5) projects to run out of ways to mess things up. That is the downside of flexibility. So, when someone says "technology agnostic," I hear either "we don't have skills on any platform" or, like the waitress at Bob's Country Bunker said: "Oh, we got both kinds [of music]. We got Country, and Western." It could be even worse: the integrator who claims neutrality may be paid by the software vendor to recommend a solution.
The other option is to go with a pure design agency and select a product (and integrator) afterwards to implement that design. This can be inefficient because the designers can arbitrarily and unknowingly make decisions that make implementation harder. You will probably need to rescope and refactor the design based on the native capabilities of the platform you choose. A lot of time can be saved if you have someone to suggest more efficient options during the design process. I know I am going to get slammed here by developers and software companies saying their product can do anything. If they could, they would have 100% market share. Pure strategy and design companies also tend to underestimate the cost of implementation, so you might run out of money when it is time to implement the design.
It's a chicken and egg problem. You can't choose a product until you have had help with your requirements and you can't get help with your requirements until you choose a development agency (that implicitly comes with a product). I have found this variation of my standard process to be effective.
1. Gather the requirements that are the most meaningful to a product selection. I call these leading requirements.
2. Use these functional and non-functional requirements to filter down the software marketplace to a very short list of product options (probably no more than two).
3. Find a couple of the best web agencies who specialize in working on either of these platforms — ideally, two for each of the two products.
4. Invite these agencies to present a solution based on their preferred platform. Like in my normal selection process, the presentation consists of the demonstration of scenarios that you defined when you gathered your leading requirements.
5. Evaluate the presentations across two dimensions: the product and the agency.
I have found this process to be extremely effective. The benefit that I didn't anticipate was that the preparation of the prototype tested three very important aspects of the integrators: their consultative process for turning customer articulated requirements into a solution; their mastery over the platform; and their relationship with the software vendor. But the process is not perfect. Here are some issues:
- A professional services company has lower margins than a software company. They don't have the prospect of an all-profit software license deal to justify a big bet on a sales initiative. Professional services companies will put up with a lot less runaround than a software company will — especially the good ones. So the process has to be efficient and they need to have a good shot at winning. This means that you need to: have a short list of no more than four integrators; have budgeting worked out before you contact them; and work together with them to get what you need. This collaboration is also useful in getting a feel for what it will be like to work with the firm.
- Some integrators have dedicated sales teams that can prevent you from getting to know their consulting capabilities. You should do everything you can to work directly with real consultants. Not only is it important to learn about their style and capabilities, but delivery staff are also less likely to tell you what they think you want to hear.
- Those filtering steps (1-3) are very hard if you are just learning about web development and don't know your way around the industry. You need to talk to lots of people and learn from their experiences, or work with someone who has.
Despite all of these challenges, doing a dual selection is not impossible. In fact, it can be quite fruitful. You just need to be focused and disciplined in your approach and execution. If you are interested in learning more, I am teaching a workshop on Selecting a CMS at the Gilbane Conference in Boston on December 1st. I hope to see you there.
Nov 16, 2009
While it may seem counter-intuitive to listen to a supplier telling you how to buy, you should definitely read Deane Barker's article "Five Tips to Getting a Good Response to a Content Management RFP." Deane is a co-founder of Blend Interactive, a web design and development firm. That may put him on the other side of the negotiation table but, as a potential partner, he wants you to be successful in your initiative as much as you do. That is actually not so out of the ordinary. As a consultant, you want to spend your time with clients who you have a great working relationship with. The better consultants can be more selective in the opportunities they pursue and nothing sends off more bad vibes than a dysfunctional selection process.
The article agrees with all the advice that I give on my blog (it even quotes me!). The one tip that buyers are going to question is openly stating the budget. I tend to go back and forth on that myself. The benefit is that budget is the best way to communicate what you think the size of the project is. Getting that piece of information out in the open early will help the vendor present a solution that is in line with what you had envisioned. It also reduces the risk of harboring unrealistic expectations of what you can do. The risk of communicating budget is that the integrator will inflate the price to maximize margin. My current thinking is that you shouldn't be working with a partner who you fear will take advantage of you. You should structure your selection process to verify the vendor's integrity as much as its skills and experience.
If you are in the market for a development partner, read these tips. If you can get to Boston next month, you should join me in my CMS Selection Workshop at the Gilbane conference. In fact, Deane is going to be there too.