<!-- Content Here -->

Where content meets technology

Aug 22, 2006

Nice article benchmarking Joomla! and Drupal

I just saw this blog post by Dries, who is one of the lead programmers on the Drupal project. The article compares the performance of Drupal and Joomla!. The tests look pretty fair. The interesting finding is that, while Joomla! generates pages faster, Drupal's caching system is more efficient. Joomla!'s performance boost from caching seems really modest. Perhaps Dries knows more about Drupal's caching than he does about Joomla!'s, which would be understandable. In both of these applications, the caching mechanism is not used for logged-in users. Both projects are adding framework layers and making the code more object oriented. These layers of abstraction make the code more maintainable but also make the application do more work when executing a request. Both give you the choice of developing PHP-based templates or using a rendering engine with a non-PHP tagging syntax. Still, the benchmarks look pretty good. I am sure they would be even better with a PHP accelerator.
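
To see why anonymous traffic benefits so much more from caching than logged-in traffic, here is a minimal conceptual sketch of the full-page caching pattern in Python. This is not actual Drupal or Joomla! code; the function names, the session-cookie check, and the cache lifetime are illustrative assumptions.

```python
import time

# Hypothetical full-page cache: anonymous requests can be served straight from
# the cache, while logged-in requests always fall through to a full page build.
PAGE_CACHE = {}          # url -> (rendered_html, timestamp)
CACHE_LIFETIME = 300     # seconds; an assumed value

def handle_request(url, cookies):
    # Logged-in users see personalized pages, so the cache is bypassed entirely.
    # This is why caching barely helps authenticated traffic in either CMS.
    if "session_id" in cookies:
        return render_page(url, personalized=True)

    cached = PAGE_CACHE.get(url)
    if cached and time.time() - cached[1] < CACHE_LIFETIME:
        return cached[0]    # cheap path: skip bootstrapping, queries, templating

    html = render_page(url, personalized=False)    # expensive: full framework stack
    PAGE_CACHE[url] = (html, time.time())
    return html

def render_page(url, personalized):
    # Stand-in for the real work: bootstrap the framework, query the database,
    # and run the template engine. The abstraction layers mentioned above live here.
    return f"<html><!-- {url} personalized={personalized} --></html>"

if __name__ == "__main__":
    print(handle_request("/about", cookies={}))                      # built, then cached
    print(handle_request("/about", cookies={}))                      # served from cache
    print(handle_request("/about", cookies={"session_id": "abc"}))   # cache bypassed
```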

On a similar topic, I was at a meeting of the Boston Chapter of CM Professionals and one of the attendees brought up the fact that most commercial software EULAs and certain Federal laws forbid customers from publishing reviews or benchmarks of their software. Man, I would get into trouble if I wrote about commercial software. If anyone knows more about this, please post a comment or send me an email.

Aug 22, 2006

Managing Projects with Trac

[This is a continuation of a sporadic series that started with this post]

In open source projects, bug lists are not just a record of defects; they are often also the main organizing system for the project. Bug lists are where new ideas for functionality are captured, where releases are planned, and where the vision of the project comes together. Not everyone can contribute code, but anyone can report bugs and describe their needs. I don't think corporate I.T. shops leverage issue trackers enough. They mainly see them as a backward-looking, reactive tool rather than a forward-looking planning tool.

We have standardized on the open source issue tracking software Trac by Edgewall Software. Here is my system for using it....

Usually my first engagement with a client is a short scoping/roadmap project. During this time, I collect high level requirements and put them into a roadmap, recommend technologies, and do some estimation and planning around the initial release of the application. During this phase, I usually work in a spreadsheet because it is quick for data entry and editing. I try to organize functionality into phased releases that balance time to delivery and functionality. I generally follow the practice of making the releases as small as possible. It is also a good idea to organize releases into themes. This makes it easy to communicate to stakeholders what is coming up next and when.

I begin using Trac at the start of the first release project. Here is how I configure it (a scripted sketch of this setup follows the list):

  • I create a component for each of the major functional areas of the application. Usually there is a "content" component.
  • I create version numbers for each of the releases. 1.0 is the initial release. Then 1.1 for the release after that and so on.
  • To the default ticket types (defect, enhancement, task), I add "port" to represent something that exists in the legacy system being replaced, and "question."
  • In the Roadmap section, I create the following milestones for each release:
      • Infrastructure setup
      • Design (e.g. 1.1 Online Calendar: 1 Design)
      • Code Complete (e.g. 1.1 Online Calendar: 2 Code Complete)
      • QA Complete (e.g. 1.1 Online Calendar: 3 QA Complete)
      • Maintenance (e.g. 1.1 Online Calendar: 4 Maintenance)
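
The setup above can also be scripted instead of clicked through the web admin. Here is a rough sketch that drives the trac-admin command line tool from Python, assuming your Trac version exposes the component, version, ticket_type, and milestone admin commands. The environment path, the non-"content" component names, the default owner, and the numbering of the infrastructure milestone are placeholders.

```python
import subprocess

TRAC_ENV = "/var/trac/myproject"   # placeholder path to the Trac environment

def trac_admin(*args):
    # Thin wrapper around the trac-admin command line tool.
    subprocess.run(["trac-admin", TRAC_ENV, *args], check=True)

# A component for each major functional area of the application.
for component in ["content", "calendar", "search"]:
    trac_admin("component", "add", component, "somebody")   # name, default owner

# A version number for each release.
for version in ["1.0", "1.1"]:
    trac_admin("version", "add", version)

# Extra ticket types on top of the defaults (defect, enhancement, task).
for ticket_type in ["port", "question"]:
    trac_admin("ticket_type", "add", ticket_type)

# Milestones marking the phases of the 1.1 release.
for milestone in [
    "1.1 Online Calendar: 0 Infrastructure Setup",
    "1.1 Online Calendar: 1 Design",
    "1.1 Online Calendar: 2 Code Complete",
    "1.1 Online Calendar: 3 QA Complete",
    "1.1 Online Calendar: 4 Maintenance",
]:
    trac_admin("milestone", "add", milestone)
```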

Trac comes with some basic reports but you can add new reports through the user interface if you know SQL. I create a report for each release (filtering on the version field). The Roadmap page is a dashboard showing how many tickets have been assigned to each milestone and how many are outstanding. When I think that I have a good idea of how to implement a feature, I add comments to the ticket. I can also associate files such as screenshots when I want to show how the feature might look.
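
Under the hood, all of this lives in a single ticket table, which is also what those SQL-based reports query. As a rough illustration of what a per-release report boils down to (assuming a default SQLite-backed environment; the database path and the release number are placeholders), here is a Roadmap-style count of open versus total tickets per milestone:

```python
import sqlite3

# Placeholder path: the default SQLite database inside a Trac environment.
DB_PATH = "/var/trac/myproject/db/trac.db"

def milestone_progress(version):
    """Print open vs. total ticket counts per milestone for one release."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        """
        SELECT milestone,
               SUM(CASE WHEN status <> 'closed' THEN 1 ELSE 0 END) AS open_tickets,
               COUNT(*) AS total_tickets
        FROM ticket
        WHERE version = ?
        GROUP BY milestone
        ORDER BY milestone
        """,
        (version,),
    )
    for milestone, open_tickets, total in rows:
        print(f"{milestone}: {open_tickets} open / {total} total")
    conn.close()

if __name__ == "__main__":
    milestone_progress("1.1")
```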

So that is how I use Trac as an issue tracker. But Trac does so much more. Since we use Subversion, we set it up to browse the source code repository. There is also a wiki feature which I use for project documentation. In general, when documentation is on a wiki, I find that people become more concerned with the content than with the formatting. That is a good thing because you want to reduce the effort to make documentation current. Formatting puts up an unnecessary barrier because it sets the expectation that something has to be really polished before it is shown.

The Timeline is a view of all that is happening on the project. You can filter what to include from the following options: milestones, ticket changes, repository checkins, and wiki changes. The Timeline view also publishes to RSS so you can see all the information within your RSS reader. However, I still like to get these notices over email which is another configuration option. There is nothing like knowing immediately when someone checks in some code so that you can review and advise if there is a preferred way. Our developers usually add the ticket number to their checkin comment so you can see why the code was checked in.
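
The checkin comment convention can be taken a step further by automating the link between revisions and tickets. Here is a hypothetical sketch of the idea, a Subversion post-commit hook that pulls ticket references out of the log message; the "#123" convention and the print at the end stand in for whatever notification or ticket update you actually wire up.

```python
import re
import subprocess
import sys

# Subversion passes the repository path and the new revision to post-commit hooks.
repo, rev = sys.argv[1], sys.argv[2]

# Read the commit message for this revision.
message = subprocess.run(
    ["svnlook", "log", "-r", rev, repo],
    capture_output=True, text=True, check=True,
).stdout

# Our convention: developers mention tickets as "#123" in checkin comments.
tickets = re.findall(r"#(\d+)", message)
print(f"Revision {rev} references tickets: {', '.join(tickets) or 'none'}")
```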

And that is all there is to it. Trac is a simple tool but extremely useful. All of my clients that have used it so far have liked it although they hate it when I assign them tickets ;)

Aug 21, 2006

Whatever happened to the URL?

Even back when I was developing websites in 1998, it was considered really amateur to use frames. One of the primary arguments against using frames is that they mess up bookmarking (that, and they cause the back button to behave inconsistently). The browser does not know which frame within the frameset you are interested in. When you reload the page, the frameset reloads to its default state. The only way to get back is to re-enact the sequence of actions that took you to that view.

The single most important thing about the web is the URL. The URL is a resource's identity on the web. It is the magic behind links. It is your only way to get back to where you were. Without URLs, the interconnected web melts into a de-referenced puddle.

So why are there so many new web technologies disrespecting URL-based navigation? AJAX is actually not so bad as long as you use it for targeted functionality and give the user an alternative link-based navigation. But JSF and Flash based web applications really annoy me. Applications built on these technologies seem to have abandoned the concept of the web and are just using the browser as a runtime environment. That may be OK if you think of the web as just an efficient software deployment tool, but it is not OK if you want your application to behave as a node in an interconnected network. With content, especially web content but also documents, it is absolutely essential that each asset be referenceable. Referenceable content allows for very easy integration across systems. For example, you can link to an item in your Digital Records Management system from within your Customer Relationship Management system so a user is able to see an invoice that a customer complained about. This is the ultimate in loosely coupled architectures. With a WYSIWYG editor, a very non-technical business user is able to create all sorts of aggregated views of content. Why even bother with a web services API if you can't even do this?

So if you are like me and you believe that the Internet, Web 2.0, and the Semantic Web are all about making connections with content, stay away from technologies and uses of technologies that undermine the linkages.

Aug 15, 2006

Usability and Intuition

I am going to stop using the word "intuitive" when I am talking about software, and I think that everyone else should too. Judging from a Google search on my blog, I seem to be doing a good job of not abusing the word so far. But I want to use "intuitive" even less. The problem is that when people say that something is "intuitive," they are implying that it makes intrinsic sense to any rational human being. However, what they really mean is that it makes sense to them based on their own frame of reference. This is, of course, irrelevant to anyone who does not have the same exact frame of reference. So rather than say something is intuitive, why not say that it would probably make sense to someone who is used to using software XYZ, and not be so egocentric?

I had understood this point at some level for a long time, but it became really obvious to me recently when someone I know was having a really hard time learning to use a Mac after years of being a PC user. I am not naming names because it is so uncool and potentially socially alienating not to gush over the Mac UI. The Mac and Windows are similar in some ways and, in others, they are very different. Those differences are extremely disorienting and frustrating when all you want to do is move around the computer with the instincts that you have developed over years of using another platform. You learn to use a computer. You are not born with the instincts to use one. Therefore, it is not intuitive. Duh.

That is not to say that a software program cannot make itself more learnable by behaving consistently across its different functions. I think that is what people sometimes mean by the word intuitive: a user can infer how one feature is going to work based on the behavior of another feature that the user has already learned. For example, if the application always has right-click context-sensitive menus, a user trains himself to right-click when trying to execute a function. However, there is no innate, primordial instinct to right-click. The user is just making logical assumptions after getting over an initial learning curve.

Despite claims of intuitiveness and user-friendliness, usability is probably the single most vexing issue in the content management industry. Most people don't like their content management tools and undermine their employers' content management efforts by working around them. So what can be done to solve this problem? If you let go of the intuitive myth, you can go in two general directions: you can accept that people will need training on software, or you can try to mimic things that you know your users are familiar with (in other words, piggyback off of someone else's training). Both of these have their issues and, chances are, you will have to do a little bit of both.

Over the past few years, we have been conditioned to expect that software can be deployed without training. Years ago, when I worked for relatively large companies, there were regularly held training sessions on how to use critical business applications. I see less of that now because of standardization on suites like Microsoft Office. If you don't know how to use MS Word, you probably wouldn't get past the first interview. There is also now a critical mass of experienced users, so a newbie can "groundhog" over his cubicle wall and ask - so training is happening, just not in the classroom. Public facing web applications are even harder to train people on because new users arrive totally spontaneously. Online help somewhat solves the problem but people seem to be reading it less and less.

The mimic strategy is in tension with the software industry's culture of innovation. Companies want to differentiate. They want their customers to upgrade to new versions. Software designers and developers like to be clever. Users don't want to be clever at all. They don't want to think about the tool. They are totally focused on their business task.

Back in the early days of the web, things were really chaotic with web designers actively trying to break conventions and build something unique. This was when you had all those websites with tiny text and icons meant to be cryptic. For a little while, that was OK because the web was recreational and users were into exploring and appreciated the reward of finding hidden gems. Now that the web has matured into a business tool that people need to use to get their job done, websites look a lot more uniform: tabbed or left side navigation, a footer of informational links, search and help on the upper right. While not truly "intuitive" these designs are familiar to someone that is used to visiting websites. Designers have learned to focus their creativity to solve specific problems within a uniform framework.

Both of these practices can and have been applied to content management interfaces up to a certain point. You see contextual help and you see user interface patterns modeled after familiar business applications such as Office. You even see content management software using familiar external tools as work environments - for example, authoring in Word, Outlook integration, dragging and dropping files as in Windows Explorer. However, these strategies tend to break down when dealing with concepts specific to content management. For example, versioning is totally beyond the physical desktop metaphor on which most desktop applications are based. Most users don't get versioning because it deals with a fourth dimension (time) that is not well represented in a three dimensional virtual space, so they create multiple copies of the document with different file names. Workflow state is another concept that is difficult to represent with familiar tools (although a common hack is to use folders). So are the concepts of metadata and single sourcing. CMSs that embrace the familiar-tools strategy tend to under-deliver or under-emphasize these content management features.

Here are some adjustments that I think the content management industry needs to adopt, rather than going on believing it can independently innovate away the usability issue.

  1. Accept that content management is a discipline with some skills and concepts that practitioners need to learn. This is why CM Professionals was created. Vendors have to stop teaching these concepts only within the context of their own toolset.

  2. Participate in User Interface standards that address content management best practices and theory. Someone started a working group on OpenUsability. It is probably not going to go anywhere unless the commercial vendors get behind it.

  3. Create a standard CMS-enabled UI. This is the idea behind the Apogee Project. There are also extensions to Windows Explorer that add right-click menus for content management tasks. For an example of this, check out the TortoiseCVS and TortoiseSVN projects.

  4. When selecting a CMS, take into consideration tools that the intended users are used to using right now. Show them the UI. It will either resonate with them or not. This is why there are so many different CMS out there. Each one is the result of someone (or a group of people) solving a problem from their own frame of reference.

  5. Do not give every user group the exact same tools and expect them to be equally happy with them.

  6. Accept that you will need to customize the user interface to make it work within the users' business context.

A couple of trends make me think that these adjustments are within reach. The rise of open source opens a number of possibilities: the ability to experiment with different solutions, the ability to customize, and the value that the open source community places on standards. Also, content management functionality is being pushed down into the infrastructure layer where it can be more ubiquitous. Eventually, features like versioning, metadata, and workflow state will become better integrated into the baseline computing environment that everyone is familiar with. I know that there will be compromises between purist theory and practical implementation, but it is a step in the right direction.

People don't make intuitive user interfaces. Someone's frame of reference makes an interface appear intuitive. Software makers need to pick a target audience and leverage what these users already know. Users need to expand their frame of reference to include content management concepts. Consultants need to really understand where the users are coming from and use that as a starting point for designing the solution. It is just intuitive. DOH!

Aug 10, 2006

IBM agrees to buy FileNet for $1.6 bln

And the mergers and acquisitions continue.... As analysts have been saying for months, infrastructure companies are buying up the content management pure plays and positioning ECM as infrastructure. IBM recently announced that it is buying FileNet. That leaves Vignette, Stellent, Interwoven, and BroadVision for companies like Sun, HP, Red Hat, Novell, etc. to fight over. I don't know who is going to pair with whom, but if I had to guess...

  • Sun buys Vignette

  • Novell buys BroadVision

  • Red Hat buys Alfresco

  • HP buys Interwoven who buys Stellent

  • Google makes them all irrelevant ;)

Aug 02, 2006

Jul 28, 2006

Open Source Governance within the Enterprise

I recently spoke to a friend of mine, who works for a large financial services company, about their open source policies and practices. The big financial services companies are, in general, slightly ahead of the curve in their use of open source. That, and the fact that they are also highly regulated and carry potentially high risk exposure, makes financial services a good place to look for open source governance models. Why do you need open source governance policies and procedures? Because open source software can be acquired differently than commercial software. Unlike commercial software, whose acquisition is regulated as it goes through a budgeting and accounting process, open source can be freely downloaded and used without institutional awareness.

Like it or not, your company is probably using more open source software than you are aware of. In fact, if you are building web applications in anything but .NET, I would be surprised if you didn't use some open source framework or development tool in your custom application. You would be crazy not to. Companies that have not adapted to the existence of open source usually have what I call a "flavor of the month" architecture where the next application is built on what was written about in the last post on Slashdot or The ServerSide. I am not saying variety is a bad thing. There just needs to be a reason to use something different - like different requirements.

Here are some things that this company does to manage software in a world where open source exists. I have heard similar processes at other major companies that I have spoken to so this company is not unique.

  • Define what open source is. They exclude Linux (which they consume like commercial software with support contracts and all) and open source software bundled in commercial products.
  • Accept that open source is out there and has potential value.
  • Sponsor a cross line of business open source review board. This organization is responsible for responding to requests to use a new open source application.
  • Manage a list of all usages of open source software within the firm. If you want to use an open source application, for research or production purposes, you consult the list for the software and version that you want to use. If you find it being used, you communicate your use to the "open source librarian." If it is not currently being used, you submit a request to the open source review board. They will tell you if there is a preferred alternative or put it on the list.
  • Publish a handbook explaining the policy and processes around open source software.

Does this type of program scale down to smaller companies with less technology discipline than a large financial services firm? I would say yes and, in some ways, it should be easier. There does not have to be a dedicated review board. An enterprise architect or architecture group should have an awareness of what software is running on the network, both commercial and open source. This knowledge will help them:

  • Standardize on some core frameworks. This will reduce maintenance costs and help with interoperability. Again, homogeneity is not the goal. Especially in a web services world, applications do not have to be built on the same platform. You just need a better excuse to deviate from the standard than "I wanted to see what all the hype was about for Ruby on Rails."
  • Get to short lists of components quicker. It is easier to select from a list of pre-qualified options than to comb SourceForge or Freshmeat every time you want to use a component.

  • Reuse your understanding of licensing implications. There are many open source licenses and some play better together than others.

  • Identify internal experts. If you know that an application was built on a technology, you can ask the developer who built it what they thought of the technology and best practices.

So, if your company is using open source software (and, chances are, it is) it is best to get a handle on what is being used and how.

Jul 24, 2006

Google is in my kitchen

I have taken a certain amount of pride in not being swept up in the Google craze. Yes, Google is the only search engine that I use and when I want to get directions, maps.google.com is the URL I enter. However, I have held back from getting a GMail account or setting up a Google Calendar. That took a lot of will power last year when people were introducing themselves as "Hi, I am so-and-so, do you want an invitation for a GMail account?"

What is my beef against the Goog? Nothing really. I have just had the same Yahoo! mail account for 8 years and I can't imagine giving it up. In fact, my loyalty to Yahoo! has become a cornerstone of my thinking about digital businesses. The idea is that you can't compete on functionality because it is less expensive for a competitor to copy your functionality than it was for you to build it in the first place. Bringing an innovative feature to market takes a lot of design and trial and error. When you copy a feature, you get to learn from your competitor's mistakes and take a more direct route to a better solution. What keeps customers is data, which translates into switching costs for users.

Email is particularly interesting because it is not just the inbox. That can be easily imported. It is the entry for you in all of your friends' and associates' address books. So, when you change your email service provider, you have to tell everyone your new address and they have to update your contact information. I know there are services like Plaxo but they creep me out too much to use. As an aside, never send out the email address that comes with your ISP. That will unnecessarily tie you to that ISP even if there is a better option out there. Most of the current AOL users that I know don't care for the service but just keep paying AOL to keep their email address. Most colleges and universities have free email forwarding services for alumni. If you are not totally ashamed of your alma mater, you might consider taking advantage of their email forwarding service.

So, back to Google. Hell or high water, I am keeping my Yahoo! mail account. But, recently Google has been working its way into my life. I just got another Dell laptop and, what do you know, Google Desktop was pre-installed. In setting up the computer, I anguished for a couple of minutes over whether to uninstall it ("It is going to slow up my machine." "It may help me someday and, besides, I want to know what all the fuss is about.") Google Desktop stayed. Then a colleague from CM Professionals sent me a link to a Google Spreadsheet. Of course, the first step was to create an account.

At first, I thought the temptation of Google was threatening my data theory. But now I am thinking that it is supporting it. Google is trying to lure me with data that I don't have: indexes of my local files, spreadsheets that my colleagues created. Once they hook me in to registering and using Google tools to collaborate with my colleagues, they will be able to offer me other services. I would have to say it worked. I registered with Google.... Using my Yahoo! mail address ;).

Jul 18, 2006

Migrating from commercial to open source CMS

I just saw a post on the Bricolage developer list asking if anyone has any experience migrating from Vignette to Bricolage. I don't know the context of this migration but I am definitely seeing a trend. Two of our current customers are hanging up their mid/upper tier WCM licenses in favor of simpler open source applications. We have one customer who is migrating from Rhythmyx to Plone and another customer who decided not to use their corporate standard of FatWire for a new web property they are launching. They are using Bricolage. Interestingly, cost was not the primary factor. Both clients wanted something simple to use and easy to extend.

Jul 13, 2006
