Jul 08, 2008
Who owns your website?
In the old days, corporate websites and externally facing web applications were managed by the only group that knew anything about technology: corporate (or enterprise) IT - along with email, phones, and printers. However, as companies incorporate websites and web applications into their product strategy, externally facing applications are starting to run into limitations from being managed by enterprise IT. They need more product-oriented ownership. You could argue that if a piece of technology is externally facing (be it a customer extranet that supports customers' use of the product, a marketing campaign website that builds brand value, or a service that a product plugs into), the technology is part of the product.
Product and enterprise technology are like yin and yang. Product technology requires constant innovation like the product it supports. Product technology is in a state of constant motion as it reaches for new opportunities to differentiate from competitors and serve customers better. Enterprise technology, most of the time, is the opposite. Enterprise technology needs to be as fixed as possible so that it can serve as a solid foundation to support various business processes. It is a cost center and money saved goes directly to the bottom line. A good enterprise technology manager looks at requests for change with skepticism. "Why is the change necessary?" "What are the risks if we do it?" "What are the risks if we don't do it?" "What is the ROI?" If enterprise IT didn't think this way, they would be rolling out a new phone system every week and that would be bad.
A best practice is having technology groups report to the business unit that their technology serves - not to a centralized CIO who is responsible for everything from buying desktops and printers to email servers. Companies that are structured this way can be more agile and there is less tension in getting things done. The trend is definitely moving in this direction. By comparison, companies that lump all computer-oriented technology together remind me of people who think that, because I work in "something related to computers," I must know why theirs is broken.
The big problem with this decentralized technology strategy is that it can be highly inefficient. There will be redundancies in software, development, and people if every department has its own IT group. The role of the CTO (or the CIO if there is no CTO) should be to balance efficiency and opportunity by setting parameters on the freedom of technology choices of the different technology groups and establishing mechanisms for deploying and running these systems efficiently. The model may be similar to the manufacturing industries, where the product designer has freedom within parameters set by engineering and manufacturing. In software, the business unit would build applications with a sanctioned (and supportable) set of tools. At Google, that is C++, Java, Python, and JavaScript. There may be a server platform and database that enterprise IT is good at managing. The business unit's product-oriented IT organization builds the application, tests it, and then hosts it on enterprise-managed infrastructure (or, better yet, a 3rd party data center). The business unit is responsible for fixing and enhancing the application. Enterprise IT keeps the servers running.
In practice, many companies work this way by having business units and departments hire outside systems integrators to be their own product-oriented IT. That usually works well except for two problem areas: enterprise IT staff feel like they were circumvented on a project they should own and, when the consultants leave, no one knows how to maintain the application. The latter problem is worse if the former is not solved - enterprise IT will be even less likely to step up and fix problems in an application that they resent. I think that to be successful in the outsourcing model, the department that owns the product needs to have technical people to maintain and extend the application. Or, at least, work with a systems integrator that they can keep on retainer for maintenance.
This has big implications for the business units and departments. They will have more autonomy but they will also have more responsibility. They will need to know how to hire technical people, execute projects, and own technology. This is new ground for many and it will be hard work to get up to speed. But, as business models become more digital, these skills will become harder for business owners to live without.
Jul 07, 2008
I have been playing around with services that help me keep track of where I am going and where I have been. So far, the standout application for me has been Tripit. You mainly use Tripit by forwarding the email confirmations that you get from the various travel booking services that you use. Tripit parses through the confirmation email and identifies where you are going and when. It also is smart enough to put together several reservations as part of the same trip. For example, if I book a hotel for January 10th and 11th and a train ticket that leaves on the 10th and returns on the 12th, Tripit knows that it is all part of the same trip. I found the email parsing capabilities to be surprisingly accurate. I occasionally try to stump Tripit by forwarding emails from international travel or very small hotels. It almost always gets it right. Recently, when using a client's corporate travel service, I noticed an option to add my itineraries directly to Tripit (see screenshot). I wouldn't be surprised if other travel services started giving their customers this option.
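Tripit's trip grouping presumably comes down to clustering reservations whose date ranges overlap or sit close together. Here is a rough Python sketch of that idea (my guess at the logic, not Tripit's actual algorithm):

```python
from datetime import date, timedelta

def group_trips(reservations, gap=timedelta(days=1)):
    """Cluster (start, end) reservations into trips when their
    date ranges overlap or fall within `gap` of each other."""
    trips = []
    for start, end in sorted(reservations):
        if trips and start <= trips[-1][1] + gap:
            # Extends the current trip: merge the date ranges.
            trips[-1] = (trips[-1][0], max(trips[-1][1], end))
        else:
            # Too far from the last trip: start a new one.
            trips.append((start, end))
    return trips

# A hotel for Jan 10-11 plus a train out on the 10th and back on
# the 12th collapse into a single Jan 10-12 trip.
reservations = [(date(2008, 1, 10), date(2008, 1, 11)),
                (date(2008, 1, 10), date(2008, 1, 12))]
print(group_trips(reservations))
# [(datetime.date(2008, 1, 10), datetime.date(2008, 1, 12))]
```

The hard part, of course, is not the clustering but parsing arbitrary confirmation emails into those date ranges in the first place.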
Tripit provides a lot of upside for the minimal effort it takes to forward an email. For one, I get a nice little itinerary that I can print out. The itinerary has the address, confirmation codes, and telephone numbers of all the places that I need to be. It even has maps. I can also subscribe to an iCal calendar of my trips so everything is put right into my local calendar (I use iCal) along with my meetings. This helps prevent me from scheduling meetings when I am en route. My wife also subscribes to my calendar so she has all the details in her personal calendar as well.
The other service that I think I will start to use is brightkite. I will use this to post when I am at a particular place and I want to meet up with a friend. I like the features of brightkite. You can easily "check in" to a location either by address or by using a saved place (called a "marker"). Brightkite supports a simple syntax that you can use over SMS. Plus you can use Ping.fm to email in your location.
I have also tried Dopplr and Fire Eagle. Dopplr didn't do it for me at all. You have to manually enter your trips. I tried to have Dopplr listen to my Tripit calendar but it was horrendously inaccurate. Dopplr had me taking all these crazy trips to various parts of the world and treated lay-overs at different airports as discrete trips. It wouldn't be so bad if Dopplr were not counting my carbon emissions. I hated feeling defensive about all those trips I was not even taking. I closed my Dopplr account.
I am on the fence about Fire Eagle. I should feel very lucky that I have an account there. I hear that invites are very hard to get. But I don't feel like I get much value from the service. Fire Eagle can listen to other services to find out where I am. I have it listening to brightkite. There are a number of other applications which I don't use (like Dopplr, Plazes, and Navizon) that also talk to Fire Eagle. The only benefit that I see is that it gives Yahoo more information to serve me up better ads (which really isn't a benefit for me) and search results (but I use Google search).
So, for now, I am sticking with Tripit and brightkite.
Jul 02, 2008
I just finished Martin Aspeli's Professional Plone Development. This is the third Plone book that I have read over the years and it is definitely the most advanced. Do not take the words "professional" and "development" in the title lightly. Martin is a brilliant developer with a long history in the Plone project so I would expect nothing less from him.
What I found most helpful about the book is its coverage of the many new concepts that were introduced in versions 2.5 and 3 of Plone. If you have been away from Plone for a while, it may be time to check back in. I think the coolest stuff is the incorporation of Zope 3 constructs. The overall trend with Zope 3 is to make the platform less monolithic and more modular. This allows you to use Zope components in everyday Python applications and to use more standard Python programming techniques in Zope applications. The core Plone community has been energized by these new ideas for a while but they are just now starting to work their way into the mainstream. If you don't know about these concepts yet, you are starting to fall behind.
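The component idea, in a nutshell, is to register small pieces of behavior against types (or interfaces) and look them up, rather than subclassing a monolith. Here is a toy adapter registry in plain Python to illustrate the concept only; the real machinery lives in zope.interface and zope.component and is considerably richer:

```python
# A toy adapter registry illustrating the Zope 3 component idea:
# look behavior up in a registry instead of inheriting it.
registry = {}

def provide_adapter(from_type, role, factory):
    """Register a factory that adapts `from_type` to `role`."""
    registry[(from_type, role)] = factory

def get_adapter(obj, role):
    """Look up and apply the adapter for this object's type."""
    return registry[(type(obj), role)](obj)

class Document:
    def __init__(self, body):
        self.body = body

# An "indexer" adapter registered for Document - the Document
# class itself knows nothing about indexing.
provide_adapter(Document, "indexer",
                lambda doc: doc.body.lower().split())

doc = Document("Plone Rocks")
print(get_adapter(doc, "indexer"))  # ['plone', 'rocks']
```

The payoff is that third-party code can add or override behavior for existing content types without touching (or monkey-patching) their classes.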
Considering the breadth, complexity, and innovativeness of Plone 3, Martin had a lot of ground to cover. Still, he was able to provide useful summaries and examples while keeping some semblance of narrative flow (don't get too attached to his case example, though; the book frequently wanders away from the example project of a movie theater website). Professional Plone Development helps orient readers and prepare them to embrace and leverage the concepts that advanced Plone developers use in successful projects. Of course, to complete his education, the developer will have to read the source code and other online resources and, of course, learn by doing. One thing I would like to see more of is references to online documentation (both in-line references and a "further reading" section at the end of each chapter).
The ideal reader of this book is someone who knows his way around Plone and is proficient in Python - perhaps someone who has read Plone Live (which is now, unfortunately, pretty outdated - so much for a constantly updating book) and has built a few sites. This kind of reader will really benefit from Martin's best practices of test-first development and setting up an efficient development environment.
I also like how Martin demystifies the Zope platform, which I have always regarded with fear and respect (except when I worked on a project to re-wire Zope to talk to an Oracle database rather than the ZODB: pt1, pt2, pt3). Martin reminds us that Zope is just a bunch of Python modules and it is OK to poke around and modify code to see how things work. Just remember to set the code back to how you found it or risk angering the mystical Zope spirit :)
Jul 01, 2008
A couple of days ago, when presenting to a client, I started to talk about different types of users and an analogy popped into my head. It seemed to resonate with the group so I thought I would share it.
I find there are two high-level approaches that people take when using software: methodically following steps, or playing around until they get the results they want. People tend to gravitate to one or the other.
People who like to follow steps use software like they are baking a cake. When you bake a cake, the ingredients combine to create complex chemical reactions that you have no hope of understanding. The best you can do is religiously follow the steps and hope for the results that are promised. (Or at least that is how most people bake cakes. I am sure that expert bakers are different.) For many, that is what using technology is like. Cooking, for most of us, is a far more experimental exercise. You can follow portions of the recipe and improvise the rest. You can add flavors until you get what you want. If it starts to taste bad, you can usually bring it back to edible by adding a little of this or a little of that.
Not to be age-ist, but millennials are often chefs. They are comfortable enough with technology to improvise when completing a task or use a tool to do something completely new. Digital natives aren't afraid they will "break the internet." They won't read the documentation until they are stuck. People who didn't grow up with software tend to be much more cautious and follow the directions. They panic a little when something unexpected happens.
So why does this matter? It matters for user interface design when you have to serve both of these user populations. For the bakers, step-by-step instructions built right into the UI help. You also need to be very careful about consistency in naming conventions (just like the story of the user not being able to find the "any" key, a baker will tense up if the instructions talk about a "save" button but the button on the form says "submit"). For chefs, you need to do a lot more testing to ensure that unexpected uses don't mess things up. Expect chefs to do a lot of trial and error. If you know that they will not like the consequences of executing a function, warn them. The classic case is warning a user that if they navigate away from the page, their work will be lost. A baker wouldn't navigate away from the page unless the instructions told him to.
Anyway, I (and I think my audience) found it helpful to visualize a user carefully measuring their actions with a software application as if he is following a complicated cake recipe versus clicking around like he is throwing spices into a pot. Maybe you will too.
Jun 30, 2008
The Apache Sling team recently announced the first official release of Sling. Now you can download some nicely packaged Sling bundles to play around with.
I have been experimenting with the Sling/CRX bundle that came with Day Software's JCR Cup 2008 competition (entries due midnight September 30) and was really impressed by what I saw.
Sling allows you to write applications on top of the JCR using either server-side or client-side JavaScript. On the server side, you can create JavaScript templates (ESP files) that give you access to the full JCR API. Templates are stored in the repository and called using an elegant MVC request processing framework. Templates can be called directly, or can be associated with content types and executed when an asset of that type is requested. As you might expect from Roy Fielding's employer, it is all very RESTful. For client-side scripting, you just import a JavaScript file called sling.js and you get methods like "Sling.getContent" (which gives you an array of JavaScript objects).
Despite the fact that Sling is still an incubation project, it is fairly mature and robust. Day's upcoming release of Communiqué (version 5) uses Sling extensively. I envision Sling being used in a presentation tier where pages are statically rendered (baked) from content in the JCR and Sling is used to power dynamic AJAX overlays using content from replicated JCR workspaces.
I really like the fact that logic is written in an interpreted language like JavaScript. Development and deployment are faster when you take out the compilation step. Furthermore, Sling is built as OSGi bundles (using Apache Felix) so it is more modular and flexible than a typical monolithic Java web application.
The CRX (or the free Apache Jackrabbit implementation of the JCR) and Sling should be considered alongside Alfresco with its elegant Web Scripts (which also use JavaScript as a scripting language). Alfresco has some nice virtualization features but there may be a higher level of lock-in to the Alfresco APIs. Alfresco has a user-oriented user interface while the CRX only has a JCR browser, which is really only intended for administrators. However, in both cases, you will probably want to develop your own user interfaces because Alfresco's current WCM UI is not optimized for managing web content (improvements are scheduled for mid 2009 - interestingly, the Alfresco team is calling these enhancements "project _Sling_shot").
Jun 24, 2008
I just got through a 2-hour WebEx session where I walked through my deliverable with my client. At $0.33/minute/person for a pay-per-use session, the bill probably came to around $120 (without integrated voice). That is actually more than the train ticket to get to New York where the client is. Still, when you factor in the time (9 hours of travel round trip!), parking at the train station, and (of course) the carbon, it was an economical choice.
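The back-of-the-envelope math works out if there were about three people on the session (the attendee count is my guess from the $120 figure, not something the bill stated):

```python
rate_per_min = 0.33   # WebEx pay-per-use rate, per minute per person
minutes = 120         # a 2-hour session
attendees = 3         # assumed; roughly reproduces the ~$120 bill

cost = rate_per_min * minutes * attendees
print(f"${cost:.2f}")  # $118.80 - call it $120
```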
An even more economical choice would have been to use Google Docs' presentation feature for free. It would have worked for me because I wasn't showing anything other than slides. If I were demoing an application, I would have needed to use WebEx or some other real screen sharing application.
Oh well. Something to keep in mind for next time.
Jun 23, 2008
Via Boris Mann's blog, I just learned about Drupy - a full port of Drupal to Python. Among all the initial reactions I have to this announcement, the one that screams the loudest is "why?"
Drupal is an intentionally non-object-oriented framework. Drupal does not use PHP's implementation of classes and there is no inheritance - at least not in a classic OOP sense. The architecture works by exposing hooks through which modules can be called by core system functions to do extra stuff. To a Python programmer, that's just, well, un-pythonic.
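Drupal's hook mechanism is basically string-based dispatch: core builds a function name from the module name plus the hook name and calls it if it exists (in PHP this is done with function_exists() and call_user_func()). A rough sketch of the same idea, ironically rendered in Python:

```python
# A toy version of Drupal-style hooks: core invokes any function
# named <module>_<hook>. No classes, no inheritance - a module
# "implements" a hook simply by defining the right function name.
def blog_init():
    return "blog module initialized"

def forum_init():
    return "forum module initialized"

def invoke_all(hook, modules=("blog", "forum", "poll")):
    results = []
    for module in modules:
        func = globals().get(f"{module}_{hook}")
        if callable(func):       # a module may not implement the hook
            results.append(func())
    return results

print(invoke_all("init"))
# ['blog module initialized', 'forum module initialized']
```

Note that the "poll" module silently contributes nothing because it never defined poll_init - which is exactly how Drupal modules opt in and out of hooks.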
Python is a super object-oriented language. EVERYTHING is an object. Even a function is an object (pause to wait for non-pythonista heads to stop spinning). Python also has plenty of frameworks of its own. Zope is the granddaddy of content-oriented web application frameworks but it is a lot more complicated to fully understand than Drupal. Django fills the role of a leaner, content-friendly Python web application development framework. Furthermore, one of the really great things about Drupal is that it is written in PHP and it doesn't take too long for a good PHP programmer to understand how it works. There are far fewer Python developers than PHP developers out there.
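To make the "even a function is an object" point concrete:

```python
def greet(name):
    return f"Hello, {name}"

# Functions are ordinary objects: they pass isinstance checks,
# carry attributes, and can be stored in data structures.
print(isinstance(greet, object))  # True
print(greet.__name__)             # greet

handlers = {"en": greet}          # a function stored in a dict
print(handlers["en"]("Drupal"))   # Hello, Drupal
```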
The single reason why this could be a good idea is if you could run Drupal modules on top of Drupy. However, given that modules often don't work across multiple versions of Drupal, my confidence that they could integrate with Drupy is very low. I think a better idea would be a CMS that is functionally similar to Drupal written on top of a Python framework (like Django) - in particular, a CMS with strong user profile management, a taxonomy-based system for organizing the repository, and extensibility through the addition of modules.
Of course, like with a lot of other things, I reserve the right to change my mind when I see more. But for now, I'll take my Drupal in PHP and go with something else when I want to code in Python. Drupy team, I welcome you to prove me wrong.
Jun 23, 2008
Although few people do it, restoring from backup is the only way to ensure that your backup and recovery system works. Since upgrading to Leopard, I have been using Time Machine to back up my laptop over the network (onto an external hard drive connected via USB to my Airport Extreme). What better way to test a restore from Time Machine than to put in a bigger hard drive and restore? I figured if it didn't work, I would still have my old hard drive (still working but too small).
I am happy to report that the restore went perfectly. The general instructions are to be found here. The only difference is that I restored over the network rather than directly through the USB port. The one little hitch I ran into was that it took a couple of tries for the utility to see my new local hard drive.
I am the kind of person who is frequently shocked when things work as advertised, so I was in awe that, after I swapped out the hard drive and ran through the process, it was like nothing had happened except that the free space on my hard drive grew. All the software was as I left it. Even my BASH history was intact. The only thing missing was my "Downloads" folder - I had to recreate it. I guess this is because the Mac regards it as temporary space not worth recovering.
So, don't hesitate to set up Time Machine. It could help you recover from a massive hard drive failure, a stolen laptop, or any other disaster - as if it never happened.
Jun 19, 2008
I just got through reading Dan Liliedahl's book OpenCms 7 Development (Packt Publishing). I met Dan when I was at the OpenCms Days developer conference and was impressed with his presentation. Dan knows his stuff (not just about OpenCms - he worked for FutureTense in the early days).
The book was first introduced at the conference. I was surprised that Dan was able to get it out so quickly after Version 7 was released. It seemed like Version 6 was out a long time before a book on it came out. Dan did mention that writing about the product as it was being developed was a challenge.
When I started reading the book, I was pleasantly surprised to not have to go through any content management theory. The book stays true to its title. Not that theory isn't important but I think it is reasonable to assume that someone developing on a CMS knows about the basic concepts. If you don't, some background reading (and also some requirements analysis too) is in order.
One shortcoming of diving right into the OpenCms architecture is that the beginning is a little choppy as the author tries to orient the reader to the platform (OpenCms is a very mature and elaborate application). Although it is choppy, there are some very good explanations of things like OpenCms's request processing chain and how the code is organized. There are also excellent tips on configuration management and how to configure your IDE. Still, a reader may want to supplement the book with some additional OpenCms documentation to help introduce him to some of the bigger OpenCms concepts.
The book hits its stride as it gets into the examples, which revolve around building a blogging site. There is good coverage on everything from creating content types and display templates to building extensions. By over-engineering some of the design, Dan is able to go into depth in modularizing code and managing logic in Java classes. Dan's experience in building big sites shows in how he designs for manageability and reuse. All the code is put into modules that can be exported and deployed to different OpenCms instances. The book also covers some of the new features like WebDAV, the new security model (with organizational units) and the relationship engine.
The one area that I think could use a little more coverage is the TemplateOne and TemplateTwo frameworks. Dan builds everything from scratch to show how OpenCms works but these frameworks allow you to get up and running with less development. Unfortunately, neither of these is particularly well covered in the OpenCms documentation. Perhaps a whole book on TemplateOne and TemplateTwo is in order.
Overall, OpenCms 7 Development is a must read for anyone who wants to implement robust sites on the OpenCms platform.
OpenCms is covered in the Open Source Web Content Management in Java report. The standalone evaluation of OpenCms is also available.
Jun 19, 2008
Flickr tag: wc08
Twitter: #wc08
I am on my way back to Massachusetts after thoroughly enjoying my time at Web Content 2008. Thank you Scott Abel, Michael Silverman, and the rest of the Duo Consulting team for putting on another great conference. The event was sold out this year. I would register early next year.
Also, it was great to meet Deane Barker and Adrian Sutton and hang out with the old crew: Scott, Lisa Welchman, John Eckman, Rahel Bailie, and Jarrod Gingras.
What follows are my rough (and I mean rough) notes from the sessions that I attended. They are as much for my memory as for your information so feel free to skim and click through to the slides which Scott has conscientiously posted. Not all of them are up yet but I am sure that Scott will track down all the speakers and beat their slides out of them sooner or later :)
Dick Costolo - The Next Content Wave: Hypersyndication (slides)
The general theme is that the conversation is more important than the content. Commenters are creating brands for themselves without even having a website. The conversation is distributed and does not revolve around the source (FriendFeed, Twitter, Digg, Del.icio.us, etc.).
This is a major disruption in the marketplace. Evidence: the mainstream media companies and investors have been resisting it. Investors that missed out on opportunities like FeedBurner are seeing potential here.
[Seth: I still don't know how these services will monetize the value]
All media is going to be social.
What Comes After Post Modernism? is a great post. The comments (all 132 of them) are just as good. The problem with social media is that you lose authority. [Seth: is this just temporary as authority shifts to new sources? Will we have the capacity to recognize so many authorities or will authority become more personal?]
What does this mean for advertising? Feed-originated traffic has a much lower CPM than search-originated traffic. [Seth: that was why I wasn't making millions in AdSense on my blog! This makes sense; search is more intent laden. People search because they need something. Feeds are more informational. Advertising strategies focusing on direct response marketing (rather than branding campaigns - see more detail on this in my notes from Gian Fulgoni) will get better results from search traffic.]
Feed traffic demands new strategies for monetizing content.
Get Satisfaction is a cool new concept. It allows companies to outsource their customer service. It is basically a third party hosted forum that allows vendor employees and the community to answer support questions. Whole Foods is very active with it. This could harness the power of the "professional commenter" who wants to build a brand out of his expertise. As a customer, wouldn't it be cool if your vendor and other customers competed with each other to give you the best answer to your question?
Darren Barefoot - The Many-Armed Starfish: Today and Tomorrow in Social Media (slides)
Social media has been around forever. What is new is broadcast/publishing media, and that is turning out to be short lived. Mash-ups are old too. A great example: modern remakes of Elizabethan playhouses are based on a drawing made by an amateur artist who visited the original.
Darren's presentation had lots of ideas from Clay Shirky and Jay Rosen. He describes the rise of the creative class and the notion that the Internet can make you a "little bit famous." In the broadcast, one-to-many style, fame was an all-or-nothing kind of thing.
There are 7 characteristics of social media: conversation (two-way, through comments etc.); collaboration (connects people who were not connected, like Wikipedia's 7 million editors); sharing; scope (the web is infinite; iTunes has broken our idea of what an album is); community (find affinity groups); transparency; and authenticity.
Piero Tintori - Running an Efficient CMS Evaluation and Procurement Process (slides)
I wouldn't normally expect to hear good advice about software selection from the CEO of a CMS vendor but Terminal Four's Piero Tintori is such a genuine and nice guy that I had to sit in and see what he had to say.
With very little promotion of Terminal Four, Piero did a very good presentation on how to work more efficiently with CMS vendors. It has been a while since I worked for a CMS vendor and it was a good refresher to see things from the vendor perspective and think about the little adjustments that make the process easier on everyone.
Some really good points worth calling out:
- Take the time to understand your requirements and write a clear, concise, and thoughtful RFP. The response is going to be 2-3 times as large as the RFP and you don't want documents that take forever to review. Leaving in irrelevant cut/paste text (like procurement questions designed for other types of products such as "is your product radio-active") or repetitive and/or contradicting requirements shows that you are not serious about your RFP and the vendor may put less time into the response. Furthermore, showing yourself to be sloppy may make the vendor inflate the price to mitigate the risk of having to spend more resources to make you successful.
- Be realistic about timing. Don't set an overly aggressive timetable. Everyone hates a hurry-up-and-wait relationship and you want your vendor to take your deadlines seriously. Piero had some nice rough guidelines of 6 to 8 weeks from RFP to selection on projects less than $50,000 and 8-12 weeks for projects more than $50,000. Companies should expect 4-12 months from initial inception to selection. My experience is that there is usually some organizational hiccup that delays these projects. To make things as efficient as possible, it is best to line up and plan with people from procurement, legal, and the budget holders.
- You are looking for a product that is right for your organization [totally agree]. Piero recommends working with a software vendor that is the same size or smaller than your company [Piero would say that. Terminal Four is small. This advice would take big vendors like Oracle and IBM out of most companies' short lists. On the other hand, working with really large software vendors is hard if you are a little company]. You should also try working with a vendor that is excited about your project.
- Communicate your budget. Most buyers are afraid that if they tell how much money they have to spend, the price will always be exactly that amount. However, your budget is one of your requirements and a vendor can tailor his solution better if he knows that constraint. If the product is way out of your price range, it is a waste of everyone's time to evaluate it. I would tend to agree with Piero here even though I understand why my clients will want to hold back this information.
- Follow through with reference checking before you make your choice - not as a formality after it is too late.
- Timing is important. In the U.K., vendors are really busy responding to RFPs in February, July, and December. I know that customers in the U.S. buying from public companies try to do deals around the fiscal quarters to get preferential pricing. If you try to work with vendors during the busy season, the good vendors (that are busy) will be rushed. The marginal vendors (who are not busy) will have an advantage. Also, I know that if you work with a large vendor, it takes them a long time to process a purchase when sales volume is high, so you might miss end-of-the-quarter pricing if you take it down to the wire.
Tim Yager & Jim Thaxton - Size Doesn’t Matter: How to Build and Maintain Huge CMS Projects (slides)
Tim and Jim (two developers from Duo) did a great (and funny) talk about migrating big web sites. It was a little like a midwestern millennial version of Car Talk. They had useful, down to earth advice about prototyping a solution, migrating content, gradually phasing out the old system, and applying lessons learned to future maintenance and enhancement of the system. The slides are worthwhile reading.
John Eckman - Upload, Tag, Share, Discuss: Content Management in the Age of User Participation (slides)
I think John's talk would have been an excellent keynote (future conference organizers, consider it!). He described it to me as "high concept, low information" but sometimes you need the big ideas to make sense of the little facts.
The big idea that John had was that social media is less about the tools (or containers) and more about the empty spaces within that people fill with their creative energy. We, as content management professionals, tend to focus on technology and process. We think that content management should be done by qualified professionals. Social media is changing things as it breaks down the barrier between the contributor and the reader.
Classic content management is not going away but it is starting to share the stage with community management. The rest of the talk was about ways to manage a community.
You need to establish norms and standards and socialize them with the community. Decide what kind of community you want. Do you want the stereotype of a library (strict rules, quiet behavior), the coffee house (somewhat boisterous), or Mardi Gras (anything goes)?
It is not one conversation. There are many conversations and they can happen in different environments. Behavior is a function of person and his environment. Offering different environments will encourage different behavior.
How to control communities:
- Terms of service. Avoid surprising people. The terms of service need to be readable and understandable by the community but also provide the appropriate protection to satisfy the company. Wordpress.com has a good model: a legalese version alongside a common-language version.
- Identity. Anonymity can lead to bad behavior. Make it easy for people to build a reputation. If they can build a reputation, they will try to make it a good one.
- Communities are hard to start, but resist the urge to fake it by artificially seeding content. You can seed content, but it has to be real. There was some follow-on discussion about the relative challenge of starting social media inside versus outside the company. John said that there needs to be genuine executive sponsorship and real incentives. I need to follow up with him on this. We have both worked for companies that talked the talk here but were not successful.
- Exclusivity for everyone. All these new services have private betas to create a feeling of exclusivity and establish the value of membership (in addition to controlling growth). I think Fire Eagle is a great example: invites have been hard to get, and this has created a lot of interest around a service that is not all that useful (IMO).
- Moderation. Full moderation may make you liable for the content. Alternatives: meta-moderation (Slashdot; Amazon's "Was this helpful?"), community moderation (flag as offensive), and post-moderation (let it through, check it later). The R.E.M. tour site (built with Drupal) pulls in content from Flickr and Twitter.
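The moderation models above differ mainly in when content becomes visible and who reviews it. Here is a minimal sketch (class and method names are my own, not from the talk) contrasting pre-moderation, post-moderation, and community moderation:

```python
from dataclasses import dataclass

# A hypothetical sketch of three moderation models:
#   "pre":       content is held in a review queue until approved
#   "post":      content goes live immediately, checked later
#   "community": content goes live, but enough reader flags hide it

@dataclass
class Post:
    body: str
    published: bool = False
    flags: int = 0

class Community:
    def __init__(self, strategy: str, flag_threshold: int = 3):
        self.strategy = strategy            # "pre", "post", or "community"
        self.flag_threshold = flag_threshold
        self.review_queue: list[Post] = []

    def submit(self, body: str) -> Post:
        post = Post(body)
        if self.strategy == "pre":
            self.review_queue.append(post)       # held until a moderator approves
        else:
            post.published = True                # visible immediately
            if self.strategy == "post":
                self.review_queue.append(post)   # reviewed after the fact
        return post

    def flag(self, post: Post) -> None:
        post.flags += 1
        if self.strategy == "community" and post.flags >= self.flag_threshold:
            post.published = False               # community takedown
```

The liability point in the talk maps to the "pre" branch: the more editorial control you exercise up front, the more the content looks like yours.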
A mixture of UGC and professional content has a higher CPM.
Take your content and put it in lots of different places. Interact with customers where they are.
Gian Fulgoni - Maximizing the ROI from Online Marketing
The big idea is that web marketing is too focused on click-through as a measurement of value. We are attracted to click-through because it is the easiest thing to measure, but there is equal or greater value in delayed purchases (not trackable) and in brand awareness and recall.
Advertisers are not thinking enough about global campaigns even though the internet is growing more rapidly outside the U.S. 77% of global internet users are outside of the U.S.
In a down economy, advertisers may push more of their ad dollars online because it is cheaper and they can save money. Online advertising represents 7% of total ad spend, but there are 40% more ads on the web than on broadcast.
Direct mail is at the top with 21% of total ad spend [Seth: no wonder I get so much junk mail]. Broadcast is 15%. Newspaper is 14%.
Search is 41% of online advertising.
60% of advertising spend goes to branding campaigns, not direct response.
Search is better for direct-response marketing.
The number of unique visitors reported by publishers is inflated by cookie deletion. 30% of people delete their cookies, usually about 4 times per month.
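The inflation effect is easy to see with a back-of-envelope calculation. The 30% figure is from the talk; the assumption that four deletions a month makes a deleter look like roughly five distinct cookies is mine:

```python
# Sketch (my arithmetic, not Fulgoni's): how cookie deletion inflates
# reported unique visitors. A user who clears cookies 4x/month appears
# as ~5 distinct cookie IDs; everyone else appears as 1.

real_visitors = 1_000_000
deleters = real_visitors * 30 // 100      # 30% delete cookies (per the talk)
non_deleters = real_visitors - deleters
cookies_per_deleter = 5                   # assumed: 4 deletions -> ~5 IDs

reported_uniques = non_deleters + deleters * cookies_per_deleter
inflation = reported_uniques / real_visitors

print(reported_uniques)  # 2200000
print(inflation)         # 2.2
```

Under these assumptions a site with a million real visitors would report 2.2 million "uniques", which is why publisher-reported numbers run well ahead of panel-based measurement.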
Who is clicking? Most of the clicks come from a small population of young, low-income surfers (low value to advertisers).
Most purchases happen offline. Conversion metrics are missing this.
Research:
Direct online buying was only 16% of the total buying effect. 21% purchased a little later, such as in another browsing session. 60% bought later, offline.
ROI is a lot bigger than currently thought.
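The research split above implies roughly how much click-based conversion metrics undercount. This is my arithmetic on the talk's numbers, not a figure Fulgoni gave:

```python
# Sketch: if conversion metrics only credit direct online purchases,
# how much of the buying effect do they miss? Shares are from the talk;
# the ratio is my own back-of-envelope calculation.

direct_online = 16    # % of buying effect tracked by conversion metrics
later_online = 21     # % bought in a later browsing session (largely untracked)
later_offline = 60    # % bought later, offline (untracked)

total_effect = direct_online + later_online + later_offline
undercount = total_effect / direct_online

print(total_effect)          # 97
print(round(undercount, 1))  # 6.1
```

In other words, if these shares hold, the true buying effect is about six times what a direct-conversion dashboard shows, which is the sense in which "ROI is a lot bigger than currently thought."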
Searchers are older, wealthier, and more educated. Retailers are hurting their brands and increasing price sensitivity through price-based marketing campaigns.
95% of Google ads are never clicked.
There was a 16% improvement in brand association for the top ad, even though it was just a text ad.
The success of craigslist shows that the coupling of content and classified advertising is less natural on the internet. When people want stuff, they don't go to news content; they go to a classifieds site.