One thing we figured out a while ago is that merging two (or more) datasets with high-quality metadata results in a new dataset with much lower-quality metadata. The "measure" of this quality is subjective and perceptual, but the effect is consistent: every time we showed this to people who cared about the data more than the software we were writing, they could not understand why we were so excited about such a system, when the data was clearly so much poorer than what they were expecting.
Stefano speaks mainly from a Semantic Web perspective, but his observations are very relevant to content management and to aggregating content from multiple sources. Right now the general business world is far behind the community in which Stefano works (librarians, whom you could fairly call metadata professionals). Our users struggle to invest any time in authoring good metadata. But by the time we finally get them to truly focus on metadata (or automate them out of the process), hopefully library science and the Semantic Web community will have worked out the issues and nuances of what to do once you have good metadata and are ready to really use it.