Oct 05, 2010
I have a reputation in the content management industry for being an open source advocate. Commercial CMS vendors that do not know me well sometimes balk when they hear that I am involved in a product selection. They are afraid that I am going to steer my client toward an open source solution. It is true that I know a lot about the open source options. I have written reports on open source content management systems and directed a content management practice at a consulting company specializing in open source software. However, people who know me well know that I don't favor software based on licensing model. I focus on the appropriateness and quality of the technology as well as the viability and compatibility of the supplier. I have worked for systems integration partners of CMS vendors and I even did a tour of duty in the professional services organization of a CMS vendor. In fact, 8 of my last 10 product selections resulted in commercial CMS purchases — and I think that these clients all made good choices based on their particular requirements and infrastructure.
Perhaps a greater fear from some vendors is that my client is going to be price sensitive. I think this is a valid point but not because there may be an open source solution on the short list. The short lists that I provide my clients typically contain products from different price points. I can do this because I take the time to learn about the products and what they are good at. I see roughly 50 product demos a year and I regularly talk to people who implement the technology. CMS selection consultants that build short-lists from third party analyst research tend to pick from a single pricing tier based on the client's budget. When this happens, value is factored out of the decision-making equation and I think that is a problem.
When competing against lower price products, the onus is on the vendor to justify the additional cost of its solution by demonstrating additional value. I think that is a fair request and the better upper tier vendors succeed in answering it. To prove value, a vendor needs to have great software that efficiently solves hard problems. The best product for a particular set of requirements is not always the most expensive one. When it is, the vendor of that product needs to invest time to understand the requirements and translate features into a valuable solution. It is not enough to point to an analyst chart to justify a price position. A vendor needs to show why the product is a better choice than potentially less expensive alternatives.
Sep 20, 2010
A while back I wrote a post about how I use Google Calendar sharing to help my clients schedule me for meetings. Recently, I have started to experiment with a service called Tungle Me that essentially does the same thing but allows people to create meeting requests too. Calendar sharing is a great help but something is missing and I am surprised nobody has done anything about it.
There is a big difference between my meeting schedule and my availability. You can't assume that my availability equals all the gaps between my meetings because often I need to travel to and from a meeting, and that travel is time when I am not available. My hack-ish work-around is to schedule two overlapping meetings: one to block off my travel time, and another for the actual meeting. A colleague viewing my free/busy calendar sees the two meetings as one block of time when I am unavailable. I guess instead I could create separate events for my travel time before and after. Both options are clumsy but they work.
I was thinking a really useful feature for a calendaring system would be to add a "buffer time" field when you mark an event to "show as busy." Buffer time would simply be the number of minutes (before and after) by which to expand the event on your free/busy calendar. It could be a text input with a syntax of "30" (for adding 30 minutes before and after the event) or "30,15" (for 30 minutes before and 15 minutes after). Buffer time could also be useful on your personal, full-calendar view because it would tell you when you need to leave for your meeting. I imagine this may pose a problem for calendaring systems that share a single meeting object between all of the attendees because each attendee will have different buffer times. But this is not a problem that some good data modeling can't solve.
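To make the idea concrete, here is a minimal sketch of how the "30" / "30,15" syntax could be parsed and applied to an event's free/busy window. The `Event` model and field names are my own invention for illustration, not any calendaring product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

def parse_buffer(text: str) -> tuple[int, int]:
    """Parse the buffer syntax: "30" means 30 minutes before and
    after; "30,15" means 30 minutes before, 15 minutes after."""
    parts = [int(p.strip()) for p in text.split(",")]
    if len(parts) == 1:
        return parts[0], parts[0]
    return parts[0], parts[1]

@dataclass
class Event:
    start: datetime
    end: datetime
    buffer: str = "0"  # hypothetical "buffer time" field

    def busy_window(self) -> tuple[datetime, datetime]:
        """Expand the event by its buffer for the free/busy view.
        The full-calendar view would still show start/end as-is."""
        before, after = parse_buffer(self.buffer)
        return (self.start - timedelta(minutes=before),
                self.end + timedelta(minutes=after))
```

So a 2:00–3:00 meeting with a buffer of "30,15" would show as busy from 1:30 to 3:15 — one event, no overlapping placeholder meetings.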
Hopefully the collaboration vendors will start to build this capability into their products soon. In the meantime, does anyone have a better work around than what I have been doing?
Sep 13, 2010
During a recent web content management system selection project for a client in a highly regulated industry, I ran across a rather advanced content retention requirement that I have not seen before — at least not in web content management. This requirement was also new to the vendors that we were working with. I am curious if anyone has encountered a similar requirement and, if so, how it was satisfied.
The general gist is that the client does not want to retain outdated versions of assets beyond its mandated retention window of ten years. I am familiar with requirements for purging whole assets based on particular rules, but not for purging individual versions of assets. Here is the scenario:
Purge old versions
A monthly process searches through the content repository and deletes any asset or version that has been superseded, expired, or archived for longer than the ten-year retention window.
Based on these rules, the following items are deleted on June 1st 2020:
- A version of the terms of service that was superseded by another version on June 1st 2010.
- A promotional block (and all versions) that expired on June 1st 2010.
- An article (and all versions) that was archived on June 1st 2010.
Purged assets and versions of assets are not recoverable by any means.
I first wondered why I had not run into this requirement before. I rarely see content retention requirements in web content management. Retention is more of an issue when it comes to email archiving and records management. Is purging old content necessary for WCM? Is it reasonable? One thing that makes web content different from these other forms of content is that web content is deliberately published to an audience, whereas other content may contain private communications between individuals. Consequently, a company should have greater confidence in its web content as official corporate information. Furthermore, once something is published, it is out there. Infinite copies are made. Destroying the original won't make a difference.
My second thought was about the feasibility of satisfying this requirement. It strikes me that in order to meet this requirement, the CMS's API must support the ability to query and manipulate versions of assets. The CMS should also record the archival date (the date that the version was superseded by a following version) of each version. Otherwise queries may have to look at the publish date of the following version to determine the archival date of a version.
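To show why the API matters, here is a sketch of the monthly purge logic against a toy in-memory model. The `Asset`/`Version` data model and the `monthly_purge` function are invented for illustration; the point is that the system must record each version's archival date and let a job query and delete by it.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

RETENTION_YEARS = 10

@dataclass
class Version:
    created: date
    archived: Optional[date] = None  # date this version was superseded

@dataclass
class Asset:
    versions: list                   # oldest first
    expired: Optional[date] = None   # expiration/archival of whole asset

def monthly_purge(assets, today: date):
    """Delete expired assets (and all their versions) and superseded
    versions that fall outside the retention window. Purged items are
    not recoverable."""
    cutoff = date(today.year - RETENTION_YEARS, today.month, today.day)
    survivors = []
    for asset in assets:
        if asset.expired and asset.expired <= cutoff:
            continue  # purge the asset and every version of it
        # Keep current versions and any superseded within the window.
        asset.versions = [v for v in asset.versions
                          if v.archived is None or v.archived > cutoff]
        survivors.append(asset)
    return survivors
```

Run on June 1st 2020, this deletes a terms-of-service version superseded on June 1st 2010 and an entire promotional block that expired that day — matching the scenario above. Without a recorded archival date per version, the job would have to infer it from the publish date of the following version.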
If you have experience or ideas on this issue, I would love to hear from you in the comments or via email (seth at contenthere.net).
Aug 05, 2010
A few days ago I read Deane Barker's excellent post Editors Live in the Holes (go ahead and read the post and then come back) and I have been thinking about it ever since. I have had the same experience several times and it is a good reminder for developers to pay special attention to configuring and testing the rich text editor. As Deane points out, it is too easy for developers to disregard "the holes" as a contributor problem, not a system problem.
To get it right, the holes need to be jointly owned by the designers, developers, and content contributors. Designers need to design for flexibility. Developers need to do everything they can to make contributors successful. But this raises something of a chicken and egg problem — at least for new CMS implementations (as opposed to migrations). In these projects, content entry typically occurs after the system is considered complete. This means that the designer and developer need to anticipate what rich text capabilities (formatting controls and the styles that control the display of rich text) the contributors will need. This is particularly important in the ever-present "generic page" content type that is typically used for the many one-off (odd ball) pages that exist in any website.
I have found two good techniques to get around this problem. First, it is good to test the rich text editor with a few of the more challenging one-off pages on the site. Take a page with embedded images and objects (like perhaps a Google map) and formatting and try to reproduce it in the rich text editor. Don't disable the rich text editor and edit the source. That is cheating. If it turns out you can't do it without pulling your hair out, you need to come up with a work around. If it is a really important page, you might need to develop a special content type and/or presentation template that does some of the work. If you find that there are too many challenging one-off pages to choose from, you might step back and consider enforcing more uniformity between pages. Otherwise, you will probably not be getting all of the value (content reuse and manageability) out of a CMS.
The second technique is to build a "style guide" page and place it in some discrete area on the site. The style guide page is a generic page that contains examples of all the stylings that are available to the contributor. For example, every heading level, paragraphs, lists (ordered and unordered), tables, embedded images, etc. The content contributor can visit this page to get an idea of what is possible and then open it in edit mode to see how the formatting was executed. The process of building and reviewing the style guide page is a useful forum to get designers, developers, and contributors together to collaborate and align. The fact that it is so tangible grounds everyone in the real capabilities of the platform. The style guide page is also the first place to check updates or enhancements to styles after launch.
At the end of the day, designers, developers, and contributors all want the site to be a success. They can't just claim victory on their little piece ("the mockups were approved," "we got out of QA," or "I got my page to preview!"). Editors may live in the holes but everyone has to keep the holes clean.
Jul 28, 2010
I JUST heard about Adobe's acquisition of Day Software and have to admit my first reaction was total disappointment. I always admired Day's commitment to architecture and standards. Day is one of the few upper tier web content management companies to stay focused on the web — not just as a place to dump files but as a medium for information exchange and creativity. David Nuescheler and Roy Fielding seemed to have a vision for how systems could openly interoperate through lightweight architectures like REST and standards like the JCR. Day has also been a great contributor to the Java community by pushing lighter weight technologies like OSGi and server-side JavaScript to keep Java relevant amid a trend toward dynamically typed scripting languages like PHP, Python, and Ruby. Day promoted this vision through the products they sold and also by contributing to open source projects.
I feel the complete opposite about Adobe. Adobe seems more interested in conquering the web than improving it. While Adobe has contributed several technologies that lowered barriers to entry, I think the overall net impact has been negative. Yes, we have more content on the web thanks to Adobe, but much of that content is locked in Adobe's PDF and Flash formats where it is less accessible (and maintainable) than plain old DHTML. Adobe customers tend to overuse Adobe technologies like PDF for online forms when HTML would have done quite well. Flash-based navigation is also a problem; I can't tell you how many restaurant websites I have been to where you can't link to a specific page because the whole site is one Flash movie. As a web consumer, how many hours have I waited for Acrobat reader to install/upgrade plugins (which further degrade performance) before allowing me to read PDFs that I clicked on? Expert tip: disable the PDFViewer plugin for Safari. Don't even get me started on DreadWeaver.
As you can see, my frustration with Adobe has been building for quite some time. It felt good to let that out. I haven't talked to David or Roy about Adobe so I don't know their opinion of Adobe before or after the merger talks started. I hope that Adobe permits them (even better, supports them) to continue their good work in web-based architectures. More likely, Adobe is buying Day for its CRX repository and CQ5's workflow and digital asset management (DAM) functionality to connect creative teams using Adobe Creative Suite (why couldn't they have just bought vjoon or WoodWing?). If this is the case, I hope Adobe will invest more in web publishing than they did in JRun.
Jul 22, 2010
After over 10 years of working in content management, I have come to realize that there is only one way to learn the value of managing structured information: the hard way — and that way is only 50% effective. People can intellectually accept concepts like content re-use and content/layout separation, but in the heat of the moment, few can resist the siren song of a word processor and the clipboard. Pasting a bunch of text into a rich text area (and then re-formatting it) provides so much more instant gratification than data entry into the fields of a structured content form. It is only after a number of painful global content changes that people come to realize that the value of all that painstaking WYSIWYG work has a very short shelf life. It is not until a migration onto another platform that one becomes aware of all that semi-redundant content. But that realization only happens around half the time. The other half of the time the site's unmanageability is blamed on the CMS. A clear sign that the content manager didn't make the connection is when there is a requirement that the new CMS have a global search and replace feature.
As someone who has seen many companies succeed and fail (and really fail) with content management, it is easy for me to notice these patterns. But that doesn't mean that I can make anyone short-circuit his/her learning process. If I were able to forcefully impose a highly structured content model on a client, all they would notice would be the complexity of the content entry forms; they would take the downstream benefits for granted. The best you can do is gently guide and hope that guidance will lead to recognition when the site becomes unmanageable. I don't get too worked up about it. If I get frustrated, I can just talk to my friends in the DITA/XML advocate community. Their pain in working with technical documentation teams is way worse.
In the software development world, we have the concept of DRY (Don't Repeat Yourself). The idea is "every piece of knowledge must have a single, unambiguous, authoritative representation within a system." I call the opposite of DRY WET (Write Everything Thrice) or DAMP (Developer Accepts Maintenance Problems; hat tip to Brian Kelly). This means copying and pasting code (rather than referencing it) or writing the same data over and over again. Part of the development process is recognizing patterns and coming up with ways to reduce redundancy. Good developers are constantly thinking about maintaining the code they write because they will inevitably need to add a feature or fix a bug. And the feedback cycle is really short for developers. You write a bit of code, test it, fix it, write some more code, test that and the first code you wrote, fix it.... If you did anything stupid, the time you have to wait before suffering for it is usually short. I am not saying that all developers practice DRY, but they have a better track record than content contributors.
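For readers outside the software world, a tiny illustration of the difference (the pricing rule and numbers are invented for the example):

```python
# WET: the bulk-discount rule is written twice. Changing the policy
# means finding and editing every copy — and probably missing one.
def invoice_total_wet(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 100 else total

def quote_total_wet(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 100 else total

# DRY: one authoritative representation of the rule, referenced
# everywhere it is needed.
def apply_discount(total):
    """The single home for the bulk-discount policy."""
    return total * 0.9 if total > 100 else total

def order_total(items):
    return apply_discount(sum(price * qty for price, qty in items))
```

Pasting content into a rich text area is the editorial equivalent of the first two functions: every copy looks fine until the policy changes.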
Most content contributors don't have that short feedback loop. Too often, content is considered a "set it and forget it" initiative. You publish and move on. But I am seeing two positive trends in the content management industry that may shorten the feedback loop. First, there has been some great thought leadership around solving the "post launch paradigm". Second, many CMS vendors are building in analytics and multivariate testing functionality that encourages the content manager to constantly tweak a website for maximum performance. My hope is that awareness of this functionality will compel buyers to think of their content in a more dynamic way — something that evolves and improves like software. Then maybe we will hear content managers talking about their websites being DRY, WET, or DAMP.
Jul 21, 2010
One of the most common points of friction between project managers and developers is planning work. Most programmers hate creating work breakdown structures (WBS). You can't blame them: accurately predicting the steps and effort required to build undesigned software is impossible. Yes, you heard that right. Software development planning is impossible — at least for someone who likes precision, which most programmers do.
The problem is that every software development project is a unique collection of thousands of tiny details that each have the potential to suck up enormous amounts of time. The traditional, PMI-sanctioned WBS technique forces developers to name all the activities that will be required, sequence them with dependencies, and then create an estimate for each one. The assumption is that if you did the planning right, you should just be able to follow the steps and come out the other end on time and on budget. This also implies that if you didn't blindly follow the steps, the project plan was wrong — or you were too incompetent to follow the steps correctly. But with the fluid nature of software development, the project plan is always wrong. I used to think that precision would increase with finer granularity. The more lines in the project plan, the more accurate it would be. But now I think the opposite is true. The more tasks you add, the more guesses you make and the greater the overall variance. Even if you guessed every task right, there were probably just as many tasks that you forgot to add. And there are also plenty of planned steps that you find you didn't need to do at all.
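The variance claim is easy to check with a quick simulation. This sketch assumes each task's estimation error is independent and normally distributed — generous assumptions, since forgotten tasks and correlated surprises make real plans worse — and shows that the absolute spread of the total still grows as the task list gets longer.

```python
import random

def schedule_spread(n_tasks, error_sd=0.5, trials=2000, seed=42):
    """Monte Carlo: each task is estimated at 1 unit of work but the
    estimate is off by gaussian noise. Returns the standard deviation
    of the total overrun across many simulated projects."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        overrun = sum(rng.gauss(0, error_sd) for _ in range(n_tasks))
        totals.append(overrun)
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return var ** 0.5
```

A 1,000-line plan has roughly ten times the total spread of a 10-line plan (it grows with the square root of the task count), so adding rows to the project plan widens, rather than narrows, the range of likely outcomes.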
While predicting a WBS is impossible, developers can get better at setting and meeting deadlines. There is a small nuance between setting a deadline and estimating tasks in a WBS. On the outside, the difference is so small that no one will notice. Nobody will care because they just want to know when the work will get done. But there is a difference. The WBS technique forces a linear accounting of all the work that needs to be done. Creating a deadline is more like adding a constraint (that you hope is reasonable) to help guide and prioritize the work that you wind up doing. Comparing the two is like comparing launching a rocket to flying a plane. PMI-style planning is like launching a rocket: doing all the calculation at the beginning and then hoping that you accounted for everything before ignition. Setting a deadline turns the rocket into an airplane by adding a pilot that can steer. Realizing you can make adjustments after take-off transforms the pre-flight calculations from a fixed flight path to a map that you can use to make in-flight decisions. A deadline (either the final deadline or an intermediate milestone) is where you think you can be at a certain point in time (or after a certain amount of effort). When creating a deadline for yourself, you don't try to think of every possible task it will take. It is more like eyeballing distances than counting steps.
I became conscious of this distinction the other day when I was on a bike ride. I take pride in the fact that I usually get home within a few minutes of the time I tell my wife I will be back. Lots of times I pull in right at the minute. Putting on my planner hat, if I was asked how long a bike ride would take, I would want to know the exact route and measure the distance and slope and windspeed and make assumptions about average speed. When I put on my cycling helmet, I realize that most of those variables are under my control. I can shorten the route. I can ride faster. I can take an alternate road to stay out of a headwind. Because I know my cycling ability and the terrain so well, I make these adjustments without even thinking about it.
I know you are thinking that software development is not like riding a bike. There are all these externally imposed requirements, constraints, and dependencies that need to be accounted for. But think back and ask yourself: how many of these factors are added specifically for the purpose of creating the WBS? I feel like developers work against themselves by asking for more and more estimation inputs and being more prescriptive about how they will work. There is no way that every detail can be accounted for, and every detail that you do add will constrain your ability to make adjustments.
For estimation purposes, requirements should represent boundaries of an acceptable solution. With this understanding, a developer needs to produce a reasonable deadline based on similar work and explain any assumptions made. An overall deadline or intermediate milestone shouldn't be overly ambitious. It should account for unknowns. If a deadline is not acceptable, scale back the scope until an acceptable deadline can be achieved. Through the course of the project, new information is going to present itself: the client is more particular than he was able to articulate; the available components are not as good as expected; new features are added to the scope. When any of these things happen, you make adjustments. You might be able to work a little more efficiently. You might be able to scale down scope in other areas. You might be able to delegate work back to the client. Or, you might just have to extend the deadline.
These adjustments require a decent partnership between the developer and the client where the deadline is jointly owned. It doesn't work when one party feels like the other is obligated to deliver no matter what. In the bicycle analogy, when two people go for a ride, they decide where they want to go. Usually the conversation plays out where one rider asks the other what sort of ride he is up for. The second rider may say he needs to get back in 2 hours and wants to get in some climbing. The first rider will suggest a route that he is familiar with. When they encounter construction that makes a road impassable, they may be able to find an alternative route that is just as good; they can hammer home over a longer route in a paceline; or they can call home to say that they are going to be late. Whether the first rider should have known about the construction is debatable (Did the construction just start? Was the overall distance too ambitious? Did the route not allow for adjustments?) but debating is not going to get anyone home sooner.
With experience, you do get better at making more realistic deadlines. And, more importantly, you also get better with time management. You will build an awareness of where you are in the overall process and know early if you are falling behind schedule. In the cycling analogy, you periodically glance at the clock, your current speed, the slope of the road, and which way the wind is blowing. In software development, you are looking at things like the calendar, your productivity, and the rate of defect identification. With this information rolling around in your subconscious, you start thinking about options instinctively. The client perception is that you planned well. But you really didn't. You managed time well. The up front estimate was just one of the many constraints that you juggled when developing the solution.
Jul 19, 2010
Roberto Galoppini has an interesting case study on selecting an open source project management tool. In it, he describes his SOS Open Source methodology for filtering open source projects by looking at a number of factors organized into three categories: sustainability, industrial strength, and project strategy. The case study doesn't go into much detail but Roberto has built a tool that aggregates quantitative and qualitative project information from a number of disparate sources and builds scores. I saw a demo around 6 months ago and was impressed by the graphs he was able to create. While this technique cannot be expected to make a technology decision for you (you need to know your requirements and to have hands-on experience for that), it can be used to filter down the market and help you decide where to invest your evaluation energy.
Despite its ubiquity, open source software is still uncharted territory for most technology buyers. That is not to say that most companies don't use open source software; nearly all companies leverage at least open source utilities, libraries, and infrastructure (operating systems, databases, web servers, etc.). Many companies use open source business applications too. It is just that many companies adopt open source technologies in haphazard and spontaneous ways, without the same level of conscientiousness put into an expensive commercial software purchase. And while I don't think buyers should put much stock in Gartner's or Forrester's opinion of technology, even that coverage barely exists for open source technologies. That point was hammered home in a recent Olliance webinar when one of the panelists said that Gartner and Forrester offer no value on open source. All the CIOs on the panel leveraged their peers and internal experts rather than their analyst subscriptions.
Ideally, technology procurement should be able to sense when something is going wrong with a project. The information is out there and you can get it in real time (as opposed to commercial software companies, which only report quarterly). You just need to know where to look. Tools like SOS Open Source provide a useful high-level picture to quickly highlight potential issues that should be investigated. It is unlikely that mainstream analysts will develop this level of awareness for open source projects, so I think there is great opportunity for these data aggregation tools.
Jul 02, 2010
One of my newspaper clients recently added the Facebook "Like" button to their site and saw large increases in traffic. I was thinking of doing the same thing for Content Here but then I started to wonder "would I Like Content Here?" Don't get me wrong. I LOVE writing this blog and I also find the posts tremendously useful as a resource. Re-reading old posts is a great way for me to recreate an idea that I once had in my head or re-use an explanation for one of my clients. Sometimes I catch myself sending link after link to a client.
So while I LOVE this blog, I am not sure that I LIKE it — at least not in a Facebook kinda way. I guess it all boils down to how I use Facebook: I use it for purely social purposes. I keep strict separation between my Facebook world (where I connect with friends and family, many of whom are not technical) and my professional (Twitter and LinkedIn) world. Some contacts span both worlds — mainly people who I know professionally but also hang out with outside of work. On Facebook, I don't post about anything work-related, just as I don't bore dinner guests with esoteric content management theory or programming stuff. There I talk about things that many of my friends and I are passionate about or would find amusing. On Twitter and this blog, I write about things that I find interesting professionally. I avoid personal subjects like my family, political views, and silly humor. I have a feeling that others either consciously or unconsciously maintain this kind of barrier. How many people would want to confuse their non-technical mother-in-law and the rest of their social network by "liking" the post Code moves forward. Content moves backward? Probably about as many as would want their boss to see the beach pictures they took on a sick day.
This probably infuriates Facebook because they want to manage the full social graph — not just half of it. But I don't think they have a great answer for people like me. Some of my friends are working around this issue by creating two Facebook accounts: one for business and one for social. My good friend Brice Dunwoodie has a Facebook profile called Brice Dunwoodie SMG for his "semi-public self." But this isn't really a good solution for Facebook because it fractures their social graph. In order to pull these social and professional aspects together, Facebook would need to get really clever about its privacy and filtering settings which are already overly complicated and controversial.
If Facebook can't have all the social graph, which half would they want? Would they be satisfied with the social side of the social graph, which they already dominate? Or would they prefer the professional side (currently owned by LinkedIn)? Historically, Facebook ad revenue has been low considering their huge traffic volumes. This makes sense because general interest content (like news, entertainment, personal statuses, and other content that people might "like" in a Facebook kind of way) has notoriously low CPM rates; not like niche publications that have their audience in a buying state of mind and know what types of products they are interested in. Facebook's bet seems to be that, through their social graph, they can improve the targeting problem for general interest content. If they are successful, they will achieve that lucrative formula of high traffic volume AND high CPM. If they are not successful, they will probably need to think of some other way to monetize that large but distracted audience.
Jun 30, 2010
A customer struts into a car dealership, slams a 200 page requirements document down onto a salesman's desk, and triumphantly declares "I know exactly what kind of car I want to buy." The startled salesman opens the document to a random section and starts to flip through a few pages that describe a lug nut in excruciating detail. He looks at another random section and sees requirements about how the steering wheel should be joined to the steering column. After regaining his composure, the salesman looks up and says "from this document, I can definitely see that you are looking for a car. What do you want to use it for?" The business analyst suddenly looks confused and says "I don't know. I don't drive."
This is not just a lame joke. It describes a scenario that happens all the time in CMS selections. There are two main problems here. First is the obvious problem that the customer believes himself an expert in cars because he has done a ton of research, but he doesn't have the critical experience of having driven one. He can name all the features of a car and knows what they do but he hasn't had to use them. The second issue is more subtle. His 200 page requirements document is more like a design specification for a product that has already been built. It goes into details that are unnecessary, like how the steering wheel must be connected to the steering column. What kind of penalty does he give if the steering wheel is connected in a different (and perhaps better) way? More importantly, there is no way his requirements document can be exhaustive. It would really have to be 20,000+ pages to cover every detail with the same depth. So entire aspects of the car are probably omitted. Maybe it was something important like which side of the car the steering wheel is on. Rather than try to design your own car in a vacuum and then go around and see which one matches it, it would be better to draw up some coarse filters (price, intended usage, etc.) and then look at cars that passed the filter in their totality and see which one feels right.
This sounds obvious for car buying, but you would be surprised how many CMS buyers collect requirements like that car customer, or do the abridged version where they just name countless features (which in car shopping would produce a list like "6 cup holders, 1 gas pedal, 1 brake pedal, 1 clutch (optional), 5 gears, 3 windshield wipers, 6 windows, 4 wheels, 4 tires, 1 spare wheel/tire ... "). In many cases the requirements are gathered by people who have never used, nor intend to use, the CMS. They can't paint the bigger picture of the user, the task, and the content.
By focusing on how people work, rather than the features themselves, CMS evaluation criteria can identify features that are important and with enough context to understand which implementations of that feature will make it useful. In the car dealership story, if the customer walked into the dealership and said "I drive like a maniac and the wheels of my last three cars fell off," the salesman would not only know that the customer needed lug nuts but really beefy lug nuts, a good suspension, and perhaps a driving lesson.
Scenarios are the best way to capture this context for a software selection. A good scenario will describe the intent behind the task (what is the user trying to accomplish?), the context (what time, resources, and information does the user have?) and the flow (how does the person work? Who else does he need to collaborate with?). In the process of documenting a scenario, a number of features will be identified — features you might not even think of in a requirements brainstorming session. After writing a scenario, I typically list features at the bottom of the scenario to call out what functionality was used. Scenarios don't have to be long or comprehensive. Usually 1/2 to 1 page will capture enough of the story to understand what needs to happen.
To beat on the car buying analogy once again, you could think of a scenario like the route for a test drive. If you live in a city and rarely use a highway, the best test drive would be to drive in traffic and try to parallel park and park in your neighborhood parking garage. That would be more informative than driving on the interstate. If you test drove all the cars on the same route you would notice some big differences; like that you can park the Honda Fit in a compact car space but you can't even get the Ridgeline into the parking garage because the turning radius is too big. Your average car dealer probably will not give you much flexibility on your driving route, but your CMS vendor will (or should). Use that access to your advantage and create the most realistic driving conditions possible.
The car buying analogy breaks down in one key area. When you buy a car, you sign the paper work and then you drive it off the lot. Content management systems are not like that. Before you can use a CMS, you need to implement the software to support your content, processes, and web design. You need to configure, customize, and extend the platform. Scenarios will help this process because, once you buy the software, they turn into the user stories that will drive your implementation planning and long term road map. Some user stories will be achievable by configuring out of the box functionality; others will take more effort.
So when you find yourself slogging through a spreadsheet with hundreds of rows of requirements, think of that car buyer and ask yourself "are these requirements really going to help me find a CMS that I will be able to use to manage my website(s)?" If you are honest with yourself, the answer will probably be "no." If it is "no," put away the spreadsheet and start writing scenarios.