Content Here

Where content meets technology

Feb 01, 2024

Viable > Lovable

The term "viable" (as in Minimum Viable Product or MVP) may be the least understood term in software development. In common parlance, an MVP is often used to describe something that sucks but may get better. This negative connotation causes many to prefer the term Minimum Lovable Product (MLP). "Viable," as it is often understood, means unlovable.

I would like to redeem the word "viable" because it has a lot more to offer than lovable. I think of viable as having some kind of advantage that would make a customer choose it over another option (either a competing product or making do without it).

Thinking about viability in this way helps you ask better questions to understand the value of the solution. What problem are we solving and are customers motivated to solve it? What alternatives are there? Why is this solution better?

Viability in a mature market is very different than in a new category. In a mature market, viability depends on differentiation: a new approach, a different price point, etc. In a new category, viability depends on customers relating to the problem and understanding the benefits of solving it in an innovative way.

The company behind the solution also factors into its viability. For a portfolio software company (that sells many solutions into an enterprise) viability for a product could mean being almost as good as a best of breed choice -- appealing to customers who want the simplicity or savings of a single vendor. For example, most users prefer Slack and Zoom but enterprises often buy MS Teams. Startups have a higher viability bar. Their solution needs to be appealing enough to overcome any concerns about the instability of the company or friction from dealing with another vendor.

Pricing factors into viability. People will not invest in a costly solution to a minor problem.

We lose all of this nuance when we talk about "lovable." A viable product can still be lovable but what makes a product successful is its viability.

May 25, 2023

Penetration Rate

A couple of months ago I wrote a post about Retention Rate as the first metric to watch when rolling out a new product or feature. High retention rate is strong evidence of your product's utility and usability. Once you have established this value, it's time to drive adoption.

You measure adoption with penetration rate, which is calculated as the number of active users divided by the total number of potential users. Sometimes there is debate around the scope of the potential user base. I advocate for a broader definition. For example, even if you have only launched in one locale, I think that you should calculate penetration rate against worldwide potential customers. This way, when you launch in a new locale, your penetration rate goes up and not down. In general, I like metrics that go up when good things happen rather than ones whose dips can be explained by progress in other areas.
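
The arithmetic is simple, but it is worth being explicit about the denominator. Here is a minimal sketch of the calculation, assuming you already have both counts from your analytics (the numbers below are made up):

```python
def penetration_rate(active_users: int, potential_users: int) -> float:
    """Share of the total potential user base that actively uses the feature."""
    return active_users / potential_users

# Hypothetical counts. Using the broader, worldwide denominator means launching
# a new locale only grows the numerator, so the metric goes up, not down.
print(penetration_rate(active_users=120_000, potential_users=2_000_000))  # 0.06
```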

To increase penetration rate, you need to attract new users and keep the ones that you have. You should track both customer acquisition and retention because aggressive customer acquisition can mask churn, which is harmful to your business long term. You don't want to waste resources chasing customers that you can't keep.

Consider the channels you have to build awareness: release notes, tool tips and interactive product tours, searchable documentation, email, YouTube channels... But be aware of the annoyance factor: the more accessible (in your face) the channel, the more disruptive it can be when unwanted. My team at Alexa Audio owns the hints that occur during audio playback. Alexa "by the way" hints can be very annoying so we have strict rules for which ones we allow during the audio experience. The hint must introduce a feature or content that is useful in the current context. For example, we might suggest how to make a play queue continuous after the music naturally ends.

You need to be scientific in how you measure the impact of an experiment. This means randomized tests with control and treatment groups and the ability to compare immediate and long term behavior of both groups. A high conversion rate is not good if those who got the impression wind up using your product less over the following months.
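
As a rough illustration (not a prescription for your analytics stack), here is a sketch of that comparison. It assumes each experiment arm is a list of per-user records with two hypothetical fields: whether the user acted on the awareness prompt, and how many days they used the product over the following 90 days.

```python
def summarize(group: list[dict]) -> dict:
    """Average immediate conversion and longer-term usage for one experiment arm."""
    n = len(group)
    return {
        "conversion_rate": sum(u["converted"] for u in group) / n,
        "avg_usage_days_90d": sum(u["usage_days_90d"] for u in group) / n,
    }

# Made-up data: the treatment converts better up front but uses the product
# less afterwards -- exactly the pattern that a conversion-only metric hides.
control = [{"converted": False, "usage_days_90d": 18}, {"converted": True, "usage_days_90d": 22}]
treatment = [{"converted": True, "usage_days_90d": 9}, {"converted": True, "usage_days_90d": 14}]
print(summarize(control))
print(summarize(treatment))
```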

In addition to building awareness, you need to make sure that the customer's first attempt to use the feature is successful and they experience immediate value. If the feature is not designed for immediate value (like auto-save), show progress toward that value (like showing a last saved time).

Most organizations don't have the analytical capacity to measure and manage penetration rate for every feature in their product. It is best to focus on "high value actions": features that, when adopted, increase overall engagement with the product and the value exchange between you and your customers. There are sophisticated statistical models that can calculate the monetary value of each action but a simpler approach is to segment your user population by overall engagement (such as the days in a month that they use the product). Compare the behavior of the top quartile with the bottom quartile and see which features your most active customers love. Then you can build programs to help your less active users discover these features.
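
For illustration, here is a rough sketch of that quartile comparison using pandas. It assumes a DataFrame with one row per user, an "active_days" column for monthly engagement, and boolean columns (hypothetically named "feature_*") marking whether the user has adopted each feature.

```python
import pandas as pd

def high_value_action_candidates(users: pd.DataFrame) -> pd.Series:
    """Rank features by the adoption gap between the most and least engaged users."""
    feature_cols = [c for c in users.columns if c.startswith("feature_")]
    top = users[users["active_days"] >= users["active_days"].quantile(0.75)]
    bottom = users[users["active_days"] <= users["active_days"].quantile(0.25)]
    # Features your most active customers use far more than your least active
    # ones are candidates for discovery and awareness programs.
    return (top[feature_cols].mean() - bottom[feature_cols].mean()).sort_values(ascending=False)
```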

Apr 28, 2023

Decisions

Decisions, regardless of whether they are right or wrong, are critical for moving an organization forward. Organizations that can't make decisions waste time rehashing the same discussions and eventually wind up going with whatever option was the default. A well managed decision, even if it leads to the wrong option, has value because it draws attention to an issue, allows the organization to act intentionally, and creates an opportunity to learn from the selected course of action and course correct. It's better to make a decision that you can pivot from than to cower in indecision. At Amazon, that's the "Bias for Action" Leadership principle.

To get better at making decisions, organizations should use every decision as an opportunity to improve their process. Once you are aware of the necessity of a decision, I recommend you start your road to improvement by asking the following questions.

1. What decisions do we need to make to move forward?

A decision is only necessary if not making it stops progress. As long as it doesn't slow you down, deferring a decision allows you to make a more informed decision in the future. But be aware of decisions you are unconsciously making through overly linear thinking that ignores options -- especially those that are hard to reverse.

2. Whose decision is it?

A decision should be made by someone who is accountable for the short and long term success of the impacted component, system, or experience. That person should own all of the relevant dimensions: cost, time to market, usability, viability, etc. The decision maker is also responsible for deciding who needs to be consulted or informed and for managing that communication. If someone finds out about a decision after the fact and has the authority to question or reverse it, that's on the decision maker. So the first thing to do is understand the blast radius of the decision. A smaller internal technical design choice could be made by the person maintaining the code. If it has larger implications, the decision maker needs to have a broader scope of accountability.

3. What information is needed to have reasonable confidence?

You will never have complete information about all the implications of all of the potential options. If you did, the choice would be too obvious to even consider it a decision. The threshold for the level of confidence depends on the reversibility of the decision. If reversing a decision has no cost, you might as well just start trying options. But that is never the case because just trying something has the cost of time. The decision maker should be able to articulate up front what data they would need to reduce risk to an acceptable level. Examples include a technical proof of concept demonstrating feasibility, user research using prototypes, a well designed survey with a large enough sample size to achieve statistical significance, revenue or cost projections. Be aware that sometimes the cost of gathering this information outweighs the risk of the decision itself.

4. How will you validate whether you made the right decision?

Decisions shouldn't be fire and forget. Your plan to implement the decision should include instrumentation, monitoring, and attention to the results. Ideally, you should define thresholds at which you will reconsider your choice. For example, if we see a retention rate of less than Y, we will pivot to a different choice. Or time could be part of your threshold. For example, we will run this experiment for 4 weeks and then look at the data. Good decisions should preserve the opportunity to change course. But you need to know when to consider those other options.

Mar 27, 2023

Follow Your Shot

The first big project of my technology career was a company-wide rollout of Windows 95. Our tiny five person IT organization (which included desktop support, network admin, and software development) took on a herculean task of adding RAM and installing the new operating system on over 500 employee workstations. After each work day, we stayed around until midnight to process 50 or so computers and then we got to work early the next day to train and support the employees whose computers we upgraded. While we were all exhausted from the previous night, morning support was the most critical part of the project because there were problems on some machines and, even when there weren't, users needed help finding their way around the new OS. Those first few minutes determined whether they loved the new experience or felt screwed by IT.

That's when I learned one of the most important lessons of my career: follow your shot. I usually don't love using sports analogies for work but I think most people have seen (or maybe even been) a tenacious hockey/basketball/soccer/lacrosse... player who scores off their own rebound.

Following your shot means caring about the result more than just completing the task. It means monitoring a new feature's utilization and fault rates, testing it in the field, talking to customers... and making adjustments to redirect a miss. Teams that lack a shot-following mentality have less impact because they don't notice missed steps or miscalculations. They just move on to the next task or project. Every team I have joined eventually notices a code branch that was never merged, a feature that was never fully dialed up, or a bug that made a feature inaccessible. Task-oriented teams also have a higher risk of burnout because checking off a task or completing a project is a relief, not a reward -- not like the satisfaction of having a measurable impact.

We often talk about the distinction between being "process oriented" vs. "results driven." Some criticize process orientation for not caring enough about outputs. Results driven cultures can be toxic for overly punishing failure and not appreciating effort. A "follow your shot" culture is a balance between these two extremes. It values the methodical execution of a defined process and also the agility to adapt to get the desired result.

Whatever the task, there is a "follow your shot" behavior that will increase its chance of success: checking the deployment pipeline after a code merge; monitoring dashboards during a dial-up; following up on a customer support case to ensure that the problem was resolved. You just need to ask yourself "how do I follow this shot?"

Feb 12, 2023

Your First Metric: Retention Rate

If you invested time and effort to build a new feature on your product, you had at least an informal hypothesis that the new capability would bring value to your customers. But did you actually test that hypothesis by validating the value? Chances are you obsessed over the design, implementation, and delivery of the feature but your attention shifted to the next project once the feature was launched. If you make a habit out of this, your product becomes a bloated jumble of features that your customers can't discover, don't understand, or don't like. You have fallen into the Feature Debt Trap. The cost of maintaining a product like this is high because of the large surface area for defects and you are vulnerable to disruption by a simpler product that delivers high value by scratching the right itches.

To avoid this fate, or at least slow the onset, you must concentrate your investment on the features that matter the most to your customers and ensure that they are delivering high value. If you look at one metric, start with retention - not for your entire product but for users of the feature. The strongest evidence of a feature delivering value is continued use. Retention means that customers have a recurring need and have found your solution the most convenient way to satisfy it.

The level of usage regularity one can expect from a feature depends on the feature. For example, your payroll product's W-2 generation function probably just gets used once per year. There are some important features that solve sporadic problems such as password reset. But, for most features, regular usage means high perceived value.

You measure retention by duration. 2-Week Retention is the percent of customers who continued to use the feature in the second week (a week after they started to use it). 3-Week Retention is the percent of that group that continued to use it in the 3rd week, and so on. Retention usually declines over weeks and then stabilizes. The shape of the decline is called a Survival Curve. The longest survivors are getting the most value from the feature. For some features, the time range when the survival curve starts to flatten can be considered the habit formation period - meaning that if someone sticks with something for this long, they continue to use it. For example, a meditation app. Or the survival curve could be an asymptote approaching the ideal segment of users.
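
Here is a minimal sketch of that calculation, assuming you already know, for each user in the cohort, the week numbers (1 = the week they started) in which they used the feature; the sample data are invented:

```python
def survival_curve(weeks_used_by_user: dict[str, set[int]], max_week: int) -> list[float]:
    """Share of the starting cohort still using the feature in weeks 2..max_week."""
    cohort = len(weeks_used_by_user)
    return [
        sum(1 for weeks in weeks_used_by_user.values() if week in weeks) / cohort
        for week in range(2, max_week + 1)
    ]

# Four users: "a" forms the habit, "c" drops off immediately.
sample = {"a": {1, 2, 3, 4}, "b": {1, 2}, "c": {1}, "d": {1, 2, 3}}
print(survival_curve(sample, max_week=4))  # [0.75, 0.5, 0.25]
```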

If the retention rate is low, your feature or product suffers from one or both of the following issues:

  1. Low usability. There are easier ways to solve the problem.
  2. Low utility. The problem that the feature solves isn't a regular priority for the customer.

You should fix those problems before you invest in promoting the feature or invest in support and maintenance.

The low utility problem is the harder one to address. It could mean you were undisciplined in managing the product ("wouldn't it be cool if") or didn't understand your customer very well. Or it could be that the customer doesn't understand the value. For example, they may not anticipate the value of having a backup or auto-save. Whatever the root cause, you need to determine whether you can establish value or figure out an exit. Otherwise, the cost of maintaining the feature will be all waste.

The low usability problem is easier to solve and there are great techniques and skilled people who can help. The only hitch is that you need to plan for the investment. If you are locked into a road map that promises a hit parade of new feature launches, you will need to reset expectations. Nobody wins if your feature launches don't deliver value to your customers.

If a feature is not fixable, you should eliminate it. But that is hard to do if it has a small group of loyal customers that have relied on this feature to address an important need. To stay out of this situation, give yourself a two-way door by leveraging pilot or soft launch strategies. Often these approaches are just used to test scaling but they offer a great opportunity to monitor retention.

Once you have achieved a suitable retention rate, your attention can shift to building adoption (measured by penetration rate) by raising awareness for the feature. You still need to monitor retention in the form of lapse and churn, but that is for another blog post.

Dec 09, 2019

Hot Lots

When I start working with an established software development team, my favorite tool for understanding their process is a "hot lot." Hot lot is a manufacturing term for an order that is expedited by being allowed to jump the queue. Hot lots are closely watched for progress and potential delays. In the world of software development, a hot lot can be a feature request that goes through a process of design, implementation, testing, enablement (documentation), release, promotion, and evaluation. A hot lot should accelerate the process by adjusting priorities but it should not circumvent the process by breaking rules or cutting corners.

By prioritizing a feature and watching it go through the process at top speed, you can learn many things. For example, you can learn...

  • Whether the process is even responsive enough to accept a hot lot. Sometimes engineering is tied to rigid roadmaps and nothing, no matter how important, can jump the line. This is concerning if those roadmaps stretch out beyond the opportunities you can reliably anticipate.
  • Whether there even is a defined process. Is there a mechanism for ensuring all of the tasks (QA, documentation, deployment, etc.) are completed? Or maybe there is a process but nobody follows it. If there is no practiced process, you can't trust the integrity of the system or whether anything is really "done."
  • How the process is structured into steps, roles, and hand-offs. How many people touch the feature? How much time does each step take? How much time is spent waiting between steps? Is there excessive back and forth? Lean Six Sigma has a great framework for studying process cycle time, wait times, and waste across the value stream (see the sketch after this list).
  • The theoretical speed limit of the current process. You will never know how responsive your process can be when you always slow it down with competing priorities. Often actual speed is much slower than potential speed because of various delays, distractions, and interruptions that are not the fault of the process.
  • Whether there are structural blockers like "we only release every 3 months." Or maybe the team is distributed over many time zones with little overlap for hand-offs and feedback.
  • Whether there are capacity blockers like "Joe is the only person who can do that step and he is not available."
  • How easy it is to monitor the process. Can you go to one place and see the completed and remaining work?
  • The amount of managerial overhead that the process requires. For example, is there a project manager that needs to track and delegate every task?
  • The artifacts the process creates. Can you go back and see what was done and why?
  • How the response to the feature was measured and incorporated into future improvement ideas.
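
As a rough illustration of the cycle-time arithmetic mentioned above, here is a hypothetical sketch: given start and end timestamps for each step a hot lot passed through, it separates the time spent working from the time spent waiting between steps, using touch time as a crude stand-in for Lean's value-add time. The step names and timestamps are made up.

```python
from datetime import datetime, timedelta

# Hypothetical timeline recorded while watching one hot lot move through the process.
steps = [
    ("design",    datetime(2019, 11, 4, 9, 0),  datetime(2019, 11, 4, 15, 0)),
    ("implement", datetime(2019, 11, 5, 10, 0), datetime(2019, 11, 6, 16, 0)),
    ("qa",        datetime(2019, 11, 8, 9, 0),  datetime(2019, 11, 8, 13, 0)),
    ("release",   datetime(2019, 11, 11, 9, 0), datetime(2019, 11, 11, 10, 0)),
]

touch_time = sum((end - start for _, start, end in steps), timedelta())
lead_time = steps[-1][2] - steps[0][1]    # first start to last end
wait_time = lead_time - touch_time        # time the feature sat idle between steps
flow_efficiency = touch_time / lead_time  # crude proxy for Lean's value-add ratio

print(f"touch {touch_time}, wait {wait_time}, efficiency {flow_efficiency:.0%}")
```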

After running through a couple of these experiments, I have a pretty good understanding of the process structure, its theoretical speed, its strengths, and its flaws. At that point, we can start to come up with ideas for improvement. The low hanging fruit is usually pretty obvious ... especially to people who have been participating in the process but not paying attention to the overall throughput. Optimizations can be designed collaboratively and tested by future hot lots. I find that teams are generally comfortable with this evaluation because it doesn't target people as much as the framework that people work in. Usually processes (at least as they are practiced) form organically so nobody feels threatened by process improvements -- especially if they are clearly supported by empirical evidence.

Even if you have been working with a team for a while, try pushing through a hot lot and pay close attention to it. There is really no better way to understand the execution of a process.

Nov 07, 2019

What problem are we solving?

Before building any functionality, a product team should first start by fully understanding the problem they are being asked to solve. This may sound obvious but I can’t tell you how many times I see one-liner Jira tickets that ask for something without explaining why. But the “why” is the most important part for a number of reasons.

  1. The team has to agree that the problem exists and is worth solving. The impact and urgency of the problem are primary factors in prioritization.
  2. Being grounded in the “why” informs creativity to answer the “what” and the “how.” Design begins with empathy and you can’t have empathy if you don’t know what your users are struggling with.
  3. Solutions should be evaluated on how well they address the problem. This evaluation should drive design, QA, and post-release review.

To help people focus on the problem, I use a simple tool that I call a “problem definition.” This is a document (preferably a wiki page) that describes the problem and why it is important: inefficiency, risk, etc. There is also a section for proposed solutions where the author can suggest their ideas. The problem definition then becomes a focal point for clarification and learning. Stakeholders can ask questions to explore the use case.

I think this type of document was the original intent behind the “User Story” used in various agile methodologies. But over time, the User Story has been corrupted into a formulaic and useless “As a _____, I want to ________ so I can ________”; I have yet to read a User Story that really got to the heart of the problem and why it was worth solving.

Problem definitions are precursors to project artifacts like specifications and work items. They should be easy for anyone to write in their own language. No commitment is made to implement a solution. Sometimes problems can be solved with training or better documentation. Even if no action is taken, expressing and hearing these issues is important in bridging the gap between the product team and its users.

Everyone on the team should be able to answer the question "why are we doing this?" If they can't, they can't be expected to contribute to an effective solution.

May 23, 2019

Email is the portal

Aberdeen is a Market Intelligence company. We provide market data (Firmographic, Technographic, Leads, and Intent) as well as quantitative and qualitative insights based on those data. My primary role as Chief Technology Officer is to develop and improve products that deliver and maximize the value of these data and insights. This is really the same "right content, right context, right time" problem that I have been working on for years as a content management professional.

Our strategy for detail data is to push them directly to the systems where they are actionable. For example, our Aberdeen Intent for Salesforce app creates Salesforce opportunities out of companies that are showing intent. The Salesforce app also includes some charts to visualize summaries and trends. We also have other apps to help Salesforce users interact with our firmographic and technographic data. But Salesforce accounts are often rationed and not everyone spends their time there. The conventional answer to reach other users is a customer portal.

But does the world really need yet another portal?

Technical and non-technical roles are forced to work in so many different platforms. I feel especially bad for marketers (cue the scary Brinker SuperGraphic). But every office job today seems to involve logging into different systems to check in on various dashboards or consoles.

Yes, single sign-on can make authentication easier. But SSO is rarely configured because so few of these systems are owned by Corp IT. Plus, you need to remember where to go.

Yes, an email alert can suggest when it may be worthwhile to check in on a dashboard. But establishing the right threshold for notification involves time consuming trial and error that few have the patience for. It only takes a few "false alarm" notifications to make you hesitate before following a link.

Corporate portal technologies tried to solve this problem by assembling views (portlets) into one interface but the result was both unwieldy and incomplete. There is a constant flow of BI initiatives that try to solve this problem by bringing the data together into a unified place. Too complicated. Too cumbersome. And yet another place to go.

So most users are doomed to flit from portal to portal like a bee looking for nectar.

I am starting to believe that we already have the unified, integrated portal we have been looking for. It is the email client where we spend hours of every work day. Rather than develop a dashboard or portal that people need to go to, deliver simple glance-able email reports that highlight what is new/different and needs attention.

Longtime readers of this blog may be aware of my contempt for email as a collaboration and information management tool. However, even in the age of Slack, there is no more reliable way to notify business users than email. Decision makers live in their email clients. If you want to get information in front of someone, the best place to put it is in their inbox.

Designing email-friendly business intelligence is not trivial. Beyond the technical limitations of email clients' HTML rendering capabilities, you also have to consider the context. People are already overloaded with email so the reports need to minimize cognitive load. They need to quickly convey what is important within a couple distracted seconds. Perhaps even on a mobile phone in between meetings. Less is more - just a few Key Performance Indicators (KPIs) to make the user feel like he is in the loop and can answer questions about the status of the program or take action if necessary.

Frequency is also an important factor. The cadence should align with the decision making cycle. These emails are not for catching daily anomalies. Those types of warnings are better handled by system alerts that only go out when thresholds are met (behind schedule, over budget, no data detected...).

As I think about a portal to deliver Aberdeen's market intelligence insight, I keep going back to the question, what if our BI portal wasn't a portal at all? Wouldn't it be better to put our data into user interfaces that our clients are already looking at?

Oct 10, 2017

Release notes: the most important documentation you will ever write

One of my favorite high school teachers used to say "if you want to learn about the world, read the newspaper." This is great advice because, by updating you on what is happening now, newspaper articles expose you to places, people, and histories that you can dig into more with additional research.

I feel the same way about release notes. Let's face it, people don't read product manuals cover to cover any more than they do encyclopedias. You read a reference resource when you want to learn more about something that you are already aware of. In the product world, you read the manual when you are struggling to figure out how to use a feature. But what if you don't know a feature exists? That is where release notes come in.

If you practice lean product development, your product should be constantly improving so there should be plenty of material to talk about in release notes. You should also have an engaged customer base that likes the product, wants to learn new ways to use it, and is excited to know what is coming next. Release notes are the ideal channel to educate these customers about product features and progress. So make your release notes good!

While they are not release notes in a classic sense (because they are not tied to individual releases) a good model to follow is the "What's new with Alexa" emails that Amazon Echo owners get. These emails are easy to read and have just enough information to tease the reader into trying something and learning more. They also mention older features that new users may not have heard about yet. In fact, I am convinced that some weeks those emails only contain older features. But they are still useful reminders!

The redesigned App Store for iOS 11 also has some of these qualities.

I pay a lot of attention to release notes for all of the products I manage. Here are some habits that I think make them successful.

  • Write the first draft of the release notes at the start of a release cycle. That is, when the release is scoped and getting prepared for QA.
  • Invite internal stakeholders (especially sales, support, and QA) to review and comment on early release notes drafts. You might learn about functional issues that were not considered during the initial design. You may also learn about how to better describe/position the feature.
  • Make the release notes fun and easy to read. I find that humor is helpful in keeping the reader's attention.
  • Make release notes required reading for staff. After sending out the release notes, I email managers a quiz question to ask their team. 
  • Keep the notes short. Focus on what the change is and the problem it solves. Talking about the "why" is critical because it helps the reader understand what the feature is used for. For more details, link to longer form knowledge base articles or product documentation.
  • Include a "new to you" section that describes a pre-existing feature that is under-utilized or not widely known about. 
  • Copy the release notes into the Git merge commit message. This makes deployed versions stand out and makes the notes searchable in the commit history.

Oct 03, 2017

Lean Product Development with SLCs rather than MVPs

I honestly can't think of a better way to build products and services than the lean product development method. All of the work I have done over the last several years has followed the pattern of growing a customer base around a small simple product that evolves and matures in response to the needs of the users.

Our latest product, Lionbridge onDemand, has been a great success and yet it started out so small: we were testing whether we could build a business around a self service video translation site. We built the simplest app possible but left room for growth. We leveraged operational talent that Lionbridge already had and an entrepreneurial spirit that was not being satisfied. We tested different ways to drive customers to the site. We got just enough market response to see a path and keep moving forward. Since our first sale, we have been constantly learning, optimizing, and expanding. Now onDemand is a thriving business with a broad range of services. We translate all sorts of content (over 40 file types) into pretty much any language you can think of. We have a public API and integrations with several popular content management and commerce systems. But most importantly, we have happy customers who rely on the service.

Whenever I want to build a new product, or even add a new feature to an existing product, my plan is to start with an MVP (minimum viable product) and iterate from there. But this article, "I hate MVPs. So do your customers. Make it SLC instead" by Jason Cohen, gave me pause. SLC stands for "Simple, Lovable, and Complete" and, after reading the article, I realize that SLCs, not MVPs, are the recipe for success in lean product development.

Here is the difference. An MVP is (in a pure sense) the flimsiest thing you can build to answer a question about a potential market. The emphasis is more on the "M" (minimum) than the "V" (viability) and most interpret that balance as a semi-functional prototype. This kind of MVP is frustrating to customers because it doesn't solve their problem in a helpful way. It focuses on validating that the problem exists and that there is value in solving it. MVPs are selfish because they prioritize research over actual customer benefit.

If you really want to learn about customer need, and build good relationships at the same time, you should take one customer problem and solve it well. Build an SLC. I know that the line is blurry here. An MVP could be an SLC if you make lovability a requirement for viability. But you often hear bad solutions being excused or justified as "just an MVP."

The first iteration of Lionbridge onDemand did what was needed for a successful, gratifying transaction. In particular:

  • The customer could upload very large files. This is necessary because video files are big.
  • The project was automatically quoted so the customer didn't have to wait.
  • The customer could connect with an online sales agent through chat. This turned out to be a critical feature because we learned so much from our customers during these interactions.
  • The customer could pay for the service using a credit card so he/she got the experience of a seamless, self-service transaction.
  • The operations team was prepared to produce a high quality deliverable in a reasonable timeframe. The customer's need was satisfied.

While we launched the first iteration of Lionbridge onDemand quickly (less than 2 months from concept to first sale), we took the time to get those pieces right. That was a little over 4 years ago and our first customer continues to do business with us.

We are constantly adding new features to Lionbridge onDemand and every time we do, we treat it as an SLC rather than an MVP. We don't launch the feature unless it makes the user's experience a little better by solving a problem (however small) in a complete and satisfactory way.