Content Here

Where content meets technology

Jun 22, 2023

AI and Content Licensing

In my last re-platforming of this blog, I accidentally dropped the Creative Commons Attribution licensing that I had been using. Blogging platforms treat licensing as part of the format rather than the content itself. The format is part of the CMS theme so when the CMS changes, the content moves but the licensing does not. I am still trying to make up my mind as to whether I think that is a good thing. But at the moment, I am thinking about the broader issue of content re-use and attribution in light of being used as AI training data.

People publish content for a variety of reasons. Personally, I write to explore and refine ideas and also for the potential to discuss these topics with people who stumble across my posts (although that rarely happens). There is also a recognition element. My blog is where people can associate me with what I know and think. Many websites and communities are built around the value of recognition. For example, sites like Stack Overflow have a culture around recognizing and rewarding expertise.

I have been using the Creative Commons Attribution license because I want people to use and further my ideas and I also want to be part of the ongoing discussion and evolution of those ideas. Based on the language of the license, I thought it would protect these interests:

"You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use"

But, according to the article "Should CC-Licensed Content be Used to Train AI? It Depends" (by Brigitte Vézina and Sarah Hinchliff Pearson), there is no agreement that any form of copyright applies to AI training.

Large Language Models, trained on terabytes of content (the GPT-4 training dataset is reportedly around a petabyte), create new value for content consumers who want condensed answers. But that intermediation saps value from content producers and publishers. The ChatGPT user has no idea whether some pearl of wisdom came from me (doubtful), and I have no idea whether my knowledge was accessed or what became of it.

I think that I will continue to write even though I know my words will be anonymized by AI. I still get the value of using writing to organize my thoughts and to develop my communication skills. Jack Ivers has a great post describing the reasons for writing every day. But I don't think I would be as excited to post answers on Stack Overflow unless I wanted to build adoption for a particular technology that I supported. I am even less likely to post an answer on Quora.

I wonder if AI chatbots will stifle other contributors' motivation. Perhaps they already have, but I haven't heard much of an uproar. If generative AI drives the extinction of user-generated content (which itself helps improve AI), the progress of knowledge will slow because it will not be able to incorporate new experiences.

Wikipedia is a bit different. Wikipedia contributors are mainly concerned about the accuracy of the content rather than attribution. In many ways, personal attribution taints the authority of the article with the possibility of bias. Consequently, you have to dig to find who wrote what. Wikipedia is already harvested by search engines and voice assistants (both Alexa and Google Assistant rely heavily on it). The contributors don't seem to mind.

For now, I have re-added a Creative Commons license to the footer of this blog and to the syndication feed (the latter required a change to Pelican, so I might submit a pull request for that). Not that it does any good.

May 25, 2023

Penetration Rate

A couple of months ago I wrote a post about Retention Rate as the first metric to watch when rolling out a new product or feature. High retention rate is strong evidence of your product's utility and usability. Once you have established this value, it's time to drive adoption.

You measure adoption with penetration rate, which is calculated as the number of active users divided by the total number of potential users. Sometimes there is debate around the scope of the potential user base. I advocate for a broader definition. For example, even if you have only launched in one locale, I think you should calculate penetration rate against worldwide potential customers. This way, when you launch a new locale, your penetration rate goes up rather than down. In general, I like metrics that go up when good things happen rather than ones whose dips can be explained away by progress in other areas.
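
To make the arithmetic concrete, here is a small sketch with made-up locales and user counts showing why the worldwide denominator behaves better when you launch a new locale:

```python
# Illustrative sketch only -- the locales and user counts are invented.

def penetration_rate(active_users: int, potential_users: int) -> float:
    return active_users / potential_users

potential = {"US": 10_000_000, "DE": 4_000_000, "JP": 6_000_000}
active = {"US": 1_200_000}  # launched in the US only so far

worldwide = sum(potential.values())
print(penetration_rate(sum(active.values()), worldwide))  # 0.06

# Launch in Germany: even a modest start there pushes the global number up.
active["DE"] = 100_000
print(penetration_rate(sum(active.values()), worldwide))  # 0.065

# Scoping the denominator to launched locales would have shown a drop
# (1.3M / 14M = ~0.093 vs. the earlier 1.2M / 10M = 0.12) despite real growth.
print(penetration_rate(sum(active.values()),
                       potential["US"] + potential["DE"]))  # ~0.093
```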

To increase penetration rate, you need to attract new users and keep the ones that you have. You should track both customer acquisition and retention because aggressive customer acquisition can mask churn, which is harmful to your business long term. You don't want to waste resources chasing customers that you can't keep.

Consider the channels you have to build awareness: release notes, tooltips and interactive product tours, searchable documentation, email, YouTube channels... But be aware of the annoyance factor: the more accessible (in your face) the channel, the more disruptive it can be when unwanted. My team at Alexa Audio owns the hints that occur during audio playback. Alexa "by the way" hints can be very annoying, so we have strict rules for which ones we allow during the audio experience. The hint must introduce a feature or content that is useful in the current context. For example, we might suggest how to make a play queue continuous after the music naturally ends.

You need to be scientific in how you measure the impact of an experiment. This means randomized tests with control and treatment groups and the ability to compare immediate and long term behavior of both groups. A high conversion rate is not good if those who got the impression wind up using your product less over the following months.

In addition to building awareness, you need to make sure that the customer's first attempt to use the feature is successful and they experience immediate value. If the feature is not designed for immediate value (like auto-save), show progress toward that value (like showing a last saved time).

Most organizations don't have the analytical capacity to measure and manage penetration rate for every feature in their product. It is best to focus on "high value actions:" features that, when adopted, increase overall engagement with the product and the value exchange between you and your customers. There are sophisticated statistical models that can calculate the monetary value of each action, but a simpler approach is to segment your user population by overall engagement (such as the number of days in a month that they use the product). Compare the behavior of the top quartile with the bottom quartile and see which features your most active customers love. Then you can build programs to help your less active users discover these features.
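
As a rough sketch of that quartile comparison (the table, feature names, and numbers below are hypothetical), a few lines of pandas are enough to surface candidate high value actions:

```python
import pandas as pd

# Hypothetical per-user monthly stats: days active plus a usage flag per feature.
usage = pd.DataFrame({
    "user_id":     [1, 2, 3, 4, 5, 6, 7, 8],
    "active_days": [28, 25, 20, 14, 9, 6, 3, 1],
    "used_search": [1, 1, 1, 1, 0, 1, 1, 0],
    "used_export": [1, 1, 0, 1, 0, 0, 0, 0],
})

# Segment users into engagement quartiles by days of use in the month.
usage["quartile"] = pd.qcut(usage["active_days"], 4, labels=[1, 2, 3, 4])

features = ["used_search", "used_export"]
adoption = usage.groupby("quartile", observed=True)[features].mean()

# Features with the biggest gap between the top and bottom quartiles are the
# ones your most active customers love -- candidates to promote to everyone else.
gap = (adoption.loc[4] - adoption.loc[1]).sort_values(ascending=False)
print(adoption)
print(gap)
```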

Apr 28, 2023

Decisions

Decisions, regardless of whether they are right or wrong, are critical for moving an organization forward. Organizations that can't make decisions waste time rehashing the same discussions and eventually wind up going with whatever option was the default. A well managed decision, even if it leads to the wrong option, has value because it draws attention to an issue, allows the organization to act intentionally, and creates an opportunity to learn from the selected course of action and course correct. It's better to make a decision that you can pivot from than to cower in indecision. At Amazon, that's the "Bias for Action" Leadership Principle.

To get better at making decisions, organizations should use every decision as an opportunity to improve their process. Once you are aware of the necessity of a decision, I recommend you start your road to improvement by asking the following questions.

1. What decisions do we need to make to move forward?

A decision is only necessary if not making it stops progress. As long as it doesn't slow you down, deferring a decision allows you to make a more informed decision in the future. But be aware of the decisions you are unconsciously making through overly linear thinking that ignores options -- especially decisions that are hard to reverse.

2. Whose decision is it?

A decision should be made by someone who is accountable for the short and long term success of the impacted component, system, or experience. That person should own all of the relevant dimensions: cost, time to market, usability, viability, etc. The decision maker is also responsible for deciding who needs to be consulted or informed and for managing that communication. If someone with the authority to question or reverse a decision finds out about it after the fact, that's on the decision maker. So the first thing to do is understand the blast radius of the decision. A smaller internal technical design choice could be made by the person maintaining the code. If it has larger implications, the decision maker needs to have a broader scope of accountability.

3. What information is needed to have reasonable confidence?

You will never have complete information about all the implications of all of the potential options. If you did, the choice would be too obvious to even consider it a decision. The threshold for the level of confidence depends on the reversibility of the decision. If reversing a decision has no cost, you might as well just start trying options. But that is never the case because just trying something has the cost of time. The decision maker should be able to articulate up front what data they would need to reduce risk to an acceptable level. Examples include a technical proof of concept demonstrating feasibility, user research using prototypes, a well designed survey with a large enough sample size to achieve statistical significance, or revenue and cost projections. Be aware that sometimes the cost of gathering this information outweighs the risk of the decision itself.

4. How will you validate whether you made the right decision?

Decisions shouldn't be fire and forget. Your plan to implement the decision should include instrumentation, monitoring, and attention to the results. Ideally, you should define thresholds at which you will reconsider your choice. For example, if we see a retention rate of less than Y, we will pivot to a different choice. Or time could be part of your threshold: for example, we will run this experiment for 4 weeks and then look at the data. Good decisions should preserve the opportunity to change course. But you need to know when to consider those other options.

Mar 27, 2023

Follow Your Shot

The first big project of my technology career was a company-wide rollout of Windows 95. Our tiny five person IT organization (which included desktop support, network admin, and software development) took on a herculean task of adding RAM and installing the new operating system on over 500 employee workstations. After each work day, we stayed around until midnight to process 50 or so computers and then we got to work early the next day to train and support the employees whose computers we upgraded. While we were all exhausted from the previous night, morning support was the most critical part of the project because there were problems on some machines and, even when there weren't, users needed help finding their way around the new OS. Those first few minutes determined whether they loved the new experience or felt screwed by IT.

That's when I learned one of the most important lessons of my career: follow your shot. I usually don't love using sports analogies for work but I think most people have seen (or maybe even been) a tenacious hockey/basketball/soccer/lacrosse... player who scores off their own rebound.

Following your shot means caring about the result, not just completing the task. It means monitoring a new feature's utilization and fault rates, testing it in the field, talking to customers... and making adjustments to redirect a miss. Teams that lack a shot-following mentality have less impact because they don't notice missed steps or miscalculations. They just move on to the next task or project. Every team I have joined eventually notices a code branch that was never merged, a feature that was never fully dialed up, or a bug that made a feature inaccessible. Task-oriented teams also have a higher risk of burnout because checking off a task or completing a project is a relief, not a reward -- not like the satisfaction of having a measurable impact.

We often talk about the distinction between being "process oriented" vs. "results driven." Some criticize process orientation for not caring enough about outputs. Results driven cultures can be toxic for overly punishing failure and not appreciating effort. A "follow your shot" culture is a balance between these two extremes. It values the methodical execution of a defined process and also the agility to adapt to get the desired result.

Whatever the task, there is a "follow your shot" behavior that will increase its chance of success: checking the deployment pipeline after a code merge; monitoring dashboards during a dial-up; following up on a customer support case to ensure that the problem was resolved. You just need to ask yourself "how do I follow this shot?"

Feb 12, 2023

Your First Metric: Retention Rate

If you invested time and effort to build a new feature on your product, you had at least an informal hypothesis that the new capability would bring value to your customers. But did you actually test that hypothesis by validating the value? Chances are you obsessed over the design, implementation, and delivery of the feature but your attention shifted to the next project once the feature was launched. If you make a habit out of this, your product becomes a bloated jumble of features that your customers can't discover, don't understand, or don't like. You have fallen into the Feature Debt Trap. The cost of maintaining a product like this is high because of the large surface area for defects and you are vulnerable to disruption by a simpler product that delivers high value by scratching the right itches.

To avoid this fate, or at least slow the onset, you must concentrate your investment on the features that matter the most to your customers and ensure that they are delivering high value. If you look at one metric, start with retention - not for your entire product but for users of the feature. The strongest evidence of a feature delivering value is continued use. Retention means that customers have a recurring need and have found your solution the most convenient way to satisfy it.

The level of usage regularity you can expect depends on the feature. For example, your payroll product's W-2 generation function probably gets used just once per year. There are some important features that solve sporadic problems, such as password reset. But, for most features, regular usage means high perceived value.

You measure retention by duration. 2-Week Retention is the percent of customers who continued to use the feature in the week after they started using it. 3-Week Retention is the percent of that group that continued to use it in the 3rd week, and so on. Retention usually declines over a few weeks and then stabilizes. The shape of the decline is called a Survival Curve. The longest survivors are getting the most value from the feature. For some features, the point where the survival curve starts to flatten can be considered the habit formation period -- meaning that if someone sticks with something for this long (a meditation app, for example), they continue to use it. Or the survival curve could approach an asymptote representing the ideal segment of users for the feature.
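
As a minimal sketch of that calculation (with invented events, and measuring each week against the starting cohort rather than chaining week over week), the survival curve can be computed straight from raw usage logs:

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage log: (user_id, date_of_use) pairs for one feature.
events = [
    ("a", date(2023, 1, 2)), ("a", date(2023, 1, 10)), ("a", date(2023, 1, 18)),
    ("b", date(2023, 1, 3)), ("b", date(2023, 1, 9)),
    ("c", date(2023, 1, 4)),
]

first_use = {}
weeks_active = defaultdict(set)
for user, day in sorted(events, key=lambda e: e[1]):
    first_use.setdefault(user, day)
    # Week index relative to each user's first use (week 1 = first week of use).
    weeks_active[user].add((day - first_use[user]).days // 7 + 1)

cohort_size = len(first_use)
for n in range(1, 4):
    # n-week retention: share of the cohort still using the feature in week n.
    retained = sum(1 for weeks in weeks_active.values() if n in weeks)
    print(f"Week {n} retention: {retained / cohort_size:.0%}")
```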

If the retention rate is low, your feature or product suffers from one or both of the following issues:

  1. Low usability. There are easier ways to solve the problem.
  2. Low utility. The problem that the feature solves isn't a regular priority for the customer.

You should fix those problems before you invest in promoting the feature or invest in support and maintenance.

The low utility problem is the harder one to address. It could mean you were undisciplined in managing the product ("wouldn't it be cool if") or didn't understand your customer very well. Or it could be that the customer doesn't understand the value. For example, they may not anticipate the value of having a backup or auto-save. Whatever the root cause, you need to determine whether you can establish value or figure out an exit. Otherwise, the cost of maintaining the feature will be all waste.

The low usability problem is easier to solve and there are great techniques and skilled people who can help. The only hitch is that you need to plan for the investment. If you are locked into a road map that promises a hit parade of new feature launches, you will need to reset expectations. Nobody wins if your feature launches don't deliver value to your customers.

If a feature is not fixable, you should eliminate it. But that is hard to do if it has a small group of loyal customers who have relied on it to address an important need. To stay out of this situation, give yourself a two-way door by leveraging pilot or soft launch strategies. Often these approaches are used just to test scaling, but they offer a great opportunity to monitor retention.

Once you have achieved a suitable retention rate, your attention can shift to building adoption (measured by penetration rate) by raising awareness for the feature. You still need to monitor retention in the form of lapse and churn, but that is for another blog post.

Jan 04, 2023

Be Even Better

I have recently run across two professional development practices that I am interested in adopting, and I am starting to think that combining them will make them even more useful. The first practice is Jacob Kaplan-Moss's recommendation to maintain a transition file. This document is designed to help you train your successor and should contain notes on team mechanisms, role responsibilities, projects, and (if you are a manager) the people you manage. Jacob's reasoning for maintaining and updating this file every 2-3 months is mostly around supporting your team and showing general professionalism when you decide to leave. But I think this introspection is valuable even if you never leave your role. By preparing to teach someone to do your job, you can leverage the "Protégé Effect" to understand your job even better. Preparing to explain why something is done affords a perspective to see gaps, inconsistencies, and opportunities.

That brings me to the "Be Even Better" practice, which I learned about in an internal Amazon email list for managers. The post described a quarterly personal development day that team members schedule for themselves with the subject "Be Even Better." I love the title because it encourages a positive form of self criticism that embraces a "Growth Mindset."

One of the suggested question prompts to initiate the "Be Even Better" exercise is "if someone came in tomorrow and took my place, what would they be shocked or surprised about?" By updating the transition file and then asking this question, you give yourself an opportunity to combine the energy of a newcomer who wants to have an impact with the experience and context of an incumbent. The next steps in the process are to prioritize, plan, and set goals for improvement. If you were training your successor, it would feel much better to say "this is how we are solving this problem" than to appear content that the problem persists.

I think we humans have a natural tendency to settle into ruts where we just go through the motions of our day - attending meetings, responding to emails, processing deliverables... We get tired of being annoyed by issues and develop an unhealthy tolerance of them. Then we complain about burnout and "needing a change." We under-appreciate the potential to change our perspectives and see new challenges and opportunities in our current work environment, where we can harness our domain expertise and professional relationships to achieve more.

That is not to say that a change of work environment isn't also a good idea. Being on the messy side of a learning curve is great mental exercise, and the ability to share practices and ideas from different places is great for organizations. But a job change is risky. The recruitment process isn't ideal for learning whether you will like the new role. At Amazon, we have many "boomerangs" who come back after jumping into sub-optimal jobs. The art of managing a career is being able to maximize the growth you can get out of every role that you have and to see when growth opportunities actually start to diminish... rather than just appear to diminish because you stopped paying attention.

Dec 17, 2022

Digital Refactor

In light of the recent drama over at Twitter, I have been re-evaluating my use of social media. I have been off Facebook for years and for months I have made an effort to immerse myself in longer form content (newspaper articles, books, etc.). But I hadn't been able to resist the temptation of the Twitter timeline - specifically the little dopamine hits I got from snarky jokes and also surges of outrage at the world's injustices.

When I joined Mastodon (my handle is @sgottlieb@better.boston), I didn't want to recreate the same experience that I had on Twitter; specifically, the habit of trolling for dopamine. But it is taking me some time to figure out what I want from a timeline. I don't want to rely on a timeline to "stay informed" or "keep up to date." With their journalism skills and editorial standards, newspapers are much better resources to satisfy those needs. And I don't need to sift through a disorganized timeline to find links to articles that I should read. Content is already nicely organized in news websites and various newsletters that I subscribe to.

I rarely used Twitter as a social space to have conversations with people I know. And I am not looking for Facebook-style highlight reels from my real world friends. To maintain connections, I am much better off having real conversations.

So what problem am I looking to Mastodon to solve? I am inspired by passionate people doing noble things like advocating for more livable cities, scientific research, or any kind of creative expression. I like the idea of an open space to talk about esoteric topics. I am trying to learn how to use Mastodon to provide that experience. I think the solution lies in hashtags and being very deliberate in who I follow. It's a work in progress and I am patient.

Reach out if you have cracked the Mastodon code to create a perfect timeline.

Feb 06, 2020

Comfort is the single most important factor of change management

I have been reading some articles on digital transformation and change management and I am surprised at how little attention is given to the notion of comfort. The reason why change is hard is that it makes people feel uncomfortable. And when people are not comfortable, their confidence, morale, and productivity suffer.

At the very least, there is the loss of familiarity. Habituated routines that once hummed along on the edge of our consciousness all of a sudden require direct attention. Tasks take longer to do. Mistakes are made. The confidence of mastery can feel threatened.

There is always a group of people who benefited from the old way of doing things even if that way was inefficient or dysfunctional. Dysfunction can create jobs or mask poor performance so don't assume that everyone is onboard to eradicate it.

There is great advice for assessing change readiness and rolling out change. But be aware that the resistance that you feel is a primal tendency to seek and preserve comfort. In addition to your conventional change management best practices, focus your attention on people whose comfort will be the most impacted. These will be people who have developed mastery over specialized skills and knowledge or have accumulated power from the current dynamics. Think about what benefits would be the most motivating for those groups to buy in. If you don't have their support, you can count on their interference.

And remember to measure the impact of a change after people have had a chance to re-equilibrate and form new comfortable habits.

Jan 04, 2020

Content Here Republished on Pelican

I have been steadily reducing my Google footprint over the past year. I switched from Gmail. I no longer use Google Drive or Photos. And, most recently, I migrated this blog from Blogger to a statically generated site hosted on Amazon S3.

I use a framework called Pelican to generate the full site from content files written in Markdown. I am writing this post using a Markdown editor called Typora, although most text editors have very good Markdown support. Then I run a command that generates HTML files and pushes them to S3.

Migration was incredibly easy. Google Takeout allows you to export all your posts as an Atom file. Then I wrote a script that turned the Atom feed into Markdown files. It was easy to keep the same URLs thanks to the "blogger:filename" elements in the Atom file. For a template, I chose a theme called Blue Penguin. After changing a couple of colors, I was done!
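
A rough sketch of that kind of conversion (the file names here are illustrative, and the blogger namespace URI may differ in your export) looks something like this:

```python
import os
import xml.etree.ElementTree as ET

NS = {
    "atom": "http://www.w3.org/2005/Atom",
    # Blogger-specific namespace; check the root element of your Takeout export.
    "blogger": "http://schemas.google.com/blogger/2018",
}

tree = ET.parse("blog-export.atom")  # the Atom file from Google Takeout
os.makedirs("content", exist_ok=True)

for entry in tree.getroot().findall("atom:entry", NS):
    filename = entry.findtext("blogger:filename", default=None, namespaces=NS)
    if not filename:
        continue  # skip comments, settings, and other non-post entries
    slug = os.path.splitext(os.path.basename(filename))[0]

    title = entry.findtext("atom:title", default="Untitled", namespaces=NS)
    published = entry.findtext("atom:published", default="", namespaces=NS)[:10]
    body = entry.findtext("atom:content", default="", namespaces=NS)

    # Pelican reads post metadata from the top of each Markdown file.
    with open(os.path.join("content", slug + ".md"), "w") as out:
        out.write(f"Title: {title}\n")
        out.write(f"Date: {published}\n")
        out.write(f"Slug: {slug}\n\n")
        out.write(body)  # exported HTML body; Markdown tolerates inline HTML
```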

I am still working out the finer points of the workflow. So far, the main limitation I see is that I can't post from any computer like I was able to do with Blogger (and before that Wordpress). Before being able to generate the site, you need to set up a local environment -- not hard thanks to GitHub and Pipenv, but not something that I would want to do on my work computer. Probably the next time I get inspired to blog while traveling, I will email myself a post to publish later.

Overall, I am pretty happy with this setup!
