When I start working with an established software development team, my favorite tool for understanding their process is a "hot lot." A "hot lot" is a manufacturing term for an order that is expedited by being allowed to jump the queue. Hot lots are closely watched for progress and potential delays. In the world of software development, a hot lot can be a feature request that goes through a process of design, implementation, testing, enablement (documentation), release, promotion, and evaluation. A hot lot should accelerate the process by adjusting priorities, but it should not circumvent the process by breaking rules or cutting corners.
By prioritizing a feature and watching it go through the process at top speed, you can learn many things. For example, you can learn...
- Whether the process is even responsive enough to accept a hot lot. Sometimes engineering is tied to rigid roadmaps and nothing, no matter how important, can jump the line. This is concerning if those roadmaps stretch out beyond the opportunities you can reliably anticipate.
- Whether there even is a defined process. Is there a mechanism for ensuring all of the tasks (QA, documentation, deployment, etc.) are completed? Or maybe there is a process but nobody follows it. If there is no practiced process, you can't trust the integrity of the system or whether anything is really "done."
- How the process is structured into steps, roles, and hand-offs. How many people touch the feature? How much time does each step take? How much time is spent waiting between steps? Is there excessive back and forth? Lean Six Sigma has a great framework for studying process cycle time, wait times, and waste across the value stream.
- The theoretical speed limit of the current process. You will never know how responsive your process can be when you always slow it down with competing priorities. Often actual speed is much slower than potential speed because of various delays, distractions, and interruptions that are not the fault of the process.
- Whether there are structural blockers like "we only release every 3 months." Or maybe the team is distributed over many time zones with little overlap for hand-offs and feedback.
- Whether there are capacity blockers like "Joe is the only person who can do that step and he is not available."
- How easy it is to monitor the process. Can you go to one place and see the completed and remaining work?
- The amount of managerial overhead that the process requires. For example, is there a project manager who needs to track and delegate every task?
- The artifacts the process creates. Can you go back and see what was done and why?
- How the response to the feature was measured and incorporated into future improvement ideas.
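The cycle-time questions above are easy to answer quantitatively if you record a timestamp at each hand-off. As a minimal sketch (the step names and timestamps below are hypothetical, not from any real team), you can derive lead time, touch time, wait time, and process cycle efficiency from those timestamps alone:

```python
from datetime import datetime

# Hypothetical hand-off log for one hot lot: (step, started, finished).
# All names and times are illustrative assumptions.
steps = [
    ("design",         datetime(2024, 3, 1, 9),  datetime(2024, 3, 1, 16)),
    ("implementation", datetime(2024, 3, 4, 9),  datetime(2024, 3, 6, 17)),
    ("testing",        datetime(2024, 3, 8, 10), datetime(2024, 3, 8, 15)),
    ("documentation",  datetime(2024, 3, 11, 9), datetime(2024, 3, 11, 12)),
    ("release",        datetime(2024, 3, 12, 9), datetime(2024, 3, 12, 10)),
]

# Touch time: seconds someone was actually working on the feature.
touch_time = sum((done - start).total_seconds() for _, start, done in steps)

# Lead time: seconds from the first step starting to the last step finishing.
lead_time = (steps[-1][2] - steps[0][1]).total_seconds()

# Wait time: everything in between -- queues, hand-off gaps, interruptions.
wait_time = lead_time - touch_time

# Process cycle efficiency: the share of elapsed time spent doing real work.
pce = touch_time / lead_time

print(f"lead: {lead_time / 3600:.0f} h, touch: {touch_time / 3600:.0f} h, "
      f"waiting: {wait_time / 3600:.0f} h, efficiency: {pce:.0%}")
```

With the sample data, roughly three quarters of the elapsed time is waiting, which is exactly the kind of gap between actual and potential speed that a hot lot makes visible.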
After running through a couple of these experiments, I have a pretty good understanding of the process structure, its theoretical speed, its strengths, and its flaws. At that point, we can start to come up with ideas for improvement. The low-hanging fruit is usually pretty obvious ... especially to people who have been participating in the process but not paying attention to the overall throughput. Optimizations can be designed collaboratively and tested by future hot lots. I find that teams are generally comfortable with this evaluation because it doesn't target people as much as the framework that people work in. Usually processes (at least as they are practiced) form organically, so nobody feels threatened by process improvements -- especially if they are clearly supported by empirical evidence.
Even if you have been working with a team for a while, try pushing through a hot lot and pay close attention to it. There is really no better way to understand the execution of a process.