We have been working with teams for a while now to better measure and understand the impact of issues that affect process maturity. The feedback we’ve received is that the results clearly reflect reality, but it is always followed by the same question:
What do I do now?
This is hard to answer because it depends on where you are today, and the answer will change over time.
We do our best to offer advice to customers, but until now they’ve been mostly on their own in figuring out which actions will have the biggest effect.
Through working with different organizations, we have found that most teams follow similar paths from inception to maturity.
Also, despite differences in nomenclature, tools, and ways of working, the software engineering community has come to agree on a broad set of universal best practices for building teams that predictably deliver high-quality software.
Here we introduce an engineering process maturity scorecard that captures these best practices and shows you where you are in your journey so you know what to do next.
Our main goal here is to identify common best practices, not to advocate for a particular tool or process. As such, we have tried to base the scorecard definitions on fundamental principles that are tool and process agnostic. In theory, a mature organization with 100% home-grown solutions should be able to pass.
While software engineers largely agree on the best practices listed here, you can always find those who dissent. Explaining the full reasoning behind each best practice is beyond the scope of this article, but we briefly describe the benefits of each one, and you can find more with a quick internet search.
Some also question whether best practices are worthwhile for smaller, newer teams. It is important not to conflate whether a best practice is essential with whether it is beneficial. Less experienced engineers may think that they don’t need to do something because they can survive without it, even though doing it would help them.
Unless otherwise noted, we believe the best practices in this scorecard are helpful for teams of all sizes, starting at just two people.
For the scorecard to work well, it should measure results consistently in a way that ties back to business impact. It would not be helpful, for example, to flag legacy code or tools that are rarely used.
To achieve this goal, we measure the percentage of time that people spend following each best practice, which reflects the impact it has on their productivity.
Time ratios further enable teams to set goals below 100% and customize them to their needs, acknowledging that things are rarely perfect in practice and giving them natural transition points to move on to the next goal.
Time ratios also make it easy to track historical progress and show how things have improved over time (which CEOs love to see!).
minware can measure many of these scores for you automatically. However, you can also start by just asking people to write down approximately how much time they spent following each best practice recently.
Finally, we recommend aggregating results at the team level. The reason is that many of the artifacts – such as repositories, projects, and sprints – are owned by a team, not by an individual. Also, teams often have different maturity levels, so aggregating at a higher organizational level loses that context.
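To make the arithmetic concrete, here is a minimal sketch of how a team-level time ratio could be computed from self-reported estimates. The data model and field names are illustrative, not part of minware or the scorecard itself; the sketch simply pools hours across the team so that people with more relevant work count proportionally more.

```python
from dataclasses import dataclass

# Illustrative data model (hypothetical, not a minware API):
# one self-reported estimate per person per best practice.
@dataclass
class PracticeEstimate:
    person: str
    practice: str           # e.g., "changes peer-reviewed before merge"
    hours_following: float  # hours of relevant work done while following the practice
    hours_total: float      # total hours of relevant work

def team_score(estimates: list[PracticeEstimate], practice: str) -> float:
    """Team-level time ratio for one best practice, as a percentage."""
    relevant = [e for e in estimates if e.practice == practice]
    total = sum(e.hours_total for e in relevant)
    if total == 0:
        return 0.0
    # Pool hours across the team rather than averaging per-person percentages.
    return 100.0 * sum(e.hours_following for e in relevant) / total

# Example: two people estimating last sprint's hours.
estimates = [
    PracticeEstimate("alice", "changes peer-reviewed before merge", 32, 40),
    PracticeEstimate("bob", "changes peer-reviewed before merge", 18, 30),
]
print(f"{team_score(estimates, 'changes peer-reviewed before merge'):.0f}%")  # ~71%
```

Averaging per-person percentages instead would weight everyone equally regardless of how much relevant work they did; either choice is defensible as long as it is applied consistently.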
We have organized the scorecard into sections for each logical area. The items within each section are generally sorted from lower to higher maturity, though some teams may adopt later items before earlier ones.
There is no one-size-fits-all path for every team because the impact of each issue depends on team members’ background and experience.
We recommend first assessing the team’s biggest pain points, which can be done with an informal survey or by using a system like minware to analyze issues in each area.
Once you have a clear picture of the top problems (e.g., sprint plan interruptions, unproductive meetings, or frequent production bugs), you can select a few scorecard items related to those problems and create goals for the team.
With target scorecard numbers in place, teams can independently review the things that hurt the score and come up with a plan of action for meeting their goals. (A sprint retrospective is a great time to do this.)
After teams meet their goals, the process repeats, moving on to the next bottleneck.
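As a rough illustration of that loop, the sketch below compares hypothetical current scores against team-chosen targets and surfaces the items furthest behind as candidates for the next retrospective. The item names and numbers are made up for the example.

```python
# Hypothetical goal tracking: the scorecard item names and numbers are invented.
goals = {
    # scorecard item: (current %, target %)
    "sprint work planned at sprint start": (55.0, 80.0),
    "changes peer-reviewed before merge": (71.0, 90.0),
    "production bugs linked to a ticket": (88.0, 90.0),
}

def next_focus(goals: dict[str, tuple[float, float]], top_n: int = 2) -> list[str]:
    """Return the items furthest below their targets, for the next retrospective."""
    gaps = {item: target - current for item, (current, target) in goals.items()}
    behind = [item for item, gap in gaps.items() if gap > 0]
    return sorted(behind, key=lambda item: gaps[item], reverse=True)[:top_n]

print(next_focus(goals))
# ['sprint work planned at sprint start', 'changes peer-reviewed before merge']
```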
Please see the full list of scorecard items for more details.
Creating a high-functioning engineering team requires doing a lot of things well in a wide range of areas.
There are some areas, such as talent acquisition, people management, and DevOps, that we did not include here for the sake of brevity but may cover in the future.
However, this scorecard should serve as a good starting point for anyone who wants to assess how well their team follows modern engineering best practices.
If you have other ideas about best practices we should include, or would like help implementing an engineering process maturity scorecard in your organization, we’d love to hear from you.