The goal of this report is to answer the question: How many bugs are we creating and fixing? This can inform decisions about investments in better quality practices and test automation.
This report starts by showing change failure rate (CFR), one of the four key DORA metrics. Change failure rate tracks the number of high-severity bugs created per change. This report uses merges to a main/master branch for the change count, but can also be configured to use deployments.
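As a rough illustration, here is a minimal sketch of how change failure rate could be computed from two counts over a reporting window. The function and argument names are hypothetical, not a specific tool's API.

```python
def change_failure_rate(high_severity_bugs: int, changes: int) -> float:
    """High-severity bugs created per change (merge to main/master, or deployment)."""
    if changes == 0:
        return 0.0
    return high_severity_bugs / changes

# Example: 3 high-severity bugs attributed to 120 merges in the period.
print(change_failure_rate(3, 120))  # 0.025
```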
Beyond change failure rate, it is important to know how frequently bugs of all severity levels are created, how the bug backlog is trending, and how much total time people spend fixing bugs. Together, these provide a holistic view of quality.
The next metric shown in this report is new bugs per dev day (NBD). One problem with change failure rate is that it depends on the size of each deployment, so it can be heavily influenced by CI/CD practices without any underlying change in quality. New bugs per dev day gives you a more accurate picture of how many new bugs are created per day of active development.
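A similar sketch for new bugs per dev day, assuming you already have a count of newly created bugs (of any severity) and of active development days for the period; the names are illustrative.

```python
def new_bugs_per_dev_day(new_bugs: int, active_dev_days: int) -> float:
    """New bugs of any severity created per day of active development."""
    if active_dev_days == 0:
        return 0.0
    return new_bugs / active_dev_days

# Example: 18 new bugs across 240 active dev days in the period.
print(new_bugs_per_dev_day(18, 240))  # 0.075
```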
The report then shows net bug creation minus resolution (NBCR) to show whether, on net, teams are accumulating bugs or burning down the bug backlog.
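The calculation itself is simple; a one-line sketch with hypothetical names:

```python
def net_bug_creation(created: int, resolved: int) -> int:
    """Bugs created minus bugs resolved in a period: positive means the
    backlog grew on net, negative means it was burned down."""
    return created - resolved

# Example: 14 bugs filed and 20 bugs resolved this sprint.
print(net_bug_creation(14, 20))  # -6, i.e. the backlog shrank by 6
```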
This report includes two scorecard metrics related to bug management: bug fix vs. find rate (BFFR), which shows what share of bugs are fixed within 30 days of discovery, and non-bug load (NBL), which shows whether the team is able to spend an adequate amount of time on new functionality or is overloaded with bug fixes.
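Assuming BFFR is the fraction of bugs fixed within the 30-day window and NBL is the share of effort spent outside bug fixes, a small sketch of both scorecard calculations follows; the input shapes and names are illustrative only.

```python
from datetime import date, timedelta

def bug_fix_vs_find_rate(bugs: list[tuple[date, date | None]],
                         window_days: int = 30) -> float:
    """Fraction of bugs fixed within `window_days` of discovery.
    Each bug is a (found_on, fixed_on) pair; fixed_on is None if still open."""
    if not bugs:
        return 0.0
    fixed_in_window = sum(
        1 for found, fixed in bugs
        if fixed is not None and fixed - found <= timedelta(days=window_days)
    )
    return fixed_in_window / len(bugs)

def non_bug_load(feature_days: float, bug_fix_days: float) -> float:
    """Share of total effort spent on work other than bug fixes."""
    total = feature_days + bug_fix_days
    return feature_days / total if total else 0.0

# Example: three bugs, one fixed quickly, one fixed late, one still open.
bugs = [
    (date(2024, 1, 2), date(2024, 1, 20)),  # fixed in 18 days
    (date(2024, 1, 5), date(2024, 3, 1)),   # fixed, but after 30 days
    (date(2024, 1, 10), None),              # still open
]
print(bug_fix_vs_find_rate(bugs))                          # ~0.33
print(non_bug_load(feature_days=32.0, bug_fix_days=8.0))   # 0.8
```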
The report next shows average bug backlog size (ABBS) history, grouped by priority, team, and person, so that you can pinpoint the root cause of undesirable backlog growth.
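One way the grouped backlog average could be computed, assuming daily snapshots of open-bug counts keyed by the grouping dimension (priority, team, or person); this is a sketch, not the report's actual implementation.

```python
from collections import defaultdict

def average_backlog_by_group(daily_snapshots: list[dict[str, int]]) -> dict[str, float]:
    """Average open-bug count per group across a series of daily backlog snapshots."""
    totals: dict[str, int] = defaultdict(int)
    for snapshot in daily_snapshots:
        for group, open_bugs in snapshot.items():
            totals[group] += open_bugs
    days = len(daily_snapshots)
    return {group: total / days for group, total in totals.items()}

# Example: three daily snapshots of open bugs, grouped by priority.
snapshots = [{"P1": 2, "P2": 5}, {"P1": 3, "P2": 4}, {"P1": 1, "P2": 6}]
print(average_backlog_by_group(snapshots))  # {'P1': 2.0, 'P2': 5.0}
```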
Finally, the report shows a non-bug load (NBL) history chart to visualize how the team’s available time for new feature work changes over time.
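Under the same assumed definition of non-bug load as above, a history series could be derived from per-week effort data along these lines; the week labels and input shape are hypothetical.

```python
def non_bug_load_history(weekly_effort: list[tuple[str, float, float]]) -> list[tuple[str, float]]:
    """Weekly non-bug load, from (week_label, feature_days, bug_fix_days) rows."""
    history = []
    for week, feature_days, bug_fix_days in weekly_effort:
        total = feature_days + bug_fix_days
        history.append((week, feature_days / total if total else 0.0))
    return history

# Example: two weeks of effort data.
print(non_bug_load_history([("2024-W01", 32.0, 8.0), ("2024-W02", 28.0, 12.0)]))
# [('2024-W01', 0.8), ('2024-W02', 0.7)]
```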