This report shows how much time people spend on code that never receives a review, is never merged, or exists only to address review comments, versus how much time they spend on pre-review code that is successfully merged.
It can identify several anti-patterns: teams insufficiently using pull requests and code review, or situations where a lot of code is abandoned or requires rework.
The report contains five metrics and shows individual values relative to the team and organization for each one so that you can further investigate outlier scenarios where people are struggling to merge code.
This metric looks at how many dev days went into commits that were made directly to a main or master branch rather than a feature branch.
Committing to main/master is a bad practice because it circumvents all the quality controls associated with branches and pull requests.
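To make the definition concrete, here is a minimal sketch of how such a metric could be computed, assuming per-commit effort estimates are already available; the `Commit` record and its `dev_days` field are hypothetical stand-ins for whatever your analytics pipeline actually produces:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    dev_days: float  # estimated effort behind this commit (hypothetical field)
    branch: str      # branch the commit was originally authored on

def direct_to_main_share(commits: list[Commit]) -> float:
    """Fraction of dev days that went into commits made straight to main/master."""
    total = sum(c.dev_days for c in commits)
    direct = sum(c.dev_days for c in commits if c.branch in ("main", "master"))
    return direct / total if total else 0.0
```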
The never-merged dev day metric looks at what portion of developers’ time goes toward code that has not yet been merged to any main or integration branch (such as “dev”). This metric highlights situations where people are discarding work, or where pull requests stay open for a really long time.
A little bit of never-merged work is to be expected because some pull requests don’t go as planned and end up being closed. However, a large amount of it is a problem and indicates that developers are not able to merge code that they spent time writing.
The chart is interactive: you can click on a data point to see the specific pull requests associated with that week.
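A minimal sketch of how the never-merged share could be computed, along with a helper for surfacing long-open pull requests; the `PullRequest` record, its `dev_days` field, and the 30-day staleness threshold are all hypothetical assumptions, not the report’s actual rules:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class PullRequest:
    number: int
    dev_days: float             # estimated effort behind the PR (hypothetical field)
    opened: date
    merged_into: Optional[str]  # "main", "dev", etc., or None if never merged

INTEGRATION_BRANCHES = {"main", "master", "dev"}
STALE_AFTER = timedelta(days=30)  # arbitrary threshold for "open a really long time"

def never_merged_share(prs: list[PullRequest]) -> float:
    """Fraction of dev days tied up in PRs not merged to an integration branch."""
    total = sum(p.dev_days for p in prs)
    unmerged = sum(p.dev_days for p in prs
                   if p.merged_into not in INTEGRATION_BRANCHES)
    return unmerged / total if total else 0.0

def stale_open_prs(prs: list[PullRequest], today: date) -> list[PullRequest]:
    """PRs that have sat unmerged past the staleness threshold."""
    return [p for p in prs
            if p.merged_into is None and today - p.opened > STALE_AFTER]
```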
This metric shows you how much time developers spend on code that is merged without ever receiving a code review.
Merging code without any code review (not even a rubber-stamp approval with no feedback) is risky and likely to lead to quality problems.
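Under the same kind of hypothetical data model, the unreviewed-merge share reduces to filtering merged pull requests with a zero review count (approvals and comment reviews counted together):

```python
from dataclasses import dataclass

@dataclass
class MergedPR:
    number: int
    dev_days: float    # estimated effort (hypothetical field)
    review_count: int  # approvals and comment reviews combined

def unreviewed_merge_share(prs: list[MergedPR]) -> float:
    """Fraction of dev days merged without a single review, rubber-stamp or not."""
    total = sum(p.dev_days for p in prs)
    unreviewed = sum(p.dev_days for p in prs if p.review_count == 0)
    return unreviewed / total if total else 0.0
```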
This metric (PRR) shows how much time developers spend on code before it receives a review, and how much they spend on it afterward.
High ratios of post-review work indicate that developers are opening pull requests that require a substantial amount of rework prior to being merged, which may mean they are struggling with code quality or requirements. When this happens, it is important to investigate whether product managers and tech leads set clear expectations for developers, or whether the developers failed to follow guidance or lack the skill set necessary to succeed on their own.
Keep in mind that an extremely low PRR is also bad. This can happen when there aren’t any code reviews, or when reviews are rubber stamps. If PRR is too low across the team, it may mean that the code review process is weak and people are merging low-quality code.
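Here is one way PRR could be computed under these definitions. Note that whether the report defines PRR as post-review effort over pre-review effort, or over the combined total, is an assumption here, and the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ReviewedPR:
    number: int
    pre_review_days: float   # effort before the first review (hypothetical field)
    post_review_days: float  # rework effort after the first review

def post_review_ratio(prs: list[ReviewedPR]) -> float:
    """Share of review-cycle effort spent reworking code after the first review.

    High values suggest heavy rework; values near zero suggest rubber-stamp
    reviews (interpretation per the discussion above).
    """
    pre = sum(p.pre_review_days for p in prs)
    post = sum(p.post_review_days for p in prs)
    return post / (pre + post) if (pre + post) else 0.0
```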
Finally, the pre-review code dev day metric combines all the other metrics in this report: it looks at what percentage of dev days go toward pre-review code that is part of a pull request and eventually receives a review.
You can think of this metric as the share of time developers spend, before getting help from others, on code that is eventually reviewed and merged. Again, it shouldn’t be too high, because that can indicate rubber-stamp reviews and merging low-quality work. However, numbers below 50% are worth investigating.
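Tying the section together, the headline metric can be read as whatever remains after excluding the other four buckets. The `WorkSlice` record and its boolean flags below are hypothetical; the report’s actual bucketing rules may differ:

```python
from dataclasses import dataclass

@dataclass
class WorkSlice:
    dev_days: float
    direct_to_main: bool  # committed straight to main/master
    merged: bool          # eventually merged to an integration branch
    reviewed: bool        # PR received at least one review
    post_review: bool     # effort spent after the first review (rework)

def pre_review_dev_day_share(slices: list[WorkSlice]) -> float:
    """Percentage of all dev days spent pre-review on work that is eventually
    reviewed and merged: the bucket left after excluding direct-to-main,
    never-merged, unreviewed, and post-review work.
    """
    total = sum(s.dev_days for s in slices)
    healthy = sum(s.dev_days for s in slices
                  if not s.direct_to_main and s.merged
                  and s.reviewed and not s.post_review)
    return 100.0 * healthy / total if total else 0.0
```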