Engineering Metrics for Board Meetings

One common reason people deploy engineering metrics is pressure from the board of directors and/or CEO to provide greater visibility into their oh-so-expensive engineering organization. This desire is not unjustified, given that software is an increasingly large part of the budget in many organizations.

Some engineers try to resist providing transparency to the board/CEO as a result of previous traumatic experiences with micromanagement. However, if done well, sharing metrics with the board can increase their confidence in engineering and even make them excited to invest more money in the team.

The Goals of Engineering Metrics for Boards

Before diving into the metrics themselves, it’s important to first understand the goals of sharing them with the board.

Goal #1: Alignment

The first goal is to improve alignment between the engineering team, the board, and the CEO. CEOs and boards often get a bad reputation in the media, but they are generally trying to do the right thing for the employees and the company. Similarly, engineers usually take pride in their work and try to do the right thing for the business (not just do as little work for as much pay as possible).

Issues with alignment tend to arise because engineers and the board have different perspectives about the best way to achieve their common goal of improving the company. Boards don’t see all the low-level things that happen to ship software, nor do they hear directly from individuals about how they feel. On the other hand, engineers are often insulated from how investors value the business, the cost of capital, and where money really comes from for their paycheck.

So, what are the areas where engineers commonly struggle to get alignment and buy-in from their board about priorities? Well, it tends to be behind-the-scenes developer experience stuff. People outside engineering see the features that ship and may even hear about bugs or quality issues. But, they don’t see how painful it is to deliver those features due to overaccumulation of tech debt, process problems, or personnel issues on the team.

These behind-the-scenes issues tend to grow over time if left unaddressed, increasingly slowing down development and driving away the best engineers. They are similar to financial debt in that they are a significant liability, but dissimilar in that they don’t show up on the balance sheet, so are not directly visible to the board.

When delivering metrics to the board, the first goal that engineers should have in mind is accurately portraying issues that impede engineering so that the board is aligned about investing in improvements.

Goal #2: Predictability

Board meetings don’t happen very frequently, and often focus on longer-term plans. At the same time, a key tenet of agile software development is adapting to circumstances rather than sticking to an outdated plan.

This can create conflicts between the board and engineers, with the board justifiably wanting to review strategic plans and then have them actually implemented rather than thrown out the next week.

Another important goal of sharing engineering metrics with the board is to establish the parameters of predictability so the board can depend on promises they receive while engineering retains enough flexibility to practice agile software development. As such, metrics should highlight impediments to predictability (like tech debt and quality issues), their impact, and the cost of fixing them.

Goal #3: Confidence

Finally, it is better for everyone if the board trusts that the engineering team is responsibly managing its budget. It’s not that boards and CEOs don’t trust engineers, but it would be negligent on their part not to verify what they are told.

On this front, it is the responsibility of engineering to account for where their resources are spent and the impact of internal improvements on productivity so that senior leaders have confidence in engineering management. This also benefits engineering because it makes the board more willing to increase their budget.

Engineering Metrics for the Board

This section highlights different categories of engineering metrics that can be helpful to show the board to achieve the goals outlined above. It is divided into sections for different areas of engineering productivity, each roughly corresponding to one slide you can create in a board presentation.

People Metrics

Employee satisfaction is the single most important metric for a healthy engineering team. Whether or not you believe 10x engineers exist, every company has top performers who are highly respected in the organization. If they leave, they can set off a talent exodus that permanently cripples the team. So, it is critical to retain them and keep them happy, because losing them can easily cause more than 10x damage.

To show employee satisfaction, there are a few different types of data you can show:

  • Glassdoor Reviews - These are the ground truth of employee satisfaction, particularly the bad reviews. They are on the internet anyway, so you should highlight them and what they say.
  • Undesirable Attrition - People aren’t always honest about why they leave, but sometimes they are. It’s important to surface rates of known undesirable attrition so that everyone can get behind putting all hands on deck to fix the issues that drove people out.
  • Employee NPS - This is the answer to a question like “On a scale of 1-10, how likely are you to refer others to work at this company?” You can use a tool for this or run your own simple survey with Google Forms. This metric is helpful because it provides an early warning before you end up with undesirable attrition or bad Glassdoor reviews. Again, it’s important to collect qualitative feedback with this as well to identify the issues that lead to negative responses. (A sketch of the calculation appears after this list.)
  • Offer Acceptance - This metric covers people who don’t join the organization in the first place, and is especially helpful because you may be failing to attract the best engineers because they anticipate a lack of satisfaction. It is also good to solicit feedback from candidates if they are willing to provide it. This metric can surface issues with compensation, but also culture and practices that are turning people away and need improvement.
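
If you run the survey yourself, the eNPS arithmetic is simple: the percentage of promoters (scores of 9-10) minus the percentage of detractors (6 and below). Here is a minimal sketch, assuming you have the raw scores as a list; the data is made up, and most survey tools will report this number for you.

```python
def enps(scores):
    """Employee NPS: % promoters (9-10) minus % detractors (<= 6).
    Passives (7-8) count toward the total but toward neither bucket."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 20 survey responses (made-up data)
scores = [9, 10, 8, 7, 9, 6, 10, 9, 8, 5, 9, 10, 7, 8, 9, 4, 10, 9, 8, 7]
print(enps(scores))  # 35, i.e., an eNPS of +35
```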

With employee satisfaction metrics, it’s helpful to also count and categorize the types of feedback in addition to the raw numbers. The benefit of this is that it builds alignment around investing in the internal improvements that the team needs to make to improve its scores.

Finally, Overall Onboarding Time can be a useful people metric to show in addition to employee satisfaction. This metric helps expose tech debt and put a concrete cost on it so that the team can invest in bringing new people up to speed more quickly. You can collect this by surveying people, or looking at how long it takes for new team members to reach a consistent velocity.
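
As a rough sketch of the second approach, you could define onboarding time as the first week a new hire’s rolling output reaches some fraction of the team’s median. Everything here is an assumption to tune for your team: the 80% threshold, the three-week window, and using merged pull requests as the output measure.

```python
from statistics import median

def onboarding_weeks(new_hire_weekly, team_median_weekly,
                     threshold=0.8, window=3):
    """Return the first week (1-indexed) where the new hire's rolling
    average output reaches `threshold` of the team's median weekly
    output, or None if they never get there in the data provided."""
    target = threshold * team_median_weekly
    for week in range(window, len(new_hire_weekly) + 1):
        recent = new_hire_weekly[week - window:week]
        if sum(recent) / window >= target:
            return week
    return None

# Made-up example: weekly merged pull requests for one new hire
new_hire = [0, 1, 1, 2, 3, 4, 4, 5, 5]
team_weekly_medians = [5, 4, 5, 6, 5]  # per-engineer medians
print(onboarding_weeks(new_hire, median(team_weekly_medians)))  # 8
```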

Quality

The next area that’s important to cover in engineering board presentations is quality. There may already be some sense of how quality impacts customers with things like the churn rate, but here you should highlight the impact of quality on the pace and predictability of engineering work. This helps the board understand the importance of investing in quality improvements and tech debt reduction to increase feature velocity.

Here are some specific metrics that can paint a high-level picture of the current state of quality in your organization:

  • Bug Fix vs. Find Rate - This one is listed first because it can show whether the team is actively sacrificing quality to ship more features. Any sustained drop in the fix vs. find rate can have severe long-term consequences, and the team should fight for sufficient time to keep their bug backlog under control and maintain this metric in good standing.
  • Overall Bug Load - This measures the percentage of velocity that teams currently have to spend fixing bugs, showing the ongoing tax that quality issues impose on the team.
  • Theoretical Bug Load - If the bug fix vs. find rate is low, this metric provides further illumination by showing what the overall bug load would be if the fix vs. find rate were at a normal level, and can expose situations where teams are really underwater with quality issues.
  • Revert Rate - This metric shows the percentage of development time that went into changes that had to be reverted because they caused a severe issue in production. It captures the additional cost of bugs beyond the overall bug load due to rework on the changes that caused them.

Finally, it can be useful to present a Total Quality Overhead metric, which is the Theoretical Bug Load plus the Revert Rate. At the end of the day, this is the total amount of engineering resources that are unavailable for new feature work, and can be the basis for justifying investments in quality improvement and tech debt reduction.
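
To make the arithmetic concrete, here is a minimal sketch using made-up monthly numbers; the variable names are hypothetical, a fix vs. find rate of 1.0 is treated as the “normal” level, and in practice these inputs would come from your ticket tracker.

```python
# Made-up monthly totals for one team
bugs_found = 40         # new bugs reported this month
bugs_fixed = 30         # bugs closed this month
bug_fix_time = 120.0    # dev-hours spent fixing bugs
total_dev_time = 800.0  # total dev-hours available
reverted_time = 40.0    # dev-hours of work that was reverted

fix_vs_find = bugs_fixed / bugs_found          # 0.75
bug_load = bug_fix_time / total_dev_time       # 15%

# Theoretical bug load: what the bug load would be if the team
# fixed bugs as fast as they were found (fix vs. find rate of 1.0)
theoretical_bug_load = bug_load / fix_vs_find  # 20%

revert_rate = reverted_time / total_dev_time   # 5%
total_quality_overhead = theoretical_bug_load + revert_rate

print(f"Total quality overhead: {total_quality_overhead:.0%}")  # 25%
```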

Project Predictability

Once quality overhead is out of the way, the board will probably want to know what you got done compared to what you planned on the roadmap. This only makes sense, as it indicates the likelihood of delivering on your current plans in the future.

This is not to say that changing plans is bad or that the board should be upset if you went another direction to pursue better opportunities. What’s most important is that projects don’t end up taking a lot longer than expected without realizing commensurate value, because this means that any return on investment (ROI) calculations done to justify the projects will be wrong. Essentially, what the board should care about is that realized ROI doesn’t come in significantly below planned ROI, like when the project that was supposed to take a month takes six.

Here are some ways you can provide transparency into project predictability, depending on what data you have available:

  • Estimated vs. Realized ROI - If you have actual ROI estimates and results available for your projects, then you should stop there and just present those. However, this is rare as it can be difficult to compute the value of each project in isolation.
  • Actual vs. Estimated Roadmap Schedule - If you plan out a roadmap with expected timelines, then you can compare those to the actual timelines for each project to show the percent over or under total project time. This isn’t ideal, though, if the underlying assumptions behind the timeline change, like different amounts of quality overhead or personnel allocated to each project.
  • Scope Creep Rate - You can compute the scope creep rate by looking at the total amount of work that went into a project at its conclusion compared to the amount originally estimated when you committed resources to the project. This can be done either in units of developer time or story points. minware can compute this for you as long as you have an estimate for the scope of work at the start of a project denoted by an epic ticket. (A worked example appears at the end of this section.)
  • Non-bug Sprint Interruption Rate - If you don’t have any form of project estimates at the start or you typically do a lot of small projects, then the non-bug sprint interruption rate can be a helpful proxy for the ratio of unplanned to planned work. This doesn’t capture everything because work in the sprint at the start can still be unplanned from the project perspective, but it does set a lower bound because sprint interruptions typically don’t represent planned work.

At the end of the day, you want to convey to the board the Total Project Overhead rate, which is the amount of additional time taken to complete projects as a percentage of their original estimates. This enables you to communicate clearly about how much padding to add to projects when making roadmap commitments, and identify significant variations representing problems or improvements.
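
Here is a worked example of the scope creep and total project overhead calculations over a few made-up projects; the project names and developer-week figures are hypothetical, and minware can derive the same numbers from epic tickets if initial estimates are in place.

```python
# Made-up (estimated, actual) project sizes in developer-weeks
projects = {
    "billing-revamp": (8, 14),
    "sso-support":    (4, 5),
    "mobile-v2":      (12, 20),
}

for name, (estimated, actual) in projects.items():
    scope_creep = (actual - estimated) / estimated
    print(f"{name}: {scope_creep:+.0%} scope creep")

# Total project overhead: extra time across all projects as a
# percentage of the original estimates
total_est = sum(e for e, _ in projects.values())
total_act = sum(a for _, a in projects.values())
print(f"Total project overhead: {(total_act - total_est) / total_est:.0%}")
```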

Effort Allocation

The final set of metrics that boards should see is how effort (and therefore money) was allocated on the engineering team. This overlaps a bit with quality and project overhead metrics, but is good to show too because it directly maps to the bottom line.

To compute these metrics, a system like minware will be the most accurate (and easy to implement) because it looks at data from version control and ticket tracking. You could also use manually recorded time logs or story points. These approaches are less reliable and miss some things, but are better than showing up to a board meeting empty-handed.

Here are the categories of time that you can show with the output of a system like minware (a short rollup sketch follows the list):

  • Unattributable Time - This is the overall portion of time spent that cannot be allocated to a particular task in your ticket tracking system. Unattributable time includes things like branches without tickets, commits to a main branch, and simply time with no commit or ticket activity. Engineering managers should look at each type of unattributable time separately, but the board probably just cares about the overall rate so they have a sense of how much time is unaccounted for and whether the metric is moving in the right direction.
  • Bugs - You may have separate quality metrics, but it’s also nice to show total time that went toward bugs.
  • Non-Project Tickets - In addition to bugs, it’s helpful to see how much time went toward all the miscellaneous tasks that don’t fall under the umbrella of a project represented by an epic ticket.
  • Total Project Time - This leaves you with the total time spent on projects after excluding the other time types above. Optionally, depending on whether you have project estimates in place, you can separate this one into Planned Project Time and Unplanned Project Time to show what portion of effort went into predictable delivery (and therefore can be used to estimate the capacity to deliver future work).
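
As an illustration, here is a sketch that rolls hypothetical time entries up into the categories above and normalizes them to 100%; the entries themselves are made up, whereas a system like minware would infer them from version control and ticket data.

```python
from collections import defaultdict

# Hypothetical time entries: (dev-hours, category) pairs
entries = [
    (120, "unattributable"),
    (150, "bugs"),
    (90,  "non-project tickets"),
    (400, "planned project"),
    (40,  "unplanned project"),
]

totals = defaultdict(float)
for hours, category in entries:
    totals[category] += hours

# Show each category as a share of all time so the slices sum to 100%
grand_total = sum(totals.values())
for category, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:>20}: {hours / grand_total:.0%}")
```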

After you compile these aggregate metrics, you will want to break down the Project Time metrics by large project or initiative. This empowers the CEO and board to assess actual ROI, which should be their primary concern. When you do this, just make sure the numbers add up to 100% and that the “other” category is small.

Metrics Not to Share

This list is not exhaustive and there are other things that may be helpful to show to the board in different areas, like performance or security KPIs. However, there are some things you should probably avoid sharing with the board, not because you want to hide them, but because they can be misleading or easy to misinterpret.

  • Story Point Velocity - You probably knew this was going to be at the top of the list, but story points are meaningless as an absolute metric and should only be used for relative comparisons (e.g., what portion of effort went into bug fixes). The simple reason is that they are trivial to inflate, which will inevitably happen if they start getting shared with the board.
  • Sprint Completion - Looking at the percentage of points delivered each sprint from the original commitment is a relative metric, but it’s simply too low-level to be useful for the board. It impacts predictability a bit, but it’s hard to draw meaningful conclusions from this metric because different teams may have more or less aggressive sprint commitments. Turning it into a target may lead teams to just deliver less so that they are being “predictable.”
  • Delivery Rate of Roadmap Projects - The project predictability metrics described earlier help by showing how much overhead to expect when delivering projects. However, the roadmap should not be set in stone, as this prevents the team from substituting better projects when opportunities arise to realize higher ROI. This metric essentially puts you on a path to waterfall development instead of agile, which the software engineering community generally agrees is a bad thing.
  • DORA Metrics - DORA is an acronym that stands for DevOps Research and Assessment. DORA Metrics are a suite of four measures that can be helpful internally, but they are not good to share directly with the board because they are easy to manipulate or don’t tell the whole story in the following ways:
    • Deployment Frequency - Deploying more often is generally good as it indicates a more efficient deployment process, and it is clearly bad to have very infrequent deployments. However, if you are deploying at a reasonable pace, going faster might not actually help for various reasons, like working with app store processes that cap your frequency or putting extra effort into feature flagging small pieces of projects so you can “deploy” them even if they are inaccessible to customers. Pushing deployment frequency can also disincentivize working on larger, more difficult tasks.
    • Lead Time for Changes - Again, all else being equal, working in smaller batches is good and reduces risk. The problem is that lead time for changes is your per-task flow efficiency multiplied by your task size. What matters most is the flow efficiency part, and focusing on overall lead time again incentivizes people to avoid tasks that are inherently larger and more difficult.
    • Mean Time to Recover - Like deployment frequency, this metric can be helpful to look at if it is really bad and systems are often down for days, but smaller variations when you are in a decent state are less meaningful. A big reason is that outages have different costs depending on their severity. Recovering from a minor loss of functionality simply matters much less than recovering from an active security breach where customer data is flying out the window. What you may want to look at instead is total outage cost, particularly if you offer SLAs to your customers (see the sketch after this list).
    • Change Failure Rate - This one is also nice in concept and related to the revert rate. The difference is that the revert rate can be computed based on the amount of effort that went into the code being reverted to estimate the impact on productivity. Just looking at the count of failed changes won’t tell you how important they were. Similarly to the previous metric, if what you care about is customer impact, then severity of the failure is also very important and you should instead be looking at total outage cost.
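
As a sketch of the total outage cost idea, you could weight each incident’s time to recover by a severity-based cost rate; the severity tiers, dollar rates, and incident data below are all made up and would come from your own SLAs and revenue impact estimates.

```python
# Hypothetical cost per hour of outage by severity tier
cost_per_hour = {"sev1": 10_000, "sev2": 2_000, "sev3": 200}

# Made-up incidents: (severity, hours until recovery)
incidents = [("sev1", 0.5), ("sev2", 3.0), ("sev3", 8.0), ("sev2", 1.0)]

# Total cost weights each outage by its severity, unlike a raw
# mean time to recover, which treats all incidents the same
total_cost = sum(cost_per_hour[sev] * hours for sev, hours in incidents)
print(f"Total outage cost this quarter: ${total_cost:,.0f}")  # $14,600
```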

The Bottom Line

This article covered the goals engineering teams should keep in mind when communicating with their CEO or board, then walked through several types of metrics that can help achieve those goals.

Being asked to present engineering metrics in a board meeting is a great opportunity for engineering leaders – not a threat – because it gives them a platform for helping the board understand the challenges they face and make smarter decisions about engineering investments.

Compiling all of these metrics can be a lot of work, but minware has a lot of them out of the box. If you’d like to learn more, then let’s talk – we are eager to help!