Shaping Engineering Culture With a Monthly Report Card
Publish a monthly report card to spark better conversations around engineering performance, and support a culture of ownership and accountability.
Measuring engineering performance is hard. There’s no single metric that adequately captures what engineers do in modern software organizations. At the same time, engineers are notoriously adept at optimizing whatever metric you use to measure their work. Tracking the wrong metrics can lead to overinvestment in specific short-term outcomes, at the expense of building a healthy and sustainable product engineering culture.
In my most recent role, we muddled along for years with cascading goals that tied engineering performance to achieving top-level company objectives. Engineers had limited ability to influence these goals, and consequently didn’t show much interest in the goal setting process, or in measuring performance over time. This undermined our efforts to build a culture of ownership and accountability within the product engineering team.
This year, I got together with my product management counterpart Henry Vasquez to come up with a better framework for tracking our performance. What we landed on was a monthly report card with a broad spectrum of engineering and product metrics. We published the report card to our entire product engineering team, and made it publicly available to anyone in the business.
The report card ended up being more useful and relatable than our previous attempts at measuring performance. It created clarity for our cross-functional teams on what we considered important, and led the teams to independently tackle problem areas and advance our software delivery process. At the same time, the report card highlighted our successes and achievements, and gave us an effective tool to communicate with leaders and executives on our own terms.
Here, I’ll share how to create a monthly report card to spark a healthy conversation around product engineering performance, and build a culture of ownership and accountability within the team.
Choosing Your Metrics
The magic of a report card comes from including a broad spectrum of metrics. It’s difficult to define a single metric that aligns well with success for the organization as a whole. But by combining a range of product and engineering metrics, you can describe the kind of engineering environment you’re looking to establish.
The metrics you choose define how the rest of the organization views your performance, and send a strong message to the team about which behaviors and outcomes you value. If you want to focus the team on creating more value for customers, you need metrics that measure customer outcomes. If you care deeply about the wellbeing of your team, you should measure their happiness and their daily engineering experience. If you want to improve your operational maturity, you may want to measure your progress on infrastructure automation, public cloud spend, or service reliability.
Ideally, some metrics will be thematically linked to your high-level company goals, and formulated in a way that’s relevant to product engineering teams. This creates clarity for engineers, while eliminating potential conflicts between engineering achievement and company goal achievement. At the same time, the report card should include metrics that you plan to track in perpetuity — such as deployment frequency and the rate of customer-reported defects. These evergreen metrics define what’s considered healthy in your daily work.
How many metrics you pick communicates whether you’re focused on achieving a small number of specific outcomes, or trying to build a well-rounded product engineering environment with fewer blind spots. If your startup is running out of money, it’s fine to focus on growing revenue, or reducing operational costs. But this sort of hyper-specialization will have unintended consequences over time. As your organization grows and matures, you’ll want to expand the scope of what you track. In our experience, 20–30 metrics from a wide range of areas give you a good overview of your team’s health and performance.
With so many metrics in play, team members can easily succumb to metric fatigue. And tracking a large number of slow-moving metrics makes it difficult to get a quick read on whether you’re making progress towards your goals. So I suggest highlighting a subset of metrics that you’re actively trying to improve. You can do this by setting targets for just that subset, which shows which metrics you expect to move, and which ones you’re simply keeping an eye on.
In our case, we assembled about 20 engineering metrics, and 10 product-centric metrics. These included:
DORA metrics for application delivery: deployment frequency, lead time for changes, change failure rate, and time to restore service
Flow metrics, like cycle time, pull request review time, and the average number of tasks in progress per engineer
Operational metrics, like API success rate, 95th percentile response time, and AWS spend
Incoming defects and product questions from customers
Customer and user engagement metrics for key product areas, and tracking towards expected product outcomes
Although not part of our report card, we also tracked employee and customer NPS scores on a different cadence
Out of these, we were moving about five metrics aggressively forward. For the rest, we kept an eye on overall trends, and made opportunistic improvements throughout the year.
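To make the mechanics concrete, here’s a minimal sketch of such a report card in Python. All metric names, values, and targets here are invented for illustration; the key idea is that only the handful of metrics you’re actively moving carry targets, while the rest are simply watched:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Metric:
    name: str
    value: float
    unit: str
    target: Optional[float] = None  # set only for metrics you're actively moving
    higher_is_better: bool = True

    @property
    def status(self) -> str:
        """Classify the metric for color coding on the report card."""
        if self.target is None:
            return "watching"  # tracked for trends, no active push
        on_target = (
            self.value >= self.target
            if self.higher_is_better
            else self.value <= self.target
        )
        return "on track" if on_target else "needs attention"


# Hypothetical sample values, loosely mirroring the metrics listed above
report_card = [
    Metric("Deployment frequency", 4.2, "deploys/day", target=5.0),
    Metric("Change failure rate", 0.08, "ratio", target=0.05, higher_is_better=False),
    Metric("API success rate", 0.997, "ratio"),  # evergreen: tracked, not targeted
]

for m in report_card:
    print(f"{m.name}: {m.value} {m.unit} ({m.status})")
```

The `higher_is_better` flag matters because the report card mixes metrics you want to push up (deployment frequency) with ones you want to push down (change failure rate, defects).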
If you’re looking for more guidance on how to pick engineering performance metrics, Nicole Forsgren et al. have you covered with The SPACE of Developer Productivity. The authors argue convincingly for using a selection of metrics from five dimensions:
Satisfaction and well-being
Performance
Activity
Communication and collaboration
Efficiency and flow
Creating Visibility
Now that you’ve assembled your metrics, you need to decide how to present them, to whom, and how often.
You can host the report card anywhere, so long as the data is easy to digest: a wiki page, a dashboard, or a shared document all work. If you collect internal metrics in a data platform — such as Splunk — you can build a report that automatically pulls the latest information. My team used a simple tabular representation on a wiki page, with color coding to indicate the health of each metric, but graphs or sparklines that show changes over time can be even more helpful. I also suggest using a rolling one-year window for the metrics, so you’re not looking at just a single data point at the start of the year. The trend is usually more interesting than the current value.
What does matter is that you make the data accessible to anyone in the organization. You should share the same report card with your team, and with leaders, executives, and other stakeholders in the wider company. Public visibility creates alignment and clear internal and external accountability. It also gives team members a sense of ownership — both of their goal achievement, and of how their work is represented.
It’s good to create clarity on how your metrics are collected. You could do this by linking each metric in the report card to the underlying data source. This allows people to analyze the data independently, to draw their own conclusions, or to explore new angles on the same information. It also reduces the risk of bias through cherrypicked data — or the perception of bias where none exists.
You may wonder why I’m specifically recommending a monthly cadence for updating the report card. Why not refresh it continuously, or align it with your company-wide goal setting cycle?
We arrived at the monthly frequency by a process of elimination. Although we used quarterly company-level goals, the report card needed to be more frequent: otherwise, by the time you discover you’re off track, it’s too late to do anything about it. Meanwhile, weekly updates exhaust the team, and are often unexciting, because very few metrics that really matter move that fast. For us, a monthly update was the Goldilocks frequency for capturing progress on product adoption and engineering improvement initiatives.
Once you’ve published your monthly update, invest in a burst of publicity. Share the results wherever your team hangs out: in Slack, in email, on the internal wiki, at all-hands meetings, or in reports to the executive leadership. Bring it up in 1:1s and career development conversations as well, and in any other context where you talk about engineering performance. Continuous reinforcement ensures your team and leadership take the report card seriously, and look at the metrics for guidance in their daily work.
Don’t shy away from communicating failures, or metrics that aren’t moving. The intent is to create full transparency, and spark a productive discussion about what to do next. Public failures are a great opportunity to create interest in the process, and bring people along on the journey of fixing the underlying problems.
Putting the Report Card to Work
The monthly report card was well received by my teams. It created a balanced view of our product engineering performance, guided our investments, and helped us represent our work to the wider organization.
One of my favorite benefits of the monthly report card is that it highlights continuous, incremental progress on slow-moving metrics. Engineers may strive to shorten cycle times, increase deployment frequency, or reduce customer-reported defects. But it’s very hard to see the fruits of these labors in real time, which makes it thankless work. With the report card, you can visualize month-over-month or year-over-year improvements on these metrics, and show the impact of incremental changes over time.
By carefully selecting your metrics, you can also use the report card to address specific practical or political challenges. For example:
It’s a great tool for creating alignment between engineering and product management teams, because it creates joint ownership of a set of performance metrics. By picking more or less aggressive targets, you can indicate the relative priority of investment in each area.
If your organizational goals are too abstract for engineers to rally around, the report card can bridge the gap. By anchoring your metrics in the high-level goals, you create concrete and actionable ways for engineers to contribute to the organization’s strategic priorities.
You can use the report card to prioritize work for engineering enablement and platform teams. These teams can use application delivery, flow, and operational metrics to identify problem areas and create tools to improve the engineering experience.
If you’re held accountable externally for specific metrics — such as change failure rate, test coverage, or service level agreements — you can bake these into your report card. This meets the external reporting requirement, while shifting the conversation away from a small number of metrics to focus on a more holistic view of product engineering performance.
This versatility means you can adapt the report card to meet the needs of your team, and to evolve along with your organization.
What kind of environment do you strive to establish for your team, and what are the behaviors you’d like to reinforce? How would you like to be measured by your leadership? What metrics would you like to move forward this year? If you can answer these questions, you can build a report card to help shape the culture of your team — and to take control of the narrative around product engineering performance.