How Scorecards work in OpsLevel: a truly flexible model
We’ve always been opinionated about Service Maturity—but one of our strongest-held beliefs is that there’s no one-size-fits-all approach. Some organizations want to set global service standards to monitor software health org-wide. Others want total team autonomy to establish a unique culture and related service expectations. And most fall somewhere in between. That’s exactly why we’ve been thoughtful in how we’ve built out our Service Maturity feature suite to include Scorecards and the Rubric.
We want to support organizations across the entire spectrum by providing flexibility where you want it, and consistency where you need it.
In OpsLevel, you can start building a Service Maturity program on day 0, and trust that our product offering will empower it to scale however you need long-term. Here’s why.
Flexibility at the foundation
Scorecards are important, but they are one part of the larger Service Maturity picture in OpsLevel. We offer a global Rubric that makes tracking global standards easy and individual Scorecards that allow you to track standards scoped by team, group, category, or focus area.
Both features are built on Checks (the specific tests you’re running against your services at any given moment) and Filters (the conditions that determine which services a Check applies to).
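To make the Checks-and-Filters relationship concrete, here is a minimal sketch in Python. This is an illustration of the concept only, not OpsLevel’s actual data model or API—the `Service`, `Check`, and `Filter` classes and the tag names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical models for illustration -- not OpsLevel's real schema.

@dataclass
class Service:
    name: str
    tags: dict = field(default_factory=dict)

@dataclass
class Check:
    """A specific test run against a service (e.g. 'has an on-call rotation')."""
    name: str
    passes: Callable[[Service], bool]

@dataclass
class Filter:
    """A condition that decides which services a Check applies to."""
    matches: Callable[[Service], bool]

def evaluate(check: Check, flt: Filter, services: list[Service]) -> dict[str, bool]:
    # Run the check only against the services the filter selects.
    return {s.name: check.passes(s) for s in services if flt.matches(s)}

services = [
    Service("payments", {"tier": "1", "oncall": "yes"}),
    Service("internal-wiki", {"tier": "4"}),
]

# A filter scoping the check to tier-1 services only (hypothetical tag scheme):
tier1_only = Filter(matches=lambda s: s.tags.get("tier") == "1")
has_oncall = Check("has on-call rotation", passes=lambda s: s.tags.get("oncall") == "yes")

results = evaluate(has_oncall, tier1_only, services)
# "internal-wiki" is never evaluated, because the filter excludes it.
```

The same pairing powers both the global Rubric and individual Scorecards: the check defines *what* good looks like, and the filter defines *where* that definition applies.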
While most IDPs offer a scorecard feature to set and measure progress at the team or group level, this combination ensures we can meet any engineering organization where it is. Individual teams have the freedom to get started on their own, while cross-functional guilds (like DevOps or Platform) get an easy way to roll out org-wide best practices if and when they’re ready.
Consistency built on a shared taxonomy
We believe that to support the continuous growth of your organization’s Service Maturity program, all Scorecards must be built on a consistent structure. Service Maturity levels need to be standardized across Scorecards to provide meaningful context on where a service stands. After all, having one Scorecard set up as bronze/silver/gold and another as A/B/C quickly muddies the waters.
In OpsLevel, the taxonomy for service maturity levels is set at the account level, to ensure each new Scorecard is measuring against the same general definition of “good”, “better”, and “best”—even if all of the standards themselves are different.
Giving engineering teams a shared language and taxonomy ensures that:
- Service owners can easily toggle between Scorecards without losing sight of what each level indicates and what it means for their services
- Leaders and cross-functional stakeholders can easily compare service health across any layer of the organization
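The shared taxonomy above can be sketched as follows. This is a hypothetical illustration of an account-level level scale, not OpsLevel’s actual configuration format—the level names reuse the “good”, “better”, “best” framing from this post, and the thresholds are made up for the example.

```python
# Hypothetical account-level taxonomy: defined once, reused by every Scorecard.
ACCOUNT_LEVELS = [
    ("Good", 0.0),    # baseline level
    ("Better", 0.5),  # at least half of checks passing
    ("Best", 0.9),    # nearly all checks passing
]

def level_for(pass_rate: float) -> str:
    """Map a Scorecard's check pass rate onto the shared account-level scale."""
    name = ACCOUNT_LEVELS[0][0]
    for level, threshold in ACCOUNT_LEVELS:
        if pass_rate >= threshold:
            name = level
    return name

# Two Scorecards with completely different checks still report on the same scale:
security_level = level_for(6 / 10)      # a security Scorecard: 6 of 10 checks pass
reliability_level = level_for(19 / 20)  # a reliability Scorecard: 19 of 20 pass
```

Because both Scorecards resolve to the same level names, a leader comparing `security_level` and `reliability_level` gets an apples-to-apples reading even though the underlying standards differ.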
When to use Scorecards
We’ve talked at length about why Scorecards exist in conjunction with the Rubric. But when, specifically, do they come in handy? While the Rubric is great for global standards across your org, here are some examples where Scorecards are useful:
- A new team or group wants to set standards for themselves, without impacting the global Rubric
- A new guild or practice area wants to establish a set of checks that’s relevant for multiple teams but isn’t globally applicable yet
- Different divisions have different sets of standards and want to scope their checks differently as a result
- A team wants to A/B test check structure across different teams, before adding something new to the Rubric
- An individual product development team wants to track OKRs or other KPIs within OpsLevel
Ready to take your team’s Service Maturity to the next level… literally? Read our guide on when and why to use Scorecards, or go straight to our technical documentation.
Not an OpsLevel customer just yet? Try things out with a free trial account.