Why Code Coverage Matters For a Quality Culture (Part 1)
How taking a measurement and making it visible can lead to more mature development and quality practices.
Federico Toledo reached out to me and asked for my opinion about what test managers are doing to show test coverage. This is part one of a two-part blog. This part focuses on in-house quality teams and development; the second will focus on consulting/outsourced quality metrics.
Dashboards, however and wherever they are created, are a great conversation piece and a motivator for growing a more mature team. But those dashboards start with data generated from something, in this case test automation.
What if that automation isn’t in place? Manually executed test cases work too, and there are plenty of tools out there that can neatly track them.
Finding what data is available and making it visible, by any means, is where I generally start. We can’t have a conversation about risk without it.
I’ve worked with teams at all levels of development practice, and what I initially look for is how much ownership a team has of the product/code. There are a few questions I ask to gauge what I can do next:
Do they care about the code?
Do they want to improve it?
Do they have a mandate/flexibility to improve?
Are they supported by management to make improvements to their practices and processes?
If a “yes” is missing for any of those questions, I start by trying to change the mindset around that question to get to a “yes”. Because no amount of testing will alter anything other than how loudly someone is yelling about the testing. Testing is nearly useless without a team invested in making it work for them. This is also the biggest risk to the code: lack of care/ownership.
With the right mindset in place, it’s just a matter of finding a starting point and asking what kind of metrics we want to track, whether they are basic test execution metrics, DORA metrics, or - my least favorite - escaped bugs.
Personally, I like to start with unit tests and code coverage metrics. Once these are visible, we can start asking more questions:
Is the code testable?
Is the team responsive to unit test failures?
Are we gating code merges when unit tests fail?
Are we gating code merges when coverage drops below a mandated threshold? (One way to wire up that gate is sketched below.)
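To make that last gate concrete, here is a minimal sketch of how a coverage gate could run in CI before a merge is allowed, assuming the test run already produced a Cobertura-style coverage.xml (coverage.py, JaCoCo, and many other tools can emit one). The report path and the 80% floor are illustrative assumptions, not recommendations.

```python
#!/usr/bin/env python3
"""Fail a merge when overall line coverage drops below a mandated threshold.

Minimal sketch of a PR gate: parse a Cobertura-style coverage.xml and exit
non-zero when line coverage is under the floor. The report path and the 80%
floor are illustrative assumptions.
"""
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 80.0              # mandated minimum line coverage, in percent (example value)
REPORT_PATH = "coverage.xml"  # hypothetical path to the Cobertura-style report


def main() -> int:
    root = ET.parse(REPORT_PATH).getroot()
    # Cobertura reports expose overall line coverage as a 0..1 "line-rate" attribute.
    percent = float(root.attrib["line-rate"]) * 100.0

    if percent < THRESHOLD:
        print(f"FAIL: line coverage {percent:.1f}% is below the {THRESHOLD:.1f}% gate")
        return 1  # non-zero exit fails the CI job and blocks the merge

    print(f"OK: line coverage {percent:.1f}% meets the {THRESHOLD:.1f}% gate")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In a pipeline, something like `coverage run -m pytest && coverage xml && python check_coverage.py` (where check_coverage.py is a hypothetical name for the script above), with the check marked as required on the PR, turns the question into an automated answer.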
One of the comments I get about this approach is that developers can “game” the coverage metric. I always respond with: yeah, they can. But if they push bad code because they gamed the system, the system will tell on them one way or another. And no dev I’ve ever worked with has purposefully “gamed” the coverage metric just so they could push code. They know that a good signal is more important than just shoving code into production. Because their name is still on that PR, and if it fails in production, the whole mountain of crap comes down on them or their team. So most devs want to do the right thing.
However, I understand that trust is hard. People are naturally wired that way, or have real-life proof of times when they couldn’t actually trust something or someone. If there are trust issues - around devs, testers, management - then no amount of testing will fix that either. Which is also a huge risk, whether people want to admit it or not.
Showing any metrics starts the conversation around risk. Owning the product/code, as a team, creates trust. Accountability creates teamwork. Visibility creates improvement.
Showing improvement, or nuance (e.g., raw defect numbers vs. time to resolve issues), via the metrics can show that some basic risks are mitigated. However, metrics don’t remove the need to communicate. In fact, the more complex and sophisticated your metrics are, the more conversations you need to have so everyone understands what they mean and where the gaps exist.
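As a toy illustration of that nuance, with made-up numbers: the raw defect count can stay flat while the time-to-resolve figure tells a completely different story, which is exactly the kind of gap a conversation has to fill.

```python
from datetime import date

# Hypothetical defect records as (opened, resolved) dates - not real data.
defects = [
    (date(2024, 3, 1), date(2024, 3, 3)),
    (date(2024, 3, 5), date(2024, 3, 6)),
    (date(2024, 3, 10), date(2024, 4, 18)),  # one long-lived defect skews the picture
]

raw_count = len(defects)
days_to_resolve = [(resolved - opened).days for opened, resolved in defects]
mean_days = sum(days_to_resolve) / len(days_to_resolve)

print(f"raw defect count: {raw_count}")          # what usually gets reported
print(f"mean days to resolve: {mean_days:.1f}")  # what is usually worth discussing
```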
And like any test automation, if you’re always getting a “green” signal, is it worth the ROI to keep tracking that particular metric? How can you get a better “signal,” faster? That’s when the real magic happens!
Circling back to Federico’s post, I don’t spend a whole lot of time building out reports or dashboards. The ROI isn’t worth it. Once the team improves, whatever you were measuring becomes less important and the focus shifts. Checking that things don’t slide backwards should be automated in some way (like PR gates), while effort shifts to another prioritized area of risk to make more improvements.