5 New Metrics For Quality at Speed Test Automation
We are all tired of egos.
“I have greatly improved the number of tests, coverage, flakiness ratio”.
Great. But who is using them? What value did they create? At what speed?
Test automation is about accelerating the delivery of valuable software, giving the team confidence to deliver their changes.
Valuable metrics must therefore cascade from this goal, far from the traditional KPIs that argue over formulas for computing an ROI.
Let’s see the Quality Engineering metrics you can use for test automation.
Follow Cerberus Testing for more open-source test automation.
Satisfaction and engagement of the product team
It is easy to claim to be customer-centric, but what does that mean for test automation?
Valuable things are used by people on a regular basis and paid for.
If you still have your job, that is already a good clue, but it is not sufficient.
The satisfaction and engagement of your product team with your automated tests is a good indicator of their usefulness.
They can be well implemented and non-technical, but people must notice their value at some point, even if it's only one minute to give a go/no-go.
Use data about your automated tests, such as usage analytics, reactions to notifications, and word of mouth.
You can even run surveys regularly and embed them within test campaign reports.
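For illustration, here is a minimal Python sketch of how such signals could be combined into a single engagement indicator; the signal names, survey scale, and weighting are assumptions for the example, not a Cerberus Testing API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EngagementSignals:
    """Hypothetical signals about how the product team uses the automated tests."""
    report_views: int            # campaign reports opened over the period
    notification_reactions: int  # alerts acknowledged or discussed
    survey_scores: list          # 1-5 satisfaction answers from regular surveys

def engagement_indicator(signals: EngagementSignals, team_size: int) -> float:
    """Combine usage and satisfaction into a rough 0-1 indicator (illustrative weighting)."""
    usage = min(1.0, (signals.report_views + signals.notification_reactions) / (team_size * 10))
    satisfaction = mean(signals.survey_scores) / 5 if signals.survey_scores else 0.0
    return round(0.5 * usage + 0.5 * satisfaction, 2)

if __name__ == "__main__":
    week = EngagementSignals(report_views=42, notification_reactions=9, survey_scores=[4, 5, 3, 4])
    print(engagement_indicator(week, team_size=8))  # 0.72 with these illustrative numbers
```

Tracked week after week, a simple indicator like this is enough to see whether the product team keeps engaging with the tests or quietly stops relying on them.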
Minimal waiting time per delivery cycle
Cross-functional teams have the objective to accelerate the delivery of valuable software with stability.
Their time is one of the most valuable assets when they have to perform fast iterations, ideally within minutes.
They will not wait for a test campaign lasting more than a few minutes, let alone hours.
Measuring the waiting time induced by your automated tests is critical, and it must be contained over time, especially as you add tests.
Keep a focus on this metric across your different campaigns to keep your efforts useful:
- Build dashboards for the waiting time due to your tests;
- Invest in automated test parallelization for fast test execution;
- Leverage the native alerting functionalities of your test automation platform.
This metric can be more important than the tests themselves: past a certain threshold, your tests lose their value and become ignored.
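As a rough sketch, assuming you can collect the start and end timestamps of each campaign execution that blocks a pipeline run, the waiting time and a threshold check could look like this (the field names and threshold are illustrative, not a Cerberus Testing data model):

```python
from datetime import datetime, timedelta

# Hypothetical campaign executions blocking a pipeline run: (start, end) timestamps.
campaign_runs = [
    (datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 2, 10, 6)),
    (datetime(2023, 5, 2, 14, 30), datetime(2023, 5, 2, 14, 41)),
]

MAX_WAIT = timedelta(minutes=10)  # illustrative threshold beyond which the team stops waiting

def waiting_times(runs):
    """Duration the team waits on each campaign execution."""
    return [end - start for start, end in runs]

for wait in waiting_times(campaign_runs):
    status = "OK" if wait <= MAX_WAIT else "TOO SLOW - investigate parallelization"
    print(f"waited {wait.total_seconds() / 60:.0f} min -> {status}")
```

The exact threshold matters less than watching the trend: if the waiting time keeps growing as tests are added, parallelization or campaign pruning is due.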
Lead-time to implement minimal quality gates
Software teams that accelerate share two common traits: they deliver changes faster and in new components.
Your reactivity is therefore key to keep helping them iterate with Quality at Speed.
If you are not there, they will take the best alternative available at that time: manual tests, unit tests, or even nothing, at the risk of discovering major issues later on.
The lead time to implement minimal quality gates is an important metric, showing your capacity to adapt in an ever-changing environment.
If setting up a minimal quality gate is complex, or forces you to reconsider too many things, there is a problem, and it is not necessarily duplication.
Most of the time, maintainability issues come from a lack of consideration during the design phases. You will have to invest more in decoupling and test design.
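To make "minimal quality gate" concrete, here is an illustrative sketch of a gate a pipeline could call: it reads a campaign summary and fails the build when the pass rate drops below a threshold. The summary format and the threshold value are assumptions for the example, not a Cerberus Testing interface.

```python
import sys

def quality_gate(results: dict, min_pass_rate: float = 0.95) -> bool:
    """Return True when the campaign result satisfies the minimal gate."""
    executed = results["passed"] + results["failed"]
    if executed == 0:
        return False  # an empty campaign should not silently pass
    return results["passed"] / executed >= min_pass_rate

if __name__ == "__main__":
    # Hypothetical campaign summary; in practice it would come from your test platform's report.
    campaign = {"passed": 48, "failed": 1}
    if not quality_gate(campaign):
        print("Quality gate failed: pass rate below threshold")
        sys.exit(1)
    print("Quality gate passed")
```

The lead time worth measuring is the elapsed time between a new component needing coverage and a gate like this running in its pipeline.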
MTTA of a bug from user-story definition
The Mean Time To Acknowledge (MTTA) is the average time it takes from when an alert is triggered to when work begins on the issue.
In test automation, the acknowledgment feedback loop starts from the user-story definition, hence the need for shift-left.
Users who find bugs in production do not come back.
The risk-reduction goal of test automation must therefore be measured with this metric, looking for issues in your process whenever critical bugs are discovered.
The goal is to reach an acceptable level, not to maximize it; remember, it's about risk reduction, not risk removal.
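A minimal sketch of how MTTA could be computed, assuming each bug record carries a timestamp for when it was raised and when work started on it (the field names are hypothetical):

```python
from datetime import datetime
from statistics import mean

# Hypothetical bug records: when the issue was raised and when work started on it.
bugs = [
    {"raised_at": datetime(2023, 5, 1, 9, 0), "acknowledged_at": datetime(2023, 5, 1, 9, 20)},
    {"raised_at": datetime(2023, 5, 3, 15, 0), "acknowledged_at": datetime(2023, 5, 3, 16, 30)},
]

def mtta_minutes(records) -> float:
    """Mean Time To Acknowledge, in minutes, over the given records."""
    delays = [(b["acknowledged_at"] - b["raised_at"]).total_seconds() / 60 for b in records]
    return mean(delays)

print(f"MTTA: {mtta_minutes(bugs):.0f} min")  # 55 min with these illustrative records
```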
MTTR to fix flaky tests in non-regression campaigns
We cannot trust systems or people that are unstable. It’s the same with automated tests.
Mean Time To Repair (MTTR) is the average time required to fix a failed component or device and return it to production status.
Unstable tests—also known as flaky tests—must be rapidly fixed to keep the team satisfied and iterating with confidence.
You can use dashboards and alerting capabilities to remove flaky tests from your campaigns, changing their status or their presence in the test suite.
A series of features are available within Cerberus Testing to reduce flakiness, such as the test data library, self-healing, or rerun.
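In the same spirit, here is a minimal sketch of MTTR over flaky tests, assuming you log when a test was flagged as unstable and when it was repaired (the field names are hypothetical):

```python
from datetime import datetime
from statistics import mean

# Hypothetical flaky-test incidents: flagged as unstable, then repaired and re-enabled.
flaky_incidents = [
    {"test": "checkout_e2e", "flagged_at": datetime(2023, 5, 2, 8, 0), "fixed_at": datetime(2023, 5, 2, 17, 0)},
    {"test": "login_smoke", "flagged_at": datetime(2023, 5, 4, 10, 0), "fixed_at": datetime(2023, 5, 5, 10, 0)},
]

def mttr_hours(incidents) -> float:
    """Mean Time To Repair flaky tests, in hours."""
    durations = [(i["fixed_at"] - i["flagged_at"]).total_seconds() / 3600 for i in incidents]
    return mean(durations)

print(f"MTTR for flaky tests: {mttr_hours(flaky_incidents):.1f} h")  # 16.5 h here
```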
Test automation metrics to accelerate software delivery
The objective of test automation is to give software teams the confidence to deliver changes faster as they accelerate their rate of change.
Metrics that focus on their success and on what they find valuable are therefore what matter.
It's not about optimizing a QA silo for its own sake. Accelerating the delivery of valuable software is the mission of test automation for QA engineers.
There’s no time to lose. Start measuring and leveraging a ready-to-use platform to accelerate your test automation with Cerberus Testing.
Get your free plan here.