The Frugal Way To Deliver Quality Gates in Production

User experience drives the bottom line of digital business. A single failure, latency spike, or unhandled exception disrupting the user journey means a lost opportunity. A successful user experience is therefore a priority.

Testing in production, the pattern known as shift-right, can be implemented in a variety of ways. We can select specific observability, testing, and alerting solutions to deploy in production. While this does part of the job, it creates another context that is hard for the team to reconcile.

In this article, we propose to rely on your test automation assets to build quality gates in production. We will use Cerberus Testing to demonstrate the added value of a shared, end-to-end collaboration.

In a previous article, we covered how to shift right with a continuous testing framework. Here, we detail the concrete steps to take to implement your quality gates in production:

  • Clarify your assets, assumptions and objectives
  • Test your key value and growth assumptions
  • Schedule a test automation campaign in production
  • Use analytics to improve the experience and delivery
  • Foster a context for continuous experience improvement

Let’s start by clarifying the key elements of your initiative.

Clarify your assets, assumptions and objectives

We first need to understand which digital assets are in scope, starting by identifying the software under test. From there, we can identify the related elements such as the requirements repository, the automated tests, and the production measurements.

These elements give you a good picture of the user experience to address. You should know the various personas involved, their customer journeys, and which devices they mostly use. The objective of your quality gates is to ensure these various contexts work properly in production.

Figure 1: An example of traffic dilution detailed per user journey, Medium.

That leads us to our value and growth assumptions, similar to the Lean Startup methodology. Your value hypothesis represents the expected value you can get from improving the identified user experience. Your growth hypothesis represents your capacity to scale the execution of your quality gates across a variety of journeys and devices, at a frequent execution rate.

Your next step is to quickly validate or invalidate these hypotheses while keeping your commitment low.

Test your key value and growth assumptions

You don’t need fancy processes or tools to test hypotheses. Be pragmatic and look for the most efficient ways. You can achieve that by asking the right people good questions, and by performing minimal testing of your tooling.

Your initial analysis suggested that the identified user journeys are worth measuring and improving; that is your value hypothesis. Do your homework first and then involve the relevant stakeholders.

Start by looking at the application analytics dashboard, verifying that these user journeys are actual paths taken by customers at a significant ratio. Then, verify this assumption by asking the product owner, manager, and other user-oriented team members: “Do you care about improving these user journeys now and in the coming months?”.

Figure 2: An example of an ecommerce customer journey map, Venngage.
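
As a quick illustration of that verification, here is a minimal sketch assuming your analytics tool can export one row per session with the journey it followed; the file and column names are assumptions to adapt to your own export.

```python
import pandas as pd

# Hypothetical export from your analytics tool: one row per session,
# with the user journey it followed (file and column names are assumptions).
sessions = pd.read_csv("analytics_sessions_export.csv")  # columns: session_id, journey

# Share of traffic per journey, as a percentage of all sessions.
journey_share = sessions["journey"].value_counts(normalize=True).mul(100).round(1)
print(journey_share)

# Keep only the journeys carrying a significant share of traffic,
# e.g. more than 5% of sessions (a threshold to adjust with your product team).
significant = journey_share[journey_share > 5]
print("Journeys worth gating in production:", list(significant.index))
```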

You can then move on to verifying your implementation assumptions. You need to execute your tests on a variety of devices on a regular schedule in production, and access the reporting. Check, and ask your tooling team, the following questions: “What are the available browsers and mobile devices? Can I run 50 to 100 tests every 5 minutes? Is there a native reporting and notification system?”.
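
As a quick sanity check on the execution-rate question, here is a back-of-the-envelope sketch; the average test duration is an assumption to replace with your own measurements.

```python
import math

# Rough capacity check for the growth hypothesis: can the tooling run
# 50 to 100 tests every 5 minutes? (the duration below is an assumption)
tests_per_window = 100          # upper bound of the campaign scope
window_seconds = 5 * 60         # execution window of 5 minutes
avg_test_duration_seconds = 90  # assumed average end-to-end test duration

# Number of robots/browsers that must run in parallel to fit the window.
required_parallel_robots = math.ceil(
    tests_per_window * avg_test_duration_seconds / window_seconds
)
print(f"Parallel robots needed: {required_parallel_robots}")  # -> 30
```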

If you got positive answers, you can continue. In our case with Cerberus Testing, the answers are positive, letting you proceed.

Schedule a test automation campaign in production

The definition of a test automation campaign requires various elements. First, we need to configure the scope of tests to execute. Then, we define its execution parameters, such as frequency and traceability. We end by configuring the quality gate ratios and associated notifications.

The scope of tests to execute comes from your initial analysis. You can configure a fixed list or a dynamic one. We recommend the dynamic approach, which offers more decoupling and scalability. Instead of configuring a fixed list of tests, we configure the criteria that dynamically select the matching ones.

Figure 3: Configure a campaign with a dynamic filter on the tag “monitoring”.
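
To make the decoupling concrete, here is a minimal sketch of what a tag-based dynamic selection does conceptually; the test names and tags are invented for the example, and in Cerberus Testing the filter itself is configured in the campaign, as in Figure 3.

```python
# Conceptual illustration of a dynamic scope: the campaign selects tests by
# criteria (here a "monitoring" tag) instead of a hard-coded list, so any new
# test carrying the tag is picked up automatically at the next run.
test_repository = [
    {"name": "Login journey - Chrome", "tags": ["monitoring", "smoke"]},
    {"name": "Checkout journey - iPhone", "tags": ["monitoring"]},
    {"name": "Back-office export", "tags": ["regression"]},
]

campaign_scope = [t for t in test_repository if "monitoring" in t["tags"]]
print([t["name"] for t in campaign_scope])
# -> ['Login journey - Chrome', 'Checkout journey - iPhone']
```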

Your next step is to define the execution parameters. The first one is the frequency of execution. As you are implementing quality gates on the user experience in production, the tests should run quite regularly. Even if you are not performing technical monitoring, the data you collect will be valuable over time.

Figure 4: Configure various campaign schedules with a flexible crontab expression.
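
For reference, here is a small sketch of a few standard crontab expressions and how they read; which frequency to pick depends on how critical the gated journeys are.

```python
# A few standard crontab expressions (minute, hour, day of month, month,
# day of week) that could drive the campaign schedule.
schedules = {
    "*/5 * * * *": "every 5 minutes, for the most critical journeys",
    "0 * * * *": "at the top of every hour",
    "0 6 * * 1-5": "at 06:00 on weekdays, e.g. before business hours",
}
for expression, meaning in schedules.items():
    print(f"{expression:<15} -> {meaning}")
```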

You can then define additional execution parameters such as systematic screenshots on error, increased robot traceability, or even automated reruns. Reruns are useful to avoid false positives but should be used wisely and not by default; otherwise, you end up hiding issues and losing the exact performance measurement.
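
To illustrate why reruns should stay selective, here is a minimal sketch of a rerun policy; the failure labels are invented for the example and are not Cerberus Testing parameters.

```python
# Illustrative rerun policy (not Cerberus configuration): only retry failures
# that look like infrastructure noise, cap the retries, and keep the original
# result so the performance measurement stays honest.
INFRA_FAILURES = {"ROBOT_UNAVAILABLE", "NETWORK_TIMEOUT"}

def should_rerun(failure_reason: str, attempts: int, max_attempts: int = 2) -> bool:
    """Retry only likely false positives, never functional failures."""
    return failure_reason in INFRA_FAILURES and attempts < max_attempts

# A functional failure is never retried, so real issues are not hidden:
print(should_rerun("ASSERTION_FAILED", attempts=1))  # False
print(should_rerun("NETWORK_TIMEOUT", attempts=1))   # True
```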

Figure 5: Configure the notifications on defined triggers and channels.

The final step is to define the quality gate ratios and notifications. The ratio lets you define the number of tests, per priority, required to consider the campaign OK or KO. This is helpful when your test suite grows larger; to start, we recommend leaving it as is. Lastly, configure your Slack and email notifications to receive campaign updates as soon as possible.
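
To make the ratio idea concrete, here is a minimal sketch under assumed thresholds; the fields, priorities, and defaults below are illustrations, not Cerberus Testing's actual configuration.

```python
# Illustrative quality gate ratio: the campaign is OK only if the pass rate
# per priority stays above its threshold (thresholds are assumptions).
thresholds = {1: 100, 2: 95}  # priority -> minimum % of passing tests

results = [
    {"priority": 1, "status": "OK"},
    {"priority": 1, "status": "OK"},
    {"priority": 2, "status": "OK"},
    {"priority": 2, "status": "KO"},
]

def campaign_status(results, thresholds):
    for priority, minimum in thresholds.items():
        scoped = [r for r in results if r["priority"] == priority]
        if not scoped:
            continue
        pass_rate = 100 * sum(r["status"] == "OK" for r in scoped) / len(scoped)
        if pass_rate < minimum:
            return "KO"
    return "OK"

print(campaign_status(results, thresholds))  # -> KO (priority 2 is at 50%)
```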

You are now ready to use your work to improve the user experience.

Use analytics to improve the experience and delivery

The value of analytics is to bring insights you cannot access any other way. Most importantly, bottom-line results require making decisions and taking action on these insights. You can now leverage your quality gates in production.

The native analytics dashboard lets you identify improvements. Is the performance the same on all smartphone devices? Is there a specific country with abnormal performance? Is there a time of day when specific journeys do not work as expected? Are there unstable user experience paths, and under which conditions? You can also drill down within the CI/CD pipeline to compare data.

Figure 6: The native analytics dashboard available in Cerberus Testing for quality gates, Sourceforge.
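
If you want to complement the dashboard with your own slicing, here is a minimal sketch assuming you can export the campaign executions as a CSV; the column names (device, country, duration_s, status) are assumptions.

```python
import pandas as pd

# Hypothetical export of campaign executions: one row per test execution
# with its device, country, duration, and status (column names are assumptions).
executions = pd.read_csv("campaign_executions.csv")

# Median duration and failure rate per device: is the experience consistent
# across smartphones?
by_device = executions.groupby("device").agg(
    median_duration_s=("duration_s", "median"),
    failure_rate=("status", lambda s: (s != "OK").mean()),
)
print(by_device.sort_values("failure_rate", ascending=False))

# Same drill-down per country, to spot an abnormal geography.
by_country = executions.groupby("country")["duration_s"].median()
print(by_country.sort_values(ascending=False).head())
```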

These questions are clues to use as a detective would. You need to analyze, interact with knowledgeable people, and ask more questions. You can rely on 5-why analysis and other problem-solving frameworks to accelerate your journey. Don’t be afraid to involve transversal stakeholders for specific topics; some problems have multiple causes by nature and require different expertise to be solved.

Step by step, this work built on reusable assets can truly drive continuous improvement.

Foster a context for continuous experience improvement

Your quality gates in production are essential to keep the organization focused on the user experience. We covered the various perspectives required to achieve them: UX, product management, engineering, and operations.

Driving continuous improvement is a necessity where the digital experience is the driver of growth. At the same time, the need for constant evolution and the complexity of delivery represent a real challenge.

We shared an approach that lets you capitalize on your existing assets while minimizing the switching cost and mental overload for your teams. The result is improved collaboration with minimal overhead from new processes or tooling.

Time flies, act now. Accelerate today with Cerberus Testing for Free.
