A Framework for Customer Journey Monitoring
In today’s digital landscape, customer journeys are fundamental to the user experience and to driving conversions, making them a natural focus for organizations striving for digital performance.
Achieving a successful user experience doesn’t happen overnight. Instead, it’s a path of continuous improvement, step-by-step advancements, and a persistent test-and-learn approach, with very few shortcuts.
The foundation of any improvement lies in measurement, and customer journeys are no exception: they must be monitored continuously. Our quality initiatives must integrate this end-user experience perspective and make full use of it.
But how exactly can this be achieved? This article shares a step-by-step, repeatable approach to rapidly implement and iterate on your customer journey monitoring. We will walk through a real e-commerce use case, providing actionable guidelines and recommended best practices.
As an introduction, you can read our previous articles to understand where quality meets customer journey monitoring and its 5 powerful hidden benefits.
Establishing the Customer Journey Monitoring Context
The initial step in this process is to identify the “right customer journeys” within your specific business context.
This can be effectively achieved by following the “Question Asker” model of Quality Assurance, as detailed in the article: “6 Quality Questions for Testing Customer Journeys“. Once identified, the focus shifts to building these “customer journeys right”.
For our case study, we will use Damart, an e-commerce retail and fashion website. This allows us to refer to a traditional e-commerce journey for practical examples. We have identified several key customer journeys based on common use cases typically found in functional non-regression campaigns.
Our customer journey monitoring will primarily concentrate on the Conversion stage of the e-commerce journey. We’ve also included the crucial non-functional requirement of consistently verifying login capabilities.
While this check may seem trivial, neglecting to monitor it can lead to a drastic decrease in business performance in the event of a disruption. A robust framework is an excellent way to structure an efficient and repeatable process for journey testing and overall quality.
The Core Process to Monitor Customer Journeys
We will leverage the fast feedback loop cycles supported by Cerberus Testing: Define, Execute, Analyze. The framework areas that support this cycle are the Test Repository, Test Execution, and Test Reporting, respectively. Specific functionalities are leveraged to support effective customer journey monitoring.
First and foremost, we begin by defining and organizing the test case repository. While the structure of your test cases is flexible, we recommend using a folder structure, a step library, and an application definition to establish expandable foundations. The crucial element for pivoting between the test repository and execution is the strategic use of labels, which we will cover in the next section.
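To make these foundations concrete, here is a minimal sketch of how such a repository could be organized, expressed as plain data. The application name, folders, test cases, and steps are illustrative assumptions, not Damart’s actual test assets.

```python
# Hypothetical organization of the test case repository: folders group
# the journeys, a step library holds reusable steps, and labels (covered
# next) decouple test selection from this structure.
test_repository = {
    "application": "Damart-Web",  # assumed application definition
    "folders": {
        "Connection": ["Login_Nominal", "Login_InvalidPassword"],
        "Cart": ["Cart_AddProduct", "Cart_UpdateQuantity"],
        "Checkout": ["Checkout_Guest", "Checkout_RegisteredUser"],
    },
    "step_library": ["OpenHomepage", "AcceptCookies", "SearchProduct"],
}
```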
The Execution area is the next phase we will use. The process here is similar to defining a standard test campaign: we can employ labels, a test case selector, and the relevant environments. The primary difference lies in the scheduling, which for customer journey monitoring is typically much more frequent.
The final areas are Reporting and Analytics, which are essential for generating actionable insights from our customer journey executions. We can shift our perspective from a single campaign to a broader range of executions by utilizing the various dashboards available. Let’s begin by tagging our tests to facilitate decoupled test and campaign management.
Organizing the Tests with Labels
Our test cases are now prepared for customer journey monitoring. A recommended best practice is to reuse functional non-regression test cases to maintain alignment and enable early detection throughout the software delivery chain.
Customer journey campaigns can be defined using various criteria, including test case, application, system, environment, and labels. We prefer to use labels as a means to decouple our test case repository from its execution. Hard-coding a specific test case, application, or environment can lead to unexpected test cases being executed. Labels provide better visibility and guarantee deliberate inclusion within the monitoring campaign.
In our specific case, we tag our test cases with “Campaign_Scheduled” to clearly identify them as part of the customer journey campaign. These test cases may still possess other labels and requirements that are useful for reporting purposes, as we will explore later. The next crucial step is to configure our test suite for monitoring, which is referred to as a Campaign.
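Before configuring the campaign, here is a minimal sketch of what this label-based decoupling buys us: the selection picks up any test case carrying the monitoring label, regardless of where it lives in the repository. The identifiers and labels are illustrative.

```python
# Select the monitoring campaign scope by label rather than by
# hard-coded test case identifiers.
MONITORING_LABEL = "Campaign_Scheduled"

test_cases = [
    {"id": "TC-0001", "labels": ["Campaign_Scheduled", "Connection"]},
    {"id": "TC-0014", "labels": ["Cart"]},
    {"id": "TC-0027", "labels": ["Campaign_Scheduled", "Checkout"]},
]

campaign_scope = [tc for tc in test_cases if MONITORING_LABEL in tc["labels"]]
print([tc["id"] for tc in campaign_scope])  # -> ['TC-0001', 'TC-0027']
```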
Schedule a Monitoring Campaign for Optimal Performance
The pivot between our test case repository and execution, in our scenario, is the label. We recommend ensuring that the campaign description clearly states its goal, and keeping the campaign highly cohesive for the sake of efficient reporting and execution speed.
The second tab, “TestCase List”, allows us to configure the content of our test campaign. Here, we only use the label defined previously. You can also scope the suite based on other criteria, such as “Test Case Criterias” or other labels.
The subsequent tab, “Environments, Countries & Robots List,” must be carefully tailored to the specific touchpoints identified within your customer journeys. Traditionally, the environment will be production, although staging environments can be used in some cases. The countries and devices you select will depend directly on your unique context and application deployments.
The execution settings are also highly flexible. For customer journeys, we recommend configuring a screenshot on error while limiting detailed tracing capabilities. Our primary goal is to react swiftly while accurately measuring the actual performance of our tests: a screenshot is typically very useful, and restricting data collection avoids biasing our measurements. Retry mechanisms should also be limited to prevent masking stability issues.
Our objective is to ensure the quality of our customer experience, not to have a green dashboard.
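Summarized as configuration, the recommendations above could look like the following sketch. The field names are illustrative rather than Cerberus Testing’s exact parameter names.

```python
# Illustrative execution settings for a monitoring campaign: evidence on
# failure only, minimal tracing, and no retries so instability stays visible.
execution_settings = {
    "screenshot": "on_error",  # capture evidence only when a step fails
    "verbose_tracing": False,  # limit data collection to avoid biasing timings
    "retries": 0,              # do not mask stability issues with retries
}
```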
The priority of scheduling is a critical aspect of customer journey monitoring, particularly given the limited number of parallel executions. Supervision campaigns are typically small and executed regularly, whereas non-regression campaigns tend to be larger and run on-demand. We can guarantee that monitoring runs first by setting the “Priority” field to 0, or at least lower than other defined campaigns. The queue management system will then handle the priorities accordingly.
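The queueing behavior is essentially a priority queue in which lower values win. The sketch below illustrates the idea with Python’s heapq; the campaign names and priority values are illustrative.

```python
# Lower priority values are served first, so the small monitoring
# campaign (priority 0) is picked before a larger on-demand campaign.
import heapq

queue = []
heapq.heappush(queue, (100, "Nightly_NonRegression"))    # on-demand, larger
heapq.heappush(queue, (0, "CustomerJourneyMonitoring"))  # scheduled, small

priority, campaign = heapq.heappop(queue)
print(campaign)  # -> CustomerJourneyMonitoring
```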
The scheduler area enables you to define one or more executions using a crontab format. The frequency of these executions is typically a balance between the value, criticality, and frequency of the journeys themselves; at a minimum, an hourly execution is generally recommended.

The final part involves defining notifications via email and Slack as required. The specific approach depends on your internal processes; some teams directly integrate with the Cerberus Testing APIs from their operational monitoring solutions.

Our customer journey monitoring campaigns are now fully configured to run regularly, collecting valuable data for future analysis. We can now transition to the analytical phase.
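Before moving on to reporting, here is a minimal sketch of the API-based integration mentioned above, triggering the campaign from an external scheduler or monitoring tool. The instance URL and campaign name are placeholders, and the endpoint name is an assumption based on the public Cerberus Testing API style; verify it against your own instance.

```python
# Hedged sketch: queue one execution of the monitoring campaign via HTTP.
# Inside Cerberus itself, the equivalent crontab schedule "0 * * * *"
# would run the campaign at the top of every hour.
import requests

CERBERUS_URL = "https://cerberus.example.com"  # placeholder instance URL
CAMPAIGN = "CustomerJourneyMonitoring"         # placeholder campaign name

def trigger_campaign() -> str:
    """Add the campaign to the execution queue and return the raw response."""
    response = requests.post(
        f"{CERBERUS_URL}/AddToExecutionQueueV003",  # assumed endpoint name
        data={"campaign": CAMPAIGN},
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(trigger_campaign())
```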
Reporting, Analysis, and Actionable Insights
Various dashboards are available to support our visibility requirements. We differentiate Reporting from Analytics based on the distinct perspectives and insights they can provide. Let’s begin with campaign reporting.
The first piece of information we seek is the execution status of a single campaign, accessible through the native Campaign Reporting area. Its dashboards include the global campaign status with breakdowns by status, test folder, and label, followed by a detailed table of all executions. Each test case execution report is then available, complete with traceability information.
Here’s an example of the status presented by labels, providing us with visibility into our customer journey monitoring using the “Campaign_Scheduled” tag. In contrast, the other tags offer a stability perspective for each website area, namely Account, Cart, Connection, Homepage, and Products.
Our next objective is to understand customer journey performance over time. This requires combining various campaign execution reports and statuses to provide a comprehensive view. The “Campaign Reporting Over Time” feature delivers the analytical dashboards we need, showcasing execution time split by device and environment, execution ratios, and stability dashboards.
Here’s a visual representation of our flakiness ratio over time. From this, we can loop back to the other dashboards to ascertain whether the instabilities occur within a specific context. We can also zoom in on a particular campaign report and its test case traces to pinpoint and narrow down problems.
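As a rough illustration of how such a ratio can be derived from raw execution results, the snippet below aggregates failed executions over total executions per day as a proxy for flakiness. The data, field names, and aggregation window are assumptions for the example.

```python
# Compute a daily flakiness ratio: failed executions over total executions.
from collections import defaultdict

executions = [
    {"day": "2022-06-01", "status": "OK"},
    {"day": "2022-06-01", "status": "KO"},
    {"day": "2022-06-01", "status": "OK"},
    {"day": "2022-06-02", "status": "OK"},
]

totals, failures = defaultdict(int), defaultdict(int)
for execution in executions:
    totals[execution["day"]] += 1
    if execution["status"] != "OK":
        failures[execution["day"]] += 1

for day in sorted(totals):
    print(day, f"flakiness ratio: {failures[day] / totals[day]:.0%}")
```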
From here, we have a repeatable structure that can support our customer experience monitoring.
From Reactive to Proactive Customer Journeys Monitoring
The framework and process described here establish scalable foundations: robust test case design, decoupled requirements, and proper configuration. Expanding the framework then becomes a balance to maintain.
Our core focus must remain on customer experience monitoring, actively supporting our quality initiative, and consistently delivering value to our customers. The various elements defined within this framework facilitate an evolution from a reactive to a proactive approach, and from a siloed to a transversal one.
Sharing this customer perspective is an excellent way to align diverse teams, from product management to operations, on a common objective. We wish you successful customer journey monitoring, ensuring a positive user experience from start to finish.
Start now with Cerberus Testing.