This Is How To Implement Maintainable Automated Tests
An important feature is late. The development team finally commits the last changes. We run the test campaign, but it comes back full of orange and red indicators. We don’t have time to update the tests, so the campaign is bypassed, to be updated “later”. In reality, that is precisely the day it starts to be forgotten.
The journey of test automation is full of bottlenecks. Once convinced by test automation, we can finally move forward. But a growing test suite brings another challenge: maintenance. Forgetting about test maintainability as a requirement can lead to the situation described above.
The adoption of test automation is driven by the need to ship software changes faster without sacrificing stability. The natural tendency is to increase our test automation coverage until we are sufficiently confident. The thing is, applications keep evolving as the business iterates on its value proposition through digital interactions. Therefore, we have to deal with test maintenance both reactively and proactively.
Let’s first define automated test maintainability.
What do maintainable automated tests mean?
Maintainability is a requirement in itself, part of the non-functional requirements category. A key element of maintainable automated tests lies in the ease of dealing with changes. As in software, the Pareto principle applies to automated tests: roughly 80% of the effort is spent on their maintenance.
Good maintainability makes it easier to fix defects at their root, acting surgically to limit the impact on the whole system. This sound structure prevents unexpected side effects when performing changes.
Consequently, we can maximize the automated tests’ useful life (remember the bypassed and forgotten test campaign?), efficiency, reliability, and safety. Most importantly, we keep the ability to meet new requirements, a must-have for a continuous testing approach.
Let’s look at specific techniques, starting far from technical details.
A structure aligned on the business domain
Product management teams often maintain their own test repository because it captures business specifics. The technical team can then implement only the relevant tests, with the right information. But we can do better than that: why have a silo in the first place?
Teams need cross-functional collaboration to get a fast feedback loop on their experiments. A shared test repository supports active collaboration, a common vocabulary, and a consistent structure. These elements are valuable for test maintenance.
Take the example of an e-commerce platform. Traditional product areas will be present, such as Catalog, Product, Customer, Checkout, and Support. Each of these areas can then be subdivided into customer journeys composed of pages and contexts. This is the business structure we should reuse in our tests to maximize alignment and understanding.
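As a minimal sketch, a pytest-style suite (assumed tooling; all module, function, and fixture names below are hypothetical, not from a real project) can mirror these business areas directly in its layout, so a test reads in business terms:

```python
# Hypothetical layout mirroring the business domains:
#
#   tests/catalog/test_browse_journey.py
#   tests/checkout/test_payment_journey.py
#   tests/support/test_contact_journey.py
#
# A module under tests/checkout/ then names its tests after the journey:

def checkout(cart, method):
    """Placeholder standing in for the real checkout journey under test."""
    return "confirmed" if cart["items"] and method == "card" else "rejected"

def test_guest_can_pay_by_card():
    """Checkout > Payment journey: a guest completes a card payment."""
    cart = {"items": ["book"], "total": 12.0}  # illustrative fixture
    assert checkout(cart, method="card") == "confirmed"
```

Anyone browsing the suite can then navigate it the same way they navigate the product.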
We have to close even more gaps, describing clearly the test cases.
Clear self-explanatory test cases
Imagine you have just arrived as a QA engineer, replacing a departing team member. Unfortunately, no hand-over was possible. You are left with what remains of the documentation. Your first reflex will be to search the automated tests to find the “reality”.
Well-structured, readable, and understandable test cases are the foundation of knowledge management. In our example, their value is to secure the continuity of knowledge within the team. Clarity also helps day-to-day collaboration.
Efficient solutions do not need to be complex. In test automation, clear test cases with simple actions and controls expressed in plain language are a required foundation. Keyword-Driven Testing (KDT) frameworks rely on this approach.
Specific techniques can also improve our approach, such as Behavior-Driven Development (BDD) and the Single Responsibility Principle (SRP). A more practical test is also possible: ask a business person unfamiliar with your test what they understand after reading it once.
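To make the keyword-driven idea concrete, here is a minimal runner sketch, not the Cerberus Testing API: each test case is just a readable list of plain-language keywords with arguments, and a small interpreter executes them (all keyword and page names are illustrative):

```python
# A toy keyword-driven runner: keywords map plain-language steps to code.
# Action keywords mutate the state; control keywords return True/False.
KEYWORDS = {
    "open_page":   lambda state, url: state.update(page=url),
    "type_text":   lambda state, field, text: state.setdefault("fields", {}).update({field: text}),
    "verify_page": lambda state, url: state.get("page") == url,
}

def run(test_case):
    """Execute a list of (keyword, *args) tuples; fail on the first failed control."""
    state = {}
    for keyword, *args in test_case:
        result = KEYWORDS[keyword](state, *args)
        if result is False:  # only control keywords can return False
            return "FAIL"
    return "PASS"

# The test case itself stays readable by a non-technical reader:
login_test = [
    ("open_page", "/login"),
    ("type_text", "email", "jane@example.com"),
    ("open_page", "/home"),
    ("verify_page", "/home"),
]
```

The business-readable data (`login_test`) is what gets maintained; the keyword implementations change far less often.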
A good structure can then be reused.
Reusable components we can easily update
Factoring code into reusable pieces is a practice taught early in engineering courses. Modules, functions, and components are identified as good practice. Test automation benefits from the same thinking.
Automated tests are composed of a set of actions and controls, grouped into a logical structure. In Cerberus Testing, a built-in library is available for both actions and controls, simplifying their usage and maintenance.
A Step then allows grouping actions and controls inside a Test Case. Similarly, we can keep a step in a Step Library to reuse it across our various tests. The value is to simplify evolution: we only have to change one place.
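In plain code, the same idea looks like shared step functions; this is a generic sketch with hypothetical step names, not the Cerberus Testing step mechanism itself:

```python
# Each journey step is defined once and shared across test cases,
# so a change to the login flow lands in a single place.

def step_login(session, user):
    """Reusable step: authenticate and record the user on the session."""
    session["user"] = user
    return session

def step_add_to_cart(session, item):
    """Reusable step: put an item in the cart."""
    session.setdefault("cart", []).append(item)
    return session

def test_checkout_journey():
    session = step_add_to_cart(step_login({}, "jane"), "book")
    assert session["user"] == "jane" and session["cart"] == ["book"]

def test_support_journey():
    session = step_login({}, "jane")  # the same step, reused verbatim
    assert session["user"] == "jane"
```

If the login flow changes, only `step_login` is edited; every test composed from it stays valid.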
Cerberus Testing also supports the Page Object Model pattern through its application object repository. This feature enables managing objects reused across test cases from a centralized repository. That way, a single change to an object is applied to all test cases using it.
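The Page Object Model pattern itself can be sketched in a few lines; the selectors and the `FakeDriver` below are illustrative stand-ins, not real application objects or a real WebDriver:

```python
# Page Object Model sketch: all locators for a page live in one class,
# so a UI change means editing one place instead of every test.

class LoginPage:
    EMAIL_FIELD = "#email"   # assumed CSS selectors, centralized here
    SUBMIT_BTN = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, email):
        """Expose the business action; hide the selectors from the tests."""
        self.driver.type(self.EMAIL_FIELD, email)
        self.driver.click(self.SUBMIT_BTN)

class FakeDriver:
    """Stand-in driver that records interactions, for this sketch only."""
    def __init__(self):
        self.events = []
    def type(self, selector, text):
        self.events.append(("type", selector, text))
    def click(self, selector):
        self.events.append(("click", selector))

driver = FakeDriver()
LoginPage(driver).login("jane@example.com")
```

Tests call `login(...)` and never touch `#email` or `#submit` directly, so renaming a field in the UI is a one-line fix.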
We have an interesting toolbox to use wisely.
Test Maintenance requires a balance
The main risk is falling into the over-optimization pitfall, where the goal is no longer to ease maintenance but to maximize reuse. This leads to dangerous, heterogeneous modules reused everywhere that we fear to change, the inverse of our initial goal.
Our answer lies in balancing various elements. We must balance business and technical constraints to achieve understanding and usability for both worlds. Coupling and decoupling require an objective assessment of their trade-offs.
We have to keep our initial test maintenance goals as a guiding principle: ease the changes of existing and future requirements. Test maintainability is an art of balance we must master in a continuously changing world.
Start using Cerberus Testing for free.