This series aims to help decision makers think through critical issues, and settle on answers to them, before dedicating money, resources and time to a test automation project.
It's critical to have answers to the following questions:
Do we have adequate environments that are representative of Prod?
Typically, a minimum of four environments (dev, qa, staging and prod) is required to ensure automated tests can run and provide accurate results. You may need more depending on the nature of the app; for example, if the app is expected to bear a heavy load, you will also require a perf environment for performance testing.
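To make this concrete, here is a minimal sketch of how an automated suite might be pointed at each environment. It assumes a hypothetical set of base URLs and a TEST_ENV variable; real values would come from your own infrastructure or configuration service.

```python
import os

import pytest

# Hypothetical base URLs for each environment; real values would come
# from your own infrastructure or a config/secrets service.
BASE_URLS = {
    "dev": "https://dev.example.com",
    "qa": "https://qa.example.com",
    "staging": "https://staging.example.com",
    "prod": "https://www.example.com",
}


@pytest.fixture(scope="session")
def base_url():
    # TEST_ENV decides which environment the suite runs against;
    # default to qa so a plain `pytest` run never touches prod.
    env = os.environ.get("TEST_ENV", "qa")
    return BASE_URLS[env]
```

The point is less the code itself than the decision behind it: if a representative environment doesn't exist for a given value of TEST_ENV, the automation has nothing trustworthy to run against.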
Are we able to seed the app with test data similar to Prod?
Test data varies from application to application. Banking apps, for example, can be tricky to generate test data for due to security concerns. E-commerce apps can also be problematic because of integrations with platforms like Adobe Commerce (previously Magento). You can of course use mocks and stubs, but it's best to avoid doing so whenever possible.
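Where realistic data genuinely can't be obtained, a stub can stand in for the external dependency. The sketch below is illustrative only: it assumes a hypothetical `checkout` module that calls a payment gateway, and uses Python's unittest.mock to replace that call in a test.

```python
from unittest.mock import patch

# Hypothetical application code: checkout.place_order() internally calls
# checkout.charge_card(card, amount) against a real payment gateway.
import checkout


def test_order_succeeds_when_payment_is_approved():
    # Stub out the gateway call so the test never hits the real platform.
    with patch("checkout.charge_card", return_value={"status": "approved"}):
        result = checkout.place_order(card="4111111111111111", amount=42.50)
    assert result["status"] == "confirmed"
```

The trade-off is the one noted above: the stub keeps the test fast and self-contained, but it no longer exercises the real integration, which is why production-like data is preferable when you can get it.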
What is the desired level of device/platform coverage?
Whether the app in development is mobile or web-based, you'll need to ensure it is tested on as many devices, operating systems, browsers and versions as possible.
Cloud testing platforms have become the preferred way of doing so, as opposed to managing the process in-house, but they can bring further complications.
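A common approach is to parametrise tests over a browser/platform matrix and point them at a remote grid, whether a cloud provider or an in-house Selenium Grid. The sketch below assumes Selenium, a hypothetical grid URL and an illustrative (not exhaustive) matrix.

```python
import pytest
from selenium import webdriver

# Hypothetical remote grid endpoint (cloud provider or in-house Selenium Grid).
GRID_URL = "https://grid.example.com/wd/hub"

# Illustrative coverage matrix; a real one should be driven by your usage analytics.
MATRIX = [
    ("chrome", "Windows 11"),
    ("firefox", "Windows 11"),
    ("safari", "macOS 14"),
]

OPTIONS = {
    "chrome": webdriver.ChromeOptions,
    "firefox": webdriver.FirefoxOptions,
    "safari": webdriver.SafariOptions,
}


@pytest.mark.parametrize("browser,platform", MATRIX)
def test_homepage_loads(browser, platform):
    options = OPTIONS[browser]()
    options.set_capability("platformName", platform)
    # Each browser/platform combination runs as its own test case on the grid.
    driver = webdriver.Remote(command_executor=GRID_URL, options=options)
    try:
        driver.get("https://qa.example.com")
        assert driver.title  # the page rendered and has a title
    finally:
        driver.quit()
```

Every row added to that matrix multiplies execution time and cost, which is exactly the kind of complication cloud platforms introduce alongside the coverage they offer.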
What is the existing release process, and how willing is the development team to adjust to the changes required for automation to provide value?
Before test automation is introduced, developers are typically used to pushing code to main and deploying to Staging or Prod without delay. Test automation will inevitably slow this process down. Has the development team taken this into consideration?
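As a rough illustration of where the release flow changes, the sketch below (a hypothetical deploy script, not any particular CI system's syntax) only triggers a deployment once the automated suite has passed, which is precisely the delay the team needs to accept.

```python
import subprocess
import sys


def main() -> int:
    # Run the automated suite; a non-zero exit code means failures.
    tests = subprocess.run(["pytest", "tests/", "-q"])
    if tests.returncode != 0:
        print("Automated tests failed; blocking the deployment.")
        return tests.returncode

    # Hypothetical deploy step; in practice this would be your CI/CD job.
    deploy = subprocess.run(["./deploy.sh", "staging"])
    return deploy.returncode


if __name__ == "__main__":
    sys.exit(main())
```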
How much resource is the team willing to dedicate to test automation?
Manual testers are not automation engineers. I am not stating they cannot learn the skills necessary to build or extend a test framework, write test scripts or contribute to the CI/CD infrastructure, but this might take months or years depending on the individual. A common mistake decision makers make is to assign a manual tester who shows a willingness to learn test automation as the team's automation resource. This is a very bad idea, especially without an automation expert to teach and guide the tester.
When the framework is built, the initial test scripts have been developed and the CI/CD process is fully operational, all might seem well. But what happens when new features are built, more test scripts need to be developed, and old test scripts require updating or retiring? Are there enough test automation resources to handle such changes?
Upcoming
In upcoming posts we’ll delve deeper into each of these questions, aiming to provide solutions that help you avoid the pitfalls that have doomed countless test automation projects.