Tales of an Automation Engineer - Environment Chaos


In the previous post, the first in the Tales of an Automation Engineer series, the question "Do we have adequate environments that are representative of Prod?" was suggested as the first question to answer before an automation project begins. This entry in the series aims to answer it.


In my experience, before an automation team is brought into a project, the development environments that exist are typically nowhere near adequate for automated tests. At a minimum, the pre-prod environments that should exist are dev, QA and staging. This is somewhat adequate when an application is in the earliest stages of development, prior to automated testing. As automation test development begins and test scripts are written to cover both features in development and existing features that require regression tests, the following environments will be required:

  • Development

  • QA

  • Staging

  • UAT

  • Performance


The QA and Staging environments will typically be reserved for automation purposes: QA to develop automation scripts, and Staging to execute feature tests. I’ve worked on projects in the past where manual testing and UAT were also performed on the Staging environment, which caused conflicts with the automated tests. Conflicts can also arise when front-end automated tests are executed at the same time as back-end automated tests. Typically this can be avoided by scheduling the back-end and front-end suites to run at different times.
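One simple way to stagger the suites is with scheduled jobs on the CI or test-runner host. The sketch below uses hypothetical crontab entries; the script paths, flags and times are illustrative assumptions, not something from a specific project:

```shell
# Hypothetical crontab entries staggering two test suites on a shared Staging env.
# Script paths, the --env flag and the run times are assumptions for illustration.

# Run the back-end API test suite at 01:00 every night
0 1 * * * /opt/automation/run_backend_tests.sh --env staging

# Run the front-end UI test suite at 03:00, after the back-end run has finished
0 3 * * * /opt/automation/run_frontend_tests.sh --env staging
```

The fixed two-hour gap is a crude but common compromise; a more robust setup would have the front-end job triggered by the back-end job's completion rather than by the clock.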


If the application is one that requires performance testing, this can in theory also be carried out on the Staging environment, but I would advise running it in its own dedicated environment.


What I’ve outlined above is a somewhat simplified breakdown. There can of course be more environments whose purpose is to develop or test specific integrations, or that are set up with specific configurations.


Each environment of course adds maintenance and resource cost for the DevOps or SRE teams. Provisioning them all is often not possible, or is a low priority, which means some kind of compromise on where to develop and execute the automated tests needs to be agreed upon. This will almost certainly lead to failed tests and a reduction in confidence in the automated test results.


Summary

I hope to have provided a good insight into the development environment requirements for successfully introducing automated tests into your project. As with all aspects of software engineering, if adequate planning is carried out beforehand, the required environments are understood, and the team has the expertise and resources to rapidly spin up new environments, then the automated tests will provide accurate results. If the opposite is true, and spinning up new environments is slow or not possible, then the automated tests will ultimately be unreliable, which will cause the development team to lose confidence in the test results.
