
Top 5 traps of test automation

This article was originally published on TheServerSide.

There’s a common phrase in testing: if you do something more than once, automate it. Software testing, where we routinely repeat similar actions, is a perfect candidate for automation. In modern software development, with microservices and a continuous deployment approach, we want to deliver features fast and often. That makes test automation even more important, yet it still runs into some common problems. Based on my experience, here is my list of the top 5 mistakes teams make in acceptance test automation.

Stability 

False failures? Always-red build plans? We all know them. Stability is one of the most obvious issues with automated tests, yet one of the most difficult to achieve. Even big players like LinkedIn or Spotify admit to having struggled with it. While designing your automation, pay extra attention to stability, as it is the most frequent cause of test failure and inefficiency. Writing the tests is just the beginning, so you should always plan some time in the sprint for maintenance and revision.
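A frequent concrete cause of instability is timing: fixed sleeps are either too short (flaky) or too long (slow). One common remedy is to poll for a condition with an explicit timeout, for example with the Awaitility library. Below is a minimal, self-contained sketch using JUnit 5 and Awaitility; the background task simply stands in for an asynchronous operation in the system under test:

```java
import static org.awaitility.Awaitility.await;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.jupiter.api.Test;

class AsyncStabilityTest {

    @Test
    void waitsForAsyncResultInsteadOfSleeping() {
        AtomicBoolean processed = new AtomicBoolean(false);

        // stand-in for an asynchronous backend operation
        CompletableFuture.runAsync(() -> {
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            processed.set(true);
        });

        // poll for the condition with an explicit timeout instead of a fixed sleep,
        // which would be either too short (flaky) or too long (slow)
        await().atMost(10, TimeUnit.SECONDS)
               .pollInterval(200, TimeUnit.MILLISECONDS)
               .until(processed::get);
    }
}
```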

UI perspective 

The majority of modern applications are web-based, so the most natural way to test them functionally is through the UI. Despite their usefulness, browser tests have some substantial problems, such as slow execution and random instability. Since we are in the world of microservices, it is worth considering dividing tests into layers: testing your application features directly through the web services (or the backend in general) and limiting UI tests to a minimal suite of smoke tests is usually more efficient.
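As a rough illustration of what a backend-layer acceptance test can look like, here is a minimal sketch using JUnit 5 and the JDK's built-in HttpClient; the base URL, the /orders endpoint, the payload and the expected status code are hypothetical placeholders for your own API:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class OrdersApiSmokeTest {

    // hypothetical base URL of the service under test
    private static final String BASE_URL = "http://localhost:8080";

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void orderCanBeCreatedThroughTheApi() throws Exception {
        // exercise the feature directly through the REST API, bypassing the browser entirely
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\",\"quantity\":1}"))
                .build();

        HttpResponse<String> response = client.send(create, HttpResponse.BodyHandlers.ofString());

        assertEquals(201, response.statusCode());
    }
}
```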

Overmocking 

Because of the many dependencies between systems, mocking services has become a popular pattern, and it is often forced by test environment limitations. However, you should pay close attention to how much you mock: mocks can drift away from reality or become outdated, so your development would rest on false assumptions. There is a saying, “Don’t mock what you don’t own”, which means you should stub only the pieces of the architecture that you implement yourself. This is the proper approach when you test integration with an external system, but what if you want to assume stable dependencies and test only your own implementation? Then yes, you mock everything except what you own. To sum up, the mocking and stubbing strategy can differ depending on the purpose of the test.
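One common way to apply “don’t mock what you don’t own” is to wrap the external system in a small interface that you do own and mock that interface in your tests. Here is a minimal sketch with JUnit 5 and Mockito; ExchangeRateProvider and PriceCalculator are hypothetical examples, not part of any real system:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    /** Our own abstraction over a third-party currency API; this is what we mock. */
    interface ExchangeRateProvider {
        BigDecimal rateFor(String currency);
    }

    /** The code under test; it depends only on the interface we own. */
    static class PriceCalculator {
        private final ExchangeRateProvider rates;

        PriceCalculator(ExchangeRateProvider rates) {
            this.rates = rates;
        }

        BigDecimal inCurrency(BigDecimal priceInUsd, String currency) {
            return priceInUsd.multiply(rates.rateFor(currency));
        }
    }

    @Test
    void convertsPriceUsingStubbedRate() {
        // mock the interface we own, not the third-party client itself
        ExchangeRateProvider rates = mock(ExchangeRateProvider.class);
        when(rates.rateFor("EUR")).thenReturn(new BigDecimal("0.90"));

        PriceCalculator calculator = new PriceCalculator(rates);

        assertEquals(new BigDecimal("9.00"), calculator.inCurrency(new BigDecimal("10"), "EUR"));
    }
}
```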

Tight coupling with framework 

That’s a tricky one. Developers often choose frameworks and tools based on current trends. The same applies to test frameworks: there are at least a few great options just for the test runner, not to mention the REST client, the build tool and so on. While choosing a technology stack, we should bear in mind the need to stay as independent from the tools as we can. It is the test scenario that matters most, not the framework.
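One way to limit that coupling is to keep the scenario logic in plain classes that expose only domain-level methods, so the test runner, REST client or assertion library can be swapped without rewriting the scenarios. A minimal sketch, with hypothetical CheckoutScenario and CheckoutTest classes and an in-memory stand-in for the real system calls:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

// Framework-agnostic scenario layer: plain Java, with no test-runner,
// REST-client or assertion-library types in its API, so those tools can be
// replaced without touching the scenario logic.
class CheckoutScenario {

    private final List<String> cart = new ArrayList<>();   // stands in for the real system state

    void addItem(String sku) {
        cart.add(sku);                                      // in reality: a call through your API client
    }

    boolean checkoutSucceeds() {
        return !cart.isEmpty();                             // in reality: submit the order and read the result
    }
}

// The only framework-specific code is this thin JUnit wrapper; swapping the
// runner (or the REST client behind CheckoutScenario) leaves the scenario intact.
class CheckoutTest {

    @Test
    void customerCanCheckOutWithAnItemInTheCart() {
        CheckoutScenario scenario = new CheckoutScenario();
        scenario.addItem("SKU-1");
        assertTrue(scenario.checkoutSucceeds());
    }
}
```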

Keep it simple 

Depending on your team structure, acceptance tests are implemented by developers or testers. Usually developers are better at writing code, while testers have a more functional approach (it’s not a rule, though). Automated tests are not a product in themselves but rather a tool, so I would put functionality over complexity. Is your test codebase nearly as big as the system under test? Try to categorize tests by their domain or type. Does adding new tests require time-consuming analysis of the code structure? Sometimes more verbose (but more readable) code serves your tests better than complex structures.
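For example, a handful of verbose but self-explanatory test methods is often easier to maintain than a clever, heavily parameterized structure. A small hypothetical sketch with JUnit 5; discountFor() merely stands in for a call to the system under test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // Verbose but readable: each case states its intent and data explicitly,
    // so a failing test can be understood without digging through shared
    // fixtures or deep inheritance hierarchies.
    @Test
    void regularCustomerGetsNoDiscount() {
        assertEquals(0, discountFor("REGULAR", 100));
    }

    @Test
    void goldCustomerGetsTenPercentDiscount() {
        assertEquals(10, discountFor("GOLD", 100));
    }

    // Hypothetical helper standing in for a call to the system under test.
    private int discountFor(String customerType, int orderValue) {
        return "GOLD".equals(customerType) ? orderValue / 10 : 0;
    }
}
```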

Summary 

The worst thing that can happen to your automated tests is abandoning them because of relatively simple, yet common issues. The time saved by automating simple test cases can be spent executing more complex scenarios manually, which leads to better software quality and higher team motivation in general. If you have other interesting experiences with test automation, let me know!
