
Testing in Continuous Delivery: Shift Left

In today’s constantly changing market, continuous delivery is one of the most popular engineering approaches: most companies claim they work according to CD rules, or at least don’t admit out loud that they don’t. The popularity of this methodology comes from its main idea: an engineering process based on short, repetitive iterations, where every iteration ends with delivering user value and gathering feedback on it.

Knowing the main rules of the continuous delivery approach, how do we deal with testing and quality assurance in such a fast and repetitive process?

Get rid of the walls

Let’s consider a simplified engineering process based on the following consecutive phases: specification, development, testing and operations. From a quality assurance perspective, the problem is that testing is the last phase before shipping features to users, which is often too late, especially given the iterative nature of a continuous delivery approach. Throwing changes over the proverbial wall to another team is no longer effective, and causes bottlenecks in the whole process due to handover times. We don’t want to separate analysts from the development team during the specification phase. Furthermore, we want to find bugs already in the development phase, not when the implementation is done. Keep in mind also that the target of our quality efforts is the production environment, so we want testers to be able to get end users’ feedback and deal with production configurations.

The concept behind these practices is often called shift left: shifting tools and knowledge to the left, to earlier phases of the engineering process.

Tester’s perspective

Specification:
As we’ve already said, the whole development team (which testers are part of) should participate in creating story specifications. But how can we assure quality and prevent defects at such an early stage, before implementation? A crucial difference between the traditional approach and testing in continuous delivery is that testers create a strategy for the whole process instead of only executing given scenarios. Part of this is outlining a testing approach already in the specification phase, in the form of simple testing notes. These don’t have to be full-blown testing scenarios – rather notes about critical quality aspects and things to keep an eye on.

This exercise certainly lets us share testing knowledge across the team, but what’s even more important is having the whole team focus on quality, which helps prevent a lot of defects before development even starts.

Development:
Following the shift left practice, we don’t want to wait for development to be done before we start testing. Finding bugs in your code should happen in parallel with writing it.

One way to achieve this is to test small pieces of the implementation. We can divide it into testing the backend and frontend separately (and then the whole piece together), but we can go even deeper here, and test isolated parts of the backend code with other modules mocked based on their contracts. This idea of testing smaller chunks of code requires close work with the developers and a good understanding of your system’s backend architecture.
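To make this concrete, here is a minimal sketch of testing one backend module in isolation, with its dependency replaced by a stub that honors an agreed contract. All names here (PaymentGateway, OrderService) are hypothetical illustrations, not part of any real system described in this article.

```java
public class ContractMockSketch {

    // Contract agreed between the order module and the payment module:
    // charge() returns true when the payment is accepted.
    interface PaymentGateway {
        boolean charge(String userId, int amountCents);
    }

    // The module under test, depending only on the contract,
    // not on the real payment implementation.
    static class OrderService {
        private final PaymentGateway gateway;

        OrderService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        String placeOrder(String userId, int amountCents) {
            if (amountCents <= 0) {
                return "REJECTED";
            }
            return gateway.charge(userId, amountCents) ? "CONFIRMED" : "PAYMENT_FAILED";
        }
    }

    // Stub that follows the contract: accepts charges up to a fixed limit.
    static PaymentGateway stubGateway(int limitCents) {
        return (userId, amountCents) -> amountCents <= limitCents;
    }

    public static void main(String[] args) {
        OrderService service = new OrderService(stubGateway(10_000));
        System.out.println(service.placeOrder("alice", 2_500));   // CONFIRMED
        System.out.println(service.placeOrder("alice", 50_000));  // PAYMENT_FAILED
        System.out.println(service.placeOrder("alice", -1));      // REJECTED
    }
}
```

The point is that the order logic can be exercised long before the real payment module is finished – only the contract between the two modules needs to be stable.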

Another idea is to pair testers and developers. Testers often come to pair programming unwillingly – just as testers are more experienced in testing, developers are usually more experienced in writing code. But testers bring to the table a very particular way of looking at the implementation: the quality and end-user perspective. This combination of technical and domain knowledge makes development more defect-proof and user-oriented.

Testing:
Let’s recall one of the fundamentals of continuous delivery: a development process based on short, iterative cycles. There’s no way to achieve this while the testing phase is a manual process. One of the keys to continuous delivery is automation, and testing is no exception – you should have most of your checks automated. Of course, that doesn’t mean you must cover every edge case of your application with expensive end-to-end tests. Test engineers should have the best knowledge of which parts of the application should be covered with, for example, functional Selenium tests, when to use backend integration tests, and when unit tests are just fine. Sounds familiar? Yes – it’s the test pyramid. Our goal is not to maintain the largest number of tests, but to get quick and stable quality feedback.
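As an illustration of the base of the pyramid, here is the kind of pure business logic that is best covered with fast unit-level checks rather than an expensive end-to-end test. The discount rules are made up for this sketch.

```java
public class PyramidBaseSketch {

    // Pure function: no browser, no HTTP, no database.
    // Ideal territory for unit tests at the bottom of the pyramid.
    static int discountPercent(int orderTotalCents, boolean loyalCustomer) {
        int discount = orderTotalCents >= 100_00 ? 10 : 0; // 10% for orders of 100.00+
        if (loyalCustomer) {
            discount += 5; // flat loyalty bonus
        }
        return discount;
    }

    public static void main(String[] args) {
        // Checks like these run in milliseconds and fail with a precise reason,
        // which is exactly the quick and stable feedback we're after.
        System.out.println(discountPercent(150_00, false)); // 10
        System.out.println(discountPercent(150_00, true));  // 15
        System.out.println(discountPercent(20_00, true));   // 5
    }
}
```

A Selenium test clicking through the checkout could verify the same rules, but orders of magnitude slower and with far more ways to fail for unrelated reasons.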

Does this mean there’s no room for manual tests in continuous delivery? Remember that there will always be some tests that are very expensive to automate and, at the same time, very important from a business perspective. Having most of our tests automated gives us time to fully concentrate our manual efforts on the most crucial domain and exploratory validation.

Operations:
The operations phase usually means going to production. If our features are in production environments, does that mean the quality cycle is over? Remember that a big part of the continuous delivery methodology is getting end users’ feedback.

A straightforward way of collecting this kind of feedback is application logs. We should always make sure we have proper business-level logs – they’re our way to reproduce and trace how users interact with our product. Application logs usually come in a pair with metrics: CPU, disk, latency, etc. These are another set of important gauges that give instant feedback on our application’s health.
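A business-level log line could look like the sketch below. The field names and key=value format are assumptions for illustration, not a standard – the point is that each business event is recorded in a structured, traceable way.

```java
public class BusinessLogSketch {

    // One structured line per business event keeps logs grep-able and
    // easy to feed into a log aggregator later.
    static String auditLine(String userId, String action, String outcome) {
        return String.format("event=user_action user=%s action=%s outcome=%s",
                userId, action, outcome);
    }

    public static void main(String[] args) {
        System.out.println(auditLine("user-42", "checkout", "success"));
        System.out.println(auditLine("user-42", "checkout", "payment_declined"));
    }
}
```

With lines like these, reconstructing what a given user did before a reported defect becomes a simple log search instead of guesswork.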

An approach that has been getting more and more attention lately is testing in production. One could ask: isn’t that the single worst practice? But we’re not talking about throwing untested changes onto production environments and waiting until something goes wrong. There are various techniques of production testing, such as canary releases, where we introduce new features only to a limited range of production users, who give us feedback with minimized risk. Remember that this limited range of production users can also mean your single test user – this way you gain the ability to test your features in full isolation on the production configuration.
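Routing a limited range of users to a canary release can be sketched roughly as below. The deterministic hash bucketing and the allowlist for a single test user are assumptions about how such a rollout could be wired, not a description of any particular platform.

```java
import java.util.Set;

public class CanarySketch {

    // Users explicitly opted in, e.g. the team's own test account -
    // this is the "single test user" case from the article.
    static final Set<String> ALLOWLIST = Set.of("qa-test-user");

    // True for roughly `percent` of users; the bucket is derived from the
    // user id, so a given user consistently sees the same variant.
    static boolean inCanary(String userId, int percent) {
        if (ALLOWLIST.contains(userId)) {
            return true;
        }
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < percent;
    }

    public static void main(String[] args) {
        System.out.println(inCanary("qa-test-user", 0)); // true - allowlisted
        System.out.println(inCanary("some-user", 100));  // true - full rollout
        System.out.println(inCanary("some-user", 0));    // false - rollout off
    }
}
```

Gradually raising the percentage while watching the logs and metrics described above is what keeps the risk of a canary rollout contained.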

Dev, Test, Ops

Arguably one of the most hyped terms in the development world right now is DevOps. The term comes directly from the shift left concept: giving tools and operations responsibilities to developers in order to reduce bottlenecks and handover time.

There are also other, very similar terms – TestOps and DevTestOps. With this approach, you start quality efforts in the early stages of development and extend them all the way to production environments. A term that describes such practices even better is continuous testing: a group of techniques where testing means building and implementing a quality assurance strategy throughout the whole development cycle, not only in a single testing phase.

Summary 

The continuous delivery methodology brings many new challenges for test engineers. With the iterative development approach, it’s important to maintain the quality of the product in every phase of the cycle. This approach is often called continuous testing – testing from the very first phase, reducing bottlenecks and handover times by combining various tools and practices, including ones so far associated with different roles in the engineering process. Testing in continuous delivery means building and maintaining a quality assurance strategy across the whole development cycle, in every phase.
