
Posts

Notes after TestingCup 2018

On May 28-29th I attended the TestingCup conference in Łódź. Having a rather unique perspective – this was my second year in a row as a speaker at this conference – I want to share some thoughts on the event. The dust has settled, let's go!

Championship

Originally, TestingCup is a software testing championship. Wait, what? Yes, the formula is unique: teams and individuals from all around Poland compete to find the most bugs and defects in a specially prepared application – Mr. Buggy. I don't have specific data, but since this year's conference was entirely in English, I guess the competitors were not only from Poland. As a spectator, I must say that the whole competition looked very professional. There were team shirts and names, a podium and trophies (a gold cup and cash).

Some cons? The testing championship is held on the first day of the conference. So this is already a conference, but if you're not taking part in the championship… there's not much to do, since all the talks are on the next day.

Rerun Flaky Tests – Spock Retry

One question I get asked a lot is how to automatically rerun a test on failure. This is a typical need for heavy, functional test scenarios, which are often flaky. While test flakiness and its management is a crucial and extensive matter in itself, in this post I want to give a shout-out to an extremely simple yet useful library: Spock-Retry. It introduces the possibility to create retry policies for Spock tests, without any additional custom-rules implementation – just one annotation. If you are not a fan of the Spock testing framework and you prefer JUnit – stay tuned! I will post an analogous bit about rerunning JUnit tests soon.

Installation

If you are a Maven user, add the following dependency:

<dependency>
    <groupId>com.anotherchrisberry</groupId>
    <artifactId>spock-retry</artifactId>
    <version>0.6.2</version>
    <type>pom</type>
</dependency>

For Gradle users:
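The Gradle equivalent should be a one-line dependency (my assumption, simply mirroring the Maven coordinates above):

testCompile 'com.anotherchrisberry:spock-retry:0.6.2'

With the library on the classpath, a retry policy really is just one annotation. A minimal sketch (the @RetryOnFailure annotation and its times parameter come from Spock-Retry; the import path follows the library's README and the flaky assertion is a hypothetical stand-in):

import com.anotherchrisberry.spock.extensions.retry.RetryOnFailure
import spock.lang.Specification

class FlakySpec extends Specification {

    // rerun this feature up to 2 extra times before reporting a failure
    @RetryOnFailure(times = 2)
    def 'unstable endpoint eventually responds'() {
        expect:
        new URL('http://localhost:8080/health').text.contains('UP')  // hypothetical flaky check
    }
}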

Generating Test Data – jFairy

Test data has always been an issue. If you are running Selenium or backend automated tests based on user-related scenarios, you need to provide unique and realistic user test data to make your tests more efficient. There are many ways to deal with this: dumping samples of production databases, writing your own data generators, or using the very same data in every test run and cleaning the database afterwards. In this post I want to write about a small and handy Java library for generating fake test data – jFairy. Since the library is super simple to use, this post is just a shout-out for a nice tool I've been using in many different automation projects, and I hope to put a spotlight on it for my readers.

Installation

If you use Maven, add the following dependency to include jFairy in your project:

<dependency>
    <groupId>io.codearte.jfairy</groupId>
    <artifactId>jfairy</artifactId>
    <version>0.5.9</version>
</dependency>
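To show how little code it takes, here is a minimal sketch of generating a fake user (Fairy.create() and person() are jFairy's entry points; the printed values are illustrative, and getter names may differ slightly between versions):

import io.codearte.jfairy.Fairy
import io.codearte.jfairy.producer.person.Person

Fairy fairy = Fairy.create()
Person person = fairy.person()

// each call yields a fresh, realistic-looking identity
println person.fullName  // e.g. 'Anna Kowalska'
println person.email     // e.g. 'anna.kowalska@example.com'

A generated person like this can then feed a registration form in a Selenium scenario or the body of a POST request in a backend test.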

REST-Assured: going deeper

In my previous post I described basic usage of REST-Assured – a lightweight framework for testing RESTful services. Although the range of functionality described there is enough in most cases, REST-Assured has a lot more to offer. In this post I would like to present some more advanced examples of using the framework. I also recommend my other posts about REST-Assured and building a microservices test automation framework: REST-Assured – framework overview, Building microservices testing framework.

Object Mapping

Sending the request body as a string is easy and straightforward, but it can be inconvenient for more complex operations on request/response properties. The proven solution is the well-known serialization of the request/response body to objects. REST-Assured supports object mapping to (and from) JSON and XML. For JSON you need either Jackson or Gson on your classpath, and for XML you need JAXB. Here is an example of request object serialization:
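A minimal sketch of the round trip (the User class and /users endpoint are hypothetical; body(Object) and extract().as(Class) are the REST-Assured mapping calls, and the imports shown are for the io.restassured packages – older Jayway releases used com.jayway.restassured):

import static io.restassured.RestAssured.given
import io.restassured.http.ContentType

class User {
    String name
    String email
}

def payload = new User(name: 'Anna', email: 'anna@example.com')

// the request object is serialized to JSON by Jackson/Gson,
// and the response body is deserialized back onto a User
User created = given()
        .contentType(ContentType.JSON)
        .body(payload)
    .when()
        .post('/users')
    .then()
        .statusCode(201)
        .extract().as(User)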

Testing Asynchronous APIs: Awaitility tutorial

Despite the growing popularity of test automation, most of it is still likely to be done on the frontend side of the application. While the GUI is a single layer that puts all the pieces together, focusing your automation efforts on the backend side requires dealing with distributed calls, concurrency, and handling their diversity and integration. Backend test automation is especially popular in microservices architectures, with testing REST APIs. I've noticed that dealing with asynchronous events in particular is considered challenging. In this article I want to cover basic usage of Awaitility – a simple Java library for testing asynchronous events. All the code examples are written in Groovy and our REST client is Rest-Assured.

Synchronous vs Asynchronous

In simple words, synchronous communication is when API calls are dependent and their order matters, while asynchronous communication is when API calls are independent. Quoting the Apigee definition:

Synchronous: If an API call is synchronous, it means that code execution will block (or wait) for the API call to return before continuing.
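As a taste of the library, here is a minimal sketch of polling until an asynchronous effect becomes visible (the /orders endpoint is hypothetical; await(), atMost() and until() are Awaitility's core API, and in Groovy a closure stands in for the Callable<Boolean> condition):

import static io.restassured.RestAssured.get
import static org.awaitility.Awaitility.await
import java.util.concurrent.TimeUnit

// trigger some asynchronous processing first, then poll for its outcome
await().atMost(10, TimeUnit.SECONDS).until {
    get('/orders/42').statusCode == 200  // re-evaluated until true or the timeout hits
}

Instead of a hard-coded sleep, the assertion is retried at an interval until it passes or the timeout is exceeded, which keeps the test both fast and stable.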

REST-Assured framework overview

In modern software development, REST services have become the most popular choice for implementing distributed and scalable web applications. They are light and easy to maintain, which results in faster and more effective implementation and system integration. I also recommend my other posts about REST-Assured and building a microservices test automation framework: REST-Assured: going deeper, Building microservices testing framework. With the increasing popularity of RESTful services, there is a need for a fast and lightweight tool for automating REST web service tests. One of the most popular choices is the Rest-Assured framework from Jayway, which brings the simplicity of testing web services from dynamic languages like Groovy or Ruby to Java. In this post we will get our hands dirty and write automated tests in the Rest-Assured framework. In order to create a complete implementation of automated tests in Rest-Assured, we need to write our code against some example API. We'll use…
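To give a first taste of the given/when/then style, here is a minimal sketch of a Rest-Assured test (the /users/1 endpoint and the expected values are hypothetical; newer releases live under io.restassured, while the original Jayway artifact used com.jayway.restassured):

import static io.restassured.RestAssured.given
import static org.hamcrest.Matchers.equalTo

// the BDD-like chain: set up the request, send it, assert on the response
given()
        .baseUri('http://localhost:8080')
    .when()
        .get('/users/1')
    .then()
        .statusCode(200)
        .body('name', equalTo('Anna'))  // GPath expression into the JSON body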

Testing in Continuous Delivery: Shift Left

In today's constantly changing market, continuous delivery is one of the most popular engineering approaches: most companies claim they work according to CD rules, or at least don't say out loud that they don't. The popularity of this methodology comes from its main idea: an engineering process based on short, repetitive iterations, where every iteration ends with delivering user value and gathering feedback on it. Knowing the main rules of the continuous delivery approach, how do we deal with testing and quality assurance in such a fast and repetitive process?

Get rid of the walls

Let's consider a simplified engineering process based on the following consecutive phases: specification, development, testing and operations. From a quality assurance perspective, the problem is that testing is the last phase before shipping features to users, which is often too late, especially given the iterative nature of a continuous delivery approach. Throwing changes over process walls…