
Selenium Grid with Docker

Selenium WebDriver on its own, or with a layer built on top of it like Geb, is arguably the most popular solution for testing web-based applications. For all its greatness, it has some flaws: Selenium tests are slow, and their maintenance cost is high. The answer to the first issue is distributed testing with Selenium Grid, which I described previously.

From the DevOps perspective, though, setting up a Selenium Grid like that by hand is expensive and doesn't scale. The answer to this can be Docker with its docker-compose tool. In this post we will use docker-compose to provision our Docker machine with a scalable Selenium Grid. All of this will be run with one command.

What is Docker 

In simple words, Docker – with the use of Linux containers – allows you to pack all your application dependencies, such as the database, system libraries and so on, into standardised and portable units called containers. The main difference from virtualization tools like Vagrant is that you don't need to ship an entire OS to your CI or production server; instead, you manage containers as independent units. This is just the big picture of the motivation behind Docker. For detailed documentation and installation instructions, please visit the official Docker site.

Since there are plenty of Docker installation instructions on the web, we'll assume that you've already installed it (you can refer to the Get Started section of the official documentation) and that you have the default machine up and running. To test your installation, type:

$ docker info

…and you should see output similar to this:
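The exact numbers and versions will differ from machine to machine, but a trimmed, illustrative excerpt of docker info output might look roughly like this:

Containers: 0
Images: 8
Server Version: 1.10.3
Storage Driver: aufs
Operating System: Boot2Docker (TCL)
CPUs: 1
Total Memory: 1.956 GiB
Name: default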


Setting up the Grid configuration 

A little reminder of the Selenium Grid architecture: the entry point of our Grid is the Selenium Hub. It's the place (a VM or bare-metal machine) at which we point our test execution. The next elements are the nodes – machines that, once registered to the hub, can execute our Selenium tests.

In order to create the hub on our localhost, we need to pull and run container from Docker repository with selenium hub:

$ docker run -d --name selenium-hub -p 4444:4444 selenium/hub

This command downloads and runs the hub container on our localhost. When the download is complete, visit http://localhost:4444/grid/console and you should see an empty grid console (if you created your Docker machine with an address other than the default, replace localhost with the chosen IP).
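If the console doesn't come up, a quick way to see what the hub is doing is to check the container logs, using the selenium-hub name we gave it above:

$ docker logs selenium-hub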

Now we will create two nodes, one with Firefox and a second with Chrome. To download and run the Firefox node container:

$ docker run -d -P --link selenium-hub:hub selenium/node-firefox

…and for Chrome:

$ docker run -d -P --link selenium-hub:hub selenium/node-chrome

We should now have three Docker containers running on our local Docker machine. We can check that with:

$ docker ps

The output should be similar to this:
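Container IDs, generated names and mapped ports will of course differ on your machine, but the listing should contain the hub and both nodes, roughly like this (columns trimmed for readability):

CONTAINER ID   IMAGE                   PORTS                      NAMES
0b1c2d3e4f5a   selenium/node-chrome    0.0.0.0:32771->5555/tcp    jovial_kilby
1a2b3c4d5e6f   selenium/node-firefox   0.0.0.0:32770->5555/tcp    condescending_pare
9f8e7d6c5b4a   selenium/hub            0.0.0.0:4444->4444/tcp     selenium-hub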


We've created the Firefox and Chrome nodes, so the web console of our grid should now display both of them:


Provisioning with docker-compose 

Everything is great, but what about that one command to start this whole thing up? Here comes the docker-compose tool. Docker-compose reads a definition file describing a multi-container Docker setup and brings it up for us. First of all, let's stop all running containers. You can do it with:

$ docker stop $(docker ps -a -q)

Now we have to create a docker-compose.yml file with the following content:
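A minimal sketch of such a file, using the classic (version 1) compose syntax, the same images as before and the hub/node link described below, could look like this:

seleniumhub:
  image: selenium/hub          # the grid hub, exposed on port 4444
  ports:
    - "4444:4444"

chromenode:
  image: selenium/node-chrome
  links:
    - seleniumhub:hub          # nodes expect the hub under the alias "hub"

firefoxnode:
  image: selenium/node-firefox
  links:
    - seleniumhub:hub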


The file structure is rather simple. We've defined a seleniumhub entity, pointed it at an image name (images can be found in the Docker Hub repository) and assigned ports. Then we've defined two node entities: chromenode and firefoxnode. The important thing here is that we have to link them to the seleniumhub container. Since it's a YAML file, you should pay attention to proper indentation. When our file is ready, run:

$ docker-compose up -d

If everything went smoothly, you can check docker ps or point your browser directly at http://localhost:4444/grid/console. As a result, just like before, there is a hub with two nodes, but this time the configuration is defined in one file and can be started with one command. The docker-compose file can now be added to your repository and reused.
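Docker-compose also brings its own status command, scoped to the containers defined in our file (run it from the directory containing docker-compose.yml):

$ docker-compose ps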

Scaling 

When our test base grows, two nodes can be far from enough. Luckily, docker-compose comes with a great feature which allows you to scale the number of similar containers on the fly. If your two-node grid is running and you want to increase the number of Chrome nodes to three, enter the command:

$ docker-compose scale chromenode=3

Now you have two more containers with a Chrome node, both registered to your hub.
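The same works for the Firefox nodes, and several services can be scaled in one call, for example:

$ docker-compose scale chromenode=3 firefoxnode=3

Refreshing the grid console (or running docker ps again) should show the extra nodes registered to the hub.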


Continue reading 

If you want to continue reading and expand your knowledge in the area of Docker and Selenium Grid, I recommend these books:


Summary 

If you are running a Selenium Grid configuration, Docker can be a great way to boost your productivity and help you manage your stack. In a future post I will describe some more advanced configurations with Docker. If you have any questions, please leave a comment.

