Saturday, December 20, 2014

DDT: generic DataProvider update to allow concurrent entity modification

In a previous post we created a generic DataProvider (DP) that allows concurrent read operations from multiple data sources. In this article we'll extend the existing solution with a new save feature, as in some cases we need to modify entities dynamically, e.g. as a precondition / postcondition for a reset-password test.

Let's start with DAO interface updates:

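A sketch of what the updated contract might look like (method names may differ from the repository version):

    import java.util.List;

    public interface Dao<T extends BaseEntity> {

        T getById(long id);

        List<T> getByIds(long[] ids);

        List<T> getAll();

        // the new definition: persists entity modifications
        void save(T entity);
    }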

As you can see, we've added a new save definition so that entity modifications can be persisted. Now let's take a look at its implementation:

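A possible implementation sketch, assuming the SessionManager utility from the previous post resolves a SessionFactory by schema name:

    // merge() combines the detached changes with the current persistent
    // state instead of blindly overwriting the whole row
    @Override
    public void save(final T entity) {
        final Session session = SessionManager.getSessionFactory(schema).openSession();
        try {
            session.beginTransaction();
            session.merge(entity);
            session.getTransaction().commit();
        } finally {
            session.close();
        }
    }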

The only thing I'd like to draw attention to is the merge operation. If N threads try to modify the same entity concurrently, we need to make sure it won't be completely overwritten. For example, in the Users table, if the 1st thread changes the email and the 2nd changes the password concurrently, we expect to see both updates instead of the entire entity being replaced by the last thread. However, if both threads try to change the same field, that particular value should of course be overwritten. To handle all these cases, we should also add a special annotation to all the entities that could potentially be modified.

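For example (a sketch; column names are illustrative):

    import javax.persistence.Entity;
    import javax.persistence.Table;
    import org.hibernate.annotations.DynamicUpdate;

    // with @DynamicUpdate Hibernate includes only the modified columns
    // into the generated UPDATE statement, so parallel changes to
    // different fields don't overwrite each other
    @Entity
    @DynamicUpdate
    @Table(name = "USERS")
    public class User extends BaseEntity {
        private String email;
        private String password;
        // getters and setters omitted for brevity
    }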

The DynamicUpdate annotation will help to get the latest entity state before merging.

However, if you do want entity-overwriting behavior, you can simply skip the DynamicUpdate annotation and use any of the following APIs: save / saveOrUpdate / merge.

The next question is: how do we call the save method within a test case? The DP injects only entities into the test signature, and the save operation can be accessed only from a DAO object. Let's take a look at DataProviderUtils once again.

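A rough sketch of the relevant fragment (the DAO creation inside the provider loop; names follow the previous post and may differ from the repository version):

    final Dao dao = new DaoImpl(annotation.entity(), annotation.schema());
    final List retrievedFields = annotation.ids().length > 0
            ? dao.getByIds(annotation.ids())
            : dao.getAll();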

As you can see, we already use a DAO object for retrieving DB records. So now we may want to inject it into the entities, or better yet, into BaseEntity:

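A sketch of the updated superclass (field and method names are assumptions):

    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.MappedSuperclass;
    import javax.persistence.Transient;

    @MappedSuperclass
    public abstract class BaseEntity {

        @Id
        @GeneratedValue
        private long id;

        // @Transient prevents Hibernate from mapping the injected DAO
        @Transient
        private Dao dao;

        public void setDao(final Dao dao) {
            this.dao = dao;
        }

        @SuppressWarnings("unchecked")
        public void save() {
            dao.save(this);
        }

        public long getId() {
            return id;
        }
    }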

So now we can call the save method on any entity. Let's move back to DataProviderUtils to see how the DAO object is injected into a BaseEntity instance.

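Roughly, the updated fragment might look like this (DataSet is the helper container from the previous post):

    final Dao dao = new DaoImpl(annotation.entity(), annotation.schema());
    final List<BaseEntity> retrievedFields = annotation.ids().length > 0
            ? dao.getByIds(annotation.ids())
            : dao.getAll();
    final DataSet dataSet = new DataSet(retrievedFields);
    dataSet.updateFieldsWith(dao);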

Only 2 small changes were made in comparison with the previous version:
  1. The retrievedFields list became more explicit: it now collects BaseEntity objects. We need this update to avoid further casting.
  2. DataSet was extended with an updateFieldsWith API for storing DAO objects.
Let's take a look at DataSet modifications:

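A minimal sketch of the new API, assuming DataSet keeps the retrieved entities in a fields list:

    public void updateFieldsWith(final Dao dao) {
        for (BaseEntity field : fields) {
            field.setDao(dao);
        }
    }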

As you can see, we're looping through the fields list and injecting the DAO object.

Now we can easily save our entities directly from test cases:

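A sketch of what such tests might look like (the @Entity annotation and its attribute values follow the previous post and are illustrative):

    @Test(dataProvider = "dataProvider", dataProviderClass = DataProviderUtils.class)
    @Entity(entity = User.class, schema = "automation", ids = {1})
    public void updateEmailAndPasswordTest(final User user) {
        user.setEmail("test.user1@email.com");
        user.setPassword("password1");
        user.save();
    }

    @Test(dataProvider = "dataProvider", dataProviderClass = DataProviderUtils.class)
    @Entity(entity = User.class, schema = "automation", ids = {1})
    public void updateEmailTest(final User user) {
        user.setEmail("test.user3@email.com");
        user.save();
    }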

These 2 tests modify the same entity. If we run them concurrently (assuming that the default password value is "password"), we may see 2 different results:
  1. email = test.user1@email.com, password = password1.
  2. email = test.user3@email.com, password = password1.
Depending on the threads' execution order, the email field may be overwritten, but the password will be merged independently, as only 1 thread touches it.

That's pretty much it. You can find sources on GitHub.

Thursday, November 27, 2014

DDT: generic DataProvider for handling multiple DB entities in a single test

There are plenty of approaches for processing expected data that help us avoid hardcoding and simplify test case maintenance. In this article we'll look at a way of providing test data via a DB.

As you may know, TestNG has an internal mechanism for working with user data, called DataProvider (DP). It's pretty straightforward for common cases, but due to its static nature we can face various issues when scaling tests. Parallel execution can deal with a static context only if you manually take care of your objects' thread safety.

The other question is: what's the easiest way of pulling DB data using the DP mechanism? The common approach assumes creating a model with a corresponding DAO / DAOImpl. It's an obvious task for pure developers, but it can be very hard for automation engineers, who often don't have deep coding experience. And even if they do, the next question would be: how do we ask the DP to pull data independently of the entity type, its schema, or an unknown number of rows? What if we need to use several entities within 1 test? What if we even need to use different DBs? Quite enough questions to grab a beer and completely forget about this idea...

If you're still here, I'll show you how to create a generic, thread-safe approach for pulling DB data and easily using your models within tests.

Let's start with the preparation. You'll need 2 sample MySQL data sources, AUTOMATION and PRODUCTION, with the following structure.



Fill the created tables with some sample data:






So now we have 2 DBs with 5 tables in total. I guess you have some questions regarding the IMPORT_DATA table's purpose; I'll explain it a little bit later.

Let's continue with the configuration. Assuming that we're going to scale our tests, it's better to rely on one of the existing specialized libraries to be confident about data thread-safety. So the first thing you need to read about is Hibernate ORM.

Now add the following dependencies into your pom.xml to start working with Hibernate:

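A minimal dependency set might look like this (versions are illustrative):

    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-core</artifactId>
        <version>4.3.7.Final</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.34</version>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>6.8.8</version>
    </dependency>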

To operate with DB data in Java code, first we need to create appropriate table representations. BaseEntity will be an abstract superclass for all the entities.

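A minimal sketch:

    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.MappedSuperclass;

    // @MappedSuperclass lets subclasses inherit the ID mapping
    // without BaseEntity being an entity itself
    @MappedSuperclass
    public abstract class BaseEntity {

        @Id
        @GeneratedValue
        private long id;

        public long getId() {
            return id;
        }

        public void setId(final long id) {
            this.id = id;
        }
    }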

To avoid further ID duplication (as this column is present in all tables), it's enough to define the field only once, at the parent class level.

Now let's create all the entities using JPA annotations:

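For example, a Users mapping might look like this (table and column names are assumptions):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Table;

    @Entity
    @Table(name = "USERS")
    public class User extends BaseEntity {

        @Column(name = "email")
        private String email;

        @Column(name = "password")
        private String password;

        public String getEmail() {
            return email;
        }

        public void setEmail(final String email) {
            this.email = email;
        }

        public String getPassword() {
            return password;
        }

        public void setPassword(final String password) {
            this.password = password;
        }
    }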



Try to create the appropriate entities for the PRODUCTION database yourself. It's important to specify both getters and setters for all the fields.

Well, we are ready to map the newly created entities to the real DB tables in Hibernate config files. As we have 2 data sources, we need to create an xml config for each:

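A sketch of, say, automation.cfg.xml (the URL, credentials and package names are illustrative):

    <!DOCTYPE hibernate-configuration PUBLIC
            "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
            "http://hibernate.org/dtd/hibernate-configuration-3.0.dtd">
    <hibernate-configuration>
        <session-factory>
            <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
            <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/automation</property>
            <property name="hibernate.connection.username">root</property>
            <property name="hibernate.connection.password">root</property>
            <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
            <mapping class="entities.User"/>
            <mapping class="entities.File"/>
            <mapping class="entities.ImportData"/>
        </session-factory>
    </hibernate-configuration>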

Don't forget to change the user credentials. As you can see, there are 3 mappings pointing to the newly created entity classes. Create the same config for the PRODUCTION data source by yourself. Now Hibernate can relate the entities to the real DB tables.

To load the config files and start using the DB entities, we can create a simple utility class for session management:

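A sketch of such a utility, with one SessionFactory per data source, keyed by schema name (config file names are assumptions):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public final class SessionManager {

        private static final Map<String, SessionFactory> FACTORIES =
                new ConcurrentHashMap<>();

        static {
            FACTORIES.put("automation", new Configuration()
                    .configure("automation.cfg.xml").buildSessionFactory());
            FACTORIES.put("production", new Configuration()
                    .configure("production.cfg.xml").buildSessionFactory());
        }

        private SessionManager() {
        }

        public static SessionFactory getSessionFactory(final String schema) {
            return FACTORIES.get(schema);
        }
    }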

As you can see, we establish connections with both DBs, and it's done before test execution. To distinguish them, we can use the schema name as a key.

I've already mentioned that we're looking for a generic solution, so let's create a generic DAO interface with its implementation.


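A sketch of the contract (method names may differ from the repository version):

    import java.util.List;

    public interface Dao<T extends BaseEntity> {

        T getById(long id);

        List<T> getByIds(long[] ids);

        List<T> getAll();

        void commit();

        void rollback();
    }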

As we'll use only SELECT queries, a generic DAO should be quite enough. You can ignore commit / rollback for common cases; they were left in for some tricky moments. And here's the implementation of some common actions:

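An implementation sketch (simplified; e.g. getByIds could also be done via a single Criteria query):

    import java.util.ArrayList;
    import java.util.List;
    import org.hibernate.Session;

    public class DaoImpl<T extends BaseEntity> implements Dao<T> {

        private final Class<T> entityClass;
        private final Session session;

        public DaoImpl(final Class<T> entityClass, final String schema) {
            this.entityClass = entityClass;
            this.session = SessionManager.getSessionFactory(schema).openSession();
            this.session.beginTransaction();
        }

        @Override
        @SuppressWarnings("unchecked")
        public T getById(final long id) {
            return (T) session.get(entityClass, id);
        }

        @Override
        public List<T> getByIds(final long[] ids) {
            final List<T> results = new ArrayList<>();
            for (long id : ids) {
                results.add(getById(id));
            }
            return results;
        }

        @Override
        @SuppressWarnings("unchecked")
        public List<T> getAll() {
            return session.createCriteria(entityClass).list();
        }

        @Override
        public void commit() {
            session.getTransaction().commit();
        }

        @Override
        public void rollback() {
            session.getTransaction().rollback();
        }
    }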

We're passing the entity class and schema as constructor parameters to allow retrieving records in a generic way.

That's pretty much it for the DB configuration. So now we're ready to create the generic DP.

Let's assume that we can specify any number of entities we want to use within a particular test. But how do we pass them all into the DP? Custom annotations will help.

If we used a single DB, 1 common annotation with the entity class as a parameter would be quite enough. But in the case of multiple data sources, we face the following questions:

  • How to define a list of entities?
  • How to identify schema for each specified entity? 

So ideally, we need to define an annotation that contains a schema name besides the entity. But how can several such annotations be combined on a single test then? Thanks to the Java 8 developers and their repeatable annotations improvement.


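A sketch of the repeatable annotation pair (attribute names follow the text below; the two types go into separate files):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Repeatable;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @Repeatable(Entities.class)
    @interface Entity {

        Class<? extends BaseEntity> entity();

        String schema();

        int invocationCount() default 0;

        long[] ids() default {};
    }

    // the implicit container collecting the repeated annotations
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Entities {
        Entity[] value();
    }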

The Repeatable option allows us to use multiple Entity annotations on a single test case. As you can also see, 2 more parameters were added: invocationCount and ids. Both are optional, but they can be very important. If we specify several entities with different sizes, which size should be chosen by the DP as the base one? To handle such a situation, min size lookup logic was added. This minimum is used to iterate over the retrieved data.

Now let's look at DP implementation:

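A condensed sketch, assuming DataSet simply wraps the retrieved list (invocationCount handling and edge cases are omitted; see the repository for the full version):

    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;
    import org.testng.annotations.DataProvider;

    public class DataProviderUtils {

        @DataProvider(name = "dataProvider", parallel = true)
        public static Object[][] dataProvider(final Method method) {
            final List<DataSet> dataSets = new ArrayList<>();
            int minSize = Integer.MAX_VALUE;

            for (Entity annotation : method.getAnnotationsByType(Entity.class)) {
                final Dao dao = new DaoImpl(annotation.entity(), annotation.schema());
                // the ids array has top priority; otherwise everything is retrieved
                final List<BaseEntity> records = annotation.ids().length > 0
                        ? dao.getByIds(annotation.ids())
                        : dao.getAll();
                minSize = Math.min(minSize, records.size());
                dataSets.add(new DataSet(records));
            }

            // the min size defines how many rows the provider emits
            final Object[][] output = new Object[minSize][dataSets.size()];
            for (int row = 0; row < minSize; row++) {
                for (int col = 0; col < dataSets.size(); col++) {
                    output[row][col] = dataSets.get(col).getFields().get(row);
                }
            }
            return output;
        }
    }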

As you can see, we're looping through the entity annotations and creating a generic DAO using the entity class name and schema. The next goal is to analyze the optional parameters and choose a valid record extraction strategy. The ids array has top priority; if it's not specified, we retrieve everything. Depending on the computed min size, we prepare an output container. To store transitional results (the retrieved records), we use a simple list of DataSet objects.

Let's look at how simple and flexible using the generic DP in tests now is:

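For example (a sketch; annotation attribute values are illustrative):

    @Test(dataProvider = "dataProvider", dataProviderClass = DataProviderUtils.class)
    @Entity(entity = User.class, schema = "automation")
    @Entity(entity = ImportData.class, schema = "automation", ids = {1, 2})
    public void multipleEntitiesTest(final User user, final ImportData importData) {
        System.out.println(user.getEmail() + " : " + importData.getId());
    }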

And here are the results of running the above tests in 3 parallel threads:


As you can see, we can specify as many entities as we want, independently of the data source. They could even be DBs of different types, e.g. MySQL and Oracle. You can also use the same entities with different ids.

The last thing I want to cover is the IMPORT_DATA table example, which is used in the third test. As you may notice, this table differs from the others: it consists purely of foreign keys. It's a slightly tricky moment. If we need lots of entities within a single test, it's much easier to create a composite table that refers to particular records from other tables, instead of setting N separate entities via annotations. Just drill into the ImportData entity again to see how easily Hibernate can retrieve the Users and Files records by their foreign keys.

That's pretty much it. You can find the full source code, as always, on GitHub. Take your time and happy coding!

Friday, October 24, 2014

RESTful SikuliX

As you may know, a new version of SikuliX should be released soon. I had a chance to participate in the development process, so in this article I'd like to share some notes about a new feature - a remote SikuliX client / server implemented via REST.

The idea of using SikuliX in a remote context appeared due to the automation architecture chosen for the project I'm currently working on. As we scale our tests across a number of environments, we needed to solve some complicated tasks on remote VMs that couldn't be handled with common libraries like Selenium. SikuliX had everything we needed, except a remote platform we could use in the context of the existing architecture.

After a number of attempts, I finally found an approach that would fit all the project's needs. A RESTful client-server platform was built that included the latest SikuliX API and several useful utility classes for interacting with a remote file system. After some time I realized that this approach could be useful for others, so now I'm happy to announce new experimental features that have already been pushed into the SikuliX repository - RemoteServer and RESTClient.

These modules are not fully tested yet, but you can give them a try and report any issues you find or push appropriate pull requests. There are some tests written for Windows / Linux that could be a starting point for you.

And now let's take a look at provided functionality.

The RemoteServer module is based on Grizzly HTTP Server and contains the following endpoints:
  • http://ip:port/sikuli/cmd/execute - uses the apache commons exec library for running command-line processes on a remote VM.
  • http://ip:port/sikuli/file/upload - uploads a list of provided files to the given remote path.
  • http://ip:port/sikuli/file/download - downloads a single file from a remote path (a multiple-file download feature is not implemented yet).
  • http://ip:port/sikuli/file/delete - removes a file or directory by a given remote path (quick note: file system operations were implemented using the apache commons io library).
  • http://ip:port/sikuli/file/exists - checks a list of inputs for whether the appropriate files or directories exist.
  • http://ip:port/sikuli/file/createFolder - creates a directory by a given remote path.
  • http://ip:port/sikuli/file/cleanFolder - removes the content of a given remote directory.
  • http://ip:port/sikuli/file/copyFolder - copies one folder's content to another.
  • http://ip:port/sikuli/image/click - uses the SikuliX API for clicking a provided image with a given wait timeout on a remote VM.
  • http://ip:port/sikuli/image/setText - uses the SikuliX API to type text into the appropriate control with a given wait timeout on a remote VM.
  • http://ip:port/sikuli/image/exists - uses the SikuliX API for checking whether an image is present on the screen of a remote VM.
  • http://ip:port/sikuli/image/dragAndDrop - uses the SikuliX API for dragging and dropping objects on a remote VM.
If you take a look at the code, you'll find it pretty straightforward, e.g. here's the delete processor:
    @POST
    @Path("/delete")
    public Response delete(@QueryParam("path") final String path) {
        return Response.status(FileUtility.delete(path) ?  
               Response.Status.OK : Response.Status.NOT_FOUND) 
               .build();
    }
As you may have seen above, the common SikuliX APIs provide a way of setting a wait timeout. This functionality is implemented via the observer mechanism, which uses the onAppear event to process requested actions:
    private RemoteDesktop onAppear(final Pattern element,  
            final SikuliAction action, final String text) {
        desktop.onAppear(element, new ObserverCallBack() {
            public void appeared(ObserveEvent e) {
                switch (action) {
                    case CLICK:
                        e.getMatch().click();
                        break;
                    case TYPE:
                        e.getMatch().click();
                        e.getMatch().type(text);
                        break;
                }

                desktop.stopObserver();
            }
        });

        return this;
    }
Before moving to the client's part, let's take a look at one more interesting block related to remote command-line execution. As I've mentioned above, we use the apache commons exec library for this purpose. And I must say, it provides a fantastic feature that I struggled with for a while - a delayed exit from the main thread by timeout. You may know that a common java command-line executor will hang forever if the started process waits for user input or is just a simple server application. Let's look at what commons exec provides for this particular case:
    public static int executeCommandLine(final Command command) {
        if (command.getProcess() == null) {
            CMD_LOGGER.severe("There's nothing to execute.");
            return -1;
        }

        CMD_LOGGER.info("Processing the following command: " + 
                command.getProcess() + (command.getArgs() != null ? 
                " " + command.getArgs() : ""));

        final long timeout = (command.getTimeout() > 0 ? 
                command.getTimeout() : 0) * 1000;
        final CommandLine commandLine = new CommandLine( 
                separatorsToSystem(quoteArgument( 
                    command.getProcess())));

        if (command.getArgs() != null) {
            for (String arg : command.getArgs()) {
                commandLine.addArgument(quoteArgument(arg));
            }
        }

        final ExecutionResultsHandler resultHandler = 
                new ExecutionResultsHandler();
        final PumpStreamHandler streamHandler = 
                new PumpStreamHandler( 
                    new ExecutionLogger(CMD_LOGGER, Level.INFO), 
                    new ExecutionLogger(CMD_LOGGER, Level.SEVERE));
        final DefaultExecutor executor = new DefaultExecutor();

        executor.setStreamHandler(streamHandler);
        executor.setProcessDestroyer( 
                new ShutdownHookProcessDestroyer());

        try {
            executor.execute(commandLine, resultHandler);
            resultHandler.waitFor(timeout);
        } catch (InterruptedException | IOException e) {
            CMD_LOGGER.severe("Command execution failed: " 
                + e.getMessage());
            return -1;
        }

        return resultHandler.getExitValue();
    }
ExecutionResultsHandler will let the process be released after a given timeout.

That's pretty much it for the server side. To build the remote server, use the following command:
mvn clean install
It will create a jar with all necessary dependencies in your target folder.

To start server, use the following command:
java -jar sikulixremoteserver-1.1.0-jar-with-dependencies.jar port
The port is optional. You can skip it if you want to use the default one - 4041.

Now it's time to look at the client side, which is located inside the RESTClient module.

There's nothing special here. The code is pretty straightforward, as it only takes care of sending the necessary objects to the endpoints listed above. The client implements the SikuliX interface. Besides that, you may find some other interfaces used as input method arguments. We decided to leave them in the project to allow users to override the client's methods and the common sending containers. It was done for a number of reasons. One of them is the incompatibility between Jersey 1.x and 2.x versions. If your project uses Jersey 1.x dependencies, you won't be able to use the new SikuliX REST client, as it's based on Jersey 2.x. In such a case you will need to implement your own client using the SikuliX remote interfaces.

As a simple example of REST call implementation, let's take a look at multiple files upload API:
    public void uploadFile(final List<String> filesPath,
            final String saveToPath) {
        final MultiPart multiPart = 
            new MultiPart(MediaType.MULTIPART_FORM_DATA_TYPE);

        for (String path : filesPath) {
            multiPart.bodyPart(new FileDataBodyPart("file", 
                new File(separatorsToSystem(path)), 
                MediaType.APPLICATION_OCTET_STREAM_TYPE));
        }

        final Response response = service.path("file")
                .path("upload")
                .queryParam("saveTo", separatorsToSystem(saveToPath))
                .request(MediaType.APPLICATION_JSON_TYPE)
                .post(Entity.entity(multiPart, 
                    multiPart.getMediaType()));

        if (response.getStatus() == 
                Response.Status.OK.getStatusCode()) {
            CLIENT_LOGGER.info("File(-s) " + filesPath + 
                " has been saved to " + 
                separatorsToSystem(saveToPath) + " on " + ip);
        } else {
            CLIENT_LOGGER.severe("Unable to save file(-s) " + 
                filesPath + " to " + separatorsToSystem(saveToPath) + 
                " on " + ip);
        }

        response.close();
    }
As you can see, we can pass a list of file paths for uploading. It's pretty useful when we need to copy the expected images we want SikuliX to interact with to a remote VM.

The provided tests were created for Windows and haven't been tested on Unix or Mac yet. If you're going to give them a try, you'll need to install and start the remote server first. By default all the tests are disabled to avoid build failures, as such verifications are very platform and configuration specific. To enable them, just change the following option in the pom.xml:
skipTests=false
To choose the classes to be included in a test run, you need to modify suites.xml located in the resources folder. Actually, you should carefully explore the resources before execution. The batch scripts' extensions should be renamed to .bat. And you may also need to provide your own images, as they are very OS specific.

When you finish with the resources, you'll need to update the BaseTest configuration:
  • SIKULIX_SERVER_IP must refer to your newly raised remote server's IP address.
  • WAIT_TIMEOUT tells SikuliX how long to wait until the expected image appears on the screen.
  • SIMILARITY is the level used during image comparison.
As we've already mentioned the file upload scenario, let's take a look at the appropriate test:
    @Test
    public void uploadFilesToServer() {
        getClient().uploadFile(Arrays.asList( 
                BATCH_RUNNER_SCRIPT.getPath(), 
                BATCH_SAMPLE_SCRIPT.getPath()), 
                SERVER_PATH.getPath());

        assertTrue(getClient().exists(Arrays.asList( 
                SERVER_PATH.getPath() + "/" + 
                    BATCH_RUNNER_SCRIPT.getName(), 
                SERVER_PATH.getPath() + "/" + 
                    BATCH_SAMPLE_SCRIPT.getName())));
    }
To perform common SikuliX actions, you can use the following example:
    @Test
    public void callCommandLineFromStartMenu() {
        getClient().click(new ImageBox( 
                getResource(RESOURCE_BUTTON_START_IMAGE).getPath(), 
                SIMILARITY), WAIT_TIMEOUT);

        getClient().setText(new ImageBox( 
                getResource(RESOURCE_INPUT_FIND_FILES_IMAGE)
                    .getPath(), SIMILARITY), 
                "cmd" + Key.ENTER, WAIT_TIMEOUT);

        assertTrue(getClient().exists(new ImageBox( 
                getResource(RESOURCE_INPUT_CMD_IMAGE).getPath(), 
                SIMILARITY), WAIT_TIMEOUT));
    }
You can find more examples in official SikuliX2 GitHub repository.

Take your time and let me or Raimund Hocke know if you find these new features useful and what could be done better.

Saturday, October 11, 2014

How Java 8 can simplify test automation

In this short article we'll take a look at some useful Java 8 tricks that may help us simplify the development of automated tests.

As you know, lots of interesting features were added: lambdas, functional interfaces, streams, etc. I guess beginners or old-school developers may find this material subtle or even difficult, but I believe that when you dig deeper, you'll see how much easier it makes features we used to spend lots of code lines on.

One of the most common problems we face in web automation is timeouts. Selenium fans know how painful it can be to implement scenarios for modern JS-based web applications with lots of dynamic components. End users can't even imagine the price of a fancy layout in terms of automation effort.

There are a number of techniques for resolving test failures related to element presence, visibility and similar issues. One of them is using explicit waits, aka WebDriverWait with ExpectedConditions.

Generally, we can create a custom findElement implementation to force webdriver to wait for an element's visibility for <= N time units:

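A minimal sketch, assuming a driver field and a TIMEOUT constant:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    // waits for the element's visibility up to TIMEOUT seconds before returning it
    public WebElement findElement(final By locator) {
        return new WebDriverWait(driver, TIMEOUT)
                .until(ExpectedConditions.visibilityOfElementLocated(locator));
    }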

And it will work for common scenarios. But what if we want to wait for some other condition, like the element being clickable? In that case we would need to create overloaded methods, or, for example, prepare an enum whose values are passed to findElement to call the appropriate condition depending on the input. In other words, we would need some workaround to make our search method generic. With Java 8 this task can be easily done via functional interfaces:

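A sketch of the generic version:

    import java.util.function.Function;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedCondition;
    import org.openqa.selenium.support.ui.WebDriverWait;

    // the condition itself is now a parameter:
    // any By -> ExpectedCondition<WebElement> function fits
    public WebElement findElement(final By locator,
            final Function<By, ExpectedCondition<WebElement>> condition) {
        return new WebDriverWait(driver, TIMEOUT)
                .until(condition.apply(locator));
    }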

As you can see, we pass a function with 1 input - By - and 1 output - ExpectedCondition<WebElement>. And now let's look at the ExpectedConditions class:

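The two relevant factory methods (bodies omitted):

    public static ExpectedCondition<WebElement> visibilityOfElementLocated(final By locator) { ... }

    public static ExpectedCondition<WebElement> elementToBeClickable(final By locator) { ... }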

Both methods have the same input / output, which means they match our function definition. Well, let's take a look at how easily we can pass references to the above methods into our findElement implementation:

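For example (locators are illustrative):

    // method references let us plug any matching condition into the same search
    WebElement visible = findElement(By.id("login"),
            ExpectedConditions::visibilityOfElementLocated);

    WebElement clickable = findElement(By.id("submit"),
            ExpectedConditions::elementToBeClickable);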

As you can see, the new language level provides a way of referring to static methods. Technically, you can even refer to instance methods, but that's out of scope for this article.

Besides method references, you can also see Optional - a built-in container class. It's pretty good for validating objects that may potentially be null. It supports flexible filtering and setting a default value if the input argument is null.

Let's move on and take a look at an example of getting text from a list of nodes:

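A pre-Java 8 sketch:

    public List<String> getTexts(final List<WebElement> elements) {
        final List<String> texts = new ArrayList<>();
        // collect the trimmed text of every node manually
        for (WebElement element : elements) {
            texts.add(element.getText().trim());
        }
        return texts;
    }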

It's pretty straightforward: we just loop through the WebElements list to retrieve the text and put it into a new array list. And now let's look at how it could be implemented via Java 8 streams:

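The same logic as a stream pipeline:

    public List<String> getTexts(final List<WebElement> elements) {
        return elements.stream()
                .map(WebElement::getText)
                .map(String::trim)
                .collect(Collectors.toList());
    }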

We use stream for looping through the list of WebElements. map converts each element into another object via the given function; in our case we call WebElement::getText, trim the result and save it into a new object. Then we use collect to put all the text nodes into a new list. Collectors helps to identify the output collection type. It's a very useful utility class that you should definitely take a look at.

Our last example is a little bit more complicated. It's based on the source code of a previous article. If you remember, we used a custom TestNG listener for retrieving basic test results info and populating a Mustache template structure. We had to loop through complicated nested collections to get different entities, and also sort the intermediate and final object lists.

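A sketch of the pre-Java 8 version inside generateReport (Suite / TestResult are the custom wrappers from that article; the comparators are assumptions):

    final List<Suite> wrappedSuites = new ArrayList<>();
    for (ISuite suite : suites) {
        final List<TestResult> testResults = new ArrayList<>();
        for (ISuiteResult result : suite.getResults().values()) {
            // an if guard that the stream version replaces with filter
            if (result.getTestContext() != null) {
                testResults.add(new TestResult(result.getTestContext()));
            }
        }
        Collections.sort(testResults, testResultComparator);
        wrappedSuites.add(new Suite(suite.getName(), testResults));
    }
    Collections.sort(wrappedSuites, suiteComparator);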

Let's see what Java 8 provides for the same task:

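A sketch of the same transformation with streams:

    final List<Suite> wrappedSuites = suites.stream()
            .map(suite -> new Suite(suite.getName(),
                    suite.getResults().values().stream()
                            .filter(result -> result.getTestContext() != null)
                            .map(result -> new TestResult(result.getTestContext()))
                            .sorted(testResultComparator)
                            .collect(Collectors.toList())))
            .sorted(suiteComparator)
            .collect(Collectors.toList());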

First of all, the for loops were replaced with streams, and the if statement was replaced with filter. Then we used map to apply the following complicated transformation.

One of the main goals was to get the test context from internal TestNG collections and save it inside a custom TestResult entity for further parsing. So we had to loop through each test result group - suite.getResults().values().stream() - and put the appropriate item - results.getTestContext() - into an implicit intermediate collection, List<TestResult>. By the way, sorted helped us apply a custom comparator on the fly.

To preserve the test results' logical hierarchy, this collection was passed as a constructor parameter to the Suite entity. The last steps were the same as for the intermediate collection: calling sorted with a custom lambda comparator and putting the Suite objects into the output List<Suite>.

That's pretty much it. Now you're aware of some of the latest Java 8 features and their positive impact on automation efforts.

Wednesday, October 8, 2014

Custom reporting engine with Mustache

In addition to the previous post based on Velocity, I'd like to share some useful notes about another interesting template engine - Mustache. Let's take a look at its syntax to understand how easy it is to inject different kinds of objects into a template's context.

In this article we will create a simple test results overview template based on TestNG statistics.

To start developing templates with Mustache, first you need to include the appropriate dependency, based on the Mustache.java project. As we will also use TestNG, let's add it to our root pom as well.

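A minimal dependency set might look like this (versions are illustrative):

    <dependency>
        <groupId>com.github.spullara.mustache.java</groupId>
        <artifactId>compiler</artifactId>
        <version>0.8.17</version>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>6.8.8</version>
    </dependency>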

To add *.mustache template syntax support to IntelliJ IDEA, you can install the Handlebars/Mustache plugin:


Let's create a report.mustache file in the resources folder.
To refer to a java object that you want to inject into the template, use the following syntax:

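For example:

    {{reportTitle}}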

Where reportTitle is the name of a declared java variable. So we should use the {{varName}} syntax for simple objects.

As Mustache is a logic-less template engine, there are no loops or conditional statements available in its syntax. So how can we refer to collections then?

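A sketch of iterative sections (the field names inside the sections are illustrative):

    {{#suites}}
        {{name}}
        {{#testResults}}
            {{#translate}}report.label.status{{/translate}}: {{status}}
        {{/testResults}}
    {{/suites}}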

Where suites and testResults are lists of java objects. Mustache automatically resolves such a construction as an iterative section. Everything located between {{#listName}} and {{/listName}} will appear N times in the output report, where N = listName.size().

In the above example you can also see a new construction that differs from the common variable syntax: {{#translate}}some text{{/translate}}. This particular object stores a resource bundle and returns locale-dependent property values for the keys placed between these tags. Such syntax is reserved for Mustache functions. Among them there's a BundleFunctions class you can use for loading the needed resources.

So how can we override the default TestNG report with our custom one using Mustache?

First we need to create a custom listener that implements the IReporter interface. When we override the generateReport method, we get access to the test results context.

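A sketch of such a reporter (the class name, output file name and scope content are illustrative):

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import com.github.mustachejava.DefaultMustacheFactory;
    import com.github.mustachejava.Mustache;
    import com.github.mustachejava.MustacheFactory;
    import org.testng.IReporter;
    import org.testng.ISuite;
    import org.testng.xml.XmlSuite;

    public class MustacheReportListener implements IReporter {

        @Override
        public void generateReport(final List<XmlSuite> xmlSuites,
                final List<ISuite> suites, final String outputDirectory) {
            // compile the template, then execute it with a scope map
            final MustacheFactory factory = new DefaultMustacheFactory();
            final Mustache template = factory.compile("report.mustache");

            try (Writer writer = new FileWriter(outputDirectory + "/report.html")) {
                template.execute(writer, getScope(suites)).flush();
            } catch (IOException e) {
                throw new RuntimeException("Unable to generate report", e);
            }
        }

        private Map<String, Object> getScope(final List<ISuite> suites) {
            // keys reflect the template variables discussed above
            final Map<String, Object> scope = new HashMap<>();
            scope.put("reportTitle", System.getProperty("report.title"));
            scope.put("suites", suites); // or the wrapped Suites / TestResults entities
            return scope;
        }
    }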

To create a report from a Mustache template, first we need to compile it and then execute it with a few obvious parameters. FileWriter points to the output report file. The getScope method just returns a map of objects, where the keys reflect the template variables we discussed above, and the values are the objects we want to display in the report. Note that all these objects should have public getters to give the Mustache engine access to their values.

Now we need to add the custom listener to the maven-surefire-plugin configuration:

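A configuration sketch (the plugin version, suite file path and listener class name are illustrative):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.17</version>
        <configuration>
            <suiteXmlFiles>
                <suiteXmlFile>src/test/resources/base.suite.xml</suiteXmlFile>
            </suiteXmlFiles>
            <properties>
                <property>
                    <name>usedefaultlisteners</name>
                    <value>false</value>
                </property>
                <property>
                    <name>listener</name>
                    <value>listeners.MustacheReportListener</value>
                </property>
            </properties>
            <systemPropertyVariables>
                <report.title>Test results overview</report.title>
                <locale>en</locale>
            </systemPropertyVariables>
        </configuration>
    </plugin>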

As you can see, the default TestNG listeners are disabled. Also, 2 custom properties were added, report.title and locale, to make our example more realistic. Finally, to scale our test examples, a composite base.suite.xml was added.

The sample source code also contains 2 entities, Suites and TestResults - classes that parse the ISuite list given to the overridden generateReport method. Note that we could use this list directly in the Mustache template without creating additional wrappers, but in that case the template's structure would be more complicated due to the depth of TestNG's internal objects.

You can pull the sources from GitHub. The output report will look like the following:


Thursday, September 25, 2014

Jenkins plugin for killing Selenium Grid

In this article I'd like to describe some basics of Jenkins CI plugin development.

Those who have ever worked with Selenium Grid know that it may sometimes get stuck for a number of reasons. To minimize problems with test execution on a stuck environment, we can create a simple plugin that can shut down the hub / nodes. Of course, killing the environment alone wouldn't be enough to make our test execution process more stable; we would also need to think about some trigger to bring our configuration back up. But let's start with the simple things first...

Before coding, you'll need to make some preparations. I hope you already have Maven configured. To start developing Jenkins plugins, you need to add the following to .m2/settings.xml:

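Per the official tutorial, it should contain roughly the following:

    <settings>
        <pluginGroups>
            <pluginGroup>org.jenkins-ci.tools</pluginGroup>
        </pluginGroups>
        <profiles>
            <!-- gives access to the Jenkins artifact repositories -->
            <profile>
                <id>jenkins</id>
                <activation>
                    <activeByDefault>true</activeByDefault>
                </activation>
                <repositories>
                    <repository>
                        <id>repo.jenkins-ci.org</id>
                        <url>http://repo.jenkins-ci.org/public/</url>
                    </repository>
                </repositories>
                <pluginRepositories>
                    <pluginRepository>
                        <id>repo.jenkins-ci.org</id>
                        <url>http://repo.jenkins-ci.org/public/</url>
                    </pluginRepository>
                </pluginRepositories>
            </profile>
        </profiles>
    </settings>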

Now let's create a project skeleton with the following command: mvn -cpu hpi:create.
It will ask you several questions about the group and artifact ids. For this example I left the default groupId and set the artifactId to selenium-utils. After finishing the skeleton preparation, you'll see a newly created selenium-utils folder with a basic Jenkins plugin example that can be packaged into .hpi format (which you can then upload directly to Jenkins via the plugin manager) or even deployed directly to a test Jenkins instance. You can read more about the project structure and common commands in the official tutorial.

Assuming you've already played with the auto-generated basic Jenkins plugin, let's start developing our own, keeping in mind that our main goal is to create a trigger for killing the Selenium Grid hub / nodes.

First you need to add the following dependencies into your pom.xml:

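Assuming a Jersey 1.x client, the dependency might look like this (version is illustrative):

    <dependency>
        <groupId>com.sun.jersey</groupId>
        <artifactId>jersey-client</artifactId>
        <version>1.18.1</version>
    </dependency>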

Selenium Grid provides REST APIs we can use to shut down the hub / nodes, e.g. to kill the hub we need to send the following GET request:

http://hubIp:port/selenium-server/driver?cmd=shutDownSeleniumServer 

The Jersey client can help us communicate with the Grid. So let's create a simple client:

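A sketch using the Jersey 1.x API (class and method names are assumptions):

    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.ClientResponse;

    public final class SeleniumClient {

        // the shutdown command works for both the hub and the nodes
        private static final String SHUTDOWN_URL =
                "http://%s:%s/selenium-server/driver?cmd=shutDownSeleniumServer";

        private SeleniumClient() {
        }

        public static boolean shutdown(final String ip, final String port) {
            final ClientResponse response = Client.create()
                    .resource(String.format(SHUTDOWN_URL, ip, port))
                    .get(ClientResponse.class);
            return response.getStatus() == 200;
        }
    }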

Actually, that's pretty much it for the plugin's core functionality. Now we only need to configure its layout and create the appropriate handlers.

Let's start with the resources folder: .jelly files are intended for creating form controls. The syntax is very similar to HTML. Note: if you use IntelliJ IDEA, you can install the Stapler plugin for .jelly syntax highlighting and IntelliSense support.

For this particular case we won't need global.jelly, so you can remove it. It would be required if we needed to set up some global settings in the following Jenkins section:


config.jelly introduces the form components to be displayed for the job's build action. Now let's think about which exact components we need to achieve our main goal - killing the hub / nodes. Generally, we only need to know the ip addresses and ports. Assume we have 1 hub and N nodes we want to shut down. For the hub we would probably need only 2 textboxes. But what about the nodes? If there can be N node configurations, it's not enough to have plain textbox components; we need something iterative. For this purpose we can use repeatableProperty. So our config.jelly will look like the following:

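A sketch (field names are assumptions, but they must match the java classes below):

    <?jelly escape-by-default='true'?>
    <j:jelly xmlns:j="jelly:core" xmlns:f="/lib/form">
        <f:entry title="Hub IP" field="hubIp">
            <f:textbox/>
        </f:entry>
        <f:entry title="Hub port" field="hubPort">
            <f:textbox/>
        </f:entry>
        <f:entry title="Nodes">
            <f:repeatableProperty field="nodeConfigurations"/>
        </f:entry>
    </j:jelly>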

In this part you should pay attention to the field attributes, as we'll soon refer to them in our java classes. Besides that, we see the repeatableProperty component - something that will iteratively appear on the form if we need more items. But this component needs to define some internal UI structure, e.g. the node ip / port textboxes. To define such a structure we need to create another config.jelly, placed in a separate folder. So let's prepare our folder structure first.

Rename the HelloWorldBuilder folder to SeleniumBuilder and create a new NodeConfiguration folder on the same level as SeleniumBuilder. Now let's create a config.jelly in the NodeConfiguration directory with the following repeatable content:

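A sketch of the repeatable block (the add button is rendered by the surrounding repeatable container; field names are assumptions):

    <?jelly escape-by-default='true'?>
    <j:jelly xmlns:j="jelly:core" xmlns:f="/lib/form">
        <f:entry title="Node IP" field="nodeIp">
            <f:textbox/>
        </f:entry>
        <f:entry title="Node port" field="nodePort">
            <f:textbox/>
        </f:entry>
        <f:entry title="">
            <div align="right">
                <f:repeatableDeleteButton/>
            </div>
        </f:entry>
    </j:jelly>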

Besides the common textboxes, you can see 2 buttons for adding a new / removing an existing node configuration block. We don't see any relation between these 2 jelly configs yet; it will appear only at the class level.

We're almost done with the layout. As you may have seen, there are some help files present besides the configs. These contain the user-friendly descriptions we see after clicking the question mark next to the appropriate field. Note that these files should be named help-fieldName, where fieldName is exactly the same as the one we created in our config.jelly. We'll skip this part.

Now it's time to create our plugin's engine, that will put everything together.

Important note: our resource config folders (SeleniumBuilder and NodeConfiguration) must be reflected by java classes. So we need to use exactly the same names for our java handlers.

Let's rename the existing HelloWorldBuilder class to SeleniumBuilder. It will be our main plugin class. As you may see, it extends the Builder class, which means an appropriate instance will be created when a user selects our plugin during job configuration.

All declared fields should match the names defined in the corresponding config.jelly. This lets us remember the hub / node settings we entered during job configuration.

The constructor should be annotated with @DataBoundConstructor and list all declared instance variables as parameters, so that we can initialize them in its body. We also need to create the appropriate getters for the fields. We'll extend this class with the node list a bit later.

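A sketch of the hub part:

    import hudson.tasks.Builder;
    import org.kohsuke.stapler.DataBoundConstructor;

    public class SeleniumBuilder extends Builder {

        // field names must match the ones in config.jelly
        private final String hubIp;
        private final String hubPort;

        @DataBoundConstructor
        public SeleniumBuilder(final String hubIp, final String hubPort) {
            this.hubIp = hubIp;
            this.hubPort = hubPort;
        }

        public String getHubIp() {
            return hubIp;
        }

        public String getHubPort() {
            return hubPort;
        }
    }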

When a user triggers a build, the perform method is invoked. At this point we can easily access our saved configuration. Let's skip it for now and return to it later.

Our plugin should also contain an extension point. Generally, it's a singleton that defines some common plugin configuration, such as the display name and form validation rules, and allows loading the persisted global configuration, etc.

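A sketch of such an extension point, nested inside SeleniumBuilder (the display name and validation rule are illustrative):

    @Extension
    public static final class DescriptorImpl extends BuildStepDescriptor<Builder> {

        @Override
        public boolean isApplicable(final Class<? extends AbstractProject> jobType) {
            return true;
        }

        @Override
        public String getDisplayName() {
            return "Kill Selenium Grid";
        }

        // a simple form validation rule: the field must not be empty
        public FormValidation doCheckHubIp(@QueryParameter final String value) {
            return value.isEmpty()
                    ? FormValidation.error("Please set the hub IP")
                    : FormValidation.ok();
        }
    }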

Some of you may notice that we missed one important part - the node configuration. As we defined an appropriate config.jelly, we should also create its java handler. It will look almost the same as SeleniumBuilder.

Let's create a new class NodeConfiguration in the same package as SeleniumBuilder. It should follow the same rules, but extend different classes.

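A sketch of the repeatable block handler, which is a describable rather than a Builder:

    import hudson.Extension;
    import hudson.model.AbstractDescribableImpl;
    import hudson.model.Descriptor;
    import org.kohsuke.stapler.DataBoundConstructor;

    public class NodeConfiguration extends AbstractDescribableImpl<NodeConfiguration> {

        private final String nodeIp;
        private final String nodePort;

        @DataBoundConstructor
        public NodeConfiguration(final String nodeIp, final String nodePort) {
            this.nodeIp = nodeIp;
            this.nodePort = nodePort;
        }

        public String getNodeIp() {
            return nodeIp;
        }

        public String getNodePort() {
            return nodePort;
        }

        @Extension
        public static class DescriptorImpl extends Descriptor<NodeConfiguration> {
            @Override
            public String getDisplayName() {
                return "";
            }
        }
    }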

If you remember from the jelly configuration part, the plugin's UI supports an iterative node configuration block. That means we should also create a list of NodeConfiguration objects and pass it to SeleniumBuilder's constructor. Note that it should still have the same name as the one we defined in config.jelly for repeatableProperty.

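Extending the SeleniumBuilder sketch above:

    private final List<NodeConfiguration> nodeConfigurations;

    @DataBoundConstructor
    public SeleniumBuilder(final String hubIp, final String hubPort,
            final List<NodeConfiguration> nodeConfigurations) {
        this.hubIp = hubIp;
        this.hubPort = hubPort;
        // the repeatable block may be absent, so guard against null
        this.nodeConfigurations = nodeConfigurations != null
                ? nodeConfigurations
                : Collections.<NodeConfiguration>emptyList();
    }

    public List<NodeConfiguration> getNodeConfigurations() {
        return nodeConfigurations;
    }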

I hope you've already added SeleniumClient to the project. Let's finalize our plugin and add the appropriate logic to the perform method. To shut down the hub / nodes, we just need to pass the ip / port parameters we've read from the plugin's UI.

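A sketch, using the SeleniumClient from above:

    @Override
    public boolean perform(final AbstractBuild build, final Launcher launcher,
            final BuildListener listener) {
        listener.getLogger().println("Shutting down hub " + hubIp + ":" + hubPort);
        SeleniumClient.shutdown(hubIp, hubPort);

        for (NodeConfiguration node : nodeConfigurations) {
            listener.getLogger().println("Shutting down node "
                    + node.getNodeIp() + ":" + node.getNodePort());
            SeleniumClient.shutdown(node.getNodeIp(), node.getNodePort());
        }
        return true;
    }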

That's pretty much it. Now let's test it: mvn hpi:run - it will run a Jenkins instance with the plugin already deployed.

If you open any job's configuration and expand the Build section, you'll see our plugin in the list:


Let's click it. As you can see, our plugin form contains exactly the same fields we configured in the jelly files. Validation prompts us to add some values.


Assuming you've already brought up the hub / nodes, let's fill in our fields and save the job.

Now if you trigger a new build and open the console log, you will probably see the following:


If you want to deploy the plugin to your own Jenkins instance, you should run the mvn install command first to create an .hpi file (it will appear in your project's target folder), which you can then manually upload from the Manage Plugins / Advanced section.

You can find full plugin's source on GitHub.

So what's next? Currently we have a mechanism for killing the hub / nodes. You would probably want to extend it by adding some new features, e.g. a restart trigger, or implement an even better solution. Take your time and play with the plugin. Good luck!