Saturday, November 21, 2015

WebDriver vs Select2

Nowadays it's quite popular to use the Select2 control instead of a common one. The second version has some cool features like filtering, tagging, themes support, etc. On the other hand, it's sometimes quite hard to automate interaction with such controls due to their dynamic nature.

So what challenges do we face while trying to access Select2 via WebDriver? First of all, we can't use the existing Select wrapper to control this component anymore. The other problem is dynamic filtering: as sendKeys prints text character by character, Select2 will constantly update its state during typing. Besides that, we can't predict the list items' loading time, as it depends heavily on collection size and performance.

We could try to play with WebDriverWait to resolve potential issues, but to be honest there are plenty of factors which may produce an unexpected result. It's quite hard to control this component even with the explicit waits technique. So how could we sort it out?

In this article I'll show how to create a custom Select2 wrapper, which will use its native API for further interaction.

We'll apply an existing template from one of my previous articles to avoid re-inventing the wheel. But first, let's take a look at Select2 native API, which we could use in our wrapper's implementation.


This is a common Select2 structure. As you can see, the old select control with its options' list is located below the main component. It's usually hidden. Options' values may differ from the displayed text. So ideally, it'd be nice to get an option's value by its visible text first, and then ask Select2 to display it.

Let's play with the browser console first. Assuming that we want to select Monday from the dropdown, we first need to retrieve the option's value, which equals m.


As you can see, it could be done via pure jQuery syntax.

So how could we ask Select2 to display the option which has the m value? There's a special function, select2, which lets us trigger different native actions like open, val, data, etc. It allows us to pass the option's value directly to the Select2 control to display Monday.


Technically, that's everything we need to reach our initial goal. Let's create the Select2 wrapper now.
We're calling JavascriptExecutor internally to apply the scenario we've already played with in the browser console. Our wrapper extends HTMLElement, which allows using the custom component directly in PageObjects without explicit initialization.
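In case the original listing isn't to hand, here's a rough sketch of the idea. Instead of a real JavascriptExecutor it accepts a plain Function, so the flow can be shown without a browser; the jQuery snippets and the selector used below are illustrative assumptions, not the exact code from the article.

```java
import java.util.function.Function;

// A rough sketch of the wrapper idea. In the real code the executor would
// delegate to ((JavascriptExecutor) driver).executeScript; a plain Function
// stands in for it here so the flow can be demonstrated without a browser.
class Select2 {

    private final String selector;
    private final Function<String, Object> jsExecutor;

    Select2(String selector, Function<String, Object> jsExecutor) {
        this.selector = selector;
        this.jsExecutor = jsExecutor;
    }

    // Resolves the option's value by its visible text via jQuery, then asks
    // Select2 to display the option with that value.
    void selectByVisibleText(String text) {
        Object value = jsExecutor.apply(String.format(
                "return $('%s option').filter(function() { return $(this).text() === '%s'; }).val();",
                selector, text));
        jsExecutor.apply(String.format("$('%s').select2('val', '%s');", selector, value));
    }
}
```

With a real driver it could be constructed as `new Select2("#day", js -> ((JavascriptExecutor) driver).executeScript(js))`, after which a plain `selectByVisibleText("Monday")` call does the whole console scenario in one step.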
Hope it'll help you forget about StaleElementReferenceException while working with WebDriver and Select2. You can find the sources as usual on GitHub.

Monday, August 10, 2015

Selenium content supplier

Assuming that you've ever worked with Selenium, you know that to run tests in the Chrome / Internet Explorer browser you should supply a special so-called standalone server (chromedriver / IEDriverServer), which implements the WebDriver wire protocol. When browser updates happen, most likely you'll need to update the appropriate drivers as well. If you work with Grid, sometimes you may also need to update the appropriate standalone server when a new Selenium version appears.

Well, it could be quite tedious to keep external Selenium content always up to date. It would be nice to automate this process somehow, right? As you may have guessed, this will be our primary goal for today.

Let's start with some introduction first. There are 2 public resources where you can find a list of available items for downloading Selenium content: selenium storage and chromedriver storage. The direct links lead you to a root XML view, which is useless from an end-user perspective. But from a developer's point of view it's a mine of information. If you look at these XMLs in detail, you'll notice that there's a list of Contents nodes, and each of them contains a special Key. You may wonder how this could help us with our main task. In fact, this key is the last part of an end-point for downloading particular content. So if we concatenate a root URL with one of the listed keys, we'll get a full download URL for any available resource. And it means that we can use a simple GET request to retrieve any content we want. Well, it's quite an easy task if you know which particular version is the newest. But in fact, we don't have such information. Or do we? Let's take a look at our XML again:
As you can see, all the keys are sorted, so if we could parse this XML somehow and retrieve the last node's info, it'd resolve our issue with latest version recognition.
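Once the XML is parsed into a plain list of keys, picking the newest version boils down to taking the last match. The key names below are made up for illustration; the real ones come from the storage XML:

```java
import java.util.List;
import java.util.Optional;

class LatestKeyFinder {

    // The storage XML lists keys in sorted order, so the last key matching
    // a resource name is the newest available version.
    static Optional<String> latest(List<String> keys, String namePart) {
        return keys.stream()
                   .filter(key -> key.contains(namePart))
                   .reduce((first, second) -> second); // keep only the last match
    }
}
```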

Fortunately, we use IntelliJ IDEA, which can generate XSD schema from XML:


And then we can generate source code from XSD schema:


So to get our XML model, we just need to save the XML somewhere in the project, and everything else can be done in several clicks using our favourite IDE.


Well, now we have a model, which means we can send a simple GET request to the root URL, retrieve a list of available contents and put everything into the newly generated entities. I prefer the REST approach, so let's see how we could do that with the Jersey client:
The first method tries to get the XML content and map it into ListBucketResultType.class. The second overloaded method loops through each content node, applies filtering by key and returns the last matching value. You may wonder what the Content type is about. It's a custom interface intended to provide access to common Selenium content wrappers: ChromeDriver, IEDriver and SeleniumServer. As we may want to download different resources of different OS types / bitness, it was necessary to make our code generic.
As you can see, some predefined configuration was created for easier content parsing. Each wrapper contains its own set of characteristics. Let's see how we can use this code with the content downloading API:
As you can see, we're passing the exact content type we want to download. The next goal is to parse the XML and find the latest key using the getLatestPath API. The received key should be additionally split, as it uses the following format: version/resourceName. Now we're ready to prepare a new saving path according to the known resource name and output folder. When it's done, we just need to send a GET request to the remote end-point and read the response into an InputStream. To save the received file data, we may want to use the Apache IOUtils copy API. The last step is to return the saved file name for further processing.
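The key splitting and saving path preparation mentioned above can be sketched like this (the key value below is illustrative; real keys come from the storage listing):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

class SavingPathResolver {

    // A key arrives in "version/resourceName" format, e.g. "2.20/chromedriver_win32.zip";
    // only the resource name part is needed to build the local saving path.
    static Path savingPath(String key, String outputFolder) {
        String resourceName = key.substring(key.lastIndexOf('/') + 1);
        return Paths.get(outputFolder).resolve(resourceName);
    }
}
```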

You may wonder what other processing we need. Well, some of the available artifacts are zipped. So it'd be nice to perform automatic unzipping when downloading is completed, right? That's why we return the saved file name. To unzip items we may want to use some existing library, like zip4j:
There's nothing specific in this code that needs to be discussed, so I'll leave it here for your own investigation.

Cool, so now we can download and unzip any Selenium content we want. But what about remote delivery? If we're working with Grid, we may want to update not only the hub VM, but all the nodes as well. For this particular case a server platform was added, with a simple file upload service:
Note that Java 8 parallel streams allow us to increase performance while processing files. Besides that, unzip functionality was added as well. Now let's take a look at the client API for file uploading:
Here we prepare MultiPart file content for further sending according to the passed list of paths. By default, the unzipping feature is enabled. But it's safe, as the backend side filters by zip extension.

That's it. Let's see some test samples to understand how easy it is to download the latest Selenium content using this library:
In the first example we try to download the entire Selenium content and unzip it in the same output folder. The second sample shows how to download particular resources and send them to a remote VM.

You may wonder where to specify the ip / port of the remote server. It could be done via the main client class's parametrized constructor:
For local file processing you may want to use just the default constructor.

That's all about the Selenium content supplier. Hope you'll find it useful for your projects. By the way, you can combine it with the EnvironmentWatcher service to implement the following scenario on the fly: stop all services -> update Selenium content -> raise the entire automation stuff up again with new jars / drivers.

You can download the sources on GitHub. And related samples as well. The main project is not in the official Maven repository yet, so you need to build it on your own. Just use the mvn clean install command to generate the appropriate artifacts, and then you can add the following dependency to your own projects:

Sunday, August 2, 2015

Java 8 impact on test automation framework design - Part 2

In the first part I showed you how to prettify UI tests using Java 8 interfaces. This time, we'll take a look at a more complicated example.

Well, you may know that there're 2 common ways of accessing web controls via WebDriver:
  1. @FindBy + WebElement -> automatic lookup with PageFactory.initElements.
  2. By -> delayed lookup with driver.findElement or WebDriverWait + ExpectedConditions.
Personally I prefer the second option, as it gives better flexibility while working with complicated JS-based websites, where we always need to wait for something. But on the other hand, the By class seems a bit non-obvious, plus there's no factory implemented for this case yet. Well, actually in the first part we did create a custom factory, so we could say that this problem is gone. But besides that, it'd be nice to see a similar elements' definition style to the one implemented for pure WebElement.

In this article I'll show you how to create custom typified elements and a generic initializer, similar to the initElements mentioned above.

We'll modify part of the code from one of my previous articles, as the idea remains the same: creating a custom HTML annotation and an HTMLElement class. But this time we'll also implement some more specific elements, like Button, TextInput, Label, etc., mostly as it was done in the Html Elements framework.

Let's look at HTMLElement (aka the base element) first. It won't be a full listing, only some key moments:
As we're going to create more specific elements, it's important to pass a WebDriver instance to our base element's constructor for further usage in combination with WebDriverWait. To be honest, there are lots of ExpectedConditions we may use for locating elements, but for educational purposes we'll look only at the most popular: visibilityOfElementLocated, presenceOfElementLocated and elementToBeClickable. All these conditions could be described via the following function:
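Selenium types aside, the shape of that function and the generic waitUntil can be sketched without a browser. Here a "condition" maps a locator to an Optional result and waitUntil polls it; the real implementation would rely on WebDriverWait and the ExpectedConditions listed above instead of this loop:

```java
import java.util.Optional;
import java.util.function.Function;

// A Selenium-free sketch: the real code passes By locators and
// ExpectedConditions to WebDriverWait instead of this polling loop.
class Waiter {

    static <L, R> R waitUntil(L locator, Function<L, Optional<R>> condition, int attempts) {
        for (int i = 0; i < attempts; i++) {
            Optional<R> result = condition.apply(locator);
            if (result.isPresent()) {
                return result.get();
            }
            // a real implementation would sleep between polls
        }
        throw new IllegalStateException("Condition was not met for locator: " + locator);
    }
}
```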
This function was applied in a generic waitUntil method, so that we could pass any of the ExpectedConditions listed above as a parameter. Now we're ready to create some more specific elements, e.g. TextInput:
As you can see, it extends HTMLElement. Now we can use the waitUntil method to locate only those inputs which are clickable. Besides that, we've defined some custom logic: clear the input and type some text. Note that the element's locator reference physically lives in the super class.

Let's assume that we've already created a set of specific elements. So how could we initialize them? If you look at the PageFactory sources, you'll notice some dark reflection magic, and it'll take you some time to figure out how it's implemented. But I'll show you an alternative, even more generic and darker way of initializing elements. It'll still be reflection-based, but with the help of Java 8 features we'll see how it could be implemented within a single interface.

To make this experiment more realistic, we'll create another type of element, so that we can see that our elements' supplier is not hard-wired to a single type and is definitely generic. So what type of element will it be? Have you ever heard about SikuliX? It's an image recognition tool which may help us resolve some complicated automation tasks that are impossible with Selenium. So before starting with the elements' initializer, I'll create a model for SikuliDriver, ScreenElement and its ImageElement implementation. Well, I hope some time in the future I'll have enough capacity to implement a fully functional approach to bring SikuliX closer to the WebDriver interface. But for now it'll be just a mock.

Here's a draft implementation, which will be mocked later in the test:
There won't be any real clicks or text typing, but we need to know that the element has been successfully initialized and that we can perform some basic actions.

Well, our alternative model is ready, so now let's add WebDriver and SikuliDriver (mock) into the BaseTest class.
Note that the well-known initialization / quitting stuff was skipped, but you can find the full source later on GitHub. The Mockito library was used for mocking, so don't forget to add the appropriate dependency into the root pom.xml:
And now it's time for something very special. Welcome our magic interface - ElementsSupplier. I'll try to explain everything within the following listing, as it's a complicated combination of reflection, streams, lambdas and default methods.
Let's start from the end. As you may have noticed from the HTMLElement and ImageElement constructors' signatures, both receive specific drivers as a first argument, which are needed for locating elements later:
We also have 2 custom annotations - HTML and Image - whose values need to be parsed and supplied to the appropriate elements' constructors side by side with the drivers mentioned above. It's a bit tricky. In case of a single element type, we know exactly which annotation to parse, which driver to use and which constructor to call. But our case is more generic. We don't know exactly how many element types, drivers and annotations there are, so we can't predict which constructor to call. Here's our first requirement: a class which implements the ElementsSupplier interface must provide a list of supported drivers and annotations:
In case of drivers, we want a Stream of their instances for further passing to matching constructors. In case of annotations, we just need their types to detect whether a particular instance variable contains one of the supported items. Now let's take a look at our BasePage class, which implements ElementsSupplier:
As you can see, we've overridden both abstract methods to provide WebDriver and SikuliDriver instances, as well as the HTML and Image annotation types. Now our interface knows the search direction (annotations) and the first constructor arguments (driver instances).

We can also see that the BasePage default constructor explicitly calls the default initElements method, passing this as a parameter. You may wonder what this means in such a context. It's a reference to the top-level PageObject which we have triggered to be initialized. So we ask our interface to initialize all the custom fields within a particular class and its super-classes.

Now let's take a look at the common algorithm for elements' initialization:
  1. First we need to loop through each declared field of the current PageObject class and its super-classes, until we reach the base Object.class. We can do that with the Stream.iterate API, but with one important note: the pure Java implementation doesn't support any good exit criteria except the limit operation. By default we don't know the number of super-classes we want to iterate, so the only valid condition for us is !currentClass.equals(Object.class). Fortunately, there's a great Streams extension library, com.codepoetics.protonpack.StreamUtils, which allows us to set an appropriate Predicate to break an infinite loop when the condition is met (the takeWhile API).
  2. Next we need to loop through all declared class fields and find out if any of the supported annotation types is present.
  3. If anything is found, we retrieve the annotation by its type and call the specialized initElement method for further field initialization.
  4. initElement itself could be split into several logical parts. First of all, we need to retrieve all annotation values. It's a bit tricky, as the getDeclaredMethods() API doesn't guarantee to return the list of methods in the order they were declared in a class. But order is very important while passing arguments to the appropriate constructor. That's why we are using a custom methods' comparator (by name), which meets our ordering requirements. But anyway, you can always override the default methodsComparator() with your own custom logic.
  5. These are the annotations' arguments, but what about drivers? Our constructors require a particular driver instance as a first parameter. Here's another tricky moment. Both drivers are of generic interface type, and there's no easy way to guess which exact type is assigned to a particular object. That's why we have to loop through all supported drivers, insert one at the beginning of the annotations' arguments list and pass it deeper to the createInstance method.
  6. createInstance uses the common Java reflection API to initialize our custom elements with the provided arguments list. As I've mentioned above, there's no easy way to detect the assigned interface type, so we additionally check whether the WebDriver or SikuliDriver types are assignable from the provided arguments. If so, we return the more specific type to be able to find a matching constructor. In case of any exception, we return an empty Optional. It means that no matching constructor was found for the particular combination of driver / annotation arguments, and we should try another driver as a first parameter.
  7. The final step is to check whether any object instance was created. In the positive case we make the field accessible and put the newly initialized reference inside.
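Putting the steps together, here's a heavily simplified, Selenium-free sketch: one custom annotation, one element type, and a default initElements method that walks the class hierarchy and fills annotated fields via reflection. The real code additionally juggles several drivers and annotation types, and the plain for-loop below stands in for Stream.iterate + protonpack's takeWhile:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface HTML {
    String value(); // the element's locator, e.g. a css selector
}

class HTMLElement {
    final String locator;
    HTMLElement(String locator) { this.locator = locator; }
}

interface ElementsSupplier {

    // Walks the page object's class hierarchy up to Object.class (step 1) and
    // fills every @HTML-annotated field (steps 2-3) with a new HTMLElement (step 7).
    default void initElements(Object page) {
        for (Class<?> clazz = page.getClass(); !clazz.equals(Object.class); clazz = clazz.getSuperclass()) {
            for (Field field : clazz.getDeclaredFields()) {
                if (field.isAnnotationPresent(HTML.class)) {
                    field.setAccessible(true);
                    try {
                        field.set(page, new HTMLElement(field.getAnnotation(HTML.class).value()));
                    } catch (IllegalAccessException e) {
                        throw new IllegalStateException(e);
                    }
                }
            }
        }
    }
}
```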
That's it. Now we can make sure that all the fields are initialized. There's only 1 note left. As SikuliDriver is mocked, we should also mock ScreenElements to check that our approach works. It could be done somewhere in @BeforeMethod.
Let's declare the same items, e.g. in HomePage:
And add the appropriate call into the test case:
Well, there's no valid logic in the uploadFile call. Its only purpose is to see a working WebDriver test with the appropriate SikuliX console log messages:


That's all. You can find sources as usual on GitHub.

Thursday, July 30, 2015

Java 8 impact on test automation framework design - Part 1

Well, it took me a bit longer than I had expected to resolve all the urgent tasks. But finally I'm back and ready to share some new material with you.

In this article I'd like to describe some fresh thoughts about web automation framework design. I've been playing with different design approaches for several years. The primary goals were: increasing system tests' readability and reducing the time spent on their support.

I believe that any good test should be written in terms of a DSL, so that anyone could understand its context and purpose. As we normally run tests via CI servers, it's important to reflect all the steps performed during execution in the test results report. You can achieve this goal via AOP and a custom annotation to collect everything and inject the appropriate info into the report. On the other hand, you can use some existing solution like Allure Test Report.

Well, this is all about test steps. But what about verifications? Normally, we use asserts to compare actual and expected results. When we're talking about UI tests, there could be much more than just a single verification. It would be nice to see them all in the test results report as well, right? Welcome our first technical blocker: we can't annotate asserts inside a test method's body. So the only way to work around this is to create an assertions wrapper or custom matchers.

The wrapper implementation is a bit out of context of a common inheritance model. Let's assume that we have some BaseTest class, which is intended to control the test execution flow and some internal stuff preparation. As you may know, multiple inheritance is mostly impossible in Java (I'll describe why 'mostly' below). It means that our assertions wrapper should be transformed into a utility class with static methods. Is it good or bad? There's no exact answer. So I'll leave it for your own analysis.

Matchers seem like a better solution. But how much time should we waste on their preparation, customization and support? It depends...

Anyway, I'd like to show you a third way, which is about that 'mostly impossible' inheritance. Well, Java 8 introduced a new concept - default interface methods. To get a better understanding of what this is, I'd recommend reading the Java 8 in Action book. Basically, recent interfaces allow us to create methods with bodies. You may wonder what we need it for. One of the potential purposes is to extend existing functionality without making an outstanding impact on the entire project(-s). As you may know from previous Java versions, when a class implements an interface, it agrees to implement all declared methods. Let's imagine that you're developing some popular library and one day you decide to extend an existing interface with some new method definition. When you publish an updated version of your library, you may wonder how many angry emails you'd receive. The reason is that users' code may fail to compile until they implement your new addition. Imagine if there were lots of entry points where this interface was used. The potential impact could be enormous. So how could new Java interfaces help? Well, first of all, a default method doesn't require to be overridden. Now you can safely add some extended APIs directly inside interfaces without any impact on related classes. Sounds cool, doesn't it? But what about inheritance? Keeping in mind that a default method looks like a common one except for some minor syntax differences, plus the fact that a single class may implement any number of interfaces, we may guess that this opens a direct way to multiple inheritance. Wow, that's awesome! Let's see how it may help us with our automation routine.

As I've mentioned above, it would be great, besides common steps, to print all the verification stuff into the test results report as well. We'll start with some preparations first. To avoid re-inventing the wheel, Allure will be used as a code base for steps definition and printing. It'll be a multi-module maven project to achieve better separation of the domain part from the framework core. In your root pom.xml you should add a reporting section with the allure-maven-plugin. Once it's done, just add 2 modules to your root: core / domain. Your pom.xml should now look something like this:

Let's create some common abstraction layer in the core module. These will be the BasePage and BaseTest classes. We'll leave them blank for a while and continue with the domain module.

Assuming that you're already familiar with the PageObject pattern, we'll need to create a template for some sample test scenario. Let's say we're going to check the Google account authorization flow. To achieve this goal we need at least 2 pages: Login and Home. Keeping in mind that all the steps should be printed directly into the report, we'll use the appropriate @Step annotation from the Allure framework:



As you can see, nothing specific. Just a simple authorization flow with a username verification. Well, to resolve the missing dependencies we should update the domain module's pom.xml.


Note that Allure requires including AspectJ dependencies to perform steps interception at runtime. As TestNG was chosen as the unit framework, we had to add the appropriate Allure adaptor, which implements a special listener for collecting the necessary test data.

Finally, we can create a simple test using the steps provided above.


A pretty straightforward script, isn't it? You may just wonder about the loadUrl and homePage methods (by the way, the latter was first mentioned in the LoginPage class). But let's keep the intrigue for a while.

So our main goal is to annotate assertEquals with @Step somehow. Besides that, another logical blocker occurs: URL loading is something that mostly happens only the first time, when the browser is opened. So logically the first navigation step doesn't relate to any page or the application itself. In such a case, where should we put this API? In the core module? But how would we return a LoginPage instance then, if the framework is logically and physically a completely independent unit, which shouldn't be related to any domain at all? So the domain module then, right? Ok, but again, where should we put it? Our test class already extends BaseTest. It means that we can't inherit anything else.

And a headshot - PageFactory. If you have ever worked with Selenium, you may know that there's a special factory class intended for PageObjects + WebElements initialization. Well, what if I don't use WebElements? What if I use By locators? Where's my By factory? Someone may say: you don't need a factory, just use a common class initialization technique. Ok, but where should I store the getters for my PageObjects then? Ah, you're saying I should create my own factory now? Behind the scenes, I'm always wondering why I should call such low level APIs directly in tests. Why should I save intermediate page object state in variables to verify something, or just break the chain for some other actions? Maybe I'm a bit idealistic, but I've been looking for a good design approach for a long time to make tests as fancy as possible, to completely remove all the low level stuff from the highest abstraction layer. And now... now I can say that I've found a technical approach to achieve this goal. As you may guess, it's all about default interface methods.

Let's start with a light scenario - verification. Everything we need is to create an interface with a simple default method to verify 2 String values - the expected / actual result.
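As a sketch, it might look like this. A plain AssertionError stands in for TestNG's assertEquals, and the Allure @Step annotation is omitted so the snippet stays self-contained:

```java
interface Verification {

    // In the real code this method carries Allure's @Step annotation, so the
    // comparison shows up in the test results report.
    default void verifyTextEquals(String actual, String expected) {
        if (!expected.equals(actual)) {
            throw new AssertionError("Expected [" + expected + "] but found [" + actual + "]");
        }
    }
}
```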


As you can see, there's no magic at all. A common interface style, a common method signature, except for the default keyword. This basically means that a class which implements the above interface may call the verifyTextEquals method directly, as if this method were part of the class itself. Isn't it cool? The other big advantage is that we don't have to override it. But we still have such an opportunity, if really needed.

So now, if we link this interface with our class, we can modify the test the following way:


I hope you haven't forgotten about the main interface feature yet, which allows a class to implement as many interfaces as it wants? Well, then it's a good time to implement a custom PageFactory, isn't it?

Let's move back to the core module. We need to modify the BaseTest class to create a PageObjects' storage. You may know that the PageObject pattern assumes that we'll often return a new instance of a page. But in case of delayed elements' search (By locators), do we really need to create redundant objects in memory? In such a context it's better to think about page caching. Let's say we could avoid creating new objects if the page already exists in the storage. But with 1 small note: the storage should be refreshed after each test execution to avoid keeping useless objects in memory for a long time. Let's see how it could look:
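A minimal sketch of that storage could look like this. GenericPage is the key interface with the abstract create method discussed below; in the real code the clearing method carries TestNG's @AfterMethod annotation, which is left as a comment here so the snippet stays dependency-free:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The key interface: the create implementation is left to higher layers.
interface GenericPage {
    Object create();
}

class BaseTest {

    private static final Map<GenericPage, Object> PAGES = new ConcurrentHashMap<>();

    // The storage is hidden; only the getter is public.
    public static Map<GenericPage, Object> getPages() {
        return PAGES;
    }

    // @AfterMethod in the real code: refresh the storage after each test.
    public void clearStorage() {
        PAGES.clear();
    }
}
```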


Normally, we may want to hide the storage from the outside world, so only the getter was made public. Here we're using a TestNG specific annotation to automatically clear the storage after each test execution. The storage itself is a common map, where the value is a page object instance and the key is of a generic interface type. Let's see how it looks:


Here you can see 2 static methods: the first one provides a page object instance by key, and the second one is our magic navigation method. But it doesn't return any PageObject yet - intrigue. We also define a special create method, which is called while putting values into the storage. But the actual implementation is left to higher abstraction layers (if you remember, we discussed the role of the framework as an independent unit a bit earlier).

The final piece of the puzzle lies in the domain layer. Now we need to provide more specific page object creation logic. And as you may guess, it'll be implemented via another interface. Let's call it PageObjectsSupplier:
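A condensed sketch of such a supplier might look like this. LoginPage / HomePage are stand-ins for real page objects, storage() abstracts the BaseTest storage getter, and GenericPage is repeated so the snippet compiles on its own:

```java
import java.util.Map;

interface GenericPage {
    Object create();
}

class LoginPage { }
class HomePage { }

interface PageObjectsSupplier {

    // Each enum item provides its own create implementation.
    enum PageObject implements GenericPage {
        LOGIN_PAGE { public Object create() { return new LoginPage(); } },
        HOME_PAGE  { public Object create() { return new HomePage();  } }
    }

    // In the real code this would delegate to the BaseTest storage getter.
    Map<GenericPage, Object> storage();

    // Syntactic sugar: concise getters instead of low-level calls with casting.
    default LoginPage loginPage() {
        storage().putIfAbsent(PageObject.LOGIN_PAGE, PageObject.LOGIN_PAGE.create());
        return (LoginPage) storage().get(PageObject.LOGIN_PAGE);
    }

    default HomePage homePage() {
        storage().putIfAbsent(PageObject.HOME_PAGE, PageObject.HOME_PAGE.create());
        return (HomePage) storage().get(PageObject.HOME_PAGE);
    }
}
```

Thanks to putIfAbsent, repeated getter calls keep returning the same cached instance until the storage is cleared.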


The first thing we may notice is a PageObject enum, which implements the just created GenericPage. As you remember, we previously defined an abstract create method to pass the implementation details to the domain specific area. So PageObject must implement this method now. As it's an enum type, each unique item provides its own implementation. Exactly what we need!

There are also 3 default methods. You had a chance to see loadUrl before, in the test implementation provided above. So we've just wrapped the original navigation method defined on the core level with the domain specific logic of returning a new LoginPage instance. As this method is a default one, we can call it directly in a test.

The others are just common page object getters to avoid direct low-level getPageObject calls with type casting. So it's just some kind of syntactic sugar for more concise instance access. Note that we use the putIfAbsent method for populating the pages' storage. It means that only 1 instance of a particular page will be created. Well, it may seem a bit excessive to define both enum items and the relative getters, but on the other hand it's technically and logically clearer than hundreds of lines of reflection or just a separated utility class. Plus we found a better place to store the first navigation logic. Anyway, it's only an alternative approach and it's up to you what to choose.

Now we only need to connect the newly created interfaces with our test class to apply the multiple inheritance magic. Just a quick suggestion: to avoid excessive interfaces' enumeration, we could join them together in 1 more specific interface, e.g. TestCase, using inheritance.


So our final test case variant would be the following:


If we run this test and then generate the Allure Report via the mvn site command, we'll see all the steps, including verification. Doesn't it look perfect?


Note that I'm using my own web server for viewing reports. You may want to read the official Allure docs to find out the list of available maven commands.

In the second part we'll take a look at a more complicated and interesting example with custom PageElements. The source code will also be available later on GitHub.

Hope this article helped you get a better understanding of default methods and how they can improve your automation routine.

Monday, February 9, 2015

Environment Watcher or how to create a service for handling stuck automation processes

In one of my previous articles I've shared some notes about Jenkins plugin development by the example of a selenium grid killer. I've also mentioned that it's not enough to just kill the hub or nodes. Ideally, it should be a fully functional restart trigger.

Let's imagine that we have 2 VMs for test automation purposes. We've raised a selenium grid hub, har storage and browsermob proxy on the first VM, and a selenium grid node and sikuli server on the second VM. You may know that sometimes bad things happen, and our environment gets stuck due to a number of reasons. And it may affect the entire automation process if we e.g. have lots of scheduled jobs. Of course we can log in to the failed environment and restart services manually, but it could be quite a tedious task, especially during debugging. And what if it failed during a nightly run? For such cases it could be useful to have some trigger, which could restart all services before a new execution process is started. So in this article I'll show you how to create a simple RESTful trigger for handling the situations described above.

I call it Environment Watcher. Technically, we'll have an http server with the following features:
  • Killing common tasks, such as browsers' instances and their drivers (for chrome and ie).
  • Killing java tasks. While browsers / drivers can be found by name in the task manager, the selenium grid / browsermob proxy and sikuli processes are quite hard to find, as they all have the same name - java.exe.
  • Starting batch files via a command line executor service. Normally, such batches raise the selenium grid hub / nodes, remote browsermob proxy and sikuli jars.
Let's start with the services implementation. To kill common tasks, which can be easily found in the task manager by name, we'll use the following snippet:


If we use this code in a command line, it will loop through the existing tasks list and kill everything that matches the criteria marked with ?. The question mark will be dynamically replaced with the list of names we're looking for.
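The exact server-side snippet is omitted above, but the shape of the generated commands can be sketched in Java. The /F, /T and /IM switches are taskkill's real flags (force termination, kill the child tree, match by image name); the process names are whatever the client passes in:

```java
import java.util.List;
import java.util.stream.Collectors;

class CommonTaskKiller {

    // Builds one taskkill command per process name: /F - force termination,
    // /T - also kill child processes, /IM - match by image (executable) name.
    static List<String> killCommands(List<String> taskNames) {
        return taskNames.stream()
                        .map(name -> "taskkill /F /T /IM " + name)
                        .collect(Collectors.toList());
    }
}
```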

In regards to java tasks searching, we'll use a tool called JPS (a part of the JDK), which will help us list and kill any JVM process.
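The original listing isn't reproduced here, but the idea can be sketched as parsing `jps -l` output (lines of the form `pid jarOrMainClass`) and killing the matching PIDs; the sample output lines in the test below are made up:

```java
import java.util.List;
import java.util.stream.Collectors;

class JavaTaskKiller {

    // Filters raw "jps -l" output lines by a name fragment, extracts the PIDs
    // and builds the kill commands for them.
    static List<String> killCommands(List<String> jpsOutput, String namePart) {
        return jpsOutput.stream()
                        .filter(line -> line.contains(namePart))
                        .map(line -> line.split("\\s+")[0])
                        .map(pid -> "taskkill /F /PID " + pid)
                        .collect(Collectors.toList());
    }
}
```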


Well, now we can create appropriate endpoints:  


In both cases we call a command line executor utility based on the Apache Commons Exec library.

To run batches we'll use the same technique, but the root process will be a tool called PSExec, instead of cmd. As you may know, if our batch process is continuously waiting for user input, it will hang the java process which called it. That's why PSExec was chosen. It acts like a proxy, starting the batch in a separate process and immediately quitting.
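As a rough sketch, the composed call might look like this. The -d switch is PSExec's real 'don't wait for termination' option and -accepteula suppresses the EULA dialog; the paths used below are made up for illustration:

```java
class BatchStarter {

    // PSExec starts the batch in a separate process and returns immediately
    // (-d), so the calling java process isn't blocked by a waiting batch.
    static String startCommand(String psexecPath, String batchPath) {
        return String.format("%s -d -accepteula cmd /c \"%s\"", psexecPath, batchPath);
    }
}
```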


These are our key server-side services. Now let's take a quick look at the client side, which is written with Jersey.

Here's an example of how we can use the java tasks killer service:


Normally, you'll need to pass a list of task names (which you may want to kill) to the appropriate method. But I've also added some predefined structures like JavaTask / CommonTask to make it easier to start using the code.


This code will kill any running instance of selenium standalone (hub / node) and the listed browsers / their drivers.

Well, now we have a client-server solution, which can kill and restart anything on a remote VM. And it's time to update our Jenkins plugin.


Some new entries were added to our jelly configs. Besides the features listed above, I've also added an opportunity to change the hub ip for connected nodes dynamically (in json config files). This may be useful if you're passing it from a Jenkins parameter into your java code through an environment variable. But it would be tedious to modify the configs for all VMs manually. So I've put the appropriate trigger into our Environment Watcher service. A windows' minimization feature was also added, as sometimes, after restarting processes, the browser may open behind command line windows, which may cause failures when using image recognition tools like SikuliX.

The updated plugin's UI will look like the following:


Normally, you'll need to specify a valid watcher ip / port and check the available options if you want to kill common / java tasks. Additionally you'll need to set a path to a batch file, which will start the killed processes again. Optionally, you can reconfigure the node's json file by a given path with a new hub ip.

So now you can use this plugin for restarting your environment, for example, before new job execution. As a result you'll see the following output in Jenkins console log:


As usual, you can find env-watcher and selenium-utils sources on GitHub.