If you keep the code structure that you mention in the question, sooner or later maintaining it will become a nightmare. Try to stick to the rule: the same test code (written once) for all browsers (environments).
This condition will force you to solve two issues:
1) how to run the tests for all chosen browsers
2) how to apply browser-specific workarounds without polluting the test code
The second issue, in fact, seems to be the heart of your question.
Here is how I solved the first issue.
First, I defined all the environments that I am going to test. I call 'environments' all the conditions under which I want to run my tests: browser name, version number, OS, and so on. So, separately from the test code, I created an enum like this:
import org.openqa.selenium.Platform;
import org.openqa.selenium.remote.DesiredCapabilities;

public enum Environments {

    FF_18_WIN7("firefox", "18", Platform.WINDOWS),
    CHR_24_WIN7("chrome", "24", Platform.WINDOWS),
    IE_9_WIN7("internet explorer", "9", Platform.WINDOWS);

    private final DesiredCapabilities capabilities;
    private final String browserName;
    private final String version;
    private final Platform platform;

    Environments(final String browserName, final String version, final Platform platform) {
        this.browserName = browserName;
        this.version = version;
        this.platform = platform;
        capabilities = new DesiredCapabilities();
    }

    public DesiredCapabilities capabilities() {
        capabilities.setBrowserName(browserName);
        capabilities.setVersion(version);
        capabilities.setPlatform(platform);
        return capabilities;
    }

    public String browserName() {
        return browserName;
    }
}
It is easy to modify this enum and add new environments whenever you need to. As you can see, I use it to build and retrieve the DesiredCapabilities that will later be used to create a driver for the specific environment.
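As a quick illustration (a hypothetical snippet, not part of the original setup), this is what each constant produces:

import org.openqa.selenium.remote.DesiredCapabilities;

public class CapabilitiesDemo {
    public static void main(final String[] args) {
        // Each enum constant yields ready-to-use capabilities for a Grid node.
        DesiredCapabilities caps = Environments.CHR_24_WIN7.capabilities();
        System.out.println(caps.getBrowserName() + " " + caps.getVersion() + " on " + caps.getPlatform());
    }
}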
To make the tests run for all the defined environments, I used JUnit's org.junit.experimental.theories package (JUnit 4.10 in my case):
import org.junit.Rule;
import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(MyRunnerForSeleniumTests.class)
public class MyWebComponentTestClassIT {

    @Rule
    public MySeleniumRule selenium = new MySeleniumRule();

    // Every @Theory method runs once for each of these data points.
    @DataPoints
    public static Environments[] environments = Environments.values();

    @Theory
    public void sample_test(final Environments environment) {
        Page initialPage = LoginPage.login(selenium.driverFor(environment),
                selenium.getUserName(), selenium.getUserPassword());
        // your test code here
    }
}
The tests are annotated with @Theory (not with @Test, as in normal JUnit tests) and take a parameter. Each test will then run once for every defined value of that parameter, which must come from an array annotated with @DataPoints. You also need a runner that extends org.junit.experimental.theories.Theories. I use org.junit.rules to prepare my tests, putting all the necessary plumbing there. As you can see, I also obtain the driver with the specific capabilities through the Rule, though you could use the following code directly in your test:
RemoteWebDriver driver = new RemoteWebDriver(new URL(some_url_string), environment.capabilities());
The point is that by keeping this code in the Rule you write it once and reuse it in all your tests.
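Since I don't show the runner and the Rule above, here is a rough sketch of what they could look like. The hub URL, the credential lookup via system properties, and the per-environment driver caching are just placeholders for illustration, not a definitive implementation:

import java.net.MalformedURLException;
import java.net.URL;
import java.util.EnumMap;
import java.util.Map;

import org.junit.experimental.theories.Theories;
import org.junit.rules.ExternalResource;
import org.junit.runners.model.InitializationError;
import org.openqa.selenium.remote.RemoteWebDriver;

// The runner only needs to extend Theories (each class goes in its own file).
public class MyRunnerForSeleniumTests extends Theories {
    public MyRunnerForSeleniumTests(final Class<?> klass) throws InitializationError {
        super(klass);
    }
}

public class MySeleniumRule extends ExternalResource {

    private static final String HUB_URL = "http://localhost:4444/wd/hub"; // placeholder Grid hub address
    private final Map<Environments, RemoteWebDriver> drivers = new EnumMap<Environments, RemoteWebDriver>(Environments.class);

    public RemoteWebDriver driverFor(final Environments environment) {
        // Lazily create one driver per environment and reuse it within the test.
        RemoteWebDriver driver = drivers.get(environment);
        if (driver == null) {
            try {
                driver = new RemoteWebDriver(new URL(HUB_URL), environment.capabilities());
            } catch (MalformedURLException e) {
                throw new IllegalStateException(e);
            }
            drivers.put(environment, driver);
        }
        return driver;
    }

    public String getUserName() {
        return System.getProperty("test.user");     // placeholder credential source
    }

    public String getUserPassword() {
        return System.getProperty("test.password"); // placeholder credential source
    }

    @Override
    protected void after() {
        // Quit every driver opened during the test method.
        for (final RemoteWebDriver driver : drivers.values()) {
            driver.quit();
        }
        drivers.clear();
    }
}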
As for the Page class, it is where I put all the code that uses the driver's functionality (finding elements, navigating, and so on). This way, again, the test code stays neat and clear, and you write it once and use it in all your tests.
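In the same spirit, a rough sketch of the Page and LoginPage classes (the locators and the login flow here are just placeholders):

import org.openqa.selenium.By;
import org.openqa.selenium.remote.RemoteWebDriver;

// (each class in its own file)
public class Page {

    protected final RemoteWebDriver driver;

    public Page(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    // element lookups, navigation, waits, etc. live here,
    // so the tests never touch the driver directly
}

public class LoginPage {

    public static Page login(final RemoteWebDriver driver, final String userName, final String password) {
        driver.findElement(By.id("username")).sendKeys(userName); // placeholder locators
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
        return new Page(driver); // the page you land on after logging in
    }
}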
So, this is the solution for the first issue. (I know that you can do a similar thing with TestNG, but I didn't try it.)
To solve the second issue, I created a dedicated package where I keep all the browser-specific workaround code. It consists of an abstract class, e.g. BrowserSpecific, containing the common code that happens to behave differently (or be buggy) in some browser. In the same package I have a class for each browser used in the tests, and each of them extends BrowserSpecific.
Here is how it works for the Chrome driver bug that you mention. I create a method clickOnButton in BrowserSpecific with the common code for the affected behaviour:
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.RemoteWebDriver;

public abstract class BrowserSpecific {

    protected final RemoteWebDriver driver;

    protected BrowserSpecific(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    // Factory method: picks the right subclass from the driver's capabilities.
    public static BrowserSpecific aBrowserSpecificFor(final RemoteWebDriver driver) {
        BrowserSpecific browserSpecific = null;
        String browserName = driver.getCapabilities().getBrowserName();
        if (Environments.FF_18_WIN7.browserName().contains(browserName)) {
            browserSpecific = new FireFoxSpecific(driver);
        }
        if (Environments.CHR_24_WIN7.browserName().contains(browserName)) {
            browserSpecific = new ChromeSpecific(driver);
        }
        if (Environments.IE_9_WIN7.browserName().contains(browserName)) {
            browserSpecific = new InternetExplorerSpecific(driver);
        }
        return browserSpecific;
    }

    // Default behaviour shared by all browsers; subclasses override when needed.
    public void clickOnButton(final WebElement button) {
        button.click();
    }
}
and then I override this method in the specific class, e.g. ChromeSpecific, where I place the workaround code:
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ChromeSpecific extends BrowserSpecific {

    ChromeSpecific(final RemoteWebDriver driver) {
        super(driver);
    }

    @Override
    public void clickOnButton(final WebElement button) {
        // Chrome workaround: scroll the button into view before clicking.
        // (String.format avoids MessageFormat's locale-dependent digit
        // grouping, which would produce "1,234" and break the script.)
        String script = String.format("window.scrollTo(0, %d);", button.getLocation().getY());
        driver.executeScript(script);
        // Followed by the common behaviour of all the browsers.
        super.clickOnButton(button);
    }
}
When I have to take into account the specific behaviour of some browser, I do the following:
aBrowserSpecificFor(driver).clickOnButton(logoutButton);
instead of:
logoutButton.click();
This way I can easily identify, in my common code, where a workaround has been applied, and the workarounds stay isolated from the common code. I find this easy to maintain: browser bugs usually get fixed eventually, and the workarounds can then be updated or removed.
One last word about executing the tests. Since you are going to use Selenium Grid, you will want to run the tests in parallel, so remember to configure this feature for your JUnit tests (available since JUnit 4.7).
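For example, you can run the test classes concurrently with JUnit's experimental ParallelComputer (a minimal sketch; configuring parallelism in your build tool, e.g. Maven Surefire, works just as well):

import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;

public class ParallelSuite {
    public static void main(final String[] args) {
        // Run the listed test classes in parallel; each class still iterates
        // over all the @DataPoints environments.
        JUnitCore.runClasses(ParallelComputer.classes(), MyWebComponentTestClassIT.class);
    }
}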