Who writes the automated UI tests? Developers or Testers?

We're in the initial stages of a large project, and have decided that some form of automated UI testing is likely going to be useful for us, but have not yet sorted out exactly how this is going to work...

The primary goal is to automate a basic install and run-through of the app, so that if a developer causes a major breakage (e.g. the app won't install, the network won't connect, a window won't display), the testers don't have to waste their time (and get annoyed) installing and configuring a broken build.

A secondary goal is to help testers when dealing with repetitive tasks.

My question is: Who should create these kinds of tests? The implicit assumption in our team has been that the testers will do it, but everything I've read on the net always seems to imply that the developers will create them, as a kind of 'extended unit test'.

Some thoughts:

  • The developers seem to be in a much better position to do this, given that they know control IDs, classes, etc., and have a much better picture of how the app works.

  • The testers have the advantage of NOT knowing how the app works, and hence may produce tests that are much more useful.

  • I've written some initial scripts using IronRuby and White. This has worked really well and is powerful enough to do literally anything, but it means you need to be able to write code in order to write the UI tests. (A minimal sketch of this approach follows this list.)

  • All of the automated UI test tools we've tried (TestComplete, etc.) seem to be incredibly complex and fragile. The testers can use them, but it takes them about 100 times longer, and they're constantly running into "accidental complexity" caused by the UI test tools.

  • Our testers can't code, and while they're plenty smart, all I got were funny looks when I suggested that testers could potentially write simple Ruby scripts (even though said scripts are about 100x easier to read and write than the mangled mess of buttons and datagrids that seems to be the standard for automated UI test tools).
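For what it's worth, the kind of script I mean looks roughly like the sketch below. It's IronRuby driving White, but the assembly name, namespaces, window title and automation IDs are placeholders (and the White calls are written from memory), so treat it as an illustration of the approach rather than working code for our app:

    # Smoke test: can the build even start and show its main window?
    # Assumes White.Core.dll sits next to the script; all names are placeholders.
    require 'White.Core.dll'
    include White::Core                      # IronRuby exposes CLR namespaces as modules
    include White::Core::UIItems::Finders

    app = Application.Launch('OurApp.exe')   # a broken install/exe fails right here
    begin
      window = app.GetWindow('OurApp', Factory::InitializeOption.NoCache)  # fails if no main window appears
      login  = window.Get(SearchCriteria.ByAutomationId('LoginButton'))    # fails if the control is missing
      login.Click
      puts 'SMOKE TEST PASSED'
    ensure
      app.Kill                               # don't leave a half-broken build running
    end

Even at this size, someone has to know the automation IDs and be comfortable with begin/ensure blocks, which is really the crux of the dev-vs-tester question.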

I'd really appreciate any feedback from others who have tried UI automation in a team of both developers and testers. Who did what, and did it work well? Thanks in advance!

Edit: The application in question is a C# WPF "rich client" application that connects to a server using WCF.

Hyaluronidase answered 18/8, 2009 at 23:53 Comment(0)

Ideally it should really be QA who end up writing the tests. The problem with a programmatic solution is the learning curve involved in getting the QA people up to speed with the tool. Developers can certainly ease that learning curve and help the process by mentoring, but it still takes time and is a drag on development.

The alternative is to use a simple GUI tool backed by a language (and data scripts) which enables QA to build scripts visually, delving into the finer details of the language only when really necessary - development can get involved here as well.

The most successful attempts I've seen have definitely been with the latter, but setting this up is the hard part. Selenium has worked well for simple web applications and simple threads through the application. JMeter (for scripted web conversations against web services) has also worked well. Another option is an in-house test harness: a simple tool on top of a scripting language (Groovy, Python, Ruby) that allows QA to feed test data into the application either via a GUI or via data files. The data files can be simple properties files or, in more complex cases, structured data files (something like YAML or even Excel). That way they can build the basic smoke tests to start, and later expand them into various scenario-driven tests.
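A minimal sketch of the data-file idea (plain Ruby, with invented step names, just to show the shape of such a harness; the real actions would call into White, Selenium or whatever driver you use):

    # QA edits a YAML file of steps; developers own the harness that maps
    # each step name to real automation code. Step names here are made up.
    #
    # smoke_test.yml might look like:
    #   - launch: OurApp.exe
    #   - click: Login
    #   - expect_text: Connected
    require 'yaml'

    ACTIONS = {
      'launch'      => ->(arg) { puts "launching #{arg}" },   # would call the real driver here
      'click'       => ->(arg) { puts "clicking #{arg}" },
      'expect_text' => ->(arg) { puts "expecting #{arg}" },
    }

    steps = YAML.load_file(ARGV.fetch(0, 'smoke_test.yml'))
    steps.each do |step|
      action, argument = step.first
      handler = ACTIONS.fetch(action) { raise "unknown step: #{action}" }
      handler.call(argument)
    end

QA only ever touches the YAML; when they need a new kind of step, a developer adds one entry to the table.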

Finally... I think rich client apps are way more difficult to test in this way, but it depends on the nature of the language and the tools available to you...

Sven answered 19/8, 2009 at 0:15 Comment(2)
We've decided to go with the option of getting the QA people to write basic 'stub' scripts (using IronRuby, which is very easy to read and write), and then having the developers fix up the code and implement the parts that the QA people weren't able to do. Hopefully it will go well.Hyaluronidase
Maybe instead of Selenium-like applications you should try products like IBM Functional Tester, HP QuickTest Pro, Borland SilkTest or MSVS Test Edition. As for web apps, there are also dedicated libraries like WebAii or WatiN that help to write scripts. Anyway, make your QA people learn and adapt, pretty much like developers - they need to learn something new every day.Shir

In my experience, testers who can code will switch jobs for a pay raise as developers.

I agree with you on the automated UI testing tools. Every place I've worked that was rich enough to afford WinRunner or LoadRunner couldn't afford the staff to actually use it. The prices may have changed, but back then these tools carried high five-figure to low six-figure price tags (think the price of a starter home). The products were hard to use, and were usually kept uninstalled in a locked cabinet because everyone was afraid of getting in trouble for breaking them.

Lanctot answered 18/8, 2009 at 23:58 Comment(1)
LoadRunner is for performance testing and is actually hard to master. The problem is not with the C it uses but with correlations and analyzing network traffic. For automated GUI tests, QTP is a far better choice: it lets you build a test via record and playback, but also script the same test in VBScript. Still, price may be an issue.Shir

I worked over 7 years as an application developer before I finally switched to testing and test automation. Testing is much more challenging than coding, and any automation developer who wants to succeed should master testing skills.

Some time ago I put my thoughts on skill matrices in a couple of blog posts.

If you're interested in discussing:

http://automation-beyond.com/2009/05/28/qa-automation-skill-matrices/

Thanks.

Gestation answered 28/8, 2009 at 13:51 Comment(2)
Oh, and for install and basic run-through automation, without actual functional testing, you don't really need QTP or TestComplete. Try AutoIt (it's free).Gestation
I am highly skeptical that any QA salary matches a good developer's salary, so switching to QA from dev seems an odd thing to do, career-wise. Saying testing is much more challenging than coding is very opinion-based. I do both and prefer coding more.Garay

I think having the developers write the tests will be of the most use. That way, you can get "breakage checking" throughout your dev cycle, not just at the end. If you do nightly automated builds, you can catch and fix bugs when they're small, before they grow into huge, mean, man-eating bugs.

Danette answered 19/8, 2009 at 0:5 Comment(0)

What about the testers proposing the tests, and the developers actually writing them?

Pioneer answered 19/8, 2009 at 0:20 Comment(2)
In general, devs are too busy writing the code that QA will be testing, and will have no time to write the tests. I don't know about your company, but at every one I've worked at, if a dev has time to write tests, the (mis)managers criticize the dev because the dev should have been adding more features.Lanctot
The ThoughtWorks "Twist" application looks to be modeled along these lines.Hyaluronidase

First of all, I believe it largely depends on the tools you use.

Our company currently uses Selenium (We're a Java shop).

The Selenium IDE (which records actions in Firefox) works OK, but developers need to manually correct mistakes it makes against our webapps, so it's not really appropriate for QA to write tests with.

One thing I tried in the past (with some success) was to write library functions as wrappers for the Selenium functions. They read as plain English:

selenium.clickButton("Button Text")

...but behind the scenes they check for proper layout and tags on the button, that it has an id, etc.

Unfortunately this required a lot of setup to allow easy writing of tests.
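For what it's worth, the wrapper idea looks something like the sketch below. It's written in Ruby with the current selenium-webdriver bindings rather than the Java + Selenium RC setup described above, and the conventions it checks are invented for illustration:

    # A thin wrapper so testers call plain-English methods while the
    # team's conventions are enforced behind the scenes.
    require 'selenium-webdriver'

    class AppDriver
      def initialize(driver)
        @driver = driver
      end

      # Reads as plain English to the test author...
      def click_button(text)
        button = @driver.find_element(xpath: "//button[normalize-space()='#{text}']")
        # ...but also enforces our layout/tagging conventions before clicking.
        raise "button '#{text}' has no id attribute" if button.attribute('id').to_s.empty?
        button.click
      end
    end

    app = AppDriver.new(Selenium::WebDriver.for(:firefox))
    app.click_button('Save')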

I recently became aware of a tool called Twist (from ThoughtWorks, built on the Eclipse engine), which is a wrapper for Selenium that allows plain-English-style tests to be written. I am hoping to be able to supply this to the testers, who can write simple assertions in plain English!

It automatically creates stubs for new assertions too, so the testers could write the tests, and pass them to developers if they need new code.

Cosmo answered 19/8, 2009 at 0:21 Comment(1)
This is basically what I've done with my Ruby scripts (abstracting over the White framework). I have code like "ok_button = window.find_button("OK"); ok_button.click"Hyaluronidase

I've found the most reasonable option is to have enough specs such that the QA folks can stub out the tests: basically figure out what they want to test at each 'screen' or on each component, and stub those out. The stubs should be named so that they're very descriptive about what they're testing. This also offers a way to crystallize functional requirements. In fact, doing the requirements in this fashion is particularly easy, and helps non-technical people really work through the muddy waters of their own thought process.

The stubs can then be filled in by a combination of QA and dev people. This allows you to CHEAPLY train QA people in how to write tests, and they typically slurp it up, as it furthers their job security.
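As a concrete (if invented) example of what those stubs might look like, and of reading the requirements back out of their names:

    # QA writes empty, descriptively named tests; devs fill the bodies in later.
    # Framework choice (minitest) and the test names are illustrative only.
    require 'minitest/autorun'

    class LoginScreenTests < Minitest::Test
      def test_rejects_a_blank_password
        skip 'stubbed by QA - automation to be written'
      end

      def test_shows_connection_status_after_login
        skip 'stubbed by QA - automation to be written'
      end
    end

    # The stub names double as rough requirements documentation:
    LoginScreenTests.instance_methods(false).map(&:to_s).grep(/^test_/).each do |name|
      puts name.sub(/^test_/, '').tr('_', ' ')   # "rejects a blank password", ...
    end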

Nagy answered 19/8, 2009 at 0:2 Comment(2)
How would the QA folk stub out the tests? Would they write an actual code file full of tests, or hand you a Word document full of requirements? I presume the developers would then fill in the stubs with the actual UI-automation code?Hyaluronidase
The QA actually write function stubs, or whatever they may be. The names should serve as agile documentation: you should be able to parse them and run a simple split on camel case to get rough sentences outlining each requirement.Nagy

I think it depends mostly on the skill level of your test team, the tools available, and the team culture with respect to how developers and testers interact with each other. My current situation is that we have a relatively technical test team; all testers are expected to have development skills. In our case, testers write the UI automation. If your test team doesn't have those skills they will not be set up for success; in that case, it may be best for developers to write your UI automation.

Other factors to consider:

  • What other testing tasks are on the testers' plate?

  • Who are your customers and what are their expectations related to quality?

  • What is the skill level of the development team, and what is their willingness to take on test automation work?

-Ron

Biochemistry answered 24/9, 2009 at 1:52 Comment(0)
