How detailed should a customer acceptance test be?

Here is a test description, testing the "Create New Widget" use-case.

  • Confirm that you can enter a new widget into the system.

Here is another test description, testing the "Create New Widget" use-case.

  • Bring up the application.
  • Create a new widget by the name of "A-008", with the description being "Test Widget for Acceptance Test 3-45".
  • Confirm that the widget is now visible in the leftmost widget tree view.
  • Select another widget in the tree view, then select the widget "A-008" again. Confirm that the values in the widget display equal the values you entered.
  • Delete widget "A-008" and close the application.

Here is another test description, testing the "Create New Widget" use-case.

  • Bring up the application.
  • Bring up a second instance of the application viewing the same data.
  • In the first instance of the application, right-click on the "Widgets" node. In the ensuing context menu, activate the "Create New Widget" menu item.
  • A "New Widget" window should be activated. Leave every field blank, and press the OK button. A message box should come up saying "Please enter a Widget name". Press OK on this message box.
  • Enter "A-008" in the "Name" field.
  • Set the description field to "The llama (Lama glama) is a South American camelid, widely used as a pack animal by the Incas and other natives of the Andes mountains. In South America llamas are still used as beasts of burden, as well as for the production of fiber and meat. The height of a full-grown, full-size llama is between 5.5 feet (1.6 meters) to 6 feet (1.8 meters) tall at the top of the head. They can weigh between approximately 280 pounds (127 kilograms) and 450 pounds (204 kilograms). At birth, a baby llama (called a cria) can weigh between 20 pounds (9 kilograms) to 30 pounds (14 kilograms)."
  • Press the OK button. A message box should appear saying "The description must be 512 characters or less".
  • Set the "Description" field to "'); DELETE FROM WIDGET WHERE 1=1;". Press the OK button.
  • In the left-most tree view, a new widget by the name of "A-008" should have appeared.
  • Activate a window in the second instance of the application, and confirm that Widget "A-008" has automatically appeared in that tree view as well.
  • In the first instance of the application, right-click on the "Widgets" node. In the ensuing context menu, activate the "Create New Widget" menu item. A "New Widget" window should be activated.
  • Set the name to "A-008", and press OK. A message box must come up, saying "A Widget with this name already exists. Please select another Widget name".
  • Press the OK button on this message box, then press the Cancel Button to exit the "Create Widget" dialog box.
  • Display the widget page for widget "A-008" in the second instance.
  • In the first instance, press the "Undo" menu item.
  • Confirm that the second instance is now displaying the start page.
  • .................etc..............

Each example tests that you can create a new widget. In the third test, I was testing the functionality as an experienced programmer, thinking "OK, where are all of the places a bug can appear", and checking every one of these. Is the third one appropriate for a customer acceptance test?

How comprehensive is "too comprehensive"?

Phenylalanine answered 7/5, 2009 at 0:26 Comment(0)

The user acceptance test cases should be detailed and simple, but not as detailed as your third example. Acceptance testing is about making sure the customer gets what they agreed to. If you simply say, "click this, then click that, etc.", that is more like a functional test. You are not asking users to verify that the functionality meets the criteria laid out in the acceptance test; you are only asking them to click through tests that you could simply have automated.

User acceptance tests should be more along the lines of "create widget, verify that it appears, delete widget, etc." This will also encourage users to seek out individual features and (as a side effect) flush out any usability problems you may have overlooked.

Divinadivination answered 7/5, 2009 at 0:49 Comment(1)
Pressing the "tick" button on this because it's making good points clearly, but I acknowledge that the other posts do the same.Phenylalanine

I think that your acceptance tests should primarily be good path tests. Sometimes the "good" path will ensure that errors are properly handled. You should have other tests that validate your security and exercise the corner cases, but an acceptance test is more about making sure that the correct application has been built than making sure that every possible condition is handled correctly. If you have good unit tests and use best practices, then I think that the good path testing is entirely appropriate.

For example, I wouldn't necessarily use an acceptance test to prove that I don't have problems with SQL injection if I've used a technology that enforces parameterized queries, or, where I generate queries by hand (I don't), if my unit tests validate that injection fails. Addressing the corner cases in unit tests makes it less important to focus on them in acceptance tests. If you need to include a few as examples to show the customer that your backend implementation addresses their concerns, then by all means do so, but I wouldn't test things that I know I've addressed adequately via other testing.
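
As a rough sketch of the kind of unit test that can carry that weight (the WidgetStore class and its schema below are invented for illustration; they are not from the question or any real product), here is a parameterized INSERT plus a check that the hostile description string from the third test case round-trips as literal data:

    import sqlite3
    import unittest

    class WidgetStore:
        """Hypothetical data-access layer that only issues parameterized queries."""

        def __init__(self):
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute(
                "CREATE TABLE widget (name TEXT PRIMARY KEY, description TEXT)")

        def create_widget(self, name, description):
            # The ? placeholders make the driver treat the values as data, never as SQL.
            self.conn.execute(
                "INSERT INTO widget (name, description) VALUES (?, ?)",
                (name, description))

        def get_description(self, name):
            row = self.conn.execute(
                "SELECT description FROM widget WHERE name = ?", (name,)).fetchone()
            return row[0] if row else None

    class InjectionTest(unittest.TestCase):
        def test_hostile_description_is_stored_literally(self):
            store = WidgetStore()
            hostile = "'); DELETE FROM WIDGET WHERE 1=1;"
            store.create_widget("A-008", hostile)
            # The string comes back unchanged and nothing was deleted along the way.
            self.assertEqual(store.get_description("A-008"), hostile)

    if __name__ == "__main__":
        unittest.main()

With that covered here, the customer acceptance test only needs to demonstrate the visible behaviour the customer actually cares about.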

Bibulous answered 7/5, 2009 at 0:47 Comment(0)

That looks more like a feature test plan to me (i.e. internal testing).

Acceptance testing usually refers to what you show the customer. I guess you could give the customer a test like that - good luck, though.

For user acceptance testing, I prefer a very simple format (of course, this probably won't be appropriate for, say, space shuttle software or banking). It's fine for small-to-mid web projects.

The crux of it is: make a table which lists every page in the system, then make a column for the client to initial, and another one for yourself to initial. You sit with the client for a few hours and go through everything. If they are happy with a page, they sign off on it.

For the full details of the template, see: User Acceptance Testing

Trapezium answered 10/6, 2010 at 10:12 Comment(1)
Interesting that the breakdown is by pages, rather than user stories. The controls on each page might all work fine, but the flow of information from one page to another could be broken, and I'm not sure if this test template covers this. Maybe the tests should break down the functionality two ways - by each page (or window and dialog in a rich client app), and also by each use-case.Phenylalanine

In a perfect world, the test description would read:

  • Confirm that all automated tests run to completion successfully

There would be one automated test for each path in the use case.

Any form of scripted, manual testing is going to be error-prone and miss bugs, not to mention labour-intensive.
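
To make the idea concrete, here is a minimal sketch of what "one automated test per path" could look like, written against a hypothetical WidgetApp facade (the class and its methods are assumptions for the example, not part of the application in the question):

    import unittest

    class WidgetApp:
        """Hypothetical facade over the application, standing in for the real UI layer."""

        def __init__(self):
            self._widgets = {}

        def create_widget(self, name, description):
            if not name:
                raise ValueError("Please enter a Widget name")
            if name in self._widgets:
                raise ValueError("A Widget with this name already exists")
            self._widgets[name] = description

        def widget_names(self):
            return list(self._widgets)

    class CreateNewWidgetPaths(unittest.TestCase):
        """One test per path through the 'Create New Widget' use-case."""

        def test_happy_path_new_widget_appears(self):
            app = WidgetApp()
            app.create_widget("A-008", "Test Widget for Acceptance Test 3-45")
            self.assertIn("A-008", app.widget_names())

        def test_blank_name_is_rejected(self):
            app = WidgetApp()
            with self.assertRaises(ValueError):
                app.create_widget("", "no name supplied")

        def test_duplicate_name_is_rejected(self):
            app = WidgetApp()
            app.create_widget("A-008", "first")
            with self.assertRaises(ValueError):
                app.create_widget("A-008", "second")

    if __name__ == "__main__":
        unittest.main()

The customer then signs off on the list of test names and a green run, rather than stepping through a script by hand.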

Hendley answered 7/5, 2009 at 0:32 Comment(4)
No, this is a CUSTOMER acceptance test. A test to be performed by a customer. If I'm a customer, even if the development company ASSURES me they have full automated tests, I still have to test the system before it goes into production and before the associated vendors get paid.Phenylalanine
Why can't a customer sign off on a set of automated tests?Hendley
Don't get me wrong, as a developer I think automated tests are great, to the point where I would refuse to work on a project that didn't have them. But the average customer has no technical skills, and would not even pretend to know how to evaluate an automated test. He/she would look at any automated test descriptions, and say "This is just some bunch of gobbledygook, get it out of my way so I can see if this application works."Phenylalanine
@MitchWheat probably the same reason people check their orders at fast food drive-throughs. It's the equivalent of just looking at the receipt and not checking the actual items. And software development tends to have a higher defect rate than fast food, even with automated testing.Tervalent

What does your spec say? If it covers all of the things outlined in your third testcase then why would I, as your customer, not want to see that your product is compliant with the entire spec?

If you don't have an explicit set of requirements (facepalm), then break your testing up into modules: Qualification (with the customer), Integration (developers testing that modules work together) and Development (developers testing individual modules).

Automate DT&E as much as possible (e.g. use unit tests to test for SQL-injection, string-length overflows, etc.). This should be easy to do because your backend should be separate from the GUI that communicates to it (right?). Most of the GUI stuff you've outlined in the third testcase can be covered as part of Integration Testing (because you're really testing the integration between the backend and the GUI).
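
For instance, the 512-character description limit from the question's third test case is a natural unit-test candidate at the backend boundary; this sketch assumes a hypothetical validate_description function rather than any real API:

    import unittest

    MAX_DESCRIPTION_LENGTH = 512  # limit taken from the question's third test case

    def validate_description(description):
        """Hypothetical backend-side check, independent of whatever GUI sits on top."""
        if len(description) > MAX_DESCRIPTION_LENGTH:
            raise ValueError("The description must be 512 characters or less")
        return description

    class DescriptionLengthTest(unittest.TestCase):
        def test_description_at_the_limit_is_accepted(self):
            self.assertEqual(validate_description("x" * 512), "x" * 512)

        def test_description_over_the_limit_is_rejected(self):
            with self.assertRaises(ValueError):
                validate_description("x" * 513)

    if __name__ == "__main__":
        unittest.main()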

If the customer can review your Unit Tests, your Integration test procedures and results, then Qualification testing can be quite easy and painless.

Treasury answered 7/5, 2009 at 0:51 Comment(3)
I'm intrigued by what "facepalm" might be referring to. (I've never heard the term).Phenylalanine
en.wikipedia.org/wiki/Facepalm#Facepalm I did wrap it in asterisks to indicate an action, but SO decided to put it in italics instead. My intent with that comment was that doing customer acceptance tests without an agreed set of requirements is a one-way ticket to pain-central.Treasury
I was thinking it was a software product (* facepalm *)Phenylalanine

They should test the normal use cases (not the exceptional ones). But if they test the exceptional ones too, that is very cool!

Acceptance tests cannot be too detailed. The more the customers test, the more they will enjoy the final product.

Caulicle answered 7/5, 2009 at 0:55 Comment(0)
