Database for Importing NUnit results?

7

7

I have a large set of NUnit tests; I need to import the results from a given run into a database, then characterize the set of results and present them to the users (email for test failures, web presentation for examining results). I need to be tracking multiple runs over time, as well (for reporting failure rates over time, etc.).

The XML will be the XML generated by nunit-console. I would like to import the XML with a minimum of fuss into some database that can then be used to persist and present results. We will have a number of custom categories that we will need to be able to sort across, as well.

Does anyone know of a database schema that can handle importing this type of data that can be customized to our individual needs? This type of problem seems like it should be common, and so a common solution should exist for it, but I can't seem to find one. If anyone has implemented such a solution before, advice would be appreciated as well.

Emphatic answered 23/4, 2009 at 17:32 Comment(1)
If your customer/stakeholder is using the NUnit results to assert that your features (specs) are complete, you should look into Behavior-Driven Development. These frameworks present results in an easy-to-read format. en.wikipedia.org/wiki/Behavior_Driven_Development – Aliped
3

It sounds to me like you're actually after a build server such as CruiseControl.NET or TeamCity.

Get the build server to run the tests, and it does the job of telling people what failed, and why.

I recommend TeamCity as it's several orders of magnitude easier to set up.

Easting answered 23/4, 2009 at 17:36 Comment(3)
There is a great deal that goes along with a build server. It may be the right answer, but it is not always or obviously better than working with results directly. We do not have the budget for TeamCity (although I think I am starting to get through to my boss), and CruiseControl.NET does not fit well with what we do now for builds. I am interested in a solution using just NUnit and a database. – Lelia
Why don't you have the budget for a free piece of software? TeamCity Professional doesn't cost anything. – Slackjawed
TeamCity is free (as in beer) for small teams working on fairly straightforward projects. We have neither. – Lelia
2

I am here looking to solve the same issue. We are currently leaning toward writing an XSLT to transform the XML results into INSERT statements, then running the resulting file of INSERT statements through a command-line SQL interpreter. Ideally, I would rather have an NUnit add-in/extension that handles all of this for me; unfortunately, I have not been able to find one.
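For what it's worth, the transform-to-INSERT-statements idea can be sketched in a few lines of Python rather than XSLT. This is a hypothetical sketch, not a tested tool: it assumes the NUnit 2.x result format (a `test-results` root with nested `test-case` elements carrying `name`, `executed`, `success`, and `time` attributes), and the `test_results` table name and columns are invented.

```python
# Sketch: turn an nunit-console result file into SQL INSERT statements.
# Assumes the NUnit 2.x XML format (<test-results> root, <test-case>
# elements with name/executed/success/time attributes) and a hypothetical
# test_results(run_id, test_name, executed, success, duration) table.
import xml.etree.ElementTree as ET

def nunit_xml_to_inserts(xml_text, run_id):
    root = ET.fromstring(xml_text)
    statements = []
    for case in root.iter("test-case"):
        name = case.get("name", "").replace("'", "''")  # escape single quotes
        executed = 1 if case.get("executed") == "True" else 0
        success = 1 if case.get("success") == "True" else 0
        duration = float(case.get("time", "0") or 0)
        statements.append(
            "INSERT INTO test_results "
            "(run_id, test_name, executed, success, duration) "
            f"VALUES ({run_id}, '{name}', {executed}, {success}, {duration});"
        )
    return statements
```

The resulting statements could be written to a file and piped through sqlcmd or another command-line SQL interpreter; for anything beyond a one-off import, parameterized queries would be safer than string-built SQL.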

Lelia answered 7/12, 2009 at 17:15 Comment(0)
1

To build off IainMH's answer, you may want to take a look at using Trac with Bitten, an open-source build system that can run NUnit tests and report the results. I currently use it for that exact functionality.

Foppish answered 7/12, 2009 at 17:31 Comment(0)
1

If you are using MS SQL Server, you can import all of the XML files into a common column of the [xml] datatype. You can then run XPath queries, searches, and transformations over that column.
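The same kind of XPath-driven summarization can be sketched client-side before (or instead of) loading the XML into the database. The sketch below uses Python's `xml.etree.ElementTree` with its limited XPath support, and again assumes the NUnit 2.x attribute names (`name`, `success="True"/"False"`); it is an illustration of the querying idea, not SQL Server's actual `nodes()`/`value()` syntax.

```python
# Sketch: XPath-style queries over an NUnit result document, mirroring
# the kind of summarization you could do against an [xml] column.
# Assumes NUnit 2.x attributes (success="True"/"False").
import xml.etree.ElementTree as ET

def summarize_run(xml_text):
    root = ET.fromstring(xml_text)
    cases = root.findall(".//test-case")          # all test cases in the run
    failures = [c.get("name") for c in cases
                if c.get("success") == "False"]   # failed cases by name
    return {"total": len(cases),
            "failed": len(failures),
            "failures": failures}
```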

Pyrexia answered 8/12, 2009 at 10:18 Comment(0)
1

Another alternative to CruiseControl or TeamCity, if you're strapped for cash, is Atlassian's Bamboo. I'm a huge fan of their software for its ease of use, and they have a deal where you can get Bamboo for 10 bucks.

Commotion answered 10/12, 2009 at 16:56 Comment(0)
1

We'd hoped to avoid this, but we've generated a database schema from the NUnit result XML schema. It's a bit deficient, however, because NUnit does some (inaccurate and strange) processing to determine some of the critical statistics ("ignored" vs. "not run", for example).

We're still hoping to find a schema/process that is NOT a complete CI build system, one that would allow us to customize a database for importing the results. For now we're using a hand-rolled database that will need a lot of customization to get the reporting we want.
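A minimal hand-rolled schema for tracking results across runs might look like the sketch below. SQLite is used purely for illustration (any RDBMS would do), and every table and column name here is invented for the example rather than taken from NUnit:

```python
# Sketch of a minimal schema for tracking NUnit results across runs.
# SQLite is used for illustration; table/column names are invented.
import sqlite3

SCHEMA = """
CREATE TABLE test_run (
    run_id     INTEGER PRIMARY KEY,
    started_at TEXT NOT NULL            -- ISO-8601 timestamp of the run
);
CREATE TABLE test_result (
    run_id    INTEGER NOT NULL REFERENCES test_run(run_id),
    test_name TEXT NOT NULL,
    category  TEXT,                     -- custom category for sorting
    success   INTEGER NOT NULL,         -- 1 = pass, 0 = fail
    duration  REAL
);
"""

def open_results_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn

def failure_rate(conn, test_name):
    # Failure rate for one test across all recorded runs.
    row = conn.execute(
        "SELECT AVG(1 - success) FROM test_result WHERE test_name = ?",
        (test_name,),
    ).fetchone()
    return row[0]
```

Keeping one row per test per run is what makes over-time queries (failure rates near releases, flaky tests, per-category trends) a simple GROUP BY rather than re-parsing old XML files.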

Emphatic answered 21/5, 2010 at 20:7 Comment(1)
Would you be willing to share what you have so far? I am coming back to this and considering taking it on personally as an addition to NUnit. – Lelia
-3

Why do you need to have the results in a database? Who is going to use them? The number of failures cannot be large; if it is (repeatedly), your development process is wrong. Fix the process. Eliminate waste (one of the lean principles); don't collect it.

Take smaller steps (shorter iterations, continuous build), eliminate dependencies.

This is not commonly done, because projects that have these kinds of problems don't deliver but eventually get cancelled.

[edit] Michael, tracking NUnit failures over a longer time provides zero value. You need a short feedback loop. Fix the problems now. If you wait until you have accumulated a lot of problems, you are going to be overwhelmed by the noise.

Good problem tracking is done at the right (highest possible abstraction) level. Definitely not at the unit-test level.

Martinez answered 19/5, 2010 at 20:14 Comment(4)
Tracking what fails (and when and how often) is key to identifying the root causes of your problems. Without it you waste time trying to solve the wrong process problems. I'd say good problem tracking is critical to a lean development process. – Workbag
Michael, exactly. We're suffering with a legacy codebase and a wildly insufficient test set; the tests we do have (a large number, but providing incomplete coverage) show a certain amount of inconsistency in their results. Tracking the inconsistencies over long periods of time (months) is important to our prioritization. – Emphatic
Although I agree in principle, in our case analysis of even one day's failures can provide a great deal of insight. We test how our software interacts with our hardware. If a failure occurs only on color sensors, that gives us an idea of where to look for the issue; if it occurs only on high-resolution black-and-white sensors, we will look elsewhere. There are a dozen properties of our sensors that may (or may not) be important. Analyzing the results in the raw XML is impractical. I am attempting to eliminate waste. – Lelia
In addition to analyzing the results for a single test run, there is a great deal of value in analyzing the results over time. If we can look at the results for all the builds over the last 6 months, we may see trends that are less obvious when looking at individual builds. Do the unit tests tend to fail more often when we are approaching a release? Do they fail more at the beginning of sprints with a great deal of new features? There is much that can be learned by carefully analyzing your data. Never underestimate the power of information. :-) – Lelia

© 2022 - 2024 — McMap. All rights reserved.