.NET Automated Testing for Long-Running Processes

We would like to do some automated integration testing of a process that requires sending data to an external source and then validating that the data has been correctly displayed on their website.

However, it may take several hours before the data appears on the website.

The problem with traditional NUnit or MSTest is that the test will be held up for hours waiting for the result.

I have seen PNUnit, which could be used to run all the tests in parallel, but that doesn't seem like an elegant solution to me. What if there are 1,000 tests? Won't this create loads of processes/threads on the server? And how would we keep track of all of them?

So, has anyone solved this problem? Did you home-grow a solution, or is there an open-source one?

Precipitate answered 3/11, 2011 at 23:39 Comment(2)
Thanks for the answers, guys. Nice ideas. I was hoping to get a response from someone who had implemented something like this. I am thinking the best way to approach this is to grind out some C# and create something from first principles. I am not sure NUnit/MSTest etc. can do this in an elegant way. – Precipitate
I think your test is far too wide in scope and includes many conditions that could go wrong that are not within your code. So it's no longer a "unit" test, but rather a "system" test, which is why you are finding it difficult to fit it into the unit-test mold. Unit tests are supposed to be small, isolated pieces of functionality ("units"), often run as part of a build process. I think you are looking to do more of a system test and should search for tools such as load testers or client-simulation testers, etc. Many of these tools are built for long-running processes. – Meseems

This problem can be easily resolved by separating test data insertion from verification. Just load all the available test data into the system, wait several hours until processing is done, and then execute the verification tests.
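
A minimal sketch of that split, assuming NUnit and two placeholder helpers, ExternalFeed.Send for the upload and WebsiteScraper.Contains for checking the third-party site, that you would replace with your real integration code. The "load" run records what it sent to a file; the "verify" run, scheduled hours later, reads that record back:

    using System;
    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class LoadTests
    {
        [Test]
        public void SendTestData()
        {
            // Tag the payload with a timestamp so each run is distinguishable.
            string payload = "order-42;100.00;" + DateTime.UtcNow.ToString("o");
            ExternalFeed.Send(payload);                       // placeholder upload call
            // Persist what was sent so the later verification run can find it.
            File.AppendAllText(@"C:\tests\pending.txt", payload + Environment.NewLine);
        }
    }

    [TestFixture]
    public class VerifyTests // scheduled to run several hours after LoadTests
    {
        [Test]
        public void SentDataAppearsOnWebsite()
        {
            foreach (string payload in File.ReadAllLines(@"C:\tests\pending.txt"))
            {
                Assert.IsTrue(WebsiteScraper.Contains(payload),   // placeholder check
                              "Payload not found on site: " + payload);
            }
        }
    }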

Imprint answered 26/11, 2011 at 14:53 Comment(4)
Nice idea. I don't think any of the test frameworks lend themselves to this, though. For example, with NUnit you run the tests and they pass or fail. Do you suggest continuously running the tests from some other process, then using NUnit to validate the data in the DB collected over the last day? – Precipitate
"All" frameworks lend themselves to this technique! You just have to load some predetermined test data that you look for later, or save your test data in your "insert/upload" test to the filesystem, then read that file in your "verify" test later on to check against. The trick is the timing which is why I don't think this is an appropriate unit test in the first place, but rather a "system test".Meseems
Unit test frameworks usually have a "SetUp" feature, which is typically used to prepare the environment for testing. You can use it to insert test data into your system: nunit.org/index.php?p=setupFixture&r=2.4 – Imprint
Ben, thanks for the answer, but no, NUnit, MSTest, etc. were not designed for this. They are completely synchronous, all-in-one, run-it-now, pass-or-fail. Maybe there is a test framework that can handle long-running asynchronous tests and give an appropriate interface and set of tools. I was hoping someone would know of one and the specifics of using it. – Precipitate

PNUnit seems like it would be a good solution. If you are worried about "too many processes/threads" on the server, just throttle how many tests PNUnit can run at once (say, at most N); when a test completes, schedule the next one. I'm not asserting that PNUnit knows how to do this; you may have to implement it yourself.
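
I don't know PNUnit's internals, so here is just a sketch of the throttling idea in plain C#, assuming you drive the long-running checks from your own runner: a semaphore caps how many tests are in flight, so 1,000 queued tests never become 1,000 simultaneous threads.

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    public static class ThrottledRunner
    {
        public static void RunAll(IEnumerable<Action> tests, int maxParallel)
        {
            var gate = new SemaphoreSlim(maxParallel);
            var running = new List<Task>();

            foreach (Action test in tests)
            {
                gate.Wait();                     // block until a slot frees up
                Action current = test;           // copy to avoid the modified-closure pitfall
                running.Add(Task.Factory.StartNew(() =>
                {
                    try { current(); }
                    finally { gate.Release(); }  // free the slot for the next test
                }));
            }

            Task.WaitAll(running.ToArray());     // wait for the tail end to finish
        }
    }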

Nasia answered 26/11, 2011 at 17:56 Comment(0)

There is a discussion about exactly this problem on the NUnit-Discuss Google Group right now: http://groups.google.com/group/nunit-discuss/browse_thread/thread/645ecaefb4f978fa?hl=en

I hope this helps :)

Hottentot answered 28/11, 2011 at 13:40 Comment(1)
Thanks, I think they are struggling to find a solution too. Perhaps this is too tough a problem at the moment :-) – Precipitate

Martin, before I get to my solution: for unit testing, it would seem to me that you want to test only what you can control. The above is more like what I call regression testing. I am assuming 'their' website is someone else's website. May I ask what happens if you follow the rules of the interface/integration but nothing ever appears on their screen? It may be a problem, but what would or could you do about it? Moreover, what happens when they change their website or algorithms? You will end up writing code based on what they do, which sucks.

That said, as mentioned above, you can separate the loading tests from the data-verification tests. I confess I know nothing of PNUnit, but simply throwing threads at it isn't going to solve the three-hour latency of each round-trip test.

If you need to run synchronously, you could load all the data in ClassInitialize(), then sleep until it is time to verify and run the actual tests.
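
A sketch of that synchronous variant under MSTest, reusing the same placeholder helpers (ExternalFeed.Send and WebsiteScraper.Contains stand in for your real upload and check). The fixed sleep is crude and ties up the runner for the whole wait:

    using System;
    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class RoundTripTests
    {
        [ClassInitialize]
        public static void LoadDataAndWait(TestContext context)
        {
            ExternalFeed.Send("order-42;100.00");   // placeholder upload call
            Thread.Sleep(TimeSpan.FromHours(3));    // block until the site should have the data
        }

        [TestMethod]
        public void DataIsVisibleOnWebsite()
        {
            Assert.IsTrue(WebsiteScraper.Contains("order-42"));  // placeholder check
        }
    }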

If it were me, I'd just have one test project for the loading tests and another for verifying the results a few hours later. Running it synchronously doesn't seem to buy you much beyond ensuring the precondition passes before testing the results, which can be handled in other ways as well.
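
If you do go the two-project route, the verifying project doesn't have to guess the exact latency; it can poll with a deadline instead. A small helper, sketched here with the same placeholder check as above:

    using System;
    using System.Threading;

    public static class Poll
    {
        // Re-evaluate the condition at a fixed interval until it holds or the timeout expires.
        public static bool Until(Func<bool> condition, TimeSpan timeout, TimeSpan interval)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (DateTime.UtcNow < deadline)
            {
                if (condition()) return true;
                Thread.Sleep(interval);
            }
            return false;
        }
    }

    // Usage inside a verification test:
    //   Assert.IsTrue(Poll.Until(() => WebsiteScraper.Contains(payload),
    //                            TimeSpan.FromMinutes(30), TimeSpan.FromMinutes(1)));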

Worker answered 28/11, 2011 at 22:10 Comment(2)
thanks for the suggestion. seems like the load now then verify later is popular. problem is no frameworks seem to support this so the options are just "roll your own" or start an open source initiative.Precipitate
i tried to address why i figured no frameworks do this and it is likely the fact that the other website is a black box and the tests that verify that black box should be in that solution and not the consuming application, but that is just my opinion. along the lines of separation of concernsWorker
