Does YAGNI also apply when writing tests?

When I write code, I only write the functions I need, as I need them.

Does this approach also apply to writing tests?

Should I write a test in advance for every use case I can think of just to play it safe, or should I only write tests for a use case as I come upon it?

Dropkick answered 3/6, 2009 at 15:44 Comment(3)
WTF is YANGI? Is it anything like TMTOWTDI?Wenzel
@darthcoder: YAGNI = You Aren't Gonna Need It.Active
No, Bill, it's "Ain't". Really, sounds sooo much better.Giovanna
13

I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.

YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.

In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or tests that repeat other tests and differ only in their data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information to people who use your method.

In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. On the other hand, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes the sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, you may miss this behavior. Obviously, this is related to the previous point. Only you can be sure whether a test is really needed or not.
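To make that maxint example concrete, here is a minimal JUnit 4 sketch; MathUtils and its add() method are hypothetical names invented for illustration, under the assumption that the sum should widen to long rather than silently overflow:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class MathUtilsEdgeCaseTest {

        // Hypothetical unit under test: widens to long so adding two
        // Integer.MAX_VALUE arguments does not wrap around.
        static class MathUtils {
            static long add(int a, int b) {
                return (long) a + b;
            }
        }

        @Test
        public void addsTypicalValues() {
            assertEquals(5L, MathUtils.add(2, 3));
        }

        @Test
        public void addsTwoMaxIntsWithoutOverflow() {
            // The boundary a single happy-path test would never exercise.
            assertEquals(2L * Integer.MAX_VALUE,
                         MathUtils.add(Integer.MAX_VALUE, Integer.MAX_VALUE));
        }
    }

The second test is the one that earns its keep; another copy of the first with different small numbers would just repeat it.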

Hoarse answered 3/6, 2009 at 15:52 Comment(0)
9

Yes, YAGNI absolutely applies to writing tests.

As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.

You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective: what you think isn't worth testing, someone else might think is well worth the effort.

Also, would I write tests to validate input? Absolutely. However, I would do it to a point. Say you have a function with 3 parameters that are ints and it returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to get you a good ROI and which are useless.
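As a rough sketch of that ROI judgment (the average() function below is hypothetical, invented just for illustration), a couple of representative tests cover the logic, and exhaustively permuting three int parameters would add nothing:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AverageTest {

        // Hypothetical function under test: three ints in, a double out.
        static double average(int a, int b, int c) {
            return (a + b + c) / 3.0;
        }

        @Test
        public void averagesTypicalValues() {
            assertEquals(2.0, average(1, 2, 3), 1e-9);
        }

        @Test
        public void averagesNegativeValues() {
            assertEquals(-2.0, average(-1, -2, -3), 1e-9);
        }

        // Stopping here is the YAGNI call: more permutations of ordinary ints
        // would only repeat the same logic with different data.
    }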

Nonoccurrence answered 3/6, 2009 at 15:47 Comment(9)
How about writing tests to test the use of a function in a way you never intend to use it? Would you test that to be sure it doesn't break in weird cases or do you just say YAGNI?Dropkick
+1 - No need to test simple setters and getters either. I've even seen tests to check that a constructor returned an object of the correct type. What could you do if that failed? :)Active
If you have a new use case that requires a function to be used in a novel way - write that new test. If the test fails, fix the function. The existing tests will ensure that you don't break the existing code.Perpetuity
Obviously, there are some common sense limits to my response below, which I think Joseph touched on nicely here. Generally, properties and certain other "trivial" aspects of code -- assuming they are truly trivial -- can be skipped over. (Again, coverage isn't the whole story -- make sure your tests THOROUGHLY cover all your logic branches.)Harvestman
Bill the Lizard: I can actually see a case for unit testing the "constructor" (well, initializer methods) in some languages, like Objective-C. There actually is a chance that those methods may return nil (ObjC's equivalent of null/Nothing) ... Yeah, you can't do much, but it does tell you your initializer is totally hosed and you better go fix the danged thing!Harvestman
@Dropkick I updated my answer to address your comment. Does that help?Nonoccurrence
@John Rudy: Yes, if it can go wrong, you need to test it. The particular test I was talking about was someone using instanceof to test that the return value from a Java constructor was the right object type. I'm pretty sure that can't fail, and I wouldn't know what to do if it did. (I do need to find out if there's any chance it can return null, though. As far as I know a constructor in Java will either throw an exception or return an object reference.)Active
@Bill the Lizard: Yep, I don't remember Java too well, but I'm reasonably certain it either throws an exception or returns an object of the correct type. :)Harvestman
Bill: a constructor can return null if you have fun with AOP :P I've done it; not very useful, but cool :)Noseband
4

Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
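A minimal sketch of one such cycle (ShoppingCart and total() are hypothetical names): the single failing test is written immediately before the code that makes it pass, so the only red on the board is the feature being built right now:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ShoppingCartTest {

        // Red: written first; it fails (or doesn't compile) until total() exists.
        @Test
        public void emptyCartHasZeroTotal() {
            assertEquals(0, new ShoppingCart().total());
        }

        // Green: the minimal implementation, written only after watching the test fail.
        static class ShoppingCart {
            int total() {
                return 0;
            }
        }
    }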

Motel answered 3/6, 2009 at 15:48 Comment(2)
There's a difference between a failed feature and unwritten code?Wilton
In TDD, the only test failure should be the test for the code you are about to write, and then you write the code that makes the test pass. Any other regression needs to be addressed.Motel
4

You should write the tests for the use cases you are going to implement during this phase of development.

This gives the following benefits:

  1. Your tests help define the functionality of this phase.
  2. You know when you've completed this phase because all of your tests pass.
Perpetuity answered 3/6, 2009 at 15:50 Comment(0)
3

You should write tests that cover all your code, ideally. Otherwise, the rest of your tests lose value, and you will end up debugging that piece of code repeatedly.

So, no. YAGNI does not include tests :)

Bier answered 3/6, 2009 at 15:49 Comment(4)
That's a nice world you're living in. What color is the sky?Fiddling
Just because the world is imperfect does not mean we should stop striving for perfection.Wenzel
Well, let's say I wrote a test that did something with an object. If it works, then I know that that class worked. That means I've covered the constructor and the properties, for instance. I'm not proposing that we write a test for every nook and cranny. It should simply cover all the code. If you don't test all your code, then you are not using TDD very well. You simply lose time in the long run.Bier
Actually, striving for perfection is exactly what we should stop doing at the point where it costs us more than living with the remaining imperfection.Fiddling
3

You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you describe of only writing tests for use cases as you come upon them does you no real good, and may in fact cause harm.

What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?

(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)

Mild Update

I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests later), or -- and this is my preference :) -- just write the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.

Harvestman answered 3/6, 2009 at 15:49 Comment(2)
But what if the use case never shows up in the software? Then the fact that it passes or fails is irrelevant, and it turns out that I wasted time trying to make a test pass for something that I don't need.Dropkick
If you are testing truly impossible branches, then yes, you're in a YAGNI world. But you do need to remember that the code you are writing today will be used in the future as well, and that use case that you thought was never going to happen just might a year down the road. Ultimately, I have to agree with tvanfosson: Only you know what code you really need to test. Just make sure to do so thoroughly. :)Harvestman
3

There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.

For use cases you know will get implemented, test cases are subject to diminishing returns: trying to cover every possible obscure corner case is not a useful goal when you can cover all the important and critical paths with half the work -- assuming, of course, that the cost of overlooking a rarely occurring error is endurable. I would certainly not settle for anything less than 100% code and branch coverage when writing avionics software.

Fiddling answered 3/6, 2009 at 15:55 Comment(2)
When would you write a test for an obscure use case? When you first add the method, or when you actually go to use it in the specific scenario?Dropkick
Usually a "use case" is a much higher level concept than a method. But in general TDD mandates that you write the test for a specific piece of functionality right before you implement it. Before you first add the method write a test that makes sure it does what it should do at that time. When you extend its functionality, extend or add tests before you do so.Fiddling
2

You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.

However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there are going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).

Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.
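A small JUnit 4 sketch of that idea (UserRegistration and register() are hypothetical names): one test for an identified use case, and one that pins down the "reasonable response" to an input nobody planned for:

    import org.junit.Test;

    public class UserRegistrationTest {

        // Hypothetical unit under test.
        static class UserRegistration {
            void register(String email) {
                if (email == null || !email.contains("@")) {
                    throw new IllegalArgumentException("Invalid email address: " + email);
                }
                // happy-path registration omitted
            }
        }

        // The use case that was actually identified.
        @Test
        public void acceptsValidEmail() {
            new UserRegistration().register("user@example.com");
        }

        // Not a planned use case, but an input that will show up eventually;
        // the reasonable response here is a clear error.
        @Test(expected = IllegalArgumentException.class)
        public void rejectsMalformedEmail() {
            new UserRegistration().register("not-an-email");
        }
    }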

Magallanes answered 3/6, 2009 at 15:58 Comment(0)
1

I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.

If you choose to test more incrementally, you might add to the doc comments that the function is "only tested for [certain kinds of input]; results for other inputs are undefined".
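For instance (a hypothetical sketch; the class, method and behavior are invented for illustration), the doc comment might read:

    public class DurationParser {

        /**
         * Parses a duration given in whole minutes, e.g. "15".
         *
         * Note: only tested for non-null strings containing a non-negative
         * integer; results for other inputs are undefined.
         */
        public int parseMinutes(String text) {
            return Integer.parseInt(text.trim());
        }
    }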

Dynel answered 3/6, 2009 at 16:14 Comment(0)
0

I frequently find myself writing tests, TDD, for cases that I don't expect the normal program flow to invoke. The "fake it 'til you make it" approach has me starting, generally, with a null input - just enough to have an idea in mind of what the function call should look like, what types its parameters will have and what type it will return.

To be clear, I won't just send null to the function in my test; I'll initialize a typed variable to hold the null value, so that when Eclipse's Quick Fix creates the function for me, it already has the right parameter type. But it's not uncommon that I won't expect the program to normally send a null to the function. So, arguably, I'm writing a test that I AGN.

But if I start with real values, sometimes it's too big a chunk: I'm both designing the API and pushing its real implementation from the beginning. So, by starting slow and faking it 'til I make it, I sometimes write tests for cases I don't expect to see in production code.
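Something like this minimal sketch, where Report and ReportFormatter are names made up for illustration; the point is the typed variable holding null, which lets Quick Fix generate a stub with the right parameter type:

    import static org.junit.Assert.assertNull;
    import org.junit.Test;

    public class ReportFormatterTest {

        @Test
        public void formatAcceptsNullReport() {
            Report report = null;          // a typed variable holding null, not a bare null literal
            ReportFormatter formatter = new ReportFormatter();
            assertNull(formatter.format(report));  // the smallest possible expectation to start with
        }

        // The stubs Quick Fix would generate; normal program flow may never
        // actually pass null here.
        static class Report {
        }

        static class ReportFormatter {
            String format(Report report) {
                return null; // fake it 'til you make it
            }
        }
    }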

Karolinekaroly answered 3/6, 2009 at 16:15 Comment(0)
0

If you're working in a TDD or XP style, you won't be writing anything "in advance" as you say; you'll be working on a very precise bit of functionality at any given moment, so you'll be writing all the tests necessary to make sure that bit of functionality works as you intend it to.

Test code is similar to "code" itself: you won't be writing code in advance for every use case your app has, so why would you write test code in advance?

Aqualung answered 3/6, 2009 at 16:46 Comment(0)
