Unit testing Anti-patterns catalogue

Anti-pattern: there must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:

  • Some repeated pattern of action, process or structure that initially appears to be beneficial, but ultimately produces more bad consequences than beneficial results, and
  • A refactored solution that is clearly documented, proven in actual practice and repeatable.

Vote for the TDD anti-pattern that you have seen "in the wild" one time too many.
See the blog post by James Carr and the related discussion on the testdrivendevelopment Yahoo group.

If you've found an 'unnamed' one.. post 'em too. One post per anti-pattern please to make the votes count for something.

My vested interest is to find the top-n subset so that I can discuss 'em in a lunchbox meet in the near future.

Sniper answered 2/12, 2008 at 11:24 Comment(7)
Aaron, you seem to be all over this one :) Would it be a good idea to add the tag-lines or slogans as comments so that we can have less scrolling.. what say?Sniper
This is coming up rather well.. thanks guys n gals. Keep 'em coming.. one of the most informative SO posts IMHOSniper
+1 love this thread!!! And most of these are so true and prevalent too!Inexpert
Nice thread, why is this community wiki though???Ceres
Coz it is kind of a poll - you wouldn't wanna be harvesting rep just coz you posted the most common type of anti-pattern ;)Sniper
Most answers I see are unit-testing anti-patterns but not TDD anti-patterns. For example, Happy Path is a QA anti-pattern but totally valid for TDD. In my opinion, TDD is to implement just enough to make it work by preferring the happy path and ignoring code coverage. So can we change the question title so that it better fits the answers :-)Donnie
@Donnie - Agreed. Not anti-patterns for the test driven practice.Sniper
70 votes

Second Class Citizens - test code isn't as well refactored as production code, containing a lot of duplicated code, making it hard to maintain tests.

Wheen answered 2/12, 2008 at 11:24 Comment(0)
67 votes

The Free Ride / Piggyback -- James Carr, Tim Ottinger
Rather than write a new test case method to test another/distinct feature/functionality, a new assertion (and its corresponding actions i.e. Act steps from AAA) rides along in an existing test case.
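
A minimal sketch of the piggybacking (JUnit 4; Session is a hypothetical class under test):

import static org.junit.Assert.*;
import org.junit.Test;

public class SessionTest {
    @Test
    public void testLogin() {
        Session session = new Session();    // hypothetical class under test
        session.login("bob", "secret");
        assertTrue(session.isLoggedIn());

        // The free ride: logout is a distinct feature, but its Act and
        // Assert steps hitch along here instead of getting their own test.
        session.logout();
        assertFalse(session.isLoggedIn());
    }
}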

Ploce answered 2/12, 2008 at 11:24 Comment(7)
Yeah, that's my favorite one. I do it all the time. Oh... wait... you said that this was a bad thing. :-)Uraninite
I'm not so sure this is an anti-pattern. All invariants must be true after every possible mutator call. So you will want to check that every invariant is true after every combination of mutator and input data that you are testing. But you will want to reduce duplication, and ensure you check all the invariants, including those that do not currently cause test failures. So you put them all in a checkInvariants() verification function and use that in every test. The code changes and another invariant is added. You put that in the function too, of course. But it is a freerider.Tribade
@Tribade - Over time, the test name no longer matches all the things it tests. Also you have some thrashing due to intertwining tests ; a failure does not point out the exact cause of failure. e.g. a canonical example of this test would read something like Opaque Superset of all Arrange steps >> Act >> Assert A >> Act some more >> Assert B >> Act some more >> Assert C. Now ideally if A and C are broken, you should see 2 test failures. With the above test, you'd see only one, then you fix A and on the next run, it'd tell you that now C is broken. now imagine 5-6 distinct tests fused together..Sniper
"the test name no longer matches all the things it tests" Only if the test is named for the post condition that was originally present. If you name for the combination of method-name, set-up state and input data (method arguments), there is no problem.Tribade
"a failure does not point out the exact cause of failure" no assertion failure ever indicates the cause of a failure. That requires some delving into the implementation details: debugging for a regression failure, your knowledge of the development state for some TDD work.Tribade
"Arrange steps >> Act >> Assert A >> Act some more >> Assert B >> Act some more >> Assert C" you seem to be talking about a different kind of anti-pattern (called greedy test, IIRC) here, in which additional assertinos and actions have been added. I'm dead against that anti-pattern. But this reply is about "a new assertion rides along in an existing test case".Tribade
@Tribade - "greedy test" is what this post is about - the emphasis is on "orthogonal/new feature/functionality" and not on "an assertion". The other scenario is relatively rare. To prevent ambiguity, I'll update the post. In my experience, you can write tests that isolate a failure down to a specific source line(/method). Also I've moved to testing behavior (instead of methods) - leads to less brittle tests.Sniper
64 votes

Happy Path

The test stays on happy paths (i.e. expected results) without testing for boundaries and exceptions.

JUnit Antipatterns
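
A minimal sketch, assuming a hypothetical Divider class; only the expected input is ever exercised:

import static org.junit.Assert.*;
import org.junit.Test;

public class DividerTest {
    @Test
    public void testDivide() {
        assertEquals(2, new Divider().divide(10, 5)); // the happy path
        // never tested: divide(10, 0), negative operands, Integer.MIN_VALUE / -1 ...
    }
}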

Shriek answered 2/12, 2008 at 11:24 Comment(1)
Cause: Either exaggerated time constraints or blatant laziness. Refactored solution: Get some time to write more tests to get rid of the false positives. The latter cause needs a whip. :)Tisiphone
59 votes

The Local Hero

A test case that is dependent on something specific to the development environment it was written on in order to run. The result is the test passes on development boxes, but fails when someone attempts to run it elsewhere.

The Hidden Dependency

Closely related to the local hero, a unit test that requires some existing data to have been populated somewhere before the test runs. If that data wasn’t populated, the test will fail and leave little indication to the developer what it wanted, or why… forcing them to dig through acres of code to find out where the data it was using was supposed to come from.


Sadly seen this far too many times with ancient .dlls which depend on nebulous and varied .ini files which are constantly out of sync on any given production system, let alone extant on your machine without extensive consultation with the three developers responsible for those dlls. Sigh.
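
A minimal sketch of the hidden dependency (UserDao and User are hypothetical); nothing in the test says where user 42 is supposed to come from:

import static org.junit.Assert.*;
import org.junit.Test;

public class UserDaoTest {
    @Test
    public void testFindUser() {
        User user = new UserDao().findById(42); // assumes row 42 already exists, somehow
        assertEquals("admin", user.getName());  // NullPointerException on a fresh database
    }
}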

Tashinatashkent answered 2/12, 2008 at 11:24 Comment(1)
That's a nice example of the WOMPC developer acronym. "Works on my PC!" (usually said to get testers off your back.)Venosity
58 votes

Chain Gang

A couple of tests that must run in a certain order, i.e. one test changes the global state of the system (global variables, data in the database) and the next test(s) depends on it.

You often see this in database tests. Instead of doing a rollback in teardown(), tests commit their changes to the database. Another common cause is that changes to the global state aren't wrapped in try/finally blocks which clean up should the test fail.
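
A sketch of the coupling, assuming a hypothetical CustomerDao that commits directly to the database:

import static org.junit.Assert.*;
import org.junit.Test;

public class CustomerDaoTest {
    @Test
    public void test1InsertCustomer() {
        new CustomerDao().insert(new Customer("ACME")); // commits; no rollback in teardown
    }

    @Test
    public void test2FindCustomer() {
        // green only if test1 ran first against the same database;
        // JUnit makes no ordering guarantee, which is exactly why this pair flickers
        assertNotNull(new CustomerDao().findByName("ACME"));
    }
}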

Ploce answered 2/12, 2008 at 11:24 Comment(1)
this one is just plain nasty.. Breaks the 'tests must be independent' notion. But I've read about it in multiple places.. guess 'popular TDD' is pretty messed upSniper
56 votes

The Mockery
Sometimes mocking can be good, and handy. But sometimes developers can lose themselves in their effort to mock out what isn't being tested. In this case, a unit test contains so many mocks, stubs, and/or fakes that the system under test isn't even being tested at all; instead, the data returned from mocks is what is being tested.

Source: James Carr's post.
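
A hedged sketch (InvoiceService and TaxCalculator are hypothetical; Mockito-style stubbing): the assertion merely reads back the canned value, so no production logic is exercised:

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;
import org.junit.Test;

public class InvoiceServiceTest {
    @Test
    public void testTotal() {
        TaxCalculator tax = mock(TaxCalculator.class);       // hypothetical collaborator
        when(tax.addTax(100.0)).thenReturn(119.0);           // canned answer

        InvoiceService service = new InvoiceService(tax);    // hypothetical class under test
        assertEquals(119.0, service.totalFor(100.0), 0.001); // restates the stub, tests nothing
    }
}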

Sniper answered 2/12, 2008 at 11:24 Comment(3)
I believe the cause for this is that your class under test has way too many dependencies. Refactored alternative is to extract code that can be isolated.Tisiphone
@Spoike; If you're in a layered architecture that really depends on the role of the class; some layers tend to have more dependencies than others.Vaunt
I saw recently, in a respected blog, the creation of a mock entity set up to be returned from a mock repository. WTF? Why not just instantiate a real entity in the first place. Myself, I just got burned by a mocked interface where my implementation was throwing NotImplementedExceptions all around.Scholium
40 votes

The Silent Catcher -- Kelly?
A test that passes if an exception is thrown.. even if the exception that actually occurs is different from the one the developer intended.
See Also: Secret Catcher

[Test]
[ExpectedException(typeof(Exception))]
public void ItShouldThrowDivideByZeroException()
{
   // some code that throws another exception yet passes the test
}
Sniper answered 2/12, 2008 at 11:24 Comment(1)
That one's tricky and dangerous (ie makes you think you tested code that always explodes every time it's run). That's why I try to be specific about both an exception class and something unique within the message.Stripy
34 votes

Excessive Setup -- James Carr
A test that requires a huge setup in order to even begin testing. Sometimes several hundred lines of code are used to prepare the environment for one test, with several objects involved, which can make it difficult to really ascertain what is tested due to the “noise” of all of the setup going on. (Src: James Carr's post)

Sniper answered 2/12, 2008 at 11:24 Comment(2)
I understand that excessive test setup usually points to a) poorly structured code or b) insufficient mocking, correct?Akkerman
Well every situation could be different. It could be due to high coupling. But usually it is a case of overspecification, specifying (mock expectations for) each and every collaborator in the scenario - this couples the test to the implementation and makes them brittle. If the call to the collaborator is an incidental detail of the test, it should not be in the test. This also helps keep the test short and readable.Sniper
34 votes

The Inspector
A unit test that violates encapsulation in an effort to achieve 100% code coverage. It knows so much about what is going on in the object that any attempt to refactor will break the existing test and require any change to be reflected in the unit test.


'how do I test my member variables without making them public... just for unit-testing?'

Sniper answered 2/12, 2008 at 11:24 Comment(2)
Cause: Absurd reliance on white-box testing. There are tools for generating these kinds of tests, like Pex on .NET. Refactored solution: Test for behavior instead, and if you really need to check boundary values then let automated tools generate the rest.Tisiphone
Before Moq came around, I had to abandon mocking frameworks in favor of handwriting my mocks. It was just too easy to tie my tests to the actual implementation, making any refactoring next to impossible. I can't tell the difference, other than with Moq, I rarely make these kinds of mistakes.Scholium
32 votes

Anal Probe

A test which has to use insane, illegal or otherwise unhealthy ways to perform its task, like: reading private fields using Java's setAccessible(true), extending a class to access protected fields/methods, or having to put the test in a certain package to access package-global fields/methods.

If you see this pattern, the classes under test use too much data hiding.

The difference between this and The Inspector is that the class under test tries to hide even the things you need to test. So your goal is not to achieve 100% test coverage but to be able to test anything at all. Think of a class that has only private fields, a run() method without arguments and no getters at all. There is no way to test this without breaking the rules.
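
A sketch of the reflective break-in described above (Counter is hypothetical: private fields, a no-argument run(), no getters at all):

import static org.junit.Assert.*;
import java.lang.reflect.Field;
import org.junit.Test;

public class CounterTest {
    @Test
    public void testRunIncrementsCount() throws Exception {
        Counter counter = new Counter();   // hypothetical class under test
        counter.run();

        Field count = Counter.class.getDeclaredField("count");
        count.setAccessible(true);         // break the rules to see anything at all
        assertEquals(1, count.getInt(counter));
    }
}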


Comment by Michael Borgwardt: This is not really a test antipattern, it's pragmatism to deal with deficiencies in the code being tested. Of course it's better to fix those deficiencies, but that may not be possible in the case of 3rd party libraries.

Aaron Digulla: I kind of agree. Maybe this entry is really better suited for a "JUnit HOWTO" wiki and not an antipattern. Comments?

Ploce answered 2/12, 2008 at 11:24 Comment(10)
isn't this the same as the Inspector?Sniper
No, the inspector strives to achieve the utmost code coverage. This one here tries to test anything at all. Think of a class which has only private fields, a run() method without arguments and no getters at all.Ploce
Hmm.. this line 'the class under test tries to hide even the things you need to test' indicates a power struggle between the class and the test. If it should be tested.. it should be publicly reachable somehow.. via class behavior/interface.. this somehow smells of breaching encapsulationSniper
This most often happens when you need to access some service from a third party API. Try to write a test for the Java Mail API or MQSeries which doesn't actually modifies any data or needs a running server ...Ploce
npellow: Maven2 has a plugin for that, hasn't it?Ploce
This is not really a test antipattern, it's pragmatism to deal with deficiencies in the code being tested. Of course it's better to fix those deficiencies, but that may not be possible in the case of 3rd party libraries.Aggrieve
@Michael: Yes the antipattern here is exactly that the test should be testing externally visible behavior instead of poking into internals. Such tests frequently break when the SUT is refactored... Same as inspector. The test author is doing the easy thing instead of the right thing.. this anti-pattern is a deodorant to mask the design smells of the code.. Over an extended period, you have a tangled mess of tests that are a pain to maintain.Sniper
@Gishu: Still, sometimes you cannot do the right thing - for instance when, as I wrote, your test involves code that you don't control.Aggrieve
@Michael: Aah.. you're speaking for scenarios involving legacy code/third party code. This post (most of it) deals with greenfield TDD if I'm not mistaken. For legacy code, it might be ok (although I'd still try to fix the design if it's a 1-2 day effort). For third party code, you definitely should not be testing it. e.g. I'd not write unit tests for classes in the .net framework... in short you don't write tests for code that you don't control. What you might want to do there is write interface level tests so that you know if a new version of the dll breaks your code.Sniper
IDK, it must have some sort of side effect. I'd test the side effect. Not sure what you mean about testing third party API, I'd argue you should wrap that in your own code that you can test was used correctly, then integration test that code against the third party API. Wouldn't unit test third party code.Stripy
26 votes

The Test With No Name -- Nick Pellow

The test that gets added to reproduce a specific bug in the bug tracker and which its author thinks does not warrant a name of its own. Instead of enhancing an existing, lacking test, a new test is created called testForBUG123.

Two years later, when that test fails, you may need to first try and find BUG-123 in your bug tracker to figure out the test's intent.

Mastoid answered 2/12, 2008 at 11:24 Comment(2)
So true. Tho that is slightly more helpful than a test called "TestMethod"Gauffer
unless the bugtracker changes, and you lose the old tracker and its issue identifiers...so PROJECT-123 no longer means anything....Inexpert
25 votes

The Slow Poke

A unit test that runs incredibly slow. When developers kick it off, they have time to go to the bathroom, grab a smoke, or worse, kick the test off before they go home at the end of the day. (Src: James Carr's post)

a.k.a. the tests that won't get run as frequently as they should

Sniper answered 2/12, 2008 at 11:24 Comment(4)
Some tests run slowly by their very nature. If you decide to not run these as often as the others, then make sure that they at least run on a CI server as often as possible.Kakalina
This is an obvious question but what are the most general ways to fix this?Akkerman
This initially seems beneficial, eh?Elemental
@TopherHunt Typically the tests are slow because they have some expensive dependency (ie filesystem, database). The trick is to analyze the dependencies until you see the problem, then push the dependency up the callstack. I wrote a case study where my students took their unit-test suite from 77 seconds to 0.01 seconds by fixing their dependencies: github.com/JoshCheek/fast_testsStripy
20 votes

The Butterfly

You have to test something which contains data that changes all the time, like a structure which contains the current date, and there is no way to nail the result down to a fixed value. The ugly part is that you don't care about this value at all. It just makes your test more complicated without adding any value.

The flap of its wings can cause a hurricane on the other side of the world. -- Edward Lorenz, The Butterfly Effect
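
One possible way out is to make the volatile value injectable rather than calling new Date() inside the code under test -- a sketch assuming a hypothetical Receipt class and java.time.Clock:

import static org.junit.Assert.*;
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.Test;

public class ReceiptTest {
    @Test
    public void testHeaderCarriesIssueDate() {
        // pin down the flapping wing: the clock is fixed, so the output is too
        Clock fixed = Clock.fixed(Instant.parse("2008-12-02T11:24:00Z"), ZoneOffset.UTC);
        Receipt receipt = new Receipt(fixed);  // hypothetical class under test
        assertTrue(receipt.header().contains("2008-12-02"));
    }
}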

Ploce answered 2/12, 2008 at 11:24 Comment(3)
What is the anti-pattern here: What does a test like this look like? Is there a fix? Is there any arguable advantage to the code-under-test to factor out a dependency like System.DateTime.Now, besides having simpler or more deterministic unit tests?Mitch
In Java, an example would be to call toString() on an object which doesn't override the method. That will give you the ID of the object, which depends on the memory address. Or toString() contains the primary key of the object, and that changes every time you run the test. There are three ways to fix this: 1. change the code you're testing, 2. use regexps to remove the variable parts of the test results, or 3. use powerful tools to override system services so they return predictable results.Ploce
The underlying cause for this anti-pattern is that the code under test doesn't care how much effort it might be to test it. So the whim of a developer is the wing of the butterfly which causes problems elsewhere.Ploce
19 votes

The Flickering Test (Source : Romilly Cocking)

A test which just occasionally fails, not at specific times, and is generally due to race conditions within the test. Typically occurs when testing something that is asynchronous, such as JMS.

Possibly a superset of the 'Wait and See' anti-pattern and 'The Sleeper' anti-pattern.

The build failed, oh well, just run the build again. -- Anonymous Developer

Threesome answered 2/12, 2008 at 11:24 Comment(5)
@Stuart - a must see video describing this is "Car Stalled - Try Now!" videosift.com/video/… This pattern could also be called "Try Now!", or just - "The Flakey Test"Mastoid
I once wrote a test for a PRNG that ensured a proper distribution. Occasionally, it would fail at random. Go figure. :-)Kakalina
Wouldn't this be a good test to have? If a test ever fails, you need to track down the source of the problem. I fought with someone about a test which failed between 9p and midnight. He said it was random/intermittent. It was eventually traced to a bug dealing with timezones. Go figure.Polytrophic
@Christian Vest Hansen: couldn't you seed it?Bullivant
@trenton It's only a good test to have if the developers can be bothered to track it down, instead of just ignoring it (which they can get away with, as it passes most of the time).Yesteryear
19 votes

Wait and See

A test that runs some set up code and then needs to 'wait' a specific amount of time before it can 'see' if the code under test functioned as expected. A testMethod that uses Thread.sleep() or equivalent is most certainly a "Wait and See" test.

Typically, you may see this if the test is testing code which generates an event external to the system such as an email, an http request or writes a file to disk.

Such a test may also be a Local Hero since it will FAIL when run on a slower box or an overloaded CI server.

The Wait and See anti-pattern is not to be confused with The Sleeper.
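
A self-contained sketch of the smell, next to a latch-based alternative (as the comments below also suggest) that waits no longer than it must:

import static org.junit.Assert.*;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.Test;

public class WaitAndSeeTest {
    @Test
    public void waitAndSee() throws Exception {
        AtomicBoolean done = new AtomicBoolean(false);
        new Thread(() -> done.set(true)).start();
        Thread.sleep(1000);          // guessed duration: too long on a fast box,
        assertTrue(done.get());      // too short on an overloaded CI server
    }

    @Test
    public void waitExactlyAsLongAsNeeded() throws Exception {
        CountDownLatch latch = new CountDownLatch(1);
        new Thread(latch::countDown).start();
        assertTrue(latch.await(5, TimeUnit.SECONDS)); // an upper bound, not a fixed sleep
    }
}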

Mastoid answered 2/12, 2008 at 11:24 Comment(5)
Hmm.. well I use something like this. how else would I be able to test multi-threaded code?Sniper
@Gishu, do you really want to unit test multiple threads running concurrently? I try to just unit test whatever the run() method does in isolation. An easy way to do this is by calling run() - which will block, instead of start() from the unit test.Mastoid
@Sniper use CountDownLatches, Semaphores, Conditions or the like, to have the threads tell each other when they can move on to the next level.Kakalina
An example: madcoderspeak.blogspot.com/2008/11/… Brew button evt. The observer is polling at intervals and raising changed events.. in which case I add a delay so that the polling thread gets a chance to run before the test exits.Sniper
I think the cartoon link is broken.Bullivant
17 votes

Inappropriately Shared Fixture -- Tim Ottinger
Several test cases in the test fixture do not even use or need the setup / teardown. Partly due to developer inertia about creating a new test fixture... easier to just add one more test case to the pile

Sniper answered 2/12, 2008 at 11:24 Comment(1)
It may also be that the class under test is trying to do too much.Infective
16 votes

The Giant

A unit test that, although it is validly testing the object under test, can span thousands of lines and contain many many test cases. This can be an indicator that the system under test is a God Object (James Carr's post).

A sure sign for this one is a test that spans more than a few lines of code. Often, the test is so complicated that it starts to contain bugs of its own or flaky behavior.

Sniper answered 2/12, 2008 at 11:24 Comment(0)
15 votes

I'll believe it when I see some flashing GUIs
An unhealthy fixation/obsession with testing the app via its GUI 'just like a real user'

Testing business rules through the GUI is a terrible form of coupling. If you write thousands of tests through the GUI, and then change your GUI, thousands of tests break.
Rather, test only GUI things through the GUI, and couple the GUI to a dummy system instead of the real system, when you run those tests. Test business rules through an API that doesn't involve the GUI. -- Bob Martin

“You must understand that seeing is believing, but also know that believing is seeing.” -- Denis Waitley

Sniper answered 2/12, 2008 at 11:24 Comment(5)
If you thought flashing GUIs were wrong, I saw someone who wrote a jUnit test that started up the GUI and needed user interaction to continue. It hung the rest of the test suite. So much for test automation!Tisiphone
I disagree. Testing GUIs is hard, but they are also a source of errors. Not testing them is just lazy.Lesleylesli
the point here is not that you shouldn't test GUIs but rather that you shouldn't test only via the GUI. You can perform 'headless' testing without the GUI. Keep the GUI as thin as possible - use a flavor of MVP - you can then get away with not testing it at all. If you find that you have bugs cropping up in the thin GUI layer all the time, cover it with tests.. but most of the time, I don't find it worth the effort. GUI 'wiring' errors are usually easier to fix...Sniper
@Spoike: Guided manual tests aren't bad, nor is using jUnit (or any other unit testing framework) to drive automated testing that aren't unit tests. You just shouldn't put those in the same project, nor treat them like unit tests (e.g. run constantly, or after every build).Mitch
@MerlynMorgan-Graham I agree, and I didn't mean that you shouldn't test the GUI. The conviction held by team members that it was OK to mix guided manual tests with automatic ones, was disturbing me. I've found out later it was an excellent way to get everyone who are not used to TDD to stop using it. I find that mixing functional tests (which are volatile) with unit tests (which are supposed to be stable) is bad if you want to follow TDD process.Tisiphone
14 votes

The Sleeper, aka Mount Vesuvius -- Nick Pellow

A test that is destined to FAIL at some specific time and date in the future. This often is caused by incorrect bounds checking when testing code which uses a Date or Calendar object. Sometimes, the test may fail if run at a very specific time of day, such as midnight.

'The Sleeper' is not to be confused with the 'Wait And See' anti-pattern.

That code will have been replaced long before the year 2000 -- Many developers in 1960
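
A sketch of the kind of example requested in the comments below: a date that was comfortably in the future when the test was written.

import static org.junit.Assert.*;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import org.junit.Test;

public class LicenceTest {
    @Test
    public void testLicenceStillValid() {
        Date expiry = new GregorianCalendar(2010, Calendar.JANUARY, 1).getTime();
        // green on every run until 1 Jan 2010 -- then it erupts, forever
        assertTrue(new Date().before(expiry));
    }
}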

Mastoid answered 2/12, 2008 at 11:24 Comment(2)
I'd rather call this a dormant Volcano :).. but I know what you're talking about.. e.g. a date chosen as a future date for a test at the time of writing will become a present/past date when that date goes by.. breaking the test. Could you post an example.. just to illustrate this.Sniper
@Sniper - +1 . I was thinking the same, but couldn't decide between the two. I updated the title to make this a little clearer ;)Mastoid
11 votes

got bit by this today:

Wet Floor:
The test creates data that is persisted somewhere, but the test does not clean up when finished. This causes tests (the same test, or possibly other tests) to fail on subsequent test runs.

In our case, the test left a file lying around in the "temp" dir, with permissions from the user that ran the test the first time. When a different user tried to test on the same machine: boom. In the comments on James Carr's site, Joakim Ohlrogge referred to this as the "Sloppy Worker", and it was part of the inspiration for "Generous Leftovers". I like my name for it better (less insulting, more familiar).
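
A sketch showing both halves -- the wet floor, and the JUnit TemporaryFolder rule (mentioned in the comments below) that mops up automatically:

import static org.junit.Assert.*;
import java.io.File;
import java.io.IOException;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class ExportTest {
    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void wetFloor() throws IOException {
        File report = new File(System.getProperty("java.io.tmpdir"), "report.csv");
        assertTrue(report.createNewFile()); // false on the second run: the file is still there
        // ... and nothing ever deletes it
    }

    @Test
    public void dryFloor() throws IOException {
        File report = tmp.newFile("report.csv"); // deleted automatically after the test
        assertTrue(report.exists());
    }
}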

Uvarovite answered 2/12, 2008 at 11:24 Comment(2)
You can use JUnit's TemporaryFolder rule to avoid wet floors.Dink
This kind of relates to a Continuous Integration anti-pattern. In CI, every developer should have his/her own work space and resources, and the build machine should be its own environment as well. Then you avoid things like permission problems (or maybe you end up hiding them so that they only turn up in production.)Assignee
11 votes

The Dead Tree

A test where a stub was created, but the test wasn't actually written.

I have actually seen this in our production code:

class TD_SomeClass {
  public void testAdd() {
    assertEquals(1+1, 2);  // compares two constants: always green, touches no production code
  }
}

I don't even know what to think about that.

Mccarty answered 2/12, 2008 at 11:24 Comment(2)
:) - also known as Process Compliance Backdoor.Sniper
We had an example of this recently in a test and method-under-test that had been refactored repeatedly. After a few iterations, the test became a call to the method-under-test. And because the method now returned void, there weren't any assertions to be asserted. So basically, the test was just making sure the method didn't throw an exception. Didn't matter if it actually did anything useful or correctly. I found it in code review and asked, "So ... what are we even testing here?"Assignee
11 votes

The Cuckoo -- Frank Carver
A unit test which sits in a test case with several others, and enjoys the same (potentially lengthy) setup process as the other tests in the test case, but then discards some or all of the artifacts from the setup and creates its own.
Advanced Symptom of : Inappropriately Shared Fixture

Sniper answered 2/12, 2008 at 11:24 Comment(0)
10 votes

The Environmental Vandal

A 'unit' test which for various 'requirements' starts spilling out into its environment, using and setting environment variables / ports. Running two of these tests simultaneously will cause 'unavailable port' exceptions etc.

These tests will be intermittent, and leave developers saying things like 'just run it again'.

One solution I've seen is to randomly select a port number to use. This reduces the possibility of a conflict, but clearly doesn't solve the problem. So if you can, always mock the code so that it doesn't actually allocate the unsharable resource.
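
Besides mocking, one mitigation (my suggestion, not the answer's recommendation) is to let the OS hand out an ephemeral port via new ServerSocket(0) -- a sketch:

import static org.junit.Assert.*;
import java.io.IOException;
import java.net.ServerSocket;
import org.junit.Test;

public class PortTest {
    @Test
    public void vandal() throws IOException {
        try (ServerSocket socket = new ServerSocket(8080)) { // two concurrent runs: BindException
            assertTrue(socket.isBound());
        }
    }

    @Test
    public void polite() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {    // the OS picks a free port
            int port = socket.getLocalPort();                // hand this to the code under test
            assertTrue(port > 0);
        }
    }
}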

Telecommunication answered 2/12, 2008 at 11:24 Comment(2)
@gcrain.. tests should be deterministic. IMO a better approach would be to use a 'well-known-in-the-team' port for testing and cleanup before and after the test correctly such that it's always available...Sniper
@gishu - the problem is not that there are no setup() and teardown() methods to handle using these ports. the problem is for example running a CI server, and multiple versions of the test run at the same time, attempting to use the same, hardcoded-in-the-test port numbersTelecommunication
10 votes

The Turing Test

A testcase automagically generated by some expensive tool that has many, many asserts gleaned from the class under test using some too-clever-by-half data flow analysis. Lulls developers into a false sense of confidence that their code is well tested, absolving them from the responsibility of designing and maintaining high quality tests. If the machine can write the tests for you, why can't it pull its finger out and write the app itself!

Hello stupid. -- World's smartest computer to new apprentice (from an old Amiga comic).

Vandal answered 2/12, 2008 at 11:24 Comment(0)
10 votes

The Forty Foot Pole Test

Afraid of getting too close to the class they are trying to test, these tests act at a distance, separated by countless layers of abstraction and thousands of lines of code from the logic they are checking. As such they are extremely brittle, and susceptible to all sorts of side-effects that happen on the epic journey to and from the class of interest.

Vandal answered 2/12, 2008 at 11:24 Comment(0)
10 votes

The Secret Catcher -- Frank Carver
A test that at first glance appears to be doing no testing, due to the absence of assertions. But "the devil is in the details".. the test is really relying on an exception to be thrown, and expecting the testing framework to capture the exception and report it to the user as a failure.

[Test]
public void ShouldNotThrow()
{
   DoSomethingThatShouldNotThrowAnException();
}
Sniper answered 2/12, 2008 at 11:24 Comment(4)
This can in fact be a valid test, in my opinion - especially as a regression test.Praenomen
sorry again got this confused with Silent catcher... unit tests should state intent clearly about what is being tested rather than saying 'this should work'.. (+1 to something is better than nothing, esp if you're in legacy regression country)Sniper
In this kinds of tests, I am at least catching Exception and assign it to a variable. Then I assert for not null.Scholium
Some frameworks have a Assert.DoesNotThrow(SomeDelegateType act) style assertion that can be used specifically in cases like this. I find this less gross than having a test case that succeeds when a constructor returns non-null, but fails when the constructor throws. A constructor will never return null. (Note: only applies to languages where a constructor is guaranteed to return non-null)Mitch
9 votes

Doppelgänger

In order to test something, you have to copy parts of the code under test into a new class with the same name and package and you have to use classpath magic or a custom classloader to make sure it is visible first (so your copy is picked up).

This pattern indicates an unhealthy amount of hidden dependencies which you can't control from a test.

I looked at his face ... my face! It was like a mirror but made my blood freeze.

Ploce answered 2/12, 2008 at 11:24 Comment(0)
7 votes

The Test It All

I can't believe this hasn't been mentioned till now, but tests should not break the Single Responsibility Principle.

I have come across this so many times, tests that break this rule are by definition a nightmare to maintain.

Ronironica answered 2/12, 2008 at 11:24 Comment(0)
7 votes

The Mother Hen -- Frank Carver
A common setup which does far more than the actual test cases need. For example creating all sorts of complex data structures populated with apparently important and unique values when the tests only assert for presence or absence of something.
Advanced Symptom of: Inappropriately Shared Fixture

I don't know what it does ... I'm adding it anyway, just in case. -- Anonymous Developer

Sniper answered 2/12, 2008 at 11:24 Comment(0)
6 votes

Line hitter

At first glance, the tests cover everything, and the code coverage tools confirm it with 100%. But in reality the tests only hit the code, without analysing any of its output.

coverage-vs-reachable-code
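
A sketch with a hypothetical Parser: every line of parse() executes and coverage reports 100%, yet nothing about the result is ever checked:

import org.junit.Test;

public class ParserTest {
    @Test
    public void testParse() {
        new Parser().parse("1,2,3"); // no assertions: 100% coverage, 0% verification
    }
}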

Merridie answered 2/12, 2008 at 11:24 Comment(0)
0 votes

The Conjoined Twins

Tests that people are calling "Unit Tests" but that are really integration tests, since they are not isolated from their dependencies (file configuration, databases, services -- in other words, the parts not being tested, which people got lazy about and did not isolate) and fail due to dependencies that should have been stubbed or mocked.

Saltarello answered 2/12, 2008 at 11:24 Comment(0)
