Why do so many assertEquals()
or similar functions take the expected value as the first parameter and the actual one as second?
This seems counterintuitive to me, so is there a particular reason for this unusual order?
The answer from Kent Beck, co-creator of JUnit (where this convention possibly originates, since his earlier SUnit doesn't appear to have included assertEquals):
Line a bunch of assertEquals in a row. Having expected first makes them read better.
In the initial version of my answer, I said that I didn't understand this. Here's what I often see in tests:
assertEquals(12345, user.getId());
assertEquals("kent", user.getUsername());
assertEquals("Kent Beck", user.getName());
I would think this would read better with the actual value first. That puts more of the repetitive boilerplate together, aligning the method calls whose values we're testing:
assertEquals(user.getId(), 12345);
assertEquals(user.getUsername(), "kent");
assertEquals(user.getName(), "Kent Beck");
(And there are other reasons that I prefer this order, but for the purpose of this question about why it's the other way, Kent's reasoning appears to be the answer.)
However, Bob Stein has a comment below that suggests a couple of things that "expected first" has going for it. The main idea is that expected values are typically shorter: often literals or variables/fields, rather than possibly complex method calls. As a result:
- It's easier to identify both the expected and actual values at a glance.
- It's possible to use a small amount of extra whitespace to align them (if you prefer that kind of thing, though I don't see it used in the earliest JUnit commit I could find easily):
assertEquals(12345,       user.getId());
assertEquals("kent",      user.getUsername());
assertEquals("Kent Beck", user.getName());
Thanks, Bob!
Because the authors had a 50% chance of matching your intuition.
Because of the other overload:
assertWhatever(explanation, expected, actual)
The explanation, which is part of what you know, goes with the expected, which is also what you know, as opposed to the actual, which you don't know at the time you write the code.
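For comparison, Python's unittest puts the optional explanation last rather than first. A minimal, deliberately failing sketch (the _Probe class is just scaffolding to get a usable TestCase instance) shows how the explanation ends up attached to the mismatch:

```python
import unittest

class _Probe(unittest.TestCase):
    def runTest(self):  # minimal runnable TestCase
        pass

t = _Probe()
try:
    # JUnit: assertEquals(explanation, expected, actual)
    # Python: assertEqual(first, second, msg=...)  -- msg comes last
    t.assertEqual(5, 2 + 2, "sum should be five (deliberately wrong)")
except AssertionError as e:
    print(e)  # 5 != 4 : sum should be five (deliberately wrong)
```

Either way, the explanation and the expected value end up side by side in the failure output.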
assertGreater(a, b)
asserting b > a
is intuitive... (and I think that's why they removed the non-commutative methods) – Stevenson
Putting the constant first means that accidentally typing an assignment = instead of the equivalence check == would throw an error when you tried to re-assign the constant. (I think I first saw it promoted in 'Writing Solid Code'.) It creates (IMHO) less readable code (I prefer 'result should be as expected' style grammar) but gets the compiler to bust your chops if you leave out an = sign. – Paramount
An ulterior purpose of assertEqual() is to demo code for human readers.
A simple function call shows the return value on the left and the call on the right.
y = f(x)
Following that convention, a self-testing demonstration of the function could look like:
assertEqual(y, f(x))
The order is (expected, actual).
Here's a demo of the sum() function with a literal expected return value on the left, and a function call that calculates the actual return value on the right:
assertEqual(15, sum((1,2,3,4,5)))
Similarly, here's a demo of an expression. It is also natural in (expected, actual) order:
assertEqual(4, 2 + 2)
Another reason is stylistic. If you like lining things up, the expected parameter is better on the left because it tends to be shorter:
assertEqual(42, 2 * 3 * 7)
assertEqual(42, (1 << 1) + (1 << 3) + (1 << 5))
assertEqual(42, int('110', int('110', 2)))
I suspect this solves the mystery @ChrisPovirk raised about what Kent Beck meant by "expected first makes them read better."
Thanks Andrew Weimholt and Ganesh Parameswaran for these formulae.
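For the skeptical reader, the right-hand expressions above can be checked in plain Python:

```python
# Sanity check that each right-hand expression really evaluates
# to the expected 42 on the left.
print(2 * 3 * 7)                       # 42
print((1 << 1) + (1 << 3) + (1 << 5))  # 2 + 8 + 32 = 42
print(int('110', int('110', 2)))       # inner: '110' base 2 -> 6; outer: '110' base 6 -> 42
```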
This is a very interesting topic, and there are lots of very educational answers here too! Here is what I learned from them:
Intuitive/counter-intuitive can be considered as subjective, so no matter which order it was originally defined, perhaps 50% of us would not be happy.
Personally I would have preferred it were designed as assertEqual(actual, expected), because, given the conceptual similarity between assert and if, I would wish it followed the norm of if actual == expected, for example, if a == 1. (PS: It is true that there are differing opinions that prompt writing the if statement in the "reverse order", i.e. if (1 == a) {...}, in order to guard against accidentally missing one =. But that style was far from the norm, even in the C/C++ world. And if you happen to be writing Python code, you are not vulnerable to that nasty typo in the first place, because if a = 1 is not valid in Python.)
The practical, convincing reason to use assertEqual(expected, actual) is that the unittest library in your language likely already follows that order to generate readable error messages. For example, in Python, when you do assertEqual(expected_dictionary, actual_dictionary), unittest will display keys missing from the actual with prefix -, and extra keys with prefix +, just like when you do a git diff old_branch new_branch. Intuitive or not, this is the single most convincing reason to stick with the assertEqual(expected, actual) order. If you happen not to like it, you had better still accept it, because "practicality beats purity".
Lastly, if you need a way to help you remember the order, this answer compares assertEqual(expected_result, actual_calculation) to the assignment statement order result = calculate(...). It can be a good way to memorize the de-facto behavior, but IMHO it is not undebatable proof that that order is more intuitive.
So here you go. Happy assertEqual(expected, actual)!
assertEqual(actual, expect) - so now choose... ;) – Frippery
I agree with the consensus that consistency is #1, but the behavior of comparing dictionaries may be a helpful data point if you're evaluating this question.
When I see a "+" on a diff, I read this as "the procedure being tested added this." Again, personal preferences apply.
Note: I used alphabetized keys and made the dictionary longer so that only a middle key would change, for clarity of the example. Other scenarios display more obfuscated diffs. Also noteworthy: assertEqual delegates to assertDictEqual in Python >= 2.7 and >= 3.1.
File exl.py
from unittest import TestCase

class DictionaryTest(TestCase):
    def test_assert_order(self):
        self.assertEqual(
            {
                'a_first_key': 'value',
                'key_number_2': 'value',
                'z_last_key': 'value',
                'first_not_second': 'value',
            },
            {
                'a_first_key': 'value',
                'key_number_2': 'value',
                'z_last_key': 'value',
                'second_not_first': 'value',
            }
        )
Run:
python -m unittest exl
Output:
F
======================================================================
FAIL: test_assert_order (exl.DictionaryTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "exl.py", line 18, in test_assert_order
'second_not_first': 'value',
AssertionError: {'a_first_key': 'value', 'z_last_key': 'value', 'key_number_2': 'value', 'first_ [truncated]... != {'a_first_key': 'value', 'z_last_key': 'value', 'key_number_2': 'value', 'second [truncated]...
{'a_first_key': 'value',
- 'first_not_second': 'value',
'key_number_2': 'value',
+ 'second_not_first': 'value',
'z_last_key': 'value'}
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (failures=1)
The xUnit testing convention is expected/actual. So, for many that is the natural order since that's what they learnt.
Interestingly, in a break from convention for an xUnit framework, qunit goes for actual/expected. At least with JavaScript you can just create a new function that encapsulates the old one and assign it the original variable:
var qunitEquals = equals;
equals = function(expected, actual, message) {
qunitEquals(actual, expected, message);
};
The documentation for assertEqual names the first parameter first, and the second parameter second:
assertEqual(first, second, msg=None)
Test that first and second are equal. If the values do not compare equal, the test will fail.
However, if you look at most of the examples in the documentation, they place the received value first, and the expected value second (the opposite of what your question post claims):
self.assertEqual(self.widget.size(), (50,50), 'incorrect default size')
So I would say the convention is assertEqual(got, expected), and not the other way round!
Either way, your tests will still work.
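A minimal sketch (the size tuple is a stand-in for the documentation's widget example) confirming that unittest passes regardless of argument order:

```python
import unittest

class SizeTest(unittest.TestCase):
    def test_either_order_passes(self):
        size = (50, 50)  # stand-in for self.widget.size()
        self.assertEqual(size, (50, 50))  # (got, expected)
        self.assertEqual((50, 50), size)  # (expected, got)

if __name__ == "__main__":
    unittest.main()
```

The order only matters when a test fails and you have to read the message.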
I'm a little surprised not to see this answer already, because it's always seemed like the most likely explanation to me.
Imagine you didn't have assertEquals, but just a plain assert. How would you write the test? You might think to write it as:
assert(actual == expected)
But in many cases they won't be the same object, just equivalent ones, so (and this is perhaps language-dependent) you can't reliably use the == operator to express your intent. So you switch it to:
assert(actual.equals(expected))
And things are fine for a while. But then you introduce a bug, and the test fails, because the result (actual) becomes null. But the test doesn't fail the way you expect -- instead, you can't even invoke actual.equals
at all, because you don't even have an object to call a method on! Your test code blows up with an exception because the test itself is fragile.
But your expected object will never be null.
Many people working in OO languages have got used to this, and they make a habit of writing all method-based conditionals like if ("foo".equals(myString))
, which is still safe in the case that myString
is null (though the reverse is not safe).
So the best habit for writing asserts is:
assert(expected.equals(actual))
... which fails if actual is wrong, even null.
Once you've spent some years in this kind of situation, and you decide to write a unit testing framework with an assertEquals
method, there's only one ordering of the arguments that is going to feel natural to you :)
The null-safety argument isn't as airtight for assertEquals(x,y) as it is for assert(x.equals(y)). I'm just trying to suggest how the grammar likely evolved. – Lenticularis
The explanation I heard is that it comes from test-driven development (TDD).
In test-driven development, you start with the test, and then write the code.
Starting assertions by writing the expectation, and then calling the code that should produce it, is a mini version of that mindset.
Of course, this may just be a story people tell. I don't know that it was a conscious reason.
© 2022 - 2024 — McMap. All rights reserved.
The docs for assertEqual itself: docs.python.org/2/library/… and browsing that page shows the order is inconsistent within the unittest module itself. But this Python issue implies (actual, expected) is, in fact, the standard: bugs.python.org/issue10573 – Assentor
assertEquals is deprecated; use assertEqual (even though, at least in 2.7, it doesn't actually indicate which parameter is expected and which is actual). – Assentor
self.assertEqual(actual, expected) is more logical than self.assertEqual(expected, actual): if smth == other: -> assert smth == other -> self.assertEqual(smth, other). For me this is the usual order; I have yet to find out why others think differently. – Histone
self.assertEqual(ltuae, 42) will say 42 was expected, or 54. If a test fails, I want the message to be helpful and accurate, so the bug can be fixed as quickly as possible; the new parameter names make that harder. – Assentor
assertEqual(actual, expected), the opposite of what the question post states. See my answer: https://mcmap.net/q/167201/-why-are-assertequals-parameters-in-the-order-expected-actual – Eberle
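The self.assertEqual(ltuae, 42) scenario mentioned in the comments is easy to reproduce: Python's default failure message just shows the two values without labelling which side was expected (ltuae and its value 54 are hypothetical here):

```python
import unittest

class _Probe(unittest.TestCase):
    def runTest(self):  # minimal runnable TestCase
        pass

ltuae = 54  # hypothetical (wrong) computed answer
try:
    _Probe().assertEqual(ltuae, 42)  # (actual, expected) order
except AssertionError as e:
    print(e)  # 54 != 42 -- neither side is labelled expected or actual
```

Since the default message doesn't distinguish the two sides, only a consistent convention tells the reader which number was the expectation.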