TestCafe tests run by automation servers fail randomly

This is inconsistent behavior: I am getting random results with both Jenkins and GitLab Runner. When I run the failing tests on the same build machine, everything goes well (i.e. tests that fail when run by the automation server pass when run manually using test.only).

I have mostly tried headless runs with Firefox and Chrome, but the same randomness happens with the full UI as well.

At first I thought there was a resource problem on the build machine because of concurrent tasks, but I ruled this out by scheduling a nightly build. I even reduced the test speed to 0.8.

Has anyone else encountered this behavior? Any hint will be greatly appreciated.

Orientalize answered 5/4, 2019 at 13:9 Comment(4)
Did you get a specific error? If so, which one? Did the errors appear after a TestCafe update? Please provide more details about your scenario and share your test code. Also, specify which TestCafe version you are using. – Kasi
It might be possible that you are sharing a singleton object between all tests by importing it, since an import gives every test the same module instance. – Maffa
Thank you for your comments. The errors were mostly assertion errors such as "AssertionError: expected '$5998' to deeply equal '$8997'" and "Cannot obtain information about the node because the specified selector does not match any node in the DOM tree." TestCafe v1.1.0; I remember updating a month ago, hoping the issues would go away, but no luck. Following the inputs from @TheJames helped. – Orientalize
Thank you. I am not sharing any class singleton objects between tests, but I do share and import constant JSON objects with selectors and other hard-coded values I need. – Orientalize

Try enabling quarantine mode and skipping JavaScript errors.

Without a specific error message, it is difficult to pinpoint the cause.
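For reference, both options can be enabled either with the CLI flags `--quarantine-mode` and `--skip-js-errors`, or in a `.testcaferc.json` configuration file (supported in TestCafe 1.x), for example:

```json
{
    "quarantineMode": true,
    "skipJsErrors": true,
    "speed": 1
}
```

Quarantine mode re-runs a failed test several times and only marks it as failed if it fails consistently, which filters out one-off timing flakes in CI.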

Hesketh answered 5/4, 2019 at 15:5 Comment(1)
Thank you very much for your input; adding those parameters fixed the problems for now. I'll get back with a simplified test case if the behavior persists. The errors were mostly assertion errors such as "AssertionError: expected '$5998' to deeply equal '$8997'" and "Cannot obtain information about the node because the specified selector does not match any node in the DOM tree." – Orientalize

Flaky tests are a serious problem and require a mitigation strategy. Non-determinism can plague your CI/CD pipelines and block or delay development until those issues are spotted and resolved. In my opinion, even after considerable effort to reduce such problematic tests, flaky ones are inevitable once test conditions reach a certain level of complexity. The main goal, then, is to manage them appropriately.

A couple of measures might help; here is more on the topic.
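One common mitigation, and essentially what TestCafe's quarantine mode does, is to re-run a flaky test a few times and only report failure if every attempt fails. A minimal generic sketch of that idea (the function name and attempt count below are illustrative, not part of any TestCafe API):

```javascript
// Hypothetical retry wrapper illustrating the quarantine idea:
// re-run a flaky async operation, succeed on the first passing
// attempt, and only surface the error if all attempts fail.
async function runWithRetries(fn, attempts = 3) {
    let lastError;
    for (let attempt = 1; attempt <= attempts; attempt++) {
        try {
            return await fn(); // first success wins
        } catch (err) {
            lastError = err; // remember the failure and try again
        }
    }
    throw lastError; // failed consistently: likely a real bug
}
```

Retries buy stability in CI, but they can also mask genuine race conditions, so it is worth tracking which tests needed retries and investigating the persistent offenders.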

Abnormity answered 18/4, 2019 at 4:34 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.