SpecFlow - Retry failed tests
Is there a way to implement an AfterScenario hook that re-runs the current test in case of failure?

Something like this:

[AfterScenario("retry")]
public void Retry()
{
    if (ScenarioContext.Current.TestError != null)
    {
        // ?
    }
}

Note: The tests in my project are combined into ordered tests and executed via MSTest.

Mulct answered 22/1, 2014 at 9:0 Comment(2)
What condition would make it succeed if you run it a second time? – Fairly
Good question @rene! I guess my whole idea is stillborn. – Mulct
Score: -5

The purpose of SpecFlow scenarios is to assert that a system behaves as expected.

If some transient issue causes the test to fail, then re-running it and "hoping for the best" is not going to resolve the problem! Having a test fail occasionally should not be expected behaviour. A test should give a consistent result every time it is executed.

A great post on what makes a good test can be found here; that answer also states that a test should be:

Repeatable: Tests should produce the same results each time, every time. Tests should not rely on uncontrollable parameters.

In this case it's quite right for the test to fail. You should now investigate why exactly the test occasionally fails.

Most often tests fail due to timing issues, e.g. an element not yet being present during a page load. Given a consistent test environment (i.e. the same test database, the same test browsers, the same network set-up), you will be able to write repeatable tests. Look at this answer on using WebDriverWait to wait up to a predetermined amount of time for expected DOM elements to appear.
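
As a minimal sketch of that approach (assuming the C# Selenium bindings with the WebDriver.Support package; the URL and element id here are hypothetical):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class WebDriverWaitExample
{
    static void Main()
    {
        using IWebDriver driver = new ChromeDriver();
        driver.Navigate().GoToUrl("https://example.com/login");

        // Poll for up to 10 seconds instead of failing on the first lookup;
        // WebDriverWait retries the lambda until it returns a non-null value.
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        IWebElement button = wait.Until(d => d.FindElement(By.Id("submit-button")));
        button.Click();
    }
}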

Reflux answered 22/1, 2014 at 14:36 Comment(9)
To whoever downvoted me, please explain why you think tests should not have guaranteed repeatable behavior. – Reflux
@BenSmith, in my case tests are failing because of Selenium. It is sometimes simply unable to find an element on the page, although it is present and the document is ready. When I run the very same test several times (i.e. I just copy-paste the scenario) it can be green as well as red. Instability of Selenium or a third-party service is the main reason to retry a scenario on failure. – Gluteus
@VladislavQulin In that case your test is written correctly. Find out what the error in the test is, and rewrite it so that Selenium behaves consistently. I've had similar problems in the past and you can almost guarantee that it is not the tool's fault; it's how you are using it. – Reflux
@BenSmith, downvote. You are giving strong advice but not practical advice. Selenium has environmental issues; for example, Firefox taking too long to load (do we even want to investigate why it sometimes loads fast and sometimes not?), or the Ajax suddenly taking slightly longer, etc. So I wouldn't make such a bold theoretical statement. – Taneshatang
@Adam, read up on the FIRST principles for unit testing. Do a Google search for it too. It's strong advice, because the majority agree that tests should be repeatable and not temporal. Environmental issues are a test "smell" indicating that you are not writing the correct test. – Reflux
I bet @Taneshatang agrees with the principle/guideline, as do I. And as someone who has to balance my principles/guidelines against each other and against reality, I was interested in an answer to the question. LaTisha may have provided one; I'll check that out. If BenSmith and others can rely on 100% consistent browser/server behavior, awesome, good for you. Sometimes that's my situation, sometimes not. These comments would have been better as comments on the question, because they were relevant and potentially helpful, but they were not an answer. (A pet peeve of mine, sorry.) – Audra
@PatrickKarcher Testing is tough. As a full-stack engineer, if I were to write unit tests or integration tests and their results were indeterminate, those would equally be unacceptable, and putting in retry code would equally be unacceptable to me. I've added more detail to the answer, where I advise using a WebDriverWait routine. If the test environment is consistent, then the wait code should be 100% repeatable. – Reflux
Downvote: you're not providing a solution, rather elaborating on a perfect-world approach. – Whiteley
@BenSmith, your comment is valid. However, SpecFlow is used for behavioral testing, not just "simple" AAA unit tests. In some cases you need to test asynchronous behavior: you have a service running in an isolated environment (service, mock server, other tools); you call a REST API which starts a background job; you want to check the result of the background job. Tests are repeatable and should pass every time, but in such cases you need a circuit breaker and to try again. Your comment is right, but as was noted here before, the downvotes say it's not helpful. – Megaphone
Score: 6

This plugin is awesome: https://github.com/arrty/specflow-retry. I got it to work with NUnit, and the project's example uses MSTest.

It will allow you to do this:

@retry:2
Scenario: Tag on scenario is preferred
    Then scenario should be run 3 times
Zacek answered 12/2, 2016 at 23:11 Comment(0)
Score: 5

Let me start off by saying I agree a test SHOULD be stable and SHOULD never be retried. However, we do not live in an ideal world, and in some very specific scenarios retrying a test can be a valid use case.

I am running UI tests (using Selenium against an Angular app) where the ChromeDriver sometimes turns unresponsive for unclear reasons. This behaviour is entirely out of my control and no working solutions exist. I cannot retry inside a single SpecFlow step, since I have "Given" steps that log in to the application: when a "When" step fails, I need to rerun the "Given" steps as well. In this scenario I want to close the driver, start it again, and rerun all previous steps. As a last resort, I wrote a custom testrunner for SpecFlow that can recover from an error like this:

Disclaimer: This is not intended usage and it may break in any version of SpecFlow. If you are a testing purist, do not read any further.

First we create a class that makes it easy to create a custom ITestRunner (provide all methods as virtual so they can be overridden):

public class OverrideableTestRunner : ITestRunner
{
    private readonly ITestRunner _runner;

    public OverrideableTestRunner(ITestRunner runner)
    {
        _runner = runner;
    }

    public int ThreadId => _runner.ThreadId;

    public FeatureContext FeatureContext => _runner.FeatureContext;

    public ScenarioContext ScenarioContext => _runner.ScenarioContext;

    public virtual void And(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        _runner.And(text, multilineTextArg, tableArg, keyword);
    }

    public virtual void But(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        _runner.But(text, multilineTextArg, tableArg, keyword);
    }

    public virtual void CollectScenarioErrors()
    {
        _runner.CollectScenarioErrors();
    }

    public virtual void Given(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        _runner.Given(text, multilineTextArg, tableArg, keyword);
    }

    public virtual void InitializeTestRunner(int threadId)
    {
        _runner.InitializeTestRunner(threadId);
    }

    public virtual void OnFeatureEnd()
    {
        _runner.OnFeatureEnd();
    }

    public virtual void OnFeatureStart(FeatureInfo featureInfo)
    {
        _runner.OnFeatureStart(featureInfo);
    }

    public virtual void OnScenarioEnd()
    {
        _runner.OnScenarioEnd();
    }

    public virtual void OnScenarioInitialize(ScenarioInfo scenarioInfo)
    {
        _runner.OnScenarioInitialize(scenarioInfo);
    }

    public virtual void OnScenarioStart()
    {
        _runner.OnScenarioStart();
    }

    public virtual void OnTestRunEnd()
    {
        _runner.OnTestRunEnd();
    }

    public virtual void OnTestRunStart()
    {
        _runner.OnTestRunStart();
    }

    public virtual void Pending()
    {
        _runner.Pending();
    }

    public virtual void SkipScenario()
    {
        _runner.SkipScenario();
    }

    public virtual void Then(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        _runner.Then(text, multilineTextArg, tableArg, keyword);
    }

    public virtual void When(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        _runner.When(text, multilineTextArg, tableArg, keyword);
    }
}

Next we create the custom testrunner that remembers the calls made for a scenario and can rerun the previous steps:

public class RetryTestRunner : OverrideableTestRunner
{
    /// <summary>
    /// Which exceptions to handle (default: all)
    /// </summary>
    public Predicate<Exception> HandleExceptionFilter { private get; set; } = _ => true;

    /// <summary>
    /// The action that is executed to recover
    /// </summary>
    public Action RecoverAction { private get; set; } = () => { };

    /// <summary>
    /// The maximum number of retries
    /// </summary>
    public int MaxRetries { private get; set; } = 10;

    /// <summary>
    /// The executed actions for this scenario, these need to be replayed in the case of an error
    /// </summary>
    private readonly List<(MethodInfo method, object[] args)> _previousSteps = new List<(MethodInfo method, object[] args)>();

    /// <summary>
    /// The number of the current try (to make sure we don't go over the specified limit)
    /// </summary>
    private int _currentTryNumber = 0;

    public RetryTestRunner(ITestExecutionEngine engine) : base(new TestRunner(engine))
    {
    }

    public override void OnScenarioStart()
    {
        base.OnScenarioStart();

        _previousSteps.Clear();
        _currentTryNumber = 0;
    }

    public override void Given(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        base.Given(text, multilineTextArg, tableArg, keyword);
        Checker()(text, multilineTextArg, tableArg, keyword);
    }

    public override void But(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        base.But(text, multilineTextArg, tableArg, keyword);
        Checker()(text, multilineTextArg, tableArg, keyword);
    }

    public override void And(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        base.And(text, multilineTextArg, tableArg, keyword);
        Checker()(text, multilineTextArg, tableArg, keyword);
    }

    public override void Then(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        base.Then(text, multilineTextArg, tableArg, keyword);
        Checker()(text, multilineTextArg, tableArg, keyword);
    }

    public override void When(string text, string multilineTextArg, Table tableArg, string keyword = null)
    {
        base.When(text, multilineTextArg, tableArg, keyword);
        Checker()(text, multilineTextArg, tableArg, keyword);
    }

    // Use this delegate combination to make a params call possible
    // It is not possible to use a params argument and the CallerMemberName
    // in one method, so we curry the method to make it possible. #functionalprogramming
    public delegate void ParamsFunc(params object[] args);

    private ParamsFunc Checker([CallerMemberName] string method = null)
    {
        return args =>
        {
            // Record the previous step
            _previousSteps.Add((GetType().GetMethod(method), args));

            // Determine if we should retry
            if (ScenarioContext.ScenarioExecutionStatus != ScenarioExecutionStatus.TestError || !HandleExceptionFilter(ScenarioContext.TestError) || _currentTryNumber >= MaxRetries)
            {
                return;
            }

            // HACKY: Reset the test state to a non-error state
            typeof(ScenarioContext).GetProperty(nameof(ScenarioContext.ScenarioExecutionStatus)).SetValue(ScenarioContext, ScenarioExecutionStatus.OK);
            typeof(ScenarioContext).GetProperty(nameof(ScenarioContext.TestError)).SetValue(ScenarioContext, null);

            // Trigger the recovery action
            RecoverAction.Invoke();

            // Retry the steps
            _currentTryNumber++;
            var stepsToPlay = _previousSteps.ToList();
            _previousSteps.Clear();
            stepsToPlay.ForEach(s => s.method.Invoke(this, s.args));
        };
    }
}

Next, configure SpecFlow to use our own testrunner (this can also be added as a plugin).

/// <summary>
/// We need this because this is the only way to configure specflow before it starts
/// </summary>
[TestClass]
public class CustomDependencyProvider : DefaultDependencyProvider
{
    [AssemblyInitialize]
    public static void AssemblyInitialize(TestContext testContext)
    {
        // Override the dependency provider of specflow
        ContainerBuilder.DefaultDependencyProvider = new CustomDependencyProvider();
        TestRunnerManager.OnTestRunStart(typeof(CustomDependencyProvider).Assembly);
    }

    [AssemblyCleanup]
    public static void AssemblyCleanup()
    {
        TestRunnerManager.OnTestRunEnd(typeof(CustomDependencyProvider).Assembly);
    }

    public override void RegisterTestThreadContainerDefaults(ObjectContainer testThreadContainer)
    {
        base.RegisterTestThreadContainerDefaults(testThreadContainer);

        // Use our own testrunner
        testThreadContainer.RegisterTypeAs<RetryTestRunner, ITestRunner>();
    }
}

Also, add this to your .csproj:

<PropertyGroup>
  <GenerateSpecFlowAssemblyHooksFile>false</GenerateSpecFlowAssemblyHooksFile>
</PropertyGroup>

Now we can use the testrunner to recover from errors:

[Binding]
public class TestInitialize
{
    private readonly RetryTestRunner _testRunner;

    public TestInitialize(ITestRunner testRunner)
    {
        _testRunner = testRunner as RetryTestRunner;
    }

    [BeforeScenario()]
    public void TestInit()
    {
        _testRunner.RecoverAction = () =>
        {
            StopDriver();
            StartDriver();
        };

        _testRunner.HandleExceptionFilter = ex => ex is WebDriverException;
    }
}

To use this from an [AfterScenario] hook, you could add a RetryScenario() method to the testrunner and call that.
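
One possible shape for such a method, added to RetryTestRunner and reusing the same replay logic as Checker (a hedged sketch, untested against any particular SpecFlow version):

public void RetryScenario()
{
    // Only retry when the scenario actually failed
    if (ScenarioContext.ScenarioExecutionStatus != ScenarioExecutionStatus.TestError)
    {
        return;
    }

    // HACKY: reset the error state, as in Checker
    typeof(ScenarioContext).GetProperty(nameof(ScenarioContext.ScenarioExecutionStatus)).SetValue(ScenarioContext, ScenarioExecutionStatus.OK);
    typeof(ScenarioContext).GetProperty(nameof(ScenarioContext.TestError)).SetValue(ScenarioContext, null);

    // Trigger the recovery action (e.g. restart the driver)
    RecoverAction.Invoke();

    // Replay every step recorded so far
    var stepsToPlay = _previousSteps.ToList();
    _previousSteps.Clear();
    stepsToPlay.ForEach(s => s.method.Invoke(this, s.args));
}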

As a last note: use this only as a last resort, when there is nothing else you can do about the flakiness. Running flaky tests is better than running no tests at all.

Meeker answered 30/3, 2020 at 8:59 Comment(1)
"Running flaky tests is better than running no tests at all." - you sure about that? ;)Reflux
Score: 2

I wanted to be able to retry failed tests, but still report them as failed in the test results. This would let me easily identify the scenarios in which the code works but which are also prone to sporadic issues due to network latency, etc. Those failures would have a different priority than new failures caused by code changes.

I managed to do this using MSTest, thanks to the fact that you can create a class that inherits from TestMethodAttribute.

First, I added this section to the bottom of my .csproj file to call a custom PowerShell script after the *.feature.cs files have been generated but before the actual build:

<Target Name="OverrideTestMethodAttribute" BeforeTargets="PrepareForBuild">
    <Message Text="Calling OverrideTestMethodAttribute.ps1" Importance="high" />
    <Exec Command="powershell -Command &quot;$(ProjectDir)OverrideTestMethodAttribute.ps1&quot;" />
</Target>

The OverrideTestMethodAttribute.ps1 PowerShell script then does a find/replace to change all of the TestMethodAttribute references to my IntegrationTestMethodAttribute. The script contents are:

Write-Host "Running OverrideTestMethodAttribute.ps1"

$mask = "$PSScriptRoot\Features\*.feature.cs"
$codeBehindFiles = Get-ChildItem $mask
Write-Host "Found $($codeBehindFiles.Count) feature code-behind files in $mask"
foreach ($file in $codeBehindFiles)
{
    Write-Host "Working on feature code-behind file: $($file.PSPath)"
    $oldContent = Get-Content $file.PSPath
    $newContent = $oldContent.Replace(`
        '[Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute()]', `
        '[MyCompany.MyProduct.IntegrationTestMethodAttribute()]')

    Set-Content -Path $file.PSPath -Value $newContent
}

And the IntegrationTestMethodAttribute class that does the actual retrying:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyCompany.MyProduct
{
    public class IntegrationTestMethodAttribute : TestMethodAttribute
    {
        public override TestResult[] Execute(ITestMethod testMethod)
        {
            TestResult[] testResults = null;
            var failedAttempts = new List<TestResult>();

            int maxAttempts = 5;
            for (int i = 0; i < maxAttempts; i++)
            {
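                // Re-run the test until it passes or maxAttempts is reached.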
                testResults = base.Execute(testMethod);
                Exception ex = testResults[0].TestFailureException;
                if (ex == null)
                {
                    break;
                }
                failedAttempts.AddRange(testResults);
            }

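            // The test eventually passed, but only after failures: still surface it
            // as an error so flaky scenarios remain visible in the results.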
            if (failedAttempts.Any() && failedAttempts.Count != maxAttempts)
            {
                TestResult testResult = testResults[0];

                var messages = new StringBuilder();
                for (var i = 0; i < failedAttempts.Count; i++)
                {
                    var result = failedAttempts[i];
                    messages.AppendLine("");
                    messages.AppendLine("");
                    messages.AppendLine("");
                    messages.AppendLine($"Failure #{i + 1}:");
                    messages.AppendLine(result.TestFailureException.ToString());
                    messages.AppendLine("");
                    messages.AppendLine(result.TestContextMessages);
                }

                testResult.Outcome = UnitTestOutcome.Error;
                testResult.TestFailureException = new Exception($"Test failed {failedAttempts.Count} time(s), then succeeded");
                testResult.TestContextMessages = messages.ToString();
                testResult.LogError = "";
                testResult.DebugTrace = "";
                testResult.LogOutput = "";
            }
            return testResults;
        }
    }
}
Borlow answered 26/11, 2019 at 13:45 Comment(0)