    Efficient Ways to Rerun Failed Test Cases in TestNG

    Published on September 29, 2025

    Automated testing is an integral part of modern software development, ensuring applications are robust and free from regressions. 

    However, even with automated tests, failures are inevitable. In TestNG, one of the most widely used testing frameworks for Java, re-running failed tests is a common requirement. By rerunning these tests, teams can isolate flaky tests from actual failures, ensuring the test suite's reliability. 

    This article explores the importance of re-running failed test cases in TestNG and provides a comprehensive guide on the methods, challenges, and best practices involved.

    Why Rerun Failed Test Cases in TestNG?

    Re-running failed tests can save time and improve the accuracy of test results in several ways:

    • Identify Flaky Tests: Flaky tests are those that intermittently fail or pass. By rerunning failed tests, testers can determine whether failures are due to unstable scripts or actual bugs in the application.
    • Improve Test Efficiency: Rather than re-running the entire test suite, rerunning only the failed tests can significantly reduce the execution time.
    • Ensure Stability: Some tests may fail due to temporary issues like network instability, application timeouts, or resource unavailability. Re-running failed tests ensures that these issues don’t lead to incorrect conclusions.
    • Provide Accurate Results: Reruns help verify if a failure was a one-off event or a consistent issue, aiding in more accurate test reporting.

    Causes Behind Test Failures in TestNG

    Test failures in TestNG can stem from several causes, and understanding them is crucial to re-running tests effectively:

    • Application Bugs: These are the most straightforward reasons for test failures. Bugs in the application code can result in errors that cause tests to fail.
    • Test Script Errors: Sometimes, the test itself is the problem. Incorrect locators, unhandled exceptions, or synchronization issues in the test script can cause failures.
    • Environmental Issues: Unstable networks, insufficient resources (e.g., memory), or slow response times from the application can lead to test failures.
    • Browser-Specific Issues: Cross-browser inconsistencies can also result in tests failing on one browser but passing on others.
    • Data Dependencies: Tests that rely on external data may fail if that data is not available or is inconsistent.

    Analyzing TestNG Failure Reports

    After running a test suite in TestNG, a detailed failure report is generated. TestNG provides two main reports:

    • TestNG HTML Report: Generated as index.html (along with emailable-report.html) in the test-output folder, this includes a detailed list of test results with timestamps, test names, and the reason for each failure.
    • testng-failed.xml: This file contains only the failed test cases from the last test run, enabling the tester to rerun just the failed tests without having to execute the entire suite again.

    By carefully analyzing the failure reports, testers can identify which tests failed and understand the root causes, which helps in applying the right approach for reruns.

    Approaches for Rerunning Failed Test Cases in TestNG

    There are several ways to re-run failed tests in TestNG, depending on the complexity and requirements of the project:

    Leveraging the testng-failed.xml for Reruns

    TestNG automatically generates a testng-failed.xml file inside the test-output folder. This file contains the details of only the failed test cases from the last run.
    To rerun the failed tests, simply execute the testng-failed.xml file as a TestNG suite. This approach is very effective in rerunning the exact set of failed tests, ensuring minimal overhead.
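
    For example, assuming the default test-output folder and TestNG on the classpath, the failed suite can be rerun directly from the command line (the classpath entries below are placeholders for your project's):

    java -cp "bin:lib/*" org.testng.TestNG test-output/testng-failed.xml

    In Maven projects, the same file can be referenced from the Surefire plugin's suiteXmlFiles configuration. Note that each run regenerates testng-failed.xml, so the file always reflects only the most recent execution.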

    Implementing the IRetryAnalyzer Interface

    TestNG offers the IRetryAnalyzer interface, which allows for automatic retries of failed tests. You can configure how many times a flaky test is retried before it is finally marked as failed.

    Here’s an example of using IRetryAnalyzer:

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;

    public class RetryAnalyzer implements IRetryAnalyzer {

        private int count = 0;
        private static final int MAX_RETRY = 2;

        @Override
        public boolean retry(ITestResult result) {
            if (count < MAX_RETRY) {
                count++;
                return true; // Retry the failed test
            }
            return false; // Stop retrying after MAX_RETRY attempts
        }
    }

    In your test class:

    @Test(retryAnalyzer = RetryAnalyzer.class)
    public void testLogin() {
        // test logic
    }

    This ensures that failed tests automatically retry before being marked as failed.
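
    If you want the same retry behavior across an entire suite without annotating every method, TestNG's IAnnotationTransformer can attach the analyzer to each @Test annotation at runtime. A minimal sketch (the RetryTransformer name is ours; the class must also be registered as a listener in testng.xml):

    import java.lang.reflect.Constructor;
    import java.lang.reflect.Method;
    import org.testng.IAnnotationTransformer;
    import org.testng.annotations.ITestAnnotation;

    public class RetryTransformer implements IAnnotationTransformer {

        @Override
        public void transform(ITestAnnotation annotation, Class testClass,
                              Constructor testConstructor, Method testMethod) {
            // Apply the retry analyzer to every @Test method in the suite
            annotation.setRetryAnalyzer(RetryAnalyzer.class);
        }
    }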

    Customizing Retry Logic with Listeners

    You can also use ITestListener to capture the status of tests and perform custom actions, such as retries or logging, when a test fails. This approach offers more control over the rerun process.

    Example listener:

    import org.testng.ITestListener;
    import org.testng.ITestResult;

    public class TestListener implements ITestListener {

        @Override
        public void onTestFailure(ITestResult result) {
            System.out.println("Test failed: " + result.getName());
            // Custom logic for retries, screenshots, or logging goes here
        }
    }

    Listeners can be added to your testng.xml file for global application.
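
    For instance, a minimal testng.xml that registers the listener globally might look like this (the package name com.example.tests is a placeholder):

    <suite name="Regression Suite">
      <listeners>
        <listener class-name="com.example.tests.TestListener" />
      </listeners>
      <test name="All Tests">
        <classes>
          <class name="com.example.tests.LoginTests" />
        </classes>
      </test>
    </suite>

    The RetryTransformer from the previous section can be registered the same way.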

    Demonstrating Failed Test Re-runs: A Practical Example

    Let’s assume a test fails due to intermittent network issues. By using IRetryAnalyzer, we can automatically retry the failed test a couple of times before marking it as a failure.

    @Test(retryAnalyzer = RetryAnalyzer.class)
    public void testNetworkConnectivity() {
        // Test code that checks network connectivity
        Assert.assertTrue(isConnected());
    }

    If the test fails due to temporary network instability, the retry mechanism will attempt it again, providing a higher chance of success.

    Common Pitfalls in Re-running Failed Tests

    While rerunning failed tests is beneficial, there are a few challenges:

    • Flaky Tests: If a test fails intermittently due to application or network issues, rerunning it may not always resolve the root cause.
    • Overuse of Retries: Continuously retrying tests without investigating the cause may mask genuine defects in the application.
    • Parallel Test Execution: When rerunning tests in parallel, there might be resource conflicts or interference between tests, especially if they depend on shared data.
    • Data Dependencies: If failed tests rely on specific data, rerunning them without resetting the data may result in the same failure.

    Essential Practices for Efficient Test Retry Management

    To ensure efficient management of rerun scenarios, follow these best practices:

    • Limit the Number of Retries: Avoid excessive retries that mask real issues. Limit retries to 2–3 attempts, and prefer a configurable limit over a hard-coded one (see the sketch after this list).
    • Analyze Test Failures: Don’t blindly rerun tests—first analyze the reasons for failure and determine if a retry is warranted.
    • Reset Test Data: Ensure that the test environment is reset before rerunning tests to avoid residual data conflicts.
    • Leverage Reporting: Use detailed reporting mechanisms, such as testng-failed.xml and custom logs, to track which tests failed and why.
    • Combine Retry with Cleanup: After a test fails, ensure proper cleanup (e.g., clearing cookies, resetting databases) before re-running to avoid issues from the previous run.
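
    As an illustration of the first practice, the retry limit can be read from a JVM system property instead of a constant. A minimal sketch, assuming a -Dmax.retry property (the property name is our choice):

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;

    public class ConfigurableRetryAnalyzer implements IRetryAnalyzer {

        // Defaults to 2 retries; override with e.g. -Dmax.retry=3
        private static final int MAX_RETRY = Integer.getInteger("max.retry", 2);

        private int count = 0;

        @Override
        public boolean retry(ITestResult result) {
            return count++ < MAX_RETRY;
        }
    }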

    The Importance of Testing on Real Devices and Browsers

    While rerunning tests locally can help identify transient issues, it's essential to test on real environments to account for real-world variables. BrowserStack Automate provides access to a wide range of real browsers and devices, ensuring your tests reflect the true behavior of applications in diverse conditions.

    Benefits of real device/browser testing:

    • Cross-Browser Validation: Rerun failed tests across different browsers (Chrome, Firefox, Safari, etc.) to identify browser-specific issues.
    • Mobile Device Testing: Test on real mobile devices to ensure that mobile-specific issues do not interfere with your tests.
    • Network Simulation: Test on real network conditions, providing more accurate results for performance and stability.

    Running tests on real infrastructure is a reliable way to validate failures across various platforms and ensure consistency.

    Final Thoughts

    Re-running failed test cases in TestNG is a critical process for any automation suite, ensuring that temporary failures or unstable tests do not cloud the overall test results. With approaches like using the testng-failed.xml file, IRetryAnalyzer, and listeners, you can automate the retry process efficiently. 

    However, it is crucial to analyze the failures before blindly rerunning tests to avoid masking real issues. Additionally, testing on real browsers and devices is essential to capture environment-specific problems and deliver accurate, production-ready test results.
