# Run group

# Overview

Running a group lets you execute multiple test suites together while keeping results organized and consolidated. You can choose an environment for the run and generate comprehensive reports for your test executions.

# Running a Group

# Starting Group Execution

  1. Navigate to the Groups page
  2. Find the group you want to run (e.g., "AirTable APIs")
  3. Click the green "Run Group" button next to the group

# Environment Selection

Before execution starts, you can configure:

  1. Choose an environment from the dropdown (e.g., "DEFAULT")
  2. Click "Select Environment" to confirm
  3. Options available:
    • Run all tests using "Run all" button
    • Generate a report using "Generate Report" button
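The environment you select determines which configuration the run uses. The tool's storage format isn't shown here, but the selection step can be sketched conceptually like this (the `ENVIRONMENTS` dict, base URLs, and timeout values are all hypothetical):

```python
# Hypothetical environment definitions; the actual tool manages these for you.
ENVIRONMENTS = {
    "DEFAULT": {"base_url": "https://api.example.com", "timeout_s": 30},
    "STAGING": {"base_url": "https://staging.example.com", "timeout_s": 60},
}

def select_environment(name: str) -> dict:
    """Return the configuration for the chosen environment, or raise if unknown."""
    if name not in ENVIRONMENTS:
        raise KeyError(f"Unknown environment: {name!r}")
    return ENVIRONMENTS[name]

env = select_environment("DEFAULT")
print(env["base_url"])
```

Confirming the environment before running matters because every test suite in the group resolves its requests against this shared configuration.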

# Execution Interface

# Available Actions During Run

  • Run: Execute individual test suites
  • Details: View detailed test results
  • Go to test suite: Navigate to the test suite configuration
  • Generate Report: Create a comprehensive test report

# Test Execution Status

The interface shows:

  • Number of tests executed (e.g., "3/3 tests executed")
  • Tests passed (shown in green)
  • Tests failed (shown in red)
  • Tests without assertions
  • Running status indicator

# Important Notifications

  • Warning message: "Please do not close this page while running of test suites is in progress."
  • Real-time execution updates
  • Test completion status

# Test Suite Controls

# Individual Test Suite Actions

Each test suite in the group shows:

  1. Test suite name
  2. Run button
  3. Details button
  4. Go to test suite link
  5. Execution statistics:
    • Tests executed count
    • Passed tests count
    • Failed tests count
    • Tests without assertions
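The per-suite counters above roll up into the group-level totals shown in the interface. A minimal sketch of that roll-up (the `SuiteStats` class is a hypothetical stand-in for the tool's internal data; the numbers are taken from the example results later on this page):

```python
from dataclasses import dataclass

@dataclass
class SuiteStats:
    """Per-suite counters mirroring the numbers shown in the group run view."""
    executed: int
    passed: int
    failed: int
    without_assertions: int

def group_totals(suites: list[SuiteStats]) -> SuiteStats:
    """Roll individual suite statistics up into group-level totals."""
    return SuiteStats(
        executed=sum(s.executed for s in suites),
        passed=sum(s.passed for s in suites),
        failed=sum(s.failed for s in suites),
        without_assertions=sum(s.without_assertions for s in suites),
    )

totals = group_totals([SuiteStats(15, 2, 2, 11), SuiteStats(4, 0, 4, 0)])
print(totals)  # executed=19, passed=2, failed=6, without_assertions=11
```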

# Test Execution Details

# Viewing Test Details

When you click the "Details" button for a test suite, you'll see a detailed breakdown of each test execution:

# Details View Components

  1. Test Description Column

    • Shows the purpose of each test
    • Example test types:
      • "Test with missing compulsory fields to trigger error response"
      • "Test with an incorrect HTTP method"
      • "Test accessing an HTTPS endpoint with HTTP"
  2. Status Code Column

    • Displays HTTP response codes
    • Common codes shown:
      • 200: Success
      • 401: Unauthorized
      • 404: Not Found
  3. Result Column

    • Shows test outcome (e.g., "FAILED", "PASSED")
    • Color-coded for quick recognition:
      • Red for failed tests
      • Green for passed tests
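Each row in the details view pairs a description, a status code, and a color-coded result. A sketch of how one such row and its color coding could be represented (the function and color map are illustrative, not the tool's actual rendering code):

```python
# Color coding as shown in the details view: red = failed, green = passed.
RESULT_COLORS = {"PASSED": "green", "FAILED": "red"}

def format_row(description: str, status_code: int, result: str) -> str:
    """Render one details-view row as plain text, annotated with its display color."""
    color = RESULT_COLORS.get(result, "yellow")  # yellow for tests without assertions
    return f"{description} | {status_code} | {result} ({color})"

row = format_row("Test with an incorrect HTTP method", 404, "FAILED")
print(row)
```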

# Test Execution Statistics

The details view provides comprehensive statistics:

  • Total tests executed (e.g., "3/3 tests executed")
  • Number of passed tests (shown in green)
  • Number of failed tests (shown in red)
  • Tests without assertions (shown in yellow)
  • Warning indicators (⚠️) when applicable

# Example Test Results

Test Suite: Json Generator Endpoint
- 15/16 tests executed
- 2 passed
- 2 failed
- 11 without assertions

Test Suite: Untitled test suite
- 4/4 tests executed
- 0 passed
- 4 failed
- 0 without assertions
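Assuming the three outcome categories are mutually exclusive, every executed test should be counted exactly once as passed, failed, or without assertions. A quick sanity check on the example numbers above:

```python
def counts_consistent(executed: int, passed: int, failed: int, without_assertions: int) -> bool:
    """Every executed test is either passed, failed, or ran without assertions."""
    return passed + failed + without_assertions == executed

# The two example suites above:
print(counts_consistent(15, 2, 2, 11))  # True  (Json Generator Endpoint)
print(counts_consistent(4, 0, 4, 0))    # True  (Untitled test suite)
```

Note that "15/16 tests executed" means one test in the suite did not run at all; the three categories only partition the tests that executed.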

# Interactive Elements

  • Play button (▶️) next to each test for individual test execution
  • Expandable test descriptions for more details
  • Quick access to full test suite via "Go to test suite" button

# Understanding Test Outcomes

  1. Failed Tests

    • Review the status code to understand the type of failure
    • Check the test description for expected behavior
    • Common failure patterns:
      • Missing required fields
      • Incorrect HTTP methods
      • Protocol mismatches (HTTP vs HTTPS)
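When triaging failures, grouping failed tests by status code makes these patterns easier to spot. A sketch, assuming results are available as simple dictionaries (the field names are hypothetical):

```python
from collections import defaultdict

def failures_by_status(results: list[dict]) -> dict[int, list[str]]:
    """Group failed test descriptions by HTTP status code to expose patterns."""
    grouped = defaultdict(list)
    for r in results:
        if r["result"] == "FAILED":
            grouped[r["status_code"]].append(r["description"])
    return dict(grouped)

results = [
    {"description": "Test with missing compulsory fields to trigger error response",
     "status_code": 200, "result": "FAILED"},
    {"description": "Test with an incorrect HTTP method",
     "status_code": 404, "result": "FAILED"},
    {"description": "Test accessing an HTTPS endpoint with HTTP",
     "status_code": 200, "result": "PASSED"},
]
print(failures_by_status(results))
```

A cluster of failures under one status code (e.g., several unexpected 200s) often points to a single root cause rather than independent bugs.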
  2. Tests Without Assertions

    • Indicates tests that ran but had no pass/fail criteria
    • May need assertions added via the test suite page or the workbench
    • Marked with warning indicators when present
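An assertion is simply a pass/fail check attached to a response. The tool's assertion builder works differently, but the underlying idea can be sketched like this (the helper name and signature are illustrative):

```python
def assert_status(response_status: int, expected: int = 200) -> None:
    """A minimal pass/fail criterion: fail the test when the status code differs."""
    if response_status != expected:
        raise AssertionError(f"Expected HTTP {expected}, got {response_status}")

assert_status(200)   # passes silently
# assert_status(404) # would raise AssertionError and mark the test FAILED
```

A test without any such check can only report that it ran, which is why assertion-less tests are flagged rather than counted as passed.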
  3. Navigation and Control

    • Use "Run" to re-execute individual tests
    • "Go to test suite" for test configuration
    • "Details" to expand/collapse test information

# Best Practices for Using Details View

  1. Analysis

    • Review failed tests first
    • Check patterns in status codes
    • Note tests without assertions for improvement
  2. Documentation

    • Use test descriptions for clear failure understanding
    • Document unexpected status codes
    • Track patterns across multiple runs
  3. Troubleshooting

    • Use status codes to identify issue types
    • Compare similar test results
    • Check for consistent failure patterns
  4. Reporting

    • Include details view in test reports
    • Highlight critical failures
    • Track assertion coverage
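Assertion coverage can be tracked as the share of executed tests that carry at least one assertion. A sketch using the "Json Generator Endpoint" numbers from the example above:

```python
def assertion_coverage(executed: int, without_assertions: int) -> float:
    """Fraction of executed tests that have at least one assertion."""
    if executed == 0:
        return 0.0
    return (executed - without_assertions) / executed

print(f"{assertion_coverage(15, 11):.0%}")  # 27% for the Json Generator Endpoint example
```

A low percentage here means most tests only confirm that a request completed, not that it behaved correctly, so raising this number is usually the quickest way to make a group run more meaningful.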

# Best Practices

# Environment Management

  • Always verify the selected environment before running
  • Use consistent environments for related test suites
  • Document environment-specific configurations

# Execution Tips

  1. Wait for all tests to complete before navigating away
  2. Generate reports after execution for documentation
  3. Review failed tests immediately
  4. Check tests without assertions for completeness
  5. Use the details view for debugging failures

# Group Execution Strategy

  • Run smaller groups first to verify configuration
  • Use the "Run all" feature for comprehensive testing
  • Monitor execution progress for all test suites
  • Generate reports for documentation and analysis
  • Review execution results before proceeding with dependent tests

# Testing Workflow

  1. Select the appropriate environment for your test run
  2. Choose between running individual test suites or the entire group
  3. Monitor the execution progress
  4. Review test results and generate reports
  5. Investigate and document any failures
  6. Export results for sharing or documentation

Remember to wait for all test executions to complete before closing the page or navigating away from the results view.