# End-to-End (E2E) Testing

KushoAI's End-to-End testing feature enables you to create API testing workflows by chaining multiple API calls together. You can build sequential test scenarios in which the output of one API serves as input for subsequent APIs, enabling comprehensive integration testing.

## Overview

E2E workflows consist of multiple interconnected API calls that execute in sequence. You can add APIs to your workflow either by creating them manually or by selecting from your existing test suites. When you select an API from existing test suites, KushoAI automatically generates new test cases specifically for the E2E context.

Key Features:

  • Sequential API Execution: Chain multiple APIs together with data flow
  • Data Mapping: Pass response data between APIs using various reference methods
  • Test Case Combinations: Execute multiple test case combinations across connected APIs
  • Execution Profiles: Save and reuse test case selections for consistent testing
  • Dynamic Data Generation: Built-in functions for generating test data

Requirements:

  • Minimum 2 APIs per workflow (single API execution not supported in E2E context)
  • Maximum 20 APIs per workflow for optimal performance

## Creating an E2E Workflow

### Step 1: Initialize Your Workflow

  1. Navigate to the Create E2E Workflow page

  2. Enter a descriptive name and description for your workflow, then click Create

  3. You'll be redirected to the workflow builder interface

### Step 2: Add APIs to Your Workflow

You have two options for adding APIs to your workflow:

#### Adding APIs Manually

  1. Click Enter New API and fill in the required API details
  2. Click Add API in the bottom right corner to add the API to your workflow

  3. Once added, KushoAI automatically generates test cases for the API. Click the hamburger menu (☰) at the top of the API node to view and manage these test cases

#### Adding APIs from Existing Test Suites

  1. Click Select Existing API to open the API selection sidebar
  2. Browse through your existing APIs listed in the sidebar
  3. Select multiple APIs by clicking on them (you can choose as many as needed for your workflow)
  4. Click Add Selected to add the chosen APIs to your workflow
  5. Once added, you can connect these APIs and configure the data flow as described in the next section
  6. Continue adding additional APIs to complete your workflow

## Connecting APIs and Data Flow

### Basic API Connection

To establish data flow between APIs in your workflow:

  1. Click and drag from the source API's response field to the target API
  2. This creates a connection that passes response data between the APIs

*Dragging API Response*

*Completing API Connection*

### Advanced Data Mapping

For more granular control over data flow, you can reference specific fields from previous API responses. KushoAI provides multiple methods for referencing API response data:

#### Available Response Components

You can access the following components from any API response:

  • response: The main response body containing the API's returned data
  • headers: HTTP response headers
  • statusCode: HTTP status code (200, 404, 500, etc.)
  • request: Details of the original request that was sent
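For illustration, the result of a previous call might look like the following envelope (all field values here are hypothetical):

```json
{
  "response": {"userId": "u-42", "token": "abc123"},
  "headers": {"content-type": "application/json"},
  "statusCode": 200,
  "request": {"method": "POST", "url": "https://api.example.com/login"}
}
```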

#### 1. Named References (`%%...%%`)

Reference specific APIs by their test suite names or IDs:

Basic Reference Format:

```
%%<test_suite_id>.<response_field>%%
```

Connected API Reference Format:

```
%%<test_suite_name>.<key>.<field>%%
```

Examples:

```json
{
  "id": "%%Previous Connected API.response.id%%",
  "allowCredentials": "%%Previous Connected API.headers['access-control-allow-credentials']%%",
  "filter": "%%IBM Intraday Data Fetch.response.date%%"
}
```

To use this method:

  1. Click the edit button on the target API
  2. Open the Edit Drawer (located on the right side of the interface)
  3. Under Connected API Requests, locate the test suite name and reference format

*Edit Drawer Location*

*Test Suite IDs*

#### 2. Previous References (JSON-e `$eval`)

Use the `previous` field within JSON-e expressions to reference the immediately preceding API:

Basic Previous Usage:

```json
{
  "userId": {
    "$eval": "previous.response.userId"
  },
  "token": {
    "$eval": "previous.response.token"
  },
  "status": {
    "$eval": "previous.statusCode"
  }
}
```

Previous Field Access:

  • `previous.response` - Main API response body
  • `previous.headers` - HTTP response headers
  • `previous.statusCode` - HTTP status code
  • `previous.request` - Original request details

When to Use Each Method:

  • Use `previous` for simple linear workflows and sequential data transformations
  • Use named references for complex workflows, non-adjacent API references, or when API order might change
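The `previous` lookup can be pictured with a short Python sketch (a simplified illustration, not KushoAI's actual resolver): walk the request template, and wherever a `$eval` string starts with `previous.`, substitute the value from the prior call's result.

```python
def resolve_previous(template, previous):
    """Recursively replace {"$eval": "previous.<path>"} nodes with values
    drawn from the previous API call's result envelope."""
    if isinstance(template, dict):
        expr = template.get("$eval")
        if set(template) == {"$eval"} and isinstance(expr, str) and expr.startswith("previous."):
            value = previous
            for part in expr.split(".")[1:]:   # walk response / headers / statusCode paths
                value = value[part]
            return value
        return {k: resolve_previous(v, previous) for k, v in template.items()}
    if isinstance(template, list):
        return [resolve_previous(item, previous) for item in template]
    return template                            # plain values pass through unchanged

previous_call = {
    "response": {"userId": "u-42", "token": "abc123"},
    "headers": {"content-type": "application/json"},
    "statusCode": 200,
}
request_body = resolve_previous(
    {"userId": {"$eval": "previous.response.userId"},
     "status": {"$eval": "previous.statusCode"}},
    previous_call,
)
print(request_body)  # {'userId': 'u-42', 'status': 200}
```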

#### 3. Automatic Field Mapping (Autofill)

KushoAI provides intelligent autofill that automatically maps fields from previous API responses based on field names and smart matching strategies.

Basic Syntax:

```
__autofill__                           # Basic autofill using field name
__autofill:strategy__                  # Strategy-specific autofill using field name
__autofill:strategy:targetKey__        # Strategy-specific autofill with target key
```

Available Strategies:

Semantic Strategy (Default) - Uses concept mappings:

```jsonc
{
  "token": "__autofill__",              // Matches access_token, auth_token, jwt
  "id": "__autofill:semantic:user__"    // Matches user_id, customer_id, etc.
}
```

Exact Strategy - Precise matching with path support:

```jsonc
{
  "userId": "__autofill:exact:user.userId__",           // Nested object access
  "orderId": "__autofill:exact:orders[0].orderId__",    // Array index access
  "eventId": "__autofill:exact:events.eventId__"        // Smart array shortcut
}
```

Fuzzy Strategy - Handles naming convention variations:

```jsonc
{
  "userId": "__autofill:fuzzy:user_id__"  // Matches userId, user_id, UserID, etc.
}
```
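To make fuzzy matching concrete, here is a rough Python sketch (an assumption about behavior, not KushoAI's implementation): normalize away case and separator differences before comparing field names.

```python
import re

def normalize(name):
    """Collapse camelCase, snake_case, kebab-case, and PascalCase
    variants of a field name into one canonical form."""
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)  # split camelCase boundaries
    return name.replace("-", "_").replace("_", "").lower()

def fuzzy_lookup(target_key, response):
    """Return the value whose key fuzzily matches target_key, else None."""
    wanted = normalize(target_key)
    for key, value in response.items():
        if normalize(key) == wanted:
            return value
    return None

response = {"UserID": 99, "created_at": "2024-01-01"}
print(fuzzy_lookup("user_id", response))  # 99
```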

Path-Based Exact Matching Examples:

For complex API responses:

```json
{
  "user": {
    "userId": "12345",
    "profile": {"email": "user@example.com"}
  },
  "orders": [
    {"orderId": "ORD001", "items": [{"price": 29.99}]}
  ]
}
```

Use path notation:

```json
{
  "userId": "__autofill:exact:user.userId__",
  "email": "__autofill:exact:user.profile.email__",
  "firstOrderId": "__autofill:exact:orders[0].orderId__",
  "firstItemPrice": "__autofill:exact:orders[0].items[0].price__",
  "anyOrderId": "__autofill:exact:orders.orderId__"
}
```
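The path notation can be approximated with a small resolver. This is an illustrative sketch only; in particular, the "smart array shortcut" semantics here (return the key from the first list element that contains it) are an assumption:

```python
import re

def extract_path(data, path):
    """Resolve dotted paths with optional [n] indexes, e.g. orders[0].items[0].price."""
    tokens = re.findall(r"[^.\[\]]+|\[\d+\]", path)
    current = data
    for token in tokens:
        if token.startswith("["):
            current = current[int(token[1:-1])]          # explicit array index
        elif isinstance(current, list):
            # smart shortcut: search list elements for the key
            current = next(item[token] for item in current if token in item)
        else:
            current = current[token]
    return current

response = {
    "user": {"userId": "12345", "profile": {"email": "user@example.com"}},
    "orders": [{"orderId": "ORD001", "items": [{"price": 29.99}]}],
}
print(extract_path(response, "orders[0].items[0].price"))  # 29.99
print(extract_path(response, "orders.orderId"))            # ORD001
```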

## Advanced JSON-e Operations

KushoAI supports JSON-e templating for complex data manipulation and transformation using the `$eval` syntax.

### Combining with Previous References

```json
{
  "userId": {
    "$eval": "previous.response.userId"
  },
  "profileData": {
    "$eval": "merge(previous.response.profile, {status: 'active'})"
  },
  "userEmail": {
    "$eval": "lowercase(previous.response.email)"
  },
  "isSuccessful": {
    "$eval": "previous.statusCode == 200"
  }
}
```

### Complex Operations

```json
{
  "extractedId": {
    "$eval": "split(previous.response.resourceUrl, '/').slice(-1)[0]"
  },
  "processedArray": {
    "$eval": "map(previous.response.items, item => item.name)"
  },
  "conditionalValue": {
    "$eval": "previous.statusCode >= 200 && previous.statusCode < 300 ? previous.response.data : null"
  }
}
```

### Combining Autofill with JSON-e

```json
{
  "upperCaseEmail": {
    "$eval": "uppercase(__autofill:exact:email__)"
  },
  "formattedId": {
    "$eval": "concat('USER-', __autofill:fuzzy:user_id__)"
  }
}
```

### JSON-e Capabilities

  • Reference API Data: Use `%%API_Name.key.field%%` syntax for named references
  • Previous API Access: Use `previous.key.field` within `$eval` expressions
  • JavaScript-like Expressions: Complex data manipulation with `$eval`
  • Supported Operations:
    • Array operations (split, join, filter, map)
    • String manipulations (substring, replace, concatenation)
    • Arithmetic operations (+, -, *, /)
    • Logical comparisons (==, !=, >, <)
  • Multi-API Integration: Combine data from multiple API responses
  • Variable Access: Reference workflow variables

## Dynamic Test Data Generation

KushoAI provides built-in functions to generate dynamic test data within your E2E workflows.

### Random Date Generation

Future Dates:

```json
{
  "scheduledDate": {
    "$eval": "randomFutureDate('YYYY-MM-DD HH:mm:ss')"
  }
}
```

Date Ranges:

```json
{
  "eventDate": {
    "$eval": "randomDate('YYYY-MM-DD', -10, 10)"
  }
}
```

Generates dates between 10 days ago and 10 days from now.
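As a mental model, `randomDate` behaves like this Python sketch (an illustration, not KushoAI's implementation; the format tokens are swapped for Python's `strftime` codes):

```python
import random
from datetime import datetime, timedelta

def random_date(fmt="%Y-%m-%d", min_days=-10, max_days=10):
    """Pick a uniform day offset inside [min_days, max_days] and format it."""
    offset = random.randint(min_days, max_days)
    return (datetime.now() + timedelta(days=offset)).strftime(fmt)

def random_future_date(fmt="%Y-%m-%d %H:%M:%S", max_days=365):
    """Future-only variant: offset is always at least one day ahead."""
    return random_date(fmt, 1, max_days)

print(random_date())  # a date within 10 days of today, e.g. "2024-06-01"
```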

### Random UUID Generation

Array-based (reusable):

```jsonc
{
  "sessionId": {"$eval": "randomUUID[0]"},
  "requestId": {"$eval": "randomUUID[0]"},    // Same UUID
  "correlationId": {"$eval": "randomUUID[1]"} // Different UUID
}
```

Function-based:

```json
{
  "transactionId": {"$eval": "randomUUIDFn()"}
}
```
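One way to picture the difference between the two styles, as a hedged Python sketch: index access memoizes one UUID per index so it can be reused across fields, while the function form mints a fresh UUID on every call.

```python
import uuid

class UUIDPool:
    """randomUUID[i] semantics: the same index always yields the same UUID."""
    def __init__(self):
        self._pool = {}
    def __getitem__(self, index):
        if index not in self._pool:
            self._pool[index] = str(uuid.uuid4())
        return self._pool[index]

random_uuid = UUIDPool()
assert random_uuid[0] == random_uuid[0]   # same index, same UUID
assert random_uuid[0] != random_uuid[1]   # different index, different UUID

def random_uuid_fn():
    """randomUUIDFn() semantics: a fresh UUID every call."""
    return str(uuid.uuid4())
```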

### Random String Generation

Pattern-based:

```json
{
  "userName": {
    "$eval": "randomStringFromFormat('Xxxxxx Xxxxxxx')"
  }
}
```

*Format: x=lowercase, X=uppercase, #=number, =alphanumeric*

Length-based:

```json
{
  "sessionToken": {"$eval": "randomString(32)"}
}
```

### Persistent Random Data

Use reference names to maintain consistent values across workflow execution:

```jsonc
{
  "customerId": {"$eval": "randomString(6, 'customer-id')"},
  "customerEmail": {"$eval": "randomStringFromFormat('customer-Xxxxxx@test.com', 'customer-email')"},
  "sameCustomerId": {"$eval": "randomString(8, 'customer-id')"} // Returns same value
}
```

Features:

  • Values persist throughout workflow execution
  • Cross-function reference sharing
  • Case-sensitive reference names
  • Unique references prevent conflicts
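The reference-name behavior can be sketched as a cache keyed by reference name (an illustration only; note that the cached value wins even when a later call passes a different length, matching the `randomString(8, 'customer-id')` example above):

```python
import random
import string

_reference_cache = {}

def random_string(length, ref=None):
    """Generate a random string; if a reference name is given, cache the
    first result and return it for every later call with that name."""
    if ref is not None and ref in _reference_cache:
        return _reference_cache[ref]          # persisted value, length ignored
    value = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
    if ref is not None:
        _reference_cache[ref] = value
    return value

first = random_string(6, "customer-id")
again = random_string(8, "customer-id")       # same cached value
assert first == again
```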

## Test Management

### Test Case Configuration

After adding APIs to your workflow, manage which test cases to execute:

  1. Click the hamburger menu (☰) on any API node to access test case management

  2. The Test Case Management modal opens, displaying all available test cases

Test Case Operations:

  • Select Test Cases: Choose which test cases to include in workflow execution
  • Configure Combinations: Set up different test case combinations for each API
  • Preview Execution: Review selected test combinations before running
  • Close Modal: Click "X" to close modal and save selections

## Execution Profiles

Execution Profiles provide a way to save and reuse specific test case selections for your E2E workflows, ensuring consistent test execution across team members and environments.

### Key Benefits

  • Consistency: Same test cases executed every time across team members
  • Efficiency: Quickly switch between different testing scenarios
  • CI/CD Integration: Use profiles for automated test execution with consistent selections
  • Team Collaboration: Share standardized test configurations

### Creating Execution Profiles

  1. Configure Test Selections: Select test cases across all APIs in your workflow
  2. Open Execution Profile Menu: Click "Execution Profile" button in toolbar
  3. Save Profile: Choose "Save Profile" from dropdown
  4. Name Your Profile: Enter descriptive name (e.g., "Smoke Tests", "Critical Path")
  5. Review and Adjust: Modal shows current selections with pagination support
  6. Complete Save: Click "Save Profile" to create execution profile

Important: Every API in your E2E workflow must have at least one test case selected to create a valid execution profile.

### Managing Execution Profiles

Viewing All Profiles:

  • Access "View All Profiles" from Execution Profile dropdown
  • Browse saved profiles with detailed information

Profile Operations:

  • Edit Profile: Click edit icon to modify test case selections
  • Copy UUID: Use copy icon to get profile UUID for CI/CD integration
  • Delete Profile: Remove profiles no longer needed

### Profile Structure

Each execution profile contains:

  • Profile Name: User-friendly identifier for testing scenario
  • Test Case Selections: Specific test cases selected for each API/test suite
  • API Combinations: Configuration of how test cases combine across connected APIs
  • UUID: Unique identifier for programmatic access

### Best Practices

Naming Conventions:

  • "E2E Smoke Tests" - Quick validation of critical user journeys
  • "Full E2E Regression" - Comprehensive testing of all workflow paths
  • "User Onboarding Flow" - Complete new user registration and setup process
  • "Payment Processing Path" - End-to-end payment and transaction workflows

Profile Organization:

  • Development Validation: Quick checks during feature development
  • Pre-Release Testing: Comprehensive workflow validation before deployment
  • Production Monitoring: Critical path verification in live environments
  • Integration Testing: Full workflow validation after system changes

## Workflow Execution

### Running Your Workflow

  1. After configuring test cases or selecting an execution profile, click Run at the top of the interface

  2. KushoAI executes all possible combinations of your selected test cases
  3. The execution results window displays comprehensive test details for each combination

### Execution Flow with Profiles

When you run an E2E workflow with an execution profile:

  1. Profile Loading: System loads your saved test case selections for all APIs
  2. Combination Generation: Creates all possible execution paths based on selected test cases across APIs
  3. Sequential Execution: Runs test combinations according to workflow's API connection sequence
  4. Result Compilation: Provides comprehensive results for all executed combinations with data flow validation

### Example E2E Execution

For a workflow with Profile "User Journey":

  • API 1 (Registration) with 2 selected test cases
  • API 2 (Login) with 3 selected test cases
  • API 3 (Profile Update) with 2 selected test cases
  • Connected in sequence: Registration → Login → Profile Update

The system executes 12 combinations (2 × 3 × 2):

  1. Registration-Test1 → Login-Test1 → Update-Test1
  2. Registration-Test1 → Login-Test1 → Update-Test2
  3. Registration-Test1 → Login-Test2 → Update-Test1 ...and so on for all combinations
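The 2 × 3 × 2 arithmetic is just a Cartesian product over each API's selected test cases, which you can reproduce with Python's `itertools.product` (the test-case names here are the hypothetical ones from the example):

```python
from itertools import product

registration = ["Reg-Test1", "Reg-Test2"]
login = ["Login-Test1", "Login-Test2", "Login-Test3"]
profile_update = ["Update-Test1", "Update-Test2"]

# Every execution path is one tuple in the product of the three selections.
combinations = list(product(registration, login, profile_update))
print(len(combinations))   # 12
print(combinations[0])     # ('Reg-Test1', 'Login-Test1', 'Update-Test1')
```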

## Understanding Execution Results

### Result Overview

For each test combination, you can view:

  • Test Description: Clear description of test case combination being executed
  • Request Details: Actual request sent to API (after resolving all dynamic parameters)
  • Response Data: Complete API response received
  • Status Code: HTTP status code returned by API
  • Assertion Results: Pass/fail status for each assertion in test case

### Detailed View

  • Expandable Rows: Click any test result row to expand and view full request/response details
  • Dynamic Parameter Resolution: Request shows all resolved dynamic parameters, including:
    • Special function outputs (random dates, UUIDs, strings)
    • Data from previous API responses
    • Variable substitutions and JSON-e expressions

### Re-execution

  • Run All Button: Located in top right corner, allows re-execution of all test combinations
  • Fresh Execution: Each re-run generates new dynamic values and re-evaluates all expressions

## API Wait Times

Configure delays between API calls for scenarios requiring processing time:

  1. Access Wait Time Settings: Click the clock icon next to any API's URL
  2. Set Delay: A modal opens where you can enter the wait time in seconds
  3. Default Behavior: The default wait time is 0 seconds (immediate execution)
  4. Execution Flow: The wait time is applied after an API completes, before the next API is called

Use Cases:

  • APIs requiring processing time before dependent calls
  • Rate limiting compliance
  • Simulating realistic user interaction timing
  • Debugging timing-related issues

## Run History

KushoAI maintains complete history of all workflow executions for analysis and reference.

### Accessing Run History

  1. Click Run History button to view previous workflow executions
  2. Browse chronological list of past runs
  3. Select any previous run to view its detailed results

### Run History Features

  • Historical Results: Each run displays same comprehensive details as fresh execution
  • Result Preservation: All request/response data, assertions, and dynamic parameter resolutions preserved
  • Comparison Capability: Compare results across different runs to track API behavior over time
  • Audit Trail: Complete record of workflow testing for compliance and debugging

## CI/CD Integration

Execution profiles enable automated E2E testing in CI/CD pipelines:

  • API Integration: Use profile UUIDs in CI/CD pipelines for consistent E2E test execution
  • Workflow Validation: Automatically test complete user journeys as part of deployment processes
  • Data Flow Testing: Validate API integrations and data passing between services
  • Regression Prevention: Ensure workflow changes don't break existing user journeys

For detailed CI/CD setup instructions, see the CI/CD Integration Guide.

## Workflow Limitations

To ensure optimal performance and reliability, E2E workflows have the following constraints:

### API Limits

  • Maximum APIs per Workflow: 20 APIs
  • Minimum APIs Required: 2 APIs (single API workflows not supported)

### Data Size Limitations

Since E2E workflows execute in the browser environment, there are practical limits:

  • Response Size: Individual API responses should not exceed 10MB
  • Total Workflow Data: Combined data from all API responses should stay under 50MB
  • Request Payload: Individual request payloads limited to 5MB
  • Browser Memory: Large datasets may impact browser performance

### Performance Recommendations

  • Keep response payloads lean by requesting only necessary data fields
  • Use pagination for large dataset APIs
  • Consider workflow segmentation for complex data processing scenarios
  • Monitor browser memory usage during execution of data-intensive workflows

## Best Practices

### Workflow Design

  • Start Simple: Begin with basic linear workflows before adding complexity
  • Plan Data Flow: Map out which data needs to pass between APIs
  • Use Descriptive Names: Name workflows and APIs clearly for team collaboration
  • Test Incrementally: Add and test one API at a time

### Data Mapping Strategy

  • Use previous for simple sequential workflows
  • Use autofill for intelligent field matching across different API response formats
  • Use named references for complex, multi-branch workflows
  • Use exact paths for precise extraction from nested structures

### Test Case Management

  • Create Meaningful Profiles: Use execution profiles for different testing scenarios
  • Organize by Purpose: Group test cases by functionality (smoke, regression, etc.)
  • Regular Maintenance: Keep profiles updated as APIs evolve
  • Team Coordination: Share profile UUIDs and naming conventions across team

### Performance Optimization

  • Minimize Test Combinations: Select only necessary test cases to reduce execution time
  • Use Wait Times Judiciously: Only add delays where actually needed
  • Monitor Resource Usage: Keep an eye on response sizes and execution times
  • Cache Common Data: Use persistent random data for consistent test scenarios

Remember: E2E testing is powerful but can be complex. Start with simple workflows and gradually add sophistication as you become comfortable with the features.