Metadata-Version: 2.3
Name: codemie-test-harness
Version: 0.1.377
Summary: Autotest for CodeMie backend and UI
Author: Anton Yeromin
Author-email: anton_yeromin@epam.com
Requires-Python: >=3.12,<4.0
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Dist: PyHamcrest (>=2.1.0,<3.0.0)
Requires-Dist: aws-assume-role-lib (>=2.10.0,<3.0.0)
Requires-Dist: boto3 (>=1.39.8,<2.0.0)
Requires-Dist: click (>=8.1.7,<9.0.0)
Requires-Dist: codemie-plugins (>=0.1.123,<0.2.0)
Requires-Dist: codemie-sdk-python (==0.1.377)
Requires-Dist: msal (>=1.31.1,<2.0.0)
Requires-Dist: pytest (>=8.4.1,<9.0.0)
Requires-Dist: pytest-playwright (>=0.7.0,<0.8.0)
Requires-Dist: pytest-repeat (>=0.9.3,<0.10.0)
Requires-Dist: pytest-reportportal (>=5.5.2,<6.0.0)
Requires-Dist: pytest-rerunfailures (>=15.1,<16.0)
Requires-Dist: pytest-timeout (>=2.4.0,<3.0.0)
Requires-Dist: pytest-xdist (>=3.6.1,<4.0.0)
Requires-Dist: python-dotenv (>=1.1.0,<2.0.0)
Requires-Dist: python-gitlab (>=5.6.0,<6.0.0)
Requires-Dist: pyyaml (>=6.0.2,<7.0.0)
Requires-Dist: questionary (>=2.1.0,<3.0.0)
Requires-Dist: rich (>=14.0.0,<15.0.0)
Requires-Dist: tzlocal (>=5.3.1,<6.0.0)
Description-Content-Type: text/markdown

# CodeMie Test Harness

End-to-end, integration, and UI test suite for CodeMie services. Test LLM assistants, workflows, tools, and integrations with a user-friendly CLI or pytest.

## Quick Start

```bash
# Install and launch interactive mode
pip install codemie-test-harness
codemie-test-harness

# Or run without installing
uvx codemie-test-harness
```

**First time?** The CLI guides you through setup. Just select **Configuration → Setup** and follow the prompts.

**Running tests?** Choose **Run Tests → Run Test Suite → smoke** for a quick validation (roughly 8-12 minutes).

## Requirements

- **Python**: 3.12 or higher
- **Platform**: Linux, macOS, Windows (WSL recommended)
- **For UI tests**: Playwright browsers (`playwright install`)
- **For most test suites**: AWS credentials (see configuration below)

> **⚠️ AWS Credentials Required**
> Most test suites (smoke, api, ui, opensource, enterprise) require AWS credentials to load integration settings from Parameter Store.
> **Exception**: The `sanity` suite works without AWS credentials.
> See [AWS Credentials Setup](#aws-credentials-setup) for details.

## Table of Contents

- [Quick Start](#quick-start)
- [Requirements](#requirements)
- [Part 1: Interactive CLI (Recommended)](#part-1-interactive-cli-recommended)
  - [Installation](#installation)
  - [Getting Started](#getting-started)
  - [Navigation Tips](#navigation-tips)
  - [Configuration](#configuration)
  - [Running Tests](#running-tests)
  - [Interactive Features](#interactive-features)
  - [Configuration File Reference](#configuration-file-reference)
- [Complete Example: First Test Run](#complete-example-first-test-run)
- [Part 2: Running with pytest](#part-2-running-with-pytest)
  - [Installation for Contributors](#installation-for-contributors)
  - [Configuration with .env File](#configuration-with-env-file)
  - [Running Tests with pytest](#running-tests-with-pytest)
- [Troubleshooting](#troubleshooting)
- [Quick Reference Card](#quick-reference-card)
- [Support](#support)

---

## Part 1: Interactive CLI (Recommended)

The easiest way to use the test harness. No command memorization required - just navigate menus and follow prompts.

> **For developers**: See [Part 2: Running with pytest](#part-2-running-with-pytest) for direct pytest commands and `.env` configuration.
>
> **For technical details**: See `CLAUDE.md` in the repository for architecture and development patterns.

### Installation

**Option 1: Install with pip (persistent installation)**

```bash
pip install codemie-test-harness
```

**Option 2: Run with uvx (no installation needed)**

```bash
uvx codemie-test-harness
```

**Verify installation:**

```bash
codemie-test-harness --help
```

### Getting Started

Launch the interactive CLI:

```bash
codemie-test-harness
```

You'll see the main menu:

```
╔═══════════════════════════════════════════════╗
║    CodeMie Test Harness - Interactive Mode    ║
╚═══════════════════════════════════════════════╝

? What would you like to do?
  🚀 Run Tests
❯ ⚙️ Configuration
  🤖 Chat with Assistant
  🔄 Execute Workflow
  ❌ Exit
```

**First-time setup:** Navigate to **Configuration** → **Setup (Quick Configuration)** to configure your environment.

### Navigation Tips

Before diving into configuration, here's how to navigate the interactive CLI:

- **Arrow Keys**: Navigate menu options
- **Enter**: Select an option
- **Ctrl+C**: Cancel/Exit at any time
- **Back Options**: Every submenu has a "Back" option
- **Menu Loops**: Configuration menus loop until you select "Back"
- **Smart Defaults**: Pre-filled values for common configurations
- **Validation**: Input validation prevents invalid values

### Configuration

The interactive CLI guides you through configuration with smart defaults and validation.

#### Quick Setup Wizard

Select: **Configuration** → **Setup (Quick Configuration)**

The wizard will guide you through:

**1. Select Environment**

```
? Select environment:
  Localhost - http://localhost:8080
❯ Preview - https://codemie-preview.lab.epam.com/code-assistant-api
  Production - https://codemie.lab.epam.com/code-assistant-api
  Custom - Enter URL manually
```

**2. Authentication Setup**

The wizard automatically detects localhost and skips authentication setup. For remote environments (Preview/Production), it prompts for:

- Auth Server URL (default provided)
- Client ID (default provided)
- Realm Name (default provided)
- Client Secret
- Optional: Username/Password authentication
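
Under the hood these values drive a standard OAuth2 client-credentials request against the Keycloak realm. The sketch below shows how such a token is typically fetched; the endpoint shape is standard Keycloak, but the harness's actual auth flow may differ:

```python
def token_url(server_url: str, realm: str) -> str:
    """Standard Keycloak token endpoint for a realm."""
    return f"{server_url.rstrip('/')}/realms/{realm}/protocol/openid-connect/token"


def fetch_token(server_url: str, realm: str, client_id: str, client_secret: str) -> str:
    """Exchange client credentials for an access token (illustrative only)."""
    import requests  # third-party; imported lazily so token_url stays dependency-free

    resp = requests.post(
        token_url(server_url, realm),
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```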

**3. AWS Credentials (Optional)**

Configure AWS credentials to access integration settings from Parameter Store:

```
? How would you like to configure AWS credentials?
  📁 Use existing AWS profile
  ➕ Create new AWS profile
  🔑 Enter access keys directly
  ⏭️ Skip AWS configuration
  ⬅️ Back
```

**4. Summary & Confirmation**

The wizard displays all configured values with masked sensitive data.

#### Configuration for Localhost

**Minimal localhost setup:**

1. Select **Configuration** → **Setup**
2. Choose **Localhost** environment
3. Authentication is automatically skipped ✓
4. Configure AWS credentials (see requirements below)
5. Done! Ready to run tests

**What you need:**
- CodeMie API running on localhost:8080
- AWS credentials for integration settings

**AWS Credentials Requirements:**
- **REQUIRED for all test suites** (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
- **EXCEPTION: Sanity suite does not require AWS** - Only tests assistants/workflows/datasources without integrations
- Provides automatic loading of integration credentials from Parameter Store
- Without AWS: You can manually configure integrations in the config file

#### Configuration for Preview/Production

**Remote environment setup:**

1. Select **Configuration** → **Setup**
2. Choose **Preview** or **Production** environment
3. Configure authentication:
   - Accept default Auth Server URL or enter custom
   - Accept default Client ID or enter custom
   - Accept default Realm Name or enter custom
   - Enter Client Secret
   - Optional: Configure username/password
4. Configure AWS credentials (see requirements below)
5. Done! Ready to run tests

**AWS Credentials Requirements:**
- **REQUIRED for all test suites** (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
- **EXCEPTION: Sanity suite does not require AWS** - Only tests assistants/workflows/datasources without integrations
- Provides automatic loading of integration credentials from Parameter Store

#### AWS Credentials Setup

AWS credentials enable automatic loading of integration settings (GitLab, JIRA, Confluence, etc.) from Parameter Store.

**When do you need AWS?**
- ✅ **Required**: `smoke`, `api`, `ui`, `opensource`, `enterprise` suites
- ⏭️ **Not required**: `sanity` suite (no integrations)
- ⚠️ **Alternative**: Manual configuration in config file (tedious for 86+ variables)

**Navigate to:** Configuration → AWS Management

**Option 1: Use Existing AWS Profile** (Recommended)

Select your profile from `~/.aws/credentials`:

```
? Select AWS profile:
❯ default
  codemie-prod
  codemie-dev
```

**Option 2: Create New AWS Profile**

The CLI guides you through:
1. Enter profile name (e.g., "codemie")
2. Enter AWS Access Key ID
3. Enter AWS Secret Access Key
4. Profile is saved to `~/.aws/credentials` with secure permissions
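
The saved profile is a standard section in `~/.aws/credentials`; creating it yourself with `aws configure --profile codemie` (or a text editor) is equivalent:

```properties
# ~/.aws/credentials  (key values below are placeholders)
[codemie]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
```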

**Option 3: Enter Access Keys Directly**

Keys are stored in the test harness configuration file (`~/.codemie/test-harness.json`).

**Option 4: Remove AWS Credentials**

Clears all AWS configuration from the test harness.

#### Integrations Management

Manage credentials for 86+ integration variables across 10 categories.

**Navigate to:** Configuration → Integrations Management

**Features:**

1. **📋 View Current Integrations** - See all configured integrations (masked or real values)
2. **📂 View Categories** - List all integration categories:
   - Version Control (GitLab, GitHub)
   - Project Management (JIRA Server/Cloud, Confluence Server/Cloud)
   - Cloud Providers (AWS, Azure, GCP)
   - Code Quality (SonarQube, SonarCloud)
   - DevOps (Azure DevOps)
   - Access Management (Keycloak)
   - Notifications (Email, OAuth, Telegram)
   - Data Management (MySQL, PostgreSQL, MSSQL, LiteLLM, Elasticsearch)
   - IT Service (ServiceNow)
   - Quality Assurance (Report Portal, Kubernetes)
3. **⚙️ Setup by Category** - Interactive wizard for specific category
4. **✅ Validate Integrations** - Check configuration completeness

**Example: Setup GitLab Integration**

1. Select **Configuration** → **Integrations Management** → **Setup by Category**
2. Choose **Version Control**
3. Enter values for each prompt (or press Enter to skip):
   ```
   GITLAB_URL: https://gitlab.example.com
   GITLAB_TOKEN: **********************
   GITLAB_PROJECT: https://gitlab.example.com/group/project
   GITLAB_PROJECT_ID: 12345
   ```
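
The **Validate Integrations** check can be pictured as a completeness pass over each category's required variables. An illustrative sketch (the category-to-variable mapping here is an assumption based on the GitLab example above, not the harness's actual schema):

```python
REQUIRED_VARS = {
    # Assumed mapping for illustration; the real harness tracks 86+ variables.
    "Version Control": ["GITLAB_URL", "GITLAB_TOKEN", "GITLAB_PROJECT", "GITLAB_PROJECT_ID"],
    "Project Management": ["JIRA_URL", "JIRA_TOKEN"],
}


def missing_vars(category: str, settings: dict[str, str]) -> list[str]:
    """Return required variables that are absent or empty for a category."""
    return [key for key in REQUIRED_VARS.get(category, []) if not settings.get(key)]
```

For example, validating `{"GITLAB_URL": "https://gitlab.example.com"}` against **Version Control** would report the three remaining unset variables.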

### Running Tests

#### Test Suites (Recommended)

**Navigate to:** Run Tests → Run Test Suite

Choose from 8 predefined test suites optimized for different use cases:

| Suite | Use Case | Description | Workers | Reruns | Time |
|-------|----------|-------------|---------|--------|------|
| **sanity** | DevOps CI/CD | Fastest - API sanity checks for deployment validation | 8 | 2 | ~2 min |
| **smoke** | Local Dev | All smoke tests (API + UI) for rapid feedback | 8 | 2 | 8-12 min |
| **smoke-api** | Local Dev | API-only smoke tests - fast backend validation | 8 | 2 | 3-5 min |
| **smoke-ui** | Local Dev | UI-only smoke tests - critical user paths | 4 | 2 | 3-5 min |
| **api** | QA Regression | Full API regression (parallel-safe tests) | 10 | 2 | 30-45 min |
| **ui** | QA UI Tests | Full UI regression with Playwright | 4 | 2 | 20-30 min |
| **opensource** | Feature Testing | Non-enterprise (open-source) features | 10 | 2 | 25-35 min |
| **enterprise** | Feature Testing | Enterprise-only features | 10 | 2 | 15-25 min |

**Interactive Flow:**

1. Select suite from the list with descriptions
2. Configure number of parallel workers (default provided)
3. Configure number of reruns on failure (default provided)
4. Review execution summary with marks, workers, and reruns
5. Confirm to start execution
6. Tests run with live output
7. See completion status

**Example: Running Smoke Tests**

```
? Select a test suite:
❯ smoke        - Quick smoke tests for local development
  sanity       - Sanity check for DevOps CI/CD pipelines
  api          - Full API regression suite
  [...]

? Number of parallel workers: 8
? Number of test reruns on failure: 2

Running test suite: smoke
Description: Quick smoke tests for local development
Marks: smoke
Workers: 8
Reruns: 2

? Proceed with test execution? Yes

[pytest output...]

✓ Test execution completed!
```

#### Running by Custom Marks

**Navigate to:** Run Tests → Run with Custom Marks

**Interactive Flow:**

1. **Optional:** View available marks first
   - Choose format: List view (simple) or Table view (with file details)
2. Enter pytest marks expression with logical operators
3. Configure workers and reruns
4. Review summary and confirm
5. Execute tests

**Common Mark Examples:**

```
# Single mark
api

# Multiple marks with AND
smoke and api

# Multiple marks with OR
jira or confluence

# Exclude marks with NOT
api and not ui

# Complex expressions with parentheses
(gitlab or github) and code_kb

# Multiple exclusions
api and not (ui or not_for_parallel_run)
```

**📋 Available Marks by Category:**

Before running custom marks, view all available marks:
```bash
codemie-test-harness marks          # List view
codemie-test-harness marks --verbose # Detailed view with file locations
```

**🏗️ Architecture**
- `api` - API integration tests
- `ui` - UI tests with Playwright
- `mcp` - Model Context Protocol tests
- `plugin` - Plugin functionality tests

**💨 Speed**
- `smoke` - Quick smoke tests
- `sanity` - Sanity checks (fastest, no AWS required)

**🔐 License**
- `enterprise` - Enterprise features
- `opensource` - Non-enterprise features (implied by absence of `enterprise`)

**🔗 Integrations**
- `gitlab`, `github`, `git` - Version control systems
- `jira`, `jira_cloud` - JIRA integrations
- `confluence`, `confluence_cloud` - Confluence integrations
- `ado` - Azure DevOps
- `servicenow` - ServiceNow

**📚 Knowledge Bases**
- `jira_kb` - JIRA knowledge base tests
- `confluence_kb` - Confluence knowledge base tests
- `code_kb` - Code knowledge base tests

**🤖 Features**
- `assistant` - Assistant functionality
- `workflow` - Workflow execution
- `llm` - LLM model tests
- `datasource` - Datasource management
- `conversations` - Conversation API

**⚠️ Special**
- `not_for_parallel_run` - Sequential execution required
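
For mark expressions like these to run without `PytestUnknownMarkWarning`, marks are normally registered in the pytest configuration. A representative fragment (illustrative subset; the repository's own config is authoritative):

```properties
# pytest.ini
[pytest]
markers =
    smoke: quick smoke tests
    sanity: sanity checks, no AWS required
    api: API integration tests
    ui: UI tests with Playwright
    not_for_parallel_run: must run sequentially
```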

### Interactive Features

#### Configuration Management

**List Settings**
- View all configured settings
- Sensitive values are masked by default
- See total count of configured values

**Set Specific Value**
- Set any configuration key manually
- Autocomplete suggestions for common keys
- Secure password input for sensitive values

**Get Specific Value**
- View a single configuration value
- Shows masked value for sensitive keys

**Unset Specific Value**
- Remove specific configuration keys
- Confirmation prompt before removal

#### Assistant Chat

**Navigate to:** Chat with Assistant

**Features:**
- Start new conversations or continue existing ones
- Stream responses in real-time
- Langfuse tracing support
- Interactive message input

**Usage:**
1. Enter assistant ID
2. Optional: Enter conversation ID to continue previous chat
3. Optional: Enable streaming or Langfuse tracing
4. Type your message
5. View assistant response
6. Continue conversation

#### Workflow Execution

**Navigate to:** Execute Workflow

**Features:**
- Execute workflows by ID
- Provide user input
- Custom execution IDs
- View execution results

**Usage:**
1. Enter workflow ID
2. Optional: Provide user input for the workflow
3. Optional: Specify custom execution ID
4. Execute and view results

### Configuration File Reference

All configuration is stored in: `~/.codemie/test-harness.json`

**Priority Order** (highest to lowest):
1. CLI flags (temporary, for single run)
2. Environment variables (from `.env` file)
3. Configuration file (`~/.codemie/test-harness.json`)
4. AWS Parameter Store (if AWS credentials configured)
5. Default values (built-in)
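
That lookup order amounts to a first-match resolution over the five sources. A minimal sketch (illustrative only, not the harness's actual code):

```python
def resolve(key, cli_flags, env_vars, config_file, param_store, defaults):
    """Return the value for `key` from the highest-priority source that has it."""
    for source in (cli_flags, env_vars, config_file, param_store, defaults):
        if key in source:
            return source[key]
    raise KeyError(key)
```

With `env_vars={"HEADLESS": "True"}` and `defaults={"HEADLESS": "False"}`, the environment variable wins because it sits higher in the chain.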

**Viewing the file:**
```bash
cat ~/.codemie/test-harness.json | jq
```

**Manual editing** (advanced):
```bash
# Backup first
cp ~/.codemie/test-harness.json ~/.codemie/test-harness.json.backup

# Edit with your preferred editor
nano ~/.codemie/test-harness.json
```

**Resetting configuration:**
```bash
rm ~/.codemie/test-harness.json
codemie-test-harness  # Start fresh
```

---

## Complete Example: First Test Run

Here's a complete walkthrough for first-time users:

**1. Install**
```bash
pip install codemie-test-harness
```

**2. Launch interactive mode**
```bash
codemie-test-harness
```

**3. Configure (first time only)**
```
Select: ⚙️ Configuration
Select: Setup (Quick Configuration)
Choose: Preview environment
Accept defaults for Auth Server, Client ID, Realm
Enter: Your Client Secret
Choose: Use existing AWS profile (or enter credentials)
Confirm configuration
```

**4. Run tests**
```
Select: 🚀 Run Tests
Select: Run Test Suite
Choose: smoke (Quick smoke tests)
Workers: 8 (press Enter for default)
Reruns: 2 (press Enter for default)
Confirm: Yes
Wait: 8-12 minutes
Result: See test results!
```

**5. Explore results**
- Check `~/.codemie/test-harness.json` for saved configuration
- Test results are displayed in terminal
- If ReportPortal is configured, view results there

---

## Part 2: Running with pytest

For contributors working from the repository or users preferring direct pytest commands.

### Installation for Contributors

**1. Clone the repository:**

```bash
git clone <repository-url>
cd test-harness
```

**2. Install with Poetry:**

```bash
poetry install
```

**3. Install Playwright browsers (for UI tests):**

```bash
playwright install
```

### Configuration with .env File

Create a `.env` file in the `codemie_test_harness` directory.

#### Configuration for Localhost

**Minimal localhost setup:**

```properties
CODEMIE_API_DOMAIN=http://localhost:8080
TEST_USER_FULL_NAME=dev-codemie-user

# AWS credentials (see requirements below)
# Option 1: Use AWS profile
AWS_PROFILE=my-profile-name

# Option 2: Direct credentials
AWS_ACCESS_KEY=<your_aws_access_key>
AWS_SECRET_KEY=<your_aws_secret_key>
```

**AWS Credentials Requirements:**
- **REQUIRED for all test suites** (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
- **EXCEPTION: Sanity suite does not require AWS** - Only tests assistants/workflows/datasources without integrations
- Provides automatic loading of integration credentials from Parameter Store
- Without AWS: You can manually configure integrations in `.env`

#### Configuration for Preview/Production

**Full remote setup:**

```properties
# API Configuration
CODEMIE_API_DOMAIN=https://codemie-preview.lab.epam.com/code-assistant-api

# Authentication
AUTH_SERVER_URL=https://auth.codemie.lab.epam.com/
AUTH_CLIENT_ID=codemie-preview-sdk
AUTH_REALM_NAME=codemie-prod
AUTH_USERNAME=<username>
AUTH_PASSWORD=<password>

# AWS credentials (see requirements below)
AWS_PROFILE=codemie-preview
# OR
# AWS_ACCESS_KEY=<key>
# AWS_SECRET_KEY=<secret>

# Optional: Test configuration
DEFAULT_TIMEOUT=60
CLEANUP_DATA=True

# Optional: UI Testing
FRONTEND_URL=https://codemie-preview.lab.epam.com
HEADLESS=True
```

**AWS Credentials Requirements:**
- **REQUIRED for all test suites** (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
- **EXCEPTION: Sanity suite does not require AWS** - Only tests assistants/workflows/datasources without integrations
- Provides automatic loading of integration credentials from Parameter Store

#### Integration Configuration

You can manually configure integrations in `.env` or use AWS Parameter Store.

**Version Control:**

```properties
GIT_ENV=gitlab  # or github

# GitLab
GITLAB_URL=https://gitlab.example.com
GITLAB_TOKEN=<token>
GITLAB_PROJECT=https://gitlab.example.com/group/project
GITLAB_PROJECT_ID=12345

# GitHub
GITHUB_URL=https://github.com
GITHUB_TOKEN=<token>
GITHUB_PROJECT=https://github.com/org/repo
```

**Project Management:**

```properties
# JIRA Server
JIRA_URL=https://jira.example.com
JIRA_TOKEN=<token>
JIRA_JQL=project = 'PROJECT' and status = 'Open'

# JIRA Cloud
JIRA_CLOUD_URL=https://company.atlassian.net
JIRA_CLOUD_EMAIL=user@company.com
JIRA_CLOUD_TOKEN=<api_token>

# Confluence
CONFLUENCE_URL=https://confluence.example.com
CONFLUENCE_TOKEN=<token>
CONFLUENCE_CQL=space = 'SPACE' and type = page
```

**Credential Priority:**
1. Environment variables in `.env` (highest)
2. AWS Parameter Store
3. Default values

### Running Tests with pytest

#### Test Suites with pytest

Run the same test suites using pytest directly:

**Smoke Tests (Local Development)**

```bash
# All smoke tests (API + UI)
pytest -n 8 -m "smoke" --reruns 2

# API smoke tests only (fast backend validation)
pytest -n 8 -m "smoke and api and not ui" --reruns 2

# UI smoke tests only (critical user paths)
pytest -n 4 -m "smoke and ui" --reruns 2
```

**Sanity Tests (CI/CD)**

```bash
pytest -n 8 -m "sanity" --reruns 2
```

**Full API Regression**

```bash
# Parallel-safe tests
pytest -n 10 -m "api" --reruns 2

# Sequential tests (run separately)
pytest -m "api and not_for_parallel_run" --reruns 2
```

**UI Tests**

```bash
pytest -n 4 -m "ui" --reruns 2
```

**Open Source Features**

```bash
pytest -n 10 -m "not enterprise and api" --reruns 2
```

**Enterprise Features**

```bash
pytest -n 10 -m "enterprise" --reruns 2
```

#### Test Suite Comparison

| Suite | CLI Command | pytest Command                                        |
|-------|-------------|-------------------------------------------------------|
| **sanity** | `codemie-test-harness run sanity` | `pytest -n 8 -m "sanity" --reruns 2`                  |
| **smoke** | `codemie-test-harness run smoke` | `pytest -n 8 -m "smoke" --reruns 2`                   |
| **api** | `codemie-test-harness run api` | `pytest -n 10 -m "api" --reruns 2`                    |
| **ui** | `codemie-test-harness run ui` | `pytest -n 4 -m "ui" --reruns 2`                      |
| **opensource** | `codemie-test-harness run opensource` | `pytest -n 10 -m "not enterprise and api" --reruns 2` |
| **enterprise** | `codemie-test-harness run enterprise` | `pytest -n 10 -m "enterprise" --reruns 2`             |

#### Custom Mark Selection with pytest

**Single Mark:**

```bash
pytest -n 8 -m "gitlab" --reruns 2
```

**AND Operator (both marks required):**

```bash
pytest -n 8 -m "api and jira" --reruns 2
pytest -n 4 -m "gitlab and code_kb" --reruns 2
```

**OR Operator (either mark):**

```bash
pytest -n 8 -m "jira or confluence" --reruns 2
pytest -n 6 -m "jira_kb or confluence_kb" --reruns 2
```

**NOT Operator (exclude marks):**

```bash
pytest -n 10 -m "api and not ui" --reruns 2
pytest -n 8 -m "not not_for_parallel_run" --reruns 2
```

**Complex Expressions:**

```bash
# Multiple conditions with parentheses
pytest -n 8 -m "(gitlab or github) and code_kb" --reruns 2

# Exclude multiple marks
pytest -n 10 -m "api and not (ui or not_for_parallel_run)" --reruns 2

# Knowledge base tests only
pytest -n 8 -m "(jira_kb or confluence_kb or code_kb)" --reruns 2
```

#### Common Testing Scenarios

**Testing Specific Integrations:**

```bash
# GitLab integration
pytest -n 8 -m "api and gitlab" --reruns 2

# JIRA integration
pytest -n 8 -m "api and jira" --reruns 2

# Confluence integration
pytest -n 8 -m "api and confluence" --reruns 2

# All Git providers
pytest -n 8 -m "gitlab or github or git" --reruns 2
```

**Testing Specific Components:**

```bash
# Workflows
pytest -n 8 -m "api and workflow" --reruns 2

# Assistants
pytest -n 8 -m "api and assistant" --reruns 2

# LLM models
pytest -n 8 -m "api and llm" --reruns 2

# MCP (Model Context Protocol)
pytest -n 8 -m "api and mcp" --reruns 2

# Plugins
pytest -n 8 -m "api and plugin" --reruns 2
```

**Testing Without Full Backend:**

```bash
# Exclude plugin tests (when NATS is not running)
pytest -n 8 -m "api and not plugin" --reruns 2

# Exclude MCP tests (when mcp-connect is not running)
pytest -n 8 -m "api and not mcp" --reruns 2

# Exclude both
pytest -n 8 -m "api and not (plugin or mcp)" --reruns 2
```

#### pytest Flags Explained

| Flag | Description | Example |
|------|-------------|---------|
| `-n <number>` | Number of parallel workers (pytest-xdist) | `-n 8` |
| `-m "<expression>"` | Select tests by marks | `-m "api and not ui"` |
| `--reruns <number>` | Retry failed tests N times (pytest-rerunfailures) | `--reruns 2` |
| `--count <number>` | Run each test N times (pytest-repeat) | `--count 50` |
| `--timeout <seconds>` | Per-test timeout in seconds (pytest-timeout) | `--timeout 600` |
| `-v` | Verbose output | `-v` |
| `-s` | Show print statements | `-s` |
| `-x` | Stop on first failure | `-x` |
| `--lf` | Run last failed tests | `--lf` |
| `--reportportal` | Report results to ReportPortal | `--reportportal` |

#### Test Timeout Configuration

Control per-test timeout to prevent hanging tests.

**In .env file:**

```properties
TEST_TIMEOUT=600  # 10 minutes per test
```

**Via pytest command:**

```bash
# Set timeout for this run
pytest -n 8 -m "api" --timeout 900 --reruns 2

# Disable timeout (debugging only)
pytest -m "slow_tests" --timeout 0
```

**Default:** 300 seconds (5 minutes) per test

When a test exceeds the timeout:
- Test is terminated immediately
- Marked as FAILED with timeout message
- Stack trace shows where execution stopped
- Other tests continue normally
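
The 300-second default can also be pinned in the pytest configuration itself, since pytest-timeout reads a `timeout` ini option:

```properties
# pytest.ini (illustrative)
[pytest]
timeout = 300
```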

#### UI Tests with Playwright

**Install browsers (one-time):**

```bash
playwright install
```

**Run UI tests:**

```bash
pytest -n 4 -m "ui" --reruns 2
```

**Headless mode:**

Set `HEADLESS=True` in `.env` or:

```bash
HEADLESS=True pytest -n 4 -m "ui" --reruns 2
```

#### ReportPortal Integration

**Configure in .env:**

```properties
RP_ENDPOINT=https://reportportal.example.com
RP_PROJECT=codemie_tests
RP_API_KEY=<api_key>
```

**Run with ReportPortal:**

```bash
pytest -n 10 -m "api" --reruns 2 --reportportal
```

## Troubleshooting

### Common Issues

**"Command not found: codemie-test-harness"**
- Run `pip install codemie-test-harness` or use `uvx codemie-test-harness`
- Check that pip's bin directory is in your PATH

**"Authentication failed"**
- Verify AUTH_CLIENT_SECRET is correct
- Check AUTH_SERVER_URL is accessible
- For localhost, authentication is automatically skipped

**"AWS Parameter Store access denied"**
- Verify AWS credentials with `aws sts get-caller-identity`
- Check that your AWS user has Parameter Store read permissions
- Required path: `/codemie/autotests/integrations/*`

**"Tests hanging or timing out"**
- Check DEFAULT_TIMEOUT in configuration (default: 300 seconds)
- Increase timeout: `pytest --timeout 600 -m "slow_tests"`
- For debugging, disable timeout: `pytest --timeout 0`

**"Playwright browser not found"**
- Run `playwright install` to download browsers
- For specific browser: `playwright install chromium`

**"Integration tests failing"**
- Verify integration credentials in Configuration → Integrations Management
- Run validation: Configuration → Integrations Management → Validate Integrations
- Check if integration services (GitLab, JIRA, etc.) are accessible

**Stopping tests mid-run**
- Press `Ctrl+C` to gracefully stop pytest
- Running tests will complete their current test
- Cleanup happens automatically even on interrupt

**Configuration not persisting**
- Configuration is stored in `~/.codemie/test-harness.json`
- Check file permissions: `ls -la ~/.codemie/test-harness.json`
- Use Configuration → List Settings to verify saved values

## Quick Reference Card

### Most Common Commands

```bash
# Interactive mode (easiest)
codemie-test-harness

# Quick test runs
codemie-test-harness run sanity     # Fastest (2 min, no AWS)
codemie-test-harness run smoke      # Quick (8-12 min)
codemie-test-harness run api        # Full regression (30-45 min)

# Configuration
codemie-test-harness config list    # View all settings
codemie-test-harness marks          # List available marks

# With pytest
pytest -n 8 -m "smoke" --reruns 2                    # Smoke tests
pytest -n 10 -m "api" --reruns 2                     # API tests
pytest -n 4 -m "ui" --reruns 2                       # UI tests
pytest -m "api and gitlab" --reruns 2                # GitLab tests
```

### Files & Paths

```bash
~/.codemie/test-harness.json        # Configuration file
~/.aws/credentials                  # AWS credentials
codemie_test_harness/.env           # Environment variables (for pytest)
```

### Quick Setup

```bash
# Install
pip install codemie-test-harness

# First run
codemie-test-harness                # Interactive setup wizard

# Or quick config
codemie-test-harness config set CODEMIE_API_DOMAIN http://localhost:8080
codemie-test-harness config set AWS_PROFILE my-profile
```

## Support

For issues, questions, or contributions:
- Create an issue in the repository
- Contact: Anton Yeromin (anton_yeromin@epam.com)


