Metadata-Version: 2.4
Name: lumenova-beacon
Version: 2.5.4
Summary: Lumenova Beacon SDK - A Python SDK for observability tracing with OpenTelemetry-compatible span export
Project-URL: Homepage, https://lumenova.ai
Author-email: Lumenova AI <support@lumenova.ai>
Maintainer-email: Lumenova AI <support@lumenova.ai>
License-Expression: Apache-2.0
License-File: LICENSE
Keywords: ai,langchain,llm,monitoring,observability,opentelemetry,sdk,tracing
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Monitoring
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: httpx>=0.27.0
Requires-Dist: tenacity>=9.0.0
Requires-Dist: typing-extensions>=4.12.0
Provides-Extra: aws
Requires-Dist: boto3>=1.34.0; extra == 'aws'
Provides-Extra: crewai
Requires-Dist: crewai>=0.28.0; extra == 'crewai'
Requires-Dist: opentelemetry-api==1.38.0; extra == 'crewai'
Requires-Dist: opentelemetry-exporter-otlp-proto-common==1.38.0; extra == 'crewai'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == 'crewai'
Requires-Dist: opentelemetry-exporter-otlp-proto-http==1.38.0; extra == 'crewai'
Requires-Dist: opentelemetry-sdk==1.38.0; extra == 'crewai'
Provides-Extra: dev
Requires-Dist: black>=24.0.0; extra == 'dev'
Requires-Dist: ipython>=8.37.0; extra == 'dev'
Requires-Dist: mypy>=1.10.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.4.0; extra == 'dev'
Provides-Extra: examples
Requires-Dist: anthropic>=0.40.0; extra == 'examples'
Requires-Dist: crewai>=0.1.0; extra == 'examples'
Requires-Dist: fastapi>=0.115.0; extra == 'examples'
Requires-Dist: google-genai>=1.0.0; extra == 'examples'
Requires-Dist: litellm>=1.70.0; extra == 'examples'
Requires-Dist: llama-index-core>=0.12.3; extra == 'examples'
Requires-Dist: llama-index-embeddings-azure-openai>=0.3.0; extra == 'examples'
Requires-Dist: llama-index-llms-azure-openai>=0.3.0; extra == 'examples'
Requires-Dist: openai>=1.0.0; extra == 'examples'
Requires-Dist: openinference-instrumentation-google-genai>=0.1.0; extra == 'examples'
Requires-Dist: openinference-instrumentation-llama-index>=4.3.8; extra == 'examples'
Requires-Dist: opentelemetry-instrumentation-anthropic>=0.1.0; extra == 'examples'
Requires-Dist: opentelemetry-instrumentation-fastapi==0.59b0; extra == 'examples'
Requires-Dist: opentelemetry-instrumentation-httpx==0.59b0; extra == 'examples'
Requires-Dist: opentelemetry-instrumentation-openai>=0.1.0; extra == 'examples'
Requires-Dist: opentelemetry-instrumentation-redis==0.59b0; extra == 'examples'
Requires-Dist: opentelemetry-instrumentation-requests==0.59b0; extra == 'examples'
Requires-Dist: opentelemetry-instrumentation==0.59b0; extra == 'examples'
Requires-Dist: python-dotenv>=1.0.0; extra == 'examples'
Requires-Dist: redis>=5.0.0; extra == 'examples'
Requires-Dist: requests>=2.31.0; extra == 'examples'
Requires-Dist: strands-agents-tools>=0.2.0; extra == 'examples'
Requires-Dist: strands-agents>=1.15.0; extra == 'examples'
Provides-Extra: langchain
Requires-Dist: faiss-cpu>=1.13.1; extra == 'langchain'
Requires-Dist: langchain-anthropic>=0.3.0; extra == 'langchain'
Requires-Dist: langchain-community>=0.4.1; extra == 'langchain'
Requires-Dist: langchain-core>=0.3.0; extra == 'langchain'
Requires-Dist: langchain-openai>=1.1.5; extra == 'langchain'
Requires-Dist: langchain>=1.0.0; extra == 'langchain'
Requires-Dist: langgraph>=1.0.5; extra == 'langchain'
Requires-Dist: opentelemetry-api==1.38.0; extra == 'langchain'
Requires-Dist: opentelemetry-exporter-otlp-proto-common==1.38.0; extra == 'langchain'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == 'langchain'
Requires-Dist: opentelemetry-exporter-otlp-proto-http==1.38.0; extra == 'langchain'
Requires-Dist: opentelemetry-sdk==1.38.0; extra == 'langchain'
Provides-Extra: litellm
Requires-Dist: litellm>=1.70.0; extra == 'litellm'
Requires-Dist: opentelemetry-api==1.38.0; extra == 'litellm'
Requires-Dist: opentelemetry-exporter-otlp-proto-common==1.38.0; extra == 'litellm'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == 'litellm'
Requires-Dist: opentelemetry-exporter-otlp-proto-http==1.38.0; extra == 'litellm'
Requires-Dist: opentelemetry-sdk==1.38.0; extra == 'litellm'
Provides-Extra: opentelemetry
Requires-Dist: opentelemetry-api==1.38.0; extra == 'opentelemetry'
Requires-Dist: opentelemetry-exporter-otlp-proto-common==1.38.0; extra == 'opentelemetry'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == 'opentelemetry'
Requires-Dist: opentelemetry-exporter-otlp-proto-http==1.38.0; extra == 'opentelemetry'
Requires-Dist: opentelemetry-sdk==1.38.0; extra == 'opentelemetry'
Provides-Extra: strands
Requires-Dist: boto3>=1.34.0; extra == 'strands'
Requires-Dist: opentelemetry-api==1.38.0; extra == 'strands'
Requires-Dist: opentelemetry-exporter-otlp-proto-common==1.38.0; extra == 'strands'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == 'strands'
Requires-Dist: opentelemetry-exporter-otlp-proto-http==1.38.0; extra == 'strands'
Requires-Dist: opentelemetry-sdk==1.38.0; extra == 'strands'
Requires-Dist: strands-agents-tools>=0.2.0; extra == 'strands'
Requires-Dist: strands-agents>=1.15.0; extra == 'strands'
Provides-Extra: test
Requires-Dist: boto3>=1.34.0; extra == 'test'
Requires-Dist: crewai>=0.28.0; extra == 'test'
Requires-Dist: langchain-core>=0.3.0; extra == 'test'
Requires-Dist: opentelemetry-api==1.38.0; extra == 'test'
Requires-Dist: opentelemetry-exporter-otlp-proto-common==1.38.0; extra == 'test'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == 'test'
Requires-Dist: opentelemetry-exporter-otlp-proto-http==1.38.0; extra == 'test'
Requires-Dist: opentelemetry-sdk==1.38.0; extra == 'test'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'test'
Requires-Dist: pytest>=8.0.0; extra == 'test'
Requires-Dist: ruff>=0.4.0; extra == 'test'
Requires-Dist: strands-agents>=1.15.0; extra == 'test'
Description-Content-Type: text/markdown

# Lumenova Beacon SDK

[![PyPI version](https://img.shields.io/pypi/v/lumenova-beacon.svg)](https://pypi.org/project/lumenova-beacon/)
[![Python Versions](https://img.shields.io/pypi/pyversions/lumenova-beacon.svg)](https://pypi.org/project/lumenova-beacon/)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

> A Python observability SDK for AI/LLM applications — trace agentic frameworks (LangChain, LangGraph, CrewAI, Strands), LLM calls, and custom code with OpenTelemetry-compatible spans.

## Features

- **LangChain/LangGraph Integration** - Automatic tracing for chains, agents, tools, and retrievers, with interrupt/resume and agent handoff support
- **Strands Agents Integration** - Callback handler for AWS Strands agent tracing
- **CrewAI Integration** - Event listener for CrewAI crew tracing
- **LiteLLM Integration** - Callback logger for LiteLLM proxy tracing
- **Agentic Governance** - Real-time policy enforcement for AI agent tool calls and LLM invocations
- **OpenTelemetry Integration** - Automatic instrumentation for Anthropic, OpenAI, FastAPI, Redis, HTTPX, and more
- **Manual & Decorator Tracing** - Create spans manually or use `@trace` decorator
- **Dataset Management** - ActiveRecord-style API for managing test datasets
- **Prompt Management** - Version-controlled prompt templates with labels (staging, production)
- **Experiment & Evaluation Management** - Run experiments over datasets and evaluate results
- **Data Masking** - Built-in PII detection and redaction via Beacon Guardrails
- **Flexible Transport** - HTTP or file-based span export
- **Full Async Support** - Async/await throughout

## Requirements

- Python 3.10+

## Installation

```bash
# Base installation
pip install lumenova-beacon

# Extras are quoted so zsh doesn't treat the brackets as a glob pattern

# With OpenTelemetry support
pip install "lumenova-beacon[opentelemetry]"

# With LangChain/LangGraph support
pip install "lumenova-beacon[langchain]"

# With LiteLLM support
pip install "lumenova-beacon[litellm]"

# With Strands Agents support
pip install "lumenova-beacon[strands]"

# With CrewAI support
pip install "lumenova-beacon[crewai]"

# With AWS Secrets Manager support
pip install "lumenova-beacon[aws]"
```

## Quick Start

### LangChain / LangGraph

```python
from lumenova_beacon import BeaconClient, BeaconLangGraphHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Initialize client
client = BeaconClient(
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
)

# Create a tracing handler
handler = BeaconLangGraphHandler(session_id="session-123")

# All LangChain operations are now traced automatically
llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
chain = prompt | llm

response = chain.invoke(
    {"topic": "AI agents"},
    config={"callbacks": [handler]}
)
```

### Basic Tracing

```python
from lumenova_beacon import BeaconClient, trace

client = BeaconClient(
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
    session_id="my-session"
)

@trace
def my_function(x, y):
    return x + y

result = my_function(10, 20)  # Automatically traced
```

## Configuration

### Environment Variables

Environment variables act as fallbacks; explicit constructor parameters take precedence:

| Variable | Purpose | Default |
|----------|---------|---------|
| `BEACON_ENDPOINT` | API base URL for OTLP export | (required unless using `file_directory`) |
| `BEACON_API_KEY` | Authentication token | |
| `BEACON_SESSION_ID` | Default session ID for spans | |
| `BEACON_SERVICE_NAME` | Service name for OTEL resource (fallback: `OTEL_SERVICE_NAME`) | |
| `BEACON_ENVIRONMENT` | Deployment environment (e.g., "production", "staging") | |
| `BEACON_VERIFY` | SSL certificate verification | `true` |
| `BEACON_EAGER_EXPORT` | Export spans eagerly on end | `true` |
| `BEACON_ISOLATED` | Use private TracerProvider (avoids conflicts) | `false` |
| `BEACON_USER_EMAIL` | User email for attribution on spans | |
| `BEACON_USER_NAME` | User display name for attribution on spans | |
| `BEACON_USER_ID` | User ID for attribution on spans | |
| `BEACON_AWS_SECRET_NAME` | AWS Secrets Manager secret name (resolves API key) | |
| `BEACON_AWS_SECRET_KEY` | Key path within AWS secret | `api_key` |
| `BEACON_AWS_REGION` | AWS region for Secrets Manager | (boto3 default) |
| `BEACON_EXTRA_OTLP_ENDPOINTS` | Additional OTLP gRPC endpoints (comma-separated) | |

```bash
# Bash/Linux/macOS
export BEACON_ENDPOINT="https://your-beacon-endpoint.lumenova.ai"
export BEACON_API_KEY="your-api-key"
export BEACON_SESSION_ID="my-session"
```

```powershell
# PowerShell
$env:BEACON_ENDPOINT = "https://your-beacon-endpoint.lumenova.ai"
$env:BEACON_API_KEY = "your-api-key"
$env:BEACON_SESSION_ID = "my-session"
```
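That precedence rule — explicit parameter first, then environment variable, then default — can be sketched in plain Python (illustrative only; the `resolve` helper is not part of the SDK):

```python
import os

def resolve(param, env_var, default=None):
    """Constructor argument wins; fall back to the environment, then the default."""
    if param is not None:
        return param
    return os.environ.get(env_var, default)

os.environ["BEACON_SESSION_ID"] = "env-session"
print(resolve(None, "BEACON_SESSION_ID"))        # env-session (fallback used)
print(resolve("explicit", "BEACON_SESSION_ID"))  # explicit (parameter wins)
```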

### Configuration Options

```python
from lumenova_beacon import BeaconClient

client = BeaconClient(
    # Connection
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
    verify=True,
    headers={"Custom-Header": "value"},

    # Span Configuration
    session_id="my-session",
    service_name="my-service",
    environment="production",

    # User Identity
    user_email="user@example.com",
    user_name="Alice",
    user_id="user-123",

    # OpenTelemetry
    auto_instrument_opentelemetry=True,   # Auto-configure OTEL (default: True)
    isolated=False,                        # Use private TracerProvider (default: False)
    auto_instrument_litellm=False,         # Auto-configure LiteLLM (default: False)

    # Data Masking
    masking_function=None,                 # Custom masking function (optional)

    # General
    enabled=True,
    eager_export=True,
    record_stacktrace=False,               # Capture full stack traces on exceptions
)
```

### File Transport

For local development or testing, use `file_directory` instead of `endpoint`:

```python
from lumenova_beacon import BeaconClient

client = BeaconClient(
    file_directory="./traces",
)
```

## Integrations

### 1. LangChain/LangGraph

Automatically trace all LangChain and LangGraph operations — chains, agents, tools, retrievers, and LLM calls:

```python
from lumenova_beacon import BeaconClient, BeaconLangGraphHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

client = BeaconClient()
handler = BeaconLangGraphHandler(session_id="session-123")

# Use with request-time callbacks (recommended)
llm = ChatOpenAI(model="gpt-4")
response = llm.invoke(
    "What is the capital of France?",
    config={"callbacks": [handler]}
)

# Works with chains
prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
chain = prompt | llm
response = chain.invoke(
    {"topic": "AI"},
    config={"callbacks": [handler]}
)

# Traces agents, tools, retrievers, and more
from langchain.agents import create_react_agent, AgentExecutor

agent = create_react_agent(llm, tools, prompt)  # tools: your list of Tool objects, defined elsewhere
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke(
    {"input": "What's the weather?"},
    config={"callbacks": [handler]}
)
```

#### LangGraph with Interrupt/Resume

For LangGraph agents with checkpointing, use `BeaconLangGraphConfig` to automatically continue traces across interrupt/resume cycles:

```python
from lumenova_beacon import BeaconClient, BeaconLangGraphConfig

client = BeaconClient()

beacon = BeaconLangGraphConfig(
    graph=agent.graph,
    thread_id=thread_id,
    agent_name="planner",
    session_id="session-123",
    autodetect_handoffs=True,  # Auto-detect agent-to-agent handoffs
)
config = beacon.get_config()
result = await agent.graph.ainvoke(state, config)

# For async checkpointers (e.g., AsyncPostgresSaver):
config = await beacon.aget_config()
```

### 2. LiteLLM

Trace all LiteLLM operations across multiple LLM providers:

```python
from lumenova_beacon import BeaconClient, BeaconLiteLLMLogger
import litellm

# Option 1: Auto-instrumentation
client = BeaconClient(auto_instrument_litellm=True)

# Option 2: Manual registration
client = BeaconClient()
litellm.callbacks = [BeaconLiteLLMLogger()]

# All LiteLLM calls are now traced
response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    metadata={
        "generation_name": "greeting",
        "session_id": "user-123",
    }
)
```

### 3. Strands Agents

Trace AWS Strands Agent executions with automatic span hierarchy:

```python
from lumenova_beacon import BeaconClient, BeaconStrandsHandler
from strands import Agent

client = BeaconClient()
handler = BeaconStrandsHandler(
    session_id="my-session",
    agent_name="My Agent",
)

agent = Agent(model=model, callback_handler=handler)
result = agent("Hello, world!")
print(handler.trace_id)  # Link to Beacon trace
```

### 4. CrewAI

Trace CrewAI Crew executions via the event listener:

```python
from lumenova_beacon import BeaconClient, BeaconCrewAIListener
from crewai import Agent, Crew, Task

client = BeaconClient()

# Auto-registers with CrewAI event bus
listener = BeaconCrewAIListener(
    session_id="my-session",
    crew_name="My Research Crew",
)

crew = Crew(agents=[...], tasks=[...])
result = crew.kickoff()
print(listener.trace_id)  # Link to Beacon trace
```

### 5. OpenTelemetry

Beacon automatically configures OpenTelemetry to export spans:

```python
from lumenova_beacon import BeaconClient
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Initialize (auto-configures OpenTelemetry)
client = BeaconClient(
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
    auto_instrument_opentelemetry=True  # Default
)

# Instrument libraries
AnthropicInstrumentor().instrument()
OpenAIInstrumentor().instrument()

# Now all API calls are automatically traced!
from anthropic import Anthropic
anthropic = Anthropic()
response = anthropic.messages.create(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello!"}]
)  # Automatically traced with proper span hierarchy
```

#### Supported Instrumentors

Install additional instrumentors as needed:

```bash
pip install opentelemetry-instrumentation-anthropic
pip install opentelemetry-instrumentation-openai
pip install opentelemetry-instrumentation-fastapi
pip install opentelemetry-instrumentation-redis
pip install opentelemetry-instrumentation-httpx
pip install opentelemetry-instrumentation-requests
```

## Tracing

### Decorator Tracing

The `@trace` decorator automatically captures function execution:

```python
from lumenova_beacon import trace

# Simple usage
@trace
def process_data(data):
    return data.upper()

# With custom name
@trace(name="custom_operation")
def another_function():
    pass

# Capture inputs and outputs
@trace(capture_args=True, capture_result=True)
def calculate(x, y):
    return x + y

# Works with async functions
@trace
async def async_operation():
    await some_async_call()
```

### Manual Tracing

For more control, use context managers:

```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.types import SpanKind, StatusCode

client = BeaconClient()

# Context manager
with client.trace("operation_name") as span:
    span.set_attribute("user_id", "123")
    span.set_input({"query": "search term"})

    try:
        result = do_work()
        span.set_output(result)
        span.set_status(StatusCode.OK)
    except Exception as e:
        span.record_exception(e)
        span.set_status(StatusCode.ERROR, str(e))
        raise

# Async context manager
async with client.trace("async_operation") as span:
    result = await async_work()
    span.set_output(result)

# Direct span creation
span = client.create_span(
    name="manual_span",
    kind=SpanKind.CLIENT,
)
span.start()
# ... do work ...
span.end()
```

## Agentic Governance

Enforce governance policies on AI agent actions in real time. The governance system evaluates tool calls and LLM invocations against policy stacks before and after execution, blocking actions that violate your rules.

Key capabilities:
- **Pre & post execution hooks** — evaluate actions before they run (prevent violations) and after (audit outputs)
- **Fail-open resilience** — defaults to allowing actions when the governance API is unreachable
- **Violation strategies** — raise immediately, stop gracefully after current action, or escalate after N violations

### Decorator

```python
from lumenova_beacon import governance

# Bare decorator — requires GovernanceConfig on the active BeaconClient
@governance
def search_web(query: str) -> str:
    return web_search(query)

# With policy stack IDs
@governance(stack_ids=["my-policy-stack"])
def send_email(to: str, body: str) -> bool:
    return email_client.send(to, body)

# Tag-based policy discovery with strict mode
@governance(tags=["env:prod", "team:security"], fail_open=False)
def delete_record(record_id: str) -> None:
    db.delete(record_id)

# Works with async functions
@governance(stack_ids=["my-policy-stack"])
async def run_query(sql: str) -> list[dict]:
    return await db.execute(sql)
```

### LangChain/LangGraph Callback Handler

```python
from lumenova_beacon import BeaconLangGraphHandler, BeaconLangGraphGovernanceHandler
from langchain_openai import ChatOpenAI

handler = BeaconLangGraphHandler(session_id="session-123")
governance_handler = BeaconLangGraphGovernanceHandler(
    stack_ids=["my-policy-stack"],
    fail_open=True,
    on_violation="raise",
)

llm = ChatOpenAI(model="gpt-4")
response = llm.invoke(
    "What is the capital of France?",
    config={"callbacks": [handler, governance_handler]}
)
```

### Integrated Tracing + Governance with BeaconLangGraphConfig

For LangGraph agents, pass a `GovernanceConfig` to `BeaconLangGraphConfig` to get both tracing and governance in a single setup:

```python
from lumenova_beacon import BeaconLangGraphConfig, GovernanceConfig

beacon = BeaconLangGraphConfig(
    graph=agent.graph,
    thread_id=thread_id,
    agent_name="planner",
    governance=GovernanceConfig(
        stack_ids=["my-policy-stack"],
        fail_open=True,
        on_violation="raise",
        max_violations=3,
    ),
)
config = beacon.get_config()
result = await agent.graph.ainvoke(state, config)
```

### GovernanceConfig Options

```python
from lumenova_beacon import GovernanceConfig

config = GovernanceConfig(
    stack_ids=["stack-1", "stack-2"],       # Policy stack IDs (at least one of stack_ids/tags required)
    tags=["env:prod"],                       # Tag-based policy discovery
    fail_open=True,                          # Allow actions when API is unreachable (default: True)
    enabled_hooks={"pre_tool", "post_tool"}, # Which hooks to run (default: all four)
    on_violation="raise",                    # "raise" (immediate) or "stop" (graceful shutdown)
    max_violations=5,                        # Escalate to stop mode after N violations
    timeout=2.0,                             # API call timeout in seconds
    debug=False,                             # Enable debug logging
)
```

Available hooks: `pre_tool`, `post_tool`, `pre_llm`, `post_llm`.

### Handling Violations

```python
from lumenova_beacon.exceptions import GovernanceViolationError

try:
    result = agent.invoke(state, config)
except GovernanceViolationError as e:
    print(e.message)       # Human-readable reason
    print(e.policies)      # List of rules that were evaluated
    print(e.latency_ms)    # Evaluation latency
```

## Dataset Management

Manage test datasets with an ActiveRecord-style API. All methods have async variants with an `a` prefix (e.g., `acreate`, `aget`, `alist`).

```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.datasets import Dataset, DatasetRecord

client = BeaconClient()

# Create dataset
dataset = Dataset.create(
    name="qa-evaluation",
    description="Question answering test cases"
)

# Add records with flexible column-based data
dataset.create_record(
    data={
        "prompt": "What is AI?",
        "expected_answer": "Artificial Intelligence is...",
        "difficulty": "easy",
        "category": "definitions"
    }
)

# Bulk create records
dataset.bulk_create_records([
    {"data": {"question": "What is ML?", "expected_answer": "Machine Learning..."}},
    {"data": {"question": "What is DL?", "expected_answer": "Deep Learning..."}},
])

# List, get, update, delete
datasets, pagination = Dataset.list(page=1, page_size=20, search="qa")
dataset = Dataset.get(dataset_id="dataset-uuid", include_records=True)
dataset.update(name="updated-name", description="New description")
dataset.delete()
```

## Prompt Management

Version-controlled prompt templates with labels. All methods have async variants with an `a` prefix.

```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.prompts import Prompt

client = BeaconClient()

# Create text prompt
prompt = Prompt.create(
    name="greeting",
    template="Hello {{name}}! Welcome to {{company}}.",
    description="Customer greeting template",
    tags=["customer-support", "greeting"]
)

# Create chat prompt
prompt = Prompt.create(
    name="support-bot",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for {{product}}."},
        {"role": "user", "content": "{{question}}"}
    ],
)

# Fetch and use prompts
prompt = Prompt.get("greeting", label="production")
message = prompt.format(name="Alice", company="Acme Corp")
# Result: "Hello Alice! Welcome to Acme Corp."

# Versioning and labels
new_version = prompt.publish(
    template="Hi {{name}}! Welcome to {{company}}. We're excited to have you!",
    message="Added enthusiastic tone"
)
prompt.set_label("production", version=2)

# Convert to LangChain template
lc_prompt = prompt.to_langchain()  # Returns PromptTemplate or ChatPromptTemplate
```
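For intuition, the `{{name}}`-style substitution that `format()` performs can be approximated in plain Python (a minimal sketch of the placeholder syntax, not the SDK's implementation):

```python
import re

def render(template: str, **values: str) -> str:
    """Replace {{name}} placeholders with keyword values (illustrative sketch)."""
    def repl(match: re.Match) -> str:
        key = match.group(1)
        # Leave unknown placeholders intact rather than failing
        return str(values.get(key, match.group(0)))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", repl, template)

print(render("Hello {{name}}! Welcome to {{company}}.", name="Alice", company="Acme Corp"))
# Hello Alice! Welcome to Acme Corp.
```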

## Experiment & Evaluation Management

```python
from lumenova_beacon.experiments import Experiment
from lumenova_beacon.evaluations import Evaluation

# Create and run an experiment
experiment = Experiment.create(
    name="qa-eval-v1",
    dataset_id="dataset-uuid",
    description="Evaluate QA model accuracy",
)
run = experiment.run(process_fn=my_pipeline)  # my_pipeline: your processing function, defined elsewhere

# Create an evaluation
evaluation = Evaluation.create(
    name="accuracy-check",
    experiment_id="experiment-uuid",
)
```

## LLM Config Management

```python
from lumenova_beacon.llm_configs import LLMConfig

config = LLMConfig.get(config_id="config-uuid")
configs, pagination = LLMConfig.list(page=1, page_size=20)
```

## Data Masking

Automatically mask sensitive data (PII) before spans are exported:

```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.masking.integrations.beacon_guardrails import (
    create_beacon_masking_function,
    MaskingMode,
    PIIType,
)

# Create a masking function backed by Beacon Guardrails API
masking_fn = create_beacon_masking_function(
    pii_types=[PIIType.PERSON, PIIType.EMAIL_ADDRESS, PIIType.US_SSN],
    mode=MaskingMode.REDACT,
)

# Pass it to the client - all span data is masked before export
client = BeaconClient(
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
    masking_function=masking_fn,
)
```

You can also provide a custom masking function:

```python
def my_masking_fn(text: str) -> str:
    return text.replace("secret", "***")

client = BeaconClient(masking_function=my_masking_fn)
```
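Span inputs and outputs are often nested structures, so a masking function is conceptually applied to every string leaf. A minimal sketch of such a traversal (illustrative only — the SDK's actual walk over span data may differ):

```python
def mask_tree(value, masking_fn):
    """Apply masking_fn to every string leaf in a nested structure (sketch)."""
    if isinstance(value, str):
        return masking_fn(value)
    if isinstance(value, dict):
        return {k: mask_tree(v, masking_fn) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return type(value)(mask_tree(v, masking_fn) for v in value)
    return value  # numbers, booleans, None pass through unchanged

def my_masking_fn(text: str) -> str:
    return text.replace("secret", "***")

masked = mask_tree({"query": "the secret plan", "meta": {"n": 3}}, my_masking_fn)
print(masked)  # {'query': 'the *** plan', 'meta': {'n': 3}}
```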

## Guardrails

Apply content guardrail policies via the Beacon API:

```python
from lumenova_beacon.guardrails import Guardrail

guardrail = Guardrail(guardrail_id="guardrail-uuid")

# Sync
result = guardrail.apply("some user input")

# Async
result = await guardrail.aapply("some user input")
```

## API Reference

### Main Exports

```python
from lumenova_beacon import (
    BeaconClient,           # Main client
    BeaconConfig,           # Configuration class
    get_client,             # Get current client singleton
    trace,                  # Tracing decorator
    # Integrations (lazy-loaded)
    BeaconLangGraphHandler,  # LangChain/LangGraph
    BeaconLangGraphConfig,  # LangGraph configuration (adds interrupt/resume support)
    BeaconStrandsHandler,   # Strands Agents
    BeaconCrewAIListener,   # CrewAI
    BeaconLiteLLMLogger,    # LiteLLM
    # Governance (lazy-loaded)
    BeaconLangGraphGovernanceHandler,  # LangChain/LangGraph governance
    GovernanceConfig,                  # Governance configuration
    governance,                        # Function decorator
)

from lumenova_beacon.datasets import Dataset, DatasetRecord
from lumenova_beacon.prompts import Prompt
from lumenova_beacon.experiments import Experiment
from lumenova_beacon.evaluations import Evaluation, EvaluationRun
from lumenova_beacon.llm_configs import LLMConfig
from lumenova_beacon.guardrails import Guardrail
from lumenova_beacon.types import SpanKind, StatusCode, SpanType
```

See source docstrings for full API details on each class and method.

## Error Handling

All exceptions inherit from `BeaconError`. Key exception types:

- `ConfigurationError` — invalid configuration
- `TransportError` (`HTTPTransportError`, `FileTransportError`) — export failures
- `GovernanceViolationError` — policy blocked an action (includes `message`, `policies`, `latency_ms`)
- `DatasetError`, `PromptError`, `ExperimentError`, `EvaluationError` — resource-specific errors (each with `NotFound` and `Validation` variants)

All HTTP operations automatically retry up to 3 times with exponential backoff.
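The retry behavior follows the standard exponential-backoff pattern; a plain-Python sketch of the idea (the SDK depends on `tenacity` for this — the helper below, its delays, and attempt counts are illustrative, not SDK internals):

```python
import time

def with_retries(fn, max_attempts: int = 3, base_delay: float = 0.5):
    """Call fn, retrying on failure with exponentially growing delays (sketch)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts — surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))  # e.g. 0.5s, 1s, 2s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky, base_delay=0.05))  # "ok" on the third attempt
```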

### Graceful Degradation

```python
from lumenova_beacon import BeaconClient, trace

# Disable tracing in development
client = BeaconClient(enabled=False)

# Tracing becomes no-op when disabled
@trace
def my_function():
    return "result"  # No tracing overhead
```

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
