Metadata-Version: 2.4
Name: puvinoise-sdk
Version: 0.2.25
Summary: Independent OpenTelemetry SDK for AI agents (Anthropic, OpenAI, Ollama)
Author-email: PUVI LABS PRIVATE LIMITED <kalaiselvan@puvilabs.com>
License: Proprietary
Project-URL: Homepage, https://puvilabs.com
Keywords: puvinoise,opentelemetry,otel,tracing,llm,agents,observability
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Operating System :: OS Independent
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: System :: Monitoring
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: opentelemetry-api>=1.20
Requires-Dist: opentelemetry-sdk>=1.20
Requires-Dist: opentelemetry-exporter-otlp>=1.20
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.20; extra == "anthropic"
Provides-Extra: openai
Requires-Dist: openai>=1.0; extra == "openai"
Provides-Extra: all
Requires-Dist: anthropic>=0.20; extra == "all"
Requires-Dist: openai>=1.0; extra == "all"

# puvinoise-sdk

Independent OpenTelemetry SDK for AI agents. Supports Anthropic Claude, OpenAI, Ollama, and other LLM providers with tracing and observability.

## Install

```bash
pip install puvinoise-sdk
```

Optional provider extras:

```bash
pip install "puvinoise-sdk[anthropic]"
pip install "puvinoise-sdk[openai]"
pip install "puvinoise-sdk[all]"
```

## Quick start

```python
from puvinoise import bootstrap, run_with_trace

bootstrap()  # Configure via env: PUVI_AGENT_NAME, CUST_LLM_PROVIDER, PUVINOISE_TENANTID, PUVI_API_KEY, PUVINOISE_END_POINT_URL

with run_with_trace("my-agent-run"):
    # Your agent / LLM calls
    pass
```

## Environment

The SDK reads these variables from the environment (e.g. from a `.env` file):

| Variable | Description |
|----------|-------------|
| **`PUVI_AGENT_NAME`** | Agent/service name (e.g. `my-agent-system`). Internally the SDK stores this value under the variable `puvicustagentname` to avoid name collisions with customer agent code. |
| **`CUST_LLM_PROVIDER`** | Connected integration / LLM provider (e.g. `openai`, `anthropic`, `ollama`); mapped from your tenant’s connected integration |
| **`PUVINOISE_TENANTID`** | Tenant ID for multi-tenant isolation (from your PUViNoise organization) |
| **`PUVI_API_KEY`** | Agent access key for authenticating with the OTLP endpoint (sent as Bearer token) |
| **`PUVINOISE_END_POINT_URL`** | OTLP collector URL (e.g. `http://host:4318`; `/v1/traces` is appended if missing) |

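The endpoint and key handling described in the table can be sketched as follows. This is an illustrative, stand-alone snippet (the helper name `build_otlp_config` is ours), not the SDK's internal code:

```python
import os

def build_otlp_config(env=os.environ):
    """Illustrative helper (not SDK code): normalize the collector URL
    and build auth headers as described in the table above."""
    endpoint = env.get("PUVINOISE_END_POINT_URL", "http://localhost:4318")
    # `/v1/traces` is appended if missing
    if not endpoint.rstrip("/").endswith("/v1/traces"):
        endpoint = endpoint.rstrip("/") + "/v1/traces"
    headers = {}
    api_key = env.get("PUVI_API_KEY")
    if api_key:
        # The agent access key is sent as a Bearer token
        headers["Authorization"] = f"Bearer {api_key}"
    return endpoint, headers
```
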
Example `.env`:

```env
PUVI_AGENT_NAME=my-agent-system
CUST_LLM_PROVIDER=openai
PUVINOISE_TENANTID=your-tenant-id
PUVI_API_KEY=your_agent_access_key
PUVINOISE_END_POINT_URL=http://localhost:4318
```

## Telemetry emitted

The SDK sends all env-derived parameters to the collector so the dashboard can display and filter by them:

- **Resource attributes** (on every span): `service.name` (from `PUVI_AGENT_NAME`), `tenant.id` (from `PUVINOISE_TENANTID`), `puvinoise.llm_provider` (from `CUST_LLM_PROVIDER`).
- **Span attributes** on `agent.run`: `agent.name`, `agent.run_id`, `tenant.id`, `puvinoise.llm_provider`, `llm.provider`.

The backend and dashboard use these to scope traces by tenant, filter by agent name and LLM provider, and show tenant ID and provider in the trace list and detail views.
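As a rough illustration of the env-to-attribute mapping above (the dict shape and defaults here are ours, not the SDK's actual implementation):

```python
import os

def resource_attributes(env=os.environ):
    """Illustrative mapping from environment variables to the resource
    attributes listed above (not the SDK's internal code)."""
    return {
        "service.name": env.get("PUVI_AGENT_NAME", "unknown-agent"),
        "tenant.id": env.get("PUVINOISE_TENANTID", ""),
        "puvinoise.llm_provider": env.get("CUST_LLM_PROVIDER", ""),
    }
```
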

## Decision telemetry (AI agent signals)

The SDK can emit **decision telemetry** for behaviour intelligence (intent, tool candidates, reasoning checkpoints, decision confidence, tool selection reasoning, agent state transitions). See [DECISION_TELEMETRY_SDK_DESIGN.md](../docs/DECISION_TELEMETRY_SDK_DESIGN.md) for the full design.

**Optional run-level intent** (set on `agent.run` span and propagated to child spans):

```python
from puvinoise import bootstrap, run_with_trace

bootstrap()

def my_agent_fn():
    ...  # your agent logic

run_with_trace(
    my_agent_fn,
    agent_name="my-agent",
    goal="answer user question",
    task_type="qa",
    intent="answer_question",
)
```

**Emit signals explicitly** (must be inside an active trace, e.g. inside `run_with_trace` or under an `agent.run` span):

```python
from puvinoise import (
    record_intent_classification,
    record_tool_candidates,
    record_reasoning_checkpoint,
    record_decision_confidence,
    record_tool_selection_reasoning,
    record_agent_state_transition,
)

# Intent (also sets agent.intent / agent.goal / agent.task_type on current span)
record_intent_classification("answer_question", goal="answer user question", task_type="qa", confidence=0.92)

# Tool candidates and selection reasoning (on tool span)
record_tool_candidates(["web_search", "calculator"], selected_name="web_search", reasoning="need current info", confidence=0.9)
record_tool_selection_reasoning("web_search", "need current info", confidence=0.9)

# Reasoning checkpoint (e.g. after LLM call)
record_reasoning_checkpoint("after_system_prompt", summary="model ready", step_index=1)

# Decision confidence
record_decision_confidence(0.88, scope="turn")

# Agent state transition (updates context state)
record_agent_state_transition("thinking", "tool_calling", reason="model decided to use tool")
```

**Tool decorator with decision telemetry:**

```python
from puvinoise import instrument_tool_call

@instrument_tool_call(
    "web_search",
    tool_candidates=["web_search", "calculator"],
    selection_reasoning="need current info",
    selection_confidence=0.9,
)
def search_web(query: str):
    ...
```

Spans are still exported on the same schedule (default every 1 second via the batch processor). Decision signals appear as span attributes and as `decision.*` span events so the behaviour pipeline can build decision graphs and run the behaviour intelligence engine.

## License

Proprietary – PUVI LABS PRIVATE LIMITED
