Metadata-Version: 2.4
Name: veritell-langchain
Version: 0.1.6
Summary: Veritell LangChain SDK
Author: Veritell
License: Apache-2.0
Keywords: veritell,langchain,evaluation
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
Requires-Dist: requests>=2.31
Provides-Extra: dev
Requires-Dist: tomli>=2.0.0; python_version < "3.11" and extra == "dev"
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: ruff>=0.6.0; extra == "dev"
Requires-Dist: build>=1.2.0; extra == "dev"
Requires-Dist: twine>=5.0.0; extra == "dev"
Dynamic: license-file

# veritell-langchain

LLM evaluation and AI validation for LangChain applications.

Veritell-LangChain is a Python SDK that integrates Veritell’s AI risk and output validation API into LangChain workflows.

It enables structured evaluation of:

- Hallucination risk
- Bias detection
- Safety concerns
- Model reliability scoring

This package is designed for engineers building production AI systems who need measurable quality assurance before deployment.

## 🚀 Installation

```bash
pip install veritell-langchain
```

For development:

```bash
pip install -e ".[dev]"
```

## 🔐 Get Access (Free Beta)

To use the SDK, you need:

- A Veritell account
- An API key

Join the beta and create an API key:

- Join beta → https://veritell.ai/join-beta
- API overview → https://veritell.ai/api-overview

Store your key as an environment variable:

macOS / Linux:

```bash
export VERITELL_API_KEY="<your_api_key>"
```

Windows PowerShell:

```powershell
$env:VERITELL_API_KEY="<your_api_key>"
```

## ⚡ Quick Start (Streaming Evaluation)

By default, `evaluate_stream()` will **generate the primary response** using `primary_model` and then evaluate it with the judge models.

```python
from veritell_langchain import VeritellEvaluator

# Uses VERITELL_API_KEY automatically
v = VeritellEvaluator(base_url="https://veritell.ai/api")

for event in v.evaluate_stream(
    prompt="Explain the benefits of renewable energy.",
    primary_model="gpt-4o-mini",
    judges=["gpt-4o-mini", "grok-3-mini-latest"],
):
    print(event.event_type, event.data)
```

### Optional: evaluate your own model output (recommended for production)

If you already ran a model in your LangChain app and want Veritell to evaluate *that exact output*, pass `model_output=...`.
In this mode, Veritell treats your provided text as the primary output rather than regenerating it.

```python
from veritell_langchain import VeritellEvaluator

v = VeritellEvaluator(base_url="https://veritell.ai/api")

prompt = "Explain the benefits of renewable energy."
chain_output = "Renewable energy reduces emissions and improves energy security."

for event in v.evaluate_stream(
    prompt=prompt,
    primary_model="gpt-4o-mini",
    judges=["gpt-4o-mini", "grok-3-mini-latest"],
    model_output=chain_output,
):
    print(event.event_type, event.data)
```

Streaming responses are returned as NDJSON (newline-delimited JSON) events: each line of the stream is one self-contained JSON object.
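If you ever need to decode such a stream by hand, the format is simple to parse. A minimal sketch (the field names below are illustrative, not the documented event schema):

```python
import json

# Hypothetical raw NDJSON payload, one JSON object per line. The
# "event_type"/"data" fields mirror the attributes printed in the
# examples above; the inner contents are made up for illustration.
raw = (
    '{"event_type": "judge_result", "data": {"judge": "gpt-4o-mini"}}\n'
    '{"event_type": "done", "data": {}}\n'
)

# Each non-empty line decodes independently.
events = [json.loads(line) for line in raw.splitlines() if line.strip()]
for event in events:
    print(event["event_type"], event["data"])
```

In practice the SDK does this decoding for you and yields typed event objects.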

## 🧠 What This Package Enables

Veritell-LangChain adds structured LLM evaluation to your workflow.

Use it to:

- Detect hallucinations in LLM outputs
- Identify bias patterns
- Evaluate safety and compliance risk
- Generate structured risk scores
- Integrate AI validation into CI/CD pipelines
- Add AI quality assurance before production

It acts as a validation layer between experimentation and enterprise deployment.

## 🔄 Recommended LangChain Integration Pattern

This package does not automatically hook into LangChain callbacks (yet).

Recommended MVP workflow:

1) Run your LangChain chain or agent
2) Capture the prompt and model output
3) Send both to Veritell for evaluation
4) Review structured risk results in the dashboard

Example:

```python
from veritell_langchain import VeritellEvaluator

prompt = "Explain the benefits of renewable energy."

# Example model output (replace with your chain result)
prediction = "Renewable energy reduces emissions and improves energy security."

v = VeritellEvaluator(base_url="https://veritell.ai/api")

for event in v.evaluate_stream(
    prompt=prompt,
    primary_model="gpt-4o-mini",
    judges=["gpt-4o-mini", "grok-3-mini-latest"],
    model_output=prediction,
):
    print(event.event_type, event.data)
```

View evaluation runs in the dashboard:

- https://veritell.ai/trust/dashboard

## ⚙ Configuration

Environment variables:

- `VERITELL_API_KEY` (required)
- `VERITELL_API_BASE_URL` (optional)
- `VERITELL_TIMEOUT` (optional, seconds)
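A sketch of resolving these variables yourself before constructing the evaluator. The base-URL default matches the examples in this README; the 30-second timeout default is an assumption, not a documented value.

```python
import os

def load_config(env=os.environ):
    """Resolve the documented Veritell environment variables."""
    api_key = env.get("VERITELL_API_KEY")
    if not api_key:
        raise RuntimeError("VERITELL_API_KEY is not set")
    base_url = env.get("VERITELL_API_BASE_URL", "https://veritell.ai/api")
    timeout = float(env.get("VERITELL_TIMEOUT", "30"))
    return api_key, base_url, timeout

# Example with an explicit mapping instead of the real environment:
print(load_config({"VERITELL_API_KEY": "sk-demo", "VERITELL_TIMEOUT": "10"}))
```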

Authentication uses:

- `X-Api-Key: <VERITELL_API_KEY>`
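For reference, these are the headers a raw HTTP client would send; the SDK sets them for you. The `Accept` value is an assumption based on the NDJSON streaming described above, and `"sk-demo"` is a dummy fallback for illustration.

```python
import os

# Equivalent headers for a raw HTTP client talking to the API directly.
api_key = os.getenv("VERITELL_API_KEY", "sk-demo")  # dummy fallback
headers = {
    "X-Api-Key": api_key,
    "Accept": "application/x-ndjson",  # assumed for streaming endpoints
}
print(headers["X-Api-Key"])
```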

## 🧪 Production Example

If you installed from PyPI, the simplest way to run a “production” example is to copy/paste this snippet into your own project (it’s the same code as the repo example):

```python
import os
from veritell_langchain import VeritellEvaluator

api_key = os.getenv("VERITELL_API_KEY")
if not api_key:
    raise RuntimeError("VERITELL_API_KEY is not set")

v = VeritellEvaluator(api_key=api_key, base_url="https://veritell.ai/api")

for event in v.evaluate_stream(
    prompt="Explain the benefits of using renewable energy sources.",
    primary_model="gpt-4o-mini",
    judges=["gpt-4o-mini"],
    model_output="Renewables reduce emissions and improve energy security.",
):
    print(event.event_type, event.data)
```

If you want the exact file, it lives in the source repository as:

- `examples/real_usage_prod.py`

## 🎯 When to Use Veritell-LangChain

Use this package if you are:

- Building LangChain agents in production
- Evaluating LLM outputs for reliability
- Implementing AI testing workflows
- Deploying AI in regulated industries
- Adding structured AI governance controls
- Performing hallucination or bias detection

If you are searching for:

- LangChain evaluation tool
- LLM hallucination detection Python
- AI validation library
- LLM testing framework
- Responsible AI Python package

This SDK is designed for those use cases.

## 📄 License

Apache-2.0
