Metadata-Version: 2.4
Name: synth-ai
Version: 0.9.6
Requires-Dist: pydantic>=2.0.0
Requires-Dist: requests>=2.32.3
Requires-Dist: pynacl>=1.5.0
Requires-Dist: tqdm>=4.66.4
Requires-Dist: typing-extensions>=4.0.0
Requires-Dist: rich>=13.9.0
Requires-Dist: openai>=1.99.0
Requires-Dist: fastapi>=0.115.12
Requires-Dist: uvicorn>=0.34.2
Requires-Dist: numpy>=2.2.3
Requires-Dist: sqlalchemy>=2.0.42
Requires-Dist: click>=8.1.7,<8.2
Requires-Dist: aiohttp>=3.8.0
Requires-Dist: nest-asyncio>=1.6.0
Requires-Dist: httpx>=0.28.1
Requires-Dist: datasets>=4.0.0
Requires-Dist: jsonschema>=4.23.0
Requires-Dist: pillow>=10.0.0
Requires-Dist: imageio>=2.34.0
Requires-Dist: nle
Requires-Dist: anthropic>=0.42.0 ; extra == 'all'
Requires-Dist: groq>=0.30.0 ; extra == 'all'
Requires-Dist: google-genai>=1.26.0 ; extra == 'all'
Requires-Dist: pandas>=2.2.3 ; extra == 'analytics'
Requires-Dist: build>=1.2.2.post1 ; extra == 'dev'
Requires-Dist: twine>=4.0.0 ; extra == 'dev'
Requires-Dist: keyring>=24.0.0 ; extra == 'dev'
Requires-Dist: pytest>=8.3.3 ; extra == 'dev'
Requires-Dist: pytest-xdist>=3.6.1 ; extra == 'dev'
Requires-Dist: pytest-timeout>=2.3.1 ; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0 ; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0 ; extra == 'dev'
Requires-Dist: pyright>=1.1.350 ; extra == 'dev'
Requires-Dist: coverage[toml]>=7.3.0 ; extra == 'dev'
Requires-Dist: ruff>=0.1.0 ; extra == 'dev'
Requires-Dist: papermill>=2.6.0 ; extra == 'dev'
Requires-Dist: nest-asyncio>=1.6.0 ; extra == 'dev'
Requires-Dist: anthropic>=0.42.0 ; extra == 'providers'
Requires-Dist: groq>=0.30.0 ; extra == 'providers'
Requires-Dist: google-genai>=1.26.0 ; extra == 'providers'
Provides-Extra: all
Provides-Extra: analytics
Provides-Extra: dev
Provides-Extra: providers
Provides-Extra: schemas
License-File: synth_ai_py/LICENSE
Summary: Serverless Posttraining for Agents - Core AI functionality and tracing
Author-email: Synth AI <josh@usesynth.ai>
License: MIT
Requires-Python: >=3.11
Description-Content-Type: text/markdown; charset=UTF-8; variant=GFM
Project-URL: Homepage, https://github.com/synth-laboratories/synth-ai
Project-URL: Issues, https://github.com/synth-laboratories/synth-ai/issues
Project-URL: Repository, https://github.com/synth-laboratories/synth-ai

# Synth

[![Python](https://img.shields.io/badge/python-3.11+-blue)](https://www.python.org/)
[![PyPI](https://img.shields.io/badge/PyPI-0.9.6-orange)](https://pypi.org/project/synth-ai/)
[![Crates.io](https://img.shields.io/crates/v/synth-ai?label=crates.io)](https://crates.io/crates/synth-ai)
[![License](https://img.shields.io/badge/license-MIT-green)](LICENSE)

Build systems for orders of magnitude more complexity.

Continual and offline optimization for prompts, context, skills, and long-horizon memory.

Use the SDK in Python (`uv add synth-ai`) and Rust (beta) (`cargo add synth-ai`), or call Synth endpoints from any language.

## Synth Style

Synth is built for frontier builders first. We:

- push interface complexity inward (strong server contracts, simpler app surfaces)
- design online/offline parity with pause/resume as first-class controls
- meet production code where it is (no forced lock-in or rewrites)
- build general algorithmic foundations, then layer targeted affordances

For engineering principles and coding standards, see [specs/README.md](specs/README.md).

<p align="center">
  <picture align="center">
    <source media="(prefers-color-scheme: dark)" srcset="assets/langprobe_v2_dark.png">
    <source media="(prefers-color-scheme: light)" srcset="assets/langprobe_v2_light.png">
    <img alt="Bar chart comparing baseline vs GEPA-optimized prompt performance across GPT-4.1 Nano, GPT-4o Mini, and GPT-5 Nano." src="assets/langprobe_v2_light.png">
  </picture>
</p>

<p align="center">
  <i>Average accuracy on <a href="https://arxiv.org/abs/2502.20315">LangProBe</a> prompt optimization benchmarks.</i>
</p>

## Demo Walkthroughs

- [GEPA Banking77 Prompt Optimization](https://docs.usesynth.ai/cookbooks/banking77-colab)
- [GEPA Crafter VLM Verifier Optimization](https://docs.usesynth.ai/cookbooks/verifier-optimization)
- [GraphGen Image Style Matching](https://docs.usesynth.ai/cookbooks/graphs/overview)

Benchmark and demo runner source files live in the `Benchmarking` repo (`../Benchmarking` in a sibling checkout).

## Product Focus

- **Continual Learning Sessions (MIPRO + GEPA)**: run online sessions that update prompts from reward feedback during live traffic, with first-class `pause`/`resume`/`cancel` controls.
- **Discrete GEPA Optimization (Prompt + Context)**: run offline GEPA jobs for controlled batch optimization, compare artifacts, and promote the best candidates.
- **Voyager for Skills + Long-Term Memory**: optimize skill/context surfaces and use durable memory with retrieval and summarization for long-horizon agent systems.
- **One Canonical Runtime Surface**: use shared `systems`, `offline`, and `online` primitives across SDK and HTTP APIs.
- **Agent Infrastructure Built In**: run with pools, containers, and tunnels for local or managed rollouts without forcing app rewrites.
- **Graph + Verifier Workflows**: train GraphGen pipelines and rubric-based verifiers for domain-specific evaluation loops.
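To make the rubric-based verifier idea above concrete, here is a minimal sketch in plain Python. This is an illustration only, not the SDK's API: the `Criterion` dataclass and the criterion names are hypothetical, and a real verifier would produce the per-criterion judgments from model output rather than take them as input.

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """One rubric item with a relative weight."""
    name: str
    weight: float


def rubric_score(rubric: list[Criterion], judgments: dict[str, bool]) -> float:
    """Return the weighted fraction of rubric criteria that passed."""
    total = sum(c.weight for c in rubric)
    earned = sum(c.weight for c in rubric if judgments.get(c.name, False))
    return earned / total if total else 0.0


# Hypothetical rubric: grounding counts twice as much as concision.
rubric = [Criterion("grounded", 2.0), Criterion("concise", 1.0)]
print(rubric_score(rubric, {"grounded": True, "concise": False}))  # 2/3
```

A domain-specific evaluation loop would feed scores like this back as reward signal for the optimization jobs described above.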

## Getting Started

### Python SDK

```bash
uv add synth-ai
# or
pip install synth-ai==0.9.6
```

### Rust SDK (beta)

```bash
cargo add synth-ai
```

### API (any language)

Use your `SYNTH_API_KEY` and call Synth HTTP endpoints directly.

Docs: [docs.usesynth.ai](https://docs.usesynth.ai)
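As a sketch of what a direct HTTP call looks like from Python's standard library, the snippet below builds an authenticated request. The base URL and the `/v1/jobs` path are illustrative assumptions; consult the docs for the real routes.

```python
import json
import os
import urllib.request


def synth_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a POST request authenticated with SYNTH_API_KEY.

    The base URL and endpoint path here are assumptions for illustration;
    see docs.usesynth.ai for the actual API routes.
    """
    return urllib.request.Request(
        f"https://api.usesynth.ai{path}",  # hypothetical base URL
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['SYNTH_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


os.environ.setdefault("SYNTH_API_KEY", "sk-demo")
req = synth_request("/v1/jobs", {"kind": "gepa"})
print(req.full_url)  # https://api.usesynth.ai/v1/jobs
```

Any language with an HTTP client can follow the same shape: bearer token in the `Authorization` header, JSON body.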

## Codex CLI Setup

Install Synth, then register the hosted managed-research MCP server with one command:

```bash
uv tool install synth-ai
synth-ai mcp codex install
```

Codex will start the OAuth flow for the hosted MCP server. After login, call `smr_projects_list`, `smr_project_status_get`, or `smr_project_trigger_run`.

If you need the local stdio fallback instead of the hosted endpoint:

```bash
synth-ai setup
synth-ai mcp codex install --transport stdio
```

