Metadata-Version: 2.4
Name: keiro
Version: 0.4.1
Summary: Keiro client — call the EB1 multi-model ensemble API.
Author: Keiro Engineering
License-Expression: LicenseRef-Proprietary
Project-URL: Homepage, https://pypi.org/project/keiro/
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: <3.14,>=3.11
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.32.2
Requires-Dist: PyYAML>=6.0.1
Requires-Dist: rich>=13.0.0
Provides-Extra: dev
Requires-Dist: pytest>=8.3.2; extra == "dev"
Requires-Dist: ruff>=0.12.0; extra == "dev"

# Keiro

EB1 multi-model ensemble inference. Run multiple frontier models in parallel
and synthesize the best response.

## Quick start

```bash
pip install keiro
keiro setup
```

```python
from keiro import models

print(models("eb1-preview", "What is machine learning?"))
```

## How it works

EB1 sends your prompt to multiple frontier models (Claude, GPT, Gemini) in
parallel, then a judge model synthesizes the strongest elements of each answer
into a single response. The synthesized result is typically more accurate and
more complete than any single model's answer.
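The fan-out-and-judge pattern can be sketched with stand-in model calls. This is a minimal illustration of the idea, not Keiro's implementation: the stub backends and the longest-answer judge are placeholders (a real judge is itself a model call).

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for real model backends; each returns a candidate answer.
def call_claude(prompt):
    return f"claude: {prompt}"

def call_gpt(prompt):
    return f"gpt: {prompt}"

def call_gemini(prompt):
    return f"gemini: {prompt}"

def judge(candidates):
    # Placeholder judge: pick the longest candidate. In a real ensemble
    # the judge is another model that merges the strongest elements.
    return max(candidates, key=len)

def ensemble(prompt):
    backends = [call_claude, call_gpt, call_gemini]
    # Fan out to all backends in parallel, then synthesize.
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        candidates = list(pool.map(lambda fn: fn(prompt), backends))
    return judge(candidates)

print(ensemble("What is machine learning?"))
```

The parallel fan-out means end-to-end latency is bounded by the slowest backend plus the judge call, rather than the sum of all calls.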

## Models

| Model | Description |
|-------|-------------|
| `eb1` (default) | 3-model ensemble with synthesis |
| `eb1-preview` | Preview ensemble (GPT-5.2, Gemini, Claude) |
| `eb1-pro` | 4-model ensemble for harder tasks |
| `claude-opus-4-6` | Direct passthrough (no ensemble) |
| `gpt-5.2` | Direct passthrough |

```python
from keiro import models

# Default ensemble
answer = models("eb1", "Solve this step by step: what is 23 * 47?")

# Specific model
answer = models("claude-opus-4-6", "Write a haiku")
```

## Prompt-first API

```python
from keiro import models

reply = models.response("eb1-preview", "Explain quantum computing in one paragraph.")
print(reply.text)

creative = models.instance("eb1-preview", temperature=0.8)
print(creative("Write a limerick about debugging."))

for chunk in models.stream("eb1-preview", "Draft a launch email."):
    print(chunk, end="")
```

`complete(...)` remains available as the smallest one-liner, but `models` is
the preferred external interface because it more closely matches Ember's
public prompt-first API.

## Configuration

**Interactive setup** (recommended):

```bash
keiro setup
```

This validates your API key against the gateway and saves credentials to
`~/.keiro/credentials`.

**Environment variables**:

```bash
export KEIRO_API_KEY="your-api-key"
export KEIRO_BASE_URL="http://54.202.103.124:8080"  # optional
```

**Explicit arguments**:

```python
from keiro import ModelsAPI

models = ModelsAPI(api_key="your-key", base_url="http://54.202.103.124:8080")
print(models("eb1-preview", "Hello"))
```

Precedence: explicit arguments > environment variables > credentials file.
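That precedence chain can be sketched generically. The path and environment variable name come from the docs above, but the resolver itself is illustrative, not Keiro's code, and it assumes a plain-text key file for simplicity:

```python
import os
from pathlib import Path

def resolve_api_key(explicit=None, env=os.environ,
                    credentials_path=Path.home() / ".keiro" / "credentials"):
    # 1. Explicit argument wins.
    if explicit:
        return explicit
    # 2. Then the KEIRO_API_KEY environment variable.
    if env.get("KEIRO_API_KEY"):
        return env["KEIRO_API_KEY"]
    # 3. Finally the credentials file, if present
    #    (assumed here to contain just the key).
    if credentials_path.is_file():
        return credentials_path.read_text().strip()
    return None

print(resolve_api_key(explicit="arg-key", env={"KEIRO_API_KEY": "env-key"}))
# → arg-key (explicit argument shadows the environment)
```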

## Requirements

- Python 3.11+
- No GPU required (inference runs on Keiro's hosted infrastructure)
