Metadata-Version: 2.4
Name: everstaff
Version: 0.0.1
Summary: AI agents that know when to act and when to ask — autonomous by default, human-supervised when it counts.
Project-URL: Homepage, https://github.com/yuriiiz/everstaff
Project-URL: Repository, https://github.com/yuriiiz/everstaff
Author-email: Yurii <yuriiiz@gmail.com>
License: MIT
Keywords: agent,ai,autonomous,hitl,llm,multi-agent
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.11
Requires-Dist: aiohttp>=3.9
Requires-Dist: apscheduler<4,>=3.10
Requires-Dist: fastapi>=0.100.0
Requires-Dist: litellm>=1.50.0
Requires-Dist: mcp>=1.1.0
Requires-Dist: pydantic>=2.5.0
Requires-Dist: pyjwt[crypto]>=2.8.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: uvicorn[standard]>=0.20.0
Requires-Dist: websockets>=12.0
Provides-Extra: all
Requires-Dist: langfuse>=3.0.0; extra == 'all'
Requires-Dist: lark-oapi<2.0,>=1.0; extra == 'all'
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp>=1.20.0; extra == 'all'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'all'
Provides-Extra: dev
Requires-Dist: mypy>=1.7.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0; extra == 'dev'
Requires-Dist: pytest>=7.4.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Provides-Extra: langfuse
Requires-Dist: langfuse>=3.0.0; extra == 'langfuse'
Provides-Extra: lark
Requires-Dist: lark-oapi<2.0,>=1.0; extra == 'lark'
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'otel'
Requires-Dist: opentelemetry-exporter-otlp>=1.20.0; extra == 'otel'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'otel'
Description-Content-Type: text/markdown

# Everstaff

**AI agents that know when to act and when to ask — autonomous by default, human-supervised when it counts.**

Everstaff is an open-source platform for running AI agents that work around the clock. Agents operate autonomously, but pause and request human approval at critical decisions. Every step is observable, every action is permissioned.

---

## Features

### Core Runtime
- **Multi-LLM support** — OpenAI, Anthropic, Gemini, and 100+ providers via LiteLLM
- **Streaming** — real-time WebSocket streaming with a built-in web UI
- **Session persistence** — resume any session from where it left off
- **Tool permissions** — fine-grained allow/deny rules with wildcard matching
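
Wildcard allow/deny matching can be sketched with the standard library's `fnmatch` — this illustrates the concept only; Everstaff's actual rule syntax and precedence may differ:

```python
# Illustrative sketch of wildcard allow/deny permission matching.
# Rule syntax here is an assumption, not Everstaff's exact format.
from fnmatch import fnmatchcase


def is_allowed(tool_call: str, allow: list[str], deny: list[str]) -> bool:
    """Deny rules take precedence; otherwise the call must match an allow rule."""
    if any(fnmatchcase(tool_call, pattern) for pattern in deny):
        return False
    return any(fnmatchcase(tool_call, pattern) for pattern in allow)


# Example: permit read-only tools and git commands, but block pushes.
allow = ["Read", "Glob", "Bash(git *)"]
deny = ["Bash(git push*)"]

print(is_allowed("Bash(git status)", allow, deny))       # True
print(is_allowed("Bash(git push origin)", allow, deny))  # False
print(is_allowed("Write", allow, deny))                  # False
```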

### Human-in-the-Loop (HITL)
- Agents pause mid-run and request human approval before critical actions
- Approve, reject, or comment directly from the web UI
- Configurable: block on every tool call, only on request, or never
- Async — agents queue HITL requests and wait without blocking other work
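
The non-blocking wait can be pictured as an `asyncio` future the agent parks on while the event loop keeps serving other work. The function and queue names below are hypothetical, not Everstaff's API:

```python
# Minimal sketch of a non-blocking HITL wait using asyncio futures.
# `request_approval` / `resolve` are illustrative names, not Everstaff's API.
import asyncio

pending: dict[str, asyncio.Future] = {}


async def request_approval(request_id: str, action: str) -> bool:
    """Agent side: publish a request and await the human's decision."""
    fut = asyncio.get_running_loop().create_future()
    pending[request_id] = fut
    print(f"[hitl] waiting on approval for: {action}")
    return await fut  # parks this agent only; the loop keeps running


def resolve(request_id: str, approved: bool) -> None:
    """UI side: a human approves or rejects the pending request."""
    pending.pop(request_id).set_result(approved)


async def main() -> None:
    agent = asyncio.create_task(request_approval("req-1", "delete branch"))
    await asyncio.sleep(0)  # other tasks continue while the agent waits
    resolve("req-1", approved=True)
    print("approved:", await agent)


asyncio.run(main())
```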

### Multi-Agent
- Delegate subtasks to specialized child agents
- DAG-based workflow engine with parallel execution and automatic replanning
- Child agent results flow back to the parent agent transparently
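
The scheduling idea behind a DAG engine — each step starts as soon as its dependencies finish, so independent branches run concurrently — can be sketched in a few lines of `asyncio`. This illustrates the concept only, not Everstaff's actual workflow engine:

```python
# Simplified DAG execution: a step runs once all of its dependencies
# complete, so independent branches proceed in parallel.
import asyncio

# step -> steps it depends on (a hypothetical review pipeline)
dag = {
    "fetch": [],
    "lint": ["fetch"],
    "tests": ["fetch"],          # lint and tests run concurrently
    "report": ["lint", "tests"],
}

order: list[str] = []


async def run_step(name: str, done: dict[str, asyncio.Event]) -> None:
    # Wait for every dependency to signal completion.
    await asyncio.gather(*(done[dep].wait() for dep in dag[name]))
    await asyncio.sleep(0.01)    # stand-in for real agent work
    order.append(name)
    done[name].set()


async def run_dag() -> None:
    done = {name: asyncio.Event() for name in dag}
    await asyncio.gather(*(run_step(n, done) for n in dag))


asyncio.run(run_dag())
print(order)  # "fetch" is always first, "report" always last
```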

### Autonomous Daemon
- Schedule agents on cron, webhooks, or custom events
- Agents run unattended and escalate to humans when needed
- Per-trigger HITL channels — each trigger gets its own approval queue
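
A cron trigger would plausibly mirror the webhook trigger shown under "How It Works" — the `schedule` field name below is an assumption; check the docs for the exact schema:

```yaml
# Hypothetical cron trigger — field names beyond `type` are assumed.
autonomy:
  triggers:
    - type: cron
      schedule: "0 9 * * 1-5"   # weekdays at 09:00
```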

### Extensibility
- **Skills** — composable instruction modules agents load on demand
- **Tools** — Python functions exposed to agents with typed schemas
- **Knowledge base** — attach document directories for context injection
- **MCP servers** — connect any Model Context Protocol server

### Self-Hosted
- OIDC authentication with email whitelist
- OpenTelemetry and Langfuse tracing
- S3 or local storage for session data
- Docker image included

---

## Quick Start

```bash
pip install everstaff

mkdir my-agents && cd my-agents
everstaff init          # scaffold config, agent dirs, Dockerfile, .env.example
```

`init` creates:

```
.agent/config.yaml    # LLM models, storage, auth, HITL channels
agents/               # drop agent definitions here
skills/               # custom skills
tools/                # custom Python tools
main.py               # server entry point
.env.example          # API key template
```

Set your API key and start:

```bash
cp .env.example .env
# edit .env: ANTHROPIC_API_KEY=sk-...

python main.py
```

Open [http://localhost:8000](http://localhost:8000) — the web UI is bundled.

### Docker

```bash
docker run -p 8000:8000 \
  -v $(pwd)/.agent:/app/.agent \
  -e ANTHROPIC_API_KEY=sk-... \
  ghcr.io/your-org/everstaff
```

---

## How It Works

Each agent is defined in a YAML file. Drop one into the `agents/` directory and it appears in the UI immediately.

```yaml
agent_name: Code Reviewer
instructions: |
  Review pull requests for bugs, security issues, and style violations.
  When you find a critical issue, request human approval before proceeding.

tools: [Bash, Read, Glob, Grep]

hitl_mode: on_request   # pause and ask when uncertain

autonomy:
  triggers:
    - type: webhook
      path: /hooks/github
```

---

## Documentation

- [Getting Started](docs/getting-started.md)
- [Architecture](docs/architecture.md)
- [API Reference](docs/api-reference.md)
- [Permissions](docs/module-permissions.md)
- [Skills](docs/module-skills.md)
- [Workflow Engine](docs/module-workflow.md)
- [Tracing](docs/module-tracing.md)

---

## Development

```bash
# Install dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Start the dev servers (API and frontend run separately)
uvicorn everstaff.server:app --reload
cd web && npm install && npm run dev
```

---

## License

MIT
