Metadata-Version: 2.4
Name: agent-framework-lib
Version: 0.8.4.post2
Summary: A comprehensive Python framework for building and serving conversational AI agents with FastAPI
Author-email: Sebastian Pavel <sebastian@cinco.ai>, Elliott Girard <elliott.girard@icloud.com>
Maintainer-email: Sebastian Pavel <sebastian@cinco.ai>
License: MIT
Project-URL: Homepage, https://github.com/Cinco-AI/AgentFramework
Project-URL: Repository, https://github.com/Cinco-AI/AgentFramework.git
Project-URL: Issues, https://github.com/Cinco-AI/AgentFramework/issues
Project-URL: Documentation, https://github.com/Cinco-AI/AgentFramework/blob/main/README.md
Project-URL: Changelog, https://github.com/Cinco-AI/AgentFramework/blob/main/docs/CHANGELOG.md
Project-URL: Bug Tracker, https://github.com/Cinco-AI/AgentFramework/issues
Project-URL: Source Code, https://github.com/Cinco-AI/AgentFramework
Keywords: ai,agents,fastapi,llamaindex,framework,conversational-ai,multi-agent,llm,openai,gemini,chatbot,session-management,framework-agnostic
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Communications :: Chat
Classifier: Topic :: Internet :: WWW/HTTP :: HTTP Servers
Classifier: Framework :: FastAPI
Classifier: Environment :: Web Environment
Classifier: Typing :: Typed
Requires-Python: <3.14,>=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: aiofiles>=24.1.0
Requires-Dist: fastapi>=0.115.12
Requires-Dist: uvicorn>=0.34.2
Requires-Dist: fastmcp>=2.2.7
Requires-Dist: mcp-python-interpreter
Requires-Dist: pyyaml>=6.0.2
Requires-Dist: pydantic>=2.0.0
Requires-Dist: opentelemetry-sdk>=1.33.1
Requires-Dist: opentelemetry-api>=1.33.1
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.33.1
Requires-Dist: pymongo>=4.10.1
Requires-Dist: motor>=3.6.0
Requires-Dist: black>=25.1.0
Requires-Dist: markitdown[all]>=0.1.2
Requires-Dist: psutil>=7.0.0
Requires-Dist: weasyprint>=60.0
Requires-Dist: markdown>=3.5
Requires-Dist: playwright>=1.56.0
Requires-Dist: elasticsearch<9.0.0,>=8.11.0
Requires-Dist: ddgs>=9.9.3
Requires-Dist: httpx>=0.28.1
Requires-Dist: beautifulsoup4>=4.12.0
Requires-Dist: llama-index>=0.14.16
Requires-Dist: llama-index-core>=0.14.16
Requires-Dist: llama-index-workflows>=2.16.0
Requires-Dist: llama-index-llms-openai>=0.4.0
Requires-Dist: llama-index-llms-anthropic>=0.11.0
Requires-Dist: llama-index-llms-google-genai>=0.1.0
Requires-Dist: graphiti-core>=0.24.3
Requires-Dist: tiktoken>=0.7.0
Requires-Dist: falkordb>=1.0.0
Requires-Dist: grpcio-status>=1.71.2
Requires-Dist: nodeenv>=1.8.0
Requires-Dist: asyncpg>=0.31.0
Requires-Dist: drawpyo>=0.2.5
Requires-Dist: openpyxl>=3.1.5
Requires-Dist: python-pptx>=1.0.0
Requires-Dist: python-docx>=1.1.0
Requires-Dist: jsonschema>=4.20.0
Requires-Dist: pymupdf>=1.27.2.2
Provides-Extra: llamaindex
Requires-Dist: llama-index>=0.14.16; extra == "llamaindex"
Requires-Dist: llama-index-core>=0.14.16; extra == "llamaindex"
Requires-Dist: llama-index-workflows>=2.16.0; extra == "llamaindex"
Requires-Dist: llama-index-llms-openai>=0.4.0; extra == "llamaindex"
Requires-Dist: llama-index-llms-google-genai>=0.1.0; extra == "llamaindex"
Requires-Dist: llama-index-llms-anthropic>=0.11.0; extra == "llamaindex"
Provides-Extra: mcp
Requires-Dist: llama-index-tools-mcp>=0.4.0; extra == "mcp"
Provides-Extra: websearch
Requires-Dist: ddgs>=8.0.0; extra == "websearch"
Provides-Extra: microsoft
Provides-Extra: excel
Requires-Dist: openpyxl>=3.1.0; extra == "excel"
Provides-Extra: drawio
Requires-Dist: drawpyo>=0.1.0; extra == "drawio"
Provides-Extra: gitnexus
Requires-Dist: httpx>=0.27.0; extra == "gitnexus"
Provides-Extra: dev
Requires-Dist: pytest>=8.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=6.2.1; extra == "dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "dev"
Requires-Dist: pytest-benchmark>=4.0.0; extra == "dev"
Requires-Dist: pytest-xdist>=3.3.0; extra == "dev"
Requires-Dist: black>=25.1.0; extra == "dev"
Requires-Dist: flake8>=6.0.0; extra == "dev"
Requires-Dist: mypy>=1.5.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: pre-commit>=3.0.0; extra == "dev"
Requires-Dist: aiohttp>=3.12.13; extra == "dev"
Requires-Dist: httpx>=0.28.1; extra == "dev"
Requires-Dist: coverage>=7.0.0; extra == "dev"
Provides-Extra: mongodb
Requires-Dist: pymongo>=4.10.1; extra == "mongodb"
Requires-Dist: motor>=3.6.0; extra == "mongodb"
Provides-Extra: elasticsearch
Requires-Dist: elasticsearch>=8.11.0; extra == "elasticsearch"
Provides-Extra: s3
Requires-Dist: boto3>=1.34.0; extra == "s3"
Requires-Dist: botocore>=1.34.0; extra == "s3"
Provides-Extra: minio
Requires-Dist: minio>=7.2.0; extra == "minio"
Provides-Extra: gcp
Requires-Dist: google-cloud-storage>=2.14.0; extra == "gcp"
Provides-Extra: multimodal
Requires-Dist: pillow>=10.0.0; extra == "multimodal"
Requires-Dist: opencv-python>=4.8.0; extra == "multimodal"
Requires-Dist: pytesseract>=0.3.10; extra == "multimodal"
Provides-Extra: memory
Requires-Dist: memori>=0.1.0; extra == "memory"
Requires-Dist: graphiti-core>=0.3.0; extra == "memory"
Provides-Extra: memori
Requires-Dist: memori>=0.1.0; extra == "memori"
Provides-Extra: graphiti
Requires-Dist: graphiti-core>=0.24.3; extra == "graphiti"
Provides-Extra: graphiti-falkordb
Requires-Dist: graphiti-core[falkordb]>=0.24.3; extra == "graphiti-falkordb"
Provides-Extra: graphiti-neo4j
Requires-Dist: graphiti-core>=0.24.3; extra == "graphiti-neo4j"
Requires-Dist: neo4j>=5.0.0; extra == "graphiti-neo4j"
Provides-Extra: graphiti-all
Requires-Dist: graphiti-core[falkordb]>=0.24.3; extra == "graphiti-all"
Requires-Dist: neo4j>=5.0.0; extra == "graphiti-all"
Provides-Extra: postgresql
Requires-Dist: asyncpg>=0.29.0; extra == "postgresql"
Provides-Extra: observability
Requires-Dist: opentelemetry-sdk>=1.33.1; extra == "observability"
Requires-Dist: opentelemetry-api>=1.33.1; extra == "observability"
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.33.1; extra == "observability"
Requires-Dist: traceloop-sdk>=0.30.0; extra == "observability"
Provides-Extra: monitoring
Requires-Dist: psutil>=5.9.0; extra == "monitoring"
Provides-Extra: all
Requires-Dist: agent-framework-lib[dev,drawio,elasticsearch,excel,gcp,graphiti-all,llamaindex,mcp,memory,microsoft,minio,mongodb,monitoring,multimodal,observability,postgresql,s3,websearch]; extra == "all"
Dynamic: license-file

# Agent Framework Library

[![PyPI version](https://badge.fury.io/py/agent-framework-lib.svg)](https://pypi.org/project/agent-framework-lib/)
[![Tests](https://github.com/Cinco-AI/AgentFramework/workflows/Tests%20%26%20Coverage/badge.svg)](https://github.com/Cinco-AI/AgentFramework/actions)
[![Coverage](https://codecov.io/gh/Cinco-AI/AgentFramework/branch/main/graph/badge.svg)](https://codecov.io/gh/Cinco-AI/AgentFramework)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Documentation](https://img.shields.io/badge/docs-mkdocs-blue.svg)](https://cinco-ai.github.io/AgentFramework/)

A comprehensive Python framework for building and serving conversational AI agents with FastAPI. Create production-ready AI agents in minutes with automatic session management, streaming responses, file storage, and easy MCP integration.

**Key Features:**
- 🚀 **Quick Setup** - Create agents in 10-15 minutes
- 🔌 **Easy MCP Integration** - Connect to external tools effortlessly
- 🎯 **Skills System** - Markdown-based, on-demand capability loading for token optimization
- 🔄 **Multi-Provider Support** - OpenAI, Anthropic, Gemini
- 🎯 **Smart Model Routing** - Auto mode selects the best model per query complexity
- 💾 **Session Management** - Automatic conversation persistence
- 📁 **File Storage** - Local, S3, MinIO, GCP support
- 🤝 **A2A Protocol** - [Agent-to-Agent](docs/A2A_GUIDE.md) communication via JSON-RPC
- 📊 **Observability** - Metrics, tracing, and logging via OpenTelemetry
- 🐘 **PostgreSQL** - Used by A2A Task Store and Memory Provider

## Installation

**Python:** `>=3.10,<3.14`

```bash
# Install with LlamaIndex support (recommended)
uv add agent-framework-lib[llamaindex]

# Install with MCP support
uv add agent-framework-lib[llamaindex,mcp]

# Install with all features
uv add agent-framework-lib[all]
```

**Available extras:** `llamaindex`, `mcp`, `gitnexus`, `mongodb`, `elasticsearch`, `postgresql`, `s3`, `minio`, `gcp`, `multimodal`, `memory`, `memori`, `graphiti`, `graphiti-falkordb`, `graphiti-neo4j`, `graphiti-all`, `observability`, `monitoring`, `websearch`, `dev`, `all`

**Optional: System Dependencies**

The framework **automatically detects and configures** system libraries. Manual installation is only needed if you encounter issues:

**For PDF Generation (WeasyPrint):**
```bash
# macOS
brew install pango gdk-pixbuf libffi cairo

# Ubuntu/Debian
sudo apt-get install libpango-1.0-0 libpangoft2-1.0-0 libgdk-pixbuf2.0-0 libffi-dev libcairo2

# Fedora/RHEL
sudo dnf install pango gdk-pixbuf2 libffi-devel cairo
```

**For Chart/Mermaid Image Generation (Playwright):**
```bash
# Install Playwright and browser
uv add playwright
playwright install chromium
```

**For MCP Python Server (Deno):**
```bash
# macOS/Linux
curl -fsSL https://deno.land/install.sh | sh

# Windows (PowerShell)
irm https://deno.land/install.ps1 | iex
```

### Post-Installation Script (Recommended)

The framework includes a CLI script that automatically installs all optional dependencies (Playwright browsers and Deno runtime):

```bash
# Run after installing the package
agent-framework-post-install
```

This script:
- ✅ Installs Playwright Chromium browser (for charts, mermaid diagrams, tables)
- ✅ Installs Deno runtime (for MCP servers like `mcp-run-python`)
- ✅ Works on Windows, macOS, and Linux
- ✅ Detects if dependencies are already installed (fast path)

**Note:** The framework also attempts lazy auto-installation when tools are first used, but running the post-install script ensures everything is ready upfront.

The framework handles library path configuration automatically on startup.

## 🤖 Framework Helper Agent

The framework includes a built-in AI assistant that helps you create agents! Access it at `/helper` when running any agent server.

**Features:**
- 🧠 Deep knowledge of framework documentation, examples, and source code
- 🔍 **GitNexus-powered code intelligence** - 12,937 symbols, 36,668 relationships, 300 execution flows
- 💡 Code generation assistance
- 📚 Indexed knowledge base (30+ files)
- 🗄️ Persistent knowledge graph (FalkorDB) - survives server restarts
- 🔎 Hybrid search (semantic + structural + text fallback)

**Access:** `http://localhost:8000/helper`

### GitNexus Setup (Recommended)

For the best experience, install and run GitNexus to enable semantic code search:

```bash
# 1. Install and index the codebase
./scripts/setup_gitnexus.sh

# 2. Start the GitNexus server
./scripts/start_gitnexus_server.sh

# 3. Verify it's running
curl http://localhost:4747/health
```

The helper agent automatically uses GitNexus when it is available and falls back gracefully to text search when it is not running.

**What GitNexus enables:**
- Semantic code search across the entire framework
- Full symbol context (callers, callees, imports, dependencies)
- Execution flow tracing (300 end-to-end code paths)
- Impact analysis and relationship queries

📖 **[User Guide](docs/GITNEXUS_USER_GUIDE.md)** - Complete guide for using GitNexus features with the Helper Agent

**Optional dependencies:**
```bash
# Install GitNexus support
uv add agent-framework-lib[gitnexus]
```

The helper agent indexes:
- All documentation (`docs/*.md`)
- All examples (`examples/*.py`)
- All builtin skills (`agent_framework/skills/builtin/skills/*/SKILL.md`)
- Core framework source (tools, storage, memory, session management)

**Re-indexing:** If you update documentation or examples, trigger a re-index:
```bash
# Re-index knowledge base
curl -X POST http://localhost:8000/helper/reindex

# Re-index GitNexus (after code changes)
npx gitnexus analyze --embeddings
```

**Model Configuration:**

By default, the helper agent uses Claude (if `ANTHROPIC_API_KEY` is set) or GPT-5.4 (if `OPENAI_API_KEY` is set). You can override this with:

```env
# Force a specific model (useful if your Anthropic key has reached its limit)
HELPER_AGENT_MODEL=gpt-5.4
```

**Example questions:**
- "How do I create an agent with memory?"
- "Show me the execution flow for skill loading"
- "What calls the MemoryProvider class?"
- "Find all code related to authentication"
- "What's the difference between Memori and Graphiti?"
- "How do I configure S3 storage?"
- "Search the web for LlamaIndex best practices"

## 🐳 Docker Development Environment

For local development, use Docker Compose to run all external services (Elasticsearch, MongoDB, PostgreSQL, FalkorDB, MinIO):

```bash
# Start all services
docker-compose --profile all up -d

# Copy environment template
cp .env.docker .env
# Edit .env to add your LLM API keys

# Stop services
docker-compose down
```

Use profiles to start only what you need:
```bash
docker-compose --profile storage up -d  # Elasticsearch, MongoDB, MinIO
docker-compose --profile memory up -d   # PostgreSQL, FalkorDB
```

**Full documentation:** See [Docker Setup Guide](docs/DOCKER_SETUP.md) for service details, ports, credentials, and troubleshooting.

## 🚀 Getting Started

### Create Your First Agent

Here's a complete, working agent with LlamaIndex:

```python
from typing import Callable, List
from agent_framework import LlamaIndexAgent, create_basic_agent_server

class MyAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="my_calculator_agent",
            name="Calculator Agent",
            description="A helpful calculator assistant that can perform basic math operations."
        )
    
    def get_agent_prompt(self) -> str:
        """Define your agent's behavior and personality."""
        return "You are a helpful calculator assistant."
  
    def get_agent_tools(self) -> List[Callable]:
        """Define the tools your agent can use.
        
        Tools are automatically converted to LlamaIndex FunctionTool instances.
        The function name becomes the tool name, and the docstring becomes the description.
        """
        def add(a: float, b: float) -> float:
            """Add two numbers together."""
            return a + b
        
        def multiply(a: float, b: float) -> float:
            """Multiply two numbers together."""
            return a * b
        
        # Just return the functions - automatic conversion to FunctionTool
        return [add, multiply]

# Start server - includes streaming, session management, web UI
create_basic_agent_server(MyAgent, port=8000)
```

**Required Methods:**
- `__init__()` - Call `super().__init__(agent_id, name, description)` with required identity info
- `get_agent_prompt()` - Return system prompt string
- `get_agent_tools()` - Return list of tools (can be empty)

**Optional Methods (have default implementations):**
- `create_fresh_context()` - Create new LlamaIndex Context (default provided)
- `serialize_context(ctx)` - Serialize context for persistence (default provided)
- `deserialize_context(state)` - Deserialize context from state (default provided)
- `initialize_agent()` - Customize agent creation (default: FunctionAgent)
- `configure_session()` - Add session setup logic

**That's it!** The framework provides default implementations for context management (state persistence), so you only need to implement the three core methods above.
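
For example, a minimal sketch of overriding one optional hook, `configure_session`, which runs when a session starts (the `session_configuration` keys mirror the ones used later in this README):

```python
from agent_framework import LlamaIndexAgent

class MyAgent(LlamaIndexAgent):
    # __init__, get_agent_prompt, and get_agent_tools as shown above

    async def configure_session(self, session_configuration):
        """Per-session setup: capture identity before messages are handled."""
        self._user_id = session_configuration.get("user_id", "default_user")
        self._session_id = session_configuration.get("session_id")
        await super().configure_session(session_configuration)
```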

**Run it:**
```bash
# Set your API key
export OPENAI_API_KEY=sk-your-key-here

# Run the agent
uv run python my_agent.py

# Open http://localhost:8000/ui
```

## ⚙️ Configure Your Agent

### Environment Setup

Create a `.env` file:

```env
# Required: At least one API key
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
GEMINI_API_KEY=your-gemini-key

# Model Configuration
DEFAULT_MODEL=gpt-5.4-mini

# Multi-Model Routing (Auto Mode)
DEFAULT_MODEL_MODE=auto                    # "auto" or specific model name
AUTO_CLASSIFIER_MODEL=gpt-5.4-nano           # Model for complexity classification
PREFERRED_LIGHT_MODELS=gpt-5.4-nano,claude-haiku-4-5-20251001
PREFERRED_STANDARD_MODELS=gpt-5.4-mini
PREFERRED_ADVANCED_MODELS=gpt-5.2,claude-opus-4-6

# Session Storage (optional)
SESSION_STORAGE_TYPE=memory  # or "mongodb" or "elasticsearch"
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=agent_sessions

# File Storage (optional)
LOCAL_STORAGE_PATH=./file_storage
AWS_S3_BUCKET=my-bucket
S3_AS_DEFAULT=false
```

### Remote Configuration (Elasticsearch-Managed Agents)

For production deployments, you can configure agents to be managed entirely via Elasticsearch, allowing ops teams to modify prompts and models at runtime without code deployments.

**Enable remote configuration:**

```python
from agent_framework import LlamaIndexAgent

class OpsManagedAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="ops_managed_agent",
            name="Ops Managed Agent",
            description="An agent configured via Elasticsearch."
        )
    
    @classmethod
    def get_use_remote_config(cls) -> bool:
        """Enable Elasticsearch-only configuration."""
        return True
    
    def get_agent_prompt(self) -> str:
        # Fallback prompt if ES config not available
        return "You are a helpful assistant."
    
    def get_agent_tools(self) -> list:
        return []
```

**Behavior:**

| `use_remote_config` | Server Startup | Session Init |
|---------------------|----------------|--------------|
| `False` (default) | Pushes hardcoded config to ES if different | Merges ES config with hardcoded |
| `True` | Skips pushing to ES | Reads ES config only (no merge) |

**When to use:**
- `use_remote_config=False` (default): Code-managed agents where developers control the config
- `use_remote_config=True`: Ops-managed agents where configuration is modified via ES/Kibana

**Fallback:** If `use_remote_config=True` but no ES config exists, the system falls back to hardcoded config and pushes it to ES with a warning.

## 🎯 Multi-Model Selection

The framework includes intelligent model routing that automatically selects the best model based on query complexity.

### Auto Mode (Default)

When `DEFAULT_MODEL_MODE=auto`, the system analyzes each query and routes it to the appropriate tier:

| Tier | Icon | Use Case | Example Models |
|------|------|----------|----------------|
| **Light** | 💨 | Simple queries, greetings, basic info | gpt-5.4-nano, claude-haiku-4-5 |
| **Standard** | ⚖️ | Typical questions, explanations | gpt-5.4-mini, claude-sonnet-4-6 |
| **Advanced** | 🧠 | Complex analysis, creative tasks | gpt-5.2, claude-opus-4-6 |
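
As a loose illustration (actual routing depends on the classifier model's judgment), queries might land in tiers like this:

```python
# Illustrative only: example queries and the tier the classifier might select.
routing_examples = {
    "Hi there!": "light",                                    # greeting
    "Explain how HTTP caching works": "standard",            # typical question
    "Design a migration plan for our monolith": "advanced",  # complex analysis
}
```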

**Benefits:**
- 💰 **Cost optimization** - Use cheaper models for simple queries
- ⚡ **Speed** - Faster responses for trivial messages
- 🎯 **Quality** - Powerful models for complex tasks

### Manual Model Selection

Users can also select a specific model from the UI dropdown:
- Models grouped by tier with availability indicators (✓/✗)
- Preference persisted in localStorage
- Real-time routing indicator shows selected model

### Configuration

```env
# Default mode when no user preference
DEFAULT_MODEL_MODE=auto

# Model used for complexity classification (should be fast and cheap)
AUTO_CLASSIFIER_MODEL=gpt-5.4-nano

# Preferred models per tier (comma-separated, in order of preference)
PREFERRED_LIGHT_MODELS=gpt-5.4-nano,claude-haiku-4-5-20251001,gemini-2.5-flash-lite
PREFERRED_STANDARD_MODELS=gpt-5.4-mini,gemini-2.5-flash
PREFERRED_ADVANCED_MODELS=gpt-5.2,claude-opus-4-6,gemini-2.5-pro
```

### API Endpoint

```bash
# Get available models
curl http://localhost:8000/api/models

# Response
{
  "models_by_tier": {
    "light": [{"id": "gpt-5.4-nano", "provider": "openai", "available": true}, ...],
    "standard": [...],
    "advanced": [...]
  },
  "default_mode": "auto",
  "classifier_model": "gpt-5.4-nano"
}
```

### Backward Compatibility

Agents with hardcoded models continue to work without changes:

```python
class MyAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(...)
        self._default_model = "gpt-5.2"  # This model will always be used
```

### LlamaIndex Agent Configuration

Control model behavior in your agent:

```python
class MyAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="my_agent",
            name="My Agent",
            description="A helpful assistant."
        )
        # Default model config (can be overridden per session)
        self.default_temperature = 0.7
        self.default_model = "gpt-5.4-mini"
```

**Runtime Configuration:**

Users can override settings per session via the API or web UI:
- Model selection (gpt-5.2, claude-sonnet-4-6, gemini-2.5-pro)
- Temperature (0.0 - 1.0)
- Max tokens
- System prompt override
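
As a sketch, a per-session configuration might look like the following; `user_id` and `session_id` appear elsewhere in this README, while `model`, `temperature`, and `max_tokens` are assumed field names for the overrides above:

```python
# Hedged sketch of per-session overrides; field names other than "user_id"
# and "session_id" are assumptions about the session configuration schema.
session_config = {
    "user_id": "user123",
    "session_id": "session456",
    "model": "claude-sonnet-4-6",  # override the default model
    "temperature": 0.2,            # 0.0 - 1.0
    "max_tokens": 2048,
}
```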

## 🔧 Create Custom Tools

Custom tools extend your agent's capabilities. The tool name and docstring are crucial - they tell the agent when and how to use the tool.

### Basic Custom Tool

```python
def get_weather(city: str) -> str:
    """Get the current weather for a specific city.
    
    Args:
        city: The name of the city to get weather for
        
    Returns:
        A description of the current weather
    """
    # Your implementation here
    return f"The weather in {city} is sunny, 22°C"

# Add to your agent
class MyAgent(LlamaIndexAgent):
    def get_agent_tools(self):
        # Just return the function - automatic conversion to FunctionTool
        # Function name = tool name, docstring = tool description
        return [get_weather]
```

**Important:**
- **Function name** should be explicit and descriptive (e.g., `get_weather`, not `weather`)
- **Docstring** is added as the tool description - the agent uses this to understand when to call the tool
- **Type hints** help the agent understand parameters
- **Args/Returns documentation** provides additional context

### Custom Tool with Dependencies

For tools that need file storage or other dependencies, use closures to capture context:

```python
from agent_framework import LlamaIndexAgent
from agent_framework.storage.file_system_management import FileStorageFactory

class MyAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="my_agent",
            name="My Agent",
            description="A helpful assistant with custom tools."
        )
        self.file_storage = None
    
    async def _ensure_file_storage(self):
        if self.file_storage is None:
            self.file_storage = await FileStorageFactory.create_storage_manager()
    
    async def configure_session(self, session_configuration):
        await self._ensure_file_storage()
        self._user_id = session_configuration.get('user_id', 'default_user')
        self._session_id = session_configuration.get('session_id')
        await super().configure_session(session_configuration)
    
    def get_agent_tools(self):
        storage = self.file_storage
        user_id = self._user_id
        session_id = self._session_id
        
        async def store_result(param1: str, param2: int) -> str:
            """Process data and store results.
            
            Args:
                param1: Description of first parameter
                param2: Description of second parameter
                
            Returns:
                Result description
            """
            result = f"Processed {param1} with {param2}"
            file_id = await storage.store_file(
                user_id=user_id,
                session_id=session_id,
                filename="result.txt",
                content=result.encode()
            )
            return f"Result stored with ID: {file_id}"
        
        return [store_result]
```

### Tool Naming Best Practices

```python
# ✅ GOOD - Explicit and clear
def calculate_mortgage_payment(principal: float, rate: float, years: int) -> float:
    """Calculate monthly mortgage payment."""
    pass

def send_email_notification(recipient: str, subject: str, body: str) -> bool:
    """Send an email notification to a recipient."""
    pass

# ❌ BAD - Too vague
def calculate(x: float, y: float) -> float:
    """Do calculation."""
    pass

def send(data: str) -> bool:
    """Send something."""
    pass
```

## 🔌 Adding MCP Servers

MCP (Model Context Protocol) allows your agent to connect to external tools and services.

### Basic MCP Setup

```python
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

class MyAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="my_agent",
            name="MCP Agent",
            description="An assistant with access to external tools via MCP servers."
        )
        self.mcp_tools = []
        self._mcp_initialized = False
    
    async def _initialize_mcp_tools(self):
        """Load tools from MCP servers."""
        if self._mcp_initialized:
            return
        
        # Configure your MCP server
        mcp_configs = [
            {
                "command": "uvx",
                "args": ["mcp-server-filesystem"],
                "env": {"FILESYSTEM_ROOT": "/path/to/workspace"}
            }
        ]
        
        for config in mcp_configs:
            client = BasicMCPClient(
                config["command"],
                args=config["args"],
                env=config.get("env", {})
            )
            
            # Load tools from the MCP server
            mcp_tool_spec = McpToolSpec(client=client)
            tools = await mcp_tool_spec.to_tool_list_async()
            self.mcp_tools.extend(tools)
        
        self._mcp_initialized = True
    
    async def initialize_agent(self, model_name, system_prompt, tools, **kwargs):
        # Load MCP tools before initializing agent
        await self._initialize_mcp_tools()
        
        # Combine with other tools
        all_tools = self.get_agent_tools()
        await super().initialize_agent(model_name, system_prompt, all_tools, **kwargs)
    
    def get_agent_tools(self):
        # Return built-in tools + MCP tools
        return self.mcp_tools
```

### Multiple MCP Servers

```python
import os

def _get_mcp_configs(self):
    """Configure multiple MCP servers."""
    return [
        {
            "name": "filesystem",
            "command": "uvx",
            "args": ["mcp-server-filesystem"],
            "env": {"FILESYSTEM_ROOT": "/workspace"}
        },
        {
            "name": "github",
            "command": "uvx",
            "args": ["mcp-server-github"],
            "env": {
                "GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")
            }
        },
        {
            "name": "python",
            "command": "uvx",
            "args": ["mcp-run-python", "stdio"]
        }
    ]
```

### Popular MCP Servers

```bash
# Filesystem operations
uvx mcp-server-filesystem

# GitHub integration
uvx mcp-server-github

# Python code execution
uvx mcp-run-python

# Database access
uvx mcp-neo4j-cypher
uvx mcp-server-postgres
```

**Installation:**
```bash
# Install with MCP support
uv add agent-framework-lib[llamaindex,mcp]

# Or add MCP to existing installation
uv add agent-framework-lib[mcp]

# MCP servers are run via uvx (no separate install needed)
```

**Using Deno-based MCP servers:**

If you need to use Deno-based MCP servers (like TypeScript MCP servers), the framework provides a helper function to ensure Deno works correctly even if it's not in your PATH:

```python
from agent_framework import get_deno_command

# Configure a Deno-based MCP server
mcp_config = {
    "command": get_deno_command(),  # Automatically uses correct Deno path
    "args": ["run", "-N", "jsr:@pydantic/mcp-run-python", "stdio"]
}
```

This helper function:
- ✅ Automatically finds Deno even if not in system PATH
- ✅ Works seamlessly after `agent-framework-post-install`
- ✅ Returns absolute path to Deno binary when needed

## 🧠 Memory Module

Add long-term semantic memory to your agents, enabling them to remember information across conversations and provide personalized responses.

### Quick Start

```python
from agent_framework import LlamaIndexAgent
from agent_framework.memory import MemoryConfig

class MyMemoryAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="memory_agent",
            name="Memory Agent",
            description="An agent with long-term memory."
        )
    
    def get_agent_prompt(self) -> str:
        return "You are a helpful assistant that remembers user preferences."
    
    def get_agent_tools(self) -> list:
        return []
    
    def get_memory_config(self):
        """Enable memory - just override this method!"""
        return MemoryConfig.memori_simple(
            database_url="sqlite:///memory.db"
        )
```

### Memory Providers

| Provider | Backend | Best For |
|----------|---------|----------|
| **Memori** | SQLite, PostgreSQL, MySQL | Fast queries, simple setup |
| **Graphiti** | FalkorDB, Neo4j | Complex relationships, temporal queries |
| **Hybrid** | Both | Best of both worlds |

### Configuration Options

```python
# Memori with SQLite (simplest)
MemoryConfig.memori_simple(database_url="sqlite:///memory.db")

# Graphiti with FalkorDB
MemoryConfig.graphiti_simple(use_falkordb=True)

# Hybrid mode (both providers)
MemoryConfig.hybrid(
    memori_database_url="sqlite:///memory.db",
    graphiti_use_falkordb=True
)
```

### Memory Modes

- **Passive Injection**: Relevant memories automatically injected into prompts
- **Active Tools**: Agent can explicitly `recall_memory()`, `store_memory()`, `forget_memory()`
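
A hedged sketch of nudging the active-tool mode from the system prompt (the memory tool names come from this README; whether they are exposed depends on your memory configuration):

```python
from agent_framework import LlamaIndexAgent
from agent_framework.memory import MemoryConfig

class RemindersAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="reminders_agent",
            name="Reminders Agent",
            description="Remembers durable user facts across sessions.",
        )

    def get_agent_prompt(self) -> str:
        # Encourage the agent to use the active memory tools when available.
        return (
            "You are a helpful assistant. Call store_memory() to save durable "
            "facts, recall_memory() before answering personal questions, and "
            "forget_memory() when the user asks you to delete something."
        )

    def get_agent_tools(self) -> list:
        return []

    def get_memory_config(self):
        return MemoryConfig.memori_simple(database_url="sqlite:///memory.db")
```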

### Installation

```bash
# All memory support
uv add agent-framework-lib[memory]

# Or individual providers
uv add agent-framework-lib[memori]
uv add agent-framework-lib[graphiti]
```

**More info:** See [Memory Installation Guide](docs/MEMORY_INSTALLATION.md) and [Creating Agents Guide](docs/CREATING_AGENTS.md#adding-memory-to-your-agent)

## 🎯 Skills System

The Skills System provides modular, on-demand capability loading that reduces token consumption by ~80%. Instead of loading all instructions into every system prompt, skills deliver detailed instructions only when needed.

Skills are exclusively defined as `SKILL.md` markdown files with YAML frontmatter, loaded by `MarkdownSkillLoader`. Each skill uses `ShellTool` to execute standalone CLI scripts and `WebFetchTool` for web content retrieval.

### How It Works

```
BEFORE: System Prompt = Base (~500) + Rich Content (~3000) = ~3500 tokens/message
AFTER:  System Prompt = Base (~500) + Skills Discovery (~200) = ~700 tokens/message
        + On-demand skill loading (~500 tokens, one-time per skill)
```

### Quick Start

Skills are automatically available in all agents via `BaseAgent`. No need to explicitly inherit from `SkillsMixin`:

```python
from agent_framework import LlamaIndexAgent

class MySkillsAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__(
            agent_id="skills_agent",
            name="Skills Agent",
            description="An agent with on-demand capabilities."
        )
        # Built-in markdown skills are automatically registered by BaseAgent.__init__
    
    def get_agent_prompt(self) -> str:
        # Skills discovery prompt is automatically appended by BaseAgent
        return "You are a helpful assistant."
    
    def get_agent_tools(self) -> list:
        # Skill tools are auto-loaded - no need to add them manually!
        return []  # Only return custom tools specific to your agent
```

### Built-in Skills (21 total)

| Category | Skills |
|----------|--------|
| **Visualization** | chart, mermaid, table |
| **Document** | file, unified_pdf, file_access, excel, drawio, **powerpoint**, **word** |
| **Web** | web_news_search |
| **Multimodal** | multimodal, **image_gen** |
| **UI** | form, optionsblock, image_display, **email_template**, **skill_creator** |
| **Data** | **csv**, **data_format** |
| **Code** | **code_format** |

**New in v0.9.0:** 8 additional skills added (in **bold**):
- **powerpoint** — Generate .pptx presentations with slides, layouts, and themes
- **word** — Create .docx documents with formatted text, tables, and images
- **image_gen** — AI image generation via DALL-E 3 and DALL-E 2
- **csv** — Create, read, and transform CSV files
- **data_format** — Convert JSON ↔ YAML and validate schemas
- **code_format** — Format Python, JavaScript, JSON, YAML code
- **email_template** — Generate responsive HTML email templates
- **skill_creator** — Guide for creating custom skills via API or code

Each skill is a `SKILL.md` file in `agent_framework/skills/builtin/skills/` with an associated CLI script executed via `ShellTool`.
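
As a rough sketch, a minimal `SKILL.md` might look like the following; the frontmatter keys here are assumptions, so check `MarkdownSkillLoader` for the actual schema:

```markdown
---
# Hypothetical frontmatter keys; the real schema may differ.
name: my_custom_skill
description: One-line summary shown by list_skills()
---

# My Custom Skill

Step-by-step instructions the agent receives from load_skill("my_custom_skill"),
including the exact CLI command to execute via shell_exec.
```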

### Agent Workflow

1. Agent receives user request: "Create a bar chart"
2. Agent calls `list_skills()` → sees available skills
3. Agent calls `load_skill("chart")` → gets chart instructions from SKILL.md
4. Agent constructs the shell command as described in the instructions
5. Agent executes via `shell_exec` → script generates the chart
6. Optionally calls `unload_skill("chart")` when done

**More info:** See [Custom Skills Guide](docs/CUSTOM_SKILLS_GUIDE.md), [Creating Agents Guide](docs/CREATING_AGENTS.md#skills-integration) and [skills_demo_agent.py](examples/skills_demo_agent.py)

## 📝 Rich Content Capabilities (Automatic)

All agents automatically support rich content generation including:
- 📊 **Mermaid diagrams** (version 10.x syntax)
- 📈 **Chart.js charts** (bar, line, pie, doughnut, polarArea, radar, scatter, bubble)
- 📋 **Interactive forms** (formDefinition JSON)
- 🔘 **Clickable option buttons** (optionsblock)
- 📑 **Formatted tables** (tabledata)

**This is automatic!** The framework injects rich content instructions into all agent system prompts by default. You don't need to add anything to your `get_agent_prompt()`.

### Disabling Rich Content

If you need to disable automatic rich content injection for a specific agent or session:

**Via Session Configuration (UI or API):**
```python
# When initializing a session
session_config = {
    "user_id": "user123",
    "session_id": "session456",
    "enable_rich_content": False  # Disable rich content
}
```

**Via Web UI:**
Uncheck the "Enable rich content capabilities" checkbox when creating a session.

### Format Examples

**Chart:**
````markdown
```chart
{
  "type": "chartjs",
  "chartConfig": {
    "type": "bar",
    "data": {
      "labels": ["Mon", "Tue", "Wed"],
      "datasets": [{
        "label": "Sales",
        "data": [120, 150, 100]
      }]
    }
  }
}
```
````

**Options Block:**
````markdown
```optionsblock
{
  "question": "What would you like to do?",
  "options": [
    {"text": "Continue", "value": "continue"},
    {"text": "Cancel", "value": "cancel"}
  ]
}
```
````

**Table:**
````markdown
```tabledata
{
  "caption": "Sales Data",
  "headers": ["Month", "Revenue"],
  "rows": [["Jan", "$1000"], ["Feb", "$1200"]]
}
```
````

## 🌐 Web Interface

The framework includes a built-in web UI for testing and interacting with your agent.

**Access:** `http://localhost:8000/ui`

**Features:**
- 💬 Real-time message streaming
- 🎨 Rich format rendering (charts, tables, mermaid diagrams)
- 📁 File upload and management
- ⚙️ Model and parameter configuration
- 💾 Session management
- 📊 Conversation history
- 🎯 Interactive option blocks and forms

**Quick Test:**
```bash
# Start your agent
uv run python my_agent.py

# Open in browser
open http://localhost:8000/ui
```

The UI automatically detects and renders:
- Chart.js visualizations from `chart` blocks
- Mermaid diagrams from `mermaid` blocks
- Tables from `tabledata` blocks
- Interactive forms from `formDefinition` JSON
- Clickable options from `optionsblock`

**API Documentation:** `http://localhost:8000/docs` (Swagger UI)

## 📚 Additional Resources

### Documentation
- **[Installation Guide](docs/installation-guide.md)** - Detailed setup instructions
- **[Configuration Guide](docs/configuration.md)** - Environment and settings configuration
- **[Creating Agents Guide](docs/CREATING_AGENTS.md)** - Guide to building custom agents
- **[Tools and MCP Guide](docs/TOOLS_AND_MCP_GUIDE.md)** - Tools and MCP integration
- **[Memory Installation Guide](docs/MEMORY_INSTALLATION.md)** - Memory module setup
- **[API Reference](docs/api-reference.md)** - Complete API documentation
- **[A2A Guide](docs/A2A_GUIDE.md)** - Agent-to-Agent protocol documentation
- **[Observability Guide](docs/OBSERVABILITY_GUIDE.md)** - Metrics, tracing, and logging
- **[Architecture Diagram](docs/ARCHITECTURE_DIAGRAM.md)** - System architecture overview
- **[File Storage Guide](docs/FILE_STORAGE_GUIDE.md)** - Local, S3, MinIO, GCP storage backends
- **[Multimodal Tools Guide](docs/MULTIMODAL_TOOLS_GUIDE.md)** - Multimodal processing tools
- **[Streaming Events Frontend](docs/STREAMING_EVENTS_FRONTEND.md)** - SSE event format for frontends
- **[Docker Setup Guide](docs/DOCKER_SETUP.md)** - Docker Compose development environment

### Examples
- **[Simple Agent](examples/simple_agent.py)** - Basic calculator agent
- **[File Storage Agent](examples/agent_with_file_storage.py)** - File management
- **[MCP Integration](examples/agent_with_mcp.py)** - MCP integration
- **[Memory Agent](examples/agent_with_memory_simple.py)** - Agent with long-term memory
- **[Multi-Skills Agent](examples/agent_example_multi_skills.py)** - Complete multi-skills agent
- **[Custom Framework Agent](examples/custom_framework_agent.py)** - Custom framework implementation
- **[Skills Demo Agent](examples/skills_demo_agent.py)** - Skills system demonstration

### API Endpoints

**Core:**
- `POST /message` - Send message to agent
- `POST /init` - Initialize session
- `POST /end` - End session
- `GET /sessions` - List sessions

**Files:**
- `POST /files/upload` - Upload file
- `GET /files/{file_id}/download` - Download file
- `GET /files` - List files

**Full API docs:** `http://localhost:8000/docs`
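
For illustration, a minimal client session using `httpx` (already a framework dependency); payload fields beyond `query`, `user_id`, and `session_id` are assumptions about the request schema:

```python
# Hedged sketch of the session lifecycle against a local agent server.
import httpx

BASE = "http://localhost:8000"

with httpx.Client() as client:
    # Initialize a session ("user_id"/"session_id" keys appear in this README)
    client.post(f"{BASE}/init", json={"user_id": "user123", "session_id": "session456"})

    # Send a message and print the reply
    reply = client.post(f"{BASE}/message",
                        json={"query": "Hello!", "session_id": "session456"})
    print(reply.json())

    # End the session
    client.post(f"{BASE}/end", json={"session_id": "session456"})
```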

### Authentication

```env
# API Key Authentication
REQUIRE_AUTH=true
API_KEYS=sk-key-1,sk-key-2
```

```bash
curl -H "Authorization: Bearer sk-key-1" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

---

**Quick Links:**
- 🎨 [Web UI](http://localhost:8000/ui)
- 📖 [API Docs](http://localhost:8000/docs)
- ⚙️ [Config Test](http://localhost:8000/config/models)
