Metadata-Version: 2.4
Name: llm4s
Version: 0.1.11
Summary: K9s-inspired observability control tower for LLM & MCP Servers
Author: yunho
License: MIT
Project-URL: Homepage, https://github.com/IAMUNO/LLM4s
Project-URL: Repository, https://github.com/IAMUNO/LLM4s
Project-URL: Issues, https://github.com/IAMUNO/LLM4s/issues
Keywords: mcp,llm,observability,debugger,claude,textual,tui
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Debuggers
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: fastapi
Requires-Dist: uvicorn
Requires-Dist: aiosqlite
Requires-Dist: pydantic
Requires-Dist: textual
Requires-Dist: psutil
Requires-Dist: platformdirs
Provides-Extra: gpu
Requires-Dist: gputil>=1.4.0; extra == "gpu"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21; extra == "dev"

# LLM4s (MCP-Lens)

[![Python 3.10+](https://img.shields.io/badge/python-3.10%2B-blue.svg)](https://www.python.org/)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)

> K9s-inspired observability tool for LLM & MCP Server communication.

LLM4s intercepts JSON-RPC traffic between LLM clients (Claude Desktop, etc.) and MCP servers, providing real-time monitoring, debugging, and replay capabilities — all from a terminal UI.

## Architecture

```
Claude Desktop / LLM Client
    ↕ stdio (JSON-RPC)
InterceptorProxy  ←→  SilentLogger (asyncio.Queue → SQLite WAL)
    ↕ stdio (JSON-RPC)                               ↕
MCP Server                  Textual TUI (polls SQLite in real time)
```

**Key Design**: Logging stays off the hot path. The proxy enqueues each intercepted message on an `asyncio.Queue` and returns immediately; a background task drains the queue into SQLite (WAL mode), so logging adds negligible latency to LLM ↔ Server communication.
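
In code, the pattern looks roughly like this (a minimal sketch reusing the `SilentLogger` name from the diagram; the method names and table schema are assumptions, not the actual implementation):

```python
import asyncio

import aiosqlite


class SilentLogger:
    """Sketch: non-blocking enqueue on the hot path, SQLite writes off it."""

    def __init__(self) -> None:
        self.queue: asyncio.Queue = asyncio.Queue()

    def log(self, direction: str, raw: str) -> None:
        # Called by the proxy for every intercepted message: never blocks.
        self.queue.put_nowait((direction, raw))

    async def run(self, db_path: str = "llm4s.db") -> None:
        # Background task: drain the queue into SQLite with WAL enabled,
        # so the TUI can read while the proxy keeps writing.
        async with aiosqlite.connect(db_path) as db:
            await db.execute("PRAGMA journal_mode=WAL")
            await db.execute("CREATE TABLE IF NOT EXISTS logs (dir TEXT, raw TEXT)")
            while True:
                direction, raw = await self.queue.get()
                await db.execute("INSERT INTO logs VALUES (?, ?)", (direction, raw))
                await db.commit()
```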

## Quick Start

```bash
# 1. Install
pip install llm4s

# 2. Register API keys (optional — env vars also work)
llm4s config --add gemini YOUR_KEY
llm4s config --add openai YOUR_KEY

# 3. Run proxy (intercepts MCP server traffic)
llm4s proxy -- node /path/to/mcp-server/index.js

# 4. Launch TUI dashboard (in another terminal)
llm4s tui
```

## TUI Navigation

LLM4s follows a 3-tier hierarchy inspired by K9s:

| Tier | View | Description |
|------|------|-------------|
| 1 | **Providers** | Registered LLM providers & API key status |
| 2 | **Sessions** | Individual agent execution sessions |
| 3 | **Logs** | JSON-RPC packet details & payload inspector |
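
As a sketch of how this drill-down can map onto Textual's screen stack (`Enter` pushes the next tier, `Esc` pops back; the screens and data below are illustrative, not LLM4s internals):

```python
from textual.app import App, ComposeResult
from textual.screen import Screen
from textual.widgets import DataTable, Footer


class SessionsScreen(Screen):
    """Tier 2: pushed when a provider row is selected."""

    BINDINGS = [("escape", "app.pop_screen", "Back")]

    def compose(self) -> ComposeResult:
        table = DataTable()
        table.cursor_type = "row"
        table.add_columns("Session", "Packets")
        table.add_row("session-1", "42")  # placeholder data
        yield table
        yield Footer()

    def on_data_table_row_selected(self, event: DataTable.RowSelected) -> None:
        event.stop()  # tier 3 (log detail) would be pushed here


class LensApp(App):
    """Tier 1: providers table; selecting a row drills down."""

    def compose(self) -> ComposeResult:
        table = DataTable()
        table.cursor_type = "row"  # row cursor so Enter posts RowSelected
        table.add_columns("Provider", "Key status")
        table.add_row("gemini", "configured")  # placeholder data
        yield table
        yield Footer()

    def on_data_table_row_selected(self, event: DataTable.RowSelected) -> None:
        self.push_screen(SessionsScreen())


if __name__ == "__main__":
    LensApp().run()
```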

### Key Bindings

| Key | Action |
|-----|--------|
| `Enter` | Drill down to next tier |
| `Esc` | Go back to previous tier |
| `/` | Search / Semantic filter |
| `e` | Edit payload (Log detail) |
| `r` | Replay edited message |
| `x` | Export logs to Markdown |
| `n` | Rename session |
| `d` | Delete session |
| `R` | Refresh data |
| `F10` / `Q` | Quit |

### Semantic Filter (Advanced Search)

In the log view, use structured queries to filter precisely:

```
method:tools/call       # Filter by method name
tokens>100              # Token count threshold
dir:C2S                 # Direction filter (C2S or S2C)
method:call tokens>50   # Combine multiple filters
```
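
Each token is a field name, an operator (`:` for substring match, `>`/`<` for numeric comparison), and a value; multiple tokens are ANDed. A minimal matcher sketch (field names taken from the examples above; this is not the actual LLM4s parser):

```python
import operator
import re


def matches(packet: dict, query: str) -> bool:
    """Return True if the packet satisfies every space-separated filter token."""
    for token in query.split():
        m = re.fullmatch(r"(\w+)([:><])(.+)", token)
        if not m:
            continue  # ignore malformed tokens
        field, op, value = m.groups()
        actual = packet.get(field)
        if actual is None:
            return False
        if op == ":":
            if value not in str(actual):
                return False
        elif not {">": operator.gt, "<": operator.lt}[op](float(actual), float(value)):
            return False
    return True


# matches({"method": "tools/call", "tokens": 120, "dir": "C2S"},
#         "method:call tokens>100")  # -> True
```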

## Replay & Edit

Select any logged message, press `e` to edit the JSON payload, then `r` to re-inject it into the MCP server. Replayed messages appear with a `[REPLAY]` tag.
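
Under the hood, replay amounts to writing a fresh JSON-RPC line to the server's stdin (a sketch assuming MCP's newline-delimited stdio framing; `server_stdin` and the id handling are illustrative):

```python
import json


async def replay(server_stdin, payload: dict, next_id: int) -> None:
    payload = dict(payload, id=next_id)  # fresh id so the reply is not confused with the original
    line = json.dumps(payload) + "\n"    # stdio framing: one JSON-RPC object per line
    server_stdin.write(line.encode())    # server_stdin: asyncio.StreamWriter to the child process
    await server_stdin.drain()
```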

## CLI Commands

```bash
llm4s proxy -- <command>      # Intercept MCP server
llm4s tui                     # Launch dashboard
llm4s config --add <p> <key>  # Register API key
llm4s config --list           # List providers
llm4s doctor                  # System health check
llm4s add-server <name> --command <cmd> -- <args>   # Register an MCP server
llm4s list-servers            # View registered servers
```
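
For example, registering and listing a server (the server name and command here are hypothetical):

```bash
llm4s add-server fs --command npx -- -y @modelcontextprotocol/server-filesystem /tmp
llm4s list-servers
```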

## Tech Stack

| Component | Technology |
|-----------|-----------|
| UI | [Textual](https://textual.textualize.io/) — async TUI framework |
| Database | SQLite (WAL mode) via `aiosqlite` |
| Backend | [FastAPI](https://fastapi.tiangolo.com/) |
| Core | Python `asyncio` — non-blocking interceptor |

## License

MIT
