Metadata-Version: 2.4
Name: arbis-llmwrap
Version: 0.3.5
Summary: Decorator to wrap LLM calls for production use with flexible prompt binding.
License: MIT
Keywords: llm,decorator,prompt,logging,cython
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.31
Requires-Dist: cryptography>=42.0
Provides-Extra: dev
Requires-Dist: cython>=3.0; extra == "dev"
Requires-Dist: wheel; extra == "dev"
Provides-Extra: integration
Requires-Dist: openai<2.31.0,>=2.30.0; extra == "integration"
Requires-Dist: jiter<0.14.0,>=0.13.0; extra == "integration"
Dynamic: license-file

# arbis-llmwrap

Lightweight wrappers for LLM calls in Python:

- function wrapper: `wrap_llm_call`
- line wrapper: `wrap_llm_line`

This page intentionally documents usage only. Internal implementation details are not published.

## Table of Contents

- [Install](#install)
- [Quick Start](#quick-start)
- [Function Wrapper (`wrap_llm_call`)](#function-wrapper-wrap_llm_call)
- [Line Wrapper (`wrap_llm_line`)](#line-wrapper-wrap_llm_line)
- [Common Config](#common-config)
- [License](#license)

## Install

```bash
pip install arbis-llmwrap
```

## Quick Start

```python
from openai import OpenAI  # provided by the "integration" extra

from llmwrap import wrap_llm_call

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@wrap_llm_call(
    company_name="Example Co",
    project_name="Support",
    agent_name="quickstart_agent",
    secret_key="vt_live_xxxx",
    prompt_arg="prompt",
    max_tries=1,
)
def ask(prompt: str) -> str:
    return client.responses.create(model="gpt-4.1-mini", input=prompt).output_text
```

## Function Wrapper (`wrap_llm_call`)

```python
from openai import OpenAI

from llmwrap import wrap_llm_call

client = OpenAI()

def has_tool_calls(raw) -> bool:
    """True when the response contains tool calls."""
    try:
        return bool(raw.choices[0].message.tool_calls)
    except Exception:
        return False

@wrap_llm_call(
    company_name="Example Co",
    project_name="Agent Runtime",
    agent_name="function_wrapper_example",
    secret_key="vt_live_xxxx",
    prompt_arg="messages",
    passthrough_when=has_tool_calls,
    max_tries=1,
)
def run_turn(messages: list[dict]):
    return client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=messages,
        tools=[...],
        tool_choice="auto",
        temperature=0,
    )
```
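
Because `has_tool_calls` is a plain predicate, it can be exercised without any network access. The sketch below uses a hypothetical `make_response` helper (not part of this library) that stubs the attribute shape of a Chat Completions response:

```python
from types import SimpleNamespace

def make_response(tool_calls):
    # Stub with the same attribute shape the predicate reads:
    # raw.choices[0].message.tool_calls
    msg = SimpleNamespace(tool_calls=tool_calls)
    return SimpleNamespace(choices=[SimpleNamespace(message=msg)])

def has_tool_calls(raw) -> bool:
    try:
        return bool(raw.choices[0].message.tool_calls)
    except Exception:
        return False

print(has_tool_calls(make_response([{"id": "call_1"}])))  # True
print(has_tool_calls(make_response(None)))                # False
print(has_tool_calls(object()))                           # False (no .choices)
```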

## Line Wrapper (`wrap_llm_line`)

```python
from openai import OpenAI

from llmwrap import wrap_llm_line

client = OpenAI()

payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "Give one sentence with token TOKEN-42."}],
}

completion = wrap_llm_line(
    llm_call=lambda p: client.chat.completions.create(
        model=p["model"],
        messages=p["messages"],
        temperature=0,
    ),
    prompt=payload,
    prompt_json_pointer="/messages/0/content",
    company_name="Example Co",
    project_name="Runtime",
    agent_name="line_wrapper_example",
    secret_key="vt_live_xxxx",
    max_tries=1,
)
```
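
The `prompt_json_pointer` value follows JSON Pointer syntax (RFC 6901). As a rough illustration only (the library's internals are not published), a minimal resolver shows how `/messages/0/content` addresses the prompt string inside `payload`:

```python
def resolve_pointer(doc, pointer: str):
    """Minimal RFC 6901 resolver (illustration, not the library's code)."""
    for token in pointer.lstrip("/").split("/"):
        # Unescape per the spec: ~1 -> "/", then ~0 -> "~"
        token = token.replace("~1", "/").replace("~0", "~")
        doc = doc[int(token)] if isinstance(doc, list) else doc[token]
    return doc

payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "Give one sentence with token TOKEN-42."}],
}

print(resolve_pointer(payload, "/messages/0/content"))
# Give one sentence with token TOKEN-42.
```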

## Common Config

- `company_name`, `project_name`, `agent_name`, `secret_key`
- `max_tries` (`>= 1`)
- optional: `response_extractor`, `prompt_json_pointer`, `passthrough_when`, `response_answer_json_pointer`, `return_merger`
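
The exact signatures of these hooks are not documented on this page. Assuming `response_extractor` receives the raw return value and yields the text to inspect, a hypothetical extractor for a Chat Completions-style response might look like:

```python
from types import SimpleNamespace

def extract_chat_text(raw) -> str:
    """Hypothetical response_extractor: pull the assistant text out of a
    Chat Completions-style response object (shape assumed, not documented)."""
    return raw.choices[0].message.content or ""

# Stubbed response for illustration; a real call would return this shape.
stub = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="hello"))]
)
print(extract_chat_text(stub))  # hello
```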

## License

MIT. See [LICENSE](LICENSE).
