Metadata-Version: 2.4
Name: arbis-llmwrap
Version: 0.3.3
Summary: Decorator to wrap LLM calls for production use with a simple, answer-only interface.
License: MIT
Keywords: llm,decorator,prompt,logging,cython
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.31
Requires-Dist: cryptography>=42.0
Provides-Extra: dev
Requires-Dist: cython>=3.0; extra == "dev"
Requires-Dist: wheel; extra == "dev"
Dynamic: license-file

# llmwrap

**llmwrap** is a Python library for wrapping an LLM-calling function with a simple decorator. You configure the decorator once, keep your own model-calling code inside a normal Python function, and call that function as usual.

## Install

```bash
pip install arbis-llmwrap
```

## Quick Start

Decorate your LLM function once, then call it with prompts as usual:

```python
from llmwrap import wrap_llm_call

@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="My Agent",
    secret_key="vt_live_xxxx",
    max_tries=3,
)
def user_llm(prompt: str) -> str:
    # Call your LLM provider here and return a string.
    response = some_client.chat(prompt)
    return response

answer = user_llm("What is 2+2?")
```

## How To Use The Decorator

`wrap_llm_call(...)` is a decorator factory. You call it with configuration values, and it returns a decorator that wraps your function.

```python
from llmwrap import wrap_llm_call

decorator = wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="My Agent",
    secret_key="vt_live_xxxx",
    max_tries=3,
)
```

You usually apply it directly with `@wrap_llm_call(...)`.
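
Both forms behave the same. For illustration, here is the manual application, reusing the `decorator` object from the snippet above and the `some_client` placeholder from the Quick Start:

```python
def user_llm(prompt: str) -> str:
    # Your provider call, exactly as in the Quick Start.
    return some_client.chat(prompt)

# Equivalent to writing @wrap_llm_call(...) above the definition.
user_llm = decorator(user_llm)
```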

## Decorator Arguments

### `company_name: str`

Name of your company or organization.

### `project_name: str`

Name of the project this function belongs to.

### `agent_name: str`

Name of the agent, assistant, or workflow this function represents.

### `secret_key: str`

Secret credential supplied when configuring the decorator. Keep it out of source control; the example at the end of this section loads it from an environment variable.

### `max_tries: int = 3`

Maximum number of attempts the wrapper makes. Must be `>= 1`; defaults to `3`.
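
As a minimal sketch of a full configuration, you might load the secret key from an environment variable instead of hard-coding it. The variable name `LLMWRAP_SECRET_KEY` is only an illustration, not something the library requires:

```python
import os

from llmwrap import wrap_llm_call

@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="My Agent",
    # Illustrative: read the secret from the environment rather than committing it.
    secret_key=os.environ["LLMWRAP_SECRET_KEY"],
    max_tries=3,
)
def user_llm(prompt: str) -> str:
    return some_client.chat(prompt)
```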

## Requirements For The Wrapped Function

The wrapped function must follow the current interface:

- The first argument must be `prompt`.
- The function must return a Python `str`.

Example:

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="Support Bot",
    secret_key="vt_live_xxxx",
)
def ask_model(prompt: str) -> str:
    return my_model.generate(prompt)
```

## Calling A Decorated Function

Once decorated, call your function the same way you would call any normal Python function:

```python
result = ask_model("Write a short welcome message.")
print(result)
```

The decorated function returns a string.

## Using Your Own LLM Client

You can keep your provider-specific code inside the wrapped function. The decorator does not require a specific SDK.

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="Internal Tools",
    agent_name="Draft Writer",
    secret_key="vt_live_xxxx",
)
def generate_copy(prompt: str) -> str:
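    # "client" stands in for whatever provider SDK client you already use.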
    response = client.responses.create(
        model="my-model",
        input=prompt,
    )
    return response.output_text
```

## Common Pattern

Keep the decorated function small and focused:

- accept a prompt string
- call your LLM provider
- return the resulting text

```python
@wrap_llm_call(
    company_name="Example Co",
    project_name="Docs",
    agent_name="Summarizer",
    secret_key="vt_live_xxxx",
    max_tries=2,
)
def summarize(prompt: str) -> str:
    return provider.generate_text(prompt)
```

## Current Limitations

At the moment, the decorator expects:

- `prompt` as the first function argument
- a string return value from the wrapped function

If your function takes a different signature or returns a tuple, dict, or object, wrap it in a small adapter that exposes `prompt` as the first argument and returns the text as a `str`, as in the sketch below.
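
Here is one way such an adapter might look (the `legacy_generate` helper and its `"text"` key are purely illustrative):

```python
def legacy_generate(prompt: str, temperature: float = 0.2) -> dict:
    # Existing helper that returns a dict such as {"text": "...", "usage": {...}}.
    ...

@wrap_llm_call(
    company_name="Example Co",
    project_name="Docs",
    agent_name="Summarizer",
    secret_key="vt_live_xxxx",
)
def ask(prompt: str) -> str:
    # Adapter: prompt-first signature in, plain string out.
    result = legacy_generate(prompt)
    return result["text"]
```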

## License

MIT. See [LICENSE](LICENSE).
