Metadata-Version: 2.4
Name: arbis-llmwrap
Version: 0.3.4
Summary: Decorator to wrap LLM calls for production use with flexible prompt binding.
License: MIT
Keywords: llm,decorator,prompt,logging,cython
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.31
Requires-Dist: cryptography>=42.0
Provides-Extra: dev
Requires-Dist: cython>=3.0; extra == "dev"
Requires-Dist: wheel; extra == "dev"
Dynamic: license-file

# llmwrap

**llmwrap** is a Python library for wrapping an LLM-calling function with a simple decorator. You configure the decorator once, keep your own model-calling code inside a normal Python function, and call that function as usual.

## Install

```bash
pip install arbis-llmwrap
```

## Quick Start

Decorate your LLM function once, then call it with prompts as usual:

```python
from llmwrap import wrap_llm_call

@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="My Agent",
    secret_key="vt_live_xxxx",
    prompt_arg="prompt",
    max_tries=3,
)
def user_llm(prompt: str) -> str:
    # Call your LLM provider here and return a string.
    response = some_client.chat(prompt)
    return response

answer = user_llm("What is 2+2?")
```

## How To Use The Decorator

`wrap_llm_call(...)` is a decorator factory. You call it with configuration values, and it returns a decorator that wraps your function.

```python
from llmwrap import wrap_llm_call

decorator = wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="My Agent",
    secret_key="vt_live_xxxx",
    prompt_arg="prompt",
    max_tries=3,
)
```

You usually apply it directly with `@wrap_llm_call(...)`.

## Decorator Arguments

### `company_name: str`

Name of your company or organization.

### `project_name: str`

Name of the project this function belongs to.

### `agent_name: str`

Name of the agent, assistant, or workflow this function represents.

### `secret_key: str`

Credential value passed when configuring the decorator.
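
In real projects, avoid hard-coding the key. A minimal sketch of loading it from the environment instead (the variable name `LLMWRAP_SECRET_KEY` is a hypothetical choice, not something the library reads on its own):

```python
import os

# Hypothetical environment variable name; pass the value as secret_key=SECRET_KEY.
SECRET_KEY = os.environ["LLMWRAP_SECRET_KEY"]
```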

### `max_tries: int = 3`

Maximum number of attempts the wrapper makes. The value must be `>= 1`; for example, `max_tries=1` means a single attempt with no retries.

### `prompt_arg: str = "prompt"`

Name of the function argument that carries the prompt.

This allows the prompt to appear anywhere in the function signature as long as you tell the decorator which argument to use.
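
Since the decorator binds the prompt by parameter name, it should pick the value up whether you pass it positionally or by keyword (a minimal sketch with a placeholder model, as in the other examples; see also "Prompt Is Not The First Argument" below):

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="Binder Demo",
    secret_key="vt_live_xxxx",
    prompt_arg="question",
)
def ask(model_name: str, question: str) -> str:
    return my_model.generate(model_name, question)

# Both calls bind the same prompt value to "question":
ask("my-model", "What is 2+2?")
ask("my-model", question="What is 2+2?")
```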

### `response_extractor: Callable[[Any], str] | None = None`

Optional function that extracts the response text from the wrapped function's result.

Use this when your function returns something other than a plain string, such as:

- a tuple
- a dict
- an SDK response object

If omitted, the wrapped function is expected to return a string directly.
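
One extractor per shape, matching the list above (the attribute name on the SDK object is hypothetical; the tuple and dict cases appear in full examples below):

```python
# Each lambda takes the wrapped function's result and returns the text.
extract_from_tuple = lambda result: result[0]            # (text, metadata) tuple
extract_from_dict = lambda result: result["text"]        # {"text": ...} dict
extract_from_object = lambda result: result.output_text  # SDK response object
```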

## Supported Function Shapes

The wrapped function can:

- receive the prompt in any named argument position
- return either a plain string or a larger object, as long as `response_extractor` returns the text to process
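
A minimal sketch combining both shapes at once, with a placeholder client as in the other examples:

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="Combined Demo",
    secret_key="vt_live_xxxx",
    prompt_arg="user_prompt",
    response_extractor=lambda result: result["text"],
)
def ask(model_name: str, user_prompt: str) -> dict:
    # The prompt is the second argument; the text is extracted from the dict.
    return {"text": my_client.generate(model=model_name, prompt=user_prompt)}
```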

## Basic String Return Example

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="Support Bot",
    secret_key="vt_live_xxxx",
    prompt_arg="prompt",
)
def ask_model(prompt: str) -> str:
    return my_model.generate(prompt)
```

## Prompt Is Not The First Argument

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="My Project",
    agent_name="Planner",
    secret_key="vt_live_xxxx",
    prompt_arg="user_prompt",
)
def ask_model(model_name: str, user_prompt: str, temperature: float = 0.2) -> str:
    return my_client.generate(
        model=model_name,
        prompt=user_prompt,
        temperature=temperature,
    )
```

## Tuple Return Example

Here the wrapped function returns a `(text, metadata)` tuple, and the extractor selects the text:

```python
@wrap_llm_call(
    company_name="Example Co",
    project_name="Assistant Platform",
    agent_name="Tuple Example",
    secret_key="vt_live_xxxx",
    prompt_arg="prompt",
    response_extractor=lambda result: result[0],
)
def run_model(client, prompt: str) -> tuple[str, dict]:
    text = client.generate(prompt)
    metadata = {"provider": "example"}
    return text, metadata
```

## Dict Return Example

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="Internal Tools",
    agent_name="Draft Writer",
    secret_key="vt_live_xxxx",
    prompt_arg="prompt",
    response_extractor=lambda result: result["text"],
)
def generate_copy(prompt: str) -> dict:
    response = client.responses.create(
        model="my-model",
        input=prompt,
    )
    return {
        "text": response.output_text,
        "request_id": response.id,
    }
```

## Calling A Decorated Function

Once decorated, call your function the same way you would call any normal Python function:

```python
result = ask_model("Write a short welcome message.")
print(result)
```

The decorated function returns a string.
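
This holds even when the wrapped function itself returns a tuple, dict, or SDK object: the call yields the string produced by `response_extractor`. For instance, using `run_model` from the tuple example above (with a placeholder client):

```python
text = run_model(my_client, "Summarize the release notes.")
assert isinstance(text, str)  # the extracted text, not the (text, metadata) tuple
```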

## Using Your Own LLM Client

You can keep your provider-specific code inside the wrapped function. The decorator does not require a specific SDK.

```python
@wrap_llm_call(
    company_name="My Company",
    project_name="Internal Tools",
    agent_name="Draft Writer",
    secret_key="vt_live_xxxx",
    prompt_arg="messages",
    response_extractor=lambda result: result.output_text,
)
def generate_copy(messages: str, model: str = "my-model"):
    response = client.responses.create(
        model=model,
        input=messages,
    )
    return response
```

## Common Pattern

Keep the decorated function small and focused:

- accept your normal application arguments
- call your LLM provider
- return either text directly or a larger result object
- use `prompt_arg` and `response_extractor` to tell the decorator what to use

```python
@wrap_llm_call(
    company_name="Example Co",
    project_name="Docs",
    agent_name="Summarizer",
    secret_key="vt_live_xxxx",
    prompt_arg="user_input",
    response_extractor=lambda result: result["content"],
    max_tries=2,
)
def summarize(user_input: str, style: str = "short") -> dict:
    return provider.generate_text(user_input, style=style)
```

## Notes

- `prompt_arg` must match a named parameter on the wrapped function.
- `response_extractor` must return a string.
- The decorated function still returns a string result.

If your wrapped function already returns a plain string, you do not need `response_extractor`.

## License

MIT. See [LICENSE](LICENSE).
