Metadata-Version: 2.1
Name: verlex
Version: 0.8.40
Summary: Run your code in the cloud with a single function call
Author: Verlex Team
Maintainer: Verlex Team
License: Apache-2.0
Project-URL: Homepage, https://verlex.dev
Project-URL: Documentation, https://verlex.dev/docs
Keywords: cloud,serverless,machine-learning,gpu,cloud-computing,distributed-computing
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: System :: Distributed Computing
Classifier: Topic :: Internet :: WWW/HTTP :: HTTP Servers
Classifier: Environment :: Console
Classifier: Framework :: AsyncIO
Classifier: Typing :: Typed
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: cloudpickle>=3.0.0
Requires-Dist: httpx>=0.25.0
Requires-Dist: pynvml>=11.5.0
Provides-Extra: agent
Requires-Dist: psutil>=5.9.0; extra == "agent"
Provides-Extra: cli
Requires-Dist: rich>=13.0.0; extra == "cli"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Requires-Dist: sentry-sdk>=1.0.0; extra == "dev"
Provides-Extra: overflow
Requires-Dist: psutil>=5.9.0; extra == "overflow"

# Verlex

**Run your code in the cloud for the price of a coffee.**

Verlex is a Python SDK that lets you execute code on the cheapest available cloud infrastructure across AWS, GCP, and Azure — all with a single function call.

## Installation

```bash
pip install verlex
```

With optional extras:

```bash
pip install 'verlex[agent]'     # system-monitoring agent daemon
pip install 'verlex[cli]'       # rich CLI output
pip install 'verlex[overflow]'  # automatic cloud offloading
```

## Quick Start

```python
import verlex

def train_model():
    import torch
    model = torch.nn.Linear(100, 10)
    # Your training code here...
    return {"accuracy": 0.95}

# Run it in the cloud — one line!
result = verlex.cloud(train_model, api_key="gw_your_key")
print(result)
```

## Basic Usage

### One-Liner (Simplest)

Every function works as a standalone call — just pass your `api_key`:

```python
import verlex

# Run in the cloud
result = verlex.cloud(train_model, api_key="gw_your_key")

# Analyze resources
rec = verlex.analyze(train_model, api_key="gw_your_key")

# Estimate cost
costs = verlex.estimate_cost(train_model, api_key="gw_your_key")
```

### Specifying Resources

```python
result = verlex.cloud(
    train_model,
    api_key="gw_your_key",
    gpu="A100",       # Specific GPU type
    gpu_count=2,      # Multiple GPUs
    memory="64GB",    # Memory requirement
    timeout=7200,     # 2 hour timeout
)
```
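The `memory` parameter takes a human-readable size string. As an illustration of the assumed format (this parser is hypothetical, not part of the SDK), `"64GB"` translates to bytes like so:

```python
def parse_memory(spec: str) -> int:
    """Convert a size string like '64GB' to bytes (decimal units assumed)."""
    units = {"TB": 10**12, "GB": 10**9, "MB": 10**6}
    for suffix, factor in units.items():
        if spec.upper().endswith(suffix):
            return int(float(spec[: -len(suffix)]) * factor)
    raise ValueError(f"unrecognized memory spec: {spec!r}")

parse_memory("64GB")  # 64_000_000_000 bytes
```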

### Context Manager (Multi-step Sessions)

```python
with verlex.GateWay(api_key="gw_your_key") as gw:
    rec = gw.analyze(train_model)
    costs = gw.estimate_cost(train_model)
    result = gw.run(train_model)
```

### Async Execution

```python
with verlex.GateWay(api_key="gw_your_key") as gw:
    # Submit jobs (non-blocking)
    job1 = gw.run_async(train_model_1)
    job2 = gw.run_async(train_model_2)

    # Wait for results when needed
    result1 = job1.result()
    result2 = job2.result()
```

## Pricing Modes

Choose your price-speed tradeoff with a single `fast` flag:

| Mode | Wait Time | Best For |
|------|-----------|----------|
| **Performance** (`fast=True`) | Immediate | Time-sensitive workloads |
| **Standard** (`fast=False`) | Up to 10 min | Batch jobs, cost-sensitive |

```python
# Performance mode - immediate execution
result = verlex.cloud(my_function, api_key="gw_your_key", fast=True)

# Standard mode (default) - wait for lower prices
result = verlex.cloud(my_function, api_key="gw_your_key")
```
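Since Standard mode can wait up to 10 minutes for cheaper capacity, one way to choose the flag programmatically (a sketch, not an SDK feature) is to compare your deadline against that worst-case wait:

```python
def pick_fast_flag(deadline_s: float, standard_wait_s: float = 600.0) -> bool:
    """Use Performance mode only when the deadline can't absorb Standard's worst-case wait."""
    return deadline_s < standard_wait_s

pick_fast_flag(300)   # job due in 5 minutes -> True, use fast=True
pick_fast_flag(3600)  # an hour of slack -> False, take the cheaper queue
```

The result can then be passed straight through as `fast=pick_fast_flag(deadline)`.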

## Authentication

### Option 1: Direct API Key (Inline)

```python
result = verlex.cloud(my_function, api_key="gw_your_key")
```

### Option 2: Environment Variable

```bash
export VERLEX_API_KEY="gw_your_key"
```

```python
result = verlex.cloud(my_function)  # picks up VERLEX_API_KEY
```

## Automatic Cloud Offloading

Don't know which functions are heavy? Let Verlex figure it out:

```python
import verlex
verlex.overflow(fast=True)

# Your code runs normally. When CPU or memory exceeds 85%,
# functions are automatically offloaded to the cheapest cloud.
data = load_data()
result = train_model(data)   # system overloaded? → cloud
evaluate(result)             # resources free → runs locally
```
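The offload rule described above reduces to a threshold check. A minimal sketch of the assumed logic (the real agent's metrics and thresholds are internal to Verlex):

```python
def should_offload(cpu_percent: float, mem_percent: float, threshold: float = 85.0) -> bool:
    """Offload when CPU or memory utilization exceeds the threshold."""
    return cpu_percent > threshold or mem_percent > threshold

should_offload(92.0, 40.0)  # True: CPU is overloaded, send to the cloud
should_offload(50.0, 60.0)  # False: plenty of headroom, run locally
```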

Install with: `pip install 'verlex[overflow]'`

## Agent Daemon

Monitor your system and offload heavy Python processes:

```bash
# Watch for heavy processes and offer to offload
verlex agent watch

# Auto-offload without prompting
verlex agent watch --auto

# Submit a script directly via source-code pipeline
verlex agent run train.py --gpu A100
```

Install with: `pip install 'verlex[agent]'`

## CLI

```bash
# Login
verlex login

# Run a script
verlex run train.py

# Run with specific GPU
verlex run train.py --gpu A100

# Check job status
verlex jobs

# View account info
verlex whoami
```

## Supported Cloud Providers

- **AWS** - EC2, with Spot instances (up to 90% off)
- **GCP** - Compute Engine, with Preemptible VMs (up to 91% off)
- **Azure** - VMs, with Spot instances (up to 81% off)
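Those maximum discounts make the best-case hourly rate easy to estimate. A back-of-the-envelope helper (illustrative only; the on-demand rate is something you would look up per instance type):

```python
def best_case_spot_price(on_demand_hourly: float, max_discount: float) -> float:
    """Best-case hourly price if the maximum spot discount applies."""
    return on_demand_hourly * (1 - max_discount)

# e.g. a $3.00/hr on-demand instance at AWS's 90% maximum discount
best_case_spot_price(3.00, 0.90)  # about $0.30/hr
```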

## Links

- **Website**: [verlex.dev](https://verlex.dev)
- **Documentation**: [verlex.dev/docs](https://verlex.dev/docs)

## Contact

- **Support**: support@verlex.dev
- **Sales**: sales@verlex.dev
- **General**: contact@verlex.dev

## License

Apache 2.0
