Metadata-Version: 2.4
Name: zest-transfer
Version: 0.3.2
Summary: P2P acceleration for ML model distribution
Author: zest contributors
License-Expression: MIT
Project-URL: Homepage, https://github.com/praveer13/zest
Project-URL: Repository, https://github.com/praveer13/zest
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.28
Provides-Extra: hf
Requires-Dist: huggingface_hub>=0.20; extra == "hf"

# zest — P2P Acceleration for ML Model Distribution

**zest** accelerates ML model downloads by adding a peer-to-peer layer on top of HuggingFace's [Xet storage](https://huggingface.co/docs/xet/index). Models download from nearby peers first, falling back to the HuggingFace CDN when no peer has the data — never slower than vanilla `hf_xet`.

## Install

```bash
pip install zest-transfer
```

## Authentication

zest needs a HuggingFace token to download models. Set it up once:

```bash
# option 1: environment variable
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxx

# option 2: huggingface-cli (token saved to ~/.cache/huggingface/token)
pip install huggingface_hub
huggingface-cli login
```

Get your token at [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
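
Under the hood, zest is expected to resolve the token the same way `huggingface_hub` does: environment variable first, then the file written by `huggingface-cli login`. A minimal sketch of that lookup order (the function name `resolve_token` is illustrative, not part of zest's API):

```python
import os
from pathlib import Path
from typing import Optional

def resolve_token() -> Optional[str]:
    """Return the HuggingFace token, or None if not configured."""
    # 1. The environment variable takes precedence
    token = os.environ.get("HF_TOKEN")
    if token:
        return token
    # 2. The file saved by `huggingface-cli login`
    token_file = Path.home() / ".cache" / "huggingface" / "token"
    if token_file.exists():
        return token_file.read_text().strip() or None
    return None
```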

## Quick Start

### CLI

```bash
# Pull a model (uses P2P when peers available, CDN fallback)
zest pull meta-llama/Llama-3.1-8B

# Files land in standard HF cache — transformers.from_pretrained() just works
python -c "from transformers import AutoModel; AutoModel.from_pretrained('meta-llama/Llama-3.1-8B')"
```

### Python API

```python
import zest

# One-line activation — monkey-patches huggingface_hub
zest.enable()

# Or pull directly
path = zest.pull("meta-llama/Llama-3.1-8B")
```

### Environment Variable

```bash
# Auto-enable on import
ZEST=1 python train.py
```

## How It Works

HuggingFace's Xet protocol breaks files into content-addressed ~64KB chunks grouped into **xorbs**. zest adds a BitTorrent-compatible peer swarm so these immutable xorbs can be served by anyone who already downloaded them.
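
For intuition, here is a toy sketch of content addressing with fixed 64 KB boundaries and SHA-256 chunk IDs. (The real Xet protocol uses content-defined chunk boundaries so that an edit doesn't shift every subsequent chunk; fixed-size splitting is a simplification.)

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # toy fixed size; Xet targets ~64KB with content-defined cuts

def chunk_ids(data: bytes) -> list:
    """Split data into fixed-size chunks and return their content addresses."""
    ids = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        ids.append(hashlib.sha256(chunk).hexdigest())
    return ids

# Identical content always maps to the same ID, so any peer holding a chunk
# can serve it; the downloader just re-hashes to verify, no trust required.
```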

```
For each xorb needed:
  1. Check local cache
  2. Ask peers (BitTorrent protocol)
  3. Fall back to CDN (presigned S3 URLs)
```
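
The chain above amounts to trying sources in order until one yields bytes that verify. A minimal sketch, where the source callables and the function name `fetch_xorb` are illustrative rather than zest's internal API:

```python
import hashlib
from typing import Callable, Iterable, Optional

def fetch_xorb(xorb_id: str,
               sources: Iterable[Callable[[str], Optional[bytes]]]) -> bytes:
    """Try each source in order; accept only bytes matching the content address."""
    for source in sources:
        data = source(xorb_id)
        if data is None:
            continue  # this source doesn't have it; fall through to the next
        # Content addressing makes verification trivial: the hash must equal the ID
        if hashlib.sha256(data).hexdigest() == xorb_id:
            return data
    raise LookupError(f"no source could provide xorb {xorb_id}")
```

In zest's terms, `sources` would be the local cache, the peer swarm, and finally the CDN's presigned URLs; because xorbs are content-addressed, bytes from an untrusted peer are accepted only if their hash matches.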

Every download makes the network faster for the next person.

## P2P Testing

```bash
# Server A: pull a model and seed it
zest pull gpt2
zest serve

# Server B: pull from Server A
zest pull gpt2 --peer <server-a-ip>:6881
```

## Links

- [GitHub](https://github.com/praveer13/zest)
- [Xet Protocol](https://huggingface.co/docs/xet/index)
