Metadata-Version: 2.4
Name: quarterbit
Version: 50.1.0
Summary: QuarterBit - Train AI models and earn $AXM on the AXIOM network. The first verifiable distributed AI training network.
Home-page: https://quarterbit.dev
Author: Clouthier Simulation Labs
Author-email: Clouthier Simulation Labs <info@quarterbit.dev>
License: MIT
Project-URL: Homepage, https://quarterbit.dev
Project-URL: Documentation, https://quarterbit.dev/docs
Keywords: ai,training,distributed,axiom,blockchain,llm,memory-efficient
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.12
Description-Content-Type: text/markdown
Requires-Dist: numpy>=1.24.0
Requires-Dist: requests>=2.28.0
Provides-Extra: gpu
Requires-Dist: torch>=2.0.0; extra == "gpu"
Requires-Dist: cupy-cuda12x; extra == "gpu"
Provides-Extra: cli
Requires-Dist: typer>=0.9; extra == "cli"
Requires-Dist: rich>=13.0; extra == "cli"
Provides-Extra: full
Requires-Dist: torch>=2.0.0; extra == "full"
Requires-Dist: cupy-cuda12x; extra == "full"
Requires-Dist: typer>=0.9; extra == "full"
Requires-Dist: rich>=13.0; extra == "full"
Dynamic: author
Dynamic: home-page
Dynamic: requires-python

# QuarterBit — Earn $AXM Training AI

**The world's first verifiable distributed AI training network.**

Turn your GPU into income. Train real AI models. Get paid in $AXM.

## Why Train on AXIOM?

| Feature | AXIOM | Other Networks |
|---------|-------|----------------|
| **Verification** | Mathematical proof of work | Trust validators (gameable) |
| **Payment** | Guaranteed for valid work | Subjective, can be rejected |
| **Transparency** | On-chain rewards, public multipliers | Opaque scoring |
| **Hardware** | Any GPU or CPU | Often restricted |

**You train. We verify. You get paid. No trust required.**

## Installation

```bash
pip install quarterbit

# With GPU support + CLI (quote the extra so shells like zsh don't expand the brackets)
pip install "quarterbit[full]"
```

## Quick Start — 3 Commands to Start Earning

```bash
# 1. Create your wallet
quarterbit init

# 2. Register your hardware (auto-detects GPU)
quarterbit register

# 3. Start training (runs as daemon, auto-restarts)
quarterbit start --daemon
```

That's it. Your GPU is now earning $AXM.

## Daemon Mode — Set It and Forget It

```bash
# Start background daemon (survives reboots)
quarterbit daemon start

# Check status
quarterbit daemon status

# View live earnings
quarterbit stats --watch
```

The daemon automatically:
- Selects highest-paying compatible tasks
- Restarts on crashes
- Claims rewards when thresholds are met
- Logs everything to `~/.quarterbit/logs/`
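
The "highest-paying compatible task" selection can be sketched in plain Python. Note that `reward_per_batch` and `min_vram_gb` are illustrative field names for this sketch, not necessarily the SDK's actual task schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    task_id: str
    reward_per_batch: int   # $AXM paid per completed batch
    min_vram_gb: int        # minimum VRAM the task needs

def pick_best_task(tasks: list[Task], vram_gb: int) -> Optional[Task]:
    """Return the highest-paying task this machine can run, or None."""
    compatible = [t for t in tasks if t.min_vram_gb <= vram_gb]
    return max(compatible, key=lambda t: t.reward_per_batch, default=None)

tasks = [
    Task("gpt2-small", reward_per_batch=10, min_vram_gb=4),
    Task("llama-7b", reward_per_batch=25, min_vram_gb=12),
    Task("llama-70b", reward_per_batch=80, min_vram_gb=40),
]
print(pick_best_task(tasks, vram_gb=24).task_id)  # llama-7b
```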

## CLI Commands

```bash
# Wallet
quarterbit init              # Create new wallet
quarterbit balance           # Check $AXM balance

# Training
quarterbit register          # Register hardware capabilities
quarterbit start             # Start training (foreground)
quarterbit start --daemon    # Start as background service
quarterbit stop              # Stop training gracefully
quarterbit tasks             # List available tasks

# Earnings
quarterbit stats             # Your training statistics
quarterbit claim             # Claim pending rewards
quarterbit momentum          # View your reward multiplier

# Daemon
quarterbit daemon start      # Start background service
quarterbit daemon stop       # Stop background service
quarterbit daemon status     # Check if running
quarterbit daemon logs       # View recent logs
```

## Reward Multipliers — Consistency Pays

AXIOM rewards consistent trainers with multipliers up to **10x**:

| Consistency | Multiplier | Example Earnings |
|-------------|------------|------------------|
| New trainer | 1.0x | 10 $AXM/batch |
| 50% consistent | 1.3x | 13 $AXM/batch |
| 80% consistent | 1.8x | 18 $AXM/batch |
| 95%+ consistent | 2.5x+ | 25+ $AXM/batch |

**The longer you train reliably, the more you earn per batch.**
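
Reading the table as a step function (the exact on-chain formula may differ; this sketch only encodes the tier points listed above):

```python
def reward_multiplier(consistency: float) -> float:
    """Map a consistency score in [0, 1] to the tiers in the table above."""
    if consistency >= 0.95:
        return 2.5   # floor for the top tier ("2.5x+")
    if consistency >= 0.80:
        return 1.8
    if consistency >= 0.50:
        return 1.3
    return 1.0

BASE_REWARD = 10  # $AXM per batch, per the table

for c in (0.0, 0.50, 0.80, 0.95):
    print(f"{c:.0%} consistent -> {BASE_REWARD * reward_multiplier(c):g} $AXM/batch")
```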

## Hardware Requirements

| Tier | VRAM | Example Hardware | Task Types |
|------|------|------------------|------------|
| CPU | Any | Any computer | Small models, data prep |
| Small | 4-8 GB | RTX 3060, RTX 4060 | GPT-2, small LLMs |
| Medium | 12-24 GB | RTX 3090, RTX 4090 | LLaMA-7B, Mistral |
| Large | 40-80 GB | A100, H100 | LLaMA-70B, large models |

**No minimum.** Even a laptop CPU can earn $AXM on compatible tasks.

## Python SDK

```python
from quarterbit import AxiomTrainer

# Initialize
trainer = AxiomTrainer(
    rpc_url="https://rpc.quarterbit.dev",
    private_key="0x..."  # Or use keyfile
)

# Register hardware (auto-detects)
trainer.register()

# Get compatible tasks for your hardware
tasks = trainer.get_compatible_tasks()
print(f"Found {len(tasks)} tasks you can train")

# Start training (distributed mode - syncs with other trainers)
for task in tasks:
    trainer.train_distributed(task, num_rounds=100)
    trainer.claim_rewards(task.task_id)

# Check earnings
print(f"Balance: {trainer.get_balance() / 1e18:.2f} $AXM")
```
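
The `/ 1e18` above implies balances are returned as integer base units with 18 decimals (an assumption based on that conversion). For display, `Decimal` avoids float rounding on large balances:

```python
from decimal import Decimal

AXM_DECIMALS = 18  # implied by the `/ 1e18` conversion above

def format_axm(base_units: int) -> str:
    """Render integer base units as a human-readable $AXM amount."""
    return f"{Decimal(base_units) / Decimal(10 ** AXM_DECIMALS)} $AXM"

print(format_axm(1_250_000_000_000_000_000))  # 1.25 $AXM
```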

## Task Submitter SDK

Have a model to train? Submit tasks and let the network train it:

```python
import asyncio

from quarterbit import AxiomTaskSubmitter

async def main():
    submitter = AxiomTaskSubmitter(
        rpc_url="https://rpc.quarterbit.dev",
        private_key="0x..."
    )

    # Create training task
    task = await submitter.create_task(
        model=my_model,
        dataset=my_dataset,
        reward_per_batch=10,  # $AXM per batch
        total_batches=1000
    )

    # Monitor progress
    async for progress in submitter.train_loop(task.task_id):
        print(f"Loss: {progress['loss']:.4f}")

    # Get trained model
    trained_model = submitter.get_model()

# `await` and `async for` must run inside a coroutine
asyncio.run(main())
```

## Security

- **Staking**: Trainers stake $AXM as collateral
- **Verification**: VLA exact arithmetic proves correct computation
- **Slashing**: Invalid work = stake slashed
- **Encryption**: Gradient data encrypted end-to-end

Your work is verified mathematically, not by subjective validators.

## System Requirements

- **Python**: 3.12+
- **OS**: Windows, Linux, or WSL
- **GPU**: Optional (CUDA 12.x for GPU acceleration)

## Links

- **Website**: https://quarterbit.dev
- **Documentation**: https://quarterbit.dev/docs

## License

MIT — Clouthier Simulation Labs 2026

Free to use. Decentralized architecture — anyone can run nodes.
