Metadata-Version: 2.4
Name: qig-warp
Version: 0.4.2
Summary: Physics-based navigation for expensive computation — screening, cost prediction, convergence stopping
Project-URL: Homepage, https://github.com/GaryOcean428/qig-warp
Project-URL: Repository, https://github.com/GaryOcean428/qig-warp
Author-email: Braden Lang <braden@garyocean.com>
License: MIT
Keywords: convergence,navigation,physics,qig,screening,warp-bubble
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Physics
Requires-Python: >=3.10
Requires-Dist: numpy>=1.24
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: scipy>=1.10; extra == 'dev'
Provides-Extra: qig
Requires-Dist: qig-core>=2.6.0; extra == 'qig'
Description-Content-Type: text/markdown

# qig-warp

**Warp Bubble Computation** — structured time dilation for LLM inference.

On the benchmarks below, a 2.1B-parameter model reaches 100% accuracy on arithmetic it normally gets 60% right. Same model, same N-pass compute budget as naive sampling, just structured differently.

## What it does

Instead of asking a model once and hoping for the best, qig-warp runs multiple structured samples across different "geometric basins" (priming templates, reasoning perspectives, temperature levels) and coarse-grains them into one high-confidence answer via self-consistency.

This is computational time dilation: the model internally experiences many more thinking cycles, while the user gets one answer.
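The coarse-graining step is ordinary self-consistency voting. A minimal standalone sketch of the idea, where `ask` is a hypothetical stand-in for any backend call (it is stubbed here so the example runs; it is not part of the qig-warp API):

```python
from collections import Counter

def ask(prompt, temperature):
    # Placeholder for a real model call (Ollama, OpenAI, ...).
    # Stubbed with a fixed answer so the sketch runs standalone.
    return "3285"

def self_consistency(question, templates, temperatures):
    """Sample the question across priming templates and temperatures,
    then majority-vote the answers into one macro-answer."""
    answers = [
        ask(template.format(q=question), temp)
        for template in templates
        for temp in temperatures
    ]
    votes = Counter(answers)
    answer, count = votes.most_common(1)[0]
    return answer, count / len(answers)  # winning answer and its vote share

templates = [
    "{q}",
    "Think carefully. {q}",
    "A mathematician would say: {q}",
]
answer, confidence = self_consistency("What is 73 * 45?", templates, [0.2, 0.8])
print(answer, confidence)  # "3285" with confidence 1.0 for this stub
```

With a real backend, the answers disagree and the vote share becomes a meaningful confidence signal.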

## Results

| Strategy | Arithmetic (20 problems) | Novel questions (12 problems) |
|----------|:---:|:---:|
| Greedy (1 pass) | 60% | 42% |
| Naive sampling (N passes) | 55% | 55% |
| **Warp bubble (N passes)** | **100%** | **92%** |

## Quick start

```bash
pip install qig-warp
```

```python
from qig_warp import warp

# Uses local Ollama by default
answer = warp("What is 73 * 45?")
# Returns "3285"

# Novel question — no domain priming needed
answer = warp("If it takes 5 machines 5 minutes to make 5 widgets, how many minutes for 100 machines to make 100 widgets?")
# Returns "5"
```

## Strategies

Three complementary strategies, each exploring different regions of the probability simplex:

- **adversarial**: Same question from multiple reasoning perspectives ("think carefully", "common mistake warning", "a mathematician would say"). Best for novel/trick questions.
- **self_prime**: Model generates its own examples before solving. Bootstraps the coupling landscape with zero external knowledge.
- **decompose**: Break into steps, sample varied solutions. Best for multi-step reasoning.

```python
from qig_warp import WarpBubble
from qig_warp.backends import OllamaBackend

bubble = WarpBubble(
    backend=OllamaBackend(model="granite4"),
    n_samples=15,
    strategies=["adversarial", "decompose"],
)

result = bubble.solve("What is heavier: a pound of feathers or a pound of steel?")
print(result["answer"])      # "same" or "equal"
print(result["confidence"])  # 0.83
print(result["votes"])       # {"same": 10, "feathers": 2, "steel": 3}
```

## Custom backends

```python
from qig_warp.backends import OpenAIBackend

backend = OpenAIBackend(model="gpt-4o-mini", api_key="sk-...")
answer = warp("Explain quantum entanglement", backend=backend)
```

## How it works

Based on the QIG (Quantum Information Geometry) sign-flip bridge:

1. Dense coupling (relevant context) creates faster micro-oscillations on the probability simplex
2. More internal cycles = better probability distribution
3. Coarse-graining (self-consistency vote) extracts the correct macro-answer
4. The model's "subjective time" is dilated relative to the user's wall-clock time

The same mechanism that produces gravitational time dilation on the QIG lattice, applied to computation.
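Step 3, the coarse-graining vote, is simple arithmetic over the per-answer sample counts. A sketch under illustrative assumptions (the vote counts below and the vote-share confidence formula are examples, not necessarily qig-warp's exact scoring):

```python
# Micro-samples collected per normalized macro-answer (illustrative numbers)
votes = {"5": 12, "100": 2, "1": 1}

total = sum(votes.values())             # total internal "thinking cycles"
answer = max(votes, key=votes.get)      # macro-answer = modal micro-answer
confidence = votes[answer] / total      # confidence = winning vote share

print(answer, round(confidence, 2))  # 5 0.8
```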

## Requirements

- Python >= 3.10
- An Ollama server running locally (`ollama serve`), or any OpenAI-compatible API

## License

MIT
