Metadata-Version: 2.4
Name: pulse-ollama
Version: 0.1.0
Summary: Ollama adapter for PULSE Protocol — run local AI models with the same interface as cloud providers
Author-email: PULSE Protocol Team <pulse@protocol.org>
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/pulseprotocolorg-cyber/pulse-ollama
Project-URL: Repository, https://github.com/pulseprotocolorg-cyber/pulse-ollama
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: pulse-protocol>=0.5.0
Requires-Dist: requests>=2.28.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"

# pulse-ollama

**Ollama adapter for [PULSE Protocol](https://github.com/pulseprotocolorg-cyber/pulse-python) — run local AI models with the same interface as cloud providers.**

Your data never leaves your machine. Because inference runs locally, there is no third-party processor to account for, which greatly simplifies GDPR compliance.

```python
# Switch from cloud to local — one line changes:
# from pulse_openai import OpenAIAdapter as AI; adapter = AI(api_key="sk-...")
from pulse_ollama import OllamaAdapter as AI; adapter = AI(model="llama3.2")

# Everything below stays EXACTLY the same
from pulse import PulseMessage

msg = PulseMessage(action="ACT.ANALYZE.SENTIMENT", parameters={"text": "I love open source!"})
response = adapter.send(msg)
print(response.content["parameters"]["result"])
```

## Why pulse-ollama?

| Feature | Cloud (OpenAI/Anthropic) | Local (Ollama) |
|---------|--------------------------|----------------|
| Data leaves your machine | Yes | **No** |
| GDPR compliance | Depends on DPA | **Simpler: data stays local** |
| Works offline | No | **Yes** |
| API key required | Yes | **No** |
| Usage cost | Per token | **Free** |

## Installation

```bash
pip install pulse-ollama
```

**Requirements:** [Ollama](https://ollama.ai) must be installed and running.

```bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2       # 2GB, fast
ollama pull mistral        # 4GB, high quality
ollama pull phi3           # 2GB, efficient
ollama pull gemma2         # 5GB, Google's model
```
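
Before creating an adapter, you can check that the Ollama server is actually reachable. The sketch below hits Ollama's `/api/tags` HTTP endpoint (which lists installed models) on the default port and uses the `requests` library that `pulse-ollama` already depends on; treat it as an illustrative helper, not part of this package's API.

```python
import requests

def ollama_is_running(host: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at `host`."""
    try:
        resp = requests.get(f"{host}/api/tags", timeout=5)
        resp.raise_for_status()
        models = [m["name"] for m in resp.json().get("models", [])]
        print(f"Ollama is up with {len(models)} model(s) installed: {models}")
        return True
    except requests.RequestException:
        print("Ollama is not reachable; start it with `ollama serve`.")
        return False

ollama_is_running()
```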

## Quick Start

```python
from pulse import PulseMessage
from pulse_ollama import OllamaAdapter

adapter = OllamaAdapter(model="llama3.2")

# See what models you have installed
print(adapter.list_models())
# ['llama3.2:latest', 'mistral:latest', 'phi3:latest']

# Ask a question
msg = PulseMessage(
    action="ACT.QUERY.DATA",
    parameters={"query": "Explain PULSE Protocol in one sentence"}
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])

# Sentiment analysis
msg = PulseMessage(
    action="ACT.ANALYZE.SENTIMENT",
    parameters={"text": "I love this open source project!"}
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])
# {"sentiment": "positive", "confidence": 0.95, "explanation": "..."}

# Translate
msg = PulseMessage(
    action="ACT.TRANSFORM.TRANSLATE",
    parameters={"text": "Hello, world!", "target_language": "German"}
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])
# Hallo, Welt!
```

## Supported Actions

| PULSE Action | Description |
|---|---|
| `ACT.QUERY.DATA` | Ask questions, get answers |
| `ACT.CREATE.TEXT` | Generate text from instructions |
| `ACT.ANALYZE.SENTIMENT` | Analyze emotional tone |
| `ACT.ANALYZE.PATTERN` | Find patterns in data |
| `ACT.TRANSFORM.TRANSLATE` | Translate between languages |
| `ACT.TRANSFORM.SUMMARIZE` | Summarize long text |
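
The remaining actions follow the same request shape as the Quick Start examples. As an illustration, here is a summarization call; the `text` parameter name mirrors the sentiment and translation examples above and is an assumption, so check the action schema in `pulse-protocol` if your version differs.

```python
from pulse import PulseMessage
from pulse_ollama import OllamaAdapter

adapter = OllamaAdapter(model="llama3.2")

long_article = (
    "PULSE Protocol defines a provider-agnostic message format so the same "
    "request can be served by cloud APIs or local models. Adapters translate "
    "each action into provider-specific calls while keeping the interface stable."
)

# Summarize long text (parameter name assumed by analogy with the examples above)
msg = PulseMessage(
    action="ACT.TRANSFORM.SUMMARIZE",
    parameters={"text": long_article},
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])
```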

## Configuration

```python
adapter = OllamaAdapter(
    model="llama3.2",                        # Default model
    host="http://localhost:11434",           # Ollama server URL
    timeout=120,                             # Inference timeout (seconds)
)

# Override model per-request
msg = PulseMessage(
    action="ACT.CREATE.TEXT",
    parameters={
        "instructions": "Write a poem about AI",
        "model": "mistral",           # Override default
        "temperature": 0.9,           # Creativity
    }
)

# Remote Ollama server (shared team deployment)
adapter = OllamaAdapter(host="http://ai-server.company.com:11434")
```
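
If different machines have different models pulled, one option is to choose the adapter's default model from whatever `list_models()` reports. This is just a sketch; the preference order is an arbitrary example.

```python
from pulse_ollama import OllamaAdapter

# Example preference order; adjust to taste
PREFERRED = ["mistral", "llama3.2", "phi3"]

probe = OllamaAdapter(model="llama3.2")   # temporary adapter just to query the server
installed = probe.list_models()           # e.g. ['llama3.2:latest', 'mistral:latest']

chosen = next(
    (name for name in PREFERRED if any(m.startswith(name) for m in installed)),
    "llama3.2",                           # fall back to the default used throughout this README
)
adapter = OllamaAdapter(model=chosen, timeout=120)
```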

## Provider Switching

The whole point of PULSE adapters is that **switching providers is one line**:

```python
# Local model (privacy, offline, free)
from pulse_ollama import OllamaAdapter as AI
adapter = AI(model="llama3.2")

# Cloud (scale, latest models)
# from pulse_openai import OpenAIAdapter as AI
# adapter = AI(api_key="sk-...")

# Everything else stays identical
response = adapter.send(msg)
```
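
In practice the switch is often gated on an environment variable, so the same script runs against a local model during development and a cloud provider in production. A minimal sketch follows; the `PULSE_PROVIDER` and `OLLAMA_MODEL` variable names are made up for this example.

```python
import os

from pulse import PulseMessage

# Choose a provider at startup; everything after this block is identical.
if os.getenv("PULSE_PROVIDER", "ollama") == "openai":
    from pulse_openai import OpenAIAdapter
    adapter = OpenAIAdapter(api_key=os.environ["OPENAI_API_KEY"])
else:
    from pulse_ollama import OllamaAdapter
    adapter = OllamaAdapter(model=os.getenv("OLLAMA_MODEL", "llama3.2"))

msg = PulseMessage(
    action="ACT.QUERY.DATA",
    parameters={"query": "Explain PULSE Protocol in one sentence"},
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])
```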

## Part of the PULSE Ecosystem

```
pulse-protocol    # Core (install this first)
pulse-openai      # OpenAI GPT
pulse-anthropic   # Anthropic Claude
pulse-ollama      # Local models (this package)
pulse-gemini      # Google Gemini
pulse-binance     # Binance exchange
pulse-bybit       # Bybit exchange
pulse-kraken      # Kraken exchange
```

## License

Apache 2.0 — free forever.

**GitHub:** [github.com/pulseprotocolorg-cyber/pulse-python](https://github.com/pulseprotocolorg-cyber/pulse-python)
