Metadata-Version: 2.4
Name: prajnyavan
Version: 0.1.3
Summary: Persistent, emotionally-weighted memory for any LLM
Author: Prajnyavan Team
License: MIT
Project-URL: Homepage, https://github.com/yourname/prajnyavan
Project-URL: Repository, https://github.com/yourname/prajnyavan
Project-URL: Issues, https://github.com/yourname/prajnyavan/issues
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.28
Provides-Extra: openai
Requires-Dist: openai>=1.0; extra == "openai"
Provides-Extra: cohere
Requires-Dist: cohere>=4.0; extra == "cohere"
Provides-Extra: local
Requires-Dist: sentence-transformers>=2.2; extra == "local"
Provides-Extra: all
Requires-Dist: openai>=1.0; extra == "all"
Requires-Dist: cohere>=4.0; extra == "all"
Requires-Dist: sentence-transformers>=2.2; extra == "all"

# Prajnyavan — The Memory Layer for AI

Persistent, emotionally-weighted memory for any LLM. Works with ChatGPT, Claude, Gemini, Perplexity — any model.

---

## Zero-config quickstart (3 steps)

```bash
pip install prajnyavan           # default uses free local embeddings
prajnyavan dev                   # starts a local daemon in ~/.prajnyavan
```

```python
from prajnyavan import auto_client

mem = auto_client(user_id="user_123")

@mem.wrap_llm(user_id="user_123")
def chat(prompt):
    return your_llm_call(prompt)  # e.g., OpenAI, Anthropic, local model

# Advanced: your wrapped function can accept optional kwargs if you want raw context
# def chat(prompt, memory_context=None, enriched_prompt=None): ...
# - memory_context: the retrieved memories list
# - enriched_prompt: the prompt with memory context injected
```
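Conceptually, the decorator retrieves relevant memories, prepends them to your prompt, and calls your function with the enriched text. This toy sketch illustrates that flow in plain Python (it is not the library's implementation; `toy_wrap_llm` and the hard-coded retriever are illustrative only):

```python
# Toy illustration of what a wrap_llm-style decorator does conceptually:
# retrieve top-k memories, inject them into the prompt, call the wrapped function.
from functools import wraps

def toy_wrap_llm(retrieve):
    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt, **kwargs):
            memories = retrieve(prompt)                       # list of memory strings
            context = "\n".join(f"- {m}" for m in memories)
            enriched = f"Relevant memories:\n{context}\n\nUser: {prompt}"
            return fn(enriched, **kwargs)
        return wrapper
    return decorator

@toy_wrap_llm(lambda p: ["Bruno is the user's dog"])
def chat(prompt):
    return prompt  # stand-in for a real LLM call

chat("What do you know about Bruno?")
```

The real decorator also handles storing the exchange back into memory; the sketch only shows the injection half.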

- No Docker, ports, or tokens to juggle for local use.
- If you set `OPENAI_API_KEY` or `COHERE_API_KEY`, embeddings auto-upgrade; otherwise the free local model is used (downloaded once and cached).
- Data and model cache live in `~/.prajnyavan`.
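The auto-upgrade precedence described above amounts to simple environment inspection. A minimal sketch of that selection logic (not the library's actual code; `pick_embedding_provider` is a hypothetical name):

```python
import os

def pick_embedding_provider():
    """Mirror the documented precedence: OpenAI key, then Cohere key, else local."""
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("COHERE_API_KEY"):
        return "cohere"
    return "local"  # free sentence-transformers model, downloaded once and cached
```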

CLI helpers:
- `prajnyavan status` — check health, port, model cache
- `prajnyavan stop` — stop the daemon
- `prajnyavan reset` — stop and clear local data/cache

---

## Works with any LLM

```python
import anthropic
from prajnyavan import auto_client

mem = auto_client(user_id="user_123")

@mem.wrap_llm(user_id="user_123")   # same user_id = same memories
def chat_with_claude(prompt):
    return anthropic.Anthropic().messages.create(
        model="claude-opus-4-6",
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}]
    ).content[0].text

# Claude reads the same memories as ChatGPT
chat_with_claude("What do you know about Bruno?")
# → "Bruno is your dog who loves long walks..."
```

---

## Production / Advanced (Docker & K8s)

If you prefer containerized deploys, the Rust service still runs fine in Docker:

```bash
docker-compose up -d
curl http://localhost:9999/health
```

Then connect with:

```python
from prajnyavan import MemoryClient, get_token

token = get_token("http://localhost:9999")
mem = MemoryClient(base_url="http://localhost:9999", token=token, embedding_provider="openai")
```

---

## How it works

Every message is embedded (locally by default) and stored with recency and emotion scores.  
On each new prompt, only the top few relevant memories (ranked by similarity × importance × recency) are injected, not the whole history.  
The model stays focused, even with tens of thousands of past chats.
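The ranking product can be sketched in a few lines, with recency modeled as exponential decay (illustrative weights and half-life, not the service's exact formula):

```python
def memory_score(similarity, importance, age_days, half_life_days=30.0):
    """similarity × importance × recency, with recency as exponential decay."""
    recency = 0.5 ** (age_days / half_life_days)
    return similarity * importance * recency

# A fresh memory outranks an old one of equal relevance and importance.
fresh = memory_score(similarity=0.9, importance=0.8, age_days=1)
stale = memory_score(similarity=0.9, importance=0.8, age_days=90)
```

Only the highest-scoring handful of memories make it into the prompt, which keeps the injected context short regardless of history size.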

---

## Requirements

- Python 3.9+
- For pure local use: nothing else (the local embedding model downloads once).
- For hosted embeddings: set `OPENAI_API_KEY` or `COHERE_API_KEY`.

---

## Testing

Install the dev dependencies (which include pytest) and run the tests from inside the project virtualenv:

```bash
python -m pip install -r requirements-dev.txt
python -m pytest -k dev_roundtrip -m integration -q
```

To exercise the integration roundtrip end-to-end, set the flag:

```bash
RUN_PRAJNYAVAN_INTEGRATION=1 python -m pytest -k dev_roundtrip -m integration -q -vv
```

Shortcut (creates venv if missing and runs the integration test):

```powershell
pwsh -File scripts/test.ps1
```

---

## License

MIT
