Metadata-Version: 2.4
Name: cartisien-engram
Version: 0.6.0
Summary: Persistent semantic memory for AI agents — SQLite-backed, local-first, zero config
Author-email: Cartisien Interactive <jeff@cartisien.com>
License: MIT
Project-URL: Homepage, https://github.com/Cartisien/engram-py
Project-URL: Repository, https://github.com/Cartisien/engram-py
Keywords: ai,memory,agents,llm,sqlite,embeddings,semantic-search
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Provides-Extra: langchain
Requires-Dist: langchain>=0.1.0; extra == "langchain"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"

# cartisien-engram

> **Persistent semantic memory for AI agents — Python SDK**

```python
from engram import Engram, EngramConfig

memory = Engram(EngramConfig(db_path="./memory.db"))

# Store
memory.remember("user_123", "User prefers TypeScript and dark mode", "user")

# Recall semantically — finds the right memory without exact keyword match
results = memory.recall("user_123", "what are the user's preferences?", limit=5)
# [MemoryEntry(content='User prefers TypeScript and dark mode', similarity=0.82, ...)]
```

---

## Install

```bash
pip install cartisien-engram
```

### Optional: LangChain integration

```bash
pip install "cartisien-engram[langchain]"
```

### Optional: Local embeddings (recommended)

```bash
# Install Ollama: https://ollama.ai
ollama pull nomic-embed-text
```

Without Ollama running, Engram falls back to keyword search automatically.
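
You can also force keyword-only search even when Ollama is running (for example, to skip the embedding round-trip in tests) by switching off the `semantic_search` flag documented in the API section below. A minimal sketch:

```python
from engram import Engram, EngramConfig

# Disable embeddings entirely; recall() uses keyword matching only
memory = Engram(EngramConfig(db_path="./memory.db", semantic_search=False))
```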

---

## Quick Start

```python
from engram import Engram, EngramConfig

memory = Engram(EngramConfig(
    db_path="./agent.db",
    embedding_url="http://localhost:11434",  # Ollama default
))

# In your agent loop
def handle_message(session_id: str, user_input: str) -> str:
    # 1. Recall relevant context
    context = memory.recall(session_id, user_input, limit=5)
    context_str = "\n".join(f"[{e.role}]: {e.content}" for e in context)

    # 2. Build a prompt and call your LLM (`llm` is your own client, not part of engram)
    response = llm.chat(f"Context:\n{context_str}\n\nUser: {user_input}")

    # 3. Store both sides
    memory.remember(session_id, user_input, "user")
    memory.remember(session_id, response, "assistant")

    return response
```

---

## LangChain Integration

Drop-in replacement for `ConversationBufferMemory`:

```python
from engram.langchain import EngramMemory
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

memory = EngramMemory(
    session_id="user_abc",
    db_path="./memory.db",
    recall_limit=10
)

chain = ConversationChain(llm=OpenAI(), memory=memory)
chain.predict(input="My name is Jeff and I'm building GovScout")
# Memory persists on disk, so the same session_id recalls prior facts
chain.predict(input="What am I building?")  # Recalls "GovScout"
```
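
Because entries live in SQLite rather than in process memory, the second question works even from a brand-new process. A minimal sketch using the same constructor arguments as above:

```python
from engram.langchain import EngramMemory
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

# A later run: fresh process, same db_path and session_id
memory = EngramMemory(session_id="user_abc", db_path="./memory.db", recall_limit=10)
chain = ConversationChain(llm=OpenAI(), memory=memory)
chain.predict(input="What am I building?")  # Recalls "GovScout" from the earlier run
```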

---

## API

### `Engram(config?)`

```python
config = EngramConfig(
    db_path="./memory.db",       # SQLite path (default: ":memory:")
    max_context_length=4000,     # Max chars per entry
    embedding_url="http://localhost:11434",  # Ollama base URL
    embedding_model="nomic-embed-text",      # Embedding model
    semantic_search=True,        # Enable semantic search
)
memory = Engram(config)
```
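
Per the optional `config` parameter (and the `":memory:"` default noted above), constructing with no arguments should give an ephemeral in-memory store, which is handy for tests. A minimal sketch:

```python
from engram import Engram

# No config: db_path defaults to ":memory:", so nothing touches disk
memory = Engram()
memory.remember("test_session", "scratch entry", "user")
```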

### `remember(session_id, content, role="user", metadata=None)`

```python
entry = memory.remember("session_1", "User loves Thai food", "user")
# MemoryEntry(id=..., content=..., role='user', timestamp=...)
```
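
Assuming `metadata` accepts an arbitrary dict (its exact shape isn't specified here), structured tags can ride along with the text. A hypothetical example:

```python
# Hypothetical metadata shape: the accepted structure isn't documented above
entry = memory.remember(
    "session_1",
    "User loves Thai food",
    "user",
    metadata={"source": "onboarding_survey", "topic": "food"},
)
```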

### `recall(session_id, query=None, limit=10, options=None)`

Semantic search when Ollama is available; keyword fallback otherwise.

```python
results = memory.recall("session_1", "food preferences", limit=5)
# [MemoryEntry(..., similarity=0.84)]
```

### `history(session_id, limit=20)`

Chronological conversation history.

```python
chat = memory.history("session_1", limit=20)
```

### `forget(session_id, id=None, before=None)`

```python
memory.forget("session_1")                    # clear all
memory.forget("session_1", id="abc123")       # delete one
memory.forget("session_1", before=datetime.now())  # delete old
```

### `stats(session_id)`

```python
stats = memory.stats("session_1")
# {"total": 42, "by_role": {"user": 21, "assistant": 21}, "with_embeddings": 42}
```

### Context manager

```python
with Engram(EngramConfig(db_path="./memory.db")) as memory:
    memory.remember("s1", "test", "user")
```

---

## Part of the Cartisien Memory Suite

| Package | Language | Purpose |
|---------|----------|---------|
| [`cartisien-engram`](https://github.com/Cartisien/engram-py) | Python | This package |
| [`@cartisien/engram`](https://github.com/Cartisien/engram) | TypeScript/Node | TS SDK |
| [`@cartisien/engram-mcp`](https://github.com/Cartisien/engram-mcp) | TypeScript | MCP server |
| `@cartisien/extensa` | TypeScript | Vector infrastructure *(soon)* |
| `@cartisien/cogito` | TypeScript | Agent identity *(soon)* |

---

MIT © [Cartisien Interactive](https://cartisien.com)
