Metadata-Version: 2.4
Name: antaris-suite
Version: 6.0.9
Summary: Complete Antaris AI infrastructure for OpenClaw — memory, routing, safety, and context management
Author-email: Antaris Analytics <dev@antarisanalytics.ai>
License-Expression: Apache-2.0
Project-URL: Homepage, https://antarisanalytics.ai
Project-URL: Documentation, https://docs.antarisanalytics.ai
Project-URL: Repository, https://github.com/Antaris-Analytics-LLC/antaris-suite
Keywords: ai,agents,memory,guardrails,context,routing,mcp
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: mcp
Requires-Dist: mcp>=1.0.0; extra == "mcp"
Provides-Extra: semantic
Requires-Dist: sentence-transformers>=2.0.0; extra == "semantic"
Provides-Extra: pro
Requires-Dist: antaris-suite[mcp,semantic]; extra == "pro"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Dynamic: license-file

# Antaris Core v5.5.0

Agent infrastructure for intelligent, secure, and memory-persistent AI systems.

**v5.5.0** — Cross-channel recency injection, parsica-memory rename, bootstrap guard, enricher key fix, contracts schema audit

## What's New in v5.5.0

### Cross-Channel Recency Injection ⚡
Agents now automatically receive context from recent conversations in *other* channels and servers. Before each response, the plugin queries the memory store for recent memories outside the current session and injects them as ambient context, giving agents awareness of what is happening across all their channels without explicit queries.

- **Configurable:** `recencyEnabled` (default: true), `recencyWindow` (default: 6 hours), `recencyLimit` (default: 5 entries)
- **Session-aware ambient mode:** `session_awareness` (default: `off`) enables the new tracker-backed ambient cross-session system; once it is on, `ambient_awareness` controls whether ambient context is actually injected.
- **Deduplication:** memories already present from semantic recall are excluded
- **Channel labels:** each injected memory includes its source channel for attribution
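
The recency pass described in the bullets above can be sketched as follows. This is an illustration, not the plugin's actual code: `store.recent`, the memory field names, and the `[channel]` label format are all assumptions.

```python
from datetime import datetime, timedelta, timezone

def inject_recency(store, session_id, recalled_ids,
                   window_hours=6, limit=5):
    """Sketch: pull recent memories from other sessions, skip anything
    semantic recall already surfaced, and label each entry with its
    source channel for attribution."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    ambient = []
    for mem in store.recent(since=cutoff):    # hypothetical store API
        if mem["session_id"] == session_id:   # current session: skip
            continue
        if mem["id"] in recalled_ids:         # dedup vs semantic recall
            continue
        ambient.append(f"[{mem['channel']}] {mem['content']}")
        if len(ambient) >= limit:
            break
    return ambient
```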

### Parsica-Memory Rename
`antaris-memory` has been renamed to `parsica-memory` inside the suite. Parsica is the brand for memory + search products. The standalone PyPI package is `parsica-memory` (v2.1.3+). Internal imports remain `antaris_memory` for backward compatibility.
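
In downstream code, the rename plus the backward-compatible path might be handled like this. The fallback order is an illustration, not something the suite prescribes:

```python
# Prefer the new standalone package name; fall back to the legacy
# module that the suite still ships for backward compatibility.
try:
    import parsica_memory as memory   # standalone PyPI package
except ImportError:
    try:
        import antaris_memory as memory   # legacy import path
    except ImportError:
        memory = None  # neither installed in this environment
```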

### Previous (v5.3.1)
- **`check_bootstrap_files()`** — warns when workspace files approach OpenClaw's 35K char injection limit
- **`get_health()` bootstrap check** — `bootstrap_files_ok` in health reports
- **Enricher `ANTARIS_LLM_API_KEY`** — reads OpenClaw plugin config API key, zero extra configuration
- **Contracts v5.3.0** — schema audit, memory.py updated with 11 new fields
- **ESM-safe imports** — `fs`/`os` replacing `require()` calls in plugin
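
A minimal sketch of what a bootstrap-size check like `check_bootstrap_files()` does; the 90% warning threshold and the return shape are assumptions, not the shipped behavior:

```python
from pathlib import Path

BOOTSTRAP_LIMIT = 35_000  # OpenClaw's injection limit, in characters

def check_bootstrap_files(paths, warn_ratio=0.9):
    """Return (path, size) pairs for workspace files approaching the
    35K-char injection limit. warn_ratio is an assumed threshold."""
    warnings = []
    for p in paths:
        size = len(Path(p).read_text(encoding="utf-8"))
        if size >= BOOTSTRAP_LIMIT * warn_ratio:
            warnings.append((p, size))
    return warnings
```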

## Packages

| Package | Version | Description |
|---------|---------|-------------|
| `parsica-memory` | 2.3.0 | Persistent memory with 11-layer BM25F search, LLM enrichment, WAL, sharding, cross-channel recency |
| `antaris-router` | 5.3.0 | Intelligent model routing with cost tracking, confidence gating, A/B testing |
| `antaris-guard` | 5.3.0 | Prompt injection detection, PII filtering, rate limiting, behavioral analysis |
| `antaris-context` | 5.3.0 | Context compression, hard budget enforcement, summarization, relevance scoring |
| `antaris-pipeline` | 5.3.0 | Agent orchestration pipeline with per-stage telemetry and OpenClaw bridge |
| `antaris-openclaw-plugin` | 5.5.0 | OpenClaw plugin — auto-recall, auto-ingest, cross-channel recency, Discord bridge, compaction recovery |

## OpenClaw Plugin Configuration

| Key | Default | Description |
|-----|---------|-------------|
| `memoryPath` | `./antaris_memory_store` | Memory store path for the plugin bridge. |
| `enableGuard` | `false` | Enable the guard pipeline. |
| `guardMode` | `monitor` | Guard mode. When guard is disabled, plugin forces monitor mode. |
| `autoRecall` | `true` | Run pre-turn memory recall. |
| `autoIngest` | `true` | Store post-turn memories automatically. |
| `searchLimit` | `5` | Max recalled memories per turn. |
| `minRelevance` | `0.3` | Minimum relevance threshold for recall. |
| `crossSessionRecall` | `semantic` | Legacy cross-session recall mode for the old plugin path when `session_awareness` is `off`. |
| `session_awareness` | `off` | New tracker-backed cross-session mode. When not `off`, the ambient/session tracker is the authority for cross-session awareness. |
| `ambient_awareness` | `true` | Inject ambient context from the tracker when `session_awareness` is enabled. |
| `agentName` | `Agent` | Agent identity used for memory scoping/provenance and bridge calls. |
| `sessionIsolation` | `true` | Keep per-channel sessions separate. Set `false` for one unified session across channels. |
| `recencyEnabled` | `true` | Enable recency recall injection. |
| `recencyWindow` | `6` | Recency window in hours. |
| `recencyLimit` | `10` | Max recency memories injected. |
| `syncPeers` | `[]` | Optional sync peer list. |
| `syncDailyLogs` | `true` | Include daily logs in sync behavior. |
| `syncSchedule` | `03:00` | Scheduled sync time. |
| `syncMaxHours` | `24` | Max hours for sync lookback. |
| `enrichModel` | provider-dependent | Optional enrichment model override. |

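For reference, the documented keys and defaults as a Python dict (how OpenClaw nests this block in its own config file is not shown here):

```python
# Plugin keys and defaults from the table above.
antaris_plugin_config = {
    "memoryPath": "./antaris_memory_store",
    "enableGuard": False,
    "guardMode": "monitor",
    "autoRecall": True,
    "autoIngest": True,
    "searchLimit": 5,
    "minRelevance": 0.3,
    "crossSessionRecall": "semantic",
    "session_awareness": "off",
    "ambient_awareness": True,
    "agentName": "Agent",
    "sessionIsolation": True,
    "recencyEnabled": True,
    "recencyWindow": 6,        # hours
    "recencyLimit": 10,
    "syncPeers": [],
    "syncDailyLogs": True,
    "syncSchedule": "03:00",
    "syncMaxHours": 24,
    # enrichModel is provider-dependent and omitted here
}
```
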
For **OpenClaw plugin** installation, see [`antaris-openclaw-plugin/INSTALL.md`](antaris-openclaw-plugin/INSTALL.md).

## Commands

### `/prune` — Memory Store Cleanup

Tiered memory cleanup with dual-layer protection (content keywords + enrichment/access score gate).

| Command | Description |
|---------|-------------|
| `/prune small` | Dry-run: pipeline fragments, heartbeats, entries <40 chars |
| `/prune medium` | Dry-run: extends small + zero-access >30 days, near-duplicates |
| `/prune large` | Dry-run: extends medium + zero-access >14 days with low decay |
| `/prune small\|medium\|large confirm` | Apply the prune (auto-backup first) |
| `/prune sessions` | Dry-run stale/aborted sessions |
| `/prune sessions confirm` | Remove stale sessions |
| `/prune undo` | Restore last backup |
| `/prune backups` | List available backups |
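
The tier logic above can be sketched roughly as follows. The field names, the decay threshold, and the exact predicates are assumptions drawn from the table; near-duplicate detection and the keyword/score protection gates are omitted for brevity.

```python
from datetime import datetime, timedelta, timezone

def prune_candidates(memories, tier, now=None):
    """Sketch of tiered selection: 'small' catches short fragments,
    'medium' adds stale zero-access entries (>30 days), 'large'
    tightens the zero-access window to 14 days for low-decay items."""
    now = now or datetime.now(timezone.utc)
    out = []
    for m in memories:
        short = len(m["content"]) < 40
        stale30 = (m["access_count"] == 0
                   and now - m["created"] > timedelta(days=30))
        stale14 = (m["access_count"] == 0 and m["decay"] < 0.2
                   and now - m["created"] > timedelta(days=14))
        if tier == "small" and short:
            out.append(m)
        elif tier == "medium" and (short or stale30):
            out.append(m)
        elif tier == "large" and (short or stale30 or stale14):
            out.append(m)
    return out
```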

### `context` — Cross-Channel Context Sync

Reads recent Discord channel history, summarizes each channel's activity via LLM, and ingests the summaries into the memory store. Any instance that runs it is immediately caught up on what every other channel has been doing.

**Usage:** Say `context 12` (or 3, 6, 24, 36, 48) in any channel. The number is hours of history to sync.

| Command | Description |
|---------|-------------|
| `context 3` | Sync last 3 hours |
| `context 6` | Sync last 6 hours |
| `context 12` | Sync last 12 hours (default) |
| `context 24` | Sync last 24 hours |
| `context 36` | Sync last 36 hours |
| `context 48` | Sync last 48 hours |

**Default channels synced:**
- `#antaris-analytics-llc`
- `#antaris-suite`
- `#antaris-bot`
- `#wealthhealth-antaris-forge`
- `#antaris-search`
- Personal DM channel

**How it works:**
1. Reads all messages from each channel within the time window (paginated, no caps)
2. Includes both human and bot messages (so instances see each other's work)
3. Summarizes each active channel via Haiku (cheap/fast)
4. Ingests each summary as `source="channel_sync"` episodic memory
5. Reports: channels synced, message counts, which channels had activity
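
The five steps can be sketched as a small pipeline. The callables `read_messages`, `summarize`, and `ingest` stand in for the Discord reader, the Haiku summarizer, and the memory-store ingest call; all three names are assumptions.

```python
def context_sync(channels, hours, read_messages, summarize, ingest):
    """Sketch of the steps above: read each channel's window,
    summarize only active channels, ingest each summary as a
    channel_sync memory, and report per-channel message counts."""
    report = {}
    for channel in channels:
        messages = read_messages(channel, hours)   # step 1: paginated read
        if not messages:                           # inactive: nothing to summarize
            report[channel] = 0
            continue
        summary = summarize(channel, messages)     # step 3: LLM summary
        ingest(summary, source="channel_sync")     # step 4: episodic memory
        report[channel] = len(messages)
    return report                                  # step 5: sync report
```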

**Note:** Currently runs through the agent (say "context 12" without slash). Plugin command routing (`/context`) is pending an OpenClaw command registration fix.

## Benchmarks

**v4.9.20 — Mac Mini M4 (10-core, 32GB) · Python 3.14 · 7,658 memories**

### Search Quality (doc2query self-recall, 150-sample benchmark)

| Metric | Result |
|--------|--------|
| R@1 | 61.9% |
| R@3 | 75.1% |
| R@5 | 79.3% |
| MRR | 0.688 |
| p50 | 84ms |
| p95 | 134ms |
| Provenance | 100% |

### Hard Corpus (30 vocabulary-gap queries, zero keyword overlap)

| Metric | Raw BM25 | With Enrichment |
|--------|----------|-----------------|
| R@1 | 10.0% | 46.7% |

### Search Engine Layers

1. BM25+ with δ normalization
2. BM25F per-field scoring (content/enriched/keywords/queries independent avg lengths)
3. Safelist normalizer (~50 domain morphological mappings)
4. LLM enrichment field boosts (enriched_summary 1.25×, search_queries 1.40×)
5. Top-K window filter (5,350 → 159 candidates)
6. Word expansion (9,007 words from SO + code + Wikipedia corpus)
7. Embedding reranker (Layer 10 — MiniLM centroid vectors)
8. PRF pseudo-relevance feedback (Layer 11)
9. Ingest quality gates (noise regex, length minimum, prefix-aware dedup)
10. Tiered storage (hot/warm/cold shards with LRU cache)
11. WAL (write-ahead log) for crash safety
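
Layer 1's BM25+ with the δ floor can be written as a per-term score. δ guarantees that any matching term contributes at least `idf * δ`, so long documents are not normalized down to zero. δ=0.5 matches the changelog; k1 and b below are generic defaults, not the suite's tuned values.

```python
import math

def bm25_plus(tf, df, n_docs, doc_len, avg_len,
              k1=1.2, b=0.75, delta=0.5):
    """BM25+ term score: classic BM25 saturation/length normalization
    plus the δ lower bound (Lv-Zhai style)."""
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return idf * (norm + delta)
```

BM25F (Layer 2) applies the same scoring per field (content, enriched, keywords, queries), each with its own average length, before combining with the field boosts listed above.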

## Changelog

### v4.9.20 (2026-03-08) — Current
- BM25F per-field scoring with independent field average lengths
- Query expansion removed (was inflating 3-word queries to 156 tokens)
- Keyword weight doubled (2×)
- Boost stacking cleanup (removed non-discriminative boosts)
- `/context` cross-channel sync command
- R@1: 61.9%, R@3: 75.1%, R@5: 79.3%

### v4.9.18 (2026-03-07)
- ChatGPT release review fixes
- All version strings unified across root/packages/plugin
- word_expansion.json loader handles both tuple-pair and string-list formats
- Session isolation behavior documented (None→wildcard is intentional)
- Root tests updated (220 passed, 0 failures)
- mypy python_version typo fixed

### v4.9.17 (2026-03-06)
- **24 bug fixes** (3 critical, 6 high, 7 medium, 8 low)
- Critical: content_norms dedup mismatch, live fact double-ingest, session summary synthesis non-functional
- High: shard merge enrichment loss, CrossSession TOCTOU race, shard cache FIFO→LRU, compact enrichment protection, WAL replay IDF, session key collapse
- Universal word expansion: 9,007 words from SO + code-search-net + Wikipedia + C4
- TDZ crash fix in agent_end (was silently killing all post-turn memory storage)

### v4.9.16 (2026-03-05)
- BM25+ (δ=0.5 floor), safelist normalizer, word-embedding query vector, PRF Layer 11
- `/prune` command: small/medium/large tiers, undo, backups
- R@1: 47.1%, MRR: 0.540

### v4.9.14 (2026-03-04)
- Word-embedding query vector (Layer 10 primary path)
- PRF Layer 11 pseudo-relevance feedback
- R@1: 45.3%, MRR: 0.531

### v4.9.13 (2026-03-04)
- BM25 normalization overhaul + safelist normalizer
- R@1: 39.6%, MRR: 0.473

## License

MIT
