Metadata-Version: 2.4
Name: django-vcache
Version: 2.0.3
Classifier: Development Status :: 5 - Production/Stable
Classifier: Framework :: Django
Classifier: Framework :: Django :: 5.0
Classifier: Framework :: Django :: 5.1
Classifier: Framework :: Django :: 5.2
Classifier: Framework :: Django :: 6.0
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Programming Language :: Rust
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Dist: django>=5.0
Requires-Dist: ormsgpack
Requires-Dist: pyzstd ; python_full_version < '3.14'
License-File: LICENSE
Summary: A specialized, lightweight Django cache backend for Valkey.
Author-email: David Burke <david@burkesoftware.com>
Requires-Python: >=3.12
Description-Content-Type: text/markdown; charset=UTF-8; variant=GFM
Project-URL: Bug Tracker, https://gitlab.com/glitchtip/django-vcache/issues
Project-URL: Homepage, https://gitlab.com/glitchtip/django-vcache

# django-vcache

A fast, async-native Django cache backend for Valkey (and Redis). Opinionated and secure by default.

It powers the [GlitchTip](https://glitchtip.com) open-source error tracking platform.

## Why django-vcache?

- **Fast** — Rust I/O driver with msgpack serialization (via ormsgpack). 13x faster than Django's built-in RedisCache under concurrent ASGI load.
- **Async-native** — True async I/O, not `sync_to_async` thread-pool wrappers. Single multiplexed connection handles all concurrency.
- **Secure by default** — No pickle. Msgpack cannot execute arbitrary code on deserialization. No special configuration needed.
- **Efficient** — One multiplexed connection for both sync and async. No connection pool to tune. Automatic zstd compression for large values.
- **Python 3.14 ready** — Uses stdlib `compression.zstd` on 3.14+, no third-party compression dependency needed.

## Benchmarks

Measured on Python 3.14 with granian ASGI server, 300 concurrent connections, 60 seconds per test. Each request performs 6 cache operations (get, get_many, set, set compressed, incr, get compressed). Both backends hitting the same local Valkey instance.

Reproduce with `docker compose run --rm app python bench_compare.py`.

| | django-vcache | Django RedisCache | Δ |
|---|---|---|---|
| **Requests/sec** | 1,518 | 113 | **+1,243%** |
| **Peak RSS** | 135 MB | 508 MB | **−73%** |
| **Valkey connections** | 2 | 3,654 | |

Django's `RedisCache` wraps every async call in `sync_to_async`, spawning a thread per operation. Under load this creates thousands of connections and unbounded memory growth. django-vcache uses a single multiplexed connection with native async I/O.
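
The difference can be sketched in a few lines (illustrative only, not django-vcache internals; `asyncio.to_thread` stands in here for a `sync_to_async`-style adapter):

```python
import asyncio
import threading
import time

# Conceptual sketch: a sync_to_async-style adapter hands every blocking cache
# call to a worker thread. Native async I/O would instead stay on the single
# event-loop thread and multiplex all operations over one connection.
def blocking_get(key):
    time.sleep(0.01)  # pretend this is a blocking socket read
    return threading.current_thread().name

async def main():
    # Four concurrent "cache gets": each one occupies a pool thread.
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_get, k) for k in range(4))
    )

worker_names = asyncio.run(main())
```

Every operation that leaves the event loop like this costs a thread (and, in Django's backend, potentially a connection); at 300 concurrent clients that adds up quickly.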

Status: Stable and used in production.

## Installation

```bash
pip install django-vcache
```

## Usage

Update your `settings.py` to configure the cache backend:

```python
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-valkey-host:6379/1",
    },
}
```

You can then use Django's cache framework as usual:

```python
from django.core.cache import cache

cache.set('my_key', 'my_value', 30)
value = cache.get('my_key')
```

## API

django-vcache implements the full [Django cache API](https://docs.djangoproject.com/en/stable/topics/cache/) with native async variants (`aget`, `aset`, etc.). All standard methods work as documented by Django:

`get`, `set`, `add`, `delete`, `touch`, `incr`, `decr`, `has_key`, `get_many`, `set_many`, `delete_many`, `get_or_set`, `clear`

### Extras beyond Django

These methods extend the standard Django cache API:

**`lock(key, timeout=None, blocking=True, ...)`** / **`alock(...)`** — Distributed locking via Valkey. Not available in cluster mode.

```python
with cache.lock("my-lock", timeout=10):
    # exclusive access
    ...

async with cache.alock("my-lock", timeout=10):
    ...
```

**`get_raw_client()`** — Access the underlying Rust driver instance for operations not covered by the Django cache API. Reuses the existing connection.

```python
client = cache.get_raw_client()
```

### Atomic incr/decr

`incr` and `decr` use native Redis `INCRBY`/`DECRBY` commands. If the key does not exist, it is created with the delta value (Redis behavior). This is atomic and safe for concurrent counters.

### Serialization

Values are serialized with msgpack (via [ormsgpack](https://github.com/aviramha/ormsgpack)) by default. Values larger than 1 KB (by default) are compressed with zstd. Integer values are stored as raw strings so that native `INCRBY` works on them.

For projects that need to cache arbitrary Python objects (Django models, custom classes, etc.), a pickle serializer is available:

```python
"OPTIONS": {
    "SERIALIZER": "pickle",  # default: "msgpack"
}
```

> **Note:** Pickle can execute arbitrary code on deserialization. Only use it if you trust all data in your cache.

Configure the compression threshold with `COMPRESS_MIN_LEN` in `OPTIONS`:

```python
"OPTIONS": {
    "COMPRESS_MIN_LEN": 2048,  # compress values larger than 2KB
}
```
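
Conceptually, the threshold works like the following sketch (illustrative only: `zlib` stands in for zstd, and the one-byte flag is not the library's actual wire format):

```python
import zlib

COMPRESS_MIN_LEN = 1024  # mirrors the default 1 KB threshold

def encode(payload: bytes, threshold: int = COMPRESS_MIN_LEN) -> bytes:
    # Prefix a flag byte: 0x01 = compressed, 0x00 = stored raw.
    if len(payload) > threshold:
        return b"\x01" + zlib.compress(payload)
    return b"\x00" + payload

def decode(blob: bytes) -> bytes:
    if blob[:1] == b"\x01":
        return zlib.decompress(blob[1:])
    return blob[1:]

small, large = b"x" * 10, b"y" * 5000
assert decode(encode(small)) == small
assert decode(encode(large)) == large
assert encode(small)[0] == 0            # below threshold: stored raw
assert encode(large)[0] == 1            # above threshold: compressed
assert len(encode(large)) < len(large)  # repetitive data compresses well
```

Small values skip compression entirely, so the CPU cost is only paid where it buys a meaningful size reduction.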

## Async usage

Async cache methods require running under an ASGI server, for example `granian`:

```bash
granian --interface asgi --host 0.0.0.0 --port 8000 myproject.asgi:application
```

Sync methods (`get`, `set`, etc.) work in any context (ASGI or WSGI).


## FAQ

**Is this production-ready?**
Yes. It powers [GlitchTip](https://glitchtip.com) in production. The driver automatically reconnects after Valkey restarts and supports standalone, Sentinel, and Cluster topologies.

**How is this different from django-valkey?**
django-vcache is opinionated — one serializer, one connection strategy, no knobs to turn. If you need maximum flexibility (custom serializers, connection pools, pluggable clients), use [django-valkey](https://github.com/django-commons/django-valkey). If you want something fast that just works, use this.

**Does it work with Redis?**
Yes. `redis://` and `rediss://` URLs work. Valkey and Redis are wire-compatible.
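
For example, the configuration from the Usage section works unchanged against a TLS-enabled Redis endpoint (hostname is a placeholder):

```python
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        # rediss:// enables TLS; plain redis:// also works.
        "LOCATION": "rediss://your-redis-host:6380/1",
    },
}
```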

## Contributing

### Development Environment

This project uses Docker for development. To get started:

1.  Clone the repository.
2.  Build and start the services:

    ```bash
    docker compose up -d --build
    ```

This will start a Valkey container and an `app` container with the Django sample project running on `http://localhost:8000`. The development server uses `granian` with auto-reload, so changes you make to the code will be reflected automatically.

### Using Valkey Sentinel

To run the development environment with Valkey Sentinel enabled, use the override compose file:

```bash
docker compose -f compose.yml -f compose.sentinel.yml up -d --build
```

You will also need to configure your `sample/settings.py` to use the Sentinel URL. The recommended way is to set the `VALKEY_URL` environment variable before starting the services:

```bash
export VALKEY_URL="sentinel://localhost:26379/mymaster/1"
```
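
Assuming `sample/settings.py` reads the variable along these lines (illustrative; check the actual file):

```python
import os

# Fall back to a local standalone Valkey when VALKEY_URL is unset.
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": os.environ.get("VALKEY_URL", "valkey://localhost:6379/1"),
    },
}
```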

The application will then be available at `http://localhost:8000`.

### Using Valkey Cluster

To use `django-vcache` with a Valkey Cluster, set the `CLUSTER_MODE` option to `True` in your cache configuration. The `LOCATION` should point to one of the cluster's nodes; the driver will automatically discover the rest of the cluster nodes.

```python
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-cluster-node-1:6379/0",
        "OPTIONS": {
            "CLUSTER_MODE": True,
        }
    },
}
```

Note that distributed locking (via `cache.lock()` and `cache.alock()`) is not supported when `CLUSTER_MODE` is enabled. Attempting to use these methods will raise a `NotImplementedError`.

To run the development environment with Valkey Cluster enabled, use the override compose file and environment variables:

```bash
VALKEY_URL='valkey://valkey-1:6379/0' VALKEY_CLUSTER_MODE='true' \
    docker compose -f compose.yml -f compose.cluster.yml up -d --build
```

The application will then be available at `http://localhost:8000`.

### Running Tests

To run the test suite, execute the following command:

```bash
docker compose run --rm app bash -c "python sample/manage.py test"
```

## Credits

Inspired by the excellent work of django-valkey, django-redis, and valkey-glide, but re-architected for strict resource efficiency and modern async/sync hybrid stacks.

