Metadata-Version: 2.4
Name: signlangtk
Version: 0.1.6
Summary: Sign Language Toolkit for sign language research
Author: Sign Language Research Team
License-Expression: CC-BY-NC-ND-4.0
Project-URL: Repository, https://github.com/ed-fish/Sign-Language-Toolkit
Project-URL: Documentation, https://ed-fish.github.io/Sign-Language-Toolkit/
Project-URL: Issues, https://github.com/ed-fish/Sign-Language-Toolkit/issues
Project-URL: Changelog, https://github.com/ed-fish/Sign-Language-Toolkit/releases
Keywords: sign language,computer vision,machine learning,linguistics,ELAN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.24.0
Requires-Dist: scipy>=1.10.0
Requires-Dist: h5py>=3.8.0
Requires-Dist: tqdm>=4.65.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: click>=8.1.0
Requires-Dist: defusedxml>=0.7.0
Requires-Dist: nltk>=3.8.0
Requires-Dist: huggingface_hub>=0.20.0
Provides-Extra: mediapipe
Requires-Dist: mediapipe>=0.10.0; extra == "mediapipe"
Provides-Extra: wilor
Requires-Dist: torch>=2.0.0; extra == "wilor"
Requires-Dist: smplx>=0.1.28; extra == "wilor"
Requires-Dist: pytorch-lightning>=2.0.0; extra == "wilor"
Requires-Dist: yacs>=0.1.8; extra == "wilor"
Requires-Dist: ultralytics>=8.0.0; extra == "wilor"
Requires-Dist: timm>=0.9.0; extra == "wilor"
Requires-Dist: dill>=0.3.0; extra == "wilor"
Provides-Extra: nlf
Requires-Dist: torch>=2.0.0; extra == "nlf"
Provides-Extra: teaser
Requires-Dist: torch>=2.0.0; extra == "teaser"
Requires-Dist: ultralytics>=8.0.0; extra == "teaser"
Requires-Dist: timm>=0.9.0; extra == "teaser"
Provides-Extra: rtmpose
Requires-Dist: torch>=2.0.0; extra == "rtmpose"
Requires-Dist: mmpose>=1.1.0; extra == "rtmpose"
Requires-Dist: mmdet>=3.0.0; extra == "rtmpose"
Requires-Dist: mmengine>=0.7.0; extra == "rtmpose"
Requires-Dist: mmcv>=2.0.0; extra == "rtmpose"
Requires-Dist: openmim>=0.3.0; extra == "rtmpose"
Requires-Dist: decord>=0.6.0; extra == "rtmpose"
Provides-Extra: smplfx
Requires-Dist: torch>=2.0.0; extra == "smplfx"
Requires-Dist: smplx>=0.1.28; extra == "smplfx"
Requires-Dist: h5py>=3.10.0; extra == "smplfx"
Requires-Dist: hdf5plugin>=4.0.0; extra == "smplfx"
Requires-Dist: decord>=0.6.0; extra == "smplfx"
Provides-Extra: torch
Requires-Dist: torch>=2.0.0; extra == "torch"
Requires-Dist: torchvision>=0.15.0; extra == "torch"
Provides-Extra: data
Requires-Dist: lmdb>=1.4.0; extra == "data"
Requires-Dist: msgpack>=1.0.0; extra == "data"
Provides-Extra: metrics
Requires-Dist: sacrebleu>=2.3.0; extra == "metrics"
Requires-Dist: rouge-score>=0.1.2; extra == "metrics"
Provides-Extra: analysis
Requires-Dist: scikit-learn>=1.3.0; extra == "analysis"
Requires-Dist: umap-learn>=0.5.0; extra == "analysis"
Requires-Dist: hdbscan>=0.8.0; extra == "analysis"
Requires-Dist: albumentations>=1.3.0; extra == "analysis"
Provides-Extra: vis
Requires-Dist: matplotlib>=3.7.0; extra == "vis"
Requires-Dist: opencv-python>=4.8.0; extra == "vis"
Provides-Extra: api
Requires-Dist: fastapi>=0.109.0; extra == "api"
Requires-Dist: uvicorn[standard]>=0.25.0; extra == "api"
Requires-Dist: pydantic>=2.5.0; extra == "api"
Requires-Dist: python-multipart>=0.0.6; extra == "api"
Requires-Dist: slowapi>=0.1.9; extra == "api"
Requires-Dist: openai>=1.12.0; extra == "api"
Requires-Dist: anthropic>=0.39.0; extra == "api"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Provides-Extra: docs
Requires-Dist: mkdocs>=1.5.0; extra == "docs"
Requires-Dist: mkdocs-material>=9.5.0; extra == "docs"
Requires-Dist: mkdocstrings[python]>=0.24.0; extra == "docs"
Requires-Dist: pymdown-extensions>=10.0; extra == "docs"
Provides-Extra: all
Requires-Dist: signlangtk[analysis,api,data,mediapipe,metrics,nlf,teaser,torch,vis,wilor]; extra == "all"
Dynamic: license-file

# Sign Language Toolkit (SLTK)

A research toolkit for sign language video analysis. SLTK provides a complete pipeline from raw video to rich, multi-tier ELAN annotation files: **3D hand reconstruction**, **full-body pose**, **face tracking**, **automatic sign segmentation**, **gloss spotting**, and **non-manual signal detection**.

```
Video (.mp4)
    │
    ├─ WiLoR ──────► 3D hand keypoints + MANO rotations   (_wilor.h5)
    ├─ NLF ────────► Full-body SMPL-X pose                 (_nlf.h5)
    ├─ TEASER ─────► FLAME face parameters                 (_teaser.h5)
    │
    ├─ Segmenter ──► Sign boundaries (BIO labels)
    ├─ SignRep ────► Gloss spotting (dictionary matching)
    ├─ NMS ────────► Blinks, nods, shakes, mouth, gaze
    │
    └─ All results ► Multi-tier ELAN file (.eaf)
```

## Installation

```bash
# Core library (ELAN I/O, CLI, corpus tools)
pip install signlangtk

# With web interface
pip install "signlangtk[api]"

# With hand extraction (WiLoR) — requires CUDA
pip install "signlangtk[wilor]"

# Full body (NLF) + face (TEASER)
pip install "signlangtk[nlf,teaser]"

# Everything (excludes rtmpose/smplfx which need mmcv via mim)
pip install "signlangtk[all]"
```

The PyPI package is `signlangtk`; the Python import is `sltk`:

```python
import sltk
from sltk.extraction.wilor import WiLoRExtractor
from sltk.extraction.nlf import NLFExtractor
from sltk.extraction.teaser import TeaserExtractor
```

Requires Python 3.10+. For development from source:

```bash
git clone https://github.com/ed-fish/Sign-Language-Toolkit.git
cd Sign-Language-Toolkit
pip install -e ".[dev,api]"
```

---

## Quick Start: Video to Multi-Tier ELAN

This end-to-end example takes a single video and produces an ELAN file with sign boundaries and non-manual signals on separate tiers. Gloss spotting can be layered on top with the SignRep pipeline described later.

```python
from sltk.extraction.wilor import WiLoRExtractor
from sltk.extraction.teaser import TeaserExtractor
from sltk.segmentation.runner import get_runner
from sltk.segmentation.h5_loader import h5_to_features
from sltk.segmentation.postprocess import extract_segments
from sltk.nms.runner import detect_nms, export_results
from sltk.io.elan_roundtrip import ElanDocument

VIDEO = "recording.mp4"
FPS = 25.0

# ── Step 1: Extract poses ───────────────────────────────────────────
# Hands (WiLoR → MANO 3D)
with WiLoRExtractor() as ext:
    ext.load_model()
    ext.extract_from_video(VIDEO, "recording_wilor.h5")

# Face (TEASER → FLAME)
with TeaserExtractor() as ext:
    ext.load_model()
    ext.extract_from_video(VIDEO, "recording_teaser.h5")

# ── Step 2: Segment signs ──────────────────────────────────────────
features = h5_to_features("recording_wilor.h5")   # (T, 192)
runner = get_runner()
labels = runner.predict(features)                  # 0=OUT, 1=IN, 2=BEGIN
segments = extract_segments(labels)                # [(start, end), ...]

# ── Step 3: Detect non-manual signals ──────────────────────────────
blinks, nms_events, quality = detect_nms(
    "recording_teaser.h5",
    detectors={"all"},
)

# ── Step 4: Build multi-tier ELAN file ─────────────────────────────
doc = ElanDocument.new(video_path=VIDEO)

# Add segmentation tier
doc.add_tier("Segmentation")
for start_frame, end_frame in segments:
    doc.add_segment("Segmentation", start_frame / FPS, end_frame / FPS, "SIGN")

# Add NMS tiers (blinks, head movements, mouth, eyebrows, gaze)
for tier_name in ["BLINK", "HEAD-NOD", "HEAD-SHAKE", "HEAD-TILT",
                  "MOUTH-MOVEMENT", "EYEBROW-RAISE", "EYE-GAZE"]:
    doc.add_tier(tier_name)

for b in blinks:
    doc.add_segment("BLINK", b.start_frame / FPS, b.end_frame / FPS, "blink")

for ev in nms_events:
    doc.add_segment(ev.tier, ev.start_frame / FPS, ev.end_frame / FPS, ev.label)

doc.save("recording_full.eaf")
```

Open `recording_full.eaf` in ELAN to see all tiers aligned with the video.

---

## Extraction: Hands, Body, and Face

All extractors share the same interface: `load_model()`, `extract_from_video()`, `process_batch()`. Weights are auto-downloaded from HuggingFace Hub on first use.
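
For example, the three GPU extractors in this section can be run back-to-back over one clip through that shared interface. A minimal sketch, assuming each extractor can be constructed with its default config (as WiLoR and TEASER are in the Quick Start) and that a CUDA device is available; output filenames are illustrative:

```python
from sltk.extraction.wilor import WiLoRExtractor
from sltk.extraction.nlf import NLFExtractor
from sltk.extraction.teaser import TeaserExtractor

VIDEO = "recording.mp4"

# Same three-step usage for every extractor: construct, load_model(),
# then extract_from_video(video, output_h5).
for extractor_cls, suffix in [
    (WiLoRExtractor, "_wilor.h5"),
    (NLFExtractor, "_nlf.h5"),
    (TeaserExtractor, "_teaser.h5"),
]:
    with extractor_cls() as ext:
        ext.load_model()
        ext.extract_from_video(VIDEO, VIDEO.replace(".mp4", suffix))
```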

### WiLoR — 3D Hand Reconstruction

Produces 21 keypoints per hand + MANO rotation matrices per frame.

```python
from sltk.extraction.wilor import WiLoRExtractor, WiLoRConfig

config = WiLoRConfig(
    device="cuda:0",
    img_batch_size=128,
    rescale_factor=2.0,
)
with WiLoRExtractor(config) as ext:
    ext.load_model()
    result = ext.extract_from_video("video.mp4", "video_wilor.h5")
```

> **Citation required.** If you use WiLoR hand extraction you must cite:
>
> ```bibtex
> @inproceedings{potamias2024wilor,
>     title     = {{WiLoR}: End-to-end 3D Hand Localization and Reconstruction in-the-wild},
>     author    = {Potamias, Rolandos Alexandros and Ploumpis, Stylianos and Moschoglou, Stylianos and Triantafyllou, Vasileios and Zafeiriou, Stefanos},
>     booktitle = {European Conference on Computer Vision (ECCV)},
>     year      = {2024}
> }
> ```

**Output H5 structure:**
```
video_wilor.h5
├── attrs: fps, num_frames, resolution
├── frame_idx      (num_frames, 2)           # sparse: (start_idx, count)
├── kpts_3d        (num_detections, 21, 3)   # 3D hand keypoints
├── right          (num_detections,)          # True = right hand
└── mano/
    ├── hand_pose      (num_detections, 15, 3, 3)  # joint rotations
    └── global_orient  (num_detections, 1, 3, 3)    # wrist rotation
```
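
Since the output is plain HDF5, it can also be inspected directly with `h5py`. A minimal sketch based on the layout above (dataset names as shown; nothing here is specific to SLTK):

```python
import h5py

with h5py.File("video_wilor.h5", "r") as f:
    print(dict(f.attrs))                  # fps, num_frames, resolution
    kpts = f["kpts_3d"][:]                # (num_detections, 21, 3)
    right = f["right"][:]                 # boolean mask: True = right hand
    hand_pose = f["mano/hand_pose"][:]    # (num_detections, 15, 3, 3)
    print(kpts.shape, int(right.sum()), "right-hand detections")
```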

### NLF — Full-Body SMPL-X

Produces 55 SMPL-X joints (body + hands + face landmarks) per frame.

```python
from sltk.extraction.nlf import NLFExtractor, NLFConfig

with NLFExtractor(NLFConfig(device="cuda:0")) as ext:
    ext.load_model()
    result = ext.extract_from_video("video.mp4", "video_nlf.h5")
```

### TEASER — FLAME Face Parameters

Produces FLAME 3D face parameters per frame: jaw pose, expression coefficients, shape, eyelid state, and head pose. These parameters are the input to non-manual signal (NMS) detection.

```python
from sltk.extraction.teaser import TeaserExtractor, TeaserConfig

with TeaserExtractor(TeaserConfig(device="cuda:0")) as ext:
    ext.load_model()
    result = ext.extract_from_video("video.mp4", "video_teaser.h5")
```

> **Citation required.** If you use TEASER face extraction you must cite:
>
> ```bibtex
> @article{liu2025teaser,
>     title   = {Teaser: Token Enhanced Spatial Modeling for Expressions Reconstruction},
>     author  = {Liu, Yunfei and Zhu, Lei and Lin, Lijian and Zhu, Ye and Zhang, Ailing and Li, Yu},
>     journal = {arXiv preprint arXiv:2502.10982},
>     year    = {2025}
> }
> ```

### Batch Processing

All extractors support batch processing over a directory:

```python
from pathlib import Path
from sltk.extraction.wilor import WiLoRExtractor

with WiLoRExtractor() as ext:
    ext.load_model()
    results = ext.process_batch(
        video_paths=list(Path("videos/").glob("*.mp4")),
        output_dir=Path("poses/"),
        skip_existing=True,
    )
    for path, result in results.items():
        print(f"{path}: {result.num_frames} frames, {result.num_detections} detections")
```

### Model Weights

All weights are auto-downloaded from [HuggingFace Hub](https://huggingface.co/fiskenai/vltk) and cached at `~/.cache/sltk/weights/`. Override with environment variables:

| Variable | Model |
|----------|-------|
| `SLTK_WILOR_CHECKPOINT` | WiLoR hand model |
| `SLTK_NLF_MODEL` | NLF body model |
| `SLTK_TEASER_CHECKPOINT` | TEASER face model |
| `SLTK_SIGNREP_CHECKPOINT` | SignRep embedding model |
| `SLTK_SEGMENTOR_V2_CHECKPOINT` | Segmenter model |
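
These variables just need to be present in the environment before the relevant model is loaded. A small sketch setting one from Python (the path is illustrative, and this assumes the variable is read when `load_model()` resolves the checkpoint):

```python
import os

# Illustrative local path; point WiLoR at it instead of the Hub download.
os.environ["SLTK_WILOR_CHECKPOINT"] = "/models/wilor/checkpoint.ckpt"

from sltk.extraction.wilor import WiLoRExtractor

with WiLoRExtractor() as ext:
    ext.load_model()
```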

---

## Segmentation: Finding Sign Boundaries

The segmenter is a 4-layer Transformer that reads WiLoR hand features and predicts per-frame BIO labels (`0`=OUT, `1`=IN_SIGN, `2`=BEGIN).

> **Citation required.** If you use the sign segmentation model you must cite:
>
> ```bibtex
> @inproceedings{he2025improving,
>     title     = {Improving Continuous Sign Language Recognition with Adapted Image Models},
>     author    = {He, Lianyu and Tian, Haocong and Fan, Shujing and Woll, Bencie and Bowden, Richard},
>     booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
>     year      = {2025}
> }
> ```

### From WiLoR H5

```python
from sltk.segmentation.runner import get_runner, segment_h5
from sltk.segmentation.output import OutputFormat
from sltk.segmentation.h5_loader import h5_to_features
from sltk.segmentation.postprocess import extract_segments

# High-level: segment a file → ELAN or JSON
segment_h5(
    "video_wilor.h5",
    output_path="video_segments.eaf",
    output_format=OutputFormat.ELAN,
    fps=25.0,
    media_path="video.mp4",
)

# Low-level: get raw predictions
features = h5_to_features("video_wilor.h5")  # (T, 192)
runner = get_runner()
labels = runner.predict(features)             # (T,) values 0/1/2
segments = extract_segments(labels)           # [(start_frame, end_frame), ...]
```
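
For intuition, decoding BIO labels into segments amounts to roughly the following. This is a sketch of the idea only, not the library's `extract_segments` implementation (which may differ in edge-case handling):

```python
import numpy as np

def bio_to_segments(labels: np.ndarray) -> list[tuple[int, int]]:
    """Turn per-frame labels (0=OUT, 1=IN, 2=BEGIN) into (start, end) frame pairs."""
    segments, start = [], None
    for t, lab in enumerate(labels):
        if lab == 2:                          # BEGIN opens a new segment
            if start is not None:
                segments.append((start, t - 1))
            start = t
        elif lab == 1 and start is None:      # IN without a BEGIN: treat as a start
            start = t
        elif lab == 0 and start is not None:  # OUT closes the current segment
            segments.append((start, t - 1))
            start = None
    if start is not None:
        segments.append((start, len(labels) - 1))
    return segments
```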

---

## Gloss Spotting: Matching Signs to a Dictionary

SignRep extracts 768-dim visual features from 16-frame sliding windows, then matches each detected segment against a dictionary of known sign features using cosine similarity.

> **Citation required.** If you use SignRep gloss spotting you must cite:
>
> ```bibtex
> @inproceedings{wong2025signrep,
>     title     = {SignRep: Enhancing Self-supervised Sign Representations},
>     author    = {Wong, Mathew and Fish, Ed and Sherrah, Jamie and Sherwood, Thomas and Sherwood, Nathan},
>     booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
>     year      = {2025}
> }
> ```

```python
from sltk.embedding.pipeline import SignRepPipeline

pipeline = SignRepPipeline()

# Extract dense features from the full video
continuous = pipeline.extract_continuous("video.mp4", stride=4)

# Load your sign dictionary (folder of .npz files, one per sign)
dictionary = pipeline.load_dictionary(["/data/dictionaries/bsldict/signrep/"])

# Match segments to dictionary entries
result = pipeline.spot(
    features=continuous,
    segments=[{"segment_id": 0, "start_frame": 12, "end_frame": 45}],
    dictionary=dictionary,
    top_k=10,
)

for seg in result.segments:
    print(f"Segment {seg.start_ms}ms-{seg.end_ms}ms:")
    for gl in seg.top_glosses:
        print(f"  {gl['gloss']} ({gl['similarity']:.3f})")

# Save as ELAN (creates Rank-1..N and Score-1..N tiers)
result.save_eaf("video_spotted.eaf", media_path="video.mp4")
```
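
Conceptually, the ranking inside `spot()` is a cosine-similarity search over L2-normalised embeddings. A rough numpy sketch of that idea (not the pipeline's actual internals; the array names are illustrative):

```python
import numpy as np

def top_k_glosses(segment_feat, dict_feats, dict_glosses, k=10):
    """Rank dictionary entries by cosine similarity to one (768,) segment embedding."""
    q = segment_feat / np.linalg.norm(segment_feat)
    d = dict_feats / np.linalg.norm(dict_feats, axis=1, keepdims=True)
    sims = d @ q                              # (N,) cosine similarities
    order = np.argsort(sims)[::-1][:k]        # indices of the k best matches
    return [(dict_glosses[i], float(sims[i])) for i in order]
```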

### Building a Dictionary

Before spotting, build a dictionary from isolated sign videos:

```python
result = pipeline.extract_dictionary("isolated_sign.mp4", method="middle")
result.save_npz("dictionary/HELLO.npz")
```
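
To build a whole dictionary, the same two calls can be looped over a folder of isolated-sign clips. A sketch assuming each clip is named after its gloss (e.g. `HELLO.mp4`); the directory layout is illustrative:

```python
from pathlib import Path
from sltk.embedding.pipeline import SignRepPipeline

pipeline = SignRepPipeline()
out_dir = Path("dictionary/")
out_dir.mkdir(exist_ok=True)

# Assumes filenames encode the gloss, e.g. HELLO.mp4 -> dictionary/HELLO.npz
for clip in sorted(Path("isolated_signs/").glob("*.mp4")):
    entry = pipeline.extract_dictionary(str(clip), method="middle")
    entry.save_npz(str(out_dir / f"{clip.stem}.npz"))
```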

---

## Non-Manual Signals: Face and Head Analysis

Detect blinks, head nods/shakes/tilts, mouth movements, eyebrow raises, gaze direction, and eye squints from TEASER face-tracking data.

```python
from sltk.nms.runner import detect_nms, export_results

# Detect all NMS events from a TEASER H5 file
blinks, nms_events, quality = detect_nms(
    "video_teaser.h5",
    detectors={"all"},           # or specific: {"blink", "nod", "mouth"}
    smpl_path="video_nlf.h5",   # optional: enables gaze detection
)

# Export to ELAN (one tier per signal type)
export_results(
    blinks, nms_events, "video_teaser.h5",
    output_dir="output/",
    formats=["elan", "json", "csv"],
    media_path="video.mp4",
)
```

### Available Detectors

| Detector | ELAN Tier | Signal | Input |
|----------|-----------|--------|-------|
| `blink` | `BLINK` | Eye closures | TEASER eyelid params |
| `nod` | `HEAD-NOD` | Vertical head oscillation | TEASER head pitch |
| `shake` | `HEAD-SHAKE` | Horizontal head oscillation | TEASER head yaw |
| `tilt` | `HEAD-TILT` | Side-to-side head tilt | TEASER head roll |
| `mouth` | `MOUTH-MOVEMENT` | Lip/mouth movement | FLAME expression |
| `eyebrow` | `EYEBROW-RAISE` | Eyebrow raise/furrow | FLAME expression |
| `gaze` | `EYE-GAZE` | Gaze direction | NLF eye pose |
| `squint` | `EYE-SQUINT` | Partial eye closure | TEASER eyelid |
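
Each event produced by the detectors above carries `tier`, `start_frame`, `end_frame`, and `label` (as used in the Quick Start), so a per-tier summary is a short loop. A sketch, continuing from the `detect_nms` call above and assuming 25 fps:

```python
from collections import Counter

FPS = 25.0
counts = Counter(ev.tier for ev in nms_events)
for tier, n in sorted(counts.items()):
    total = sum(
        (ev.end_frame - ev.start_frame) / FPS
        for ev in nms_events
        if ev.tier == tier
    )
    print(f"{tier}: {n} events, {total:.1f}s total")
```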

---

## Feature Representations

SLTK computes several feature representations from the extracted H5 pose data.

### WiLoR Segmenter Features (192-dim)

MANO rotation matrices converted to axis-angle, both hands concatenated. Used by the Transformer segmenter.

```python
from sltk.segmentation.h5_loader import h5_to_features
features = h5_to_features("video_wilor.h5")  # (T, 192)
```

### Angle Features (104-dim)

Body joint angles + hand Euler angles from MANO rotations.

```python
from sltk.processing.features import compute_angle_features
angles = compute_angle_features(body_poses, right_hand, left_hand)  # (T, 104)
```

### HaMeR Features (288-dim)

Flattened MANO rotation matrices for both hands.

```python
from sltk.processing.features import compute_hamer_features
hamer = compute_hamer_features(
    mano_global_orient_right, mano_hand_pose_right,
    mano_global_orient_left, mano_hand_pose_left,
)  # (T, 288)
```

### SignRep Embeddings (768-dim)

Dense visual features from the SignRep ViT model. 16-frame windows, L2-normalized.

```python
from sltk.embedding.pipeline import SignRepPipeline
pipeline = SignRepPipeline()
continuous = pipeline.extract_continuous("video.mp4", stride=4)
# continuous.features: (num_windows, 768)
```

### Combined Features from H5

```python
from sltk.processing.features import load_features_from_nlf_wilor
angles, hamer = load_features_from_nlf_wilor("video_nlf.h5", "video_wilor.h5")
```

---

## ELAN File I/O

### Writing a New ELAN File

```python
from sltk.io.elan_roundtrip import ElanDocument

doc = ElanDocument.new(video_path="video.mp4")
doc.add_tier("Gloss")
doc.add_tier("NMS")
doc.add_segment("Gloss", 0.0, 1.5, "HELLO")
doc.add_segment("Gloss", 1.5, 3.0, "WORLD")
doc.add_segment("NMS", 0.2, 0.8, "nod")
doc.save("output.eaf")
```

### Reading and Modifying Existing ELAN Files

```python
doc = ElanDocument.open("annotations.eaf")
tiers = doc.get_tiers()         # list of TierInfo
segments = doc.get_segments()   # list of SegmentInfo (all tiers)

# Add new annotations from pipeline results
doc.add_tier("AutoSegmentation")
doc.add_segment("AutoSegmentation", 1.0, 2.5, "SIGN")
doc.save()  # preserves all original XML structure
```

### Simple Read/Write

```python
from sltk.io import read_eaf, write_eaf
from sltk.data import Segment, SegmentList

# Read
segments = read_eaf("annotations.eaf", tiers=["Gloss"])

# Write
new_segments = SegmentList([
    Segment(start=0.0, end=1.5, label="HELLO", tier="Gloss"),
    Segment(start=1.5, end=3.0, label="WORLD", tier="Gloss"),
])
write_eaf(new_segments, "output.eaf", video_path="video.mp4")
```
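
Segments read back with `read_eaf` carry the same fields used to construct them, so basic corpus statistics follow from a loop. A sketch assuming the returned segments expose the `start`, `end`, and `label` attributes shown above:

```python
from sltk.io import read_eaf

segments = read_eaf("annotations.eaf", tiers=["Gloss"])
for seg in segments:
    print(f"{seg.label}: {seg.end - seg.start:.2f}s")
```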

---

## Web Interface

SLTK includes a React frontend for browsing workspaces, running processing jobs, and exploring corpus data.

```bash
# Development: FastAPI (port 8000) + Vite (port 5173)
bash scripts/run_dev.sh

# Production
cd frontend && npm ci && npm run build && cd ..
sltk serve --host 0.0.0.0 --port 8000
```

| Route | Page | Purpose |
|-------|------|---------|
| `/` | Workspaces | Create/switch workspaces, scan directories |
| `/process` | Process | Submit segmentation and spotting jobs |
| `/explore` | Explore | Search glosses, view video clips, corpus statistics |
| `/viewer` | Viewer | Video playback with annotation overlay |
| `/analysis/*` | Analysis | Vocabulary, concordance, n-grams, collocations, durations |

---

## CLI Reference

```bash
sltk convert input.npy output.h5 --from mediapipe --to wilor --fps 25
sltk evaluate predictions.txt references.txt --task translation
sltk to-elan segments.json --video source.mp4 --output annotations.eaf
sltk from-elan annotations.eaf --output segments.json --tier Gloss
sltk info video_wilor.h5
sltk serve --host 0.0.0.0 --port 8000 --reload
```

## Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| `SLTK_CORS_ORIGINS` | Allowed CORS origins | `http://localhost:5173,http://localhost:3000` |
| `SLTK_ALLOWED_PATHS` | Filesystem whitelist for API | `/vol/research,/home` |
| `SLTK_WEIGHTS_DIR` | Override weight cache location | `~/.cache/sltk/weights/` |

## Supported Pose Formats

| Format | Extractor | Keypoints | Description |
|--------|-----------|-----------|-------------|
| WiLoR | `WiLoRExtractor` | 21 per hand | MANO 3D hand mesh with rotation matrices |
| NLF/SMPL-X | `NLFExtractor` | 55 joints | Full body with axis-angle rotations |
| TEASER/FLAME | `TeaserExtractor` | FLAME params | Face parameters: jaw, expression, shape, eyelid |
| MediaPipe | `MediaPipeExtractor` | 33+42+468 | Fast 2D/3D holistic landmarks |
| RTMPose | `RTMPoseExtractor` | 133 | COCO-WholeBody, multi-person |

All pose formats are stored as HDF5 (`.h5`) files.
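
Alongside the `sltk info` command, a few lines of `h5py` will list what any of these files contains. A generic sketch:

```python
import h5py

def print_tree(path: str) -> None:
    """Print every dataset in an SLTK output file with its shape and dtype."""
    with h5py.File(path, "r") as f:
        f.visititems(
            lambda name, obj: print(name, obj.shape, obj.dtype)
            if isinstance(obj, h5py.Dataset)
            else None
        )

print_tree("video_wilor.h5")
```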

## Testing

```bash
pytest                       # Full suite (1500+ tests)
pytest -m "not slow"         # Skip slow tests
pytest -m api                # API tests only
```

## License

This project is licensed under **CC-BY-NC-ND-4.0** (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International).

### Third-Party Licenses

SLTK bundles or depends on models that have their own license terms. By using SLTK you agree to comply with all applicable licenses:

| Component | License | Link |
|-----------|---------|------|
| **MANO** (hand model) | MANO License (non-commercial) | https://mano.is.tue.mpg.de/license.html |
| **SMPL / SMPL-X** (body model) | SMPL License (non-commercial) | https://smpl.is.tue.mpg.de/license.html |
| **FLAME** (face model) | FLAME License (non-commercial) | https://flame.is.tue.mpg.de/license.html |
| **WiLoR** | Apache 2.0 | [Potamias et al., 2024] |
| **TEASER** | See paper | [Liu et al., 2025] |
| **NLF** | See repository | [Sarandi et al.] |

Please review each project's license before using its models in your work.
