Metadata-Version: 2.4
Name: filai
Version: 0.1.54
Summary: Deep Learning GUI for filament networks Segmentation & Tracking in Microscopy Images
Author-email: Vatsal Patel <vatsal-dp@users.noreply.github.com>
Maintainer: Orlando Arguello-Miranda
Maintainer-email: Vatsal Patel <vatsal-dp@users.noreply.github.com>
License-Expression: BSD-3-Clause
Project-URL: Homepage, https://github.com/vatsal-dp/filogui3
Project-URL: Repository, https://github.com/vatsal-dp/filogui3
Project-URL: Issues, https://github.com/vatsal-dp/filogui3/issues
Project-URL: Documentation, https://github.com/vatsal-dp/filogui3#readme
Keywords: filament networks,segmentation,microscopy,cellpose,omnipose,tracking,deep-learning,computer-vision,biology,cell-imaging,fungal-imaging
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Topic :: Scientific/Engineering :: Medical Science Apps.
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Operating System :: OS Independent
Classifier: Environment :: GPU :: NVIDIA CUDA
Requires-Python: <3.11,>=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: PySide6==6.9.0
Requires-Dist: pillow<11,>=9.5
Requires-Dist: matplotlib<3.9,>=3.7
Requires-Dist: tifffile<2022,>=2021.11.2
Requires-Dist: numpy<1.26,>=1.22.4
Requires-Dist: natsort>=8.4.0
Requires-Dist: opencv-python-headless>=4.8.1.78
Requires-Dist: pandas<2.1,>=1.5
Requires-Dist: openpyxl>=3.0.0
Requires-Dist: torch<2.6,>=2.0
Requires-Dist: torchvision<0.21,>=0.15
Requires-Dist: setuptools<81
Requires-Dist: omnipose==1.0.6
Requires-Dist: cellpose==3.0.8
Requires-Dist: watchdog
Requires-Dist: scipy<1.10,>=1.9
Requires-Dist: scikit-image<0.23,>=0.19
Requires-Dist: requests
Requires-Dist: tqdm
Requires-Dist: pyqtgraph
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: flake8>=6.0; extra == "dev"
Requires-Dist: mypy>=1.0; extra == "dev"
Dynamic: license-file

# FILAI

Developed at the Miranda Laboratory at NCSU with the support of NIH-NIGMS R00GM135487 and the NSF-Simons National Institute for Theory and Mathematics in Biology (NITMB). This research was supported in part by grants from the NSF (DMS-2235451) and the Simons Foundation (MPS-NITMB-00005320) to the NITMB.

<!-- ## Project Description -->

Deep Learning Platform for microscopy analysis of filamentous fungal life cycles.

A Python package with a PySide6 GUI for segmenting and tracking fungal filaments in microscopy time-lapse images, combining deep learning image segmentation with generative frame interpolation. Built on Cellpose, Omnipose, and real-time frame interpolation (RIFE).

[![Python 3.10](https://img.shields.io/badge/python-3.10-blue.svg)](https://www.python.org/downloads/)
[![License: BSD 3-Clause](https://img.shields.io/badge/License-BSD%203--Clause-orange.svg)](https://opensource.org/licenses/BSD-3-Clause)

---

## Features

* **7 Pre-trained Models** — Instance segmentation of distinct fungal morphological landmarks: fungal filaments (hyphae), conidia, sporangiophores, filament tips, branching points, septa, and crossing points.
* **Ensemble Segmentation Mode** — Pretrained models can be combined to produce multidimensional masks depicting different detected structures on fungal filament images, available for single images or batch processing of image time series.
* **RIFE Interpolation** — Generative temporal upsampling of image time series (2×/4×/8×/16×) to facilitate the tracking of filaments and fungal features.
* **Tracking** — Overlap-based matching of masks across frames, with gap-filling and ID consistency aided by RIFE interpolation.
* **Model Retraining** — Fine-tune models to your custom datasets by providing or labeling new masks to retrain the Cellpose or Omnipose models.
* **Interactive Visualization** — Real-time color-coded mask overlays on the source image.

---

## Installation

**Prerequisites:** Install [Anaconda](https://www.anaconda.com/products/distribution) first.

### Anaconda Setup (Required)

1. Download the installer for your OS:
   - Anaconda Distribution: https://www.anaconda.com/products/distribution
2. Install Anaconda:
   - **Windows:** Run the installer (`.exe`). You can leave "Add Anaconda to my PATH environment variable" unchecked, then use **Anaconda Prompt**.
   - **macOS/Linux:** Run the installer and allow shell initialization when prompted.
3. Open a new terminal (or Anaconda Prompt on Windows) and verify:

```bash
conda --version
python --version
```

4. If `conda` is not found on macOS/Linux, initialize your shell and restart terminal:

```bash
conda init zsh
# or:
conda init bash
```

5. Create and activate the FILAI environment:

```bash
conda create -n filai python=3.10 -y
conda activate filai
```

6. Optional but recommended once on a fresh install:

```bash
conda update -n base -c defaults conda -y
```

7. Daily usage:
   - Start work: `conda activate filai`
   - Exit environment: `conda deactivate`

### Windows (Sometimes Required): Microsoft Visual C++ Build Tools

If you see an error like `Microsoft Visual C++ 14.x is required` during `pip install`, install Microsoft's C++ build tools:

1. Download **Build Tools for Visual Studio**: https://visualstudio.microsoft.com/visual-cpp-build-tools/
2. Run the installer and select **Desktop development with C++**
3. In installation details, ensure these are selected:
   - **MSVC v14x C++ build tools** (latest available)
   - **Windows 10/11 SDK** (latest available)
4. Complete installation, then restart your terminal
5. Re-activate your environment and retry:

```bash
conda activate filai
pip install --no-cache-dir filai
```

### Quick Install (CPU - Recommended for First Install)

If you already created `filai` in the Anaconda setup steps above, skip step 1.

```bash
# 1. Create environment with Python 3.10 (important!)
conda create -n filai python=3.10 -y
conda activate filai

# 2. Install FILAI (includes CPU versions of PyTorch)
pip install filai

# 3. Download models (segmentation + interpolation)
filai-download-models

# 4. Launch
filai
```

### Upgrade an Existing Installation

If FILAI is already installed in your existing Conda environment, run:

```bash
# 1. Activate your existing FILAI environment
conda activate filai

# 2. Upgrade FILAI from PyPI
python -m pip install --upgrade filai

# 3. Verify installed version
python -c "import importlib.metadata as im; print(im.version('filai'))"

# 4. Optional: refresh/download model files
filai-download-models
```

If you hit stale wheel/cache issues during upgrade, retry with:
```bash
python -m pip install --upgrade --no-cache-dir filai
```

### GPU Support (NVIDIA Only - Optional but Recommended for Speed)

FILAI installs CPU versions of PyTorch by default. For **significantly faster** segmentation and interpolation, upgrade to GPU support:

**Prerequisites:**
1. Install [NVIDIA drivers](https://www.nvidia.com/Download/index.aspx) for your GPU
2. Verify GPU detection: Run `nvidia-smi` in terminal and note the CUDA version displayed

**Upgrade to GPU:**

```bash
# Activate your filai environment
conda activate filai

# Remove CPU versions
pip uninstall torch torchvision -y

pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu118
```

**Verify GPU installation:**
After installing, check that PyTorch sees your GPU:
```bash
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
```

You should see `CUDA available: True`.

**Note:** For Mac (M1/M2/M3), use:
```bash
conda install pytorch torchvision -c pytorch -y
```

**Note:** The model downloader fetches all required models, including:
- Segmentation models (7 models, ~175MB)
- RIFE interpolation model (~55MB)

To download only specific models, run `filai-download-models --list` to see options.

**Daily Usage:** After installation, just run:
```bash
conda activate filai
filai
```

**Try built-in sample data (5 TIFF frames):**
```bash
conda activate filai
filai --test
```
This copies packaged sample frames to `~/.filai/toy_dataset` (writable) and opens them automatically in Processing view.
Packaged sample TIFFs in `filai/toy_dataset` are source assets and should remain unmodified.
In `--test`, single-model mode auto-defaults Resize to `0.5` for `Coni_7` and `1.0` for other models (you can still edit Resize manually).
Use `--test` (double dash); `-test` is intentionally not supported.

---

## Usage Guide

### Input Naming

No strict naming format is required for the GUI to load original input images.

For reliable behavior, use this naming convention:
- Time series: zero-padded sequential names (example: `frame_0001.tif`, `frame_0002.tif`, ...) so frame order stays correct.
- If pairing with masks: keep the same base name (example: `frame_0001.tif` ↔ `frame_0001_cp_masks.tif`) for best auto-matching.
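The padding rule can be sanity-checked in a few lines of Python; `mask_name` below is an illustrative helper that simply mirrors the `_cp_masks` convention above, not part of FILAI's API:

```python
from pathlib import Path

# Plain lexicographic sort misorders unpadded frame numbers,
# while zero-padded names sort correctly.
unpadded = [f"frame_{i}.tif" for i in (1, 2, 10)]
padded = [f"frame_{i:04d}.tif" for i in (1, 2, 10)]
print(sorted(unpadded))  # ['frame_1.tif', 'frame_10.tif', 'frame_2.tif'] -- wrong order
print(sorted(padded))    # ['frame_0001.tif', 'frame_0002.tif', 'frame_0010.tif']

def mask_name(image_path: str) -> str:
    """Expected mask name for a given image, per the `_cp_masks` convention."""
    p = Path(image_path)
    return f"{p.stem}_cp_masks{p.suffix}"

print(mask_name("frame_0001.tif"))  # frame_0001_cp_masks.tif
```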

### Where Images and Masks Should Be

- **Segmentation:** Put source images in the folder you load in the Processing view (`Load image` or `Load image folder`). Generated masks are saved by FILAI in output subfolders for the selected model.
- **Interpolation:** Put source time-series images in one input folder, and choose a separate output folder for interpolated frames.
- **Tracking:** Provide a folder containing labeled mask files (for example `*_cp_masks.tif`) and choose a separate output folder for tracking exports.
- **Model Retraining:** Use two folders: one `Images Folder` (training images) and one `Masks Folder` (matching labeled masks with corresponding base names).

### Segmentation

**Single Image:**
1. Load image
2. Select model in Models tab
3. Run segmentation
4. View color-coded overlay

**Batch Processing:**
1. Load image folder
2. Choose model
3. Click "Run Time Series"
4. Results saved to `{folder}/{model_name}/`

**Ensemble Mode:**
1. Select multiple models in Models tab
2. Run segmentation
3. Each structure type in unique color

### Frame Interpolation

1. Go to Interpolation tab
2. Select input/output folders
3. Choose factor (2×, 4×, 8×, 16×)
4. Click "Start Interpolation"
5. Preview results in viewer

### Tracking

1. Go to Tracking tab
2. Load masks folder
3. Set output folder
4. Click "Start Tracking"
5. Export: `.npy`, `.mat`, `.pklt` formats

### Model Retraining

1. Open the `Retrain` view from the left sidebar (`ADVANCED -> Retrain`).
2. Select `Images Folder` (training images) and `Masks Folder` (labeled masks).
3. Click `Base Model (.pth)` and choose the pretrained model file you want to fine-tune.
4. Set the mask pattern if needed (for example `_cp_masks`), then click `Validate Data` to confirm image-mask pairs.
5. Set training hyperparameters: `Epochs`, `Learning Rate`, and `Weight Decay` (defaults work for many use cases).
6. Click `Start Training` and confirm the training dialog.
7. Monitor progress in the training progress window. (Detailed console logs are shown when verbose logging is enabled.)
8. After completion, FILAI saves the retrained model in the generated `preprocessed_<timestamp>/models/` folder, typically named `<base_model>_retrain.pth` (or `<base_model>_retrain_<timestamp>.pth` if needed).
9. To use the retrained model in segmentation, open `ADVANCED -> Add Model`, click `Browse and Add Model`, and select the saved `.pth` file.
10. The imported model appears in the models list and can be used in single-model or ensemble workflows. If performance is not satisfactory, refine labels/data and retrain.
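The pairing check behind `Validate Data` (step 4) can be approximated in plain Python; `check_pairs` is a hypothetical sketch of the idea, not FILAI's actual implementation:

```python
def check_pairs(image_names, mask_names, mask_pattern="_cp_masks"):
    """Return training images that lack a mask named <stem><mask_pattern>.<ext>."""
    masks = set(mask_names)
    missing = []
    for name in image_names:
        stem, ext = name.rsplit(".", 1)
        if f"{stem}{mask_pattern}.{ext}" not in masks:
            missing.append(name)
    return missing

missing = check_pairs(["frame_0001.tif", "frame_0002.tif"],
                      ["frame_0001_cp_masks.tif"])
print(missing)  # ['frame_0002.tif'] -- only frame_0002.tif has no matching mask
```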

---

## Pre-trained Models

| Model | Purpose |
|-------|---------|
| `FilaTip_6.pth` | **Filament tips** (recommended) |
| `Coni_7.pth` | Conidia/spores |
| `Phore_2.pth` | Sporangiophore |
| `FilaBranch_2.pth` | Branch points |
| `FilaCross_2.pth` | Crossings |
| `FilaSeptum_4.pth` | Septa |
| `Retrain_omni_5.pth` | General/custom |

---

## Troubleshooting

**"conda: command not found"**
- Install Miniconda: https://docs.conda.io/en/latest/miniconda.html
- Restart terminal after installation

**Installation takes long**
- Normal - downloads ~2.5 GB (PyTorch + dependencies)
- First install: 10-20 minutes

**Windows install error: "Microsoft Visual C++ 14.x is required"**
- Install Build Tools for Visual Studio: https://visualstudio.microsoft.com/visual-cpp-build-tools/
- Select workload: **Desktop development with C++**
- Then retry: `pip install --no-cache-dir filai`

**Models don't download**
- Check internet connection
- Run manually: `filai-download-models`
- Check `FiloGUI_models/` folder

**GPU not detected**
- Verify: `python -c "import torch; print(torch.cuda.is_available())"`
- Install correct CUDA version for your GPU
- The application automatically falls back to CPU if no GPU is detected

**Out of memory**
- Reduce batch size
- Process smaller image regions
- Use CPU mode: `conda install pytorch torchvision cpuonly -c pytorch -y`

---

## File Formats

**Supported Inputs:**
- Images: `.tif`, `.tiff`, `.png`, `.jpg`
- Masks: `.tif`, `.tiff`, `.png`, `.jpg` (labeled)

**Outputs:**
- Masks: `_cp_masks.tif` (labeled)
- Tracks: `.npy` (NumPy), `.mat` (MATLAB), `.pklt` (metadata)
- Interpolated: Same format as input
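"Labeled" here means background pixels are 0 and each object carries its own integer ID (the Cellpose-style convention behind the `_cp_masks.tif` outputs). Counting objects in such a mask is then straightforward; a sketch with a synthetic array:

```python
import numpy as np

# Tiny synthetic labeled mask: 0 = background, 1..3 = three objects.
mask = np.array([[0, 1, 1],
                 [0, 0, 2],
                 [3, 0, 2]], dtype=np.uint16)
labels = np.unique(mask)
n_objects = len(labels[labels != 0])  # count distinct nonzero IDs
print(n_objects)  # 3
```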

### Directory Arrangement and Naming

Drag and drop a directory of 2D image files into the GUI (or load it from the file menu) to start segmentation, labeling, interpolation, or processing. Each file should contain a single 2D image and use a standard image extension.

**Important behavior:**
- FILAI writes generated masks/labels and processed images into the loaded directory.
- Removing those generated files will remove the corresponding saved results from prior sessions.
- Keep image dimensions consistent within the same channel.
- For best loading reliability, keep only valid image files and FILAI-generated files in the working directory.

**Multi-channel and pre-generated mask naming:**
- Use a clear data-type identifier immediately before the extension (for example: `_phase.tif`, `_channel2.png`, `_mask1.tif`).
- Any image that is a label/mask should include `_mask` in its identifier (for example: `_mask_nucleus.tif`, `_mask_cytoplasm.jpg`).
- Each identifier/group should have the same number of time points; mismatched counts across channels or masks will raise an error.

**Example directory (2 time points, 2 channels, 2 masks):**
- `im001_channel1.tif`, `im001_channel2.tif`, `im001_mask1.tif`, `im001_mask2.tif`
- `im002_channel1.tif`, `im002_channel2.tif`, `im002_mask1.tif`, `im002_mask2.tif`
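The grouping and equal-count rule above can be sketched as follows; `group_by_identifier` is an illustrative helper, not part of FILAI's API:

```python
import re
from collections import defaultdict

def group_by_identifier(filenames):
    """Group files by the identifier immediately before the extension
    (e.g. '_channel1', '_mask2'), per the naming convention above."""
    groups = defaultdict(list)
    for name in filenames:
        m = re.match(r"(.+?)(_[A-Za-z]+\d*)\.(tif|tiff|png|jpg)$", name)
        if m:
            groups[m.group(2)].append(name)
    return dict(groups)

files = ["im001_channel1.tif", "im001_channel2.tif", "im001_mask1.tif", "im001_mask2.tif",
         "im002_channel1.tif", "im002_channel2.tif", "im002_mask1.tif", "im002_mask2.tif"]
counts = {ident: len(v) for ident, v in group_by_identifier(files).items()}
print(counts)
# Every identifier must have the same number of time points:
assert len(set(counts.values())) == 1
```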

---

<!-- ## Citation

If you use FILAI in your research, please cite:

```
FILAI: Deep Learning Platform for Fungal filament networks Analysis
Vatsal Patel (2026)
``` -->

<!-- **Built on:**
- Cellpose: Stringer et al. (2021) - Nature Methods
- Omnipose: Cutler et al. (2022) - Nature Methods  
- RIFE: Huang et al. (2022) - ECCV

--- -->

## License

BSD 3-Clause License - see [LICENSE](LICENSE) file

---

<!-- ## Contributing

Issues and pull requests welcome at: https://github.com/vatsal-dp/filogui3

--- -->

<!-- ## Package Info

**Version:** 0.1.54 (PyPI)  
**Python:** 3.10  
**Dependencies:** PyTorch, cellpose, omnipose, PySide6, numpy, scipy, matplotlib, sympy

**Command-line Tools:**
- `filai` - Launch GUI
- `filai-download-models` - Download pre-trained models

--- -->

<!-- **Made for the microscopy community**
filai-download-models
filai
``` -->

<!-- **If installation fails on Windows** (Visual C++ Build Tools error):
```bash
conda create -n filai python=3.10 -y
conda activate filai
conda install -c conda-forge cellpose=2.1.0 omnipose=0.3.0 -y
pip install filai
filai-download-models
filai
pip install torch torchvision  # or: conda install pytorch torchvision cpuonly -c pytorch
pip install filai
filai-download-models --all
filai
``` -->

<!-- 
> **Detailed guide:** See [INSTALLATION.md](INSTALLATION.md) for troubleshooting and development installation. -->
<!-- > **Important:** PyTorch must be installed before FILAI. The package will use your existing PyTorch installation.

--- -->

## Package Structure

```
FILAI/
├── filai/                      # Main package
│   ├── main.py                 # GUI application entry point
│   ├── tracking_functions.py   # Overlap-based tracking algorithm
│   ├── model_downloader.py     # Automated model fetching
│   ├── interpolate.py          # RIFE interpolation wrapper
│   ├── gui_styles.py           # UI theming
│   ├── assets/                 # Icons and resources
│   └── rife_model/             # RIFE neural network implementation
├── FiloGUI_models/             # Downloaded models directory
│   ├── Coni_7.pth              # Conidia segmentation
│   ├── FilaTip_6.pth           # Filament tips
│   └── ...                     # 5 additional models
├── pyproject.toml              # Package configuration
├── LICENSE                     # BSD 3-Clause License
└── README.md                   # This file
```

---

## Core Workflow

### 1. Segmentation

FILAI provides three segmentation modes powered by Cellpose and Omnipose:

**Single Image Mode**
```
Load Image → Select Model → Run Segmentation → View Overlay
```
- Instantly segment individual microscopy images
- Real-time color-coded mask overlay
- Export labeled masks as `.tif` files

**Batch Time-Series Mode**
```
Select Folder → Choose Model → Run Time Series → Auto-save Results
```
- Process entire directories of sequential images
- Results saved to `{folder}/{model_name}/` with `_cp_masks.tif` suffix
- Progress tracking with frame counter

**Ensemble Mode**
```
Models Tab → Check Multiple Models → Run Ensemble → Multi-color Overlay
```
- Combine predictions from multiple specialized models
- Each structure type rendered in unique color
- Ideal for complex samples with tips, branches, and septa

---

### 2. Frame Interpolation (RIFE)

Increase temporal resolution using AI-powered interpolation for tracking fast-moving structures:

**Usage**
```
Input Folder → Output Folder → Interpolation Factor → Start
```

**Interpolation Options:**
- **2×** — Double frame count (1 intermediate frame)
- **4×** — Quadruple frames (3 intermediate frames)
- **8×** — 8× frames (7 intermediate frames)
- **16×** — 16× frames (15 intermediate frames)

**Example:** 20 frames → 305 frames at 16× (perfect for tip velocity analysis)
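The frame counts follow from each of the N−1 gaps between original frames receiving factor−1 generated frames:

```python
def interpolated_count(n_frames: int, factor: int) -> int:
    """(n_frames - 1) gaps, each filled with (factor - 1) new frames,
    plus the original frames themselves."""
    return (n_frames - 1) * factor + 1

print(interpolated_count(20, 16))  # 305, as in the example above
print(interpolated_count(20, 2))   # 39
```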

**Technical Details:**
- Supports `.tif`, `.tiff`, `.png`, `.jpg`
- Auto-converts `uint32` → `uint16` for microscopy compatibility
- GPU-accelerated (CUDA) with CPU fallback
- Preserves bit depth and dynamic range
- Integrated preview browser with frame navigation
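The `uint32` → `uint16` conversion can be pictured as a range-preserving rescale. This is a sketch of the general idea only; FILAI's exact scaling is not specified here:

```python
import numpy as np

# Rescale a uint32 frame into the uint16 range, preserving relative intensities.
frame = np.array([[0, 2_000_000],
                  [4_000_000, 1_000_000]], dtype=np.uint32)
scale = np.iinfo(np.uint16).max / frame.max()
frame16 = (frame.astype(np.float64) * scale).round().astype(np.uint16)
print(frame16.dtype, frame16.max())  # uint16 65535
```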

---

### 3. Tracking

Track individual cells/tips across time with gap-filling and ID consistency:

**Usage**
```
Mask Folder → Output Folder → Start Tracking → Export Results
```

**Output Formats:**
- `{pos}_Tracks.npy` — 3D NumPy array (H × W × T)
- `{pos}_ART_Tracks_MATLAB.mat` — MATLAB-compatible format
- `{pos}_Tracks_vars_file.pklt` — Metadata (sizes, lifetimes, statistics)
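For downstream analysis, the `.npy` export loads directly with NumPy. A sketch of computing per-ID lifetimes from the H × W × T layout, using a synthetic array in place of a real `{pos}_Tracks.npy`:

```python
import numpy as np

# Synthetic stand-in for np.load("pos1_Tracks.npy"): H x W x T, nonzero = track ID.
tracks = np.zeros((4, 4, 3), dtype=np.uint16)
tracks[0, 0, :] = 1        # ID 1 present in all 3 frames
tracks[2, 2, :2] = 2       # ID 2 present in frames 0-1 only

ids = np.unique(tracks)
lifetimes = {int(i): int((tracks == i).any(axis=(0, 1)).sum())
             for i in ids if i != 0}
print(lifetimes)  # {1: 3, 2: 2}
```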

<!--
#### How the Tracking Algorithm Works

FILAI uses an **overlap-based tracking engine** that maintains consistent cell IDs across frames:

**Core Algorithm Pipeline:**

1. **Overlap Analysis**
   - Each cell in frame *N* matched to cells in frame *N+1* via pixel overlap
   - Primary threshold: **35%** overlap (reliable matches)
   - Fallback threshold: **10%** (difficult cases, growing cells)

2. **Gap Memory System**
   - Cells that disappear temporarily stored in gap buffer
   - When cell reappears with ≥10% overlap → recovered with original ID
   - Prevents false merging via conflict detection

3. **Morphological Refinement**
   - Opening operations remove segmentation noise
   - Pixel assignments updated to prevent double-counting
   - Preserves cell boundaries and label integrity

4. **Interrupted Track Splitting**
   - Cells that disappear and reappear assigned **new unique IDs**
   - First appearance keeps original ID, subsequent appearances split off
   - Prevents renumbering cascades (Tip 2 → Tip 1 when Tip 1 is missing)

**Why This Matters:**

```diff
 Without Tracking:
Frame 1-5:  Tip 1, Tip 2, Tip 3
Frame 6-7:  Tip 2, Tip 3  (Tip 1 missing, everything renumbers!)
Frame 8-20: Tip 1, Tip 2  (WRONG IDs — analysis corrupted)

 With Tracking:
Frame 1-5:  Tip 1, Tip 2, Tip 3
Frame 6-7:  [gap], Tip 2, Tip 3  (Gap tracked, IDs stable)
Frame 8-20: Tip 21, Tip 2, Tip 3 (New ID for reappearance)
```

**Performance Features:**
- Automatically runs when using "Detect Tip Indices" in GUI
- Tracked masks cached to `{folder}_tracked/` for instant reload
- Vectorized NumPy operations for speed
- Processes 100+ frame time-series in seconds
-->

---

## Pre-trained Models

FILAI includes **7 specialized models** (~25MB each, ~175MB total):

| Model | Purpose | Best For |
|-------|---------|----------|
| `Coni_7.pth` | **Conidia** | Fungal spores, reproductive structures |
| `Phore_2.pth` | **Sporangiophore** | Spore-bearing aerial hyphae |
| `FilaBranch_2.pth` | **Branch Points** | Hyphal branching junctions |
| `FilaCross_2.pth` | **Crossings** | Overlapping filament networks detection |
| `FilaSeptum_4.pth` | **Septa** | Cell wall divisions, compartments |
| `FilaTip_6.pth` | **Tips** | **Filament tip tracking (recommended)** |
| `Retrain_omni_5.pth` | **General** | Multi-purpose, custom-trained |

### Model Management

```bash
# List available models
filai-download-models --list

# Download all models
filai-download-models --all

# Download specific models
filai-download-models FilaTip_6 Coni_7

# Interactive selection
filai-download-models
```

Models are downloaded to `FiloGUI_models/` in the current directory, or to `~/.filai/models/`.

---

## Recommended Workflow: Complete Analysis Pipeline

For high-quality filament networks tip tracking:

```
1. Acquire Microscopy Data
   └─ Time-lapse .tif series (e.g., 20-50 frames, 2-5 min intervals)

2. Frame Interpolation (Optional but Recommended)
   └─ RIFE 8× or 16× for capturing fast growth dynamics
   └─ 20 frames → 305 frames at 16×

3. Segmentation
   └─ Batch process with FilaTip_6 model
   └─ Outputs: {folder}/FilaTip_6/*_cp_masks.tif

4. Tracking
   └─ Run tracking on segmented masks
   └─ Automatically handles gaps and ID consistency
   └─ Outputs: Tracked masks + metadata

5. Analysis
   └─ Load .npy or .mat files in Python/MATLAB
   └─ Extract growth rates, velocities, morphology
```
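Step 5 might look like this in Python: estimating per-frame tip speed from track centroids. This is a hypothetical analysis sketch using a synthetic tracked array; units are pixels/frame with no spatial calibration applied:

```python
import numpy as np

# Synthetic tracked volume (H x W x T): tip with ID 5 moves 3 px per frame.
tracks = np.zeros((8, 8, 3), dtype=np.uint16)
tracks[1, 1, 0] = 5
tracks[1, 4, 1] = 5
tracks[1, 7, 2] = 5

def centroid(vol, track_id, t):
    """(row, col) centroid of one track ID at frame t, or None if absent."""
    ys, xs = np.nonzero(vol[:, :, t] == track_id)
    return (ys.mean(), xs.mean()) if ys.size else None

cents = [centroid(tracks, 5, t) for t in range(tracks.shape[2])]
speeds = [float(np.hypot(b[0] - a[0], b[1] - a[1]))
          for a, b in zip(cents, cents[1:]) if a and b]
print(speeds)  # [3.0, 3.0] pixels/frame
```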

**Typical Performance:**
- Segmentation: ~2-5 seconds/frame (GPU)
- Interpolation 16×: ~1-3 seconds/frame (GPU)
- Tracking: ~0.5 seconds/frame (CPU)

---

## Development

### Install from Source

```bash
git clone https://github.com/vatsal-dp/filogui3.git
cd filogui3

conda create -n filai-dev python=3.10
conda activate filai-dev

conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -e .  # Editable install

filai-download-models --all
filai
```

### Hot-Reload Development Mode

```bash
python dev_runner.py  # Auto-reloads on code changes
```

### Project Structure

```
filai/
├── main.py                      # Main GUI (5000+ lines, modular views)
├── tracking_functions.py        # Core tracking algorithm
├── model_downloader.py          # Automated model fetching
├── interpolate.py               # RIFE wrapper
├── gui_styles.py                # Centralized theming
├── interpolation_view.py        # Interpolation tab UI
├── interpolation_dialog.py      # Interpolation dialogs
├── debug_viewer.py              # Debug visualization
├── matlab_equivalent_functions.py  # MATLAB compatibility layer
└── rife_model/                  # RIFE neural network
    ├── RIFE.py                  # Main RIFE model
    ├── IFNet.py                 # Feature extraction
    └── pytorch_msssim/          # Perceptual loss
```

---

## Technical Specifications

### Supported Formats

| Category | Formats | Notes |
|----------|---------|-------|
| **Input Images** | `.tif`, `.tiff`, `.png`, `.jpg` | 8/16/32-bit |
| **Segmentation Output** | `.tif` (16-bit labeled) | Instance masks |
| **Tracking Output** | `.npy`, `.mat`, `.pklt` | NumPy, MATLAB, Pickle |

<!-- ### System Requirements

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| **Python** | 3.10 | 3.10-3.12 |
| **RAM** | 8 GB | 16+ GB |
| **GPU** | Optional | NVIDIA GPU with CUDA 11.8+ |
| **Storage** | 2 GB | 10+ GB (for datasets) | -->

<!-- ### Backends

- **Segmentation:** Cellpose 2.1.0 / Omnipose 0.4.4 (GPU-accelerated)
- **Interpolation:** RIFE (ECCV 2022) with PyTorch 1.12.0
- **Tracking:** Custom overlap-based algorithm (NumPy/SciPy)
- **GUI:** PySide6 6.9.0 (Qt6) -->

---

## Troubleshooting

### GPU Not Detected

```python
import torch
print(torch.cuda.is_available())  # Should be True
```

**Fix:**
```bash
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
```

### Models Not Found

```bash
# Download to current directory
filai-download-models --all

# Or specify custom path
filai-download-models --all --output-dir /path/to/models
```

### Import Errors (Cellpose/Omnipose)

```bash
pip install --no-cache-dir cellpose==3.0.8 omnipose==1.0.6
```

### RIFE Not Available Warning

```bash
pip install opencv-python-headless
```

<!-- For more issues, see [INSTALLATION.md](INSTALLATION.md) or [open an issue](https://github.com/vatsal-dp/filogui3/issues). -->

---

<!-- ## License

--- -->

## Acknowledgements

Sandhya Neupane, Susmita Gaire, Kevin Garcia, Tika B. Adhikari, Orlando Arguello-Miranda

1 Plant and Microbial Biology, North Carolina State University.  
2 Entomology and Plant Pathology, North Carolina State University.  
3 Crop and Soil Sciences, North Carolina State University.  
Current address: Department of Natural Sciences, Tennessee Wesleyan University.
