Releases: talmolab/sleap
SLEAP v1.6.2
SLEAP v1.6.2 is a patch release with bug fixes for GUI performance, data integrity, training configuration, and updated dependencies.
Quick install/upgrade:
uv tool install --python 3.13 "sleap[nn]==1.6.2" --torch-backend auto
See the v1.6.0 release notes for full details on the latest major release.
Bug Fixes
Fix GUI freeze when adding instances on suggested frames (#2632)
Fixed a performance regression that caused the GUI to freeze for ~10 minutes when adding instances on videos with large suggestion sets (~100k suggestions). The status bar update had O(n×m) complexity which has been reduced to O(n+m).
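The shape of this fix can be illustrated with a pure-Python sketch (the function and key names are hypothetical, not SLEAP's actual internals): precomputing a set of suggestion keys turns a per-frame linear scan into an O(1) membership test.

```python
# Hypothetical sketch of the O(n*m) -> O(n+m) fix: instead of scanning the
# full suggestion list once per labeled frame, build a set of suggestion
# keys up front and use O(1) membership tests.

def count_labeled_suggestions_slow(labeled_frames, suggestions):
    # O(n * m): `lf in suggestions` is a linear scan of the list.
    return sum(1 for lf in labeled_frames if lf in suggestions)

def count_labeled_suggestions_fast(labeled_frames, suggestions):
    # O(n + m): build the lookup set once (O(m)), then n O(1) lookups.
    suggested = set(suggestions)
    return sum(1 for lf in labeled_frames if lf in suggested)

# Keys are (video, frame_index) pairs, mirroring a ~100k suggestion set.
suggestions = [("video.mp4", i) for i in range(100_000)]
labeled = [("video.mp4", 5), ("video.mp4", 99_999), ("other.mp4", 3)]
print(count_labeled_suggestions_fast(labeled, suggestions))  # 2
```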
Fix skeleton node removal not updating instance point data (#2621)
Fixed a critical bug where deleting nodes from a skeleton via the GUI did not update instance point arrays. This caused file corruption where saved files couldn't be reopened due to shape mismatches. Files now correctly update point data when nodes are removed.
Improve training dialog UI with smart field visibility (#2619)
Training dialog improvements:
- Added toggle visibility for early stopping and OHKM parameter fields
- Hide OHKM fields for centroid models where they don't apply
- Fixed incorrect label "Sigma for Edges" → "Sigma for Identity" in multi-class bottom-up pipeline options
Fix Hydra override parsing error in exported train-script.sh (#2612)
Fixed OverrideParseException errors when running exported training scripts on SLURM clusters. The ckpt_dir and run_name values are now properly quoted to handle special characters.
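A hedged sketch of the underlying technique: quoting each override token with `shlex.quote` so spaces and shell metacharacters survive in a generated script. The helper and paths below are illustrative, not the actual train-script contents.

```python
# Illustrative sketch of the quoting fix: an override such as
# ckpt_dir=/path/with spaces must be quoted in the generated shell script,
# or the shell splits the token before Hydra ever parses it.
import shlex

def hydra_override(key, value):
    # Quote the whole key=value token so spaces, parentheses, and other
    # shell metacharacters survive intact.
    return shlex.quote(f"{key}={value}")

line = " ".join([
    "sleap-nn", "train", "config.yaml",
    hydra_override("ckpt_dir", "/models/run (final)"),
    hydra_override("run_name", "mouse 2024-01-01"),
])
print(line)
```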
Fix anchor part sync for top-down-id pipeline (#2610)
Fixed anchor_part selection not syncing correctly for the top-down-id pipeline in the training configuration dialog.
Dependency Updates
sleap-io 0.6.4 → 0.6.5
- ROI and Segmentation Mask support (experimental): New `ROI` class for vector geometry and `SegmentationMask` class for raster masks
- COCO Detection & Segmentation I/O (experimental): Read/write bounding box, polygon, and RLE mask annotations
- Ultralytics Detection & Segmentation I/O (experimental): Extended YOLO format support
- NumPy 2.x compatibility: Fixed serialization errors when saving `.slp` files
See the sleap-io v0.6.5 release notes for full details.
sleap-nn 0.1.0 → 0.1.2
- 6.7x faster bottom-up inference on NVIDIA A40 GPUs
- TUI Config Generator: Interactive wizard for generating training configs on remote systems
- Post-processing filters: New `--filter_min_visible_nodes` and `--filter_min_mean_node_score` options
- Multi-GPU fixes: Fixed DDP collective mismatch crashes and NCCL deadlocks
- Tracking fixes: Fixed track stealing bug and spurious track creation
See the sleap-nn v0.1.1 and v0.1.2 release notes for full details.
Full Changelog: v1.6.1...v1.6.2
SLEAP v1.6.1
SLEAP v1.6.1 is a patch release with bug fixes for Linux Qt compatibility, training configuration saving, and a new --video-backend CLI option.
Quick install/upgrade:
uv tool install --python 3.13 "sleap[nn]==1.6.1" --torch-backend auto
See the v1.6.0 release notes for full details on the latest major release.
Bug Fixes
Fix Linux Qt library conflicts (#2604)
On some Linux distributions (Debian 12, Fedora 43, and others with system Qt 6 packages), SLEAP could crash on launch with ImportError: undefined symbol errors due to conflicts between system Qt libraries and PySide6's bundled Qt. SLEAP now ensures PySide6's bundled Qt libraries take precedence on Linux by setting LD_LIBRARY_PATH and QT_PLUGIN_PATH before launching.
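The general technique can be sketched as follows; the PySide6 path and helper name are assumptions for illustration, not SLEAP's actual launcher code.

```python
# Hedged sketch: prepending a bundled Qt library directory to
# LD_LIBRARY_PATH (and similarly QT_PLUGIN_PATH) so it takes precedence
# over system Qt libraries when the process launches.
import os

def prepend_env_path(env, var, path):
    # Put `path` first so the dynamic linker searches it before system dirs.
    existing = env.get(var, "")
    env[var] = path if not existing else f"{path}{os.pathsep}{existing}"
    return env

env = {"LD_LIBRARY_PATH": "/usr/lib/x86_64-linux-gnu"}
qt_lib = "/opt/venv/lib/python3.13/site-packages/PySide6/Qt/lib"
prepend_env_path(env, "LD_LIBRARY_PATH", qt_lib)
print(env["LD_LIBRARY_PATH"])
```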
Fix training config save dialog bugs (#2603)
Fixed two bugs in the Training Configuration dialog reported in #2602:
- Run names were ignored when saving configs: User-entered run names were overwritten with auto-generated timestamps when using "Save configuration files..." or "Export training job package...". Run names are now preserved correctly.
- YAML file picker showed no files: The "Select training config file..." dropdown's file browser failed to show `.yaml`/`.yml` files due to an incorrect file filter separator. Fixed to use Qt's expected format.
New Features
--video-backend CLI option (#2604)
A new --video-backend flag allows selecting the video decoding backend when launching SLEAP:
sleap --video-backend FFMPEG
This is useful for working around h264 codec errors that can occur with OpenCV's default backend on some Linux systems. The setting persists across sessions via user preferences.
Other Changes
- Added `workflow_dispatch` trigger to the build workflow for manual CI re-runs
- Added display/GUI and video codec troubleshooting documentation
Full Changelog: v1.6.0...v1.6.1
SLEAP v1.6.0
What's New in SLEAP 1.6
SLEAP 1.6 is a major update with new backbone architectures, a redesigned training and inference experience, automated label quality control, ONNX/TensorRT export for faster deployment, a unified CLI, and MANY bug fixes. This release spans 70+ PRs since v1.5.2.
Quick start:
uv tool install --python 3.13 "sleap[nn]==1.6.0" --torch-backend auto
See below for more detailed installation instructions.
Major Changes
New Backbone Architectures in Training Dialog (#2579)
SLEAP 1.6 brings ConvNeXt and Swin Transformer backbone support to the GUI training dialog, alongside UNet:
- ConvNeXt -- Modern convolutional architecture in tiny/small/base/large variants (28M-198M parameters) with optional ImageNet pretrained weights for faster convergence.
- Swin Transformer (SwinT) -- Transformer-based backbone in tiny/small/base variants (28M-88M parameters) with optional ImageNet pretrained weights.
Select these from the training dialog's new backbone selector dropdown. See the sleap-nn model documentation for details.
Unified sleap CLI
SLEAP now has a single sleap command as the primary entry point. Running sleap launches the GUI, and subcommands provide access to all tools:
sleap # Launch the GUI
sleap doctor # System diagnostics and troubleshooting
sleap export-model # Export models to ONNX/TensorRT
All sleap-io and sleap-nn CLI commands are now integrated as sleap subcommands (#2524, #2541, #2559, #2587, #2595, #2597):
| Command | Description |
|---|---|
| `sleap doctor` | System diagnostics and troubleshooting |
| `sleap show` | Display labels file summary |
| `sleap convert` | Convert between label formats |
| `sleap split` | Split labels into train/val/test |
| `sleap unsplit` | Recombine split labels |
| `sleap merge` | Merge multiple labels files |
| `sleap render` | Render pose videos |
| `sleap fix` | Fix/repair labels files |
| `sleap embed` / `sleap unembed` | Manage embedded video data |
| `sleap trim` | Trim labels to subset |
| `sleap reencode` | Re-encode embedded videos |
| `sleap transform` | Coordinate-aware video transformations |
| `sleap filenames` | List video filenames in labels |
| `sleap train` | Train models (from sleap-nn) |
| `sleap predict` | Run inference (from sleap-nn) |
| `sleap export-model` | Export models to ONNX/TensorRT |
ONNX & TensorRT Model Export
Export trained models to optimized formats for 3-6x faster inference (#2573, #2594, #2595, #2597):
sleap export-model model.ckpt -o model.onnx --format onnx
sleap export-model model.ckpt -o model.engine --format tensorrt
Run inference on exported models:
sleap predict model.onnx video.mp4 -o predictions.slp
Benchmark results (NVIDIA RTX A6000, batch size 8):
| Model Type | PyTorch | TensorRT FP16 | Speedup |
|---|---|---|---|
| Single Instance | 3,111 FPS | 11,039 FPS | 3.5x |
| Centroid | 453 FPS | 1,829 FPS | 4.0x |
| Top-Down | 94 FPS | 525 FPS | 5.6x |
| Bottom-Up | 113 FPS | 524 FPS | 4.6x |
To install export dependencies, reinstall with the appropriate extra: "sleap[nn,export]==1.6.0" (ONNX), "sleap[nn,export-gpu]==1.6.0" (ONNX + GPU), or "sleap[nn,tensorrt]==1.6.0" (TensorRT). See the sleap-nn Export Guide for full benchmarks and details.
Label Quality Control (#2547)
New sleap.qc module with GMM-based anomaly detection to automatically identify annotation errors. Accessible via Analyze > Label QC... in the GUI.
- Detects 10+ error types: isolated misses, jitter, visibility errors, scale issues, left-right swaps, gross misses, missing instances, and duplicates
- Dockable GUI widget with score histograms and sensitivity controls
- Keyboard navigation (Space/Shift+Space) to quickly review flagged instances
- Export to CSV or add flagged instances to Suggestions for batch review
Redesigned Training & Inference Dialogs (#2506, #2509, #2519, #2556, #2557, #2579)
The training and inference dialogs have been completely redesigned with native Qt for a faster, more polished experience:
- 55x faster config loading via rapidyaml and lazy loading
- Smaller dialog that fits on 1280x720 screens
- Augmentation controls simplified with on/off checkboxes and rotation presets (default: full ±180°)
- Device and worker settings default from user preferences
- Random Seed field for reproducible train/validation splits
- Evaluation metrics can be computed at configurable epoch intervals (mOKS, mAP, mAR, PCK, distance metrics logged to WandB)
- Prediction handling modes: Choose Keep, Replace, or Clear all predictions during inference
- "Random sample (current video)" inference target for quick model testing
- Form state persists after clicking Cancel
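For reference, the mOKS metric listed above is built on COCO-style object keypoint similarity (OKS); a minimal pure-Python sketch of the core formula, with illustrative constants:

```python
# COCO-style object keypoint similarity (OKS). The falloff constant `k`
# and the scale `s` are illustrative here; real evaluators use
# per-keypoint constants and the object's segment area.
import math

def oks(gt, pred, s, k=0.1):
    # gt/pred: matched lists of (x, y) coordinates for visible keypoints.
    sims = []
    for (gx, gy), (px, py) in zip(gt, pred):
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        sims.append(math.exp(-d2 / (2 * (s ** 2) * (k ** 2))))
    return sum(sims) / len(sims)

gt = [(10.0, 10.0), (20.0, 20.0)]
print(round(oks(gt, gt, s=30.0), 3))  # 1.0: a perfect prediction scores 1
```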
Real-Time Inference Progress (#2575)
The inference dialog now provides detailed progress feedback:
- Threaded inference: UI remains responsive during long-running jobs
- Live progress display: `Predicted: 100/1,410 | FPS: 38.4 | ETA: 34s`
- Log viewer: Dark-themed scrollable log showing subprocess output in real time
- Working cancel button: Properly terminates the inference process
- "Delete All Predictions" now completes in milliseconds (was minutes on large datasets)
Video Rendering Overhaul (#2558)
Rendering is now powered by sleap-io's rendering engine with a live preview before exporting:
- 12+ new color palettes and 5 marker shapes
- Color by track, instance, or node
- Alpha transparency support for overlays
- Non-blocking video export with progress bar and cancel support
See the rendering documentation for details.
Filter Overlapping Instances (#2574)
New post-inference deduplication to remove duplicate predictions:
- IOU method: Filter by bounding box overlap
- OKS method: Filter by keypoint-based similarity
- Available in both the GUI (checkbox + threshold slider) and CLI (`--filter_overlapping` flags)
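A minimal pure-Python sketch of greedy IOU-based deduplication, the idea behind this filter (the real implementation lives in sleap-nn; this standalone version is for illustration only):

```python
# Greedy NMS over bounding boxes: visit boxes by descending score and drop
# any box that overlaps an already-kept box at or above the threshold.

def iou(a, b):
    # Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def filter_overlapping(boxes, scores, threshold=0.8):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < threshold for j in keep):
            keep.append(i)
    return sorted(keep)

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.6, 0.8]
print(filter_overlapping(boxes, scores, threshold=0.5))  # [0, 2]
```

Lowering the threshold removes more near-duplicates: at the default 0.8 all three boxes above survive, while at 0.5 the second box (IOU of roughly 0.68 with the first) is dropped.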
sleap doctor Diagnostics (#2524, #2551, #2553)
The sleap doctor command has been overhauled:
- Consolidated, copy-paste-friendly diagnostic output
- Git info display for editable installs (branch, commit hash)
- Comprehensive UV and conda introspection with conflict warnings
- System resources display (RAM and disk usage)
- `-o`/`--output` flag to save diagnostics to file
- Spinner during PyTorch import to show the command is working
How to Install
Step 1: Install uv (skip if already installed)
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 2: Install SLEAP v1.6.0
uv tool install --python 3.13 "sleap[nn]==1.6.0" --torch-backend auto
That's it! SLEAP is now available system-wide. The --torch-backend auto flag automatically detects your GPU (NVIDIA, AMD, Intel, or CPU). Run uv self update first if you get an error about this flag.
Step 3: Verify installation and launch
sleap doctor # Check your setup
sleap        # Launch the GUI
Note: The `sleap-label` command from v1.5.x still works as an alias for launching the GUI.
Upgrading from v1.5.x?
uv tool install --reinstall --python 3.13 "sleap[nn]==1.6.0" --torch-backend auto
Quick data viewing (no permanent install)
uvx sleap labels.slp
Version compatibility
| SLEAP | sleap-io | sleap-nn |
|---|---|---|
| 1.6.x | 0.6.x | 0.1.x |
| 1.5.x | 0.5.x | 0.0.x |
Note: Starting with SLEAP v1.5+, all deep learning functionality is powered by the PyTorch-based `sleap-nn` backend. TensorFlow models (with `UNet` backbones) from earlier versions are still supported for inference. Refer to the Migrating to 1.5+ docs for more details!
Dependency Updates
sleap-io v0.6.4 (#2597)
- ~90x faster SLP loading with lazy loading mode for large prediction files
- Pose rendering at ~50 FPS for publication-ready videos (`sleap render`)
- Data codecs for converting Labels to pandas DataFrames, NumPy arrays, and dictionaries
- Content-based video matching for reliable cross-platform merges even when file paths differ
- New `Labels.match()` API for inspecting matching results without merging
- Negative frames support: Mark frames as containing no instances (`LabeledFrame.is_negative`)
- 8 new CLI commands: `merge`, `unsplit`, `fix`, `embed`, `unembed`, `trim`, `reencode`, `transform`
- CSV format support for MATLAB interoperability
- Safe video matching prevents silent data corruption during merges
- Embedded images preserved during CLI operations (`sio fix`, `sio convert`, etc.)
- 23x faster `.pkg.slp` saves, 2.7x faster embedded video loading
- Bug fixes for video matching, rendering, embedded videos, skeleton consolidation
sleap-nn v0.1.0 (#2597)
- ONNX/TensorRT export for 3-6x faster inference
- Epoch-end evaluation metrics: mOKS, mAP, mAR, PCK, and distance metrics logged to WandB
- Post-inference filtering: Greedy NMS to remove duplicate predictions
- 17-51x faster peak refinement (integral refinement now works on Mac)
- GUI progress mode: JSON output for real-time progress in SLEAP GUI
- Faster inference via GPU-accelerated normalization (17% typ...
SLEAP v1.6.0a3
About the v1.6 Pre-release Series
This is a pre-release for SLEAP v1.6.0. It contains many new features and improvements, but is not yet considered stable. For production use, see v1.5.2.
We are releasing a series of pre-releases that incrementally build towards the stable v1.6.0 release. Each pre-release adds new features and bug fixes:
| Version | Summary |
|---|---|
| v1.6.0a0 | Unified sleap CLI, redesigned training dialog (55x faster loading), bug fixes for adding instances from predictions |
| v1.6.0a1 | Label QC for automated error detection, 8 new CLI commands from sleap-io, video rendering with live preview |
| v1.6.0a2 | Revamped installation docs, epoch-end evaluation metrics, content-based video matching, bug fix for export training package |
| v1.6.0a3 (current) | ConvNeXt/SwinT backbones, ONNX/TensorRT export, real-time inference progress, macOS bug fixes |
Note: Starting with SLEAP v1.5+, all deep learning functionality is powered by the PyTorch-based `sleap-nn` backend. TensorFlow models (with `UNet` backbones) from earlier versions are still supported for inference. Refer to the Migrating to 1.5+ docs for more details!
How to Install
Step 1: Install uv (skip if already installed)
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 2: Install SLEAP v1.6.0a3
uv tool install --python 3.13 "sleap[nn]==1.6.0a3" --with "sleap-io==0.6.3" --with "sleap-nn==0.1.0a4" --prerelease allow --torch-backend auto
That's it! SLEAP is now available system-wide. The --torch-backend auto flag automatically detects your GPU (NVIDIA, AMD, Intel, or CPU). Run uv self update first if you get an error about this flag.
Step 3: Verify installation
sleap doctor
Upgrading from v1.6.0a2?
uv tool upgrade sleap --upgrade-package sleap-io --upgrade-package sleap-nn
Or for a clean reinstall:
uv tool install --reinstall --python 3.13 "sleap[nn]==1.6.0a3" --with "sleap-io==0.6.3" --with "sleap-nn==0.1.0a4" --prerelease allow --torch-backend auto
Rollback to stable
If you encounter issues, rollback to the latest stable release:
uv tool install --python 3.13 "sleap[nn]==1.5.2" --torch-backend auto
Version compatibility
| SLEAP | sleap-io | sleap-nn |
|---|---|---|
| 1.6.0a3 | 0.6.3 | 0.1.0a4 |
| 1.6.0a2 | 0.6.2 | 0.1.0a2 |
| 1.6.0a1 | 0.6.1 | 0.1.0a1 |
What's New in v1.6.0a3
New Backbone Architectures (#2579)
Train models using modern transformer-based and ConvNeXt architectures as alternatives to UNet:
- ConvNeXt: Select from tiny/small/base/large variants (28M-198M parameters) with optional ImageNet pretrained weights for faster convergence
- Swin Transformer (SwinT): Select from tiny/small/base variants (28M-88M parameters) with optional ImageNet pretrained weights
Access these from the training dialog's new backbone selector dropdown.
ONNX/TensorRT Model Export (#2573)
Export trained models to optimized formats for 3-6x faster inference:
sleap-nn-export model.ckpt -o model.onnx --format onnx
sleap-nn-export model.ckpt -o model.engine --format tensorrt
Run inference on exported models:
sleap-nn-predict model.onnx video.mp4 -o predictions.slp
Benchmark results (NVIDIA RTX A6000, batch size 8):
| Model Type | PyTorch | TensorRT FP16 | Speedup |
|---|---|---|---|
| Single Instance | 3,111 FPS | 11,039 FPS | 3.5x |
| Centroid | 453 FPS | 1,829 FPS | 4.0x |
| Top-Down | 94 FPS | 525 FPS | 5.6x |
| Bottom-Up | 113 FPS | 524 FPS | 4.6x |
See the sleap-nn Export Guide for full benchmarks and usage details.
Real-Time Inference Progress (#2575)
The inference dialog now provides detailed progress feedback:
- Threaded inference: UI remains responsive during long-running jobs
- Live progress display: `Predicted: 100/1,410 | FPS: 38.4 | ETA: 34s`
- Log viewer: Dark-themed scrollable log showing subprocess output in real time
- Working cancel button: Properly terminates inference when clicked
Filter Overlapping Instances (#2574)
New GUI controls to remove duplicate/overlapping predictions after inference:
- Enable filtering checkbox in the Preprocessing/Postprocessing section
- Method selection: IOU (bounding box overlap) or OKS (keypoint-based similarity)
- Threshold control: Lower values = more aggressive filtering (default: 0.8)
Also available via CLI:
sleap-nn-track model.ckpt video.mp4 --filter_overlapping --filter_overlapping_method oks --filter_overlapping_threshold 0.5
Evaluation During Training (#2579)
New evaluation section in the training dialog:
- Enable evaluation checkbox to compute metrics during training
- Frequency control to set how often evaluation runs (in epochs)
- Metrics logged to WandB: mOKS, mAP, mAR, PCK, distance percentiles
Performance Improvements
- Delete All Predictions (#2575): Now completes in milliseconds instead of minutes on large datasets
- 17-51x faster peak refinement in sleap-nn (v0.1.0a4): Enables integral refinement on Mac (previously disabled)
Bug Fixes
macOS Fixes
- Fixed crash when opening training dialog on macOS with Homebrew installed. The crash was caused by a conflict between Homebrew's libpng and macOS's ImageIO framework during Qt font rendering. (#2571)
- Fixed dialog button ordering on macOS. Training and inference dialog buttons now appear in consistent order across all platforms (Mac, Windows, Linux). (#2576)
- Fixed default button highlighting: The "Run" button now correctly appears as the default (highlighted) button instead of "Copy to clipboard". (#2576)
UI Fixes
- Fixed dark mode for training dialog main tab. The background now properly follows the system theme instead of remaining white. (#2572)
- Fixed loss monitor to recognize sleap-nn's metric naming convention (`train/loss`, `val/loss`). (#2579)
Other Fixes
- Fixed ConvNeXt/SwinT training crash in sleap-nn: Resolved skip connection channel mismatch that caused errors during validation. (sleap-nn v0.1.0a4)
- Fixed inference progress ending at 99%: Now correctly shows 100% when complete. (sleap-nn v0.1.0a4)
- Fixed CSV learning rate logging: The `learning_rate` column in `training_log.csv` is no longer empty. (sleap-nn v0.1.0a4)
Dependency Updates
sleap-nn v0.1.0a4
Changes since v0.1.0a2 (the previous minimum version):
- ONNX/TensorRT Export: Export models to optimized formats for 3-6x faster inference
- Post-Inference Filtering: Greedy NMS to remove duplicate predictions (`--filter_overlapping`)
- 17-51x Faster Peak Refinement: Fast tensor indexing replaces kornia's `crop_and_resize`
- GUI Progress Mode: New `--gui` flag enables JSON output for real-time GUI progress
- Simplified Train CLI: `sleap-nn train config.yaml` (positional config path)
- Bug fixes for ConvNeXt/SwinT training, CSV logging, progress display
sleap-io v0.6.3
Changes since v0.6.2 (the previous minimum version):
- Negative Frames Support: Mark frames as containing no instances (`LabeledFrame.is_negative`)
- Embedded Images Preserved: CLI commands (`sio fix`, `sio convert`, etc.) no longer strip embedded images
- Smart Skeleton Consolidation: Compatible skeletons are reassigned instead of deleted
Full Changelog
Enhancements
- Add ConvNeXt and SwinT backbone options to training dialog by @talmo in #2579
- Improve inference dialog with real-time progress and UI fixes by @talmo in #2575
- Add filter_overlapping controls to training/inference dialogs by @talmo in #2574
- Bump sleap-io to 0.6.3 and sleap-nn to 0.1.0a3, add new CLI commands by @talmo in #2573
Fixes
- Fix macOS crash caused by Homebrew libpng conflict by @talmo in #2571
- Fix training dialog main tab background to match config tabs by @talmo in #2572
- Fix Mac dialog button order and default button styling by @talmo in #2576
Workflows
- A...
SLEAP v1.6.0a2
About the v1.6 Pre-release Series
This is a pre-release for SLEAP v1.6.0. It contains many new features and improvements, but is not yet considered stable. For production use, see v1.5.2.
We are releasing a series of pre-releases that incrementally build towards the stable v1.6.0 release. Each pre-release adds new features and bug fixes:
| Version | Summary |
|---|---|
| v1.6.0a0 | Unified sleap CLI, redesigned training dialog (55x faster loading), bug fixes for adding instances from predictions |
| v1.6.0a1 | Label QC for automated error detection, 8 new CLI commands from sleap-io, video rendering with live preview |
| v1.6.0a2 (current) | Revamped installation docs, epoch-end evaluation metrics, content-based video matching, bug fix for export training package |
Note: Starting with SLEAP v1.5+, all deep learning functionality is powered by the PyTorch-based `sleap-nn` backend. TensorFlow models (with `UNet` backbones) from earlier versions are still supported for inference. Refer to the Migrating to 1.5+ docs for more details!
How to Install
Step 1: Install uv (skip if already installed)
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 2: Install SLEAP v1.6.0a2
uv tool install --python 3.13 "sleap[nn]==1.6.0a2" --with "sleap-io==0.6.2" --with "sleap-nn==0.1.0a2" --prerelease allow --torch-backend auto
That's it! SLEAP is now available system-wide. The --torch-backend auto flag automatically detects your GPU (NVIDIA, AMD, Intel, or CPU). Run uv self update first if you get an error about this flag.
Step 3: Verify installation
sleap doctor
Upgrading from v1.6.0a1?
uv tool upgrade sleap --upgrade-package sleap-io --upgrade-package sleap-nn
Or for a clean reinstall:
uv tool install --reinstall --python 3.13 "sleap[nn]==1.6.0a2" --with "sleap-io==0.6.2" --with "sleap-nn==0.1.0a2" --prerelease allow --torch-backend auto
Rollback to stable
If you encounter issues, rollback to the latest stable release:
uv tool install --python 3.13 "sleap[nn]==1.5.2" --torch-backend auto
Version compatibility
| SLEAP | sleap-io | sleap-nn |
|---|---|---|
| 1.6.0a2 | 0.6.2 | 0.1.0a2 |
| 1.6.0a1 | 0.6.1 | 0.1.0a1 |
| 1.6.0aN | 0.6.x | 0.1.0aN |
| 1.6.x | 0.6.x | 0.1.x |
| 1.5.x | 0.5.x | 0.0.x |
What's New in v1.6.0a2
- Revamped Installation Documentation:
  - Complete rewrite of installation docs with simplified workflow (#2567)
  - Single universal install command for all platforms using `--torch-backend auto`
  - Reduced from 8 installation paths to 2 (tool install + development setup)
  - New `uvx sleap labels.slp` option for viewing data without permanent installation
  - Streamlined upgrade flow with `uv tool upgrade sleap`
- Python 3.13 Default:
  - Python 3.13 is now the default recommended version (#2565)
  - Python 3.12 remains supported
- Bug Fixes:
- sleap-io v0.6.2:
  - Content-Based Video Matching: Videos are now automatically matched by pose annotations or pixel content, enabling reliable cross-platform merges even when file paths differ
  - New `Labels.match()` API: Inspect matching results without merging, ideal for evaluation workflows
  - Video Color Mode Control: New `Labels.set_video_color_mode()` method and `sio fix --video-color` CLI option
  - Bug fixes for HDF5 dataset matching and provenance conflict handling
- sleap-nn v0.1.0a2:
  - Epoch-End Evaluation Metrics: Real-time mOKS, mAP, mAR, PCK, and distance metrics logged to WandB during training
  - Robust Video Matching: Uses sleap-io's `Labels.match()` API for better cross-platform evaluation
  - Bug fixes for embedded video handling and centroid model ground truth matching
Full Changelog
Enhancements
Fixes
Workflows
- Fix docs workflow race condition with concurrency group by @talmo in #2563
- Housekeeping: Python 3.13 default and sleap-support skill by @talmo in #2565
Dependencies
Full Changelog: v1.6.0a1...v1.6.0a2
SLEAP v1.6.0a1
About the v1.6 Pre-release Series
This is a pre-release for SLEAP v1.6.0. It contains many new features and improvements, but is not yet considered stable. For production use, see v1.5.2.
We are releasing a series of pre-releases that incrementally build towards the stable v1.6.0 release. Each pre-release adds new features and bug fixes:
| Version | Summary |
|---|---|
| v1.6.0a0 | Unified sleap CLI, redesigned training dialog (55x faster loading), bug fixes for adding instances from predictions |
| v1.6.0a1 (current) | Label QC for automated error detection, 8 new CLI commands from sleap-io, video rendering with live preview |
Note: Starting with SLEAP v1.5+, all deep learning functionality is powered by the PyTorch-based `sleap-nn` backend. TensorFlow models (with `UNet` backbones) from earlier versions are still supported for inference. Refer to the Migrating to 1.5+ docs for more details!
How to Install
Step 1: Install uv (skip if already installed)
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 2: Install SLEAP v1.6.0a1
Windows/Linux with NVIDIA GPU (CUDA 12.8)
uv tool install --reinstall --python 3.12 "sleap[nn]==1.6.0a1" --with "sleap-io==0.6.1" --with "sleap-nn==0.1.0a1" --prerelease allow --index https://download.pytorch.org/whl/cu128 --index https://pypi.org/simple
Windows/Linux with NVIDIA GPU (CUDA 13.0)
uv tool install --reinstall --python 3.12 "sleap[nn]==1.6.0a1" --with "sleap-io==0.6.1" --with "sleap-nn==0.1.0a1" --prerelease allow --index https://download.pytorch.org/whl/cu130 --index https://pypi.org/simple
Windows/Linux without GPU (CPU only)
uv tool install --reinstall --python 3.12 "sleap[nn]==1.6.0a1" --with "sleap-io==0.6.1" --with "sleap-nn==0.1.0a1" --prerelease allow --index https://download.pytorch.org/whl/cpu --index https://pypi.org/simple
macOS
uv tool install --reinstall --python 3.12 "sleap[nn]==1.6.0a1" --with "sleap-io==0.6.1" --with "sleap-nn==0.1.0a1" --prerelease allow
Step 3: Verify installation
sleap doctor
Upgrading from v1.6.0a0?
Use the same commands as above. The --reinstall flag will create a clean environment with the new dependencies.
Rollback to stable
If you encounter issues, rollback to the latest stable release:
# Windows/Linux (CUDA 12.8)
uv tool install --reinstall --python 3.12 "sleap[nn]==1.5.2" --index https://download.pytorch.org/whl/cu128 --index https://pypi.org/simple
# Windows/Linux (CPU only)
uv tool install --reinstall --python 3.12 "sleap[nn]==1.5.2" --index https://download.pytorch.org/whl/cpu --index https://pypi.org/simple
# macOS
uv tool install --reinstall --python 3.12 "sleap[nn]==1.5.2"
Version compatibility
| SLEAP | sleap-io | sleap-nn |
|---|---|---|
| 1.6.0a1 | 0.6.1 | 0.1.0a1 |
| 1.6.0aN | 0.6.x | 0.1.0aN |
| 1.6.x | 0.6.x | 0.1.x |
| 1.5.x | 0.5.x | 0.0.x |
What's New in v1.6.0a1
- Label Quality Control (QC):
  - New `sleap.qc` module with GMM-based anomaly detection to automatically identify annotation errors (#2547)
  - Detects 10+ error types: isolated misses, jitter, visibility errors, scale issues, left-right swaps, gross misses, missing instances, and duplicates
  - Dockable GUI widget accessible via Analyze > Label QC... with score histograms and sensitivity controls
  - Keyboard navigation (Space/Shift+Space) to quickly navigate flagged instances
  - Export to CSV or add flagged instances to Suggestions for review
- Enhanced CLI:
  - 8 new CLI commands from sleap-io: `sleap merge`, `sleap unsplit`, `sleap fix`, `sleap embed`, `sleap unembed`, `sleap trim`, `sleap reencode`, `sleap transform` (#2559)
  - See the sleap-io CLI documentation for detailed usage
- Video Rendering Overhaul:
  - Now powered by sleap-io's rendering engine (see rendering documentation for details) (#2558)
  - Live preview of rendered frames with all style options before exporting
  - 12+ new color palettes and 5 marker shapes with options to color by track, instance, or node
  - Alpha transparency support for overlays
  - Non-blocking video export with progress bar and cancel support
- Training Dialog Improvements:
  - Form state now persists after clicking Cancel (#2557)
  - Device and worker settings default from user preferences instead of being overwritten by profiles (#2557)
  - Updated all baseline profiles to use full ±180° rotation augmentation (#2557)
  - Added Random Seed field for reproducible train/validation splits (#2557)
  - New tooltips for Input Scaling, Batch Size, Predict On, and tracker fields (#2556)
- Inference Improvements:
  - New "Random sample (current video)" inference target option (#2555)
- `sleap doctor` Improvements:
  - Consolidated, copy-paste-friendly diagnostic output (#2553)
  - Git info display for editable installs (branch, commit hash) (#2553)
  - Comprehensive UV and conda introspection with conflict warnings (#2553)
  - System resources display (RAM and disk usage) (#2553)
  - New `-o`/`--output` flag to save diagnostics to file (#2553)
  - Added spinner during PyTorch import to show command is working (#2551)
  - Fixed path truncation in tables (#2551)
- Bug Fixes:
  - Fixed terminal spam from "Error processing frame" messages when scrubbing `.pkg.slp` files (#2554)
- sleap-io v0.6.1:
  - 8 new CLI commands: `merge`, `unsplit`, `fix`, `embed`, `unembed`, `trim`, `reencode`, `transform`
  - CSV format support for MATLAB interoperability
  - Coordinate-aware video transformations
  - 23x faster `.pkg.slp` saves, 2.7x faster embedded video loading
  - Bug fixes for video matching, rendering, and embedded videos
- sleap-nn v0.1.0a1:
  - Training progress bar during dataset caching (no more apparent "freeze")
  - Automatic WandB local log cleanup to save disk space
  - Simplified log format for cleaner output
Full Changelog
Enhancements
- Add sleap.qc module for label quality control by @talmo in #2547
- Add sleap-io v0.6.1 CLI commands by @talmo in #2559
- Upgrade video rendering to use sleap-io API with live preview and non-blocking progress by @talmo in #2558
- Improve training config dialog UX by @talmo in #2557
- Add missing tooltips to training config and tracker form fields by @talmo in #2556
- Add "Random sample (current video)" inference target option by @talmo in #2555
- Improve sleap doctor with consolidated diagnostic output by @talmo in #2553
- Improve sleap doctor UX: add spinner and fix path truncation by @talmo in #2551
Fixes
- Suppress frame error spam when scrubbing pkg.slp files by @talmo in #2554
- Add --exclude_user_labeled flag to sleap-nn-track CLI shim by @talmo in #2552
Workflows
Dependencies
Full Changelog: v1.6.0a0...v1.6.0a1
SLEAP v1.6.0a0
About the v1.6 Pre-release Series
This is a pre-release for SLEAP v1.6.0. It contains many new features and improvements, but is not yet considered stable. For production use, see v1.5.2.
We are releasing a series of pre-releases that incrementally build towards the stable v1.6.0 release. Each pre-release adds new features and bug fixes:
| Version | Summary |
|---|---|
| v1.6.0a0 (current) | Unified sleap CLI, redesigned training dialog (55x faster loading), bug fixes for adding instances from predictions |
| v1.6.0a1 | Label QC for automated error detection, 8 new CLI commands from sleap-io, video rendering with live preview |
Note: Starting with SLEAP v1.5+, all deep learning functionality is powered by the PyTorch-based `sleap-nn` backend. TensorFlow models (with `UNet` backbones) from earlier versions are still supported for inference. Refer to the Migrating to 1.5+ docs for more details!
How to Install
Step 1: Install uv (skip if already installed)
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 2: Install SLEAP v1.6.0a0
Windows/Linux with NVIDIA GPU (CUDA 12.8)
uv tool install --force --python 3.12 "sleap[nn]==1.6.0a0" --with "sleap-io==0.6.0" --with "sleap-nn==0.1.0a0" --prerelease allow --index https://download.pytorch.org/whl/cu128 --index https://pypi.org/simple
Windows/Linux with NVIDIA GPU (CUDA 13.0 - NEW!)
uv tool install --force --python 3.12 "sleap[nn]==1.6.0a0" --with "sleap-io==0.6.0" --with "sleap-nn==0.1.0a0" --prerelease allow --index https://download.pytorch.org/whl/cu130 --index https://pypi.org/simple
Windows/Linux without GPU (CPU only)
uv tool install --force --python 3.12 "sleap[nn]==1.6.0a0" --with "sleap-io==0.6.0" --with "sleap-nn==0.1.0a0" --prerelease allow --index https://download.pytorch.org/whl/cpu --index https://pypi.org/simple
macOS
uv tool install --force --python 3.12 "sleap[nn]==1.6.0a0" --with "sleap-io==0.6.0" --with "sleap-nn==0.1.0a0" --prerelease allow
Step 3: Verify installation
sleap doctor
Upgrading from v1.5.x?
Use the same commands as above. The --force flag will replace your existing installation.
Rollback to stable
If you encounter issues, rollback to the latest stable release:
# Windows/Linux (CUDA 12.8)
uv tool install --force --python 3.12 "sleap[nn]==1.5.2" --index https://download.pytorch.org/whl/cu128 --index https://pypi.org/simple
# Windows/Linux (CPU only)
uv tool install --force --python 3.12 "sleap[nn]==1.5.2" --index https://download.pytorch.org/whl/cpu --index https://pypi.org/simple
# macOS
uv tool install --force --python 3.12 "sleap[nn]==1.5.2"
Version compatibility
| SLEAP | sleap-io | sleap-nn |
|---|---|---|
| 1.6.0aN | 0.6.x | 0.1.0aN |
| 1.6.x | 0.6.x | 0.1.x |
| 1.5.x | 0.5.x | 0.0.x |
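The pairing rule in the table compares only major.minor prefixes, so pre-release suffixes like `0a0` don't matter. As a rough sketch (a hypothetical helper, not part of SLEAP), the check can be written as a small lookup:

```python
def compatible(sleap_v: str, sleap_io_v: str, sleap_nn_v: str) -> bool:
    """Check a (sleap, sleap-io, sleap-nn) version triple against the
    compatibility table above. Only major.minor prefixes are compared,
    so pre-release suffixes like '0a0' are ignored."""
    def major_minor(v: str) -> tuple:
        major, minor = v.split(".")[:2]
        # Strip any pre-release suffix from the minor component.
        return (major, minor.split("a")[0].split("b")[0])

    table = {
        ("1", "6"): (("0", "6"), ("0", "1")),  # 1.6.x -> sleap-io 0.6.x, sleap-nn 0.1.x
        ("1", "5"): (("0", "5"), ("0", "0")),  # 1.5.x -> sleap-io 0.5.x, sleap-nn 0.0.x
    }
    expected = table.get(major_minor(sleap_v))
    return expected == (major_minor(sleap_io_v), major_minor(sleap_nn_v))
```

For example, `compatible("1.6.0a0", "0.6.0", "0.1.0a0")` holds, while mixing SLEAP 1.6 with sleap-io 0.5.x does not.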
What's New in v1.6.0a0
- Unified CLI:
- Training GUI Overhaul:
- 55x faster config loading and much faster dialog startup (#2506, #2516)
- Completely redesigned training dialog with native Qt, unified frame targeting, and 356 new tests (#2519)
- New Frame Target Selector for flexible training/inference frame selection (#2519)
- Prediction handling modes: Keep, Replace, or Clear all predictions during inference (#2519)
- Smaller dialog that fits on 1280x720 screens (#2509, #2519)
- Augmentation controls simplified with on/off checkboxes and rotation presets (#2509)
- WandB integration improvements with run URL display and auto-browser-open (#2525)
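The three prediction-handling modes amount to a merge policy over per-frame predictions. A minimal sketch of the idea (the names `PredictionMode` and `merge_predictions` are illustrative, not the actual SLEAP API):

```python
from enum import Enum

class PredictionMode(Enum):
    KEEP = "keep"        # Existing predictions win on conflicting frames.
    REPLACE = "replace"  # New predictions win on conflicting frames.
    CLEAR = "clear"      # Discard all old predictions first.

def merge_predictions(old: dict, new: dict, mode: PredictionMode) -> dict:
    """Merge {frame_idx: prediction} dicts according to the chosen mode."""
    if mode is PredictionMode.KEEP:
        return {**new, **old}   # Old entries overwrite new ones.
    if mode is PredictionMode.REPLACE:
        return {**old, **new}   # New entries overwrite old ones.
    return dict(new)            # CLEAR: only the new predictions remain.
```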
- Crop Size Visualization:
- New Features:
- "Check for Updates" dialog showing versions for sleap, sleap-io, and sleap-nn (#2499)
- "Delete Predictions on User-Labeled Frames" for cleaning up duplicate instances (#2505)
- Startup banner with version info and branding when launching `sleap-label` (#2517)
- Progress dialog with cancel support for package export (#2522)
- Support for loading legacy SLEAP metrics from v1.4.1 and earlier (#2480)
- Critical Bug Fixes:
- Fixed GUI freeze when editing predictions with NaN coordinates on Linux Qt 6.10+ (#2467)
- Fixed catastrophic data loss bug where removing a video could delete frames from ALL videos with the same resolution (#2535)
- Fixed prediction deletion incorrectly removing user-labeled instances (#2478)
- Fixed predictions not being fully converted to instances when adding from predictions (#2539)
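The remove-video fix (#2535, see also the `remove_video()` fix below) hinges on identity versus content comparison: two distinct videos can match on content (e.g. same frame shape) while being different objects. A toy sketch of the distinction, using simplified stand-in classes rather than SLEAP's actual implementation:

```python
class Video:
    def __init__(self, path: str, shape: tuple):
        self.path = path
        self.shape = shape

    def matches_content(self, other: "Video") -> bool:
        # Content match: same frame shape -- true for ANY two videos
        # recorded at the same resolution, even different files.
        return self.shape == other.shape

def remove_video(videos: list, target: "Video") -> list:
    # Identity comparison ('is') removes only the exact object passed in,
    # whereas a matches_content() check would sweep up look-alike videos.
    return [v for v in videos if v is not target]
```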
- sleap-io 0.6.0:
- ~90x faster SLP loading with new lazy loading mode for large prediction files
- Pose rendering at ~50 FPS for publication-ready videos (`sleap render`)
- Data codecs for converting Labels to pandas DataFrames, NumPy arrays, and dictionaries
- Safe video matching prevents silent data corruption during merges
- Fixed package export losing videos without labeled frames (#282)
- Fixed video provenance breaking during merge operations (#302)
- Breaking: Merge API simplified (`video_matcher=` → `video=`, `frame_strategy=` → `frame=`)
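To illustrate the data-codec idea: a pose array in the conventional (frames, instances, nodes, xy) layout flattens naturally into one record per keypoint, which is the shape a DataFrame or dictionary codec emits. A rough NumPy sketch (array layout and field names are assumptions for illustration, not the exact sleap-io API):

```python
import numpy as np

# Hypothetical pose array: 2 frames, 1 instance, 3 nodes, (x, y) coords.
poses = np.arange(12, dtype=float).reshape(2, 1, 3, 2)
nodes = ["head", "thorax", "tail"]

# One row per (frame, instance, node) -- the kind of record a tabular
# codec would produce before handing off to pandas.
rows = [
    {"frame": f, "instance": i, "node": nodes[n],
     "x": float(poses[f, i, n, 0]), "y": float(poses[f, i, n, 1])}
    for f in range(poses.shape[0])
    for i in range(poses.shape[1])
    for n in range(poses.shape[2])
]
```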
- sleap-nn 0.1.0a0:
- Faster inference via GPU-accelerated normalization (17% for typical video, up to 50% for large RGB images)
- CUDA 13.0 support for latest NVIDIA GPUs
- Provenance tracking embeds full reproducibility metadata in output files
- Enhanced WandB with interactive visualizations and per-head loss logging
- Fixed crash on frames with empty instances (#385)
- Fixed `--exclude_user_labeled` being ignored with `--video_index` (#397)
- Fixed run folder cleanup when training canceled via GUI (#392)
- Breaking: Crop size semantics changed - top-down models now crop first, then resize
- Breaking: Output file naming changed (`labels_train_gt_0.slp` → `labels_gt.train.0.slp`)
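The file-naming change follows a mechanical pattern. A hypothetical rename helper (not shipped with SLEAP) sketching the mapping, assuming the one pattern documented above also applies to `val`/`test` splits:

```python
import re

def modernize_name(old: str) -> str:
    """Map an old split-file name like 'labels_train_gt_0.slp' to the
    new 'labels_gt.train.0.slp' scheme. Assumes the pattern shown in
    the release notes; any other name is returned unchanged."""
    m = re.fullmatch(
        r"(?P<stem>.+)_(?P<split>train|val|test)_gt_(?P<idx>\d+)\.slp", old
    )
    if m is None:
        return old  # Not an old-style split file; leave untouched.
    return f"{m['stem']}_gt.{m['split']}.{m['idx']}.slp"
```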
- Other dependency changes:
- Removed 8 unused dependencies for faster installation (#2486)
Full Changelog
Enhancements
- Add unified CLI with `sleap` command by @talmo in #2524
- Add sleap-io CLI command inheritance (`show`, `convert`, `split`, `filenames`, `render`) by @talmo in #2541
- Add "Check for Updates" to Help menu and implement update checker by @jaw039 in #2499
- Refactor training/inference dialog with native Qt and unified frame targeting by @talmo in #2519
- Training GUI QOL improvements (augmentation checkboxes, rotation presets, overfit mode) by @talmo in #2509
- Improve training dialog startup performance (~55x faster config loading) by @talmo in #2506
- Add crop size visualization for top-down training pipelines by @gitttt-1234 in #2483
- Add Instance Size Distribution widget for crop size analysis by @talmo in #2528
- Add Delete Predictions on User-Labeled Frames feature by @talmo in #2505
- Add support for loading legacy SLEAP metrics (<=v1.4.1) by @gitttt-1234 in #2480
- Add progress dialog and completion notification for package export by @talmo in #2522
- Add startup banner with version info and branding by @talmo in #2517
- Improve baseline config display names in training config selector by @gitttt-1234 in #2471
- Fix WandB checkbox state and add run URL display by @talmo in #2525
Fixes
- Fix GUI freeze when editing predictions with NaN coordinates by @gitttt-1234 in #2467
- Fix remove_video() to use identity comparison instead of matches_content() by @talmo in #2535
- Fix prediction deletion to prevent removing labeled instances by @gitttt-1234 in #2478
- Add failing tests for predictions-not-fully-added bugs (and fix) by @talmo in #2539
- Fix GUI freeze during labeled video export by @gitttt-1234 in #2484
- Fix skeleton loading returning list instead of single Skeleton by @gitttt-1234 in #2493
- Fix missing file dialog for ImageVideo backend (list of frame paths) by @alicup29 in #2498
- Fix delete unused tracks crash with untracked instances by @talmo in #2503
- Fix plateau detection to use absolute threshold mode by @gitttt-1234 in #2469
- Update predictions output path for inference (multi-v...
SLEAP v1.5.2
What's Changed
SLEAP v1.5.2 – Bug Fixes & Dependency Updates
This release includes important bug fixes for GUI rendering and Windows compatibility, dependency updates for improved stability, and further documentation improvements.
Note: Starting with SLEAP v1.5+, all deep learning functionality is powered by the PyTorch-based `sleap-nn` backend. TensorFlow models (with `UNet` backbones) from earlier versions are still supported for inference. Refer to the Migrating to 1.5+ docs for more details!
How to install?
You can now install SLEAP quickly using uv
Step 1: Install uv - an ultra-fast Python package manager
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 2: Install sleap
# Windows/Linux (CUDA)
uv tool install --python 3.13 "sleap[nn]==1.5.2" --index https://download.pytorch.org/whl/cu128 --index https://pypi.org/simple
# Windows/Linux (CPU)
uv tool install --python 3.13 "sleap[nn]==1.5.2" --index https://download.pytorch.org/whl/cpu --index https://pypi.org/simple
# macOS
uv tool install --python 3.13 "sleap[nn]==1.5.2"
Check the full installation guide for platform-specific instructions and advanced options.
Once you've installed SLEAP, run the following command from anywhere in your terminal:
sleap-label
The GUI should open up!
Upgrading from v1.5.1?
If you already have SLEAP v1.5.1 installed, you can upgrade to v1.5.2 using the following commands based on your installation method:
If installed with uv tool install:
The simplest upgrade command (preserves your original Python version and index URLs):
uv tool upgrade sleap
Or, if you want to ensure you're using Python 3.13 and refresh your installation:
uv tool uninstall sleap
# Then reinstall with the commands from the installation section above
Note: `uv tool upgrade` automatically preserves the index URLs (CUDA/CPU) and Python version from your original installation. If you installed with `--index https://download.pytorch.org/whl/cu128`, the upgrade will continue using the CUDA 12.8 index.
If installed with pip in a conda environment:
conda activate sleap
pip install --upgrade "sleap[nn]"
For platform-specific indexes (CUDA/CPU), add the appropriate --extra-index-url:
# CUDA 12.8
pip install --upgrade "sleap[nn]" --extra-index-url https://download.pytorch.org/whl/cu128 --index-url https://pypi.org/simple
# CPU
pip install --upgrade "sleap[nn]" --extra-index-url https://download.pytorch.org/whl/cpu --index-url https://pypi.org/simple
If installed with uv add (project-based):
# Navigate to your project directory
uv sync --upgrade
If installed from source:
cd sleap
git pull
uv sync --upgrade
After upgrading, verify the installation:
python -c "import sleap; sleap.versions()"
You should see SLEAP: 1.5.2 in the output.
Highlights
- Dependency updates:
  - Updated minimum `sleap-io` version to 0.5.7
  - Updated minimum `sleap-nn` version to 0.0.4
  - Removed `cattrs` dependency for simplified dependency management
  - Added `--python 3.13` flag to installation commands to prevent Python 3.14 compatibility issues
- Bug fixes:
  - Fixed color rendering in `sleap-render`: videos now display correct colors with proper BGR to RGB conversion (#2444)
  - Fixed Windows GUI crash: resolved Qt widget attribute error when loading .slp files on Windows (#2440)
  - Fixed instance coloring: multiple instances in older SLEAP projects now display with distinct colors instead of the same color (#2434)
- Documentation improvements:
  - Consolidated repetitive installation documentation (reduced by 55 lines while preserving all essential information)
  - Improved `uv add` installation workflow instructions with Windows troubleshooting tips
  - Clearer platform-specific installation guidance
Full Changelog: v1.5.1...v1.5.2
SLEAP v1.5.1
What's Changed
SLEAP v1.5.1 – Bug fixes & Documentation Improvements
This release focuses on a few bug fixes in the training pipeline, improving installation instructions, and updating documentation for a smoother user experience.
Note: Starting with SLEAP v1.5+, all deep learning functionality is powered by the PyTorch-based `sleap-nn` backend. TensorFlow models (with `UNet` backbones) from earlier versions are still supported for inference. Refer to the Migrating to 1.5+ docs for more details!
How to install?
You can now install SLEAP quickly using uv
Step 1: Install uv - an ultra-fast Python package manager
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 2: Install sleap
# Windows/Linux (CUDA)
uv tool install "sleap[nn]" --index https://download.pytorch.org/whl/cu128 --index https://pypi.org/simple
# Windows/Linux (CPU)
uv tool install "sleap[nn]" --index https://download.pytorch.org/whl/cpu --index https://pypi.org/simple
# macOS
uv tool install "sleap[nn]"
Check the full installation guide for platform-specific instructions and advanced options.
Once you've installed SLEAP, run the following command from anywhere in your terminal:
sleap-label
The GUI should open up!
Highlights
- Improved installation:
- Platform-specific dependency groups for sleap installation with CUDA support.
- Fixed CUDA installation issues on Windows.
- Updated installation instructions and options for clarity.
- Documentation updates:
- Fixed typos and broken links.
- Improved CLI docs with new options and guidance on legacy CLIs.
- Fixed MkDocs versioning and improved doc site structure.
- Error handling: sleap-nn import errors are now handled gracefully with clear user guidance.
- Bug fixes: Minor fixes across CLI and docs to improve stability.
Full Changelog: v1.5.0...v1.5.1
SLEAP v1.5.0
What's New in SLEAP 1.5
SLEAP 1.5 represents a major milestone with significant architectural improvements, performance enhancements, and new installation methods. Here are the key changes:
Major Changes
Updated dependencies
SLEAP now supports Python 3.12+ along with newer versions of the many libraries it depends on. This makes it much easier to install on modern platforms, enables support for new architectures, and simplifies development.
UV-Based Installation
SLEAP 1.5+ now uses uv for installation, making it much faster than previous methods. Get up and running in seconds with our streamlined installation process.
PyTorch Backend
Neural network backend switched from TensorFlow to PyTorch via sleap-nn, providing:
- Much faster training and inference speeds: Up to 2.5x faster training and inference times.
- Modern deep learning capabilities: PyTorch opens the door to integrations with a wide range of modern deep learning models and packages.
- Improved developer experience: Check out the dedicated backend repo at https://github.com/talmolab/sleap-nn
- Multi-GPU training: Full support for using multiple GPUs for accelerated and larger scale training.
- Backwards compatibility: Existing trained SLEAP models from v1.4.1 with the UNet backbone can be used with no changes (see notes below).
Refreshed Documentation Websites
- The new landing page is now live at: https://sleap.ai
- The new documentation is now live at: https://docs.sleap.ai
- The old docs (v1.4.1) will remain available at: https://legacy.sleap.ai
Standalone Libraries
SLEAP GUI is now supported by two new packages for modular workflows:
SLEAP-IO
I/O backend for handling labels, processing .slp files, and data manipulation. Essential for any SLEAP workflow and can be used independently for data processing tasks.
SLEAP-NN
PyTorch-based neural network backend for training and inference. Perfect for custom training pipelines, remote processing, and headless server deployments.
Torch Backend Changes
New Backbones
SLEAP 1.5 introduces three powerful new backbone architectures (check here for more details):
- UNet - Classic encoder-decoder architecture for precise pose estimation
- SwinT - Swin Transformer for state-of-the-art performance
- ConvNeXt - Modern convolutional architecture with improved efficiency
Legacy Support
We've maintained full backward compatibility:
- GUI Support: SLEAP now uses a new YAML-based config file structure, but you can still upload and work with old SLEAP JSON files in the GUI. For details on converting legacy SLEAP 1.4 config/JSON files to the new YAML format, see our conversion guide.
- TensorFlow Model Inference: We continue to support running inference on old TensorFlow models (UNet backbone only). Check using legacy models for more details.