
docs: add LS-EEND vs Sortformer enrollment feedback from integration testing #397

Merged
Alex-Wengg merged 1 commit into main from docs/lseend-sortformer-enrollment-feedback
Mar 18, 2026

Conversation

@Alex-Wengg
Member

@Alex-Wengg Alex-Wengg commented Mar 18, 2026

Summary

  • Adds enrollment limitations section to LS-EEND docs: score range bounded to ~0.2–0.8 (sigmoid over cosine), slot collision with similar voices, weaker score-extraction fallback, training data gap vs Sortformer
  • Adds enrollment strengths section to Sortformer docs: consistently strong auto-mapping for all 4 speakers including similar voices, real-world training data advantage
  • Updates GettingStarted comparison table with "Pre-enrolled speaker mapping" row and a summary section explaining the enrollment accuracy gap

Source: Discord integration feedback from Adam Tow and model architecture clarifications from Gradient Descent (LS-EEND sigmoid score range, training data composition, attractor suppression as potential path forward).
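The ~0.2–0.8 score band follows directly from applying a sigmoid to a cosine similarity. A minimal sketch, assuming a plain sigmoid with no scaling or bias on the logit (the actual LS-EEND scoring head may differ):

```python
import math

def enrollment_score(cos_sim: float) -> float:
    """Illustrative only: sigmoid over a cosine similarity.

    Cosine similarity lives in [-1, 1], so a plain sigmoid can only
    produce sigmoid(-1) ~= 0.27 through sigmoid(1) ~= 0.73 -- roughly
    the ~0.2-0.8 band noted above.
    """
    return 1.0 / (1.0 + math.exp(-cos_sim))

print(round(enrollment_score(-1.0), 3))  # lower bound ~0.269
print(round(enrollment_score(1.0), 3))   # upper bound ~0.731
```

This is why thresholds tuned for a full [0, 1] score range will misbehave against LS-EEND enrollment scores.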

Test plan

  • Verify markdown renders correctly on GitHub
  • Cross-reference links between LS-EEND.md and Sortformer.md enrollment sections resolve correctly


…testing

Captures real-world integration feedback on speaker enrollment behavior:
- LS-EEND slot collision with similar voices, score range limitations
- Sortformer's stronger auto-mapping for pre-enrolled speaker workflows
- Training data differences explaining the enrollment accuracy gap
@claude
Contributor

claude bot commented Mar 18, 2026

Claude finished @Alex-Wengg's task


PR Review Complete ✅

Todo List:

  • Create pending review
  • Get PR diff to understand changes and line numbers
  • Read modified documentation files for context
  • Provide comprehensive review feedback
  • Submit review with overall summary

Documentation Review Summary

I've completed a comprehensive review of this PR's documentation changes. The additions provide valuable real-world integration insights about enrollment capabilities comparing LS-EEND vs Sortformer models.

Key Findings:

  • Technical accuracy: Content correctly explains sigmoid bounds, training data differences, and architectural limitations
  • Evidence-based: Grounded in actual 4-speaker integration testing
  • Well-structured: Good cross-referencing and logical organization
  • Actionable: Clear guidance on model selection for enrollment workflows

Review Submitted:

I've submitted a non-blocking review with positive feedback and minor suggestions for improvement. The documentation enhancements significantly improve user guidance for enrollment-based workflows.

The PR is ready for merge with the suggested minor improvements being optional enhancements rather than blocking issues.


Contributor

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no bugs or issues to report.


Contributor

@claude claude bot left a comment


This PR adds valuable documentation about enrollment feedback comparing LS-EEND vs Sortformer based on real integration testing. The content is technically accurate, well-structured, and provides actionable insights for users choosing between models for enrollment workflows.

Key Strengths:

  • Evidence-based content from actual 4-speaker integration testing
  • Clear technical explanations (sigmoid bounds, training data differences, architectural limitations)
  • Good cross-referencing between documents
  • Helpful additions to the comparison table

Minor suggestions:

  • Consider briefly defining "Pre-enrolled speaker mapping" for new users
  • Ensure consistent formatting (bold usage) across documents

Overall, this enhances the documentation quality and will help users make informed decisions. The technical details about LS-EEND's 0.2-0.8 score bounds and Sortformer's real-world training advantages are particularly valuable.

@github-actions

PocketTTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (176.3 KB) |

Runtime: 0m39s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.

@github-actions

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
| --- | --- | --- |
| WER (Avg) | 0.00% | Average Word Error Rate |
| WER (Med) | 0.00% | Median Word Error Rate |
| RTFx | 0.00x | Real-time factor (higher = faster) |
| Total Audio | 0.0s | Total audio duration processed |
| Total Time | 0.0s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Avg Chunk Time | 0.000s | Average chunk processing time |
| Max Chunk Time | 0.000s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m32s • 03/18/2026, 03:41 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@github-actions

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 4m22s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@github-actions

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.57% | 0.00% | 5.64x | ✅ |
| test-other | 1.35% | 0.00% | 3.50x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.80% | 0.00% | 5.80x | ✅ |
| test-other | 1.00% | 0.00% | 3.17x | ✅ |

Streaming (v3)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.65x | Streaming real-time factor |
| Avg Chunk Time | 1.389s | Average time to process each chunk |
| Max Chunk Time | 1.662s | Maximum chunk processing time |
| First Token | 1.633s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.62x | Streaming real-time factor |
| Avg Chunk Time | 1.440s | Average time to process each chunk |
| Max Chunk Time | 2.466s | Maximum chunk processing time |
| First Token | 1.592s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 9m24s • 03/18/2026, 03:48 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
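The RTFx definition above reduces to a one-line ratio; a minimal sketch:

```python
def rtfx(total_audio_s: float, total_processing_s: float) -> float:
    """Real-Time Factor: audio duration divided by processing time.

    Higher is faster: 2.0x means 10 s of audio processed in 5 s.
    """
    return total_audio_s / total_processing_s

print(rtfx(10.0, 5.0))  # 2.0
```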

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@github-actions

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 4.62x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 12.842 | 5.6 | Fetching diarization models |
| Model Compile | 5.504 | 2.4 | CoreML compilation |
| Audio Load | 0.062 | 0.0 | Loading audio file |
| Segmentation | 24.109 | 10.6 | VAD + speech detection |
| Embedding | 226.415 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.764 | 0.3 | Hungarian algorithm + VBx clustering |
| Total | 227.347 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
| --- | --- | --- | --- |
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 251.3s processing • Test runtime: 4m 29s • 03/18/2026, 03:51 PM EST

@github-actions

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 24.40x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 11.095 | 25.8 | Fetching diarization models |
| Model Compile | 4.755 | 11.1 | CoreML compilation |
| Audio Load | 0.075 | 0.2 | Loading audio file |
| Segmentation | 12.888 | 30.0 | Detecting speech regions |
| Embedding | 21.480 | 50.0 | Extracting speaker voices |
| Clustering | 8.592 | 20.0 | Grouping same speakers |
| Total | 42.998 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
| --- | --- | --- |
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 43.0s diarization time • Test runtime: 5m 21s • 03/18/2026, 03:56 PM EST

@github-actions

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
| --- | --- | --- | --- |
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 7.3x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 7m 19s • 2026-03-18T20:02:34.549Z

@github-actions

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
| --- | --- | --- | --- | --- | --- | --- |
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 655.3x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 683.9x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%
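The table's metrics follow the standard frame-level definitions; a minimal sketch with hypothetical confusion counts chosen to reproduce the precision/recall/F1 relationship shown above:

```python
def vad_metrics(tp: int, fp: int, fn: int, tn: int):
    """Standard frame-level VAD metrics (illustrative).

    precision = tp / (tp + fp), recall = tp / (tp + fn),
    F1 is their harmonic mean, accuracy counts all correct frames.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts: perfect recall (no missed speech), some false alarms.
p, r, f1, acc = vad_metrics(tp=862, fp=138, fn=0, tn=0)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.862 1.0 0.926
```

Note how 100% recall with 86.2% precision yields exactly the 92.6% F1 reported: the model over-triggers (false alarms) rather than missing speech.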

@Alex-Wengg Alex-Wengg merged commit 8bc2bdf into main Mar 18, 2026
16 checks passed
@Alex-Wengg Alex-Wengg deleted the docs/lseend-sortformer-enrollment-feedback branch March 18, 2026 21:51