
Standardize model loading API across all ASR managers#506

Merged
Alex-Wengg merged 10 commits into main from standardize-model-loading-api
Apr 8, 2026

Conversation

@Alex-Wengg (Member) commented Apr 8, 2026

Summary

Standardizes the model loading API across all ASR managers to reduce developer cognitive load and improve consistency. This addresses issue #457 (comment #4203327648).

Problem

Each ASR manager had different model loading APIs:

  • AsrManager: configure(models:)
  • SlidingWindowAsrManager: startStreaming(models:, source:)
  • StreamingEouAsrManager: loadModels(modelDir:) with inconsistent overloads ❌
  • StreamingNemotronAsrManager: loadModels(modelDir:)

This created developer confusion and increased documentation burden.

Solution

Unified API pattern across all managers:

```swift
// All managers now use consistent naming:
manager.loadModels(from: URL)                    // Load from local directory
manager.loadModels(_ models: PreloadedModels)    // Use pre-loaded models
manager.loadModels(to: URL?, progressHandler:)   // Download and load (optional)
```

Changes

  • AsrManager: Added loadModels(_:), deprecated configure(models:)
  • SlidingWindowAsrManager: Separated model loading from streaming activation, added CoreML import
  • StreamingEouAsrManager: Standardized to loadModels(from:)
  • StreamingNemotronAsrManager: Standardized to loadModels(from:) with download support
  • CLI: Updated 9 command files to use new APIs
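
As a rough usage sketch of the migration (the manager instance, model directory path, and enclosing function here are hypothetical illustrations, not code from this PR):

```swift
import Foundation

// Hypothetical migration sketch — `AsrManager` stands in for any of the
// four managers; the directory URL below is an assumption for illustration.
func setUpTranscription() async throws {
    let manager = AsrManager()
    let modelDir = URL(fileURLWithPath: "/path/to/models")

    // Before (deprecated): try await manager.configure(models: models)
    // After: one consistent `from:` entry point across all managers.
    try await manager.loadModels(from: modelDir)
}
```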

Verification

Ran full benchmark suite (8 models × 100 files) to verify zero regression:

| Model | Baseline | Current | Delta | Status |
|---|---|---|---|---|
| Parakeet TDT v3 (0.6B) | 2.6% | 2.64% | +0.04% | |
| Parakeet TDT v2 (0.6B) | 3.8% | 3.79% | -0.01% | |
| CTC-TDT 110M | 3.6% | 3.56% | -0.04% | |
| CTC Earnings | 16.54% | 16.55% | +0.01% | |
| EOU 320ms (120M) | 7.11% | 7.11% | 0.00% | |
| Nemotron 1120ms (0.6B) | 1.99% | 1.99% | 0.00% | |
| TDT Japanese (0.6B) | 6.11% | 6.11% | 0.00% | |
| CTC Chinese (0.6B) | 8.37% | 8.37% | 0.00% | |

✓ No WER/CER regressions (all within 0.3% of baseline)

Benefits

  • ✅ Reduced cognitive load - single pattern across all managers
  • ✅ Cleaner separation of concerns - model loading vs. streaming activation
  • ✅ Consistent prepositions - all use from: for loading from directory
  • ✅ Zero performance impact - validated with comprehensive benchmarks
  • ✅ Backward compatibility - deprecated APIs still work with migration warnings
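
A deprecated shim that keeps old callers working could look roughly like this (a sketch assuming `AsrModels` is the preloaded-models type; the actual implementation in the PR may differ):

```swift
// Sketch of a deprecated forwarding shim (type and method names assumed):
extension AsrManager {
    @available(*, deprecated, renamed: "loadModels(_:)")
    public func configure(models: AsrModels) async throws {
        // Forward to the new unified entry point so existing callers keep
        // working while seeing a migration warning at compile time.
        try await loadModels(models)
    }
}
```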


Both Kokoro and PocketTTS smoke tests now calculate and validate RTFx metrics:
- Calculate RTFx using ffprobe to get audio duration
- Fail workflow with exit 1 when RTFx is 0
- Display RTFx in PR comment table with status indicator

This ensures TTS smoke tests have the same failure detection as other benchmarks.
… PR comments

- Move EXECUTION_TIME calculation before RTFx validation to ensure it's set even on failure
- Add always() condition to Comment PR steps so they run even when RTFx validation fails
- Ensures PR comments are posted with failure details instead of silently skipping
- Run 'brew link ffmpeg' to ensure ffprobe is in PATH
- Add debugging to show ffprobe availability and output
- Change 2>/dev/null to 2>&1 to capture ffprobe errors
- Add detailed logging when RTFx calculation fails

This fixes the issue where ffmpeg was installed but not linked,
causing ffprobe to be unavailable and RTFx calculation to fail.
- Force-link ffmpeg with --force flag to ensure it's linked
- Set explicit PATH environment variable in smoke test steps
- Validate ffprobe output is numeric before passing to awk
- Prevents awk syntax errors when ffprobe returns error messages

This fixes the issue where ffprobe wasn't available in PATH even
after installation, causing RTFx calculation to fail.
- Remove exit 1 when RTFx is 0
- Change RTFx status from ❌ to ⚠️ when unavailable
- RTFx is a performance metric, not a pass/fail criterion
- Smoke tests pass as long as audio is generated (file size > 0)

This aligns with the purpose of smoke tests: verify the pipeline
works, not measure performance.
- Remove all RTFx calculation logic
- Remove ffmpeg/ffprobe installation steps
- Remove RTFx from PR comment tables
- Simplify smoke tests to only verify audio file generation

Smoke tests now pass if:
1. TTS pipeline completes without crashing
2. Output WAV file is generated with size > 0

RTFx is a performance metric better suited for benchmark workflows,
not smoke tests.
- Add validation that exits with code 1 when output file doesn't exist
- Add validation that exits with code 1 when output file is 0 bytes
- Ensures smoke tests fail on audio generation failures

This catches cases where the pipeline runs but produces no/empty output.
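
The final validation step can be sketched roughly like this (the output file name and messages are assumptions, not the workflow's actual code):

```shell
# Sketch of the smoke-test output validation (OUT name is an assumption).
OUT="output.wav"
printf 'RIFF....WAVE' > "$OUT"   # stand-in for the synthesized audio file

if [ ! -f "$OUT" ]; then
  echo "Smoke test failed: output file missing"
  exit 1
fi
if [ ! -s "$OUT" ]; then
  echo "Smoke test failed: output file is 0 bytes"
  exit 1
fi
echo "Smoke test passed: output is $(wc -c < "$OUT") bytes"
```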
Unifies model loading methods to use consistent naming (loadModels) and prepositions (from:), reducing developer cognitive load and improving API discoverability. Deprecates configure() in favor of loadModels() and separates model loading from streaming activation in SlidingWindowAsrManager.

Resolves #457
@Alex-Wengg Alex-Wengg force-pushed the standardize-model-loading-api branch from 3f7a981 to 02b8945 Compare April 8, 2026 16:52
@github-actions

github-actions bot commented Apr 8, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---|---|---|---|---|---|---|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 382.4x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 487.6x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

@github-actions

github-actions bot commented Apr 8, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|---|---|---|---|
| DER | 33.4% | <35% | |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 12.6x | >1.0x | |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 2m 53s • 2026-04-08T18:00:42.673Z

@github-actions

github-actions bot commented Apr 8, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (202.5 KB) |

Runtime: 0m38s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.

@github-actions

github-actions bot commented Apr 8, 2026

Kokoro TTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (634.8 KB) |

Runtime: 0m33s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.

@devin-ai-integration devin-ai-integration bot (Contributor) left a comment

✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 4 additional findings.


@github-actions

github-actions bot commented Apr 8, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|---|---|---|
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 6.92x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 71.2s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|---|---|---|
| Avg Chunk Time | 0.071s | Average chunk processing time |
| Max Chunk Time | 0.142s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m18s • 04/08/2026, 01:58 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@github-actions

github-actions bot commented Apr 8, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|---|---|
| Build | |
| Model download | |
| Model load | |
| Transcription pipeline | |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
|---|---|---|
| Median RTFx | 0.04x | ~2.5x |
| Overall RTFx | 0.04x | ~2.5x |

Runtime: 4m54s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@github-actions

github-actions bot commented Apr 8, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.57% | 0.00% | 4.60x | |
| test-other | 1.35% | 0.00% | 2.92x | |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.80% | 0.00% | 4.07x | |
| test-other | 1.22% | 0.00% | 2.68x | |

Streaming (v3)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.47x | Streaming real-time factor |
| Avg Chunk Time | 1.880s | Average time to process each chunk |
| Max Chunk Time | 2.504s | Maximum chunk processing time |
| First Token | 2.194s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.44x | Streaming real-time factor |
| Avg Chunk Time | 1.965s | Average time to process each chunk |
| Max Chunk Time | 3.243s | Maximum chunk processing time |
| First Token | 2.044s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 6m58s • 04/08/2026, 02:14 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@github-actions

github-actions bot commented Apr 8, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 14.5% | <20% | | Diarization Error Rate (lower is better) |
| RTFx | 3.35x | >1.0x | | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 15.032 | 4.8 | Fetching diarization models |
| Model Compile | 6.442 | 2.1 | CoreML compilation |
| Audio Load | 0.104 | 0.0 | Loading audio file |
| Segmentation | 28.918 | 9.2 | VAD + speech detection |
| Embedding | 311.852 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.984 | 0.3 | Hungarian algorithm + VBx clustering |
| Total | 313.106 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

Method DER Mode Description
FluidAudio (Offline) 14.5% VBx Batch On-device CoreML with optimal clustering
FluidAudio (Streaming) 17.7% Chunk-based First-occurrence speaker mapping
Research baseline 18-30% Various Standard dataset performance

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 341.8s processing • Test runtime: 5m 44s • 04/08/2026, 02:09 PM EST

@github-actions

github-actions bot commented Apr 8, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 15.1% | <30% | | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | | Jaccard Error Rate |
| RTFx | 20.54x | >1.0x | | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 10.430 | 20.4 | Fetching diarization models |
| Model Compile | 4.470 | 8.7 | CoreML compilation |
| Audio Load | 0.114 | 0.2 | Loading audio file |
| Segmentation | 15.321 | 30.0 | Detecting speech regions |
| Embedding | 25.535 | 50.0 | Extracting speaker voices |
| Clustering | 10.214 | 20.0 | Grouping same speakers |
| Total | 51.096 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|---|---|---|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at 150x real-time (RTFx)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 51.1s diarization time • Test runtime: 2m 14s • 04/08/2026, 02:04 PM EST

- Remove deprecated configure(models:) method from AsrManager
- Remove legacy defaultCacheDirectory() method from AsrModels
- Remove legacy decode() method from RnntDecoder
- Update test to use non-legacy defaultCacheDirectory(for:) method
@Alex-Wengg Alex-Wengg merged commit 04747b3 into main Apr 8, 2026
12 checks passed
@Alex-Wengg Alex-Wengg deleted the standardize-model-loading-api branch April 8, 2026 18:14