Skip error recovery on intentional cancellation #481
When SlidingWindowAsrManager is cancelled, CancellationError propagates through processWindow() and the audio buffer loop. Previously this triggered attemptErrorRecovery() which resets the decoder and, as a last resort, re-downloads models — neither of which is appropriate for an intentional shutdown. Guard both catch sites with `error is CancellationError || Task.isCancelled` to return immediately instead. Fixes #477 https://claude.ai/code/session_01696MyMtoiM6T8ruCdCCHab
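The guard described above can be sketched as follows. This is a minimal, illustrative sketch: `handleProcessingError` and the `RecoveryAction` enum are stand-ins for the real catch-site logic in SlidingWindowAsrManager, not its actual API.

```swift
// Stand-in for the recovery decision a catch site has to make.
enum RecoveryAction { case none, resetDecoder }

// Treat cancellation as an intentional shutdown rather than a failure:
// skip decoder resets and model re-downloads and let the caller unwind.
func handleProcessingError(_ error: Error) -> RecoveryAction {
    if error is CancellationError || Task.isCancelled {
        return .none
    }
    // Any other error still goes through normal recovery.
    return .resetDecoder
}
```

Checking `Task.isCancelled` in addition to `error is CancellationError` covers the case where a lower layer wraps or swallows the cancellation but the task itself has already been cancelled.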
Force-pushed from c2ce33b to 6be4fd2
Qwen3-ASR int8 Smoke Test ✅
Performance Metrics
Runtime: 3m48s
Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.
✅ Japanese ASR Benchmark Results (CTC)
Status: Passed
✅ Benchmark completed successfully. The TDT Japanese hybrid model (CTC preprocessor/encoder + TDT decoder/joint) is working correctly. View benchmark log
Sortformer High-Latency Benchmark Results
ES2004a Performance (30.4s latency config)
Sortformer High-Latency • ES2004a • Runtime: 2m 23s • 2026-04-04T14:01:59.112Z
Parakeet EOU Benchmark Results ✅
Status: Benchmark passed
Performance Metrics
Streaming Metrics
Test runtime: 1m35s • 04/04/2026, 09:57 AM EST
RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O
Kokoro TTS Smoke Test ✅
Runtime: 0m37s
Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.
ASR Benchmark Results
| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | % | % | x | |
| test-other | % | % | x | |
Parakeet v2 (English-optimized)
| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | % | % | x | |
| test-other | % | % | x | |
Streaming (v3)
| Metric | Value | Description |
|---|---|---|
| WER | % | Word Error Rate in streaming mode |
| RTFx | x | Streaming real-time factor |
| Avg Chunk Time | s | Average time to process each chunk |
| Max Chunk Time | s | Maximum chunk processing time |
| First Token | s | Latency to first transcription token |
| Total Chunks | | Number of chunks processed |
Streaming (v2)
| Metric | Value | Description |
|---|---|---|
| WER | % | Word Error Rate in streaming mode |
| RTFx | x | Streaming real-time factor |
| Avg Chunk Time | s | Average time to process each chunk |
| Max Chunk Time | s | Maximum chunk processing time |
| First Token | s | Latency to first transcription token |
| Total Chunks | | Number of chunks processed |
Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming
files per dataset • Test runtime: • 04/04/2026, 09:51 AM EST
RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
Expected RTFx Performance on Physical M1 Hardware:
• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations
Testing methodology follows HuggingFace Open ASR Leaderboard
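The RTFx calculation described above can be expressed as a small helper. This is an illustrative sketch, not part of the benchmark harness:

```swift
// RTFx = total audio duration / total processing time (higher is better).
func rtfx(audioSeconds: Double, processingSeconds: Double) -> Double {
    audioSeconds / processingSeconds
}

// 10 seconds of audio processed in 5 seconds: 2x faster than real time.
let factor = rtfx(audioSeconds: 10, processingSeconds: 5)
```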
PocketTTS Smoke Test ✅
Runtime: 0m42s
Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.
Offline VBx Pipeline Results
Speaker Diarization Performance (VBx Batch Mode)
Optimal clustering with Hungarian algorithm for maximum accuracy
Offline VBx Pipeline Timing Breakdown
Time spent in each stage of batch diarization
Speaker Diarization Research Comparison
Offline VBx achieves competitive accuracy with batch processing
Pipeline Details:
🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 306.3s processing • Test runtime: 5m 4s • 04/04/2026, 10:04 AM EST
Speaker Diarization Benchmark Results
Speaker Diarization Performance
Evaluating "who spoke when" detection accuracy
Diarization Pipeline Timing Breakdown
Time spent in each stage of speaker diarization
Speaker Diarization Research Comparison
Research baselines typically achieve 18-30% DER on standard datasets
Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:
🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 34.0s diarization time • Test runtime: 2m 3s • 04/04/2026, 09:59 AM EST
VAD Benchmark Results
Performance Comparison
Dataset Details
✅: Average F1-Score above 70%
Summary
Guards SlidingWindowAsrManager.processWindow() and the audio buffer loop against CancellationError / Task.isCancelled. Fixes #477