feat: Support Parakeet-TDT-CTC-110M hybrid model #433
Conversation
Add AsrModelVersion.tdtCtc110m for the 110M-parameter hybrid TDT-CTC model.

Key differences from the 0.6B models:
- Fused preprocessor+encoder (no separate Encoder.mlmodelc)
- Smaller dimensions: encoderHidden=512, vocabSize=1024, 1 LSTM layer
- Array-format vocabulary (vocab.json) instead of dict format
- blankId=1024 (same as v2)

Changes:
- AsrModels: optional encoder, fused frontend loading, array vocab support
- AsrManager: version-aware decoder state shapes, fused frontend availability
- AsrTranscription: skip encoder step when preprocessor output is fused
- TdtDecoderState: parameterized LSTM layer count
- TdtDecoderV3: use config.encoderHiddenSize instead of auto-detection
- EncoderFrameView: accept explicit hidden size parameter
- TranscribeCommand: --model-version tdt-ctc-110m, --model-dir flags
- ModelNames: parakeetTdtCtc110m repo, fused model requirements
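The version-specific dimensions above could be modeled roughly as follows. This is a sketch: the enum and property names follow the PR description, but the exact implementation in the repo is assumed.

```swift
/// Sketch of a version enum carrying the dimensions this PR introduces.
enum AsrModelVersion {
    case v2, v3, tdtCtc110m

    /// The 110M model produces 512-dim encoder frames; the 0.6B models use 1024.
    var encoderHiddenSize: Int { self == .tdtCtc110m ? 512 : 1024 }

    /// The 110M prediction network has a single LSTM layer; v2/v3 have two.
    var decoderLayers: Int { self == .tdtCtc110m ? 1 : 2 }

    /// The 110M model ships a fused preprocessor+encoder,
    /// with no separate Encoder.mlmodelc on disk.
    var hasFusedEncoder: Bool { self == .tdtCtc110m }
}
```

Centralizing these in one place lets AsrManager, the decoder, and EncoderFrameView all key off the same version value instead of hardcoding 1024.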
Default ASRConfig uses encoderHiddenSize=1024, but the 110M model produces encoder output with hidden size 512, causing a runtime crash in EncoderFrameView. Adapt the config from the model version before passing it to the decoder.
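A minimal sketch of that fix, assuming an ASRConfig struct with a mutable encoderHiddenSize field (the adapting helper and its name are hypothetical, not the PR's actual code):

```swift
extension ASRConfig {
    /// Hypothetical helper: derive a config whose encoder dimensions match
    /// the loaded model version instead of the 1024-dim 0.6B default.
    static func adapted(for version: AsrModelVersion) -> ASRConfig {
        var config = ASRConfig()  // defaults assume the 0.6B models
        // 110M emits 512-dim frames; passing the default 1024 through to
        // EncoderFrameView is what triggered the runtime crash.
        config.encoderHiddenSize = version == .tdtCtc110m ? 512 : 1024
        return config
    }
}
```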
- Accept --model-version tdt-ctc-110m/110m
- Use model-version-aware ASRConfig (blankId, encoderHiddenSize)
- Fix CI debug path to use AsrModels.defaultCacheDirectory
- Update usage text
- TranscribeCommand: add --model-dir and tdt-ctc-110m to help text; fix modelVersionLabel ternary that mislabeled 110m as "v3" in JSON
- TdtDecoderV3.prepareJointInput: use config.encoderHiddenSize instead of the convenience init that hardcodes 1024
The AsrModels struct holds strong references to MLModel objects. Without clearing it, cleanup() only nil'd the individual model properties, but the AsrModels copy still retained all four models.
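A sketch of the leak and the fix, assuming AsrManager keeps both individual model handles and an AsrModels value (the exact property names are assumptions):

```swift
import CoreML

final class AsrManager {
    // Individual model handles, plus a struct copy that also retains them.
    private var melSpectrogram: MLModel?
    private var encoder: MLModel?
    private var decoder: MLModel?
    private var joint: MLModel?
    private var models: AsrModels?

    func cleanup() {
        melSpectrogram = nil
        encoder = nil
        decoder = nil
        joint = nil
        // Without this line the struct copy keeps all four MLModels alive:
        // a Swift struct's reference-type members are strong references,
        // so nil'ing the individual properties alone frees nothing.
        models = nil
    }
}
```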
Resolve conflicts in ModelNames.swift by keeping both multilingualG2p and parakeetTdtCtc110m enum cases.
Speaker Diarization Benchmark Results

Speaker Diarization Performance
Evaluating "who spoke when" detection accuracy
Diarization Pipeline Timing Breakdown
Time spent in each stage of speaker diarization
Speaker Diarization Research Comparison
Research baselines typically achieve 18-30% DER on standard datasets
Note: RTFx shown above is from a GitHub Actions runner. On Apple Silicon with ANE:
🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 35.8s diarization time • Test runtime: 3m 11s • 03/26/2026, 03:11 PM EST
Sortformer High-Latency Benchmark Results
ES2004a Performance (30.4s latency config)
Sortformer High-Latency • ES2004a • Runtime: 5m 18s • 2026-03-26T19:07:12.624Z
Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)
Optimal clustering with Hungarian algorithm for maximum accuracy
Offline VBx Pipeline Timing Breakdown
Time spent in each stage of batch diarization
Speaker Diarization Research Comparison
Offline VBx achieves competitive accuracy with batch processing
Pipeline Details:
🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 281.6s processing • Test runtime: 4m 58s • 03/26/2026, 03:02 PM EST
VAD Benchmark Results
Performance Comparison
Dataset Details
✅: Average F1-Score above 70%
- Fix vocabulary filename: vocab.json → parakeet_vocab.json
- Fix iOS build: add actor-safe getDecoderLayers() method to AsrManager
- Fix iOS build: use await for actor-isolated access in ChunkProcessor
- Add missing multilingualG2p case in getRequiredModelNames

These changes enable TDT-CTC-110M to compile and run successfully on iOS devices.
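A sketch of the actor-isolation fix, assuming AsrManager is an actor and ChunkProcessor runs outside it (the method name follows the commit; the bodies and the `models?.version` shape are assumed):

```swift
actor AsrManager {
    private var models: AsrModels?

    /// Actor-safe accessor: callers outside the actor must `await` this
    /// instead of touching actor-isolated state directly, which is a
    /// compile error under Swift concurrency checking.
    func getDecoderLayers() -> Int {
        // Fall back to 2 (the v2/v3 layout) when no models are loaded yet.
        models?.version.decoderLayers ?? 2
    }
}

struct ChunkProcessor {
    func process(with manager: AsrManager) async {
        // `await` is required because getDecoderLayers is actor-isolated.
        let layers = await manager.getDecoderLayers()
        _ = layers  // size decoder state buffers from this value
    }
}
```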
Parakeet EOU Benchmark Results ✅
Status: Benchmark passed

Performance Metrics
Streaming Metrics
Test runtime: 0m14s • 03/26/2026, 03:14 PM EST
RTFx = Real-Time Factor (higher is better) • Processing includes: model inference, audio preprocessing, state management, and file I/O
Benchmark Required

Before merging, please benchmark TDT-CTC-110M on LibriSpeech test-clean:

swift build -c release
.build/release/fluidaudiocli asr-benchmark --subset test-clean --model-version tdt-ctc-110m

Should verify:
- Update
PocketTTS Smoke Test ✅
Runtime: 0m24s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks a physical GPU, so audio quality may differ from Apple Silicon.
ASR Benchmark Results ✅
Status: All benchmarks passed

Parakeet v3 (multilingual)
Parakeet v2 (English-optimized)
Streaming (v3)
Streaming (v2)
Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming
25 files per dataset • Test runtime: 7m54s • 03/26/2026, 03:10 PM EST
RTFx = Real-Time Factor (higher is better) • Calculated as: total audio duration ÷ total processing time

Expected RTFx Performance on Physical M1 Hardware:
- M1 Mac: ~28x (clean), ~25x (other)

Testing methodology follows the HuggingFace Open ASR Leaderboard
Qwen3-ASR int8 Smoke Test ✅
Runtime: 3m11s

Note: CI VM lacks a physical GPU; CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.
- 3.01% WER on 2,620 files
- 96.5x RTFx (37 seconds per hour of audio)
- Validated iOS compatibility
- 0% median WER shows most files transcribed perfectly
- Hybrid TDT-CTC architecture with 110M parameters
- 3.01% WER on LibriSpeech test-clean
- 96.5x RTFx performance on M2 Mac
- iOS compatible with fused preprocessor+encoder
- Add inline documentation explaining why decoderLayers=2 is the default
- v2 and v3 models use 2 LSTM layers (most common architecture)
- tdtCtc110m uses 1 layer (smaller variant)
- Fallback to 2 when models are not loaded ensures v2/v3 compatibility

Addresses review questions from @Alex-Wengg about the '2' default value.
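The documented default could look roughly like this (a sketch; the surrounding type and the `models?.version` shape are assumed, the rationale is from the commit):

```swift
/// Number of LSTM layers in the prediction-network decoder state.
///
/// Defaults to 2 because both 0.6B models (v2 and v3) use a two-layer
/// LSTM, the most common Parakeet configuration; tdtCtc110m is the
/// smaller one-layer variant. Falling back to 2 when no models are
/// loaded keeps v2/v3 behavior unchanged.
var decoderLayers: Int {
    models?.version.decoderLayers ?? 2
}
```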
- Overview and benchmark results (3.01% WER, 96.5x RTFx)
- Quick start guide with Swift code examples
- Detailed architecture and model pipeline workflow
- Complete code workflow from loading to transcription
- Model files structure and specifications
- iOS integration guide with performance metrics
- CLI benchmark commands
- Comparison with v3 model
- Resources and links
Addresses review feedback:

1. Fix incorrect HuggingFace link in benchmarks.md
   - Was: parakeet-tdt-0.6b-v3-coreml (v3 model)
   - Now: parakeet-tdt-ctc-110m-coreml (correct 110M model)
2. Add comprehensive unit tests for the tdtCtc110m model version:
   - Test hasFusedEncoder property (true for 110m)
   - Test encoderHiddenSize (512 vs 1024 for v2/v3)
   - Test blankId (1024, same as v2)
   - Test decoderLayers (1 vs 2 for v2/v3)
   - Test repo mapping (.parakeetTdtCtc110m)
   - Test usesSplitFrontend (false for fused model)
   - Test default cache directory structure
   - Test vocabulary filename (parakeet_vocab.json, array format)
   - Test all model versions have required properties
3. Add ModelNames tests for the parakeetTdtCtc110m repo:
   - Test repo properties (remotePath, name, folderName)
   - Test vocabulary uses array format
   - Test uses requiredModelsFused (3 files, no separate Encoder)
   - Test required model count (3 .mlmodelc files)
   - Test requiredModelsFused structure

All tests passing (27 AsrModelsTests + 18 ModelNamesTests = 45 tests)
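A few of the listed tests might look like this XCTest sketch. The asserted values come from this PR's description; the test and property names are assumed to match the implementation:

```swift
import XCTest

final class AsrModelVersionTests: XCTestCase {
    func testTdtCtc110mHasFusedEncoder() {
        // Only the 110M model ships a fused preprocessor+encoder.
        XCTAssertTrue(AsrModelVersion.tdtCtc110m.hasFusedEncoder)
        XCTAssertFalse(AsrModelVersion.v3.hasFusedEncoder)
    }

    func testEncoderHiddenSize() {
        // 110M: 512-dim encoder frames; v2/v3 (0.6B): 1024-dim.
        XCTAssertEqual(AsrModelVersion.tdtCtc110m.encoderHiddenSize, 512)
        XCTAssertEqual(AsrModelVersion.v2.encoderHiddenSize, 1024)
        XCTAssertEqual(AsrModelVersion.v3.encoderHiddenSize, 1024)
    }

    func testDecoderLayers() {
        // 110M uses a single-layer LSTM; v2 uses two.
        XCTAssertEqual(AsrModelVersion.tdtCtc110m.decoderLayers, 1)
        XCTAssertEqual(AsrModelVersion.v2.decoderLayers, 2)
    }
}
```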
Summary
Adds support for NVIDIA's Parakeet-TDT-CTC-110M hybrid model with fused preprocessor+encoder architecture.
Based on the work by @JarbasAl in #383.
Key Changes
Model Architecture
Code Modifications
--model-version tdt-ctc-110m and --model-dir flags

CLI Usage
Testing
Related