feat: Support Parakeet-TDT-CTC-110M hybrid model #433

Merged
Alex-Wengg merged 13 commits into main from feat/tdt-ctc-110m on Mar 26, 2026

Conversation

@Alex-Wengg (Member) commented Mar 26, 2026

Summary

Adds support for NVIDIA's Parakeet-TDT-CTC-110M hybrid model with fused preprocessor+encoder architecture.

Based on the work by @JarbasAl in #383.

Key Changes

Model Architecture

  • Fused preprocessor+encoder: No separate Encoder.mlmodelc file
  • Smaller dimensions: encoderHidden=512, vocabSize=1024, single LSTM layer
  • Array-format vocabulary: vocab.json instead of dict format
  • blankId: 1024 (same as v2)

Code Modifications

  • AsrModels: Optional encoder support, fused frontend loading, array vocab handling
  • AsrManager: Version-aware decoder state shapes, fused frontend availability checking
  • AsrTranscription: Skip encoder step when preprocessor output is fused
  • TdtDecoderState: Parameterized LSTM layer count
  • TdtDecoderV3: Use config.encoderHiddenSize instead of auto-detection
  • EncoderFrameView: Accept explicit hidden size parameter
  • TranscribeCommand: New --model-version tdt-ctc-110m and --model-dir flags
  • ModelNames: parakeetTdtCtc110m repo reference
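
The per-version differences above can be sketched as a small enum. This is a hypothetical illustration; the names mirror the PR description but are not the actual FluidAudio API.

```swift
// Hypothetical sketch of the version-aware configuration described above.
enum AsrModelVersion {
    case v2, v3, tdtCtc110m

    /// Encoder output width: 512 for the 110M hybrid, 1024 for the 0.6B models.
    var encoderHiddenSize: Int { self == .tdtCtc110m ? 512 : 1024 }

    /// The 110M prediction network has a single LSTM layer; v2/v3 use two.
    var decoderLayers: Int { self == .tdtCtc110m ? 1 : 2 }

    /// The 110M model ships a fused preprocessor+encoder,
    /// so there is no separate Encoder.mlmodelc to load.
    var hasFusedEncoder: Bool { self == .tdtCtc110m }
}
```

Centralizing these values on the version enum is what lets AsrManager and the decoders stay version-agnostic elsewhere in the diff.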

CLI Usage

swift run fluidaudiocli transcribe audio.wav --model-version tdt-ctc-110m
swift run fluidaudiocli transcribe audio.wav --model-version tdt-ctc-110m --model-dir /path/to/custom/models

Testing

Related

[screenshot attachment: IMG_5033]

JarbasAl and others added 7 commits March 16, 2026 05:54
Add AsrModelVersion.tdtCtc110m for the 110M parameter hybrid TDT-CTC
model. Key differences from the 0.6B models:

- Fused preprocessor+encoder (no separate Encoder.mlmodelc)
- Smaller dimensions: encoderHidden=512, vocabSize=1024, 1 LSTM layer
- Array-format vocabulary (vocab.json) instead of dict format
- blankId=1024 (same as v2)

Changes:
- AsrModels: optional encoder, fused frontend loading, array vocab support
- AsrManager: version-aware decoder state shapes, fused frontend availability
- AsrTranscription: skip encoder step when preprocessor output is fused
- TdtDecoderState: parameterized LSTM layer count
- TdtDecoderV3: use config.encoderHiddenSize instead of auto-detection
- EncoderFrameView: accept explicit hidden size parameter
- TranscribeCommand: --model-version tdt-ctc-110m, --model-dir flags
- ModelNames: parakeetTdtCtc110m repo, fused model requirements
Default ASRConfig uses encoderHiddenSize=1024 but the 110m model produces
encoder output with hidden size 512, causing a runtime crash in
EncoderFrameView. Adapt the config from the model version before passing
it to the decoder.
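
The adaptation step this commit describes could look roughly like the following. Type and field names here are assumptions based on the commit message, not the real FluidAudio types.

```swift
// Illustrative sketch: derive the decoder config from the model version
// instead of handing the default 1024-wide ASRConfig to the 110M model.
enum AsrModelVersion { case v2, v3, tdtCtc110m }

struct ASRConfig {
    var encoderHiddenSize = 1024  // default matches the 0.6B models
    var blankId = 1024
}

func adaptedConfig(for version: AsrModelVersion) -> ASRConfig {
    var config = ASRConfig()
    // Without this, EncoderFrameView would index 1024-wide frames into the
    // 110M model's 512-wide encoder output and crash at runtime.
    if version == .tdtCtc110m { config.encoderHiddenSize = 512 }
    return config
}
```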
- Accept --model-version tdt-ctc-110m/110m
- Use model-version-aware ASRConfig (blankId, encoderHiddenSize)
- Fix CI debug path to use AsrModels.defaultCacheDirectory
- Update usage text
- TranscribeCommand: add --model-dir and tdt-ctc-110m to help text,
  fix modelVersionLabel ternary that mislabeled 110m as "v3" in JSON
- TdtDecoderV3.prepareJointInput: use config.encoderHiddenSize instead
  of convenience init that hardcodes 1024
The AsrModels struct holds strong references to MLModel objects.
Without clearing it, cleanup() only nil'd the individual model
properties but the AsrModels copy still retained all four models.
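
The retain bug can be demonstrated with a stand-in class in place of MLModel (so the example is self-contained); the types are illustrative, not the library's.

```swift
// Sketch of the fix: cleanup() previously nil'd only the individual
// properties, while the `models` struct still held strong references,
// so the underlying models were never released.
final class Model {}  // stand-in for MLModel

struct AsrModels {
    var encoder: Model?
    var decoder: Model?
}

final class Manager {
    var encoder: Model?
    var models: AsrModels?

    func cleanup() {
        encoder = nil
        models = nil  // the fix: drop the AsrModels copy too
    }
}
```

A `weak` probe makes the effect observable: once `cleanup()` clears both the property and the struct copy, the last strong reference is gone and the model deallocates.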
Resolve conflicts in ModelNames.swift by keeping both multilingualG2p and parakeetTdtCtc110m enum cases.
@devin-ai-integration[bot] (Contributor) left a comment

✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 6 additional findings.



github-actions bot commented Mar 26, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value  | Target | Status | Description                              |
|--------|--------|--------|--------|------------------------------------------|
| DER    | 15.1%  | <30%   |        | Diarization Error Rate (lower is better) |
| JER    | 24.9%  | <25%   |        | Jaccard Error Rate                       |
| RTFx   | 29.29x | >1.0x  |        | Real-Time Factor (higher is faster)      |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage          | Time (s) | %    | Description                 |
|----------------|----------|------|-----------------------------|
| Model Download | 6.728    | 18.8 | Fetching diarization models |
| Model Compile  | 2.883    | 8.0  | CoreML compilation          |
| Audio Load     | 0.064    | 0.2  | Loading audio file          |
| Segmentation   | 10.743   | 30.0 | Detecting speech regions    |
| Embedding      | 17.905   | 50.0 | Extracting speaker voices   |
| Clustering     | 7.162    | 20.0 | Grouping same speakers      |
| Total          | 35.821   | 100  | Full pipeline               |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method            | DER    | Notes                        |
|-------------------|--------|------------------------------|
| FluidAudio        | 15.1%  | On-device CoreML             |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at 150x real-time (RTFx)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 35.8s diarization time • Test runtime: 3m 11s • 03/26/2026, 03:11 PM EST


github-actions bot commented Mar 26, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric        | Value | Target | Status |
|---------------|-------|--------|--------|
| DER           | 33.4% | <35%   |        |
| Miss Rate     | 24.4% | -      | -      |
| False Alarm   | 0.2%  | -      | -      |
| Speaker Error | 8.8%  | -      | -      |
| RTFx          | 9.9x  | >1.0x  |        |
| Speakers      | 4/4   | -      | -      |

Sortformer High-Latency • ES2004a • Runtime: 5m 18s • 2026-03-26T19:07:12.624Z


github-actions bot commented Mar 26, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description                              |
|--------|-------|--------|--------|------------------------------------------|
| DER    | 14.5% | <20%   |        | Diarization Error Rate (lower is better) |
| RTFx   | 4.09x | >1.0x  |        | Real-Time Factor (higher is faster)      |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage            | Time (s) | %    | Description                          |
|------------------|----------|------|--------------------------------------|
| Model Download   | 13.087   | 5.1  | Fetching diarization models          |
| Model Compile    | 5.609    | 2.2  | CoreML compilation                   |
| Audio Load       | 0.073    | 0.0  | Loading audio file                   |
| Segmentation     | 25.245   | 9.8  | VAD + speech detection               |
| Embedding        | 255.549  | 99.6 | Speaker embedding extraction         |
| Clustering (VBx) | 0.783    | 0.3  | Hungarian algorithm + VBx clustering |
| Total            | 256.536  | 100  | Full VBx pipeline                    |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method                 | DER    | Mode        | Description                              |
|------------------------|--------|-------------|------------------------------------------|
| FluidAudio (Offline)   | 14.5%  | VBx Batch   | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7%  | Chunk-based | First-occurrence speaker mapping         |
| Research baseline      | 18-30% | Various     | Standard dataset performance             |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 281.6s processing • Test runtime: 4m 58s • 03/26/2026, 03:02 PM EST


github-actions bot commented Mar 26, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx          | Files |
|---------|----------|-----------|--------|----------|---------------|-------|
| MUSAN   | 92.0%    | 86.2%     | 100.0% | 92.6%    | 555.4x faster | 50    |
| VOiCES  | 92.0%    | 86.2%     | 100.0% | 92.6%    | 603.0x faster | 50    |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

- Fix vocabulary filename: vocab.json → parakeet_vocab.json
- Fix iOS build: Add actor-safe getDecoderLayers() method to AsrManager
- Fix iOS build: Use await for actor-isolated access in ChunkProcessor
- Add missing multilingualG2p case in getRequiredModelNames

These changes enable TDT-CTC-110M to compile and run successfully on iOS devices.


github-actions bot commented Mar 26, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric      | Value | Description                        |
|-------------|-------|------------------------------------|
| WER (Avg)   | 0.00% | Average Word Error Rate            |
| WER (Med)   | 0.00% | Median Word Error Rate             |
| RTFx        | 0.00x | Real-time factor (higher = faster) |
| Total Audio | 0.0s  | Total audio duration processed     |
| Total Time  | 0.0s  | Total processing time              |

Streaming Metrics

| Metric         | Value  | Description                       |
|----------------|--------|-----------------------------------|
| Avg Chunk Time | 0.000s | Average chunk processing time     |
| Max Chunk Time | 0.000s | Maximum chunk processing time     |
| EOU Detections | 0      | Total End-of-Utterance detections |

Test runtime: 0m14s • 03/26/2026, 03:14 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@Alex-Wengg (Member, Author) commented Mar 26, 2026

Benchmark Required

Before merging, please benchmark TDT-CTC-110M on LibriSpeech test-clean:

swift build -c release
.build/release/fluidaudiocli asr-benchmark --subset test-clean --model-version tdt-ctc-110m

Please verify:

  • WER performance vs existing models (v2: ~5.2%, v3: ~3.8%)
  • RTFx on Apple Silicon
  • Peak memory usage
  • That the fused preprocessor+encoder works correctly

Update benchmarks.md with results before merge.


github-actions bot commented Mar 26, 2026

PocketTTS Smoke Test ✅

Check Result
Build
Model download
Model load
Synthesis pipeline
Output WAV ✅ (180.0 KB)

Runtime: 0m24s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.


github-actions bot commented Mar 26, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset    | WER Avg | WER Med | RTFx  | Status |
|------------|---------|---------|-------|--------|
| test-clean | 0.57%   | 0.00%   | 5.32x |        |
| test-other | 1.80%   | 0.00%   | 3.41x |        |

Parakeet v2 (English-optimized)

| Dataset    | WER Avg | WER Med | RTFx  | Status |
|------------|---------|---------|-------|--------|
| test-clean | 0.80%   | 0.00%   | 4.48x |        |
| test-other | 1.00%   | 0.00%   | 2.99x |        |

Streaming (v3)

| Metric         | Value  | Description                          |
|----------------|--------|--------------------------------------|
| WER            | 0.00%  | Word Error Rate in streaming mode    |
| RTFx           | 0.56x  | Streaming real-time factor           |
| Avg Chunk Time | 1.601s | Average time to process each chunk   |
| Max Chunk Time | 2.544s | Maximum chunk processing time        |
| First Token    | 1.819s | Latency to first transcription token |
| Total Chunks   | 31     | Number of chunks processed           |

Streaming (v2)

| Metric         | Value  | Description                          |
|----------------|--------|--------------------------------------|
| WER            | 0.00%  | Word Error Rate in streaming mode    |
| RTFx           | 0.53x  | Streaming real-time factor           |
| Avg Chunk Time | 1.644s | Average time to process each chunk   |
| Max Chunk Time | 2.343s | Maximum chunk processing time        |
| First Token    | 1.717s | Latency to first transcription token |
| Total Chunks   | 31     | Number of chunks processed           |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 7m54s • 03/26/2026, 03:10 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
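
The formula above is a simple ratio; as a one-line sketch:

```swift
// RTFx as defined above: total audio duration divided by total processing time.
func rtfx(audioSeconds: Double, processingSeconds: Double) -> Double {
    audioSeconds / processingSeconds
}
```

So 10 seconds of audio processed in 5 seconds yields 2.0x, matching the example.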

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard


github-actions bot commented Mar 26, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check                  | Result                   |
|------------------------|--------------------------|
| Build                  |                          |
| Model download         |                          |
| Model load             |                          |
| Transcription pipeline |                          |
| Decoder size           | 571 MB (vs 1.1 GB f32)   |

Runtime: 3m11s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

- 3.01% WER on 2,620 files
- 96.5x RTFx (37 seconds per hour of audio)
- Validated iOS compatibility
- 0% median WER shows most files transcribed perfectly
- Hybrid TDT-CTC architecture with 110M parameters
- 3.01% WER on LibriSpeech test-clean
- 96.5x RTFx performance on M2 Mac
- iOS compatible with fused preprocessor+encoder

- Add inline documentation explaining why decoderLayers=2 is the default
- v2 and v3 models use 2 LSTM layers (most common architecture)
- tdtCtc110m uses 1 layer (smaller variant)
- Fallback to 2 when models not loaded ensures v2/v3 compatibility

Addresses review questions from @Alex-Wengg about the '2' default value.
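
The fallback logic this commit documents could be sketched as follows; the names are illustrative, not the actual AsrManager API.

```swift
// When no model is loaded there is nothing to inspect, so default to the
// 2 LSTM layers shared by v2 and v3; tdtCtc110m is the 1-layer variant.
enum AsrModelVersion { case v2, v3, tdtCtc110m }

func decoderLayers(for version: AsrModelVersion?) -> Int {
    version == .tdtCtc110m ? 1 : 2
}
```

Defaulting to 2 keeps v2/v3 working even when the layer count cannot be read from a loaded model.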
- Overview and benchmark results (3.01% WER, 96.5x RTFx)
- Quick start guide with Swift code examples
- Detailed architecture and model pipeline workflow
- Complete code workflow from loading to transcription
- Model files structure and specifications
- iOS integration guide with performance metrics
- CLI benchmark commands
- Comparison with v3 model
- Resources and links
Addresses review feedback:
1. Fix incorrect HuggingFace link in benchmarks.md
   - Was: parakeet-tdt-0.6b-v3-coreml (v3 model)
   - Now: parakeet-tdt-ctc-110m-coreml (correct 110M model)

2. Add comprehensive unit tests for tdtCtc110m model version:
   - Test hasFusedEncoder property (true for 110m)
   - Test encoderHiddenSize (512 vs 1024 for v2/v3)
   - Test blankId (1024 same as v2)
   - Test decoderLayers (1 vs 2 for v2/v3)
   - Test repo mapping (.parakeetTdtCtc110m)
   - Test usesSplitFrontend (false for fused model)
   - Test default cache directory structure
   - Test vocabulary filename (parakeet_vocab.json array format)
   - Test all model versions have required properties

3. Add ModelNames tests for parakeetTdtCtc110m repo:
   - Test repo properties (remotePath, name, folderName)
   - Test vocabulary uses array format
   - Test uses requiredModelsFused (3 files, no separate Encoder)
   - Test required model count (3 .mlmodelc files)
   - Test requiredModelsFused structure

All tests passing (27 AsrModelsTests + 18 ModelNamesTests = 45 tests)
@Alex-Wengg Alex-Wengg merged commit 0f7493b into main Mar 26, 2026
16 checks passed
@Alex-Wengg Alex-Wengg deleted the feat/tdt-ctc-110m branch March 26, 2026 19:21