
Fix Kokoro v2 source_noise dtype and distribution #447

Merged
Alex-Wengg merged 1 commit into main from fix-kokoro-source-noise on Mar 28, 2026

Conversation


@Alex-Wengg Alex-Wengg commented Mar 27, 2026

Fixes audio trimming issues in Kokoro TTS by switching to v1 models and computing the audio length from the model's pred_dur output.

Changes

1. Switch to v1 models on all platforms

  • Before: macOS used v2 fp16 models, iOS used v1
  • After: All platforms use v1 models to avoid source_noise bugs
  • v2 models have broken audio_length_samples output (always returns 0)

2. Fix audio trimming using pred_dur

  • Problem: Model's audio_length_samples output is broken (returns 0)
  • Solution: Compute audio length from pred_dur output: sum(pred_dur) * 600 samples/frame
  • Results:
    • "Hello world" → 1.5s (was 5s with no trimming)
    • "This is a test of kokoro" → 2.35s (was 5s)
    • Proper trimming without cutting off trailing consonants

Technical Details

v1 models don't have the source_noise input (it's internalized), avoiding the dtype and distribution issues entirely. The pred_dur output provides accurate frame counts that can be reliably converted to sample counts.
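The frame-to-sample arithmetic described above can be sketched as follows. This is illustrative only: `trimmedSampleCount` is a hypothetical helper name and the 24 kHz output rate is an assumption; the 600 samples/frame factor is taken from the PR description.

```swift
// Sum the per-token frame counts from pred_dur, then convert
// frames to samples using the fixed hop of 600 samples/frame.
let samplesPerFrame = 600
let assumedSampleRate: Float = 24_000.0  // assumption, not stated in the PR

func trimmedSampleCount(predDur: [Float]) -> Int {
    let totalFrames = predDur.reduce(0, +)
    return Int(totalFrames) * samplesPerFrame
}

// E.g. 60 total predicted frames -> 36_000 samples,
// i.e. 1.5 s at the assumed 24 kHz rate.
let samples = trimmedSampleCount(predDur: [12, 20, 28])
let seconds = Float(samples) / assumedSampleRate
```

In the real pipeline the frame counts come from the model's pred_dur MLMultiArray output rather than a Swift array.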

Fixes #445


claude bot commented Mar 27, 2026

Claude finished @Alex-Wengg's task.


PR Review: Fix Kokoro v2 source_noise dtype and distribution

Todo List:

  • Create pending PR review
  • Get PR diff to understand changes and line numbers
  • Read and analyze changed files
  • Add inline review comments for specific issues
  • Submit comprehensive review
  • Complete final summary

Review Complete

I've thoroughly reviewed this PR and submitted a comprehensive code review. The fixes address two critical bugs:

  1. dtype mismatch: .float16 → .float32 to match Core ML model expectations
  2. Wrong distribution: uniform → Gaussian to match PyTorch torch.randn() behavior

Key findings:

  • ✅ Both bug fixes are technically correct and well-implemented
  • ✅ Good code consolidation with new shared RandomUtils.swift
  • ✅ Performance improvements by eliminating unnecessary conversions
  • ✅ Consistent implementation across synthesis and warm-up paths
  • 💡 Minor suggestion about numerical stability in Box-Muller implementation

Recommendation: This is a high-quality fix that should be merged. The changes improve correctness without introducing regressions.



@claude claude bot left a comment


PR Review: Fix Kokoro v2 source_noise dtype and distribution

This PR successfully addresses two critical bugs in Kokoro v2's source_noise generation. The fixes are technically sound and well-implemented. Here's my detailed analysis:

Bug Fixes - Excellent Work

1. dtype mismatch (.float16 → .float32)

  • Fix: Correctly addresses the Core ML model expectation mismatch
  • Performance: Eliminates unnecessary vImage conversion overhead
  • Consistency: Both synthesis and warm-up paths now use .float32 consistently

2. Wrong distribution (uniform → Gaussian)

  • Fix: Properly implements standard normal distribution matching PyTorch's torch.randn()
  • Accuracy: Box-Muller transform correctly generates N(0,1) noise as expected by SourceModuleHnNSF

🔧 Code Quality Improvements

Shared RandomUtils.swift

  • Good consolidation of Box-Muller implementation from PocketTTS
  • Clear documentation linking to PyTorch equivalence
  • Proper parameterization with RNG for testability

Code Cleanup

  • Removal of complex vImage conversion logic simplifies maintenance
  • Eliminated unused Accelerate import from TtsModels.swift
  • Consistent implementation across KokoroSynthesizer.swift and TtsModels.swift

💡 Minor Suggestions

RandomUtils.swift:12 - Nit: Consider Float.ulpOfOne instead of Float.leastNonzeroMagnitude for the lower bound. Box-Muller requires u1 > 0, but leastNonzeroMagnitude (~1e-45) could cause numerical issues with logf(). Float.ulpOfOne (~1e-7) provides better stability.

Performance consideration: The current implementation generates noise sample-by-sample in a loop. For large arrays, vectorized generation might be more efficient, but given this is likely not a performance bottleneck, the current readable implementation is fine.
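The clamped Box-Muller variant suggested above can be sketched like this (a minimal illustration, not the PR's actual `RandomUtils.swift`; `gaussianSample` is a hypothetical name):

```swift
import Foundation

// Box-Muller: two uniform samples -> one N(0, 1) sample,
// matching the distribution of PyTorch's torch.randn().
func gaussianSample<R: RandomNumberGenerator>(using rng: inout R) -> Float {
    // Lower-bound u1 at ulpOfOne (~1.2e-7) instead of
    // leastNonzeroMagnitude (~1e-45) so logf(u1) stays finite and stable.
    let u1 = Float.random(in: Float.ulpOfOne..<1, using: &rng)
    let u2 = Float.random(in: 0..<1, using: &rng)
    return sqrtf(-2 * logf(u1)) * cosf(2 * .pi * u2)
}

var rng = SystemRandomNumberGenerator()
let sourceNoise = (0..<1024).map { _ in gaussianSample(using: &rng) }
```

Taking the RNG as an `inout` generic parameter keeps the helper testable with a seeded generator, matching the review's note on parameterization.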

🎯 Architecture & Maintainability

  • Good: Centralized gaussian random generation in shared utilities
  • Good: Consistent application across both synthesis pipeline and model warm-up
  • Good: Clear comments explaining the mathematical requirements
  • Good: Maintains existing API contracts while fixing internal implementation

Performance Impact

  • Positive: Eliminates Float32→Float16 conversion overhead
  • Positive: Removes unnecessary temporary buffer allocation
  • Neutral: Sample-by-sample generation vs. previous bulk generation (likely negligible impact)

🔍 Security & Safety

  • No security concerns identified
  • Proper memory management with existing multiArrayPool patterns
  • Safe pointer operations with correct capacity bounds

📋 Overall Assessment

This is a high-quality fix that addresses real bugs with proper technical solutions. The implementation follows Swift best practices and maintains consistency with the existing codebase architecture. The fixes should resolve the identified dtype and distribution issues in Kokoro v2 models.

Recommendation: ✅ Approve - These are objective bug fixes that improve correctness without introducing regressions.


@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 4 additional findings.



github-actions bot commented Mar 27, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---|---|---|---|---|---|---|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 759.2x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 802.5x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%


github-actions bot commented Mar 27, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Description |
|---|---|---|---|
| DER | 14.5% | <20% | Diarization Error Rate (lower is better) |
| RTFx | 4.07x | >1.0x | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 13.667 | 5.3 | Fetching diarization models |
| Model Compile | 5.857 | 2.3 | CoreML compilation |
| Audio Load | 0.060 | 0.0 | Loading audio file |
| Segmentation | 28.859 | 11.2 | VAD + speech detection |
| Embedding | 257.077 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.776 | 0.3 | Hungarian algorithm + VBx clustering |
| Total | 258.027 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|---|---|---|---|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 286.7s processing • Test runtime: 5m 1s • 03/27/2026, 08:57 PM EST


github-actions bot commented Mar 27, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (191.3 KB) |

Runtime: 0m33s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.


github-actions bot commented Mar 27, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target |
|---|---|---|
| DER | 33.4% | <35% |
| Miss Rate | 24.4% | - |
| False Alarm | 0.2% | - |
| Speaker Error | 8.8% | - |
| RTFx | 14.5x | >1.0x |
| Speakers | 4/4 | - |

Sortformer High-Latency • ES2004a • Runtime: 2m 27s • 2026-03-28T00:45:58.740Z


github-actions bot commented Mar 27, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Description |
|---|---|---|---|
| DER | 15.1% | <30% | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | Jaccard Error Rate |
| RTFx | 29.38x | >1.0x | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 8.648 | 24.2 | Fetching diarization models |
| Model Compile | 3.706 | 10.4 | CoreML compilation |
| Audio Load | 0.075 | 0.2 | Loading audio file |
| Segmentation | 10.709 | 30.0 | Detecting speech regions |
| Embedding | 17.848 | 50.0 | Extracting speaker voices |
| Clustering | 7.139 | 20.0 | Grouping same speakers |
| Total | 35.715 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|---|---|---|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x real-time (RTFx)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 35.7s diarization time • Test runtime: 3m 23s • 03/27/2026, 08:53 PM EST


github-actions bot commented Mar 27, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|---|---|---|
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 11.71x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 43.5s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|---|---|---|
| Avg Chunk Time | 0.044s | Average chunk processing time |
| Max Chunk Time | 0.087s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m0s • 03/27/2026, 08:58 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O


github-actions bot commented Mar 27, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|---|---|
| Build | |
| Model download | |
| Model load | |
| Transcription pipeline | |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 3m13s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.


github-actions bot commented Mar 27, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx |
|---|---|---|---|
| test-clean | 0.57% | 0.00% | 5.96x |
| test-other | 1.40% | 0.00% | 3.91x |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx |
|---|---|---|---|
| test-clean | 0.80% | 0.00% | 5.48x |
| test-other | 1.00% | 0.00% | 3.85x |

Streaming (v3)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.63x | Streaming real-time factor |
| Avg Chunk Time | 1.397s | Average time to process each chunk |
| Max Chunk Time | 1.516s | Maximum chunk processing time |
| First Token | 1.658s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.69x | Streaming real-time factor |
| Avg Chunk Time | 1.320s | Average time to process each chunk |
| Max Chunk Time | 1.418s | Maximum chunk processing time |
| First Token | 1.325s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 7m8s • 03/27/2026, 08:43 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

devin-ai-integration[bot]

This comment was marked as resolved.

@Alex-Wengg Alex-Wengg force-pushed the fix-kokoro-source-noise branch from b9ab93d to c13d259 on March 28, 2026 00:16
Switches to v1 models on all platforms to avoid source_noise issues in v2.
Fixes audio endpoint trimming by computing length from pred_dur output.

Changes:
- ModelNames.swift: Use v1 models (.mlmodelc) on all platforms instead of v2 (_v2.mlmodelc)
- KokoroSynthesizer.swift: Compute audio length from pred_dur (frames * 600) instead of broken audio_length_samples

Results:
- "Hello world" → 1.5s (was 5s)
- "This is a test of kokoro" → 2.35s (was 5s)
- Proper trimming without cutting off speech

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@Alex-Wengg Alex-Wengg force-pushed the fix-kokoro-source-noise branch from c13d259 to 9368acc on March 28, 2026 00:18
@Alex-Wengg Alex-Wengg merged commit 01f1ae2 into main Mar 28, 2026
12 checks passed
@Alex-Wengg Alex-Wengg deleted the fix-kokoro-source-noise branch March 28, 2026 00:22

@devin-ai-integration devin-ai-integration bot left a comment

The reason will be displayed to describe this comment to others. Learn more.

Devin Review found 1 new potential issue.

View 6 additional findings in Devin Review.

Open in Devin Review

Comment on lines +419 to 423

```swift
var totalFrames: Float = 0.0
let predDurPtr = predDurArray.dataPointer.bindMemory(to: Float.self, capacity: predDurArray.count)
for i in 0..<predDurArray.count {
    totalFrames += predDurPtr[i]
}
```

🔴 pred_dur dataPointer bound as Float.self without checking actual MLMultiArray data type

The new pred_dur reading code at KokoroSynthesizer.swift:420 assumes the array is float32 by using bindMemory(to: Float.self), but never checks predDurArray.dataType. If the CoreML model outputs pred_dur in float16 format (2 bytes/element), binding as Float.self (4 bytes/element) causes an out-of-bounds memory read (reading count * 4 bytes from a count * 2 byte buffer) and produces garbage frame counts, leading to incorrect audio trimming. This is inconsistent with the audio output handling just below at KokoroSynthesizer.swift:448, which correctly checks audioArrayUnwrapped.dataType == .float32 before pointer-binding and has a safe fallback using [i].floatValue.

Safe pattern used for audio (line 448) but not for pred_dur:

```swift
// Audio output: correctly checks dataType
if audioArrayUnwrapped.dataType == .float32 {
    let sourcePointer = audioArrayUnwrapped.dataPointer.bindMemory(...)
} else {
    // safe fallback via NSNumber
}

// pred_dur: no dataType check
let predDurPtr = predDurArray.dataPointer.bindMemory(to: Float.self, ...)
```
Suggested change:

```swift
var totalFrames: Float = 0.0
if predDurArray.dataType == .float32 {
    let predDurPtr = predDurArray.dataPointer.bindMemory(to: Float.self, capacity: predDurArray.count)
    for i in 0..<predDurArray.count {
        totalFrames += predDurPtr[i]
    }
} else {
    for i in 0..<predDurArray.count {
        totalFrames += predDurArray[i].floatValue
    }
}
```



Development

Successfully merging this pull request may close these issues.

Kokoro v2 fp16 models produce audio artifacts on M3 Pro
