@AlanConstantino
Summary

The generate-coreml-model.sh script currently passes --optimize-ane True to the conversion script (convert-whisper-to-coreml.py), even though the flag's own help text marks it as broken:

parser.add_argument("--optimize-ane", type=bool, help="optimize for ANE execution (currently broken)", default=False)
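As a side note, this flag is also easy to misuse regardless of the ANE breakage: argparse's `type=bool` calls `bool()` on the raw argument string, and `bool()` of any non-empty string is `True`. A minimal, self-contained sketch of the pitfall:

```python
import argparse

# argparse applies type=bool to the raw string, and bool() of any non-empty
# string is True -- so even passing "False" on the command line yields True.
parser = argparse.ArgumentParser()
parser.add_argument("--optimize-ane", type=bool, default=False)

print(parser.parse_args([]).optimize_ane)                           # False (default)
print(parser.parse_args(["--optimize-ane", "True"]).optimize_ane)   # True
print(parser.parse_args(["--optimize-ane", "False"]).optimize_ane)  # True, not False!
```

This means the only reliable way to keep the flag off is to not pass it at all, which is exactly what the fix below does.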

Problem

When --optimize-ane True is used, the generated CoreML models:

  1. Load successfully - whisper.cpp reports Core ML model loaded
  2. Crash during inference - SIGSEGV when processing audio

The ANE optimization code (WhisperANE classes) hasn't been maintained to work with:

  • Newer model architectures (e.g., large-v3-turbo with 128 mel bins vs 80)
  • Recent coremltools versions that generate iOS 18+ specific ops (Ios18.scaledDotProductAttention, etc.)
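To illustrate the first point with a self-contained sketch (plain NumPy, hypothetical layer sizes, not the actual WhisperANE code): weights shaped for 80 mel channels cannot consume a 128-mel spectrogram, so an unmaintained 80-mel assumption fails on large-v3-turbo inputs.

```python
import numpy as np

# Hypothetical sizes for illustration only: a conv-style weight tensor laid
# out for 80 mel input channels, fed a 128-mel (large-v3-turbo) spectrogram.
weights = np.zeros((384, 80, 3))  # (out_channels, in_channels=80, kernel)
mel_128 = np.zeros((128, 3000))   # 128 mel bins x 3000 frames

try:
    # Contract over the in_channels axis; 80 != 128, so NumPy rejects it.
    np.einsum("oik,it->okt", weights, mel_128)
except ValueError as exc:
    print("shape mismatch:", exc)
```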

Solution

Remove the --optimize-ane True flag from the shell script, allowing it to use the working default (False).

A comment has also been added to the script explaining, for future maintainers, why ANE optimization is disabled.
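The change itself is a one-line edit. A minimal sketch, assuming the script invokes the converter roughly as shown below (the exact flags and paths in generate-coreml-model.sh may differ):

```shell
# Illustrative only: this invocation is an assumption, not a quote from the
# actual generate-coreml-model.sh.
before='python3 models/convert-whisper-to-coreml.py --model "$mname" --encoder-only True --optimize-ane True'

# Dropping the flag lets the converter fall back to its working default (False).
after=$(printf '%s' "$before" | sed 's/ --optimize-ane True//')
echo "$after"
```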

Testing

Tested on M4 Mac mini with large-v3-turbo model:

  • Before (with --optimize-ane): SIGSEGV crash during inference
  • After (without flag): Model loads and transcribes correctly with COREML = 1

Related

The --optimize-ane flag in convert-whisper-to-coreml.py is explicitly
marked as 'currently broken' in its help text. When enabled, it produces
CoreML models that load successfully but crash with SIGSEGV during
inference.

The ANE optimization code (WhisperANE classes) hasn't been maintained
to work with:
- Newer model architectures (e.g., large-v3-turbo with 128 mel bins)
- Recent coremltools versions that generate iOS 18+ specific ops

Removing this flag allows generate-coreml-model.sh to produce working
CoreML models that run correctly on Apple Silicon via Metal/ANE without
the broken optimization path.
