asr-benchmark.yml
344 lines (295 loc) · 21.4 KB
name: ASR Benchmark

on:
  pull_request:
    branches: [main]
  workflow_dispatch:

jobs:
  asr-benchmark:
    name: ASR Benchmark
    runs-on: macos-15
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v5

      - uses: swift-actions/setup-swift@v2
        with:
          swift-version: "6.1"

      - name: Cache Dependencies
        uses: actions/cache@v4
        with:
          path: |
            .build
            ~/Library/Application Support/FluidAudio/Models/parakeet-tdt-0.6b-v3-coreml
            ~/Library/Application Support/FluidAudio/Models/parakeet-tdt-0.6b-v2-coreml
            ~/Library/Application Support/FluidAudio/Datasets/LibriSpeech
            ~/Library/Caches/Homebrew
            /usr/local/Cellar/ffmpeg
            /opt/homebrew/Cellar/ffmpeg
          key: ${{ runner.os }}-asr-${{ hashFiles('Package.resolved', 'Sources/FluidAudio/Frameworks/**', 'Sources/FluidAudio/ModelRegistry.swift', 'Sources/FluidAudio/ModelNames.swift') }}

      - name: Install ffmpeg
        run: |
          brew install ffmpeg || echo "ffmpeg may already be installed"
          ffmpeg -version || echo "ffmpeg not available"

      - name: Build
        run: swift build -c release

      - name: Run Benchmarks
        id: benchmark
        run: |
          MAX_FILES="25"
          BENCHMARK_START=$(date +%s)

          # Set error handling
          set -o pipefail

          # Function to run benchmark with error capture
          run_benchmark() {
            local SUBSET=$1
            local MAX=$2
            local OUTPUT=$3
            local EXTRA_ARGS="${4:-}"

            echo "========================================="
            echo "Running ASR benchmark: $SUBSET (max $MAX files)"
            echo "Output: $OUTPUT"
            echo "Extra args: $EXTRA_ARGS"
            echo "========================================="

            if swift run fluidaudiocli asr-benchmark \
                --subset "$SUBSET" --max-files "$MAX" \
                --auto-download --output "$OUTPUT" $EXTRA_ARGS > benchmark_log.txt 2>&1; then
              echo "✅ Benchmark $SUBSET completed successfully"
              return 0
            else
              echo "❌ Benchmark $SUBSET FAILED with exit code $?"
              echo "Full output:"
              cat benchmark_log.txt
              # Continue with other benchmarks even if one fails
              return 1
            fi
          }

          # Run benchmarks with error capture
          run_benchmark "test-clean" "$MAX_FILES" "asr_results_clean.json" || CLEAN_FAILED=1
          run_benchmark "test-other" "$MAX_FILES" "asr_results_other.json" || OTHER_FAILED=1
          run_benchmark "test-clean" "5" "asr_results_streaming.json" "--test-streaming --chunk-duration 0.5" || STREAMING_FAILED=1

          # English-optimized (v2) runs
          run_benchmark "test-clean" "$MAX_FILES" "asr_results_clean_v2.json" "--model-version v2" || CLEAN_V2_FAILED=1
          run_benchmark "test-other" "$MAX_FILES" "asr_results_other_v2.json" "--model-version v2" || OTHER_V2_FAILED=1
          run_benchmark "test-clean" "5" "asr_results_streaming_v2.json" "--test-streaming --chunk-duration 0.5 --model-version v2" || STREAMING_V2_FAILED=1

          # Extract metrics with error handling
          if [ -f asr_results_clean.json ]; then
            CLEAN_WER_AVG=$(jq -r '.summary.averageWER * 100' asr_results_clean.json 2>/dev/null)
            CLEAN_WER_MED=$(jq -r '.summary.medianWER * 100' asr_results_clean.json 2>/dev/null)
            CLEAN_AUDIO=$(jq -r '.summary.totalAudioDuration' asr_results_clean.json 2>/dev/null)
            CLEAN_TIME=$(jq -r '.summary.totalProcessingTime' asr_results_clean.json 2>/dev/null)
            CLEAN_RTFx=$(jq -r '.summary.medianRTFx' asr_results_clean.json 2>/dev/null)
            # Format values only if they exist and are not null
            [ "$CLEAN_WER_AVG" != "null" ] && [ -n "$CLEAN_WER_AVG" ] && CLEAN_WER_AVG=$(printf "%.2f" "$CLEAN_WER_AVG") || CLEAN_WER_AVG="N/A"
            [ "$CLEAN_WER_MED" != "null" ] && [ -n "$CLEAN_WER_MED" ] && CLEAN_WER_MED=$(printf "%.2f" "$CLEAN_WER_MED") || CLEAN_WER_MED="N/A"
            [ "$CLEAN_RTFx" != "null" ] && [ -n "$CLEAN_RTFx" ] && CLEAN_RTFx=$(printf "%.2f" "$CLEAN_RTFx") || CLEAN_RTFx="N/A"
          fi

          if [ -f asr_results_clean_v2.json ]; then
            CLEAN_V2_WER_AVG=$(jq -r '.summary.averageWER * 100' asr_results_clean_v2.json 2>/dev/null)
            CLEAN_V2_WER_MED=$(jq -r '.summary.medianWER * 100' asr_results_clean_v2.json 2>/dev/null)
            CLEAN_V2_RTFx=$(jq -r '.summary.medianRTFx' asr_results_clean_v2.json 2>/dev/null)
            [ "$CLEAN_V2_WER_AVG" != "null" ] && [ -n "$CLEAN_V2_WER_AVG" ] && CLEAN_V2_WER_AVG=$(printf "%.2f" "$CLEAN_V2_WER_AVG") || CLEAN_V2_WER_AVG="N/A"
            [ "$CLEAN_V2_WER_MED" != "null" ] && [ -n "$CLEAN_V2_WER_MED" ] && CLEAN_V2_WER_MED=$(printf "%.2f" "$CLEAN_V2_WER_MED") || CLEAN_V2_WER_MED="N/A"
            [ "$CLEAN_V2_RTFx" != "null" ] && [ -n "$CLEAN_V2_RTFx" ] && CLEAN_V2_RTFx=$(printf "%.2f" "$CLEAN_V2_RTFx") || CLEAN_V2_RTFx="N/A"
          fi

          if [ -f asr_results_other.json ]; then
            OTHER_WER_AVG=$(jq -r '.summary.averageWER * 100' asr_results_other.json 2>/dev/null)
            OTHER_WER_MED=$(jq -r '.summary.medianWER * 100' asr_results_other.json 2>/dev/null)
            OTHER_AUDIO=$(jq -r '.summary.totalAudioDuration' asr_results_other.json 2>/dev/null)
            OTHER_TIME=$(jq -r '.summary.totalProcessingTime' asr_results_other.json 2>/dev/null)
            OTHER_RTFx=$(jq -r '.summary.medianRTFx' asr_results_other.json 2>/dev/null)
            # Format values only if they exist and are not null
            [ "$OTHER_WER_AVG" != "null" ] && [ -n "$OTHER_WER_AVG" ] && OTHER_WER_AVG=$(printf "%.2f" "$OTHER_WER_AVG") || OTHER_WER_AVG="N/A"
            [ "$OTHER_WER_MED" != "null" ] && [ -n "$OTHER_WER_MED" ] && OTHER_WER_MED=$(printf "%.2f" "$OTHER_WER_MED") || OTHER_WER_MED="N/A"
            [ "$OTHER_RTFx" != "null" ] && [ -n "$OTHER_RTFx" ] && OTHER_RTFx=$(printf "%.2f" "$OTHER_RTFx") || OTHER_RTFx="N/A"
          fi

          if [ -f asr_results_other_v2.json ]; then
            OTHER_V2_WER_AVG=$(jq -r '.summary.averageWER * 100' asr_results_other_v2.json 2>/dev/null)
            OTHER_V2_WER_MED=$(jq -r '.summary.medianWER * 100' asr_results_other_v2.json 2>/dev/null)
            OTHER_V2_RTFx=$(jq -r '.summary.medianRTFx' asr_results_other_v2.json 2>/dev/null)
            [ "$OTHER_V2_WER_AVG" != "null" ] && [ -n "$OTHER_V2_WER_AVG" ] && OTHER_V2_WER_AVG=$(printf "%.2f" "$OTHER_V2_WER_AVG") || OTHER_V2_WER_AVG="N/A"
            [ "$OTHER_V2_WER_MED" != "null" ] && [ -n "$OTHER_V2_WER_MED" ] && OTHER_V2_WER_MED=$(printf "%.2f" "$OTHER_V2_WER_MED") || OTHER_V2_WER_MED="N/A"
            [ "$OTHER_V2_RTFx" != "null" ] && [ -n "$OTHER_V2_RTFx" ] && OTHER_V2_RTFx=$(printf "%.2f" "$OTHER_V2_RTFx") || OTHER_V2_RTFx="N/A"
          fi

          if [ -f asr_results_streaming.json ]; then
            STREAMING_WER=$(jq -r '.summary.averageWER * 100' asr_results_streaming.json 2>/dev/null)
            STREAMING_RTFx=$(jq -r '.summary.medianRTFx' asr_results_streaming.json 2>/dev/null)
            STREAMING_AVG_CHUNK=$(jq -r '.summary.streaming.avgChunkProcessingTime' asr_results_streaming.json 2>/dev/null)
            STREAMING_MAX_CHUNK=$(jq -r '.summary.streaming.maxChunkProcessingTime' asr_results_streaming.json 2>/dev/null)
            STREAMING_CHUNKS=$(jq -r '.summary.streaming.totalChunksProcessed' asr_results_streaming.json 2>/dev/null)
            STREAMING_FIRST_TOKEN=$(jq -r '.summary.streaming.avgFirstTokenLatency // "N/A"' asr_results_streaming.json 2>/dev/null)
            # Format values only if they exist and are not null
            [ "$STREAMING_WER" != "null" ] && [ -n "$STREAMING_WER" ] && STREAMING_WER=$(printf "%.2f" "$STREAMING_WER") || STREAMING_WER="N/A"
            [ "$STREAMING_RTFx" != "null" ] && [ -n "$STREAMING_RTFx" ] && STREAMING_RTFx=$(printf "%.2f" "$STREAMING_RTFx") || STREAMING_RTFx="N/A"
            [ "$STREAMING_AVG_CHUNK" != "null" ] && [ -n "$STREAMING_AVG_CHUNK" ] && STREAMING_AVG_CHUNK=$(printf "%.3f" "$STREAMING_AVG_CHUNK") || STREAMING_AVG_CHUNK="N/A"
            [ "$STREAMING_MAX_CHUNK" != "null" ] && [ -n "$STREAMING_MAX_CHUNK" ] && STREAMING_MAX_CHUNK=$(printf "%.3f" "$STREAMING_MAX_CHUNK") || STREAMING_MAX_CHUNK="N/A"
            [ "$STREAMING_FIRST_TOKEN" != "null" ] && [ -n "$STREAMING_FIRST_TOKEN" ] && [ "$STREAMING_FIRST_TOKEN" != "N/A" ] && STREAMING_FIRST_TOKEN=$(printf "%.3f" "$STREAMING_FIRST_TOKEN")
          fi

          if [ -f asr_results_streaming_v2.json ]; then
            STREAMING_V2_WER=$(jq -r '.summary.averageWER * 100' asr_results_streaming_v2.json 2>/dev/null)
            STREAMING_V2_RTFx=$(jq -r '.summary.medianRTFx' asr_results_streaming_v2.json 2>/dev/null)
            STREAMING_V2_AVG_CHUNK=$(jq -r '.summary.streaming.avgChunkProcessingTime' asr_results_streaming_v2.json 2>/dev/null)
            STREAMING_V2_MAX_CHUNK=$(jq -r '.summary.streaming.maxChunkProcessingTime' asr_results_streaming_v2.json 2>/dev/null)
            STREAMING_V2_CHUNKS=$(jq -r '.summary.streaming.totalChunksProcessed' asr_results_streaming_v2.json 2>/dev/null)
            STREAMING_V2_FIRST_TOKEN=$(jq -r '.summary.streaming.avgFirstTokenLatency // "N/A"' asr_results_streaming_v2.json 2>/dev/null)
            [ "$STREAMING_V2_WER" != "null" ] && [ -n "$STREAMING_V2_WER" ] && STREAMING_V2_WER=$(printf "%.2f" "$STREAMING_V2_WER") || STREAMING_V2_WER="N/A"
            [ "$STREAMING_V2_RTFx" != "null" ] && [ -n "$STREAMING_V2_RTFx" ] && STREAMING_V2_RTFx=$(printf "%.2f" "$STREAMING_V2_RTFx") || STREAMING_V2_RTFx="N/A"
            [ "$STREAMING_V2_AVG_CHUNK" != "null" ] && [ -n "$STREAMING_V2_AVG_CHUNK" ] && STREAMING_V2_AVG_CHUNK=$(printf "%.3f" "$STREAMING_V2_AVG_CHUNK") || STREAMING_V2_AVG_CHUNK="N/A"
            [ "$STREAMING_V2_MAX_CHUNK" != "null" ] && [ -n "$STREAMING_V2_MAX_CHUNK" ] && STREAMING_V2_MAX_CHUNK=$(printf "%.3f" "$STREAMING_V2_MAX_CHUNK") || STREAMING_V2_MAX_CHUNK="N/A"
            [ "$STREAMING_V2_FIRST_TOKEN" != "null" ] && [ -n "$STREAMING_V2_FIRST_TOKEN" ] && [ "$STREAMING_V2_FIRST_TOKEN" != "N/A" ] && STREAMING_V2_FIRST_TOKEN=$(printf "%.3f" "$STREAMING_V2_FIRST_TOKEN")
          fi

          # Output metrics
          echo "CLEAN_WER_AVG=${CLEAN_WER_AVG:-N/A}" >> $GITHUB_OUTPUT
          echo "CLEAN_WER_MED=${CLEAN_WER_MED:-N/A}" >> $GITHUB_OUTPUT
          echo "CLEAN_RTFx=${CLEAN_RTFx:-N/A}" >> $GITHUB_OUTPUT
          echo "CLEAN_V2_WER_AVG=${CLEAN_V2_WER_AVG:-N/A}" >> $GITHUB_OUTPUT
          echo "CLEAN_V2_WER_MED=${CLEAN_V2_WER_MED:-N/A}" >> $GITHUB_OUTPUT
          echo "CLEAN_V2_RTFx=${CLEAN_V2_RTFx:-N/A}" >> $GITHUB_OUTPUT
          echo "OTHER_WER_AVG=${OTHER_WER_AVG:-N/A}" >> $GITHUB_OUTPUT
          echo "OTHER_WER_MED=${OTHER_WER_MED:-N/A}" >> $GITHUB_OUTPUT
          echo "OTHER_RTFx=${OTHER_RTFx:-N/A}" >> $GITHUB_OUTPUT
          echo "OTHER_V2_WER_AVG=${OTHER_V2_WER_AVG:-N/A}" >> $GITHUB_OUTPUT
          echo "OTHER_V2_WER_MED=${OTHER_V2_WER_MED:-N/A}" >> $GITHUB_OUTPUT
          echo "OTHER_V2_RTFx=${OTHER_V2_RTFx:-N/A}" >> $GITHUB_OUTPUT

          # Streaming metrics
          echo "STREAMING_WER=${STREAMING_WER:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_RTFx=${STREAMING_RTFx:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_AVG_CHUNK=${STREAMING_AVG_CHUNK:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_MAX_CHUNK=${STREAMING_MAX_CHUNK:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_CHUNKS=${STREAMING_CHUNKS:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_FIRST_TOKEN=${STREAMING_FIRST_TOKEN:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_V2_WER=${STREAMING_V2_WER:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_V2_RTFx=${STREAMING_V2_RTFx:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_V2_AVG_CHUNK=${STREAMING_V2_AVG_CHUNK:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_V2_MAX_CHUNK=${STREAMING_V2_MAX_CHUNK:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_V2_CHUNKS=${STREAMING_V2_CHUNKS:-N/A}" >> $GITHUB_OUTPUT
          echo "STREAMING_V2_FIRST_TOKEN=${STREAMING_V2_FIRST_TOKEN:-N/A}" >> $GITHUB_OUTPUT

          EXECUTION_TIME=$(( ($(date +%s) - BENCHMARK_START) / 60 ))m$(( ($(date +%s) - BENCHMARK_START) % 60 ))s
          echo "EXECUTION_TIME=$EXECUTION_TIME" >> $GITHUB_OUTPUT
          echo "FILES_COUNT=$MAX_FILES" >> $GITHUB_OUTPUT

          # Validate RTFx values - 0 indicates benchmark failure
          if [ "$CLEAN_RTFx" = "0.00" ] || [ "$CLEAN_RTFx" = "N/A" ]; then
            echo "⚠️ test-clean RTFx is 0 or N/A - benchmark may have failed"
            CLEAN_RTFX_FAILED=1
          fi
          if [ "$CLEAN_V2_RTFx" = "0.00" ] || [ "$CLEAN_V2_RTFx" = "N/A" ]; then
            echo "⚠️ test-clean (v2) RTFx is 0 or N/A - benchmark may have failed"
            CLEAN_V2_RTFX_FAILED=1
          fi
          if [ "$OTHER_RTFx" = "0.00" ] || [ "$OTHER_RTFx" = "N/A" ]; then
            echo "⚠️ test-other RTFx is 0 or N/A - benchmark may have failed"
            OTHER_RTFX_FAILED=1
          fi
          if [ "$OTHER_V2_RTFx" = "0.00" ] || [ "$OTHER_V2_RTFx" = "N/A" ]; then
            echo "⚠️ test-other (v2) RTFx is 0 or N/A - benchmark may have failed"
            OTHER_V2_RTFX_FAILED=1
          fi
          if [ "$STREAMING_RTFx" = "0.00" ] || [ "$STREAMING_RTFx" = "N/A" ]; then
            echo "⚠️ streaming RTFx is 0 or N/A - benchmark may have failed"
            STREAMING_RTFX_FAILED=1
          fi
          if [ "$STREAMING_V2_RTFx" = "0.00" ] || [ "$STREAMING_V2_RTFx" = "N/A" ]; then
            echo "⚠️ streaming (v2) RTFx is 0 or N/A - benchmark may have failed"
            STREAMING_V2_RTFX_FAILED=1
          fi

          # Report failures summary
          if [ ! -z "$CLEAN_FAILED" ] || [ ! -z "$OTHER_FAILED" ] || [ ! -z "$STREAMING_FAILED" ] || \
             [ ! -z "$CLEAN_V2_FAILED" ] || [ ! -z "$OTHER_V2_FAILED" ] || [ ! -z "$STREAMING_V2_FAILED" ] || \
             [ ! -z "$CLEAN_RTFX_FAILED" ] || [ ! -z "$CLEAN_V2_RTFX_FAILED" ] || \
             [ ! -z "$OTHER_RTFX_FAILED" ] || [ ! -z "$OTHER_V2_RTFX_FAILED" ] || \
             [ ! -z "$STREAMING_RTFX_FAILED" ] || [ ! -z "$STREAMING_V2_RTFX_FAILED" ]; then
            echo "BENCHMARK_STATUS=PARTIAL_FAILURE" >> $GITHUB_OUTPUT
            echo "⚠️ Some benchmarks failed:"
            [ ! -z "$CLEAN_FAILED" ] && echo "  - test-clean benchmark failed"
            [ ! -z "$OTHER_FAILED" ] && echo "  - test-other benchmark failed"
            [ ! -z "$STREAMING_FAILED" ] && echo "  - streaming benchmark failed"
            [ ! -z "$CLEAN_V2_FAILED" ] && echo "  - test-clean (v2) benchmark failed"
            [ ! -z "$OTHER_V2_FAILED" ] && echo "  - test-other (v2) benchmark failed"
            [ ! -z "$STREAMING_V2_FAILED" ] && echo "  - streaming (v2) benchmark failed"
            [ ! -z "$CLEAN_RTFX_FAILED" ] && echo "  - test-clean RTFx is 0"
            [ ! -z "$CLEAN_V2_RTFX_FAILED" ] && echo "  - test-clean (v2) RTFx is 0"
            [ ! -z "$OTHER_RTFX_FAILED" ] && echo "  - test-other RTFx is 0"
            [ ! -z "$OTHER_V2_RTFX_FAILED" ] && echo "  - test-other (v2) RTFx is 0"
            [ ! -z "$STREAMING_RTFX_FAILED" ] && echo "  - streaming RTFx is 0"
            [ ! -z "$STREAMING_V2_RTFX_FAILED" ] && echo "  - streaming (v2) RTFx is 0"
            exit 1
          else
            echo "BENCHMARK_STATUS=SUCCESS" >> $GITHUB_OUTPUT
            echo "✅ All benchmarks completed successfully"
          fi

      - name: Comment PR
        if: always() && github.event_name == 'pull_request'
        continue-on-error: true
        uses: actions/github-script@v7
        with:
          script: |
            const benchmarkStatus = '${{ steps.benchmark.outputs.BENCHMARK_STATUS }}';
            const statusEmoji = benchmarkStatus === 'SUCCESS' ? '✅' : '⚠️';
            const statusText = benchmarkStatus === 'SUCCESS' ? 'All benchmarks passed' : 'Some benchmarks failed (see logs)';
            const body = `## ASR Benchmark Results ${statusEmoji}
            **Status:** ${statusText}
            ### Parakeet v3 (multilingual)
            | Dataset | WER Avg | WER Med | RTFx | Status |
            |---------|---------|---------|------|--------|
            | test-clean | ${{ steps.benchmark.outputs.CLEAN_WER_AVG }}% | ${{ steps.benchmark.outputs.CLEAN_WER_MED }}% | ${{ steps.benchmark.outputs.CLEAN_RTFx }}x | ${parseFloat('${{ steps.benchmark.outputs.CLEAN_WER_AVG }}') < 10 ? '✅' : '${{ steps.benchmark.outputs.CLEAN_WER_AVG }}' === 'N/A' ? '❌' : '⚠️'} |
            | test-other | ${{ steps.benchmark.outputs.OTHER_WER_AVG }}% | ${{ steps.benchmark.outputs.OTHER_WER_MED }}% | ${{ steps.benchmark.outputs.OTHER_RTFx }}x | ${parseFloat('${{ steps.benchmark.outputs.OTHER_WER_AVG }}') < 20 ? '✅' : '${{ steps.benchmark.outputs.OTHER_WER_AVG }}' === 'N/A' ? '❌' : '⚠️'} |
            ### Parakeet v2 (English-optimized)
            | Dataset | WER Avg | WER Med | RTFx | Status |
            |---------|---------|---------|------|--------|
            | test-clean | ${{ steps.benchmark.outputs.CLEAN_V2_WER_AVG }}% | ${{ steps.benchmark.outputs.CLEAN_V2_WER_MED }}% | ${{ steps.benchmark.outputs.CLEAN_V2_RTFx }}x | ${parseFloat('${{ steps.benchmark.outputs.CLEAN_V2_WER_AVG }}') < 10 ? '✅' : '${{ steps.benchmark.outputs.CLEAN_V2_WER_AVG }}' === 'N/A' ? '❌' : '⚠️'} |
            | test-other | ${{ steps.benchmark.outputs.OTHER_V2_WER_AVG }}% | ${{ steps.benchmark.outputs.OTHER_V2_WER_MED }}% | ${{ steps.benchmark.outputs.OTHER_V2_RTFx }}x | ${parseFloat('${{ steps.benchmark.outputs.OTHER_V2_WER_AVG }}') < 20 ? '✅' : '${{ steps.benchmark.outputs.OTHER_V2_WER_AVG }}' === 'N/A' ? '❌' : '⚠️'} |
            ### Streaming (v3)
            | Metric | Value | Description |
            |--------|-------|-------------|
            | WER | ${{ steps.benchmark.outputs.STREAMING_WER }}% | Word Error Rate in streaming mode |
            | RTFx | ${{ steps.benchmark.outputs.STREAMING_RTFx }}x | Streaming real-time factor |
            | Avg Chunk Time | ${{ steps.benchmark.outputs.STREAMING_AVG_CHUNK }}s | Average time to process each chunk |
            | Max Chunk Time | ${{ steps.benchmark.outputs.STREAMING_MAX_CHUNK }}s | Maximum chunk processing time |
            | First Token | ${{ steps.benchmark.outputs.STREAMING_FIRST_TOKEN }}s | Latency to first transcription token |
            | Total Chunks | ${{ steps.benchmark.outputs.STREAMING_CHUNKS }} | Number of chunks processed |
            ### Streaming (v2)
            | Metric | Value | Description |
            |--------|-------|-------------|
            | WER | ${{ steps.benchmark.outputs.STREAMING_V2_WER }}% | Word Error Rate in streaming mode |
            | RTFx | ${{ steps.benchmark.outputs.STREAMING_V2_RTFx }}x | Streaming real-time factor |
            | Avg Chunk Time | ${{ steps.benchmark.outputs.STREAMING_V2_AVG_CHUNK }}s | Average time to process each chunk |
            | Max Chunk Time | ${{ steps.benchmark.outputs.STREAMING_V2_MAX_CHUNK }}s | Maximum chunk processing time |
            | First Token | ${{ steps.benchmark.outputs.STREAMING_V2_FIRST_TOKEN }}s | Latency to first transcription token |
            | Total Chunks | ${{ steps.benchmark.outputs.STREAMING_V2_CHUNKS }} | Number of chunks processed |
            <sub>*Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming*</sub>
            <sub>${{ steps.benchmark.outputs.FILES_COUNT }} files per dataset • Test runtime: ${{ steps.benchmark.outputs.EXECUTION_TIME }} • ${new Date().toLocaleString('en-US', { timeZone: 'America/New_York', year: 'numeric', month: '2-digit', day: '2-digit', hour: '2-digit', minute: '2-digit', hour12: true })} EST</sub>
            <sub>**RTFx** = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time<br>Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O<br>Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)</sub>
            ### Expected RTFx Performance on Physical M1 Hardware:
            **• M1 Mac: ~28x (clean), ~25x (other)**
            **• CI shows ~0.5-3x due to virtualization limitations**
            <sub>Testing methodology follows [HuggingFace Open ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)</sub>
            <!-- fluidaudio-benchmark-asr -->`;

            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
            });
            const existing = comments.find(c =>
              c.body.includes('<!-- fluidaudio-benchmark-asr -->')
            );
            if (existing) {
              await github.rest.issues.updateComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: existing.id,
                body: body
              });
            } else {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.issue.number,
                body: body
              });
            }

      - name: Upload Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: asr-results
          path: asr_results_*.json