Conversation
> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings, or use the review commands and checkboxes below to manage reviews.
📝 Walkthrough

The pull request introduces AI text enhancement capabilities using Ollama LLMs.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant TranscriptionFeature
    participant TranscriptionClient as TranscriptionClient<br/>(WhisperKit)
    participant AIEnhancementFeature
    participant AIEnhancementClient
    participant Ollama
    participant Storage

    User->>TranscriptionFeature: Start recording & transcription
    TranscriptionFeature->>TranscriptionClient: transcribe(audio, model, settings)
    TranscriptionClient->>TranscriptionClient: Process audio with WhisperKit
    TranscriptionClient->>TranscriptionClient: Apply auto-capitalization logic<br/>(based on HexSettings)
    TranscriptionClient-->>TranscriptionFeature: Return text
    alt AI Enhancement Enabled
        TranscriptionFeature->>AIEnhancementFeature: Trigger enhancement
        AIEnhancementFeature->>AIEnhancementClient: Check Ollama availability
        AIEnhancementClient->>Ollama: GET /api/version
        Ollama-->>AIEnhancementClient: Available
        AIEnhancementFeature->>AIEnhancementClient: enhance(text, model, options)
        AIEnhancementClient->>AIEnhancementClient: Validate input & clamp parameters
        AIEnhancementClient->>Ollama: POST /api/generate<br/>(with prompt template + text)
        Ollama-->>AIEnhancementClient: Enhanced text response
        AIEnhancementClient-->>AIEnhancementFeature: Enhanced result
        AIEnhancementFeature-->>TranscriptionFeature: Enhancement complete
    else AI Enhancement Disabled
        TranscriptionFeature->>Storage: Use original text
    end
    TranscriptionFeature->>Storage: Store final transcript
    Storage-->>User: Transcription complete
```
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~45 minutes

**Suggested reviewers**
🚥 Pre-merge checks

- ❌ Failed checks (1 warning, 1 inconclusive)
- ✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
🔭 Outside diff range comments (1)
Localizable.xcstrings (1)
**38-670: 🛠️ Refactor suggestion — New localization keys added without comments or translations.**
The file has been updated with numerous new localization keys related to the AI enhancement feature, but they lack comments explaining their context and translations for other supported languages like German.
For consistency with existing localization keys in the file, consider adding:
- Comments explaining where and how each string is used
- Translations for the supported languages (particularly German, which appears to be supported)
This will ensure a consistent experience for users across all supported languages.
🧹 Nitpick comments (14)
Hex/Models/HexSettings.swift (1)
**126-160: Efficient caching implementation for HexSettings.**

The caching mechanism for HexSettings is a good performance optimization to reduce disk I/O. Setting a 5-second expiration is a reasonable balance between performance and freshness. However, there's a minor issue in the variable declaration:

```diff
-private var cachedSettings: HexSettings? = nil
+private var cachedSettings: HexSettings?
```

Since Swift initializes optionals to `nil` by default, the explicit `nil` initialization is redundant.
🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 127-127: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
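To illustrate the pattern being praised here, a minimal standalone sketch of a time-based settings cache. All names (`Settings`, `SettingsCache`, the `load` closure) are hypothetical stand-ins, not the actual Hex implementation:

```swift
import Foundation

// Hypothetical sketch: cache a decoded settings value for a short TTL
// to avoid re-reading from disk on every access.
struct Settings: Codable, Equatable {
    var useAIEnhancement = false
}

final class SettingsCache {
    private var cached: Settings?
    private var cachedAt: Date = .distantPast
    private let ttl: TimeInterval = 5 // seconds, matching the 5-second expiration above

    func get(load: () -> Settings) -> Settings {
        // Reuse the cached value while it is fresh, skipping disk I/O.
        if let cached, Date().timeIntervalSince(cachedAt) < ttl {
            return cached
        }
        let fresh = load()
        cached = fresh
        cachedAt = Date()
        return fresh
    }
}
```

The trade-off is exactly the one the comment notes: a longer TTL means fewer reads but staler values.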
Hex/Features/Settings/AIEnhancementFeature.swift (2)
**21-21: Remove redundant optional initialization**

`errorMessage` is declared as an optional, which is `nil` by default. Initialising it explicitly adds noise and triggers the SwiftLint warning you saw.

```diff
-var errorMessage: String? = nil
+var errorMessage: String?
```

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 21-21: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
**24-26: Consider hoisting `defaultAIModel` to a `static let`**

Because `defaultAIModel` never varies per instance, a `static` constant is marginally cheaper and communicates intent more clearly:

```diff
-// Computed property for convenient access to the default model
-var defaultAIModel: String {
-  "gemma3"
-}
+static let defaultAIModel = "gemma3"
```

You can then reference it with `State.defaultAIModel`. This is purely stylistic; feel free to ignore if you prefer the current form.

Hex/Features/Transcription/TranscriptionFeature.swift (2)
**60-61: Unused `aiEnhancement` cancel ID**

`CancelID.aiEnhancement` is declared but never used in a `.cancellable(id:)` call. If you intentionally removed cancellation to avoid premature termination, consider deleting the enum case altogether (or re-introduce the ID in `.run {}` via `.cancellable(id: CancelID.aiEnhancement)`). This keeps the enum in sync with real usage and avoids confusion for future maintainers.
**51-54: `aiEnhancementError` action is never dispatched**

The catch block in `enhanceWithAI` sends `.aiEnhancementResult(result)` instead of `.aiEnhancementError`. Either:

- Remove the unused case to simplify the reducer, or
- Emit the dedicated error action to handle/report enhancement failures separately.

Aligning intent and implementation prevents dead code paths.
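A minimal, self-contained sketch of the pattern this comment asks for — failures routed through a dedicated error action rather than reused result handling. The action and function names here are assumptions based on the review text, not the actual Hex reducer:

```swift
import Foundation

// Hypothetical stand-ins for the reducer's action type and effect body.
enum Action {
    case aiEnhancementResult(String)
    case aiEnhancementError(String)
}

func enhanceWithAI(
    _ text: String,
    enhance: (String) async throws -> String,
    send: (Action) -> Void
) async {
    do {
        let enhanced = try await enhance(text)
        send(.aiEnhancementResult(enhanced))
    } catch {
        // Dedicated error path keeps the action enum's intent
        // aligned with what the effect actually emits.
        send(.aiEnhancementError(error.localizedDescription))
    }
}
```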
Hex/Clients/PasteboardClient.swift (2)
**60-64: Remove now-unused static tracking properties**

`savedChangeCount` and `savedPasteboardName` are written in `savePasteboardState` but never read elsewhere after the refactor. They can be deleted to reduce clutter:

```diff
-// Stores the previous pasteboard owner change count
-private static var savedChangeCount: Int = 0
-// Stores the previous pasteboard contents name for tracking
-private static var savedPasteboardName: String?
```
**94-109: Return value of `writeObjects` should be checked**

`NSPasteboard.writeObjects(_:)` returns a `Bool` indicating success. Silently ignoring a failure may leave the pasteboard empty, causing data loss on restore.

```diff
-if let items = backupPasteboard.pasteboardItems {
-  backupPasteboard.writeObjects(items)
+if let items = backupPasteboard.pasteboardItems,
+   !items.isEmpty {
+  let ok = backupPasteboard.writeObjects(items)
+  if !ok {
+    print("⚠️ Failed to write items to backup pasteboard")
+  }
 }
```

Hex/Clients/AIEnhancementClient.swift (7)
**1-11: Header block includes an unusual creator attribution.**

The file header indicates it was created by "Claude AI", which is unusual for source code files. Consider changing this to reflect the actual developer or your team name for consistency with other files in the project.
**12-14: Consider using OllamaKit directly, as suggested by the comment.**

The commented-out code suggests a future enhancement to use OllamaKit directly. This could provide better integration with Ollama and potentially simplify the code by leveraging an official or community-maintained client library rather than implementing the API integration manually.
Would you like me to research if OllamaKit exists and provide implementation guidelines for integrating it?
**96-101: Replace magic number with a named constant.**

The check `text.count > 5` uses a hardcoded value. This would be clearer as a named constant:

```diff
+// Minimum text length required for enhancement
+private let minimumTextLengthForEnhancement = 5
+
 /// Enhances text using a local AI model
 func enhance(text: String, model: String, options: EnhancementOptions, progressCallback: @escaping (Progress) -> Void) async throws -> String {
   // Skip if the text is empty or too short
-  guard !text.isEmpty, text.count > 5 else {
+  guard !text.isEmpty, text.count > minimumTextLengthForEnhancement else {
     print("[AIEnhancementClientLive] Text too short for enhancement, returning original")
     return text
   }
```
**115-116: Use localized error messages instead of hardcoded strings.**

Error messages are hardcoded in English. Since your app supports localization (as seen in the Localizable.xcstrings file), consider using localized strings for error messages:

```diff
-throw NSError(domain: "AIEnhancementClient", code: -5,
-              userInfo: [NSLocalizedDescriptionKey: "Ollama is not available. Please ensure it's running."])
+throw NSError(domain: "AIEnhancementClient", code: -5,
+              userInfo: [NSLocalizedDescriptionKey: NSLocalizedString("Ollama is not available. Please ensure it's running.", comment: "Error when Ollama service is unavailable")])
```
**214-252: Consider making temperature and token limits part of the EnhancementOptions validation.**

The limits for temperature and token count are enforced in the enhancement method rather than in the `EnhancementOptions` struct itself. Consider validating these parameters in the `EnhancementOptions` initializer instead, ensuring that invalid values cannot be created in the first place:

```diff
 struct EnhancementOptions {
   /// The prompt to send to the AI model for text enhancement
   var prompt: String
   /// Temperature controls randomness: lower values (0.1-0.3) are more precise,
   /// higher values (0.7-1.0) give more creative/varied results
   var temperature: Double
   /// Maximum number of tokens to generate in the response
   var maxTokens: Int

+  // Valid ranges for parameters
+  private static let minTemperature = 0.1
+  private static let maxTemperature = 1.0
+  private static let minTokens = 100
+  private static let maxTokens = 2000
+
   /// Default prompt for enhancing transcribed text with clear instructions
   static let defaultPrompt = """
   // [existing prompt]
   """

   /// Default enhancement options for transcribed text
   static let `default` = EnhancementOptions(
     prompt: defaultPrompt,
     temperature: 0.3,
     maxTokens: 1000
   )

   /// Custom initialization with sensible defaults
   init(prompt: String = defaultPrompt, temperature: Double = 0.3, maxTokens: Int = 1000) {
     self.prompt = prompt
-    self.temperature = temperature
-    self.maxTokens = maxTokens
+    self.temperature = max(Self.minTemperature, min(Self.maxTemperature, temperature))
+    self.maxTokens = max(Self.minTokens, min(Self.maxTokens, maxTokens))
   }
 }
```

Then in the `enhanceWithOllama` method:

```diff
-// Build request parameters with appropriate defaults
-let temperature = max(0.1, min(1.0, options.temperature)) // Ensure valid range
-let maxTokens = max(100, min(2000, options.maxTokens)) // Reasonable limits
 let requestDict: [String: Any] = [
   "model": model,
   "prompt": fullPrompt,
-  "temperature": temperature,
-  "max_tokens": maxTokens,
+  "temperature": options.temperature,
+  "max_tokens": options.maxTokens,
   "stream": false,
   "system": "You are an AI that improves transcribed text while preserving meaning."
 ]
```
**139-141: Consider making the Ollama API endpoint configurable.**

The Ollama API endpoint is hardcoded in multiple places. Consider making it configurable, either through a configuration file or an environment variable, to support different setups:

```diff
 class AIEnhancementClientLive {
+  // MARK: - Configuration
+
+  private let ollamaBaseURL: String
+
+  init(ollamaBaseURL: String = "http://localhost:11434") {
+    self.ollamaBaseURL = ollamaBaseURL
+  }

   // MARK: - Public Methods

   // ... existing code ...

   /// Checks if Ollama is available on the system
   func isOllamaAvailable() async -> Bool {
     // Simple check - try to connect to Ollama's API endpoint
     do {
-      var request = URLRequest(url: URL(string: "http://localhost:11434/api/version")!)
+      var request = URLRequest(url: URL(string: "\(ollamaBaseURL)/api/version")!)
       request.timeoutInterval = 5.0 // Longer timeout for more reliability
```

And similarly update the other hardcoded URLs.

Also applies to: 179-180, 225-229
**99-131: Consider using a proper logging framework instead of print statements.**

The code uses `print` statements for logging. Consider `os.log` or a third-party solution that supports different log levels and better formatting. Here's how you might use `os.log` instead:

```diff
+import os

 class AIEnhancementClientLive {
+  // MARK: - Logging
+
+  private let logger = Logger(subsystem: "com.yourcompany.Hex", category: "AIEnhancement")

   // MARK: - Public Methods

   /// Enhances text using a local AI model
   func enhance(text: String, model: String, options: EnhancementOptions, progressCallback: @escaping (Progress) -> Void) async throws -> String {
     // Skip if the text is empty or too short
     guard !text.isEmpty, text.count > 5 else {
-      print("[AIEnhancementClientLive] Text too short for enhancement, returning original")
+      logger.debug("Text too short for enhancement, returning original")
       return text
     }

     let progress = Progress(totalUnitCount: 100)
     progressCallback(progress)

-    print("[AIEnhancementClientLive] Starting text enhancement with model: \(model)")
-    print("[AIEnhancementClientLive] Text to enhance (\(text.count) chars): \"\(text.prefix(50))...\"")
+    logger.debug("Starting text enhancement with model: \(model)")
+    logger.debug("Text to enhance (\(text.count) chars): \"\(text.prefix(50))...\"")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- Hex/Clients/AIEnhancementClient.swift (1 hunks)
- Hex/Clients/PasteboardClient.swift (2 hunks)
- Hex/Clients/RecordingClient.swift (2 hunks)
- Hex/Clients/TranscriptionClient.swift (1 hunks)
- Hex/Features/App/AppFeature.swift (3 hunks)
- Hex/Features/Settings/AIEnhancementFeature.swift (1 hunks)
- Hex/Features/Settings/AIEnhancementView.swift (1 hunks)
- Hex/Features/Settings/SettingsFeature.swift (5 hunks)
- Hex/Features/Settings/SettingsView.swift (1 hunks)
- Hex/Features/Transcription/TranscriptionFeature.swift (6 hunks)
- Hex/Features/Transcription/TranscriptionIndicatorView.swift (8 hunks)
- Hex/Models/HexSettings.swift (5 hunks)
- Localizable.xcstrings (17 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
Hex/Features/Settings/AIEnhancementView.swift (1)
- Hex/Clients/AIEnhancementClient.swift (1): `isOllamaAvailable` (136-159)

Hex/Clients/AIEnhancementClient.swift (1)
- Hex/Clients/TranscriptionClient.swift (1): `getAvailableModels` (198-200)

Hex/Features/Transcription/TranscriptionFeature.swift (1)
- Hex/Clients/AIEnhancementClient.swift (1): `enhance` (96-133)
🪛 SwiftLint (0.57.0)
Hex/Models/HexSettings.swift
[Warning] 127-127: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
Hex/Features/Settings/AIEnhancementFeature.swift
[Warning] 21-21: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
🔇 Additional comments (41)
Hex/Models/HexSettings.swift (6)
**20-25: The AI enhancement settings are well organized and properly integrated.** The new settings for disabling auto-capitalization and AI enhancement options are properly added to the struct with appropriate defaults. The naming is clear and descriptive.

**42-46: Good practice adding corresponding CodingKeys entries.** All the new properties have corresponding CodingKeys, which ensures proper JSON coding/decoding consistency.

**62-67: Well-structured initializer parameters.** New parameters are added to the initializer with appropriate default values that match the property defaults.

**82-86: Proper initializer implementation.** All new properties are correctly initialized from the parameters.

**116-122: Proper decoder handling for new properties.** The custom decoder correctly handles all new properties, with appropriate default values if they are missing from the encoded data.

**168-168: Good implementation of caching in the SharedReaderKey.** The update to use the cached settings is a nice optimization and maintains the same functionality.
Hex/Features/Settings/SettingsView.swift (1)
**221-226: Well-integrated UI control for the new auto-capitalization setting.** The toggle is properly bound to the HexSettings property and includes clear descriptive text and an appropriate icon. It's consistently styled with other toggles in the General section.
Hex/Features/App/AppFeature.swift (3)
**18-18: Good addition of the AI enhancement tab.** The new `aiEnhancement` case is properly added to the ActiveTab enum.

**84-90: Well-structured UI integration for the AI enhancement feature.** The new AI Enhancement button is implemented consistently with other navigation items, using the same pattern for button styling and tagging.

**110-112: Good integration of AIEnhancementView in the detail view.** The AIEnhancementView is properly scoped to the settings.aiEnhancement state and action, following the established pattern.
Hex/Clients/RecordingClient.swift (2)
**384-384: Good improvement: using `let` for an immutable allocation.** The change from `var` to `let` for `deviceNamePtr` is good practice since it's only initialized once and never modified after allocation. This ensures immutability and prevents accidental modifications.
**562-583: Nice optimization to throttle meter updates.** This implementation adds intelligent throttling for meter updates, which:

- Reduces UI updates for imperceptible changes (< 0.05 for average, < 0.1 for peak)
- Only forces updates every ~500ms (5 updates at 100ms intervals)
- Improves performance and reduces resource usage while maintaining responsiveness

```diff
+var lastMeter = Meter(averagePower: 0, peakPower: 0)
+var updateCount = 0
+
+// Only emit if there's a significant change, or every ~5 updates (500ms)
+let significantChange = abs(currentMeter.averagePower - lastMeter.averagePower) > 0.05 ||
+  abs(currentMeter.peakPower - lastMeter.peakPower) > 0.1
+
+if significantChange || updateCount >= 5 {
+  meterContinuation.yield(currentMeter)
+  lastMeter = currentMeter
+  updateCount = 0
+} else {
+  updateCount += 1
+}
```

Hex/Features/Settings/SettingsFeature.swift (5)
**39-40: Well-structured integration of AIEnhancementFeature.** The AI enhancement feature is properly added as a property in the State struct, following the existing pattern for feature composition.

**67-68: Consistent action enum extension.** The new action case for AI enhancement follows the established pattern in this reducer.

**83-85: Good use of Scope for feature composition.** The `AIEnhancementFeature` is correctly scoped within the main reducer, ensuring proper separation of concerns.

**112-123: Performance improvement for device refresh.** Two smart optimizations:

- Extended refresh interval from 120 to 180 seconds to reduce resource usage
- Added conditional refresh that only runs when the app is active AND the settings panel is visible

This will improve battery life and reduce unnecessary background processing.

**298-300: Consistent handling of sub-feature actions.** The `.aiEnhancement` action handler follows the established pattern for delegating to the scoped sub-reducer.

Hex/Features/Transcription/TranscriptionIndicatorView.swift (6)
**17-17: Well-integrated new enhancing status.** The new `enhancing` case is correctly added to the Status enum with a dedicated green color to visually distinguish it from other statuses.

Also applies to: 24-24

**33-33: Consistent styling for enhancing status.** The `enhancing` status is properly handled in all styling computations (background, stroke, and inner shadow), maintaining visual consistency with existing states.

Also applies to: 44-44, 55-55

**68-68: Complete visual integration of enhancing state.** The enhancement state is thoroughly implemented in:

- Shadow effects with appropriate opacity levels
- Glow effects using the green color
- Animation effects with a dedicated counter variable

This provides a cohesive visual experience for the new state.

Also applies to: 108-117, 128-131

**133-147: Optimized animation with consolidated task.** Excellent optimization to use a single animation task for both `transcribing` and `enhancing` states instead of separate tasks. The code:

- Only runs the animation when needed (status check)
- Updates the correct counter based on the current status
- Maintains the same timing for animations (0.3s)

This reduces resource usage while providing the same visual feedback.

**150-151: Clear tooltip behavior for distinct states.** Good decision to explicitly limit the "Model prewarming..." tooltip to only appear for the `prewarming` state, keeping the UI clean during enhancement.

**178-178: Complete preview with all states.** Adding the `enhancing` status to the preview ensures developers can test and verify all possible visual states.

Hex/Features/Settings/AIEnhancementView.swift (7)
**11-54: Well-structured AIEnhancementView with conditional sections.** The view is well organized, with:

- Logical section grouping
- Conditional rendering based on feature enablement and Ollama availability
- Appropriate task initialization for data loading
- Clear section headers and explanatory footers

The form style and binding to the store follow SwiftUI best practices.

**59-119: Informative connection status view for better user experience.** The connection status view provides:

- A clear visual alert with appropriate icon and styling
- Detailed setup instructions with bullet points
- Actionable buttons for downloading Ollama and checking the connection
- Proper spacing and visual hierarchy

This helps users understand what's needed to make the feature work.

**122-156: Good activation toggle with status feedback.** The toggle implementation:

- Properly uses `withLock` for thread-safe settings updates
- Triggers an Ollama availability check when enabled
- Shows a connection status indicator when connected
- Has clear explanatory text

I appreciate the visual indicator (green dot) when connected.

**159-258: Complete model selection UI with all possible states.** The model selection section handles all states gracefully:

- Loading state with progress indicator
- Error state with message
- Empty state with helpful link
- Normal state with proper picker

The refresh button and explanatory footer provide good UX.

**261-316: Well-designed temperature control with clear visual cues.** The temperature slider implementation:

- Shows the precise numeric value
- Uses clear label indicators for "Precision" vs "Creativity"
- Has an appropriate range (0-1) and step (0.05)
- Updates settings thread-safely with `withLock`
- Includes explanatory text about the impact of different values

**319-400: Versatile prompt configuration with expandable editing.** The prompt section offers a good balance of simplicity and power:

- The collapsed view shows a preview with limited lines
- The expandable view provides full editing capability
- A reset button restores defaults
- A monospaced font in the editor improves code/prompt editing
- Footer text differs based on the expanded state

The animation for expanding/collapsing is a nice touch.

**402-413: Reusable bullet point helper for consistent formatting.** Good extraction of the bullet point rendering into a helper function for consistent styling and reuse throughout the view.
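The thread-safe `withLock` update pattern praised in these comments can be sketched roughly as follows, using the Sharing library's `@Shared` API (the settings type and key here are hypothetical stand-ins, not the actual `HexSettings`):

```swift
import Sharing // the swift-sharing library behind TCA's @Shared

// Hypothetical settings type for illustration only.
struct DemoSettings: Codable, Equatable {
    var temperature: Double = 0.3
}

struct SettingsEditor {
    @Shared(.inMemory("demoSettings")) var settings = DemoSettings()

    func setTemperature(_ value: Double) {
        // withLock serializes the mutation, so concurrent writers
        // cannot interleave partial updates to the shared value.
        $settings.withLock { $0.temperature = value }
    }
}
```

Directly assigning through the projected value without `withLock` would lose this atomicity guarantee under concurrent access, which is why the review calls it out as good practice.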
Hex/Clients/AIEnhancementClient.swift (6)
**18-27: Good use of the dependency injection pattern with TCA.** The `AIEnhancementClient` structure effectively uses the `@DependencyClient` macro for dependency injection, providing clear method signatures with sensible defaults. This follows the TCA pattern well and enables easy testing through dependency substitution.

**30-70: Well-designed options struct with clear documentation.** The `EnhancementOptions` struct is well designed, with:

- Clear documentation for each property
- Appropriate default values
- A detailed default prompt with specific instructions
- Clean initialization with sensible defaults

This makes the API both easy to use with defaults and flexible for custom configurations.

**136-159: Good implementation of the Ollama availability check.** The `isOllamaAvailable` method is well implemented, with:

- Appropriate timeout settings
- Clear logging
- Proper error handling that defaults to false when errors occur
- Status code validation
- Useful debug information

This should provide reliable detection of the Ollama service.

**162-209: Well-structured model fetching implementation.** The `getAvailableModels` method provides a comprehensive implementation:

- Clean nested structure for JSON decoding
- Proper error handling at each step
- Appropriate timeout settings
- Result sorting for better UX
- Detailed error messages with error propagation

This will provide reliable model listing functionality.

**92-134: Well-structured error handling and progress reporting in the enhancement logic.** The `enhance` method contains robust error handling and progress reporting:

- Checks for Ollama availability before proceeding
- Propagates errors appropriately
- Reports progress at key points in the process
- Logs useful diagnostic information
- Returns the original text when enhancement fails or isn't possible

This helps ensure a good user experience even when things go wrong.

**214-329: Comprehensive implementation of text enhancement via the Ollama API.** The `enhanceWithOllama` method provides a thorough implementation:

- Input validation
- Well-constructed prompt format
- Parameter validation
- Proper HTTP request setup
- Comprehensive error handling with descriptive messages
- Progress reporting at multiple stages
- Response parsing and cleanup
- Fallback to original text when needed

The implementation should provide reliable enhancement functionality with good error recovery.
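As described above, the availability check is essentially a short-timeout GET against Ollama's version endpoint. A minimal standalone sketch of that approach (the `/api/version` endpoint is part of Ollama's public REST API; the function name and structure here are hypothetical, not the actual `isOllamaAvailable` implementation):

```swift
import Foundation

// Standalone sketch of an Ollama availability probe: short timeout,
// status-code check, and any thrown error treated as "not available".
func probeOllama(baseURL: String = "http://localhost:11434") async -> Bool {
    guard let url = URL(string: "\(baseURL)/api/version") else { return false }
    var request = URLRequest(url: url)
    request.timeoutInterval = 5.0 // fail fast if the daemon isn't running

    do {
        let (_, response) = try await URLSession.shared.data(for: request)
        guard let http = response as? HTTPURLResponse else { return false }
        return (200..<300).contains(http.statusCode)
    } catch {
        // Timeouts and connection failures simply mean "unavailable".
        return false
    }
}
```

Defaulting to `false` on any error is what makes the check safe to call from UI code: the caller only ever sees a boolean, never a thrown network error.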
Localizable.xcstrings (5)
**203-207: New auto-capitalization feature strings.** The addition of "Disable auto-capitalization" and "Disable automatic capitalization in transcriptions" strings aligns with the PR objective mentioning the new `disableAutoCapitalization` setting. This provides the necessary UI text for the feature mentioned in the PR summary.

**38-40: AI Enhancement feature localization keys added.** The addition of "AI Enhancement" and related description strings properly supports the new AI enhancement feature mentioned in the PR objectives. These strings provide the necessary UI text for the feature described in the PR summary.

Also applies to: 259-261, 672-673

**422-432: Ollama status indicator strings added.** The Ollama connection status strings ("Ollama Connected", "Ollama Not Connected", "Ollama Status") provide clear feedback to users about the connection state for the AI enhancement feature, which is important for user experience when integrating with an external service.

**152-153: Response style configuration strings added.** The strings related to "Creativity", "Precision", and "Response Style", along with explanatory text, provide user-friendly controls for configuring the AI enhancement's behavior, allowing users to tailor the enhancement process to their needs.

Also applies to: 367-368, 492-493, 526-527

**262-263: Enhancement prompt customization strings added.** The strings for "Enhancement Prompt", prompt editing instructions, and "Reset to Default" give users the ability to customize how the AI enhances their transcriptions, offering flexibility while maintaining the option to revert to defaults.

Also applies to: 371-372, 523-524, 655-656
Reviewed excerpt from Hex/Features/Settings/AIEnhancementView.swift:

```swift
import ComposableArchitecture
import SwiftUI

struct AIEnhancementView: View {
  @Bindable var store: StoreOf<AIEnhancementFeature>
  @State private var showExpandedPrompt = false
  @State private var isHoveringModelSelect = false

  var body: some View {
    Form {
      // Activation Section (Always show this first)
      Section {
        activationToggle
      } header: {
        Text("AI Enhancement")
      } footer: {
        Text("Enable AI to improve transcription quality by fixing grammar, formatting, and errors.")
          .foregroundColor(.secondary.opacity(0.7))
          .font(.caption)
      }

      // Only show other settings if AI enhancement is enabled
      if store.hexSettings.useAIEnhancement {
        // Connection Status Section (only if AI enhancement is enabled)
        if !store.isOllamaAvailable {
          Section {
            connectionStatusView
          } header: {
            Text("Ollama Status")
          }
        }

        // Model Selection Section
        modelSelectionSection

        // Temperature Control Section
        temperatureSection

        // Prompt Configuration Section
        promptSection
      }
    }
    .formStyle(.grouped)
    .task {
      await store.send(.task).finish()
    }
  }
```
💡 Verification agent
🧩 Analysis chain
Verify ability to detect Ollama timeout issues.
The current implementation relies on the AIEnhancementClient's availability check, but there might be edge cases where Ollama appears available but becomes unresponsive during actual usage.
🏁 Script executed:

```shell
#!/bin/bash
# Check the timeout handling in AIEnhancementClient's isOllamaAvailable method

# Look for timeout configuration in AIEnhancementClient
rg -A 2 "timeoutInterval" --glob "**/*EnhancementClient*"

# Check error handling for connection issues
rg -A 5 "try await" --glob "**/*EnhancementClient*"
```

Length of output: 3778
🏁 Script executed:

```shell
#!/bin/bash
# Extract the isOllamaAvailable implementation
rg -A20 "func isOllamaAvailable" --glob "Hex/Clients/AIEnhancementClient.swift"
echo "--------------------------------------------------"
# Extract the generation call (enhanceWithOllama) implementation
rg -A20 "func enhanceWithOllama" --glob "Hex/Clients/AIEnhancementClient.swift"
```

Length of output: 4135
Handle Ollama timeouts and connectivity failures during enhancement
Although isOllamaAvailable() uses a 5 s timeout and returns false on error, the actual generation call in enhanceWithOllama(…) (60 s timeout) simply propagates any thrown error. This means the UI may think Ollama is available but then hang or crash on a timeout. We need to:
• Add explicit error handling around the URLSession call in enhanceWithOllama to catch timeouts and other network errors, then translate them into a user‐facing state (e.g. mark Ollama unavailable, show an error banner).
• In your feature reducer (or wherever .enhance is called), catch enhancement errors and dispatch an action to recheck availability or present an alert.
Example diff in AIEnhancementClient.swift:
```diff
 // generation call
-let (data, response) = try await URLSession.shared.data(for: request)
+let (data, response): (Data, URLResponse)
+do {
+  (data, response) = try await URLSession.shared.data(for: request)
+} catch {
+  // Treat timeouts and connectivity issues as "Ollama unavailable"
+  print("[AIEnhancementClientLive] Generation failed: \(error.localizedDescription)")
+  throw NSError(domain: "AIEnhancementClient",
+                code: -1001, // NSURLErrorTimedOut or similar
+                userInfo: [NSLocalizedDescriptionKey: "Ollama is unresponsive"])
+}
```
+ }And in your reducer:
- Catch the thrown error from `enhance` and send a new `.ollamaBecameUnavailable` action.
- In response, re-run `isOllamaAvailable()` and display an alert if still down.
This ensures the user never hits a silent timeout and always sees a clear “Ollama unavailable” state.
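A rough, self-contained sketch of the reducer-side flow this recommendation describes. The action names (`.ollamaBecameUnavailable`, etc.) are taken from the suggestion above, and the surrounding types are simplified stand-ins, not the actual Hex reducer:

```swift
import Foundation

// Simplified stand-in for the feature's action type: an enhancement
// failure triggers an availability re-check, which decides whether to
// surface an "Ollama unavailable" alert to the user.
enum AppAction {
    case enhancementFailed(String)
    case ollamaBecameUnavailable
    case showAlert(String)
}

func handleEnhancementError(
    _ error: Error,
    checkAvailability: () async -> Bool,
    send: (AppAction) async -> Void
) async {
    await send(.enhancementFailed(error.localizedDescription))
    // Re-check availability before deciding what to tell the user:
    // a transient timeout should not immediately be reported as "down".
    if await checkAvailability() == false {
        await send(.ollamaBecameUnavailable)
        await send(.showAlert("Ollama is unavailable. Please ensure it's running."))
    }
}
```

The key design point is that the error path and the availability state are kept separate: a failed generation is first recorded, and only a failed re-probe downgrades the connection status.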
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 0
♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
**396-435: ⚠️ Potential issue — Compile-time error: missing argument labels when calling `enhance`**

`AIEnhancementClient.enhance` expects the external labels `text:model:options:`, but the call is missing them.

```diff
-let enhancedText = try await aiEnhancement.enhance(result, model, options) { _ in }
+let enhancedText = try await aiEnhancement.enhance(
+  text: result,
+  model: model,
+  options: options
+) { _ in }
```
🧹 Nitpick comments (2)
Hex/Features/Transcription/TranscriptionFeature.swift (2)
**433-435: Consider documenting the cancellation decision more clearly.** The code avoids making the AI enhancement cancellable due to potential issues. This reasoning should be better documented so future maintainers understand the rationale.

```diff
 // Don't make this cancellable to avoid premature cancellation
 // This may have been causing the issue with the enhancement being cancelled
+// Note: AI enhancement operations may take longer than typical network operations.
+// Making this cancellable was causing the enhancement to be prematurely cancelled when
+// another action was dispatched or when the user interacted with the UI.
```
**534-540: Commented-out code needs explanation.** The decision not to cancel AI enhancement during the cancel flow should be better documented.

```diff
 // Don't cancel AI enhancement as it might cause issues
-// .cancel(id: CancelID.aiEnhancement),
+// AI enhancement is intentionally not cancelled here because:
+// 1. It might be near completion when cancelled, wasting processing
+// 2. It can cause race conditions with result handling
+// .cancel(id: CancelID.aiEnhancement),
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- Hex.xcodeproj/project.pbxproj (6 hunks)
- Hex/Clients/AIEnhancementClient.swift (1 hunks)
- Hex/Clients/RecordingClient.swift (2 hunks)
- Hex/Clients/TranscriptionClient.swift (4 hunks)
- Hex/Features/Settings/SettingsFeature.swift (5 hunks)
- Hex/Features/Transcription/TranscriptionFeature.swift (8 hunks)
✅ Files skipped from review due to trivial changes (1)
- Hex.xcodeproj/project.pbxproj
🚧 Files skipped from review as they are similar to previous changes (4)
- Hex/Clients/RecordingClient.swift
- Hex/Features/Settings/SettingsFeature.swift
- Hex/Clients/TranscriptionClient.swift
- Hex/Clients/AIEnhancementClient.swift
🔇 Additional comments (8)
Hex/Features/Transcription/TranscriptionFeature.swift (8)
22-22: State tracking addition looks good. Adding the `isEnhancing` state flag properly tracks when AI enhancement is active, maintaining consistency with other state flags in the struct.

50-55: Well-structured action cases for AI enhancement flow. The new action cases properly handle all aspects of the AI enhancement flow, including state management, results, errors, and Ollama availability.

62-62: Cancel ID for AI enhancement added correctly. This addition follows the established pattern for other cancellable operations in the codebase.

70-70: Dependency injection properly implemented. The `aiEnhancement` dependency is correctly injected following the established pattern.

120-154: Well-implemented AI enhancement action handlers. The implementation checks for Ollama connectivity issues specifically and triggers availability rechecks when needed. Good error handling and logging.

357-391: Good conditional flow for AI enhancement. The transcription result handler now intelligently routes the result through AI enhancement when enabled, with proper state management.

438-460: AI enhancement result handler looks good. The handler properly resets state and finalizes the transcript similarly to the original flow.

582-583: Good UI state prioritization for enhancing status. The status logic correctly prioritizes showing the enhancing status before transcribing or recording states.
Actionable comments posted: 3
🔭 Outside diff range comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
158-164: 🛠️ Refactor suggestion: Update cancel guard condition to include `isEnhancing`

The cancel handler has a guard condition that only runs if `isRecording` or `isTranscribing` are true, but it doesn't check `isEnhancing`. This might lead to inconsistencies if cancel is triggered during enhancement. Update the guard condition to include all relevant states:

 case .cancel:
   // Only cancel if we're in the middle of recording or transcribing
-  guard state.isRecording || state.isTranscribing else {
+  guard state.isRecording || state.isTranscribing || state.isEnhancing else {
     return .none
   }
   return handleCancel(&state)
♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
423-425: ⚠️ Potential issue: Missing argument labels in `enhance` method call

The `AIEnhancementClient.enhance` method likely requires argument labels (`text:model:options:`), but they're omitted here. The call should include the external parameter labels to prevent compilation errors:

-let enhancedText = try await aiEnhancement.enhance(result, model, options) { progress in
+let enhancedText = try await aiEnhancement.enhance(
+  text: result,
+  model: model,
+  options: options
+) { progress in
   // Optional: Could update UI with progress information here if needed
 }

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 423-423: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
🧹 Nitpick comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
423-425: Unused closure parameters flagged by static analysis

There are two instances of unused parameters in closures that should be replaced with `_` as recommended by SwiftLint. Apply these changes:

 // In recheckOllamaAvailability handler
-return .run { send in
+return .run { _ in

 // In enhanceWithAI method
-let enhancedText = try await aiEnhancement.enhance(...) { progress in
+let enhancedText = try await aiEnhancement.enhance(...) { _ in

Also applies to: 147-148
🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 423-423: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
Hex/Features/Transcription/TranscriptionFeature.swift (8 hunks)
🧰 Additional context used
🪛 SwiftLint (0.57.0)
Hex/Features/Transcription/TranscriptionFeature.swift
[Warning] 147-147: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
[Warning] 423-423: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
🔇 Additional comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
435-437: Uncancellable AI enhancement operation

The AI enhancement effect is deliberately not made cancellable (with a comment explaining it avoids premature cancellation), but this creates an inconsistency with the cancel handler, which resets the `isEnhancing` state flag without actually cancelling the operation. This could lead to a situation where the UI shows enhancement is not happening (after cancel) while the enhancement is still running in the background. Consider the implications:
- What happens if the enhancement completes after cancellation?
- Could this lead to unexpected state or race conditions?
You might need additional safeguards in `handleAIEnhancement` to check if the operation was cancelled before applying the result.
return .run { send in
  let isAvailable = await aiEnhancement.isOllamaAvailable()
  if !isAvailable {
    // Could dispatch to a UI state to show an alert or notification
    print("[TranscriptionFeature] Ollama is not available. AI enhancement is disabled.")
    // Here you would typically update UI state to show an alert
  }
}
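For context, the review's walkthrough describes `isOllamaAvailable` as a probe of Ollama's `/api/version` endpoint. A minimal standalone sketch of such a check (the function name, base URL, and timeout are assumptions, not the PR's code):

```swift
import Foundation

// Hypothetical availability probe; the real AIEnhancementClient may differ.
func probeOllama(baseURL: URL = URL(string: "http://localhost:11434")!) async -> Bool {
    var request = URLRequest(url: baseURL.appendingPathComponent("api/version"))
    request.timeoutInterval = 2 // fail fast so the UI isn't blocked
    do {
        let (_, response) = try await URLSession.shared.data(for: request)
        return (response as? HTTPURLResponse)?.statusCode == 200
    } catch {
        return false // timeouts and connection errors mean "unavailable"
    }
}
```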
💡 Verification agent
🧩 Analysis chain
Incomplete handling of Ollama unavailability
The rechecking logic detects Ollama unavailability but only logs it without updating any UI state or taking corrective action. Users won't be notified of the issue.
Consider updating to actually dispatch an action that updates the UI when Ollama is unavailable:
case .recheckOllamaAvailability:
// Recheck if Ollama is available and update UI accordingly
return .run { send in
let isAvailable = await aiEnhancement.isOllamaAvailable()
if !isAvailable {
// Could dispatch to a UI state to show an alert or notification
print("[TranscriptionFeature] Ollama is not available. AI enhancement is disabled.")
// Here you would typically update UI state to show an alert
+ await send(.someActionToShowOllamaUnavailableAlert)
}
}

🏁 Script executed:
#!/bin/bash
# Verify if there's any UI state or user notification mechanism implemented elsewhere
# Look for UI alerts or notifications related to Ollama availability
rg -A 3 -B 3 "Ollama.*not available|ollamaUnavailable" --glob "*.swift"

Length of output: 2932
Dispatch a UI action in TranscriptionFeature for Ollama unavailability
The .recheckOllamaAvailability branch only logs the error but doesn’t surface it to the user. Your Settings view already shows a message when store.isOllamaAvailable is false—do the same in Transcription:
• File: Hex/Features/Transcription/TranscriptionFeature.swift (Lines 147–154)
• Add a new action in TranscriptionAction, e.g. .ollamaUnavailable
• Update the reducer:
case .recheckOllamaAvailability:
return .run { send in
let isAvailable = await aiEnhancement.isOllamaAvailable()
if !isAvailable {
print("[TranscriptionFeature] Ollama is not available. AI enhancement is disabled.")
+ await send(.ollamaUnavailable)
}
}

• Handle .ollamaUnavailable in the reducer to set a UI flag (e.g. state.showOllamaAlert = true)
• In TranscriptionView, bind an Alert to state.showOllamaAlert, similar to AIEnhancementView.swift
This ensures users receive an immediate notification when Ollama can’t be reached.
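A minimal sketch of that alert binding on the view side (the flag and strings are assumptions mirroring the suggestion above; with TCA this would go through the store rather than a plain `Binding`):

```swift
import SwiftUI

// Hypothetical modifier surfacing the reducer's `showOllamaAlert` flag.
struct OllamaAlertModifier: ViewModifier {
    @Binding var showOllamaAlert: Bool

    func body(content: Content) -> some View {
        content.alert("Ollama Unavailable", isPresented: $showOllamaAlert) {
            Button("OK", role: .cancel) {}
        } message: {
            Text("AI enhancement is disabled until Ollama can be reached.")
        }
    }
}
```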
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 147-147: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
i want this!!

@0ni-x4 motivated me to finish this. will get it going!!
Actionable comments posted: 0
♻️ Duplicate comments (4)
Hex/Features/Transcription/TranscriptionFeature.swift (4)
154-162: Missing UI feedback for Ollama unavailability. The rechecking logic detects Ollama unavailability but only logs it without updating any UI state or taking corrective action. Users won't be notified of the issue.
#!/bin/bash
# Verify if there's any UI state update or user notification mechanism implemented elsewhere
# Look for UI alerts or notifications related to Ollama availability
rg -A 3 -B 3 "Ollama.*not available|ollamaUnavailable" --glob "*.swift"

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 155-155: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
579-581: Potential UI state inconsistency with commented cancellation code. The cancel handler resets the `isEnhancing` state but explicitly avoids cancelling the AI enhancement effect. This creates a visual/UI inconsistency where the UI indicates cancellation but the operation continues in the background. Consider these options:
- Make the AI enhancement cancellable and actually cancel it
- Add state tracking to ignore enhancement results if cancel was requested
- Document this behavior clearly for future maintainers
 // Don't cancel AI enhancement as it might cause issues
 // .cancel(id: CancelID.aiEnhancement),
+// TODO: This creates a potential inconsistency - the UI shows cancellation
+// but enhancement continues in background. Consider implementing a safer
+// cancellation approach or state tracking to ignore late results.
138-147: ⚠️ Potential issue: Incomplete error handling in AI enhancement.

The comment states "For other errors, just use the original transcription," but the code returns `.none`, which doesn't actually restore or use the original transcription result. When a non-Ollama error occurs, the transcription might be lost.

 case let .aiEnhancementError(error):
   // Check if this is an Ollama connectivity error
   let nsError = error as NSError
   if nsError.domain == "AIEnhancementClient" &&
      (nsError.code == -1001 || nsError.localizedDescription.contains("Ollama")) {
     print("AI Enhancement error due to Ollama connectivity: \(error)")
     return .send(.ollamaBecameUnavailable)
   } else {
     // For other errors, just use the original transcription
     print("AI Enhancement error: \(error)")
-    return .none
+    // Pass the original transcription result through
+    return .send(.transcriptionResult(result))
   }
433-475: ⚠️ Potential issue: Incorrect parameter passing in AI enhancement call.

The `enhance` method is called without parameter labels, but `AIEnhancementClient.enhance` requires external labels (`text:`, `model:`, `options:`).

-  let enhancedText = try await aiEnhancement.enhance(result, model, options) { progress in
+  let enhancedText = try await aiEnhancement.enhance(
+    text: result,
+    model: model,
+    options: options
+  ) { progress in
enhancemethod is called without parameter labels, butAIEnhancementClient.enhancerequires external labels (text:,model:,options:).- let enhancedText = try await aiEnhancement.enhance(result, model, options) { progress in + let enhancedText = try await aiEnhancement.enhance( + text: result, + model: model, + options: options + ) { progress in🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 461-461: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
🧹 Nitpick comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
461-461: Unused parameter in closure. The `progress` parameter in the closure is unused. Replace it with `_` to avoid the SwiftLint warning.

-  let enhancedText = try await aiEnhancement.enhance(result, model, options) { progress in
+  let enhancedText = try await aiEnhancement.enhance(result, model, options) { _ in

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 461-461: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- Hex/Clients/KeyEventMonitorClient.swift (3 hunks)
- Hex/Clients/PasteboardClient.swift (2 hunks)
- Hex/Clients/RecordingClient.swift (2 hunks)
- Hex/Features/Transcription/TranscriptionFeature.swift (10 hunks)
- Hex/Features/Transcription/TranscriptionIndicatorView.swift (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- Hex/Features/Transcription/TranscriptionIndicatorView.swift
🧰 Additional context used
🧬 Code Graph Analysis (1)
Hex/Features/Transcription/TranscriptionFeature.swift (3)
Hex/Clients/AIEnhancementClient.swift (2)
- isOllamaAvailable (136-159)
- enhance (96-133)

Hex/Clients/RecordingClient.swift (1)
- observeAudioLevel (621-623)

Hex/Clients/TranscriptionClient.swift (1)
- transcribe (206-247)
🪛 SwiftLint (0.57.0)
Hex/Features/Transcription/TranscriptionFeature.swift
[Warning] 155-155: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
[Warning] 185-185: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
[Warning] 461-461: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
🔇 Additional comments (19)
Hex/Clients/KeyEventMonitorClient.swift (2)
11-36: Well-designed thread safety implementation for Sauce library. This new `SafeSauce` enum provides an excellent thread-safe wrapper around Sauce library calls, effectively preventing `_dispatch_assert_queue_fail` errors. The implementation correctly handles the case when code is already on the main thread versus when it needs to dispatch.

47-47: Good use of the thread-safe wrapper. The `KeyEvent` initializer now correctly uses the thread-safe wrapper to prevent potential crashes from background thread access.

Hex/Clients/RecordingClient.swift (2)
384-384: Appropriate use of `let` for immutable pointer allocation. Changed from `var` to `let`, which is correct since this pointer doesn't need to be mutable after allocation.

562-612: Excellent adaptive throttling for audio metering. The implementation adds sophisticated adaptive throttling that:
- Adjusts sampling intervals based on activity level (80-150ms)
- Only emits updates when there are significant changes
- Has smart fallbacks to ensure UI responsiveness
This will reduce CPU usage and improve battery life while maintaining responsive UI feedback.
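The throttling described above can be sketched as a small actor; the 80-150ms interval bounds come from the review, while the activity threshold and change threshold are assumptions (this is not the PR's actual implementation):

```swift
import Foundation

// Hypothetical adaptive throttle: samples faster while the meter is active,
// slower when quiet, and suppresses insignificant changes.
actor AdaptiveMeterThrottle {
    private var lastEmit = Date.distantPast
    private var lastValue: Double = 0

    func shouldEmit(_ value: Double, now: Date = Date()) -> Bool {
        // Louder input -> faster updates (80ms); quiet input -> slower (150ms)
        let interval: TimeInterval = value > 0.1 ? 0.080 : 0.150
        let significant = abs(value - lastValue) > 0.02

        guard now.timeIntervalSince(lastEmit) >= interval, significant else {
            return false
        }
        lastEmit = now
        lastValue = value
        return true
    }
}
```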
Hex/Clients/PasteboardClient.swift (6)
68-92: Efficient pasteboard backup implementation. This new approach using NSPasteboard's native capabilities is more efficient than manually copying data items. The use of a unique temporary pasteboard is elegant and avoids potential data loss issues.

95-109: Clean pasteboard restoration with proper cleanup. The restoration process correctly handles the pasteboard state and cleans up the temporary pasteboard to avoid memory leaks.

112-115: Good practice keeping the legacy method with a warning. Maintaining backward compatibility while clearly marking this method as deprecated will help with future code maintenance.

164-166: Smart conditional pasteboard backup. Only saving the pasteboard state when clipboard retention is disabled is a good optimization that avoids unnecessary work.

180-211: Thread-safe key code retrieval and efficient event posting. The thread safety check and main thread dispatching for key code retrieval aligns with the improvements in `KeyEventMonitorClient`. Using autoreleasepool for the event posting sequence ensures proper resource management.

222-229: Appropriate delay and cleanup handling. Adding a delay before restoration gives the paste operation time to complete, and wrapping the restoration in an autoreleasepool helps with memory management.
Hex/Features/Transcription/TranscriptionFeature.swift (9)
22-22: Good addition of enhancing state tracking. Adding the `isEnhancing` state variable allows proper UI feedback during the AI enhancement process.

50-56: Well-defined actions for AI enhancement flow. The new actions clearly separate different aspects of the AI enhancement process, promoting a clean state management approach.

70-70: Added AI enhancement dependency. Correctly added the dependency to access AI enhancement functionality.

89-97: Optimized audio level update logic. The conditional update based on significant changes is a good optimization that reduces unnecessary state updates and UI refreshes.

183-214: Excellent audio meter update optimization. The rate limiting and significance threshold for audio meter updates will reduce UI updates and improve performance, especially during quieter periods.
🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 185-185: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
375-375: Good integration of HexSettings with transcription client. Passing the settings to the transcription client enables features like disabling auto-capitalization.

394-429: Well-structured conditional AI enhancement. The transcription result handler now correctly branches based on user settings, either proceeding to AI enhancement or finalizing the transcription directly. The code properly extracts necessary settings values.

477-500: Complete AI enhancement result handling. The implementation correctly updates all relevant state variables and proceeds to finalize the recording with the enhanced transcript.

622-623: Good UI status prioritization. Prioritizing the `.enhancing` state in the UI status logic ensures users get appropriate feedback during the AI enhancement process.
Actionable comments posted: 0
♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
466-509: 🛠️ Refactor suggestion: Fix argument labels in AIEnhancement.enhance call.
The code bypasses Swift's argument labels by directly accessing the enhance method. This is error-prone and less readable.
-  // Access the raw value directly to avoid argument label issues
-  let enhanceMethod = aiEnhancement.enhance
-  let enhancedText = try await enhanceMethod(result, model, options) { progress in
+  // Use proper argument labels for better readability and type safety
+  let enhancedText = try await aiEnhancement.enhance(
+    text: result,
+    model: model,
+    options: options
+  ) { progress in

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 495-495: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
🧹 Nitpick comments (2)
Hex/Features/Transcription/TranscriptionFeature.swift (2)
234-242: Enhance meter update with for-where pattern. Consider using a Swift for-where pattern to simplify the conditional meter updates.

-  for await meter in await recording.observeAudioLevel() {
-    // Check if we should send this update
-    if await rateLimiter.shouldUpdate(meter: meter) {
-      // The Effect.run captures its function as @Sendable, so we're already on an appropriate context
-      // for sending actions. ComposableArchitecture handles dispatching to the main thread as needed.
-      await send(.audioLevelUpdated(meter))
-    }
+  for await meter in await recording.observeAudioLevel() where await rateLimiter.shouldUpdate(meter: meter) {
+    // The Effect.run captures its function as @Sendable, so we're already on an appropriate context
+    // for sending actions. ComposableArchitecture handles dispatching to the main thread as needed.
+    await send(.audioLevelUpdated(meter))

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 239-239:
where clauses are preferred over a single if inside a for
(for_where)
495-495: Replace unused closure parameter with underscore. The progress parameter in the closure is unused and should be replaced with an underscore.

-  let enhancedText = try await enhanceMethod(result, model, options) { progress in
+  let enhancedText = try await enhanceMethod(result, model, options) { _ in

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 495-495: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved (2 hunks)
- Hex/Features/Transcription/TranscriptionFeature.swift (10 hunks)
✅ Files skipped from review due to trivial changes (1)
- Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved
🧰 Additional context used
🪛 SwiftLint (0.57.0)
Hex/Features/Transcription/TranscriptionFeature.swift
[Warning] 239-239: where clauses are preferred over a single if inside a for
(for_where)
[Warning] 201-201: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
[Warning] 495-495: Unused parameter in a closure should be replaced with _
(unused_closure_parameter)
[Warning] 618-618: TODOs should be resolved (Consider implementing a safer ...)
(todo)
🔇 Additional comments (16)
Hex/Features/Transcription/TranscriptionFeature.swift (16)
22-23: State management for tracking AI enhancement progress looks good. The new `isEnhancing` state properly tracks when AI enhancement is active, providing a clear separation between transcription and enhancement states.

27-27: Well-designed fallback mechanism for transcription. Using `pendingTranscription` as a fallback storage is an excellent approach to ensure the original transcription is preserved in case of AI enhancement failures.

51-56: Comprehensive action enumeration for AI enhancement flow. The action cases cover all necessary states for the AI enhancement flow including state changes, results, error handling, and Ollama availability checks.

63-63: LGTM: New cancellation ID added for AI enhancement. Properly defines a cancellation ID for the AI enhancement operation.

71-71: LGTM: Dependency injection for AI enhancement client. Correctly uses dependency injection for the AI enhancement client following TCA patterns.

90-98: Optimized meter updates improve UI performance. The optimization to only update the meter when there's a significant change or during active recording reduces unnecessary UI updates.

131-133: Clean state management for AI enhancement status. Simple and direct state management for the enhancing state.

135-136: Proper handling of AI enhancement results. Correctly delegates to a helper method to handle successful AI enhancement results.

161-178: Properly implemented Ollama availability rechecking. Correctly handles the case when Ollama becomes unavailable, showing an error to the user.

198-232: Well-implemented actor for meter rate limiting. The `MeterRateLimiter` actor is a good implementation for Swift concurrency. It safely manages state and implements an adaptive threshold for meter updates.

🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 201-201: Initializing an optional variable with nil is redundant
(redundant_optional_initialization)
404-404: LGTM: Updated transcription call with settings parameter. Properly passes the settings to respect configuration options like `disableAutoCapitalization`.

424-460: Well-designed enhancement decision flow. Good conditional logic to determine whether AI enhancement should be applied based on user settings.

512-535: LGTM: Proper AI enhancement result handling. The handler correctly updates all relevant state properties and proceeds with storing the transcript.

609-620: Addressed cancellation concerns with detailed comment. The code intentionally avoids canceling AI enhancement operations to prevent Ollama API issues. The comment clearly explains the trade-off and plans for future improvements.
🧰 Tools
🪛 SwiftLint (0.57.0)
[Warning] 618-618: TODOs should be resolved (Consider implementing a safer ...)
(todo)
662-673: LGTM: Updated status logic with enhancement state. The status logic correctly prioritizes showing the enhancing state when active.
138-159: Details
✅ Verification successful
Consider improving error recovery flow to avoid false positives.
The error handling correctly identifies Ollama connectivity issues, but there's a potential edge case where `pendingTranscription` might be nil, which would result in empty text being processed.
🏁 Script executed:
#!/bin/bash
# Check if there are cases where pendingTranscription could be nil when error handling is triggered
# Look for places where pendingTranscription is set to nil
rg "pendingTranscription = nil" --type swift
# Look for places where aiEnhancementError action is sent without setting pendingTranscription
rg "send\(\.aiEnhancementError" --type swift -A 10 -B 10

Length of output: 2321
To confirm where and how `pendingTranscription` is set, let's list all usages in the reducer around that property:
🏁 Script executed:
#!/bin/bash
rg "pendingTranscription" -n Hex/Features/Transcription/TranscriptionFeature.swift -A 5 -B 5

Length of output: 2204
Error recovery flow is safe: no `pendingTranscription` nil scenario.

- `state.pendingTranscription` is set at line 429 before the AI enhancement call.
- It's only cleared on success at line 519, so in the `.aiEnhancementError` case it will always be non-nil.
- The `?? ""` fallback further guards against any unexpected nil, preventing crashes.

No further changes needed.
Let me know when you're ready for a final review :D This looks like it's gonna be great!

Super pumped for this @plyght, this will unlock a lot 🙌

@lkeude96 appreciate it! sorry it's taking a while to finish... there's some performance issues :/ @kitlangton actually would be great if you could take a look and help me diagnose why the spinner wheel is super laggy!

I understand, I might be asking for more, is there an option to add another bigger model that supports my 48 gig MacBook Pro M4 other than Ollama

@VipinReddy all https://ollama.com models will be supported for the text enhancement.

Super excited about this!

@kitlangton would love your help on this to fix the performance of the bubble when transcribing, etc. I've seen that you'd like to reduce redundancy and too many options in settings. please let me know what you'd like me to fix, change, etc here, before merging. I will fix the merge conflicts!

buzzin for this!

Would love to see this soon added ;)
dd4356f to
c1e89f2
Pull request overview
Adds an “AI Enhancement” feature to post-process transcriptions using a local Ollama model, plus several performance/UX tweaks around transcription metering and indicator states. The PR also includes pasteboard reliability changes and substantial Xcode project / SwiftPM resolution updates.
Changes:
- Introduces `AIEnhancementClient` + Settings UI/Reducer to configure Ollama connectivity, model selection, prompt, and temperature.
- Adds an "enhancing" state to the transcription flow and indicator UI, and throttles metering updates.
- Updates pasteboard handling and modifies build/dependency configuration files.
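For reference, the Ollama text-generation call such a client wraps is a POST to `/api/generate`; a minimal hedged sketch (the request shape follows Ollama's documented API, but the function name, prompt wording, and clamping range are assumptions, not the PR's code):

```swift
import Foundation

// Hypothetical minimal Ollama generate call. The PR's AIEnhancementClient
// is more elaborate (availability checks, progress reporting, prompt template).
struct OllamaGenerateRequest: Encodable {
    let model: String
    let prompt: String
    let stream: Bool
    let options: [String: Double]
}

func enhanceText(_ text: String, model: String, temperature: Double) async throws -> String {
    // Clamp temperature to a sane range, mirroring the "clamp parameters" step
    let clamped = min(max(temperature, 0.0), 1.0)
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
    request.httpMethod = "POST"
    request.httpBody = try JSONEncoder().encode(
        OllamaGenerateRequest(model: model,
                              prompt: "Improve this transcription:\n\(text)",
                              stream: false,
                              options: ["temperature": clamped])
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    // Non-streaming responses carry the full output in a "response" field
    struct Reply: Decodable { let response: String }
    return try JSONDecoder().decode(Reply.self, from: data).response
}
```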
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 11 comments.
Show a summary per file
| File | Description |
|---|---|
| Localizable.xcstrings | Adds localization keys for AI enhancement UI. |
| HexCore/Sources/HexCore/Settings/HexSettings.swift | Adds AI enhancement settings + default prompt. |
| HexCore/Sources/HexCore/Logging.swift | Adds aiEnhancement logging category. |
| Hex/Features/Transcription/TranscriptionIndicatorView.swift | Adds “enhancing” indicator state and refactors view for performance. |
| Hex/Features/Transcription/TranscriptionFeature.swift | Adds AI enhancement flow and metering throttling; integrates enhancing state. |
| Hex/Features/Settings/SettingsFeature.swift | Scopes new AIEnhancementFeature into settings. |
| Hex/Features/Settings/AIEnhancementView.swift | New settings form for Ollama connection + prompt/model/temperature. |
| Hex/Features/Settings/AIEnhancementFeature.swift | New reducer to check Ollama availability and load models. |
| Hex/Features/App/AppFeature.swift | Adds a new app tab for AI Enhancement. |
| Hex/Clients/TranscriptionClient.swift | Extends transcribe API to accept settings; attempts to support disabling auto-capitalization. |
| Hex/Clients/RecordingClient.swift | Meter sampling throttling + minor pointer immutability fix. |
| Hex/Clients/PasteboardClient.swift | Minor pasteboard behavior/logging tweaks. |
| Hex/Clients/AIEnhancementClient.swift | New Ollama-backed enhancement client. |
| Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved | Updates (and removes) SwiftPM pins. |
| Hex.xcodeproj/project.pbxproj | Large project configuration + dependency reference changes, plus app naming/version/build-setting churn. |
Comments suppressed due to low confidence (1)
Hex/Features/Transcription/TranscriptionFeature.swift:472
`transcription.transcribe` was updated to accept an additional `HexSettings?` argument, but this call site still uses the old 4-parameter signature. This will fail to compile and also leaves the locally captured `settings` value unused. Pass `settings` (or `state.hexSettings`) into `transcribe` to match the new API.
// Extract all required state values to local variables to avoid capturing inout parameter
let model = state.hexSettings.selectedModel
let language = state.hexSettings.outputLanguage
let settings = state.hexSettings
// recordingStartTime captured in handleTranscriptionResult
state.isPrewarming = true
return .run { [sleepManagement] send in
// Allow system to sleep again
await sleepManagement.allowSleep()
var audioURL: URL?
do {
let capturedURL = await recording.stopRecording()
guard !Task.isCancelled else { return }
soundEffect.play(.stopRecording)
audioURL = capturedURL
// Create transcription options with the selected language
// Note: cap concurrency to avoid audio I/O overloads on some Macs
let decodeOptions = DecodingOptions(
language: language,
detectLanguage: language == nil, // Only auto-detect if no language specified
chunkingStrategy: .vad,
)
let result = try await transcription.transcribe(capturedURL, model, decodeOptions) { _ in }
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
state.isTranscribing = false
state.isPrewarming = false
state.isEnhancing = false // Reset the enhancing state
state.pendingTranscription = nil // Clear the pending transcription since enhancement succeeded
pendingTranscription is referenced here, but TranscriptionFeature.State does not define a pendingTranscription property anywhere in the module. This will not compile unless the state field is added (or the line is removed if it’s leftover from a previous approach).
case let .aiEnhancementError(error):
  if error is AIEnhancementError {
    transcriptionFeatureLogger.notice("AI enhancement error (Ollama): \(error.localizedDescription)")
    return .send(.ollamaBecameUnavailable)
  } else {
    transcriptionFeatureLogger.error("AI enhancement error: \(error.localizedDescription)")
    return .none
  }
On .aiEnhancementError, state.isEnhancing is never reset to false. If enhancement fails, the UI can remain stuck in the “enhancing” indicator state until the next successful run/cancel. Reset the relevant state flags (e.g., isEnhancing, possibly isTranscribing/isPrewarming) when handling enhancement errors.
case .recheckOllamaAvailability:
  return .run { send in
    let isAvailable = await aiEnhancement.isOllamaAvailable()
    if !isAvailable {
      transcriptionFeatureLogger.notice("Ollama is not available. AI enhancement is disabled.")
    }
  }
This branch logs that “AI enhancement is disabled” when Ollama is unavailable, but it doesn’t actually update any state (e.g., hexSettings.useAIEnhancement = false) or notify the UI. Either update settings/state to reflect the disabled behavior, or adjust the log messaging and add a user-visible error path.
```swift
return .merge(
  .cancel(id: CancelID.transcription),
  .cancel(id: CancelID.aiEnhancement),
  .run { [sleepManagement] _ in
```
CancelID.aiEnhancement is cancelled here, but the AI enhancement effect is never marked .cancellable(id: CancelID.aiEnhancement, ...) (it’s explicitly commented as not cancellable). As written, this cancellation does nothing and is misleading—either make the enhancement effect cancellable or remove the unused cancel ID/cancel call.
```swift
var text = results.map(\.text).joined(separator: " ")

// Use provided settings or default to auto-capitalization
let useAutoCapitalization = settings == nil ? true : !settings!.disableAutoCapitalization

// Convert to lowercase if auto-capitalization is disabled
if !useAutoCapitalization {
  text = text.lowercased()
```
HexSettings does not define disableAutoCapitalization, but it’s accessed here (settings!.disableAutoCapitalization). This is a compile error. Either add disableAutoCapitalization to HexSettings (including coding keys/schema) or remove/replace this logic with an existing setting.
```swift
// Memoize these calculations to prevent recalculating on every render
private func recordingOpacity(for power: Double, threshold: Double = 0.1) -> Double {
  guard status == .recording else { return 0 }
  return power < threshold ? power / threshold : 1
}

// Cache shadow colors based on status and power
@ViewBuilder
private func shadowEffect(averagePower: Double) -> some View {
  switch status {
  case .recording:
    EmptyView()
      .shadow(color: .red.opacity(averagePower), radius: 4)
      .shadow(color: .red.opacity(averagePower * 0.5), radius: 8)
  case .enhancing:
    EmptyView()
      .shadow(color: enhanceBaseColor.opacity(0.7), radius: 4)
      .shadow(color: enhanceBaseColor.opacity(0.4), radius: 8)
  case .transcribing, .prewarming:
    EmptyView()
      .shadow(color: transcribeBaseColor.opacity(0.7), radius: 4)
      .shadow(color: transcribeBaseColor.opacity(0.4), radius: 8)
  default:
    EmptyView()
      .shadow(color: .red.opacity(0), radius: 4)
      .shadow(color: .red.opacity(0), radius: 8)
  }
}
```
The helper methods recordingOpacity(for:) and shadowEffect(averagePower:) are no longer used anywhere in this view after the refactor. Leaving them in place adds dead code and makes the optimization intent harder to follow; either wire them into the new CapsuleWithEffects implementation or remove them.
```
			);
			name = Hex;
			packageProductDependencies = (
				47E05E012D444EE900D26DA6 /* ComposableArchitecture */,
				47E05E042D444EF800D26DA6 /* Sauce */,
				47E05E092D44525B00D26DA6 /* Dependencies */,
				47E05E0B2D44525B00D26DA6 /* DependenciesMacros */,
				47E05E262D44555500D26DA6 /* WhisperKit */,
				4765045D2D45900200C7EA60 /* Pow */,
				476BAD3D2D47E7880088C61F /* Sparkle */,
				B5045C962D78DED500D0A119 /* MarkdownUI */,
				47C08AB52DE9F61B00564AE6 /* Inject */,
				47512ABE2E14D8C9000E25BA /* WhisperKit */,
				476316262E5FB31400913CDE /* HexCore */,
				47E16A612EC6C9D300885CF7 /* FluidAudio */,
			);
```
This project file update removes key dependencies (e.g., HexCore and Inject) from packageReferences / packageProductDependencies, but the app still imports those modules in many source files. As-is, the project will not build until those packages are re-added (or the imports/usages are removed/replaced).
```diff
@@ -480,19 +446,10 @@
 				"@executable_path/../Frameworks",
 			);
 			MACOSX_DEPLOYMENT_TARGET = 15.0;
-			MARKETING_VERSION = 0.7.3;
-			OTHER_LDFLAGS = (
-				"-Xlinker",
-				"-interposable",
-			);
-			PRODUCT_BUNDLE_IDENTIFIER = com.kitlangton.Hex.debug;
-			PRODUCT_NAME = "Hex Debug";
-			RUNTIME_EXCEPTION_ALLOW_DYLD_ENVIRONMENT_VARIABLES = NO;
-			RUNTIME_EXCEPTION_ALLOW_JIT = NO;
-			RUNTIME_EXCEPTION_ALLOW_UNSIGNED_EXECUTABLE_MEMORY = NO;
-			RUNTIME_EXCEPTION_DEBUGGING_TOOL = NO;
-			RUNTIME_EXCEPTION_DISABLE_EXECUTABLE_PAGE_PROTECTION = NO;
-			RUNTIME_EXCEPTION_DISABLE_LIBRARY_VALIDATION = YES;
+			MARKETING_VERSION = 0.2.1;
+			OTHER_LDFLAGS = "";
+			PRODUCT_BUNDLE_IDENTIFIER = com.kitlangton.Hex;
+			PRODUCT_NAME = "$(TARGET_NAME)";
 			SWIFT_EMIT_LOC_STRINGS = YES;
```
This change set significantly rewrites build settings (e.g., MARKETING_VERSION, CURRENT_PROJECT_VERSION, bundle identifier/product name settings, linker flags) beyond the PR description’s stated goal of simplifying code signing. Please confirm these version/build-setting changes are intentional; otherwise, revert the unrelated configuration churn to avoid accidental release/version regressions.
```diff
@@ -42,7 +24,8 @@
       "kind" : "remoteSourceControl",
       "location" : "https://github.com/EmergeTools/Pow",
       "state" : {
-        "revision" : "1b4b1dda28c50b95f0872927ee2226fe8b58950e"
+        "revision" : "a504eb6d144bcf49f4f33029a2795345cb39e6b4",
+        "version" : "1.0.5"
       }
     },
     {
@@ -51,34 +34,34 @@
       "location" : "https://github.com/Clipy/Sauce",
       "state" : {
         "branch" : "master",
-        "revision" : "9c0de6c233f29d892e86dda68c2dd791aa10670c"
+        "revision" : "9ed4ca442cdd4be20449479b4e8f157ea96e7542"
       }
     },
     {
```
Package.resolved drops previously used dependencies like Inject and FluidAudio, and also changes a large set of versions. Since the codebase still imports Inject (and conditionally FluidAudio), this resolution change is likely to break builds unless the project/dependencies were intentionally migrated. Consider restoring the removed pins or documenting the dependency migration and updating imports accordingly.
```
  },
  "%.2f" : {

  },
```
The string catalog now includes a "%.2f" entry, which looks like a numeric format specifier rather than user-facing text. If this was produced by string extraction from specifier: "%.2f", consider preventing it from being localized (or formatting the value without introducing a localizable key) to avoid cluttering translations with non-UI strings.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
Hex/Clients/TranscriptionClient.swift (1)
225-246: ⚠️ Potential issue | 🟠 Major

Apply the capitalization setting on the Parakeet path too.

The new post-processing only runs after WhisperKit. If the selected model is Parakeet, `disableAutoCapitalization` is ignored and users get different output depending on backend.

💡 Proposed fix

```diff
 func transcribe(
   url: URL,
   model: String,
   options: DecodingOptions,
   settings: HexSettings? = nil,
   progressCallback: @escaping (Progress) -> Void
 ) async throws -> String {
 @@
   if isParakeet(model) {
 @@
     let startTx = Date()
     let text = try await parakeet.transcribe(preparedClip.url)
     transcriptionLogger.info("Parakeet transcription took \(String(format: "%.2f", Date().timeIntervalSince(startTx)))s")
     transcriptionLogger.info("Parakeet request total elapsed \(String(format: "%.2f", Date().timeIntervalSince(startAll)))s")
-    return text
+    return normalizeTranscription(text, settings: settings)
   }
 @@
-  var text = results.map(\.text).joined(separator: " ")
-
-  // Use provided settings or default to auto-capitalization
-  let useAutoCapitalization = settings == nil ? true : !settings!.disableAutoCapitalization
-
-  // Convert to lowercase if auto-capitalization is disabled
-  if !useAutoCapitalization {
-    text = text.lowercased()
-  }
-
-  return text
+  let text = results.map(\.text).joined(separator: " ")
+  return normalizeTranscription(text, settings: settings)
 }
+
+private func normalizeTranscription(_ text: String, settings: HexSettings?) -> String {
+  guard settings?.disableAutoCapitalization == true else { return text }
+  return text.lowercased()
+}
```

Also applies to: 279-289
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
@Hex/Clients/TranscriptionClient.swift around lines 225-246, the Parakeet branch
in transcribe(url:model:options:settings:progressCallback:) ignores
DecodingOptions.disableAutoCapitalization, so after getting text from
parakeet.transcribe(...) apply the same post-processing step used by the
WhisperKit path that respects options.disableAutoCapitalization (i.e., run the
capitalization/auto-capitalization transform conditioned on
options.disableAutoCapitalization), and mirror this fix in the other Parakeet
handling block around the code referenced (the second Parakeet path at lines
~279-289) so both Parakeet flows produce the same post-processed output as
WhisperKit.
```

Hex.xcodeproj/project.pbxproj (1)
564-570: ⚠️ Potential issue | 🟠 Major

Remove branch-based tracking from WhisperKit in project.pbxproj.

The project file specifies `branch = main` for WhisperKit at line 568, but the checked-in `Package.resolved` pins version `0.12.0` with a specific revision. This mismatch creates reproducibility issues: future dependency resolves could pull a different commit from main than what was reviewed in this PR. Change the requirement to pin an exact version or revision to match the resolved state.

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
@Hex.xcodeproj/project.pbxproj around lines 564-570, the
XCRemoteSwiftPackageReference for "WhisperKit" (the block with isa =
XCRemoteSwiftPackageReference and repositoryURL
"https://github.com/argmaxinc/WhisperKit") currently uses branch = main; replace
the branch-based requirement with a pinned requirement matching Package.resolved
(either set requirement to an exactVersion = "0.12.0" or to the specific
revision hash from Package.resolved) so the project.pbxproj references the exact
version/revision instead of tracking main.
```

Hex/Features/Transcription/TranscriptionFeature.swift (1)
586-586: ⚠️ Potential issue | 🟠 Major

Avoid logging raw transcript content.

This logs full transcribed text directly. Keep content private (or log only length/metadata) to avoid PII leakage.

As per coding guidelines "Use the unified logging helper `HexLog` for all diagnostics ... use privacy annotations (`, privacy: .private`) for sensitive data like transcript text or file paths."

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
@Hex/Features/Transcription/TranscriptionFeature.swift at line 586, replace the
direct call to transcriptionFeatureLogger.info("Raw transcription: '\(result)'")
with the unified HexLog helper and avoid logging raw transcript content; instead
log non-sensitive metadata such as result.count or a masked snippet and include
the transcript as a private field using privacy: .private if you must log it.
Locate the logging in TranscriptionFeature.swift where
transcriptionFeatureLogger.info is used (the variable/result named result) and
change it to use HexLog (the project-wide logging helper) with privacy
annotations for the transcript and public metadata only.
```
♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
543-557: ⚠️ Potential issue | 🔴 Critical

Cancellation is reintroduced as non-functional for AI enhancement.

The reducer cancels `CancelID.aiEnhancement` (Line 705), but the enhancement effect is not cancellable. A canceled session can still deliver late `.aiEnhancementResult` and paste text after cancel.

🔒 Suggested fix

```diff
 return .merge(
   .send(.setEnhancingState(true)),
   .run { send in
     do {
       let enhancedText = try await aiEnhancement.enhance(result, model, options) { _ in }
       await send(.aiEnhancementResult(enhancedText, audioURL))
     } catch {
       transcriptionFeatureLogger.error("AI enhancement failed: \(error.localizedDescription)")
       await send(.aiEnhancementError(error))
     }
-  }
+  }
+  .cancellable(id: CancelID.aiEnhancement, cancelInFlight: true)
 )
```

Also applies to: 705-705
🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed. In
@Hex/Features/Transcription/TranscriptionFeature.swift around lines 543-557, the
AI enhancement effect is not cancellable, so CancelID.aiEnhancement in the
reducer can't stop late .aiEnhancementResult deliveries; make the .run effect
that calls aiEnhancement.enhance cancellable by attaching the cancellation
identifier (CancelID.aiEnhancement) to that effect (the .run that sends
.aiEnhancementResult / .aiEnhancementError), or use the cancellable variant of
.run that accepts a Task handle and checks Task.isCancelled before sending
results from aiEnhancement.enhance; ensure the cancellation id referenced is
CancelID.aiEnhancement so late results are suppressed after cancellation.
```
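Outside the TCA specifics, the invariant this fix enforces is general: once the in-flight enhancement task is cancelled, its late result must never be delivered. A minimal asyncio sketch (Python; the `enhance` stand-in and timings are illustrative, not the app's code):

```python
import asyncio

async def enhance(text):
    await asyncio.sleep(0.05)  # stand-in for the Ollama round trip
    return text.upper()

async def main():
    delivered = []

    async def run_enhancement():
        result = await enhance("hello")
        delivered.append(result)  # a "late" delivery if cancellation were ineffective

    task = asyncio.create_task(run_enhancement())
    await asyncio.sleep(0)  # let the task start and suspend at its await
    task.cancel()           # analogous to .cancel(id: CancelID.aiEnhancement)
    try:
        await task
    except asyncio.CancelledError:
        pass
    return delivered

print(asyncio.run(main()))  # []
```

Without attaching the cancellation id to the effect, the Swift equivalent behaves as if `task.cancel()` were never called: the append still runs and the stale text gets pasted.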
🧹 Nitpick comments (1)
Hex/Features/Settings/AIEnhancementFeature.swift (1)
89-90: Use the same default prompt constant that `HexSettings` persists.

`resetToDefaultPrompt` resets from `EnhancementOptions.defaultPrompt`, but new settings initialize from `HexSettings.defaultAIEnhancementPrompt`. Two sources of truth here can drift and make "Reset" inconsistent with a fresh install/default decode.

♻️ Proposed fix

```diff
 case .resetToDefaultPrompt:
-  state.$hexSettings.withLock { $0.aiEnhancementPrompt = EnhancementOptions.defaultPrompt }
+  state.$hexSettings.withLock { $0.aiEnhancementPrompt = HexSettings.defaultAIEnhancementPrompt }
   return .none
```

🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed. In
@Hex/Features/Settings/AIEnhancementFeature.swift around lines 89-90, the
resetToDefaultPrompt branch uses EnhancementOptions.defaultPrompt causing a
mismatch with the persisted default; change it to use the same constant
HexSettings.defaultAIEnhancementPrompt so reset and fresh defaults match —
update the case .resetToDefaultPrompt inside state.$hexSettings.withLock where
aiEnhancementPrompt is set to assign HexSettings.defaultAIEnhancementPrompt
instead of EnhancementOptions.defaultPrompt (or remove the redundant
EnhancementOptions constant if unused elsewhere).
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Hex/Clients/AIEnhancementClient.swift`:
- Around line 182-189: The request payload builds requestDict with top-level
temperature and max_tokens which Ollama ignores; update requestDict so
generation controls are nested under an "options" dictionary and rename
max_tokens to num_predict: include "options": ["temperature": temperature,
"num_predict": maxTokens] (or equivalent types) while keeping "model" and
"prompt"/"system" at the top level and preserving "stream": false; modify the
construction that creates requestDict (references: requestDict, model,
fullPrompt, temperature, maxTokens) to place those keys under "options" so
Ollama receives the correct generation parameters.
In `@Hex/Clients/TranscriptionClient.swift`:
- Around line 281-286: The code references a non-existent property
disableAutoCapitalization on HexSettings causing a compile error; add a Bool
property named disableAutoCapitalization (with a sensible default, e.g. false)
to the HexSettings struct in HexCore, and update TranscriptionClient to safely
read it via optional chaining (use settings?.disableAutoCapitalization) when
computing useAutoCapitalization in the method using the useAutoCapitalization
variable so the compilation and behavior are correct.
In `@Hex/Features/Settings/AIEnhancementFeature.swift`:
- Around line 38-52: The code currently triggers Ollama checks and model loading
even when state.useAIEnhancement is false; update the logic so .task does not
send .checkOllamaAvailability unless state.useAIEnhancement is true, and in the
.ollamaAvailabilityResult handler only return .send(.loadAvailableModels) when
isAvailable && state.useAIEnhancement; apply the same guard where similar logic
appears around modelsLoaded (the other block at lines ~69-76) so models are
never fetched or modelsLoaded mutated when useAIEnhancement is disabled.
- Around line 6-15: The file references HexSettings inside the
AIEnhancementFeature.State but doesn't import its defining module; add an import
for HexCore at the top of the file so HexSettings is resolvable. Specifically,
update the imports above the `@Reducer` declaration (where AIEnhancementFeature
and State are defined) to include HexCore so the compiler can find the public
HexSettings type.
In `@Hex/Features/Transcription/TranscriptionFeature.swift`:
- Around line 152-159: The aiEnhancementError handler currently only logs and
conditionally sends .ollamaBecameUnavailable, leaving isEnhancing/isTranscribing
and fallback output handling untouched; update the .aiEnhancementError(error)
branch to always reset enhancement/transcription state and emit any necessary
fallback output before returning: when error is AIEnhancementError keep the
transcriptionFeatureLogger.notice and send .ollamaBecameUnavailable but also
clear isEnhancing/isTranscribing (or dispatch the existing action that resets
those flags) and dispatch the existing fallback/output action so the transcript
isn't lost; when error is not AIEnhancementError log the error via
transcriptionFeatureLogger.error but likewise reset the flags and emit the same
fallback/output action (or send a specific .enhancementFailed action) instead of
simply returning .none so state cannot remain stuck.
- Around line 560-569: TranscriptionFeature.State is missing the
pendingTranscription property referenced by handleAIEnhancement; add an optional
property (e.g., var pendingTranscription: String? = nil) to
TranscriptionFeature.State so the line state.pendingTranscription = nil
compiles, and ensure any other uses of pendingTranscription in the feature match
this type and optional semantics.
- Around line 492-524: The non-AI branch short-circuits post-processing
(skipping the shared remapping/removal/normalization logic used by the AI path),
so update the branch that checks state.hexSettings.useAIEnhancement to invoke
the same shared post-processing used by the AI flow (e.g., call the centralized
handler such as handleAIEnhancement or extract the common finalization logic
into a shared method) before calling finalizeRecordingAndStoreTranscript; ensure
you preserve the same state updates (state.isTranscribing/state.isPrewarming)
and pass the same parameters (result, audioURL, duration, sourceAppBundleID,
sourceAppName, transcriptionHistory) so non-AI users receive identical
remapping/removal/normalization behavior as the enhanceWithAI path.
In `@Hex/Features/Transcription/TranscriptionIndicatorView.swift`:
- Around line 289-293: The glow effect is being applied unconditionally; update
the body(content:) in TranscriptionIndicatorView to only apply .glow when status
is a glowing state (e.g., .enhancing or .transcribing) and use the correct color
per state (use enhanceBaseColor.opacity(0.4) for .enhancing and the transcribing
blue color for .transcribing); for all other statuses
(recording/option-key/prewarming/etc.) omit the glow path (i.e., don't call
changeEffect or pass a no-op effect) so those states avoid the glow performance
path and maintain their intended styling.
---
Outside diff comments:
In `@Hex.xcodeproj/project.pbxproj`:
- Around line 564-570: The XCRemoteSwiftPackageReference for "WhisperKit" (the
block with isa = XCRemoteSwiftPackageReference and repositoryURL
"https://github.com/argmaxinc/WhisperKit") currently uses branch = main; replace
the branch-based requirement with a pinned requirement matching Package.resolved
(either set requirement to an exactVersion = "0.12.0" or to the specific
revision hash from Package.resolved) so the project.pbxproj references the exact
version/revision instead of tracking main.
In `@Hex/Clients/TranscriptionClient.swift`:
- Around line 225-246: The Parakeet branch in
transcribe(url:model:options:settings:progressCallback:) ignores
DecodingOptions.disableAutoCapitalization, so after getting text from
parakeet.transcribe(...) apply the same post-processing step used by the
WhisperKit path that respects options.disableAutoCapitalization (i.e., run the
capitalization/auto-capitalization transform conditioned on
options.disableAutoCapitalization), and mirror this fix in the other Parakeet
handling block around the code referenced (the second Parakeet path at lines
~279–289) so both Parakeet flows produce the same post-processed output as
WhisperKit.
In `@Hex/Features/Transcription/TranscriptionFeature.swift`:
- Line 586: Replace the direct call to transcriptionFeatureLogger.info("Raw
transcription: '\(result)'") with the unified HexLog helper and avoid logging
raw transcript content; instead log non-sensitive metadata such as result.count
or a masked snippet and include the transcript as a private field using privacy:
.private if you must log it. Locate the logging in TranscriptionFeature.swift
where transcriptionFeatureLogger.info is used (the variable/result named result)
and change it to use HexLog (the project-wide logging helper) with privacy
annotations for the transcript and public metadata only.
---
Duplicate comments:
In `@Hex/Features/Transcription/TranscriptionFeature.swift`:
- Around line 543-557: The AI enhancement effect is not cancellable, so
CancelID.aiEnhancement in the reducer can't stop late .aiEnhancementResult
deliveries; make the .run effect that calls aiEnhancement.enhance cancellable by
attaching the cancellation identifier (CancelID.aiEnhancement) to that effect
(the .run that sends .aiEnhancementResult / .aiEnhancementError), or use the
cancellable variant of .run that accepts a Task handle and checks
Task.isCancelled before sending results from aiEnhancement.enhance; ensure the
cancellation id referenced is CancelID.aiEnhancement so late results are
suppressed after cancellation.
---
Nitpick comments:
In `@Hex/Features/Settings/AIEnhancementFeature.swift`:
- Around line 89-90: The resetToDefaultPrompt branch uses
EnhancementOptions.defaultPrompt causing a mismatch with the persisted default;
change it to use the same constant HexSettings.defaultAIEnhancementPrompt so
reset and fresh defaults match — update the case .resetToDefaultPrompt inside
state.$hexSettings.withLock where aiEnhancementPrompt is set to assign
HexSettings.defaultAIEnhancementPrompt instead of
EnhancementOptions.defaultPrompt (or remove the redundant EnhancementOptions
constant if unused elsewhere).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5fb2ede8-08e4-4d40-959d-ea5d96ee94b6
📒 Files selected for processing (15)
- Hex.xcodeproj/project.pbxproj
- Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved
- Hex/Clients/AIEnhancementClient.swift
- Hex/Clients/PasteboardClient.swift
- Hex/Clients/RecordingClient.swift
- Hex/Clients/TranscriptionClient.swift
- Hex/Features/App/AppFeature.swift
- Hex/Features/Settings/AIEnhancementFeature.swift
- Hex/Features/Settings/AIEnhancementView.swift
- Hex/Features/Settings/SettingsFeature.swift
- Hex/Features/Transcription/TranscriptionFeature.swift
- Hex/Features/Transcription/TranscriptionIndicatorView.swift
- HexCore/Sources/HexCore/Logging.swift
- HexCore/Sources/HexCore/Settings/HexSettings.swift
- Localizable.xcstrings
✅ Files skipped from review due to trivial changes (2)
- HexCore/Sources/HexCore/Logging.swift
- Hex/Clients/PasteboardClient.swift
🚧 Files skipped from review as they are similar to previous changes (3)
- Hex/Clients/RecordingClient.swift
- Hex/Features/Settings/SettingsFeature.swift
- Localizable.xcstrings
```swift
let requestDict: [String: Any] = [
  "model": model,
  "prompt": fullPrompt,
  "temperature": temperature,
  "max_tokens": maxTokens,
  "stream": false,
  "system": "You are an AI that improves transcribed text while preserving meaning."
]
```
🧩 Analysis chain
🌐 Web query:
In the current Ollama "Generate a response" API docs, should generation controls like temperature and max output tokens be under `options`, and is the token key `num_predict` (not `max_tokens`)?
💡 Result:
Yes, in the current Ollama "Generate a response" API (/api/generate), generation controls like temperature and max output tokens are specified under the top-level "options" object in the request body. The token limit key is "num_predict" (not "max_tokens").
Citations:
- 1: https://docs.ollama.com/api/generate
- 2: https://github.com/jmorganca/ollama/blob/main/docs/api.md
- 3: https://mintlify.com/ollama/ollama/api/endpoints/generate
- 4: https://medium.com/@laurentkubaski/ollama-generate-endpoint-parameters-bdf9c2b340d1
- 5: https://medium.com/@laurentkubaski/ollama-model-options-0eee31c902d3
- 6: https://ollama.apidog.io/generate-request-with-options-14808431e0
- 7: https://www.mintlify.com/ollama/ollama/api/endpoints/generate
- 8: https://docs.ollama.com/modelfile
🏁 Script executed:
```shell
cat -n Hex/Clients/AIEnhancementClient.swift | sed -n '175,200p'
```

Repository: kitlangton/Hex
Length of output: 1173
Ollama generation options must be nested in the `options` object with correct parameter keys.

`temperature` and `max_tokens` are sent at the top level of the request. The Ollama `/api/generate` endpoint requires generation controls under an `options` object, and the token limit key is `num_predict` (not `max_tokens`). The current payload will silently ignore these settings.
Suggested fix
```diff
 let requestDict: [String: Any] = [
   "model": model,
   "prompt": fullPrompt,
-  "temperature": temperature,
-  "max_tokens": maxTokens,
+  "options": [
+    "temperature": temperature,
+    "num_predict": maxTokens
+  ],
   "stream": false,
   "system": "You are an AI that improves transcribed text while preserving meaning."
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Clients/AIEnhancementClient.swift` around lines 182 - 189, The request
payload builds requestDict with top-level temperature and max_tokens which
Ollama ignores; update requestDict so generation controls are nested under an
"options" dictionary and rename max_tokens to num_predict: include "options":
["temperature": temperature, "num_predict": maxTokens] (or equivalent types)
while keeping "model" and "prompt"/"system" at the top level and preserving
"stream": false; modify the construction that creates requestDict (references:
requestDict, model, fullPrompt, temperature, maxTokens) to place those keys
under "options" so Ollama receives the correct generation parameters.
```swift
// Use provided settings or default to auto-capitalization
let useAutoCapitalization = settings == nil ? true : !settings!.disableAutoCapitalization

// Convert to lowercase if auto-capitalization is disabled
if !useAutoCapitalization {
  text = text.lowercased()
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -e
echo "Occurrences of disableAutoCapitalization:"
rg -n --type=swift '\bdisableAutoCapitalization\b'
echo
echo "HexSettings declarations / extensions / re-exports:"
rg -n --type=swift 'struct\s+HexSettings\b|extension\s+HexSettings\b|typealias\s+HexSettings\b|@_exported\s+import\s+HexCore|import\s+HexCore'
```

Repository: kitlangton/Hex
Length of output: 2392
🏁 Script executed:
```shell
cat -n HexCore/Sources/HexCore/Settings/HexSettings.swift
```

Repository: kitlangton/Hex
Length of output: 15107
`disableAutoCapitalization` does not exist in `HexSettings`—the code will not compile.

The `HexSettings` struct in HexCore/Sources/HexCore/Settings/HexSettings.swift has no `disableAutoCapitalization` property. Line 282 in Hex/Clients/TranscriptionClient.swift attempting to access `settings!.disableAutoCapitalization` will fail compilation.
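The safe-read pattern the fix prompt recommends (optional access with a default instead of force unwrap) behaves like this sketch (Python for illustration; `disableAutoCapitalization` is the proposed field, which does not exist in `HexSettings` yet):

```python
def normalize_transcription(text, settings=None):
    # Default to auto-capitalization when settings are absent, mirroring
    # Swift's `settings?.disableAutoCapitalization ?? false` instead of a
    # force unwrap that assumes settings is non-nil.
    disable_auto_cap = (settings or {}).get("disableAutoCapitalization", False)
    return text.lower() if disable_auto_cap else text

print(normalize_transcription("Hello World", None))                                 # Hello World
print(normalize_transcription("Hello World", {"disableAutoCapitalization": True}))  # hello world
```

The same default-to-false behavior should hold whether settings are missing entirely or simply omit the key, so both transcription backends produce identical output for a fresh install.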
🧰 Tools
🪛 SwiftLint (0.63.2)
[Warning] 282-282: Force unwrapping should be avoided
(force_unwrapping)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Clients/TranscriptionClient.swift` around lines 281 - 286, The code
references a non-existent property disableAutoCapitalization on HexSettings
causing a compile error; add a Bool property named disableAutoCapitalization
(with a sensible default, e.g. false) to the HexSettings struct in HexCore, and
update TranscriptionClient to safely read it via optional chaining (use
settings?.disableAutoCapitalization) when computing useAutoCapitalization in the
method using the useAutoCapitalization variable so the compilation and behavior
are correct.
```swift
import ComposableArchitecture
import Foundation
import SwiftUI

@Reducer
struct AIEnhancementFeature {
  @ObservableState
  struct State: Equatable {
    @Shared(.hexSettings) var hexSettings: HexSettings
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -e
echo "Top of AIEnhancementFeature.swift:"
sed -n '1,20p' Hex/Features/Settings/AIEnhancementFeature.swift
echo
echo "HexSettings declarations / HexCore imports / re-exports:"
rg -n --type=swift '@_exported\s+import\s+HexCore|import\s+HexCore|struct\s+HexSettings\b|typealias\s+HexSettings\b'
```

Repository: kitlangton/Hex
Length of output: 2689
🏁 Script executed:
```shell
# Check complete imports in AIEnhancementFeature.swift
head -30 Hex/Features/Settings/AIEnhancementFeature.swift
# Check for Hex module package definition and exports
find . -name "Package.swift" -o -name "module.modulemap" | head -5 | xargs cat 2>/dev/null
# Check if there's a Hex module bridging header or public exports
fd -e swift Hex/Hex.swift Hex/HexPublic.swift 2>/dev/null | xargs cat 2>/dev/null
# Check AppHexSettings.swift to understand the typealias context
cat -n Hex/Models/AppHexSettings.swift
```

Repository: kitlangton/Hex
Length of output: 2761
🏁 Script executed:
```shell
# Get all imports from the full AIEnhancementFeature.swift file
rg -n '^\s*import\s+' Hex/Features/Settings/AIEnhancementFeature.swift
# Check file length
wc -l Hex/Features/Settings/AIEnhancementFeature.swift
```

Repository: kitlangton/Hex
Length of output: 178
Import HexCore before referencing HexSettings.
This file uses HexSettings directly on line 15, but does not import HexCore. The type is defined in HexCore/Sources/HexCore/Settings/HexSettings.swift as a public struct. While a typealias exists in Hex/Models/AppHexSettings.swift, that file is not imported here, so the typealias is inaccessible. This will fail to compile.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Features/Settings/AIEnhancementFeature.swift` around lines 6 - 15, The
file references HexSettings inside the AIEnhancementFeature.State but doesn't
import its defining module; add an import for HexCore at the top of the file so
HexSettings is resolvable. Specifically, update the imports above the `@Reducer`
declaration (where AIEnhancementFeature and State are defined) to include
HexCore so the compiler can find the public HexSettings type.
```swift
case .task:
  return .send(.checkOllamaAvailability)

case .checkOllamaAvailability:
  return .run { send in
    let isAvailable = await aiEnhancement.isOllamaAvailable()
    await send(.ollamaAvailabilityResult(isAvailable))
  }

case let .ollamaAvailabilityResult(isAvailable):
  state.isOllamaAvailable = isAvailable

  // If Ollama is available, load models
  if isAvailable {
    return .send(.loadAvailableModels)
```
Don’t fetch models on view load while enhancement is off.
Opening this tab currently checks Ollama and can load models even with useAIEnhancement == false. Because modelsLoaded may rewrite selectedAIModel, simply visiting a disabled settings screen can still mutate persisted AI settings.
💡 Proposed fix

```diff
 case .task:
-  return .send(.checkOllamaAvailability)
+  guard state.hexSettings.useAIEnhancement else { return .none }
+  return .send(.checkOllamaAvailability)
```

Also applies to: 69-76
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Features/Settings/AIEnhancementFeature.swift` around lines 38 - 52, The
code currently triggers Ollama checks and model loading even when
state.useAIEnhancement is false; update the logic so .task does not send
.checkOllamaAvailability unless state.useAIEnhancement is true, and in the
.ollamaAvailabilityResult handler only return .send(.loadAvailableModels) when
isAvailable && state.useAIEnhancement; apply the same guard where similar logic
appears around modelsLoaded (the other block at lines ~69-76) so models are
never fetched or modelsLoaded mutated when useAIEnhancement is disabled.
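Taken together, the two guards the prompt describes could look like this — a sketch against the reducer shown above, assuming `useAIEnhancement` lives on `state.hexSettings` as the proposed fix implies:

```swift
case .task:
  // Don't even probe Ollama while the feature is disabled.
  guard state.hexSettings.useAIEnhancement else { return .none }
  return .send(.checkOllamaAvailability)

case let .ollamaAvailabilityResult(isAvailable):
  state.isOllamaAvailable = isAvailable
  // Load models only when enhancement is on, so visiting a disabled
  // settings screen can never rewrite the persisted selectedAIModel.
  guard isAvailable, state.hexSettings.useAIEnhancement else { return .none }
  return .send(.loadAvailableModels)
```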
```swift
case let .aiEnhancementError(error):
  if error is AIEnhancementError {
    transcriptionFeatureLogger.notice("AI enhancement error (Ollama): \(error.localizedDescription)")
    return .send(.ollamaBecameUnavailable)
  } else {
    transcriptionFeatureLogger.error("AI enhancement error: \(error.localizedDescription)")
    return .none
  }
```
Enhancement error path can stall state and lose output.
This handler treats every AIEnhancementError as Ollama availability loss, and the non-AIEnhancementError path returns .none. Both paths skip state reset/fallback output handling, so isEnhancing/isTranscribing can remain stuck and transcription can be dropped.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Features/Transcription/TranscriptionFeature.swift` around lines 152 -
159, The aiEnhancementError handler currently only logs and conditionally sends
.ollamaBecameUnavailable, leaving isEnhancing/isTranscribing and fallback output
handling untouched; update the .aiEnhancementError(error) branch to always reset
enhancement/transcription state and emit any necessary fallback output before
returning: when error is AIEnhancementError keep the
transcriptionFeatureLogger.notice and send .ollamaBecameUnavailable but also
clear isEnhancing/isTranscribing (or dispatch the existing action that resets
those flags) and dispatch the existing fallback/output action so the transcript
isn't lost; when error is not AIEnhancementError log the error via
transcriptionFeatureLogger.error but likewise reset the flags and emit the same
fallback/output action (or send a specific .enhancementFailed action) instead of
simply returning .none so state cannot remain stuck.
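One way to satisfy the prompt, sketched with a hypothetical `.enhancementFallback` action that stores or pastes the original transcript — the real fallback action name depends on the feature's `Action` enum:

```swift
case let .aiEnhancementError(error):
  // Always unwind UI flags first so the indicator cannot stay stuck.
  state.isEnhancing = false
  state.isTranscribing = false
  if error is AIEnhancementError {
    transcriptionFeatureLogger.notice("AI enhancement error (Ollama): \(error.localizedDescription)")
    // Keep the raw transcript, then mark Ollama unavailable.
    return .merge(
      .send(.enhancementFallback),      // hypothetical fallback-output action
      .send(.ollamaBecameUnavailable)
    )
  } else {
    transcriptionFeatureLogger.error("AI enhancement error: \(error.localizedDescription)")
    return .send(.enhancementFallback)  // hypothetical fallback-output action
  }
```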
```swift
// First check if we should use AI enhancement
if state.hexSettings.useAIEnhancement {
  return enhanceWithAI(result: result, audioURL: audioURL, state: state)
} else {
  state.isTranscribing = false
  state.isPrewarming = false

  // If empty text, nothing else to do
  guard !result.isEmpty else {
    return .none
  }

  let duration = state.recordingStartTime.map { Date().timeIntervalSince($0) } ?? 0
  let sourceAppBundleID = state.sourceAppBundleID
  let sourceAppName = state.sourceAppName
  let transcriptionHistory = state.$transcriptionHistory

  return .run { send in
    do {
      try await finalizeRecordingAndStoreTranscript(
        result: result,
        duration: duration,
        sourceAppBundleID: sourceAppBundleID,
        sourceAppName: sourceAppName,
        audioURL: audioURL,
        transcriptionHistory: transcriptionHistory
      )
    } catch {
      await send(.transcriptionError(error, audioURL))
    }
  }
  .cancellable(id: CancelID.transcription)
}
```
Non-AI path skips transcript post-processing.
When useAIEnhancement == false, this branch finalizes immediately and bypasses remapping/removal logic (and other shared normalization checks) now centralized in handleAIEnhancement. This changes behavior for all non-AI users.
♻️ Suggested fix

```diff
   if state.hexSettings.useAIEnhancement {
     return enhanceWithAI(result: result, audioURL: audioURL, state: state)
   } else {
-    state.isTranscribing = false
-    state.isPrewarming = false
-
-    guard !result.isEmpty else {
-      return .none
-    }
-
-    let duration = state.recordingStartTime.map { Date().timeIntervalSince($0) } ?? 0
-    let sourceAppBundleID = state.sourceAppBundleID
-    let sourceAppName = state.sourceAppName
-    let transcriptionHistory = state.$transcriptionHistory
-
-    return .run { send in
-      do {
-        try await finalizeRecordingAndStoreTranscript(
-          result: result,
-          duration: duration,
-          sourceAppBundleID: sourceAppBundleID,
-          sourceAppName: sourceAppName,
-          audioURL: audioURL,
-          transcriptionHistory: transcriptionHistory
-        )
-      } catch {
-        await send(.transcriptionError(error, audioURL))
-      }
-    }
-    .cancellable(id: CancelID.transcription)
+    return handleAIEnhancement(&state, result: result, audioURL: audioURL)
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Features/Transcription/TranscriptionFeature.swift` around lines 492 -
524, The non-AI branch short-circuits post-processing (skipping the shared
remapping/removal/normalization logic used by the AI path), so update the branch
that checks state.hexSettings.useAIEnhancement to invoke the same shared
post-processing used by the AI flow (e.g., call the centralized handler such as
handleAIEnhancement or extract the common finalization logic into a shared
method) before calling finalizeRecordingAndStoreTranscript; ensure you preserve
the same state updates (state.isTranscribing/state.isPrewarming) and pass the
same parameters (result, audioURL, duration, sourceAppBundleID, sourceAppName,
transcriptionHistory) so non-AI users receive identical
remapping/removal/normalization behavior as the enhanceWithAI path.
```swift
private func handleAIEnhancement(
  _ state: inout State,
  result: String,
  audioURL: URL
) -> Effect<Action> {
  state.isTranscribing = false
  state.isPrewarming = false
  state.isEnhancing = false // Reset the enhancing state
  state.pendingTranscription = nil // Clear the pending transcription since enhancement succeeded
```
pendingTranscription is referenced but not defined in state.
Line 568 writes state.pendingTranscription = nil, but TranscriptionFeature.State has no pendingTranscription member. This is a compile-time failure.
💡 Minimal fix

```diff
-  state.pendingTranscription = nil // Clear the pending transcription since enhancement succeeded
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```swift
private func handleAIEnhancement(
  _ state: inout State,
  result: String,
  audioURL: URL
) -> Effect<Action> {
  state.isTranscribing = false
  state.isPrewarming = false
  state.isEnhancing = false // Reset the enhancing state
```
🧰 Tools
🪛 SwiftLint (0.63.2)
[Warning] 560-560: Function body should span 60 lines or less excluding comments and whitespace: currently spans 62 lines
(function_body_length)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Features/Transcription/TranscriptionFeature.swift` around lines 560 -
569, TranscriptionFeature.State is missing the pendingTranscription property
referenced by handleAIEnhancement; add an optional property (e.g., var
pendingTranscription: String? = nil) to TranscriptionFeature.State so the line
state.pendingTranscription = nil compiles, and ensure any other uses of
pendingTranscription in the feature match this type and optional semantics.
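The missing member could be added as sketched below; the `@ObservableState` attribute and the other members shown are assumptions about the existing declaration, with unrelated members elided:

```swift
@ObservableState
struct State {
  var isTranscribing = false
  var isPrewarming = false
  var isEnhancing = false
  // New: holds the raw transcript while enhancement is in flight,
  // cleared when enhancement succeeds or falls back.
  var pendingTranscription: String? = nil
  // … existing members elided …
}
```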
```swift
func body(content: Content) -> some View {
  content.changeEffect(
    .glow(color: status == .enhancing ? enhanceBaseColor.opacity(0.4) : .red.opacity(0.4), radius: 6),
    value: status
  )
```
The glow effect is still enabled for every visible state.
LightweightEffects always returns a Pow .glow, so recording/option-key/prewarming still pay for the effect path, and transcribing/prewarming now get a red glow instead of the blue transcribing color. That undercuts the perf work here and changes the indicator styling.
💡 Proposed fix

```diff
 struct LightweightEffects: ViewModifier {
   var status: TranscriptionIndicatorView.Status
   var enhanceBaseColor: Color

   func body(content: Content) -> some View {
-    content.changeEffect(
-      .glow(color: status == .enhancing ? enhanceBaseColor.opacity(0.4) : .red.opacity(0.4), radius: 6),
-      value: status
-    )
+    switch status {
+    case .transcribing, .prewarming:
+      content.changeEffect(.glow(color: .blue.opacity(0.4), radius: 6), value: status)
+    case .enhancing:
+      content.changeEffect(.glow(color: enhanceBaseColor.opacity(0.4), radius: 6), value: status)
+    default:
+      content
+    }
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Hex/Features/Transcription/TranscriptionIndicatorView.swift` around lines 289
- 293, The glow effect is being applied unconditionally; update the
body(content:) in TranscriptionIndicatorView to only apply .glow when status is
a glowing state (e.g., .enhancing or .transcribing) and use the correct color
per state (use enhanceBaseColor.opacity(0.4) for .enhancing and the transcribing
blue color for .transcribing); for all other statuses
(recording/option-key/prewarming/etc.) omit the glow path (i.e., don't call
changeEffect or pass a no-op effect) so those states avoid the glow performance
path and maintain their intended styling.
@kitlangton
This pull request introduces significant enhancements and fixes across multiple areas of the codebase, including the addition of a new AI enhancement feature, improvements to pasteboard handling, and updates to existing functionalities. Below is a breakdown of the most important changes grouped by theme:
New Feature: AI Enhancement
- Introduced `AIEnhancementClient` in `Hex/Clients/AIEnhancementClient.swift`, which provides functionality for enhancing transcribed text using local AI models like Ollama. This includes methods for checking model availability, retrieving available models, and performing text enhancement with detailed options.
- Added an `aiEnhancement` tab to the app's UI by updating `AppFeature` and `AppView` to include the AI enhancement feature. This includes a new button and navigation logic. [1] [2] [3]

Improvements to Pasteboard Handling
- Updated `savePasteboardState` in `PasteboardClientLive` to limit the number of pasteboard items saved (to 5) and restrict the size of saved data (to 1MB per item) to reduce memory usage.
- Updated the `pasteWithClipboard` method to conditionally save the pasteboard state only when necessary, added delays for better system processing, and ensured proper restoration of the pasteboard state. [1] [2]

Updates to Transcription Functionality
- Updated `TranscriptionClientLive` to respect a new `disableAutoCapitalization` setting from `hex_settings.json`. If this setting is enabled, transcribed text will remain in lowercase.

Build Configuration Changes
- Set `CODE_SIGN_IDENTITY` for macOS builds in `Hex.xcodeproj/project.pbxproj` to `"-"` to simplify code signing during development. [1] [2]

Minor Fixes
- Updated `RecordingClientLive` to use `let` instead of `var` for the `deviceNamePtr` allocation to ensure immutability.
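The pasteboard capping described under "Improvements to Pasteboard Handling" could look roughly like this. Only the 5-item and 1 MB figures come from the description; the function name and return shape are hypothetical:

```swift
import AppKit

// Sketch of the described savePasteboardState capping.
func snapshotPasteboard(_ pasteboard: NSPasteboard) -> [[NSPasteboard.PasteboardType: Data]] {
  let maxItems = 5
  let maxBytesPerItem = 1_048_576 // 1 MB per item
  return (pasteboard.pasteboardItems ?? []).prefix(maxItems).map { item in
    var saved: [NSPasteboard.PasteboardType: Data] = [:]
    for type in item.types {
      // Skip oversized representations instead of copying them.
      if let data = item.data(forType: type), data.count <= maxBytesPerItem {
        saved[type] = data
      }
    }
    return saved
  }
}
```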
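The `disableAutoCapitalization` behavior noted above reduces to a small post-processing step; the setting name comes from the description, the helper itself is hypothetical:

```swift
// Applied to transcribed text before it is returned to the caller.
func applyCapitalizationPolicy(_ text: String, disableAutoCapitalization: Bool) -> String {
  disableAutoCapitalization ? text.lowercased() : text
}
```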