
AI Enhancements#64

Open
plyght wants to merge 10 commits into kitlangton:main from plyght:feat/ai-improvement

Conversation

@plyght
Contributor

@plyght plyght commented Apr 23, 2025

@kitlangton
This pull request introduces significant enhancements and fixes across multiple areas of the codebase, including the addition of a new AI enhancement feature, improvements to pasteboard handling, and updates to existing functionalities. Below is a breakdown of the most important changes grouped by theme:

New Feature: AI Enhancement

  • Added a new AIEnhancementClient in Hex/Clients/AIEnhancementClient.swift, which provides functionality for enhancing transcribed text using local AI models like Ollama. This includes methods for checking model availability, retrieving available models, and performing text enhancement with detailed options.
  • Introduced a new aiEnhancement tab in the app's UI by updating AppFeature and AppView to include the AI enhancement feature. This includes a new button and navigation logic. [1] [2] [3]
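
Based on the method names mentioned in this description and in the review below, the client's surface presumably looks something like the following. This is a simplified sketch: the real file reportedly uses TCA's @DependencyClient macro, which is omitted here, and the exact signatures are approximations.

```swift
import Foundation

// Approximate shape of EnhancementOptions; defaults mirror values quoted
// in the review discussion (temperature 0.3, maxTokens 1000).
struct EnhancementOptions {
    var prompt: String
    var temperature: Double = 0.3
    var maxTokens: Int = 1000
}

// Sketch of the client's interface as a struct of closures, the usual
// shape for a TCA dependency client (macro and labels elided).
struct AIEnhancementClient {
    // True when a local Ollama server answers on its API port.
    var isOllamaAvailable: () async -> Bool
    // Names of the models the local server can run.
    var getAvailableModels: () async throws -> [String]
    // enhance(text, model, options, progressCallback) -> enhanced text
    var enhance: (String, String, EnhancementOptions, (Progress) -> Void) async throws -> String
}
```

A test or preview implementation can then be built by supplying stub closures, which is the main payoff of the closure-struct style.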

Improvements to Pasteboard Handling

  • Updated savePasteboardState in PasteboardClientLive to limit the number of pasteboard items saved (to 5) and restrict the size of saved data (to 1MB per item) to reduce memory usage.
  • Enhanced the pasteWithClipboard method to save the pasteboard state only when necessary, added short delays so the system can register pasteboard changes, and ensured the pasteboard state is properly restored afterwards. [1] [2]
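
A minimal sketch of the save limits described above (at most 5 items, 1 MB per representation), assuming the state is snapshotted by copying NSPasteboardItem data. The helper name is hypothetical; the real logic lives in PasteboardClientLive.

```swift
import AppKit

// Hypothetical helper illustrating the limits described above:
// keep at most `maxItems` items and skip any data representation
// larger than `maxBytesPerItem` to cap memory usage.
func snapshotPasteboardItems(from pasteboard: NSPasteboard,
                             maxItems: Int = 5,
                             maxBytesPerItem: Int = 1_048_576) -> [NSPasteboardItem] {
    guard let items = pasteboard.pasteboardItems else { return [] }
    return items.prefix(maxItems).compactMap { item in
        let copy = NSPasteboardItem()
        for type in item.types {
            // Skip oversized representations rather than the whole item.
            guard let data = item.data(forType: type),
                  data.count <= maxBytesPerItem else { continue }
            _ = copy.setData(data, forType: type)
        }
        return copy.types.isEmpty ? nil : copy
    }
}
```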

Updates to Transcription Functionality

  • Modified the transcription process in TranscriptionClientLive to respect a new disableAutoCapitalization setting from hex_settings.json. If this setting is enabled, transcribed text will remain in lowercase.
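
The described behavior amounts to a one-line post-processing step; a hedged sketch (the function name is hypothetical, not the PR's actual code):

```swift
// Hypothetical helper: when disableAutoCapitalization is set in
// hex_settings.json, keep the transcript lowercase instead of the
// transcription engine's default casing.
func applyCapitalizationPolicy(_ transcript: String,
                               disableAutoCapitalization: Bool) -> String {
    disableAutoCapitalization ? transcript.lowercased() : transcript
}
```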

Build Configuration Changes

  • Changed the CODE_SIGN_IDENTITY for macOS builds in Hex.xcodeproj/project.pbxproj to "-" to simplify code signing during development. [1] [2]
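
In pbxproj build-settings syntax, the resulting line looks like this (fragment only; "-" requests ad-hoc signing):

```
CODE_SIGN_IDENTITY = "-";
```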

Minor Fixes

  • Updated RecordingClientLive to use let instead of var for the deviceNamePtr allocation to ensure immutability.
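
This works because a pointer bound with let cannot be rebound, while the memory it addresses stays mutable through pointee. An illustrative snippet (not the PR's code):

```swift
// A `let` binding prevents rebinding the pointer variable itself;
// the allocated memory remains writable via `pointee`.
let deviceNamePtr = UnsafeMutablePointer<CChar>.allocate(capacity: 64)
defer { deviceNamePtr.deallocate() }
deviceNamePtr.initialize(repeating: 0, count: 64)
deviceNamePtr.pointee = 72        // fine: mutates the pointee, not the binding
// deviceNamePtr = otherPtr       // would not compile: `let` binding is immutable
```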

Summary by CodeRabbit

  • New Features

    • Added AI Enhancement feature to improve transcription quality using local Ollama language models with configurable settings for response creativity and custom prompts.
    • Added visual indicator showing enhancement in progress during transcription pipeline.
  • Settings

    • Added ability to disable auto-capitalization in transcriptions.
    • New AI Enhancement settings tab with model selection, response style adjustment, and custom prompt editing.
  • Performance

    • Optimized audio metering for reduced CPU usage with adaptive sampling intervals.
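
The metering optimization can be sketched as a small throttle: emit only on a significant power change, but force an emission after a maximum quiet interval. The thresholds and interval below are illustrative, not the exact values in RecordingClient.

```swift
import Foundation

// Hedged sketch of adaptive meter throttling: suppress updates for
// imperceptible changes, but never go silent longer than maxQuietInterval.
struct MeterThrottle {
    var lastEmitted: (average: Double, peak: Double) = (0, 0)
    var lastEmitTime: TimeInterval = 0
    let significantDelta = 0.05   // illustrative threshold
    let maxQuietInterval = 0.3    // force an emission at least this often

    mutating func shouldEmit(average: Double, peak: Double, at time: TimeInterval) -> Bool {
        let changed = abs(average - lastEmitted.average) > significantDelta
            || abs(peak - lastEmitted.peak) > significantDelta * 2
        let stale = time - lastEmitTime >= maxQuietInterval
        guard changed || stale else { return false }
        lastEmitted = (average, peak)
        lastEmitTime = time
        return true
    }
}
```

During silence the meter stream then emits only the periodic keep-alive updates, which is where the CPU savings come from.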

@plyght plyght marked this pull request as ready for review April 28, 2025 00:39
@coderabbitai
Contributor

coderabbitai bot commented Apr 28, 2025

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


The pull request introduces AI text enhancement capabilities using Ollama LLMs. A new AIEnhancementClient provides async methods to enhance transcribed text, check Ollama availability, and fetch available models. The transcription pipeline integrates enhancement as a post-processing stage, with supporting UI views, settings persistence, and localization strings.

Changes

  • AI Enhancement Client (Hex/Clients/AIEnhancementClient.swift)
    New dependency-injected client for enhancing text using locally running Ollama LLMs. Provides async methods to enhance text, check Ollama availability via /api/version, and fetch model names. Validates inputs, clamps temperature/maxTokens, constructs composite prompts, and parses JSON responses with comprehensive error handling.
  • Settings & Configuration (HexCore/Sources/HexCore/Settings/HexSettings.swift, Hex/Features/Settings/AIEnhancementFeature.swift, Hex/Features/Settings/AIEnhancementView.swift, Hex/Features/Settings/SettingsFeature.swift)
    Added AI enhancement settings to HexSettings (enablement flag, model selection, custom prompt, temperature). New TCA reducer manages Ollama availability checks, model loading, and settings mutations. New SwiftUI form view provides UI for toggling enhancement, selecting models, and configuring temperature/prompt, with status indicators and setup instructions.
  • Transcription Enhancement (Hex/Features/Transcription/TranscriptionFeature.swift, Hex/Clients/TranscriptionClient.swift)
    Extended the transcription pipeline with a post-processing AI enhancement stage. TranscriptionClient now accepts optional HexSettings to conditionally apply auto-capitalization. TranscriptionFeature adds enhancement state tracking and Ollama availability checking, and integrates AIEnhancementClient for text improvement. Added cancel support and error recovery.
  • UI & Indicators (Hex/Features/Transcription/TranscriptionIndicatorView.swift, Hex/Features/App/AppFeature.swift)
    Added an enhancing status to the transcription indicator with green styling and a separate effect animation. Refactored the view hierarchy with new CapsuleWithEffects and LightweightEffects components. Added an AI Enhancement tab to app navigation and detail panel routing.
  • Metering & Audio (Hex/Clients/RecordingClient.swift)
    Optimized meter emission with adaptive sampling: yields only on significant power changes, enforces 300ms update intervals, and adjusts samplingInterval (80–150ms) based on activity state. Refactored pointer allocation to use type inference.
  • Localization (Localizable.xcstrings)
    Added 36 new localization string keys for AI enhancement UI, Ollama status messaging, model selection, prompt customization, temperature control, and setup instructions.
  • Infrastructure & Project (Hex.xcodeproj/project.pbxproj, Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved, HexCore/Sources/HexCore/Logging.swift)
    Updated Swift package dependencies (replaced HexCore, Inject, FluidAudio; upgraded WhisperKit, Pow, Sparkle, swift-composable-architecture, swift-dependencies). Updated build configuration and product naming. Added an aiEnhancement logging category.
  • Minor Cleanup (Hex/Clients/PasteboardClient.swift)
    Removed whitespace and updated inline comments; no behavioral changes.
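
The availability probe described in the walkthrough (a GET to Ollama's /api/version) can be sketched as follows. The localhost:11434 base URL and 5-second timeout match values quoted in the review comments; treat them as assumptions.

```swift
import Foundation

// Hedged sketch: probe Ollama's version endpoint with a short timeout,
// treating any network error or non-2xx response as "not available".
func isOllamaAvailable(baseURL: String = "http://localhost:11434") async -> Bool {
    guard let url = URL(string: "\(baseURL)/api/version") else { return false }
    var request = URLRequest(url: url)
    request.timeoutInterval = 5
    do {
        let (_, response) = try await URLSession.shared.data(for: request)
        guard let http = response as? HTTPURLResponse else { return false }
        return (200..<300).contains(http.statusCode)
    } catch {
        return false
    }
}
```

Returning false on any error keeps the caller's logic simple: the enhancement stage is skipped and the original transcript passes through unchanged.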

Sequence Diagram

sequenceDiagram
    participant User
    participant TranscriptionFeature
    participant TranscriptionClient as TranscriptionClient<br/>(WhisperKit)
    participant AIEnhancementFeature
    participant AIEnhancementClient
    participant Ollama
    participant Storage

    User->>TranscriptionFeature: Start recording & transcription
    TranscriptionFeature->>TranscriptionClient: transcribe(audio, model, settings)
    TranscriptionClient->>TranscriptionClient: Process audio with WhisperKit
    TranscriptionClient->>TranscriptionClient: Apply auto-capitalization logic<br/>(based on HexSettings)
    TranscriptionClient-->>TranscriptionFeature: Return text

    alt AI Enhancement Enabled
        TranscriptionFeature->>AIEnhancementFeature: Trigger enhancement
        AIEnhancementFeature->>AIEnhancementClient: Check Ollama availability
        AIEnhancementClient->>Ollama: GET /api/version
        Ollama-->>AIEnhancementClient: Available
        AIEnhancementFeature->>AIEnhancementClient: enhance(text, model, options)
        AIEnhancementClient->>AIEnhancementClient: Validate input & clamp parameters
        AIEnhancementClient->>Ollama: POST /api/generate<br/>(with prompt template + text)
        Ollama-->>AIEnhancementClient: Enhanced text response
        AIEnhancementClient-->>AIEnhancementFeature: Enhanced result
        AIEnhancementFeature-->>TranscriptionFeature: Enhancement complete
    else AI Enhancement Disabled
        TranscriptionFeature->>Storage: Use original text
    end

    TranscriptionFeature->>Storage: Store final transcript
    Storage-->>User: Transcription complete
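
The POST /api/generate step in the diagram can be sketched as a request builder. Ollama's documented generate API nests sampling parameters (temperature, num_predict) under an "options" object; note that the review excerpt further down shows the PR passing temperature and max_tokens at the top level instead, so this sketch follows the documented API rather than the PR verbatim.

```swift
import Foundation

// Hedged sketch of the generate request; endpoint and field names follow
// Ollama's documented API, not necessarily the PR's exact payload.
func makeGenerateRequest(model: String,
                         prompt: String,
                         temperature: Double,
                         maxTokens: Int) throws -> URLRequest {
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": model,
        "prompt": prompt,
        "stream": false,  // single JSON response instead of a token stream
        "options": ["temperature": temperature, "num_predict": maxTokens],
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)
    return request
}
```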

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • kitlangton

Poem

🐰 A hop through enhancements so fine,
With Ollama's wisdom now intertwined,
The transcriptions glow green while they mend,
Each word polished from start until end,
AI and Swift in harmony blend!

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 36.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): the title 'AI Enhancements' is vague and generic and does not clearly convey the specific changes in the changeset. Resolution: use a more specific title that highlights the primary change, such as 'Add AI text enhancement feature with Ollama support' or 'Introduce AIEnhancementClient for transcription post-processing'.

✅ Passed checks (1)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (1)
Localizable.xcstrings (1)

38-670: 🛠️ Refactor suggestion

New localization keys added without comments or translations.

The file has been updated with numerous new localization keys related to the AI enhancement feature, but they lack comments explaining their context and translations for other supported languages like German.

For consistency with existing localization keys in the file, consider adding:

  1. Comments explaining where and how each string is used
  2. Translations for the supported languages (particularly German, which appears to be supported)

This will ensure a consistent experience for users across all supported languages.

🧹 Nitpick comments (14)
Hex/Models/HexSettings.swift (1)

126-160: Efficient caching implementation for HexSettings.

The caching mechanism for HexSettings is a good performance optimization to reduce disk I/O. Setting a 5-second expiration is a reasonable balance between performance and freshness.

However, there's a minor issue in the variable declaration:

-private var cachedSettings: HexSettings? = nil
+private var cachedSettings: HexSettings?

Since Swift initializes optionals to nil by default, the explicit nil initialization is redundant.

🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 127-127: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)

Hex/Features/Settings/AIEnhancementFeature.swift (2)

21-21: Remove redundant optional initialization

errorMessage is declared as an optional, which is nil by default. Initialising it explicitly adds noise and triggers the SwiftLint warning you saw.

-var errorMessage: String? = nil
+var errorMessage: String?
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 21-21: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)


24-26: Consider hoisting defaultAIModel to a static let

Because defaultAIModel never varies per instance, a static constant is marginally cheaper and communicates intent more clearly:

-// Computed property for convenient access to the default model
-var defaultAIModel: String {
-    "gemma3"
-}
+static let defaultAIModel = "gemma3"

You can then reference it with State.defaultAIModel.
This is purely stylistic, feel free to ignore if you prefer the current form.

Hex/Features/Transcription/TranscriptionFeature.swift (2)

60-61: Unused aiEnhancement cancel ID

CancelID.aiEnhancement is declared but never used in a .cancellable(id:) call.
If you intentionally removed cancellation to avoid premature termination, consider deleting the enum case altogether (or re-introduce the ID in .run {} via .cancellable(id: CancelID.aiEnhancement)).

This keeps the enum in sync with real usage and avoids confusion for future maintainers.


51-54: aiEnhancementError action is never dispatched

The catch-block in enhanceWithAI sends .aiEnhancementResult(result) instead of .aiEnhancementError. Either:

  1. Remove the unused case to simplify the reducer, or
  2. Emit the dedicated error action to handle/report enhancement failures separately.

Aligning intent and implementation prevents dead code paths.

Hex/Clients/PasteboardClient.swift (2)

60-64: Remove now-unused static tracking properties

savedChangeCount and savedPasteboardName are written in savePasteboardState but never read elsewhere after the refactor. They can be deleted to reduce clutter:

-// Stores the previous pasteboard owner change count
-private static var savedChangeCount: Int = 0
-// Stores the previous pasteboard contents name for tracking
-private static var savedPasteboardName: String?

94-109: Return value of writeObjects should be checked

NSPasteboard.writeObjects(_:) returns Bool indicating success.
Silently ignoring a failure may leave the pasteboard empty, causing data loss on restore.

-if let items = backupPasteboard.pasteboardItems {
-    backupPasteboard.writeObjects(items)
+if let items = backupPasteboard.pasteboardItems,
+   !items.isEmpty {
+    let ok = backupPasteboard.writeObjects(items)
+    if !ok {
+        print("⚠️  Failed to write items to backup pasteboard")
+    }
 }
Hex/Clients/AIEnhancementClient.swift (7)

1-11: Header block includes an unusual creator attribution.

The file header indicates it was created by "Claude AI" which is unusual for source code files. Consider changing this to reflect the actual developer or your team name for consistency with other files in the project.


12-14: Consider using OllamaKit directly as suggested by the comment.

The commented-out code suggests a future enhancement to use OllamaKit directly. This could provide better integration with Ollama and potentially simplify the code by leveraging an official or community-maintained client library rather than implementing the API integration manually.

Would you like me to research if OllamaKit exists and provide implementation guidelines for integrating it?


96-101: Replace magic number with a named constant.

The check text.count > 5 uses a hardcoded value. This would be clearer as a named constant.

+    // Minimum text length required for enhancement
+    private let minimumTextLengthForEnhancement = 5
+    
     /// Enhances text using a local AI model
     func enhance(text: String, model: String, options: EnhancementOptions, progressCallback: @escaping (Progress) -> Void) async throws -> String {
         // Skip if the text is empty or too short
-        guard !text.isEmpty, text.count > 5 else {
+        guard !text.isEmpty, text.count > minimumTextLengthForEnhancement else {
             print("[AIEnhancementClientLive] Text too short for enhancement, returning original")
             return text
         }

115-116: Use localized error messages instead of hardcoded strings.

Error messages are hardcoded in English. Since your app supports localization (as seen in the Localizable.xcstrings file), consider using localized strings for error messages.

-                throw NSError(domain: "AIEnhancementClient", code: -5, 
-                              userInfo: [NSLocalizedDescriptionKey: "Ollama is not available. Please ensure it's running."])
+                throw NSError(domain: "AIEnhancementClient", code: -5, 
+                              userInfo: [NSLocalizedDescriptionKey: NSLocalizedString("Ollama is not available. Please ensure it's running.", comment: "Error when Ollama service is unavailable")])

214-252: Consider making temperature and token limits part of the EnhancementOptions validation.

The limits for temperature and token count are enforced in the enhancement method rather than in the EnhancementOptions struct itself.

Consider validating these parameters in the EnhancementOptions initializer instead, ensuring that invalid values cannot be created in the first place:

struct EnhancementOptions {
    /// The prompt to send to the AI model for text enhancement
    var prompt: String
    
    /// Temperature controls randomness: lower values (0.1-0.3) are more precise,
    /// higher values (0.7-1.0) give more creative/varied results
    var temperature: Double
    
    /// Maximum number of tokens to generate in the response
    var maxTokens: Int
    
+    // Valid ranges for parameters
+    private static let minTemperature = 0.1
+    private static let maxTemperature = 1.0
+    private static let minTokens = 100
+    private static let maxTokens = 2000
+    
    /// Default prompt for enhancing transcribed text with clear instructions
    static let defaultPrompt = """
    // [existing prompt]
    """
    
    /// Default enhancement options for transcribed text
    static let `default` = EnhancementOptions(
        prompt: defaultPrompt,
        temperature: 0.3,
        maxTokens: 1000
    )
    
    /// Custom initialization with sensible defaults
    init(prompt: String = defaultPrompt, temperature: Double = 0.3, maxTokens: Int = 1000) {
        self.prompt = prompt
-        self.temperature = temperature
-        self.maxTokens = maxTokens
+        self.temperature = max(Self.minTemperature, min(Self.maxTemperature, temperature))
+        self.maxTokens = max(Self.minTokens, min(Self.maxTokens, maxTokens))
    }
}

Then in the enhanceWithOllama method:

-    // Build request parameters with appropriate defaults
-    let temperature = max(0.1, min(1.0, options.temperature)) // Ensure valid range
-    let maxTokens = max(100, min(2000, options.maxTokens))   // Reasonable limits
     
     let requestDict: [String: Any] = [
         "model": model,
         "prompt": fullPrompt,
-        "temperature": temperature,
-        "max_tokens": maxTokens,
+        "temperature": options.temperature,
+        "max_tokens": options.maxTokens,
         "stream": false,
         "system": "You are an AI that improves transcribed text while preserving meaning."
     ]

139-141: Consider making the Ollama API endpoint configurable.

The Ollama API endpoint is hardcoded in multiple places. Consider making it configurable, either through a configuration file or an environment variable, to support different setups.

class AIEnhancementClientLive {
+    // MARK: - Configuration
+    
+    private let ollamaBaseURL: String
+    
+    init(ollamaBaseURL: String = "http://localhost:11434") {
+        self.ollamaBaseURL = ollamaBaseURL
+    }
    
    // MARK: - Public Methods
    
    // ... existing code ...
    
    /// Checks if Ollama is available on the system
    func isOllamaAvailable() async -> Bool {
        // Simple check - try to connect to Ollama's API endpoint
        do {
-            var request = URLRequest(url: URL(string: "http://localhost:11434/api/version")!)
+            var request = URLRequest(url: URL(string: "\(ollamaBaseURL)/api/version")!)
            request.timeoutInterval = 5.0 // Longer timeout for more reliability

And similarly update the other hardcoded URLs.

Also applies to: 179-180, 225-229


99-131: Consider using a proper logging framework instead of print statements.

The code uses print statements for logging. Consider using a proper logging framework like os.log or a third-party solution that supports different log levels and better formatting.

Here's how you might use os.log instead:

+import os

class AIEnhancementClientLive {
+    // MARK: - Logging
+    
+    private let logger = Logger(subsystem: "com.yourcompany.Hex", category: "AIEnhancement")
    
    // MARK: - Public Methods
    
    /// Enhances text using a local AI model
    func enhance(text: String, model: String, options: EnhancementOptions, progressCallback: @escaping (Progress) -> Void) async throws -> String {
        // Skip if the text is empty or too short
        guard !text.isEmpty, text.count > 5 else {
-            print("[AIEnhancementClientLive] Text too short for enhancement, returning original")
+            logger.debug("Text too short for enhancement, returning original")
            return text
        }
        
        let progress = Progress(totalUnitCount: 100)
        progressCallback(progress)
        
-        print("[AIEnhancementClientLive] Starting text enhancement with model: \(model)")
-        print("[AIEnhancementClientLive] Text to enhance (\(text.count) chars): \"\(text.prefix(50))...\"")
+        logger.debug("Starting text enhancement with model: \(model)")
+        logger.debug("Text to enhance (\(text.count) chars): \"\(text.prefix(50))...\"")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9a4c38b and 9e51c96.

📒 Files selected for processing (13)
  • Hex/Clients/AIEnhancementClient.swift (1 hunks)
  • Hex/Clients/PasteboardClient.swift (2 hunks)
  • Hex/Clients/RecordingClient.swift (2 hunks)
  • Hex/Clients/TranscriptionClient.swift (1 hunks)
  • Hex/Features/App/AppFeature.swift (3 hunks)
  • Hex/Features/Settings/AIEnhancementFeature.swift (1 hunks)
  • Hex/Features/Settings/AIEnhancementView.swift (1 hunks)
  • Hex/Features/Settings/SettingsFeature.swift (5 hunks)
  • Hex/Features/Settings/SettingsView.swift (1 hunks)
  • Hex/Features/Transcription/TranscriptionFeature.swift (6 hunks)
  • Hex/Features/Transcription/TranscriptionIndicatorView.swift (8 hunks)
  • Hex/Models/HexSettings.swift (5 hunks)
  • Localizable.xcstrings (17 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
Hex/Features/Settings/AIEnhancementView.swift (1)
Hex/Clients/AIEnhancementClient.swift (1)
  • isOllamaAvailable (136-159)
Hex/Clients/AIEnhancementClient.swift (1)
Hex/Clients/TranscriptionClient.swift (1)
  • getAvailableModels (198-200)
Hex/Features/Transcription/TranscriptionFeature.swift (1)
Hex/Clients/AIEnhancementClient.swift (1)
  • enhance (96-133)
🪛 SwiftLint (0.57.0)
Hex/Models/HexSettings.swift

[Warning] 127-127: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)

Hex/Features/Settings/AIEnhancementFeature.swift

[Warning] 21-21: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)

🔇 Additional comments (41)
Hex/Models/HexSettings.swift (6)

20-25: The AI enhancement settings are well organized and properly integrated.

The new settings for disabling auto-capitalization and AI enhancement options are properly added to the struct with appropriate defaults. The naming is clear and descriptive.


42-46: Good practice adding corresponding CodingKeys entries.

All the new properties have corresponding CodingKeys, which ensures proper JSON coding/decoding consistency.


62-67: Well-structured initializer parameters.

New parameters added to the initializer with appropriate default values that match the property defaults.


82-86: Proper initializer implementation.

All new properties are correctly initialized from the parameters.


116-122: Proper decoder handling for new properties.

The custom decoder correctly handles all new properties with appropriate default values if they are missing from the encoded data.


168-168: Good implementation of caching in the SharedReaderKey.

The update to use the cached settings is a nice optimization and maintains the same functionality.

Hex/Features/Settings/SettingsView.swift (1)

221-226: Well-integrated UI control for the new auto-capitalization setting.

The toggle is properly bound to the HexSettings property and includes clear descriptive text and an appropriate icon. It's consistently styled with other toggles in the General section.

Hex/Features/App/AppFeature.swift (3)

18-18: Good addition of the AI enhancement tab.

The new aiEnhancement case is properly added to the ActiveTab enum.


84-90: Well-structured UI integration for the AI enhancement feature.

The new AI Enhancement button is implemented consistently with other navigation items, using the same pattern for button styling and tagging.


110-112: Good integration of AIEnhancementView in the detail view.

The AIEnhancementView is properly scoped to the settings.aiEnhancement state and action, following the established pattern.

Hex/Clients/RecordingClient.swift (2)

384-384: Good improvement: Using let for immutable allocation.

The change from var to let for deviceNamePtr is good practice since it's only initialized once and never modified after allocation. This ensures immutability and prevents accidental modifications.


562-583: Nice optimization to throttle meter updates.

This implementation adds intelligent throttling for meter updates, which:

  • Reduces UI updates for imperceptible changes (< 0.05 for average, < 0.1 for peak)
  • Only forces updates every ~500ms (5 updates at 100ms intervals)
  • Improves performance and reduces resource usage while maintaining responsiveness
+  // State persisted across meter updates (declared once, outside the per-update path)
+  var lastMeter = Meter(averagePower: 0, peakPower: 0)
+  var updateCount = 0
+
+  // Only emit if there's a significant change, or every ~5 updates (500ms)
+  let significantChange = abs(currentMeter.averagePower - lastMeter.averagePower) > 0.05 ||
+                         abs(currentMeter.peakPower - lastMeter.peakPower) > 0.1
+  
+  if significantChange || updateCount >= 5 {
+    meterContinuation.yield(currentMeter)
+    lastMeter = currentMeter
+    updateCount = 0
+  } else {
+    updateCount += 1
+  }
Hex/Features/Settings/SettingsFeature.swift (5)

39-40: Well-structured integration of AIEnhancementFeature.

The AI enhancement feature is properly added as a property in the State struct, following the existing pattern for feature composition.


67-68: Consistent action enum extension.

The new action case for AI enhancement follows the established pattern in this reducer.


83-85: Good use of Scope for feature composition.

The AIEnhancementFeature is correctly scoped within the main reducer, ensuring proper separation of concerns.


112-123: Performance improvement for device refresh.

Two smart optimizations:

  1. Extended refresh interval from 120 to 180 seconds to reduce resource usage
  2. Added conditional refresh that only runs when app is active AND settings panel is visible

This will improve battery life and reduce unnecessary background processing.


298-300: Consistent handling of sub-feature actions.

The .aiEnhancement action handler follows the established pattern for delegating to the scoped sub-reducer.

Hex/Features/Transcription/TranscriptionIndicatorView.swift (6)

17-17: Well-integrated new enhancing status.

The new enhancing case is correctly added to the Status enum with a dedicated green color to visually distinguish it from other statuses.

Also applies to: 24-24


33-33: Consistent styling for enhancing status.

The enhancing status is properly handled in all styling computations (background, stroke, and inner shadow), maintaining visual consistency with existing states.

Also applies to: 44-44, 55-55


68-68: Complete visual integration of enhancing state.

The enhancement state is thoroughly implemented in:

  • Shadow effects with appropriate opacity levels
  • Glow effects using the green color
  • Animation effects with a dedicated counter variable

This provides a cohesive visual experience for the new state.

Also applies to: 108-117, 128-131


133-147: Optimized animation with consolidated task.

Excellent optimization to use a single animation task for both transcribing and enhancing states instead of separate tasks. The code:

  1. Only runs animation when needed (status check)
  2. Updates the correct counter based on current status
  3. Maintains the same timing for animations (0.3s)

This reduces resource usage while providing the same visual feedback.


150-151: Clear tooltip behavior for distinct states.

Good decision to explicitly limit the "Model prewarming..." tooltip to only appear for the prewarming state, keeping the UI clean during enhancement.


178-178: Complete preview with all states.

Adding the enhancing status to the preview ensures developers can test and verify all possible visual states.

Hex/Features/Settings/AIEnhancementView.swift (7)

11-54: Well-structured AIEnhancementView with conditional sections.

The view is well-organized with:

  • Logical section grouping
  • Conditional rendering based on feature enablement and Ollama availability
  • Appropriate task initialization for data loading
  • Clear section headers and explanatory footers

The form style and binding to the store follow SwiftUI best practices.


59-119: Informative connection status view for better user experience.

The connection status view provides:

  • Clear visual alert with appropriate icon and styling
  • Detailed setup instructions with bullet points
  • Actionable buttons for downloading Ollama and checking connection
  • Proper spacing and visual hierarchy

This helps users understand what's needed to make the feature work.


122-156: Good activation toggle with status feedback.

The toggle implementation:

  • Properly uses withLock for thread-safe settings updates
  • Triggers Ollama availability check when enabled
  • Shows connection status indicator when connected
  • Has clear explanatory text

I appreciate the visual indicator (green dot) when connected.


159-258: Complete model selection UI with all possible states.

The model selection section handles all states gracefully:

  • Loading state with progress indicator
  • Error state with message
  • Empty state with helpful link
  • Normal state with proper picker

The refresh button and explanatory footer provide good UX.


261-316: Well-designed temperature control with clear visual cues.

The temperature slider implementation:

  • Shows precise numeric value
  • Uses clear label indicators for "Precision" vs "Creativity"
  • Has appropriate range (0-1) and step (0.05)
  • Updates settings thread-safely with withLock
  • Includes explanatory text about the impact of different values

319-400: Versatile prompt configuration with expandable editing.

The prompt section offers a good balance of simplicity and power:

  • Collapsed view shows preview with limited lines
  • Expandable view provides full editing capability
  • Reset button to restore defaults
  • Monospaced font in editor for better code/prompt editing
  • Different footer text based on expanded state

The animation for expanding/collapsing is a nice touch.


402-413: Reusable bullet point helper for consistent formatting.

Good extraction of the bullet point rendering into a helper function for consistent styling and reuse throughout the view.

Hex/Clients/AIEnhancementClient.swift (6)

18-27: Good use of dependency injection pattern with TCA.

The AIEnhancementClient structure effectively uses the @DependencyClient macro for dependency injection, providing clear method signatures with sensible defaults. This follows the TCA pattern well and enables easy testing through dependency substitution.


30-70: Well-designed options struct with clear documentation.

The EnhancementOptions struct is well-designed with:

  • Clear documentation for each property
  • Appropriate default values
  • A detailed default prompt with specific instructions
  • Clean initialization with sensible defaults

This makes the API both easy to use with defaults and flexible for custom configurations.


136-159: Good implementation of Ollama availability check.

The isOllamaAvailable method is well-implemented with:

  • Appropriate timeout settings
  • Clear logging
  • Proper error handling that defaults to false when errors occur
  • Status code validation
  • Useful debug information

This should provide reliable detection of the Ollama service.
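
The described behavior (short timeout, status-code validation, `false` on any error) can be sketched roughly as follows, assuming Ollama's default local endpoint at http://localhost:11434; the method in the PR may differ in details:

```swift
import Foundation

/// Minimal availability probe sketch. The endpoint path and port are
/// Ollama's documented defaults; logging is omitted for brevity.
func isOllamaAvailable() async -> Bool {
    guard let url = URL(string: "http://localhost:11434/api/tags") else { return false }
    var request = URLRequest(url: url)
    request.timeoutInterval = 5 // fail fast if the service is down
    do {
        let (_, response) = try await URLSession.shared.data(for: request)
        guard let http = response as? HTTPURLResponse else { return false }
        return (200..<300).contains(http.statusCode)
    } catch {
        // Timeouts and connection refusals both mean "not available"
        return false
    }
}
```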


162-209: Well-structured model fetching implementation.

The getAvailableModels method provides a comprehensive implementation:

  • Clean nested structure for JSON decoding
  • Proper error handling at each step
  • Appropriate timeout settings
  • Result sorting for better UX
  • Detailed error messages with error propagation

This will provide reliable model listing functionality.
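
For reference, a minimal version of the flow this comment describes, decoding Ollama's /api/tags response into a nested struct and sorting the result (response fields beyond `models`/`name` are omitted here):

```swift
import Foundation

/// Nested decoding structure matching Ollama's /api/tags JSON shape.
struct TagsResponse: Decodable {
    struct Model: Decodable { let name: String }
    let models: [Model]
}

/// Sketch of model listing; error propagation is left to the caller.
func getAvailableModels() async throws -> [String] {
    let url = URL(string: "http://localhost:11434/api/tags")!
    var request = URLRequest(url: url)
    request.timeoutInterval = 5
    let (data, _) = try await URLSession.shared.data(for: request)
    let decoded = try JSONDecoder().decode(TagsResponse.self, from: data)
    // Sort for a stable, predictable picker order
    return decoded.models.map(\.name).sorted()
}
```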


92-134: Well-structured error handling and progress reporting in the enhancement logic.

The enhance method contains robust error handling and progress reporting:

  • Checks for Ollama availability before proceeding
  • Propagates errors appropriately
  • Reports progress at key points in the process
  • Logs useful diagnostic information
  • Returns the original text when enhancement fails or isn't possible

This helps ensure a good user experience even when things go wrong.


214-329: Comprehensive implementation of text enhancement via Ollama API.

The enhanceWithOllama method provides a thorough implementation:

  • Input validation
  • Well-constructed prompt format
  • Parameter validation
  • Proper HTTP request setup
  • Comprehensive error handling with descriptive messages
  • Progress reporting at multiple stages
  • Response parsing and cleanup
  • Fallback to original text when needed

The implementation should provide reliable enhancement functionality with good error recovery.
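
A stripped-down sketch of the request/response cycle described above, against Ollama's /api/generate endpoint with `stream: false`; the real prompt construction, progress reporting, and response cleanup are more involved:

```swift
import Foundation

struct GenerateRequest: Encodable {
    let model: String
    let prompt: String
    let stream: Bool
    let options: [String: Double]
}

struct GenerateResponse: Decodable { let response: String }

/// Hedged sketch of a non-streaming generation call; the prompt wording
/// here is illustrative, not the PR's default enhancement prompt.
func enhance(text: String, model: String, temperature: Double) async throws -> String {
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.timeoutInterval = 60 // generation can be slow on local models
    let body = GenerateRequest(
        model: model,
        prompt: "Improve the following transcription:\n\n\(text)",
        stream: false,
        options: ["temperature": temperature]
    )
    request.httpBody = try JSONEncoder().encode(body)
    let (data, _) = try await URLSession.shared.data(for: request)
    let decoded = try JSONDecoder().decode(GenerateResponse.self, from: data)
    let cleaned = decoded.response.trimmingCharacters(in: .whitespacesAndNewlines)
    // Fall back to the original text rather than return an empty result
    return cleaned.isEmpty ? text : cleaned
}
```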

Localizable.xcstrings (5)

203-207: New auto-capitalization feature strings.

The addition of "Disable auto-capitalization" and "Disable automatic capitalization in transcriptions" supplies the UI text for the new disableAutoCapitalization setting described in the PR objectives.


38-40: AI Enhancement feature localization keys added.

The addition of "AI Enhancement" and the related description strings supplies the UI text for the new AI enhancement feature described in the PR objectives.

Also applies to: 259-261, 672-673


422-432: Ollama status indicator strings added.

The Ollama connection status strings ("Ollama Connected", "Ollama Not Connected", "Ollama Status") provide clear feedback to users about the connection state for the AI enhancement feature, which is important for user experience when integrating with an external service.


152-153: Response style configuration strings added.

The strings related to "Creativity", "Precision", and "Response Style" along with explanatory text provide user-friendly controls for configuring the AI enhancement's behavior, allowing users to tailor the enhancement process to their needs.

Also applies to: 367-368, 492-493, 526-527


262-263: Enhancement prompt customization strings added.

The strings for "Enhancement Prompt", prompt editing instructions, and "Reset to Default" provide users with the ability to customize how the AI enhances their transcriptions, offering flexibility while maintaining the option to revert to defaults.

Also applies to: 371-372, 523-524, 655-656

Comment on lines +8 to +54
import ComposableArchitecture
import SwiftUI

struct AIEnhancementView: View {
    @Bindable var store: StoreOf<AIEnhancementFeature>
    @State private var showExpandedPrompt = false
    @State private var isHoveringModelSelect = false

    var body: some View {
        Form {
            // Activation Section (Always show this first)
            Section {
                activationToggle
            } header: {
                Text("AI Enhancement")
            } footer: {
                Text("Enable AI to improve transcription quality by fixing grammar, formatting, and errors.")
                    .foregroundColor(.secondary.opacity(0.7))
                    .font(.caption)
            }

            // Only show other settings if AI enhancement is enabled
            if store.hexSettings.useAIEnhancement {
                // Connection Status Section (only if AI enhancement is enabled)
                if !store.isOllamaAvailable {
                    Section {
                        connectionStatusView
                    } header: {
                        Text("Ollama Status")
                    }
                }

                // Model Selection Section
                modelSelectionSection

                // Temperature Control Section
                temperatureSection

                // Prompt Configuration Section
                promptSection
            }
        }
        .formStyle(.grouped)
        .task {
            await store.send(.task).finish()
        }
    }

💡 Verification agent

🧩 Analysis chain

Verify ability to detect Ollama timeout issues.

The current implementation relies on the AIEnhancementClient's availability check, but there might be edge cases where Ollama appears available but becomes unresponsive during actual usage.


🏁 Script executed:

#!/bin/bash
# Check the timeout handling in AIEnhancementClient's isOllamaAvailable method

# Look for timeout configuration in AIEnhancementClient
rg -A 2 "timeoutInterval" --glob "**/*EnhancementClient*"

# Check error handling for connection issues 
rg -A 5 "try await" --glob "**/*EnhancementClient*"

Length of output: 3778


🏁 Script executed:

#!/bin/bash
# Extract the isOllamaAvailable implementation
rg -A20 "func isOllamaAvailable" --glob "Hex/Clients/AIEnhancementClient.swift"

echo "--------------------------------------------------"

# Extract the generation call (enhanceWithOllama) implementation
rg -A20 "func enhanceWithOllama" --glob "Hex/Clients/AIEnhancementClient.swift"

Length of output: 4135


Handle Ollama timeouts and connectivity failures during enhancement

Although isOllamaAvailable() uses a 5 s timeout and returns false on error, the actual generation call in enhanceWithOllama(…) (60 s timeout) simply propagates any thrown error. This means the UI may think Ollama is available but then hang or crash on a timeout. We need to:

• Add explicit error handling around the URLSession call in enhanceWithOllama to catch timeouts and other network errors, then translate them into a user‐facing state (e.g. mark Ollama unavailable, show an error banner).
• In your feature reducer (or wherever .enhance is called), catch enhancement errors and dispatch an action to recheck availability or present an alert.

Example diff in AIEnhancementClient.swift:

     // generation call
-    let (data, response) = try await URLSession.shared.data(for: request)
+    let (data, response): (Data, URLResponse)
+    do {
+      (data, response) = try await URLSession.shared.data(for: request)
+    } catch {
+      // Treat timeouts and connectivity issues as “Ollama unavailable”
+      print("[AIEnhancementClientLive] Generation failed: \(error.localizedDescription)")
+      throw NSError(domain: "AIEnhancementClient",
+                    code: -1001, // NSURLErrorTimedOut or similar
+                    userInfo: [NSLocalizedDescriptionKey: "Ollama is unresponsive"])
+    }

And in your reducer:

  • Catch the thrown error from enhance, send a new .ollamaBecameUnavailable action.
  • In response, re-run isOllamaAvailable() and display an alert if still down.

This ensures the user never hits a silent timeout and always sees a clear “Ollama unavailable” state.

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

396-435: ⚠️ Potential issue

Compile-time error – missing argument labels when calling enhance

AIEnhancementClient.enhance expects external labels text:model:options:, but the call is missing them.

-let enhancedText = try await aiEnhancement.enhance(result, model, options) { _ in }
+let enhancedText = try await aiEnhancement.enhance(
+    text: result,
+    model: model,
+    options: options
+) { _ in }
🧹 Nitpick comments (2)
Hex/Features/Transcription/TranscriptionFeature.swift (2)

433-435: Consider documenting the cancellation decision more clearly.

The code avoids making the AI enhancement cancellable due to potential issues; this rationale should be documented for future maintainers.

// Don't make this cancellable to avoid premature cancellation
// This may have been causing the issue with the enhancement being cancelled
+// Note: AI enhancement operations may take longer than typical network operations.
+// Making this cancellable was causing the enhancement to be prematurely cancelled when
+// another action was dispatched or when the user interacted with the UI.

534-540: Commented-out code needs explanation.

The decision not to cancel AI enhancement during the cancel flow should be better documented.

// Don't cancel AI enhancement as it might cause issues
-// .cancel(id: CancelID.aiEnhancement),
+// AI enhancement is intentionally not cancelled here because:
+// 1. It might be near completion when cancelled, wasting processing
+// 2. It can cause race conditions with result handling
+// .cancel(id: CancelID.aiEnhancement),
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e51c96 and 27fef1e.

📒 Files selected for processing (6)
  • Hex.xcodeproj/project.pbxproj (6 hunks)
  • Hex/Clients/AIEnhancementClient.swift (1 hunks)
  • Hex/Clients/RecordingClient.swift (2 hunks)
  • Hex/Clients/TranscriptionClient.swift (4 hunks)
  • Hex/Features/Settings/SettingsFeature.swift (5 hunks)
  • Hex/Features/Transcription/TranscriptionFeature.swift (8 hunks)
✅ Files skipped from review due to trivial changes (1)
  • Hex.xcodeproj/project.pbxproj
🚧 Files skipped from review as they are similar to previous changes (4)
  • Hex/Clients/RecordingClient.swift
  • Hex/Features/Settings/SettingsFeature.swift
  • Hex/Clients/TranscriptionClient.swift
  • Hex/Clients/AIEnhancementClient.swift
🔇 Additional comments (8)
Hex/Features/Transcription/TranscriptionFeature.swift (8)

22-22: State tracking addition looks good.

Adding the isEnhancing state flag properly tracks when AI enhancement is active, maintaining consistency with other state flags in the struct.


50-55: Well-structured action cases for AI enhancement flow.

The new action cases properly handle all aspects of the AI enhancement flow, including state management, results, errors, and Ollama availability.


62-62: Cancel ID for AI enhancement added correctly.

This addition follows the established pattern for other cancellable operations in the codebase.


70-70: Dependency injection properly implemented.

The aiEnhancement dependency is correctly injected following the established pattern.


120-154: Well-implemented AI enhancement action handlers.

The implementation checks for Ollama connectivity issues specifically and triggers availability rechecks when needed. Good error handling and logging.


357-391: Good conditional flow for AI enhancement.

The transcription result handler now intelligently routes the result through AI enhancement when enabled, with proper state management.


438-460: AI enhancement result handler looks good.

The handler properly resets state and finalizes the transcript similarly to the original flow.


582-583: Good UI state prioritization for enhancing status.

The status logic correctly prioritizes showing the enhancing status before transcribing or recording states.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🔭 Outside diff range comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

158-164: 🛠️ Refactor suggestion

Update cancel guard condition to include isEnhancing

The cancel handler has a guard condition that only runs if isRecording or isTranscribing are true, but it doesn't check isEnhancing. This might lead to inconsistencies if cancel is triggered during enhancement.

Update the guard condition to include all relevant states:

case .cancel:
  // Only cancel if we're in the middle of recording or transcribing
- guard state.isRecording || state.isTranscribing else {
+ guard state.isRecording || state.isTranscribing || state.isEnhancing else {
    return .none
  }
  return handleCancel(&state)
♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

423-425: ⚠️ Potential issue

Missing argument labels in enhance method call

The AIEnhancementClient.enhance method likely requires argument labels (text:model:options:), but they're omitted here.

The call should include the external parameter labels to prevent compilation errors:

-let enhancedText = try await aiEnhancement.enhance(result, model, options) { progress in
+let enhancedText = try await aiEnhancement.enhance(
+  text: result,
+  model: model,
+  options: options
+) { progress in
  // Optional: Could update UI with progress information here if needed
}
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 423-423: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

🧹 Nitpick comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

423-425: Unused closure parameters flagged by static analysis

There are two instances of unused parameters in closures that should be replaced with _ as recommended by SwiftLint.

Apply these changes:

// In recheckOllamaAvailability handler
-return .run { send in
+return .run { _ in

// In enhanceWithAI method
-let enhancedText = try await aiEnhancement.enhance(...) { progress in
+let enhancedText = try await aiEnhancement.enhance(...) { _ in

Also applies to: 147-148

🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 423-423: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 27fef1e and 25679ae.

📒 Files selected for processing (1)
  • Hex/Features/Transcription/TranscriptionFeature.swift (8 hunks)
🧰 Additional context used
🪛 SwiftLint (0.57.0)
Hex/Features/Transcription/TranscriptionFeature.swift

[Warning] 147-147: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)


[Warning] 423-423: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

🔇 Additional comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

435-437: Uncancellable AI enhancement operation

The AI enhancement effect is deliberately not made cancellable (with a comment explaining it avoids premature cancellation), but this creates inconsistency with the cancel handler which resets the isEnhancing state flag without actually cancelling the operation.

This could lead to a situation where the UI shows enhancement is not happening (after cancel), but the enhancement is still running in the background. Consider the implications:

  1. What happens if the enhancement completes after cancellation?
  2. Could this lead to unexpected state or race conditions?

You might need additional safeguards in handleAIEnhancement to check if the operation was cancelled before applying the result.

Comment on lines +147 to +154
return .run { send in
let isAvailable = await aiEnhancement.isOllamaAvailable()
if !isAvailable {
// Could dispatch to a UI state to show an alert or notification
print("[TranscriptionFeature] Ollama is not available. AI enhancement is disabled.")
// Here you would typically update UI state to show an alert
}
}

💡 Verification agent

🧩 Analysis chain

Incomplete handling of Ollama unavailability

The rechecking logic detects Ollama unavailability but only logs it without updating any UI state or taking corrective action. Users won't be notified of the issue.

Consider updating to actually dispatch an action that updates the UI when Ollama is unavailable:

case .recheckOllamaAvailability:
  // Recheck if Ollama is available and update UI accordingly
  return .run { send in
    let isAvailable = await aiEnhancement.isOllamaAvailable()
    if !isAvailable {
      // Could dispatch to a UI state to show an alert or notification
      print("[TranscriptionFeature] Ollama is not available. AI enhancement is disabled.")
      // Here you would typically update UI state to show an alert
+     await send(.someActionToShowOllamaUnavailableAlert)
    }
  }

🏁 Script executed:

#!/bin/bash
# Verify if there's any UI state or user notification mechanism implemented elsewhere

# Look for UI alerts or notifications related to Ollama availability
rg -A 3 -B 3 "Ollama.*not available|ollamaUnavailable" --glob "*.swift"

Length of output: 2932


Dispatch a UI action in TranscriptionFeature for Ollama unavailability

The .recheckOllamaAvailability branch only logs the error but doesn’t surface it to the user. Your Settings view already shows a message when store.isOllamaAvailable is false—do the same in Transcription:

• File: Hex/Features/Transcription/TranscriptionFeature.swift (Lines 147–154)
• Add a new action in TranscriptionAction, e.g. .ollamaUnavailable
• Update the reducer:

case .recheckOllamaAvailability:
  return .run { send in
    let isAvailable = await aiEnhancement.isOllamaAvailable()
    if !isAvailable {
      print("[TranscriptionFeature] Ollama is not available. AI enhancement is disabled.")
+     await send(.ollamaUnavailable)
    }
  }

• Handle .ollamaUnavailable in the reducer to set a UI flag (e.g. state.showOllamaAlert = true)
• In TranscriptionView, bind an Alert to state.showOllamaAlert, similar to AIEnhancementView.swift

This ensures users receive an immediate notification when Ollama can’t be reached.

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 147-147: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

@0ni-x4

0ni-x4 commented May 13, 2025

i want this!!

@plyght
Contributor Author

plyght commented May 13, 2025

@0ni-x4 motivated me to finish this. will get it going!!

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (4)
Hex/Features/Transcription/TranscriptionFeature.swift (4)

154-162: Missing UI feedback for Ollama unavailability.

The rechecking logic detects Ollama unavailability but only logs it without updating any UI state or taking corrective action. Users won't be notified of the issue.

#!/bin/bash
# Verify if there's any UI state update or user notification mechanism implemented elsewhere

# Look for UI alerts or notifications related to Ollama availability
rg -A 3 -B 3 "Ollama.*not available|ollamaUnavailable" --glob "*.swift"
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 155-155: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)


579-581: Potential UI state inconsistency with commented cancellation code.

The cancel handler resets the isEnhancing state but explicitly avoids cancelling the AI enhancement effect. This creates a visual/UI inconsistency where the UI indicates cancellation but the operation continues in the background.

Consider these options:

  1. Make the AI enhancement cancellable and actually cancel it
  2. Add state tracking to ignore enhancement results if cancel was requested
  3. Document this behavior clearly for future maintainers
// Don't cancel AI enhancement as it might cause issues
// .cancel(id: CancelID.aiEnhancement),
+ // TODO: This creates a potential inconsistency - the UI shows cancellation
+ // but enhancement continues in background. Consider implementing a safer
+ // cancellation approach or state tracking to ignore late results.

138-147: ⚠️ Potential issue

Incomplete error handling in AI enhancement.

The comment states "For other errors, just use the original transcription," but the code returns .none which doesn't actually restore or use the original transcription result. When a non-Ollama error occurs, the transcription might be lost.

case let .aiEnhancementError(error):
  // Check if this is an Ollama connectivity error
  let nsError = error as NSError
  if nsError.domain == "AIEnhancementClient" && (nsError.code == -1001 || nsError.localizedDescription.contains("Ollama")) {
    print("AI Enhancement error due to Ollama connectivity: \(error)")
    return .send(.ollamaBecameUnavailable)
  } else {
    // For other errors, just use the original transcription
    print("AI Enhancement error: \(error)")
-   return .none
+   // Pass the stored original transcription through; `result` is not
+   // bound in this case, but state.pendingTranscription preserves it
+   return .send(.transcriptionResult(state.pendingTranscription ?? ""))
  }

433-475: ⚠️ Potential issue

Incorrect parameter passing in AI enhancement call.

The enhance method is called without parameter labels, but AIEnhancementClient.enhance requires external labels (text:, model:, options:).

-          let enhancedText = try await aiEnhancement.enhance(result, model, options) { progress in
+          let enhancedText = try await aiEnhancement.enhance(
+            text: result,
+            model: model,
+            options: options
+          ) { progress in
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 461-461: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

🧹 Nitpick comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

461-461: Unused parameter in closure.

The progress parameter in the closure is unused. Replace it with _ to avoid the SwiftLint warning.

-          let enhancedText = try await aiEnhancement.enhance(result, model, options) { progress in
+          let enhancedText = try await aiEnhancement.enhance(result, model, options) { _ in
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 461-461: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25679ae and e2ce7ca.

📒 Files selected for processing (5)
  • Hex/Clients/KeyEventMonitorClient.swift (3 hunks)
  • Hex/Clients/PasteboardClient.swift (2 hunks)
  • Hex/Clients/RecordingClient.swift (2 hunks)
  • Hex/Features/Transcription/TranscriptionFeature.swift (10 hunks)
  • Hex/Features/Transcription/TranscriptionIndicatorView.swift (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • Hex/Features/Transcription/TranscriptionIndicatorView.swift
🧰 Additional context used
🧬 Code Graph Analysis (1)
Hex/Features/Transcription/TranscriptionFeature.swift (3)
Hex/Clients/AIEnhancementClient.swift (2)
  • isOllamaAvailable (136-159)
  • enhance (96-133)
Hex/Clients/RecordingClient.swift (1)
  • observeAudioLevel (621-623)
Hex/Clients/TranscriptionClient.swift (1)
  • transcribe (206-247)
🪛 SwiftLint (0.57.0)
Hex/Features/Transcription/TranscriptionFeature.swift

[Warning] 155-155: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)


[Warning] 185-185: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)


[Warning] 461-461: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

🔇 Additional comments (19)
Hex/Clients/KeyEventMonitorClient.swift (2)

11-36: Well-designed thread safety implementation for Sauce library.

This new SafeSauce enum provides an excellent thread-safe wrapper around Sauce library calls, effectively preventing _dispatch_assert_queue_fail errors. The implementation correctly handles the case when code is already on the main thread versus when it needs to dispatch.
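
The pattern being praised can be sketched like this (names are illustrative; the real wrapper forwards to Sauce's API rather than taking a closure):

```swift
import Foundation

/// Sketch of a main-thread-safe wrapper in the spirit of SafeSauce.
/// `lookup` stands in for whichever main-thread-only call is wrapped.
enum MainThreadSafe {
    static func keyCode(_ lookup: @escaping () -> UInt16) -> UInt16 {
        if Thread.isMainThread {
            // Already on the main thread: call directly, no deadlock risk
            return lookup()
        } else {
            // Synchronously hop to the main queue to satisfy the
            // library's dispatch-queue assertions
            return DispatchQueue.main.sync { lookup() }
        }
    }
}
```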


47-47: Good use of the thread-safe wrapper.

The KeyEvent initializer now correctly uses the thread-safe wrapper to prevent potential crashes from background thread access.

Hex/Clients/RecordingClient.swift (2)

384-384: Appropriate use of let for immutable pointer allocation.

Changed from var to let which is correct since this pointer doesn't need to be mutable after allocation.


562-612: Excellent adaptive throttling for audio metering.

The implementation adds sophisticated adaptive throttling that:

  1. Adjusts sampling intervals based on activity level (80-150ms)
  2. Only emits updates when there are significant changes
  3. Has smart fallbacks to ensure UI responsiveness

This will reduce CPU usage and improve battery life while maintaining responsive UI feedback.
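
The three behaviors above can be sketched as a small throttle; the interval and delta thresholds here are illustrative, not the PR's exact values:

```swift
import Foundation

/// Sketch of adaptive throttling for audio meter updates.
struct MeterThrottle {
    private var lastEmit = Date.distantPast
    private var lastValue = 0.0

    mutating func shouldEmit(_ value: Double, now: Date = Date()) -> Bool {
        // Louder signal → sample faster (80 ms); quiet → back off (150 ms)
        let interval = value > 0.1 ? 0.080 : 0.150
        // Only emit when the level has moved meaningfully
        let significant = abs(value - lastValue) > 0.02
        guard now.timeIntervalSince(lastEmit) >= interval, significant else {
            return false
        }
        lastEmit = now
        lastValue = value
        return true
    }
}
```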

Hex/Clients/PasteboardClient.swift (6)

68-92: Efficient pasteboard backup implementation.

This new approach using NSPasteboard's native capabilities is more efficient than manually copying data items. The use of a unique temporary pasteboard is elegant and avoids potential data loss issues.
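
The temporary-pasteboard idea can be sketched roughly as follows; the real implementation's item copying, delays, and size limits differ:

```swift
import AppKit

/// Copy items onto a fresh pasteboard. NSPasteboardItem instances already
/// attached to a pasteboard cannot be reused, so each is duplicated.
private func copyItems(from source: NSPasteboard, to destination: NSPasteboard) {
    destination.clearContents()
    let copies = (source.pasteboardItems ?? []).map { item -> NSPasteboardItem in
        let copy = NSPasteboardItem()
        for type in item.types {
            if let data = item.data(forType: type) { copy.setData(data, forType: type) }
        }
        return copy
    }
    destination.writeObjects(copies)
}

/// Preserve the user's clipboard across a paste operation using a
/// uniquely named scratch pasteboard, then release it to avoid a leak.
func withPasteboardPreserved(_ body: () -> Void) {
    let general = NSPasteboard.general
    let backup = NSPasteboard(name: NSPasteboard.Name("hex.backup.\(UUID().uuidString)"))
    copyItems(from: general, to: backup)
    body()
    copyItems(from: backup, to: general)
    backup.releaseGlobally()
}
```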


95-109: Clean pasteboard restoration with proper cleanup.

The restoration process correctly handles the pasteboard state and cleans up the temporary pasteboard to avoid memory leaks.


112-115: Good practice keeping the legacy method with a warning.

Maintaining backward compatibility while clearly marking this method as deprecated will help with future code maintenance.


164-166: Smart conditional pasteboard backup.

Only saving the pasteboard state when clipboard retention is disabled is a good optimization that avoids unnecessary work.


180-211: Thread-safe key code retrieval and efficient event posting.

The thread safety check and main thread dispatching for key code retrieval aligns with the improvements in KeyEventMonitorClient. Using autoreleasepool for the event posting sequence ensures proper resource management.


222-229: Appropriate delay and cleanup handling.

Adding a delay before restoration gives the paste operation time to complete, and wrapping the restoration in an autoreleasepool helps with memory management.

Hex/Features/Transcription/TranscriptionFeature.swift (9)

22-22: Good addition of enhancing state tracking.

Adding the isEnhancing state variable allows proper UI feedback during the AI enhancement process.


50-56: Well-defined actions for AI enhancement flow.

The new actions clearly separate different aspects of the AI enhancement process, promoting a clean state management approach.


70-70: Added AI enhancement dependency.

Correctly added the dependency to access AI enhancement functionality.


89-97: Optimized audio level update logic.

The conditional update based on significant changes is a good optimization that reduces unnecessary state updates and UI refreshes.


183-214: Excellent audio meter update optimization.

The rate limiting and significance threshold for audio meter updates will reduce UI updates and improve performance, especially during quieter periods.

🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 185-185: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)


375-375: Good integration of HexSettings with transcription client.

Passing the settings to the transcription client enables features like disabling auto-capitalization.


394-429: Well-structured conditional AI enhancement.

The transcription result handler now correctly branches based on user settings, either proceeding to AI enhancement or finalizing the transcription directly. The code properly extracts necessary settings values.


477-500: Complete AI enhancement result handling.

The implementation correctly updates all relevant state variables and proceeds to finalize the recording with the enhanced transcript.


622-623: Good UI status prioritization.

Prioritizing the .enhancing state in the UI status logic ensures users get appropriate feedback during the AI enhancement process.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

466-509: 🛠️ Refactor suggestion

Fix argument labels in AIEnhancement.enhance call.

The code bypasses Swift's argument labels by directly accessing the enhance method. This is error-prone and less readable.

- // Access the raw value directly to avoid argument label issues
- let enhanceMethod = aiEnhancement.enhance
- let enhancedText = try await enhanceMethod(result, model, options) { progress in
+ // Use proper argument labels for better readability and type safety
+ let enhancedText = try await aiEnhancement.enhance(
+   text: result, 
+   model: model, 
+   options: options
+ ) { progress in
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 495-495: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

🧹 Nitpick comments (2)
Hex/Features/Transcription/TranscriptionFeature.swift (2)

234-242: Enhance meter update with for-where pattern.

Consider using a Swift for-where pattern to simplify the conditional meter updates.

- for await meter in await recording.observeAudioLevel() {
-   // Check if we should send this update
-   if await rateLimiter.shouldUpdate(meter: meter) {
-     // The Effect.run captures its function as @Sendable, so we're already on an appropriate context
-     // for sending actions. ComposableArchitecture handles dispatching to the main thread as needed.
-       await send(.audioLevelUpdated(meter))
-   }
+ for await meter in await recording.observeAudioLevel() where await rateLimiter.shouldUpdate(meter: meter) {
+   // The Effect.run captures its function as @Sendable, so we're already on an appropriate context
+   // for sending actions. ComposableArchitecture handles dispatching to the main thread as needed.
+     await send(.audioLevelUpdated(meter))
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 239-239: where clauses are preferred over a single if inside a for

(for_where)


495-495: Replace unused closure parameter with underscore.

The progress parameter in the closure is unused and should be replaced with an underscore.

- let enhancedText = try await enhanceMethod(result, model, options) { progress in
+ let enhancedText = try await enhanceMethod(result, model, options) { _ in
🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 495-495: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e2ce7ca and dd4356f.

📒 Files selected for processing (2)
  • Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved (2 hunks)
  • Hex/Features/Transcription/TranscriptionFeature.swift (10 hunks)
✅ Files skipped from review due to trivial changes (1)
  • Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved
🧰 Additional context used
🪛 SwiftLint (0.57.0)
Hex/Features/Transcription/TranscriptionFeature.swift

[Warning] 239-239: where clauses are preferred over a single if inside a for

(for_where)


[Warning] 201-201: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)


[Warning] 495-495: Unused parameter in a closure should be replaced with _

(unused_closure_parameter)


[Warning] 618-618: TODOs should be resolved (Consider implementing a safer ...)

(todo)

🔇 Additional comments (16)
Hex/Features/Transcription/TranscriptionFeature.swift (16)

22-23: State management for tracking AI enhancement progress looks good.

The new isEnhancing state properly tracks when AI enhancement is active, providing a clear separation between transcription and enhancement states.


27-27: Well-designed fallback mechanism for transcription.

Using pendingTranscription as a fallback storage is an excellent approach to ensure the original transcription is preserved in case of AI enhancement failures.


51-56: Comprehensive action enumeration for AI enhancement flow.

The action cases cover all necessary states for the AI enhancement flow including state changes, results, error handling, and Ollama availability checks.


63-63: LGTM: New cancellation ID added for AI enhancement.

Properly defines a cancellation ID for the AI enhancement operation.


71-71: LGTM: Dependency injection for AI enhancement client.

Correctly uses dependency injection for the AI enhancement client following TCA patterns.


90-98: Optimized meter updates improve UI performance.

The optimization to only update the meter when there's a significant change or during active recording reduces unnecessary UI updates.


131-133: Clean state management for AI enhancement status.

Simple and direct state management for the enhancing state.


135-136: Proper handling of AI enhancement results.

Correctly delegates to a helper method to handle successful AI enhancement results.


161-178: Properly implemented Ollama availability rechecking.

Correctly handles the case when Ollama becomes unavailable, showing an error to the user.


198-232: Well-implemented actor for meter rate limiting.

The MeterRateLimiter actor is a good implementation for Swift concurrency. It safely manages state and implements an adaptive threshold for meter updates.

🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 201-201: Initializing an optional variable with nil is redundant

(redundant_optional_initialization)
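The core idea behind such an adaptive throttle can be sketched without the actor wrapper. The names (`MeterThrottle`, `shouldUpdate`), default thresholds, and the injectable `now` parameter below are illustrative, not the actual `MeterRateLimiter` source; the real implementation wraps equivalent state in an actor so concurrent audio callbacks are serialized:

```swift
import Foundation

// Pass a meter update only when the level changed enough to matter, or when
// a periodic refresh interval has elapsed. Everything else is dropped, which
// is what keeps the indicator UI from redrawing on every audio callback.
struct MeterThrottle {
    private var lastValue: Double?
    private var lastUpdate: Date = .distantPast
    let minInterval: TimeInterval // periodic refresh, e.g. ~30 Hz
    let threshold: Double         // minimum change worth redrawing

    init(minInterval: TimeInterval = 1.0 / 30.0, threshold: Double = 0.05) {
        self.minInterval = minInterval
        self.threshold = threshold
    }

    mutating func shouldUpdate(meter: Double, now: Date = Date()) -> Bool {
        // First sample always passes (lastValue is nil).
        let changedEnough = lastValue.map { abs($0 - meter) >= threshold } ?? true
        let waitedEnough = now.timeIntervalSince(lastUpdate) >= minInterval
        guard changedEnough || waitedEnough else { return false }
        lastValue = meter
        lastUpdate = now
        return true
    }
}
```

Injecting `now` keeps the decision deterministic and testable; the actor version would simply call `Date()` internally.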


404-404: LGTM: Updated transcription call with settings parameter.

Properly passes the settings to respect configuration options like disableAutoCapitalization.


424-460: Well-designed enhancement decision flow.

Good conditional logic to determine whether AI enhancement should be applied based on user settings.


512-535: LGTM: Proper AI enhancement result handling.

The handler correctly updates all relevant state properties and proceeds with storing the transcript.


609-620: Addressed cancellation concerns with detailed comment.

The code intentionally avoids canceling AI enhancement operations to prevent Ollama API issues. The comment clearly explains the trade-off and plans for future improvements.

🧰 Tools
🪛 SwiftLint (0.57.0)

[Warning] 618-618: TODOs should be resolved (Consider implementing a safer ...)

(todo)


662-673: LGTM: Updated status logic with enhancement state.

The status logic correctly prioritizes showing the enhancing state when active.


138-159:


✅ Verification successful

Consider improving error recovery flow to avoid false positives.

The error handling correctly identifies Ollama connectivity issues, but there's a potential edge case where pendingTranscription might be nil, which would result in empty text being processed.


🏁 Script executed:

#!/bin/bash
# Check if there are cases where pendingTranscription could be nil when error handling is triggered

# Look for places where pendingTranscription is set to nil
rg "pendingTranscription = nil" --type swift

# Look for places where aiEnhancementError action is sent without setting pendingTranscription
rg "send\(\.aiEnhancementError" --type swift -A 10 -B 10

Length of output: 2321


To confirm where and how pendingTranscription is set, let’s list all usages in the reducer around that property:


🏁 Script executed:

#!/bin/bash
rg "pendingTranscription" -n Hex/Features/Transcription/TranscriptionFeature.swift -A 5 -B 5

Length of output: 2204


Error recovery flow is safe—no pendingTranscription nil scenario

  • state.pendingTranscription is set at line 429 before the AI enhancement call.
  • It’s only cleared on success at line 519, so in the .aiEnhancementError case it will always be non-nil.
  • The ?? "" fallback further guards against any unexpected nil, preventing crashes.

No further changes needed.

@kitlangton
Owner

Let me know when you're ready for a final review :D This looks like it's gonna be great!

@lkeude96

lkeude96 commented Jun 2, 2025

Super pumped for this @plyght, this will unlock a lot 🙌

@plyght
Contributor Author

plyght commented Jun 3, 2025

@lkeude96 appreciate it! sorry it's taking a while to finish... there are some performance issues :/

@kitlangton actually would be great if you could take a look and help me diagnose why the spinner wheel is super laggy!

@VipinReddy

I understand, and I might be asking for more, but is there an option to add a bigger model that my 48 GB MacBook Pro M4 can support, other than Ollama?

@plyght
Contributor Author

plyght commented Jun 3, 2025

@VipinReddy all https://ollama.com models will be supported for the text enhancement.

@devtanna

Super excited about this!

@plyght
Contributor Author

plyght commented Nov 17, 2025

@kitlangton would love your help on this to fix the performance of the bubble when transcribing, etc. I've seen that you'd like to reduce redundancy and the number of options in settings. Please let me know what you'd like me to fix or change here before merging. I will fix the merge conflicts!

@supnim

supnim commented Dec 16, 2025

buzzin for this!

@pierremouchan

Would love to see this added soon ;)

@0ni-x4

0ni-x4 commented Feb 24, 2026

@plyght

Copilot AI review requested due to automatic review settings April 10, 2026 17:20
@plyght plyght force-pushed the feat/ai-improvement branch from dd4356f to c1e89f2 on April 10, 2026 17:20

Copilot AI left a comment


Pull request overview

Adds an “AI Enhancement” feature to post-process transcriptions using a local Ollama model, plus several performance/UX tweaks around transcription metering and indicator states. The PR also includes pasteboard reliability changes and substantial Xcode project / SwiftPM resolution updates.

Changes:

  • Introduces AIEnhancementClient + Settings UI/Reducer to configure Ollama connectivity, model selection, prompt, and temperature.
  • Adds an “enhancing” state to the transcription flow and indicator UI, and throttles metering updates.
  • Updates pasteboard handling and modifies build/dependency configuration files.

Reviewed changes

Copilot reviewed 15 out of 15 changed files in this pull request and generated 11 comments.

Summary per file:

  • Localizable.xcstrings: Adds localization keys for AI enhancement UI.
  • HexCore/Sources/HexCore/Settings/HexSettings.swift: Adds AI enhancement settings + default prompt.
  • HexCore/Sources/HexCore/Logging.swift: Adds aiEnhancement logging category.
  • Hex/Features/Transcription/TranscriptionIndicatorView.swift: Adds “enhancing” indicator state and refactors view for performance.
  • Hex/Features/Transcription/TranscriptionFeature.swift: Adds AI enhancement flow and metering throttling; integrates enhancing state.
  • Hex/Features/Settings/SettingsFeature.swift: Scopes new AIEnhancementFeature into settings.
  • Hex/Features/Settings/AIEnhancementView.swift: New settings form for Ollama connection + prompt/model/temperature.
  • Hex/Features/Settings/AIEnhancementFeature.swift: New reducer to check Ollama availability and load models.
  • Hex/Features/App/AppFeature.swift: Adds a new app tab for AI Enhancement.
  • Hex/Clients/TranscriptionClient.swift: Extends transcribe API to accept settings; attempts to support disabling auto-capitalization.
  • Hex/Clients/RecordingClient.swift: Meter sampling throttling + minor pointer immutability fix.
  • Hex/Clients/PasteboardClient.swift: Minor pasteboard behavior/logging tweaks.
  • Hex/Clients/AIEnhancementClient.swift: New Ollama-backed enhancement client.
  • Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved: Updates (and removes) SwiftPM pins.
  • Hex.xcodeproj/project.pbxproj: Large project configuration + dependency reference changes, plus app naming/version/build-setting churn.
Comments suppressed due to low confidence (1)

Hex/Features/Transcription/TranscriptionFeature.swift:472

  • transcription.transcribe was updated to accept an additional HexSettings? argument, but this call site still uses the old 4-parameter signature. This will fail to compile and also leaves the locally captured settings value unused. Pass settings (or state.hexSettings) into transcribe to match the new API.
    // Extract all required state values to local variables to avoid capturing inout parameter
    let model = state.hexSettings.selectedModel
    let language = state.hexSettings.outputLanguage
    let settings = state.hexSettings
    // recordingStartTime captured in handleTranscriptionResult
    
    state.isPrewarming = true

    return .run { [sleepManagement] send in
      // Allow system to sleep again
      await sleepManagement.allowSleep()

      var audioURL: URL?
      do {
        let capturedURL = await recording.stopRecording()
        guard !Task.isCancelled else { return }
        soundEffect.play(.stopRecording)
        audioURL = capturedURL

        // Create transcription options with the selected language
        // Note: cap concurrency to avoid audio I/O overloads on some Macs
        let decodeOptions = DecodingOptions(
          language: language,
          detectLanguage: language == nil, // Only auto-detect if no language specified
          chunkingStrategy: .vad,
        )
        
        let result = try await transcription.transcribe(capturedURL, model, decodeOptions) { _ in }
        


Comment on lines 565 to 569
state.isTranscribing = false
state.isPrewarming = false
state.isEnhancing = false // Reset the enhancing state
state.pendingTranscription = nil // Clear the pending transcription since enhancement succeeded


Copilot AI Apr 10, 2026


pendingTranscription is referenced here, but TranscriptionFeature.State does not define a pendingTranscription property anywhere in the module. This will not compile unless the state field is added (or the line is removed if it’s leftover from a previous approach).

Copilot uses AI. Check for mistakes.
Comment on lines +152 to +159
case let .aiEnhancementError(error):
if error is AIEnhancementError {
transcriptionFeatureLogger.notice("AI enhancement error (Ollama): \(error.localizedDescription)")
return .send(.ollamaBecameUnavailable)
} else {
transcriptionFeatureLogger.error("AI enhancement error: \(error.localizedDescription)")
return .none
}

Copilot AI Apr 10, 2026


On .aiEnhancementError, state.isEnhancing is never reset to false. If enhancement fails, the UI can remain stuck in the “enhancing” indicator state until the next successful run/cancel. Reset the relevant state flags (e.g., isEnhancing, possibly isTranscribing/isPrewarming) when handling enhancement errors.

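A framework-free sketch of the reset this comment asks for. `State`, `AIEnhancementError`, and `handleEnhancementError` below are stand-ins modeled on the PR's names, not the actual TCA reducer:

```swift
import Foundation

// Minimal model of the flags the reducer tracks during enhancement.
struct State {
    var isTranscribing = false
    var isPrewarming = false
    var isEnhancing = false
    var pendingTranscription: String?
}

enum AIEnhancementError: Error { case ollamaUnreachable }

// On any enhancement error, clear the indicator flags so the UI can never
// stay stuck in the "enhancing" state, and return the unenhanced transcript
// that was stashed before enhancement started as the fallback output.
func handleEnhancementError(_ error: Error, state: inout State) -> String? {
    state.isEnhancing = false
    state.isTranscribing = false
    state.isPrewarming = false
    let fallback = state.pendingTranscription
    state.pendingTranscription = nil
    return fallback
}
```

In the real reducer the same reset would happen inside the `.aiEnhancementError` case, with the fallback dispatched through the existing output path.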
Comment on lines +164 to +170
case .recheckOllamaAvailability:
return .run { send in
let isAvailable = await aiEnhancement.isOllamaAvailable()
if !isAvailable {
transcriptionFeatureLogger.notice("Ollama is not available. AI enhancement is disabled.")
}
}

Copilot AI Apr 10, 2026


This branch logs that “AI enhancement is disabled” when Ollama is unavailable, but it doesn’t actually update any state (e.g., hexSettings.useAIEnhancement = false) or notify the UI. Either update settings/state to reflect the disabled behavior, or adjust the log messaging and add a user-visible error path.

Comment on lines 703 to 706
return .merge(
.cancel(id: CancelID.transcription),
.cancel(id: CancelID.aiEnhancement),
.run { [sleepManagement] _ in

Copilot AI Apr 10, 2026


CancelID.aiEnhancement is cancelled here, but the AI enhancement effect is never marked .cancellable(id: CancelID.aiEnhancement, ...) (it’s explicitly commented as not cancellable). As written, this cancellation does nothing and is misleading—either make the enhancement effect cancellable or remove the unused cancel ID/cancel call.

Comment on lines +279 to +286
var text = results.map(\.text).joined(separator: " ")

// Use provided settings or default to auto-capitalization
let useAutoCapitalization = settings == nil ? true : !settings!.disableAutoCapitalization

// Convert to lowercase if auto-capitalization is disabled
if !useAutoCapitalization {
text = text.lowercased()

Copilot AI Apr 10, 2026


HexSettings does not define disableAutoCapitalization, but it’s accessed here (settings!.disableAutoCapitalization). This is a compile error. Either add disableAutoCapitalization to HexSettings (including coding keys/schema) or remove/replace this logic with an existing setting.

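A hedged sketch of the first option suggested above. The names mirror the PR's types but this is not the actual HexCore source: add the flag with a default, decode it with `decodeIfPresent` so older hex_settings.json files (missing the key) still load, and read it via optional chaining instead of force-unwrapping:

```swift
import Foundation

struct HexSettings: Codable {
    var disableAutoCapitalization: Bool = false

    enum CodingKeys: String, CodingKey {
        case disableAutoCapitalization
    }

    init(disableAutoCapitalization: Bool = false) {
        self.disableAutoCapitalization = disableAutoCapitalization
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Missing key (an old settings file) falls back to false.
        disableAutoCapitalization =
            try container.decodeIfPresent(Bool.self, forKey: .disableAutoCapitalization) ?? false
    }
}

// The transcription client can then apply the setting without `settings!`:
func normalize(_ text: String, settings: HexSettings?) -> String {
    guard settings?.disableAutoCapitalization == true else { return text }
    return text.lowercased()
}
```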
Comment on lines +74 to +101
// Memoize these calculations to prevent recalculating on every render
private func recordingOpacity(for power: Double, threshold: Double = 0.1) -> Double {
guard status == .recording else { return 0 }
return power < threshold ? power / threshold : 1
}

// Cache shadow colors based on status and power
@ViewBuilder
private func shadowEffect(averagePower: Double) -> some View {
switch status {
case .recording:
EmptyView()
.shadow(color: .red.opacity(averagePower), radius: 4)
.shadow(color: .red.opacity(averagePower * 0.5), radius: 8)
case .enhancing:
EmptyView()
.shadow(color: enhanceBaseColor.opacity(0.7), radius: 4)
.shadow(color: enhanceBaseColor.opacity(0.4), radius: 8)
case .transcribing, .prewarming:
EmptyView()
.shadow(color: transcribeBaseColor.opacity(0.7), radius: 4)
.shadow(color: transcribeBaseColor.opacity(0.4), radius: 8)
default:
EmptyView()
.shadow(color: .red.opacity(0), radius: 4)
.shadow(color: .red.opacity(0), radius: 8)
}
}

Copilot AI Apr 10, 2026


The helper methods recordingOpacity(for:) and shadowEffect(averagePower:) are no longer used anywhere in this view after the refactor. Leaving them in place adds dead code and makes the optimization intent harder to follow; either wire them into the new CapsuleWithEffects implementation or remove them.

Comment on lines 160 to 171
);
name = Hex;
packageProductDependencies = (
47E05E012D444EE900D26DA6 /* ComposableArchitecture */,
47E05E042D444EF800D26DA6 /* Sauce */,
47E05E092D44525B00D26DA6 /* Dependencies */,
47E05E0B2D44525B00D26DA6 /* DependenciesMacros */,
47E05E262D44555500D26DA6 /* WhisperKit */,
4765045D2D45900200C7EA60 /* Pow */,
476BAD3D2D47E7880088C61F /* Sparkle */,
B5045C962D78DED500D0A119 /* MarkdownUI */,
47C08AB52DE9F61B00564AE6 /* Inject */,
47512ABE2E14D8C9000E25BA /* WhisperKit */,
476316262E5FB31400913CDE /* HexCore */,
47E16A612EC6C9D300885CF7 /* FluidAudio */,
);

Copilot AI Apr 10, 2026


This project file update removes key dependencies (e.g., HexCore and Inject) from packageReferences / packageProductDependencies, but the app still imports those modules in many source files. As-is, the project will not build until those packages are re-added (or the imports/usages are removed/replaced).

Comment on lines 424 to 453
@@ -480,19 +446,10 @@
"@executable_path/../Frameworks",
);
MACOSX_DEPLOYMENT_TARGET = 15.0;
MARKETING_VERSION = 0.7.3;
OTHER_LDFLAGS = (
"-Xlinker",
"-interposable",
);
PRODUCT_BUNDLE_IDENTIFIER = com.kitlangton.Hex.debug;
PRODUCT_NAME = "Hex Debug";
RUNTIME_EXCEPTION_ALLOW_DYLD_ENVIRONMENT_VARIABLES = NO;
RUNTIME_EXCEPTION_ALLOW_JIT = NO;
RUNTIME_EXCEPTION_ALLOW_UNSIGNED_EXECUTABLE_MEMORY = NO;
RUNTIME_EXCEPTION_DEBUGGING_TOOL = NO;
RUNTIME_EXCEPTION_DISABLE_EXECUTABLE_PAGE_PROTECTION = NO;
RUNTIME_EXCEPTION_DISABLE_LIBRARY_VALIDATION = YES;
MARKETING_VERSION = 0.2.1;
OTHER_LDFLAGS = "";
PRODUCT_BUNDLE_IDENTIFIER = com.kitlangton.Hex;
PRODUCT_NAME = "$(TARGET_NAME)";
SWIFT_EMIT_LOC_STRINGS = YES;

Copilot AI Apr 10, 2026


This change set significantly rewrites build settings (e.g., MARKETING_VERSION, CURRENT_PROJECT_VERSION, bundle identifier/product name settings, linker flags) beyond the PR description’s stated goal of simplifying code signing. Please confirm these version/build-setting changes are intentional; otherwise, revert the unrelated configuration churn to avoid accidental release/version regressions.

Comment on lines 1 to 40
@@ -42,7 +24,8 @@
"kind" : "remoteSourceControl",
"location" : "https://github.com/EmergeTools/Pow",
"state" : {
"revision" : "1b4b1dda28c50b95f0872927ee2226fe8b58950e"
"revision" : "a504eb6d144bcf49f4f33029a2795345cb39e6b4",
"version" : "1.0.5"
}
},
{
@@ -51,34 +34,34 @@
"location" : "https://github.com/Clipy/Sauce",
"state" : {
"branch" : "master",
"revision" : "9c0de6c233f29d892e86dda68c2dd791aa10670c"
"revision" : "9ed4ca442cdd4be20449479b4e8f157ea96e7542"
}
},
{

Copilot AI Apr 10, 2026


Package.resolved drops previously used dependencies like Inject and FluidAudio, and also changes a large set of versions. Since the codebase still imports Inject (and conditionally FluidAudio), this resolution change is likely to break builds unless the project/dependencies were intentionally migrated. Consider restoring the removed pins or documenting the dependency migration and updating imports accordingly.

Comment on lines +19 to 22
},
"%.2f" : {

},

Copilot AI Apr 10, 2026


The string catalog now includes a "%.2f" entry, which looks like a numeric format specifier rather than user-facing text. If this was produced by string extraction from specifier: "%.2f", consider preventing it from being localized (or formatting the value without introducing a localizable key) to avoid cluttering translations with non-UI strings.

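One way to keep the specifier out of the catalog, sketched under the assumption that the value is a Double (the helper name below is illustrative, not from the PR): format the number into a plain String first, then pass it to `Text(verbatim:)` in the view so SwiftUI never treats it as a `LocalizedStringKey`:

```swift
import Foundation

// Numeric formatting is not user-facing copy, so do it outside the Text
// initializer that triggers string extraction.
func formattedTemperature(_ value: Double) -> String {
    String(format: "%.2f", value)
}

// In the settings view this would read:
//   Text(verbatim: formattedTemperature(store.temperature))
// Any String-typed Text initializer bypasses LocalizedStringKey extraction,
// so no "%.2f" entry lands in Localizable.xcstrings.
```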
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
Hex/Clients/TranscriptionClient.swift (1)

225-246: ⚠️ Potential issue | 🟠 Major

Apply the capitalization setting on the Parakeet path too.

The new post-processing only runs after WhisperKit. If the selected model is Parakeet, disableAutoCapitalization is ignored and users get different output depending on backend.

💡 Proposed fix
   func transcribe(
     url: URL,
     model: String,
     options: DecodingOptions,
     settings: HexSettings? = nil,
    progressCallback: @escaping (Progress) -> Void
   ) async throws -> String {
@@
     if isParakeet(model) {
@@
       let startTx = Date()
       let text = try await parakeet.transcribe(preparedClip.url)
       transcriptionLogger.info("Parakeet transcription took \(String(format: "%.2f", Date().timeIntervalSince(startTx)))s")
       transcriptionLogger.info("Parakeet request total elapsed \(String(format: "%.2f", Date().timeIntervalSince(startAll)))s")
-      return text
+      return normalizeTranscription(text, settings: settings)
     }
@@
-    var text = results.map(\.text).joined(separator: " ")
-    
-    // Use provided settings or default to auto-capitalization
-    let useAutoCapitalization = settings == nil ? true : !settings!.disableAutoCapitalization
-    
-    // Convert to lowercase if auto-capitalization is disabled
-    if !useAutoCapitalization {
-      text = text.lowercased()
-    }
-    
-    return text
+    let text = results.map(\.text).joined(separator: " ")
+    return normalizeTranscription(text, settings: settings)
   }
+
+  private func normalizeTranscription(_ text: String, settings: HexSettings?) -> String {
+    guard settings?.disableAutoCapitalization == true else { return text }
+    return text.lowercased()
+  }

Also applies to: 279-289

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Clients/TranscriptionClient.swift` around lines 225 - 246, The Parakeet
branch in transcribe(url:model:options:settings:progressCallback:) ignores
DecodingOptions.disableAutoCapitalization, so after getting text from
parakeet.transcribe(...) apply the same post-processing step used by the
WhisperKit path that respects options.disableAutoCapitalization (i.e., run the
capitalization/auto-capitalization transform conditioned on
options.disableAutoCapitalization), and mirror this fix in the other Parakeet
handling block around the code referenced (the second Parakeet path at lines
~279–289) so both Parakeet flows produce the same post-processed output as
WhisperKit.
Hex.xcodeproj/project.pbxproj (1)

564-570: ⚠️ Potential issue | 🟠 Major

Remove branch-based tracking from WhisperKit in project.pbxproj.

The project file specifies branch = main for WhisperKit at line 568, but the checked-in Package.resolved pins version 0.12.0 with a specific revision. This mismatch creates reproducibility issues—future dependency resolves could pull a different commit from main than what was reviewed in this PR. Change the requirement to pin an exact version or revision to match the resolved state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex.xcodeproj/project.pbxproj` around lines 564 - 570, The
XCRemoteSwiftPackageReference for "WhisperKit" (the block with isa =
XCRemoteSwiftPackageReference and repositoryURL
"https://github.com/argmaxinc/WhisperKit") currently uses branch = main; replace
the branch-based requirement with a pinned requirement matching Package.resolved
(either set requirement to an exactVersion = "0.12.0" or to the specific
revision hash from Package.resolved) so the project.pbxproj references the exact
version/revision instead of tracking main.
Hex/Features/Transcription/TranscriptionFeature.swift (1)

586-586: ⚠️ Potential issue | 🟠 Major

Avoid logging raw transcript content.

This logs full transcribed text directly. Keep content private (or log only length/metadata) to avoid PII leakage.

As per coding guidelines "Use the unified logging helper HexLog for all diagnostics ... use privacy annotations (, privacy: .private) for sensitive data like transcript text or file paths."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Transcription/TranscriptionFeature.swift` at line 586, Replace
the direct call to transcriptionFeatureLogger.info("Raw transcription:
'\(result)'") with the unified HexLog helper and avoid logging raw transcript
content; instead log non-sensitive metadata such as result.count or a masked
snippet and include the transcript as a private field using privacy: .private if
you must log it. Locate the logging in TranscriptionFeature.swift where
transcriptionFeatureLogger.info is used (the variable/result named result) and
change it to use HexLog (the project-wide logging helper) with privacy
annotations for the transcript and public metadata only.
♻️ Duplicate comments (1)
Hex/Features/Transcription/TranscriptionFeature.swift (1)

543-557: ⚠️ Potential issue | 🔴 Critical

Cancellation is reintroduced as non-functional for AI enhancement.

The reducer cancels CancelID.aiEnhancement (Line 705), but the enhancement effect is not cancellable. A canceled session can still deliver late .aiEnhancementResult and paste text after cancel.

🔒 Suggested fix
     return .merge(
       .send(.setEnhancingState(true)),
       .run { send in
         do {
           let enhancedText = try await aiEnhancement.enhance(result, model, options) { _ in }
           await send(.aiEnhancementResult(enhancedText, audioURL))
         } catch {
           transcriptionFeatureLogger.error("AI enhancement failed: \(error.localizedDescription)")
           await send(.aiEnhancementError(error))
         }
-      }
+      }
+      .cancellable(id: CancelID.aiEnhancement, cancelInFlight: true)
     )

Also applies to: 705-705

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Transcription/TranscriptionFeature.swift` around lines 543 -
557, The AI enhancement effect is not cancellable, so CancelID.aiEnhancement in
the reducer can't stop late .aiEnhancementResult deliveries; make the .run
effect that calls aiEnhancement.enhance cancellable by attaching the
cancellation identifier (CancelID.aiEnhancement) to that effect (the .run that
sends .aiEnhancementResult / .aiEnhancementError), or use the cancellable
variant of .run that accepts a Task handle and checks Task.isCancelled before
sending results from aiEnhancement.enhance; ensure the cancellation id
referenced is CancelID.aiEnhancement so late results are suppressed after
cancellation.
🧹 Nitpick comments (1)
Hex/Features/Settings/AIEnhancementFeature.swift (1)

89-90: Use the same default prompt constant that HexSettings persists.

resetToDefaultPrompt resets from EnhancementOptions.defaultPrompt, but new settings initialize from HexSettings.defaultAIEnhancementPrompt. Two sources of truth here can drift and make “Reset” inconsistent with a fresh install/default decode.

♻️ Proposed fix
             case .resetToDefaultPrompt:
-                state.$hexSettings.withLock { $0.aiEnhancementPrompt = EnhancementOptions.defaultPrompt }
+                state.$hexSettings.withLock { $0.aiEnhancementPrompt = HexSettings.defaultAIEnhancementPrompt }
                 return .none
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Settings/AIEnhancementFeature.swift` around lines 89 - 90, The
resetToDefaultPrompt branch uses EnhancementOptions.defaultPrompt causing a
mismatch with the persisted default; change it to use the same constant
HexSettings.defaultAIEnhancementPrompt so reset and fresh defaults match —
update the case .resetToDefaultPrompt inside state.$hexSettings.withLock where
aiEnhancementPrompt is set to assign HexSettings.defaultAIEnhancementPrompt
instead of EnhancementOptions.defaultPrompt (or remove the redundant
EnhancementOptions constant if unused elsewhere).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Hex/Clients/AIEnhancementClient.swift`:
- Around line 182-189: The request payload builds requestDict with top-level
temperature and max_tokens which Ollama ignores; update requestDict so
generation controls are nested under an "options" dictionary and rename
max_tokens to num_predict: include "options": ["temperature": temperature,
"num_predict": maxTokens] (or equivalent types) while keeping "model" and
"prompt"/"system" at the top level and preserving "stream": false; modify the
construction that creates requestDict (references: requestDict, model,
fullPrompt, temperature, maxTokens) to place those keys under "options" so
Ollama receives the correct generation parameters.

In `@Hex/Clients/TranscriptionClient.swift`:
- Around line 281-286: The code references a non-existent property
disableAutoCapitalization on HexSettings causing a compile error; add a Bool
property named disableAutoCapitalization (with a sensible default, e.g. false)
to the HexSettings struct in HexCore, and update TranscriptionClient to safely
read it via optional chaining (use settings?.disableAutoCapitalization) when
computing useAutoCapitalization in the method using the useAutoCapitalization
variable so the compilation and behavior are correct.

In `@Hex/Features/Settings/AIEnhancementFeature.swift`:
- Around line 38-52: The code currently triggers Ollama checks and model loading
even when state.useAIEnhancement is false; update the logic so .task does not
send .checkOllamaAvailability unless state.useAIEnhancement is true, and in the
.ollamaAvailabilityResult handler only return .send(.loadAvailableModels) when
isAvailable && state.useAIEnhancement; apply the same guard where similar logic
appears around modelsLoaded (the other block at lines ~69-76) so models are
never fetched or modelsLoaded mutated when useAIEnhancement is disabled.
- Around line 6-15: The file references HexSettings inside the
AIEnhancementFeature.State but doesn't import its defining module; add an import
for HexCore at the top of the file so HexSettings is resolvable. Specifically,
update the imports above the `@Reducer` declaration (where AIEnhancementFeature
and State are defined) to include HexCore so the compiler can find the public
HexSettings type.

In `@Hex/Features/Transcription/TranscriptionFeature.swift`:
- Around line 152-159: The aiEnhancementError handler currently only logs and
conditionally sends .ollamaBecameUnavailable, leaving isEnhancing/isTranscribing
and fallback output handling untouched; update the .aiEnhancementError(error)
branch to always reset enhancement/transcription state and emit any necessary
fallback output before returning: when error is AIEnhancementError keep the
transcriptionFeatureLogger.notice and send .ollamaBecameUnavailable but also
clear isEnhancing/isTranscribing (or dispatch the existing action that resets
those flags) and dispatch the existing fallback/output action so the transcript
isn't lost; when error is not AIEnhancementError log the error via
transcriptionFeatureLogger.error but likewise reset the flags and emit the same
fallback/output action (or send a specific .enhancementFailed action) instead of
simply returning .none so state cannot remain stuck.
- Around line 560-569: TranscriptionFeature.State is missing the
pendingTranscription property referenced by handleAIEnhancement; add an optional
property (e.g., var pendingTranscription: String? = nil) to
TranscriptionFeature.State so the line state.pendingTranscription = nil
compiles, and ensure any other uses of pendingTranscription in the feature match
this type and optional semantics.
- Around line 492-524: The non-AI branch short-circuits post-processing
(skipping the shared remapping/removal/normalization logic used by the AI path),
so update the branch that checks state.hexSettings.useAIEnhancement to invoke
the same shared post-processing used by the AI flow (e.g., call the centralized
handler such as handleAIEnhancement or extract the common finalization logic
into a shared method) before calling finalizeRecordingAndStoreTranscript; ensure
you preserve the same state updates (state.isTranscribing/state.isPrewarming)
and pass the same parameters (result, audioURL, duration, sourceAppBundleID,
sourceAppName, transcriptionHistory) so non-AI users receive identical
remapping/removal/normalization behavior as the enhanceWithAI path.

In `@Hex/Features/Transcription/TranscriptionIndicatorView.swift`:
- Around line 289-293: The glow effect is being applied unconditionally; update
the body(content:) in TranscriptionIndicatorView to only apply .glow when status
is a glowing state (e.g., .enhancing or .transcribing) and use the correct color
per state (use enhanceBaseColor.opacity(0.4) for .enhancing and the transcribing
blue color for .transcribing); for all other statuses
(recording/option-key/prewarming/etc.) omit the glow path (i.e., don't call
changeEffect or pass a no-op effect) so those states avoid the glow performance
path and maintain their intended styling.

---

Outside diff comments:
In `@Hex.xcodeproj/project.pbxproj`:
- Around line 564-570: The XCRemoteSwiftPackageReference for "WhisperKit" (the
block with isa = XCRemoteSwiftPackageReference and repositoryURL
"https://github.com/argmaxinc/WhisperKit") currently uses branch = main; replace
the branch-based requirement with a pinned requirement matching Package.resolved
(either set requirement to an exactVersion = "0.12.0" or to the specific
revision hash from Package.resolved) so the project.pbxproj references the exact
version/revision instead of tracking main.
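For reference, a pinned XCRemoteSwiftPackageReference takes roughly this shape in project.pbxproj (the exactVersion value below is the one the review cites from Package.resolved; verify it before committing):

```
/* XCRemoteSwiftPackageReference "WhisperKit" */ = {
    isa = XCRemoteSwiftPackageReference;
    repositoryURL = "https://github.com/argmaxinc/WhisperKit";
    requirement = {
        kind = exactVersion;
        version = 0.12.0;
    };
};
```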

In `@Hex/Clients/TranscriptionClient.swift`:
- Around line 225-246: The Parakeet branch in
transcribe(url:model:options:settings:progressCallback:) ignores
DecodingOptions.disableAutoCapitalization, so after getting text from
parakeet.transcribe(...) apply the same post-processing step used by the
WhisperKit path that respects options.disableAutoCapitalization (i.e., run the
capitalization/auto-capitalization transform conditioned on
options.disableAutoCapitalization), and mirror this fix in the other Parakeet
handling block around the code referenced (the second Parakeet path at lines
~279–289) so both Parakeet flows produce the same post-processed output as
WhisperKit.
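A self-contained sketch of the shared step both Parakeet branches could call; `DecodingOptions` here is a stub standing in for the real Hex type:

```swift
// Stub of the relevant slice of DecodingOptions; the real type lives in Hex.
struct DecodingOptions {
    var disableAutoCapitalization: Bool = false
}

// Mirror the WhisperKit path: lowercase only when the setting asks for it.
func postProcess(_ text: String, options: DecodingOptions) -> String {
    options.disableAutoCapitalization ? text.lowercased() : text
}
```

Calling this once after each `parakeet.transcribe(...)` call would keep both Parakeet flows consistent with the WhisperKit output.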

In `@Hex/Features/Transcription/TranscriptionFeature.swift`:
- Line 586: Replace the direct call to transcriptionFeatureLogger.info("Raw
transcription: '\(result)'") with the unified HexLog helper and avoid logging
raw transcript content; instead log non-sensitive metadata such as result.count
or a masked snippet and include the transcript as a private field using privacy:
.private if you must log it. Locate the logging in TranscriptionFeature.swift
where transcriptionFeatureLogger.info is used (the variable/result named result)
and change it to use HexLog (the project-wide logging helper) with privacy
annotations for the transcript and public metadata only.
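The masked-snippet idea can be sketched with a hypothetical helper (HexLog itself is project-specific, so only the masking is shown here):

```swift
// Hypothetical helper: surface length plus a short prefix instead of the
// raw transcript, so logs carry no sensitive content.
func maskedSnippet(_ transcript: String, keep: Int = 8) -> String {
    guard transcript.count > keep else {
        return String(repeating: "•", count: transcript.count)
    }
    return String(transcript.prefix(keep)) + "…(\(transcript.count) chars)"
}
```

With Apple's unified logging, the transcript itself would additionally be interpolated with `privacy: .private` if it must be logged at all.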

---

Duplicate comments:
In `@Hex/Features/Transcription/TranscriptionFeature.swift`:
- Around line 543-557: The AI enhancement effect is not cancellable, so
CancelID.aiEnhancement in the reducer can't stop late .aiEnhancementResult
deliveries; make the .run effect that calls aiEnhancement.enhance cancellable by
attaching the cancellation identifier (CancelID.aiEnhancement) to that effect
(the .run that sends .aiEnhancementResult / .aiEnhancementError), or use the
cancellable variant of .run that accepts a Task handle and checks
Task.isCancelled before sending results from aiEnhancement.enhance; ensure the
cancellation id referenced is CancelID.aiEnhancement so late results are
suppressed after cancellation.
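In TCA the fix is simply attaching `.cancellable(id: CancelID.aiEnhancement)` to the effect; the manual guard the prompt describes amounts to this (stub types, not the real reducer):

```swift
// Stub token standing in for Task.isCancelled / TCA's cancellation bookkeeping.
final class CancellationToken {
    private(set) var isCancelled = false
    func cancel() { isCancelled = true }
}

// Check for cancellation before delivering, so late .aiEnhancementResult
// sends are suppressed once the effect has been cancelled.
func deliverEnhancement(_ result: String,
                        token: CancellationToken,
                        send: (String) -> Void) {
    guard !token.isCancelled else { return }
    send(result)
}
```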

---

Nitpick comments:
In `@Hex/Features/Settings/AIEnhancementFeature.swift`:
- Around line 89-90: The resetToDefaultPrompt branch uses
EnhancementOptions.defaultPrompt causing a mismatch with the persisted default;
change it to use the same constant HexSettings.defaultAIEnhancementPrompt so
reset and fresh defaults match — update the case .resetToDefaultPrompt inside
state.$hexSettings.withLock where aiEnhancementPrompt is set to assign
HexSettings.defaultAIEnhancementPrompt instead of
EnhancementOptions.defaultPrompt (or remove the redundant EnhancementOptions
constant if unused elsewhere).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5fb2ede8-08e4-4d40-959d-ea5d96ee94b6

📥 Commits

Reviewing files that changed from the base of the PR and between dd4356f and c1e89f2.

📒 Files selected for processing (15)
  • Hex.xcodeproj/project.pbxproj
  • Hex.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved
  • Hex/Clients/AIEnhancementClient.swift
  • Hex/Clients/PasteboardClient.swift
  • Hex/Clients/RecordingClient.swift
  • Hex/Clients/TranscriptionClient.swift
  • Hex/Features/App/AppFeature.swift
  • Hex/Features/Settings/AIEnhancementFeature.swift
  • Hex/Features/Settings/AIEnhancementView.swift
  • Hex/Features/Settings/SettingsFeature.swift
  • Hex/Features/Transcription/TranscriptionFeature.swift
  • Hex/Features/Transcription/TranscriptionIndicatorView.swift
  • HexCore/Sources/HexCore/Logging.swift
  • HexCore/Sources/HexCore/Settings/HexSettings.swift
  • Localizable.xcstrings
✅ Files skipped from review due to trivial changes (2)
  • HexCore/Sources/HexCore/Logging.swift
  • Hex/Clients/PasteboardClient.swift
🚧 Files skipped from review as they are similar to previous changes (3)
  • Hex/Clients/RecordingClient.swift
  • Hex/Features/Settings/SettingsFeature.swift
  • Localizable.xcstrings

Comment on lines +182 to +189
let requestDict: [String: Any] = [
    "model": model,
    "prompt": fullPrompt,
    "temperature": temperature,
    "max_tokens": maxTokens,
    "stream": false,
    "system": "You are an AI that improves transcribed text while preserving meaning."
]

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In the current Ollama "Generate a response" API docs, should generation controls like temperature and max output tokens be under options, and is the token key num_predict (not max_tokens)?

💡 Result:

Yes, in the current Ollama "Generate a response" API (/api/generate), generation controls like temperature and max output tokens are specified under the top-level "options" object in the request body. The token limit key is "num_predict" (not "max_tokens").



🏁 Script executed:

cat -n Hex/Clients/AIEnhancementClient.swift | sed -n '175,200p'

Repository: kitlangton/Hex

Length of output: 1173


Ollama generation options must be nested in the options object with correct parameter keys.

temperature and max_tokens are sent at the top level of the request. The Ollama /api/generate endpoint requires generation controls under an options object, and the token limit key is num_predict (not max_tokens). Ollama will silently ignore the current top-level settings.

Suggested fix
         let requestDict: [String: Any] = [
             "model": model,
             "prompt": fullPrompt,
-            "temperature": temperature,
-            "max_tokens": maxTokens,
+            "options": [
+                "temperature": temperature,
+                "num_predict": maxTokens
+            ],
             "stream": false,
             "system": "You are an AI that improves transcribed text while preserving meaning."
         ]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Clients/AIEnhancementClient.swift` around lines 182 - 189, The request
payload builds requestDict with top-level temperature and max_tokens which
Ollama ignores; update requestDict so generation controls are nested under an
"options" dictionary and rename max_tokens to num_predict: include "options":
["temperature": temperature, "num_predict": maxTokens] (or equivalent types)
while keeping "model" and "prompt"/"system" at the top level and preserving
"stream": false; modify the construction that creates requestDict (references:
requestDict, model, fullPrompt, temperature, maxTokens) to place those keys
under "options" so Ollama receives the correct generation parameters.

Comment on lines +281 to +286
// Use provided settings or default to auto-capitalization
let useAutoCapitalization = settings == nil ? true : !settings!.disableAutoCapitalization

// Convert to lowercase if auto-capitalization is disabled
if !useAutoCapitalization {
    text = text.lowercased()

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -e

echo "Occurrences of disableAutoCapitalization:"
rg -n --type=swift '\bdisableAutoCapitalization\b'

echo
echo "HexSettings declarations / extensions / re-exports:"
rg -n --type=swift 'struct\s+HexSettings\b|extension\s+HexSettings\b|typealias\s+HexSettings\b|@_exported\s+import\s+HexCore|import\s+HexCore'

Repository: kitlangton/Hex

Length of output: 2392


🏁 Script executed:

cat -n HexCore/Sources/HexCore/Settings/HexSettings.swift

Repository: kitlangton/Hex

Length of output: 15107


disableAutoCapitalization does not exist in HexSettings—the code will not compile.

The HexSettings struct in HexCore/Sources/HexCore/Settings/HexSettings.swift has no disableAutoCapitalization property. Line 282 in Hex/Clients/TranscriptionClient.swift attempting to access settings!.disableAutoCapitalization will fail compilation.

🧰 Tools
🪛 SwiftLint (0.63.2)

[Warning] 282-282: Force unwrapping should be avoided

(force_unwrapping)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Clients/TranscriptionClient.swift` around lines 281 - 286, The code
references a non-existent property disableAutoCapitalization on HexSettings
causing a compile error; add a Bool property named disableAutoCapitalization
(with a sensible default, e.g. false) to the HexSettings struct in HexCore, and
update TranscriptionClient to safely read it via optional chaining (use
settings?.disableAutoCapitalization) when computing useAutoCapitalization in the
method using the useAutoCapitalization variable so the compilation and behavior
are correct.
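A minimal sketch of the missing setting plus the force-unwrap-free read (HexSettings below is a stub, not the real HexCore struct):

```swift
// Stub of the slice of HexSettings the prompt asks to add.
struct HexSettings: Codable {
    var disableAutoCapitalization: Bool = false
}

// Optional chaining with a default replaces the force unwrap SwiftLint flags.
func useAutoCapitalization(for settings: HexSettings?) -> Bool {
    !(settings?.disableAutoCapitalization ?? false)
}
```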

Comment on lines +6 to +15
import ComposableArchitecture
import Foundation
import SwiftUI

@Reducer
struct AIEnhancementFeature {
  @ObservableState
  struct State: Equatable {
    @Shared(.hexSettings) var hexSettings: HexSettings


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -e

echo "Top of AIEnhancementFeature.swift:"
sed -n '1,20p' Hex/Features/Settings/AIEnhancementFeature.swift

echo
echo "HexSettings declarations / HexCore imports / re-exports:"
rg -n --type=swift '@_exported\s+import\s+HexCore|import\s+HexCore|struct\s+HexSettings\b|typealias\s+HexSettings\b'

Repository: kitlangton/Hex

Length of output: 2689


🏁 Script executed:

# Check complete imports in AIEnhancementFeature.swift
head -30 Hex/Features/Settings/AIEnhancementFeature.swift

# Check for Hex module package definition and exports
find . -name "Package.swift" -o -name "module.modulemap" | head -5 | xargs cat 2>/dev/null

# Check if there's a Hex module bridging header or public exports
fd -e swift Hex/Hex.swift Hex/HexPublic.swift 2>/dev/null | xargs cat 2>/dev/null

# Check AppHexSettings.swift to understand the typealias context
cat -n Hex/Models/AppHexSettings.swift

Repository: kitlangton/Hex

Length of output: 2761


🏁 Script executed:

# Get all imports from the full AIEnhancementFeature.swift file
rg -n '^\s*import\s+' Hex/Features/Settings/AIEnhancementFeature.swift

# Check file length
wc -l Hex/Features/Settings/AIEnhancementFeature.swift

Repository: kitlangton/Hex

Length of output: 178


Import HexCore before referencing HexSettings.

This file uses HexSettings directly on line 15, but does not import HexCore. The type is defined in HexCore/Sources/HexCore/Settings/HexSettings.swift as a public struct. While a typealias exists in Hex/Models/AppHexSettings.swift, that file is not imported here, so the typealias is inaccessible. This will fail to compile.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Settings/AIEnhancementFeature.swift` around lines 6 - 15, The
file references HexSettings inside the AIEnhancementFeature.State but doesn't
import its defining module; add an import for HexCore at the top of the file so
HexSettings is resolvable. Specifically, update the imports above the `@Reducer`
declaration (where AIEnhancementFeature and State are defined) to include
HexCore so the compiler can find the public HexSettings type.

Comment on lines +38 to +52
case .task:
  return .send(.checkOllamaAvailability)

case .checkOllamaAvailability:
  return .run { send in
    let isAvailable = await aiEnhancement.isOllamaAvailable()
    await send(.ollamaAvailabilityResult(isAvailable))
  }

case let .ollamaAvailabilityResult(isAvailable):
  state.isOllamaAvailable = isAvailable

  // If Ollama is available, load models
  if isAvailable {
    return .send(.loadAvailableModels)

⚠️ Potential issue | 🟡 Minor

Don’t fetch models on view load while enhancement is off.

Opening this tab currently checks Ollama and can load models even with useAIEnhancement == false. Because modelsLoaded may rewrite selectedAIModel, simply visiting a disabled settings screen can still mutate persisted AI settings.

💡 Proposed fix
             case .task:
-                return .send(.checkOllamaAvailability)
+                guard state.hexSettings.useAIEnhancement else { return .none }
+                return .send(.checkOllamaAvailability)

Also applies to: 69-76

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Settings/AIEnhancementFeature.swift` around lines 38 - 52, The
code currently triggers Ollama checks and model loading even when
state.useAIEnhancement is false; update the logic so .task does not send
.checkOllamaAvailability unless state.useAIEnhancement is true, and in the
.ollamaAvailabilityResult handler only return .send(.loadAvailableModels) when
isAvailable && state.useAIEnhancement; apply the same guard where similar logic
appears around modelsLoaded (the other block at lines ~69-76) so models are
never fetched or modelsLoaded mutated when useAIEnhancement is disabled.

Comment on lines +152 to +159
case let .aiEnhancementError(error):
  if error is AIEnhancementError {
    transcriptionFeatureLogger.notice("AI enhancement error (Ollama): \(error.localizedDescription)")
    return .send(.ollamaBecameUnavailable)
  } else {
    transcriptionFeatureLogger.error("AI enhancement error: \(error.localizedDescription)")
    return .none
  }

⚠️ Potential issue | 🔴 Critical

Enhancement error path can stall state and lose output.

This handler treats every AIEnhancementError as Ollama availability loss, and the non-AIEnhancementError path returns .none. Both paths skip state reset/fallback output handling, so isEnhancing/isTranscribing can remain stuck and transcription can be dropped.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Transcription/TranscriptionFeature.swift` around lines 152 -
159, The aiEnhancementError handler currently only logs and conditionally sends
.ollamaBecameUnavailable, leaving isEnhancing/isTranscribing and fallback output
handling untouched; update the .aiEnhancementError(error) branch to always reset
enhancement/transcription state and emit any necessary fallback output before
returning: when error is AIEnhancementError keep the
transcriptionFeatureLogger.notice and send .ollamaBecameUnavailable but also
clear isEnhancing/isTranscribing (or dispatch the existing action that resets
those flags) and dispatch the existing fallback/output action so the transcript
isn't lost; when error is not AIEnhancementError log the error via
transcriptionFeatureLogger.error but likewise reset the flags and emit the same
fallback/output action (or send a specific .enhancementFailed action) instead of
simply returning .none so state cannot remain stuck.
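The reset-and-fall-back shape described above can be sketched with simplified stand-ins (State, Action, and AIEnhancementError here are stubs, not the real TCA feature):

```swift
struct State {
    var isEnhancing = false
    var isTranscribing = false
    var pendingTranscription: String? = nil
}

enum Action: Equatable {
    case ollamaBecameUnavailable
    case fallbackOutput(String)
}

enum AIEnhancementError: Error { case unavailable }

// Always reset the in-flight flags and emit the unenhanced transcript,
// whatever the error type, so state cannot stay stuck and output isn't lost.
func handleEnhancementError(_ error: Error, state: inout State) -> [Action] {
    state.isEnhancing = false
    state.isTranscribing = false
    var actions: [Action] = []
    if error is AIEnhancementError {
        actions.append(.ollamaBecameUnavailable)
    }
    if let pending = state.pendingTranscription {
        actions.append(.fallbackOutput(pending))
        state.pendingTranscription = nil
    }
    return actions
}
```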

Comment on lines +492 to +524
// First check if we should use AI enhancement
if state.hexSettings.useAIEnhancement {
  return enhanceWithAI(result: result, audioURL: audioURL, state: state)
} else {
  state.isTranscribing = false
  state.isPrewarming = false

  // If empty text, nothing else to do
  guard !result.isEmpty else {
    return .none
  }

  let duration = state.recordingStartTime.map { Date().timeIntervalSince($0) } ?? 0
  let sourceAppBundleID = state.sourceAppBundleID
  let sourceAppName = state.sourceAppName
  let transcriptionHistory = state.$transcriptionHistory

  return .run { send in
    do {
      try await finalizeRecordingAndStoreTranscript(
        result: result,
        duration: duration,
        sourceAppBundleID: sourceAppBundleID,
        sourceAppName: sourceAppName,
        audioURL: audioURL,
        transcriptionHistory: transcriptionHistory
      )
    } catch {
      await send(.transcriptionError(error, audioURL))
    }
  }
  .cancellable(id: CancelID.transcription)
}

⚠️ Potential issue | 🟠 Major

Non-AI path skips transcript post-processing.

When useAIEnhancement == false, this branch finalizes immediately and bypasses remapping/removal logic (and other shared normalization checks) now centralized in handleAIEnhancement. This changes behavior for all non-AI users.

♻️ Suggested fix
   if state.hexSettings.useAIEnhancement {
     return enhanceWithAI(result: result, audioURL: audioURL, state: state)
   } else {
-    state.isTranscribing = false
-    state.isPrewarming = false
-
-    guard !result.isEmpty else {
-      return .none
-    }
-
-    let duration = state.recordingStartTime.map { Date().timeIntervalSince($0) } ?? 0
-    let sourceAppBundleID = state.sourceAppBundleID
-    let sourceAppName = state.sourceAppName
-    let transcriptionHistory = state.$transcriptionHistory
-
-    return .run { send in
-      do {
-        try await finalizeRecordingAndStoreTranscript(
-          result: result,
-          duration: duration,
-          sourceAppBundleID: sourceAppBundleID,
-          sourceAppName: sourceAppName,
-          audioURL: audioURL,
-          transcriptionHistory: transcriptionHistory
-        )
-      } catch {
-        await send(.transcriptionError(error, audioURL))
-      }
-    }
-    .cancellable(id: CancelID.transcription)
+    return handleAIEnhancement(&state, result: result, audioURL: audioURL)
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Transcription/TranscriptionFeature.swift` around lines 492 -
524, The non-AI branch short-circuits post-processing (skipping the shared
remapping/removal/normalization logic used by the AI path), so update the branch
that checks state.hexSettings.useAIEnhancement to invoke the same shared
post-processing used by the AI flow (e.g., call the centralized handler such as
handleAIEnhancement or extract the common finalization logic into a shared
method) before calling finalizeRecordingAndStoreTranscript; ensure you preserve
the same state updates (state.isTranscribing/state.isPrewarming) and pass the
same parameters (result, audioURL, duration, sourceAppBundleID, sourceAppName,
transcriptionHistory) so non-AI users receive identical
remapping/removal/normalization behavior as the enhanceWithAI path.

Comment on lines +560 to 569
private func handleAIEnhancement(
  _ state: inout State,
  result: String,
  audioURL: URL
) -> Effect<Action> {
  state.isTranscribing = false
  state.isPrewarming = false
  state.isEnhancing = false // Reset the enhancing state
  state.pendingTranscription = nil // Clear the pending transcription since enhancement succeeded


⚠️ Potential issue | 🔴 Critical

pendingTranscription is referenced but not defined in state.

Line 568 writes state.pendingTranscription = nil, but TranscriptionFeature.State has no pendingTranscription member. This is a compile-time failure.

💡 Minimal fix
-    state.pendingTranscription = nil  // Clear the pending transcription since enhancement succeeded
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 private func handleAIEnhancement(
   _ state: inout State,
   result: String,
   audioURL: URL
 ) -> Effect<Action> {
   state.isTranscribing = false
   state.isPrewarming = false
   state.isEnhancing = false // Reset the enhancing state
-  state.pendingTranscription = nil // Clear the pending transcription since enhancement succeeded
🧰 Tools
🪛 SwiftLint (0.63.2)

[Warning] 560-560: Function body should span 60 lines or less excluding comments and whitespace: currently spans 62 lines

(function_body_length)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Transcription/TranscriptionFeature.swift` around lines 560 -
569, TranscriptionFeature.State is missing the pendingTranscription property
referenced by handleAIEnhancement; add an optional property (e.g., var
pendingTranscription: String? = nil) to TranscriptionFeature.State so the line
state.pendingTranscription = nil compiles, and ensure any other uses of
pendingTranscription in the feature match this type and optional semantics.

Comment on lines +289 to +293
func body(content: Content) -> some View {
  content.changeEffect(
    .glow(color: status == .enhancing ? enhanceBaseColor.opacity(0.4) : .red.opacity(0.4), radius: 6),
    value: status
  )

⚠️ Potential issue | 🟠 Major

The glow effect is still enabled for every visible state.

LightweightEffects always returns a Pow .glow, so recording/option-key/prewarming still pay for the effect path, and transcribing/prewarming now get a red glow instead of the blue transcribing color. That undercuts the perf work here and changes the indicator styling.

💡 Proposed fix
 struct LightweightEffects: ViewModifier {
   var status: TranscriptionIndicatorView.Status
   var enhanceBaseColor: Color
   
   func body(content: Content) -> some View {
-    content.changeEffect(
-      .glow(color: status == .enhancing ? enhanceBaseColor.opacity(0.4) : .red.opacity(0.4), radius: 6),
-      value: status
-    )
+    switch status {
+    case .transcribing, .prewarming:
+      content.changeEffect(.glow(color: .blue.opacity(0.4), radius: 6), value: status)
+    case .enhancing:
+      content.changeEffect(.glow(color: enhanceBaseColor.opacity(0.4), radius: 6), value: status)
+    default:
+      content
+    }
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Hex/Features/Transcription/TranscriptionIndicatorView.swift` around lines 289
- 293, The glow effect is being applied unconditionally; update the
body(content:) in TranscriptionIndicatorView to only apply .glow when status is
a glowing state (e.g., .enhancing or .transcribing) and use the correct color
per state (use enhanceBaseColor.opacity(0.4) for .enhancing and the transcribing
blue color for .transcribing); for all other statuses
(recording/option-key/prewarming/etc.) omit the glow path (i.e., don't call
changeEffect or pass a no-op effect) so those states avoid the glow performance
path and maintain their intended styling.



9 participants