feat: add OpenAI-compatible API provider for protocol generation #4
Merged
Conversation
Move provider-specific configuration into the provider object. Rename DefaultProtocolGenerator → ClaudeCLIProtocolGenerator with claudeBin as a stored property. PipelineQueue no longer carries claudeBin — it's encapsulated in the generator passed at construction.
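A minimal sketch of what the refactored shape could look like. The names `ProtocolGenerating`, `ClaudeCLIProtocolGenerator`, and `claudeBin` come from the PR; the method signature and process handling are assumptions for illustration.

```swift
import Foundation

// Sketch: the generator no longer receives claudeBin per call; each
// implementation carries its own configuration as stored state.
protocol ProtocolGenerating {
    func generateProtocol(from transcript: String) async throws -> String
}

struct ClaudeCLIProtocolGenerator: ProtocolGenerating {
    let claudeBin: URL  // stored property, fixed at construction

    func generateProtocol(from transcript: String) async throws -> String {
        let process = Process()
        process.executableURL = claudeBin
        // ... pipe transcript to stdin, collect stdout (omitted in this sketch)
        _ = process
        return ""
    }
}
```

With this shape, `PipelineQueue` only sees `ProtocolGenerating` and never needs to know a binary path exists.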
Add ProtocolProvider enum (claudeCLI, openAICompatible) and new AppSettings properties: protocolProvider, openAIEndpoint, openAIModel, openAIAPIKey (stored via KeychainHelper). Add httpError and connectionFailed cases to ProtocolError.
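The enum and settings surface might look roughly like this. Case and property names are from the PR description; the default values and `Codable` conformance are assumptions (the real `openAIAPIKey` is persisted through KeychainHelper rather than as a plain stored value).

```swift
import Foundation

// Sketch of the provider enum and the new AppSettings properties.
enum ProtocolProvider: String, CaseIterable, Codable {
    case claudeCLI
    case openAICompatible
}

struct AppSettings {
    var protocolProvider: ProtocolProvider = .claudeCLI      // assumed default
    var openAIEndpoint: String = "http://localhost:11434/v1" // assumed default (Ollama)
    var openAIModel: String = ""
    // openAIAPIKey lives in the Keychain via KeychainHelper, not in
    // UserDefaults, so the secret never lands in a plist on disk.
}
```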
HTTP client for OpenAI-compatible APIs (Ollama, LM Studio, llama.cpp). Uses URLSession.bytes for server-sent event streaming. Includes testConnection() for the settings UI to verify endpoint reachability.
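The per-line parsing could be sketched as below, assuming the standard OpenAI streaming shape: each event arrives as a line `data: {json}`, with a final `data: [DONE]` sentinel. `parseSSELine` and `ChatChunk` are illustrative names, not necessarily the PR's identifiers.

```swift
import Foundation

// Minimal decode of one streamed chat-completion chunk.
struct ChatChunk: Decodable {
    struct Choice: Decodable {
        struct Delta: Decodable { let content: String? }
        let delta: Delta
    }
    let choices: [Choice]
}

// Returns the text token carried by one SSE line, or nil for the
// [DONE] sentinel, non-data lines, invalid JSON, and role-only deltas.
func parseSSELine(_ line: String) -> String? {
    guard line.hasPrefix("data: ") else { return nil }
    let payload = String(line.dropFirst(6))
    guard payload != "[DONE]",
          let data = payload.data(using: .utf8),
          let chunk = try? JSONDecoder().decode(ChatChunk.self, from: data)
    else { return nil }
    return chunk.choices.first?.delta.content
}
```

The streaming side then reduces to iterating `URLSession`'s async byte stream, e.g. `for try await line in bytes.lines { if let token = parseSSELine(line) { … } }`.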
Add provider picker to Settings with conditional UI: Claude CLI shows binary picker, OpenAI-Compatible shows endpoint/model/API key fields and a "Test Connection" button. MeetingTranscriberApp creates the appropriate ProtocolGenerating implementation based on the selected provider.
SSE line parsing tests (content extraction, [DONE], empty delta, invalid JSON, role-only delta). AppSettings tests for new protocol provider properties (default, persistence, Keychain-backed API key). SettingsView tests for provider picker and conditional UI rendering.
What: Wider settings window (520px), full-width endpoint field, model Picker populated from the API, auto-fetch on section appear.
Reasoning:
- Problem: The endpoint URL wrapped and overlapped at 480px, the model TextField showed a duplicate placeholder, and there was no model discovery.
- Decision: VStack for the endpoint (full width), a Picker for the model once models are fetched, and auto-fetch via testConnection on appear. The button label changes from "Fetch Models" to "Refresh Models" once populated, and the first model is auto-selected if the current setting is not in the fetched list.
What: Replace the stored ProtocolGenerating instance with a factory closure that is called each time a job is processed.
Reasoning:
- Problem: Switching provider in Settings required restarting the watch loop, because the generator was fixed at PipelineQueue construction.
- Decision: A protocolGeneratorFactory closure captures current settings and is evaluated per job — hot-swapping between Claude CLI and OpenAI works without a restart.
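The factory-closure change could be sketched like this. `PipelineQueue` and `protocolGeneratorFactory` are named in the PR; the protocol body and `process` method are assumed for illustration.

```swift
import Foundation

// Sketch: the queue holds a closure instead of a fixed generator, so a
// provider switch in Settings takes effect on the very next job.
protocol ProtocolGenerating {
    func generateProtocol(from transcript: String) async throws -> String
}

final class PipelineQueue {
    let protocolGeneratorFactory: () -> ProtocolGenerating

    init(protocolGeneratorFactory: @escaping () -> ProtocolGenerating) {
        self.protocolGeneratorFactory = protocolGeneratorFactory
    }

    func process(transcript: String) async throws -> String {
        // Evaluated per job: reads the *current* settings each time,
        // rather than whatever was configured at construction.
        let generator = protocolGeneratorFactory()
        return try await generator.generateProtocol(from: transcript)
    }
}
```

The closure passed at construction would read AppSettings and return either a `ClaudeCLIProtocolGenerator` or an `OpenAIProtocolGenerator`, so no state in the queue ever goes stale.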
Summary
- `ProtocolGenerating` protocol refactored to remove the `claudeBin` parameter — provider config is now encapsulated in each implementation
- `ProtocolProvider` enum (`claudeCLI`, `openAICompatible`) with new AppSettings properties (endpoint, model, API key via KeychainHelper)
- `OpenAIProtocolGenerator` with SSE streaming via `URLSession.bytes` — supports Ollama, LM Studio, llama.cpp, and any OpenAI-compatible server
- Settings UI with model discovery via `/v1/models`, and connection test button
Test plan