Merged
2 changes: 1 addition & 1 deletion package.json
@@ -68,7 +68,7 @@
"@electron-toolkit/utils": "^4.0.0",
"@google/genai": "^1.30.0",
"@jxa/run": "^1.4.0",
-"@modelcontextprotocol/sdk": "^1.22.0",
+"@modelcontextprotocol/sdk": "^1.25.1",
"axios": "^1.13.2",
"better-sqlite3-multiple-ciphers": "12.4.1",
"cheerio": "^1.1.2",
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
@@ -47,13 +47,11 @@ const OPENAI_REASONING_MODELS = [
'gpt-5-nano',
'gpt-5-chat'
]
-const OPENAI_IMAGE_GENERATION_MODELS = [
-  'gpt-4o-all',
-  'gpt-4o-image',
-  'gpt-image-1',
-  'dall-e-3',
-  'dall-e-2'
-]
+const OPENAI_IMAGE_GENERATION_MODELS = ['gpt-4o-all', 'gpt-4o-image']
+const OPENAI_IMAGE_GENERATION_MODEL_PREFIXES = ['dall-e-', 'gpt-image-']
+const isOpenAIImageGenerationModel = (modelId: string): boolean =>
+  OPENAI_IMAGE_GENERATION_MODELS.includes(modelId) ||
+  OPENAI_IMAGE_GENERATION_MODEL_PREFIXES.some((prefix) => modelId.startsWith(prefix))
Comment on lines +50 to +54
🛠️ Refactor suggestion | 🟠 Major

Extract duplicated image generation detection logic to shared location.

The constants OPENAI_IMAGE_GENERATION_MODELS, OPENAI_IMAGE_GENERATION_MODEL_PREFIXES, and the predicate isOpenAIImageGenerationModel are duplicated in both openAICompatibleProvider.ts (lines 50-54) and openAIResponsesProvider.ts (lines 42-46). This violates the DRY principle and creates a maintenance burden.

Consider extracting this logic to:

  • A shared utility module (e.g., src/main/presenter/llmProviderPresenter/utils/modelDetection.ts), or
  • The BaseLLMProvider class as a protected static method

This ensures consistency and reduces the risk of divergence between the two implementations.


🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 50-54, the image-generation detection constants and the
isOpenAIImageGenerationModel predicate are duplicated elsewhere; extract
OPENAI_IMAGE_GENERATION_MODELS, OPENAI_IMAGE_GENERATION_MODEL_PREFIXES, and
isOpenAIImageGenerationModel into a single shared location (preferably
src/main/presenter/llmProviderPresenter/utils/modelDetection.ts or as a
protected static on BaseLLMProvider), export the predicate from that new
module/class, then replace the local definitions in this file (and the duplicate
in openAIResponsesProvider.ts) with imports calling the shared predicate, and
run tests/build to ensure no breakage.
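The extraction the review asks for could look like the following sketch. The module path and export names follow the review's suggestion; nothing below exists in the PR itself:

```typescript
// Hypothetical shared module, e.g.
// src/main/presenter/llmProviderPresenter/utils/modelDetection.ts
// Centralizes the detection logic currently duplicated in both providers.
export const OPENAI_IMAGE_GENERATION_MODELS = ['gpt-4o-all', 'gpt-4o-image']
export const OPENAI_IMAGE_GENERATION_MODEL_PREFIXES = ['dall-e-', 'gpt-image-']

// Matches either an exact model name or a known model-family prefix.
export const isOpenAIImageGenerationModel = (modelId: string): boolean =>
  OPENAI_IMAGE_GENERATION_MODELS.includes(modelId) ||
  OPENAI_IMAGE_GENERATION_MODEL_PREFIXES.some((prefix) => modelId.startsWith(prefix))
```

Both providers would then import `isOpenAIImageGenerationModel` from this one module instead of each defining it locally, so the lists can never drift apart.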


// Add supported image size constants
const SUPPORTED_IMAGE_SIZES = {
@@ -1524,7 +1522,7 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {
if (!this.isInitialized) throw new Error('Provider not initialized')
if (!modelId) throw new Error('Model ID is required')

-if (OPENAI_IMAGE_GENERATION_MODELS.includes(modelId)) {
+if (isOpenAIImageGenerationModel(modelId)) {
yield* this.handleImgGeneration(messages, modelId)
} else {
yield* this.handleChatCompletion(
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
@@ -39,13 +39,11 @@ const OPENAI_REASONING_MODELS = [
'gpt-5-nano',
'gpt-5-chat'
]
-const OPENAI_IMAGE_GENERATION_MODELS = [
-  'gpt-4o-all',
-  'gpt-4o-image',
-  'gpt-image-1',
-  'dall-e-3',
-  'dall-e-2'
-]
+const OPENAI_IMAGE_GENERATION_MODELS = ['gpt-4o-all', 'gpt-4o-image']
+const OPENAI_IMAGE_GENERATION_MODEL_PREFIXES = ['dall-e-', 'gpt-image-']
+const isOpenAIImageGenerationModel = (modelId: string): boolean =>
+  OPENAI_IMAGE_GENERATION_MODELS.includes(modelId) ||
+  OPENAI_IMAGE_GENERATION_MODEL_PREFIXES.some((prefix) => modelId.startsWith(prefix))

// Add supported image size constants
const SUPPORTED_IMAGE_SIZES = {
@@ -303,7 +301,7 @@ export class OpenAIResponsesProvider extends BaseLLMProvider {
if (!this.isInitialized) throw new Error('Provider not initialized')
if (!modelId) throw new Error('Model ID is required')

-if (OPENAI_IMAGE_GENERATION_MODELS.includes(modelId)) {
+if (isOpenAIImageGenerationModel(modelId)) {
yield* this.handleImgGeneration(messages, modelId)
} else {
yield* this.handleChatCompletion(
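Stepping back from the per-file hunks: the change swaps a pure exact-match list for exact names plus prefix families. A self-contained sketch of the behavioral difference (the `gpt-image-1-mini` ID is illustrative only, not something this PR mentions):

```typescript
// Old behavior: exact membership in a fixed list
const OLD_MODELS = ['gpt-4o-all', 'gpt-4o-image', 'gpt-image-1', 'dall-e-3', 'dall-e-2']
const oldCheck = (id: string): boolean => OLD_MODELS.includes(id)

// New behavior: two exact names plus two model-family prefixes
const NEW_MODELS = ['gpt-4o-all', 'gpt-4o-image']
const PREFIXES = ['dall-e-', 'gpt-image-']
const newCheck = (id: string): boolean =>
  NEW_MODELS.includes(id) || PREFIXES.some((p) => id.startsWith(p))

// Every ID the old list matched is still matched...
console.log(OLD_MODELS.every((id) => newCheck(id))) // true
// ...and hypothetical future family variants now match without a code change.
console.log(oldCheck('gpt-image-1-mini'), newCheck('gpt-image-1-mini')) // false true
```

This explains why `gpt-image-1`, `dall-e-3`, and `dall-e-2` could be dropped from the exact list: they are subsumed by the `gpt-image-` and `dall-e-` prefixes.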