
fix: handle invalid maxOutputTokens value (0 or undefined)#22016

Open
Jedeiah wants to merge 13 commits into anomalyco:dev from Jedeiah:fix/maxOutputTokens-validation

Conversation

@Jedeiah

@Jedeiah Jedeiah commented Apr 11, 2026

Issue for this PR

Closes #21858
Closes #21522

Type of change

  • Bug fix

What does this PR do?

Fixes maxOutputTokens returning 0 when model.limit.output is 0, undefined, or negative. Returns OUTPUT_TOKEN_MAX (32000) as fallback when model output limit is invalid.

When plugins pass maxOutputTokens=0 via chat.params hook, OpenCode 1.4.0+ requires >= 1, which causes "maxOutputTokens must be >= 1" error for local providers like Ollama.

How did you verify your code works?

  • Added 5 unit tests for the maxOutputTokens function covering edge cases (undefined, 0, negative, and valid values)
  • All tests pass

Screenshots / recordings

N/A (not a UI change)

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Fixes anomalyco#21858
Fixes anomalyco#21522

- Returns OUTPUT_TOKEN_MAX when model.limit.output is 0, undefined, or negative
- Adds unit tests for maxOutputTokens function
@github-actions bot added and later removed the needs:compliance label ("This means the issue will auto-close after 2 hours.") on Apr 11, 2026
@github-actions
Contributor

Thanks for updating your PR! It now meets our contributing guidelines. 👍

@wjiuxing

When will this PR be merged, please? I tried adding limit.context/output to the model info, but it doesn't work, as described in this comment: #22253 (comment)

@Jedeiah
Author

Jedeiah commented Apr 14, 2026

> When will this PR be merged? I tried adding limit.context/output to the model info, but it doesn't work, as described in this comment: #22253 (comment)

The root cause is that oh-my-openagent reads limit.output = 0 from the capability cache for some models, does no defensive validation, and passes the 0 straight through to OpenCode as maxOutputTokens=0. OpenCode 1.4.0+ requires maxOutputTokens >= 1, hence the error.


Here are reasonable temporary workarounds (you can hand them to an AI to apply):

Option A (custom models only, and it only works if you cover every model the agents use):

If you use a model declared in a custom provider, you can add limit.output under the corresponding model entry in opencode.json so that oh-my-openagent reads the correct value instead of the cached 0:

{
  "provider": {
    "your-provider": {
      "models": {
        "your-model-id": {
          "limit": {
            "output": 4096,
            "context": 128000
          }
        }
      }
    }
  }
}

Note: oh-my-openagent starts multiple agents, each using a different model.
You must add limit.output to every model involved; missing even one still triggers the error.
This does not work for standard models OpenCode recognizes built-in (e.g. anthropic/claude-sonnet).


Option B (fixes the root cause, recommended):

Patch oh-my-openagent's dist/index.js directly, filtering out values <= 0 at the three key paths where maxOutputTokens is written or passed through.

File path (macOS shown; the Windows path is similar):
~/.cache/opencode/packages/oh-my-openagent@latest/node_modules/oh-my-openagent/dist/index.js

Patch 1 - where applySessionPromptParams writes the value (search for model.maxTokens !== undefined):

Before:
...model.maxTokens !== undefined ? { maxOutputTokens: model.maxTokens } : {}

After:
...typeof model.maxTokens === "number" && model.maxTokens > 0 ? { maxOutputTokens: model.maxTokens } : {}

Patch 2 - where stored session params are read before sending a request (search for storedPromptParams.maxOutputTokens !== undefined):

Before:

if (storedPromptParams.maxOutputTokens !== undefined) {
  output.maxOutputTokens = storedPromptParams.maxOutputTokens;
}

After:

if (typeof storedPromptParams.maxOutputTokens === "number" && storedPromptParams.maxOutputTokens > 0) {
  output.maxOutputTokens = storedPromptParams.maxOutputTokens;
}

Patch 3 - where the compatibility result is written back (search for compatibility.maxTokens !== undefined):

Before:

if (compatibility.maxTokens !== undefined) {
  output.maxOutputTokens = compatibility.maxTokens;
} else {
  delete output.maxOutputTokens;
}

After:

if (typeof compatibility.maxTokens === "number" && compatibility.maxTokens > 0) {
  output.maxOutputTokens = compatibility.maxTokens;
} else {
  delete output.maxOutputTokens;
}

After patching, delete the model capability cache and restart:
rm ~/.cache/oh-my-opencode/model-capabilities.json # macOS/Linux
Windows: delete %USERPROFILE%\.cache\oh-my-opencode\model-capabilities.json

⚠️ Note: a plugin update may overwrite the patch; you will need to reapply it.

@Jedeiah
Author

Jedeiah commented Apr 16, 2026

Hi @maintainers, this PR has been open for a while and I've resolved the merge conflicts. The code passes all unit tests and meets the contributing guidelines. Could someone please review and merge it? This fixes the maxOutputTokens >= 1 error for issues #21858 and #21522. Thank you!

@Jedeiah
Author

Jedeiah commented Apr 16, 2026

Hi @thdxr @Brendonovich @kitlangton, this PR has been open for a while and I've resolved the merge conflicts. The code passes all unit tests and meets the contributing guidelines. Could someone please review and merge it? This fixes the maxOutputTokens >= 1 error for issues #21858 and #21522. Thank you!

@Jedeiah
Author

Jedeiah commented Apr 16, 2026

The unit test failure on Windows is a pre-existing flaky test on the base branch, not caused by my changes. I verified locally; the test passes on macOS.

This appears to be the same issue mentioned in PR #22587 where the author also noted: 'The unit (windows) CI failure is unrelated to this PR. This appears to be a pre-existing issue on the base branch.'

Could someone approve and merge this PR? Thank you!



Development

Successfully merging this pull request may close these issues.

[BUG] Ollama integration: Invalid maxOutputTokens error Invalid argument for parameter maxOutputTokens: maxOutputTokens must be >= 1

2 participants