opencode-fastmode

Workaround plugin for GPT-5.4 fast mode in OpenCode.

This package avoids slash-command hacks. It uses:

  • an OpenCode plugin that applies serviceTier: "priority" in chat.params
  • a small CLI that updates the persisted fast mode state

It is not first-class OpenCode fast mode support. It does not add /fast, prompt status UI, or model-level controls metadata inside OpenCode itself. It only applies the request option for supported model calls.

Because the toggle happens outside the chat flow, it does not require a model reply and does not add transcript noise.

Quick start

Local development

  1. Install the CLI from this repo:

```shell
npm install -g /absolute/path/to/opencode-fastmode
```

  2. Load the plugin from a global OpenCode plugin shim. Save the following as ~/.config/opencode/plugins/fastmode.js:

```javascript
export { FastmodePlugin, default } from "/absolute/path/to/opencode-fastmode/index.js"
```

  3. Restart OpenCode.

  4. Toggle and verify:

```shell
oc-fast on
oc-fast status
```

After publishing to npm

  1. Install:

```shell
npm install -g opencode-fastmode
```

  2. Add the plugin to ~/.config/opencode/opencode.jsonc:

```jsonc
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-fastmode"]
}
```

  3. Restart OpenCode.

What it supports

  • openai/gpt-5.4
  • all OpenCode agents that use openai/gpt-5.4
  • persisted state in ~/.config/opencode/fastmode.json

What it does not support

  • /fast inside OpenCode
  • prompt status line indicators
  • OpenCode model controls metadata
  • upstream migration behavior or compatibility aliases

State file

~/.config/opencode/fastmode.json is the shared state between the CLI and the OpenCode plugin.

  • oc-fast on|off|toggle writes to this file
  • the plugin reads this file for every chat.params call
  • if the file is missing, fast mode defaults to OFF

Example:

```json
{
  "models": {
    "openai/gpt-5.4": {
      "enabled": true
    }
  }
}
```

The state file is required in the current design: it is what makes the toggle persistent without requiring a model message, a slash command, or an OpenCode restart for every change.

If you delete it, the package will simply recreate default state on the next CLI write.

Install

1. Install the package for the CLI

After publishing to npm:

```shell
npm install -g opencode-fastmode
```

For local development:

```shell
npm install -g /absolute/path/to/opencode-fastmode
```

2. Load the plugin in OpenCode

After publishing to npm, add it to ~/.config/opencode/opencode.jsonc:

```jsonc
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-fastmode"]
}
```

For local development before publishing, you can load the repo directly from a global plugin file:

```javascript
export { FastmodePlugin, default } from "/absolute/path/to/opencode-fastmode/index.js"
```

Place that file in ~/.config/opencode/plugins/ and restart OpenCode.

CLI usage

```shell
oc-fast on
oc-fast off
oc-fast toggle
oc-fast status
oc-fast path
```

Example output:

```
Fast mode enabled for openai/gpt-5.4
```

To check the current state at any time:

```shell
oc-fast status
```
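The toggle semantics can be sketched as a pure function over the state-file shape described earlier (the name `toggleModel` is illustrative, not the package's actual export):

```javascript
// Sketch (assumption): oc-fast toggle flips the enabled flag for a model,
// treating an absent entry as OFF.
function toggleModel(state, model) {
  const models = Object.assign({}, state.models);
  const current = Boolean(models[model] && models[model].enabled);
  models[model] = { enabled: !current };
  return Object.assign({}, state, { models });
}
```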

How it works

When fast mode is enabled, the plugin checks each model call in chat.params. If the current model is openai/gpt-5.4, it sets:

```json
{
  "serviceTier": "priority"
}
```

No reasoning or verbosity settings are modified.

This mirrors the manual options.serviceTier = "priority" workaround people have discussed for OpenCode config overrides.
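In code, the check described above amounts to something like the following sketch (the hook wiring is simplified and `applyFastMode` is a hypothetical name; the real OpenCode plugin API may differ):

```javascript
// Sketch (assumption): apply serviceTier only for the supported model,
// leaving all other request params untouched.
function applyFastMode(model, params, fastModeEnabled) {
  if (fastModeEnabled && model === "openai/gpt-5.4") {
    // Only the request option changes; reasoning/verbosity are untouched.
    return Object.assign({}, params, { serviceTier: "priority" });
  }
  return params;
}
```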

Verify it is active

  • run oc-fast status
  • make sure your active model is openai/gpt-5.4
  • restart OpenCode after changing plugin installation/config

Development

Run tests:

```shell
npm test
```

Publish

  1. Create a GitHub repo
  2. Push this project
  3. Publish to npm:

```shell
npm publish
```

Then switch your OpenCode config to the npm package name and remove any local plugin shim.
