Drive your local OpenCode AI coding agent from a Feishu / Lark bot — even from your phone, even on a flaky VPN.
A small, resilient Python bridge that connects Feishu's WebSocket event stream to a locally-running OpenCode HTTP server. Send a message in Feishu → AI works on your code → streaming reply (with live tool-call visibility) lands back in chat.
Feishu user ──► Feishu WS ──► bridge ──► opencode HTTP/SSE ──► AI reply ──► Feishu
You want to keep an AI coding assistant nudging your project forward while you're in a meeting, on your commute, or otherwise indisposed. OpenCode is great in the terminal — but it's tied to one machine. This bridge gives you a remote control.
- Streaming replies — the bot message updates live as OpenCode thinks
- Tool-call visibility — see 💻 `bash`, 📖 `read`, ✏️ `edit`, etc. with status icons (⏳/▶️/✅/❌) in real time
- Cancel in-flight turns — `/cancel` aborts the running turn cleanly via opencode's abort endpoint
- Interactive questions — the AI can ask you to pick an option (see Interactive questions below); answer by replying with a number or the option text
- Rich-text rendering via Feishu `post` messages (with plain-text fallback)
- Feishu WebSocket — no public IP, no webhooks, no tunnels needed
- Resilient against unstable networks (VPN drops, captive portals):
- SDK auto-reconnects + supervisor restart loop
- Inbound message dedup (Feishu re-delivers after reconnect)
- Outbound API retries with exponential backoff
- OpenCode subprocess auto-restart
- Conversation ↔ session state persisted to disk; survives bridge restarts
- Long AI tasks keep running across Feishu blips; result delivered on reconnect
- In-chat commands: `/help`, `/new`, `/cwd <path>`, `/cancel`, `/status`
- Optional user allowlist so randos in your group can't drive your laptop
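The inbound dedup listed above can be sketched as a TTL set. This `TTLSet` is a hypothetical illustration of the idea, not the bridge's actual class:

```python
import time

class TTLSet:
    """Remember ids for a limited time; re-delivered ids show up as duplicates."""

    def __init__(self, ttl_s: float = 600.0):
        self.ttl_s = ttl_s
        self._seen: dict[str, float] = {}  # id -> expiry timestamp

    def seen_before(self, item_id: str) -> bool:
        now = time.monotonic()
        # Drop expired entries so the set doesn't grow without bound.
        self._seen = {k: t for k, t in self._seen.items() if t > now}
        if item_id in self._seen:
            return True
        self._seen[item_id] = now + self.ttl_s
        return False
```

Each incoming Feishu `message_id` would pass through `seen_before`, so a re-delivery after a reconnect is silently dropped.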
- macOS or Linux
- Python 3.10+
- `opencode` installed and configured (model + auth)
- A Feishu/Lark "self-built" application — see Feishu setup below
```bash
git clone https://github.com/Bojun-Vvibe/feishu-opencode-bridge.git
cd feishu-opencode-bridge

# Use uv (recommended) ...
uv venv
uv pip install -e .

# ... or plain pip:
python3 -m venv .venv
.venv/bin/pip install -e .
```

```bash
export FEISHU_APP_ID=cli_xxxxxxxxxxxxxxxx
export FEISHU_APP_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export OPENCODE_CWD=/path/to/your/project   # AI will operate here

.venv/bin/feishu-opencode-bridge
# or: .venv/bin/python -m feishu_opencode_bridge.main
```

You'll see something like:

```
[INFO] workspace = /path/to/your/project
[INFO] starting opencode: opencode serve --hostname 127.0.0.1 --port 61088
[INFO] opencode ready on 127.0.0.1:61088
[INFO] starting Feishu WebSocket client
[INFO] bridge ready, waiting for Feishu messages…
[Lark] connected to wss://msg-frontier.feishu.cn/...
```
Now open Feishu, find your bot, and say hi.
All config is via environment variables:
| Variable | Default | Purpose |
|---|---|---|
| `FEISHU_APP_ID` | — | Required. Your Feishu app's App ID |
| `FEISHU_APP_SECRET` | — | Required. Your Feishu app's App Secret |
| `OPENCODE_CWD` | cwd | Working directory the AI sees |
| `FEISHU_ALLOWED_USERS` | empty | Comma-separated `open_id` allowlist (empty = unrestricted) |
| `OPENCODE_CMD` | `opencode` | Override the opencode binary path |
| `BRIDGE_STATE_DIR` | `~/.feishu-opencode-bridge` | Where to keep persistent state |
| `BRIDGE_EDIT_THROTTLE_S` | `1.5` | Min seconds between streaming edits to a Feishu message |
| `BRIDGE_MAX_MESSAGE_LEN` | `8000` | Truncate replies longer than this (chars) |
| `BRIDGE_DEDUP_TTL_S` | `600` | TTL for inbound message-id dedup |
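Reading this kind of env-var config is straightforward; here is an illustrative sketch using the variable names from the table (the dataclass and `load_config` are hypothetical, not the bridge's actual code):

```python
import os
from dataclasses import dataclass

@dataclass
class BridgeConfig:
    app_id: str
    app_secret: str
    cwd: str
    allowed_users: list[str]
    edit_throttle_s: float
    max_message_len: int

def load_config(env=os.environ) -> BridgeConfig:
    # FEISHU_APP_ID / FEISHU_APP_SECRET are required; everything else defaults.
    return BridgeConfig(
        app_id=env["FEISHU_APP_ID"],
        app_secret=env["FEISHU_APP_SECRET"],
        cwd=env.get("OPENCODE_CWD", os.getcwd()),
        allowed_users=[u for u in env.get("FEISHU_ALLOWED_USERS", "").split(",") if u],
        edit_throttle_s=float(env.get("BRIDGE_EDIT_THROTTLE_S", "1.5")),
        max_message_len=int(env.get("BRIDGE_MAX_MESSAGE_LEN", "8000")),
    )
```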
| Command | Effect |
|---|---|
| `/help` | Show command list |
| `/new` | Start a fresh OpenCode session (aborts any in-flight turn first) |
| `/cwd <path>` | Switch the AI's working directory; opens a new session |
| `/cancel` (alias `/stop`) | Abort the currently-running turn for this chat |
| `/status` | Show current session ID, working directory, any active turn, and any pending question |
Anything else is forwarded as a prompt to OpenCode.
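That command-versus-prompt split can be sketched like this (hypothetical names, not the bridge's actual dispatcher):

```python
COMMANDS = {"/help", "/new", "/cwd", "/cancel", "/stop", "/status"}

def route(text: str) -> tuple[str, str]:
    """Return ("command", text) for a known slash command, else ("prompt", text)."""
    stripped = text.strip()
    if stripped.startswith("/"):
        name = stripped.split(maxsplit=1)[0]
        if name in COMMANDS:
            return ("command", stripped)
    # Unknown slash-words and plain text alike are forwarded to OpenCode.
    return ("prompt", text)
```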
## Interactive questions

The AI can pause mid-turn to ask you a multiple-choice question. When it does, you'll see a card like:
```
❓ Fix strategy
Pick one approach
1. Quick patch — change only this spot
2. Refactor — fix it once and for all
Reply with a number or the option text · valid for 5 minutes
```
Answer by replying in chat with:
- a number (`1`), or
- the option label (case-insensitive; an unambiguous prefix also matches), or
- any text — but only if the AI flagged the question as accepting custom answers
After you answer, the AI continues in the same OpenCode session. If you don't reply within 5 minutes, the card turns grey and the question is dropped.
Notes:
- The card has no buttons — Feishu's WebSocket SDK silently drops card action events, so we use a display-only numbered list and match your plain-text reply. Works identically in 1-1 chats and groups.
- The AI is capped at 8 consecutive ask→answer cycles per user-initiated turn.
- To teach your own AI to use this feature, point it at the `## If you need a decision from the user` section of an `AGENTS.md` in its workspace.
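The matching rules above (number, case-insensitive label, unambiguous prefix, optional custom text) can be sketched as a small resolver. `match_answer` is a hypothetical helper written from the description, not the bridge's actual function:

```python
def match_answer(reply: str, options: list[str], allow_custom: bool = False):
    """Resolve a plain-text reply to one of the offered options, or None."""
    text = reply.strip()
    # 1. A bare number picks by position.
    if text.isdigit() and 1 <= int(text) <= len(options):
        return options[int(text) - 1]
    lowered = text.lower()
    # 2. Case-insensitive exact label match.
    exact = [o for o in options if o.lower() == lowered]
    if exact:
        return exact[0]
    # 3. Prefix match, but only if it's unambiguous.
    prefixed = [o for o in options if o.lower().startswith(lowered)] if lowered else []
    if len(prefixed) == 1:
        return prefixed[0]
    # 4. Free-form text only when the question allows custom answers.
    return text if allow_custom else None
```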
## Feishu setup

You need a Feishu/Lark self-built application with bot capability and a couple of permissions. You only do this once.
- Go to Feishu Open Platform and create a custom app.
- Add capability → Bot.
- Permissions — enable:
  - `im:message` (read messages)
  - `im:message.p2p_msg` (receive direct messages — required for 1:1 chats)
  - `im:message.group_at_msg` (receive group @ mentions)
  - `im:message:send_as_bot` (send messages as the bot)
- Events & Callbacks → Subscribe to event: `im.message.receive_v1` (receive messages)
- Events & Callbacks → Subscription mode: pick "Long connection (WebSocket)". No webhook URL needed.
- Version Management & Release → create a version and publish. (Without this, your changes don't take effect.)
- Copy your App ID and App Secret from "Credentials & Basic Info".
- Find your bot in the Feishu app (search by name) and start a chat.
For Lark (international): also set `FEISHU_DOMAIN=https://open.larksuite.com` (lark-oapi will pick this up).
The bridge is built around the assumption that the WebSocket will drop:
- The lark SDK's WS client reconnects on its own. We wrap it in a supervisor that restarts even if the SDK gives up.
- Inbound `message_id`s pass through a 10-minute TTL set so reconnect-redelivery doesn't cause double execution.
- Every Feishu API call (`send`, `edit`) retries up to 5× with exponential backoff (max 8 s).
- Every OpenCode HTTP call retries up to 4× with backoff. If the OpenCode process is dead, we restart it before retrying.
- If OpenCode restarts mid-conversation, the lost session is detected (404) and a fresh one is created automatically.
- `~/.feishu-opencode-bridge/state.json` keeps the chat → session mapping. Restarting the bridge keeps your context.
- The "💭 thinking…" placeholder uses `edit`; if Feishu was offline when the placeholder was sent, we fall back to a brand-new message so the answer always lands.
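The retry-with-backoff pattern used for both Feishu and OpenCode calls looks roughly like this. A minimal sketch, assuming a plain callable and delay numbers matching the Feishu case above; the bridge's real helper will differ in details:

```python
import time

def retry(fn, attempts: int = 5, base_delay_s: float = 0.5, max_delay_s: float = 8.0):
    """Call fn(), retrying on exception with exponential backoff capped at max_delay_s."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(min(base_delay_s * 2 ** attempt, max_delay_s))
```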
Helper scripts are provided. Copy `.env.example` to `.env` and fill in your Feishu credentials, then:

```bash
./scripts/start.sh          # start in background (refuses if already running)
./scripts/start.sh --force  # kill any existing bridge/opencode and restart
./scripts/stop.sh           # stop the bridge and any child opencode server
tail -f /tmp/feishu-opencode-bridge.log
```

`start.sh` waits up to 10 s for the bridge ready log line and reports success/failure before returning. PID is written to `/tmp/feishu-opencode-bridge.pid`.
If you'd rather do it by hand:
```bash
nohup env \
  PYTHONUNBUFFERED=1 \
  FEISHU_APP_ID=... FEISHU_APP_SECRET=... OPENCODE_CWD=... \
  .venv/bin/feishu-opencode-bridge > bridge.log 2>&1 &
echo $! > bridge.pid
```

A proper macOS launchd plist or systemd unit is left as an exercise — the bridge is a single long-running process; configure it like any other.
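If you go the systemd route, a unit along these lines would do; every path, the `User=` value, and the `.env` location are placeholders to adapt, not files this repo ships:

```ini
# /etc/systemd/system/feishu-opencode-bridge.service  (illustrative)
[Unit]
Description=Feishu to OpenCode bridge
After=network-online.target

[Service]
User=bridge
WorkingDirectory=/opt/feishu-opencode-bridge
EnvironmentFile=/opt/feishu-opencode-bridge/.env
ExecStart=/opt/feishu-opencode-bridge/.venv/bin/feishu-opencode-bridge
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```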
- You are giving an AI shell access to your machine under your user. OpenCode's `bash` tool can run anything you can run.
- Always run with `OPENCODE_CWD` pointing to a project directory, never your home.
- Set `FEISHU_ALLOWED_USERS` if your bot is in a group chat.
- For real isolation, run the bridge inside a Docker container or under a dedicated user account.
- Prefer `"allow"`/`"deny"` permission rules over `"ask"` in the opencode config the bridge loads. The bridge is headless; `"ask"` rules hang turns silently. See Troubleshooting.
- Inbound: text and `post` (rich-text) messages are handled. Rich-text is flattened to plain text (titles kept, links become `text (url)`, @mentions become `@name`, images/emotions are dropped). Other types (image / file / sticker / audio / card) get a one-line "unsupported" reply instead of being processed.
- One opencode `serve` instance per bridge process; multiple Feishu chats share it via separate sessions.
- `/cancel` aborts the LLM step but cannot undo side effects already produced by tools (e.g. files written or removed by `bash`/`rm`).
- Streaming edits are throttled (default 1.5 s) to stay under Feishu rate limits; tune via `BRIDGE_EDIT_THROTTLE_S`.
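The streaming-edit throttle amounts to a minimum interval between edits. An illustrative sketch, not the bridge's actual implementation:

```python
import time

class EditThrottle:
    """Permit an edit only if at least min_interval_s has passed since the last one."""

    def __init__(self, min_interval_s: float = 1.5):
        self.min_interval_s = min_interval_s
        self._last = 0.0

    def ready(self) -> bool:
        now = time.monotonic()
        if now - self._last >= self.min_interval_s:
            self._last = now
            return True
        return False  # caller buffers the text and tries again on the next chunk
```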
## Troubleshooting

### The bot goes silent mid-turn

The LLM most likely called a tool that requires user approval. The bridge runs `opencode serve` headless — there is no UI to click "allow", so any tool call whose permission resolves to `ask` hangs the turn forever.
Check your opencode config (`~/.config/opencode/opencode.json` or a per-project `opencode.json` inside `OPENCODE_CWD`) for any rule set to `"ask"`. Typical culprits:
```json
{
  "permission": {
    "edit": "ask",
    "bash": { "*": "ask" },
    "external_directory": { "*": "ask" }
  }
}
```

Fix by flipping those to `"allow"` (or `"deny"` if you want to block them outright). You can also narrow `OPENCODE_CWD` to a directory whose paths never trip the gated rules.
The bridge emits a WARNING at startup listing any `ask` rules it found in the config it will load — scan `bridge.log` for `opencode config ... contains 'ask' permission rules` if you're not sure.
See the opencode permissions docs for the full rule syntax.
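A scan like the bridge's startup warning is a short recursive walk over the permission mapping. This `find_ask_rules` is a hypothetical sketch of the idea:

```python
def find_ask_rules(permission: dict, prefix: str = "") -> list[str]:
    """Walk an opencode-style permission mapping and list paths set to "ask"."""
    hits = []
    for key, value in permission.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            # Nested rule tables (e.g. per-command bash rules) recurse.
            hits.extend(find_ask_rules(value, path + "."))
        elif value == "ask":
            hits.append(path)
    return hits
```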
### A message gets no reply

Tail `bridge.log` and look for `recv from ... :` followed by your text.
- Line present, no `POST /session/.../message` afterwards — the turn is stuck on an `ask` permission (see above).
- Line absent entirely — the message either wasn't text/`post` (the bridge logs `unsupported message_type=<type>` and replies once), or the Feishu event didn't reach the bridge. If it's the latter, check that the long connection is still up (`[Lark] connected to wss://...` should be the most recent Lark log line) and that the app is still published in the Feishu admin console. Any config change requires a new version release to take effect.
### Credential errors

Verify the credentials directly — this rules out bridge code issues:
```bash
curl -s -X POST https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal \
  -H "Content-Type: application/json" \
  -d '{"app_id":"<APP_ID>","app_secret":"<APP_SECRET>"}'
```

`code: 0` means the credentials are good. Anything else — copy the secret again (the "show password" icon in the Feishu console occasionally renders truncated), or click the reset icon to rotate the secret.
PRs welcome.
Comparison: why not code-while-shit / cws?
cws is a bigger bridge that supports multiple agent backends (claude-code, codex, opencode). At the time of writing, its opencode backend hasn't been validated against modern OpenCode (1.14+), uses an outdated CLI flag, and guesses at the HTTP shape (`POST /chat`, which doesn't exist).
feishu-opencode-bridge is intentionally narrower:
| | cws | this project |
|---|---|---|
| Agents | claude-code, codex, opencode | opencode only |
| Lines of code | ~3000 | ~900 |
| Streaming replies | no | yes |
| Tool-call visibility in chat | no | yes |
| Resilience features | basic | designed for VPN flaps |
| In-chat approval cards | yes (claude-code/codex) | no (opencode auto-approves) |
Use cws if you want claude-code or codex. Use this if you want OpenCode and a small surface area you can read in one sitting.
MIT — see LICENSE.