Terminal coding agent for DeepSeek V4. It runs from the `deepseek` command, streams reasoning blocks, edits local workspaces with approval gates, and includes an auto mode that chooses both model and thinking level per turn.
deepseek is distributed as Rust binaries: the dispatcher command
(deepseek) and the companion TUI runtime (deepseek-tui). Pick whichever
install path you already use; they all put the same commands on your PATH.
The npm package is an installer/wrapper for the release binaries, not the
agent runtime itself.
# 1. npm – easiest if you already use Node. The package downloads the
# matching prebuilt Rust binaries from GitHub Releases.
npm install -g deepseek-tui
# 2. Cargo – no Node needed.
cargo install deepseek-tui-cli --locked # `deepseek` (entry point)
cargo install deepseek-tui --locked # `deepseek-tui` (TUI binary)
# 3. Homebrew – macOS package manager.
brew tap Hmbown/deepseek-tui
brew install deepseek-tui
# 4. Direct download – no package manager or toolchain.
# https://github.com/Hmbown/DeepSeek-TUI/releases
# Prebuilt for Linux x64/ARM64, macOS x64/ARM64, Windows x64.
# 5. Docker – prebuilt release image.
docker run --rm -it \
-e DEEPSEEK_API_KEY \
-v "$PWD:/workspace" \
ghcr.io/hmbown/deepseek-tui:latest

In mainland China, speed up the npm path with `--registry=https://registry.npmmirror.com`, or use the Cargo mirror below.
DeepSeek TUI is a coding agent that runs in your terminal. It can read and edit files, run shell commands, search the web, manage git, and coordinate sub-agents from a keyboard-driven TUI.
It is built around DeepSeek V4 (deepseek-v4-pro / deepseek-v4-flash), including 1M-token context windows, streaming reasoning blocks, and prefix-cache-aware cost reporting.
- Auto mode – `--model auto` / `/model auto` chooses both the model and thinking level for each turn
- Thinking-mode streaming – see DeepSeek reasoning blocks as the model works
- Full tool suite – file ops, shell execution, git, web search/browse, apply-patch, sub-agents, MCP servers
- 1M-token context – context tracking, manual or configured compaction, and prefix-cache telemetry
- Three modes – Plan (read-only explore), Agent (interactive with approval), YOLO (auto-approved)
- Reasoning-effort tiers – cycle through off → high → max with `Shift+Tab`
- Session save/resume – checkpoint and resume long-running sessions
- Workspace rollback – side-git pre/post-turn snapshots with `/restore` and `revert_turn`, without touching your repo's `.git`
- Durable task queue – background tasks can survive restarts
- HTTP/SSE runtime API – `deepseek serve --http` for headless agent workflows
- MCP protocol – connect to Model Context Protocol servers for extended tooling; see docs/MCP.md
- Native RLM (`rlm_query`) – run batched analysis through cheap `deepseek-v4-flash` children using the same API client
- LSP diagnostics – inline error/warning surfacing after every edit via rust-analyzer, pyright, typescript-language-server, gopls, clangd
- User memory – optional persistent note file injected into the system prompt for cross-session preferences
- Localized UI – `en`, `ja`, `zh-Hans`, `pt-BR` with auto-detection
- Live cost tracking – per-turn and session-level token usage and cost estimates; cache hit/miss breakdown
- Skills system – composable, installable instruction packs from GitHub with no backend service required
deepseek (dispatcher CLI) → deepseek-tui (companion binary) → ratatui interface → async engine → OpenAI-compatible streaming client. Tool calls route through a typed registry (shell, file ops, git, web, sub-agents, MCP, RLM) and results stream back into the transcript. The engine manages session state, turn tracking, the durable task queue, and an LSP subsystem that feeds post-edit diagnostics into the model's context before the next reasoning step.
See docs/ARCHITECTURE.md for the full walkthrough.
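The tool-routing step above can be sketched in miniature: a registry maps tool names to handlers and dispatches each call by name, returning an error string for unknown tools. This is an illustrative Python sketch, not the actual Rust crate's API; all names here are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    handler: Callable[[dict], str]

class ToolRegistry:
    """Toy registry: register tools by name, dispatch calls to them."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def dispatch(self, name: str, args: dict) -> str:
        tool = self._tools.get(name)
        if tool is None:
            # Unknown tool names come back as an error result rather than a crash.
            return f"error: unknown tool {name!r}"
        return tool.handler(args)

registry = ToolRegistry()
registry.register(Tool("exec_shell", lambda a: f"ran: {a['cmd']}"))
print(registry.dispatch("exec_shell", {"cmd": "ls"}))  # ran: ls
print(registry.dispatch("rm_rf", {}))                  # error: unknown tool 'rm_rf'
```

In the real engine the handlers are typed Rust implementations and results stream back into the transcript; the error-instead-of-crash behavior for unknown tools is the property this sketch is meant to show.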
npm install -g deepseek-tui
deepseek --version
deepseek --model auto

Prebuilt binaries are published for Linux x64, Linux ARM64 (v0.8.8+), macOS x64, macOS ARM64, and Windows x64. For other targets (musl, riscv64, FreeBSD, etc.), see Install from source or docs/INSTALL.md.
On first launch you'll be prompted for your DeepSeek API key. The key is saved to ~/.deepseek/config.toml so it works from any directory without OS credential prompts.
You can also set it ahead of time:
deepseek auth set --provider deepseek # saves to ~/.deepseek/config.toml
deepseek auth status # shows the active credential source
export DEEPSEEK_API_KEY="YOUR_KEY" # env var alternative; use ~/.zshenv for non-interactive shells
deepseek
deepseek doctor # verify setup

If deepseek doctor says the rejected key came from DEEPSEEK_API_KEY, remove
the stale export from your shell startup file, open a fresh shell, or run
deepseek auth set --provider deepseek. Use deepseek auth status to see the
config, keyring, and env-var source state without printing the key. Saved config
keys take precedence over the keyring and environment and are easier to rotate.
To rotate or remove a saved key:
deepseek auth clear --provider deepseek.
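The precedence described above (saved config over keyring over environment) can be sketched as a small resolution function. The argument names and return shape here are assumptions for illustration, not the tool's internals.

```python
def resolve_api_key(config_key, keyring_key, env_key):
    """Return (source, key) for the first credential found, in priority order:
    saved config > OS keyring > environment variable."""
    for source, key in (
        ("config", config_key),
        ("keyring", keyring_key),
        ("env", env_key),
    ):
        if key:
            return source, key
    return None, None  # no credential anywhere

# A saved config key shadows both the keyring and the env var:
print(resolve_api_key("sk-cfg", "sk-ring", "sk-env"))  # ('config', 'sk-cfg')
print(resolve_api_key(None, None, "sk-env"))           # ('env', 'sk-env')
```

This ordering is why a stale `DEEPSEEK_API_KEY` export loses to a freshly saved config key, and why `deepseek auth status` reports which source is active.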
Use `deepseek --model auto` or `/model auto` when you want DeepSeek TUI to decide which model and how much reasoning power a turn needs.
Auto mode controls two settings together:
- Model: `deepseek-v4-flash` or `deepseek-v4-pro`
- Thinking: `off`, `high`, or `max`
Before the real turn is sent, the app makes a small deepseek-v4-flash routing call with thinking off. That router looks at the latest request and recent context, then selects a concrete model and thinking level for the real request. Short/simple turns can stay on Flash with thinking off; coding, debugging, release work, architecture, security review, or ambiguous multi-step tasks can move up to Pro and/or higher thinking.
auto is local to DeepSeek TUI. The upstream API never receives model: "auto"; it receives the concrete model and thinking setting chosen for that turn. The TUI shows the selected route, and cost tracking is charged against the model that actually ran. If the router call fails or returns an invalid answer, the app falls back to a local heuristic. Sub-agents inherit auto mode unless you assign them an explicit model.
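As a toy illustration of what a local fallback heuristic like the one mentioned above might look like: route on simple signals in the prompt. The real heuristic is internal to the app; the keyword set and length threshold here are invented.

```python
# Invented keyword set for illustration only; not the app's actual heuristic.
HARD_KEYWORDS = {"debug", "refactor", "architecture", "security", "release"}

def fallback_route(prompt: str) -> tuple[str, str]:
    """Return (model, thinking) when the router call is unavailable."""
    words = set(prompt.lower().split())
    # Escalate to Pro with higher thinking on hard-task signals or long prompts.
    if words & HARD_KEYWORDS or len(prompt) > 400:
        return ("deepseek-v4-pro", "high")
    # Short, simple turns stay cheap: Flash with thinking off.
    return ("deepseek-v4-flash", "off")

print(fallback_route("rename this variable"))
# ('deepseek-v4-flash', 'off')
print(fallback_route("debug the race condition in the scheduler"))
# ('deepseek-v4-pro', 'high')
```

The point is the shape of the decision, not the exact rules: auto mode always resolves to a concrete (model, thinking) pair before the request leaves the machine.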
Use a fixed model or fixed thinking level when you want repeatable benchmarking, a strict cost ceiling, or a specific provider/model mapping.
npm i -g deepseek-tui works on glibc-based ARM64 Linux from v0.8.8 onward. You can also download prebuilt binaries from the Releases page and place them side by side on your PATH.
If GitHub or npm downloads are slow from mainland China, use a Cargo registry mirror:
# ~/.cargo/config.toml
[source.crates-io]
replace-with = "tuna"
[source.tuna]
registry = "sparse+https://mirrors.tuna.tsinghua.edu.cn/crates.io-index/"

Then install both binaries (the dispatcher delegates to the TUI at runtime):
cargo install deepseek-tui-cli --locked # provides `deepseek`
cargo install deepseek-tui --locked # provides `deepseek-tui`
deepseek --version

Prebuilt binaries can also be downloaded from GitHub Releases. Use DEEPSEEK_TUI_RELEASE_BASE_URL for mirrored release assets.
Scoop is a Windows package manager. DeepSeek TUI is listed
in Scoop's main bucket, but that manifest updates independently and can lag the
GitHub/npm/Cargo release. Run scoop update first, then verify the installed
version with deepseek --version:
scoop update
scoop install deepseek-tui
deepseek --version

Use npm or direct GitHub release downloads when you need the newest release before Scoop's manifest catches up.
Install from source
Works on any Tier-1 Rust target – including musl, riscv64, FreeBSD, and older ARM64 distros.
# Linux build deps (Debian/Ubuntu/RHEL):
# sudo apt-get install -y build-essential pkg-config libdbus-1-dev
# sudo dnf install -y gcc make pkgconf-pkg-config dbus-devel
git clone https://github.com/Hmbown/DeepSeek-TUI.git
cd DeepSeek-TUI
cargo install --path crates/cli --locked # requires Rust 1.88+; provides `deepseek`
cargo install --path crates/tui --locked # provides `deepseek-tui`

Both binaries are required. Cross-compilation and platform-specific notes: docs/INSTALL.md.
# NVIDIA NIM
deepseek auth set --provider nvidia-nim --api-key "YOUR_NVIDIA_API_KEY"
deepseek --provider nvidia-nim
# Fireworks
deepseek auth set --provider fireworks --api-key "YOUR_FIREWORKS_API_KEY"
deepseek --provider fireworks --model deepseek-v4-pro
# Generic OpenAI-compatible endpoint
deepseek auth set --provider openai --api-key "YOUR_OPENAI_COMPATIBLE_API_KEY"
OPENAI_BASE_URL="https://openai-compatible.example/v4" deepseek --provider openai --model glm-5
# Self-hosted SGLang
SGLANG_BASE_URL="http://localhost:30000/v1" deepseek --provider sglang --model deepseek-v4-flash
# Self-hosted vLLM
VLLM_BASE_URL="http://localhost:8000/v1" deepseek --provider vllm --model deepseek-v4-flash
# Self-hosted Ollama
ollama pull deepseek-coder:1.3b
deepseek --provider ollama --model deepseek-coder:1.3b

A maintenance release anchored by a v0.8.27 / v0.8.28 regression fix plus 25 community PRs. Full changelog.
- Scroll demon, gone for good (#1085 regression). Parallel sub-agents running `exec_shell` would scroll the alt-screen out from under ratatui's diff renderer, leaving a blank band growing above the header. Three layers of defence now: a `tracing-subscriber` writing to `~/.deepseek/logs/tui-YYYY-MM-DD.log`, an fd-level `dup2` stderr redirect for the alt-screen lifetime (Unix), and module-level `#![deny(clippy::print_stdout, clippy::print_stderr)]` on the TUI runtime modules. New `eprintln!`s inside `tools/`, `core/`, `tui/`, `network_policy.rs`, or `runtime_threads.rs` now fail CI.
- Ctrl+R session restore is workspace-scoped (#1395, PR #1397 from @linzhiqin2003) – previously it listed every saved session on disk, which meant Project A's history could leak into Project B.
- Runtime version visible in the header. A discreet `v0.8.29` chip sits in the header's right cluster alongside the provider / effort / Live / context chips. It drops first under tight terminal width.
- MCP HTTP transport honors HTTP(S)_PROXY (#1408 from @hlx98007) – corporate / Clash / Shadowsocks proxies now apply to MCP HTTP connections, matching every other tool on the box. `NO_PROXY` is honored.
- MCP discovery survives malformed items (#1410 from @Liu-Vince) – one bad tool / resource / prompt entry no longer drops the whole page; the malformed entry is skipped and the rest of the catalogue surfaces normally.
- MCP SSE accepts CRLF-framed endpoint events (#1309, PR #1358 from @reidliu41) – FastMCP / uvicorn streams no longer time out waiting for LF-only event separators.
- Composer ignores leaked mouse-report bytes (#1418, PR #1421 from @reidliu41) – terminal chains that leak `[<35;44;18M`-style mouse reports into stdin no longer fill the input area.
- Footer chips respect the available width (#1357, PR #1417 from @Wenjunyun123) – long cache / aux chips drop before crowding the left status line or composer area on narrow terminals.
- Note management commands (PR #1407 from @reidliu41) – `/note add`, `/note list`, and friends for persistent maintainer notes inside the TUI.
- /init-style global AGENTS.md merges with project AGENTS.md (#1157, PR #1399 from @linzhiqin2003) – your `~/.deepseek/AGENTS.md` baseline now layers under the workspace's own AGENTS.md instead of being shadowed.
- Language directive: thinking matches the user's message language (#1118, PR #1398 from @linzhiqin2003) – `reasoning_content` follows the latest user message language, not the project context's inferred `lang`.
- Web search filters spam-stuffed SERPs (#964, PR #1396 from @linzhiqin2003) – Bing / DDG fallback paths drop the generated-content / SEO-farm domains that were poisoning quick lookups.
- Auto routing recognises CJK debug / search keywords (PRs #1401 and #1402 from @linzhiqin2003) – `--model auto` and the reasoning-effort picker correctly route Chinese / Japanese technical queries instead of falling through to the generic baseline.
- Deferred tools hydrate schemas before first execution (#1419, PR #1429 from @SamhandsomeLee) – `edit_file` and other deferred tools now load, show their expected fields, and ask the model to retry instead of executing guessed argument names.
- DeepSeek aliases replay thinking-mode tool turns (PR #1428 from @Beltran12138) – `deepseek-chat` and `deepseek-reasoner` now get the same `reasoning_content` replay treatment as explicit V4 model IDs, avoiding second-turn 400s after tool calls.
- Skill completions stay under `/skill` (#1437, PR #1442 from @reidliu41) – large local skill collections no longer crowd the root slash-command menu.
- `edit_file` rejects no-op replacements (PR #1460 from @xiluoduyu) – identical `search`/`replace` values now fail validation instead of returning an empty diff.
- Windows terminal layout gets width-stable glyphs (#1314, PR #1465 from @CrepuscularIRIS) – header and file-tree icons no longer rely on SMP emoji that cmd / PowerShell can mismeasure.
- Ghostty uses low-motion rendering by default (#1445, PR #1468 from @CrepuscularIRIS) – affected terminals avoid animation flicker without manual config.
- Docker buildx provenance EPERM failures get a hint (#1449, PR #1469 from @CrepuscularIRIS) – macOS shell output points at the provenance flag when that restricted metadata write fails.
- Windows CMD mouse-wheel fallback scrolls the transcript (#1443, PR #1471 from @CrepuscularIRIS) – wheel events mapped to Up / Down no longer cycle composer history when mouse capture is off.
- Sync-to-CNB workflow hardened – explicit `permissions: contents: read`, trigger narrowed to `main` + `v*` tags (no longer mirrors feature branches), `actions/checkout` bumped v3 → v4.
- +438 LOC of new test coverage for `error_taxonomy`, `parse_pages_arg`, web-search precedence, and `sanitize_stream_chunk` control-byte filtering (PRs #1403–#1406 from @linzhiqin2003).
Thanks to @linzhiqin2003 (10 landings this cycle), @reidliu41 (5 landings), @CrepuscularIRIS (4 landings), @SamhandsomeLee, @Beltran12138, @Wenjunyun123, @hlx98007, @Liu-Vince, @xiluoduyu, and @shenxiaodaosanhua for the bug report.
deepseek # interactive TUI
deepseek "explain this function" # one-shot prompt
deepseek --model deepseek-v4-flash "summarize" # model override
deepseek --model auto "fix this bug" # auto-select model + thinking
deepseek --yolo # auto-approve tools
deepseek auth set --provider deepseek # save API key
deepseek doctor # check setup & connectivity
deepseek doctor --json # machine-readable diagnostics
deepseek setup --status # read-only setup status
deepseek setup --tools --plugins # scaffold tool/plugin dirs
deepseek models # list live API models
deepseek sessions # list saved sessions
deepseek resume --last # resume the most recent session in this workspace
deepseek resume <SESSION_ID> # resume a specific session by UUID
deepseek fork <SESSION_ID> # fork a session at a chosen turn
deepseek serve --http # HTTP/SSE API server
deepseek serve --acp # ACP stdio adapter for Zed/custom agents
deepseek run pr <N> # fetch PR and pre-seed review prompt
deepseek mcp list # list configured MCP servers
deepseek mcp validate # validate MCP config/connectivity
deepseek mcp-server # run dispatcher MCP stdio server
deepseek update # check for and apply binary updates

Docker images are published to GHCR for release builds:
docker volume create deepseek-tui-home
docker run --rm -it \
-e DEEPSEEK_API_KEY="$DEEPSEEK_API_KEY" \
-v deepseek-tui-home:/home/deepseek/.deepseek \
ghcr.io/hmbown/deepseek-tui:latest

DeepSeek can run as a custom Agent Client Protocol server for editors that spawn local ACP agents over stdio. In Zed, add a custom agent server:
{
"agent_servers": {
"DeepSeek": {
"type": "custom",
"command": "deepseek",
"args": ["serve", "--acp"],
"env": {}
}
}
}

The first ACP slice supports new sessions and prompt responses through your existing DeepSeek config/API key. Tool-backed editing and checkpoint replay are not exposed through ACP yet.
| Key | Action |
|---|---|
| `Tab` | Complete `/` or `@` entries; while running, queue draft as follow-up; otherwise cycle mode |
| `Shift+Tab` | Cycle reasoning-effort: off → high → max |
| `F1` | Searchable help overlay |
| `Esc` | Back / dismiss |
| `Ctrl+K` | Command palette |
| `Ctrl+R` | Resume an earlier session |
| `Alt+R` | Search prompt history and recover cleared drafts |
| `Ctrl+S` | Stash current draft (`/stash list`, `/stash pop` to recover) |
| `@path` | Attach file/directory context in composer |
| `↑` (at composer start) | Select attachment row for removal |
Full shortcut catalog: docs/KEYBINDINGS.md.
| Mode | Behavior |
|---|---|
| Plan | Read-only investigation – model explores and proposes a plan (update_plan + checklist_write) before making changes |
| Agent | Default interactive mode – multi-step tool use with approval gates; model outlines work via checklist_write |
| YOLO | Auto-approve all tools in a trusted workspace; still maintains plan and checklist for visibility |
User config: ~/.deepseek/config.toml. Project overlay: <workspace>/.deepseek/config.toml (denied: api_key, base_url, provider, mcp_config_path). config.example.toml has every option.
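The overlay merge can be sketched as follows. The denied-key list matches the one above; the merge logic itself is an assumption about how such user/project overlays typically behave, not the actual implementation.

```python
# Keys a project overlay is NOT allowed to override (from the docs above).
DENIED_IN_PROJECT = {"api_key", "base_url", "provider", "mcp_config_path"}

def merge_config(user: dict, project: dict) -> dict:
    """Project values override user values, except for denied keys."""
    merged = dict(user)
    for key, value in project.items():
        if key in DENIED_IN_PROJECT:
            continue  # a checked-in project config cannot steal credentials
        merged[key] = value
    return merged

user = {"api_key": "sk-user", "model": "deepseek-v4-pro"}
project = {"api_key": "sk-evil", "model": "deepseek-v4-flash"}
print(merge_config(user, project))
# {'api_key': 'sk-user', 'model': 'deepseek-v4-flash'}
```

Denying credential- and endpoint-shaped keys in the project overlay means a cloned repository can tune behavior but never redirect your traffic or replace your key.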
Key environment variables:
| Variable | Purpose |
|---|---|
| `DEEPSEEK_API_KEY` | API key |
| `DEEPSEEK_BASE_URL` | API base URL |
| `DEEPSEEK_HTTP_HEADERS` | Optional custom model request headers, e.g. `X-Model-Provider-Id=your-model-provider` |
| `DEEPSEEK_MODEL` | Default model |
| `DEEPSEEK_STREAM_IDLE_TIMEOUT_SECS` | Stream idle timeout in seconds, default 300, clamped to 1..=3600 |
| `DEEPSEEK_PROVIDER` | `deepseek` (default), `nvidia-nim`, `openai`, `openrouter`, `novita`, `fireworks`, `sglang`, `vllm`, `ollama` |
| `DEEPSEEK_PROFILE` | Config profile name |
| `DEEPSEEK_MEMORY` | Set to `on` to enable user memory |
| `NVIDIA_API_KEY` / `OPENAI_API_KEY` / `OPENROUTER_API_KEY` / `NOVITA_API_KEY` / `FIREWORKS_API_KEY` / `SGLANG_API_KEY` / `VLLM_API_KEY` / `OLLAMA_API_KEY` | Provider auth |
| `OPENAI_BASE_URL` / `OPENAI_MODEL` | Generic OpenAI-compatible endpoint and model ID |
| `SGLANG_BASE_URL` | Self-hosted SGLang endpoint |
| `VLLM_BASE_URL` | Self-hosted vLLM endpoint |
| `OLLAMA_BASE_URL` | Self-hosted Ollama endpoint |
| `OLLAMA_MODEL` | Self-hosted Ollama model tag |
| `NO_ANIMATIONS=1` | Force accessibility mode at startup |
| `SSL_CERT_FILE` | Custom CA bundle for corporate proxies |
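For example, the documented clamping of `DEEPSEEK_STREAM_IDLE_TIMEOUT_SECS` (default 300, range 1..=3600) could be read like this. The parsing details are an assumption for illustration, not the actual implementation.

```python
import os

def stream_idle_timeout(env=os.environ) -> int:
    """Read the idle timeout: default 300, clamped into 1..=3600."""
    raw = env.get("DEEPSEEK_STREAM_IDLE_TIMEOUT_SECS", "300")
    try:
        value = int(raw)
    except ValueError:
        return 300  # unparseable values fall back to the default
    return max(1, min(value, 3600))

print(stream_idle_timeout({}))                                         # 300
print(stream_idle_timeout({"DEEPSEEK_STREAM_IDLE_TIMEOUT_SECS": "9999"}))  # 3600
print(stream_idle_timeout({"DEEPSEEK_STREAM_IDLE_TIMEOUT_SECS": "0"}))     # 1
```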
Set locale in settings.toml, use /config locale zh-Hans, or rely on LC_ALL/LANG to choose UI chrome and the fallback language sent to V4 models. The latest user message still wins for natural-language reasoning and replies, so Chinese user turns stay Chinese even on an English system locale. See docs/CONFIGURATION.md and docs/MCP.md.
| Model | Context | Input (cache hit) | Input (cache miss) | Output |
|---|---|---|---|---|
| `deepseek-v4-pro` | 1M | $0.003625 / 1M* | $0.435 / 1M* | $0.87 / 1M* |
| `deepseek-v4-flash` | 1M | $0.0028 / 1M | $0.14 / 1M | $0.28 / 1M |
DeepSeek Platform defaults to https://api.deepseek.com/beta in v0.8.16 so beta-gated API features can be tested without extra setup. Set base_url = "https://api.deepseek.com" to opt out.
Legacy aliases deepseek-chat / deepseek-reasoner map to deepseek-v4-flash and retire after July 24, 2026. NVIDIA NIM variants use your NVIDIA account terms.
DeepSeek Pro rates currently reflect a limited-time 75% discount, which remains valid until 15:59 UTC on 31 May 2026. After that time, the TUI cost estimator will revert to the base Pro rates.
Note
For the latest DeepSeek-V4-Pro pricing, including the current 75% discount valid until 15:59 UTC on 31 May 2026, please consult the official DeepSeek pricing page. All rates listed in the README correspond to the officially published values.
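As a worked example of the arithmetic behind the live cost tracker, here is a per-turn estimate using the Flash rates from the table above ($0.0028 cache-hit input, $0.14 cache-miss input, $0.28 output, each per 1M tokens). It is a plain weighted sum; the TUI's exact accounting (discounts, rounding) may differ.

```python
# Rates copied from the pricing table above (USD per 1M tokens).
RATES = {"deepseek-v4-flash": {"hit": 0.0028, "miss": 0.14, "out": 0.28}}

def turn_cost(model, hit_tokens, miss_tokens, output_tokens):
    """Estimate one turn's cost as a weighted sum of token counts."""
    r = RATES[model]
    per = 1_000_000
    return (hit_tokens * r["hit"] + miss_tokens * r["miss"] + output_tokens * r["out"]) / per

# A turn with 50k cached input, 10k uncached input, and 2k output tokens:
cost = turn_cost("deepseek-v4-flash", 50_000, 10_000, 2_000)
print(f"${cost:.6f}")  # $0.002100
```

Note how heavily the estimate rewards prefix-cache hits: the 50k cached tokens cost $0.00014, while the 10k uncached tokens cost ten times more.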
DeepSeek TUI discovers skills from workspace directories (.agents/skills → skills → .opencode/skills → .claude/skills → .cursor/skills) and global directories (~/.agents/skills → ~/.claude/skills → ~/.deepseek/skills). Each skill is a directory with a SKILL.md file:
~/.agents/skills/my-skill/
└── SKILL.md
Frontmatter required:
---
name: my-skill
description: Use this when DeepSeek should follow my custom workflow.
---
# My Skill
Instructions for the agent go here.

Commands: `/skills` (list), `/skill <name>` (activate), `/skill new` (scaffold), `/skill install github:<owner>/<repo>` (community), `/skill update` / `uninstall` / `trust`. Community installs from GitHub require no backend service. Installed skills appear in the model-visible session context; the agent can auto-select relevant skills via the `load_skill` tool when your task matches their descriptions.
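A minimal sketch of reading that frontmatter, assuming a plain `key: value` block between `---` fences (the real discovery code presumably uses a full YAML parser):

```python
def parse_skill_frontmatter(text: str) -> dict:
    """Extract key: value pairs from a SKILL.md frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no frontmatter fence at the top
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing fence ends the frontmatter
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

skill_md = """---
name: my-skill
description: Use this when DeepSeek should follow my custom workflow.
---
# My Skill
"""
meta = parse_skill_frontmatter(skill_md)
print(meta["name"])  # my-skill
```

The `description` field is what the agent matches against your task when deciding whether to auto-load a skill, so it is worth writing as a concrete "use this when ..." sentence.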
| Doc | Topic |
|---|---|
| ARCHITECTURE.md | Codebase internals |
| CONFIGURATION.md | Full config reference |
| MODES.md | Plan / Agent / YOLO modes |
| MCP.md | Model Context Protocol integration |
| RUNTIME_API.md | HTTP/SSE API server |
| INSTALL.md | Platform-specific install guide |
| MEMORY.md | User memory feature guide |
| SUBAGENTS.md | Sub-agent role taxonomy and lifecycle |
| KEYBINDINGS.md | Full shortcut catalog |
| RELEASE_RUNBOOK.md | Release process |
| LOCALIZATION.md | UI locale matrix & switching |
| OPERATIONS_RUNBOOK.md | Ops & recovery |
Full Changelog: CHANGELOG.md.
- DeepSeek – thank you for the models and support that power every turn.
- DataWhale – thank you for your support and for welcoming us into the Whale Brother family.
This project ships with help from a growing community of contributors:
- merchloubna70-dot – 28 PRs spanning features, fixes, and VS Code extension scaffolding (#645–#681)
- WyxBUPT-22 – Markdown rendering for tables, bold/italic, and horizontal rules (#579)
- loongmiaow-pixel – Windows + China install documentation (#578)
- 20bytes – User memory docs and help polish (#569)
- staryxchen – glibc compatibility preflight (#556)
- Vishnu1837 – glibc compatibility improvements (#565)
- shentoumengxin – Shell `cwd` boundary validation (#524)
- toi500 – Windows paste fix report
- xsstomy – Terminal startup repaint report
- melody0709 – Slash-prefix Enter activation report
- lloydzhou and jeoor – Compaction cost reports; lloydzhou also contributed deterministic environment context (#813, #922) and KV prefix-cache stabilisation (#1080)
- Agent-Skill-007 – README clarity pass (#685)
- woyxiang – Windows install documentation (#696)
- wangfeng – Pricing/discount info update (#692)
- zichen0116 – CODE_OF_CONDUCT.md (#686)
- dfwqdyl-ui – model ID case-sensitivity compatibility report (#729)
- Oliver-ZPLiu – stale `working...` state bug report and Windows clipboard fallback (#738, #850)
- reidliu41 – resume hint, workspace trust persistence, Ollama provider support, and thinking-block stream finalization (#863, #870, #921, #1078)
- xieshutao – plain Markdown skill fallback (#869)
- GK012 – npm wrapper `--version` fallback (#885)
- y0sif – parent turn-loop wakeup after direct child sub-agent completion (#901)
- mac119 and leo119 – `deepseek update` command documentation (#838, #917)
- dumbjack / 浩淼的mac – command-safety null-byte hardening (#706, #918)
- macworkers – fork confirmation with the new session id (#600, #919)
- zero and zerx-lab – notification condition config and richer OSC 9 notification body (#820, #920)
- chnjames – cached @mention completions, config recovery polish, and Windows UTF-8 shell output (#849, #927, #982, #1018)
- angziii – config safety, async cleanup, Docker hardening, and command-safety fixes (#822, #824, #827, #831, #833, #835, #837)
- elowen53 – UTF-8 decoding and deterministic test coverage (#825, #840)
- wdw8276 – `/rename` command for custom session titles (#836)
- banqii – `.cursor/skills` discovery path support (#817)
- junskyeed – dynamic `max_tokens` calculation for API requests (#826)
- Hafeez Pizofreude – SSRF protection in `fetch_url` and Star History chart
fetch_urland Star History chart - Unic (YuniqueUnic) โ Schema-driven config UI (TUI + web)
- Jason โ SSRF security hardening
- axobase001 โ snapshot orphan cleanup, npm install guards, session telemetry fixes, model-scope cache clear, symlinked skill support, and npm mirror-escape-hatch guidance (#975, #1032, #1047, #1049, #1052, #1019, #1051, #1056)
- MengZ-super โ
/themecommand for dark/light toggle and SSE gzip/brotli decompression (#1057, #1061) - DI-HUO-MING-YI โ Plan-mode read-only sandbox safety fix (#1077)
- bevis-wong โ precise paste-Enter auto-submit reproducer (#1073)
- Duducoco and AlphaGogoo โ skills slash-menu and
/skillscoverage fix (#1068, #1083) - ArronAI007 โ window-resize artifact fix for macOS Terminal.app and ConHost (#993)
- THINKER-ONLY โ OpenRouter and custom-endpoint model-ID preservation (#1066)
- Jefsky โ DeepSeek endpoint correction report (#1079, #1084)
- wlon โ NVIDIA NIM provider API-key preference diagnosis (#1081)
See CONTRIBUTING.md. Pull requests welcome – check the open issues for good first contributions.
Support: Buy me a coffee.
Note
Not affiliated with DeepSeek Inc.
