feat(skills,node): integrate SKILL.md + managed Node runtime (#681) #723
senamakel merged 33 commits into tinyhumansai:main from
Conversation
…tinyhumansai#681)

Extends the skills module with agentskills.io-style discovery:

* Parses YAML frontmatter in SKILL.md (name, description, version, author, tags, allowed-tools, license) via serde_yaml, with a catch-all extras map for forward-compatible keys.
* Adds SkillScope (User / Project / Legacy) and discover_skills(), which scans ~/.openhuman/skills, ~/.agents/skills, <ws>/.openhuman/skills, <ws>/.agents/skills and the legacy <ws>/skills directory.
* Gates project-scope loading behind an explicit <ws>/.openhuman/trust marker (is_workspace_trusted helper).
* Resolves name collisions with project > user > legacy precedence and surfaces shadowing as per-skill warnings.
* Inventories bundled resources under scripts/, references/, assets/.
* Keeps the existing load_skills(workspace_dir) API as a backwards-compatible wrapper for existing callers.
* Preserves the legacy skill.json fallback (marked legacy = true).
* Lenient validation: a missing / mismatched / oversized name or description produces warnings instead of errors.
* Adds 12 unit tests covering frontmatter parsing, trust gating, collision shadowing, lenient fallbacks, legacy JSON and resource inventory. One existing prompt test was updated to use ..Default::default() for the expanded struct.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
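The frontmatter step of the discovery flow above can be sketched as follows. `split_frontmatter` is a hypothetical stdlib-only helper (the real module parses the YAML half with serde_yaml, and its exact splitting rules may differ):

```rust
/// Split a SKILL.md document into (yaml_frontmatter, markdown_body).
/// Returns None when the file has no leading `---` fence.
fn split_frontmatter(contents: &str) -> Option<(&str, &str)> {
    let rest = contents.strip_prefix("---\n")?;
    let end = rest.find("\n---")?;
    let yaml = &rest[..end];
    // Body starts after the closing fence line.
    let body = rest[end + 4..].trim_start_matches('\n');
    Some((yaml, body))
}

fn main() {
    let doc = "---\nname: demo\ndescription: A demo skill\n---\n# Demo\nBody text.";
    let (yaml, body) = split_frontmatter(doc).expect("frontmatter present");
    println!("{yaml}");
    println!("{body}");
}
```

The YAML slice would then be handed to `serde_yaml::from_str::<SkillFrontmatter>`, where the `#[serde(flatten)]` extras map absorbs unknown keys.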
…#681)

Per the agentskills.io SKILL.md spec, only name, description, license, compatibility, metadata, and allowed-tools are valid top-level keys. Our SkillFrontmatter had version, author, and tags as top-level fields, which drifted from the spec and would silently swallow mis-shaped data for skills authored against the canonical schema.

Demote version/author/tags into the metadata map. Non-spec top-level keys still parse via #[serde(flatten)] into extra; when present there we emit a migration warning and still populate the derived Skill fields so existing skills keep working. Add compatibility as an optional string alongside license.

New tests cover the spec-compliant metadata shape and the deprecated top-level fallback path, and verify that the full spec frontmatter parses without leaking into extras.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…nsai#681) Node.js distributions ship as .tar.xz on Unix and .zip on Windows. The upcoming Node runtime bootstrap extracts these in pure Rust; xz2 is built with the `static` feature so liblzma is bundled and not a system dependency. zip is pinned to default-features = false + deflate to avoid pulling in bzip2/zstd/aes which we don't need for Node archives. Deps only — no behavior change. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Introduce managed Node.js runtime configuration so skills that require `node`/`npm` (agentskills.io packages with build steps) can be wired in behind a single config switch.

Fields:
- `node.enabled` — master switch (default true)
- `node.version` — pinned release (default `v22.11.0` LTS)
- `node.cache_dir` — absolute path for managed distributions; empty = use workspace default
- `node.prefer_system` — reuse a matching system `node` when found (default true); set false for reproducible CI / airgapped deploys

Env overrides land in load.rs alongside the existing LOCAL_AI_TIER block:
- OPENHUMAN_NODE_ENABLED
- OPENHUMAN_NODE_VERSION
- OPENHUMAN_NODE_CACHE_DIR
- OPENHUMAN_NODE_PREFER_SYSTEM

No resolver / downloader yet — subsequent commits in the tinyhumansai#681 series consume this config to bootstrap the runtime.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
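Assuming the schema mirrors the field list above, a minimal sketch of the config block and its defaults (the real definition lives in src/openhuman/config/schema/node.rs and may differ in derives and types):

```rust
/// Sketch of the `node` config block described above; field names and
/// defaults are taken from the commit message, the struct shape is assumed.
#[derive(Debug, Clone, PartialEq)]
struct NodeConfig {
    enabled: bool,       // master switch
    version: String,     // pinned release
    cache_dir: String,   // empty = workspace default
    prefer_system: bool, // reuse a matching host `node`
}

impl Default for NodeConfig {
    fn default() -> Self {
        Self {
            enabled: true,
            version: "v22.11.0".to_string(),
            cache_dir: String::new(),
            prefer_system: true,
        }
    }
}

fn main() {
    let cfg = NodeConfig::default();
    println!("{cfg:?}");
}
```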
…ai#681)

Introduce the `openhuman::node_runtime` module and its first piece: a synchronous resolver that walks `PATH`, probes `node --version`, and returns a `SystemNode` when the host toolchain's major version matches the configured target.

Why resolve first: a successful probe lets the bootstrap skip a ~60 MB download per managed install. Matching is intentionally loose at the patch level (Node LTS lines are ABI-stable and skills pin their own deps via package-lock.json); operators needing strict pinning can set `node.prefer_system = false`.

Tracing is verbose by design — resolver decisions gate the download path, so operators need a clear breadcrumb trail at `debug`/`info`.

Includes unit tests for `parse_node_version` covering the `v` prefix, whitespace, major-only, and malformed inputs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
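The version parse can be sketched over the same input shapes the tests name. This is a stdlib approximation; the real `parse_node_version` in node_runtime/resolver.rs may use a different return type:

```rust
/// Parse `node --version` output (e.g. "v22.11.0\n") into (major, minor, patch).
/// Tolerates a `v`/`V` prefix, surrounding whitespace, and major-only input;
/// malformed input yields None.
fn parse_node_version(raw: &str) -> Option<(u32, u32, u32)> {
    let trimmed = raw.trim();
    let trimmed = trimmed
        .strip_prefix('v')
        .or_else(|| trimmed.strip_prefix('V'))
        .unwrap_or(trimmed);
    let mut parts = trimmed.split('.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next().unwrap_or("0").parse().ok()?;
    let patch = parts.next().unwrap_or("0").parse().ok()?;
    Some((major, minor, patch))
}

fn main() {
    println!("{:?}", parse_node_version("v22.11.0\n"));
}
```

Loose matching then reduces to comparing only the first tuple element against the configured target's major version.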
…humansai#681)

Add the second piece of the managed Node.js runtime: streaming download of OS/arch-specific prebuilt archives from nodejs.org, gated by a SHA-256 match against the release's signed `SHASUMS256.txt`.

Key contracts:

- `NodeDistribution::for_host(version)` picks the right archive for the current OS/arch tuple. Supported matrix:
  * darwin-{arm64,x64}.tar.xz
  * linux-{arm64,x64,armv7l}.tar.xz
  * win-{arm64,x64}.zip
  Unsupported hosts surface a clear error so the caller can flip `node.enabled = false` or point at a pre-installed toolchain.
- `fetch_shasums(client, version)` parses `SHASUMS256.txt` into a filename -> hex digest map. Tolerant of trailing / signature blocks.
- `download_distribution(...)` streams chunks into the target path, computes SHA-256 on the fly, and wipes the partial file if the digest does not match. The integrity check is mandatory — skills will execute untrusted code inside the resolved runtime.

Verbose tracing at `info` and `debug` follows the repo debug-logging rule; operators can grep `[node_runtime::downloader]` to trace every GET, chunk boundary (via the total-bytes log), and hash decision.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
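The `SHASUMS256.txt` parse can be sketched with the stdlib alone. `parse_shasums` is a hypothetical name, and the tolerance rules (skip signature blocks and malformed lines) are inferred from the description above; nodejs.org ships the file as one `<hex digest>  <filename>` pair per line:

```rust
use std::collections::HashMap;

/// Parse SHASUMS256.txt ("<hex>  <filename>" per line) into filename -> digest.
/// Signature blocks and malformed lines are skipped rather than treated as errors.
fn parse_shasums(body: &str) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for line in body.lines() {
        let mut parts = line.split_whitespace();
        let (Some(digest), Some(name)) = (parts.next(), parts.next()) else {
            continue;
        };
        // A SHA-256 digest is exactly 64 hex characters; anything else is
        // PGP armour or junk and gets skipped.
        if digest.len() == 64 && digest.chars().all(|c| c.is_ascii_hexdigit()) {
            map.insert(name.to_string(), digest.to_string());
        }
    }
    map
}

fn main() {
    let digest = "a".repeat(64);
    let body = format!("{digest}  node-v22.11.0-linux-x64.tar.xz\n-----BEGIN PGP SIGNATURE-----\n");
    println!("{:?}", parse_shasums(&body));
}
```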
…#681)

Extract downloaded Node.js distributions and move them into the final cache path without leaving the reader observing a half-populated directory.

- `extract_distribution(archive, extract_root, is_zip)` — wraps the synchronous `tar`/`xz2`/`zip` crates in `spawn_blocking` and returns the single top-level folder produced by the archive (`node-vX.Y.Z-<os>-<arch>/`). `set_preserve_permissions(true)` + `set_overwrite(true)` on the tar side keeps the `node` binary's `+x` bit. The zip path restores Unix mode bits via `unix_mode()` so cross-OS builds still land correct permissions.
- `atomic_install(staged, final_dest)` — renames a staged directory into place via a single `rename(2)`, moving any pre-existing install to a `.old-<pid>` sibling first and cleaning it up on success. This gives concurrent readers a "before" or "after" view, never a partial one.
- Unsafe zip paths (`enclosed_name()` rejection) are logged and skipped rather than traversed — defence against malicious archives, even though the digest check in the downloader already rules out tampered official releases.

No new tests here: the logic leans on upstream tar/zip crates, and meaningful end-to-end coverage requires a real archive on disk — that comes in the integration-test commit later in this series.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
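The atomic-install contract can be sketched as below. This is a simplified stdlib version that also folds in the backup-restore behaviour added in a follow-up commit of this PR; the real function in node_runtime/extractor.rs may differ:

```rust
use std::fs;
use std::path::Path;

/// Rename a staged dir into place, parking any pre-existing install as an
/// `.old-<pid>` sibling first so readers see "before" or "after", never partial.
fn atomic_install(staged: &Path, final_dest: &Path) -> std::io::Result<()> {
    let mut backup = None;
    if final_dest.exists() {
        let name = final_dest
            .file_name()
            .map(|n| n.to_string_lossy().into_owned())
            .unwrap_or_default();
        let old = final_dest.with_file_name(format!("{name}.old-{}", std::process::id()));
        fs::rename(final_dest, &old)?;
        backup = Some(old);
    }
    if let Err(err) = fs::rename(staged, final_dest) {
        // Restore the previous install so readers never observe an empty slot.
        if let Some(old) = backup {
            let _ = fs::rename(&old, final_dest);
        }
        return Err(err);
    }
    if let Some(old) = backup {
        let _ = fs::remove_dir_all(old); // reclaim the parked copy on success
    }
    Ok(())
}

fn main() {
    let base = std::env::temp_dir().join(format!("atomic-demo-{}", std::process::id()));
    let (staged, dest) = (base.join("staged"), base.join("final"));
    fs::create_dir_all(&staged).unwrap();
    fs::write(staged.join("marker"), "v2").unwrap();
    fs::create_dir_all(&dest).unwrap();
    fs::write(dest.join("marker"), "v1").unwrap();
    atomic_install(&staged, &dest).unwrap();
    println!("{}", fs::read_to_string(dest.join("marker")).unwrap());
    fs::remove_dir_all(&base).unwrap();
}
```

Note the atomicity guarantee holds only when staged and final paths sit on the same filesystem, which is why the scratch dir lives next to the cache.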
…mansai#681)

Stitch resolver, downloader, and extractor into a single idempotent entry point callers use at startup (or lazily before the first `node_exec`/`npm_exec` call).

`NodeBootstrap::resolve()` contract:

1. Return the cached `ResolvedNode` if a previous call already succeeded.
2. Bail when `node.enabled = false`.
3. If `node.prefer_system`, probe the host PATH. A matching major version wins — return a `System`-sourced `ResolvedNode`.
4. Otherwise compute the install path (`{cache_dir}/node-v{version}-{os}-{arch}/`). If it already contains valid bins, reuse it. This makes the bootstrap ~free across restarts once a managed install lands.
5. Else fetch `SHASUMS256.txt`, locate the expected digest for our archive, stream the download, extract into a `.stage-<pid>` scratch dir, `atomic_install` the top-level folder into the final path, and remove scratch + archive to reclaim disk.

Concurrency is handled by a `tokio::sync::Mutex<Option<ResolvedNode>>` — parallel callers queue behind the first one; the winner memoises, the losers pick up the cached result. No race can produce two concurrent downloads of the same archive.

Platform-specific bin layout lives in `managed_bin_dir` / `build_resolved`:

- Unix: `<install>/bin/{node,npm}`
- Windows: `<install>/{node.exe,npm.cmd}`

This closes the resolver/downloader/cache layer promised in the tinyhumansai#681 checkpoint. Tools (`node_exec`, `npm_exec`) and shell-PATH injection layer on top in the next batch of commits.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
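The memoisation contract can be sketched with a std Mutex standing in for the real `tokio::sync::Mutex`; the types and names here are illustrative, not the actual API:

```rust
use std::sync::Mutex;

/// Parallel callers queue behind the first; the winner stores the result,
/// later callers clone the cached value instead of resolving again.
struct Bootstrap {
    cached: Mutex<Option<String>>, // stands in for Option<ResolvedNode>
    resolve_count: Mutex<u32>,     // instrumentation for the sketch only
}

impl Bootstrap {
    fn new() -> Self {
        Self { cached: Mutex::new(None), resolve_count: Mutex::new(0) }
    }

    fn resolve(&self) -> String {
        let mut guard = self.cached.lock().unwrap();
        if let Some(hit) = guard.as_ref() {
            return hit.clone(); // step 1: a previous success wins immediately
        }
        // Steps 2-5 (flag check, system probe, download, extract, install)
        // would run here while the lock is held, so no second download races.
        *self.resolve_count.lock().unwrap() += 1;
        let resolved = "managed:v22.11.0".to_string();
        *guard = Some(resolved.clone());
        resolved
    }
}

fn main() {
    let b = Bootstrap::new();
    println!("{}", b.resolve());
    println!("{}", b.resolve()); // second call is a cache hit
}
```

Holding the lock across the whole resolve is what makes "no race can produce two concurrent downloads" true; an async mutex is needed in the real code because the critical section awaits network I/O.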
…#681) Executes `inline_code` (via `node -e`) or a workspace-relative `script_path` through the NodeBootstrap-resolved `node` binary. POSIX single-quote quoting keeps user input inert; `env_clear` + allow-list mirrors the shell tool so secrets never leak. 300s default timeout (capped at 1800s), 1MB stdout/stderr caps. PATH is prepended with the resolved bin dir so child processes see managed `node`/`npm`. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
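The POSIX single-quote quoting mentioned above works as sketched here; `shell_quote` is an assumed helper name:

```rust
/// Wrap an argument in single quotes and escape embedded quotes as '\''
/// so arbitrary user input stays inert to the shell.
fn shell_quote(arg: &str) -> String {
    format!("'{}'", arg.replace('\'', r"'\''"))
}

fn main() {
    println!("{}", shell_quote("it's $(rm -rf /)"));
}
```

Inside single quotes the shell performs no expansion at all, so `$(...)`, backticks, `;`, and globs are all literal; the only character needing treatment is the single quote itself.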
…#681) Thin npm CLI wrapper paired with node_exec. Subcommands go through `is_sane_subcommand` (alphanumerics + `._-:` only) and a deny-list (publish/adduser/login/token/…) blocks registry-mutation and auth flows. `cwd` is resolved under the workspace — absolute paths and `..` components are rejected. 600s default timeout (1800s ceiling), 1MB stdout/stderr caps. PATH prepended with managed bin dir so npm's own node/corepack lookups hit the managed toolchain. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
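The subcommand gate can be sketched as below. The deny-list shown contains only the verbs the message names; the real list is longer and elided here:

```rust
/// Registry-mutation and auth verbs named in the commit message; the real
/// deny-list is longer (the "…" in the message elides the rest).
const DENIED: &[&str] = &["publish", "adduser", "login", "token"];

/// Allow only alphanumerics plus `._-:`, mirroring the described filter.
fn is_sane_subcommand(sub: &str) -> bool {
    !sub.is_empty()
        && sub.chars().all(|c| c.is_ascii_alphanumeric() || "._-:".contains(c))
}

fn subcommand_allowed(sub: &str) -> bool {
    is_sane_subcommand(sub) && !DENIED.contains(&sub)
}

fn main() {
    println!("{}", subcommand_allowed("install"));
}
```

The character filter alone already blocks shell metacharacters and spaces, so an injection like `install; rm -rf /` never reaches the deny-list check.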
…yhumansai#681) Adds a non-blocking `NodeBootstrap::try_cached()` primitive that peeks the memoised `ResolvedNode` without holding the async lock or triggering a download. ShellTool gains an optional `node_bootstrap` field wired through a new `with_node_bootstrap` constructor; when set, each shell invocation consults `try_cached()` and, on a hit, prepends the managed bin dir to the child PATH using the platform separator. Unrelated shell commands stay byte-identical — no download is ever forced from the shell path. Consequence: once `node_exec`/`npm_exec` have resolved the toolchain once, skills that shell out to `node`/`npm`/`npx`/`corepack` transparently pick up the managed install without any per-command coordination. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
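The PATH prepend with the platform separator can be sketched via `std::env::join_paths`; `prepend_to_path` is an assumed name for the behaviour described:

```rust
use std::env;
use std::path::PathBuf;

/// Prepend the managed bin dir to a PATH value using the platform separator
/// (':' on Unix, ';' on Windows) so child processes resolve managed node first.
fn prepend_to_path(bin_dir: PathBuf, current: Option<&str>) -> String {
    let mut paths: Vec<PathBuf> = vec![bin_dir];
    if let Some(cur) = current {
        paths.extend(env::split_paths(cur));
    }
    env::join_paths(paths)
        .expect("paths contain no separator characters")
        .to_string_lossy()
        .into_owned()
}

fn main() {
    let new_path = prepend_to_path(PathBuf::from("/opt/node/bin"), Some("/usr/bin:/bin"));
    println!("{new_path}");
}
```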
…umansai#681)

Wires the skills-oriented Node.js toolchain into the tool registry:

* Constructs one session-scoped `NodeBootstrap` in `all_tools_with_runtime` when `root_config.node.enabled` is true — all three consumers (ShellTool, NodeExecTool, NpmExecTool) share the same `Arc<NodeBootstrap>` so the download/extract/install pipeline runs at most once per session and the memoised `ResolvedNode` is reused across every shelling-out call.
* Swaps ShellTool to `with_node_bootstrap(...)` when the bootstrap exists so shell PATH injection fires automatically once any node/npm tool has resolved the runtime.
* Registers `node_exec` and `npm_exec` only when the flag is on — flipping `node.enabled = false` cleanly removes both tools and falls back to the legacy ShellTool construction.
* Adds two regression tests (`all_tools_registers_node_exec_when_node_enabled` and `all_tools_excludes_node_exec_when_node_disabled`) so future refactors can't silently drop the wiring.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
tinyhumansai#681) code_executor is the sandboxed developer sub-agent — it writes, runs, and debugs code — and tool_maker is the narrow self-healing agent that polyfills missing commands. Both now see the managed Node.js tools so skills and polyfills can call into the resolved runtime directly rather than relying on whatever `node`/`npm` happens to be on the host PATH. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…yhumansai#681) Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
📝 Walkthrough

Adds a managed Node.js runtime: config schema and env overrides, system detection with timeouts, verified download/extract/install, a memoized bootstrap resolver, two new tools (node_exec, npm_exec) with conditional registration and PATH injection, plus SKILL.md frontmatter parsing and multi-root skill discovery.

Changes
sequenceDiagram
participant Client
participant NodeBootstrap
participant SystemResolver
participant Downloader
participant Extractor
participant FileSystem
Client->>NodeBootstrap: resolve()
activate NodeBootstrap
NodeBootstrap->>NodeBootstrap: check_cached()
alt Cached exists
NodeBootstrap-->>Client: ResolvedNode
else
alt prefer_system
NodeBootstrap->>SystemResolver: detect_system_node(target_version)
activate SystemResolver
SystemResolver->>FileSystem: locate node, run `node --version` (5s timeout)
SystemResolver-->>NodeBootstrap: Some(SystemNode) or None
deactivate SystemResolver
end
alt SystemNode available
NodeBootstrap-->>Client: ResolvedNode (System)
else
NodeBootstrap->>Downloader: fetch_shasums(version)
activate Downloader
Downloader->>FileSystem: GET SHASUMS256.txt
Downloader-->>NodeBootstrap: shasums map
deactivate Downloader
NodeBootstrap->>Downloader: download_distribution(archive)
activate Downloader
Downloader->>FileSystem: stream download + SHA-256 verification
Downloader-->>NodeBootstrap: verified archive
deactivate Downloader
NodeBootstrap->>Extractor: extract_distribution(archive)
activate Extractor
Extractor->>FileSystem: extract to staged dir (tar.xz or zip)
Extractor-->>NodeBootstrap: staged top-level dir
deactivate Extractor
NodeBootstrap->>Extractor: atomic_install(staged, final_dest)
activate Extractor
Extractor->>FileSystem: rename staged→final (backup if exists)
Extractor-->>NodeBootstrap: final_dest
deactivate Extractor
NodeBootstrap-->>Client: ResolvedNode (Managed)
end
end
deactivate NodeBootstrap
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related issues
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
banger PR. waiting to merge
Actionable comments posted: 5
🧹 Nitpick comments (2)
src/openhuman/skills/ops.rs (1)
502-514: Dead code: `SKILL.md` checks are unreachable in legacy loader.

`load_from_legacy_manifest` is only called when `SKILL.md` doesn't exist (see `load_skill_dir` lines 383-393). The `skill_md.exists()` checks here will always be false:

- Line 504: `read_skill_md_description(&skill_md)` will always return `None` (the file read fails), falling back to "No description provided".
- Line 510: The conditional will always take the `else` branch.

Consider simplifying:
♻️ Proposed simplification

- let skill_md = dir.join(SKILL_MD);
  let description = if manifest.description.is_empty() {
-     read_skill_md_description(&skill_md)
-         .unwrap_or_else(|| "No description provided".to_string())
+     "No description provided".to_string()
  } else {
      manifest.description
  };
- let location = if skill_md.exists() {
-     Some(skill_md)
- } else {
-     Some(manifest_path.to_path_buf())
- };
+ let location = Some(manifest_path.to_path_buf());

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/skills/ops.rs` around lines 502 - 514, The SKILL.md existence checks in load_from_legacy_manifest are dead code because load_skill_dir only calls load_from_legacy_manifest when SKILL.md is absent; remove the redundant skill_md/SKILL_MD existence logic and the call to read_skill_md_description, and instead directly use manifest.description (or the manifest_path fallback) for description and location; update load_from_legacy_manifest to compute description = if manifest.description.is_empty() { "No description provided".to_string() } else { manifest.description } and set location = Some(manifest_path.to_path_buf()), removing references to skill_md, SKILL_MD, and read_skill_md_description.

src/openhuman/config/schema/load.rs (1)
796-824: Log invalid Node boolean env values and de-duplicate parsing.

At Line 797 and Line 817, invalid values are silently ignored. Please emit a warning (with the env var name) and reuse one parser to keep behavior consistent.
As per coding guidelines: "Add substantial, development-oriented logs while implementing features or fixes so issues are easy to trace end-to-end; log critical checkpoints including ... branch decisions ... and error handling paths."

♻️ Suggested refactor
+ let parse_bool_env = |name: &str, raw: &str| -> Option<bool> {
+     match raw.trim().to_ascii_lowercase().as_str() {
+         "1" | "true" | "yes" | "on" => Some(true),
+         "0" | "false" | "no" | "off" => Some(false),
+         _ => {
+             tracing::warn!(env = %name, value = %raw, "invalid boolean env override ignored");
+             None
+         }
+     }
+ };
+
  // Node runtime overrides
  if let Ok(flag) = std::env::var("OPENHUMAN_NODE_ENABLED") {
-     let normalized = flag.trim().to_ascii_lowercase();
-     match normalized.as_str() {
-         "1" | "true" | "yes" | "on" => self.node.enabled = true,
-         "0" | "false" | "no" | "off" => self.node.enabled = false,
-         _ => {}
+     if let Some(enabled) = parse_bool_env("OPENHUMAN_NODE_ENABLED", &flag) {
+         self.node.enabled = enabled;
      }
  }
@@
  if let Ok(flag) = std::env::var("OPENHUMAN_NODE_PREFER_SYSTEM") {
-     let normalized = flag.trim().to_ascii_lowercase();
-     match normalized.as_str() {
-         "1" | "true" | "yes" | "on" => self.node.prefer_system = true,
-         "0" | "false" | "no" | "off" => self.node.prefer_system = false,
-         _ => {}
+     if let Some(prefer_system) = parse_bool_env("OPENHUMAN_NODE_PREFER_SYSTEM", &flag) {
+         self.node.prefer_system = prefer_system;
      }
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/config/schema/load.rs` around lines 796 - 824, The env-boolean parsing for OPENHUMAN_NODE_ENABLED and OPENHUMAN_NODE_PREFER_SYSTEM silently ignores invalid values and the parsing logic is duplicated; add a single helper function (e.g., parse_env_bool or parse_bool_env) in load.rs that takes the env var name, reads and normalizes the value, returns Result<bool, String> or Option<bool>, and use it for both self.node.enabled and self.node.prefer_system; when the helper encounters an unrecognized value, emit a warning log including the env var name and the invalid value so callers see why the fallback occurred, and replace the duplicated match blocks with calls to this helper to keep behavior consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/node_runtime/bootstrap.rs`:
- Around line 245-253: resolve_from_system currently forwards SystemNode.version
(which includes a leading "v") into build_resolved, violating the
ResolvedNode::version contract; update resolve_from_system to normalize the
version by removing a leading 'v' or 'V' (e.g. via strip_prefix or
trim_start_matches) before calling build_resolved so ResolvedNode::version is
always stored without the leading "v" (refer to resolve_from_system, SystemNode,
build_resolved, and ResolvedNode::version).
In `@src/openhuman/node_runtime/downloader.rs`:
- Around line 181-217: The download loop can leave a truncated file at
target_path on failures from response.chunk(), file.write_all(), or a suppressed
file.flush(); modify the code in the download routine (the loop that calls
response.chunk(), file.write_all(), and the subsequent file.flush()) to ensure
the partial file is removed on any error path and to propagate flush errors
instead of ignoring them. Implement this by adding a cleanup step that calls
tokio::fs::remove_file(target_path).await in the error branch (or use a
scope/guard that removes the file on Err) and replace file.flush().await.ok()
with a propagated result (using ? or map_err) so flush failures trigger cleanup
and return an error, preserving the existing SHA mismatch removal behavior for
consistency. Ensure you reference target_path, response.chunk(), write_all(),
and file.flush() when making the changes.
In `@src/openhuman/node_runtime/extractor.rs`:
- Around line 181-205: The current sequence renames final_dest to a backup and
then performs fs::rename(&staged, &final_dest) but does not restore the backup
if that second rename fails; modify the code around the fs::rename(&staged,
&final_dest) call in extractor.rs so that on error you attempt to move the
backup path (the Option<PathBuf> named backup) back to final_dest before
returning the error: capture the rename error, if backup.is_some() attempt
fs::rename(&backup_path, &final_dest) (or fs::remove_dir_all then fs::rename as
appropriate) and log any restore failure as secondary, then return the original
error (with context) so the working runtime is restored to the previous install
when staged->final_dest fails.
In `@src/openhuman/node_runtime/resolver.rs`:
- Around line 128-145: The probe_node_version function currently calls
Command::output() which blocks indefinitely; replace that with spawning the
child and enforcing a real timeout: use
Command::new(path).arg("--version")...spawn() to get a Child, then use a timeout
API (e.g. wait_timeout::ChildExt::wait_timeout or an async tokio::time::timeout
alternative) with Duration::from_secs(5) to wait for the child; if the child
exits within the timeout, call child.wait_with_output() to capture stdout/stderr
and return the version string, otherwise kill the child (child.kill() and
child.wait()) and return None. Update references in probe_node_version to use
spawn, wait_timeout, kill, and wait_with_output accordingly.
In `@src/openhuman/tools/impl/system/node_exec.rs`:
- Around line 162-175: The code builds a shell command using script_path and
extra_args without validating they stay inside workspace_dir, allowing path
traversal like "../etc/passwd"; implement a resolver similar to
npm_exec.rs::resolve_cwd(): add a new resolve_script_path(workspace_dir,
script_path) that rejects absolute paths and any ".." components, canonicalizes
the joined path against workspace_dir, returns the safe relative/resolved path,
and use that resolved path (instead of the raw script_path) when calling
shell_quote in node_exec.rs (also apply the same validation to any extra_args
that represent file paths); reference inline_code, script_path, extra_args,
shell_quote, resolved.node_bin and replace usages accordingly.
---
Nitpick comments:
In `@src/openhuman/config/schema/load.rs`:
- Around line 796-824: The env-boolean parsing for OPENHUMAN_NODE_ENABLED and
OPENHUMAN_NODE_PREFER_SYSTEM silently ignores invalid values and the parsing
logic is duplicated; add a single helper function (e.g., parse_env_bool or
parse_bool_env) in load.rs that takes the env var name, reads and normalizes the
value, returns Result<bool, String> or Option<bool>, and use it for both
self.node.enabled and self.node.prefer_system; when the helper encounters an
unrecognized value, emit a warning log including the env var name and the
invalid value so callers see why the fallback occurred, and replace the
duplicated match blocks with calls to this helper to keep behavior consistent.
In `@src/openhuman/skills/ops.rs`:
- Around line 502-514: The SKILL.md existence checks in
load_from_legacy_manifest are dead code because load_skill_dir only calls
load_from_legacy_manifest when SKILL.md is absent; remove the redundant
skill_md/SKILL_MD existence logic and the call to read_skill_md_description, and
instead directly use manifest.description (or the manifest_path fallback) for
description and location; update load_from_legacy_manifest to compute
description = if manifest.description.is_empty() { "No description
provided".to_string() } else { manifest.description } and set location =
Some(manifest_path.to_path_buf()), removing references to skill_md, SKILL_MD,
and read_skill_md_description.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7a5b9a6c-1f56-4999-a1a3-43d017fb7e40
⛔ Files ignored due to path filters (2)
Cargo.lock is excluded by !**/*.lock
app/src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (20)
Cargo.toml
src/openhuman/agent/agents/code_executor/agent.toml
src/openhuman/agent/agents/tool_maker/agent.toml
src/openhuman/channels/tests/prompt.rs
src/openhuman/config/schema/load.rs
src/openhuman/config/schema/mod.rs
src/openhuman/config/schema/node.rs
src/openhuman/config/schema/types.rs
src/openhuman/mod.rs
src/openhuman/node_runtime/bootstrap.rs
src/openhuman/node_runtime/downloader.rs
src/openhuman/node_runtime/extractor.rs
src/openhuman/node_runtime/mod.rs
src/openhuman/node_runtime/resolver.rs
src/openhuman/skills/ops.rs
src/openhuman/tools/impl/system/mod.rs
src/openhuman/tools/impl/system/node_exec.rs
src/openhuman/tools/impl/system/npm_exec.rs
src/openhuman/tools/impl/system/shell.rs
src/openhuman/tools/ops.rs
…em (tinyhumansai#681) CodeRabbit R1 flagged that build_resolved receives a raw version string from SystemNode. detect_system_node already trims the prefix, but trim defensively at the resolve_from_system boundary too so any future code path constructing SystemNode with an un-normalised version won't emit "vvX.Y.Z". Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…es (tinyhumansai#681) CodeRabbit R2 flagged that download_distribution silently swallowed flush errors via `.ok()` and left partial files on disk when any chunk/write/flush step errored. Wrap the streaming loop in an async block returning Result<()>, propagate flush errors, and on any failure delete the partial archive so a retry starts clean and callers never see a half-written file. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…inyhumansai#681) CodeRabbit R3 flagged that atomic_install left the user with no Node runtime on disk if the staged->final rename failed after a backup had been taken. Wrap the rename in `if let Err`, restore the backup if present, log restore failures separately (warning), and always return the original error so the caller sees the true failure cause. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…inyhumansai#681) CodeRabbit R4 flagged that probe_node_version used Command::output() with no real timeout — a broken shim or FUSE-backed binary could hang the bootstrap forever. Add wait-timeout crate dep and rewrite the probe to spawn with piped stdio, wait_timeout(5s), kill on timeout, and read stdout/stderr from the piped handles after exit. Logs a warning when a probe times out so the install can be diagnosed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CodeRabbit R5 flagged that node_exec joined user-supplied script_path onto the workspace without rejecting `..` segments or absolute paths, letting a prompt-injected `../../../etc/passwd` read arbitrary files via node. Add a resolve_script_path helper mirroring npm_exec::resolve_cwd — reject empty, absolute, parent-dir, and Windows-prefix components. Extra positional args stay opaque (shell-quoted, not path-checked) and get an explanatory comment at the call site. Unit tests cover each rejection case plus the relative- subdir happy path. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
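The rejection rules described can be sketched with the stdlib; `resolve_script_path` here is an approximation of the helper in node_exec.rs and may differ in error type and canonicalisation:

```rust
use std::path::{Component, Path, PathBuf};

/// Reject empty, absolute, parent-dir, and Windows-prefix components before
/// joining the user-supplied script path onto the workspace.
fn resolve_script_path(workspace: &Path, script: &str) -> Result<PathBuf, String> {
    if script.trim().is_empty() {
        return Err("script_path is empty".into());
    }
    let candidate = Path::new(script);
    for component in candidate.components() {
        match component {
            Component::Normal(_) | Component::CurDir => {}
            Component::ParentDir => return Err("`..` components are rejected".into()),
            Component::RootDir | Component::Prefix(_) => {
                return Err("absolute paths are rejected".into())
            }
        }
    }
    Ok(workspace.join(candidate))
}

fn main() {
    let ws = Path::new("/ws");
    println!("{:?}", resolve_script_path(ws, "scripts/build.js"));
}
```

Iterating `Path::components()` rather than scanning the raw string is what catches `..` reliably, since the component parser normalises separators for the host platform.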
…tinyhumansai#681) CodeRabbit N1 flagged dead code: load_from_legacy_manifest only runs when load_skill_dir already determined SKILL.md is absent, so the fallback that re-checked for SKILL.md and its helper read_skill_md_description could never execute. Simplify to a direct description/location decision and delete the dead helper. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…values (tinyhumansai#681) CodeRabbit N2 flagged two near-identical boolean env-override match blocks for OPENHUMAN_NODE_ENABLED and OPENHUMAN_NODE_PREFER_SYSTEM. Extract a module-level parse_env_bool helper that accepts 1/true/yes/on and 0/false/no/off (case-insensitive) and emits a tracing::warn when the value is unrecognised so silent mis-spellings don't invisibly leave the config unchanged. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/node_runtime/bootstrap.rs`:
- Around line 230-235: probe_managed_install() is currently accepting whatever
build_resolved() returns even when npm is missing, causing a corrupted managed
cache to be reused; change probe_managed_install() (and the same logic around
the npm check used at the other site) to validate that npm_bin.exists() (and any
other required launchers) before accepting a managed install and return an
error/None when npm is absent so build_resolved() will trigger a reinstall;
specifically, update the code paths that inspect npm_bin (the npm launcher check
near the existing tracing::warn!) to fail the probe instead of only warning,
ensuring npm_exec can recover by forcing a fresh managed install.
In `@src/openhuman/node_runtime/resolver.rs`:
- Around line 116-125: The which_node() function currently returns the first
regular file named node but doesn't verify it is executable; change which_node
to skip non-executable matches by checking candidate metadata for execute
permission before returning. Implement an is_executable check used in
which_node: on Unix inspect metadata().permissions().mode() & 0o111 != 0, and on
Windows treat the existence of the .exe suffix (std::env::consts::EXE_SUFFIX) as
sufficient (or use metadata().is_file()); only return Some(candidate) when
is_executable(candidate) is true to avoid masking later valid installs.
In `@src/openhuman/skills/ops.rs`:
- Around line 563-569: The YAML parse failure is only logged and replaced with
SkillFrontmatter::default(), so the real parse error never appears in the
Skill.warnings shown in the catalog; update the
serde_yaml::from_str::<SkillFrontmatter>(&yaml) error branch to append a
diagnostic containing the parse error (err) to the Skill.warnings collection for
this skill (while still returning SkillFrontmatter::default()), so the concrete
parse message is surfaced in Skill.warnings instead of only a log line. Ensure
you reference the local frontmatter variable and push the formatted error into
the skill's warnings slice/vec where other warnings are collected.
- Around line 592-605: The walk_files function currently uses
path.is_dir()/is_file() which follow symlinks; change it to detect and skip
symlinks so you neither recurse into directory symlinks nor treat symlinked
files as bundled. Inside walk_files (and when iterating entries) call
entry.file_type() or std::fs::symlink_metadata(&path) and if
file_type.is_symlink() { continue; } then only recurse when file_type.is_dir()
and push only when file_type.is_file(); preserve existing error handling for
metadata/file_type calls.
- Around line 356-376: scan_root currently iterates std::fs::read_dir() which
yields entries in unspecified order causing nondeterministic winners when two
sibling folders declare the same frontmatter.name; change scan_root to collect
the directory entries into a Vec, filter to directories and non-dot names, sort
that Vec by a stable key (e.g. entry.file_name() as a string, optionally
case-insensitive) and then iterate the sorted list calling load_skill_dir(&path,
&dir_name, scope) so same-scope name collisions are resolved deterministically;
keep existing error handling where read_dir fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 42f0cec6-de2c-4514-bb92-5692d490462c
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- Cargo.toml
- src/openhuman/config/schema/load.rs
- src/openhuman/node_runtime/bootstrap.rs
- src/openhuman/node_runtime/downloader.rs
- src/openhuman/node_runtime/extractor.rs
- src/openhuman/node_runtime/resolver.rs
- src/openhuman/skills/ops.rs
- src/openhuman/tools/impl/system/node_exec.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- src/openhuman/node_runtime/extractor.rs
- src/openhuman/node_runtime/downloader.rs
- src/openhuman/tools/impl/system/node_exec.rs
`probe_managed_install` now verifies the npm launcher exists before reusing an extracted install. A download interrupted after `node` was extracted but before `npm` would otherwise be cached forever and `npm_exec` could never self-heal — now the corrupted cache forces a fresh download via the normal resolve path. Addresses CodeRabbit review feedback on PR tinyhumansai#723. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…inyhumansai#723) `which_node` previously returned the first matching filename on `PATH` regardless of its execute bit. A non-executable `node` placeholder earlier in `PATH` (e.g. an unprivileged shim left by a failed install) would mask a valid later install and force the managed runtime download. Now checks `file && (mode & 0o111 != 0)` on Unix to mirror shell `which` behaviour. Windows remains unchanged — `.exe` suffix already encodes executability for the loader. Addresses CodeRabbit review feedback on PR tinyhumansai#723. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
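A minimal sketch of the execute-bit check described above, assuming `which_node` scans `PATH` entries directly (the function names follow the commit message; the real resolver may differ). Unix-only, matching the commit's note that Windows is unchanged:

```rust
use std::path::{Path, PathBuf};

// True if `path` is a regular file with at least one execute bit set,
// mirroring how shell `which` filters PATH candidates.
#[cfg(unix)]
fn is_executable_file(path: &Path) -> bool {
    use std::os::unix::fs::PermissionsExt;
    match std::fs::metadata(path) {
        Ok(meta) => meta.is_file() && (meta.permissions().mode() & 0o111) != 0,
        Err(_) => false,
    }
}

// First *executable* `node` on PATH: a non-executable placeholder earlier
// in PATH can no longer mask a valid later install.
#[cfg(unix)]
fn which_node() -> Option<PathBuf> {
    let path_var = std::env::var_os("PATH")?;
    std::env::split_paths(&path_var)
        .map(|dir| dir.join("node"))
        .find(|candidate| is_executable_file(candidate))
}

#[cfg(unix)]
fn main() {
    use std::os::unix::fs::PermissionsExt;
    let dir = std::env::temp_dir().join(format!("which-demo-{}", std::process::id()));
    std::fs::create_dir_all(&dir).unwrap();
    let fake = dir.join("node");
    std::fs::write(&fake, "#!/bin/sh\n").unwrap();
    std::fs::set_permissions(&fake, std::fs::Permissions::from_mode(0o644)).unwrap();
    assert!(!is_executable_file(&fake)); // no execute bit -> rejected
    std::fs::set_permissions(&fake, std::fs::Permissions::from_mode(0o755)).unwrap();
    assert!(is_executable_file(&fake));
    let _ = std::fs::remove_dir_all(&dir);
}

#[cfg(not(unix))]
fn main() {}
```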
`read_dir` order is unspecified by the OS. When two sibling skill directories declare the same logical `frontmatter.name` (which can differ from the folder name), cross-scope/same-scope deduplication downstream would pick a non-deterministic winner across runs. Sort entries by on-disk directory name for a stable, reproducible order so collision resolution is deterministic. Addresses CodeRabbit review feedback on PR tinyhumansai#723. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…i#723) `parse_skill_md` previously swallowed the `serde_yaml` error via `log::warn!` and fell back to an empty `SkillFrontmatter`, then the catalog reported a generic "could not parse — exposing directory as placeholder". Skill authors had no way to see the real cause without scraping logs. Return parse-level diagnostics as a third tuple element. `load_from_skill_md` merges them into the skill's user-visible `warnings`, so the catalog now surfaces the actual YAML error (e.g. "frontmatter parse error: mapping values are not allowed here at line 3 column 12"). Addresses CodeRabbit review feedback on PR tinyhumansai#723. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…#723) `walk_files` used `is_dir()` / `is_file()` which transparently follow symlinks. Two failure modes: 1. **Unbounded recursion** — a skill resource symlink that points back at an ancestor (e.g. `resources/self -> resources/`) would cause infinite traversal, eventually blowing the stack. 2. **Silent out-of-tree leakage** — a symlink pointing at `/`, `/etc`, or another skill's directory would enumerate its contents into the current skill's resource listing. Switch to `entry.file_type()` and skip symlinks before descending. Directories and regular files behave as before. Addresses CodeRabbit review feedback on PR tinyhumansai#723. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/node_runtime/bootstrap.rs`:
- Around line 123-130: The default cache_root should not be workspace-local:
change cache_root() so when config.cache_dir is empty it uses a user-owned OS
cache (e.g. dirs::cache_dir() or dirs_next::cache_dir() and join a stable
subpath like "openhuman/node-runtime") instead of
workspace_dir.join("node-runtime"); keep honor of an explicit config.cache_dir
when set. Add a concise rustdoc on cache_root() describing this policy and
ensure probe_managed_install() and any code that reuses workspace-local runtimes
only do so when an explicit, trusted override/flag is present (e.g. a new
allow_workspace_cache or explicit config.cache_dir value), updating
probe_managed_install() to check that flag before accepting a workspace-local
node/npm tree.
In `@src/openhuman/node_runtime/resolver.rs`:
- Around line 90-110: The current resolver accepts a system Node when majors
match but doesn't verify npm, causing cached System Node to break npm_exec on
distros that package npm separately; update the resolver logic (the code path
that returns Some(SystemNode) and constructs SystemNode) to verify npm is
present and usable before returning: perform a lightweight check (e.g., probe
for npm via the same mechanism used by npm_exec or run "npm --version") and if
the check fails, log a clear message and return None so the managed runtime will
be downloaded instead of caching NodeSource::System.
In `@src/openhuman/skills/ops.rs`:
- Around line 371-380: The directory discovery loop currently uses path.is_dir()
which follows symlinks; modify the loop that iterates entries (the code calling
entry.path(), path.is_dir(), and load_skill_dir) to explicitly skip symlinked
entries before treating them as directories: obtain the DirEntry's file type
(via entry.file_type() or path.symlink_metadata()) and if file_type.is_symlink()
is true then continue, otherwise proceed with the existing is_dir check and call
to load_skill_dir; this ensures symlinked skill directories are ignored.
- Around line 601-608: The function inventory_resources currently descends into
each resource root (RESOURCE_DIRS) after testing root.is_dir(), which follows
symlinks and can recurse into external trees; before calling walk_files(root,
...), call std::fs::symlink_metadata on the root Path and skip the directory if
its FileType reports is_symlink() (i.e., treat symlinked roots as non-existent),
so inventory_resources rejects symlinked top-level resource directories rather
than dereferencing them; update the logic around root, root.is_dir(), and the
walk_files invocation accordingly.
- Around line 221-234: The current shim load_skills(workspace_dir) omits
user-scoped directories and thus drops user-installed skills for existing
callers; update the callers in src/openhuman/agent/harness/session/builder.rs
and src/openhuman/channels/runtime/startup.rs to call discover_skills (or
discover_skills_inner with the appropriate user None/Some args) instead of
load_skills so user home paths (~/.openhuman/skills and ~/.agents/skills) are
included, or alternatively modify load_skills to delegate to discover_skills
(preserving workspace precedence) so its public behavior remains
backward-compatible while retaining user-scope discovery.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9b23deb7-d072-47a9-a61c-48b15fa3bb1b
📒 Files selected for processing (3)
- src/openhuman/node_runtime/bootstrap.rs
- src/openhuman/node_runtime/resolver.rs
- src/openhuman/skills/ops.rs
…humansai#723) Default `cache_root()` to `dirs::cache_dir()/openhuman/node-runtime/` so a repo cannot ship a checked-in `./node-runtime/` and have the bootstrap reuse it as a trusted managed install. Explicit `config.cache_dir` still wins; workspace-local falls back only when `dirs::cache_dir()` is unavailable, and we emit a warning on that fallback. As a second line of defence, `probe_managed_install()` now canonicalises both the install dir and the cache root and refuses any install that escapes the cache tree. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ai#723) On distros that package `node` and `npm` separately (Debian/Ubuntu, Alpine `nodejs-current`, some NixOS setups) the host `node` can be present without `npm`. `npm_exec` then fails every call because the resolved `SystemNode` has no usable npm launcher. Before returning `Some(SystemNode)`, locate `npm` on `PATH` and probe `npm --version` through the same `wait_timeout` path as the node probe. Either missing gate falls back to the managed download flow. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
) Existing production callers (`agent::harness::session::builder`, `channels::runtime::startup`) reach the skill catalog via `load_skills(workspace_dir)`. Previously this shim passed `None` for the home directory, so skills installed under `~/.openhuman/skills/` and `~/.agents/skills/` were silently dropped even after the multi-scope discovery landed in `discover_skills`. Delegate to `discover_skills_inner` with `dirs::home_dir()` so user-scope skills reach the runtime; project-scope still wins on name collision. Tests stay hermetic via a `load_skills_ws` helper that preserves the old workspace-only semantics. A new `load_skills_surfaces_user_scope` test drives a tempdir-as-home through `discover_skills` and asserts the user-scope skill is returned with `SkillScope::User`. Addresses CodeRabbit review comment 3116332244. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
`scan_root` previously accepted any entry where `path.is_dir()` returned true. `is_dir()` dereferences symlinks, so a link from `<skills-root>/foo -> /some/external/tree` would load as a legitimate skill even though `walk_files` already rejects symlinks deeper in the resource walker. Attacker-authored symlinks in a skills root would therefore have one remaining escape hatch. Switch to `entry.file_type()` (non-dereferencing) and reject both symlinked and non-directory entries at the top level. Treat a failed `file_type()` probe as "not safe to traverse" and skip. A new `symlinked_skill_dirs_are_skipped` test (Unix-only, since symlinks are the platform guarantee being tested) creates an external skill tempdir, links it into the workspace skills root, and asserts `load_skills_ws` returns empty. Addresses CodeRabbit review comment 3116332252. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
`inventory_resources` gated each resource sub-root (`scripts/`, `references/`, `assets/`) on `root.is_dir()`. `is_dir()` follows symlinks, so a `scripts -> /etc` link inside a skill directory would pass the check and `walk_files` would inventory the external tree. Deeper symlinks inside the walk were already rejected by `walk_files`, but the root-level check was the missing layer. Use `std::fs::symlink_metadata` for a non-dereferencing probe and reject roots whose own `file_type().is_symlink()` returns true. Treat any `symlink_metadata` error as a non-existent root and skip. A new `symlinked_resource_roots_are_rejected` test (Unix-only) links a skill's `assets/` sub-root to an external tempdir with a file inside and asserts `inventory_resources` returns empty. Addresses CodeRabbit review comment 3116332260. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
🧹 Nitpick comments (2)
src/openhuman/skills/ops.rs (2)
264-297: Add debug/trace checkpoints in skill discovery flow. The new multi-scope flow has key branch decisions (trusted gating, roots scanned, collision outcomes) but no debug/trace instrumentation, which makes “why didn’t my skill load?” hard to diagnose.
As per coding guidelines: "Use `log`/`tracing` at `debug` or `trace` level for development-oriented diagnostics on new or changed flows — log entry/exit points, branch decisions, external calls, retries, state transitions, and error paths."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/skills/ops.rs` around lines 264 - 297, Add debug/trace logging to the skill discovery flow in discover_skills_inner: log entry/exit of discover_skills_inner, the home_dir and workspace_dir presence, whether trusted gating allowed project roots to be scanned, each root returned by user_roots() and project_roots(), each call to scan_root() with the SkillScope, and collision outcomes when absorb() overrides an existing entry in by_name (include the skill name and scopes). Use the project logging/tracing facility at debug/trace level and add logs around the final sort/return of out so callers can see the resulting ordered skill list.
674-1028: Split this module to keep `ops.rs` focused and under the size target.
`src/openhuman/skills/ops.rs` now combines discovery, parsing, resource walking, and a large test suite in one file (>1000 lines). Please split into sibling modules (e.g., `discovery.rs`, `parser.rs`, `resources.rs`, and test modules) to keep responsibilities isolated. As per coding guidelines: "Keep source files ≤ ~500 lines; split modules when growing to maintain single responsibility".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/skills/ops.rs` around lines 674 - 1028, The ops.rs file is too large and mixes discovery, parsing, and resource-walking code plus tests; split responsibilities into sibling modules: create discovery.rs for discover_skills, discover_skills_inner, load_skills_ws (test shim) and init_skills_dir; parser.rs for parse_skill_md and frontmatter parsing helpers; resources.rs for inventory_resources and related FS-walking helpers; move the corresponding unit tests into each module (or into a tests submodule per module), keep test helpers like write adjacent to tests, and update ops.rs (or src/openhuman/skills/mod.rs) to pub use the moved functions (e.g., pub use discovery::{discover_skills, init_skills_dir}; pub use parser::parse_skill_md; pub use resources::inventory_resources) so external callers are unchanged; ensure visibility (pub(crate)/pub) and imports (use crate::openhuman::skills::...) are adjusted and run cargo test to fix any path or privacy errors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/node_runtime/bootstrap.rs`:
- Around line 220-223: After calling build_resolved(bin_dir, version,
NodeSource::Managed) for a fresh install, re-run the managed install validation
(the same checks probe_managed_install performs) against the extracted install
before accepting/caching it; if validation fails (e.g., npm launcher missing or
not a file), reject/cleanup the install and return an error instead of returning
the build result so resolve()/npm_exec cannot remain broken. Apply the same
re-validation to the other fresh-install path around the code at the second
affected block (the other build_resolved return).
- Around line 202-218: The bootstrap currently races across processes because
archive_path and scratch (created from process id) are shared; protect the
cache/version lifecycle by acquiring a filesystem lock scoped to the cache_root
+ dist.archive_name (or version) before downloading/extracting, and release it
after atomic_install completes; replace the PID-only scratch path with a truly
unique temp directory (e.g., create a uniquified temp dir via tempfile::Builder
or mkdtemp semantics) and use that for extract_distribution so concurrent
bootstraps do not remove each other’s staging; ensure download_distribution,
extract_distribution and atomic_install are executed while holding the file lock
and still clean up the unique scratch and archive_path after releasing the lock
as needed.
In `@src/openhuman/skills/ops.rs`:
- Around line 574-585: The loop that splits frontmatter vs body drops any
subsequent lines equal to '---' because the delimiter check runs
unconditionally; update the logic in the for loop (the block using variables
lines, terminated, yaml, body) so the trim-and-check for '---' only occurs when
!terminated — e.g., move the if line.trim() == "---" branch inside the
!terminated branch so that after terminated==true any '---' lines are appended
to body (body.push_str(line); body.push('\n')) instead of being skipped.
- Around line 358-362: Reject symlinked scan roots in scan_root by calling
std::fs::symlink_metadata(root) before attempting to read_dir and returning an
empty Vec if symlink_metadata fails or if metadata.file_type().is_symlink() is
true; update the top of fn scan_root to perform this check (use
std::fs::symlink_metadata and metadata.file_type().is_symlink()) and only call
std::fs::read_dir(root) afterwards so discovery will not traverse outside the
workspace when the provided root is a symlink.
---
Nitpick comments:
In `@src/openhuman/skills/ops.rs`:
- Around line 264-297: Add debug/trace logging to the skill discovery flow in
discover_skills_inner: log entry/exit of discover_skills_inner, the home_dir and
workspace_dir presence, whether trusted gating allowed project roots to be
scanned, each root returned by user_roots() and project_roots(), each call to
scan_root() with the SkillScope, and collision outcomes when absorb() overrides
an existing entry in by_name (include the skill name and scopes). Use the
project logging/tracing facility at debug/trace level and add logs around the
final sort/return of out so callers can see the resulting ordered skill list.
- Around line 674-1028: The ops.rs file is too large and mixes discovery,
parsing, and resource-walking code plus tests; split responsibilities into
sibling modules: create discovery.rs for discover_skills, discover_skills_inner,
load_skills_ws (test shim) and init_skills_dir; parser.rs for parse_skill_md and
frontmatter parsing helpers; resources.rs for inventory_resources and related
FS-walking helpers; move the corresponding unit tests into each module (or into
a tests submodule per module), keep test helpers like write adjacent to tests,
and update ops.rs (or src/openhuman/skills/mod.rs) to pub use the moved
functions (e.g., pub use discovery::{discover_skills, init_skills_dir}; pub use
parser::parse_skill_md; pub use resources::inventory_resources) so external
callers are unchanged; ensure visibility (pub(crate)/pub) and imports (use
crate::openhuman::skills::...) are adjusted and run cargo test to fix any path
or privacy errors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 01688e20-2301-4515-b03e-53bfd93d1f69
📒 Files selected for processing (3)
- src/openhuman/node_runtime/bootstrap.rs
- src/openhuman/node_runtime/resolver.rs
- src/openhuman/skills/ops.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/openhuman/node_runtime/resolver.rs
```rust
let cache_root = self.cache_root();
tokio::fs::create_dir_all(&cache_root)
    .await
    .with_context(|| format!("creating cache root {}", cache_root.display()))?;
let archive_path = cache_root.join(&dist.archive_name);
download_distribution(&self.client, &dist, &archive_path, &expected).await?;

// Extract into a scratch folder so a partial extraction never
// contaminates the cache root; `atomic_install` promotes the
// inner top-level folder into the final install path.
let scratch = cache_root.join(format!(".stage-{}", std::process::id()));
// Wipe any leftover from a previous crashed run.
let _ = tokio::fs::remove_dir_all(&scratch).await;
let top_level = extract_distribution(&archive_path, &scratch, dist.is_zip).await?;
atomic_install(&top_level, &install_dir).await?;
let _ = tokio::fs::remove_dir_all(&scratch).await;
let _ = tokio::fs::remove_file(&archive_path).await;
```
The shared cache still races across bootstraps/processes.
The tokio::sync::Mutex only protects callers sharing this NodeBootstrap. archive_path is global inside cache_root, and the PID-only scratch directory is not unique across bootstrap instances in the same process. Two concurrent installs can overwrite/remove the same archive or staging tree and turn a cold start into a corrupt cache or a spurious bootstrap failure. Please add a filesystem lock scoped to the cache/version, and switch the scratch path to a truly unique temp dir.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/node_runtime/bootstrap.rs` around lines 202 - 218, The
bootstrap currently races across processes because archive_path and scratch
(created from process id) are shared; protect the cache/version lifecycle by
acquiring a filesystem lock scoped to the cache_root + dist.archive_name (or
version) before downloading/extracting, and release it after atomic_install
completes; replace the PID-only scratch path with a truly unique temp directory
(e.g., create a uniquified temp dir via tempfile::Builder or mkdtemp semantics)
and use that for extract_distribution so concurrent bootstraps do not remove
each other’s staging; ensure download_distribution, extract_distribution and
atomic_install are executed while holding the file lock and still clean up the
unique scratch and archive_path after releasing the lock as needed.
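One std-only way to sketch the suggested file lock and unique scratch directory. The `CacheLock` type and `unique_scratch` helper are hypothetical; production code would more likely use an advisory-lock crate such as `fd-lock`, since a `create_new`-based lock file goes stale if the holder crashes:

```rust
use std::fs::OpenOptions;
use std::path::{Path, PathBuf};

// Best-effort cross-process lock built only on std: `create_new` is atomic,
// so whichever process creates `<cache>/<name>.lock` first holds the lock.
struct CacheLock {
    path: PathBuf,
}

impl CacheLock {
    fn try_acquire(cache_root: &Path, name: &str) -> Option<CacheLock> {
        let path = cache_root.join(format!("{name}.lock"));
        match OpenOptions::new().write(true).create_new(true).open(&path) {
            Ok(_) => Some(CacheLock { path }),
            Err(_) => None, // someone else holds it (or the dir is unwritable)
        }
    }
}

impl Drop for CacheLock {
    fn drop(&mut self) {
        let _ = std::fs::remove_file(&self.path);
    }
}

// Unique scratch dir: PID alone is not unique across bootstrap instances in
// one process, so combine it with a process-wide counter.
fn unique_scratch(cache_root: &Path) -> PathBuf {
    use std::sync::atomic::{AtomicU64, Ordering};
    static COUNTER: AtomicU64 = AtomicU64::new(0);
    let n = COUNTER.fetch_add(1, Ordering::Relaxed);
    cache_root.join(format!(".stage-{}-{}", std::process::id(), n))
}

fn main() {
    let root = std::env::temp_dir().join(format!("lock-demo-{}", std::process::id()));
    std::fs::create_dir_all(&root).unwrap();
    let lock = CacheLock::try_acquire(&root, "node-v22.1.0").unwrap();
    // A second acquire for the same archive must fail while the lock is held.
    assert!(CacheLock::try_acquire(&root, "node-v22.1.0").is_none());
    drop(lock);
    assert!(CacheLock::try_acquire(&root, "node-v22.1.0").is_some());
    assert_ne!(unique_scratch(&root), unique_scratch(&root));
    let _ = std::fs::remove_dir_all(&root);
}
```

The download → extract → `atomic_install` sequence would run while the lock is held; the lock name scopes contention to a single archive/version rather than the whole cache root.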
```rust
let bin_dir = managed_bin_dir(&install_dir);
let version = dist.version.trim_start_matches('v').to_string();
build_resolved(bin_dir, version, NodeSource::Managed)
}
```
Re-validate the just-installed tree before caching it.
probe_managed_install() already rejects installs where npm is missing or not a file, but the fresh-install path bypasses that guard and caches whatever build_resolved() returns. If extraction leaves a broken npm launcher or symlink, the first resolve() succeeds and npm_exec stays broken for the rest of the process.
Suggested fix

```diff
- let bin_dir = managed_bin_dir(&install_dir);
- let version = dist.version.trim_start_matches('v').to_string();
- build_resolved(bin_dir, version, NodeSource::Managed)
+ probe_managed_install(&install_dir, &cache_root, &self.config.version)
+     .context("managed install validation failed after extraction")
```

Also applies to: 255-260
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/node_runtime/bootstrap.rs` around lines 220 - 223, After
calling build_resolved(bin_dir, version, NodeSource::Managed) for a fresh
install, re-run the managed install validation (the same checks
probe_managed_install performs) against the extracted install before
accepting/caching it; if validation fails (e.g., npm launcher missing or not a
file), reject/cleanup the install and return an error instead of returning the
build result so resolve()/npm_exec cannot remain broken. Apply the same
re-validation to the other fresh-install path around the code at the second
affected block (the other build_resolved return).
```rust
fn scan_root(root: &Path, scope: SkillScope) -> Vec<Skill> {
    let entries = match std::fs::read_dir(root) {
        Ok(entries) => entries,
        Err(_) => return Vec::new(),
    };
```
Reject symlinked scan roots before calling read_dir.
Line 359 dereferences root directly. If <workspace>/skills or <workspace>/.openhuman/skills is itself a symlink, discovery can traverse outside the workspace tree.
Proposed fix

```diff
 fn scan_root(root: &Path, scope: SkillScope) -> Vec<Skill> {
+    let meta = match std::fs::symlink_metadata(root) {
+        Ok(m) => m,
+        Err(_) => return Vec::new(),
+    };
+    if meta.file_type().is_symlink() || !meta.is_dir() {
+        return Vec::new();
+    }
+
     let entries = match std::fs::read_dir(root) {
         Ok(entries) => entries,
         Err(_) => return Vec::new(),
     };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/skills/ops.rs` around lines 358 - 362, Reject symlinked scan
roots in scan_root by calling std::fs::symlink_metadata(root) before attempting
to read_dir and returning an empty Vec if symlink_metadata fails or if
metadata.file_type().is_symlink() is true; update the top of fn scan_root to
perform this check (use std::fs::symlink_metadata and
metadata.file_type().is_symlink()) and only call std::fs::read_dir(root)
afterwards so discovery will not traverse outside the workspace when the
provided root is a symlink.
```rust
for line in lines {
    if line.trim() == "---" {
        terminated = true;
        continue;
    }
    if !terminated {
        yaml.push_str(line);
        yaml.push('\n');
    } else {
        body.push_str(line);
        body.push('\n');
    }
```
Do not drop --- lines from Markdown body after frontmatter ends.
At Line 575, the delimiter check runs even after termination, so body lines equal to --- are silently removed. That mutates valid SKILL.md content.
Proposed fix

```diff
 for line in lines {
-    if line.trim() == "---" {
+    if !terminated && line.trim() == "---" {
         terminated = true;
         continue;
     }
     if !terminated {
         yaml.push_str(line);
         yaml.push('\n');
     } else {
         body.push_str(line);
         body.push('\n');
     }
 }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
for line in lines {
    if !terminated && line.trim() == "---" {
        terminated = true;
        continue;
    }
    if !terminated {
        yaml.push_str(line);
        yaml.push('\n');
    } else {
        body.push_str(line);
        body.push('\n');
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/skills/ops.rs` around lines 574 - 585, The loop that splits
frontmatter vs body drops any subsequent lines equal to '---' because the
delimiter check runs unconditionally; update the logic in the for loop (the
block using variables lines, terminated, yaml, body) so the trim-and-check for
'---' only occurs when !terminated — e.g., move the if line.trim() == "---"
branch inside the !terminated branch so that after terminated==true any '---'
lines are appended to body (body.push_str(line); body.push('\n')) instead of
being skipped.
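The corrected splitter can be exercised as a self-contained function (the `split_frontmatter` wrapper is illustrative; in the PR the loop lives inline in `parse_skill_md`). Only the first closing `---` terminates the frontmatter; later `---` lines, e.g. Markdown horizontal rules, stay in the body verbatim:

```rust
// Split already-opened frontmatter lines into (yaml, body). The input is
// the content after the opening `---` delimiter was stripped.
fn split_frontmatter(lines: &[&str]) -> (String, String) {
    let mut yaml = String::new();
    let mut body = String::new();
    let mut terminated = false;
    for line in lines {
        // The delimiter check only runs while still inside the frontmatter.
        if !terminated && line.trim() == "---" {
            terminated = true;
            continue;
        }
        if !terminated {
            yaml.push_str(line);
            yaml.push('\n');
        } else {
            body.push_str(line);
            body.push('\n');
        }
    }
    (yaml, body)
}

fn main() {
    let lines = ["name: demo", "---", "# Title", "---", "after the rule"];
    let (yaml, body) = split_frontmatter(&lines);
    assert_eq!(yaml, "name: demo\n");
    // The second `---` survives as a horizontal rule in the body.
    assert_eq!(body, "# Title\n---\nafter the rule\n");
}
```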
…tinyhumansai#681) Introduces `install_skill_from_url(url, timeout_secs?)` — a JSON-RPC method that shells out to `npx --yes skills add <url>` under the managed Node runtime so the UI can install published SKILL.md packages directly.
Security posture:
- https scheme only (no http, file, ssh, git+https…)
- Rejects `localhost`, `*.localhost`, `*.local`, RFC1918 private IPv4, loopback, link-local, multicast, broadcast, unspecified, 100.64/10 CGN, 0.0.0.0/8, and IPv6 loopback/unspecified/multicast, fc00::/7 ULA, fe80::/10 link-local. Explicitly covers 169.254.169.254 cloud metadata.
- Trims + caps URL at 2048 chars; parses with the `url` crate.
- IPv6 brackets stripped from `host_str()` before address parse.
Process posture:
- Reuses `NodeBootstrap` so the managed toolchain resolves first.
- `env_clear()` + explicit PATH injection (bootstrap bin_dir first) + a narrow safe-env allow-list (HOME, TERM, LANG, LC_ALL, LC_CTYPE, USER, SHELL, TMPDIR). Matches the npm_exec pattern from tinyhumansai#723.
- Default 60s wall-clock timeout, capped at 600s.
- Captures stdout/stderr; returns both on success or failure.
- Diff-based `new_skills`: snapshots discovered skills pre-install and reports slugs that appear post-install.
Surface:
- JSON-RPC: `openhuman.skills_install_from_url`
  params: { url: string, timeout_secs?: number }
  result: { url, stdout, stderr, new_skills[] }
- Wired into `all_skills_controller_schemas()` and `all_skills_registered_controllers()`.
Tests (5 new unit tests, all pass — 51/51 in skills::):
- validate_install_url_accepts_public_https
- validate_install_url_rejects_non_https_scheme (http, file, ftp, ssh, git+https, javascript)
- validate_install_url_rejects_empty_and_oversized
- validate_install_url_rejects_private_and_loopback (20 URLs inc. CGN, cloud metadata, IPv6 ULA/link-local/loopback/multicast)
- validate_install_url_rejects_malformed (missing scheme, empty host, non-https scheme, unparseable bracketed host)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
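The host-blocking rules from the commit message can be sketched with only `std::net`. This is an assumption-laden sketch: the real code parses the full URL with the `url` crate first, `host_is_blocked` and both helper names are hypothetical, and plain hostnames pass without DNS resolution (so DNS rebinding is out of scope here):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

fn is_blocked_v4(ip: Ipv4Addr) -> bool {
    let o = ip.octets();
    ip.is_private()             // RFC1918
        || ip.is_loopback()     // 127.0.0.0/8
        || ip.is_link_local()   // 169.254.0.0/16 (covers 169.254.169.254)
        || ip.is_multicast()
        || ip.is_broadcast()
        || ip.is_unspecified()
        || o[0] == 0            // 0.0.0.0/8
        || (o[0] == 100 && (o[1] & 0b1100_0000) == 64) // 100.64.0.0/10 CGN
}

fn is_blocked_v6(ip: Ipv6Addr) -> bool {
    let seg = ip.segments()[0];
    ip.is_loopback()
        || ip.is_unspecified()
        || ip.is_multicast()
        || (seg & 0xfe00) == 0xfc00 // fc00::/7 unique local
        || (seg & 0xffc0) == 0xfe80 // fe80::/10 link-local
}

// Classify an already-extracted host string (IPv6 brackets stripped).
fn host_is_blocked(host: &str) -> bool {
    let lower = host.to_ascii_lowercase();
    if lower == "localhost" || lower.ends_with(".localhost") || lower.ends_with(".local") {
        return true;
    }
    match lower.parse::<IpAddr>() {
        Ok(IpAddr::V4(v4)) => is_blocked_v4(v4),
        Ok(IpAddr::V6(v6)) => is_blocked_v6(v6),
        Err(_) => false, // plain hostname: allowed in this sketch
    }
}

fn main() {
    for blocked in ["localhost", "api.local", "127.0.0.1", "10.1.2.3",
                    "169.254.169.254", "100.64.0.1", "::1", "fd00::1", "fe80::1"] {
        assert!(host_is_blocked(blocked), "expected {blocked} to be blocked");
    }
    for ok in ["example.com", "93.184.216.34", "2606:4700::6810:84e5"] {
        assert!(!host_is_blocked(ok), "expected {ok} to be allowed");
    }
}
```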
…681) (#740) * feat(skills/core): add read_skill_resource with size + traversal guards (#681) Introduces `read_skill_resource(skill_id, relative_path)` in the skills ops module. Used by the new `skills.read_resource` RPC (landed in a follow-up commit) to let the UI preview files bundled alongside a SKILL.md without having to shell out to the Node runtime. Guards rejecting each known attack surface have their own unit test: - empty skill_id / empty relative_path - unknown skill - absolute paths - `..` traversal escapes (checked after canonicalization against the skill root, reusing the pattern from the Node exec allowlist) - directory targets - symlinked leaves (reject via `symlink_metadata` before open) - files over the 128 KB cap - non-UTF-8 content (binary allowlist is text-only) Happy-path test covers a small text resource under the skill root. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com> * feat(skills/core): wire skills.read_resource RPC + namespace (#681) Adds `skills` RPC namespace with `skills.list` and `skills.read_resource` handlers. `read_resource` delegates to `read_skill_resource` (previous commit) and surfaces path-traversal / size / encoding errors back to the caller verbatim so the UI can render the error string as-is. - `src/openhuman/skills/schemas.rs` (new): controller + schema definitions, plus unit tests for schema name stability, round-trip of the minimum `SkillSummary` fields, and controller list/schema length parity. - `src/openhuman/skills/mod.rs`: declare `pub mod schemas` and re-export `all_skills_controller_schemas`, `all_skills_registered_controllers`, and `skills_schemas`. - `src/core/all.rs`: register the controllers + schemas and add a namespace description so the RPC discovery endpoint surfaces it. 
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(app/api): typed skillsApi client for list + read_resource (#681)

Thin typed wrapper around the `skills.list` and `skills.read_resource` RPCs added in the previous commit. The client normalises the backend response shape (bytes + UTF-8 content) and rethrows backend error strings verbatim so the preview pane can render them unchanged.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(app/ui): SkillResourceTree groups bundled resources by top dir (#681)

Presentational component that takes the `resources: PathBuf[]` from a loaded Skill and renders it as a grouped list (scripts, assets, references, etc. based on the first path segment). Selecting a leaf calls `onSelect(relativePath)` so the parent drawer can drive preview state. Stateless — no fetching, no effects. Styling follows the stone/coral design tokens used across the Skills page.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(app/ui): SkillResourcePreview size-gated viewer + specs (#681)

Presentational component that fetches a single bundled resource via `skillsApi.readSkillResource`. Backend caps payloads at 128 KB and either returns UTF-8 text or a plain error string, so the preview pane has three visual states: loading, error, success.
- On error (e.g. "path escape", ">128KB", "non-UTF-8"), renders the backend message verbatim in a coral panel.
- On success, renders a monospace pre block with the byte count in the footer.
- `key={id:path}` on the mount site (in SkillDetailDrawer, next commit) drives a remount when the selected resource changes — so no setState-in-effect hack is needed to reset loading state.

Vitest specs cover: loading state, success rendering with byte footer, error rendering for traversal / oversize / encoding strings, cancelled fetch guard on unmount.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(app/ui): SkillDetailDrawer right-side detail panel + specs (#681)

Slide-in right-hand drawer that displays frontmatter metadata (description, version, author, license, tags, allowed_tools) and hosts SkillResourceTree + SkillResourcePreview. Opened by clicking a skill card on the Skills page (wired in the next commit).
- Focus management: on mount, focuses the close button via `window.requestAnimationFrame` and restores the previously focused element on unmount.
- Esc + backdrop click dismiss.
- Preview pane is conditionally rendered and keyed on `${skill.id}:${selectedResource}` so changing the selected resource remounts the previewer (avoids setState-in-effect pattern).

Vitest specs cover: render with frontmatter, resource tree click opens preview, close button / Esc / backdrop dismiss paths, focus restoration, empty-resources case.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(app/ui): open SkillDetailDrawer on skill card click (#681)

Wires the Skills page to the new drawer: clicking a skill card sets `selectedSkill` state, which mounts `SkillDetailDrawer`. Dismissing the drawer clears the state. Cards gain an explicit "View details" affordance for discoverability.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(skills/core): add skills.create RPC for scaffolded SKILL.md authoring (#681)

Adds `create_skill` in ops.rs plus the `skills.create` controller + handler in schemas.rs. Writes a minimal SKILL.md (with optional license/author/tags/allowed-tools frontmatter) under the selected scope, scaffolds `scripts/`, `references/`, `assets/` subdirs, and re-discovers the skill to return the parsed SkillSummary. Legacy scope rejects; Project scope requires the trust marker; User scope is always allowed when a home directory is available.
Hardened against path traversal the same way read_skill_resource is: canonicalize the scope root, canonicalize the target dir, reject unless the target starts with the root. Slug derivation is ASCII-only (collapse whitespace/`-`/`_` to a single hyphen, drop other chars, trim hyphens, enforce MAX_NAME_LEN).

Tests (hermetic via create_skill_inner):
- user-scope happy path (slug, metadata, SKILL.md on disk, subdirs)
- slug collision rejection
- invalid name (no alphanumerics) rejection
- project-scope without trust marker rejection
- project-scope with trust marker happy path
- legacy-scope rejection
- empty-description rejection
- slugify edge cases

Closes part of #681 (backend scope for create flow).

* feat(skills/core): add skills.install_from_url RPC via npx skills add (#681)

Introduces `install_skill_from_url(url, timeout_secs?)` — a JSON-RPC method that shells out to `npx --yes skills add <url>` under the managed Node runtime so the UI can install published SKILL.md packages directly.

Security posture:
- https scheme only (no http, file, ssh, git+https…)
- Rejects `localhost`, `*.localhost`, `*.local`, RFC1918 private IPv4, loopback, link-local, multicast, broadcast, unspecified, 100.64/10 CGN, 0.0.0.0/8, and IPv6 loopback/unspecified/multicast, fc00::/7 ULA, fe80::/10 link-local. Explicitly covers 169.254.169.254 cloud metadata.
- Trims + caps URL at 2048 chars; parses with the `url` crate.
- IPv6 brackets stripped from `host_str()` before address parse.

Process posture:
- Reuses `NodeBootstrap` so the managed toolchain resolves first.
- `env_clear()` + explicit PATH injection (bootstrap bin_dir first) + a narrow safe-env allow-list (HOME, TERM, LANG, LC_ALL, LC_CTYPE, USER, SHELL, TMPDIR). Matches the npm_exec pattern from #723.
- Default 60s wall-clock timeout, capped at 600s.
- Captures stdout/stderr; returns both on success or failure.
- Diff-based `new_skills`: snapshots discovered skills pre-install and reports slugs that appear post-install.
Surface:
- JSON-RPC: `openhuman.skills_install_from_url`
  params: { url: string, timeout_secs?: number }
  result: { url, stdout, stderr, new_skills[] }
- Wired into `all_skills_controller_schemas()` and `all_skills_registered_controllers()`.

Tests (5 new unit tests, all pass — 51/51 in skills::):
- validate_install_url_accepts_public_https
- validate_install_url_rejects_non_https_scheme (http, file, ftp, ssh, git+https, javascript)
- validate_install_url_rejects_empty_and_oversized
- validate_install_url_rejects_private_and_loopback (20 URLs inc. CGN, cloud metadata, IPv6 ULA/link-local/loopback/multicast)
- validate_install_url_rejects_malformed (missing scheme, empty host, non-https scheme, unparseable bracketed host)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(app/api): skillsApi.createSkill + installSkillFromUrl wrappers (#681)

Adds typed frontend wrappers for the two new skill-authoring RPC methods:
- `skillsApi.createSkill(input)` — scaffolds a new SKILL.md skill via `openhuman.skills_create`. Accepts camelCase `allowedTools` and rekeys it to the `allowed-tools` spelling the SKILL.md frontmatter convention expects, matching `SkillsCreateParams` in `src/openhuman/skills/schemas.rs`. Optional fields are only sent when explicitly provided so the Rust `#[serde(default)]` defaults apply cleanly.
- `skillsApi.installSkillFromUrl(input)` — installs a published skill package via `openhuman.skills_install_from_url`. Accepts camelCase `timeoutSecs` and rekeys it to `timeout_secs`. Normalizes the response (snake_case `new_skills` -> camelCase `newSkills`, missing list -> []).

Both wrappers reuse the existing `unwrapEnvelope` helper so they tolerate either a bare RPC payload or the `{ data: … }` envelope some transports emit. Adds `CreateSkillInput`, `InstallSkillFromUrlInput`, and `InstallSkillFromUrlResult` type exports for downstream modal components.
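The literal-IP portion of the URL allow-listing described above can be sketched as follows. This is a simplified, std-only illustration — the actual implementation parses with the `url` crate and (per a later commit) also resolves hostnames before the GET — and the hand-rolled host extraction here is a hypothetical stand-in for that parsing:

```rust
use std::net::IpAddr;

// Reject addresses in loopback / private / link-local / multicast /
// broadcast / unspecified / CGN / ULA ranges (sketch of the ranges
// listed in the commit message above).
fn is_disallowed_ip(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            v4.is_loopback()
                || v4.is_private()
                || v4.is_link_local() // includes 169.254.169.254 cloud metadata
                || v4.is_multicast()
                || v4.is_broadcast()
                || v4.is_unspecified()
                || v4.octets()[0] == 0 // 0.0.0.0/8
                || (v4.octets()[0] == 100 && (v4.octets()[1] & 0xC0) == 64) // 100.64/10 CGN
        }
        IpAddr::V6(v6) => {
            v6.is_loopback()
                || v6.is_unspecified()
                || v6.is_multicast()
                || (v6.segments()[0] & 0xFE00) == 0xFC00 // fc00::/7 ULA
                || (v6.segments()[0] & 0xFFC0) == 0xFE80 // fe80::/10 link-local
        }
    }
}

fn validate_install_url(raw: &str) -> Result<(), String> {
    let url = raw.trim();
    if url.is_empty() || url.len() > 2048 {
        return Err("empty or oversized URL".into());
    }
    // https scheme only.
    let rest = url.strip_prefix("https://").ok_or("https scheme required")?;
    // Naive authority extraction: up to the first '/', after any userinfo,
    // with IPv6 brackets stripped before address parsing.
    let authority = rest.split('/').next().unwrap_or("");
    let hostport = authority.rsplit('@').next().unwrap_or(authority);
    let host = if let Some(inner) = hostport.strip_prefix('[') {
        inner.split(']').next().unwrap_or("")
    } else {
        hostport.split(':').next().unwrap_or("")
    };
    if host.is_empty() {
        return Err("empty host".into());
    }
    if host == "localhost" || host.ends_with(".localhost") || host.ends_with(".local") {
        return Err("local hostname rejected".into());
    }
    if let Ok(ip) = host.parse::<IpAddr>() {
        if is_disallowed_ip(ip) {
            return Err("private/loopback/reserved address rejected".into());
        }
    }
    Ok(())
}

fn main() {
    for u in ["https://example.com/skill.md", "https://127.0.0.1/x"] {
        println!("{u}: {:?}", validate_install_url(u));
    }
}
```

Note that checking only literal IPs leaves the DNS-to-private-IP gap a later commit in this PR closes by resolving the host before fetching.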
Tests (vitest, 6 new specs, all pass):
- createSkill forwards inputs and rekeys allowedTools
- createSkill omits optional fields when absent
- createSkill unwraps envelope responses
- installSkillFromUrl forwards url and rekeys timeoutSecs
- installSkillFromUrl omits timeout_secs + defaults newSkills to []
- installSkillFromUrl unwraps envelope responses

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(app/ui): CreateSkillModal for scaffolding SKILL.md skills (#681)

Adds a centered white modal that scaffolds a new SKILL.md skill via `skillsApi.createSkill`, matching the settings-modal design rules (520px desktop, 16px radius, backdrop+blur, Escape/click-out to close, focus capture).

Form fields mirror the Rust `SkillsCreateParams` schema:
- name (required) — display name, also slugified into the on-disk directory; a live slug preview surfaces what will hit disk
- description (required) — short prose; written as the `description:` field in the generated YAML frontmatter
- scope (user | project radio) — `legacy` is hidden because that layout is read-only and being phased out
- license (optional) — free-form SPDX-style string
- author (optional)
- tags (optional, CSV) — normalised client-side; empty entries dropped
- allowedTools (optional, CSV) — rekeyed to `allowed-tools` on the JSON-RPC wire by `skillsApi.createSkill`

The slug preview mirrors `slugify_skill_name` on the Rust side (lowercase ASCII alnum + `-`, collapse repeats, trim edge hyphens) so the user sees what the Rust slugifier will produce; the Rust side stays authoritative when the skill is persisted.

On success `onCreated(skill)` fires with the freshly-discovered `SkillSummary`, letting the parent grid insert the new row without a full refetch. On failure the Rust error string is surfaced verbatim in a coral-styled alert and the submit button re-enables.
Vitest specs cover: required-field rendering, live slug preview, submit-disabled gating, Escape close, wire-format rekey of `allowedTools` → `'allowed-tools'`, `onCreated` dispatch, and error-banner recovery.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(app/ui): InstallSkillDialog for npx skills add <url> (#681)

Adds a centered white modal that installs a published skill package via `skillsApi.installSkillFromUrl`. The Rust side shells out to `npx --yes skills add <url>` under the managed Node toolchain, with an allow-list on the URL (https only, no private/loopback/link-local/multicast/cloud-metadata hosts) and a wall-clock timeout (default 60s, max 600s).

UI contract:
- Single URL input plus optional timeout in seconds.
- Client-side `isLikelyValidUrl` fails fast on non-https URLs so the user doesn't pay a round-trip for shape errors the Rust side would reject anyway; the Rust side remains authoritative.
- Timeout field validates `1 <= n <= 600` client-side to mirror the server-side clamp range.
- While the RPC is in flight we render a spinner with "Running `npx skills add`…" copy and disable close / backdrop dismiss so we don't orphan the subprocess.
- On success we surface the list of `newSkills` (ids that appeared post-install) plus captured stdout/stderr panes inside collapsible <details> elements, then hand the full result back to the caller via `onInstalled` so the parent can refetch the skills list and auto-select the new row.
- On failure the Rust error string is rendered verbatim in a coral alert and the submit button re-enables.

Vitest specs cover: required-field rendering, URL shape gating (empty, malformed, http://, https://), timeout range validation, `timeoutSecs` forwarding on submit, success panel with newSkills rendering, blank timeout omitted from payload, and error-banner recovery.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(app/ui): wire New skill + Install from URL into Skills page (#681)

Adds a header row on the Skills page with two buttons:
- **New skill** → opens `CreateSkillModal`
- **Install from URL** → opens `InstallSkillDialog`

Extracts the existing `listSkills` effect into a reusable `refreshDiscoveredSkills` helper so both new flows can reconcile their results against the freshly-discovered `SkillSummary` rows rather than relying on the optimistic payload from the RPC alone.

Create flow:
- Optimistically appends the returned `SkillSummary` to `discoveredSkills` (dedupe by id).
- Auto-opens the detail drawer for the new skill so the user lands in context — matches the install flow's UX.
- Follows up with `refreshDiscoveredSkills()` so version/author/warnings picked up by the Rust discoverer end up in state too.

Install flow:
- Always refreshes the list (the install can add multiple skills if the package declares several).
- Auto-opens the detail drawer for the first newly-installed skill when at least one id is reported back; otherwise leaves the grid in its refreshed state.

Both buttons sit in a flush `max-w-lg` header above the existing search bar, styled consistent with `UnifiedSkillCard` CTAs — ocean primary for "New skill" (positive action), neutral stone for "Install from URL" (secondary).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(skills/core): direct SKILL.md fetch replaces npx skills add (#681)

Installer no longer shells out to the vercel-labs/skills CLI. It now fetches SKILL.md over HTTPS, validates YAML frontmatter, and writes into the user's skills dir. Size cap (1 MiB), timeout clamp (1-600s), GitHub blob->raw URL normalization, and path-traversal guards are covered by unit tests.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(app/ui): install dialog copy + categorized errors for direct fetch (#681)

Dialog subtitle, helper text, and in-flight indicator reflect the new direct SKILL.md fetch flow. Errors from the core are categorized into friendly titles (URL rejected, too large, timeout, parse failure, already installed, write failed) with the raw backend message tucked under a details disclosure.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* test(app/ui): dialog specs cover direct-fetch fixtures + error categorization (#681)

Fixtures updated to raw GitHub SKILL.md URLs. New cases assert the categorization helper surfaces the right title for invalid SKILL.md, unsupported URL form, and unknown backend errors (raw text hidden under details).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(skills/core): install_from_url writes to user scope (#681)

Project scope (`<ws>/.openhuman/skills/`) is gated on a `<ws>/.openhuman/trust` marker that the workspace rarely has, so freshly-installed skills were invisible to `skills.list` until the user opted the workspace into trust. Route installs to `~/.openhuman/skills/<slug>` — the user-scope root that `discover_skills` always scans — so "Install from URL" surfaces the new skill immediately.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs(skills): align install_from_url docs with direct-fetch impl (#681)

`install_from_url` stopped shelling out to `npx --yes skills add <url>` when it was rewritten to fetch SKILL.md over HTTPS directly, but schema descriptions and SDK wrappers still described the old subprocess flow. Fix the module-level rustdoc, the JSON-RPC schema `description`/`comment` fields, and the TS client wrapper doc comments so the surface documents what actually runs.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(skills/core): DNS-to-private-IP SSRF guard + install rollback (#681)

Two fixes surfaced by CodeRabbit review on PR #740:
- `validate_install_url` only inspected literal-IP hosts, so a public-looking hostname like `evil.example.com` with an A record pointing at `127.0.0.1` / `169.254.x` / etc. would still be handed to `reqwest`. Resolve the host via `tokio::net::lookup_host` before the GET and reject if any returned address falls in loopback / private / link-local / multicast / unspecified ranges. Document the remaining DNS-rebinding gap (pinning to a `SocketAddr` + custom reqwest resolver is tracked separately).
- If `std::fs::write` or `std::fs::rename` fails after `create_dir_all` succeeded, the empty/partial target directory used to survive and permanently block retries under the same slug. Wrap the write+rename in a rollback that removes the temp file + the just-created directory on failure (best-effort; cleanup errors are logged and the original write error is surfaced).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* style(skills): cargo fmt break long Host::Ipv6 conditional (#681)

CI's cargo fmt (stable) rewraps the long `.map(..).unwrap_or(false)` chain that passed local fmt. Apply the break so the pre-push hook and the upstream lint job agree.

--------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
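The slug derivation referenced in several commits above (ASCII-only, collapse whitespace/`-`/`_` to a single hyphen, drop other chars, trim edge hyphens, enforce a max length) can be sketched roughly as follows. This is an illustrative reimplementation, not the code in `ops.rs`, and the `MAX_SLUG_LEN` value is an assumed placeholder for the real `MAX_NAME_LEN`:

```rust
// Hypothetical cap; the actual MAX_NAME_LEN in the PR may differ.
const MAX_SLUG_LEN: usize = 64;

/// Sketch of the slugifier: lowercase ASCII alphanumerics joined by
/// single hyphens; whitespace, '-', and '_' act as separators; every
/// other character is dropped; edge hyphens are trimmed.
fn slugify_skill_name(name: &str) -> String {
    let mut slug = String::new();
    let mut pending_separator = false;
    for ch in name.chars() {
        if ch.is_ascii_alphanumeric() {
            // Emit at most one hyphen between runs, never at the start.
            if pending_separator && !slug.is_empty() {
                slug.push('-');
            }
            pending_separator = false;
            slug.push(ch.to_ascii_lowercase());
        } else if ch.is_whitespace() || ch == '-' || ch == '_' {
            pending_separator = true;
        }
        // Any other character (punctuation, non-ASCII) is dropped.
    }
    slug.truncate(MAX_SLUG_LEN);
    // Truncation can land just after a separator; trim it.
    slug.trim_end_matches('-').to_string()
}

fn main() {
    println!("{}", slugify_skill_name("My Cool Skill"));
    println!("{}", slugify_skill_name("  __weird--Name!!"));
}
```

An all-separator or all-punctuation name yields an empty slug, which matches the "invalid name (no alphanumerics) rejection" test case listed above.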
Summary
- SKILL.md parsing: YAML frontmatter aligned to the agentskills.io spec (`name`, `description`, `license`, `compatibility`, `metadata`, `allowed-tools`, plus unknown-field passthrough).
- Multi-scope discovery: User (`~/.openhuman/skills`, `~/.agents/skills`), Project (`<workspace>/.openhuman/skills` behind a `trust` marker), and Legacy (`<workspace>/skills`) — resources (`scripts/`, `references/`, `assets/`) inventoried per skill.
- Managed Node runtime: prefer a compatible system `node`, otherwise download + SHASUMS256-verify + atomic-install into a user cache, under a bootstrap mutex.
- Two new tools: `node_exec` (run JS via the managed node) and `npm_exec` (run npm subcommands with a disallow-list for network-mutating actions) — behind a `node.enabled` config flag.
- Agent grants: `node_exec` + `npm_exec` to the `code_executor` and `tool_maker` sub-agents.

Problem
Issue #681 asks for SKILL.md integration (spec: https://agentskills.io). Today openhuman can run QuickJS skills from a git submodule but cannot ingest a plain-markdown skill package that declares its tools + resources via YAML frontmatter. Separately, sub-agents lack a managed JavaScript runtime — without it, skills authored for
`node_exec`/`npm_exec` cannot run portably across hosts.

Solution
Rust core — 14 micro-commits across 4 modules
Skills (`src/openhuman/skills/`)
- d121f32b build(skills): add serde_yaml for SKILL.md frontmatter
- 1e1f9657 feat(skills): parse SKILL.md frontmatter and add multi-path discovery
- 776aee34 fix(skills): align frontmatter with agentskills.io spec

Config (`src/openhuman/config/schema/`)
- 17cd8fac feat(config): add NodeConfig with env overrides (enable flag, version pin, cache dir, timeouts)

Node runtime (`src/openhuman/node_runtime/`)
- 0c4d62f2 build(runtime): add tar+xz2+zip for node runtime extraction
- b0775b49 feat(node_runtime): detect compatible system node on PATH
- da891639 feat(node_runtime): SHASUMS256-verified distribution downloader
- 57d5ffa2 feat(node_runtime): archive extraction + atomic install
- 5769e752 feat(node_runtime): bootstrap mutex + cache + bin path helper

Tools (`src/openhuman/tools/impl/system/`)
- ed4f4270 feat(tools): node_exec runs JS via managed node runtime
- 7892f7fe feat(tools): npm_exec runs npm via managed node runtime
- 16f80755 feat(tools): shell prepends managed node bin to PATH when cached
- 584a3bcb feat(tools): register node_exec + npm_exec behind node.enabled
- 793a2331 feat(agents): grant node_exec + npm_exec to code_executor + tool_maker

Security posture
- `npm_exec` disallow-list rejects `publish`, `adduser`, `login`, `token`, `access`, `team`, `hook`, `profile`, etc. — only local project operations are permitted.
- `cwd` override in both tools rejects `..` / absolute paths escaping the workspace.
- Environment is scrubbed to the `SAFE_ENV_VARS` allowlist before invoking node/npm.
- Downloads are verified against `SHASUMS256.txt` before atomic rename into cache.

Submission Checklist
- Verified with `cargo check --manifest-path app/src-tauri/Cargo.toml`, but dedicated unit tests are not yet in this PR.
- New surfaces (`node_runtime/*`, `skills/ops.rs` additions, `node_exec`, `npm_exec`) carry rustdoc on public items and module headers.

Impact
- Gated behind `node.enabled` in config; off by default.
- Network access limited to the `https://nodejs.org/dist/<ver>/` archive + `SHASUMS256.txt`. No runtime auto-update.
- The legacy `<workspace>/skills` path is still discovered.
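The `npm_exec` disallow-list described under "Security posture" can be sketched like this. The subcommand set below is taken from the list in this PR description; the exact set and argument handling in the tool implementation may differ:

```rust
// Network-mutating npm subcommands rejected by the sketch, per the
// "Security posture" list above (the real tool's list may be longer).
const DISALLOWED_NPM_SUBCOMMANDS: &[&str] = &[
    "publish", "adduser", "login", "token", "access", "team", "hook", "profile",
];

/// Returns true when the npm invocation is a permitted local project
/// operation. The first non-flag argument is treated as the subcommand,
/// a simplifying assumption of this sketch.
fn npm_subcommand_allowed(args: &[&str]) -> bool {
    match args.iter().find(|a| !a.starts_with('-')) {
        Some(sub) => !DISALLOWED_NPM_SUBCOMMANDS.contains(&sub.to_lowercase().as_str()),
        None => true, // bare `npm` just prints help; nothing to block
    }
}

fn main() {
    for argv in [vec!["install"], vec!["publish"], vec!["run", "build"]] {
        println!("{argv:?} allowed: {}", npm_subcommand_allowed(&argv));
    }
}
```

Checking the first non-flag token (rather than `args[0]`) means a leading `--loglevel=silent publish` is still caught; case-folding blocks trivial `PUBLISH`-style evasion.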