Conversation
- Add Dockerfile with multi-stage build (builder + runtime)
- Add .dockerignore to exclude unnecessary files
- Add GitHub Actions workflow for Docker builds
  - Triggers on push to main, version tags, and PRs
  - Builds linux/amd64 and linux/arm64 images
  - Pushes to ghcr.io/wesm/msgvault
  - Uses GitHub Actions cache for faster builds
  - Includes smoke tests for PR validation
- Dockerfile includes healthcheck for container orchestration
- Runs as non-root user (UID 1000) for security

Part of wesm#116

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Comprehensive documentation covering:
- Quick start for Docker deployment
- OAuth device flow for headless authentication
- Troubleshooting common OAuth errors
- Complete NAS setup guide with docker-compose.yml
- Security recommendations (API key, HTTPS, firewall, backups)
- Platform-specific notes for Synology, QNAP, Raspberry Pi
- Cron schedule reference
- Container management commands

Part of wesm#116

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Dockerfile:
- Add libstdc++6 for CGO/DuckDB runtime dependencies
- Create /data directory with correct ownership before USER switch

Workflow:
- Fix smoke test permissions by creating dir with chmod 777 before mount

Docs:
- Update Quick Start to include config file for network access
- Add note explaining loopback limitation without config

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- .dockerignore: Add explicit excludes for sensitive files (config.toml, client_secret*.json, *.pem, *.key)
- docs/docker.md: Generate random API key instead of placeholder to prevent copy-paste of insecure defaults

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- golang:1.25-bookworm pinned to sha256:38342f3e...
- debian:bookworm-slim pinned to sha256:98f4b71d...
- Add comment noting module path dependency in ldflags

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds POST /api/v1/auth/token/{email} endpoint that accepts a token
JSON body and saves it to the tokens directory. This enables the
headless OAuth workaround where users authenticate on a desktop
machine and upload the token to a remote NAS/server.
Also adds `msgvault export-token` CLI command that reads a local
token and uploads it to a remote msgvault instance.
Usage:
msgvault export-token user@gmail.com --to http://nas:8080 --api-key KEY
Part of wesm#116
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add comprehensive tests for handleUploadToken endpoint:
  - Happy path with valid token
  - Invalid JSON handling
  - Missing refresh_token validation
  - Invalid email format detection
  - Missing email path parameter
- Add tests for sanitizeTokenPath:
  - Normal emails (with dots, plus signs)
  - Path traversal prevention
  - Special characters (slashes, null bytes)
- Fix URL path escaping in export-token command:
  - Use url.PathEscape() for email in request URL

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Document the export-token command as an alternative to device flow OAuth. This workaround is needed because Google's device flow doesn't support all Gmail API scopes for some OAuth configurations.

The workflow:
1. Authenticate on local machine with browser
2. Export token to NAS via API: msgvault export-token --to http://nas:8080
3. Add account to config.toml on NAS

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Reduce user friction identified during NAS deployment testing:
- Add `msgvault setup` wizard for guided first-run configuration
- Add env var support for export-token (MSGVAULT_REMOTE_URL/API_KEY)
- Add config persistence for remote server after successful export
- Add OAuth auto-discovery to find client_secret*.json in ~/Downloads
- Add RemoteConfig and Save() method to config package
- Fix search API FTS5 time parsing (was returning 500 errors)
- Update docs/docker.md with troubleshooting and Synology ACL notes

The setup wizard helps users:
1. Locate/configure Google OAuth credentials
2. Create config.toml automatically
3. Optionally save remote NAS server for token export

Export-token now uses resolution order: flag > env var > config file
After first successful export, credentials are saved for future use.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Setup wizard now:
- Auto-generates secure API key for NAS
- Creates complete nas-bundle/ with config.toml, client_secret.json, docker-compose.yml
- Saves remote URL and API key locally so export-token needs no flags

Also:
- serve command now starts without accounts (warns instead of fails)
  This allows token upload before accounts are configured

Tested end-to-end flow:
1. msgvault setup - finds OAuth, generates NAS bundle
2. msgvault add-account - OAuth via browser
3. msgvault export-token - no flags needed!
4. scp nas-bundle to NAS, docker-compose up
5. Full sync works

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Setup wizard now asks for port separately from hostname, allowing users to specify a custom port (e.g., 8180) to avoid conflicts with other services.

The port is used in both:
- docker-compose.yml port mapping
- Local config remote URL

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Server now starts with 0 accounts (warns instead of failing). This enables the token upload workflow where:
1. Deploy container with no accounts
2. Upload tokens via API
3. Add accounts to config
4. Restart to enable scheduled sync

Both checks (initial config check and scheduler count) now warn instead of erroring.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Added POST /api/v1/accounts endpoint that adds an account to the server's config.toml. The export-token command now automatically calls this after uploading a token.

This completes the fully automated NAS deployment flow:
1. msgvault setup - finds OAuth, generates NAS bundle
2. msgvault add-account - browser OAuth
3. Deploy bundle to NAS, start container
4. msgvault export-token - uploads token AND configures account

No more manual config editing on the NAS!

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove auto-discovery of client_secret.json files - users now explicitly paste the path to their OAuth credentials file. This is clearer and doesn't make assumptions about where users store their files. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix Docker build: whitelist quickstart.md in .dockerignore (go:embed)
- Enforce HTTPS by default in export-token with --allow-insecure flag
- Add path traversal protection with email validation and sanitization
- Add 30s HTTP client timeout to prevent indefinite hangs
- Restrict GitHub Actions packages:write to non-PR events only
- Don't leak JSON parse errors to API clients (log server-side only)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add ability for CLI commands to query a remote msgvault server instead of the local database. If [remote].url is configured, commands automatically use the remote server - no flags needed in typical usage.

Architecture:
- CLI Command → OpenStore() → [local or remote?]
- Local: store.Store → SQLite
- Remote: remote.Store → HTTP API → NAS Server

New files:
- internal/remote/store.go: HTTP client implementing MessageStore interface
- cmd/msgvault/cmd/store_resolver.go: Resolves local vs remote based on config

Updated commands with remote support:
- stats: Show archive statistics
- search: Full-text search
- show-message: View message details
- list-accounts: List configured accounts

Config:
- [remote].allow_insecure: Allow HTTP for trusted networks (Tailscale)
- --local flag: Force local DB when remote is configured

Resolution order:
1. --local flag → always local
2. [remote].url set → use remote
3. Default → use local DB

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, ListMessages and GetMessage scanned dates directly into
sql.NullTime, which relied on go-sqlite3's automatic parsing. This
failed silently for certain datetime formats (returning 0001-01-01).
The search endpoint worked because scanMessageRowsFTS already used
string scanning, but it was missing fractional second formats.
Changes:
- ListMessages: Use scanMessageRows instead of inline scanning
- GetMessage: Scan date as string, parse with parseSQLiteTime
- scanMessageRows: Scan date as string instead of sql.NullTime
- parseSQLiteTime: Add all formats from dbTimeLayouts in sync.go:
  - Fractional seconds with timezone
  - Fractional seconds without timezone
  - Date-only format
This fixes:
- GET /api/v1/messages returning 500 error
- GET /api/v1/messages/{id} returning 500 error
- Search dates showing as 0001-01-01
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix strings.ContainsAny(email, "/\\..") rejecting all emails with dots (every normal address). Replace with separate checks for / \ and ".." only.
- Setup wizard now sets AllowInsecure=true when generating http:// URLs so the config works on first use.
- export-token persists AllowInsecure when --allow-insecure is used.
- Move limit<=0 guard before division in ListMessages/SearchMessages to prevent divide-by-zero panic.
- Remove unused remoteHint and localDBExists functions.
- Add tests for email validation, sanitization, remote store construction, and pagination edge cases.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
remote/store_test.go:
- Auth header presence and absence
- Error response decoding (JSON and plain text)
- Invalid URL scheme rejection
- Trailing slash normalization and default timeout
- GetStats success and error paths
- GetMessage success and 404 handling
- SearchMessages query URL encoding
- ListAccounts success path

cmd/setup_test.go:
- createNASBundle output files and content
- Bundle without OAuth secrets omits client_secret.json
- Bundle with nonexistent secrets path fails
- generateAPIKey length and uniqueness

config/config_test.go:
- Save/Load round-trip preserving all fields
- AllowInsecure persistence through save/load cycle
- Secure file permissions on saved config
- Overwrite behavior on re-save

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Split runExportToken into:
- tokenExporter struct with injectable httpClient, tokensDir, stdout/stderr
- export() method containing validation, token read, upload, and account POST
- runExportToken() as thin cobra handler: param resolution and config save
- resolveParam() helper for flag > env > config resolution

This enables httptest-backed tests that verify:
- Token upload sends correct path, headers, and body
- HTTP 500 errors surface with status code and body
- Missing token file fails before contacting server
- HTTPS enforcement rejects http:// without allowInsecure
- HTTP allowed when allowInsecure=true, with stderr warning
- Invalid email rejected before network calls
- Account POST sends correct email in JSON body
- Account POST failure is non-fatal (warning only)
- Invalid URL scheme rejected
- resolveParam precedence: flag > env > config > empty

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Quick Start: add required [oauth] section so `serve` starts
- Rewrite headless OAuth section to match actual behavior: Google's device flow doesn't support Gmail scopes, so users must authenticate locally and export tokens via the `export-token` command
- Remove fabricated device flow output (google.com/device, code entry)
- Fix `sync --limit` to `sync-full --limit` (--limit only exists on sync-full)
- Fix initial sync commands to use `sync-full` (incremental sync requires a prior full sync)
- Update NAS setup steps to use token export workflow
- Update troubleshooting to remove device flow references

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
export-token only checked the --allow-insecure CLI flag, ignoring cfg.Remote.AllowInsecure from config. This meant HTTP remotes saved by setup or a prior --allow-insecure export would fail on subsequent exports without the flag, breaking the "no flags needed" flow. Resolve allowInsecure as: CLI flag || config value, matching how URL and API key are already resolved. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Most users deploy to a NAS on a trusted LAN or use Tailscale. Replace the Caddy/nginx reverse proxy configs with a Tailscale recommendation, and add --allow-insecure to all HTTP export-token examples that were missing it. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All Docker deployment content has been migrated to the docs site at msgvault.io/guides/remote-deployment/. Maintaining duplicate docs across two repos caused drift — this session alone required fixing the same issues in both places multiple times. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove scanMessageRowsFTS, which was identical to scanMessageRows after the parseSQLiteTime refactor unified date handling
- Add Host validation in remote.New() to reject malformed URLs like "http://" or "https://:8080" at config time instead of failing later with an unhelpful request error
- Add test for empty host rejection

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Three fixes for handleAddAccount:

1. Data race: cfg.Accounts was read/written concurrently by handleListAccounts and handleAddAccount. Added sync.RWMutex to Server to protect access.
2. Scheduler registration: newly added accounts were saved to config but not registered with the live scheduler, requiring a restart. Added AddAccount to the SyncScheduler interface and call it after save.
3. Cron validation: invalid cron expressions were accepted and saved, only failing on next restart. Now validates via scheduler.ValidateCronExpr before persisting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The code comment and docs/api.md claimed the default was 86400, but the actual Go zero-value default is 0. The server.go code only sets 86400 as a fallback when CORS origins are configured and no explicit max_age is provided. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Windows doesn't support Unix-style file permissions (0600 appears as 0666). Skip the group/other permission checks on Windows. Also fix TestSaveAndLoad_RoundTrip to use a platform-absolute path instead of a Unix-only /path/to/secrets.json which is relative on Windows (no drive letter). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace os.Create (0666) + chmod pattern with os.OpenFile using 0600 mode directly, eliminating the window where files are world-readable.

Also fix NAS bundle success message to only mention client_secret.json when it was actually copied (skipped in "keep existing OAuth" flow).

Update .roborev.toml with guidance on intentional design decisions (HTTP defaults, plaintext key display, enabled override, pagination) to reduce false positives in automated reviews.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add code comments documenting the reasoning at each site rather than suppressing review findings via config:
- HTTP defaults in setup wizard (Tailscale/LAN threat model)
- API key printed to stdout (interactive CLI session)
- enabled=true forced on account creation (export-token workflow)
- Page-aligned pagination assumption in remote store

Trim .roborev.toml to a brief scope note pointing to code comments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When the user keeps their existing OAuth config during setup, the secrets path was empty, so createNASBundle skipped the copy while the generated config.toml still referenced /data/client_secret.json. Fall back to cfg.OAuth.ClientSecrets so the existing file gets copied into the bundle. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
handleAddAccount appended to cfg.Accounts before calling Save(). If Save() failed, the in-memory state retained the new account, causing subsequent requests to report "exists" for an account that was never persisted. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add TestHandleAddAccountSaveFailure: forces Save() to fail and verifies cfg.Accounts is rolled back (no stale in-memory state)
- Rename TestCreateNASBundle_ExistingSecretsFallback to TestCreateNASBundle_CopiesSecrets (so the name matches what it tests)
- Add assertion that generated config.toml references /data/client_secret.json

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add 6 new endpoints to support TUI remote access:
- GET /api/v1/aggregates - top-level aggregation (7 ViewTypes)
- GET /api/v1/aggregates/sub - drill-down sub-aggregation
- GET /api/v1/messages/filter - filtered message list with pagination
- GET /api/v1/stats/total - detailed stats with filters
- GET /api/v1/search/fast - fast metadata search with stats
- GET /api/v1/search/deep - FTS5 body search

Server now accepts optional query.Engine for aggregate queries via NewServerWithOptions. Existing NewServer remains backward compatible.

Part of wesm#130

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add remote.Engine implementing full query.Engine interface
- Add tests for aggregate API endpoints (10 new tests)
- Initialize DuckDB engine in serve command for aggregate endpoints
- Support all TUI operations: Aggregate, SubAggregate, ListMessages, GetMessage, GetTotalStats, Search, SearchFast, ListAccounts
- Mark unsupported operations (GetAttachment, GetGmailIDsByFilter) with ErrNotSupported for graceful handling in remote mode

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add IsRemote option to TUI for feature detection
- Add --local flag to force local database (override remote config)
- Connect to remote msgvault server when [remote].url is configured
- Disable deletion staging (d/D keys) in remote mode
- Disable attachment export (e key) in remote mode
- Show flash messages for disabled features in remote mode

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add serve command and daemon mode section to README.md
- Add new TUI support API endpoints to docs/api.md
- Fix tui.go help text: msgvault-sync build-parquet → msgvault build-cache
- Fix quickstart.md: sync-incremental → sync (primary command name)
- Add --local flag and remote mode docs to quickstart.md and CLAUDE.md
- Add missing commands to README: show-message, list-accounts, update, setup

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
roborev: Combined Review

Verdict: Not ready to merge — there is one high-severity auth exposure risk plus multiple medium-severity correctness/availability issues.

Synthesized from 4 reviews (agents: codex, gemini | types: security, default)
- Update SQLiteEngine and DuckDBEngine descriptions (no longer future)
- Add RemoteEngine section for TUI remote support
- Fix command name: build-analytics → build-cache
- Update Parquet schema to show all cached tables
- Update Go libraries to reflect actual implementation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
roborev: Combined Review

Verdict: Changes are not ready to merge due to 1 High and 4 Medium-severity issues.

Synthesized from 4 reviews (agents: codex, gemini | types: security, default)
- Use r.Context() instead of context.Background() in all API handlers (handleAggregates, handleSubAggregates, handleFilteredMessages, handleTotalStats, handleFastSearch, handleDeepSearch)
- Add doRequestWithContext to remote.Store for context propagation
- Plumb context through all remote.Engine HTTP requests for proper cancellation/timeout support
- Fix SubAggregate to use opts.TimeGranularity instead of filter
- Add validation for view_type in handleFastSearch (return 400 if invalid)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Addresses roborev testing gap finding by adding a test that verifies the /api/v1/search/fast endpoint returns 400 Bad Request with error code 'invalid_view_type' when an invalid view_type is provided. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
roborev: Combined Review

Verdict: PR has one High and multiple Medium issues that should be addressed before merge.

Synthesized from 4 reviews (agents: codex, gemini | types: security, default)
roborev: Combined Review

Verdict: Not ready to merge; reviewers identified 1 High and 5 Medium-severity issues.

Synthesized from 4 reviews (agents: codex, gemini | types: security, default)
Resolve conflicts by:
- Taking upstream for .roborev.toml, Dockerfile, cmd/msgvault/cmd/setup.go
- Keeping our doRequestWithContext in internal/remote/store.go for context propagation
- Keeping our TUI aggregate endpoints in internal/api/handlers.go
- Keeping our test additions in internal/api/handlers_test.go
- Keeping our engine field in internal/api/server.go

All tests pass after merge.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
If there are security concerns it keeps raising that don't make sense, feel free to add stuff to review_guidelines in .roborev.toml
Replace loop with variadic append as suggested by gosimple linter. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This PR implements TUI remote support (Issue #130).

Changes:

Status: All tests pass locally. Just pushed a lint fix - waiting on CI. Ready for review when CI is green.
roborev: Combined Review

Verdict: 4 Medium-severity issues should be addressed before merge (1 availability risk, 3 regressions).

Synthesized from 4 reviews (agents: gemini, codex | types: security, default)
Clarify that remote engine query reconstruction, empty search handling, and TimeGranularity defaults are intentional design choices. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
roborev: Combined Review

Verdict: Not ready to merge; there are multiple Medium-severity issues that should be fixed first.

Synthesized from 4 reviews (agents: gemini, codex | types: security, default)
The Problem
msgvault already supports remote CLI commands (`stats`, `search`, `show-message`) against a NAS server. But the TUI—the primary way to explore your archive—only works locally. If your archive lives on a NAS, you can't browse it from your laptop.

What This PR Does
Enables `msgvault tui` to work transparently against a remote server. When `[remote].url` is configured, the TUI connects to your NAS and gives you the full browsing experience: aggregate views, drill-down navigation, search, filtering—everything works identically to local mode.

Implementation
Server side: 6 new API endpoints that expose the `query.Engine` capabilities needed by the TUI:

- `/api/v1/aggregates` – sender/domain/label/time aggregations
- `/api/v1/aggregates/sub` – drill-down into sub-aggregations
- `/api/v1/messages/filter` – filtered message lists with pagination
- `/api/v1/stats/total` – stats with filter context
- `/api/v1/search/fast` – metadata search (subject, sender, recipient)
- `/api/v1/search/deep` – full-text body search via FTS5
Client side: `remote.Engine` implements the full `query.Engine` interface over HTTP, so the TUI code doesn't change at all—it just gets a different engine.

Safety: Deletion staging and attachment export are disabled in remote mode. These require local file access and are potentially destructive, so they're local-only operations.
Testing
Closes #130