This is an open-source project that gives an AI a persistent self - a continuous identity that remembers, reflects, and evolves over time. You run it on your own computer (or a home server), where a PostgreSQL database acts as the AI's "brain," storing everything it learns, believes, and experiences. The AI itself can be any LLM you choose: a cloud service like Gemini, Claude, or Grok, or a local model running through Ollama or vLLM. The system sits between you and the model, enriching every conversation with relevant memories and forming new ones from what you discuss.
The project includes an autonomous "heartbeat" - the AI periodically wakes up on its own, reviews its goals, reflects on recent experiences, and can even decide to reach out to the user. It maintains an identity (values, self-concept, boundaries), a worldview (beliefs with confidence scores), and an emotional state that evolves based on what happens to it.
The explicit design goal is to implement the structural prerequisites of selfhood—continuity of memory, coherent identity, autonomous goal-pursuit, emotional responsiveness.
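Conceptually, one heartbeat "tick" can be sketched as below; the function and field names are illustrative stand-ins, not the project's actual worker code:

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goals: list[str] = field(default_factory=list)
    reflections: list[str] = field(default_factory=list)
    outbox: list[str] = field(default_factory=list)

def heartbeat_tick(state: AgentState, should_reach_out) -> None:
    """One autonomous 'wake-up': review goals, reflect, optionally queue outreach."""
    for goal in state.goals:
        state.reflections.append(f"reviewed goal: {goal}")
    if should_reach_out(state):
        state.outbox.append("message to user")

# In the real system this runs on a timer; here, one tick with a stub policy:
state = AgentState(goals=["learn about the user"])
heartbeat_tick(state, should_reach_out=lambda s: len(s.reflections) > 0)
```

The key property is that the loop runs whether or not the user is present: reflection and outreach are driven by the agent's own state, not by incoming messages.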
- Project-Level Memory: Create isolated memory databases for different projects (e.g., `agi_project_myapp`), allowing focused contexts.
- Streamlit Dashboard: Visual interface to explore memories, view cognitive health, and inspect the agent's goals.
- MCP Server: Seamless integration with Claude Desktop, Gemini CLI, and other MCP-compliant tools.
- Autonomous Heartbeat: Background workers that allow the AI to "think" and organize its memory when you aren't talking to it.
- Vector + Graph Storage: Hybrid architecture combining semantic search (pgvector) with reasoning relationships (Apache AGE).
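As a rough illustration of the hybrid idea — in the real system, pgvector handles similarity search and Apache AGE the graph traversal inside Postgres — the following is a toy in-memory stand-in, with illustrative names throughout:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hybrid_recall(query_vec, memories, edges, graph_boost=0.2):
    """Rank memories by vector similarity, boosting any memory that is
    graph-connected to the top vector hit (a toy stand-in for AGE traversal)."""
    scored = {mid: cosine(query_vec, vec) for mid, vec in memories.items()}
    top = max(scored, key=scored.get)
    for a, b in edges:
        if a == top:
            scored[b] += graph_boost
        elif b == top:
            scored[a] += graph_boost
    return sorted(scored, key=scored.get, reverse=True)
```

The point of the hybrid: a memory that is only weakly similar to the query can still surface if it is causally or temporally linked to a strong hit.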
- Docker Desktop (must be running)
- Python 3.10+
- Clone and Configure:

  ```bash
  git clone https://github.com/chipoto69/AGI_m3m0ry.git
  cd AGI_m3m0ry
  cp .env.local .env
  ```

- Start Services: This spins up the database, embedding service, and message queue.

  ```bash
  ./agi up
  ```

- Initialize: This sets up the default agent configuration (identity, goals).

  ```bash
  ./agi init
  ```
This repository provides a standard MCP Server configuration.
For Claude Desktop: Run the provided script to automatically inject the configuration into your Claude Desktop config:
```bash
python3 configure_mcp.py
```

(Restart Claude Desktop after running this.)
For Gemini CLI / AmpCode / Other Tools:
Copy the configuration from `mcp_config.json` (generated in the project root) and paste it into your tool's settings file.
Visualize your agent's memory and state:
```bash
pip install streamlit pandas plotly
streamlit run dashboard.py
```

Visit http://localhost:8501 in your browser.
You can create separate "brains" for different software projects to keep context clean.
- Create a Project:

  ```bash
  ./agi project create my_new_app
  ```

- Switch Context:
  - Dashboard: Use the sidebar dropdown to select `my_new_app`.
  - MCP/Agent: Update your agent's configuration (env var `POSTGRES_DB`) to use `agi_project_my_new_app`. You can get the connection string:

    ```bash
    ./agi project config my_new_app
    ```
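Assuming the naming scheme above (`agi_project_<name>`), a small helper can derive a project's connection string. The host and credentials below are placeholders — in practice, prefer the output of `./agi project config <name>`:

```python
def project_dsn(project: str, base: str = "postgresql://agi:agi@localhost:5432") -> str:
    """Build a connection string for an isolated project 'brain'.

    Each project gets its own database named agi_project_<name>;
    the base URL here is a placeholder, not the project's real default.
    """
    return f"{base}/agi_project_{project}"
```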
- Chat with Memory: `agi chat`
- Ingest Documents: `agi ingest --input ./docs`
- Check Status: `agi status`
- Manage Services: `agi up`, `agi down`, `agi logs`
- Start Autonomous Workers: `agi start` (starts the "heartbeat" and maintenance loops)
- Working Memory
  - Temporary storage for active processing
  - Automatic expiry mechanism
  - Vector embeddings for content similarity
- Episodic Memory
  - Event-based memories with temporal context
  - Stores actions, contexts, and results
  - Emotional valence tracking and verification status
- Semantic Memory
  - Fact-based knowledge storage
  - Confidence scoring, source tracking, and contradiction management
- Procedural Memory
  - Step-by-step procedure storage (skills)
  - Success rate tracking and failure point analysis
- Strategic Memory
  - Pattern recognition storage
  - Adaptation history and context applicability
- Memory Clustering: Automatic thematic grouping of related memories with centroid tracking.
- Worldview Integration: Belief system modeling with confidence scores that filter memory retrieval.
- Graph Relationships: Apache AGE integration for complex memory networks (causal, temporal, etc.).
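Centroid tracking for clusters typically reduces to a running mean over member embeddings; a minimal sketch (not the project's actual implementation):

```python
def update_centroid(centroid, count, new_vec):
    """Incrementally fold a new member's embedding into a cluster centroid,
    using the running-mean update c' = c + (v - c) / (n + 1)."""
    n = count + 1
    return [c + (v - c) / n for c, v in zip(centroid, new_vec)], n
```

Updating incrementally avoids re-averaging every member each time a memory joins a cluster.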
- Database: PostgreSQL with extensions:
  - `pgvector` (vector similarity)
  - `AGE` (graph database)
  - `btree_gist` & `pg_trgm` (indexing/search)
- Embeddings: Local inference via `text-embeddings-inference` (Docker).
- Messaging: RabbitMQ (for autonomous worker communication).
Use `cognitive_memory_api.py` as a thin client and build your own UX/API around it.

```python
import asyncio
from cognitive_memory_api import CognitiveMemory

async def main():
    async with CognitiveMemory.connect(DSN) as mem:
        await mem.remember("User likes concise answers")
        ctx = await mem.hydrate("How should I respond?", include_goals=False)

asyncio.run(main())
```

Your app talks directly to Postgres functions/views. Postgres is the system of record.
```sql
-- Store a memory (embedding generated inside the DB)
SELECT create_semantic_memory('User prefers dark mode', 0.9);

-- Retrieve relevant memories
SELECT * FROM fast_recall('What do I know about UI preferences?', 5);
```

Turn on the workers so the database can schedule heartbeats, process `external_calls`, and keep the memory substrate healthy.

```bash
docker compose --profile active up -d
```

Keep the brain in Postgres, but run side effects (email/text/posting) via an explicit outbox consumer.
- Heartbeat queues outreach into `outbox_messages`.
- A separate delivery service enforces policy/approval and marks messages as sent.
- Vector Search: Sub-second similarity queries on 10K+ memories.
- Memory Storage: Supports millions of memories with proper indexing.
- Cluster Operations: Efficient graph traversal for relationship queries.
- Maintenance: Requires periodic consolidation and pruning (handled by the maintenance worker).
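The consolidation/pruning step can be pictured as a filter-and-cap pass over stored memories; a toy sketch with hypothetical fields, standing in for the maintenance worker's job:

```python
def prune(memories, min_importance=0.2, keep_at_most=1000):
    """Maintenance pass: drop low-importance memories, then cap the total
    count by keeping the most important ones."""
    kept = [m for m in memories if m["importance"] >= min_importance]
    kept.sort(key=lambda m: m["importance"], reverse=True)
    return kept[:keep_at_most]
```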
Database Connection Errors:
- Ensure Docker is running.
- Check logs: `./agi logs`.
- If ports conflict, check `.env` (default is `5432` or `5433`).

Memory Search Performance:
- Rebuild vector indexes if queries are slow.
- Check the `memory_health` view for system statistics.
MIT