AGI Memory System

What This System Is

This is an open-source project that gives an AI a persistent self - a continuous identity that remembers, reflects, and evolves over time. You run it on your own computer (or a home server), where a PostgreSQL database acts as the AI's "brain," storing everything it learns, believes, and experiences. The AI itself can be any LLM you choose: a cloud service like Gemini, Claude, or Grok, or a local model running through Ollama or vLLM. The system sits between you and the model, enriching every conversation with relevant memories and forming new ones from what you discuss.

The project includes an autonomous "heartbeat" - the AI periodically wakes up on its own, reviews its goals, reflects on recent experiences, and can even decide to reach out to the user. It maintains an identity (values, self-concept, boundaries), a worldview (beliefs with confidence scores), and an emotional state that evolves based on what happens to it.

The explicit design goal is to implement the structural prerequisites of selfhood: continuity of memory, coherent identity, autonomous goal pursuit, and emotional responsiveness.

Key Features

  • Project-Level Memory: Create isolated memory databases for different projects (e.g., agi_project_myapp), allowing focused contexts.
  • Streamlit Dashboard: Visual interface to explore memories, view cognitive health, and inspect the agent's goals.
  • MCP Server: Seamless integration with Claude Desktop, Gemini CLI, and other MCP-compliant tools.
  • Autonomous Heartbeat: Background workers that allow the AI to "think" and organize its memory when you aren't talking to it.
  • Vector + Graph Storage: Hybrid architecture combining semantic search (pgvector) with reasoning relationships (Apache AGE).
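The vector + graph combination can be illustrated with a minimal sketch. This is a toy: hand-made 2-D embeddings and an in-memory edge map stand in for pgvector and Apache AGE, but the two-step retrieval (semantic nearest neighbours, then graph expansion) is the idea the feature list describes.

```python
import math

# Toy memory store: id -> (embedding, text). Real embeddings come from the
# text-embeddings-inference service; these are hand-made 2-D stand-ins.
memories = {
    "m1": ([1.0, 0.0], "User prefers dark mode"),
    "m2": ([0.9, 0.1], "User dislikes bright themes"),
    "m3": ([0.0, 1.0], "Project deadline is Friday"),
}
# Toy graph edges (Apache AGE would store these as typed relationships).
edges = {"m1": ["m2"]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recall(query_vec, k=1):
    # 1) Semantic step: rank memories by cosine similarity to the query.
    ranked = sorted(memories, key=lambda m: cosine(query_vec, memories[m][0]),
                    reverse=True)
    hits = ranked[:k]
    # 2) Graph step: pull in directly related memories via edges.
    related = [r for h in hits for r in edges.get(h, [])]
    return hits + [r for r in related if r not in hits]

print(recall([1.0, 0.05]))  # -> ['m1', 'm2']: nearest hit plus its graph neighbour
```

The graph step is what distinguishes this from plain vector search: m2 is returned because of its relationship to m1, not because of its own similarity rank.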

Quickstart

1. Prerequisites

  • Docker Desktop (must be running)
  • Python 3.10+

2. Setup

  1. Clone and Configure:

    git clone https://github.com/chipoto69/AGI_m3m0ry.git
    cd AGI_m3m0ry
    cp .env.local .env
  2. Start Services: This spins up the database, embedding service, and message queue.

    ./agi up
  3. Initialize: This sets up the default agent configuration (identity, goals).

    ./agi init

3. Connect Your AI Agent (MCP)

This repository provides a standard MCP Server configuration.

For Claude Desktop: Run the provided script to automatically inject the configuration into your Claude Desktop config:

python3 configure_mcp.py

(Restart Claude Desktop after running this)

For Gemini CLI / AmpCode / Other Tools: Copy the configuration from mcp_config.json (generated in the project root) and paste it into your tool's settings file.
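For reference, MCP tool settings generally follow this shape (the server name and script path below are illustrative; use the actual values from the generated mcp_config.json):

```json
{
  "mcpServers": {
    "agi-memory": {
      "command": "python3",
      "args": ["/path/to/AGI_m3m0ry/your_mcp_server.py"]
    }
  }
}
```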

4. Use the Dashboard

Visualize your agent's memory and state:

pip install streamlit pandas plotly
streamlit run dashboard.py

Visit http://localhost:8501 in your browser.


Detailed Usage

Project-Level Memory 🚀

You can create separate "brains" for different software projects to keep context clean.

  1. Create a Project:

    ./agi project create my_new_app
  2. Switch Context:

    • Dashboard: Use the sidebar dropdown to select my_new_app.
    • MCP/Agent: Update your agent's configuration (env var POSTGRES_DB) to use agi_project_my_new_app. You can get the connection string:
      ./agi project config my_new_app
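In code, switching projects is just a matter of pointing the connection at the project database. A minimal sketch, assuming the agi_project_&lt;name&gt; convention above; the host, port, and default user here are placeholders, so prefer the string printed by ./agi project config:

```python
import os

def project_dsn(project: str, host: str = "localhost", port: int = 5432) -> str:
    # Project databases follow the agi_project_<name> naming convention.
    db = f"agi_project_{project}"
    user = os.environ.get("POSTGRES_USER", "postgres")
    return f"postgresql://{user}@{host}:{port}/{db}"

print(project_dsn("my_new_app"))
```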

CLI Commands

  • Chat with Memory: agi chat
  • Ingest Documents: agi ingest --input ./docs
  • Check Status: agi status
  • Manage Services: agi up, agi down, agi logs
  • Start Autonomous Workers: agi start (Starts "heartbeat" and maintenance loops)

Architecture & Memory Types

Memory Types

  1. Working Memory

    • Temporary storage for active processing
    • Automatic expiry mechanism
    • Vector embeddings for content similarity
  2. Episodic Memory

    • Event-based memories with temporal context
    • Stores actions, contexts, and results
    • Emotional valence tracking and verification status
  3. Semantic Memory

    • Fact-based knowledge storage
    • Confidence scoring, source tracking, and contradiction management
  4. Procedural Memory

    • Step-by-step procedure storage (skills)
    • Success rate tracking and failure point analysis
  5. Strategic Memory

    • Pattern recognition storage
    • Adaptation history and context applicability
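The working-memory expiry mechanism above can be sketched as a timestamped store that drops entries past a TTL. This is an in-memory toy for illustration; the real system expires rows inside Postgres:

```python
import time

class WorkingMemory:
    """Toy working-memory buffer with automatic expiry."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.items = {}  # content -> insertion timestamp

    def add(self, content: str):
        self.items[content] = time.monotonic()

    def recall(self):
        # Drop anything older than the TTL before returning the rest.
        now = time.monotonic()
        self.items = {c: t for c, t in self.items.items() if now - t < self.ttl}
        return list(self.items)

wm = WorkingMemory(ttl_seconds=0.05)
wm.add("current topic: dashboards")
time.sleep(0.1)
print(wm.recall())  # -> []  (the entry has expired)
```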

Advanced Capabilities

  • Memory Clustering: Automatic thematic grouping of related memories with centroid tracking.
  • Worldview Integration: Belief system modeling with confidence scores that filter memory retrieval.
  • Graph Relationships: Apache AGE integration for complex memory networks (causal, temporal, etc.).
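The centroid tracking mentioned under Memory Clustering reduces to maintaining a running mean over member embeddings. A minimal sketch in pure Python (the real centroids would live in pgvector columns and be updated by the maintenance worker):

```python
class Cluster:
    """Toy memory cluster with an incrementally updated centroid."""

    def __init__(self, dim: int):
        self.centroid = [0.0] * dim
        self.size = 0

    def add(self, embedding):
        # Incremental mean update: centroid += (x - centroid) / n
        self.size += 1
        self.centroid = [c + (x - c) / self.size
                         for c, x in zip(self.centroid, embedding)]

c = Cluster(dim=2)
c.add([1.0, 0.0])
c.add([0.0, 1.0])
print(c.centroid)  # -> [0.5, 0.5]
```

The incremental form avoids re-averaging all members on every insert, which matters once clusters hold thousands of memories.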

Technical Stack

  • Database: PostgreSQL with extensions:
    • pgvector (vector similarity)
    • AGE (graph database)
    • btree_gist & pg_trgm (indexing/search)
  • Embeddings: Local inference via text-embeddings-inference (Docker).
  • Messaging: RabbitMQ (for autonomous worker communication).

Advanced Usage Scenarios

1) Python Library Client (App/API/UI in the Middle)

Use cognitive_memory_api.py as a thin client and build your own UX/API around it.

import asyncio
from cognitive_memory_api import CognitiveMemory

DSN = "postgresql://localhost:5432/agi_db"  # replace with your connection string

async def main():
    async with CognitiveMemory.connect(DSN) as mem:
        await mem.remember("User likes concise answers")
        ctx = await mem.hydrate("How should I respond?", include_goals=False)

asyncio.run(main())

2) Pure SQL Brain (DB-Native)

Your app talks directly to Postgres functions/views. Postgres is the system of record.

-- Store a memory (embedding generated inside the DB)
SELECT create_semantic_memory('User prefers dark mode', 0.9);

-- Retrieve relevant memories
SELECT * FROM fast_recall('What do I know about UI preferences?', 5);

3) Workers + Heartbeat (Autonomous State Management)

Turn on the workers so the database can schedule heartbeats, process external_calls, and keep the memory substrate healthy.

docker compose --profile active up -d

4) Tool-Gateway Architecture (Safe Side Effects)

Keep the brain in Postgres, but run side effects (email/text/posting) via an explicit outbox consumer.

  • Heartbeat queues outreach into outbox_messages
  • A separate delivery service enforces policy/approval and marks messages as sent.
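The outbox consumer loop above can be sketched as follows. This uses an in-memory list as a stand-in for the outbox_messages table, and the policy check is a placeholder for whatever approval rules your delivery service enforces:

```python
from dataclasses import dataclass

@dataclass
class OutboxMessage:
    channel: str
    body: str
    status: str = "queued"  # queued -> sent / rejected

def policy_allows(msg: OutboxMessage) -> bool:
    # Placeholder policy: only email is auto-approved.
    return msg.channel == "email"

def deliver(outbox):
    for msg in outbox:
        if msg.status != "queued":
            continue
        if policy_allows(msg):
            # A real consumer would call the email/SMS provider here,
            # then mark the row as sent in the database.
            msg.status = "sent"
        else:
            msg.status = "rejected"

outbox = [OutboxMessage("email", "weekly summary"),
          OutboxMessage("sms", "ping user")]
deliver(outbox)
print([m.status for m in outbox])  # -> ['sent', 'rejected']
```

Keeping side effects behind this explicit queue means the heartbeat can only request outreach; nothing leaves the system without passing the delivery service's policy.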

Performance Characteristics

  • Vector Search: Sub-second similarity queries on 10K+ memories.
  • Memory Storage: Supports millions of memories with proper indexing.
  • Cluster Operations: Efficient graph traversal for relationship queries.
  • Maintenance: Requires periodic consolidation and pruning (handled by the maintenance worker).

Troubleshooting

Database Connection Errors:

  • Ensure Docker is running.
  • Check logs: ./agi logs.
  • If ports conflict, check .env (default is 5432 or 5433).

Memory Search Performance:

  • Rebuild vector indexes if queries are slow.
  • Check memory_health view for system statistics.

License

MIT
