Local, open-source micro-agents that observe, log, and react, all while keeping your data private and secure.
An open-source platform for running local AI agents that observe your screen while preserving privacy.
Observer.mp4
Creating your own Observer AI agent is simple and consists of three things:
- SENSORS - the inputs your model will receive
- MODELS - models run by Ollama or by Ob-Server
- TOOLS - functions your model can use
- Navigate to the Agent Dashboard and click "Create New Agent"
- Fill in the "Configuration" tab with basic details (name, description, model, loop interval)
- Give your model a system prompt and Sensors! The currently available Sensors are:
- Screen OCR (`$SCREEN_OCR`): captures screen content as text via OCR
- Screenshot (`$SCREEN_64`): captures the screen as an image for multimodal models
- Agent Memory (`$MEMORY@agent_id`): accesses another agent's stored information
- Clipboard (`$CLIPBOARD`): pastes in the clipboard contents
- Microphone\* (`$MICROPHONE`): captures microphone audio and adds a transcription
- Screen Audio\* (`$SCREEN_AUDIO`): captures an audio transcription of a screen-shared tab
- All Audio\* (`$ALL_AUDIO`): mixes the microphone and screen audio and provides a complete transcription of both (useful for meetings)

\* Uses a Whisper model via transformers.js (only whisper-tiny English is supported for now)
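For example, a system prompt for a simple activity-logging agent might look like this (a hypothetical sketch; Observer substitutes the `$SCREEN_OCR` placeholder with the sensor's output on each loop):

```
You are an activity logger. The text currently on the user's screen is:

$SCREEN_OCR

Summarize what the user is doing in one line. If nothing has changed
since your last summary, reply with exactly NOTHING_NEW.
```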
- Decide what the tools do with your model's response in the Code tab:
- `notify(title, options)` - Sends a notification
- `getMemory(agentId)`\* - Retrieves stored memory (defaults to the current agent)
- `setMemory(agentId, content)`\* - Replaces stored memory
- `appendMemory(agentId, content)`\* - Adds to existing memory
- `startAgent(agentId)`\* - Starts an agent
- `stopAgent(agentId)`\* - Stops an agent
- `time()` - Gets the current time
- `sendEmail(content, email)` - Sends an email
- `sendSms(content, phone_number)` - Sends an SMS; format the number as in `sendSms("hello", "+181429367")`
- `sendWhatsapp(content, phone_number)` - Sends a WhatsApp message. IMPORTANT: as a temporary anti-spam measure, Observer currently sends only static messages and disregards the `content` variable.
- `startClip()` - Starts a recording of any video media and saves it to the Recordings tab
- `stopClip()` - Stops an active recording
- `markClip(label)` - Adds a label to an active recording, displayed in the Recordings tab
The "Code" tab now offers a notebook-style coding experience where you can choose between JavaScript or Python execution:
JavaScript agents run in the browser sandbox, making them ideal for passive monitoring and notifications:
```javascript
// Remove think tags for deepseek models
const cleanedResponse = response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();

// Preserve previous memory
const prevMemory = await getMemory();

// Get the current time (don't shadow the time() function)
const now = time();

// Update memory with a timestamp
appendMemory(`[${now}] ${cleanedResponse}`);
```

Note: any function marked with \* takes an `agentId` argument. If you omit `agentId`, it defaults to the agent that's running the code.
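A common pattern is to prompt the model to emit a structured keyword and only fire tools when it appears. As a hedged sketch (the `ALERT:` convention and the `extractAlert` helper are hypothetical, not part of the Observer API):

```javascript
// Hypothetical convention: the system prompt asks the model to prefix
// urgent findings with "ALERT:". Extract the alert text, if any.
function extractAlert(response) {
  const match = /ALERT:\s*(.+)/.exec(response);
  return match ? match[1].trim() : null;
}

// Inside an agent, this would gate the tool calls, e.g.:
// const alert = extractAlert(response);
// if (alert) notify("Observer", { body: alert });
```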
Python agents run on a Jupyter server with system-level access, enabling them to interact directly with your computer:
```python
#python <-- don't remove this!
print("Hello World!", response, agentId)

# Example: analyze screen content and take action
if "SHUTOFF" in response:
    # System-level commands can be executed here
    import os
    # os.system("command")  # Be careful with system commands!
```

The Python environment receives:
- `response` - the model's output
- `agentId` - the current agent's ID
To use Python agents:
- Run a Jupyter server on your machine
- Configure the connection in the Observer AI interface:
- Host: The server address (e.g., 127.0.0.1)
- Port: The server port (e.g., 8888)
- Token: Your Jupyter server authentication token
- Test the connection using the "Test Connection" button
- Switch to the Python tab in the code editor to write Python-based agents
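As a sketch of the first step (assuming you install Jupyter with pip; the flags below are standard Jupyter options):

```shell
# Install and start a local Jupyter server for Observer to connect to
pip install notebook
jupyter notebook --no-browser --ip=127.0.0.1 --port=8888
# The terminal prints a URL containing "?token=..."; paste that token
# into the "Token" field in the Observer AI interface.
```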
ObserverLocal.mp4
There are a couple of ways to get Observer up and running with local inference. We recommend using Docker for the simplest setup.
This method uses Docker Compose to run Observer-Ollama and a local Ollama instance together in containers. With this setup, all processing happens 100% on your computer.
Prerequisites:
- Docker installed.
- Docker Compose installed (often included with Docker Desktop).
Instructions:
- Clone this repository (or download the `docker-compose.yml` file), then build and start the containers:

  ```shell
  git clone https://github.com/Roy3838/Observer.git
  cd Observer
  docker-compose up --build
  ```

- Access Observer:
  - WebApp: open your browser to https://app.observer-ai.com
  - Accept local certificates: open https://localhost:3838 and your browser will warn about an "unsafe" or "untrusted" connection. This is because the proxy uses a self-signed SSL certificate for local HTTPS. Click "Advanced" and then "Proceed to localhost (unsafe)" (or similar wording) to accept it. These certificates are signed by your own computer, and this step is needed to make the browser happy and let it "see" the Ollama server.
- Pull Ollama models: once the services are running, you can pull models into your Ollama instance using the terminal feature in the Observer UI, or by running:

  ```shell
  docker-compose exec ollama_service ollama pull llama3  # or any other model
  ```
OR by using the web app:
- Go to the Web UI (https://app.observer-ai.com).
- In the Models tab, click "Add Model". This gives you a shell to your connected Ollama instance; download models using `ollama run`.
To stop Observer (Docker setup):

```shell
docker-compose down
```

This method is the same as the full Docker setup, but you access the webapp at https://localhost:8080 instead of https://app.observer-ai.com.

This works as a 100% offline alternative, but because the page is served without HTTPS (it is secure, it just isn't HTTPS), Auth0 will complain, so the sendSms, sendWhatsapp, and sendEmail tools won't work.

I recommend going with Option 1 (it is also 100% local) to keep all of the Auth0 features, but I still wanted to give the option to self-host the webpage.
Save your agent, test it from the dashboard, and export the configuration to share with others!
We welcome contributions from the community! Here's how you can help:
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- GitHub: @Roy3838
- Project Link: https://observer-ai.com
Built with ❤️ by Roy Medina for the Observer AI community. Special thanks to the Ollama team for being an awesome backbone to this project!