A modern low-code visual programming platform built on NodeGraphQt and qfluentwidgets, supporting drag-and-drop component orchestration, asynchronous execution, file operations, control flow logic, and one-click export of workflows into standalone, executable projects—enabling seamless transition from development to deployment.
| Traditional Low-Code Tools | CanvasMind |
|---|---|
| Static component assembly | Dynamic expressions + global variables drive parameters |
| Only serial execution | Full conditional branching, iteration, and loops |
| No custom logic | Embedded code editor for writing Python components freely |
| Execution = endpoint | One-click export to standalone projects (API, CLI, Docker) |
| AI disconnected from canvas | Deep LLM integration: yellow jump / purple create buttons for canvas-aware intelligent completion |
| Fixed runtime environment | Remote execution via SSH: integrated Python environment management for SSH servers, with nodes dispatched to the server side for execution |
| No trigger nodes, or hard-coded trigger options | Extensible plugin trigger system: decoupled architecture that dynamically loads Cron, Webhook, and file-watcher triggers; the UI auto-syncs with backend logic |
- Dynamic Property Grid – Render adaptive UI controls (text fields, numeric inputs, file selectors, toggles, sliders) based on parameter data types and validation rules
- Hierarchical Property Tree – Organize nested configurations into expandable/collapsible tree structures with drag-and-drop reordering for complex workflows
- Context-Aware Validation – Apply real-time validation logic based on parameter dependencies (e.g., enabling/disabling fields based on toggle states)
- Interactive Tree Navigation – Context menus and visual indicators for managing parent-child relationships in hierarchical data structures
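The context-aware validation above can be sketched as a visibility filter over the property schema. The `enabled_when` key and the schema shape here are illustrative assumptions, not CanvasMind's actual API:

```python
def visible_fields(schema, values):
    """Return the names of fields whose (hypothetical) 'enabled_when' rules hold.

    Each schema entry may declare enabled_when: {param: required_value}; the
    field is rendered only while every listed parameter holds that value.
    """
    return [
        f["name"]
        for f in schema
        if all(values.get(k) == v for k, v in f.get("enabled_when", {}).items())
    ]
```

Flipping a toggle re-evaluates the rules, so dependent fields appear or disappear in real time.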
- Parallel DAG Execution – Independent branches are executed concurrently via a high-performance task scheduler, maximizing CPU/GPU utilization across the workflow.
- Hybrid Runtime Orchestration – Supports seamless mixing of execution environments:
- Interactive IPython Kernel: Leveraging local persistent sessions for rapid debugging and state retention.
- Remote SSH Workers: Transparently dispatching heavy-compute nodes (e.g., Model Training/Inference) to high-performance servers with automated environment syncing.
- Selective In-Memory Persistence (Caching) – Users can toggle "Pin to Memory" for specific nodes; results are cached in the active process RAM to eliminate redundant re-computation and I/O overhead during iterative tuning.
- Intelligent Topological Dispatch – Automatically resolves dependencies and routes tasks to the optimal target (Local/Remote/IPython) based on node configuration.
- Unified State Management – Real-time visualization of node status (Queued / Running / Success / Failed) across all distributed workers on a single canvas.
- High-Speed Data Serialization – Utilizes `pyarrow` and `pickle` for low-latency data transfer between local and remote environments.
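The parallel DAG scheduler described above can be sketched with `concurrent.futures`: roots are submitted immediately, and each completed node unlocks any child whose parents have all finished. This is a minimal model of the idea, not CanvasMind's scheduler:

```python
from collections import defaultdict
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_dag(nodes, edges, run_node, max_workers=4):
    """Execute independent DAG branches concurrently.

    nodes: iterable of node ids; edges: (src, dst) pairs.
    run_node(node, results) computes one node; its parents are
    guaranteed to be present in `results` when it runs.
    """
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for src, dst in edges:
        indegree[dst] += 1
        children[src].append(dst)

    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Seed the pool with every root (indegree-0) node.
        pending = {pool.submit(run_node, n, results): n
                   for n in nodes if indegree[n] == 0}
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                node = pending.pop(fut)
                results[node] = fut.result()
                # A child becomes runnable once all of its parents finished.
                for child in children[node]:
                    indegree[child] -= 1
                    if indegree[child] == 0:
                        pending[pool.submit(run_node, child, results)] = child
    return results
```

In a diamond graph `a → {b, c} → d`, the `b` and `c` branches run concurrently while `d` waits for both.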
- Type-Aware Suggestions – Automatically match compatible downstream components based on output port types
- Multi-Port Grouping – Recommendations grouped by source port for clarity
- Visual Differentiation – Color-coded suggestions per port type
- Cross-Canvas Learning – Tracks component connection frequency to improve recommendations over time
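Cross-canvas learning can be modeled as a frequency counter over past connections, with suggestions ranked by how often a downstream component followed the same source type. A minimal sketch (class and method names are assumptions):

```python
from collections import Counter

class ConnectionStats:
    """Rank candidate downstream components by historical connection frequency."""

    def __init__(self):
        self.counts = Counter()

    def record(self, src_type, dst_type):
        """Record one observed connection between two component types."""
        self.counts[(src_type, dst_type)] += 1

    def suggest(self, src_type, candidates, top=3):
        """Return candidates sorted by descending observed frequency."""
        return sorted(candidates, key=lambda c: -self.counts[(src_type, c)])[:top]
```

Components that were never connected before still appear, just ranked last, so new components remain discoverable.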
- Yellow Jump Buttons: When the LLM references an existing node, a yellow `[Node Name](jump)` button appears—click to instantly navigate to that node on the canvas.
- Purple Create Buttons: When recommending a new capability, a purple `[Component Name](create)` button is generated—click to instantiate the component from your library and auto-connect it.
- Multimodal Context Injection: Automatically passes node JSON, variable states, and base64-encoded images to the LLM for precise, actionable suggestions.
- Canvas-Aware Completion: Supports simultaneous references to multiple existing nodes (yellow) and recommendations for missing components (purple), enabling end-to-end workflow completion.
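Since the jump/create buttons use a markdown-link-like token format, extracting them from an LLM reply is a small regex job. A sketch, assuming the `[Label](jump)` / `[Label](create)` syntax shown above is the full grammar:

```python
import re

# Matches "[Some Label](jump)" or "[Some Label](create)" anywhere in the text.
BUTTON_RE = re.compile(r"\[([^\]]+)\]\((jump|create)\)")

def extract_buttons(llm_text):
    """Return the jump/create buttons found in an LLM reply, in order."""
    return [
        {"label": m.group(1), "action": m.group(2)}
        for m in BUTTON_RE.finditer(llm_text)
    ]
```

The UI would then render each `jump` entry as a yellow button and each `create` entry as a purple one.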
- Conditional Branching – Enable/disable branches based on `$...$` expressions (if/else logic)
- Iteration – Loop over lists or arrays, executing subgraphs per element
- Loop Control – Fixed-count or condition-driven loops
- Dynamic Subgraph Skipping – Entire downstream subgraphs of inactive branches are skipped for efficiency
- Expression-Driven Logic – Branch conditions, loop counts, etc., support dynamic expressions
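Dynamic subgraph skipping amounts to a reachability walk from the roots of the inactive branch. The sketch below ignores nodes with multiple parents for simplicity, so it is an illustration of the idea rather than CanvasMind's actual rule:

```python
from collections import defaultdict, deque

def skipped_nodes(edges, inactive_roots):
    """Mark everything downstream of an inactive branch as skipped (simplified)."""
    children = defaultdict(list)
    for src, dst in edges:
        children[src].append(dst)

    skipped, queue = set(), deque(inactive_roots)
    while queue:
        n = queue.popleft()
        if n in skipped:
            continue
        skipped.add(n)
        queue.extend(children[n])  # propagate the skip to all descendants
    return skipped
```

The scheduler can then drop every node in the returned set from the execution plan without evaluating it.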
- Structured Scopes – Three variable scopes: `env` (environment), `custom` (user-defined), and `node_vars` (node outputs)
- Dynamic Expressions – Use `$env_user_id$` or `$custom_threshold * 2$` in any parameter field
- Runtime Evaluation – Expressions resolved before execution, with support for nested dicts/lists
- Secure Sandbox – Powered by `asteval`; prevents unsafe operations and isolates environments via context managers
- UI Integration – Select variables or type expressions directly in component property panels
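The real sandbox is `asteval`-based; the stand-in below uses a builtins-stripped `eval` (not safe for untrusted input) purely to illustrate how `$...$` substitution can work against a flattened scope dict:

```python
import re

def resolve(value, scopes):
    """Replace each $expr$ in a parameter string with its evaluated result.

    scopes: flat dict of variable names (e.g. env_user_id, custom_threshold).
    Simplified stand-in for the asteval-powered resolver; not a real sandbox.
    """
    env = {"__builtins__": {}}  # strip builtins so expressions stay data-only
    env.update(scopes)
    return re.sub(r"\$([^$]+)\$", lambda m: str(eval(m.group(1), env)), value)
```

So a property value like `limit=$custom_threshold * 2$` is rewritten to a concrete number before the node runs.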
- Full Python Logic – Write complete `run()` methods and helper functions inside nodes
- Dynamic Ports – Add/remove input/output ports via UI; bind global variables as defaults
- Full Feature Integration – Leverages global variables, expressions, auto-dependency install, logging, and status visualization
- Safe Execution – Runs in isolated subprocesses with timeout control, error capture, and retry support
- Developer-Friendly Editor – Professional code editor with dark theme, syntax highlighting, intelligent autocomplete, folding, and error diagnostics
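A code component then looks roughly like the class below: a `run()` method that reads inputs by port name and returns outputs by port name. The `params`/`logger` attribute names are assumptions, and `logging` stands in for the Loguru logger:

```python
import logging

class FilterNode:
    """Hypothetical code node, as it might be written in the embedded editor."""

    def __init__(self, params):
        self.params = params
        self.logger = logging.getLogger("FilterNode")  # stand-in for Loguru

    def run(self, inputs):
        """inputs: dict keyed by input port; returns dict keyed by output port."""
        rows = inputs["rows"]
        threshold = float(self.params.get("threshold", 0.5))
        kept = [r for r in rows if r["score"] >= threshold]
        self.logger.info("kept %d of %d rows", len(kept), len(rows))
        return {"filtered": kept}
```

Adding an output port in the UI corresponds to adding another key to the returned dict.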
- Dynamic Plugin Loading – Decoupled architecture that automatically discovers and registers new trigger types (Cron, Webhook, File Watcher) from the plugin directory without restarting.
- Auto-Adaptive UI – Node property panels dynamically reconstruct their input widgets based on the selected plugin, ensuring a clean, context-aware interface.
- Event-Driven Execution – Transition from manual execution to automated workflows by reacting to external HTTP requests, schedule patterns, or file system changes.
- Lifecycle Management – Built-in safety logic that automatically unregisters backend listeners when a canvas is closed or a node is deleted to prevent resource leaks.
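Dynamic plugin loading of this kind is typically a directory scan plus `importlib`. The sketch below assumes each plugin module exposes a `TRIGGER` attribute; the attribute name and registry shape are illustrative, not CanvasMind's contract:

```python
import importlib.util
import pathlib

def discover_triggers(plugin_dir):
    """Scan a plugin directory and collect each module's TRIGGER export."""
    registry = {}
    for path in pathlib.Path(plugin_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)  # runs the plugin module in isolation
        trigger = getattr(mod, "TRIGGER", None)
        if trigger is not None:
            registry[path.stem] = trigger
    return registry
```

Re-running the scan picks up newly dropped plugin files, which is what allows registration without a restart.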
- Dynamic Loading – Auto-scans the `components/` directory and loads new components
- Pydantic Schemas – Define inputs, outputs, and properties using Pydantic models
- Per-Node Logging – Each node maintains its own execution log
- State Persistence – Save/load entire workflows
- Auto Dependency Resolution – Components declare `requirements`; missing packages are auto-installed at runtime
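A Pydantic-backed property schema might look like the following; the component name, field names, and descriptions are invented for illustration:

```python
from pydantic import BaseModel, Field

class CsvLoaderProps(BaseModel):
    """Hypothetical property schema for a CSV-loader component."""

    path: str = Field(..., description="FILE - local CSV path")
    delimiter: str = Field(",", description="TEXT - column separator")
    has_header: bool = Field(True, description="BOOL - first row is a header")
```

Because Pydantic validates on construction, a mistyped property value fails fast with a readable error instead of surfacing mid-run.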
- Subgraph Export – Select any group of nodes and export as a self-contained project
- Train/Inference Separation – Export only inference logic with trained models bundled
- Zero-Dependency Runtime – Generated project runs independently—no CanvasMind required
- Multi-Environment Support – Auto-generated `requirements.txt` enables deployment to servers, Docker, or CLI environments
- Direct Invocation – Canvas can call exported project scripts by name and retrieve results
- Parameter Passing – Node properties define tool-call parameters, passed automatically at runtime
- Full Logging – Detailed logs of tool execution are captured and returned for debugging
- LLM Function Calling Ready – Standardized tool name, input/output schema, and examples for seamless LLM integration
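For the function-calling side, the exported `project_spec.json` can be translated into an OpenAI-style tool definition. The spec's field names (`name`, `description`, `inputs`) are assumptions about its layout:

```python
def tool_spec(project_spec):
    """Convert a project_spec.json dict into a function-calling tool schema (sketch)."""
    return {
        "type": "function",
        "function": {
            "name": project_spec["name"],
            "description": project_spec.get("description", ""),
            "parameters": {
                "type": "object",
                "properties": {
                    p["name"]: {"type": p.get("type", "string")}
                    for p in project_spec.get("inputs", [])
                },
                "required": [p["name"] for p in project_spec.get("inputs", [])],
            },
        },
    }
```

The LLM then sees each exported project as one callable tool with a typed argument list.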
```bash
pip install -r requirements.txt
python main.py
python build.py
```

| Type | Description | Example |
|---|---|---|
| `TEXT` | Text input | String parameters |
| `LONGTEXT` | Long text input | Multi-line strings |
| `INT` | Integer | Numeric values |
| `FLOAT` | Floating point | Decimal numbers |
| `BOOL` | Boolean | Toggle switches |
| `CSV` | CSV list data | Column selections |
| `JSON` | JSON structure | Dynamic nested data |
| `EXCEL` | Excel data | Cell ranges |
| `FILE` | File path | Local file reference |
| `UPLOAD` | Document upload | User-uploaded files |
| `SKLEARNMODEL` | Scikit-learn model | Trained .pkl models |
| `TORCHMODEL` | PyTorch model | .pt or .pth models |
| `IMAGE` | Image data | Base64 or file paths |
| Type | Description | Example |
|---|---|---|
| `TEXT` | Text input | Short strings |
| `LONGTEXT` | Long text input | Code snippets, prompts |
| `INT` / `FLOAT` | Numeric input | Thresholds, counts |
| `BOOL` | Toggle | Enable/disable flags |
| `CHOICE` | Dropdown | Predefined options |
| `DYNAMICFORM` | Dynamic form | Variable-length lists |
| `RANGE` | Numeric range | Min/max sliders |
| `VARIABLE` | Variable selector | `global_variable` |
| `FILE SELECT` | File selector | `canvas_files/model.pth` |
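The auto-adaptive property panel can be thought of as a lookup from property type to input widget. The widget names below are examples chosen for illustration, not CanvasMind's actual classes:

```python
# Illustrative type-to-widget mapping; widget names are assumptions.
WIDGET_FOR_TYPE = {
    "TEXT": "LineEdit",
    "LONGTEXT": "TextEdit",
    "INT": "SpinBox",
    "FLOAT": "DoubleSpinBox",
    "BOOL": "SwitchButton",
    "CHOICE": "ComboBox",
    "DYNAMICFORM": "DynamicFormWidget",
    "RANGE": "RangeSlider",
    "VARIABLE": "VariablePicker",
    "FILE SELECT": "FilePicker",
}

def widget_for(prop_type):
    """Fall back to a plain text field for unknown property types."""
    return WIDGET_FOR_TYPE.get(prop_type.upper(), "LineEdit")
```

When a node's schema changes, the panel rebuilds itself by running each declared property through this lookup.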
- Create Node – Drag from left panel to canvas
- Connect Nodes – Drag from output port to input port
- Run Node – Right-click → “Run This Node”
- View Logs – Right-click → “View Node Logs”
- Loops – Use Loop/Iterate nodes with Backdrop for structured iteration
- File Handling – Click file picker in property panel
- Workflow Management – Save/load via top-left buttons
- Node Grouping – Select multiple nodes → right-click → “Create Backdrop”
- Dependency Management – Failed components auto-install missing `requirements`
- `Ctrl+R` – Run workflow
- `Ctrl+S` – Save workflow
- `Ctrl+O` – Load workflow
- `Ctrl+A` – Select all nodes
- `Del` – Delete selected nodes
- Idle – Gray border
- Running – Blue border
- Success – Green border
- Failed – Red border
- Idle – Yellow
- Input Active – Blue
- Output Active – Green
- Each node has independent logs with timestamps
- Powered by Loguru – use `self.logger` in components
- All `print()` output is automatically captured
- Inputs auto-populated from upstream outputs
- Outputs stored by port name
- Full multi-input/multi-output support
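Auto-populating inputs from upstream outputs reduces to a lookup over the connection list. A sketch, with the connection-record field names assumed:

```python
def gather_inputs(node, connections, results):
    """Build a node's input dict from upstream outputs, keyed by port name.

    connections: dicts with src_node/src_port/dst_node/dst_port keys (assumed).
    results: {node_id: {output_port: value}} from already-executed nodes.
    """
    inputs = {}
    for conn in connections:
        if conn["dst_node"] == node:
            inputs[conn["dst_port"]] = results[conn["src_node"]][conn["src_port"]]
    return inputs
```

Multi-input nodes fall out naturally: each incoming connection just fills a different key of the dict.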
Export any subgraph as a self-contained project that runs in any Python environment—no CanvasMind required.
- Train/Inference Split – Export only inference logic with models bundled
- Team Sharing – Share full workflows as runnable projects
- Production Deployment – Run on servers or in Docker
- Offline Execution – CLI-only environments
✅ Smart Dependency Analysis – Copies only necessary component code
✅ Path Rewriting – Model/data files copied and converted to relative paths
✅ Column Selection Preserved – CSV column config fully retained
✅ Environment Isolation – Auto-generated requirements.txt
✅ Ready-to-Run – Includes run.py and api_server.py
- Select Nodes – Choose any nodes on canvas (multi-select supported)
- Click Export – Top-left “Export Model” button (📤 icon)
- Choose Directory – Project folder auto-generated
- Run Externally:

```bash
# Install dependencies
pip install -r requirements.txt

# Run model
python run.py
```

Exported project layout:

```
model_xxxxxxxx/
├── model.workflow.json   # Full workflow definition (nodes, connections, column selections)
├── project_spec.json     # Input/output schema
├── preview.png           # Canvas preview snapshot
├── README.md             # Project overview
├── requirements.txt      # Auto-analyzed dependencies
├── run.py                # CLI entrypoint
├── api_server.py         # FastAPI microservice
├── scan_components.py    # Component loader
├── runner/
│   ├── component_executor.py
│   └── workflow_runner.py
├── components/           # Original component code (preserved structure)
│   ├── base.py
│   └── your_components/
└── inputs/               # Bundled models/data files
```
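Direct invocation of an exported project from the canvas can be sketched as a subprocess call into its `run.py`. The `--input` flag and the JSON-on-stdout contract are assumptions about the generated CLI, made here only for illustration:

```python
import json
import subprocess
import sys

def call_exported(project_dir, payload):
    """Run an exported project's run.py and parse its JSON result.

    Assumes (hypothetically) that run.py accepts --input <json>
    and prints a JSON result to stdout.
    """
    proc = subprocess.run(
        [sys.executable, "run.py", "--input", json.dumps(payload)],
        cwd=project_dir,
        capture_output=True,
        text=True,
        check=True,  # raise if the exported project exits non-zero
    )
    return json.loads(proc.stdout)
```

Captured stderr from the subprocess would feed the "Full Logging" behavior described above.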
- ✅ Visual canvas (NodeGraphQt)
- ✅ Control flow: conditionals, loops, iteration
- ✅ Global variables + expression system
- ✅ Dynamic code components (embedded editor)
- ✅ Intelligent node recommendations
- ✅ One-click export (CLI + API)
- ✅ Multi-environment management
- ✅ LLM context integration (yellow jump / purple create buttons)
- ✅ Parallel & remote execution
- ⏳ Code-to-canvas auto-creation (from editor → new node)
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the GPLv3 License.
- NodeGraphQt – Node graph framework
- PyQt-Fluent-Widgets – Fluent Design UI library
- Loguru – Elegant Python logging




