Canopy


A Modern C++ Remote Procedure Call Library for High-Performance Distributed Systems

Note: Canopy and its documentation are in beta and under active development.

Current implementation status:

  • C++ is the primary and most complete implementation
  • Rust exists as an experimental interoperable implementation under rust/, currently focused on blocking Protocol Buffers with local and dynamic-library transports
  • JavaScript support exists as a reduced-trust generated client/transport layer for WebSocket-oriented scenarios, not as a full Canopy runtime equivalent

To get started with your own app, copy this template: Example Canopy App

The Pitch

If your system crosses process, machine, plugin, or trust boundaries, you are already paying a tax in handwritten glue: marshalling, transport plumbing, callback wiring, and lifetime management. Canopy turns that tax into generated code and reusable runtime structure.

In practical terms, Canopy aims to:

  • remove much of the transport and serialization glue developers would otherwise write by hand
  • let one interface definition work across local, process, network, and trust-boundary transports
  • preserve remote object identity and lifetime semantics instead of forcing everything into stateless request/response patterns
  • support both blocking and coroutine builds from the same C++ interface and implementation structure

When it fits best:

  • systems that cross local, process, network, or trust boundaries repeatedly
  • plugin and child-process architectures
  • C++ systems that want generated RPC instead of hand-written protocol glue
  • applications that need remote callbacks or distributed object lifetimes

When it is probably not the right tool:

  • purely local applications with no meaningful boundary crossings
  • simple public HTTP or JSON APIs where request/response is enough
  • projects that do not want generated code in the build
  • teams that need full cross-language runtime parity today

Start Here


Why Canopy?

Distributed C++ systems are expensive to build because every boundary tends to accumulate bespoke protocol code. Whenever two components talk across a process boundary, a network connection, a plugin boundary, or a security enclave, someone usually ends up hand-writing serialization, connection management, callback plumbing, and error handling. Canopy aims to replace much of that with generated interfaces and reusable runtime structure; the project goal is often described as removing roughly 70-80% of that boundary glue code.

Performance Notes

Canopy is intended for high-throughput C++ systems, but the right performance story depends on transport, serializer, and execution mode.

What is currently defensible from the release coroutine benchmark tree:

  • the project ships working benchmark targets for:
    • full-stack RPC transport comparisons
    • serializer round-trip measurements
    • streamed transport microbenchmarks
  • current serializer benchmarks show very low overhead on small native C++ shapes, with many scalar round-trips in the tens of nanoseconds
  • YAS is generally strongest for C++-only high-performance paths
  • Protocol Buffers remains viable for interoperable paths and is the current serializer used by the experimental Rust implementation
  • coroutine builds exist specifically to support higher-throughput streamed and networked transports

Current caveat:

  • some release coroutine streaming microbenchmarks still need investigation; for now, the strongest claims concern serializer costs and the breadth of benchmark coverage, rather than polished end-to-end streaming leaderboard numbers

           .idl
             │
        ┌────┴────┐
      proxy      stub
        │          │
     caller      callee

Write the interface once in IDL. Canopy generates type-safe C++ proxy and stub code from a simple Interface Definition Language. You call a remote object exactly as you would a local one; marshalling, routing, and lifecycle management are handled for you.


  ┌───────────────────────────────────────┐
  │ direct  DLL  SPSC  TCP  TLS  SGX  IPC │
  └───────────────────────────────────────┘
           same generated interface

Works across every boundary you care about. The primary C++ implementation runs over in-process direct calls, in-process DLL boundaries, shared-memory SPSC queues, TCP sockets, TLS-encrypted streams, child-process IPC transports, and SGX secure enclaves. Switching transport is a matter of changing which stream or transport you construct — your interface code does not change.


 ╔═════════════ TLS ════════════════╗
 ║ ╔═══════════ TCP ══════════════╗ ║
 ║ ║ ╔═════════ SPSC ═══════════╗ ║ ║
 ║ ║ ║    streaming::stream     ║ ║ ║
 ║ ║ ╚══════════════════════════╝ ║ ║
 ║ ╚══════════════════════════════╝ ║
 ╚══════════════════════════════════╝

Streams compose. Transport streams stack cleanly: wrap a TCP stream in an SPSC buffering layer, then wrap that in TLS, and hand the result to the transport. Each layer only knows about the stream interface below it. Adding encryption, compression, or custom framing requires no changes to the RPC layer above or the network layer below.


  ┌──── build flag ────┐
  │                    │
  ▼                    ▼
blocking            co_await
 A→B→C→D            A→B→C→D
    (same source code, two modes)

Blocking and coroutine modes from the same source. The same C++ implementation compiles in both a straightforward blocking mode (useful for debugging and simple deployments) and a full coroutine mode using C++20 co_await. Switching between them is a build flag; your code does not change. This matters particularly for AI-assisted development: LLMs can generate and reason about Canopy interfaces and implementations reliably because there is no hidden async machinery to infer.


            ┌──[root zone]──┐
           /        │        \
      [zone A]   [zone B]   [zone C]
          │          │          │
        [sub]    peer link    [sub]
        /    \
   node A    node B

Distributed by design. Each machine or process hosts its own root zone. Child zones branch from it for plugins, enclaves, or any other isolation boundary. Multiple nodes connect as peers over the network. Objects living at any depth in any node's zone tree can call objects at any depth in any other node's tree; the routing is automatic. A planned TUN implementation would optionally give each RPC object its own exposed IP address.


  ╭──────────────────────────────────╮
  │  BINARY ◄────────●────────► JSON │
  │            PROTO   YAS           │
  │         per-connection dial      │
  ╰──────────────────────────────────╯

No serialization format lock-in. Canopy can be extended to use any reasonable serialization format: binary YAS for high-performance C++ throughput, compressed binary for bandwidth-constrained links, JSON for human-readable debugging and cross-language interop, and Protocol Buffers for teams that need a language-neutral wire format. The format can be negotiated per connection or overridden per call. Today, the experimental Rust implementation is Protocol Buffers only.


[ Machine A ]           [ Machine B ]           [ Machine C ]
      |                       |                       |
 Owns Object <--shared_ptr--- Receives Ref            |
      |                       |                       |
      |                       ---shared_ptr---------> Receives Ref
      |                       (B can drop its Ref)    |
      |                                               |
 Object Kept Alive <------------------------------ Active Ref

Canopy extends C++ RAII across the network. Using rpc::shared_ptr and rpc::optimistic_ptr, you can manage the lifetime of remote objects as easily as local ones, even in complex multi-hop topologies.

  • rpc::shared_ptr: Mimics std::shared_ptr behavior across the wire. It maintains a distributed reference count. If Machine A shares an object with Machine B, and Machine B passes that reference to Machine C, the object on Machine A remains alive until both B and C have released their pointers.
  • rpc::optimistic_ptr: Optimized for performance where the developer assumes the object will remain valid for the duration of the call; good for long-lived objects such as LLMs and databases, or for breaking circular dependencies.

  caller                      callee
   🐒  ══[post]══▶▶▶▶▶▶▶▶   🐒
   │
   └──▶ continues immediately
           (no reply needed)

One-directional calls for fire-and-forget and streaming workloads, such as financial data or streaming media. Methods marked [post] are sent without waiting for a reply; the caller continues immediately. This eliminates round-trip latency for workloads where the caller does not need a result: streaming media frames, LLM inference token delivery, telemetry events, log records, or any high-throughput notification pattern.
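As a sketch, a fire-and-forget method might be declared like this. The interface and method names are illustrative; see the IDL Guide for the exact [post] syntax:

```
namespace feed {
    [inline] namespace v1 {
        interface i_tick_sink {
            // [post]: one-way; the caller does not wait for a reply
            [post] error_code on_tick(int instrument_id, double price);
        };
    }
}
```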


                   ┌──[i_foo]──▶
  [remote object] ─┼──[i_bar]──▶  class X
                   └──[i_baz]──▶
       cast performed against live object

Polymorphism and Multiple Inheritance. A single remote object can implement multiple interfaces simultaneously, and many different classes can implement the same interface. Callers hold a proxy to one interface and can remotely cast to any other interface the object supports — the cast is performed against the live object in its zone, not a local copy. This gives you the full expressiveness of C++ polymorphism over any transport, without being limited to the single flat contracts that most RPC systems impose.


  [zone] ──── discover ────▶  { i_calculator }  ──▶ MCP
     ?                        { i_logger     }
                              { i_storage    }

Remote reflection. Canopy carries interface metadata across zone boundaries, making it possible to discover what interfaces a remote object supports at runtime. This opens the door to generic tooling, dynamic proxies, and runtime composition — capabilities that are normally reserved for languages with built-in reflection and are unusual in a C++ RPC system. One practical application is implementing Model Context Protocol (MCP) services: because Canopy can enumerate the methods and types of a remote object at runtime, it can generate MCP tool descriptions dynamically, allowing AI assistants to discover and call C++ services without any hand-written schema.


Key Features

  • Type-Safe: Full C++ type system integration with compile-time verification
  • Transport Agnostic: Local, DLL, IPC, TCP, SPSC, SGX Enclave, and custom transports
  • Composable Streams: TCP, TLS, SPSC, WebSocket layers in any combination
  • Format Agnostic: YAS binary, compressed binary, JSON, Protocol Buffers, more can be added
  • Bi-Modal Execution: Same code runs in both blocking and coroutine modes
  • Experimental Rust Runtime: Interoperable blocking Rust implementation for Protocol Buffers over local and dynamic-library transports
  • Reduced-Trust JavaScript Client: Generated JavaScript/WebSocket client support without claiming full runtime parity
  • SGX Enclave Support: Secure computation in Intel SGX enclaves
  • Comprehensive Telemetry: Sequence diagrams, console output, HTML animations
  • Coroutine Library Agnostic: libcoro, libunifex, cppcoro, Asio (see 08-coroutine-libraries.md)
  • Sanitizer Support: Address, Thread, and UndefinedBehavior sanitizers via Clang

Documentation

Start with the Documentation Overview.

Key entry points:

Getting Started

  1. Introduction - What is Canopy and its key features
  2. Getting Started Tutorial - Step-by-step tutorials
  3. IDL Guide - Interface Definition Language syntax and usage
  4. Building Canopy - Build configuration and CMake presets
  5. Bi-Modal Execution - Blocking and coroutine modes
  6. Error Handling - Error codes and handling patterns
  7. Telemetry - Debugging and visualization
  8. Coroutine Libraries - Coroutine library support and porting
  9. API Reference - Quick reference for main APIs
  10. Examples - Working examples and demos
  11. Best Practices - Design guidelines and troubleshooting

Architecture

Additional Implementation Notes

  • C++ Status - current status of the primary implementation
  • Rust Status - current status and supported scope of the experimental Rust implementation
  • JavaScript Status - current status of the reduced-trust JavaScript client layer
  • Rust Port Documentation - Rust planning, migration history, and retrospectives

Build And Test

Serialization

Companion Repositories

Repository   Description
CanopyJSON   Generic JSON value type (json::v1::object) for use in Canopy IDL interfaces. Provides runtime flexibility within a strongly-typed IDL; useful wherever the structure of data is open-ended at compile time, such as LLM request configuration. Serializes to pure JSON on JSON transports and compact binary on binary transports.

Quick Start

Prerequisites

  • C++17 Compiler: Clang 10+, GCC 9.4+, or Visual Studio 2019+
  • CMake: 3.24 or higher
  • Build System: Ninja (recommended)
  • Node.js: 18+ (for llhttp code generation)
  • OpenSSL: Development headers (libssl-dev on Linux, OpenSSL SDK on Windows)
  • clang-tidy (optional): LLVM 16+ for static analysis; LLVM 21+ recommended for full check coverage including modernize-use-designated-initializers

Build

# Clone and configure
git clone https://github.com/edwardbr/Canopy.git
cd Canopy

# Blocking (synchronous) mode
cmake --preset Debug
cmake --build build_debug

# Coroutine (async/await) mode
cmake --preset Debug_Coroutine
cmake --build build_debug_coroutine

# With AddressSanitizer
cmake --preset Debug_ASAN
cmake --build build_debug

cmake --preset Debug_Coroutine_ASAN
cmake --build build_debug_coroutine

# Coverage builds
cmake --preset Debug_Coverage
cmake --build build_debug

cmake --preset Debug_Coroutine_Coverage
cmake --build build_debug_coroutine

# Static analysis with clang-tidy (requires LLVM 16+)
cmake --preset Debug_Coroutine_Tidy
cmake --build build_debug_coroutine_tidy

# Run tests
ctest --test-dir build_debug --output-on-failure
ctest --test-dir build_debug_coroutine --output-on-failure

Local User Presets

For machine-specific or personal presets, create CMakeUserPresets.json from the template:

cp CMakeUserPresets.json.example CMakeUserPresets.json
cmake --list-presets

This keeps your custom presets local while still inheriting from project presets.

Build Options

# Execution mode
CANOPY_BUILD_COROUTINE=ON    # Enable async/await support (requires C++20)

# Features
CANOPY_BUILD_ENCLAVE=ON      # SGX enclave support
CANOPY_BUILD_TEST=ON         # Test suite
CANOPY_BUILD_DEMOS=ON        # Demo applications

# Development
CANOPY_USE_LOGGING=ON        # Comprehensive logging
CANOPY_USE_TELEMETRY=ON      # Debugging and visualization
CANOPY_VERBOSE_GENERATOR=ON  # Code generation debugging

# Memory Safety
CANOPY_DEBUG_ADDRESS=ON      # AddressSanitizer (detect memory errors)
CANOPY_DEBUG_THREAD=ON       # ThreadSanitizer (detect data races)
CANOPY_DEBUG_UNDEFINED=ON    # UndefinedBehaviorSanitizer
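These options can typically be combined with a configure preset on the command line. The commands below are illustrative; whether a given preset already pins some of these options may vary:

```
# Example: a blocking Debug build with telemetry and demos enabled
cmake --preset Debug -DCANOPY_USE_TELEMETRY=ON -DCANOPY_BUILD_DEMOS=ON
cmake --build build_debug
```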

Hello World Example

calculator.idl:

namespace calculator {
    [inline] namespace v1 {
        [status=production]
        interface i_calculator {
            error_code add(int a, int b, [out] int& result);
        };
    }
}

Server — listen on TCP, wrap each accepted connection in TLS, serve the calculator:

#include "generated/calculator/calculator.h"
#include <streaming/listener.h>
#include <streaming/tcp/acceptor.h>
#include <streaming/tls/stream.h>
#include <transports/streaming/transport.h>

using namespace calculator::v1;

auto service = std::make_shared<rpc::root_service>("calc_server", server_zone, scheduler);

auto tls_ctx = std::make_shared<streaming::tls::context>(cert_path, key_path);

// stream_transformer: wrap each raw TCP stream in TLS before handing it to the transport
auto tls_transformer = [tls_ctx, scheduler](std::shared_ptr<streaming::stream> tcp_stm)
    -> CORO_TASK(std::optional<std::shared_ptr<streaming::stream>>)
{
    auto tls_stm = std::make_shared<streaming::tls::stream>(tcp_stm, tls_ctx);
    if (!CO_AWAIT tls_stm->handshake())
        CO_RETURN std::nullopt;  // reject connection if handshake fails
    CO_RETURN tls_stm;
};

auto listener = std::make_shared<streaming::listener>("calc_server",
    std::make_shared<streaming::tcp::acceptor>(endpoint),
    rpc::stream_transport::make_connection_callback<i_calculator, i_calculator>(
        [](const rpc::shared_ptr<i_calculator>&,
            const std::shared_ptr<rpc::service>& svc)
            -> CORO_TASK(rpc::service_connect_result<i_calculator>)
        {
            // Welcome you are in RPC land!
            CO_RETURN rpc::service_connect_result<i_calculator>{
                rpc::error::OK(),
                rpc::shared_ptr<i_calculator>(new my_calculator_impl(svc))};
        }),
    std::move(tls_transformer));

listener->start_listening(service);

Client — connect via TCP, perform TLS handshake, call the remote calculator:

#include "generated/calculator/calculator.h"
#include <streaming/tcp/stream.h>
#include <streaming/tls/stream.h>
#include <transports/streaming/transport.h>

using namespace calculator::v1;

auto client_service = std::make_shared<rpc::root_service>("calc_client", client_zone, scheduler);

// 1. Establish TCP connection
coro::net::tcp::client tcp_client(scheduler, endpoint);
CO_AWAIT tcp_client.connect(std::chrono::milliseconds{5000});
auto tcp_stm = std::make_shared<streaming::tcp::stream>(std::move(tcp_client), scheduler);

// 2. Wrap in TLS
auto tls_ctx = std::make_shared<streaming::tls::client_context>(/*verify_peer=*/true);
auto tls_stm = std::make_shared<streaming::tls::stream>(tcp_stm, tls_ctx);
CO_AWAIT tls_stm->client_handshake();

// 3. Create transport and connect to the remote zone
auto transport = rpc::stream_transport::make_client("calc_client", client_service, tls_stm);

rpc::shared_ptr<i_calculator> input_iface;
auto connect_result = CO_AWAIT client_service->connect_to_zone<i_calculator, i_calculator>(
    "calc_server", transport, input_iface);

if (connect_result.error_code != rpc::error::OK())
{
    // handle connection failure
}
auto calc = connect_result.output_interface;

// 4. Make RPC call
int result;
auto error = CO_AWAIT calc->add(5, 3, result);
std::cout << "5 + 3 = " << result << std::endl;  // Output: 5 + 3 = 8

For a complete working example see demos/stream_composition/src/tcp_spsc_tls_demo.cpp.


Supported Transports

Transport                                                       Description                                                          Requirements
Local                                                           In-process parent-child communication                                None
DLL (rpc::dynamic_library)                                      In-process DLL-loaded child zone in blocking builds                  Shared library payload
DLL (rpc::libcoro_dynamic_library)                              In-process DLL-loaded child zone in coroutine builds                 CANOPY_BUILD_COROUTINE=ON
IPC (rpc::ipc_transport)                                        Child-process transport hosting a direct stream_transport service    CANOPY_BUILD_COROUTINE=ON
IPC + DLL (rpc::ipc_transport + rpc::libcoro_spsc_dynamic_dll)  Child-process transport hosting a DLL-backed zone over SPSC streams  CANOPY_BUILD_COROUTINE=ON
TCP                                                             Network communication between machines                               Coroutines
SPSC                                                            Single-producer single-consumer queues                               Coroutines
SGX Enclave                                                     Secure enclave communication                                         SGX SDK
Custom                                                          User-defined transport implementations                               Custom implementation

See transport documentation for details, especially Dynamic Library and IPC Child Transports and Hierarchical Transport Pattern.

Implementation note:

  • the table above describes Canopy transport concepts and the primary C++ implementation
  • the experimental Rust implementation currently supports only:
    • local transport
    • dynamic-library transport
    • blocking runtime mode
    • Protocol Buffers
  • the JavaScript implementation is a reduced-trust generated client/transport layer for WebSocket scenarios, not a full transport/runtime matrix equivalent to C++

Requirements

Supported Platforms

  • Windows: Visual Studio 2019+
  • Linux: Ubuntu 18.04+, CentOS 8+
  • Embedded: Any platform with C++17 support

Compilers

  • Clang: 10.0+ (LLVM 21 recommended for full clang-tidy support)
  • GCC: 9.4+
  • MSVC: Visual Studio 2019+

Dependencies

External dependencies are managed as Git submodules and are fetched automatically when required:

  • YAS: Serialization framework
  • libcoro: Coroutine support (when CANOPY_BUILD_COROUTINE=ON)
  • protobuf: Protocol Buffers
  • idlparser: IDL parser

Project Structure

canopy/
├── rust/                   # Experimental Rust implementation and migration docs
│   ├── rpc/                # Rust RPC runtime
│   ├── transports/         # Rust local and dynamic-library transports
│   ├── tests/              # Rust interop and probe tests
│   └── *.md                # Port plan, progress, completed work, retrospectives
├── c++/                    # C++ source code
│   ├── rpc/                # Core RPC library
│   ├── transports/         # Transport implementations (local, tcp, spsc, sgx)
│   ├── tests/              # Test suite
│   ├── demos/              # Example applications
│   ├── telemetry/          # Telemetry and logging
│   ├── streaming/          # Coroutine streaming stack
│   ├── subcomponents/      # Network config, SPSC queue, HTTP server, etc.
│   ├── benchmarking/       # Benchmark targets
│   └── submodules/         # C++ third-party dependencies
├── generator/              # IDL code generator
├── interfaces/             # Shared IDL interface definitions
├── c_abi/                  # Language-neutral ABI specifications
├── cmake/                  # CMake build configuration modules
│   ├── Canopy.cmake        # Main build configuration
│   ├── Linux.cmake         # Linux-specific settings
│   ├── Windows.cmake       # Windows-specific settings
│   ├── SGX.cmake           # SGX enclave support
│   └── CanopyGenerate.cmake # IDL code generation
├── documents/              # Comprehensive documentation
├── submodules/             # Core dependencies (idlparser, protobuf)
└── CMakeLists.txt          # Build configuration

Development Setup

Linux Installation (Fedora 43+)

Install system dependencies:

sudo dnf install gcc gcc-c++ clang clang-tools-extra openssl-devel wget make perl-core zlib-devel ninja-build nodejs gdb python3-pip liburing-devel
pip install --user cmakelang

clang-tools-extra includes clang-tidy and clang-format. The Fedora 43 repos ship LLVM 21, which supports all checks used in this project including modernize-use-designated-initializers.

Install CMake 4.x or later (the version in the Fedora repos may be too old):

# Download and install the CMake 4.2.3 prebuilt binary
wget https://github.com/Kitware/CMake/releases/download/v4.2.3/cmake-4.2.3-linux-x86_64.tar.gz
tar -zxf cmake-4.2.3-linux-x86_64.tar.gz
sudo cp -r cmake-4.2.3-linux-x86_64/* /usr/local/

Code Formatting

This project uses cmake-format for CMake files and clang-format for C++ files (both installed above).

VSCode Setup:

  1. Open the project in VSCode
  2. Install recommended extensions when prompted (or manually install cheshirekow.cmake-format)
  3. The workspace settings will automatically use .cmake-format.yaml for formatting
  4. Format-on-save is enabled by default

Manual formatting:

# Check CMake formatting
git ls-files -- \*.cmake \*CMakeLists.txt | xargs cmake-format --check

# Apply CMake formatting
git ls-files -- \*.cmake \*CMakeLists.txt | xargs cmake-format -i

# Apply C++ formatting
clang-format -i <file>

Contributing

Canopy is actively maintained, and contributions are welcome in areas such as:

  • Performance optimizations
  • New transport implementations
  • New serialization formats
  • Platform ports
  • New remote reflection mechanism
  • Documentation improvements
  • Alternative language support

License

Copyright (c) 2026 Edward Boggis-Rolfe. All rights reserved.

See LICENSE for details.


Acknowledgments

SHA3 Implementation: Credit to brainhub/SHA3IUF


For technical questions and detailed API documentation, see the documents directory.

About

An RPC library that is agnostic to serialization format, transport, and coroutine library, with remote RAII object lifetime management. A major rewrite of https://github.com/edwardbr/rpc
