Conversation
…coordinator

This PR brings Layer 0 checkpoint and replica sync features from feat/file-system.

## Client SDK
- checkpoint.rs: Multi-provider checkpoint coordination with consensus
- checkpoint_persistence.rs: State persistence with backup rotation
- event_subscription.rs: Real-time blockchain event monitoring
- Integration tests for the checkpoint protocol

## Provider Node
- challenge_responder.rs: Automated challenge detection and response
- checkpoint_coordinator.rs: Provider-initiated checkpoint submission
- replica_sync_coordinator.rs: Autonomous replica synchronization
- New API endpoints: /checkpoint/*, /replica/*

## Pallet
- Provider-initiated checkpoint extrinsic
- Historical roots tracking for replica sync

## Primitives
- CheckpointProposal type for multi-provider signing
- CommitmentPayload enhancements

## Documentation
- CHECKPOINT_PROTOCOL.md: Complete protocol design
- EXECUTION_FLOWS.md: Sequence diagrams
- provider-initiated-checkpoints.md: Design rationale
Add core primitives and registry pallet for the Layer 1 file system built on top of Layer 0 storage, following the three-layered architecture design.

Changes:
- Create file-system-primitives crate with protobuf schemas
  - DirectoryNode: stores directory structure with child references
  - FileManifest: tracks file chunks and metadata
  - DriveInfo: on-chain drive metadata (owner, bucket, root CID)
  - Helper functions for CID computation and serialization
- Implement pallet-drive-registry for on-chain drive management
  - Multi-drive support: users can create multiple drives per account
  - Extrinsics: create_drive, update_root_cid, delete_drive, update_drive_name
  - Storage: Drives (DriveId → DriveInfo), UserDrives (Account → Vec<DriveId>)
  - Events: DriveCreated, RootCIDUpdated, DriveDeleted, DriveNameUpdated
  - 13 comprehensive tests, all passing

Architecture:
- Layer 1 (On-Chain): Registry stores the DriveId → root CID mapping
- Layer 0 (Off-Chain): Metadata blobs stored in buckets as protobuf
- DAG traversal: root CID → DirectoryNode → child CIDs → files/dirs
- Immutable versioning: each root CID is a snapshot of drive state

Key design decisions:
- Names stored in the parent node (optimal for renames)
- Multiple drives per account (flexible)
- BoundedVec for names (MaxEncodedLen compliance)
- Auto-incrementing DriveId counter

Tests: all passing (5 primitive tests + 13 pallet tests)
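The DAG traversal above can be sketched with deliberately simplified types — CIDs as strings and the Layer 0 bucket as a `HashMap` standing in for protobuf-encoded `DirectoryNode`/`FileManifest` blobs addressed by blake2-256 CIDs. Everything here is illustrative, not the crate's actual API.

```rust
use std::collections::HashMap;

// Hypothetical, simplified node type for illustration only.
enum Node {
    Dir { children: Vec<(String, &'static str)> },
    File { size: u64 },
}

// Depth-first walk: root CID -> DirectoryNode -> child CIDs -> files/dirs.
fn traverse(store: &HashMap<&'static str, Node>, cid: &str, path: &str, out: &mut Vec<String>) {
    match store.get(cid) {
        Some(Node::Dir { children }) => {
            for (name, child) in children {
                traverse(store, child, &format!("{path}/{name}"), out);
            }
        }
        Some(Node::File { size }) => out.push(format!("{path} ({size} bytes)")),
        None => out.push(format!("{path} (missing blob)")),
    }
}

// Build a tiny sample drive and return its file listing.
fn demo_listing() -> Vec<String> {
    let mut store = HashMap::new();
    store.insert(
        "root",
        Node::Dir { children: vec![("docs".to_string(), "d1"), ("a.txt".to_string(), "f1")] },
    );
    store.insert("d1", Node::Dir { children: vec![("b.txt".to_string(), "f2")] });
    store.insert("f1", Node::File { size: 12 });
    store.insert("f2", Node::File { size: 34 });
    let mut out = Vec::new();
    traverse(&store, "root", "", &mut out);
    out
}

fn main() {
    for line in demo_listing() {
        println!("{line}");
    }
}
```

Because each node is addressed by the hash of its content, updating a file changes its CID, which changes the parent directory's CID, and so on up to the root — which is why each root CID is an immutable snapshot.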
- Add file-system-primitives crate with protobuf schemas
- DirectoryNode: Protobuf-serialized directory structure
- FileManifest: File metadata with chunk references
- DriveInfo: On-chain drive metadata (owner, bucket, root CID)
- CID computation using blake2-256
- Add pallet-drive-registry for drive management
- create_drive: Register new drive with bucket and root CID
- update_root_cid: Update drive after file system changes
- delete_drive: Remove drive
- update_drive_name: Rename drive
- Multi-drive per account support
- UserDrives storage for tracking user's drives
- Organize in storage-interfaces/file-system/ structure
- Separates Layer 0 (storage primitives) from Layer 1 (interfaces)
- Clear hierarchy: storage-interfaces/file-system/{primitives,pallet-registry}
- Comprehensive README with architecture, data flow, and examples
Runtime Integration:
- Add pallet-drive-registry and file-system-primitives to the runtime
- Configure DriveRegistry with MaxDriveNameLength=128, MaxDrivesPerUser=100
- Runtime builds successfully (non-WASM)

File System Client SDK:
- Implement FileSystemClient with a high-level API
- Drive operations: create_drive, get_root_cid
- File operations: upload_file, download_file
- Directory operations: create_directory, list_directory
- Automatic DAG traversal and ancestor updates
- Path resolution and CID computation
- Chunk-based file uploads (256 KiB chunks)

Primitives Improvements:
- Add extern crate std for protobuf compatibility
- Enable std features for the blake2 and hex dependencies
- Fix protobuf code generation in std mode

Examples:
- basic_usage.rs: Demonstrates the primitives API with working code
- client_sdk_demo.rs: Shows intended FileSystemClient usage
- pallet_interaction.rs: Documents on-chain pallet operations

All examples are documented, and basic_usage runs successfully.
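The chunked upload path can be sketched in a few lines. Only the 256 KiB chunk size is taken from the description above; the surrounding API is illustrative.

```rust
// 256 KiB, matching the SDK's chunk size described above.
const CHUNK_SIZE: usize = 256 * 1024;

/// Split file bytes into fixed-size chunks; the last chunk may be shorter.
/// Each chunk would then be uploaded as a Layer 0 blob and referenced
/// from the file's manifest.
fn chunk_file(data: &[u8]) -> Vec<&[u8]> {
    data.chunks(CHUNK_SIZE).collect()
}

fn main() {
    let data = vec![0u8; CHUNK_SIZE * 2 + 100]; // 512 KiB + 100 bytes
    let chunks = chunk_file(&data);
    println!("{} chunks, last is {} bytes", chunks.len(), chunks.last().unwrap().len());
    // -> 3 chunks, last is 100 bytes
}
```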
…docs

Add user-configurable drive creation parameters:
- Optional min_providers parameter (auto-determined based on storage period)
- Checkpoint frequency control (immediate, batched, manual)
- Automatic bucket creation and provider selection
- Payment distribution across providers

Layer 0 integration:
- Create buckets internally from Layer 1
- Query available providers by capacity
- Request primary and replica agreements automatically
- Handle provider selection and agreement distribution

Documentation:
- Complete File System Interface documentation (4 guides)
- User Guide with examples and troubleshooting
- Admin Guide with monitoring and system management
- API Reference with all extrinsics and SDK methods
- Architecture overview and capabilities comparison

Technical improvements:
- Resolve DecodeWithMemTracking codec compatibility issues
- Update primitives to use the workspace codec version
- Refactor commit strategy to use primitive parameters
- Add Layer 0 internal helper functions for inter-pallet calls

All tests passing (19/19 pallet tests)
Replace the two-parameter workaround (commit_immediately: bool, commit_interval: Option<u32>) with a proper CommitStrategy enum type.

Root cause: the explicit DecodeWithMemTracking derive was missing.

Changes:
- Add DecodeWithMemTracking to the CommitStrategy enum derives
- Update the create_drive extrinsic to accept CommitStrategy directly
- Update the client SDK to pass the CommitStrategy enum
- Update all tests to use proper enum values
- Update API documentation with the correct signature and examples

Benefits:
- Type-safe: invalid parameter combinations cannot be expressed
- Ergonomic: clear, self-documenting API
- Consistent: same type across pallet and client SDK
- Idiomatic: follows Substrate/FRAME patterns

All tests passing (19/19)
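A plain-Rust sketch of the type-safety argument — variant names follow the immediate/batched/manual modes mentioned in this PR, but the exact shape is an assumption (the real type also derives the SCALE codec traits, including DecodeWithMemTracking):

```rust
// Hypothetical shape of the CommitStrategy enum, for illustration.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum CommitStrategy {
    Immediate,
    Batched { interval_blocks: u32 },
    Manual,
}

// With the enum, the contradictory states of the old two-parameter API
// (e.g. commit_immediately = true together with Some(interval)) simply
// cannot be constructed.
fn checkpoint_due(strategy: CommitStrategy, blocks_since_last: u32) -> bool {
    match strategy {
        CommitStrategy::Immediate => true,
        CommitStrategy::Batched { interval_blocks } => blocks_since_last >= interval_blocks,
        CommitStrategy::Manual => false,
    }
}

fn main() {
    assert!(checkpoint_due(CommitStrategy::Immediate, 0));
    assert!(!checkpoint_due(CommitStrategy::Batched { interval_blocks: 10 }, 5));
    println!("ok");
}
```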
Rewrite the MMR implementation with correct position arithmetic:
- push() uses leaf_count.trailing_zeros() for the merge count
- peaks() walks the set bits of leaf_count from MSB to LSB
- leaf_index_to_pos(k) = 2*k - popcount(k) (O(1))
- proof_with_path() uses locate_leaf() for the peak subtree and bit-based paths

Fix proof generation in the provider node:
- Storage uses the real Mmr struct for MMR operations
- Balanced Merkle proofs with power-of-2 padding for verify_merkle_proof
- HTTP endpoints return full proof data with path bits

Fix the client SDK extrinsic builder:
- Replace the broken respond_challenge with respond_to_challenge_proof
- Build the correct ChallengeResponse::Proof variant via a subxt dynamic tx

Add a challenge watcher binary:
- Subscribes to finalized blocks for ChallengeCreated events
- Queries on-chain Challenges storage for mmr_root/leaf/chunk indices
- Fetches proofs from the provider HTTP API and submits the response extrinsic
- Handles nested AccountId32 composite encoding from subxt
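The position arithmetic listed above is self-contained bit manipulation, so it can be sketched without the rest of the MMR (hashing and peak bagging omitted). Positions are 0-indexed and leaf_count is the number of leaves pushed so far.

```rust
/// Position of leaf k in the MMR: each earlier leaf contributes itself
/// plus the parents created when it completed a subtree, which works out
/// to 2*k - popcount(k). O(1), no tree walk needed.
fn leaf_index_to_pos(k: u64) -> u64 {
    2 * k - u64::from(k.count_ones())
}

/// Number of merges performed when the n-th leaf is appended
/// (n = leaf_count after the push): the trailing zeros of n.
fn merges_on_push(n: u64) -> u32 {
    n.trailing_zeros()
}

/// Total node count of an MMR with n leaves. The peaks correspond to the
/// set bits of n (walked MSB to LSB by peaks()), each a perfect subtree.
fn mmr_size(n: u64) -> u64 {
    2 * n - u64::from(n.count_ones())
}

fn main() {
    // Leaves 0..5 land at positions 0, 1, 3, 4, 7, 8.
    let positions: Vec<u64> = (0..6).map(leaf_index_to_pos).collect();
    println!("{positions:?}"); // -> [0, 1, 3, 4, 7, 8]
    // Appending the 4th leaf merges twice (it completes a 4-leaf subtree).
    println!("merges on push #4: {}", merges_on_push(4)); // -> 2
    // 7 leaves -> 11 nodes, in peaks of 4 + 2 + 1 leaves (the set bits of 7).
    println!("size(7) = {}", mmr_size(7)); // -> 11
}
```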
The challenge_responder had TODOs and used a local MmrProof type without path bits. Updated to use storage_primitives::MmrProof and MerkleProof from the working storage layer, matching the on-chain verification logic.
Add DefaultCheckpointInterval, DefaultCheckpointGrace, CheckpointReward, and CheckpointMissPenalty to the runtime's pallet_storage_provider config to match new checkpoint protocol requirements from the pallet.
…ht-reclaim

Migrate from the deprecated cumulus_primitives_storage_weight_reclaim to cumulus_pallet_weight_reclaim, which wraps the full transaction extension pipeline for accurate proof-size reclaim.
Use the frame_system::Config bound syntax instead, as recommended by polkadot-sdk#7229.
Implements comprehensive drive cleanup with proper Layer 0 bucket management.

Layer 0 (pallet-storage-provider):
- Add cleanup_bucket_internal() for complete bucket cleanup
  - Ends all agreements with prorated refunds
  - Pays providers for time served
  - Removes the bucket from storage
  - Emits a BucketDeleted event

Layer 1 (pallet-drive-registry):
- Add clear_drive() extrinsic to wipe drive contents
  - Resets the root CID to zero
  - Keeps the drive structure and agreements intact
  - No refunds (storage continues)
- Update delete_drive() to properly clean up buckets
  - Calls cleanup_bucket_internal() in Layer 0
  - Provides prorated refunds to the drive owner
  - Removes the bucket-to-drive mapping
  - Emits a DriveDeleted event with the refund amount

Tests:
- Add clear_drive tests (ownership, multiple clears)
- Update delete_drive tests for the new behavior
- Add an integration test for the Layer 0 dependency

Documentation:
- Update API_REFERENCE.md with clear_drive and the updated delete_drive
- Add DriveCleared and update DriveDeleted events
- Add BucketCleanupFailed error documentation
- Update USER_GUIDE.md with a clear-vs-delete comparison

Key improvements:
- Drive owners now receive refunds when deleting drives
- Clear distinction between clearing (wipe) and deleting (remove)
- Proper cleanup of Layer 0 resources
- Prorated refunds based on remaining storage time
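The prorated-refund arithmetic can be illustrated as follows; the function and parameter names are assumptions, not the pallet's actual API. u128 matches typical FRAME balance widths and avoids intermediate overflow.

```rust
/// Split the escrowed agreement balance between the provider (paid for
/// blocks already served) and the drive owner (refunded the remainder).
fn settle_agreement(escrow: u128, elapsed_blocks: u128, total_blocks: u128) -> (u128, u128) {
    assert!(total_blocks > 0);
    let served = elapsed_blocks.min(total_blocks);
    let provider_pay = escrow * served / total_blocks;
    // Refund is the remainder, so rounding never loses funds.
    let owner_refund = escrow - provider_pay;
    (provider_pay, owner_refund)
}

fn main() {
    // A quarter of the agreement elapsed: provider keeps 25%, owner gets 75%.
    let (pay, refund) = settle_agreement(1_000, 250, 1_000);
    println!("provider={pay}, refund={refund}"); // -> provider=250, refund=750
}
```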
Implement real on-chain integration for the Layer 1 file system client using subxt for trustless storage operations. This replaces placeholder methods with actual blockchain transactions and state queries.

Core Changes:
- Add subxt and subxt-signer dependencies to file-system-client
- Create a substrate.rs module with SubstrateClient for blockchain interaction
- Implement dynamic extrinsic construction for the DriveRegistry pallet
- Add event extraction to get drive IDs from blockchain responses
- Implement storage queries for drive metadata using manual key construction
- Update FileSystemClient to use real blockchain calls instead of placeholders

API Changes:
- Change the constructor to an async new() that connects to the blockchain
- Add with_dev_signer() for testing with development accounts
- Add with_signer() for production keypair configuration
- Remove placeholder blockchain methods; use real subxt calls

Examples & Documentation:
- Create a basic_usage.rs example demonstrating the complete workflow
- Add a comprehensive README.md for the file-system-client package
- Create EXAMPLE_WALKTHROUGH.md with a step-by-step guide
- Update API_REFERENCE.md with the new constructor and signer methods
- Update USER_GUIDE.md with blockchain integration instructions
- Update the filesystems README with correct example paths
- Update docs/README.md with the example walkthrough link
- Update the Layer 0 client README with a Layer 1 comparison

Technical Details:
- Use the subxt dynamic API for runtime-agnostic transactions
- Manual storage key construction with twox_128 and blake2_128 hashing
- Event extraction from finalized blocks to get transaction results
- Box::pin() pattern for recursive async functions
- Proper error mapping from subxt errors to FsClientError
Add comprehensive just commands for testing and running the Layer 1 File System Interface, along with quick-start documentation.

Just Commands Added:
- fs-integration-test: Full integration test (starts everything)
- fs-demo: Quick demo (assumes infrastructure is running)
- fs-example: Run the basic_usage.rs example
- fs-test: Run unit tests
- fs-test-verbose: Run tests with logging
- fs-test-all: Test all file system components
- fs-build: Build file system components only
- fs-clean: Clean file system artifacts
- fs-docs: Show documentation links

Documentation Added:
- FILE_SYSTEM_QUICKSTART.md: Complete quick-start guide
  - One-command integration test
  - Manual workflow steps
  - Expected output examples
  - Troubleshooting guide
  - Command reference table

Documentation Updates:
- README.md: Add a file system section and commands
- CLAUDE.md: Add file system commands and architecture
  - Updated the directory structure to show Layer 1 components
  - Added Layer 1 components to the key components section

Benefits:
- Single command to test the entire file system: just fs-integration-test
- Automatic infrastructure startup and verification
- Clear documentation for new users
- Easy integration into CI/CD workflows
The file-system-primitives crate was causing WASM build failures because prost (protobuf) types were included unconditionally, but prost requires std and is incompatible with the WASM runtime build.

Changes:
- Create SCALE-encoded types (DirectoryEntry, DirectoryNode, FileManifest, FileChunk, EntryType) that work in no_std environments
- Make the prost/proto module std-only via #[cfg(feature = "std")]
- Add conversion traits between the SCALE and proto types for std builds
- Update file-system-client to use the new SCALE types API
- Update examples to use the new API (name_str(), to_scale_bytes(), etc.)
- Make prost, prost-types, and thiserror optional dependencies

This fixes the "duplicate lang item in crate core" error that occurred when building the runtime with Rust 1.88.0.
- Fix an integer overflow in fetch_blob when passing u64::MAX as the length to the storage client's read(), which caused the provider's chunk calculation to overflow and return empty data
- Add genesis-patch.json with the correct parachainId (4000) to fix parachain block production
- Create a comprehensive ARCHITECTURE.md covering encoding, security, encryption, and blockchain integration details
- Update USER_GUIDE.md with security considerations and an encryption guide
- Update ADMIN_GUIDE.md with a technical reference for debugging
- Update README.md and CLAUDE.md with links to the new architecture doc
- Add an encoding verification test in primitives
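The overflow fix can be sketched as clamping before chunk arithmetic: a caller may pass u64::MAX as the read length to mean "to the end of the blob", and without clamping, offset + len wraps and yields an empty chunk range. The constant and names here are illustrative, not the provider's actual code.

```rust
const CHUNK_SIZE: u64 = 256 * 1024;

/// Return the half-open chunk index range [first, last) covering the
/// requested byte range, clamped to the blob size. saturating_add
/// prevents offset + u64::MAX from wrapping.
fn chunk_range(blob_size: u64, offset: u64, len: u64) -> (u64, u64) {
    let start = offset.min(blob_size);
    let end = offset.saturating_add(len).min(blob_size);
    (start / CHUNK_SIZE, end.div_ceil(CHUNK_SIZE))
}

fn main() {
    // "Read everything" no longer overflows into an empty range.
    println!("{:?}", chunk_range(600_000, 0, u64::MAX)); // -> (0, 3)
}
```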
- Create EXECUTION_FLOWS.md with Mermaid sequence diagrams for:
  - Provider registration and settings
  - Bucket creation
  - Storage agreement flow (request + accept)
  - Data upload flow (off-chain)
  - Checkpoint/commitment flow with signature verification
  - Data read flow with proof verification
  - Challenge flow and automatic slashing
  - Layer 1 drive operations
- Explain why checkpoints require provider signatures:
  - Non-repudiable evidence of the storage commitment
  - Enables accountability through the challenge mechanism
  - Bitfield tracking which providers signed
- Add links from the CLAUDE.md quick-links section
Design document for the Checkpoint Manager, which abstracts multi-provider signature collection away from end users:
- Provider Discovery: Auto-discover endpoints from on-chain state
- Commitment Collection: Parallel queries with timeout/retry
- Consensus Verification: Majority-based agreement checking
- Signature Aggregation: Collect and verify provider signatures
- On-Chain Submission: Automatic, based on CommitStrategy
- Conflict Resolution: Handle provider disagreements gracefully

Key features:
- Integrates with CommitStrategy (Immediate/Batched/Manual)
- Background loop for batched checkpoints
- Event/callback system for applications
- Exponential backoff retry logic
- Provider health tracking

The user API becomes simple: fs_client.upload_file(...) → checkpoint handled automatically.
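The exponential backoff mentioned above reduces to one line of bit arithmetic; the base delay and cap here are illustrative, not the SDK's actual defaults.

```rust
use std::time::Duration;

// Double the delay on each failed attempt, capped so a flaky provider is
// retried at a bounded rate rather than with ever-longer waits.
fn backoff_delay(attempt: u32) -> Duration {
    const BASE_MS: u64 = 500; // assumed base delay
    const CAP_MS: u64 = 30_000; // assumed cap
    let ms = BASE_MS.saturating_mul(1u64 << attempt.min(16)).min(CAP_MS);
    Duration::from_millis(ms)
}

fn main() {
    for attempt in 0..8 {
        println!("attempt {attempt}: wait {:?}", backoff_delay(attempt));
    }
}
```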
…nation

Add a CheckpointManager to the Layer 0 client SDK that handles:
- Parallel commitment collection from multiple providers
- Consensus verification (majority agreement on the MMR root)
- Automatic checkpoint submission on-chain
- Provider health checking and retry with exponential backoff

Integrate checkpoint functionality into FileSystemClient:
- submit_checkpoint() method for easy checkpointing
- submit_checkpoint_with_config() for custom settings
- get_bucket_id() helper for Layer 0 integration

The implementation is reusable by any Layer 1 interface.
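The majority-agreement check can be sketched with simplified types — roots as fixed byte arrays and providers as ids; the real CheckpointManager operates on signed commitments.

```rust
use std::collections::HashMap;

// Each provider reports the MMR root it would sign; a checkpoint only
// proceeds when a strict majority report the same root. Dissenters are
// candidates for challenges.
fn consensus_root(reports: &[(u32, [u8; 32])]) -> Option<[u8; 32]> {
    let mut counts: HashMap<[u8; 32], usize> = HashMap::new();
    for (_provider, root) in reports {
        *counts.entry(*root).or_insert(0) += 1;
    }
    // Strict majority: more than half of all reporting providers agree.
    counts
        .into_iter()
        .find(|(_, n)| *n * 2 > reports.len())
        .map(|(root, _)| root)
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    // Two of three providers agree on root `a`.
    assert_eq!(consensus_root(&[(0, a), (1, a), (2, b)]), Some(a));
    // A 1-1 split has no majority, so no checkpoint is submitted.
    assert_eq!(consensus_root(&[(0, a), (1, b)]), None);
    println!("ok");
}
```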
…gration
Phase 2 checkpoint protocol completion:
- Add BatchedCheckpointConfig and BatchedInterval for configuring
periodic checkpoint submissions
- Implement background checkpoint loop in CheckpointManager with:
- Configurable interval (blocks or duration)
- Dirty flag tracking for changed buckets
- Failure backoff and retry logic
- Pause/resume/stop controls via CheckpointLoopHandle
- Optional callback for checkpoint events
- Integrate automatic checkpoints with FileSystemClient:
- enable_auto_checkpoints() starts background loop for a drive
- disable_auto_checkpoints() stops the loop
- File operations (upload_file, create_directory) automatically
mark drives as dirty for checkpoint batching
- request_immediate_checkpoint() forces immediate submission
- Export new types: BatchedCheckpointConfig, BatchedInterval,
BucketCheckpointStatus, CheckpointCallback, CheckpointLoopCommand,
CheckpointLoopHandle
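The dirty-flag batching described above can be sketched as a small tracker: file operations mark their drive's bucket dirty, and each tick of the background loop checkpoints only the buckets that changed since the previous tick. Names here are illustrative, not the SDK's actual API.

```rust
use std::collections::HashSet;

struct DirtyTracker {
    dirty: HashSet<u64>, // bucket ids with un-checkpointed changes
}

impl DirtyTracker {
    fn new() -> Self {
        Self { dirty: HashSet::new() }
    }

    /// Called by file operations (upload, mkdir) after a successful write.
    fn mark_dirty(&mut self, bucket_id: u64) {
        self.dirty.insert(bucket_id);
    }

    /// Called by the checkpoint loop each tick: drain the dirty set so
    /// writes that land during checkpointing start a fresh batch.
    fn take_dirty(&mut self) -> Vec<u64> {
        let mut ids: Vec<u64> = self.dirty.drain().collect();
        ids.sort_unstable();
        ids
    }
}

fn main() {
    let mut tracker = DirtyTracker::new();
    tracker.mark_dirty(7);
    tracker.mark_dirty(3);
    tracker.mark_dirty(7); // repeated writes collapse into one checkpoint
    println!("{:?}", tracker.take_dirty()); // -> [3, 7]
    println!("{:?}", tracker.take_dirty()); // -> []
}
```

Collapsing repeated writes per interval is what makes the Batched strategy cheaper than Immediate: one checkpoint covers any number of file operations.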
…llenge

Phase 3 Features:
- Add CheckpointMetrics for tracking checkpoint operations (attempts, successes, failures, conflicts, timing)
- Add AutoChallengeConfig for configuring automatic challenge submission
- Add ChallengeRecommendation with evidence for divergent providers
- Implement analyze_challenge_candidates() for auto-challenge analysis
- Add conflict history tracking per bucket/provider

Additional Changes:
- Add 27 comprehensive unit tests for checkpoint features
- Fix compiler warnings across client crates (unused imports, variables)
- Add documentation for the checkpoint API in API_REFERENCE.md
- Update CHECKPOINT_PROTOCOL.md with implementation status
Auto-Challenge Execution:
- Add execute_auto_challenges() method to CheckpointManager
- Wire ChallengerClient.challenge_checkpoint() to auto-challenge analysis
- Add AutoChallengeResult, SubmittedChallenge, FailedChallenge types
- Add execute_all_auto_challenges() for batch processing

Provider Node Cleanup:
- Fix all compiler warnings (unused imports, variables, dead code)
- Remove the unused Storage import from api.rs
- Remove the unused blake2_256 import from mmr.rs
- Remove the unused H256 import from types.rs
- Prefix unused variables with an underscore
- Add #[allow(dead_code)] to response structs

Integration Tests:
- Add checkpoint_integration.rs with 21 tests
- Test provider health tracking and degradation
- Test conflict detection types
- Test metrics tracking
- Test batched checkpoint configuration
* feat: implement chain query functions for replica sync coordinator

  Add complete on-chain query implementations for autonomous replica sync:
  - query_replica_agreements(): Iterates chain storage to find all replica agreements where this provider is involved, parsing the SCALE-encoded StorageAgreement to extract sync_balance, sync_price, and min_sync_interval
  - query_bucket_snapshot(): Fetches the authoritative checkpoint state from on-chain Buckets storage, extracting mmr_root and leaf_count
  - query_primary_endpoints(): Looks up primary provider multiaddrs from chain state and converts them to HTTP endpoint URLs

  Also fixes:
  - Add a tracing-subscriber dev-dependency for client examples
  - Fix the doctest's missing bucket_id variable in checkpoint.rs
  - Fix an unused variable warning in replica_sync.rs

* feat: add FRAME benchmarking infrastructure for pallet weights

  Add a complete benchmarking setup for all 36 pallet extrinsics to enable accurate weight calculation for transaction fees and block limits.
  - Create weights.rs with the WeightInfo trait and SubstrateWeight implementation
  - Create benchmarking.rs with benchmark functions for all extrinsics
  - Update lib.rs to use the WeightInfo trait instead of hardcoded weights
  - Add comprehensive BENCHMARKING.md documentation
  - Link the documentation from CLAUDE.md and docs/README.md

* Nits

* Job

* fmt

* fix: resolve all benchmark test failures with proper state setup
  - Increase the provider stake to cover the declared max_capacity (MinStakePerByte * 1B)
  - Set min_providers=0 in setup_bucket so empty-signature checkpoints succeed
  - Fix challenge_checkpoint by setting the provider bit in the snapshot bitfield
  - Fix challenge_off_chain with real sr25519 keypair signing
  - Fix confirm_replica_sync/challenge_replica by creating a checkpoint first
  - Fix provider_checkpoint/report_missed_checkpoint by advancing blocks
  - Fix claim_expired_agreement by advancing past expiry + settlement
  - Fix claim_checkpoint_rewards by writing rewards directly to storage
  - Fix remove_slashed by setting the provider stake to zero
  - Fix respond_to_challenge by inserting the challenge directly in storage

* Fmt

* fix: resolve no_std compilation errors in benchmarking

  Use `Pair::from_seed` instead of `Pair::generate` (unavailable in no_std) and qualify the `alloc::vec!` macro for no_std compatibility.

* ci: remove build step and its prerequisites from setup job

  The setup job only needs to cache binaries. The Rust toolchain, system dependencies, rust cache, and free-disk-space steps were only needed for the build step, which has been removed.

* ci: restore free disk space step in setup job

* fix: use host function signing in benchmarks for no_std compatibility

  Pair::sign is unavailable in no_std (it requires randomness). Switch to the sp_io::crypto::sr25519_generate and sr25519_sign host functions, and register a MemoryKeystore in test externalities to support them.

* chore: use pre-built binary for provider node in justfile

  The build dependency already ensures the binary exists, so skip the redundant cargo invocation.

* feat: add genesis bucket creation for storage provider pallet

  Pre-create two buckets (bucket_id=0 and bucket_id=1) with Bob as admin at genesis, improving developer experience by having ready-to-use buckets on chain start. Also adds a formatting section to CLAUDE.md, fixes zepter feature propagation for sp-keystore, and tweaks zombienet log levels.

* refactor: move demo binaries from client to separate examples crate

  Separates the 5 demo/tool binaries (demo_setup, demo_upload, demo_checkpoint, demo_challenge, challenge_watcher) into a dedicated storage-examples crate so the client crate is purely a library. The justfile now prebuilds via build-examples and runs the binaries directly, avoiding redundant cargo run recompilation checks.

* chore: simplify justfile downloads and merge integration CI into single job

  Collapse ~100 lines of copy-pasted download recipes into a reusable _download helper with computed URL variables. Mark internal recipes (downloads, check, build-examples) as [private] to declutter just --list. Merge the two-job integration CI (setup + integration-tests) into a single job to avoid redundant checkouts and cache restores.

* test: assert two ChallengeDefended events in demo workflow

  Capture the challenge watcher output and verify exactly two challenges were defended (off-chain and on-chain checkpoint). The demo now fails if the watcher does not successfully respond to both challenges.

---------

Co-authored-by: Naren Mudigal <naren@parity.io>
Brings in benchmarking infrastructure, checkpoint protocol + CI, and benchmark fixes from dev. Resolves all merge conflicts preserving both dev's WeightInfo/GenesisConfig improvements and file-system Layer 1 features. Fixes deprecated sp-std usage and adds missing WeightInfo to drive registry mock.
* feat: replace bash demo with PAPI integration test

  Add a single-file TypeScript/JS integration test using polkadot-api (PAPI) that replaces the bash demo's orchestration of 5 Rust binaries. The new demo.mjs script:
  - Connects to the chain via PAPI with native event subscription
  - Performs setup, upload, 2 challenges + 2 responses synchronously
  - Asserts exactly 2 ChallengeDefended events
  - No background processes, no sleep-based synchronization, no log grepping

  The old bash demo is preserved as `just demo-legacy`. CI updated: added Node.js setup and a PAPI descriptor generation step.

* ci: run both legacy and PAPI demos in integration tests

* fix: replace fixed sleep with polling loop for challenge defense assertion

  The 30s sleep was not always enough for the watcher to respond to the second challenge in CI. Now polls the watcher log every 2s for up to 120s, proceeding as soon as both challenges are defended.

* refactor: rename demo.mjs to demo.js, add error handling and upload assertion
  - Rename to .js since package.json already has "type": "module"
  - Add a catch block with error logging and a non-zero exit code
  - Assert the uploaded data matches by downloading it back from the provider

* refactor: extract demo steps into named functions for readability

  Break the monolithic main() into registerProvider, createBucket, createAgreement, uploadData, challengeOffchain, submitCheckpoint, challengeCheckpoint, and respondToChallenge. Remove the unused ALICE_SS58 constant and waitFor helper.

* refactor: remove storage-examples crate and demo-legacy

  The PAPI-based demo (`just demo`) fully replaces the Rust binary orchestration. Remove the storage-examples crate (demo_setup, demo_upload, demo_challenge, demo_checkpoint, challenge_watcher) and all associated justfile recipes (demo-legacy, demo-setup, demo-upload, demo-challenge, start-watcher, build-examples).

* rename demo.js to full-flow.js

* chore: remove redundant examples/papi/.gitignore

  Already covered by node_modules/ in the root .gitignore.
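The poll-instead-of-sleep pattern adopted here is generic: re-check a condition at a fixed interval until it holds or a deadline passes, instead of sleeping a fixed 30s and hoping. A Rust sketch (the demo itself polls a watcher log every 2s for up to 120s; the intervals below are shortened for illustration):

```rust
use std::time::{Duration, Instant};

/// Re-evaluate `cond` every `interval` until it returns true or `timeout`
/// elapses. Returns whether the condition was ever met.
fn poll_until(mut cond: impl FnMut() -> bool, interval: Duration, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if cond() {
            return true; // proceed as soon as the condition is met
        }
        if Instant::now() >= deadline {
            return false; // give up and let the caller fail the test
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    let mut checks = 0;
    let ok = poll_until(
        || {
            checks += 1;
            checks >= 3 // stand-in for "both challenges defended"
        },
        Duration::from_millis(1),
        Duration::from_millis(500),
    );
    println!("ok={ok} after {checks} checks");
}
```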
* fix: reorder PORT as first parameter in start-provider recipe

  just uses positional parameters, so PORT must come first to allow `just start-provider 3001` without passing SEED and CHAIN_WS.

* fix: use top-level variable for PORT in start-provider

  just supports KEY=VALUE overrides for top-level variables, not recipe parameters. Usage: just PORT=3001 start-provider

* chore: make demo depend on papi-setup

  Removes the need to run papi-setup separately. Also removes the now-redundant CI step.

* chore: ignore generated .papi/ and package-lock.json

* Logs

* ci: remove redundant papi-setup step

  Already runs as a dependency of `just demo`.

---------
…#12)

Adds a new extrinsic that allows users to create buckets with storage requirements and have the system automatically match them to a suitable provider. This eliminates the manual request/accept dance for agreements.

Key changes:
- New extrinsic `create_bucket_with_storage(max_bytes, duration, max_price_per_byte)` with call_index 16
- Automatic provider selection based on: accepting_primary status, capacity, price, duration constraints, and stake requirements
- Selects the cheapest matching provider when multiple qualify
- Creates the bucket and agreement atomically in one transaction
- Added a NoMatchingProvider error for when no suitable provider exists
- Added comprehensive tests (7 new tests covering success and error cases)
- Added a benchmark for the new extrinsic

The matching algorithm bridges the gap between users (who work at the bucket level with small requests) and providers (who offer capacity pools). Providers pre-consent to agreements by setting accepting_primary: true.
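The matching algorithm can be sketched as a filter chain; the struct fields are assumptions derived from the listed criteria (accepting_primary, capacity, price, duration), and the stake check is omitted for brevity.

```rust
#[derive(Clone)]
struct Provider {
    id: u32,
    accepting_primary: bool,
    free_capacity: u64,
    price_per_byte: u128,
    max_duration: u32,
}

/// Return the id of the cheapest provider satisfying every constraint,
/// or None (mapped to NoMatchingProvider on-chain) when nobody qualifies.
fn select_provider(
    providers: &[Provider],
    max_bytes: u64,
    duration: u32,
    max_price_per_byte: u128,
) -> Option<u32> {
    providers
        .iter()
        .filter(|p| p.accepting_primary) // pre-consented to agreements
        .filter(|p| p.free_capacity >= max_bytes) // enough room in the pool
        .filter(|p| p.max_duration >= duration) // willing to store long enough
        .filter(|p| p.price_per_byte <= max_price_per_byte) // within budget
        .min_by_key(|p| p.price_per_byte) // cheapest match wins
        .map(|p| p.id)
}

fn main() {
    let providers = [
        Provider { id: 1, accepting_primary: true, free_capacity: 1 << 30, price_per_byte: 5, max_duration: 1000 },
        Provider { id: 2, accepting_primary: true, free_capacity: 1 << 30, price_per_byte: 3, max_duration: 1000 },
        Provider { id: 3, accepting_primary: false, free_capacity: 1 << 40, price_per_byte: 1, max_duration: 9999 },
    ];
    // Provider 3 is cheapest but not accepting; provider 2 wins on price.
    println!("{:?}", select_provider(&providers, 1 << 20, 500, 10)); // -> Some(2)
}
```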
- Fix formatting issues in pallet/src/benchmarking.rs and tests.rs
- Replace the deprecated sp_runtime::RuntimeDebug with Debug in file-system-primitives
… into feat/file-system

# Conflicts:
#	Cargo.lock
- Run cargo +nightly fmt --all to fix formatting issues
- Apply clippy auto-fixes across the codebase
- Fix uninlined_format_args warnings
- Fix useless_conversion warnings in the runtime
- Clean up code style issues
… into feat/file-system
- Fix manual_flatten in disk_storage.rs using iter.flatten()
- Add #[allow(clippy::type_complexity)] to CheckpointManager
- Convert chunk_data to a static method in storage_user.rs and lib.rs
- Fix manual_clamp using clamp() in verification.rs
… into feat/file-system
- Fix unused variables in discovery.rs and pallet/src/lib.rs
- Fix the dead_code warning for collect_chunks in storage.rs
- Add #[allow(clippy::too_many_arguments)] to functions with many params
- Fix unnecessary_filter_map by converting to map
- Fix if_same_then_else by combining conditions
- Fix field_reassign_with_default in tests
- Add #[allow(dead_code)] to unused test helpers in mock.rs
- Fix unused imports in pallet-registry
- Fix deprecated warnings in pallet-registry tests
- Add clippy::type_complexity and clippy::let_unit_value allows
Summary
This PR introduces the Layer 1 File System Interface - a high-level abstraction over Layer 0's raw blob storage that provides familiar file/folder semantics for Web3 storage.
What's New
- 🗂️ Drive Registry Pallet (`pallet-drive-registry`)
- 📦 File System Primitives (`file-system-primitives`)
- 🔧 File System Client SDK (`file-system-client`): `upload_file()`, `download_file()`, `create_directory()`
- ✅ Automated Checkpoint Protocol
Quick Start
Documentation
Test Results