Add RAM-buffered queue (BufferedQueue, RamRing, SharedRamRing)#123

Open
leftger wants to merge 5 commits into tweedegolf:master from leftger:feat/flash-queue

Conversation


@leftger leftger commented Mar 19, 2026

Summary

This adds a RAM-buffered wrapper around QueueStorage for cases where the producer generates items faster than NOR flash writes can absorb them.

  • RamRing<N> — compact fixed-capacity ring buffer for variable-length byte items (2-byte LE length prefix per item)
  • OverflowPolicy — Err or DiscardOldest when the ring is full
  • BufferedQueue<S, C, N> — wraps QueueStorage with a RAM ring; enqueue is synchronous and never touches flash, drain_one/drain_all commit items to flash asynchronously
  • SharedRamRing<N> — ISR-safe variant behind the optional embassy feature; wraps the ring in a CriticalSectionRawMutex and uses an Embassy Signal to wake a drain task

FIFO ordering is preserved across the RAM→flash boundary: pop/peek drain any pending RAM items to flash first, then read from flash.

Power-fail note: items still in the RAM ring at power loss are discarded; items that have been drained carry the usual sequential-storage guarantees.
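The 2-byte LE length-prefix layout from the first bullet can be sketched like this. This is a simplified host-side model, not the PR's actual RamRing: the `Ring` name, the `Result<(), ()>` error, and the byte-at-a-time copy are all illustrative; the real code also supports the DiscardOldest policy and peeking.

```rust
/// Fixed-capacity ring of variable-length byte items, each stored as a
/// 2-byte little-endian length followed by its payload (sketch only).
struct Ring<const N: usize> {
    buf: [u8; N],
    head: usize, // read position
    len: usize,  // bytes currently stored
}

impl<const N: usize> Ring<N> {
    fn new() -> Self {
        Self { buf: [0; N], head: 0, len: 0 }
    }

    /// Push one item; rejects it if prefix + payload don't fit
    /// (the `Err` overflow policy — existing items are untouched).
    fn push(&mut self, item: &[u8]) -> Result<(), ()> {
        let needed = 2 + item.len();
        if item.len() > u16::MAX as usize || self.len + needed > N {
            return Err(());
        }
        let prefix = (item.len() as u16).to_le_bytes();
        for &b in prefix.iter().chain(item) {
            let tail = (self.head + self.len) % N;
            self.buf[tail] = b;
            self.len += 1;
        }
        Ok(())
    }

    /// Pop the oldest item into `out`, returning its length.
    fn pop(&mut self, out: &mut [u8]) -> Option<usize> {
        if self.len == 0 {
            return None;
        }
        let (l0, l1) = (self.take(), self.take());
        let item_len = u16::from_le_bytes([l0, l1]) as usize;
        for slot in &mut out[..item_len] {
            *slot = self.take();
        }
        Some(item_len)
    }

    fn take(&mut self) -> u8 {
        let b = self.buf[self.head];
        self.head = (self.head + 1) % N;
        self.len -= 1;
        b
    }
}

fn main() {
    let mut ring: Ring<32> = Ring::new();
    ring.push(b"hello").unwrap();
    ring.push(b"hi").unwrap();
    let mut out = [0u8; 32];
    let n = ring.pop(&mut out).unwrap();
    assert_eq!(&out[..n], b"hello"); // FIFO: oldest item first
    let n = ring.pop(&mut out).unwrap();
    assert_eq!(&out[..n], b"hi");
    println!("ok");
}
```

The prefix makes each item self-delimiting, so enqueue stays a plain memory copy with no flash involvement.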

Dependency note

SharedRamRing pulls in embassy-sync (gated behind the embassy feature). The embassy-sync types are not part of the public API — the only public surface is SharedRamRing<N> itself. Happy to revisit the dependency (e.g. swap the mutex for direct critical_section::with calls and hand-roll a minimal signal) if you'd prefer to avoid it.

Introduces queue::buffer with:
- RamRing<N>: fixed-capacity ring buffer for variable-length items
- OverflowPolicy: Err or DiscardOldest on full ring
- BufferedQueue<S,C,N>: accepts sync enqueues into RAM, drains to flash async
- SharedRamRing<N>: ISR-safe variant using embassy-sync (optional embassy feature)

FIFO ordering is preserved across the RAM→flash boundary. Power-fail note:
items still in RAM at power loss are discarded; drained items carry the usual
sequential-storage guarantees.
leftger added 2 commits March 19, 2026 02:48
SharedRamRing no longer depends on embassy-sync. The blocking mutex is
replaced with critical_section::Mutex (borrow pattern), and the Signal
is replaced with a minimal WakeSignal built from an AtomicBool and a
stored core::task::Waker — no runtime dependency required.

The feature flag is renamed from `embassy` to `critical-section` to
reflect the actual dependency.
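The AtomicBool-plus-stored-Waker construction described in this commit can be sketched as below. This is a host-side illustration, not the PR's code: `WakeSignal`, `signal`, and `poll_wait` are assumed names, and std's `Mutex` stands in for the `critical_section::Mutex<RefCell<Option<Waker>>>` a no_std build would use.

```rust
use std::ptr;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Minimal wake signal: a flag plus a parked waker (sketch only).
struct WakeSignal {
    flagged: AtomicBool,
    waker: Mutex<Option<Waker>>,
}

impl WakeSignal {
    fn new() -> Self {
        Self { flagged: AtomicBool::new(false), waker: Mutex::new(None) }
    }

    /// Producer side (ISR-safe in the no_std version): raise the flag,
    /// then wake the drain task if one is parked.
    fn signal(&self) {
        self.flagged.store(true, Ordering::Release);
        if let Some(w) = self.waker.lock().unwrap().take() {
            w.wake();
        }
    }

    /// Drain-task side: consume the flag if set, otherwise park the waker.
    fn poll_wait(&self, cx: &mut Context<'_>) -> Poll<()> {
        if self.flagged.swap(false, Ordering::Acquire) {
            return Poll::Ready(());
        }
        *self.waker.lock().unwrap() = Some(cx.waker().clone());
        // Re-check: signal() may have run between the swap and the store.
        if self.flagged.swap(false, Ordering::Acquire) {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}

/// No-op waker, only to drive poll_wait in this host-side sketch.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

fn main() {
    let sig = WakeSignal::new();
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    assert_eq!(sig.poll_wait(&mut cx), Poll::Pending); // nothing signalled yet
    sig.signal();
    assert_eq!(sig.poll_wait(&mut cx), Poll::Ready(())); // flag consumed
    assert_eq!(sig.poll_wait(&mut cx), Poll::Pending); // and reset
    println!("ok");
}
```

The re-check after storing the waker closes the lost-wakeup race, which is the main subtlety a hand-rolled signal has to get right.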
@diondokter
Member

Oh we've got new clippy lints and it's rightfully pointing out I've been lazy 🙈
For now you can just put an allow attribute on them to fix it.

I'll improve that when the next breaking change comes around I guess

- Rename feature `embassy` to `shared-ram-ring` per maintainer request
- Run cargo fmt (import order, line-length wrapping)
- Add #[allow(clippy::result_unit_err)] to push/push_overwriting/enqueue
- Fix include!("mock_flash.rs") → include!("../mock_flash.rs") in doctest
  (path is now relative to src/queue/mod.rs, not src/queue.rs)
@leftger
Author

leftger commented Mar 19, 2026

Let me know if there's anything else I need to address. Happy to hopefully contribute to your awesome work!

Comment on lines +302 to +304
if !self.ram.is_empty() {
self.drain_all(data_buffer, allow_overwrite).await?;
}
Member

Why this? Not wrong per se, but we could also peek the storage, then if none, peek the oldest from ram

Comment on lines +285 to +287
if !self.ram.is_empty() {
self.drain_all(data_buffer, allow_overwrite).await?;
}
Member

Same here

Comment on lines +335 to +338
/// Mutable reference to the underlying [`QueueStorage`] for direct access.
pub fn storage(&mut self) -> &mut QueueStorage<S, C> {
&mut self.storage
}
Member

This allows people to do weird things that would alter the ordering. Is there a real reason we need this? It's kinda footgunny

Comment on lines +318 to +323
/// Byte length of the oldest item in the RAM ring, or `None` if the ring is empty.
///
/// Use this to size the `scratch` buffer passed to [`drain_one`][Self::drain_one].
pub fn oldest_ram_item_len(&self) -> Option<usize> {
self.ram.oldest_len()
}
Member

I see what this is trying to do. But is it actually gonna be used in practice? It can be helpful maybe if you have alloc available...

… oldest_ram_item_len

- pop/peek now read flash first (flash items are always older) and fall
  back to the oldest RAM item if flash is empty, avoiding unnecessary
  drain-all-to-flash on every read; allow_overwrite removed from both
  since they no longer write to flash
- Remove pub storage() accessor — it allowed callers to bypass ordering
  invariants (footgunny per reviewer)
- Remove oldest_ram_item_len() — not practically useful in no_std without
  alloc; callers can size scratch buffers statically
- Restructure pop_reads_flash_before_ram test to use QueueStorage
  directly instead of the removed storage() accessor
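The flash-first read order this commit describes can be modeled with plain VecDeques. This is a host-side sketch, not BufferedQueue's real API: the `Buffered` struct and its methods are illustrative stand-ins, and the VecDeques stand in for QueueStorage and RamRing.

```rust
use std::collections::VecDeque;

/// Sketch: flash items are always older than RAM items, so reads check
/// flash first and fall back to RAM — no drain-to-flash needed on pop.
struct Buffered {
    flash: VecDeque<Vec<u8>>, // stand-in for QueueStorage (older items)
    ram: VecDeque<Vec<u8>>,   // stand-in for RamRing (newer items)
}

impl Buffered {
    fn new() -> Self {
        Self { flash: VecDeque::new(), ram: VecDeque::new() }
    }

    /// Enqueue never touches flash: new items land in RAM.
    fn enqueue(&mut self, item: &[u8]) {
        self.ram.push_back(item.to_vec());
    }

    /// Drain moves the oldest RAM item into flash
    /// (normally done by a background drain task).
    fn drain_one(&mut self) -> bool {
        match self.ram.pop_front() {
            Some(item) => {
                self.flash.push_back(item);
                true
            }
            None => false,
        }
    }

    /// Flash first (always older), then the oldest RAM item.
    fn pop(&mut self) -> Option<Vec<u8>> {
        self.flash.pop_front().or_else(|| self.ram.pop_front())
    }
}

fn main() {
    let mut q = Buffered::new();
    q.enqueue(b"a");
    q.drain_one(); // "a" is now in flash
    q.enqueue(b"b"); // "b" stays in RAM
    assert_eq!(q.pop().unwrap(), b"a"); // flash item is older
    assert_eq!(q.pop().unwrap(), b"b"); // fall back to RAM
    assert!(q.pop().is_none());
    println!("ok");
}
```

Because pop no longer writes anything, the `allow_overwrite` parameter on pop/peek becomes meaningless, which is why the commit removes it.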