
perf: reduce allocations through memoization #3219

Merged
julienrbrt merged 6 commits into main from marko/perf
Mar 31, 2026
Conversation

@tac0turtle
Contributor

@tac0turtle tac0turtle commented Mar 31, 2026

Overview

This PR reuses hashes and memoizes the header hash in order to reduce pressure on the garbage collector.

Summary by CodeRabbit

  • Performance

    • Introduced memoized header hashing across components to reduce redundant hash computations and improve throughput.
    • Added a fast-path to avoid unnecessary updates when the same hash is already recorded for a height.
  • Reliability

    • Ensured memoized header hashes are cleared on copy, deserialization, and related transformations to prevent stale cached values.
    • Memoized hashes are now produced and reused in key sync and networking paths.
  • Tests

    • Added tests covering memoization, invalidation, cloning, and deserialization behavior.

@github-actions
Contributor

github-actions bot commented Mar 31, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed · Format: ⏩ skipped · Lint: ✅ passed · Breaking: ✅ passed · Updated (UTC): Mar 31, 2026, 1:30 PM

@claude
Contributor

claude bot commented Mar 31, 2026

Claude finished @tac0turtle's task in 2m 45s


Code Review: perf: reduce allocations through memoization

  • Gather context and read changed files
  • Review memoization core (types/hashing.go, types/header.go)
  • Review serialization/utils changes
  • Review call sites (executor, syncer, da_retriever, p2p_handler)
  • Review cache/store changes
  • Review tests
  • Post comprehensive review

Overall the PR is a solid performance improvement. The memoization strategy is well-reasoned and the invalidation paths (Clone, FromProto, ApplyLegacyDefaults) are correctly identified and patched. One critical concern around concurrent access to cachedHash needs to be addressed before merge.

Critical: Data Race on cachedHash

types/hashing.go:46-77

cachedHash has no synchronization. The CachedStore stores and returns the same *types.SignedHeader pointer to all callers (both GetHeader and GetBlockData do cs.headerCache.Add(height, header) then return the same pointer). Once a pointer is in the LRU, multiple goroutines receive it. If any of those goroutines then call MemoizeHash() or InvalidateHash(), there is an unsynchronized write while other goroutines may be reading cachedHash in Hash().

The executor.go call sites at lines 266, 362, and 922 call header.MemoizeHash() on pointers already returned from the CachedStore. Since CachedStore.GetHeader calls MemoizeHash() before caching, the field is already non-nil, so these calls are effectively read-only (the h.cachedHash != nil fast-path returns immediately). However, the Go memory model does not guarantee that a write performed by goroutine A is visible to goroutine B without a synchronization point; the race detector (go test -race) can flag this.

The fix is to add a sync.RWMutex to Header (or use sync/atomic with a pointer swap) to guard cachedHash reads and writes:

// In types/header.go
type Header struct {
    // ... existing fields ...
    mu         sync.RWMutex
    cachedHash Hash
}
// In types/hashing.go
func (h *Header) Hash() Hash {
    h.mu.RLock()
    cached := h.cachedHash
    h.mu.RUnlock()
    if cached != nil {
        return cached
    }
    return h.computeHash()
}

func (h *Header) MemoizeHash() Hash {
    h.mu.RLock()
    if h.cachedHash != nil {
        defer h.mu.RUnlock()
        return h.cachedHash
    }
    h.mu.RUnlock()

    hash := h.computeHash()
    if hash != nil {
        h.mu.Lock()
        h.cachedHash = hash
        h.mu.Unlock()
    }
    return hash
}

Note: adding a sync.RWMutex makes Header non-copyable (the Clone() method already returns by value and resets cachedHash, but the mutex copy would be zeroed — that's actually fine for a zero-value mutex). Alternatively, use atomic.Pointer[Hash] to avoid the mutex overhead entirely.



Major: Inconsistent hash warming in executor.go

executor.go:307-311

The lastBlockInfo warm-up uses h.Hash() (line 310) instead of h.MemoizeHash():

info := &lastBlockInfo{
    headerHash: h.Hash(),   // <- does not populate cachedHash
    dataHash:   d.Hash(),
}

Since h comes from store.GetBlockData (which, if the CachedStore is in use, already calls MemoizeHash()), this is functionally correct. But if the raw store is used without the cache layer, h.Hash() computes but doesn't cache. Prefer h.MemoizeHash() here to be consistent with all other call sites in this PR.



Minor: setSeen fast-path correctness

block/internal/cache/generic_cache.go:64-67

The fast-path is correct: when hashByHeight[height] already equals the incoming hash, marking existing (which equals hash) as seen and returning is logically equivalent to the normal path. No bug here. This is a nice optimization to avoid redundant map writes.


Design: cachedHash in value type is fragile

types/header.go:86-93

Embedding mutable cache state in a value type (Header is passed by value in HeaderFromContext, Clone, etc.) creates an ongoing maintenance burden — every copy/mutation path must remember to call InvalidateHash(). The existing fixes for Clone (line 274) and HeaderFromContext (line 29) show this is already being managed, but plain field assignment (e.g., header2 := *header1) will silently copy a stale hash.

This is a known trade-off and the PR correctly documents it in comments. It might be worth a follow-up to use atomic.Pointer or store the cache externally (keyed by header content hash), but that's a larger refactor.


Nitpick: MemoizeHash() return value discarded in p2p_handler.go

block/internal/syncing/p2p_handler.go:105

p2pHeader.MemoizeHash()  // return value discarded

Other call sites like da_retriever.go:315 use the return value directly:

headerHash := header.MemoizeHash().String()

Since p2p_handler.go doesn't need the hash immediately, discarding is fine. But a _ = assignment makes the intent explicit and suppresses any future linter complaints.


Positive notes

  • computeHash() correctly checks Legacy first, then falls back to slim hash — no double-marshal on legacy headers. Well done.
  • Clone() clearing cachedHash (line 274) and HeaderFromContext calling InvalidateHash() (line 29) are exactly right.
  • serialization.go calling h.InvalidateHash() after FromProto is the correct place to hook this.
  • Tests in hashing_test.go cover memoization, invalidation, and clone behavior. Coverage gap in executor.go is pre-existing.
  • The setSeen short-circuit in the cache is a clean addition.

@coderabbitai
Contributor

coderabbitai bot commented Mar 31, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ef2a4868-f932-4b7d-8ddf-7ab56b034901

📥 Commits

Reviewing files that changed from the base of the PR and between 192b483 and 8d2fef6.

📒 Files selected for processing (2)
  • types/hashing.go
  • types/header.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • types/header.go
  • types/hashing.go

📝 Walkthrough


Adds memoized header hashing (cachedHash) with MemoizeHash/InvalidateHash, updates call sites to memoize before caching or comparisons, ensures clones/deserialization clear the cached hash, and introduces a height→hash fast-path in the internal cache to avoid redundant overwrites.

Changes

  • Hash memoization core (types/hashing.go, types/header.go): Add the cachedHash field; implement MemoizeHash(), InvalidateHash(), and computeHash(); modify Hash() to reuse the cache; ensure clones and legacy defaults clear the cached hash.
  • Serialization & utils (types/serialization.go, types/utils.go): Invalidate the cached hash after FromProto() and when constructing derived headers (GetRandomNextHeader) to avoid leaking stale memoized values.
  • Execution & syncing call sites (block/internal/executing/executor.go, block/internal/syncing/syncer.go, block/internal/syncing/da_retriever.go, block/internal/syncing/p2p_handler.go): Memoize the header hash at comparison/caching boundaries (startup sync checks, replay verification, DA cache insertion, P2P event emission) to avoid repeated computation and to pass memoized hashes into cache/update paths.
  • Cache & store changes (block/internal/cache/generic_cache.go, pkg/store/cached_store.go): generic_cache.setSeen adds a fast-path: if hashByHeight[height] already equals the incoming hash, mark the existing hash seen and return early; the store layer memoizes headers on cache misses before inserting them.
  • Tests (types/hashing_test.go, types/serialization_test.go, types/utils_test.go): Add tests for memoization, invalidation after mutation/deserialization/derivation, clone behavior (cachedHash not copied), and deterministic hash checks for derived headers.


Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • yarikbratashchuk
  • tuxcanfly
  • gupadhyaya

Poem

🐰
I hid a hash beneath my paw,
Memoized so I hop less raw.
Clone, serialize — I wipe the slate,
Fresh hashes bloom, no stale debate.
A rabbit cheers for lighter compute.

🚥 Pre-merge checks: 1 passed, 1 warning, 1 inconclusive

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 36.36%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Description check (❓ Inconclusive): the description explains the purpose (reusing hashes and memoizing the header hash to reduce GC pressure) but lacks detail about scope, implementation approach, and testing. Resolution: expand the overview with more context about the changes across multiple files, the expected performance impact, and how the optimization was validated.

✅ Passed checks (1 passed)
  • Title check (✅ Passed): the title "perf: reduce allocations through memoization" clearly and concisely summarizes the main change.


@codecov

codecov bot commented Mar 31, 2026

Codecov Report

❌ Patch coverage is 58.13953% with 18 lines in your changes missing coverage. Please review.
✅ Project coverage is 61.43%. Comparing base (60c68ea) to head (8d2fef6).
⚠️ Report is 3 commits behind head on main.

Files with missing lines (patch coverage):
  • block/internal/executing/executor.go: 0.00% (9 lines missing)
  • types/hashing.go: 73.68% (2 missing, 3 partials)
  • block/internal/syncing/syncer.go: 0.00% (3 missing)
  • types/header.go: 66.66% (1 missing)
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3219      +/-   ##
==========================================
+ Coverage   61.41%   61.43%   +0.02%     
==========================================
  Files         120      120              
  Lines       12474    12504      +30     
==========================================
+ Hits         7661     7682      +21     
- Misses       3953     3960       +7     
- Partials      860      862       +2     
Flag: combined · Coverage: 61.43% <58.13%> (+0.02% ⬆️)

Flags with carried forward coverage won't be shown.


@tac0turtle tac0turtle marked this pull request as ready for review March 31, 2026 10:20
@tac0turtle tac0turtle requested a review from julienrbrt March 31, 2026 10:27
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
types/hashing.go (1)

75-88: ⚠️ Potential issue | 🟠 Major

Legacy headers are hashed twice on the cold path.

Line 76 computes HashSlim() unconditionally, and Lines 81-85 may then compute and return the legacy hash instead. For any header with populated Legacy, the first call now does two marshals and two SHA-256s before memoization even helps, which cuts directly against the allocation win this PR is chasing.

♻️ Suggested change
func (h *Header) computeHash() Hash {
-	slimHash, err := h.HashSlim()
-	if err != nil {
-		return nil
-	}
-
 	if h.Legacy != nil && !h.Legacy.IsZero() {
 		legacyHash, err := h.HashLegacy()
 		if err == nil {
 			return legacyHash
 		}
 	}
 
+	slimHash, err := h.HashSlim()
+	if err != nil {
+		return nil
+	}
 	return slimHash
}

As per coding guidelines, types/**/*.go should "Keep types lightweight and avoid unnecessary allocations for performance optimization".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/hashing.go` around lines 75 - 88, Header.computeHash currently always
calls HashSlim(), causing double marshals/SHA-256 for legacy headers; change the
control flow to check h.Legacy (h.Legacy != nil && !h.Legacy.IsZero()) first and
call HashLegacy() only in that case (returning legacyHash on success), and only
call HashSlim() if Legacy is nil/zero or HashLegacy() errors; update computeHash
to avoid precomputing slimHash so a header with Legacy populated only computes
one hash (use HashLegacy or HashSlim as appropriate) while preserving existing
error handling and memoization behavior.
🧹 Nitpick comments (2)
types/serialization_test.go (1)

359-378: Prefer table-driven structure for this new serialization-path test.

The assertions are good, but this should follow the types/**/*_test.go table-driven test convention for consistency with the suite.

As per coding guidelines types/**/*_test.go: "Use table-driven tests for all validation paths, serialization round-trips, and edge cases in test files".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/serialization_test.go` around lines 359 - 378, Refactor
TestHeaderFromProtoClearsCachedHash into a table-driven test: define a slice of
test cases (with fields like name, initial *Header, protoFrom *Header,
expectedHeight, expectedDataHash, expectCachedNil) then loop over cases with
t.Run(case.name, func(t *testing.T){ t.Parallel(); case.initial.MemoizeHash();
require.NotNil(t, case.initial.cachedHash); protoMsg :=
case.protoFrom.ToProto(); require.NoError(t, case.initial.FromProto(protoMsg));
assert.Equal(t, case.expectCachedNil, case.initial.cachedHash == nil);
assert.Equal(t, case.expectedHeight, case.initial.Height()); assert.Equal(t,
case.expectedDataHash, case.initial.DataHash) }); keep the existing checks
(MemoizeHash, FromProto, cachedHash nil, Height, DataHash) and reference the
existing symbols Header, BaseHeader, MemoizeHash, FromProto, ToProto, Height,
and DataHash when implementing the cases.
types/header.go (1)

86-88: The cache now lives inside a mutable value type.

Line 88 makes cachedHash part of every plain Header copy, and Line 270 already has to manually scrub it back out in Clone(). Plain assignment / pass-by-value copies will still carry the old hash unless each mutation path remembers to call InvalidateHash(). Please verify the remaining copy+mutate sites, or move memoization to the store/cache boundary instead of coupling it to Header value semantics.

As per coding guidelines, types/**/*.go should "Use value semantics for small types and deep copy when modification is needed to maintain immutability".

Also applies to: 262-270

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/header.go` around lines 86 - 88, The cachedHash field embeds mutable
memoization into the Header value type (cachedHash), causing copies to carry
stale hashes; remove cachedHash from the Header struct and move memoization out
to the store/cache boundary instead. Concretely: delete the cachedHash field
from Header, replace the in-struct MemoizeHash() / InvalidateHash()
implementations to consult an external package-level or store-scoped cache
(e.g., headerHashCache keyed by the header's canonical immutable fields or
serialized bytes), update Clone() to stop scrubbing cachedHash, and update all
callers that referenced Header.cachedHash or called
InvalidateHash()/MemoizeHash() to use the new cache API (look for references to
MemoizeHash, InvalidateHash, Clone, and cachedHash to modify). Ensure the new
cache uses safe keys (immutable header content) and provides equivalent
semantics for cache population and invalidation at the store boundary.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@block/internal/syncing/p2p_handler.go`:
- Around line 103-106: The call currently uses the embedded field selector
p2pHeader.SignedHeader.MemoizeHash(); change it to call the promoted method
directly as p2pHeader.MemoizeHash() to follow idiomatic Go and satisfy
staticcheck QF1008—update the invocation in the
block/internal/syncing/p2p_handler.go code where MemoizeHash is called on the
P2PSignedHeader instance (p2pHeader) so downstream callers still get the
memoized hash.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2e40f39f-abdc-4e81-be57-d85369501b1f

📥 Commits

Reviewing files that changed from the base of the PR and between 60c68ea and 872b560.

📒 Files selected for processing (13)
  • block/internal/cache/generic_cache.go
  • block/internal/executing/executor.go
  • block/internal/syncing/da_retriever.go
  • block/internal/syncing/p2p_handler.go
  • block/internal/syncing/syncer.go
  • pkg/store/cached_store.go
  • types/hashing.go
  • types/hashing_test.go
  • types/header.go
  • types/serialization.go
  • types/serialization_test.go
  • types/utils.go
  • types/utils_test.go

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@types/hashing.go`:
- Around line 44-73: Header.Hash and Header.MemoizeHash read and write the
shared field cachedHash without synchronization, causing a data race when the
same *Header is handed out to multiple goroutines; fix this by adding a
concurrency guard (e.g., a sync.RWMutex) to the Header type and use it when
accessing cachedHash: use a read lock (RLock/RUnlock) in Hash() and computeHash
readers, and a write lock (Lock/Unlock) in MemoizeHash() when assigning
h.cachedHash (and in any invalidate method that clears cachedHash);
alternatively, stop returning the same *Header from caches and return copies,
but if keeping shared headers use the mutex and update all places that touch
cachedHash accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: acbe1014-5715-48c8-8304-03f79e12dbad

📥 Commits

Reviewing files that changed from the base of the PR and between 78b6ff6 and 192b483.

📒 Files selected for processing (1)
  • types/hashing.go

Comment on lines +44 to +73
// Hash returns the header hash. It reuses a memoized value if one has already
// been prepared via MemoizeHash, but it does not write to the header itself.
func (h *Header) Hash() Hash {
	if h == nil {
		return nil
	}
	if h.cachedHash != nil {
		return h.cachedHash
	}

	slimHash, err := h.HashSlim()
	if err != nil {
		return h.computeHash()
	}

// MemoizeHash computes the header hash and stores it on the header for future
// Hash() calls. Call this before publishing the header to shared goroutines or
// caches.
func (h *Header) MemoizeHash() Hash {
	if h == nil {
		return nil
	}
	if h.cachedHash != nil {
		return h.cachedHash
	}

	hash := h.computeHash()
	if hash != nil {
		h.cachedHash = hash
	}
	return hash
}

⚠️ Potential issue | 🔴 Critical

Synchronize cachedHash before sharing cached headers.

Lines 50-51, 64-70, and 91-93 access cachedHash without synchronization. pkg/store/cached_store.go:87-106 and pkg/store/cached_store.go:108-130 return the same *types.SignedHeader instance to multiple callers, so one goroutine can memoize or invalidate while another reads Hash(). That is a Go data race on shared state; please guard cachedHash or stop returning shared mutable header pointers from the caches. As per coding guidelines "Be careful with concurrent access to shared state in Go".

Also applies to: 89-95


Member

@julienrbrt julienrbrt left a comment


utACK

@julienrbrt julienrbrt enabled auto-merge March 31, 2026 13:33
@julienrbrt julienrbrt added this pull request to the merge queue Mar 31, 2026
Merged via the queue into main with commit 9290142 Mar 31, 2026
37 checks passed
@julienrbrt julienrbrt deleted the marko/perf branch March 31, 2026 13:58