
fix: prevent double-panic on buffer slice bounds violation#124

Open
grenade wants to merge 1 commit into Nehliin:master from grenade:fix-double-panic

Conversation

@grenade grenade commented Apr 5, 2026

While building monsoon, a BitTorrent GUI and headless server that uses vortex-bittorrent as its protocol backend, we're hitting a consistent crash when a peer sends piece data that exceeds the allocated buffer size.

The panic occurs at event_loop.rs:1039 where buffer.raw_slice()[start_idx..end_idx] is accessed without bounds validation. During stack unwinding, Buffer::drop in buf_pool.rs panics because the buffer wasn't returned to the pool, triggering a double-panic which Rust's runtime converts to a process abort. This takes down the entire application -- not just the affected torrent.

Changes:

  1. event_loop.rs -- Add bounds check before slicing the buffer in the disk read completion handler. Logs an error and skips the send instead of panicking. The connection may miss a piece response but the torrent continues operating.

  2. buf_pool.rs -- Guard Buffer::drop with !std::thread::panicking() to prevent the secondary panic during unwind. This follows the identical pattern already applied to BufferRing::drop in PR #95 (fix: Avoid double panic in buf_ring destructor).

Both changes are minimal and the panicking() guard is a proven pattern already in the codebase.

```diff
-        if self.inner.is_some() && self.pool_alive.load(std::sync::atomic::Ordering::Acquire) {
+        if self.inner.is_some()
+            && self.pool_alive.load(std::sync::atomic::Ordering::Acquire)
+            && !std::thread::panicking()
```
Owner

nice, I missed this in my previous pr

```diff
-                Bytes::copy_from_slice(&buffer.raw_slice()[start_idx..end_idx]),
-            );
+            let buffer_slice = buffer.raw_slice();
+            if end_idx <= buffer_slice.len() {
```
Owner

I am not sure we want this validation check here, since we should be able to catch this much earlier and avoid unnecessary I/O. If you can consistently reproduce this, I would be very interested in the value of the piece length inside the torrent metadata, and in what is actually queued here: https://github.com/Nehliin/vortex/blob/master/bittorrent/src/peer_comm/peer_connection.rs#L945

There must be a bug elsewhere, because the buffers should match the size of the pieces, and the request validation should prevent any malicious packets from triggering disk reads.

I do see that is_valid_piece_req only checks that "begin" is divisible by SUBPIECE_SIZE, but not whether it's reasonable compared to the piece length. Could you confirm whether it's a weird begin value? If so, we should update the piece validation logic instead.

@Nehliin
Owner

Nehliin commented Apr 8, 2026

I merged #129 which hopefully fixes the crash you've seen. Let me know if it's still a problem post patch. I'd still merge the double panic fix though if you keep that in the PR.

@grenade
Author

grenade commented Apr 9, 2026

Tested #129 by switching monsoon back to vortex upstream. However, this reintroduced the panic crashes in monsoon. So for monsoon the only options are to remain dependent on a fork, or to await a merge of this PR or a new patch.

- Add bounds check before slicing buffer in disk read completion
  handler (event_loop.rs). Logs error instead of panicking when
  end_idx exceeds buffer length.

- Guard Buffer::drop panic with std::thread::panicking() to prevent
  double-panic abort. Follows the same pattern already applied to
  BufferRing::drop in PR Nehliin#95.
@grenade grenade force-pushed the fix-double-panic branch from baebe40 to 9669d82 on April 9, 2026 at 16:11