fix: prevent double-panic on buffer slice bounds violation #124
grenade wants to merge 1 commit into Nehliin:master
Conversation
- if self.inner.is_some() && self.pool_alive.load(std::sync::atomic::Ordering::Acquire) {
+ if self.inner.is_some()
+     && self.pool_alive.load(std::sync::atomic::Ordering::Acquire)
+     && !std::thread::panicking()
nice, I missed this in my previous pr
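The guard in the diff above relies on a Rust rule: if a destructor panics while another panic is already unwinding, the runtime aborts the process instead of unwinding further. A minimal, self-contained sketch (not vortex's actual `Buffer` type; the `PooledBuf` name and `returned` flag are invented for illustration) of how `std::thread::panicking()` prevents that:

```rust
// Sketch: why a Drop impl must not panic during unwinding.
struct PooledBuf {
    returned: bool,
}

impl Drop for PooledBuf {
    fn drop(&mut self) {
        // Skip the consistency check if a panic is already unwinding;
        // a second panic here would abort the whole process.
        if !self.returned && !std::thread::panicking() {
            panic!("buffer dropped without being returned to the pool");
        }
    }
}

fn main() {
    // A panic elsewhere unwinds through the buffer's scope; the guard
    // turns what would be a double-panic abort into a normal unwind.
    let result = std::panic::catch_unwind(|| {
        let _buf = PooledBuf { returned: false };
        panic!("primary panic");
    });
    assert!(result.is_err());
    println!("survived");
}
```

Without the `!std::thread::panicking()` term, the `catch_unwind` above would never return: the drop would panic mid-unwind and the process would abort.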
- Bytes::copy_from_slice(&buffer.raw_slice()[start_idx..end_idx]),
- );
+ let buffer_slice = buffer.raw_slice();
+ if end_idx <= buffer_slice.len() {
I am not sure we want this validation check here, since we should be able to catch this much earlier and avoid unnecessary I/O. If you can consistently reproduce this, I would be very interested in the value of the piece length inside the torrent metadata, and in what is actually queued here: https://github.com/Nehliin/vortex/blob/master/bittorrent/src/peer_comm/peer_connection.rs#L945
There must be a bug elsewhere, because the buffers should match the size of the pieces, and the request validation should prevent any malicious packets from triggering disk reads.
I do see that is_valid_piece_req only checks that "begin" is divisible by SUBPIECE_SIZE, but not whether it's reasonable compared to the piece length. Could you confirm whether it's a weird begin value instead? If so, we should update the piece validation logic instead.
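The tightened validation Nehliin describes could look roughly like the sketch below. The function name matches the is_valid_piece_req mentioned above, but the signature, the 16 KiB SUBPIECE_SIZE value, and the length cap are assumptions, not vortex's actual code:

```rust
// Hypothetical sketch: validate a peer's piece request against the
// real piece length, not just block alignment.
const SUBPIECE_SIZE: u32 = 16 * 1024; // assumed 16 KiB block size

fn is_valid_piece_req(begin: u32, length: u32, piece_length: u32) -> bool {
    begin % SUBPIECE_SIZE == 0                 // existing alignment check
        && length <= SUBPIECE_SIZE             // assumed cap on request size
        // new: the requested range must fit inside the piece,
        // rejecting a "weird begin value" before any disk I/O happens
        && begin
            .checked_add(length)
            .map_or(false, |end| end <= piece_length)
}

fn main() {
    // normal request at the start of a 256 KiB piece
    assert!(is_valid_piece_req(0, 16 * 1024, 256 * 1024));
    // aligned but past the end of the piece: rejected
    assert!(!is_valid_piece_req(512 * 1024, 16 * 1024, 256 * 1024));
    // unaligned begin: rejected by the existing check
    assert!(!is_valid_piece_req(1, 16 * 1024, 256 * 1024));
    println!("ok");
}
```

Rejecting the request here means a malicious or buggy peer never causes a disk read for an out-of-range offset, which is the "catch this much earlier" point above.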
I merged #129, which hopefully fixes the crash you've seen. Let me know if it's still a problem post-patch. I'd still merge the double-panic fix, though, if you keep that in the PR.
#129 tested: switched monsoon back to vortex upstream. However, this reintroduced panic crashes in monsoon, so for monsoon the only options are to remain dependent on a fork or to await a merge of this PR or a new patch.
- Add bounds check before slicing buffer in disk read completion handler (event_loop.rs). Logs error instead of panicking when end_idx exceeds buffer length.
- Guard Buffer::drop panic with std::thread::panicking() to prevent double-panic abort. Follows the same pattern already applied to BufferRing::drop in PR Nehliin#95.
Force-pushed from baebe40 to 9669d82
While building monsoon, a BitTorrent GUI and headless server that uses vortex-bittorrent as its protocol backend, we're hitting a consistent crash when a peer sends piece data that exceeds the allocated buffer size.
The panic occurs at event_loop.rs:1039, where buffer.raw_slice()[start_idx..end_idx] is accessed without bounds validation. During stack unwinding, Buffer::drop in buf_pool.rs panics because the buffer wasn't returned to the pool, triggering a double-panic which Rust's runtime converts to a process abort. This takes down the entire application -- not just the affected torrent.

Changes:
event_loop.rs -- Add bounds check before slicing the buffer in the disk read completion handler. Logs an error and skips the send instead of panicking. The connection may miss a piece response but the torrent continues operating.
buf_pool.rs -- Guard Buffer::drop with !std::thread::panicking() to prevent the secondary panic during unwind. This follows the identical pattern already applied to BufferRing::drop in PR #95 (fix: Avoid double panic in buf_ring destructor).

Both changes are minimal, and the panicking() guard is a proven pattern already in the codebase.