Use segment_position.bytesize in consumer deliver loop#1786
Conversation
Use the pre-computed `bytesize` stored on `SegmentPosition` instead of calling `BytesMessage#bytesize`, which recomputes the value each time by summing `timestamp`, `exchange_name`, `routing_key`, `properties`, and `bodysize`. `SegmentPosition.bytesize` is set once at creation via `SegmentPosition.make(segment, position, msg.bytesize.to_u32)`, so the value is identical.
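To illustrate the difference, here is a hedged Ruby sketch (the real code is Crystal in LavinMQ; the names `BytesMessage`, `SegmentPosition`, and `SegmentPosition.make` follow the PR description, everything else — the field widths, the struct layouts — is an assumption for illustration):

```ruby
# Illustrative stand-in for BytesMessage: bytesize is recomputed on every
# call by summing the size of each serialized field.
BytesMessage = Struct.new(:timestamp, :exchange_name, :routing_key, :properties, :body) do
  def bytesize
    8 +                        # timestamp (assumed fixed-width, 8 bytes)
      exchange_name.bytesize +
      routing_key.bytesize +
      properties.bytesize +
      body.bytesize
  end
end

# Illustrative stand-in for SegmentPosition: mirrors
# SegmentPosition.make(segment, position, msg.bytesize.to_u32) — the size
# is computed once at creation and cached on the struct.
SegmentPosition = Struct.new(:segment, :position, :bytesize) do
  def self.make(segment, position, bytesize)
    new(segment, position, bytesize)
  end
end

msg = BytesMessage.new(0, "amq.topic", "a.b", "{}", "hello")
sp  = SegmentPosition.make(1, 0, msg.bytesize)

# The cached value is identical to the recomputed one, just cheaper to read.
sp.bytesize == msg.bytesize  # => true
```

The point of the PR is exactly this last line: both reads give the same number, but `sp.bytesize` is a field access while `msg.bytesize` re-sums every field each time.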
No issues found. The change replaces the recomputed `BytesMessage#bytesize` with the pre-computed `SegmentPosition.bytesize`; the values are identical.
I was a bit skeptical about the change, so I asked ChatGPT 5.3 to take a deep look at it, and then challenged Claude:
I think I agree with the conclusion here, but let me know what you think.
It's only used for deciding when to yield, so I think it's up to us to decide what's correct!
Did a quick benchmark and it saves like 0.6ns per run in my test. So it only needs to run like 1.6 million times to save a whole ms! 😅 With that said, it's still an optimization, so I think we should use it. "Many small creeks"!
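The "1.6 million times" figure checks out as simple back-of-the-envelope arithmetic (a sketch, using the ~0.6ns-per-call saving quoted above):

```ruby
# How many calls does a ~0.6 ns-per-call saving need to add up to 1 ms?
savings_ns_per_call = 0.6
ns_per_ms = 1_000_000.0

calls_per_ms_saved = ns_per_ms / savings_ns_per_call
# ≈ 1.67 million calls to save a single millisecond
```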
Hehe yeah, saw no meaningful difference in benchmarks either; who knew computers were fast at adding some integers together!
What I was thinking with this change, based on the comment "the delivered_bytes counter is only used to decide when to Fiber.yield", is that the count doesn't need to be byte-perfect, so the cheaper pre-computed value is safe to use.
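The pattern under discussion can be sketched like this — in Ruby rather than the project's Crystal, and with assumed names and an assumed threshold (this is not LavinMQ's actual deliver loop): `delivered_bytes` only decides *when* the fiber yields, so the exact byte count is not correctness-critical.

```ruby
YIELD_AFTER_BYTES = 131_072          # assumed threshold, not from the PR

SegmentPos = Struct.new(:bytesize)   # stand-in carrying the cached size

def deliver(sp)
  # stand-in for the actual delivery work
end

def deliver_loop(segment_positions)
  delivered_bytes = 0
  segment_positions.each do |sp|
    deliver(sp)
    delivered_bytes += sp.bytesize   # cached value, no field re-summing
    if delivered_bytes >= YIELD_AFTER_BYTES
      delivered_bytes = 0
      Fiber.yield                    # give other fibers a turn
    end
  end
end
```

Whether the counter is off by a few bytes only shifts the yield point slightly, which is why swapping in the pre-computed size is a safe micro-optimization.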
WHAT is this pull request doing?
Use the pre-computed `bytesize` stored on `SegmentPosition` instead of calling `BytesMessage#bytesize`, which recomputes the value each time by summing `timestamp`, `exchange_name`, `routing_key`, `properties`, and `bodysize`. `SegmentPosition.bytesize` is set once at creation via `SegmentPosition.make(segment, position, msg.bytesize.to_u32)`, so the value is identical.

(Noted while doing #1783)
HOW can this pull request be tested?
Tests. Doesn't seem to make much of a difference in benchmarks though, at least not with 1 producer and 1 consumer. Guessing we are "producer-bound", or a change this simple is a drop in the ocean?