Hook to process events after a job fails all retries #780
mperham merged 3 commits into sidekiq:master
Conversation
When the maximum number of retries is hit and the message is about to be thrown away, give the option of allowing the worker to say goodbye by defining an 'exhausted' method on the worker.
👍

+1
A totally reasonable feature.
I like retries_exhausted. Could you update your PR to use that name and catch and handle any errors raised?
Additionally, any errors raised during the retries_exhausted hook are logged and dropped before resuming the original control flow.
+1

BTW our profile shots are hilariously alike.
Thanks!
Thanks for the contribution and working through this with me, @jkassemi! Oh, want to update the changelog and give yourself some credit?
@jkassemi Don't forget to add something to the Wiki about this if you haven't already. This is a great feature that other people should know about. Perhaps here: https://github.com/mperham/sidekiq/wiki/Error-Handling

👍
Changelog's updated in #787, @mperham. Yes... I noticed that myself... brothers separated at birth, maybe? @brandonhilkert, I'll get to it before I head out for the day.
This pull request is awesome; I made middleware to achieve that behavior. One more question: can we have the same kind of hook fire after every retry? I don't know if anybody else needs this, but I really do. It would be even better if we could somehow get the retry count. Without this hook we could use middleware, but in this concrete case it's not the best approach (imo). So, what do you think?
@ognevsky If you need to capture errors in your worker and do something every time, use begin..rescue.
@mperham I haven't even thought about this, thanks!
We have a job that calls a third party service that's not consistently available. The retry middleware handles 99% of cases appropriately, but in the event we're not able to reach the service for all retry attempts, we need an error notification.