Hook to process events after a job fails all retries#780

Merged
mperham merged 3 commits into sidekiq:master from atpay:master
Mar 21, 2013

Conversation

@jkassemi
Contributor

When the maximum number of retries is hit and the message is about
to be thrown away, give the option of allowing the worker to say
goodbye by defining an 'exhausted' method on the worker.

We have a job that calls a third party service that's not consistently available. The retry
middleware handles 99% of cases appropriately, but in the event we're not able to reach
the service for all retry attempts, we need an error notification.

When the maximum number of retries is hit and the message is about
to be thrown away, give the option of allowing the worker to say
goodbye by defining an 'exhausted' method on the worker.
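The proposal above can be sketched in plain Ruby. This is a simulation of the idea, not Sidekiq's actual retry middleware: the `run_with_retries` helper, `MAX_RETRIES` constant, and `HardJob` class are all illustrative names, and the hook is shown under its originally proposed name `exhausted`.

```ruby
# Simulated retry loop: once all retries are used up, give the worker a
# chance to say goodbye via an optional 'exhausted' hook before the
# message is thrown away.
class HardJob
  MAX_RETRIES = 3

  def perform
    raise "third-party service unreachable"
  end

  # Optional hook, called only after the final retry fails.
  def exhausted(msg)
    "notified: #{msg['error_message']}"
  end
end

def run_with_retries(worker, msg)
  attempts = 0
  begin
    attempts += 1
    worker.perform
  rescue => e
    retry if attempts <= worker.class::MAX_RETRIES
    msg['error_message'] = e.message
    # All retries exhausted: invoke the hook if the worker defines one.
    return worker.exhausted(msg) if worker.respond_to?(:exhausted)
    nil
  end
end
```

Because the hook lives on the worker class itself, each job can decide for itself whether it needs an end-of-life notification.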
@LeakyBucket
Contributor

👍

1 similar comment
@zquestz

zquestz commented Mar 20, 2013

+1

@mperham
Collaborator

mperham commented Mar 21, 2013

A totally reasonable feature.

  1. I'm not sold on 'exhausted'. Better name? Maybe 'retry_failed'?
  2. What happens when the callback raises an error? Do we retry later? ❓

@jkassemi
Contributor Author

  1. 'retry_failed' seems like it could represent a singular failure - 'retries_exhausted' or 'retries_failed'?
  2. Letting it die at that point seems to make sense. It feels reasonable to expect the callback to be well tested and simple enough to execute successfully; the most common use case will likely be just queuing up an email and writing a log message.

@mperham
Collaborator

mperham commented Mar 21, 2013

I like retries_exhausted. Could you update your PR to use that name and catch and handle any errors raised?

Additionally, any errors raised during retries_exhausted hook
are logged and dropped before resuming original control flow.
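The merged behavior described in that commit message can be sketched as follows. This is a minimal plain-Ruby illustration, not Sidekiq's actual middleware code; `fire_retries_exhausted` and `FlakyJob` are hypothetical names. The point is that the hook is now called `retries_exhausted`, and an error raised inside it is logged and dropped rather than propagated.

```ruby
require "logger"

LOGGER = Logger.new($stdout)

# Invoke the worker's retries_exhausted hook, if defined. Any error the
# hook raises is logged and swallowed so the original control flow resumes.
def fire_retries_exhausted(worker, msg)
  return unless worker.respond_to?(:retries_exhausted)
  worker.retries_exhausted(msg)
rescue => e
  LOGGER.error("retries_exhausted hook failed: #{e.message}")
end

class FlakyJob
  def retries_exhausted(msg)
    raise "mailer is down" # even this must not kill the retry handler
  end
end
```

This answers mperham's earlier question: a failing callback does not trigger another retry; it is simply logged.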
@zquestz

zquestz commented Mar 21, 2013

+1

Collaborator


Doc update needed

@mperham
Collaborator

mperham commented Mar 21, 2013

BTW our profile shots are hilariously alike.

mperham added a commit that referenced this pull request Mar 21, 2013
Hook to process events after a job fails all retries
@mperham mperham merged commit c62f59e into sidekiq:master Mar 21, 2013
@jkassemi
Contributor Author

Gracias!

@mperham
Collaborator

mperham commented Mar 21, 2013

Thanks for the contribution and for working through this with me, @jkassemi! Oh, want to update the changelog and give yourself some credit?

@brandonhilkert
Collaborator

@jkassemi Don't forget to add something to the Wiki about this if you haven't already. This is a great feature that other people should know about. Perhaps here https://github.com/mperham/sidekiq/wiki/Error-Handling

@bryanrite

👍

@jkassemi
Contributor Author

Changelog's updated in #787

@mperham - yes... I noticed that myself... brothers separated at birth, maybe?

@brandonhilkert - I'll get to it before I head out for the day.

@ognevsky

This pull request is awesome; I had been writing middlewares to achieve this behavior.

One more question: could we have the same hook, but after every retry? I don't know if anybody else needs this, but I really do. It would be even better if we could also get the retry count somehow.

Without this hook we could fall back to middleware, but in this particular case that's not the best approach (imo).

So, what do you think?

@ognevsky

/cc @mperham @jkassemi

@mperham
Collaborator

mperham commented Mar 27, 2013

@ognevsky If you need to capture errors in your worker and do something every time, use begin..rescue.
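mperham's suggestion can be sketched like this. Plain Ruby again, no Sidekiq required; `ServiceJob`, `call_service`, and the `attempts_seen` counter are illustrative, not a Sidekiq API. The idea is to wrap the body of `perform` in begin..rescue, do the per-attempt work, and re-raise so the retry machinery still sees the failure.

```ruby
class ServiceJob
  attr_reader :attempts_seen

  def initialize
    @attempts_seen = 0
  end

  def perform
    call_service
  rescue => e
    @attempts_seen += 1 # runs on every failed attempt
    # log / notify here, then re-raise so retries still happen
    raise e
  end

  def call_service
    raise "timeout"
  end
end
```

Since the rescue runs on each attempt, the worker can count or report failures itself without any extra hook.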

@ognevsky

@mperham I haven't even thought about this, thanks!


7 participants