Currently the background jobs just fail. It would be helpful if there were a way to add or set background jobs with a parameter to retry X times (probably with a Y-second interval between attempts?), especially now that plugins can add background jobs.
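For illustration, a minimal sketch of the kind of retry parameter being described here, as a plain Perl wrapper; the names (with_retries, max_tries, delay) are hypothetical, not an existing Koha API:

use strict;
use warnings;

# Hypothetical helper, not part of Koha: run a code block up to
# $max_tries times, sleeping $delay seconds between failed attempts.
sub with_retries {
    my ( $code, %opts ) = @_;
    my $max_tries = $opts{max_tries} // 3;
    my $delay     = $opts{delay}     // 5;    # seconds between attempts

    for my $attempt ( 1 .. $max_tries ) {
        my $result = eval { $code->() };
        return $result unless $@;
        warn "Attempt $attempt/$max_tries failed: $@";
        sleep $delay if $attempt < $max_tries;
    }
    die "Giving up after $max_tries attempts\n";
}

# Usage (call_external_api() is a stand-in for whatever the job does):
# my $result = with_retries( sub { call_external_api() }, max_tries => 3, delay => 10 );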
Would it be acceptable to make this feature depend on manually installing a RabbitMQ plugin? https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases I don't know how to solve 'the STOMP use case' otherwise.
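A rough sketch of how that plugin could be used over STOMP, assuming an exchange (here called koha_delayed, a made-up name) has already been declared with type x-delayed-message via the management UI or rabbitmqadmin, and assuming the STOMP adapter carries the custom x-delay header through to the message headers the plugin inspects; that last part would need verifying:

use strict;
use warnings;
use Net::Stomp;

# Hypothetical example: exchange name, routing key and payload are made up.
# The x-delay header is the delayed-message-exchange plugin's delay, in ms.
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
$stomp->connect( { login => 'guest', passcode => 'guest' } );

$stomp->send(
    {
        destination => '/exchange/koha_delayed/long_tasks',
        body        => '{"job_id":42}',    # job payload, for illustration only
        'x-delay'   => 30_000,             # deliver the retry message 30s later
    }
);

$stomp->disconnect;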
As noted on that RabbitMQ plugin link, having a task scheduler could solve the problem for both broker methods. Upon failure, a task could be enqueued to run X seconds later, and that task could restart the background job: reset the job status and then, in RabbitMQ mode, send a new message. The number of tries could be tracked in the job itself. We need a task scheduler at some point anyway.
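Roughly, that retry task could look something like the following; the data layout and the enqueue_message() helper are placeholders, not the actual Koha::BackgroundJob interface:

use strict;
use warnings;

# Hypothetical retry task: $job stands in for a row from the background_jobs
# table, and enqueue_message() for whatever re-sends the RabbitMQ message.
# None of these names are the real Koha API.
sub retry_background_job {
    my ( $job, %opts ) = @_;
    my $max_retries = $opts{max_retries} // 3;

    return 0 if ( $job->{retries} // 0 ) >= $max_retries;   # give up

    $job->{retries}++;          # number of tries tracked in the job itself
    $job->{status} = 'new';     # reset status so a worker picks it up again

    # In RabbitMQ mode, a fresh message would also need to be sent:
    # enqueue_message($job);

    return 1;
}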
That said... what's our proposed use case? What's the background job that would benefit from an automatic retry?
(In reply to David Cook from comment #3)
> That said... what's our proposed use case? What's the background job that
> would benefit from an automatic retry?

I have some real-life use cases:

- A plugin hook scheduling API calls to an external API that could fail (they do fail, not often, but they do), and I want the job to be able to detect certain errors and schedule a retry.
- ES overwhelmed somehow; I would like the ES indexing job to be retried under certain circumstances.
(In reply to Tomás Cohen Arazi (tcohen) from comment #4)
> - A plugin hook scheduling API calls to an external API that could fail
> (they do fail, not often, but they do), and I want the job to be able to
> detect certain errors and schedule a retry.

To me, it sounds like this would benefit from a task scheduler, and then the retry logic is just part of the plugin.

> - ES overwhelmed somehow; I would like the ES indexing job to be retried
> under certain circumstances.

That could probably be useful. I suppose for many things involving inter-process communication it can be handy to be able to retry up to X times, so long as there's some coding to make sure there are no race conditions. ES indexing should be fine, since it should just be passing an ID rather than any stale data.

Yeah, overall I think a task scheduler solves this.
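For the plugin case, the retry loop could indeed live entirely in the plugin code. A rough sketch with LWP::UserAgent, with the URL and retry policy invented for illustration:

use strict;
use warnings;
use LWP::UserAgent;

# Sketch of plugin-side retry logic for a flaky external API.
my $ua  = LWP::UserAgent->new( timeout => 10 );
my $url = 'https://api.example.org/notify';    # made-up endpoint

my $max_tries = 3;
for my $attempt ( 1 .. $max_tries ) {
    my $response = $ua->get($url);
    last if $response->is_success;

    # Give up immediately on 4xx client errors; only retry errors that
    # look transient (timeouts, 5xx).
    die "Permanent failure: " . $response->status_line . "\n"
        if $response->code >= 400 && $response->code < 500;

    warn sprintf "Attempt %d/%d failed: %s\n", $attempt, $max_tries, $response->status_line;
    die "Giving up after $max_tries attempts\n" if $attempt == $max_tries;
    sleep 10;    # back off before the next try
}

Giving up straight away on 4xx is one way of doing the "detect certain errors" part: only errors that look transient get retried.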