Bug 34990

Summary: Backgroundjob->enqueue does not send persistent header
Product: Koha Reporter: Marcel de Rooy <m.de.rooy>
Component: Architecture, internals, and plumbing    Assignee: Marcel de Rooy <m.de.rooy>
Status: CLOSED FIXED QA Contact: Nick Clemens (kidclamp) <nick>
Severity: normal    
Priority: P5 - low CC: david, dcook, fridolin.somers, jonathan.druart
Version: Main   
Hardware: All   
OS: All   
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=34997
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=34998
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32305
Change sponsored?: --- Patch complexity: Trivial patch
Documentation contact: Documentation submission:
Text to go in the release notes:
Version(s) released in:
23.11.00, 23.05.05, 22.11.12
Circulation function:
Bug Depends on: 32305    
Bug Blocks: 35092    
Attachments: Bug 34990: Add persistent header when sending msg to RabbitMQ
Bug 34990: Add persistent header when sending msg to RabbitMQ
Bug 34990: Add persistent header when sending msg to RabbitMQ

Description Marcel de Rooy 2023-10-05 09:56:12 UTC
Restarting RabbitMQ makes us lose all pending messages (jobs) in the default or long_tasks queue. Although we have a database fallback, this can be a problem when e.g. replacing/restarting a container while the workers had not yet read all messages or were already down, and the restart brings Rabbit up first.

Note that this scenario places RabbitMQ in the same container; it would surely be better to put it in its own container. But even in that scenario we can lose messages if we do not add a persistent flag.
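
For context, the fix boils down to passing a 'persistent' header in the STOMP SEND frame that enqueue produces. A minimal sketch of the idea with Net::Stomp, assuming an already-configured broker (the connection parameters, queue name and payload below are illustrative, not the verbatim patch):

  use Net::Stomp;

  # Illustrative connection; Koha builds and configures this elsewhere.
  my $conn = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
  $conn->connect( { login => 'guest', passcode => 'guest' } );

  # Net::Stomp passes extra hashref keys through as frame headers.
  # Without persistent => 'true', RabbitMQ treats the message as
  # transient and drops it from the queue when the broker restarts.
  $conn->send(
      {
          destination => '/queue/koha_myclone-long_tasks',
          body        => '{"job_id":1}',   # illustrative payload
          persistent  => 'true',           # the header this bug adds
      }
  );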
Comment 1 Marcel de Rooy 2023-10-05 10:08:57 UTC
Created attachment 156564 [details] [review]
Bug 34990: Add persistent header when sending msg to RabbitMQ

Test plan:
NOTE: It is very hard to add a Koha unit test for adding this
single header when communicating with RabbitMQ via the Stomp plugin.
If we mocked the send, we would only be testing whether Perl can
pass a hashref to a subroutine ;)

Do NOT yet apply this patch.
Make sure that RabbitMQ runs.
Stop the koha-worker for long_tasks:
  koha-worker --stop --queue long_tasks myclone
Stage a MARC file.
Check queues with rabbitmqctl list_queues.
Look for: koha_myclone-long_tasks 1  (at least 1)
Stop rabbitmq (something like /etc/init.d/rabbitmq-server stop)
Start rabbitmq (/etc/init.d/rabbitmq-server start)
Check the queue again with rabbitmqctl list_queues.
Look for: koha_myclone-long_tasks 0
Your messages are gone.

Now apply this patch.
Repeat the earlier steps. This time, note that you will still see
a non-empty queue in the last step:
koha_myclone-long_tasks 1 (at least 1)

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
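
For the record, a mocked unit test would look roughly like the sketch below, which demonstrates the point in the NOTE above: it only verifies that the header hashref reaches send(), not that RabbitMQ honours it. The constructor is stubbed so no real socket is needed; all of this is illustrative, not Koha's actual test code:

  use Test::More tests => 1;
  use Test::MockModule;
  use Net::Stomp;

  my $sent_frame;
  my $mock = Test::MockModule->new('Net::Stomp');
  # Stub the constructor so no real broker connection is made,
  # and capture the frame hashref handed to send().
  $mock->mock( new  => sub { bless {}, 'Net::Stomp' } );
  $mock->mock( send => sub { my ( $self, $frame ) = @_; $sent_frame = $frame; 1 } );

  my $conn = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
  $conn->send( { destination => '/queue/x', body => '{}', persistent => 'true' } );

  is( $sent_frame->{persistent}, 'true',
      'persistent header is present in the SEND frame' );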
Comment 2 David Nind 2023-10-05 17:58:08 UTC
Created attachment 156579 [details] [review]
Bug 34990: Add persistent header when sending msg to RabbitMQ

(Same test plan as in comment 1.)

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Comment 3 David Nind 2023-10-05 18:01:51 UTC
I've signed off, as the test plan works.

However, what should happen to the jobs already queued (shown under Administration > Jobs)? For me, they stay there with a status of 'New' and progress 'null/0'. The only action options I have are view and cancel.

Testing notes (using KTD):

1. Add sudo in front of the commands in the test plan, and use kohadev instead of myclone, e.g.:
     sudo koha-worker --stop --queue long_tasks kohadev
Comment 4 Marcel de Rooy 2023-10-06 06:26:42 UTC
(In reply to David Nind from comment #3)
> I've signed off, as the test plan works.
> 
Thanks

> However, what should happen to the jobs queued (shown under Administration >
> Jobs)? For me, they are staying there with a status of 'New' and progress
> 'null/0'. The action options I have are view and cancel.

That's another problem, yes. There is only a workaround for that: stop RabbitMQ and restart the worker. The worker will then go into 'database mode' and pick up the jobs that are still new in the db. Since those jobs might be quite old, this is very questionable behavior, btw.

You mention null/0. I saw that too. That is another small bug.

Will open two new reports.
Comment 5 David Nind 2023-10-06 06:29:53 UTC
Thanks Marcel!
Comment 6 Nick Clemens (kidclamp) 2023-10-11 17:57:26 UTC
Created attachment 156871 [details] [review]
Bug 34990: Add persistent header when sending msg to RabbitMQ

(Same test plan as in comment 1.)

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Comment 7 Jonathan Druart 2023-10-12 04:10:09 UTC
I have not tested this patch, but I am not sure about the consequences. If the status in the DB does not reflect the actual status of the job, I don't think we should push it as is.

Can you clarify please?
Comment 8 Marcel de Rooy 2023-10-12 05:42:38 UTC
(In reply to Jonathan Druart from comment #7)
> I have not tested this patch, but I am not sure about the consequences. If
> the status in the DB does not reflect the actual status of the job, I don't
> think we should push it as is.
> 
> Can you clarify please?

That sounds like extending the scope of this bug. The fact that we have redundancy, and that it may be conflicting, is not caused here and is not resolved here. What we resolve here is the loss of messages on a RabbitMQ restart.

I agree that the BackgroundJob code needs more attention. Somehow it reflects our fear of switching to a message queue; we are limping between two thoughts. There are still several reports open about making the message queue more stable. I am surprised that we can't resolve that, since RabbitMQ is commonly used, and for large volumes.

So I recommend pushing this patch, and I will certainly support further improvements in this area.
Comment 9 Jonathan Druart 2023-10-12 06:36:17 UTC
(In reply to Marcel de Rooy from comment #8)
> So I recommend pushing this patch, and I will certainly support further
> improvements in this area.

It's not about extending the scope of this bug; it's about not introducing another bug.
IMO the Koha UI should show what has actually been processed. It's better to re-enqueue a job manually than to tell users a job failed when it in fact succeeded.

Just my opinion, feel free to ignore it.

Back to PQA, as I don't have more time to dedicate to this, but it would arguably be better set to In Discussion...
Comment 10 Marcel de Rooy 2023-10-12 07:01:11 UTC
(In reply to Jonathan Druart from comment #9)
> Just my opinion, feel free to ignore it.

I never ignore it :)
I will still check whether I am missing something in our discussion.
Comment 11 Marcel de Rooy 2023-10-18 14:55:25 UTC
This should go along with bug 32305; I will take another look at it tomorrow.
Comment 12 Marcel de Rooy 2023-10-19 09:12:05 UTC
Added a dependency on bug 32305. See the omnibus bug.
Comment 13 Tomás Cohen Arazi (tcohen) 2023-10-20 14:04:00 UTC
Pushed to master for 23.11.

Nice work everyone, thanks!
Comment 14 Fridolin Somers 2023-10-25 21:53:40 UTC
Pushed to 23.05.x for 23.05.05
Comment 15 Matt Blenkinsop 2023-11-13 14:20:01 UTC
Nice work everyone!

Pushed to oldstable for 22.11.x