Description
Marcel de Rooy
2023-10-05 09:56:12 UTC
Created attachment 156564 [details] [review]
Bug 34990: Add persistent header when sending msg to RabbitMQ

Test plan:
NOTE: It is very hard to add a Koha unit test for adding this single header when communicating with RabbitMQ via the Stomp plugin. If we mocked the send, we would only be testing whether Perl can pass a hashref to a subroutine ;)

Do NOT apply this patch yet.
Make sure that RabbitMQ runs.
Stop the koha-worker for long_tasks: koha-worker --stop --queue long_tasks myclone
Stage a MARC file.
Check queues with rabbitmqctl list_queues. Look for:
    koha_myclone-long_tasks 1 (at least 1)
Stop RabbitMQ (something like /etc/init.d/rabbitmq-server stop).
Start RabbitMQ (/etc/init.d/rabbitmq-server start).
Check the queue again with rabbitmqctl list_queues. Look for:
    koha_myclone-long_tasks 0
Your messages are gone.
Now apply this patch and repeat the previous steps. Note that this time you will still see a non-empty queue in the last step:
    koha_myclone-long_tasks 1 (at least 1)

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Created attachment 156579 [details] [review]
Bug 34990: Add persistent header when sending msg to RabbitMQ
(Same commit message and test plan as attachment 156564.)

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>

Comment 3 (David Nind):
I've signed off, as the test plan works. However, what should happen to the queued jobs (shown under Administration > Jobs)? For me, they stay there with a status of 'New' and progress 'null/0'. The only action options I have are view and cancel.

Testing notes (using KTD):
1. Add sudo in front of the commands in the test plan. For myclone, use kohadev.

(In reply to David Nind from comment #3)
> I've signed off, as the test plan works.

Thanks.

> However, what should happen to the jobs queued (shown under Administration
> > Jobs)? For me, they are staying there with a status of 'New' and progress
> 'null/0'. The action options I have are view and cancel.

That's another problem, yes. There is only a workaround for it: stop RabbitMQ, then restart the worker. The worker then goes into 'database mode' and picks up the jobs that are still marked new in the database. Since those jobs might be quite old, this is very questionable behavior, by the way.

You mention null/0. I saw that too; it is another small bug. I will open two new reports.

Thanks Marcel!
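For readers following along, here is a minimal sketch of the kind of change the bug title describes, assuming a bare Net::Stomp connection rather than Koha's own wrapper code. The connection details and message body are illustrative, and the queue name simply follows the test plan's koha_myclone-long_tasks example; this is not the actual Koha patch.

    use Modern::Perl;
    use Net::Stomp;
    use JSON qw( encode_json );

    # Connect to RabbitMQ's STOMP listener (61613 is the plugin default).
    my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
    $stomp->connect( { login => 'guest', passcode => 'guest' } );

    # Without the 'persistent' header the broker keeps the message in
    # memory only, so a restart (as in the test plan) silently discards it.
    # With persistent => 'true' the broker writes the message to disk and
    # it survives the stop/start cycle, provided the queue itself is durable.
    $stomp->send(
        {
            destination => '/queue/koha_myclone-long_tasks',
            body        => encode_json( { job_id => 1 } ),    # illustrative payload
            persistent  => 'true',    # the single header this bug adds
        }
    );

    $stomp->disconnect;

Note that in STOMP, persistence is a per-frame header rather than a property of the connection, which is why the fix is a one-header change at the point where the message is sent.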
Created attachment 156871 [details] [review]
Bug 34990: Add persistent header when sending msg to RabbitMQ
(Same commit message and test plan as attachment 156564.)

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>

Comment 7 (Jonathan Druart):
I have not tested this patch, but I am not sure about the consequences. If the status in the DB does not reflect the actual status of the job, I don't think we should push it as is.

Can you clarify, please?

Comment 8 (Marcel de Rooy):
(In reply to Jonathan Druart from comment #7)
> I have not tested this patch but I am not sure about the consequences. If
> the status in the DB does not reflect the actual status of the job I don't
> think we should push as it.
>
> Can you clarify please?

That sounds like extending the scope of this bug? The fact that we have redundancy, and that it may be conflicting, is not caused here and is not resolved here. What we resolve here is the loss of messages when RabbitMQ restarts.

I agree that the BackgroundJob code needs more attention. Somehow it reflects our fear of switching to a message queue? Limping on two thoughts? There are still several reports open about making the message queue more stable. I am surprised that we can't resolve that, since RabbitMQ is commonly used, and for large volumes.

So I recommend pushing this patch, and I will certainly support further improvements in this area.
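As background for the status discussion here, and for the 'database mode' workaround mentioned in the reply to comment #3: a rough, hypothetical sketch of what such a database fallback looks like when the broker is unreachable. The background_jobs table name and columns match Koha's schema, but the polling query, credentials, and the process_job() helper are illustrative assumptions, not Koha's actual worker code.

    use Modern::Perl;
    use DBI;

    # Hypothetical fallback: when no STOMP connection can be established,
    # poll the background_jobs table for jobs still marked 'new' and
    # process them directly, instead of waiting on the message queue.
    my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'koha', 'password',
        { RaiseError => 1 } );

    my $jobs = $dbh->selectall_arrayref(
        q{SELECT id, type, data FROM background_jobs
          WHERE status = 'new' ORDER BY enqueued_on},
        { Slice => {} }
    );

    for my $job (@$jobs) {
        # Per the caveat in this thread: these rows may be quite old, so
        # blindly re-processing everything marked 'new' is questionable.
        process_job($job);    # illustrative helper, not a Koha API
    }

A real fallback would likely need an age cutoff or operator confirmation before re-running stale jobs, which is exactly the concern raised above.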
(In reply to Marcel de Rooy from comment #8)
> It sounds like extending the scope of this bug? [...]
>
> So I recommend to push this patch and I will certainly support further
> improvements in this area.

It's not extending the scope of this bug, and it's not introducing another bug. IMO the Koha UI should show what has actually been processed. It's better to re-enqueue a job manually than to tell users the job failed when it in fact succeeded.

Just my opinion, feel free to ignore it.

Back to PQA as I don't have more time to dedicate to this, but it would be better off In Discussion...

I never ignore it :) I will still check whether I am missing something in our discussion. This should go along with bug 32305. Still looking at it tomorrow.

Added dependency on 32305. See omnibus bug.

Pushed to master for 23.11. Nice work everyone, thanks!

Pushed to 23.05.x for 23.05.05. Nice work everyone!

Pushed to oldstable for 22.11.x