Bug 35819 - "No job found" error for BatchUpdateBiblioHoldsQueue (race condition)
Summary: "No job found" error for BatchUpdateBiblioHoldsQueue (race condition)
Status: Pushed to oldstable
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: unspecified
Hardware: All
OS: All
Priority: P5 - low
Severity: critical
Assignee: Jonathan Druart
QA Contact: Marcel de Rooy
URL:
Keywords:
Depends on: 29346
Blocks: 35092 35920
Reported: 2024-01-16 14:55 UTC by Jonathan Druart
Modified: 2024-03-19 15:41 UTC
CC: 9 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in: 24.05.00, 23.11.04, 23.05.10
Circulation function:


Attachments
Bug 35819: Retry to fetch job if it has not been enqueued yet (2.45 KB, patch)
2024-01-16 15:38 UTC, Jonathan Druart
[ALTERNATE] Bug 35819: Notify NACK when Job ID not found (1.87 KB, patch)
2024-01-16 17:51 UTC, Tomás Cohen Arazi (tcohen)
Bug 35819: Notify NACK and requeue when Job ID not found (3.88 KB, patch)
2024-01-18 14:23 UTC, Tomás Cohen Arazi (tcohen)
Bug 35819: Notify NACK and requeue when Job ID not found (3.98 KB, patch)
2024-01-19 08:35 UTC, Marcel de Rooy
Bug 35819: Add simple delay (1.24 KB, patch)
2024-01-19 08:35 UTC, Marcel de Rooy
Bug 35819: nack and not requeue if frame is invalid (2.38 KB, patch)
2024-01-23 09:17 UTC, Jonathan Druart
Bug 35819: Improve logging (2.43 KB, patch)
2024-01-26 13:30 UTC, Jonathan Druart
Bug 35819: Adjust es_indexer_daemon (3.24 KB, patch)
2024-01-30 08:31 UTC, Jonathan Druart
Bug 35819: Notify NACK and requeue when Job ID not found (4.00 KB, patch)
2024-01-31 13:41 UTC, Tomás Cohen Arazi (tcohen)
Bug 35819: Add simple delay (1.25 KB, patch)
2024-01-31 13:41 UTC, Tomás Cohen Arazi (tcohen)
Bug 35819: nack and not requeue if frame is invalid (2.43 KB, patch)
2024-01-31 13:41 UTC, Tomás Cohen Arazi (tcohen)
Bug 35819: Improve logging (2.49 KB, patch)
2024-01-31 13:41 UTC, Tomás Cohen Arazi (tcohen)
Bug 35819: Adjust es_indexer_daemon (3.29 KB, patch)
2024-01-31 13:41 UTC, Tomás Cohen Arazi (tcohen)
Bug 35819: Notify NACK and requeue when Job ID not found (4.09 KB, patch)
2024-02-05 11:26 UTC, Marcel de Rooy
Bug 35819: Add simple delay (1.35 KB, patch)
2024-02-05 11:26 UTC, Marcel de Rooy
Bug 35819: nack and not requeue if frame is invalid (2.53 KB, patch)
2024-02-05 11:27 UTC, Marcel de Rooy
Bug 35819: Improve logging (2.58 KB, patch)
2024-02-05 11:27 UTC, Marcel de Rooy
Bug 35819: Adjust es_indexer_daemon (3.58 KB, patch)
2024-02-05 11:27 UTC, Marcel de Rooy
Bug 35819: (QA follow-up) Prevent warning on uninitialized retries count (1.77 KB, patch)
2024-02-05 11:27 UTC, Marcel de Rooy

Description Jonathan Druart 2024-01-16 14:55:16 UTC
A BatchUpdateBiblioHoldsQueue job is triggered from AddRenewal if RealTimeHoldsQueue is set.

The enqueue happens inside a transaction, so the worker can receive the message to process before the transaction has been committed (and thus before the job exists in the DB).

The call stack is:
opac-renew: AddRenewal
In C4::Circulation::AddRenewal
3234     $schema->txn_do(sub{
3302         $item_object->store({ log_action => 0, skip_record_index => 1 }); 

In Koha::Item->store
 223     Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue(
 224         {
 225             biblio_ids => [ $self->biblionumber ]
 226         }   
 227     ) unless $params->{skip_holds_queue} or !C4::Context->preference('RealTimeHoldsQueue');

To recreate:
Turn on RealTimeHoldsQueue
Check an item in to a patron
At the OPAC, renew the checkout

=> Job 109 not found, or has wrong status main:: /kohadevbox/koha/misc/workers/background_jobs_worker.pl
Comment 1 Jonathan Druart 2024-01-16 15:27:11 UTC
We should have a "retry" mechanism for background jobs, or enqueuing with delay, or both.
Comment 2 Jonathan Druart 2024-01-16 15:38:51 UTC
Created attachment 161067 [details] [review]
Bug 35819: Retry to fetch job if it has not been enqueued yet

If a job has been enqueued in the middle of a txn, the worker will fetch
the message but fail because the job is not in the DB yet.

The ugliest but most effective option (as a quick fix that is easily
backportable) is to... wait a bit.
This patch sleeps 1 second and retries, up to 10 times.

That should be enough for the majority of jobs.
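
For illustration, a minimal sketch of that sleep-and-retry idea in the worker, assuming $args->{job_id} comes from the decoded frame body; the loop bounds and warning text are illustrative, not the exact patch code:

    use Koha::BackgroundJobs;

    my $job;
    for my $try ( 1 .. 10 ) {    # assumption: up to 10 retries of 1 second, as described above
        $job = Koha::BackgroundJobs->find( $args->{job_id} );
        last if $job;
        sleep 1;                 # give the enqueuing transaction time to commit
    }
    warn "Job " . $args->{job_id} . " still not found" unless $job;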
Comment 3 Tomás Cohen Arazi (tcohen) 2024-01-16 17:51:20 UTC
Created attachment 161071 [details] [review]
[ALTERNATE] Bug 35819: Notify NACK when Job ID not found
Comment 4 Tomás Cohen Arazi (tcohen) 2024-01-16 17:52:56 UTC
I'm not 100% confident in my patch, but I'm sending it in case it opens up other options for solving the issue. The idea is to not ACK if the job is not found, making RabbitMQ resend the notification to subscribers.
Comment 5 Jonathan Druart 2024-01-16 19:43:28 UTC
(In reply to Tomás Cohen Arazi from comment #4)
> I'm not 100% confident in my patch, but I'm sending it in case it opens up
> other options for solving the issue. The idea is to not ACK if the job is not
> found, making RabbitMQ resend the notification to subscribers.

Shouldn't you have the ack in the catch then? Otherwise the "frame not processed" situation will have the frame re-enqueued endlessly(?)
Comment 6 Jonathan Druart 2024-01-16 19:45:07 UTC
Cannot test right now, but not sure nack will re-enqueue, btw.
Comment 7 Jonathan Druart 2024-01-16 19:49:07 UTC
(In reply to Jonathan Druart from comment #6)
> Cannot test right now, but not sure nack will re-enqueue, btw.

https://www.rabbitmq.com/stomp.html
"""
NACK frames can optionally carry the requeue header which controls whether the message will be requeued or discarded/dead lettered. Default value is true.
"""
Comment 8 Marcel de Rooy 2024-01-17 12:12:55 UTC
The idea of the 2nd patch attracts me more than the sleep/retry.
This should serve, imo, as a temporary bugfix/workaround for backports etc.; as we discuss on bug 35092, we should move to a less hybrid approach. We should read new jobs from either the MQ or the DB, not both. See the options in bug 35092 comment 7. The current standing shows more people choosing option 2.
Comment 9 Nick Clemens (kidclamp) 2024-01-18 13:32:49 UTC
Why not move the enqueue out of the transaction? We can pass "skip_holds_queue" to the store and then handle as we do indexing
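
For illustration, a rough sketch of that alternative (not what the patches on this bug do): skip the holds queue update inside the transaction and enqueue after the commit, mirroring the AddRenewal excerpt quoted in the description:

    use C4::Context;
    use Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue;

    $schema->txn_do( sub {
        # inside the txn: skip the real-time holds queue update, as is
        # already done for record indexing
        $item_object->store( { log_action => 0, skip_record_index => 1, skip_holds_queue => 1 } );
    } );

    # after the commit, the job row is immediately visible to the worker
    Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue(
        { biblio_ids => [ $item_object->biblionumber ] }
    ) if C4::Context->preference('RealTimeHoldsQueue');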
Comment 10 Tomás Cohen Arazi (tcohen) 2024-01-18 13:34:32 UTC
OK, my patch works in my testing. Now writing a good test plan,
and researching a delayed requeue option. Will explain in the commit.
Comment 11 Tomás Cohen Arazi (tcohen) 2024-01-18 13:38:08 UTC
(In reply to Nick Clemens from comment #9)
> Why not move the enqueue out of the transaction? We can pass
> "skip_holds_queue" to the store and then handle as we do indexing

The thing is, you cannot always know that something is going to trigger a background job, so coding would become nightmare-ish.
Comment 12 Tomás Cohen Arazi (tcohen) 2024-01-18 14:23:53 UTC
Created attachment 161139 [details] [review]
Bug 35819: Notify NACK and requeue when Job ID not found

This patch makes the worker reject the incoming frame, putting the
message back in the queue, in the event the job ID doesn't exist yet.
That is the case when some actions are triggered inside a
transaction which hasn't been committed to the DB yet.

To test you will need 3 KTD shells
(a) mysql:
   $ ktd --shell
  k$ sudo koha-mysql kohadev

(b) logs:
   $ ktd --shell
   # for restarting the worker and looking at the logs
  k$ sudo koha-worker --restart kohadev  ; tail -f /var/log/koha/kohadev/worker-*.log
(c) running the test:
   $ ktd --shell

1. Have (a), (b) and (c) terminals ready
2. On (c), run:
   $ perl -MKoha::Database -MKoha::BackgroundJob::BatchUpdateBiblioHoldsQueue -e 'Koha::Database->schema->txn_do( sub { Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue({ biblio_ids => [ 1 ] }); sleep 1;  } );'
=> FAIL:
   * (b) shows (once) an error about a job not existing
3. On (a) run:
   > SELECT * FROM background_jobs;
=> FAIL: Notice that the job ID mentioned in step 2 is still in status 'new'.
4. Apply this patch
5. Ctrl+c on (b), and re-run to launch the worker with the patch applied
6. Repeat 2
=> SUCCESS (partial): The error about the job not existing is displayed
many times
7. Repeat 3
=> SUCCESS: The job ID mentioned in step 6 is now 'finished'.
8. Sign off :-D

Discussion:

* The `requeue` header I added is correct, but it is the default
  behavior anyway. I preferred to make it explicit, though.

* To avoid that burst of retries, we should requeue with some delay. I
  didn't manage to make it work (yet), but there's a 'delay' plugin for
  RabbitMQ [1]. We already install the 'stomp' plugin in
  koha-common.postinst, but the 'delay' plugin requires a separate
  download, which would need further investigation.

* As Nick and Marcel pointed out, we need to revisit the whole
  architecture, the need for an MQ (DB polling wouldn't have this
  problem), etc. But that's for another place.

[1] https://hevodata.com/learn/rabbitmq-delayed-message/#:~:text=To%20delay%20a%20message%2C%20the,to%20queues%20or%20other%20exchanges.
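
In outline, the worker-side change looks roughly like this; $conn, $frame and $args are the worker's existing variables (compare the snippet reviewed in comment 32 below), and the control flow is a sketch rather than the exact patch:

    use Koha::BackgroundJobs;

    my $job = Koha::BackgroundJobs->find( $args->{job_id} );
    if ( !$job ) {
        # The enqueuing transaction may not have committed yet: reject the
        # frame so RabbitMQ redelivers it later. requeue defaults to 'true',
        # but the patch sets it explicitly.
        $conn->nack( { frame => $frame, requeue => 'true' } );
    } else {
        # ... process the job, then acknowledge the frame ...
        $conn->ack( { frame => $frame } );
    }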
Comment 13 Tomás Cohen Arazi (tcohen) 2024-01-18 14:51:42 UTC
FTR: I manually installed the mentioned RabbitMQ plugin [1], and couldn't make it work.

[1] v3.8.9 here https://github.com/rabbitmq/rabbitmq-delayed-message-exchange
Comment 14 Marcel de Rooy 2024-01-19 07:34:34 UTC
Hmm Seeing:
PRECONDITION_FAILED - unknown delivery tag 1
Comment 15 Marcel de Rooy 2024-01-19 07:37:44 UTC
(In reply to Marcel de Rooy from comment #14)
> Hmm Seeing:
> PRECONDITION_FAILED - unknown delivery tag 1

"PRECONDITION_FAILED - unknown delivery tag" usually happens because of double ack-ing, ack-ing on wrong channels or ack-ing messages that should not be ack-ed.

=> ?
Comment 16 Marcel de Rooy 2024-01-19 07:43:52 UTC
(In reply to Marcel de Rooy from comment #15)
> (In reply to Marcel de Rooy from comment #14)
> > Hmm Seeing:
> > PRECONDITION_FAILED - unknown delivery tag 1
> 
> "PRECONDITION_FAILED - unknown delivery tag" usually happens because of
> double ack-ing, ack-ing on wrong channels or ack-ing messages that should
> not be ack-ed.
> 
> => ?

Cause found in my own changes ;)
Comment 17 Marcel de Rooy 2024-01-19 08:07:11 UTC
https://www.cloudamqp.com/docs/delayed-messages.html

Is this a usable pattern? Move the message to a 'delayed' queue, from which RabbitMQ pushes it back to the work queue when it 'times out'?
Comment 18 Marcel de Rooy 2024-01-19 08:35:51 UTC
Created attachment 161172 [details] [review]
Bug 35819: Notify NACK and requeue when Job ID not found

This patch makes the worker reject the incoming frame, putting the
message back in the queue, in the event the job ID doesn't exist yet.
That is the case when some actions are triggered inside a
transaction which hasn't been committed to the DB yet.

To test you will need 3 KTD shells
(a) mysql:
   $ ktd --shell
  k$ sudo koha-mysql kohadev

(b) logs:
   $ ktd --shell
   # for restarting the worker and looking at the logs
  k$ sudo koha-worker --restart kohadev  ; tail -f /var/log/koha/kohadev/worker-*.log
(c) running the test:
   $ ktd --shell

1. Have (a), (b) and (c) terminals ready
2. On (c), run:
   $ perl -MKoha::Database -MKoha::BackgroundJob::BatchUpdateBiblioHoldsQueue -e 'Koha::Database->schema->txn_do( sub { Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue({ biblio_ids => [ 1 ] }); sleep 1;  } );'
=> FAIL:
   * (b) shows (once) an error about a job not existing
3. On (a) run:
   > SELECT * FROM background_jobs;
=> FAIL: Notice that the job ID mentioned in step 2 is still in status 'new'.
4. Apply this patch
5. Ctrl+c on (b), and re-run to launch the worker with the patch applied
6. Repeat 2
=> SUCCESS (partial): The error about the job not existing is displayed
many times
7. Repeat 3
=> SUCCESS: The job ID mentioned in step 6 is now 'finished'.
8. Sign off :-D

Discussion:

* The `requeue` header I added is correct, but it is the default
  behavior anyway. I preferred to make it explicit, though.

* To avoid that burst of retries, we should requeue with some delay. I
  didn't manage to make it work (yet), but there's a 'delay' plugin for
  RabbitMQ [1]. We already install the 'stomp' plugin in
  koha-common.postinst, but the 'delay' plugin requires a separate
  download, which would need further investigation.

* As Nick and Marcel pointed out, we need to revisit the whole
  architecture, the need for an MQ (DB polling wouldn't have this
  problem), etc. But that's for another place.

[1] https://hevodata.com/learn/rabbitmq-delayed-message/#:~:text=To%20delay%20a%20message%2C%20the,to%20queues%20or%20other%20exchanges.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 19 Marcel de Rooy 2024-01-19 08:35:54 UTC
Created attachment 161173 [details] [review]
Bug 35819: Add simple delay

Here I add a 500 ms delay. In my testing with the 1s sleep from the test
plan, I might see one or two 'not found' lines. Obviously things
depend on the time needed before the txn commits, but it will
reduce the flood of these messages.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
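
A minimal sketch of such a delay, assuming Time::HiRes is used for the sub-second sleep (the patch itself is not quoted in this comment, so treat the exact mechanism as an assumption):

    use Time::HiRes qw(usleep);
    use Koha::BackgroundJobs;

    # before re-checking the DB for the job row, wait 500 ms to give the
    # enqueuing transaction a chance to commit
    usleep(500_000);
    my $job = Koha::BackgroundJobs->find( $args->{job_id} );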
Comment 20 Jonathan Druart 2024-01-23 09:03:16 UTC
Re-enqueue is definitely better than the naive sleep approach, great!

It is working nicely in my tests.

However I think we should:
1. apply this change to misc/workers/es_indexer_daemon.pl (ideally have the duplicated code moved to Koha::BackgroundJobs but that's for another day)
2. remove the "Job not found" warnings from the logs if the job is actually processed later.
3. correctly handle "Frame not processed". If we are enqueuing a job without correctly encoding the data (not JSON) we will never ack or nack and the worker will get stuck.
Comment 21 Jonathan Druart 2024-01-23 09:17:35 UTC
Created attachment 161267 [details] [review]
Bug 35819: nack and not requeue if frame is invalid

If a frame cannot be correctly processed (most probably because the body
is not valid JSON) then we are not acking or nacking the frame and the
worker is stuck.

In this specific case we should nack without requeuing the frame.

NOTE that requeue must be 'true' or 'false', not 1 or 0, or the default
'true' will be used.
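
For illustration, roughly how this fits into the worker's frame-handling loop; the decode_json call, the log message and the nack arguments follow the snippet reviewed in comment 32, while the surrounding control flow is a sketch:

    use JSON qw(decode_json);
    use Koha::Logger;

    my $args = eval { decode_json( $frame->body ) };    # body must be valid JSON
    unless ($args) {
        Koha::Logger->get( { interface => 'worker' } )->warn("Frame does not have correct args, ignoring it");
        # requeue must be the string 'false'; 0 would fall back to the default 'true'
        $conn->nack( { frame => $frame, requeue => 'false' } );
        next;    # move on to the next frame instead of hanging
    }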
Comment 22 Jonathan Druart 2024-01-23 09:18:22 UTC
(In reply to Jonathan Druart from comment #20)
> 3. correctly handle "Frame not processed". If we are enqueuing a job without
> correctly encoding the data (not JSON) we will never ack or nack and the
> worker will get stuck.

Done in previous patch "nack and not requeue if frame is invalid".
Comment 23 Jonathan Druart 2024-01-23 09:34:08 UTC
(In reply to Jonathan Druart from comment #20)
> 2. remove the "Job not found" warnings from the logs if the job is actually
> processed later.

Would it make sense to store (in worker memory) the number of tries for a given job? I think so, otherwise we may requeue a job endlessly...
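
A sketch of what that could look like; the hash name matches the one reviewed in comment 33 below, but MAX_RETRIES and the give-up behaviour are assumptions for illustration:

    use constant MAX_RETRIES => 10;    # assumption: the actual limit is not quoted in this bug

    my $not_found_retries = {};        # per-worker, in-memory only

    # in the frame-handling loop, when the job row is not in the DB yet:
    if ( ++$not_found_retries->{ $args->{job_id} } > MAX_RETRIES ) {
        # give up, so the message is not redelivered forever
        $conn->nack( { frame => $frame, requeue => 'false' } );
    } else {
        $conn->nack( { frame => $frame, requeue => 'true' } );
    }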
Comment 24 Marcel de Rooy 2024-01-23 11:13:48 UTC
(In reply to Jonathan Druart from comment #23)
> (In reply to Jonathan Druart from comment #20)
> > 2. remove the "Job not found" warnings from the logs if the job is actually
> > processed later.
> 
> Would it make sense to store (in worker memory) the number of tries for a
> given job? I think so, otherwise we may requeue a job endlessly...

It sounds good. But does it make sense in view of separating the MQ and DB loop as suggested on the omnibus?
Comment 25 Jonathan Druart 2024-01-24 09:30:59 UTC
(In reply to Marcel de Rooy from comment #24)
> (In reply to Jonathan Druart from comment #23)
> > (In reply to Jonathan Druart from comment #20)
> > > 2. remove the "Job not found" warnings from the logs if the job is actually
> > > processed later.
> > 
> > Would it make sense to store (in worker memory) the number of tries for a
> > given job? I think so, otherwise we may requeue a job endlessly...
> 
> It sounds good. But does it make sense in view of separating the MQ and DB
> loop as suggested on the omnibus?

What do you suggest? We keep it buggy? :)
Comment 26 Marcel de Rooy 2024-01-24 12:26:19 UTC
(In reply to Jonathan Druart from comment #25)
> (In reply to Marcel de Rooy from comment #24)
> > (In reply to Jonathan Druart from comment #23)
> > > (In reply to Jonathan Druart from comment #20)
> > > > 2. remove the "Job not found" warnings from the logs if the job is actually
> > > > processed later.
> > > 
> > > Would it make sense to store (in worker memory) the number of tries for a
> > > given job? I think so, otherwise we may requeue a job endlessly...
> > 
> > It sounds good. But does it make sense in view of separating the MQ and DB
> > loop as suggested on the omnibus?
> 
> What do you suggest? We keep it buggy? :)

LOL
As you might be aware, the omnibus was created to resolve an enormous bunch of bugs related to this feature.
Comment 27 Jonathan Druart 2024-01-26 13:30:11 UTC
Created attachment 161527 [details] [review]
Bug 35819: Improve logging

Do not log a warning if the job will be processed later (retried); log a
debug message instead.

Have a specific log message for a bad status.
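
For illustration, the kind of distinction this makes; the variable names and messages are illustrative, not the patch's exact wording:

    use Koha::Logger;

    my $logger = Koha::Logger->get( { interface => 'worker' } );
    if ($will_retry) {
        # the job row is probably just not committed yet; it will be picked
        # up again when the frame is redelivered
        $logger->debug("Job $job_id not found yet, will retry");
    } else {
        $logger->warn("Job $job_id not found, or has wrong status");
    }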
Comment 28 Jonathan Druart 2024-01-26 13:30:52 UTC
(In reply to Jonathan Druart from comment #20)
> 2. remove the "Job not found" warnings from the logs if the job is actually
> processed later.

Done in previous patch "Bug 35819: Improve logging"
Comment 29 Jonathan Druart 2024-01-30 08:31:02 UTC
Created attachment 161624 [details] [review]
Bug 35819: Adjust es_indexer_daemon
Comment 30 Jonathan Druart 2024-01-30 08:31:31 UTC
(In reply to Jonathan Druart from comment #20)
> Re-enqueue is definitely better than the naive sleep approach, great!
> 
> It is working nicely in my tests.
> 
> However I think we should:
> 1. apply this change to misc/workers/es_indexer_daemon.pl (ideally have the
> duplicated code moved to Koha::BackgroundJobs but that's for another day)

Done in "Adjust es_indexer_daemon". Looks like I need to take over here.
Ready for testing.
Comment 31 Tomás Cohen Arazi (tcohen) 2024-01-31 11:50:32 UTC
(In reply to Jonathan Druart from comment #20)
> Re-enqueue is definitely better than the naive sleep approach, great!
> 
> It is working nicely in my tests.
> 
> However I think we should:
> 1. apply this change to misc/workers/es_indexer_daemon.pl (ideally have the
> duplicated code moved to Koha::BackgroundJobs but that's for another day)
> 2. remove the "Job not found" warnings from the logs if the job is actually
> processed later.
> 3. correctly handle "Frame not processed". If we are enqueuing a job without
> correctly encoding the data (not JSON) we will never ack or nack and the
> worker will get stuck.

I think I missed a lot on this bug conversation :-P

Sorry!
Comment 32 Tomás Cohen Arazi (tcohen) 2024-01-31 12:40:46 UTC
Comment on attachment 161267 [details] [review]
Bug 35819: nack and not requeue if frame is invalid

Review of attachment 161267 [details] [review]:
-----------------------------------------------------------------

::: misc/workers/background_jobs_worker.pl
@@ +125,5 @@
>          };
>  
> +        unless ( $args ) {
> +            Koha::Logger->get({ interface => 'worker' })->warn(sprintf "Frame does not have correct args, ignoring it");
> +            $conn->nack( { frame => $frame, requeue => 'false' } );

This is great! I didn't manage to make it work properly!
Comment 33 Tomás Cohen Arazi (tcohen) 2024-01-31 12:42:34 UTC
Comment on attachment 161527 [details] [review]
Bug 35819: Improve logging

Review of attachment 161527 [details] [review]:
-----------------------------------------------------------------

::: misc/workers/background_jobs_worker.pl
@@ +73,4 @@
>  $max_processes ||= C4::Context->config('background_jobs_worker')->{max_processes} if C4::Context->config('background_jobs_worker');
>  $max_processes ||= 1;
>  
> +my $not_found_retries = {};

This will only work for a single worker, right?

I think it is a good compromise approach, and running multiple workers would just make it retry more in the worst case.

Just noting it.
Comment 34 Tomás Cohen Arazi (tcohen) 2024-01-31 12:56:00 UTC
This is NSO (Needs Signoff), but the original patch is SO (signed off) by a QA team member and has follow-up patches from two QA team members... This shouldn't block our bugs :-D

Testing and adding my QA stamp.
Comment 35 Tomás Cohen Arazi (tcohen) 2024-01-31 13:41:33 UTC
Created attachment 161682 [details] [review]
Bug 35819: Notify NACK and requeue when Job ID not found

This patch makes the worker reject the incoming frame, putting the
message back in the queue, in the event the job ID doesn't exist yet.
That is the case when some actions are triggered inside a
transaction which hasn't been committed to the DB yet.

To test you will need 3 KTD shells
(a) mysql:
   $ ktd --shell
  k$ sudo koha-mysql kohadev

(b) logs:
   $ ktd --shell
   # for restarting the worker and looking at the logs
  k$ sudo koha-worker --restart kohadev  ; tail -f /var/log/koha/kohadev/worker-*.log
(c) running the test:
   $ ktd --shell

1. Have (a), (b) and (c) terminals ready
2. On (c), run:
   $ perl -MKoha::Database -MKoha::BackgroundJob::BatchUpdateBiblioHoldsQueue -e 'Koha::Database->schema->txn_do( sub { Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue({ biblio_ids => [ 1 ] }); sleep 1;  } );'
=> FAIL:
   * (b) shows (once) an error about a job not existing
3. On (a) run:
   > SELECT * FROM background_jobs;
=> FAIL: Notice that the job ID mentioned in step 2 is still in status 'new'.
4. Apply this patch
5. Ctrl+c on (b), and re-run to launch the worker with the patch applied
6. Repeat 2
=> SUCCESS (partial): The error about the job not existing is displayed
many times
7. Repeat 3
=> SUCCESS: The job ID mentioned in step 6 is now 'finished'.
8. Sign off :-D

Discussion:

* The `requeue` header I added is correct, but it is the default
  behavior anyway. I preferred to make it explicit, though.

* To avoid that burst of retries, we should requeue with some delay. I
  didn't manage to make it work (yet), but there's a 'delay' plugin for
  RabbitMQ [1]. We already install the 'stomp' plugin in
  koha-common.postinst, but the 'delay' plugin requires a separate
  download, which would need further investigation.

* As Nick and Marcel pointed out, we need to revisit the whole
  architecture, the need for an MQ (DB polling wouldn't have this
  problem), etc. But that's for another place.

[1] https://hevodata.com/learn/rabbitmq-delayed-message/#:~:text=To%20delay%20a%20message%2C%20the,to%20queues%20or%20other%20exchanges.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 36 Tomás Cohen Arazi (tcohen) 2024-01-31 13:41:36 UTC
Created attachment 161683 [details] [review]
Bug 35819: Add simple delay

Here I add a 500 ms delay. In my testing with the 1s sleep from the test
plan, I might see one or two 'not found' lines. Obviously things
depend on the time needed before the txn commits, but it will
reduce the flood of these messages.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 37 Tomás Cohen Arazi (tcohen) 2024-01-31 13:41:39 UTC
Created attachment 161684 [details] [review]
Bug 35819: nack and not requeue if frame is invalid

If a frame cannot be correctly processed (most probably because the body
is not valid JSON) then we are not acking or nacking the frame and the
worker is stuck.

In this specific case we should nack without requeuing the frame.

NOTE that requeue must be 'true' or 'false', not 1 or 0, or the default
'true' will be used.

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 38 Tomás Cohen Arazi (tcohen) 2024-01-31 13:41:42 UTC
Created attachment 161685 [details] [review]
Bug 35819: Improve logging

Do not log a warning if the job will be processed later (retried); log a
debug message instead.

Have a specific log message for a bad status.

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 39 Tomás Cohen Arazi (tcohen) 2024-01-31 13:41:45 UTC
Created attachment 161686 [details] [review]
Bug 35819: Adjust es_indexer_daemon

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 40 Jonathan Druart 2024-01-31 14:27:55 UTC
(In reply to Tomás Cohen Arazi from comment #33)
> Comment on attachment 161527 [details] [review] [review]
> Bug 35819: Improve logging
> 
> Review of attachment 161527 [details] [review] [review]:
> -----------------------------------------------------------------
> 
> ::: misc/workers/background_jobs_worker.pl
> @@ +73,4 @@
> >  $max_processes ||= C4::Context->config('background_jobs_worker')->{max_processes} if C4::Context->config('background_jobs_worker');
> >  $max_processes ||= 1;
> >  
> > +my $not_found_retries = {};
> 
> This will only work for a single worker, right?
> 
> I think it is a good compromise approach, and running multiple workers would
> just make it retry more in the worst case.
> 
> Just noting it.

Yes, this is not ideal at all, but I wanted to avoid the wrong warning in the log that would cause confusion, as well as prevent an invalid job (which should never happen, right! :D) from blocking the worker.
Comment 41 Katrin Fischer 2024-01-31 21:30:31 UTC
Marcel, could you maybe do a final review here?
Comment 42 Marcel de Rooy 2024-02-01 07:09:30 UTC
(In reply to Katrin Fischer from comment #41)
> Marcel, could you maybe do a final review here?

Sure, will do. Changing status
Comment 43 Marcel de Rooy 2024-02-05 11:26:55 UTC
Created attachment 161752 [details] [review]
Bug 35819: Notify NACK and requeue when Job ID not found

This patch makes the worker reject the incoming frame, putting the
message back in the queue, in the event the job ID doesn't exist yet.
That is the case when some actions are triggered inside a
transaction which hasn't been committed to the DB yet.

To test you will need 3 KTD shells
(a) mysql:
   $ ktd --shell
  k$ sudo koha-mysql kohadev

(b) logs:
   $ ktd --shell
   # for restarting the worker and looking at the logs
  k$ sudo koha-worker --restart kohadev  ; tail -f /var/log/koha/kohadev/worker-*.log
(c) running the test:
   $ ktd --shell

1. Have (a), (b) and (c) terminals ready
2. On (c), run:
   $ perl -MKoha::Database -MKoha::BackgroundJob::BatchUpdateBiblioHoldsQueue -e 'Koha::Database->schema->txn_do( sub { Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue({ biblio_ids => [ 1 ] }); sleep 1;  } );'
=> FAIL:
   * (b) shows (once) an error about a job not existing
3. On (a) run:
   > SELECT * FROM background_jobs;
=> FAIL: Notice that the job ID mentioned in step 2 is still in status 'new'.
4. Apply this patch
5. Ctrl+c on (b), and re-run to launch the worker with the patch applied
6. Repeat 2
=> SUCCESS (partial): The error about the job not existing is displayed
many times
7. Repeat 3
=> SUCCESS: The job ID mentioned in step 6 is now 'finished'.
8. Sign off :-D

Discussion:

* The `requeue` header I added is correct, but it is the default
  behavior anyway. I preferred to make it explicit, though.

* To avoid that burst of retries, we should requeue with some delay. I
  didn't manage to make it work (yet), but there's a 'delay' plugin for
  RabbitMQ [1]. We already install the 'stomp' plugin in
  koha-common.postinst, but the 'delay' plugin requires a separate
  download, which would need further investigation.

* As Nick and Marcel pointed out, we need to revisit the whole
  architecture, the need for an MQ (DB polling wouldn't have this
  problem), etc. But that's for another place.

[1] https://hevodata.com/learn/rabbitmq-delayed-message/#:~:text=To%20delay%20a%20message%2C%20the,to%20queues%20or%20other%20exchanges.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 44 Marcel de Rooy 2024-02-05 11:26:58 UTC
Created attachment 161753 [details] [review]
Bug 35819: Add simple delay

Here I add a 500 ms delay. In my testing with the 1s sleep from the test
plan, I might see one or two 'not found' lines. Obviously things
depend on the time needed before the txn commits, but it will
reduce the flood of these messages.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 45 Marcel de Rooy 2024-02-05 11:27:01 UTC
Created attachment 161754 [details] [review]
Bug 35819: nack and not requeue if frame is invalid

If a frame cannot be correctly processed (most probably because the body
is not valid JSON) then we are not acking or nacking the frame and the
worker is stuck.

In this specific case we should nack without requeuing the frame.

NOTE that requeue must be 'true' or 'false', not 1 or 0, or the default
'true' will be used.

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 46 Marcel de Rooy 2024-02-05 11:27:03 UTC
Created attachment 161755 [details] [review]
Bug 35819: Improve logging

Do not log a warning if the job will be processed later (retried); log a
debug message instead.

Have a specific log message for a bad status.

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 47 Marcel de Rooy 2024-02-05 11:27:06 UTC
Created attachment 161756 [details] [review]
Bug 35819: Adjust es_indexer_daemon

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Add forgotten module
Comment 48 Marcel de Rooy 2024-02-05 11:27:09 UTC
Created attachment 161757 [details] [review]
Bug 35819: (QA follow-up) Prevent warning on uninitialized retries count

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 49 Marcel de Rooy 2024-02-05 11:27:55 UTC
No problem for me. Moving this forward now.

 WARN   misc/workers/background_jobs_worker.pl
   WARN   tidiness
                The file is less tidy than before (bad/messy lines before: 12, now: 15)

 WARN   misc/workers/es_indexer_daemon.pl
   WARN   tidiness
                The file is less tidy than before (bad/messy lines before: 19, now: 23)
Comment 50 Katrin Fischer 2024-03-07 15:37:50 UTC
Pushed for 24.05!

Well done everyone, thank you!
Comment 51 Fridolin Somers 2024-03-11 09:54:22 UTC
Pushed to 23.11.x for 23.11.04
Comment 52 Lucas Gass (lukeg) 2024-03-19 15:41:12 UTC
Backported to 23.05.x for upcoming 23.05.10