Bug 32481 - Rabbit times out when too many jobs are queued and the response takes too long
Summary: Rabbit times out when too many jobs are queued and the response takes too long
Status: Pushed to oldoldstable
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: master
Hardware: All
OS: All
Importance: P5 - low critical
Assignee: David Cook
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks: 32594 35089
 
Reported: 2022-12-16 11:31 UTC by Nick Clemens
Modified: 2023-10-18 13:56 UTC
CC: 8 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
23.05.00, 22.11.02, 22.05.09, 21.11.16


Attachments
Bug 32481: Limit prefetch size for background jobs worker (1.37 KB, patch)
2022-12-22 22:20 UTC, David Cook
Test background jobs (4.80 KB, patch)
2023-01-06 14:48 UTC, Jonathan Druart
Bug 32481: Use correct prefetch syntax for RabbitMQ (1.11 KB, patch)
2023-01-09 00:58 UTC, David Cook
Bug 32481: Limit prefetch size for background jobs worker (1.84 KB, patch)
2023-01-10 13:31 UTC, Jonathan Druart
Bug 32481: Limit prefetch size for background jobs worker (1.90 KB, patch)
2023-01-11 15:56 UTC, Nick Clemens

Description Nick Clemens 2022-12-16 11:31:28 UTC
We saw this happen several times - a library would queue up 6 batch modifications that would take roughly 6 minutes each.

The jobs would process normally until the configured 30-minute timeout hit - then the remaining job would fail, rabbit would disconnect and not reconnect, and jobs would pile up.

It seems that koha receives all of the jobs immediately, but only acknowledges each one when it completes - so the final job was waiting over 30 minutes for acknowledgement, and this caused issues.
Comment 1 Jonathan Druart 2022-12-20 09:45:21 UTC
https://www.rabbitmq.com/consumers.html#acknowledgement-timeout

"""
The timeout can be deactivated using advanced.config. This is not recommended:

%% advanced.config
[
  {rabbit, [
    {consumer_timeout, undefined}
  ]}
].

Instead of disabling the timeout entirely, consider using a high value (for example, a few hours).
"""
Comment 2 Katrin Fischer 2022-12-20 10:15:03 UTC
I am slightly confused by this one - wasn't the idea of RabbitMQ to avoid just this kind of thing? Having a server deal with the jobs so nothing would be lost and it would all be done in due time? It seems strange that we run into limitations so soon.
Comment 3 David Cook 2022-12-20 23:20:55 UTC
(In reply to Katrin Fischer from comment #2)
> I am slightly confused by this one, wasn't the idea of RabbitMQ to avoid
> just this kind of thing? Having a server dealing with the jobs so nothing
> would be lost and it all done in due time? It seems strange that we run into
> limitations so soon.

Generally speaking, Koha developers are not experienced with using advanced tools and techniques. Even the most hardcore of Koha developers are still just beginners with things like distributed computing. 

According to the RabbitMQ Work Queues tutorial, we're doing the right thing by ACKing after the message has been processed.

I think options would be increasing the timeout, or changing how we're enqueuing tasks.

In this case, are the "6 batch modifications" actually 1 batch modification job for 6 items? Why were they taking 6 minutes?
Comment 4 David Cook 2022-12-20 23:22:32 UTC
(In reply to Nick Clemens from comment #0)
> We saw this happen several times - a library would queue up 6 batch
> modifications that would take roughly 6 minutes each.

Is that 1 job for 6 items, or is that 6 different jobs? Why were the jobs taking 6 minutes?

> The jobs would process normally, until the 30 minute configured timeout
> would hit - then the remaining job would fail, and rabbit would disconnect
> and not reconnect, and jobs would pile up

When you say "rabbit would disconnect and not reconnect", are you talking about the RabbitMQ server disconnecting, and then not allowing the koha-worker to reconnect? 

> It seems that koha receives all of the jobs immediately, but only
> acknowledges when complete - so the final job was waiting over 30 minutes
> for acknowledgement and this caused issues

What do you mean by "koha"? Do you mean "koha-worker"? I think we need more information here.
Comment 5 Katrin Fischer 2022-12-21 08:11:29 UTC
A timeout seems arbitrary when you ultimately want all jobs to be processed and nothing be lost.
Comment 6 Julian Maurice 2022-12-21 08:32:10 UTC
(In reply to David Cook from comment #3)
> According to the RabbitMQ Work Queues tutorial, we're doing the right thing
> by ACKing after the message has been processed.

According to that tutorial, this allows RabbitMQ to redeliver the message to another worker. But do we really want that? A background job should not be restarted if it failed, because it might fail forever, or because running it more than once might make unwanted modifications to the database.
I believe that the background job worker should acknowledge the message as soon as it receives it.
Comment 7 Julian Maurice 2022-12-21 08:34:53 UTC
> as soon as it receives it...
or at least as soon as it has validated that the message corresponds to an existing pending job, or something like that.
Comment 8 David Cook 2022-12-21 22:51:53 UTC
(In reply to Katrin Fischer from comment #5)
> A timeout seems arbitrary when you ultimately want all jobs to be processed
> and nothing be lost.

The RabbitMQ message broker doesn't know if the message was received by the consumer though. All it knows is that it sent a message. It doesn't know that it's being worked on.
Comment 9 David Cook 2022-12-21 22:59:26 UTC
(In reply to Julian Maurice from comment #6)
> According to that tutorial, this allows RabbitMQ to redeliver the message to
> another worker. But do we really want that ? 

Theoretically, yes. If you have multiple machines with workers running and 1 machine crashes due to a power outage, you probably want the message to be sent to a different worker on a different machine.

A simpler scenario is RabbitMQ delivers a message, the worker is working on it, and a sysadmin reboots the server. When it comes back online, RabbitMQ and the worker should start up, and RabbitMQ should re-deliver the message to the worker.

> A background job should not be
> restarted if it failed, because it might fail forever, or because by running
> it more than once it might do unwanted modifications to the database.

You make a good point about the failing forever. In practice, that seems to be the most common scenario I've encountered. The worker crashes due to a programming error, and then it is just stuck in an infinite failure loop. Not good. Of course, you could argue that isn't the problem of RabbitMQ, but rather someone coded the worker badly ;). 

Likewise, unwanted modifications to the database can be avoided by coding the worker better.

> I believe that the background job worker should acknowledge the message as
> soon as it receives it.

That's certainly an option.

The downside is that if the worker isn't able to complete handling of the message, then that message is gone forever. 

That might not be a big deal with Koha, since we have the "database fallback". 

Of course, that could get you into an infinite failure loop as well.
Comment 10 David Cook 2022-12-21 23:03:06 UTC
(In reply to David Cook from comment #8)
> (In reply to Katrin Fischer from comment #5)
> > A timeout seems arbitrary when you ultimately want all jobs to be processed
> > and nothing be lost.
> 
> The RabbitMQ message broker doesn't know if the message was received by the
> consumer though. All it knows is that it sent a message. It doesn't know
> that it's being worked on.

I run a large IoT system and without timeouts you end up waiting for a response forever. 

30 minutes is arbitrary but they make it configurable so that you can adjust it to your workload.

Of course, in the case of Koha, we have less control over what people do with their RabbitMQ.
Comment 11 David Cook 2022-12-21 23:25:50 UTC
It's an interesting topic though...

https://docs.celeryq.dev/en/latest/faq.html#should-i-use-retry-or-acks-late

It looks like Celery workers acknowledge messages when they receive them and then they do work. 

They provide an "acks_late" option for tasks where crashes would be a problem.

--

Maybe it is better to ack early so that we can handle longer running tasks, and then we handle failure scenarios more as edge cases...

It's probably more likely that you'll have a long running task than an unexpected crash. 

I think if there is a crash (say a VM power outage rather than a fatal error executing a Perl function), the job should get stuck in "started" state as well? That would allow sysadmins to then deal with that situation (or we could have a cronjob that "times out" jobs that have been in "started" state for longer than X time).

--

In that way, we still have timeouts, but we're putting those timeouts into the application rather than relying on RabbitMQ's...
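
For illustration, an "ack early" variant would just move the ack before the processing step in the worker loop. A minimal sketch, assuming the existing receive/decode/lookup structure of misc/background_jobs_worker.pl (the names here are illustrative, not the exact code):

  # Rough sketch only, not a patch: assumes the worker's existing $conn (Net::Stomp),
  # decode_json from the JSON module, and the Koha::BackgroundJobs lookup it already does.
  my $frame = $conn->receive_frame;
  my $args  = decode_json( $frame->body );
  my $job   = Koha::BackgroundJobs->find( $args->{job_id} );
  $conn->ack( { frame => $frame } );    # acknowledge on receipt...
  $job->process( $args ) if $job;       # ...so a long-running task can no longer hit consumer_timeout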
Comment 12 Nick Clemens 2022-12-22 14:25:25 UTC
(In reply to David Cook from comment #4)
> (In reply to Nick Clemens from comment #0)
> > We saw this happen several times - a library would queue up 6 batch
> > modifications that would take roughly 6 minutes each.
> 
> Is that 1 job for 6 items, or is that 6 different jobs? Why were the jobs
> taking 6 minutes?

6 batch jobs of several hundred items, submitted one after the other

> 
> > The jobs would process normally, until the 30 minute configured timeout
> > would hit - then the remaining job would fail, and rabbit would disconnect
> > and not reconnect, and jobs would pile up
> 
> When you say "rabbit would disconnect and not reconnect", are you talking
> about the RabbitMQ server disconnecting, and then not allowing the
> koha-worker to reconnect? 

Rabbit crashed for all intents and purposes - the koha worker could not connect, and only a restart would begin to process jobs again.

> 
> > It seems that koha receives all of the jobs immediately, but only
> > acknowledges when complete - so the final job was waiting over 30 minutes
> > for acknowledgement and this caused issues
> 
> What do you mean by "koha"? Do you mean "koha-worker"? I think we need more
> information here.

Yes, the koha-worker - it seems to get the frame, but not send an ack
Comment 13 David Cook 2022-12-22 22:15:22 UTC
(In reply to Nick Clemens from comment #12)
> > > It seems that koha receives all of the jobs immediately, but only
> > > acknowledges when complete - so the final job was waiting over 30 minutes
> > > for acknowledgement and this caused issues

Thanks for clarifying, Nick. Your comment also made me re-read the description, and it made me realize that I've had this same problem before on a different project, although it was with AMQP rather than STOMP. Let me see the STOMP way of doing it...

The problem is that we're not defining a prefetch size. You can see the problem here in ./misc/background_jobs_worker.pl:

$conn->subscribe({ destination => sprintf("/queue/%s-%s", $namespace, $queue), ack => 'client' });

As per the docs, we need to set "'activemq.prefetchSize' => 1" in that subscribe hashref argument:
https://metacpan.org/pod/Net::Stomp#activemq.prefetchSize

If you/we set that, I would bet that your problem goes away.
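
Concretely, the change proposed here would look something like this (a sketch only; comments 30-31 below show that RabbitMQ's STOMP plugin actually needs a different header name):

  $conn->subscribe({
      destination             => sprintf( "/queue/%s-%s", $namespace, $queue ),
      ack                     => 'client',
      'activemq.prefetchSize' => 1,    # header name proposed here; see comments 30-31 below
  });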

--

I have a RabbitMQ AMQP worker on another project that consumes many thousands of messages a day, and I was noticing timeouts even though it was blazing through the messages. I changed the prefetch size to 1 and ever since there have been no more timeouts. It's just worked like a beast!

--

(Katrin, the above is what I mean about us all being beginners with this stuff.)
Comment 14 David Cook 2022-12-22 22:20:11 UTC
Created attachment 144809 [details] [review]
Bug 32481: Limit prefetch size for background jobs worker

This patch adds a prefetch size of 1 to the background jobs worker,
so that it fetches 1 message at a time. Without this change,
the RabbitMQ connection times out when too many messages for slow tasks
are fetched at the same time.

To test:
0. Apply patch
1. Run background worker
2. Rapidly enqueue multiple jobs that in total will take longer
than 30 minutes to process
Comment 15 David Cook 2022-12-22 22:21:23 UTC
The test plan is a bit lacklustre because it's probably going to be hard for people to test. 

I suppose we can just test that it doesn't cause any problems with the current background jobs, and then Nick can confirm whether or not it fixes the problem for him.
Comment 16 Nick Clemens 2022-12-23 16:03:05 UTC
(In reply to David Cook from comment #15)
> The test plan is a bit lacklustre because it's probably going to be hard for
> people to test. 
> 
> I suppose we can just test that it doesn't cause any problems with the
> current background jobs, and then Nick can confirm whether or not it fixes
> the problem for him.

I'll try to see if I can reproduce on a test environment, thanks David!
Comment 17 Jonathan Druart 2023-01-04 10:36:49 UTC
And what about the 30 minutes timeout?
Comment 18 Jonathan Druart 2023-01-04 11:06:39 UTC
I think I recreated the problem this way:

% more /etc/rabbitmq/rabbitmq.conf
consumer_timeout = 15000

Add a sleep 10 in Koha::BackgroundJob::BatchUpdateItem->process (sketched below)

Then enqueue several batch item mod jobs.
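
The sleep goes roughly here (illustrative only; the surrounding code of BatchUpdateItem is not shown in this report):

  sub process {
      my ( $self, $args ) = @_;
      sleep 10;    # make each job slow enough to exceed consumer_timeout = 15000 ms
      ...          # existing batch item modification logic, unchanged
  }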

However:
1. I don't see jobs stuck in "new" (without the patch)

2. I am seeing the following in the rabbitmq logs, with and without the patch

2023-01-04 11:01:03.041 [warning] <0.2774.0> Consumer Q_/queue/koha_kohadev-long_tasks on channel 1 has timed out waiting on consumer acknowledgement. Timeout used: 15000 ms                                        
2023-01-04 11:01:03.041 [error] <0.2774.0> Channel error on connection <0.2764.0> (127.0.0.1:39578 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:                                                        
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1                                                                                                                   
2023-01-04 11:01:03.042 [error] <0.2758.0> STOMP error frame sent:                                                                                                                                                   
Message: precondition_failed
Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
Server private detail: none
2023-01-04 11:01:03.042 [info] <0.2758.0> closing STOMP connection <0.2758.0> (127.0.0.1:39578 -> 127.0.0.1:61613)
Comment 19 Nick Clemens 2023-01-04 17:01:22 UTC
(In reply to Jonathan Druart from comment #18)
> I think I recreate the problem this way:
> 
> % more /etc/rabbitmq/rabbitmq.conf
> consumer_timeout = 15000
> 
> Add a sleep 10 in Koha::BackgroundJob::BatchUpdateItem->process
> 
> Then enqueue several batch item mod jobs.
> 
> However:
> 1. I don't see jobs stuck in "new" (without the patch)
> 
> 2. I am seeing the following in the rabbitmq logs, with and without the patch
> 
> 2023-01-04 11:01:03.041 [warning] <0.2774.0> Consumer Q_/queue/koha_kohadev-long_tasks on channel 1 has timed out waiting on consumer acknowledgement. Timeout used: 15000 ms
> 2023-01-04 11:01:03.041 [error] <0.2774.0> Channel error on connection <0.2764.0> (127.0.0.1:39578 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
> operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
> 2023-01-04 11:01:03.042 [error] <0.2758.0> STOMP error frame sent:
> Message: precondition_failed
> Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
> Server private detail: none
> 2023-01-04 11:01:03.042 [info] <0.2758.0> closing STOMP connection <0.2758.0> (127.0.0.1:39578 -> 127.0.0.1:61613)

I played with this a bunch, and things seem to sometimes recover and sometimes not.

Setting rabbitmq.conf with:
consumer_timeout = 10000

Adding "sleep 1;" in the Koha/BackgroundJob/UpdateElasticIndex.pm

sudo koha-mysql kohadev
DELETE FROM biblio WHERE biblionumber=269;
DELETE FROM biblio WHERE biblionumber=72;

Set SearchEngine syspref to 'Elastic'

perl misc/maintenance/touch_all_biblios.pl

With or without the patch, I get errors like:
2023-01-04 16:49:40.689 [warning] <0.7712.0> Consumer Q_/queue/koha_kohadev-default on channel 1 has timed out waiting on consumer acknowledgement. Timeout used: 10000 ms
2023-01-04 16:49:40.692 [error] <0.7712.0> Channel error on connection <0.7703.0> (127.0.0.1:60178 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
2023-01-04 16:49:40.692 [error] <0.7700.0> STOMP error frame sent:
Message: precondition_failed
Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
Server private detail: none
2023-01-04 16:49:40.693 [info] <0.7700.0> closing STOMP connection <0.7700.0> (127.0.0.1:60178 -> 127.0.0.1:61613)
2023-01-04 16:49:41.224 [info] <0.14720.0> accepting STOMP connection <0.14720.0> (127.0.0.1:60376 -> 127.0.0.1:61613)
2023-01-04 16:49:41.235 [error] <0.14732.0> Channel error on connection <0.14723.0> (127.0.0.1:60376 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
operation basic.ack caused a channel exception precondition_failed: unknown delivery tag 245
2023-01-04 16:49:41.239 [error] <0.14720.0> STOMP error frame sent:
Message: precondition_failed
Detail: "PRECONDITION_FAILED - unknown delivery tag 245\n"
Server private detail: none
2023-01-04 16:49:41.239 [info] <0.14720.0> closing STOMP connection <0.14720.0> (127.0.0.1:60376 -> 127.0.0.1:61613)
2023-01-04 16:49:42.477 [info] <0.14736.0> accepting STOMP connection <0.14736.0> (127.0.0.1:60388 -> 127.0.0.1:61613)


And a varying number of jobs remain in "new".
Comment 20 Jonathan Druart 2023-01-05 13:58:03 UTC
> operation basic.ack caused a channel exception precondition_failed: unknown delivery tag 245

I think it's the fork; I removed it and it seems to be ok now. Could someone confirm?

> Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"

I am still seeing this after I removed the fork, which means this patch does not fix the timeout problem.


I am also seeing this:
':' expected, at character offset 49 (before "CONNECTED\nserver") at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.
Comment 21 Jonathan Druart 2023-01-05 14:06:32 UTC
There is something scary happening here.

I've added 3 debug statements, one displays the frame, 2 others are before the process and ack calls.
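
The statements were roughly of this form (a reconstruction matching the line numbers in the output below, not the exact code):

  warn $frame->body;                       # around line 97: dump the raw frame body
  warn "Processing " . $args->{job_id};    # around line 105, just before $job->process
  warn "Acking     " . $args->{job_id};    # around line 107, just before $conn->ack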

{"record_ids":[295],"record_server":"biblioserver","job_id":1824} at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.
Processing 1824 at /kohadevbox/koha/misc/background_jobs_worker.pl line 105.                                           
Acking     1824 at /kohadevbox/koha/misc/background_jobs_worker.pl line 107.
{"record_ids":[296],"job_id":1825,"record_server"CONNECTED
server at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.
':' expected, at character offset 49 (before "CONNECTED\nserver") at /kohadevbox/koha/misc/background_jobs_worker.pl line 98.
{"record_server":"biblioserver","job_id":1879,"record_ids":[350]} at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.
Processing 1879 at /kohadevbox/koha/misc/background_jobs_worker.pl line 105.
Acking     1879 at /kohadevbox/koha/misc/background_jobs_worker.pl line 107.

=> Frame for job 1825 is incorrect (?!), and we are losing jobs 1826-1878.
Comment 22 Jonathan Druart 2023-01-05 15:40:09 UTC
The best solution I found is the 2 patches from bug 32393.
Comment 23 Jonathan Druart 2023-01-05 15:42:25 UTC
(In reply to Jonathan Druart from comment #22)
> The best solution I found is the 2 patches from bug 32393.

But still not ideal, I still get gaps.
Comment 24 Jonathan Druart 2023-01-06 14:48:52 UTC
Created attachment 145082 [details] [review]
Test background jobs
Comment 25 Jonathan Druart 2023-01-06 14:51:58 UTC
I am giving up, need help.

I wrote this script to help adjust the worker code and see how things go.

To test:
Edit /etc/rabbitmq/rabbitmq.conf

  consumer_timeout = 10000

# Delete existing jobs - not needed but better to track down what's going on
MariaDB [koha_kohadev]> delete from background_jobs;

# Watch the logs
% sudo tail -f /var/log/koha/kohadev/* /var/log/rabbitmq/*.log

# Run the script
% perl test_bg.pl

Wait 100 seconds
Comment 26 David Cook 2023-01-09 00:01:17 UTC
(In reply to Jonathan Druart from comment #17)
> And what about the 30 minutes timeout?

That's always a danger. If 1 message takes 30+ minutes to process, then we'll have problems when using a "worker queue" (as described at https://www.rabbitmq.com/tutorials/tutorial-two-python.html) style of background job processing. 

Options that come to mind are increasing the timeout value (which is tough to do programmatically with RabbitMQ via the koha-common package) or breaking down long-running tasks into smaller, easier-to-process chunks.
Comment 27 David Cook 2023-01-09 00:04:50 UTC
(In reply to Jonathan Druart from comment #25)
> I am giving up, need help.
> 
> I wrote this script to help adjusting the worker code and see how things
> went.
> 
> To test:
> Edit /etc/rabbitmq/rabbitmq.conf
> 
>   consumer_timeout = 10000
> 
> # Delete existing jobs - not needed but better to track down what's going on
> MariaDB [koha_kohadev]> delete from background_jobs;
> 
> # Watch the logs
> % sudo tail -f /var/log/koha/kohadev/* /var/log/rabbitmq/*.log
> 
> # Run the script
> % perl test_bg.pl
> 
> Wait 100 seconds

I'm not sure I understand. That timeout is 10 seconds but the problem happens after 100 seconds? 

I'll give the test a go, since those missing frames are disturbing...
Comment 28 David Cook 2023-01-09 00:14:36 UTC
Looks like the test plan is missing a step, since without restarting the Koha background job worker you'll just get the following:

Exception 'Koha::Exception' thrown 'test is not a valid job_type'
Comment 29 David Cook 2023-01-09 00:21:55 UTC
Ok I see the timeout happening 10 seconds after it probably got the first message on the connection/channel. 

Although according to the logs it processed 51 tasks. That seems odd... I'm going to dive deeper...


2023-01-09 00:14:38.291 [info] <0.2408.0> accepting STOMP connection <0.2408.0> (127.0.0.1:58134 -> 127.0.0.1:61613)
2023-01-09 00:15:39.033 [warning] <0.2420.0> Consumer Q_/queue/koha_kohadev-default on channel 1 has timed out waiting on consumer acknowledgement. Timeout used: 10000 ms
2023-01-09 00:15:39.170 [error] <0.2420.0> Channel error on connection <0.2411.0> (127.0.0.1:58134 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
2023-01-09 00:15:39.182 [error] <0.2408.0> STOMP error frame sent:
Message: precondition_failed
Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
Server private detail: none
2023-01-09 00:15:39.307 [info] <0.2408.0> closing STOMP connection <0.2408.0> (127.0.0.1:58134 -> 127.0.0.1:61613)
Comment 30 David Cook 2023-01-09 00:50:00 UTC
I'm thinking that the prefetch config I provided isn't working with Net::Stomp...

I enqueue 100 messages and then I look at the channel. 

root@kohadevbox:kohadevbox$ rabbitmqctl list_channels
Listing channels ...
pid     user    consumer_count  messages_unacknowledged
<rabbit@kohadevbox.1671492158.2401.0>   guest   1       0
<rabbit@kohadevbox.1671492158.8209.0>   guest   1       99

According to https://www.rabbitmq.com/rabbitmqctl.8.html, messages_unacknowledged means that many messages have been delivered but haven't been acknowledged.

The Net::Stomp library was set up to work with ActiveMQ, and I made the mistake of assuming that some of the config options, although labelled for ActiveMQ, could be used with other providers. That's my bad.

I'm working on a better fix now...
Comment 31 David Cook 2023-01-09 00:58:12 UTC
Created attachment 145134 [details] [review]
Bug 32481: Use correct prefetch syntax for RabbitMQ

According to https://www.rabbitmq.com/stomp.html the header to
use for managing the prefetch is "prefetch-count".

You can verify the number of delivered and unacknowledged messages
on a channel on a connection by running "rabbitmqctl list_channels"
on the RabbitMQ host. This will tell you how many messages have been
delivered and are awaiting acknowledgement
Comment 32 David Cook 2023-01-09 01:04:13 UTC
(In reply to Jonathan Druart from comment #25)
> I am giving up, need help.

I am terribly sorry for wasting your time on this one, Jonathan.

I solved this problem on a different project using https://metacpan.org/pod/Net::AMQP::RabbitMQ but couldn't directly translate it across, and just assumed the first thing that said "prefetch" on https://metacpan.org/pod/Net::Stomp#subscribe would be correct...

Thanks for providing your testing patch, as that helped me to figure out the prefetch config wasn't working.
Comment 33 David Cook 2023-01-09 01:08:15 UTC
That 3rd patch "Bug 32481: Use correct prefetch syntax for RabbitMQ" uses the correct syntax, so the worker only fetches 1 message at a time (see https://www.rabbitmq.com/stomp.html#pear.p ).
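
In other words, the subscribe call ends up along these lines (a sketch; the exact change is in the attached patch):

  $conn->subscribe({
      destination      => sprintf( "/queue/%s-%s", $namespace, $queue ),
      ack              => 'client',
      'prefetch-count' => 1,    # RabbitMQ STOMP header: at most 1 unacknowledged message delivered at a time
  });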

Jonathan's test_bg.pl script completes perfectly with it. No timeouts. You can verify it using "rabbitmqctl list_channels" as well.

Once testers are happy, I'd suggest obsoleting Jonathan's patch and then squashing my 2 patches together. 

--

Of course, as I note above, the 30 minute timeout will still be an issue for any 1 job that takes longer than 30 minutes. 

But now I'm confident that Nick's case of 6 jobs of 6 minutes each will be resolved.
Comment 34 Jonathan Druart 2023-01-10 13:31:49 UTC
Created attachment 145173 [details] [review]
Bug 32481: Limit prefetch size for background jobs worker

This patch adds a prefetch size of 1 to the background jobs worker,
so that it fetches 1 message at a time. Without this change,
the RabbitMQ connection times out when too many messages for slow tasks
are fetched at the same time.

To test:
0. Apply patch
1. Run background worker
2. Rapidly enqueue multiple jobs that in total will take longer
than 30 minutes to process

Bug 32481: Use correct prefetch syntax for RabbitMQ

According to https://www.rabbitmq.com/stomp.html the header to
use for managing the prefetch is "prefetch-count".

You can verify the number of delivered and unacknowledged messages
on a channel on a connection by running "rabbitmqctl list_channels"
on the RabbitMQ host. This will tell you how many messages have been
delivered and are awaiting acknowledgement

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Comment 35 Nick Clemens 2023-01-11 15:56:15 UTC
Created attachment 145208 [details] [review]
Bug 32481: Limit prefetch size for background jobs worker

This patch adds a prefetch size of 1 to the background jobs worker,
so that it fetches 1 message at a time. Without this change,
the RabbitMQ connection times out when too many messages for slow tasks
are fetched at the same time.

To test:
0. Apply patch
1. Run background worker
2. Rapidly enqueue multiple jobs that in total will take longer
than 30 minutes to process

Bug 32481: Use correct prefetch syntax for RabbitMQ

According to https://www.rabbitmq.com/stomp.html the header to
use for managing the prefetch is "prefetch-count".

You can verify the number of delivered and unacknowledged messages
on a channel on a connection by running "rabbitmqctl list_channels"
on the RabbitMQ host. This will tell you how many messages have been
delivered and are awaiting acknowledgement

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Comment 36 Tomás Cohen Arazi 2023-01-11 23:55:43 UTC
Pushed to master for 23.05.

Nice work everyone, thanks!
Comment 37 Jacob O'Mara 2023-01-13 16:24:57 UTC
Nice work, thanks everyone!

Pushed to 22.11.x for the next release.
Comment 38 Lucas Gass 2023-01-24 18:30:59 UTC
Backported to 22.05.x for upcoming 22.05.09
Comment 39 Arthur Suzuki 2023-01-26 13:31:19 UTC
applied to 21.11.x for 21.11.16
Comment 40 wainuiwitikapark 2023-03-15 01:37:00 UTC
Not backported to 21.05.x