Summary: Rabbit times out when too many jobs are queued and the response takes too long
Product: Koha
Component: Architecture, internals, and plumbing
Version: Main
Hardware: All
OS: All
Status: CLOSED FIXED
Severity: critical
Priority: P5 - low
Reporter: Nick Clemens (kidclamp) <nick>
Assignee: David Cook <dcook>
QA Contact: Testopia <testopia>
CC: arthur.suzuki, dcook, jonathan.druart, julian.maurice, kyle, lucas, magnus, wainuiwitikapark
See Also:
    https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32393
    https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32594
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in: 23.05.00, 22.11.02, 22.05.09, 21.11.16
Circulation function:
Bug Depends on:
Bug Blocks: 32594, 35089
Attachments:
    Bug 32481: Limit prefetch size for background jobs worker
    Test background jobs
    Bug 32481: Use correct prefetch syntax for RabbitMQ
    Bug 32481: Limit prefetch size for background jobs worker
    Bug 32481: Limit prefetch size for background jobs worker
Description
Nick Clemens (kidclamp)
2022-12-16 11:31:28 UTC
We saw this happen several times - a library would queue up 6 batch modifications that would take roughly 6 minutes each.

The jobs would process normally, until the 30 minute configured timeout would hit - then the remaining job would fail, and rabbit would disconnect and not reconnect, and jobs would pile up.

It seems that koha receives all of the jobs immediately, but only acknowledges when complete - so the final job was waiting over 30 minutes for acknowledgement and this caused issues.

https://www.rabbitmq.com/consumers.html#acknowledgement-timeout

"""
The timeout can be deactivated using advanced.config. This is not recommended:

%% advanced.config
[
  {rabbit, [
    {consumer_timeout, undefined}
  ]}
].

Instead of disabling the timeout entirely, consider using a high value (for example, a few hours).
"""

I am slightly confused by this one, wasn't the idea of RabbitMQ to avoid just this kind of thing? Having a server dealing with the jobs so nothing would be lost and it all done in due time? It seems strange that we run into limitations so soon.

(In reply to Katrin Fischer from comment #2)
> I am slightly confused by this one, wasn't the idea of RabbitMQ to avoid
> just this kind of thing? Having a server dealing with the jobs so nothing
> would be lost and it all done in due time? It seems strange that we run into
> limitations so soon.

Generally speaking, Koha developers are not experienced with using advanced tools and techniques. Even the most hardcore of Koha developers are still just beginners with things like distributed computing.

According to the RabbitMQ Work Queues tutorial, we're doing the right thing by ACKing after the message has been processed.

I think the options would be increasing the timeout, or changing how we're enqueuing tasks.

In this case, are the "6 batch modifications" actually 1 batch modification job for 6 items? Why were they taking 6 minutes?

(In reply to Nick Clemens from comment #0)
> We saw this happen several times - a library would queue up 6 batch
> modifications that would take roughly 6 minutes each.

Is that 1 job for 6 items, or is that 6 different jobs? Why were the jobs taking 6 minutes?

> The jobs would process normally, until the 30 minute configured timeout
> would hit - then the remaining job would fail, and rabbit would disconnect
> and not reconnect, and jobs would pile up

When you say "rabbit would disconnect and not reconnect", are you talking about the RabbitMQ server disconnecting, and then not allowing the koha-worker to reconnect?

> It seems that koha receives all of the jobs immediately, but only
> acknowledges when complete - so the final job was waiting over 30 minutes
> for acknowledgement and this caused issues

What do you mean by "koha"? Do you mean "koha-worker"? I think we need more information here.

A timeout seems arbitrary when you ultimately want all jobs to be processed and nothing be lost.

(In reply to David Cook from comment #3)
> According to the RabbitMQ Work Queues tutorial, we're doing the right thing
> by ACKing after the message has been processed.

According to that tutorial, this allows RabbitMQ to redeliver the message to another worker. But do we really want that? A background job should not be restarted if it failed, because it might fail forever, or because by running it more than once it might do unwanted modifications to the database. I believe that the background job worker should acknowledge the message as soon as it receives it.

> as soon as it receives it...
Or at least as soon as it has validated that the message corresponds to an existing pending job, or something like that.
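To make this "validate, then ack early" idea concrete, here is a minimal sketch of what the receive loop in misc/background_jobs_worker.pl could look like under that approach. This is purely illustrative, not the code that was shipped on this bug; it assumes $conn is an already-subscribed Net::Stomp connection and that the frame body is the usual JSON payload carrying a job_id.

    use JSON qw( decode_json );
    use Try::Tiny qw( try catch );
    use Koha::BackgroundJobs;

    while ( my $frame = $conn->receive_frame ) {
        my $args = try { decode_json( $frame->body ) } catch { undef };
        my $job  = $args ? Koha::BackgroundJobs->find( $args->{job_id} ) : undef;

        if ( $job && $job->status eq 'new' ) {
            # Ack as soon as the message maps to a pending job, so the broker's
            # consumer_timeout never fires while the job is being processed.
            $conn->ack( { frame => $frame } );
            $job->process( $args );    # a failure is recorded on the job row, not redelivered
        }
        else {
            # Unknown or already-handled job: reject the message.
            $conn->nack( { frame => $frame } );
        }
    }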
(In reply to Katrin Fischer from comment #5)
> A timeout seems arbitrary when you ultimately want all jobs to be processed
> and nothing be lost.

The RabbitMQ message broker doesn't know if the message was received by the consumer though. All it knows is that it sent a message. It doesn't know that it's being worked on.

(In reply to Julian Maurice from comment #6)
> According to that tutorial, this allows RabbitMQ to redeliver the message to
> another worker. But do we really want that?

Theoretically, yes. If you have multiple machines with workers running and 1 machine crashes due to a power outage, you probably want the message to be sent to a different worker on a different machine.

A simpler scenario is RabbitMQ delivers a message, the worker is working on it, and a sysadmin reboots the server. When it comes back online, RabbitMQ and the worker should start up, and RabbitMQ should re-deliver the message to the worker.

> A background job should not be
> restarted if it failed, because it might fail forever, or because by running
> it more than once it might do unwanted modifications to the database.

You make a good point about the failing forever. In practice, that seems to be the most common scenario I've encountered. The worker crashes due to a programming error, and then it is just stuck in an infinite failure loop. Not good.

Of course, you could argue that isn't the problem of RabbitMQ, but rather someone coded the worker badly ;). Likewise, unwanted modifications to the database can be avoided by coding the worker better.

> I believe that the background job worker should acknowledge the message as
> soon as it receives it.

That's certainly an option. The downside is that if the worker isn't able to complete handling of the message, then that message is gone forever. That might not be a big deal with Koha, since we have the "database fallback". Of course, that could get you into an infinite failure loop as well.

(In reply to David Cook from comment #8)
> (In reply to Katrin Fischer from comment #5)
> > A timeout seems arbitrary when you ultimately want all jobs to be processed
> > and nothing be lost.
>
> The RabbitMQ message broker doesn't know if the message was received by the
> consumer though. All it knows is that it sent a message. It doesn't know
> that it's being worked on.

I run a large IoT system, and without timeouts you end up waiting for a response forever. 30 minutes is arbitrary, but they make it configurable so that you can adjust it to your workload. Of course, in the case of Koha, we have less control over what people do with their RabbitMQ.

It's an interesting topic though...

https://docs.celeryq.dev/en/latest/faq.html#should-i-use-retry-or-acks-late

It looks like Celery workers acknowledge messages when they receive them and then they do the work. They provide an "acks_late" option for tasks where crashes would be a problem.

--

Maybe it is better to ack early so that we can handle longer running tasks, and then we handle failure scenarios more as edge cases... It's probably more likely that you'll have a long running task than an unexpected crash.

I think if there is a crash (say a VM power outage rather than a fatal error executing a Perl function), the job should get stuck in "started" state as well? So that would allow sysadmins to then deal with that situation (or we could have a cronjob that "times out" jobs in "started" state if they're in that state for longer than X time).
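A hypothetical sketch of that kind of application-level timeout follows. It is not part of the patch set on this bug, and the 120-minute threshold is an assumption; it simply marks as failed any job that has sat in "started" state for too long, which a cronjob could run periodically.

    #!/usr/bin/perl
    # Hypothetical sketch only - not part of the patches on this bug.
    # Fail background jobs stuck in "started" for longer than a threshold.
    use Modern::Perl;
    use Koha::Script;
    use Koha::Database;
    use Koha::BackgroundJobs;
    use Koha::DateUtils qw( dt_from_string );

    my $max_minutes = 120;    # assumed threshold; tune to your longest legitimate job
    my $cutoff      = dt_from_string()->subtract( minutes => $max_minutes );
    my $dtf         = Koha::Database->new->schema->storage->datetime_parser;

    my $stale_jobs = Koha::BackgroundJobs->search(
        {
            status     => 'started',
            started_on => { '<' => $dtf->format_datetime($cutoff) },
        }
    );

    while ( my $job = $stale_jobs->next ) {
        warn sprintf "Timing out background job %s (type %s)\n", $job->id, $job->type;
        $job->set( { status => 'failed' } )->store;
    }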
--

In that way, we still have timeouts, but we're putting those timeouts into the application rather than relying on RabbitMQ's...

(In reply to David Cook from comment #4)
> (In reply to Nick Clemens from comment #0)
> > We saw this happen several times - a library would queue up 6 batch
> > modifications that would take roughly 6 minutes each.
>
> Is that 1 job for 6 items, or is that 6 different jobs? Why were the jobs
> taking 6 minutes?

6 batch jobs of several hundred items, submitted one after the other.

> > The jobs would process normally, until the 30 minute configured timeout
> > would hit - then the remaining job would fail, and rabbit would disconnect
> > and not reconnect, and jobs would pile up
>
> When you say "rabbit would disconnect and not reconnect", are you talking
> about the RabbitMQ server disconnecting, and then not allowing the
> koha-worker to reconnect?

Rabbit crashed for all intents and purposes - the koha worker could not connect, and only a restart would begin to process jobs again.

> > It seems that koha receives all of the jobs immediately, but only
> > acknowledges when complete - so the final job was waiting over 30 minutes
> > for acknowledgement and this caused issues
>
> What do you mean by "koha"? Do you mean "koha-worker"? I think we need more
> information here.

Yes, the koha-worker - it seems to get the frame, but not send an ack.

(In reply to Nick Clemens from comment #12)
> > > It seems that koha receives all of the jobs immediately, but only
> > > acknowledges when complete - so the final job was waiting over 30 minutes
> > > for acknowledgement and this caused issues

Thanks for clarifying, Nick. Your comment also made me re-read the description, and it made me realize that I've had this same problem before on a different project, although it was with AMQP rather than STOMP. Let me see the STOMP way of doing it...

The problem is that we're not defining a prefetch size. You can see the problem here in ./misc/background_jobs_worker.pl:

    $conn->subscribe(
        {
            destination => sprintf( "/queue/%s-%s", $namespace, $queue ),
            ack         => 'client',
        }
    );

As per the docs, we need to set "'activemq.prefetchSize' => 1" in that subscribe hashref argument: https://metacpan.org/pod/Net::Stomp#activemq.prefetchSize

If you/we set that, I would bet that your problem goes away.

--

I have a RabbitMQ AMQP worker on another project that consumes many thousands of messages a day, and I was noticing timeouts even though it was blazing through the messages. I changed the prefetch size to 1 and ever since there have been no more timeouts. It's just worked like a beast!

--

(Katrin, the above is what I mean about us all being beginners with this stuff.)

Created attachment 144809 [details] [review]
Bug 32481: Limit prefetch size for background jobs worker

This patch adds a prefetch size of 1 to the background jobs worker, so that it fetches 1 message at a time. Without this change, the RabbitMQ connection times out when too many messages for slow tasks are fetched at the same time.

To test:
0. Apply patch
1. Run the background worker
2. Rapidly enqueue multiple jobs that in total will take longer than 30 minutes to process

The test plan is a bit lacklustre because it's probably going to be hard for people to test.

I suppose we can just test that it doesn't cause any problems with the current background jobs, and then Nick can confirm whether or not it fixes the problem for him.
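For readers following along, the first patch presumably amounted to roughly the following change (a sketch inferred from the comment above, not the attachment's literal diff). As the later comments show, this ActiveMQ-specific header is not honoured by RabbitMQ's STOMP adapter.

    # Sketch of the first attempt, inferred from the discussion:
    # ActiveMQ-style prefetch header, which RabbitMQ's STOMP adapter ignores.
    $conn->subscribe(
        {
            destination             => sprintf( "/queue/%s-%s", $namespace, $queue ),
            ack                     => 'client',
            'activemq.prefetchSize' => 1,
        }
    );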
(In reply to David Cook from comment #15)
> The test plan is a bit lacklustre because it's probably going to be hard for
> people to test.
>
> I suppose we can just test that it doesn't cause any problems with the
> current background jobs, and then Nick can confirm whether or not it fixes
> the problem for him.

I'll try to see if I can reproduce on a test environment, thanks David!

And what about the 30 minutes timeout?

I think I recreated the problem this way:

% more /etc/rabbitmq/rabbitmq.conf
consumer_timeout = 15000

Add a sleep 10 in Koha::BackgroundJob::BatchUpdateItem->process

Then enqueue several batch item mod jobs.

However:
1. I don't see jobs stuck in "new" (without the patch)
2. I am seeing the following in the rabbitmq logs, with and without the patch:

2023-01-04 11:01:03.041 [warning] <0.2774.0> Consumer Q_/queue/koha_kohadev-long_tasks on channel 1 has timed out waiting on consumer acknowledgement. Timeout used: 15000 ms
2023-01-04 11:01:03.041 [error] <0.2774.0> Channel error on connection <0.2764.0> (127.0.0.1:39578 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
2023-01-04 11:01:03.042 [error] <0.2758.0> STOMP error frame sent:
Message: precondition_failed
Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
Server private detail: none
2023-01-04 11:01:03.042 [info] <0.2758.0> closing STOMP connection <0.2758.0> (127.0.0.1:39578 -> 127.0.0.1:61613)

(In reply to Jonathan Druart from comment #18)
> I think I recreated the problem this way:
>
> % more /etc/rabbitmq/rabbitmq.conf
> consumer_timeout = 15000
>
> Add a sleep 10 in Koha::BackgroundJob::BatchUpdateItem->process
>
> Then enqueue several batch item mod jobs.
>
> However:
> 1. I don't see jobs stuck in "new" (without the patch)
> 2. I am seeing the following in the rabbitmq logs, with and without the patch
>
> 2023-01-04 11:01:03.041 [warning] <0.2774.0> Consumer
> Q_/queue/koha_kohadev-long_tasks on channel 1 has timed out waiting on
> consumer acknowledgement. Timeout used: 15000 ms
>
> 2023-01-04 11:01:03.041 [error] <0.2774.0> Channel error on connection
> <0.2764.0> (127.0.0.1:39578 -> 127.0.0.1:61613, vhost: '/', user: 'guest'),
> channel 1:
> operation none caused a channel exception precondition_failed: consumer ack
> timed out on channel 1
>
> 2023-01-04 11:01:03.042 [error] <0.2758.0> STOMP error frame sent:
>
> Message: precondition_failed
> Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
> Server private detail: none
> 2023-01-04 11:01:03.042 [info] <0.2758.0> closing STOMP connection
> <0.2758.0> (127.0.0.1:39578 -> 127.0.0.1:61613)

I played with this a bunch, and things seem to sometimes recover and sometimes not.

Setting rabbitmq.conf with:
consumer_timeout = 10000

Adding "sleep 1;" in Koha/BackgroundJob/UpdateElasticIndex.pm

sudo koha-mysql kohadev
DELETE FROM biblio WHERE biblionumber=269;
DELETE FROM biblio WHERE biblionumber=72;

Set the SearchEngine syspref to 'Elastic'

perl misc/maintenance/touch_all_biblios.pl

With or without the patch, I get errors like:

2023-01-04 16:49:40.689 [warning] <0.7712.0> Consumer Q_/queue/koha_kohadev-default on channel 1 has timed out waiting on consumer acknowledgement. Timeout used: 10000 ms
2023-01-04 16:49:40.692 [error] <0.7712.0> Channel error on connection <0.7703.0> (127.0.0.1:60178 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
2023-01-04 16:49:40.692 [error] <0.7700.0> STOMP error frame sent:
Message: precondition_failed
Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
Server private detail: none
2023-01-04 16:49:40.693 [info] <0.7700.0> closing STOMP connection <0.7700.0> (127.0.0.1:60178 -> 127.0.0.1:61613)
2023-01-04 16:49:41.224 [info] <0.14720.0> accepting STOMP connection <0.14720.0> (127.0.0.1:60376 -> 127.0.0.1:61613)
2023-01-04 16:49:41.235 [error] <0.14732.0> Channel error on connection <0.14723.0> (127.0.0.1:60376 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
operation basic.ack caused a channel exception precondition_failed: unknown delivery tag 245
2023-01-04 16:49:41.239 [error] <0.14720.0> STOMP error frame sent:
Message: precondition_failed
Detail: "PRECONDITION_FAILED - unknown delivery tag 245\n"
Server private detail: none
2023-01-04 16:49:41.239 [info] <0.14720.0> closing STOMP connection <0.14720.0> (127.0.0.1:60376 -> 127.0.0.1:61613)
2023-01-04 16:49:42.477 [info] <0.14736.0> accepting STOMP connection <0.14736.0> (127.0.0.1:60388 -> 127.0.0.1:61613)

And a varied number of jobs that remain in "new".

> operation basic.ack caused a channel exception precondition_failed: unknown delivery tag 245

I think it's the fork, I removed it and it seems to be ok now. Could someone confirm?

> Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"

I am still seeing this after I removed the fork, which means this patch does not fix the timeout problem.

I am also seeing this:

':' expected, at character offset 49 (before "CONNECTED\nserver") at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.

There is something scary happening here. I've added 3 debug statements: one displays the frame, the 2 others are before the process and ack calls.

{"record_ids":[295],"record_server":"biblioserver","job_id":1824} at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.
Processing 1824 at /kohadevbox/koha/misc/background_jobs_worker.pl line 105.
Acking 1824 at /kohadevbox/koha/misc/background_jobs_worker.pl line 107.
{"record_ids":[296],"job_id":1825,"record_server"CONNECTED server at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.
':' expected, at character offset 49 (before "CONNECTED\nserver") at /kohadevbox/koha/misc/background_jobs_worker.pl line 98.
{"record_server":"biblioserver","job_id":1879,"record_ids":[350]} at /kohadevbox/koha/misc/background_jobs_worker.pl line 97.
Processing 1879 at /kohadevbox/koha/misc/background_jobs_worker.pl line 105.
Acking 1879 at /kohadevbox/koha/misc/background_jobs_worker.pl line 107.

=> The frame for job 1825 is incorrect (?!), and we are losing jobs 1826-1878.

The best solution I found is the 2 patches from bug 32393.

(In reply to Jonathan Druart from comment #22)
> The best solution I found is the 2 patches from bug 32393.

But still not ideal, I still get gaps.

Created attachment 145082 [details] [review]
Test background jobs

I am giving up, need help.

I wrote this script to help adjust the worker code and see how things went.
To test:
Edit /etc/rabbitmq/rabbitmq.conf

consumer_timeout = 10000

# Delete existing jobs - not needed, but better to track down what's going on
MariaDB [koha_kohadev]> delete from background_jobs;

# Watch the logs
% sudo tail -f /var/log/koha/kohadev/* /var/log/rabbitmq/*.log

# Run the script
% perl test_bg.pl

Wait 100 seconds.

(In reply to Jonathan Druart from comment #17)
> And what about the 30 minutes timeout?

That's always a danger. If 1 message takes 30+ minutes to process, then we'll have problems when using a "worker queue" (as described at https://www.rabbitmq.com/tutorials/tutorial-two-python.html) style of background job processing.

Options that come to mind are increasing the timeout value (which is tough to do programmatically with RabbitMQ via the koha-common package) or breaking down long running tasks into smaller, easier to process chunks.

(In reply to Jonathan Druart from comment #25)
> I am giving up, need help.
>
> I wrote this script to help adjust the worker code and see how things went.
>
> To test:
> Edit /etc/rabbitmq/rabbitmq.conf
>
> consumer_timeout = 10000
>
> # Delete existing jobs - not needed, but better to track down what's going on
> MariaDB [koha_kohadev]> delete from background_jobs;
>
> # Watch the logs
> % sudo tail -f /var/log/koha/kohadev/* /var/log/rabbitmq/*.log
>
> # Run the script
> % perl test_bg.pl
>
> Wait 100 seconds

I'm not sure I understand. That timeout is 10 seconds but the problem happens after 100 seconds?

I'll give the test a go, since those missing frames are disturbing...

Looks like the test plan is missing a step, since without restarting the Koha background job worker you'll just get the following:

Exception 'Koha::Exception' thrown 'test is not a valid job_type'

Ok, I see the timeout happening 10 seconds after it probably got the first message on the connection/channel. Although according to the logs it processed 51 tasks. That seems odd... I'm going to dive deeper...

622 2023-01-09 00:14:38.291 [info] <0.2408.0> accepting STOMP connection <0.2408.0> (127.0.0.1:58134 -> 127.0.0.1:61613)
623 2023-01-09 00:15:39.033 [warning] <0.2420.0> Consumer Q_/queue/koha_kohadev-default on channel 1 has timed out waiting on consumer acknowledgement. Timeout used: 10000 ms
624 2023-01-09 00:15:39.170 [error] <0.2420.0> Channel error on connection <0.2411.0> (127.0.0.1:58134 -> 127.0.0.1:61613, vhost: '/', user: 'guest'), channel 1:
625 operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
626 2023-01-09 00:15:39.182 [error] <0.2408.0> STOMP error frame sent:
627 Message: precondition_failed
628 Detail: "PRECONDITION_FAILED - consumer ack timed out on channel 1\n"
629 Server private detail: none
630 2023-01-09 00:15:39.307 [info] <0.2408.0> closing STOMP connection <0.2408.0> (127.0.0.1:58134 -> 127.0.0.1:61613)

I'm thinking that the prefetch config I provided isn't working with Net::Stomp...

I enqueue 100 messages and then I look at the channel:

root@kohadevbox:kohadevbox$ rabbitmqctl list_channels
Listing channels ...
pid                                     user    consumer_count  messages_unacknowledged
<rabbit@kohadevbox.1671492158.2401.0>   guest   1               0
<rabbit@kohadevbox.1671492158.8209.0>   guest   1               99

According to https://www.rabbitmq.com/rabbitmqctl.8.html, messages_unacknowledged means that many messages have been delivered but haven't been acknowledged.
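The test_bg.pl attachment itself is not reproduced on this page, but the general shape of this kind of stress test (enqueue far more messages than the worker can acknowledge before consumer_timeout fires) might look like the following. This is a hypothetical sketch, not Jonathan's script; the job type and record id are assumptions, chosen because the UpdateElasticIndex payload (record_server/record_ids) appears in the debug output above.

    #!/usr/bin/perl
    # Hypothetical stress sketch (not the attached test_bg.pl): enqueue many
    # small jobs back to back so deliveries pile up on the worker's channel.
    use Modern::Perl;
    use Koha::Script;
    use Koha::BackgroundJob::UpdateElasticIndex;

    for my $i ( 1 .. 100 ) {
        Koha::BackgroundJob::UpdateElasticIndex->new->enqueue(
            {
                record_server => 'biblioserver',
                record_ids    => [1],    # assumes biblio 1 exists in the test database
            }
        );
    }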
The Net::Stomp library was set up to work with ActiveMQ, and I made the mistake of assuming that some of the config option naming was labelled for ActiveMQ but could be used for other providers. That's my bad. I'm working on a better fix now...

Created attachment 145134 [details] [review]
Bug 32481: Use correct prefetch syntax for RabbitMQ

According to https://www.rabbitmq.com/stomp.html the header to use for managing the prefetch is "prefetch-count".

You can verify the number of delivered and unacknowledged messages on a channel on a connection by running "rabbitmqctl list_channels" on the RabbitMQ host. This will tell you how many messages have been delivered and are awaiting acknowledgement.

(In reply to Jonathan Druart from comment #25)
> I am giving up, need help.

I am terribly sorry for wasting your time on this one, Jonathan. I solved this problem on a different project using https://metacpan.org/pod/Net::AMQP::RabbitMQ but couldn't directly translate it across, and just assumed the first thing that said "prefetch" on https://metacpan.org/pod/Net::Stomp#subscribe would be correct...

Thanks for providing your testing patch, as that helped me to figure out that the prefetch config wasn't working.

That 3rd patch "Bug 32481: Use correct prefetch syntax for RabbitMQ" uses the correct syntax, so the worker only fetches 1 message at a time (see https://www.rabbitmq.com/stomp.html#pear.p).

Jonathan's test_bg.pl script completes perfectly with it. No timeouts. You can verify it using "rabbitmqctl list_channels" as well.

Once testers are happy, I'd suggest obsoleting Jonathan's patch and then squashing my 2 patches together.

--

Of course, as I note above, the 30 minute timeout will still be an issue for any 1 job that takes longer than 30 minutes. But now I'm confident that Nick's case of 6 jobs of 6 minutes each will be resolved.

Created attachment 145173 [details] [review]
Bug 32481: Limit prefetch size for background jobs worker

This patch adds a prefetch size of 1 to the background jobs worker, so that it fetches 1 message at a time. Without this change, the RabbitMQ connection times out when too many messages for slow tasks are fetched at the same time.

To test:
0. Apply patch
1. Run the background worker
2. Rapidly enqueue multiple jobs that in total will take longer than 30 minutes to process

Bug 32481: Use correct prefetch syntax for RabbitMQ

According to https://www.rabbitmq.com/stomp.html the header to use for managing the prefetch is "prefetch-count".

You can verify the number of delivered and unacknowledged messages on a channel on a connection by running "rabbitmqctl list_channels" on the RabbitMQ host. This will tell you how many messages have been delivered and are awaiting acknowledgement.

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>

Created attachment 145208 [details] [review]
Bug 32481: Limit prefetch size for background jobs worker

This patch adds a prefetch size of 1 to the background jobs worker, so that it fetches 1 message at a time. Without this change, the RabbitMQ connection times out when too many messages for slow tasks are fetched at the same time.

To test:
0. Apply patch
1. Run the background worker
2. Rapidly enqueue multiple jobs that in total will take longer than 30 minutes to process

Bug 32481: Use correct prefetch syntax for RabbitMQ

According to https://www.rabbitmq.com/stomp.html the header to use for managing the prefetch is "prefetch-count".

You can verify the number of delivered and unacknowledged messages on a channel on a connection by running "rabbitmqctl list_channels" on the RabbitMQ host. This will tell you how many messages have been delivered and are awaiting acknowledgement.

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>

Pushed to master for 23.05. Nice work everyone, thanks!

Nice work, thanks everyone! Pushed to 22.11.x for the next release.

Backported to 22.05.x for upcoming 22.05.09

Applied to 21.11.x for 21.11.16

Not backported to 21.05.x
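For reference, based on the comments above (the authoritative change is in the attachments themselves), the corrected subscribe call in misc/background_jobs_worker.pl looks roughly like this, with the only difference from the original call being the RabbitMQ-recognised "prefetch-count" header:

    # Sketch of the final fix as described in the comments above:
    # RabbitMQ's STOMP adapter honours "prefetch-count", so the worker is
    # delivered one message at a time and acks it before receiving the next.
    $conn->subscribe(
        {
            destination      => sprintf( "/queue/%s-%s", $namespace, $queue ),
            ack              => 'client',
            'prefetch-count' => 1,
        }
    );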