I noticed my disk was filling up because worker-output.log was getting several hundred of these per second:

[2023/06/20 18:21:24] [WARN] Frame not processed - malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "You must log in usin...") at /usr/share/koha/bin/background_jobs_worker.pl line 116.
 main::catch {...}  /usr/share/koha/bin/background_jobs_worker.pl (119)

I think the problem is related to this code:

106     while (1) {
107         if ( $conn ) {
108             my $frame = $conn->receive_frame;
109             if ( !defined $frame ) {
110                 # maybe log connection problems
111                 next;    # will reconnect automatically
112             }
113
114             my $args = try {
115                 my $body = $frame->body;
116                 decode_json($body); # TODO Should this be from_json? Check utf8 flag.
117             } catch {
118                 Koha::Logger->get({ interface => 'worker' })->warn(sprintf "Frame not processed - %s", $_);
119                 return;
120             } finally {
121                 $conn->ack( { frame => $frame } );
122             };
123
124             next unless $args;
125
126             # FIXME This means we need to have create the DB entry before
127             # It could work in a first step, but then we will want to handle job that will be created from the message received
128             my $job = Koha::BackgroundJobs->find($args->{job_id});

There is a problem with the connection, but $conn is still defined, so this code is run. When I Dump $conn after line 107 it looks like this:

$VAR1 = bless( {
    'bufsize' => 8192,
    'logger' => bless( {
        'warn' => 1,
        'error' => 1,
        'fatal' => 1
    }, 'Net::Stomp::StupidLogger' ),
    '_framebuf_changed' => 1,
    'port' => '61613',
    'connect_delay' => 5,
    '_pid' => ***,
    'hostname' => 'localhost',
    'reconnect_attempts' => 0,
    'reconnect_on_fork' => 1,
    'select' => bless( [
        '�',
        1,
        undef, undef, undef, undef, undef, undef, undef,
        bless( \*Symbol::GEN0, 'IO::Socket::IP' )
    ], 'IO::Select' ),
    'socket' => $VAR1->{'select'}[9],
    'initial_reconnect_attempts' => 1,
    '_framebuf' => '',
    'socket_options' => {},
    'subscriptions' => {
        'dest-/queue/koha_***-default' => {
            'ack' => 'client',
            'destination' => '/queue/koha_***-default',
            'prefetch-count' => 1
        }
    }
}, 'Net::Stomp' );

If I dump the contents of $body after line 115, I get 'You must log in using CONNECT first'. Not sure why that happens yet, but how we handle it seems to be problematic.

We fail to decode it, because it is not JSON. So the "catch" block is run and logs the problem. But then I think the "return" kicks us back to the start of the "while", and we end up logging the same error as fast as we can, which seems to be pretty fast. And we never get to the "ack" on line 121. The comment "will reconnect automatically" seems to be a little optimistic?

Moving the "return" from the end of the "catch" block to the end of the "finally" block slows things down; now the logging only happens about every other second. But there is also a problem because we are assigning the "output" of the try/catch to $args. If we can decode_json the $body, that works as expected. But if we execute the "catch" block, it looks like $args gets assigned the result of Koha::Logger->get, which seems to be 1. So $args is defined on line 124 and we get to Koha::BackgroundJobs->find, but using "1" as a hashref there does not work.

I think maybe something like this would work better:

            my $args;
            try {
                my $body = $frame->body;
                $args = decode_json($body); # TODO Should this be from_json? Check utf8 flag.
            } catch {
                Koha::Logger->get({ interface => 'worker' })->warn(sprintf "Frame not processed - %s", $_);
            } finally {
                $conn->ack( { frame => $frame } );
                return;
            };

Is the return needed?
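For what it's worth, the $args behaviour described above is standard Try::Tiny semantics: when the try block dies, the whole try/catch expression evaluates to whatever the catch block returns (warn() returns 1, hence the "1"), and finally blocks never contribute to the return value. A bare "return" inside any of these blocks only exits that block, not the enclosing sub, so the trailing return in the proposed finally block is a no-op. A minimal standalone sketch of the pitfall (not Koha code):

    use strict;
    use warnings;
    use Try::Tiny;
    use JSON qw( decode_json );

    # decode_json() dies on non-JSON input, so the catch block runs and the
    # try/catch expression evaluates to the catch block's last value -- the
    # return value of warn(), which is 1.
    my $args = try {
        decode_json('You must log in using CONNECT first');
    } catch {
        warn "Frame not processed - $_";
    };
    print defined $args ? "args = $args\n" : "args is undef\n";    # args = 1

    # Assigning inside try, as proposed above, avoids the problem:
    my $args2;
    try {
        $args2 = decode_json('You must log in using CONNECT first');
    } catch {
        warn "Frame not processed - $_";
    };
    print defined $args2 ? "args2 = $args2\n" : "args2 is undef\n";    # undef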
This is a little above my head, so other solutions are most welcome. And if anyone can tell me why I get 'You must log in using CONNECT first' I would be most grateful. :-)
Looks like the problem with connecting might be because I have not added the proper config to koha-conf.xml:

 <message_broker>
    <hostname>__MESSAGE_BROKER_HOST__</hostname>
    <port>__MESSAGE_BROKER_PORT__</port>
    <username>__MESSAGE_BROKER_USER__</username>
    <password>__MESSAGE_BROKER_PASS__</password>
    <vhost>__MESSAGE_BROKER_VHOST__</vhost>
 </message_broker>
(In reply to Magnus Enger from comment #1)
> Looks like the problem with connecting might be because I have not added
> the proper config to koha-conf.xml:
>
>  <message_broker>
>     <hostname>__MESSAGE_BROKER_HOST__</hostname>
>     <port>__MESSAGE_BROKER_PORT__</port>
>     <username>__MESSAGE_BROKER_USER__</username>
>     <password>__MESSAGE_BROKER_PASS__</password>
>     <vhost>__MESSAGE_BROKER_VHOST__</vhost>
>  </message_broker>

You should not need it. Did you try a restart_all? If rabbitmq cannot be reached we are supposed to fall back to the DB (and bypass the broker).

However, I can see a problem if the daemon (background_jobs_worker.pl) has been started and the connection is then lost.
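For reference, the fallback amounts to something like this in the worker's main loop (a simplified sketch only; the status filter, sleep interval and overall shape are assumptions about misc/workers/background_jobs_worker.pl, not the literal code):

    use Modern::Perl;
    use Try::Tiny;
    use Koha::BackgroundJob;
    use Koha::BackgroundJobs;

    my $conn;
    try {
        $conn = Koha::BackgroundJob->connect;
    } catch {
        warn sprintf "Cannot connect to the message broker, the jobs will be processed anyway (%s)", $_;
    };

    while (1) {
        if ($conn) {
            my $frame = $conn->receive_frame;
            # decode the STOMP MESSAGE body and process the job
        }
        else {
            # No broker: poll the database for queued jobs instead
            my $jobs = Koha::BackgroundJobs->search( { status => 'new' } );
            while ( my $job = $jobs->next ) {
                # process $job
            }
            sleep 10;    # avoid hammering the DB
        }
    }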
(In reply to Jonathan Druart from comment #2)
> You should not need it. Did you try a restart_all? If rabbitmq cannot be
> reached we are supposed to fall back to the DB (and bypass the broker).

I spotted this on a production server, so there is no restart_all. But I did restart rabbit and koha-worker, and that did not change anything. Then I restarted the server, but kept getting the same errors. Then I added the message_broker config to one of six sites on the server, restarted koha-worker (I think it was), and the problem went away for all the sites. So it seems kind of random...
I've just spotted this in production, on an installation upgraded several versions over the years, through 22.11. The configuration seemed to be correct.

We had been searching in vain for a stray broken message for a while. What we found in the RabbitMQ logs is that the connection to the broker was not successful. This is the reason for filling the disk: the worker gets a message telling it the credentials are invalid or don't have enough permissions, which is a plain string ("You must log in usin..."), and it explodes when trying to decode it as JSON, and retries.

We initially thought it was about the guest user not existing, but it existed:

$ sudo rabbitmqctl add_user guest guest
Adding user "guest" ...
Error: User "guest" already exists

Then we found a line in the rabbit logs telling us the 'vhost' didn't exist (koha_frontera in this case). The login/permissions problem was easier to spot, so we missed the vhost one initially:

vhost koha_frontera not found
2023-07-12 09:56:31.983 [warning] <0.902.0> STOMP login failed - not_allowed (vhost access not allowed)~n
2023-07-12 09:56:31.983 [error] <0.902.0> STOMP error frame sent:
Message: "Bad CONNECT"
Detail: "Virtual host 'koha_frontera' access denied"
Server private detail: none

We created it:

$ sudo rabbitmqctl add_vhost koha_frontera

Then it started saying (again) that there was a permissions issue, so we gave 'guest' permissions over the 'koha_frontera' vhost:

$ sudo rabbitmqctl set_permissions -p 'koha_frontera' guest ".*" ".*" ".*"

And then we restarted everything. Tried staging a file and all logs look correct.

I'm leaving this here, and closing this bug as invalid, as it looks like a maintenance problem.
I had a similar problem today and finally got round to digging a bit deeper into RabbitMQ. Here are some of the things I found:

$ sudo rabbitmq-diagnostics check_virtual_hosts
Checking if all vhosts are running on node rabbit@kohaswe ...
Error: Some virtual hosts on node rabbit@kohaswe are down: /

$ sudo rabbitmqctl list_queues --offline state name
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
state   name
down    koha_mykoha-batch_authority_record_modification
down    koha_mykoha-batch_item_record_modification
down    koha_mykoha-long_tasks
...

$ sudo rabbitmqctl restart_vhost
Trying to restart vhost '/' on node 'rabbit@kohaswe' ...
Error: Failed to start vhost '/' on node 'rabbit@kohaswe'
Reason: {:shutdown, {:failed_to_start_child, :rabbit_vhost_process, {:badmatch, {:error, {{{:badmatch, {:error, {:not_a_dets_file, '/var/lib/rabbitmq/mnesia/rabbit@kohaswe/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/recovery.dets'}}}, [{:rabbit_recovery_terms, :open_table, 1, [file: 'src/rabbit_recovery_terms.erl', line: 199]}, {:rabbit_recovery_terms, :init, 1, [file: 'src/rabbit_recovery_terms.erl', line: 179]}, {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 374]}, {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 342]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}, {:child, :undefined, :rabbit_recovery_terms, {:rabbit_recovery_terms, :start_link, ["/"]}, :transient, 30000, :worker, [:rabbit_recovery_terms]}}}}}}

I found a solution to the problem here:
https://stackoverflow.com/questions/58689551/rabbitmq-vhost-is-down-for-user-xyz-even-after-user-has-all-access
and after effectively wiping all the queues, I got things running again.
*** Bug 37164 has been marked as a duplicate of this bug. ***
*** Bug 37545 has been marked as a duplicate of this bug. ***
Re-opening and changing the title from "background_jobs_worker.pl floods logs when not connecting" to "background_jobs_worker.pl floods logs when it gets error frames", because that's arguably the biggest problem here.
I've got a different scenario which got me thinking about this.

[2025/02/13 23:01:38] [WARN] Frame not processed - garbage after JSON object, at character offset 7 (before "must include a valid...") at /usr/share/koha/bin/workers/es_indexer_daemon.pl line 122.
 main::catch {...}  /usr/share/koha/bin/workers/es_indexer_daemon.pl (124)
[2025/02/13 23:01:38] [WARN] Frame does not have correct args, ignoring it
 main::  /usr/share/koha/bin/workers/es_indexer_daemon.pl (129)

Looking at the RabbitMQ logs, I can see this:

2025-02-13 23:01:38.946579+11:00 [erro] <0.5016.0> STOMP error frame sent:
2025-02-13 23:01:38.946579+11:00 [erro] <0.5016.0> Message: "Invalid header"
2025-02-13 23:01:38.946579+11:00 [erro] <0.5016.0> Detail: "\"NACK\" must include a valid \"message-id\" header\n"
2025-02-13 23:01:38.946579+11:00 [erro] <0.5016.0> Server private detail: none
2025-02-13 23:01:38.947706+11:00 [erro] <0.5016.0> STOMP error frame sent:
2025-02-13 23:01:38.947706+11:00 [erro] <0.5016.0> Message: "Invalid header"
2025-02-13 23:01:38.947706+11:00 [erro] <0.5016.0> Detail: "\"NACK\" must include a valid \"message-id\" header\n"
2025-02-13 23:01:38.947706+11:00 [erro] <0.5016.0> Server private detail: none
2025-02-13 23:01:38.948783+11:00 [erro] <0.5016.0> STOMP error frame sent:
2025-02-13 23:01:38.948783+11:00 [erro] <0.5016.0> Message: "Invalid header"

--

It's not clear what created the first STOMP error frame coming from RabbitMQ, but the loop happens because RabbitMQ sends an error frame with a plain text message, Koha fails to read it as JSON and tries to NACK it, but you can't NACK a STOMP error frame, so RabbitMQ creates another error frame, and that creates an infinite loop.
According to the STOMP spec:

"The server MAY send ERROR frames if something goes wrong. In this case, it MUST then close the connection just after sending the ERROR frame."

However, RabbitMQ has acknowledged that they don't do that, and they have several open discussions/issues surrounding this topic.

In our case, I'd like to add a content-type to sent messages, so that we're only trying to JSON decode messages marked as "application/json". Plus, if we encounter an ERROR frame rather than a MESSAGE frame, we can kill the connection and let an auto-reconnect happen.

--

Now... that idea would probably have fixed my problem with the infinite NACK error loop. But... it wouldn't solve the problem with the infinite connect error. With that one... we'd have to either nullify $conn and do a next(), or exit the worker. But... that might be good for a follow-up... baby steps...
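A minimal sketch of that receive-side idea, sitting inside the worker loop just after $frame = $conn->receive_frame (illustrative only; command/body come from Net::Stomp::Frame, and the reconnect behaviour assumes the surrounding loop re-establishes $conn when it is undef):

    if ( $frame->command eq 'ERROR' ) {
        # The STOMP spec says the server must close the connection after an
        # ERROR frame; RabbitMQ doesn't always, so force the issue ourselves.
        warn sprintf "Received ERROR frame: %s", $frame->body // q{};
        $conn->disconnect;
        undef $conn;    # let the loop's reconnect logic take over
    }
    elsif ( $frame->command eq 'MESSAGE' ) {
        # Only MESSAGE frames carry a message-id header, so only they can be
        # ACKed/NACKed without triggering RabbitMQ's "Invalid header" error.
        ...;
    }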
According to the STOMP spec:

"If a frame body is present, the SEND, MESSAGE and ERROR frames SHOULD include a content-type header to help the receiver of the frame interpret its body. If the content-type header is set, its value MUST be a MIME type which describes the format of the body. Otherwise, the receiver SHOULD consider the body to be a binary blob."

All the more reason to include a content-type header...
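With Net::Stomp that is a one-header change on the producer side, and on the consumer side the header can gate the decode. A sketch (the queue name, credentials and payload are made up for illustration):

    use Modern::Perl;
    use Net::Stomp;
    use JSON qw( encode_json decode_json );

    my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
    $stomp->connect( { login => 'guest', passcode => 'guest' } );

    # Producer: declare the body's format; keys other than destination and
    # body passed to send() become frame headers.
    $stomp->send(
        {
            destination    => '/queue/koha_kohadev-long_tasks',
            body           => encode_json( { job_id => 42 } ),
            'content-type' => 'application/json',
        }
    );

    # Consumer: only attempt a JSON decode when the header says it is JSON.
    $stomp->subscribe( { destination => '/queue/koha_kohadev-long_tasks', ack => 'client' } );
    my $frame        = $stomp->receive_frame;
    my $content_type = $frame->headers->{'content-type'} // q{};
    if ( $content_type =~ m{^application/json} ) {
        my $args = decode_json( $frame->body );
    }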
I can't reproduce the '"NACK" must include a valid "message-id" header' error on ktd. It was on an older version of RabbitMQ, so maybe that's part of the problem.

In any case, I've decided it probably makes more sense to crash the worker rather than trying to force a disconnect/reconnect, because reconnecting won't fix anything. Whereas if we crash the worker, the daemon manager will try restarts, and if it restarts too frequently it will back off and retry later. So that should work for both acute and chronic ERROR frames.

But I figure I'll just post the work-in-progress and see if anyone has any thoughts on it.
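i.e. roughly this at the top of the frame handling (a sketch; the exact message in the posted patch may differ):

    # Crash on ERROR frames and let the daemon manager handle restarts
    # and back-off, rather than spinning on a broken subscription.
    if ( $frame->command eq 'ERROR' ) {
        die sprintf "Received unexpected ERROR frame: %s\n", $frame->body // q{};
    }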
Created attachment 178068 [details] [review]
Bug 34070: WIP
To test ERROR handling:
1. Switch to using Elasticsearch
2. Change your connection details in koha-conf.xml or the hard-coded ones in Koha/BackgroundJob.pm
3. sudo koha-es-indexer --restart kohadev
4. View the errors in the logs:
   tail -f /var/log/koha/kohadev/es-indexer-output.log
   tail -f /var/log/koha/kohadev/es-indexer-error.log

To test MESSAGE handling:
1. Switch to using Elasticsearch
2. Change the title of a bib record to something new and unique
3. Try to search for the bib using the new title
4. Note you can find it
5. Look in the logs just to make sure there are no messages there
Got the NACK error again:

2025-03-07 06:24:16.820842+11:00 [error] <0.851.0> STOMP error frame sent:
2025-03-07 06:24:16.820842+11:00 [error] <0.851.0> Message: "Invalid header"
2025-03-07 06:24:16.820842+11:00 [error] <0.851.0> Detail: "\"NACK\" must include a valid \"message-id\" header\n"
2025-03-07 06:24:16.820842+11:00 [error] <0.851.0> Server private detail: none
There is a potential issue with my patch, and it's the transitional period where theoretically there could be enqueued messages without a content-type of "application/json".

In practice, that's unlikely to happen if you're careful with your deployment, but theoretically it could. So I'll attempt the decode even when the content-type isn't application/json. We can always change that later after this has been running for a while.
(In reply to David Cook from comment #16)
> There is a potential issue with my patch, and it's the transitional period
> where theoretically there could be enqueued messages without a content-type
> of "application/json".
>
> In practice, that's unlikely to happen if you're careful with your
> deployment, but theoretically it could. So I'll attempt the decode even
> when the content-type isn't application/json. We can always change that
> later after this has been running for a while.

Of course, this could always be a problem when someone does a big upgrade over many versions. But I think it's still best to include this fallback operation in the short term (maybe a couple of years) to help the majority of deployments.
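The transitional logic could look something like this (a sketch; treating a missing content-type as presumed JSON is exactly the fallback being described, and $frame/$args are the worker loop's variables):

    my $content_type = $frame->headers->{'content-type'};
    if ( !defined $content_type || $content_type =~ m{^application/json} ) {
        # Messages enqueued by an older Koha carry no content-type header,
        # so still try to decode those as JSON.
        $args = decode_json( $frame->body );
    }
    else {
        warn sprintf "Skipping frame with unexpected content-type '%s'", $content_type;
    }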
Created attachment 179024 [details] [review]
Bug 34070: Add error handling for ERROR frames

This change causes es_indexer_daemon.pl to die on ERROR frames instead
of spinning in an infinite loop.

It also adds a content-type STOMP frame header to help consumers know
what data type is in the MESSAGE frame body.

To test ERROR handling for es_indexer_daemon.pl:
1. Switch to using Elasticsearch
2. Change your RabbitMQ connection details in koha-conf.xml or the
   hard-coded ones in Koha/BackgroundJob.pm to include an invalid
   password. For example, if you don't have a message_broker already
   defined, use the following:
   <message_broker>
       <password>bad</password>
   </message_broker>
3. sudo koha-es-indexer --restart kohadev
4. View the errors in the logs:
   tail -f /var/log/koha/kohadev/es-indexer-output.log
   tail -f /var/log/koha/kohadev/es-indexer-error.log

To test MESSAGE handling for es_indexer_daemon.pl:
1. Switch to using Elasticsearch (and ensure RabbitMQ connection
   details are correct)
2. Change the title of a bib record to something new and unique
3. Try to search for the bib using the new title
4. Note you can find it
5. Look in the logs just to make sure there are no messages there
So if the background worker crashes 5 times quickly, it'll back off for 30 seconds. The logging is about 600 bytes in the Koha error log per 30 seconds; in 24 hours that works out to a log file of about 1.7MB. Much better than multi-GB log files. The output logs write a bit more: about 3.9MB after 24 hours. The RabbitMQ logs are 3370 bytes every 30 seconds, which is about 9.3MB over 24 hours.

--

Overall, this means you still get lots of errors in your logs for a CONNECT error, which is good, because you want a record of the problem, but at a much more manageable pace. Also, for ERROR frames that are cleared by a restart of the message consumer, it should auto-heal.
Created attachment 179025 [details] [review]
Bug 34070: Add error handling for ERROR frames for background worker

This change adds error handling for ERROR frames to
background_jobs_worker.pl. A previous patch added it to
es_indexer_daemon.pl, which is a different background jobs worker.

To test ERROR handling for background_jobs_worker.pl:
1. Change your RabbitMQ connection details in koha-conf.xml or the
   hard-coded ones in Koha/BackgroundJob.pm to include an invalid
   password. For example, if you don't have a message_broker already
   defined, use the following:
   <message_broker>
       <password>bad</password>
   </message_broker>
2. sudo koha-worker --restart --quiet kohadev
3. View the errors in the logs:
   tail -f /var/log/koha/kohadev/worker-output.log
   tail -f /var/log/koha/kohadev/worker-error.log

To test MESSAGE handling for background_jobs_worker.pl:
1. Ensure RabbitMQ connection details are correct
2. Go to Batch item modification and add the following barcodes:
   39999000001310
   39999000004571
3. Add a z - Public Note of the following and click Save:
   THEBACKGROUNDWORKERWORKED
4. Search the catalogue using THEBACKGROUNDWORKERWORKED in the top
   catalogue search
5. Note that you get 2 results in your search
6. Look in the logs just to make sure there are no new messages there
I'm actually pretty happy with these patches, and I'm going to start using them in my own systems to combat this issue where NACKing the error frame creates an infinite loop.

For future reference, the issue appears on:
Ubuntu 22.04
rabbitmq-server 3.9.27-0ubuntu-0.1

I haven't reproduced the NACKing issue on KTD using:
Debian 12 bookworm
rabbitmq-server 3.10.8-1.1+deb12u1

Note I haven't tried manually reproducing the issue on the Ubuntu system either. That's just where I've seen it in the wild.
I don't think this is optimal, I am working on an alternative patch.
Created attachment 179094 [details] [review]
Bug 34070: Deal with broker connection issues

This patch suggests dealing with connection issues at the
Koha::BackgroundJob->connect level. It incorrectly returned a
Net::Stomp object when the connection failed. With this patch, if the
CONNECT does not return a CONNECTED frame, then
Koha::BackgroundJob->connect returns nothing and the caller is not
fooled.

It also fixes the about page; we now have:
"Using SQL polling (Fallback, Error connecting to RabbitMQ)"
instead of
"Using RabbitMQ"
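In Net::Stomp terms the check amounts to inspecting the frame that connect() returns (a sketch of the idea, with made-up connection details, not the literal patch):

    use Net::Stomp;

    # connect() sends a CONNECT frame and returns whatever frame the server
    # answers with; on a refused login that is an ERROR frame, not CONNECTED.
    sub connect_to_broker {
        my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
        my $frame = $stomp->connect( { login => 'guest', passcode => 'guest' } );

        unless ( $frame && $frame->command eq 'CONNECTED' ) {
            warn sprintf "Cannot connect to broker (%s)",
                $frame ? $frame->body // 'no body' : 'no frame received';
            return;    # caller gets undef instead of a half-dead Net::Stomp object
        }
        return $stomp;
    }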
Created attachment 179095 [details] [review]
Bug 34070: Add tests
Actually, both approaches are compatible, I think. I have not tested with yours and mine together, but I'm attaching now to get feedback from you, David.

If you agree that those patches are good here, I will have another round.
(In reply to Jonathan Druart from comment #22)
> I don't think this is optimal, I am working on an alternative patch.

(In reply to Jonathan Druart from comment #25)
> Actually, both approaches are compatible, I think. I have not tested with
> yours and mine together, but I'm attaching now to get feedback from you,
> David.
>
> If you agree that those patches are good here, I will have another round.

I think the approaches are compatible as well, since we're solving different problems.

I like your solution for detecting a failed connection early and then allowing a graceful fallback to DB processing of background jobs (for workers). I haven't tested it but it looks OK at a glance. (Although I'd "die" in the try{} rather than repeat the code for the warning and undefing of stomp twice.)

In my test, I use the failed connection as an example because it was easy to reproduce, but that's not my primary concern. My concern is the ERROR frames I'm getting for "\"NACK\" must include a valid \"message-id\" header\n". Personally, I don't get the CONNECT problems, but I do sometimes get these odd NACK ones on successful connections. I haven't been able to reproduce it locally, unfortunately.
Created attachment 179214 [details] [review]
Bug 34070: Add error handling for ERROR frames

This change causes es_indexer_daemon.pl to die on ERROR frames instead
of spinning in an infinite loop.

It also adds a content-type STOMP frame header to help consumers know
what data type is in the MESSAGE frame body.

To test ERROR handling for es_indexer_daemon.pl:
1. Switch to using Elasticsearch
2. Change your RabbitMQ connection details in koha-conf.xml or the
   hard-coded ones in Koha/BackgroundJob.pm to include an invalid
   password. For example, if you don't have a message_broker already
   defined, use the following:
   <message_broker>
       <password>bad</password>
   </message_broker>
3. sudo koha-es-indexer --restart kohadev
4. View the errors in the logs:
   tail -f /var/log/koha/kohadev/es-indexer-output.log
   tail -f /var/log/koha/kohadev/es-indexer-error.log

To test MESSAGE handling for es_indexer_daemon.pl:
1. Switch to using Elasticsearch (and ensure RabbitMQ connection
   details are correct)
2. Change the title of a bib record to something new and unique
3. Try to search for the bib using the new title
4. Note you can find it
5. Look in the logs just to make sure there are no messages there

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Created attachment 179215 [details] [review]
Bug 34070: Add error handling for ERROR frames for background worker

This change adds error handling for ERROR frames to
background_jobs_worker.pl. A previous patch added it to
es_indexer_daemon.pl, which is a different background jobs worker.

To test ERROR handling for background_jobs_worker.pl:
1. Change your RabbitMQ connection details in koha-conf.xml or the
   hard-coded ones in Koha/BackgroundJob.pm to include an invalid
   password. For example, if you don't have a message_broker already
   defined, use the following:
   <message_broker>
       <password>bad</password>
   </message_broker>
2. sudo koha-worker --restart --quiet kohadev
3. View the errors in the logs:
   tail -f /var/log/koha/kohadev/worker-output.log
   tail -f /var/log/koha/kohadev/worker-error.log

To test MESSAGE handling for background_jobs_worker.pl:
1. Ensure RabbitMQ connection details are correct
2. Go to Batch item modification and add the following barcodes:
   39999000001310
   39999000004571
3. Add a z - Public Note of the following and click Save:
   THEBACKGROUNDWORKERWORKED
4. Search the catalogue using THEBACKGROUNDWORKERWORKED in the top
   catalogue search
5. Note that you get 2 results in your search
6. Look in the logs just to make sure there are no new messages there

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Created attachment 179216 [details] [review]
Bug 34070: Deal with broker connection issues

This patch suggests dealing with connection issues at the
Koha::BackgroundJob->connect level. It incorrectly returned a
Net::Stomp object when the connection failed. With this patch, if the
CONNECT does not return a CONNECTED frame, then
Koha::BackgroundJob->connect returns nothing and the caller is not
fooled.

It also fixes the about page; we now have:
"Using SQL polling (Fallback, Error connecting to RabbitMQ)"
instead of
"Using RabbitMQ"

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Created attachment 179217 [details] [review]
Bug 34070: Add tests

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Created attachment 179218 [details] [review]
Bug 34070: Display the 'jobs will be processed anyway' message from worker

->connect does not raise an exception, so we never reached the catch
block. Adjusting the code; I am not really happy with it, but more
changes will derail the original bug.

Now we log:
Cannot connect to broker (Access refused for user 'guest') at /kohadevbox/koha/Koha/BackgroundJob.pm line 96.
Cannot connect to the message broker, the jobs will be processed anyway at /kohadevbox/koha/misc/workers/background_jobs_worker.pl line 100.

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
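Presumably the worker-side half is just a return-value check, since ->connect now returns nothing rather than dying (a sketch based on the log lines above, not the literal patch):

    use Koha::BackgroundJob;

    # ->connect returns undef on failure instead of raising an exception,
    # so checking the return value is what triggers the fallback message.
    my $conn = Koha::BackgroundJob->connect;
    warn "Cannot connect to the message broker, the jobs will be processed anyway"
        unless $conn;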
I'll stop here; there is too much to do.

I am still willing to work on the worker code, but we need to consolidate the code first. See bug 35920.
(In reply to Jonathan Druart from comment #32)
> I'll stop here; there is too much to do.
>
> I am still willing to work on the worker code, but we need to consolidate
> the code first. See bug 35920.

Yeah, bug 35920 would be great.

I'm having problems in the real world, so I'm going to backport my initial patches to solve an immediate problem.

Happy to keep working on this together, though, whenever that might be.
(In reply to David Cook from comment #33)
> (In reply to Jonathan Druart from comment #32)
> > I'll stop here; there is too much to do.
> >
> > I am still willing to work on the worker code, but we need to consolidate
> > the code first. See bug 35920.
>
> Yeah, bug 35920 would be great.
>
> I'm having problems in the real world, so I'm going to backport my initial
> patches to solve an immediate problem.
>
> Happy to keep working on this together, though, whenever that might be.

Not sure how we should proceed here. The patches apply, the tests pass, and the code looks good to me at first glance. And yes, this area needs further attention; see several other reports.

Jonathan, David: any comments, please?
(In reply to Marcel de Rooy from comment #34)
> Not sure how we should proceed here. The patches apply, the tests pass, and
> the code looks good to me at first glance. And yes, this area needs further
> attention; see several other reports.
>
> Jonathan, David: any comments, please?

I'm not sure how we should proceed here either. It feels like we're stuck, and I don't know why.

At this point, I'm tempted to give up on these patches and just patch the issue locally, as I've been bitten by this bug too many times to keep waiting. I'd prefer to get this fixed in main, but I don't know what else to do to move things along.
IIRC the status was correct, this is ready for QA.
(In reply to Jonathan Druart from comment #36)
> IIRC the status was correct, this is ready for QA.

Oh great! :D
Created attachment 182570 [details] [review]
Bug 34070: Add error handling for ERROR frames

This change causes es_indexer_daemon.pl to die on ERROR frames instead
of spinning in an infinite loop.

It also adds a content-type STOMP frame header to help consumers know
what data type is in the MESSAGE frame body.

To test ERROR handling for es_indexer_daemon.pl:
1. Switch to using Elasticsearch
2. Change your RabbitMQ connection details in koha-conf.xml or the
   hard-coded ones in Koha/BackgroundJob.pm to include an invalid
   password. For example, if you don't have a message_broker already
   defined, use the following:
   <message_broker>
       <password>bad</password>
   </message_broker>
3. sudo koha-es-indexer --restart kohadev
4. View the errors in the logs:
   tail -f /var/log/koha/kohadev/es-indexer-output.log
   tail -f /var/log/koha/kohadev/es-indexer-error.log

To test MESSAGE handling for es_indexer_daemon.pl:
1. Switch to using Elasticsearch (and ensure RabbitMQ connection
   details are correct)
2. Change the title of a bib record to something new and unique
3. Try to search for the bib using the new title
4. Note you can find it
5. Look in the logs just to make sure there are no messages there

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 182571 [details] [review]
Bug 34070: Add error handling for ERROR frames for background worker

This change adds error handling for ERROR frames to
background_jobs_worker.pl. A previous patch added it to
es_indexer_daemon.pl, which is a different background jobs worker.

To test ERROR handling for background_jobs_worker.pl:
1. Change your RabbitMQ connection details in koha-conf.xml or the
   hard-coded ones in Koha/BackgroundJob.pm to include an invalid
   password. For example, if you don't have a message_broker already
   defined, use the following:
   <message_broker>
       <password>bad</password>
   </message_broker>
2. sudo koha-worker --restart --quiet kohadev
3. View the errors in the logs:
   tail -f /var/log/koha/kohadev/worker-output.log
   tail -f /var/log/koha/kohadev/worker-error.log

To test MESSAGE handling for background_jobs_worker.pl:
1. Ensure RabbitMQ connection details are correct
2. Go to Batch item modification and add the following barcodes:
   39999000001310
   39999000004571
3. Add a z - Public Note of the following and click Save:
   THEBACKGROUNDWORKERWORKED
4. Search the catalogue using THEBACKGROUNDWORKERWORKED in the top
   catalogue search
5. Note that you get 2 results in your search
6. Look in the logs just to make sure there are no new messages there

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 182572 [details] [review]
Bug 34070: Deal with broker connection issues

This patch suggests dealing with connection issues at the
Koha::BackgroundJob->connect level. It incorrectly returned a
Net::Stomp object when the connection failed. With this patch, if the
CONNECT does not return a CONNECTED frame, then
Koha::BackgroundJob->connect returns nothing and the caller is not
fooled.

It also fixes the about page; we now have:
"Using SQL polling (Fallback, Error connecting to RabbitMQ)"
instead of
"Using RabbitMQ"

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 182573 [details] [review]
Bug 34070: Add tests

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 182574 [details] [review]
Bug 34070: Display the 'jobs will be processed anyway' message from worker

->connect does not raise an exception, so we never reached the catch
block. Adjusting the code; I am not really happy with it, but more
changes will derail the original bug.

Now we log:
Cannot connect to broker (Access refused for user 'guest') at /kohadevbox/koha/Koha/BackgroundJob.pm line 96.
Cannot connect to the message broker, the jobs will be processed anyway at /kohadevbox/koha/misc/workers/background_jobs_worker.pl line 100.

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Pushed for 25.05! Well done everyone, thank you!