Bug 34070 - background_jobs_worker.pl floods logs when not connecting
Summary: background_jobs_worker.pl floods logs when not connecting
Status: RESOLVED INVALID
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: 22.11
Hardware: All
OS: All
Importance: P5 - low major
Assignee: Bugs List
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2023-06-20 17:21 UTC by Magnus Enger
Modified: 2024-01-19 13:42 UTC
CC List: 2 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments

Description Magnus Enger 2023-06-20 17:21:36 UTC
I noticed my disk was filling up, because worker-output.log was getting several hundred of these per second: 

[2023/06/20 18:21:24] [WARN] Frame not processed - malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "You must log in usin...") at /usr/share/koha/bin/background_jobs_worker.pl line 116.
 main::catch {...}  /usr/share/koha/bin/background_jobs_worker.pl (119)

I think the problem is related to this code: 

106 while (1) {
107     if ( $conn ) {
108         my $frame = $conn->receive_frame;
109         if ( !defined $frame ) {
110             # maybe log connection problems
111             next;    # will reconnect automatically
112         }
113 
114         my $args = try {
115             my $body = $frame->body;
116             decode_json($body); # TODO Should this be from_json? Check utf8 flag.
117         } catch {
118             Koha::Logger->get({ interface => 'worker' })->warn(sprintf "Frame not processed - %s", $_);
119             return;
120         } finally {
121             $conn->ack( { frame => $frame } );
122         };
123 
124         next unless $args;
125 
126         # FIXME This means we need to have create the DB entry before
127         # It could work in a first step, but then we will want to handle job that will be created from the message received
128         my $job = Koha::BackgroundJobs->find($args->{job_id});

There is a problem with the connection, but $conn is still defined, so this code is run. If I dump $conn after line 107, it looks like this:

$VAR1 = bless( {
                 'bufsize' => 8192,
                 'logger' => bless( {
                                      'warn' => 1,
                                      'error' => 1,
                                      'fatal' => 1
                                    }, 'Net::Stomp::StupidLogger' ),
                 '_framebuf_changed' => 1,
                 'port' => '61613',
                 'connect_delay' => 5,
                 '_pid' => ***,
                 'hostname' => 'localhost',
                 'reconnect_attempts' => 0,
                 'reconnect_on_fork' => 1,
                 'select' => bless( [
                                      '�',
                                      1,
                                      undef,
                                      undef,
                                      undef,
                                      undef,
                                      undef,
                                      undef,
                                      undef,
                                      bless( \*Symbol::GEN0, 'IO::Socket::IP' )
                                    ], 'IO::Select' ),
                 'socket' => $VAR1->{'select'}[9],
                 'initial_reconnect_attempts' => 1,
                 '_framebuf' => '',
                 'socket_options' => {},
                 'subscriptions' => {
                                      'dest-/queue/koha_***-default' => {
                                                                               'ack' => 'client',
                                                                               'destination' => '/queue/koha_***-default',
                                                                               'prefetch-count' => 1
                                                                             }
                                    }
               }, 'Net::Stomp' );

If I dump the contents of $body after line 115, I get 'You must log in using CONNECT first'. Not sure why that happens yet, but how we handle it seems to be problematic. We fail to decode it, because it is not JSON. So the "catch" block is run and logs the problem. But then I think the "return" kicks us back to the start of the "while" and we end up logging the same error as fast as we can, which seems to be pretty fast. And we never get to the "ack" on line 121.

The comment "will reconnect automatically" seems to be a little optimistic? 

Moving the "return" from the end of the "catch" block, to the end of the "finally" block slows things down, now the logging only happens about every other second.

But there is also a problem because we are assigning the "output" of the try/catch to $args. If we can decode_json the $body, that works as expected. But if the "catch" block is executed, it looks like $args gets assigned the return value of the Koha::Logger->get(...)->warn(...) call, which seems to be 1. So $args is defined on line 124 and we get to Koha::BackgroundJobs->find, but using "1" as a hashref there does not work.
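
To illustrate the Try::Tiny behaviour, here is a minimal stand-alone sketch (not Koha code; plain warn() stands in for the Koha::Logger call) showing how the last expression of the "catch" block becomes the value assigned to $args when decoding fails:

    use strict;
    use warnings;
    use Try::Tiny;
    use JSON qw( decode_json );

    # Simulate the broker handing us a plain-text error instead of JSON
    my $body = 'You must log in using CONNECT first';

    my $args = try {
        decode_json($body);              # dies, because $body is not JSON
    } catch {
        warn "Frame not processed - $_"; # warn() returns 1 ...
    };                                   # ... and the last value of the catch block
                                         # becomes the value of the whole try/catch

    print defined $args ? "args = $args\n" : "args is undef\n";

This prints "args = 1", which would explain the behaviour above. Declaring $args first and only assigning to it inside the "try" block, as in the snippet below, avoids that.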

I think maybe something like this would work better: 

         my $args;
         try {
             my $body = $frame->body;
             $args = decode_json($body); # TODO Should this be from_json? Check utf8 flag.
         } catch {
             Koha::Logger->get({ interface => 'worker' })->warn(sprintf "Frame not processed - %s", $_);
         } finally {
             $conn->ack( { frame => $frame } );
             return;
         };

Is the return needed? This is a little above my head, so other solutions are most welcome. And if anyone can tell me why I get 'You must log in using CONNECT first', I would be most grateful. :-)
Comment 1 Magnus Enger 2023-06-20 17:45:39 UTC
Looks like the problem with connecting might be because I have not added the proper config to koha-conf.xml: 

 <message_broker>
   <hostname>__MESSAGE_BROKER_HOST__</hostname>
   <port>__MESSAGE_BROKER_PORT__</port>
   <username>__MESSAGE_BROKER_USER__</username>
   <password>__MESSAGE_BROKER_PASS__</password>
   <vhost>__MESSAGE_BROKER_VHOST__</vhost>
 </message_broker>
Comment 2 Jonathan Druart 2023-06-21 06:53:01 UTC
(In reply to Magnus Enger from comment #1)
> Looks like the problem with connecting might be because I have not added the
> proper config to koha-conf.xml: 
> 
>  <message_broker>
>    <hostname>__MESSAGE_BROKER_HOST__</hostname>
>    <port>__MESSAGE_BROKER_PORT__</port>
>    <username>__MESSAGE_BROKER_USER__</username>
>    <password>__MESSAGE_BROKER_PASS__</password>
>    <vhost>__MESSAGE_BROKER_VHOST__</vhost>
>  </message_broker>

You should not need it. Did you try a restart_all? If rabbitmq cannot be reached, we are supposed to fall back to the DB (and bypass the broker).

However, I can see a problem if the daemon (background_jobs_worker.pl) has been started and the connection is lost afterwards.
Comment 3 Magnus Enger 2023-06-21 07:13:25 UTC
(In reply to Jonathan Druart from comment #2)
> You should not need it. Did you try a restart_all? If rabbitmq cannot be
> reached we are supposed to fallback to the DB (and bypass the broker).

I spotted this on a production server, so there is no restart_all. I did restart rabbit and koha-worker, but that did not change anything. Then I restarted the server, but kept getting the same errors. Then I added the message_broker config to one of the six sites on the server, restarted koha-worker (I think it was), and the problem went away for all the sites. So it seems kind of random...
Comment 4 Tomás Cohen Arazi 2023-07-12 14:24:29 UTC
I've just spotted this in production, on an instance that has been upgraded several versions over the years, up through 22.11. The configuration seemed to be correct.

We've been searching for a stray broken message for a while in vain.

What we found in the rabbit queue logs is that the connection to the broker was not successful. This is the reason the disk fills up: the worker gets a message saying the credentials are invalid or don't have enough permissions, which is a plain string ("You must log in usin...") rather than JSON, so decoding it as JSON explodes and the worker retries.

We initially thought it was about the guest user not existing, but it existed:

$ sudo rabbitmqctl add_user guest guest
Adding user "guest" ...
Error:
User "guest" already exists

Then we found a line in the rabbit logs saying the 'vhost' didn't exist (koha_frontera in this case). The login/permissions problem was easier to spot, so we missed this initially:

vhost koha_frontera not found
2023-07-12 09:56:31.983 [warning] <0.902.0> STOMP login failed - not_allowed (vhost access not allowed)~n
2023-07-12 09:56:31.983 [error] <0.902.0> STOMP error frame sent:
Message: "Bad CONNECT"
Detail: "Virtual host 'koha_frontera' access denied"
Server private detail: none

We created it:

$ sudo rabbitmqctl add_vhost koha_frontera

Then it started saying (again) that there was a permissions issue, so we gave 'guest' permissions over the 'koha_frontera' vhost:

$ sudo rabbitmqctl set_permissions -p 'koha_frontera' guest ".*" ".*" ".*"
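
For completeness, the resulting vhosts and permissions can be double-checked with:

$ sudo rabbitmqctl list_vhosts
$ sudo rabbitmqctl list_permissions -p koha_frontera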

Then we restarted everything, tried staging a file, and all the logs look correct.

I'm leaving this here and closing this bug as invalid, as it looks like a maintenance problem.
Comment 5 Magnus Enger 2024-01-19 13:42:35 UTC
I had a similar problem today and finally got round to digging a bit deeper into RabbitMQ. Here are some of the things I found: 

$ sudo rabbitmq-diagnostics check_virtual_hosts
Checking if all vhosts are running on node rabbit@kohaswe ...
Error:
Some virtual hosts on node rabbit@kohaswe are down:
/

$ sudo rabbitmqctl list_queues --offline state name
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
state	name
down	koha_mykoha-batch_authority_record_modification
down	koha_mykoha-batch_item_record_modification
down	koha_mykoha-long_tasks
...

$ sudo rabbitmqctl restart_vhost
Trying to restart vhost '/' on node 'rabbit@kohaswe' ...
Error:
Failed to start vhost '/' on node 'rabbit@kohaswe'Reason: {:shutdown, {:failed_to_start_child, :rabbit_vhost_process, {:badmatch, {:error, {{{:badmatch, {:error, {:not_a_dets_file, '/var/lib/rabbitmq/mnesia/rabbit@kohaswe/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/recovery.dets'}}}, [{:rabbit_recovery_terms, :open_table, 1, [file: 'src/rabbit_recovery_terms.erl', line: 199]}, {:rabbit_recovery_terms, :init, 1, [file: 'src/rabbit_recovery_terms.erl', line: 179]}, {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 374]}, {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 342]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}, {:child, :undefined, :rabbit_recovery_terms, {:rabbit_recovery_terms, :start_link, ["/"]}, :transient, 30000, :worker, [:rabbit_recovery_terms]}}}}}}

I found a solution to the problem here: https://stackoverflow.com/questions/58689551/rabbitmq-vhost-is-down-for-user-xyz-even-after-user-has-all-access. After effectively wiping all the queues, I got things running again.