Summary: | background_jobs_worker.pl floods logs when not connecting | ||
---|---|---|---|
Product: | Koha | Reporter: | Magnus Enger <magnus> |
Component: | Architecture, internals, and plumbing | Assignee: | Bugs List <koha-bugs> |
Status: | RESOLVED INVALID | QA Contact: | Testopia <testopia> |
Severity: | major | ||
Priority: | P5 - low | CC: | indradg, jonathan.druart, joseanjos, tomascohen, yuriy.kotsyuk |
Version: | 22.11 | ||
Hardware: | All | ||
OS: | All | ||
Change sponsored?: | --- | Patch complexity: | --- |
Documentation contact: | | Documentation submission: |
Text to go in the release notes: | | Version(s) released in: |
Circulation function: | | |
Description
Magnus Enger
2023-06-20 17:21:36 UTC
Looks like the problem with connecting might be because I have not added the proper config to koha-conf.xml:

<message_broker>
  <hostname>__MESSAGE_BROKER_HOST__</hostname>
  <port>__MESSAGE_BROKER_PORT__</port>
  <username>__MESSAGE_BROKER_USER__</username>
  <password>__MESSAGE_BROKER_PASS__</password>
  <vhost>__MESSAGE_BROKER_VHOST__</vhost>
</message_broker>

(In reply to Magnus Enger from comment #1)
> Looks like the problem with connecting might be because I have not added
> the proper config to koha-conf.xml:
>
> <message_broker>
>   <hostname>__MESSAGE_BROKER_HOST__</hostname>
>   <port>__MESSAGE_BROKER_PORT__</port>
>   <username>__MESSAGE_BROKER_USER__</username>
>   <password>__MESSAGE_BROKER_PASS__</password>
>   <vhost>__MESSAGE_BROKER_VHOST__</vhost>
> </message_broker>

You should not need it. Did you try a restart_all? If rabbitmq cannot be reached, we are supposed to fall back to the DB (and bypass the broker). However, I can see a problem if the daemon (background_jobs_worker.pl) has been started and the connection is lost afterwards.

(In reply to Jonathan Druart from comment #2)
> You should not need it. Did you try a restart_all? If rabbitmq cannot be
> reached, we are supposed to fall back to the DB (and bypass the broker).

I spotted this on a production server, so there is no restart_all. But I did restart rabbit and koha-worker, and that did not change anything. Then I restarted the server, but kept getting the same errors. Then I added the message_broker config to one of the six sites on the server, restarted koha-worker (I think it was), and the problem went away for all the sites. So it seems kind of random...

I've just spotted this in production, on an installation upgraded several versions over the years, through 22.11. The configuration seemed to be correct, and we had been searching in vain for a stray broken message. What we found in the RabbitMQ logs is that the connection to the broker was not successful. This is the reason for the disk filling up: the worker gets a message saying the credentials are invalid or do not have enough permissions, which is a plain string ("You must log in usin..."); it explodes when trying to decode that as JSON, and retries.

We initially thought it was about the guest user not existing, but it existed:

$ sudo rabbitmqctl add_user guest guest
Adding user "guest" ...
Error: User "guest" already exists

Then we found a line in the RabbitMQ logs saying the 'vhost' didn't exist (koha_frontera in this case). The login/permissions problem was easier to spot, so we missed it initially:

vhost koha_frontera not found
2023-07-12 09:56:31.983 [warning] <0.902.0> STOMP login failed - not_allowed (vhost access not allowed)~n
2023-07-12 09:56:31.983 [error] <0.902.0> STOMP error frame sent:
Message: "Bad CONNECT"
Detail: "Virtual host 'koha_frontera' access denied"
Server private detail: none

We created it:

$ sudo rabbitmqctl add_vhost koha_frontera

Then it started saying (again) that there was a permissions issue, so we gave 'guest' permissions over the 'koha_frontera' vhost:

$ sudo rabbitmqctl set_permissions -p 'koha_frontera' guest ".*" ".*" ".*"

And then we restarted everything, tried staging a file, and all the logs look correct. I'm leaving this here and closing this bug as invalid, as it looks like a maintenance problem.
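A note for anyone landing here with the same symptoms: the __MESSAGE_BROKER_*__ strings quoted in comment #1 are untouched placeholders from Koha's template koha-conf.xml, not values a broker will accept. A filled-in stanza might look like the sketch below; the hostname, credentials, and vhost are illustrative assumptions for a single-server setup (61613 is the standard STOMP port), not official Koha defaults:

<message_broker>
  <hostname>localhost</hostname>
  <port>61613</port>
  <username>guest</username>
  <password>guest</password>
  <vhost>koha_frontera</vhost>
</message_broker>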
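The chain of discoveries above (user exists, vhost missing, then permissions missing) can also be checked up front instead of one error at a time. A minimal diagnostic sketch, assuming the koha_frontera vhost from this comment (substitute your own):

$ sudo rabbitmqctl list_users                           # does 'guest' exist?
$ sudo rabbitmqctl list_vhosts                          # does the vhost exist?
$ sudo rabbitmqctl list_permissions -p koha_frontera    # may 'guest' use it?

If the last command errors out, the vhost itself is missing, which was the case here.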
I had a similar problem today and finally got round to digging a bit deeper into RabbitMQ. Here are some of the things I found:

$ sudo rabbitmq-diagnostics check_virtual_hosts
Checking if all vhosts are running on node rabbit@kohaswe ...
Error: Some virtual hosts on node rabbit@kohaswe are down: /

$ sudo rabbitmqctl list_queues --offline state name
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
state   name
down    koha_mykoha-batch_authority_record_modification
down    koha_mykoha-batch_item_record_modification
down    koha_mykoha-long_tasks
...

$ sudo rabbitmqctl restart_vhost
Trying to restart vhost '/' on node 'rabbit@kohaswe' ...
Error: Failed to start vhost '/' on node 'rabbit@kohaswe'
Reason: {:shutdown, {:failed_to_start_child, :rabbit_vhost_process,
  {:badmatch, {:error, {{{:badmatch, {:error, {:not_a_dets_file,
    '/var/lib/rabbitmq/mnesia/rabbit@kohaswe/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/recovery.dets'}}},
  [{:rabbit_recovery_terms, :open_table, 1, [file: 'src/rabbit_recovery_terms.erl', line: 199]},
   {:rabbit_recovery_terms, :init, 1, [file: 'src/rabbit_recovery_terms.erl', line: 179]},
   {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 374]},
   {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 342]},
   {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]},
  {:child, :undefined, :rabbit_recovery_terms, {:rabbit_recovery_terms, :start_link, ["/"]},
   :transient, 30000, :worker, [:rabbit_recovery_terms]}}}}}}

I found a solution to the problem here:
https://stackoverflow.com/questions/58689551/rabbitmq-vhost-is-down-for-user-xyz-even-after-user-has-all-access
and after effectively wiping all the queues, I got things running again.

*** Bug 37164 has been marked as a duplicate of this bug. ***
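For completeness: the fix in that Stack Overflow thread amounts to removing the vhost's on-disk message store so RabbitMQ recreates it on startup, which destroys any queued messages. A rough sketch, assuming the Debian paths visible in the error above and that losing queued background jobs is acceptable (move rather than delete, so it can be undone):

$ sudo systemctl stop rabbitmq-server
$ sudo mv /var/lib/rabbitmq/mnesia/rabbit@kohaswe/msg_stores/vhosts \
          /var/lib/rabbitmq/mnesia/rabbit@kohaswe/msg_stores/vhosts.bak
$ sudo systemctl start rabbitmq-server
$ sudo rabbitmq-diagnostics check_virtual_hosts    # '/' should now be running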
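Once RabbitMQ is healthy again, the Koha workers need a restart too, since a worker that lost its connection keeps retrying (the log flooding this bug is about). A sketch for a Debian-package install, assuming an instance named 'mykoha' as in the queue names above; the log file name is an assumption based on the usual /var/log/koha layout:

$ sudo koha-worker --restart mykoha
$ sudo tail -f /var/log/koha/mykoha/worker-error.log    # should stay quiet now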