Bug 35111 - Background jobs worker crashes on SIGPIPE when database connection lost in Ubuntu 22.04
Summary: Background jobs worker crashes on SIGPIPE when database connection lost in Ubuntu 22.04
Status: Pushed to oldstable
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: Main
Hardware: All
OS: All
Importance: P1 - high critical
Assignee: David Cook
QA Contact: Marcel de Rooy
URL:
Keywords:
Depends on:
Blocks: 35092
Reported: 2023-10-19 23:30 UTC by David Cook
Modified: 2024-01-24 00:34 UTC
CC List: 7 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in: 23.11.00, 23.05.05, 22.11.12


Attachments
Bug 35111: Ignore SIGPIPE in background jobs worker (1.62 KB, patch)
2023-10-20 01:39 UTC, David Cook
Bug 35111: Ignore SIGPIPE in background jobs worker (1.83 KB, patch)
2023-10-25 12:13 UTC, Marcel de Rooy
Bug 35111: Ignore SIGPIPE in background jobs worker (1.85 KB, patch)
2023-10-27 15:40 UTC, Victor Grousset/tuxayo

Description David Cook 2023-10-19 23:30:03 UTC
Typically, if Koha loses a connection to the database, it will automatically reconnect. We expect the background jobs worker to do the same thing.

However, I've noticed that instead of reconnecting it crashes with an error like this in the worker-error.log:

20231018 07:44:06 instance-koha-worker-long_tasks: client (pid 3949888) killed by signal 13, respawning

Signal 13 is SIGPIPE, which you will encounter when you try to write to a socket that is no longer connected.
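
For reference, a throwaway Perl one-liner can confirm that signal 13 maps to PIPE on a typical Linux host (not part of any patch, just a quick check):

  perl -MConfig -e 'print +(split " ", $Config{sig_name})[13], "\n"'   # prints "PIPE"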

Starman is based on Net::Server::PreFork which - like a lot of daemon software - ignores SIGPIPE. So if you restart the database and Starman loses its persistent connections, it will reconnect and ignore the SIGPIPE error.

If you restart the database or you have a database connection timeout[1], the background jobs worker won't ignore the SIGPIPE. It'll die on it, and it will leave a background job in a stuck state.

[1]
Oct 19 02:24:02 awesome-host mariadbd[959]: 2023-10-19  2:24:02 6432 [Warning] Aborted connection 6432 to db: 'koha_instance' user: 'koha_instance' host: 'localhost' (Got timeout reading communication packets)
Comment 1 David Cook 2023-10-19 23:32:34 UTC
The reason it gets stuck is that '$conn->ack( { frame => $frame } );' is run and then 'my $job = Koha::BackgroundJobs->find($args->{job_id});' is run after it.

So RabbitMQ thinks the message has been processed. 

But we've lost the database connection, so Koha::BackgroundJobs->find() generates a SIGPIPE signal, the process crashes, and daemon starts a new one.

The background job we were about to start is now stuck.
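
For reference, the relevant order of operations in the worker looks roughly like this (illustrative only; the exact lines differ between Koha versions):

  $conn->ack( { frame => $frame } );                         # RabbitMQ now considers the message handled
  my $job = Koha::BackgroundJobs->find( $args->{job_id} );   # first DB access after the dropped connection; this is where the SIGPIPE hits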

--

This is easy to avoid. If we ignore SIGPIPE, background_jobs_worker.pl will survive the broken connection, reconnect to the database, and process the background job correctly.

I suspect this will fix a lot of people's problems with background tasks...
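
For illustration, the change being described boils down to something like the following near the top of misc/workers/background_jobs_worker.pl (a minimal sketch; the attached patch is the authoritative version):

  # Ignore SIGPIPE so a broken database (or other) socket does not kill the
  # worker; DBIx::Class can then simply reconnect on the next query.
  $SIG{PIPE} = 'IGNORE';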
Comment 2 David Cook 2023-10-20 00:44:55 UTC
At the moment, I can only reproduce this on Ubuntu 22.04 with Koha 22.11, and not on Debian 11 with Koha master...

It looks like Ubuntu 22.04 is using MySQL client libraries and Debian is using MariaDB client libraries, so that might be related.

Overall, the Koha code doesn't look that different, so I imagine it's coming from a lower level. 

In fact... the info here seems to suggest that MySQL and MariaDB client libraries have had some SIGPIPE issues over time...
https://github.com/perl5-dbi/DBD-MariaDB/pull/196
https://jira.mariadb.org/browse/CONC-591

But if that's the case then MySQL should be ignoring the SIGPIPE as well...

Looking at the Ubuntu patches for the MySQL library, it no longer reconnects on its own... so I'm guessing that in the Ubuntu 22.04 case the SIGPIPE is actually happening at a higher level after all.

And I'd guess that the MariaDB client is actually handling reconnections before the DBIx::Class code. 

I'll have a look at master-jammy in koha-testing-docker which should help confirm this theory...
Comment 3 David Cook 2023-10-20 01:10:27 UTC
Initially I couldn't reproduce on Ubuntu 22.04 Koha master... but maybe I did something wrong, because then I tried Ubuntu 22.04 Koha 22.11.08, 23.05.04, and again on master... and I could reproduce the problem each time.

Basically it goes like this:
1. Go to http://localhost:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=29
2. Click "Save" > "MARCXML"
3. Go to http://localhost:8081/cgi-bin/koha/tools/stage-marc-import.pl
4. Click "Choose file", choose the MARCXML file, click "Upload file"
5. Click "Stage for import"
6. Note the job is marked as "100% Finished"

7. In a separate window run "docker restart koha-db-1"
8. Repeat steps 3-5 for uploading file and running stage for import
9. Note that the job is stuck in status "New" and in /var/log/koha/kohadev/worker-error.log it says something like the following:

20231020 01:09:42 kohadev-koha-worker-long_tasks: client (pid 4516) killed by signal 13, respawning
Comment 4 David Cook 2023-10-20 01:14:15 UTC
Test plan:
0. Apply patch and run "restart_all"
1. Go to http://localhost:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=29
2. Click "Save" > "MARCXML"
3. Go to http://localhost:8081/cgi-bin/koha/tools/stage-marc-import.pl
4. Click "Choose file", choose the MARCXML file, click "Upload file"
5. Click "Stage for import"
6. Note the job is marked as "100% Finished"

7. In a separate window run "docker restart koha-db-1"
8. Repeat steps 3-5 for uploading file and running stage for import
9. Note that the job is marked as "100% Finished" as you'd expect
Comment 5 David Cook 2023-10-20 01:39:58 UTC
Created attachment 157484 [details] [review]
Bug 35111: Ignore SIGPIPE in background jobs worker

This change explicitly ignores SIGPIPE signals in the background jobs
worker.

Daemons like Starman ignore SIGPIPE so it makes sense to explicitly set this.
Differences in the inner workings of MySQL vs MariaDB client libraries have yielded
different behaviours in automatic reconnections and potentially SIGPIPE handling,
so this helps to make the overall behaviour more consistent.

Test plan:
0. Apply patch and run "restart_all"
1. Go to http://localhost:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=29
2. Click "Save" > "MARCXML"
3. Go to http://localhost:8081/cgi-bin/koha/tools/stage-marc-import.pl
4. Click "Choose file", choose the MARCXML file, click "Upload file"
5. Click "Stage for import"
6. Note the job is marked as "100% Finished"

7. In a separate window run "docker restart koha-db-1"
8. Repeat steps 3-5 for uploading file and running stage for import
9. Note that the job is marked as "100% Finished" as you'd expect
Comment 6 David Cook 2023-10-20 01:41:54 UTC
Noting that you *will not* be able to reproduce this in Debian but you *will* in Ubuntu 22.04.

If you want to confirm the issue use "master-jammy" in KOHA_IMAGE for ktd rather than "master".

However, this patch causes no harm in "master", so you can always test that if you want.
Comment 7 David Cook 2023-10-20 01:45:38 UTC Comment hidden (obsolete)
Comment 8 Jonathan Druart 2023-10-20 07:39:02 UTC
Nice one, David!
Comment 9 Marcel de Rooy 2023-10-20 08:27:07 UTC
This is certainly interesting! But it still raises some further questions. Will try to test a bit.
Comment 10 Marcel de Rooy 2023-10-24 14:20:55 UTC
I am still wondering about the cause of the SIGPIPE here. The case presented sounds convincing but still seems to depend on a few assumptions?

You have caught some lines from the log files for the worker and the database. But weren't there any further lines in koha-worker-output about the crash? That would give us more info about the line in the worker script. Right now we are guessing? Or am I missing something?

When the worker loses the connection, I would expect DBIx::Class to just raise an exception?
The background worker does not catch it, so Perl probably exits with 255? And the daemon respawns (but without signal 13?)

When I hear about SIGPIPE, I would first look for problems between parent and child processes, since we are forking here via Parallel::ForkManager. Also note that this depends on a koha-conf setting:
background_jobs_worker/max_processes
What value do you use there?
If there is a database connection problem in the child process, and we did not configure P::F correctly (?), could that trigger a SIGPIPE?

Note that process_job has a try..catch block. But what if $job->process dies on a database issue, and then we try $job->store in the catch block?! That would trigger a new error that is not caught. And what about the I/O layers? I didn't find any information about what P::F does with these exactly. Just the same as fork (inheriting them in the child)?

Should we eval the $job->status->store in the catch block of sub process_job to get wiser?

Note that these are just questions. Wondering if we are looking in the wrong direction somehow. But I could very well be mistaken.
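
For illustration, the guard being suggested here would look roughly like this inside process_job (a sketch only; it assumes the worker's existing Try::Tiny block and approximates, rather than quotes, the actual Koha code):

  try {
      $job->process( $args );
  } catch {
      warn "Job " . $args->{job_id} . " died: $_";
      # Guard the status update: if the database is what broke, this store
      # can die too, and an uncaught die here would hide the real error.
      eval { $job->status('failed')->store; };
      warn "Could not mark job as failed: $@" if $@;
  };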
Comment 11 Marcel de Rooy 2023-10-24 14:31:08 UTC
It seems that if the child dies in the catch block, it can still write to STDERR. I am seeing info there from a die in the catch block.

Just another thought: do Ubuntu and Debian use the same 'daemon' as used in the worker script?
Comment 12 David Cook 2023-10-24 22:27:55 UTC
(In reply to Marcel de Rooy from comment #10)
> Note that these are just questions. Wondering if we are looking in the wrong
> direction somehow. But I could very well be mistaken.

I captured a stacktrace during my investigation which was quite definitive. I don't seem to have it on hand now, but I suppose I could re-capture it to share here. 

I spent a lot of time on this issue, and I am very confident that I'm on the right track.
Comment 13 David Cook 2023-10-24 22:48:55 UTC
(In reply to Marcel de Rooy from comment #10)
> You have caught some lines from the logfiles for worker and database. But
> wasnt there any further lines in koha-worker-output about the crash? This
> would give us more info about the line in the worker script. Now we are
> guessing? Or do I miss something?

There's nothing in the worker-output.log in this case. That's part of what made it challenging to troubleshoot initially.

> When worker lost the connection, I would expect DBIx to just raise an
> exception?
> The background worker does not catch it, so perl exits probably with 255 ?
> And daemon respawns (but without 13?)

That's the issue. DBIx::Class has reconnect logic, and on Debian with the MariaDB client libraries it handles this without any exceptions or issues. On Ubuntu with the MySQL client libraries a SIGPIPE signal gets generated. In Starman/Plack, SIGPIPE is ignored, so DBIx::Class can just reconnect. In the background jobs worker, SIGPIPE is not ignored and it causes the process to crash.

> When I hear about SIGPIPE, I would first look for problems between parent
> and child processes since we are forking here via Parallel::ForkManager.

I looked at all the different sockets and pipes. It's the database one that's the problem.

> Also note that this depends on a koha-conf setting.
> background_jobs_worker/max_processes
> What value do you use there?
> If there is a database connection problem in the child process, and we did
> not configure P::F correctly (?), could that trigger a SIGPIPE ?

It's not the child process. It's the parent process. The stacktrace shows that.
Comment 14 David Cook 2023-10-25 00:41:00 UTC
Here's a stack trace from koha-testing-docker using the "master-jammy" image. I added a SIGPIPE handler (sketched after the trace below) and used the test plan from Comment 3:

PIPE at /kohadevbox/koha/misc/workers/background_jobs_worker.pl line 58.
        main::__ANON__("PIPE") called at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 932
        eval {...} called at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 932
        DBIx::Class::Storage::DBI::connected(DBIx::Class::Storage::DBI::mysql=HASH(0x561f24b1f1a8)) called at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 850
        DBIx::Class::Storage::DBI::__ANON__(DBIx::Class::Storage::BlockRunner=HASH(0x561f24d28dc8)) called at /usr/share/perl5/DBIx/Class/Storage/BlockRunner.pm line 190
        DBIx::Class::Storage::BlockRunner::__ANON__() called at /usr/share/perl5/Context/Preserve.pm line 38
        Context::Preserve::preserve_context(CODE(0x561f24954d50), "replace", CODE(0x561f24954cf0)) called at /usr/share/perl5/DBIx/Class/Storage/BlockRunner.pm line 213
        DBIx::Class::Storage::BlockRunner::_run(DBIx::Class::Storage::BlockRunner=HASH(0x561f24d28dc8), CODE(0x561f24959310)) called at /usr/share/perl5/DBIx/Class/Storage/BlockRunner.pm line 105
        DBIx::Class::Storage::BlockRunner::run(DBIx::Class::Storage::BlockRunner=HASH(0x561f24d28dc8), CODE(0x561f24959310)) called at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 856
        DBIx::Class::Storage::DBI::dbh_do(undef, undef, "SELECT `me`.`id`, `me`.`status`, `me`.`progress`, `me`.`size`"..., ARRAY(0x561f24e56b40), ARRAY(0x561f26b72b08)) called at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 1939
        DBIx::Class::Storage::DBI::_execute(DBIx::Class::Storage::DBI::mysql=HASH(0x561f24b1f1a8), "select", ARRAY(0x561f26a3c390), ARRAY(0x561f24e56a08), HASH(0x561f24959328), HASH(0x561f26b93218)) called at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 2584
        DBIx::Class::Storage::DBI::_select(DBIx::Class::Storage::DBI::mysql=HASH(0x561f24b1f1a8), ARRAY(0x561f26a3c390), ARRAY(0x561f24e56a08), HASH(0x561f24959328), HASH(0x561f24e56b58)) called at /usr/share/perl5/DBIx/Class/Storage/DBI/Cursor.pm line 125
        DBIx::Class::Storage::DBI::Cursor::next(DBIx::Class::Storage::DBI::Cursor=HASH(0x561f26b93290)) called at /usr/share/perl5/DBIx/Class/ResultSet.pm line 1329
        DBIx::Class::ResultSet::_construct_results(DBIx::Class::ResultSet=HASH(0x561f24958500)) called at /usr/share/perl5/DBIx/Class/ResultSet.pm line 1242
        DBIx::Class::ResultSet::next(DBIx::Class::ResultSet=HASH(0x561f24958500)) called at /kohadevbox/koha/Koha/Objects.pm line 317
        Koha::Objects::next(Koha::BackgroundJobs=HASH(0x561f26ec69d8)) called at /kohadevbox/koha/misc/workers/background_jobs_worker.pl line 137

vi ./misc/workers/background_jobs_worker.pl

137         my $job = Koha::BackgroundJobs->search( { id => $args->{job_id}, status => 'new' } )->next;
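
The temporary SIGPIPE handler mentioned above was presumably something along these lines (a diagnostic sketch consistent with the trace, not the committed fix):

  use Carp;
  # Turn the otherwise silent signal 13 into a full stack trace on STDERR.
  $SIG{PIPE} = sub { Carp::confess(@_) };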
Comment 15 David Cook 2023-10-25 01:45:15 UTC
As you can see, the SIGPIPE happens before a child process is forked. 

And since we're at line 137, we know that we've received a message from RabbitMQ. Since the DB job is stuck in "New" and no worker can pick up the message from RabbitMQ after this, we know that the message has already been ACKed via $conn->ack( { frame => $frame } );

So it's not the connection with RabbitMQ.

And since the stacktrace shows the SIGPIPE when the database connection is checked to see if it's connected, I think it's pretty clear that it's the SIGPIPE from the broken database connection causing the problem.

The fact that restarting the database consistently reproduces the SIGPIPE also shows it's the database connection in the parent process that is producing it.

In https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=35111#c7 I do some software version comparisons which demonstrate that it is likely related to the C client libraries. 

I will admit that I don't have 100% proof that it's the C client libraries having different behaviours, but I think it's a pretty good hypothesis. 

--

Another thing to keep in mind is that daemons typically ignore SIGPIPE. This is why we have no issues with the Starman daemon for instance. It's a good practice in general, so there's no harm in adding it into this daemon. (We may want to look at other daemons at some point as well.)
Comment 16 Marcel de Rooy 2023-10-25 12:00:39 UTC
Okay, great. Thanks for including the stack trace info that confirms your theory.
Comment 17 Marcel de Rooy 2023-10-25 12:13:48 UTC
Created attachment 157786 [details] [review]
Bug 35111: Ignore SIGPIPE in background jobs worker

This change explicitly ignores SIGPIPE signals in the background jobs
worker.

Daemons like Starman ignore SIGPIPE so it makes sense to explicitly set this.
Differences in the inner workings of MySQL vs MariaDB client libraries have yielded
different behaviours in automatic reconnections and potentially SIGPIPE handling,
so this helps to make the overall behaviour more consistent.

Test plan:
0. Apply patch and run "restart_all"
1. Go to http://localhost:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=29
2. Click "Save" > "MARCXML"
3. Go to http://localhost:8081/cgi-bin/koha/tools/stage-marc-import.pl
4. Click "Choose file", choose the MARCXML file, click "Upload file"
5. Click "Stage for import"
6. Note the job is marked as "100% Finished"

7. In a separate window run "docker restart koha-db-1"
8. Repeat steps 3-5 for uploading file and running stage for import
9. Note that the job is marked as "100% Finished" as you'd expect

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Added comment on the SIG PIPE line.
Comment 18 Victor Grousset/tuxayo 2023-10-27 15:40:16 UTC
Created attachment 157990 [details] [review]
Bug 35111: Ignore SIGPIPE in background jobs worker

This change explicitly ignores SIGPIPE signals in the background jobs
worker.

Daemons like Starman ignore SIGPIPE so it makes sense to explicitly set this.
Differences in the inner workings of MySQL vs MariaDB client libraries have yielded
different behaviours in automatic reconnections and potentially SIGPIPE handling,
so this helps to make the overall behaviour more consistent.

Test plan:
0. Apply patch and run "restart_all"
1. Go to http://localhost:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=29
2. Click "Save" > "MARCXML"
3. Go to http://localhost:8081/cgi-bin/koha/tools/stage-marc-import.pl
4. Click "Choose file", choose the MARCXML file, click "Upload file"
5. Click "Stage for import"
6. Note the job is marked as "100% Finished"

7. In a separate window run "docker restart koha-db-1"
8. Repeat steps 3-5 for uploading file and running stage for import
9. Note that the job is marked as "100% Finished" as you'd expect

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Added comment on the SIG PIPE line.
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 19 Victor Grousset/tuxayo 2023-10-27 15:40:50 UTC
Weird, I can't reproduce the issue (comment 3) with Koha master, KTD image master-jammy, MariaDB 10.6.
Maybe there is another condition needed to trigger it.

Regardless, as far as doing a simple signoff goes, I can, since the test plan is just about no-regression checking. So here it is, making it possible to swap roles with Marcel if they feel confident enough to pass QA.
Comment 20 Marcel de Rooy 2023-10-30 08:18:54 UTC
(In reply to Victor Grousset/tuxayo from comment #19)
> Regardless, as far a doing a simple signoff I can since the test plan is
> just about no-regression checking. So here it is so it's possible to swap
> roles with Marcel if they feel confident enough to pass QA.

Sure
Comment 21 Tomás Cohen Arazi 2023-10-30 12:03:24 UTC
Pushed to master for 23.11.

Nice work everyone, thanks!
Comment 22 David Cook 2023-10-31 03:07:30 UTC
Awesome! Thanks folks!

Could we look at backporting this one too? It's a nice little fix which will help a lot of folks out in the world.
Comment 23 Fridolin Somers 2023-11-02 20:30:18 UTC
Pushed to 23.05.x for 23.05.05
Comment 24 David Cook 2023-11-03 06:07:25 UTC
(In reply to Fridolin Somers from comment #23)
> Pushed to 23.05.x for 23.05.05

Merci, Frido!

Hoping 22.11 maintainer can get this one too.

I've manually applied it to a 22.11 and they've been having a much better past few days with this patch :)
Comment 25 Victor Grousset/tuxayo 2023-11-03 16:32:05 UTC
Since the patch is trivial, it's very likely to make it even to oldoldstable :), since the bug is critical: https://wiki.koha-community.org/wiki/Release_maintenance#Deciding_on_what_to_back_port
Comment 26 Matt Blenkinsop 2023-11-13 15:11:35 UTC
Nice work everyone!

Pushed to oldstable for 22.11.x
Comment 27 Jonathan Druart 2024-01-10 09:00:41 UTC
(In reply to David Cook from comment #24)
> I've manually applied it to a 22.11 and they've been having a much better
> past few days with this patch :)

What do you see now? Can you search for jobs stuck since this patch has been applied?
Comment 28 David Cook 2024-01-24 00:34:19 UTC
(In reply to Jonathan Druart from comment #27)
> (In reply to David Cook from comment #24)
> > I've manually applied it to a 22.11 and they've been having a much better
> > past few days with this patch :)
> 
> What do you see now? Can you search for jobs stuck since this patch has been
> applied?

Jobs don't get stuck anymore. It just works perfectly with this patch.