It appears that the background jobs worker can leave defunct processes around for extended periods of time. Though mostly harmless, it would be nice if that did not happen. Parallel::ForkManager reaps children automatically only when start or wait_all_children is called. We only call start when a new job is found, and wait_all_children after exiting our while loop. The solution is simply to call reap_finished_children after we sleep. This is a non-blocking call that cleans up those defunct processes.
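In sketch form, the fix amounts to the following (a minimal, simplified illustration, not the actual worker script; fetch_pending_jobs and process_job are hypothetical stubs standing in for the real polling and job-handling code):

    use strict;
    use warnings;
    use Parallel::ForkManager;

    sub fetch_pending_jobs { return () }    # stub: replace with real DB polling
    sub process_job        { }              # stub: replace with real job handling

    my $max_processes = 5;    # cf. background_jobs_worker/max_processes
    my $pm = Parallel::ForkManager->new($max_processes);

    while (1) {
        for my $job ( fetch_pending_jobs() ) {
            $pm->start and next;    # start() also reaps finished children
            process_job($job);
            $pm->finish;
        }
        sleep 10;
        # Non-blocking: collects any <defunct> children without waiting
        $pm->reap_finished_children;
    }

The key point is the final reap_finished_children call: without it, children that exit during a quiet period stay <defunct> until the next start().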
Created attachment 151993 [details] [review]
Bug 33898: background_jobs_worker.pl may leave defunct children processes for extended periods of time

It appears that the background jobs worker can leave defunct processes around for extended periods of time. Though mostly harmless, it would be nice if that did not happen.

Children are reaped automatically when start or wait_all_children is called. We only call start when a new job is found, and wait_all_children after exiting our while loop. The solution is simply to call reap_finished_children after we sleep. This is a non-blocking call that cleans up those defunct processes.

Test Plan:
1) Disable Rabbit
2) Set background_jobs_worker/max_processes to something like 5
3) Restart all the things!
4) Run a bunch of elastic index updates
5) Verify you have defunct processes
6) Apply this patch
7) Run more elastic index updates
8) Defunct processes should disappear every 10 seconds or so!

If you do not see defunct processes, the test plan is simply to verify that everything continues to work as expected.

Created attachment 151994 [details] [review]
Bug 33898: background_jobs_worker.pl may leave defunct children processes for extended periods of time

(Same commit message and test plan as attachment 151993.)

Created attachment 151997 [details] [review]
Bug 33898: background_jobs_worker.pl may leave defunct children processes for extended periods of time

(Same commit message and test plan as attachment 151993.)

Created attachment 153194 [details] [review]
Bug 33898: background_jobs_worker.pl may leave defunct children processes for extended periods of time

(Same commit message and test plan as attachment 151993.)

Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
You did not see https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32558#c36 btw?

Signal handlers are not the most elegant way to address problems. They are inherited by children too, although that is no big deal here (pending signals are not inherited). P::F (Parallel::ForkManager) allows you to define a run_on_finish callback. Could that be used?
(In reply to Marcel de Rooy from comment #5)
> You did not see
> https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32558#c36 btw?
>
> Signal handlers are not the most elegant way to address problems. They are
> inherited by children too, although that is no big deal here (pending
> signals are not inherited). P::F (Parallel::ForkManager) allows you to
> define a run_on_finish callback. Could that be used?

I agree that signal handlers aren't the way to go. It looks like run_on_finish is called via wait_all_children, so it probably won't work in this case.

An alternative could be to add a timeout (e.g. 10 seconds) to $conn->receive_frame, and to call $pm->reap_finished_children() if it returns undef (before calling next()). That would be pretty lightweight, since receive_frame just calls can_read() on the select loop. In effect, it's doing a 10-second sleep which can be interrupted by an incoming frame. An incoming frame reaps via start(), or we'd reap after that timeout sleep. That should do the trick.
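Roughly, the suggestion would look like this inside the RabbitMQ branch of the loop (a sketch against Net::Stomp's documented timeout option, not a finished patch):

    # $conn is the Net::Stomp connection, $pm the Parallel::ForkManager
    my $frame = $conn->receive_frame({ timeout => 10 });
    if ( !defined $frame ) {
        # Timed out with no incoming frame: a good moment to reap, since
        # nothing else in the loop will call start() right now
        $pm->reap_finished_children();
        next;
    }
    # ... otherwise fork via $pm->start and process the frame as before ...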
(In reply to David Cook from comment #6)
> (In reply to Marcel de Rooy from comment #5)
> > You did not see
> > https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32558#c36 btw?

I'm not sure what you are trying to indicate. Your patch was included in the bug push and does not solve this problem.

> An alternative could be to add a timeout (e.g. 10 seconds) to
> $conn->receive_frame, and to call $pm->reap_finished_children() if it
> returns undef (before calling next()).

That would only work for Koha installations using Rabbit, and 10 seconds is quite a long delay between calls to reap all.
(In reply to Kyle M Hall from comment #7)
> (In reply to David Cook from comment #6)
> > An alternative could be to add a timeout (e.g. 10 seconds) to
> > $conn->receive_frame, and to call $pm->reap_finished_children() if it
> > returns undef (before calling next()).
>
> That would only work for Koha installations using Rabbit, and 10 seconds is
> quite a long delay between calls to reap all.

You could easily add it to the database polling block too. I suggested 10 seconds since that's the sleep used for the database polling block, but we could use shorter intervals for reaping.

That said, why would 10 seconds be a long delay between calls to reap? If you have frequent jobs, then start() will be reaping previous ones. If you have infrequent jobs, then you probably won't have a lot of zombie children around, so 10 seconds seems all right to me to wait for reaping them. I suppose the problem compounds if you have many instances on a server, but I don't think they'd fill up the process table.
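To illustrate the "shorter intervals" point: in the database polling branch, the existing 10-second sleep could be split into one-second ticks with a reap on each one (a sketch only, not what any patch here does):

    # Instead of a plain sleep(10) between polls:
    for ( 1 .. 10 ) {
        sleep 1;
        $pm->reap_finished_children();    # non-blocking, cheap to call often
    }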
> 4) Run a bunch of elastic index updates
> 5) Verify you have defunct processes
> If you do not see defunct processes, the test plan is simply to verify
> that everything continues to work as expected.

Any idea what factors help reproduce the issue? Also, I renamed a record multiple times; that causes ES updates with a background worker, right?
Taking a closer look at the comments here, there is some discussion about implementation, as well as a question about how to reproduce the issue this is trying to fix. The truth is this has had comments from a lot of QA team members already, and we are a little stuck. Kyle, could you review the last few comments for a start, please?
Hi, I encounter the same issue. In my case, Koha 22.11.15 (Debian package) on Ubuntu 22.04.1 (MariaDB + RabbitMQ), the issue occurs when I do:

From the Koha staff interface:
1) Stage records for import (everything goes fine)
2) Manage staged MARC records (everything goes fine)

From a shell:
3) # ps aux | grep 'Z' finds "background_jobs" as a zombie <defunct> with a PID (e.g. 54679)

From the Koha staff interface:
4) Stage records for import (everything goes fine)
5) Manage staged MARC records (everything goes fine)

From a shell:
6) # ps aux | grep 'Z' finds "background_jobs" as a zombie <defunct> with a different PID

So the fact that "background_jobs" remains a zombie does not actually prevent other processes from operating in Koha (once a new process has started, the old zombie is reaped and a new zombie is created). A reboot of the server kills the zombie process.

Since it seems like a problem that doesn't affect everyone, could it be something related to server settings? Thanks
(In reply to Asymar Riu from comment #12)
> A reboot of the server kills the zombie process.

That's overkill. To get zombie child processes reaped, you'd just need to stop the parent process. The child processes are then inherited by PID 1, and they'll get reaped.

> Since it seems like a problem that doesn't affect everyone, could it be
> something related to server settings?

No, it's a real issue. It's more likely to happen on systems that use background jobs more, especially sporadically.

This is just how parent/child processes work: parent processes are responsible for reaping their child processes. As Kyle pointed out, child processes are currently only reaped under certain conditions. It's just about tweaking the code to reap them in a more responsible way.
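A self-contained illustration of that mechanism (not Koha code): the child below stays <defunct> until the parent gets around to a non-blocking waitpid, which is essentially what Parallel::ForkManager's reap_finished_children does.

    use strict;
    use warnings;
    use POSIX ':sys_wait_h';

    my $pid = fork() // die "fork failed: $!";
    if ( $pid == 0 ) {
        exit 0;    # child exits immediately...
    }
    sleep 5;       # ...and shows up as <defunct> while the parent ignores it

    # Non-blocking reap: returns immediately if no child has exited
    while ( ( my $reaped = waitpid( -1, WNOHANG ) ) > 0 ) {
        print "reaped child $reaped\n";
    }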
(In reply to David Cook from comment #13)
> To get zombie child processes reaped, you'd just need to stop the parent
> process. The child processes are then inherited by PID 1, and they'll get
> reaped.

In other words, @Asymar, just restart the Koha services (Plack, more specifically). I think it's here: https://wiki.koha-community.org/wiki/Commands_provided_by_the_Debian_packages#koha-plack
Created attachment 162941 [details] [review]
Bug 33898: background_jobs_worker.pl may leave defunct children processes for extended periods of time

(Same commit message and test plan as attachment 151993.)

Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Used Asymar's test plan from comment 12 (based on record import).
Thanks, Asymar, for your test plan; it's very simple, and I was able to reproduce the issue and confirm that the patch prevents it.

So with comment 12's test plan, there is one zombie at a time. It's killed when the next staging/import starts, and a new zombie stays behind at the end of the operation. So that blocking point is no more. (Unless the elastic index update based test plan shows something more subtle or impactful that would benefit from being tested.)

Remaining is «there seems to be some discussion about implementation».
In the above test plan Elastic is used, but note that those updates do not go via the regular worker script. There is another one (with more or less the same code). Still to be merged :)
Created attachment 162942 [details] [review]
Bug 33898: Alternative approach with receive frame timeout

See bug 33898 comment 6.
Created attachment 162943 [details] [review]
Bug 33898: Alternative approach with receive frame timeout

See bug 33898 comment 6.

Test plan:
Based on comment 12: Stage MARC import and manage. At the same time, watch the results of ps aux | grep Z. Verify that the lines with [background_jobs] <defunct> disappear within 10 seconds.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
The alternative patch addresses the issues raised earlier on the use of signals and follows the suggestions of David.

Kyle, Victor and Asymar: Would this work for you? Note that you should apply only the second patch, as an alternative to the first one.
Created attachment 162944 [details] [review]
Bug 33898: (follow-up) Apply same solution to es_indexer_daemon

Test plan:
Similar to the first patch, with Elastic index jobs.
Comment on attachment 162944 [details] [review]
Bug 33898: (follow-up) Apply same solution to es_indexer_daemon

Oops, this still needs a bit of attention.
(In reply to Marcel de Rooy from comment #22)
> Comment on attachment 162944 [details] [review]
> Bug 33898: (follow-up) Apply same solution to es_indexer_daemon
>
> Oops, this still needs a bit of attention.

Ah, I see: $pm is not even used in the es_indexer_daemon...
Created attachment 162953 [details] [review]
Bug 33898: Alternative approach with receive frame timeout

See bug 33898 comment 6.

Test plan:
Based on comment 12: Stage MARC import and manage. At the same time, watch the results of ps aux | grep Z. Verify that the lines with [background_jobs] <defunct> disappear within 10 seconds.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 162954 [details] [review]
Bug 33898: Implement reaping for database polling

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 163056 [details] [review]
Bug 33898: Alternative approach with receive frame timeout

See bug 33898 comment 6.

Test plan:
Based on comment 12: Stage MARC import and manage. At the same time, watch the results of ps aux | grep Z. Verify that the lines with [background_jobs] <defunct> disappear within 10 seconds.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Created attachment 163057 [details] [review]
Bug 33898: Implement reaping for database polling

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
(In reply to Marcel de Rooy from comment #20)
> The alternative patch addresses the issues raised earlier on the use of
> signals and follows the suggestions of David.

Thanks :)

> Kyle, Victor and Asymar: Would this work for you?

It works, so signing off. As for signals vs. no signals, I don't know the good practices to QA this. Same for how this plays out when using database polling or the message queue. (I would have missed that the first alternate patch missed this.)
(In reply to Victor Grousset/tuxayo from comment #28)
> (In reply to Marcel de Rooy from comment #20)
> > The alternative patch addresses the issues raised earlier on the use of
> > signals and follows the suggestions of David.
>
> Thanks :)
>
> > Kyle, Victor and Asymar: Would this work for you?
>
> It works, so signing off.

I think we can move this to PQA with the sign-offs we have!
Hi, sorry for the delay in replying: I confirm that it works. Thank you very much! :)
Pushed for 24.05! Well done everyone, thank you!
Pushed to 23.11.x for 23.11.04
Backported to 23.05.x for upcoming 23.05.10