In our system, we're noticing that the real-time holds queue is consistently sending a disproportionate number of holds to the same branches, even though RandomizeHoldsQueueWeight is set to distribute holds randomly. The holds queue cron job often produced a "bubble" where one or two libraries would have a much longer holds queue than the others, but the "bubble" would move around between different branches from run to run. Since we turned on the real-time holds queue, the "bubble" has stayed on the same two libraries for the full week it has been running. Other library systems have reported imbalanced distributions as well.

Adding some logging revealed that the branches are being "randomized" into an identical order each time the holds queue is updated! This is consistent with the same branches always being assigned the most and second-most holds (respectively) on our production server. I was also able to replicate the same behavior on master, 22.11.x, and 22.05.x.

Instructions to replicate are in the first comment, along with a patch that adds the logging.
Created attachment 154219 [details] [review]
Bug 34470: [DO NOT PUSH] Add logging to watch randomization
To replicate:

1. Apply the logging patch
2. Set system preferences:
   a. RealTimeHoldsQueue -> Enable
   b. RandomizeHoldsQueueWeight -> in random order
3. Watch the logs for the staff interface in ktd: ktd --shell koha-intra-err
4. Place a hold. Note that the logs display the branch list before and after it is randomized.
5. Place some more holds. Note that the branch order after randomization is identical each time.

Bonus round 1:

6. Fill in some branches (I recommend at least 8 or so) for StaticHoldsQueueWeight
7. Place some more holds.
8. Note that the branch list before randomization is identical to the list in StaticHoldsQueueWeight, and that the list after randomization is the same each time (though not necessarily identical to the randomized list produced when StaticHoldsQueueWeight was empty).

Bonus round 2:

9. Add the following line to load_branches_to_pull_from in C4/HoldsQueue.pm: $logger->warn(rand());
10. Place some more holds.
11. Note that rand() produces the same result each time, and that the list after randomization is still the same each time (though not necessarily identical to the randomized list produced before the call to rand() was added).
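For reference, the logging patch amounts to something like the following around the shuffle() call in load_branches_to_pull_from (a sketch, not the attached patch verbatim; the variable name @branches_to_use and the exact log messages are my approximations — shuffle() itself comes from List::Util):

    my $logger = Koha::Logger->get;
    $logger->warn( 'before shuffle: ' . join( ', ', @branches_to_use ) );
    @branches_to_use = shuffle(@branches_to_use)
        if C4::Context->preference('RandomizeHoldsQueueWeight');
    $logger->warn( 'after shuffle: ' . join( ', ', @branches_to_use ) );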
The logging makes it clear that the call to shuffle() in load_branches_to_pull_from is "randomizing" deterministically for some reason. According to the List::Util docs, shuffle() depends on Perl's rand() function (unless an alternative source of randomness is set). I think the bug has to do with the scenario described here:

https://stackoverflow.com/questions/58120618/how-to-get-random-number-in-forked-processes

We fork at least one process to handle background jobs (or more, depending on configuration, after bug 32558). According to the above, if a parent process uses rand() and later spawns child processes that also call rand(), every child inherits the same seed and thus produces an identical sequence of random numbers unless the seed is reset with srand() after the fork. The official docs for rand() don't quite spell this out, but they do allude to the need to call srand() in a child process (while also warning that srand() should not be called more than once per process).

The results I'm seeing are consistent with this, and it would also explain why the problem doesn't occur in the cron job (which does not fork).
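Here is a tiny standalone script (not Koha code; everything in it is mine) that reproduces the effect described in that Stack Overflow thread:

    #!/usr/bin/perl
    use strict;
    use warnings;

    rand();    # the parent consumes the random seed before forking

    for my $i ( 1 .. 3 ) {
        my $pid = fork() // die "fork failed: $!";
        if ( $pid == 0 ) {    # child process
            # srand();        # uncommenting this gives each child a fresh seed
            printf "child %d: %s\n", $i, rand();
            exit 0;
        }
        waitpid( $pid, 0 );
    }

Without the srand() call, all three children print the identical "random" number; with it, they differ.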
The only thing I can't figure out is where (or whether) randomization is happening in the parent process, which is supposed to be one of the preconditions for identical rand() results in the child processes... but everything else fits. Either way, this is deep enough in the weeds of Perl and parallel processing that I'd like more eyes on it (both for perspective and for confirmation that others can replicate the problem) before going further with it.
Created attachment 154335 [details] [review]
Bug 34470: Initialize random seed after spawning a child worker process

When background_jobs_worker.pl spawns a new child process, it needs to explicitly reinitialize the random seed - otherwise each child process will inherit the same random seed from the parent process, and any randomization will produce identical results each time. This patch adds a call to srand immediately after the fork to reinitialize the seed.

Note that child processes should not call srand with no parameter anywhere else, as the Perl documentation indicates that srand should not be called with no parameter more than once per process.

To test:
1. Apply the logging patch only
2. Set system preferences:
   a. RealTimeHoldsQueue -> Enable
   b. RandomizeHoldsQueueWeight -> in random order
3. Watch the logs for the staff interface in ktd: ktd --shell koha-intra-err
4. Place a hold. Note that the logs display the branch list before and after it is randomized.
5. Place some more holds. Note that the branch order after randomization is identical each time.
6. Apply both patches and restart_all
7. Repeat steps 3-5.
-> Note that the branch order before randomization hasn't changed
-> Note that the branch order after randomization is now different each time.
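For anyone skimming: the substance of the fix is the single srand() call right after the fork. A simplified sketch of the pattern (the actual loop in background_jobs_worker.pl has more going on and may fork via a fork manager rather than a bare fork; process_job() below is a placeholder, not a real function):

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {    # child worker
        srand();          # reseed so rand()/shuffle() differ between children
        process_job($job);    # placeholder for the real job handling
        exit 0;
    }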
Created attachment 154336 [details] [review]
Bug 34470: Initialize random seed after spawning a child worker process
Created attachment 154339 [details] [review]
Bug 34470: Initialize random seed after spawning a child worker process

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Very small change, straightforward code, and excellently sleuthed! Passing QA
Pushed to master for 23.11. Nice work everyone, thanks!
(In reply to Nick Clemens from comment #8)
> Very small change, straightforward code, and excellently sleuthed!

+1 to the excellent sleuthing!
Pushed to 23.05.x for 23.05.03
Nice work everyone! Pushed to 22.11.x for next release