Bug 39769 - es_indexer_daemon.pl uses stale L1 cache
Summary: es_indexer_daemon.pl uses stale L1 cache
Status: NEW
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching - Elasticsearch
Version: Main
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: Bugs List
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2025-04-29 02:22 UTC by David Cook
Modified: 2025-11-21 02:53 UTC
CC: 2 users

See Also:
GIT URL:
Initiative type: ---
Sponsorship status: ---
Comma delimited list of Sponsors:
Crowdfunding goal: 0
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Description David Cook 2025-04-29 02:22:59 UTC
As bug 36549 notes, es_indexer_daemon.pl doesn't fork child processes to do its work. This leads to memory leaks, but it also means that the daemon keeps using a stale L1 cache.

If you update a system preference like IncludeSeeFromInSearches, you have to restart your es_indexer_daemon.pl worker before a re-index of your database will pick up the change.

Note that the solution to this problem does not have to be updating the daemon to fork child worker processes.

Rather, we need to clear the L1 cache after we fetch a job.
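
For illustration, a minimal sketch of the fix (the loop shape, $stomp and handle_frame are stand-ins for the daemon's existing code; the assumption is that Koha::Cache::Memory::Lite->flush() is the right call for clearing the L1 cache):

    use Koha::Cache::Memory::Lite;

    while (1) {
        my $frame = $stomp->receive_frame;    # existing fetch logic

        # Flush the in-process (L1) cache so that system preference
        # changes made since the last job (e.g. IncludeSeeFromInSearches)
        # are visible before indexing starts
        Koha::Cache::Memory::Lite->flush();

        handle_frame($frame);                 # existing job handling
    }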
Comment 1 Robin Sheat 2025-11-21 02:52:08 UTC
A model that I've used elsewhere with good success to solve this sort of issue is:

1. Have the script die after, say, 20 minutes (you'll need to add a timeout to `receive_frame` so that it doesn't block indefinitely, but that's fine; see the sketch after this list)
2. Have a cron job that launches the script every 15 minutes, but does so using a mechanism that prevents multiple instances running at once (I think systemd has a function for this, otherwise it's possible to use flock)
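
A rough sketch of point 1, assuming the daemon reads its queue with Net::Stomp (whose receive_frame accepts a timeout in seconds and returns undef when it expires); $stomp and handle_frame again stand in for the daemon's existing code:

    my $max_run_time = 20 * 60;    # exit after ~20 minutes
    my $started      = time;

    while ( time - $started < $max_run_time ) {

        # With a timeout, receive_frame returns undef instead of
        # blocking forever when the queue is quiet
        my $frame = $stomp->receive_frame( { timeout => 10 } );
        next unless $frame;

        handle_frame($frame);
    }

    # Falling off the end exits the script; cron relaunches it soon after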

This way, you can be sure that any leaked memory is reclaimed regularly, any updates to the code are picked up quickly, if it crashes it'll be running again soon, and so on.

We called them "cron daemons" (distinct from the cron daemon that actually runs cron).

It will need some migration (from a daemon start to a cron job), so it might be good to add a "timeout" command line option; the packages could then be migrated to the cron approach, for example, while the normal "run forever" behaviour remains for everyone else.
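
Something along these lines, say (the option name and default are only illustrative):

    use Getopt::Long;

    # 0 = no timeout, so today's "run forever" behaviour stays the default
    my $timeout = 0;
    GetOptions( 'timeout=i' => \$timeout )
        or die "Usage: $0 [--timeout SECONDS]\n";

    my $started = time;

    # ... then in the main loop:
    # last if $timeout && time - $started >= $timeout;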
Comment 2 Robin Sheat 2025-11-21 02:53:33 UTC
To clarify, the flock or whatever you use to prevent multiple executions should *block and wait* until the running process terminates before it starts; it shouldn't abort immediately.
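
For example, with Perl's flock (the lock file path is only illustrative; the flock(1) utility from util-linux also blocks by default):

    use Fcntl qw(:flock);

    # LOCK_EX without LOCK_NB blocks until the previous instance
    # releases the lock (i.e. exits), rather than failing straight away
    open my $lock_fh, '>', '/var/lock/es_indexer_daemon.lock'
        or die "Cannot open lock file: $!";
    flock( $lock_fh, LOCK_EX )
        or die "Cannot flock: $!";

    # ... run the indexer; the lock is released automatically on exit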