When pushing the ES indexing jobs to the background queue, we have found it hard for the workers to keep up in production. During high-circulation periods many single-record reindexes are produced. It would be nice to have a dedicated ES worker that batches these, reducing both the processing time and the number of hits to the ES server.
Created attachment 145145 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single record reindex jobs for circulation and other actions. It can take a long time to process these as we need to load the ES settings for each.

This patch updates the Elasticsearch background jobs to throw records into a new queue that can be processed by its own worker, and adds a dedicated worker that batches the jobs every 1 second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - Set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch is not enabled
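[Editor's note] For orientation, here is a minimal sketch of the kind of batching loop the patch describes, written against Net::Stomp. The queue name, credentials, payload keys and the commit_batch() helper are illustrative assumptions, not the code in the attachment.

    #!/usr/bin/perl
    # Minimal sketch of a dedicated worker that flushes collected jobs every second.
    # Broker address, credentials, queue name and payload keys are placeholders.
    use Modern::Perl;
    use Net::Stomp;
    use JSON qw( decode_json );

    my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
    $stomp->connect( { login => 'guest', passcode => 'guest' } );    # illustrative credentials
    $stomp->subscribe( { destination => '/queue/elastic_index', ack => 'client' } );

    my @batch;
    while (1) {
        # Wait up to 1 second for a frame; on timeout, flush whatever was collected.
        my $frame = $stomp->receive_frame( { timeout => 1 } );
        if ($frame) {
            push @batch, decode_json( $frame->body );
            # Acked immediately here for brevity; when exactly to ack is discussed below.
            $stomp->ack( { frame => $frame } );
            next;
        }
        commit_batch(@batch) if @batch;
        @batch = ();
    }

    sub commit_batch {
        my (@jobs) = @_;
        # "record_ids" is an assumed payload key; the real job data may differ.
        my @biblionumbers = map { @{ $_->{record_ids} // [] } } @jobs;
        # A single bulk indexing request for @biblionumbers would go here
        # (e.g. via Koha's Elasticsearch indexer), instead of one request per record.
    }

The point of the design is that the ES settings and mappings are loaded once per worker process, and records accumulated during each one-second window are sent in a single bulk request.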
(In reply to Nick Clemens from comment #0)
> When pushing the ES indexing jobs to the background queue, we have found it
> hard for the workers to keep up in production.
>
> During high-circulation periods many single-record reindexes are produced.
>
> It would be nice to have a dedicated ES worker that batches these, reducing
> both the processing time and the number of hits to the ES server.

I like the idea of batching together the single record indexes. I think I'd prefer it if this worker created a new message that contained multiple job_ids, but background_jobs_worker.pl only supports a single job_id. Maybe we should enhance background_jobs_worker.pl to be able to take multiple job IDs...

I'm not super familiar with alarm, so I'm curious at what point in the code it is able to interrupt. It seems like it could introduce a race condition... (If you received the frame, and then the alarm handler fired, the frame would be acknowledged before it's processed. But the job should be handled at the next alarm. I suppose you might double-acknowledge the same frame by accident. I don't know if that creates a warning or an error...)

--

Marking "Failed QA" as there's some unused copy/paste code that needs to be removed (e.g. "process_job"). You'll also need to take Bug 32481 into account.

I suspect that you might also run into memory problems over time since you're not doing any forking. I reckon you could fork in the commit_records() function to do the actual indexing, since that's the point most subject to spikes in memory usage.
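[Editor's note] To make the alarm concern concrete, here is a self-contained toy showing where a SIGALRM handler can cut in between statements of the main loop. This is not the attachment's code; frames are simulated integers and the "flush" only prints.

    # Toy illustration of an alarm-driven flush and the window described above.
    use Modern::Perl;
    use Time::HiRes qw( sleep );

    my @pending;
    my $flushes = 0;

    # With safe signals, the handler runs between statements of the main loop.
    $SIG{ALRM} = sub {
        $flushes++;
        say "flush #$flushes: processing " . scalar(@pending) . " frame(s)";
        @pending = ();
        alarm 1;    # re-arm for the next one-second batch window
    };
    alarm 1;

    for my $frame ( 1 .. 10 ) {    # simulated frames instead of receive_frame()
        # If SIGALRM fires right here, this frame has been received (and, with
        # client-ack semantics, possibly acknowledged) but not yet batched, so it
        # would only be processed at the next flush - the race described above.
        push @pending, $frame;
        sleep 0.3;                 # simulate work between frames
    }
    alarm 0;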
I don't think this is the correct way. First, I think we should have an entry in koha-conf to configure the workers, depending on their type. Here, for update_elastic_index jobs, we actually want to batch-consume and process them all every X seconds (or we could decide on every X jobs). I am trying something.
On the other hand... we already have the Koha modules loaded, and it cannot get more efficient than that. I wanted to avoid a separate script for a specific job type, but actually I am inclined to agree with the approach. My first idea was to have a Koha::BackgroundJobs::UpdateElasticIndex (notice the *s*) to process several jobs. We would send the job ids as an array constructed by the worker, with something in the config like { background_jobs: { update_elastic_index: { batch: 10 } } } to process in batches of 10. But that makes things more complicated and we don't want that for now.
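[Editor's note] Purely to illustrate the idea being set aside here, a toy sketch of how a worker might honour such a per-job-type batch setting. The config keys mirror the example above; none of this exists in the patch.

    # Hypothetical per-job-type batching driven by configuration; the config
    # structure and the chunking behaviour are assumptions, not Koha code.
    use Modern::Perl;

    # Stand-in for reading <background_jobs><update_elastic_index><batch> from koha-conf
    my $config     = { background_jobs => { update_elastic_index => { batch => 10 } } };
    my $batch_size = $config->{background_jobs}{update_elastic_index}{batch} // 1;

    my @job_ids = 1 .. 23;    # pretend these arrived from the message queue
    while (@job_ids) {
        my @chunk = splice @job_ids, 0, $batch_size;
        say "would process jobs: @chunk";    # one batched call per chunk of up to $batch_size
    }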
Created attachment 145195 [details] [review] Bug 32594: Improve the ES daemon indexer
Created attachment 145196 [details] [review] Bug 32594: Improve the ES daemon indexer
Adding my bit to the discussion. We are not ready yet, however:
- I think we should deal with authorities here
- We need to make the constant configurable (would options on the script be enough? see the sketch below)
- Koha::Logger is logging to the opac log (this is true for bug 32393 as well).
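[Editor's note] On the second point, a hedged sketch of what "options of the script" could look like. The option names and defaults are assumptions, not what the script eventually implements.

    # Hypothetical command-line options for the indexer daemon; --batch_size and
    # --delay are illustrative names, not the script's actual interface.
    use Modern::Perl;
    use Getopt::Long qw( GetOptions );

    my $batch_size = 1000;    # max records sent to Elasticsearch per bulk request
    my $delay      = 1;       # seconds spent collecting jobs before flushing
    my $help       = 0;

    GetOptions(
        'batch_size=i' => \$batch_size,
        'delay=i'      => \$delay,
        'h|help'       => \$help,
    ) or die "Error in command line arguments\n";

    if ($help) {
        say "Usage: es_indexer_daemon.pl [--batch_size N] [--delay SECONDS]";
        exit 0;
    }

    say "would flush up to $batch_size record(s) every $delay second(s)";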
Created attachment 145197 [details] [review] Bug 32594: Improve the ES daemon indexer
(In reply to Jonathan Druart from comment #7)
> Adding my bit to the discussion. We are not ready yet, however:
>
> - I think we should deal with authorities here
> - We need to make the constant configurable (would options on the script be enough?)
> - Koha::Logger is logging to the opac log (this is true for bug 32393 as well).

Also I think it is confusing to have misc/search_tools/es_indexer_daemon.pl and misc/background_jobs_worker.pl
We need this ASAP for 22.11.
No-one else here?
Comment on attachment 145197 [details] [review]
Bug 32594: Improve the ES daemon indexer

Review of attachment 145197 [details] [review]:
-----------------------------------------------------------------

::: misc/search_tools/es_indexer_daemon.pl
@@ +126,4 @@
> 
> + my @records;
> + for my $job ( @jobs ) {
> + my $args = $job->json->decode($job->data);

I suppose this line should have a try/catch, since we're doing that for the other job handling code.
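[Editor's note] Roughly what the suggested guard might look like, continuing the quoted loop and using Try::Tiny as the other job-handling code does. Skipping the malformed job, the warn(), and the record_ids key are assumptions, not the committed code.

    # Sketch only: wrap the decode in a try/catch and skip jobs whose payload
    # cannot be parsed.
    use Try::Tiny qw( try catch );

    my @records;
    for my $job (@jobs) {
        my $args = try {
            $job->json->decode( $job->data );
        }
        catch {
            warn sprintf "Cannot decode data for job %s: %s", $job->id, $_;
            undef;    # this job contributes nothing to the batch
        };
        next unless $args;
        push @records, @{ $args->{record_ids} // [] };    # assumed payload key
    }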
I don't use Elasticsearch with Koha yet, so I'm probably not the best positioned to comment.

(In reply to Jonathan Druart from comment #9)
> Also I think it is confusing to have
> misc/search_tools/es_indexer_daemon.pl
> and
> misc/background_jobs_worker.pl

I don't think there's an issue having a separate worker program, but "search_tools" probably isn't the right place for it. Perhaps we should have something like "misc/workers/background_jobs_worker.pl" and "misc/workers/es_indexer_daemon.pl"
Created attachment 145704 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single record reindex jobs for circulation and other actions. It can take a long time to process these as we need to load the ES settings for each.

This patch updates the Elasticsearch background jobs to throw records into a new queue that can be processed by its own worker, and adds a dedicated worker that batches the jobs every 1 second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - Set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch is not enabled
Created attachment 145711 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single record reindex jobs for circulation and other actions. It can take a long time to process these as we need to load the ES settings for each.

This patch updates the Elasticsearch background jobs to throw records into a new queue that can be processed by its own worker, and adds a dedicated worker that batches the jobs every 1 second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - Set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch is not enabled

Signed-off-by: David Nind <david@davidnind.com>
Testing notes (using KTD): 1. The path to the script for steps 2 and 7 is: perl misc/workers/es_indexer_daemon.pl
error: sha1 information is lacking or useless (misc/background_jobs_worker.pl).
Shouldn't we move misc/background_jobs_worker.pl to misc/workers/ as well? On another bug?
(In reply to Jonathan Druart from comment #18) > Shouldn't we move misc/background_jobs_worker.pl to misc/workers/ as well? > On another bug? +1
I encountered an error while testing this patch with batch item updates, in which one bad bib blocked the ES indexing update for every item in the batch.

To reproduce the error:
1. Apply patches and run ktd --es7 up
2. Set SearchEngine system preference to 'Elasticsearch'
3. perl misc/search_tools/es_indexer_daemon.pl (leave running in the terminal while performing subsequent steps)
4. Perform an item search for all items where Collection is Reference
5. Export the results to a barcode file (898 items)
6. Upload this file to the Batch Item Modification Tool
7. Make an edit to the Full Call Number* and click Save
8. Notice that es_indexer_daemon prints an error to the console
9. Perform a catalog search
10. Limit the results with the Collection - Reference facet
11. Note that none of the results display the updated Call Number value

*Or, as far as I can tell, any edit that will produce a change in the record for biblionumber 369

Error message printed to the console:

^ at /kohadevbox/koha/Koha/Biblio/Metadata.pm line 114.
DEBUG - Update of elastic index failed with: Invalid data, cannot decode metadata object (biblio_metadata.id=368, biblionumber=369, format=marcxml, schema=MARC21, decoding_error=':8: parser error : PCDATA invalid Char value 31
<controlfield tag="001">00aD000015937</controlfield>
^
:9: parser error : PCDATA invalid Char value 31
<controlfield tag="004">00satmrnu0</controlfield>
^
:9: parser error : PCDATA invalid Char value 31
<controlfield tag="004">00satmrnu0</controlfield>
^
:9: parser error : PCDATA invalid Char value 31
<controlfield tag="004">00satmrnu0</controlfield>
^
:9: parser error : PCDATA invalid Char value 31
<controlfield tag="004">00satmrnu0</controlfield>
^
:10: parser error : PCDATA invalid Char value 31
<controlfield tag="008">00ar19881981bdkldan</controlfield>
^
:10: parser error : PCDATA invalid Char value 31
<controlfield tag="008">00ar19881981bdkldan</controlfield>
^
:10: parser error : PCDATA invalid Char value 31
<controlfield tag="008">00ar19881981bdkldan</controlfield>
^')
(In reply to Emily Lamancusa from comment #20)
> I encountered an error while testing this patch with batch item updates, in
> which one bad bib blocked the ES indexing update for every item in the batch.

Hello Emily, I confirm the problem, but it is not directly linked to this patch. I am pretty sure this issue exists without it.

When you create a new batch, 2 new jobs are created: the first one modifies the items in the database, the second one updates Elasticsearch's index. If you look at the job list you will see that the second one is in error (still not clear: the status is "finished", but there is an error in the detail: " / records have successfully been reindexed. Some errors occurred.")

That's not ideal, but we should deal with it on a separate bug report in my opinion.
(In reply to Jonathan Druart from comment #21)
> (In reply to Emily Lamancusa from comment #20)
> > I encountered an error while testing this patch with batch item updates, in
> > which one bad bib blocked the ES indexing update for every item in the batch.
>
> Hello Emily, I confirm the problem, but it is not directly linked to this
> patch. I am pretty sure this issue exists without it.
>
> When you create a new batch, 2 new jobs are created: the first one modifies
> the items in the database, the second one updates Elasticsearch's index.
> If you look at the job list you will see that the second one is in error
> (still not clear: the status is "finished", but there is an error in the
> detail: " / records have successfully been reindexed. Some errors occurred.")
>
> That's not ideal, but we should deal with it on a separate bug report in my
> opinion.

Good to know - I didn't think to check whether the error was present without the patch (though it seems obvious in retrospect). Everything worked as intended for edits that didn't touch the broken bib, so I'll sign off here and open a new bug.
Created attachment 146402 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single record reindex jobs for circulation and other actions. It can take a long time to process these as we need to load the ES settings for each.

This patch updates the Elasticsearch background jobs to throw records into a new queue that can be processed by its own worker, and adds a dedicated worker that batches the jobs every 1 second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - Set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch is not enabled

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Testing notes (using KTD): The path to the script for steps 2 and 7 is: perl misc/workers/es_indexer_daemon.pl
Bug 32920 created for the error noted in comment #20
Created attachment 146831 [details] [review] Bug 32594: (follow-up) Adjust logging per bug 32612
Created attachment 147141 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single record reindex jobs for circulation and other actions. It can take a long time to process these as we need to load the ES settings for each.

This patch updates the Elasticsearch background jobs to throw records into a new queue that can be processed by its own worker, and adds a dedicated worker that batches the jobs every 1 second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - Set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch is not enabled

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>

Bug 32594: (follow-up) Adjust logging per bug 32612

JD amended patch: tidy! There were tabs here...

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Not sure where my patches disappeared to, but that's not important. Ready to go!
Pushed to master for 23.05. Nice work everyone, thanks!
Created attachment 147460 [details] [review] Bug 32594: (QA follow-up) Adjust tests Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I think we forgot an important part here: there is no daemon wrapping the new script, so it is not running (and records are not being indexed!)
(In reply to Jonathan Druart from comment #31)
> I think we forgot an important part here: there is no daemon wrapping the
> new script, so it is not running (and records are not being indexed!)

Maybe bake it into koha-indexer?
Nice work everyone! Pushed to stable for 22.11.x
Created attachment 147562 [details] [review] Bug 32594: (QA follow-up) Fix POD Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
POD fix pushed to master.
Nick, I am seeing " / records have successfully been reindexed. Some errors occurred. " when showing a job detail.
Should we remove Koha::BackgroundJob::UpdateElasticIndex->progress?
Backport to 22.11.04 reverted while additional work is needed.
Created attachment 148350 [details] [review] Bug 32594: POC
Created attachment 148513 [details] [review] Bug 32594: Mark jobs as started and finished
(In reply to Nick Clemens from comment #39)
> Created attachment 148350 [details] [review]
> Bug 32594: POC

I don't think we should loop over the jobs; we are adding additional (almost useless) processing. Here we need to do things fast, so let's update them all at once. To prevent other scripts/jobs from using this, we should not provide a method (that does not feel correct).

Now we need to decide if we allow indexing jobs from another worker. The detail page of the job is still wrong (" / records have successfully been reindexed. Some errors occurred."). This is because it expects Koha::BackgroundJob::UpdateElasticIndex->process to have processed the job and provided the correct info. IMO we should not provide 2 ways to process the jobs; only the new worker should do it. So let's remove the process method and simplify the job detail view. Sounds like we should move that to a separate bug...
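[Editor's note] For illustration, updating all the batched jobs in one statement could look something like the sketch below. The column values and the no_triggers option are assumptions about the Koha::Objects API, not the committed follow-up.

    # Sketch: mark every job in the batch finished with a single UPDATE, instead
    # of looping over individual Koha::BackgroundJob objects.
    use Modern::Perl;
    use Koha::BackgroundJobs;
    use Koha::DateUtils qw( dt_from_string );

    my @batched_messages = ();    # placeholder for the frames decoded from the queue
    my @job_ids = map { $_->{job_id} } @batched_messages;

    Koha::BackgroundJobs->search( { id => { -in => \@job_ids } } )->update(
        {
            status   => 'finished',
            progress => 1,
            ended_on => dt_from_string,
        },
        { no_triggers => 1 }    # one plain SQL UPDATE, no per-object processing
    );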
Created attachment 148571 [details] [review] Bug 32594: Mark jobs as started and finished Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
(In reply to Jonathan Druart from comment #41)
> (In reply to Nick Clemens from comment #39)
> > Created attachment 148350 [details] [review]
> > Bug 32594: POC
>
> I don't think we should loop over the jobs; we are adding additional (almost
> useless) processing. Here we need to do things fast, so let's update them
> all at once. To prevent other scripts/jobs from using this, we should not
> provide a method (that does not feel correct).

I was trying to set the number processed to the number in the job; I forgot we made them all a size of 1. I squashed and marked yours SO.

> Now we need to decide if we allow indexing jobs from another worker. The
> detail page of the job is still wrong (" / records have successfully been
> reindexed. Some errors occurred.").

Bug 33319 filed to improve this.

As for processing:
> IMO we should not provide 2 ways to process the jobs; only the new worker
> should do it. So let's remove the process method and simplify the job detail
> view. Sounds like we should move that to a separate bug...

I think that's a third job - we could have some plugins or some outside system wanting to handle indexation for some reason? But I am okay with removing it too.
Created attachment 148588 [details] [review] Bug 32594: Mark jobs as started and finished Signed-off-by: Nick Clemens <nick@bywatersolutions.com> Restored authorship so that it shows the patch has been signed off.
Follow-up in master.
Many hands make light work, thank you everyone! Pushed to 22.11.x for the next release