Bug 32594 - Add a dedicated ES indexing background worker
Summary: Add a dedicated ES indexing background worker
Status: RESOLVED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching - Elasticsearch
Version: master
Hardware: All
OS: All
Importance: P5 - low critical
Assignee: Nick Clemens
QA Contact: Jonathan Druart
URL:
Keywords: rel_22_11_candidate
Depends on: 32481 32393 32992
Blocks: 32572 35086 36009 33108 33486
Reported: 2023-01-09 13:05 UTC by Nick Clemens
Modified: 2024-02-23 09:57 UTC
CC List: 10 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in: 23.05.00, 22.11.06


Attachments
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue (6.26 KB, patch)
2023-01-09 13:52 UTC, Nick Clemens
Details | Diff | Splinter Review
Bug 32594: Improve the ES daemon indexer (6.12 KB, patch)
2023-01-10 16:07 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 32594: Improve the ES daemon indexer (6.07 KB, patch)
2023-01-10 16:10 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 32594: Improve the ES daemon indexer (6.12 KB, patch)
2023-01-10 16:27 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue (8.60 KB, patch)
2023-01-26 19:48 UTC, Nick Clemens
Details | Diff | Splinter Review
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue (8.64 KB, patch)
2023-01-26 23:28 UTC, David Nind
Details | Diff | Splinter Review
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue (8.71 KB, patch)
2023-02-08 16:58 UTC, Emily Lamancusa
Details | Diff | Splinter Review
Bug 32594: (follow-up) Adjust logging per bug 32612 (2.40 KB, patch)
2023-02-17 11:52 UTC, Nick Clemens
Details | Diff | Splinter Review
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue (8.99 KB, patch)
2023-02-22 13:22 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 32594: (QA follow-up) Adjust tests (1.05 KB, patch)
2023-02-27 16:12 UTC, Tomás Cohen Arazi
Details | Diff | Splinter Review
Bug 32594: (QA follow-up) Fix POD (1.07 KB, patch)
2023-03-01 13:55 UTC, Tomás Cohen Arazi
Details | Diff | Splinter Review
Bug 32594: POC (1.42 KB, patch)
2023-03-17 14:22 UTC, Nick Clemens
Details | Diff | Splinter Review
Bug 32594: Mark jobs as started and finished (1.86 KB, patch)
2023-03-22 10:59 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 32594: Mark jobs as started and finished (1.30 KB, patch)
2023-03-22 19:57 UTC, Nick Clemens
Details | Diff | Splinter Review
Bug 32594: Mark jobs as started and finished (1.31 KB, patch)
2023-03-23 08:18 UTC, Jonathan Druart
Details | Diff | Splinter Review

Description Nick Clemens 2023-01-09 13:05:56 UTC
When pushing the ES indexing jobs to the background queue, we have found it is hard for the workers to keep up in production.

During high circulation times there are many single record reindexes produced.

It would be nice to have a dedicated ES worker that can batch these to reduce the processing time and reduce the hits to the ES server
Comment 1 Nick Clemens 2023-01-09 13:52:44 UTC
Created attachment 145145 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single-record reindex jobs for circulation and other
actions. It can take a long time to process these, as we need to load the ES settings for each one.

This patch updates the Elasticsearch background jobs to push records onto a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the jobs
every second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled
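
A rough sketch of the batching loop described in the patch message above (illustrative only, not the attached patch; the frame source and indexer call are placeholders for the broker connection and Elasticsearch indexer the real workers use):

    #!/usr/bin/perl
    # Illustrative only: buffer single-record index requests for up to one
    # second, then hand them to the indexer as a single batch, so the ES
    # settings are loaded once per batch instead of once per record.
    use Modern::Perl;
    use JSON;
    use Time::HiRes qw( time sleep );

    my $json          = JSON->new;
    my @record_ids;
    my $batch_started = time;

    while (1) {
        if ( my $body = next_message() ) {     # placeholder for reading one queued job
            my $args = $json->decode($body);
            push @record_ids, $args->{record_id};
        }
        if ( @record_ids && time - $batch_started >= 1 ) {
            index_records( \@record_ids );     # placeholder for the real ES bulk index call
            @record_ids    = ();
            $batch_started = time;
        }
        sleep 0.1;
    }

    sub next_message  { return undef }         # stands in for the STOMP/DB polling code
    sub index_records { return }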
Comment 2 David Cook 2023-01-10 01:02:13 UTC
(In reply to Nick Clemens from comment #0)
> When pushing the ES indexing jobs to the background queue, we have found it
> is hard for the workers to keep up in production.
> 
> During high circulation times there are many single record reindexes
> produced.
> 
> It would be nice to have a dedicated ES worker that can batch these to
> reduce the processing time and reduce the hits to the ES server

I like the idea of batching together the single record indexes. 

I think I'd prefer if this worker created a new message that contained multiple job_ids, but background_jobs_worker.pl only supports a single job_id. Maybe we should enhance background_jobs_worker.pl to be able to take multiple job IDs...

I'm not super familiar with alarm, so I'm curious at what point in the code it is able to interrupt. It seems like it could introduce a race condition...

(If you received the frame, and then the alarm handler fired, the frame would be acknowledged before it's processed. But the job should be handled in the next alarm. I suppose you might double acknowledge the same frame by accident. I don't know if that creates a warning or an error...)
 
--

Marking "Failed QA" as there's some unused copy/paste code that needs to be removed (e.g. "process_job"). You'll also need to take into account Bug 32481.

I suspect that you might also run into memory problems over time since you're not doing any forking. I reckon you could fork in the commit_records() function to do the actual indexing, since that's the point that will be most subject to spikes in memory usage.
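
For readers unfamiliar with the pattern under discussion, here is a minimal, hypothetical shape of an alarm-driven flush (not the code under review). The point David raises above is that where the signal lands relative to receiving, buffering and acknowledging a frame decides whether that frame is processed in this flush or the next one:

    use Modern::Perl;

    my @buffer;

    # Hypothetical SIGALRM-driven flush: once a second the handler drains
    # whatever has been buffered so far and re-arms the timer.
    $SIG{ALRM} = sub {
        process_batch( [@buffer] ) if @buffer;    # placeholder batch handler
        @buffer = ();
        alarm 1;
    };
    alarm 1;

    while ( my $frame = next_frame() ) {          # placeholder frame source
        # The alarm can fire at any point between these statements, which is
        # why acknowledging a frame before it has been processed needs care.
        push @buffer, $frame;
        acknowledge($frame);                      # placeholder acknowledgement
    }

    sub process_batch { return }
    sub next_frame    { return undef }
    sub acknowledge   { return }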
Comment 3 Jonathan Druart 2023-01-10 14:56:59 UTC
I don't think this is the correct way.

First I think we should have an entry in koha-conf to configure the workers, depending on their type.

Here, for update_elastic_index jobs, we actually want to batch consume and process them all every X seconds (or we could decide every X jobs).

I am trying something.
Comment 4 Jonathan Druart 2023-01-10 15:23:03 UTC
On the other hand... we already have the Koha modules loaded, and it cannot get more efficient than that. I wanted to avoid a separate script for a specific job type, but actually I am inclined to agree with the approach.

My first idea was to have a Koha::BackgroundJobs::UpdateElasticIndex (notice the *s*) to process several jobs. We would send the job ids in an array constructed by the worker, with something in the config like { background_jobs: { update_elastic_index: { batch: 10 } } } to process in batches of 10. But it makes things more complicated and we don't want that for now.
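
For what it is worth, a sketch of what that (not pursued) idea could look like — the koha-conf key, the default batch size and the batched job class are all hypothetical, taken from the comment above rather than from any patch:

    use Modern::Perl;
    use C4::Context;

    # Hypothetical koha-conf.xml entry (does not exist today):
    #   <background_jobs><update_elastic_index><batch>10</batch></update_elastic_index></background_jobs>
    my $conf       = C4::Context->config('background_jobs') // {};
    my $batch_size = $conf->{update_elastic_index}{batch}   // 10;

    my @queued_job_ids = fetch_queued_job_ids();    # placeholder for however the worker collects ids

    # Hand the ids to the (hypothetical) batched job class per chunk of $batch_size.
    while ( my @chunk = splice @queued_job_ids, 0, $batch_size ) {
        process_batched_index_job( \@chunk );       # stand-in for Koha::BackgroundJobs::UpdateElasticIndex
    }

    sub fetch_queued_job_ids      { return () }
    sub process_batched_index_job { return }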
Comment 5 Jonathan Druart 2023-01-10 16:07:03 UTC
Created attachment 145195 [details] [review]
Bug 32594: Improve the ES daemon indexer
Comment 6 Jonathan Druart 2023-01-10 16:10:29 UTC
Created attachment 145196 [details] [review]
Bug 32594: Improve the ES daemon indexer
Comment 7 Jonathan Druart 2023-01-10 16:11:14 UTC
Adding my bit to the discussion. We are not ready yet, however:

- I think we should deal with authorities here
- We need to make the constant configurable (options of the script enough? see the sketch after this list)
- Koha::Logger is logging to the opac log (this is true for bug 32393 as well).
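
For the second point above, making the constant configurable could be as small as a couple of command-line options — a hypothetical sketch (the option names and defaults are made up, not the script's actual interface):

    use Modern::Perl;
    use Getopt::Long;

    my $batch_size = 100;    # max records per Elasticsearch bulk request (assumed default)
    my $delay      = 1;      # seconds to wait while a batch accumulates (assumed default)

    GetOptions(
        'batch-size=i' => \$batch_size,
        'delay=i'      => \$delay,
    ) or die "Usage: $0 [--batch-size N] [--delay SECONDS]\n";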
Comment 8 Jonathan Druart 2023-01-10 16:27:17 UTC
Created attachment 145197 [details] [review]
Bug 32594: Improve the ES daemon indexer
Comment 9 Jonathan Druart 2023-01-10 16:28:47 UTC
(In reply to Jonathan Druart from comment #7)
> Adding my bit to the discussion. We are not ready however:
> 
> - I think we should deal with authorities here
> - We need to make the constant configurable (options of the script enough?)
> - Koha::Logger is logging to the opac log (this is true for bug 32393 as
> well).

Also I think it is confusing to have
  misc/search_tools/es_indexer_daemon.pl
and
  misc/background_jobs_worker.pl
Comment 10 Jonathan Druart 2023-01-12 14:38:08 UTC
We need this ASAP for 22.11.
Comment 11 Jonathan Druart 2023-01-24 11:38:25 UTC
No-one else here?
Comment 12 David Cook 2023-01-24 22:38:43 UTC
Comment on attachment 145197 [details] [review]
Bug 32594: Improve the ES daemon indexer

Review of attachment 145197 [details] [review]:
-----------------------------------------------------------------

::: misc/search_tools/es_indexer_daemon.pl
@@ +126,4 @@
>  
> +    my @records;
> +    for my $job ( @jobs ) {
> +        my $args = $job->json->decode($job->data);

I suppose this line should have a try/catch since we're doing that for the other job handling code.
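
A minimal sketch of that suggestion, wrapping the quoted decode call in the Try::Tiny style used by the other job-handling code (@jobs and @records stand in for the variables from the quoted loop):

    use Modern::Perl;
    use Try::Tiny;
    use Koha::Logger;

    my @jobs    = ();    # in the real worker these come from the batched frames
    my @records = ();

    for my $job (@jobs) {
        my $args = try {
            $job->json->decode( $job->data );
        } catch {
            Koha::Logger->get->warn( sprintf "Cannot decode data for job %s: %s", $job->id, $_ );
            undef;
        };
        next unless $args;
        push @records, $args;    # simplified; the real loop builds the index batch from $args
    }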
Comment 13 David Cook 2023-01-24 22:43:22 UTC
I don't use Elasticsearch with Koha yet, so I'm probably not the best positioned to comment.

(In reply to Jonathan Druart from comment #9)
> Also I think it is confusing to have
>   misc/search_tools/es_indexer_daemon.pl
> and
>   misc/background_jobs_worker.pl

I don't think there's an issue having a separate worker program, but "search_tools" probably isn't the right place for it. Perhaps we should have something like "misc/workers/background_jobs_worker.pl" and "misc/workers/es_indexer_daemon.pl"
Comment 14 Nick Clemens 2023-01-26 19:48:36 UTC
Created attachment 145704 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single-record reindex jobs for circulation and other
actions. It can take a long time to process these, as we need to load the ES settings for each one.

This patch updates the Elasticsearch background jobs to push records onto a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the jobs
every second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled
Comment 15 David Nind 2023-01-26 23:28:07 UTC
Created attachment 145711 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single-record reindex jobs for circulation and other
actions. It can take a long time to process these, as we need to load the ES settings for each one.

This patch updates the Elasticsearch background jobs to push records onto a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the jobs
every second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled

Signed-off-by: David Nind <david@davidnind.com>
Comment 16 David Nind 2023-01-26 23:29:26 UTC
Testing notes (using KTD):

1. The path to the script for steps 2 and 7 is: perl misc/workers/es_indexer_daemon.pl
Comment 17 Jonathan Druart 2023-01-31 08:58:12 UTC
error: sha1 information is lacking or useless (misc/background_jobs_worker.pl).
Comment 18 Jonathan Druart 2023-01-31 08:59:23 UTC
Shouldn't we move misc/background_jobs_worker.pl to misc/workers/ as well? On another bug?
Comment 19 David Cook 2023-02-01 02:44:31 UTC
(In reply to Jonathan Druart from comment #18)
> Shouldn't we move misc/background_jobs_worker.pl to misc/workers/ as well?
> On another bug?

+1
Comment 20 Emily Lamancusa 2023-02-02 20:29:06 UTC
I encountered an error while testing this patch with batch item updates, in which one bad bib blocked the ES indexing update for every item in the batch.

To reproduce the error:
1. Apply patches and run ktd --es7 up
2. Set SearchEngine system preference to 'Elasticsearch'
3. perl misc/search_tools/es_indexer_daemon.pl
   (leave running in the terminal while performing subsequent steps)
4. Perform an item search for all items where Collection is Reference
5. Export the results to a barcode file (898 items)
6. Upload this file to the Batch Item Modification Tool
7. Make an edit to the Full Call Number* and click Save
8. Notice that es_indexer_daemon prints an error to the console
9. Perform a catalog search
10. Limit the results with the Collection - Reference facet
11. Note that none of the results display the updated Call Number value

*Or, as far as I can tell, any edit that will produce a change in the record for
biblionumber 369

Error message printed to the console:

^ at /kohadevbox/koha/Koha/Biblio/Metadata.pm line 114.
DEBUG - Update of elastic index failed with: Invalid data, cannot decode metadata object (biblio_metadata.id=368, biblionumber=369, format=marcxml, schema=MARC21, decoding_error=':8: parser error : PCDATA invalid Char value 31
  <controlfield tag="001">00aD000015937</controlfield>
                            ^
:9: parser error : PCDATA invalid Char value 31
  <controlfield tag="004">00satmrnu0</controlfield>
                            ^
:9: parser error : PCDATA invalid Char value 31
  <controlfield tag="004">00satmrnu0</controlfield>
                               ^
:9: parser error : PCDATA invalid Char value 31
  <controlfield tag="004">00satmrnu0</controlfield>
                                  ^
:9: parser error : PCDATA invalid Char value 31
  <controlfield tag="004">00satmrnu0</controlfield>
                                     ^
:10: parser error : PCDATA invalid Char value 31
  <controlfield tag="008">00ar19881981bdkldan</controlfield>
                            ^
:10: parser error : PCDATA invalid Char value 31
  <controlfield tag="008">00ar19881981bdkldan</controlfield>
                                       ^
:10: parser error : PCDATA invalid Char value 31
  <controlfield tag="008">00ar19881981bdkldan</controlfield>
                                           ^')
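
Not part of this patch set, but to illustrate the usual way to keep one undecodable record from blocking the rest of a batch: guard each record individually while building the bulk request. A sketch, assuming the metadata->record call from the backtrace above:

    use Modern::Perl;
    use Try::Tiny;
    use Koha::Biblios;

    my @biblionumbers = @ARGV;    # the biblionumbers collected for this batch
    my @marc_records;

    for my $biblionumber (@biblionumbers) {
        try {
            my $biblio = Koha::Biblios->find($biblionumber);
            push @marc_records, $biblio->metadata->record if $biblio;
        } catch {
            # A record with broken MARCXML (like biblionumber 369 above) is
            # logged and skipped instead of aborting the whole batch.
            warn "Skipping biblionumber $biblionumber: $_";
        };
    }
    # @marc_records now holds only the decodable records; index those.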
Comment 21 Jonathan Druart 2023-02-08 08:24:18 UTC
(In reply to Emily Lamancusa from comment #20)
> I encountered an error while testing this patch with batch item updates, in
> which one bad bib blocked the ES indexing update for every item in the batch.

Hello Emily, I confirm the problem, but that's not directly linked with this patch. I am pretty sure this issue exists without this patch.

When you create a new batch, 2 new jobs are created: the first one modifies the items in the database, the second one updates the Elasticsearch index. If you look at the job list you will see that the second one is in error (still not clear: the status is "finished", but there is an error in the detail: " / records have successfully been reindexed. Some errors occurred.")

That's not ideal but we should deal with that on a separate bug report in my opinion.
Comment 22 Emily Lamancusa 2023-02-08 16:31:58 UTC
(In reply to Jonathan Druart from comment #21)
> (In reply to Emily Lamancusa from comment #20)
> > I encountered an error while testing this patch with batch item updates, in
> > which one bad bib blocked the ES indexing update for every item in the batch.
> 
> Hello Emily, I confirm the problem, but that's not directly linked with this
> patch. I am pretty sure this issue exists without this patch.
> 
> When you create a new batch, 2 new jobs are created, the first one is
> modifying the items in database, the second one is updating elastic's index.
> If you look at the job list you will see that the second one has in error
> (still not clear, the status is "finished", but there is an error in the
> detail: " / records have successfully been reindexed. Some errors occurred.")
> 
> That's not ideal but we should deal with that on a separate bug report in my
> opinion.

Good to know - I didn't think to check whether the error was present without the patch (though it seems obvious in retrospect). Everything worked as intended on edits that didn't touch the broken bib, so I'll sign off here and open a new bug.
Comment 23 Emily Lamancusa 2023-02-08 16:58:23 UTC
Created attachment 146402 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single-record reindex jobs for circulation and other
actions. It can take a long time to process these, as we need to load the ES settings for each one.

This patch updates the Elasticsearch background jobs to push records onto a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the jobs
every second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Comment 24 Emily Lamancusa 2023-02-08 17:00:00 UTC
Testing notes (using KTD):

The path to the script for steps 2 and 7 is: perl misc/workers/es_indexer_daemon.pl
Comment 25 Emily Lamancusa 2023-02-08 21:45:38 UTC
Bug 32920 created for the error noted in comment #20
Comment 26 Nick Clemens 2023-02-17 11:52:27 UTC
Created attachment 146831 [details] [review]
Bug 32594: (follow-up) Adjust logging per bug 32612
Comment 27 Jonathan Druart 2023-02-22 13:22:07 UTC
Created attachment 147141 [details] [review]
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Currently we generate large numbers of single-record reindex jobs for circulation and other
actions. It can take a long time to process these, as we need to load the ES settings for each one.

This patch updates the Elasticsearch background jobs to push records onto a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the jobs
every second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in the terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>

Bug 32594: (follow-up) Adjust logging per bug 32612

JD amended patch: tidy! There were tabs here...

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Comment 28 Jonathan Druart 2023-02-22 14:53:12 UTC
Not sure where my patches disappeared to, but that's not important.

Ready to go!
Comment 29 Tomás Cohen Arazi 2023-02-24 20:38:49 UTC
Pushed to master for 23.05.

Nice work everyone, thanks!
Comment 30 Tomás Cohen Arazi 2023-02-27 16:12:39 UTC
Created attachment 147460 [details] [review]
Bug 32594: (QA follow-up) Adjust tests

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 31 Jonathan Druart 2023-02-28 15:05:24 UTC
I think we forgot an important part here: there is no daemon wrapping the new script, so it is not running (and records are not indexed!)
Comment 32 Tomás Cohen Arazi 2023-02-28 17:29:42 UTC
(In reply to Jonathan Druart from comment #31)
> I think we forget an important part here, there is no daemon around the new
> script, and so it's not running (and records are not indexed!)

Maybe bake it into koha-indexer?
Comment 33 Matt Blenkinsop 2023-03-01 09:54:06 UTC
Nice work everyone!

Pushed to stable for 22.11.x
Comment 34 Tomás Cohen Arazi 2023-03-01 13:55:33 UTC
Created attachment 147562 [details] [review]
Bug 32594: (QA follow-up) Fix POD

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 35 Tomás Cohen Arazi 2023-03-02 11:38:56 UTC
POD fix pushed to master.
Comment 36 Jonathan Druart 2023-03-02 15:23:33 UTC
Nick, I am seeing " / records have successfully been reindexed. Some errors occurred. " when showing a job detail.
Comment 37 Jonathan Druart 2023-03-02 15:28:08 UTC
Should we remove Koha::BackgroundJob::UpdateElasticIndex->progress?
Comment 38 Matt Blenkinsop 2023-03-03 11:46:18 UTC
Backport to 22.11.04 reverted while additional work is needed
Comment 39 Nick Clemens 2023-03-17 14:22:42 UTC
Created attachment 148350 [details] [review]
Bug 32594: POC
Comment 40 Jonathan Druart 2023-03-22 10:59:19 UTC
Created attachment 148513 [details] [review]
Bug 32594: Mark jobs as started and finished
Comment 41 Jonathan Druart 2023-03-22 11:01:31 UTC
(In reply to Nick Clemens from comment #39)
> Created attachment 148350 [details] [review] [review]
> Bug 32594: POC

I don't think we should loop over the jobs; we are adding additional (almost useless) processing. Here we need to do things fast, so let's update them all at once. To prevent other scripts/jobs from using this, we should not provide a method (that would not feel correct).

Now we need to decide if we allow indexing jobs from another worker. The detail page of the job is still wrong " / records have successfully been reindexed. Some errors occurred.". This is because it expects Koha::BackgroundJob::UpdateElasticIndex->process to have processed the job, and provided the correct info. IMO we should not provide 2 ways to process the jobs, only the new worker should do it. So let's remove the process method and simplify the job detail view.

Sounds like we should move that to a separate bug...
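
A sketch of the "update them all at once" idea, assuming the generic Koha::Objects search/update interface rather than the code in the attached patch:

    use Modern::Perl;
    use Koha::BackgroundJobs;
    use Koha::DateUtils qw( dt_from_string );

    my @job_ids = @ARGV;    # ids of the queued update_elastic_index jobs in this batch

    # Mark the whole batch as started with a single call instead of one
    # update per job...
    my $jobs = Koha::BackgroundJobs->search( { id => { -in => \@job_ids } } );
    $jobs->update( { status => 'started', started_on => dt_from_string } );

    # ... index the records here ...

    # ... then mark them all finished in one go as well.
    $jobs->update( { status => 'finished', ended_on => dt_from_string } );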
Comment 42 Nick Clemens 2023-03-22 19:57:39 UTC
Created attachment 148571 [details] [review]
Bug 32594: Mark jobs as started and finished

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Comment 43 Nick Clemens 2023-03-22 20:00:12 UTC
(In reply to Jonathan Druart from comment #41)
> (In reply to Nick Clemens from comment #39)
> > Created attachment 148350 [details] [review] [review] [review]
> > Bug 32594: POC
> 
> I don't think we should loop over the jobs, we are adding additional (almost
> useless) processing. Here we need to do things fast, so let's update them
> all at once. To prevent other scripts/jobs to use this, we should not
> provide a method (that does not feel correct).

I was trying to set the number processed to the number in the job; I forgot we made them all a size of 1.

I squashed and marked yours as signed off (SO).

> 
> Now we need to decide if we allow indexing jobs from another worker. The
> detail page of the job is still wrong " / records have successfully been
> reindexed. Some errors occurred.". 

Bug 33319 filed to improve this.

As for processing:
> IMO we should not provide 2 ways to process the jobs, only the new worker should do it. So let's remove the process method and simplify the job detail view.
> Sounds like we should move that to a separate bug...

I think that's a third job - we could have some plugin or outside system wanting to handle indexing for some reason? But I am okay with removing it too
Comment 44 Jonathan Druart 2023-03-23 08:18:07 UTC
Created attachment 148588 [details] [review]
Bug 32594: Mark jobs as started and finished

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>

Restored authorship so that it shows the patch has been signed off.
Comment 45 Tomás Cohen Arazi 2023-03-30 10:25:08 UTC
Follow-up in master.
Comment 46 Martin Renvoize 2023-05-03 12:22:38 UTC
Many hands make light work, thank you everyone!

Pushed to 22.11.x for the next release