Bug 35086 - Koha::SearchEngine::Elasticsearch::Indexer->update_index needs to commit in batches
Summary: Koha::SearchEngine::Elasticsearch::Indexer->update_index needs to commit in batches
Status: Pushed to oldstable
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching - Elasticsearch
Version: Main
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: Nick Clemens (kidclamp)
QA Contact:
URL:
Keywords:
Depends on: 32594
Blocks:
Reported: 2023-10-17 20:58 UTC by Nick Clemens (kidclamp)
Modified: 2024-07-25 11:11 UTC
CC List: 6 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
This enables breaking large Elasticsearch or OpenSearch indexing requests into smaller chunks (for example, when updating many records using batch modifications). Instead of sending a single background request for indexing, which could exceed the limits of the search server or take up too many resources, Koha now limits each index update request to a more manageable size. The default chunk size is 5,000 records. To configure a different chunk size, add a <chunk_size> directive to the elasticsearch section of the instance's koha-conf.xml (for example: <chunk_size>2000</chunk_size>); an example stanza is shown after these header fields. NOTE: This doesn't change the command line indexing script, as it already allows passing a commit size defining how many records to send.
Version(s) released in:
24.05.00, 23.11.02, 23.05.09
Circulation function:


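A minimal sketch of where the directive described in the release notes sits in the instance's koha-conf.xml; the <server> and <index_name> entries stand for whatever the existing elasticsearch stanza already contains, and all values shown are illustrative:

    <elasticsearch>
        <server>localhost:9200</server>
        <index_name>koha_instance</index_name>
        <!-- Optional: cap each index update request at this many records;
             defaults to 5000 when omitted or blank -->
        <chunk_size>5000</chunk_size>
    </elasticsearch>
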
Attachments
Bug 35086: Add chunk_size option to elasticsearch configuration (5.82 KB, patch) - 2023-12-22 20:17 UTC, Nick Clemens (kidclamp)
Bug 35086: Add chunk_size option to elasticsearch configuration (5.86 KB, patch) - 2023-12-26 20:49 UTC, David Nind
Bug 35086: Add chunk_size option to elasticsearch configuration (6.08 KB, patch) - 2024-01-04 13:13 UTC, Nick Clemens (kidclamp)
Bug 35086: Also split chunks when indexing from background job (3.01 KB, patch) - 2024-01-04 13:13 UTC, Nick Clemens (kidclamp)
Bug 35086: Tidy tests (10.18 KB, patch) - 2024-01-04 13:13 UTC, Nick Clemens (kidclamp)
Bug 35086: Add chunk_size option to elasticsearch configuration (6.08 KB, patch) - 2024-01-04 21:52 UTC, David Nind
Bug 35086: Also split chunks when indexing from background job (3.05 KB, patch) - 2024-01-04 21:52 UTC, David Nind
Bug 35086: Tidy tests (10.23 KB, patch) - 2024-01-04 21:52 UTC, David Nind
Bug 35086: (follow-up) Use 5000 as example in conf file (1.16 KB, patch) - 2024-01-05 13:05 UTC, Nick Clemens (kidclamp)
Bug 35086: Add chunk_size option to elasticsearch configuration (6.16 KB, patch) - 2024-01-11 14:38 UTC, Jonathan Druart
Bug 35086: Also split chunks when indexing from background job (3.13 KB, patch) - 2024-01-11 14:38 UTC, Jonathan Druart
Bug 35086: Tidy tests (10.30 KB, patch) - 2024-01-11 14:38 UTC, Jonathan Druart
Bug 35086: (follow-up) Use 5000 as example in conf file (1.23 KB, patch) - 2024-01-11 14:38 UTC, Jonathan Druart

Description Nick Clemens (kidclamp) 2023-10-17 20:58:58 UTC
As we have moved tasks into the background, we have updated the code to send a single background request for indexing - so if a library modifies several thousand biblios, we send the full list for reindexing.

update_index needs to ensure the size of the requests does not exceed the limits of ES or take up too many resources.

We should provide a configurable limit (syspref? koha-conf switch?) for each indexing request.
Comment 1 Fridolin Somers 2023-10-28 21:07:54 UTC
I would prefer koha-conf.xml, so that test environments sharing the same database can use different values.

We usually have production using a real cluster and test environments using a single-node sidecar container.
Comment 2 Nick Clemens (kidclamp) 2023-12-22 20:17:11 UTC Comment hidden (obsolete)
Comment 3 Nick Clemens (kidclamp) 2023-12-22 20:17:52 UTC
Note: this patch doesn't affect the command line indexing script, which already allows you to pass a commit size defining how many records to send.
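For comparison, a typical invocation of the command line reindexer with an explicit batch size might look like the line below (the --commit option of misc/search_tools/rebuild_elasticsearch.pl is assumed here; the value is illustrative):

    perl misc/search_tools/rebuild_elasticsearch.pl -v --commit 5000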
Comment 4 David Nind 2023-12-26 20:49:53 UTC Comment hidden (obsolete)
Comment 5 David Nind 2023-12-26 21:28:06 UTC
I've signed off, as everything in the test plan worked.

However, I did note one thing when updating authority records:

1. Starting fresh (patch applied, shutting down KTD, then starting it up again so that there are no previous jobs, adding <chunk_size>250</chunk_size>, and restarting everything).

2. After updating all the authority records by adding text to 680$i, there are 1320 job entries:
   - the last 7 job entries (1314-1320) are for the Elasticsearch index updates, split into chunks of 250 (job 1320 covers the remaining 206 record updates)
   - the first job entry (1) is for the batch authority record modification (1706 modifications)
   - the rest of the job entries (2-1313) are for Elasticsearch index updates to individual bibliographic records (from a sample I checked)

3. I'm assuming that the individual bibliographic record updates are because the authority terms updated are linked to them.

4. I don't know whether this is what is expected, or whether there are plans to chunk the subsequent individual bibliographic records updated because of authority term changes. Or whether that is even possible.

5. Irrespective of that, this is still a great improvement!

Testing notes (using KTD):

1. I tested using ES8 (ktd --es8 up).

2. For the modification of bibliographic records, I updated the 'z - Public note' with some text.

3. For the modification of authority records, I had a rule to add some text to the 680$i subfield.
Comment 6 Nick Clemens (kidclamp) 2024-01-04 13:13:04 UTC
Created attachment 160530 [details] [review]
Bug 35086: Add chunk_size option to elasticsearch configuration

When performing batch operations we can send a large number of records for reindexing at once.
Currently this can create requests that are too large for Elasticsearch to process. We need
to break these requests into chunks.

This patch adds a chunk_size configuration to the elasticsearch stanza in koha-conf.xml.

If blank, we default to 5000.

To test:
0 - Have Koha using Elasticsearch
1 - Create and download a report of all barcodes:
    SELECT barcode FROM items
2 - Batch modify these items
3 - Note a single ES indexing job is created
4 - Create and download a report of all authority ids:
    SELECT auth_header.authid FROM auth_header
5 - Set up a MARC modification template, and batch modify all the authorities
6 - Again note a single ES background job is created
7 - Apply patch
8 - Repeat the modifications above - you still get a single job
9 - Edit koha-conf.xml and add <chunk_size>250</chunk_size> to the elasticsearch stanza
10 - Repeat modifications - you now get several background ES jobs
11 - prove -v t/db_dependent/Koha/SearchEngine/Elasticsearch/Indexer.t

Signed-off-by: David Nind <david@davidnind.com>
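
Not part of the patch itself, but a minimal sketch of the approach the commit message describes, assuming the stanza is read via C4::Context->config('elasticsearch'), a List::MoreUtils::natatime iterator is used for chunking, and the Indexer constructor takes { index => ... }; update_index is called with just the record ids, as in the worker code quoted later in this bug, and the record ids themselves are made up for illustration:

    use Modern::Perl;
    use List::MoreUtils qw( natatime );
    use C4::Context;
    use Koha::SearchEngine::Elasticsearch::Indexer;

    # Hypothetical inputs: the biblio index and a large list of record ids to reindex.
    my $indexer    = Koha::SearchEngine::Elasticsearch::Indexer->new( { index => 'biblios' } );
    my @record_ids = ( 1 .. 12_345 );

    # Read the optional <chunk_size> from the elasticsearch stanza of koha-conf.xml,
    # defaulting to 5000 when the directive is absent or blank.
    my $config     = C4::Context->config('elasticsearch') // {};
    my $chunk_size = $config->{chunk_size} || 5000;

    # Split what would have been one huge request into several smaller ones.
    my $chunks = natatime( $chunk_size, @record_ids );
    while ( my @chunk = $chunks->() ) {
        $indexer->update_index( \@chunk );
    }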
Comment 7 Nick Clemens (kidclamp) 2024-01-04 13:13:06 UTC
Created attachment 160531 [details] [review]
Bug 35086: Also split chunks when indexing from background job

The ES background indexer is designed to combine background jobs when it starts, based on the 'batch_size' option.

While this is helpful for combining individual updates, it can be problematic when there are several large batch modifications, or when the worker has stopped and is restarted.

This patch uses the same logic as in the indexer to split the chunks that are sent directly for indexing.

To test:
1 - Follow test plan on previous patch
2 - Confirm items are correctly indexed and jobs marked
Comment 8 Nick Clemens (kidclamp) 2024-01-04 13:13:09 UTC
Created attachment 160532 [details] [review]
Bug 35086: Tidy tests
Comment 9 David Nind 2024-01-04 21:52:34 UTC
Created attachment 160560 [details] [review]
Bug 35086: Add chunk_size option to elasticsearch configuration

When performing batch operations we can send a large number of records for reindexing at once.
Currently this can create requests that are too large for Elasticsearch to process. We need
to break these requests into chunks.

This patch adds a chunk_size configuration to the elasticsearch stanza in koha-conf.xml.

If blank, we default to 5000.

To test:
0 - Have Koha using Elasticsearch
1 - Create and download a report of all barcodes:
    SELECT barcode FROM items
2 - Batch modify these items
3 - Note a single ES indexing job is created
4 - Create and download a report of all authority ids:
    SELECT auth_header.authid FROM auth_header
5 - Set up a MARC modification template, and batch modify all the authorities
6 - Again note a single ES background job is created
7 - Apply patch
8 - Repeat the modifications above - you still get a single job
9 - Edit koha-conf.xml and add <chunk_size>250</chunk_size> to the elasticsearch stanza
10 - Repeat modifications - you now get several background ES jobs
11 - prove -v t/db_dependent/Koha/SearchEngine/Elasticsearch/Indexer.t

Signed-off-by: David Nind <david@davidnind.com>
Comment 10 David Nind 2024-01-04 21:52:37 UTC
Created attachment 160561 [details] [review]
Bug 35086: Also split chunks when indexing from background job

The ES background indexer is designed to combine background jobs when it starts, based on the 'batch_size' option.

While this is helpful for combining individual updates, it can be problematic when there are several large batch modifications, or when the worker has stopped and is restarted.

This patch uses the same logic as in the indexer to split the chunks that are sent directly for indexing.

To test:
1 - Follow test plan on previous patch
2 - Confirm items are correctly indexed and jobs marked

Signed-off-by: David Nind <david@davidnind.com>
Comment 11 David Nind 2024-01-04 21:52:40 UTC
Created attachment 160562 [details] [review]
Bug 35086: Tidy tests

Signed-off-by: David Nind <david@davidnind.com>
Comment 12 David Nind 2024-01-04 22:08:54 UTC
Here is the list of jobs from testing, using a freshly started KTD:

0000-1316 - Before patch, for both item modification and authority 
            record changes
1317-1318 - After patch, no chunking, item modifications
1319-2632 - After patch, no chunking, authority record changes
            ==> no change (as expected) to number of jobs after patch
                applied and no chunking set
2633-2635 - After patch, chunking (250), item modifications
            . 1 job for batch item modifications, 2 jobs for Elasticsearch
              updates (1 batch of 250 and 1 batch of 161)
2636-3955 - After patch, chunking (250), authority record changes
            . 2636 - Batch authority record modification
            . 2637-3948 - Elasticsearch index updates for 1,312 individual
                          bibliographic record updates
            . 3949-3955 - 6 batches of 250, 1 batch of 206

I'm assuming this is what is expected; feel free to change the bug status if it isn't.


Testing notes:

1. I tested using ES8 (ktd --es8 up).

2. For the modification of bibliographic records, I updated the 'z - Public note' with some text.

3. For the modification of authority records, I had a rule to add some text to the 680$i subfield.
Comment 13 Jonathan Druart 2024-01-05 05:42:51 UTC
You have 500 in the conf file and 5000 in the .pm - is that expected?
Comment 14 Nick Clemens (kidclamp) 2024-01-05 13:05:19 UTC
Created attachment 160572 [details] [review]
Bug 35086: (follow-up) Use 5000 as example in conf file
Comment 15 Nick Clemens (kidclamp) 2024-01-05 13:06:06 UTC
(In reply to Jonathan Druart from comment #13)
> you have 500 in conf and 5000 in pm, is that expected?

500 seemed a more reasonable size in my head, but 5000 is more consistent with our default indexing, so I updated it.
Comment 16 Jonathan Druart 2024-01-11 11:36:00 UTC
I am wondering about the changes made to the worker.

+        while ( ( my @auth_chunk = $auth_chunks->() ) ) {
+            try {
+                $auth_indexer->update_index( \@auth_chunk );
+            } catch {
+                $logger->warn( sprintf "Update of elastic index failed with: %s", $_ );
+            };
+        }

Shouldn't we surround the while with a try instead? Not sure what's best here!
Comment 17 Nick Clemens (kidclamp) 2024-01-11 12:18:12 UTC
(In reply to Jonathan Druart from comment #16)
> Should not we surround the while with a try instead?... Not sure what's best
> here!

I'd rather try to index what we can, and only fail the bits that didn't work. That is, if we have a big job and encounter an error early, a try around the whole thing would fail on the first chunk and stop. This way each chunk is tried - one might fail, but the rest succeed. If there are errors, let's minimize what needs to be reindexed.
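
To illustrate the trade-off being discussed, here is a sketch of the per-chunk error handling, modelled on the hunk quoted in comment 16 and assuming Try::Tiny and a List::MoreUtils::natatime iterator; $chunk_size, @auth_ids, $auth_indexer and $logger are stand-ins for the surrounding worker code. A failure is logged for the chunk that raised it and the loop moves on, whereas a single try around the whole while loop would stop at the first failure:

    use Try::Tiny;
    use List::MoreUtils qw( natatime );

    my $auth_chunks = natatime( $chunk_size, @auth_ids );
    while ( my @auth_chunk = $auth_chunks->() ) {
        try {
            $auth_indexer->update_index( \@auth_chunk );
        } catch {
            # Only this chunk is lost; the remaining chunks are still indexed.
            $logger->warn( sprintf "Update of elastic index failed with: %s", $_ );
        };
    }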
Comment 18 Jonathan Druart 2024-01-11 14:38:29 UTC
Created attachment 160853 [details] [review]
Bug 35086: Add chunk_size option to elasticsearch configuration

When performing batch operations we can send a large number of records for reindexing at once.
Currently this can create requests that are too large for Elasticsearch to process. We need
to break these requests into chunks.

This patch adds a chunk_size configuration to the elasticsearch stanza in koha-conf.xml.

If blank, we default to 5000.

To test:
0 - Have Koha using Elasticsearch
1 - Create and download a report of all barcodes:
    SELECT barcode FROM items
2 - Batch modify these items
3 - Note a single ES indexing job is created
4 - Create and download a report of all authority ids:
    SELECT auth_header.authid FROM auth_header
5 - Set up a MARC modification template, and batch modify all the authorities
6 - Again note a single ES background job is created
7 - Apply patch
8 - Repeat the modifications above - you still get a single job
9 - Edit koha-conf.xml and add <chunk_size>250</chunk_size> to the elasticsearch stanza
10 - Repeat modifications - you now get several background ES jobs
11 - prove -v t/db_dependent/Koha/SearchEngine/Elasticsearch/Indexer.t

Signed-off-by: David Nind <david@davidnind.com>

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Comment 19 Jonathan Druart 2024-01-11 14:38:32 UTC
Created attachment 160854 [details] [review]
Bug 35086: Also split chunks when indexing from background job

The ES background indexer is designed to combine background jobs when it starts, based on the 'batch_size' option.

While this is helpful for combining individual updates, it can be problematic when there are several large batch modifications, or when the worker has stopped and is restarted.

This patch uses the same logic as in the indexer to split the chunks that are sent directly for indexing.

To test:
1 - Follow test plan on previous patch
2 - Confirm items are correctly indexed and jobs marked

Signed-off-by: David Nind <david@davidnind.com>

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Comment 20 Jonathan Druart 2024-01-11 14:38:35 UTC
Created attachment 160855 [details] [review]
Bug 35086: Tidy tests

Signed-off-by: David Nind <david@davidnind.com>

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Comment 21 Jonathan Druart 2024-01-11 14:38:38 UTC
Created attachment 160856 [details] [review]
Bug 35086: (follow-up) Use 5000 as example in conf file

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Comment 22 Katrin Fischer 2024-01-16 11:09:09 UTC
Pushed for 24.05!

Well done everyone, thank you!
Comment 23 David Nind 2024-01-16 20:56:26 UTC
Tweaked the release notes text - feel free to improve!
Comment 24 Fridolin Somers 2024-01-17 09:23:22 UTC
Pushed to 23.11.x for 23.11.02
Comment 25 Lucas Gass (lukeg) 2024-02-02 16:26:33 UTC
Backported to 23.05.x for upcoming 23.05.09