Bug 38408 - Add parallel exporting of MARC records to Zebra rebuild/reindex
Summary: Add parallel exporting of MARC records to Zebra rebuild/reindex
Status: Needs Signoff
Alias: None
Product: Koha
Classification: Unclassified
Component: Command-line Utilities
Version: Main
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: Marcel de Rooy
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks: 38427
Reported: 2024-11-08 11:01 UTC by Marcel de Rooy
Modified: 2024-11-13 12:29 UTC
CC List: 2 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 38408: Add parallel exporting in rebuild_zebra.pl (6.85 KB, patch)
2024-11-12 14:41 UTC, Marcel de Rooy

Description Marcel de Rooy 2024-11-08 11:01:46 UTC
We can speed up reindexing Zebra when we use parallel exporting, creating multiple export files in the same directory. This directory is passed later to zebraidx.
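
A minimal sketch of the idea (illustration only; export_slice is a hypothetical helper, the real implementation is in the attached patch):

    use Modern::Perl;

    # Hypothetical helper: in the real script this is the MARC export.
    sub export_slice {
        my ( $dir, $seq ) = @_;
        open my $fh, '>', "$dir/exported_records.$seq" or die $!;
        # ... write this child's share of the records ...
        close $fh;
    }

    my $dir   = '/tmp/rebuild';
    my $forks = 3;
    mkdir $dir;
    for my $seq ( 1 .. $forks ) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ( $pid == 0 ) {    # child: write its own numbered export file
            export_slice( $dir, $seq );
            exit 0;
        }
    }
    1 while waitpid( -1, 0 ) > 0;    # parent: wait for all children
    # The directory as a whole is then passed to zebraidx.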
Comment 1 Marcel de Rooy 2024-11-12 14:41:38 UTC
Created attachment 174418
Bug 38408: Add parallel exporting in rebuild_zebra.pl

The first part of the Zebra rebuild is the export; this patch makes
that part faster. The second part, running zebraidx, is unchanged.

A new command-line parameter -forks is added to the rebuild_zebra.pl
script. A subroutine export_marc_records is added between index_records
and export_marc_records_from_sth. The latter routine gets a new
parameter: the sequence number of the export file.
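
In outline (simplified; the parameters and the actual row distribution
differ in the patch):

    # export_marc_records hands each child a sequence number; the child
    # passes it on to export_marc_records_from_sth, which uses it to
    # name its own export file so that the files do not clash.
    sub export_marc_records {
        my ( $record_type, $sth, $directory, $forks ) = @_;
        for my $seq ( 1 .. ( $forks || 1 ) ) {
            my $pid = $forks ? fork() : 0;
            die "fork failed: $!" unless defined $pid;
            next if $pid;    # parent: spawn the next child
            export_marc_records_from_sth( $record_type, $sth, $directory, $seq );
            exit 0 if $forks;    # child exits; without forks we stay in the parent
        }
        1 while waitpid( -1, 0 ) > 0;    # reap children (no-op without forks)
    }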

NOTE: This report does not touch koha-rebuild-zebra yet! That will be
done in a follow-up.

Test plan:
Note that the number of forks/records below can be adjusted
according to your server and database setup.

[1] Reindex a subset of 100 records without forks:
    su [YOUR_KOHA_USER]
    misc/migration_tools/rebuild_zebra.pl -a -b -r -d /tmp/rebuild01 -k --length 100
    Check if /tmp/rebuild01/biblio contains one export file for auth/bib.
    Verify that max. 100 auth and bib were indexed (check Auth search, Cataloguing)
[2] Reindex an additional subset of 100 recs with forks (remove -r, add -forks):
    su [YOUR_KOHA_USER]
    misc/migration_tools/rebuild_zebra.pl -a -b -d /tmp/rebuild02 -k --length 100 --offset 100 -forks 3
    Check if /tmp/rebuild02/biblio contains 3 export files for auth/bib.
    Verify that max. 200 auth and bib were indexed (check Auth search, Cataloguing)
[3] Run a full reindex with forks:
    su [YOUR_KOHA_USER]
    misc/migration_tools/rebuild_zebra.pl -a -b -d /tmp/rebuild03 -k -forks 3
    Check both searches again.
[4] Bonus: To get a feel for the speed improvement, reindex a larger production
    db with and without -forks (use commands like the ones above). You may add
    -I to skip indexing in order to better compare the two exports.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Reindexed a prod db in 96 mins instead of 150 mins (3 forks, 4 cores). The main
gain is in the biblio export: the complete export took 35 mins, zebraidx 61 mins.
Comment 2 Marcel de Rooy 2024-11-12 14:47:22 UTC
(In reply to Marcel de Rooy from comment #1)

> NOTE: This report does not touch koha-rebuild-zebra yet! This will be
> done on a follow-up.

Planned on bug 38427
Comment 3 Marcel de Rooy 2024-11-12 15:01:41 UTC
Note for QA: A few smaller improvements are reported on bug 38427 as well.
Comment 4 David Cook 2024-11-12 23:30:16 UTC
This makes a lot of sense conceptually. I can't believe we didn't do this years ago.

I've got a few questions and comments though:

1. Is there a reason you moved around a bunch of the top "use" statements? It makes it harder to see what's actually changed at the top.

2. There's a 'my $chunk_size = 100000;' at the top of the script which appears to be unused? 

3. It wouldn't hurt to add some more code comments to make it easier to read/understand what's happening. Not sure that I follow the math when just reading the code. I would've expected chunk_size to stay fixed and $num_records_exported to be the number of records actually exported. Not sure this all adds up... but don't have time to test right now.
Comment 5 Marcel de Rooy 2024-11-13 12:29:20 UTC
(In reply to David Cook from comment #4)

> 1. Is there a reason you moved around a bunch of the top "use"
> statements? It makes it harder to see what's actually changed at the top.

It was kind of disorganized, and perltidy was complaining too, so I rearranged a bit more than strictly needed. Nothing special.

> 2. There's a 'my $chunk_size = 100000;' at the top of the script which
> appears to be unused? 
True. See the follow-up bug. It is just a leftover from something that was obsoleted during development.

> 3. It wouldn't hurt to add some more code comments to make it easier to
> read/understand what's happening. Not sure that I follow the math when just
> reading the code. I would've expected chunk_size to stay fixed and
> $num_records_exported to be the number of records actually exported. Not
> sure this all adds up... but don't have time to test right now.

I tried to put some extra comments in the main loop. The tricky part is that the code inside the loop normally runs in the child, but it runs in the parent if you don't fork.
I am using chunk_size to control the loop, so it is adjusted before the last run. I added a TODO on the follow-up bug about passing data from the child back to the parent.
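
In sketch form, the control flow looks something like this (variable and helper names here are hypothetical; the real loop is in rebuild_zebra.pl):

    my $remaining = $num_records;    # total records to export
    my $seq       = 0;
    while ( $remaining > 0 ) {
        $seq++;
        # Adjust the chunk before the last run so we do not overshoot.
        my $chunk = $remaining < $chunk_size ? $remaining : $chunk_size;
        my $pid = $use_forks ? fork() : 0;
        die "fork failed: $!" unless defined $pid;
        if ( $pid == 0 ) {    # the child (or the parent itself when not forking)
            export_chunk( $seq, $chunk );    # hypothetical export helper
            exit 0 if $use_forks;            # only a real child exits here
        }
        $remaining -= $chunk;
    }
    1 while waitpid( -1, 0 ) > 0;    # reap any children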