Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.
Summary: rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching
Version: Main
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: Olli-Antti Kivilahti
QA Contact: Marcel de Rooy
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-02-03 11:51 UTC by Olli-Antti Kivilahti
Modified: 2018-12-03 20:04 UTC (History)
5 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML. (3.40 KB, patch)
2015-02-03 12:01 UTC, Olli-Antti Kivilahti
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML. (3.45 KB, patch)
2017-10-15 09:36 UTC, Katrin Fischer
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML. (3.55 KB, patch)
2018-01-05 07:29 UTC, Marcel de Rooy

Description Olli-Antti Kivilahti 2015-02-03 11:51:59 UTC
When looking for a bad MARC Record with rebuild_zebra_sliced.sh, it is useful to skip the full MARCXML export from Koha and reuse the already exported files for Zebra indexing.

This patch adds a new parameter:
    -x | --exclude-export Do not export Biblios from Koha, but use the existing
                          export-dir

This depends on the existing parameter:
     -d | --export-dir     Where rebuild_zebra.pl will export data
                           Default: $EXPORTDIR
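
A minimal sketch of the idea in shell, assuming a getopt-style option loop; the variable names below are illustrative and not necessarily the identifiers used in the actual patch:

    #!/bin/sh
    # Sketch only: EXCLUDEEXPORT/EXPORTDIR are illustrative names, not
    # necessarily what the real rebuild_zebra_sliced.sh patch uses.
    EXPORTDIR=/tmp/zebra-export
    EXCLUDEEXPORT=no                     # default: export Biblios as before

    while [ $# -gt 0 ]; do
        case "$1" in
            -x|--exclude-export) EXCLUDEEXPORT=yes; shift ;;
            -d|--export-dir)     EXPORTDIR="$2"; shift 2 ;;
            *) shift ;;                  # other options handled by the real script
        esac
    done

    if [ "$EXCLUDEEXPORT" = "no" ]; then
        echo "Exporting MARCXML from Koha into $EXPORTDIR (the ~2h step)"
        # the existing rebuild_zebra.pl export call goes here, unchanged
    fi
    echo "Splitting $EXPORTDIR into chunks and indexing each with zebraidx"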

 !---------!
! TEST PLAN !
 !---------!

1. Run "./rebuild_zebra_sliced.sh --length 1000" to export 1000 MARC Records and slice them into one big 1000-Record chunk.
2. Realize that you get a hypothetical "stack smashing detected" error crashing your indexing at some Record you don't know of and can't identify from the indexing log.
3. Start looking for the bad Record by running:
"./rebuild_zebra_sliced.sh --exclude-export --chunk-size 10"
This skips the Biblios export from Koha, which takes ~2h, and goes straight to splitting your exported Biblios into chunks of 10 and indexing them. You know which chunk fails, so it is much easier to find the issue there (combined usage example below).
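
Putting the two runs together (the directory and record counts below are only example values, not anything the patch requires):

    # 1) Export once (slow) and index everything as one big slice
    ./rebuild_zebra_sliced.sh --length 1000 --export-dir /tmp/zebra-export

    # 2) Reuse that export, re-slicing into chunks of 10 to localise the bad Record
    ./rebuild_zebra_sliced.sh --exclude-export --export-dir /tmp/zebra-export --chunk-size 10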
Comment 1 Olli-Antti Kivilahti 2015-02-03 12:01:45 UTC
Created attachment 35658 [details] [review]
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.

When looking for a bad MARC Record with rebuild_zebra_sliced.sh, it is useful to skip the full MARCXML export from Koha and reuse the already exported files for Zebra indexing.

This patch adds a new parameter:
    -x | --exclude-export Do not export Biblios from Koha, but use the existing
                          export-dir

This depends on the existing parameter:
     -d | --export-dir     Where rebuild_zebra.pl will export data
                           Default: $EXPORTDIR

 !---------!
! TEST PLAN !
 !---------!

1. Run "./rebuild_zebra_sliced.sh --length 1000" to export 1000 MARC Records and slice them into one big 1000-Record chunk.
2. Realize that you get a hypothetical "stack smashing detected" error crashing your indexing at some Record you don't know of and can't identify from the indexing log.
3. Start looking for the bad Record by running:
"./rebuild_zebra_sliced.sh --exclude-export --chunk-size 10"
This skips the Biblios export from Koha, which takes ~2h, and goes straight to splitting your exported Biblios into chunks of 10 and indexing them. You know which chunk fails, so it is much easier to find the issue there.
Comment 2 Katrin Fischer 2017-10-15 09:36:56 UTC
Created attachment 68159 [details] [review]
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.

When looking for a bad MARC Record with rebuild_zebra_sliced.sh, it is useful
to skip the full MARCXML export from Koha and reuse the already exported files
for Zebra indexing.

This patch adds a new parameter:
    -x | --exclude-export Do not export Biblios from Koha, but use the existing
                          export-dir

This depends on the existing parameter:
     -d | --export-dir     Where rebuild_zebra.pl will export data
                           Default: $EXPORTDIR

 !---------!
! TEST PLAN !
 !---------!

1. Run
     "./rebuild_zebra_sliced.sh --length 1000"
   to export 1000 MARC Records
   and slice them into one big 1000-Record chunk.
2. Realize that you get a hypothetical "stack smashing detected" error crashing
   your indexing at some Record you don't know of and can't identify from the
   indexing log.
3. Start looking for the bad Record by running:
     "./rebuild_zebra_sliced.sh --exclude-export --chunk-size 10"
   This skips the Biblios export from Koha, which takes ~2h, and goes straight
   to splitting your exported Biblios into chunks of 10 and indexing them. You
   know which chunk fails, so it is much easier to find the issue there.

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Comment 3 Marcel de Rooy 2018-01-05 07:29:28 UTC
Created attachment 70280 [details] [review]
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.

When looking for a bad MARC Record with rebuild_zebra_sliced.sh, it is useful
to skip the full MARCXML export from Koha and reuse the already exported files
for Zebra indexing.

This patch adds a new parameter:
    -x | --exclude-export Do not export Biblios from Koha, but use the existing
                          export-dir

This depends on the existing parameter:
     -d | --export-dir     Where rebuild_zebra.pl will export data
                           Default: $EXPORTDIR

 !---------!
! TEST PLAN !
 !---------!

1. Run
     "./rebuild_zebra_sliced.sh --length 1000"
   to export 1000 MARC Records
   and slice them into one big 1000-Record chunk.
2. Realize that you get a hypothetical "stack smashing detected" error crashing
   your indexing at some Record you don't know of and can't identify from the
   indexing log.
3. Start looking for the bad Record by running:
     "./rebuild_zebra_sliced.sh --exclude-export --chunk-size 10"
   This skips the Biblios export from Koha, which takes ~2h, and goes straight
   to splitting your exported Biblios into chunks of 10 and indexing them. You
   know which chunk fails, so it is much easier to find the issue there.

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 4 Jonathan Druart 2018-01-09 20:27:37 UTC
Pushed to master for 18.05, thanks to everybody involved!
Comment 5 Nick Clemens 2018-01-16 12:25:09 UTC
Enhancement, skipping for 17.11.x.
Awesome work everybody!