| Summary: | rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML. | | |
|---|---|---|---|
| Product: | Koha | Reporter: | Olli-Antti Kivilahti <olli-antti.kivilahti> |
| Component: | Searching | Assignee: | Olli-Antti Kivilahti <olli-antti.kivilahti> |
| Status: | CLOSED FIXED | QA Contact: | Marcel de Rooy <m.de.rooy> |
| Severity: | enhancement | | |
| Priority: | P5 - low | CC: | gitbot, jonathan.druart, m.de.rooy, nick, olli-antti.kivilahti |
| Version: | Main | | |
| Hardware: | All | | |
| OS: | All | | |
| See Also: | http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=13659 | | |
| Change sponsored?: | --- | Patch complexity: | Small patch |
| Documentation contact: | | Documentation submission: | |
| Text to go in the release notes: | | Version(s) released in: | |
| Circulation function: | | | |
Attachments:

- Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.
- Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.
- Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.
Description
Olli-Antti Kivilahti
2015-02-03 11:51:59 UTC
Created attachment 35658 [details] [review]
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.

When looking for a bad MARC record using rebuild_zebra_sliced.sh, it is useful to skip the complete MARCXML export from Koha and reuse the already exported files for Zebra indexing.

This patch adds a new parameter:

-x | --exclude-export    Do not export biblios from Koha, but use the existing export-dir

which depends on:

-d | --export-dir    Where rebuild_zebra.pl will export data. Default: $EXPORTDIR

!---------!
! TEST PLAN !
!---------!
1. Run "./rebuild_zebra_sliced.sh --length 1000" to export 1000 MARC records and slice them into one big 1000-record chunk.
2. Observe that indexing crashes with a "stack smashing detected" error at some record you don't know and cannot identify from the indexing log.
3. Start looking for the bad record by running "./rebuild_zebra_sliced.sh --exclude-export --chunk-size 10". This skips the biblio export from Koha, which takes ~2 h, and goes straight to splitting your already exported biblios into chunks of 10 and indexing them. You know which chunk fails, so it is much easier to find the issue there.

Created attachment 68159 [details] [review]
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>

Created attachment 70280 [details] [review]
Bug 13660 - rebuild_zebra_sliced.sh - Exclude export phase and use existing exported MARCXML.

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Pushed to master for 18.05, thanks to everybody involved!

Enhancement, skipping for 17.11.x.

Awesome work everybody!
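For readers unfamiliar with the wrapper script, here is a minimal sketch of how an --exclude-export switch can gate the slow export phase. This is not the actual Koha patch: the helper functions export_from_koha and index_chunks are hypothetical stand-ins for the real export and slice-and-index steps, while the flag names (-x/--exclude-export, -d/--export-dir, --chunk-size) mirror those described above.

```sh
#!/bin/sh
# Sketch only, not the real rebuild_zebra_sliced.sh.
# export_from_koha and index_chunks are placeholders for the real steps.

EXPORTDIR=/tmp/zebra-export
CHUNKSIZE=10000
EXCLUDEEXPORT=no

export_from_koha() {
    # Placeholder for the slow phase (~2 h on a large catalogue):
    # dump all biblios from Koha as MARCXML into the directory given as $1.
    echo "exporting MARCXML to $1"
}

index_chunks() {
    # Placeholder for the fast phase: split the exported records in $1 into
    # chunks of $2 records and index each chunk separately, so that a failing
    # chunk narrows down the bad record.
    echo "indexing $1 in chunks of $2"
}

while [ $# -gt 0 ]; do
    case "$1" in
        -x|--exclude-export) EXCLUDEEXPORT=yes ;;
        -d|--export-dir)     EXPORTDIR="$2"; shift ;;
        -c|--chunk-size)     CHUNKSIZE="$2"; shift ;;
    esac
    shift
done

if [ "$EXCLUDEEXPORT" = no ]; then
    export_from_koha "$EXPORTDIR"
else
    echo "Skipping export, reusing MARCXML already in $EXPORTDIR"
fi

index_chunks "$EXPORTDIR" "$CHUNKSIZE"
```

With a script along these lines, the hunt in the test plan becomes one expensive run that produces the export, followed by cheap reruns with --exclude-export and a smaller --chunk-size until the failing chunk, and with it the bad record, is isolated.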