Word search with multi-part facets works properly only with Zebra ICU tokenization. This patch adds a new question to the Koha command-line installer:

Zebra has two methods to perform record tokenization and character normalization: CHR and ICU. ICU is recommended for catalogs containing non-Latin characters. (chr, icu) [chr]

How to test:
- perl ./Makefile.PL (a sample run is sketched below)
- Try each possible value for the new parameter
- Take a look at the zebradb/etc/default.idx file. Depending on the parameter you get this line:
  icuchain words-icu.xml
or this one:
  charmap word-phrase-utf.chr

(copied from bug 3216)
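For illustration, an interactive run answering the new question might look roughly like the following. Only the question text comes from this patch; the surrounding prompts, the grep check, and the exact location of the generated default.idx are illustrative and may vary with the chosen install layout:

    $ perl ./Makefile.PL
    ...
    Zebra has two methods to perform record tokenization and character
    normalization: CHR and ICU. ICU is recommended for catalogs containing
    non-Latin characters. (chr, icu) [chr] icu
    ...
    $ grep -E 'icuchain|charmap' zebradb/etc/default.idx
    icuchain words-icu.xml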
Created attachment 8150 [details] [review]
Bug 7698: Add CHR/ICU Zebra tokenization choice to installation

Word search with multi-part facets works properly only with Zebra ICU tokenization. This patch adds a new question to the Koha command-line installer:

Zebra has two methods to perform record tokenization and character normalization: CHR and ICU. ICU is recommended for catalogs containing non-Latin characters. (chr, icu) [chr]

How to test:
- perl ./Makefile.PL
- Try each possible value for the new parameter
- Take a look at the zebradb/etc/default.idx file. Depending on the parameter you get this line:
  icuchain words-icu.xml
or this one:
  charmap word-phrase-utf.chr

Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>

(Note: This patch was previously associated with bug 3216; I moved it to a separate bug because including ICU is a good idea independent of the fix for the particular issue described in bug 3216.)
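For context, the choice surfaces in the word-index definition of default.idx. A sketch of the two variants is shown below; the attribute lines are abridged and the actual file contains additional directives:

    # With ZEBRA_TOKENIZER = chr (the default):
    index w
        completeness 0
        position 1
        charmap word-phrase-utf.chr

    # With ZEBRA_TOKENIZER = icu:
    index w
        completeness 0
        position 1
        icuchain words-icu.xml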
Adds the ZEBRA_TOKENIZER option to the Makefile and passes it along properly on make. The new ICU chain XML file is the standard one most of us have been using for the last few years, only with a blank locale attribute in the root element (better for l10n than any default value); a sketch of such a chain follows below. Glad to have this as an official option now. Marking as Passed QA.
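For illustration only, here is a minimal ICU chain definition modeled on the example in the yaz-icu documentation, with the blank locale attribute described above. It is a sketch of the general shape of such a file, not a copy of Koha's actual words-icu.xml, whose rule set may differ:

    <icu_chain locale="">
      <!-- strip control characters, then split the input into word tokens -->
      <transform rule="[:Control:] Any-Remove"/>
      <tokenize rule="w"/>
      <!-- drop whitespace/punctuation and lowercase what remains -->
      <transform rule="[[:WhiteSpace:][:Punctuation:]] Remove"/>
      <display/>
      <casemap rule="l"/>
    </icu_chain>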
This is now available on Master.