Summary: | Add transliteration of Ž in ICU chains | ||
---|---|---|---|
Product: | Koha | Reporter: | Nick Clemens (kidclamp) <nick> |
Component: | Z39.50 / SRU / OpenSearch Servers | Assignee: | Nick Clemens (kidclamp) <nick> |
Status: | CLOSED WORKSFORME | QA Contact: | Testopia <testopia> |
Severity: | normal | ||
Priority: | P5 - low | CC: | black23, dcook, m.de.rooy |
Version: | Main | ||
Hardware: | All | ||
OS: | All | ||
See Also: | https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=35621 | ||
Change sponsored?: | --- | Patch complexity: | --- |
Documentation contact: | | Documentation submission: |
Text to go in the release notes: | | Version(s) released in: |
Circulation function: | | |
Attachments: | Bug 26390: Add transliteration for Z with caron in ICU chains |
Description
Nick Clemens (kidclamp)
2020-09-04 13:51:15 UTC
Created attachment 109669 [details] [review]

Bug 26390: Add transliteration for Z with caron in ICU chains

https://en.wikipedia.org/wiki/Caron

From RT 52831. Under ICU chains most patrons cannot search for Slavoj Žižek.

To test:
1 - Add a record with Slavoj Žižek as author
2 - Enable ICU chains: https://wiki.koha-community.org/wiki/ICU_chains_configuration
3 - Ensure Koha is using Zebra
4 - Restart all the things and reindex
5 - Try to search for 'Zizek'
6 - Not found
7 - Apply patch
8 - Restart all the things and reindex
9 - Try to search for 'Zizek'
10 - It works!

---

Hi Nick, can we add some other Czech and Slavic letters into ICU too?

```xml
<transliterate rule="{ č > c "/>
<transliterate rule="{ Č > c "/>
<transliterate rule="{ ď > d "/>
<transliterate rule="{ Ď > d "/>
<transliterate rule="{ ť > t "/>
<transliterate rule="{ Ť > t "/>
<transliterate rule="{ ř > r "/>
<transliterate rule="{ Ř > r "/>
<transliterate rule="{ š > s "/>
<transliterate rule="{ Š > s "/>
<transliterate rule="{ ú > u "/>
<transliterate rule="{ Ú > u "/>
<transliterate rule="{ ů > u "/>
<transliterate rule="{ ě > e "/>
<transliterate rule="{ Ě > e "/>
<transliterate rule="{ é > e "/>
<transliterate rule="{ É > e "/>
<transliterate rule="{ á > a "/>
<transliterate rule="{ Á > a "/>
<transliterate rule="{ í > i "/>
<transliterate rule="{ ý > y "/>
```

I'm ready to test :-)

---

Sorry, I forgot some ... all here:

```xml
<transliterate rule="{ č > c "/>
<transliterate rule="{ Č > c "/>
<transliterate rule="{ ď > d "/>
<transliterate rule="{ Ď > d "/>
<transliterate rule="{ ť > t "/>
<transliterate rule="{ Ť > t "/>
<transliterate rule="{ ř > r "/>
<transliterate rule="{ Ř > r "/>
<transliterate rule="{ š > s "/>
<transliterate rule="{ Š > s "/>
<transliterate rule="{ ú > u "/>
<transliterate rule="{ Ú > u "/>
<transliterate rule="{ ů > u "/>
<transliterate rule="{ ě > e "/>
<transliterate rule="{ Ě > e "/>
<transliterate rule="{ ň > n "/>
<transliterate rule="{ Ň > n "/>
<transliterate rule="{ é > e "/>
<transliterate rule="{ É > e "/>
<transliterate rule="{ á > a "/>
<transliterate rule="{ Á > a "/>
<transliterate rule="{ í > i "/>
<transliterate rule="{ ý > y "/>
<transliterate rule="{ ó > o "/>
<transliterate rule="{ Ó > o "/>
```

I'm ready to test :-)

---

Katrin Fischer (comment #4):

I wonder if adding the rules is the best way of achieving this. You can add a general rule for using the 'base letter'. We have been doing this, I think. Found a hint about the rule here:

https://wiki.koha-community.org/wiki/ICU_do_not_undiacritic

---

(In reply to Katrin Fischer from comment #4)
> I wonder if adding the rules is the best way of achieving this. You can add
> a general rule for using the 'base letter'. We have been doing this, I think.
> Found a hint about the rule here:
>
> https://wiki.koha-community.org/wiki/ICU_do_not_undiacritic

Also see the documentation here:
http://userguide.icu-project.org/transforms/general

And our sample files using it:
https://wiki.koha-community.org/wiki/ICU_Chains_Library

This makes it unnecessary to add a transliteration rule for every character/diacritic combination.

---

Katrin Fischer (comment #6):

It means the change here should not be necessary... Nick, can you please double check?

---

David Cook (comment #7):

(In reply to Katrin Fischer from comment #6)
> It means the change here should not be necessary... Nick, can you please
> double check?

Although wouldn't that NFD change make Žižek into something like... ˇZiˇzek?
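What NFD does on its own can be checked in a couple of lines of Python using the standard library's unicodedata module. This is only an illustration of the Unicode normalization step, not code from the patch or the Zebra configuration:

```python
import unicodedata

# NFD decomposes precomposed characters into base letter + combining marks.
decomposed = unicodedata.normalize("NFD", "Žižek")
print([f"U+{ord(c):04X}" for c in decomposed])
# ['U+005A', 'U+030C', 'U+0069', 'U+007A', 'U+030C', 'U+0065', 'U+006B']
# i.e. Z + combining caron (U+030C), i, z + combining caron, e, k.
# So yes: NFD alone only decomposes; the caron marks are still present.
```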
Katrin Fischer (comment #8):

(In reply to David Cook from comment #7)
> (In reply to Katrin Fischer from comment #6)
> > It means the change here should not be necessary... Nick, can you please
> > double check?
>
> Although wouldn't that NFD change make Žižek into something like...
> ˇZiˇzek?

You have to look at the full example in the links I posted. 3 lines:

```xml
<transform rule="NFD"/>
<transform rule="[:Nonspacing Mark:] Remove"/>
<transform rule="NFC"/>
```

So yes, but then it uses that form to remove the diacritics:
https://www.compart.com/en/unicode/category/Mn

---

Nick Clemens (kidclamp):

You are correct, Katrin - it looks like there was confusion about whether a site was using ICU when we wrote these patches. Testing on master, everything works correctly under ICU without this patch.

---

David Cook:

(In reply to Katrin Fischer from comment #8)
> You have to look at the full example in the links I posted. 3 lines:
>
> <transform rule="NFD"/>
> <transform rule="[:Nonspacing Mark:] Remove"/>
> <transform rule="NFC"/>
>
> So yes, but then it uses that form to remove the diacritics:
> https://www.compart.com/en/unicode/category/Mn

Ahhh right. I should've been more thorough.

I was thinking recently about how Zebra ICU has been seen as inferior to Elasticsearch ICU on the listserv. Looking at ftp://ftp.software.ibm.com/software/globalization/icu/3.6/icu-3_6-userguide.pdf, it looks like ICU actually originated in Java (ICU4J) and was later ported to C++ and C (ICU4C). According to https://wiki.koha-community.org/wiki/Record_Indexing_and_Retrieval_Options_for_Koha, the Zebra use of libicu is inferior to Lucene ICU, which uses ICU4J. There's no evidence given for the claim, but it seems believable (especially considering the global prominence of Solr and Elasticsearch).

Looking at https://lucene.apache.org/core/4_4_0/analyzers-icu/index.html, it seems that some writing systems can use dictionary-based segmentation algorithms (for scripts like Thai, Chinese, etc.). That explains a lot. I know a bit of Chinese, and I've wondered how indexers could handle such a context-dependent language...
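For reference, the effect of the three-transform chain quoted in comment #8 (NFD, remove nonspacing marks, NFC) can be reproduced with Python's standard library. This is a minimal sketch of the same pipeline for checking expected search behavior, not code from Koha, Zebra, or ICU:

```python
import unicodedata

def fold_diacritics(text: str) -> str:
    """Mimic the ICU chain: NFD, drop nonspacing marks (category Mn), NFC."""
    decomposed = unicodedata.normalize("NFD", text)  # Ž -> Z + U+030C
    stripped = "".join(
        ch for ch in decomposed if unicodedata.category(ch) != "Mn"
    )  # remove the combining marks
    return unicodedata.normalize("NFC", stripped)  # recompose what remains

print(fold_diacritics("Slavoj Žižek"))  # -> Slavoj Zizek
```

With the Remove step in place, both the indexed form and the query fold to "Zizek", which is why searching works under ICU without the per-character transliterate rules.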