When indexing records in Zebra using ICU (in etc/default.idx), non-Latin characters are transliterated to Latin ones. This works well for Arabic, for example, but not for Polish special characters. See http://en.wikipedia.org/wiki/Polish_alphabet#Computer_encoding
Can you expand on this bug report? "Not working well" is rather vague.
(In reply to Fridolyn SOMERS from comment #0)
> When indexing records in Zebra using ICU (in etc/default.idx), non-Latin
> characters are transliterated to Latin ones.

Are you sure about this? It sounds to me like you are describing the behavior of Zebra with charmap, not Zebra with ICU.
The delivered words-icu.xml does a decompose (NFD), then removes non-spacing marks, then a recompose (NFC). That may strip diacritics from characters which should have a separate collation value. This probably needs some experimentation with the transform rules to confirm.
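To illustrate what that chain does (a rough sketch only; the authoritative behaviour is ICU's own normalizer inside Zebra, and the icu_like_fold() helper below is purely hypothetical), Python's standard unicodedata module can replicate the three steps:

import unicodedata

def icu_like_fold(text):
    # mirror the chain described above: NFD, drop non-spacing marks, NFC
    decomposed = unicodedata.normalize("NFD", text)           # decompose (NFD)
    stripped = "".join(ch for ch in decomposed
                       if unicodedata.category(ch) != "Mn")   # remove non-spacing marks
    return unicodedata.normalize("NFC", stripped)             # recompose (NFC)

print(icu_like_fold("Łódź"))  # "Łodz": ó and ź lose their accents, Ł passes through unchanged

The accented letters are folded because NFD splits them into a base letter plus a combining mark, while Ł has no such decomposition and comes out unchanged.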
Hi,

Some years ago I found out that the Polish "l stroke" was not considered a variant of "l". Is that what you are thinking of? (Fridolyn, it is in French on my blog http://www.vingtseptpointsept.fr/2011/09/22/une-fonction-javascript-pour-eliminer-accents-et-diacritiques/ : "I was thinking of using this file, but unfortunately it is not precise enough. Some characters that could legitimately be considered composed are not marked as such. This is notably the case for the Polish barred l.") Maybe I was wrong; I have not worked on that since 2011...

Mathieu
(In reply to Galen Charlton from comment #1)
> Can you expand on this bug report? "Not working well" is rather vague.

I mean it does not work at all: the Polish "l stroke" is not considered the same character as "l".

(In reply to Jared Camins-Esakov from comment #2)
> (In reply to Fridolyn SOMERS from comment #0)
> > When indexing records in Zebra using ICU (in etc/default.idx), non-Latin
> > characters are transliterated to Latin ones.
>
> Are you sure about this? It sounds to me like you are describing the
> behavior of Zebra with charmap, not Zebra with ICU.

I do not know the exact behavior of Zebra+ICU, but I mean that non-Latin characters can be searched using Latin characters and vice versa. That is the point of using ICU.

(In reply to mathieu saby from comment #4)
> Hi
> Some years ago I found out that the Polish "l stroke" was not considered a
> variant of "l".

Even if that is true, it would be easier to use "l" to search Polish records, no? "l stroke" exists on neither QWERTY nor AZERTY keyboards.
I don't speak Polish, but I agree with you, Fridolyn. Maybe for Poles it is a very different letter from "l", but if we have a Polish book with this stroked l in our collection, we need a way to search for it with a simple "l" ;-)

Mathieu
Created attachment 22599 [details] [review]
Bug 10939 - ICU does not transliterate polish special characters

Polish characters added to ICU config
Some of the added transliteration rules are default ICU behaviour, i.e. stripping accents. I'm a bit wary of shipping this as the default, as a Polish-language library would, I assume, have to undo it. And why Polish especially? What about Norwegian with its extra characters? I wonder if what is needed for release is rather some documentation on 'how to alter normalization of non-ASCII characters'. It might call for a bit of research.
(In reply to Colin Campbell from comment #8)
> I wonder if what is needed for release is rather some documentation on
> 'how to alter normalization of non-ASCII characters'. It might call for a
> bit of research.

OK for documentation; there is indeed such a page on the wiki for Arabic.
I am setting this to In Discussion while I create the wiki page.
Done: http://wiki.koha-community.org/wiki/Correcting_Search_of_Polish_records
I think you only need these two lines in etc/words-icu.xml:

<transliterate rule="{ ł > l "/>
<transliterate rule="{ Ł > l "/>

as all other Polish diacritics should behave correctly out of the box. The Polish "l striked" is indeed somehow special (while it shouldn't be; the general consensus is that "ł" is NOT a very different letter from "l", etc.). The problem is that, a long time ago, a person from the Unicode Consortium involved in UCA (Unicode Collation Algorithm) development made the rather questionable decision not to treat "ł, Ł" as variants of "l, L". It became a major PITA from then on; for example, it also affects the MySQL utf8_general_ci and utf8_unicode_ci collations, to name just a few side effects. BTW, I believe this finally got corrected in subsequent UCA revisions (in v5.2.0+, AFAIR), so this workaround may no longer be necessary in Koha installations with more recent libicu versions (?).
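As a quick cross-check of the "out of the box" claim (an informal verification with Python's unicodedata module, not part of any patch here), listing the canonical decompositions of the Polish letters shows that only ł/Ł have none, so they are the only ones the mark-stripping chain cannot fold:

import unicodedata

for ch in "ąćęłńóśźżĄĆĘŁŃÓŚŹŻ":
    # decomposition() returns '' when a character has no canonical decomposition
    print(ch, unicodedata.decomposition(ch) or "(none)")
# Every letter except ł/Ł splits into a base letter plus a combining mark,
# so the NFD + mark-removal chain already folds it; ł/Ł need explicit rules.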
(In reply to Jacek Ablewicz from comment #12)
> I think you only need these two lines in etc/words-icu.xml:
>
> <transliterate rule="{ ł > l "/>

We are currently trying to improve this file in our library, and we use this solution (along with other tricks).

M. Saby
Fridolyn, we can have Polish records in a French or American catalog. So why not improve the default etc/words-icu.xml for everybody? (The same remark applies to German umlauts, French ligatures, etc.)

Mathieu
(In reply to mathieu saby from comment #14)
> So why not improve the default etc/words-icu.xml for everybody?

See comment 8. It may be discussed again.
For information, ICU already provides a built-in transliterator for this kind of conversion. With just one line in etc/words-icu.xml (and etc/phrases-icu.xml):

<transform rule="[:Latin:] Latin-ASCII;"/>

you can convert Latin Unicode to ASCII. Normally this works for all Polish diacritics. You can test ICU rules here: http://demo.icu-project.org/icu-bin/translit or with yaz-icu.
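If the PyICU bindings happen to be installed, the same built-in transliterator can also be tried from Python (a local sanity check only; yaz-icu or the demo page above remain the reference way to test what Zebra actually does):

from icu import Transliterator  # PyICU bindings, assumed available

# Latin-ASCII is the built-in transliterator referenced above (ICU 4.6+)
latin_to_ascii = Transliterator.createInstance("Latin-ASCII")
print(latin_to_ascii.transliterate("Zażółć gęślą jaźń"))  # expected: "Zazolc gesla jazn"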
Ouch, it looks like the problems with "ł, Ł" are not related to UCA < v5.2 quirks as I previously thought; not at all. The real problem here is that:

NFD; [:Nonspacing Mark:] Remove; NFC

apparently does not work for 'struck-through' Latin characters (the same problem occurs with ø, Ø, ƶ, Ƶ etc., not just with Ł, ł). I guess that's because NFD does not decompose 'ł' into 'l + /' like it does for accented characters and so on. For ICU indexing to behave (more or less) like CHR (zebradb/etc/word-phrase-utf.chr) did, we can:

1) add something like this:

<transliterate rule="{ Ø > o "/>
<transliterate rule="{ ø > o "/>
<transliterate rule="{ Đ > d "/>
<transliterate rule="{ đ > d "/>
<transliterate rule="{ Ħ > h "/>
<transliterate rule="{ ħ > h "/>
<transliterate rule="{ Ł > l "/>
<transliterate rule="{ ł > l "/>
<transliterate rule="{ Ŧ > t "/>
<transliterate rule="{ ŧ > t "/>
<transliterate rule="{ Ƶ > z "/>
<transliterate rule="{ ƶ > z "/>
<transliterate rule="{ Ǥ > g "/>
<transliterate rule="{ ǥ > g "/>
<transliterate rule="{ Ⱥ > a "/>
<transliterate rule="{ ⱥ > a "/>
<transliterate rule="{ Ȼ > c "/>
<transliterate rule="{ ȼ > c "/>
<transliterate rule="{ Ɇ > e "/>
<transliterate rule="{ ɇ > e "/>
<transliterate rule="{ Ɍ > r "/>
<transliterate rule="{ ɍ > r "/>
<transliterate rule="{ Ɏ > y "/>
<transliterate rule="{ ɏ > y "/>
<transliterate rule="{ Ɨ > i "/>
<transliterate rule="{ ɨ > i "/>
<transliterate rule="{ ʉ > u "/>
<transliterate rule="{ Ʉ > u "/>
<transliterate rule="{ Ӕ > ae "/>
<transliterate rule="{ ӕ > ae "/>
<transliterate rule="{ Œ > oe "/>
<transliterate rule="{ œ > oe "/>

to words-icu.xml, or:

2) as Julien suggested, use the built-in Latin-ASCII ICU transliterator, i.e. add:

<transform rule="[:Latin:] Latin-ASCII"/>

Solution 2) looks much better IMO (it's more general-purpose), but it may not be ideal for everybody, as the Latin-ASCII transliterator is not implemented in pre-4.6 ICU versions.
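As an informal check of why the explicit rules in 1) are needed at all (the character list below is simply copied from those rules), Python's unicodedata shows that none of these characters has a canonical decomposition, so the NFD; [:Nonspacing Mark:] Remove; NFC chain leaves them untouched:

import unicodedata

stroked = "ØøĐđĦħŁłŦŧƵƶǤǥȺⱥȻȼɆɇɌɍɎɏƗɨʉɄӔӕŒœ"
for ch in stroked:
    # '' means NFD leaves the character as-is, so removing non-spacing
    # marks afterwards cannot fold it to a plain ASCII letter
    print(ch, repr(unicodedata.decomposition(ch)))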