Description
Chris Cormack
2010-05-21 00:53:39 UTC
Any word on the status of this bug? Is anyone working on resolving it?

In fact, I meant left truncation, not right truncation. That said, an ICU configuration could be a way to cope with that.

ICU chains are not required to solve this issue; the word-phrase-utf.chr configuration file contains a mapping of diacritic characters to 'plain' characters, including è. To get this config file to kick in, however, the MARC records must be Unicode normalized to NFC. Since 3.4, any time AddBiblio or ModBiblio is called, the subroutine SetUTF8Flag is called, which normalizes each subfield in each field of the MARC record to NFC (or NFD, if the param is given). This should not be an issue on a new installation of 3.4, but running misc/maintenance/touch_all_biblios.pl would ensure the records are all properly normalized. For anyone upgrading to 3.4, since the upgrade requires the use of remove_items_from_biblioitems.pl, which makes use of ModBiblio, upgrading should also resolve this issue for most diacritics. I say most, because I've recently discovered that several diacritic marks, including letters with a stroke, are not included in the mapping. This means that searches for the Polish name "Lutoslawski" will not return any hits if the name is spelled with diacritics: "Lutosławski". Patch forthcoming.

ICU is an elegant solution for indexing all types of characters. A charmap is good, but if you need to add every kind of character, you will end up with a very big charmap file and maintenance will be even harder.

Until Koha's ICU configuration is fixed to work with fuzzy searching and right truncation, the word-phrase-utf.chr configuration is the best way to get this kind of searching implemented. I agree that listing out each character to map is somewhat tedious, but I don't think it's a blocker, and new characters aren't being created faster than we can keep up with.

Well, the many languages you may want to search and index (Hebrew, Arabic and more) also tend to be a blocker for some libraries; using charmaps for them is not only tedious but also leads to search problems. Moreover, losing fuzzy search and right truncation, though a tough drawback, is still better than searches that return nothing at all.

How about this:
1. Commit a solid ICU config file to Koha.
2. Add a question to the installer asking whether to use charmaps or ICU for indexing, and make default.idx respect that choice.
This will let people choose whatever configuration works best for their local language, and much like what happened with Zebra vs. NoZebra, the "better" config file will win out over time.

Why not? In my opinion, and from what I could test and all the feedback I had, the ICU configuration file I posted on the list was quite robust, so having a debconf option to choose the indexing scheme could be fine. Mind you, the icu_chain should not be used for the 'p' indexing, though. default.idx was also posted on the list.

I've used the icu.xml file, and aside from the fuzzy search/right truncation issues, it seems to work quite well when implemented. I'd say let's start with that file.

Created attachment 4701 [details] [review]
Add charmap mappings for characters with stroke

This patch adds character-with-stroke mappings to word-phrase-utf.chr, as well as correcting a note on the usage of the 'equivalent' command.
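For readers unfamiliar with the charmap, the fix here is just additional one-line 'map' directives in word-phrase-utf.chr, each folding a character onto its plain equivalent, in the same form as the existing entries. As an illustration only (the authoritative list is in the attachment), the stroke characters from the Lutosławski example would be covered by lines like:

map ł l
map Ł l

Once the records are NFC-normalized and Zebra is reindexed, a search for "Lutoslawski" without the stroke then also matches the form with it.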
Created attachment 4966 [details] [review]
[SIGNED-OFF] Bug 2629: add char-with-stroke support to word-phrase-utf.chr

This patch adds diacritic search support for the following characters with stroke: a, b, c, d, e, h, l, r, t, u, y, z. It handles both uppercase and lowercase mappings. It also corrects a note in word-phrase-utf.chr: the 'equivalent' command is NOT for searching, but rather for sorting. See the Zebra manual, near the bottom of http://www.indexdata.com/zebra/doc/character-map-files.html.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Some notes:
- Copied the file to my koha-dev folder and reindexed
- Tried some simple searches like Süden and Suden, schon and schön, with success
- Added some of the new characters to a record and tried searching with and without diacritics (ɨƗʉⱥɆɌ and iiuaer), with success
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>

The patch just adds new character support and an updated note; Katrin's tests verify no regression. Marking as Passed QA.

Pushed. I have only tested cursorily, please test more thoroughly.

Created attachment 5599 [details] [review]
Proposed Patch
Adds Ů and ů support in Zebra searches

I think Ian's patch from 9/26 still needs to be tested and signed off, so I am changing the patch status to "needs signoff" again.

Created attachment 6171 [details] [review]
Bug 2629: Add diacritic support for Ů (U with ring)

Adds Ů and ů support to word-phrase-utf.chr. These characters are used in Czech, for example for the author Martinů, Bohuslav (1890-1959).
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>

Testing out the newly implemented QA Contact feature, and assigning to Marcel to QA (since this latest patch is mine).

Passed QA. Works. Simple patch.

Patch pushed.

There is currently no support for ċ in Koha, and other 'dot-above' characters may also have issues.

I think this will be a never-ending ticket... until you try ICU. We (in France) have switched to ICU, and word-phrase-utf.chr has become unnecessary for us.

Paul,
The patch for ICU that is floating around works pretty well in this regard, but breaks fuzzy searching for keywords. I've implemented it for several of our libraries, but many were not pleased and we needed to revert. So, if the ICU patch has been updated to fix this issue, we could include it as an alternate config. Otherwise, I think it's best to stick with what works (word-phrase-utf.chr), even if we have to keep adding new character maps.

I know libraries that are not too happy with fuzzy search, and ICU is needed if you want to search other scripts like Hebrew. I think implementing this as an option in Koha would be great, even if we can't fix the fuzzy search.

katrin++

In our setups, fuzzy searching is deactivated too. I suspect fuzzy matching is good in English and poor in other languages. Some fuzzy suggestions were more understandable if you tried to say them in English (and imagined what Zebra could have made of that).

Created attachment 6460 [details] [review]
word-phrase-utf.chr

/etc/zebradb/etc/word-phrase-utf.chr: added Cc minuscule and Cc circumflex; Kk acute accent:
map Ċ c
map ċ c
map Ĉ c
map ĉ c
map Ḱ k
map ḱ k

(In reply to comment #25)
> Created attachment 6460 [details] [review]
> word-phrase-utf.chr
>
> /etc/zebradb/etc/word-phrase-utf.chr

Albert, please send in your change as a git formatted patch. Thanks. After that, please set Importance to Patch-Sent and Patch Status to Needs Signoff.

Created attachment 6471 [details] [review]
Bug-2629-Diacritics-not-being-ignored-when-searching.patch

I cannot replicate the problem that the second patch is trying to fix.
As far as I can tell, all the accent characters it addresses already work properly. Please provide a more complete description and a sample record.

Hi Jared,

The original problem reported was for this title, Anam ċara, from Harrison Memorial Library (Carmel):

Anam ċara : (Record no. 62385)
000 - LEADER
  fixed length control field: 00991nam a2200325Ia 4500
001 - CONTROL NUMBER
  control field: ocm40357390
100 1# - MAIN ENTRY--PERSONAL NAME
  Personal name: O'Donohue, John,
  Dates associated with a name: 1956-
245 10 - TITLE STATEMENT
  Title: Anam ċara :
  Remainder of title: a book of Celtic wisdom /
  Statement of responsibility, etc: John O'Donohue

The ċ was the only character reported as causing a problem, but I found a few extra characters that were missing and added them as well, to prevent future tickets.

Thanks,
Albert

Albert,
Could you please attach the record to the bug? Carmel has the download link removed from their OPAC.
Regards,
Jared

Created attachment 6945 [details]
example of diacritic causing a failed search
Created attachment 6946 [details]
marc for record 62385
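As a side note on why the ċ in this record needs an explicit charmap entry: below is a small stand-alone Perl illustration, not part of any patch on this bug; the title string is simply copied from the attached record. It shows how the character behaves under the Unicode normalization that SetUTF8Flag applies.

---snip---
#!/usr/bin/perl
# Stand-alone illustration only: show how the ċ in "Anam ċara" behaves
# under Unicode normalization (NFC vs. NFD).
use strict;
use warnings;
use utf8;
use Unicode::Normalize qw(NFC NFD);
binmode STDOUT, ':encoding(UTF-8)';

my $title = 'Anam ċara';    # title subfield from the attached record 62385
for my $char ( grep { ord($_) > 127 } split //, $title ) {
    printf "U+%04X  already NFC: %s  NFD: %s\n",
        ord($char),
        ( $char eq NFC($char) ? 'yes' : 'no' ),
        join ' ', map { sprintf 'U+%04X', ord($_) } split //, NFD($char);
}
# ċ (U+010B) is already NFC and decomposes (NFD) to c (U+0063) plus
# combining dot above (U+0307); word-phrase-utf.chr has to fold the
# precomposed form to plain 'c' explicitly, which is what the patch
# below adds.
---snip---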
Created attachment 6996 [details] [review]
[Signed Off] Bug 2629 - Diacritics not being ignored when searching

/etc/zebradb/etc/word-phrase-utf.chr: added Cc minuscule and Cc circumflex; added Kk acute accent.
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
- imported the MARC record from the bug with the offending diacritic
- reindexed
- searched for the title
- result found! Yay!
P.S. Thank you very much for the record to test this patch with. It made testing a lot easier, I appreciate it.

Bug 3216 implements ICU as an option during install. With ICU, searching for diacritics can be solved in a general way. This patch just adds support for additional characters in the charmap. Marking Passed QA.

Comment on attachment 6171 [details] [review]
Bug 2629: Add diacritic support for Ů (U with ring)

Patch obsoleted, already applied.

Patch pushed, as it works, BUT Katrin is right: you should investigate ICU, which solves all diacritics (well, there are some remaining problems, and that's why we should get rid of Zebra, but that's another matter). Using ICU is better than trying to have everything in word-phrase-utf.chr!

Included in the 3.6 branch prior to 3.6.4.

Created attachment 12385 [details] [review]
Bug 2629 - Diacritics not being ignored when searching - Map ů to u

As ICU is being pursued, I am only noting an alternative idea in case that route does not come about. We might be able to write a Perl script that uses Unicode::UCD or similar routines to scan the searchable fields in the database, convert them to NFD to detect diacritics (decomposed form), or use another algorithm, and compile a table accordingly. From that we could generate the /etc/zebradb/etc/word-phrase-utf.chr map (a rough sketch of this idea is at the end of the thread). It might suffice as a stop-gap measure: run in a large library, it could cover 99.9% of the cases, catering to the context of each library. So instead of a patch per character, the site would periodically run the script to handle such cases. But ICU is where we are heading for now.

Applying the patch gives the following error:

fatal: cannot convert from UTF-8utf-8 to UTF-8

It also seems that the patch maps this letter twice? See lines 1, 3 and 5 of the snippet:

---snip---
-map ů u
-map Ů u
+map ů u
+map Ů u
+map ů u
---snip---

The error "fatal: cannot convert from UTF-8utf-8 to UTF-8" usually happens because, when the patch was made, the author's .gitconfig probably had

[format]
    headers = "Content-Type: text/plain; charset=UTF-8"

and a newer git was used somewhere in the process, so the header got added to the patch header a second time. One can delete the duplicate line in the header of the patch to apply it cleanly. If you are using a recent version of git, this has been fixed and the setting is no longer needed, so you can comment that line out in your .gitconfig if it is there:

[format]
    # headers = "Content-Type: text/plain; charset=UTF-8"

For older patches you will still come across this duplicate line in the patch header.

What exactly is this latest patch supposed to do? It is unclear from inspection.

A note about right truncation and ICU: the existing issue was fixed in version 2.0.53 of Zebra (2012/12/03). Source: http://www.indexdata.com/zebra/doc/NEWS. I don't know how to take advantage of that fix in Koha.
M. Saby

To my understanding this may be valid if you are not using ICU, but it would probably be better to address specific issues in a separate bug.
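Returning to the idea above of generating word-phrase-utf.chr entries from the data rather than patching one character at a time, here is a rough sketch only, not an actual Koha maintenance script. It reads exported UTF-8 text (titles, authors, etc.) on standard input and prints candidate 'map' lines; the output would still need review and merging with the existing file.

---snip---
#!/usr/bin/perl
# Sketch of the charmap-generation idea discussed above; not a Koha script.
# Reads UTF-8 text on STDIN and prints candidate 'map' lines for
# word-phrase-utf.chr by stripping combining marks from the NFD form.
use strict;
use warnings;
use Unicode::Normalize qw(NFD);

binmode STDIN,  ':encoding(UTF-8)';
binmode STDOUT, ':encoding(UTF-8)';

my %seen;
while ( my $line = <STDIN> ) {
    for my $char ( grep { ord($_) > 127 } split //, $line ) {
        next if $seen{$char}++;
        ( my $base = NFD($char) ) =~ s/\p{Mn}//g;    # drop combining marks
        next if $base eq '' || $base eq $char;       # nothing to fold to
        print "map $char ", lc($base), "\n";         # lowercase, like the existing entries
    }
}
# Note: stroke letters such as ł have no canonical decomposition, so this
# approach would not catch them; they still need manual entries like the
# ones added by the patches on this bug.
---snip---

This only covers characters that decompose into a base letter plus combining marks; whether it is worth doing at all depends on whether the ICU route lands first.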