When you search with Elasticsearch, keywords/names beginning with diacritics, such as "Šostakovitš, Dmitri", end up last in the facets. The diacritics should be ignored when sorting the facets. Upper case letters also get alphabetized separately, before the lower case letters (facets show A-Z first, then a-z). This is an issue especially with keywords (names, cities, countries). A fix for this would be to uppercase the values and drop the diacritics when comparing values during sorting.

P.S. There's an older ticket with a patch attempting to solve the same thing, made when Koha still used Text::Unaccent for Zebra: bug 26614.
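For reference, here is a tiny standalone Perl demo (not Koha code) of why this happens: plain string comparison sorts by code point, so A-Z sorts before a-z, and a leading "Š" (U+0160) lands after both.

    #!/usr/bin/perl
    use utf8;
    use Modern::Perl;
    binmode( STDOUT, ':encoding(UTF-8)' );

    # Code-point order: "Z" (U+005A) < "a" (U+0061) < "Š" (U+0160)
    say for sort ( 'Šostakovitš, Dmitri', 'adams, douglas', 'Zola, Émile' );
    # Prints: Zola, Émile / adams, douglas / Šostakovitš, Dmitri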
Created attachment 167110 [details] [review]
Bug 36947 - Diacritics and lower case letters not taken into account when facets are alphabetized

When using facets with Elasticsearch, the facets are alphabetized with upper and lower case letters and diacritics being treated separately in the alphabetizing process. This patch fixes that problem.

To test:
1) You will need records with diacritics in authors/keywords, like "Šostakovitš, Dmitri".
2) Reindex Elasticsearch if needed.
3) Observe the facets: the author/keyword "Šostakovitš, Dmitri" ends up at the end of the facets instead of adjacent to entries starting with the letter S, and lower case keywords are separated from upper case keywords.
4) Apply the patch.
5) Observe that the facets' entries are alphabetized correctly.
6) Sign off.
Created attachment 167125 [details] [review]
Bug 36947 - Diacritics and lower case letters not taken into account when facets are alphabetized

When using facets with Elasticsearch, the facets are alphabetized with upper and lower case letters and diacritics being treated separately in the alphabetizing process. This patch fixes that problem.

To test:
1) You will need records with diacritics in authors/keywords, like "Šostakovitš, Dmitri".
2) Reindex Elasticsearch if needed, and set the syspref FacetOrder to "alphabetically".
3) Observe the facets: the author/keyword "Šostakovitš, Dmitri" ends up at the end of the facets instead of adjacent to entries starting with the letter S, and lower case keywords are separated from upper case keywords.
4) Apply the patch.
5) Observe that the facets' entries are alphabetized correctly.
6) Sign off.
Oh, this sounds interesting.

I agree about Unicode::Normalize being the way to go for the diacritics...

As for the normalization form... a quick Google suggests that NFKD is most likely the correct normalization form to use, although it might only help in terms of the initial sorting based on the first letter. For instance, I think ÅB should become A + a combining ring above (U+030A) + B. I'm actually running some test code using Unicode::Normalize, and I can't get NFKD() to work at all... going to try out some more things...

I'm not sure what I think about the case insensitivity. I could imagine different libraries possibly having different views on that one. That said, I think a default of forcing everything to uppercase or lowercase would be OK. I wonder if there are any consequences of NFKD being applied after the uc() rather than the other way around, but I can't think of any off the top of my head.
(In reply to David Cook from comment #3)
> I agree about Unicode::Normalize being the way to go for the diacritics...
>
> As for the normalization form... a quick Google suggests that NFKD is most
> likely the correct normalization form to use, although it might only help in
> terms of the initial sorting based on the first letter. For instance, I
> think ÅB should become A + a combining ring above (U+030A) + B.

Indeed, I think that my assessment is (mostly) correct. Consider the following code and its output:

    #!/usr/bin/perl
    use utf8;
    use Modern::Perl;
    use Unicode::Normalize;

    binmode(STDOUT, ":encoding(UTF-8)");

    my @stuff = (
        "ab",
        "Aa",
        "a_",
        "bone",
        "Bad",
        "Åa",
    );

    my @sorted = sort { NFKD(uc($a)) cmp NFKD(uc($b)) } @stuff;

    use Data::Dumper;
    warn Dumper(\@sorted);

    foreach my $thing (@sorted){
        print NFKD($thing) . "\n";
    }

perl testunicode.pl | xxd

    $VAR1 = [
              'Aa',
              'ab',
              'a_',
              "\x{c5}a",
              'Bad',
              'bone'
            ];

    00000000: 4161 0a61 620a 615f 0a41 cc8a 610a 4261  Aa.ab.a_.A..a.Ba
    00000010: 640a 626f 6e65 0a                        d.bone.

You can see that it hasn't properly sorted the array. NFKD broke the Å character (bytes C3 85) into bytes 41 CC 8A, which is A + "COMBINING RING ABOVE" (U+030A), whose UTF-8 encoded bytes are CC 8A. While it has sorted the As together thanks to the uppercasing, it then sorts by the remaining marks: we can see that the "combining ring above" sorts after the underscore punctuation. If we're going to do a proper comparison, I think that we're going to need to completely remove the diacritics.
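For illustration, one way to do that, sketched here only as an idea and not as the wording of any attached patch, is to keep the NFKD + uc() approach from the snippet above and additionally strip the combining marks left over by the decomposition:

    #!/usr/bin/perl
    use utf8;
    use Modern::Perl;
    use Unicode::Normalize qw( NFKD );
    binmode( STDOUT, ':encoding(UTF-8)' );

    # Uppercase, decompose, then drop the combining marks entirely, so that
    # "Åa" compares as "AA" rather than as "A" + U+030A + "A".
    sub sort_key {
        my ($string) = @_;
        my $key = NFKD( uc $string );
        $key =~ s/\p{NonspacingMark}//g;
        return $key;
    }

    my @stuff  = ( 'ab', 'Aa', 'a_', 'bone', 'Bad', 'Åa' );
    my @sorted = sort { sort_key($a) cmp sort_key($b) } @stuff;
    say "@sorted";    # Aa Åa ab a_ Bad bone (ties keep input order; Å now groups with the As)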
I'm going to send a quick follow-up... let me know if it works for you. I should be having lunch right now, but I find this interesting.
Created attachment 167141 [details] [review]
Bug 36947: Remove diacritics from decomposed strings in ES search

This patch removes the diacritics from strings decomposed using Unicode::Normalize's NFKD function. It's necessary to provide equivalent sorting of accented and unaccented characters.

Test plan:
0. Apply patch
1. koha-plack --restart kohadev
2. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
3. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author3
4. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
5. Confirm the facet names are sorted in ascending order regardless of accent or case e.g.
   Aa author
   Åa author2
   aa author3
   Farley, David
   Humble, Jez
Created attachment 167142 [details] [review]
Bug 36947 - Diacritics and lower case letters not taken into account when facets are alphabetized

Signed-off-by: David Cook <dcook@prosentient.com.au>
Created attachment 167143 [details] [review]
Bug 36947: Remove diacritics from decomposed strings in ES search

This patch removes the diacritics from strings decomposed using Unicode::Normalize's NFKD function. It's necessary to provide equivalent sorting of accented and unaccented characters.

Test plan:
0. Apply patch
1. koha-plack --restart kohadev
2. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
3. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author3
4. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
5. Confirm the facet names are sorted in ascending order regardless of accent or case e.g.
   Aa author
   Åa author2
   aa author3
   Farley, David
   Humble, Jez
All right... fixed the sign off. Lari, if you approve of my patch, sign it off, and we can move this to Signed Off. That was fun!
So I'm a bit confused... On Mattermost, @paxed is saying that for Finnish "it should also sort åöä to the end of the alphabet, as per the system preference". But Lari's patch wouldn't do that, and Lari works at Koha-Suomi with @paxed?

None of my clients have raised this as an issue yet, so I'm not super invested.
Created attachment 167145 [details] [review]
Bug 36947 - Diacritics and lower case letters not taken into account when facets are alphabetized

Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Lari Strand <lari.strand@koha-suomi.fi>
Created attachment 167146 [details] [review]
Bug 36947: Remove diacritics from decomposed strings in ES search

This patch removes the diacritics from strings decomposed using Unicode::Normalize's NFKD function. It's necessary to provide equivalent sorting of accented and unaccented characters.

Test plan:
0. Apply patch
1. koha-plack --restart kohadev
2. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
3. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author3
4. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
5. Confirm the facet names are sorted in ascending order regardless of accent or case e.g.
   Aa author
   Åa author2
   aa author3
   Farley, David
   Humble, Jez

Signed-off-by: Lari Strand <lari.strand@koha-suomi.fi>
Another idea... I wonder if we could tell Perl the locale and have it do the heavy lifting. Maybe the user's locale or (as Katrin suggested) the library's locale...
Shall we open a new Bugzilla ticket for the Scandinavian letters issue?
We also figured out that the expectations might differ from language to language (or locale), with some of us using letters that present the same way (thinking of Ä).
(In reply to Katrin Fischer from comment #15)
> We also figured out that the expectations might differ from language to
> language (or locale), with some of us using letters that present the same
> way (thinking of Ä).

Yeah, I think that the current patches might not be the way to go. Initially the problem looked like a nail, so a hammer seemed like the right solution, but now I think it's more complicated.

Looking at the Perl documentation for "sort", it says the following:

"When use locale (but not use locale ':not_characters') is in effect, sort LIST sorts LIST according to the current collation locale. See perllocale."

"perllocale" is an interesting read. When using "use locale":

"The comparison operators (lt, le, cmp, ge, and gt) use LC_COLLATE. sort() is also affected if used without an explicit comparison function, because it uses cmp by default."

"The default behavior is restored with the no locale pragma, or upon reaching the end of the block enclosing use locale. Note that use locale calls may be nested, and that what is in effect within an inner scope will revert to the outer scope's rules at the end of the inner scope."
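For illustration only, here is a minimal standalone sketch of that block scoping. It assumes a suitable collation locale (e.g. LC_ALL=fi_FI.UTF-8) is installed and exported in the environment; it is not one of the attached patches.

    #!/usr/bin/perl
    use utf8;
    use Modern::Perl;
    use POSIX qw( setlocale LC_ALL );
    binmode( STDOUT, ':encoding(UTF-8)' );

    setlocale( LC_ALL, '' );    # adopt whatever locale the environment provides

    my @names = ( 'Åa author', 'Aa author', 'étienne', 'Farley, David' );

    {
        use locale;             # cmp/sort honour LC_COLLATE inside this block only
        say 'locale sort:  ', join ' | ', sort @names;
    }

    # Outside the block the pragma no longer applies: back to code-point order
    say 'default sort: ', join ' | ', sort @names;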
I'm going to try this out in a minute...
(In reply to David Cook from comment #17)
> I'm going to try this out in a minute...

In lieu of the previous patches by myself and Lari, I've added "use locale" to sub _convert_facets() in Koha/SearchEngine/Elasticsearch/Search.pm.

In koha-testing-docker, it yields the same results in my testing. While LC_COLLATE isn't set in ktd, my understanding is that LC_CTYPE will be used in lieu of that variable, and my ktd is set to en_US.UTF-8.

Now to play with other collations...
To generate a locale and get Koha to use it:
1. vi /etc/locale.gen
2. Uncomment the locale you want to generate (e.g. fi_FI.UTF-8 UTF-8)
3. locale-gen
4. locale -a # this will show you what locales are available
5. Export the env vars:
   export LANG=fi_FI.UTF-8
6. koha-plack --restart kohadev # NOTE: You must restart. Reloading won't pull in the new env

I know the Perl docs said to use LC_COLLATE, but I could only get it to work with LANG...
And sure enough... When I just add "use locale" to sub _convert_facets in Koha/SearchEngine/Elasticsearch/Search.pm and make sure the environment variable LANG is set to the locale I want (i.e. Finnish)... then my facets are sorted as you'd hope in Finnish.

Actually... in terms of environment variables, it's looking like it was LC_ALL=fi_FI.UTF-8 that did the trick. There's still some experimenting to do here... But the order looks right for Finnish:

Aa author
aa author3
étienne
Farley, David
Humble, Jez
Martin, Robert C.
Åa author2

So... I think that's actually the way to go. I think that's going to provide a much better experience for users. For now, we work with the server's locale, but we could potentially use the user's locale. I'm not an expert on that one, but I notice my browser sends "Accept-Language" and includes en-GB and en-US. So there could be more interesting things done there.

Anyway, alternate patch incoming...
OK, so the issue was that LC_ALL was overriding everything, as one can see by running the "locale" command. If you unset LC_ALL and set LC_COLLATE, then you're all good.

But I've learned a lot about locales in the last few minutes. Setting an environment variable at the application level isn't going to work. The environment variable has to be set before the Perl interpreter starts up. My current place to test this is "/etc/default/koha-common". I set "export LC_ALL=fi_FI.UTF-8" at the bottom of that file and restart Koha, and then my locale works as expected.
Created attachment 167188 [details] [review]
Bug 36947: [Alternate] Do a locale-based sort for ES facet names

This change causes the locale system to be used when sorting ES facet names.

Test plan:
0. Apply the patch
1. vi "/etc/default/koha-common"
2. Add the following to the bottom of the file:
   export LC_ALL=fi_FI.UTF-8
3. koha-plack --restart koha-common
4. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author3, étienne
5. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
6. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
7. Confirm the facet names are sorted in ascending order following Finnish collation rules e.g.
   Aa author
   aa author3
   étienne
   Farley, David
   Humble, Jez
   Martin, Robert C.
   Åa author

NOTE: Any collation and language can be used. Finnish is just an example of a Latin-based script which has a different alphabetical ordering than just A-Z
Grabbing Assignee because apparently I've decided to take this one over 😅
We're happy with this solution. I can sign off.
The locale-based sorting is not working with this alternative commit. The "use locale" pragma is declared inside the sub. I added "use locale;" to the beginning of the file and now the sorting works when the Finnish locale is set.
+use Unicode::Normalize;

Don't we need to choose the method?

+use Unicode::Normalize qw( NFKD );
(In reply to Lari Strand from comment #25)
> The locale-based sorting is not working with this alternative commit. The
> "use locale" pragma is declared inside the sub. I added "use locale;" to the
> beginning of the file and now the sorting works when the Finnish locale is
> set.

It was definitely working for me, and "use locale" is designed to be used within blocks (like this sub/function). From "perllocale":

"The default behavior is restored with the no locale pragma, or upon reaching the end of the block enclosing use locale. Note that use locale calls may be nested, and that what is in effect within an inner scope will revert to the outer scope's rules at the end of the inner scope."

Were you testing in koha-testing-docker?
(In reply to Fridolin Somers from comment #26)
> +use Unicode::Normalize;
> Don't we need to choose the method?
> +use Unicode::Normalize qw( NFKD );

Unicode::Normalize exports NFC, NFD, NFKC, and NFKD by default. It would probably be better to import just NFKD, but it shouldn't matter too much.
Created attachment 167198 [details] [review]
Bug 36947: [Alternate 2] Do a locale-based sort for ES facet names

This change uses a configurable locale-based collator to sort the ES facet names.

Test plan:
0. Apply the patch
1. vi "/etc/default/koha-common"
2. Add the following to the bottom of the file:
   export LC_ALL=fi_FI.UTF-8
3. koha-plack --restart koha-common
4. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author3, étienne
5. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
6. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
7. Confirm the facet names are sorted in ascending order following Finnish collation rules e.g.
   Aa author
   aa author3
   étienne
   Farley, David
   Humble, Jez
   Martin, Robert C.
   Åa author

NOTE: Any collation and language can be used. Finnish is just an example of a Latin-based script which has a different alphabetical ordering than just A-Z
Of course, there's always this third option of using Unicode::Collate::Locale, which I've called "Alternate 2". I think that I like this option best, as it's the most configurable. You can just feed whatever locale you want into the object and then have it do the sort for you. In my patch, I have it fetch LC_COLLATE, but in the future we could base it off of C4::Languages::getlanguage($cgi). There are options, and I like having options. I suspect that in the future there will be other places where we want to sort using a locale as well, and I think Unicode::Collate::Locale will be the most portable for those situations.

Lari, I suspect that this will work better for you than the one using "use locale". Please give it a try and let me know what you think.
Note that I've tried this with a variety of system settings. Even if all the LC_*, LANG, and LANGUAGE environment variables are empty, the collator will still use a sensible default, so it's not like it will throw fatal errors due to bad/missing configuration.

Getting ready for work this morning, I was thinking how I still wasn't 100% satisfied with "use locale": it was good, but it locked the whole of Koha into one locale. Unicode::Collate::Locale gives us a lot of flexibility and future possibilities. It's also core Perl, which is great ^_^.
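For reference, a short standalone sketch of that fallback behaviour; getlocale() reports which collation tailoring was actually loaded, and Unicode::Collate::Locale ships with core Perl:

    use Modern::Perl;
    use Unicode::Collate::Locale;

    # Locales without their own tailoring quietly fall back to the default (CLDR root) one,
    # rather than throwing a fatal error.
    for my $loc ( 'fi', 'xx_NOT_A_REAL_LOCALE' ) {
        my $coll = Unicode::Collate::Locale->new( locale => $loc );
        say "$loc => ", $coll->getlocale;    # "fi => fi" and "xx_NOT_A_REAL_LOCALE => default"
    }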
This seems to achieve the same result as "use locale" (when LC_COLLATE is set to "fi_FI"):

    if ( C4::Context->preference('FacetOrder') eq 'Alphabetical' ) {
        my $coll = Unicode::Collate::Locale->new( locale => "fi_FI" );
        @{ $facet->{facets} } =
            sort { $coll->cmp( $a->{facet_label_value}, $b->{facet_label_value} ) } @{ $facet->{facets} };
    }
    push @facets, $facet if exists $facet->{facets};

Should the collator be configurable in Koha, or were you thinking of using the selected UI language for sorting somehow here?
(In reply to Lari Strand from comment #32)
> This seems to achieve the same result as "use locale" (when LC_COLLATE is
> set to "fi_FI"):
>
>     if ( C4::Context->preference('FacetOrder') eq 'Alphabetical' ) {
>         my $coll = Unicode::Collate::Locale->new( locale => "fi_FI" );
>         @{ $facet->{facets} } =
>             sort { $coll->cmp( $a->{facet_label_value}, $b->{facet_label_value} ) } @{ $facet->{facets} };
>     }
>     push @facets, $facet if exists $facet->{facets};

That's the idea :)

> Should the collator be configurable in Koha, or were you thinking of using
> the selected UI language for sorting somehow here?

My patch uses the system's LC_COLLATE for now, but in the future it could easily use the selected UI language, yeah.
Sorry, missed the patch :D
(In reply to Lari Strand from comment #34)
> Sorry, missed the patch :D

No worries! I hope you like the patch! Thanks for the interesting challenge.
Found a couple of bugs linked to this one. Collections get alphabetized by the authorized value code rather than the description. I will make another Bugzilla ticket for this unless I can find an existing one; I already have a fix for it. Also, languages get alphabetized based on their English descriptions instead of the selected intranet language. This might be more difficult to fix...
Added https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=36982.
(In reply to Lari Strand from comment #36)
> Also, languages get alphabetized based on their English descriptions instead
> of the selected intranet language. This might be more difficult to fix...

I think that relates to this bug 36947, or at least the solution is related. I think Unicode::Collate::Locale would work well for that.
For my part, I like "Bug 36947: [Alternate 2] Do a locale-based sort for ES facet names" best. What do others think? I'll post this on the listserv as well to solicit opinions.
I like alternative 2, "Do a locale-based sort for ES facet names", best as well. We actually implemented this in our production environments with

    my $coll = Unicode::Collate::Locale->new( locale => "fi_FI" );

and used it in the sorting phase, just to avoid making adjustments to our 9 production environments which run different consortiums' Koha installations in parallel. I was lazy :D
(In reply to Lari Strand from comment #40)
> I like alternative 2, "Do a locale-based sort for ES facet names", best as
> well. We actually implemented this in our production environments with

I'm so happy to hear that!

> my $coll = Unicode::Collate::Locale->new( locale => "fi_FI" );
>
> and used it in the sorting phase, just to avoid making adjustments to our 9
> production environments which run different consortiums' Koha installations
> in parallel. I was lazy :D

I misread your comment and thought that you meant that you wanted to use different locales for different Koha instances on the same server, but I see you wanted to use "fi_FI" for all 9 environments, so hard-coded it. I understand now!

But it does make me think how a person might want to use a different locale for a different instance. I'm thinking that we should add a "locale" configuration parameter in koha-conf.xml. If it's empty, then we default to LC_COLLATE; otherwise, we use the "locale" from koha-conf.xml. Or maybe it should be an environment variable that we can set in the Apache configuration, so that different OPACs can provide different values.

I suppose my ideas here are all extensions of this base idea though, and there's no reason not to proceed with "Alternate 2" as it currently stands...
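To make the instance-level idea concrete, here is a rough sketch of the lookup order being floated above. The facet_sort_locale entry is purely hypothetical (it does not exist in koha-conf.xml today), and the codeset-stripping line is just a precaution for raw LC_COLLATE values like "fi_FI.UTF-8":

    use Modern::Perl;
    use Unicode::Collate::Locale;
    use C4::Context;

    # Hypothetical fallback chain: instance config, then the server's LC_COLLATE,
    # then Unicode::Collate::Locale's own default tailoring.
    my $locale = C4::Context->config('facet_sort_locale')    # hypothetical koha-conf.xml entry
        || $ENV{LC_COLLATE}
        || 'default';
    $locale =~ s/\.[^.]+$//;    # drop a codeset suffix such as ".UTF-8" before handing it to the collator
    my $collator = Unicode::Collate::Locale->new( locale => $locale );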
I guess there could be cases where you want to override your env LC_COLLATE locale with one set in koha-conf.xml or Apache. I'm not familiar with how Koha uses localization in other ways besides selecting the intranet language based on your system locale (LC_LANG?). I've seen some code regarding monetary units, like [% fines | $Price with_symbol => 1 %], but I don't know where that can be configured. I don't think the env locale has anything to do with that, otherwise we would be playing with USD instead of EUR (€) at the moment (LC_* = en_US).

"Or maybe it should be an environment variable that we can set in the Apache configuration, so that different OPACs can provide different values."

I just want to see all the Koha-side localization configuration/overrides in the same place if possible.
(In reply to Lari Strand from comment #42)
> I guess there could be cases where you want to override your env LC_COLLATE
> locale with one set in koha-conf.xml or Apache. I'm not familiar with how
> Koha uses localization in other ways besides selecting the intranet language
> based on your system locale (LC_LANG?). I've seen some code regarding
> monetary units, like [% fines | $Price with_symbol => 1 %], but I don't know
> where that can be configured. I don't think the env locale has anything to
> do with that, otherwise we would be playing with USD instead of EUR (€) at
> the moment (LC_* = en_US).

I think that the money is based on acquisitions/syspref data, but I can't recall for certain.

> "Or maybe it should be an environment variable that we can set in the
> Apache configuration, so that different OPACs can provide different values."
>
> I just want to see all the Koha-side localization configuration/overrides in
> the same place if possible.

That does sound reasonable. More thinking to do here, I think.

I think that I'll obsolete the older patches and move this back to Needs Signoff. We can improve the locale handling as a follow-up.
Prices display according to the active currency and the CurrencyFormat system preference. The locale doesn't play any role here.
Created attachment 167880 [details] [review]
Bug 36947: [Alternate 2] Do a locale-based sort for ES facet names

This change uses a configurable locale-based collator to sort the ES facet names.

Test plan:
0. Apply the patch
1. vi "/etc/default/koha-common"
2. Add the following to the bottom of the file:
   export LC_ALL=fi_FI.UTF-8
3. koha-plack --restart koha-common
4. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author3, étienne
5. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
6. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
7. Confirm the facet names are sorted in ascending order following Finnish collation rules e.g.
   Aa author
   aa author3
   étienne
   Farley, David
   Humble, Jez
   Martin, Robert C.
   Åa author

NOTE: Any collation and language can be used. Finnish is just an example of a Latin-based script which has a different alphabetical ordering than just A-Z

Signed-off-by: Lari Strand <lari.strand@koha-suomi.fi>
Created attachment 168488 [details] [review]
Bug 36947: Do a locale-based sort for ES facet names

This change uses a configurable locale-based collator to sort the ES facet names.

Test plan:
0. Apply the patch
1. vi "/etc/default/koha-common"
2. Add the following to the bottom of the file:
   export LC_ALL=fi_FI.UTF-8
3. koha-plack --restart koha-common
4. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author3, étienne
5. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
6. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
7. Confirm the facet names are sorted in ascending order following Finnish collation rules e.g.
   Aa author
   aa author3
   étienne
   Farley, David
   Humble, Jez
   Martin, Robert C.
   Åa author

NOTE: Any collation and language can be used. Finnish is just an example of a Latin-based script which has a different alphabetical ordering than just A-Z

Signed-off-by: Lari Strand <lari.strand@koha-suomi.fi>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This is a great first pass... I'd love to see an option added to koha-conf.xml to allow setting this at the instance level.

More importantly though... if we can add a unit test here, I think we could PQA.
(In reply to Martin Renvoize from comment #47)
> This is a great first pass... I'd love to see an option added to
> koha-conf.xml to allow setting this at the instance level.
>
> More importantly though... if we can add a unit test here, I think we could
> PQA.

Sounds like a plan. I had thought about moving it out to a separate function and adding a unit test. It looks like we have C, en_US, fr_FR, and POSIX as locales on koha-testing-docker (based on the output of "locale -a"), so I think we could probably compare the output of en_US vs fr_FR.

I know we're all low on time, but I've got two deadlines coming up fast and not many work days for them. Going to put this on my list of things I'm actually definitely coming back to!
And as for the koha-conf.xml option... yeah cool. I think that makes sense for the first implementation. I'm so excited to explore more options in the future as well. I love languages and UTF-8 :D.
Would love to see this over the finish line ;)
*** Bug 26614 has been marked as a duplicate of this bug. ***
(In reply to Martin Renvoize from comment #50)
> Would love to see this over the finish line ;)

I'm hoping to look at this in August...
Staying back late on my Friday working on this one...

`locale -a` returns:

C
C.utf8
en_US.utf8
fr_FR.utf8
POSIX

However, when I try to use one of the locales in Unicode::Collate::Locale, it says it's using the "default" locale. So I tried re-generating the locales:

`sudo locale-gen`
Generating locales (this might take a while)...
  en_US.UTF-8... done
  fr_FR.UTF-8... done
Generation complete.

So far I can't get any differences between "en", "default" and "fr_FR". But I'll have a go again with Finnish and have another look when it's not so late...
(In reply to David Cook from comment #53)
> But I'll have a go again with Finnish and have another look when it's not so
> late...

I was going to leave this for another day, but I'm just too stubborn.

1. vi /etc/locale.gen
2. Uncomment the locale you want to generate (e.g. fi_FI.UTF-8 UTF-8)
3. locale-gen

Now when I give Unicode::Collate::Locale the "fi_FI" locale, it definitely sets it. That suggests to me that both English and French use the same collator. That's going to make unit testing this in any meaningful way pretty hard... but maybe that just means I need to raise a ticket with the koha-testing-docker folk to add fi_FI as a locale to ktd so that we can do this kind of testing... blah...
I'm hoping to revisit this soon. We can do a unit test to verify the overall functionality, but we'll need to be very careful unit testing other locales, because I don't think we can count on every system running the unit test suite to have locales that require a different collator from "default". We could perhaps do some conditional testing on that, so that ktd (which I assume Jenkins uses) can have an extra locale, but yeah... more to think about there. Anyway, I figure I'll do the basic version, and then we can chat...
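For discussion, a minimal sketch of what such a conditional test could look like, written directly against Unicode::Collate::Locale rather than against the (still hypothetical) _sort_facets() helper mentioned earlier:

    use Modern::Perl;
    use utf8;
    use Test::More;
    use Unicode::Collate::Locale;

    # With the root (default) tailoring, case and accents no longer split the list:
    # 'S' and 'Š' collate together at the primary level.
    my $root = Unicode::Collate::Locale->new( locale => 'en' );    # English uses the root tailoring
    is_deeply(
        [ $root->sort( 'Šostakovitš, Dmitri', 'smith, john', 'Solzhenitsyn, Aleksandr' ) ],
        [ 'smith, john', 'Solzhenitsyn, Aleksandr', 'Šostakovitš, Dmitri' ],
        'accented and unaccented names collate together'
    );

    # Locale-specific behaviour, skipped if the tailoring is not available on this system
    SKIP: {
        my $fi = Unicode::Collate::Locale->new( locale => 'fi' );
        skip 'Finnish tailoring not available', 1 if $fi->getlocale eq 'default';
        is_deeply(
            [ $fi->sort( 'Åa author', 'Bad author' ) ],
            [ 'Bad author', 'Åa author' ],
            'Finnish collation sorts Å after Z'
        );
    }

    done_testing();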
Created attachment 170489 [details] [review]
Bug 36947: Do a locale-based sort for ES facet names

This change uses a configurable locale-based collator to sort the ES facet names.

Test plan:
0. Apply the patch
1. vi /etc/locale.gen
2. Uncomment the locale you want to generate (e.g. fi_FI.UTF-8 UTF-8)
3. locale-gen
4. vi "/etc/default/koha-common"
5. Add the following to the bottom of the file:
   export LC_ALL=fi_FI.UTF-8
6. koha-plack --restart koha-common
7. Setup some test records with authors with accented and unaccented names, and different cases for the lead letter e.g. Aa author, Åa author2, aa author, étienne
8. Switch to using Elasticsearch and reindex
   koha-elasticsearch -b -v --rebuild kohadev
9. Do a test search e.g. http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
10. Confirm the facet names are sorted in ascending order following Finnish collation rules e.g.
    aa author
    Aa author
    étienne
    Farley, David
    Humble, Jez
    Martin, Robert C.
    Åa author

NOTE: Any collation and language can be used. Finnish is just an example of a Latin-based script which has a different alphabetical ordering than just A-Z
I just finished writing and polishing this... and I realize the LC_COLLATE thing is controversial, since it's a system-level thing and not an instance-level thing. A system preference or a koha-conf.xml config is probably the way to go. We'd look it up in Koha/SearchEngine/Elasticsearch/Search.pm and pass it into the _sort_facets() function using the "locale" parameter.

I was going to say that since it's a system-level thing, and something that shouldn't be changed by librarians, I thought koha-conf.xml would be better. But... if we used the system preference, then people could override it using VirtualHost directives in Apache, so that different front-ends use different locales. I'm really not too fussed... the main reason I did this patch was out of the goodness of my heart (or perhaps intellectual vanity, or perhaps both), so I'm happy to be led by others here on the configuration side of things.
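For the sake of discussion, a purely hypothetical sketch of the helper's shape. The _sort_facets() name and the "locale" parameter are the ones floated above, not committed code; the caller in Search.pm would resolve the locale from a syspref or koha-conf.xml and pass it in:

    use Modern::Perl;
    use Unicode::Collate::Locale;

    # Hypothetical helper: the caller resolves the locale and hands it in.
    sub _sort_facets {
        my ($args) = @_;
        my $facets = $args->{facets} // [];
        my $collator = Unicode::Collate::Locale->new( locale => $args->{locale} || 'default' );
        return [
            sort { $collator->cmp( $a->{facet_label_value}, $b->{facet_label_value} ) } @$facets
        ];
    }

    # e.g. my $sorted = _sort_facets({ facets => $facet->{facets}, locale => 'fi' });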