To make searching easier, at least in French, it would be nice to have the default configuration handle elisions properly (e.g. l'avion). Coming up with a proposed patch.
Created attachment 78889 [details] [review]
Bug 21357: Add elision filtering to default ES index config

To test:
1. Rebuild Elasticsearch index with the new config
2. Add a record with "l'avion" in the title
3. Verify that the record can be found with both "l'avion" and "avion"
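In outline, the patch adds an elision filter and puts it into the analyzer chains, roughly like this (a sketch of the idea rather than the verbatim patch; the attachment is authoritative):

index:
  analysis:
    filter:
      elision:
        type: elision
        articles: ['c', 'd', 'j', 'l', 'm', 'n', 'qu', 's', 't']
    analyzer:
      analyzer_standard:
        tokenizer: icu_tokenizer
        filter:
          - elision
          - icu_folding

With this in place "l'avion" is tokenized to "avion" at both index and query time, so both forms match.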
Created attachment 79375 [details] [review]
Bug 21357: Add elision filtering to default ES index config

To test:
1. Rebuild Elasticsearch index with the new config
2. Add a record with "l'avion" in the title
3. Verify that the record can be found with both "l'avion" and "avion"

Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Hi, bravo! But it's really specific to French. I'm setting this to In Discussion because I think we should build a French-specific configuration, based on:
https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-lang-analyzer.html#french-analyzer
This adds French stemming and stop words.

Since one can define a path for the configuration in koha-conf.xml, what about creating a lang-def folder in etc?
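For reference, the linked page shows how the built-in french analyzer can be rebuilt as a custom analyzer. Translated into the YAML style of Koha's index config, it would look roughly like this (a sketch based on the Elasticsearch documentation, not a tested Koha configuration; the optional keyword_marker filter is omitted):

index:
  analysis:
    filter:
      french_elision:
        type: elision
        articles_case: true
        articles: ['l', 'm', 't', 'qu', 'n', 's', 'j', 'd', 'c', 'jusqu', 'quoiqu', 'lorsqu', 'puisqu']
      french_stop:
        type: stop
        stopwords: _french_
      french_stemmer:
        type: stemmer
        language: light_french
    analyzer:
      analyzer_french:
        tokenizer: standard
        filter:
          - french_elision
          - lowercase
          - french_stop
          - french_stemmer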
I think a lot of libraries have French materials. Is this specific to French cataloguing or just to the data?
If it's decided this shouldn't be part of the default rules, we could perhaps still include these as a commented-out example, right?
(In reply to Ere Maijala from comment #5)
> If it's decided this shouldn't be part of the default rules, we could
> perhaps still include these as a commented-out example, right?

I'd prefer to propose an entire French-specific file. I attach the one I'm working on.
Created attachment 79564 [details]
French ES index config

We could add this in etc/elasticsearch/fr or etc/searchengine/fr?
(In reply to Fridolin SOMERS from comment #6)
> (In reply to Ere Maijala from comment #5)
> > If it's decided this shouldn't be part of the default rules, we could
> > perhaps still include these as a commented-out example, right?
>
> I'd prefer to propose an entire French-specific file. I attach the one I'm
> working on.

Are we sure those rules would not be helpful for other installations as well? I can understand if it's specific to UNIMARC, but otherwise it might be more helpful to develop good defaults together.

At the moment we have the problem with Zebra and the sort files: some language-specific sort files are really good and others have only minimal information. If you have a library with multiple languages (we do), one file that fits multiple would be better.
(In reply to Katrin Fischer from comment #8)
> (In reply to Fridolin SOMERS from comment #6)
> > (In reply to Ere Maijala from comment #5)
> > > If it's decided this shouldn't be part of the default rules, we could
> > > perhaps still include these as a commented-out example, right?
> >
> > I'd prefer to propose an entire French-specific file. I attach the one
> > I'm working on.
>
> Are we sure those rules would not be helpful for other installations as
> well? I can understand if it's specific to UNIMARC, but otherwise it might
> be more helpful to develop good defaults together.
>
> At the moment we have the problem with Zebra and the sort files: some
> language-specific sort files are really good and others have only minimal
> information. If you have a library with multiple languages (we do), one
> file that fits multiple would be better.

Stemming, for example, is very language-specific. Stop words even more so: "or", for example, means gold in French. But I was thinking that an installation could merge several language files (elisions, stemmers and stop words) to index, say, a French+German catalog.
I think ideally things should be as easy as possible. I think all of our libraries have materials in various languages, the Goethe Institute libraries being a prominent example. I don't want to run each of them with a different Elasticsearch configuration if possible.
(In reply to Katrin Fischer from comment #10)
> I think ideally things should be as easy as possible. I think all of our
> libraries have materials in various languages, the Goethe Institute
> libraries being a prominent example. I don't want to run each of them with
> a different Elasticsearch configuration if possible.

Me too, but I fear side effects. Take stop words: it's very useful to have them language-specific, but they are real words in other languages. Waiting for other opinions. Or if we want this bug to focus on elision, I'm OK with that; I will open another bug.
Serious question: Why would we want stop words?
(In reply to Katrin Fischer from comment #12)
> Serious question: Why would we want stop words?

To explain: to me, stop words seem to stem from a time when a search would die on something like "the". I think modern search engines are better than that; we have always been happy that you can find the band "The The" with Koha, and we haven't had stop words since moving to Zebra.

For stemming: I think stemming is a bit difficult, but could we not make it depend on the language of the interface? Ideally it should depend on the language you enter something in (if I use a German GUI and search a French book...), or could we stem for multiple languages at once?
Ay up friends,

We are a language library and we have more than 350 different languages and lots of writing systems. Our interface is only in French (the language of our country) or in English (science's new lingua franca). If we had installed other interface languages, we would have had to install all available languages, which even today seems overkill, while everyone seems OK with French and English. Therefore, having stemming based on the interface language doesn't look like a good option for us.

OK, maybe we're weird; nonetheless I think most university libraries have books in a lot of different languages, not just one or two.

In our context, stemming itself seems quite trying, and we aren't eager for it yet. So before we have a fully stemmed engine that is picky about which language is which, what Ere has done (thank you!) will certainly suit us. If it is not generic enough to be in Koha... well, we will hack it on our instance.
> OK, maybe we're weird; nonetheless I think most university libraries have
> books in a lot of different languages, not just one or two.

Exactly my point too: you are not weird! Not everyone can hack their installations. I just wanted to make sure that we don't start adding separate configurations when we would be better off with one standard file for now.
OK, I will propose the stemming configuration on a wiki page.

This patch is OK, but for me the "apostrophe" filter is not needed. I've checked with the analyze API on ES:

curl -X GET 'http://localhost:9200/koha_master_biblios/_analyze?pretty=yes&analyzer=analyser_phrase' -d "dans l'avion"

With the patch it returns: "token" : "dans lavion"
Without the patch it returns: "token" : "dans l'avion"

Checking with only "l'avion", both with and without the patch it returns "avion".
The apostrophe filter is for the other cases where an apostrophe is not part of an elision. It's not stripped in the char filter like other punctuation, so that the elision filter can do its job, but it needs to be handled afterwards. Try analysis for e.g. "Merriam-Webster's Collegiate Dictionary".

I agree that this bug should only be about elisions. Stop words are another thing completely, and there are strong reasons for not using a stop word list, as Katrin described.
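For clarity, the relevant part of the proposed config (as far as I can read it from the patch) defines the apostrophe filter as a pattern_replace and runs it after elision in the phrase analyzer:

filter:
  apostrophe:
    type: pattern_replace
    pattern: "'"
    replacement: ''
analyzer:
  analyzer_phrase:
    tokenizer: keyword
    char_filter:
      - punctuation
    filter:
      - elision
      - icu_folding
      - apostrophe

So in "l'avion" the elision is stripped first ("avion"), while a remaining non-elision apostrophe, like the one in "Webster's", is removed afterwards ("websters").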
(In reply to Ere Maijala from comment #17)
> The apostrophe filter is for the other cases where an apostrophe is not
> part of an elision. It's not stripped in the char filter like other
> punctuation, so that the elision filter can do its job, but it needs to be
> handled afterwards. Try analysis for e.g. "Merriam-Webster's Collegiate
> Dictionary".
>
> I agree that this bug should only be about elisions. Stop words are another
> thing completely, and there are strong reasons for not using a stop word
> list, as Katrin described.

I think we should use an ICU transformation for that. I will try to find the syntax.
I'd like to move this forward. I think elision handling is important even if the catalog's language doesn't use elisions, since they can still occur in names etc.
Created attachment 86670 [details] [review]
Bug 21357: Add elision filtering to default ES index config

To test:
1. Rebuild Elasticsearch index with the new config
2. Add a record with "l'avion" in the title
3. Verify that the record can be found with both "l'avion" and "avion"

Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Björn Nylén <bjorn.nylen@ub.lu.se>
We signed off on this bug as we think it's a good feature as is. We just recently had a coworker note this behaviour.
This is a most awaited feature for us.

Note that it doesn't work for « L'avion », so maybe we need something like this:

diff --git a/admin/searchengine/elasticsearch/index_config.yaml b/admin/searchengine/elasticsearch/index_config.yaml
index f47fb22..05d89a9 100644
--- a/admin/searchengine/elasticsearch/index_config.yaml
+++ b/admin/searchengine/elasticsearch/index_config.yaml
@@ -54,6 +54,12 @@ index:
         - 'qu'
         - 's'
         - 't'
+        - 'C'
+        - 'D'
+        - 'J'
+        - 'L'
+        - 'QU'
+        - 'T'
       apostrophe:
         type: pattern_replace
         pattern: "'"
Created attachment 87901 [details] [review]
Bug 21357: Add elision filtering to default ES index config

To test:
1. Rebuild Elasticsearch index with the new config
2. Add a record with "l'avion" in the title
3. Verify that the record can be found with both "l'avion" and "avion"

Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Björn Nylén <bjorn.nylen@ub.lu.se>
Created attachment 87902 [details] [review]
Bug 21357: Add uppercase articles to the elision filter
Right, thanks for pointing that out. I added uppercase versions for all articles.
Hi Ere,

Everything works fine for the single letters + apostrophe in both upper and lower case, but it doesn't work for the double-letter QU. Here is the title of my record: "Qu'ils me laissent tranquille !"

If I search:
- QU'ILS --> fails
- Qu'ils --> OK
- qu'ils --> fails (same list of results as for QU'ILS)
- qU'ILS --> OK

I created another example and had no problem with upper or lower case:
- Aujourd'hui --> OK
- AUJOURD'HUI --> OK
- AUJOURd'HUI --> OK

As 'D' and 'd' are on your list, could my first example behave correctly if you added only 'U' and 'u' to the settings?
(In reply to Ere Maijala from comment #24)
> Created attachment 87902 [details] [review]
> Bug 21357: Add uppercase articles to the elision filter

I think the correct way is to use:

"articles_case": true

From https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-lang-analyzer.html#french-analyzer
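In Koha's index_config.yaml terms, that means the existing filter would just gain one option instead of a duplicated upper-case article list (sketch):

elision:
  type: elision
  articles_case: true
  articles: ['c', 'd', 'j', 'l', 'm', 'n', 'qu', 's', 't']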
At last I'm going to work on this again. I understand Elasticsearch way more now :D

I now know that we can use French elision on a non-French catalog: even if there is an unwanted impact, ranking will give priority to correct matches. So we can provide this in the default configuration.

Thanks to all for your participation on this bug so far.
Created attachment 90191 [details] [review]
Bug 21357: Case-insensitive articles to the elision filter

Test with:

GET index/_analyze
{
  "filter": ["elision"],
  "text": "qu'ils"
}

With different cases:
QU'ILS
Qu'ils
qu'ils
qU'ILS
Ohhh, playing with the ES explain analyze API:
https://www.elastic.co/guide/en/elasticsearch/reference/current/_explain_analyze.html

I see that elision is only applied at the beginning of the token. It explains a lot about analyzer_phrase (comment 16).

I continue my tests.
(In reply to Fridolin SOMERS from comment #30)
> Ohhh, playing with the ES explain analyze API:
> https://www.elastic.co/guide/en/elasticsearch/reference/current/_explain_analyze.html
>
> I see that elision is only applied at the beginning of the token. It
> explains a lot about analyzer_phrase (comment 16).
>
> I continue my tests.

Hey Frido! OK for you to test again during the Hackfest?
I think that enabling the french analyzer (or parts of it) by default is not the best solution. Sure, it will work with French catalogs, but what will happen when one wants to add support for another language (or, for the weird people above, 350 other languages! :))?

I think a better solution is to use multi-fields with multiple analyzers, as suggested here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html#_multi_fields_with_multiple_analyzers

The idea is to have multiple "subfields" for each field, one subfield per language, as sketched below. For instance:
- `title` will be language agnostic
- `title.lang_en` will use an english analyzer
- `title.lang_fr` will use a french analyzer

When searching, Koha can search in all these fields, and the query will be analyzed by all analyzers.

I've been testing this idea this week, and I made a small patch that I will submit here. It is only a proof of concept for now, but I hope it will help.
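As a rough illustration of the mapping (a sketch; the subfield and analyzer names are examples, not necessarily what the proof-of-concept patch uses):

title:
  type: text
  analyzer: analyzer_standard
  fields:
    lang_en:
      type: text
      analyzer: english
    lang_fr:
      type: text
      analyzer: french

A query can then target title, title.lang_en and title.lang_fr at the same time; each subfield analyzes both the indexed text and the query with its own analyzer.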
Created attachment 98169 [details] [review]
Bug 21357: Allow to use multiple ES language analyzers

Test plan:
1. Apply this patch
2. If you have a custom `field_config.yaml`, apply the diff to your custom config
3. Create biblios in English and French
4. misc/search_tools/rebuild_elasticsearch.pl
5. Test biblio search in the intranet with stemming and elision in mind

Examples of test queries:
- "journal actualité" => should return results with "Journaux d'actualités" in it
- "lord lady" => should return results with "Lords and Ladies" in it
Changing status to needs signoff to get opinions
Hi Julian, does it mean it searches the different representations simultaneously?

I am still not quite sure about 'French-specific'. It feels like every catalog containing some French materials would be happy to have "l'avion" work the way you describe; we would. In my experience most catalogs contain records in different languages. I wonder how it would work for English, thinking of words like "can't" or "doesn't".
I can't really see the benefit, since as far as I can see elision handling is not prone to cause conflicts with other language analysis. Separating analysis for different languages also won't work for mixed-language fields. Think about names and a (very fictional) example phrase "Images from movie l'Avion". You'd get either elision filtering or English stemming, but not both. For sure it will still be found with a simple keyword search, but it breaks at least adjacent-word searches and relevance ranking.
(In reply to Katrin Fischer from comment #35)
> Hi Julian, does it mean it searches the different representations
> simultaneously?

Only one query to ES is needed, if that's what you mean by simultaneously.

> I wonder how it would work for English, thinking of words like "can't" or
> "doesn't".

The built-in english analyzer does not do anything with words that end in "n't", but it should be possible to configure a custom english analyzer that treats "can't" and "cannot" the same way.

(In reply to Ere Maijala from comment #36)
> I can't really see the benefit, since as far as I can see elision handling
> is not prone to cause conflicts with other language analysis.

Elision might not cause trouble (but what about names like "D'Amato"?). I'm thinking about the next step: stemming is very different from one language to another, and we need to find a way to have stemming for multi-language catalogs.

> Separating analysis for different languages also won't work for
> mixed-language fields. Think about names and a (very fictional) example
> phrase "Images from movie l'Avion". You'd get either elision filtering or
> English stemming, but not both. For sure it will still be found with a
> simple keyword search, but it breaks at least adjacent-word searches and
> relevance ranking.

Nothing would work perfectly with mixed-language fields. But in this particular example, you could have another subfield `lang_en_fr` that does english stemming and french elision.
(In reply to Julian Maurice from comment #37)
> Elision might not cause trouble (but what about names like "D'Amato"?).
> I'm thinking about the next step: stemming is very different from one
> language to another, and we need to find a way to have stemming for
> multi-language catalogs.

I agree that stemming is difficult, and I've tried to purposefully limit this bug to elisions.

> Nothing would work perfectly with mixed-language fields. But in this
> particular example, you could have another subfield `lang_en_fr` that does
> english stemming and french elision.

And maybe lang_fi_fr, lang_sv_fr etc. This gets complicated pretty quickly. And I'm afraid separating the different language analysis chains doesn't solve the issue with stemming etc., because you'd need to avoid indexing into the "wrong" fields, which would require you to know what language the string to be indexed is in.
> because you'd need to avoid indexing into the "wrong" fields, which would
> require you to know what language the string to be indexed is in.

I don't think you have to avoid indexing into the wrong fields (though I agree it would be better). You just need at least one field that matches. You might end up with false positives (more results than you should have), but in my opinion that's still better than not getting the results you expected.

I realize that stemming is out of the scope of this bug, but even if we talk only about elision, I think it would be better to have one field analyzed with elision and one field without, and to search in both fields every time.
(In reply to Julian Maurice from comment #39)

What about using a record field to determine the language of the record and applying different analyzer settings based on that? 008/35-37 or 041 in MARC 21; I don't know if there is a UNIMARC equivalent.
(In reply to Nick Clemens from comment #40)
> What about using a record field to determine the language of the record and
> applying different analyzer settings based on that? 008/35-37 or 041 in
> MARC 21; I don't know if there is a UNIMARC equivalent.

In UNIMARC there is field 101 that could be used for that.
At least with MARC 21 that's the language of the catalogued item, not the language of the metadata record, right? We've been through this with our discovery interface and had to give up trying to guess the language. Additionally, a record can contain metadata fields in multiple languages.
(In reply to Ere Maijala from comment #42)
> At least with MARC 21 that's the language of the catalogued item, not the
> language of the metadata record, right? We've been through this with our
> discovery interface and had to give up trying to guess the language.
> Additionally, a record can contain metadata fields in multiple languages.

True, but for at least certain fields like 'title' shouldn't the language match? I.e. the French version should have the French title and the English version the English title?

Maybe the problem here is 'MARC is bad' :-)
Julian, for non-Latin scripts we are doing a Latin transliteration, as all the ABES partners in France are supposed to do. Look at:

https://catalogue.bulac.fr/cgi-bin/koha/opac-detail.pl?biblionumber=976426
http://www.sudoc.fr/18117295X

If I check the language here, it's Hebrew. I've got a 200$a with the Hebrew title in Hebrew script and another one with the Hebrew title in the Latin alphabet. To know what to do, I have to look at 200$6 and 200$7, which are ABES's non-standard ways to describe non-Latin scripts (I like to think we are not using UNIMARC but ABESMARC):

http://documentation.abes.fr/sudoc/regles/Catalogage/Regles_Multiecritures.htm#Ss-zones$6$7

Voilà. I don't know what to think yet, but now you know.
Created attachment 98244 [details] [review]
Bug 21357: Add elision filtering to default ES index config

To test:
1. Rebuild Elasticsearch index with the new config
2. Add a record with "l'avion" in the title
3. Verify that the record can be found with both "l'avion" and "avion"

Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Björn Nylén <bjorn.nylen@ub.lu.se>
Signed-off-by: Michal Denar <black23@gmail.com>

Created attachment 98245 [details] [review]
Bug 21357: Case-insensitive articles to the elision filter

Test with:

GET index/_analyze
{
  "filter": ["elision"],
  "text": "qu'ils"
}

With different cases:
QU'ILS
Qu'ils
qu'ils
qU'ILS

Signed-off-by: Michal Denar <black23@gmail.com>

Created attachment 98246 [details] [review]
Bug 21357: Allow to use multiple ES language analyzers

Test plan:
1. Apply this patch
2. If you have a custom `field_config.yaml`, apply the diff to your custom config
3. Create biblios in English and French
4. misc/search_tools/rebuild_elasticsearch.pl
5. Test biblio search in the intranet with stemming and elision in mind

Examples of test queries:
- "journal actualité" => should return results with "Journaux d'actualités" in it
- "lord lady" => should return results with "Lords and Ladies" in it

Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 101305 [details] [review]
Bug 21357: Add elision filtering to default ES index config

To test:
1. Rebuild Elasticsearch index with the new config
2. Add a record with "l'avion" in the title
3. Verify that the record can be found with both "l'avion" and "avion"

Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Björn Nylén <bjorn.nylen@ub.lu.se>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Bouzid Fergani <bouzid.fergani@inlibro.com>

Created attachment 101306 [details] [review]
Bug 21357: Case-insensitive articles to the elision filter

Test with:

GET index/_analyze
{
  "filter": ["elision"],
  "text": "qu'ils"
}

With different cases:
QU'ILS
Qu'ils
qu'ils
qU'ILS

Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Bouzid Fergani <bouzid.fergani@inlibro.com>

Created attachment 101307 [details] [review]
Bug 21357: Allow to use multiple ES language analyzers

Test plan:
1. Apply this patch
2. If you have a custom `field_config.yaml`, apply the diff to your custom config
3. Create biblios in English and French
4. misc/search_tools/rebuild_elasticsearch.pl
5. Test biblio search in the intranet with stemming and elision in mind

Examples of test queries:
- "journal actualité" => should return results with "Journaux d'actualités" in it
- "lord lady" => should return results with "Lords and Ladies" in it

Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Bouzid Fergani <bouzid.fergani@inlibro.com>

Created attachment 101309 [details] [review]
Bug 19482 - DB changes

Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Bouzid Fergani <bouzid.fergani@inlibro.com>

Created attachment 101310 [details] [review]
Bug 19482 - Add support for defining 'mandatory' mappings

To test:
1 - Apply patch
2 - ./installer/data/mysql/updatedatabase.pl
3 - Reset ES mapping: Administration->Search engine configuration, button at bottom of page
4 - 'issues' and 'title' mapping under 'search fields' should be mandatory and not editable
5 - On 'Bibliographic records' tab you should not be able to delete the single entry for issues
6 - You should be able to delete 'title' mappings, however, at the final one you should be stopped by javascript
7 - Bonus: force remove the last mapping from the page using developer tools - attempt to save and should be warned of missing mandatory mapping

Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Bouzid Fergani <bouzid.fergani@inlibro.com>

Created attachment 101311 [details] [review]
Bug 19482: SCHEMA CHANGES _ DO NOT PUSH

Signed-off-by: Bouzid Fergani <bouzid.fergani@inlibro.com>
The last patch here seems to do a few things:
1 - It changes the way we build our search queries to use bool rather than combining in query_string - I like this, but it should be its own bug
2 - It adds the ability to have multiple language analyzers - it is simple, but should also be its own bug
3 - Both of the above will need test coverage

I am not sure consensus was reached in the discussion before the new patch; it should probably at least wait until Ere returns.
Indeed, my proposal was just to add elision filtering to the default ES index config without any code changes. The first patch still does that, but since then additional ideas have been incorporated. I think it would make sense to create separate bugs for them.

That would leave us with only the question of whether we include elision filtering in the default configuration or just document it as a possibility. Either way is fine for me, since in Finland we'll need to customize the default config regardless. I'm not sure how to reach consensus on this. I believe including the filtering by default would make sense regardless of whether the catalog contains French titles, since it helps with searching e.g. author names too.
+1 for default filtering. It's how it works now in the default and we haven't seen questions or complaints about this. To me it appears to be the expected behaviour.
(In reply to Katrin Fischer from comment #56)
> +1 for default filtering. It's how it works now in the default and we
> haven't seen questions or complaints about this. To me it appears to be the
> expected behaviour.

And welcome back, Ere :)
*** Bug 27153 has been marked as a duplicate of this bug. ***
So, where are we at with this one?
I think the last patch, "Allow to use multiple ES language analyzers", should be a separate issue. The fix for case-insensitivity in the second patch is valid and necessary: I failed even to test without it, and the Elasticsearch docs seem to have the meaning of true/false for the articles_case option reversed.
(In reply to Ere Maijala from comment #60)
> I think the last patch, "Allow to use multiple ES language analyzers",
> should be a separate issue. The fix for case-insensitivity in the second
> patch is valid and necessary: I failed even to test without it, and the
> Elasticsearch docs seem to have the meaning of true/false for the
> articles_case option reversed.

So could we get this back into testing with the first two patches? They still apply; just the third one is conflicting.
I am making a brave step here (please don't kill me :)) and putting this back in the test queue. I am not sure how to phrase a new bug for the now-obsoleted 3rd patch; maybe someone could help?
[10658] Checking state of biblios index
[10658] Dropping and recreating biblios index
[Request] ** [http://localhost:9200]-[400] [illegal_argument_exception] Custom normalizer [facet_normalizer] failed to find char_filter under name [facet], called from sub Search::Elasticsearch::Role::Client::Direct::__ANON__ at /var/repositories/koha/Koha/SearchEngine/Elasticsearch/Indexer.pm line 389.
With vars:
{'request' => {'mime_type' => 'application/json','qs' => {},'method' => 'PUT','body' => {'settings' => {'index.mapping.total_fields.limit' => '10000','index.number_of_shards' => '5','index.number_of_replicas' => '1','index' => {'analysis' => {'char_filter' => {'punctuation' => {'pattern' => '([\\x00-\\x1F,\\x21-\\x26,\\x28-\\x2F,\\x3A-\\x40,\\x5B-\\x60,\\x7B-\\x89,\\x8B,\\x8D,\\x8F,\\x90-\\x99,\\x9B,\\x9D,\\xA0-\\xBF,\\xD7,\\xF7])','type' => 'pattern_replace','replacement' => ''}},'analyzer' => {'analyzer_stdno' => {'char_filter' => ['punctuation'],'tokenizer' => 'whitespace','filter' => ['icu_folding']},'analyzer_phrase' => {'tokenizer' => 'keyword','char_filter' => ['punctuation'],'filter' => ['elision','icu_folding','apostrophe']},'analyzer_standard' => {'filter' => ['elision','icu_folding'],'tokenizer' => 'icu_tokenizer'}},'filter' => {'apostrophe' => {'replacement' => '','pattern' => '\'','type' => 'pattern_replace'},'facet' => {'pattern' => '\\s*(?<!\\p{Lu})[.\\-,;]*\\s*$','type' => 'pattern_replace','replacement' => ''},'elision' => {'articles_case' => 'true','type' => 'elision','articles' => ['c','d','j','l','m','n','qu','s','t']}},'normalizer' => {'facet_normalizer' => {'char_filter' => 'facet'},'nfkc_cf_normalizer' => {'char_filter' => 'icu_normalizer','type' => 'custom'},'icu_folding_normalizer' => {'type' => 'custom','filter' => ['elision','icu_folding']}}}}}},'ignore' => [],'path' => '/koha_robin_biblios','serialize' => 'std'},
 'status_code' => 400,
 'body' => {'error' => {'root_cause' => [{'reason' => 'Custom normalizer [facet_normalizer] failed to find char_filter under name [facet]','type' => 'illegal_argument_exception'}],'reason' => 'Custom normalizer [facet_normalizer] failed to find char_filter under name [facet]','type' => 'illegal_argument_exception'},'status' => 400}}
In order to keep this bug alive: our institution has several libraries in different countries requiring support for all kinds of character sets and punctuation, so we are also very much interested in a generic solution to the problem.
I am brand new to Koha, but I was told this bug is why I have to include apostrophes while searching. I tried searching for the title "Don't Hate the Player" and came up with zero results. I did not use the apostrophe, because I have never had to before in multiple catalog systems. I repeated the search with the apostrophe and found the title. I think the search should work both ways, with or without the apostrophe. Thank you.
What would be the easiest option here to make everyone happy for now: Could it be made a configuration option somehow?
(In reply to Katrin Fischer from comment #66)
> What would be the easiest option here to make everyone happy for now: Could
> it be made a configuration option somehow?

I just found bug 14542, which seems to solve this for Zebra with ICU. Could we not do the same for Elasticsearch?
(In reply to Katrin Fischer from comment #67)
> (In reply to Katrin Fischer from comment #66)
> > What would be the easiest option here to make everyone happy for now: Could
> > it be made a configuration option somehow?
>
> I just found bug 14542, which seems to solve this for Zebra with ICU.
> Could we not do the same for Elasticsearch?

Sure we could, if that's what makes everyone happy.
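For what it's worth, the ES equivalent of the Zebra ICU rule would be a char_filter that turns the apostrophe into a space before tokenization, along these lines (a sketch, untested; the filter name is made up):

char_filter:
  apostrophe_to_space:
    type: pattern_replace
    pattern: "'"
    replacement: ' '

It would then be listed in the analyzers' char_filter sections, as an alternative to the elision approach.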
(In reply to Ere Maijala from comment #68)
> (In reply to Katrin Fischer from comment #67)
> > (In reply to Katrin Fischer from comment #66)
> > > What would be the easiest option here to make everyone happy for now: Could
> > > it be made a configuration option somehow?
> >
> > I just found bug 14542, which seems to solve this for Zebra with ICU.
> > Could we not do the same for Elasticsearch?
>
> Sure we could, if that's what makes everyone happy.

Sometimes making everyone happy is hard, but I feel that there is a need to solve this, and we haven't moved an inch for some time. Looking for a next step forward. I feel the "Zebra-style" solution would work for us.
I find bug 14542 a bit scary, but perhaps I'm just paranoid. The problem with changing apostrophes to spaces is that the elision is left dangling: in the "l'avion" example you can then find the record by searching for just "l". I'm not sure that's desirable.
I was very happy with Ere's first proposal. Bug 14542 looks weirder to me.
I think Ere's patch would be a great start for French, and maybe we should bring this bug back to just that and handle quotes separately? As I said earlier, I think a lot of libraries have French materials, so this would be a nice enhancement.
It is a pity that after almost five years this issue is still not solved ;)

I take the opportunity to add some initial elisions from an Italian/international catalogue, because the problem is not really French-specific. To list only the most common:

dell'
o'
nell'
dall'
all'
un'
sull'
sant'
ch'

BTW, the problem of the apostrophe is not only a problem of initial elisions: the Saxon genitive creates problems too. Looking for "Shakespeare's" with "Shakespeare" will give no results unless QueryAutoTruncate is on. So maybe the apostrophe filter should be used as a third filter in analyzer_standard as well? (Which probably would also have some drawbacks...)
... I mean the original apostrophe filter:
https://www.elastic.co/guide/en/elasticsearch/reference/8.7/analysis-apostrophe-tokenfilter.html

I.e. to have:

analyzer_standard:
  tokenizer: icu_tokenizer
  filter:
    - icu_folding
    - elision
    - apostrophe

So:

GET koha_<instance>_biblios/_analyze
{
  "analyzer": "analyzer_standard",
  "text": "Nell'aria voce di Sant'Antonio e Shakespeare's"
}

would give:

aria voce di antonio e shakespeare

which is quite nice.
(In reply to Janusz Kaczmarek from comment #74)
> ... I mean the original apostrophe filter:
> https://www.elastic.co/guide/en/elasticsearch/reference/8.7/analysis-apostrophe-tokenfilter.html

According to the documentation, that was originally created for the Turkish language.

In English, it would work well with apostrophe-s ('s) but not so well for contractions like "don't" or "can't".

It would also break languages like Ukrainian:
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=27153#c14
(In reply to David Cook from comment #75)
> According to the documentation, that was originally created for the Turkish
> language.
>
> In English, it would work well with apostrophe-s ('s) but not so well for
> contractions like "don't" or "can't".
>
> It would also break languages like Ukrainian:
> https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=27153#c14

I am not sure I follow that argument. We have already been ignoring ' with Zebra for many years and never got a bug filed about it. On the other hand, we now have a lot of people asking for it to ignore the '.

Would it be possible to make it configurable? If not, I'd say we should just go with it. It's a big issue for French, and I feel searching for 'dont' is more common than 'don t'.
(In reply to Katrin Fischer from comment #76)
> I am not sure I follow that argument. We have already been ignoring ' with
> Zebra for many years and never got a bug filed about it. On the other hand,
> we now have a lot of people asking for it to ignore the '.

We don't ignore the apostrophe/single quote in Zebra; we change it to a space. That's significantly different from replacing it with nothing, from a tokenization perspective.

> Would it be possible to make it configurable? If not, I'd say we should
> just go with it. It's a big issue for French, and I feel searching for
> 'dont' is more common than 'don t'.

I think I've talked on a different bug about indexing both options, so there probably are some options (like indexing with and without punctuation) for making this a Koha configuration. But it would probably be a bit error-prone, as we'd need to change all of Koha to support it.

I'm not using Elasticsearch yet, so I'm happy for people to try things out. Just trying to think of any pitfalls. And also hoping that we can keep Zebra and Elasticsearch similar in their configuration, or else we should think about deprecating Zebra and just focusing on Elasticsearch.

Regarding "don't", I would hope that people would search for "don't" instead of "dont" or "don t". Note that "dont" is also a French word, so searches for "don't" would match titles containing the word "dont". But multilingual indexing is hard anyway...
An interesting case I bumped into recently was "Dewhurst's textbook of obstetrics". The library wants to be able to search for "Dewhursts textbook of obstetrics". I think I can understand that: with names it's often difficult to remember whether one ends in just an "s" or in "'s", so it would be nice if the system considered both options.

If "Dewhurst's" is indexed as "Dewhursts", it would catch both "Dewhurst's" and "Dewhursts", but it wouldn't catch "Dewhurst"; that would return no results. If you have query stemming turned on, "Dewhursts" or "Dewhurst's" also gets stemmed down to "Dewhurst" in English, so that would also break your search, unless you have truncation turned on too.

I think ByWater are the biggest English-language users of Elasticsearch with Koha, so I'll defer to them on this topic. Like I said... I'm not using ES with Koha yet, so I wouldn't consider my comments a blocker at all. Feel free to move forward. I'm sure someone other than me will find any potential issues that pop up.
I have a library catalogue with French, English, and Arabic records. We're using Zebra with ICU indexing, but it would be interesting to try out Elasticsearch on that catalogue, especially with these patches.