Bug 41469

Summary: Convert Traditional Chinese to Simplified Chinese for searching
Product: Koha
Reporter: Nick Clemens (kidclamp) <nick>
Component: Searching
Assignee: Bugs List <koha-bugs>
Status: NEW
QA Contact: Testopia <testopia>
Severity: enhancement
Priority: P5 - low
CC: zdypop
Version: Main
Hardware: All
OS: All
GIT URL:
Initiative type: ---
Sponsorship status: ---
Comma delimited list of Sponsors:
Crowdfunding goal: 0
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:

Description Nick Clemens (kidclamp) 2025-12-17 14:42:23 UTC
We have had a request to support the conversion of traditional characters to simplified characters for searching.

There are Elasticsearch tools/analyzers for this:
https://github.com/infinilabs/analysis-stconvert
https://www.elastic.co/docs/reference/elasticsearch/plugins/analysis-smartcn

However, in testing I have not been able to get things working.

The requester suggested some config for Zebra as well, but I couldn't get that working either.

Filing this here to capture conversations I had with David Cook and so he can share the setup he tried ;-)
Comment 1 Anthony Zhu 2025-12-17 21:46:29 UTC
Hi Nick,

Regarding the Elasticsearch implementation, my research findings (based on the PDF documentation I shared previously) suggest that installing the plugins alone might not be enough. We likely need to define a custom analyzer in the Elasticsearch index settings to explicitly chain the tokenizer and the conversion filter.

Here is the specific configuration logic derived from my research (Source: "DeepSeek" analysis in my documentation) that might be the missing link:

1. Define Custom Analyzer (Elasticsearch API Approach)

It seems we need to update the index settings to include a custom analyzer that uses the ICU or SmartCN tokenizer followed by a Traditional-to-Simplified transform filter.

JSON (concept configuration for the Elasticsearch index settings):

PUT /koha_biblios
{
  "settings": {
    "analysis": {
      "filter": {
        "traditional_to_simplified": {
          "type": "icu_transform",
          "id": "Traditional-Simplified"
        }
      },
      "analyzer": {
        "zh_cn_search": {
          "tokenizer": "icu_tokenizer",
          "filter": [
            "traditional_to_simplified",
            "lowercase"
          ]
        }
      }
    }
  }
}

(The icu_transform filter and icu_tokenizer come from the analysis-icu plugin. If analysis-stconvert is installed instead, the filter could be defined with "type": "stconvert" and "convert_type": "t2s", and smartcn_tokenizer could replace icu_tokenizer.)
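
Once those settings are in place (and the relevant plugin is installed), a quick way to sanity-check the chain might be the _analyze API. This is just a sketch using the index and analyzer names from the snippet above:

POST /koha_biblios/_analyze
{
  "analyzer": "zh_cn_search",
  "text": "圖書館"
}

If the conversion is working, the returned tokens should come back in simplified script (图书馆) rather than the traditional form that was sent in.
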
2. Update Koha Search Mappings (search_fields.yaml)

After defining the analyzer in ES, we need to tell Koha to use this zh_cn_search analyzer for the relevant fields (like Title and Author).

YAML (concept for Koha's search mappings):

title:
  type: text
  analyzer: zh_cn_search
  search_analyzer: zh_cn_search
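
One caveat I am not sure about: since Koha creates the Elasticsearch index from its own configuration, settings applied directly with PUT will probably be lost whenever the index is dropped and rebuilt. If I am reading the setup correctly, the analysis section would instead need to live in Koha's index_config.yaml (or a custom copy referenced from koha-conf.xml). A rough, untested sketch using the same names as above:

# Assumed addition to Koha's Elasticsearch index_config.yaml
index:
  analysis:
    filter:
      traditional_to_simplified:
        type: icu_transform
        id: Traditional-Simplified
    analyzer:
      zh_cn_search:
        tokenizer: icu_tokenizer
        filter:
          - traditional_to_simplified
          - lowercase

Please treat this as a starting point rather than a verified configuration.
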
3. Regarding the Zebra Config

If Elasticsearch proves too difficult for now, the Zebra ICU configuration I verified previously is:

koha-conf.xml: Enable <icu>1</icu> and <language>zh</language>.

ICU Rule: Use ::zh-Hans-zh-Hant; transliteration.
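
For completeness, this is roughly how I would expect that rule to be wired into one of Zebra's ICU chain files (e.g. words-icu.xml / phrases-icu.xml). I have not verified this end to end, and the surrounding steps are only loosely modelled on the stock Koha chain, so treat it as a sketch:

<icu_chain locale="zh">
  <!-- strip control characters -->
  <transform rule="[:Control:] Any-Remove"/>
  <!-- normalize script before tokenizing; transliteration rule from above -->
  <transliterate rule="::zh-Hans-zh-Hant;"/>
  <tokenize rule="l"/>
  <transform rule="[[:WhiteSpace:][:Punctuation:]] Remove"/>
  <display/>
  <casemap rule="l"/>
</icu_chain>
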

I hope these specific JSON snippets help clarify how to trigger the plugins!