Bug 38101 - ES skips records with huge fields
Summary: ES skips records with huge fields
Status: Signed Off
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching - Elasticsearch
Version: unspecified
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: Bugs List
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2024-10-04 20:36 UTC by Tomás Cohen Arazi (tcohen)
Modified: 2024-11-13 13:12 UTC
CC List: 6 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 38101: Make ES indexer split big fields into chunks (3.22 KB, patch)
2024-10-04 20:50 UTC, Tomás Cohen Arazi (tcohen)
Bug 38101: Make ES indexer split big fields into chunks (3.28 KB, patch)
2024-11-07 14:57 UTC, Nick Clemens (kidclamp)

Description Tomás Cohen Arazi (tcohen) 2024-10-04 20:36:03 UTC
I saw a case in the wild where a staff member copied and pasted some legal text from a PDF into a 500 field. After that, the record could no longer be found using ES.

The reason is fairly simple: ES has a maximum size (32766 bytes) it will accept for a single indexed term.

To reproduce:
1. Have KTD running with ES:
   $ ktd --proxy --es7 up -d
2. Perform a search
3. Pick the first result for editing
4. Find a cool Wiki page with lots of paragraphs
5. Copy all of the paragraphs and put them into a 500$a field of the record.
6. Repeat 2
=> FAIL: The record is not found
7. Reindex manually:
   $ ktd --shell
  k$ perl misc/search_tools/rebuild_elasticsearch.pl --biblios --where "biblionumber=3"  -v -v
=> FAIL: You get something like:
```
[22229] Committing final records...
One or more ElasticSearch errors occurred when indexing documents at /kohadevbox/koha/Koha/SearchEngine/Elasticsearch/Indexer.pm line 148.
[22229] There were errors during indexing
Record #3 Document contains at least one immense term in field="note.raw" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[10, 109, 117, 115, 116, 97, 102, 97, 32, 102, 117, 101, 32, 101, 108, 32, 115, 101, 103, 117, 110, 100, 111, 32, 104, 105, 106, 111, 32, 100]...', original message: bytes can be at most 32766 in length; got 32771 (illegal_argument_exception) : max_bytes_length_exceeded_exception (bytes can be at most 32766 in length; got 32771)
[22229] Total 1 records indexed
```
Comment 1 Tomás Cohen Arazi (tcohen) 2024-10-04 20:50:57 UTC
Created attachment 172428
Bug 38101: Make ES indexer split big fields into chunks

This patch makes the `_process_mappings()` method split the index values
when they are bigger than the allowed 32766-byte size.
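
A minimal sketch of the word-aware splitting idea, for illustration only: this is not the code from the attached patch, and the stand-alone `split_into_chunks` helper is hypothetical.

```perl
#!/usr/bin/perl
use Modern::Perl;
use Encode qw(encode_utf8);

# Hypothetical helper (not the actual patch code): split a string into chunks
# whose UTF-8 encoding stays at or below $max_bytes, breaking on whitespace so
# words are kept intact. A single word longer than the limit would still become
# its own oversized chunk; a real implementation needs to handle that as well.
sub split_into_chunks {
    my ( $value, $max_bytes ) = @_;

    my @chunks;
    my $current = q{};
    for my $word ( split /\s+/, $value ) {
        my $candidate = length $current ? "$current $word" : $word;
        if ( length( encode_utf8($candidate) ) > $max_bytes && length $current ) {
            push @chunks, $current;    # flush the chunk built so far
            $current = $word;          # start a new chunk with this word
        }
        else {
            $current = $candidate;
        }
    }
    push @chunks, $current if length $current;

    return @chunks;
}

# Example: roughly 55 KB of text ends up as two chunks, each below 32766 bytes.
my $huge_note = join ' ', ('mustafa fue el segundo hijo de') x 1800;
my @values    = split_into_chunks( $huge_note, 32766 );
say scalar(@values) . ' chunks';
```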

To test:
1. Have KTD running with ES:
   $ ktd --proxy --es7 up -d
2. Perform a search
3. Pick the first result for editing
4. Find a cool Wiki page with lots of paragraphs
5. Copy all of the paragraphs and put them into a 500$a field of the record.
6. Repeat 2
=> FAIL: The record is not found
7. Reindex manually:
   $ ktd --shell
  k$ perl misc/search_tools/rebuild_elasticsearch.pl --biblios --where "biblionumber=3"  -v -v
=> FAIL: You get something like:
```
[22229] Committing final records...
One or more ElasticSearch errors occurred when indexing documents at /kohadevbox/koha/Koha/SearchEngine/Elasticsearch/Indexer.pm line 148.
[22229] There were errors during indexing
Record #3 Document contains at least one immense term in field="note.raw" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[10, 109, 117, 115, 116, 97, 102, 97, 32, 102, 117, 101, 32, 101, 108, 32, 115, 101, 103, 117, 110, 100, 111, 32, 104, 105, 106, 111, 32, 100]...', original message: bytes can be at most 32766 in length; got 32771 (illegal_argument_exception) : max_bytes_length_exceeded_exception (bytes can be at most 32766 in length; got 32771)
[22229] Total 1 records indexed
```
8. Apply this patch
9. Repeat 7
=> SUCCESS: No error!
10. Repeat 2
=> SUCCESS: The record is indexed and can be found!
11. Sign off :-D
Comment 2 Tomás Cohen Arazi (tcohen) 2024-10-04 20:54:52 UTC
Hi, I came up with this patch. I'm not familiar enough with the area to be confident that the code is in the right spot, that no edge cases were missed, or that this shouldn't just be handled using the 'callbacks' (which I haven't seen used in real life).

I just know this fixes a real-life issue and looks like a sound solution (splitting a huge string into chunks without breaking words). If other devs consider it correct, I can write unit tests for it.
Comment 3 Mark Hofstetter 2024-10-04 22:59:21 UTC
Maybe domm could/should have a look at this one; he's got a lot of expertise with ES.
Comment 4 Thomas Klausner 2024-10-05 06:48:15 UTC
I think that the ES index is defined badly. 

The field you are trying to store the data in is most likely defined as "type": "keyword". "keyword" should only be used for exact-match lookups (think "category") and thus indeed has an upper limit:
https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html

But storing such a huge amount of text into a "keyword" field makes no sense.

So the proper fix is to index that data into a field that's defined as "type":"text". (Only then can you do partial matches on the content!)

If this is not possible, you can tell ES itself to cut off the text using `ignore_above`:
https://www.elastic.co/guide/en/elasticsearch/reference/current/ignore-above.html

So instead of handling this via code as suggested in the patch, it should be handled by fixing the ElasticSearch mappings for this (and maybe other) fields.

I've taken a quick look at `elasticsearch/mappings.yaml` but did not find any indicator there that 500 should be used as a keyword.

BUT: I do not really understand how your solution can fix the issue, so maybe all I'm saying now is wrong... I can try your bug later (not sure if during the weekend) to get a better understanding.

Greetings
domm
Comment 5 Tomás Cohen Arazi (tcohen) 2024-10-09 13:52:12 UTC
(In reply to Thomas Klausner from comment #4)
> I think that the ES index is defined badly. 
> 
> The field you are trying to store the data is most likely defined as "type"
> : "keyword". "keyword" should only be used for exact match lookups (think
> "category") and thus indeed has an upper limit:
> https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html
> 
> But storing such a huge amount of text into a "keyword" field makes no sense.
> 
> So the proper fix is to index that data into a field that's defined as
> "type":"text". (Only then can you do partial matches on the content!)

This looked promising but the result was the same. Maybe I'm doing it wrong but this is what I tried:

- Added 'text' as an option to the `search_field.type` ENUM definition (kohastructure.sql:5664) but did it directly on the DB.
- Changed the attribute `type` for the `note` index definition in mappings.yaml:3075
- Reloaded the mappings, made sure 'text' was set on the DB

Running this:
   $ ktd --shell
  k$ perl misc/search_tools/rebuild_elasticsearch.pl --biblios --where "biblionumber=3"  -v -v

gave the same results without my patch. And worked with it.

Have you been able to reproduce it?
Comment 6 Thomas Klausner 2024-10-12 19:08:25 UTC
I have now (finally... sorry) looked at the mappings as used in ktd, and it seems that the sub-fields of `note` (where `500` is indexed to by default) are indeed set up as `type: keyword`:

start ktd, then ktd --shell:

```
kohadev-koha@kohadevbox:koha(main)$ curl -s 'http://es:9200/koha_kohadev_biblios/_mapping/field/note?pretty'

{
  "koha_kohadev_biblios" : {
    "mappings" : {
      "note" : {
        "full_name" : "note",
        "mapping" : {
          "note" : {
            "type" : "text",
            "fields" : {
              "ci_raw" : {
                "type" : "keyword",
                "normalizer" : "icu_folding_normalizer"
              },
              "phrase" : {
                "type" : "text",
                "analyzer" : "analyzer_phrase"
              },
              "raw" : {
                "type" : "keyword",
                "normalizer" : "nfkc_cf_normalizer"
              }
            },
            "analyzer" : "analyzer_standard"
          }
        }
      }
    }
  }
}

```

Here you see that fields "raw" and "ci_raw" are of type "keyword".

Now, to test whether this is indeed the problem, we have to fiddle with the ElasticSearch mappings, which is not very easy (because the web interface does not have any effect on the actual mappings, which are stored in `admin/searchengine/elasticsearch/mappings.yaml`). BUT we actually don't care that much about the mappings (i.e. which MARC21 fields go into which search field). We care about the definition of the "note" field, which has no 'type', so it uses the default type, which we find in `admin/searchengine/elasticsearch/field_config.yaml`:

```
  default:
    type: text
    analyzer: analyzer_standard
    search_analyzer: analyzer_standard
    fields:
      phrase:
        type: text
        analyzer: analyzer_phrase
        search_analyzer: analyzer_phrase
      raw:
        type: keyword
        normalizer: nfkc_cf_normalizer
      ci_raw:
        type: keyword
        normalizer: icu_folding_normalizer
```

Because I'm currently just exploring, I just deleted `raw` and `ci_raw`, but (spoiler alert) this wasn't enough, because the `analyzer_phrase` has the same problem. So I removed all the subfields from "default", so that we only have:

```
  default:
    type: text
    analyzer: analyzer_standard
    search_analyzer: analyzer_standard
```

Now I can recreate the ES index:

kohadev-koha@kohadevbox:koha(main)$ perl misc/search_tools/rebuild_elasticsearch.pl -r

And re-index my test entry (where I added ~40k of text to 500):

perl misc/search_tools/rebuild_elasticsearch.pl --biblios --bn 284 -v -v

And it works!!

And I can find the book when I search for some of the text I entered (even if the text is at the end of the 40k).

BUT (a very big BUT):

This is NOT the proper solution, just proof that the problem lies in the usage of `keyword` and/or `analyzer_phrase` (where `analyzer_phrase` is defined in `admin/searchengine/elasticsearch/index_config.yaml` and also uses `keyword`).

One thing we could (easily) do is to use `ignore_above` for type=keyword (which would behave similarly to your patch, in that it drops text that is too long):

      raw:
        type: keyword
        normalizer: nfkc_cf_normalizer
        ignore_above: 20000
      ci_raw:
        type: keyword
        normalizer: icu_folding_normalizer
        ignore_above: 20000

But this does not work for `analyzer_phrase` :-(

I guess the correct (but very hard) solution would be to figure out why and where we need those subfields (esp. "phrase", but also "raw" and "ci_raw") and decide if we can use ignore_above for "raw" and "ci_raw". And figure out a fix for "phrase".

Or, much easier: we define a new search type "long_text" which does not include all those subfields (and therefore will not support a phrase search). Then you can change the search_mappings for "note" on your instance from "default" to "long_text" and everything should work. Or we might even decide that "note" should be a "long_text" by default.
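
A minimal sketch, under the assumption that such a type would sit next to "default" in `admin/searchengine/elasticsearch/field_config.yaml`; the "long_text" name and its exact definition are hypothetical, not part of any submitted patch:

```
  # Hypothetical field type without the keyword/phrase subfields, so no single
  # indexed term can exceed Elasticsearch's 32766-byte limit.
  long_text:
    type: text
    analyzer: analyzer_standard
    search_analyzer: analyzer_standard
```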

Unfortunately, ElasticSearch is a complex beast, and the Koha ES implementation has a lot of improvement opportunities (let's call it that...)
Comment 7 Tomás Cohen Arazi (tcohen) 2024-10-16 13:41:13 UTC
I don't have the ES knowledge that is required to implement this correctly. But I did understand the design decisions (simplifications?) that led us to this problem.

I feel like my compromise solution was good enough for now. I would agree it hides the underlying issue, which can bite us in some other circumstances.

I will keep my patch as NSO in case it can be useful, until someone funds/dedicates time to rethink our index definitions.
Comment 8 Tomás Cohen Arazi (tcohen) 2024-11-06 04:18:44 UTC
After talking to some librarian colleagues, my understanding is that if a librarian puts (for example) a long abstract on a 5xx field, they do it because they want to perform a phrase search on it. So truncating the field would impact search results.

So I still think my solution is more correct, for now.
Comment 9 David Cook 2024-11-06 05:46:16 UTC
I'm surprised that you encountered this bug, since I would've expected you to run into bug 27365 first...
Comment 10 David Cook 2024-11-06 05:49:47 UTC
(In reply to Tomás Cohen Arazi (tcohen) from comment #8)
> After talking to some librarian colleagues, my understanding is that if a
> librarian puts (for example) a long abstract on a 5xx field, they do it
> because they want to perform a phrase search on it. So truncating the field
> would impact search results.
> 
> So I still think my solution is more correct, for now.

That's true.

Locally, with Zebra, we truncate a field after 5000 characters, since we've had performance and storage problems with very large 5xx fields before. Recently, I've had libraries unhappy about that because they wanted to be able to search the entire 5xx field.

--

I think on bug 27365 we've talked about requiring stricter adherence to 9999 characters for MARC fields (even though that's not quite right either), but there will always be cases where a record with very large 5xx fields makes it into Koha via an import of some kind anyway...

--

When I finish bug 38270 to allow for compressed MARCXML, I imagine that I will run into this bug... so I'll probably be looking to test this very soon!
Comment 11 David Cook 2024-11-06 05:50:48 UTC
I'm working towards a Tuesday deadline, but after that I should have more time to look at this.
Comment 12 Nick Clemens (kidclamp) 2024-11-07 14:57:08 UTC
Created attachment 174247
Bug 38101: Make ES indexer split big fields into chunks

This patch makes the `_process_mappings()` method split the index values
when they are bigger than the allowed 32766-byte size.

To test:
1. Have KTD running with ES:
   $ ktd --proxy --es7 up -d
2. Perform a search
3. Pick the first result for editing
4. Find a cool Wiki page with lots of paragraphs
5. Copy all of the paragraphs and put them into a 500$a field of the record.
6. Repeat 2
=> FAIL: The record is not found
7. Reindex manually:
   $ ktd --shell
  k$ perl misc/search_tools/rebuild_elasticsearch.pl --biblios --where "biblionumber=3"  -v -v
=> FAIL: You get something like:
```
[22229] Committing final records...
One or more ElasticSearch errors occurred when indexing documents at /kohadevbox/koha/Koha/SearchEngine/Elasticsearch/Indexer.pm line 148.
[22229] There were errors during indexing
Record #3 Document contains at least one immense term in field="note.raw" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[10, 109, 117, 115, 116, 97, 102, 97, 32, 102, 117, 101, 32, 101, 108, 32, 115, 101, 103, 117, 110, 100, 111, 32, 104, 105, 106, 111, 32, 100]...', original message: bytes can be at most 32766 in length; got 32771 (illegal_argument_exception) : max_bytes_length_exceeded_exception (bytes can be at most 32766 in length; got 32771)
[22229] Total 1 records indexed
```
8. Apply this patch
9. Repeat 7
=> SUCCESS: No error!
10. Repeat 2
=> SUCCESS: The record is indexed and can be found!
11. Sign off :-D

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Comment 13 Nick Clemens (kidclamp) 2024-11-07 14:57:57 UTC
Testing notes:
I was still able to find the record after the big field was added; however, the content of that field was not indexed. After the patch, I could search for the contents and find the record.
Comment 14 David Cook 2024-11-11 03:22:41 UTC
(In reply to Nick Clemens (kidclamp) from comment #13)
> Testing notes:
> I was still able to find the record after the big field was added, however,
> the content of that field was not indexed. After the patch I could search
> for the contents and find the record

I'm looking at reproducing this at the moment. I created a new record with a big field, so I can't find it from the outset, as it isn't indexed at all.
Comment 15 Tomás Cohen Arazi (tcohen) 2024-11-13 13:12:55 UTC
(In reply to Nick Clemens (kidclamp) from comment #13)
> Testing notes:
> I was still able to find the record after the big field was added, however,
> the content of that field was not indexed. After the patch I could search
> for the contents and find the record

Because the record was already in Koha and thus indexed, right? It's just that it didn't get reindexed. If you try with `-d`