Bug 38913 - Elasticsearch indexing explodes with some oversized records with UTF-8 characters
Summary: Elasticsearch indexing explodes with some oversized records with UTF-8 characters
Status: Pushed to main
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching - Elasticsearch
Version: Main
Hardware: All
OS: All
Importance: P5 - low blocker
Assignee: Janusz Kaczmarek
QA Contact: David Cook
URL:
Keywords: rel_24_11_candidate
Depends on: 38416
Blocks:
Reported: 2025-01-16 20:39 UTC by Janusz Kaczmarek
Modified: 2025-01-24 11:03 UTC
CC List: 10 users

See Also:
Change sponsored?: ---
Patch complexity: Trivial patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
25.05.00
Circulation function:


Attachments
Bug 38913: (bug 38416 follow-up) Elasticsearch indexing explodes with oversized records (2.25 KB, patch)
2025-01-16 21:12 UTC, Janusz Kaczmarek
Test record (77.45 KB, application/x-zip-compressed)
2025-01-16 22:11 UTC, Janusz Kaczmarek
Bug 38913: (bug 38416 follow-up) Elasticsearch indexing explodes with oversized records (3.27 KB, patch)
2025-01-16 22:21 UTC, Janusz Kaczmarek
Bug 38913: (bug 38416 follow-up) Elasticsearch indexing explodes with oversized records (3.36 KB, patch)
2025-01-17 07:51 UTC, Magnus Enger
Bug 38913: (bug 38416 follow-up) Elasticsearch indexing explodes with oversized records (3.42 KB, patch)
2025-01-20 02:05 UTC, David Cook
Bug 38913: (QA follow-up) test UTF-8 exceptions in large MARC records (2.08 KB, patch)
2025-01-20 03:03 UTC, David Cook

Description Janusz Kaczmarek 2025-01-16 20:39:09 UTC
After Bug 38416, Elasticsearch indexing explodes with some oversized records, especially those with UTF-8 encoded data.

In Koha::SearchEngine::Elasticsearch::marc_records_to_documents, the following snippet was introduced:

my $usmarc_record = $record->as_usmarc();

#NOTE: Try to round-trip the record to prove it will work for retrieval after searching
my $decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);

But if $record is oversized (> 99999 bytes), it is still a valid MARC::Record object, yet $record->as_usmarc produces an invalid ISO 2709 string that new_from_usmarc cannot correctly convert back into a MARC::Record object.

In this case the result can look like:

UTF-8 "\x85" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.

Since this is done without any eval / try, the whole reindex procedure (for instance rebuild_elasticsearch.pl) gets interrupted seemingly at random, with no explanation.
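
For illustration, here is a minimal sketch of the eval-wrapped round-trip this bug calls for. It is not the attached patch; the helper name and the fallback behaviour are assumptions.

use strict;
use warnings;
use MARC::Record;

# Hypothetical helper (not the actual Koha code): return the round-tripped
# record, or undef if the ISO 2709 blob produced by as_usmarc() cannot be
# decoded again.
sub roundtrip_or_undef {
    my ($record) = @_;
    my $decoded = eval {
        my $usmarc = $record->as_usmarc();    # may be an invalid blob if the record is > 99999 bytes
        MARC::Record->new_from_usmarc($usmarc);
    };
    if ($@) {
        warn "Could not round-trip record through ISO 2709: $@";
        return undef;    # caller can fall back to serializing the record as MARCXML
    }
    return $decoded;
}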
Comment 1 Janusz Kaczmarek 2025-01-16 21:12:39 UTC Comment hidden (obsolete)
Comment 2 Janusz Kaczmarek 2025-01-16 22:11:59 UTC
Created attachment 176699 [details]
Test record

A test MARCXML record with lots of items producing oversized ISO 2709 record.
Comment 3 Janusz Kaczmarek 2025-01-16 22:21:40 UTC Comment hidden (obsolete)
Comment 4 Magnus Enger 2025-01-17 07:51:00 UTC
Created attachment 176704 [details] [review]
Bug 38913: (bug 38416 follow-up) Elasticsearch indexing explodes with oversized records

After Bug 38416, Elasticsearch indexing explodes with oversized
records, especially with UTF-8 encoded data.

In Koha::SearchEngine::Elasticsearch::marc_records_to_documents the
following snippet was introduced:

my $usmarc_record = $record->as_usmarc();
my $decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);

But if $record is oversized (> 99999 bytes), it is still fine as a
MARC::Record object, but not for $record->as_usmarc. The produced
ISO 2709 string is not correct and hence cannot be properly converted
back into a MARC::Record object by new_from_usmarc.

The result in this case can look like:

UTF-8 "\x85" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.

Since this is done without any eval / try, the whole reindex procedure
(for instance rebuild_elasticsearch.pl) gets randomly interrupted
with no explanation.

Test plan:
==========
Hard to reproduce, but the explanation above, together with the
discussion in Bug 38416 (from 2024-12-15), justifies the need for this
added eval.

1. Have a standard KTD installation with Elasticsearch.
2. Use the provided test record - add it to Koha with
   ./misc/migration_tools/bulkmarcimport.pl -b -file test.xml -m=MARCXML
   (have patience).
   During load process you should see a message like:
   UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.
3. The record should get biblionumber 439. Check in librarian interface with
   http://<your_address>:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=439
   that the record has been imported.
   However, you should not be able to make a search for this record.
4. Try to reindex with:
   ./misc/search_tools/rebuild_elasticsearch.pl -b -bn 439
   You should get a message like:
   UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.
   Again, no search results.
5. Apply the patch; run restart_all.
6. Repeat reindex with:
   ./misc/search_tools/rebuild_elasticsearch.pl -b -bn 439
   There should be no warning now and you should be able to find the record.

Signed-off-by: Magnus Enger <magnus@libriotech.no>
Followed the test plan. Works as advertised.
Comment 5 David Cook 2025-01-20 01:30:20 UTC
Without the patch, I tried to use the web UI to import this file, but it failed to load. In /var/log/koha/kohadev/worker-output.log I see the following:

Record length of 527856 is larger than the MARC spec allows (99999 bytes). at /usr/share/perl5/MARC/File/USMARC.pm line 314.
UTF-8 "\x85" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.

--

I applied the patch, ran 'restart_all', and tried the web UI again.

And it's still failing to import.

--

I'll try the test plan you've provided...
Comment 6 David Cook 2025-01-20 01:32:40 UTC
Guessing I must need to update my koha-testing-docker because it's failing with a missing MARC/Lint.pm module...
Comment 7 David Cook 2025-01-20 02:04:46 UTC
While my ktd was downloading, I looked at the code and warnings some more, and I see the logic here now. I must've been so focused on bug 38416 and fields with more than 9999 bytes that we forgot to test records with more than 99999 bytes. The MARC::* modules have some interesting quirks. MARC::File::USMARC could use some attention...

--

So this patch is good. It fixes the problem. Thanks a lot, Janusz, for following up on this one.

I'll mark it QAed and I'll increase the importance.
Comment 8 David Cook 2025-01-20 02:05:20 UTC
Created attachment 176767 [details] [review]
Bug 38913: (bug 38416 follow-up) Elasticsearch indexing explodes with oversized records

After Bug 38416, Elasticsearch indexing explodes with oversized
records, especially with UTF-8 encoded data.

In Koha::SearchEngine::Elasticsearch::marc_records_to_documents the
following snippet was introduced:

my $usmarc_record = $record->as_usmarc();
my $decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);

But if $record is oversized (> 99999 bytes), it is still fine as a
MARC::Record object, but not for $record->as_usmarc. The produced
ISO 2709 string is not correct and hence cannot be properly converted
back into a MARC::Record object by new_from_usmarc.

The result in this case can look like:

UTF-8 "\x85" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.

Since this is done without any eval / try, the whole reindex procedure
(for instance rebuild_elasticsearch.pl) gets randomly interrupted
with no explanation.

Test plan:
==========
Hard to reproduce, but the explanation above, together with the
discussion in Bug 38416 (from 2024-12-15), justifies the need for this
added eval.

1. Have a standard KTD installation with Elasticsearch.
2. Use the provided test record - add it to Koha with
   ./misc/migration_tools/bulkmarcimport.pl -b -file test.xml -m=MARCXML
   (have patience).
   During load process you should see a message like:
   UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.
3. The record should get biblionumber 439. Check in librarian interface with
   http://<your_address>:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=439
   that the record has been imported.
   However, you should not be able to make a search for this record.
4. Try to reindex with:
   ./misc/search_tools/rebuild_elasticsearch.pl -b -bn 439
   You should get a message like:
   UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.
   Again, no search results.
5. Apply the patch; run restart_all.
6. Repeat reindex with:
   ./misc/search_tools/rebuild_elasticsearch.pl -b -bn 439
   There should be no warning now and you should be able to find the record.

Signed-off-by: Magnus Enger <magnus@libriotech.no>
Followed the test plan. Works as advertised.
Signed-off-by: David Cook <dcook@prosentient.com.au>
Comment 9 David Cook 2025-01-20 02:20:54 UTC
Something is still bugging me...

t/db_dependent/Koha/SearchEngine/Elasticsearch.t

In theory, that test script has a test for a "large MARC record" which runs marc_records_to_documents() and it doesn't fail on "main".

It must be longer than 99999 bytes since it's switching to MARCXML from base64ISO2709.

So we did test for large MARC records on bug 38416. 

But... since the fatal error Janusz is fixing comes from MARC::File::USMARC not handling an exception during marc_to_utf8(), it must also be that going over 99999 bytes creates an invalid USMARC directory in the USMARC data. The decoder then does its string handling with positional math that no longer lines up, and it takes just the right combination of invalid bytes to trigger the error.

Since the unit test was just using single-byte ASCII... it should be impossible to generate invalid UTF-8 in that particular test scenario.

Let's see if I can break the unit test in main...
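
For illustration, here is a rough, self-contained sketch of the kind of record comment 9 describes. It is not the actual unit test; the field values and repetition counts are invented, and the final decode may or may not die depending on where the broken directory offsets land.

use strict;
use warnings;
use utf8;
use MARC::Record;
use MARC::Field;

# Single-byte ASCII padding can never produce an invalid UTF-8 sequence when
# the directory offsets go wrong, but multi-byte characters can get sliced
# mid-sequence.
my $record = MARC::Record->new();
$record->leader('00000nam a22000007a 4500');
$record->append_fields( MARC::Field->new( '245', '0', '0', a => 'Zażółć gęślą jaźń 中文測試' ) );

# Blow well past the 99999-byte ISO 2709 limit with repeated item-like fields.
$record->append_fields( MARC::Field->new( '952', ' ', ' ', o => 'Pułtusk 北京 ' x 50 ) )
    for 1 .. 500;

my $blob    = $record->as_usmarc();                             # oversized, invalid ISO 2709
my $decoded = eval { MARC::Record->new_from_usmarc($blob) };    # may die with "does not map to Unicode"
print $@ ? "decode failed: $@" : "decode survived\n";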
Comment 10 David Cook 2025-01-20 02:54:32 UTC
(In reply to David Cook from comment #9)
> Let's see if I can break the unit test in main...

I added a bunch of Chinese and some of the Polish from the test record, and I couldn't get the unit tests to break in main. 

I was about to give up... when I tried again, and I managed to get the following:

kohadev-koha@kohadevbox:koha(main)$ prove t/db_dependent/Koha/SearchEngine/Elasticsearch.t
t/db_dependent/Koha/SearchEngine/Elasticsearch.t .. 1/8     # Looks like you planned 70 tests but ran 55.

#   Failed test 'Koha::SearchEngine::Elasticsearch::marc_records_to_documents () tests'
#   at t/db_dependent/Koha/SearchEngine/Elasticsearch.t line 805.
UTF-8 "\x99" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.
# Looks like your test exited with 11 just after 4.

Hurray!
Comment 11 David Cook 2025-01-20 03:03:29 UTC
Created attachment 176768 [details] [review]
Bug 38913: (QA follow-up) test UTF-8 exceptions in large MARC records

MARC records with over 99999 bytes are invalid by spec, and when you use
UTF-8 encoded characters in your MARC records, there is the potential
to generate fatal errors in MARC::File::USMARC when it runs
"marc_to_utf8" from "MARC::File::Encode" during its "decode" operation.

That is, if you MARC::File::USMARC->encode a MARC record with over
99999 bytes (including a number of UTF-8 bytes), there is the
potential that running MARC::File::USMARC->decode on that same data
will generate a fatal exception.

The main patch in bug 38913 wraps the function doing the decode,
so that a bad record doesn't crash processing.

Without the patch, this unit test will fail. With the patch, this
unit test will pass.
Comment 12 Jonathan Druart 2025-01-20 08:51:09 UTC
This is clearly not enough (could go on separate bugs).

Testing this patch I have noticed several things:
1. 
$ ./misc/migration_tools/bulkmarcimport.pl -b -file test.xml -m=MARCXML
.UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.

Not really useful for guessing where the error is, but we know it's in the file, so we can search in it easily.

2. Stage the file and import using the UI

            id: 2
        status: failed
      progress: 0
          size: 1
borrowernumber: 51
          type: marc_import_commit_batch
         queue: long_tasks
          data: {"report":{"import_batch_id":"1","num_items_added":null,"num_ignored":null,"num_items_replaced":null,"num_items_errored":null,"num_added":null,"num_updated":null},"import_batch_id":"1","overlay_framework":null,"messages":[],"frameworkcode":""}
       context: {"firstname":null,"surname":"koha","flags":"1","number":"51","register_name":null,"emailaddress":null,"desk_id":null,"desk_name":null,"branchname":"Centerville","register_id":null,"shibboleth":"0","cardnumber":"42","branch":"CPL","interface":"intranet","id":"koha"}
   enqueued_on: 2025-01-20 08:40:28
    started_on: 2025-01-20 08:40:29
      ended_on: 2025-01-20 08:40:29

No info on the failure.

There is something in the log, but that should go into the report of the job:

==> /var/log/koha/kohadev/worker-output.log <==                                                                                                                                                                      
Record length of 527856 is larger than the MARC spec allows (99999 bytes). at /usr/share/perl5/MARC/File/USMARC.pm line 314.    
UTF-8 "\x85" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.

3. Now the record is in the DB, start a full reindex:

% koha-elasticsearch --rebuild -b  kohadev
UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.
Something went wrong rebuilding indexes for kohadev

No info on the problematic record! We should tell which record failed.
Comment 13 Jonathan Druart 2025-01-20 09:09:55 UTC
(In reply to Jonathan Druart from comment #12)

Hey, this was tested on bug 38713, not 38913, oops!
So basically what I described was the behaviour in main.

> This is clearly not enough (could go on a separate bugs).
> 
> Testing this patch I have noticed several things:
> 1. 
> $ ./misc/migration_tools/bulkmarcimport.pl -b -file test.xml -m=MARCXML
> .UTF-8 "\xC4" does not map to Unicode at
> /usr/share/perl5/MARC/File/Encode.pm line 35.
> 
> Not really useful for guessing where the error is, but we know it's in the file,
> so we can search in it easily.

With this patch:
"1 MARC records done in 81.9053399562836 seconds"

However, I had deleted all biblios and background_jobs before the import, and now I have:

MariaDB [koha_kohadev]> select count(*) from biblio\G
count(*): 1


MariaDB [koha_kohadev]> select count(*) from background_jobs\G
count(*): 2508

Interesting!...


> 2. Stage the file and import using the UI

Same with this patch.

> 3. Now the record is in the DB, start a full reindex:
> 
> % koha-elasticsearch --rebuild -b  kohadev
> UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm
> line 35.
> Something went wrong rebuilding indexes for kohadev
> 
> No info on the problematic record! We should tell which record failed.

We don't have anything in the output, which is problematic IMO.
Comment 14 Janusz Kaczmarek 2025-01-20 22:07:35 UTC
(In reply to Jonathan Druart from comment #13)
> With this patch:
> "1 MARC records done in 81.9053399562836 seconds"
> 
> However, I had deleted all biblios and background_jobs before the import, and
> now I have:
> 
> MariaDB [koha_kohadev]> select count(*) from biblio\G
> count(*): 1
> 
> 
> MariaDB [koha_kohadev]> select count(*) from background_jobs\G
> count(*): 2508
> 
> Interesting!...

Well, every Koha::Item->store triggers $indexer->index_records, so no wonder -- we have 2508 952 (item) fields in the test record :)

> > 2. Stage the file and import using the UI
> 
> Same with this patch.

This is also more or less clear.  In C4::ImportBatch::BatchCommitRecords, called by the worker, we call: 

my $marc_record = MARC::Record->new_from_usmarc($rowref->{'marc'});

despite also having the MARCXML representation of the record in the import_records table (import_records.marcxml). 

This is exactly the same issue that made David's web UI import die with this kind of record.  The worker dies because of the uncaught die generated by new_from_usmarc. This has nothing to do with this patch (or with David's previous patch) -- it is just another case of calling a function that can die without any eval / try. 


Now, if we create and save both versions (ISO 2709 and MARCXML) to the import_records table in C4::ImportBatch::_create_import_record, why not use the MARCXML version in C4::ImportBatch::BatchCommitRecords instead of the ISO 2709 one, which causes trouble with oversized records?

After this little change it seems to work -- I was able to import the huge test record via the UI:

diff --git a/C4/ImportBatch.pm b/C4/ImportBatch.pm
index 5aebaafacf..799b69f0ca 100644
--- a/C4/ImportBatch.pm
+++ b/C4/ImportBatch.pm
@@ -531,7 +531,7 @@ sub BatchCommitRecords {
     my $item_tag;
     my $item_subfield;
     my $dbh = C4::Context->dbh;
-    my $sth = $dbh->prepare("SELECT import_records.import_record_id, record_type, status, overlay_status, marc, encoding
+    my $sth = $dbh->prepare("SELECT import_records.import_record_id, record_type, status, overlay_status, marc, marcxml, encoding
                              FROM import_records
                              LEFT JOIN import_auths ON (import_records.import_record_id=import_auths.import_record_id)
                              LEFT JOIN import_biblios ON (import_records.import_record_id=import_biblios.import_record_id)
@@ -568,7 +568,7 @@ sub BatchCommitRecords {
         } else {
             $marc_type = 'USMARC';
         }
-        my $marc_record = MARC::Record->new_from_usmarc($rowref->{'marc'});
+        my $marc_record = MARC::Record->new_from_xml($rowref->{'marcxml'}, $rowref->{'encoding'});

         if ($record_type eq 'biblio') {
             # remove any item tags - rely on _batchCommitItems


> 
> > 3. Now the record is in the DB, start a full reindex:
> > 
> > % koha-elasticsearch --rebuild -b  kohadev
> > UTF-8 "\xC4" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm
> > line 35.
> > Something went wrong rebuilding indexes for kohadev
> > 
> > No info on the problematic record! We should tell which record failed.
> 
> We don't have anything in the output, which is problematic IMO.

Yes, this is problematic, because new_from_usmarc died and we didn't catch it.  But now that we call it inside an eval, we should be safe.
Comment 15 Janusz Kaczmarek 2025-01-20 22:17:43 UTC
I've created Bug 38933 for this stage/import from UI issue.
Comment 16 David Cook 2025-01-20 22:33:42 UTC
(In reply to Jonathan Druart from comment #12)
> No info on the problematic record! We should tell which record failed.

I raised Bug 32638 a couple years ago. I'm sure there's a bunch of reports about the MARC import failing silently. 

Never been high enough priority to fix it.
Comment 17 David Cook 2025-01-20 22:41:23 UTC
(In reply to Jonathan Druart from comment #12)
> This is clearly not enough (could go on separate bugs).

Yep. Step by step.
Comment 18 Janusz Kaczmarek 2025-01-20 22:47:43 UTC
(In reply to David Cook from comment #16)
> 
> I raised Bug 32638 a couple years ago. I'm sure there's a bunch of reports
> about the MARC import failing silently. 

At first glance, this seems to be a different (but somewhat related) problem.  The cause of 32638 seems to lie elsewhere, not in the MARC transformation itself.  Am I right?
Comment 19 David Cook 2025-01-20 23:13:12 UTC
(In reply to Janusz Kaczmarek from comment #18)
> (In reply to David Cook from comment #16)
> > 
> > I raised Bug 32638 a couple years ago. I'm sure there's a bunch of reports
> > about the MARC import failing silently. 
> 
> At first glance, this seems to be a different (but somehow related) problem.
> The cause of 32638 seems to lie elsewhere, not in the MARC transformation
> itself.  Am I right?

Yeah, I just meant that the MARC import doesn't surface errors/failures.
Comment 20 Jonathan Druart 2025-01-21 07:35:44 UTC
(In reply to Janusz Kaczmarek from comment #14)
> (In reply to Jonathan Druart from comment #13)
> > With this patch:
> > "1 MARC records done in 81.9053399562836 seconds"
> > 
> > However, I had deleted all biblios and background_jobs before the import, and
> > now I have:
> > 
> > MariaDB [koha_kohadev]> select count(*) from biblio\G
> > count(*): 1
> > 
> > 
> > MariaDB [koha_kohadev]> select count(*) from background_jobs\G
> > count(*): 2508
> > 
> > Interesting!...
> 
> Well, every Koha::Item->store triggers $indexer->index_records, so no wonder
> -- we have 2508 952 (item) fields in the test record :)

Yes, the "interesting" was sarcastic, hence the "..." but that was not obvious, sorry.

It's still a bug IMO.
Especially with this:
718                 $indexer->update_index( \@search_engine_record_ids, \@search_engine_records ) unless $skip_indexing;
Comment 21 Martin Renvoize (ashimema) 2025-01-21 07:45:14 UTC
This all reminds me a little about:

Bug 35104 - We should warn when attempting to save MARC records that contain characters invalid in XML

Whilst it's not specifically about record length, it's meant to try and prevent bad data making its way into Koha entirely.  That said, it sounds like this isn't "bad" data so much as just data our MARC utilities don't deal with well.
Comment 22 Janusz Kaczmarek 2025-01-21 11:13:03 UTC
(In reply to Jonathan Druart from comment #20)
> > Well, every Koha::Item->store triggers $indexer->index_records, so no wonder
> > -- we have 2508 952 (item) fields in the test record :)
> 
> Yes, the "interesting" was sarcastic, hence the "..." but that was not
> obvious, sorry.
> 
> It's still a bug IMO.
> Especially with this:
> 718                 $indexer->update_index( \@search_engine_record_ids,
> \@search_engine_records ) unless $skip_indexing;

Does it mean that, both in bulkmarcimport and when importing staged records from the UI, we should add bibliographic records with { skip_record_index => 1 }, then add items with { skip_record_index => 1 }, and then, at the very end (or after a certain number of records, or after each record), explicitly call: 

$indexer->index_records( $biblionumber(s), ...) ? 

Would that be the right way?
Comment 23 Jonathan Druart 2025-01-21 11:49:54 UTC
(In reply to Janusz Kaczmarek from comment #22)
> (In reply to Jonathan Druart from comment #20)
> > > Well, every Koha::Item->store triggers $indexer->index_records, so no wonder
> > > -- we have 2508 952 fields in the test record :)
> > 
> > Yes, the "interesting" was sarcastic, hence the "..." but that was not
> > obvious, sorry.
> > 
> > It's still a bug IMO.
> > Especially with this:
> > 718                 $indexer->update_index( \@search_engine_record_ids,
> > \@search_engine_records ) unless $skip_indexing;
> 
> Does it mean that both in bulkmarcimport and in import staged records from
> UI we should add bibliographic records with { skip_record_index => 1 } and
> then add items with { skip_record_index => 1 }, and then, at the very end,
> or after a certain number of records, or after each record, explicitly call: 
> 
> $indexer->index_records( $biblionumber(s), ...) ? 
> 
> Would it be a right way?

Yes, see what we do in Koha::Items->batch_update.
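
For reference, a hedged sketch of the batching pattern under discussion. The loop structure, the @records_to_import input, and the variable names are assumptions, not actual bulkmarcimport or import-batch code; the real-world reference is Koha::Items->batch_update, as noted above.

use strict;
use warnings;
use C4::Biblio qw( AddBiblio );
use Koha::Item;
use Koha::SearchEngine;
use Koha::SearchEngine::Indexer;

my @records_to_import;    # hypothetical input: filled by the staging/parsing step
my @imported_biblionumbers;

for my $incoming (@records_to_import) {
    # Add the biblio and its items without triggering per-record indexing.
    my ($biblionumber) = AddBiblio(
        $incoming->{record}, $incoming->{frameworkcode},
        { skip_record_index => 1 }
    );
    for my $item_data ( @{ $incoming->{items} } ) {
        Koha::Item->new( { %$item_data, biblionumber => $biblionumber } )
            ->store( { skip_record_index => 1 } );
    }
    push @imported_biblionumbers, $biblionumber;
}

# One indexing request at the end (or per batch) instead of one per item.
my $indexer = Koha::SearchEngine::Indexer->new( { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
$indexer->index_records( \@imported_biblionumbers, 'specialUpdate', 'biblioserver' );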
Comment 24 Katrin Fischer 2025-01-21 14:47:17 UTC
I see there is still a lot of discussion going on -- is it OK to push these patches as is and continue with the remaining issues on another bug, or should I wait?
Comment 25 Jonathan Druart 2025-01-21 14:58:52 UTC
(In reply to Katrin Fischer from comment #24)
> I see there is still a lot of discussion gong on - is it ok to push these
> patches as is and continue on another bug for remaining issues or should I
> wait?

You can push.
Comment 26 Katrin Fischer 2025-01-24 11:03:45 UTC
Pushed for 25.05!

Well done everyone, thank you!