Description
David Cook
2024-11-11 03:55:42 UTC
Created attachment 174318

Bug 38416: Failover to MARCXML if cannot roundtrip USMARC during indexing

This change fails over from USMARC to MARCXML if MARC::File::USMARC::decode generates any warnings while round-tripping the record.

Test plan:
0. Apply the patch
1. Set up your koha-testing-docker to use Elasticsearch
2. Create a new record with 15,000 characters in the 500$a field
3. Index that record (e.g. perl misc/search_tools/rebuild_elasticsearch.pl --biblios -v -v)
4. Note that the following warning appears: "Warnings encountered while roundtripping a MARC record to/from USMARC. Failing over to MARCXML"
5. View the "Elasticsearch record" on the detail page and note that the marc_format is MARCXML
6. Perform a search for the record (the keyword should be something that brings up other results too)
7. Note that the record appears correctly in the search results

Created attachment 174319

Bug 38416: Add unit tests

If we did push bug 38270, it would actually be tempting to fail over to MARCXML_COMPRESSED...

Note: When rebuilding the koha-testing-docker Elasticsearch indexes, I noticed zero impact on indexing time from adding the round-tripping step.
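Not the actual patch code — just a minimal sketch of the round-trip-then-failover idea described above. The helper name serialize_for_indexing and the returned format labels are assumptions for illustration:

```perl
use strict;
use warnings;
use MARC::Record;
use MARC::File::USMARC;
use MARC::File::XML;

# Sketch only: serialize to ISO 2709 (USMARC), decode it back, and fail
# over to MARCXML if the decode step emitted any warnings.
sub serialize_for_indexing {    # hypothetical helper name
    my ($record) = @_;
    my $usmarc  = $record->as_usmarc();
    my $decoded = MARC::Record->new_from_usmarc($usmarc);
    if ( $decoded->warnings() ) {
        warn "Warnings encountered while roundtripping a MARC record "
            . "to/from USMARC. Failing over to MARCXML";
        return ( $record->as_xml(), 'MARCXML' );
    }
    return ( $usmarc, 'base64ISO2709' );    # illustrative format label
}
```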
Created attachment 174389

Bug 38416: Tidy

Created attachment 174459

Bug 38416: Failover to MARCXML if cannot roundtrip USMARC during indexing

(Same commit message and test plan as attachment 174318.)

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 174460

Bug 38416: Add unit tests

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 174461

Bug 38416: Tidy

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 174542

Bug 38416: Failover to MARCXML if cannot roundtrip USMARC during indexing

(Same commit message and test plan as attachment 174318.)

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Created attachment 174543

Bug 38416: Add unit tests

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Created attachment 174544

Bug 38416: Tidy

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Pushed for 24.11! Well done everyone, thank you!

Backported to 24.05.x for upcoming 24.05.06

After this patch, it DIES inside this sub for something like 1/5 of my records: instead of recording "warnings" into an array, it crashes. And because it dies, it kills the WHOLE block of IDs being processed, so when I run rebuild_elasticsearch.pl with -c 1000 for block submission, every record in the affected block is lost from the index. (For example, I have 198136 biblios in the DB: 198136 records in the index with the old code, i.e. this patch removed, and 195464 records in the index with this new patch present.)

So, on rebuild_elasticsearch.pl, it dies with this message:

UTF-8 "\xC3" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm line 35.

It happens in the line:

$decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);

Note 1: I used the 24.11.xx branches, just with the Elasticsearch.pm code reverted (removed), and the old code works properly: it does reindex and has all records. But with this patch, I have ~2500 records lost from the index.

Note 2: we have a lot of non-ASCII symbols in Finnish language texts and Cyrillic texts. I am researching why, but I am still in the process.

(In reply to Andrii Nugged from comment #14)
> After this patch, it DIES inside this sub for something like 1/5 of my
> records,

Thanks for reporting this. Looking again at my code, I can see how that could be a risk.

> UTF-8 "\xC3" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm
> line 35.
>
> - it happens in the line:
>
> $decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);
>
> Note 1: I used the 24.11.xx branches, just with Elasticsearch.pm code
> reverted (removed), and it works on old code properly: it does reindex and
> has all records. But with this patch, I have ~2500 records lost from the
> index.

Are you able to see these records in your Koha search results? If they're failing in the indexing code, surely they should be failing in the search code too?

> Note 2: we have a lot of non-ASCII symbols in Finnish language texts and
> Cyrillic texts.

I haven't seen any problems in my non-English libraries, but perhaps they haven't triggered much indexing recently. Looking at the above, it seems like you might have some data problems? What do you have for position 09 in the leader? I do have a vague memory that there might be some Koha code somewhere for forcing UTF-8 on records even when the MARC records themselves aren't marked as UTF-8...

> I am researching why, but I am still in the process.

I'll create a new bug report and add a patch with an eval or try/catch so that a bad record doesn't cause a larger crash, but I am curious about the underlying cause too.
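As an aside for anyone checking their own data: leader position 09 holds the character coding scheme ('a' = UCS/Unicode, blank = MARC-8). A minimal sketch for counting records not marked as Unicode, assuming a Koha shell environment such as KTD and using the same Koha::Biblios/metadata_record APIs that appear later in this thread (the naive full scan is illustrative only):

```perl
use strict;
use warnings;
use Koha::Biblios;

# Count biblios whose leader/09 is not 'a' (UCS/Unicode); a slow,
# brute-force pass over every record, for diagnosis only.
my $biblios = Koha::Biblios->search;
my $suspect = 0;
while ( my $biblio = $biblios->next ) {
    my $record = eval { $biblio->metadata_record };
    next unless $record;
    $suspect++ if substr( $record->leader, 9, 1 ) ne 'a';
}
print "$suspect record(s) not marked as UCS/Unicode in leader/09\n";
```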
(In reply to David Cook from comment #15)
> I'll create a new bug report and add a patch with an eval or try/catch so
> that a bad record doesn't cause a larger crash, but I am curious about the
> underlying cause too.

Actually, before doing that, I think it would be good to be able to reproduce it in koha-testing-docker... I've tried adding in bad data, and MySQL is actually fighting me. Andrii, perhaps you can open a new bug report, link it to this one, and provide some data and steps for reproducing your problem?

I see there may be side effects, not backported to 23.11.x

(In reply to Fridolin Somers from comment #17)
> I see there may be side effects, not backported to 23.11.x

So far I'm not able to reproduce, but in theory there might be.

(In reply to David Cook from comment #18)
> (In reply to Fridolin Somers from comment #17)
> > I see there may be side effects, not backported to 23.11.x
>
> So far I'm not able to reproduce but in theory there might be.

Yeah no... koha-testing-docker comes with 1 bad record, I've added a 2nd bad record, and 434/436 records are indexed when using 'perl misc/search_tools/rebuild_elasticsearch.pl -d -v -b -c 10'.

Koha::BiblioUtils runs the Koha::Biblio->metadata_record() function within an eval, so if you can't get a MARC::Record from the XML, the exception for that record is caught. So we're talking about a record that is valid MARCXML and a valid MARC::Record but invalid USMARC — bad enough to trigger a fatal error during decoding (which is interesting, since creating a MARC::Record from bad USMARC typically works even when it shouldn't). I've got 1 more idea to try...

(In reply to Andrii Nugged from comment #14)
> - so, on rebuild_elasticsearch.pl it dies with such message:
>
> UTF-8 "\xC3" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm
> line 35.
>
> - it happens in the line:
>
> $decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);
>
> I am researching why, but I am still in the process.

After reviewing the main branch and v24.11.00, this seems very unlikely. If you had bad UTF8 data, the MARC::Record object would fail to be created from the MARCXML within an eval{}. To fail at '$decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);' with a UTF8 encoding error... it just doesn't make sense.

I wrote a little script to inject a "\xC3" byte into the UTF-8 record and tried to update a record using Koha APIs with mixed encodings, but something along the way converted it into the EFBFBD UTF-8 replacement character... I was more brutal and tried injecting raw C3 bytes into the text, but either DBI or MySQL itself seems to do automatic damage control and turns a C3 byte into a C383 UTF-8 byte sequence. There might be some configuration of bad bytes out there that can trigger the error you're having, but I can't find it.

There is a serious issue introduced by this patch (cf. Bug 38913). It is not so rare when you have UTF-8 encoded records rich in non-basic-Latin characters: the ISO 2709 string produced by as_usmarc will then often end not between Unicode characters (as it normally would with English-only letters) but in the middle of a composed character. Then you will always get this error (UTF-8 "\x85" does not map to Unicode at /usr/share/perl5/MARC/File/Encode.pm). There should be a way to save the initial idea behind the patch without making reindexing virtually impossible...
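A standalone illustration of that failure mode (not Koha code): decoding a UTF-8 byte string that has been cut off in the middle of a multi-byte character dies with the same kind of error seen above, since MARC::File::Encode decodes with a strict, croak-on-error flag:

```perl
use strict;
use warnings;
use Encode qw(encode decode);

# "é" is two bytes in UTF-8 (0xC3 0xA9); chop the string one byte short
# so it ends with a dangling 0xC3 lead byte.
my $bytes = encode( 'UTF-8', "caf\x{e9}" );
my $cut   = substr( $bytes, 0, length($bytes) - 1 );

# Strict decoding (FB_CROAK) dies with an error like:
#   UTF-8 "\xC3" does not map to Unicode at ...
my $text = decode( 'UTF-8', $cut, Encode::FB_CROAK );
```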
Wouldn't this be enough and OK?

```diff
@@ -842,8 +842,9 @@ sub marc_records_to_documents {
         my $usmarc_record = $record->as_usmarc();

         #NOTE: Try to round-trip the record to prove it will work for retrieval after searching
-        my $decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record);
-        if ( $decoded_usmarc_record->warnings() ) {
+        my $decoded_usmarc_record;
+        eval { $decoded_usmarc_record = MARC::Record->new_from_usmarc($usmarc_record); };
+        if ( $@ || $decoded_usmarc_record->warnings() ) {

             #NOTE: We override the warnings since they're many and misleading
```

It seems to work for me...

(In reply to Janusz Kaczmarek from comment #22)
> Wouldn't this be enough and OK?

Please have a look at the patch @ Bug 38913.

(In reply to Janusz Kaczmarek from comment #23)
> Please have a look at the patch @ Bug 38913.

There is also test data added there that provokes this issue on KTD.

(In reply to Janusz Kaczmarek from comment #24)
> (In reply to Janusz Kaczmarek from comment #23)
> > Please have a look at the patch @ Bug 38913.
>
> There is also test data added there that provokes this issue on KTD.

Awesome. Thanks for providing that. I'll take a look shortly.

(In reply to Janusz Kaczmarek from comment #23)
> Please have a look at the patch @ Bug 38913.

Thanks again, Janusz. I've Passed QA on your patch and added an updated unit test. It was hard to reproduce, but once reproduced it's so obvious. Thanks too, Andrii, for first raising the issue.

Was it too bad to just do MARCXML? This ended up a bit hacky for me.

(In reply to Tomás Cohen Arazi (tcohen) from comment #27)
> Was it too bad to just do MARCXML? This ended up a bit hacky for me.

FTR: These are the sizes for the first record in KTD in different formats [1]:

```
-rw-r--r-- 1 kohadev-koha kohadev-koha 1.8K Feb  4 18:40 record.b64
-rw-r--r-- 1 kohadev-koha kohadev-koha 2.3K Feb  4 18:41 record.json
-rw-r--r-- 1 kohadev-koha kohadev-koha 4.0K Feb  4 18:40 record.xml
```

[1] Extracted with the following commands respectively:

```shell
perl -MMIME::Base64 -MKoha::Biblios -MEncode -e 'my $b = Koha::Biblios->new->next(); print encode_base64( encode( "UTF-8", $b->metadata_record->as_usmarc))' > record.b64
perl -MMARC::Record::MiJ -MKoha::Biblios -e 'my $b = Koha::Biblios->new->next(); print $b->metadata_record->to_mij' > record.json
perl -MKoha::Biblios -e 'my $b = Koha::Biblios->new->next(); print $b->metadata_record->as_xml' > record.xml
```

(In reply to Tomás Cohen Arazi (tcohen) from comment #27)
> Was it too bad to just do MARCXML? This ended up a bit hacky for me.

I agree. Check out bug 38270.

I've added MARCXML and MARCXML_COMPRESSED options to ElasticsearchMARCFormat there.

From memory, MARCXML_COMPRESSED actually ends up being even more compact than USMARC and still has good performance.

(In reply to David Cook from comment #29)
> (In reply to Tomás Cohen Arazi (tcohen) from comment #27)
> > Was it too bad to just do MARCXML? This ended up a bit hacky for me.
>
> I agree. Check out bug 38270.
>
> I've added MARCXML and MARCXML_COMPRESSED options to ElasticsearchMARCFormat
> there.
>
> From memory, MARCXML_COMPRESSED actually ends up being even more compact than
> USMARC and still has good performance.

As I note in Comment 1 on bug 38270:

base64 marcxml: 4.9K
base64 isomarc: 1.3K
base64 zlib compressed marcxml: 1.1K (or 1012 bytes if you don't use newlines in the base64 encoding)

5KB x 1,000,000 records = 4.7GB
1KB x 1,000,000 records = 0.95GB
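In the same spirit as the extraction commands above, a hypothetical one-liner to measure the base64 size of zlib-compressed MARCXML for the same record (assuming Compress::Zlib is available, as it is on KTD):

```shell
# Sketch: compress the MARCXML, base64-encode it without newlines, print the byte count.
perl -MMIME::Base64 -MCompress::Zlib -MKoha::Biblios -MEncode -e 'my $b = Koha::Biblios->new->next(); my $xml = encode( "UTF-8", $b->metadata_record->as_xml ); print length( encode_base64( compress($xml), "" ) ), " bytes\n"'
```

At a million records, the gap between ~5KB and ~1KB per record is the difference between roughly 4.7GB and under 1GB of stored index payload, per the figures above.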