Bug 38270 - Add MARCXML options to ElasticsearchMARCFormat
Summary: Add MARCXML options to ElasticsearchMARCFormat
Status: Signed Off
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching - Elasticsearch
Version: Main
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: David Cook
QA Contact: Testopia
URL:
Keywords:
Depends on: 22258
Blocks: 27365
Reported: 2024-10-25 05:39 UTC by David Cook
Modified: 2024-11-13 11:39 UTC
CC: 5 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 38270: Add MARCXML and MARCXML_COMPRESSED to ElasticsearchMARCFormat syspref (6.76 KB, patch)
2024-11-11 05:57 UTC, David Cook
Bug 38270: Tidy (4.42 KB, patch)
2024-11-11 05:57 UTC, David Cook
Bug 38270: Add unit tests for MARCXML and MARCXML_COMPRESSED options (2.50 KB, patch)
2024-11-11 05:57 UTC, David Cook
Bug 38270: Add MARCXML and MARCXML_COMPRESSED to ElasticsearchMARCFormat syspref (6.83 KB, patch)
2024-11-13 11:39 UTC, Martin Renvoize (ashimema)
Bug 38270: Tidy (4.48 KB, patch)
2024-11-13 11:39 UTC, Martin Renvoize (ashimema)
Bug 38270: Add unit tests for MARCXML and MARCXML_COMPRESSED options (2.56 KB, patch)
2024-11-13 11:39 UTC, Martin Renvoize (ashimema)

Description David Cook 2024-10-25 05:39:53 UTC
As noted on bug 27365, Elasticsearch search results don't get fetched correctly if a MARCXML record cannot be serialized into the compact ISO 2709 MARC format.

One reason a record cannot be serialized is that a single MARC field contains more than 9999 bytes or the whole record contains more than 99999 bytes (the maximum lengths representable in the ISO 2709 directory and leader).

The change provided here will allow storing MARCXML instead of ISO 2709 MARC.

--

During development, it may turn out that a compressed form of MARCXML is preferable to an uncompressed form, so that the stored size stays comparable to the currently stored ISO MARC.
Comment 1 David Cook 2024-10-25 05:55:17 UTC
It looks like the marc_data in Elasticsearch is base64 encoded. Comparing that marc_data against a base64-encoded MARCXML export of the same record, the results are as follows:

base64 marcxml: 4.9K
base64 isomarc: 1.3K

Using Compress::Zlib with Z_BEST_COMPRESSION compression level:

base64 zlib compressed marcxml: 1.1K
(or 1012 bytes if you don't use newlines in the base64 encoding)

--

From a size perspective, I think I'm most intrigued by the zlib compressed MARCXML.
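
For reference, a minimal sketch of this kind of size comparison (assuming $record is a MARC::Record, MARC::File::XML is loaded so as_xml_record() is available, and Compress::Zlib and MIME::Base64 are installed):

    use Encode qw( encode );
    use MIME::Base64 qw( encode_base64 );
    use Compress::Zlib;    # exports compress() and Z_BEST_COMPRESSION

    my $marcxml = encode( 'UTF-8', $record->as_xml_record() );
    my $isomarc = encode( 'UTF-8', $record->as_usmarc() );

    # Passing '' as the second argument drops the newlines from the base64 output
    my $b64_marcxml      = encode_base64( $marcxml, '' );
    my $b64_isomarc      = encode_base64( $isomarc, '' );
    my $b64_zlib_marcxml = encode_base64( compress( $marcxml, Z_BEST_COMPRESSION ), '' );

    printf "marcxml: %d, isomarc: %d, zlib marcxml: %d\n",
        length $b64_marcxml, length $b64_isomarc, length $b64_zlib_marcxml;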
Comment 2 David Cook 2024-10-25 06:02:36 UTC
Looking at Koha/SearchEngine/Elasticsearch.pm, there is code that actually uses 'MARCXML' instead of 'base64ISO2709' if there are errors doing the following:

$record_document->{'marc_data'} = encode_base64(encode('UTF-8', $record->as_usmarc()));

--

I get the feeling that isn't working....

--

And yeah... Koha::SearchEngine::Elasticsearch::Search's "decode_record_from_result" method already supports MARCXML as a "marc_format" apparently.

--

It looks like it's going to be very easy to add 'base64_zlib_MARCXML' as a format as well.
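
Just to sketch what that might look like (purely illustrative, not the eventual patch; $es_marc_format, $record and $record_document are stand-ins for whatever the indexing code actually uses):

    # Hypothetical sketch only: pick the storage format for marc_data based on
    # the syspref value. Assumes Encode, MIME::Base64, Compress::Zlib and
    # MARC::File::XML are already loaded.
    if ( $es_marc_format eq 'base64ISO2709' ) {
        $record_document->{'marc_data'}   = encode_base64( encode( 'UTF-8', $record->as_usmarc() ) );
        $record_document->{'marc_format'} = 'base64ISO2709';
    }
    elsif ( $es_marc_format eq 'MARCXML' ) {
        $record_document->{'marc_data'}   = $record->as_xml_record();
        $record_document->{'marc_format'} = 'MARCXML';
    }
    else {    # compressed MARCXML
        my $xml = encode( 'UTF-8', $record->as_xml_record() );
        $record_document->{'marc_data'}   = encode_base64( compress( $xml, Z_BEST_COMPRESSION ), '' );
        $record_document->{'marc_format'} = 'base64_zlib_MARCXML';
    }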
Comment 3 David Cook 2024-10-25 06:20:37 UTC
Testing...

If I create a simple MARC record with two 500 note fields with 18000 characters in each $a subfield, it won't show up in Koha search results correctly when using Elasticsearch. This is in line with bug 27365.

It does seem that MARC::Record->as_usmarc() can output an invalid record which cannot be round-tripped via MARC::Record->new_from_usmarc(). This becomes clear when you start looking at MARC::File::USMARC, as Thomas has already noted. No warnings are generated by the as_usmarc()-related functions. The real fix for this bug in Koha would be to fix MARC::File::USMARC.

--

So yeah... I'm going to add explicit support for MARCXML and ZLIB_MARCXML to ElasticsearchMARCFormat 

After changing the preference, you'll need to re-index your records. Technically, I think that's already true, so that should be added to the description...
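
For anyone wanting to reproduce this outside of Koha, a quick sketch of the round-trip failure (the field contents are arbitrary):

    # Build a record with two oversized 500 fields and try the ISO 2709 round trip.
    use MARC::Record;
    use MARC::Field;

    my $record = MARC::Record->new();
    $record->append_fields(
        MARC::Field->new( '245', '0', '0', a => 'Oversized field test' ),
        MARC::Field->new( '500', ' ', ' ', a => 'x' x 18000 ),
        MARC::Field->new( '500', ' ', ' ', a => 'y' x 18000 ),
    );

    my $usmarc    = $record->as_usmarc();                      # no warnings, even though the directory overflows
    my $roundtrip = MARC::Record->new_from_usmarc($usmarc);    # warns and/or returns a mangled record

    print "round trip changed the record\n"
        if $roundtrip->as_formatted() ne $record->as_formatted();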
Comment 4 David Cook 2024-10-25 06:24:02 UTC
This actually looks like it's going to be a very simple change. It's 5:21pm on a Friday right now, so I'm not going to do this now, but I think that I will do this next week.

It's going to be a game changer for a few of our libraries with records with a large number of items or extra large fields.
Comment 5 David Cook 2024-11-11 03:23:35 UTC
Doing some more testing here...

1.
For a record with over 99,999 characters (for example a bib record with twenty 500 fields with 5000 characters in the $a), it looks like it really does switch to using MARCXML for the marc_data in Elasticsearch and the marc_format is listed as MARCXML. All good!

2.
For a record with one 500 field with 15,000 characters in the $a, it uses a marc_format of "base64ISO2709", and while it does get indexed... the metadata from marc_data cannot be displayed properly on the Koha search result pages.

I suppose for all records using "base64ISO2709", we could always try to round-trip it first before storing, and failing that we could switch to MARCXML. 

That said, I'm still interested in the Compressed MARCXML...

But if we reach a scenario where we're using "base64ISO2709" and 1+ fields have more than 9999 chars while the overall record is under 99999 chars... then it's not going to be a huge record anyway...

3.
For a record with one 500 field over 32766 bytes in length, you run into bug 38101.
Comment 6 David Cook 2024-11-11 03:54:27 UTC
(In reply to David Cook from comment #5)
> I suppose for all records using "base64ISO2709", we could always try to
> round-trip it first before storing, and failing that we could switch to
> MARCXML. 

Except that I've tried this and MARC::File::USMARC will return a MARC::Record object even when it cannot really parse the incoming USMARC...

A lot of warnings will be generated but that's it.

We could fail over to MARCXML whenever we encounter any USMARC decode warnings, but there will be false positives.
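
For the record, the round-trip check described above would look something like this (illustrative only, not shipped code; it would also trip on harmless warnings):

    # Illustrative sketch: trap warnings while round-tripping the USMARC blob,
    # since new_from_usmarc() returns a MARC::Record object even on bad input.
    my @decode_warnings;
    {
        local $SIG{__WARN__} = sub { push @decode_warnings, @_ };
        MARC::Record->new_from_usmarc( $record->as_usmarc() );
    }

    if (@decode_warnings) {
        # Fall back to MARCXML for this record (may be a false positive).
        $record_document->{'marc_data'}   = $record->as_xml_record();
        $record_document->{'marc_format'} = 'MARCXML';
    }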
Comment 7 David Cook 2024-11-11 05:57:00 UTC
Created attachment 174320 [details] [review]
Bug 38270: Add MARCXML and MARCXML_COMPRESSED to ElasticsearchMARCFormat syspref

This change adds MARCXML and MARCXML_COMPRESSED to the ElasticsearchMARCFormat
system preference.

Previously, MARCXML was a fallback format for base64ISO2709, but this change
makes it a selectable choice in its own right.

For storage and performance reasons, MARCXML_COMPRESSED is another alternative.
It applies the DEFLATE algorithm via the zlib library's C bindings. It is just
as fast as base64ISO2709 for indexing, and it is actually superior in terms of
storage, especially for large records which fall back to MARCXML.

Test plan:
0. Apply the patch
0b. Set up koha-testing-docker to use Elasticsearch
0c. koha-plack --restart kohadev
1. Go to http://localhost:8081/cgi-bin/koha/admin/preferences.pl?op=search&searchfield=ElasticsearchMARCFormat
2. Switch the setting to "ISO2709"
3. Create a MARC record with a field over 9999 characters
4. Create a MARC record with a record length over 99999 (but individual
field length of under 32000 characters)
5. Reindex the database
6. Search for the records (using a keyword that matches these and other records)
7. Note that the record with a field over 9999 characters is not properly retrieved
8. Look on the detail page at "Elastic record" and note that the records use
a "marc_format" of "base64ISO2709" and "MARCXML" respectively

9. Change "ElasticsearchMARCFormat" to "MARCXML"
10. Reindex the database
11. Note that the records now use a marc_format of "MARCXML" and both
records are now displaying properly in search

12. Change "ElasticsearchMARCFormat" to "MARCXML compressed using DEFLATE algorithm"
13. Reindex the database
14. Note that the records now use a marc_format of "MARCXML_COMPRESSED" and
both records are now displaying properly in search

15. prove -v t/db_dependent/Koha/SearchEngine/Elasticsearch.t

*Bonus points*
- Create a record with 100 items with the same large note fields
- Using "ISO2709" note that the fallback is to "MARCXML" and the marc_data
field is very large
- Change to "MARCXML compressed using DEFLATE algorithm" and note that
the marc_data field is very small.
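
Regarding the MARCXML_COMPRESSED format, the decode side is essentially the reverse of the encode step. A sketch only, assuming the stored value is base64-encoded, zlib-compressed UTF-8 MARCXML, with $marc_data standing in for whatever decode_record_from_result receives:

    # Sketch of decoding a MARCXML_COMPRESSED marc_data value back into a MARC::Record.
    use Encode qw( decode );
    use MIME::Base64 qw( decode_base64 );
    use Compress::Zlib;    # exports uncompress()
    use MARC::Record;
    use MARC::File::XML;

    my $marcxml = decode( 'UTF-8', uncompress( decode_base64($marc_data) ) );
    my $record  = MARC::Record->new_from_xml( $marcxml, 'UTF-8' );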
Comment 8 David Cook 2024-11-11 05:57:02 UTC
Created attachment 174321 [details] [review]
Bug 38270: Tidy
Comment 9 David Cook 2024-11-11 05:57:05 UTC
Created attachment 174322 [details] [review]
Bug 38270: Add unit tests for MARCXML and MARCXML_COMPRESSED options
Comment 10 David Cook 2024-11-12 00:00:25 UTC
As an aside, I just noticed one of my libraries with a record where the marc_data MARCXML (pre-existing ISO2709 to MARCXML fallback) is 741,200 characters long. Only 4,000 characters are the bib record itself. The rest is 952 fields of embedded item data. 

I'm curious to see how long that would be using MARCXML_COMPRESSED instead...
Comment 11 Martin Renvoize (ashimema) 2024-11-13 11:39:33 UTC
Created attachment 174456 [details] [review]
Bug 38270: Add MARCXML and MARCXML_COMPRESSED to ElasticsearchMARCFormat syspref

This change adds MARCXML and MARCXML_COMPRESSED to the ElasticsearchMARCFormat
system preference.

Previously, MARCXML was a fallback format for base64ISO2709, but this change
makes it a selectable choice in its own right.

For storage and performance reasons, MARCXML_COMPRESSED is another alternative.
It applies the DEFLATE algorithm via the zlib library's C bindings. It is just
as fast as base64ISO2709 for indexing, and it is actually superior in terms of
storage, especially for large records which fall back to MARCXML.

Test plan:
0. Apply the patch
0b. Set up koha-testing-docker to use Elasticsearch
0c. koha-plack --restart kohadev
1. Go to http://localhost:8081/cgi-bin/koha/admin/preferences.pl?op=search&searchfield=ElasticsearchMARCFormat
2. Switch the setting to "ISO2709"
3. Create a MARC record with a field over 9999 characters
4. Create a MARC record with a record length over 99999 (but individual
field length of under 32000 characters)
5. Reindex the database
6. Search for the records (using a keyword that matches these and other records)
7. Note that the record with a field over 9999 characters is not properly retrieved
8. Look on the detail page at "Elastic record" and note that the records use
a "marc_format" of "base64ISO2709" and "MARCXML" respectively

9. Change "ElasticsearchMARCFormat" to "MARCXML"
10. Reindex the database
11. Note that the records now use a marc_format of "MARCXML" and both
records are now displaying properly in search

12. Change "ElasticsearchMARCFormat" to "MARCXML compressed using DEFLATE algorithm"
13. Reindex the database
14. Note that the records now use a marc_format of "MARCXML_COMPRESSED" and
both records are now displaying properly in search

15. prove -v t/db_dependent/Koha/SearchEngine/Elasticsearch.t

*Bonus points*
- Create a record with 100 items with the same large note fields
- Using "ISO2709" note that the fallback is to "MARCXML" and the marc_data
field is very large
- Change to "MARCXML compressed using DEFLATE algorithm" and note that
the marc_data field is very small.

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 12 Martin Renvoize (ashimema) 2024-11-13 11:39:35 UTC
Created attachment 174457 [details] [review]
Bug 38270: Tidy

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 13 Martin Renvoize (ashimema) 2024-11-13 11:39:38 UTC
Created attachment 174458 [details] [review]
Bug 38270: Add unit tests for MARCXML and MARCXML_COMPRESSED options

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>