Bug 27365 - Koha doesn't check marcxml field size is < 10000 and fails in various places
Summary: Koha doesn't check marcxml field size is < 10000 and fails in various places
Status: CONFIRMED
Alias: None
Product: Koha
Classification: Unclassified
Component: MARC Bibliographic data support
Version: Main
Hardware: All OS: All
Importance: P5 - low normal
Assignee: Bugs List
QA Contact: Testopia
URL:
Keywords:
Depends on: 38270 38416
Blocks:
Reported: 2021-01-08 15:19 UTC by Didier Gautheron
Modified: 2024-11-11 05:56 UTC (History)
14 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
example MARC record (18.01 KB, text/plain)
2022-10-28 19:19 UTC, Eric Phetteplace
Short script to show the problem with large subfields (1.19 KB, application/x-perl)
2024-06-03 16:02 UTC, Thomas Klausner

Description Didier Gautheron 2021-01-08 15:19:29 UTC
Hi

From the ISO standard:
> The core of the framework is a MARCXML schema that allows lossless round-trip conversion of an ISO 2709 MARC 21 record and an XML-encoded MARC 21 record.

Thus a field's size is at most 9999 bytes.

But  /usr/share/perl5/MARC/File/USMARC.pm doesn't care:

my $direntry = sprintf( "%03s%04d%05d", $field->tag, $len, $dataend );

and if $len >= 10000 it silently creates a bogus directory entry with a length of 13 rather than 12.
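
For illustration, a minimal sketch of the failing round trip (the tag choice and data here are made up):

use MARC::Record;
use MARC::Field;
use MARC::File::USMARC;

my $record = MARC::Record->new();
$record->append_fields(
    MARC::Field->new( '245', '0', '0', a => 'A title' ),
    MARC::Field->new( '505', '0', ' ', a => 'x' x 10_001 ),   # field data > 9999 bytes
);
my $blob = $record->as_usmarc();                # the 505 directory entry is 13 bytes, not 12
my $back = MARC::File::USMARC->decode($blob);   # warns and returns a garbled record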

Such non-conforming MARCXML records fail in various places: export, and the search results list with Elasticsearch, which stores the ISO 2709 record in its index and uses it as the data source for the results list.


e.g. it's easy to trigger in French UNIMARC with field 359 (sorry, the document below is in French):
https://www.transition-bibliographique.fr/wp-content/uploads/2019/02/B359-2002.pdf
Comment 1 David Cook 2021-01-11 06:10:05 UTC
I don't think that there is any requirement in MARCXML to adhere to ISO 2709. 

However, it does create a problem when trying to export MARCXML as ISO 2709. That's true. 

Wait, did you say that Elasticsearch stores the ISO 2709 record? I think I've seen that and that's awful... we shouldn't be doing that.
Comment 2 Didier Gautheron 2021-01-11 09:58:07 UTC
In my understanding it is.
From
https://www.iso.org/obp/ui/#iso:std:iso:25577:ed-1:v1:en

In 2001, the U.S. Library of Congress developed a framework for working with MARC data in an XML environment. The core of the framework is a MARCXML schema that allows lossless round-trip conversion of an ISO 2709 MARC 21 record and an XML-encoded MARC 21 record.


IMO the operative words are 'lossless round-trip'.

But I don't have access to the full standard; it's not free, and anyway it's not available in France, you can't buy it!

Moreover Koha may use XML --> ISO and back in a couple of places.

Yes, by default ES stores the whole record as ISO 2709 within its index.
cf. the syspref ElasticsearchMARCFormat and Koha/SearchEngine/Elasticsearch.pm; there's a f
Comment 3 David Cook 2021-01-11 23:39:02 UTC
(In reply to Didier Gautheron from comment #2)
> In my understanding it is.
> From
> https://www.iso.org/obp/ui/#iso:std:iso:25577:ed-1:v1:en
> 
> In 2001, the U.S. Library of Congress developed a framework for working with
> MARC data in an XML environment. The core of the framework is a MARCXML
> schema that allows lossless round-trip conversion of an ISO 2709 MARC 21
> record and an XML-encoded MARC 21 record.
> 

That's interesting. I'd never heard of ISO 25577. I wonder if anyone actually uses it. Technically, it's a separate schema from MARCXML, although it looks like MarcXchange is supposed to be a superset of MARCXML. 

It looks like there's a newer version of ISO 25577: https://www.iso.org/obp/ui/#iso:std:iso:25577:ed-2:v1:en. 

Here's the MarcXchange XML schema:
https://www.loc.gov/standards/iso25577/marcxchange-2-0.xsd

However neither https://www.loc.gov/standards/iso25577/marcxchange-2-0.xsd nor https://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd explicitly limits the length of fields. 

If you look again at the "Scope" section, you'll see the following:

"It does not define the length or the content of individual records and does not assign any meaning to tags, indicators, or identifiers, these specifications being the functions of an implementation format."

"Validation of MARC records content is not enforced by the schema but by dedicated software tailored for the specific usage (e.g. the specific MARC-format)."

In that way, it seems like the expectation is that you use ISO 2709 with MarcXChange as well...
Comment 4 David Cook 2021-01-11 23:41:45 UTC
(In reply to Didier Gautheron from comment #2)
> But I don't have access to the full standard; it's not free, and anyway it's
> not available in France, you can't buy it!
> 

Yeah, you have to love ISO standards like that...

However, it looks like we can see a free draft from 2006. It might give us some insight: https://www.loc.gov/standards/iso25577/ISO_DIS_25577__E_.pdf
Comment 5 David Cook 2021-01-11 23:42:51 UTC
(In reply to David Cook from comment #3)
> In that way, it seems like the expectation is that you use ISO 2709 with
> MarcXChange as well...

Of course if we look at ISO 2709...

https://www.iso.org/standard/41319.html

The abstract says the following:

"ISO 2709:2008 specifies the requirements for a generalized exchange format which will hold records describing all forms of material capable of bibliographic description as well as other types of records. It does not define the length or the content of individual records and does not assign any meaning to tags, indicators or identifiers, these specifications being the functions of an implementation format."

Note the part about it not defining the length of records.
Comment 6 David Cook 2021-01-11 23:49:37 UTC
Found some old extracts from ISO 2709 (https://files.eric.ed.gov/fulltext/ED087403.pdf) which ignore record length. I guess they weren't worried in the 70s.
Comment 7 David Cook 2021-01-11 23:56:42 UTC
If you look at https://www.iso.org/obp/ui/#iso:std:iso:2709:ed-4:v1:en, you can't see the contents without paying. However, you can see the table of contents, and in the table of contents it says "4.3.2 Record length (octets 0 to 4)"

When we look at the MARC standard, which I think is technically an implementation of ISO 2709, we can see "00-04 - Record length" at https://www.loc.gov/marc/bibliographic/bdleader.html

Thus, it seems that to be MARC compliant, you need to restrict yourself to these limitations.

And to get that lossless round-trip, you'd need to impose additional limitations on MARCXML.

It's really a shame that they didn't make those restrictions explicit in the XSD file. I wonder if that was intentional or a mistake.

But from a practical standpoint... limiting MARCXML to be ISO 2709 compliant seems so... impractical. 

But you're right that not limiting MARCXML does lead to conversion errors...
Comment 8 David Cook 2021-01-11 23:57:21 UTC
To be honest, at this point, I think so many people have non-compliant (ie long) records that trying to enforce this is going to be next to impossible anyway, but...
Comment 9 Blou 2022-02-01 20:59:42 UTC
Hello David,
I read through this thread... which ended on an ellipsis.  Amusingly quite on point.

I find the discussion on the standard fascinating ('exchange' is a stretch; I realized it was only you keeping notes), but I was lazily hoping for something practical.

Unfortunately, I was so happy to find this bz after such a lengthy search to understand why our biblios with big notes (505$a) indexed on Zebra would give us crappy result entries with no title or bib number, even though notes aren't even considered in results.tt

Do you have a suggestion?  Where is the 9999 limit, in the end?  Because I don't see it anywhere in the code, nor in MARC perl libraries.
Comment 10 Katrin Fischer 2022-02-01 21:40:17 UTC
I think I was told you could change the 'record size limit' somewhere in the Zebra config files, but I don't remember where exactly.
Comment 11 Katrin Fischer 2022-02-01 21:44:55 UTC
> Do you have a suggestion?  Where is the 9999 limit, in the end?  Because I
> don't see it anywhere in the code, nor in MARC perl libraries.

This here looks helpful:
https://git.koha-community.org/Koha-community/Koha/commit/2660cc8f0bbb7f8d0903aa2145095a43c84c5a8e
Comment 12 Blou 2022-02-01 21:55:54 UTC
Damn, I said Zebra out of habit.  I meant Elastic Search.  We moved most of our users to ES already.  I've never seen that error before when using zebra, btw...
Comment 13 Katrin Fischer 2022-02-01 21:59:12 UTC
(In reply to Blou from comment #12)
> Damn, I said Zebra out of habit.  I meant Elastic Search.  We moved most of
> our users to ES already.  I've never seen that error before when using
> zebra, btw...

We've recently had some records that Zebra would not index. I haven't read about any size limitations with Elastic, so maybe it's another issue there? Filing a separate bug or sending an email to the mailing list might turn up some ideas.
Comment 14 David Cook 2022-02-01 23:00:50 UTC
(In reply to Blou from comment #9)
> Unfortunately, I was so happy to find this bz after such a lengthy search to
> understand why our biblios with big notes (505$a) indexed on Zebra would
> give us crappy result entries with no title or bib number, even though notes
> aren't even considered in results.tt
> 
> Do you have a suggestion?  Where is the 9999 limit, in the end?  Because I
> don't see it anywhere in the code, nor in MARC perl libraries.

Yeah... I can't see why the 9999 limit would necessarily cause a problem in Koha unless one was exporting as "usmarc"? 

If you can provide more details, I can have a think.
Comment 15 Blou 2022-02-02 14:00:53 UTC
We have the problem in searches (opac/staff).  ES indexes without any problem.  I see the data returned when turning on trace_to in koha-conf.xml.

But somewhere in Koha or MARC library, something breaks.  The records with a 505$a longer than 9999 characters are not displayed correctly in the results.

Actually, NO data shows up.  But there's an empty entry in the results.  The other records are listed without problem.

Considering that the XSLT for the results never refers to 505 in the slightest, it seems to me it's the MARCXML that breaks down. Somewhere, somehow.
Comment 16 David Cook 2022-02-06 22:46:54 UTC
(In reply to Blou from comment #15)
> We have the problem in searches (opac/staff).  ES indexes without any
> problem.  I see the data returned when turning on trace_to in koha-conf.xml.
> 
> But somewhere in Koha or MARC library, something breaks.  The records with a
> 505$a longer than 9999 characters are not displayed correctly in the results.
> 
> Actually, NO data shows up.  But there's an empty entry in the results.  The
> other records are listed without problem.
> 
> Considering that the XSLT for the results never refers to 505 in the
> slightest, it seems to me it's the MARCXML that breaks down. Somewhere, somehow.

Are you able to reliably reproduce that problem? If so, attach a copy of that record here with the steps needed to reproduce it. Someone might be able to take it somewhere then. 

Long ago, I locally customized the Zebra indexing to only index 5000 characters for the notes fields, as I think that 9999+ notes fields were causing us problems, but I think the original record would still be retrieved and used in Koha. Since you're using ES, that's probably not relevant.

Sounds like more investigation is needed, but also sounds like an edgecase, so perhaps more of a support issue...
Comment 17 David Cook 2022-02-06 22:50:27 UTC
(You may also want to change the issue report title to something more specific. 'fails in various places' is too vague I think.)
Comment 18 Eric Phetteplace 2022-10-28 19:19:01 UTC
Created attachment 142786 [details]
example MARC record

(In reply to David Cook from comment #16)
> (In reply to Blou from comment #15)
> > We have the problem in searches (opac/staff).  ES indexes without any
> > problem.  I see the data returned when turning on trace_to in koha-conf.xml.
> > 
> > But somewhere in Koha or MARC library, something breaks.  The records with a
> > 505$a longer than 9999 characters are not displayed correctly in the results.
> > 
> > Actually, NO data shows up.  But there's an empty entry in the results.  The
> > other records are listed without problem.
> > 
> > Considering that the XSLT for the results never refers to 505 in the
> > slightest, it seems to me it's the MARCXML that breaks down. Somewhere, somehow.
> 
> Are you able to reliably reproduce that problem? If so, attach a copy of
> that record here with the steps needed to reproduce it. Someone might be
> able to take it somewhere then. 
> 
> Long ago, I locally customized the Zebra indexing to only index 5000
> characters for the notes fields, as I think that 9999+ notes fields were
> causing us problems, but I think the original record would still be
> retrieved and used in Koha. Since you're using ES, that's probably not
> relevant.
> 
> Sounds like more investigation is needed, but also sounds like an edgecase,
> so perhaps more of a support issue...

We're able to recreate this bug with the attached record. If you search for it on the staff side, a broken search result appears with the cover image in the left-side column and the list of action links (hold, add to cart, etc...) in the second column, no metadata. The public search results show an analogous problem. To fix it, we split the long 505 field in two; that record works fine.

We're on Elasticsearch. I tried retrieving the record with the API as well as going to its detail views pages on the public and staff sides; all of those are OK. So the problem is specific to search results. I did not test on Zebra.
Comment 19 Katrin Fischer 2022-11-16 23:23:31 UTC
Just a note here: the normal cataloguing editor does enforce a length; the input fields have maxsize = 9999 set (at least in MARC21). Is this different for UNIMARC? But if you import using staged import, it's probably just being added as is.
Comment 20 Katrin Fischer 2022-11-16 23:26:44 UTC
(In reply to Eric Phetteplace from comment #18)
> Created attachment 142786 [details]
> example MARC record
> 
> We're able to recreate this bug with the attached record. If you search for
> it on the staff side, a broken search result appears with the cover image in
> the left-side column and the list of action links (hold, add to cart,
> etc...) in the second column, no metadata. The public search results show an
> analogous problem. To fix it, we split the long 505 field in two; that
> record works fine.
> 
> We're on Elasticsearch. I tried retrieving the record with the API as well
> as going to its detail views pages on the public and staff sides; all of
> those are OK. So the problem is specific to search results. I did not test
> on Zebra.

Not sure if this is good or bad news, but I imported the attached record into my dev installation without problems. The record is searchable, and the display on the detail page and in the result list looks OK too. (Maybe something has changed since this was filed initially?)

If someone could confirm, we might be able to close this?
Comment 21 Julian Maurice 2024-01-23 10:06:41 UTC
> If someone could confirm, we might be able to close this?
This is still an issue in master. 
I tried importing a MARCXML file with a field containing more than 9999 characters, and while the import was successful, it resulted in a lot of garbage in the stored MARCXML (ASCII 30 and 29, the ISO 2709 field and record terminators, in seemingly random places).

I think it should be fixed by fixing MARC::Record (not sure how: excluding big fields from the output? truncating? ...), and/or by no longer using ISO 2709 in Koha wherever we can.
For instance, when importing a MARCXML file I believe Koha converts it to ISO 2709 (for storing in the import_records table) and then back to MARCXML (when creating the biblio).
Comment 22 Thomas Klausner 2024-06-03 15:49:44 UTC
Hi!

We've also encountered this problem (fields with a value > 9999 bytes breaking), especially since Elasticsearch uses a USMARC dump stored in the index to render the results. (Interestingly, there is a workaround there to handle the whole record being larger than 99999 bytes.)

Attached you can find a small script that shows the problematic behaviour (subfield_to_large.pl). It will create a new MARC Record with a field > 9999 bytes, export it as USMARC, and then create a new MARC Record from this dump. Basically what ES does in
Koha::SearchEngine::Elasticsearch->marc_records_to_documents
and
Koha::SearchEngine::Elasticsearch::Search->decode_record_from_result

This will trigger a bunch of warnings and produce an invalid record.

Run it like this to produce the error: 
subfield_to_large.pl

Or pass any argument to shorten the data a bit, in which case it works:
subfield_to_large.pl 1


I also have a fix for MARC::File::USMARC->_build_tag_directory. BUT: there are two ways to "fix" it:

a) throw an exception if invalid (=too long) data is passed
b) shorten the data to 9999 bytes

a) has the advantage of being correct and allowing whoever creates the data to fix the data. But sometimes you don't have control over the data

b) is bad because it silently loses data (but it's smart enough not to cut UTF-8 chars in half; a rough sketch of such truncation follows below)

So maybe c) would be needed, which could somehow allow the user / Koha to set either a) or b)
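
For b), a rough sketch of byte-limited truncation that keeps UTF-8 characters whole (just the idea, not the actual patch):

use Encode qw(decode_utf8 encode_utf8);

sub truncate_utf8_bytes {
    my ( $bytes, $max ) = @_;
    return $bytes if length($bytes) <= $max;
    my $chars = decode_utf8($bytes);
    $chars = substr( $chars, 0, $max );              # never more characters than $max
    $chars = substr( $chars, 0, -1 )
        while length( encode_utf8($chars) ) > $max;  # drop whole characters until it fits
    return encode_utf8($chars);
}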


Anyway, I'm not sure how to get a patch into MARC::File::USMARC (or if Koha is using the slightly old version from CPAN or a custom one)
Comment 23 Thomas Klausner 2024-06-03 16:01:18 UTC
(In reply to Katrin Fischer from comment #19)
> Just a note here: the normal cataloguing editor does enforce a length; the
> input fields have maxsize = 9999 set (at least in MARC21). Is this different
> for UNIMARC? But if you import using staged import, it's probably just being
> added as is.

Using the latest KTD, I could enter data longer than 9999 bytes (even ignoring chars vs bytes..) into e.g. 521$a, which does have a `max_length`. So maybe the max length is ignored (another bug?).

And even if it wasn't ignored, the length limit is per *field*, not per *subfield*, so e.g. even if (521$a < 9999) && (521$b < 9999), 521$a + 521$b could be greater than 9999 (triggering the bug).
Comment 24 Thomas Klausner 2024-06-03 16:02:18 UTC
Created attachment 167351 [details]
Short script to show the problem with large subfields

Here's the attachment...
Comment 25 Katrin Fischer 2024-06-04 04:36:40 UTC
Hm, this seems like a step back from Zebra where we switched to use MARCXML to get around size limits with many items etc. a long while ago?
Comment 26 Thomas Klausner 2024-06-04 09:15:42 UTC
(In reply to Katrin Fischer from comment #25)
> Hm, this seems like a step back from Zebra where we switched to use MARCXML
> to get around size limits with many items etc. a long while ago?

I'm all for moving forward and storing MARCXML in the Elasticsearch document in `marc_data`. This already happens when some warnings are encountered during the conversion to USMARC. And there is a field `marc_format` to indicate which format is used in `marc_data`.

Updating Koha::SearchEngine::Elasticsearch 781-808 seems quite easy. And should not need a re-indexing or any changes further "down", because Koha::SearchEngine::Elasticsearch::Search->decode_record_from_result uses `marc_format` to figure out how to handle `marc_data`. (But I think this should be a new ticket)

But whatever we do in Koha, MARC::Record is still buggy. Does anybody here know what the process is to get updates into MARC::Record? The CPAN RT queue seems unmaintained, and I cannot find a git repo anywhere.
Comment 27 David Cook 2024-06-05 01:44:53 UTC
(In reply to David Cook from comment #1)
> Wait, did you say that Elasticsearch stores the ISO 2709 record? I think
> I've seen that and that's awful... we shouldn't be doing that.

Now that I'm using Elasticsearch a lot, I'm even more upset about this haha. 

Just bumped into this issue the other day where Koha search is failing because Elasticsearch has a MARC record with a field larger than 9999 bytes. 

I'm planning on fixing up the MARC records to better adhere to the MARC standard in the short-term, but we do need a better solution here for the long-run.
Comment 28 David Cook 2024-06-05 01:45:15 UTC
I suggest this be an umbrella report, and we have spin-off reports for the specific places that are malfunctioning.
Comment 29 David Cook 2024-06-05 02:01:35 UTC
(In reply to Thomas Klausner from comment #26)
> But whatever we do in Koha, MARC::Record is still buggy. Does anybody here
> know what the process is to get updates into MARC::Record? The CPAN RT
> queue seems unmaintained, and I cannot find a git repo anywhere.

I don't know about process, but it's maintained by Galen Charlton who is CCed into this bug. You can contact him through various channels I imagine.

However, I can't recall if this is actually a fixable problem.

We're encountering inherent constraints of the ISO 2709 MARC specification. If you review the documentation for the Leader, you'll see that the record length can only go up to 99999, and the length of the leader and directory can only go up to 99999. I think those may have been worked around a bit, as they can be cheated.

But the part that can't be cheated is the "directory". It has a fixed format of 12 characters, and a field length cannot be longer than 9999 no matter what. If it's greater, then it becomes impossible to parse the directory. 
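
Roughly, a conforming reader walks the directory in fixed 12-byte entries; a sketch, assuming $directory holds the directory portion of the record (this is not the MARC::Record code):

# tag = 3 bytes, length-of-field = 4 bytes, starting position = 5 bytes
my @entries;
while ( length($directory) >= 12 ) {
    my ( $tag, $len, $offset ) = unpack 'A3 A4 A5', substr( $directory, 0, 12, '' );
    push @entries, [ $tag, $len, $offset ];
}
# A 13-byte entry shifts every following entry by one byte, so tags, lengths
# and offsets are read from the wrong positions from that point on.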

--

That said... in theory the MARC standard could be changed/hacked, so that the "20 - Length of the length-of-field portion" could be longer, and then the MARC processing library could interpret a directory as having longer field lengths... 

...but that would really be unprecedented I think. Technically, it would be defying the specification as well.
Comment 30 David Cook 2024-06-05 02:03:48 UTC
The ISO 2709 MARC format is just plain flawed. It was invented in the 1960s when computing was quite different. 

With the Elasticsearch issue, the shortest path is probably to store the MARCXML instead of the ISO 2709 MARC, although overall I think it's madness that we store the whole record in the indexing engine and we shouldn't do it at all.
Comment 31 Thomas Klausner 2024-06-05 07:37:50 UTC
> But the part that can't be cheated is the "directory". It has a fixed format
> of 12 characters, and a field length cannot be longer than 9999 no matter
> what. If it's greater, then it becomes impossible to parse the directory. 

yeah, I learned that recently (while reading code & spec).

But as I stated in my original comment, MARC::Record should handle invalid data (i.e. field > 9999 bytes). Either by rejecting the data (i.e. throwing an exception) or truncating the data. Or let the user decide which of the two via a flag.

But generating an invalid record that cannot be parsed back is definitely a bug.

> That said... in theory the MARC standard could be changed/hacked, so that
> the "20 - Length of the length-of-field portion" could be longer, and then
> the MARC processing library could interpret a directory as having longer
> field lengths... 

I guess you're joking. I cannot imagine that this 24 year old standard can be changed, esp. as we have things like MARCXML :-)
Comment 32 Thomas Klausner 2024-06-05 07:41:28 UTC
(In reply to David Cook from comment #30)
 
> With the Elasticsearch issue, the shortest path is probably to store the
> MARCXML

Which already happens in some cases. And it would be easy to just completely switch to MARCXML

> instead of the ISO 2709 MARC, although overall I think it's madness
> that we store the whole record in the indexing engine and we shouldn't do it
> at all.

No, it does make sense. When we have the whole record we can render it in the result list without hitting the database. I guess that's the intention here, though I haven't checked if Koha actually renders the list only based on the ES response, or if it already does hit the DB (which it might have to do to check if an item is currently available)

But if it's worth it (the storage costs of the whole record vs the runtime cost of retrieving it from the DB) is something that could be up for discussion.
Comment 33 David Cook 2024-06-06 04:23:37 UTC
(In reply to Thomas Klausner from comment #31)
> But as I stated in my original comment, MARC::Record should handle invalid
> data (i.e. field > 9999 bytes). Either by rejecting the data (i.e. throwing
> an exception) or truncating the data. Or let the user decide which of the
> two via a flag.

That's an interesting idea.

> But generating an invalid record that cannot be parsed back is definitely a
> bug.

That's a very good point. 

> > That said... in theory the MARC standard could be changed/hacked, so that
> > the "20 - Length of the length-of-field portion" could be longer, and then
> > the MARC processing library could interpret a directory as having longer
> > field lengths... 
> 
> I guess you're joking. I cannot imagine that this 24 year old standard can
> be changed, esp. as we have things like MARCXML :-)

I probably didn't articulate myself well enough. I mean the standard contradicts itself a bit. In theory, one part lets you define the length of the length-of-field portion yourself. But another part says only up to a maximum of 4 characters. And another part mandates that only 4 is allowed. But in theory, if you had a cooperative software library, you could stretch the truth a bit... so yeah, maybe I was joking around a bit haha.

(Also, the standard is much older than 24 years!)
Comment 34 David Cook 2024-06-06 04:27:18 UTC
(In reply to Thomas Klausner from comment #32)
> > instead of the ISO 2709 MARC, although overall I think it's madness
> > that we store the whole record in the indexing engine and we shouldn't do it
> > at all.
> 
> No, it does make sense. When we have the whole record we can render it in
> the result list without hitting the database. I guess that's the intention
> here, though I haven't checked if Koha actually renders the list only based
> on the ES response, or if it already does hit the DB (which it might have to
> do to check if an item is currently available)
> 
> But if it's worth it (the storage costs of the whole record vs the runtime
> cost of retrieving it from the DB) is something that could be up for
> discussion.

Yeah, I think we should actually measure the difference, and we can optimize that database query so that we're fetching all the results with 1 query rather than 20 individual queries.
Comment 35 Martin Renvoize (ashimema) 2024-06-06 08:02:11 UTC
I suppose it's not as clear cut as we're making out here is it?  MARC::Record allows dealing with ISO 2709 MARC and MARCXML, and those differ in their ability to handle large records.  If we fix the ISO representation to truncate or throw exceptions, won't that also affect the MARCXML round-tripping?  Can we definitely have it work one way for one and another way for the other?  I suppose we would basically need to ensure the core store is in MARCXML and that the ISO output is the bit that truncates or errors..  but is that inside MARC::Record, or should that be handled at the Koha layer..  Certainly the ISO parts should do better at highlighting the issues, however, and certainly the encoding fun you identified needs fixing.
Comment 36 Thomas Klausner 2024-06-06 09:42:18 UTC
(In reply to Martin Renvoize from comment #35)
> I suppose it's not as clear cut as we're making out here is it? 
> MARC::Record allows dealing with ISO 2709 MARC and MARCXML, and those differ
> in their ability to handle large records.

Yes, so I think that MARC::File::USMARC should properly handle too-large data (which MARC::File::XML can handle) by providing an option to either truncate the data or die when trying to encode it.

Koha itself should switch to use MARCXML to store the whole record in the ES index (unless we don't need the whole record in the index at all)
Comment 37 David Cook 2024-06-06 23:45:00 UTC
(In reply to Thomas Klausner from comment #36)
> (In reply to Martin Renvoize from comment #35)
> > I suppose it's not as clear cut as we're making out here is it? 
> > MARC::Record allows dealing with ISO 2709 MARC and MARCXML, and those differ
> > in their ability to handle large records.
> 
> Yes, so I think that MARC::File::USMARC should properly handle too-large
> data (which MARC::File::XML can handle) by providing an option to either
> truncate the data or die when trying to encode it.

+1

> Koha itself should switch to use MARCXML to store the whole record in the ES
> index (unless we don't need the whole record in the index at all)

Personally, I don't think we need the whole record in the index at all, but that's arguably a separate issue. 

If we use MARCXML instead of MARC, unless we base64-encode it, I realize there will be a side effect of making the whole record queryable in a keyword search. (I recently noted that we don't have an "Any" index in ES like we do with Zebra.)

(In reply to Martin Renvoize from comment #35)
> I suppose it's not as clear cut as we're making out here is it? 
> MARC::Record allows dealing with ISO 2709 MARC and MARCXML, and those differ
> in their ability to handle large records.  If we fix the ISO representation
> to truncate or throw exceptions, won't that also affect the MARCXML
> round-tripping?  Can we definitely have it work one way for one and another
> way for the other?  I suppose we would basically need to ensure the core
> store is in MARCXML and that the ISO output is the bit that truncates or
> errors..  but is that inside MARC::Record, or should that be handled at the
> Koha layer..  Certainly the ISO parts should do better at highlighting the
> issues, however, and certainly the encoding fun you identified needs fixing.

So MARCXML->MARC->MARCXML record roundtripping for large records is already a problem, because the process generates invalid MARC records, which can't be read back into MARCXML. That's the issue we're trying to address. 

If MARC::Record threw an exception for MARCXML->MARC where a valid record can't be generated, we would've never wound up in the mess we're currently in haha. 

Of course, MARC::Record isn't the only one guilty of this. I'm trying to get MarcEdit fixed, because I have a different scenario where it produces invalid MARC records too. 

So in terms of MARC::Record vs Koha... I think MARC::Record should throw an exception, and Koha should handle that exception. Personally, the only time we should ever be using ISO MARC is as an export. And in that situation we can say "Sorry! Unable to export this record as an ISO MARC file due to field size limitations! Please try another format!" or something like that.
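
A rough sketch of what that Koha-side handling could look like (record_has_oversized_field is a made-up helper for illustration, not existing Koha code):

use Try::Tiny;
use MARC::File::XML ( BinaryEncoding => 'UTF-8' );

my $iso2709 = try {
    die "field over 9999 bytes\n" if record_has_oversized_field($record);
    $record->as_usmarc();
};
my $output = defined $iso2709
    ? $iso2709
    : $record->as_xml_record();   # fall back, or tell the user to pick another format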
Comment 38 David Cook 2024-06-07 02:25:25 UTC
(In reply to Thomas Klausner from comment #23)
> (In reply to Katrin Fischer from comment #19)
> > Just a note here: the normal cataloguing editor does enforce a length; the
> > input fields have maxsize = 9999 set (at least in MARC21). Is this different
> > for UNIMARC? But if you import using staged import, it's probably just being
> > added as is.
> 
> Using the latest KTD, I could enter data longer than 9999 bytes (even
> ignoring chars vs bytes..) into e.g. 521$a, which does have a `max_length`.
> So maybe the max length is ignored (another bug?).
> 
> And even if it wasn't ignored, the length limit is per *field*, not per
> *subfield*, so e.g. even if (521$a < 9999) && (521$b < 9999), 521$a + 521$b
> could be greater than 9999 (triggering the bug).

On 23.11.x and ktd, I don't see a maxlength attribute on textareas in the editor.

If I add one via the browser HTML inspector, then copy and paste the text back in, then the text gets truncated according to the maxlength, and I can't add any more text.

Although as Thomas says, since 9999 is a limit of the field rather than subfield, that isn't a perfect solution.

That said, it would be an improvement.

We could have JavaScript or Perl that checks the aggregate length of the subfields of a field...
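
A rough Perl sketch of such an aggregate check, with the byte accounting following the ISO 2709 directory rules (the sub name is made up):

use Encode qw(encode_utf8);

sub field_too_long {
    my ($field) = @_;
    # control fields: the data plus the field terminator
    return length( encode_utf8( $field->data ) ) + 1 > 9999
        if $field->is_control_field;
    # data fields: two indicators plus the field terminator ...
    my $len = 2 + 1;
    # ... plus, per subfield, the delimiter, the subfield code and the data
    $len += 2 + length( encode_utf8( $_->[1] ) ) for $field->subfields;
    return $len > 9999;
}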

But that could always be bypassed by using Staged MARC Import.
Comment 39 David Cook 2024-06-07 02:40:05 UTC
I tried the following SQL to find problem records:

select biblionumber from biblio_metadata where ExtractValue(metadata,'//*[string-length(.) > 9999]') <> '';

However, it can have misleading results...

I've had someone save HTML into a note. The HTML might only be 7600 characters, but when you encode the HTML as entities (which is necessary for the XML) it increases to beyond 9999.

The MARC has the HTML without the entity encoding so it's fine though...
Comment 40 David Cook 2024-10-14 06:01:12 UTC
(In reply to Thomas Klausner from comment #36)
> Koha itself should switch to use MARCXML to store the whole record in the ES
> index (unless we don't need the whole record in the index at all)

I'm inclined to agree here. I think we need the whole record, since we do things like add the items, make some changes for 880, etc.

If we're scared about how much space MARCXML will take in the Elasticsearch document, we could compress it. 

Compress::Zlib and IO::Compress::Gzip are both part of the Perl core. I think that the Zlib one uses C bindings (via Compress::Raw::Zlib) to use the zlib C library while the Gzip one is Pure Perl so probably less efficient.
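
A minimal sketch of that idea, storing zlib-compressed, base64-encoded MARCXML (marc_data is the existing document field; everything else here is illustrative):

use Compress::Zlib;
use MIME::Base64 qw(encode_base64 decode_base64);
use Encode qw(encode_utf8 decode_utf8);
use MARC::Record;
use MARC::File::XML ( BinaryEncoding => 'UTF-8' );

# when building the Elasticsearch document
my $marc_data = encode_base64( compress( encode_utf8( $record->as_xml_record() ) ), '' );

# when reading a search result back
my $marcxml = decode_utf8( uncompress( decode_base64($marc_data) ) );
my $back    = MARC::Record->new_from_xml( $marcxml, 'UTF-8' );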

We could add options to ElasticsearchMARCFormat for "MARCXML" and "MARCXML (zlib compressed)" or just the latter. 

--

Most of our libraries are on Elasticsearch but some are still on Zebra specifically because of this issue, so it would be nice to fix this (even nearly 4 years after it was initially reported).

I'm pretty flat out at the moment, but interested in doing a proof of concept for this one. In theory, it shouldn't be that much code...
Comment 41 David Cook 2024-10-25 05:40:49 UTC
Added "Bug 38270 - Add MARCXML options to ElasticsearchMARCFormat" so that particular problem/solution can be explored independently.
Comment 42 David Cook 2024-11-11 05:56:51 UTC
(In reply to David Cook from comment #41)
> Added "Bug 38270 - Add MARCXML options to ElasticsearchMARCFormat" so that
> particular problem/solution can be explored independently.

Also added "Bug 38416 - Failover to MARCXML if cannot roundtrip USMARC when indexing" which is related but different