Bug 2453 - (very) large biblio/item handling
Summary: (very) large biblio/item handling
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Cataloging
Version: Main
Hardware: PC All
Importance: P3 critical
Assignee: Galen Charlton
QA Contact: Bugs List
URL:
Keywords:
Depends on: 5579 6789
Blocks:
Reported: 2008-08-03 10:08 UTC by Paul Poulain
Modified: 2019-06-27 09:24 UTC
CC List: 2 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments

Description Chris Cormack 2010-05-21 00:51:21 UTC


---- Reported by paul.poulain@biblibre.com 2008-08-03 10:08:19 ----

I think I've discovered a problem: one of our clients has a database with daily serials that are itemised, so we reach up to 600 items for a given biblio.
When reindexing Zebra, I get a nasty "
18:51:38-03/08 zebraidx(31045) [log] add grs.marcxml.record /tmp/export/biblio/exported_records 127332
18:51:38-03/08 zebraidx(31045) [warn] MARC: Base address does not follow directory
18:51:38-03/08 zebraidx(31045) [warn] MARC: Bad offsets in data. Skipping rest
18:51:38-03/08 zebraidx(31045) [log] add grs.marcxml.record /tmp/export/biblio/exported_records 161209
18:51:38-03/08 zebraidx(31045) [warn] MARC: Skipping bad byte 105 (0x69)
18:51:38-03/08 zebraidx(31045) [warn] MARC: Skipping bad byte 113 (0x71)
18:51:38-03/08 zebraidx(31045) [warn] MARC: Skipping bad byte 117 (0x75)
18:51:38-03/08 zebraidx(31045) [warn] MARC: Skipping bad byte 101 (0x65)
18:51:38-03/08 zebraidx(31045) [warn] MARC: Skipping bad byte 115 (0x73)
18:51:38-03/08 zebraidx(31045) [warn] MARC: Skipping bad byte 31 (0x1F)
18:51:38-03/08 zebraidx(31045) [warn] MARC: Base address does not follow directory
18:51:38-03/08 zebraidx(31045) [warn] MARC: Bad offsets in data. Skipping rest
18:51:38-03/08 zebraidx(31045) [warn] Record didn't contain match fields in (bib1,Local-Number)
18:51:38-03/08 zebraidx(31045) [log] error grs.marcxml.record /tmp/export/biblio/exported_records 261208
"

Investigating the data, I've discovered that the record is more than 99999 bytes long. The XML (MARCXML) serialisation handles it properly, but ISO 2709 stores the record length in a five-character field at the start of the leader, so it cannot describe a record longer than 99999 bytes; the binary MARC output is corrupt, and Zebra really doesn't like that.

We can't fix the ISO 2709 format itself, so we have two possibilities:
- fix Koha to limit the size of a biblio and its items (check that we don't get more than 100kB; see the sketch below)
- fix the export during rebuild_zebra.

I think the first solution is the best.
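
A minimal sketch of the size check the first option calls for, using MARC::Record (the helper name is illustrative, and where it would hook into Koha is left open):

    use MARC::Record;
    use Encode;

    # ISO 2709 stores the record length in the first 5 characters of the
    # leader, so a serialised record cannot exceed 99999 bytes.
    use constant ISO2709_MAX_SIZE => 99999;

    sub record_fits_iso2709 {
        my ($record) = @_;    # a MARC::Record object
        # as_usmarc() serialises to ISO 2709; encode to UTF-8 so we
        # measure bytes rather than characters.
        my $blob = Encode::encode( 'UTF-8', $record->as_usmarc() );
        return length($blob) <= ISO2709_MAX_SIZE;
    }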



---- Additional Comments From rch@liblime.com 2008-08-04 14:26:42 ----


Paul: any reason not to just run rebuild_zebra with -x?
It is unfortunate that Zebra chokes on a bad record length in the leader, but I think that is Zebra's problem, not Koha's. We (LL) are using -x to rebuild from an XML dump and don't see this error.
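
For reference, this is the kind of invocation meant here; -b (bibliographic records), -r (full reset of the index) and -x (export as MARCXML) are the usual flags, though exact options vary by Koha version:

    ./misc/migration_tools/rebuild_zebra.pl -b -r -x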



---- Additional Comments From paul.poulain@biblibre.com 2008-08-05 02:45:35 ----

Hi Ryan,

Good suggestion, but it doesn't work. I get:
11:40:26-05/08 zebraidx(16823) [warn] 422:0:XML error: junk after document element
11:40:26-05/08 zebraidx(16823) [log] add grs.xml /home/paul/koha.dev/local/LecturePub/dracenie/export//biblio/exported_records 0
11:40:26-05/08 zebraidx(16823) [warn] 1:0:XML error: syntax error


Here are the lines around 420:
  <controlfield tag="001">10633</controlfield>
</record>
<?xml version="1.0" encoding="UTF-8"?>   <<<<<<< LINE 422
<record
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
  xmlns="http://www.loc.gov/MARC21/slim">

I've read :
            # FIXME - when more than one record is exported and $as_xml is true,
            # the output file is not valid XML - it's just multiple <record> elements
            # strung together with no single root element.  zebraidx doesn't seem
            # to care, though, at least if you're using the GRS-1 filter.  It does
            # care if you're using the DOM filter, which requires valid XML file(s).


in rebuild_zebra, but I don't think it applies to me, as I don't use the DOM filter.

Any ideas welcome...
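
One way to make the export well-formed, sketched with MARC::File::XML ($fh and @records are illustrative, and this is not necessarily how rebuild_zebra.pl was eventually fixed): emit a single XML declaration and wrap every record in one <collection> root element:

    use MARC::Record;
    use MARC::File::XML ( BinaryEncoding => 'utf8' );

    # Print one declaration and one root element for the whole file,
    # then each record's <record> element, defensively stripping any
    # per-record declaration.
    print $fh qq{<?xml version="1.0" encoding="UTF-8"?>\n};
    print $fh qq{<collection xmlns="http://www.loc.gov/MARC21/slim">\n};
    for my $record (@records) {    # MARC::Record objects
        my $xml = $record->as_xml_record();
        $xml =~ s/^\s*<\?xml[^>]*\?>\s*//;    # drop a leading declaration, if any
        print $fh $xml, "\n";
    }
    print $fh "</collection>\n";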



---- Additional Comments From jmf@liblime.com 2008-08-09 17:28:05 ----

bumping this up to 3.2



---- Additional Comments From henridamien@koha-fr.org 2008-08-26 07:26:58 ----

The idea proposed would be to move the item data out of the MARCXML in biblioitems.
The problem it raises: how to index the item data along with the biblio data.

It would also solve the problem of keeping the XML and the items table synchronised.

AFAIK Tumer solved that problem: he had three databases, one for biblios, one for biblioitems, and one for items, IIRC.



---- Additional Comments From rch@liblime.com 2008-08-27 19:19:56 ----

I support this approach.
If a complete reimplementation of Koha's holdings model is not going to make the 3.2 cut, then at a minimum the embedded holdings should be excluded from the bib record. Indexing can be accomplished either by building the embedded holdings tags on the fly from the items table (see the sketch below) or by piping XML output of the items table through an XSL transformation using Zebra's DOM indexing model, whichever seems more efficient.

If a reworking of the holdings model is expected to be complete for the 3.2 release, then the issue is moot. Galen, can you comment?

To address this bug, though: we are also seeing some problems with very large bib records, even when indexed using MARCXML, and are looking into these errors. It looks like Koha is choking on bad (huge) records returned from Zebra when there are more embedded holdings than it really makes sense to pack into a MARC(XML) record. I'll update this report when we get more info on these errors.
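
A sketch of the first indexing approach, rebuilding the embedded holdings from the items table on the fly; 952 is Koha's default MARC21 items tag, but the subfield mapping here is simplified and hard-coded, where real Koha derives it from the framework configuration:

    use MARC::Record;
    use MARC::Field;

    sub embed_items {
        my ( $record, @items ) = @_;    # MARC::Record + item row hashrefs
        # Drop any stale embedded copies first.
        $record->delete_field($_) for $record->field('952');
        for my $item (@items) {
            $record->append_fields(
                MARC::Field->new( '952', ' ', ' ',
                    a => $item->{homebranch},
                    b => $item->{holdingbranch},
                    p => $item->{barcode},
                )
            );
        }
        return $record;
    }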





---- Additional Comments From joe.atzberger@liblime.com 2009-01-28 13:30:11 ----

Five months without comment here suggests a complete reimplementation of holdings may not be reasonable for 3.2.



--- Bug imported by chris@bigballofwax.co.nz 2010-05-21 00:51 UTC  ---

This bug was previously known as _bug_ 2453 at http://bugs.koha.org/cgi-bin/bugzilla3/show_bug.cgi?id=2453

Actual time not defined. Setting to 0.0

Comment 1 Katrin Fischer 2011-06-28 09:29:18 UTC
Is this resolved by removing items from the XML in 3.4?
Comment 2 Katrin Fischer 2011-07-15 15:51:40 UTC
(17:45:15) kf: I was looking at bug 2453
(17:45:16) huginn: Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=2453 critical, P3, ---, gmcharlt, NEW , (very) large biblio/item handling
(17:45:35) kf: can this be closed?
(17:48:30) sekjal: I think so... if you use -x on the rebuild_zebra.pl script, it should work fine
(17:49:06) gmcharlt: yes
(17:50:40) gmcharlt: there is one more potential improvement that could be made, namely tweaking how the marcxml blobs are handled by C4::Search so that we grab the current item information from mysql rather than zebra when rendering search results
(17:50:51) gmcharlt: but that can certainly be the topic of a separate bug
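
A sketch of the improvement gmcharlt describes, assuming a plain DBI query against Koha's items table (the helper name and column list are illustrative):

    use DBI;

    # When rendering a search result, pull live item data from MySQL by
    # biblionumber instead of re-parsing the (possibly truncated) MARC
    # blob that Zebra returned.
    sub items_for_biblio {
        my ( $dbh, $biblionumber ) = @_;
        return $dbh->selectall_arrayref(
            'SELECT barcode, homebranch, holdingbranch, itemcallnumber
               FROM items WHERE biblionumber = ?',
            { Slice => {} }, $biblionumber,
        );
    }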
Comment 3 Ian Walls 2011-08-25 22:18:02 UTC
I was wrong; this is still an issue.

If a title has many, many items, enough to break the ISO 2709 size limit, the record can still be passed to Zebra, but any information beyond the limit seems to be "lost". Since the indexing key is in 999$c, the items are in 952, and fields are inserted in tag order, a large number of items can push the key off the end of the record. I'm seeing this produce search results where the link to the details page contains a blank biblionumber.

The solution would seem to be to append the 952s after the 999. An additional wrinkle: if you run without --no-sanitize, the 999$c and $d are recalculated right before handoff to Zebra and inserted in tag order (so at the very end). That step would need to be moved BEFORE the embedding of items, which would mean doing it in the GetMarcBiblio() subroutine. The embedding of items could also be done there, instead of in rebuild_zebra.pl, since we can pass a flag to GetMarcBiblio() to do that (see the sketch below).
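
A sketch of that ordering (names are illustrative; in Koha the pieces live in C4::Biblio and rebuild_zebra.pl, and this reuses the hypothetical embed_items() helper sketched earlier):

    # Write the indexing key first, then append the items, so that if an
    # oversized record is truncated, only 952s are lost, never the 999$c.
    sub prepare_for_export {
        my ( $record, $biblionumber, $biblioitemnumber, @items ) = @_;

        # 1. (Re)write the 999 indexing key up front.
        $record->delete_field($_) for $record->field('999');
        $record->append_fields(
            MARC::Field->new( '999', ' ', ' ',
                c => $biblionumber,
                d => $biblioitemnumber,
            )
        );

        # 2. Only then embed the item data, so the 952s land after the key.
        return embed_items( $record, @items );
    }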
Comment 4 Ian Walls 2011-08-25 23:21:32 UTC
Further research has shown that my issue is not with ISO 2709 after all, but with the implicit record size limit of the Net::Z3950::ZOOM connection. The Perl module does not set any connection options, one of which is maximumRecordSize. Since it is not specified at connection time, the default of 1MB is used. This is almost always enough, but in the case of a VERY prolific serial, not always.

For example, the record I was testing had 1673 items and came out to 1.2MB of XML once the items were added in.

While fixing Net::Z3950::ZOOM would be ideal, it's probably easier to alter Koha in the way described in my previous comment.
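
For completeness, the ZOOM-side fix would look roughly like this (a sketch using the ZOOM-Perl API; the host, port and 4MB figure are placeholders):

    use ZOOM;    # ZOOM-Perl, the OO layer over Net::Z3950::ZOOM

    # Raise the record-size limits above the 1MB default before connecting.
    my $opts = ZOOM::Options->new();
    $opts->option( maximumRecordSize    => 4 * 1024 * 1024 );
    $opts->option( preferredMessageSize => 4 * 1024 * 1024 );
    my $conn = ZOOM::Connection->create($opts);
    $conn->connect( 'localhost', 9998 );    # placeholder host/port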
Comment 5 Paul Poulain 2011-09-02 09:27:45 UTC
Comment: UNIMARC users often store the biblionumber in 009, a control field near the start of the record, so they should not face this problem (which does not mean it must not be addressed, of course).
Comment 6 Chris Nighswonger 2011-09-07 18:28:00 UTC
Pushing to master.
Comment 7 Ian Walls 2011-09-30 19:36:15 UTC
Marking this as resolved again, since bug 6789 is being used to track the issue I reopened this for.