Created attachment 45852 [details]
Test file for import.

When the length of the MARCXML data exported to Zebra (i.e. biblioitems.marcxml plus the cumulative 952 tags) is greater than 1 MB (1048576 bytes), the bib record no longer shows in search results. Tested on master.

Setup:

1/ Add item type 'BOOK' to your test instance.
2/ Add branch 'asdf' to your test instance.
3/ Stage and import the attached marc record (test.marc.utf8).
4/ Run sudo koha-rebuild-zebra --full <instancename>
5/ Search for "Theories of human development".
6/ Copy the URL of the detail page (it will become unavailable via search).
7/ Click 'New' and select 'New Item'.
8/ Set the Koha item type to 'BOOK'.
9/ Click 'Add Multiple Items' and set the number to 2500.
10/ Go back to the detail URL from step 6.
11/ Click export and select MARCXML.

Test:

1/ Run sudo koha-rebuild-zebra --full <instancename>
2/ Search for "Theories of human development". The bib record will not show in the search.
3/ In the Linux console, run the following command on the MARCXML file output from Setup step 11 (change the file name to match yours):

    head -c 1048576 'bib-1.marcxml'

4/ Note the value of 952$9 for the last complete 952 field.
5/ Delete all items on the imported bib with an itemnumber greater than the itemnumber from step 4.
6/ Export the record to MARCXML again (as in Setup step 11). If the size is still greater than 1048576 bytes, remove another item.
7/ Run sudo koha-rebuild-zebra --full <instancename>
8/ Search for "Theories of human development". The bib record *will* show in the search.
9/ Add one more item.
10/ Repeat steps 7/ and 8/; the search will fail.
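If you need to find candidate records on a production system rather than building one, something like this should flag them (untested sketch: it only measures biblioitems.marcxml, while the exported record also gets the cumulative 952 tags appended, so real exported sizes will be larger; the 900000-byte threshold is just an illustrative margin):

    sudo koha-mysql <instancename> <<'SQL'
    -- Bibs whose stored MARCXML alone is already close to zebrasrv's
    -- 1048576-byte default limit; item (952) data is added on top of
    -- this at export time.
    SELECT biblionumber, LENGTH(marcxml) AS bytes
    FROM   biblioitems
    WHERE  LENGTH(marcxml) > 900000
    ORDER  BY bytes DESC;
    SQL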
Switching to needs signoff; please undo if it is not ready for testing.
(In reply to Mirko Tietgen from comment #1)
> Switching to needs signoff; please undo if it is not ready for testing.

??? There's no patch.
Oh, I'm sorry, I mistook the detailed description for a patch with a test plan! I can't set it back to New because that is not an available status, so I have set it to In Discussion for now. If you plan to work on it, please take the bug and set it to Assigned.
This is a workflow thing - you sometimes have to step through different statuses in order to get back.
(In reply to Barton Chittenden from comment #0)
> Created attachment 45852 [details]
> Test file for import.

Thank you for providing a clear test plan. This bug is affecting us and will hit our serials department hard soon. I am looking forward to ElasticSearch as a fix, but maybe this can be circumvented simply by cutting "extra" MARC subfields from items when indexing.

Looks like we need to shift our priorities :)
Yep, I noticed this bug.

There seems to be a limit in YAZ. See the yaz-client man page, option -k. But I did not find how to use it from the Perl ZOOM library.

In fact, many bibs still use ISO2709 for export/import, so it's best to keep records smaller than 100 KB.
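To see the limit from the client side, something like this should work (untested sketch; the socket path is where the Debian packages usually put the bibliographic Zebra listener and may differ on your install, and -k here is yaz-client's own record-size limit, in kilobytes per its man page):

    # Connect to the instance's biblio Zebra server with a raised
    # client-side record-size limit, then fetch the oversized record.
    yaz-client -k 4096 unix:/var/run/koha/<instancename>/bibliosocket
    Z> base biblios
    Z> find @attr 1=4 "theories of human development"
    Z> show 1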
(In reply to Fridolin SOMERS from comment #6)
> Yep, I noticed this bug.
> 
> There seems to be a limit in YAZ. See the yaz-client man page, option -k.
> But I did not find how to use it from the Perl ZOOM library.
> 
> In fact, many bibs still use ISO2709 for export/import, so it's best to
> keep records smaller than 100 KB.

Using GRS-1 is fully deprecated in Koha, which is, I think, the last place the 100 KB limit touches Zebra. Koha's item data, stored in 952, can *easily* go over 100 KB, so this is not a realistic limit, and libraries with long-running serials cataloged as items can go over 1 MB.

Looking at http://www.indexdata.com/yaz/doc/zoom.html, it seems that the default values of maximumRecordSize and preferredMessageSize are both 1 MB, and that

    void ZOOM_connection_option_setl(ZOOM_connection c, const char *key,
                                     const char *val, int len);

and

    const char *ZOOM_connection_option_getl(ZOOM_connection c,
                                            const char *key, int *lenp);

take length options. connection_option_setl() is called at line 470 of /usr/lib/x86_64-linux-gnu/perl5/5.20/ZOOM.pm, but I'm not sure whether Koha's execution path ever calls option_binary(), the subroutine where connection_option_setl() is called, nor am I sure that this would actually override the default value of 1 MB.
It turns out that the 1024 K limit is the default max record size in zebrasrv; this can be changed using the -k option to zebrasrv.

man zebrasrv shows:

    -k size
        Maximum record size/message size, in kilobytes. Default is 1024 KB
        (1 MB).

This could be increased in koha-start-zebra, koha-restart-zebra and koha-zebra. I imagine the change would look something like this:

from

    zebrasrv \
        -v $loglevels \
        -f "/etc/koha/sites/$instancename/koha-conf.xml" && \
        return 0 || \
        return 1

to

    zebrasrv \
        -v $loglevels \
        -k $max_record_size \
        -f "/etc/koha/sites/$instancename/koha-conf.xml" && \
        return 0 || \
        return 1

where $max_record_size would be read from $KOHA_CONF in the same manner that $loglevels is.
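For the "read from $KOHA_CONF" part, something like this might do (untested sketch: the zebra_max_record_size element name is an assumption on my part, and it presumes xmlstarlet is available to parse koha-conf.xml):

    # Pull the max record size for zebrasrv's -k option out of the
    # instance's koha-conf.xml, falling back to zebrasrv's documented
    # default of 1024 when the element is absent or empty.
    max_record_size=$(xmlstarlet sel \
        -t -v 'yazgfs/config/zebra_max_record_size' \
        "/etc/koha/sites/$instancename/koha-conf.xml" 2>/dev/null)
    [ -n "$max_record_size" ] || max_record_size=1024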
If you increase $max_record_size too much, it breaks the Z39.50 server.

A yaz-client search reports this:

    Target has closed the association.
    Reason: protocolError, message: Incoming package too large
Created attachment 60937 [details] [review]
Bug 15399 - MARCXML records larger than 1 MB (1048576 bytes) are not searchable.

fka. KD-1512 - Remove items from the search index for serial mothers.

Make stuff searchable again.
This is how we fixed this issue. Looking forward to ElasticSearch and its peculiarities.
(In reply to Barton Chittenden from comment #8)
> It turns out that the 1024 K limit is the default max record size in
> zebrasrv; this can be changed using the -k option to zebrasrv.
> 
> man zebrasrv shows:
> 
>     -k size
>         Maximum record size/message size, in kilobytes. Default is 1024 KB
>         (1 MB).
> 
> This could be increased in koha-start-zebra, koha-restart-zebra and
> koha-zebra. I imagine the change would look something like this:
> 
> from
> 
>     zebrasrv \
>         -v $loglevels \
>         -f "/etc/koha/sites/$instancename/koha-conf.xml" && \
>         return 0 || \
>         return 1
> 
> to
> 
>     zebrasrv \
>         -v $loglevels \
>         -k $max_record_size \
>         -f "/etc/koha/sites/$instancename/koha-conf.xml" && \
>         return 0 || \
>         return 1
> 
> where $max_record_size would be read from $KOHA_CONF in the same manner
> that $loglevels is.

Based on my testing, I believe that the documentation is wrong -- the value of -k is in bytes, not kilobytes.

This may be a moot point, considering the issues with Z39.50.
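This is roughly how I checked it, if anyone wants to reproduce (sketch; the instance name and numbers are examples):

    # Stop the packaged Zebra daemon, then run zebrasrv in the
    # foreground with an explicit -k and retry the failing search.
    # Read as kilobytes, 2048 should already let a ~1 MB record
    # through; in my tests the record only came back once the value
    # was large enough to make sense as bytes (e.g. 2097152).
    sudo koha-stop-zebra <instancename>
    zebrasrv -k 2097152 -f "/etc/koha/sites/<instancename>/koha-conf.xml"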
(In reply to alfre69 from comment #9)
> If you increase $max_record_size too much, it breaks the Z39.50 server.
> 
> A yaz-client search reports this:
> 
>     Target has closed the association.
>     Reason: protocolError, message: Incoming package too large

Does this error occur on *all* searches, or only on searches whose results are greater than 1 MB?
We had the same issue and proposed rebuild_zebra.pl options on Bug 10482.
Looks like the Perl module Net-Z3950-ZOOM also has an option to specify the max record size (1 MB by default):

http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#option()_/_option_binary()
(In reply to Barton Chittenden from comment #13)
> (In reply to alfre69 from comment #9)
> > If you increase $max_record_size too much, it breaks the Z39.50 server.
> > 
> > A yaz-client search reports this:
> > 
> >     Target has closed the association.
> >     Reason: protocolError, message: Incoming package too large
> 
> Does this error occur on *all* searches, or only on searches whose results
> are greater than 1 MB?

It occurs on all searches, so the Z39.50 server is not usable when increasing $max_record_size.
(In reply to alfre69 from comment #16)
> (In reply to Barton Chittenden from comment #13)
> > (In reply to alfre69 from comment #9)
> > > If you increase $max_record_size too much, it breaks the Z39.50
> > > server.
> > > 
> > > A yaz-client search reports this:
> > > 
> > >     Target has closed the association.
> > >     Reason: protocolError, message: Incoming package too large
> > 
> > Does this error occur on *all* searches, or only on searches whose
> > results are greater than 1 MB?
> 
> It occurs on all searches, so the Z39.50 server is not usable when
> increasing $max_record_size.

IndexData told me a while back that yaz-client has several limitations the protocol doesn't have, and that we should be using zoomsh instead for testing.
I was sure that I had replied to this already...

I changed zebra_max_record_size to 4096 in prod for one instance, since it had MARCXML records with large numbers of items, and it works fine.
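For reference, the change itself (confirm that your koha-common version's Zebra start scripts actually read this element before relying on it):

    # In /etc/koha/sites/<instancename>/koha-conf.xml, inside the
    # <config> block:
    #     <zebra_max_record_size>4096</zebra_max_record_size>
    # then restart Zebra so zebrasrv is started with the new limit:
    sudo koha-restart-zebra <instancename>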
(In reply to David Cook from comment #18)
> I was sure that I had replied to this already...
> 
> I changed zebra_max_record_size to 4096 in prod for one instance, since it
> had MARCXML records with large numbers of items, and it works fine.

Could we add this to the wiki for Zebra troubleshooting?
(In reply to Katrin Fischer from comment #19)
> (In reply to David Cook from comment #18)
> > I was sure that I had replied to this already...
> > 
> > I changed zebra_max_record_size to 4096 in prod for one instance, since
> > it had MARCXML records with large numbers of items, and it works fine.
> 
> Could we add this to the wiki for Zebra troubleshooting?

https://wiki.koha-community.org/wiki/Troubleshooting_Zebra#Records_with_lots_of_items_not_indexing
(In reply to David Cook from comment #20)
> (In reply to Katrin Fischer from comment #19)
> > (In reply to David Cook from comment #18)
> > > I was sure that I had replied to this already...
> > > 
> > > I changed zebra_max_record_size to 4096 in prod for one instance,
> > > since it had MARCXML records with large numbers of items, and it
> > > works fine.
> > 
> > Could we add this to the wiki for Zebra troubleshooting?
> 
> https://wiki.koha-community.org/wiki/Troubleshooting_Zebra#Records_with_lots_of_items_not_indexing

Awesome, thanks David!