In our library there are some records (newspapers) that have lots of items; some have more than a thousand. It seems that if there are more than a thousand items per location, the record is not shown in any searches, and the only way to access it is to know the biblionumber and insert it into the URL.
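For example, the only way we can open such a record is with a direct detail-page URL like the ones below (the biblionumber here is just a made-up example):

    /cgi-bin/koha/catalogue/detail.pl?biblionumber=12345    (staff interface)
    /cgi-bin/koha/opac-detail.pl?biblionumber=12345         (OPAC)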
Just a wild guess, but could you try setting a lower value for maxItemsInSearchResults to see if this changes things?
I changed it to 15, but nothing happened. If more than one result is found, the missing record is simply skipped: instead of 10 or 20 records per page I see 9 or 19, and the numbering jumps, for example, from 1 to 3 and then continues as usual.
Are you sure you're not accidentally using the system preference OpacHiddenItems?
This happens both in the staff interface and in the OPAC. The OpacHiddenItems preference is not used in that library.
Bohdan, you said in another thread that you were using Elasticsearch; is that correct? It does sound like some sort of size limit, but I am not an Elastic expert myself.
That is a different library. This one uses Zebra.
In that case I'll add some of our Zebra experts who have probably dealt with big records like that :)
My experience is that this happens when the MARC record exceeds 9999k in size. Our solution was to separate the record into multiple records and put all of the items from one year on one record, all of the items from the next year on the second record, and so on.

It wasn't a problem with Koha. The problem was that we were exceeding the maximum size of a MARC record.

George
(In reply to George Williams (NEKLS) from comment #8)
> My experience is that this happens when the MARC record exceeds 9999k in
> size. Our solution was to separate the record into multiple records and put
> all of the items from one year on one record, all of the items from the next
> year on the second record, and so on.
>
> It wasn't a problem with Koha. The problem was that we were exceeding the
> maximum size of a MARC record.
>
> George

That's interesting. I thought that we'd overcome that limitation by switching to MARCXML with Zebra, but perhaps we didn't...
Bohdan: Is your 20.11 installation a new installation, or an upgrade that you've had for a long time?
This Koha is an upgraded installation; I originally installed version 19.11.
(In reply to David Cook from comment #9)
> (In reply to George Williams (NEKLS) from comment #8)
> > My experience is that this happens when the MARC record exceeds 9999k in
> > size. Our solution was to separate the record into multiple records and
> > put all of the items from one year on one record, all of the items from
> > the next year on the second record, and so on.
> >
> > It wasn't a problem with Koha. The problem was that we were exceeding the
> > maximum size of a MARC record.
> >
> > George
>
> That's interesting. I thought that we'd overcome that limitation by
> switching to MARCXML with Zebra, but perhaps we didn't...

It's possible that the 9999k limit has been fixed and I just never realized it, because we trained everyone to limit the size of the records that might hit that limit.
Created attachment 127084: Search results image
In the attachment you can see that, for some reason, result number 1 is missing. This still persists in 21.05.
See Bug 10482. We have this code in place for some libraries with records having more than 1000 items. I don't know which Zebra limit is being hit here.
There is a parameter that limits the size of the records Zebra will index. As the problem is that the record is not searchable, could this be the issue here?
(In reply to Katrin Fischer from comment #16)
> There is a parameter that limits the size of the records Zebra will index.
> As the problem is that the record is not searchable, could this be the
> issue here?

Do you mean the "memMax" parameter? You're probably right.

According to https://software.indexdata.com/zebra/doc/idzebra.pdf, "The indexed documents are parsed into a standard XML DOM tree, which restricts record size according to availability of memory."

Last year, I discovered a size issue with the memMax parameter and Indexdata fixed it: https://github.com/indexdata/idzebra/issues/34

From Zebra 2.2.4 onwards you should be able to specify as much memory as you want for "memMax": https://github.com/indexdata/idzebra/blob/master/NEWS

Prior to Zebra 2.2.4, the maximum was 2047M. I have a library with a large collection of large records, and so far 2047M has been fine for them.
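For anyone who wants to try raising it: memMax is an ordinary Zebra config setting, so it should just be a matter of editing the Zebra config used for bibliographic indexing and doing a full reindex. A rough sketch, assuming the usual Debian package layout (the path and value here are examples, not a recommendation):

    # /etc/koha/zebradb/zebra-biblios-dom.cfg (path may differ on your install)
    # Memory available to the indexer, in megabytes; on Zebra versions before
    # 2.2.4 the effective maximum is 2047.
    memMax: 1024

followed by a full reindex of the instance, e.g. koha-rebuild-zebra -b -f -v <instancename>.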
(In reply to Katrin Fischer from comment #16)
> There is a parameter that limits the size of the records Zebra will index.
> As the problem is that the record is not searchable, could this be the
> issue here?

Or maybe you meant zebra_max_record_size from bug 18909... Interestingly, I've never had to modify that parameter.
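For what it's worth, the Zebra server's own cap on record/message size is the -k option of zebrasrv (in kilobytes, 1024 by default), and I believe that is the limit bug 18909 makes configurable per instance, though I'd double-check the bug. If anyone wants to see what a running instance was actually started with, the process arguments show it; the grep below is just one way to look:

    # Look for a "-k <kilobytes>" argument on the zebrasrv command line
    ps -ef | grep '[z]ebrasrv'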