To recreate: 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Some items show as not-for-loan, some show as available
This is a problem with any batch process that can affect more than one item on a single bib: imports and batch edits involving more than a single item on a bib. Under Zebra we check the queue first to see if there is an existing operation that has not been completed yet, and only add a new one if not. For ES, we can send repeated indexes of the same bib, and that creates a race condition.
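To make the difference concrete, here is a minimal sketch (not the actual Koha code) of the de-duplication the Zebra path gets from its queue; the zebraqueue column names are real, but the surrounding helper is illustrative:

    # Illustrative only: how the Zebra path can de-duplicate index requests
    # while a direct Elasticsearch call cannot. $dbh is an already-connected
    # DBI handle; column names follow the real zebraqueue table.
    sub queue_zebra_update {
        my ( $dbh, $biblionumber ) = @_;
        # Only enqueue if there is no pending (done = 0) entry for this record
        my ($pending) = $dbh->selectrow_array(
            q{SELECT COUNT(*) FROM zebraqueue
              WHERE biblio_auth_number = ? AND operation = 'specialUpdate' AND done = 0},
            undef, $biblionumber
        );
        return if $pending;    # an update is already queued; nothing to do
        $dbh->do(
            q{INSERT INTO zebraqueue (biblio_auth_number, server, operation)
              VALUES (?, 'biblioserver', 'specialUpdate')},
            undef, $biblionumber
        );
    }

    # By contrast, calling the Elasticsearch indexer directly once per item
    # sends N near-simultaneous requests for the same biblio, which is where
    # the race condition starts.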
Good catch!
I guess using the skip_modzebra_update flag, introduced by bug 24027, could fix the problem here.
Looks like there are 2 reindex requests per item, ModItemFromMarc and LostItem call Koha::Item->store.
(In reply to Jonathan Druart from comment #4) > Looks like there are 2 reindex requests per item, ModItemFromMarc and > LostItem call Koha::Item->store. It is the caller that should trigger the reindex, not the ->store method.
It's in store because it was in AddItem and ModItem, so no behaviour changes were expected from bug 23463. Having it in store makes sure the callers won't forget. I think the "skip" param can be the best way to go. The problem is when the param needs to be propagated. For instance LostItem is calling ->store, so LostItem will have to deal with it. We might end up with an ugly stack of calls passing it.
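A rough illustration of that propagation concern, assuming hypothetical intermediate helpers; only Koha::Item->store and the skip_modzebra_update parameter come from the discussion above, the rest is a sketch:

    # Sketch only: every layer between the batch script and ->store has to
    # accept and forward the flag, or its own ->store call reindexes anyway.
    sub batch_mark_items_lost {
        my ( $items, $lost_value ) = @_;
        for my $item (@$items) {
            mark_item_lost( $item, $lost_value, { skip_modzebra_update => 1 } );
        }
    }

    sub mark_item_lost {    # hypothetical intermediate helper (think LostItem)
        my ( $item, $lost_value, $params ) = @_;
        $item->itemlost($lost_value);
        # The flag must be passed on here too, otherwise the suppression
        # requested by the top-level caller is silently lost:
        $item->store( { skip_modzebra_update => $params->{skip_modzebra_update} } );
    }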
This is a major problem for us when we are doing batch changes (or imports). If we need to request a reindex then we also have to consider the timing of the reindex. We are on Elasticsearch.
Created attachment 107832 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update Test plan: 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan
Missing an '=' in the ModItemFromMarc assignment. LostItem is not always called, only if the item is marked lost when it wasn't before. I think this patch goes after the wrong problem - the issue is that we reindex the whole bib for each item call, instead of editing all items, then reindexing the bib. I think the idea here is right though: we should pass skip_modzebra_update for all the calls, then add a new call to index the bib after all items are modified
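A minimal sketch of the approach described in the previous comment (suppress the per-item reindex, then index each touched biblio once); this is not the committed patch, and the batch-edit surroundings are simplified:

    # Illustrative batch edit: collect affected biblionumbers, index once each.
    use Koha::Items;
    use C4::Biblio qw( ModZebra );

    my %touched;
    for my $itemnumber (@itemnumbers) {
        my $item = Koha::Items->find($itemnumber) or next;
        $item->set( \%new_values );
        $item->store( { skip_modzebra_update => 1 } );    # no per-item reindex
        $touched{ $item->biblionumber } = 1;
    }

    # One reindex request per biblio, sent only after all item edits succeed
    ModZebra( $_, 'specialUpdate', 'biblioserver' ) for keys %touched;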
Created attachment 107855 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset makes ModZebra take lists of biblionumbers or records so that multiple records can be passed in one call Test plan: 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan
Created attachment 107867 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset makes ModZebra take lists of biblionumbers or records so that multiple records can be passed in one call Test plan: 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan
I've adjusted the authorship of the patch ;)
Patch doesn't apply. Thanks!
Created attachment 109538 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset makes ModZebra take lists of biblionumbers or records so that multiple records can be passed in one call Test plan: 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan
Created attachment 109556 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset makes ModZebra take lists of biblionumbers or records so that multiple records can be passed in one call Test plan: 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
This should be a higher priority, particularly for large institutions. I have 1.3 million bib records and close to 2 million items. I have, on occasion, needed to delete large batches of records. I should not be able to 'break' the catalog because of the indexing when I remove 20K bib records.
Can we get a QA stamp on this patch please?
Looking here
General
Module changes, no tests? I understand that mocking a search engine is a bit hard, but this is a point for adding at least something?

Reading the code and searching further in the Elasticsearch code raises lots of questions. Listing a few.

$indexer->update_index_background( $biblionumbers, $records );
$indexer->delete_index_background( $biblionumbers );
marc_records_to_documents => Koha/SearchEngine/Elasticsearch.pm
We are now fetching MARC records in ModZebra for Elastic. I have the feeling that if we use Elastic we should be leaving ModZebra as soon as we can ;) Move this code where it belongs? Somewhere in SearchEngine/Elastic?

The operation of sending biblionumbers and biblio records in two lists makes me a bit suspicious. There is no check anymore that these lists are in sync. If there were a gap somehow, we would be indexing records on the wrong id? Might be just theoretical but looks suboptimal.

It seems that ModZebra is still inserting records into zebraqueue when we use Elastic. Why? Should we skip and clean up?

The for loop in ModZebra seems to be suboptimal? In a loop you are checking if a record count is zero and if so, insert one record. Why don't you do one DELETE statement and one INSERT? delete from zebra_queue where biblio_auth_number IN ($str) etc. Actually thinking about that: if you do one db call per record, why don't you add the loop in batch mod without touching ModZebra? [Wrote this earlier: see my conclusion too.]

Koha/SearchEngine/Elasticsearch/Indexer.pm dates from 2013 but still includes: $indexer->update_index_background just refers to update_index: TODO implement in the future - I don't know the best way of doing this yet. Same for $indexer->delete_index_background # TODO: Should be made async

C4/AuthoritiesMarc.pm: ModZebra( $authid, 'specialUpdate', 'authorityserver', $record );
Is this the only use of the record parameter in ModZebra? Apparently. For Zebra we don't need it, for Elastic we now do.

This part in ModZebra is really troublesome too. We do not even check if we got a biblio or an authority record!
unless ($record) {
    $record = GetMarcBiblio({
        biblionumber => $biblionumber,
        embed_items => 1 });
}

Conclusion:
Looks like we can still improve a lot of the Elastic code. Adding the current code in the old ModZebra is not the way to go imo. Move that code to SearchEngine by just adding a call. Since we only have one use of the record parameter in ModZebra, I would be inclined to remove it for consistency. I understand that it is useful as optimization though. Adding a loop somewhere else around ModZebra is probably the pragmatic way too. We never call it for lists. Perhaps adding an iteration routine in a module?

Changing status.
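For what the "one DELETE statement and one INSERT" suggestion could look like, here is an illustrative DBI sketch (not code from the patch; the placeholder handling and the pending-only DELETE condition are assumptions):

    # Sketch: replace the per-record check-then-insert loop with one DELETE
    # and one multi-row INSERT for the whole batch of biblionumbers.
    my @ids          = @biblionumbers;
    my $placeholders = join ',', ('?') x @ids;

    # Clear any pending (not yet processed) entries for these records...
    $dbh->do(
        "DELETE FROM zebraqueue
          WHERE biblio_auth_number IN ($placeholders) AND done = 0",
        undef, @ids
    );

    # ...then queue them all again in a single statement.
    my $rows = join ',', ("(?, 'biblioserver', 'specialUpdate')") x @ids;
    $dbh->do(
        "INSERT INTO zebraqueue (biblio_auth_number, server, operation) VALUES $rows",
        undef, @ids
    );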
From IRC: [11:45] <cait> marcelr: you are supposed still to run zebra and elastic in parallel
Hm, I had actually typed up a nice comment about Z39.50/SRU and the like... but I probably never saved it :( There is another bug around where we were discussing making this behaviour configurable (Zebra only, Elasticsearch only, both), but at the moment it's intentional to have both running as the default use case. (afaik)
(In reply to Marcel de Rooy from comment #19)
> General
> Module changes, no tests ?
I will work on something here

> We are now fetching MARC records in ModZebra for Elastic.
We were before these patches

> that if we use Elastic we should be leaving ModZebra as soon as we can ;)
Agreed, but that is a bigger job, this is addressing a bug

> The operation of sending biblionumbers and biblio records in two lists makes
> me a bit suspicious. There is no check anymore that these lists are in sync.
> If there would be a gap somehow, we would be indexing records on the wrong
> id? Might be just theoretical but looks suboptimal.
As you note later, we only pass a record from ModAuthority, otherwise we are building the lists

> It seems that ModZebra is still inserting records into zebraqueue when we
> use Elastic. Why? Should we skip and cleanup ?
Z3950 was the original main reason; we have the z3950responder now, but keeping Zebra in sync is not a bad thing. Eventually yes, but for now the safety net has been useful. Beyond the scope of this bug

> The for loop in ModZebra seems to be suboptimal? In a loop you are checking
> if a record count is zero and if so, insert one record.
> Why dont you do one DELETE statement and one INSERT ? delete from
> zebra_queue where biblio_auth_number IN ($str) etc.
> Actually thinking about that: if you do one db call per record, why dont you
> add the loop in batch mod without touching ModZebra ?
> [Wrote this earlier: see my conclusion too.]
That is essentially what we have now, a loop in the batch operations, calling ModZebra for each record. This could be improved, but I don't think this adds complexity so much as highlighting what we do

> Koha/SearchEngine/Elasticsearch/Indexer.pm dates from 2013 but still
> includes:
> $indexer->update_index_background does just refer to update_index: TODO
> implement in the future - I don't know the best way of doing this yet.
> Same $indexer->delete_index_background # TODO: Should be made async
This is work we should do; we don't have sponsors or the infrastructure yet. Hopefully the task queue can help here

> C4/AuthoritiesMarc.pm: ModZebra( $authid, 'specialUpdate',
> 'authorityserver', $record );
> Is this the only use of the record parameter in ModZebra ? Apparently. For
> Zebra we don't need it, for Elastic we now do.
This code is weird, I agree, but beyond the scope of this bug

> This part in ModZebra is really troublesome too. We do not even check if we
> got a biblio or an authority record !
> unless ($record) {
>     $record = GetMarcBiblio({
>         biblionumber => $biblionumber,
>         embed_items => 1 });
> }
This code is beyond the scope here; we are in C4/Biblio.pm - we really should only deal with biblio, and we do, except for the call above

> Conclusion:
> Looks like we can still improve a lot of the Elastic code.
Yes we definitely can

> Adding the
> current code in the old ModZebra is not the way to go imo.
Rewriting and moving this will be a bigger work, and this is a bug that affects stables

> Move that code to
> SearchEngine by just adding a call.
> Since we only have one use of the record
> parameter in ModZebra, I would be inclined to remove it for consistency. I
> understand that it is useful as optimization though.
Please file a new bug for this

> Adding a loop somewhere else around ModZebra is probably the pragmatic way
> too. We never call it for lists. Perhaps adding an iteration routine in a
> module ?
I don't see why looping in one place or the other matters.
In fact, look at this comment in the code:
# true ModZebra commented until indexdata fixes zebraDB crashes (it seems they occur on multiple updates
# at the same time

> Changing status.
Your comments are well appreciated Marcel, and they are true. We have technical debt here, and we need to rethink some of our methods. You have taken a very broad look at the code, and identified issues that affect many areas. Currently though, we have a bug that is affecting users of ES, and we need to support these users so we can continue to test and improve the ES code. I attempted to take a path here that makes smaller changes; the basic idea of calling for index updates by batch rather than individually is, I think, sound. I will work on tests and take a look at the Zebra loop, and would ask you to reconsider some of the larger points as future enhancements
IIUC, doing the necessary refactoring along with the bugfix would prevent it from being backportable. If the bug is indeed major and a fix is there, it would be a shame not to have it in the current stable branches, leaving the affected instances to wait until the upgrade to 20.11. That's option 1. Option 2 would be to push the current bugfix to master (and backport it) and open a new ticket for the heavier refactoring, with the risk of that happening in a very distant future. Option 3 would be to do the heavier refactoring and cleaner fix in the short term, and once it's in master, the bugfix from this ticket could go to the stable branches. This needs someone with the resources to do the heavier refactoring in the short term. Trade-offs everywhere.
I have certainly not expressed the need for a larger/heavy refactoring operation now. Moving some lines into a separate sub is not large. And adding a for loop in batch mod instead of ModZebra is not either, etc. This report is not about improving all the Elastic code, but I couldn't help mentioning a few things in this context.
Created attachment 110262 [details] [review] Bug 25265: Unit tests These cover Koha::SearchEngine::Indexer and ensure that all calls in the code are routed correctly to the expected search engine
Created attachment 110263 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset moves Zebra and Elasticsearch calls to Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't need to check the engine when calling for index The new index_records routine takes an array so that we can reduce the calls to the ES server. The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour Test plan: General tests, under both search engines: 1 - Add a biblio and confirm it is searchable 2 - Edit the biblio and confirm changes are searchable 3 - Add an item, confirm it is searchable 4 - Delete an item, confirm it is not searchable 5 - Delete a biblio, confirm it is not searchable 6 - Add an authority and confirm it is searchable 7 - Delete an authority and confirm it is not searchable Batch mod tests, under both search engines 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan
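A rough usage sketch of the generic indexer described in the commit message above; the call shape follows later comments on this bug (index_records taking record ids, an operation and a server), while the constructor argument shown here is an assumption:

    # Illustrative caller: one index_records call for the whole batch,
    # regardless of whether Zebra or Elasticsearch is the active engine.
    use Koha::SearchEngine::Indexer;

    my $indexer = Koha::SearchEngine::Indexer->new( { index => 'biblios' } );    # index name assumed
    $indexer->index_records( \@biblionumbers, 'specialUpdate', 'biblioserver' );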
Created attachment 110264 [details] [review] Bug 25265: Rename skip_modzebra_update to skip_record_index
Tests fail if there is no Elastic, or if it is not completely/correctly configured ;)
Created attachment 110380 [details] [review] Bug 25265: Unit tests These cover Koha::SearchEngine::Indexer and ensure that all calls in the code are routed correctly to the expected search engine Bug 25265: (follow-up) Skip tests if elastic not configured
Created attachment 110381 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update (renamed skip_record_index) Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset moves Zebra and Elasticsearch calls to Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't need to check the engine when calling for index The new index_records routine takes an array so that we can reduce the calls to the ES server. The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour Test plan: General tests, under both search engines: 1 - Add a biblio and confirm it is searchable 2 - Edit the biblio and confirm changes are searchable 3 - Add an item, confirm it is searchable 4 - Delete an item, confirm it is not searchable 5 - Delete a biblio, confirm it is not searchable 6 - Add an authority and confirm it is searchable 7 - Delete an authority and confirm it is not searchable Batch mod tests, under both search engines 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan 9 - Test batch deleting items a - Test with a list of items, not deleting bibs b - Test with a list of items, deleting bibs if no items remain where all items are only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1) c - Test with a list of items, deleting bibs if no items remain where some items are the only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1,2) 10 - Confirm records are update/deleted as appropriate
Created attachment 110382 [details] [review] Bug 25265: Rename skip_modzebra_update to skip_record_index
Created attachment 110457 [details] [review] Bug 18958: (follow-up) Ensure hold fill target reserve_id is set for all hold types MapItemsToHoldRequests has three sections: Local holds, item level holds, bib level holds Only one of them was setting the reserve_id. This patch makes all three set it and adds tests To test: 1 - Repeat test plan on bug 2 - sudo koha-mysql kohadev SELECT * FROM hold_fill_targets 3 - Ensure reserve_id is set at appropriate times 4 - prove -v t/db_dependent/HoldsQueue.t Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 18958: (QA follow-up) Fix number of tests In HoldsQueue.t Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org> Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 110458 [details] [review] Bug 18958: DBRev 20.06.00.039 Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 110459 [details] [review] Bug 19889: (follow-up) update DB adjustments Fix fields order Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org> Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 110560 [details] [review] Bug 25265: Unit tests These cover Koha::SearchEngine::Indexer and ensure that all calls in the code are routed correctly to the expected search engine Bug 25265: (follow-up) Skip tests if elastic not configured
Created attachment 110561 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update (renamed skip_record_index) Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset moves Zebra and Elasticsearch calls to Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't need to check the engine when calling for index The new index_records routine takes an array so that we can reduce the calls to the ES server. The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour Test plan: General tests, under both search engines: 1 - Add a biblio and confirm it is searchable 2 - Edit the biblio and confirm changes are searchable 3 - Add an item, confirm it is searchable 4 - Delete an item, confirm it is not searchable 5 - Delete a biblio, confirm it is not searchable 6 - Add an authority and confirm it is searchable 7 - Delete an authority and confirm it is not searchable Batch mod tests, under both search engines 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan 9 - Test batch deleting items a - Test with a list of items, not deleting bibs b - Test with a list of items, deleting bibs if no items remain where all items are only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1) c - Test with a list of items, deleting bibs if no items remain where some items are the only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1,2) 10 - Confirm records are update/deleted as appropriate
Created attachment 110562 [details] [review] Bug 25265: Rename skip_modzebra_update to skip_record_index
Created attachment 110565 [details] [review] Bug 25265: Fix copy paste error for parameter
Created attachment 110594 [details] [review] Bug 25265: Unit tests These cover Koha::SearchEngine::Indexer and ensure that all calls in the code are routed correctly to the expected search engine Bug 25265: (follow-up) Skip tests if elastic not configured Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
Created attachment 110595 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update (renamed skip_record_index) Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset moves Zebra and Elasticsearch calls to Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't need to check the engine when calling for index The new index_records routine takes an array so that we can reduce the calls to the ES server. The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour Test plan: General tests, under both search engines: 1 - Add a biblio and confirm it is searchable 2 - Edit the biblio and confirm changes are searchable 3 - Add an item, confirm it is searchable 4 - Delete an item, confirm it is not searchable 5 - Delete a biblio, confirm it is not searchable 6 - Add an authority and confirm it is searchable 7 - Delete an authority and confirm it is not searchable Batch mod tests, under both search engines 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan 9 - Test batch deleting items a - Test with a list of items, not deleting bibs b - Test with a list of items, deleting bibs if no items remain where all items are only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1) c - Test with a list of items, deleting bibs if no items remain where some items are the only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1,2) 10 - Confirm records are update/deleted as appropriate Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
Created attachment 110596 [details] [review] Bug 25265: Rename skip_modzebra_update to skip_record_index Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
Created attachment 110597 [details] [review] Bug 25265: Fix copy paste error for parameter Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
Created attachment 110615 [details] [review] Bug 25265: (follow-up) Don't index malformed records This is analogous to 26522, we should skip records that cannot be retrieved for indexing
Created attachment 110618 [details] [review] Bug 25265: (follow-up) Don't index malformed records This is analogous to 26522, we should skip records that cannot be retrieved for indexing
QAing
Looks quite good, Nick. Starting to test..
t/db_dependent/Koha/SearchEngine/Elasticsearch/Indexer.t

Without Elastic:
not ok 2 - create_index() tests
#   Failed test 'create_index() tests'
#   at t/db_dependent/Koha/SearchEngine/Elasticsearch/Indexer.t line 82.
[NoNodes] ** No nodes are available: [http://localhost:9200], called from sub Search::Elasticsearch::Role::Client::Direct::__ANON__ at /usr/share/koha/Koha/SearchEngine/Elasticsearch/Indexer.pm line 382.
# Looks like your test exited with 255 just after 2.

With Elastic (finally ;)
1..2
ok 1 - use Koha::SearchEngine::Elasticsearch::Indexer;
# Subtest: create_index() tests
    1..6
    ok 1 - Creating a new indexer object
    ok 2 - Creating an index
    ok 3 - no error on update_index
    ok 4 - 1 item indexed
    ok 5 - We should get a string matching the bibnumber passed in
    ok 6 - Dropping the index
ok 2 - create_index() tests
Nick, does entire record search on Authorities not work in Elastic?
Created attachment 110729 [details] [review] Bug 25265: Unit tests These cover Koha::SearchEngine::Indexer and ensure that all calls in the code are routed correctly to the expected search engine Bug 25265: (follow-up) Skip tests if elastic not configured Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110730 [details] [review] Bug 25265: Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update (renamed skip_record_index) Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset moves Zebra and Elasticsearch calls to Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't need to check the engine when calling for index The new index_records routine takes an array so that we can reduce the calls to the ES server. The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour Test plan: General tests, under both search engines: 1 - Add a biblio and confirm it is searchable 2 - Edit the biblio and confirm changes are searchable 3 - Add an item, confirm it is searchable 4 - Delete an item, confirm it is not searchable 5 - Delete a biblio, confirm it is not searchable 6 - Add an authority and confirm it is searchable 7 - Delete an authority and confirm it is not searchable Batch mod tests, under both search engines 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan 9 - Test batch deleting items a - Test with a list of items, not deleting bibs b - Test with a list of items, deleting bibs if no items remain where all items are only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1) c - Test with a list of items, deleting bibs if no items remain where some items are the only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1,2) 10 - Confirm records are update/deleted as appropriate Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110731 [details] [review] Bug 25265: Rename skip_modzebra_update to skip_record_index Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110732 [details] [review] Bug 25265: Fix copy paste error for parameter Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110733 [details] [review] Bug 25265: (follow-up) Don't index malformed records This is analogous to 26522, we should skip records that cannot be retrieved for indexing Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110734 [details] [review] Bug 25265: (QA follow-up) Add shebang to Indexer.t Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110736 [details] [review] Bug 25265: (QA follow-up) Rename biblionumber in ModZebra, index_records ModZebra: The name is very misleading: we can index authid's too here. And yes, it should not be in C4/Biblio too ;) A first step.. Adding the same change here in Koha/SearchEngine/Zebra/Indexer. Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110737 [details] [review] Bug 25265: (QA follow-up) Check server type in Elasticsearch::index_records Doing the same change as previously (renaming biblionumber), but fixing the record fetch at the same time. If (theoretically) an authority is passed without a record, it would have fetched a biblio record. Test plan: You need Elasticsearch here. Replaced this line in AddAuthority: $indexer->index_records( $authid, "specialUpdate", "authorityserver", $record ); by $indexer->index_records( $authid, "specialUpdate", "authorityserver", undef ); And updated an authority record. Check if you can search for the change. Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
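A sketch of what the server-type check in the fallback record fetch could look like, based on the problematic unless($record) block quoted earlier in this thread; the variable names $server and $record_number are assumptions, not necessarily those used in the patch:

    # Illustrative fragment for index_records: only fall back to a biblio
    # fetch when indexing the biblio server, so an authority id passed
    # without a record is never fetched as a biblio.
    use C4::Biblio qw( GetMarcBiblio );
    use C4::AuthoritiesMarc qw( GetAuthority );

    unless ($record) {
        if ( $server eq 'biblioserver' ) {
            $record = GetMarcBiblio( { biblionumber => $record_number, embed_items => 1 } );
        }
        else {
            $record = GetAuthority($record_number);
        }
    }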
Good work. Looks good to me. Tested on Zebra and a single Elastic container. The last QA follow-up is a bit arbitrary. If Nick or the RM feels that we'd better move this to a new report, that's fine with me.
The index_records method is (still) not tested...
Pushed to master for 20.11, thanks to everybody involved!
Created attachment 110882 [details] [review] Bug 25265: [20.05.x] Unit tests These cover Koha::SearchEngine::Indexer and ensure that all calls in the code are routed correctly to the expected search engine Bug 25265: (follow-up) Skip tests if elastic not configured Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110883 [details] [review] Bug 25265: [20.05.x] Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update (renamed skip_record_index) Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset moves Zebra and Elasticsearch calls to Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't need to check the engine when calling for index The new index_records routine takes an array so that we can reduce the calls to the ES server. The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour Test plan: General tests, under both search engines: 1 - Add a biblio and confirm it is searchable 2 - Edit the biblio and confirm changes are searchable 3 - Add an item, confirm it is searchable 4 - Delete an item, confirm it is not searchable 5 - Delete a biblio, confirm it is not searchable 6 - Add an authority and confirm it is searchable 7 - Delete an authority and confirm it is not searchable Batch mod tests, under both search engines 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan 9 - Test batch deleting items a - Test with a list of items, not deleting bibs b - Test with a list of items, deleting bibs if no items remain where all items are only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1) c - Test with a list of items, deleting bibs if no items remain where some items are the only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1,2) 10 - Confirm records are updated/deleted as appropriate Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: Rename skip_modzebra_update to skip_record_index Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: Fix copy paste error for parameter Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (follow-up) Don't index malformed records This is analogous to 26522, we should skip records that cannot be retrieved for indexing Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (QA follow-up) Add shebang to Indexer.t Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (QA follow-up) Rename biblionumber in ModZebra, index_records ModZebra: The name is very misleading: we can index authid's too here. And yes, it should not be in C4/Biblio too ;) A first step.. Adding the same change here in Koha/SearchEngine/Zebra/Indexer. Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (QA follow-up) Check server type in Elasticsearch::index_records Doing the same change as previously (renaming biblionumber), but fixing the record fetch at the same time. If (theoretically) an authority is passed without a record, it would have fetched a biblio record. Test plan: You need Elasticsearch here.
Replaced this line in AddAuthority: $indexer->index_records( $authid, "specialUpdate", "authorityserver", $record ); by $indexer->index_records( $authid, "specialUpdate", "authorityserver", undef ); And updated an authority record. Check if you can search for the change. Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 110937 [details] [review] Bug 25265: [20.05.x] Prevent double reindex of the same item in batchmod When batch editing, 2 reindex calls are sent to ES/Zebra. We can easily avoid that reusing the skip_modzebra_update (renamed skip_record_index) Additionally we should only send one request for biblio, and we should only do it if we succeed As the whole batch mod is in a transaction it is possible to fail in which case Zebra queue is reset, but ES indexes have already been set In addition to the skip param this patchset moves Zebra and Elasticsearch calls to Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't need to check the engine when calling for index The new index_records routine takes an array so that we can reduce the calls to the ES server. The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour Test plan: General tests, under both search engines: 1 - Add a biblio and confirm it is searchable 2 - Edit the biblio and confirm changes are searchable 3 - Add an item, confirm it is searchable 4 - Delete an item, confirm it is not searchable 5 - Delete a biblio, confirm it is not searchable 6 - Add an authority and confirm it is searchable 7 - Delete an authority and confirm it is not searchable Batch mod tests, under both search engines 1 - Have a bib with several items, none marked 'not for loan' 2 - Do a staff search that returns this biblio 3 - Items show as available 4 - Click on title to go to details page 5 - Edit->Item in a batch 6 - Set the not for loan status for all items 7 - Repeat your search 8 - Items show as not for loan 9 - Test batch deleting items a - Test with a list of items, not deleting bibs b - Test with a list of items, deleting bibs if no items remain where all items are only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1) c - Test with a list of items, deleting bibs if no items remain where some items are the only item on a biblio: SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1,2) 10 - Confirm records are updated/deleted as appropriate Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: Rename skip_modzebra_update to skip_record_index Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: Fix copy paste error for parameter Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (follow-up) Don't index malformed records This is analogous to 26522, we should skip records that cannot be retrieved for indexing Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (QA follow-up) Add shebang to Indexer.t Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (QA follow-up) Rename biblionumber in ModZebra, index_records ModZebra: The name is very misleading: we can index authid's too here. And yes, it should not be in C4/Biblio too ;) A first step.. Adding the same change here in Koha/SearchEngine/Zebra/Indexer. Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl> Bug 25265: (QA follow-up) Check server type in Elasticsearch::index_records Doing the same change as previously (renaming biblionumber), but fixing the record fetch at the same time. If (theoretically) an authority is passed without a record, it would have fetched a biblio record. Test plan: You need Elasticsearch here.
Replaced this line in AddAuthority: $indexer->index_records( $authid, "specialUpdate", "authorityserver", $record ); by $indexer->index_records( $authid, "specialUpdate", "authorityserver", undef ); And updated an authority record. Check if you can search for the change. Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
backported to 20.05.x for 20.05.05
missing dependencies, not backported to 19.11.x