When merging records with many items, Koha reindexes the origin record and the destination record for each item moved (see MoveItemFromBiblio in C4::Items). If the record being eliminated has 1000 items, Koha will perform 2000 reindexes. Because of this behaviour, merging is far too slow. Test plan: - Create 2 records (one with 1000 items, for example) - Add the records to one list and start the merging process for those records. - Choose the record with many items as the one to be eliminated. - Start the merging - After a while the web server should give you a timeout error (the merging process continues)
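For context, the pre-patch flow looks roughly like this (a simplified sketch of the merge loop, not the literal C4::Items code):

    # Simplified sketch of the pre-patch merge loop (illustrative only):
    # every item move reindexes both the source and the destination record.
    foreach my $itemnumber (@itemnumbers) {
        MoveItemFromBiblio( $itemnumber, $from_biblionumber, $to_biblionumber );
        # which internally triggers something like:
        #   ModZebra( $from_biblionumber, 'specialUpdate', 'biblioserver' );
        #   ModZebra( $to_biblionumber,   'specialUpdate', 'biblioserver' );
    }
    # With 1000 items this queues 2000 record reindexes, hence the timeout.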
Hello, when I try to merge records, I have an issue: Following required fields are missing: -801 https://snag.gy/76Fh9R.jpg This is a mandatory tag but I can't add it. Does someone know why?
(In reply to axel Amghar from comment #1) > Hello, > when I try to merge records, I have an issue: > Following required fields are missing: > -801 > > > https://snag.gy/76Fh9R.jpg > This is a mandatory tag but I can't add it. > Does someone know why? Axel, that happens because the framework chosen for merging has 801 as a mandatory field (that's my guess).
(In reply to Vitor Fernandes from comment #2) > Axel, that happens because the framework chosen for merging has 801 as a > mandatory field (that's my guess). I think it's the same thing with "000" and "001". They are mandatory but we can't add them. So... do we have to fix it?
Created attachment 90061 [details] [review] Bug 22690 - Merging records with many items (ElasticSearch) This patch allows us to merge many items without a timeout. Test plan: Without the patch: > - Create 2 records (one with 1000 items for example) > - Add the records to one list and start the merging process of those records. > - Choose the record with many items as the one to be eliminated. > - Start the merging > - After a while the web server should give you a timeout error (the merging process continues) With the patch: - Do the same - This time verify that all items have been merged without a timeout
Created attachment 90115 [details] [review] Bug 22690: Merging records with many items (ElasticSearch) This patch allows us to merge many items without a timeout. Test plan: Without the patch: > - Create 2 records (one with 1000 items for example) > - Add the records to one list and start the merging process of those records. > - Choose the record with many items as the one to be eliminated. > - Start the merging > - After a while the web server should give you a timeout error (the merging process continues) With the patch: - Do the same - This time verify that all items have been merged without a timeout
Created attachment 90116 [details] [review] Bug 22690: Merging records with many items (ElasticSearch) This patch allows us to merge many items without a timeout. Test plan: Without the patch: > - Create 2 records (one with 1000 items for example) > - Add the records to one list and start the merging process of those records. > - Choose the record with many items as the one to be eliminated. > - Start the merging > - After a while the web server should give you a timeout error (the merging process continues) With the patch: - Do the same - This time verify that all items have been merged without a timeout
Hello, I found an issue: when you click on "Merge selected" and then immediately click on "Merge", the destination record doesn't have time to load. And when you look at your record, it has lost its MARC record. Has anyone else managed to reproduce this bug?
I don't get the logic of the $items_number parameter. As far as I can see the first UPDATE moves all items regardless of the parameter, but the subsequent processing of acquisitions etc. only goes through the items in the $items_number parameter. If MoveItemsFromBiblio is supposed to move all items, I'd remove the $items_number parameter and do everything necessary inside MoveItemsFromBiblio. Also, I believe the 801 issue should be handled in a separate bug since it's not related to Elasticsearch.
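A rough sketch of the shape suggested above, with everything handled inside MoveItemsFromBiblio and no $items_number parameter (names and SQL are illustrative, not the actual patch):

    # Illustrative sketch only -- not the actual patch.
    sub MoveItemsFromBiblio {
        my ( $from_biblionumber, $to_biblionumber ) = @_;
        my $dbh = C4::Context->dbh;

        # Collect the itemnumbers first so related tables can be updated too.
        my $itemnumbers = $dbh->selectcol_arrayref(
            'SELECT itemnumber FROM items WHERE biblionumber = ?',
            undef, $from_biblionumber );

        # Look up the destination biblioitemnumber instead of assuming it
        # equals the biblionumber.
        my ($to_biblioitemnumber) = $dbh->selectrow_array(
            'SELECT biblioitemnumber FROM biblioitems WHERE biblionumber = ?',
            undef, $to_biblionumber );

        # Move all the items in one statement...
        $dbh->do(
            'UPDATE items SET biblionumber = ?, biblioitemnumber = ? WHERE biblionumber = ?',
            undef, $to_biblionumber, $to_biblioitemnumber, $from_biblionumber );

        # ...then update reserves, acquisitions etc. for @$itemnumbers here,
        # and reindex both records once at the end.
        return scalar @$itemnumbers;
    }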
Taking this since I need this for bug 20447.
Created attachment 92828 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Created attachment 94505 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Doesn't apply anymore: Apply? [(y)es, (n)o, (i)nteractive] y Applying: Bug 22690: Refactor merging of records to improve performance (Elasticsearch) Using index info to reconstruct a base tree... M C4/Items.pm M Koha/Biblio.pm M Koha/Item.pm M t/db_dependent/Koha/Item.t Falling back to patching base and 3-way merge... Auto-merging t/db_dependent/Koha/Item.t Auto-merging Koha/Item.pm CONFLICT (content): Merge conflict in Koha/Item.pm Auto-merging Koha/Biblio.pm CONFLICT (content): Merge conflict in Koha/Biblio.pm Auto-merging C4/Items.pm error: Failed to merge in the changes. Patch failed at 0001 Bug 22690: Refactor merging of records to improve performance (Elasticsearch)
Created attachment 97696 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Rebased.
Hi Ere, the patch is working, but one test fails: Test Summary Report ------------------- t/db_dependent/Koha/Item.t (Wstat: 256 Tests: 5 Failed: 1) Failed test: 3 Non-zero exit status: 1 The second test was OK.
Mike, which test is failing? I'm seeing some errors from an unrelated test that tries to clean up the database in the wrong order (removing patrons before their checkouts).
Hi Ere, prove t/db_dependent/Koha/Item.t: not ok 3 - Value is mapped correctly for column biblionumber # Failed test 'Value is mapped correctly for column biblionumber' # at t/db_dependent/Koha/Item.t line 109. # got: undef # expected: '462' not ok 4 - Value is mapped correctly for column biblioitemnumber # Failed test 'Value is mapped correctly for column biblioitemnumber' # at t/db_dependent/Koha/Item.t line 109. # got: undef # expected: '461' not ok 28 - Value is mapped correctly for column timestamp # Failed test 'Value is mapped correctly for column timestamp' # at t/db_dependent/Koha/Item.t line 109. # got: undef # expected: '2020-01-24 22:23:22' not ok 42 - Value is mapped correctly for column biblionumber # Failed test 'Value is mapped correctly for column biblionumber' # at t/db_dependent/Koha/Item.t line 124. # got: undef # expected: '462' not ok 67 - Value is mapped correctly for column timestamp # Failed test 'Value is mapped correctly for column timestamp' # at t/db_dependent/Koha/Item.t line 124. # got: undef # expected: '2020-01-24 22:23:22' # Looks like you failed 6 tests of 79. not ok 3 - as_marc_field() tests Is it a problem with my mappings?
Mike, that seems unrelated to any of the changes here. I tried with both MARC 21 and UNIMARC default mappings and couldn't reproduce, so I'd say it's an issue with your mappings, but I'm a bit at a loss if they're at defaults. Do you see the issue with a plain master version of Koha?
Hi Ere, I've got kohadevbox on master, with almost default settings. The framework test shows 1 error: subfields not in the same tab, but only on biblio fields. I'll test it again and probably switch it to SO.
Created attachment 98143 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Comment on attachment 98143 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) Review of attachment 98143 [details] [review]: ----------------------------------------------------------------- This looks like a reasonable approach to me and seems to work well.. a relatively minor point regarding the introduced Koha/Biblio method. ::: Koha/Biblio.pm @@ +854,5 @@ > +Move items from the given biblio > + > +=cut > + > +sub move_items_from_biblio { I feel like 'move_items_from_biblio' isn't immediately obvious as a function name.. are we moving from 'this' biblio or to it? Perhaps 'adopt_items_from_biblio' is a bit clearer?
(In reply to Martin Renvoize from comment #21) > Comment on attachment 98143 [details] [review] [review] > Bug 22690: Refactor merging of records to improve performance (Elasticsearch) > > Review of attachment 98143 [details] [review] [review]: > ----------------------------------------------------------------- > > This looks like a reasonable approach to me and seems to work well.. a > relatively minor point regarding the introduced Koha/Biblio method. > > ::: Koha/Biblio.pm > @@ +854,5 @@ > > +Move items from the given biblio > > + > > +=cut > > + > > +sub move_items_from_biblio { > > I feel like 'move_items_from_biblio' isn't immediately obvious as a function > name.. are we moving from 'this' biblio or to it? Perhaps > 'adopt_items_from_biblio' is a bit clearer? I would just reverse the original idea: move_items_to_biblio. Then it should be obvious we are talking about the current object's items being moved. Not sure if Perl OO style supports overloading, but in that case it could be just move_items, and the parameters would define whether we move them to another biblio or (yet to be included in Koha) to a holdings record.
(In reply to Joonas Kylmälä from comment #22) > I would just reverse the original idea: move_items_to_biblio. Then it should > be obvious we are talking about the current object's items being moved. Not > sure if perl OO style supports overloading, but in that case it could be > just move_items, and the parameters defined whether we move them to another > biblio or (yet to be included to Koha) to a holdings record. Or we could do this even in the Item and Items objects for maximum code reusability, like $biblio->items->move_to_biblio?
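To make the naming options concrete, a usage sketch (these are proposals under discussion, not settled API):

    # Proposals under discussion, not settled API:

    # Biblio-level, reads as "this biblio takes the items over":
    $to_biblio->adopt_items_from_biblio($from_biblio);

    # Reversed direction, reads as "this biblio gives its items away":
    $from_biblio->move_items_to_biblio($to_biblio);

    # Pushed down to the item set, which also yields a per-item form:
    $from_biblio->items->move_to_biblio($to_biblio);
    $item->move_to_biblio($to_biblio);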
> Or we could do this even in the Item and Items objects for maximum code > reusability, like $biblio->items->move_to_biblio? Maybe just move in this case? They can't go anywhere else than another biblio.
(In reply to Katrin Fischer from comment #24) > > Or we could do this even in the Item and Items objects for maximum code > > reusability, like $biblio->items->move_to_biblio? > > Maybe just move in this case? They can't go anywhere else than another > biblio. This bug is a dependency for Bug 20447 which introduces holdings records so similar moving of items to another holdings record is needed there in addition to moving to another biblio.
I'll rename the method as Martin suggested. Any other changes will cause complications with bug 20447 where we need to move holdings too (so that would become adopt_holdings_and_items_from_biblio). They can't really be separated since they depend on each other.
Created attachment 98210 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 98211 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Rename done.
root@6d3e2c82bf14:koha(bug22690-qa)$ prove t/db_dependent/Koha/Item.t t/db_dependent/Koha/Item.t .. 4/5 # No tests run! t/db_dependent/Koha/Item.t .. 5/5 # Failed test 'No tests run for subtest "move_to_biblio() tests"' # at t/db_dependent/Koha/Item.t line 435. Can't use string ("k8XjSy") as a HASH ref while "strict refs" in use at /kohadevbox/koha/C4/Reserves.pm line 175. # Looks like your test exited with 255 just after 5. t/db_dependent/Koha/Item.t .. Dubious, test returned 255 (wstat 65280, 0xff00) Failed 1/5 subtests Test Summary Report ------------------- t/db_dependent/Koha/Item.t (Wstat: 65280 Tests: 5 Failed: 1) Failed test: 5 Non-zero exit status: 255 Files=1, Tests=5, 6 wallclock secs ( 0.04 usr 0.01 sys + 3.30 cusr 0.62 csys = 3.97 CPU) Result: FAIL
Created attachment 99376 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Fixed tests (use the new params for AddReserve).
Created attachment 99379 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Hi Ere, I get some errors for prove -v t/db_dependent/Koha/Item.t # Looks like you failed 6 tests of 79. not ok 3 - as_marc_field() tests ok 5 - move_to_biblio() tests # Looks like you failed 1 test of 5. Dubious, test returned 1 (wstat 256, 0x100) Failed 1/5 subtests Test Summary Report ------------------- t/db_dependent/Koha/Item.t (Wstat: 256 Tests: 5 Failed: 1) Failed test: 3 Non-zero exit status: 1 Files=1, Tests=5, 4 wallclock secs ( 0.03 usr 0.02 sys + 2.70 cusr 0.32 csys = 3.07 CPU) Result: FAIL
Mike, I can't reproduce it. Can you provide more insight into which tests failed? And is it an empty database or does it have existing data?
Created attachment 100316 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 104796 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Just rebased on master
Created attachment 104797 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Comment on attachment 104797 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) Review of attachment 104797 [details] [review]: ----------------------------------------------------------------- Great job Ere, I have just few concerns about the code. But I like this refactor a lot! ::: Koha/Item.pm @@ +840,5 @@ > + > + my $schema = Koha::Database->new()->schema(); > + > + # Acquisition orders related to the item > + my $orders = $schema->resultset('Aqorder')->search( Koha::Object(s) should be used @@ +847,5 @@ > + ); > + $orders->update_all({ biblionumber => $biblionumber }); > + > + # reserves > + my $reserves = $self->_result->reserves; there is $self->holds method @@ +867,5 @@ > + $dbh->do("UPDATE tmp_holdsqueue SET biblionumber=? WHERE itemnumber=?", undef, $biblionumber, $self->itemnumber); > + } > + ); > + #my $tmp_holdsqueues = $self->_result->tmp_holdsqueues; > + #$tmp_holdsqueues->update_all({ biblionumber => $biblionumber }); Please, remove these commented lines ::: cataloguing/merge.pl @@ +106,5 @@ > UPDATE suggestions SET biblionumber = ? WHERE biblionumber = ? > "); > + my $sth_articlerequests = $dbh->prepare(" > + UPDATE article_requests SET biblionumber = ? WHERE biblionumber = ? > + "); There should be no SQL in .pl script. Please, use Koha::Object(s)
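For reference, the kind of replacement being requested might look like this (a sketch; it assumes Koha::ArticleRequests is the Koha::Objects class mapped to the article_requests table, and the variable names are illustrative):

    # Sketch of a Koha::Object(s)-style replacement for the raw SQL above.
    use Koha::ArticleRequests;

    Koha::ArticleRequests->search( { biblionumber => $frombiblio } )
        ->update( { biblionumber => $tobiblio } );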
Created attachment 104871 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Thanks, Josef. Should be fixed now.
Created attachment 105445 [details] error > With the patch: > - Do the same as above > - This time verify that the records are merged without timeout
BTW, librarian question: is it expected for the merge to complain about the 003 field being missing (and to block the merge)? In my test data (koha-testing-docker and I suppose KohaDevBox), I have to edit the books framework to make 003 visible so it will be populated when I create the two records.
The failure in comment 43 might have been because I had the patch applied before initializing my dev env (koha-testing-docker). So I retried: - initialize & start instance - apply patch - restart_all - try the merge - timeout So there is definitely an issue.
The above tests were done on Debian with MariaDB 10.3 and Elasticsearch 6.8.8. == == I retried the above simplified test plan (the second half of the full one) with Debian 9, MySQL 5.5 and still ES 6.8.8. Still the same issue. Though this time Koha was stuck on additem.pl, whereas in the previous test it was merge.pl that I was seeing in the process list (when checking that it was still running in the background despite the timeout). == == I retried what I did in comment 43 on Debian 9 & MySQL 5.5 and it didn't happen (no error, stuck on additem.pl). So maybe it's linked to the OS & DB, or I did something else that messed it up. == == I think that's all for now from me. I'll be happy to do some more tests with a new patch or precise instructions to help troubleshoot the issue in case it only happens to me.
Forget what I said about being stuck on additem.pl or merge.pl; it seems to be just the first page that I queried after I reset the dev env. After I do a "reset_all", the process that is stuck at 100% CPU is the starman worker. (So maybe it was some other kind of bug.)
One last thing, it also happens with ES5
Victor, the patch doesn't change anything with the framework handling, so whether that's a problem is unrelated to this issue, as far as I can see. Thanks for testing in any case. I'll see if I can reproduce the problem.
OK, I can see it's not fast enough, but it doesn't seem to be related to Elasticsearch but rather to the fact that Koha::Objects must be used. I'll see what I can do to improve it.
Created attachment 106144 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Signed-off-by: Michal Denar <black23@gmail.com>
Oops. It _was_ still related to ES, since I forgot to skip the ES update when storing a single item. Should be fixed now.
Michal: Could you verify the patch is still signed off even after the rebase Ere did? It seems like he left your sign-off there even though I think it should have been removed after the rebase. If the patch is signed off, could you change the bug status to Signed Off?
Created attachment 108777 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Created attachment 109694 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Created attachment 110991 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Created attachment 110994 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Rebased to solve conflict caused by "Bug 25265: Prevent double reindex of the same item in batchmod" / 88cb7f223d. Replaced ModZebra calls with Koha::SearchEngine::Indexer.
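For those following along, the Koha::SearchEngine::Indexer call pattern that replaces direct ModZebra calls looks roughly like this (indicative; check the class itself for the exact signature):

    use Koha::SearchEngine;
    use Koha::SearchEngine::Indexer;

    my $indexer = Koha::SearchEngine::Indexer->new(
        { index => $Koha::SearchEngine::BIBLIOS_INDEX } );

    # One batched call per merge instead of one ModZebra call per item move:
    $indexer->index_records( [ $from_biblionumber, $to_biblionumber ],
        'specialUpdate', 'biblioserver' );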
Is there an easy way to add 1,000 items for a record?
Hi David, yes, you can use New item/Add multiple copies of this item.
(In reply to Michal Denar from comment #60) > Hi David, > yes, you can use New item/Add multiple copies of this item. Thanks Michal! I can see it now!
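If you would rather script it, something along these lines from a Koha shell session can also populate the items quickly (a sketch: the biblionumber, branch code and barcode prefix are placeholders for your own test data):

    use Koha::Biblios;
    use Koha::Item;

    # Placeholders: use a biblionumber and branchcode from your own data.
    my $biblio = Koha::Biblios->find(123);
    my $biblioitemnumber = $biblio->biblioitem->biblioitemnumber;

    for my $i ( 1 .. 1000 ) {
        Koha::Item->new(
            {
                biblionumber     => $biblio->biblionumber,
                biblioitemnumber => $biblioitemnumber,
                homebranch       => 'CPL',
                holdingbranch    => 'CPL',
                barcode          => "TESTMERGE$i",
            }
        )->store;
    }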
Created attachment 111196 [details] Bug 22690 - Output from tests after patch applied I had a go at testing this with koha-testing-docker on my average laptop (default KTD uses ES 6.x). After applying the patch, the time reduced from about an hour to around 30 minutes for all the records to be merged and listed under the record (in the staff interface and OPAC). About 5 minutes in I get this timeout message in the browser: Proxy Error The proxy server received an invalid response from an upstream server. The proxy server could not handle the request Reason: Error reading from remote server Apache/2.4.25 (Debian) Server at 127.0.0.1 Port 8081 Test results after patch applied: - prove -v t/db_dependent/Koha/Item.t - fail, took about 30 minutes to run - prove -v t/db_dependent/Items/MoveItemFromBiblio.t I haven't changed the status to Failed QA. I'm not sure my testing has helped!
David, thanks for testing. It's very valuable! I'm seeing the same as you, and it's not good. I'll investigate why that is.
Created attachment 111207 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
There was a parameter name change in master that caused the biblio to be reindexed separately for every item. Now fixed, and you should see greatly improved performance.
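In other words, the deferred-indexing pattern now looks something like this (a sketch; skip_record_index is the parameter name used by Koha::Item->store):

    # Skip per-item indexing inside the loop...
    for my $item ( $from_biblio->items->as_list ) {
        $item->move_to_biblio( $to_biblio, { skip_record_index => 1 } );
    }

    # ...and index each record once at the end.
    $indexer->index_records(
        [ $from_biblio->biblionumber, $to_biblio->biblionumber ],
        'specialUpdate', 'biblioserver' );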
Created attachment 111236 [details] Bug 22690 - Output from tests for Item.t Thanks Ere! That made a huge difference, and the merge takes almost no time at all now (less than 1 minute). However, the tests for prove -v t/db_dependent/Koha/Item.t run a lot faster as well, but still fail - see attachment Bug 22690 - Output from tests for Item.t.
Hi, (In reply to David Nind from comment #66) > However, the tests for prove -v t/db_dependent/Koha/Item.t run a lot > faster as well, but still fail - see attachment Bug 22690 - Output from > tests for Item.t. Please run the test after running "reset_all" and it should work. The Koha unit tests tend to fail if you change system preferences (which seems to be the case here with the SearchEngine syspref).
Created attachment 111237 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Items/MoveItemFromBiblio.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: David Nind <david@davidnind.com>
Hi Josef, I see you are QA contact on this one, will you be able to finish here or should we reassign?
Hi Ere, Thank you! Great refactoring! A few things: 1 - Generally we assume that we should index unless passed 'skip_record_index', rather than hardcoding not to 2 - If you change 'move_to_biblio' to take that param and allow the store to index otherwise, then we can remove MoveItemFromBiblio and simply use move_to_biblio in moveitem.pl 3 - Please add a test to t/db_dependent/Koha/SearchEngine/Indexer.t to cover when indexing happens (or doesn't) for these subroutines (to prove the adopt method only calls once)
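A minimal sketch of the kind of test point 3 asks for, using Test::MockModule to count indexer calls (the class and method names are assumptions based on this patch):

    use Test::More;
    use Test::MockModule;

    my $index_calls = 0;
    my $mock = Test::MockModule->new('Koha::SearchEngine::Elasticsearch::Indexer');
    $mock->mock( index_records => sub { $index_calls++ } );

    $to_biblio->adopt_items_from_biblio($from_biblio);

    is( $index_calls, 1, 'adopt_items_from_biblio triggers indexing only once' );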
Created attachment 112100 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Hi Nick, and thanks for the excellent feedback. I've now implemented the changes you requested.
Created attachment 112184 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Latest version makes sure params is defined in Item->move_to_biblio.
Patch doesn't apply anymore.
Created attachment 112567 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Rebased. However the patch has two QA failures reported by the QA tool: FAIL C4/Items.pm OK critic OK forbidden patterns OK git manipulation OK pod FAIL pod coverage OK spelling OK valid FAIL cataloguing/moveitem.pl OK critic OK forbidden patterns OK git manipulation OK pod OK spelling FAIL valid These need to be fixed.
Created attachment 112569 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Fixed the QA tool error FAIL cataloguing/moveitem.pl with > - my $to_biblio = Koha::Biblios->find($biblionumber) > + my $to_biblio = Koha::Biblios->find($biblionumber); It probably happened during one of the rebases. The POD coverage failure FAIL C4/Items.pm is not caused by this patch as far as I can tell. Moving to Needs Signoff.
I'm getting test failures with prove -v t/db_dependent/Koha/Item.t after the patch is applied. I ran this after applying the patches, but before adding the 1000 items and merging. They passed before the patch was applied. Note: I couldn't get the tests to complete after adding the 1000 items and merging. Apart from this, everything else in the test plan works.
(In reply to David Nind from comment #80) > I'm getting test failures with prove -v t/db_dependent/Koha/Item.t after the > patch is applied. I ran this after applying the patches, but before adding > the 1000 items and merging. > > They passed before the patch was applied. > > Note: I couldn't get the tests to complete after adding the 1000 items and > merging. > > Apart from this, everything else in the test plan works. I cannot reproduce on koha-testing-docker, what is the error you get?
Created attachment 112754 [details] Test results - Bug 22690 Attached are the test results I got using koha-testing-docker: - Before the patch is applied: the tests pass - After changing the search engine system preference to elasticsearch and reindexing: tests fail - After patch applied: tests fail - After 1,000 records added and merging: tests fail
(In reply to David Nind from comment #82) > - Before the patch is applied: the tests pass > - After changing the search engine system preference to elasticsearch and > reindexing: tests fail > - After patch applied: tests fail > - After 1,000 records added and merging: tests fail This seems to be the same case as in comment 67.
I think the test failure is a red herring.. However.. I believe we need to add unit tests for the new Koha::Biblio->adopt_items_from_biblio method and the Koha::Item->item_orders relation accessor before we proceed. These should be fairly trivial tests to add: one to prove that the loop happens and the final indexing takes place for the set, and a second to prove the relation is set up correctly. Failing QA
Created attachment 113308 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Created attachment 113309 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio
(In reply to Martin Renvoize from comment #84) > However.. I believe we need to add unit tests for the new > Koha::Biblio->adopt_items_from_biblio method and Koha::Item->item_orders > relation accessor before we proceed. Absolutely. Tests now added.
Thanks Ere, I'll get back to QA on this asap :)
Doesn't apply anymore
Created attachment 113941 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Created attachment 113942 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio
Created attachment 113967 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 113968 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com>
So the problem is actually that the reindex is not async. Bug 27344 is trying to provide a solution for that.
(In reply to Jonathan Druart from comment #94) > So the problem is actually that the reindex is not async. > Bug 27344 is trying to provide a solution for that. That's part of it, but I believe the refactoring improves the flow and readability of the code as well.
Comment on attachment 113967 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) Review of attachment 113967 [details] [review]: ----------------------------------------------------------------- ::: Koha/Item.pm @@ +1082,5 @@ > + > +$params: > + skip_record_index => 1|0 > + > +Returns undef if the move failed or the biblionumber of the destination record otherwise I wonder if this might be nicer as a fluent interface (i.e. returning $self so it can be chained.. the undef return would become a no-op and the final return would be the updated Koha::Item object?) One for later, perhaps. ::: Koha/Schema/Result/Item.pm @@ +778,4 @@ > '+exclude_from_local_holds_priority' => { is_boolean => 1 }, > ); > > +# Relationship with orders via the aqorders_item table that not have foreign keys Was there a reason not to add the foreign key and let dbic generate the relationship?
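For comparison, a sketch of the fluent variant floated here (not what the patch currently does):

    # Sketch only -- the fluent variant, not the patch's behaviour.
    sub move_to_biblio {
        my ( $self, $to_biblio, $params ) = @_;

        # A no-op instead of returning undef keeps the call chainable.
        return $self if $self->biblionumber == $to_biblio->biblionumber;

        # ... move the item and its related rows here ...

        return $self;    # e.g. $item->move_to_biblio($biblio)->discard_changes
    }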
(In reply to Ere Maijala from comment #95) > (In reply to Jonathan Druart from comment #94) > > So the problem is actually that the reindex is not async. > > Bug 27344 is trying to provide a solution for that. > > That's part of it, but I believe the refactoring improves the flow and > readability of the code as well. I think I agree, the code flow is clearer with this work.
Comment on attachment 113967 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) Review of attachment 113967 [details] [review]: ----------------------------------------------------------------- ::: Koha/Item.pm @@ +1064,5 @@ > +sub item_orders { > + my ( $self ) = @_; > + > + my $orders = $self->_result->item_orders; > + return Koha::Acquisition::Orders->_new_from_dbic($orders); In its current form, this can result in failure.. I'm not seeing any handling for that... i.e. if an item is deleted it gets moved to deleted_items but the itemnumber remains in the aqorder_items table as there is no foreign key constraint.. have you tested this case?
Comment on attachment 113967 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) Review of attachment 113967 [details] [review]: ----------------------------------------------------------------- ::: Koha/Item.pm @@ +1102,5 @@ > + biblioitemnumber => $to_biblio->biblioitem->biblioitemnumber > + })->store({ skip_record_index => $params->{skip_record_index} }); > + > + # Acquisition orders > + $self->item_orders->update({ biblionumber => $biblionumber }, { no_triggers => 1 }); no_triggers: Verified this one is OK.. we don't have any code-level triggers based on biblionumber that I can see.. happy with this and I understand why you're using it. @@ +1105,5 @@ > + # Acquisition orders > + $self->item_orders->update({ biblionumber => $biblionumber }, { no_triggers => 1 }); > + > + # Holds > + $self->holds->update({ biblionumber => $biblionumber }, { no_triggers => 1 }); I don't see any code-level triggers at all for Koha::Hold or Koha::Holds.. as such I don't think we should call 'no_triggers' here.. Without a local ->store method in Koha::Hold, or a local ->update method in Koha::Holds, the result of calling Koha::Objects->update should be the same as without no_triggers passed. As such, I feel for future-proofing we should not pass no_triggers, as we don't currently know that our biblionumber change here wouldn't be part of a trigger in the future.
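Concretely, the change requested for the holds update is just dropping the option (sketch):

    $self->holds->update( { biblionumber => $biblionumber } );
    # rather than:
    # $self->holds->update( { biblionumber => $biblionumber }, { no_triggers => 1 } );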
Failing QA to get some attention.. I don't think anything is major here.. it's more that I want to get verification that things have been considered and taken care of.
(In reply to Martin Renvoize from comment #98) > Comment on attachment 113967 [details] [review] [review] > Bug 22690: Refactor merging of records to improve performance (Elasticsearch) > > Review of attachment 113967 [details] [review] [review]: > ----------------------------------------------------------------- > > ::: Koha/Item.pm > @@ +1064,5 @@ > > +sub item_orders { > > + my ( $self ) = @_; > > + > > + my $orders = $self->_result->item_orders; > > + return Koha::Acquisition::Orders->_new_from_dbic($orders); > > In its current form, this can result in failure.. I'm not seeing any > handling for that... > > i.e. if an item is deleted it gets moved to deleted_items but the itemnumber > remains in the aqorder_items table as there is no foreign key constraint.. > have you tested this case? Not sure I understand what error possibility you see here. Koha::Item is only for existing items? So if you have a Koha::Item object it must have an itemnumber, no?
(In reply to Martin Renvoize from comment #96) > Comment on attachment 113967 [details] [review] [review] > Bug 22690: Refactor merging of records to improve performance (Elasticsearch) > > Review of attachment 113967 [details] [review] [review]: > ----------------------------------------------------------------- > > ::: Koha/Item.pm > @@ +1082,5 @@ > > + > > +$params: > > + skip_record_index => 1|0 > > + > > +Returns undef if the move failed or the biblionumber of the destination record otherwise > > I wonder if this might be nicer as a fluent interface (i.e. returning $self > so it can be chained.. the undef return would become a no-op and the final > return would be the updated Koha::Item object?) > > > One for later, perhaps. Probably yes, but I tried to make it resemble the old MoveItemFromBiblio. The caller wants to know whether the move succeeded, and I didn't want to broaden the scope, to avoid making this change any larger. > > ::: Koha/Schema/Result/Item.pm > @@ +778,4 @@ > > '+exclude_from_local_holds_priority' => { is_boolean => 1 }, > > ); > > > > +# Relationship with orders via the aqorders_item table that not have foreign keys > > Was there a reason not to add the foreign key and let dbic generate the > relationship? The same as above, to try to manage the scope of the change. It would make sense to do that, but I'm afraid there would be some gotchas, so it's better handled separately.
Created attachment 118226 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 118227 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 118229 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 118231 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com>
(In reply to Martin Renvoize from comment #98) > Comment on attachment 113967 [details] [review] [review] > Bug 22690: Refactor merging of records to improve performance (Elasticsearch) > > Review of attachment 113967 [details] [review] [review]: > ----------------------------------------------------------------- > > ::: Koha/Item.pm > @@ +1064,5 @@ > > +sub item_orders { > > + my ( $self ) = @_; > > + > > + my $orders = $self->_result->item_orders; > > + return Koha::Acquisition::Orders->_new_from_dbic($orders); > > In it's current form, this can result in failure.. I'm not seeing any > handling for that... > > i.e. if an item is deleted it gets moved to deleted_items but the itemnumber > remains in the aqorder_items table as there is no foreign key constraint.. > have you tested this case? (See also comment #101.) The only case I can see this failing is if the underlying item record for the Item instance here gets deleted while the item move is being processed. Error checking added. Not tested, however, since this should be extremely rare. There are a lot of similar accessors with the same issue.
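One possible shape of such a guard, purely as a sketch (the actual error checking in the patch may differ; 'item_orders' is the DBIC relationship name the patch introduces):

sub item_orders {
    my ($self) = @_;

    # Defensive: if the relationship lookup yields nothing usable
    # (e.g. the underlying item row vanished mid-move), return undef
    # rather than passing a bad value to _new_from_dbic
    my $orders = $self->_result->item_orders;
    return unless $orders;

    return Koha::Acquisition::Orders->_new_from_dbic($orders);
}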
The "Bug 22690: Add more tests" patch removes accidentally > $schema->storage->txn_rollback; > lines from the subtest above. Also I rebased bug 20447 bug and now the adopt_items_from_biblio() test fails there, it could be probably due to some problem in bug 20447 patches but maybe also in these patches – will investigate.
Created attachment 118391 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
There was a rebase issue in the attachment made by Ere in comment 106; in the previous revision https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=113942 it was still OK. I fixed the rebase issue now and attached the working patch.
Created attachment 119433 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Michal Denar <black23@gmail.com> Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 119434 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Michal Denar <black23@gmail.com>
Martin, could you take another look here now that the QA issues you spotted should be fixed? :)
Created attachment 120820 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Created attachment 120821 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Rebased. There were only small conflicts in the tests.
Created attachment 120852 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Created attachment 120853 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
It works! > - After a while the web server should give you a timeout error (the merging process may still continue) Indeed it continues! Refreshing the search shows items being moved from one record to the other. Without the patch it's unbelievably slow, so I'm not going to wait for all of them: 210 items in 22:35 (1355 sec) => 0.155 items/sec. With the patch: 1004 items in 50 sec => 20.08 items/sec. Speedup: ~129x!
Issue found, more warnings in the tests with these patches: ============ Without patches =================== kohadev-koha@kohadevbox:/kohadevbox/koha$ prove t/db_dependent/Koha/Biblio.t t/db_dependent/Koha/Item.t t/db_dependent/Koha/SearchEngine/Indexer.t t/db_dependent/Koha/Biblio.t ................ 4/14 Use of uninitialized value $sub6 in pattern match (m//) at /kohadevbox/koha/Koha/SearchEngine/Elasticsearch.pm line 596. t/db_dependent/Koha/Biblio.t ................ ok t/db_dependent/Koha/Item.t .................. ok t/db_dependent/Koha/SearchEngine/Indexer.t .. ok All tests successful. ============ With patches =================== kohadev-koha@kohadevbox:/kohadevbox/koha$ prove t/db_dependent/Koha/Biblio.t t/db_dependent/Koha/Item.t t/db_dependent/Koha/SearchEngine/Indexer.t t/db_dependent/Koha/Biblio.t ................ 4/15 Use of uninitialized value $sub6 in pattern match (m//) at /kohadevbox/koha/Koha/SearchEngine/Elasticsearch.pm line 596. t/db_dependent/Koha/Biblio.t ................ ok t/db_dependent/Koha/Item.t .................. 10/11 DBIx::Class::Storage::DBI::select_single(): Query returned more than one row. SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235 DBIx::Class::Storage::DBI::select_single(): Query returned more than one row. SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235 DBIx::Class::Storage::DBI::select_single(): Query returned more than one row. SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235 t/db_dependent/Koha/Item.t .................. ok t/db_dependent/Koha/SearchEngine/Indexer.t .. 1/2 Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91. Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. t/db_dependent/Koha/SearchEngine/Indexer.t .. ok All tests successful.
I think we have a regression here: the new move_to_biblio function (previously MoveItemFromBiblio) doesn't index the old biblio, i.e. the item info there is outdated. The indexing is only done for the biblio the item was moved to, not the one it came from.
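Conceptually the fix amounts to queueing both records for indexing after the move; a rough sketch (the indexer invocation is an assumption based on the index_records() calls mocked elsewhere in this thread):

unless ( $params->{skip_record_index} ) {
    my $indexer = Koha::SearchEngine::Indexer->new(
        { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
    # Re-index the record the item came from as well as the one it moved to
    $indexer->index_records( $from_biblionumber, "specialUpdate", "biblioserver" );
    $indexer->index_records( $to_biblionumber,   "specialUpdate", "biblioserver" );
}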
Created attachment 121389 [details] [review] Bug 22690: (QA follow-up) Silence manually generated warnings In our test setup we mock the index_records() to produce warnings like this: Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. By wrapping all our item creations to warnings_are{} we can silence them. Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
I attached a patch to fix those manually added warnings, but the one with > DBIx::Class::Storage::DBI::select_single(): Query returned more than one row. SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235 seems a bit trickier. Not sure if we should fix TestBuilder.pm or...
Created attachment 121392 [details] [review] Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio() We need to update the search index record for the old biblio where the item was moved from to keep the item info in search index up-to-date. To test: 1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
To debug the remaining warning I added the following lines to TestBuilder.pm > use Data::Dumper; > warn "$linked_tbl with " . Dumper($fk_value) if $linked_tbl eq 'Item'; this is just above the line: > return {} if $self->schema->resultset($linked_tbl)->find( $fk_value ); inside _create_links(). The output I got when then running the test is as follows: > Item with $VAR1 = { > 'itemnumber' => 1773 > }; > Item with $VAR1 = { > 'biblionumber' => 1234 > }; > DBIx::Class::Storage::DBI::select_single(): Query returned more than one row. > SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 239 So for some reason we are using ->find() in TestBuilder.pm even for non-primary keys (here 'biblionumber' of the items table) and that causes problems. I still haven't gotten to the bottom of this, so any help to resolve it is welcome.
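For what it's worth, DBIC's find() is only well-defined for lookups over a unique constraint; with a plain { biblionumber => ... } against items it degenerates into single() over a multi-row set, hence the warning. A guard in _create_links() could look roughly like this (a sketch only, using the names from the debug output above; the uniqueness check is a rough heuristic, a strict version would require the keys to cover one whole constraint):

my $rs     = $self->schema->resultset($linked_tbl);
my $source = $rs->result_source;

# Columns that participate in any unique constraint
my %unique_col = map { $_ => 1 }
    map { $source->unique_constraint_columns($_) }
    $source->unique_constraint_names;

if ( grep { !$unique_col{$_} } keys %$fk_value ) {
    # Non-unique lookup: count matches instead of tripping ->single()
    return {} if $rs->search($fk_value)->count;
}
else {
    return {} if $rs->find($fk_value);
}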
Created attachment 121474 [details] [review] Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level We need to pass undef itemnumber to build_object() to actually have a hold without an item tied to it. Otherwise build_object() will create automatically an item for us (thus making it an item-level hold) To test: $ prove t/db_dependent/Koha/Item.t
Victor, the warnings have been addressed now in the follow-ups. Please give one more look and mark this as PQA & sign-off if all looks good to you.
Note: please be sure to include the patch from bug 28479; it fixes one of the warnings in TestBuilder.pm
Created attachment 121516 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Created attachment 121517 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Created attachment 121518 [details] [review] Bug 22690: (QA follow-up) Silence manually generated warnings In our test setup we mock the index_records() to produce warnings like this: Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. By wrapping all our item creations to warnings_are{} we can silence them. Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Created attachment 121519 [details] [review] Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio() We need to update the search index record for the old biblio where the item was moved from to keep the item info in search index up-to-date. To test: 1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Created attachment 121520 [details] [review] Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level We need to pass undef itemnumber to build_object() to actually have a hold without an item tied to it. Otherwise build_object() will create automatically an item for us (thus making it an item-level hold) To test: $ prove t/db_dependent/Koha/Item.t Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Thanks for the follow-up; no additional warnings and the test plan still works :) --- QA tools: FAIL C4/Items.pm OK critic OK forbidden patterns OK git manipulation OK pod FAIL pod coverage POD coverage was greater before, try perl -MPod::Coverage=PackageName -e666 OK spelling OK valid That looks like a false positive: a function was deleted, so its POD went with it. --- (In reply to Joonas Kylmälä from comment #127) > Please give one more look and mark this as PQA & sign-off if all looks good to you. The code is way over my head, so I can't go further than this. But at least the QAer only has the code left to review.
Created attachment 122846 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122847 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122848 [details] [review] Bug 22690: (QA follow-up) Silence manually generated warnings In our test setup we mock the index_records() to produce warnings like this: Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. By wrapping all our item creations to warnings_are{} we can silence them. Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122849 [details] [review] Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio() We need to update the search index record for the old biblio where the item was moved from to keep the item info in search index up-to-date. To test: 1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122850 [details] [review] Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level We need to pass undef itemnumber to build_object() to actually have a hold without an item tied to it. Otherwise build_object() will create automatically an item for us (thus making it an item-level hold) To test: $ prove t/db_dependent/Koha/Item.t Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
I am so sorry, this completely fell off my radar with an overly busy period at work. This is all looking great now and has had lots more eyes on it since my own comments were all taken care of. All works as expected for me, the code is clean and it's a great improvement as a whole. QA scripts happy, tests all pass. Passing QA, thanks for the perseverance everyone.
Created attachment 122886 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122887 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122888 [details] [review] Bug 22690: (QA follow-up) Silence manually generated warnings In our test setup we mock the index_records() to produce warnings like this: Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. By wrapping all our item creations to warnings_are{} we can silence them. Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122889 [details] [review] Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio() We need to update the search index record for the old biblio where the item was moved from to keep the item info in search index up-to-date. To test: 1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 122890 [details] [review] Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level We need to pass undef itemnumber to build_object() to actually have a hold without an item tied to it. Otherwise build_object() will create automatically an item for us (thus making it an item-level hold) To test: $ prove t/db_dependent/Koha/Item.t Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Patches rebased.
Does this need more testing? More review? (it's passed QA)
Created attachment 122936 [details] [review] Bug 22690: Remove MoveItemFromBiblio import Added in the meanwhile by bug 17600.
Created attachment 122938 [details] [review] Bug 22690: Add missing txn_begin in subtest
1. Why is Koha::Item->item_orders not named Koha::Item->orders? It returns a Koha::Acquisition::Orders. 2. In move_to_biblio: 1215 return unless $self->biblionumber != $to_biblio->biblionumber; I am a fan of unless, but not when there is a negation in the test :) return if $self->biblionumber == $to_biblio->biblionumber; Reads much better IMO. 3. There is Koha::Item->move_to_biblio($biblio) and Koha::Biblio->adopt_items_from_biblio($biblio). Don't you think Koha::Biblio->adopt_items_from_biblio could actually be Koha::Items->move_to_biblio? It would be more consistent and flexible. 4. In move_to_biblio you are calling, on a DBIC rs, ->update, then update_all: 1237 $hold_fill_target->update({ biblionumber => $to_biblionumber }); 1254 $linktrackers->update_all({ biblionumber => $to_biblionumber }); It's not consistent; is there a good reason for that? Please provide a fast reply, I can write the follow-up patches if needed.
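To illustrate point 3, a sketch of what the set-level method on Koha::Items might look like (names follow the discussion above, not necessarily the final patch):

package Koha::Items;

sub move_to_biblio {
    my ( $self, $to_biblio ) = @_;

    my $moved = 0;
    while ( my $item = $self->next ) {
        # Skip per-item indexing; the set re-indexes once at the end
        $moved++
            if $item->move_to_biblio( $to_biblio, { skip_record_index => 1 } );
    }
    return $moved;
}

Callers would then write $biblio->items->move_to_biblio($to_biblio) instead of $to_biblio->adopt_items_from_biblio($biblio).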
Created attachment 122939 [details] [review] Bug 22690: Remove MoveItemFromBiblio import Added in the meanwhile by bug 17600.
(In reply to Jonathan Druart from comment #150) > 1. Why is Koha::Item->item_orders not named Koha::Item->orders? > It returns a Koha::Acquisition::Orders. This should certainly be named 'orders'; I'm annoyed I overlooked that during QA :( > > 2. In move_to_biblio: > 1215 return unless $self->biblionumber != $to_biblio->biblionumber; > I am a fan of unless, but not when there is a negation in the test :) > return if $self->biblionumber == $to_biblio->biblionumber; > Reads much better IMO. Agreed. > > 3. There is Koha::Item->move_to_biblio($biblio) and > Koha::Biblio->adopt_items_from_biblio($biblio). > Don't you think Koha::Biblio->adopt_items_from_biblio could actually be > Koha::Items->move_to_biblio? > It would be more consistent and flexible. Hmmm, I think so long as the method name is clear I'm happy for it to live in either class. I do see what you mean though, and for consistency I certainly like that. > > 4. In move_to_biblio you are calling, on a DBIC rs, ->update, then update_all: > 1237 $hold_fill_target->update({ biblionumber => $to_biblionumber }); > > 1254 $linktrackers->update_all({ biblionumber => $to_biblionumber }); > > It's not consistent; is there a good reason for that? > > Please provide a fast reply, I can write the follow-up patches if needed. I think this is simply because we didn't have a Koha:: based resultset yet and he didn't want to add further complexity to the patch by adding that new class as well. Having said that, it's fairly trivial to add such a class so long as it's a basic one, so perhaps we should for consistency.
(In reply to Martin Renvoize from comment #152) > (In reply to Jonathan Druart from comment #150) > > 4. In move_to_biblio you are calling, on a DBIC rs, ->update, then update_all: > > 1237 $hold_fill_target->update({ biblionumber => $to_biblionumber }); > > > > 1254 $linktrackers->update_all({ biblionumber => $to_biblionumber }); > > > > It's not consistent; is there a good reason for that? > > > > Please provide a fast reply, I can write the follow-up patches if needed. > > I think this is simply because we didn't have a Koha:: based resultset yet > and he didn't want to add further complexity to the patch by adding that new > class as well. Having said that, it's fairly trivial to add such a class so > long as it's a basic one, so perhaps we should for consistency. Both are "raw" DBIC resultsets :)
Created attachment 123029 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123030 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123031 [details] [review] Bug 22690: (QA follow-up) Silence manually generated warnings In our test setup we mock the index_records() to produce warnings like this: Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. By wrapping all our item creations to warnings_are{} we can silence them. Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123032 [details] [review] Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio() We need to update the search index record for the old biblio where the item was moved from to keep the item info in search index up-to-date. To test: 1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123033 [details] [review] Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level We need to pass undef itemnumber to build_object() to actually have a hold without an item tied to it. Otherwise build_object() will create automatically an item for us (thus making it an item-level hold) To test: $ prove t/db_dependent/Koha/Item.t Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123034 [details] [review] Bug 22690: Add missing txn_begin in subtest
Created attachment 123035 [details] [review] Bug 22690: Remove MoveItemFromBiblio import Added in the meanwhile by bug 17600.
Created attachment 123036 [details] [review] Bug 22690: (QA follow-up) Rename 'item_orders' to 'orders'
Created attachment 123037 [details] [review] Bug 22690: (QA follow-up) Improve negation syntax
Created attachment 123038 [details] [review] Bug 22690: (QA follow-up) Move adopt_items_from_biblios to Koha::Items This patch moves the Koha::Biblio->adopt_items_from_biblio method to the Koha::Items set class and updates all calls from Biblio2->adopt_items_from_biblio(Biblio1) to Biblio->items->move_to_biblio(Biblio2)
Created attachment 123039 [details] [review] Bug 22690: (QA follow-up) Clarify uses of DBIC
Created attachment 123040 [details] [review] Bug 22690: (QA follow-up) Add relationships to linktracker Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123041 [details] [review] Bug 22690: DBIC Schema Updates
Created attachment 123042 [details] [review] Bug 22690: (QA follow-up) Use relationship accessor With the addition of foreign key relationships to the linktracker table we now get a DBIC relationship accessor we can use. This clarifies the code slightly by using the _result->relationship form to get the DBIC resultset. We should still introduce a Koha::Object based class for this table at some point.
Created attachment 123044 [details] [review] Bug 22690: (QA follow-up) Add TrackedLink classes and use them This patch adds Koha::TrackedLink(s) classes based on Koha::Object(s) and then adds the relationship accessor to Koha::Item and uses it within the move_to_biblio method. Tests for new relationship also added to t/db_dependent/Koha/Item.t
RM comments addressed in follow-ups. I also took the opportunity to add the Koha::TrackedLink(s) classes and use those Koha::Object(s) based accessors to clarify some of the code found within move_to_biblio. With this change, the odd call to update_all has been replaced with our Koha::Objects based update call and its automatic trigger handling (without passing no_triggers yet). The other call acting on a DBIC resultset is against a single row, so update is correct... I believe.
In Koha::Items::move_to_biblio the variable $from_biblio should be $to_biblio; as it stands it produces an error because there is no such variable.
Created attachment 123062 [details] [review] Bug 22690: (QA follow-up) Correct variable name The $from_biblio variable name doesn't exists after a refactoring that happened. Here we need to re-index both the $self biblio and $to_biblio biblio.
Thanks for catching that Joonas :)
Koha::Items::move_to_biblio only re-indexes the last item's biblio and not all of them; this was probably not spotted in testing because all the items were in the same biblio. Working on a patch.
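The gist of the fix is to remember every distinct source biblionumber while iterating and re-index each of them, instead of only the last one seen. A rough sketch (the indexer call shape is assumed, as before):

my %source_biblios;
while ( my $item = $self->next ) {
    # Record the biblio the item is leaving before moving it
    $source_biblios{ $item->biblionumber } = 1;
    $item->move_to_biblio( $to_biblio, { skip_record_index => 1 } );
}

my $indexer = Koha::SearchEngine::Indexer->new(
    { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
$indexer->index_records(
    [ keys %source_biblios, $to_biblio->biblionumber ],
    "specialUpdate", "biblioserver"
);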
I also get the following test failure: > root@kohadevbox:koha(master)$ prove t/db_dependent/Koha/Item.t > t/db_dependent/Koha/Item.t .. # No tests run! > t/db_dependent/Koha/Item.t .. 1/12 > # Failed test 'No tests run for subtest "tracked_links relationship"' > # at t/db_dependent/Koha/Item.t line 56. > Can't locate object method "_new_from_dbic" via package "Koha::TrackedLinks" (perhaps you forgot to load "Koha::TrackedLinks"?) at /kohadevbox/koha/Koha/Item.pm line 1220. > # Looks like your test exited with 255 just after 1.
(In reply to Martin Renvoize from comment #169) > RM comments addressed in followups. > > I also took the opportunity to add the Koha::TrackedLink(s) classes and use > those Koha::Object(s) based accessors to clarify some of the code found > within move_to_biblio. With this change, the odd call to update_all has > been replaced with our Koha::Objects based automatic trigger handling update > call (without passing no_triggers yet). The other DBIC resultset acting > call is against a single row so update is correct.. I believe. To clarify, I was not asking to add new classes. I was wondering if the inconsistency (update vs update_all) in those two calls was expected: 1257 my $hold_fill_target = $self->_result->hold_fill_target; 1258 if ($hold_fill_target) { 1259 $hold_fill_target->update({ biblionumber => $to_biblionumber }); 1260 } 1260 my $linktrackers = $schema->resultset('Linktracker')->search({ itemnumber => $self->itemnumber }); 1261 $linktrackers->update_all({ biblionumber => $to_biblionumber }); both are DBIx::Class::ResultSet
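For anyone following along, the DBIC semantics behind the question, side by side:

# update(): one UPDATE ... WHERE ... statement in the database; no row
# objects are inflated and no per-row Perl code (overridden update
# methods / triggers) runs.
$linktrackers->update( { biblionumber => $to_biblionumber } );

# update_all(): fetches every row in the resultset and calls update()
# on each one individually, so Row-level overrides do run -- at the
# cost of one query per row.
$linktrackers->update_all( { biblionumber => $to_biblionumber } );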
Created attachment 123481 [details] [review] Bug 22690: Refactor merging of records to improve performance (Elasticsearch) This patch allows merging of records with many items without the web server timing out. Test plan: Without the patch: - Create 2 records (one with e.g. 1000 items). - Do a cataloguing search that displays both records, select them and click "Merge selected". - Choose the record with many items as the one to be eliminated. - Start the merging. - After a while the web server should give you a timeout error (the merging process may still continue) With the patch: - Do the same as above - This time verify that the records are merged without timeout - Create a new biblio with an item - Add with the item: * acquisition order * hold (reserve) - Merge the biblio to another one - Verify that the item and its related data was moved - Verify that tests pass: prove -v t/db_dependent/Koha/Biblio.t prove -v t/db_dependent/Koha/Item.t prove -v t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123482 [details] [review] Bug 22690: Add more tests - Tests for adopt_items_from_biblio - Tests for the relationship between items and acquisition orders - Tests for indexer calls in adopt_items_from_biblio Signed-off-by: Michal Denar <black23@gmail.com> Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123483 [details] [review] Bug 22690: (QA follow-up) Silence manually generated warnings In our test setup we mock the index_records() to produce warnings like this: Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93. By wrapping all our item creations to warnings_are{} we can silence them. Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123484 [details] [review] Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio() We need to update the search index record for the old biblio where the item was moved from to keep the item info in search index up-to-date. To test: 1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi> Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123485 [details] [review] Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level We need to pass undef itemnumber to build_object() to actually have a hold without an item tied to it. Otherwise build_object() will create automatically an item for us (thus making it an item-level hold) To test: $ prove t/db_dependent/Koha/Item.t Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123486 [details] [review] Bug 22690: Add missing txn_begin in subtest
Created attachment 123487 [details] [review] Bug 22690: Remove MoveItemFromBiblio import Added in the meanwhile by bug 17600.
Created attachment 123488 [details] [review] Bug 22690: (QA follow-up) Rename 'item_orders' to 'orders'
Created attachment 123489 [details] [review] Bug 22690: (QA follow-up) Improve negation syntax
Created attachment 123490 [details] [review] Bug 22690: (QA follow-up) Move adopt_items_from_biblios to Koha::Items This patch moves the Koha::Biblio->adopt_items_from_biblio method to the Koha::Items set class and updates all calls from Biblio2->adopt_items_from_biblio(Biblio1) to Biblio->items->move_to_biblio(Biblio2)
Created attachment 123491 [details] [review] Bug 22690: (QA follow-up) Clarify uses of DBIC
Created attachment 123492 [details] [review] Bug 22690: (QA follow-up) Add relationships to linktracker Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 123493 [details] [review] Bug 22690: DBIC Schema Updates
Created attachment 123494 [details] [review] Bug 22690: (QA follow-up) Use relationship accessor With the addition of foreign key relationships to the linktracker table we now get a DBIC relationship accessor we can use. This clarifies the code slightly by using the _result->relationship form to get the DBIC resultset. We should still introduce a Koha::Object based class for this table at some point.
Created attachment 123495 [details] [review] Bug 22690: (QA follow-up) Add TrackedLink classes and use them This patch adds Koha::TrackedLink(s) classes based on Koha::Object(s) and then adds the relationship accessor to Koha::Item and uses it within the move_to_biblio method. Tests for new relationship also added to t/db_dependent/Koha/Item.t
Created attachment 123496 [details] [review] Bug 22690: (QA follow-up) Correct variable name The $from_biblio variable name doesn't exists after a refactoring that happened. Here we need to re-index both the $self biblio and $to_biblio biblio.
Created attachment 123497 [details] [review] Bug 22690: (QA follow-up) Fix indexing for Items sets This patch adds tests and handling for calling move_to_biblio on a Koha::Items set that contains items from more than one source biblio. Test plan 1/ Inspect the changes to t/db_dependent/Koha/SearchEngine/Indexer.t 2/ Run t/db_dependent/Koha/SearchEngine/Indexer.t and confirm it passes
Fixes made in a follow-up; the test suite was re-run against everything and all tests pass. I think it was a good move adding the Koha:: classes; it makes the code cleaner and shows the path for future work. Setting back to PQA :)
About the new FK added to the linktracker table: is it relevant to keep the entry if the biblio has been deleted? + CONSTRAINT `linktracker_biblio_ibfk` FOREIGN KEY (`biblionumber`) REFERENCES `biblio` (`biblionumber`) ON DELETE SET NULL ON UPDATE SET NULL, (not a blocker, but please answer)
Created attachment 124176 [details] [review] Bug 22690: Remove uneeded return and add no_triggers * C4/Items.pm - Koha::Biblios not used * Koha/Item.pm - Koha::Item->orders must return an empty set if no order attached - no_triggers should be passed to other update calls * Item.t - No need to build a fund - Add new test to test Koha::Item->orders when no order attached
Pushed to master for 21.11, thanks to everybody involved!
Created attachment 124188 [details] [review] Bug 22690: Add koha_object[s]_class to fix TestBuilder.t 'Can't locate object method "_new_from_dbic" via package "Koha::Linktracker" (perhaps you forgot to load "Koha::Linktracker"?) at /kohadevbox/koha/Koha/Object.pm line 334. Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Follow-up pushed to master.
Created attachment 124257 [details] [review] Bug 22690: Fix the tracklink feature With the FK we must set to undef/NULL, not 0.
(In reply to Jonathan Druart from comment #199) > Created attachment 124257 [details] [review] [review] > Bug 22690: Fix the tracklink feature > > With the FK we must set to undef/NULL, not 0. This also means that we need a db rev to fix existing values. Can you follow up, please?
I sent the following email to Chris yesterday: """ A couple of questions: * Having the FK means the ON UPDATE SET NULL clause will apply. We will "lose" the links, but as the ids can be reused several times I think it makes sense. * We will need to update existing entries (set to NULL when 0 or deleted); we can take the opportunity to remove the rows with borrowernumber=null, itemnumber=null, biblionumber=null. """ His reply: """ The table is mainly just to report the usage of electronic resources. So if the resource is deleted it doesn't make sense to keep how many times it was viewed, so I think the FK will be fine """ Maybe we should then have a delete cascade on the biblionumber. Please help here, I am not willing to follow up on this myself.
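If we go the cleanup route, the db rev would be along these lines (a sketch only, not a final atomicupdate):

# NULL out dangling references first, then drop rows that no longer
# point at anything at all.
$dbh->do(q{
    UPDATE linktracker
       SET biblionumber = NULL
     WHERE biblionumber = 0
        OR biblionumber NOT IN (SELECT biblionumber FROM biblio)
});
$dbh->do(q{
    DELETE FROM linktracker
     WHERE biblionumber   IS NULL
       AND itemnumber     IS NULL
       AND borrowernumber IS NULL
});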
Jonathan, I've lost track of things a bit here, but since linktracker also tracks clicks per borrower, would it still be useful to keep the records? Also, if cascade delete were used, would it only affect biblionumber, not itemnumber or borrowernumber? Or to put it another way: if we don't want to keep the records after the biblio is deleted, why is biblionumber nullable in the table?
(In reply to Ere Maijala from comment #202) > Jonathan, > > I've lost track of things a bit here, but since linktracker also > tracks clicks per borrower, would it still be useful to keep the records? > Also, if cascade delete were used, would it only affect biblionumber, > not itemnumber or borrowernumber? In my understanding we want to keep track of the number of clicks on the records, so even if the patron or item is deleted we should keep the entry. > Or to put it another way: if we don't want to keep the records after the > biblio is deleted, why is biblionumber nullable in the table? Correct, that is not consistent.
Does not apply cleanly to 21.05.x. If this is needed for 21.05 please create a 21.05 patch. Thanks!
Follow-up still needed for 21.11
Jonathan, I'm still not sure it makes sense to delete the tracking information when a biblio is deleted. That would break the first use case described here: https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference The second example has a JOIN with the biblio table, but I'm not convinced it's intentionally ignoring the case of deleted biblios.
(In reply to Ere Maijala from comment #206) > Jonathan, > > I'm still not sure it makes sense to delete the tracking information when a > biblio is deleted. That would break the first use case described here: > > https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference > > The second example has a JOIN with the biblio table, but I'm not convinced > it's intentionally ignoring the case of deleted biblios. I think in both cases they are tracking use of current resources. My assumption would be that a deleted resource is no longer available and the decision has already been made (i.e. you ended the subscription), so the tracking does not need to be retained.
(In reply to Nick Clemens from comment #207) > (In reply to Ere Maijala from comment #206) > > Jonathan, > > > > I'm still not sure it makes sense to delete the tracking information when a > > biblio is deleted. That would break the first use case described here: > > > > https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference > > > > The second example has a JOIN with the biblio table, but I'm not convinced > > it's intentionally ignoring the case of deleted biblios. > > I think in both cases they are tracking use of current resources. > > My assumption would be that a deleted resource is no longer available and > the decision has already been made (i.e. you ended the subscription), so > the tracking does not need to be retained. I think this is true if you want to report on a specific resource, but I think the information could still be useful if you want to purchase the records again under another license model or when doing annual reporting on e-resource use.
(In reply to Katrin Fischer from comment #208) > (In reply to Nick Clemens from comment #207) > > (In reply to Ere Maijala from comment #206) > > > Jonathan, > > > > > > I'm still not sure it makes sense to delete the tracking information when a > > > biblio is deleted. That would break the first use case described here: > > > > > > https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference > > > > > > The second example has a JOIN with the biblio table, but I'm not convinced > > > it's intentionally ignoring the case of deleted biblios. > > > > I think in both cases they are tracking use of current resources. > > > > My assumption would be that a deleted resource is no longer available and > > the decision has already been made (i.e. you ended the subscription), so > > the tracking does not need to be retained. > > I think this is true if you want to report on a specific resource, but I > think the information could still be useful if you want to purchase the > records again under another license model or when doing annual reporting on > e-resource use. I am not sure it's relevant, as you lose the link (on delete set null). The question basically is: do we want to know how many times a user clicked on records that have been deleted?
(In reply to Jonathan Druart from comment #209) > I am not sure it's relevant, as you lose the link (on delete set null). > > The question basically is: do we want to know how many times a user clicked > on records that have been deleted? The URL is still recorded in the tracking table, and that might be all you need regardless of the biblio that contained it. The bottom line is, I wouldn't want to make too many assumptions about the possible use cases and potentially handicap someone's reporting. I also think such a change, if implemented, should not be a side effect of this issue, but would need to be its own issue with clear decisions, perhaps an RFC as well.
ok, let's keep it as is for now, thanks! Note: (In reply to Jonathan Druart from comment #201) > * We will need to update existing entries (set to NULL when 0 or > deleted); we can take the opportunity to remove the rows with > borrowernumber=null, itemnumber=null, biblionumber=null. The DBRev already deals with that.