In cataloguing/additem.pl?biblionumber=XXX, when we choose "Add multiple copies of this item", the items are added very slowly. I did some analysis and found one big inefficiency: ModZebra( $item->{biblionumber}, "specialUpdate", "biblioserver" ) is called in every AddItem call. It requests an index update for the same biblionumber after each new item is added, and the execution time of that ModZebra call keeps growing as items accumulate. The set of patches I will provide here (a preparatory patch plus the fix) makes this ModZebra call happen only once, at the end of adding the whole batch of items. I will also attach a few smaller follow-up patches proposing optimisations for related operations.
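To make the bottleneck concrete, here is a minimal sketch of the pattern at fault (hypothetical, simplified code; the real loop lives in additem.pl and C4::Items, and variable names here are assumptions):

    # Simplified sketch: each AddItem call triggers its own index update
    # for the very same biblionumber, so N copies mean N identical
    # "specialUpdate" requests instead of one.
    for my $copy ( 1 .. $number_of_copies ) {
        AddItem( $item, $biblionumber );
        # inside AddItem (C4::Items), roughly:
        #   ModZebra( $biblionumber, "specialUpdate", "biblioserver" );
    }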
Created attachment 95413 [details] [review] Bug 24027: dbh handler not passed through all chain

A preparatory fix for the next patch, so that all parameters are aligned: the $dbh handle is passed through as a parameter in AddItem but was missing in AddItemFromMarc. Added.
Created attachment 95414 [details] [review] Bug 24027: (follow-up) ModZebra should be called once after all items added, not on each item add

Why it happened:
- the call to ModZebra was made after EACH item added, but only with the main biblionumber, so the call was identical for every request,
- the time spent in that ModZebra sub also increased with every next hundred items in the DB for that record, so adding every next 100 items was slower and slower.

Solved:
- ModZebra is now called only once (by adding an extra parameter to the "AddItem*" subs to set a postponed mode),
- adding items no longer depends so heavily on how many items were already in the DB.

Test plan / how to replicate the issue (timings depend on how populated the DB and the Zebra/Elasticsearch indexes already are, so the difference may be less noticeable on empty DBs):
- go to one of the cataloguing/additem.pl?biblionumber=XXX pages,
- press the "Add multiple copies of this item" button, enter 1000 on slower machines, 5000 on faster ones,
- start measuring time and submit the page/form.

It takes a serious amount of time, even up to a timeout. After applying the patch, run the same procedure for the same number of items. It also slows down much less as the number of items rises, more linearly. But, again, it heavily depends on how the database is populated and how the indexes are rebuilt; it is obviously much more efficient to call "ModZebra" once after 1000 additions than 1000 times, once per item copied.
Hi Andrew, Thanks for the patch, that's a great idea. Is it ready for signoff? Only one thing, we will certainly reject the first patch, $dbh must be retrieved from C4::Context->dbh only. Also we prefer to have a hashref as parameters to make the flag explicit: { postpone_indexes_update => 1 } Cheers, Jonathan
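For reference, a tiny sketch of the calling convention being suggested here (hypothetical signatures, only the hashref-flag style itself is the point; the positional form below is illustrative, not the exact AddItem prototype):

    # A positional boolean flag is opaque at the call site:
    AddItem( $item, $biblionumber, undef, $frameworkcode, undef, 1 );    # what does "1" mean?

    # An options hashref makes the intent explicit and is easy to extend:
    AddItem( $item, $biblionumber, { postpone_indexes_update => 1 } );

    sub AddItem {
        my ( $item, $biblionumber, $options ) = @_;
        my $postpone = $options->{postpone_indexes_update} // 0;
        # ... store the item ...
        ModZebra( $biblionumber, "specialUpdate", "biblioserver" ) unless $postpone;
    }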
Created attachment 95417 [details] [review] Bug 24027: (follow-up) Don't combine long items-add POST page with list generation

When hundreds or thousands of items are added, it takes time. Then generation of the items list also takes time, and the more items there are, the longer the list generation takes, so sometimes the user even hits a browser timeout. This is much more noticeable on big databases and on library hosting/clouds with limited resources. So, showing only a counter of added items on the POST-generated page, instead of generating the list, shortens page generation time, and the user can then navigate further. This stacks with the previous patches in this ticket for a combined effect.
Thanks Jonathan,

> Only one thing, we will certainly reject the first patch, $dbh must be retrieved from C4::Context->dbh only.

So I will rework it in the opposite direction: removing my addition plus the old $dbh in the params.

> Also we prefer to have a hashref as parameters to make the flag explicit:
> { postpone_indexes_update => 1 }

Yes, this is good coding style so we don't send an anonymous ", 1" or ", 0" in the parameters :). I thought about that but was hesitant to propose it because it is a bigger change (moving the calling style from positional params to a hashref of options). But both thumbs up for this! I will recreate the patch chain now; please then give feedback on the last GUI change: "Bug 24027: (follow-up) Don't combine long items-add POST page with list generation" ... so I will rework them as a batch :), and then I will ask for sign-off.
I understand the need for the UI patch, but I worry it will not fit all needs, as it adds an extra step. I am pretty sure cataloguers want to see whether their last action did what they wanted :) We have something similar for the batch item edit/delete tools, driven by a syspref. We could hardcode it for now (50? 100?) and see if it passes QA :)
(In reply to Jonathan Druart from comment #6)
> I understand the need for the UI patch, but I worry it will not fit all
> needs, as it adds an extra step. I am pretty sure cataloguers want to see
> whether their last action did what they wanted :)
>
> We have something similar for the batch item edit/delete tools, driven by a
> syspref. We could hardcode it for now (50? 100?) and see if it passes QA :)

I meant: do not display the item list if X items are added in one go. But that will not fix the problem if there are already thousands of items. So maybe a pref to limit the number of items displayed. I feel like we should ask the list to get feedback from cataloguers.
(for the OPAC we have OpacMaxItemsToDisplay, to limit the number of items to display on the biblio detail view)
> OpacMaxItemsToDisplay

I thought about making it configurable but hesitated :). I will be braver then. OK, I will follow your suggestion and present a rework of the whole chain this week.
Created attachment 95446 [details] [review] Bug 24027: ModZebra should be called once after all items added, not on each item add

Why it happened:
- the call to ModZebra was made after EACH item added, but only with the main biblionumber, so the call was identical for every request,
- the time spent in that ModZebra sub also increased with every next hundred items in the DB for that record, so adding every next 100 items was slower and slower.

Solved:
- ModZebra is now called only once (by adding an extra parameter to the "AddItem*" subs to set a postponed mode),
- adding items no longer depends so heavily on how many items were already in the DB.

Test plan / how to replicate the issue (timings depend on how populated the DB and the Zebra/Elasticsearch indexes already are, so the difference may be less noticeable on empty DBs):
- go to one of the cataloguing/additem.pl?biblionumber=XXX pages,
- press the "Add multiple copies of this item" button, enter 1000 on slower machines, 5000 on faster ones,
- start measuring time and submit the page/form.

It takes some amount of time, even up to a timeout. After applying the patch, run the same procedure for the same number of items. Note: it is fast in both variants on an empty database, so the difference may not be very noticeable there (it also depends on how the ModZebra-related stack is configured). It also slows down more linearly as the number of items grows. But, again, it heavily depends on how the database is populated and how the indexes are rebuilt; it is obviously much more efficient to call "ModZebra" once after 1000 additions than 1000 times, once per item created in the loop.
Created attachment 95473 [details] [review] Bug 24027: Call ModZebra only once after all items have been added

Why it happened:
- the call to ModZebra was made after EACH item added, but only with the main biblionumber, so the call was identical for every request,
- the time spent in that ModZebra sub also increased with every next hundred items in the DB for that record, so adding every next 100 items was slower and slower.

Solved:
- ModZebra is now called only once (by adding an extra parameter to the "AddItem*" subs to set a postponed mode),
- adding items no longer depends so heavily on how many items were already in the DB.

Test plan / how to replicate the issue (timings depend on how populated the DB and the Zebra/Elasticsearch indexes already are, so the difference may be less noticeable on empty DBs):
- go to one of the cataloguing/additem.pl?biblionumber=XXX pages,
- press the "Add multiple copies of this item" button, enter 1000 on slower machines, 5000 on faster ones,
- start measuring time and submit the page/form.

It takes some amount of time, even up to a timeout. After applying the patch, run the same procedure for the same number of items. Note: it is fast in both variants on an empty database, so the difference may not be very noticeable there (it also depends on how the ModZebra-related stack is configured). It also slows down more linearly as the number of items grows. But, again, it heavily depends on how the database is populated and how the indexes are rebuilt; it is obviously much more efficient to call "ModZebra" once after 1000 additions than 1000 times, once per item created in the loop.
Created attachment 95663 [details] [review] Bug 24027: (follow-up) Don't combine multiple items add POST page with list generation

When hundreds or thousands of items are added, it takes time to add them to the DB. Then generation of the items list in the SAME request also takes time; this "doubles" the page generation time.

This patch proposes to show only the number of added items and the total number of items on the POST-generated page instead of generating the list, but not always: only if a limit is reached ("OpacMaxItemsToDisplay" // 50 is used). A navigation link is offered so the user can view the list afterwards. This matters most on big databases and on library hosting/clouds with limited resources, where page generation can even time out. This stacks with the previous patch for a combined speed-up effect.

Other improvements with this patch: because counters were added for both
- the items added just now,
- the overall number of items for this biblionumber in the DB,
it now shows:
- the number of added items in a "dialog message" style box after adding,
- the total number of displayed items at the top of the list table.
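For illustration, a minimal sketch of the limiting logic this patch describes (hypothetical controller code, not the actual additem.pl diff; C4::Context->preference is the standard Koha syspref accessor, the other names are assumptions):

    use C4::Context;

    # Hypothetical: only build the full item list below a threshold.
    my $max_display = C4::Context->preference('OpacMaxItemsToDisplay') // 50;
    my $total_items = $biblio_items->count;    # assumed: a Koha::Items resultset

    if ( $total_items > $max_display ) {
        # Skip the expensive list generation; show counters and a link instead.
        $template->param(
            items_added_count => $added_count,
            items_total_count => $total_items,
            show_item_list    => 0,
        );
    }
    else {
        $template->param(
            items          => [ $biblio_items->as_list ],
            show_item_list => 1,
        );
    }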
Hi, any test plan?
Created attachment 98250 [details] [review] Bug 24027: Call ModZebra only once after all items have been added

Why it happened:
- the call to ModZebra was made after EACH item added, but only with the main biblionumber, so the call was identical for every request,
- the time spent in that ModZebra sub also increased with every next hundred items in the DB for that record, so adding every next 100 items was slower and slower.

Solved:
- ModZebra is now called only once (by adding an extra parameter to the "AddItem*" subs to set a postponed mode),
- adding items no longer depends so heavily on how many items were already in the DB.

Test plan / how to replicate the issue (timings depend on how populated the DB and the Zebra/Elasticsearch indexes already are, so the difference may be less noticeable on empty DBs):
- go to one of the cataloguing/additem.pl?biblionumber=XXX pages,
- press the "Add multiple copies of this item" button, enter 1000 on slower machines, 5000 on faster ones,
- start measuring time and submit the page/form.

It takes some amount of time, even up to a timeout. After applying the patch, run the same procedure for the same number of items. Note: it is fast in both variants on an empty database, so the difference may not be very noticeable there (it also depends on how the ModZebra-related stack is configured). It also slows down more linearly as the number of items grows. But, again, it heavily depends on how the database is populated and how the indexes are rebuilt; it is obviously much more efficient to call "ModZebra" once after 1000 additions than 1000 times, once per item created in the loop.

Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 98251 [details] [review] Bug 24027: (follow-up) Don't combine multiple items add POST page with list generation

When hundreds or thousands of items are added, it takes time to add them to the DB. Then generation of the items list in the SAME request also takes time; this "doubles" the page generation time.

This patch proposes to show only the number of added items and the total number of items on the POST-generated page instead of generating the list, but not always: only if a limit is reached ("OpacMaxItemsToDisplay" // 50 is used). A navigation link is offered so the user can view the list afterwards. This matters most on big databases and on library hosting/clouds with limited resources, where page generation can even time out. This stacks with the previous patch for a combined speed-up effect.

Other improvements with this patch: because counters were added for both
- the items added just now,
- the overall number of items for this biblionumber in the DB,
it now shows:
- the number of added items in a "dialog message" style box after adding,
- the total number of displayed items at the top of the list table.

Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 98252 [details] [review] Bug 24027: Call ModZebra only once after all items have been added

Why it happened:
- the call to ModZebra was made after EACH item added, but only with the main biblionumber, so the call was identical for every request,
- the time spent in that ModZebra sub also increased with every next hundred items in the DB for that record, so adding every next 100 items was slower and slower.

Solved:
- ModZebra is now called only once (by adding an extra parameter to the "AddItem*" subs to set a postponed mode),
- adding items no longer depends so heavily on how many items were already in the DB.

Test plan / how to replicate the issue (timings depend on how populated the DB and the Zebra/Elasticsearch indexes already are, so the difference may be less noticeable on empty DBs):
- go to one of the cataloguing/additem.pl?biblionumber=XXX pages,
- press the "Add multiple copies of this item" button, enter 1000 on slower machines, 5000 on faster ones,
- start measuring time and submit the page/form.

It takes some amount of time, even up to a timeout. After applying the patch, run the same procedure for the same number of items. Note: it is fast in both variants on an empty database, so the difference may not be very noticeable there (it also depends on how the ModZebra-related stack is configured). It also slows down more linearly as the number of items grows. But, again, it heavily depends on how the database is populated and how the indexes are rebuilt; it is obviously much more efficient to call "ModZebra" once after 1000 additions than 1000 times, once per item created in the loop.

Signed-off-by: Michal Denar <black23@gmail.com>
Created attachment 98253 [details] [review] Bug 24027: (follow-up) Don't combine multiple items add POST page with list generation

When hundreds or thousands of items are added, it takes time to add them to the DB. Then generation of the items list in the SAME request also takes time; this "doubles" the page generation time.

This patch proposes to show only the number of added items and the total number of items on the POST-generated page instead of generating the list, but not always: only if a limit is reached ("OpacMaxItemsToDisplay" // 50 is used). A navigation link is offered so the user can view the list afterwards. This matters most on big databases and on library hosting/clouds with limited resources, where page generation can even time out. This stacks with the previous patch for a combined speed-up effect.

Other improvements with this patch: because counters were added for both
- the items added just now,
- the overall number of items for this biblionumber in the DB,
it now shows:
- the number of added items in a "dialog message" style box after adding,
- the total number of displayed items at the top of the list table.

Signed-off-by: Michal Denar <black23@gmail.com>
We should not reuse OpacMaxItemsToDisplay but create a new pref here.
(In reply to Jonathan Druart from comment #18)
> We should not reuse OpacMaxItemsToDisplay but create a new pref here.

I will separate this ticket into two. The first one, this current ticket, will be without the GUI update, but will fix the "ModZebra" call, which with Elasticsearch enabled takes a really crazy amount of time to perform. For the second one I will create another ticket, with a pref, or with API/ajax pagination, or some other solution (we will decide; I will think about it and ask on IRC/etc.). It is really a separate update, so it is fine to keep only this one major speed-affecting ModZebra issue here. Publishing the updated patch + explanation: ...
Created attachment 102464 [details] [review] Bug 24027: Call ModZebra once after all items added/deleted in a batch

Issue description:
- the call to ModZebra was unconditional inside the 'store' method of Koha::Item, so it ran after each item was added or deleted,
- ModZebra is called with the biblionumber as parameter, so the call is identical across items sharing the same biblionumber, especially when adding/removing in a batch,
- with Elasticsearch enabled this creates an even more significant load, which also grows progressively as more items are already in the DB.

Solution:
- add an extra parameter 'skip_modzebra_update' and propagate it down to the 'store' method call to prevent the call to ModZebra,
- instead, call ModZebra once after the whole batch loop in the upper layer.

Test plan / how to replicate:
- make sure that in the admin settings "SearchEngine" is set to "Elasticsearch" and your ES is configured and working ( /cgi-bin/koha/admin/preferences.pl?op=search&searchfield=SearchEngine ),
- select one of the biblios without items ( /cgi-bin/koha/cataloguing/additem.pl?biblionumber=XXX ),
- press the "Add multiple copies of this item" button,
- enter 200 items, start measuring time and submit the page/form.

On my test machine, adding 200 items 3 times in a row (600 in total, to show that the time grows gradually with every next batch):
- with Elasticsearch DISABLED (only the Zebra queue): 9s, 12s, 13s
- with Elasticsearch ENABLED: 1.3m, 3.2m, 4.8m
- WITH PATCH, with Elasticsearch ENABLED: 10s, 13s, 15s

The same slowness (because of the same call to ModZebra) happens when you try to delete items in a batch or delete all items ("op=delallitems"). Same fix.
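To make the intended call pattern concrete, a minimal sketch following the description above (illustrative controller-side code only; skip_modzebra_update is the flag named in this patch, while the surrounding variable names are assumptions):

    use C4::Biblio qw( ModZebra );

    # Store every copy without triggering a per-item index update...
    for my $copy ( 1 .. $number_of_copies ) {
        Koha::Item->new($item_data)->store( { skip_modzebra_update => 1 } );
    }

    # ...then request the (single, identical) biblio-level update once.
    ModZebra( $biblionumber, "specialUpdate", "biblioserver" );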
(+ rebased onto latest master)
Created attachment 102474 [details] [review] Bug 24027: Call ModZebra once after all items added/deleted in a batch

Issue description:
- the call to ModZebra was unconditional inside the 'store' method of Koha::Item, so it ran after each item was added or deleted,
- ModZebra is called with the biblionumber as parameter, so the call is identical across items sharing the same biblionumber, especially when adding/removing in a batch,
- with Elasticsearch enabled this creates an even more significant load, which also grows progressively as more items are already in the DB.

Solution:
- add an extra parameter 'skip_modzebra_update' and propagate it down to the 'store' method call to prevent the call to ModZebra,
- instead, call ModZebra once after the whole batch loop in the upper layer.

Test plan / how to replicate:
- make sure that in the admin settings "SearchEngine" is set to "Elasticsearch" and your ES is configured and working ( /cgi-bin/koha/admin/preferences.pl?op=search&searchfield=SearchEngine ),
- select one of the biblios without items ( /cgi-bin/koha/cataloguing/additem.pl?biblionumber=XXX ),
- press the "Add multiple copies of this item" button,
- enter 200 items, start measuring time and submit the page/form.

On my test machine, adding 200 items 3 times in a row (600 in total, to show that the time grows gradually with every next batch):
- with Elasticsearch DISABLED (only the Zebra queue): 9s, 12s, 13s
- with Elasticsearch ENABLED: 1.3m, 3.2m, 4.8m
- WITH PATCH, with Elasticsearch ENABLED: 10s, 13s, 15s

The same slowness (because of the same call to ModZebra) happens when you try to delete items in a batch or delete all items ("op=delallitems"). Same fix.
Hi Andrew, can you explain why you changed your mind? The patches were almost ready; I was only asking for a new pref. The change to add a new pref is minimal, I can help if needed.
Those were two patches anyway, not related to each other: one for the "ModZebra" calls and another for the GUI changes. Yet both improve the same section of the site page(s). So:
(In reply to Jonathan Druart from comment #23)

About the FIRST patch which I just posted: it was fully separate, and it is the same patch as before, only cleaner and more proper. I re-created and re-tested/re-described it because:
- I have now found the real trigger (Elasticsearch enabled) for why it happens, so it can be tested properly,
- I also found that "delallitems" had the same issue, so that change was added too,
- the parameter name (skip_modzebra_update) and its propagation down to "store" are done better now,
- an overall rebase (with rework) was needed because the old patch used the "AddItem" sub, which has now been removed in favour of Koha::Item->...->store.

And this first patch has the main and most SERIOUS impact on execution speed. (As for the second patch: ...)
(In reply to Jonathan Druart from comment #23)

But the second patch is not based on, nor related to, the first one; that's why I propose to separate it and do it right:
- it was an UGLY trade-off solution with a not-so-good UX improvement: it simply didn't show the full list of items on the post-POST page if the count was above that parameter, saying only "XXX items added". It adds extra branching in the template and in the pre-template call, but doesn't improve that much, because of the next page:
- the plain normal-mode page still loaded all 1000 (or however many) items on a single page in the subsequent non-POST request, so page generation time is still long, and the usefulness of displaying all items on one page for an operator is very questionable,
- that's why I propose to split this into another ticket and do it there,
- but yes, I can also re-create it with a separate new pref parameter, as it was at the beginning, if you want, though I feel like a "bad senior coder" with this "plumber" solution :)

The CORRECT solution I propose/see (two options):
- complex but beautiful: create ajax pagination as done on the patrons page (for example),
- straight, simple, even dumb, but robust, without any extra pref parameters: unconditionally, without any "prefs", just do NOT generate the items list on the post-POST page, but add a link (or a forward with a delay) to the full list+form page in non-POST mode (i.e. the usual post-POST page with "xxx items added" + link or forward).
(In reply to Andrew Nugged from comment #26)
> The CORRECT solution I propose/see (two options):
>
> - complex but beautiful: create ajax pagination as done on the patrons page
> (for example),
>
> - straight, simple, even dumb, but robust, without any extra pref parameters:
> unconditionally, without any "prefs", just do NOT generate the items list on
> the post-POST page, but add a link (or a forward with a delay) to the full
> list+form page in non-POST mode (i.e. the usual post-POST page with
> "xxx items added" + link or forward).

Yes, that's definitely the way to go, but it is not trivial. Make sure you are aware of the latest moves from Tomas in this area (see bug 20212) before starting!
(In reply to Jonathan Druart from comment #28)

... so can we proceed with this single ModZebra update patch, and queue the Ajax one for later (and combine it with Tomas's updates)?

In my previous message I proposed two options (it read unclearly, as if it were one single solution):
1. ajax,
2. OR simply make the post-POST page not list all the added items on the same page.

So, about option 2 above:
> - straight, simple ... just do NOT generate the list, but show on the post-POST page "xxx items added" + a link or forward to the list page
i.e. no "prefs"; after the items are added I would just show an "xxx items added. Follow this link to return to the list: ..." message on the post-POST page, without list generation. Do you think this option 2 (quick patch) is needed here in this ticket? Should I create it?
(In reply to Jonathan Druart from comment #28)

Jonathan, what is your nickname (for PM) on the Koha IRC, if you're there? (I'm 'nugged')
I do not think we should have an additional click. I guess cataloguers want to see the result of their last changes after the POST.
Agreed. There will be another ticket from me for this new "Max pref" and post-POST page.
... so I support the conclusion that we can proceed with this ticket as finished / ready to be signed off and tested.
Let's implement pagination, as all the framework needed to accomplish that is now complete! No more "max items" prefs. For references on using the API to render (paginated) datatables, see bug 20212. The important bits are embedding related objects for rendering and filtering, and rendering properly :-D So you need to add embeds and build the right renderers for each column.

Regarding my work on the performance front, I identified these situations:
- We retrieve (cached or not) the frameworks inside loops without a real need, and really deep in the code. Take a biblio as the outer case: it only has one framework assigned, so we can pass it down to the last portion of the code we call, instead of fetching it inside the loop (and as many times as we currently do).
- The same applies to code related to OpacHiddenItems and friends. Bug 23247 is a good example of how I think it should be done. It might still need to fetch the framework outside the loop, now that I look at it with fresh eyes :-D
I don't know you, Andrew, and I think your patches are doing a good job (well spotted: the bottleneck, the loop design issue, etc. Well done!). But I think it is time for all of us to think more about the long term.

ModZebra (in the Elasticsearch case) fetches the MARC record so it gets indexed by ES. The main problem here is that this is not an async call, but a blocking one. It doesn't hurt Zebra, because in that case we have a task queue and a script that takes care of the queue. I think that is the way to go: have a task queue and leverage it.

That said, I never liked that we call ModZebra from Koha::Item->store, and while I understand the reasons for that and agree to just move forward and fix it later, I wouldn't add even more ModZebra-related things to the method signature like this. We would just be polluting things even more. To put it clearly, my POV is that these post-success actions (i.e. actions that take place if storing the new/updated item is successful) should happen in the controller code. If Koha::Item->store didn't call ModZebra by itself, I'd do something like:

try {
    # This is just fiction code
    while ( my $item = $items_to_store->next ) {
        $item->store;
    }
}
catch {
    handle_exception($_);
}
finally {
    if (@_) {
        notify_the_error();
    }
    else {
        # No error
        if (ES) {
            trigger_es_reindex( $item->biblionumber );
        }
        elsif (Zebra) {
            ModZebra(...);
        }
        else {
            Koha::Exception::WTF->throw();
        }
    }
};

So, I understand the optimization, but from my QA POV I fear that we are introducing even more code that we will later need to remove, in a place we wouldn't want it. And this is always the excuse we use for not doing things right! If I were to fix this, I would look into moving the ES code from plainly triggering the indexing into using the zebraqueue (probably renaming some things while at it). I hope this discussion doesn't discourage you, Andrew! I don't want to block this if the QA team thinks we can live with it.
Talked to the author of most of Koha::Item->store (Jonathan) and we agreed this should move forward with a FIXME or similar. I'm testing this ATM for sign-off
Created attachment 102567 [details] [review] Bug 24027: Call ModZebra once after all items added/deleted in a batch

Issue description:
- the call to ModZebra was unconditional inside the 'store' method of Koha::Item, so it ran after each item was added or deleted,
- ModZebra is called with the biblionumber as parameter, so the call is identical across items sharing the same biblionumber, especially when adding/removing in a batch,
- with Elasticsearch enabled this creates an even more significant load, which also grows progressively as more items are already in the DB.

Solution:
- add an extra parameter 'skip_modzebra_update' and propagate it down to the 'store' method call to prevent the call to ModZebra,
- instead, call ModZebra once after the whole batch loop in the upper layer.

Test plan / how to replicate:
- make sure that in the admin settings "SearchEngine" is set to "Elasticsearch" and your ES is configured and working ( /cgi-bin/koha/admin/preferences.pl?op=search&searchfield=SearchEngine ),
- select one of the biblios without items ( /cgi-bin/koha/cataloguing/additem.pl?biblionumber=XXX ),
- press the "Add multiple copies of this item" button,
- enter 200 items, start measuring time and submit the page/form.

On my test machine, adding 200 items 3 times in a row (600 in total, to show that the time grows gradually with every next batch):
- with Elasticsearch DISABLED (only the Zebra queue): 9s, 12s, 13s
- with Elasticsearch ENABLED: 1.3m, 3.2m, 4.8m
- WITH PATCH, with Elasticsearch ENABLED: 10s, 13s, 15s

The same slowness (because of the same call to ModZebra) happens when you try to delete items in a batch or delete all items ("op=delallitems"). Same fix.

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 102568 [details] [review] Bug 24027: (QA follow-up) Fix POD warning

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 102572 [details] [review] Bug 24027: Add POD about the new parameter in Koha::Item->store
From the commit message:

"""
The same slowness (because of the same call to ModZebra) happens when you try to delete items in a batch or delete all items ("op=delallitems"). Same fix.
"""

What do you mean by "delete items in a batch"?
Created attachment 102578 [details] [review] Bug 24027: Call ModZebra once after all items added/deleted in a batch

Issue description:
- the call to ModZebra was unconditional inside the 'store' method of Koha::Item, so it ran after each item was added or deleted,
- ModZebra is called with the biblionumber as parameter, so the call is identical across items sharing the same biblionumber, especially when adding/removing in a batch,
- with Elasticsearch enabled this creates an even more significant load, which also grows progressively as more items are already in the DB.

Solution:
- add an extra parameter 'skip_modzebra_update' and propagate it down to the 'store' method call to prevent the call to ModZebra,
- instead, call ModZebra once after the whole batch loop in the upper layer.

Test plan / how to replicate:
- make sure that in the admin settings "SearchEngine" is set to "Elasticsearch" and your ES is configured and working ( /cgi-bin/koha/admin/preferences.pl?op=search&searchfield=SearchEngine ),
- select one of the biblios without items ( /cgi-bin/koha/cataloguing/additem.pl?biblionumber=XXX ),
- press the "Add multiple copies of this item" button,
- enter 200 items, start measuring time and submit the page/form.

On my test machine, adding 200 items 3 times in a row (600 in total, to show that the time grows gradually with every next batch):
- with Elasticsearch DISABLED (only the Zebra queue): 9s, 12s, 13s
- with Elasticsearch ENABLED: 1.3m, 3.2m, 4.8m
- WITH PATCH, with Elasticsearch ENABLED: 10s, 13s, 15s

The same slowness (because of the same call to ModZebra) happens when you try to delete all items ("op=delallitems"). Same fix.

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>

Amended commit message: the fix does not include the batch item deletion.
Created attachment 102579 [details] [review] Bug 24027: (QA follow-up) Fix POD warning

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Created attachment 102580 [details] [review] Bug 24027: Add POD about the new parameter in Koha::Item->store

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
(In reply to Jonathan Druart from comment #40)
> From the commit message:
> """
> The same slowness (because of the same call to ModZebra) happens when you
> try to delete items in a batch or delete all items ("op=delallitems").
> Same fix.
> """
>
> What do you mean by "delete items in a batch"?

Confirmed with Andrew (by PM); I amended the commit message.
Nice work everyone! Pushed to master for 20.05
Does not apply to the 19.11.x branch; please rebase if needed.