While benchmarking Koha to improve indexing speed, DBIx::Class::ResultSet->find() in C4::Items::GetItem stood out as a major bottleneck. I looked through the code, and it's only when 'item-level_itypes' is enabled that the actual item object is needed; if not, the function can be optimized by fetching the data through the much faster low-level DBI interface instead. This is of course a bit of a kludge, and I would normally avoid this type of micro-optimization, but with this fix we managed to shave a couple of hours off a full re-indexing and get it done during closing hours. I have also added some more tests to try to catch any possible future issues this change might cause. If more data needs to be fetched through the item object there could be issues, but this should be clearly pointed out in the code. I also removed an extra call to GetItem() in GetMarcItem() by making it accept either an item object or an item id as argument, and only fetch the item if needed, cutting the actual calls to GetItem() by about half.
Created attachment 70172 [details] [review] Bug 19884 - Improve performance of GetItem Provide an optimized search_unblessed method for Koha::Items to increase the performance of GetItem. Also extend GetMarcItem() to accept an item record as second parameter. This way an extra call to GetItem() can be avoided by passing an item record directly instead of an item number. Also add a new function GetItems() making it possible to load multiple items at once, which is more efficient than loading them one by one.
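For readers skimming the thread, the core idea of search_unblessed is roughly the following. This is a minimal sketch of the approach, not the attached patch; the exact query and error handling in the real code differ:

    # Minimal sketch of the search_unblessed idea: bypass DBIx::Class row
    # inflation and return plain hashrefs straight from DBI. Not the
    # attached patch, just an illustration of the approach.
    package Koha::Items;

    use Modern::Perl;
    use C4::Context;

    sub search_unblessed {
        my ( $class, $itemnumbers ) = @_;
        return [] unless @{$itemnumbers};

        my $dbh          = C4::Context->dbh;
        my $placeholders = join ',', ('?') x @{$itemnumbers};
        my $sth          = $dbh->prepare(
            "SELECT * FROM items WHERE itemnumber IN ($placeholders)");
        $sth->execute( @{$itemnumbers} );

        # fetchall_arrayref({}) returns an arrayref of plain hashrefs,
        # one per row, keyed by column name
        return $sth->fetchall_arrayref( {} );
    }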
Forgot to add a how-to-test. Will fix this, and will also provide a test script to illustrate the performance difference; I might not have time to do so today, but at the latest tomorrow.
And a correction: "it's only when 'item-level_itypes' is enabled that the actual item object is needed" in the first comment should be the other way around: "it's only when 'item-level_itypes' is _disabled_ that the actual item object is needed".
Just noting: item-level_itypes ON is used in most libraries, as it's a common use case to have multiple items with different item types on a record. (See https://hea.koha-community.org/systempreferences to get an idea of the use of system preferences.) I am not sure use of SQL in Koha::Items is OK - CC'ing some QA team members and the RM.
Created attachment 70182 [details] GetMarcBiblio master
Created attachment 70183 [details] GetMarcBiblio with patch
I added two profiling results obtained by running this script https://gist.github.com/gnucifer/a86feeac08dac9bc1bb699f52b95f8b2 with and without the patch to illustrate the difference. As can be seen in the flamegraph, Koha::Objects::find takes up half of the total time, and with the patch applied this overhead is gone. This shows the performance difference of just calling GetItem: https://gist.github.com/gnucifer/113b033d211add58fe7e6ebfc257726a Without the patch the calls to GetItem take about 100s to execute on my machine; with the patch applied, 3.26s, so about 31 times faster.
Created attachment 70184 [details] [review] Bug 19884 - Improve performance of GetItem Provide an optimized search_unblessed method for Koha::Items to increase the performance of GetItem. Also extend GetMarcItem() to accept an item record as second parameter. This way an extra call to GetItem() can be avoided by passing an item record directly instead of an item number. Also add a new function GetItems() making it possible to load multiple items at once, which is more efficient than loading them one by one. How to test: 1) Run tests in t/db_dependent/Items.t
Created attachment 70185 [details] [review] Bug 19884 - Improve performance of GetItem Provide an optimized search_unblessed method for Koha::Items to increase the performance of GetItem. Also extend GetMarcItem() to accept an item record as second parameter. This way an extra call to GetItem() can be avoided by passing an item record directly instead of an item number. Also add a new function GetItems() making it possible to load multiple items at once, which is more efficient than loading them one by one. How to test: 1) Run tests in t/db_dependent/Items.t Sponsored-by: Gothenburg University Library
Hi David, did you see my last comment? I think using SQL instead of Koha::Object in a module in the Koha namespace is not valid, so maybe wait before putting more work into this until we can give you some more feedback.
Hi! Yes, I read the comment. As I wrote in the first comment, I usually frown upon this kind of optimization myself, but since the speedup is so substantial, and we (at our library) really need to get indexing done during the night, this patch is a requirement for our needs. The search_unblessed code is placed in Koha::Items for clarity, since it approximates the behaviour of Koha::Items::search. DBIx::Class also uses DBI internally, so in my opinion this communicates the purpose of the optimization in a more obvious way than if the SQL resided in GetItems, for example. Also, one might want to use this method in other places where item objects are fetched (through ->search for example) and unblessed.
Just to clarify: with "this patch is a requirement for us" I'm of course not implying it's a requirement that it gets committed, just that if it doesn't, we will maintain this patch against koha-master indefinitely (or at least as long as we are using Koha), since without it indexing takes too long :)
Hi David, It looks like you found a good place where we can improve speed, good catch. However, it feels like we are going backward by reintroducing raw SQL queries, and I would prefer to avoid such changes. Did you try a mix of both approaches, like generating only one query with my $items = Koha::Items->search({ itemnumber => { -in => $itemnumbers } }); in place of your my $items = C4::Items::GetItems($itemnumbers); I guess we will need to call $items->unblessed, and an interesting addition would be to fill itype according to ->effective_itemtype (so perf will be impacted if not item-level_itypes, like in your patch). Does it make sense? Something else: did you make sure serials will not be impacted? GetItem fetches serialseq and publisheddate. At first glance it looks safe, but we need to be sure.
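Spelled out, the search suggestion above would look something like this (a sketch based on the comment, not tested code):

    use Koha::Items;

    # One DBIC query for all of a biblio's items instead of one ->find
    # per itemnumber, then unbless and fill in itype as suggested.
    my $items_rs = Koha::Items->search(
        { itemnumber => { -in => $itemnumbers } } );

    my @items;
    while ( my $item = $items_rs->next ) {
        my $unblessed = $item->unblessed;
        $unblessed->{itype} = $item->effective_itemtype;
        push @items, $unblessed;
    }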
Hi! I'm not sure I understand, but were you thinking of skipping the pure-SQL optimization but keeping the one loading multiple items at once, using search instead of find and then unblessing? Unfortunately that does not make that much of a difference, it seems. I tried benchmarking with a biblio with 10 items (which is pretty rare; I think most of our biblios have only 1 or 2 items): Loading each item separately took 372 seconds. Loading multiple (10) items at once took 47 seconds. This is not bad (about 8 times faster for 10 items, which is to be expected since it results in 1/10 of the DBIx calls). Unfortunately, since most libraries probably only have a couple of items per biblio, averaging below 2, in practice the speedup is probably below 2x. With the pure-SQL variant, loading multiple items took 7.4 seconds. All benchmarks were performed with item-level_itypes = 1, and loaded the items 20000 times. I tried to make sure serials were not affected by adjusting the code for the fetching of multiple items, and all tests pass (not sure if there are tests for serials), but I could have another look at this.
(In reply to David Gustafsson from comment #14)
> Hi! I'm not sure I understand, but were you thinking of skipping the
> pure-SQL optimization but keeping the one loading multiple items at once,
> using search instead of find and then unblessing? Unfortunately that does
> not make that much of a difference, it seems.

DBIx::Class uses plain DBI underneath. So the speed gain doesn't come from the plain SQL (you could write the same query using the SQL::Abstract syntax DBIC uses), but from the fact that you are using a more fine-tuned approach to the query, to get all the data at the same time. What I think Jonathan is saying is that we need to solve this in a way that doesn't break our object system for the rest of the functionality. I will try to think about it today too.
Just to clarify, the performance gain comes from using DBI directly. SQL::Abstract could potentially be faster than DBIx::Class::ResultSet::search, but in my opinion not much is gained by the abstraction for such a simple and static query. If you are saying that the speed gain mainly comes from loading multiple items at once and not the low-level SQL, that is actually not the case, as demonstrated by the previous benchmark. Loading multiple items with search (10 at a time) takes 50 seconds. Loading multiple items (10 at a time) with pure SQL takes 7.4 seconds. And as previously noted, 10 items is pretty rare, so in practice the code using search will produce an even worse result.
(In reply to David Gustafsson from comment #16)
> Just to clarify, the performance gain comes from using DBI directly.
> SQL::Abstract could potentially be faster than
> DBIx::Class::ResultSet::search, but in my opinion not much is gained by
> the abstraction for such a simple and static query.
>
> If you are saying that the speed gain mainly comes from loading multiple
> items at once and not the low-level SQL, that is actually not the case,
> as demonstrated by the previous benchmark.
>
> Loading multiple items with search (10 at a time) takes 50 seconds.
>
> Loading multiple items (10 at a time) with pure SQL takes 7.4 seconds.
>
> And as previously noted, 10 items is pretty rare, so in practice the code
> using search will produce an even worse result.

Can you profile it using NYTProf? I bet the bottleneck comes from effective_itemtype when unblessing.
When item-level_itypes = 0, the code run is pretty similar to master with the patch applied, the only difference being that ->search is used to fetch multiple items instead of ->find for each one. But this only results in a pretty small performance gain, as demonstrated by the above benchmarks. So the interesting case to benchmark is when item-level_itypes = 1 (which also seems to be the most commonly used setting), and the attached NYTProf screenshots should clearly show that DBIx::Class::ResultSet::find (or DBIx::Class::ResultSet::search, which has similar performance) is the culprit. But I can generate two new ones for DBIx::Class::ResultSet::search with multiple items (10, like in the previous example) to show that the same is true for ->search when loading multiple items. effective_itemtype() is only expensive when item-level_itypes = 0, since the biblio is then fetched through a relation, but this fix is not designed to address the case when item-level_itypes = 0, since that is much harder to optimize.
Created attachment 70247 [details] DBIX_search
Created attachment 70248 [details] search_unblessed
Uploaded the benchmarks. Koha::Object::unblessed takes 3.63s, and Koha::Schema::Result::Item::effective_itemtype 3.70s (of 157s) when item-level_itypes = 1, so not much of an impact. Also: with the patch applied, C4::Context::dbh takes an unreasonable 3.71s (of 17.6s), but that can be fixed with this patch: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=19819
David, I was talking about something like that:

diff --git a/C4/Biblio.pm b/C4/Biblio.pm
index b26c767ef8..373d9f8330 100644
--- a/C4/Biblio.pm
+++ b/C4/Biblio.pm
@@ -2840,7 +2840,7 @@ sub EmbedItemsInMarcBiblio {
         && ( C4::Context->preference('OpacHiddenItems') !~ /^\s*$/ );
     require C4::Items;
-    my $items = C4::Items::GetItems($itemnumbers);
+    my $items = Koha::Items->search({ itemnumber => { -in => $itemnumbers } })->unblessed;
     if ($opachiddenitems) {
         my %hidden_items = map { $_ => undef } C4::Items::GetHiddenItemnumbers(@{$items});
         # Reduce items to non hidden items
diff --git a/Koha/Item.pm b/Koha/Item.pm
index bea05f8317..73306f54a2 100644
--- a/Koha/Item.pm
+++ b/Koha/Item.pm
@@ -235,6 +235,14 @@ sub current_holds {
     return Koha::Holds->_new_from_dbic($hold_rs);
 }
 
+sub unblessed {
+    my ($self) = @_;
+    my $itype = $self->effective_itemtype;
+    my $h = $self->SUPER::unblessed;
+    $h->{itype} = $itype;
+    return $h;
+}
+

Can you share the script you used to benchmark? I could use it to compare different situations.
I was using this to benchmark: https://gist.github.com/gnucifer/5b2119f656592c4d6b3f6dab0b8087f1 which is more or less equivalent to your code example (when comparing approaches to improve performance). And this to run the benchmark: https://gist.github.com/gnucifer/fa3bb31d49f1258a4d1af270d235d76f (the biblionumber probably needs replacing if running on a different instance). So replacing find with search using this approach will unfortunately not increase performance very much.
There is, by the way, a way of getting performance equivalent to the SQL-hack out of GetItems, if biblios were batch-loaded all the way down: multiple biblio ids sent to EmbedItemsInMarcBiblio so that all items could be loaded in one batch, requiring only one search query per x number of biblios. But this would be a much more complicated change, and would require existing code to use this new API (loading multiple biblios at once). Still, it might be worth having a look at in the long run.
We, of course, know that an ORM will impact performance a bit. I do not think the script you are using to compare is fair: you are using the DBMS cache, as the query is always the same. I think we should loop over all X biblionumbers, then retrieve the items info. With DBIx::Class::ResultClass::HashRefInflator, I'd expect more like x4 instead of x10, which is still too high, I admit. If we have to provide a way to fetch a row from its id, I would prefer to introduce a generic way (at the Koha::Object level) to do it and use it only for batch/heavy operations.
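For reference, HashRefInflator is enabled on the underlying DBIC resultset; a minimal sketch of what that looks like (reaching for the schema directly here is a shortcut for illustration, not a pattern being proposed):

    use Koha::Database;

    # Skip DBIC row-object inflation entirely: rows come back as plain
    # hashrefs, avoiding much of the per-row overhead of full objects.
    my $rs = Koha::Database->new->schema->resultset('Item')->search(
        { itemnumber => { -in => $itemnumbers } } );
    $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');

    my @items = $rs->all;    # plain hashrefs, not Koha::Item objects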
Created attachment 70251 [details] [review] Bug 19884: Add benchmark script to compare DBIC vs plain SQL
Still x10 :-/
*Thinking out loud* Months ago I wrote Koha::Object::Simple (for a different need; it was AV and caching related). Now it sounds like we could need something similar here. See https://github.com/joubu/Koha/commits/koha_object_simple to understand what I have in mind (this is a very quick draft!)
Thanks, I looked through the code. My general feeling is that it would be better if an optimized, more low-level interface for just fetching the data still resided under the Koha::Objects namespace, since the table metadata is already available there, and having to maintain essentially the same metadata in multiple places creates a possible source of bugs. I can perhaps have a look at how feasible it would be to introduce a general search_unblessed for Koha::Objects (as suggested), using the available metadata, and introducing some new metadata for search conditions, for example. In my opinion this still might be overkill, since items are probably the only Koha entity in need of this heavy optimization. Creating a special case (with admittedly pretty ugly code) communicates more clearly that this really is a hack, and not something that one would/should generally use.

Regarding performance and the DBMS cache: I did initially benchmark this with production data, but decided to provide a benchmarking script requiring only one biblio with items so it would be easier for someone else to reproduce. There might be a difference in results, but not because of the DBMS cache. The SQL execution constitutes a very small proportion of the execution time: when running uncached queries with production data (not repeating the same query), 1.84s was spent by MySQL executing the query (DBI::st::execute), of 462s in total, so 0.4% of the total time is spent executing SQL; the rest is data-mangling in Perl (if I'm not missing something). There was, however, a 30x difference in the results I posted earlier in this thread, and that may be an artificial result (since I'm only getting around 10x when running with production data).

I checked the average number of items per biblio for us, and it's about 1.4 (SELECT AVG(item_count) FROM (SELECT biblionumber, COUNT(itemnumber) AS item_count FROM items GROUP BY biblionumber) AS item_counts). So loading items in batches of 10 is really not representative of production data. Running this script (https://gist.github.com/gnucifer/84b5ede7b06f0f4103400f8c89714f52) with 2000 items, in batches of 2, against our production data takes 256s, and 49s with the patch, so a 5x difference. In batches of 1 the difference should be around 8-10x.

I might explore the option of instead working with biblios in batches, loading all the items for a batch of biblios at a time. Then the horrible DBIx performance would not matter much at all, since there would be only one item search query for every x number of biblios.
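To make the "generic way at the Koha::Object level" idea concrete, such a method could derive the table name from the metadata DBIC already has, so nothing is duplicated. This is a hypothetical sketch; the method and its argument handling do not exist in Koha:

    use Modern::Perl;
    use C4::Context;
    use Koha::Database;

    # Hypothetical generic Koha::Objects->search_unblessed: the table
    # name comes from the resultset DBIC already knows about, so no
    # metadata has to be maintained in a second place. $column must be
    # a trusted column name, as it is interpolated into the SQL.
    sub search_unblessed {
        my ( $self, $column, $values ) = @_;
        my $dbh   = C4::Context->dbh;
        my $table = Koha::Database->new->schema->resultset( $self->_type )
          ->result_source->name;
        my $placeholders = join ',', ('?') x @{$values};
        my $sth          = $dbh->prepare(
            "SELECT * FROM $table WHERE $column IN ($placeholders)");
        $sth->execute( @{$values} );
        return $sth->fetchall_arrayref( {} );
    }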
I looked through the code, and batching biblios might be easier than I first thought (as only Koha::BiblioUtils really needs to be modified to use the new API). I will probably be able to submit a suggestion using this approach instead, possibly providing around the same performance as the SQL-hack. (When using unique queries, Koha::Object::unblessed and Koha::Schema::Result::Item::effective_itemtype performed much worse though, ~20% of total time, so it would still be nice to be able to bypass those if possible.)
Created attachment 70284 [details] [review] Add new functions GetMarcBiblios and EmbedItemsInMarcBiblios for embedding items in multiple records at once for improved performance
I made variants of GetMarcBiblio and EmbedItemsInMarcBiblio (GetMarcBiblios and EmbedItemsInMarcBiblios) that accept multiple biblionumbers instead of just one. This way all items can be loaded at once for all of these biblios, avoiding the overhead of calling search/find once per item. Running this script (https://gist.github.com/gnucifer/1e33f9aa7155b1b771baaa357dfc7141) it seems to give about the same performance boost as the SQL-hack. When using this strategy, just benchmarking GetItems is not feasible, since the speedup comes from GetItems now running a fraction of the times it did before (about 1/2000 for a batch size of 2000), making the large overhead DBIx search incurs preparing the query, compared to the low-level interface, matter much less. I will try to produce before/after NYTProf flamegraphs with the latest benchmarking script (it's a little bit messy, since I do not have the production data available on my dev box and need to produce the results in our staging environment).
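The core of the batching approach is grouping one big item fetch by biblionumber. A simplified sketch of the idea, not the attached code (the helper name here is made up):

    use Koha::Items;

    # Simplified sketch of the batching idea: one search for all items
    # of a whole batch of biblios, grouped by biblionumber, so the
    # per-query DBIx overhead is paid once per batch instead of once
    # per item. The helper name is hypothetical.
    sub _items_by_biblionumber {
        my ($biblionumbers) = @_;

        my %items_by_biblio;
        my $items = Koha::Items->search(
            { biblionumber => { -in => $biblionumbers } } );
        while ( my $item = $items->next ) {
            push @{ $items_by_biblio{ $item->biblionumber } },
              $item->unblessed;
        }
        return \%items_by_biblio;
    }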
Forgot to add my results of running the script:

Batching enabled + SQL-hack: 32.422s
Batching enabled (using ->search, no SQL-hack): 33.387s
Batching disabled + SQL-hack: 38.269s
Batching disabled (using ->search, no SQL-hack): 59.755s

The benchmarks for ->search include unblessing the item and calling effective_itemtype().
Also note that this latest benchmark includes all the work done in GetMarcBiblio, so it illustrates the actual speedup of loading biblios for indexing (for example) rather than how much faster item loading is.
Created attachment 70285 [details] search_not_batched
Created attachment 70286 [details] search_batched
The last two flamegraphs are from running this script (https://gist.github.com/gnucifer/1e33f9aa7155b1b771baaa357dfc7141) without the SQL-hack and with batching enabled/disabled. On the last graph, the long thin flame to the left corresponds to the Koha::Objects::search one on the first (without batching).
The SQL-hack is still included in the last patch, but if you like the biblio batch-loading approach, I can polish and test this some more (add subroutine documentation etc), and even remove the SQL-hack, since it is no longer needed to speed up indexing, at least.
David, my main concern with your approach is that we are adding new subroutines to C4 (against our coding guidelines) and too-specific methods (for items only, when we should think Koha::Object[s] instead). Moreover, IIRC the IN clause should not be used for big ranges; there is a maximum number of elements that can be passed in [ref needed].
As far as I know, the MySQL 'max_allowed_packet' setting is the only concern for large IN clauses; 1MB is the default I think (but it can be raised if necessary). 1MB is about 1,000,000 characters, and it's probably safe to assume that the average IN argument doesn't exceed 10 bytes. So as long as the number of itemnumbers per batch is below 100,000 it's probably fine (and if it's not, one can simply raise the config setting). Right now the batch size is set to 2000, and with an average number of items per biblio of around 1.3 there is absolutely no risk of exceeding the max_allowed_packet limit.

Regarding C4, I did not know it was deprecated, but if things are to be moved into the Koha namespace, and subroutines in C4 are to become methods on the appropriate objects, it's not very feasible to place the current addition of pluralized subroutines in Koha:: right now, before that refactoring has taken place. I'm quite confident that the addition of the pluralized subroutines will not make this future change more difficult. It is only really GetMarcBiblios that "needs" to be exposed (since it's used in Koha::BiblioUtils); all the others can be considered private helpers. I understand that adding pluralized versions is not an insignificant addition to the API, but I have made no alterations to the current API, and architecturally I think this is the way to go for opening up several optimization paths for Koha. Drupal, for example, uses the very same technique for loading fields for its "entities" (biblios could be considered an entity in this context), and without it performance would be horrible. We have also been working on a hook system (https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=19306). Invoking hooks for each biblio/item etc would probably be quite expensive, but placing those hooks where biblios etc can be loaded in bulk would mean that the addition of hooks would not degrade performance as much, if at all.
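Coming back to the max_allowed_packet point above: if the limit ever did become a concern, chunking the id list keeps each IN clause bounded. A generic sketch, not part of the patch:

    use Koha::Items;

    # Generic sketch: process a long itemnumber list in fixed-size
    # chunks so no single IN (...) clause can approach MySQL's
    # max_allowed_packet limit.
    my $chunk_size = 1000;
    my @all_items;
    while ( my @chunk = splice @itemnumbers, 0, $chunk_size ) {
        my $items =
          Koha::Items->search( { itemnumber => { -in => \@chunk } } );
        push @all_items, map { $_->unblessed } $items->as_list;
    }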
Created attachment 70606 [details] [review] Add new functions GetMarcBiblios and EmbedItemsInMarcBiblios for embedding items in multiple records at once for improved performance
Fixed a minor error handling broken MARC records (where indexing crashed when it should just skip the record).
Created attachment 82870 [details] [review] Bug 19884 - Improve performance of GetItem Provide an optimized search_unblessed method for Koha::Items to increase the performance of GetItem. Also extend GetMarcItem() to accept an item record as second parameter. This way an extra call to GetItem() can be avoided by passing an item record directly instead of an item number. Also add a new function GetItems() making it possible to load multiple items at once, which is more efficient than loading them one by one. How to test: 1) Run tests in t/db_dependent/Items.t Sponsored-by: Gothenburg University Library Add new functions GetMarcBiblios and EmbedItemsInMarcBiblios for embedding items in multiple records at once for improved performance
Created attachment 82871 [details] [review] Bug 19884 - Improve performance of GetItem Add new subs GetMarcBiblios() and EmbedItemsInMarcBiblios() for embedding items in multiple records at once for improved performance. Add a new sub GetItems() making it possible to load multiple items at once in the same manner. Also extend GetMarcItem() to accept an item record as second parameter. This way an extra call to GetItem() can be avoided by passing an item record directly instead of an item number. How to test: 1) Run tests in t/db_dependent/Items.t Sponsored-by: Gothenburg University Library
Squashed and rebased against master.
Created attachment 82979 [details] [review] Add documentation for pluralized versions of subroutines
Created attachment 83111 [details] [review] Bug 19884: Add documentation for pluralized versions of subroutines
Created attachment 83112 [details] [review] Bug 19884: Pass items as arrayref when passed as named argument
Fixed a mistake made on rebase when changing from positional to named arguments for GetHiddenItemnumbers.
Bug 21206 is going to remove C4::Items::GetItem
Apply? [(y)es, (n)o, (i)nteractive] y
Applying: Bug 19884: Add benchmark script to compare DBIC vs plain SQL
Using index info to reconstruct a base tree...
M	Koha/Item.pm
Falling back to patching base and 3-way merge...
Auto-merging Koha/Item.pm
CONFLICT (content): Merge conflict in Koha/Item.pm
Failed to merge in the changes.
Patch failed at 0001 Bug 19884: Add benchmark script to compare DBIC vs plain SQL

About the test plan, is there something else to test, or is the only thing to do to run t/db_dependent/Items.t?
I just found bug 21206 which, if I understood it well, removes GetItem. Is the present patch still needed?
Hi! Unfortunately the removal of GetItem does not affect the performance issues. This will require a little more extensive refactoring after the changes in bug 21206, so I will probably not be able to do it right away. I benchmarked the rebuild_elasticsearch.pl script to illustrate how C4::Items::GetMarcItem is still a bottleneck, calling Koha::Items->find for each item, and will attach the flamegraph.
Created attachment 93949 [details] Benchmark of rebuild_elasticsearch.pl (1500 records)
Hi David, I haven't been following this thread in full, but I suggest you take a look at this line of code:

http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=opac/opac-ISBDdetail.pl;h=c79167cd31274cf115e99293bdd7fa7716c3d88d;hb=HEAD#l63

You will notice that there's actually a prefetch of the related metadata row and the (maybe multiple) items rows in the same query (through a LEFT JOIN).

The same approach can be used for a resultset (i.e. not just a ->find returning a single Koha::Biblio).

You can then access (without further queries to the DB) the MARC record through:

$biblio->metadata->record

I think the only missing thing here would be a Koha::RecordProcessor filter (EmbedItems) that would take this resultset (so no new queries) and do the same thing your original code did.

In short: heavy use of prefetch, and abandon the C4::Biblio and C4::Items routines. If you agree I could write the RecordProcessor filter.
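On a resultset, that would look roughly like this. A sketch of the prefetch approach; whether the items accessor reuses the prefetched rows depends on it going through the DBIC relation, which is an assumption here:

    use Koha::Biblios;

    # Sketch: one LEFT JOIN query brings back the biblio rows together
    # with their metadata and items rows, so iterating the resultset
    # triggers no further queries.
    my $biblios = Koha::Biblios->search(
        { 'me.biblionumber' => { -in => \@biblionumbers } },
        { prefetch          => [ 'metadata', 'items' ] }
    );

    while ( my $biblio = $biblios->next ) {
        my $record = $biblio->metadata->record;    # no extra DB round trip
        # ... hand $record plus the prefetched items to a
        # Koha::RecordProcessor filter that embeds them
    }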
(In reply to Tomás Cohen Arazi from comment #55)
> You will notice that there's actually a prefetch of the related metadata
> row and the (maybe multiple) items rows in the same query (through a LEFT
> JOIN).

That's clever!
(In reply to Tomás Cohen Arazi from comment #55)
> Hi David, I haven't been following this thread in full, but I suggest you
> take a look at this line of code:
>
> http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=opac/opac-ISBDdetail.pl;h=c79167cd31274cf115e99293bdd7fa7716c3d88d;hb=HEAD#l63
>
> You will notice that there's actually a prefetch of the related metadata
> row and the (maybe multiple) items rows in the same query (through a LEFT
> JOIN).
>
> The same approach can be used for a resultset (i.e. not just a ->find
> returning a single Koha::Biblio).
>
> You can then access (without further queries to the DB) the MARC record
> through:
>
> $biblio->metadata->record
>
> I think the only missing thing here would be a Koha::RecordProcessor
> filter (EmbedItems) that would take this resultset (so no new queries)
> and do the same thing your original code did.
>
> In short: heavy use of prefetch, and abandon the C4::Biblio and C4::Items
> routines. If you agree I could write the RecordProcessor filter.

Hi! This sounds good, if I'm understanding correctly. The big win is being able to fetch multiple biblios in batches, resulting in only one, or a constant number of, queries. If prefetch on a biblio resultset manages to do this, you would probably get the same speedup. (Worth noting is that the overhead is not on the SQL server, but almost completely in the Perl code initializing each call to Koha::Objects->find/search, so more efficient database queries are not the primary target for improving performance; making fewer calls to DBIx methods through batching is.)
It would need a larger refactoring though, even though I know we want to get rid of the C4 namespace. GetMarcBiblio would be replaced by a call to Koha::Biblios->find with a Koha::RecordProcessor for embedding items. (I would suggest keeping but deprecating GetMarcBiblio and replacing the subroutine body with the Koha-namespace equivalent code.) Perhaps it would be a mistake to make things a bit messy by including this refactoring in this patch.
GetItems is gone, but I think some of the thoughts here are still valid - doing a single fetch for each item of a biblio is excessive. EmbedItemsInMarcBiblio should use Koha::Items->search(), limiting to the biblionumber, filtering against $itemnumbers, and checking for OPAC visibility (filter_by_visible_in_opac). GetMarcItem should accept a Koha item object too, to avoid re-fetching the items. It may be worth filing new bugs, as this one is lengthy now.
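For reference, the suggested EmbedItemsInMarcBiblio internals would be roughly as follows. This is a sketch of the suggestion above; the $opac and $patron variables are assumptions about the surrounding code:

    use Koha::Items;

    # Sketch: one search per biblio instead of one fetch per item,
    # optionally restricted to $itemnumbers, with OPAC visibility
    # handled by the existing filter.
    my $items = Koha::Items->search(
        {
            biblionumber => $biblionumber,
            ( $itemnumbers && @{$itemnumbers}
                ? ( itemnumber => { -in => $itemnumbers } )
                : () ),
        }
    );
    $items = $items->filter_by_visible_in_opac( { patron => $patron } )
      if $opac;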
Bug 21206 removed GetItem.