Description
Ere Maijala
2018-04-26 10:22:56 UTC
Created attachment 74873 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are three ways this is accomplished: 1.) Use state variables to hold prepared SQL statements so that the preparation is done only once. 2.) Create optimized method for fetching item fields for MARC embedding. 3.) Use the cache service more and where repeated calls are made.

Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: diff -u unoptimized.xml optimized.xml

These changes also improve performance for other processes that call e.g. GetMarcBiblio repeatedly. My test results show an improvement from about 30 seconds to about 10 seconds with the example export in the test plan.

Created attachment 75968 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are three ways this is accomplished: 1.) Use state variables to hold prepared SQL statements so that the preparation is done only once. 2.) Create optimized method for fetching item fields for MARC embedding. 3.) Use the cache service more and where repeated calls are made.

Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: diff -u unoptimized.xml optimized.xml

The latest version fixes a mistake in the itemnumber check. Also rebased on master.

Fails t/db_dependent/Items.t

#   Failed test 'For OPAC, the pref OpacHiddenItems should have been take into account. Only items with homebranch ne CPL should have been embedded'
#   at t/db_dependent/Items.t line 631.
#          got: '8'
#     expected: '2'
#   Failed test 'For OPAC, If all items are hidden, no item should have been embedded'
#   at t/db_dependent/Items.t line 642.
#          got: '8'
#     expected: '0'
# Looks like you failed 2 tests of 7.
#   Failed test 'C4::Biblio::EmbedItemsInMarcBiblio'
#   at t/db_dependent/Items.t line 649.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
#   Failed test 'items.barcode is not mapped anymore, so the DB column has not been updated'
#   at t/db_dependent/Items.t line 726.
#          got: undef
#     expected: 'a barcode'
# Looks like you failed 1 test of 4.
t/db_dependent/Items.t .. 11/14
#   Failed test 'C4::Items::_build_default_values_for_mod_marc'
#   at t/db_dependent/Items.t line 731.
# Looks like you failed 2 tests of 14.
t/db_dependent/Items.t .. Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/14 subtests

Test Summary Report
-------------------
t/db_dependent/Items.t (Wstat: 512 Tests: 14 Failed: 2)
  Failed tests:  10-11
  Non-zero exit status: 2

(In reply to Lari Taskula from comment #6)
> Fails t/db_dependent/Items.t

This is due to caching parsed YAML from OpacHiddenItems. The cache is not cleared between tests when OpacHiddenItems changes. Instead of additionally caching the parsed YAML of this system preference, would it be better for Koha to handle YAML system preferences as parsed by default? This would require some changes to Koha core, but I guess it would benefit all features using system preferences represented in YAML.

Thanks for the feedback. I think I'll spin that off as a separate bug and remove the caching here.

Created attachment 76045 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are three ways this is accomplished: 1.) Use state variables to hold prepared SQL statements so that the preparation is done only once. 2.) Create optimized method for fetching item fields for MARC embedding. 3.) Use the cache service more and where repeated calls are made.

Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: diff -u unoptimized.xml optimized.xml

The latest patch removes the custom YAML caching and fixes the tests to properly flush all caches after changing the settings. It additionally fixes warnings surfaced by the use of Modern::Perl.

Created attachment 76046 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are three ways this is accomplished: 1.) Use state variables to hold prepared SQL statements so that the preparation is done only once. 2.) Create optimized method for fetching item fields for MARC embedding. 3.) Use the cache service more and where repeated calls are made.

Test plan:
1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: diff -u unoptimized.xml optimized.xml

Comment on attachment 76046 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

I need to come up with a better mechanism for avoiding unnecessary query preparation, since statements cached in state variables may become stale.

Created attachment 76669 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20664

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two changes to accomplish this: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Also the now-unnecessary frameworkcode parameter to _strip_item_fields() has been removed.

Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: diff -u unoptimized.xml optimized.xml

The latest patch is simpler and avoids any attempt at caching prepared statements. Fortunately MySQL prepares statements quickly, so the performance improvement is still good.

Created attachment 78868 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20664

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two changes to accomplish this: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Also the now-unnecessary frameworkcode parameter to _strip_item_fields() has been removed.

Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.)
Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: diff -u unoptimized.xml optimized.xml Created attachment 78869 [details] [review] Bug 20664: Add unit tests for GetMarcItem To test: prove -v t/db_dependent/Items.t Created attachment 78870 [details] [review] Bug 20664: Unit tests for GetMarcItemFields To test: prove -v t/db_dependent/Items.t I shamelessly lifted Nick's tests from bug 21006. Comment on attachment 78868 [details] [review] Bug 20664: Optimize retrieval of biblio and item data Review of attachment 78868 [details] [review]: ----------------------------------------------------------------- The changes to the hiding logic break extensions to OpacHiddenItems in bugs 14385 and eventually 10589. :( Too bad, but it happens. Since bug 14385 is already signed off, I'll rework this. Looks like it will be a significant change... *** Bug 21006 has been marked as a duplicate of this bug. *** Created attachment 79225 [details] [review] Bug 20664: Optimize retrieval of biblio and item data Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: diff -u unoptimized.xml optimized.xml 6.) Run tests to verify that they still pass: prove t/db_dependent/Items.t The latest version now incorporates the functionality added in bug 14385. Oops, need to fix conflicts with tests. Created attachment 79227 [details] [review] Bug 20664: Optimize retrieval of biblio and item data Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) 
Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical:

Created attachment 79228 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test: prove -v t/db_dependent/Items.t

Created attachment 79229 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test: prove -v t/db_dependent/Items.t

Created attachment 79276 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made.

Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

Created attachment 79277 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test: prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

Created attachment 79278 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test: prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

Created attachment 79279 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test: YAML Error: Stream does not end with newline character

Test plan: prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

Comment on attachment 79276 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Review of attachment 79276 [details] [review]:
-----------------------------------------------------------------

The differences between UNIMARC and MARC21, and in the future an integration of both types of records in the same system, might be a reason to leave the framework code parameter stuff in. If not, I just have a really bad feeling about removing the frameworkcode parameter.

::: C4/Biblio.pm
@@ -310,4 @@
> > $frameworkcode = "" if !$frameworkcode || $frameworkcode eq "Default"; # XXX
> >
> > - _strip_item_fields($record, $frameworkcode);

I really think removing the frameworkcode is a bad idea.

(In reply to M. Tompsett from comment #32)
> ::: C4/Biblio.pm
> @@ -310,4 @@
> > > $frameworkcode = "" if !$frameworkcode || $frameworkcode eq "Default"; # XXX
> > >
> > > - _strip_item_fields($record, $frameworkcode);
>
> I really think removing the frameworkcode is a bad idea.

The default framework was made authoritative in bug 19096. As a result, the frameworkcode parameter of the GetMarcFromKohaField function is no longer used. _strip_item_fields still uses GetMarcFromKohaField; it just doesn't need the frameworkcode for anything. So effectively the patch just removes an unused parameter.
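For context, here is a minimal sketch (not the actual patch; the sub name and body are illustrative only) of what a _strip_item_fields without the frameworkcode parameter can look like once GetMarcFromKohaField answers only from the default framework:

```perl
use Modern::Perl;
use C4::Biblio qw( GetMarcFromKohaField );

# Illustrative sketch only: strip embedded item fields from a bib record
# without needing a framework code, since GetMarcFromKohaField (after bug
# 19096) always uses the default framework's mappings.
sub _strip_item_fields_sketch {
    my ($record) = @_;    # a MARC::Record object

    # Tag/subfield that items.itemnumber is mapped to in the default framework.
    my ( $itemtag, $itemsubfield ) = GetMarcFromKohaField('items.itemnumber');

    # Remove every embedded item field from the record.
    foreach my $field ( $record->field($itemtag) ) {
        $record->delete_field($field);
    }
    return $record;
}
```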
(In reply to Ere Maijala from comment #33) > (In reply to M. Tompsett from comment #32) > > ::: C4/Biblio.pm > > @@ -310,4 @@ > > > > > > $frameworkcode = "" if !$frameworkcode || $frameworkcode eq "Default"; # XXX > > > > > > - _strip_item_fields($record, $frameworkcode); > > > > I really think removing the frameworkcode is a bad idea. > > The default framwork was made authoritative in bug 19096. As a result of > that, the frameworkcode parameter in GetMarcFromKohaField function is no > longer used. _strip_item_fields is still using GetMarcFromKohaField, it just > doesn't need the frameworkcode for anything. So effectively the patch just > removes an unused parameter. Yeah still writing that patch :) On top of 14385 as suggested: Applying: Bug 20664: Optimize retrieval of biblio and item data /usr/share/koha/devclone/.git/rebase-apply/patch:220: tab in indent. my ( $biblionumber, $itemnumbers, $hidingrules ) = @_; fatal: sha1 information is lacking or useless (t/db_dependent/Items.t). New year, new request for next steps.. be nice to get this moving again. Created attachment 84516 [details] [review] Bug 20664: Optimize retrieval of biblio and item data Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Created attachment 84517 [details] [review] Bug 20664: Add unit tests for GetMarcItem To test: prove -v t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Created attachment 84518 [details] [review] Bug 20664: Unit tests for GetMarcItemFields To test: prove -v t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Created attachment 84519 [details] [review] Bug 20664: (follow-up) Fix test for GetMarcItemFields Without this patch I got this error running the test: YAML Error: Stream does not end with newline character Test plan: prove t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Created attachment 84520 [details] [review] Bug 20664: (follow-up) Fix QA whitespace errors Sorry for the delay. Now rebased and fixed the whitespace issues. Created attachment 85106 [details] [review] Bug 20664: Optimize retrieval of biblio and item data Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Test plan: 1.) 
Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical: Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85107 [details] [review] Bug 20664: Add unit tests for GetMarcItem To test: prove -v t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85108 [details] [review] Bug 20664: Unit tests for GetMarcItemFields To test: prove -v t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85109 [details] [review] Bug 20664: (follow-up) Fix test for GetMarcItemFields Without this patch I got this error running the test: YAML Error: Stream does not end with newline character Test plan: prove t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85110 [details] [review] Bug 20664: (follow-up) Fix QA whitespace errors Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> A nice little performance boost that I can't find any regressions in.. Passing QA. (In reply to Ere Maijala from comment #14) > The latest patch is more simple and avoids any attempts at caching prepared > statements. Fortunately MySQL prepares quickly, so the performance > improvement is still good. I was really excited when I read that you were going to use cached prepared statements (as I've used them to gain huge performance boosts on other Perl projects), so I'm saddened to see this, but after reviewing GetMarcItemFields I see that it would be challenging/impossible to do well, because of the dynamic query generation with @itemnumbers. Was that the reason you didn't do it? > I was really excited when I read that you were going to use cached prepared
> statements (as I've used them to gain huge performance boosts on other Perl
> projects), so I'm saddened to see this, but after reviewing
> GetMarcItemFields I see that it would be challenging/impossible to do well,
> because of the dynamic query generation with @itemnumbers. Was that the
> reason you didn't do it?
Unfortunately cached prepared statements don't work well with Plack where the process can outlive the MySQL connection. We found that reconnection to MySQL would leave the prepared statements holding on to the previous connection leading to "MySQL server has gone away" errors. All this could be overcome with additional logic, but that'd be another effort.
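To illustrate the failure mode being described (a sketch only, not code from the patch; the helper name is made up), a statement handle kept in a state variable keeps pointing at the connection it was prepared on, so a guard roughly like this would be the kind of "additional logic" needed to survive a reconnect:

```perl
use Modern::Perl;
use C4::Context;

# Hypothetical helper, for illustration only: re-use a prepared statement
# across calls, but re-prepare it whenever the underlying $dbh has changed
# (e.g. after a "MySQL server has gone away" reconnect under Plack).
sub _items_sth {
    my $dbh = C4::Context->dbh;

    state $sth;
    if ( !$sth || $sth->{Database} != $dbh ) {
        # The cached handle is missing or belongs to a dead/old connection.
        $sth = $dbh->prepare('SELECT * FROM items WHERE biblionumber = ?');
    }
    return $sth;
}
```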
(In reply to Ere Maijala from comment #50)
> Unfortunately cached prepared statements don't work well with Plack where
> the process can outlive the MySQL connection. We found that reconnection to
> MySQL would leave the prepared statements holding on to the previous
> connection, leading to "MySQL server has gone away" errors. All this could be
> overcome with additional logic, but that'd be another effort.

That's an interesting point, although I was thinking of just caching (actually, re-using is probably a more accurate word than caching here) the prepared statement in the scope of the batch operation. I wouldn't cache it at the level of the Plack web server. I actually had to deal with that reconnection issue on a separate project recently (see "validate-on-match" and "background-validation" at https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/beta500/html/ch13s13.html).

Caching during a batch operation would be useful, but it's harder to do without messing with the architecture. And there's still a chance that something hiccups during a batch process and automatic reconnection doesn't kick in for the cached statement. In any case the performance benefit from caching the statements, while measurable and significant, isn't dramatic.

(In reply to Ere Maijala from comment #52)
> Caching during a batch operation would be useful, but it's harder to do
> without messing with the architecture. And there's still a chance that something
> hiccups during a batch process and automatic reconnection doesn't kick in
> for the cached statement. In any case the performance benefit from caching
> the statements, while measurable and significant, isn't dramatic.

Yeah, it would certainly require messing with the existing architecture, and I can understand not doing that. If something did hiccup during a batch process and automatic reconnection didn't kick in, then we'd trap and report the error, which doesn't seem like the end of the world to me. But no worries. It's interesting that it's not dramatic. I've seen dramatic performance benefits on other projects, although perhaps that was for more complex queries and higher volumes of data where fractions of seconds can add up dramatically. In any case, you don't have to justify it to me. I'm just curious. Thanks for explaining your thought process to me :).

Does this affect the export of biblios/items for Zebra indexing? I'm watching a rebuild_zebra.pl crawling along exporting the bib records, and thinking that surely it should be able to go faster than this.

This should provide a nice performance improvement also for exports for indexing. MySQL is quick to prepare statements; that's why we can get away with it. Other databases may have different behavior, and there are certainly slower ones in that regard.

(In reply to Ere Maijala from comment #55)
> This should provide a nice performance improvement also for exports for
> indexing.

Excellent.

> MySQL is quick to prepare statements; that's why we can get away with it.
> Other databases may have different behavior, and there are certainly slower
> ones in that regard.

MySQL also makes some really bad query plans. Maybe that's how it achieves its speed. For Koha's user-generated SQL Reports I've actually resorted to using "index hints" lately to get efficient query plans. It's a MySQLism but can make a world of difference as well. In any case, I think any improvement is well worthwhile :D. So thanks for doing this one!
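The batch-scoped re-use described above would look roughly like this (a sketch only, assuming the caller drives the whole export loop; not part of the patch):

```perl
use Modern::Perl;
use C4::Context;

# Sketch only: prepare once, execute per biblio. The handle lives only for
# the duration of one batch run, so Plack process lifetime is not a concern.
my @biblionumbers = @ARGV;    # biblionumbers for the batch being exported

my $dbh = C4::Context->dbh;
my $sth = $dbh->prepare('SELECT * FROM items WHERE biblionumber = ?');

for my $biblionumber (@biblionumbers) {
    $sth->execute($biblionumber);
    while ( my $item = $sth->fetchrow_hashref ) {
        # ... turn $item into an embedded MARC item field here ...
    }
}
```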
Sorry, rebase needed here.

Created attachment 85776 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made.

Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 85777 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test: prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 85778 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test: prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 85779 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test: YAML Error: Stream does not end with newline character

Test plan: prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 85780 [details] [review]
Bug 20664: (follow-up) Fix QA whitespace errors

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 85781 [details] [review]
Bug 20664: (follow-up) Fix tests on rebase

I am not convinced by the changes made to GetMarcFromKohaField. I would not cache anything here, as GetMarcSubfieldStructure already caches it. In my test it does not bring any performance boost. I know we need performance improvements, but if it goes against consistency and code readability/maintainability, I think we are not going in the right direction:
=> Use of raw SQL statements
=> Duplication of code (when we are trying to centralize it) - item type calculation
Also it seems that tests are missing (`git grep UseHidingRulesWithBorrowerCategory t` does not return anything).

Caching in GetMarcFromKohaField is indeed useless; I'll remove it. The missing test is also something I need to fix. I can also make the missing itype check use Koha::Item->effective_itemtype to avoid duplication, since it shouldn't be a common situation. What I don't really like is using a known slow method for fetching the data from the database. I can change it to use Koha::Items->search, since the changes still help improve the performance, but I'm not convinced that's the way it should be.
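The Koha::Items->search variant mentioned above would look roughly like this (a sketch assuming the usual Koha::Objects API; the cost is that every row is inflated into a DBIx::Class row and then a Koha::Item object):

```perl
use Modern::Perl;
use Koha::Items;

my $biblionumber = 1;    # example biblionumber

# Sketch only: fetch the items through the object layer instead of raw SQL.
my $items = Koha::Items->search( { biblionumber => $biblionumber } );
while ( my $item = $items->next ) {
    my $columns = $item->unblessed;    # plain hashref of the items row
    # ... build the embedded MARC item field from $columns ...
}
```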
(In reply to Ere Maijala from comment #65) > Caching in GetMarcFromKohaField is indeed useless, I'll remove it. Also the > test missing is something I need to fix. I can also make the missing itype > check use Koha::Item->effective_itemtype to avoid duplication since it > shouldn't be a common situation. What I don't really like is to use a known > slow method for fetching the data from the database. I can change it to use > Koha::Items->search since the changes still help improve the performance, > but I'm not convinced that's the way it should be. If you are calling Koha::Items->find( $item->{itemnumber} )->effective_itemtype, my guess is that the Koha::Items->search will not have a big impact. Yes, but originally all the item type checking could be avoided unless item-level_itypes was false or item didn't have itype. Created attachment 85898 [details] [review] Bug 20664: Optimize retrieval of biblio and item data Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) 
Compare the resulting export files to make sure they're identical: Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85899 [details] [review] Bug 20664: Add unit tests for GetMarcItem To test: prove -v t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85900 [details] [review] Bug 20664: Unit tests for GetMarcItemFields To test: prove -v t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85901 [details] [review] Bug 20664: (follow-up) Fix test for GetMarcItemFields Without this patch I got this error running the test: YAML Error: Stream does not end with newline character Test plan: prove t/db_dependent/Items.t Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85902 [details] [review] Bug 20664: (follow-up) Fix QA whitespace errors Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com> Created attachment 85903 [details] [review] Bug 20664: (follow-up) Fix tests on rebase Created attachment 85904 [details] [review] Bug 20664: (follow-up) Switch to Koha objects for retrieving items Created attachment 85905 [details] [review] Bug 20664: (follow-up) Add tests for UseHidingRulesWithBorrowerCategory Created attachment 86002 [details] [review] Bug 20664: Optimize retrieval of biblio and item data Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished: 1.) Create optimized method for fetching item fields for MARC embedding. 2.) Use the cache service more and where repeated calls are made. Test plan: 1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml 2.) Apply the patch. 3.) Time the export process again with a different output file: time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml 4.) Verify that the optimized process is faster. 5.) 
Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 86003 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test: prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 86004 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test: prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 86005 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test: YAML Error: Stream does not end with newline character

Test plan: prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Created attachment 86006 [details] [review]
Bug 20664: (follow-up) Fix QA whitespace errors

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

Created attachment 86007 [details] [review]
Bug 20664: (follow-up) Fix tests on rebase

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

Created attachment 86008 [details] [review]
Bug 20664: (follow-up) Switch to Koha objects for retrieving items

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

Created attachment 86009 [details] [review]
Bug 20664: (follow-up) Add tests for UseHidingRulesWithBorrowerCategory

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>

+ # This is so much faster than using Koha::Items->search that it makes sense even if it's ugly.
+ my $query = 'SELECT * FROM items WHERE biblionumber = ?';

Patch still applies.

If I am reading through the comments above, the main point of discussion is now: Do we want to return to raw SQL in the above lines? It is rather obvious that this is faster than Koha::Object/DBIx. But we made a choice for DBIx and are still wrestling to implement it in the codebase. What would be the decisive reason for making the exception here, and would it be a precedent for doing similar things elsewhere? Also note that although the test plan refers to verifying that things are faster, I do not see any benchmark figures on the report.

Moving to discussion and sending mail to QA.

Comment on attachment 86002 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Review of attachment 86002 [details] [review]:
-----------------------------------------------------------------

::: C4/Items.pm
@@ +1357,5 @@
> + my $item_level_itype = C4::Context->preference('item-level_itypes');
> + # This is so much faster than using Koha::Items->search that it makes sense even if it's ugly.
> + my $query = 'SELECT * FROM items WHERE biblionumber = ?';
> + if (@$itemnumbers) {
> +     $query .= ' AND itemnumber IN (' . join(',', @$itemnumbers) . ')';

This should be adding ? placeholders and binding the itemnumbers before executing. While it would probably be rare, a malformed record could cause SQL errors here.
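What the reviewer is suggesting would look roughly like this (a sketch only, reusing the variable names from the quoted hunk; the example values stand in for the real arguments):

```perl
use Modern::Perl;
use C4::Context;

my $biblionumber = 1;             # example value
my $itemnumbers  = [ 42, 43 ];    # example value; arrayref as in the quoted hunk

# Build one placeholder per itemnumber and bind the values, instead of
# interpolating them into the SQL string.
my $dbh   = C4::Context->dbh;
my $query = 'SELECT * FROM items WHERE biblionumber = ?';
my @bind  = ($biblionumber);

if (@$itemnumbers) {
    my $placeholders = join ',', ('?') x @$itemnumbers;
    $query .= " AND itemnumber IN ($placeholders)";
    push @bind, @$itemnumbers;
}

my $sth = $dbh->prepare($query);
$sth->execute(@bind);
```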
(In reply to Marcel de Rooy from comment #84)
> + # This is so much faster than using Koha::Items->search that it makes
> sense even if it's ugly.
> + my $query = 'SELECT * FROM items WHERE biblionumber = ?';
>
> Patch still applies.
>
> If I am reading through the comments above, the main point of discussion is
> now: Do we want to return to raw SQL in the above lines? It is rather
> obvious that this is faster than Koha::Object/DBIx. But we made a choice for
> DBIx and are still wrestling to implement it in the codebase. What would be
> the decisive reason for making the exception here, and would it be a
> precedent for doing similar things elsewhere? Also note that although the
> test plan refers to verifying that things are faster, I do not see any
> benchmark figures on the report.
>
> Moving to discussion and sending mail to QA.

I'd say the decisive reason would be that the low performance of DBIx::Class adds up when doing batch operations. While low performance and high convenience might be tolerable for many single requests, if you're dealing with high volumes of data, it's a nightmare. Even fractions of a second per record can add up to unacceptably long run times. I've used raw SQL at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662#c221 for the same reason. I wish I had included benchmarks on that bug, but with the raw SQL and re-using the statement handle, I was able to process much higher volumes. I think it was something like 100 records per second instead of 4 records per second.

David, you're right about the placeholders for itemnumbers, though the change will only be necessary if the code will actually be used instead of Koha objects. About benchmarks, I find them difficult since everyone's mileage will vary. I can fabricate a worst-case benchmark, such as exporting 1000 biblios that each have 1000 items, but someone else would have a hard time getting the impressive results I got. In many cases a simple situation with one item per biblio might not show a meaningful change at all.

Maybe this would be a compromise for Ere's situation: https://metacpan.org/pod/distribution/DBIx-Class/lib/DBIx/Class/Manual/Cookbook.pod#Arbitrary-SQL-through-a-custom-ResultSource It wouldn't work for my scenario as I'm doing higher performance inserts, but this might be a nice way of adding arbitrary SQL while maintaining use of the DBIx::Class framework? (I was inspired by Class::DBI. I work on a legacy project that uses Class::DBI, and it has some functionality for adding arbitrary SQL to the ORM: https://metacpan.org/pod/Class::DBI#Ima::DBI-queries) There's also some interesting discussion at https://www.perlmonks.org/?node_id=700283. I wonder if a person could define a custom search through https://metacpan.org/pod/DBIx::Class::ResultSet#ATTRIBUTES to achieve the performance that Ere wants. Ere, you're not re-using statement handles, right? So are you getting the performance improvement basically by reducing the number of SQL queries being made? If so, you might be able to avoid the arbitrary SQL by just making a more detailed search as noted in the attributes link above. Worth thinking about, maybe?

I'd be interested to see exactly what Koha::Object search call you were making for comparison... I've always found the SQL::Abstract query compilation really pretty quick.. it's the result inflation into DBIx::Class (and then Koha::Object) objects that takes time.. This can be entirely skipped with DBIx::Class::ResultClass::HashRefInflator and friends.. that's the approach I would take personally, I think. Caveat.. if you're wanting related (JOINed) data you'll want to call 'prefetch' in the actual search call too... I'm not entirely sure how Koha::Objects plays games here, as it just adds a further layer of complexity on top of DBIx::Class.. but I've certainly seen really solid results when I've used ::HashRefInflator in other projects.
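The HashRefInflator suggestion would look roughly like this (a sketch; the 'Item' result source name and the search condition are assumptions about Koha's schema rather than code from this bug):

```perl
use Modern::Perl;
use Koha::Database;

my $biblionumber = 1;    # example biblionumber

# Sketch only: query through DBIx::Class but skip row/object inflation by
# returning plain hashrefs via HashRefInflator.
my $schema = Koha::Database->new->schema;
my $rs     = $schema->resultset('Item')->search(
    { biblionumber => $biblionumber },
    { result_class => 'DBIx::Class::ResultClass::HashRefInflator' }
);
while ( my $item = $rs->next ) {
    # $item is a plain hashref of the items row
}
```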
I won't pursue this further. There's nice progress in bug 23793, which makes embedding item data much cleaner.