Bug 20664

Summary: Optimize retrieval of biblio and item data
Product: Koha Reporter: Ere Maijala <ere.maijala>
Component: Architecture, internals, and plumbing Assignee: Ere Maijala <ere.maijala>
Status: CLOSED WONTFIX QA Contact: Martin Renvoize <martin.renvoize>
Severity: enhancement    
Priority: P5 - low CC: dcook, f.demians, fridolin.somers, jonathan.druart, josef.moravec, lari.taskula, m.de.rooy, martin.renvoize, nick
Version: Main   
Hardware: All   
OS: All   
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20930
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=19365
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=21006
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=19884
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=23793
Change sponsored?: --- Patch complexity: ---
Documentation contact: Documentation submission:
Text to go in the release notes:
Version(s) released in:
Bug Depends on: 14385    
Bug Blocks:    
Attachments: Bug 20664: Optimize retrieval of biblio and item data
Bug 20664: Add unit tests for GetMarcItem
Bug 20664: Unit tests for GetMarcItemFields
Bug 20664: (follow-up) Fix test for GetMarcItemFields
Bug 20664: (follow-up) Fix QA whitespace errors
Bug 20664: (follow-up) Fix tests on rebase
Bug 20664: (follow-up) Switch to Koha objects for retrieving items
Bug 20664: (follow-up) Add tests for UseHidingRulesWithBorrowerCategory

Description Ere Maijala 2018-04-26 10:22:56 UTC
There are several inefficiencies in the current process of retrieving biblios and embedding items in the MARC data. These especially affect batch processes such as record exports and the OAI-PMH provider.
Comment 1 Ere Maijala 2018-04-26 10:45:35 UTC
Created attachment 74873 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are three ways this is accomplished:

1.) Use state variables to hold prepared SQL statements so that the preparation is done only once (see the sketch after this list).
2.) Create optimized method for fetching item fields for MARC embedding.
3.) Use the cache service more and where repeated calls are made.
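
[Editor's note: as a hedged illustration of technique 1 above (not the literal patch code; the helper name and query are made up), a Perl 'state' variable can pin a prepared handle so prepare() runs only once per process:]

    use Modern::Perl;    # enables 'state' (Perl 5.10+)

    sub _items_sth {
        my ($dbh) = @_;
        # initialized on the first call only; later calls reuse the same handle
        state $sth = $dbh->prepare('SELECT * FROM items WHERE biblionumber = ?');
        return $sth;
    }

[Comment 50 below explains why this approach was eventually dropped under Plack.]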

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

diff -u unoptimized.xml optimized.xml
Comment 2 Ere Maijala 2018-04-26 10:50:37 UTC
These changes also improve performance for other processes that repeatedly call e.g. GetMarcBiblio.
Comment 3 Ere Maijala 2018-04-26 10:52:34 UTC
My test results show an improvement from about 30 seconds to about 10 seconds with the example export in the test plan.
Comment 4 Ere Maijala 2018-06-12 08:05:19 UTC Comment hidden (obsolete)
Comment 5 Ere Maijala 2018-06-12 08:08:07 UTC
Latest version fixes a mistake in the itemnumber check. Also rebased on master.
Comment 6 Lari Taskula 2018-06-13 09:15:11 UTC
Fails t/db_dependent/Items.t

    #   Failed test 'For OPAC, the pref OpacHiddenItems should have been take into account. Only items with homebranch ne CPL should have been embedded'
    #   at t/db_dependent/Items.t line 631.
    #          got: '8'
    #     expected: '2'

    #   Failed test 'For OPAC, If all items are hidden, no item should have been embedded'
    #   at t/db_dependent/Items.t line 642.
    #          got: '8'
    #     expected: '0'
    # Looks like you failed 2 tests of 7.

#   Failed test 'C4::Biblio::EmbedItemsInMarcBiblio'
#   at t/db_dependent/Items.t line 649.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.
Use of uninitialized value in string ne at /home/koha/kohaclone/C4/Items.pm line 1729.

    #   Failed test 'items.barcode is not mapped anymore, so the DB column has not been updated'
    #   at t/db_dependent/Items.t line 726.
    #          got: undef
    #     expected: 'a barcode'
    # Looks like you failed 1 test of 4.
t/db_dependent/Items.t .. 11/14 
#   Failed test 'C4::Items::_build_default_values_for_mod_marc'
#   at t/db_dependent/Items.t line 731.
# Looks like you failed 2 tests of 14.
t/db_dependent/Items.t .. Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/14 subtests 

Test Summary Report
-------------------
t/db_dependent/Items.t (Wstat: 512 Tests: 14 Failed: 2)
  Failed tests:  10-11
  Non-zero exit status: 2
Comment 7 Lari Taskula 2018-06-13 09:25:39 UTC
(In reply to Lari Taskula from comment #6)
> Fails t/db_dependent/Items.t
This is due to caching the parsed YAML from OpacHiddenItems. The cache is not cleared between tests when OpacHiddenItems changes.

Instead of additionally caching the parsed YAML of this system preference, would it be better for Koha to handle YAML system preferences as parsed by default? This would require some changes to Koha core, but I guess it would benefit all features using system preferences represented in YAML.
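
[Editor's note: a loose sketch of that idea, not actual Koha code — the helper name and cache key are invented, and YAML::XS stands in for whichever YAML module Koha uses. Parsing once and caching the structure could look like:]

    use Modern::Perl;
    use C4::Context;
    use Koha::Caches;
    use YAML::XS qw( Load );

    sub parsed_yaml_pref {
        my ($pref) = @_;
        my $cache  = Koha::Caches->get_instance();
        my $key    = "parsed_syspref_$pref";    # made-up cache key
        my $parsed = $cache->get_from_cache($key);
        return $parsed if defined $parsed;
        # a trailing newline avoids the "Stream does not end with newline
        # character" error that YAML.pm-style parsers raise (seen later in
        # this thread)
        $parsed = eval { Load( ( C4::Context->preference($pref) // '' ) . "\n" ) } // {};
        $cache->set_in_cache( $key, $parsed );
        return $parsed;
    }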
Comment 8 Ere Maijala 2018-06-13 09:38:32 UTC
Thanks for the feedback. I think I'll spin that off as a separate bug and remove the caching thing here.
Comment 9 Ere Maijala 2018-06-14 07:23:25 UTC Comment hidden (obsolete)
Comment 10 Ere Maijala 2018-06-14 07:24:54 UTC
The latest patch removes the custom YAML caching and fixes the tests to properly flush all caches after changing settings. It additionally fixes warnings surfaced by the use of Modern::Perl.
Comment 11 Ere Maijala 2018-06-14 07:54:48 UTC Comment hidden (obsolete)
Comment 12 Ere Maijala 2018-06-20 06:14:42 UTC Comment hidden (obsolete)
Comment 13 Ere Maijala 2018-07-04 07:16:11 UTC
Created attachment 76669 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20664

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two changes to accomplish this:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

The now-unnecessary frameworkcode parameter to _strip_item_fields() has also been removed.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

diff -u unoptimized.xml optimized.xml
Comment 14 Ere Maijala 2018-07-04 07:18:01 UTC
The latest patch is simpler and avoids any attempt at caching prepared statements. Fortunately MySQL prepares quickly, so the performance improvement is still good.
Comment 15 Ere Maijala 2018-09-15 18:01:36 UTC
Created attachment 78868 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20664

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two changes to accomplish this:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

The now-unnecessary frameworkcode parameter to _strip_item_fields() has also been removed.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

diff -u unoptimized.xml optimized.xml
Comment 16 Ere Maijala 2018-09-15 18:01:42 UTC
Created attachment 78869 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t
Comment 17 Ere Maijala 2018-09-15 18:01:49 UTC
Created attachment 78870 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t
Comment 18 Ere Maijala 2018-09-15 18:02:58 UTC
I shamelessly lifted Nick's tests from bug 21006.
Comment 19 Mark Tompsett 2018-09-17 23:23:18 UTC
Comment on attachment 78868 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Review of attachment 78868 [details] [review]:
-----------------------------------------------------------------

The changes to the hiding logic break extensions to OpacHiddenItems in bugs 14385 and eventually 10589. :(
Comment 20 Ere Maijala 2018-09-18 00:22:01 UTC
Too bad, but it happens. Since bug 14385 is already signed off, I'll rework this. Looks like it will be a significant change...
Comment 21 Nick Clemens 2018-09-20 19:07:23 UTC
*** Bug 21006 has been marked as a duplicate of this bug. ***
Comment 22 Ere Maijala 2018-09-21 11:44:57 UTC
Created attachment 79225 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

diff -u unoptimized.xml optimized.xml

6.) Run tests to verify that they still pass:

prove t/db_dependent/Items.t
Comment 23 Ere Maijala 2018-09-21 11:46:06 UTC
The latest version now incorporates the functionality added in bug 14385.
Comment 24 Ere Maijala 2018-09-21 11:49:40 UTC
Oops, need to fix conflicts with tests.
Comment 25 Ere Maijala 2018-09-21 12:03:42 UTC
Created attachment 79227 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:
Comment 26 Ere Maijala 2018-09-21 12:03:48 UTC
Created attachment 79228 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t
Comment 27 Ere Maijala 2018-09-21 12:03:55 UTC
Created attachment 79229 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t
Comment 28 Josef Moravec 2018-09-24 08:06:43 UTC
Created attachment 79276 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 29 Josef Moravec 2018-09-24 08:06:47 UTC
Created attachment 79277 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 30 Josef Moravec 2018-09-24 08:06:51 UTC
Created attachment 79278 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 31 Josef Moravec 2018-09-24 08:06:55 UTC
Created attachment 79279 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test:

YAML Error: Stream does not end with newline character

Test plan:
prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 32 Mark Tompsett 2018-10-11 14:50:44 UTC
Comment on attachment 79276 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Review of attachment 79276 [details] [review]:
-----------------------------------------------------------------

The differences between UNIMARC and MARC21, and a possible future integration of both record types in the same system, might be a reason to leave the framework code parameter in place. If not, I just have a really bad feeling about removing the frameworkcode parameter.

::: C4/Biblio.pm
@@ -310,4 @@
>  
>      $frameworkcode = "" if !$frameworkcode || $frameworkcode eq "Default"; # XXX
>  
> -    _strip_item_fields($record, $frameworkcode);

I really think removing the frameworkcode is a bad idea.
Comment 33 Ere Maijala 2018-10-12 06:05:32 UTC
(In reply to M. Tompsett from comment #32)
> ::: C4/Biblio.pm
> @@ -310,4 @@
> >  
> >      $frameworkcode = "" if !$frameworkcode || $frameworkcode eq "Default"; # XXX
> >  
> > -    _strip_item_fields($record, $frameworkcode);
> 
> I really think removing the frameworkcode is a bad idea.

The default framework was made authoritative in bug 19096. As a result, the frameworkcode parameter to the GetMarcFromKohaField function is no longer used. _strip_item_fields is still using GetMarcFromKohaField; it just doesn't need the frameworkcode for anything. So effectively the patch just removes an unused parameter.
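
[Editor's note: for illustration, hedged and based on the description above rather than the patch itself — since the default framework is authoritative, the framework code argument is simply ignored, so a lookup needs only the Koha field name:]

    my ( $itemtag, $itemsubfield ) =
        C4::Biblio::GetMarcFromKohaField('items.itemnumber');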
Comment 34 Marcel de Rooy 2018-11-02 10:26:01 UTC
(In reply to Ere Maijala from comment #33)
> (In reply to M. Tompsett from comment #32)
> > ::: C4/Biblio.pm
> > @@ -310,4 @@
> > >  
> > >      $frameworkcode = "" if !$frameworkcode || $frameworkcode eq "Default"; # XXX
> > >  
> > > -    _strip_item_fields($record, $frameworkcode);
> > 
> > I really think removing the frameworkcode is a bad idea.
> 
> The default framework was made authoritative in bug 19096. As a result,
> the frameworkcode parameter to the GetMarcFromKohaField function is no
> longer used. _strip_item_fields is still using GetMarcFromKohaField; it just
> doesn't need the frameworkcode for anything. So effectively the patch just
> removes an unused parameter.

Yeah still writing that patch :)
Comment 35 Marcel de Rooy 2018-11-02 10:28:23 UTC
On top of 14385 as suggested:

Applying: Bug 20664: Optimize retrieval of biblio and item data
/usr/share/koha/devclone/.git/rebase-apply/patch:220: tab in indent.
        my ( $biblionumber, $itemnumbers, $hidingrules ) = @_;
fatal: sha1 information is lacking or useless (t/db_dependent/Items.t).
Comment 36 Martin Renvoize 2019-01-30 09:09:25 UTC
New year, new request for next steps.. it'd be nice to get this moving again.
Comment 37 Ere Maijala 2019-01-30 13:15:55 UTC
Created attachment 84516 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 38 Ere Maijala 2019-01-30 13:16:01 UTC
Created attachment 84517 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 39 Ere Maijala 2019-01-30 13:16:06 UTC
Created attachment 84518 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 40 Ere Maijala 2019-01-30 13:16:11 UTC
Created attachment 84519 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test:

YAML Error: Stream does not end with newline character

Test plan:
prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 41 Ere Maijala 2019-01-30 13:16:16 UTC
Created attachment 84520 [details] [review]
Bug 20664: (follow-up) Fix QA whitespace errors
Comment 42 Ere Maijala 2019-01-30 13:19:22 UTC
Sorry for the delay. Now rebased and fixed the whitespace issues.
Comment 43 Martin Renvoize 2019-02-14 12:53:20 UTC
Created attachment 85106 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 44 Martin Renvoize 2019-02-14 12:53:23 UTC
Created attachment 85107 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 45 Martin Renvoize 2019-02-14 12:53:27 UTC
Created attachment 85108 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 46 Martin Renvoize 2019-02-14 12:53:31 UTC
Created attachment 85109 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test:

YAML Error: Stream does not end with newline character

Test plan:
prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 47 Martin Renvoize 2019-02-14 12:53:34 UTC
Created attachment 85110 [details] [review]
Bug 20664: (follow-up) Fix QA whitespace errors

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 48 Martin Renvoize 2019-02-14 12:54:14 UTC
A nice little performance boost that I can't find any regressions in.. Passing QA.
Comment 49 David Cook 2019-02-15 00:35:08 UTC
(In reply to Ere Maijala from comment #14)
> The latest patch is simpler and avoids any attempt at caching prepared
> statements. Fortunately MySQL prepares quickly, so the performance
> improvement is still good.

I was really excited when I read that you were going to use cached prepared statements (as I've used them to gain huge performance boosts on other Perl projects), so I'm saddened to see this, but after reviewing GetMarcItemFields I see that it would be challenging/impossible to do well, because of the dynamic query generation with @itemnumbers. Was that the reason you didn't do it?
Comment 50 Ere Maijala 2019-02-15 07:36:58 UTC
> I was really excited when I read that you were going to use cached prepared
> statements (as I've used them to gain huge performance boosts on other Perl
> projects), so I'm saddened to see this, but after reviewing
> GetMarcItemFields I see that it would be challenging/impossible to do well,
> because of the dynamic query generation with @itemnumbers. Was that the
> reason you didn't do it?

Unfortunately cached prepared statements don't work well with Plack where the process can outlive the MySQL connection. We found that reconnection to MySQL would leave the prepared statements holding on to the previous connection leading to "MySQL server has gone away" errors. All this could be overcome with additional logic, but that'd be another effort.
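
[Editor's note: a hedged sketch of the failure mode described here; illustrative code, not from the patch:]

    use Modern::Perl;
    use C4::Context;

    sub fetch_items {
        my ($biblionumber) = @_;
        state $sth;    # persists for the life of the Plack worker
        $sth //= C4::Context->dbh->prepare(
            'SELECT * FROM items WHERE biblionumber = ?');
        # If DBI has reconnected since $sth was prepared, $sth still points
        # at the dead connection and execute() fails with
        # "MySQL server has gone away".
        $sth->execute($biblionumber);
        return $sth->fetchall_arrayref( {} );
    }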
Comment 51 David Cook 2019-02-18 04:46:23 UTC
(In reply to Ere Maijala from comment #50)
> Unfortunately cached prepared statements don't work well with Plack where
> the process can outlive the MySQL connection. We found that reconnection to
> MySQL would leave the prepared statements holding on to the previous
> connection leading to "MySQL server has gone away" errors. All this could be
> overcome with additional logic, but that'd be another effort.

That's an interesting point, although I was thinking of just caching (actually, re-using is probably a more accurate word than caching here) the prepared statement in the scope of the batch operation. I wouldn't cache it at the level of the Plack web server.

I actually had to deal with that reconnection issue on a separate project recently (see "validate-on-match" and "background-validation" at https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/beta500/html/ch13s13.html).
Comment 52 Ere Maijala 2019-02-18 08:26:04 UTC
Caching during a batch operation would be useful, but it's harder to do without messing with the architecture. And there's still a chance that something hiccups during a batch process and automatic reconnection doesn't kick in for the cached statement. In any case the performance benefit from caching the statements, while measurable and significant, isn't dramatic.
Comment 53 David Cook 2019-02-19 01:06:06 UTC
(In reply to Ere Maijala from comment #52)
> Caching during a batch operation would be useful, but it's harder to do
> without messing with the architecture. And there's still a chance that something
> hiccups during a batch process and automatic reconnection doesn't kick in
> for the cached statement. In any case the performance benefit from caching
> the statements, while measurable and significant, isn't dramatic.

Yeah, it would certainly require messing with the existing architecture, and I can understand not doing that. 

If something did hiccup during a batch process and automatic reconnection didn't kick in, then we'd trap and report the error, which doesn't seem like the end of the world to me. But no worries.

That's interesting that it's not dramatic. I've seen dramatic performance benefits on other projects, although perhaps that was for more complex queries and higher volumes of data where fractions of seconds can add up dramatically. 

In any case, you don't have to justify it to me. I'm just curious. Thanks for explaining your thought process to me :).
Comment 54 David Cook 2019-02-19 04:07:03 UTC
Does this affect the export of biblios/items for Zebra indexing? 

I'm watching a rebuild_zebra.pl crawl along exporting the bib records, and thinking that surely it should be faster than this.
Comment 55 Ere Maijala 2019-02-19 07:18:17 UTC
This should also provide a nice performance improvement for exports for indexing.

MySQL is quick to prepare statements, which is why we can get away with this. Other databases may have different behavior, and there are certainly slower ones in that regard.
Comment 56 David Cook 2019-02-19 23:26:56 UTC
(In reply to Ere Maijala from comment #55)
> This should also provide a nice performance improvement for exports for
> indexing.
> 

Excellent.

> MySQL is quick to prepare statements, which is why we can get away with this.
> Other databases may have different behavior, and there are certainly slower
> ones in that regard.

MySQL also makes some really bad query plans. Maybe that's how it achieves its speed. For Koha's user-generated SQL Reports I've actually resorted to using "index hints" lately to get efficient query plans. It's a MySQLism but can make a world of difference as well. 
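
[Editor's note: for readers unfamiliar with index hints, a one-line hedged illustration — FORCE INDEX is the MySQLism; the index name below is hypothetical, not a real Koha index:]

    my $query = 'SELECT * FROM items FORCE INDEX (idx_items_biblionumber)'
              . ' WHERE biblionumber = ?';    # idx_items_biblionumber is made up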

In any case, I think any improvement is a well worthwhile improvement :D. So thanks for doing this one!
Comment 57 Nick Clemens 2019-02-27 13:29:58 UTC
Sorry, rebase needed here
Comment 58 Ere Maijala 2019-02-27 14:14:36 UTC
Created attachment 85776 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 59 Ere Maijala 2019-02-27 14:14:40 UTC
Created attachment 85777 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 60 Ere Maijala 2019-02-27 14:14:44 UTC
Created attachment 85778 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 61 Ere Maijala 2019-02-27 14:14:47 UTC
Created attachment 85779 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test:

YAML Error: Stream does not end with newline character

Test plan:
prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 62 Ere Maijala 2019-02-27 14:14:51 UTC
Created attachment 85780 [details] [review]
Bug 20664: (follow-up) Fix QA whitespace errors

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 63 Ere Maijala 2019-02-27 14:14:55 UTC
Created attachment 85781 [details] [review]
Bug 20664: (follow-up) Fix tests on rebase
Comment 64 Jonathan Druart 2019-02-27 14:21:51 UTC
I am not convinced by the changes made to GetMarcFromKohaField. I would not cache anything here as GetMarcSubfieldStructure already caches it.

In my test it does not bring any performance boost.

I know we need performance improvements, but if they go against consistency and code readability/maintainability, I think we are not going in the right direction:
=> Use of raw SQL statements
=> Duplication of code (when we are trying to centralize it) - item type calculation

Also it seems that tests are missing (`git grep UseHidingRulesWithBorrowerCategory t` does not return anything).
Comment 65 Ere Maijala 2019-02-27 15:04:19 UTC
Caching in GetMarcFromKohaField is indeed useless; I'll remove it. The missing test is also something I need to fix. I can also make the missing-itype check use Koha::Item->effective_itemtype to avoid duplication, since it shouldn't be a common situation. What I don't really like is using a known slow method for fetching the data from the database. I can change it to use Koha::Items->search, since the changes still improve performance, but I'm not convinced that's the way it should be.
Comment 66 Jonathan Druart 2019-02-27 15:32:02 UTC
(In reply to Ere Maijala from comment #65)
> Caching in GetMarcFromKohaField is indeed useless; I'll remove it. The
> missing test is also something I need to fix. I can also make the
> missing-itype check use Koha::Item->effective_itemtype to avoid duplication,
> since it shouldn't be a common situation. What I don't really like is using
> a known slow method for fetching the data from the database. I can change it
> to use Koha::Items->search, since the changes still improve performance, but
> I'm not convinced that's the way it should be.

If you are calling Koha::Items->find( $item->{itemnumber} )->effective_itemtype, my guess is that the Koha::Items->search will not have a big impact.
Comment 67 Ere Maijala 2019-02-28 12:17:59 UTC
Yes, but originally all the item type checking could be avoided unless item-level_itypes was false or the item didn't have an itype.
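
[Editor's note: a rough sketch of that fast path; the surrounding variables are assumed, and only effective_itemtype is the real Koha::Item method mentioned above:]

    my $itype = C4::Context->preference('item-level_itypes') ? $item->{itype} : undef;
    if ( !defined $itype ) {
        # slow path, expected to be rare
        $itype = Koha::Items->find( $item->{itemnumber} )->effective_itemtype;
    }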
Comment 68 Ere Maijala 2019-03-01 14:17:33 UTC
Created attachment 85898 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 69 Ere Maijala 2019-03-01 14:17:38 UTC
Created attachment 85899 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 70 Ere Maijala 2019-03-01 14:17:42 UTC
Created attachment 85900 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 71 Ere Maijala 2019-03-01 14:17:47 UTC
Created attachment 85901 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test:

YAML Error: Stream does not end with newline character

Test plan:
prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 72 Ere Maijala 2019-03-01 14:17:51 UTC
Created attachment 85902 [details] [review]
Bug 20664: (follow-up) Fix QA whitespace errors

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 73 Ere Maijala 2019-03-01 14:17:56 UTC
Created attachment 85903 [details] [review]
Bug 20664: (follow-up) Fix tests on rebase
Comment 74 Ere Maijala 2019-03-01 14:18:00 UTC
Created attachment 85904 [details] [review]
Bug 20664: (follow-up) Switch to Koha objects for retrieving items
Comment 75 Ere Maijala 2019-03-01 14:18:04 UTC
Created attachment 85905 [details] [review]
Bug 20664: (follow-up) Add tests for UseHidingRulesWithBorrowerCategory
Comment 76 Josef Moravec 2019-03-04 15:07:52 UTC
Created attachment 86002 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Optimizes embedding of item data in MARC and fixes several bottlenecks encountered while profiling OAI-PMH and exporting of records. There are two ways this is accomplished:

1.) Create optimized method for fetching item fields for MARC embedding.
2.) Use the cache service more and where repeated calls are made.

Test plan:

1.) Before applying the patch, time an export_records.pl run for a set of biblios that also have items. Run it a couple of times to account for initial slowness and possible fluctuations. For example:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename unoptimized.xml

2.) Apply the patch.

3.) Time the export process again with a different output file:

time misc/export_records.pl --record-type bibs --starting_biblionumber 960000 --ending_biblionumber 965000 --format xml --filename optimized.xml

4.) Verify that the optimized process is faster.

5.) Compare the resulting export files to make sure they're identical:

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 77 Josef Moravec 2019-03-04 15:07:56 UTC
Created attachment 86003 [details] [review]
Bug 20664: Add unit tests for GetMarcItem

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 78 Josef Moravec 2019-03-04 15:08:00 UTC
Created attachment 86004 [details] [review]
Bug 20664: Unit tests for GetMarcItemFields

To test:
prove -v t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 79 Josef Moravec 2019-03-04 15:08:04 UTC
Created attachment 86005 [details] [review]
Bug 20664: (follow-up) Fix test for GetMarcItemFields

Without this patch I got this error running the test:

YAML Error: Stream does not end with newline character

Test plan:
prove t/db_dependent/Items.t

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 80 Josef Moravec 2019-03-04 15:08:09 UTC
Created attachment 86006 [details] [review]
Bug 20664: (follow-up) Fix QA whitespace errors

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 81 Josef Moravec 2019-03-04 15:08:12 UTC
Created attachment 86007 [details] [review]
Bug 20664: (follow-up) Fix tests on rebase

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 82 Josef Moravec 2019-03-04 15:08:16 UTC
Created attachment 86008 [details] [review]
Bug 20664: (follow-up) Switch to Koha objects for retrieving items

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 83 Josef Moravec 2019-03-04 15:08:20 UTC
Created attachment 86009 [details] [review]
Bug 20664: (follow-up) Add tests for UseHidingRulesWithBorrowerCategory

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 84 Marcel de Rooy 2019-03-29 09:01:53 UTC
+    # This is so much faster than using Koha::Items->search that it makes sense even if it's ugly.
+    my $query = 'SELECT * FROM items WHERE biblionumber = ?';

Patch still applies.
If I am reading through the comments above, the main point of discussion is now: do we want to return to raw SQL in the above lines? It is rather obvious that this is faster than Koha::Object/DBIx. But we made a choice for DBIx and are still wrestling to implement it across the codebase. What would be the decisive reason for making an exception here, and would it set a precedent for doing similar things elsewhere? Also note that although the test plan refers to verifying that things are faster, I do not see any benchmark figures on the report.

Moving to discussion and sending mail to QA.
Comment 85 David Cook 2019-03-31 23:54:42 UTC
Comment on attachment 86002 [details] [review]
Bug 20664: Optimize retrieval of biblio and item data

Review of attachment 86002 [details] [review]:
-----------------------------------------------------------------

::: C4/Items.pm
@@ +1357,5 @@
> +    my $item_level_itype = C4::Context->preference('item-level_itypes');
> +    # This is so much faster than using Koha::Items->search that it makes sense even if it's ugly.
> +    my $query = 'SELECT * FROM items WHERE biblionumber = ?';
> +    if (@$itemnumbers) {
> +        $query .= ' AND itemnumber IN (' . join(',', @$itemnumbers) . ')';

This should be adding ? placeholders and binding the itemnumbers before executing. 

While it would probably be rare, a malformed record could cause SQL errors here.
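
[Editor's note: a hedged sketch of the suggested fix, binding one placeholder per itemnumber instead of interpolating; variable names are assumed from the quoted patch:]

    my @binds = ($biblionumber);
    my $query = 'SELECT * FROM items WHERE biblionumber = ?';
    if (@$itemnumbers) {
        $query .= ' AND itemnumber IN (' . join( ',', ('?') x @$itemnumbers ) . ')';
        push @binds, @$itemnumbers;
    }
    my $sth = $dbh->prepare($query);
    $sth->execute(@binds);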
Comment 86 David Cook 2019-04-01 00:00:50 UTC
(In reply to Marcel de Rooy from comment #84)
> +    # This is so much faster than using Koha::Items->search that it makes
> sense even if it's ugly.
> +    my $query = 'SELECT * FROM items WHERE biblionumber = ?';
> 
> Patch still applies.
> If I am reading through the comments above, the main point of discussion is
> now: do we want to return to raw SQL in the above lines? It is rather
> obvious that this is faster than Koha::Object/DBIx. But we made a choice for
> DBIx and are still wrestling to implement it across the codebase. What would
> be the decisive reason for making an exception here, and would it set a
> precedent for doing similar things elsewhere? Also note that although the
> test plan refers to verifying that things are faster, I do not see any
> benchmark figures on the report.
> 
> Moving to discussion and sending mail to QA.

I'd say the decisive reason would be that the low performance of DBIx::Class adds up when doing batch operations. While low performance and high convenience might be tolerable for many single requests, if you're dealing with high volumes of data, it's a nightmare. Even fractions of a second per record can add up to unacceptably slow amounts of time.

I've used raw SQL at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662#c221 for the same reason. I wish I had included benchmarks on that bug, but with the raw SQL and re-using the statement handle, I was able to process much higher volumes. I think it was like 100 records per second instead of 4 records per second.
Comment 87 Ere Maijala 2019-04-01 08:47:52 UTC
David, you're right about the placeholders for itemnumbers, though the change will only be necessary if the code is actually used instead of Koha objects.

About benchmarks, I find them difficult since everyone's mileage will vary. I can fabricate a worst-case benchmark such as exporting 1000 biblios that each have 1000 items, but someone else would have a hard time getting the impressive results I got. In many cases a simple situation with one item per biblio might not show a meaningful change at all.
Comment 88 David Cook 2019-04-01 23:44:24 UTC
Maybe this would be a compromise for Ere's situation:

https://metacpan.org/pod/distribution/DBIx-Class/lib/DBIx/Class/Manual/Cookbook.pod#Arbitrary-SQL-through-a-custom-ResultSource

It wouldn't work for my scenario as I'm doing higher performance inserts, but this might be a nice way of adding arbitrary SQL while maintaining use of the DBIx::Class framework?

(I was inspired by Class::DBI. I work on a legacy project that uses Class::DBI and it has some functionality for adding arbitrary SQL to the ORM: https://metacpan.org/pod/Class::DBI#Ima::DBI-queries)

There's also some interesting discussion at https://www.perlmonks.org/?node_id=700283. 

I wonder if a person could define a custom search through https://metacpan.org/pod/DBIx::Class::ResultSet#ATTRIBUTES to achieve the performance that Ere wants.

Ere, you're not re-using statement handles, right? So are you getting a performance improvement basically by reducing the number of SQL queries being made? If so, you might be able to avoid the arbitrary SQL by just making a more detailed search as noted in the attributes link above.

Worth thinking about maybe?
Comment 89 Martin Renvoize 2019-04-02 15:24:40 UTC
I'd be interested to see exactly what Koha::Object search call you were making for comparison...

I've always found the SQL::Abstract query compilation really pretty quick.. it's the result inflation into DBIx::Class (and then Koha::Object) objects that takes time.. This can be entirely skipped with DBIx::Class::ResultClass::HashRefInflator and friends.. that's the approach I would take personally I think.
Comment 90 Martin Renvoize 2019-04-02 15:26:18 UTC
Caveat.. if you're wanting related (JOINed) data you'll want to call 'prefetch' in the actual search call too...

I'm not entirely sure how Koha::Objects plays games here as it just adds a further layer of complexity on top of DBIx::Class.. but I've certainly seen really solid results when I've used ::HashRefInflator in other projects.
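
[Editor's note: a minimal hedged sketch of that approach; the resultset and relation names are assumed from Koha's schema, not verified against this patch:]

    my $rs = Koha::Database->new->schema->resultset('Item')->search(
        { 'me.biblionumber' => $biblionumber },
        {
            result_class => 'DBIx::Class::ResultClass::HashRefInflator',
            prefetch     => 'biblionumber',    # assumed Item -> Biblio relation
        }
    );
    while ( my $item = $rs->next ) {
        # $item is a plain hashref; no DBIx::Class row objects are inflated
    }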
Comment 91 Ere Maijala 2019-10-22 12:54:35 UTC
I won't pursue this further. There's nice progress on bug 23793, which makes embedding item data much cleaner.