Bug 23624 - Count rows in report without (potentially) consuming all memory
Summary: Count rows in report without (potentially) consuming all memory
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Reports
Version: Main
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: Paul Hoffman
QA Contact: Tomás Cohen Arazi
URL:
Keywords:
Depends on:
Blocks: 23626 23982
Reported: 2019-09-16 18:58 UTC by Paul Hoffman
Modified: 2021-06-14 21:28 UTC
CC List: 7 users

See Also:
Change sponsored?: Sponsored
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in: 19.11.00, 19.05.05


Attachments
Count total number of rows 1,000 at a time (557 bytes, patch)
2019-09-16 18:58 UTC, Paul Hoffman
Bug 23624 - Count rows in report without (potentially) consuming all memory (2.01 KB, patch)
2019-09-17 17:06 UTC, Kyle M Hall
Bug 23624: Count rows in report without (potentially) consuming all memory (2.05 KB, patch)
2019-09-19 14:23 UTC, Tomás Cohen Arazi
Bug 23624: Unit tests (1.45 KB, patch)
2019-09-19 14:23 UTC, Tomás Cohen Arazi
Bug 23624: (QA follow-up) Optimize even more (1.58 KB, patch)
2019-09-19 14:23 UTC, Tomás Cohen Arazi
Bug 23624: Count rows in report without (potentially) consuming all memory (2.09 KB, patch)
2019-09-19 18:45 UTC, Liz Rea
Bug 23624: Unit tests (1.50 KB, patch)
2019-09-19 18:46 UTC, Liz Rea
Bug 23624: (QA follow-up) Optimize even more (1.62 KB, patch)
2019-09-19 18:46 UTC, Liz Rea
Bug 23624: (QA follow-up) Don't fetch the count unless the query was successful (897 bytes, patch)
2019-09-20 11:18 UTC, Kyle M Hall
Bug 23624: (QA follow-up) Don't fetch the count unless the query was successful (925 bytes, patch)
2019-09-20 19:22 UTC, Liz Rea
Bug 23624: Count rows in report without (potentially) consuming all memory (2.14 KB, patch)
2019-09-25 13:45 UTC, Tomás Cohen Arazi
Bug 23624: Unit tests (1.55 KB, patch)
2019-09-25 13:45 UTC, Tomás Cohen Arazi
Bug 23624: (QA follow-up) Optimize even more (1.68 KB, patch)
2019-09-25 13:45 UTC, Tomás Cohen Arazi
Bug 23624: (QA follow-up) Don't fetch the count unless the query was successful (980 bytes, patch)
2019-09-25 13:45 UTC, Tomás Cohen Arazi
Bug 23624: (QA follow-up) Test error cases (1.65 KB, patch)
2019-09-25 13:45 UTC, Tomás Cohen Arazi

Description Paul Hoffman 2019-09-16 18:58:58 UTC
Created attachment 92841 [details] [review]
Count total number of rows 1,000 at a time

C4::Reports::Guided::nb_rows (called by get_prepped_report in reports/guided_reports.pl) uses DBI::fetchall_arrayref to retrieve all rows at once; counts them; and then discards the rows and returns the count.  This has the potential, if the number of rows is very large, to exhaust all available memory.

(Other code in guided_reports.pl has the same potential effect, but because the solution to that is much less straightforward it will be addressed in a separate bug report.)

This patch uses the second ($max_rows) parameter to DBI::fetchall_arrayref to retrieve a smaller number (1,000) of rows at a time, looping until all results have been retrieved.  This will only use as much memory as the maximum amount used by a single call to DBI::fetchall_arrayref.
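
For illustration, here is a minimal sketch of the chunked-counting approach described above; it is not the actual patch, and the connection details and example query are placeholders only.

  use strict;
  use warnings;
  use DBI;

  # Placeholder connection and query for illustration; in Koha the handle
  # comes from C4::Context->dbh and $sql is the report's SQL.
  my $dbh = DBI->connect( 'dbi:mysql:database=koha', 'koha', 'password' )
      or die $DBI::errstr;
  my $sql = 'SELECT * FROM borrowers';

  my $sth = $dbh->prepare($sql);
  $sth->execute();

  my $max_rows = 1000;    # rows fetched per chunk
  my $total    = 0;

  # fetchall_arrayref(undef, $max_rows) returns at most $max_rows rows per
  # call, so memory use is bounded by one chunk rather than the full result.
  while ( my $rows = $sth->fetchall_arrayref( undef, $max_rows ) ) {
      $total += scalar @$rows;
      last if @$rows < $max_rows;    # a short chunk means no rows are left
  }

  print "$total rows\n";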
Comment 1 Paul Hoffman 2019-09-17 15:00:05 UTC
I've changed the priority and severity on this bug report, along with #23626, to P2/major, since the problems they address can make Koha unusable.
Comment 2 Kyle M Hall 2019-09-17 17:06:44 UTC
Created attachment 92903 [details] [review]
Bug 23624 - Count rows in report without (potentially) consuming all memory

C4::Reports::Guided::nb_rows (called by get_prepped_report in reports/guided_reports.pl) uses DBI::fetchall_arrayref to retrieve all rows at once; counts them; and then discards the rows and returns the count.  This has the potential, if the number of rows is very large, to exhaust all available memory.

(Other code in guided_reports.pl has the same potential effect, but because the solution to that is much less straightforward it will be addressed in a separate bug report.)

This patch uses the second ($max_rows) parameter to DBI::fetchall_arrayref to retrieve a smaller number (1,000) of rows at a time, looping until all results have been retrieved.  This will only use as much memory as the maximum amount used by a single call to DBI::fetchall_arrayref.

Test Plan:
1) Create a report that will generate a huge number of results
2) Run the report, watch your memory usage spike
3) Apply this patch
4) Restart all the things!
5) Run the report again, note your memory usage is much lower

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Comment 3 Kyle M Hall 2019-09-17 17:07:46 UTC
QA thoughts: 1000 per fetch feels arbitrary. Should that number be controllable via a config entry or syspref?
Comment 4 Paul Hoffman 2019-09-17 17:18:54 UTC
It could be, but the number seems more like an implementation detail than a preference or config option.  And it seems reasonable -- a million rows, which without this patch would probably kill any Koha instance, is still only 1,000 iterations of the loop.

Most importantly, though, I wouldn't want a debate over the number to hold up the patch -- we can always change it later.  (I certainly wouldn't object to some other number.)
Comment 5 Tomás Cohen Arazi 2019-09-19 14:23:52 UTC
Created attachment 92968 [details] [review]
Bug 23624: Count rows in report without (potentially) consuming all memory

C4::Reports::Guided::nb_rows (called by get_prepped_report in reports/guided_reports.pl) uses DBI::fetchall_arrayref to retrieve all rows at once; counts them; and then discards the rows and returns the count.  This has the potential, if the number of rows is very large, to exhaust all available memory.

(Other code in guided_reports.pl has the same potential effect, but because the solution to that is much less straightforward it will be addressed in a separate bug report.)

This patch uses the second ($max_rows) parameter to DBI::fetchall_arrayref to retrieve a smaller number (1,000) of rows at a time, looping until all results have been retrieved.  This will only use as much memory as the maximum amount used by a single call to DBI::fetchall_arrayref.

Test Plan:
1) Create a report that will generate a huge number of results
2) Run the report, watch your memory usage spike
3) Apply this patch
4) Restart all the things!
5) Run the report again, note your memory usage is much lower

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 6 Tomás Cohen Arazi 2019-09-19 14:23:56 UTC
Created attachment 92969 [details] [review]
Bug 23624: Unit tests

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 7 Tomás Cohen Arazi 2019-09-19 14:23:59 UTC
Created attachment 92970 [details] [review]
Bug 23624: (QA follow-up) Optimize even more

This patch removes the memory footprint of counting the results by
leveraging the DB to count the rows.

To test:
- Without this patch, run:
  $ kshell
 k$ prove t/db_dependent/Reports/Guided.t
=> SUCCESS: Tests pass
- Apply this patch
- Run:
 k$ prove t/db_dependent/Reports/Guided.t
=> SUCCESS: Tests still pass!

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 8 Paul Hoffman 2019-09-19 15:13:59 UTC
Thanks, that's the perfect solution.  I had contemplated (and rejected) munging the SQL to get a count but never thought of using a subquery.
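
A minimal sketch of the subquery-based counting discussed above, assuming $dbh is a connected DBI handle and $sql is the report query; the derived-table alias "counted" is an illustrative placeholder, not the name used in the actual follow-up.

  # Sketch only; assumes $dbh (connected DBI handle) and $sql (report SQL).
  # Wrapping the report in SELECT COUNT(*) lets the database do the counting,
  # so Perl never holds the result rows at all.
  sub count_report_rows {
      my ( $dbh, $sql ) = @_;

      # "counted" is an illustrative alias, not the one the patch uses.
      my $count_sql = "SELECT COUNT(*) FROM ( $sql ) AS counted";
      my ($count) = $dbh->selectrow_array($count_sql);

      return $count;
  }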
Comment 9 Liz Rea 2019-09-19 18:45:59 UTC
Created attachment 92974 [details] [review]
Bug 23624: Count rows in report without (potentially) consuming all memory

C4::Reports::Guided::nb_rows (called by get_prepped_report in reports/guided_reports.pl) uses DBI::fetchall_arrayref to retrieve all rows at once; counts them; and then discards the rows and returns the count.  This has the potential, if the number of rows is very large, to exhaust all available memory.

(Other code in guided_reports.pl has the same potential effect, but because the solution to that is much less straightforward it will be addressed in a separate bug report.)

This patch uses the second ($max_rows) parameter to DBI::fetchall_arrayref to retrieve a smaller number (1,000) of rows at a time, looping until all results have been retrieved.  This will only use as much memory as the maximum amount used by a single call to DBI::fetchall_arrayref.

Test Plan:
1) Create a report that will generate a huge number of results
2) Run the report, watch your memory usage spike
3) Apply this patch
4) Restart all the things!
5) Run the report again, note your memory usage is much lower

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Comment 10 Liz Rea 2019-09-19 18:46:13 UTC
Created attachment 92975 [details] [review]
Bug 23624: Unit tests

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Comment 11 Liz Rea 2019-09-19 18:46:17 UTC
Created attachment 92976 [details] [review]
Bug 23624: (QA follow-up) Optimize even more

This patch removes the memory footprint of counting the results by
leveraging the DB to count the rows.

To test:
- Without this patch, run:
  $ kshell
 k$ prove t/db_dependent/Reports/Guided.t
=> SUCCESS: Tests pass
- Apply this patch
- Run:
 k$ prove t/db_dependent/Reports/Guided.t
=> SUCCESS: Tests still pass!

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Comment 12 Kyle M Hall 2019-09-20 10:40:44 UTC
If the report being run has a syntax error in it, Koha now throws an error:

Can't use an undefined value as an ARRAY reference at /kohadevbox/koha/C4/Reports/Guided.pm line 432
Comment 13 Kyle M Hall 2019-09-20 11:18:41 UTC
Created attachment 92995 [details] [review]
Bug 23624: (QA follow-up) Don't fetch the count unless the query was successful
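
A hedged sketch of the kind of guard this follow-up describes; the actual patch may be structured differently, and the alias is again a placeholder. It simply skips the count query when the report SQL fails to execute.

  # Illustrative guard only; the actual follow-up may differ. With RaiseError
  # off, a broken report query makes prepare()/execute() return false, so we
  # only ask the database for a count when the report query actually ran.
  my $sth = $dbh->prepare($sql);
  my $executed = $sth && $sth->execute();

  my $count = 0;
  if ($executed) {
      ($count) = $dbh->selectrow_array(
          "SELECT COUNT(*) FROM ( $sql ) AS counted"    # placeholder alias
      );
  }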
Comment 14 Liz Rea 2019-09-20 19:22:50 UTC
Created attachment 93037 [details] [review]
Bug 23624: (QA follow-up) Don't fetch the count unless the query was successful

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Comment 15 Paul Hoffman 2019-09-24 13:22:01 UTC
Do *I* need to sign off on this?  I'm woefully ignorant of Koha community dev practices.
Comment 16 Owen Leonard 2019-09-24 13:28:57 UTC
(In reply to Paul Hoffman from comment #15)
> Do *I* need to sign off on this?  I'm woefully ignorant of Koha community
> dev practices.

One sign-off is enough to move it forward in the process, so yours is not required. However, if you *wanted* to test and sign off, it would be welcomed.
Comment 17 Tomás Cohen Arazi 2019-09-25 13:45:00 UTC
Created attachment 93153 [details] [review]
Bug 23624: Count rows in report without (potentially) consuming all memory

C4::Reports::Guided::nb_rows (called by get_prepped_report in reports/guided_reports.pl) uses DBI::fetchall_arrayref to retrieve all rows at once; counts them; and then discards the rows and returns the count.  This has the potential, if the number of rows is very large, to exhaust all available memory.

(Other code in guided_reports.pl has the same potential effect, but because the solution to that is much less straightforward it will be addressed in a separate bug report.)

This patch uses the second ($max_rows) parameter to DBI::fetchall_arrayref to retrieve a smaller number (1,000) of rows at a time, looping until all results have been retrieved.  This will only use as much memory as the maximum amount used by a single call to DBI::fetchall_arrayref.

Test Plan:
1) Create a report that will generate a huge number of results
2) Run the report, watch your memory usage spike
3) Apply this patch
4) Restart all the things!
5) Run the report again, note your memory usage is much lower

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 18 Tomás Cohen Arazi 2019-09-25 13:45:05 UTC
Created attachment 93154 [details] [review]
Bug 23624: Unit tests

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 19 Tomás Cohen Arazi 2019-09-25 13:45:09 UTC
Created attachment 93155 [details] [review]
Bug 23624: (QA follow-up) Optimize even more

This patch removes the memory footprint of counting the results by
leveraging the DB to count the rows.

To test:
- Without this patch, run:
  $ kshell
 k$ prove t/db_dependent/Reports/Guided.t
=> SUCCESS: Tests pass
- Apply this patch
- Run:
 k$ prove t/db_dependent/Reports/Guided.t
=> SUCCESS: Tests still pass!

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 20 Tomás Cohen Arazi 2019-09-25 13:45:13 UTC
Created attachment 93156 [details] [review]
Bug 23624: (QA follow-up) Don't fetch the count unless the query was successful

Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 21 Tomás Cohen Arazi 2019-09-25 13:45:18 UTC
Created attachment 93157 [details] [review]
Bug 23624: (QA follow-up) Test error cases

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 22 Marcel de Rooy 2019-09-27 08:57:28 UTC
I am no fan of the derived table name xxx.
Could you use something like nb_rows_xxx? Or something random, File::Temp, etc.?
Comment 23 Martin Renvoize 2019-09-30 14:46:39 UTC
Nice work!

Pushed to master for 19.11.00
Comment 24 Martin Renvoize 2019-09-30 14:57:32 UTC
Thanks, Paul, and congratulations on your first patch accepted into the Koha codebase.

I've also taken the liberty of giving full sponsorship attribution in a follow-up, commit-message-only patch. Thanks:

    Sponsored-by: Higher Education Libraries of Massachusetts
    Sponsored-by: Fenway Libraries Online
Comment 25 Fridolin Somers 2019-10-08 07:26:30 UTC
Pushed to 19.05.x for 19.05.05
Comment 26 Lucas Gass 2019-10-17 22:33:32 UTC
Backported to 18.11.x for 18.11.11
Comment 27 Martin Renvoize 2019-11-26 11:39:48 UTC
Moved the sponsorship attribution into a git follow-up commit, ensuring it appears in the release notes and on the about page.