Bug 23685 - Exporting report may consume unlimited memory
Summary: Exporting report may consume unlimited memory
Status: Signed Off
Alias: None
Product: Koha
Classification: Unclassified
Component: Reports
Version: Main
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: Aleisha Amohia
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-09-26 18:31 UTC by Paul Hoffman
Modified: 2024-07-07 06:25 UTC
CC: 5 users

See Also:
Change sponsored?: Sponsored
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 23685: Don't store ODS content in a variable to save memory (1.65 KB, patch)
2024-06-26 02:48 UTC, Aleisha Amohia
Bug 23685: Exclude guided_reports.pl from plack (1.54 KB, patch)
2024-07-06 00:44 UTC, Aleisha Amohia
Bug 23685: Exclude guided_reports.pl from plack (1.58 KB, patch)
2024-07-07 06:25 UTC, David Nind

Description Paul Hoffman 2019-09-26 18:31:17 UTC
In guided_reports.pl, when exporting report results ($phase eq 'Export') all rows of data are fetched and then converted to the desired format.  This may consume an unlimited amount of memory; when report results are particularly large, it may consume all available memory, leading to timed-out HTTP requests, crashes, and potentially data loss.  (We have experienced Zebra index corruption as a result.)

In master, reports/guided_reports.pl supports three export types -- tab-delimited, CSV, or *.ods.  In each case, all data are loaded into memory and held there before any output is produced:

   891      $sql = get_prepped_report( $sql, \@param_names, \@sql_params );
   892          my ($sth, $q_errors) = execute_query($sql);
   ...
   895          if ($format eq 'tab') {
   896              $type = 'application/octet-stream';
   897              $content .= join("\t", header_cell_values($sth)) . "\n";
   898              $content = Encode::decode('UTF-8', $content);
   899              while (my $row = $sth->fetchrow_arrayref()) {
   900                  $content .= join("\t", @$row) . "\n";
   901              }
   902          } else {
   903              my $delimiter = C4::Context->preference('delimiter') || ',';
   904              if ( $format eq 'csv' ) {
   ...
   914                  while (my $row = $sth->fetchrow_arrayref()) {
   915                      if ($csv->combine(@$row)) {
   916                          $content .= $csv->string() . "\n";
   917                      } else {
   918                          push @$q_errors, { combine => $csv->error_diag() } ;
   919                      }
   920                  }
   921              }
   922              elsif ( $format eq 'ods' ) {
   ...
   932                  # Other line in Unicode
   933                  my $sql_rows = $sth->fetchall_arrayref();
   934                  foreach my $sql_row ( @$sql_rows ) {
   935                      my @content_row;
   936                      foreach my $sql_cell ( @$sql_row ) {
   937                          push @content_row, Encode::encode( 'UTF8', $sql_cell );
   938                      }
   939                      push @$ods_content, \@content_row;
   940                  }
   941
   942                  # Process
   943                  generate_ods($ods_filepath, $ods_content);
   944
   945                  # Output
   946                  binmode(STDOUT);
   947                  open $ods_fh, '<', $ods_filepath;
   948                  $content .= $_ while <$ods_fh>;
   949                  unlink $ods_filepath;
   950              }

The *.ods case is particularly problematic, because before any data is sent back to the user's browser, *three* copies of the full results are sitting in memory simultaneously -- @$sql_rows, @$ods_content, and $content.
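
For the tab-delimited case, the output could be streamed instead of accumulated. A rough sketch of what that might look like (illustrative only; the header call and variable names here are assumptions, not the script's actual code):

    # Print each row as soon as it is fetched instead of appending to $content.
    # Assumes $input is the CGI object and $sth the executed statement handle.
    print $input->header(
        -type       => 'application/octet-stream',
        -attachment => 'reportresults.tab',
    );
    print Encode::encode( 'UTF-8', join( "\t", header_cell_values($sth) ) . "\n" );
    while ( my $row = $sth->fetchrow_arrayref() ) {
        print Encode::encode( 'UTF-8', join( "\t", @$row ) . "\n" );
    }

The ODS case would still need the whole file built before anything could be sent, but at least the tab and CSV formats could be emitted row by row.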
Comment 1 Katrin Fischer 2019-09-27 17:39:58 UTC
Hi Paul, can you share why you closed INVALID?
Comment 2 Paul Hoffman 2019-09-27 17:44:16 UTC
(In reply to Katrin Fischer from comment #1)
> Hi Paul, can you share why you closed INVALID?

I tweaked bug #23626 (memory consumption related to the charting feature) to encompass this problem, because they involve the same files and the same underlying problem -- running a report and then doing something with the results (charting or exporting) potentially consumes all available memory.

Maybe I should post a comment there with the details from this ticket?
Comment 3 Katrin Fischer 2019-09-28 06:25:27 UTC
(In reply to Paul Hoffman from comment #2)
> (In reply to Katrin Fischer from comment #1)
> > Hi Paul, can you share why you closed INVALID?
> 
> I tweaked bug #23626 (memory consumption related to the charting feature) to
> encompass this problem, because they involve the same files and the same
> underlying problem -- running a report and then doing something with the
> results (charting or exporting) potentially consumes all available memory.
> 
> Maybe I should post a comment there with the details from this ticket?

You could use 'mark as duplicate' or choose 'RESOLVED MOVED' with a comment - that would link the bugs and make this clearer to people researching bugs later. Also, if you leave out the #, Bugzilla will show a link: bug 23626
Comment 4 Paul Hoffman 2019-10-04 14:03:40 UTC
I'm reopening this as suggested by Nick Clemens in bug 23626.
Comment 5 Didier Gautheron 2020-05-26 11:16:13 UTC
What was the rationale for using a big string rather than writing directly to STDOUT or a temporary file?

Are these assumptions still valid?
Comment 6 David Cook 2021-02-22 23:42:14 UTC
(In reply to Didier Gautheron from comment #5)
> What was the rationale for using a big string rather than writing directly
> to STDOUT or a temporary file?
> 
> Are these assumptions still valid?

It looks like it used to print to STDOUT, but that was changed in Bug 11679.

After reviewing the code, I'd say it was probably a desire to make the code more readable/easier to maintain. However, it does create this performance problem.

Fixing the "tab" and "csv" export should be fairly trivial, but the ODS will be harder since it's a more complex file format (ZIP compressed XML). 

I'll write another comment about that...
Comment 7 David Cook 2021-02-23 00:56:12 UTC
Wow looking at OpenOffice::OODoc... it hasn't been updated in over 10 years. It's amazing that it still works.

Excel::Writer::XLSX has an interesting little write-up on the topic of memory usage (see https://metacpan.org/pod/Excel::Writer::XLSX#SPEED-AND-MEMORY-USAGE). 

It looks like OpenOffice::OODoc uses Archive::Zip and Archive::Zip doesn't seem to be able to stream to output...

It looks like Excel::Writer::XLSX manages memory by writing worksheets out as temporary files (https://metacpan.org/pod/Excel::Writer::XLSX#set_tempdir()). 

Of course, Excel::Writer::XLSX still reads every one of those temporary files back into memory when it saves the workbook, so it would still have a memory spike.
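
The same write-up also describes a set_optimization() mode that writes each row to those temporary files as it goes, so only the current row is held in memory. A minimal sketch of that pattern (XLSX rather than ODS, and $sth is assumed to be an executed DBI statement handle), just to show the shape of it:

    # Sketch of Excel::Writer::XLSX's reduced-memory mode; not a drop-in for ODS.
    use Excel::Writer::XLSX;

    my $workbook = Excel::Writer::XLSX->new('report.xlsx');
    $workbook->set_tempdir('/tmp');    # where the per-worksheet temp files go
    $workbook->set_optimization();     # must be called before add_worksheet()
    my $worksheet = $workbook->add_worksheet();

    my $row_num = 0;
    $worksheet->write_row( $row_num++, 0, [ header_cell_values($sth) ] );
    while ( my $row = $sth->fetchrow_arrayref() ) {
        $worksheet->write_row( $row_num++, 0, $row );
    }
    $workbook->close();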
Comment 8 David Cook 2021-02-23 01:16:56 UTC
Rewriting OpenOffice::OODoc is not really an option, but that would be the most "correct" solution I imagine.

However, realistically, we could be more efficient in our current use of OpenOffice::OODoc. As Paul mentions, it makes no sense to do a $sth->fetchall_arrayref(), build @$ods_content from that, and then hold a third in-memory representation inside OpenOffice::OODoc. That's three times the memory we actually need.

We should just use something like $sth->fetchrow_hashref or $sth->fetchrow_arrayref and feed OpenOffice::OODoc row by row. 
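
Something along these lines, perhaps (very rough; the OpenOffice::OODoc calls here are from memory and would need checking against the module's docs, and expandTable() wanting the dimensions up front is an open problem):

    # Feed the spreadsheet one row at a time instead of holding $sql_rows,
    # @$ods_content and $content in memory all at once. $num_rows/$num_cols
    # are hand-waved; expandTable() wants the table dimensions up front.
    use OpenOffice::OODoc;
    use Encode;

    my $container = odfContainer( $ods_filepath, create => 'spreadsheet' );
    my $doc       = odfDocument( container => $container, part => 'content' );
    my $table     = $doc->getTable(0);
    $doc->expandTable( $table, $num_rows + 1, $num_cols );

    my $i = 0;
    while ( my $sql_row = $sth->fetchrow_arrayref() ) {
        my $row = $doc->getRow( $table, $i++ );
        my $j   = 0;
        foreach my $cell (@$sql_row) {
            $doc->cellValue( $row, $j++, Encode::encode( 'UTF-8', $cell ) );
        }
    }
    $doc->save();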

--

Note too that the ODS format will have proxy issues for large exports: guided_reports.pl writes the ZIP to disk, reads it back, and prints it to STDOUT; Plack::App::CGIBin then buffers that STDOUT in a temporary file on disk before sending the whole response at once to the Apache reverse proxy.
Comment 9 David Cook 2021-02-23 01:23:33 UTC
An alternative solution might be to write a CSV file to a temporary file and then use LibreOffice's CLI tools to convert from CSV to ODS.

Example:

soffice --convert-to ods koha.csv --headless

I haven't tried that though, so I can't speak to its performance. I seem to recall Indranil saying OpenOffice or LibreOffice had some memory usage issues for large spreadsheets. It might not be any better. Plus, it would add a large dependency to Koha for just 1 thing.
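
If anyone wants to experiment with that, a throwaway prototype might look something like this (untested; file names and Text::CSV options are just illustrative):

    # Stream the report into a temporary CSV, then let LibreOffice convert it
    # to ODS out of process. Assumes $sth is the executed statement handle.
    use File::Temp qw( tempdir );
    use Text::CSV;

    my $dir      = tempdir( CLEANUP => 1 );
    my $csv_path = "$dir/report.csv";
    my $csv      = Text::CSV->new( { binary => 1, eol => "\n" } );

    open my $fh, '>:encoding(UTF-8)', $csv_path or die $!;
    $csv->print( $fh, [ header_cell_values($sth) ] );
    while ( my $row = $sth->fetchrow_arrayref() ) {
        $csv->print( $fh, $row );
    }
    close $fh;

    # Produces $dir/report.ods alongside the CSV
    system( 'soffice', '--headless', '--convert-to', 'ods', '--outdir', $dir, $csv_path );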

I think that's all I have for ideas though.

I think the ODS format is just problematic - at least with the tools that we have to hand.
Comment 10 Liz Rea 2024-04-02 19:27:16 UTC
Just a note here to say that this is still a problem - perhaps we should only support CSV and TAB for exports?
Comment 11 Katrin Fischer 2024-04-02 19:32:14 UTC
(In reply to Liz Rea from comment #10)
> Just a note here to say that this is still a problem - perhaps we should
> only support CSV and TAB for exports?

The .ods format is the one we use most, as CSV is not recognized by Excel as UTF-8 and so umlauts end up broken. You then have to import the data separately, which is a lot of extra steps. I'd really, really like to keep it.
Comment 12 Aleisha Amohia 2024-05-09 23:43:46 UTC
Just noting that this has caused a bunch of OOMs for our libraries this week. Not sure why it's happening all of a sudden, but this is a real problem for us. Keen to collaborate on a solution!
Comment 13 Aleisha Amohia 2024-06-10 20:05:25 UTC
We've received sponsorship to try and fix the ODS export format.

Any ideas are welcome; we're working on this now.
Comment 14 David Cook 2024-06-11 00:22:01 UTC
(In reply to Aleisha Amohia from comment #13)
> We've received sponsorship to try and fix the ODS export format.
> 
> Any ideas are welcome, we're working on this now

You'll want to go with a streaming response for all formats: instead of putting everything into $content, print to STDOUT line by line to save memory. 

However, I don't know why I didn't mention it before, but even if you did a streaming response where each row was written out to STDOUT one by one, it wouldn't work as expected because Koha uses Plack::App::CGIBin. You can see full explanations in my comments on bug 8437 and bug 26791. Basically, the whole response gets buffered in a temporary file before it's sent to the Apache proxy to send back to the client. 
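
For contrast, genuine streaming needs something like the PSGI delayed-response interface below, which a CGI script run under Plack::App::CGIBin never gets to use; its STDOUT is captured and only replayed once the script finishes. (This is the bare PSGI mechanism, not anything currently wired into Koha.)

    # Bare PSGI streaming response: each chunk is handed to the server as soon
    # as it is ready instead of being buffered until the script has finished.
    # Assumes $sth is an executed DBI statement handle in scope.
    my $app = sub {
        my $env = shift;
        return sub {
            my $responder = shift;
            my $writer    = $responder->(
                [ 200, [ 'Content-Type' => 'text/plain; charset=UTF-8' ] ]
            );
            while ( my $row = $sth->fetchrow_arrayref() ) {
                $writer->write( join( "\t", @$row ) . "\n" );
            }
            $writer->close;
        };
    };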

Plus, for ODS, you'll still have to wait for generate_ods to finish building the whole file before anything can be sent. (If you want to see a possible cheeky solution for that, you can check out bug 31744. Not sure if it'll work for ODS like it works for CSV though.)

Anyway, just some thoughts...
Comment 15 Aleisha Amohia 2024-06-26 02:48:19 UTC
Created attachment 168116 [details] [review]
Bug 23685: Don't store ODS content in a variable to save memory

Just attaching this to start.

Sponsored-by: Waikato Institute of Technology
Comment 16 Aleisha Amohia 2024-06-26 02:49:15 UTC
I don't know the kind of report I need to test crashing the system when I export to ODS. Any ideas?

I've just avoided adding to a $content variable as a start and can confirm it works, but doesn't address the Plack problem that David Cook raised. Not sure the best way around that.
Comment 17 David Cook 2024-06-28 03:27:28 UTC
(In reply to Aleisha Amohia from comment #16)
> I don't know the kind of report I need to test crashing the system when I
> export to ODS. Any ideas?

Try something like this:

SELECT biblio.* FROM biblio CROSS JOIN biblio b limit 10000;

I think it should make an enormous number of rows, so you can play with the limit however you like. 

> I've just avoided adding to a $content variable as a start and can confirm
> it works, but doesn't address the Plack problem that David Cook raised. Not
> sure the best way around that.

Yeah it's a hard one. For now, the best bet is probably to avoid using Plack for this. It sucks, but it's the only practical solution I can think of.

There just isn't enough community support to move forward on a Mojolicious/Plack-based controller solution yet. I've tried a few times to progress it, but it's too much for just 1 person.
Comment 18 Aleisha Amohia 2024-07-06 00:44:10 UTC
Created attachment 168570 [details] [review]
Bug 23685: Exclude guided_reports.pl from plack

When attempting to export reports in ODS format from Koha, plack can time out.

Excluding the script from plack is a simple fix until we have a more permanent fix for this issue.

To test:
1. Create a report that will generate an enormous number of rows, such as SELECT biblio.* FROM biblio CROSS JOIN biblio b; (add a limit of 10000 or something if you like, e.g. SELECT biblio.* FROM biblio CROSS JOIN biblio b limit 10000;)
2. Run the report
3. Apply patch, copy changes to /etc/koha/apache-shared-intranet-plack.conf
4. Restart all the things
5. Download the results in ODS format
6. Confirm the export works as expected

Sponsored-by: Waikato Institute of Technology
Comment 19 David Nind 2024-07-07 06:25:28 UTC
Created attachment 168577 [details] [review]
Bug 23685: Exclude guided_reports.pl from plack

When attempting to export reports in ODS format from Koha, plack can time out.

Excluding the script from plack is a simple fix until we have a more permanent fix for this issue.

To test:
1. Create a report that will generate an enormous number of rows, such as SELECT biblio.* FROM biblio CROSS JOIN biblio b; (add a limit of 10000 or something if you like, e.g. SELECT biblio.* FROM biblio CROSS JOIN biblio b limit 10000;)
2. Run the report
3. Apply patch, copy changes to /etc/koha/apache-shared-intranet-plack.conf
4. Restart all the things
5. Download the results in ODS format
6. Confirm the export works as expected

Sponsored-by: Waikato Institute of Technology
Signed-off-by: David Nind <david@davidnind.com>