Bug 8437 - Large database backups and large exports from export.pl fail under plack
Summary: Large database backups and large exports from export.pl fail under plack
Status: In Discussion
Alias: None
Product: Koha
Classification: Unclassified
Component: Tools
Version: master
Hardware: All
OS: All
Importance: P2 normal
Assignee: Galen Charlton
QA Contact:
URL:
Keywords:
Depends on:
Blocks: 8268
Reported: 2012-07-13 08:17 UTC by Paul Poulain
Modified: 2019-11-07 16:25 UTC
CC: 15 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Who signed the patch off:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 8437: variable scoping for tools/export.pl (2.18 KB, patch)
2012-07-28 01:34 UTC, Jared Camins-Esakov
Bug 8437: variable scoping for tools/export.pl (2.20 KB, patch)
2012-08-04 11:42 UTC, Dobrica Pavlinusic
Bug 8437 - fix koha-dump configuration tar permissions (807 bytes, patch)
2012-08-04 12:35 UTC, Dobrica Pavlinusic
Bug 8437 - fix koha-dump configuration tar permissions (805 bytes, patch)
2012-08-04 12:38 UTC, Dobrica Pavlinusic

Description Paul Poulain 2012-07-13 08:17:55 UTC
I've found a problem with the patch for bug 8268 that you can see when you run Plack:
Variable "$backupdir" is not available at /home/paul/koha.dev/koha-community/tools/export.pl line 334.

=> the backup feature won't work under Plack. It's a variable-scoping problem: Plack puts the script in a sub, making main() variables inaccessible in the other subs. The problem has been solved everywhere else with an our instead of a my. The best option, which I would much prefer for this new feature, is that we don't use such an our variable at all, but define backupdir where it's needed (it's just reading a syspref, so not costly).
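
A minimal sketch of the scoping problem (get_backupdir() is a stand-in for the syspref read, not the actual Koha code): under Plack, CGI::Compile wraps the whole script in a sub, so a file-level my is really a lexical of that wrapper and does not stay shared with the named subs that use it:

    my $backupdir = get_backupdir();    # file scope; breaks under Plack

    sub download_backup {
        # under Plack, Perl warns 'Variable "$backupdir" is not available'
        # here and the sub sees undef, because the wrapper sub's lexical
        # does not stay shared with this named sub
        opendir my $dh, $backupdir or die "cannot opendir $backupdir: $!";
        ...
    }

Declaring it with our instead makes $backupdir a package variable, which every sub can see no matter how the script is wrapped.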
Comment 1 Jared Camins-Esakov 2012-07-28 01:34:53 UTC
Created attachment 11177 [details] [review]
Bug 8437: variable scoping for tools/export.pl

Variable scoping problems prevented database export from working
under Plack. This patch fixes that.
Comment 2 Paul Poulain 2012-08-01 10:33:19 UTC
Dobrica, could you sign off this one, please?
Comment 3 Dobrica Pavlinusic 2012-08-04 11:42:51 UTC Comment hidden (obsolete)
Comment 4 Dobrica Pavlinusic 2012-08-04 11:44:23 UTC
Signed off using test scenario from Bug 8268 - Koha should offer way to backup entire db
Comment 5 Dobrica Pavlinusic 2012-08-04 12:35:40 UTC Comment hidden (obsolete)
Comment 6 Dobrica Pavlinusic 2012-08-04 12:38:09 UTC
Created attachment 11355 [details] [review]
Bug 8437 - fix koha-dump configuration tar permissions

This allows backups created using the Debian package to show up
automatically in the web interface and makes the permissions of backup
files consistent.
Comment 7 Dobrica Pavlinusic 2012-08-04 13:17:38 UTC
This patch fixes the errors under Plack, but it doesn't work for large backup files.

The problem is in download_backup, which tries to print the data back to the browser in 64K chunks. Under Plack that gets buffered in memory, and with a backup of our production database it dies after a while and returns a zero-length file.

The correct way to fix this is to issue a redirect under Plack in download_backup to some Plack-handled URL, so we don't read the whole file into memory.

I don't know how to detect whether we are running under Plack, since the script is inside CGI::Compile, but we could use an environment variable to signal that we are under Plack.

That would, however, require us to have some kind of koha.psgi in master in which we can implement this.
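
Roughly something like this (a sketch only: KOHA_UNDER_PLACK and the /backups/ URL are made up for illustration, and I'm assuming the script's CGI object is $query). koha.psgi would set $ENV{KOHA_UNDER_PLACK} = 1 before building the app, and download_backup would do:

    # hypothetical: hand the download off to a Plack-served URL instead
    # of printing the file from inside the CGI script
    if ( $ENV{KOHA_UNDER_PLACK} ) {
        print $query->redirect( "/backups/" . $filename );
        exit;
    }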

I also had to remove 

&& not $filename =~ m#|#

from download_backup to make the download work at all, so I will have to remove my sign-off from this patch, sorry.
Comment 8 Jared Camins-Esakov 2012-08-04 13:23:53 UTC
(In reply to comment #7)
> This patch fixes the errors under Plack, but it doesn't work for large
> backup files.
> 
> The problem is in download_backup, which tries to print the data back to
> the browser in 64K chunks. Under Plack that gets buffered in memory, and
> with a backup of our production database it dies after a while and
> returns a zero-length file.
> 
> The correct way to fix this is to issue a redirect under Plack in
> download_backup to some Plack-handled URL, so we don't read the whole
> file into memory.
>
> I don't know how to detect whether we are running under Plack, since the
> script is inside CGI::Compile, but we could use an environment variable
> to signal that we are under Plack.
> 
> That would, however, require us to have some kind of koha.psgi in master
> in which we can implement this.

I guess the thing to do is note that this is not available under Plack?

> I also had to remove 
> 
> && not $filename =~ m#|#
> 
> from download_backup to make the download work at all, so I will have to
> remove my sign-off from this patch, sorry.

There's a signed off patch which makes that last change on bug 8268.
Comment 9 Jared Camins-Esakov 2012-08-04 14:02:28 UTC
(In reply to comment #7)
> This patch fixes the errors under Plack, but it doesn't work for large
> backup files.
> 
> The problem is in download_backup, which tries to print the data back to
> the browser in 64K chunks. Under Plack that gets buffered in memory, and
> with a backup of our production database it dies after a while and
> returns a zero-length file.
> 
> The correct way to fix this is to issue a redirect under Plack in
> download_backup to some Plack-handled URL, so we don't read the whole
> file into memory.

This can't be how exporting large blobs from a database would be handled under Plack. Do you have any idea how that would work? Maybe we can use the same technique.

> I don't know how to detect whether we are running under Plack, since the
> script is inside CGI::Compile, but we could use an environment variable
> to signal that we are under Plack.
> 
> That would, however, require us to have some kind of koha.psgi in master
> in which we can implement this.
Comment 10 Jared Camins-Esakov 2012-08-04 14:17:34 UTC
Would using this syntax (borrowed from bug 7952) fix the problem?

+ open(my $fh, '<', $filename);
+ print <$fh>;
+ close $fh;
Comment 11 Paul Poulain 2012-11-02 09:48:12 UTC
(In reply to comment #10)
> Would using this syntax (borrowed from bug 7952) fix the problem?
> 
> + open(my $fh, '<', $filename);
> + print <$fh>;
> + close $fh;

Bumping Jared's question!
Comment 12 Robin Sheat 2014-04-22 03:59:25 UTC
(In reply to Jared Camins-Esakov from comment #10)
> Would using this syntax (borrowed from bug 7952) fix the problem?
> 
> + open(my $fh, '<', $filename);
> + print <$fh>;
> + close $fh;

If we're dealing with big binary files, this may consume a lot of memory before it sees a newline. We also run the risk of mangling newlines if what we output ever differs from (or is more lenient than) what we accept. Needless to say, that is bad for binary files.
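
A binary-safe version would read fixed-size chunks with :raw instead of lines, something like this sketch (note it does not solve the Plack buffering described in comment 7; it only avoids the newline problems):

    open( my $fh, '<:raw', $filename )
        or die "Cannot open $filename: $!";
    while ( read( $fh, my $chunk, 65536 ) ) {
        print $chunk;
    }
    close $fh;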
Comment 13 Nick Clemens 2016-06-21 18:50:10 UTC
Raising severity in hopes of getting more attention here; not quite a blocker, but quite painful. Large sites are now unable to use the export tool in many cases, as it just times out eventually.

Adding the page as an exception to Plack seems to work sometimes, but it would be nice to see a solution for this one.
Comment 14 Jonathan Druart 2016-07-03 15:51:30 UTC
(In reply to Dobrica Pavlinusic from comment #7)
> The correct way to fix this is to issue a redirect under Plack in
> download_backup to some Plack-handled URL, so we don't read the whole
> file into memory.

The solution seems to be here, but I don't understand what Dobrica meant.
Comment 15 Barton Chittenden 2016-12-12 14:52:58 UTC
(In reply to Jonathan Druart from comment #14)
> (In reply to Dobrica Pavlinusic from comment #7)
> > The correct way to fix this is to issue a redirect under Plack in
> > download_backup to some Plack-handled URL, so we don't read the whole
> > file into memory.
> 
> The solution seems to be here, but I don't understand what Dobrica meant.

Dobrica,

Can you clarify and/or submit a further patch? This issue is really causing problems for large libraries.

Thanks,

--Barton
Comment 16 NancyK. 2017-04-25 23:16:26 UTC
This bug is going to cause us a really big problem.  We are doing weekly exports and have plans for more.  Please fix this soon.  
Thanks

NancyK
Comment 17 Jonathan Druart 2017-04-26 12:38:39 UTC
(In reply to NancyK. from comment #16)
> This bug is going to cause us a really big problem.  We are doing weekly
> exports and have plans for more.  Please fix this soon.  
> Thanks
> 
> NancyK

This will not be fixed soon, as nobody has a solution.
You should consider using the CLI script misc/export_records.pl for your weekly exports (with a cronjob, even).
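
For example, a weekly crontab entry on a package install might look like this (a sketch only: the exact export_records.pl options vary by Koha version, so check its --help; the path, filename, and instance name here are made up):

    # Sundays at 02:00: export records for the 'library' instance
    0 2 * * 0  koha-shell -c 'perl /usr/share/koha/bin/export_records.pl --filename=/tmp/weekly-export.mrc' library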
Comment 18 Nick Clemens 2017-04-26 12:51:36 UTC
Hi Nancy, we have also worked around this by simply excluding that page from Plack; it avoids the issue and allows exports to complete.
Comment 19 Magnus Enger 2019-01-16 12:41:47 UTC
(In reply to Nick Clemens from comment #18)
> Hi Nancy, we have also worked around this by simply excluding that page
> from Plack; it avoids the issue and allows exports to complete.

Anyone got an example of how to do that? Is it something we should do in the standard config, until someone comes up with a better solution? 

I think the second patch from Dobrica should be a separate bug.
Comment 20 Magnus Enger 2019-01-16 17:44:46 UTC
I have taken the liberty of moving the second patch from Dobrica to a dedicated bug: Bug 22143. I also signed it off. Obsoleting the patch here to avoid confusion. 

Also setting this bug to "In discussion", since there does not seem to be a clear plan for solving the main problem.
Comment 21 Barton Chittenden 2019-01-17 03:21:05 UTC
(In reply to Magnus Enger from comment #19)
> (In reply to Nick Clemens from comment #18)
> > Hi Nancy, we have also worked around this by simply excluding that page
> > from Plack; it avoids the issue and allows exports to complete.
> 
> Anyone got an example of how to do that? Is it something we should do in
> the standard config, until someone comes up with a better solution? 
> 
> I think the second patch from Dobrica should be a separate bug.

Magnus, take a look at /etc/koha/apache-shared-intranet-plack.conf; you'll find lines that look like this:

        ProxyPass "/cgi-bin/koha/offline_circ/process_koc.pl" "!"

... the "!" means "Don't run this under plack".
Comment 22 Magnus Enger 2019-01-18 08:48:45 UTC
(In reply to Barton Chittenden from comment #21)
> ... the "!" means "Don't run this under plack".

Thanks! There are already a number of pages listed here:
http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=debian/templates/apache-shared-intranet-plack.conf

Would it make sense to add export.pl until someone comes up with a proper solution? Or are there so few people downloading huge files that it is not worth it?
Comment 23 Barton Chittenden 2019-01-20 00:15:41 UTC
(In reply to Magnus Enger from comment #22)

> Would it make sense to add export.pl until someone comes up with a proper
> solution? Or are there so few people downloading huge files that it is not
> worth it?

I think it's probably worth it -- it's a one-line change, and it's easy to back out when we finally fix it. Nick, Kyle, Larry, care to weigh in?
Comment 24 Kyle M Hall 2019-01-22 14:25:01 UTC
(In reply to Barton Chittenden from comment #23)
> (In reply to Magnus Enger from comment #22)
> 
> > Would it make sense to add export.pl until someone comes up with a proper
> > solution? Or are there so few people downloading huge files that it is not
> > worth it?
> 
> I think it's probably worth it -- it's a one-line change, and it's easy to
> back out when we finally fix it. Nick, Kyle, Larry, care to weigh in?

That's up to the RM :)

Avoiding plack will 'fix' the script, but most likely cause everyone to stop looking for an actual solution. That's my worry. That being said, it's probably better to get it working in the near-term.
Comment 25 David Cook 2019-01-22 23:30:06 UTC
(In reply to Kyle M Hall from comment #24)
> Avoiding plack will 'fix' the script, but most likely cause everyone to stop
> looking for an actual solution. That's my worry. That being said, it's
> probably better to get it working in the near-term.

This is my worry as well but I didn't want to say it and seem like a naysayer.
Comment 26 Liz Rea 2019-06-13 03:11:33 UTC
What if we change the dump strategy for large databases, and use mydumper and myloader? 

https://dotlayer.com/extremely-fast-mysql-backup-restore-using-mydumpermyloader/
Comment 27 Liz Rea 2019-06-13 03:11:46 UTC
OPTIONALLY, of course.
Comment 28 David Cook 2019-06-13 08:57:43 UTC
(In reply to Liz Rea from comment #26)
> What if we change the dump strategy for large databases, and use mydumper
> and myloader?
> 
> https://dotlayer.com/extremely-fast-mysql-backup-restore-using-
> mydumpermyloader/

I like the sound of that, but I'm not sure that it's relevant to this issue, since we're just sending an already existing backup?
Comment 29 David Cook 2019-06-13 09:25:26 UTC
(In reply to Kyle M Hall from comment #24)
> Avoiding plack will 'fix' the script, but most likely cause everyone to stop
> looking for an actual solution. That's my worry. That being said, it's
> probably better to get it working in the near-term.

I'm looking at export.pl and thinking that its job should be to authenticate the user, process the user's input about what to download, and then direct the user somewhere else better suited to serving files...

This is kind of interesting, although perhaps not relevant to our hybrid CGI/PSGI way of doing things, and it's old, so it doesn't take into account that X-Sendfile now seems deprecated:
http://www.catalystframework.org/calendar/2014/5

But it does make the point that trying to serve a large file from Perl is going to be slow. 

It might be worth looking at frameworks using Plack to see how they do it. For example, send_file and _send_file in Dancer: https://metacpan.org/release/Dancer/source/lib/Dancer.pm. 

Or we could look at https://metacpan.org/release/Plack/source/lib/Plack/App/File.pm, although that looks like it uses PSGI magic to send the file too, which would be tricky in our hybrid environment.
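
For what it's worth, the "PSGI magic" is mostly that a PSGI response body may be an open filehandle, which the server then streams in chunks (and some servers can turn into a real sendfile). A minimal standalone sketch, not wired into Koha's hybrid setup (the path is made up):

    # backup.psgi -- run with: plackup backup.psgi
    my $app = sub {
        my $env  = shift;
        my $path = '/var/spool/koha/backups/backup.sql.gz';   # hypothetical
        open my $fh, '<:raw', $path
            or return [ 404, [ 'Content-Type' => 'text/plain' ], ["Not found\n"] ];
        # returning the filehandle as the body lets the PSGI server
        # stream the file instead of loading it all into memory
        return [ 200, [ 'Content-Type' => 'application/octet-stream' ], $fh ];
    };
    $app;    # a .psgi file must evaluate to the app coderef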