Bug 8437

Summary: Large database backups and large exports from export.pl fail under plack
Product: Koha Reporter: Paul Poulain <paul.poulain>
Component: Tools Assignee: Nick Clemens <nick>
Status: CLOSED FIXED QA Contact: Katrin Fischer <katrin.fischer>
Severity: normal    
Priority: P2 CC: aleisha, barton, black23, dcook, dpavlin, fridolin.somers, jonathan.druart, josef.moravec, kyle, larry, lucas, magnus, martin.renvoize, nick, nkeener, robin, victor, wizzyrea
Version: master   
Hardware: All   
OS: All   
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=13847
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=22143
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=17240
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=22481
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=22417
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=26791
Change sponsored?: --- Patch complexity: Trivial patch
Documentation contact: Documentation submission:
Text to go in the release notes:
Version(s) released in:
20.11.00, 20.05.06, 19.11.12
Bug Depends on:    
Bug Blocks: 8268    
Attachments: Bug 8437: variable scoping for tools/export.pl
Bug 8437: variable scoping for tools/export.pl
Bug 8437 - fix koha-dump configuration tar permissions
Bug 8437 - fix koha-dump configuration tar permissions
Bug 8437: Exclude export.pl from plack
Bug 8437: Exclude export.pl from plack
Bug 8437: Exclude export.pl from plack

Description Paul Poulain 2012-07-13 08:17:55 UTC
I've found a problem with the patch for bug 8268, which you can see when you run plack:
Variable "$backupdir" is not available at /home/paul/koha.dev/koha-community/tools/export.pl line 334.

=> the backup features won't work under plack (a variable scoping problem: Plack puts the script inside a sub, making variables declared in the main body inaccessible from other subs). The problem is solved everywhere by using 'our' instead of 'my'. The option I would prefer by far for this new feature is not to use such an 'our' variable at all, but to define backupdir where it's needed (it's just reading a syspref, so not costly).
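
For illustration, a minimal sketch of the scoping problem and the two workarounds; get_backupdir, list_backups and the paths are hypothetical stand-ins for the actual lookup in export.pl:

    use strict;
    use warnings;

    # Under Plack, CGI::Compile wraps the whole script in a generated sub, so a
    # script-level "my" becomes a lexical of that wrapper sub and is not reliably
    # visible from named subs defined in the same file:
    #   my $backupdir = get_backupdir();   # breaks under Plack

    # Workaround applied by the patch: a package variable is visible everywhere.
    our $backupdir = get_backupdir();

    # Option preferred above: no shared variable at all; fetch the value where it
    # is needed, since it is only a cheap configuration read.
    sub list_backups {
        my $dir = get_backupdir();
        opendir my $dh, $dir or return;
        my @backups = grep { /\.sql(\.gz)?$/ } readdir $dh;
        closedir $dh;
        return @backups;
    }

    sub get_backupdir { return $ENV{KOHA_BACKUPDIR} // '/var/spool/koha/backups' }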
Comment 1 Jared Camins-Esakov 2012-07-28 01:34:53 UTC
Created attachment 11177 [details] [review]
Bug 8437: variable scoping for tools/export.pl

Variable scoping problems prevented database export from working
under Plack. This patch fixes that.
Comment 2 Paul Poulain 2012-08-01 10:33:19 UTC
Dobrica, could you sign off this one, please?
Comment 3 Dobrica Pavlinusic 2012-08-04 11:42:51 UTC Comment hidden (obsolete)
Comment 4 Dobrica Pavlinusic 2012-08-04 11:44:23 UTC
Signed off using test scenario from Bug 8268 - Koha should offer way to backup entire db
Comment 5 Dobrica Pavlinusic 2012-08-04 12:35:40 UTC Comment hidden (obsolete)
Comment 6 Dobrica Pavlinusic 2012-08-04 12:38:09 UTC
Created attachment 11355 [details] [review]
Bug 8437 - fix koha-dump configuration tar permissions

This allows backups created using Debian package to show up
automatically in web interface and makes permissions of backup
files consistent.
Comment 7 Dobrica Pavlinusic 2012-08-04 13:17:38 UTC
This patch fixes errors under plack but doesn't work for large backup files.

The problem is in download_backup, which tries to print data back to the browser in 64K chunks. Under plack, that gets buffered in memory, and with a backup of our production database it dies after a while and returns a zero-length file.

The correct way to fix this is to have download_backup issue a redirect under plack
to some plack-handled URL, so we don't read the whole file into memory.

I don't know how to detect whether we are running under plack, since the script is inside CGI::Compile, but we could use an environment variable to signal that we are under plack.

That would, however, require us to have some kind of koha.psgi in which we can implement this in master.

I also had to remove 

&& not $filename =~ m#|#

from download_backup to make the download work at all, so I will have to remove my sign-off from this patch, sorry.
Comment 8 Jared Camins-Esakov 2012-08-04 13:23:53 UTC
(In reply to comment #7)
> This patch fixes errors under plack but doesn't work for large backup files.
> 
> The problem is in download_backup which tries to print data back to browser
> in 64K chunks. Under plack, that gets buffered in memory and with backup of
> our production database it dies after a while and returns zero length file.
> 
> Correct way to fix this is to issue redirect under plack in download_backup
> to some plack handled URL so we don't read whole file in memory.
>
> I don't know how to detect plack if we are running under plack since script
> is inside CGI::Compile, but we could use environment variable to signal that
> we are under plack.
> 
> That would, however, require us to have some kind of koha.psgi in which we
> can implement this in master.

I guess the thing to do is note that this is not available under Plack?

> I also had to remove 
> 
> && not $filename =~ m#|#
> 
> from download_backup to make download work at all, so I will have to remove
> signed-off on this patch, sorry.

There's a signed off patch which makes that last change on bug 8268.
Comment 9 Jared Camins-Esakov 2012-08-04 14:02:28 UTC
(In reply to comment #7)
> This patch fixes errors under plack but doesn't work for large backup files.
> 
> The problem is in download_backup which tries to print data back to browser
> in 64K chunks. Under plack, that gets buffered in memory and with backup of
> our production database it dies after a while and returns zero length file.
> 
> Correct way to fix this is to issue redirect under plack in download_backup
> to some plack handled URL so we don't read whole file in memory.

This can't be how exporting large blobs from a database would be handled under Plack. Do you have any idea how that would work? Maybe we can use the same technique.

> I don't know how to detect plack if we are running under plack since script
> is inside CGI::Compile, but we could use environment variable to signal that
> we are under plack.
> 
> That would, however, require us to have some kind of koha.psgi in which we
> can implement this in master.
Comment 10 Jared Camins-Esakov 2012-08-04 14:17:34 UTC
Would using this syntax (borrowed from bug 7952) fix the problem?

+ open(my $fh, '<', $filename)
+ print <$fh>;
+ close $fh;
Comment 11 Paul Poulain 2012-11-02 09:48:12 UTC
(In reply to comment #10)
> Would using this syntax (borrowed from bug 7952) fix the problem?
> 
> + open(my $fh, '<', $filename)
> + print <$fh>;
> + close $fh;

Bumping Jared's question!
Comment 12 Robin Sheat 2014-04-22 03:59:25 UTC
(In reply to Jared Camins-Esakov from comment #10)
> Would using this syntax (borrowed from bug 7952) fix the problem?
> 
> + open(my $fh, '<', $filename)
> + print <$fh>;
> + close $fh;

If we're dealing with big binary files, this may consume a lot of memory before it sees a newline. Also, we run the risk of mangling newlines if what we accept ever differs from (or is more lenient than) what we output. Needless to say, that is bad for binary files.
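
A binary-safe variant would set binmode and read fixed-size chunks instead of relying on line-based reads; a minimal sketch (the 64K chunk size mirrors what download_backup reportedly uses, and the fallback filename is made up):

    use strict;
    use warnings;

    my $filename = $ARGV[0] // 'koha.mrc';
    open( my $fh, '<', $filename ) or die "Cannot open $filename: $!";
    binmode $fh;                                  # no newline translation for binary data
    binmode STDOUT;
    while ( read( $fh, my $chunk, 64 * 1024 ) ) { # fixed 64KB chunks
        print $chunk;
    }
    close $fh;

This addresses the newline and memory concerns above, but the output still goes through the Plack buffering described in comment 7.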
Comment 13 Nick Clemens 2016-06-21 18:50:10 UTC
Raising severity in hopes of getting more attention here; not quite a blocker, but quite painful. Large sites are now unable to use the export tool in many cases, as it just times out eventually.

Adding the page as an exception to plack seems to work sometimes, but it would be nice to see a solution for this one.
Comment 14 Jonathan Druart 2016-07-03 15:51:30 UTC
(In reply to Dobrica Pavlinusic from comment #7)
> Correct way to fix this is to issue redirect under plack in download_backup
> to some plack handled URL so we don't read whole file in memory.

The solution seems to be here, but I don't understand what Dobrica meant.
Comment 15 Barton Chittenden 2016-12-12 14:52:58 UTC
(In reply to Jonathan Druart from comment #14)
> (In reply to Dobrica Pavlinusic from comment #7)
> > Correct way to fix this is to issue redirect under plack in download_backup
> > to some plack handled URL so we don't read whole file in memory.
> 
> The solution seems to be here, but I don't understand what Dobrica meant.

Dobrica,

Can you clarify and/or submit a further patch? This is really causing problems for large libraries.

Thanks,

--Barton
Comment 16 NancyK. 2017-04-25 23:16:26 UTC
This bug is going to cause us a really big problem.  We are doing weekly exports and have plans for more.  Please fix this soon.  
Thanks

NancyK
Comment 17 Jonathan Druart 2017-04-26 12:38:39 UTC
(In reply to NancyK. from comment #16)
> This bug is going to cause us a really big problem.  We are doing weekly
> exports and have plans for more.  Please fix this soon.  
> Thanks
> 
> NancyK

This will not be fixed soon, as nobody has a solution.
You should consider using the CLI script misc/export_records.pl for your weekly exports (even with a cronjob).
Comment 18 Nick Clemens 2017-04-26 12:51:36 UTC
Hi Nancy, we have also worked around this by simply excluding that page from plack; it avoids the issue and allows exports to complete.
Comment 19 Magnus Enger 2019-01-16 12:41:47 UTC
(In reply to Nick Clemens from comment #18)
> Hi Nancy, We have also worked around this by simply excluding that page from
> plack, it avoids the issue and allows exports to complete.

Anyone got an example of how to do that? Is it something we should do in the standard config, until someone comes up with a better solution? 

I think the second patch from Dobrica should be a separate bug.
Comment 20 Magnus Enger 2019-01-16 17:44:46 UTC
I have taken the liberty of moving the second patch from Dobrica to a dedicated bug: Bug 22143. I also signed it off. Obsoleting the patch here to avoid confusion. 

Also setting this bug to "In discussion", since there does not seem to be a clear plan for solving the main problem.
Comment 21 Barton Chittenden 2019-01-17 03:21:05 UTC
(In reply to Magnus Enger from comment #19)
> (In reply to Nick Clemens from comment #18)
> > Hi Nancy, We have also worked around this by simply excluding that page from
> > plack, it avoids the issue and allows exports to complete.
> 
> Anyone got an example of how to do that? Is it something we should do in the
> strandard config, until someone comes up with a better solution? 
> 
> I think the second patch from Dobrica should be a separate bug.

Magnus, take a look at /etc/koha/apache-shared-intranet-plack.conf; you'll find lines that look like this:

        ProxyPass "/cgi-bin/koha/offline_circ/process_koc.pl" "!"

... the "!" means "Don't run this under plack".
Comment 22 Magnus Enger 2019-01-18 08:48:45 UTC
(In reply to Barton Chittenden from comment #21)
> ... the "!" means "Don't run this under plack".

Thanks! There are already a number of pages listed here:
http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=debian/templates/apache-shared-intranet-plack.conf

Would it make sense to add export.pl until someone comes up with a proper solution? Or are there so few people downloading huge files that it is not worth it?
Comment 23 Barton Chittenden 2019-01-20 00:15:41 UTC
(In reply to Magnus Enger from comment #22)

> Would it make sense to add export.pl until someone comes up with a proper
> solution? Or are there so few people downloading huge files that it is not
> worth it?

I think it's probably worth it -- it's a one-line change, and it's easy to back out when we finally fix it. Nick, Kyle, Larry, care to weigh in?
Comment 24 Kyle M Hall 2019-01-22 14:25:01 UTC
(In reply to Barton Chittenden from comment #23)
> (In reply to Magnus Enger from comment #22)
> 
> > Would it make sense to add export.pl until someone comes up with a proper
> > solution? Or are there so few people downloading huge files that it is not
> > worth it?
> 
> I think it's probably worth it -- it's a one-line change, and it's easy to
> back out when we finally fix it. Nick, Kyle, Larry, care to weigh in?

That's up to the RM :)

Avoiding plack will 'fix' the script, but most likely cause everyone to stop looking for an actual solution. That's my worry. That being said, it's probably better to get it working in the near-term.
Comment 25 David Cook 2019-01-22 23:30:06 UTC
(In reply to Kyle M Hall from comment #24)
> Avoiding plack will 'fix' the script, but most likely cause everyone to stop
> looking for an actual solution. That's my worry. That being said, it's
> probably better to get it working in the near-term.

This is my worry as well but I didn't want to say it and seem like a naysayer.
Comment 26 Liz Rea 2019-06-13 03:11:33 UTC
What if we change the dump strategy for large databases, and use mydumper and myloader? 

https://dotlayer.com/extremely-fast-mysql-backup-restore-using-mydumpermyloader/
Comment 27 Liz Rea 2019-06-13 03:11:46 UTC
OPTIONALLY, of course.
Comment 28 David Cook 2019-06-13 08:57:43 UTC
(In reply to Liz Rea from comment #26)
> What if we change dump strategy for large database, and use mydumper and
> myloader? 
> 
> https://dotlayer.com/extremely-fast-mysql-backup-restore-using-
> mydumpermyloader/

I like the sound of that, but I'm not sure that it's relevant to this issue, since we're just sending an already existing backup?
Comment 29 David Cook 2019-06-13 09:25:26 UTC
(In reply to Kyle M Hall from comment #24)
> Avoiding plack will 'fix' the script, but most likely cause everyone to stop
> looking for an actual solution. That's my worry. That being said, it's
> probably better to get it working in the near-term.

I'm looking at export.pl and thinking that its job should be to authenticate the user, process user input about what to download, and then direct the user somewhere else better suited to serving files...

This is kind of interesting, although perhaps not relevant in our hybrid CGI/PSGI way of doing things, and it's old so it doesn't take into account that X-Sendfile seems deprecated:
http://www.catalystframework.org/calendar/2014/5

But it does make the point that trying to serve a large file from Perl is going to be slow. 

It might be worth looking at frameworks using Plack to see how they do it. For example send_file and _send_file in Dancer: https://metacpan.org/release/Dancer/source/lib/Dancer.pm. 

Or we could look at https://metacpan.org/release/Plack/source/lib/Plack/App/File.pm, although that looks like it uses PSGI magic to send the file too, which would be tricky in our hybrid environment.
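
For illustration, a rough sketch of what a Plack-handled download URL could look like if we mounted Plack::App::File next to the main app in a koha.psgi; the mount point, backup path and placeholder app are made up, and as noted above, fitting this into our hybrid CGI/PSGI setup is the tricky part:

    use strict;
    use warnings;
    use Plack::Builder;
    use Plack::App::File;

    my $koha_app = sub {    # stand-in for the existing Koha PSGI app
        return [ 200, [ 'Content-Type' => 'text/plain' ], ["Koha app placeholder\n"] ];
    };

    builder {
        # Hypothetical mount point; export.pl would redirect here after its auth checks.
        mount '/backup-downloads' => Plack::App::File->new(
            root => '/var/spool/koha/backups'    # assumed backup directory
        )->to_app;
        mount '/' => $koha_app;
    };

Plack::App::File serves the file from disk without slurping it into memory first, which is the property we are missing today.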
Comment 30 Nick Clemens 2020-09-03 16:26:29 UTC
Created attachment 109632 [details] [review]
Bug 8437: Exclude export.pl from plack

When attempting to download large files from Koha, plack can time out.

Excluding the script from plack is a simple fix until we have a more permanent fix for this issue.

To test:
1 - Try to export your entire DB from Tools->Export
2 - If big enough, it fails
3 - Apply patch, copy changes to /etc/koha/apache-shared-intranet-plack.conf
4 - Restart all the things
5 - Repeat export, it succeeds
Comment 31 David Cook 2020-09-04 03:55:24 UTC
Based on my analysis on Bug 26128, I'm wondering what kind of timeouts we're talking about here.

In the bug I note above, the timeout used by CGI was effectively 2 times as long as the timeout used when proxying to Plack/Starman.
Comment 32 David Cook 2020-10-22 00:19:43 UTC
I've bumped into this one again, so I'm going to have a go at solving it for real.
Comment 33 David Cook 2020-10-22 01:18:00 UTC
Ok, so I'm running a large bib export using export.pl, and I'm monitoring the Apache with "LogLevel Debug".

I see a few things that might be interesting, but the key interesting thing is this:

[Thu Oct 22 00:26:23.676618 2020] [reqtimeout:info] [pid 60205] [client 172.21.0.1:55460] AH01382: Request header read timeout

As Dobrica mentioned, it does appear that Starman is buffering the whole response before sending it. I am guessing this is due to our usage of Plack::App::CGIBin... but I'll investigate further.
Comment 34 David Cook 2020-10-22 01:32:00 UTC
(In reply to David Cook from comment #33)
> As Dobrica mentioned, it does appear that Starman is buffering the whole
> response before sending it. I am guessing this is due to our usage of
> Plack::App::CGIBin... but I'll investigate further.

If I understand correctly, this will never work for us, so long as we're using Plack::App::CGIBin to serve export.pl. 

At https://metacpan.org/release/Plack/source/lib/Plack/App/CGIBin.pm#L47, Plack::App::CGIBin uses Plack::App::WrapCGI. 

Plack::App::WrapCGI uses CGI::Emulate::PSGI at https://metacpan.org/source/Plack::App::WrapCGI#L87.

CGI::Emulate::PSGI writes the response to a temporary file before returning the response (as per https://metacpan.org/release/CGI-Emulate-PSGI/source/lib/CGI/Emulate/PSGI.pm#L19).

If you look at CGI::Emulate::PSGI::Streaming (https://metacpan.org/release/CGI-Emulate-PSGI-Streaming/source/lib/CGI/Emulate/PSGI/Streaming.pm), you can see how that handler returns a closure rather than a tuple (as noted at http://www.catalystframework.org/calendar/2013/10).

In order to do a streaming response, you need to leverage that $responder coderef in the closure.

Of course, this should be doable. I have an idea germinating.
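
For reference, the general shape of a streaming response built on that $responder coderef (a generic sketch with a made-up file path; it needs a server with psgi.streaming support, which Starman provides):

    use strict;
    use warnings;

    my $app = sub {
        my $env = shift;
        return sub {                              # delayed response: the server passes in $responder
            my $responder = shift;
            my $writer    = $responder->(
                [ 200, [ 'Content-Type' => 'application/octet-stream' ] ]
            );
            open my $fh, '<:raw', '/tmp/koha-backup.sql.gz' or die $!;   # hypothetical file
            while ( read( $fh, my $chunk, 64 * 1024 ) ) {
                $writer->write($chunk);           # each chunk is sent to the client as we go
            }
            close $fh;
            $writer->close;
        };
    };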
Comment 35 David Cook 2020-10-22 02:42:25 UTC
One alternative could be to use https://metacpan.org/pod/Plack::App::CGIBin::Streaming. 

Another would be a custom handler using CGI::Emulate::PSGI::Streaming:
cpan2deb CGI::Emulate::PSGI::Streaming
apt install ./libcgi-emulate-psgi-streaming-perl_1.0.0-1_all.deb

I'm exploring the second option now (via a custom Koha::Plack::App::StreamCGI module)...

It seems like the custom handler is returning an empty koha.mrc file after about 10 seconds, and I've only processed about 6000 out of 40000 records...

Of course, as soon as I say that, suddenly it seems to start working without me doing anything different. 

Although I notice that usually my custom handler doesn't load some Javascript or CSS quite right... and I hit the "Export bibliographic records" button from a different window...

But still... progress... I've streamed 10,000 bib records (10MB) out of Starman via Apache.
Comment 36 David Cook 2020-10-22 02:45:45 UTC
Hmm the difference seems to be in the HTML...
Comment 37 David Cook 2020-10-22 02:49:43 UTC
(In reply to David Cook from comment #36)
> Hmm the difference seems to be in the HTML...

Ahh my custom handler isn't sending a complete HTML response for some reason. I think a Perl error. I think I must not be setting up the CGI env quite right. But so close...
Comment 38 David Cook 2020-10-22 02:56:53 UTC
For some reason, my HTML response is being cut off here:

    <script>
        $(document).ready(function() {
            $('#exporttype').tabs();

            $("li.csv_profiles").hide();

            $("#bibs select[name='output_format']").on('change', function(){
                var format = $(this).val();
                if ( format == 'csv' ) {
                    $("#bibs li.csv_profiles").show();
                } else {
                    $("#bibs li.csv_profiles").hide();
                }
            });
            $("#checkall").on("click",function(e){
                e.preventDefault();
                $(".branch_select").prop("checked",1);
            });
            $("#checknone").on("click",func
Comment 39 David Cook 2020-10-22 04:33:34 UTC
That truncated response is 32489 bytes long... 

I'm guessing the issue is with CGI::Parse::PSGI::Streaming...

According to the tools/export.pl, I'm writing a body of 33441 characters...

Looking at CGI::Parse::PSGI::Streaming, I'm only seeing 32768 characters (which includes the HTTP headers and the HTTP body), and 32768 bytes is the same as 32KB...

CGI::Parse::PSGI::Streaming::Handle shows the full response in the buffer and says it is 33750 characters long... 

Ah, I think it's a bug in CGI::Parse::PSGI::Streaming::Handle. For some reason, the author takes the data printed to STDOUT, creates an in-memory file handle stored in a scalar variable, and prints the contents of that "STDOUT" to the in-memory file handle, and that's when it is truncated from 33750 characters to 32768 characters...

This is the line in CGI::Parse::PSGI::Streaming::Handle which seemingly breaks things:
print {$self->{fh}} substr($buf, $offset, $len);

The weird thing is that I'm trying to reproduce the problem in a more stripped back program and I can't do it. It just works...
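
For reference, the in-memory filehandle pattern that module uses, sketched on its own; this is the part that behaves correctly in my stripped-back test:

    use strict;
    use warnings;

    my $buf = '';
    open my $fh, '>', \$buf or die $!;   # in-memory filehandle backed by a scalar
    print {$fh} 'x' x 33750;             # printing appends to $buf
    close $fh;
    print length($buf), "\n";            # prints 33750 here; via the module it came out as 32768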
Comment 40 David Cook 2020-10-22 05:21:07 UTC
I can't figure out why the problem with CGI::Emulate::PSGI::Streaming is happening, but I've reported the bug. It's unusable until it's fixed though.
Comment 41 David Cook 2020-10-22 06:14:14 UTC
I do have another idea... based on what Dobrica said at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=8437#c7

When using Plack, we could redirect away from export.pl to a URL where we have mounted a PSGI application (e.g. we could use Mojo::Server::PSGI like we do with the REST API).

This would also give us a path forward to transition away from CGI to Plack.

The tools/export.pl file is actually pretty short, and the majority of the logic is in Perl modules (the "model" in MVC terms). The .pl files are basically just overloaded controllers (in MVC terms). 

I don't have time today, but maybe I would be willing to work on a solution for this. We could keep the existing template file (or "view" in MVC terms), and just provide an alternative controller. 

We could deprecate tools/export.pl in favour of a PSGI controller.
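
A rough sketch of the redirect side of that idea, assuming a hypothetical '/export/download' route served by the mounted PSGI/Mojolicious app (the route and parameter names are made up):

    # In tools/export.pl (sketch): authenticate and validate input as today,
    # then hand the actual file transfer off to the Plack-served route.
    use strict;
    use warnings;
    use CGI qw(-utf8);

    my $query = CGI->new;
    # ... get_template_and_user / parameter handling as in the current script ...
    print $query->redirect('/export/download?filename=koha.mrc');   # hypothetical route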
Comment 42 David Cook 2020-10-22 22:37:01 UTC
In any case, I think we do need to solve the immediate problem ASAP, so I'm going to go ahead with Nick's patch.
Comment 43 David Cook 2020-10-22 22:37:42 UTC
Comment on attachment 11177 [details] [review]
Bug 8437: variable scoping for tools/export.pl

I think that this one is obsolete at this point.
Comment 44 David Cook 2020-10-22 22:51:54 UTC
(In reply to Nick Clemens from comment #30)
> Created attachment 109632 [details] [review] [review]
> Bug 8437: Exclude export.pl from plack
> 
> When attempting to download large files from Koha plack can timeout
> 
> Excluding the script from plack is a simple fix until we have a more
> permanent fix for this
> issue.
> 
> To test:
> 1 - Try to export your entire DB from Tools->Export
> 2 - If big enough, it fails
> 3 - Apply patch, copy changes to /etc/koha/apache-shared-intranet-plack.conf
> 4 - Restart all the things
> 5 - Repeat export, it succeeds

My test plan:
0. Using koha-testing-docker
1. cp debian/templates/apache-shared-intranet-plack.conf /etc/koha/apache-shared-intranet-plack.conf
2. enable "backup_db_via_tools" in /etc/koha/sites/kohadev/koha-conf.xml
3. echo 'flush_all' | nc -q 1 memcached 11211
4. service apache2 restart
5. koha-plack --restart kohadev
6. Go to http://localhost:8081/cgi-bin/koha/tools/export.pl
7. Note "Unfortunately, no backups are available."

So that's not going to work... but I have another idea.
Comment 45 David Cook 2020-10-22 22:56:17 UTC
(In reply to David Cook from comment #44)
> (In reply to Nick Clemens from comment #30)
> > Created attachment 109632 [details] [review] [review] [review]
> > Bug 8437: Exclude export.pl from plack
> > 
> > When attempting to download large files from Koha plack can timeout
> > 
> > Excluding the script from plack is a simple fix until we have a more
> > permanent fix for this
> > issue.
> > 
> > To test:
> > 1 - Try to export your entire DB from Tools->Export
> > 2 - If big enough, it fails
> > 3 - Apply patch, copy changes to /etc/koha/apache-shared-intranet-plack.conf
> > 4 - Restart all the things
> > 5 - Repeat export, it succeeds
> 
> My test plan:
> 0. Using koha-testing-docker
> 1. cp debian/templates/apache-shared-intranet-plack.conf
> /etc/koha/apache-shared-intranet-plack.conf
> 2. enable "backup_db_via_tools" in /etc/koha/sites/kohadev/koha-conf.xml
> 3. echo 'flush_all' | nc -q 1 memcached 11211
> 4. service apache2 restart
> 5. koha-plack --restart kohadev
> 6. Go to http://localhost:8081/cgi-bin/koha/tools/export.pl
> 7. Note "Unfortunately, no backups are available."
> 
> So that's not going to work... but I have another idea.

My test plan:
0. Using koha-testing-docker
1. cp debian/templates/apache-shared-intranet-plack.conf /etc/koha/apache-shared-intranet-plack.conf
2. Point koha-testing-docker at a database with a large number of biblios (e.g. 40,000 biblios)
3. echo 'flush_all' | nc -q 1 memcached 11211
4. restart_all
5. Go to http://localhost:8081/cgi-bin/koha/tools/export.pl
6. Click "Don't export items"
7. Click "Export bibliographic records"
8. Note after about 5 seconds that a download for koha.mrc starts. 

Perfect. Cheers, Nick.
Comment 46 David Cook 2020-10-22 22:57:40 UTC
Created attachment 112224 [details] [review]
Bug 8437: Exclude export.pl from plack

When attempting to download large files from Koha, plack can time out.

Excluding the script from plack is a simple fix until we have a more permanent fix for this issue.

To test:
1 - Try to export your entire DB from Tools->Export
2 - If big enough, it fails
3 - Apply patch, copy changes to /etc/koha/apache-shared-intranet-plack.conf
4 - Restart all the things
5 - Repeat export, it succeeds

Signed-off-by: David Cook <dcook@prosentient.com.au>
Comment 47 David Cook 2020-10-23 09:05:59 UTC
Side note: It's harder to make this work in Mojolicious than I thought it would be, but only because Mojolicious is non-blocking by design. I think I should be able to create something that works quite well in the end...
Comment 48 Katrin Fischer 2020-10-25 01:51:54 UTC
Created attachment 112448 [details] [review]
Bug 8437: Exclude export.pl from plack

When attempting to download large files from Koha, plack can time out.

Excluding the script from plack is a simple fix until we have a more permanent fix for this issue.

To test:
1 - Try to export your entire DB from Tools->Export
2 - If big enough, it fails
3 - Apply patch, copy changes to /etc/koha/apache-shared-intranet-plack.conf
4 - Restart all the things
5 - Repeat export, it succeeds

Signed-off-by: David Cook <dcook@prosentient.com.au>

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Comment 49 Jonathan Druart 2020-10-25 23:05:24 UTC
Pushed to master for 20.11, thanks to everybody involved!
Comment 50 David Cook 2020-10-26 00:05:03 UTC
Hurray!

I won't have work time to spend on Bug 26791, but sometime soon I could share the work I did to get it working in Mojolicious. It'll need a fair bit more work to be production ready, but it could be the path forward for fixing all these proxy exceptions.
Comment 51 Lucas Gass 2020-11-13 18:38:59 UTC
backported to 20.05.x for 20.05.06
Comment 52 David Cook 2020-11-17 01:08:20 UTC
Using CGI::Emulate::PSGI::Streaming (version 1.0.1 since 1.0.0 is buggy) and a custom Plack::Component I created based on Plack::App::WrapCGI, I was able to stream 65MB of bib records from /cgi-bin/koha/tools/export.pl in 13 minutes using koha-testing-docker (and an external database on a different server). 

I could explore a solution using that tech, although I think I'm still more inclined to move forward with Bug 26791, as replacing export.pl provides a good example of replacing CGI scripts with Mojolicious controllers.
Comment 53 Aleisha Amohia 2020-11-17 04:26:28 UTC
backported to 19.11.x for 19.11.12
Comment 54 Victor Grousset/tuxayo 2020-11-17 10:33:26 UTC
Not backported to oldoldstable (19.05.x). Feel free to ask if it's needed.