Bug 4855

Summary: Tools/Export does not tell browser file size
Product: Koha
Reporter: Lars Wirzenius <lars>
Component: Tools
Assignee: Galen Charlton <gmcharlt>
Status: NEW
QA Contact: Bugs List <koha-bugs>
Severity: enhancement
Priority: P5 - low
CC: arouss1980, dcook, katrin.fischer
Version: Main
Hardware: All
OS: All
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=6952
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:

Description Lars Wirzenius 2010-06-08 04:35:35 UTC
When exporting the catalogue via tools/export, the download works fine, but if the catalogue is large, the user does not know how long it will take. Koha does not tell the browser the size of the file (Content-Length HTTP header I assume), so the browser cannot estimate the remaining time.

It would be nice if it did.
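
For illustration only, here is a minimal sketch of what sending that header from a CGI script could look like, assuming the byte count were somehow known before output starts (the size and filename below are made up):

    use CGI;

    my $cgi  = CGI->new;
    my $size = 123_456_789;    # would have to be computed before output starts

    print $cgi->header(
        -type           => 'application/marc',
        -attachment     => 'koha.mrc',    # hypothetical filename
        -Content_Length => $size,         # CGI.pm emits this as a Content-Length header
    );
    # ...followed by a body of exactly $size bytes...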
Comment 1 Andreas Roussos 2016-09-13 17:01:52 UTC
I'm no Perl expert but I had a look and, apparently, tools/export.pl calls
Koha::Exporter::Record::export in order to export the bibliographic/authority
records.

Unless given a filename, the export() subroutine in Koha/Exporter/Record.pm
operates on STDOUT, hence the inability to know the size of the file
beforehand.

So, I think that with the current set of Perl scripts and modules it's not
possible to tell the browser the size of the file in advance...
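
A rough sketch of how the size could be learned by exporting to a temporary file first, assuming export() accepts a file path parameter (called output_filepath here; the actual parameter name and argument values should be checked against Koha/Exporter/Record.pm):

    use File::Temp qw( tempfile );
    use Koha::Exporter::Record;

    my ( $fh, $tempfile ) = tempfile( UNLINK => 1 );
    close $fh;

    # Write the records to disk instead of STDOUT (parameter name assumed).
    Koha::Exporter::Record::export(
        {
            record_type     => 'bibs',
            record_ids      => \@biblionumbers,    # whatever the user selected
            format          => 'iso2709',
            output_filepath => $tempfile,
        }
    );

    my $size = -s $tempfile;    # the size is now known before anything is sent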
Comment 2 Katrin Fischer 2019-05-04 13:17:52 UTC
Linking to bug 6952, which suggests showing the number of records before the download.
Comment 3 David Cook 2022-08-22 03:47:44 UTC
Yeah, this is a tough one.

As Andreas suggests, it's impossible to send the Content-Length header unless Koha generates the entire data dump first and only then sends it out. For a large database you're not going to be able to hold that in RAM, so you'd need to use a temporary file.

tools/export.pl is actually problematic in general (see Bug 26791). If we wanted to use a temporary file instead of streaming out the response record by record, we'd be best off using a BackgroundJob to prepare the file (although then you have potential issues with disk space for large data dumps).

(For a large file it would be more efficient to have Apache httpd serve it as a static file than to stream it through Starman, but then you wouldn't have authentication and authorization protecting the file. So we'd probably still use Starman, but we'd need to make sure the download was handled by either a CGI script or a Mojolicious controller rather than Plack, since CGI::Emulate::PSGI buffers the entire HTTP response before sending it out.)
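
To make the temporary-file idea concrete, a minimal sketch (not Koha's actual code), reusing the $tempfile idea from the earlier sketch: send Content-Length from the file's size, then stream the file back in fixed-size chunks so the whole dump never has to sit in RAM:

    use CGI;

    my $cgi = CGI->new;

    print $cgi->header(
        -type           => 'application/marc',
        -attachment     => 'koha.mrc',
        -Content_Length => -s $tempfile,
    );

    open my $in, '<:raw', $tempfile or die "Cannot open $tempfile: $!";
    binmode STDOUT;
    while ( read( $in, my $chunk, 64 * 1024 ) ) {
        print $chunk;
    }
    close $in;

    # Caveat from above: under Plack with CGI::Emulate::PSGI the whole response
    # is still buffered before it reaches the client, so this only helps when
    # the script runs as plain CGI or via a controller that can stream.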

But it's something on my mind 😅