As per my notes on Bug 8437, starting at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=8437#c33, any Koha .pl script that needs to stream a large response to a client web browser is never going to work so long as we're using Plack::App::CGIBin to create a PSGI handler.

I tried to create my own streaming version of Plack::App::WrapCGI, but the third-party module I found was buggy, and writing my own would be prohibitive in terms of time and energy. It's also architectural glue code that we really might not want to maintain long-term.

So I've concluded that an alternative would be to write a Mojolicious controller to replace export.pl. CGI Koha can still use export.pl, and when we're using Starman/Plack, we can redirect to the appropriate Mojolicious endpoint.
I think that C4::Auth and C4::Templates will need a bit of refactoring to get the AuthN/AuthZ and Template::Toolkit working 100% correctly, but this is very achievable. I have ideas about how to get this done. I'm just going to focus first on streaming output rather than the Koha internals.
I'm using Mojo::Server::PSGI->to_psgi_app, and I'm not receiving a stream of content in my web client. I decided to bypass Apache, and even when I go to Starman directly on port 5000, I'm still not getting my stream from Mojo. Going to grab some lunch and have another crack at this.
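For context, this is roughly how I'm exposing the app as PSGI (a minimal sketch; the app script path is just an example):

    use Mojo::Server::PSGI;

    # Wrap the Mojolicious application script in a PSGI app that
    # Starman (or any other PSGI server) can run.
    my $server = Mojo::Server::PSGI->new;
    $server->load_app('./myapp.pl');
    my $app = $server->to_psgi_app;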
Running a simple Mojo app using morbo and still not getting the stream, so I must be doing something wrong in Mojolicious...
Actually, I think it was a case of suffering from buffering, and I just wasn't using a reasonable test load. It looks better now...
(In reply to David Cook from comment #4)
> Actually, I think it was a case of suffering from buffering, and I just
> wasn't using a reasonable test load. It looks better now...

Except now I'm trying to write out ISO 2709 records and I'm not getting anything again...
Alas, I was not able to get the $c->write or $c->write_chunk methods to work for streaming a large amount of data in an HTTP response. I could seemingly make it work for small amounts of data, but for large amounts it didn't work, despite doing my best to follow the (limited) documentation. Perhaps Martin will have a better idea about how to get it to work.
Using a drain callback (like in https://gist.github.com/hiroaki/1893896 or https://docs.mojolicious.org/Mojolicious/Guides/Rendering#Streaming) works for a small amount of data, but I didn't have time to try it with a large amount of data. Moreover, the construct might not be workable for some of our situations...
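For reference, the drain pattern from those links boils down to something like this (a minimal sketch; next_chunk() is a hypothetical function returning the next piece of data, or undef when exhausted):

    use Mojolicious::Lite;

    get '/export' => sub {
        my $c = shift;
        my $drain;
        $drain = sub {
            my $c = shift;
            my $chunk = next_chunk() // '';    # hypothetical data source
            # An empty chunk ends the chunked response (and breaks the
            # circular reference holding the closure alive).
            $drain = undef unless length $chunk;
            # Write one chunk and re-arm the callback for the next
            # drain event.
            $c->write_chunk($chunk, $drain);
        };
        $c->$drain;
    };

    app->start;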
I've tried using the drain callback, and while I am getting a stream of data in my web client... the results are strange. It's like it's pulling the same database record more than once but creating different Koha::Biblioitem objects each time. I don't have the time/energy to work through that today...
(In reply to David Cook from comment #8)
> I've tried using the drain callback, and while I am getting a stream of data
> in my web client... the results are strange.
>
> It's like it's pulling the same database record more than once but creating
> different Koha::Biblioitem objects each time. I don't have the time/energy
> to work through that today...

Ahhh it's because of the join with the items table:

    my $biblioitems = Koha::Biblioitems->search(
        $conditions,
        { join => 'items', columns => 'biblionumber' }
    );

The code in tools/export.pl is illogical, but at least I understand my weird results now.
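One possible fix (an untested sketch) would be to ask DBIx::Class for distinct values, so the items join stops producing one row per item:

    my $biblioitems = Koha::Biblioitems->search(
        $conditions,
        {
            join     => 'items',
            columns  => 'biblionumber',
            distinct => 1,    # collapse the one-row-per-item duplicates
        }
    );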
So it seems that $c->write and $c->write_chunk just put the content in a buffer rather than actually writing to the socket. (It's too bad they're not called $c->buffer and $c->buffer_chunk >_>.) The buffer doesn't get emptied until "generate_body_chunk" is called... and a stack trace suggests that in this case it's due to the drain event being fired.

Then, like Node.js, you have recursive callbacks: each callback is triggered by an event being fired, processes one result, and sets up the next callback. It's complicated, but it should be performant. If I wrote a native PSGI handler, it would be blocking, and if you only have 2 Starman workers, that means 1 whole worker would be occupied with streaming the results of 1 large export...

I've written to the Mojolicious listserv to see if there is an easier way, but I think that's probably how it's going to go. Any more investigation will have to wait until a day when I don't stay at work until 8pm...
Sebastian Riedel, the founder of Mojolicious, mentioned that the drain callback is the only way to do streaming in Mojolicious, since it relies on an event loop and non-blocking I/O, which makes sense. You write some data to the buffer, enqueue an event, the event loop processes the queue (which could include other events like a new web request, some I/O, or whatever), and then you repeat the process over and over until you're out of data. This asynchronous style is not the most intuitive to write, but it does make for a much more powerful and robust web server.

Sebastian also mentioned promises and async/await. The async/await style (https://docs.mojolicious.org/Mojolicious/Guides/Cookbook#async-await) looks very user-friendly, but it's an *experimental* feature and requires at least Perl 5.16+, I think, and preferably 5.24+. (Note that Debian Stretch comes with Perl 5.24.) Alternatively, I could look at wrapping it in a Mojo::Promise. That might help in terms of readability/maintainability at least.
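For illustration, the async/await style from that cookbook page looks roughly like this (a sketch only; it needs the experimental -async_await flag, which in turn needs a recent Perl and the Future::AsyncAwait module installed):

    use Mojolicious::Lite -signatures, -async_await;

    get '/title' => async sub ($c) {
        # await suspends this handler without blocking the event loop,
        # so other requests keep being served in the meantime.
        my $tx = await $c->ua->get_p('https://mojolicious.org');
        $c->render( text => $tx->result->dom->at('title')->text );
    };

    app->start;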
Oh... I should be able to provide a "staff" script which can be used by CGI, but which is ignored when "intranet/staff" is mounted in plack.psgi! In that case, I should be able to totally replace export.pl, and provide a baseline for slowly replacing all CGI scripts with Mojolicious controllers!
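Something like this in plack.psgi, where the Mojolicious app shadows the CGI scripts under the staff mount (a hypothetical sketch; the paths and app script name are made up):

    use Plack::Builder;
    use Plack::App::CGIBin;
    use Mojo::Server::PSGI;

    # The existing CGI scripts, served via Plack::App::CGIBin.
    my $cgi_app = Plack::App::CGIBin->new(
        root => '/usr/share/koha/intranet/cgi-bin'
    )->to_app;

    # The new Mojolicious controllers, wrapped as a PSGI app.
    my $server = Mojo::Server::PSGI->new;
    $server->load_app('/usr/share/koha/staff.pl');    # hypothetical
    my $staff_app = $server->to_psgi_app;

    builder {
        # The longest path match wins, so /intranet/staff requests
        # never reach the CGI app when running under Plack.
        mount '/intranet/staff' => $staff_app;
        mount '/intranet'       => $cgi_app;
    };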
Making significant progress.

    koha-run-backups --days 2 --output /var/spool/koha

    ls /var/spool/koha/kohadev/
    kohadev-2020-11-16.sql.gz  kohadev-2020-11-16.tar.gz

I can't download kohadev-2020-11-16.tar.gz via the web interface because of a permission issue that seems to already exist in Koha. However, I was able to download kohadev-2020-11-16.sql.gz in 22-49ms using the Mojolicious controller (vs 1.46-1.97 seconds with export.pl).
Impressive :D
Next steps:

1. The code for exporting records from export.pl needs a massive rewrite in order to work here, but I'm leaving that for last...
2. I also need to write unit tests, but at this point I'm just trying to get something that obviously works and works well. I already want to refactor the code I have, but so far it's just about getting something working as close to what we already have, which obviously isn't great.
3. Now I need to look more at C4::Templates::gettemplate, since it is expecting a CGI object in the $query variable, and I don't have one. This is needed for translations. I think that it's probably going to be far too difficult to try to update it, and I'll end up creating a Koha::Template replacement. (One rough stopgap idea is sketched below.)
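A rough stopgap idea (untested, and the method list is a guess at what getlanguage/gettemplate actually call): a small duck-typed stand-in for the CGI object.

    package Koha::FakeCGI;

    # Minimal stand-in for the CGI object that C4::Templates expects.
    # Only implements the methods we believe get called; extend as needed.
    sub new {
        my ( $class, $params, $cookies ) = @_;
        return bless { params => $params // {}, cookies => $cookies // {} }, $class;
    }

    sub param  { my ( $self, $name ) = @_; return $self->{params}{$name} }
    sub cookie { my ( $self, $name ) = @_; return $self->{cookies}{$name} }

    1;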
Created attachment 113660 [details] [review]
Bug 26791: Replace export.pl with export Mojolicious controller

WORK IN PROGRESS
I am still working on this, but I'm sharing so that people can review, provide feedback, or whatever.

Test plan:
1. cp debian/templates/plack.psgi /etc/koha/sites/kohadev/plack.psgi
2. koha-plack --restart kohadev
3. Go to http://localhost:8081/cgi-bin/koha/staff/tools/export
   (Alternatively, navigate to http://localhost:8081/cgi-bin/koha/tools/tools-home.pl and click "Export data")
4. Visually compare with http://localhost:8081/cgi-bin/koha/tools/export.pl
   (Note that the top nav is slightly different because I am being very opinionated about how I populate $template variables to reduce future technical debt.)

Other test plans:
0. First, follow the above test plan
1. koha-run-backups --days 2 --output /var/spool/koha
2. vi /etc/koha/sites/kohadev/koha-conf.xml
3. Enable backup_db_via_tools and backup_conf_via_tools
4. echo 'flush_all' | nc -q 1 memcached 11211
5. koha-plack --restart kohadev
6. Go to http://localhost:8081/cgi-bin/koha/staff/tools/export
7. Click "Export database"
8. Choose a file and click "Download database"
9. Note that it downloads quickly and correctly
(Bonus points if you try downloading a very large database. I have so far only tried with small databases, but it should work with large ones too.)
Another test plan:
0. Follow the first test plan
1. koha-plack --disable kohadev
2. service apache2 restart
3. Go to http://localhost:8081/cgi-bin/koha/staff/tools/export
4. Click "Export database", choose a file, and click "Download database"
5. Note that the database downloads successfully (even in CGI mode!)
I just did a test with a 41MB Koha database and it took 17ms to download from http://localhost:8081/cgi-bin/koha/staff/tools/export. I've verified the file and it's complete. Using http://localhost:8081/cgi-bin/koha/tools/export.pl it took 1.4s.
Technically Bug 26791 doesn't depend on Bug 20582, but I will need to do something for app.psgi from Bug 20582 before this work is complete...
*** Bug 26792 has been marked as a duplicate of this bug. ***
Created attachment 114587 [details] [review]
Bug 26791: Use Mojolicious record exporter when using ExportCircHistory

This patch uses the Mojolicious record exporter to export records from the Circulation module.

Test plan:
1. Change system preference "ExportCircHistory" to "Show"
2. Check out items 39999000004571 and 39999000001310 to the "koha" borrower
3. Select the checkouts from the list and check the "Export" checkboxes
4. Click "Export" at the bottom of the page
5. Note that valid ISO 2709 MARC is generated in the export
I still have more work to do, but the core is there. It's really now just about building out the controller, hopefully moving code into modules, and adding unit tests for those modules. Actually, exporting as CSV is going to be annoying, but I have some ideas. I'm hoping to look at this more over the holidays.
(In reply to David Cook from comment #23)
> I'm hoping to look at this more over the holidays.

Next steps:
1. Finish Perl coding
   a. Finish controller code
   b. Move code out of controller and into modules
   c. Possibly refactor or add new CSV export code...
2. Write unit tests for modules
3. Final E2E testing
Unfortunately, I haven't had any time/energy to work on this, although it's still something I want to do eventually.
The patches are outdated, and realistically we're not moving in this direction anyway. It's probably more realistic to handle this as a background task, which would keep the web app more available. The only downside is that it will take up disk space.
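If someone picks this up as a background task, the enqueue side could look something like this (a sketch only; Koha::BackgroundJob::ExportRecords is a hypothetical subclass of Koha::BackgroundJob, following the pattern of the existing job classes):

    use Koha::BackgroundJob::ExportRecords;    # hypothetical job class

    # Queue the export and hand the user a job id to poll, instead of
    # tying up a Starman worker streaming the file.
    my $job_id = Koha::BackgroundJob::ExportRecords->new->enqueue(
        {
            record_type => 'bibs',
            record_ids  => \@biblionumbers,
        }
    );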