Bug 8970 - MARC import gives error under Starman/Plack
Summary: MARC import gives error under Starman/Plack
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: MARC Bibliographic record staging/import
Version: Main
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: Robin Sheat
QA Contact:
URL:
Keywords:
Depends on:
Blocks: 12173 13938 18343
Reported: 2012-10-25 15:45 UTC by Jared Camins-Esakov
Modified: 2021-02-08 15:46 UTC
CC List: 7 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 8970 - fix MARC import under plack (5.58 KB, patch)
2014-04-29 06:21 UTC, Robin Sheat
[signed off] Bug 8970 - fix MARC import under plack (5.64 KB, patch)
2014-05-21 22:50 UTC, Brendan Gallagher
Bug 8970 - fix MARC import under plack (5.79 KB, patch)
2014-05-22 11:54 UTC, Jonathan Druart

Description Jared Camins-Esakov 2012-10-25 15:45:25 UTC
When trying to import records under Starman/Plack (this probably does not happen with Plack alone), one gets an error "Failed to submit form: parsererror" with the following in the error log:
Subroutine redo_matching redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 142.
Subroutine create_labelbatch_from_importbatch redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 186.
Subroutine import_batches_list redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 206.
Subroutine commit_batch redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 234.
Subroutine revert_batch redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 263.
Subroutine put_in_background redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 292.
Subroutine progress_callback redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 327.
Subroutine add_results_to_template redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 337.
Subroutine add_saved_job_results_to_template redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 343.
Subroutine import_records_list redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 351.
Subroutine batch_info redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 413.
Subroutine add_matcher_list redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 445.
Subroutine add_page_numbers redefined at /home/jcamins/kohaclone/tools/manage-marc-import.pl line 458.
Comment 1 Paul Poulain 2012-11-02 09:05:19 UTC
Upgrading severity, it's a blocker for using Plack, as there's no workaround. Jared, do you plan to work on a fix for this ? (I think Galen is the default assignee, but Galen, feel free to provide a fix, of course)

If neither of you plans to work on it, I'll investigate.
Comment 2 Jared Camins-Esakov 2012-11-02 10:44:12 UTC
(In reply to comment #1)
> Upgrading severity, it's a blocker for using Plack, as there's no
> workaround. Jared, do you plan to work on a fix for this ? (I think Galen is
> the default assignee, but Galen, feel free to provide a fix, of course)
> 
> If neither of you plans to work on it, I'll investigate.

Oh, are you able to reproduce? I don't think I'll be working on this because I don't understand at all what the problem is. I just know that it doesn't work.
Comment 3 Paul Poulain 2012-11-02 11:05:10 UTC
(In reply to comment #2)
> (In reply to comment #1)
> > Upgrading severity, it's a blocker for using Plack, as there's no
> > workaround. Jared, do you plan to work on a fix for this ? (I think Galen is
> > the default assignee, but Galen, feel free to provide a fix, of course)
> > 
> > If neither of you plans to work on it, I'll investigate.
> 
> Oh, are you able to reproduce? I don't think I'll be working on this because
> I don't understand at all what the problem is. I just know that it doesn't
> work.

You may have seen that I'm working on all bugs related to "Plack" today. I've submitted a patch & tested some of them, but not this one, as it seems really mysterious.
I didn't even try to reproduce it.
Comment 4 Jared Camins-Esakov 2013-05-01 12:01:00 UTC
More relevant error messages:
seek() on closed filehandle _GEN_3 at /usr/share/perl5/CGI/Emulate/PSGI.pm line 31.
Undefined subroutine &CGI::Emulate::PSGI::croak called at /usr/share/perl5/CGI/Emulate/PSGI.pm line 31
        CGI::Emulate::PSGI::__ANON__('HASH(0x38efca0)') called at /usr/share/perl5/Plack/App/WrapCGI.pm line 83
        Plack::App::WrapCGI::call('Plack::App::WrapCGI=HASH(0x419fd50)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /usr/share/perl5/Plack/App/CGIBin.pm line 50
        Plack::App::CGIBin::serve_path('Plack::App::CGIBin=HASH(0x34ea1f0)', 'HASH(0x38efca0)', '/home/jcamins/kohaclone/tools/stage-marc-import.pl') called at /usr/share/perl5/Plack/App/File.pm line 34
        Plack::App::File::call('Plack::App::CGIBin=HASH(0x34ea1f0)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /usr/share/perl5/Plack/App/URLMap.pm line 71
        Plack::App::URLMap::call('Plack::App::URLMap=HASH(0x34ea5f8)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /usr/share/perl5/Plack/Middleware/ReverseProxy.pm line 68
        Plack::Middleware::ReverseProxy::call('Plack::Middleware::ReverseProxy=HASH(0x35cd768)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /usr/share/perl5/Plack/Middleware/Conditional.pm line 19
        Plack::Middleware::Conditional::call('Plack::Middleware::Conditional=HASH(0x35cd540)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Middleware::StackTrace::__ANON__ at /usr/share/perl5/Try/Tiny.pm line 71
        eval {...} at /usr/share/perl5/Try/Tiny.pm line 67
        Plack::Middleware::StackTrace::call('Plack::Middleware::StackTrace=HASH(0x35cd900)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /home/jcamins/perl5/lib/perl5/Plack/Middleware/Debug/Base.pm line 23
        Plack::Middleware::Debug::Base::call('Plack::Middleware::Debug::Memory=HASH(0x366a150)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /home/jcamins/perl5/lib/perl5/Plack/Middleware/Debug/Base.pm line 23
        Plack::Middleware::Debug::Base::call('Plack::Middleware::Debug::Timer=HASH(0x366aaf8)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /home/jcamins/perl5/lib/perl5/Plack/Middleware/Debug/Base.pm line 23
        Plack::Middleware::Debug::Base::call('Plack::Middleware::Debug::Response=HASH(0x35d2498)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /home/jcamins/perl5/lib/perl5/Plack/Middleware/Debug/Base.pm line 23
        Plack::Middleware::Debug::Base::call('Plack::Middleware::Debug::Environment=HASH(0x35d2630)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /home/jcamins/perl5/lib/perl5/Plack/Middleware/Debug.pm line 138
        Plack::Middleware::Debug::call('Plack::Middleware::Debug=HASH(0x35cdaf8)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /usr/share/perl5/Plack/Middleware/AccessLog.pm line 21
        Plack::Middleware::AccessLog::call('Plack::Middleware::AccessLog=HASH(0xbe4de0)', 'HASH(0x38efca0)') called at /usr/share/perl5/Plack/Component.pm line 39
        Plack::Component::__ANON__('HASH(0x38efca0)') called at /usr/share/perl5/Plack/Util.pm line 165
        eval {...} at /usr/share/perl5/Plack/Util.pm line 165
        Plack::Util::run_app('CODE(0xbe4d08)', 'HASH(0x38efca0)') called at /usr/share/perl5/Starman/Server.pm line 223
        Starman::Server::process_request('Starman::Server=HASH(0xbe4f18)') called at /usr/share/perl5/Net/Server.pm line 142
        Net::Server::run_client_connection('Starman::Server=HASH(0xbe4f18)') called at /usr/share/perl5/Net/Server/PreFork.pm line 273
        eval {...} at /usr/share/perl5/Net/Server/PreFork.pm line 273
        Net::Server::PreFork::run_child('Starman::Server=HASH(0xbe4f18)') called at /usr/share/perl5/Net/Server/PreFork.pm line 229
        Net::Server::PreFork::run_n_children('Starman::Server=HASH(0xbe4f18)', 2) called at /usr/share/perl5/Net/Server/PreFork.pm line 144
        Net::Server::PreFork::loop('Starman::Server=HASH(0xbe4f18)') called at /usr/share/perl5/Net/Server.pm line 117
        Net::Server::run('Starman::Server=HASH(0xbe4f18)', 'port', 'ARRAY(0xbec948)', 'host', 'ARRAY(0x13ff188)', 'proto', 'ARRAY(0x3877180)', 'serialize', 'flock', 'log_level', 2, 'log_file', '/home/jcamins/koha-dev/var/log/koha-error.log', 'min_servers', 2, 'min_spare_servers', 1, 'max_spare_servers', 1, 'max_servers', 2, 'max_requests', 50, 'user', 1001, 'group', '1003 33 999 1003', 'listen', 1024, 'leave_children_open_on_hup', 1, 'no_client_stdout', 1, 'pid_file', '/home/jcamins/koha-dev/var/run/plack.pid', 'setsid', 1, 'background', 1) called at /usr/share/perl5/Starman/Server.pm line 61
        Starman::Server::run('Starman::Server=HASH(0xbe4f18)', 'CODE(0xbe4d08)', 'HASH(0x34ea3a0)') called at /usr/share/perl5/Plack/Handler/Starman.pm line 18
        Plack::Handler::Starman::run('Plack::Handler::Starman=HASH(0x34e9f20)', 'CODE(0xbe4d08)') called at /usr/share/perl5/Plack/Loader.pm line 88
        Plack::Loader::run('Plack::Loader=HASH(0xb5f448)', 'Plack::Handler::Starman=HASH(0x34e9f20)') called at /usr/share/perl5/Plack/Runner.pm line 263
        Plack::Runner::run('Plack::Runner=HASH(0x8d46e0)') called at /usr/bin/starman line 31
Comment 5 Paul Poulain 2013-05-03 07:45:07 UTC
Feedback: we've been running Plack in production at SAN Ouest Provence since last week. They are facing this problem. I can reproduce it on their test platform too. However, when trying to reproduce it on master/3.11, it works fine. I spent more than 3 hours trying to understand the differences, and can't find any patch that explains this behaviour change.

Jared, does your comment 4 apply to master or 3.10? (I fear it's master, that would just add some mystery...)

Side comment = it seems that, despite the error, the file is uploaded correctly.
Comment 6 Paul Poulain 2013-05-03 08:10:31 UTC
Feedback 2: suspecting a problem with background jobs, I've tested tools/upload-cover-image.pl on a 3.10 and got the same problem.
So it is related to background jobs.

I tried on master, and it worked!
Comment 7 Paul Poulain 2013-05-03 08:43:24 UTC
Still trying to figure out why it works for me on master.
Here are the logs I get with Plack:
seek() on closed filehandle _GEN_5 at /usr/share/perl5/CGI/Emulate/PSGI.pm line 31.
Undefined subroutine &CGI::Emulate::PSGI::croak called at /usr/share/perl5/CGI/Emulate/PSGI.pm line 31.

(and nothing more)

I checked the configuration files on both servers and couldn't see any difference.

How I run starman:
starman -M FindBin --max-requests 50 --workers 6 --port 5001 --pid /path/starman_pro.pid koha.psgi

my koha.psgi is:

#!/usr/bin/perl
use Plack::Builder;
use Plack::App::CGIBin;
use Plack::Middleware::Debug;
use Plack::App::Directory;
#use Plack::Middleware::Debug::MemLeak;
use lib("/home/paul/koha.dev/koha-community");
use C4::Context;
use C4::Languages;
use C4::Members;
use C4::Dates;
use C4::Boolean;
use C4::Letters;
use C4::Koha;
use C4::XSLT;
use C4::Branch;
use C4::Category;

my $app=Plack::App::CGIBin->new(root => "/home/paul/koha.dev/koha-community/");
builder {
#        enable 'Debug',  panels => [
#                qw(Environment Response Timer Memory),
#               [ 'Profiler::NYTProf', exclude => [qw(.*\.css .*\.png .*\.ico .*\.js .*\.gif)] ],
#               [ 'DBITrace', level => 1 ],
#        ];

        enable "Plack::Middleware::Static",
                path => qr{^/intranet-tmpl/}, root => '/home/paul/koha.dev/koha-community/koha-tmpl/';

       enable "Plack::Middleware::Static::Minifier",
               path => qr{^/intranet-tmpl/},
               root => './koha-tmpl/';

#        enable 'StackTrace';
        mount "/cgi-bin/koha" => $app;

};

(Note that I also tried uncommenting "enable 'StackTrace'"; no difference, it still works on my laptop.)

my laptop is running Ubuntu 12.04LTS. Plack version is 0.9985

On the (faulty) server : Debian + Plack version 1.0013
Comment 8 Jared Camins-Esakov 2013-05-03 11:46:35 UTC
(In reply to comment #5)
> Jared, does your comment 4 apply to master or 3.10? (I fear it's master,
> that would just add some mystery...)

Master.

> Side comment = it seems that, despite the error, the file is uploaded
> correctly.

Confirmed.
Comment 9 Paul Poulain 2013-05-03 16:01:23 UTC
I suspect the difference does not come from Koha version, but from Plack version.
> my laptop is running Ubuntu 12.04LTS. Plack version is 0.9985
> On the (faulty) server : Debian + Plack version 1.0013

I suspect that it's Plack that is responsible for the different behaviour.

Jared, which Plack version do you have?
(I'll try to update, but on Ubuntu, by default, it's 0.9985)
Comment 10 Jared Camins-Esakov 2013-05-03 16:10:06 UTC
(In reply to comment #9)
> I suspect the difference does not come from Koha version, but from Plack
> version.
> > my laptop is running Ubuntu 12.04LTS. Plack version is 0.9985
> > On the (faulty) server : Debian + Plack version 1.0013
> 
> I suspect that it's Plack that is responsible for the different behaviour.
> 
> Jared, which Plack version do you have?
> (I'll try to update, but on Ubuntu, by default, it's 0.9985)

I have Plack 0.9985 and Starman 0.2014.
Comment 11 Jonathan Druart 2013-07-15 14:17:25 UTC
I cannot reproduce with Plack 1.0016 and Starman 0.3006.
Comment 12 Paul Poulain 2013-07-17 15:05:08 UTC
Still investigating:
the error is still there at SAN-OP (3.10.7); they have:
biblibre@deb2571:~$ pmvers Plack
1.0023
biblibre@deb2571:~$ pmvers Starman
0.3011
Comment 13 Jonathan Druart 2013-10-29 13:09:51 UTC
Retested now on 3.10.7 with Plack v1.0023 and Starman v0.3011.
I still can't reproduce it...
Comment 14 Robin Sheat 2014-04-22 05:52:34 UTC
I can somewhat reproduce this:

My method:
1) Tools -> Stage MARC records for import - staged a small MARC file with one entry.
2) Fill out the boxes (left everything as default, except for giving it a comment.)
3) Stage for import - everything seems to be fine
4) Follow the Manage staged records link
5) Note that my record is there.


I did see:

seek() on closed filehandle _GEN_44 at /usr/share/perl5/CGI/Emulate/PSGI.pm line 32.
Can't seek stdout handle: Bad file descriptor at /usr/share/perl5/Plack/App/WrapCGI.pm line 87.
 at /usr/share/perl/5.18/Carp.pm line 100

in the logs, but no error apparent in the browser. I assume that something (the progress updater?) is closing the output handle when plack/starman doesn't expect it.

We have seen the parsererror before, long ago. It was something to do with invalid JSON appearing in the upload, which would be consistent with this. I also see:

192.168.56.1 - - [22/Apr/2014:17:47:41 +1200] "POST /cgi-bin/koha/tools/stage-marc-import.pl HTTP/1.1" 500 5728 "http://test-intra.debmaker32/cgi-bin/koha/tools/stage-marc-import.pl" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:28.0) Gecko/20100101 Firefox/28.0"

that is, a 500 error. So there is something going wrong.
Comment 15 Robin Sheat 2014-04-22 05:57:21 UTC
Tried it with a larger MARC file, and some things really went wrong.

It went straight to 100%, then to 0%, now it's sitting there doing nothing.

There is:

DBD::mysql::st fetchrow_array failed: fetch() without execute() at /usr/share/perl5/CGI/Session/Driver/DBI.pm line 74.
 at /mnt/catalyst/koha/tools/upload-file.pl line 85

and

DBD::mysql::st execute failed: MySQL server has gone away at /usr/share/perl5/CGI/Session/Driver/DBI.pm line 72.
DBD::mysql::st execute failed: MySQL server has gone away at /usr/share/perl5/CGI/Session/Driver/DBI.pm line 72.

along with some other 500 errors in there.
Comment 16 Robin Sheat 2014-04-23 03:17:29 UTC
Tried it again, I think something else happened last time that caused the database connection to go away.

This time I got an error during the stage process, with:

fork failed while attempting to run /cgi-bin/koha/tools/stage-marc-import.pl as a background job at /mnt/catalyst/koha/tools/stage-marc-import.pl line 125, <GEN12> chunk 139245.
Comment 17 Robin Sheat 2014-04-23 05:15:43 UTC
OK, while I haven't nailed this down exactly, I think I have some hints that there's a race condition happening. I also get plain weird things like:

DBD::mysql::db selectrow_array failed: Cannot add or update a child row: a foreign key constraint fails (`koha_test`.`import_biblios`, CONSTRAINT `import_biblios_ibfk_1` FOREIGN KEY (`import_record_id`) REFERENCES `import_records` (`import_record_id`) ON DELETE CASCADE ON UPDATE CASCADE) at /mnt/catalyst/koha/C4/Context.pm line 618.

and that line in C4::Context is requesting a preference. So I think that we have some sort of race condition going on. Also, forking a plack process is a) tricky, and b) memory heavy (which is why the fork was failing for me in the first instance, it started working when I changed my VM from 512MB to 1024MB.)

However, I think there's something odd going on with dbh in the forked process and the original process interacting in bad ways.
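
To make the failure mode concrete, here is a minimal sketch with plain DBI (not Koha code; the DSN and credentials are placeholders) of why a forked job sharing the parent's handle goes wrong, and the usual cure of giving the child its own connection:

    use strict;
    use warnings;
    use DBI;

    # The persistent web worker already holds a connection.
    my $dbh = DBI->connect('dbi:mysql:database=koha', 'kohaadmin', 'secret',
        { RaiseError => 1, AutoCommit => 1 });

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # Child: the inherited $dbh still points at the parent's MySQL socket,
        # so both processes would interleave traffic on one connection and
        # corrupt it ("server has gone away", fetch() without execute(), ...).
        # Mark the handle so its destructor doesn't close the parent's socket,
        # drop it, and connect again for the background job only.
        $dbh->{InactiveDestroy} = 1;
        undef $dbh;
        $dbh = DBI->connect('dbi:mysql:database=koha', 'kohaadmin', 'secret',
            { RaiseError => 1, AutoCommit => 1 });
        # ... long-running import work with the child's own handle ...
        exit 0;
    }

    # Parent keeps serving requests on its original handle.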

It seems to me that the best(?!) way to sort this is to rewrite the way the whole background processing stuff works, and to use a queueing daemon that handles forking and processing. This is cleaner overall than forking a web process anyway. However, it's a fairly substantial amount of work.

A quick grep for 'fork' suggests that the following scripts may be affected by this:
tools/stage-marc-import.pl
tools/batchMod.pl
tools/manage-marc-import.pl
offline_circ/process_koc.pl
Comment 18 Robin Sheat 2014-04-23 05:22:24 UTC
Another option might be to do a fork-and-exec of a small process to do the work totally independently of the web serving process. That'd probably be easier to adapt.
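
A rough sketch of that fork-and-exec idea (illustrative only; the worker script path and arguments are hypothetical):

    use strict;
    use warnings;

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # exec() replaces the child with a brand-new perl interpreter, so
        # nothing from the Starman worker (DBI handles, loaded modules, open
        # file handles) leaks into the background job.
        exec('/usr/bin/perl', '/path/to/koha/misc/stage_marc_worker.pl',
             '--batch-id', '42')
            or die "exec failed: $!";
    }
    # The web process returns to Plack immediately; the worker runs independently.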
Comment 19 Jonathan Druart 2014-04-23 07:30:27 UTC
(In reply to Robin Sheat from comment #17)
> It seems to me that the best(?!) way to sort this is to rewrite the way the
> whole background processing stuff works, and to use a queueing daemon that
> handles forking and processing. This is cleaner overall than forking a web
> process anyway. However, it's a fairly substantial amount of work.

Julian used Net::Server::Daemonize to fix another issue in tools/batchMod.pl. It is in production and it seems to work quite well.

see http://git.biblibre.com/?p=koha;a=commit;h=1347281752d62dd370f6e4fae936def6d7630d5c
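
(For reference, a minimal sketch of how that module is typically used, assuming its usual daemonize($user, $group, $pid_file) interface; user, group and paths here are placeholders:)

    use strict;
    use warnings;
    use Net::Server::Daemonize qw(daemonize);

    # daemonize() forks, detaches from the terminal, writes the pid file and
    # exits the calling process; the surviving child carries on with the work.
    daemonize('koha', 'koha', '/var/run/koha/background-job.pid');

    # ... reconnect to the database and run the long job here ...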

I don't have a lot of time at the moment to make a patch, but I will try to do it when I can.
Comment 20 Jacek Ablewicz 2014-04-23 07:39:23 UTC
(In reply to Robin Sheat from comment #17)
> OK, while I haven't nailed this down exactly, I think I have some hints that
> there's a race condition happening. I also get plain weird things like:
> 
> DBD::mysql::db selectrow_array failed: Cannot add or update a child row: a
> foreign key constraint fails (`koha_test`.`import_biblios`, CONSTRAINT
> `import_biblios_ibfk_1` FOREIGN KEY (`import_record_id`) REFERENCES
> `import_records` (`import_record_id`) ON DELETE CASCADE ON UPDATE CASCADE)
> at /mnt/catalyst/koha/C4/Context.pm line 618.
> 

Looks like the plack/mod_perl related pitfall, when 2+ concurrently running processes are sharing the same single DBI connection (a big no-no, DBI would not work reliably if used that way). But - reading previous comments in this report, there may be some other unrelated/additional problems with background jobs under plack as well (?).

> and that line in C4::Context is requesting a preference. So I think that we
> have some sort of race condition going on. Also, forking a plack process is
> a) tricky, and b) memory heavy (which is why the fork was failing for me in
> the first instance, it started working when I changed my VM from 512MB to
> 1024MB.)

Forked/background jobs - and plack workers etc. - would not be as memory heavy (thanks to copy-on-write kernel feature) if we preload more C4/*, Koha/* and CPAN modules at plack startup script. But right now, preloading C4/Auth.pm, or any other module which uses it, would cause the exact same problem with a single DBI connection being shared between all Plack workers and/or mod_perl processes. That's because C4/Auth fetches some sysprefs in a BEGIN {} block. I addressed it with the following line added at the end of the startup script:

   ${C4::Context::context}->{dbh}=undef;

It seems to work fine under mod_perl (but I haven't tested it all that much yet,
in particular not with Plack). This start-up script:

...

use CGI ();
CGI->compile(':all');

use C4::Members ();
use C4::Search ();
use C4::Serials ();
use C4::Acquisition ();
use C4::AuthoritiesMarc ();

C4::Context->config('AaaBbCcc');
${C4::Context::context}->{dbh}=undef;

preloads circa 80% of the C4/* Perl code, resulting in significantly better latency and lower total RAM usage under mod_perl.
Comment 21 Robin Sheat 2014-04-24 02:18:02 UTC
(In reply to Jonathan Druart from comment #19)
> Julian used Net::Server::Daemonize to fix another issue in
> tools/batchMod.pl. It is in production and it seems to work quite well.
> 
> see
> http://git.biblibre.com/?p=koha;a=commit;
> h=1347281752d62dd370f6e4fae936def6d7630d5c

I'm not sure that method would be totally safe, for the same reason that our current one has issues: things like carrying the database handles around. I also don't like the idea of daemonising a Starman process; it just seems dangerous.

(In reply to Jacek Ablewicz from comment #20)
> Looks like the plack/mod_perl related pitfall, when 2+ concurrently running
> processes are sharing the same single DBI connection (a big no-no, DBI would
> not work reliably if used that way). But - reading previous comments in this

I thought about it a bit more, and realised why this was all of a sudden a problem: under CGI, the parent process ended immediately, so the child had the DBI handle all to itself. Now, it doesn't end immediately, and so next time it gets called on to do something, it's using the handle that is being used to do the import.

> report, there may be some other unrelated/additional problems with
> background jobs under plack as well (?).

Maybe. I think it's OK; most of my failures to get it working were just because it required more RAM than I had allocated to the VM. If we don't have a dedicated queuing daemon, there's no way around this. But I think a fork-and-exec will do for now, either splitting the functions out into another script, or changing the execution path so that it works just by execing itself.

> Forked/background jobs - and plack workers etc. - would not be as memory
> heavy (thanks to copy-on-write kernel feature) if we preload more C4/*,
> Koha/* and CPAN modules at plack startup script. But right now, preloading

That would be a good thing to do, though it is a task for another bug.
Comment 22 Robin Sheat 2014-04-29 06:21:22 UTC Comment hidden (obsolete)
Comment 23 Robin Sheat 2014-04-29 06:25:08 UTC
There are still other issues that should be addressed, but I think they're not critical, as with this patch things work.

In particular:
* MARC data is all loaded into RAM; this makes the process get large and stay large until it's reaped,
* Probably due to an incomplete record at the end of my test data, there is a crash, but it doesn't seem to impair the working of it at all.
Comment 24 Galen Charlton 2014-05-06 17:11:24 UTC
Resetting severity to normal - Plack support is still experimental.
Comment 25 Brendan Gallagher 2014-05-21 22:50:06 UTC Comment hidden (obsolete)
Comment 26 Jonathan Druart 2014-05-22 11:54:54 UTC
Created attachment 28423 [details] [review]
Bug 8970 - fix MARC import under plack

There were database handles being shared between a parent and a child
process, which is a big no-no, and was leading to crazy crashes. Fine
under CGI, but not in a persistent environment. This causes the child to
make a new database handle to use. Also some small cleanups.

To test:
* In a plack environment,
* Tools -> stage MARC records for import
* Use a reasonable size file (but not too big as it all goes into RAM -
  I made one about 40MB.)
* Make sure that it works, and that the progress bars progress.

Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>

Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Tested with a 55M file, I reproduced the error and I confirm this patch
fixes it.
Comment 27 Galen Charlton 2014-05-26 00:52:06 UTC
Pushed to master.  Thanks, Robin!