Bug 15032 - [Plack] Scripts that fork (like stage-marc-import.pl) don't work as expected
Summary: [Plack] Scripts that fork (like stage-marc-import.pl) don't work as expected
Status: CLOSED WONTFIX
Alias: None
Product: Koha
Classification: Unclassified
Component: Tools
Version: Main
Hardware: All
OS: All
Priority: P5 - low
Severity: major
Assignee: Jonathan Druart
QA Contact: Testopia
URL: https://gitlab.com/joubu/Koha/tree/bu...
Keywords:
Duplicates: 18719, 20335
Depends on:
Blocks: 15019
Reported: 2015-10-19 19:38 UTC by HB-NEKLS
Modified: 2024-07-04 20:38 UTC
CC List: 18 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
"Stage to import" fails (139.26 KB, image/png)
2015-10-20 12:34 UTC, Tomás Cohen Arazi (tcohen)
[OBSOLETE?] Bug 15032: Make sure the filehandle is not close on background mode (1.25 KB, patch)
2015-10-26 16:28 UTC, Jonathan Druart
Bug 15032: Try to fix background jobs - batch_record_modification.pl (3.18 KB, patch)
2017-03-27 20:39 UTC, Jonathan Druart

Description HB-NEKLS 2015-10-19 19:38:28 UTC
After the fixes for 15005, I retried importing a record batch. I didn't get the error pop-up message, but this time I got "Internal Server Error" for this page: http://test-intra.plack.bywatersolutions.com/cgi-bin/koha/tools/stage-marc-import.pl

However, when I go directly to the Manage Staged Records page, I see my staged batch.

To recreate: 

1) Go to Tools --> Stage MARC records for import. 
2) Upload file and click Import. I left all the settings on default ones. 
3) Click Stage for Import.
4) Note the "Internal Server Error" message. 
5) Click the back button and click on the Manage Staged Records link in the sidebar. 
6) Note the presence of the file you just uploaded.
Comment 1 HB-NEKLS 2015-10-19 19:38:34 UTC
A second file I just staged for import stalled at 24% job progress on the screen, but when I manually go to the Manage Staged Records link, my second batch is there.
Comment 2 Jonathan Druart 2015-10-20 07:35:28 UTC
For the same file, sometimes it works, sometimes you get "Internal server error", and sometimes the progress bar is stuck. Is that it?
Comment 3 Tomás Cohen Arazi (tcohen) 2015-10-20 12:34:58 UTC
Created attachment 43625 [details]
"Stage to import" fails

The upload step doesn't seem to fail. But the "Stage to import" step raises the error from the screenshot. I tried to upload both authority and bibliographic records.
Comment 4 Liz Rea 2015-10-20 20:39:26 UTC
This is what I was seeing too - though I never got a server error, it just hung.

Liz
Comment 5 HB-NEKLS 2015-10-20 21:15:07 UTC
I didn't get a server error just now when staging for import, but the page never goes to the MARC staging results page.

When you manually access the manage staged records page, the file you tried to stage for import is listed.
Comment 6 Liz Rea 2015-10-20 21:20:25 UTC
Yep, this is what I was seeing yesterday.
Comment 7 HB-NEKLS 2015-10-20 21:22:22 UTC
I also got another Internal Server Error for http://test-intra.plack.bywatersolutions.com/cgi-bin/koha/tools/manage-marc-import.pl page when trying to import the batch of staged records.
Comment 8 Jonathan Druart 2015-10-26 12:05:58 UTC
This seems to be caused by bug 14321

The errors in the logs are:

binmode() on closed filehandle _GEN_2 at /home/vagrant/kohaclone/C4/Templates.pm line 120.
seek() on closed filehandle _GEN_2 at /usr/share/perl5/CGI/Emulate/PSGI.pm line 32.
Can't seek stdout handle: Bad file descriptor at /usr/share/perl5/Plack/App/WrapCGI.pm line 87.
Comment 9 Jonathan Druart 2015-10-26 12:21:51 UTC
(In reply to Jonathan Druart from comment #8)
> This seems to be caused by bug 14321

Not sure actually.
Comment 10 Jonathan Druart 2015-10-26 16:28:18 UTC
Created attachment 44011 [details] [review]
[OBSOLETE?] Bug 15032: Make sure the filehandle is not close on background mode

With Plack, when a file is uploaded for import, the filehandle created on
that file in Koha::Upload is closed.
To be sure we are using an open one, we need to open it in the current
scope.

Test plan:
1) Go to Tools --> Stage MARC records for import.
2) Upload file and click Import.
3) Click Stage for Import.
You should not get any error, and the records should be staged
correctly.
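
A minimal sketch of the idea behind this patch, assuming a hypothetical open_staged_file() helper that receives the path Koha::Upload wrote the file to (the real patch touches the staging code directly; names here are illustrative only):

    use Modern::Perl;

    sub open_staged_file {
        my ($file_path) = @_;    # path the upload was written to (passed in by the caller)

        # Under Plack the handle created inside Koha::Upload may already be
        # closed, so open a fresh one in the current scope and use only it.
        open my $fh, '<', $file_path
            or die "Cannot open staged file $file_path: $!";
        binmode $fh;             # MARC (ISO 2709) data is binary

        return $fh;
    }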
Comment 11 Jonathan Druart 2015-10-26 17:27:44 UTC
The last patch fixes the popup with not the internal server error.
I only get the error with a kohadevbox install; using a dev install and my "old Plack config", everything works fine with this patch.

My "old Plack config" is:


INTRANET=1 INTRANETDIR=/home/koha/src plackup --reload --port 5001 $HOME/src/misc/plack/koha.psgi -E deployment


Apache config:
    <Proxy>
      Order deny,allow
      Allow from 127.0.0.1,localhost
    </Proxy>
    <location /cgi-bin/koha/>
      ProxyPass http://localhost:5001/cgi-bin/koha/
    </location>

% INTRANET=1 INTRANETDIR=/home/koha/src plackup --reload --port 5001 $HOME/src/misc/plack/koha.psgi -E deployment


service apache2 restart
Comment 12 Jonathan Druart 2015-10-26 17:28:08 UTC
> The last patch fixes the popup with not the internal server error.

s/with/but
Comment 13 Jonathan Druart 2015-10-27 11:48:53 UTC
I made it work on kohadevbox:ansible, using plackup, ports, and the koha.psgi from the source:


vagrant@kohadevbox:~$ sudo KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/home/vagrant/kohaclone INTRANET=1 INTRANETDIR=/home/vagrant/kohaclone plackup --port 5001 /home/vagrant/kohaclone/misc/plack/koha.psgi -E deployment                                                                                               
and
/etc/koha/apache-shared-intranet-plack.conf
    ProxyPass /cgi-bin/koha http://localhost:5001/cgi-bin/koha
    ProxyPassReverse /cgi-bin/koha http://localhost:5001/cgi-bin/koha

Otherwise, trying with socket:

vagrant@kohadevbox:~$ sudo KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/home/vagrant/kohaclone INTRANET=1 INTRANETDIR=/home/vagrant/kohaclone plackup --socket /var/run/koha/kohadev/plack.sock /home/vagrant/kohaclone/misc/plack/koha.psgi -E deployment
# using Koha intranet CGI from /home/vagrant/kohaclone
failed to listen to port 8080: Address already in use at /usr/share/perl5/HTTP/Server/PSGI.pm line 94.

--socket does not seem to work here (??)

Note that it works if I use Starman:
sudo KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/home/vagrant/kohaclone INTRANET=1 INTRANETDIR=/home/vagrant/kohaclone plackup -s Starman --socket /var/run/koha/kohadev/plack.sock /home/vagrant/kohaclone/misc/plack/koha.psgi -E deployment
# using Koha intranet CGI from /home/vagrant/kohaclone
2015/10/27-11:42:45 Starman::Server (type Net::Server::PreFork) starting! pid(11966)
Binding to UNIX socket file "/var/run/koha/kohadev/plack.sock"
Setting gid to "0 0 0"

But when I try and access the interface:
[Tue Oct 27 11:45:11.077045 2015] [proxy_http:error] [pid 12238] (70008)Partial results are valid but processing is incomplete: [client 192.168.50.1:47165] AH01110: error reading response, referer: http://kohadev-intra.box.vm:8080/
and it finishes with a 503 Service Unavailable
Comment 14 Jonathan Druart 2015-10-27 12:07:24 UTC
Starman with ports:
 sudo KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/home/vagrant/kohaclone INTRANET=1 INTRANETDIR=/home/vagrant/kohaclone /usr/bin/starman -M FindBin --max-requests 50 --workers 2 --user=kohadev-koha --group kohadev-koha --pid /var/run/koha/kohadev/plack.pid --port 5001 /home/vagrant/kohaclone/misc/plack/koha.psgi

Everything looks fine, but the import tool is still broken (no progress bar).
Comment 15 Jonathan Druart 2015-10-27 12:09:02 UTC
(In reply to Jonathan Druart from comment #8)
> This seems to be caused by bug 14321
> 
> The errors in the logs are:
> 
> binmode() on closed filehandle _GEN_2 at
> /home/vagrant/kohaclone/C4/Templates.pm line 120.
> seek() on closed filehandle _GEN_2 at /usr/share/perl5/CGI/Emulate/PSGI.pm
> line 32.
> Can't seek stdout handle: Bad file descriptor at
> /usr/share/perl5/Plack/App/WrapCGI.pm line 87.

Note that these errors always appear, even when the import and the progress bar work correctly.
Comment 16 Mark Tompsett 2015-11-18 23:51:24 UTC
(In reply to Jonathan Druart from comment #14)
> Everything looks fine, but the import tool is still broken (no progress bar).

Don't ask me to duplicate it, but I have had instances where the javascript didn't trigger.
Comment 17 Tomás Cohen Arazi (tcohen) 2015-11-20 03:31:08 UTC
Bug 15218 provides a workaround to this bug. Reading the scripts that fork, they look buggy and should probably be rewritten.
Comment 18 HB-NEKLS 2016-02-09 01:47:38 UTC
I'm testing on 3.22, and the original bug I posted last fall when testing Plack is gone -- I'm no longer getting the reported error message. But, as Jonathan reported earlier, the staged import progress bar is non-existent [sits at 0% and then jumps to 100%] when clicking on the Stage Import button on stage-marc-import.pl. The same behavior happens on manage-marc-import.pl when clicking on the "Import this batch into the catalog" button [progress bar sits at 0% and then suddenly moves to 100%].
Comment 19 Fridolin Somers 2016-08-18 07:49:29 UTC
Scripts run so much faster with Plack that we may be able to disable background mode when running under Plack.
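
A rough sketch of what that could look like, assuming the script keys off PLACK_ENV (which plackup/starman set) and using a hypothetical run_import_job() placeholder for the actual staging work:

    use Modern::Perl;

    sub run_import_job { }    # placeholder for the actual staging work

    # plackup/starman set PLACK_ENV; a plain CGI request does not.
    if ( defined $ENV{PLACK_ENV} ) {
        # Persistent server: the work is fast enough, run it inline and
        # render the results page directly.
        run_import_job();
    }
    else {
        # Classic CGI behaviour: fork, let the child do the work, and let
        # the parent return the progress-bar page immediately.
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ( $pid == 0 ) {
            run_import_job();
            exit 0;
        }
    }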
Comment 20 Jonathan Druart 2017-03-27 20:39:11 UTC
Created attachment 61649 [details] [review]
Bug 15032: Try to fix background jobs - batch_record_modification.pl

I do not really understand how this patch behaves; sometimes it works,
sometimes not.
It would be good to get other people testing and reading it to try and
understand what's happening. It may be a first step toward fixing our
background jobs.

The thing is that we cannot close STDOUT: Plack (CGI::Emulate::PSGI)
expects it to be open (search for 'seek').

If we close STDOUT and keep 'text/html' in the headers, it works almost
correctly, but we get the JSON content + the headers printed by the
child.
With the redirect we are getting the right page when the job is finished,
but sometimes we see the plain-text content (with the JSON headers) + a
302.

Any thoughts?
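
For illustration, a hedged sketch of the constraint described above: the forked child reopens its STDOUT onto /dev/null instead of closing it, so CGI::Emulate::PSGI can still seek the parent's handle. This is not the attached patch, just the shape of the idea:

    use Modern::Perl;

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ( $pid == 0 ) {
        # Child: do NOT close STDOUT -- CGI::Emulate::PSGI still wants to
        # seek the handle on the parent side. Reopen ours onto /dev/null so
        # nothing the job prints can leak into the HTTP response.
        open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT: $!";
        open STDERR, '>&', \*STDOUT   or die "Cannot redirect STDERR: $!";

        # ... long-running staging work goes here, reporting progress via the DB ...
        exit 0;
    }

    # Parent: answer right away; the progress bar polls a /svc endpoint.
    print qq({"job_id":$pid}\n);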
Comment 21 Tomás Cohen Arazi (tcohen) 2017-03-27 23:47:40 UTC
I thought about this a couple of times. There's a race between the parent process and the forked child over closing the filehandle for STDOUT. The best approach is to use some lib for creating child processes that takes care of this, or to look carefully at which process needs a handle for STDOUT and which doesn't.

The backgrounded job is supposed to communicate its status through the DB, and the progress bars talk to a separate /svc endpoint, so the backgrounded job doesn't need STDOUT. That's why it is being closed (I guess) but it looks wrong at first glance.
Comment 22 Tomás Cohen Arazi (tcohen) 2017-03-28 00:09:08 UTC
(In reply to Tomás Cohen Arazi from comment #21)
> The best approach is to use some lib for creating child processes that
> takes care of this [...]

http://search.cpan.org/~bzajac/Proc-Background-1.10/lib/Proc/Background.pm
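
A small sketch of how Proc::Background could be used here, assuming a hypothetical helper script path and arguments; the point is only that the heavy work runs in a completely separate process, so the Plack worker never has to close or share its own STDOUT:

    use Modern::Perl;
    use Proc::Background;

    my $proc = Proc::Background->new(
        '/usr/share/koha/bin/stage_marc_batch.pl',    # hypothetical path
        '--file', '/tmp/upload.mrc', '--job-id', '42',
    );

    if ( $proc->alive ) {
        # Return immediately; progress is read from the database as usual.
        print "Started background process ", $proc->pid, "\n";
    }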
Comment 23 Tomás Cohen Arazi (tcohen) 2017-09-21 15:37:48 UTC
I would like to mention we could be taking advantage of the following libs that are designed exactly for this kind of thing:


Long-lasting / resource-consuming jobs that need forking to avoid blocking the app:
http://search.cpan.org/~dbook/Mojolicious-Plugin-Subprocess-0.004/lib/Mojolicious/Plugin/Subprocess.pm

Job queue without feedback (send a notification, etc)
http://mojolicious.org/perldoc/Minion
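
For reference, a hedged Minion sketch in a Mojolicious::Lite app; the task name, payload, and Pg backend URL are assumptions, and a separate 'minion worker' process would have to pick the jobs up:

    use Mojolicious::Lite;

    plugin Minion => { Pg => 'postgresql://koha@/koha_jobs' };

    # Define the long-running task once, at startup.
    app->minion->add_task( stage_marc => sub {
        my ( $job, $file, $params ) = @_;
        # ... call the staging code here; the result is stored with the job ...
        $job->finish( { staged_file => $file } );
    } );

    # Enqueue from the route that receives the upload.
    post '/stage' => sub {
        my $c  = shift;
        my $id = $c->minion->enqueue( stage_marc => [ '/tmp/upload.mrc', {} ] );
        $c->render( json => { job_id => $id } );
    };

    app->start;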
Comment 24 Jonathan Druart 2018-03-29 18:40:51 UTC
*** Bug 20335 has been marked as a duplicate of this bug. ***
Comment 25 David Cook 2018-10-24 01:03:11 UTC
I figure one way or another we need to have a solid solution for background processing of tasks.

I'm running into a related issue with someone trying to do batch_record_modification.pl with a large number of records, but there are a million other use cases that require it.

We've been talking about this issue for years, and I'm wondering if this is something where plugins might be useful. 

Tomas suggested an approach to me at Kohacon where Koha has a default built-in functionality, but then lets you use plugins to override that built-in functionality.

Maybe we could make the hooks a "developer feature"? So the hook is only run in "developer mode", but that lets many developers write their own plugins to try and find something that works? And maybe the best plugin gets adopted by Koha to replace the default functionality, which I'm sure we all agree is not good enough right now.

I wrote my own job server using POE::Component::JobQueue for #10662 and it works pretty well, but I'm sure it could be better. But I'd be happy to contribute a plugin for people to try out. I can't guarantee it would be the best solution, but it would be "a" solution. (With 10662, the job server runs as a standalone daemon process, and Koha, whether CGI or PSGI, connects to the daemon using a UNIX socket and uses JSON messages for talking about jobs.) There are lots of other options out there. I think Minion might be a good one since it's within the Mojolicious family. I was thinking of using it, but I think I didn't as it wasn't interactive enough for my purposes, but... I figure we should try out some things rather than just be in discussion forever?
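
A sketch of that client-side wiring, assuming a hypothetical socket path and a line-delimited JSON request/reply protocol (both are illustrative, not the bug 10662 code):

    use Modern::Perl;
    use IO::Socket::UNIX;
    use Socket qw(SOCK_STREAM);
    use JSON qw(encode_json decode_json);

    my $sock = IO::Socket::UNIX->new(
        Type => SOCK_STREAM,
        Peer => '/var/run/koha/kohadev/jobs.sock',
    ) or die "Cannot connect to the job daemon: $!";

    # One JSON document per line, request/reply style.
    print {$sock} encode_json( { action => 'enqueue', task => 'import_oai', ids => [ 1, 2, 3 ] } ), "\n";

    my $reply = decode_json( scalar <$sock> );
    print "Daemon answered: $reply->{status}\n";
    close $sock;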
Comment 26 Jonathan Druart 2018-10-24 12:06:08 UTC
I do not think your Koha::Daemon will work for the background jobs.

To me, the best way to fix this would be to provide a Koha daemon, instead.
But it would watch a DB table, which would allow us to build a view on top of it to manage the different jobs (and history).

We could also have different daemons for different needs (a param of the daemon matching a column in the DB table).

The question behind that is security: we will have to list the different operations the daemon is allowed to process (and so we will lose flexibility).

I have spent a *lot* of time trying to fix this issue by forking, etc., and my conclusion is that it's not feasible (see the discussions and attempts on related bug reports).

See also bug 1993.
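
A minimal sketch of such a daemon watching a DB table; the table name, columns, DSN, and polling interval are all assumptions made for illustration:

    use Modern::Perl;
    use DBI;
    use JSON qw(decode_json);

    my $dbh = DBI->connect( 'dbi:mysql:database=koha', 'koha', 'secret',
        { RaiseError => 1, AutoCommit => 1 } );

    while (1) {
        my $jobs = $dbh->selectall_arrayref(
            q{SELECT id, type, data FROM background_jobs WHERE status = 'new'},
            { Slice => {} }
        );
        for my $job (@$jobs) {
            $dbh->do( q{UPDATE background_jobs SET status = 'started' WHERE id = ?},
                undef, $job->{id} );
            my $args = decode_json( $job->{data} );
            # ... dispatch on $job->{type} and do the real work here ...
            $dbh->do( q{UPDATE background_jobs SET status = 'finished' WHERE id = ?},
                undef, $job->{id} );
        }
        sleep 5;    # crude polling interval
    }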
Comment 27 Jonathan Druart 2018-10-24 12:08:34 UTC
(In reply to David Cook from comment #25)
> Maybe we could make the hooks a "developer feature"? So the hook is only run
> in "developer mode", but that lets many developers write their own plugins
> to try and find something that works? And maybe the best plugin gets adopted
> by Koha to replace the default functionality, which I'm sure we all agree is
> not good enough right now.

My idea would be "something that works"; I already have tons of things that do not work. I am not sure letting other developers retry what did not work is a good idea :)
Let's discuss and agree on an idea, then I can implement it.
Comment 28 Fridolin Somers 2018-10-24 12:14:51 UTC
As Nginx+Plack users, we at Biblibre are interested in any solution.
Comment 29 David Cook 2018-10-24 23:28:05 UTC
(In reply to Jonathan Druart from comment #27)
> My idea would be "something that works", I have already tons of things that
> do not work. I am not sure letting other developer retrying again what did
> not work is a good idea :)
> Let's discuss and agree on an idea, then I can implement it.

That works for me too! So long as we have some sort of timeline/deadline I think, so that we don't let the discussion go on forever?
Comment 30 David Cook 2018-10-24 23:36:16 UTC
(In reply to Jonathan Druart from comment #26)
> I do not think your Koha::Daemon will work for the background jobs.
> 
> To me, the best way to fix this would be to provide a Koha daemon, instead.
> But it would watch a DB table, which would allow us to build a view on top
> of it to manage the different jobs (and history).
> 
> We could also have different daemons for different needs (a param of the
> daemon matching a column in the DB table).
> 
> The question behind that is security, we will have to list the different
> operations the daemon is allowed to process (and so we will loose
> flexibility).
> 
> I have spent a *lot* of time trying to fix this issue by forking, etc. And
> my conclusion is: it's not feasible (see discussions and tries on related
> bug reports)
> 
> See also bug 1993.

My Koha::Daemon is just a module for making a daemon process; it's similar to Proc::Daemon on CPAN.

But yeah I could see my Koha::OAI::Harvester::Listener and Koha::OAI::Harvester not working for all background jobs. They work well in my case but I'd be happy to actually replace (elements of) them with a more general Koha solution, if it worked for my use case too.

Yeah I think that different daemons for different needs makes sense too. Like in the case of #10662, I originally wrote a generic job server, but then realized it might not suit everyone, so I re-wrote it just for that specific task of harvesting OAI-PMH records. 

Yeah, I don't think forking makes sense in a Plack/persistent environment. Actually, I don't think it makes sense in the CGI environment either, but it "worked". Setting up separate daemons is easy. I don't know why we don't just do that. The issue then I suppose is security, as you've said, and deciding on communication protocols.
Comment 31 David Cook 2018-10-24 23:46:39 UTC
Actually, when working on #10662, I thought about creating a standalone "import daemon", as importing records into Koha is the hardest part of OAI-PMH harvesting. (I already have an extremely fast download worker, but the bottleneck is importing into Koha *sadface*.)

For #15032, "Stage MARC records for import" could simply be a script that allows a user to upload MARC records from their browser client to the Koha server, and to pass on some configuration options to accompany those records. We could store all of that on the file system, in the database, or wherever makes the most sense. If we wanted to have fun, we could leave them in memory and let an import daemon suck them out of memory instead of off a disk, which would require way more I/O ops.

Once the records are uploaded to the Koha server and stored <wherever> with the job details, the PSGI/CGI script alerts the "import daemon" that it would like to start the job. 

Now the "import daemon" should probably have a master listener process which does very little work, which makes it highly responsive to queries from the Koha web client. That master listener process can talk to worker processes that do the actual work. In terms of imports, it probably would be good if the master process can query the workers for progress updates (and probably also allow cancellations of imports - which might be useful for very large imports).

(Oh another thing... this import daemon wouldn't just be usable from a web interface. It could also be used by command line tools. Thus we have 1 point for getting records into Koha, and that daemon can act as gatekeeper.)

We default the "import daemon" workers to maybe 1 worker, but allow it to be configurable, so that large Koha instances can commit more resources to it. 

(If we were smart, we'd probably have the import daemon use TCP for communications and allow workers to be distributed. Although to do that we'd need to package Koha libraries for working with records and distribute those too. Plus we'd need database configuration information distributed. I mean we could have an API to handle all the actual imports, but that would be slow...)

Just some ideas ^_^.
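
A compressed sketch of the master-listener shape described above; the socket path, the JSON protocol, and the run_import() placeholder are assumptions, and a real implementation would need supervision, locking, and error handling:

    use Modern::Perl;
    use IO::Socket::UNIX;
    use Socket qw(SOCK_STREAM);
    use JSON qw(encode_json decode_json);

    sub run_import { my ($batch_id) = @_; }    # placeholder for the real work

    $SIG{CHLD} = 'IGNORE';                     # let finished workers be reaped automatically

    my $listener = IO::Socket::UNIX->new(
        Type   => SOCK_STREAM,
        Local  => '/var/run/koha/kohadev/import.sock',
        Listen => 5,
    ) or die "Cannot listen: $!";

    while ( my $client = $listener->accept ) {
        my $request = decode_json( scalar <$client> );    # one JSON document per line

        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ( $pid == 0 ) {                                # worker: heavy lifting happens here
            run_import( $request->{batch_id} );
            exit 0;
        }

        # Master stays light and responsive: acknowledge and keep accepting.
        print {$client} encode_json( { status => 'started', worker_pid => $pid } ), "\n";
        close $client;
    }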
Comment 32 David Cook 2018-10-24 23:48:15 UTC
We could look at something like http://gearman.org/, http://www.celeryproject.org/, or one of the other many projects... but they don't offer as much nuanced control as we might like.
Comment 33 David Cook 2018-10-24 23:57:15 UTC
(In reply to Jonathan Druart from comment #26)
> To me, the best way to fix this would be to provide a Koha daemon, instead.
> But it would watch a DB table [...]

I just realized that I misread your comment, Jonathan...

Take a look at line 106 of "Harvester.pm" at https://bugs.koha-community.org/bugzilla3/page.cgi?id=splinter.html&bug=10662&attachment=79039. 

Using POE::Component::JobQueue, I check a DB table called "oai_harvester_import_queue" for "new" import jobs. 

You could do the same thing. 

In my case, the job in "oai_harvester_import_queue" actually has a field containing a JSON array of record identifiers, and the import task fetches those records from the file system and then works on them as a batch. Going 1 by 1 for each record would be way too slow (both in terms of retrieving tasks and importing records). 

But yeah the web interface could upload the records, store their location, put that in a job, and then store that data in the database. The import daemon would poll the database (which isn't super efficient but efficient enough for Koha's purposes I imagine), and then work on batches as it could. 

--

(One thing to keep in mind here is import matching rules that use Zebra... if you have high performance importing, Zebra won't be able to keep up, and it means that you could wind up with duplicate records if someone tries to import the same record too quickly... )
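
A hedged sketch of that queue-row shape, enqueuing one batch as a JSON array of record identifiers and claiming it on the worker side; the column names are assumptions for illustration:

    use Modern::Perl;
    use DBI;
    use JSON qw(encode_json decode_json);

    my $dbh = DBI->connect( 'dbi:mysql:database=koha', 'koha', 'secret', { RaiseError => 1 } );

    # Enqueue: one row per batch, the payload being a JSON array of record
    # identifiers, so the worker never fetches jobs one record at a time.
    $dbh->do(
        q{INSERT INTO oai_harvester_import_queue (status, records) VALUES ('new', ?)},
        undef, encode_json( [ 1001, 1002, 1003 ] )
    );

    # Worker side: claim the oldest new batch and walk its identifiers.
    my ( $id, $payload ) = $dbh->selectrow_array(
        q{SELECT id, records FROM oai_harvester_import_queue WHERE status = 'new' ORDER BY id LIMIT 1}
    );
    if ($id) {
        for my $record_id ( @{ decode_json($payload) } ) {
            # ... load the record from the file system and import it ...
        }
        $dbh->do( q{UPDATE oai_harvester_import_queue SET status = 'done' WHERE id = ?}, undef, $id );
    }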
Comment 34 Tomás Cohen Arazi (tcohen) 2018-10-29 13:22:04 UTC
(In reply to David Cook from comment #33)
> But yeah the web interface could upload the records, store their location,
> put that in a job, and then store that data in the database. The import
> daemon would poll the database [...]

This is worth discussing on koha-devel, at an IRC meeting, or in Marseille, but I think we should have a job queue along the lines of what David proposes. And I would maybe add the use of ZeroMQ to notify other services.
Comment 35 David Cook 2018-10-30 01:24:30 UTC
(In reply to Tomás Cohen Arazi from comment #34)
> This is worth discussing on koha-devel, at an IRC meeting, or in Marseille,
> but I think we should have a job queue along the lines of what David proposes.
> And I would maybe add the use of ZeroMQ to notify other services.

I agree about opening up the discussion in another forum like koha-devel, IRC, or Marseille. 

I'm intrigued by the use of ZeroMQ. How do you see that working, Tomás? When I first started thinking about background processing, I thought using a message queue would be a good idea. Since then, I've used an embedded ActiveMQ with the Fedora repository a bit, but that publishes notifications about record changes. Celery uses a message queue for sending out tasks and then it can use a backend like Redis for storing task results. 

I think if we use a database for a job queue, that would replace the message queue? Or do you mean using a message queue for notifying other services, like an indexer, that there are imported records to index?
Comment 36 Tomás Cohen Arazi (tcohen) 2018-10-30 12:40:42 UTC
(In reply to David Cook from comment #35)
> I think if we use a database for a job queue, that would replace the message
> queue? Or do you mean using a message queue for notifying other services,
> like an indexer, that there are imported records to index?

I need to implement a way to hook record/item updates so changes are pushed to an external service, ideally through the use of plugins, so a more generic job queue was the first thing to consider. Then, if we replace (say) rebuild_zebra.pl -d with this, we should add a way to inject new 'tasks' derived from these events (so a record update should trigger a Zebra/ES indexing step, and also look for things (plugins) that would require actions). It felt simpler if it just sent broadcast messages through a socket to any service listening there.
I like ZeroMQ because it doesn't need a server; you just use your own service.

So, one event => multiple actions.
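
A hedged sketch of that broadcast idea using the ZMQ::FFI binding; the ipc endpoint and message shape are assumptions:

    use Modern::Perl;
    use JSON qw(encode_json);
    use ZMQ::FFI;
    use ZMQ::FFI::Constants qw(ZMQ_PUB);

    # Brokerless publisher: the service emitting the events owns the socket.
    my $ctx = ZMQ::FFI->new();
    my $pub = $ctx->socket(ZMQ_PUB);
    $pub->bind('ipc:///var/run/koha/kohadev/events.ipc');

    # One event, broadcast to every listener (indexer, plugins, external
    # services); each subscriber decides what action the event triggers.
    $pub->send( encode_json( { event => 'biblio_updated', biblionumber => 42 } ) );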
Comment 37 David Cook 2018-10-30 23:37:35 UTC
(In reply to Tomás Cohen Arazi from comment #36)
> I like ZeroMQ because it doesn't need a server; you just use your own service.
> 
> So, one event => multiple actions.

I've been wanting a hook for record/item creation/update/deletion as well, so it sounds like our desire is similar there! I'm sure many people have a variety of external services which would like event-driven updates from Koha. 

I read about ZeroMQ a while ago, but I didn't realize it doesn't need a dedicated broker (ie server). That's interesting! It would be fun to play with that.

In the context of #15032, would you still want to use a message queue? I could see Koha sending a message to an importer service (in a pub/sub pattern), but I'm not sure how Koha would know when the import task is finished without having a result store. I suppose we could have a result store though. We could send a message to the importer service, and then asynchronously poll the result store to see when the task is done? I suppose that has disadvantages in that if the task fails, you're polling forever. And if you're polling, you also lose any sort of progress meter (although I always assumed we faked the progress with our current background implementation anyway). I guess those are just implementation details though.

(With #10662, the Koha web UI connects to the OAI-PMH harvester demon through a Unix socket. It has bidirectional communication though as it has a request-reply relationship.)
Comment 38 Daniel Gaghan 2019-11-13 21:07:21 UTC
We at PCCLD are also seeing this bug. Is it related to the MARC file size?
Comment 39 John Sterbenz 2020-02-18 21:17:30 UTC
This is a new issue to me (as of yesterday) on the 18.11 series on 18.04 LTS.  This process had worked for years prior (all the way back to 2016 / 3.22 on 14.04)--now every load I try fails.

I did upgrade my production server to 18.04 in preparation for installation of the 19.05 series--that is the only difference (in terms of anything I've directly initiated) between success and failure, yet the comments here indicate this has been an issue for nearly 5 years and reported (at least here) as recently as a few months ago.  Prod was already running the 18.11 series--it was only the OS I upgraded.

Thanks to VM snapshots, I took my dev server back to 17.11 on 16.04 and everything worked again, but that doesn't do me any good for production.

This is the only way we load records--and do so to the tune of 260K-300K records a month (in files of 20K records).  Most of these records are overlaid--perhaps only 1-2% are new records.  We are at a standstill until this is resolved or I identify some sort of workaround.

I have not yet tried anything like a smaller file size since this was just discovered yesterday by my staff in the course of regular work.  I note the mention of Plack (without having read the entire comment thread yet to see if this is relevant/related or not) and have not turned that off for this instance to see if that will matter.
Comment 40 David Cook 2020-02-18 23:24:56 UTC
(In reply to John Sterbenz from comment #39)
> This is a new issue to me (as of yesterday) on the 18.11 series on 18.04
> LTS.  This process had worked for years prior [...] We are at a standstill
> until this is resolved or I identify some sort of workaround.

Based on what you've said, I suspect that your issue might be unrelated to this discussion. This issue is about running scripts like "stage-marc-import.pl" under Plack, but an out-of-the-box workaround for that has been provided for many years.

I'd suggest that you look for support for your particular issue via https://koha-community.org/support/. For example, the mailing lists (https://koha-community.org/support/koha-mailing-lists/) and/or the #koha IRC channel. There we can run through a series of specific questions to help you.
Comment 41 Mason James 2020-02-19 07:07:47 UTC
(In reply to John Sterbenz from comment #39)
> This is a new issue to me (as of yesterday) on the 18.11 series on 18.04
> LTS.  This process had worked for years prior (all the way back to 2016 /
> 3.22 on 14.04)--now every load I try fails.
> 

hi John
you've probably hit the issue below...

pay attention to your various 'tmp' dirs, and their owner/perm values

 https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=19676
 https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20428
Comment 42 Katrin Fischer 2020-09-13 10:50:35 UTC
*** Bug 18719 has been marked as a duplicate of this bug. ***
Comment 43 David Cook 2020-10-29 05:46:42 UTC
See Bug 26854 for a potential fix.

While I haven't tried with Plack, we haven't been forking stage-marc-import.pl correctly in a CGI environment either.
Comment 44 David Cook 2020-10-29 23:36:52 UTC
(In reply to David Cook from comment #43)
> See Bug 26854 for a potential fix.
> 
> While I haven't tried with Plack, we haven't been forking
> stage-marc-import.pl correctly in a CGI environment either.

It's not a fix for Plack, but it is a fix for CGI...
Comment 45 David Cook 2020-10-30 01:32:38 UTC
After reviewing Bug 20342, the method we've used for background jobs in CGI is not going to work with a persistent process. 

Now that Bug 22417 has been pushed to master, I think the way forward is to write modules to handle background tasks that we currently manage by forking in CGI. 

I am interested in doing this, but it is likely to happen in my own personal time, which means it's going to be a very slow process, especially as I have a number of Koha projects I'd like to complete.
Comment 46 Katrin Fischer 2023-08-27 14:00:26 UTC
I wonder if we could close this now that the staged MARC import has been moved into a background job (and then close the omnibus bug 15019 as well)?
Comment 47 David Cook 2023-08-27 23:33:53 UTC
(In reply to Katrin Fischer from comment #46)
> I wonder if we could close this now that the staged MARC import has been
> moved into a background job (and then close the omnibus bug 15019 as well)?

I think so. We still have other scripts that should be run in the background, but I don't think they fork, so yeah.
Comment 48 Marcel de Rooy 2023-08-28 09:04:44 UTC
Yes, I think so too.
We still have a few forks but they are not in the scope of this report:

misc/search_tools/rebuild_elasticsearch.pl:        my $pid = fork();
Koha/MetaSearcher.pm:    if ( ( $pid = fork ) ) {