Bug 30654 - Even with RabbitMQ enabled, we should poll the database for jobs at worker startup
Summary: Even with RabbitMQ enabled, we should poll the database for jobs at worker startup
Status: In Discussion
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: Main
Hardware: All
OS: All
Importance: P5 - low major
Assignee: Martin Renvoize
QA Contact: Testopia
URL:
Keywords:
Depends on: 30172
Blocks: 35092
Reported: 2022-04-29 13:40 UTC by Martin Renvoize
Modified: 2023-10-18 14:49 UTC (History)
CC List: 10 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 30654: Processing outstanding jobs on startup (1.74 KB, patch)
2022-04-29 13:54 UTC, Martin Renvoize

Description Martin Renvoize 2022-04-29 13:40:35 UTC
Our worker currently starts up and immediately tries to listen for jobs being passed via STOMP. However, if RabbitMQ wasn't running when the tasks were enqueued, then the worker will never know about them.

We should work through the outstanding queue before listening for new jobs.
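
To make the idea concrete, here is a minimal sketch (not the attached patch) of draining the database queue before subscribing to the broker. Koha::BackgroundJobs->search and the per-job process() call are assumptions based on this discussion rather than confirmed worker code:

  use Modern::Perl;
  use Koha::BackgroundJobs;

  sub process_outstanding {
      my $jobs = Koha::BackgroundJobs->search( { status => 'new' } );
      while ( my $job = $jobs->next ) {
          # hypothetical per-job entry point; the real worker decodes the
          # job's data and dispatches on its type
          $job->process;
      }
  }

  # startup order: outstanding DB jobs first, then listen for new messages
  process_outstanding();
  # ...then connect to RabbitMQ and subscribe, as the worker already does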
Comment 1 Martin Renvoize 2022-04-29 13:54:24 UTC
Created attachment 134388 [details] [review]
Bug 30654: Processing outstanding jobs on startup

This patch ensures jobs queued up in the database prior to worker
startup are processed regardless of whether you are using RabbitMQ or
database polling.
Comment 2 David Nind 2022-05-06 21:41:25 UTC
Hi Martin.

I'm not sure how to test this, could you add a test plan?

Thanks!

David
Comment 3 David Cook 2022-05-09 01:06:30 UTC
I don't know about this one. The database fallback already doesn't scale, and this change would mean you could only ever run a maximum of 1 background_jobs_worker.pl per queue.

For this particular use case, wouldn't it be better to enqueue any tasks that weren't enqueued due to RabbitMQ being offline? 

(To avoid a race condition, you could either have 1 process responsible for that activity, or you could use database locking I suppose. I'd have to double-check the Koha::BackgroundJob code.)
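
For what the locking idea could look like, a hedged sketch where a job is claimed with a conditional UPDATE, so only one process can move it from 'new' to 'started'; the started_on column follows my reading of the schema, and process_job() is a hypothetical stand-in for the worker's dispatch:

  use Modern::Perl;
  use C4::Context;

  my $dbh    = C4::Context->dbh;
  my $job_id = $ARGV[0];

  # claim the job atomically: only one process can flip it from 'new' to 'started'
  my $claimed = $dbh->do(
      q{UPDATE background_jobs
           SET status = 'started', started_on = NOW()
         WHERE id = ? AND status = 'new'},
      undef, $job_id
  );

  if ( $claimed && $claimed > 0 ) {
      process_job($job_id);    # hypothetical stand-in for the worker's dispatch
  }
  # a process whose UPDATE matched 0 rows simply skips the job

The nice property is that the claim and the status change are a single statement, so there is no window in which two processes can both think they own the job.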
Comment 4 David Cook 2022-05-09 01:10:06 UTC
Regardless, this code introduces a race condition. If background_jobs_worker.pl tries to run process_outstanding at startup after Koha::BackgroundJob runs $self->set(<things>)->store; but before $conn->send_with_receipt, then the same job would be run (at least) 2 times.

It'd be an unlikely scenario, but technically it could happen.
Comment 5 Tomás Cohen Arazi 2022-05-09 02:00:00 UTC
We need a PID column.
Comment 6 David Cook 2022-05-09 02:54:58 UTC
(In reply to Tomás Cohen Arazi from comment #5)
> We need a PID column.

To what end? 

I really wonder about our overall strategy here. It seems like we can't commit to a message queue based solution, and it's causing us to come up with a strange bespoke solution. If that's the case, maybe we tear out RabbitMQ and switch to using Minion with a MySQL plugin instead?

At the moment, we're overloading background_jobs_worker.pl with logic that it shouldn't have as a "worker", and the database fallback prevents us from taking full advantage of RabbitMQ.
Comment 7 Andrii Nugged 2022-05-27 11:59:11 UTC
> However, if RabbitMQ wasn't running when the tasks were enqueued, then the worker will never know about them.

Martin, you've hit exactly what I ran into - I had jobs that were lost and never processed (because of another issue: race conditions) and they remained in the DB as "New", so I made a hotfix to run the worker in "non-daemon mode": https://github.com/NatLibFi/Koha/commit/b87313773e4aefa4b89e4b223f1e2132fd196183
which can also be run from cron, for example
(though of course it would be better to get rid of the race condition issues entirely :), but I think that's not related to this ticket)
Comment 8 David Cook 2022-06-28 01:13:25 UTC
(In reply to Martin Renvoize from comment #0)
> Our worker currently starts up and immediately tries to listen for jobs
> being passed via STOMP. However, if RabbitMQ wasn't running when the tasks
> were enqueued, then the worker will never know about them.
> 
> We should work through the outstanding queue before listening for new jobs.

I was thinking about this again as I'm building a RabbitMQ based job queue for a different Perl project.

(Part of me thought maybe I should just use Minion, but the Perl and DB dependencies aren't available and I'm already using RabbitMQ for other asynchronous work on that system.)

It seems to me that the order of operations should be the following:

1. Connect to RabbitMQ
2. Insert job in DB
3. Commit job in DB
4. Send message to a durable RabbitMQ queue
5. Output result of send message to user
5a. If successful, say job has been created
5b. If unsuccessful, say there was a problem creating job and to contact an administrator
Comment 9 David Cook 2022-06-28 01:37:30 UTC
After comparing the "background_jobs" table and the "minion_jobs" table, it seems like we've got a fairly similar schema.

https://metacpan.org/release/SRI/Minion-10.25/source/lib/Minion/Backend/resources/migrations/pg.sql

I think the additions it has are:
- attempts (number of retries allowed)
- delayed (for scheduling jobs in the future, although the API is based on seconds from now)
- notes (arbitrary metadata - progress is a note key that uses % values)
- parents
- priority 
- retried (time it was last retried)
- retries (number of retries attempted)
- expires
- lax (for managing parent/child dependencies)

One difference is that we use "data" whereas "minion" has two different columns for task arguments and task result, which makes sense to me...
- args
- result

Another difference is that we track the borrowernumber. I suppose that could arguably be part of "args", but having it in its own column where it could be indexed makes jobs easier to manage on a per-user basis.
Comment 10 David Cook 2022-06-28 01:38:45 UTC
(In reply to David Cook from comment #8)
> It seems to me that the order of operations should be the following:
> 
> 1. Connect to RabbitMQ
> 2. Insert job in DB
> 3. Commit job in DB
> 4. Send message to a durable RabbitMQ queue
> 5. Output result of send message to user
> 5a. If successful, say job has been created
> 5b. If unsuccessful, say there was a problem creating job and to contact an
> administrator

Actually that should be:

1. Connect to RabbitMQ
2. Insert job in DB
3. Commit job in DB
4. Send message to a durable RabbitMQ queue
5. Output result of send message to user
5a. If successful, say job has been created.
5b. If unsuccessful, say there was a problem creating job and to contact an administrator, and update/commit job in DB as failed.
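
Sketched in (hedged) code, with insert_and_commit_job() and mark_job_failed() as invented placeholders and the queue name purely illustrative, that ordering would look roughly like this:

  use Modern::Perl;
  use JSON qw(encode_json);
  use Net::Stomp;
  use Try::Tiny;

  my $job_data = { type => 'example_task' };                      # illustrative payload

  my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
  $stomp->connect( { login => 'guest', passcode => 'guest' } );   # 1. connect first

  my $job_id = insert_and_commit_job($job_data);                  # 2 + 3. hypothetical helper

  my $sent = try {
      $stomp->send_with_receipt( {                                # 4. durable queue, persistent message
          destination => '/queue/koha-default',
          body        => encode_json( { job_id => $job_id } ),
          persistent  => 'true',
      } );
  } catch { 0 };

  if ($sent) {
      say "Job $job_id has been created";                         # 5a
  } else {
      mark_job_failed($job_id);                                   # 5b. hypothetical helper
      say "There was a problem creating job $job_id; please contact an administrator";
  }

Connecting before touching the database is the point of step 1: if the broker is down we can refuse the request up front instead of leaving an orphaned row behind.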
Comment 11 David Cook 2022-06-28 01:48:34 UTC
Now this is a tangent, but...

I notice in "background_jobs" we have "data" as "longtext". 

According to https://mariadb.com/kb/en/json-data-type/, the JSON datatype is an alias for LONGTEXT (from MariaDB 10.2.7)

It looks like JSON_VALUE was added in MariaDB 10.2.3, and using a virtual column we could index JSON data as well which is interesting (https://mariadb.com/resources/blog/using-json-in-mariadb/)
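
If we ever wanted to experiment with that, a hedged sketch might look like the following; the data_record_id column, the index name, and the '$.record_id' key are all invented for illustration and are not part of Koha's schema:

  use Modern::Perl;
  use C4::Context;

  my $dbh = C4::Context->dbh;

  # add a virtual column extracting one JSON key from background_jobs.data,
  # plus an index on it (column/index/key names are invented for this example)
  $dbh->do(q{
      ALTER TABLE background_jobs
        ADD COLUMN data_record_id INT
            AS (JSON_VALUE(data, '$.record_id')) VIRTUAL,
        ADD INDEX idx_data_record_id (data_record_id)
  });

  # ...after which lookups on that key can use the index
  my $jobs = $dbh->selectall_arrayref(
      'SELECT id, type, status FROM background_jobs WHERE data_record_id = ?',
      { Slice => {} }, 42
  );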
Comment 12 Tomás Cohen Arazi 2022-07-26 14:37:01 UTC
Let's kill the rabbit ;-)
Comment 13 Tomás Cohen Arazi 2022-07-26 14:53:41 UTC
(In reply to David Cook from comment #10)
> 4. Send message to a durable RabbitMQ queue

What is the 'durable RabbitMQ queue' in this context?

In order for using RabbitMQ to make sense, I feel like we need:

- A task queue manager (we have the table; we're just missing a process that polls it and sends 'the message' - a minimal sketch of such a process follows this list)
- A way to dispatch the message to notify workers about it (Yay, RabbitMQ)
- A worker that reads the message and acts accordingly
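
To make the first point concrete, a very rough sketch of such a process, reusing the existing background_jobs table via Koha::BackgroundJobs; the destination name and poll interval are arbitrary, and a real manager would also need to track what it has already dispatched:

  use Modern::Perl;
  use JSON qw(encode_json);
  use Net::Stomp;
  use Koha::BackgroundJobs;

  my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
  $stomp->connect( { login => 'guest', passcode => 'guest' } );

  while (1) {
      my $new_jobs = Koha::BackgroundJobs->search( { status => 'new' } );
      while ( my $job = $new_jobs->next ) {
          $stomp->send_with_receipt( {
              destination => '/queue/koha-default',
              body        => encode_json( { job_id => $job->id } ),
              persistent  => 'true',
          } );
          # NOTE: without some "already dispatched" bookkeeping this would
          # re-send the same jobs on every pass - that bookkeeping is part of
          # what this bug is debating
      }
      sleep 10;    # poll interval, arbitrary
  }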

Questions:

- How does a worker report back if it failed? Probably directly to the DB? This is a case for a task queue manager, which would read the status and decide when it is time to retry.
- If we had 2 workers waiting for 'index' jobs through the MQ, how do we pick which process takes the job? How does the system know who is processing it? What if the process dies and we need to assign a new worker? That's where I was going with the PID thing. Because I was thinking about things locally (wrong), but the question still stands: who is running what?

I personally don't see a case in which RabbitMQ is being useful right now. At least not while we still haven't implemented a way for tasks whose MQ notification failed to be retried, scheduled, etc.
Comment 14 David Cook 2022-07-28 01:26:23 UTC
(In reply to Tomás Cohen Arazi from comment #12)
> Let's kill the rabbit ;-)

I won't oppose it. Minion might make more sense after all.
Comment 15 David Cook 2022-07-28 01:38:29 UTC
(In reply to David Cook from comment #14)
> (In reply to Tomás Cohen Arazi from comment #12)
> > Let's kill the rabbit ;-)
> 
> I won't oppose it. Minion might make more sense after all.

That said, I am a little suspicious that Sebastian Riedel is going to abandon Mojolicious/Minion (ie Perl) in favour of mojo.js/minion.js (ie Node.js/Typescript). 

He keeps saying that he's not going to abandon them but... there's no guarantees in life. 

Of course, that's why I keep thinking having a robust model is important, so that porting to a different controller framework isn't prohibitive.
Comment 16 David Cook 2022-07-28 02:05:35 UTC
(In reply to Tomás Cohen Arazi from comment #13)
> (In reply to David Cook from comment #10)
> > 4. Send message to a durable RabbitMQ queue
> 
> What is the 'durable RabbitMQ queue' in this context?

"Durable (the queue will survive a broker restart)" (https://www.rabbitmq.com/queues.html)

> In order for using RabbitMQ to make sense, I feel like we need:
> 
> - A task queue manager (we have the table; we're just missing a process that
> polls it and sends 'the message')
> - A way to dispatch the message to notify workers about it (Yay, RabbitMQ)
> - A worker that reads the message and acts accordingly
> 
> Questions:
> 
> - How does a worker report back if it failed? Probably directly to the DB?
> This is a case for a task queue manager, which would read the status and
> decide when it is time to retry.
> - If we had 2 workers waiting for 'index' jobs through the MQ, how do we pick
> which process takes the job? How does the system know who is processing it?
> What if the process dies and we need to assign a new worker? That's where I
> was going with the PID thing. Because I was thinking about things locally
> (wrong), but the question still stands: who is running what?
 
- If a worker fails, it needs to record that in the "result store" (which in this case is the database). 
- Why do you need to know which worker processes which message? It shouldn't matter. If the worker dies, the MQ broker would detect the broken connection, and give that message to the next worker that connects asking for a message. 
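
In Net::Stomp terms that redelivery comes from client acknowledgement; continuing the kind of subscription sketched above, with process_job() standing in for the worker's real dispatch:

  # the subscription above used ack => 'client', so a message only leaves
  # the queue once we explicitly ACK it
  my $frame = $stomp->receive_frame;              # blocks until a job arrives
  my $ok    = eval { process_job( $frame->body ); 1 };
  $stomp->ack( { frame => $frame } ) if $ok;      # success: message is removed
  # if the worker dies before ack(), RabbitMQ notices the dropped connection
  # and redelivers the same message to the next available worker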

> I personally don't see a case in which RabbitMQ is being useful right now.
> At least not while we still haven't implemented a way for tasks whose MQ
> notification failed to be retried, scheduled, etc.

It's not the job of the message broker to handle scheduling. There needs to be a separate scheduler. That can be a cron job or a daemon - it's whatever we want it to be. (The advantage of the minion daemon is that it already exists and we wouldn't need to write it.)

Are you having problems with retries? RabbitMQ should automatically retry. Or are you referring to this scenario: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=30654#c0

RabbitMQ is useful right now because it provides a standard interface for passing messages between separate processes. It's the only reason we're able to process asynchronous jobs in a reasonable way. (Forking CGI processes for the old-school background jobs was a major barrier to using Plack 100%.)

(In reply to Martin Renvoize from comment #0)
> Our worker currently starts up and immediately tries to listen for jobs
> being passed via STOMP. However, if RabbitMQ wasn't running when the tasks
> were enqueued, then the worker will never know about them.

Why wouldn't RabbitMQ be running? I'd argue that we shouldn't be saving jobs in the database if the message broker (which is a component of the overall Koha system) is unavailable. 

To me, it sounds like a Koha design problem rather than a RabbitMQ problem. If the database weren't available, we wouldn't try saving a change to the file system on our own. We'd throw an exception saying that we can't save the record. 

I'm in the process of implementing a background job system using RabbitMQ on another Perl app, and that's the process I'll be following. First I'll check my RabbitMQ connection; if it's good, I'll insert the job into the DB and commit it (which is necessary to avoid a race condition), then send the message to RabbitMQ. If the message sends, then there's nothing more to do but tell the user that their job is in progress. If it doesn't send, I update the job in the database as failed (or you could delete the job), and tell the user that there was an error.

I'm not saying that's necessarily the only right way to do it. Or that we have to keep RabbitMQ, but that's my interpretation of the situation. 

Totally not opposed to yanking out RabbitMQ and replacing it with Minion. I think we'd gain a lot of functionality for free by doing that, but I want to make sure we're not unfairly condemning RabbitMQ either, when I think we're the reason RabbitMQ might not be working the way we want it to.
Comment 17 Jonathan Druart 2022-07-28 06:46:43 UTC
(In reply to David Cook from comment #14)
> (In reply to Tomás Cohen Arazi from comment #12)
> > Let's kill the rabbit ;-)
> 
> I won't oppose it. Minion might make more sense after all.

Or, as time is limited for everybody, could we take advantage of having something that works (and has been awaited for years) - a task queue, whatever its implementation - and fix the different scripts that are still running under CGI?

We have been waiting for it so we could adjust the scripts. Now that we have it, we want to rethink/rewrite everything? What's the point for the end-users exactly? Bug 27421 is one step forward, and it is not getting attention.

Could we show durability in our tech choices?

I've already answered several times about why RabbitMQ.

I am out of the recurring discussion anyway, just wanted to say that.
Comment 18 David Cook 2022-07-28 06:55:27 UTC
(In reply to Jonathan Druart from comment #17)
> (In reply to David Cook from comment #14)
> > (In reply to Tomás Cohen Arazi from comment #12)
> > > Let's kill the rabbit ;-)
> > 
> > I won't oppose it. Minion might make more sense after all.
> 
> Or, as time is limited for everybody, could we take advantage of having
> something that works (and has been awaited for years) - a task queue,
> whatever its implementation - and fix the different scripts that are still
> running under CGI?
> 
> We have been waiting for it so we could adjust the scripts. Now that we have
> it, we want to rethink/rewrite everything? What's the point for the end-users
> exactly? Bug 27421 is one step forward, and it is not getting attention.
> 
> Could we show durability in our tech choices?
> 
> I've already answered several times about why RabbitMQ.
> 
> I am out of the recurring discussion anyway, just wanted to say that.

That makes sense to me too. I'm happy to support whoever has the time to do the work, one way or another.

(I've already given feedback on Bug 27421 but I'm happy to look at it again if it's ready to be QAed.)