Bug 36484 - Even with RabbitMQ enabled, we should poll the database for update_elastic_index jobs at koha-es-indexer startup
Summary: Even with RabbitMQ enabled, we should poll the database for update_elastic_index jobs at koha-es-indexer startup
Status: In Discussion
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: Main
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: Alex Buckley
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2024-04-02 01:22 UTC by Alex Buckley
Modified: 2024-06-28 03:01 UTC
CC List: 1 user

See Also:
Change sponsored?: Sponsored
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 36484: Fetch all outstanding (New) update_elastic_index background jobs upon starting koha-es-indexer (1.98 KB, patch)
2024-04-02 04:54 UTC, Alex Buckley
Bug 36484: Switch to fetch all outstanding (New) background jobs based on type = 'update_elastic_index' (1.20 KB, patch)
2024-06-05 22:11 UTC, Alex Buckley

Description Alex Buckley 2024-04-02 01:22:45 UTC
Bug 30654 adds a check of the background_jobs database table before the worker starts up, regardless of whether a Koha instance uses RabbitMQ or database polling.

However, the patches on bug 30654 do not process outstanding update_elastic_index background jobs. The es_indexer_daemon.pl worker script should process these outstanding jobs at startup.
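
For illustration, a minimal sketch of what such a startup pass could look like, assuming Koha::BackgroundJobs exposes the usual Koha::Objects search interface (process_job() is just a hypothetical stand-in for however the daemon actually handles a job):

#!/usr/bin/perl
use Modern::Perl;
use Koha::BackgroundJobs;

# Hypothetical stand-in for the daemon's real job handling.
sub process_job {
    my ($job) = @_;
    warn sprintf "would index background job %s (type %s)\n", $job->id, $job->type;
}

# One-off pass at startup: pick up indexing jobs already sitting in the
# background_jobs table before entering the normal RabbitMQ/polling loop.
my $outstanding = Koha::BackgroundJobs->search(
    { type => 'update_elastic_index', status => 'new' },
    { order_by => { -asc => 'id' } }
);
while ( my $job = $outstanding->next ) {
    process_job($job);
}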
Comment 1 Alex Buckley 2024-04-02 04:54:15 UTC
Created attachment 164231
Bug 36484: Fetch all outstanding (New) update_elastic_index background jobs upon starting koha-es-indexer

Test plan:
1. Ensure your Koha instance is using Elasticsearch
2. Apply patches
3. sudo koha-es-indexer --stop <instance>
4. Add a new biblio record in your Koha instance
5. Query your database and ensure there is one outstanding update_elastic_index job
   in background_jobs:
SELECT * FROM background_jobs WHERE type = 'update_elastic_index' AND
status = 'new';

6. Start services: sudo service koha-common start
7. Repeat step 5 and observe the outstanding job is not returned by the
   query - that is because it has been processed (a verification sketch follows below).
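
A quick way to confirm step 7 from within a Koha environment (a sketch only; a successfully processed job is expected to end up at status 'finished' rather than 'new'):

use Modern::Perl;
use Koha::BackgroundJobs;

# List indexing jobs with their statuses; the job created in step 4
# should no longer be at 'new'.
my $jobs = Koha::BackgroundJobs->search( { type => 'update_elastic_index' } );
while ( my $job = $jobs->next ) {
    say join "\t", $job->id, $job->status;
}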

Sponsored-by: Toi Ohomai Institute of Technology, New Zealand
Comment 2 Alex Buckley 2024-04-02 04:55:11 UTC
I'm not sure if this is the best test plan, so please do let me know if you would like me to change it. 

Ready for testing.
Comment 3 Alex Buckley 2024-06-05 22:11:40 UTC
Created attachment 167499
Bug 36484: Switch to fetch all outstanding (New) background jobs based on type = 'update_elastic_index'

The previous patch polled the database for background jobs with queue = 'elastic_index'; however, some catalogue changes may be enqueued with queue = 'default'. It is therefore better to poll the database for 'new' background jobs with type = 'update_elastic_index' (see the sketch below).

Sponsored-by: Auckland University of Technology, New Zealand
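
Roughly, the change is a different filter over the same table; a before/after sketch, assuming the standard Koha::Objects search interface on Koha::BackgroundJobs:

use Modern::Perl;
use Koha::BackgroundJobs;

# Before: only catches jobs routed to the 'elastic_index' queue.
my $by_queue = Koha::BackgroundJobs->search(
    { queue => 'elastic_index', status => 'new' } );

# After: catches every outstanding indexing job, including those
# enqueued with queue = 'default'.
my $by_type = Koha::BackgroundJobs->search(
    { type => 'update_elastic_index', status => 'new' } );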
Comment 4 Alex Buckley 2024-06-05 22:13:41 UTC
Added a follow-up based on fetching background jobs based on their type not their queue.
Comment 5 David Cook 2024-06-06 03:06:48 UTC
This change doesn't make sense to me. 

What's the scenario where you'd need this?
Comment 6 David Cook 2024-06-06 03:07:37 UTC
In the test plan, it seems to me the same job would be tried twice.
Comment 7 Alex Buckley 2024-06-06 03:12:12 UTC
(In reply to David Cook from comment #5)
> This change doesn't make sense to me. 
> 
> What's the scenario where you'd need this?

Hi David, 

The use case we have is that a partner library has 359 background jobs with:
- type='update_elastic_index'
- status='new'

Even though koha-worker and koha-es-indexer are running, these 359 jobs are stuck at status 'new' and are not being processed.

RabbitMQ is not looking for outstanding jobs; instead, it is only listening for new jobs - a similar situation to https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=30654

This enhancement ensures that when we start koha-common services like koha-worker and koha-es-indexer, all of these 'new' update_elastic_index jobs are processed rather than ignored.
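
For reference, a quick way to count the stuck jobs (a sketch using C4::Context->dbh from within a Koha environment; the table and column names match the query in the test plan above):

use Modern::Perl;
use C4::Context;

# Count indexing jobs that are still waiting to be processed.
my $dbh = C4::Context->dbh;
my ($count) = $dbh->selectrow_array(
    q{SELECT COUNT(*) FROM background_jobs
      WHERE type = 'update_elastic_index' AND status = 'new'}
);
say "stuck update_elastic_index jobs: $count";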
Comment 8 Alex Buckley 2024-06-06 03:44:46 UTC
(In reply to David Cook from comment #6)
> In the test plan, it seems to me the same job would be tried twice.

Could I please check why you think the same job would be tried twice?

Thanks
Alex
Comment 9 David Cook 2024-06-06 04:08:18 UTC
Thanks for providing more info, Alex. Much appreciated!

(In reply to Alex Buckley from comment #7)
> The use case we have is, a partner library has 359 background jobs with:
> - type='update_elastic_index'
> - status='new'
> 
> Even though koha-worker and koha-es-indexer are running these 359 jobs are
> stuck on the status of new and are not being processed. 

I have 50+ instances running Elasticsearch, and I have seen that symptom with 1 of them. It has other issues related to Elasticsearch indexing too though (like bizarre timeouts when the indexer is processing). I'm still looking into it. 

I have some plans to improve /usr/share/koha/bin/workers/es_indexer_daemon.pl as I don't think it's managing well enough. 

That said, in this case, it could be that the web app is failing to enqueue the task at all in RabbitMQ. I'll be looking into that side of things too. 

> RabbitMQ is not looking for outstanding jobs instead it is only listening to
> new jobs - in a similar situation to
> https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=30654

So RabbitMQ doesn't look for jobs. It's just a message broker. So I'm not sure what you mean there. I think you probably mean the es_indexer_daemon.pl. 

You might see that I've been fairly critical of 30654 hehe.

> This enhancement ensures when we start koha-common services like koha-worker
> and koha-es-indexer all these new 'update_elastic_index' jobs are processed
> and not ignored.

These patches would probably help in the short-term but they won't help us to resolve the underlying problem, so I'm a bit wary. 

For this scenario, I'm thinking that we need a tool to re-queue background jobs. That should help us get the tasks pumping through on demand without covering up the underlying problem.

--

That all said, I agree that we need to improve the Elasticsearch indexing ASAP. While it works well for 98% of my libraries, I want it to work perfectly for 100% of them.
Comment 10 David Cook 2024-06-06 04:11:39 UTC
(In reply to Alex Buckley from comment #8)
> (In reply to David Cook from comment #6)
> > In the test plan, it seems to me the same job would be tried twice.
> 
> Could I please check why do you think the same job would be tried twice?
> 
> Thanks
> Alex

The same job could be considered twice. process_outstanding() could grab it from the database, and then the worker could pick it up as a RabbitMQ message too.

However, upon review, I don't think they'd be tried twice. I see now that the job would be retrieved and skipped on the second go around. Which is all right. Not ideal but not a drama.
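
In other words, the second delivery becomes a no-op because of the status check; a rough sketch of that guard, with handle_message() as a hypothetical stand-in for the worker's message handler:

use Modern::Perl;
use Koha::BackgroundJobs;

# Hypothetical handler for an incoming broker message carrying a job id.
sub handle_message {
    my ($job_id) = @_;
    my $job = Koha::BackgroundJobs->find($job_id);
    # Already handled by the startup pass (or another worker): skip it.
    return if !$job || $job->status ne 'new';
    # ... otherwise process the job as usual ...
}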
Comment 11 David Cook 2024-06-18 06:06:25 UTC
(In reply to David Cook from comment #9)
> That all said, I agree that we need to improve the Elasticsearch indexing
> ASAP. While it works well for 98% of my libraries, I want it to work
> perfectly for 100% of them.

I figured out my timeout issue had to do with Azure's firewall causing problems with persistent TCP connections. I've resolved that problem, and now Elasticsearch is working perfectly. 

I haven't had a case of a background job being stuck in over a month. 

I'll be sharing more about the TCP issue on the listserv shortly...
Comment 12 Alex Buckley 2024-06-24 22:14:00 UTC
Hi @David Cook, 

Apologies for my late reply. 

Firstly, it is interesting to hear that you've also seen a symptom like the one we have experienced on one of your Elasticsearch instances.

Yes, you're right, I meant es_indexer_daemon.pl.

Agreed that the patches on this bug report would help in the short term, but not in the longer term.

I am very happy for my patches on this bug report to be replaced by a better long-term solution, so I'll change this to 'In discussion'. Please do let us know if you end up writing a better long-term fix for this :)
Comment 13 David Cook 2024-06-28 03:01:02 UTC
(In reply to Alex Buckley from comment #12)
> Firstly, it is interesting to hear that you've also seen a symptom like we
> have experienced on one of your Elasticsearch instances.

Since resolving the issue with the Azure firewall, I haven't seen this problem again. Yay for me, but perhaps not yay in terms of helping others (except for tips I can give about using Azure...).

> I am very happy for my patches on this bug report to be replaced by a better
> long-term solution so I'll change this to 'In discussion'. Please do let us
> know if you end up writing a better long term fix for this :)

Send me a message on Mattermost and let's see if we can work out what's going on with your ES indexer?