Summary: | Even with RabbitMQ enabled, we should poll the database for update_elastic_index jobs at koha-es-indexer startup | ||
---|---|---|---|
Product: | Koha | Reporter: | Alex Buckley <alexbuckley> |
Component: | Architecture, internals, and plumbing | Assignee: | Alex Buckley <alexbuckley> |
Status: | In Discussion --- | QA Contact: | Testopia <testopia> |
Severity: | enhancement | ||
Priority: | P5 - low | CC: | dcook |
Version: | Main | ||
Hardware: | All | ||
OS: | All | ||
Change sponsored?: | Sponsored | Patch complexity: | --- |
Documentation contact: | | Documentation submission: | |
Text to go in the release notes: | | Version(s) released in: | |
Circulation function: | |||
Attachments: |
Bug 36484: Fetch all outstanding (New) update_elastic_index background jobs upon starting koha-es-indexer
Bug 36484: Switch to fetch all outstanding (New) background jobs based on type = 'update_elastic_index' |
Description
Alex Buckley
2024-04-02 01:22:45 UTC
Created attachment 164231 [details] [review]
Bug 36484: Fetch all outstanding (New) update_elastic_index background jobs upon starting koha-es-indexer

Test plan:
1. Ensure your Koha instance is using Elasticsearch
2. Apply patches
3. sudo koha-es-indexer --stop <instance>
4. Add a new biblio record in your Koha instance
5. Query your database and ensure there is one outstanding update_elastic_index job in background_jobs:
   SELECT * FROM background_jobs WHERE type = 'update_elastic_index' AND status = 'new';
6. Start services: sudo service koha-common start
7. Repeat step 5 and observe that the outstanding job is no longer returned by the query - that is because it has been processed.

Sponsored-by: Toi Ohomai Institute of Technology, New Zealand

I'm not sure if this is the best test plan, so please do let me know if you would like me to change it. Ready for testing.

Alex Buckley

Created attachment 167499 [details] [review]
Bug 36484: Switch to fetch all outstanding (New) background jobs based on type = 'update_elastic_index'

The previous patch polled the database for background jobs having queue = 'elastic_index'; however, some catalogue changes might have queue = 'default'. So it is better to poll the database for 'new' background jobs having type = 'update_elastic_index'.

Sponsored-by: Auckland University of Technology, New Zealand

Alex Buckley

Added a follow-up that fetches background jobs based on their type rather than their queue.

David Cook

This change doesn't make sense to me.

What's the scenario where you'd need this?

David Cook

In the test plan, it seems to me the same job would be tried twice.

Alex Buckley

(In reply to David Cook from comment #5)
> This change doesn't make sense to me.
>
> What's the scenario where you'd need this?

Hi David,

The use case we have is: a partner library has 359 background jobs with:
- type='update_elastic_index'
- status='new'

Even though koha-worker and koha-es-indexer are running, these 359 jobs are stuck on the status of new and are not being processed. RabbitMQ is not looking for outstanding jobs; instead, it is only listening for new jobs - a situation similar to https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=30654

This enhancement ensures that when we start koha-common services like koha-worker and koha-es-indexer, all these new 'update_elastic_index' jobs are processed rather than ignored.

Alex Buckley

(In reply to David Cook from comment #6)
> In the test plan, it seems to me the same job would be tried twice.

Could I please check why you think the same job would be tried twice?

Thanks
Alex
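For readers following the discussion, the behaviour being proposed - polling the database for outstanding jobs when the indexer starts - might look roughly like the sketch below. This is illustrative only, not the code from the attached patches; the subroutine name and the Koha::BackgroundJobs method calls are assumptions rather than a confirmed excerpt.

```perl
# Illustrative sketch only - not the code from the attached patches.
# The subroutine name and the Koha::BackgroundJobs calls are assumptions.
use Modern::Perl;
use Koha::BackgroundJobs;

sub process_outstanding_jobs {
    # Fetch every job still marked 'new' for the Elasticsearch indexer,
    # i.e. jobs enqueued while the indexer daemon was not listening.
    my $jobs = Koha::BackgroundJobs->search(
        {
            type   => 'update_elastic_index',
            status => 'new',
        }
    );

    while ( my $job = $jobs->next ) {
        # Handle each job just as if its message had arrived from the broker.
        $job->process();
    }
}

# Run once at daemon startup, before entering the normal
# message-consuming loop.
process_outstanding_jobs();
```

Whatever the real implementation looks like, the key idea is that anything still in status 'new' at startup gets picked up even though its original broker message was never delivered to a listening indexer.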
David Cook

Thanks for providing more info, Alex. Much appreciated!

(In reply to Alex Buckley from comment #7)
> The use case we have is, a partner library has 359 background jobs with:
> - type='update_elastic_index'
> - status='new'
>
> Even though koha-worker and koha-es-indexer are running these 359 jobs are
> stuck on the status of new and are not being processed.

I have 50+ instances running Elasticsearch, and I have seen that symptom with 1 of them. It has other issues related to Elasticsearch indexing too, though (like bizarre timeouts while the indexer is processing). I'm still looking into it.

I have some plans to improve /usr/share/koha/bin/workers/es_indexer_daemon.pl as I don't think it's managing well enough. That said, in this case, it could be that the web app is failing to enqueue the task in RabbitMQ at all. I'll be looking into that side of things too.

> RabbitMQ is not looking for outstanding jobs instead it is only listening to
> new jobs - in a similar situation to
> https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=30654

RabbitMQ doesn't look for jobs; it's just a message broker, so I'm not sure what you mean there. I think you probably mean es_indexer_daemon.pl. You might see that I've been fairly critical of 30654 hehe.

> This enhancement ensures when we start koha-common services like koha-worker
> and koha-es-indexer all these new 'update_elastic_index' jobs are processed
> and not ignored.

These patches would probably help in the short term, but they won't help us resolve the underlying problem, so I'm a bit wary.

For this scenario, I'm thinking that we need a tool to re-queue background jobs. That should help us get the tasks pumping through on demand without covering up the underlying problem.

--

That all said, I agree that we need to improve the Elasticsearch indexing ASAP. While it works well for 98% of my libraries, I want it to work perfectly for 100% of them.

David Cook

(In reply to Alex Buckley from comment #8)
> (In reply to David Cook from comment #6)
> > In the test plan, it seems to me the same job would be tried twice.
>
> Could I please check why do you think the same job would be tried twice?
>
> Thanks
> Alex

The same job could be considered twice: process_oustanding() could grab it from the database, and then the worker could pick it up as a RabbitMQ message too.

However, upon review, I don't think it would actually be tried twice. I see now that the job would be retrieved and skipped on the second go around. Which is all right - not ideal, but not a drama.

David Cook

(In reply to David Cook from comment #9)
> That all said, I agree that we need to improve the Elasticsearch indexing
> ASAP. While it works well for 98% of my libraries, I want it to work
> perfectly for 100% of them.

I figured out my timeout issue had to do with Azure's firewall causing problems with persistent TCP connections. I've resolved that problem, and now Elasticsearch is working perfectly. I haven't had a case of a background job being stuck in over a month.

I'll be sharing more about the TCP issue on the listserv shortly...

Alex Buckley

Hi @David Cook,

Apologies for my late reply.

Firstly, it is interesting to hear that you've also seen a symptom like we have experienced on one of your Elasticsearch instances.

Yes, you're right - I was referring to es_indexer_daemon.pl.

Agreed that the patches on this bug report would help in the short term, but not in the long term. I am very happy for my patches on this bug report to be replaced by a better long-term solution, so I'll change this to 'In Discussion'. Please do let us know if you end up writing a better long-term fix for this :)

David Cook

(In reply to Alex Buckley from comment #12)
> Firstly, it is interesting to hear that you've also seen a symptom like we
> have experienced on one of your Elasticsearch instances.

Since resolving the issue with the Azure firewall, I haven't seen this problem again. Yay for me, but perhaps not yay in terms of helping others (except for tips I can give about using Azure...).

> I am very happy for my patches on this bug report to be replaced by a better
> long-term solution so I'll change this to 'In discussion'. Please do let us
> know if you end up writing a better long term fix for this :)

Send me a message on Mattermost and let's see if we can work out what's going on with your ES indexer?
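To illustrate the "retrieved and skipped on the second go around" point from the discussion above, here is a minimal sketch of the guard a worker could apply when a broker message arrives for a job that the startup poll has already handled. It assumes the Koha::BackgroundJobs API and is not the actual worker code.

```perl
# Illustrative sketch only - not actual Koha worker code. Method calls
# on Koha::BackgroundJobs are assumptions about the API.
use Modern::Perl;
use Koha::BackgroundJobs;

sub handle_broker_message {
    my ($job_id) = @_;

    my $job = Koha::BackgroundJobs->find($job_id);
    return unless $job;

    # A job already handled by the startup poll (or by another worker)
    # is no longer 'new', so the duplicate message is skipped.
    return unless $job->status eq 'new';

    $job->process();
}
```

With a guard like this, processing a job from the database at startup and later receiving its RabbitMQ message is harmless: the second attempt sees the non-'new' status and returns early.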