| Summary: | Large background jobs can create delays | | |
|---|---|---|---|
| Product: | Koha | Reporter: | Nick Clemens (kidclamp) <nick> |
| Component: | Architecture, internals, and plumbing | Assignee: | Bugs List <koha-bugs> |
| Status: | NEW | QA Contact: | Testopia <testopia> |
| Severity: | normal | | |
| Priority: | P5 - low | CC: | dcook, jonathan.druart, martin.renvoize, matt.blenkinsop, tomascohen, wizzyrea |
| Version: | Main | | |
| Hardware: | All | | |
| OS: | All | | |
| See Also: | https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=35438 | | |
| Change sponsored?: | --- | Patch complexity: | --- |
| Documentation contact: | | Documentation submission: | |
| Text to go in the release notes: | | Version(s) released in: | |
| Circulation function: | | | |
Description
Nick Clemens (kidclamp) 2023-11-29 18:09:17 UTC

I like the idea of the large task being broken into smaller jobs. However, at the moment I'm not sure how that would work in terms of updating the result store and showing the user their progress.

I actually have a starting point for this in a piece of work I'm doing for importing KBART files into ERM. The files can contain tens of thousands of titles and quite often breach max_allowed_packet in the database when you try to store the args in the background_jobs table. I have some subroutines that chunk those files so that we never exceed 75% of max_allowed_packet, and enqueue chunked jobs based on those acceptable sizes.

They are still large and sometimes slow jobs, though, and they all just get passed to the worker queue in order. How would you envisage the interleaving of jobs?
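
A minimal sketch of the size-based chunking described above, assuming hypothetical helper names: chunk_by_size and enqueue_chunk are illustrative only, not existing Koha subroutines. It splits a list of title records so that each chunk's JSON-encoded args stay below 75% of max_allowed_packet before one background job is enqueued per chunk.

```perl
use strict;
use warnings;
use JSON qw( encode_json );

# Illustrative sketch only: chunk_by_size and enqueue_chunk are hypothetical
# names, not part of the Koha codebase.

# Split a list of title records into chunks whose JSON-encoded size stays
# below 75% of max_allowed_packet, so a single job's args never breach the
# limit when stored in the background_jobs table.
sub chunk_by_size {
    my ( $records, $max_allowed_packet ) = @_;
    my $limit = int( $max_allowed_packet * 0.75 );    # keep a 25% safety margin

    my @chunks;
    my @current;
    for my $record (@$records) {
        push @current, $record;
        # If this record pushed the encoded chunk over the limit, close the
        # previous chunk and start a new one with the current record.
        # (A single oversized record still gets a chunk of its own.)
        if ( @current > 1 && length( encode_json( \@current ) ) > $limit ) {
            pop @current;
            push @chunks, [@current];
            @current = ($record);
        }
    }
    push @chunks, [@current] if @current;
    return \@chunks;
}

# Usage sketch: read max_allowed_packet from the server via DBI, then enqueue
# one (smaller, faster) background job per chunk.
# my ($max)   = $dbh->selectrow_array(q{SELECT @@max_allowed_packet});
# my $chunks  = chunk_by_size( \@titles, $max );
# enqueue_chunk($_) for @$chunks;
```

Re-encoding the growing chunk on every record is not the cheapest way to measure size, but it keeps the sketch simple; the point is only that each enqueued job's serialized args stay comfortably under the packet limit, which is the chunking behaviour described in the comment.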
I like the idea of the large task being broken into smaller jobs. However, at the moment, I don't know how that would work in terms of updating the result store and showing the user their progress? I've actually got a start point for this in a piece of work I'm doing for importing KBART files into ERM. The files can contain 10s of 1000s of titles and quite often breach the max_allowed_packet in the database when you try and store the args in the background_jobs table. I've got some subroutines that chunk those files to make sure that we are never more than 75% of the max_allowed_packet and enqueue chunked jobs based on those acceptable sizes. They are still large and sometimes slow jobs though and they all just get passed to the worker queue in order - how would you envisage the interleaving of jobs? |