Bug 35439

Summary: Large background jobs can create delays
Product: Koha
Component: Architecture, internals, and plumbing
Reporter: Nick Clemens (kidclamp) <nick>
Assignee: Bugs List <koha-bugs>
Status: NEW
Resolution: ---
QA Contact: Testopia <testopia>
Severity: normal
Priority: P5 - low
CC: dcook, jonathan.druart, martin.renvoize, matt.blenkinsop, tomascohen, wizzyrea
Version: Main
Hardware: All
OS: All
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=35438
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:

Description Nick Clemens (kidclamp) 2023-11-29 18:09:17 UTC
When a large import or other background job is enqueued, it can slow down the processing of other jobs.

A novel idea would be to have such a job break itself into smaller chunks and enqueue those chunks as separate jobs. That way the queue could continue to advance, and the smaller jobs could be interleaved with incoming ones (see the sketch below).

See bug 35438 as well.
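As a rough sketch of that idea (not existing Koha code: Koha::BackgroundJob subclasses do provide an enqueue method, but the BatchChunk subclass and the chunk size here are hypothetical), the parent operation could slice its workload and enqueue each slice as its own job:

use Modern::Perl;
use List::MoreUtils qw( natatime );
use Koha::BackgroundJob::BatchChunk;    # hypothetical subclass

sub enqueue_in_chunks {
    my ( $record_ids, $chunk_size ) = @_;
    $chunk_size //= 100;    # arbitrary example size

    my @job_ids;
    my $iter = natatime $chunk_size, @$record_ids;
    while ( my @chunk = $iter->() ) {
        # Each chunk is queued as an independent job, so the worker
        # can pick up other incoming jobs between chunks.
        push @job_ids,
          Koha::BackgroundJob::BatchChunk->new->enqueue( { record_ids => \@chunk } );
    }
    return \@job_ids;
}

With a FIFO worker, interleaving then falls out naturally: any job enqueued while the chunks are draining simply lands between them.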
Comment 1 David Cook 2023-11-29 22:23:07 UTC
I like the idea of the large task being broken into smaller jobs. 

However, at the moment I don't know how that would work in terms of updating the result store and showing the user their progress.
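One possible shape for the progress question, sketched under stated assumptions: tag each child job with its parent and derive the parent's progress from the children. Koha's background_jobs table really does have progress and size columns, but the parent_id linkage below is invented for illustration:

use Modern::Perl;
use C4::Context;

# Aggregate a parent job's progress from its child jobs.
# parent_id is a hypothetical column; progress and size exist today.
sub parent_progress {
    my ($parent_id) = @_;
    my $dbh = C4::Context->dbh;
    my ( $done, $total ) = $dbh->selectrow_array(
        q{ SELECT COALESCE(SUM(progress), 0), COALESCE(SUM(size), 0)
             FROM background_jobs
            WHERE parent_id = ? },
        undef, $parent_id
    );
    return $total ? int( 100 * $done / $total ) : 0;
}

The result store could work the same way, with each child writing its own report and the parent's view concatenating them.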
Comment 2 Matt Blenkinsop 2023-12-04 14:54:18 UTC
I've actually got a starting point for this in a piece of work I'm doing for importing KBART files into ERM. The files can contain tens of thousands of titles and quite often breach max_allowed_packet in the database when you try to store the args in the background_jobs table. I've got some subroutines that chunk those files to make sure we never exceed 75% of max_allowed_packet, and enqueue chunked jobs based on those acceptable sizes.
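For reference, the size-capped chunking described above could look roughly like this. It is a sketch, not the actual KBART import code; the subroutine name is made up, though SELECT @@max_allowed_packet is the real way MySQL/MariaDB exposes the limit:

use Modern::Perl;
use JSON qw( encode_json );
use C4::Context;

sub chunk_by_packet_size {
    my ($titles) = @_;

    my ($max_allowed_packet) =
      C4::Context->dbh->selectrow_array(q{SELECT @@max_allowed_packet});
    my $limit = $max_allowed_packet * 0.75;    # stay at or below 75% of the cap

    my ( @chunks, @current );
    for my $title (@$titles) {
        push @current, $title;
        # If this title pushed the serialized args over the limit,
        # close the chunk before it and start a new one. A single
        # oversized title still gets its own chunk; it cannot be split.
        if ( length( encode_json( \@current ) ) > $limit && @current > 1 ) {
            my $last = pop @current;
            push @chunks, [@current];
            @current = ($last);
        }
    }
    push @chunks, [@current] if @current;
    return \@chunks;
}

Re-serializing on every iteration keeps the sketch simple; a real implementation would track the running size incrementally.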

They are still large and sometimes slow jobs, though, and they all just get passed to the worker queue in order. How would you envisage the interleaving of jobs?