Bug 25891 - build_holds_queue can be daemonised
Summary: build_holds_queue can be daemonised
Status: RESOLVED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Command-line Utilities
Version: Main
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: Bugs List
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2020-06-29 13:32 UTC by Martin Renvoize
Modified: 2023-09-04 20:11 UTC
CC List: 6 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments

Description Martin Renvoize 2020-06-29 13:32:36 UTC
We have customers who have come from systems where the holds queue is either built dynamically at query time or updated much more regularly.

In their workflows, having to wait an hour between runs is an annoyance. With the recent introduction of easy locking mechanisms via the Koha::Script class, we should be able to use this functionality to daemonise the build_holds_queue job so that it runs in a loop.
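Roughly, what I have in mind is something like the following (a minimal sketch only - the sleep interval is illustrative, and I'm assuming the lock_exec helper that Koha::Script now provides):

    use Modern::Perl;
    use Koha::Script -cron;    # brings in the run-lock helpers
    use C4::HoldsQueue qw( CreateQueue );

    # Take the exclusive run lock so two builders can never run at once;
    # lock_exec dies if another instance already holds the lock.
    my $script = Koha::Script->new( { script => $0 } );
    $script->lock_exec;

    # Loop forever instead of exiting after one build, as the cronjob does.
    while (1) {
        CreateQueue();    # rebuild the holds queue tables
        sleep 60;         # illustrative pause between rebuilds
    }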
Comment 1 Katrin Fischer 2020-06-29 13:57:11 UTC
Hm, would this be optional, so that you could still pick the schedule? If you use randomized hold targeting, the holds information would likely change with every hold added (or whatever else triggers the daemon) - I think that would not generally be wanted.
Comment 2 Martin Renvoize 2020-06-29 13:59:12 UTC
Definitely optional, Katrin :)
Comment 3 Katrin Fischer 2020-06-29 14:04:37 UTC
Maybe to explain better: hold information = the library picked to get the item off the shelf. I think the reason it's on a slow schedule by default is that the idea was to give people time to fetch the items in a multi-library setting. I can see that immediate updates would work well for a single-branch library. Different use cases to think about!
Comment 4 Sally 2020-06-29 14:28:25 UTC
Hi Katrin, I was one of the people who requested this - hopefully this bit of background is useful.

We have over 40 sites that books can be transferred to and from. We only use biblio-level holds; we use the Transport Cost Matrix to choose the 'optimal' library to fill each hold, and the Holds Queue to generate the list of items for staff to look for.

As we have so many libraries, a lot of stock, and all of our holds are biblio level, there can be crossovers. For example:

The holds queue is generated by Koha at 1pm.

BBB library's holds queue shows a request for item 005 on record QQQ for a patron at AAA library.

A patron at AAA library coincidentally returns item 002 from record QQQ.

Item 002 on record QQQ is trapped for the hold at AAA library.

Staff at BBB library start working through the holds queue using the information generated at 1pm.

They locate item 005 on record QQQ and scan it in.

Koha informs them that the item is no longer required.

I realise that when you write it out like this, it's easy to think, "Wow, that scenario must be fairly rare!" ...but it happens constantly (on a daily basis) because we use biblio-level holds and because we have so many items and libraries.

It is a real source of frustration for the staff - if they've been sent to find 60 items and 10 of them are no longer required because they were filled half an hour earlier, it's a bit galling.

It also affects us because the holds queue is one of those jobs that runs on a schedule - but if the library is quiet, many staff will load it up and see if they can fill any holds, because we put a lot of emphasis on filling and transferring holds as quickly as possible.

This is a common scenario:

Holds queue - built at 5pm.
Staff work through the holds queue - it has 50 items on it, and they find 25.
Staff change shifts.
It's quiet at 5.45pm, so a staff member loads the holds queue.
It still shows 50 items, as it hasn't refreshed.
Staff start to look for items, not realising that half of them have already been filled.

We mitigate this by asking staff to share printouts and letting them know when it refreshes.

Despite this, it's a source of confusion for the staff, because they don't understand why a request isn't instantly removed from the holds queue once it's been filled, been made missing, or is no longer required.
Comment 5 Katrin Fischer 2020-10-19 15:40:00 UTC
Hi Sally, sorry I didn't get back to that. I totally understand your use case.

Maybe we could start small by highlighting holds already filled (or removing them) on the report page? This way, staff who picked the book could see why it is no longer needed.

I imagine we could add a trigger on filling or cancelling a hold that checks the holds queue... but I'm not sure how hard this would be to do.

I was always told the slow schedule was on purpose. The explanation I came up with for myself is that there are several options for how the library to fill a hold gets picked, one being the "random" option (RandomizeHoldsQueueWeight). My thinking was that if the report ran again and again, this would constantly change which library is picked to supply the item.

It would be great to discuss this with other consortia/multi-branch libraries.
Comment 6 David Cook 2020-10-20 00:12:46 UTC
Rather than daemonize it, I'd say this would be a good one for the task queue.

Either particular actions could add a holds queue rebuild to the task queue, or a scheduler could add one to the task queue periodically.
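In pseudo-code terms, the two paths would look something like this (entirely hypothetical - Koha::TaskQueue and enqueue as written here don't exist; this is just to illustrate the shape of it):

    # Hypothetical sketch only - module and method names are made up.
    use Koha::TaskQueue;

    # Path 1: a hold-affecting action queues a rebuild for one record
    Koha::TaskQueue->enqueue(
        { task => 'rebuild_holds_queue', biblio_id => $biblionumber } );

    # Path 2: a scheduler queues a full rebuild at a fixed interval
    Koha::TaskQueue->enqueue( { task => 'rebuild_holds_queue', full => 1 } );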
Comment 7 Christopher Brannon 2020-10-30 15:08:00 UTC
I'm trying to understand why Sally doesn't have a cron job rebuild the queue at least hourly.  From her description, it sounds like it is only run once a day, unless I am missing something.

With that said, I understand the randomization concern, but from the perspective of a consortium, we stay as far away from randomizing as we can. It seems that if you are not going to randomize, and you are going for the most efficient transfer, you would want that information at the time the report is run.

Yes, we will ALWAYS have items that disappear from the list - a library opens, and on that day, their item is the better candidate. Those changes make sense, and our libraries have come to expect them. On average, we might get at most 3 pulled holds that don't trigger the hold on the list because another library just checked in a copy and filled the hold. To me, this is more annoying, because it means that ANY item on a record-level hold request can supersede the efficiency of the queue, making the filling of the hold less efficient than the Transport Cost Matrix intended.

In my opinion, if a system is going to use the Transport Cost Matrix, randomization is out the window - holds should be locked in according to the matrix. If a library can't fill a hold, they need a way to mark it as unfillable, so the matrix moves on to the next most efficient candidate. This would stop other libraries from making less efficient responses, and libraries would be dealing with fewer items that are already filled elsewhere.

But back to the daemonised report - if the system simply told you what needs to be pulled at the time the report is run (without having to build this in the background x times a day), it would only have to do this as many times as the report is run. With the Transport Cost Matrix locking requests (and with the option of a library passing on a request due to an issue), this would be a very appealing layout for us.
Comment 8 Sally 2020-11-02 10:44:17 UTC
Hi Christopher,

Our cron job does rebuild hourly; this is what Martin refers to when he says, "In their workflows, having to wait for an hour between runs is an annoyance."

The request here is to enable the job to rebuild more frequently - so once the job has completed, we would like it to start back up again rather than wait for the next hourly time slot.
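For context, the current setup is just a standard hourly crontab entry, something along these lines (the path is illustrative and varies by installation):

    # Rebuild the holds queue at the top of every hour
    0 * * * * $KOHA_CRON_PATH/holds/build_holds_queue.pl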

I think the discussion about whether items coincidentally scanned through the system should be trapped for holds, and about locking holds in accordance with the Transport Cost Matrix, could be raised on another bug? I agree that it's related, but it's not the whole scenario for us.
Comment 9 Kyle M Hall 2022-10-26 15:46:00 UTC
With the Real Time Holds Queue implemented, this is no longer needed, right?
Comment 10 Christopher Brannon 2022-10-26 20:03:14 UTC
(In reply to Kyle M Hall from comment #9)
> With the Real Time Holds Queue implemented, this is no longer needed, right?

Kyle, can you enlighten me on this?  This is the first I've heard of this.  I would like to know more about it.
Comment 11 Katrin Fischer 2022-10-26 20:31:55 UTC
(In reply to Christopher Brannon from comment #10)
> (In reply to Kyle M Hall from comment #9)
> > With the Real Time Holds Queue implemented, this is no longer needed, right?
> 
> Kyle, can you enlighten me on this?  This is the first I've heard of this. 
> I would like to know more about it.

Kyle is talking about the new 22.05 feature from bug 29346, which takes care of constantly updating the holds queue when something changes.
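As I understand it, hold-affecting actions now enqueue a background job for just the affected records, roughly like this (a sketch of the pattern only, guarded by the RealTimeHoldsQueue system preference):

    use C4::Context;
    use Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue;

    # Rebuild the queue entries for just this record, right away,
    # instead of waiting for the next scheduled full rebuild.
    Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue(
        { biblio_ids => [ $biblionumber ] } )
        if C4::Context->preference('RealTimeHoldsQueue');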
Comment 12 Christopher Brannon 2022-10-26 20:57:03 UTC
(In reply to Katrin Fischer from comment #11)
> (In reply to Christopher Brannon from comment #10)
> > (In reply to Kyle M Hall from comment #9)
> > > With the Real Time Holds Queue implemented, this is no longer needed, right?
> > 
> > Kyle, can you enlighten me on this?  This is the first I've heard of this. 
> > I would like to know more about it.
> 
> Kyle is talking about the new 22.05 feature from bug 29346, which takes care
> of constantly updating the holds queue when something changes.

Thanks for the info.  :)  It helps.  Looking forward to seeing how it works.
Comment 13 Nick Clemens 2023-08-31 15:30:12 UTC
I think the real-time holds queue, and the fact that holds are removed from the queue when satisfied, make this one resolved?

Bug 29346
Bug 24359
Comment 14 Katrin Fischer 2023-09-04 20:11:30 UTC
(In reply to Nick Clemens from comment #13)
> I think the real-time holds queue, and the fact that holds are removed from
> the queue when satisfied, make this one resolved?
> 
> Bug 29346
> Bug 24359

+1