We could cache the circulation rules (in memory, at the L1/request level) to speed up code paths where the same rules are fetched several times (e.g. in a loop).
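For context, here is a minimal sketch of the pattern being proposed, assuming the Koha::Cache::Memory::Lite API (get_instance, get_from_cache, set_in_cache) and the existing Koha::CirculationRules->get_effective_rule. The helper name and key layout are illustrative, not the final patch:

use Koha::CirculationRules;
use Koha::Cache::Memory::Lite;

sub get_cached_rule_value {
    my ($params) = @_;

    my $memory_cache = Koha::Cache::Memory::Lite->get_instance;
    my $cache_key    = sprintf "CircRules:%s:%s:%s:%s",
        $params->{rule_name}    // q{},
        $params->{categorycode} // q{},
        $params->{branchcode}   // q{},
        $params->{itemtype}     // q{};

    # Undef is treated as a miss here; the real patch would need to
    # distinguish "no rule" from "not cached yet".
    my $cached = $memory_cache->get_from_cache($cache_key);
    return $cached if defined $cached;

    my $rule  = Koha::CirculationRules->get_effective_rule($params);
    my $value = $rule ? $rule->rule_value : undef;
    $memory_cache->set_in_cache( $cache_key, $value );
    return $value;
}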
Created attachment 128180 [details] [review]
Bug 29623: Cache circulation rules
Nick, it would be helpful if you could run the same tests as you did in bug 29474 comment 9, to see whether we gain more by caching the circ rules in L1. This patch is absolutely not ready to be integrated into master, but I would like to get some opinions about it.
Created attachment 128576 [details] [review]
Bug 29623: Cache circulation rules

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Testing on top of bug 29703, this improves performance on request.pl by about 1.5 seconds with 100 items, which is pretty decent.
Could I get a test plan for this please?
Also, the patch doesn't apply on top of the dependency patch:

128576 - Bug 29623: Cache circulation rules
Apply? [(y)es, (n)o, (i)nteractive] y
Applying: Bug 29623: Cache circulation rules
error: sha1 information is lacking or useless (C4/Circulation.pm).
error: could not build fake ancestor
Patch failed at 0001 Bug 29623: Cache circulation rules
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem run "git bz apply --continue".
If you would prefer to skip this patch, instead run "git bz apply --skip".
To restore the original branch and stop patching run "git bz apply --abort".
Patch left in /tmp/Bug-29623-Cache-circulation-rules-SDQmaq.patch
Created attachment 134209 [details] [review]
Bug 29623: Cache circulation rules

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
(In reply to Katrin Fischer from comment #5)
> Could I get a test plan for this please?

Confirm that the rule is taken into account wherever get_effective_rules is used (hardduedate, for instance), and that a new value is used when a rule is modified in the circ matrix (i.e. the cache is flushed when a rule is updated).
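Something like this could serve as a spot-check of the first point (illustrative only; the branch/category/itemtype codes are example values):

use Koha::CirculationRules;

my $rule = Koha::CirculationRules->get_effective_rule(
    {
        branchcode   => 'CPL',
        categorycode => 'PT',
        itemtype     => 'BK',
        rule_name    => 'hardduedate',
    }
);
my $value = $rule ? $rule->rule_value : 'no rule';
print "$value\n";

Run it before and after changing the hard due date in the circ matrix; the second run should reflect the new value.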
Silly question... why do this just at the request caching level?

I had a bit of a deep dive into our caching strategies yesterday to get myself back up to speed...

Is it slower to fetch a key from Memcached once (to populate the L1 per request) than it is to fetch it from MySQL (to populate the same L1 per request)? Or am I totally misunderstanding our caching code?
(In reply to Martin Renvoize from comment #9)
> Silly question... why do this just at the request caching level?

Where else? Sorry, I am not sure I understand your question.

> I had a bit of a deep dive into our caching strategies yesterday to get
> myself back up to speed...
>
> Is it slower to fetch a key from Memcached once (to populate the L1 per
> request) than it is to fetch it from MySQL (to populate the same L1 per
> request)? Or am I totally misunderstanding our caching code?

Where do you read that in the code? I don't have numbers, but yes, fetching something simple from MySQL may take about the same time as fetching it from Memcached. That would need to be confirmed.
Comment on attachment 134209 [details] [review]
Bug 29623: Cache circulation rules

Review of attachment 134209 [details] [review]:
-----------------------------------------------------------------

::: Koha/CirculationRules.pm
@@ +379,5 @@
> +    my $memory_cache = Koha::Cache::Memory::Lite->get_instance;
> +    my $cache_key = sprintf "CircRules:%s:%s:%s:%s", $rule_name // q{},
> +      $categorycode // q{}, $branchcode // q{}, $itemtype // q{};
> +
> +    Koha::Cache::Memory::Lite->flush();

It looks like the $memory_cache and $cache_key variables are initialised but unused?

@@ +410,4 @@
>          push( @$rule_objects, $rule_object );
>      }
>
> +    Koha::Cache::Memory::Lite->flush();

Is this necessary, since the cache should be cleared at the end of the HTTP request? Or is this to provide for cases where the method is called outside an HTTP context?
(In reply to Martin Renvoize from comment #9)
> Silly question... why do this just at the request caching level?
>
> I had a bit of a deep dive into our caching strategies yesterday to get
> myself back up to speed...
>
> Is it slower to fetch a key from Memcached once (to populate the L1 per
> request) than it is to fetch it from MySQL (to populate the same L1 per
> request)? Or am I totally misunderstanding our caching code?

The TCP speed is probably comparable, although MySQL is going to be doing some disk I/O while Memcached is all in memory, so in theory Memcached should be faster. But the difference might be imperceptible with a low load and a good disk*.

*I have a different app that does a lot of I/O on very bad disks, and even simple lookups can be hard when the disks are busy with other tasks.
(In reply to David Cook from comment #11)
> ::: Koha/CirculationRules.pm
> @@ +379,5 @@
> > +    my $memory_cache = Koha::Cache::Memory::Lite->get_instance;
> > +    my $cache_key = sprintf "CircRules:%s:%s:%s:%s", $rule_name // q{},
> > +      $categorycode // q{}, $branchcode // q{}, $itemtype // q{};
> > +
> > +    Koha::Cache::Memory::Lite->flush();
>
> It looks like the $memory_cache and $cache_key variables are initialised but
> unused?

Looks like a leftover. I think I first put the code there, then moved it into the other method.

> @@ +410,4 @@
> > +    Koha::Cache::Memory::Lite->flush();
>
> Is this necessary, since the cache should be cleared at the end of the HTTP
> request? Or is this to provide for cases where the method is called outside
> an HTTP context?

Hmm, that's bad indeed. I think we need to flush (as the cache contains old values now), or update the cache's values, but we shouldn't flush the whole memory cache.
Created attachment 135463 [details] [review]
Bug 29623: Don't flush the whole L1 cache

We shouldn't flush the L1 cache completely, only the values related to the circulation rules. It is not correct to simply update the cached value of the rule we are currently setting (because of the inheritance concept of the circ rules), so the related entries must be invalidated instead.
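For illustration, one way to scope the invalidation without a regex flush is to track the keys we populate ourselves (a sketch, assuming Koha::Cache::Memory::Lite exposes clear_from_cache; the package variable and helper names are hypothetical):

my %CIRC_RULE_CACHE_KEYS;    # keys populated during this request

sub _remember_circ_key {
    my ($key) = @_;
    $CIRC_RULE_CACHE_KEYS{$key} = 1;
}

sub clear_circ_rules_cache {
    my $memory_cache = Koha::Cache::Memory::Lite->get_instance;
    $memory_cache->clear_from_cache($_) for keys %CIRC_RULE_CACHE_KEYS;
    %CIRC_RULE_CACHE_KEYS = ();
}

set_rule would then call clear_circ_rules_cache() instead of Koha::Cache::Memory::Lite->flush(), leaving unrelated L1 entries (sysprefs, frameworks, ...) untouched.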
I think this is the correct approach, what do you think?
Rephrasing my question.. why the choice to use Koha::Cache::Memory::Lite instead of just Koha::Cache ? I don't think the approach is bad at all, just trying to understand the decisions made.
(In reply to Martin Renvoize from comment #16)
> Rephrasing my question.. why the choice to use Koha::Cache::Memory::Lite
> instead of just Koha::Cache?
> I don't think the approach is bad at all, just trying to understand the
> decisions made.

Because of the flush :) We cannot flush using a regex, so I had to flush the whole L1. See the last patch :) It does this better now, but only for L1.

I guess I also wanted to prevent bad values from being kept in cache, but... we shouldn't be afraid of that, right?

We certainly could improve this idea, but keeping it as simple as it is now seems good to me.
(In reply to Jonathan Druart from comment #15)
> I think this is the correct approach, what do you think?

I'm not 100% sure.

I find that the code in ./admin/smart-rules.pl and Koha/CirculationRules.pm is very messy.

If we clear the cache after each set_rule() call, we'll be calling the function ~38 times (so 38 loops) for each add/update/delete in ./admin/smart-rules.pl. We should only need to call it once for each CRUD operation.

In terms of core code, it looks like Koha::CirculationRules->set_rule only gets called in Koha/CirculationRules.pm and once, accidentally, in ./admin/smart-rules.pl (it should use set_rules instead). We do call set_rule() a lot in the unit tests, though.

I suppose occasionally inefficient cache setting/clearing in ./admin/smart-rules.pl is worth a performance speed-up during frequent transactional work. In fact, since we're using the L1 cache and the L1 cache is flushed at the end of every HTTP/SIP request, clearing the cache (in ./admin/smart-rules.pl) is probably totally unnecessary for production purposes. The only time caching of circulation rules could be a problem is in the unit tests.

So I'd say the cache clearing is messy, but it's good enough.

(Side note: the circulation_rules table has an "id" column, so we should be using that for distinct update/delete operations.)
(In reply to Jonathan Druart from comment #15)
> I think this is the correct approach, what do you think?

Since we are clearing the cache via set_rule, there's probably no harm in using the L2 cache via Koha::Cache. Nick has already signed off this version though, and I think this one is good enough for now at least.
One last thing... we might want to rename this issue to be "Cache effective circulation rules" because that would be more accurate than "Cache circulation rules". We're not just saving a database call by caching. We're saving a database call that has embedded logic in it.
(In reply to David Cook from comment #18)
> I find that the code in ./admin/smart-rules.pl and Koha/CirculationRules.pm
> is very messy.

Let's say all our code is messy, so we don't need to point it out every week, deal?

> If we clear the cache after each set_rule() call, we'll be calling the
> function ~38 times (so 38 loops) for each add/update/delete in
> ./admin/smart-rules.pl. We should only need to call it once for each CRUD
> operation.

We could add a flag to set_rule to not clear the cache, and clear it from set_rules instead (see the sketch below). Would that work for you?

> In fact, since we're using the L1 cache and the L1 cache is flushed at the
> end of every HTTP/SIP request, clearing the cache (in
> ./admin/smart-rules.pl) is actually probably totally unnecessary for
> production purposes.

It seems good practice to invalidate the cache whenever the value might have changed. That will prevent bugs in the future.
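For what it's worth, the flag idea could look something like this (a sketch only; no_cache_flush, _upsert_rule, and clear_circ_rules_cache are hypothetical names standing in for the real logic):

sub set_rule {
    my ( $self, $params ) = @_;
    my $no_flush = delete $params->{no_cache_flush};

    my $rule = $self->_upsert_rule($params);    # placeholder for the existing create/update logic

    clear_circ_rules_cache() unless $no_flush;
    return $rule;
}

sub set_rules {
    my ( $self, $params ) = @_;
    my $rules = delete $params->{rules};
    while ( my ( $name, $value ) = each %$rules ) {
        $self->set_rule(
            { %$params, rule_name => $name, rule_value => $value, no_cache_flush => 1 } );
    }
    clear_circ_rules_cache();    # invalidate once per batch
}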
(In reply to David Cook from comment #20) > One last thing... we might want to rename this issue to be "Cache effective > circulation rules" because that would be more accurate than "Cache > circulation rules". Done.
(In reply to Jonathan Druart from comment #21)
> Let's say all our code is messy, so we don't need to point it out every
> week, deal?

Hehe, deal.

> We could add a flag to set_rule to not clear the cache, and clear it from
> set_rules instead. Would that work for you?

I thought about that, but the existing patches are already signed off, so it's probably not a necessary optimization at this point.

> It seems good practice to invalidate the cache whenever the value might
> have changed. That will prevent bugs in the future.

Agreed.

--

Overall, I don't see any blockers/failures for this one.
I think we can probably call this one PQA now.. but I'd love to see a follow-up bug to convert to Koha::Cache, as we've pretty much done all the work here to make sure we're resetting the caches as required on changes, and caching for longer makes sense.

I'm also interested in David's suggestion about using IDs for the cache key, but I've not investigated the code to validate that myself.

David, did you fancy adding your QA stamp?
(In reply to Martin Renvoize from comment #24)
> David, did you fancy adding your QA stamp?

I don't think I'm officially on the QA team anymore 😅
(In reply to Martin Renvoize from comment #24)
> I think we can probably call this one PQA now.. but I'd love to see a
> follow-up bug to convert to Koha::Cache [...]

Yes, maybe. However, I would expect several adjustments in test files, as we are deleting all the entries from circulation_rules in some tests.

> I'm also interested in David's suggestion about using IDs for the cache
> key, but I've not investigated the code to validate that myself.

I don't think it's possible: get_effective_rule_value does not have the ID.
(In reply to Jonathan Druart from comment #26)
> I don't think it's possible: get_effective_rule_value does not have the ID.

Yeah, there's quite a bit of refactoring to do on the CRUD before we could use the ID for the caching. My earlier suggestion was a "someday" suggestion, heh.
IMO it's not about refactoring, it's just not possible: get_effective_rule will still need (category, library, itemtype) as parameters. If the caller already has the rule's id, it does not need to ask for the *effective* rule :)
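To illustrate: the effective rule is resolved by falling back from the most specific scope to wildcards, so the winning row (and its id) is only known after the search. A sketch of the idea (find_rule is a hypothetical lookup, and the exact precedence order is illustrative; Koha does this in a single ordered query rather than a loop):

my @scopes = (
    [ $branchcode, $categorycode, $itemtype ],
    [ $branchcode, $categorycode, undef     ],
    [ $branchcode, undef,         $itemtype ],
    [ $branchcode, undef,         undef     ],
    [ undef,       $categorycode, $itemtype ],
    [ undef,       $categorycode, undef     ],
    [ undef,       undef,         $itemtype ],
    [ undef,       undef,         undef     ],
);

for my $scope (@scopes) {
    my $rule = find_rule( @$scope, $rule_name );
    return $rule if $rule;
}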
Created attachment 135916 [details] [review]
Bug 29623: Cache circulation rules

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 135917 [details] [review]
Bug 29623: Don't flush the whole L1 cache

We shouldn't flush the L1 cache completely, only the values related to the circulation rules. It is not correct to simply update the cached value of the rule we are currently setting (because of the inheritance concept of the circ rules), so the related entries must be invalidated instead.

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 135918 [details] [review]
Bug 29623: (QA follow-up) Add POD to Koha::Cache::Lite

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 135919 [details] [review]
Bug 29623: (QA follow-up) Add POD to Koha::CirculationRules

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
OK, all points raised in the discussion have been agreed upon.. Happy to PQA (with the POD follow-ups to squash QA script noise).

Passing QA
(In reply to Jonathan Druart from comment #28)
> IMO it's not about refactoring, it's just not possible: get_effective_rule
> will still need (category, library, itemtype) as parameters. If the caller
> already has the rule's id, it does not need to ask for the *effective* rule :)

Agreed.. now that I've delved into the code, that's pretty obvious.
(In reply to Jonathan Druart from comment #28)
> IMO it's not about refactoring, it's just not possible: get_effective_rule
> will still need (category, library, itemtype) as parameters. If the caller
> already has the rule's id, it does not need to ask for the *effective* rule :)

I must've misworded my earlier comment. Originally, I meant refactoring to use the ID for database updates/deletions; it wasn't really related to the caching. Not sure how that got dragged into it. Totally separate thing.
(In reply to David Cook from comment #12)
> The TCP speed is probably comparable, although MySQL is going to be doing
> some disk I/O while Memcached is all in memory, so in theory Memcached
> should be faster.

MySQL will only be doing disk I/O on the first request for the same thing. I'm pretty sure MySQL's caching strategy is far better than ours and Memcached's.
The patchset tweaks Circulation.t to explicitly flush the cached rules in select places, so I ran:

$ git grep --name-only 'Koha::CirculationRule' t/ | xargs prove

All good \o/ but keep in mind that we might need to write something in the guidelines for tests tweaking rules.
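For those guidelines, the relevant tweak boils down to something like this (illustrative):

use Koha::Cache::Memory::Lite;

# After manipulating circulation_rules directly (bypassing set_rule and
# its cache invalidation), drop any stale cached values:
Koha::Cache::Memory::Lite->flush();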
Pushed to master for 22.11. Nice work everyone, thanks!
(In reply to Tomás Cohen Arazi from comment #36)
> MySQL will only be doing disk I/O on the first request for the same thing.
> I'm pretty sure MySQL's caching strategy is far better than ours and
> Memcached's.

While I know what you mean about the disk caching, I'm not sure what you mean about it being better than ours. Caching a rule once per HTTP request makes more sense to me than fetching it X times over the course of that HTTP request.

That said, it all depends on the volume. When working with hundreds, thousands, and millions, even fractional-second lags add up. (I've had a lot of fun with that over the years just in terms of the difference in TCP lag between connecting to local TCP sockets vs remote TCP sockets.)
Enhancement will not be backported to 22.05.x series
Jonathan, would you say this is eligible for backport? It feels more like a performance fix than an enhancement.
(In reply to Nick Clemens from comment #41)
> Jonathan, would you say this is eligible for backport? It feels more like a
> performance fix than an enhancement.

We could backport it, but I would wait a couple of months to give us time to catch potential problems.
Internal improvements, no documentation change required.
*** Bug 26393 has been marked as a duplicate of this bug. ***