I have added a POC on bug 11998 (patch "Add a L1 cache for sysprefs") to use an L1 cache instead of retrieving info from memcached every time. If it works for sysprefs, we could use it for any objects set in cache (I have the MARC structure and frameworks in mind).
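In rough terms, the idea is to keep a plain per-process Perl hash in front of memcached, so repeated lookups within one request skip the network round-trip entirely. A minimal sketch (the $self->{memcached} handle and internals are illustrative assumptions, not the actual patch):

    my %L1_cache;    # process-local; under Plack it must be flushed between requests

    sub get_from_cache {
        my ( $self, $key ) = @_;

        # L1 hit: no network traffic, no deserialization
        return $L1_cache{$key} if exists $L1_cache{$key};

        # L1 miss: fall back to the L2 cache (memcached)
        my $value = $self->{memcached}->get($key);
        $L1_cache{$key} = $value;
        return $value;
    }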
Created attachment 48968 [details] [review] Bug 16044: Use the L2 cache for any objects set in cache
Great idea IMO - but only as long as you are caching scalars (like in Bug 11998). Unfortunately, caching complex data structures that way globally (by reference) is inherently dangerous and regression-prone. It would be great to have such a mechanism in Koha, but to be on the safe side it should be used carefully and selectively (e.g. let's return a "deep clone" of the cached data structure by default, and the direct reference only if it is explicitly asked for, etc.). This patch is tempting - it would provide some big, immediate speed gains for a lot of scripts if pushed right now, and at this moment it probably would not break anything important (but only because we don't use Koha::Cache all that much so far). But in the long term, such a method will make Koha::Cache essentially unusable for caching anything complex - to stay on the safe side, you'll typically need to meticulously audit a lot of the code to ensure that a data structure fetched from the cache is not getting messed up internally in any code path between subsequent fetches.
The bug name is misleading. This patch is about generalizing the L1 cache, which is Plack-and-sysprefs-only as of bug 11998. As I said when signing off on 11998, separating the L1 from the L2 cache gives us better granularity for handling edge cases, but of course each edge case needs to be handled carefully. Go for it, Jonathan! We can fight about the worst cases (marcstructure, etc.) in more specific bugs. This bug/patch only generalizes 11998, which is a very good idea.
+1 from me - I agree pretty much entirely with Tomás's comment. With great power comes great responsibility: we will indeed need a good, thorough testing procedure as more things get added to the caches, but this bug in itself looks good to me.
To clarify comment #1: I don't think it's a bad idea; on the contrary, I think it's an excellent idea! But I also think that the current/initial implementation is, again, inherently dangerous. IMO it should at least have some basic safety measures built in to be (reasonably) safe. I dunno, something simple, e.g. let's have two variants of ->get_from_cache():

1) get_from_cache():
- if the cached thingy in the L1 cache is a scalar - just return it, no problems whatsoever
- if it's a reference, return a "deep clone" of what we keep in the L1 cache in deserialized form (that should still be way faster than fetching it from the network and deserializing it each and every time)

2) get_from_cache_just_gimme_a_raw_reference_I_know_what_I_am_doing() - the initial/ultra-fast implementation which just returns references directly

Having two variants of get_from_cache() is not very elegant at first glance, but it's the fastest and simplest method I can imagine:
- it's faster than (e.g.) handling some extra parameters inside a one-to-rule-them-all get_from_cache() subroutine - sooner or later some scripts will be calling that sub 10000+ times, and such (seemingly very small) overheads have a nasty tendency to add up
- it will allow us to introduce the "I know what I'm doing / I like to live dangerously" variant selectively and gradually - preferably in separate bug reports, so if something somewhere explodes due to using the ultra-fast-but-not-always-safe caching method, it will be a lot easier to fix/revert it selectively without killing the performance of the entire caching system
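A minimal sketch of that two-variant API, assuming Storable's dclone() for the deep copy and a hypothetical _l1_fetch() helper (the long method name is kept for the joke):

    use Storable qw(dclone);

    # Variant 1: safe by default
    sub get_from_cache {
        my ( $self, $key ) = @_;
        my $value = $self->_l1_fetch($key);   # hypothetical internal L1 lookup
        return $value unless ref $value;      # scalars are safe to share
        return dclone($value);                # references: hand out a fresh copy
    }

    # Variant 2: ultra-fast, returns the raw reference; the caller promises
    # not to mutate it
    sub get_from_cache_just_gimme_a_raw_reference_I_know_what_I_am_doing {
        my ( $self, $key ) = @_;
        return $self->_l1_fetch($key);
    }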
Created attachment 49179 [details] [review] Bug 16044: Make tests from t/Cache.t pass The timeout does not impact the L1 cache (it would be too time-consuming and not really useful to do that for this cache). To simulate the real timeout, we need to flush this L1 cache when needed. That is also done by adding a disable_L1_cache method.
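Roughly, that might look like the following sketch (only the disable_L1_cache name comes from the message above; the flush helper and the bodies are assumptions):

    my %L1_cache;
    my $L1_cache_enabled = 1;

    sub flush_L1_cache {
        %L1_cache = ();    # simulates an expired timeout for the L1 layer
    }

    sub disable_L1_cache {
        $L1_cache_enabled = 0;    # lets tests bypass L1 and hit L2 directly
    }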
Created attachment 49186 [details] [review] Bug 16044: Add tests to make sure structures will be copied
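Such a test might look something like this (a hypothetical sketch, not the attached patch):

    use Test::More;
    use Koha::Cache;

    my $cache = Koha::Cache->get_instance();
    $cache->set_in_cache( 'copy_test', { a => 1 } );

    my $copy = $cache->get_from_cache('copy_test');
    $copy->{a} = 42;    # mutate what we got back

    is_deeply(
        $cache->get_from_cache('copy_test'),
        { a => 1 },
        'Modifying a fetched structure must not alter the cached one'
    );
    done_testing;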
Created attachment 49187 [details] [review] Bug 16044: Add deep cloning To avoid the cache being modified unintentionally, the default behavior of get_from_cache will be to deep copy if we are getting a structure. If the item is a scalar, it's simply returned.
Created attachment 49188 [details] [review] Bug 16044: Add an unsafe flag to Koha::Cache->get_from_cache If the caller/developer knows what he is doing, he can decide not to deep copy the structure. It will be faster but unsafe! If the structure is modified, the cache will also be updated. This option must be used with care and is not the default behavior.
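Taken together, the last two patches boil down to something like this (a sketch: the unsafe flag name is from the commit message, while the exact calling convention and internals are assumptions):

    use Clone qw(clone);

    my %L1_cache;    # assumed internal storage

    sub get_from_cache {
        my ( $self, $key, $options ) = @_;
        my $unsafe = $options->{unsafe} || 0;

        my $value = $L1_cache{$key};
        return $value unless ref $value;   # scalars need no protection
        return $value if $unsafe;          # caller accepts a shared reference
        return clone($value);              # default: deep copy on the way out
    }

    # Typical call sites:
    #   my $safe_copy = $cache->get_from_cache($key);
    #   my $raw_ref   = $cache->get_from_cache( $key, { unsafe => 1 } );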
Jacek, I completely agree with your comment, and I think the safe option (deep copy) should be the default behavior. It will avoid regressions and hard-to-catch weird issues. We will be able to switch to the unsafe-but-faster option later, in a step-by-step process.
Created attachment 49239 [details] [review] Bug 16044: (followup) only clear L1 cache when needed This is a proposed enhancement that further increases the performance gain. Search times on a small dataset with this patch (and the following) drop from 2.2 to 1.2 seconds.
Created attachment 49240 [details] [review] Bug 16044: (followup) don't clone cache results for framework/authvals This is needed so as not to cancel out the performance gains of the L1 cache.
Two tiny nitpicks: * Why do the commit messages say L2 instead of L1? * Why is the variable named L1_cache instead of l1_cache or _l1_cache?
Jesse, I really like your patches and yes, that's the way to go. But I think they go too far for a first step. We will get a significant performance gain without any regressions (in theory!!) if we naively flush the L1 cache and deep copy in all cases. Could you move your patches to another bug report? We should also produce benchmarks to know when we are doing something interesting.
(In reply to Jesse Weaver from comment #13) > Two tiny nitpicks: > > * Why do the commit messages say L2 instead of L1? It's a typo. > * Why is the variable named L1_cache instead of l1_cache or _l1_cache? I don't care about the name of the var, we can change it. I thought L1_cache was the most appropriate.
+1 to the two-method approach, it seems the safest and most flexible way to do it :-) +1 to splitting out the framework followups into their own issue. +1 to building some benchmarks to prove the performance gain. Well done all, I'm super excited to see this moving forward.
I'm also trying to test this, but with kinda weird results so far. Searching (medium-size dataset, 120k biblios, 300k items, 10-200-20000 hits, XSLT processing enabled, testing with the initial patch set - without the last 2 followups):

1) CGI + memcache - no measurable differences between patched/unpatched
2) CGI + Cache::Memory - it's ca 2x slower for the searches with 50+ hits

With the 1st followup (attachment #49239 [details] [review]), both CGI + memcache and CGI + Cache::Memory are 2x slower :( But why is that? Supposedly, it should be neutral regarding search speed (or even a bit faster on average). The 2nd followup (#49240) just disables the "safety measures" in the 2 most speed-sensitive places - an instant, enormous performance gain! Not very "step-by-step", though. Whether it would be safe enough/reasonably safe to do at this stage - I have no idea.
For some reason, deep copying MARC framework structures with clone() is insanely slow:
- clone() from the Clone module: 62 ms !!!
- clone() from Clone::Fast: 21 ms
- Storable dclone(): 16.9 ms
- Storable thaw(freeze()): 17.0 ms
- fetching the framework from the DB directly (caching disabled in GetMarcStructure()): 36.6 ms

If I replace clone() with Storable dclone() in Koha::Cache, search speed is back to normal. Well, almost - it looks like fetching the framework directly from the L2 cache is still faster (12.8 ms, L2 = memcache) than getting a clone from the L1 cache.
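For reference, the kind of micro-benchmark behind numbers like these might look as follows (a sketch with a synthetic structure standing in for the MARC framework hash; Clone::Fast is omitted since it is not packaged everywhere):

    use Benchmark qw(timethese);
    use Clone     qw(clone);
    use Storable  qw(dclone freeze thaw);

    # synthetic stand-in for a large, deeply nested framework hash
    my $structure = { map { $_ => { repeatable => $_ % 2, tab => $_ % 10 } } 0 .. 999 };

    timethese( 1000, {
        'Clone::clone'    => sub { my $c = clone($structure) },
        'Storable dclone' => sub { my $c = dclone($structure) },
        'thaw(freeze())'  => sub { my $c = thaw( freeze($structure) ) },
    });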
BTW, that part of the 1st followup: my $get_sub = $self->{ref($self->{$cache}) . "_get"}; - return $get_sub ? $get_sub->($key) : $self->{$cache}->get($key); + my $value = $get_sub ? $get_sub->($key) : $self->{$cache}->get($key); + + $L1_cache{$key} = $value; + + return $value; is essential if you want to test this bug with L2 = memcache; without it, L1 cache is not getting populated in such setups. Which explains case 1) from comment #17 ;)
Some other thoughts, after toying with these patches for a little while.

1) If we want to stay "on the safe side" by default, a clone()/dclone() call needs to be introduced into the set_in_cache() sub as well

2) Looks like Cache::Memory as the L2 cache is a 100% waste of time and CPU cycles. If L2 = Cache::Memory, typically we'll be storing a lot of things in there - small and large, but never fetching them back

3) In that statement

> - if it's a reference, return a "deep clone" of what we keep in L1 cache in
> deserialized form (that should still be way faster than fetching it from the
> network and deserializing it each and every time)

I was completely wrong :). Cloning very complex data structures, like the MARC framework hashes in Koha, is not faster in perl than deserializing them. Cloning involves a) traversing the whole structure recursively - inspecting all the nested parts of it in various ways etc., and b) creating anew the copies of all those pesky hashes, hash keys, scalars etc. in perl's guts. Thawing the structure from its serialized form does b), but not a), so it can be faster. And it is - for the MARC default framework:
- Storable thaw() call is taking ca 10 ms
- freeze(): 6 ms
- dclone(): 17 ms
- decode_sereal(): 9 ms - probably not worth the trouble, there are no Debian packages for the Sereal module

Memcached::Fast uses Storable thaw() as its deserializer by default. Fetching the framework from memcached is faster (12.5 ms) than cloning it with dclone(). It only involves two "expensive" tasks: thaw() - 10 ms, and probably decompression (~2 ms). This may be the problem; a get_from_cache() which involves dclone() will be significantly slower, and Koha::Cache as a whole will suffer a severe performance loss if used in the "safe" way.
(In reply to Jacek Ablewicz from comment #20) > This may be the problem; get_from_cache() which involves dclone() will be > significantly slower, and Koha::Cache as a whole will suffer severe > performance loss if used in the "safe" way. But I think that issue is utterly fixable, if complex structures are stored in L1 cache in two forms { frozen => .. thawed => } We'll need to send 'thawed' scalar to memcached somehow packaged (e.g. in [ ]), to be able to distinguish if it was originally a scalar or not. And the complex structures in L2 cache will be double-frozen. But the 2nd freeze call (inside Memcached::Fast) should be very cheap - at that stage, this wouldn't be a complex data structure any more. Unless I'm very much mistaken (see comment #20 item 3) a side effect of this will be a performance gain - after this bug, fetching MARC framework with get_from_cache() will take 10 ms instead of 12.5 ms on average, even with Koha::Cache used entirely in the "safe" mode. Win-win situation ;).
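A sketch of that two-form idea (all attribute names assumed; the point is that handing out a fresh thaw() of the kept frozen form replaces the more expensive dclone()):

    use Storable qw(freeze thaw);

    sub set_in_cache {
        my ( $self, $key, $value ) = @_;
        if ( ref $value ) {
            my $frozen = freeze($value);
            # keep both forms in L1 ...
            $self->{L1_cache}{$key} = { frozen => $frozen, thawed => $value };
            # ... and ship the pre-frozen form to L2, wrapped so a plain
            # scalar can be told apart from a frozen structure
            $self->{memcached}->set( $key, [$frozen] );
        }
        else {
            $self->{L1_cache}{$key} = $value;
            $self->{memcached}->set( $key, $value );
        }
    }

    sub get_from_cache {
        my ( $self, $key ) = @_;
        my $entry = $self->{L1_cache}{$key};
        return $entry unless ref $entry;   # plain scalar
        return thaw( $entry->{frozen} );   # fresh copy each time: thaw, not clone
    }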
(In reply to Jacek Ablewicz from comment #21) > We'll need to send 'thawed' scalar to memcached somehow packaged (e.g. in [ s/'thawed' scalar/'frozen' scalar/
(In reply to Jacek Ablewicz from comment #21) > We'll need to send 'frozen' scalar to memcached somehow packaged (e.g. in [ > ]), to be able to distinguish if it was originally a scalar or not. And the > complex structures in L2 cache will be double-frozen. But the 2nd freeze > call (inside Memcached::Fast) should be very cheap - at that stage, this > wouldn't be a complex data structure any more. Unless I'm very much mistaken > (see comment #20 item 3) a side effect of this will be a performance gain - > after this bug, fetching MARC framework with get_from_cache() will take 10 > ms instead of 12.5 ms on average, even with Koha::Cache used entirely in the > "safe" mode. Win-win situation ;). Or just append some small flag to the end of each string we store in L2 and strip it on fetching (0: scalar, 1: structure frozen by Storable freeze(), 2: structure frozen by Sereal, 4: undef value etc. ;) - I don't think it will affect the performance of caching very small values in any measurable way, and all that double-frozen madness could be scrapped then.
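That alternative might be sketched like this (the flag values follow the comment above; the method names and everything else are assumptions):

    use Storable qw(freeze thaw);

    use constant { FLAG_SCALAR => '0', FLAG_FROZEN => '1', FLAG_UNDEF => '4' };

    sub l2_set {    # hypothetical low-level L2 setter
        my ( $self, $key, $value ) = @_;
        my $packed =
            !defined $value ? FLAG_UNDEF
          : ref $value      ? freeze($value) . FLAG_FROZEN
          :                   $value . FLAG_SCALAR;
        $self->{memcached}->set( $key, $packed );
    }

    sub l2_get {
        my ( $self, $key ) = @_;
        my $packed = $self->{memcached}->get($key);
        return unless defined $packed;
        my $flag = substr( $packed, -1, 1, '' );   # strip the trailing flag byte
        return undef         if $flag eq FLAG_UNDEF;
        return thaw($packed) if $flag eq FLAG_FROZEN;
        return $packed;                            # plain scalar
    }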
(In reply to Jacek Ablewicz from comment #17) > 1) CGI + memcache - no measurable differences between patched/unpatched It certainly comes from bug 16088, don't you think?
I think that trying to protect cache users from themselves for complex structures is going to come at too high a cost in complexity or performance, based on the discussions we've been having so far. Auditing the use of functions that call APIs returning large cached structures, to either add clone()s as necessary or remove mutation, will be more work but likely more benefit in the long run. We could look at using Hash::Util::lock_hash.
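For reference, Hash::Util's lock_hash makes a hash's keys and top-level values read-only, so accidental mutation dies loudly instead of silently corrupting the shared cached copy:

    use Hash::Util qw(lock_hash);

    my %framework = ( '245' => 'Title Statement' );
    lock_hash(%framework);

    # Both of these would now die at the call site:
    # $framework{'245'} = 'changed';   # Modification of a read-only value attempted
    # $framework{'100'} = 'added';     # Attempt to access disallowed key '100' ...

    # Note the lock is shallow; values that are references stay mutable inside.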
Created attachment 49490 [details] [review] Bug 16044: Use the L2 cache for any objects set in cache Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Created attachment 49491 [details] [review] Bug 16044: Make tests from t/Cache.t pass The timeout does not impact the L1 cache (it would be too time-consuming and not really useful to do that for this cache). To simulate the real timeout, we need to flush this L1 cache when needed. That is also done by adding a disable_L1_cache method. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Created attachment 49492 [details] [review] Bug 16044: Add tests to make sure structures will be copied Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Created attachment 49493 [details] [review] Bug 16044: Add deep cloning To avoid the cache being modified unintentionally, the default behavior of get_from_cache will be to deep copy if we are getting a structure. If the item is a scalar, it's simply returned. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Created attachment 49494 [details] [review] Bug 16044: Add an unsafe flag to Koha::Cache->get_from_cache If the caller/developer knows what he is doing, he can decide not to deep copy the structure. It will be faster but unsafe! If the structure is modified, the cache will also be updated. This option must be used with care and is not the default behavior. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
(In reply to Jonathan Druart from comment #24) > (In reply to Jacek Ablewicz from comment #17) > > 1) CGI + memcache - no measurable differences between patched/unpatched > > It certainly comes from bug 16088, don't you think? Very unlikely IMO; Bug 16088 is (mostly) Plack-specific. I got no significant speed differences in test 1) above because, for L2 = memcached, two-level caching usually doesn't happen at all: the only place in the code where L1 ever gets populated is the set_in_cache() subroutine. With L2 = memcached, set_in_cache() calls are (statistically) very rare - pretty much everything is fetched from L2, and L1 is left mostly unpopulated, hence no performance differences. This issue got fixed in the 1st followup from Jesse (now: 1st patch in the separate Bug 16140). The relevant part of that patch (see comment #19), when applied on top of Bug 16044, enables two-level caching for L2 = memcached setups. If I do that, performance differences for CGI + memcached are immediately apparent ;)
Created attachment 49545 [details] [review] Bug 16044: Use the L1 cache for any objects set in cache Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 49546 [details] [review] Bug 16044: Make tests from t/Cache.t pass The timeout does not impact the L1 cache (it would be too time-consuming and not really useful to do that for this cache). To simulate the real timeout, we need to flush this L1 cache when needed. That is also done by adding a disable_L1_cache method. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 49547 [details] [review] Bug 16044: Add tests to make sure structures will be copied Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 49548 [details] [review] Bug 16044: Add deep cloning To avoid the cache being modified unintentionally, the default behavior of get_from_cache will be to deep copy if we are getting a structure. If the item is a scalar, it's simply returned. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 49549 [details] [review] Bug 16044: Add an unsafe flag to Koha::Cache->get_from_cache If the caller/developer knows what he is doing, he can decide not to deep copy the structure. It will be faster but unsafe! If the structure is modified, the cache will also be updated. This option must be used with care and is not the default behavior. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Looks good, works as expected. Great job guys.
Pushed to Master - Should be in the May 2016 release. Thanks!
(In reply to Jacek Ablewicz from comment #19) > BTW, that part of the 1st followup: > > my $get_sub = $self->{ref($self->{$cache}) . "_get"}; > - return $get_sub ? $get_sub->($key) : $self->{$cache}->get($key); > + my $value = $get_sub ? $get_sub->($key) : $self->{$cache}->get($key); > + > + $L1_cache{$key} = $value; > + > + return $value; > > is essential if you want to test this bug with L2 = memcache; without it, L1 > cache is not getting populated in such setups. Which explains case 1) from > comment #17 ;) This comment was very relevant and I should have submitted a follow-up to make this patchset pertinent and useful.
Created attachment 49555 [details] [review] Bug 16044: Populate the L1 cache when L2 is fetched The whole patch set is not very pertinent if the L1 cache is not populated when L2 is fetched! This patch fixes this inconsistency.
(In reply to Jonathan Druart from comment #40) > Created attachment 49555 [details] [review] > Bug 16044: Populate the L1 cache when L2 is fetched > > The whole patch set is not very pertinent if the L1 cache is not > populated when L2 is fetched! > This patch fixes this inconsistency. Skip QA ok? Or I can ask piano to look.
I was going to ask Tomas or Jacek, but Jesse is around at this time :)
Created attachment 49568 [details] [review] Bug 16044: Populate the L1 cache when L2 is fetched The whole patch set is not very pertinent if the L1 cache is not populated when L2 is fetched! This patch fixes this inconsistency. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Created attachment 49647 [details] [review] Bug 16044: Populate the L1 cache when L2 is fetched The whole patch set is not very pertinent if the L1 cache is not populated when L2 is fetched! This patch fixes this inconsistency. Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com> Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
I see 2 sign off lines - one from a QA team member. Should we move this PQA?
(In reply to Katrin Fischer from comment #45) > I see 2 sign off lines - one from a QA team member. Should we move this PQA? Last patch pushed (Mar 29, 2016)
Patches pushed to 3.22.x, will be in 3.22.8
(In reply to Julian Maurice from comment #47) > Patches pushed to 3.22.x, will be in 3.22.8 Hi Julian, With Bug 16044 pushed for 3.22.8, some follow-ups of this report may be needed in 3.22.x branch too (Bug 16229, Bug 16412, Bug 16221).
(In reply to Jacek Ablewicz from comment #48) > (In reply to Julian Maurice from comment #47) > > Patches pushed to 3.22.x, will be in 3.22.8 > > Hi Julian, > > With Bug 16044 pushed for 3.22.8, some follow-ups of this report may be > needed in 3.22.x branch too (Bug 16229, Bug 16412, Bug 16221). Thanks, I will push them ASAP
(In reply to Julian Maurice from comment #49) > (In reply to Jacek Ablewicz from comment #48) > > (In reply to Julian Maurice from comment #47) > > > Patches pushed to 3.22.x, will be in 3.22.8 > > > > Hi Julian, > > > > With Bug 16044 pushed for 3.22.8, some follow-ups of this report may be > > needed in 3.22.x branch too (Bug 16229, Bug 16412, Bug 16221). > > Thanks, I will push them ASAP Done