Bug 16044

Summary: Define an L1 cache for all objects set in cache
Product: Koha
Reporter: Jonathan Druart <jonathan.druart>
Component: Architecture, internals, and plumbing
Assignee: Jonathan Druart <jonathan.druart>
Status: CLOSED FIXED
QA Contact: Testopia <testopia>
Severity: enhancement
Priority: P1 - high
CC: abl, brendan, julian.maurice, jweaver, katrin.fischer, m.de.rooy, martin.renvoize, srdjan, tomascohen
Version: Main
Hardware: All
OS: All
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15264
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=16088
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=16041
https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=16221
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Bug Depends on: 11998    
Bug Blocks: 16140, 16166, 16229, 16412, 16758    
Attachments: Bug 16044: Use the L2 cache for any objects set in cache
Bug 16044: Make tests from t/Cache.t pass
Bug 16044: Add tests to make sure structures will be copied
Bug 16044: Add deep cloning
Bug 16044: Add an unsafe flag to Koha::Cache->get_from_cache
Bug 16044: (followup) only clear L1 cache when needed
Bug 16044: (followup) don't clone cache results for framework/authvals
Bug 16044: Use the L2 cache for any objects set in cache
Bug 16044: Make tests from t/Cache.t pass
Bug 16044: Add tests to make sure structures will be copied
Bug 16044: Add deep cloning
Bug 16044: Add an unsafe flag to Koha::Cache->get_from_cache
Bug 16044: Use the L1 cache for any objects set in cache
Bug 16044: Make tests from t/Cache.t pass
Bug 16044: Add tests to make sure structures will be copied
Bug 16044: Add deep cloning
Bug 16044: Add an unsafe flag to Koha::Cache->get_from_cache
Bug 16044; Populate the L1 cache when L2 is fetched
Bug 16044: Populate the L1 cache when L2 is fetched
Bug 16044: Populate the L1 cache when L2 is fetched

Description Jonathan Druart 2016-03-10 16:00:37 UTC
I have added a POC on bug 11998 (patch "Add a L1 cache for sysprefs") to use an L1 cache instead of retrieving info from memcached every time.
If it works for sysprefs, we could use it for any objects set in cache (I have the MARC structure and frameworks in mind).
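
For illustration, a minimal sketch of the idea - an in-process hash consulted before the network cache (hypothetical and heavily simplified; the real code is in Koha::Cache and the bug 11998 patch):

    package Koha::Cache;

    use strict;
    use warnings;

    # L1: a plain in-process hash -- no network round trip needed.
    # It has to be flushed between requests to avoid stale data.
    our %L1_cache;

    sub get_from_cache {
        my ( $self, $key ) = @_;

        # Serve from the in-process L1 cache when possible...
        return $L1_cache{$key} if exists $L1_cache{$key};

        # ...otherwise fall back to L2 (memcached, Cache::Memory, ...)
        # and remember the result locally for subsequent calls.
        my $value = $self->{L2_cache}->get($key);    # hypothetical L2 handle
        $L1_cache{$key} = $value;
        return $value;
    }

    sub flush_L1_cache {
        %L1_cache = ();
    }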
Comment 1 Jonathan Druart 2016-03-10 16:01:52 UTC Comment hidden (obsolete)
Comment 2 Jacek Ablewicz 2016-03-12 09:52:01 UTC
Great idea IMO - but only as long as you are caching scalars (like in Bug 11998). Unfortunately, caching complex data structures that way globally (by reference) is inherently dangerous and regression-prone. It would be great to have such a mechanism in Koha, but to be on the safe side it should be used carefully and selectively (e.g., return a "deep clone" of the cached data structure by default, and the direct reference only when explicitly asked for).

This patch is tempting - it would provide some big, immediate speed gains for a lot of scripts if pushed right now, and at this moment it probably would not break anything important (but only because we don't use Koha::Cache all that much so far). But in the long term, such a method will make Koha::Cache essentially unusable for caching anything complex - to stay on the safe side, you'll typically need to meticulously audit a lot of the code to ensure that a data structure fetched from the cache is not getting modified anywhere between subsequent fetches.
Comment 3 Tomás Cohen Arazi 2016-03-14 13:39:46 UTC
The bug name is misleading. This patch is about generalizing the L1 cache, which is plack-and-sysprefs-only as of bug 11998.

As I said when signing off on 11998, separating the L1 from the L2 cache gives us better granularity for handling edge cases, but of course each edge case needs to be handled carefully.

Go for it Jonathan! We can fight about the worst cases (marcstructure, etc.) in more specific bugs. This bug/patch is only generalizing 11998, which is a very good idea.
Comment 4 Martin Renvoize 2016-03-15 06:42:09 UTC
+1 from me, I pretty much agree entirely with Tomás's comment. With great power comes great responsibility: we do need a good, thorough testing procedure as more things get added to the caches, but this bug in itself looks good to me.
Comment 5 Jacek Ablewicz 2016-03-15 08:41:44 UTC
To clarify comment #2: I don't think it's a bad idea; on the contrary, I think it's an excellent idea! But I also think that the current/initial implementation is, again, inherently dangerous. IMO it should at least have some basic safety measures built in to be (reasonably) safe. Something simple would do - e.g., let's have two variants of ->get_from_cache() (sketched below):

1) get_from_cache():
   - if the cached thingy in L1 cache is a scalar - just return it, no problems whatsoever
   - if it's a reference, return a "deep clone" of what we keep in L1 cache in deserialized form (that should still be way faster than fetching it from the network and deserializing it each and every time)

2) get_from_cache_just_gimme_a_raw_reference_I_know_what_I_am_doing()
   - initial/ultra-fast implementation which just returns references directly

Having two variants of get_from_cache() is not very elegant at first glance, but it's the fastest and simplest method I can imagine:

- it's faster than (e.g.) handling some extra parameters inside a one-to-rule-them-all get_from_cache() subroutine - sooner or later some scripts will be calling that sub 10000+ times, and such (seemingly very small) overheads have a nasty tendency to add up
- it will allow us to introduce the "I know what I'm doing / I like to live dangerously" variant selectively and gradually - preferably in separate bug reports, so if something somewhere explodes due to using the ultra-fast-but-not-always-safe caching method, it will be a lot easier to fix/revert it selectively without killing the performance of the entire caching system
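
A rough sketch of the two variants (sub names from this comment; illustrative only, assuming an in-process %L1_cache as sketched in the description):

    use Storable qw(dclone);

    our %L1_cache;

    # 1) Safe default: scalars come back as-is, references come back
    #    as a deep clone, so callers cannot corrupt the cached copy.
    sub get_from_cache {
        my ($key) = @_;
        my $value = $L1_cache{$key};
        return $value unless ref $value;
        return dclone($value);
    }

    # 2) Ultra-fast variant: the raw reference. Any mutation by the
    #    caller mutates the cache itself -- use with care.
    sub get_from_cache_just_gimme_a_raw_reference_I_know_what_I_am_doing {
        my ($key) = @_;
        return $L1_cache{$key};
    }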
Comment 6 Jonathan Druart 2016-03-15 15:48:29 UTC Comment hidden (obsolete)
Comment 7 Jonathan Druart 2016-03-15 16:43:17 UTC Comment hidden (obsolete)
Comment 8 Jonathan Druart 2016-03-15 16:43:21 UTC Comment hidden (obsolete)
Comment 9 Jonathan Druart 2016-03-15 16:43:25 UTC Comment hidden (obsolete)
Comment 10 Jonathan Druart 2016-03-15 16:47:30 UTC
Jacek,
I completely agree with your comment, and I think the safe option (deep copy) should be the default behavior. It will avoid regressions and hard-to-catch weird issues.
We will be able to switch to the unsafe but faster option later in a step-by-step process.
Comment 11 Jesse Weaver 2016-03-16 23:14:56 UTC Comment hidden (obsolete)
Comment 12 Jesse Weaver 2016-03-16 23:15:04 UTC Comment hidden (obsolete)
Comment 13 Jesse Weaver 2016-03-16 23:18:15 UTC
Two tiny nitpicks:

  * Why do the commit messages say L2 instead of L1?
  * Why is the variable named L1_cache instead of l1_cache or _l1_cache?
Comment 14 Jonathan Druart 2016-03-17 08:04:28 UTC
Jesse, I really like your patches and yes, it's the way to go.
But I think they go too far for a first step.
We will get a significant performance gain without any regressions (in theory!) if we naively flush the L1 cache and deep copy in all cases.
Could you move your patches to another bug report?
We should also put together benchmarks so we know whether we are actually gaining something.
Comment 15 Jonathan Druart 2016-03-17 08:05:26 UTC
(In reply to Jesse Weaver from comment #13)
> Two tiny nitpicks:
> 
>   * Why do the commit messages say L2 instead of L1?

It's a typo.

>   * Why is the variable named L1_cache instead of l1_cache or _l1_cache?

I don't care about the name of the var, we can change it. I thought L1_cache was the most appropriate.
Comment 16 Martin Renvoize 2016-03-17 08:29:05 UTC
+1 to the two-method approach, seems the safest and most flexible way to do it :-)
+1 to splitting out the framework follow-ups into their own issue.
+1 to building some benchmarks to prove the performance gain.

Well done all, I'm super excited to see this moving forward.
Comment 17 Jacek Ablewicz 2016-03-17 09:58:17 UTC
I'm also trying to test this, but with kinda weird results so far...

Searching (medium-size dataset: 120k biblios, 300k items, 10-200-20000 hits, XSLT processing enabled; testing with the initial patch set, without the last 2 followups):

1) CGI + memcache - no measurable differences between patched/unpatched
2) CGI + Cache::Memory - it's ca. 2x slower for searches with 50+ hits

With the 1st followup (attachment #49239 [details] [review]), both CGI + memcache and CGI + Cache::Memory are 2x slower :( But why is that? Supposedly, it should be neutral regarding search speed (or even a bit faster on average).

The 2nd followup (#49240) just disables the "safety measures" in the 2 most speed-sensitive places - an instant, enormous performance gain! Not very "step-by-step", though. Whether that would be a reasonably safe thing to do at this stage - I have no idea.
Comment 18 Jacek Ablewicz 2016-03-17 10:23:10 UTC
For some reason, deep copying MARC framework structures with clone() is insanely slow:

- clone() from the Clone module: 62 ms !!!
- clone() from Clone::Fast: 21 ms
- Storable dclone(): 16.9 ms
- Storable thaw(freeze()): 17.0 ms
- fetching framework from DB directly (caching disabled in GetMarcStructure()): 36.6 ms

If I replace clone() with Storable dclone() in Koha::Cache, search speed is back to normal. Well, almost - it looks like fetching the framework directly from the L2 cache is still faster (12.8 ms, L2 = memcache) than getting a clone from the L1 cache.
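
For anyone wanting to reproduce these numbers, a sketch of such a comparison with the core Benchmark module (the nested hash is only a stand-in for a real MARC framework, so absolute timings will differ):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);
    use Clone ();
    use Storable qw(dclone freeze thaw);

    # A toy nested structure standing in for GetMarcStructure() output.
    my $framework = {};
    for my $tag ( '000' .. '999' ) {
        for my $subfield ( 'a' .. 'z' ) {
            $framework->{$tag}{$subfield} = { repeatable => 0, lib => "label $subfield" };
        }
    }

    # Run each candidate for ~3 CPU seconds and compare rates.
    cmpthese( -3, {
        'Clone::clone' => sub { my $copy = Clone::clone($framework) },
        'dclone'       => sub { my $copy = dclone($framework) },
        'thaw(freeze)' => sub { my $copy = thaw( freeze($framework) ) },
    } );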
Comment 19 Jacek Ablewicz 2016-03-17 10:32:35 UTC
BTW, that part of the 1st followup:

     my $get_sub = $self->{ref($self->{$cache}) . "_get"};
-    return $get_sub ? $get_sub->($key) : $self->{$cache}->get($key);
+    my $value = $get_sub ? $get_sub->($key) : $self->{$cache}->get($key);
+
+    $L1_cache{$key} = $value;
+
+    return $value;

is essential if you want to test this bug with L2 = memcache; without it, the L1 cache is not getting populated in such setups. Which explains case 1) from comment #17 ;)
Comment 20 Jacek Ablewicz 2016-03-17 15:55:45 UTC
Some other thoughts, after toying with these patches for a little while.

1) If we want to stay "on the safe side" by default, a clone()/dclone() call needs to be introduced in the set_in_cache() sub as well

2) Looks like Cache::Memory as the L2 cache is a 100% waste of time and CPU cycles. If L2 = Cache::Memory, typically we'll be storing a lot of things in there - small and large - but never fetching them back

3) In that statement

> - if it's a reference, return a "deep clone" of what we keep in L1 cache in
> deserialized form (that should still be way faster than fetching it from the
> network and deserializing it each and every time)

I was completely wrong :). Cloning very complex data structures, like MARC framework hashes in Koha, is not faster in perl than deserializing them. Cloning involves

a) traversing the whole structure recursively - inspecting all the nested parts of it in various ways etc.
b) creating anew the copies of all those pesky hashes, hash keys, scalars etc. in perl guts

Thawing the structure from its serialized form does b) but not a), so it can be faster. And it is - for the default MARC framework:

- Storable thaw() call is taking ca 10 ms
- freeze(): 6 ms
- dclone(): 17 ms
- decode_sereal(): 9 ms - probably not worth the trouble, there are no Debian packages for the Sereal module

Cache::Memcached::Fast uses Storable thaw() as its deserializer by default. Fetching the framework from memcached is faster (12.5 ms) than cloning it with dclone(). It only involves two "expensive" tasks: thaw() - 10 ms - and probably decompression (~2 ms).

This may be the problem: a get_from_cache() which involves dclone() will be significantly slower, and Koha::Cache as a whole will suffer a severe performance loss if used in the "safe" way.
Comment 21 Jacek Ablewicz 2016-03-17 16:47:23 UTC
(In reply to Jacek Ablewicz from comment #20)

> This may be the problem; get_from_cache() which involves dclone() will be
> significantly slower, and Koha::Cache as a whole will suffer severe
> performance loss if used in the "safe" way.

But I think that issue is utterly fixable, if complex structures are stored in the L1 cache in two forms:

    {
        frozen => ...,
        thawed => ...,
    }

We'll need to send 'thawed' scalar to memcached somehow packaged (e.g. in [ ]), to be able to distinguish whether it was originally a scalar or not. And the complex structures in the L2 cache will be double-frozen. But the 2nd freeze call (inside Memcached::Fast) should be very cheap - at that stage it wouldn't be a complex data structure any more. Unless I'm very much mistaken (see comment #20, item 3), a side effect of this will be a performance gain - after this bug, fetching the MARC framework with get_from_cache() will take 10 ms instead of 12.5 ms on average, even with Koha::Cache used entirely in the "safe" mode. Win-win situation ;).
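
A sketch of that dual-form L1 entry (hypothetical helper names; scalars stay plain, references are kept both frozen and live):

    use strict;
    use warnings;
    use Storable qw(freeze thaw);

    our %L1_cache;

    sub set_in_L1 {
        my ( $key, $value ) = @_;
        $L1_cache{$key} = ref $value
            ? { frozen => freeze($value), thawed => $value }
            : $value;
    }

    # thaw() on a pre-frozen string skips the recursive traversal that
    # a deep clone needs, so the "safe" path gets cheaper, not slower.
    sub get_from_L1 {
        my ($key) = @_;
        my $entry = $L1_cache{$key};
        return $entry unless ref $entry;
        return thaw( $entry->{frozen} );
    }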
Comment 22 Jacek Ablewicz 2016-03-17 16:50:10 UTC
(In reply to Jacek Ablewicz from comment #21)

> We'll need to send 'thawed' scalar to memcached somehow packaged (e.g. in [

s/'thawed' scalar/'frozen' scalar/
Comment 23 Jacek Ablewicz 2016-03-17 18:03:43 UTC
(In reply to Jacek Ablewicz from comment #21)

> We'll need to send 'frozen' scalar to memcached somehow packaged (e.g. in [
> ]), to be able to distinguish if it was originally a scalar or not. And the
> complex structures in L2 cache will be double-frozen. But the 2nd freeze
> call (inside Memcached::Fast) should be very cheap - at that stage, this
> wouldn't be a complex data structure any more. Unless I'm very much mistaken
> (see comment #20 item 3) a side effect of this will be a performance gain -
> after this bug, fetching MARC framework with get_from_cache() will take 10
> ms instead of 12.5 ms on average, even with Koha::Cache used entirely in the
> "safe" mode. Win-win situation ;).

Or just append some small flag to the end of each string we store in L2 and strip it on fetching (0: scalar, 1: structure frozen by Storable freeze(), 2: structure frozen by Sereal, 4: undef value, etc.) ;) - I don't think it will affect the performance of caching very small values in any measurable way, and all that double-frozen madness could be scrapped then.
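
A sketch of that flag scheme (flag values as proposed above; hypothetical helpers, Sereal left out):

    use strict;
    use warnings;
    use Storable qw(freeze thaw);

    sub encode_for_L2 {
        my ($value) = @_;
        return '4'                  if !defined $value;   # 4: undef marker
        return freeze($value) . '1' if ref $value;        # 1: Storable-frozen structure
        return $value . '0';                              # 0: plain scalar
    }

    sub decode_from_L2 {
        my ($string) = @_;
        my $flag = chop $string;                # strip the trailing type flag
        return undef         if $flag eq '4';
        return thaw($string) if $flag eq '1';
        return $string;                         # '0': scalar, returned as-is
    }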
Comment 24 Jonathan Druart 2016-03-18 16:17:24 UTC
(In reply to Jacek Ablewicz from comment #17)
> 1) CGI + memcache - no measurable differences between patched/unpatched

It certainly comes from bug 16088, don't you think?
Comment 25 Jesse Weaver 2016-03-18 21:06:33 UTC
I think that trying to protect cache users from themselves for complex structures is going to come at too high a cost in complexity or performance, based on the discussions we've been having so far. Auditing the functions that call APIs returning large cached structures - to either add clone()s as necessary or remove the mutation - will be more work, but likely more benefit in the long run.

We could look at using Hash::Util::lock_hash.
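
For reference, lock_hash makes a hash read-only, so an accidental mutation dies loudly at the point of misuse instead of silently corrupting the shared entry (note it is shallow; Hash::Util also provides lock_hash_recurse for nested structures):

    use strict;
    use warnings;
    use Hash::Util qw(lock_hash);

    my %cached = ( marcflavour => 'MARC21', title_tag => '245' );
    lock_hash(%cached);

    print $cached{marcflavour}, "\n";    # reads are unaffected

    # Any write now dies ("Modification of a read-only value attempted"):
    eval { $cached{marcflavour} = 'UNIMARC' };
    print "blocked: $@" if $@;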
Comment 26 Jesse Weaver 2016-03-23 23:04:39 UTC Comment hidden (obsolete)
Comment 27 Jesse Weaver 2016-03-23 23:04:43 UTC Comment hidden (obsolete)
Comment 28 Jesse Weaver 2016-03-23 23:04:47 UTC Comment hidden (obsolete)
Comment 29 Jesse Weaver 2016-03-23 23:04:51 UTC Comment hidden (obsolete)
Comment 30 Jesse Weaver 2016-03-23 23:04:55 UTC Comment hidden (obsolete)
Comment 31 Jacek Ablewicz 2016-03-24 11:44:56 UTC
(In reply to Jonathan Druart from comment #24)
> (In reply to Jacek Ablewicz from comment #17)
> > 1) CGI + memcache - no measurable differences between patched/unpatched
> 
> It certainly comes from bug 16088, don't you think?

Very unlikely IMO, Bug 16088 is (mostly) plack-specific.

I got no significant speed differences in test 1) above because, for L2 = memcached, two-level caching usually doesn't happen at all: the only place in the code where L1 ever gets populated is the set_in_cache() subroutine. With L2 = memcached, set_in_cache() calls are (statistically) very rare - pretty much everything is fetched from L2, and L1 is left mostly unpopulated - hence no performance differences. This issue got fixed in the 1st followup from Jesse (now: 1st patch in the separate Bug 16140). The relevant part of that patch (see comment #19), when applied on top of Bug 16044, enables two-level caching for L2 = memcached setups. If I do that, performance differences for CGI + memcached are immediately apparent ;)
Comment 32 Tomás Cohen Arazi 2016-03-24 19:07:29 UTC
Created attachment 49545 [details] [review]
Bug 16044: Use the L1 cache for any objects set in cache

Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 33 Tomás Cohen Arazi 2016-03-24 19:07:37 UTC
Created attachment 49546 [details] [review]
Bug 16044: Make tests from t/Cache.t pass

The timeout does not impact the L1 cache (it would be too time-consuming
and not really useful to do that for this cache).
To simulate the real timeout, we need to flush this L1 cache when
needed.
It could also be done by adding a disable_L1_cache method.

Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 34 Tomás Cohen Arazi 2016-03-24 19:07:45 UTC
Created attachment 49547 [details] [review]
Bug 16044: Add tests to make sure structures will be copied

Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
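
The actual tests are in the attachment; a minimal illustration of the property they guard (hypothetical, not the real t/Cache.t code) could look like this:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use Storable qw(dclone);

    our %L1_cache;
    sub set_in_cache   { $L1_cache{ $_[0] } = $_[1] }
    sub get_from_cache { my $v = $L1_cache{ $_[0] }; return ref $v ? dclone($v) : $v }

    set_in_cache( frameworks => { '245' => { a => 'Title' } } );

    my $first = get_from_cache('frameworks');
    $first->{245}{a} = 'mangled by the caller';

    # The caller's mutation must not leak back into the cache,
    # and every fetch must hand out a distinct copy.
    my $second = get_from_cache('frameworks');
    is( $second->{245}{a}, 'Title', 'cached structure survives caller mutation' );
    isnt( "$first", "$second", 'each fetch returns a fresh copy' );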
Comment 35 Tomás Cohen Arazi 2016-03-24 19:07:54 UTC
Created attachment 49548 [details] [review]
Bug 16044: Add deep cloning

To avoid the cache being modified accidentally, the default behavior of
get_from_cache will be to deep copy if we are getting a structure.
If the item is a scalar, it's simply returned.

Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 36 Tomás Cohen Arazi 2016-03-24 19:08:11 UTC
Created attachment 49549 [details] [review]
Bug 16044: Add an unsafe flag to Koha::Cache->get_from_cache

If callers/developers know what they are doing, they can decide not to
deep copy the structure. It will be faster but unsafe!
If the structure is modified, the cache will also be updated.
This option must be used with care and is not the default behavior.

Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
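
Usage then looks like this (the cache key is hypothetical, and get_instance is assumed as the usual singleton accessor):

    use strict;
    use warnings;
    use Koha::Cache;

    my $cache = Koha::Cache->get_instance;

    # Default (safe): a deep copy -- mutations stay local to the caller.
    my $copy = $cache->get_from_cache('MarcStructure-0-MARC21');

    # Opt-in (unsafe): the raw reference, skipping the clone. Faster, but
    # any mutation by the caller writes straight through to the cache.
    my $raw = $cache->get_from_cache( 'MarcStructure-0-MARC21', { unsafe => 1 } );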
Comment 37 Tomás Cohen Arazi 2016-03-24 19:09:10 UTC
Looks good, works as expected. Great job guys.
Comment 38 Brendan Gallagher 2016-03-24 19:45:29 UTC
Pushed to Master - Should be in the May 2016 release.  Thanks!
Comment 39 Jonathan Druart 2016-03-24 20:45:55 UTC
(In reply to Jacek Ablewicz from comment #19)
> BTW, that part of the 1st followup:
> 
>      my $get_sub = $self->{ref($self->{$cache}) . "_get"};
> -    return $get_sub ? $get_sub->($key) : $self->{$cache}->get($key);
> +    my $value = $get_sub ? $get_sub->($key) : $self->{$cache}->get($key);
> +
> +    $L1_cache{$key} = $value;
> +
> +    return $value;
> 
> is essential if you want to test this bug with L2 = memcache; without it, L1
> cache is not getting populated in such setups. Which explains case 1) from
> comment #17 ;)

This comment was very relevant and I should have submitted a follow-up to make this patchset pertinent and useful.
Comment 40 Jonathan Druart 2016-03-24 20:49:29 UTC Comment hidden (obsolete)
Comment 41 Brendan Gallagher 2016-03-24 20:51:33 UTC
(In reply to Jonathan Druart from comment #40)
> Created attachment 49555 [details] [review] [review]
> Bug 16044; Populate the L1 cache when L2 is fetched
> 
> The whole patch set is not very pertinent if the L1 cache is not
> populated when L2 is fetched!
> This patch fixes this inconsistency.

Skip QA ok?  Or I can ask piano to look.
Comment 42 Jonathan Druart 2016-03-24 21:00:10 UTC
I was going to ask Tomas or Jacek, but Jesse is around at this time :)
Comment 43 Jesse Weaver 2016-03-24 22:06:02 UTC Comment hidden (obsolete)
Comment 44 Jacek Ablewicz 2016-03-29 18:37:09 UTC
Created attachment 49647 [details] [review]
Bug 16044: Populate the L1 cache when L2 is fetched

The whole patch set is not very pertinent if the L1 cache is not
populated when L2 is fetched!
This patch fixes this inconsistency.

Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Comment 45 Katrin Fischer 2016-03-29 21:37:02 UTC
I see 2 sign-off lines - one from a QA team member. Should we move this to PQA?
Comment 46 Brendan Gallagher 2016-03-29 22:27:21 UTC
(In reply to Katrin Fischer from comment #45)
> I see 2 sign off lines - one from a QA team member. Should we move this PQA?

Last patch pushed (Mar 29, 2016)
Comment 47 Julian Maurice 2016-06-16 12:01:40 UTC
Patches pushed to 3.22.x, will be in 3.22.8
Comment 48 Jacek Ablewicz 2016-06-22 12:18:52 UTC
(In reply to Julian Maurice from comment #47)
> Patches pushed to 3.22.x, will be in 3.22.8

Hi Julian,

With Bug 16044 pushed to 3.22.8, some follow-ups of this report may be needed in the 3.22.x branch too (Bug 16229, Bug 16412, Bug 16221).
Comment 49 Julian Maurice 2016-06-22 13:18:27 UTC
(In reply to Jacek Ablewicz from comment #48)
> (In reply to Julian Maurice from comment #47)
> > Patches pushed to 3.22.x, will be in 3.22.8
> 
> Hi Julian,
> 
> With Bug 16044 pushed for 3.22.8, some follow-ups of this report may be
> needed in 3.22.x branch too (Bug 16229, Bug 16412, Bug 16221).

Thanks, I will push them ASAP
Comment 50 Julian Maurice 2016-06-23 07:47:53 UTC
(In reply to Julian Maurice from comment #49)
> (In reply to Jacek Ablewicz from comment #48)
> > (In reply to Julian Maurice from comment #47)
> > > Patches pushed to 3.22.x, will be in 3.22.8
> > 
> > Hi Julian,
> > 
> > With Bug 16044 pushed for 3.22.8, some follow-ups of this report may be
> > needed in 3.22.x branch too (Bug 16229, Bug 16412, Bug 16221).
> 
> Thanks, I will push them ASAP

Done