Bug 16140 - Only clear L1 cache when needed
Summary: Only clear L1 cache when needed
Status: In Discussion
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: master
Hardware: All
OS: All
Importance: P2 enhancement
Assignee: Jesse Weaver
QA Contact: Testopia
URL:
Keywords:
Depends on: 16044
Blocks: 18232 17819
Reported: 2016-03-23 22:49 UTC by Jesse Weaver
Modified: 2020-01-13 05:34 UTC
CC List: 7 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 16140: Only clear L1 cache when needed (5.33 KB, patch)
2016-03-23 22:55 UTC, Jesse Weaver
Bug 16140: don't clone cache results for framework/authvals (1.68 KB, patch)
2016-03-23 22:55 UTC, Jesse Weaver
Bug 16140: ->flush_if_needed should be called on an instantiated Koha::Cache object (799 bytes, patch)
2016-03-25 13:27 UTC, Jonathan Druart
file used for benchmarking (731 bytes, application/x-perl)
2016-03-25 14:28 UTC, Jonathan Druart
Bug 16140: Only clear L1 cache when needed (4.02 KB, patch)
2016-04-29 17:15 UTC, Jesse Weaver
Bug 16140: don't clone cache results for framework/authvals (1.41 KB, patch)
2016-04-29 17:15 UTC, Jesse Weaver
Bug 16140: ->flush_if_needed should be called on an instantiated Koha::Cache object (803 bytes, patch)
2016-04-29 17:15 UTC, Jesse Weaver

Description Jesse Weaver 2016-03-23 22:49:45 UTC

    
Comment 1 Jesse Weaver 2016-03-23 22:55:52 UTC Comment hidden (obsolete)
Comment 2 Jesse Weaver 2016-03-23 22:55:56 UTC Comment hidden (obsolete)
Comment 3 Tomás Cohen Arazi 2016-03-24 19:16:59 UTC
Are we sure it doesn't cost more to calculate the invalidated-status than to retrieve the data on each request?
Comment 4 Jonathan Druart 2016-03-25 13:27:47 UTC Comment hidden (obsolete)
Comment 5 Jonathan Druart 2016-03-25 14:27:26 UTC
Quick benchmarks:

% more /dev/null > /tmp/time;
% export KOHA_INTRANET_URL=http://pro.koha-qa.vm
% for i in {1..10}; do {time perl test.t > /dev/null} 2>> /tmp/time;done;
echo $((`perl -pe 's/.*cpu ([^s]*) total/$1/' /tmp/time| tr '\n' '+'|sed 's/+$//'`))

# Before 16044
git reset --hard ffb17a2914c43e536155856a86fed374b7f26e9c # just before 16044
71.23 (7.389+6.932+6.966+6.937+6.941+6.663+7.827+7.237+7.075+7.259)

# After 16044
git reset --hard origin/master # + last patch on 16044
72.09 (6.881+7.236+8.261+7.005+7.975+6.909+6.816+6.860+7.043+7.100)

# With 16140
git bz apply 16140
63.60 (6.952+7.158+6.481+6.424+6.428+6.338+6.003+6.034+6.004+5.782)

# With 16088
git reset --hard origin/master # + last patch on 16044
git bz apply 16088
73.43 (9.092+7.246+7.020+7.053+6.859+7.081+7.045+6.932+7.068+8.037)

# With 16140 and 16088
git bz apply 16140
61.21 (7.403+6.036+6.127+5.881+5.674+5.971+5.779+5.901+5.643+6.794)

The standard deviation is very high and it seems that my laptop is not stable enough to get correct values. Could someone confirm/contradict them?
Comment 6 Jonathan Druart 2016-03-25 14:28:50 UTC
Created attachment 49590 [details]
file used for benchmarking
Comment 7 Jonathan Druart 2016-03-25 15:10:28 UTC
Comment 5 was for 20 results.

Here are the times for 100 results:

# Before 16044
262.99 (28.134+32.617+24.995+26.349+25.525+24.629+25.725+24.349+26.063+24.601)

# After 16044
255.346 (24.791+25.713+24.151+25.569+26.977+23.780+25.005+23.722+30.042+25.596)

# With 16140
219.02 (20.512+21.213+22.303+25.897+20.029+19.469+20.842+19.974+26.847+21.936)

# With 16088
293.16 (28.449+28.928+27.206+28.886+28.276+29.433+32.831+30.136+28.632+30.329)

# With 16140 and 16088
218.33 (21.278+23.491+20.935+21.529+21.854+19.016+21.408+23.760+21.577+23.485)
Comment 8 Jonathan Druart 2016-03-25 15:43:55 UTC
For the record, just retry the same on fc640d2 # just before bug 11998:

134.578 (13.868+12.963+13.374+13.732+13.039+13.538+12.972+13.200+14.647+13.245)
Comment 9 Jonathan Druart 2016-03-25 15:45:51 UTC
(In reply to Jonathan Druart from comment #8)
> For the record, just retry the same on fc640d2 # just before bug 11998:
> 
> 134.578
> (13.868+12.963+13.374+13.732+13.039+13.538+12.972+13.200+14.647+13.245)

For 20 results! So compare with comment 5 :)
Comment 10 Jonathan Druart 2016-03-25 15:59:54 UTC
Now let's try to compare the idea of this first patch (Bug 16140: Only clear L1 cache when needed) and the one on bug 16088 (avoid excessive L1 flush):

# 16044 + Bug 16140: Only clear L1 cache when needed

92.83 (11.043+8.967+8.916+9.276+9.709+8.686+9.119+9.055+8.850+9.205)
Comment 11 Jonathan Druart 2016-03-25 16:03:26 UTC
Now let's try to compare the idea of this first patch (Bug 16140: Only clear L1 cache when needed) and the one on bug 16088 (avoid excessive L1 flush):

# 16044 + Bug 16140: Only clear L1 cache when needed
89.07 (9.606+8.805+8.416+9.685+8.629+9.392+8.956+8.631+8.281+8.670)
Comment 12 Jonathan Druart 2016-03-25 16:12:44 UTC
If I remove the intelligent caching:
-        Koha::Cache->get_instance->flush_L1_if_needed();
+        Koha::Cache->flush_L1_cache();

I get:
58.30 (6.379+5.665+5.752+6.237+5.676+5.763+5.654+5.622+5.749+5.804)
Comment 13 Jonathan Druart 2016-03-25 17:21:45 UTC
To reinforce these numbers, I have reused the selenium scripts (bug 13691) used on https://wiki.koha-community.org/wiki/Benchmark_for_3.22:

v3.22.00
CP main = 3.45
CP add patron category = 4.83
CP add patron = 4.72
CP add items = 15.17
CP checkout = 11.57
CP checkin = 9.53

before 11998 (fc640d2)
CP main = 4.04
CP add patron category = 4.86
CP add patron = 4.87
CP add items = 16.38
CP checkout = 10.91
CP checkin = 10.99

master + 16140 + 16088 - intelligent flushing
CP main = 3.42
CP add patron category = 4.17
CP add patron = 4.67
CP add items = 12.82
CP checkout = 8.36
CP checkin = 7.56
Comment 14 Jacek Ablewicz 2016-03-30 11:35:45 UTC
I have kinda mixed feelings about the flush_L1_if_needed() feature. It provides a nice speed-up for plack setups, but - unless I'm very much mistaken - it also makes the L2 cache expiration mechanism unreliable under plack (?).
Comment 15 Jacek Ablewicz 2016-03-30 12:31:53 UTC
I've tried to review GetMarcStructure() calls in the existing code in an attempt to establish its "safety level". As a general rule, the framework hash-of-hashes-of-hashes returned by GetMarcStructure() tends to be abused a little bit here and there; we have a lot of statements like this

    $tagslib->{$fieldvalue}->{$subfvalue}->{'hidden'} || 0

i.e., this structure is prone to being messed up in many places due to autovivification of hash keys. However, if it is messed up everywhere in the same, somewhat predictable way, maybe it will not be such a big problem (?). But there are places in the code where we do

    foreach my $field ( keys %$tagslib )
    foreach my $subfield ( sort( keys %{ $tagslib->{$tag} } ) )

and so on, so adding { unsafe => 1 } to GetMarcStructure() certainly has quite a bit of potential for introducing some hard-to-catch regressions, especially under plack.
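To make that risk concrete, here is a tiny self-contained illustration (hypothetical tag/subfield values, not actual framework data): a lookup that merely reads a missing tag silently creates it, so a later keys %$tagslib loop sees an extra entry.

    use Modern::Perl;

    my $tagslib = { '245' => { 'a' => { hidden => 0 } } };

    # A lookup that merely *reads* a tag which does not exist...
    my $hidden = $tagslib->{'999'}->{'9'}->{'hidden'} || 0;

    # ...has autovivified the '999' and '9' levels. If $tagslib came from a
    # non-cloned cache entry, every later consumer now sees the extra keys:
    print join( ', ', sort keys %$tagslib ), "\n";    # prints: 245, 999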
Comment 16 Jonathan Druart 2016-03-30 14:02:58 UTC
(In reply to Jacek Ablewicz from comment #14)
> I have kinda mixed feelings about flush_L1_if_needed() feature. It provides
> nice speed-up for plack setups, but - unless I'm very much mistaken, it also
> makes L2 cache expiration mechanism unreliable under plack (?).

Regarding the benchmarks, this part will certainly be dropped. This bug report should focus on the "do not clone when unnecessary" part.
Jesse, could you confirm that?
Comment 17 Jesse Weaver 2016-03-30 22:44:15 UTC
I'm starting to think there's something very oddly system dependent here.

Running the same tests (each number inside the parentheses is the total runtime of 10 runs of Jonathan's tests, and the number in front is the total of those totals):

master (6a04ba598, which includes the last batch on bug 16044 to set the L1 cache more often):
232.81 (22.93+23.21+23.21+23.07+23.41+23.01+23.51+23.25+23.51+23.70)

master + 16088:
441.07 (43.28+43.87+44.00+43.90+44.35+44.54+44.96+44.53+43.92+43.72)
# I suspect this is because L1 cache hits, and thus deep-cloning, are increased.

master + 16088 + the second patch here (don't clone framework/authvals):
137.90 (13.28+13.81+14.02+13.79+13.81+13.32+13.96+14.00+13.95+13.96)

master + 16140 - intelligent flushing:
233.67 (23.61+23.02+23.48+23.64+22.98+23.62+23.44+23.54+23.60+22.74)

master + 16140:
137.34 (13.92+13.90+13.75+13.70+13.42+13.84+13.93+13.74+13.83+13.31)

master + 16140 + 16088 - intelligent flushing:
139.33 (14.03+14.03+13.96+14.06+13.60+13.88+13.87+14.02+14.18+13.70)

master + 16140 + 16088:
134.67 (13.40+13.33+13.52+13.38+14.03+13.43+12.97+13.42+13.59+13.60)


This makes sense to me. Reducing deep cloning makes the biggest difference, and as we add 16088 then 16140 the number of L1 flushes drops from:
  4200 (100 runs * 21 L1 flushes per request) to
  100 (100 runs * 1 L1 flush per request) to
  1.

I'm currently trying an approach that recursively locks any refs passed to the L1 cache, but am having trouble making it performant.
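For reference, here is one way that "recursively lock refs" idea could look, as a sketch using Hash::Util from core Perl (whether the actual experiment used this module is an assumption):

    use Modern::Perl;
    use Hash::Util qw( lock_hash_recurse );

    my $cached = { '245' => { 'a' => { hidden => 0 } } };
    lock_hash_recurse(%$cached);

    # Reads of existing keys still work...
    my $ok = $cached->{'245'}{'a'}{'hidden'};

    # ...but an accidental autovivifying lookup now dies instead of
    # silently corrupting the shared structure:
    my $oops = eval { $cached->{'999'}{'9'}{'hidden'} };
    warn "caught: $@" if $@;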
Comment 18 Jacek Ablewicz 2016-03-31 07:00:14 UTC
(In reply to Jesse Weaver from comment #17)
> I'm starting to think there's something very oddly system dependent here.

+1

> master (6a04ba598, which includes the last batch on bug 16044 to set the L1
> cache more often):
> 232.81 (22.93+23.21+23.21+23.07+23.41+23.01+23.51+23.25+23.51+23.70)

Btw, those numbers are in what units? Seconds, milliseconds (microfortnights, nanocenturies, ...)? They look way too high to be in seconds (unless you are running Koha on an Arduino Pro Mini or something like that) and way too low to be in milliseconds (unless you are running Koha on some liquid-helium-cooled test rig on steroids).
Comment 19 Jonathan Druart 2016-03-31 07:00:55 UTC
(In reply to Jesse Weaver from comment #17)
> I'm starting to think there's something very oddly system dependent here.

Why do you say that? Your numbers confirm mine.
Comment 20 Jonathan Druart 2016-03-31 07:02:36 UTC
(In reply to Jacek Ablewicz from comment #18)
> (In reply to Jesse Weaver from comment #17)
> > master (6a04ba598, which includes the last batch on bug 16044 to set the L1
> > cache more often):
> > 232.81 (22.93+23.21+23.21+23.07+23.41+23.01+23.51+23.25+23.51+23.70)
> 
> Btw, those numbers are in what units? Seconds, miliseconds (microfortnights,
> nanocenturies, ..) ? They look way too high to be in seconds (unless you are
> running Koha on Arduino Pro Mini or something like that) and way to low to
> be in miliseconds (unless you are running Koha on some liquid-helium-cooled
> test rig on steroids).

It's seconds: "(each number inside the parentheses is the total runtime of 10 runs of Jonathan's tests, and the number of front is the total of those totals)"
Comment 21 Jacek Ablewicz 2016-03-31 07:50:32 UTC
(In reply to Jonathan Druart from comment #20)
> (In reply to Jacek Ablewicz from comment #18)
> > (In reply to Jesse Weaver from comment #17)
> > > master (6a04ba598, which includes the last batch on bug 16044 to set the L1
> > > cache more often):
> > > 232.81 (22.93+23.21+23.21+23.07+23.41+23.01+23.51+23.25+23.51+23.70)
> > 
> > Btw, those numbers are in what units? Seconds, miliseconds (microfortnights,
> > nanocenturies, ..) ? They look way too high to be in seconds (unless you are
> > running Koha on Arduino Pro Mini or something like that) and way to low to
> > be in miliseconds (unless you are running Koha on some liquid-helium-cooled
> > test rig on steroids).
> 
> It's seconds: "(each number inside the parentheses is the total runtime of
> 10 runs of Jonathan's tests, and the number of front is the total of those
> totals)"

So is 232.81 seconds a total for 100 test runs (1 login + 1 search taking 2.33 seconds on average), or a total for 10 test runs (login + search taking 23.3 seconds on average)?
Comment 22 Jonathan Druart 2016-03-31 08:39:01 UTC
(In reply to Jacek Ablewicz from comment #21)
> So 232.81 seconds is a total for 100 test runs (1 log in + 1 search taking
> 2.33 seconds on average), or is it a total for 10 test runs (log in + search
> taking 23.3 seconds on average)?

From comment 5:

> % for i in {1..10}; do {time perl test.t > /dev/null} 2>> /tmp/time;done;
> echo $((`perl -pe 's/.*cpu ([^s]*) total/$1/' /tmp/time| tr '\n' '+'|sed 's/+$//'`))

The file test.t is the one attached to this bug report ("file used for benchmarking"); it does 1 run: 1 login + 1 search using Test::WWW::Mechanize.
The output gives the total time for the 10 runs.

> 71.23 (7.389+6.932+6.966+6.937+6.941+6.663+7.827+7.237+7.075+7.259)

"71.23" is the output of the previous command and is the addition of the execution times of the 10 runs (1 run takes ~7sec) which are contain in /tmp/time
Comment 23 Jacek Ablewicz 2016-03-31 20:05:03 UTC
Some more benchmarks :). Test setup: CGI, L2 = memcached, medium dataset (130K biblio records, 300K items), one test run = log in + staff catalogue search for the 'z' term (like in comment #5), XSLT processing enabled, numSearchResults = 100 (i.e. 100 results per page), maxRecordsForFacets = 200, StaffAuthorisedValueImages disabled (Bug 16041).

# master
146.05 (14.52+14.63+14.63+14.73+14.81+14.71+14.16+14.65+14.60+14.61)

# master + Bug 16166
90.7 (9.15+9.10+9.16+8.41+9.19+9.19+9.11+9.19+9.02+9.18)

# master + 2nd patch from Bug 16140 (aka don't clone framework/authvals)
71.61 (7.22+7.19+7.13+7.16+7.13+7.10+7.27+7.18+7.05+7.18)

# master + 2nd patch from Bug 16140 + Bug 16166
71.15 (7.11+7.10+7.20+7.22+7.19+6.52+7.24+7.28+7.10+7.19)

The shell test script from comment #5 does not quite work for me for some reason:

> % for i in {1..10}; do {time perl test.t > /dev/null} 2>> /tmp/time;done;
> echo $((`perl -pe 's/.*cpu ([^s]*) total/$1/' /tmp/time| tr '\n' '+'|sed 's/+$//'`))

It looks like the 'time' used in it is a shell built-in of some kind, specific to a particular Unix shell (which one?); instead, I used '/usr/bin/time -p', and postprocessed the timing results through this simple Perl script:


    #!/usr/bin/perl
    # Sum the "real" (wall clock) timings produced by /usr/bin/time -p.

    use Modern::Perl;

    my @timings;
    while (<>) {
        chomp;
        # Keep only the "real N.NN" lines from the time output.
        /^real ([\d\.]+)/ || next;
        push(@timings, $1);
    }

    my $total = 0;
    map { $total += $_; } @timings;
    print $total." (".join('+', @timings).")\n";

Note that I'm measuring the so-called "real" (wall clock) timings here; I have no idea what exactly is being measured in comment #5 - hopefully something similar?
Comment 24 Jacek Ablewicz 2016-04-01 06:30:32 UTC
(In reply to Jonathan Druart from comment #16)
> (In reply to Jacek Ablewicz from comment #14)
> > I have kinda mixed feelings about flush_L1_if_needed() feature. It provides
> > nice speed-up for plack setups, but - unless I'm very much mistaken, it also
> > makes L2 cache expiration mechanism unreliable under plack (?).
> 
> Regarding the benchmarks, this part will certainly be dropped.

Without that part, the 2nd patch (don't clone cache results for framework/authvals) would be a whole lot less risky under plack - it may still cause some (hopefully minor) issues, but at least they will be predictable & reproducible under plack as well as in CGI setups.
Comment 25 Jonathan Druart 2016-04-04 08:35:04 UTC
(In reply to Jacek Ablewicz from comment #23)

> Shell test script from comment #5 does not quite work for me for some reasons
> 
> > % for i in {1..10}; do {time perl test.t > /dev/null} 2>> /tmp/time;done;
> > echo $((`perl -pe 's/.*cpu ([^s]*) total/$1/' /tmp/time| tr '\n' '+'|sed 's/+$//'`))

It's zsh.

> Note that I'm measuring so-called "real" (wall clock) timings in here; I
> have no idea what exactly is being measured in comment #5 - hopefully
> someting similar?

I have used the total CPU time (if the output looks like '1.95s user 0.05s system 68% cpu 2.920 total', I got 2.920).
Comment 26 Jesse Weaver 2016-04-29 17:15:17 UTC
Created attachment 51024 [details] [review]
Bug 16140: Only clear L1 cache when needed

By storing a modification time in memcached, we can check the upstream
cache at the start of every request, and only clear the L1 cache if the
upstream has actually been modified.

This and the following patch get noticeable performance improvements on
top of bug 16044: ~1.2 vs 2.2 seconds for a search on a small dataset.
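A minimal sketch of the mechanism described in this commit message (not the actual patch; the in-process hash, the timestamp variable and the memcached key name are illustrative assumptions):

    use Modern::Perl;
    use Koha::Cache;

    # Per-worker (in-process) state; names here are illustrative only.
    my %L1_cache;
    my $l1_last_sync = 0;

    sub flush_L1_if_needed {
        my ($cache) = @_;

        # Whatever invalidates the shared (memcached) cache also bumps this
        # timestamp key, so reading one small key per request is cheap.
        my $upstream = $cache->get_from_cache('Koha_cache_flushed_at') // 0;

        # Only wipe the per-worker L1 hash if the upstream cache really
        # changed since this worker last flushed.
        if ( $upstream > $l1_last_sync ) {
            %L1_cache     = ();
            $l1_last_sync = $upstream;
        }
        return;
    }

    # Called once at the start of every request:
    flush_L1_if_needed( Koha::Cache->get_instance() );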
Comment 27 Jesse Weaver 2016-04-29 17:15:21 UTC
Created attachment 51025 [details] [review]
Bug 16140: don't clone cache results for framework/authvals

This is needed so as not to cancel out the performance gains of the L1
cache.
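Roughly, the no-clone retrieval amounts to the following (a sketch, not the actual patch; the cache key shown is illustrative, and { unsafe => 1 } is the option discussed in comment 15):

    use Modern::Perl;
    use Koha::Cache;

    my $cache         = Koha::Cache->get_instance();
    my $frameworkcode = '';    # default framework, for illustration

    # Return the cached structure directly instead of deep-cloning it on
    # every hit; the cache key below is illustrative, not necessarily the
    # one Koha really uses.
    my $tagslib = $cache->get_from_cache( "MarcStructure-0-$frameworkcode",
        { unsafe => 1 } );

    # The caller must then treat $tagslib as read-only, otherwise
    # autovivification leaks into every later request of the same worker.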
Comment 28 Jesse Weaver 2016-04-29 17:15:26 UTC
Created attachment 51026 [details] [review]
Bug 16140: ->flush_if_needed should be called on an instantiated Koha::Cache object
Comment 29 Marcel de Rooy 2016-07-28 09:52:53 UTC
What is the current status of this discussion? ;)
Comment 30 Kyle M Hall 2016-09-06 15:57:15 UTC
(In reply to Marcel de Rooy from comment #29)
> What is the current status of this discussion? ;)

As far as I know Jesse is no longer working on this since he's moved on. I'm not exactly sure what we should do about this.
Comment 31 Jonathan Druart 2016-09-07 07:18:24 UTC
(In reply to Marcel de Rooy from comment #29)
> What is the current status of this discussion? ;)

IIRC the flush_L1_if_needed is a good idea, but we did not notice a large performance gain.
We could try to revive it in a few months if we want to save some ms per query.
Patch 2 should be moved to its own bug report (note that GetMarcStructure is done on bug 16365).
Comment 32 David Cook 2019-09-19 01:36:03 UTC
Sounds like this one might be worth closing
Comment 33 Katrin Fischer 2020-01-11 11:10:10 UTC
(In reply to David Cook from comment #32)
> Sounds like this one might be worth closing

Should we close?
Comment 34 Jonathan Druart 2020-01-11 11:29:36 UTC
I do not think so, there are interesting ideas that could be reused.
Comment 35 David Cook 2020-01-13 05:34:08 UTC
(In reply to Jonathan Druart from comment #34)
> I do not think so, there are interesting ideas that could be reused.

Perhaps those should be spun off into their own specific bug reports? 

I just re-read the comments, and I don't think it's entirely clear what's being discussed here.
I am interested in L1 cache issues, which I think is what brought me here, but I'm not sure what the actual issue in this report is. Maybe it just needs to be renamed?
I am interested in L1 cache issues, which I think is what brought me here, but not sure what's the actual issue in this report? Maybe it just need to be renamed?