Bug 13193 - Make Memcached usage fork safe
Summary: Make Memcached usage fork safe
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: Main
Hardware: All All
Importance: P3 major
Assignee: Joonas Kylmälä
QA Contact: Testopia
URL:
Keywords: dependency
Depends on:
Blocks: 24642
Reported: 2014-11-03 19:31 UTC by Barton Chittenden
Modified: 2020-11-30 21:46 UTC (History)
11 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
*Important Note*: You will need to make sure you install `Cache::Memcached::Fast::Safe` to continue to use memcached after this.
Version(s) released in:
20.05.00, 19.11.04
Circulation function:


Attachments
Bug 13193: Make Memcached usage fork safe (3.27 KB, patch)
2019-09-16 10:48 UTC, Joonas Kylmälä
Details | Diff | Splinter Review
Bug 13193: Make Memcached usage fork safe (3.27 KB, patch)
2019-09-16 12:38 UTC, Joonas Kylmälä
Details | Diff | Splinter Review
Bug 13193: Make Memcached usage fork safe (3.40 KB, patch)
2019-09-17 04:04 UTC, David Cook
Details | Diff | Splinter Review
about.pl page (454.08 KB, image/png)
2019-09-17 05:48 UTC, Mason James
Details
[Draft] Integration test for testing memcached client (3.15 KB, patch)
2019-09-18 02:55 UTC, David Cook
Details | Diff | Splinter Review
[Draft] Integration test Koha::Cache and memcached (3.55 KB, patch)
2019-09-18 05:03 UTC, David Cook
Details | Diff | Splinter Review
Bug 13193: Make Memcached usage fork safe (1.04 KB, patch)
2020-02-07 06:24 UTC, Mason James
Details | Diff | Splinter Review
Bug 13193: Add new module to debian/control file (1.07 KB, patch)
2020-02-07 09:59 UTC, Mason James
Details | Diff | Splinter Review

Description Barton Chittenden 2014-11-03 19:31:23 UTC
In bug 13150, we came across the following error:

Can't use string ("") as a HASH ref while "strict refs" in use at /usr/share/koha/lib/C4/Biblio.pm line 1635.

Ultimately, this was determined to be caused by a data structure corrupted by memcached.

I would suggest creating an entry in a similar hash that contains a well-known string that could be checked upon entering any page. If the memcached copy didn't match the well-known value, we would know that memcached is having problems, and throw a warning of some sort, advising that memcached should be restarted.
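
A minimal sketch of that canary idea, assuming a Koha::Cache-style interface (the helper name, key, and value here are hypothetical, not part of any patch on this bug):

use Koha::Caches;

# Hypothetical canary: a well-known key/value checked on each request.
use constant CANARY_KEY   => 'koha_cache_canary';
use constant CANARY_VALUE => 'KOHA-CACHE-OK';

sub cache_looks_healthy {
    my $cache = Koha::Caches->get_instance();
    my $got   = $cache->get_from_cache(CANARY_KEY);
    if ( !defined $got ) {
        # First sight of the canary: plant it and assume health for now.
        $cache->set_in_cache( CANARY_KEY, CANARY_VALUE );
        return 1;
    }
    return $got eq CANARY_VALUE;
}

warn "memcached may be misbehaving; consider restarting it"
    unless cache_looks_healthy();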
Comment 1 Olli-Antti Kivilahti 2017-01-02 09:44:45 UTC
Interesting find.

Do you know if memcached corrupted all the key-values inside it or if this was an isolated incident which only affected your specific data-structure?

We could add a static check value (not calculated from the data) after every key and value we push to memcached. This shouldn't have much overhead, and we could check that the last character of a key or value must be, for example, '!'.

This would protect against corruption.
We could log each incident using log4perl and collect statistics over a longer period to find out whether we need to further improve the reliability of memcached.
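
A rough sketch of that sentinel idea (wrapper names are hypothetical; this was never implemented on this bug):

use constant SENTINEL => '!';

# Append a static sentinel on write; verify and strip it on read.
# String values only: frozen structures would need the sentinel applied
# to the serialized form.
sub set_with_sentinel {
    my ( $cache, $key, $value ) = @_;
    return $cache->set( $key, $value . SENTINEL );
}

sub get_with_sentinel {
    my ( $cache, $key ) = @_;
    my $raw = $cache->get($key);
    return undef unless defined $raw;
    my $s = SENTINEL;
    unless ( $raw =~ s/\Q$s\E\z// ) {
        warn "cache value for '$key' failed the sentinel check";
        return undef;    # treat as a miss so callers fall back to the database
    }
    return $raw;
}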

Offtopic: Cache::Memcached::Fast doesn't have a maintainer any more.
Comment 2 Marcel de Rooy 2017-01-05 09:37:48 UTC
I've also occasionally seen some strange things coming from the cache, but it seems that just flushing the cache would be enough, rather than restarting memcached.
Comment 3 David Cook 2019-09-02 07:05:22 UTC
I might also be having some caching issues where the wrong system preference value seems to be coming back from the caches, but still digging into that one...
Comment 4 Joonas Kylmälä 2019-09-16 09:19:40 UTC
I don't think there is corruption happening in memcached, but rather a race condition caused by us using Cache::Memcached::Fast even though it is not thread/fork safe. I fixed this issue by replacing it with Cache::Memcached::Fast::Safe; hopefully I will manage to send a patch soon.
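
The patch itself is attached later; as a hedged sketch, the change amounts to constructing the fork-safe client in place of the plain one (constructor options here are illustrative):

use Cache::Memcached::Fast::Safe;    # instead of Cache::Memcached::Fast

my $memcached = Cache::Memcached::Fast::Safe->new({
    servers   => ['localhost:11211'],
    namespace => 'koha_:',
});
# Same API as Cache::Memcached::Fast, but the wrapper detects a pid change
# after fork() and reconnects instead of reusing the inherited socket.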
Comment 5 Joonas Kylmälä 2019-09-16 09:53:23 UTC
I started to work on a patch for this.
Comment 6 Joonas Kylmälä 2019-09-16 10:40:09 UTC
To reproduce, you can run the following snippet:

-------------------------------------------------------------
use C4::Context;
use C4::SIP::Sip::MsgType;

my $threads = 50;
for (1 .. $threads) {
    if (fork() == 0) {
        # Child: trigger syspref lookups via the SIP auth path.
        C4::SIP::Sip::MsgType::api_auth("DDDDDD", "fffffff", "YY");
        exit;
    }
}
wait() for 1 .. $threads;    # reap every child
-------------------------------------------------------------

and the following diff:

-------------------------------------------------------------
--- a/C4/Context.pm
+++ b/C4/Context.pm
@@ -418,6 +418,7 @@ sub preference {
     if ($use_syspref_cache) {
         $syspref_cache = Koha::Caches->get_instance('syspref') unless $syspref_cache;
         my $cached_var = $syspref_cache->get_from_cache("syspref_$var");
+warn "Cache for $var is $cached_var";
         return $cached_var if defined $cached_var;
     }
--------------------------------------------------------------

Notice how *sometimes* the warn message reads "Cache for version is 1d", with 1d as the value instead of an actual Koha version like 19.0600013.
Comment 7 Joonas Kylmälä 2019-09-16 10:48:34 UTC
Created attachment 92827 [details] [review]
Bug 13193: Make Memcached usage fork safe

When a high enough number of forks try to access for example system
preferences with Koha::Cache using memcached as backend the results of
different cache requests get mixed up.

The problem is fixed by using Cache::Memcached::Fast::Safe that is a
fork safe verson of Cache::Memcached::Fast.

Sponsored-by: The National Library of Finland
Comment 8 Joonas Kylmälä 2019-09-16 10:49:24 UTC
Waiting for someone to sign-off the patch!
Comment 9 Joonas Kylmälä 2019-09-16 12:38:16 UTC
Created attachment 92835 [details] [review]
Bug 13193: Make Memcached usage fork safe

When a high enough number of forks try to access for example system
preferences with Koha::Cache using memcached as backend the results of
different cache requests get mixed up.

The problem is fixed by using Cache::Memcached::Fast::Safe that is a
fork safe version of Cache::Memcached::Fast.

Sponsored-by: The National Library of Finland
Comment 10 Joonas Kylmälä 2019-09-16 12:38:56 UTC
I fixed a typo "verson" in the commit message.
Comment 11 Joonas Kylmälä 2019-09-16 12:46:54 UTC
Bumping the criticality of this patch from low to at least normal, since returning the wrong system preference could potentially allow unwanted access to some things or block access for some users (we had a case where a SIP user was not able to connect because the system was supposedly in "maintenance mode": the timeout syspref value was returned instead of the Koha database version number).
Comment 12 David Cook 2019-09-17 01:36:10 UTC
(In reply to Joonas Kylmälä from comment #4)
> I don't think there is corruption happening in memcached, but rather a race
> condition caused by us using Cache::Memcached::Fast even though it is not
> thread/fork safe. I fixed this issue by replacing it with
> Cache::Memcached::Fast::Safe; hopefully I will manage to send a patch soon.

Joonas, you are my favourite person in the world right now. 

I've been bumping into this problem, and it's been driving me insane. 

I haven't verified it yet, but I will as soon as I can! 

--

Reading through the documentation, it looks like we could still use Cache::Memcached::Fast if we were diligent with the use of "disconnect_all", but it would take a lot of effort to refactor Koha to do that, so I think it would be better to switch to Cache::Memcached::Fast::Safe as you've suggested.
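
As a hedged sketch of what that diligence would mean (the guard function is hypothetical; disconnect_all itself is a real Cache::Memcached::Fast method):

use Cache::Memcached::Fast;

my $cache = Cache::Memcached::Fast->new({ servers => ['localhost:11211'] });
my $owner_pid = $$;

# Hypothetical guard to call before every cache operation in forking code.
sub ensure_private_connection {
    if ( $$ != $owner_pid ) {
        # In a child: drop the sockets inherited from the parent so the
        # next request opens a fresh connection instead of sharing one.
        $cache->disconnect_all;
        $owner_pid = $$;
    }
}

ensure_private_connection();
my $value = $cache->get('some_key');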

The downside is that I don't think Cache::Memcached::Fast::Safe is packaged for Debian (https://packages.debian.org/search?suite=default&section=all&arch=any&searchon=names&keywords=libcache-memcached-fast-safe-perl), so Mirko would have to package it and put it in Koha's APT repository.
Comment 13 David Cook 2019-09-17 03:54:20 UTC
Using the code provided, I was able to reproduce the problem, although I found it easier to see using 100 iterations rather than 50 iterations. 

Cache for timeout is 1d at C4/Context.pm line 420.
Cache for timeout is 18.1101000 at C4/Context.pm line 420.
Cache for version is 1d at C4/Context.pm line 420.
Cache for version is 1d at C4/Context.pm line 420.
Argument "1d" isn't numeric in numeric lt (<) at C4/Auth.pm line 1407.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
...
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Use of uninitialized value $cached_var in concatenation (.) or string at C4/Context.pm line 420.
Cache for timeout is  at C4/Context.pm line 420.
Use of uninitialized value $cached_var in concatenation (.) or string at C4/Context.pm line 420.
Cache for version is  at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Comment 14 David Cook 2019-09-17 04:01:04 UTC
cpanm Cache::Memcached::Fast::Safe
git bz apply 13191

And now the output is beautifully accurate:

Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Comment 15 David Cook 2019-09-17 04:02:22 UTC
I'm actually going to bump this up to critical, since I think this is probably impacting libraries more than we all realize. 

I have a library that is having lots of issues because of this problem, and a resolution ASAP would be great.
Comment 16 David Cook 2019-09-17 04:04:44 UTC
Created attachment 92867 [details] [review]
Bug 13193: Make Memcached usage fork safe

When a high enough number of forks try to access for example system
preferences with Koha::Cache using memcached as backend the results of
different cache requests get mixed up.

The problem is fixed by using Cache::Memcached::Fast::Safe that is a
fork safe version of Cache::Memcached::Fast.

Sponsored-by: The National Library of Finland
Signed-off-by: David Cook <dcook@prosentient.com.au>

Works as described, and solves an insidious, difficult-to-debug
problem in Koha.
Comment 17 David Cook 2019-09-17 04:05:43 UTC
Updated the title to be more in line with the actual content of the report.
Comment 18 Katrin Fischer 2019-09-17 04:38:47 UTC
Adding dependency keyword to get Mirko's opinion on the dependency change.
Comment 19 Mason James 2019-09-17 05:48:56 UTC
Created attachment 92868 [details]
about.pl page

the about.pl page looks happy and correct... great work!
Comment 20 Martin Renvoize (ashimema) 2019-09-17 06:22:39 UTC
I'd really like to see a regression test added for this if at all possible
Comment 21 Joonas Kylmälä 2019-09-17 06:59:28 UTC
(In reply to Martin Renvoize from comment #20)
> I'd really like to see a regression test added for this if at all possible

This might be quite difficult to reproduce in a test: on my machine it happened at 50 forks, but on David's machine at 100 forks, so someone might have a machine where 1000 forks are required, and doing 1000 forks on a lower-specced machine might freeze it completely. I currently cannot come up with any way to test this specific case consistently.

Also, while looking online for ways to test this, I came across the following discussion, and there doesn't seem to be any approach we could currently use with Koha's testing framework: <https://softwareengineering.stackexchange.com/questions/196105/testing-multi-threaded-race-conditions>. I don't know much about fuzzing, so that could potentially be something to look into, along with stress testing Koha with 1000+ clients connecting simultaneously.
Comment 22 David Cook 2019-09-17 08:14:01 UTC
(In reply to Martin Renvoize from comment #20)
> I'd really like to see a regression test added for this if at all possible

I'm with Joonas on this one. I've tried a number of times, and the results are very unpredictable. 

On the first run, memcached is empty, and even 100 runs didn't get any errors.

Ran the test script again with memcached warmed up, and it took almost until the end of my 100 runs (maybe around 50-70) to get an error.

Ran the test script again, and very soon got quite frequent errors. 

Ran the test script another few times, and now I'm no longer getting any errors. 

Cleared memcached... and can't reproduce again. Seems weird that I got so "lucky" the first times to get errors and now I'm getting none. 

I think this just goes to show the difficulty in testing this one...
Comment 23 Martin Renvoize (ashimema) 2019-09-17 08:29:29 UTC
I'm wondering if it's worthwhile effectively porting the test from Cache::Memcached::Fast::Safe for forking: https://metacpan.org/source/KAZEBURO/Cache-Memcached-Fast-Safe-0.06/t/02_fork.t.

It doesn't test our particular case, but, assuming it's right, should catch cases of missed calls to disconnect_all (which ::Safe does for us here).. in that way we should catch failures if someone down the line decides to remove the ::Safe module without fully understanding why we were using it.

Thoughts?
Comment 24 Joonas Kylmälä 2019-09-17 09:13:24 UTC
(In reply to Martin Renvoize from comment #23)
> I'm wondering if it's worthwhile effectively porting the test from
> Cache::Memcached::Fast::Safe for forking:
> https://metacpan.org/source/KAZEBURO/Cache-Memcached-Fast-Safe-0.06/t/
> 02_fork.t.
> 
> It doesn't test our particular case, but, assuming it's right, should catch
> cases of missed calls to disconnect_all (which ::Safe does for us here).. in
> that way we should catch failures if someone down the line decides to remove
> the ::Safe module without fully understanding why we were using it.
> 
> Thoughts?

Or would it be enough to add a comment on top of the line

+my $memcached = Cache::Memcached::Fast::Safe->new(

stating that it is a fork-safe memcached module?
Comment 25 David Cook 2019-09-18 00:48:36 UTC
(In reply to Martin Renvoize from comment #23)
> I'm wondering if it's worthwhile effectively porting the test from
> Cache::Memcached::Fast::Safe for forking:
> https://metacpan.org/source/KAZEBURO/Cache-Memcached-Fast-Safe-0.06/t/
> 02_fork.t.
> 
> It doesn't test our particular case, but, assuming it's right, should catch
> cases of missed calls to disconnect_all (which ::Safe does for us here).. in
> that way we should catch failures if someone down the line decides to remove
> the ::Safe module without fully understanding why we were using it.
> 
> Thoughts?

To be honest, that test looks pretty useless, as a barebones implementation using Cache::Memcached::Fast doesn't show any errors. 

use strict;
use warnings;
use Cache::Memcached::Fast;
use Data::Dumper;

my $cache = Cache::Memcached::Fast->new({
    servers => ["localhost:11211"],
});
my $version = $cache->server_versions;
warn Dumper($version);

my $pid = fork;
if ( $pid == 0 ){
    # Child: query again over the socket inherited from the parent.
    my $after_fork = $cache->server_versions;
    warn Dumper($after_fork);
    exit 0;    # without this, the child would fall through to waitpid too
}
waitpid($pid,0);
Comment 26 David Cook 2019-09-18 00:49:00 UTC
$VAR1 = {
          'localhost:11211' => '1.5.6'
        };
$VAR1 = {
          'localhost:11211' => '1.5.6'
        };
Comment 27 David Cook 2019-09-18 00:49:32 UTC
But... I'll try to see if I can come up with a test that is reliable...ish...
Comment 28 David Cook 2019-09-18 01:09:22 UTC
Quite the heisenbug, although I'm making progress getting more reliable test results by testing Cache::Memcached::Fast directly.

Note: the \d+) prefix is the pid of the child process.

100
hundred
26091) akey = 100
26091) bkey = hundred
26092) akey = 100
26092) bkey = hundred
26093) akey = 100
26093) bkey = 100
26094) akey = hundred
26094) bkey = hundred
26097) akey = 100
26095) akey = 100
26095) bkey = hundred
26097) bkey = hundred
26096) akey = 100
26096) bkey = 100
26098) akey = hundred
26100) akey = hundred
26099) akey = hundred
26099) bkey = hundred
Use of uninitialized value in concatenation (.) or string at test.pl line 22.
26098) bkey =
Use of uninitialized value in concatenation (.) or string at test.pl line 22.
26100) bkey =

I'm actually quite intrigued by those undefs retrieved at the end. Those are really rare. I'm able to get mixed-up values quite regularly, but a null response... wow.
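
A hypothetical reconstruction of the kind of test that produces output like the above (not the actual attachment; keys, counts, and values are illustrative):

use strict;
use warnings;
use Cache::Memcached::Fast;

my $cache = Cache::Memcached::Fast->new({ servers => ['localhost:11211'] });
$cache->set( akey => '100' );
$cache->set( bkey => 'hundred' );
print "100\nhundred\n";

my @pids;
for ( 1 .. 10 ) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        # Each child reuses the connection inherited from the parent, so
        # concurrent requests can get their replies crossed.
        for ( 1 .. 20 ) {
            my $key   = rand() < 0.5 ? 'akey' : 'bkey';
            my $value = $cache->get($key) // '';
            print "$$) $key = $value\n";
        }
        exit 0;
    }
    push @pids, $pid;
}
waitpid( $_, 0 ) for @pids;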
Comment 29 David Cook 2019-09-18 01:12:00 UTC
I say reliable and then I have a long series of perfect runs with no errors one after the other...
Comment 30 David Cook 2019-09-18 01:49:41 UTC
Adding more entropy is helping. Now I can reproduce it every single time I run my test.
Comment 31 David Cook 2019-09-18 01:51:19 UTC
(In reply to David Cook from comment #30)
> Adding more entropy is helping. Now I can reproduce it every single time I
> run my test.

And when I swap it out with the Safe module... no errors on any of my runs. 

Hoping to have a unit test ready soon... funnily enough by sharing an inherited file handle between all the processes... heh
Comment 32 David Cook 2019-09-18 02:55:31 UTC
Created attachment 92926 [details] [review]
[Draft] Integration test for testing memcached client
Comment 33 David Cook 2019-09-18 02:59:14 UTC
Martin et al: could you take a look at that draft integration test I wrote?

Using 10 child processes and semi-randomized, highish-volume memcached lookups, I'm able to reproduce the problem seemingly reliably every time with Cache::Memcached::Fast, and never reproduce it with Cache::Memcached::Fast::Safe.

I'm going to look at doing a more Koha-specific version just now... but thought I'd share what I have as I go. 

At a glance, the memcached protocol doesn't look that hard either, so a person could theoretically mock the server within the test. I think the problem is with the client rather than the server, but mocking the server would remove the dependency on a running memcached server... I'll leave that call up to Martin.
Comment 34 David Cook 2019-09-18 03:16:55 UTC
I've tried using C4::Context->preference() instead of using Cache::Memcached::Fast directly, but it's much less reliable, even when including Koha::Caches->flush_L1_caches() after every call...
Comment 35 David Cook 2019-09-18 03:51:49 UTC
Ok so C4::Context calls the following which creates a singleton Koha::Cache in the process where C4::Context is first loaded (after it's compiled):

my $syspref_cache = Koha::Caches->get_instance('syspref');

It looks like the $syspref_cache is the same object shared between all the child processes... which should mean they all are using the same socket as well...

I don't know why this isn't working. It should be working (or rather causing errors).

So I'm going to go one step lower and just try the test using Koha::Cache directly.

Oh yes that's much better. There has to be some logic in C4::Context throwing me off...
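
A hedged illustration of that premise, using the calls quoted above (the syspref key is illustrative):

use Koha::Caches;

# Created once in the parent, before any fork: the object and its memcached
# socket are then inherited by every child process.
my $syspref_cache = Koha::Caches->get_instance('syspref');

if ( fork() == 0 ) {
    # Child: same Perl object, same underlying socket as the parent, so
    # concurrent requests can interleave on the wire.
    my $v = $syspref_cache->get_from_cache('syspref_version');
    exit 0;
}
wait;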
Comment 36 David Cook 2019-09-18 04:12:29 UTC
Actually testing Koha::Cache directly only looked like it was working. I think I'm having namespacing issues... should hopefully have something better soon...
Comment 37 David Cook 2019-09-18 05:03:42 UTC
Created attachment 92931 [details] [review]
[Draft] Integration test Koha::Cache and memcached

This test is not 100% reliable. When run with Cache::Memcached::Fast,
it will usually generate errors, but not always.

When run with Cache::Memcached::Fast::Safe, it never generates errors.
Comment 38 David Cook 2019-09-18 05:14:47 UTC
The protocol for memcached actually looks super straightforward: https://github.com/memcached/memcached/blob/master/doc/protocol.txt

So we could probably mock a server easily enough. I took a little look using netcat but the client seemed to time out super quickly, although the client looks configurable in that respect. 

netcat -l localhost -p 50000 -vv
Listening on [localhost] (family 0, port 50000)
Connection from localhost 37910 received!
add ckey 0 0 4
cent

--

But by mocking the server we could introduce other problems. I'm going to have some lunch in any case (at 3pm...)
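
For the record, a toy sketch of what such a mock might look like (hypothetical, handling only bare set/get from the protocol document; never attached to this bug):

use strict;
use warnings;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    LocalHost => 'localhost',
    LocalPort => 50000,
    Listen    => 5,
    Reuse     => 1,
) or die "listen failed: $!";

my %store;
while ( my $client = $server->accept ) {
    while ( my $line = <$client> ) {
        $line =~ s/\r?\n\z//;
        if ( $line =~ /^set (\S+) \d+ \d+ (\d+)/ ) {
            read $client, my $data, $2;    # data block of the declared length
            <$client>;                     # consume the trailing \r\n
            $store{$1} = $data;
            print $client "STORED\r\n";
        }
        elsif ( $line =~ /^get (\S+)/ ) {
            if ( defined( my $v = $store{$1} ) ) {
                print $client "VALUE $1 0 " . length($v) . "\r\n$v\r\n";
            }
            print $client "END\r\n";
        }
        elsif ( $line eq 'quit' ) {
            last;
        }
    }
    close $client;
}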
Comment 39 David Cook 2019-09-19 01:13:27 UTC
Talked to Martin on IRC last night, and he was saying that we shouldn't bother adding a test for this one, since it's too challenging to accurately reproduce, and the testing I've already done seems sufficient.
Comment 40 Joonas Kylmälä 2019-09-19 06:45:50 UTC
(In reply to David Cook from comment #39)
> Talked to Martin on IRC last night, and he was saying that we shouldn't
> bother adding a test for this one, since it's too challenging to accurately
> reproduce, and the testing I've already done seems sufficient.

Ok, so now we just wait for Mirko's opinion on the dependency change.
Comment 41 David Cook 2019-09-20 01:25:32 UTC
(In reply to Joonas Kylmälä from comment #40)
> Ok, so now we just wait for Mirko's opinion on the dependency change.

I'm guessing so. Maybe we can buy him a beer to get him to look at it soon. 

I guess the Marseilles Hackfest is coming up soon. I really wanted to go this year, but it's just not going to work out for me. Maybe a good chance to run this in, if not sooner...
Comment 42 Martin Renvoize (ashimema) 2019-10-01 14:15:05 UTC
Nice work!

Pushed to master for 19.11.00
Comment 43 Joonas Kylmälä 2019-10-01 14:22:46 UTC
Martin, oh noes, the dependency was not added yet to the Koha repositories? This needs to be reverted?
Comment 44 David Cook 2019-10-22 06:42:49 UTC
Any movement on this one? I'd really like to see this one move forward...
Comment 45 Katrin Fischer 2019-10-22 07:14:23 UTC
(In reply to David Cook from comment #44)
> Any movement on this one? I'd really like to see this one move forward...

It looks like it was pushed?
Comment 46 Joonas Kylmälä 2019-10-22 08:48:42 UTC
(In reply to Katrin Fischer from comment #45)
> (In reply to David Cook from comment #44)
> > Any movement on this one? I'd really like to see this one move forward...
> 
> It looks like it was pushed?

It was reverted because the dependency is missing still.
Comment 47 David Cook 2019-10-22 23:14:45 UTC
(In reply to Joonas Kylmälä from comment #46)
> (In reply to Katrin Fischer from comment #45)
> > (In reply to David Cook from comment #44)
> > > Any movement on this one? I'd really like to see this one move forward...
> > 
> > It looks like it was pushed?
> 
> It was reverted because the dependency is missing still.

This is my understanding as well. I was wondering if there was anything we could do to help Mirko or if he's unwell or anything like that.
Comment 48 Joonas Kylmälä 2019-10-23 08:53:13 UTC
(In reply to David Cook from comment #47)
> (In reply to Joonas Kylmälä from comment #46)
> > (In reply to Katrin Fischer from comment #45)
> > > (In reply to David Cook from comment #44)
> > > > Any movement on this one? I'd really like to see this one move forward...
> > > 
> > > It looks like it was pushed?
> > 
> > It was reverted because the dependency is missing still.
> 
> This is my understanding as well. I was wondering if there was anything we
> could do to help Mirko or if he's unwell or anything like that.

Well, we can either make the Debian package ourselves, or remove the dependency by solving this another way.
Comment 49 David Cook 2019-10-24 05:32:35 UTC
(In reply to Joonas Kylmälä from comment #48)
> Well, we can either make the Debian package ourselves, or remove the
> dependency by solving this another way.

I don't think making the package ourselves is the solution, though, as Mirko is still the gatekeeper for the community's APT repository, which people around the world would need.

We could remove the dependency by solving this another way, but it seems that would require a lot more code changes.
Comment 50 David Cook 2019-11-27 02:06:22 UTC
Still have libraries being impacted by this in rather nasty ways.

It would be great to see progress on this one.
Comment 51 David Cook 2019-11-29 02:16:54 UTC
(In reply to Joonas Kylmälä from comment #48)
> Well, we can either make the Debian package ourselves, or remove the
> dependency by solving this another way.

I'm checking with the Release Manager to see the process for one of us building the Debian package and getting it into Koha's APT repository.

Happy to share this information with you as I receive it.
Comment 52 David Cook 2020-02-06 23:05:35 UTC
This is still on my todo list but just haven't had the time yet...
Comment 53 Mason James 2020-02-07 06:24:05 UTC
Created attachment 98546 [details] [review]
Bug 13193: Make Memcached usage fork safe

update debian/control file
Comment 54 Mason James 2020-02-07 06:33:00 UTC
(In reply to Joonas Kylmälä from comment #43)
> Martin, oh noes, the dependency was not added yet to the Koha repositories?
> This needs to be reverted?

hi folks, i've added this module to the KC repo

# cat /etc/apt/sources.list.d/koha.list
deb http://debian.koha-community.org/koha stable main

# apt-cache policy libcache-memcached-fast-safe-perl
libcache-memcached-fast-safe-perl:
  Installed: (none)
  Candidate: 0.06-1~koha1
  Version table:
     0.06-1~koha1 0
        500 http://debian.koha-community.org/koha/ stable/main amd64 Packages
Comment 55 Joonas Kylmälä 2020-02-07 08:35:43 UTC
The fix looks good, but the commit message title doesn't tell what the bug does (https://wiki.koha-community.org/wiki/Commit_messages#Good_commit_messages_2). Mason, if you could just move "update debian/control file" to the title and maybe add a body describing why it needed to be updated, that would be great!
Comment 56 Joonas Kylmälä 2020-02-07 08:37:07 UTC
(In reply to Joonas Kylmälä from comment #55)
> The fix looks good but commit message title doesn't tell what the bug does

oh, I was supposed to write that it doesn't tell what the patch does. The problem is that it tells what the bug is.
Comment 57 Martin Renvoize (ashimema) 2020-02-07 09:57:28 UTC
Sorry Joonas, I think I've confused the issue here...

Mason just did the packaging of the requisite dependency for me at my request... the bug itself was already PQA. You are indeed correct regarding the commit message of the follow-up, but I'll just fix that on push now that we have the dependency packaged and in our repository.

Many thanks for taking a look though.
Comment 58 Mason James 2020-02-07 09:59:06 UTC
Created attachment 98550 [details] [review]
Bug 13193: Add new module to debian/control file

this patch adds the new module to the debian/control file
Comment 59 Martin Renvoize (ashimema) 2020-02-07 10:01:25 UTC
Nice work everyone!

Pushed to master for 20.05
Comment 60 David Cook 2020-02-10 00:43:36 UTC
Hurray! Extra special thanks to Barton, Martin, Joonas, and Mason. (That sentence actually sounds really good when you say it out loud. Excellent names, heh.)

My apologies for not getting the dependency packaged. I'm going to leave it on my TODO list and work on getting it accepted into Debian. 

I figure once I have a handle on Debian policies, it shouldn't be too challenging to do.
Comment 61 David Cook 2020-02-10 01:20:12 UTC
Because this adds a new dependency, I'm guessing it may not be backported to older versions? 

Would love to see this get into 19.11, but understand if that's not possible.
Comment 62 Jonathan Druart 2020-02-12 14:24:00 UTC
We are losing the memcached cache after this change, as the module will not be installed by default and is not required.
See bug 24642.
Comment 63 Jonathan Druart 2020-02-12 14:25:40 UTC
(In reply to Jonathan Druart from comment #62)
> We are losing the memcached cache after this change, as the module will not
> be installed by default and is not required.
> See bug 24642.

I guess it's because the packages are not up-to-date (?)
Comment 64 Katrin Fischer 2020-02-12 21:42:35 UTC
(In reply to Jonathan Druart from comment #63)
> (In reply to Jonathan Druart from comment #62)
> > We are losing the memcached cache after this change, as the module will not
> > be installed by default and is not required.
> > See bug 24642.
> 
> I guess it's because the packages are not up-to-date (?)

Hm, we have strongly recommended the use of memcached, and in our experience Koha doesn't work well without it. We had problems with Plack without memcached, like config changes not taking effect without lots of reloads, etc.
Comment 65 David Cook 2020-02-12 23:13:32 UTC
Martin has already pushed #24642, so this should be good now.
Comment 66 Joy Nelson 2020-03-05 00:27:55 UTC
I am hesitant to push this to the 19.11.x branch as a point release. This is a fix for a bug, but it seems like a major change for a point release. I'm not backporting unless a strong case can be made for it.

<also, my systems team might kill me in my sleep if i snuck this in on a point release :D >
Comment 67 David Cook 2020-03-05 01:01:49 UTC
(In reply to Joy Nelson from comment #66)
> I am hesitant to push this to the 19.11.x branch as a point release. This
> is a fix for a bug, but it seems like a major change for a point release.
> I'm not backporting unless a strong case can be made for it.
> 

It would be great to get it in since it's a major fix, but I can understand your reluctance. Would additional testing help? I'd be happy to apply 24642 and 13191 to 19.11.x, build a package, and test an install and an upgrade. 

> <also, my systems team might kill me in my sleep if i snuck this in on a
> point release :D >

Are they using Debian packages? It should automagically work if they are, although I could see it being a bit fraught if they're not.
Comment 68 Joy Nelson 2020-03-05 02:00:56 UTC
David-
I spent some time thinking about this one.  Let me get some input from our DevOps team here and see what their thoughts are.  We meet tomorrow. My preference is to push this if I can.  
Thanks!
joy
Comment 69 David Cook 2020-03-05 02:02:56 UTC
(In reply to Joy Nelson from comment #68)
> David-
> I spent some time thinking about this one.  Let me get some input from our
> DevOps team here and see what their thoughts are.  We meet tomorrow. My
> preference is to push this if I can.  
> Thanks!
> joy

Awesome! Thanks, Joy! The library that brought me to this issue originally is moving on to 19.11 soon, so fingers crossed.
Comment 70 Joy Nelson 2020-03-05 23:52:12 UTC
Pushed to 19.11.x branch for 19.11.04
Comment 71 David Cook 2020-03-09 00:08:11 UTC
Should this one be marked as "Pushed to stable" now?
Comment 72 Joy Nelson 2020-03-09 23:02:49 UTC
had to cycle through :all the things: to get it there.  :)
Comment 73 Lucas Gass (lukeg) 2020-03-18 16:26:38 UTC
missing 19.05.x dependencies, no backport