In bug 13150, we came across the following error: Can't use string ("") as a HASH ref while "strict refs" in use at /usr/share/koha/lib/C4/Biblio.pm line 1635. Ultimately, this was traced to a data structure corrupted by memcached. I would suggest creating an entry in a similar hash that contains a well-known string that could be checked upon entering any page. If the memcached copy didn't match the well-known value, we would know that memcached is having problems and could throw a warning of some sort, advising that memcached should be restarted.
Interesting find. Do you know if memcached corrupted all the key-values inside it, or if this was an isolated incident that only affected your specific data structure? We could add a static check character (not calculated from the data) after every key and value we push to memcached. This shouldn't add much overhead, and we could check that the last character of each key or value is, for example, '!'. This would protect against corruption. We could log the incident using log4perl and collect statistics over a longer period to find out whether we need to further improve the reliability of memcached. Off-topic: Cache::Memcached::Fast doesn't have a maintainer any more.
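A minimal sketch of what that check-character scheme could look like, assuming a generic Cache::Memcached::Fast-style client in $cache (the helper names here are hypothetical, not existing Koha code):
-------------------------------------------------------------
use strict;
use warnings;

sub set_checked {
    my ( $cache, $key, $value ) = @_;
    # Append the static check character before storing.
    return $cache->set( $key, $value . '!' );
}

sub get_checked {
    my ( $cache, $key ) = @_;
    my $raw = $cache->get($key);
    return undef unless defined $raw;
    # A missing sentinel means the value came back corrupted.
    unless ( $raw =~ s/!\z// ) {
        warn "memcached returned a corrupted value for key '$key'";
        return undef;
    }
    return $raw;
}
-------------------------------------------------------------
The overhead is one extra byte per value plus a regex on every read, so it should indeed be cheap.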
I also incidentally saw some strange things coming from the cache, but it seems that just flushing the cache is enough, rather than restarting memcached.
I might also be having some caching issues where the wrong system preference value seems to be coming back from the caches, but still digging into that one...
I don't think there is corruption happening in memcached but a race condition caused by us using Cache::Memcached::Fast even though it is not thread/fork safe. I fixed this issue by replacing it with Cache::Memcached::Fast::Safe, hopefully I will manage to send a patch soon.
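For clarity, a minimal sketch of the intended replacement (assuming the usual constructor arguments); per its documentation, Cache::Memcached::Fast::Safe keeps the Cache::Memcached::Fast API but detects a fork and reconnects, so each process ends up with its own socket:
-------------------------------------------------------------
use Cache::Memcached::Fast::Safe;

# Drop-in replacement: same constructor and methods as
# Cache::Memcached::Fast, plus automatic disconnect_all on fork.
my $cache = Cache::Memcached::Fast::Safe->new(
    { servers => ['localhost:11211'] }
);
-------------------------------------------------------------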
I started to work on a patch for this.
To reproduce, you can run the following snippet:
-------------------------------------------------------------
use C4::Context;
use C4::SIP::Sip::MsgType;

my $threads = 50;
for (1 .. $threads) {
    if (fork() == 0) {
        C4::SIP::Sip::MsgType::api_auth("DDDDDD", "fffffff", "YY");
        exit;
    }
}
wait() for 1 .. $threads;    # reap every child
-------------------------------------------------------------
with the following diff applied:
-------------------------------------------------------------
--- a/C4/Context.pm
+++ b/C4/Context.pm
@@ -418,6 +418,7 @@ sub preference {
     if ($use_syspref_cache) {
         $syspref_cache = Koha::Caches->get_instance('syspref') unless $syspref_cache;
         my $cached_var = $syspref_cache->get_from_cache("syspref_$var");
+warn "Cache for $var is $cached_var";
         return $cached_var if defined $cached_var;
     }
-------------------------------------------------------------
Notice how *sometimes* the warn output reads "Cache for version is 1d", i.e. the value is 1d instead of an actual Koha version like 19.0600013.
Created attachment 92827 [details] [review]
Bug 13193: Make Memcached usage fork safe

When a high enough number of forks try to access for example system preferences with Koha::Cache using memcached as backend the results of different cache requests get mixed up.

The problem is fixed by using Cache::Memcached::Fast::Safe that is a fork safe verson of Cache::Memcached::Fast.

Sponsored-by: The National Library of Finland
Waiting for someone to sign off the patch!
Created attachment 92835 [details] [review]
Bug 13193: Make Memcached usage fork safe

When a high enough number of forks try to access for example system preferences with Koha::Cache using memcached as backend the results of different cache requests get mixed up.

The problem is fixed by using Cache::Memcached::Fast::Safe that is a fork safe version of Cache::Memcached::Fast.

Sponsored-by: The National Library of Finland
I fixed a typo "verson" in the commit message.
Bumping the criticality of this bug from low to at least normal, since returning the wrong system preference could potentially grant unwanted access to some things or block access for some users (we had a case where a SIP user was unable to connect because the system was supposedly in "maintenance mode": the timeout syspref value was returned instead of the Koha database version number).
(In reply to Joonas Kylmälä from comment #4)
> I don't think there is corruption happening in memcached but a race
> condition caused by us using Cache::Memcached::Fast even though it is not
> thread/fork safe. I fixed this issue by replacing it with
> Cache::Memcached::Fast::Safe, hopefully I will manage to send a patch soon.

Joonas, you are my favourite person in the world right now. I've been bumping into this problem, and it's been driving me insane. I haven't verified it yet, but I will as soon as I can!

--

Reading through the documentation, it looks like we could still use Cache::Memcached::Fast if we were diligent with the use of "disconnect_all", but refactoring Koha to do that would be a lot of effort, so I think it would be better to switch to Cache::Memcached::Fast::Safe as you've suggested.

The downside is that I don't think Cache::Memcached::Fast::Safe is packaged for Debian (https://packages.debian.org/search?suite=default&section=all&arch=any&searchon=names&keywords=libcache-memcached-fast-safe-perl), so Mirko would have to package it and put it in Koha's APT repository.
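A minimal sketch of what that diligence would look like (a sketch assuming a plain Cache::Memcached::Fast client, not actual Koha code): every child has to call disconnect_all right after fork so it opens its own socket instead of reusing the parent's.
-------------------------------------------------------------
use strict;
use warnings;
use Cache::Memcached::Fast;

my $cache = Cache::Memcached::Fast->new( { servers => ['localhost:11211'] } );

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ( $pid == 0 ) {
    # Without this call, the child would share the parent's connection
    # and replies could be delivered to the wrong process.
    $cache->disconnect_all;
    my $value = $cache->get('version');    # served over the child's own socket
    exit;
}
waitpid( $pid, 0 );
-------------------------------------------------------------
Doing that at every fork point in Koha is the refactoring effort referred to above.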
Using the code provided, I was able to reproduce the problem, although I found it easier to see using 100 iterations rather than 50.

Cache for timeout is 1d at C4/Context.pm line 420.
Cache for timeout is 18.1101000 at C4/Context.pm line 420.
Cache for version is 1d at C4/Context.pm line 420.
Cache for version is 1d at C4/Context.pm line 420.
Argument "1d" isn't numeric in numeric lt (<) at C4/Auth.pm line 1407.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
...
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Use of uninitialized value $cached_var in concatenation (.) or string at C4/Context.pm line 420.
Cache for timeout is  at C4/Context.pm line 420.
Use of uninitialized value $cached_var in concatenation (.) or string at C4/Context.pm line 420.
Cache for version is  at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
cpanm Cache::Memcached::Fast::Safe
git bz apply 13191

And now the output is beautifully accurate:

Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for timeout is 1d at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
Cache for version is 18.1101000 at C4/Context.pm line 420.
I'm actually going to bump this up to critical, since I think this is probably impacting libraries more than we all realize. I have a library that is having lots of issues because of this problem, and a resolution ASAP would be great.
Created attachment 92867 [details] [review]
Bug 13193: Make Memcached usage fork safe

When a high enough number of forks try to access for example system preferences with Koha::Cache using memcached as backend the results of different cache requests get mixed up.

The problem is fixed by using Cache::Memcached::Fast::Safe that is a fork safe version of Cache::Memcached::Fast.

Sponsored-by: The National Library of Finland
Signed-off-by: David Cook <dcook@prosentient.com.au>

Works as described, and solves an insidious, difficult-to-debug problem in Koha.
Updated the title to be more in line with the actual content of the report.
Adding dependency keyword to get Mirko's opinion on the dependency change.
Created attachment 92868 [details]
about.pl page

The about.pl page looks happy and correct.. great work!
I'd really like to see a regression test added for this if at all possible
(In reply to Martin Renvoize from comment #20)
> I'd really like to see a regression test added for this if at all possible

This might be quite difficult to reproduce in a test, because on my machine this happened at 50 forks but on David's machine at 100 forks, so someone might have a machine where 1000 forks are required, and doing 1000 forks on a lower-specced machine might freeze it completely. I currently cannot come up with any way to test this specific case consistently. While looking for ways to test this I came across the following, and there doesn't seem to be anything there we could currently use with Koha's testing framework: <https://softwareengineering.stackexchange.com/questions/196105/testing-multi-threaded-race-conditions>. I don't know much about fuzzing, so that could potentially be something to look into, as could stress testing Koha with 1000+ clients connecting simultaneously.
(In reply to Martin Renvoize from comment #20)
> I'd really like to see a regression test added for this if at all possible

I'm with Joonas on this one. I've tried a number of times, and the results are very unpredictable.

On the first run, memcached is empty, and even 100 runs didn't get any errors. Ran the test script again with memcached warmed up, and it took almost until the end of my 100 runs (maybe around 50-70) to get an error. Ran the test script again, and very soon got quite frequent errors. Ran the test script another few times, and now I'm no longer getting any errors. Cleared memcached... and can't reproduce again.

Seems weird that I got so "lucky" the first times to get errors and now I'm getting none. I think this just goes to show the difficulty in testing this one...
I'm wondering if it's worthwhile effectively porting the test from Cache::Memcached::Fast::Safe for forking: https://metacpan.org/source/KAZEBURO/Cache-Memcached-Fast-Safe-0.06/t/02_fork.t. It doesn't test our particular case, but, assuming it's right, it should catch cases of missed calls to disconnect_all (which ::Safe does for us here). That way we should catch failures if someone down the line decides to remove the ::Safe module without fully understanding why we were using it. Thoughts?
(In reply to Martin Renvoize from comment #23)
> I'm wondering if it's worthwhile effectively porting the test from
> Cache::Memcached::Fast::Safe for forking:
> https://metacpan.org/source/KAZEBURO/Cache-Memcached-Fast-Safe-0.06/t/02_fork.t
>
> It doesn't test our particular case, but, assuming it's right, it should
> catch cases of missed calls to disconnect_all (which ::Safe does for us
> here). That way we should catch failures if someone down the line decides
> to remove the ::Safe module without fully understanding why we were using it.
>
> Thoughts?

Or would it be enough to add a comment on top of the line

+my $memcached = Cache::Memcached::Fast::Safe->new(

stating that it is a fork-safe memcached module?
(In reply to Martin Renvoize from comment #23)
> I'm wondering if it's worthwhile effectively porting the test from
> Cache::Memcached::Fast::Safe for forking:
> https://metacpan.org/source/KAZEBURO/Cache-Memcached-Fast-Safe-0.06/t/02_fork.t
>
> It doesn't test our particular case, but, assuming it's right, it should
> catch cases of missed calls to disconnect_all (which ::Safe does for us
> here). That way we should catch failures if someone down the line decides
> to remove the ::Safe module without fully understanding why we were using it.
>
> Thoughts?

To be honest, that test looks pretty useless, as a barebones implementation using Cache::Memcached::Fast doesn't show any errors:

use strict;
use warnings;
use Cache::Memcached::Fast;
use Data::Dumper;

my $cache = Cache::Memcached::Fast->new({
    servers => ["localhost:11211"],
});

my $version = $cache->server_versions;
warn Dumper($version);

my $pid = fork;
if ( $pid == 0 ){
    my $after_fork = $cache->server_versions;
    warn Dumper($after_fork);
    exit;    # the child must not fall through to waitpid()
}
waitpid($pid,0);
$VAR1 = {
          'localhost:11211' => '1.5.6'
        };
$VAR1 = {
          'localhost:11211' => '1.5.6'
        };
But... I'll try to see if I can come up with a test that is reliable...ish...
Quite the heisenbug, although I'm making progress getting more reliable test results by testing Cache::Memcached::Fast directly. Note the \d+) is the pid of the child process.

100
hundred
26091) akey = 100
26091) bkey = hundred
26092) akey = 100
26092) bkey = hundred
26093) akey = 100
26093) bkey = 100
26094) akey = hundred
26094) bkey = hundred
26097) akey = 100
26095) akey = 100
26095) bkey = hundred
26097) bkey = hundred
26096) akey = 100
26096) bkey = 100
26098) akey = hundred
26100) akey = hundred
26099) akey = hundred
26099) bkey = hundred
Use of uninitialized value in concatenation (.) or string at test.pl line 22.
26098) bkey =
Use of uninitialized value in concatenation (.) or string at test.pl line 22.
26100) bkey =

I'm actually quite intrigued by those undefs retrieved at the end. Those are really rare. I'm able to get mixed-up values quite regularly, but a null response... wow.
I say reliable and then I have a long series of perfect runs with no errors one after the other...
Adding more entropy is helping. Now I can reproduce it every single time I run my test.
(In reply to David Cook from comment #30)
> Adding more entropy is helping. Now I can reproduce it every single time I
> run my test.

And when I swap it out with the Safe module... no errors on any of my runs.

Hoping to have a unit test ready soon... funnily enough, by sharing an inherited file handle between all the processes... heh
Created attachment 92926 [details] [review]
[Draft] Integration test for testing memcached client
Martin et al: take a look at that draft integration test I wrote? Using 10 child processes and semi-randomized, high-volume Memcached lookups, I'm able to reproduce the problem seemingly reliably every time with Cache::Memcached::Fast and never reproduce it with Cache::Memcached::Fast::Safe. I'm going to look at doing a more Koha-specific version just now... but thought I'd share what I have as I go. At a glance, the Memcached protocol doesn't look that hard either, so a person could theoretically mock the server within the test. I think the problem is with the client rather than the server, but mocking the server would remove the dependency on the Memcached server... I'll leave it up to Martin to make a call on that one.
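Roughly, the shape of the test is this (a simplified sketch of the approach, not the attached patch itself; it assumes a memcached on localhost:11211): each child hammers the cache with semi-randomized lookups and fails if a get() ever returns a value that was set under a different key.
-------------------------------------------------------------
use strict;
use warnings;
use Cache::Memcached::Fast;

my $cache = Cache::Memcached::Fast->new( { servers => ['localhost:11211'] } );

# Seed a set of known key/value pairs.
my %expected = map { ( "key$_" => "value$_" ) } 0 .. 9;
$cache->set( $_, $expected{$_} ) for keys %expected;

my @pids;
for ( 1 .. 10 ) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        for ( 1 .. 1000 ) {
            # Semi-randomized lookups; a mismatch means a reply was
            # delivered to the wrong request.
            my $key = 'key' . int( rand(10) );
            my $got = $cache->get($key);
            next unless defined $got;
            die "$$: $key returned '$got', expected '$expected{$key}'\n"
                if $got ne $expected{$key};
        }
        exit 0;
    }
    push @pids, $pid;
}
waitpid( $_, 0 ) for @pids;
-------------------------------------------------------------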
I've tried using C4::Context->preference() instead of using Cache::Memcached::Fast directly, but it's much less reliable, even when including Koha::Caches->flush_L1_caches() after every call...
OK, so C4::Context calls the following, which creates a singleton Koha::Cache in the process where C4::Context is first loaded (after it's compiled):

my $syspref_cache = Koha::Caches->get_instance('syspref');

It looks like $syspref_cache is the same object shared between all the child processes... which should mean they are all using the same socket as well... I don't know why this isn't working. It should be working (or rather, causing errors). So I'm going to go one step lower and just try the test using Koha::Cache directly.

Oh yes, that's much better. There has to be some logic in C4::Context throwing me off...
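To illustrate the singleton point, here's a toy sketch (not Koha code, names only illustrative) of why a pre-fork singleton is shared:
-------------------------------------------------------------
use strict;
use warnings;
use Cache::Memcached::Fast;

my $singleton;

sub get_instance {
    # Created once; every later call returns the same object.
    $singleton //= Cache::Memcached::Fast->new(
        { servers => ['localhost:11211'] } );
    return $singleton;
}

get_instance();    # instantiated in the parent, socket and all

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ( $pid == 0 ) {
    # The child's get_instance() returns the very same object, backed by
    # the file descriptor inherited from the parent.
    my $same = get_instance();
    exit;
}
waitpid( $pid, 0 );
-------------------------------------------------------------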
Actually testing Koha::Cache directly only looked like it was working. I think I'm having namespacing issues... should hopefully have something better soon...
Created attachment 92931 [details] [review]
[Draft] Integration test Koha::Cache and memcached

This test is not 100% reliable. When run with Cache::Memcached::Fast, it will usually generate errors, but not always. When run with Cache::Memcached::Fast::Safe, it never generates errors.
The protocol for memcached actually looks super straightforward: https://github.com/memcached/memcached/blob/master/doc/protocol.txt

So we could probably mock a server easily enough. I took a little look using netcat, but the client seemed to time out super quickly, although the client looks configurable in that respect.

netcat -l localhost -p 50000 -vv
Listening on [localhost] (family 0, port 50000)
Connection from localhost 37910 received!
add ckey 0 0 4
cent

--

But by mocking the server we could introduce other problems. I'm going to have some lunch in any case (at 3pm...)
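For reference, a rough sketch of what a mocked server could look like, based only on the protocol document above; it implements just enough of single-key get/set for a simple client test, and it's an untested illustration rather than anything ready to commit:
-------------------------------------------------------------
use strict;
use warnings;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    LocalHost => 'localhost',
    LocalPort => 50000,
    Listen    => 5,
    Reuse     => 1,
) or die "cannot listen: $!";

my %store;
while ( my $client = $server->accept ) {
    while ( my $line = <$client> ) {
        $line =~ s/\r?\n\z//;
        if ( $line =~ /^set (\S+) \S+ \S+ (\d+)/ ) {
            # "set <key> <flags> <exptime> <bytes>" followed by a data block.
            read $client, my $data, $2;
            <$client>;    # consume the \r\n that terminates the data block
            $store{$1} = $data;
            print $client "STORED\r\n";
        }
        elsif ( $line =~ /^get (\S+)/ ) {
            if ( exists $store{$1} ) {
                print $client 'VALUE ' . $1 . ' 0 ' . length( $store{$1} )
                    . "\r\n" . $store{$1} . "\r\nEND\r\n";
            }
            else {
                print $client "END\r\n";
            }
        }
        elsif ( $line eq 'quit' ) {
            last;
        }
    }
    close $client;
}
-------------------------------------------------------------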
Talked to Martin on IRC last night, and he was saying that we shouldn't bother adding a test for this one, since it's too challenging to accurately reproduce, and the testing I've already done seems sufficient.
(In reply to David Cook from comment #39)
> Talked to Martin on IRC last night, and he was saying that we shouldn't
> bother adding a test for this one, since it's too challenging to accurately
> reproduce, and the testing I've already done seems sufficient.

Ok, so now we just wait for Mirko's opinion on the dependency change.
(In reply to Joonas Kylmälä from comment #40)
> Ok, so now we just wait for Mirko's opinion on the dependency change.

I'm guessing so. Maybe we can buy him a beer to get him to look at it soon. I guess the Marseilles Hackfest is coming up soon. I really wanted to go this year, but it's just not going to work out for me. Maybe a good chance to run this in, if not sooner...
Nice work! Pushed to master for 19.11.00
Martin, oh noes, the dependency was not added yet to the Koha repositories? This needs to be reverted?
Any movement on this one? I'd really like to see this one move forward...
(In reply to David Cook from comment #44)
> Any movement on this one? I'd really like to see this one move forward...

It looks like it was pushed?
(In reply to Katrin Fischer from comment #45)
> (In reply to David Cook from comment #44)
> > Any movement on this one? I'd really like to see this one move forward...
>
> It looks like it was pushed?

It was reverted because the dependency is still missing.
(In reply to Joonas Kylmälä from comment #46)
> (In reply to Katrin Fischer from comment #45)
> > (In reply to David Cook from comment #44)
> > > Any movement on this one? I'd really like to see this one move forward...
> >
> > It looks like it was pushed?
>
> It was reverted because the dependency is still missing.

This is my understanding as well. I was wondering if there was anything we could do to help Mirko, or if he's unwell or anything like that.
(In reply to David Cook from comment #47)
> This is my understanding as well. I was wondering if there was anything we
> could do to help Mirko, or if he's unwell or anything like that.

Well, we can either make the Debian package ourselves or remove the dependency by solving this another way.
(In reply to Joonas Kylmälä from comment #48)
> Well, we can either make the Debian package ourselves or remove the
> dependency by solving this another way.

I don't think making the package ourselves is the solution, though, as Mirko is still the gatekeeper for the community's APT repository, which people around the world would need. We could remove the dependency by solving this another way, but it seems like that would require a lot more code changes.
Still have libraries being impacted by this in rather nasty ways. It would be great to see progress on this one.
(In reply to Joonas Kylmälä from comment #48)
> Well, we can either make the Debian package ourselves or remove the
> dependency by solving this another way.

I'm checking with the Release Manager to see the process for one of us building the Debian package and getting it into Koha's APT repository. Happy to share this information with you as I receive it.
This is still on my todo list but just haven't had the time yet...
Created attachment 98546 [details] [review]
Bug 13193: Make Memcached usage fork safe

update debian/control file
(In reply to Joonas Kylmälä from comment #43)
> Martin, oh noes, the dependency was not added yet to the Koha repositories?
> This needs to be reverted?

Hi folks, I've added this module to the KC repo:

# cat /etc/apt/sources.list.d/koha.list
deb http://debian.koha-community.org/koha stable main

# apt-cache policy libcache-memcached-fast-safe-perl
libcache-memcached-fast-safe-perl:
  Installed: (none)
  Candidate: 0.06-1~koha1
  Version table:
     0.06-1~koha1 0
        500 http://debian.koha-community.org/koha/ stable/main amd64 Packages
The fix looks good, but the commit message title doesn't tell what the bug does (https://wiki.koha-community.org/wiki/Commit_messages#Good_commit_messages_2). Mason, if you could just move the "update debian/control file" to the title and maybe add a body describing why it needed to be updated, that would be great!
(In reply to Joonas Kylmälä from comment #55)
> The fix looks good, but the commit message title doesn't tell what the bug
> does

Oh, I was supposed to write that it doesn't tell what the patch does. The problem is that it tells what the bug is.
Sorry Joonas, I think I've confused the issue here.. Mason just did the packaging of the requisite dependency for me at my request.. the bug itself was already PQA. You are indeed correct regarding the commit message of the follow-up.. but I'll just fix that on push now that we have the dependency packaged and on our repository. Many thanks for taking a look though.
Created attachment 98550 [details] [review]
Bug 13193: Add new module to debian/control file

This patch adds the new module to the debian/control file.
Nice work everyone! Pushed to master for 20.05
Hurray! Extra special thanks to Barton, Martin, Joonas, and Mason. (That sentence actually sounds really good when you say it out loud. Excellent names heh.) My apologies for not getting the dependency packaged. I'm going to leave it on my TODO list and work on getting it accepted into Debian. I figure once I have a handle on Debian policies, it shouldn't be too challenging to do.
Because this adds a new dependency, I'm guessing it may not be backported to older versions? Would love to see this get into 19.11, but understand if that's not possible.
We are losing the memcached cache after this change, as the module will not be installed by default and is not required. See bug 24642.
(In reply to Jonathan Druart from comment #62)
> We are losing the memcached cache after this change, as the module will not
> be installed by default and is not required.
> See bug 24642.

I guess it's because the packages are not up-to-date (?)
(In reply to Jonathan Druart from comment #63)
> (In reply to Jonathan Druart from comment #62)
> > We are losing the memcached cache after this change, as the module will
> > not be installed by default and is not required.
> > See bug 24642.
>
> I guess it's because the packages are not up-to-date (?)

Hm, we have strongly recommended the use of memcached, and in our experience Koha actually doesn't work well without it. We had problems with Plack without memcached, like config changes not taking effect without lots of reloads, etc.
Martin has already pushed bug 24642, so this should be good now.
I am hesitant to push this to the 19.11.x branch in a point release. This is a bug fix, but it seems like a major change for a point release. I'm not backporting unless a strong case can be made for it. <also, my systems team might kill me in my sleep if I snuck this in on a point release :D>
(In reply to Joy Nelson from comment #66)
> I am hesitant to push this to the 19.11.x branch in a point release. This
> is a bug fix, but it seems like a major change for a point release. I'm not
> backporting unless a strong case can be made for it.

It would be great to get it in since it's a major fix, but I can understand your reluctance. Would additional testing help? I'd be happy to apply 24642 and 13191 to 19.11.x, build a package, and test an install and an upgrade.

> <also, my systems team might kill me in my sleep if I snuck this in on a
> point release :D>

Are they using Debian packages? It should automagically work if they are, although I could see it being a bit fraught if they're not.
David-
I spent some time thinking about this one. Let me get some input from our DevOps team here and see what their thoughts are. We meet tomorrow. My preference is to push this if I can.
Thanks!
joy
(In reply to Joy Nelson from comment #68)
> I spent some time thinking about this one. Let me get some input from our
> DevOps team here and see what their thoughts are. We meet tomorrow. My
> preference is to push this if I can.

Awesome! Thanks, Joy! The library that brought me to this issue originally is moving on to 19.11 soon, so fingers crossed.
Pushed to 19.11.x branch for 19.11.04
Should this one be marked as "Pushed to stable" now?
had to cycle through :all the things: to get it there. :)
missing 19.05.x dependencies, no backport