Bug 12005 - Zebra searches sometimes fail silently under Plack
Summary: Zebra searches sometimes fail silently under Plack
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching
Version: Main
Hardware: All
OS: All
Importance: P5 - low critical
Assignee: Galen Charlton
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-03-27 08:38 UTC by Magnus Enger
Modified: 2019-06-27 09:24 UTC
CC List: 6 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 12005 : Creating a new zebra connection for each time we need one (2.47 KB, patch)
2014-10-09 16:06 UTC, Chris Cormack
Bug 12005 : Creating a new zebra connection for each time we need one (2.52 KB, patch)
2014-10-10 19:04 UTC, Paul Poulain
Bug 12005 : Creating a new zebra connection for each time we need one (2.58 KB, patch)
2014-10-10 21:38 UTC, Jonathan Druart
Bug 12005: Remove useless parameters (1.84 KB, patch)
2014-10-10 21:38 UTC, Jonathan Druart
Bug 12005 : Creating a new zebra connection for each time we need one (3.62 KB, patch)
2014-10-15 14:01 UTC, Chris Cormack
Bug 12005 : Creating a new zebra connection for each time we need one (3.86 KB, patch)
2014-10-16 12:52 UTC, Chris Cormack
Bug 12005 : Creating a new zebra connection for each time we need one (3.91 KB, patch)
2014-10-16 14:21 UTC, Jacek Ablewicz
Bug 12005 : Creating a new zebra connection for each time we need one (4.04 KB, patch)
2014-10-16 15:49 UTC, Brendan Gallagher

Description Magnus Enger 2014-03-27 08:38:03 UTC
When Koha is running under Plack, Zebra searches can sometimes fail silently. What you will see is that a search that should return some hits suddenly returns 0 hits. The next search can work as normal again. The failed search will not be logged. 

A quick and dirty fix can be found here: http://git.catalyst.net.nz/gw?p=koha.git;a=commitdiff;h=459c750e4b0aa0fe5dba601e423e78070383b97b;hp=2c9581f75fdb1f76e60c83f86e253cb4dfe04597
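The gist of that fix (a minimal sketch of the idea, not the actual commit; the query is made up and error handling is omitted):

    use C4::Context;

    # The quick fix makes Zconn() hand back a brand new ZOOM::Connection on
    # every call, instead of a cached - and possibly dead - one.
    my $conn = C4::Context->Zconn('biblioserver');
    my $rs   = $conn->search_pq('@attr 1=4 "history"');   # PQF search
    printf "%d hits\n", $rs->size();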
Comment 1 Jacek Ablewicz 2014-03-27 15:52:30 UTC
(In reply to Magnus Enger from comment #0)

> A quick and dirty fix can be found here:

Works for me (3.14.5, apache + mod_perl). Or at least it seems to - hard to tell for sure, as this nasty bug recurs in a semi-random fashion. Sometimes the first failed search occurs within a couple of minutes of a server restart, and sometimes it works seemingly fine for hours. I got the impression that as long as all processes/workers are relatively busy, this issue is much harder to reproduce. It also seems to happen somewhat more often / earlier in a plack + starman test configuration (or maybe that is just a coincidence).

This Q&D fix is kind of ugly, though ;). It quickly results in a lot of accumulated zebra processes (never to be used again, with postponed death sentences). Wouldn't it be better to destroy the Zconn connection as soon as possible, when it is no longer needed? Sadly, a ZOOM::Connection object does not automatically disconnect on DESTROY() like DBI does; I guess some kind of wrapper class may be needed to solve this.
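Something like this minimal wrapper sketch, maybe (the class name is hypothetical; ZOOM::Connection->destroy() is the real cleanup call):

    package Koha::ZOOM::Connection;   # hypothetical name

    sub new {
        my ( $class, $zconn ) = @_;
        return bless { zconn => $zconn }, $class;
    }

    # Delegate the calls Koha actually uses to the underlying connection...
    sub search_pq { my $self = shift; return $self->{zconn}->search_pq(@_); }

    # ...and make the backend connection die with the wrapper, which a plain
    # ZOOM::Connection does not do on its own.
    sub DESTROY {
        my $self = shift;
        $self->{zconn}->destroy() if $self->{zconn};
    }

    1;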
Also, I wonder what kind of performance impact this (= not caching zebra connections at all) may have on plain CGI installations. Perhaps plack-related fixes in the code should be controlled by a new configuration setting (preferably in koha-conf.xml) which would indicate whether or not the system is running in a persistent environment (and what kind: plack/*, mod_perl/persistent, mod_perl/semi-persistent ...).

Btw, bug 11701 (pushed yesterday) got me thinking: it looks like Zconn caching was not plack compatible at all. Might it be somehow related to this bug (especially the sync vs async thing)?
Comment 2 Jacek Ablewicz 2014-04-02 11:11:38 UTC
I did some Q&D measurements trying to estimate what kind of performance penalty we can expect when ditching Zconn caching entirely. While cached Zconn connections are generally not used all that often by Koha, there are some notable exceptions, e.g. authority searches (for each authority record on the result list, Koha does one extra search to find out how many biblio records are attached to the specific authority).

Creating and immediately destroying a Zebra connection - without doing any search with it - is surprisingly lightweight (ca. 46 μs per connection). But it goes up to 2.6-3.0 ms if the connection was used to perform an actual search. So for a typical case (20 authority search results per request) the extra latency when not caching would be about 60 milliseconds. Not too bad, considering these numbers:

(testing "/cgi-bin/koha/opac-authorities-home.pl?startfrom=0&marclist=any&and_or=and&excluding=&operator=contains&value=&resultsperpage=20&type=opac&op=do_search&authtypecode=PERSO_NAME&orderby=HeadingAsc" page loading time): 

- mod_perl (semi-persistent; persistent would be ~50 ms faster) with zebra connection caching enabled: 290 ms
- mod_perl with zebra connection caching disabled: 350 ms
- regular CGI with zebra connection caching enabled: 890 ms
Comment 3 Liz Rea 2014-04-27 21:18:36 UTC
This is definitely a problem; we have seen it. Our solution was to use a Zconn per search.
Comment 4 Chris Cormack 2014-10-09 14:36:32 UTC
I can do a patch for this which makes us use a new Zebra connection each time.
Or I can do a patch which would build a dependency on yaz-proxy, so that we use that instead.

The first is obviously a lot faster and is what Zebra is designed for - it is not built for persistent connections. The second would allow us to pool connections, but would add a new software dependency.

Opinions?
Comment 5 Tomás Cohen Arazi 2014-10-09 15:19:24 UTC
(In reply to Chris Cormack from comment #4)
> I can do a patch for this which makes us use a new Zebra connection each
> time.
> Or I can do a patch which would build a dependency on yaz-proxy, so that we
> use that instead.
> 
> The first is obviously a lot faster and is what Zebra is designed for - it
> is not built for persistent connections. The second would allow us to pool
> connections, but would add a new software dependency.
> 
> Opinions?

I'd go for the approach that creates a new connection.
Comment 6 Chris Cormack 2014-10-09 16:06:01 UTC Comment hidden (obsolete)
Comment 7 Jacek Ablewicz 2014-10-10 08:07:23 UTC
(In reply to Chris Cormack from comment #6)

> This patch changes it so Plack works the same way that CGI did.

At the moment I don't have a persistent install properly set up & suitable for any meaningful testing, but - at least at first glance - this proposed patch doesn't look quite right to me:

1) Completely abandoning Zconn caching would have some measurable impact (performance-wise and memory-footprint-wise) in non-persistent installations. How significant such an impact may be in practical circumstances is another question - while Koha does not seem to use a cached Zconn very often, in those rare (?) cases when it does, the cached Zconn is used rather extensively (e.g. when performing authority searches).

2) Removing the code part which used to destroy() a previously made Zebra connection may not be such a good idea IMO. AFAIRC (it's been several months since I had a look at that code), a previously established Zconn is not automatically destroyed when the variable which holds it gets undefined, goes out of scope, etc. Unless I'm very much mistaken - with this patch applied, when running in a persistent environment - yes, Koha will indeed make a brand new Zebra connection each time it performs a search, but the problem is that previously made connection[s] would never get destroyed (and they will accumulate pretty quickly, eventually leading to a crash when the system runs out of available RAM for new Zebra processes, free filehandles for subsequent connections, and so on).
Comment 8 Chris Cormack 2014-10-10 13:49:17 UTC
(In reply to Jacek Ablewicz from comment #7)
> (In reply to Chris Cormack from comment #6)
> 
> > This patch changes it so Plack works the same way that CGI did.
> 
> At the moment I don't have a persistent install properly set up & suitable
> for any meaningful testing, but - at least at first glance - this proposed
> patch doesn't look quite right to me: 
> 
> 1) Completely abandoning Zconn caching would have some measurable impact
> (performance-wise and memory-footprint-wise) in non-persistent
> installations. How significant such an impact may be in practical
> circumstances is another question - while Koha does not seem to use a
> cached Zconn very often, in those rare (?) cases when it does, the cached
> Zconn is used rather extensively (e.g. when performing authority searches).
> 
It would if it were working, but the caching has never worked. Under CGI you make a connection, use it, and then the CGI process dies and the connection disappears. Under Plack you make a connection, use it, stop using it, and when you ask for a new connection it tries to give you back the old one, which is dead. Searches break. So while I agree caching would probably be good, the fact is there is no performance hit from removing it, because we never had it.
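In other words (a minimal illustration with a made-up cache, address and query; under CGI the process exit hides the problem):

    use ZOOM;

    my %cache;

    # Request 1 in a long-lived Plack worker: connect and search - works.
    my $conn = $cache{biblioserver} //= ZOOM::Connection->new( 'localhost', 9998 );
    my $hits = $conn->search_pq('@attr 1=4 "history"')->size();   # some hits

    # ... the worker idles and Zebra drops the connection on its side ...

    # Request 2: the same dead object comes back out of the cache, and the
    # search quietly returns 0 hits - the silent failure from comment 0.
    $conn = $cache{biblioserver};
    $hits = $conn->search_pq('@attr 1=4 "history"')->size();      # 0 hits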

> 2) Removing the code part which used to destroy() a previously made Zebra
> connection may not be such a good idea IMO. AFAIRC (it's been several
> months since I had a look at that code), a previously established Zconn is
> not automatically destroyed when the variable which holds it gets
> undefined, goes out of scope, etc. Unless I'm very much mistaken - with
> this patch applied, when running in a persistent environment - yes, Koha
> will indeed make a brand new Zebra connection each time it performs a
> search, but the problem is that previously made connection[s] would never
> get destroyed (and they will accumulate pretty quickly, eventually leading
> to a crash when the system runs out of available RAM for new Zebra
> processes, free filehandles for subsequent connections, and so on).

This is not our experience; we have been running it live in production for 6 months, and Zebra kills the connections without the destroy().
Comment 9 Paul Poulain 2014-10-10 19:04:43 UTC Comment hidden (obsolete)
Comment 10 Paul Poulain 2014-10-10 19:07:16 UTC
I also tested that CGI searches still work on both the OPAC and the staff interface, by doing some searches in both.
Comment 11 Jonathan Druart 2014-10-10 21:38:15 UTC Comment hidden (obsolete)
Comment 12 Jonathan Druart 2014-10-10 21:38:45 UTC Comment hidden (obsolete)
Comment 13 Jonathan Druart 2014-10-10 21:39:37 UTC
I was not able to reproduce the original error, but I confirm this doesn't break anything.

Looks good!
Comment 14 Jacek Ablewicz 2014-10-11 14:36:33 UTC
CGI searches do seemingly work, but what I'm getting in my tests (clean master + bug 12005) still suggests that Zebra connections have a tendency to accumulate with these patches applied. E.g., the opac-authorities-home.pl script creates 21 of them, and they only get destroyed, all together, when the script ends. I dunno - maybe this only happens for specific zebra / yaz / ZOOM versions, or depends on some configuration settings?

On my test rig I have:

   zebra: 2.0.44-3
   yaz: 4.2.30-2
   libnet-z3950-zoom-perl 1.26-1+b2

They are all pretty old, but I guess it may not be that uncommon a config (such versions are shipped with Debian wheezy AFAIRC).

In the test database I have authorities linked to biblio records using $9 subfields / auth record IDs.
Comment 15 Jacek Ablewicz 2014-10-11 14:45:45 UTC
Also, the misc/link_bibs_to_authorities.pl script no longer works properly:

   misc/link_bibs_to_authorities.pl --test

after 20 or so seconds, it starts throwing the following errors:

   oAuth error: Connect failed (10000) unix:/home/koha/koha-dev/var/run/zebradb/authoritysocket ZOOM

and the zebra daemon count climbs to (and stays at) 1019.
Comment 16 Chris Cormack 2014-10-11 15:19:36 UTC
(In reply to Jacek Ablewicz from comment #15)
> Also, the misc/link_bibs_to_authorities.pl script no longer works
> properly:
> 
>    misc/link_bibs_to_authorities.pl --test
> 
> after 20 or so seconds, it starts throwing the following errors:
> 
>    oAuth error: Connect failed (10000)
> unix:/home/koha/koha-dev/var/run/zebradb/authoritysocket ZOOM
> 
> and the zebra daemon count climbs to (and stays at) 1019.

Ah yes, that looks like a bug in the script, in that it is making a new connection for every query instead of only once. I'll have a look at fixing that script.
Comment 17 Jacek Ablewicz 2014-10-11 16:39:25 UTC
It's not just that one previously mentioned script that is affected; a similar thing happens when running e.g.:

   misc/migration_tools/remove_unused_authorities.pl --test

after a while:

   error: CCL configuration error (10013) can't open cclfile '/home/koha/koha-dev/etc/zebradb/ccl.properties': Too many open files ZOOM on search an=27697

But it's still far from clear whether that problem is somehow specific to my test setup, or whether previously created Zebra connection ZOOM objects (so-called "objects" - apparently they are not real objects?) have to be manually destroyed before creating new ones (unless that got changed/fixed in more recent yaz/ZOOM versions, which would explain the symptoms we are getting?).
Comment 18 Jacek Ablewicz 2014-10-11 16:57:37 UTC
BTW, regarding the performance hit (after ditching Zconn caching) - I did some additional tests (with the current patch set and the Q&D patch from comment #0 combined, so the old/previous connections are destroyed manually before making new ones): OPAC and staff search performance is IMO not affected in any significant way, but that particular script (link_bibs_to_authorities.pl) is another story: without Zconn caching, it runs ca. 3x slower. However, it's probably not something that is run every day (?), so maybe it's not such a big problem?
Comment 19 Chris Cormack 2014-10-14 13:44:26 UTC
Thanks heaps for the testing, Jacek; I'll do a bit more testing too.
Comment 20 Chris Cormack 2014-10-15 13:06:30 UTC
Working on follow-up patches for those 2 scripts now.
Comment 21 Chris Cormack 2014-10-15 14:01:51 UTC Comment hidden (obsolete)
Comment 22 Chris Cormack 2014-10-15 14:14:06 UTC
I have squashed the 2 patches and made it destroy any old connection before making a new one. Jacek, could you please give it a test?
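Roughly like this (a sketch of the destroy-before-recreate idea, reusing the $context cache fields from the snippet in comment 23 below and assuming C4::Context's private _new_Zconn helper):

    # Inside C4::Context->Zconn (sketch): drop any previously cached
    # connection for this server before handing out a fresh one.
    my $cache_key = join( '::', map { $_ // '' } ( $server, $async ) );
    if ( defined $context->{"Zconn"}->{$cache_key} ) {
        $context->{"Zconn"}->{$cache_key}->destroy();
    }
    $context->{"Zconn"}->{$cache_key} = _new_Zconn( $server, $async );
    return $context->{"Zconn"}->{$cache_key};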
Comment 23 Jacek Ablewicz 2014-10-16 11:31:53 UTC
(In reply to Chris Cormack from comment #22)
> I have squashed the 2 patches and made it destroy any old connection before
> making a new one. Jacek could you please give it a test?

It's looking good so far; this modified patch version works fine for me when testing both under plain CGI and under Apache (prefork) + mod_perl. No apparent side effects like before (not counting that some command line scripts are running significantly slower now that Zconn caching is no longer used). Regarding those scripts: I'm wondering if something like this:

     my $cache_key = join ('::', (map { $_ // '' } ($server, $async )));
+    if ( (!defined($ENV{GATEWAY_INTERFACE})) && defined($context->{"Zconn"}->{$cache_key}) && (0 == $context->{"Zconn"}->{$cache_key}->errcode()) ) {
+        return $context->{"Zconn"}->{$cache_key};
+    }

would be a good (or at least good enough ;)) solution for addressing the performance hit on the command line scripts? It seems to work OK under CGI and mod_perl, but I'm not sure how reliable checking for the GATEWAY_INTERFACE environment variable (especially when done in the module, and not in the script) may be for various other persistent setups?
Comment 24 Chris Cormack 2014-10-16 12:23:34 UTC
(In reply to Jacek Ablewicz from comment #23)
>
> 
> would be a good (or at least good enough ;)) solution for addressing the
> performance hit on the command line scripts? It seems to work OK under CGI
> and mod_perl, but I'm not sure how reliable checking for the
> GATEWAY_INTERFACE environment variable (especially when done in the module,
> and not in the script) may be for various other persistent setups?

We would have to check that it is passed through to Plack; otherwise we will be back where we started, with Zebra connections dying and Plack searches failing until the thread respawns :) I'll see if I can test that now.
Comment 25 Paul Poulain 2014-10-16 12:42:46 UTC
Another option, as those scripts are always run from the command line: make a local ZOOM connection, not using the C4 connector. Would that be possible? (If someone thinks it's Q&D, I agree.)
Comment 26 Chris Cormack 2014-10-16 12:47:42 UTC
Yep, this works

$VAR1 = 'CGI/1.1'; at /home/vagrant/kohaclone/C4/Context.pm line 689. 

It is set under Plack.

So I will attach an updated patch, which should fix Plack but not slow down the scripts from the command line.
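To reproduce the check (a sketch; the Dumper output matches the $VAR1 line above):

    use Data::Dumper;

    # Plack's CGI compatibility layer populates GATEWAY_INTERFACE just like
    # Apache does, so this warns 'CGI/1.1' under both CGI and Plack, and
    # '$VAR1 = undef;' when run from the command line.
    warn Dumper( $ENV{GATEWAY_INTERFACE} );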
Comment 27 Chris Cormack 2014-10-16 12:52:51 UTC Comment hidden (obsolete)
Comment 28 Jacek Ablewicz 2014-10-16 14:21:37 UTC Comment hidden (obsolete)
Comment 29 Brendan Gallagher 2014-10-16 15:49:26 UTC
Created attachment 32429
Bug 12005 : Creating a new zebra connection for each time we need one

Zebra is not designed to have persistent connections. Under CGI this
didn't matter - the scripts would get a new connection each time - but
under Plack we try to use dead connections.

This patch changes it so Plack works the same way that CGI did.

To test:
Apply this patch
Do some searches
Check everything still works

Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>

Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Comment 30 Tomás Cohen Arazi 2014-10-22 17:32:11 UTC
Patch pushed to master.

Thanks Chris!