Bug 27078

Summary: Starman hanging in 3-node Koha cluster when 1 node goes offline.
Product: Koha
Reporter: Christian McDonald <rcmcdonald91>
Component: Architecture, internals, and plumbing
Assignee: Bugs List <koha-bugs>
Status: CLOSED INVALID
QA Contact: Testopia <testopia>
Severity: normal
Priority: P5 - low
CC: dcook, federicoantoniopaiz, jonathan.druart
Version: 20.05
Hardware: All
OS: All
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=16067
Change sponsored?: ---
Patch complexity: ---

Description Christian McDonald 2020-11-23 15:06:06 UTC
So I've got a pretty interesting case that I've been playing with for the past several weeks. My goal has been to build a converged 3-node Koha cluster.

The architecture looks like this:
1. Each node runs the "standalone" Koha stack (Koha, Starman, Apache, ElasticSearch, Memcached, MariaDB). For the sake of example, these nodes are 10.10.100.51, 10.10.100.52, and 10.10.100.53.
2. Galera is used to build the 3-node MariaDB cluster, and each Koha node simply talks to the MariaDB server at localhost. This has worked fine, and Koha is blissfully unaware of the underlying Galera cluster.
3. ElasticSearch is built as a 3-node cluster. koha-conf.xml is configured to use all three ES nodes (again at 10.10.100.51, 10.10.100.52, and 10.10.100.53). Again, Koha doesn't seem to mind this at all.
4. All three nodes run Memcached, and koha-conf.xml is configured to use all three Memcached nodes (again at 10.10.100.51-3). A sketch of the relevant koha-conf.xml entries follows this list.
5. Plack is enabled on all nodes using koha-plack --enable instancename && koha-plack --start instancename.
6. GlusterFS is used to serve up a 3-node replicated volume for /etc/koha/*, /usr/share/koha/*, and /var/lib/koha* across all three nodes. Symlinks are used to present this storage where Koha expects it. Again, this works great.
7. Two HAProxy instances sit in front of these three Koha instances (at 10.10.100.2 and 10.10.100.3). koha_trusted_proxies in koha-conf.xml is configured with these two IPs.
8. Finally, HAProxy handles SSL offloading and client stickiness. This all works fine too.
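
For reference, the relevant koha-conf.xml entries look something like this (a rough sketch using element names from a stock install; the IPs match the example above, and the namespace/index values are illustrative):

    <memcached_servers>10.10.100.51:11211,10.10.100.52:11211,10.10.100.53:11211</memcached_servers>
    <memcached_namespace>koha_instancename</memcached_namespace>
    <koha_trusted_proxies>10.10.100.2 10.10.100.3</koha_trusted_proxies>
    <elasticsearch>
        <server>10.10.100.51:9200</server>
        <server>10.10.100.52:9200</server>
        <server>10.10.100.53:9200</server>
        <index_name>koha_instancename</index_name>
    </elasticsearch>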

Here is the weirdness... When all three nodes are online, everything is absolutely fine. Everything is snappy, searching works, etc. When I change koha-conf.xml on one node, this is replicated to the other nodes immediately. However, when one node goes offline, the two remaining nodes become really sluggish. I've narrowed this down to a Starman/Plack issue, but I have no idea why. 

Here's how I arrived at that conclusion.

* I started by killing pertinent services one-by-one on Node A. Killing MariaDB on Node A had no effect on Node B and C... though as expected, Node A started spitting out errors that the DB was unavailable.
* Next, I stopped Memcached on Node A. Again, this had no effect on Node B and C.
* Next, I stopped ElasticSearch on Node A. Again, this had no effect on Node B and Node C.
* Next, I stopped GlusterFS on Node A. Again, this had no effect on Node B and Node C.
* Next, I stopped koha-common on Node A. Again, this had no effect on Node B and Node C.


So at this point, Node A is still "online." However, every service related to Koha is stopped (MariaDB, Memcached, ElasticSearch, Apache, koha-common, etc.). As expected, Node B and C keep on working just fine.

Here is the weird part:

When Node A actually goes offline (i.e. loses network connectivity and/or powers down), Node B and C become very very slow. They still serve traffic, but they are really sluggish. As soon as connectivity is restored to Node A, Node B and Node C speed right back up again.

So is this related to Starman/Plack?

When I disable Starman/Plack on all nodes, the speed of each node doesn't change when a single node goes offline.
Comment 1 Jonathan Druart 2020-11-23 16:15:19 UTC
I bet you don't see anything on htop?

Are you able to profile a perl process to catch any bottlenecks?

I don't think you are safe using Memcached on each node. If Node 1 modifies something that is cached, only the copy on Node 1 will be updated/invalidated; Nodes 2 and 3 will continue to use an old value.
Comment 2 Christian McDonald 2020-11-23 20:29:28 UTC
That makes sense, though that conflicts with what I'm reading about scaling out memcached in this way. So then, what is the point of having multiple memcached servers (which can be configured in koha-conf.xml)? As I understand memcached (and I'll admit my understanding is novice at best), the use of multiple servers is completely up to the memcached client implementation...something along the lines of hashing each key against the server list, effectively tying a key to a single node in the cluster. Because the number of nodes is known by all clients, this should effectively distribute keys across the nodes and ensure a cached value only lives on one node at a time. If a node goes down, those keys are simply rehashed onto another node.

I might be completely off here too lol
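
To illustrate the idea (this is just a sketch of client-side key distribution in general, not what Cache::Memcached::Fast actually does internally):

    use Digest::MD5 qw(md5);

    my @servers = ( '10.10.100.51:11211', '10.10.100.52:11211', '10.10.100.53:11211' );

    # Every client with the same server list maps a given key to the same
    # server, so a cached value lives on exactly one node at a time.
    sub server_for {
        my ($key) = @_;
        my $hash = unpack( 'N', md5($key) );    # first 32 bits of the digest
        return $servers[ $hash % @servers ];
    }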
Comment 3 David Cook 2020-11-23 23:19:19 UTC
(In reply to Christian McDonald from comment #2)
> That makes sense, though that conflicts with what I'm reading about
> scaling out memcached in this way. So then, what is the point of having
> multiple memcached servers (which can be configured in koha-conf.xml)? As I
> understand memcached (and I'll admit my understanding is novice at best),
> the use of multiple servers is completely up to the memcached client
> implementation...something along the lines of hashing each key against
> the server list, effectively tying a key to a single node in the
> cluster. Because the number of nodes is known by all clients, this should
> effectively distribute keys across the nodes and ensure a cached value only
> lives on one node at a time. If a node goes down, those keys are simply
> rehashed onto another node.
> 
> I might be completely off here too lol

I haven't set up a memcached cluster, but I think it should be fine so long as it's configured correctly.
Comment 4 Jonathan Druart 2020-11-23 23:21:55 UTC
Yes, sorry Christian, I misread. I thought you had only 1 memcached server per node. With the full cluster defined on each node you should be good.
Comment 5 David Cook 2020-11-23 23:26:40 UTC
(In reply to Christian McDonald from comment #0)
> 
> When I disable Starman/Plack on all nodes, the speed of each node doesn't
> change when a single node goes offline.

What do you mean by "the speed of each node"? Is this a reference to the Koha application specifically? If Starman/Plack is disabled, are you running Koha using CGI?

I'd be interested to know more about your "node goes offline" scenario. I assume this is not a graceful process? If you think about it, chances are your clusters haven't been notified that the node has left the cluster, so they're going to keep trying to connect to it. That would explain a slowdown. Gracefully shutting down a process would probably cause a node member to gracefully leave the cluster and that would explain a lack of a slowdown.

I'd say... profile the application and monitor network traffic during slowdown. 

As a side note: I think it's great that you're trying out this clustered setup!
Comment 6 Christian McDonald 2020-11-24 00:28:10 UTC
Well Koha doesn't really know it is part of a cluster, does it? It still talks to a single MariaDB instance (at localhost, just like a standalone instance), and the same can be applied to ES (again a single localhost:9200 ES node could be configured on each, just like a standalone deployment). 

Apache isn't aware it's part of a cluster. Plack/Starman aren't aware either. Really Koha is still "standalone" from the perspective of Koha itself, it just so happens that the DB, file-system and index are distributed.

So what I mean about taking nodes offline is that, when all nodes are online and all services running, everything is very performant. As I would expect. However, when one node goes offline, the Koha application itself becomes very slow to respond to page loads...lots of spinning browser tabs waiting for a response...but it will eventually respond. My question is, why? Again, Koha as an application isn't aware that it is part of a cluster, it doesn't know that its database, file system and index are replicated under-the-hood.

Here's what's weird. Like I said, when all nodes and their services are online, everything is fine. However, say if I "systemctl stop" Maria, elastic, Apache, and memcached on a single node (say on Node A), everything still is fine when connecting to either of the remaining nodes (Nodes B and C). However, if I then power down Node A (remember, all its koha-related services had been stopped) or pull node A's network connection, node B and C become very slow to serve page requests. Again, this isn't because of some performance degradation of Maria (3 nodes can withstand 1 node offline) or Elastic (again, one node offline is okay).

Also, when Apache is nuked, HAProxy immediately stops sending clients to that node. So in that regard, once Apache is hosed, that node isn't even going to get client requests.


What can I do to monitor performance, or poke into Plack/Starman? The reason why I have a hunch Plack/Starman are involved here is because when I disable Plack on all nodes, the loss of a single node doesn't impact the performance of the remaining nodes... granted with Plack disabled they are all noticeably slower, which is expected.
Comment 7 David Cook 2020-11-24 04:14:41 UTC
(In reply to Christian McDonald from comment #6)
> Well Koha doesn't really know it is part of a cluster, does it? It still
> talks to a single MariaDB instance (at localhost, just like a standalone
> instance), and the same can be applied to ES (again a single localhost:9200
> ES node could be configured on each, just like a standalone deployment). 
> 
> Apache isn't aware it's part of a cluster. Plack/Starman aren't aware
> either. Really Koha is still "standalone" from the perspective of Koha
> itself, it just so happens that the DB, file-system and index are
> distributed.
> 

Right, Koha isn't aware of the clusters, but it relies on components that are clustered, so if those components experience latency, then Koha will experience latency.

> So what I mean about taking nodes offline is that, when all nodes are online
> and all services running, everything is very performant. As I would expect.
> However, when one node goes offline, the Koha application itself becomes
> very slow to respond to page loads...lots of spinning browser tabs waiting
> for a response...but it will eventually respond. My question is, why? Again,
> Koha as an application isn't aware that it is part of a cluster, it doesn't
> know that its database, file system and index are replicated under-the-hood.
> 

And that's why Jonathan and I were saying that you should profile the application to see where it's hanging. 

It might be trying to do database I/O, but the database might not be responding because it's trying to synchronize with the missing DB node. I'm not familiar with Galera, but multi-master sounds like it would need synchronous consistency, which could be slow; and if it were trying to synchronize (until it times out) with a node that isn't there... that could be a source of latency.

> Here's what's weird. Like I said, when all nodes and their services are
> online, everything is fine. However, say if I "systemctl stop" Maria,
> elastic, Apache, and memcached on a single node (say on Node A), everything
> still is fine when connecting to either of the remaining nodes (Nodes B and
> C). However, if I then power down Node A (remember, all its koha-related
> services had been stopped) or pull node A's network connection, node B and C
> become very slow to serve page requests. Again, this isn't because of some
> performance degradation of Maria (3 nodes can withstand 1 node offline) or
> Elastic (again, one node offline is okay).
> 

When you say that you "power down Node A", what kind of power down is that? Is it graceful or forced? Pulling out node A's network connection is definitely a forced outage, so I could see that having impact on a cluster. 

If you're sure about Galera's and Elastic's fault tolerance, I'd say look at Memcached. 

Even if the systems are fault tolerant, I imagine there will be some latency due to cluster re-balancing and comms timeouts. But I haven't run these particular clusters. I've just had issues with latency on other clustered systems when the remaining systems are trying to determine that the dead members are actually dead. 

> What can I do to monitor performance, or poke into Plack/Starman? The reason
> why I have a hunch Plack/Starman are involved here is because when I disable
> Plack on all nodes, the loss of a single node doesn't impact the performance
> of the remaining nodes... granted with Plack disabled they are all noticeably
> slower, which is expected

I really don't think it's going to be Plack/Starman. Everything suggests it is one of the distributed systems. However, the easiest way to find that is by profiling the Koha application.

Now I haven't done this in a while... but you'll want to look at NYTProf and you might get some clues about configuration at https://wiki.koha-community.org/wiki/Plack.
Comment 8 David Cook 2020-11-24 04:21:54 UTC
Another suggestion would be to look into whatever logging/monitoring you have for Elasticsearch, MariaDB Galera, and Memcached. 

If I were you, I'd be checking those logs, when I experience the Koha latency. 

Going back to my previous example, I had a Keycloak cluster that had 2 active nodes and 2 dead nodes. Requests were only going to the 2 active nodes, but they were struggling, because they were obsessing over checking the 2 dead nodes. 

Admittedly, that doesn't always happen. It was a bit of a fluke that I noticed one time, but it was interesting. 

Looking at Galera... it looks like there's a default 5-second inactivity timeout: https://galeracluster.com/library/documentation/galera-parameters.html#evs-suspect-timeout.
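
If you wanted to experiment with that, it appears to be set through wsrep_provider_options (a sketch; PT5S is the documented default):

    # my.cnf / galera.cnf, [mysqld] section
    wsrep_provider_options="evs.suspect_timeout=PT5S"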

After you power off/disconnect Node A, does the performance degradation stay for a long time? Or does it resolve itself after X minutes?
Comment 9 Christian McDonald 2020-11-24 14:35:03 UTC
Thanks for the replies... I'm still mulling over it.

Latest Findings:

* htop is uninformative in this case. I don't see any memory or CPU spikes.

* ElasticSearch is healthy even with one node offline (3-node cluster):

$ curl localhost:9200/_cluster/health
{"cluster_name":"koha_es","status":"green","timed_out":false,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":10,"active_shards":20,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}

* MariaDB is healthy too with one node offline (3-node cluster):

MariaDB [(none)]> show status like '%wsrep%';
I'm not going to paste the output here, but this command is clearly indicating that the failed node is NOT actively participating in the cluster and latency between nodes is basically <1ms. i.e. 0/0/0/0/0

When running tcpdump filtered to look for traffic with the offline node, I only see ARP requests. I'm running 'tcpdump host koha01.lab.mydomain.com'

Still investigating.
Comment 10 Christian McDonald 2020-11-24 15:01:30 UTC
Another really weird thing is that if I bring up a virgin replacement node that ONLY has Debian 10 installed and has the original node's static IP address (so in my case 10.10.100.51), the other two nodes immediately start performing as expected again.



What...........
Comment 11 David Cook 2020-11-24 23:16:31 UTC
Thanks for providing more information, Christian.

I'm not familiar with your hostnames but I would've done something like "tcpdump dst 10.10.100.51" on the remaining nodes.

It sounds like Elasticsearch and MariaDB are handling the missing node well.

What about Memcached? That should be the last distributed system that your living nodes could be trying to query, I suspect. Is that referenced by IP address or hostname in your koha-conf.xml?

Koha is using Cache::Memcached::Fast::Safe... which takes us to https://metacpan.org/pod/release/RAZ/Cache-Memcached-Fast-0.26/lib/Cache/Memcached/Fast.pm. Koha mostly uses the defaults. Take a look at "max_failures". It looks like the Memcached client isn't managing failures: even if it gets a timeout from a server, it will keep trying that server.

The connect_timeout looks like 0.25 seconds, but there are some warnings that it can take longer. Koha does lean hard on Memcached, so I wouldn't be surprised if this is the issue. You could try the tcpdump again, targeting Memcached specifically.
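
For example, something like this on Node B or C (addresses per the example above):

$ tcpdump -nn host 10.10.100.51 and port 11211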

(In reply to Christian McDonald from comment #10)
> Another really weird thing is that if I bring up a virgin replacement node
> that ONLY has Debian 10 installed and has the original node's static IP
> address (so in my case 10.10.100.51), the other two nodes immediately start
> performing as expected again.
> 

This doesn't surprise me at all. As I've been saying, it sounds like Koha (or rather a subcomponent of Koha) is trying to contact the missing node. Since it's missing, it is probably blocking while it waits for a response and eventually times out. In your latest case, when you have a node available at 10.10.100.51, it is probably responding quite quickly by refusing the connection, and that's why the nodes are performing well.

At this point, I think that you've narrowed it down to the network connection. 

I would suggest playing around with your Memcached config. Like maybe remove the Node A Memcached from the server list on Node B and Node C and then take down Node A. See if that makes a difference.
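
i.e. on Nodes B and C, something like this (assuming the stock <memcached_servers> element):

    <memcached_servers>10.10.100.52:11211,10.10.100.53:11211</memcached_servers>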

I recall now that you said "However, when one node goes offline, the two remaining nodes become really sluggish." Can you clarify what you mean by "really sluggish"? How much slower are they?

My gut is telling me it's the Memcached setup causing the problems, but it could be something else. Still, I hope that you pursue this angle.

You may also try adding "max_failures => 3" or something like that to Koha::Cache::_initialize_memcached to see if that helps. 
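
Something along these lines (an untested sketch; the parameter names are from the Cache::Memcached::Fast docs, and @servers/$namespace stand in for whatever Koha::Cache already builds from koha-conf.xml):

    my $memcached = Cache::Memcached::Fast::Safe->new(
        {
            servers         => \@servers,     # from <memcached_servers>
            namespace       => $namespace,    # from <memcached_namespace>
            connect_timeout => 0.25,          # the documented default
            max_failures    => 3,             # treat a server as dead after 3 failures...
            failure_timeout => 10,            # ...and skip it for 10 seconds before retrying
        }
    );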

I doubt many people are using Memcached clusters, so it wouldn't surprise me if Koha's Memcached client config isn't optimal.
Comment 12 Christian McDonald 2020-11-25 14:36:42 UTC
I think Memcached is likely the culprit. However, if I recall correctly, I actually reconfigured koha-conf.xml in my testing to ONLY point to localhost:11211 memcached. So each Koha node was only looking at the memcached instance on itself and not distributing keys among the other two nodes. In this case, the results were the same. Weird. I'm still mulling over everything in your latest post, but I think I need to learn how to do some Perl profiling and see what methods are hanging up this process.

On another note, has there been any discussion concerning other in-memory caching solutions, like redis? How much work would be involved in supporting another caching solution? It appears that most (if not all) of the caching abstraction is wrapped up in Koha::Cache. I'm not a Perl expert, but based on the code it seems that Koha defaults to fastmap if memcached isn't configured?
Comment 13 Jonathan Druart 2020-11-25 14:47:25 UTC
Koha won't work correctly if memcached is not configured correctly (there is a warning on the about page).

There is an L1 cache mechanism that caches in memory (Memcached is L2).
Comment 14 David Cook 2020-11-25 23:26:00 UTC
(In reply to Christian McDonald from comment #12)
> I think Memcached is likely the culprit. However, if I recall correctly, I
> actually reconfigured koha-conf.xml in my testing to ONLY point to
> localhost:11211 memcached. So each Koha node was only looking at the
> memcached instance on itself and not distributing keys among the other two
> nodes. 

That would be a good thing to confirm one way or another.

If you were only using localhost:11211, then I'd say there is something else going on. But there's only so much remote support a person can give with only so much information.

> I'm still mulling
> over everything in your latest post, but I think I need to learn how to do some
> Perl profiling and see what methods are hanging up this process.
> 

I think that's a good idea too. 

If you're using Plack, then you'll need to use a middleware like this one:
https://metacpan.org/pod/Plack::Middleware::Debug::Profiler::NYTProf

As noted above, there is an example at https://wiki.koha-community.org/wiki/Plack. 

Hopefully those two links should be sufficient. I think the plack.psgi file you'll want to edit will be at /etc/koha/sites/kohadev/plack.psgi. 
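
The synopsis for that middleware boils down to something like this in plack.psgi (a sketch; untested here, and you'll want to check the module docs for where the NYTProf output lands):

    use Plack::Builder;

    builder {
        # Profile each request with Devel::NYTProf via the Debug panel
        enable 'Debug', panels => ['Profiler::NYTProf'];
        $app;
    };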

> On another note, has there been any discussion concerning other in-memory
> caching solutions, like redis? 

Not that I know of. I think Memcached has been sufficient for everyone's needs.

> How much work would be involved in supporting
> another caching solution? 

How long is a piece of string? On one hand, adding code to support Redis probably wouldn't be that difficult. On the other hand, supporting Redis would add to the possible permutations of Koha setups, which makes overall support of Koha more complex. 

That said, I have been wanting to play with Redis for a long time. 

The way to do this would be to move the Memcached code out of Koha::Cache and into a driver class called Koha::Cache::Memcached, and then have a configuration option somewhere to specify the driver to use for Koha::Cache. That way, Koha could officially support Memcached, but a person could easily write their own driver and install it however they like. (For example, using cpan/cpanm, or customizing the source code to include another driver, or tweaking the PERL5LIB env variable in "/etc/default/koha-common").
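
To make the shape of that concrete, here's a purely hypothetical sketch (nothing like this exists in Koha today; the driver key and class names are made up):

    package Koha::Cache;

    sub new {
        my ( $class, $params ) = @_;

        # Hypothetical: read a driver name from configuration, defaulting to Memcached
        my $driver       = $params->{driver} // 'Memcached';
        my $driver_class = "Koha::Cache::$driver";    # e.g. Koha::Cache::Redis

        eval "require $driver_class"
            or die "Cannot load cache driver $driver_class: $@";

        return $driver_class->new($params);
    }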

> It appears that most (if not all) of the caching
> abstraction is wrapped up in Koha::Cache. 

That's correct if I recall correctly.