Before this patch: 9619 borrowers added in 31 minutes. After: 68 seconds.
Created attachment 6453 [details] The CSV file containing anonymised borrowers to import
Created attachment 6454 [details] [review] proposed patch
The fix changes the AddMember sub; add a borrower in Koha to test it.
Resetting priority so Patch Status will show up. Very encouraging-looking patch. Please test thoroughly, not only in Member Entry but also in member search and acquisitions, as this change may affect those areas as well.
This patch shows a big improvement in performance when we bulk-load patrons, using a specific script we use at BibLibre for our migrations. I think the improvement can also be seen in tools > import patrons when loading large sets of patrons. Note that this patch is a port of what we've made in git.biblibre.com and have used in production for months (if not years).
This patch looks good, but as we are moving to a persistent running environment for Koha, we need to make sure we have a way to clear the cached table structure from the variable. So I will submit a follow-up that clears the variable, which can be called by the update scripts when they change the table structure, similar to clear_syspref_cache in C4::Context. This patch does give a huge improvement in patron import though, so I am going to sign off and send that follow-up.
Created attachment 6623 [details] [review] Bug 7276: member entry performance improvement. Before this patch: 9619 borrowers added in 31 minutes; after: 68 seconds. This adds a hashref of the table structure in C4::SQLHelper to speed up bulk edits. Signed-off-by: Stéphane Delaune <stephane.delaune@biblibre.com> Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Created attachment 6624 [details] [review] Bug 7276: Follow-up, adding a sub to clear the cache
Only the follow-up needs sign-off.
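For readers following the discussion, here is a minimal Perl sketch of the general approach being discussed: memoising the table structure in a module-level hash and exposing a sub to wipe it, along the lines of clear_syspref_cache in C4::Context. The package name, sub names and DBI calls below are illustrative assumptions, not the code from the attached patches.

    package Example::SQLHelperCache;    # hypothetical module, not the real C4::SQLHelper

    use strict;
    use warnings;

    # Module-level cache: table name => arrayref of column names. Under a
    # persistent environment (mod_perl, Plack, FastCGI) this hash survives
    # between requests, which is why a clearing sub is needed.
    my %_table_structure_cache;

    # Return the column names for a table, querying the database only the
    # first time the table is seen (or after the cache has been cleared).
    sub GetTableFields {
        my ( $dbh, $table ) = @_;
        unless ( exists $_table_structure_cache{$table} ) {
            my $sth    = $dbh->column_info( undef, undef, $table, '%' );
            my @fields = map { $_->{COLUMN_NAME} } @{ $sth->fetchall_arrayref( {} ) };
            $_table_structure_cache{$table} = \@fields;
        }
        return @{ $_table_structure_cache{$table} };
    }

    # Wipe the cached structure, e.g. from the update scripts after an
    # ALTER TABLE, analogous to clear_syspref_cache in C4::Context.
    sub ClearTableStructureCache {
        %_table_structure_cache = ();
        return;
    }

    1;

The follow-up patch is concerned with the second sub: without it, a process running persistently would keep serving the old structure after a schema change.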
QA comment about the follow-up: the initial patch caches the database structure for up to 10 minutes. That means the clearing sub is not necessary, in my opinion; even under a persistent environment it will work, you will just need to wait for up to 10 minutes to get a fresh environment. Chris, can you confirm you've seen the 10-minute limit, and explain why it should not be enough? (Follow-up OK and everything passed QA though.)
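For reference, the 10-minute behaviour mentioned in the QA comment corresponds to a time-limited cache rather than a process-lifetime one. A sketch of that variant follows; the 600-second TTL, sub name and field names are assumptions for illustration only.

    use strict;
    use warnings;

    # table name => { fields => \@columns, cached_at => epoch seconds }
    my %_table_structure_cache;
    use constant TABLE_CACHE_TTL => 600;    # assumed 10-minute expiry

    sub GetTableFields {
        my ( $dbh, $table ) = @_;
        my $entry = $_table_structure_cache{$table};
        if ( !$entry || time() - $entry->{cached_at} > TABLE_CACHE_TTL ) {
            # Entry is missing or older than 10 minutes: re-read the structure.
            my $sth    = $dbh->column_info( undef, undef, $table, '%' );
            my @fields = map { $_->{COLUMN_NAME} } @{ $sth->fetchall_arrayref( {} ) };
            $entry = { fields => \@fields, cached_at => time() };
            $_table_structure_cache{$table} = $entry;
        }
        return @{ $entry->{fields} };
    }

Even with such an expiry, a stale structure can be served for up to 10 minutes after a schema change, which is the gap an explicit clearing sub closes.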
(In reply to comment #10)
> QA comment about the follow-up: the initial patch caches the database
> structure for up to 10 minutes. That means the clearing sub is not
> necessary, in my opinion; even under a persistent environment it will
> work, you will just need to wait for up to 10 minutes to get a fresh
> environment.
>
> Chris, can you confirm you've seen the 10-minute limit, and explain why
> it should not be enough?
>
> (Follow-up OK and everything passed QA though.)

We are moving slowly to a persistent model, be it mod_perl, Plack, FastCGI, whatever. As such, variables declared like this hash need to be able to be wiped. This has nothing to do with the memcached caching; the point is that we are declaring a variable which, when we are running in persistent mode, will persist. So we should always clean up after ourselves, or we will get inconsistent and wrong behaviour when running in persistent mode.
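To illustrate the persistence point with a generic example (this is a standalone PSGI sketch, not Koha code): under plackup, mod_perl or FastCGI the same Perl interpreter serves many requests, so anything kept in a module-level variable carries over from one request to the next unless something explicitly resets it.

    # app.psgi -- hypothetical demonstration; run with: plackup app.psgi
    use strict;
    use warnings;

    # Initialised once when the app is loaded, not once per request.
    my %cache;

    my $app = sub {
        my $env = shift;
        # Whatever earlier requests stored is still here.
        $cache{hits} = ( $cache{hits} // 0 ) + 1;
        my $body = "This worker has served $cache{hits} request(s); "
                 . "the hash persists until the worker restarts or the cache is cleared.\n";
        return [ 200, [ 'Content-Type' => 'text/plain' ], [$body] ];
    };

    $app;    # a PSGI application is just this code reference

Under plain CGI each request gets a fresh interpreter, so the counter would never go above 1; under a persistent server it keeps climbing, which is exactly the behaviour that makes an explicit clearing sub necessary.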
OK, I understand what I missed: cleaning the variable is necessary, otherwise the data will stay in the variable forever, which is bad. It has nothing to do with memcache.

PS: maybe we should write guidelines on managing persistent variables.
PS2: with a persistent model, will memcache still be useful?

(I'll push the patch on Monday.)
(In reply to comment #12)
> OK, I understand what I missed: cleaning the variable is necessary,
> otherwise the data will stay in the variable forever, which is bad. It
> has nothing to do with memcache.
>
> PS: maybe we should write guidelines on managing persistent variables.

Probably a good idea.

> PS2: with a persistent model, will memcache still be useful?

Yes, even with persistence, threads die after a while and respawn. So it is still useful.
(In reply to comment #13)
> (In reply to comment #12)
> > OK, I understand what I missed: cleaning the variable is necessary,
> > otherwise the data will stay in the variable forever, which is bad. It
> > has nothing to do with memcache.
> >
> > PS: maybe we should write guidelines on managing persistent variables.
>
> Probably a good idea.
>
> > PS2: with a persistent model, will memcache still be useful?
>
> Yes, even with persistence, threads die after a while and respawn. So it
> is still useful.

Following up: FWIW, on some huge-traffic sites (not Koha sites, but other sites) we run with persistence and memcached. When you move to a cluster model this really becomes useful, but even on a single server, removing a lot of unnecessary reads from your db helps it do writes much faster.
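For context on the memcached remark, the usual pattern is a read-through cache in front of the database, so repeated reads are served from memory and the database is left with more headroom for writes. A rough sketch using Cache::Memcached follows; the server address, namespace, key scheme and 10-minute expiry are assumptions for illustration.

    use strict;
    use warnings;
    use Cache::Memcached;

    # Connect to a local memcached instance (address and namespace assumed).
    my $memd = Cache::Memcached->new(
        { servers => ['127.0.0.1:11211'], namespace => 'koha:' }
    );

    # Read-through helper: try the cache first, fall back to the database,
    # then store the result so later reads skip the database entirely.
    sub get_cached {
        my ( $key, $fetch_from_db ) = @_;
        my $value = $memd->get($key);
        if ( !defined $value ) {
            $value = $fetch_from_db->();
            $memd->set( $key, $value, 600 );    # cache for 10 minutes
        }
        return $value;
    }

In a cluster, every front end talks to the same memcached pool, so a value computed by one server is reusable by the others; that is where the pattern pays off most.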