The record processing in misc/search_tools/rebuild_elastic_search.pl currently happens in a single thread; it could be made multithreaded to take full advantage of multicore systems. <ere> says the following about this on IRC: "[..] a simple way would be to add start offset and skip count to the indexing script so you could run multiple in parallel" So in the line "while ( my $record = $next->() ) {" the next->() function gets called, and that should be possible to multithread.
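Something along these lines could work inside that loop (just a sketch; $offset and $skip stand for the suggested new command-line options, which don't exist yet):

my $count = 0;
while ( my $record = $next->() ) {
    # Only handle every $skip-th record, starting at position $offset.
    next if ( $count++ % $skip ) != $offset;
    # ... index $record as before ...
}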
(In reply to Joonas Kylmälä from comment #0) > The record processing in misc/search_tools/rebuild_elastic_search.pl > currently happens in a single thread; it could be made multithreaded to take full > advantage of multicore systems. > > <ere> says the following about this on IRC: "[..] a simple way would be to add > start offset and skip count to the indexing script so you could run multiple > in parallel" > > So in the line "while ( my $record = $next->() ) {" the next->() function > gets called, and that should be possible to multithread. I'll just split hairs and mention that multithreading in Perl is not recommended and never really done, but you could achieve the thing by forking workers. In #10662, I use the following modules to perform rapid event-driven processing of job queues: https://metacpan.org/pod/POE::Component::JobQueue https://metacpan.org/pod/POE::Wheel::Run Another option would be to use a message queue and separate workers for doing the indexing. Just a thought.
(In reply to David Cook from comment #1) > I'll just split hairs and mention that multithreading in Perl is not > recommended and never really done, but you could achieve the thing by > forking workers. Thanks for making the distinction. > > In #10662, I use the following modules to perform rapid event-driven > processing of job queues: > > https://metacpan.org/pod/POE::Component::JobQueue > https://metacpan.org/pod/POE::Wheel::Run Parallel::ForkManager is also already used in Koha, so it would be worth taking a look at whether it could be used with the indexing code, as it looks super simple!
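For reference, the basic Parallel::ForkManager pattern is roughly this (a minimal sketch, not wired into the indexing code; index_slice() is a made-up placeholder for the real work):

use Parallel::ForkManager;

my $processes = 4;
my $pm = Parallel::ForkManager->new($processes);
for my $slice ( 0 .. $processes - 1 ) {
    # start() returns the child PID in the parent and 0 in the child.
    $pm->start and next;
    index_slice( $slice, $processes );    # placeholder for the indexing work
    $pm->finish;
}
$pm->wait_all_children;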
What I was referring to would be to just add a couple of parameters to the indexing that would control which records a single script would process. Then you'd be able to run multiple processes in parallel like this:

[...] --offset=0 --skip=3
[...] --offset=1 --skip=3
[...] --offset=2 --skip=3

The first one would process records 1, 4, 7...
The second one would process records 2, 5, 8...
The third one would process records 3, 6, 9...
A really simple method of achieving this could be to use GNU parallel to run multiple instances of the rebuild_elastic_search.pl script if it were to accept --start-biblionum and --end-biblionum options. I have already made a script to generate batches, used in a parallel export we run: https://github.com/ub-digit/Koha/blob/gub-dev-record-batches-script/misc/record_batches.pl

I might rewrite this script a bit since I think there are better ways to produce batches, but it works. It could then be used in a wrapper script for running rebuild_elastic_search.pl in parallel, like:

$KOHA_ROOT/misc/record_batches.pl | parallel --colsep ' ' -j$CONCURRENCY_LEVEL $KOHA_ROOT/misc/search_tools/rebuild_elastic_search.pl --start-biblionum={1} --end-biblionum={2}

The above is just pseudo-code and would have to be worked out to forward options to rebuild_elastic_search.pl etc., but I think this would be a pretty easy and efficient way to implement parallel indexing.
Created attachment 82582 [details] [review] Bug 21872: Elasticsearch indexing faster by making it multi-threaded Add record_batches script for generating biblionumber batches and add --start-bnumber and --end-bnumber options to the rebuild_elastic_search.pl script.
Something like this. Now all that is needed is to create the wrapper script.
Created attachment 82583 [details] [review] Bug 21872: Fix some issues with record_batches.pl Fix behavior for case where no biblios or just one biblio exists. Improve performance by only selecting items greater than last end of range instead of increasing offset by batch size. Also don't use underscore for option names.
Created attachment 82584 [details] [review] Bug 21872: Fix some issues with record_batches.pl Fix behavior for case where no biblios or just one biblio exists. Improve performance by only selecting items greater than last end of range instead of increasing offset by batch size. Also don't use underscore for option names.
Created attachment 82585 [details] [review] Bug 21872: Fix some issues with record_batches.pl Fix behavior for case where no biblios or just one biblio exists. Improve performance by only selecting items greater than last end of range instead of increasing offset by batch size. Also don't use underscore for option names.
Created attachment 82586 [details] [review] Bug 21872: Add parallel_rebuild_biblios.pl script Add parallel_rebuild_biblios.pl script for rebuilding the biblios index in parallel. Adjust some option names and script PODs.
I might as well finish it so there is a (hopefully) working proof of concept. I'm sure there are lots of minor things that need fixing before this would be ready for sign-off, but I think the current code should work (at least it does for me).
Created attachment 82598 [details] [review] Bug 21872: Add slice parameter to rebuild_elastic_search.pl The slice parameter allows one to define a slice of the records to index for parallel processing.
David, is there a compelling reason to do it with a predefined record range? I find it a bit complicated, and it doesn't currently work the same way for authorities.

I've just attached an implementation along the lines I described earlier. It can be used e.g. like this:

echo -n "1,2,3" | xargs -d "," -I{} -P 3 perl misc/search_tools/rebuild_elastic_search.pl -v -b --slice={},3

This allows one to index the records in parallel without prior knowledge of the available record IDs and is fairly simple in implementation.
Created attachment 82599 [details] [review] Bug 21872: Add slice parameter to rebuild_elastic_search.pl The slice parameter allows one to define a slice of the records to index for parallel processing.
(In reply to Joonas Kylmälä from comment #2) > (In reply to David Cook from comment #1) > > I'll just split hairs and mention that multithreading in Perl is not > > recommended and never really done, but you could achieve the thing by > > forking workers. > > Thanks for making the distinction. > > > > > In #10662, I use the following modules to perform rapid event-driven > > processing of job queues: > > > > https://metacpan.org/pod/POE::Component::JobQueue > > https://metacpan.org/pod/POE::Wheel::Run > > Parallel::ForkManager is also already used in Koha, so it would be worth > taking a look at whether it could be used with the indexing code, as it looks super > simple! Parallel::ForkManager is only used in the tests at the moment and it's marked as a non-required dependency, but... it is marked as a dependency in Koha and I do see it in the debian/control file as well, so I suppose a person could use it. The nice thing about POE::Wheel::Run is that it uses bilateral communication channels between the parent and children, so you can fork off X number of workers and then continue to send data to the workers. Plus the event-driven nature of POE means that things happen really quickly. You can have the parent manage the queue, and have it fire off data to the children workers. There's even a POE::Component::* module for non-blocking HTTP requests; I haven't played with it myself yet, but that could also speed things up when indexing Elasticsearch, although that would probably require not using Catmandu (which I think is Ere's plan in the long run anyway?).
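Very roughly, a POE::Wheel::Run setup looks something like the following. This is only a from-memory sketch to show the shape of it, not anything wired into the indexing code, and the worker body is just a placeholder:

use POE qw(Wheel::Run);

POE::Session->create(
    inline_states => {
        _start => sub {
            # Fork one worker; keeping the wheel in the heap keeps it alive.
            my $wheel = POE::Wheel::Run->new(
                Program     => sub { print "worker: indexing a batch\n" },
                StdoutEvent => 'worker_stdout',
            );
            # Register a reaper so the child does not become a zombie.
            $_[KERNEL]->sig_child( $wheel->PID, 'worker_done' );
            $_[HEAP]{workers}{ $wheel->PID } = $wheel;
        },
        worker_stdout => sub { print "got: $_[ARG0]\n" },
        worker_done   => sub { delete $_[HEAP]{workers}{ $_[ARG1] } },
    },
);

POE::Kernel->run();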
(In reply to Ere Maijala from comment #3) > What I was referring to would be to just add a couple of parameters to the > indexing that would control which records a single script would process. > Then you'd be able to run multiple processes in parallel like this: > > [...] --offset=0 --skip=3 > [...] --offset=1 --skip=3 > [...] --offset=2 --skip=3 > > The first one would process records 1, 4, 7... > The second one would process records 2, 5, 8... > The third one would process records 3, 6, 9... That would be easier than building a new higher performance indexer... How would you know the offsets in an automated way, or are you thinking about this more for just manual use? Are you talking about a total rebuild or incremental indexing?
David, see my attached patch. The mechanism would work regardless of whether it's an incremental indexing process, though there are currently no parameters available to support incremental indexing since it shouldn't be needed. I'd rather keep this simple. I don't see the need for e.g. IPC mechanisms that tend to complicate things for little gain. Also keep in mind that rebuilding the index is not a daily process or such.
(In reply to Ere Maijala from comment #13) > David, is there a compelling reason to do it with a predefined record range? > I find it a bit complicated, and it doesn't currently work the same way for > authorities. > > I've just attached an implementation along the lines I described earlier. It > can be used e.g. like this: > > echo -n "1,2,3" | xargs -d "," -I{} -P 3 perl > misc/search_tools/rebuild_elastic_search.pl -v -b --slice={},3 > > This allows one to index the records in parallel without prior knowledge of > the available record IDs and is fairly simple in implementation. The main reason would be that instead of, for example, one long-lived process per CPU (or 4 as above), you would split up the work into many more batches that can be balanced across CPUs with a certain concurrency level until none are left. This could potentially distribute the load more evenly, assuming for example that one or more of the long-lived processes finishes early. But in practice they would probably finish at almost the same time, so it does not really matter which model is used. GNU parallel also outputs the workers' output in sequence, which could be nice, but is also not all that important. I mainly made the patch because I knew it would be a quick and dirty way to get working parallel indexing. Parallel::ForkManager looks great to me; I would probably have used it instead of parallel if I had been aware of it. It would probably be quite easy to implement as part of the rebuild script (with the slice approach) instead of having to use xargs. Then you could also use a larger number for slice to produce more workers, since ForkManager has a $MAX_PROCESSES argument.
(In reply to David Cook from comment #15) > (In reply to Joonas Kylmälä from comment #2) > > (In reply to David Cook from comment #1) > > > I'll just split hairs and mention that multithreading in Perl is not > > > recommended and never really done, but you could achieve the thing by > > > forking workers. > > > > Thanks for making the distinction. > > > > > > > > In #10662, I use the following modules to perform rapid event-driven > > > processing of job queues: > > > > > > https://metacpan.org/pod/POE::Component::JobQueue > > > https://metacpan.org/pod/POE::Wheel::Run > > > > Parallel::ForkManager is also already used in Koha, so it would be worth > > taking a look at whether it could be used with the indexing code, as it looks super > > simple! > > Parallel::ForkManager is only used in the tests at the moment and it's > marked as a non-required dependency, but... it is marked as a dependency in > Koha and I do see it in the debian/control file as well, so I suppose a > person could use it. > > The nice thing about POE::Wheel::Run is that it uses bilateral communication > channels between the parent and children, so you can fork off X number of > workers and then continue to send data to the workers. Plus the event-driven > nature of POE means that things happen really quickly. You can have the > parent manage the queue, and have it fire off data to the children workers. > > There's even a POE::Component::* module for non-blocking HTTP requests; > I haven't played with it myself yet, but that could also speed > things up when indexing Elasticsearch, although that would probably require not > using Catmandu (which I think is Ere's plan in the long run anyway?). Isn't POE event-loop based and thus running in a single thread? If so it would not help at all in speeding up the indexing process (except for perhaps committing to Elasticsearch in parallel, since that does not run in Perl).
Parallel::ForkManager is fine, but I don't think we can make it a required module just for this, so it needs to be optional. And that would make the script a bit more complex since it would need to accommodate both situations. I'm not sure if it makes sense to have a lot of slice sources since it may cause concurrency or congestion issues on the MySQL side and there's perhaps also the possibility of getting connection timeouts since a slice wouldn't be processed until there are children available. That means I'd rather change the script so that the main process would only feed children with record ID's and the children would do all the rest.
Oh, but then the batching and committing of changes would become difficult. On second thought, I'm not sure ForkManager is quite as suitable for the task as it might seem.
(In reply to David Gustafsson from comment #19) > Isn't POE event-loop based and thus running in a single thread? If so it would > not help at all in speeding up the indexing process (except for perhaps > committing to Elasticsearch in parallel, since that does not run in Perl). POE does work off an event loop, so it does run in a single process/single thread, but that's where POE::Wheel::Run becomes relevant. That module forks child processes and uses pipes for bilateral communication between the parent and children. The children do the parallel processing and the parent manages the job/task queue for distributing work to the children. It could be used for Elasticsearch or Zebra really. The current rebuild scripts are written in Perl but the Zebra one is just a wrapper around command line tools.
(In reply to Ere Maijala from comment #20) > That means > I'd rather change the script so that the main process would only feed > children with record ID's and the children would do all the rest. That's what I'd think. (In reply to Ere Maijala from comment #21) > Oh, but then the batching and committing of changes would become difficult. > On second thought, I'm not sure ForkManager is quite as suitable for the > task as it might seem. Why would batching and committing changes be difficult? (That's a genuine question. I haven't done much hands-on with Elasticsearch and Solr indexing APIs myself, so I'm happy to admit my ignorance there.)
(In reply to Ere Maijala from comment #17) > David, see my attached patch. The mechanism would work regardless of whether > it's an incremental indexing process, though there are currently no > parameters available to support incremental indexing since it shouldn't be > needed. > Admittedly I don't use Elasticsearch, but are you saying that the incremental indexing uses a different mechanism than this one? So misc/search_tools/rebuild_elastic_search.pl is only used for a total reindexing of the database? When is that typically required?
(In reply to David Cook from comment #23) > (In reply to Ere Maijala from comment #20) > > That means > > I'd rather change the script so that the main process would only feed > > children with record ID's and the children would do all the rest. > > That's what I'd think. > > (In reply to Ere Maijala from comment #21) > > Oh, but then the batching and committing of changes would become difficult. > > On a second thought I'm not sure ForkManager is quite as suitable for the > > task as it might seem. > > Why would batching and committing changes be difficult? (That's a genuine > question. I haven't done much hands-on with Elasticsearch and Solr indexing > APIs myself, so happy to admit my ignorance there.) For good indexing performance you need to send records to Elasticsearch in batches. The current default is to collect 5000 records and then commit the batch to ES. If we have a lot of workers that only process one record at a time, we also need IPC to collect the records in the main process to be able to update in batches. All that's of course possible, but I'm not sure there's any real benefit from the way more complex mechanism compared to the slice version.
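To illustrate the batching side, the pattern is roughly the following (update_index() and its argument are assumptions made for the example, not the exact Koha API):

my @batch;
my $commit_size = 5000;
while ( my $record = $next->() ) {
    push @batch, $record;
    if ( @batch >= $commit_size ) {
        $indexer->update_index( \@batch );    # assumed method name
        @batch = ();
    }
}
# Flush whatever is left over at the end.
$indexer->update_index( \@batch ) if @batch;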
(In reply to David Cook from comment #24) > (In reply to Ere Maijala from comment #17) > > David, see my attached patch. The mechanism would work regardless of whether > > it's an incremental indexing process, though there are currently no > > parameters available to support incremental indexing since it shouldn't be > > needed. > > > > Admittedly I don't use Elasticsearch, but are you saying that the > incremental indexing uses a different mechanism than this one? So > misc/search_tools/rebuild_elastic_search.pl is only used for a total > reindexing of the database? When is that typically required? Yes. Actually, there's no incremental indexing but changes are sent to ES when a record is saved. It's pretty fast since ES can take the update and make it visible later. Rebuild is typically needed only if you change the indexing rules or import a lot of records somehow without indexing.
(In reply to Ere Maijala from comment #25) > For good indexing performance you need to send records to Elasticsearch in > batches. The current default is to collect 5000 records and then commit the > batch to ES. If we have a lot of workers that only process one record at a > time, we also need IPC to collect the records in the main process to be able > to update in batches. > It's fairly trivial to have workers process batches rather than single records, and IPC really isn't that hard either. > All that's of course possible, but I'm not sure there's any real benefit > from the way more complex mechanism compared to the slice version. I'm just providing an alternative suggestion. You're the one doing the real work, so if you want to go with the slice version, then that sounds good to me.
(In reply to Ere Maijala from comment #26) > Yes. Actually, there's no incremental indexing but changes are sent to ES > when a record is saved. It's pretty fast since ES can take the update and > make it visible later. Rebuild is typically needed only if you change the > indexing rules or import a lot of records somehow without indexing. Apologies for the imprecision in my language. When I said incremental, I meant small or individual, so I was referring to what you're describing. Glad to be on the same page! That's great. That context for a rebuild makes sense too. Not something that the average user will be doing.
I'm going to whip up something hoping I can do easy forking without extra reqs.
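The core of it would be something like this plain fork/waitpid pattern (sketch only; index_records() is a placeholder for the per-slice work):

my @child_pids;
for my $slice ( 0 .. $processes - 1 ) {
    my $pid = fork();
    die "Failed to fork: $!" unless defined $pid;
    if ( $pid == 0 ) {
        # Child: process only its own slice of the records.
        index_records( $slice, $processes );
        exit 0;
    }
    push @child_pids, $pid;
}
# Parent: wait for all children to finish before exiting.
waitpid( $_, 0 ) for @child_pids;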
Created attachment 82780 [details] [review] Bug 21872: Add multiprocess support to Elasticsearch indexing utility Test plan: 1. Time execution without -p parameter 2. Time execution with -p 2, -p 3 or -p 4 depending on CPU core count
Ok, so what do you think about the latest one? Should be pretty straight-forward to use and no new dependencies required.
(In reply to Ere Maijala from comment #31) > Ok, so what do you think about the latest one? Should be pretty > straight-forward to use and no new dependencies required. Apologies for my earlier comments. Please don't feel obligated to use forking just because of my suggestions! Actually, I just noticed that misc/search_tools/rebuild_elastic_search.pl doesn't have a lock file, which seems problematic but predates this patch. Logging should be fine since the child processes should inherit the STDOUT file handle... I don't have an Elasticsearch on hand for testing but looks workable at a glance.
David Gustafsson, what do you think about the latest one?
No problem, David Cook, the forking one turned out to be quite nice, if I may say so.
We were forced to rewrite rebuild_elastic_search.pl as it just died after a couple of days, never finishing indexing our 2.4 million bibliographic records. Our version forks a copy of the process for each machine core using biblio_metadata-based limits precalculated by the parent process (this has been upgraded since I sent you a copy of the code, David, to make sure each core gets the same number of records to index). My old algorithm didn't distribute the load well, as gaps in the metadata ids were created by biblio updates over time. With 8 cores the indexing completes in 50 minutes with Elasticsearch running on the same virtual machine. We sped up the process a great deal by accessing the metadata table directly instead of going through the iterator. The only drawback is memory usage, due to needing to put the 952 item data (coincidentally also 2.4 million items) in hashes.
(In reply to david holoshka from comment #35) > We were forced to rewrite rebuild_elastic_search.pl as it just died after a > couple of days, never finishing indexing our 2.4 million bibliographic records. Why did it die?
(In reply to Ere Maijala from comment #33) > David Gustafsson, what do you think about the latest one? Hello! Sorry about the late reply, I have been a little bit buried in non-Koha-related work the last few months. I think it looks good. I first found it a little bit hardcore with a low-level fork implementation, but since there is no need to spawn and wait for workers more than once when using long-lived worker processes equal in number to the concurrency level, the code is simple enough to understand. If I may make a suggestion, I think starting the slice index at 0 instead of 1 and assigning it using "$slice_index = $proc - 1" in the process dispatch loop would get rid of the "$slice_modulo = 0 if ($slice_modulo == $slice_count);" condition in the iterator. I think I will be able to test the patch tomorrow and can provide a patch for this change.
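To illustrate the suggestion (simplified; $count here is a hypothetical running record counter, the real iterator does more than this):

# With a one-based slice index the modulo result needs a wrap-around:
#   $slice_modulo = 0 if ( $slice_modulo == $slice_count );
# With a zero-based index ($slice_index = $proc - 1) the check is simply:
next if ( $count++ % $slice_count ) != $slice_index;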
Created attachment 85119 [details] [review] Bug 21872: Simplify conditions and exit on invalid combination of arguments Change to zero based indexing for slice index to simplify some conditions. Exit with error message if trying to combine processes and biblio numbers arguments.
Thanks, that makes sense!
Created attachment 85121 [details] [review] Bug 21872: Remove duplicate modulo condition in authorities iterator
Found what I think was a duplicate condition in the authorities iterator, and removed it.
I have tried out the patch locally with a small number of biblios, and it seems to work just fine! It will be interesting to try it out in our staging environment with a much larger number of records.
Hi Ere, I think that the koha-elasticsearch debian script should be able to pass the -p parameter to rebuild_elastic_search.pl
Right, I'll add it.
(In reply to Ere Maijala from comment #44) > Right, I'll add it. Thanks
Created attachment 85975 [details] [review] Bug 21872: Add multiprocess support to Elasticsearch indexing utility Test plan: 1. Time execution without -p parameter 2. Time execution with -p 2, -p 3 or -p 4 depending on CPU core count
Created attachment 85976 [details] [review] Bug 21872: Simplify conditions and exit on invalid combination of arguments Change to zero based indexing for slice index to simplify some conditions. Exit with error message if trying to combine processes and biblio numbers arguments.
Created attachment 85977 [details] [review] Bug 21872: Remove duplicate modulo condition in authorities iterator
Created attachment 85978 [details] [review] Bug 21872: Add support for -p parameter to koha-elasticsearch
Created attachment 85979 [details] [review] Bug 21872: Fix name of rebuild_elasticsearch.pl
Increasing importance since this can make a huge difference in bigger libraries.
Created attachment 86053 [details] [review] Bug 21872: Add multiprocess support to Elasticsearch indexing utility Test plan: 1. Time execution without -p parameter 2. Time execution with -p 2, -p 3 or -p 4 depending on CPU core count Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Created attachment 86054 [details] [review] Bug 21872: Simplify conditions and exit on invalid combination of arguments Change to zero based indexing for slice index to simplify some conditions. Exit with error message if trying to combine processes and biblio numbers arguments. Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Created attachment 86055 [details] [review] Bug 21872: Remove duplicate modulo condition in authorities iterator Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Created attachment 86056 [details] [review] Bug 21872: Add support for -p parameter to koha-elasticsearch Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Created attachment 86057 [details] [review] Bug 21872: Fix name of rebuild_elasticsearch.pl Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
QA looking here. I've got a couple of points to make before continuing.

1) The 'die' on fork failure isn't cleaning up after itself.. imagine a case where we want to spawn 5 subprocesses, and we get to process 4 and then run out of memory, for example. The parent script will die and leave behind zombie child processes.

2) It doesn't look like there's any form of signal handling here and as such a CTRL+C for example could end up leaving zombie processes too.

I'm also wrapping my head around the use of wait vs waitpid here.. I remember tripping myself up using them before, but can't remember the details well enough right now to be confident I've not missed something.

Finally, I'll be looking into the possibility of race conditions being introduced with this. We had to introduce lock files for the zebra indexer as overlapping runs of the script could cause problems, especially with the query that got the list of bib/auths to index during each run. I'm vaguely feeling that might also be a problem here, but I'm not entirely sure yet as I'm still looking at how the iterator is being built.

It's great to see this work however.. I'd love to see it make it into the 19.05 release. Failing for the first issue raised above for now.

(I found https://www.perl.com/article/fork-yeah-/ pretty helpful whilst QAing this.. it gave me the insight to spot the above issues where I may have missed them otherwise)
Good points. With regard to the race condition I'm pretty sure there is none since the data is partitioned per process. No process should ever process the data of any of the others. Or are you talking about running multiple instances of the script at the same time?
(In reply to Martin Renvoize from comment #57) > 1) The 'die' on fork failure isn't cleaning up after itself.. imagine a case > where we want to spawn 5 subprocesses, and we get to process 4 and then run > out of memory, for example. The parent script will die and leave behind > zombie child processes. I don't think that this is an actual problem. I think that this happens all the time. The parent dies, the init process (i.e. PID 1) becomes the new parent, and it reaps the children when they complete. I don't even think they actually do become zombie child processes in this process. This is also the same process used to daemonize a process. Where you run into a problem with zombie child processes is when the parent lives, the child exits, and the parent doesn't reap the child, which means that you have zombie child processes filling up your process table. That's a real problem. > 2) It doesn't look like there's any form of signal handling here and as such > a CTRL+C for example could end up leaving zombie processes too. > You don't need any signal handling. If you do a CTRL+C on the parent process, it'll cascade down through the child processes, because they'll share the same process group ID. So a CTRL+C won't leave zombie child processes. Even if the CTRL+C just killed the parent and not the children (e.g. the children had set their own process group ID after forking), then they'd just be inherited by init and cleaned up anyway. > I'm also wrapping my head around the use of wait vs waitpid here.. I > remember tripping myself up using them before, but can't remember the > details well enough right now to be confident I've not missed something. > I think wait() and waitpid(-1) are roughly equivalent? They could probably be more rigorous in checking that the PID returned by wait() actually matches the child PIDs, but not the end of the world. Even if the parent forgot to wait and exited early, the child processes would be cleaned up once they completed.
(In reply to Martin Renvoize from comment #57) > Finally, I'll be looking into the possibility of race conditions being > introduced with this. We had to introduce lock files for the zebra indexer > as overlapping runs of the script could cause problems, especially with the > query that got the list of bib/auths to index during each run. I'm vaguely > feeling that might also be a problem here, but I'm not entirely sure yet as > I'm still looking at how the iterator is being built. > I am also concerned about there not being a lock file. I suppose I'm less concerned about race conditions so much as accidentally running multiple indexing runs before the first has even completed. I was thinking about the scenario you mentioned where the parent process dies and there's multiple child processes. I would be concerned that the lock would be lost when the parent dies, although https://perldoc.perl.org/functions/flock.html says that locks are inherited across fork calls. In hindsight, I was thinking about the fork and exec (http://www.wumpus-cave.net/2014/04/21/underappreciated-perl-passing-file-descriptors/), but that shouldn't be an issue here. So yeah... I think adding a lock file would be trivial but very worthwhile.
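If a lock file were added, a minimal flock-based guard would probably be enough (sketch only; the lock file path is made up):

use Fcntl qw(:flock);

# Take an exclusive, non-blocking lock so a second run fails fast.
open my $lock_fh, '>', '/var/lock/rebuild_elastic_search.lock'
    or die "Cannot open lock file: $!";
flock( $lock_fh, LOCK_EX | LOCK_NB )
    or die "Another indexing run appears to be in progress\n";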
With Elasticsearch the only race condition I can think of would be running indexing with -d while another indexing run is going. Otherwise it's just a waste of resources. That said, adding a lock file makes sense. I'll do that.
On second thought, I'd leave the lock file out. Unlike rebuild_zebra, there's normally only a need to run rebuild_elasticsearch manually. If you need to e.g. cron it for some reason, an external locking mechanism can be used. Also, you may want to rebuild authorities and biblios side by side, and a lock file would just complicate that.
...and if you really feel that a lock file should be added, let's make that a separate bug. It's not as simple as I first thought, at least if you use the same mechanism as rebuild_zebra.pl.
Hi Ere, Yeah, I've been digging further into this code and I'd entirely forgotten/overlooked that this script really is intended as a human interface and that the regular indexing is actually handled live rather than this script running as a daemon or under cron.. Don't worry about a lock file at all, apologies for my not realising that earlier (seems I still have more to learn about ES than I thought). Given the feedback I've had above, I'm now confident that the issues have been thought through and appear to have been handled appropriately. I'm going to go ahead and PQA, thanks for all the efforts everyone and for the responses to queries. Great to see this one going through.
Created attachment 89109 [details] [review] Bug 21872: Add multiprocess support to Elasticsearch indexing utility Test plan: 1. Time execution without -p parameter 2. Time execution with -p 2, -p 3 or -p 4 depending on CPU core count Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 89110 [details] [review] Bug 21872: Simplify conditions and exit on invalid combination of arguments Change to zero based indexing for slice index to simplify some conditions. Exit with error message if trying to combine processes and biblio numbers arguments. Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 89111 [details] [review] Bug 21872: Remove duplicate modulo condition in authorities iterator Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 89112 [details] [review] Bug 21872: Add support for -p parameter to koha-elasticsearch Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Created attachment 89113 [details] [review] Bug 21872: Fix name of rebuild_elasticsearch.pl Signed-off-by: Josef Moravec <josef.moravec@gmail.com> Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
(In reply to Ere Maijala from comment #62) > On second thought, I'd leave the lock file out. Unlike rebuild_zebra, there's > normally only a need to run rebuild_elasticsearch manually. If you need to > e.g. cron it for some reason, an external locking mechanism can be used. > Also, you may want to rebuild authorities and biblios side by side, and a lock > file would just complicate that. Mmm, that's a good point. Yeah, I'll withdraw my concern about it as well in that case.
Awesome work all! Pushed to master for 19.05
The rename of the script caused a new issue on misc4dev https://gitlab.com/koha-community/koha-misc4dev/issues/31
Enhancement will not be backported to 18.11.x series.