Bugzilla – Attachment 145145 Details for Bug 32594 – Add a dedicated ES indexing background worker
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue

Filename: Bug-32594-Add-a-dedicated-Elasticsearch-biblio-ind.patch
MIME Type: text/plain
Creator: Nick Clemens (kidclamp)
Created: 2023-01-09 13:52:44 UTC
Size: 6.26 KB
Flags: patch, obsolete
From 7fc79516191e2814206dcc535a52faaf01fc9d37 Mon Sep 17 00:00:00 2001
From: Nick Clemens <nick@bywatersolutions.com>
Date: Mon, 9 Jan 2023 13:43:43 +0000
Subject: [PATCH] Bug 32594: Add a dedicated Elasticsearch biblio indexing
 background queue

Currently we generate large numbers of single record reindex jobs for
circulation and other actions. It can take a long time to process these
as we need to load the ES settings for each.

This patch updates the Elasticsearch background jobs to throw records into
a new queue that can be processed by its own worker, and adds a dedicated
worker that batches the jobs every 1 second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in a terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - Set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch is not enabled
---
 Koha/BackgroundJob/UpdateElasticIndex.pm |   4 +
 misc/search_tools/es_indexer_daemon.pl   | 136 +++++++++++++++++++++++
 2 files changed, 140 insertions(+)
 create mode 100755 misc/search_tools/es_indexer_daemon.pl

diff --git a/Koha/BackgroundJob/UpdateElasticIndex.pm b/Koha/BackgroundJob/UpdateElasticIndex.pm
index 432d34fa97..279db9bbe9 100644
--- a/Koha/BackgroundJob/UpdateElasticIndex.pm
+++ b/Koha/BackgroundJob/UpdateElasticIndex.pm
@@ -105,10 +105,14 @@ sub enqueue {
 
     my $record_server = $args->{record_server};
     my @record_ids = @{ $args->{record_ids} };
+    my $queue = $record_server eq 'biblioserver' ? 'elastic_index' : 'default';
+    # Biblios should be sent to the ES daemon for batching.
+    # Authorities are much less common and can be processed by the default worker.
 
     $self->SUPER::enqueue({
        job_size => 1,
        job_args => {record_server => $record_server, record_ids => \@record_ids},
+       job_queue => $queue
     });
 }
 
diff --git a/misc/search_tools/es_indexer_daemon.pl b/misc/search_tools/es_indexer_daemon.pl
new file mode 100755
index 0000000000..a57fec8b51
--- /dev/null
+++ b/misc/search_tools/es_indexer_daemon.pl
@@ -0,0 +1,136 @@
+#!/usr/bin/perl
+
+# This file is part of Koha.
+#
+# Koha is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# Koha is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Koha; if not, see <http://www.gnu.org/licenses>.
+
+=head1 NAME
+
+es_indexer_daemon.pl - Worker script that will process background Elasticsearch jobs
+
+=head1 SYNOPSIS
+
+./misc/search_tools/es_indexer_daemon.pl
+
+=head1 DESCRIPTION
+
+This script will connect to the Stomp server (RabbitMQ) and subscribe to the Elasticsearch queue, processing batches every second.
+If a Stomp server is not active it will poll the database every 10s for new jobs in the Elasticsearch queue
+and process them in batches every second.
+
+=cut
+
+use Modern::Perl;
+use JSON qw( decode_json );
+use Try::Tiny qw( catch try );
+use Pod::Usage;
+use Getopt::Long;
+
+use C4::Context;
+use Koha::BackgroundJobs;
+use Koha::SearchEngine;
+use Koha::SearchEngine::Indexer;
+
+my ( $help );
+GetOptions(
+    'h|help' => \$help,
+) || pod2usage(1);
+
+pod2usage(0) if $help;
+
+die "Not using Elasticsearch" unless C4::Context->preference('SearchEngine') eq 'Elasticsearch';
+
+my $conn;
+try {
+    $conn = Koha::BackgroundJob->connect;
+} catch {
+    warn sprintf "Cannot connect to the message broker, the jobs will be processed anyway (%s)", $_;
+};
+
+if ( $conn ) {
+    # FIXME cf note in Koha::BackgroundJob about $namespace
+    my $namespace = C4::Context->config('memcached_namespace');
+    $conn->subscribe({ destination => sprintf("/queue/%s-%s", $namespace, 'elastic_index'), ack => 'client' });
+}
+my $indexer = Koha::SearchEngine::Indexer->new({ index => $Koha::SearchEngine::BIBLIOS_INDEX });
+my @jobs = ();
+my @records = ();
+my $frame;
+
+while (1) {
+    local $SIG{ALRM} = sub { commit_records() if scalar @jobs };
+    alarm 1;
+    if ( $conn ) {
+        $frame = $conn->receive_frame;
+        if ( !defined $frame ) {
+            # maybe log connection problems
+            next;    # will reconnect automatically
+        }
+
+        my $body = $frame->body;
+        my $args = decode_json($body);    # TODO Should this be from_json? Check utf8 flag.
+
+        # FIXME This means we need to have created the DB entry before
+        # It could work in a first step, but then we will want to handle jobs that will be created from the message received
+        my $job = Koha::BackgroundJobs->find( $args->{job_id} );
+        next unless defined $job;
+
+        push @records, @{ $args->{record_ids} };
+        push @jobs, $job;
+
+
+    } else {
+        my $jobs = Koha::BackgroundJobs->search({ status => 'new', queue => 'elastic_index' });
+        while ( my $job = $jobs->next ) {
+            my $args = $job->json->decode( $job->data );
+            push @records, @{ $args->{record_ids} };
+            push @jobs, $job;
+        }
+        sleep 10;
+    }
+}
+$conn->disconnect;
+
+sub commit_records {
+    eval {
+        $indexer->update_index( \@records );
+    };
+    if ( $@ ) {
+        warn $@;
+    }
+    foreach my $job ( @jobs ) {
+        $job->status('finished')->store;
+    }
+    @jobs    = ();
+    @records = ();
+    $conn->ack( { frame => $frame } ) if $frame;    # FIXME depending on success?
+    # Acknowledging the current frame also acknowledges all previously batched frames
+}
+
+
+sub process_job {
+    my ( $job, $args ) = @_;
+
+    my $pid;
+    if ( $pid = fork ) {
+        wait;
+        return;
+    }
+
+    die "fork failed!" unless defined $pid;
+
+    $job->process( $args );
+
+    exit;
+}
--
2.30.2
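For reviewers, a minimal sketch of how a caller ends up on the new queue, assuming the usual Koha::BackgroundJob::UpdateElasticIndex->new->enqueue calling pattern used elsewhere in Koha; the biblionumbers below are placeholders, not values from this bug. With this patch applied, a 'biblioserver' job is routed to the 'elastic_index' queue, where es_indexer_daemon.pl batches it with other pending jobs and commits the batch on its one-second alarm, while authority jobs stay on the 'default' queue.

    # Illustrative only: enqueue a biblio reindex job that this patch routes
    # to the 'elastic_index' queue (authority jobs keep using 'default').
    use Modern::Perl;
    use Koha::BackgroundJob::UpdateElasticIndex;

    my @biblionumbers = ( 1, 2, 3 );    # placeholder record ids

    Koha::BackgroundJob::UpdateElasticIndex->new->enqueue(
        {
            record_server => 'biblioserver',    # batched by es_indexer_daemon.pl
            record_ids    => \@biblionumbers,
        }
    );

The routing itself is the one-line job_queue change in enqueue() above; as noted in the daemon's comments, acknowledging the last STOMP frame of a batch under client ack mode also acknowledges the earlier frames that were batched with it.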
Attachments on bug 32594: 145145 | 145195 | 145196 | 145197 | 145704 | 145711 | 146402 | 146831 | 147141 | 147460 | 147562 | 148350 | 148513 | 148571 | 148588