Bugzilla – Attachment 7379 Details for Bug 7246: rebuild_zebra.pl --limit option to allow partial re-indexing
[patch] Bug 7246 add min/offset and WHERE options to rebuild_zebra
Bug-7246-add-minoffset-and-WHERE-options-to-rebuil.patch (text/plain), 4.20 KB, created by Jared Camins-Esakov on 2012-01-29 13:50:57 UTC
Description: Bug 7246 add min/offset and WHERE options to rebuild_zebra
Filename: Bug-7246-add-minoffset-and-WHERE-options-to-rebuil.patch
MIME Type: text/plain
Creator: Jared Camins-Esakov
Created: 2012-01-29 13:50:57 UTC
Size: 4.20 KB
Flags: patch, obsolete
From 253a637bc822e307cd62977ecc860c2f613ddfd3 Mon Sep 17 00:00:00 2001
From: Paul Poulain <paul.poulain@biblibre.com>
Date: Tue, 17 Jan 2012 17:15:03 +0100
Subject: [PATCH] Bug 7246 add min/offset and WHERE options to rebuild_zebra
Content-Type: text/plain; charset="UTF-8"

This patch reimplements a feature that is on biblibre/master for
Koha-community/master.

It adds 4 parameters:
* offset = the offset of records. Say 1000 to start rebuilding at the 1000th
  record of your database
* min = how many records to export. Say 400 to export only 400 records
* WHERE = add a WHERE clause to rebuild only a given itemtype, or anything you
  want to filter on
* l = how many items should be exported with biblios. This is a useful option
  when you have records with so many items that they can result in a record
  larger than 99999 bytes, which Zebra does not like at all

Another improvement resulting from the offset & min limits is
rebuild_zebra_sliced.zsh, which will be submitted in another patch.
_sliced will slice your whole database into small chunks and, if something
went wrong for a given slice, will slice the slice, and repeat, until you
reach a slice size of 1, showing which record is wrong in your database.

Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
---
 misc/migration_tools/rebuild_zebra.pl | 30 ++++++++++++++++++++++++++++--
 1 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/misc/migration_tools/rebuild_zebra.pl b/misc/migration_tools/rebuild_zebra.pl
index 31e8125..d94b645 100755
--- a/misc/migration_tools/rebuild_zebra.pl
+++ b/misc/migration_tools/rebuild_zebra.pl
@@ -34,6 +34,10 @@ my $want_help;
 my $as_xml;
 my $process_zebraqueue;
 my $do_not_clear_zebraqueue;
+my $item_limit;
+my $min;
+my $where;
+my $offset;
 my $verbose_logging = 0;
 my $zebraidx_log_opt = " -v none,fatal,warn ";
 my $result = GetOptions(
@@ -51,6 +55,10 @@ my $result = GetOptions(
 'x' => \$as_xml,
 'y' => \$do_not_clear_zebraqueue,
 'z' => \$process_zebraqueue,
+ 'l:i' => \$item_limit,
+ 'where:s' => \$where,
+ 'min:i' => \$min,
+ 'offset:i' => \$offset,
 'v+' => \$verbose_logging,
 );

@@ -294,13 +302,21 @@ sub select_all_records {
 }

 sub select_all_authorities {
- my $sth = $dbh->prepare("SELECT authid FROM auth_header");
+ my $strsth=qq{SELECT authid FROM auth_header};
+ $strsth.=qq{ WHERE $where } if ($where);
+ $strsth.=qq{ LIMIT $min } if ($min && !$offset);
+ $strsth.=qq{ LIMIT $offset,$min } if ($min && $offset);
+ my $sth = $dbh->prepare($strsth);
 $sth->execute();
 return $sth;
 }

 sub select_all_biblios {
- my $sth = $dbh->prepare("SELECT biblionumber FROM biblioitems ORDER BY biblionumber");
+ my $strsth = qq{ SELECT biblionumber FROM biblioitems };
+ $strsth.=qq{ WHERE $where } if ($where);
+ $strsth.=qq{ LIMIT $min } if ($min && !$offset);
+ $strsth.=qq{ LIMIT $offset,$min } if ($offset);
+ my $sth = $dbh->prepare($strsth);
 $sth->execute();
 return $sth;
 }
@@ -635,10 +651,20 @@ Parameters:
 the same records - specify -y to override this.
 Cannot be used with -z.

+ -l set a maximum number of exported items per biblio.
+ Doesn't work with -nosanitize.
 -v increase the amount of logging. Normally only
 warnings and errors from the indexing are shown.
 Use log level 2 (-v -v) to include all Zebra logs.

+ -min 1234 minimum biblionumber
+ -offset 1243 count biblios to process
+ example: -min 1000 -offset=500 will result in a LIMIT 500,1000 (exporting 1000 records, starting by the 500th one)
+ note that the numbers are NOT related to biblionumber, that's the intended behaviour.
+
+ -where let you specify a WHERE query, like itemtype='BOOK'
+ or something like that
+
 -munge-config Deprecated option to try
 to fix Zebra config files.
 --help or -h show this message.
--
1.7.2.5
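For reference, the options added by this patch would be invoked roughly as follows. This is only an illustrative sketch: the flag spellings (-min, -offset, -where, -l) come from the patch's own help text, while the -b (export biblios) and -x (export as XML) switches are assumed from the script's existing options.

    # Export 1000 biblio records, skipping the first 500 rows (LIMIT 500,1000)
    perl misc/migration_tools/rebuild_zebra.pl -b -x -min 1000 -offset 500

    # Rebuild only the records matching an arbitrary WHERE clause
    perl misc/migration_tools/rebuild_zebra.pl -b -x -where "itemtype='BOOK'"

    # Cap the number of items exported per biblio, to keep records under Zebra's size limit
    perl misc/migration_tools/rebuild_zebra.pl -b -x -l 200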
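The rebuild_zebra_sliced.zsh script mentioned in the commit message is not included here (it is submitted in another patch). A minimal shell sketch of the bisection idea it describes, built on the -offset/-min options above, could look like the following. It assumes, hypothetically, that rebuild_zebra.pl exits non-zero when indexing a slice fails; the function name index_slice is invented for illustration.

    # Try to export/index $2 records starting at offset $1; on failure,
    # split the slice in half and retry each half, down to a slice of 1.
    index_slice() {
        local offset=$1 size=$2
        if perl misc/migration_tools/rebuild_zebra.pl -b -x -offset "$offset" -min "$size"; then
            return 0
        fi
        if [ "$size" -le 1 ]; then
            echo "indexing fails for the record at offset $offset" >&2
            return 1
        fi
        local half=$(( size / 2 ))
        index_slice "$offset" "$half"
        index_slice $(( offset + half )) $(( size - half ))
    }

    # Example: walk the first 10000 records in slices of 1000.
    for start in $(seq 0 1000 9000); do
        index_slice "$start" 1000
    done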
Attachments on bug 7246: 6345 | 6347 | 6994 | 6995 | 7200 | 7379 | 7410 | 7470 | 7564