Bug 26024

Summary: Purge undone of zebraqueue in cleanup_database.pl
Product: Koha
Reporter: Fridolin Somers <fridolin.somers>
Component: Searching - Zebra
Assignee: Fridolin Somers <fridolin.somers>
Status: Needs Signoff --- QA Contact:
Severity: normal
Priority: P5 - low
CC: dcook
Version: Main
Hardware: All
OS: All
See Also: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=25710
          https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=21820
Change sponsored?: ---
Patch complexity: Trivial patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:
Attachments: Bug 26024: Purge undone of zebraqueue in cleanup_database.pl
             Bug 26024: Purge option for undone zebraq entries in cleanup_database.pl

Description Fridolin Somers 2020-07-20 09:47:32 UTC
The purge script misc/cronjobs/cleanup_database.pl, when run with --zebraqueue DAYS, deletes entries from the zebraqueue table that have done=1.

We are now starting to use Elasticsearch only, so zebraqueue entries with done=0 stay in the database.

I propose we purge all entries (done or not) in cleanup_database.pl.
Even with Zebra, if indexing has not happened after several days (30 by default), something is wrong anyway and a full rebuild is needed.
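
For context, a minimal sketch (in Perl, like the cron script itself) of what the --zebraqueue DAYS purge boils down to. This is not the actual cleanup_database.pl code: the SELECT mirrors the query quoted in the patch below, while the connection details, the delete-by-id loop and the output message are assumptions made for illustration only.

#!/usr/bin/perl
# Illustrative sketch only, NOT the real cleanup_database.pl.
use strict;
use warnings;
use DBI;

# Assumption: placeholder connection details for this sketch.
my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'koha_user', 'koha_pass',
    { RaiseError => 1 } );

my $zebraqueue_days = 30;    # default purge age mentioned above

# Same WHERE clause as in the query quoted in the patch; the proposal here is
# to drop "done=1 AND" so that undone (done=0) entries are purged as well.
my $sth = $dbh->prepare(q{
    SELECT id, biblio_auth_number, server, time
    FROM zebraqueue
    WHERE done=1 AND time < date_sub(curdate(), INTERVAL ? DAY)
});
$sth->execute($zebraqueue_days) or die $dbh->errstr;

# Assumption: delete each matching row by id.
my $del_sth = $dbh->prepare(q{DELETE FROM zebraqueue WHERE id = ?});
my $count   = 0;
while ( my $row = $sth->fetchrow_hashref ) {
    $del_sth->execute( $row->{id} );
    $count++;
}
print "$count zebraqueue entries purged.\n";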
Comment 1 Fridolin Somers 2020-07-20 10:00:44 UTC
Created attachment 107083 [details] [review]
Bug 26024: Purge undone of zebraqueue in cleanup_database.pl

The purge script misc/cronjobs/cleanup_database.pl, when run with --zebraqueue DAYS, deletes entries from the zebraqueue table that have done=1.

We are now starting to use Elasticsearch only, so zebraqueue entries with done=0 stay in the database.

I propose we purge all entries (done or not) in cleanup_database.pl.
Even with Zebra, if indexing has not happened after several days (30 by default), something is wrong anyway and a full rebuild is needed.

Test plan:
1) Use a database with zebraqueue entries older than 7 days
2) Stop Zebra indexing
3) Count entries older than 7 days:
mysql > select count(*),done from zebraqueue where time < date_sub(curdate(), INTERVAL 7 DAY) group by done;
4) Simulate done=0:
mysql > update zebraqueue set done=0;
5) Run misc/cronjobs/cleanup_database.pl --zebraqueue 7 -v
6) Re-run step 3): there should be no results left
Comment 2 Katrin Fischer 2020-07-20 10:54:00 UTC
I am sorry, Frido, but I don't feel this is the right solution.

1) If there is a problem with indexing, I want to know about it; this would hide the issue by deleting the evidence.

2) You can now do a partial rebuild by simply having the queue pick up again where the indexer stopped, so there is sense in keeping the old entries. Depending on how big and busy your library is, you might well want to avoid a full reindex.

3) Wouldn't the solution be that if your library uses Elasticsearch only, we should not write to the zebraqueue at all? As of now, your patch affects all libraries and doesn't allow for any choice. 

For example, we delete done entries every night, but I'd want purging the undone ones to be a separate option at least.
Comment 3 Fridolin Somers 2020-07-20 15:24:42 UTC
(In reply to Katrin Fischer from comment #2)
> 3) Wouldn't the solution be that if your library uses Elasticsearch only, we
> should not write to the zebraqueue at all?
Of course, it's even better.

If one must switch back to Zebra, just do a full rebuild.
Comment 4 Fridolin Somers 2020-09-02 14:52:02 UTC
OK, that's Bug 21820.
Comment 5 Fridolin Somers 2020-09-21 14:40:25 UTC
Comment on attachment 107083 [details] [review]
Bug 26024: Purge undone of zebraqueue in cleanup_database.pl

>From fd02420ab8d40053906af75634104b5bb45f2dfa Mon Sep 17 00:00:00 2001
>From: Fridolin Somers <fridolin.somers@biblibre.com>
>Date: Mon, 20 Jul 2020 11:48:24 +0200
>Subject: [PATCH] Bug 26024: Purge undone of zebraqueue in cleanup_database.pl
>
>Purge script misc/cronjobs/cleanup_database.pl with --zebraqueue DAYS will delete entries from zebraqueue table with done=1
>
>We now start to use Elasticsearch only so entries of zebraqueue with done=0 stay in database.
>
>I propose we purge all entries in cleanup_database.pl.
>Even with zebra, if indexing does not occur after several days (30 by default), there is a problem and one need a full rebuild.
>
>Test plan:
>1) Use an database with entries in zebraqueue older than 7 days
>2) Stop zebra indexing
>3) Count entries older than 7 days :
>mysql > select count(*),done from zebraqueue where time < date_sub(curdate(), INTERVAL 7 DAY) group by done;
>4) Simulate done=0 :
>mysql > update zebraqueue set done=0;
>5) Run misc/cronjobs/cleanup_database.pl --zebraqueue 7 -v
>6) Re run 3) you have no results
>---
> misc/cronjobs/cleanup_database.pl | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/misc/cronjobs/cleanup_database.pl b/misc/cronjobs/cleanup_database.pl
>index a1a9cae442..8a6224ae55 100755
>--- a/misc/cronjobs/cleanup_database.pl
>+++ b/misc/cronjobs/cleanup_database.pl
>@@ -236,7 +236,7 @@ if ($zebraqueue_days) {
>         q{
>             SELECT id,biblio_auth_number,server,time
>             FROM zebraqueue
>-            WHERE done=1 AND time < date_sub(curdate(), INTERVAL ? DAY)
>+            WHERE time < date_sub(curdate(), INTERVAL ? DAY)
>         }
>     );
>     $sth->execute($zebraqueue_days) or die $dbh->errstr;
>-- 
>2.27.0
Comment 6 Peter Vashchuk 2024-10-15 14:30:16 UTC
Created attachment 172779 [details] [review]
Bug 26024: Purge option for undone zebraq entries in cleanup_database.pl

Added a --zebraqueue-undone DAYS option to cleanup_database.pl to purge uncompleted (done=0) zebraqueue entries older than DAYS days (90 days by default).
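
For readers without the attachment handy, a rough sketch of how such an option could be wired into a cleanup script. This is not the attached patch: only the option name, the done=0 condition and the 90-day default come from the comment above, while the Getopt::Long handling, connection details and output message are illustrative assumptions.

use strict;
use warnings;
use Getopt::Long qw( GetOptions );
use DBI;

my $zebraqueue_undone_days;
GetOptions( 'zebraqueue-undone:i' => \$zebraqueue_undone_days )
    or die "Error in command line arguments\n";

# ":i" lets the option be passed without a value (it then comes back as 0),
# in which case the 90-day default from the comment above is used.
$zebraqueue_undone_days = 90
    if defined $zebraqueue_undone_days && !$zebraqueue_undone_days;

if ($zebraqueue_undone_days) {
    # Assumption: placeholder connection details for this sketch.
    my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'koha_user', 'koha_pass',
        { RaiseError => 1 } );

    # Purge uncompleted (done=0) entries older than the requested number of days.
    my $count = $dbh->do(
        q{
            DELETE FROM zebraqueue
            WHERE done = 0 AND time < date_sub(curdate(), INTERVAL ? DAY)
        },
        undef, $zebraqueue_undone_days
    );
    $count += 0;    # numify DBI's "0E0" when nothing was deleted
    print "$count undone zebraqueue entries purged.\n";
}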
Comment 7 David Cook 2024-10-17 06:21:04 UTC
(In reply to Katrin Fischer from comment #2)
> 3) Wouldn't the solution be that if your library uses Elasticsearch only, we
> should not write to the zebraqueue at all? As of now, your patch affects all
> libraries and doesn't allow for any choice. 

Yeah, that's probably the way. I plan to do that at some point...
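
To make the idea concrete, a hedged sketch of what "not writing to the zebraqueue at all" could look like; this is not an existing Koha patch. The helper name, its signature and the INSERT's operation column are hypothetical, while the SearchEngine system preference and the C4::Context calls are existing Koha APIs.

use strict;
use warnings;
use C4::Context;

# Hypothetical helper name and signature, for illustration only.
sub queue_for_zebra_indexing {
    my ( $record_number, $server, $operation ) = @_;

    # If Zebra is not the active search engine, there is nothing to queue.
    return if C4::Context->preference('SearchEngine') ne 'Zebra';

    # Illustrative INSERT; biblio_auth_number and server appear in the query
    # quoted in the patch above, the operation column is assumed.
    my $dbh = C4::Context->dbh;
    $dbh->do(
        q{INSERT INTO zebraqueue (biblio_auth_number, server, operation) VALUES (?, ?, ?)},
        undef, $record_number, $server, $operation
    );
    return 1;
}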