Description
Martin Renvoize (ashimema)
2022-05-20 11:31:52 UTC
Is there a reason indexing could not take place during the import and would have to be pushed after the task is completed? Just wondering if this would not slow things down for big imports like you would do with bulkmarcimport.

(In reply to Katrin Fischer from comment #1)
> Is there a reason indexing could not take place during the import and would
> have to be pushed after the task is completed? Just wondering if this would
> not slow things down for big imports like you would do with bulkmarcimport.

See the discussion on bug 30465 for background on this exact issue.

As far as I can tell there's already a bug in the existing implementation here anyway. As I just mentioned in bug 29440, if `BiblioAddsAuthorities` is enabled we're using the search indexes to find authority matches as part of the AddBiblio action. If we're calling a whole bunch of AddBiblio calls in a row in a loop, then the index will likely be out of date for at least some of those searches, and as such authority matches won't work as expected: we have a race condition.

What we really need is task dependencies... then the authority linking can be a task that's dependent on the rebuild having been completed.

Created attachment 135251 [details] [review]
Bug 30822: Make BatchCommitRecords update the index in one request

When committing staged MARC imports to the catalogue we will often be importing a batch of records. We don't want to send one index request per biblio affected; we want to index them all after the records have been modified, otherwise we will end up with multiple tasks per record (when items are also affected).

Test plan:
1) Use the stage MARC record tool to stage and commit a set of records and confirm the behaviour remains correct.
2) If using Elastic, check that only one indexing job is queued to take place resulting from the committed import.
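To illustrate the pattern the commit message describes, here is a minimal sketch (not the patch itself) of committing a batch while deferring indexing to a single request at the end. The skip_record_index option on AddBiblio is an assumption here, and @marc_records and $framework stand in for caller-supplied values; the indexer API matches the one quoted later in this thread.

use C4::Biblio qw( AddBiblio );
use Koha::SearchEngine;
use Koha::SearchEngine::Indexer;

# @marc_records: MARC::Record objects from the staged batch (assumed
# supplied by the caller); $framework: the framework code to use.
my @updated_biblionumbers;
for my $record (@marc_records) {
    # skip_record_index defers indexing for this record; the option is
    # an assumption here, so check the C4::Biblio POD for your version.
    my ($biblionumber) = AddBiblio( $record, $framework, { skip_record_index => 1 } );
    push @updated_biblionumbers, $biblionumber;
}

# One indexing request for the whole batch instead of one per biblio
# (and one per item change).
my $indexer = Koha::SearchEngine::Indexer->new(
    { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
$indexer->index_records( \@updated_biblionumbers, "specialUpdate", "biblioserver" );

With Elasticsearch this should leave a single indexing job queued for the whole import, which is what the test plan checks for.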
Good improvement! Just a small nit: as BatchCommitItems is a public function, it should either take skip_record_index as a parameter or document clearly that it doesn't do indexing and that it is left to the caller.

(In reply to Joonas Kylmälä from comment #5)
> Good improvement! Just a small nit: as BatchCommitItems is a public function,
> it should either take skip_record_index as a parameter or document clearly
> that it doesn't do indexing and that it is left to the caller.

Good catch. I'm wondering if the third option is to rename the function to be 'private'.. it appears to only be called inside this module and isn't exported either. Certainly some POD around it would also be sensible though.

Created attachment 135372 [details] [review]
Bug 30822: Clarify that BatchCommitItems is a private function

BatchCommitItems is only being used within this module and isn't mentioned in EXPORT_OK. This patch simply renames it to _batchCommitItems to follow the leading-underscore convention for private functions, and also adds a little hint to the POD of the function to clarify that the caller must trigger a re-index.

Created attachment 135963 [details] [review]
Bug 30822: Make BatchCommitRecords update the index in one request

When committing staged MARC imports to the catalogue we will often be importing a batch of records. We don't want to send one index request per biblio affected; we want to index them all after the records have been modified, otherwise we will end up with multiple tasks per record (when items are also affected).

Test plan:
1) Use the stage MARC record tool to stage and commit a set of records and confirm the behaviour remains correct.
2) If using Elastic, check that only one indexing job is queued to take place resulting from the committed import.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@iki.fi>

Created attachment 135964 [details] [review]
Bug 30822: Clarify that BatchCommitItems is a private function

BatchCommitItems is only being used within this module and isn't mentioned in EXPORT_OK. This patch simply renames it to _batchCommitItems to follow the leading-underscore convention for private functions, and also adds a little hint to the POD of the function to clarify that the caller must trigger a re-index.

JK: Amended patch to also rename the function in t/db_dependent/ImportBatch.t and fix the typo "commiting" => "committing"

Signed-off-by: Joonas Kylmälä <joonas.kylmala@iki.fi>
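As a rough illustration of the kind of POD hint the rename patch describes, something along these lines could sit above the renamed function. This is a sketch only, not the exact text of the patch; the arguments and return values shown are illustrative assumptions.

=head2 _batchCommitItems

  my ( $num_items_added, $num_items_replaced, $num_items_errored ) =
      _batchCommitItems( $record_id, $biblionumber, $action );

Private helper for committing the items attached to a staged record.
Note: this function does not trigger a search-engine re-index; the
caller is responsible for indexing the modified biblio.

=cut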
I think you did better than me in attachment 136129 (https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=136129). But maybe we need that part, for biblio deletions, what do you think?

+    if ( $record_type eq 'biblio' && ( @updated_biblionumbers || @deleted_biblionumbers ) ) {
+        my $indexer = Koha::SearchEngine::Indexer->new({ index => $Koha::SearchEngine::BIBLIOS_INDEX });
+        if ( @deleted_biblionumbers ) {
+            $indexer->index_records( \@deleted_biblionumbers, "recordDelete", "biblioserver" );
+        } else {
+            $indexer->index_records( \@updated_biblionumbers, "specialUpdate", "biblioserver" );
+        }
+    }

Went to apply it as a dependency of bug 27421 and noticed the patches don't apply anymore:

Bug 30822 - BatchCommit does not deal with indexation correctly

135963 - Bug 30822: Make BatchCommitRecords update the index in one request
135964 - Bug 30822: Clarify that BatchCommitItems is a private function

Apply? [(y)es, (n)o, (i)nteractive] y
Applying: Bug 30822: Make BatchCommitRecords update the index in one request
Using index info to reconstruct a base tree...
M  C4/ImportBatch.pm
Falling back to patching base and 3-way merge...
Auto-merging C4/ImportBatch.pm
CONFLICT (content): Merge conflict in C4/ImportBatch.pm
error: Failed to merge in the changes.
Patch failed at 0001 Bug 30822: Make BatchCommitRecords update the index in one request

Created attachment 136218 [details] [review]
Bug 30822: Make BatchCommitRecords update the index in one request

When committing staged MARC imports to the catalogue we will often be importing a batch of records. We don't want to send one index request per biblio affected; we want to index them all after the records have been modified, otherwise we will end up with multiple tasks per record (when items are also affected).

Test plan:
1) Use the stage MARC record tool to stage and commit a set of records and confirm the behaviour remains correct.
2) If using Elastic, check that only one indexing job is queued to take place resulting from the committed import.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@iki.fi>

Created attachment 136219 [details] [review]
Bug 30822: Clarify that BatchCommitItems is a private function

BatchCommitItems is only being used within this module and isn't mentioned in EXPORT_OK. This patch simply renames it to _batchCommitItems to follow the leading-underscore convention for private functions, and also adds a little hint to the POD of the function to clarify that the caller must trigger a re-index.

JK: Amended patch to also rename the function in t/db_dependent/ImportBatch.t and fix the typo "commiting" => "committing"

Signed-off-by: Joonas Kylmälä <joonas.kylmala@iki.fi>

QA: Looking here

(In reply to Martin Renvoize from comment #6)
> (In reply to Joonas Kylmälä from comment #5)
> > Good improvement! Just a small nit: as BatchCommitItems is a public
> > function, it should either take skip_record_index as a parameter or
> > document clearly that it doesn't do indexing and that it is left to
> > the caller.
>
> Good catch. I'm wondering if the third option is to rename the function to
> be 'private'.. it appears to only be called inside this module and isn't
> exported either. Certainly some POD around it would also be sensible though.

Just a thought: not sure if we should invest time in renaming C4 functions private rather than getting them out of C4 ;) There will be a bunch more. I wouldn't rename them all.

This is done, no problem.

There is something funny going on here. Might be a config issue, but while the worker is running, I don't see any import finished, but constantly a new process is spinning up for the background-job-progress? Testing with Zebra only.

(In reply to Jonathan Druart from comment #10)
> But maybe we need that part, for biblio deletions, what do you think?
>
> +    if ( $record_type eq 'biblio' && ( @updated_biblionumbers ||
> @deleted_biblionumbers ) ) {
> +        my $indexer = Koha::SearchEngine::Indexer->new({ index =>
> $Koha::SearchEngine::BIBLIOS_INDEX });
> +        if ( @deleted_biblionumbers ) {
> +            $indexer->index_records( \@deleted_biblionumbers,
> "recordDelete", "biblioserver" );
> +        } else {
> +            $indexer->index_records( \@updated_biblionumbers,
> "specialUpdate", "biblioserver" );
> +        }
> +    }

I agree that for consistency we should address that too, but that can be done in a new report. The title of this report is scoped to BatchCommitRecords, and we are talking about BatchRevert here.
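Sketching what that follow-up could look like for BatchRevert, mirroring the quoted diff (hypothetical, not an attached patch): collect the biblionumbers deleted during the revert, then send a single recordDelete request rather than one per record.

use Koha::SearchEngine;
use Koha::SearchEngine::Indexer;

# Hypothetical: @deleted_biblionumbers is collected while reverting the
# batch, with per-record indexing skipped during the deletes.
my $indexer = Koha::SearchEngine::Indexer->new(
    { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
$indexer->index_records( \@deleted_biblionumbers, "recordDelete", "biblioserver" );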
(In reply to Marcel de Rooy from comment #16)
> There is something funny going on here. Might be a config issue, but while
> the worker is running, I don't see any import finished, but constantly a
> new process is spinning up for the background-job-progress? Testing with
> Zebra only.

The worker is not yet relevant here. But what needs attention (somewhere) is that if BatchCommit silently fails, the JS on the manage import page keeps polling for the status of something that crashed.

Created attachment 136247 [details] [review]
Bug 30822: Make BatchCommitRecords update the index in one request

When committing staged MARC imports to the catalogue we will often be importing a batch of records. We don't want to send one index request per biblio affected; we want to index them all after the records have been modified, otherwise we will end up with multiple tasks per record (when items are also affected).

Test plan:
1) Use the stage MARC record tool to stage and commit a set of records and confirm the behaviour remains correct.
2) If using Elastic, check that only one indexing job is queued to take place resulting from the committed import.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@iki.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Created attachment 136248 [details] [review]
Bug 30822: Clarify that BatchCommitItems is a private function

BatchCommitItems is only being used within this module and isn't mentioned in EXPORT_OK. This patch simply renames it to _batchCommitItems to follow the leading-underscore convention for private functions, and also adds a little hint to the POD of the function to clarify that the caller must trigger a re-index.

JK: Amended patch to also rename the function in t/db_dependent/ImportBatch.t and fix the typo "commiting" => "committing"

Signed-off-by: Joonas Kylmälä <joonas.kylmala@iki.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Thanks guys, sorry I didn't get back to this one. Yes, I can take care of BatchRevert next unless someone else wants to jump on that one. As for the other failures, I'm wondering if to some extent that's looked at in bug 29325 - commit_file.pl error 'Already in a transaction'?

Pushed to master for 22.11. Nice work everyone, thanks!

Does not apply cleanly to 22.05.x, no backport. Please rebase if needed.

*** Bug 26543 has been marked as a duplicate of this bug. ***