We must provide a fix to deal with corrupted data. For more information, see bug 18966 and https://wiki.koha-community.org/wiki/DBMS_auto_increment_fix
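For readers who have not seen the wiki page: the corruption happens because older MySQL/MariaDB versions (before MySQL 8.0) recompute each InnoDB AUTO_INCREMENT counter as MAX(id)+1 from the live table at server restart, ignoring rows that Koha moved to the old_*/deleted* tables, so ids get reused. The wiki fix resets every counter above the maximum of both tables at startup. A minimal sketch for one table pair, run from a server init-file (table names per Koha's schema):

-- Sketch of the wiki's approach for issues/old_issues; the real init
-- file repeats this for every affected table pair.
SET @new_ai = (SELECT GREATEST(
                   IFNULL((SELECT MAX(issue_id) FROM issues), 0),
                   IFNULL((SELECT MAX(issue_id) FROM old_issues), 0)) + 1);
SET @sql = CONCAT('ALTER TABLE issues AUTO_INCREMENT = ', @new_ai);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;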
It would be great, IMO, if this fix could also be used in cleanup_database somehow. Perhaps put the core fix in a module?
Created attachment 65402 [details] [review]
Bug 19016: Add a script to fix corrupted data

This patch adds two new options to the cleanup_database.pl script:
* --list-corrupted-data to list the different rows that are affected
* --fix-corrupted-data to fix the corrupted rows and reassign them a new id

DO NOT USE IT IN PRODUCTION YET!
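From the option descriptions, the detection step presumably looks for ids that exist in both a live table and its old/deleted counterpart. A hedged sketch of the equivalent check for one table pair (my reading of the patch, not its exact SQL):

-- Hypothetical detection query for issues/old_issues; the script would
-- run the equivalent for reserves/old_reserves, borrowers/deletedborrowers,
-- biblio/deletedbiblio, and so on.
SELECT i.issue_id
FROM issues i
JOIN old_issues o USING (issue_id);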
Created attachment 65403 [details] [review]
Bug 19016: Update other values

Here we have a problem! If we assign a new id to the rows that cannot be moved safely, we also need to modify the other tables that reference them without a foreign key (for historical or laziness reasons).

For instance: John has borrowernumber=42 and created a suggestion (suggestions.suggestedby=42). Jane has borrowernumber=42 in the deletedborrowers table, and she created a suggestion too (same suggestedby value). Without this patch, John would get a new id but suggestions.suggestedby would not be updated to match. With this fix the new id is propagated, but both suggestions end up marked as suggested by John.
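To make the ambiguity concrete, a sketch of the only update the script can perform here, given that suggestions.suggestedby carries no foreign key (the ids are illustrative):

-- John (borrowers.borrowernumber = 42) is renumbered to 1001;
-- Jane (deletedborrowers.borrowernumber = 42) keeps 42. Without an FK
-- there is nothing to distinguish their rows, so this blanket update
-- reattaches Jane's suggestion to John as well:
UPDATE suggestions SET suggestedby = 1001 WHERE suggestedby = 42;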
What about the biblioitems table?
Created attachment 65456 [details] [review]
Bug 19016 [Followup] - Fix bad column name
(In reply to Kyle M Hall from comment #4)
> What about the biblioitems table?

I have no idea what to do with biblioitems. What do you think about the second patch?
Created attachment 65710 [details] [review]
Bug 19016: Check and fix 'biblioitems'

This patch adds 'biblioitems' to the list of tables to be evaluated and fixed.

To test the problem exists:
- reset_all
- Run:
  $ sudo koha-mysql kohadev
  > SELECT biblionumber FROM biblio ORDER BY biblionumber DESC LIMIT 1;
- From the staff UI, delete the biblio with that biblionumber
- Restart mysql:
  $ sudo systemctl restart mysql.service
- Add a new biblio record
- Run:
  $ sudo koha-shell kohadev
  k> cd kohaclone
  k> misc/cronjobs/cleanup_database.pl --list-corrupted-data
=> FAIL: the biblioitems issue is not highlighted
- Apply this patch
- Run:
  $ sudo koha-shell kohadev
  k> cd kohaclone
  k> misc/cronjobs/cleanup_database.pl --list-corrupted-data
=> SUCCESS: the biblioitems issue is highlighted

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created a problem in two tables:
* Tables biblioitems/deletedbiblioitems: 2580, 2632
* Tables reserves/old_reserves: 831, 833

Now fixing it with the script:
* Tables biblioitems/deletedbiblioitems: 2580, 2632
Updating biblioitems.biblioitemnumber=2580 with new id 2634
Updating biblioitems.biblioitemnumber=2632 with new id 2635
* Tables reserves/old_reserves: 831, 833
Updating reserves.reserve_id=831 with new id 835
Updating reserves.reserve_id=833 with new id 836

And now searching on the OPAC crashes on opac-detail and produces warnings on opac-search.pl:

Can't call method "title" on an undefined value at /usr/share/koha/masterclone/opac/opac-detail.pl line 454.
GetCOinSBiblio called with undefined record at /usr/share/koha/masterclone/opac/opac-search.pl line 663.

We need to call ModZebra (or the Elasticsearch indexer?) for these changed ids too. Would it be safer to change the ids in the old/deleted tables instead, and push the auto_increment on the original table?
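For what it's worth, the missing step looks like a per-record reindex after each renumbering. A sketch in the script's own language, assuming the classic Zebra path (C4::Biblio::ModZebra with 'specialUpdate'; an Elasticsearch setup would need the equivalent indexer call):

use C4::Biblio qw( ModZebra );

# Hypothetical: after "Updating biblio.biblionumber=$old_id with new id $max",
# queue both the stale and the new id for reindexing so search results and
# opac-detail stay consistent with the database.
ModZebra( $old_id, 'specialUpdate', 'biblioserver' );
ModZebra( $max,    'specialUpdate', 'biblioserver' );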
(In reply to Marcel de Rooy from comment #8)
> Would it be safer to change the ids in the old/deleted tables instead, and
> push the auto_increment on the original table?

Inserting the new max id into the original table (and then deleting it) might be a faster workaround than ALTER TABLE. Should be tested..
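A sketch of that workaround, assuming MySQL semantics: an insert with an explicit id pushes the AUTO_INCREMENT counter past it, and the counter keeps that value after the delete as long as the server is not restarted. Here 9999 stands for the precomputed max id across issues and old_issues:

-- Hypothetical counter bump without ALTER TABLE; assumes the omitted
-- columns are nullable or have defaults.
INSERT INTO issues (issue_id) VALUES (9999);
DELETE FROM issues WHERE issue_id = 9999;
-- The next auto-generated issue_id will be 10000.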
I wonder if it would be good to separate the different types of data. I feel pretty sure about the fix for issues, but the borrowers and biblio* tables appear more dangerous. The issue_id corruption is also the most damaging in practice, as the affected items remain stuck on patron accounts. The list of tables for borrowers appears a bit short - are you partly relying on cascades there? In general I am worried about maintaining this script: with every new table, it could grow new bugs. But as we don't have clean FK relationships, I don't see another way of doing it. :(
When can we expect a fix to be available via the stable Debian repo? I have 120 affected rows in my issues table that are preventing my library from properly returning books. Otherwise, how can I manually fix ("deal with") these affected rows?
(In reply to Christian McDonald from comment #11)
> When can we expect a fix to be available via the stable Debian repo? I have
> 120 affected rows in my issues table that are preventing my library from
> properly returning books. Otherwise, how can I manually fix ("deal with")
> these affected rows?

Seconded. I've applied the fix (the DBMS auto increment fix on the Koha wiki) to prevent further occurrences, but it doesn't appear to repair existing corruption. We have a handful of books that we're unable to check back in. How would one go about fixing this manually in lieu of the upcoming script?
(In reply to dguidry from comment #12)
> Seconded. I've applied the fix (the DBMS auto increment fix on the Koha
> wiki) to prevent further occurrences, but it doesn't appear to repair
> existing corruption. We have a handful of books that we're unable to check
> back in. How would one go about fixing this manually in lieu of the
> upcoming script?

For issues/old_issues you could try the script, in a test environment(!). You will see this kind of output:

Updating issues.issue_id=24 with new id 42

When it is done, you should double-check the accountlines rows whose issue_id values have been updated:

SELECT borrowernumber FROM accountlines WHERE issue_id=42;

There are two possibilities: either the accountlines row has been updated correctly, or not. Either the fine has been attached to the correct patron account, or another patron has been charged (!).

The accountlines values are updated by line 511 of the cleanup script:

$dbh->do( q|UPDATE accountlines SET issue_id = ? WHERE issue_id = ?|, undef, $max, $old_id );

If you do not want accountlines to be updated, you can comment out that line and execute the script again on a DB backup. Without feedback I cannot tell you whether it is preferable to comment it out or not. Instinctively I would comment it out, on the assumption that an existing fine is more likely related to the already checked-in item (the old_issues row) than to the current checkout.
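If it helps, here is a hedged way to audit the outcome afterwards: compare the patron on each updated accountline with the patron on the checkout it now points to (column names per Koha's schema; substitute the new ids the script printed):

-- Hypothetical audit for one renumbered checkout (new id 42): any row
-- returned is a fine now attached to a different patron's checkout.
SELECT a.accountlines_id,
       a.borrowernumber AS fined_patron,
       i.borrowernumber AS checkout_patron
FROM accountlines a
JOIN issues i USING (issue_id)
WHERE a.issue_id = 42
  AND a.borrowernumber <> i.borrowernumber;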
We would prefer to correct the ids in the old_issues and old_reserves tables. For the deleted_xxx tables, we may as well just delete the rows, since they are too difficult to fix: for example, biblionumber, biblioitemnumber and itemnumber are also embedded in the MARCXML.
Created attachment 68015 [details] [review]
Bug 19016: Trigger reindex on fixing biblios

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
FWIW, I just tried the script with all the patches (up to now) and ran into an FK constraint violation:

% ./cleanup_database.pl --fix-corrupted-data
* Tables biblio/deletedbiblio: 1455
Updating biblio.biblionumber=1455 with new id 1516
DBD::mysql::st execute failed: Cannot add or update a child row: a foreign key constraint fails (`koha_koha`.`items`, CONSTRAINT `items_ibfk_5` FOREIGN KEY (`biblionumber`) REFERENCES `biblio` (`biblionumber`) ON DELETE CASCADE ON UPDATE CASCADE) [for Statement "UPDATE biblio SET biblionumber = ? WHERE biblionumber = ?" with ParamValues: 0=1516, 1=1455] at ./cleanup_database.pl line 510, <DATA> line 755.
Something went wrong, rolling back!
* Tables biblioitems/deletedbiblioitems: 1455
Updating biblioitems.biblioitemnumber=1455 with new id 1516
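For anyone debugging the update order here, listing every foreign key that references biblio shows which child tables a renumbering has to account for (koha_koha is the schema name taken from the error above):

SELECT table_name, constraint_name, column_name
FROM information_schema.key_column_usage
WHERE table_schema = 'koha_koha'
  AND referenced_table_name = 'biblio';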
This script doesn't update the primary key's AUTO_INCREMENT counter after renumbering items, which results in duplicate primary key errors when Koha tries to create new items (until the counter passes the ids the script assigned). This is masked if you restart the MySQL server after running the script and have set up the auto-increment fixes per the wiki, but it would be better if the script bumped the counter itself after consuming those primary keys.
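A sketch of the missing step, assuming plain MySQL (ALTER TABLE ... AUTO_INCREMENT does not accept a subquery, hence the same prepared-statement dance as in the wiki fix):

-- Hypothetical: after renumbering, move the counter past the highest
-- assigned id so new inserts cannot collide.
SET @new_ai = (SELECT MAX(itemnumber) + 1 FROM items);
SET @sql = CONCAT('ALTER TABLE items AUTO_INCREMENT = ', @new_ai);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;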
Hi, how do I download the script to fix the corrupted data?
The script is not finished and not recommended for production use.
The fix is in Bug 20271. Please push it forward.
(In reply to Benjamin Rokseth from comment #20)
> The fix is in Bug 20271. Please push it forward.

Hum, what do you mean?
Only an attempt to push the focus onto the core problem, and a proposed fix for it ;) The problem is worse than shifted auto-increments, as we discovered to our dismay.
This approach never received the necessary feedback from upgrades done in production environments. Closing.