The script link_bibs_to_authorities.pl will link bibs to authorities, but it won't update the bibs with the data from the authority. I'm proposing adding a new script which will update the bib records from their linked authorities.
This is still on my short TODO list, but it's probably a little ways off as I have higher priorities at the moment.
It seems like there are a few ways this could be done.

One idea is to fetch every bib record that contains at least one $9 linkage and compile a list of unique authority record IDs. Then I'll run the following for each ID:

    my $record = GetAuthority($authid);
    merge({
        mergefrom => $authid,
        MARCfrom  => $record,
        mergeto   => $authid,
        MARCto    => $record,
    });

But that will involve double-handling bib records.

--

Alternatively, I suppose I could fetch every bib record that contains at least one $9 linkage, and then run the following for each unique linkage:

    my $record = GetAuthority($authid);
    merge({
        mergefrom     => $authid,
        MARCfrom      => $record,
        mergeto       => $authid,
        MARCto        => $record,
        biblionumbers => [$biblionumber],
    });

I could cache the $record object to reduce some database fetches, except that merge() is super inefficient.

--

I suppose I could write a custom function... It looks like merge() obtains the heading data using the following:

    @record_to = $MARCto->field($auth_tag_to_report_to)->subfields();

Then I'd create $field_to from @record_to, and call $field->replace_with($field_to). Of course, writing a new function adds complications, especially if I don't want to refactor merge(), which is an intimidating task. Maybe I'll just maintain this locally, as that is less risky.
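To make that concrete, here's a rough sketch of what such a custom function might look like. The sub name and the $9-handling are my own guesses, not anything lifted from merge():

    use MARC::Field;

    # Hypothetical helper: overwrite a bib heading field with the heading
    # data from its linked authority record, keeping the bib field's tag
    # and its $9 linkage intact.
    sub replace_heading_from_authority {
        my ( $bib_field, $auth_record, $auth_tag_to_report_to ) = @_;

        my $auth_field = $auth_record->field($auth_tag_to_report_to)
            or return;    # authority has no heading field; leave the bib alone

        # subfields() returns a list of [ code, value ] pairs
        my @record_to = $auth_field->subfields();

        # The authority heading has no $9, so carry over the bib's linkage
        my $authid = $bib_field->subfield('9');
        push @record_to, [ '9' => $authid ] if defined $authid;

        my $field_to = MARC::Field->new(
            $bib_field->tag(),
            $auth_field->indicator(1),
            $auth_field->indicator(2),
            map { @{$_} } @record_to,
        );

        $bib_field->replace_with($field_to);
    }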
Actually, I'd say it's probably better to be inefficient than to have a custom implementation. The simplest version is just to compile a list of unique authority IDs and then run the following for each one:

    my $record = GetAuthority($authid);
    merge({
        mergefrom => $authid,
        MARCfrom  => $record,
        mergeto   => $authid,
        MARCto    => $record,
    });

It will be inefficient, but it will be a CLI script which can just run as long as it needs to.
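Putting that together, the whole script is barely more than a loop. Sketch only; how the unique authid list gets compiled is the part still being hand-waved here:

    #!/usr/bin/perl
    use Modern::Perl;
    use C4::AuthoritiesMarc;    # for GetAuthority() and merge()

    # Stand-in for the compiled list of unique authority IDs
    my @authids = @ARGV;

    for my $authid (@authids) {
        my $record = GetAuthority($authid);
        next unless $record;    # skip deleted/missing authorities

        # Merging an authority with itself rewrites the linked bib
        # headings from the authority's current heading data
        merge({
            mergefrom => $authid,
            MARCfrom  => $record,
            mergeto   => $authid,
            MARCto    => $record,
        });
    }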
Created attachment 121405 [details] [review]
Bug 28011: Add CLI script to update bib headings from linked auth records

This patch adds a script which updates bib headings from authority records. This helps normalize the headings to remove differences in punctuation, capitalization, etc. It can be useful to run after link_bibs_to_authorities.pl.

Test plan:
1. Apply the patch
2. Go to http://localhost:8081/cgi-bin/koha/cataloguing/addbiblio.pl?biblionumber=192
3. Mangle the HTML to remove the accents from 100$a and save the record
4. Note the bib record now says "By: O Cadhain, Mairtin" without the accents
5. Run the script
6. Go to http://localhost:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=192
7. Note that the accents have been restored and it now says "By: Ó Cadhain, Máirtín"
Warnings:

This uses Zebra's scan feature to build up the list of authority records to use for updating, and that can be a resource-intensive operation. It can pin the zebrasrv connection at 100% CPU, since it's doing a lot of rapid scanning, so just beware. This script will also only work as expected if your Zebra indexes are current.
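For the curious, the scan side amounts to walking a bib index term by term to enumerate the linked authids. A stripped-down sketch with ZOOM-Perl, where the server address and the index name (Koha-Auth-Number, i.e. the bib-side $9 linkage index) are placeholders rather than necessarily what the script does:

    use ZOOM;

    my $conn = ZOOM::Connection->new('localhost:9998/biblios');
    $conn->option( number => 1000 );    # terms per scan response

    my %seen;
    my $start = '0';
    while (1) {
        my $ss = $conn->scan_pqf(qq{\@attr 1=Koha-Auth-Number "$start"});
        last unless $ss->size();

        for my $i ( 0 .. $ss->size() - 1 ) {
            my ($term) = $ss->term($i);    # term() also returns an occurrence count
            $seen{$term} = 1;
        }

        # Resume the next scan from the last term returned
        my ($last_term) = $ss->term( $ss->size() - 1 );
        last if $last_term eq $start;    # no forward progress; end of index
        $start = $last_term;
    }

    my @authids = sort keys %seen;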
I suppose another way of doing this could've been to compile the list of authority IDs, grab all the bibs that use those authorities, and inspect the headings to see whether an update is needed per authority record... but I think that would've been a fairly cumbersome, intensive process as well. This script is a blunt instrument, but it's effective.
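(If anyone wanted to pursue that, the inspection step would be something like the following, where field_to_string is a hypothetical normalizer, not an existing Koha sub:

    sub heading_needs_update {
        my ( $bib_field, $auth_field ) = @_;
        return field_to_string($bib_field) ne field_to_string($auth_field);
    }

    sub field_to_string {
        my ($field) = @_;
        # Compare heading subfields only; the bib-side $9 linkage and any
        # non-heading subfields would need filtering per MARC flavour
        return join ' ', map  { $_->[1] }
                         grep { $_->[0] ne '9' } $field->subfields();
    }

And then you'd only call merge() for headings that actually differ.)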
Oh, also note that the 100% CPU is only for the scanning portion. For a Koha with 15,000 authorities scanned, that took about 59 seconds. The actual updater uses maybe 30-50% of one CPU or less for the process. (The impact on the database is less obvious at a glance, as I'm running external DB servers.)
Just double-checked the DB impact: there's pretty much none.
Ahh, I'm getting caught out by AuthorityMergeLimit...
(In reply to David Cook from comment #9)
> Ahh, I'm getting caught out by AuthorityMergeLimit...

Although it looks like it put them in the need_merge_authorities table, so they will be processed eventually. That's probably a bit more optimal anyway.
This looks interesting - it would be nice if it was adapted to work with ES as well.
+1 for ES support, I'm afraid.
Is this something you might still work on, David?
https://stackoverflow.com/questions/14466274/query-all-unique-values-of-a-field-with-elasticsearch/26647301#26647301 looks promising on the ES front, but I'm not sure how we'd achieve that in Koha land.
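To spell out what that might look like from Perl (untested; the index name koha_biblios and the field koha-auth-number, which I believe Koha maps from the $9 subfields, are guesses, and a composite aggregation would be needed to page past the first batch of buckets on a big catalogue):

    use Search::Elasticsearch;

    my $es = Search::Elasticsearch->new( nodes => ['localhost:9200'] );

    my $result = $es->search(
        index => 'koha_biblios',
        body  => {
            size => 0,    # no hits needed, just the aggregation
            aggs => {
                linked_authids => {
                    terms => { field => 'koha-auth-number', size => 10000 },
                },
            },
        },
    );

    my @authids = map { $_->{key} }
        @{ $result->{aggregations}{linked_authids}{buckets} };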
(In reply to Martin Renvoize (ashimema) from comment #13)
> Is this something you might still work on, David?

Mmm, probably not. I can't recall who I did the Zebra version for, and I don't know that I've even thought about this again since 2021 😬