We (PTFS-Europe) have been using this script to export catalogue data to EBSCO EDS for many of our customers for a long time now, and it would be very useful if it could be incorporated into the core code.
Created attachment 103455 [details] [review]
Bug 25246: Add script to export bib data to EBSCO EDS

This script adds the capability to export bibliographic data to EBSCO EDS via a cronjob.

To test:
1) Apply the patch
2) Edit the file so that line 73 contains a file name.
3) Run the script and a file should be created in /tmp
4) If you have a valid username and password for the EBSCO EDS ftp server, add these into lines 84 and 85 and run the script again to ensure that it transfers correctly.
Same comment as Bug 25252
Created attachment 103680 [details] [review]
Bug 25246: Add script to export bib data to EBSCO EDS

This script adds the capability to export bibliographic data to EBSCO EDS via a cronjob.

To test:
1) Apply the patch
2) Edit the file so that line 73 contains a file name.
3) Run the script and a file should be created in /tmp
4) If you have a valid username and password for the EBSCO EDS ftp server, add these into lines 84 and 85 and run the script again to ensure that it transfers correctly.
Created attachment 103683 [details] [review]
Bug 25246: Add script to export bib data to EBSCO EDS

This script adds the capability to export bibliographic data to EBSCO EDS via a cronjob.

To test:
1) Apply the patch
2) Edit the file so that line 73 contains a file name.
3) Run the script and a file should be created in /tmp
4) If you have a valid username and password for the EBSCO EDS ftp server, add these into lines 84 and 85 and run the script again to ensure that it transfers correctly.
I've changed the licence version and added a help page.
Created attachment 103706 [details] [review]
Bug 25246: Add script to export bib data to EBSCO EDS

This script adds the capability to export bibliographic data to EBSCO EDS via a cronjob.

To test:
1) Apply the patch
2) Edit the file so that line 73 contains a file name.
3) Run the script and a file should be created in /tmp
4) If you have a valid username and password for the EBSCO EDS ftp server, add these into lines 84 and 85 and run the script again to ensure that it transfers correctly.
Corrected some further problems that were found in bug 25252. Hopefully this will be OK now.
Created attachment 103711 [details] [review]
Bug 25246: Add script to export bib data to EBSCO EDS

This script adds the capability to export bibliographic data to EBSCO EDS via a cronjob.

To test:
1) Apply the patch
2) Edit the file so that line 73 contains a file name.
3) Run the script and a file should be created in /tmp
4) If you have a valid username and password for the EBSCO EDS ftp server, add these into lines 84 and 85 and run the script again to ensure that it transfers correctly.

Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
No user for EBSCO, but functionally seems Ok
Why not just use OAI-PMH to export to EBSCO? That's what we use for the majority of our clients that use EDS.
We could have used OAI, but there is one major limitation: EBSCO can't handle the deletions. Although we tag records as "deleted" in the OAI XML, EBSCO don't/can't use that. They asked us to set position 5 of the leader (000) to "d". However, Koha doesn't do this automatically, so it would mean manually editing the record each time, which is impractical. I guess the other option would be to write another patch to set position 5 on deletion. So, unless you know another way of handling the deletions, the export script is still our preferred method.
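For illustration, a minimal sketch (not part of the patch) of what setting leader position 5 to "d" could look like with MARC::Record; the mark_deleted helper name is hypothetical:

#!/usr/bin/perl
# Hypothetical sketch: flag a MARC record as deleted by setting
# leader position 5 (record status) to 'd', as EBSCO requested.
use strict;
use warnings;
use MARC::Record;

sub mark_deleted {
    my ($record) = @_;
    my $leader = $record->leader();
    substr( $leader, 5, 1, 'd' );    # position 5 = record status
    $record->leader($leader);
    return $record;
}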
(In reply to David Roberts from comment #10)
> We could have used OAI, but there is one major limitation: EBSCO can't
> handle the deletions. Although we tag records as "deleted" in the OAI XML,
> EBSCO don't/can't use that. They asked us to set position 5 of the leader
> (000) to "d". However, Koha doesn't do this automatically, so it would
> mean manually editing the record each time, which is impractical. I guess
> the other option would be to write another patch to set position 5 on
> deletion. So, unless you know another way of handling the deletions, the
> export script is still our preferred method.

That's an interesting point! We've certainly encountered that obstacle with EBSCO.

However, how are deletions handled with your export script? It looks to me like you're just exporting the full set of bib records on each export, which would be functionally equivalent to them re-harvesting the whole collection using OAI-PMH. That said, it would be much less intensive to generate one export to FTP than requiring EBSCO to make X number of HTTP requests to get the same result.
Comment on attachment 103711 [details] [review]
Bug 25246: Add script to export bib data to EBSCO EDS

Review of attachment 103711 [details] [review]:
-----------------------------------------------------------------

Looking at this overall, maybe it should be changed from "export2ebsco.pl" to "export2ftp.pl" or "export_to_ftp.pl" and just made a bit more generalized. Not everyone uses EBSCO, but I'm sure many people use FTP for metadata consumers like national libraries and such.

::: misc/export2ebsco.pl
@@ +1,1 @@
> +#!/usr/bin/env perl

This should be #!/usr/bin/perl

@@ +24,5 @@
> +use C4::Context;
> +use C4::Auth;
> +use C4::Output;
> +use C4::Biblio;    # GetMarcBiblio GetXmlBiblio
> +use C4::Koha;      # GetItemTypes

Many of these modules appear to be unused?

@@ +49,5 @@
> +if ( not $result or $want_help ) {
> +    usage();
> +}
> +
> +unless ( chdir '/tmp' ) {

It would be good to use a configurable directory, since /tmp might be very small on some systems and the export very large.

@@ +64,5 @@
> +    my $export_filename = shift;
> +    my $dbh = C4::Context->dbh;
> +
> +    my $query =
> +#        'SELECT distinct biblioitems.biblionumber FROM biblioitems WHERE biblionumber >0 ';

This comment should be removed.

@@ +65,5 @@
> +    my $dbh = C4::Context->dbh;
> +
> +    my $query =
> +#        'SELECT distinct biblioitems.biblionumber FROM biblioitems WHERE biblionumber >0 ';
> +        'SELECT distinct biblioitems.biblionumber FROM biblioitems LEFT JOIN items USING (biblioitemnumber) WHERE biblioitems.biblionumber >0 ';

The WHERE clause seems odd to include, but I don't see any problem with it. I'm guessing the author sometimes has biblios with negative biblionumbers?

@@ +74,5 @@
> +    open my $fh, '>:encoding(utf8)', $export_filename
> +        or croak "Cannot open $export_filename : $!";
> +
> +    while ( my ($biblionumber) = $sth->fetchrow_array ) {
> +#        my $marc_record = GetMarcBiblio($biblionumber, 1);

This comment should be removed.

@@ +91,5 @@
> +    my @abbr = qw( Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec );
> +    my @tm = localtime();
> +    my $month = $tm[4];
> +    my $year = $tm[5] - 100;
> +    my $name = 'INSERT FILE NAME HERE' . $abbr[$month] . $year;

The filename needs to be provided by a CLI option or configuration file.

@@ +101,5 @@
> +}
> +
> +sub transfer_file {
> +    my $marc_file = shift;
> +    my $remote = 'ftp.epnet.com';

Probably best not to hard-code this either, since hostnames are prone to change.

@@ +103,5 @@
> +sub transfer_file {
> +    my $marc_file = shift;
> +    my $remote = 'ftp.epnet.com';
> +    my $username = q(INSERT USERNAME HERE);
> +    my $password = q(INSERT PASSWORD HERE);

The username and password need to be provided by a CLI option or configuration file (the latter being better for security).
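To make the hard-coded values above configurable, something like this could work (a rough sketch only; the option names are illustrative and not part of the patch):

#!/usr/bin/perl
# Hypothetical sketch: replace the hard-coded filename, directory,
# host, and credentials with CLI options via Getopt::Long.
use strict;
use warnings;
use Getopt::Long;

my ( $filename, $directory, $host, $username, $password );
GetOptions(
    'filename=s'  => \$filename,
    'directory=s' => \$directory,
    'host=s'      => \$host,
    'username=s'  => \$username,
    'password=s'  => \$password,
) or die "Error in command line arguments\n";

$directory ||= '/tmp';    # keep the script's current default
die "--filename is required\n" unless $filename;

chdir $directory or die "Cannot chdir to $directory: $!";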
(In reply to David Cook from comment #11)
> (In reply to David Roberts from comment #10)
> > We could have used OAI, but there is one major limitation: EBSCO can't
> > handle the deletions. Although we tag records as "deleted" in the OAI
> > XML, EBSCO don't/can't use that. They asked us to set position 5 of the
> > leader (000) to "d". However, Koha doesn't do this automatically, so it
> > would mean manually editing the record each time, which is impractical.
> > I guess the other option would be to write another patch to set position
> > 5 on deletion. So, unless you know another way of handling the
> > deletions, the export script is still our preferred method.
>
> That's an interesting point! We've certainly encountered that obstacle
> with EBSCO.
>
> However, how are deletions handled with your export script? It looks to
> me like you're just exporting the full set of bib records on each export,
> which would be functionally equivalent to them re-harvesting the whole
> collection using OAI-PMH. That said, it would be much less intensive to
> generate one export to FTP than requiring EBSCO to make X number of HTTP
> requests to get the same result.

We send the full export each month, and EBSCO are happy to delete the current data that they have and replace it with the new export.
Going to mark this one as Failed QA for now due to my in-line comments, as I imagine the QA team would agree. If you disagree, feel free to mark back as Signed Off, and I'll leave my comments at that.
Hi David

No worries - I'm working my way through your suggestions! :-)
Looks to me like EBSCO support OAI-PMH delete operations:
https://connect.ebsco.com/s/article/Using-OAI-PMH-to-Create-and-Maintain-Records-in-your-EBSCO-Discovery-Service-EDS-Custom-Catalog-Database?language=en_US

So, is it Koha's implementation of OAI that's lacking? I've really not dabbled in that area of Koha at all so far.
(In reply to Martin Renvoize from comment #16)
> Looks to me like EBSCO support OAI-PMH delete operations:
> https://connect.ebsco.com/s/article/Using-OAI-PMH-to-Create-and-Maintain-Records-in-your-EBSCO-Discovery-Service-EDS-Custom-Catalog-Database?language=en_US
>
> So, is it Koha's implementation of OAI that's lacking? I've really not
> dabbled in that area of Koha at all so far.

Koha can publish deletions with OAI and has been able to do so for a while, but I think we improved on it. Some bugs might have prevented use in the past.
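For reference, a deleted record in an OAI-PMH response is just a header carrying status="deleted" (hand-written example; the identifier and datestamp are invented):

<record>
  <header status="deleted">
    <identifier>oai:koha.example.org:123</identifier>
    <datestamp>2020-05-01T00:00:00Z</datestamp>
  </header>
</record>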
(In reply to Martin Renvoize from comment #16)
> Looks to me like EBSCO support OAI-PMH delete operations:
> https://connect.ebsco.com/s/article/Using-OAI-PMH-to-Create-and-Maintain-Records-in-your-EBSCO-Discovery-Service-EDS-Custom-Catalog-Database?language=en_US

That's really interesting. The date on that article is May 2020, so perhaps they've updated their harvester capabilities. For several years now, I've been told by EBSCO that they don't support OAI-PMH deleted records. I can double-check that with my EBSCO contacts in light of that article.

> So, is it Koha's implementation of OAI that's lacking? I've really not
> dabbled in that area of Koha at all so far.

I don't think so. I think it's been EBSCO's harvester that hasn't been up to the task of handling deleted records. I've been using OAI-PMH with EBSCO for years now, and overall it works pretty well.
You can use export_records.pl: https://git.koha-community.org/Koha-community/Koha/src/branch/master/misc/export_records.pl and then FTP the export via a bash script.
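As a rough sketch of that approach (the path, host, and credentials are placeholders; check export_records.pl --help for the options your version supports):

#!/usr/bin/perl
# Hypothetical wrapper: export bibs with Koha's export_records.pl,
# then upload the file with Net::FTP, as the original script did.
use strict;
use warnings;
use Net::FTP;

my $export = '/tmp/bibs.mrc';
system( '/usr/share/koha/bin/export_records.pl',
    '--format=iso2709', '--record-type=bibs', "--filename=$export" ) == 0
    or die "export_records.pl failed: $?";

my $ftp = Net::FTP->new( 'ftp.example.com', Passive => 1 )
    or die "Cannot connect: $@";
$ftp->login( 'user', 'secret' ) or die 'Login failed: ', $ftp->message;
$ftp->binary();
$ftp->put($export) or die 'Put failed: ', $ftp->message;
$ftp->quit();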
*** This bug has been marked as a duplicate of bug 36766 ***