It would be very helpful to have at least a basic XSL template for converting UNIMARC data to MARC21 (USMARC) when searching for and importing data from UNIMARC Z39.50 sources. It seems that there is a generic XSLT available for UNIMARC to MARC conversion (https://github.com/edsd/biblio-metadata/blob/master/UNIMARC2MARC21.xsl), but it obviously needs a few changes to work with Koha.
MarcEdit has a UNIMARC to MARC21 XSL transform included with it. It looks to me as though it is based on the one in comment 1. I contacted the author, Terry Reese, to see whether he would mind sharing it, and he uploaded it to GitHub at https://github.com/reeset/unimarc . I've tried it with the Bibliotheque nationale de France Z39.50 server and it seems to work fine with Koha, at least as a minimal conversion.
I should've said Description instead of comment 1.
Andy, I also noticed this file in the MarcEdit file structure a few days ago and experimented with it (it's identical to the file from my first comment). Although it seems to produce some output with standalone UNIMARC XML files, it needs some changes if it is to be used directly from Koha (in the Z39.50 configuration). Gaetan (from BibLibre) did some initial work on it and then I made some more changes and additions (to handle a couple more tags), but I've noticed that there are some serious encoding issues when importing non-Latin-1 records. Having done many tests, I tend to conclude that it's NOT an issue with the .xsl itself (it produces proper UTF-8 output); it's probably an issue with the Perl modules and/or parts of the Koha code [1] (but I could be wrong). More details soon...
Indeed, I thought the two files were identical when I first looked at them, but they're not quite the same. I was unable to get any output in Koha (other than the LDR field) using the XSLT from edsd. The file from MarcEdit produces usable (but not perfect) output. I, too, noticed a problem with UTF-8 output, but hadn't yet tried to identify the source of the problem, so in a way I'm glad you're seeing the problem as well. I'll attach an image showing the output I get in Koha when doing a Z39.50 search, both with and without the MarcEdit XSLT file in use.
Created attachment 53333 [details] Image showing Z39.50 search results both with and without MarcEdit XSLT applied
Created attachment 53372 [details] XSL for Unimarc -> MARC21 (partial) translation

This file supports the translation of far more tags from UNIMARC to MARC21 than what was previously available on the net, but it is still not complete. (Btw, seeing the complexity of the conversion rules, I doubt it will ever be a 100% complete translation XSL.)
Created attachment 53377 [details] XSL for Unimarc -> MARC21 (partial) translation (minor update)
This updated version of the XSL has better handling of the UNIMARC 100 (MARC21 008) field and supports the translation of far more UNIMARC fields. Almost all of the new code was based on the (almost 15-year-old!) conversion guidelines by LOC, so some fields/subfields might now be deprecated. Even worse, important new fields might be missing from the translation. (The 856 field was such an example; by reading more recent docs, I think I translated it properly.)

How to test:
- Copy the file to koha-tmpl/intranet-tmpl/prog/en/xslt/
- Put "UNIMARC2MARC21.xsl" in the "XSLT File(s) for transforming results:" text field of a UNIMARC Z39.50 source that you wish to translate to MARC21
- Perform a test search against the Z39.50 source.
- Results /should/ be displayed properly in the Z39.50 result list.
- If the original Z39.50 source has non-UTF (i.e. Latin) encoding, you may even try to import the records into Koha.

Warnings:
- The work is nowhere near complete, but it's better (and more complete) than what was previously available online.
- My knowledge of UNIMARC is very, very limited.
- The XSL was not thoroughly tested and the time I had to spend on it was limited. Expect to find errors.

Known issues: Although the XSL /seems/ to work fine for me, there are certain encoding issues when using it to IMPORT UTF-8 UNIMARC records into Koha. Displaying UTF-8 records in the Koha Z39.50 result list is OK, but importing them into cataloguing is not. It's possible that this is an issue in Koha code and not related to this XSL. (I will open a BZ case soon.)
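For anyone curious about the kind of mapping the stylesheet performs internally, a single field template looks roughly like the sketch below (UNIMARC 200 $a/$e mapped to MARC21 245 $a/$b, with indicators hard-coded and all other subfields ignored). This is only an illustration of the pattern, not a verbatim excerpt from the attached file; the "marc" prefix is assumed to be bound to the MARCXML (MARC21 slim) namespace in the stylesheet header:

------------
<!-- Illustrative sketch only: map UNIMARC 200 (Title and Statement of
     Responsibility) to MARC21 245. The attached stylesheet handles far
     more subfields and real indicator logic than this. -->
<xsl:template match="marc:datafield[@tag='200']">
  <marc:datafield tag="245" ind1="1" ind2="0">
    <marc:subfield code="a">
      <xsl:value-of select="marc:subfield[@code='a']"/>
    </marc:subfield>
    <!-- UNIMARC 200 $e (other title information) -> MARC21 245 $b -->
    <xsl:if test="marc:subfield[@code='e']">
      <marc:subfield code="b">
        <xsl:value-of select="marc:subfield[@code='e']"/>
      </marc:subfield>
    </xsl:if>
  </marc:datafield>
</xsl:template>
------------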
I've tried installing the XSLT file. When I perform a Z39.50 search, I get a bunch of errors ("Warning: XSLT error on search result 1", etc.), followed by the search results without any transformation applied.
Andy, you are obviously searching BNF. I've added some debugging, and for the title 'vatican' BNF returns, for example, this tag in the raw (UNIMARC) data:

<datafield tag="200" ind1="1" ind2=" ">
  <subfield code="a"><88>Die <89>vaticanische Handschrift der Chronik des Mathias von Neuenburg, von Ludwig Weiland. Vorgelegt in der Sitzung der k. Gesellschaft der Wissenschaften am 28 Mai. 1892</subfield>
  <subfield code="b">Texte imprimé</subfield>
</datafield>

This produces the following error during the transformation:

parser error : StartTag: invalid element name
<subfield code="a"><88>Die <89>vaticanische Handschrift der Chronik des Math
                                                                             ^

This means it's probably an issue with the encoding of the data returned by BNF through yaz. Unfortunately I couldn't use the BNF TOUT-UTF8 database (which returns UTF-8 encoded records) because it timed out. I've tested the XSL with several Greek UNIMARC databases that return UTF-8 records and it works. If you know of any other useful UNIMARC databases (ideally with UTF-8 support), I could give them a try.
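For reference, 0x88 and 0x89 are the non-sorting-text begin/end (NSB/NSE) markers used in UNIMARC records. If they ever arrive as actual characters inside otherwise well-formed UTF-8 MARCXML (rather than as the stray bytes that break the parser here), they are legal XML 1.0 characters and could in principle be stripped inside the stylesheet with something like the line below. This is just a sketch, not something the attached file currently does, and it cannot fix the byte-level parse error shown above:

------------
<!-- Sketch: drop NSB (U+0088) / NSE (U+0089) non-sorting markers from a
     title subfield; only applicable when the record is already well-formed XML. -->
<xsl:value-of select="translate(marc:subfield[@code='a'], '&#x88;&#x89;', '')"/>
------------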
Created attachment 53395 [details] XSL for Unimarc -> MARC21 (partial) translation v2.1 Minor fix in Unimarc 100/MARC 008 field handling
I have tried testing Theodoros' translation XSLT in my Koha installation (ver. 16.05.01.000 on Debian, package install, upgraded, not a clean 16.05 install).

First note: I can't find "koha-tmpl/intranet-tmpl/prog/en/xslt/". The closest candidate was "/usr/share/koha/intranet/htdocs/intranet-tmpl/prog/en/xslt". Is this location correct? I expected that in package installations most paths would be similar.

I proceeded to save UNIMARC2MARC21.xsl in "/usr/share/koha/intranet/htdocs/intranet-tmpl/prog/en/xslt" and set up a Z39.50 target for testing against (z3950.nlg.gr:210/biblios with UNIMARC2MARC21.xsl). Since I noticed that the folder "/usr/share/koha/intranet/htdocs/intranet-tmpl/prog/el-GR/xslt" was also populated, I copied UNIMARC2MARC21.xsl there too.

When performing a search by ISBN I get the following error message on the page "http://admin_address:port/cgi-bin/koha/cataloguing/z3950_search.pl":

"Software error: Wide character in subroutine entry at /usr/share/perl5/MARC/Charset/Table.pm line 96."

Any ideas?
Theodoros, I wanted to let you know that I've tried v2.1 with the BNF TOUT-UTF8 database. It's working fine for me now, that is, I'm getting no errors and the search results are displaying as one would expect. I've tried title and ISBN searches. The only issue I'm running into at this point is with the utf-8 encoding, as you pointed out in comment 3. It looks fine when displayed in the Z39.50 search results, but characters with diacritics, for example, are incorrectly displayed when imported. Thanks for your work on this.
Having upgraded to v16.05.04.000, I'm sorry to report that I still get the "Wide character in subroutine entry at /usr/share/perl5/MARC/Charset/Table.pm line 96." error.

Steps to reproduce the error (in a MARC21 environment):
1. Set up the National Library of Greece as a Z39.50 target, as described in https://kohaprojectgr.wordpress.com/2014/11/04/%CF%80%CF%81%CE%BF%CF%83%CE%B8%CE%AE%CE%BA%CE%B7-%CF%84%CE%BF%CF%85-z39-50-%CF%84%CE%B7%CF%82-%CE%B5%CE%B8%CE%BD%CE%B9%CE%BA%CE%AE%CF%82-%CE%B2%CE%B9%CE%B2%CE%BB%CE%B9%CE%BF%CE%B8%CE%AE%CE%BA%CE%B7/
2. Search for ISBN 978-960-6760-24-2 (or for author "Vian"), using NLG as the sole target and pre-selecting "books" as the result framework. Check that an entry is indeed returned, but not in MARC21 syntax.
3. Edit the NLG bibliographic target, adding "UNIMARC2MARC21.xsl" to the XSLT filter field ("XSLT Αρχείο(α) για μετατροπή αποτελεσμάτων:" in Greek).
4. Perform the same search as above.
5. The error message "Software error: Wide character in subroutine entry at /usr/share/perl5/MARC/Charset/Table.pm line 96. For help, please send mail to the webmaster ([no address given]), giving this error message and the time and date of the error." should appear. The URL is http://x.x.x.x:y/cgi-bin/koha/cataloguing/z3950_search.pl

Any ideas?

Note: "UNIMARC2MARC21.xsl" seems to work fine for me too when used against BNF (host: z3950.bnf.fr, port: 2211, dbname: TOUT-UTF8, username: Z3950, pass: Z3950_BNF, syntax: UNIMARC, encoding: utf8). Fine, that is, except for accented characters, just as Theodoros and Andy have already described.
Someone please help me. I am well versed with computer coding and all. My data are not lost, but I am unable to check in books.
Created attachment 105365 [details] NLG search result example
Created attachment 105366 [details] NLG search settings
Created attachment 105367 [details] Erroneous transition of converted data.
In a fresh installation of Koha (19.11.05.000 on Debian 9), the "Wide character in subroutine entry at /usr/share/perl5/MARC/Charset/Table.pm" error no longer appears when using UNIMARC2MARC21.xsl to convert and import UNIMARC data. I'm sorry to report, though, that the issue described by Theodoros Theodoropoulos on July 13th, 2016 is still present: "Displaying of UTF8 records in the Koha Z39.50 result list is OK, but importing them to cataloging is not". An example is provided in the attached photos.
Created attachment 123175 [details] Search against National Library of Greece, example 1
Created attachment 123176 [details] Search against National Library of Greece, example 2
I wonder if there is any news regarding the error described by Theodoros Theodoropoulos on July 13th, 2016. I've attached two more examples for testing, as well as the settings for the specific Z39.50 target I use.
Created attachment 123177 [details] Search against National Library of Greece, target settings
Dear list,

We also had the issue of not being able to decipher the accented UTF-8 characters from the BNF SRU server in Koha, and we may have found a solution by making a slight modification to the MarcEdit XSLT file (see comment 3 by Theodoros Theodoropoulos).

The leader of a MARCXML record must declare the encoding in position 09 (the 10th character). It has to be "a" for UTF-8 records and blank for non-UTF-8 records:
https://knowledge.exlibrisgroup.com/Voyager/Knowledge_Articles/Determine_the_character_set_of_a_MARC_record

But the XSLT file always leaves it blank. I had to modify the line in the XSLT (see below) that generates positions 08-16 of the leader. Maybe I could push the updated file to a Git repository available to the community?

Best regards,
Franck

------------
Original file:

<xsl:template name="transform-leader">
  <xsl:variable name="leader" select="marc:leader"/>
  <xsl:variable name="leader05" select="translate(substring($leader,06,1), 'o', 'c')"/>
  <xsl:variable name="leader06" select="translate(substring($leader,07,1), 'hmn', 'aor')"/>
  <xsl:variable name="leader07" select="substring($leader,08,1)"/>
  <xsl:variable name="leader08-16" select="'  22     '"/>
  <xsl:variable name="leader17" select="translate(substring($leader,18,1), '23', '87')"/>
  <xsl:variable name="leader18" select="translate(substring($leader,19,1), ' n', 'i ')"/>
  <xsl:variable name="leader19-23" select="' 4500'"/>
------------
Same snippet with the correction:

<xsl:template name="transform-leader">
  <xsl:variable name="leader" select="marc:leader"/>
  <xsl:variable name="leader05" select="translate(substring($leader,06,1), 'o', 'c')"/>
  <xsl:variable name="leader06" select="translate(substring($leader,07,1), 'hmn', 'aor')"/>
  <xsl:variable name="leader07" select="substring($leader,08,1)"/>
  <xsl:variable name="leader08-16" select="' a22     '"/>
  <xsl:variable name="leader17" select="translate(substring($leader,18,1), '23', '87')"/>
  <xsl:variable name="leader18" select="translate(substring($leader,19,1), ' n', 'i ')"/>
  <xsl:variable name="leader19-23" select="' 4500'"/>
------------
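A possible further refinement, only a sketch and not part of the file I modified: instead of always forcing "a", the character coding could in principle be read from UNIMARC field 100 $a where, if I read the UNIMARC manual correctly, character positions 26-29 carry the character set codes and "50" designates ISO 10646 / Unicode. Something along these lines (paths are relative to the record element; the variable names are illustrative only):

------------
<!-- Sketch: derive leader/09 from UNIMARC 100 $a positions 26-27
     (assumes '50' there means ISO 10646 / Unicode). -->
<xsl:variable name="charset" select="substring(marc:datafield[@tag='100']/marc:subfield[@code='a'], 27, 2)"/>
<xsl:variable name="leader09">
  <xsl:choose>
    <xsl:when test="$charset = '50'">a</xsl:when>
    <xsl:otherwise><xsl:text> </xsl:text></xsl:otherwise>
  </xsl:choose>
</xsl:variable>
<!-- Positions 08-16 rebuilt around the derived coding value. -->
<xsl:variable name="leader08-16" select="concat(' ', $leader09, '22     ')"/>
------------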
I am very glad to attest that after applying Franck Theeten's modification, records from the National Library of Greece Z39.50 service can now be imported into Koha. THANK YOU Franck!