I have experienced this problem for a while now in different versions, but I am not quite sure what's going wrong:

- Add a record with diacritics, such as the German umlauts äüö, to your cart
- Download the record from the cart in MARC format
- Import the record into Koha using the staged MARC import
- Verify that the MARC preview doesn't work in the staged MARC import table
- Verify that the umlauts didn't import correctly

When using the 'import record' feature in Rancor, the diacritics also appear broken, fields containing diacritics get cut off, and other oddities happen.
This seems to work for me: I exported a record with 245$a "äüö ❤ ✓" from the cart and then reimported it (do not look for matching records, add incoming record, add items). The title appears OK during the whole import process. Do you mean you do not get this problem when exporting using the export records tool?
Hm, no. I get this when I export using the MARC option from the download menu. The attachment to the email (send cart) is fine. Which encoding did you pick for importing?
From /cgi-bin/koha/basket/basket.pl?bib_list=1/ I used Download > iso2709.
Then I imported with the default character encoding option: UTF-8.
I tested this on master before filing, but will test again and try to attach the exported files.
Any news here?
Sorry, not yet!
Lowering the severity as it's not confirmed.
I can no longer reproduce this issue on master (tested in kohadevbox).
I am still seeing this issue, tested in 18.11.

To test:
- Create a record using umlauts öüä
- Put the record in your cart in the OPAC
- Use the Download button and MARC as format
- Try to reimport the record, or use MarcEdit to create a mrk
- Verify the umlauts don't appear correctly

Example: Deutsche Übersetzung turns into: Deutsche {DC}bersetzung
Still ok for me on 18.11.x following comments 1 and 3.
% marcdump ~/Downloads/cart.iso2709 | grep 245
/home/jonathan/Downloads/cart.iso2709
245 10 _aäüö ❤ ✓
I have no idea what's going on there... but it doesn't work for me. :(
Could you attach the record here and tell us which options you chose when importing it?
Also on which screen the display starts to be incorrect.
I cannot recreate this problem in master.
Closing again, please reopen (and attach the file) if still valid.
I'd really like to get behind this issue. I just retested on our Koha 18.11.16 and we still have the issue. :(

Our test installation is publicly accessible, but I don't want to post the link here. Joubu, maybe I could talk you through it?

For matching I just selected the file and changed none of the other settings (encoding defaults to UTF-8).

We do have one local change on that file, as we chose not to export items:

 my $record = GetMarcBiblio({
     biblionumber => $biblio,
-    embed_items => 1,
+    embed_items => 0,

But that should not make a difference in encoding? Attaching sample records.
Created attachment 105333 [details]
Sample records downloaded from cart
Given my bad experience the other day trying to import records converted from GB2312 to UTF-8 into Koha, I'm extra interested in this. Maybe it's a related topic.

At a glance, those sample records look fine both in Latin-1 and UTF-8. MarcEdit can convert the ISO MARC into its MRK format, but I'm failing to convert it from ISO MARC to MARCXML.

When I try to read your sample records as UTF-8 using MARC::File::USMARC, I see the following error:

UTF-8 "\xFC" does not map to Unicode

Using "xxd cart.iso2709", I see that the "fc" byte is the ü in über and für. Ah, and FC is ü in Latin-1 encoding, whereas in UTF-8 it's C3 BC. So it sounds like Koha is exporting as Latin-1 but trying to import as UTF-8, and that's where it's falling over? Needs more investigating, but that's the problem with your sample records, I'd say.
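To illustrate the byte difference described above, here is a minimal Perl sketch (standalone demo code, not from Koha) showing how the same "ü" encodes under each charset:

    use strict;
    use warnings;
    use Encode qw(encode);

    my $u = "\x{FC}";    # "ü" is Unicode code point U+00FC
    printf "Latin-1: %v02x\n", encode( 'ISO-8859-1', $u );    # fc
    printf "UTF-8:   %v02x\n", encode( 'UTF-8',      $u );    # c3.bc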
Oh, Katrin, try importing it using /cgi-bin/koha/tools/stage-marc-import.pl with a Character encoding of "ISO 8859-1". I bet it'll work.
Hi David, it looks like you are correct. Using ISO 8859-1 encoding when staging, the MARC imports correctly. So... why do we not export in UTF-8 for the cart? I think we do in other places; for example, the MARC attached to the cart email works correctly.
(In reply to Katrin Fischer from comment #21)
> Hi David, it looks like you are correct. Using ISO 8859-1 encoding when
> staging, the MARC imports correctly. So... why do we not export in UTF-8
> for the cart? I think we do in other places; for example, the MARC attached
> to the cart email works correctly.

That's a very good question.

Which Koha version are you using? This is the "Download" option in the cart? (OPAC or Staff Client?)

I'll take a quick look...
Ok, it just looks like a bug to me.

opac/opac-downloadcart.pl doesn't instruct Perl how to output the bytes, and I think Perl defaults to N8CS/Latin-1.

With opac/opac-sendbasket.pl, we explicitly encode the characters as UTF-8 bytes.
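A small self-contained sketch of that difference (hypothetical demo code using in-memory filehandles, not the actual Koha scripts):

    use strict;
    use warnings;
    use Encode qw(encode);

    my $title = "\x{FC}ber";    # "über" as a Perl character string

    # No encoding layer: code points <= 0xFF go out as single Latin-1
    # bytes, mirroring what opac-downloadcart.pl was doing.
    open my $fh1, '>', \my $implicit or die $!;
    print {$fh1} $title;
    close $fh1;
    printf "implicit: %v02x\n", $implicit;    # fc.62.65.72

    # Explicit encode before printing, as opac-sendbasket.pl does.
    open my $fh2, '>', \my $explicit or die $!;
    print {$fh2} encode( 'UTF-8', $title );
    close $fh2;
    printf "explicit: %v02x\n", $explicit;    # c3.bc.62.65.72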
(In reply to David Cook from comment #23)
> Ok, it just looks like a bug to me.
>
> opac/opac-downloadcart.pl doesn't instruct Perl how to output the bytes,
> and I think Perl defaults to N8CS/Latin-1.
>
> With opac/opac-sendbasket.pl, we explicitly encode the characters as UTF-8
> bytes.

That being said... I can't reproduce the problem. In 19.11, I added some Chinese to a test record, and I was able to download it from the Staff Client and OPAC carts, and Notepad++ says they're both UTF-8 and they render correctly...

I'll try with ü specifically...
Ha! If I use ü instead of 我, it outputs as Latin-1 and ü is a FC byte instead of the C3 BC bytes.
(In reply to David Cook from comment #25)
> Ha! If I use ü instead of 我, it outputs as Latin-1 and ü is a FC byte
> instead of the C3 BC bytes.

Using 我, I get the correct bytes: e6 88 91.

Now if I add ü after 我, I am guessing that whatever automatic character encoding is happening (perhaps on the browser end?) will encode ü as UTF-8 instead of Latin-1.
(In reply to David Cook from comment #26)
> Now if I add ü after 我, I am guessing that whatever automatic character
> encoding is happening (perhaps on the browser end?) will encode ü as UTF-8
> instead of Latin-1.

And that theory was correct. The byte sequence is: e6 88 91 20 c3 bc.
Plot twist: encoding the character output as UTF-8 doesn't seem to make a difference. At least not when using Chrome, which I'm guessing is automatically transcoding the output. I'm going to see what wget does...
Ok, I think the key is in the Content-Type. When I download the cart, I'm seeing this in the response headers:

Content-transfer-encoding: binary
Content-Type: application/octet-stream; charset=ISO-8859-1
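For context, headers like these would typically be produced with CGI.pm along the following lines; this is a hedged sketch, and the actual Koha code may build the response differently:

    use CGI;
    my $query = CGI->new;
    # -charset appends "; charset=..." to the Content-Type, -attachment
    # sets Content-Disposition, and unrecognized named parameters such as
    # -Content_Transfer_Encoding are passed through as literal headers.
    print $query->header(
        -type                      => 'application/octet-stream',
        -charset                   => 'ISO-8859-1',
        -attachment                => 'cart.iso2709',
        -Content_Transfer_Encoding => 'binary',
    );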
Using the following doesn't help either:

Content-Type: application/octet-stream; charset=utf-8

I notice the console is saying "Resource interpreted as Document but transferred with MIME type application/octet-stream", so I think that's what's doing it.

More to come...
In the end, I was able to solve this by just UTF-8 encoding the output.
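In outline, the fix has this shape; a simplified sketch assuming Koha's C4::Biblio (GetMarcBiblio appears in the local change quoted earlier), not the verbatim patch:

    use Encode qw(encode);
    use C4::Biblio qw(GetMarcBiblio);

    my $output = q{};
    for my $biblionumber (@bib_list) {
        my $record = GetMarcBiblio( { biblionumber => $biblionumber } );
        next unless $record;
        # Encode each record's ISO2709 serialization to UTF-8 bytes
        # explicitly instead of letting Perl default to Latin-1 output.
        $output .= encode( 'UTF-8', $record->as_usmarc() );
    }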
Created attachment 105341 [details] [review]
Bug 17842: UTF-8 encode ISO2709 MARC download from cart

The cart was outputting ISO2709 MARC records with Latin-1 encoding. Records containing non-Latin-1 characters were automatically re-encoded as UTF-8 by browsers, which led to inconsistent character encodings for downloaded MARC files.

This patch explicitly encodes ISO2709 MARC characters from the cart download as UTF-8 encoded bytes, which resolves the problem.

Test Plan:
0) Don't apply patch
1) Create bib record with only ASCII characters
2) Add a ü character to the title
3) Save bib record
4) Download bib record from cart (OPAC and staff client)
5) Using xxd or some other program, note that the ü is represented by a FC byte (Latin-1 encoded)
6) Apply the patch
7) Download bib record from cart (OPAC and staff client)
8) Using xxd or some other program, note that the ü is represented by C3 BC bytes (UTF-8 encoded)
9) Success

(Note that you could potentially use Notepad++ or some other program to open the downloaded file and just note the encoding that it finds. You could also try "chardetect" instead. Lots of options for figuring out the encoding.)
I can't mark this as "Needs Signoff"...
Thx David, and that finally also explains why Jonathan could never reproduce the issue - he had added other characters that were outside of Latin-1. It was really bugging me that I seemed to be the only one able to see this 'ghost bug'.
The second record containing 'über' imports nicely, but I see different encoding errors in the first one, which also has 2 umlauts in 245 (öä). Before, those showed as the black diamond replacement characters with a question mark (�). Now I see: Ãber die Auflösung...
Created attachment 105344 [details]
New export file

Created attachment 105345 [details]
Sample records correctly encoded
So interesting. It looks like the UTF-8 encoded data has been re-encoded as Latin-1. I'm guessing this is because of the browser, and I have an idea for something that might help, although I want to try to reproduce your issue first...
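A hypothetical Perl illustration of that kind of double encoding, to show why "Über" can end up displaying as "Ãber" (demo code, not the Koha bug itself):

    use strict;
    use warnings;
    use Encode qw(encode decode);

    my $title      = "\x{DC}ber";                          # "Über"
    my $utf8_once  = encode( 'UTF-8', $title );            # c3.9c.62.65.72
    my $as_latin1  = decode( 'ISO-8859-1', $utf8_once );   # bytes reread as characters
    my $utf8_twice = encode( 'UTF-8', $as_latin1 );        # c3.83.c2.9c.62.65.72

    printf "once:  %v02x\n", $utf8_once;
    printf "twice: %v02x\n", $utf8_twice;
    # Decoded back as UTF-8, the doubled bytes give "Ã" (U+00C3) followed
    # by U+009C, an invisible control character - hence "Ãber".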
I just uploaded "Sample records correctly encoded" into my 19.11 system (with the patch), and I was able to export it as UTF-8 (according to Notepad++ and the hex inspected using xxd).

So I can't reproduce the double-encoding issue.

I did my test in Chrome 81 on Windows 10.

About to test on koha-testing-docker with the same test plan.
(In reply to David Cook from comment #39)
> I just uploaded "Sample records correctly encoded" into my 19.11 system
> (with the patch), and I was able to export it as UTF-8 (according to
> Notepad++ and the hex inspected using xxd).
>
> So I can't reproduce the double-encoding issue.
>
> I did my test in Chrome 81 on Windows 10.
>
> About to test on koha-testing-docker with the same test plan.

That was also using CGI rather than Plack.
koha-testing-docker Test Plan:
0) Do not apply patch
1) Upload "Sample records correctly encoded" as UTF-8 MARC (with no items)
2) Add the records to the Staff Client cart
3) Select both records and "Download" > "iso2709"
4) Note that the exported file is ISO 8859-1/Latin-1 encoded
5) Apply the patch
6) restart_all
7) Reload the Staff Client cart
8) Select both records and "Download" > "iso2709"

And yep... I reproduce Katrin's double-encoding problem. Interestingly, it only affects the 1st record in the cart and not the 2nd record. The importance of the order becomes obvious when I reverse the order of the records in the cart: it's not the content of the records that causes the encoding problem, but rather their place in the sequence.
Very silly coding error on my part, which would only happen if you were exporting more than 1 record. I'll have a revised patch in a minute.
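A hypothetical reconstruction of the kind of loop mistake that mangles every record except the last one (not necessarily the actual diff); this runnable demo uses plain strings as stand-ins for as_usmarc() output:

    use strict;
    use warnings;
    use Encode qw(encode);

    my @records = ( "\x{F6}\x{E4}", "\x{FC}ber" );    # stand-ins for records

    # Buggy: re-encoding the accumulated buffer on every iteration
    # double-encodes everything appended in earlier iterations, so only
    # the last record survives intact.
    my $buggy = q{};
    $buggy = encode( 'UTF-8', $buggy . $_ ) for @records;
    printf "buggy: %v02x\n", $buggy;    # c3.83.c2.b6... (first record doubled)

    # Fixed: encode each record exactly once as it is appended.
    my $fixed = q{};
    $fixed .= encode( 'UTF-8', $_ ) for @records;
    printf "fixed: %v02x\n", $fixed;    # c3.b6.c3.a4.c3.bc.62.65.72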
Created attachment 105346 [details] [review]
Bug 17842: UTF-8 encode ISO2709 MARC download from cart

The cart was outputting ISO2709 MARC records with Latin-1 encoding. Records containing non-Latin-1 characters were automatically re-encoded as UTF-8 by browsers, which led to inconsistent character encodings for downloaded MARC files.

This patch explicitly encodes ISO2709 MARC characters from the cart download as UTF-8 encoded bytes, which resolves the problem.

Test Plan:
0) Don't apply patch
1) Create bib record with only ASCII characters
2) Add a ü character to the title
3) Save bib record
4) Download bib record from cart (OPAC and staff client)
5) Using xxd or some other program, note that the ü is represented by a FC byte (Latin-1 encoded)
6) Apply the patch
7) Download bib record from cart (OPAC and staff client)
8) Using xxd or some other program, note that the ü is represented by C3 BC bytes (UTF-8 encoded)
9) Success

(Note that you could potentially use Notepad++ or some other program to open the downloaded file and just note the encoding that it finds. You could also try "chardetect" instead. Lots of options for figuring out the encoding.)
Created attachment 105440 [details] [review]
Bug 17842: UTF-8 encode ISO2709 MARC download from cart

The cart was outputting ISO2709 MARC records with Latin-1 encoding. Records containing non-Latin-1 characters were automatically re-encoded as UTF-8 by browsers, which led to inconsistent character encodings for downloaded MARC files.

This patch explicitly encodes ISO2709 MARC characters from the cart download as UTF-8 encoded bytes, which resolves the problem.

Test Plan:
0) Don't apply patch
1) Create bib record with only ASCII characters
2) Add a ü character to the title
3) Save bib record
4) Download bib record from cart (OPAC and staff client)
5) Using xxd or some other program, note that the ü is represented by a FC byte (Latin-1 encoded)
6) Apply the patch
7) Download bib record from cart (OPAC and staff client)
8) Using xxd or some other program, note that the ü is represented by C3 BC bytes (UTF-8 encoded)
9) Success

(Note that you could potentially use Notepad++ or some other program to open the downloaded file and just note the encoding that it finds. You could also try "chardetect" instead. Lots of options for figuring out the encoding.)

Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
It works! :D
I think the whole block could be:

$output .= encode("UTF-8", $record->as_usmarc()) // q{};

From Encode::encode POD:
"If the $string is undef, then undef is returned."
(In reply to Jonathan Druart from comment #46)
> I think the whole block could be:
>
> $output .= encode("UTF-8", $record->as_usmarc()) // q{};
>
> From Encode::encode POD:
> "If the $string is undef, then undef is returned."

Personally, I don't like putting method output directly into another function call, as it can make exception handling harder. But your suggestion does look more elegant and would be less code to maintain. I'd be happy with either.
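For contrast, a sketch of the more defensive style David describes, where the intermediate variable makes per-record failures easier to trap and log:

    my $usmarc = $record->as_usmarc();
    if ( defined $usmarc ) {
        $output .= encode( 'UTF-8', $usmarc );
    }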
Created attachment 105481 [details] [review]
Bug 17842: Simplify the code

There is no need for all the conditions. From Encode::encode POD:
"If the $string is undef, then undef is returned."
Created attachment 105568 [details] [review]
Bug 17842: UTF-8 encode ISO2709 MARC download from cart

The cart was outputting ISO2709 MARC records with Latin-1 encoding. Records containing non-Latin-1 characters were automatically re-encoded as UTF-8 by browsers, which led to inconsistent character encodings for downloaded MARC files.

This patch explicitly encodes ISO2709 MARC characters from the cart download as UTF-8 encoded bytes, which resolves the problem.

Test Plan:
0) Don't apply patch
1) Create bib record with only ASCII characters
2) Add a ü character to the title
3) Save bib record
4) Download bib record from cart (OPAC and staff client)
5) Using xxd or some other program, note that the ü is represented by a FC byte (Latin-1 encoded)
6) Apply the patch
7) Download bib record from cart (OPAC and staff client)
8) Using xxd or some other program, note that the ü is represented by C3 BC bytes (UTF-8 encoded)
9) Success

(Note that you could potentially use Notepad++ or some other program to open the downloaded file and just note the encoding that it finds. You could also try "chardetect" instead. Lots of options for figuring out the encoding.)

Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Created attachment 105569 [details] [review]
Bug 17842: Simplify the code

There is no need for all the conditions. From Encode::encode POD:
"If the $string is undef, then undef is returned."

Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
(In reply to David Cook from comment #47)
> (In reply to Jonathan Druart from comment #46)
> > I think the whole block could be:
> >
> > $output .= encode("UTF-8", $record->as_usmarc()) // q{};
> >
> > From Encode::encode POD:
> > "If the $string is undef, then undef is returned."
>
> Personally, I don't like putting method output directly into another
> function call, as it can make exception handling harder.

I don't like it either, but it should be harmless in this particular case.
Pushed to master for 20.11, thanks to everybody involved!
Backported to 20.05.x for 20.05.01
Backported to 19.11.x for 19.11.07
Backported to 19.05.x branch for 19.05.12