Bug 39327 - UTF-8 BOM missing from label creator CSV and some UTF-8 output broken
Summary: UTF-8 BOM missing from label creator CSV and some UTF-8 output broken
Status: In Discussion
Alias: None
Product: Koha
Classification: Unclassified
Component: Label/patron card printing
Version: Main
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: David Cook
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2025-03-14 05:52 UTC by David Cook
Modified: 2025-03-17 23:21 UTC
CC List: 3 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 39327: Add BOM to label CSV output and set output layer to UTF-8 (1.84 KB, patch)
2025-03-14 06:14 UTC, David Cook
Bug 39327: Add BOM to label CSV output and set output layer to UTF-8 (1.89 KB, patch)
2025-03-15 00:56 UTC, Phil Ringnalda

Description David Cook 2025-03-14 05:52:33 UTC
The CSV created by the label creator can't be easily opened by Excel as UTF-8 data.

If we add the UTF-8 BOM, then it will open it as UTF-8 data. Hurray!

I've also noticed that in some cases UTF-8 data in Koha is exported as what appears to be Latin-1. The problem seems to be that the Text::CSV_XS library doesn't notice the data is UTF-8. If we explicitly set the binmode to UTF-8, we can fix that.
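
For illustration, here is a minimal standalone sketch of the two fixes (this is not the actual patch, which touches the label export script; it just shows the general technique with Text::CSV_XS):

#!/usr/bin/perl
use strict;
use warnings;
use utf8;    # this source contains UTF-8 literals
use Text::CSV_XS;

# 1. Explicit UTF-8 output layer, so Text::CSV_XS always writes UTF-8 bytes
#    even when Perl's internal flags suggest a string might not be UTF-8.
binmode( STDOUT, ':encoding(UTF-8)' );

# 2. Byte-order mark first, so Excel detects the file as UTF-8.
print "\x{FEFF}";

my $csv = Text::CSV_XS->new( { binary => 1, eol => "\n" } );
$csv->print( \*STDOUT, [ 'author',        'barcode' ] );
$csv->print( \*STDOUT, [ 'Chödrön, Pema', '12345'   ] );
$csv->print( \*STDOUT, [ '我爱你',          '12346'   ] );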
Comment 1 David Cook 2025-03-14 06:14:50 UTC
Created attachment 179301 [details] [review]
Bug 39327: Add BOM to label CSV output and set output layer to UTF-8

This change adds a BOM to the label CSV output, so that
Excel properly recognizes the data as UTF-8.

It also sets the output layer to UTF-8, so that the Text::CSV_XS library
always outputs UTF-8 even when it thinks a string might not be (see
below).

To reproduce:
1. Make a cataloguing record with the minimum requirements filled in
2. Set 100$a to Chödrön, Pema
3. Add an item with a barcode
4. Repeat this process but use 我爱你 for the 100$a

5. Create a new label batch
6. Add the item to the batch
7. Export using the defaults
8. Click "Download as CSV"
9. Note that Chödrön appears correctly but the Chinese does not

Test plan:
1. Apply the patch and koha-plack --restart kohadev
2. Do the "To reproduce" plan
3. Note that both Chödrön and 我爱你 appear correctly
4. If you were to inspect the bytes, you'd see that the output is UTF-8
encoded after the patch, while before the patch Chödrön is Latin-1 encoded
(i.e. ö is C3 B6 when UTF-8 encoded and F6 in Latin-1; one way to check the
bytes is sketched below)
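
One quick way to check the bytes of the downloaded file (a hexdump works just as well; the filename is simply whatever the download was saved as):

# Prints each line of the CSV as hex bytes: look for "c3 b6" (UTF-8 ö)
# versus a lone "f6" (Latin-1 ö).
perl -ne 'print join(" ", map { sprintf "%02x", ord } split //), "\n"' labels.csv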
Comment 2 Baptiste Wojtkowski (bwoj) 2025-03-14 12:51:31 UTC Comment hidden (obsolete)
Comment 3 Phil Ringnalda 2025-03-15 00:56:59 UTC
Created attachment 179362 [details] [review]
Bug 39327: Add BOM to label CSV output and set output layer to UTF-8

This change adds a BOM to the label CSV output, so that
Excel properly recognizes the data as UTF-8.

It also sets the output layer to UTF-8, so that the Text::CSV_XS library
always outputs UTF-8 even when it thinks a string might not be (see
below).

To reproduce:
1. Make a cataloguing record with the minimum requirements filled in
2. Set 100$a to Chödrön, Pema
3. Add an item with a barcode
4. Repeat this process but use 我爱你 for the 100$a

5. Create a new label batch
6. Add the item to the batch
7. Export using the defaults
8. Click "Download as CSV"
9. Note that Chödrön appears correctly but the Chinese does not

Test plan:
1. Apply the patch and koha-plack --restart kohadev
2. Do the "To reproduce" plan
3. Note that both Chödrön and 我爱你 appear correctly
4. If you were to inspect the bytes, you'd see that the output is UTF-8
encoded after the patch, while before the patch Chödrön is Latin-1 encoded
(i.e. ö is C3 B6 when UTF-8 encoded and F6 in Latin-1)

Signed-off-by: Phil Ringnalda <phil@chetcolibrary.org>
Comment 4 Michał 2025-03-17 09:57:04 UTC
I am actually really unsure about this. The BOM is a legacy feature and should not be used for UTF-8 (the standard says it is not recommended). If Excel cannot recognize a UTF-8 file properly without the user manually selecting the encoding, then sadly that is a problem of the legacy software, but I don't think we should break the file format just to cater to it. LibreOffice and Google Docs, for example, recognize these files as UTF-8 right away, and so does Windows Notepad (it didn't always, but around six years ago it was changed so that UTF-8 without a BOM is the default).

And on the other hand, people already using these files in existing software may face breakage: software that isn't coded to explicitly recognize a BOM at the beginning of a UTF-8 file (and that is uncommon) might in turn break, ending up with garbage data at the beginning of the file.

The other reason I'm wary of catering to one particular piece of software (Excel) is that Excel's default settings for CSV import depend on the system locale, including the default separator (among other broken behaviour). Indeed, in my locale comma-separated CSVs open with every line in the first column instead of being split into columns, unless the user goes through the import wizard and sets the options anyway. So the file will open differently on different Windows computers regardless. On top of that, I found reports that Excel 2007 ignored the UTF-8 BOM entirely (it has reportedly only been recognized since Excel 2013), so this isn't even a complete solution for that software either: https://stackoverflow.com/a/40807218/4470653

So I would contest this change and ask that, if people really want it, it be either a separate export option or some kind of syspref... (e.g. "CSV (UTF-8)" vs "CSV (UTF-8 with BOM)", akin to how most editors indicate encodings).

Or, if we want to please Excel specifically, perhaps there should be some kind of XLSX option instead.

> Although in some cases I've noticed UTF-8 data in Koha is being exported seemingly as Latin-1 data.

True! I noticed it too when testing, for example when exporting the results of a report on patrons with most checkouts to CSV. That part should be fixed for sure.
Comment 5 Katrin Fischer 2025-03-17 10:04:33 UTC
Could this be a system preference or switch for now, to allow us to test the change over a longer period of time and get feedback?

The "broken" umlauts in export are an issue that we have had from the beginning and using the data import feature works, but adds many clicks and is training intensive. This stays a major annoyance to librarians even after training. I would love to see it fixed.
Comment 6 Michał 2025-03-17 12:08:10 UTC
> the data import feature works, but adds many clicks and is training intensive

Well, for some locales that is the only way it works (having to use the import feature as the standard routine), so the fact that "just opening" a .csv works for others is just a coincidence, and catering to it is a kind of (unfair?) favouritism.

> This stays a major annoyance to librarians even after training.

So I think the proper solution, to have all librarians get a seamlessly openable sheet in office suites with proper columns and encodings and no fiddling with settings, is to have an XLSX/Excel export option everywhere. CSV support in Excel is too finicky to expect it to just work for the majority of people (I wasn't even aware it ever worked, since for us it only detects semicolon delimiters by default).

I mean, the datatables have an "Excel" option next to "CSV", so it would only be fair to add it to other places. And we would avoid butchering the CSVs with a "Microsoft-ism" (which again, while solving this edge case, would be bound to introduce new problems and confusion, such as when someone concatenates the files, etc.).

There's already `labels/label-create-xml.pl`, and technically it looks like an HTML table saved as .xml opens up properly in Excel as a lazy solution... (but that would be incompatible with the current .xml, and saving it as .xls or .xlsx works but certainly doesn't seem proper either). So I guess the most proper solution would be to use some kind of library for XLSX creation instead...? Either that, or have an option called "XML for Excel", idk.

But basically either of the examples below, saved as .xml, should be picked up by Excel (I only tested with LibreOffice, not having Excel at hand, but found these examples in the context of Excel on the internet):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<html xmlns:x="urn:schemas-microsoft-com:office:excel">
    <body>
        <table>
            <tr><td>author</td><td>title</td></tr>
            <tr><td>Val1</td><td>Val2</td></tr>
            <tr><td>Авиабилет</td><td>Tλληνικ</td></tr>
        </table>
    </body>
</html>

==== or ====

<?xml version="1.0" encoding="UTF-8"?>
<?mso-application progid="Excel.Sheet"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet" xmlns:x="urn:schemas-microsoft-com:office:excel" xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet" xmlns:html="http://www.w3.org/TR/REC-html40">
        <Worksheet ss:Name="labels">
                <ss:Table>
                        <ss:Row>
                                <ss:Cell><ss:Data ss:Type="String">author</ss:Data></ss:Cell>
                                <ss:Cell><ss:Data ss:Type="String">title</ss:Data></ss:Cell>
                        </ss:Row>
                        <ss:Row>
                                <ss:Cell><ss:Data ss:Type="String">Val1</ss:Data></ss:Cell>
                                <ss:Cell><ss:Data ss:Type="String">Val2</ss:Data></ss:Cell>
                        </ss:Row>
                        <ss:Row>
                                <ss:Cell><ss:Data ss:Type="String">Авиабилет</ss:Data></ss:Cell>
                                <ss:Cell><ss:Data ss:Type="String">Tλληνικ</ss:Data></ss:Cell>
                        </ss:Row>
                </ss:Table>
        </Worksheet>
</Workbook>
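
For completeness, if a library route were preferred, a minimal sketch using the CPAN module Excel::Writer::XLSX could look like the example below (the file name and columns are made up for illustration, and this assumes the module would be added as a dependency):

use strict;
use warnings;
use utf8;
use Excel::Writer::XLSX;

# Write a small native .xlsx file; Excel and LibreOffice open it with the
# correct encoding and columns, without any import wizard.
my $workbook  = Excel::Writer::XLSX->new('labels.xlsx');
my $worksheet = $workbook->add_worksheet('labels');

my @rows = (
    [ 'author',        'title' ],
    [ 'Chödrön, Pema', '我爱你' ],
);

my $r = 0;
$worksheet->write_row( $r++, 0, $_ ) for @rows;
$workbook->close();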
Comment 7 Katrin Fischer 2025-03-17 13:55:20 UTC
Sorry, I was not clear about that: I'd like all the CSV exports to behave the same - we already added the BOM to another one a little while back (I think for reports). So when asking for a switch I was thinking of a more general one that is used in multiple places. You should also be able to set your default delimiter via CSVDelimiter and have it taken into account everywhere (for your semicolon case; I suspect we'd prefer tabs) - otherwise I'd consider it a bug, unless the delimiter can be specified explicitly.
Comment 8 David Cook 2025-03-17 23:21:05 UTC
Interesting discussion for sure. 

I think the change will probably help more people than it will hinder, but I was thinking similar thoughts about the BOM causing an unexpected change for anyone parsing the CSV via alternative means. A fairly small change, but a change nonetheless.

Regarding the DataTables, that Excel export is done by the DataTables JavaScript library. I don't think we have a Perl library for XLSX exports. Although, as you indicate, Michał, maybe we don't need a library. That's interesting.

Overall, I think it's more important to be consistent than to be right (a common saying in libraries!). But yeah, maybe a syspref or some other switch to turn the BOM on/off...

I think that should probably be a follow-up bug though, as this bug report just extends the status quo.
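
For reference, a switch like the one discussed could be as small as the sketch below (the system preference name is hypothetical; no such preference exists yet):

use strict;
use warnings;
use C4::Context;    # Koha's standard interface for reading system preferences

# Hypothetical "CSVExportBOM" preference: only emit the byte-order mark when
# the switch is enabled, so sites that don't want it keep today's behaviour.
binmode( STDOUT, ':encoding(UTF-8)' );
print "\x{FEFF}" if C4::Context->preference('CSVExportBOM');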