Bug 2246 - Label printing doesn't work with Unicode characters
Summary: Label printing doesn't work with Unicode characters
Status: CLOSED WONTFIX
Alias: None
Product: Koha
Classification: Unclassified
Component: Label/patron card printing
Version: Main
Hardware: PC All
Importance: PATCH-Sent (DO NOT USE) critical
Assignee: Chris Nighswonger
QA Contact: Bugs List
URL:
Keywords:
Duplicates: 3400 6899 8563 13627
Depends on:
Blocks: 3400
Reported: 2008-06-16 02:41 UTC by Frédéric Demians
Modified: 2019-06-27 09:24 UTC
CC List: 14 users

See Also:
Change sponsored?: Seeking cosponsors
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
A UTF-8 text file containing a Hindi word that triggers the bug (24 bytes, text/plain)
2008-06-16 02:45 UTC, Chris Cormack
MARC21 ISO2709 single record (1.07 KB, application/octet-stream)
2008-09-09 15:13 UTC, Chris Cormack
Partial fix to reduce invalid pdf generation due to wide character errors. (1.74 KB, patch)
2011-04-19 13:17 UTC, Chris Nighswonger
Bug 2246 - (Partial) Label printing doesn't work with Unicode characters (1.74 KB, patch)
2011-05-29 14:09 UTC, Katrin Fischer
Bug 2246 - (Partial) Label printing doesn't work with Unicode characters (1.74 KB, patch)
2011-05-29 14:10 UTC, Katrin Fischer
[SIGNED-OFF] Bug 2246 - (Partial) Label printing doesn't work with Unicode characters (1.75 KB, patch)
2011-05-29 14:20 UTC, Katrin Fischer
Bug 2246 - (Partial) Map multibyte UTF8 to single byte for ISOLatin1 fonts (fixes diacritics <ASCII 256 decimal) (4.11 KB, patch)
2011-10-05 08:30 UTC, wajasu

Description Chris Cormack 2010-05-21 00:48:28 UTC


---- Reported by frederic@tamil.fr 2008-06-16 02:41:02 ----

Label printing doesn't work with Hindi characters

Issue posted on [Koha] list:

http://lists.katipo.co.nz/public/koha/2008-June/014200.html

If you add a Hindi word (दिशा) to a title, the biblio record
and its barcode can't be printed in Tools > Label. The PDF
file isn't generated.

To reproduce this bug:

  1. Modify an existing biblio record. Add दिशा to its title.
  2. In Tools > Label, set a default Layout and a Template.
  3. Create a new Label Batch with one record: the one modified in step 1.
  4. Generate the PDF for the batch defined in step 3.
  
As a result, the PDF file isn't generated. An error message appears in
the log file:

  label-print-pdf.pl: Wide character in syswrite at 
  /usr/local/share/perl/5.8.8/PDF/Reuse.pm line 968
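
For context, this class of error can be reproduced outside Koha with a few
lines of Perl (illustrative sketch only; the file path is made up):

  use strict;
  use warnings;
  use Encode qw(encode_utf8);

  # syswrite dies on any string containing code points above 0xFF,
  # e.g. a combining diacritic or the Hindi characters above.
  my $title = "The se\x{0301}ance";     # 'e' + COMBINING ACUTE ACCENT (U+0301)
  open my $fh, '>', '/tmp/demo.pdf' or die $!;
  # syswrite $fh, $title;               # would die: "Wide character in syswrite"
  syswrite $fh, encode_utf8($title);    # encode to raw octets first, then write
  close $fh;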



---- Additional Comments From frederic@tamil.fr 2008-06-16 02:45:16 ----

Created an attachment
A UTF-8 text file containing a Hindi word that triggers the bug





---- Additional Comments From chris.nighswonger@liblime.com 2008-06-16 22:18:28 ----

This probably has to do with the fact that only "core" fonts are available by default when generating PDFs. I think a Hindi font would have to be installed and some Koha code modified (in C4/Labels.pm, IIRC) to correct this.



---- Additional Comments From oleonard@myacpl.org 2008-08-05 08:51:03 ----

Same issue for Arabic, as reported in http://bugs.koha.org/cgi-bin/bugzilla3/show_bug.cgi?id=2460.



---- Additional Comments From oleonard@myacpl.org 2008-08-05 08:51:15 ----

*** http://bugs.koha.org/cgi-bin/bugzilla3/show_bug.cgi?id=2460 has been marked as a duplicate of this bug. ***



---- Additional Comments From rch@liblime.com 2008-09-09 15:13:07 ----

Created an attachment
MARC21 ISO2709 single record

This record causes the PDF to break if you print the title.



---- Additional Comments From joe.atzberger@liblime.com 2008-09-11 09:30:15 ----

Same issue for Japanese (and Korean), our users in Tokyo point out.



---- Additional Comments From jesse.weaver@liblime.com 2008-10-07 07:39:21 ----

In investigating the bug, I have discovered that it will be fairly difficult to fix. The problem is that PDF::Reuse reopens STDOUT, making it impossible for us to set binmode ':utf8' on it. It is possible to get around this by writing out double-encoded UTF-8 to a temporary file, reading that file back in, and printing it back out correctly encoded, but it's not a very good solution.
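
A rough sketch of that temp-file workaround (hypothetical file name; not code
that went into Koha):

  use strict;
  use warnings;
  use Encode qw(decode encode);

  my $tmpfile = '/tmp/labels-double-encoded.pdf';   # hypothetical temp file
  open my $in, '<:raw', $tmpfile or die $!;
  my $double = do { local $/; <$in> };              # slurp the double-encoded output
  close $in;

  # Undo one layer of UTF-8 encoding: decode once, then map the resulting
  # U+00..U+FF characters straight back to single bytes.
  my $fixed = encode( 'ISO-8859-1', decode( 'UTF-8', $double ) );

  binmode STDOUT, ':raw';
  print STDOUT $fixed;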

Thoughts?



---- Additional Comments From mason@kohaaloha.com 2008-11-11 18:26:04 ----

(In reply to comment #7)
> In investigating the bug, I have discovered that it will be fairly difficult to
> fix. The problem is that PDF::Reuse reopens STDOUT, making it impossible for us
> to set binmode ':utf8' on it. 

Jesse, I think this is not the problem.

> 
> Thoughts?
> 


I wrote the first go of this, so I'll comment.


The good news is that PDF::Reuse now supports Unicode (as of July 2008), so this is *technically* possible,
which is a good start ;)

http://groups.google.com/group/PDF-Reuse/browse_thread/thread/99457d4e40d6a5ed


The *only*?? way to do this is to use the new PDF::Reuse::prTTFont() sub
and embed the non-standard (e.g. Korean/Japanese/Hindi) font in the actual PDF file

(after installing all your extra *.ttf fonts on your server, of course)


something like this...

  use PDF::Reuse;

  prFile('hindi-test.pdf');          # start a new PDF (file name is just an example)
  prTTFont('/path/to/hindi.ttf');    # embed and select a TrueType font with Hindi glyphs
  prText(20, 675, "दिशा दिशा दिशा दिशा दिशा");
  prEnd();


It's going to need a simple proof-of-concept first, as embedding Unicode Hindi fonts is close to the limit of what PDF::Reuse is currently capable of, it seems.


http://search.cpan.org/dist/PDF-Reuse/Reuse.pm#prTTFont_-_select_and_embed_a_TrueType_font

" Using TrueType fonts also enables the prText function to accept UTF-8 strings, which allows you to use characters outside the Mac-Roman/Win-ANSI character sets used by the built-in fonts. "

Mason






---- Additional Comments From joe.atzberger@liblime.com 2008-12-01 15:05:55 ----

I disagree with Mason's assessment.  PDF generation fails when I have added even one title with one (combining) diacritical character like: "The séance".  The error in the log is: 

label-print-pdf.pl: Wide character in syswrite at /usr/local/share/perl/5.8.8/PDF/Reuse.pm line 968, <DATA> line 228., 
referer: http://staff-atz.dev.kohalibrary.com/cgi-bin/koha/labels/label-manager.pl?op=add&batch_id=5&itemnumber=733

As far as I can tell, LATIN SMALL LETTER E WITH ACUTE should be available in any decent font.  The more descriptive post from the patch author on Unicode in PDF::Reuse is here:

http://groups.google.com/group/PDF-Reuse/browse_thread/thread/4e28d69fedf74b74

He summarizes: "It seems that the only way to access characters outside the
MacRoman/WinAnsi encodings supported by PDF's built-in fonts is to embed
a TrueType font.  (I'm happy to be corrected if this conclusion in
wrong).  I've implemented font embedding by grafting Font::TTF and
Text::PDF::TTFont0 onto the PDF::Reuse API."

I'm not sure that is a correct conclusion on his part (perhaps a CMap or a /name_object would work).  The conditions for making PDF::Reuse work even under this new version would be:
(1) Require PDF::Reuse 0.35,
(2) Add two dependencies to Koha,
(3) Require ANY font being targeted to be installed on the server,
(4) Build an admin interface for selecting target fonts to be embedded, so label scripts can reference them by file path,
(5) Add a data structure to remember which batches need which fonts, and
(6) Refactor the code to use prTTFont.

In the end, the TTFont would be embedded in each PDF generated, meaning a set of barcodes might increase in filesize by several orders of magnitude.  

In my opinion, PDF::Reuse has a rather severe Unicode workaround rather than real compatibility. Maybe that's Adobe's fault and not the module's, but right now I'm not sure this path is the right one.



---- Additional Comments From joe.atzberger@liblime.com 2008-12-02 18:43:49 ----

After extensive review, I am more inclined to agree with Mason, inasmuch as PDFs are limited to three default encodings: MacRomanEncoding, MacExpertEncoding, or WinAnsiEncoding.  None of those covers as much of Unicode as we need.

The (1236 page!) Adobe reference book that I'm checking says: "For character encodings that are not predefined, the PDF file must contain a stream that defines the CMap."

It looks like we would have to define a mapping for every non-basic-ASCII character that  we might want to use.  This "ToUnicode Mapping File Tutorial" might be useful to pursue this route:
www.adobe.com/devnet/acrobat/pdfs/5411.ToUnicode.pdf 

I'm not sure how much of this prTTFont would encapsulate, but it does not look like fun.



---- Additional Comments From joe.atzberger@liblime.com 2008-12-03 08:55:30 ----

Note critical bug on CPAN that PDF::Reuse leaves TTF filehandles open, eventually blocking access:

http://rt.cpan.org/Public/Dist/Display.html?Name=PDF-Reuse




---- Additional Comments From mjr@ttllp.co.uk 2009-06-15 10:51:31 ----

Does the same fault exist in 3.2?

Is it possible for us to fix this from the Koha project?

Anyone care to estimate the time required?  I think this fault is affecting many libraries and a sponsorship call would get the money to fix it.




---- Additional Comments From cmwasim@gmail.com 2009-06-16 10:11:07 ----

I want Urdu to be displayed, but I can't display it. Instead of Urdu I see symbols like "کتاب التوØÛOEØ"

I tried embedding the font using prTTFont, but it displays this message no matter what font I try to embed:

"Cannot extract embedded font 'BXCJIM+fontname'. Some characters may not display or print properly"

Please help!



---- Additional Comments From mjr@ttllp.co.uk 2009-10-21 12:54:32 ----

http://comments.gmane.org/gmane.education.libraries.koha.devel/3413/ is a recent developer mailing list discussion of this issue.




---- Additional Comments From cnighswonger@foundations.edu 2010-02-10 13:13:13 ----

Moving this to HEAD as it is unlikely to be fixed for 3.2. Also upgrading it to critical since it precludes label printing for non-Latin alphabets and for diacritics in Latin alphabets.



--- Bug imported by chris@bigballofwax.co.nz 2010-05-21 00:48 UTC  ---

This bug was previously known as _bug_ 2246 at http://bugs.koha.org/cgi-bin/bugzilla3/show_bug.cgi?id=2246
Imported an attachment (id=585)
Imported an attachment (id=586)

Actual time not defined. Setting to 0.0
CC member arm@hanover.ca does not have an account here
CC member bchurch@ptfs.com does not have an account here
CC member ccslibrary@gmail.com does not have an account here
CC member cmwasim@gmail.com does not have an account here
CC member daz-koha@zzzurn.com does not have an account here
CC member Eric.Begin@inLibro.com does not have an account here
CC member mjr@ttllp.co.uk does not have an account here
The original submitter of attachment 585 [details] is unknown.
   Reassigning to the person who moved it here: chris@bigballofwax.co.nz.
The original submitter of attachment 586 [details] is unknown.
   Reassigning to the person who moved it here: chris@bigballofwax.co.nz.

Comment 1 Serhij Dubyk 2010-12-13 10:21:20 UTC
Label printing doesn't work with any Cyrillic characters (we tried to print patron cards); tested on Debian sid, Koha 3.2.1, with Font::TTF and Text::PDF::TTFont0 installed.
Comment 2 Chris Nighswonger 2011-04-19 13:17:20 UTC Comment hidden (obsolete)
Comment 3 akif-antispam 2011-04-25 10:59:31 UTC
I'm on 3.02.07.000 with Debian squeeze and I'm facing the same problem with German umlauts as well as Turkish characters in the records. PDFs are invalid when created.
Comment 4 Chris Nighswonger 2011-04-25 13:35:09 UTC
*** Bug 3400 has been marked as a duplicate of this bug. ***
Comment 5 Katrin Fischer 2011-05-29 14:09:40 UTC Comment hidden (obsolete)
Comment 6 Katrin Fischer 2011-05-29 14:10:28 UTC Comment hidden (obsolete)
Comment 7 Katrin Fischer 2011-05-29 14:20:41 UTC
Created attachment 4293 [details] [review]
[SIGNED-OFF] Bug 2246 - (Partial) Label printing doesn't work with Unicode characters

This patch provides a very partial fix for this bug in that it reduces
the number of pdf generation failures due to a "wide character" error.
It does not ensure that all unicode characters will print correctly as
this is dependent upon many other issues mentioned in this bug and
various posts to the developer list.

What this code does is test whether the pdf stream is utf8 encoded
and, if it is, explicitly declare it to be so. Unicode chars will still
print incorrectly, but the pdf will be created and should open properly
in whatever pdf reader is used.
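
A minimal sketch of that idea, with hypothetical variable and handle names
(this is not the actual diff):

  # If the assembled PDF stream carries Perl's internal UTF-8 flag, declare
  # the output handle to be UTF-8 before writing, so the write no longer
  # dies with "Wide character".
  if ( utf8::is_utf8($pdf_stream) ) {
      binmode( $output_fh, ':encoding(UTF-8)' );
  }
  print {$output_fh} $pdf_stream;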

You may test this by adding any character with a diacritical to the
title of a bib and then attempting to generate a label pdf with the
title of that bib. Before the patch is applied the resulting pdf
should contain an error mentioning a wide character issue. After the
patch is applied, the pdf should be valid.

No documentation changes are necessary as a result of this patch.

This patch should be backported to 3.2.x.
Comment 8 Katrin Fischer 2011-05-29 14:22:04 UTC
Printing German umlauts works without problem for me. I could reproduce the described behaviour with Cyrillic "Россия".
Comment 9 Ian Walls 2011-06-03 20:19:04 UTC
Confirming that the signed-off patch does indeed allow the PDF to be opened, even if the Unicode characters are not properly displayed (as is the case for the Hindi word दिशा) or come with post-character cruft (as with author Brontë).

Marking as Passed QA, as although this bug is far from resolved, this patch greatly improves the usability of the label creator for the occasions when only a few titles have non-ASCII characters.
Comment 10 Chris Cormack 2011-06-04 08:04:50 UTC
Pushed to master, please test
Comment 11 Katrin Fischer 2011-09-21 14:23:32 UTC
*** Bug 6899 has been marked as a duplicate of this bug. ***
Comment 12 wajasu 2011-09-28 06:43:55 UTC
Improvement
With hopes of getting diacritics (u-dieresis and such) printing on my barcode labels, I dug into /home/koha344/koha/intranet/cgi-bin/labels/label-create-pdf.pl

I found that if I set the utf8 flag off for the $line string, it would end up sending FC instead of C3BC in the PDF.  So for the standard PDF encodings that we currently allow (Helvetica, Courier, etc.), the single byte maps to those common foreign characters just fine.  It worked for me!

Here is the "utf8::downgrade($line);" that I used, in this section of code. I am running Perl 5.12.1 (that I built), with the latest CPAN modules pulled as of September 2011, and Koha 3.4.4 (your wide-character fix in PDF.pm helped). It gets through the latest version of PDF::Reuse that I have installed.

sub _print_text {
    my $label_text = shift;
    foreach my $text_line (@$label_text) {
        my $pdf_font = $pdf->Font($text_line->{'font'});
        my $line = "BT /$pdf_font $text_line->{'font_size'} Tf $text_line->{'text_llx'} $text_line->{'text_lly'} Td ($text_line->{'line'}) Tj ET";

        utf8::downgrade($line);  # Force the utf8 flag off so the single byte passes through for the regular PDF encodings.
                                 # (Now basic diacritics in the standard character sets will not get double encoded. Yeah!)
                                 # Note: this is not meant to handle the case where we want to use Unicode with TTF (TrueType) fonts.

        $pdf->Add($line);
    }
}

This bug is long overdue, and it might be possible for someone else to test and patch it, since I might not be able to get it done before October 8th, 2011 for the 3.6 release.

Note: I didn't want to lose this info.  I also tried this fix in my production 3.0.2 environment.  I had to add the PDF.pm patch for wide chars to get a PDF to be generated.  Once I added the downgrade line in 3.0.2, my label-create-pdf.pl hung with some recursion error and CPU at 100%, so I gave up on 3.0.2.  The PDF::Reuse in that version might be the factor, or something else.
I'm going to grab Koha 3.4.5 and try the fix on that. We can't wait till next year for this, can we?

I also coded it with a TrueType (TTF) font, but the description lines were missing and I didn't have time to figure out the drawing logic.  We would need to add a way to select a font path, or upload one, and store it in the database. Also, this fix might be usable for other create-*-pdf.pl scripts (like patron cards).

This is a partial fix if we need to support other languages.
Comment 13 D Ruth Holloway 2011-09-28 12:51:13 UTC
I tried making wajasu's edit on a system known to be having this problem (code updated as of last night at HEAD), and it did not resolve the problem for me; I got a "this PDF is broken and cannot be repaired" error.  :(
Comment 14 wajasu 2011-10-05 08:30:09 UTC
Created attachment 5713 [details] [review]
Bug 2246 - (Partial) Map multibyte UTF8 to single byte for ISOLatin1 fonts (fixes diacritics <ASCII 256 decimal)

    Bug 2246 - (Partial) Map multibyte UTF8 to single byte for ISOLatin1 fonts (fixes diacritics <ASCII 256 decimal)
    
    This is a partial fix as well, which attempts to convert the internal representation
    of multibyte UTF-8 characters to their single-byte equivalents in the native encoding (Latin-1)
    and allow them to pass through to the PDF stream.
    
    This ONLY fixes those ISOLatin1 diacritics and probably won't solve a full foreign
    language need.  It probably solves the printing case for many historical records,
    but won't take care of the needs of the international community.  I believe we need to use
    a fully embedded Unicode font in the final solution.
    
    Refer to utf8::downgrade($string, FAIL_OK); (see core Perl /usr/share/perl5/core_perl/utf8.pm)
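
    A quick illustration of what utf8::downgrade does (editor's sketch, not part
    of the patch; the sample string is hypothetical):

        my $line = "J\x{fc}rgen";     # u-with-diaeresis, U+00FC
        utf8::upgrade($line);         # force Perl's internal UTF-8 flag on, as DB-sourced strings often are
        print utf8::is_utf8($line) ? "flagged\n" : "bytes\n";   # prints "flagged"
        utf8::downgrade($line, 1);    # second arg = FAIL_OK: don't die on chars above 0xFF
        print utf8::is_utf8($line) ? "flagged\n" : "bytes\n";   # prints "bytes"; 0xFC now reaches the PDF as-is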
    
    Test:
    a) I selected a biblio whose author had a u-dieresis (u with 2 dots above, i.e. Jürgen Habermas)
    and created a batch with that one record.  I used the standard Helvetica font (no TrueType).
    I exported it as a PDF and saw my label had "J (capital A with a tilde on top, then 1/4) rgen" for
    the author.
    b) Applied the patch.
    c) Exported the PDF again, and saw "J (u with two dots above) rgen"
    
    To see what changed in the PDF that was generated:
    a) Edit label-create-pdf.pl and temporarily comment out the $pdf->Compress(1) line so that
    you can see the PDF text instructions when generating. (Export it.)
    b) Use a hex viewer (I use hexedit) and search for Habermas; you will see
    the corresponding Jürgen with the two bytes C3 BC before the patch, and FC after the patch.
    (You can also use od -x on Unix to view the PDF if you don't have hexedit.)
    
    Some explanation:
    The utf8 flag is turned off, and the FC is passed through.  I tried Encode::decode routines,
    but I think they keep the Perl-internal utf8 flag on, and the bytes stream out as C3 BC.
    I've read that when strings are concatenated the flag can switch on/off, so I hoped nothing
    in the PDF::Reuse module would turn it back on (if that's what is helping).
    
    Observations:
    I tried this on my production 3.2-ish Koha, and it didn't work, so this patch is dependent
    on other fixes such as http://bugs.koha-community.org/bugzilla3/attachment.cgi?id=4293
    I had hoped folks on older versions could make the change in production without an upgrade,
    but one could try and see, since it's a staff-client tool.

    It worked for me on 3.4.4 and on 3.5.x Koha git master as of October 5th, 2011.
    
    Things that might be needed:
    Pertinent modules:
                            My test env version     HEAD        Required
    PDF::API2               2.019                   2           Yes
    PDF::API2::Page         2.019                   2           Yes
    PDF::API2::Simple       1.1.4                   1           Yes
    PDF::API2::Util         2.019                   2           Yes
    PDF::Reuse              0.35                    0.33        Yes
    PDF::Reuse::Barcode     0.05                    0.05        Yes
    PDF::Table              0.9.3                   0.9.3       Yes
    Unicode::Normalize      1.03                    0.32        Yes
    
    perl -v
    This is perl 5, version 12, subversion 1 (v5.12.1) built for x86_64-linux
    
    If you test, be sure to test with a diacritic that has a corresponding ISOLatin1
    mapping.
Comment 15 Katrin Fischer 2011-10-05 08:34:36 UTC
I am a bit confused about your example - because I had tested German umlauts like 'Jürgen' with one of the last patches that got into master for this bug and it worked nicely. Could your problem here be related to unicode normalization?
Comment 16 wajasu 2011-10-06 06:41:36 UTC
(In reply to comment #15)
> I am a bit confused about your example - because I had tested German umlauts
> like 'Jürgen' with one of the last patches that got into master for this bug
> and it worked nicely. Could your problem here be related to unicode
> normalization?

It could be because of Unicode normalization.  I faintly recall making a decision whether or not to call some Unicode normalization routine when I wrote a program to convert records to MARCXML from a legacy library system 3 years ago.  It might have been a Windows cp1252 encoding originally, with characters mapping to UTF-8.

I did this query against my database for diacritics.
With MySQL I run:
select author,HEX(author),CHAR_LENGTH(author) from biblio where HEX(author) like '%C3BC%';
and get lots of records like:
Moltmann, Jürgen.  | 4D6F6C746D616E6E2C204AC3BC7267656E2E | 17 
where the C3BC represents ü. 

The C3BC is what gets written to the PDF file without my patch. Maybe you can use a hex editor and see if your label export has C3BC. If so, it might be that your PDF reader is able to map that. If yours is F9, then there must be a difference in the Perl version/compile options or module code.

Also, I saved a biblio as MARCXML, and the XML contained J&#xFC;rgen in subfield code "a", so that's an XML entity.
Saving it as marc.utf8, it has JuCC88rgen (i.e. 'u' followed by the combining-diaeresis bytes CC 88) in the text area of my hex editor.
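
That byte pattern is the decomposed (NFD) form; a quick way to compare the two
forms in Perl (editor's sketch; Unicode::Normalize is already in the module list
above):

  use Encode qw(encode_utf8);
  use Unicode::Normalize qw(NFD);

  my $nfc = "J\x{fc}rgen";                              # precomposed u-umlaut (U+00FC)
  my $nfd = NFD($nfc);                                  # 'u' + COMBINING DIAERESIS (U+0308)
  printf "NFC: %s\n", unpack 'H*', encode_utf8($nfc);   # 4ac3bc7267656e   (contains C3 BC)
  printf "NFD: %s\n", unpack 'H*', encode_utf8($nfd);   # 4a75cc887267656e (contains 75 CC 88)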

Another specific query I did for my test record.
select author,HEX(author),CHAR_LENGTH(author) from biblio where author like '%Habermas%';

mysql version: 5.5.16
mysql> SHOW SESSION VARIABLES LIKE 'character_set%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+


Note: These are barcode labels I'm printing.
Comment 17 wajasu 2011-10-09 20:50:56 UTC
I created a Debian virtual machine dev environment as described on the community wiki, and see that Perl 5.10.1 is what is used in Debian squeeze.  I applied my one-liner patch (utf8::downgrade($line) in label-create-pdf.pl) and it had no effect.  So I suspect some change in Perl, or even a side effect of how certain modules behave, is the reason.  I'm noting this so a future upgrade might fix the issue for someone, if it's needed.  I've been running Perl 5.12.1, and now 5.14.2 is out. - Oct 24 2011
Comment 18 Chris Nighswonger 2012-07-06 00:26:30 UTC
I'm marking these label/diacritical-related bugs as WONTFIX due to the well-rehearsed issues with the PDF standard and diacritics.  If someone wants to put time into this, feel free to reopen and take ownership of these bugs.
Comment 19 Chris Nighswonger 2012-08-02 17:28:48 UTC
*** Bug 8563 has been marked as a duplicate of this bug. ***
Comment 20 Katrin Fischer 2015-02-01 21:03:27 UTC
*** Bug 13627 has been marked as a duplicate of this bug. ***