| Summary: | Spine label with BN_IN UTF8 data rendered incorrectly | | |
|---|---|---|---|
| Product: | Koha | Reporter: | Indranil Das Gupta <indradg> |
| Component: | Label/patron card printing | Assignee: | Chris Nighswonger <cnighswonger> |
| Status: | BLOCKED --- | QA Contact: | Testopia <testopia> |
| Severity: | major | | |
| Priority: | P5 - low | CC: | amitddng135, josef.moravec, veron |
| Version: | Main | | |
| Hardware: | All | | |
| OS: | All | | |
| URL: | https://rt.cpan.org/Ticket/Display.html?id=122778 | | |
| GIT URL: | | Initiative type: | --- |
| Sponsorship status: | --- | Comma delimited list of Sponsors: | |
| Crowdfunding goal: | 0 | Patch complexity: | --- |
| Documentation contact: | | Documentation submission: | |
| Text to go in the release notes: | | Version(s) released in: | |
| Circulation function: | | | |
| Attachments: | View of expected output vs output rendered | | |
Description
Indranil Das Gupta
2017-08-11 07:16:39 UTC
Quoting from the mailing list:
This problem seems to be present for most Indian languages whenever
they have conjunct clusters in their call numbers (represented as
grapheme clusters in a Unicode string).
To describe the problem simply: the order of characters rendered is
incorrect in the output. For example, the string "শেখর" is
represented by the following code points:
\x{09B6}\x{09C7}\x{0996}\x{09B0}.
Now here is the catch: \x{09B6} represents the Bengali letter SHA,
whereas \x{09C7} represents the Bengali vowel sign E; however, in the
correct linguistic visual presentation, the vowel sign E sits before
the SHA, which is not how the codepoints are arranged in the Unicode
string.
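To make the storage-versus-display mismatch concrete, here is a minimal sketch (not part of the original report) that dumps the stored codepoint order of the example string using only core Perl:

```perl
#!/usr/bin/perl
# Illustrative only: print the logical (storage) order of the codepoints
# in the example word from the report.
use strict;
use warnings;
use utf8;
binmode STDOUT, ':encoding(UTF-8)';

my $word = "\x{09B6}\x{09C7}\x{0996}\x{09B0}";   # "শেখর"
printf "U+%04X  %s\n", ord($_), $_ for split //, $word;

# SHA (U+09B6) is stored first and the vowel sign E (U+09C7) second,
# even though the vowel sign is drawn to the LEFT of SHA when rendered.
```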
I looked around the PDF::Reuse, Text::PDF::TTFont, etc. modules; what seems
to me to be the root of this problem is the unpacku() method, which
pushes the Unicode characters into an array in order to introduce
them into the PDF content stream with the correct font information.
However, being pushed in that order may, I think, be the cause of
this problem, which would make this an upstream issue rather than a
Koha bug.
cheers
indranil
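The failure mode described above can be sketched roughly as follows. This is not the actual PDF::Reuse code; it is a simplified stand-in for what unpacku()/out_text() are described as doing, i.e. emitting one glyph per codepoint in storage order with no shaping or reordering step in between:

```perl
# Simplified stand-in for the behaviour described in the report;
# NOT the real PDF::Reuse implementation.
use strict;
use warnings;
use utf8;

my $text = "\x{09B6}\x{09C7}\x{0996}\x{09B0}";   # "শেখর"

# Roughly what an unpacku()-style helper is described as doing:
# split the string into individual codepoints.
my @clist = map { ord } split //, $text;

# Roughly what an out_text()-style emitter is described as doing:
# write one glyph per codepoint, in storage order, with no reordering.
for my $cp (@clist) {
    printf "emit glyph for U+%04X\n", $cp;
}

# The vowel sign E (U+09C7) is emitted after SHA (U+09B6), so it is also
# drawn after it, which is the wrong visual order for Bengali.
```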
I wonder if this problem also occurs in other abugida writing systems?

Yes! In fact, I was perhaps hasty in trashing the unpacku() method. The root of the trouble is the out_text() method, where the actual glyphs are parsed into the PDF content stream. What is happening here is that the individual codepoints pushed into @clist by unpacku() are being listed out one at a time into the PDF content stream as glyphs, *without* the necessary glyph reordering taking place. So I would expect every single abugida writing system to be impacted.

(In reply to Indranil Das Gupta from comment #3)
> What is happening here is that the individual
> codepoints pushed into @clist by unpacku() are being listed out one at a
> time into the PDF content stream as glyphs, *without* the necessary glyph
> reordering taking place.

It would seem that the glyph order should never be "changed" in the first place, i.e. the order in which the glyphs are supplied should be preserved throughout the entire process of PDF generation.

Hi Chris,
The example I referred to on the m/l has the following codepoint order \x{09B6}\x{09C7}\x{0996}\x{09B0}, and that's exactly how PDF::Reuse and PDF::API2 are pushing it out.
However, as per the glyph-reordering rules required by Bengali, the actual ordering of glyphs (as opposed to the codepoint order) should be \x{09C7}\x{09B6}\x{0996}\x{09B0}.
LibreOffice, which uses the ICU rules, handles this perfectly within ODF as well as during PDF export, as does any software that uses Pango as the rendering backend.
Basically, calls need to be made to pick up the correct information from the GSUB and GPOS tables of the font being embedded, which these two Perl libs apparently (from my limited reading so far) do not do.
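For context on what "picking up information from the GSUB and GPOS tables" looks like in practice, here is a hedged sketch that delegates shaping to a HarfBuzz-based engine rather than iterating codepoints directly. It assumes the HarfBuzz::Shaper module from CPAN and a Bengali-capable font; the font path is a placeholder, and this is an illustration of the general approach, not a proposed patch to PDF::Reuse or PDF::API2:

```perl
# Illustrative sketch: obtain shaped, reordered glyphs from a shaping
# engine that consults the font's GSUB/GPOS tables, instead of emitting
# one glyph per codepoint in storage order.
use strict;
use warnings;
use utf8;
use HarfBuzz::Shaper;   # assumption: CPAN HarfBuzz wrapper is installed
use Data::Dumper;

my $hb = HarfBuzz::Shaper->new;
$hb->set_font('/path/to/NotoSansBengali-Regular.ttf');  # placeholder path
$hb->set_size(10);
$hb->set_text("\x{09B6}\x{09C7}\x{0996}\x{09B0}");       # "শেখর"

# The shaper returns one entry per *glyph*, already reordered and
# substituted according to the font's shaping rules, which is what the
# PDF content stream ultimately needs.
my $glyphs = $hb->shaper;
print Dumper($glyphs);
```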
This Koha bug depends on this CPAN bug: https://rt.cpan.org/Ticket/Display.html?id=122778