Bug 34549 - The cataloguing editor allows you to input invalid data
Summary: The cataloguing editor allows you to input invalid data
Status: Needs documenting
Alias: None
Product: Koha
Classification: Unclassified
Component: Cataloging
Version: Main
Hardware: All
OS: All
Priority: P5 - low
Severity: normal
Assignee: David Cook
QA Contact: Marcel de Rooy
URL:
Keywords:
Depends on: 29697
Blocks: 35104
Reported: 2023-08-17 03:24 UTC by David Cook
Modified: 2024-11-14 23:15 UTC
CC List: 7 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
This fixes entering data when cataloguing so that non-XML characters are removed. Non-XML characters (such as ESC) were causing adding and editing data to fail, with errors similar to:
Error: invalid data, cannot decode metadata object
parser error : PCDATA invalid Char value 27
Version(s) released in:
23.11.00,23.05.05,22.11.11
Circulation function:


Attachments
Text file containing control characters (38 bytes, text/plain)
2023-08-17 03:35 UTC, David Cook
Bug 34549: Strip non-XML chars during TransformHtmlToMarc (4.82 KB, patch)
2023-08-17 04:33 UTC, David Cook
Bug 34549: perltidy (3.52 KB, patch)
2023-09-26 05:03 UTC, David Cook
Bug 34549: Strip non-XML chars during TransformHtmlToMarc (4.87 KB, patch)
2023-09-27 18:54 UTC, David Nind
Bug 34549: perltidy (3.56 KB, patch)
2023-09-27 18:54 UTC, David Nind
Bug 34549: Strip non-XML chars during TransformHtmlToMarc (5.09 KB, patch)
2023-09-29 07:48 UTC, Marcel de Rooy

Description David Cook 2023-08-17 03:24:46 UTC
I had a library using UTF-8 encoding instead of MARC-8 encoding, and they were able to import records into the cataloguing editor and save them into Koha. Then when they tried to view, edit, or delete the records, Koha would throw fatal errors.

You can duplicate this by copying and pasting the following character into your cataloguing editor in, say, the 245$a field:
Comment 1 David Cook 2023-08-17 03:28:34 UTC
Ok, Bugzilla won't print that non-printable character, so that's going to make this harder.

I'll look at adding a text attachment where you can just copy the text and paste it into Koha to get it to break.
Comment 2 David Cook 2023-08-17 03:35:19 UTC
Created attachment 154479 [details]
Text file containing control characters

On Windows, open this file with Notepad or Notepad++. I haven't tried other OSes, but use a graphical text editor.

Copying and pasting this into a MARC record via the cataloguing editor will generate this kind of warning: "PCDATA invalid Char value 27"
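
For reference, a similar file can also be generated from the command line. A minimal sketch in Perl; the filename and text are made up for illustration and not byte-identical to the attachment:

# Write a small test file containing an ESC (0x1B) control character
perl -e 'print "Title with \x1b ESC inside\n"' > control-chars.txt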
Comment 3 David Cook 2023-08-17 03:39:17 UTC
Actually, it looks like you can just go straight to https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=154479 and copy the text from there in the web browser and that will break your MARC record via the cataloguing editor. 

Click "Save and view record" for the most obvious results.
Comment 4 David Cook 2023-08-17 04:33:13 UTC
Created attachment 154480 [details] [review]
Bug 34549: Strip non-XML chars during TransformHtmlToMarc

This patch strips non-XML characters from inputs during
TransformHtmlToMarc.

To test:
0. Apply patch
1. koha-plack --restart kohadev
2. Go to http://localhost:8081/cgi-bin/koha/cataloguing/addbiblio.pl
3. Fill out record and use the text from "Text file containing control characters"
as the title
4. Click Save
5. Note that your record displays without any warnings like the following:
Error: invalid data, cannot decode metadata object
parser error : PCDATA invalid Char value 27
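
For readers following along, the stripping amounts to a substitution over the XML 1.0 Char production. A minimal sketch of the technique; the helper name strip_non_xml_chars is illustrative here and not necessarily the actual Koha code:

use strict;
use warnings;

# Remove any character that is not legal in XML 1.0:
# Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
sub strip_non_xml_chars {
    my ($value) = @_;
    return '' unless defined $value;
    $value =~ s/[^\x09\x0A\x0D\x20-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]//g;
    return $value;
}

# e.g. "Bad\x1btitle" becomes "Badtitle" (ESC, 0x1B, is not a valid XML character)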
Comment 5 David Cook 2023-08-17 04:44:36 UTC
Note that there's no warnings anywhere that any stripping occurred.

We probably should log to file and show something on the screen for users...
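
One possible shape for that, sketched purely as an assumption (nothing like this is in the patch), reusing the strip_non_xml_chars helper sketched above; $value, $tag, and $subfield are hypothetical variables:

my $original = $value;
$value = strip_non_xml_chars($value);
if ( $value ne $original ) {
    # Log it, and/or surface a message in the template so the cataloguer sees it
    warn "Stripped non-XML characters from $tag\$$subfield";
}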
Comment 6 Owen Leonard 2023-08-24 17:45:12 UTC
This seems to work, but the QA script complains about tidiness. David can you take a look?
Comment 7 David Cook 2023-09-26 05:00:00 UTC
(In reply to Owen Leonard from comment #6)
> This seems to work, but the QA script complains about tidiness. David can
> you take a look?

It's taken me a little while but I'm tidying up... 😅
Comment 8 David Cook 2023-09-26 05:03:44 UTC
Created attachment 156198 [details] [review]
Bug 34549: perltidy
Comment 9 David Nind 2023-09-27 18:54:02 UTC
Created attachment 156306 [details] [review]
Bug 34549: Strip non-XML chars during TransformHtmlToMarc

This patch strips non-XML characters from inputs during
TransformHtmlToMarc.

To test:
0. Apply patch
1. koha-plack --restart kohadev
2. Go to http://localhost:8081/cgi-bin/koha/cataloguing/addbiblio.pl
3. Fill out record and use the text from "Text file containing control characters"
as the title
4. Click Save
5. Note that your record displays without any warnings like the following:
Error: invalid data, cannot decode metadata object
parser error : PCDATA invalid Char value 27

Signed-off-by: David Nind <david@davidnind.com>
Comment 10 David Nind 2023-09-27 18:54:04 UTC
Created attachment 156307 [details] [review]
Bug 34549: perltidy

Signed-off-by: David Nind <david@davidnind.com>
Comment 11 Marcel de Rooy 2023-09-29 07:48:51 UTC
Created attachment 156365 [details] [review]
Bug 34549: Strip non-XML chars during TransformHtmlToMarc

This patch strips non-XML characters from inputs during
TransformHtmlToMarc.

To test:
0. Apply patch
1. koha-plack --restart kohadev
2. Go to http://localhost:8081/cgi-bin/koha/cataloguing/addbiblio.pl
3. Fill out record and use the text from "Text file containing control characters"
as the title
4. Click Save
5. Note that your record displays without any warnings like the following:
Error: invalid data, cannot decode metadata object
parser error : PCDATA invalid Char value 27

Signed-off-by: David Nind <david@davidnind.com>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Squashed the tidy patch. Still needed a few spaces to satisfy qa tools.
Comment 12 Tomás Cohen Arazi (tcohen) 2023-10-09 14:43:12 UTC
Pushed to master for 23.11.

Nice work everyone, thanks!
Comment 13 Fridolin Somers 2023-10-09 19:39:40 UTC
Pushed to 23.05.x for 23.05.05
Comment 14 Matt Blenkinsop 2023-10-17 17:36:02 UTC
Nice work everyone!

Pushed to oldstable for 22.11.x
Comment 15 Martin Renvoize (ashimema) 2023-10-19 09:21:27 UTC
Hmm,  whilst this certainly resolves the core issue.. I'd have loved to have seen some form of warning to the end user that their input data has been manipulated.

I'm not close enough to the differences between MARC-8 and UTF-8 encodings to know exactly what we're losing during the save.. the test case highlighted here is simple.. just dropping a hidden character.. no harm done.. however, might there be cases where the mis-encoded string getting stripped would result in worse data from the human perspective?  It would be good to somehow catch these sorts of misconfigurations and try to encourage end users to fix them.
Comment 16 David Cook 2023-10-22 23:09:21 UTC
(In reply to Martin Renvoize from comment #15)
> Hmm,  whilst this certainly resolves the core issue.. I'd have loved to have
> seen some form of warning to the end user that their input data has been
> manipulated.

I agree. I'm not sure of the best way to do that, but I was a bit surprised my patches got pushed without it 😅.

> I'm not close enough to the differences between MARC-8 and UTF-8 encodings
> to know exactly what we're losing during the save.. the test case
> highlighted here is simple.. just dropping a hidden character.. no harm
> done.. however, might there be cases where the mis-encoded string getting
> stripped would result in worse data from the human perspective?  It would be
> good to somehow catch these sorts of misconfigurations and try to encourage
> end users to fix them.

Firstly, absolutely.

Secondly, you can find code tables for MARC-8 at https://www.loc.gov/marc/specifications/specchartables.html 

Here's a fun case I encountered the other day:

ö
UTF-8: C3B6
Latin-1: F6

I was accidentally outputting Latin-1 as I'd forgotten to tell Perl to print UTF-8. It was then interpreting ö as an underscore (i.e. "_") because F6 in MARC-8 is an underscore.

And F6 isn't a valid UTF-8 byte in any case. So it would disappear thanks to this change.
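
A quick way to see those bytes for yourself; a minimal sketch using the core Encode module:

use Encode qw(encode);

my $char = "\x{F6}";    # ö (U+00F6)
printf "UTF-8:   %v02X\n", encode( 'UTF-8',      $char );    # C3.B6
printf "Latin-1: %v02X\n", encode( 'ISO-8859-1', $char );    # F6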
Comment 17 David Cook 2023-10-22 23:41:54 UTC
(In reply to Martin Renvoize from comment #15)
> Hmm,  whilst this certainly resolves the core issue.. I'd have loved to have
> seen some form of warning to the end user that their input data has been
> manipulated.
> 
> I'm not close enough to the differences between MARC-8 and UTF-8 encodings
> to know exactly what we're losing during the save.. the test case
> highlighted here is simple.. just dropping a hidden character.. no harm
> done.. however, might there be cases where the mis-encoded string getting
> stripped would result in worse data from the human perspective?  It would be
> good to somehow catch these sorts of misconfigurations and try to encourage
> end users to fix them.

Of course, the way I typically deal with these records is to erase the problem characters; otherwise there's not much that can be done. Where possible, I indicate which records were problems so that they can be fixed up/re-imported.

I'll put some more thoughts on bug 35104...
Comment 18 David Cook 2023-10-22 23:53:47 UTC
(In reply to Martin Renvoize from comment #15)
> I'm not close enough to the differences between MARC-8 and UTF-8 encodings
> to know exactly what we're losing during the save.. 

I've done some testing and I had a hard time reproducing issues with MARC-8 vs UTF-8 using Library of Congress imports, but Latin-1 causes big time problems, so it'll be easiest for testing bug 35104.
Comment 19 David Cook 2023-10-23 01:00:02 UTC
(In reply to David Cook from comment #18)
> (In reply to Martin Renvoize from comment #15)
> > I'm not close enough to the differences between MARC-8 and UTF-8 encodings
> > to know exactly what we're losing during the save.. 
> 
> I've done some testing and I had a hard time reproducing issues with MARC-8
> vs UTF-8 using Library of Congress imports, but Latin-1 causes big time
> problems, so it'll be easiest for testing bug 35104.

That said, "big time problems" only from a human perspective. The text is mangled, but the tests I ran still produced characters there were valid XML characters.

So I'm just going to continue my lab style test in bug 35104 based off the text file uploaded here...
Comment 20 David Cook 2023-11-01 01:27:14 UTC
(In reply to Martin Renvoize from comment #15)
> Hmm,  whilst this certainly resolves the core issue.. I'd have loved to have
> seen some form of warning to the end user that their input data has been
> manipulated.
> 
> I'm not close enough to the differences between MARC-8 and UTF-8 encodings
> to know exactly what we're losing during the save.. the test case
> highlighted here is simple.. just dropping a hidden character.. no harm
> done.. however, might there be cases where the mis-encoded string getting
> stripped would result in worse data from the human perspective?  It would be
> good to somehow catch these sorts of misconfigurations and try to encourage
> end users to fix them.

I've been looking at this further and it looks like it's actually harder to get a badly encoded record into Koha than I thought!

If I try to stage a Latin-1 record as a UTF-8 record, it'll fail. At the moment, the background job is failing silently, but after adding some debugging I saw that the message is "Input is not proper UTF-8".

So I think maybe some of the encoding issues I've seen have to do with side-loaded records that have been directly put into the database as part of a data migration.

--

Also, as per my comments on bug 35104, it looks like Microsoft Edge has a tendency to corrupt data (at least in PDFs), and then users paste in corrupted data which includes control characters.

This change would work well to erase those non-printable control characters though. I suppose there could be minor data loss, although it's due to the source data being a problem...

I'm trying to gather scenarios for bad data so that we can alert on them well...
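
The "Input is not proper UTF-8" class of failure can also be checked for explicitly. A minimal sketch using Encode's croaking check mode, with made-up sample bytes:

use Encode qw(decode FB_CROAK);

my $bytes = "\xF6irm.";    # Latin-1 bytes; \xF6 is not valid UTF-8
my $text  = eval { decode( 'UTF-8', $bytes, FB_CROAK ) };
if ( !defined $text ) {
    warn "Not valid UTF-8, refusing to stage: $@";
}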
Comment 21 David Cook 2023-11-01 01:39:08 UTC
I tried to manually insert Latin-1 data with a byte that doesn't exist in UTF-8 and I couldn't manually insert it into the database either using the ORM:

DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::mysql::st execute failed: Incorrect string value: '\xF6irm.<...' for column `koha_kohadev`.`biblio_metadata`.`metadata` at row 1 at /kohadevbox/koha/Koha/Object.pm line 170
Invalid value passed, biblio_metadata.metadata=\xF6irm.<... expected type is string

I also couldn't do it using the db handle via DBIx::Class more directly:

{UNKNOWN}: DBI Exception: DBD::mysql::st execute failed: Incorrect string value: '\xF6irm.<...' for column `koha_kohadev`.`biblio_metadata`.`metadata` at row 1  at /usr/share/perl5/DBIx/Class/Schema.pm line 1118.
        DBIx::Class::Schema::throw_exception(Koha::Schema=HASH(0x55aa1477f670), "DBI Exception: DBD::mysql::st execute failed: Incorrect strin"...) called at /usr/share/perl5/DBIx/Class/Storage.pm line 113
        DBIx::Class::Storage::throw_exception(DBIx::Class::Storage::DBI::mysql=HASH(0x55aa1b4856c8), "DBI Exception: DBD::mysql::st execute failed: Incorrect strin"...) called at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 1623
        DBIx::Class::Storage::DBI::__ANON__("DBD::mysql::st execute failed: Incorrect string value: '\\xF6i"..., DBI::st=HASH(0x55aa1d031f20), undef)
Comment 22 David Cook 2023-11-01 01:51:05 UTC
I tried ©, which is A9 in Latin-1 but C2A9 in UTF-8.

I wrote it out as Latin-1, but trying to save it into Koha won't work. The database refuses it because it's not UTF-8:

DBD::mysql::st execute failed: Incorrect string value: '\xA9irm.<...' for column `koha_kohadev`.`biblio_metadata`.`metadata`

--

In hindsight, some of the encoding issues experienced were probably from before we updated Koha thoroughly to utf8mb4.

--

So I probably got my original analysis a bit wrong in terms of the "Description" of this bug report. I thought it was a case where they were using the wrong encoding on their Z39.50 server, but I don't think it was that.

Based on my experience on bug 35104, I think that all my recent issues with
"Error: invalid data, cannot decode metadata object
  parser error : PCDATA invalid Char value [X]" are probably due to copy/paste...
Comment 23 David Cook 2023-11-01 02:24:08 UTC
I did have some library records with ESC characters in them, but no matter what I try I cannot reproduce that same record. 

Somehow “ got turned into \x1b(3\x1b)4z\x1b(B

In UTF-8 that would be e2 80 9c and in Windows-1252 it would be 93, so it looks like some big time weirdness...
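
Those two byte sequences can be confirmed with Encode as well; a minimal sketch:

use Encode qw(encode);

printf "UTF-8:        %v02X\n", encode( 'UTF-8',  "\x{201C}" );    # E2.80.9C
printf "Windows-1252: %v02X\n", encode( 'cp1252', "\x{201C}" );    # 93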
Comment 24 Phil Ringnalda 2024-11-14 23:15:03 UTC
Sounds like this is fixed on every active branch, and the work continues in bug 35104.