Description
Martin Renvoize (ashimema)
2023-10-19 09:36:51 UTC
When I was writing bug 34549, I thought a little bit about how this might be done: whether it was better to do it with Javascript on the page, an AJAX call, or on page submission server side...

But first, a test. I change the Library of Congress Z39.50 target from utf8 to "ISO_8859-1" (note: changing to MARC-8 didn't seem to make a difference), and then I search "bibliothecaire" as the Title. I quickly notice obvious encoding problems like "Fonction peÌdagogique du documentaliste-bibliotheÌcaire".

So I go to import that one... and actually I don't have any problems. I get a record with the following title:

"Fonction peÌdagogique du documentaliste-bibliotheÌcaire : journeÌe acadeÌmique des documentalistes-bibliotheÌcaires, le 7 feÌvrier 1977"

I suppose it's because Ì (and the invisible byte that follows it) are valid UTF-8 characters.

If I change the Z39.50 target back to "utf8" then I get:

"Fonction pédagogique du documentaliste-bibliothécaire :"

I'm not sure why é would be interpreted as Ì though. If I download as MARCXML, I see the title is "Fonction pédagogique du documentaliste-bibliothécaire :". I'm not confident how many conversions are happening, or at which points, but this still looks odd...

As per my comment at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=34549#c19, technically the problem we're trying to solve in bug 34549 is preventing the saving of characters that cannot be rendered in XML.

Picking up on encoding problems is a lot trickier, because bytes can be valid in multiple different encodings: the machine can still output a valid character, so it doesn't realize that the source data is encoded incorrectly.
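To make that ambiguity concrete, here is a small Perl sketch (an illustration for this report, not code from any patch): the UTF-8 bytes for a decomposed "é" decode as Latin-1 without any complaint, and produce exactly the "eÌ" mojibake seen in the search results above.

    use strict;
    use warnings;
    use Encode qw( encode decode );

    # Decomposed "é": "e" followed by U+0301 COMBINING ACUTE ACCENT
    my $nfd = "e\x{0301}";

    # Its UTF-8 encoding is the three bytes 0x65 0xCC 0x81
    my $bytes = encode( 'UTF-8', $nfd );

    # Decoding those same bytes as Latin-1 raises no error at all,
    # because every possible byte value is a valid Latin-1 character:
    # 0xCC is "Ì" and 0x81 is an invisible C1 control character.
    my $latin1 = decode( 'ISO-8859-1', $bytes );
    print $latin1, "\n";    # prints "eÌ" plus an unprintable character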
Created attachment 157625 [details]
Text file containing control characters
I've written a Javascript-based check which *mostly* gets us there. It checks the input[text] and textarea fields for bytes that aren't allowed in XML, and it raises an alert if it finds any. I haven't figured out a way to get the "Go to field" link to work though, as I need the tab where the HTML element is found, but we aren't recording that data anywhere at the moment. We have a "tabindex", but it seems hard-coded to 0 or 1 for some reason...

I would attach my patch here on Bugzilla, but "git-bz" isn't working for me right now... I've run out of steam on this one anyway, but hopefully I can share my code and someone else can take it away...

Created attachment 157627 [details] [review]
Bug 34549: Alert when inserting text invalid in XML into bib record

git-bz is still not working for me, so I just used "git format-patch -1 HEAD" and uploaded manually...

Created attachment 157628 [details] [review]
Bug 35104: Alert when inserting text invalid in XML into bib record

This change adds a Javascript-based alert when a user tries to save data via the bib record editor that is invalid in XML.

Test plan:
0. Apply patch
1. Go to http://localhost:8081/cgi-bin/koha/cataloguing/addbiblio.pl?frameworkcode=
2. Copy the data from "Text file containing control characters"
3. Paste the data into 245$a and 500$a
4. Click "Save"
5. Note the alerts at the top of the web page and that the record does not save
6. Resolve the problems
7. Click "Save"
8. Note that the record saves

Created attachment 157642 [details] [review]
Bug 35104: Alert when inserting text invalid in XML into bib record

[same commit message and test plan as attachment 157628]

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>

This works really nicely; as such, I'm signing off. Two little points, however:

1) I wonder if we could/should merge these errors into the existing validation alert, rather than having its own alert box?

2) I did consider whether we should go belt and braces and catch this on the server too... however, I think this is a great improvement even without that.

(In reply to David Cook from comment #3)
> As per my comment at
> https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=34549#c19
> technically the problem we're trying to solve in bug 34549 is preventing the
> saving of characters that cannot be rendered in XML.
>
> Picking up on encoding problems is a lot trickier because bytes can be valid
> in multiple different encodings. The machine can still output a valid
> character, so it doesn't realize that the source data is encoded incorrectly.

You're totally right there... it's really hard to spot encoding issues because of this :(

While I appreciate the sign-off, it's not quite ready yet. It doesn't currently calculate the tab for the "Go to field" link. It only works by accident for things like the 245, due to the "tabindex" weirdness.
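For reference, the character test behind a check like this is small. The patch does it in JavaScript in the browser; the Perl sketch below (an assumption about the patch's logic, not the patch itself) uses the XML 1.0 "Char" production, which is the same rule a server-side check would need.

    use strict;
    use warnings;

    # Anything outside the XML 1.0 "Char" production:
    # Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD]
    #        | [#x10000-#x10FFFF]
    my $not_xml_char =
        qr/[^\x09\x0A\x0D\x20-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]/;

    sub find_invalid_xml_chars {
        my ($text) = @_;
        my @found;
        while ( $text =~ /($not_xml_char)/g ) {
            push @found, sprintf 'U+%04X at offset %d', ord($1), pos($text) - 1;
        }
        return @found;
    }

    # A field value containing an STX (U+0002) control character:
    print "$_\n" for find_invalid_xml_chars("like\x{02}minded people");
    # prints: U+0002 at offset 4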
(In reply to Martin Renvoize from comment #10)
> This works really nicely, as such I'm signing off...
>
> Two little points however..
>
> 1) I wonder if we could/should merge these errors into the existing
> validation alert rather than having its own alert box?

That was my original intention, but it would involve more refactoring. Could be worth doing though.

> 2) I did consider whether we should go belt & braces and catch on the server
> too.. however I think this is a great improvement without that anyway.

I was pondering that too. Probably worth doing.

--

I had an interesting experience that I thought I'd share here, since it's relevant. One of my librarians was copying and pasting text from a PDF into Koha. When they did it, it generated a broken MARC record. When I did it, it worked fine. That's when I discovered that Chrome, Firefox, and Edge all treat PDF text differently.

Consider the phrase "like-minded people", which is actually broken across two lines at the hyphen in the PDF:

- If you copy from Chrome, it removes the hyphen, so it becomes "likeminded people".
- If you copy from Firefox, it copies the hyphen and a line break, so it becomes something like "like-\nminded people".
- If you copy from Edge, it mangles the hyphen into a "Start of text" (STX) control character, which will break Koha.

--

While we work through our official solution, I came up with a little Javascript function which runs during the "paste" event in the MARC editor. It displays a "confirm" box which contains an explanatory warning, offers a tip on understanding the problem, and then provides an option to try to "fix" the record (by stripping out the bad characters). I'm just running that in their "IntranetUserJS" for now, but I thought I'd share this information.

I think having a warning at "save time" is very useful, especially for imported records. This "on paste" warning might also be useful though, since it fires at the moment the data is being entered, so it might be easier for them to notice the problem.

--

In any case, it was great to find an example from the wild to test a fix against...

Wow, that's an impressive find. Man, this stuff ends up in fun "exciting" places.

(In reply to Martin Renvoize from comment #15)
> Wow, that's an impressive find. Man this stuff ends up in fun "exciting"
> places.

Thanks! I did feel pretty good working that one out!

During this process, I've been thinking there might be an alternative to bug 34549 as well. Instead of erasing the invalid characters, surely we could just escape them. So I'm going to use my discovery from https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=35104#c14 to see if I can do an alternate version of bug 34549 where we escape the bad characters using numeric character references or an entity. So the STX character from that example would be converted to something like &#2; instead of just erased.

Of course, it would be good to try it out with a variety of examples, and not just this sort of data corruption example. But maybe it's also one of those ones where we can just do our best and see how it evolves over time...

(In reply to David Cook from comment #16)
> So the STX character from that example would be converted to something like
> &#2; instead of just erased.

I just tried a client-side replace where the STX character gets converted into &#2;, but it looks like it then gets converted into &amp;#2; on output.

Something I find interesting is that in the database we'll store "David’s awesome record" but then the MARCXML export to the browser will encode the UTF-8 encoded character ’ as ’

I'm going to poke around in this a bit more...
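That double-escape is easy to reproduce. Here is a sketch (the record content is invented; this is an illustration, not code from the attached patches) of why pre-escaping on the client cannot survive serialisation while MARC::File::XML escapes ampersands on output:

    use strict;
    use warnings;
    use MARC::Record;
    use MARC::Field;
    use MARC::File::XML ( BinaryEncoding => 'utf8' );

    my $record = MARC::Record->new();

    # Pretend the client already replaced the STX with its numeric
    # character reference, so the subfield holds the literal text "&#2;"
    $record->append_fields(
        MARC::Field->new( '245', '0', '0', a => 'like&#2;minded people' )
    );

    # as_xml() escapes the ampersand itself during serialisation, so the
    # XML contains "like&amp;#2;minded people": the reference is
    # destroyed rather than preserved.
    print $record->as_xml();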
(In reply to David Cook from comment #17)
> I'm going to poke around in this a bit more...

TransformHtmlToMarc doesn't seem to affect it...

If I do $record->as_formatted then I see: like&#2;minded

If I do $record->as_xml then I see: like&amp;#2;minded

Looking at https://metacpan.org/dist/MARC-File-XML/source/lib/MARC/File/XML.pm#L378 there is an escape function that escapes ampersands and angle brackets.

In theory, maybe MARC::File::XML should escape any invalid characters using character references, or remove them, since they're invalid. But MARC::File::XML's escaping of ampersands means it's impossible for us to pre-escape any invalid characters ourselves.

It feels like MARC::File::XML is essentially holding us hostage. We need to clean our input data (in whatever format) before it reaches MARC::File::XML, which seems a bit silly, since it's the XML format which has the restrictions...

That being said... the XML 1.0 spec is pretty forgiving. After review, it really only excludes most of the ASCII control characters (everything below U+0020 except tab, line feed, and carriage return), the Unicode surrogates, U+FFFE, and U+FFFF. That's a really small number of characters, and none of them are printable.

(In reply to David Cook from comment #18)
> That being said... the XML 1.0 spec is pretty forgiving. After review, it
> really only excludes most of the ASCII control characters, the Unicode
> surrogates, U+FFFE, and U+FFFF. That's a really small number of characters,
> and none of them are printable.

After review, for a UTF-8 encoded document, bug 34549 would only strip non-printable characters. I might have another crack at trying to get a Latin-1 encoded document into Koha... as I know that I've had Latin-1 encoded data in Koha before (although it's very possible that it came through side-loading using non-Koha tools).

--

We've got an interesting assortment of fixes at the moment...

If you already have a record with invalid characters, you can open it using bug 34014, which will scrub your record clean within the editor; but it will also show you the parser errors, so you could go and repair your record, in theory.

With bug 34549, you can try to save invalid characters, and it will silently wipe them out.

With bug 35104, I've been looking at a client-side warning, but it doesn't tell you exactly which characters are a problem.

Maybe we'd be better off re-using the mechanism from bug 34014 here in bug 35104. That would remove the need for a client-side check, it would be more consistent, and then we could undo bug 34549...

Created attachment 159615 [details] [review]
Bug 35104: [Alternative] Throw exception on store of invalid marcxml

This patch adds an exception to ModBiblioMarc to prevent erroneous marcxml being stored into the database. We also remove the cleanup of such characters from TransformHtmlToMarc, as introduced by bug 34549.

This should result in there being no way, outside of direct database manipulation, of getting forbidden characters into our marcxml store.

We'll need to work through the codebase to catch the exception thrown, to give the end user the opportunity to act on the error and fix the data at source.

I think we could perhaps go a level lower and put this exception into Koha::Biblio::Metadata->store instead? We'll also need to account for the exception in various places with a try/catch block, as detailed in the commit message... but I wanted to start getting feedback on the proposed approach before committing to continuing down this path.
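As a sketch of the shape of that approach (this is not the attached patch; it assumes the Koha::Exceptions::Metadata::Invalid class that Koha already uses when decoding metadata fails, plus the usual Koha::Object store pattern):

    package Koha::Biblio::Metadata;    # sketch only

    use Modern::Perl;
    use MARC::Record;
    use MARC::File::XML;
    use Koha::Exceptions::Metadata;

    sub store {
        my ($self) = @_;

        # Refuse to save marcxml that MARC::Record cannot round-trip;
        # forbidden characters make the parse blow up here rather than
        # on some later read.
        if ( $self->format eq 'marcxml' ) {
            eval {
                MARC::Record::new_from_xml( $self->metadata, 'UTF-8',
                    $self->schema );
            };
            if ($@) {
                Koha::Exceptions::Metadata::Invalid->throw(
                    id             => $self->id,
                    format         => $self->format,
                    schema         => $self->schema,
                    decoding_error => "$@",
                );
            }
        }

        return $self->SUPER::store;
    }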
Created attachment 159620 [details] [review]
Bug 35104: [Alternative] Throw exception on store of invalid marcxml

This patch adds an exception to Koha::Biblio::Metadata->store to prevent erroneous characters in marcxml being stored into the database. We also remove the cleanup of such characters from TransformHtmlToMarc, as introduced by bug 34549.

This should result in there being no way, outside of direct database manipulation, of getting forbidden characters into our marcxml store.

We'll need to work through the codebase to catch the exception thrown, to give the end user the opportunity to act on the error and fix the data at source.

Created attachment 159621 [details] [review]
Bug 35104: Initial attempt at catching the failure on new records

Whilst I can see the warning output from store, I can't seem to actually catch any exceptions... confused.

Thanks for working on this, Martin. I'll try to take a look in a bit.

I've applied the last 2 patches and restarted Koha...

Bug 35104: [Alternative] Throw exception on store of invalid marcxml
Bug 35104: Initial attempt at catching the failure on new records

And then I added the text from "Text file containing control characters" into a test record.

--

I'm getting an error trace page. It looks like you made a little mistake when catching the exception, and it's causing you to rethrow it. I'll fix that up in a sec...

--

However, with the fix, we're still having a problem rendering the page. But I'll let folk take it from there...

Created attachment 159630 [details] [review]
Bug 35104: Fix typo when catching exception

I have to run and I'm away tomorrow morning, but hopefully this gets you a bit further.

Awesome... I knew it was something silly when catching the exception! That gives me enough to get going again :), thanks David.

(In reply to Martin Renvoize from comment #29)
> Awesome.. I knew it was something silly when catching the exception!
>
> That gives me enough to get going again :), thanks David

Yay teamwork! :D
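For anyone following along, the gotcha when catching Koha exceptions is easy to hit: inside a Try::Tiny catch block the exception is in $_, not $@, and anything unexpected must be rethrown. A sketch of the working pattern (the error-collecting structure here is hypothetical, not the attached patch):

    use Modern::Perl;
    use Try::Tiny;
    use Scalar::Util qw( blessed );
    use C4::Biblio qw( ModBiblio );

    sub save_biblio {
        my ( $record, $biblionumber, $frameworkcode ) = @_;

        my @errors;
        try {
            ModBiblio( $record, $biblionumber, $frameworkcode );
        }
        catch {
            # Try::Tiny puts the exception in $_ here, not $@
            if ( blessed($_) && $_->isa('Koha::Exceptions::Metadata::Invalid') ) {
                push @errors, { code => 'invalid_metadata', message => "$_" };
            }
            else {
                die $_;    # not ours: rethrow
            }
        };
        return @errors;    # for the template to display
    }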
Created attachment 159810 [details] [review]
Bug 35104: [Alternative] Throw exception on store of invalid marcxml

[same commit message as attachment 159620]

Created attachment 159811 [details] [review]
Bug 35104: Catch the failure on new and edited records

We now catch the error and display it to the end user, so they can fix the issues before attempting to save again.

Created attachment 159812 [details] [review]
Bug 35104: Throw exception on store of invalid marcxml

[same commit message as attachment 159620]

Created attachment 159813 [details] [review]
Bug 35104: Catch the failure on new and edited records

[same commit message as attachment 159811]

IMO this is too low level; we should not call MARC::Record::new_from_xml every time we store.

It will work, of course, but what about performance? What if I do want invalid marcxml? :D

(In reply to Jonathan Druart from comment #35)
> IMO this is too low level; we should not call MARC::Record::new_from_xml
> every time we store.
>
> It will work, of course, but what about performance?

I think that we should validate each time we store. However... maybe we could use XML::LibXML directly instead of MARC::Record to reduce some overhead. I don't know if it makes an actual difference in terms of performance though. And the nice thing about using MARC::Record is that we know whether the record will break on subsequent usage.

Perhaps more importantly, these patches don't add error handling for every instance of Koha::Biblio::Metadata->store(). We might be breaking staged MARC imports here and not realizing it. So maybe we should add a Koha::Biblio::Metadata->validate() and just call it from the controller script for now?

> What if I do want invalid marcxml? :D

:P
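A sketch of that suggestion (a hypothetical validate() helper, not an attached patch): XML::LibXML checks well-formedness only, which is the cheaper test, whereas the MARC::Record round-trip additionally proves the record will survive later use.

    package Koha::Biblio::Metadata;    # sketch only

    use Modern::Perl;
    use XML::LibXML;
    use Koha::Exceptions::Metadata;

    # Hypothetical opt-in helper: controllers that are ready to handle
    # the exception call it before store(); bulk paths stay untouched.
    sub validate {
        my ($self) = @_;

        # load_xml() dies on anything that is not well-formed XML,
        # including forbidden control characters.
        eval { XML::LibXML->load_xml( string => $self->metadata ) };
        if ($@) {
            Koha::Exceptions::Metadata::Invalid->throw(
                id             => $self->id,
                format         => $self->format,
                schema         => $self->schema,
                decoding_error => "$@",
            );
        }
        return $self;
    }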
Created attachment 166431 [details] [review]
Bug 35104: Catch the failure on new and edited records

[same commit message as attachment 159811]

Signed-off-by: baptiste <baptiste.bayche@inlibro.com>

Thanks for the follow-up here, Baptiste... it's great to see someone is still interested in this. I'll continue to add such catches elsewhere, as it seems people do actually want to help pursue it.

We should have at least one unit test for this. I was thinking that maybe we should move the code into a validate() function and call that from store(), but it doesn't matter too much, as we can unit test it in store() just fine too. Martin, if you want to add that unit test, I could look at QAing this. Invalid metadata is a pet peeve of mine, so I'd love to see this (and related changes) get in.

Getting sha1 errors. Can you post a branch on GitHub?

I too get the sha1 error:

Applying: Bug 35104: Throw exception on store of invalid marcxml
Applying: Bug 35104: Catch the failure on new and edited records
error: sha1 information is lacking or useless (C4/Biblio.pm).
error: could not build fake ancestor
Patch failed at 0001 Bug 35104: Catch the failure on new and edited records

--

Interestingly, the first patch applied but not the second one. If you dig into it, you can see that the first index referenced here doesn't exist in the upstream git:

diff --git a/C4/Biblio.pm b/C4/Biblio.pm
index 0d6feb8aed..eb79c397a5 100644

Since there is over 3 hours between the commits, the second one must have been based off a local index state for C4/Biblio.pm. Funnily enough, the other file changed does have an index from the upstream: a11711afd3

It just needs a rebase.

Created attachment 173145 [details] [review]
Bug 35104: Throw exception on store of invalid marcxml

[same commit message as attachment 159620]

Signed-off-by: baptiste <baptiste.bayche@inlibro.com>

Created attachment 173146 [details] [review]
Bug 35104: Catch the failure on new and edited records

[same commit message as attachment 159811]

Signed-off-by: baptiste <baptiste.bayche@inlibro.com>

Rebased. I started working on the other cases but ran out of time... there are a lot of calls to ModBiblio in Koha that would benefit from try/catch treatment after this, either propagating the errors to screen or gracefully skipping over them.
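For the bulk callers (staged imports, batch modification jobs and the like), the treatment would presumably look less like "display and stop" and more like "report and carry on". A hypothetical sketch of that shape (@jobs and its structure are invented for illustration):

    use Modern::Perl;
    use Try::Tiny;
    use Scalar::Util qw( blessed );
    use C4::Biblio qw( ModBiblio );

    # One { record, biblionumber, frameworkcode } entry per record
    my @jobs;

    my @skipped;
    for my $job (@jobs) {
        try {
            ModBiblio( $job->{record}, $job->{biblionumber},
                $job->{frameworkcode} );
        }
        catch {
            die $_
                unless blessed($_)
                && $_->isa('Koha::Exceptions::Metadata::Invalid');
            # Skip the bad record and keep going with the batch
            push @skipped, $job->{biblionumber};
        };
    }
    warn "Skipped records with invalid metadata: @skipped" if @skipped;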