Bug 24123 - bulkmarcimport.pl doesn't support UTF-8 encoded MARCXML records
Summary: bulkmarcimport.pl doesn't support UTF-8 encoded MARCXML records
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching - Elasticsearch
Version: master
Hardware: All All
Importance: P5 - low major
Assignee: Jonathan Druart
QA Contact: Tomás Cohen Arazi
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-11-26 14:08 UTC by Joonas Kylmälä
Modified: 2021-06-14 21:29 UTC
CC List: 8 users

See Also:
Change sponsored?: ---
Patch complexity: Trivial patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
20.05.00, 19.11.03, 19.05.08


Attachments
Bug 24123: Fix import of UTF-8 encoded MARC21 MARCXML using bulkmarcimport (elastic only) (2.54 KB, patch)
2019-12-10 14:32 UTC, Jonathan Druart
Bug 24123: Fix import of UTF-8 encoded MARC21 MARCXML using bulkmarcimport (elastic only) (2.59 KB, patch)
2020-01-07 20:50 UTC, Michal Denar
Bug 24123: Fix import of UTF-8 encoded MARC21 MARCXML using bulkmarcimport (elastic only) (2.64 KB, patch)
2020-01-09 19:24 UTC, Tomás Cohen Arazi

Description Joonas Kylmälä 2019-11-26 14:08:58 UTC
Importing a UTF-8 encoded MARC21 MARCXML record whose leader has the value 'a' (Unicode) in position 09 causes non-ASCII characters like 'ä' to be replaced with the character '�'.

The command being run is

> misc/migration_tools/bulkmarcimport.pl -biblios -file record.marcxml -m=MARCXML -v -v


I tracked this problem down to the line

> $MARC::File::XML::_load_args{BinaryEncoding} = 'utf-8';

If you add that line again as the first thing in the RECORD label block further down in bulkmarcimport.pl, the problem is fixed.
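
For illustration, a minimal sketch of that workaround (record.marcxml as in the command above; the real RECORD loop in bulkmarcimport.pl does much more per record):

    use MARC::Batch;
    my $batch = MARC::Batch->new( 'XML', 'record.marcxml' );

    RECORD: while ( my $record = $batch->next ) {
        # Re-assert the XML encoding on every iteration, in case a module
        # loaded in the meantime has reset MARC::File::XML's settings.
        $MARC::File::XML::_load_args{BinaryEncoding} = 'utf-8';

        # ... rest of the per-record import logic ...
    }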
Comment 1 Jonathan Druart 2019-11-26 15:25:26 UTC
The problem only appears with SearchEngine=Elastic: Koha::SearchEngine::Search->new uses a require statement to load the correct Search module.
This is done l.257 of bulkmarcimport.pl:
  257 my $searcher = Koha::SearchEngine::Search->new

Koha::SearchEngine::Elasticsearch::Search will `use MARC::File::XML`, and so resets the arguments set earlier:
  216     $MARC::File::XML::_load_args{BinaryEncoding} = 'utf-8';

  220     $MARC::File::XML::_load_args{RecordFormat} = $recordformat;

An easy (but dirty) fix could be to move the declaration of my $searcher earlier in the script.

The tricky (but correct) fix would be to remove the long-standing "ugly hack follows" comment.
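
The underlying Perl behaviour is that `use Module` always calls Module->import, even when the module is already loaded. A self-contained sketch, with a hypothetical Formatter package standing in for MARC::File::XML:

    use strict;
    use warnings;

    # Stand-in for MARC::File::XML: import() resets the package-level
    # load arguments to their defaults every time it runs.
    package Formatter;
    our %_load_args;
    sub import {
        my ( $class, %args ) = @_;
        %_load_args = ( BinaryEncoding => $args{BinaryEncoding} || 'MARC-8' );
    }

    package main;

    Formatter->import();                               # initial "use Formatter;"
    $Formatter::_load_args{BinaryEncoding} = 'utf-8';  # script overrides encoding

    # A module require'd later also says "use Formatter;" -- require is a
    # no-op for a loaded module, but import() still runs and resets the args.
    Formatter->import();

    print $Formatter::_load_args{BinaryEncoding}, "\n"; # prints "MARC-8", not "utf-8"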
Comment 2 Jonathan Druart 2019-11-26 15:25:58 UTC
Upping severity.
Comment 3 Jonathan Druart 2019-12-10 14:32:18 UTC
Created attachment 96160 [details] [review]
Bug 24123: Fix import of UTF-8 encoded MARC21 MARCXML using bulkmarcimport (elastic only)

If Elasticsearch is used as the search engine, bulkmarcimport.pl will not
correctly handle UTF-8 encoded MARCXML.

Koha::SearchEngine::Search->new uses a require statement to load the correct Search module.
This is done l.257 of bulkmarcimport.pl:
  257 my $searcher = Koha::SearchEngine::Search->new

Koha::SearchEngine::Elasticsearch::Search will `use MARC::File::XML`, and so resets the arguments set earlier:
  216     $MARC::File::XML::_load_args{BinaryEncoding} = 'utf-8';

  220     $MARC::File::XML::_load_args{RecordFormat} = $recordformat;

An easy (but dirty) fix could be to move the declaration of my $searcher earlier in the script.
The tricky (but correct) fix would be to remove the long-standing "ugly hack follows" comment.

This patch is the easy (and dirty) fix.

Test plan:
Use the command-line tool to import MARCXML records that contain Unicode characters into Koha.

Something like `misc/migration_tools/bulkmarcimport.pl -biblios -file record.marcxml -m=MARCXML`

Without this patch, you will notice that Unicode characters are not displayed correctly.
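
In other words, the patch just reorders the two steps. A simplified sketch of the idea, not the literal diff (it assumes a Koha environment; $recordformat stands in for the value the script computes from the MARC flavour):

    use Koha::SearchEngine::Search;

    # Instantiate the searcher FIRST: this triggers the lazy require of
    # Koha::SearchEngine::Elasticsearch::Search, whose "use MARC::File::XML"
    # resets the package-level load arguments.
    my $searcher = Koha::SearchEngine::Search->new;

    # Only now is it safe to set the load arguments; nothing will re-run
    # MARC::File::XML's import() behind our back afterwards.
    my $recordformat = 'MARC21';    # placeholder for the computed flavour
    $MARC::File::XML::_load_args{BinaryEncoding} = 'utf-8';
    $MARC::File::XML::_load_args{RecordFormat}   = $recordformat;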
Comment 4 Michal Denar 2020-01-07 20:50:27 UTC
Created attachment 96967 [details] [review]
Bug 24123: Fix import of UTF-8 encoded MARC21 MARCXML using bulkmarcimport (elastic only)

If Elasticsearch is used as the search engine, bulkmarcimport.pl will not
correctly handle UTF-8 encoded MARCXML.

Koha::SearchEngine::Search->new uses a require statement to load the correct Search module.
This is done l.257 of bulkmarcimport.pl:
  257 my $searcher = Koha::SearchEngine::Search->new

Koha::SearchEngine::Elasticsearch::Search will `use MARC::File::XML`, and so resets the arguments set earlier:
  216     $MARC::File::XML::_load_args{BinaryEncoding} = 'utf-8';

  220     $MARC::File::XML::_load_args{RecordFormat} = $recordformat;

An easy (but dirty) fix could be to move the declaration of my $searcher earlier in the script.
The tricky (but correct) fix would be to remove the long-standing "ugly hack follows" comment.

This patch is the easy (and dirty) fix.

Test plan:
Use the command-line tool to import MARCXML records that contain Unicode characters into Koha.

Something like `misc/migration_tools/bulkmarcimport.pl -biblios -file record.marcxml -m=MARCXML`

Without this patch, you will notice that Unicode characters are not displayed correctly.

Signed-off-by: Michal Denar <black23@gmail.com>
Nice work, Jonathan
Comment 5 Tomás Cohen Arazi 2020-01-09 19:24:25 UTC
Created attachment 97138 [details] [review]
Bug 24123: Fix import of UTF-8 encoded MARC21 MARCXML using bulkmarcimport (elastic only)

If Elasticsearch is used as the search engine, bulkmarcimport.pl will not
correctly handle UTF-8 encoded MARCXML.

Koha::SearchEngine::Search->new uses a require statement to load the correct Search module.
This is done l.257 of bulkmarcimport.pl:
  257 my $searcher = Koha::SearchEngine::Search->new

Koha::SearchEngine::Elasticsearch::Search will `use MARC::File::XML`, and so resets the arguments set earlier:
  216     $MARC::File::XML::_load_args{BinaryEncoding} = 'utf-8';

  220     $MARC::File::XML::_load_args{RecordFormat} = $recordformat;

An easy (but dirty) fix could be to move the declaration of my $searcher earlier in the script.
The tricky (but correct) fix would be to remove the long-standing "ugly hack follows" comment.

This patch is the easy (and dirty) fix.

Test plan:
Use the command-line tool to import MARCXML records that contain Unicode characters into Koha.

Something like `misc/migration_tools/bulkmarcimport.pl -biblios -file record.marcxml -m=MARCXML`

Without this patch, you will notice that Unicode characters are not displayed correctly.

Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 6 Martin Renvoize 2020-01-10 08:27:27 UTC
Happy with this pragmatic approach to a fix.

It has, however, bugged me for a long time that we have both bulkmarcimport and the stage + commit command-line scripts doing what is, in reality, the same process. I'd love to see that tidied up some time.

Also, I'm really not sure why we need MARC::Batch at all. My understanding is that its intended use case is multiple files, but we're only loading a single file with bulkmarcimport, aren't we?

Either way, pushing :)
Comment 7 Martin Renvoize 2020-01-10 08:39:04 UTC
Nice work everyone!

Pushed to master for 20.05
Comment 8 Katrin Fischer 2020-01-10 08:44:12 UTC
(In reply to Martin Renvoize from comment #6)
> Happy with this pragmatic approach to a fix.
> 
> It has, however, bugged me for a long time that we have both bulkmarcimport
> and the stage + commit command-line scripts doing what is, in reality, the
> same process. I'd love to see that tidied up some time.
> 
> Also, I'm really not sure why we need MARC::Batch at all. My understanding
> is that its intended use case is multiple files, but we're only loading a
> single file with bulkmarcimport, aren't we?
> 
> Either way, pushing :)

While both import, they do it quite differently. bulkmarcimport is faster, as it does not use the staging tables but imports directly into the database. Matching works differently (it does not use the matching rules system), and the import can't be undone.

The stage and commit scripts could probably be combined into one, but as they are, they mirror the fact that these are also two separate tools in Koha. They use the matching rules and the import tables, which can create quite a lot of data in terms of database size, as everything is stored twice after importing; and cleaning the batch, I think, doesn't really make the database shrink. But you can undo these imports and use complex matching rules. When you want 'versioning' for your records, it's also helpful to keep that data.

We noticed that loading big files via stage/commit created quite a big load on the servers. I wonder if, instead of multiple files, a possible use could be to break big files up and work on them sequentially?
Comment 9 Jonathan Druart 2020-01-10 09:02:14 UTC
(In reply to Martin Renvoize from comment #6)
> Also, I'm really not sure why we need MARC::Batch at all. My understanding
> is that its intended use case is multiple files, but we're only loading a
> single file with bulkmarcimport, aren't we?

MARC::Batch is used to loop over several MARC records from a given file. I guess we could replace it with RecordsFromISO2709File and RecordsFromMARCXMLFile, however.
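
For reference, a minimal sketch of the MARC::Batch pattern in question: one iterator over all records in a file ('record.marcxml' is a placeholder; new() also accepts a list of files, which is the multi-file use case mentioned above):

    use MARC::Batch;
    use MARC::File::XML ( BinaryEncoding => 'utf-8', RecordFormat => 'MARC21' );

    my $batch = MARC::Batch->new( 'XML', 'record.marcxml' );
    $batch->warnings_off;    # collect per-record warnings instead of printing them
    $batch->strict_off;      # skip records that fail to parse instead of dying

    while ( my $record = $batch->next ) {
        print $record->title, "\n";    # placeholder: process each record
    }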
Comment 10 Joy Nelson 2020-01-30 23:33:15 UTC
Pushed to 19.11.x branch for 19.11.03
Comment 11 Lucas Gass 2020-02-05 21:37:15 UTC
Backported to 19.05.x for 19.05.08