After updating to 24.05 the script bulkmarcimport.pl stops with a simple "Killed" when importing large sets of records (in my case 120k biblios).

My server settings: 1 CPU, 4 GB RAM

This is the console output:

perl /usr/share/koha/bin/migration_tools/bulkmarcimport.pl -d -b -commit=100 -m=MARCXML -file=/data/btw_20240528.xml
WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version.
Deleting biblios
Killed
Is there maybe anything in the other logs that could give us some more information on the issue?
Are you sure it's a regression and wouldn't happen with the older version of the script? How much swap space does your installation have?

Generally, back when I ran a Koha VM with very low RAM (like 2 or 4 GB) on some older version like 23.05, I'd have issues with various cron tasks crashing due to OOM (out of memory), until I gave it a significant amount of swap space (like 16 or 24 GB), which I think was even a recommendation somewhere.

Just wondering if it could simply be a matter of that rather than a regression per se (though it might be both!).
Hi Michał

(In reply to Michał from comment #2)
> Are you sure it's a regression and wouldn't happen with the older version of
> the script?

I just checked: I ran bulkmarcimport around 20 times before on the same machine, with the same amount of records (~120k), on versions < 24.05. Every time it worked.

> How much swap space does your installation have?

4GB

> Generally, back when I ran a Koha VM with very low RAM (like 2 or 4 GB) on
> some older version like 23.05, I'd have issues with various cron tasks
> crashing due to OOM (out of memory), until I gave it a significant amount of
> swap space (like 16 or 24 GB), which I think was even a recommendation
> somewhere.
>
> Just wondering if it could simply be a matter of that rather than a
> regression per se (though it might be both!).

I also tried on ktd with 8GB RAM and it seems that Docker is crashing because of bulkmarcimport. I am trying to give it more resources.

I also gave the script a little extra output in both loops and it always stopped in the first while loop (https://git.koha-community.org/Koha-community/Koha/src/branch/main/misc/migration_tools/bulkmarcimport.pl#L317). Maybe that @marc_records array is causing my error.
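For reference, the pattern at that spot looks roughly like this (a simplified sketch, not the exact code from bulkmarcimport.pl; variable names are illustrative): all records are first pulled out of MARC::Batch into a Perl array, and only afterwards processed, so the entire file ends up in memory at once.

use MARC::Batch;

my $batch = MARC::Batch->new( 'XML', $input_marc_file );

# first loop: every record is kept in @marc_records until the file is exhausted
my @marc_records;
while ( my $marc_record = $batch->next() ) {
    push @marc_records, $marc_record;
}

# second loop: the actual import only starts once everything is in memory
foreach my $marc_record (@marc_records) {
    # ... add/update the biblio, commit every N records ...
}

With 120k biblios that would explain why the process gets OOM-killed before a single commit shows up in the output.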
I can confirm on ktd with 24.06:
- 8GB RAM, 4GB swap, 120k records -> Killed
- 12GB RAM, 4GB swap, 120k records -> Worked

No entries in the ktd Koha logs. As mentioned before, the script seems to hang in the first loop; there is no commit of records to be found in my output. Shall I link to my MARCXML for someone else to test?
I am having even bigger problems due to the MySQL warnings. I used koha-dump to back up Koha 23.11 on server 1. I installed the latest Koha 24.05 on server 2 and ran koha-restore. As it is a version upgrade, the web installer is expected.

When I go through the database upgrade step, I get errors and cannot proceed further: "WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version.".

I used "apt install --reinstall koha-common"; it seemed like the issue got fixed (with the same warning in the terminal), but the database does not appear to have been upgraded, as I am now getting "Error 500" on the Circulation and Fine Rules page and empty fields in the "Identity Provider" edit buttons, both of which worked perfectly fine in version 23.11.

I have checked the installer codebase, and it seems that whenever DBD::mysql emits a warning, Koha's error handling mechanism treats it as an error instead of a warning (line 441 of "/usr/share/koha/intranet/cgi-bin/installer/install.pl"). I believe the same error handling is used either throughout the Koha codebase or in the DBD::mysql package.

Possible fixes:
1. Change the error handling logic
2. Add "SET sql_notes=0" before every SQL query
3. Bundle a DBD::mysql Perl package with Koha that does not have this issue
(In reply to Aditya from comment #5)
> I am having even bigger problems due to the MySQL warnings. I used koha-dump
> to back up Koha 23.11 on server 1. I installed the latest Koha 24.05 on
> server 2 and ran koha-restore. As it is a version upgrade, the web installer
> is expected.
>
> When I go through the database upgrade step, I get errors and cannot proceed
> further: "WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in
> a future version.".
>
> I used "apt install --reinstall koha-common"; it seemed like the issue got
> fixed (with the same warning in the terminal), but the database does not
> appear to have been upgraded, as I am now getting "Error 500" on the
> Circulation and Fine Rules page and empty fields in the "Identity Provider"
> edit buttons, both of which worked perfectly fine in version 23.11.
>
> I have checked the installer codebase, and it seems that whenever DBD::mysql
> emits a warning, Koha's error handling mechanism treats it as an error
> instead of a warning (line 441 of
> "/usr/share/koha/intranet/cgi-bin/installer/install.pl"). I believe the same
> error handling is used either throughout the Koha codebase or in the
> DBD::mysql package.
>
> Possible fixes:
> 1. Change the error handling logic
> 2. Add "SET sql_notes=0" before every SQL query
> 3. Bundle a DBD::mysql Perl package with Koha that does not have this issue

Hi, this bug report is about the import script bulkmarcimport.pl. I think you might be experiencing different bugs, like bug 37533 (error 500 on circulation rules). Please check on the mailing list or in the Mattermost chat first if you are experiencing different issues like this. We will often be able to point you to the right bugs or solutions.
Created attachment 176388 [details] [review]
Bug 37020: Stream records from file instead of loading all into memory

One of the issues with bulkmarcimport.pl is that the script loads the entire record set into memory before processing those records. If we load and process the records one at a time, the memory needed will be minuscule.

Test Plan:
1) Import a very large record set, note the RAM consumption by the script
2) Apply this patch
3) Import the same record set, the RAM consumption should be reduced!
Created attachment 176425 [details] [review]
Bug 37020: Stream records from file instead of loading all into memory

One of the issues with bulkmarcimport.pl is that the script loads the entire record set into memory before processing those records. If we load and process the records one at a time, the memory needed will be minuscule.

Test Plan:
1) Import a very large record set, note the RAM consumption by the script
2) Apply this patch
3) Import the same record set, the RAM consumption should be reduced!

Signed-off-by: Magnus Enger <magnus@libriotech.no>
The patch reduces the RAM used when importing a large file.
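To illustrate the idea for other testers: the fix boils down to handling each record as soon as MARC::Batch returns it instead of collecting them all first. A minimal sketch of the streaming pattern (not the actual patch; process_record() is just a hypothetical placeholder for the existing import logic):

use MARC::Batch;

my $batch = MARC::Batch->new( 'XML', $input_marc_file );

my $count = 0;
while ( my $marc_record = $batch->next() ) {
    $count++;
    # add/update the biblio right away instead of stashing the record
    process_record($marc_record);
    print "$count records processed\n" if $count % 1000 == 0;
}

Memory usage then stays roughly constant regardless of how many records are in the file.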
Just tried to import 120k records via

kohadev-koha@kohadevbox:migration_tools(bug_37020)$ perl bulkmarcimport.pl -b -d --m=MARCXML --file=records.xml --commit=1000

At first it went as expected, but after 120k records it went on and on, and in the end I had to kill the process in order to get control over my machine again.

So I tried with a small file containing 1 record:

perl bulkmarcimport.pl -b -d --m=MARCXML --file=../../../thw/journal.xml --commit=1

Deleting biblios
.Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/MARC/File/XML.pm line 399, <GEN3> chunk 2.
3 MARC records done in 0.0895359516143799 seconds

So there seems to be an error in the loop, but I did not dig into it.
Hi Jan, please set to Failed QA (and don't feel bad)
@Kyle: I think it would be better to keep the tidy separate here; it's hard to see any changes?
(In reply to Katrin Fischer from comment #11)
> @Kyle: I think it would be better to keep the tidy separate here; it's hard
> to see any changes?

There isn't any tidying in this patch (that I can recall :). It really just involved moving most of the code from the lower loop into the upper loop (which does change the indentation, fwiw), so there is only one loop instead of two.
(In reply to Jan Kissig from comment #9)
> Just tried to import 120k records via

Jan, is there any chance you can attach your records file to this bug?
Created attachment 176653 [details]
MARCXML with 1 record

perl bulkmarcimport.pl -m=MARCXML -b --commit=1000 --file=path/to/bib-262.marcxml
(In reply to Kyle M Hall (khall) from comment #13)
> (In reply to Jan Kissig from comment #9)
> > Just tried to import 120k records via
>
> Jan, is there any chance you can attach your records file to this bug?

I will find a place to upload my 120k-record file. But for a start I have attached a single record. Can you import it and verify that the result is something like:

3 MARC records done in ...
This really highlights the shortcomings of the default patch files as done in Bugzilla. There should be a way to generate a separate git diff that ignores the whitespace changes completely; IDEs can do it, but there doesn't seem to be any popular way to convert an existing diff so that it skips displaying them... Even when viewing the diff with a better tool like Kompare, it's still not much easier to read.

So we can only ask for a complementary git diff -w (--ignore-all-space) attachment to make review easier, I guess (I would generate it myself, but I can't clone Koha on the internet connection I have today).
(In reply to Jan Kissig from comment #15)
> (In reply to Kyle M Hall (khall) from comment #13)
> > (In reply to Jan Kissig from comment #9)
> > > Just tried to import 120k records via
> >
> > Jan, is there any chance you can attach your records file to this bug?
>
> I will find a place to upload my 120k-record file. But for a start I have
> attached a single record. Can you import it and verify that the result is
> something like:
>
> 3 MARC records done in ...

I imported your attached file, and the result was indeed "3 MARC records done in 0.290374994277954 seconds".
(In reply to Kyle M Hall (khall) from comment #13)
> (In reply to Jan Kissig from comment #9)
> > Just tried to import 120k records via
>
> Jan, is there any chance you can attach your records file to this bug?

Hi Kyle, here are the records I tried. Modified to fit ktd.
https://nextcloud.th-wildau.de/nextcloud/index.php/s/fB3or6bXNX9fo9E
Created attachment 176921 [details] [review]
Bug 37020: [alternate] Changed script to use XML::Twig to reduce memory usage

In the original script, MARC::Batch->new('XML', ...) was holding onto large chunks of parsed data for MARCXML files, resulting in excessive memory usage even after the script tried to free references. A hacky approach was to manually clear the batch's internal structures, but Data::Dumper showed that MARC::Batch wasn't storing data in those specific fields. Hence, the hack offered no solution to the underlying caching.

By contrast, XML::Twig can stream XML elements one by one, calling a handler and then discarding the parsed chunk. Note that there is no guarantee this implementation works for non-XML files as it stands (although in theory it should). The focus is on the XML::Twig implementation for reference as a solution. Overall, batching seems to be eating up memory.

To test:
1. Run perl misc/migration_tools/bulkmarcimport.pl -m=MARCXML -b -d -v --commit=1000 --file=file_path_here on a large MARCXML/XML file (for example, 2 GB or greater).
2. On whatever machine or container it is run on, the script will likely cause an out-of-memory error and crash the environment.
3. Apply the patch, run "restart_all", and redo step 1. The script should use much less memory to import records from MARCXML/XML files.
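For reference, the streaming pattern described above looks roughly like this (a simplified sketch, not the patch itself; the handler key may need to be 'marc:record' and the namespace handling adjusted depending on the input file):

use XML::Twig;
use MARC::Record;
use MARC::File::XML;    # provides MARC::Record->new_from_xml

my $count = 0;
my $twig  = XML::Twig->new(
    twig_handlers => {
        'record' => sub {
            my ( $t, $elt ) = @_;
            # turn the single <record> element back into a MARC::Record
            my $record = MARC::Record->new_from_xml( $elt->sprint, 'UTF8', 'MARC21' );
            $count++;
            # ... add/update the biblio here ...
            $t->purge;    # drop everything parsed so far to keep memory flat
        },
    },
);
$twig->parsefile($input_marc_file);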
*** Bug 39537 has been marked as a duplicate of this bug. ***
I have a similar but slightly different solution (developed for importing 10 million authority records). Instead of using MARC::Batch, I use XML::LibXML::Reader, which also implements a memory-efficient pull parser. I guess XML::Twig works just as well, so I will (hopefully today) review this patch.
The patch "Bug 37020: Stream records from file instead of loading all into memory" basically "just" removes the first loop that pushed all (valid) records onto an array. Yes, this solves the memory problem, but still uses the IMO not very sane regex-based pull "parser" in MARC::Batch. But it's a rather small change that works. (even though the patch looks huge because it contains a lot of whitespace-only changes. use `git diff -w ...` to see the relevant changes.
Review of the patch "Bug 37020: [alternate] Changed script to use XML::Twig to reduce memory usage":

This patch changes a lot, mainly using XML::Twig to pull-parse the (huge) XML file instead of using the regex "parser" in MARC::Batch.

It is a bit hard to read because it adds some comments (which would have to be removed) and adds a bunch of functions before the main code (which IMO is bad style, but maybe OK for Koha?), also moving the helper function up in the code.

It proposes two different parsers, one removing the namespaces ('marc:record') and one not. Not sure why we need both.

I like that processing a record is moved into a function.

So I guess if this approach is taken, the code layout should be changed to have all the functions AFTER the main code, and we need to decide which of the two parser functions should be used (or we keep both and add a flag to choose?).

I like this approach better than the other one because it removes MARC::Batch (for reading XML).
In our script, we used XML::LibXML. I can provide an example patch, but before I spend that time, here's a quick preview of what the usage would look like:

use MARC::Record;
use MARC::File::XML;        # provides MARC::Record->new_from_xml
use XML::LibXML::Reader;    # exports XML_READER_TYPE_ELEMENT

open( my $fh, '<', $input_marc_file );
my $reader = XML::LibXML::Reader->new( IO => $fh );
while ( $reader->read ) {
    next unless $reader->nodeType == XML_READER_TYPE_ELEMENT;
    next unless $reader->name eq 'record';
    my $xml    = $reader->readOuterXml;
    my $record = MARC::Record->new_from_xml( $xml, 'UTF8', 'MARC21' );
}

I find that a bit easier to read than the XML::Twig way of installing a handler subref for 'record'.
BTW, here is a link to Bug 37478, which introduced the duplicate loop that caused the memory issues (and the --skip_bad_records option does not protect us from the problem, because the loop that pushes the records onto a Perl array runs whether it is set or not).
One potential killer argument against XML::Twig and in favor of XML::LibXML: Koha already comes with XML::LibXML, but not with XML::Twig, so we would need to introduce a new dependency.