Bug 37020 - bulkmarcimport gets killed after update to 24.05 when inserting large files
Summary: bulkmarcimport gets killed after update to 24.05 when inserting large files
Status: Signed Off
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: Main
Hardware: All
OS: Linux
Importance: P5 - low major
Assignee: Bugs List
QA Contact: Testopia
URL:
Keywords: RM_priority
Depends on: 29440
Blocks:
Reported: 2024-06-04 08:26 UTC by Jan Kissig
Modified: 2025-01-13 12:44 UTC
CC List: 9 users

See Also:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 37020: Stream records from file instead of loading all into memory (33.54 KB, patch)
2025-01-10 19:29 UTC, Kyle M Hall (khall)
Bug 37020: Stream records from file instead of loading all into memory (33.65 KB, patch)
2025-01-13 08:10 UTC, Magnus Enger

Description Jan Kissig 2024-06-04 08:26:52 UTC
After updating to 24.05, the script bulkmarcimport.pl stops with a simple "Killed" when importing large sets of records (in my case, 120k biblios).

My server settings: 1 CPU, 4 GB RAM

This is the console output:

perl /usr/share/koha/bin/migration_tools/bulkmarcimport.pl -d -b -commit=100 -m=MARCXML -file=/data/btw_20240528.xml
WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version.
Deleting biblios

Killed
Comment 1 Katrin Fischer 2024-06-17 10:43:04 UTC
Is there maybe anything in the other logs that could give us some more information on the issue?
Comment 2 Michał 2024-06-17 11:42:57 UTC
Are you sure it's a regression and wouldn't happen with the older version of the script? How much swap space does your installation have?

Generally back when I ran Koha VM with very low RAM (like 2 or 4 GB), some older version like 23.05, I'd have issues with various cron tasks crashing due to OOM (out-of-memory), up until I gave it some significant swap space (like 16 or 24 GB), which I think was even also a recommendation somewhere.

Just thinking if it could perchance be simply a matter of that rather than a regression per se (though it might be both!).
Comment 3 Jan Kissig 2024-06-18 11:53:33 UTC
Hi Michał

(In reply to Michał from comment #2)
> Are you sure it's a regression and wouldn't happen with the older version of
> the script? 

I just checked it: I ran bulkmarcimport around 20 times before on the same machine, with the same amount of records (~120k), on versions < 24.05. Every time it worked.

> How much swap space does your installation have?

4GB

> Generally back when I ran Koha VM with very low RAM (like 2 or 4 GB), some
> older version like 23.05, I'd have issues with various cron tasks crashing
> due to OOM (out-of-memory), up until I gave it some significant swap space
> (like 16 or 24 GB), which I think was even also a recommendation somewhere.
> 
> Just thinking if it could perchance be simply a matter of that rather than a
> regression per se (though it might be both!).

I also tried on ktd with 8GB RAM, and it seems that Docker is crashing because of bulkmarcimport. I am trying to give it more resources.

I also added a little extra output to the script in both loops, and it always stopped in the first while loop (https://git.koha-community.org/Koha-community/Koha/src/branch/main/misc/migration_tools/bulkmarcimport.pl#L317).
Maybe that @marc_records array is causing my error.
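
For illustration, a minimal sketch of the suspected pattern (a hypothetical simplification of the import loop, not the actual Koha code; the processing step is a placeholder):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use MARC::Batch;

    # Suspected memory-hungry pattern: every record is pushed onto an
    # array before any processing happens, so peak RAM grows with the
    # size of the input file (120k MARCXML biblios add up quickly).
    my $batch = MARC::Batch->new( 'XML', '/data/btw_20240528.xml' );

    my @marc_records;
    while ( my $record = $batch->next() ) {
        push @marc_records, $record;    # all records held in memory at once
    }

    # Only afterwards does the import loop run over the full array.
    for my $record (@marc_records) {
        # import_record($record);       # hypothetical per-record processing
    }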
Comment 4 Jan Kissig 2024-06-18 17:48:03 UTC
I can confirm on ktd with 24.06:

- 8GB RAM, 4GB swap, 120k Records -> Killed
- 12GB RAM, 4GB swap, 120k Records -> Worked

No entries in ktd koha logs. 

As mentioned before, the script seems to hang in the first loop; there is no commit of records to be found in my output.

Shall I link to my MARCXML for someone else to test?
Comment 5 Aditya Sethi 2024-08-11 18:10:49 UTC
I am having even bigger problems due to the MySQL warnings. I used koha-dump to back up Koha 23.11 on server 1, installed the latest Koha 24.05 on server 2, and ran koha-restore. As it's a version upgrade, the web installer is expected to run.

When going through the database upgrade step, I get errors and can't proceed further: "WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version."

I ran "apt install --reinstall koha-common"; it seemed like the issue got fixed, with the same warning in the terminal, but the database does not seem to be upgraded, as I am now getting "Error 500" on the Circulation and Fine Rules page, and empty fields in the "Identity Provider" edit buttons, which were working perfectly fine in version 23.11.

I have checked through the installer codebase, and it seems that whenever DBD::mysql generates any warning output, Koha's error handling mechanism mistakes it for an error instead of a warning (line 441 of "/usr/share/koha/intranet/cgi-bin/installer/install.pl"). I believe the same error handling is repeated either throughout the Koha codebase or in the DBD::mysql package.

Possible fixes:
1. Change the error handling logic (sketched below)
2. Add "SET sql_notes=0" before running any SQL query
3. Bundle a DBD::mysql Perl package with Koha that does not have this issue
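
A minimal sketch of what fix 1 could look like (a hypothetical illustration, not the actual install.pl code; the connection parameters and query are placeholders): treat a statement as failed only when DBI reports an actual error, rather than whenever something is printed to STDERR, such as the MYSQL_OPT_RECONNECT deprecation warning.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Placeholders for the real Koha connection parameters.
    my $dbh = DBI->connect(
        'dbi:mysql:database=koha', 'koha_user', 'password',
        { RaiseError => 0, PrintError => 1 }
    ) or die $DBI::errstr;

    my $rv = $dbh->do('UPDATE systempreferences SET value = value');
    if ( !defined $rv && $dbh->err ) {
        # A genuine error: $dbh->err is set and the statement failed.
        die 'DB error: ' . $dbh->errstr;
    }
    # A deprecation warning alone leaves $dbh->err unset, so we proceed.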
Comment 6 Katrin Fischer 2024-08-12 14:54:15 UTC
(In reply to Aditya from comment #5)

Hi, this bug report is about the import script bulkmarcimport.pl. I think you might be experiencing different bugs, like bug 37533 (error 500 on circulation rules). Please check on the mailing list or in the Mattermost chat first when you are experiencing different issues like this; we will often be able to point you to the right bug or solution.
Comment 7 Kyle M Hall (khall) 2025-01-10 19:29:28 UTC Comment hidden (obsolete)
Comment 8 Magnus Enger 2025-01-13 08:10:10 UTC
Created attachment 176425
Bug 37020: Stream records from file instead of loading all into memory

One of the issues with bulkmarcimport.pl is that the script loads the entire record set into memory
before processing those records. If we load and process those records one at a time, the memory
needed will be minuscule.

Test Plan:
1) Import a very large record set, note the RAM consumption by the script
2) Apply this patch
3) Import the same record set, the RAM consumption should be reduced!

Signed-off-by: Magnus Enger <magnus@libriotech.no>
The patch reduces the RAM used when importing a large file.
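
For reference, a minimal sketch of the streaming approach the patch describes (the per-record import and commit helpers are hypothetical placeholders, not the actual patch code):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use MARC::Batch;

    my $batch = MARC::Batch->new( 'XML', 'records.xml' );

    # Each record is processed as it is read, so only one record is
    # held in memory at a time, regardless of the input file size.
    my $count = 0;
    while ( my $record = $batch->next() ) {
        # import_record($record);                  # hypothetical import step
        $count++;
        # commit_to_db() if $count % 1000 == 0;    # hypothetical, mirrors --commit=1000
    }
    print "$count MARC records done\n";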
Comment 9 Jan Kissig 2025-01-13 12:44:22 UTC
Just tried to import 120k records via 

kohadev-koha@kohadevbox:migration_tools(bug_37020)$ perl bulkmarcimport.pl -b -d --m=MARCXML --file=records.xml --commit=1000

At first it went as expected, but after 120k records it went on and on, and finally I had to kill the process in order to regain control over my machine.

So I tried with a small file containing 1 record:

perl bulkmarcimport.pl -b -d --m=MARCXML --file=../../../thw/journal.xml --commit=1
Deleting biblios
.Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/MARC/File/XML.pm line 399, <GEN3> chunk 2.

3 MARC records done in 0.0895359516143799 seconds

So there seems to be an error in the loop, but I did not dig into it.
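
If anyone wants to dig further, a minimal diagnostic loop might help show what next() actually returns for such a file (a sketch assuming a MARCXML input; the file name and field choice are illustrative only):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use MARC::File::XML ( BinaryEncoding => 'utf8' );

    my $file = MARC::File::XML->in('journal.xml');

    # Print one line per record returned by next(), to see whether the
    # reader yields extra records beyond the single real one in the file.
    my $i = 0;
    while ( defined( my $record = $file->next() ) ) {
        $i++;
        my $f245 = $record->field('245');
        printf "record %d: %s\n", $i, $f245 ? $f245->as_string() : '(no 245)';
    }
    $file->close();
    print "total: $i record(s)\n";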