Bug 39537 - bulkmarcimport.pl fails to import large files
Summary: bulkmarcimport.pl fails to import large files
Status: RESOLVED DUPLICATE of bug 37020
Alias: None
Product: Koha
Classification: Unclassified
Component: Command-line Utilities
Version: unspecified
Hardware: All / All
Importance: P5 - low normal
Assignee: Bugs List
QA Contact: Testopia
Reported: 2025-04-03 08:01 UTC by Janusz Kaczmarek
Modified: 2025-04-03 11:54 UTC
CC: 5 users

Description Janusz Kaczmarek 2025-04-03 08:01:18 UTC
After the changes from bug 29440 (which entered Koha in 24.05), bulkmarcimport.pl fails to import large files.

The reason is that, after the changes made there, there are now two loops iterating over the records to be imported: the first reads the file and places the records in a Perl array in memory; the second takes the records from that array and actually inserts them into the Koha database.

Now, imagine what happens on a system with limited RAM and a large file (100K+, 1M+, 5M+ records, depending on the system configuration): the script consumes more and more memory and in the end destabilizes the whole system, possibly getting killed by the OOM killer.
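
For illustration, a minimal Perl sketch of the two patterns, assuming MARC::Batch for reading and a hypothetical import_record() helper standing in for the real database insert (this is not the actual bulkmarcimport.pl code):

    #!/usr/bin/perl
    use Modern::Perl;
    use MARC::Batch;

    my $file  = shift or die "Usage: $0 <marc-file>\n";
    my $batch = MARC::Batch->new( 'USMARC', $file );

    # Pattern introduced by bug 29440 (simplified): buffer everything first.
    # Memory grows with the file size, so a multi-million-record file can
    # exhaust RAM before a single record is imported.
    #my @records;
    #while ( my $record = $batch->next ) { push @records, $record; }
    #import_record($_) for @records;

    # Streaming pattern: constant memory, each record is imported as read.
    while ( my $record = $batch->next ) {
        import_record($record);
    }

    sub import_record {
        my ($record) = @_;
        # placeholder for the real import logic (AddBiblio() etc. in Koha)
        say $record->title // '(no title)';
    }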

I must say, bulkmarcimport.pl has a lot of useful features, but it has become completely useless for larger datasets; one now has to write one's own simple script for each use case, adapting elements from the original script to cover the actual needs. This is tiring.

I am wondering what the reason for such a design was (splitting the main loop and collecting the records in a growing in-memory array). Should it not be corrected? I would be happy to hear David's (the original author's) opinion.
Comment 1 Thomas Klausner 2025-04-03 11:53:04 UTC
AFAIK (after some discussions at the Koha Hackfest), the reason for the double loop was to validate all the records before importing them. It seems that with the old behaviour, bulkmarcimport could die in the middle of processing (when hitting invalid MARC XML), leaving you with an unknown number of imported records and no information about which record was actually broken.

But loading all the records into an array does not work with big files. See also Bug 37020.
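
One way to keep the "validate everything first" guarantee without the in-memory array would be to read the file twice: a validation-only pass, then a streaming import pass. A rough sketch, again assuming MARC::Batch (whose next() should croak on bad data while strict mode, the default, is on):

    #!/usr/bin/perl
    use Modern::Perl;
    use MARC::Batch;

    my $file = shift or die "Usage: $0 <marc-file>\n";

    # Pass 1: validate only. With strict mode on, next() dies on broken
    # MARC data, so we can report the position of the bad record.
    my $batch = MARC::Batch->new( 'USMARC', $file );
    my $count = 0;
    while (1) {
        my $record = eval { $batch->next };
        die 'Invalid record at position ' . ( $count + 1 ) . ": $@" if $@;
        last unless $record;
        $count++;
    }
    say "$count records validated";

    # Pass 2: import, one record at a time, constant memory.
    $batch = MARC::Batch->new( 'USMARC', $file );
    while ( my $record = $batch->next ) {
        # ... actual import into Koha goes here ...
    }

The trade-off is reading the file twice, but sequential I/O is cheap compared to holding millions of MARC::Record objects in RAM.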
Comment 2 Thomas Klausner 2025-04-03 11:54:31 UTC

*** This bug has been marked as a duplicate of bug 37020 ***