Bug 37020

Summary: bulkmarcimport gets killed when inserting large files
Product: Koha
Reporter: Jan Kissig <bibliothek>
Component: Architecture, internals, and plumbing
Assignee: Thomas Klausner <domm>
Status: Pushed to stable ---
QA Contact: Testopia <testopia>
Severity: critical
Priority: P5 - low
CC: adifbbk1, alexander.wagner, andrew, domm, enica, glasklas, januszop, jesse, jonathan.druart, kyle, leo.stoyanov, magnus, martin.renvoize, nick, paul.derscheid, schodkowy.omegi-0r, tomascohen
Version: Main
Keywords: additional_work_needed, rel_24_05_candidate
Hardware: All
OS: Linux
GIT URL:
Change sponsored?: ---
Patch complexity: Small patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in: 25.05.00, 24.11.05
Circulation function:
Bug Depends on: 29440
Bug Blocks:
Attachments:
Bug 37020: Stream records from file instead of loading all into memory
Bug 37020: Stream records from file instead of loading all into memory
MARCXML with 1 record
Bug 37020: [alternate] Changed script to use XML:Twig to reduce memory usage.
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
Bug 37020: Don't update Elasticsearch index --test option is set
Bug 37020: Don't update Elasticsearch index if--test option is set
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
Bug 37020: (Followup) Fix txn handling, loop label and record_number counter
MARCXML file with 75 authorities from GND
Bug 37020: (follow-up): Remove memleak when not using Elasticsearch
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
Bug 37020: (Followup) Fix txn handling, loop label and record_number counter
Bug 37020: (follow-up): Remove memleak when not using Elasticsearch
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
Bug 37020: (Followup) Fix txn handling, loop label and record_number counter
Bug 37020: (follow-up): Remove memleak when not using Elasticsearch

Description Jan Kissig 2024-06-04 08:26:52 UTC
After updating to 24.05, the script bulkmarcimport.pl stops with a simple "Killed" when importing large sets of records (in my case 120k biblios).

My server settings: 1cpu, 4GB RAM

This is the console output:

perl /usr/share/koha/bin/migration_tools/bulkmarcimport.pl -d -b -commit=100 -m=MARCXML -file=/data/btw_20240528.xml
WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version.
Deleting biblios

Killed
Comment 1 Katrin Fischer 2024-06-17 10:43:04 UTC
Is there anything in the other logs maybe that could give us some more information on the issue?
Comment 2 Michał 2024-06-17 11:42:57 UTC
Are you sure it's a regression and wouldn't happen with the older version of the script? How much swap space does your installation have?

Generally, back when I ran a Koha VM with very low RAM (like 2 or 4 GB) on some older version like 23.05, I'd have issues with various cron tasks crashing due to OOM (out-of-memory), until I gave it some significant swap space (like 16 or 24 GB), which I think was even a recommendation somewhere.

Just thinking if it could perchance be simply a matter of that rather than a regression per se (though it might be both!).
Comment 3 Jan Kissig 2024-06-18 11:53:33 UTC
Hi Michał

(In reply to Michał from comment #2)
> Are you sure it's a regression and wouldn't happen with the older version of
> the script? 

I just checked it: I ran bulkmarcimport around 20 times before on the same machine, with the same number of records (~120k), on versions < 24.05. Every time it worked.

> How much swap space does your installation have?

4GB

> Generally back when I ran Koha VM with very low RAM (like 2 or 4 GB), some
> older version like 23.05, I'd have issues with various cron tasks crashing
> due to OOM (out-of-memory), up until I gave it some significant swap space
> (like 16 or 24 GB), which I think was even also a recommendation somewhere.
> 
> Just thinking if it could perchance be simply a matter of that rather than a
> regression per se (though it might be both!).

I also tried on ktd with 8GB RAM, and it seems that docker is crashing because of bulkmarcimport. I am trying to give it more resources.

I also gave the script a little extra output in both loops and it always stopped in the first while-loop (https://git.koha-community.org/Koha-community/Koha/src/branch/main/misc/migration_tools/bulkmarcimport.pl#L317)
Maybe that @marc_records array is causing my error.
Comment 4 Jan Kissig 2024-06-18 17:48:03 UTC
I can confirm on ktd with 24.06.:

- 8GB RAM, 4GB swap, 120k Records -> Killed
- 12GB RAM, 4GB swap, 120k Records -> Worked

No entries in ktd koha logs. 

As mentioned before, the script seems to be hanging in the first loop; there is no commit of records to be found in my output.

Shall I link to my MARCXML for someone else to test?
Comment 5 Aditya Sethi 2024-08-11 18:10:49 UTC
I am having even bigger problems due to the MySQL warnings. I have used koha-dump to back up Koha 23.11 on server 1. I have installed the latest Koha 24.05 on server 2 and ran koha-restore. As it's a version upgrade, it's expected to go through the web installer. 

When I am going through database upgrade step, I am getting errors and can't proceed further: "WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version.". 

I have used "apt install --reinstall koha-common", it seemed like the issue got fixed with the same warning in terminal, but the database doesn't seem to be upgraded as I am now getting "Error 500" in Circulation and Fine Rules page, and empty fields in "Identity Provider" edit buttons, which was working perfectly fine in version 23.11. 

I have checked through the installer codebase, and it seems like whenever DBD::mysql generates any warning logs, Koha's error handling mechanism mistakes them for errors instead of warnings (line 441 of "/usr/share/koha/intranet/cgi-bin/installer/install.pl"). And I believe the same error handling mechanism is repeated either throughout the Koha codebase or in the DBD::mysql package. 

Possible fixes: 
1. Change the logic for error handling
2. Add "SET sql_notes=0" everywhere before any SQL query
3. Bundle an error-free DBD::mysql Perl package with Koha that has no such issue
Comment 6 Katrin Fischer 2024-08-12 14:54:15 UTC
(In reply to Aditya from comment #5)
> I am having even bigger problems due to the mysql warnings. I have used
> koha-dump to backup koha 23.11 in server 1. I have installed latest Koha
> 24.05 in server 2 and ran koha-restore. As it's a version upgrade, it's
> expected to have web installer. 
> 
> When I am going through database upgrade step, I am getting errors and can't
> proceed further: "WARNING: MYSQL_OPT_RECONNECT is deprecated and will be
> removed in a future version.". 
> 
> I have used "apt install --reinstall koha-common", it seemed like the issue
> got fixed with the same warning in terminal, but the database doesn't seem
> to be upgraded as I am now getting "Error 500" in Circulation and Fine Rules
> page, and empty fields in "Identity Provider" edit buttons, which was
> working perfectly fine in version 23.11. 
> 
> I have checked through installer codebase, it seems like whenever DBD::mysql
> generates any warning logs, the Koha's error handling mechanism mistakes it
> as error instead of warnings (line 441 of
> "/usr/share/koha/intranet/cgi-bin/installer/install.pl"). And I believe, the
> same error mechanism is reflected either throughout the Koha codebase or in
> DBD::mysql package. 
> 
> Possible fix: 
> 1. Change logic for error handling
> 2. Add "SET sql_notes=0" everywhere before having any query of SQL
> 3. Bundle error-free DBD::mysql perl package with Koha that has no such issue

Hi, this bug report is about the import script bulkmarcimport.pl. I think you might be experiencing different bugs, like bug 37533 (error 500 on circulation rules). Please check on the mailing list or Mattermost chat first if you are experiencing different issues like this. We will often be able to point you to the right bugs or solution.
Comment 7 Kyle M Hall (khall) 2025-01-10 19:29:28 UTC Comment hidden (obsolete)
Comment 8 Magnus Enger 2025-01-13 08:10:10 UTC
Created attachment 176425 [details] [review]
Bug 37020: Stream records from file instead of loading all into memory

One of the issues with bulkmarcimport.pl is that the script loads the entire record set into memory
before processing those records. If we load and process those records one at a time, the memory
needed will be minuscule.
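
In code terms, the change boils down to handling each record as MARC::Batch hands it over instead of first collecting everything into an array. A minimal sketch, not the actual patch; $batch is the MARC::Batch object and process_record() is a hypothetical stand-in for the script's existing per-record logic:

    while (1) {
        my $record = eval { $batch->next() };    # read one record at a time
        if ($@) {
            warn "Bad MARC record skipped: $@";
            next;
        }
        last unless $record;                     # end of input file
        process_record($record);                 # insert/update, log, queue for indexing
    }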

Test Plan:
1) Import a very large record set, note the RAM consumption by the script
2) Apply this patch
3) Import the same record set; the RAM consumption should be reduced!

Signed-off-by: Magnus Enger <magnus@libriotech.no>
The patch reduces the RAM used when importing a large file.
Comment 9 Jan Kissig 2025-01-13 12:44:22 UTC
Just tried to import 120k records via 

kohadev-koha@kohadevbox:migration_tools(bug_37020)$ perl bulkmarcimport.pl -b -d --m=MARCXML --file=records.xml --commit=1000

At first it went as expected but after 120k it went on and on and then finally I had to kill the process in order to get control over my machine again.

So I tried with a small file with 1 record: 

perl bulkmarcimport.pl -b -d --m=MARCXML --file=../../../thw/journal.xml --commit=1
Deleting biblios
.Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/MARC/File/XML.pm line 399, <GEN3> chunk 2.

3 MARC records done in 0.0895359516143799 seconds

So there seems to be an error in the loop, but I did not dig into it.
Comment 10 Katrin Fischer 2025-01-15 15:06:02 UTC
Hi Jan, please set to Failed QA (and don't feel bad)
Comment 11 Katrin Fischer 2025-01-15 15:08:02 UTC
@Kyle: I think it would be better to keep the tidying separate here; it's hard to see any changes.
Comment 12 Kyle M Hall (khall) 2025-01-16 12:13:06 UTC
(In reply to Katrin Fischer from comment #11)
> @Kyle: I think better keep the tidy separate here, hard to see any changes?

There isn't any tidying in this patch (that I can recall :). It really just involved moving most of the code from the lower loop into the upper loop (which does change the indentation, FWIW) so there is only one loop instead of two.
Comment 13 Kyle M Hall (khall) 2025-01-16 12:14:21 UTC
(In reply to Jan Kissig from comment #9)
> Just tried to import 120k records via 

Jan, is there any chance you can attach your records file to this bug?
Comment 14 Jan Kissig 2025-01-16 13:10:19 UTC
Created attachment 176653 [details]
MARCXML with 1 record

perl bulkmarcimport.pl -m=MARCXML -b --commit=1000 --file=path/to/bib-262.marcxml
Comment 15 Jan Kissig 2025-01-16 13:11:58 UTC
(In reply to Kyle M Hall (khall) from comment #13)
> (In reply to Jan Kissig from comment #9)
> > Just tried to import 120k records via 
> 
> Jan, is there any chance you can attach your records file to this bug?

I will find a place to upload my 120k-record file. But to start with, I attached a single record. Can you import it and verify that the result is something like: 

3 MARC records done in ...
Comment 16 Michał 2025-01-16 13:29:16 UTC
This really highlights the problem with the default patch files as generated by Bugzilla. There should be a way to generate a separate git diff that ignores the whitespace changes completely; IDEs can do it, but there doesn't seem to be any popular way to convert an existing diff to skip displaying them. Even displaying the diff with a better viewer like Kompare, it's still not much easier to read. So we can only ask for a complementary git diff -w (--ignore-all-space) attachment to make review easier, I guess (I would generate it myself, but I can't clone Koha on the internet connection I have today).
Comment 17 Leo Stoyanov 2025-01-16 16:59:53 UTC
(In reply to Jan Kissig from comment #15)
> (In reply to Kyle M Hall (khall) from comment #13)
> > (In reply to Jan Kissig from comment #9)
> > > Just tried to import 120k records via 
> > 
> > Jan, is there any chance you can attach your records file to this bug?
> 
> I will find a place to upload my 120k-record file. But for the beginning I
> attached a single record. Can you import and verify that result is something
> like: 
> 
> 3 MARC records done in ...

I imported your attached file, and the result was indeed "3 MARC records done in 0.290374994277954 seconds".
Comment 18 Jan Kissig 2025-01-17 07:05:07 UTC
(In reply to Kyle M Hall (khall) from comment #13)
> (In reply to Jan Kissig from comment #9)
> > Just tried to import 120k records via 
> 
> Jan, is there any chance you can attach your records file to this bug?

Hi Kyle, here are the records I tried. Modified to fit ktd.

https://nextcloud.th-wildau.de/nextcloud/index.php/s/fB3or6bXNX9fo9E
Comment 19 Leo Stoyanov 2025-01-22 16:39:54 UTC
Created attachment 176921 [details] [review]
Bug 37020: [alternate] Changed script to use XML:Twig to reduce memory usage.

In the original script, MARC::Batch->new('XML', ...) was holding onto large chunks of parsed data for MARCXML files, resulting in excessive memory usage even after the script tried to free references. A hacky approach was to manually clear the batch’s internal structures, but Data::Dumper showed that MARC::Batch wasn’t storing data in those specific fields. Hence, the hack offered no solution to the underlying caching. By contrast, XML::Twig can stream XML elements one by one, calling a handler and then discarding the parsed chunk.

Note, there is no guarantee this implementation works for non-XML files as it stands (although in theory it should). The focus is on the XML::Twig implementation for reference as a solution. Overall, batching seems to be eating up memory.
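
For reference, the XML::Twig streaming pattern looks roughly like this. This is a sketch only, not the attached patch; handle_record() is a hypothetical wrapper for the script's per-record processing:

    use XML::Twig;
    use MARC::File::XML;    # provides MARC::Record->new_from_xml

    my $twig = XML::Twig->new(
        twig_handlers => {
            # depending on the input, the key may need to be the namespaced
            # 'marc:record' instead of 'record'
            'record' => sub {
                my ( $t, $elt ) = @_;
                my $record = MARC::Record->new_from_xml( $elt->sprint, 'UTF8', 'MARC21' );
                handle_record($record);    # hypothetical per-record processing
                $t->purge;                 # discard the parsed chunk to keep memory flat
            },
        },
    );
    $twig->parsefile($input_marc_file);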

To test:
1. Run perl misc/migration_tools/bulkmarcimport.pl -m=MARCXML -b -d -v --commit=1000 --file=file_path_here on a large MARCXML/XML file (for example, 2 GB or greater).
2. On whatever machine or container it is run, the script will likely cause an out-of-memory error and crash the environment.
3. Apply the patch, run "restart_all", and redo step 1. The script should utilize much less memory to import records from MARCXML/XML files.
Comment 20 Thomas Klausner 2025-04-03 11:54:31 UTC
*** Bug 39537 has been marked as a duplicate of this bug. ***
Comment 21 Thomas Klausner 2025-04-03 12:10:55 UTC
I have a similar but slightly different solution (developed for importing 10 million authority records). Instead of using MARC::Batch, I use XML::LibXML::Reader (which also implements a memory-efficient pull parser).

I guess XML::Twig works just as well, so I will (hopefully today) review this patch.
Comment 22 Thomas Klausner 2025-04-03 12:38:36 UTC
The patch "Bug 37020: Stream records from file instead of loading all into memory" basically "just" removes the first loop that pushed all (valid) records onto an array. Yes, this solves the memory problem, but still uses the IMO not very sane regex-based pull "parser" in MARC::Batch.

But it's a rather small change that works (even though the patch looks huge because it contains a lot of whitespace-only changes; use `git diff -w ...` to see the relevant changes).
Comment 23 Thomas Klausner 2025-04-03 12:51:23 UTC
Review of the patch "Bug 37020: [alternate] Changed script to use XML:Twig to reduce memory usage.":

This patch changes a lot, mainly using XML::Twig to pull-parse the (huge) XML file instead of using the regex-"parser" in MARC::Batch.

It is a bit hard to read because it adds some comments (which would have to be removed) and adds a bunch of functions before the main code (which IMO is bad style, but maybe OK for Koha?), also moving the helper function up in the code.

It proposes two different parsers, one removing the namespaces ('marc:record'), one not. Not sure why we need this.

I like that processing a record is moved into a function.

So I guess if this approach is taken, the code layout should be changed to have all the functions AFTER the main code, and we need to decide which of the two parser functions should be used (or we keep both and add a flag to choose?).

I like this approach better than the other one because it removes MARC::Batch (for reading XML).
Comment 24 Thomas Klausner 2025-04-03 12:57:44 UTC
In our script, we used XML::LibXML. I can provide an example patch, but before I spend that time, here's a quick preview of what the usage would look like:

    use XML::LibXML::Reader;    # also exports XML_READER_TYPE_ELEMENT
    use MARC::File::XML;        # provides MARC::Record->new_from_xml

    open( my $fh, '<', $input_marc_file );

    my $reader = XML::LibXML::Reader->new( IO => $fh );

    while ( $reader->read ) {
        next unless $reader->nodeType == XML_READER_TYPE_ELEMENT;
        next unless $reader->name eq 'record';

        my $xml    = $reader->readOuterXml;
        my $record = MARC::Record->new_from_xml( $xml, 'UTF8', 'MARC21' );
    }


I find that a bit easier to read than the XML::Twig way of installing a handler subref for 'record'.
Comment 25 Thomas Klausner 2025-04-03 13:00:18 UTC
BTW, here is a link to Bug 37478, which introduced the duplicate loop that caused the memory issues (and the --skip_bad_records option does not protect us from the problem, because the loop pushing the records onto a Perl array happens whether it is set or not).
Comment 26 Thomas Klausner 2025-04-03 13:14:55 UTC
One potential killer argument against XML::Twig and in favor of XML::LibXML: Koha currently already comes with XML::LibXML, but not with XML::Twig. So we would need to introduce a new dependency.
Comment 27 David Gustafsson 2025-04-04 12:48:38 UTC
Created attachment 180639 [details] [review]
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files

Include checking for invalid marc data in main loop to avoid
loading all marc data into memory at once.
Comment 28 David Gustafsson 2025-04-04 12:53:31 UTC
Created attachment 180640 [details] [review]
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files

Include checking for invalid marc data in main loop to avoid
loading all marc data into memory at once.
Comment 29 David Gustafsson 2025-04-04 12:58:30 UTC
I had a look at this, and the reason the script now consumes a lot more memory is that all records are loaded into memory when checking for bad XML records and encoding errors. I'm not sure why this was moved into a separate step, but I have now moved it into the main loop so that the maximum number of records loaded is the commit size, which should resolve the issue.
Comment 30 David Gustafsson 2025-04-04 13:00:27 UTC
I think this also resolves a bug where, if a limit is set for the number of imported records, the loop terminates prematurely without the last batch of records being committed to Elasticsearch.
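
In other words, memory is bounded by the commit size, and the final partial batch still has to be flushed. A rough sketch with hypothetical helpers next_record() and process_batch(), not the actual patch code:

    my @batch;
    while ( my $record = next_record() ) {
        push @batch, $record;
        if ( @batch >= $commitnum ) {
            process_batch( \@batch );    # insert/update and index this slice
            @batch = ();                 # memory stays bounded by the commit size
        }
    }
    process_batch( \@batch ) if @batch;  # don't lose the final partial batch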
Comment 31 David Gustafsson 2025-04-04 13:07:22 UTC
In addition, the database transaction handling previously did not ensure that both the records were inserted/updated and the Elasticsearch index was updated; this should now also be fixed.
Comment 32 David Gustafsson 2025-04-04 13:10:10 UTC
Created attachment 180641 [details] [review]
Bug 37020: Don't update Elasticsearch index --test option is set
Comment 33 David Gustafsson 2025-04-04 13:27:35 UTC
Created attachment 180644 [details] [review]
Bug 37020: Don't update Elasticsearch index if--test option is set
Comment 34 David Gustafsson 2025-04-04 13:28:43 UTC
Created attachment 180645 [details] [review]
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files

Include checking for invalid marc data in main loop to avoid
loading all marc data into memory at once.
Comment 35 Thomas Klausner 2025-04-05 07:34:17 UTC
@David Gustafsson: I also mentioned all of this in my previous comments. And I still think we should move away from MARC::Batch and use XML::LibXML. And I also liked the cleanups Leo Stoyanov did in his XML::Twig version.

I'm on a train with very flaky internet at the moment, so it's a bit hard to review your patch (because of all the indentation changes...), but at first glance it seems very similar to Kyle M Hall's patch?

So I don't think we should just go with the minimal fix, but properly improve bulkmarcimport.

What do you think of my analysis in the comments prior to your commits?

I would have loved a bit of discussion / feedback...
Comment 36 Jonathan Druart 2025-04-08 10:47:44 UTC
David, can you reply to Thomas please? See previous comments.
Comment 37 David Gustafsson 2025-04-08 13:15:45 UTC
(In reply to Thomas Klausner from comment #35)
> @David Gustafsson: I also mentioned all of this in my previous comments. And
> I still think we should move away from MARC::Batch and use XML::LibXML. And
> I also liked the cleanups Leo Stoyanov did in his XML::Twig version.
> 
> I'm in a train with very flaky internet at the moment, so it's a bit hard to
> review your patch (because of all the indention changes..), but at the first
> glance it seems very similar to Kyle M Halls patch?
> 
> So I don't think we should just go with the minimal fix, but properly
> improve bulkmarcimport.
> 
> What do you think of my analysis in the comments prior to your commits?
> 
> I would have loved a bit of discussion / feedback...

I have had a look at the XML::Twig patch, and now that the records are again streamed and not loaded all at once, I don't see why we should add a new dependency and increase code complexity with very little to gain in terms of memory consumption. I may be wrong, but the memory consumed by XML::LibXML should be released once a record has been read, so there should not really be any significant difference.

With regard to the patch by Kyle M, there is a bug introduced where the main loop can be terminated without indexing the last batch of records, since the conditions for indexing the last batch will not be met. There needs to be an additional check for whether we have run out of records, and after that a check on whether to terminate the loop.
Comment 38 Thomas Klausner 2025-04-09 20:14:03 UTC
> I have had a look at the XML::Twig patch and now that the records are again streamed and not loaded all at once I don't see why we should add a new dependency and increase code complexity with very little to gain in terms of memory consumption.

I agree. Using XML::Twig makes little sense, especially as XML::LibXML (and thus XML::LibXML::Reader) is already available.

> I may be wrong, but memory consumbed by XML::LibXML should be released a record has been read, so there should not really be any significant difference.

The old code (prior to pushing all the records onto an array) had no memory problem. But I would still like to move away from using MARC::Batch when reading XML files, because the code there is crazy. But maybe this could be a second, later patch / bug.

> With regard to the patch by Kyle M there is a bug introduced where the main loop can be terminated without indexing the last batch of records, since the conditions for indexing the last batch will not be met. There needs to be an additional check for if we have run out of records, and after that a check whether to terminate the loop.

Yeah, reading that patch was also quite hard...

Anyway, I think what we should do is:

* review your patch
* merge it if it works
* start another bugzilla issue to refactor bulkmarcimport to use XML::LibXML::Reader for streaming XML input files
Comment 39 David Gustafsson 2025-04-10 12:35:39 UTC
Sounds good, though I don't understand the need to use XML::LibXML::Reader instead of MARC::File::XML. MARC::File::XML does perhaps have some odd behaviour when it comes to encoding, but there is already a workaround for this, and it's just much nicer and more readable to have the MARC::Batch interface for all formats, with no special handling for MARC XML. There is also a slight risk of introducing new bugs by rewriting code that has already been tried and tested; to me it's better to leave it as it is.

> Anyway, I think what we should do is:
> 
> * review your patch
> * merge it if it works
> * start another bugzilla issue to refactor bulkmarcimport to use XML::LibXML::Reader for streaming XML input files
Comment 40 Leo Stoyanov 2025-04-11 14:35:10 UTC
Sounds like a good plan to me as well: If Dave's patch solves the immediate issue, then that could be the official patch for this bug; then, another Bugzilla issue could be made for implementing XML::LibXML::Reader as an enhancement.
Comment 41 Thomas Klausner 2025-04-14 13:19:51 UTC
Yes, I agree: Let's fix this bug, and (later / in another issue) think about what benefits XML::LibXML::Reader might have.
Comment 42 Thomas Klausner 2025-04-14 13:21:23 UTC
Created attachment 180905 [details] [review]
Bug 37020: (Followup) Fix txn handling, loop label and record_number counter

So, I reviewed the code, which has a bunch of issues:

* there's a call to `next RECORD`, but no `RECORD` loop label.
* $record_number is incremented twice
* and the call to txn_begin was moved from outside the loop into the per-record code, starting one transaction per record. Not sure how MySQL handles this, but Postgres would not like it.

This patch fixes those issues:

After I fixed the first two issues, I ran the patched script on a file containing 75 authorities (from GND), and not one was added to the DB, presumably because nested transactions don't work in MySQL?

So I also moved the BEGIN transaction outside of the main loop, started a new one after each commit, and added a final commit after the loop.
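
The resulting transaction pattern looks roughly like this. Sketch only; $schema stands for the DBIx::Class schema (e.g. Koha::Database->schema), and next_record() and $commitnum are placeholders for the script's own reader and commit size:

    my $record_number = 0;
    $schema->txn_begin;                        # one transaction opened before the loop
    while ( my $record = next_record() ) {
        $record_number++;
        # ... insert or update the record here ...
        if ( $record_number % $commitnum == 0 ) {
            $schema->txn_commit;               # commit the current batch
            $schema->txn_begin;                # and start a new transaction
        }
    }
    $schema->txn_commit;                       # final commit for the last partial batch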
Comment 43 Thomas Klausner 2025-04-14 13:23:24 UTC
Created attachment 180906 [details]
MARCXML file with 75 authorities from GND

Here's another MARCXML file with 75 auths from GND (2 of which cannot be properly imported into ktd, which is OK for testing).
Comment 44 Thomas Klausner 2025-04-14 13:35:06 UTC
Here's a test plan:

1: Start KTD with ES, eg ktd --es7 start
2: Connect to DB: ktd --dbshell
3: Get the count of current auth: `SELECT count(*) FROM auth_header;`
   I get 1706. Remember that number
4: Apply the patch, copy the file "MARCXML file with 75 authorities from GND" / 75_authorities_gnd.xml into your koha dir
5: enter a koha shell: ktd --shell
6: import the 75_authorities_gnd.xml file:

kohadev-koha@kohadevbox:koha$  misc/migration_tools/bulkmarcimport.pl -m MARCXML -c utf8 -a --insert --file 75_authorities_gnd.xml -l testlog

You will see two DBIx errors, but at the end:

75 MARC records done in 0.464853048324585 seconds

7: Again in the DB-Shell, run the count: SELECT count(*) FROM auth_header;

You should now get 75 more auths (eg 1781 in my ktd)

you can verify that in the DB with: select 1781 - 1706;

8: Take a look at the testlog file, which will have 73 OKs and two errors:
eg:
1713;insert;ok
000152145;insert;ERROR
1715;insert;ok

Notice that 1714 is skipped (because it had an error).

But it still exists in the DB (in a useless form):

select * from auth_header where authid = 1714;

I'm not exactly sure why the two non-importable auths show up in the DB. But as this is the same behavior as on main (without this patch), I consider it out of scope for this bug.
Comment 45 Jan Kissig 2025-04-23 09:29:33 UTC
(In reply to Thomas Klausner from comment #44)
> Here'a test plan:
> 
> 1: Start KTD with ES, eg ktd --es7 start
> 2: Connect to DB: ktd --dbshell
> 3: Get the count of current auth: `SELECT count(*) FROM auth_header;`
>    I get 1706. Remember that number
> 4: Apply the patch, copy the file "MARCXML file with 75 authorities from
> GND" / 75_authorities_gnd.xml into your koha dir
> 5: enter a koha shell: ktd --shell
> 6: import the 75_authorities_gnd.xml file:
> 
> kohadev-koha@kohadevbox:koha$  misc/migration_tools/bulkmarcimport.pl -m
> MARCXML -c utf8 -a --insert --file 75_authorities_gnd.xml -l testlog
> 
> You will see two DBIx errors, but at the end:
> 
> 75 MARC records done in 0.464853048324585 seconds
> 
> 7: Again in the DB-Shell, run the count: SELECT count(*) FROM auth_header;
> 
> You should now get 75 more auths (eg 1781 in my ktd)
> 
> you can verify that in the DB with: select 1781 - 1706;
> 
> 8: Take a look at the testlog file, which will have 73 OKs and two errors :
> eg:
> 1713;insert;ok
> 000152145;insert;ERROR
> 1715;insert;ok
> 
> Notice that 1714 is skipped (because it had an error).
> 
> But it still exists in the DB (in a useless form):
> 
> select * from auth_header where authid = 1714;
> 
> I'm not exactly sure why the two not importable auths show up in the db. But
> as this is the same behavior as on main (without this patch) I consider it
> out of scope for this bug.

Is it ready to be tested with a large number of (bib) records?
Comment 46 Thomas Klausner 2025-04-23 11:15:49 UTC
> is it ready to get tested with a large number of (bib)records

Yes!
Comment 47 Jan Kissig 2025-04-23 19:37:39 UTC
(In reply to Thomas Klausner from comment #46)
> > is it ready to get tested with a large number of (bib)records
> 
> Yes!

on ktd I got the following:

perl misc/migration_tools/bulkmarcimport.pl -d -b -commit=100 -m=MARCXML -file=../thw/btw.xml 

...
66000..................................................................Killed
Comment 48 Jan Kissig 2025-04-24 10:18:54 UTC
Today I closed all applications except the running ktd and tried the file from https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=37020#c18 again. 
My setup: a 2017 i7 with 16GB RAM. Although it lasted longer than yesterday, it got killed anyway:

perl misc/migration_tools/bulkmarcimport.pl -d -b -commit=100 -m=MARCXML -file=../thw/btw.xml 
...
120400....................................................................................................
120500....................................................................................................
120600...................................................Killed
Comment 49 Thomas Klausner 2025-04-24 13:57:41 UTC
Weird. You have applied both patches?

I'll try with a large auth xml file from GND.

BTW, you shouldn't need to wait for the OOM killer. Just inspect the RAM usage of the process using eg top or htop. It should stay more or less the same.

Of course there might be other memory leaks in Koha, which have nothing to do with this Bug and get triggered on **very** large imports.
Comment 50 Thomas Klausner 2025-04-24 14:24:28 UTC
Here is an authorities MARC file with 42k records: https://data.dnb.de/GNDlfdMarc21xml/2515gndmrc.xml.gz

Here's the command I use to import it into koha-testing-docker (via `ktd --shell` and fresh db):

misc/migration_tools/bulkmarcimport.pl -m MARCXML -v -a --file 2515gndmrc.xml

After starting it, get the pid of the process, eg via `ps xa | grep bulkmarc` (in the container or on the node, does not matter)

Then watch the memory usage, eg via `htop -p $PID` or

while ps -p $PID --no-headers --format "etime pid %cpu %mem rss"; do     sleep 1 ; done


I see hardly any growth in RAM usage (htop reports 217M RES; the `while ps` bash thingy reports 222340, which is basically the same).


And here's the result without the patches applied:

RAM usage goes up to 1620MB (so 1.6GB) or 1648108, which is what it takes to store the data in the array (and which is fixed by this patch), and then continues to slowly grow.


So:

a) the patches definitely work (at least for auths...)
and
b) there are other mem leaks.

I will now try to download your big file btw.xml and see how it works on my laptop.
Comment 51 Thomas Klausner 2025-04-24 14:45:28 UTC
I have now run it with Jan's file (2.1GB, 210k biblio records) with `--commit 1000` and see jumps in RAM usage after each commit.

So there seems to be a mem leak here, which (IMO) is unrelated to this issue (but might have been introduced in Bug 37478).

misc/migration_tools/bulkmarcimport.pl -b --m=MARCXML --file=btw.xml --commit=1000
Comment 52 Thomas Klausner 2025-04-24 15:07:52 UTC
Created attachment 181463 [details] [review]
Bug 37020: (follow-up): Remove memleak when not using Elasticsearch

If using ES, the script will collect the newly created records and biblionumbers in two arrays (@search_engine_record, @search_engine_record_ids) and pass them to the indexer when $commitnum records are read. The arrays are cleaned afterwards.

BUT: The records and biblionumbers were always stored, even if we're not using Elasticsearch. And as the cleaning of those arrays only happens if we are in fact using ES, all the data is added but never cleaned.

This patch fixes this by only storing record and biblionumber if we have an $indexer (i.e. if we're using Elasticsearch).
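
A minimal sketch of that guard, using the variable names from this description (the script's actual names may differ slightly):

    # only collect data for the indexer when Elasticsearch is in use; without
    # this guard the arrays grow for the whole run and are never cleared
    if ($indexer) {
        push @search_engine_record_ids, $biblionumber;
        push @search_engine_record,     $record;
    }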

I have no idea though how/if the records are indexed when using Zebra.

After applying this patch and importing a large enough file without using Elasticsearch, the RAM usage also stays more or less constant.

@Jan, please verify!

Sponsored-by: HKS3
Comment 53 Jan Kissig 2025-04-25 06:08:50 UTC
Created attachment 181470 [details] [review]
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files

Include checking for invalid marc data in main loop to avoid
loading all marc data into memory at once.

Signed-off-by: Jan Kissig <bibliothek@th-wildau.de>
Comment 54 Jan Kissig 2025-04-25 06:08:54 UTC
Created attachment 181471 [details] [review]
Bug 37020: (Followup) Fix txn handling, loop label and record_number counter

So, I reviewed the code, which has a bunch of issues:

* there's a call to `next RECORD`, but no `RECORD` loop label.
* $record_number is incremented twice
* and the call to txn_begin was moved from outside the loop into the per-record code, starting one transaction per record. Not sure how MySQL handles this, but Postgres would not like it.

This patch fixes those issues:

After I fixed the first two issues, I ran the patched script on a file containing 75 authorities (from GND), and not one was added to the DB, presumably because nested transactions don't work in MySQL?

So I also moved the BEGIN transaction outside of the main loop, started a new one after each commit, and added a final commit after the loop.

Signed-off-by: Jan Kissig <bibliothek@th-wildau.de>
Comment 55 Jan Kissig 2025-04-25 06:08:57 UTC
Created attachment 181472 [details] [review]
Bug 37020: (follow-up): Remove memleak when not using Elasticsearch

If using ES, the script will collect the newly created records and biblionumbers in two arrays (@search_engine_record, @search_engine_record_ids) and pass them to the indexer when $commitnum records are read. The arrays are cleaned afterwards.

BUT: The records and biblionumbers were always stored, even if we're not using Elasticsearch. And as the cleaning of those arrays only happens if we are in fact using ES, all the data is added but never cleaned.

This patch fixes this by only storing record and biblionumber if we have an $indexer (i.e. if we're using Elasticsearch).

I have no idea though how/if the records are indexed when using Zebra.

After applying this patch and importing a large enough file without using Elasticsearch, the RAM usage also stays more or less constant.

@Jan, please verify!

Signed-off-by: Jan Kissig <bibliothek@th-wildau.de>
Comment 56 Thomas Klausner 2025-05-06 20:38:05 UTC
Created attachment 181996 [details] [review]
Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files

Include checking for invalid marc data in main loop to avoid
loading all marc data into memory at once.

Signed-off-by: Jan Kissig <bibliothek@th-wildau.de>
Signed-off-by: Thomas Klausner <domm@plix.at>
Comment 57 Thomas Klausner 2025-05-06 20:38:09 UTC
Created attachment 181997 [details] [review]
Bug 37020: (Followup) Fix txn handling, loop label and record_number counter

So, I reviewed the code, which has a bunch of issues:

* there's a call to `next RECORD`, but no `RECORD` loop label.
* $record_number is incremented twice
* and the call to txn_begin was moved from outside the loop into the per-record code, starting one transaction per record. Not sure how MySQL handles this, but Postgres would not like it.

This patch fixes those issues:

After I fixed the first two issues, I ran the patched script on a file containing 75 authorities (from GND), and not one was added to the DB, presumably because nested transactions don't work in MySQL?

So I also moved the BEGIN transaction outside of the main loop, started a new one after each commit, and added a final commit after the loop.

Signed-off-by: Jan Kissig <bibliothek@th-wildau.de>
Signed-off-by: Thomas Klausner <domm@plix.at>
Comment 58 Thomas Klausner 2025-05-06 20:38:13 UTC
Created attachment 181998 [details] [review]
Bug 37020: (follow-up): Remove memleak when not using Elasticsearch

If using ES, the script will collect the newly created records and biblionumbers in two arrays (@search_engine_record, @search_engine_record_ids) and pass them to the indexer when $commitnum records are read. The arrays are cleaned afterwards.

BUT: The records and biblionumbers were always stored, even if we're not using Elasticsearch. And as the cleaning of those arrays only happens if we are in fact using ES, all the data is added but never cleaned.

This patch fixes this by only storing record and biblionumber if we have an $indexer (i.e. if we're using Elasticsearch).

I have no idea though how/if the records are indexed when using Zebra.

After applying this patch and importing a large enough file without using Elasticsearch, the RAM usage also stays more or less constant.

@Jan, please verify!

Signed-off-by: Jan Kissig <bibliothek@th-wildau.de>
Signed-off-by: Thomas Klausner <domm@plix.at>
Comment 59 Thomas Klausner 2025-05-06 20:39:10 UTC
I made the patches apply again on current main and set to PQA. Please merge, we really need to get these fixes in!
Comment 60 Katrin Fischer 2025-05-08 05:46:11 UTC
Please rebase on current main:


Apply? [(y)es, (n)o, (i)nteractive] y
Applying: Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
error: sha1 information is lacking or useless (misc/migration_tools/bulkmarcimport.pl).
error: could not build fake ancestor
Patch failed at 0001 Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem run "git bz apply --continue".
If you would prefer to skip this patch, instead run "git bz apply --skip".
To restore the original branch and stop patching run "git bz apply --abort".
Patch left in /tmp/Bug-37020-bulkmarcimport-gets-killed-after-update--zzl30sda.patch
Comment 61 Thomas Klausner 2025-05-08 12:14:47 UTC
One patch was marked as obsolete, but it wasn't actually obsolete (this was probably my fault during some review/signing step).

I have now un-obsoleted the patch "Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files" (33.43 KB, patch, 2025-04-04 13:28 UTC, David Gustafsson).

And now all 4 patches apply again without a conflict!
Comment 62 Katrin Fischer 2025-05-09 06:21:42 UTC
I still can't apply the patches, sha1 error... Please upload again.
I also notice that we have 2 patches with the exact same description - please double check (bz doesn't like these).
Comment 63 Katrin Fischer 2025-05-09 06:21:53 UTC
Apply? [(y)es, (n)o, (i)nteractive] y
Applying: Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
Applying: Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
error: sha1 information is lacking or useless (misc/migration_tools/bulkmarcimport.pl).
error: could not build fake ancestor
Patch failed at 0001 Bug 37020: bulkmarcimport gets killed after update to 24.05. when inserting large files
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem run "git bz apply --continue".
If you would prefer to skip this patch, instead run "git bz apply --skip".
To restore the original branch and stop patching run "git bz apply --abort".
Patch left in /tmp/Bug-37020-bulkmarcimport-gets-killed-after-update--m_neycks.patch
Comment 64 Martin Renvoize (ashimema) 2025-05-09 14:11:44 UTC
I couldn't get these to apply either. I tried un-obsoleting the more recent version of the patch that was un-obsoleted last time, and that was also a sha1 fail.

The simplest way to get around this would be for you to push a branch somewhere for us, Thomas, if that's OK.
Comment 65 Thomas Klausner 2025-05-09 17:17:21 UTC
I'm leaving for holidays tomorrow, but I have ~4h on a train. I guess I can create one unified patch (so we discard all the intermediate steps). This should then hopefully apply. If not, I can also try to push a branch to some other git repo.
Comment 66 Thomas Klausner 2025-05-10 10:32:02 UTC
I have now pushed the three commits / patches to: https://github.com/domm/Koha/tree/37020_bulkmarcimport

@cait: I hope you can work with this version?

Or should I open a pull request on the Koha repo on github?

(Meta: I haven't changed the status of the issue here in bugzilla)
Comment 67 Katrin Fischer 2025-05-12 15:34:18 UTC
Patches applied without issue from remote branch now. But noting: Here we have 4 patches, on the branch there were 3. Squashed?
Comment 68 Thomas Klausner 2025-05-13 06:10:52 UTC
I think two of my fixes got somehow merged into one commit. So yes, all OK (especially as the actual main patch by David Gustafsson is properly credited).
Comment 69 Katrin Fischer 2025-05-19 14:40:04 UTC
(In reply to Thomas Klausner from comment #68)
> I think two of my fixed got somehow merged into one commit. So yes, all OK
> (esp as the actual main patch by David Gustafsson is properly credited)

I just noticed that this had the wrong status :)
Comment 70 Katrin Fischer 2025-05-19 14:41:48 UTC
Worked perfectly with the remote branch. Pushed to main, thanks!

https://git.koha-community.org/Koha-community/Koha/commits/branch/main/search?q=37020&all=
Comment 71 Paul Derscheid 2025-05-26 18:17:53 UTC
Nice work everyone!

Pushed to 24.11.x for 24.11.05
Comment 72 Jesse Maseto 2025-06-13 15:13:12 UTC
Merge conflicts with 24.05.x, please rebase if needed in 24.05.x.
Comment 73 Jonathan Druart 2025-06-19 07:53:07 UTC
record_number is still incremented twice. Is this because of a conflict resolution problem?
Comment 74 Jonathan Druart 2025-06-19 07:54:52 UTC
 Current version in main:

 329 RECORD: while () {
 330 
 331     my $record;
 332     $record_number++;
 333 
 334     # get record
 335     eval { $record = $batch->next() };
 336     if ($@) {
 337         print "Bad MARC record $record_number: $@ skipped\n";
 338 
 339         # FIXME - because MARC::Batch->next() combines grabbing the next
 340         # blob and parsing it into one operation, a correctable condition
 341         # such as a MARC-8 record claiming that it's UTF-8 can't be recovered
 342         # from because we don't have access to the original blob.  Note
 343         # that the staging import can deal with this condition (via
 344         # C4::Charset::MarcToUTF8Record) because it doesn't use MARC::Batch.
 345         next;
 346     }
 347     if ($record) {
 348         $record_number++;
 349 

This is wrong.
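
The fix is simply to drop the second increment so each record is counted once. A sketch of the corrected shape (mirroring the loop above, with the rest of the body elided):

    RECORD: while (1) {
        my $record;
        $record_number++;    # count each record exactly once, when it is read

        eval { $record = $batch->next() };
        if ($@) {
            print "Bad MARC record $record_number: $@ skipped\n";
            next RECORD;
        }
        last RECORD unless $record;

        # ... process $record; no second $record_number++ in this branch ...
    }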
Comment 75 Alexander Wagner 2025-06-19 09:40:16 UTC
If I read the code right, the number of records reported is still doubled. AFAICS `$record_number++;` is called twice within the loop. Jan mentioned that he ingested 1 record and got notified about 3; I also observed a result of 5 while I had only two (valid) records in the import file. It might be a minor issue compared to all the rest, but it can be quite confusing if your catalogue migration turns out to hold way more records than you expected. (Koha 25.05.00 GA)