Some oversized records with UTF-8 characters cause the import worker to die. The record is staged correctly, but an attempt to import it with C4::ImportBatch::BatchCommitRecords can generate an uncaught die from new_from_usmarc. A solution would be to use import_records.marcxml instead of import_records.marc for this step. (Both versions are stored in the import_records table during the stage step.)
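For illustration, something along these lines could be done at commit time (a minimal sketch only, assuming a hashref $row holding one import_records row and a MARC21 flavour; this is not the actual BatchCommitRecords code):

    use MARC::Record;
    use MARC::File::XML ( BinaryEncoding => 'utf8' );

    # Build the record from the MARCXML copy stored at stage time instead of
    # the ISO2709 blob, which breaks on oversized records with UTF-8 data.
    my $record = MARC::Record->new_from_xml(
        $row->{marcxml},   # import_records.marcxml
        'UTF-8',
        'MARC21',          # illustrative; Koha would take this from the marcflavour preference
    );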
Created attachment 176845 [details]
Test record

A test MARCXML record to confirm the issue.
I was actually thinking about this a bit yesterday when fixing bug 38913. There are other places that call "new_from_usmarc". Some of them - like the API - have try/catch around them I think, but I'm sure not all the calls do.
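For example, a defensive wrapper around one of those calls might look something like this (just a sketch with illustrative variable names, not a proposed patch):

    use Try::Tiny;
    use MARC::Record;

    foreach my $row (@import_rows) {
        my $record = try {
            MARC::Record->new_from_usmarc( $row->{marc} );
        }
        catch {
            # Log and skip the unparsable record instead of letting die() kill the worker
            warn "Skipping import_record " . $row->{import_record_id} . ": $_";
            undef;
        };
        next unless $record;
        # ... continue processing this record ...
    }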
(In reply to David Cook from comment #2)
> I was actually thinking about this a bit yesterday when fixing bug 38913.
>
> There are other places that call "new_from_usmarc". Some of them - like the
> API - have try/catch around them I think, but I'm sure not all the calls do.

Should be investigated from this angle. But the current issue can be solved with the marcxml that we have anyway. No need for a try around new_from_usmarc here if we can effectively import such records with new_from_xml IMO.
Bumped into this one with ./misc/migration_tools/bulkmarcimport.pl. In our case, I think it's bad leader data, but $record->as_usmarc is what is killing us. In C4::Biblio::ModBiblioMarc there is a call to $record->as_usmarc to recalculate the record length, and that was throwing a fatal error, so I've wrapped that one in an eval{}... It looks like bug 38913 will still fall victim to this one too, because $record->as_usmarc is used there as well. I feel like some people reported issues with Elasticsearch indexing even after bug 38913, and maybe that's why...
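Roughly what I did locally (a sketch of the workaround, not the actual ModBiblioMarc code; $record and $biblionumber are whatever is already in scope there):

    my $usmarc = eval { $record->as_usmarc };
    if ( $@ or !defined $usmarc ) {
        # Bad leader data makes as_usmarc die; warn and carry on instead of aborting the import
        warn "as_usmarc failed for biblionumber $biblionumber: $@";
    }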