Hi, with a big enough batch record modification list you end up with an error:

Template process failed: undef error - , or } expected while parsing object/hash, at character offset 65535 (before "(end of string)") at /home/koha/src/Koha/BackgroundJob.pm line 179.

Character offset 65535 is exactly the maximum size of a MySQL TEXT column, so the JSON stored in background_jobs.data is being truncated before it can be parsed.
I was going to open this bug report!
I have been running some tests. BatchUpdateBiblio will have data with:
- 230 chars for 1 record
- 836 chars for 10 records
- 7065 chars for 100 records
- 28470 chars for 400 records

Growth seems linear, so currently the tool has a limit of about 800 records.
- Switching to MEDIUMTEXT will allow ~230k records
- Switching to LONGTEXT will allow ~60M records

These are only estimates (and maximums), as the tool can also log errors, extra info, etc., which take more characters. A back-of-envelope check is sketched below.
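A minimal sketch reproducing the arithmetic behind those ceilings, assuming the per-record cost measured above and the standard MySQL column size limits (real jobs also carry error logs and extra info, so the true ceilings are lower):

    use strict;
    use warnings;

    # MySQL string column size limits, in bytes
    my %column_max = (
        TEXT       => 2**16 - 1,    #        65,535
        MEDIUMTEXT => 2**24 - 1,    #    16,777,215
        LONGTEXT   => 2**32 - 1,    # 4,294,967,295
    );

    # Incremental cost per biblio, from the 100- and 400-record measurements above
    my $chars_per_biblio = ( 28470 - 7065 ) / 300;    # ~71 chars/record

    for my $type (qw(TEXT MEDIUMTEXT LONGTEXT)) {
        printf "%-10s => ~%d records max\n", $type,
            $column_max{$type} / $chars_per_biblio;
    }

This prints roughly 918 / 235,000 / 60,000,000, matching the ~800 / ~230k / ~60M figures above once per-job overhead is taken into account.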
BatchUpdateItem:
- 1 item => 185 chars
- 10 items => 277 chars
- 100 items => 1171 chars
- 500 items => 5097 chars
What is holding us back from allowing this to be LONGTEXT? Will the field occupy much more space on disk, or is that the only concern? The only "con" I could find is one people have already discussed: "The only difference is the length field in the row data. Using MEDIUMTEXT instead of LONGTEXT saves 1 byte per record. If you have 100 million records, that saves 100 MB." https://stackoverflow.com/questions/58225898/mysql-is-there-a-lack-of-performance-by-using-longtext-instead-of-mediumtext Are there any other cons to LONGTEXT?
Jonathan: taking your estimations into account, we also have limits with items:
- Switching to MEDIUMTEXT will allow ~1.5M items to be queued
- Switching to LONGTEXT will allow ~420M items to be queued
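The same back-of-envelope arithmetic for items, using the BatchUpdateItem measurements above (a sketch, not an exact figure; real jobs carry extra data, which is why the quoted ceilings sit a bit below the raw arithmetic):

    use strict;
    use warnings;

    # Incremental cost per item, from the 100- and 500-item measurements above
    my $chars_per_item = ( 5097 - 1171 ) / 400;    # ~9.8 chars/item

    printf "MEDIUMTEXT => ~%.1fM items\n", ( 2**24 - 1 ) / $chars_per_item / 1_000_000;
    printf "LONGTEXT   => ~%.0fM items\n", ( 2**32 - 1 ) / $chars_per_item / 1_000_000;

This prints ~1.7M and ~438M, in line with the ~1.5M and ~420M quoted above.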
In any case, going MEDIUMTEXT already makes Koha much more stable. We already have some problematic users worldwide since this queueing feature was announced (I have three customers whose batch processing has silently failed because of this since spring, and not just once!), so for me setting this to MEDIUMTEXT at least gives relatively much more stability. But: do we need to allow operators to work with >200K biblio records and >1.5M items in a single batch? That is a business-logic question that only real-world use can answer, so let's take a trial-and-error approach, but LET'S MAKE THE PROBLEM VISIBLE. Concretely, I propose:

1. Set the column to MEDIUMTEXT.
2. Add UI analysis/feedback with a "length estimator and limiter" that hard-fails with UI errors like:
   - "ERROR: batch processing of >200K biblios not yet supported"
   - "ERROR: batch processing of >1.5M items not yet supported"

Saying "yet" signals that feedback from customers should flow back to the community. Hard-failing on bigger batches prevents the HIDDEN ERRORS that are happening now (a rough sketch of such a check follows below). Then, if we do get requests from users worldwide, switching to LONGTEXT can be considered if there is no other choice. Even with LONGTEXT, it seems proper to keep some hard limit on the number of queued items or biblios, because otherwise this becomes a hidden error anyway (OK, a "potential" one at "big numbers", but still).
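A rough sketch of the proposed guard; the constant names, the limits, and the check_batch_size() hook are all hypothetical (nothing like this exists in Koha yet, and the numbers are simply the MEDIUMTEXT ceilings estimated above):

    use strict;
    use warnings;

    # Hypothetical limits derived from the MEDIUMTEXT estimates above
    use constant {
        MAX_BIBLIOS_PER_BATCH => 200_000,
        MAX_ITEMS_PER_BATCH   => 1_500_000,
    };

    # Hypothetical pre-enqueue check: returns an error string, or undef if OK
    sub check_batch_size {
        my ( $type, $count ) = @_;
        if ( $type eq 'biblio' && $count > MAX_BIBLIOS_PER_BATCH ) {
            return "ERROR: batch processing of >200K biblios not yet supported";
        }
        if ( $type eq 'item' && $count > MAX_ITEMS_PER_BATCH ) {
            return "ERROR: batch processing of >1.5M items not yet supported";
        }
        return;
    }

The point is simply to fail loudly at enqueue time instead of letting the job data be truncated silently in the database.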
I think I'd go for LONGTEXT straight away personally.. in the grand scheme, disk space isn't usually the limiting factor these days..
(In reply to Martin Renvoize from comment #7)
> I think I'd go for LONGTEXT straight away personally.. in the grand scheme,
> disk space isn't usually the limiting factor these days..

Let me justify that slightly. My thinking is that there aren't going to be lots of reports against this data, and the queries we're using on this table are specific enough, and not on that particular field, that it shouldn't have any other knock-on performance issues. Where it would count is if we did another utf8mb4-type upgrade down the line: that would be a slow process for such a large field if there were lots of rows in it, but that's all reasonably easy to resolve.
Created attachment 127288 [details] [review]
Bug 29386: Extend background_jobs.data to LONGTEXT

TEXT is too small, we must extend it to allow bigger jobs.
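For readers not following the attachment, the change amounts to a one-line column modification. A minimal sketch, assuming a plain ALTER TABLE (the exact column definition, collation and default should be taken from the attached patch and kohastructure.sql):

    use Modern::Perl;
    use C4::Context;

    my $dbh = C4::Context->dbh;
    # Widen background_jobs.data from TEXT (64 KB) to LONGTEXT (4 GB)
    $dbh->do(q{
        ALTER TABLE background_jobs
        MODIFY COLUMN `data` LONGTEXT DEFAULT NULL
    });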
Looking here
Created attachment 127349 [details] [review]
Bug 29386: Extend background_jobs.data to LONGTEXT

TEXT is too small, we must extend it to allow bigger jobs.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Trivial. Combining SO and QA.
Pushed to master for 21.11, thanks to everybody involved!
Pushed to 21.05.x for 21.05.05
(In reply to Kyle M Hall from comment #14)
> Pushed to 21.05.x for 21.05.05

/!\ Beware: I see the commit https://git.koha-community.org/Koha-community/Koha/commit/78e28dc804d379f7409b553afe5851a8a4a7442b but the change in Koha.pm is missing, no?
Pushed to 20.11.x for 20.11.12
Note this contains 3 commits:
a4aa24931c Bug 29386: DBIC schema changes
9c5e3ef1c9 Bug 29386: DBRev 20.11.11.001
6d74511c5c Bug 29386: Extend background_jobs.data to LONGTEXT
Missing dependencies for 20.05.x; it shouldn't be affected, so no backport.