Bug 29386 - background jobs table data field is a TEXT which is too small
Summary: background jobs table data field is a TEXT which is too small
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Architecture, internals, and plumbing
Version: Main
Hardware: All
OS: All
Importance: P5 - low critical
Assignee: Jonathan Druart
QA Contact: Marcel de Rooy
URL:
Keywords: rel_21_05_candidate
Depends on: 22417
Blocks:
Reported: 2021-11-02 08:59 UTC by Didier Gautheron
Modified: 2022-06-06 20:25 UTC
CC: 8 users

See Also:
Change sponsored?: ---
Patch complexity: Trivial patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in: 21.11.00, 21.05.05, 20.11.12


Attachments
Bug 29386: Extend background_jobs.data to LONGTEXT (1.76 KB, patch)
2021-11-04 09:45 UTC, Jonathan Druart
Bug 29386: Extend background_jobs.data to LONGTEXT (1.86 KB, patch)
2021-11-05 09:32 UTC, Marcel de Rooy

Description Didier Gautheron 2021-11-02 08:59:59 UTC
Hi,

With a big enough batch record modification list you end up with an error:

Template process failed: undef error - , or } expected while parsing object/hash, at character offset 65535 (before "(end of string)") at /home/koha/src/Koha/BackgroundJob.pm line 179.
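
(For context: a MySQL TEXT column holds at most 65,535 bytes, so the JSON-encoded job data gets cut off at exactly that offset and no longer parses when the job is read back. Below is a minimal sketch of that failure mode outside of Koha, using plain JSON.pm; the data shape is only an assumption:)

    use strict;
    use warnings;
    use JSON qw( encode_json decode_json );

    # Build a JSON string bigger than the 65,535-byte TEXT limit.
    my $data = encode_json( { record_ids => [ 1 .. 20_000 ] } );

    # A TEXT column silently keeps only the first 65,535 bytes
    # (in non-strict SQL mode); simulate that truncation here.
    my $stored = substr( $data, 0, 65_535 );

    # Decoding the cut-off JSON then dies with the same kind of
    # "expected ... at character offset 65535" error as above.
    my $decoded = eval { decode_json($stored) };
    print "decode failed: $@" if $@;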
Comment 1 Jonathan Druart 2021-11-02 10:05:14 UTC
I was going to open this bug report!
Comment 2 Jonathan Druart 2021-11-02 10:12:41 UTC
I have been running some tests.

BatchUpdateBiblio will have data with
230   chars for 1   record
836   chars for 10  records
7065  chars for 100 records
28470 chars for 400 records

Seems linear, so the tool currently has a limit of roughly 800 records.

Switching to MEDIUMTEXT will allow ~230k records
Switching to LONGTEXT   will allow ~60M records

This is only an estimate (and a maximum), as the tool can log errors, extra info, etc., which take more characters.
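
For reference, a quick back-of-the-envelope check of those numbers against the MySQL limits (TEXT = 65,535 bytes, MEDIUMTEXT = 16,777,215 bytes, LONGTEXT = 4,294,967,295 bytes), assuming one byte per character:

    # Sketch only, using the 400-record sample above (~71 bytes per biblio).
    my $bytes_per_record = 28_470 / 400;
    printf "TEXT       ~%d records\n", 65_535        / $bytes_per_record;   # ~920
    printf "MEDIUMTEXT ~%d records\n", 16_777_215    / $bytes_per_record;   # ~235k
    printf "LONGTEXT   ~%d records\n", 4_294_967_295 / $bytes_per_record;   # ~60M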
Comment 3 Jonathan Druart 2021-11-02 10:19:25 UTC
BatchUpdateItem

1 item => 185 chars
10 items => 277 chars
100 items => 1171 chars
500 items => 5097 chars
Comment 4 Andrii Nugged 2021-11-03 15:10:34 UTC
What is holding us back from making this LONGTEXT? Is the only drawback, as I thought, that the field will occupy more space on disk? The only "con" I could find is one that others have already raised:

"The only difference is the length field in the row data. Using MEDIUMTEXT instead of LONGTEXT saves 1 byte per record. If you have 100 million records, that saves 100 MB."
https://stackoverflow.com/questions/58225898/mysql-is-there-a-lack-of-performance-by-using-longtext-instead-of-mediumtext

Are there any other cons to using LONGTEXT?
Comment 5 Andrii Nugged 2021-11-03 15:24:42 UTC
Jonathan: if we take your estimates into account, we also have limits for items:

Switching to MEDIUMTEXT will allow ~1.5M items to be queued
Switching to LONGTEXT   will allow ~420M items to be queued
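
(Sanity check: comment 3 gives roughly 10 bytes per item, and 16,777,215 / 10 ≈ 1.6M while 4,294,967,295 / 10 ≈ 430M, so these figures are consistent with the biblio estimates above.)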
Comment 6 Andrii Nugged 2021-11-03 15:31:11 UTC
In any case, moving to MEDIUMTEXT already makes Koha much more "stable". We already have users worldwide hitting problems since this queueing feature was announced (I have three customers whose batch processing has silently failed because of this more than once since spring), so for me switching to MEDIUMTEXT at least gives "relatively much more stability". But:

Do we need to allow operators to work with >200K biblio records and >1.5M items in a single batch? That is more of a business-logic question that only real-world usage can answer, so let's take a "trial and error" approach, but LET'S MAKE the problem VISIBLE.


i.e. I propose this:

1. Set the column to MEDIUMTEXT, and ADD UI analysis/feedback with a "length estimator and limiter" that fails with UI errors like:
   - "ERROR: batch processing of >200K biblios not yet supported"
   - "ERROR: batch processing of >1.5M items not yet supported"
Saying "yet" signals that customers can feed such requests back to the community.

i.e. make it hard-fail on larger batches, to prevent the HIDDEN ERRORS that are happening now. Then, if we get requests from users worldwide, switching to LONGTEXT can be considered, if there is no other choice.


Anyway, even with LONGTEXT it seems proper to enforce some hard limit on the number of queued items or biblios (accordingly), because otherwise this will become a "hidden error" anyway (OK, only a potential one and only at big numbers, but still).
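
A minimal sketch of the kind of pre-flight check proposed above (entirely hypothetical: the helper name, the limit constant, and the call site are not part of Koha or of any patch on this bug):

    use strict;
    use warnings;
    use JSON qw( encode_json );

    # Refuse to enqueue a job whose serialized data would not fit in
    # the background_jobs.data column (MEDIUMTEXT limit, per the proposal).
    my $MAX_DATA_BYTES = 16_777_215;

    sub assert_job_data_fits {
        my ($data) = @_;
        my $size = length encode_json($data);
        die sprintf(
            "ERROR: batch processing of this size is not yet supported (%d bytes > %d)\n",
            $size, $MAX_DATA_BYTES
        ) if $size > $MAX_DATA_BYTES;
        return 1;
    }

    # Would be called from the batch tools before inserting the job row.
    assert_job_data_fits( { record_ids => [ 1 .. 1000 ] } );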
Comment 7 Martin Renvoize 2021-11-04 08:06:17 UTC
I think I'd go for LONGTEXT straight away personally.. in the grand scheme, disk space isn't usually the limiting factor these days..
Comment 8 Martin Renvoize 2021-11-04 08:08:03 UTC
(In reply to Martin Renvoize from comment #7)
> I think I'd go for LONGTEXT straight away personally.. in the grand scheme,
> disk space isn't usually the limiting factor these days..

Let's justify that slightly. My thinking is that there aren't going to be many reports against this data, and the queries we run on this table are specific enough, and not on that particular field, that it shouldn't have any knock-on performance issues. Where it would count is if we did another utf8mb4-type upgrade down the line: that would be a slow process for such a large field if the table had lots of rows, but that's all reasonably easy to resolve.
Comment 9 Jonathan Druart 2021-11-04 09:45:20 UTC
Created attachment 127288 [details] [review]
Bug 29386: Extend background_jobs.data to LONGTEXT

TEXT is too small, we must extend it to allow bigger jobs.
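
(The attachment itself isn't inlined here; the core of the change is presumably the column type in kohastructure.sql plus a matching database update. A sketch of the equivalent statement follows, not the actual patch, with connection details as placeholders:)

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:database=koha', 'koha_user', 'password',
        { RaiseError => 1 } );

    # Widen background_jobs.data from TEXT (max 64 KB) to LONGTEXT (max 4 GB).
    $dbh->do(q{ALTER TABLE background_jobs MODIFY COLUMN `data` LONGTEXT});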
Comment 10 Marcel de Rooy 2021-11-05 09:28:44 UTC
Looking here
Comment 11 Marcel de Rooy 2021-11-05 09:32:54 UTC
Created attachment 127349 [details] [review]
Bug 29386: Extend background_jobs.data to LONGTEXT

TEXT is too small, we must extend it to allow bigger jobs.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Comment 12 Marcel de Rooy 2021-11-05 09:33:21 UTC
Trivial. Combining SO and QA.
Comment 13 Jonathan Druart 2021-11-05 11:15:08 UTC
Pushed to master for 21.11, thanks to everybody involved!
Comment 14 Kyle M Hall 2021-11-11 12:23:18 UTC
Pushed to 21.05.x for 21.05.05
Comment 15 Fridolin Somers 2021-11-12 23:01:11 UTC
(In reply to Kyle M Hall from comment #14)
> Pushed to 21.05.x for 21.05.05

/!\ Beware:

I see the commit https://git.koha-community.org/Koha-community/Koha/commit/78e28dc804d379f7409b553afe5851a8a4a7442b

The change in Koha.pm is missing, no?
Comment 16 Fridolin Somers 2021-11-12 23:05:09 UTC
Pushed to 20.11.x for 20.11.12
Comment 17 Fridolin Somers 2021-11-12 23:10:50 UTC
Note this contains 3 commits:
a4aa24931c Bug 29386: DBIC schema changes
9c5e3ef1c9 Bug 29386: DBRev 20.11.11.001
6d74511c5c Bug 29386: Extend background_jobs.data to LONGTEXT
Comment 18 Victor Grousset/tuxayo 2021-11-16 08:14:53 UTC
Missing dependencies for 20.05.x; it shouldn't be affected, so no backport.