As I noted on the koha-devel listserv, and as Martin noted on https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=11944#c247 and Mark Tompsett on https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15794#c2, we might want to use utf8mb4 instead of utf8 for MySQL tables, columns, and connections. MySQL's utf8 charset has a 3-byte-per-character limit: https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-utf8.html. While most "normal" characters in most languages fit in 1-3 bytes, UTF-8 allows up to 4 bytes per character, so we have a problem storing less common characters in Chinese, Japanese, and Korean, among other languages. It also means we can't store emoji. When MySQL encounters a 4-byte UTF-8 character in a utf8 column, it truncates the string from that character onward. Unfortunately, it doesn't raise an error; it raises a warning, which isn't easy to detect. In my case, I'm trying to store a MARCXML record containing a 4-byte character, and while C4::Biblio::AddBiblio returns true, MySQL corrupts the XML record when utf8 is used rather than utf8mb4 (both for the MySQL column and for the MySQL connection set up by Koha::Database).
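The truncation described above is easy to reproduce outside Koha. A minimal sketch, assuming a throwaway test table (none of this is Koha schema) and a non-strict sql_mode; with strict mode the INSERT errors out instead of warning:

```sql
-- Throwaway table, utf8 (3-byte) first; names are made up for illustration.
CREATE TABLE emoji_test (
  txt VARCHAR(40) CHARACTER SET utf8 COLLATE utf8_unicode_ci
);

-- F0 9F 98 82 is U+1F602, a 4-byte emoji. With a non-strict sql_mode this
-- only warns (1366 "Incorrect string value") and stores just 'before ':
INSERT INTO emoji_test (txt)
  VALUES (CONCAT('before ', CONVERT(UNHEX('F09F9882') USING utf8mb4), ' after'));
SHOW WARNINGS;
SELECT txt FROM emoji_test;

-- The same value survives intact once the column is utf8mb4:
ALTER TABLE emoji_test
  MODIFY txt VARCHAR(40) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
INSERT INTO emoji_test (txt)
  VALUES (CONCAT('before ', CONVERT(UNHEX('F09F9882') USING utf8mb4), ' after'));
```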
Interestingly, DBD::mysql actually recommends mysql_enable_utf8mb4 over mysql_enable_utf8: http://search.cpan.org/dist/DBD-mysql/lib/DBD/mysql.pm. It also looks like we might be calling SET NAMES redundantly in Koha::Database, but I don't think that matters much. You can use the following query to see whether you have any incomplete MARCXML records in Koha: select biblionumber,metadata from biblio_metadata where ExtractValue(metadata,'/record') is null; In fact, that query is useful for finding any MARCXML record that will cause an XML parser to fail.
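To see what a given connection and schema actually negotiated, the following standard MySQL statements (nothing Koha-specific) are a quick check:

```sql
-- What the current session negotiated:
SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';

-- Force the 4-byte-capable charset for this session (what Koha::Database
-- would need to issue instead of plain "SET NAMES utf8"):
SET NAMES utf8mb4;

-- What the schema itself declares, table by table:
SELECT TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE();
```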
We (PTFS Europe) have indeed been installing with utf8mb4 as standard practice for a good number of years now and can confirm that everything you've said is true; we've never seen any side effects. I'd certainly back its use by default during installation.
Created attachment 70854 [details] [review] Bug 18336: Full stack tests for supplemental UTF-8 chars This patch introduces tests for Koha's support for 4-byte supplemental UTF-8 chars. The encoding/decoding tools handle these gracefully; the missing piece is the MySQL DB backend. The tests in this patch: - Add a couple of records for each flavour (MARC21 and UNIMARC) so search_utf8.t tests that 4-byte chars are handled correctly - Add emoji testing in auth_values_input_www.t To test: - Apply this patch - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: It should fail if the DB hasn't been migrated to utf8mb4 Sponsored-by: Hotchkiss School
Created attachment 70855 [details] [review] Bug 18336: SET NAMES utf8mb4 in Koha::Database Sponsored-by: Hotchkiss School
Created attachment 70856 [details] [review] Bug 18336: Convert schema from utf8 to utf8mb4 This patch adapts the DB structure so it uses the utf8mb4 encoding and utf8mb4_unicode_ci collation. Indexes on VARCHAR columns with a prefix length higher than 191 are shortened because of the smaller maximum index length under utf8mb4. Note: please beware that testing this patchset risks your data, and the patchset includes reinitializing the DB. To test: - Be on the master branch - Have a clean DB: $ reset_all (y) - Apply the first patch (Unit tests) - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: Tests fail because Koha doesn't support supplemental (UTF-8) chars. - Apply the rest of this patchset - Upgrade the schema: $ kshell k$ perl installer/data/mysql/updatedatabase.pl - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! - Now start from a clean DB - Run: $ reset_all (y) - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! Verify you can use emojis all over the place (MARC records, AV descriptions, etc.). Sponsored-by: Hotchkiss School
Created attachment 70857 [details] [review] Bug 18336: DBIC update Sponsored-by: Hotchkiss School
Created attachment 70858 [details] [review] Bug 18336: Atomic update Sponsored-by: Hotchkiss School
Created attachment 70859 [details] [review] Bug 18336: Add explicit index names to kohastructure.sql This patch just adds an explicit index name to the search_marc_map table. The atomicupdate on this patchset is a good example of why we better have them. Sponsored-by: Hotchkiss School
Comment on attachment 70857 [details] [review] Bug 18336: DBIC update Review of attachment 70857 [details] [review]: ----------------------------------------------------------------- Why did the size change from 255 to 191?
(In reply to M. Tompsett from comment #9) > Why did the size change from 255 to 191? After a conversation on IRC with tcohen: a lot of fields are shrunk from 255 to 191. This is because an InnoDB index on a VARCHAR column has a maximum size of 767 bytes, and utf8 uses up to 3 bytes per character while utf8mb4 uses up to 4: 767/3 ≈ 255, but 767/4 ≈ 191. https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-conversion.html We gain functionality (necessary for true Unicode) and lose some index space (not an issue yet).
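For illustration only (the tables below are made up, not part of kohastructure.sql), this is the limit the 191 figure works around on InnoDB builds with the classic 767-byte index-prefix cap; newer row formats with innodb_large_prefix allow 3072 bytes, but the patchset can't assume that:

```sql
-- 255 chars * 4 bytes = 1020 bytes > 767: this fails (or, depending on
-- version/strictness, warns) with 1071 "Specified key was too long;
-- max key length is 767 bytes".
CREATE TABLE idx_demo_fail (
  code VARCHAR(255) NOT NULL,
  KEY idx_code (code)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- 191 chars * 4 bytes = 764 bytes <= 767: shrinking the column (or the
-- index prefix) to 191 makes the same index fit.
CREATE TABLE idx_demo_ok (
  code VARCHAR(191) NOT NULL,
  KEY idx_code (code)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```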
(In reply to M. Tompsett from comment #10)
> (In reply to M. Tompsett from comment #9)
> > Why did the size change from 255 to 191?
>
> After a conversation on IRC with tcohen:
> A lot of fields are shrunk from 255 to 191. This is because

A lot of *indexes* are shrunk from 255 to 191. That probably makes searches on values longer than 191 chars need a bit more CPU, but no data is shrunk in most of the changes. The tags_* tables do get their 'term' column shrunk, because I didn't manage to fix it any other way. It seems a safe change, though; tags are usually a single word...
Created attachment 70904 [details] [review] Bug 18336: Full stack tests for supplemental UTF-8 chars This patch introduces tests for Koha's support for 4-byte supplemental UTF-8 chars. The encoding/decoding tools handle these gracefully; the missing piece is the MySQL DB backend. The tests in this patch: - Add a couple of records for each flavour (MARC21 and UNIMARC) so search_utf8.t tests that 4-byte chars are handled correctly - Add emoji testing in auth_values_input_www.t To test: - Apply this patch - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: It should fail if the DB hasn't been migrated to utf8mb4 Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Created attachment 70905 [details] [review] Bug 18336: SET NAMES utf8mb4 in Koha::Database Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Created attachment 70906 [details] [review] Bug 18336: Convert schema from utf8 to utf8mb4 This patch adapts the DB structure so it uses the utf8mb4 encoding and utf8mb4_unicode_ci collation. Indexes on VARCHAR columns with a prefix length higher than 191 are shortened because of the smaller maximum index length under utf8mb4. Note: please beware that testing this patchset risks your data, and the patchset includes reinitializing the DB. To test: - Be on the master branch - Have a clean DB: $ reset_all (y) - Apply the first patch (Unit tests) - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: Tests fail because Koha doesn't support supplemental (UTF-8) chars. - Apply the rest of this patchset - Upgrade the schema: $ kshell k$ perl installer/data/mysql/updatedatabase.pl - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! - Now start from a clean DB - Run: $ reset_all (y) - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! Verify you can use emojis all over the place (MARC records, AV descriptions, etc.). Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Created attachment 70907 [details] [review] Bug 18336: DBIC update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Created attachment 70908 [details] [review] Bug 18336: Atomic update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Created attachment 70909 [details] [review] Bug 18336: Add explicit index names to kohastructure.sql This patch just adds an explicit index name to the search_marc_map table. The atomicupdate on this patchset is a good example of why we better have them. Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Followed test plan. Also added a patron with emojis in so many fields, including username and password! And the patron was able to log in with emojis!
Wondering, should we also keep the case sensitivity on subfields for the auth frameworks? The item search fields table might be another use case.
(In reply to Katrin Fischer from comment #19) > Wondering, should we also keep the case sensitivity on subfields for the > auth frameworks? The item search fields table might be another use case. Katrin, possibly. But a separate bug for sure.
Created attachment 71022 [details] [review] Bug 18336: Fix missing utf8_bin > utf8mb4_bin translation in kohastructure.sql Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
(In reply to Tomás Cohen Arazi from comment #21) > Created attachment 71022 [details] [review] [review] > Bug 18336: Fix missing utf8_bin > utf8mb4_bin translation in > kohastructure.sql > > Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> This was covered on the atomic update, but I forgot to add it to kohastructure.sql.
(In reply to Tomás Cohen Arazi from comment #20) > (In reply to Katrin Fischer from comment #19) > > Wondering, should we also keep the case sensitivity on subfields for the > > auth frameworks? The item search fields table might be another use case. > > Katrin, possibly. But a separate bug for sure. Agreed - I just noticed we didn't do it for those tables.
I've tried, but failed:

kohadev-koha@kohadevbox:/home/vagrant/kohaclone$ ./installer/data/mysql/updatedatabase.pl
DEV atomic update: bug_18336_utf8mb4.perl
Upgrade to XXX done (Bug 18336: Convert DB tables to utf8mb4)

kohadev-koha@kohadevbox:/home/vagrant/kohaclone$ prove t/db_dependent/www/search_utf8.t t/db_dependent/www/auth_values_input_www.t
t/db_dependent/www/search_utf8.t ............ 1/99 Link not found at t/db_dependent/www/search_utf8.t line 146.
# Looks like your test exited with 255 just after 4.
t/db_dependent/www/search_utf8.t ............ Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 95/99 subtests
t/db_dependent/www/auth_values_input_www.t .. 3/34 Error POSTing http://kohadev-intra.myDNSname.org:8080/cgi-bin/koha/admin/authorised_values.pl: Internal Server Error at t/db_dependent/www/auth_values_input_www.t line 85.
# Looks like your test exited with 255 just after 5.
t/db_dependent/www/auth_values_input_www.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 29/34 subtests

Test Summary Report
-------------------
t/db_dependent/www/search_utf8.t (Wstat: 65280 Tests: 4 Failed: 0)
  Non-zero exit status: 255
  Parse errors: Bad plan. You planned 99 tests but ran 4.
t/db_dependent/www/auth_values_input_www.t (Wstat: 65280 Tests: 5 Failed: 0)
  Non-zero exit status: 255
  Parse errors: Bad plan. You planned 34 tests but ran 5.
Files=2, Tests=9, 7 wallclock secs ( 0.03 usr 0.00 sys + 2.71 cusr 0.80 csys = 3.54 CPU)
Result: FAIL

Did I miss something?
Hm it worked on a clean db (of course) :)
Without the sample data the tests fail - but I guess that's ok. Passing QA on that assumption, please stop me if I am mistaken ;)
Need to fix the QA tools? FAIL installer/data/mysql/kohastructure.sql FAIL charset_collate The table columns_settings does not have the current charset collate (see bug 11944) ...
Created attachment 71210 [details] [review] Bug 18336: Full stack tests for supplemental UTF-8 chars This patch introduces tests for Koha's support for 4-byte supplemental UTF-8 chars. The encoding/decoding tools handle these gracefully; the missing piece is the MySQL DB backend. The tests in this patch: - Add a couple of records for each flavour (MARC21 and UNIMARC) so search_utf8.t tests that 4-byte chars are handled correctly - Add emoji testing in auth_values_input_www.t To test: - Apply this patch - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: It should fail if the DB hasn't been migrated to utf8mb4 Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71211 [details] [review] Bug 18336: SET NAMES utf8mb4 in Koha::Database Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71212 [details] [review] Bug 18336: Convert schema from utf8 to utf8mb4 This patch adapts the DB structure so it uses the utf8mb4 encoding and utf8mb4_unicode_ci collation. Indexes on VARCHAR columns with a prefix length higher than 191 are shortened because of the smaller maximum index length under utf8mb4. Note: please beware that testing this patchset risks your data, and the patchset includes reinitializing the DB. To test: - Be on the master branch - Have a clean DB: $ reset_all (y) - Apply the first patch (Unit tests) - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: Tests fail because Koha doesn't support supplemental (UTF-8) chars. - Apply the rest of this patchset - Upgrade the schema: $ kshell k$ perl installer/data/mysql/updatedatabase.pl - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! - Now start from a clean DB - Run: $ reset_all (y) - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! Verify you can use emojis all over the place (MARC records, AV descriptions, etc.). Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71213 [details] [review] Bug 18336: DBIC update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71214 [details] [review] Bug 18336: Atomic update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71215 [details] [review] Bug 18336: Add explicit index names to kohastructure.sql This patch just adds an explicit index name to the search_marc_map table. The atomicupdate on this patchset is a good example of why we better have them. Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71216 [details] [review] Bug 18336: Fix missing utf8_bin > utf8mb4_bin translation in kohastructure.sql Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Thought a bit more about it - could we get a second QA opinion please?
(In reply to Katrin Fischer from comment #25) > Hm it worked on a clean db (of course) :) If by clean you mean the result of reset_all, then the failure might be related to not restarting the Plack process, in which case the DB connection isn't properly configured. It is important to be sure the upgrade process goes smoothly!
It worked with a reset_all db, but didn't work when I tried a fresh install, patches applied after my initial testing. (drop database, create database, web installer with mandatory data + onboarding tool). Didn't work = tests didn't pass. That got me confused a bit so asking for a second opinion now.
(In reply to Katrin Fischer from comment #37) > It worked with a reset_all db, but didn't work when I tried a fresh install, > patches applied after my initial testing. (drop database, create database, > web installer with mandatory data + onboarding tool). > Didn't work = tests didn't pass. > That got me confused a bit so asking for a second opinion now. I tried it too: recreated the db manually, added all mandatory and optional data, and went through the onboarding tool. And the tests started failing.
(In reply to Josef Moravec from comment #38) > (In reply to Katrin Fischer from comment #37) > > It worked with a reset_all db, but didn't work when I tried a fresh install, > > patches applied after my initial testing. (drop database, create database, > > web installer with mandatory data + onboarding tool). > > Didn't work = tests didn't pass. > > That got me confused a bit so asking for a second opinion now. > > I tried it too, recreate db manually. Added all mandatory and optional data, > go through onboarding tool. And the test started failing. It seems to me, then, that there's a problem with the tests depending on non-existent data. Can you please check whether this is a problem with this patchset, or whether master fails too?
Confirmed, it fails for me as well.
(In reply to Jonathan Druart from comment #40) > Confirmed, it fails for me as well. And all green on master.
All green after an upgrade as well! So it only fails for new installs.
(In reply to Jonathan Druart from comment #41) > (In reply to Jonathan Druart from comment #40) > > Confirmed, it fails for me as well. > > And all green on master. Not for me - without data, it fails on master the same way, when I do reset_all on kohadevbox then it is green
(In reply to Josef Moravec from comment #43) > (In reply to Jonathan Druart from comment #41) > > (In reply to Jonathan Druart from comment #40) > > > Confirmed, it fails for me as well. > > > > And all green on master. > > Not for me - without data, it fails on master the same way, > > when I do reset_all on kohadevbox then it is green I am always using misc4dev to populate the DB, so with the sample data
This is certainly not expected:

Fresh install (with and without the patches):
| biblio | CREATE TABLE `biblio` (
  `biblionumber` int(11) NOT NULL AUTO_INCREMENT,
  `frameworkcode` varchar(4) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `author` mediumtext COLLATE utf8_unicode_ci,
  `title` mediumtext COLLATE utf8_unicode_ci,
  `unititle` mediumtext COLLATE utf8_unicode_ci,
  `notes` mediumtext COLLATE utf8_unicode_ci,

Fresh master install + upgrade:
| biblio | CREATE TABLE `biblio` (
  `biblionumber` int(11) NOT NULL AUTO_INCREMENT,
  `frameworkcode` varchar(4) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `author` longtext COLLATE utf8mb4_unicode_ci,
  `title` longtext COLLATE utf8mb4_unicode_ci,
  `unititle` longtext COLLATE utf8mb4_unicode_ci,
  `notes` longtext COLLATE utf8mb4_unicode_ci,
(In reply to Jonathan Druart from comment #45)
> This is certainly not expected:
>
> Fresh install (with and without the patches):
> | biblio | CREATE TABLE `biblio` (
>   `biblionumber` int(11) NOT NULL AUTO_INCREMENT,
>   `frameworkcode` varchar(4) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
>   `author` mediumtext COLLATE utf8_unicode_ci,
>   `title` mediumtext COLLATE utf8_unicode_ci,
>   `unititle` mediumtext COLLATE utf8_unicode_ci,
>   `notes` mediumtext COLLATE utf8_unicode_ci,
>
> Fresh master install + upgrade:
> | biblio | CREATE TABLE `biblio` (
>   `biblionumber` int(11) NOT NULL AUTO_INCREMENT,
>   `frameworkcode` varchar(4) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
>   `author` longtext COLLATE utf8mb4_unicode_ci,
>   `title` longtext COLLATE utf8mb4_unicode_ci,
>   `unititle` longtext COLLATE utf8mb4_unicode_ci,
>   `notes` longtext COLLATE utf8mb4_unicode_ci,

It is interesting that the atomicupdate DOESN'T touch the biblio table besides changing the collation and encoding!
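That type promotion is a documented side effect of the conversion style: ALTER TABLE ... CONVERT TO CHARACTER SET changes column types where needed so the same number of characters still fits at 4 bytes each, which is why an upgraded DB ends up with LONGTEXT where a fresh install has MEDIUMTEXT. A rough sketch of the two ALTER flavours involved, using biblio.author from the dumps above (this is not the actual atomic update):

```sql
-- Converting the whole table lets MySQL promote TEXT-family columns
-- (TEXT -> MEDIUMTEXT, MEDIUMTEXT -> LONGTEXT) to preserve character capacity:
ALTER TABLE biblio CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- Pinning a column's type explicitly keeps it, at the cost of fewer maximum
-- characters per value:
ALTER TABLE biblio
  MODIFY author MEDIUMTEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```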
(In reply to Jonathan Druart from comment #44) > (In reply to Josef Moravec from comment #43) > > (In reply to Jonathan Druart from comment #41) > > > (In reply to Jonathan Druart from comment #40) > > > > Confirmed, it fails for me as well. > > > > > > And all green on master. > > > > Not for me - without data, it fails on master the same way, > > > > when I do reset_all on kohadevbox then it is green > > I am always using misc4dev to populate the DB, so with the sample data The sample data doesn't include biblios. The one containing them is misc4dev.
Created attachment 71337 [details] [review] Bug 18336: Shift *TEXT columns size Because of the 3-byte vs. 4-byte character size change in utf8mb4, altering a column's encoding from utf8 to utf8mb4 results in these changes: TEXT => MEDIUMTEXT MEDIUMTEXT => LONGTEXT The length information stored in the row (the text itself goes to separate storage) needs one more byte, because each character may now take more bytes, so there needs to be room for the larger byte count in the column. This is a debatable change, but the patch needs to be included along with the rest of the patchset for consistency. Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 71779 [details] [review] Bug 18336: Full stack tests for supplemental UTF-8 chars This patch introduces tests for Koha's support for 4-byte supplemental UTF-8 chars. The encoding/decoding tools handle these gracefully; the missing piece is the MySQL DB backend. The tests in this patch: - Add a couple of records for each flavour (MARC21 and UNIMARC) so search_utf8.t tests that 4-byte chars are handled correctly - Add emoji testing in auth_values_input_www.t To test: - Apply this patch - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: It should fail if the DB hasn't been migrated to utf8mb4 Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 71780 [details] [review] Bug 18336: SET NAMES utf8mb4 in Koha::Database Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 71781 [details] [review] Bug 18336: Convert schema from utf8 to utf8mb4 This patch adapts the DB structure so it uses the utf8mb4 encoding and utf8mb4_unicode_ci collation. Indexes on VARCHAR columns with a prefix length higher than 191 are shortened because of the smaller maximum index length under utf8mb4. Note: please beware that testing this patchset risks your data, and the patchset includes reinitializing the DB. To test: - Be on the master branch - Have a clean DB: $ reset_all (y) - Apply the first patch (Unit tests) - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: Tests fail because Koha doesn't support supplemental (UTF-8) chars. - Apply the rest of this patchset - Upgrade the schema: $ kshell k$ perl installer/data/mysql/updatedatabase.pl - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! - Now start from a clean DB - Run: $ reset_all (y) - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! Verify you can use emojis all over the place (MARC records, AV descriptions, etc.). Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 71782 [details] [review] Bug 18336: DBIC update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71783 [details] [review] Bug 18336: Atomic update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71784 [details] [review] Bug 18336: Add explicit index names to kohastructure.sql This patch just adds an explicit index name to the search_marc_map table. The atomicupdate on this patchset is a good example of why we better have them. Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71785 [details] [review] Bug 18336: Fix missing utf8_bin > utf8mb4_bin translation in kohastructure.sql Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Created attachment 71786 [details] [review] Bug 18336: Shift *TEXT columns size Because of the 3-byte vs. 4-byte character size change in utf8mb4, altering a column's encoding from utf8 to utf8mb4 results in these changes: TEXT => MEDIUMTEXT MEDIUMTEXT => LONGTEXT The length information stored in the row (the text itself goes to separate storage) needs one more byte, because each character may now take more bytes, so there needs to be room for the larger byte count in the column. This is a debatable change, but the patch needs to be included along with the rest of the patchset for consistency. Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 71836 [details] [review] Bug 18336: Library groups fix Library groups were added after this patchset was submitted. This patch adjusts kohastructure.sql entry for library_groups. Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Created attachment 71843 [details] [review] Bug 18336: Full stack tests for supplemental UTF-8 chars This patch introduces tests for Koha's support for 4-byte supplemental UTF-8 chars. The encoding/decoding tools handle these gracefully; the missing piece is the MySQL DB backend. The tests in this patch: - Add a couple of records for each flavour (MARC21 and UNIMARC) so search_utf8.t tests that 4-byte chars are handled correctly - Add emoji testing in auth_values_input_www.t To test: - Apply this patch - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: It should fail if the DB hasn't been migrated to utf8mb4 Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71844 [details] [review] Bug 18336: SET NAMES utf8mb4 in Koha::Database Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71845 [details] [review] Bug 18336: Convert schema from utf8 to utf8mb4 This patch adapts the DB structure so it uses the utf8mb4 encoding and utf8mb4_unicode_ci collation. Indexes on VARCHAR columns with a prefix length higher than 191 are shortened because of the smaller maximum index length under utf8mb4. Note: please beware that testing this patchset risks your data, and the patchset includes reinitializing the DB. To test: - Be on the master branch - Have a clean DB: $ reset_all (y) - Apply the first patch (Unit tests) - Run: $ kshell k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => FAIL: Tests fail because Koha doesn't support supplemental (UTF-8) chars. - Apply the rest of this patchset - Upgrade the schema: $ kshell k$ perl installer/data/mysql/updatedatabase.pl - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! - Now start from a clean DB - Run: $ reset_all (y) - Run the tests: k$ prove t/db_dependent/www/search_utf8.t \ t/db_dependent/www/auth_values_input_www.t => SUCCESS: Tests pass! Verify you can use emojis all over the place (MARC records, AV descriptions, etc.). Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71846 [details] [review] Bug 18336: DBIC update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71847 [details] [review] Bug 18336: Atomic update Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71848 [details] [review] Bug 18336: Add explicit index names to kohastructure.sql This patch just adds an explicit index name to the search_marc_map table. The atomicupdate on this patchset is a good example of why we better have them. Sponsored-by: Hotchkiss School Signed-off-by: Mark Tompsett <mtompset@hotmail.com> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71849 [details] [review] Bug 18336: Fix missing utf8_bin > utf8mb4_bin translation in kohastructure.sql Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71850 [details] [review] Bug 18336: Shift *TEXT columns size Because of the 3-byte vs. 4-byte character size change in utf8mb4, altering a column's encoding from utf8 to utf8mb4 results in these changes: TEXT => MEDIUMTEXT MEDIUMTEXT => LONGTEXT The length information stored in the row (the text itself goes to separate storage) needs one more byte, because each character may now take more bytes, so there needs to be room for the larger byte count in the column. This is a debatable change, but the patch needs to be included along with the rest of the patchset for consistency. Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created attachment 71851 [details] [review] Bug 18336: Library groups fix Library groups were added after this patchset was submitted. This patch adjusts kohastructure.sql entry for library_groups. Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io> Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Can there be a loss of performance after "text" → "mediumtext"? https://stackoverflow.com/questions/3516290/any-pitfalls-of-converting-mysql-text-field-to-mediumtext > with PHP 5.6 and PDO, mediumtext uses more memory than text (with same value, an empty string). In my simple test, the difference is 0.5 MiB vs. 1.5 MiB Our ORM might have such an issue.
(In reply to Victor Grousset/tuxayo from comment #67) > Can there be a loss of performance after "text" → "mediumtext"? > > https://stackoverflow.com/questions/3516290/any-pitfalls-of-converting-mysql- > text-field-to-mediumtext > > with PHP 5.6 and PDO, mediumtext uses more memory than text (with same value, an empty string). In my simple test, the difference is 0.5 MiB vs. 1.5 MiB > > Our ORM might have such an issue. Can you think of a way to measure that? I'm asking in #dbix-class@OFTC but have no answers yet.
(In reply to M. Tompsett from comment #10) > A lot of fields are shrunk from 255 to 191. This is because > index size for VARCHAR on InnoDB has a max size, in bytes, of 767 bytes (or > similar) and utf8 uses 3 bytes for each char, and utf8mb4, 4. > 767/3 > 255. but 767/4 > 191. > https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-conversion.html > > Gain functionality (necessary for true unicode), lose space (not an issue > yet). So, isn't there a loss of data on VARCHAR ? I didn't see VARCHAR being converted to something else.
(In reply to Tomás Cohen Arazi from comment #68) > (In reply to Victor Grousset/tuxayo from comment #67) > > Can there be a loss of performance after "text" → "mediumtext"? > > > > https://stackoverflow.com/questions/3516290/any-pitfalls-of-converting-mysql- > > text-field-to-mediumtext > > > with PHP 5.6 and PDO, mediumtext uses more memory than text (with same value, an empty string). In my simple test, the difference is 0.5 MiB vs. 1.5 MiB > > > > Our ORM might have such an issue. > > Can u think of a way to meassure that? I'm asking at #dbix-class@OFTC with > no answers yet. Is there anything that could keep instances of ORM objects alive for at least a few seconds? If not, maybe this isn't an issue. Still, if some operations caused memory spikes of, say, 100 MiB instead of 30 MiB, that could be a problem. Actually it could be worse: https://stackoverflow.com/questions/3516290/any-pitfalls-of-converting-mysql-text-field-to-mediumtext > Some client interface libraries pre-allocate a buffer to hold results, and they allocate enough memory for the largest possible value, since the client doesn't know the data until you fetch. > Therefore the library would allocate 16MB per mediumtext while it would allocate 64KB for a text. This is something to watch out for if you have a low memory limit in your client layer. For instance, PHP has a memory_limit config parameter for scripts, and the buffer allocated for data result sets would count toward this.
(In reply to Victor Grousset/tuxayo from comment #69) > So, isn't there a loss of data on VARCHAR ? I didn't see VARCHAR being > converted to something else. Nothing to worry about it seems. https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-conversion.html > Because utf8 cannot store the character at all, utf8 columns have no supplementary characters and you need not worry about converting characters or losing data when converting to utf8mb4.
The only thing I found in the mysql doc is: """ Tip: To save space with utf8mb4, use VARCHAR instead of CHAR. Otherwise, MySQL must reserve four bytes for each character in a CHAR CHARACTER SET utf8mb4 column because that is the maximum possible length. For example, MySQL must reserve 40 bytes for a CHAR(10) CHARACTER SET utf8mb4 column. """ https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html
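A tiny illustration of that tip (the table below is made up, not Koha's; the byte figures follow the quoted documentation):

```sql
-- Under utf8mb4, CHAR(10) always reserves 10 * 4 = 40 bytes per row, while
-- VARCHAR(10) stores only what the value actually needs (plus a 1-byte
-- length prefix here, since the maximum byte length is under 256).
CREATE TABLE char_demo (
  fixed CHAR(10),
  var   VARCHAR(10)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```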
Pushed to master for 18.05, thanks to everybody involved!
QA test updated: commit 19e3654de795071116bfd064a467ced16e7d9c34 Bug 18336: Check the charset collate for kohastructure.sql
Tomas, it seems that I missed something, the DBIC schema files are different:

 Koha/Schema/Result/Club.pm                        | 8 ++++----
 Koha/Schema/Result/ClubTemplate.pm                | 8 ++++----
 Koha/Schema/Result/ClubTemplateEnrollmentField.pm | 8 ++++----
 Koha/Schema/Result/ClubTemplateField.pm           | 8 ++++----
 Koha/Schema/Result/MarcSubfieldStructure.pm       | 12 ++++++------
 Koha/Schema/Result/UploadedFile.pm                | 8 ++++----

   "name",
 - { data_type => "text", is_nullable => 0 },
 + { data_type => "tinytext", is_nullable => 0 },

The ones in master have been generated with a DB upgraded from v17.11.00, but with a fresh install (kohastructure.sql) they are different. That is weird; I am sure I took the time to confirm this before I pushed. Can you double-check and provide a follow-up please (a change to kohastructure or updatedatabase, I guess)?
(In reply to Jonathan Druart from comment #75) > Tomas, it seems that I missed something, the DBIC schema files are different: > > Koha/Schema/Result/Club.pm | 8 ++++---- > Koha/Schema/Result/ClubTemplate.pm | 8 ++++---- > Koha/Schema/Result/ClubTemplateEnrollmentField.pm | 8 ++++---- > Koha/Schema/Result/ClubTemplateField.pm | 8 ++++---- > Koha/Schema/Result/MarcSubfieldStructure.pm | 12 ++++++------ > Koha/Schema/Result/UploadedFile.pm | 8 ++++---- > > > "name", > - { data_type => "text", is_nullable => 0 }, > + { data_type => "tinytext", is_nullable => 0 }, > > > The ones in master have been generated with a DB upgraded from v17.11.00, > but with a fresh install (kohastructure.sql) they are different. > That is weird I am sure I took time to confirm that before I pushed. > > Can you double-check and provide a follow-up please (change to kohastructure > or updatedabase I guess)? I will try different engines too
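One quick way to spot this kind of drift (just a diagnostic query against information_schema, not part of the patchset) is to dump the text/varchar column definitions from both the fresh-install DB and the upgraded DB and diff the output:

```sql
-- Run while connected to each Koha database in turn, then diff the results:
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND DATA_TYPE IN ('tinytext', 'text', 'mediumtext', 'longtext', 'varchar')
ORDER BY TABLE_NAME, COLUMN_NAME;
```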
Created attachment 71953 [details] [review] Bug 18336: (follow-up) Shift TINYTEXT columns This patch fixes two errors that slipped in the patchset. To test: - Create a dummy branch for testing: $ cd kohaclone $ git fetch $ git checkout v17.11.00 -b dummy - Reset your working DB $ reset_all (y) - Set your branch to current master $ git reset --hard origin/master - Update the DB $ updatedatabase - Update the schema files $ kshell k$ misc/devel/update_dbix_class_files.pl \ --db_name koha_kohadev \ --db_user koha_kohadev \ --db_passwd password k$ exit $ git diff => FAIL: There are discrepancies on upgrades - Reset to v17.11.00 revision and DB: $ git reset --hard v17.11.00 $ reset_all (y) - Set your branch to current master $ git reset --hard origin/master - Apply this patch - Update the DB $ updatedatabase - Update the schema files $ kshell k$ misc/devel/update_dbix_class_files.pl \ --db_name koha_kohadev \ --db_user koha_kohadev \ --db_passwd password k$ exit $ git diff => SUCCESS: No discrepancies! - Reset to HEAD to get rid of the schema changes $ git reset --hard HEAD - Regenerate the schema files on top of this patch $ dbic ; cd /home/vagrant/kohaclone $ git diff => SUCCESS: No discrepancies! - Sign off :-D Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Last patch pushed to master, thanks Tomás!
Seems like a big change; not backported to 17.11.
*** Bug 13239 has been marked as a duplicate of this bug. ***