Bug 23073 - wiki.koha-community.org needs updating to a later version
Summary: wiki.koha-community.org needs updating to a later version
Status: ASSIGNED
Alias: None
Product: Koha
Classification: Unclassified
Component: Websites, Mailing Lists, etc
Version: unspecified
Hardware: All
OS: All
Importance: P5 - low normal
Assignee: Thomas Dukleth
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-06-06 21:41 UTC by David Nind
Modified: 2019-10-15 15:48 UTC
CC List: 3 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Who signed the patch off:
Text to go in the release notes:
Version(s) released in:


Attachments

Description David Nind 2019-06-06 21:41:12 UTC
The version of MediaWiki used for https://wiki.koha-community.org needs updating.

Current version information: https://wiki.koha-community.org/wiki/Special:Version
Comment 1 David Nind 2019-06-06 21:45:58 UTC
See discussion on IRC:
- http://irc.koha-community.org/koha/2019-06-06#i_2149184 and
- http://irc.koha-community.org/koha/2019-06-06#i_2149465

A blocker for updating in the past seems to have been the CategoryTree extension and other extensions.
Comment 2 David Nind 2019-06-06 21:48:19 UTC
I am happy to help with testing, including doing a test upgrade locally to see what the issues are, and mapping out a plan for updating.

This would require a database dump and the site files - not sure what the security implications are of this as the database would include user name and password information.
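
For what it is worth, I imagine the handover would be something along these lines; the database name and paths below are placeholders, and how credentials and user data are handled would need to be agreed first:

import subprocess

# Placeholder database and path names; the real ones would come from the
# server administrators, and the dump contains user account data.
with open("koha_wiki.dump", "wb") as out:
    subprocess.run(["pg_dump", "--format=custom", "mediawiki"],
                   stdout=out, check=True)
subprocess.run(["tar", "czf", "koha_wiki_files.tar.gz", "/var/www/wiki"],
               check=True)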
Comment 3 Thomas Dukleth 2019-07-11 01:18:32 UTC
Perhaps the trend of consensus points to starting over with a new instance of MediaWiki or Dokuwiki, which may simplify the problem, even if that choice would be an unfortunate consequence of my time having become less available for Koha in recent years.

Updating the Koha MediaWiki instance while retaining the current Postgres database implementation may prevent the best improvements to the Koha MediaWiki implementation, as too many extensions, scripts, etc. do not work properly with Postgres.  If MediaWiki is updated and new content is added, there may be no good way to migrate to MySQL from that point.  See https://wiki.koha-community.org/wiki/Proposal_for_Wiki_Curator_17.05_Thomas_Dukleth#Migrating_to_MySQL .

I only ever found one set of scripts for migrating a MediaWiki instance from Postgres to MySQL, http://www.winterrodeln.org/trac/wiki/MediaWikiPostgresqlToMysql . 

First migrating to MySQL and then updating MediaWiki would allow taking advantage of extensions such as Semantic MediaWiki which do not work properly and are not supported under Postgres, even if it has been possible to install the extension at one point.  We could have a good system of faceted categories, which would be easier and more useful than the currently implemented, more hierarchical categories modelled on what was implemented in the finding aid in Dokuwiki, which only supported hierarchical namespaces in a browsable manner and thus lacked the flexibility provided by an extension such as Semantic MediaWiki.

Possibly breaking some installed extensions, which can happen with any update, is not a significant issue holding back upgrading but merely a reason to test updates before putting them into production.  The Koha MediaWiki instance was briefly broken in the past by installed extensions when it was updated without testing, and required some testing and modification of extensions which are not necessarily at issue going forward.  However, anything not core to the base software might lead to breakage in untested updates.

[While the details of issues relating to Dokuwiki should be out of scope for this bug, it is worth noting the importance of not allowing an historical mistake in the MediaWiki database choice, and the subsequent neglect of the Koha MediaWiki instance, to become a permanent mistake preventing the use of features dependent on MySQL.  In the practice of users on the previous Koha wiki, implemented in Dokuwiki, most pages seemed to be frustratingly lost to anything but guessing query terms in a search.  People did not have the habit of tagging pages at all when they were created, and when people did tag them it was too often in a manner which did not aid findability.  Dokuwiki also did not provide any easy means for a wiki maintainer to find untagged pages and then add a tag or tags for findability, although some people in the Dokuwiki community have been raising the issue for a while.  See https://github.com/dokufreaks/plugin-tag/issues/77 and https://forum.dokuwiki.org/thread/9473 .]
Comment 4 Thomas Dukleth 2019-08-07 21:28:12 UTC
[Correcting my hurried, mistaken reading: I had misread the units for the Koha wiki database dump.  It is only about 100 MB, not GB, irrespective of the verbosity of the dump format.  I had incorrectly reported the units in the 1 August Documentation meeting, https://meetings.koha-community.org/2019/documentation_irc_meeting_1_august_2019.2019-08-01-13.02.log.html .]

In preparing to test migrating the MediaWiki database from Postgres to MySQL, I am now reverting to Postgres 9.6 in Debian Squeeze LTS for migration testing, as postgresql-common and postgresql-11 in Debian Buster currently have some bugs which we do not need to trip over or otherwise pollute the results with.  "pg_upgradecluster writes data_directory to postgresql.auto.conf, and gets confused by it on the next upgrade [DATA LOSS]", https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=931635 .  "ALTER TABLE statements causing "relation already exists" errors when some indexes exist", https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=932247 .  It is possible to work around those bugs in Debian Buster, but it seems preferable to avoid issues where things are less widely tested and where staying closer to the old version with an old migration script may work better.

After migrating to MySQL, Debian Buster may be less problematic.  The MariaDB MySQL packages in Buster, for use after migrating to MySQL, are probably much more widely tested than the Postgres packages apparently have been.
Comment 5 Thomas Dukleth 2019-08-08 02:41:45 UTC
s/Squeeze LTS/Stretch LTS/g # above

[In my previous comment I mistakenly referred to Squeeze LTS when Stretch LTS was intended as would be evident from contextual clues.  I should have also included the numeric Debian version number 9 for better clarity.]
Comment 6 Thomas Dukleth 2019-08-14 12:59:39 UTC
I am preparing scripts to automate the conversion of the Koha MediaWiki database from Postgres to MySQL.

Multiple methods for reproducing the MediaWiki Postgres database from different forms of dump files are working well.  Using the full dump gives me more confidence, with the caveat that more preparation of database users and associated username databases is necessary to avoid errors.  Restoring the Postgres database from the separate SQL structure and data dump files requires constraint options such as --data-only --disable-triggers --superuser to avoid errors with pg_restore.
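
For the record, the restore step amounts to something along these lines; the dump file name, database name and superuser name below are placeholders rather than the actual ones:

import subprocess

# Placeholder file, database and superuser names, not the actual ones.
subprocess.run([
    "pg_restore",
    "--data-only",           # structure already loaded from the SQL structure dump
    "--disable-triggers",    # avoid constraint trigger errors during the bulk load
    "--superuser=postgres",  # superuser account used to disable the triggers
    "--dbname=mediawiki",
    "wiki_data.dump",
], check=True)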

Philipp Spitzer's pgsqltomysql.py is written in a very generic, abstracted manner and is easy to modify with a few changes for his database names and usernames, and adjustments for where in Postgres we do not follow the conventional form in which the username is the same as the database name, http://www.winterrodeln.org/trac/browser/servermediawiki/trunk/pgsqltomysql/pgsqltomysql.py .
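
In outline, the adjustments concern the two connection setups, roughly as follows; the credentials shown are placeholders and not the real ones:

import MySQLdb
import psycopg2

# Placeholder credentials; on the Koha wiki the Postgres username does not
# simply match the database name, hence the separate settings to adjust.
pg_conn = psycopg2.connect(dbname="mediawiki", user="wikiowner",
                           password="secret", host="localhost")
my_conn = MySQLdb.connect(db="mediawiki", user="wikiuser",
                          passwd="secret", host="localhost",
                          charset="utf8", use_unicode=True)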

There are a few problematic MediaWiki Postgres database tables for migration to MySQL where an obvious generic or MySQL-centric SQL structure file is not present.  It may be necessary to partly convert from the Postgres SQL structure dump.

Table anonymized_mwuser is owned by one particular database username, which might be a legacy from the very first day of testing MediaWiki for Koha, after which I started over the next day.  All other tables have a different, common username as owner.  I have not found any proper source for the anonymized_mwuser table.

I have not found an easy-to-use MySQL structure file for the Semantic MediaWiki tables, which have been essentially unused as a consequence of Semantic MediaWiki not being well supported with Postgres.  Semantic MediaWiki was released with SMW_Postgres_Schema.sql but no corresponding MySQL-centric file, unlike other extensions which add SQL tables.  The MySQL Semantic MediaWiki tables seem to be built by the Semantic MediaWiki installation script, but that cannot be run on an empty database just to obtain the structure, and I have not found an obvious database structure component in the includes files in the code.

A workaround for preserving any existing data is to retrieve the structure from a MySQL SQL structure dump after installing Semantic MediaWiki on an otherwise empty MySQL based MediaWiki installation (sketched below), being careful of course to match the old versions we have so that the database migration succeeds.  A similar workaround might be to temporarily drop the Semantic MediaWiki tables from the temporary Postgres database restoration; reinstall Semantic MediaWiki on the working MySQL conversion where no Semantic MediaWiki tables have been converted; dump the MySQL SQL structure; and then completely restart the database conversion using a newly restored temporary Postgres database and the newly obtained MySQL SQL structure dump.

We might instead choose not to preserve Semantic MediaWiki data, on the presumption that no significant work was done to make use of Semantic MediaWiki under Postgres where it would not work well: drop the Semantic MediaWiki tables from the database conversion; uninstall Semantic MediaWiki; and then start again by installing Semantic MediaWiki, with no pre-existing Semantic MediaWiki data, after upgrading MediaWiki to the current version on MySQL.  The non-preservation option might be best given the known limitations of Semantic MediaWiki under Postgres, the peculiar dichotomy in how Postgres is treated, and the potential importance of Semantic MediaWiki for a flexible faceted system for finding content.
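
As a rough sketch of the structure-dump workaround, assuming a scratch installation named scratchwiki and placeholder credentials:

import subprocess

# Dump only the table structure (no rows) from a scratch MySQL based
# MediaWiki installation with the matching old Semantic MediaWiki version
# installed; names and credentials here are placeholders.
with open("smw_structure.sql", "w") as out:
    subprocess.run(["mysqldump", "--no-data", "--user=wikiuser",
                    "--password=secret", "scratchwiki"],
                   stdout=out, check=True)
# The Semantic MediaWiki CREATE TABLE statements can then be extracted from
# smw_structure.sql and applied to the migration target database.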
Comment 7 Thomas Dukleth 2019-09-04 13:21:59 UTC
I successfully changed variables and added some additional conditions to avoid NULL values as necessary in the original form of the Postgres to MySQL conversion script, http://www.winterrodeln.org/trac/browser/servermediawiki/trunk/pgsqltomysql/pgsqltomysql.py .  I also added the Postgres schema for the wiki, which is "mediawiki" (the same as the database name in the case of the Koha wiki), not the "public" schema.
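
In psycopg2 terms, the schema addition amounts to roughly the following; the connection details are placeholders and this is a sketch of the idea rather than the revised script itself:

import psycopg2

# Placeholder connection details; the point is the schema setting.
pg_conn = psycopg2.connect(dbname="mediawiki", user="wikiowner",
                           password="secret", host="localhost")
pg_cur = pg_conn.cursor()
# The Koha wiki tables live in the "mediawiki" schema, not "public".
pg_cur.execute("SET search_path TO mediawiki")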

I should now be reporting that it takes X minutes to run the script to convert the wiki database on my test system; however, there is an error similar to what was found to be an SQL statement quotation or comma separation problem addressed at https://stackoverflow.com/questions/41475309/psycopg-error-column-does-not-exist and https://stackoverflow.com/questions/41804213/psycopg2-programmingerror-column-your-name-does-not-exist .

I am printing the generated SQL statements to try to trace the problem.  I may try using the generated SQL statements directly, with the suggested quotation, to avoid problems where the SQL statements as stored in a variable lack what may be needed quotation.
Comment 8 Thomas Dukleth 2019-09-05 12:37:57 UTC
From printing variables, I seem to have confirmed that the problem with the Postgres to MySQL conversion script is a failure to use proper quotation of the data and/or the SQL statement as a whole, http://www.winterrodeln.org/trac/browser/servermediawiki/trunk/pgsqltomysql/pgsqltomysql.py .  It had been evident at a glance that quoting had originally been considered for column names.  Nothing seems to be magically quoting the data, so the originator of the script presumably had some well-trusted content.

Consequently, I seem to need to escape any quotes within the column data; quote the data from each column; possibly avoid quoting numeric values to avoid a data type problem; and possibly quote the entire SQL statement.
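
One possible approach, to the extent the data structure required by MySQLdb permits it, is to let the driver do the quoting and escaping through placeholders rather than by hand.  A minimal sketch, with a made-up table and row rather than anything from the actual wiki schema:

import MySQLdb

# Placeholder connection details and a made-up table; the real script loops
# over the wiki tables and their rows retrieved from Postgres.
my_conn = MySQLdb.connect(db="mediawiki", user="wikiuser", passwd="secret",
                          host="localhost", charset="utf8")
my_cur = my_conn.cursor()

columns = ["ex_id", "ex_comment"]                  # hypothetical columns
row = (42, "O'Reilly's \"quoted\" comment")        # hypothetical row data
sql = "INSERT INTO example_table (%s) VALUES (%s)" % (
    ", ".join(columns), ", ".join(["%s"] * len(row)))
# execute() quotes string values and escapes embedded quotes; numeric values
# are passed through without string quoting, avoiding a data type problem.
my_cur.execute(sql, row)
my_conn.commit()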
Comment 9 Thomas Dukleth 2019-09-11 20:53:32 UTC
Database migration is working without any Python or MySQL errors in my current revisions to Philipp Spitzer's Postgres to MySQL conversion script, http://www.winterrodeln.org/trac/browser/servermediawiki/trunk/pgsqltomysql/pgsqltomysql.py .  There are still some 'data truncated' warnings for some columns, which I am working through to correct for issues such as whether default values should be NULL, '', or 0.

The "psycopg2.ProgrammingError: column Host does not exist" error for non-existent column Host when retrieving database values from Python was not related to any lack of quotation for SQL statements in the original script from Philipp Spitzer.  Lack of quotation may be a concern especially for string values in the input data which I have endeavoured to correct to the extent permitted by the data structure required by the Python module MySQLdb.  However, if the data values could be trusted, avoiding string quotation had avoided some "data truncated" warnings.

Lack of a database constraint, which I subsequently added, had led to the SQL statement which collects column names for each wiki database table from information_schema.COLUMNS mistakenly collecting columns from a MySQL system user table when collecting column names for the wiki user table.  Evidently, at the time the script was originally written, a MySQL system user table was not an issue or the database constraint in the MySQLdb connection was sufficient.
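
The added constraint is along the following lines; the connection details are placeholders and this is a sketch rather than the script itself:

import MySQLdb

# Placeholder connection details; the wiki database is named "mediawiki".
my_conn = MySQLdb.connect(db="mediawiki", user="wikiuser", passwd="secret",
                          host="localhost")
my_cur = my_conn.cursor()
# Without the TABLE_SCHEMA condition, a table name such as "user" also
# matches the MySQL system table mysql.user, whose columns (Host, etc.)
# do not exist in the wiki user table.
my_cur.execute(
    "SELECT COLUMN_NAME FROM information_schema.COLUMNS"
    " WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s"
    " ORDER BY ORDINAL_POSITION",
    ("mediawiki", "user"))
column_names = [name for (name,) in my_cur.fetchall()]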
Comment 10 Thomas Dukleth 2019-10-03 18:50:16 UTC
Database migration from MySQL now functions with not only no errors but also with no warnings.

I have resolved all 'data truncated' warnings for the various columns.  In some cases, setting a default value appropriate to a particular MySQL column as NULL, 0, or '' was sufficient.  In a few cases, I have altered the MySQL column sizes to accommodate a larger data size than the defaults set by MediaWiki for MySQL.  The difference which has allowed Postgres columns to hold larger data than the size defined for the same column in MySQL is that the Postgres columns at issue have type text without any size constraint, while the same column in MySQL often has a small size constraint.

No longer having 'data truncated' warnings is very good but a little more checking should be done.

For the cases of default value issues, it will be better to recheck my particular choices in the problematic columns for whether NULL, 0, or '' correctly aligns with the default values set for the columns in MySQL.  Despite now avoiding all warnings, I initially took a trial and error approach to solve as much as I could in a sprint without sleep shortly after eliminating all errors, so it is possible that a choice or choices might not be quite correctly aligned; at least, after doing my course of jury duty I cannot remember perfectly well how thoroughly I had checked before jury duty.

For the cases of Postgres having a larger data size than defined for the column in MySQL, all the alterations I made to the matching MySQL column size limits were made by trial and error and should at least be more closely aligned with what is actually necessary.  Unlike the earlier errors, the 'data truncated' warnings do not give helpful information to identify the offending rows.  Particular MediaWiki implementations might need modification of the column size defaults to accommodate actual use, and the case of Wikipedia has many, but the modifications should not be unnecessary: it is possible that some particular rows might have some buggy data from a code bug or a buggy user action.

The columns for which I altered the maximum size may not generally have great consequence, but there is one key column or key-like column affected, categorylinks.cl_sortkey, used for sorting titles properly and efficiently within a category.  The cause of exceeding the size limit could be how we use the wiki, with some informative, verbose titles; however, it may be that any extra-long page titles are an accident or traces left over from spammers, where the content has presumably been deleted but traces are left in the database.  Running updateCollation.php to recompute sort keys for all pages would allow setting the column size back to VARCHAR(70) without an associated sorting problem.

In two other cases, the column size limits which I found to work are well more than an order of magnitude greater than the column size defined for MediaWiki.

I will try to query for rows which have excessive data size in the particular columns if I can find an efficient method to do so.  It may be helpful to use some combination of aligning the column size better with the actual data and/or choosing to truncate some column data, without triggering any warning, where it is obvious that the excessive data was created by some obvious accident, such as leaning on the keyboard without intent.
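
Absent a better method, a straightforward per-column query would be something like the following sketch for categorylinks.cl_sortkey, using the VARCHAR(70) default as the threshold; the connection details are placeholders:

import MySQLdb

# Placeholder connection details; 70 is the MediaWiki default VARCHAR(70)
# size for categorylinks.cl_sortkey.
my_conn = MySQLdb.connect(db="mediawiki", user="wikiuser", passwd="secret",
                          host="localhost", charset="utf8")
my_cur = my_conn.cursor()
my_cur.execute(
    "SELECT cl_from, CHAR_LENGTH(cl_sortkey), cl_sortkey"
    " FROM categorylinks"
    " WHERE CHAR_LENGTH(cl_sortkey) > 70"
    " ORDER BY CHAR_LENGTH(cl_sortkey) DESC")
for page_id, sortkey_length, sortkey in my_cur.fetchall():
    print(page_id, sortkey_length, sortkey)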
Comment 11 Thomas Dukleth 2019-10-03 19:00:20 UTC
The first sentence of my previous comment should have read as follows.

Database migration from Postgres to MySQL now functions with not only no errors but also with no warnings.
Comment 12 David Nind 2019-10-08 23:11:37 UTC
That's awesome! Thanks Thomas for all your work on this.
Comment 13 Thomas Dukleth 2019-10-10 17:16:08 UTC
I have successfully tested the following changes in converting the Koha wiki database from Postgres to MySQL.  The Postgres MediaWiki defaults had text types without any size constraint, while the same columns in MySQL had strong default size constraints.

We have the example of actual use in Postgres, where it is evident that there are some circumstances in which perfectly reasonable and correct use exceeds the MySQL size constraints, and there seems to be no problem with altering the column size in MySQL to accommodate legitimate existing data use.  At the same time, it seems appropriate to truncate the data for the few accidents where someone mistakenly pasted wiki page content into the comments field.

Some cases where the column size default is too small in MySQL but the data is automatically generated or essentially automatically generated indicate that the MySQL column size constraint was not well planned.

In one case where the comment in a comment column is legitimate but exceeds the MySQL column size by a few characters which can be truncated without loss of understanding, it seems appropriate to truncate the column for the one row which goes over the size limit.

In the SQL statements below, the comment "Original" immediately precedes an SQL statement giving the original structure of the column.  If the original structure statement is not commented out, it is restored; otherwise, the next SQL statement after the comments provides the structure for the column with an increased size.

For the three columns with a size increase from the default, it will be necessary in MediaWiki upgrades to check whether the size may be decreased by the upgrade if that would ever happen.
     
-- Original
-- ALTER TABLE `account_credentials` MODIFY COLUMN `acd_comment` varchar(255) NOT NULL DEFAULT '';
-- Allow for sufficiently greater than largest legitimate found data
-- length of 472.  An explanatory greeting message to a user about a
-- login problem seems necessary and appropriate
ALTER TABLE `account_credentials` MODIFY COLUMN `acd_comment` varchar(511) NOT NULL DEFAULT '';

-- Original
ALTER TABLE `archive` MODIFY COLUMN `ar_comment` tinyblob NOT NULL;
-- Truncated in database migration Python script for greater than largest
-- length for TINYBLOB, 255 bytes including one extra byte for data size.
--
-- Wrong field use mistake with illegitimate found data length 5446.
-- Column truncation affecting 3 rows.

-- Original
-- ALTER TABLE `categorylinks` MODIFY COLUMN `cl_sortkey` varchar(70) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL DEFAULT '';
-- Allow for sufficiently greater than largest legitimate found data
-- length of 165.  Column data is automatically generated from page
-- titles and used as a key for page title sort order which would
-- obviously not work properly if shorter than the longest page title.
-- Template titles are the longest but there are many others over the
-- default.
ALTER TABLE `categorylinks` MODIFY COLUMN `cl_sortkey` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL DEFAULT '';

-- Original
ALTER TABLE `filearchive` MODIFY COLUMN `fa_description` tinyblob;
-- Truncated in database migration Python script for greater than largest
-- length for default size TINYBLOB, 255 bytes including one extra byte
-- for data size.
--
-- Wrong field use mistake with illegitimate found data length 5447.
-- Column truncation affecting 3 rows.

-- Original
-- ALTER TABLE `logging` MODIFY COLUMN `log_comment` varchar(255) NOT NULL DEFAULT '';
-- Truncated in database migration Python script for sufficiently greater
-- than largest legitimate found data length of 278.  Many essentially
-- automatically filled uses which slightly exceed 255 characters.
--
-- An alternative to increasing the size of the column to accommodate
-- legitimate use could be changing some essentially automatically filled
-- text patterns in the data to some more abbreviated form.
--
-- Wrong field use mistake with illegitimate found data length 5447.
-- Column truncation affecting 3 rows.
ALTER TABLE `logging` MODIFY COLUMN `log_comment` varchar(300) NOT NULL DEFAULT '';

-- Original
ALTER TABLE `revision` MODIFY COLUMN `rev_comment` tinyblob NOT NULL;
-- Truncated in database migration Python script for greater than the
-- largest length for default size TINYBLOB, 255 bytes including one
-- extra byte for data size.
-- 
-- Largest legitimate found data length of 259.
--
-- Column truncation affecting 1 row cutting off the end of a word but not
-- affecting understandability of comment.  At the end of the comment the
-- word "breadcrumbs."  would be truncated as "breadcr" truncating four
-- characters and the full stop at the end of the sentence.
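
For illustration of the truncation mentioned in the comments above, the migration script caps the affected values along roughly these lines; the helper below is a sketch of the approach rather than the actual script code:

# Illustrative helper, not the actual migration script code: cap a value at
# the 255 byte limit of a default size TINYBLOB before inserting into MySQL.
def truncate_for_tinyblob(value, limit=255):
    if isinstance(value, str):
        data = value.encode("utf-8")
        if len(data) <= limit:
            return value
        # Cut at the byte limit, then drop any partial UTF-8 character.
        return data[:limit].decode("utf-8", errors="ignore")
    if isinstance(value, (bytes, bytearray)) and len(value) > limit:
        return value[:limit]
    return value

print(truncate_for_tinyblob("breadcrumbs. " * 30))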