Bug 10662 - Build OAI-PMH Harvesting Client
Summary: Build OAI-PMH Harvesting Client
Status: RESOLVED DUPLICATE of bug 35659
Alias: None
Product: Koha
Classification: Unclassified
Component: Web services
Version: Main
Hardware: All
OS: All
Importance: P3 new feature
Assignee: David Cook
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks: 21359
Reported: 2013-07-30 07:39 UTC by David Cook
Modified: 2024-09-01 23:20 UTC
CC List: 27 users

See Also:
Change sponsored?: ---
Patch complexity: Large patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 10662 - Build OAI-PMH Harvesting Client (69.35 KB, patch)
2013-09-02 07:21 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (69.35 KB, patch)
2013-09-02 07:22 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (85.28 KB, patch)
2013-10-11 01:11 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (86.73 KB, patch)
2015-09-08 03:13 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (86.73 KB, patch)
2015-09-08 03:15 UTC, David Cook
Bug 10662 - DBIx::Class ResultSets for Testing (8.65 KB, patch)
2015-09-08 03:15 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (86.70 KB, patch)
2015-11-12 09:34 UTC, Julian Maurice
Bug 10662 - DBIx::Class ResultSets for Testing (8.65 KB, patch)
2015-11-12 09:34 UTC, Julian Maurice
Bug 10662 - Build OAI-PMH Harvesting Client (121.36 KB, patch)
2016-02-05 06:07 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (119.64 KB, patch)
2016-02-16 06:26 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (119.86 KB, patch)
2016-04-04 00:21 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (149.22 KB, patch)
2016-04-15 07:02 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (152.27 KB, patch)
2016-04-29 07:32 UTC, David Cook
Bug 10662 - kohastructure.sql changes (1.98 KB, patch)
2016-05-16 06:13 UTC, David Cook
Bug 10662 - Icarus job server and Koha UI for it (110.91 KB, patch)
2016-05-16 06:13 UTC, David Cook
Bug 10662 - Create svc/import_oai API (40.46 KB, patch)
2016-05-16 06:14 UTC, David Cook
Bug 10662 - Create svc/import_oai API (45.01 KB, patch)
2016-05-16 07:12 UTC, David Cook
Bug 10662 - Icarus job server and Koha UI for it (110.93 KB, patch)
2016-05-17 05:25 UTC, David Cook
Bug 10662 - Create svc/import_oai API (45.02 KB, patch)
2016-05-17 05:27 UTC, David Cook
Bug 10662 - Icarus job server and Koha UI for it (110.96 KB, patch)
2016-05-17 06:52 UTC, David Cook
Bug 10662 - Icarus job server and Koha UI for it (110.98 KB, patch)
2016-05-17 07:02 UTC, David Cook
Bug 10662 - Icarus job server and Koha UI for it (111.64 KB, patch)
2016-05-23 03:01 UTC, David Cook
Bug 10662 - kohastructure.sql changes (1.99 KB, patch)
2016-07-11 05:41 UTC, David Cook
Bug 10662 - Create svc/import_oai API and import management (44.53 KB, patch)
2016-07-11 05:42 UTC, David Cook
Bug 10662 - Icarus job server and Koha UI for it (111.57 KB, patch)
2016-07-11 05:42 UTC, David Cook
Bug 10662 - Create svc/import_oai API (44.60 KB, patch)
2016-07-13 06:15 UTC, David Cook
Bug 10662 - Icarus job server and Koha UI for it (112.98 KB, patch)
2016-07-13 06:16 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (189.97 KB, patch)
2017-06-20 01:20 UTC, David Cook
Bug 10662 - Remove workaround for pre-17710 behaviour (1.04 KB, patch)
2017-06-20 01:20 UTC, David Cook
Bug 10662 - Modify OAI-PMH harvester to import RDFXML (26.89 KB, patch)
2017-06-20 01:20 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (189.96 KB, patch)
2017-06-20 23:44 UTC, David Cook
Bug 10662 - Remove workaround for pre-17710 behaviour (1.04 KB, patch)
2017-06-20 23:44 UTC, David Cook
Bug 10662 - Modify OAI-PMH harvester to import RDFXML (26.89 KB, patch)
2017-06-20 23:44 UTC, David Cook
Add RDF::Query dependency (796 bytes, patch)
2017-07-13 05:25 UTC, David Cook
Fix configuration typo for Debian instances (1.19 KB, patch)
2017-07-13 05:25 UTC, David Cook
Standardize OAI-PMH test success/failure (1.49 KB, patch)
2017-07-13 05:25 UTC, David Cook
Use old style of UUID generation (2.49 KB, patch)
2017-07-13 05:25 UTC, David Cook
Fix problems reported by Koha QA tools (26.75 KB, patch)
2017-10-25 00:57 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (189.90 KB, patch)
2018-01-29 01:15 UTC, David Cook
Bug 10662 - Remove workaround for pre-17710 behaviour (1.04 KB, patch)
2018-01-29 01:15 UTC, David Cook
Bug 10662 - Modify OAI-PMH harvester to import RDFXML (26.89 KB, patch)
2018-01-29 01:16 UTC, David Cook
Add RDF::Query dependency (796 bytes, patch)
2018-01-29 01:16 UTC, David Cook
Fix configuration typo for Debian instances (1.19 KB, patch)
2018-01-29 01:16 UTC, David Cook
Standardize OAI-PMH test success/failure (1.49 KB, patch)
2018-01-29 01:16 UTC, David Cook
Use old style of UUID generation (2.49 KB, patch)
2018-01-29 01:16 UTC, David Cook
Fix problems reported by Koha QA tools (26.75 KB, patch)
2018-01-29 01:16 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (189.91 KB, patch)
2018-09-11 19:44 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (189.91 KB, patch)
2018-09-11 19:49 UTC, David Cook
Bug 10662 - Remove workaround for pre-17710 behaviour (1.04 KB, patch)
2018-09-11 19:49 UTC, David Cook
Bug 10662 - Modify OAI-PMH harvester to import RDFXML (26.89 KB, patch)
2018-09-11 19:49 UTC, David Cook
Add RDF::Query dependency (796 bytes, patch)
2018-09-11 19:49 UTC, David Cook
Fix configuration typo for Debian instances (1.19 KB, patch)
2018-09-11 19:50 UTC, David Cook
Standardize OAI-PMH test success/failure (1.49 KB, patch)
2018-09-11 19:50 UTC, David Cook
Use old style of UUID generation (2.49 KB, patch)
2018-09-11 19:50 UTC, David Cook
Fix problems reported by Koha QA tools (26.75 KB, patch)
2018-09-11 19:50 UTC, David Cook
Example OAI-PMH harvester configuration (517 bytes, text/plain)
2018-09-11 22:31 UTC, David Cook
Bug 10662: Incorrect conditions cause incorrect messages and missing links (3.59 KB, patch)
2018-09-12 19:02 UTC, David Cook
Sample XSLT filter for converting OAI_DC into MARCXML (3.64 KB, text/xml)
2018-09-12 23:05 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (189.71 KB, patch)
2018-09-14 22:22 UTC, Ed Veal
Bug 10662 - Remove workaround for pre-17710 behaviour (1.08 KB, patch)
2018-09-14 22:22 UTC, Ed Veal
Bug 10662 - Modify OAI-PMH harvester to import RDFXML (26.86 KB, patch)
2018-09-14 22:23 UTC, Ed Veal
Add RDF::Query dependency (839 bytes, patch)
2018-09-14 22:23 UTC, Ed Veal
Fix configuration typo for Debian instances (1.23 KB, patch)
2018-09-14 22:23 UTC, Ed Veal
Standardize OAI-PMH test success/failure (1.53 KB, patch)
2018-09-14 22:23 UTC, Ed Veal
Use old style of UUID generation (2.53 KB, patch)
2018-09-14 22:23 UTC, Ed Veal
Fix problems reported by Koha QA tools (26.74 KB, patch)
2018-09-14 22:23 UTC, Ed Veal
Bug 10662: Incorrect conditions cause incorrect messages and missing links (3.62 KB, patch)
2018-09-14 22:23 UTC, Ed Veal
Bug 10662: Template fixes (2.97 KB, patch)
2018-09-15 16:51 UTC, David Cook
Bug 20986 Add 867 and 868 holdings display (6.10 KB, patch)
2018-09-15 17:21 UTC, Ed Veal
Bug 10662 - Build OAI-PMH Harvesting Client (193.26 KB, patch)
2018-09-15 23:10 UTC, David Cook
OAI-PMH Request Example (50.30 KB, image/jpeg)
2018-09-16 17:36 UTC, David Cook
Bug 10662 - Build OAI-PMH Harvesting Client (193.66 KB, patch)
2018-09-18 08:31 UTC, Andreas Hedström Mace
Datatables (100.46 KB, image/png)
2018-09-18 13:03 UTC, Josef Moravec
Bug 10662: Build OAI-PMH Harvesting Client (193.61 KB, patch)
2018-11-01 05:12 UTC, David Cook
Bug 10662: (QA follow-up) addressing qa test tool output (87.63 KB, patch)
2018-11-01 05:13 UTC, David Cook
Encoding problem + datatable (17.81 KB, image/png)
2018-11-01 08:25 UTC, Josef Moravec
Bug 10662: (QA follow-up) Fix plural in pod and use statements (2.15 KB, patch)
2018-11-01 09:15 UTC, Josef Moravec
Bug 10662: (QA follow-up) Enhance marc matchers description (1.80 KB, patch)
2018-11-01 09:15 UTC, Josef Moravec
Object returned by KohaTable (61.34 KB, image/jpeg)
2018-11-02 00:59 UTC, David Cook
Object returned by DataTable (28.46 KB, image/jpeg)
2018-11-02 01:00 UTC, David Cook
Bug 10662: Build OAI-PMH Harvesting Client (200.08 KB, patch)
2018-11-02 07:08 UTC, David Cook
Bug 10662: (QA follow-up) provide DBIC schema files (13.52 KB, patch)
2018-11-02 07:08 UTC, David Cook
Bug 10662: (QA follow-up) Fix plural in pod and use statements (2.15 KB, patch)
2018-11-02 07:08 UTC, David Cook
Bug 10662: (QA follow-up) Enhance marc matchers description (1.80 KB, patch)
2018-11-02 07:09 UTC, David Cook
Bug 10662: (follow-up) Template corrections and improvements (84.15 KB, patch)
2018-11-02 15:04 UTC, Owen Leonard
Bug 10662: (follow-up) Template corrections and improvements (83.81 KB, patch)
2018-11-12 05:46 UTC, David Cook
Bug 10662: Build OAI-PMH Harvesting Client (200.17 KB, patch)
2019-01-23 06:31 UTC, David Cook
Bug 10662: (QA follow-up) provide DBIC schema files (13.55 KB, patch)
2019-01-23 06:31 UTC, David Cook
Bug 10662: (QA follow-up) Fix plural in pod and use statements (2.17 KB, patch)
2019-01-23 06:31 UTC, David Cook
Bug 10662: (QA follow-up) Enhance marc matchers description (1.81 KB, patch)
2019-01-23 06:31 UTC, David Cook
Bug 10662: (follow-up) Template corrections and improvements (83.84 KB, patch)
2019-01-23 06:31 UTC, David Cook
Bug 10662: (follow-up) Template corrections and improvements (84.00 KB, patch)
2019-02-15 10:53 UTC, Andreas Hedström Mace
Bug 10662: Build OAI-PMH Harvesting Client (200.17 KB, patch)
2019-02-17 20:48 UTC, Josef Moravec
Bug 10662: (QA follow-up) provide DBIC schema files (13.57 KB, patch)
2019-02-17 20:48 UTC, Josef Moravec
Bug 10662: (QA follow-up) Fix plural in pod and use statements (2.20 KB, patch)
2019-02-17 20:48 UTC, Josef Moravec
Bug 10662: (QA follow-up) Enhance marc matchers description (1.86 KB, patch)
2019-02-17 20:49 UTC, Josef Moravec
Bug 10662: (follow-up) Template corrections and improvements (84.02 KB, patch)
2019-02-17 20:49 UTC, Josef Moravec
Bug 10662: (QA follow-up) Make atomic update consistent with kohastructrure. Remove utf8 charset (3.08 KB, patch)
2019-02-17 20:49 UTC, Josef Moravec
Bug 10662: Build OAI-PMH Harvesting Client (200.61 KB, patch)
2020-02-28 07:00 UTC, David Cook
Bug 10662: (QA follow-up) provide DBIC schema files (13.53 KB, patch)
2020-02-28 07:00 UTC, David Cook
Bug 10662: (QA follow-up) Fix plural in pod and use statements (2.22 KB, patch)
2020-02-28 07:01 UTC, David Cook
Bug 10662: (QA follow-up) Enhance marc matchers description (1.86 KB, patch)
2020-02-28 07:01 UTC, David Cook
Bug 10662: (follow-up) Template corrections and improvements (76.30 KB, patch)
2020-02-28 07:01 UTC, David Cook
Bug 10662: (QA follow-up) Make atomic update consistent with kohastructrure. Remove utf8 charset (3.10 KB, patch)
2020-02-28 07:01 UTC, David Cook
Bug 10662: Strip UTC designators from header_datestamp (1.30 KB, patch)
2020-02-28 07:01 UTC, David Cook

Description David Cook 2013-07-30 07:39:02 UTC
Currently, Koha only acts as an OAI-PMH server. I propose to add a harvesting client as well (likely using the HTTP::OAI::Harvester module), so that Koha can ingest records from other data sources (such as digital repositories like DSpace).
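Purely as an illustration of the approach (not code from any patch), the harvesting loop I have in mind with HTTP::OAI::Harvester looks something like this; the base URL and metadata prefix are placeholders:

use Modern::Perl;
use HTTP::OAI;

# Placeholder endpoint: any OAI-PMH repository (DSpace, another Koha, etc.).
my $harvester = HTTP::OAI::Harvester->new(
    baseURL => 'http://repository.example.org/oai/request',
);

# ListRecords with a metadataPrefix; HTTP::OAI follows resumption tokens
# transparently while the response is iterated.
my $response = $harvester->ListRecords( metadataPrefix => 'oai_dc' );
die $response->message if $response->is_error;

while ( my $record = $response->next ) {
    say $record->identifier . ' (' . $record->datestamp . ')';
    next if ( $record->header->status // '' ) eq 'deleted';   # skip deletions here
    my $dom = $record->metadata->dom;   # XML::LibXML::Document of the payload
    # ...hand $dom to a crosswalk/importer here...
}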

I've only started reading about it, but despite initial reservations about resumption tokens, I think the hardest part will not be the retrieval of records so much as the parsing of those records into MARC.

The Library of Congress provides some crosswalks (http://www.loc.gov/standards/marcxml/) for converting other metadata formats into MARC21. However, the DC to MARC crosswalk (the obvious choice for DSpace) does not produce high-quality records. So, as part of this new feature, I will also be working on a more complete DC to MARC crosswalk. The DCMI type terms (http://dublincore.org/documents/2012/06/14/dcmi-terms/?v=elements#type) have been mapped reasonably well to MARC21 Leader positions 06 and 07, which improves the quality of the record and helps to produce a "best guess" 008. I'll be looking at adding more datafields and providing better fixed field transformation.

Of course, some OAI-PMH repositories serve MARC, so this crosswalk might not always be necessary. However, I've looked at DSpace's DC=>MARC crosswalk and it's similar to the LoC one, so I think this will be a valuable addition (both to Koha and to anyone wanting to transform DC to MARC21).

As I mentioned, I'm just starting out with OAI-PMH, but I imagine having Koha as an OAI-PMH harvester might be useful in union catalogue situations where other servers might send quality MARC records.

--

My plan:

1) Set up a script that is able to continuously harvest records from an OAI-PMH server (likely DSpace or Koha itself for my trials)
2) Set up a database table to handle harvester configuration (such as baseurl, sets, possibly dates, metadata format, and any pointers to XSLTs); see the sketch after this list
3) Set up a solid (yet likely basic) DC => MARC XSLT.
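To make item 2 a bit more concrete, something along these lines is what I'm picturing; the table and column names are illustrative only and not a final schema:

use Modern::Perl;
use C4::Context;

my $dbh = C4::Context->dbh;

# Hypothetical schema for the harvester configuration: one row per remote
# repository. Names are illustrative, not the shipped structure.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS oai_harvest_repositories (
        repository_id   INT(11) NOT NULL AUTO_INCREMENT,
        baseurl         VARCHAR(255) NOT NULL,        -- OAI-PMH base URL
        oai_set         VARCHAR(255) DEFAULT NULL,    -- optional setSpec
        metadata_prefix VARCHAR(45)  NOT NULL DEFAULT 'oai_dc',
        from_datestamp  DATETIME DEFAULT NULL,        -- selective-harvest lower bound
        until_datestamp DATETIME DEFAULT NULL,
        xslt_path       VARCHAR(255) DEFAULT NULL,    -- crosswalk applied before import
        PRIMARY KEY (repository_id)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
});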

If anyone has comments or advice, I'd love to hear it. Hopefully, I'll be able to focus on this over the next little while...
Comment 1 Galen Charlton 2013-07-31 18:32:09 UTC
(In reply to David Cook from comment #0)
> Currently, Koha only acts as a OAI-PMH server, I propose to add a harvesting
> client as well (likely using the HTTP::OAI::Harvester module), so that Koha
> can ingest records from other data sources (such as digital repositories
> like Dspace).

Interesting idea.

> I've only started reading about it but despite initial reservations about
> resumption tokens, I think the hardest part will not be with the retrieval
> of records so much as the parsing of those records into MARC.

This may be less of a problem in the long run with my plans to allow Koha to support multiple metadata formats (although even once that's available, you may still want the harvester to be able to convert the source metadata into something else).
 
One thing I'd suggest is that the harvester keep a copy of the original metadata record in a database table; that would be more flexible than immediately converting it to MARC and discarding the source data.
Comment 2 David Cook 2013-08-01 02:20:37 UTC
(In reply to Galen Charlton from comment #1)
> (In reply to David Cook from comment #0)
> > Currently, Koha only acts as a OAI-PMH server, I propose to add a harvesting
> > client as well (likely using the HTTP::OAI::Harvester module), so that Koha
> > can ingest records from other data sources (such as digital repositories
> > like Dspace).
> 
> Interesting idea.
>

I'm glad that you approve :). I think you wrote a bit on the subject a few years ago when the OAI-PMH support was first added to Koha, no? 

> 
> > I've only started reading about it but despite initial reservations about
> > resumption tokens, I think the hardest part will not be with the retrieval
> > of records so much as the parsing of those records into MARC.
> 
> This may be less of a problem in the long run with my plans to allow Koha to
> support multiple metadata formats (although even once that's available, you
> may still want the harvester to be able to convert the source metadata into
> something else).
>

I was thinking about that as I started my research, but I'm not sure how far along you are with your plans for metadata diversity. While I'm extremely excited about your work in that area, I suppose I wonder a bit about how feasible it is (both in terms of the time to get there and the ultimate functionality), given Koha's current reliance on MARC data.

In any case, like you say, the ability to transform incoming data might still be desired. Either in terms of changing metadata formats or even adding local data to incoming records. 

>  
> One thing I'd suggest is that the harvester keep a copy of the original
> metadata record in a database table; that would be more flexible than
> immediately converting it to MARC and discarding the source data.
>

Agreed. I was thinking of having a table to keep track of source record identifiers, since I'm still not familiar with resumption tokens, so I could certainly add columns for biblionumber and source metadata record. As you say, that would add flexibility for the future plus give people a source of truth, since data conversion isn't always precise or infallible.
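Something like the following is the shape of the storage I have in mind, purely as an illustration (the table and column names are not final):

use Modern::Perl;
use C4::Context;

# Hypothetical storage for harvested records: keep the OAI-PMH identifier and
# datestamp for tracking, the untouched source metadata as suggested above,
# and the biblionumber the record was imported as.
sub store_harvested_record {
    my ( $identifier, $datestamp, $metadata_prefix, $raw_xml, $biblionumber ) = @_;
    my $dbh = C4::Context->dbh;
    my $sth = $dbh->prepare(q{
        INSERT INTO oai_harvested_records
            (oai_identifier, datestamp, metadata_prefix, original_metadata, biblionumber)
        VALUES (?, ?, ?, ?, ?)
        ON DUPLICATE KEY UPDATE
            datestamp         = VALUES(datestamp),
            original_metadata = VALUES(original_metadata)
    });
    $sth->execute( $identifier, $datestamp, $metadata_prefix, $raw_xml, $biblionumber );
}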
Comment 3 David Cook 2013-09-02 07:21:49 UTC Comment hidden (obsolete)
Comment 4 David Cook 2013-09-02 07:22:33 UTC Comment hidden (obsolete)
Comment 5 David Cook 2013-09-02 07:40:12 UTC
Thoughts on things to include:

1) Add a preference/config for using identifier + datestamp OR identifier + datestamp + metadataPrefix as the indicator of the highest order of uniqueness.

2) Add an email feature that tells library staff to check a report which enumerates the status of records imported via OAI-PMH. These can be create (new records), replace (updated records), delete (for incoming records with a status of deleted), or ambiguous (essentially a new record linked to multiple existing bib records; these are almost certainly duplicates but require manual merging, since it's tough to know which is the real authoritative record).

I was thinking perhaps of sending an email containing a link to a Template Toolkit page (so that translation would be possible), which would contain the import/history log.

3) Improving error handling

4) Make the import options more configurable? Although I think the hardcoded options for always replacing a bib match, adding for no match, and ignoring items are probably pretty good. There might be other use cases where people want something different though, so configuration might be a good idea (although ignoring items is fairly essential, as you could duplicate items if you're importing updated records with items). Perhaps the MARC21 XSLT should also strip 952 fields.

5) When using the cronjob, if the "from" date for a repository is null, check for existing records in Koha and use the latest datestamp? (See the sketch after this list.) This way we're able to do selective harvesting automatically without having to update our configuration. (I'll probably add this one soon.)

6) Matching rules: a) Check which MARC field the remote system uses for its biblionumber. b) Check if there is a matching rule for that field in Koha. If not, create one? Having this matching rule is essential for matching updated records.

7) Improving the DC => MARC conversion (might look at this soon too...it will always be a "best guess" but it has room for a lot of easy improvement)

8) Make an OAI-PMH harvesting web UI. This would allow people to plug in the baseURL for a remote OAI-PMH repository and use the 6 verbs on it.

I imagine it would be a good way for people to get used to what an OAI-PMH repo offers, so that they can set up the automatic cronjob configuration. It could also be a good idea to allow the "ImportRecordsIntoKoha" method for selective harvesting.

That said, if it's too easy to use, it might also be abused by someone who doesn't know what they're doing. If there were a GUI, it would need a permission and/or system preference most likely.
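For item 5 above, the fallback could be as simple as the following sketch; the table and column names are hypothetical and only meant to show the idea:

use Modern::Perl;
use C4::Context;

# When no "from" date is configured for a repository, fall back to the newest
# datestamp already harvested from it (names are illustrative only).
sub default_from_datestamp {
    my ($repository_id) = @_;
    my $dbh = C4::Context->dbh;
    my ($latest) = $dbh->selectrow_array(
        q{SELECT MAX(datestamp) FROM oai_harvested_records WHERE repository_id = ?},
        undef, $repository_id,
    );
    return $latest;   # undef means "harvest everything"
}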
Comment 6 Viktor Sarge 2013-09-02 11:52:53 UTC
I'm excited to see this functionality being developed, as we will have exactly the scenario you initially mentioned: a union catalogue providing quality records that we import automatically as our book vendor adds them to our account in the union catalogue.
Comment 7 David Cook 2013-10-11 01:11:44 UTC Comment hidden (obsolete)
Comment 8 David Cook 2014-08-13 06:29:37 UTC
Recently, we've had some renewed interest in this feature, so if I'm able to find time I will be looking at this again.
Comment 9 Koha Team University Lyon 3 2014-12-18 13:37:38 UTC
I'm interested in this feature; if you need someone to test, I volunteer.

Sonia BOUIS
Comment 10 David Cook 2014-12-19 04:16:39 UTC
(In reply to Koha Team Lyon 3 from comment #9)
> I'm interrested by this feature, if you need someone to test, I'm volunteer.
> 
> Sonia BOUIS

Great! Thanks, Sonia :).

I think that I have the main engine working, but I got distracted by DBIx::Class::Schema and some local work projects, so I haven't finished the UI, the documentation, or the unit tests yet. 

At the moment, I have another project taking priority over this one, but the OAI-PMH harvester is #2 on my list of projects. Hopefully, I'll be able to look at this soon.
Comment 11 Andreas Hedström Mace 2015-04-30 09:29:35 UTC
Also very interested in this. Any progress lately?
Comment 12 David Cook 2015-05-04 02:03:44 UTC
(In reply to Andreas Hedström Mace from comment #11)
> Also very interested in this. Any progress lately?

Sadly no :(

I'm tempted to post my work in progress... but I still have a few things that I need to work out. 

Here are my current notes to myself:

1) Add a UI in the staff client so that users can configure the repositories from which to harvest (NOTE: this is the change that will create the most merge conflicts, since it touches existing templates)

2) Add the ability to completely re-harvest from a repository (I suppose the thing to do is delete all the records that were derived from that repository, then re-harvest)

3) Improve the whole feature so that the Perl scripts and templates handle the presentation, while most of the logic is driven by the modules.

4) Double-check database structure?

5) Add POD to Harvester.pm and Importer.pm

6) Add unit tests for Harvester.pm and Importer.pm

7) Add necessary changes to updatedatabase.pl

8) Add function to OAI::Importer that will add the item type if it doesn't already exist?
Comment 13 David Cook 2015-05-04 02:46:19 UTC
Actually, I am going to post my code after all.

Here's a link to a Github repository that has my current code:

https://github.com/minusdavid/Koha/tree/pro_master_oai

--

At the moment, I have a test instance of Koha that uses the OAI harvester to pull in all new/changed records from about 6-7 different Koha OAI servers. 

It seems to work reasonably well.

I suppose priorities that I would highlight are:

1) The ability to completely re-harvest from a server.

This would involve deleting all the harvested records in Koha, and re-harvesting from the OAI server.

2) Creating a web UI to add/modify/remove repositories for OAI harvesting

--

Other than that... it's tidying up the code, adding documentation, and adding unit tests so that this can actually get into Koha. 

I just don't have time for any of this at the moment.
Comment 14 David Cook 2015-07-21 07:06:31 UTC
For those following along at home, it looks like I'll be starting work on this again soon!

My immediate priorities are finishing the web user interface, and adding the ability to re-harvest from an OAI-PMH server.

I also need to make some schema changes, which will require some refactoring now, but will save time/energy in the future. Hopefully, in the future we'll be able to harvest authorities and holdings instead of just bibliographic records.

After those three things, I'll be posting a revised patch for testing... then probably some follow-up patches with unit tests.
Comment 15 David Cook 2015-09-01 07:09:35 UTC
I think I've nearly arrived at a first draft.

On Friday, I'll look at writing out test plans/instructions, and doing some last minute changes. 

I still need to fix the POD, add unit tests, and add help pages, but I figure that can wait until some initial testing is finished, since the first round of testing might bring more changes to make...

I'm excited though! This is probably the closest the OAI-PMH harvester code has ever come to actually being usable for other people than just myself :p
Comment 16 David Cook 2015-09-08 03:13:32 UTC Comment hidden (obsolete)
Comment 17 David Cook 2015-09-08 03:15:30 UTC Comment hidden (obsolete)
Comment 18 David Cook 2015-09-08 03:15:37 UTC Comment hidden (obsolete)
Comment 19 David Cook 2015-09-08 03:42:15 UTC
Hi all!

I've finally got something up for testing, so please everyone take some time to test it out. So much has changed since I first started working on this back in 2013, but hopefully it should provide all the functionality that you need.

I'm sure that the user interface could use more attention, so I'd love to receive feedback on that.

I'd also love to hear back about how the feature works. The key component is the "oai_harvester.pl" cronjob, which will be set up by a system administrator. I don't think there's much that a web user can do to affect that, although I have seen other bugs talking about giving web users control over scheduling tasks. I think web users controlling scheduling would be outside the scope of this bug.

Unlike the "Staged MARC Management", there is currently no way of un-importing and re-importing. You can only "reset repository harvest", which will delete all currently harvested records and allow you to schedule a new re-harvest. While I originally was going to leverage the "Staged MARC Management" code, I decided that giving web users control over selectively un-importing and re-importing batches of records harvested via OAI-PMH could be really problematic. That is, you might un-import a batch which deletes 10 records, import a new batch which contains those 10 records, then try to re-import an earlier batch of those 10 records. Even if the (optional) record matching rules were set up perfectly, your Koha records would be wrong; they'd be for an older version of the upstream record. I decided that once a record was added to Koha - all further updates and deletions should be automatic. And if a record was deleted from Koha (other than by "resetting the harvest"), then it could not be re-added; it will instead generate an error (since you can't currently "undelete" a bibliographic record or re-add it with the same biblionumber). However, I'm happy to discuss options for handling records that have been deleted from Koha. There is code that checks if the record has been deleted in Koha, so it would be trivial to add a new record with a new biblionumber, although I'd have to update some other code which expects a unique OAI-PMH identifier to be tied to only 1 Koha biblionumber whereas in this case it would have 2 or more.

In fact, I'm happy to discuss every part of this code. 

Some of you might be interested in improving performance. At the moment, the "oai_harvester.pl" runs synchronously, which means that first all the records need to be downloaded into the database, and then all the records need to be processed and imported into Koha. For initial imports or large imports, this takes hours. However, I've recently gained a lot of experience using POE (Perl Object Environment). Using POE, I could presumably write an asynchronous program which could import records as they're received, rather than waiting for the entire harvest to complete. Unfortunately, POE was removed from Koha's dependencies in the past year or so, but I don't think it would be problematic to add it to the dependencies once again. 
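To give a rough idea of what I mean (purely illustrative, not code from the patch), a POE-based harvester could interleave downloading and importing instead of doing them in two big synchronous phases; the state names, the five-second delay, and the two helper stubs are all made up for the sketch:

use Modern::Perl;
use POE;

POE::Session->create(
    inline_states => {
        _start      => sub { $_[KERNEL]->yield('fetch_chunk') },
        fetch_chunk => sub {
            my $records = fetch_next_chunk();              # stub below
            $_[KERNEL]->yield( import_chunk => $records ) if @$records;
            $_[KERNEL]->delay( fetch_chunk => 5 );         # poll again in 5 seconds
        },
        import_chunk => sub {
            my $records = $_[ARG0];
            import_into_koha($_) for @$records;            # stub below
        },
    },
);
POE::Kernel->run();

sub fetch_next_chunk { return [] }   # stub: would request the next OAI-PMH page
sub import_into_koha { }             # stub: would add/update one bib in Koha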

--

Despite me posting these patches, the work isn't done yet. 

A keen observer will note that there is a lack of consistency in naming. I sometimes say "oai_client", "oai_server", "oai_target", "oai repository". It's not always exactly clear what I mean, even though I know what I mean. I want to be clear in differentiating this feature from Koha's OAI-PMH server as well. I would be receptive to comments about preferred terminology in both the backend and the web app.

Additionally, I also need to do the following:

1) Add unit tests
2) Revise the embedded POD in the code
3) Add help pages (and possibly hints/tips in the templates for web users)

I'm going to hold off on these 3 tasks for the moment until we get further into the testing. Otherwise they'll just need to be revised again after more code iterations. (That said, it would have been smart to have written unit tests from the beginning as I built up the code. Alas. Next time.)
Comment 20 Julian Maurice 2015-11-12 09:34:00 UTC Comment hidden (obsolete)
Comment 21 Julian Maurice 2015-11-12 09:34:12 UTC Comment hidden (obsolete)
Comment 22 Laurence Rault 2015-11-12 10:51:30 UTC
I am trying to test OAI harvest on a biblibre Marc21 sandbox following the test plan.

I want to import oai_dc records, but the koha records created are always empty (only the leader field is present)

I tried with Path to xslt = default, or set empty
Record type : biblio
Marc Framework : default

What kind of metadata is allowed?
I see only this XSLT file: MARC21slimFromOAI.xsl
Should I first create a specific OAI-to-MARC XSLT stylesheet?
Comment 23 David Cook 2015-11-12 23:25:48 UTC
(In reply to Laurence Lefaucheur from comment #22)
> I am trying to test OAI harvest on a biblibre Marc21 sandbox following the
> test plan.
> 
> I want to import oai_dc records, but the koha records created are always
> empty (only the leader field is present)
> 
> I tried with Path to xslt = default, or set empty
> Record type : biblio
> Marc Framework : default
> 
> What kind of metadata is allowed ? 
> I see only this xslt file : MARC21slimFromOAI.xsl
> Should I previously create a specific oai to marc xslt stylesheet ?

Hi Laurence:

Yes, you'll need to create an OAIDC2MARC XSLT in order to transform the metadata into MARC21. 

Since Koha only supports MARC, the record the OAI-PMH harvester passes to Koha's internal code must also be in MARC. 

Long ago, I thought about creating an XSLT to convert from oai_dc into MARCXML, but I found it to be a very error-prone process, since oai_dc is such a simple metadata format and MARC is quite complex. I was never happy with the conversion from oai_dc to MARC.

With DSpace, I plan to create a DIM to MARC XSLT which I'll probably use on DSpace itself so that I don't have to do anything special on the harvester end. DIM is DSpace's internal metadata format, and I found that to be much better than oai_dc for converting to MARC.

So my recommendation is to harvest in MARC, but where that's impossible and you still want access to the record... you'll have to point "Path to XSLT" at a different stylesheet which can convert oai_dc to MARCXML.
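If you do go down that path, applying the stylesheet is the easy part; a minimal sketch with XML::LibXSLT (which Koha already uses) would look like this, with the file paths as placeholders:

use Modern::Perl;
use XML::LibXML;
use XML::LibXSLT;

# Placeholder paths: a harvested oai_dc record and a local oai_dc-to-MARCXML stylesheet.
my $source    = XML::LibXML->load_xml( location => '/tmp/oai_dc_record.xml' );
my $style_doc = XML::LibXML->load_xml( location => '/path/to/OAIDC2MARC21slim.xsl' );

my $xslt       = XML::LibXSLT->new();
my $stylesheet = $xslt->parse_stylesheet($style_doc);

my $result  = $stylesheet->transform($source);
my $marcxml = $stylesheet->output_as_bytes($result);   # MARCXML ready for import
print $marcxml;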
Comment 24 David Cook 2015-11-12 23:27:09 UTC
I've received a lot of feedback from the National Library of Sweden, so there will probably be more comments going up soon on Bugzilla regarding our discussions about changes to the code that I've posted here.
Comment 25 Andreas Hedström Mace 2015-11-13 14:52:51 UTC
The National Library of Sweden have together with Stockholm University Library provided the funding required for David to finalize his work on the harvester. Stockholm University Library has been testing the OAI-PMH harvester extensively of late, and have provided feedback and been in discussion with Dave about the development of the harvester. Here I’ll try to summarize our discussions. David will probably have to fill in the gaps where needed, and provide further detail!

Our use case
We are harvesting records from the Swedish union catalogue LIBRIS, which provides records in Marcxml. Today only bibliographic records are harvested, but we hope to add functionality in the future to also allow holdings to be harvested (but this is a separate development and won’t be discussed further here.)

We would want to harvest repeatedly and often, preferably every 5 seconds or so, to always have up-to-date records in our local system. Cataloging is done in LIBRIS.

Core functionality
* The harvester works as intended: we have tried harvesting records, editing/deleting them at the source and then reharvesting them. All works as expected.
* We also tried to delete a record in Koha and then do a harvest – the intended error message is displayed (“Harvested records in error state”).
* It’s very good that the HTTP and OAI-PMH parameters for the OAI server target can be tested directly! (I was trying to set up LIBRIS SRU server in Koha the other day and was frustrated that I had to go to cataloging to test whether or not I had set-up correct parameters…)

All in all, the harvester works as intended!

Major issues

Repeated harvests
The harvester as built today is made to run one-time harvests or repeating harvests with long intervals in between, like once every night. For those use cases, performing the scheduling in the GUI and then running the job with the cronjob (the download and the import parameters) is not a problem. But for frequently repeated tasks, this divided responsibility is highly problematic.
We would like to have all harvests (or tasks) set from the GUI! To facilitate this, David has proposed to change the harvester to work as a daemon instead. The reasons for this are as follows:

* Using the daemon, all scheduling can be handled by the GUI
* Using the daemon, you could harvest every few seconds. The original intent with the cronjob was that it would be set once and never looked at again. The harvesting would just happen in the background. But since you want more control and to run the harvest every few seconds, a daemon is the way to go. 
* The key benefit of using the daemon is that you can control it from the GUI and that it can manage the harvests. Trying to set/schedule a cronjob from the GUI would be a bad idea. 
* If you’re trying to re-harvest every few seconds, a cronjob could easily get out of control. You could easily have competing processes and no way to control them at all. A cronjob couldn’t be a communications centre in the way described. The way I envision it, the daemon will communicate with the Web GUI. You could start, stop, and pause harvests. The daemon would also be in charge of the actual harvest, as it could control its own activity. You can’t really control a cronjob. The cron daemon starts cronjobs based on its own unique syntax and that’s it. It’s just a scheduler. It’s not a controller. The daemon I’m talking about would be a controller. You could tell it “STOP 1” and it would stop running the harvest with the 1 identifier.

David could preferably provide more detail on the proposed daemon approach. 

We had some initial reservations about the use of a daemon for the harvester, mainly as this would be a background process that might be hard to evaluate/work with for a systems administrator, to which David replied:

* Why would it be hard for a systems administrator to evaluate/work with a daemon? It seems to me that it would actually be easier for sysadmins to evaluate/work with a daemon, as it can be monitored and controlled as a separate process. It’s much easier to control than a cronjob.

It would be good to have input from others in the community on the merits of having the harvester run as a daemon!

Matching rules
At the moment there are no matching rules for the harvester per se. The only matching that is done is based on the OAI-PMH unique identifier. If there’s already a record in Koha with the same title, but not the same OAI-PMH unique identifier, you will get a duplicate.

Not having matching rules will essentially make the harvester useless for us, and I would guess for anyone harvesting from a union catalogue. We don’t want to add a lot of unnecessary duplicates to our local catalogue. In the case of libraries who are already running Koha and would want to start using the harvester, there would be a lot of duplicates (possibly everything!). Also, we do not want to limit libraries to using one source to harvest from – there might be a need in the future to harvest from multiple sources.

We suggest that the “Staged Marc Management” tool should be used to actually import the records into Koha – then the matching rules that apply there would be used. Or copying/mirroring this functionality for the harvester.

Small issues
* Viewing a server target, the page doesn’t have a back button or working breadcrumbs. David has suggested that he might not add a back-button but will fix the breadcrumbs.
* The reset repository harvest button should have a warning or a help text next to it, explaining that all harvested records will be removed.
* A help text should be added next to the Until parameter, detailing that this should not be set for repeated harvests. Otherwise, as the From parameter is auto-updated with each harvest, Until might be set before From, which will cause the harvester to fail.
* More detailed information should be presented under “View”, preferably lists of records imported (where you can click on the bib-id to go to the actual record), lists of deleted records, updated records etc. We will draw up what we would like to see in terms of details and send to David. We can also post it here, if others are interested?
* It would be great if multiple sets could be provided for one OAI server.
* The first time a new server is added, pressing the “Test HTTP and OAI-PMH parameters” button will send you back to the OAI-PMH server targets (oai_client.pl) page, like you would expect the save button to do. David has confirmed that this is a bug.
Comment 26 Viktor Sarge 2015-11-13 17:33:43 UTC
> Our use case
> We are harvesting records from the Swedish union catalogue LIBRIS, which
> provides records in Marcxml. Today only bibliographic records are harvested,
> but we hope to add functionality in the future to also allow holdings to be
> harvested (but this is a separate development and won’t be discussed further
> here.)

We have (as many others do) the same use case. Getting holdings would be great!


> All in all, the harvester works as intended!

Great news! 

> Matching rules
> At the moment there are not matching rules for the harvester per se. The
> only matching that is done is based on the OAI-PMH unique identifier. If
> there’s already a record in Koha with the same title, but not the same
> OAI-PMH unique identifier, you will get a duplicate.
> 
> Not having matching rules will essentially make the harvester useless for
> us, and I would guess anyone harvesting from a union catalogue. We don’t
> want to add a lot of unnecessary duplicates to our local catalogue. In case
> of libraries who are already running Koha and would want to start using the
> harvester, there would be a lot of duplicates (possibly everything!). Also,
> we do not want to limit libraries to use one source to harvest from – there
> might be a need in the future to harvest from multiple sources.
> 
> We suggest that the “Staged Marc Management” tool should be used to actually
> import the records into Koha – then the matching rules that apply there
> would be used. Or copying/mirroring this functionality for the harvester.

Using the existing import tool sounds like a good plan - then there is a single point to work with for import rules even though we add a new import flow. Much better than building another place to poke around with its own quirks. 

> Small issues
> * Viewing a server target, the page doesn’t have a back button or working
> breadcrumbs. David has suggested that he might not add a back-button but
> will fix the breadcrumbs.

Breadcrumbs are good enough if they work correctly and bring you one step up, not two or three steps up, in the hierarchy. 

> * Using the daemon, all scheduling can be handled by the GUI

A GUI is a selling point in my eyes! Everything that lets a library handle its Koha installation by itself when it doesn't have the Linux know-how in house is great! Not having to bug the server people about changes is a big plus. 

> It would be good to have input from others in the community on the merits of
> having the harvester run as a daemon!

GUI and short intervals for harvesting get the daemon my vote, but that is without a deeper analysis of the technical details. I know Zebra indexing can now run as a daemon, which is viewed as a plus, so it can't be all that alien a concept.
Comment 27 David Cook 2015-11-16 06:08:03 UTC
The first time I started working on this feature, I thought about using “Staged Marc Management”, but there were problems with this which I don't recall 100% (as it was over 2 years ago). I do have some memories though:

1. I wouldn't want the harvests accessible via the "Staged Marc Management" tool, because selective "import"/"undo import" of harvests would be highly problematic.

You could import 100 records, unimport 100 records, import 50 records, and then try to re-import those original 100 records which include that 50 record subset. In this case, you might overwrite the newer 50 records with the older 100 records. Of course, you could opt not to overwrite matches... but that relies on there being a good matcher, which there very well might not be. 

Plus, if you don't overwrite matches and have that setting defined at a OAI-PMH server level, you're never going to get newer records updating older records, which is also bad.

2. The "Staged Marc Management" record matcher relies on Zebra which makes it prone to not always matching correctly. If something hasn't been indexed correctly, you'll get duplicate records. It also relies on that Koha's indexing configuration. 

In some tests, I've forced the unique OAI-PMH identifier to be placed in the 035$a field... but that field isn't indexed by default. So it would be useless for matching without an update to the Zebra indexing... which can be achieved but it's another point of failure.

The matching also relies on import rules defined in Koha. If you have a staff member accidentally delete your OAI-PMH matching rule, you're going to quickly get many many duplicate records.
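For anyone unfamiliar with that part of Koha, this is roughly how a record matching rule gets consulted via C4::Matcher; the matcher ID, the empty source record, and the assumption that the rule targets the OAI-PMH identifier in 035$a are all just for the sketch:

use Modern::Perl;
use C4::Matcher;
use MARC::Record;

my $matcher_id  = 1;                    # illustrative: the library-configured rule
my $marc_record = MARC::Record->new();  # in practice, built from the harvested metadata

# If a staff member deletes the rule, fetch() returns nothing and every
# incoming record looks "new" - exactly the failure mode described above.
my $matcher = C4::Matcher->fetch($matcher_id);
if ($matcher) {
    my @matches = $matcher->get_matches( $marc_record, 10 );
    for my $match (@matches) {
        printf "matched record %s with score %s\n", $match->{record_id}, $match->{score};
    }
}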

--

I chose to do my own import rules - using only the unique OAI-PMH identifier - because it was the most reliable way of making sure that harvested records weren't duplicated against themselves/each other.

In the event that you're harvesting holdings, you also need to have the original bibliographic record in Koha. That means that if you're doing duplicate matching, it must overwrite local bibliographic records 100% of the time. Otherwise, your holdings won't know which bibliographic record to bind to. If you're using "Staged Marc Management", it's easy to accidentally misconfigure it so that you're not overwriting local bibliographic records, and then you have problems again.

Another reason I chose to do my own import rules is because I don't think you can trust the user to manage the OAI-PMH harvester configuration completely. 

--

That all said, I think perhaps the "Staged Marc Management" system might be able to be leveraged... I just don't want it to be configurable by end users, since it needs very particular settings in order to work correctly. 

Unfortunately, this means that you're going to lose some of the functionality you want, like being able to look at all the records in a harvest.

However, the idea of a "harvest" doesn't really make sense if you're using the harvester every few seconds. Each "harvest" might only have 1-2 records in it, so the concept of harvests becomes a bit unhelpful.

--

Ultimately, I think we'll need to discuss the import and duplication part of the feature more...
Comment 28 David Cook 2015-11-16 06:11:55 UTC
We're also hoping to make the harvester/importer asynchronous which also makes the "harvest" or "batch" concept a bit useless, since it'll be doing lots of individual activities all at the same time.
Comment 29 David Cook 2015-11-16 06:41:16 UTC
I've been thinking a bit about the connexion_import_daemon.pl, and how it uses /cgi-bin/koha/svc/import_bib. This uses the "Staged Marc Management" backend without exposing it to users. 

Unfortunately, I think that service is hard-coded just to work with connexion_import_daemon.pl... so I might need to alter /cgi-bin/koha/svc/import_bib a bit... but that might be an option. I actually quite like that idea overall as it provides a more loosely coupled system.
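To make that option concrete, this is roughly how a harvester process could hand one record to that service, the same way connexion_import_daemon.pl does: authenticate against /svc/authentication, then POST the MARCXML to /svc/import_bib. The credentials and URL are placeholders, and the query parameters are assumptions based on the staged-import options, so they may need adjusting against the real service:

use Modern::Perl;
use LWP::UserAgent;

my $ua   = LWP::UserAgent->new( cookie_jar => {} );        # keeps the CGISESSID cookie
my $base = 'http://koha.example.org/cgi-bin/koha/svc';     # placeholder staff client URL

my $marcxml = do { local $/; <STDIN> };                    # one MARCXML record to import

# 1. Authenticate (placeholder credentials).
my $auth = $ua->post( "$base/authentication",
    { userid => 'harvest_user', password => 'secret' } );
die 'auth failed: ' . $auth->status_line unless $auth->is_success;

# 2. Push the record; these parameter names are assumptions, not a documented contract.
my $import = $ua->post(
    "$base/import_bib?import_mode=direct&match=1&overlay_action=replace&nomatch_action=create_new&item_action=ignore",
    'Content-Type' => 'text/xml',
    Content        => $marcxml,
);
print $import->decoded_content;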

I'm also tempted to change the existing system so that you define OAI-PMH servers, and then create OAI-PMH tasks for those servers. This information would all be stored in the database. Then, when you wanted to run a task, you could click "Run" on the Web UI, and it would send the task to the daemon. 

I haven't 100% thought out how the Web UI and the daemon will communicate yet. While the above paragraph sounds good, what happens if the daemon dies for some reason? If it requires a message from the Web UI, it'll need a human to restart it. 

Another thought is to let it access the MySQL database... in that case the Web UI would change a field in the database (like "state" to indicate that it should be running), and then tell the daemon something like "READ 15" to read the task from the database with an ID of 15. That way... if the daemon crashes, a server-side process could detect the crash and then tell the daemon to re-start itself... and when the daemon is re-starting, it could just look in the database for any tasks that it should be running, and get back on track.

If it has database access, it's not really that loosely coupled which would be unfortunate...

Actually, another idea... the Web UI could send the task, and the daemon could write it away to a temporary file which it cleans up after it's finished a task. If it crashes and gets restarted, it can check its temporary files to see what it was in the middle of doing. Yeah... that's probably a better idea.
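Roughly what I mean by that, as a purely illustrative sketch (the spool directory and task structure are made up): the daemon writes each accepted task to a file, deletes the file when the task finishes, and re-reads the directory on startup to resume anything it was in the middle of:

use Modern::Perl;
use JSON qw(encode_json decode_json);

my $spool_dir = '/var/spool/koha/oai_tasks';   # hypothetical location

# Called when the Web UI hands the daemon a task, e.g. { id => 15, action => 'harvest' }.
sub accept_task {
    my ($task) = @_;
    my $file = "$spool_dir/task-$task->{id}.json";
    open my $fh, '>', $file or die "Cannot write $file: $!";
    print {$fh} encode_json($task);
    close $fh;
    run_task($task);
    unlink $file;                              # clean up once finished
}

# Called on daemon startup: resume anything left behind by a crash.
sub resume_tasks {
    for my $file ( glob "$spool_dir/task-*.json" ) {
        open my $fh, '<', $file or next;
        my $task = decode_json( do { local $/; <$fh> } );
        close $fh;
        run_task($task);
        unlink $file;
    }
}

sub run_task { }    # stub: would perform the actual harvest/import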

Another idea would be to use shared memory... but I would need to do some more research into that one.
Comment 30 David Cook 2015-11-17 04:36:18 UTC
Thinking more about matching and how complex or even impossible it is.

Consider that you have 2 different OAI-PMH servers with 2 matching records and also 1 matching record locally on Koha.

Which is the source of truth? 

You might argue that the harvested records have a higher priority than the local record... so you can overwrite the local record.

However, what about the 2 harvested records? Which one takes precedence? 

What if they have holdings records? The National Library of Sweden requires that holdings records be partially merged into bibliographic records... that becomes difficult in this scenario. Every time there is a holdings record update, you would need to re-create the bibliographic record from the last harvested bibliographic record (otherwise the holdings-bibliographic merge would quite quickly end up with duplicated or otherwise incorrect fields).

I suppose you could choose the most recent bibliographic record as the highest priority, and you could blindly merge holdings into that bibliographic record on each update...

You'd have to set up a relationship somewhere between the holdings and the bibliographic record though and this gets tough because the holdings from one OAI-PMH server aren't going to map to that bibliographic record using the 004/001 mechanism.

That is... Holdings A 004 refers to  Bibliographic A 001, so there is a link there. However, Holdings B 004 refers to Bibliographic B 001 which we're discarding as it's a "duplicate". 

So we need to have a linkage somewhere between Holdings B and Bibliographic A 001 or preferably Bibliographic A 999$c.

I think that might be possible, but certainly not with Koha's existing import mechanisms.

--

Importing holdings is going to have other issues as well, like how to enforce barcode uniqueness... and how to manage values in records that don't exist in Koha.

I also need to use my special OAI import system for managing holdings imports, because there will be no reference to the OAI-PMH unique identifier in the Koha item MARCXML, so there's no way to use the existing import system to check if that item already exists.

I also don't think there's any way to replace an item record using this system unless it shares the same 952$9. I might be wrong; I haven't investigated that issue thoroughly, but I bet I'm right, as it's a tough one. 

It's also worth reviewing the section marked "Embedded Holdings Information" in http://www.loc.gov/marc/holdings/hd852.html or http://www.loc.gov/marc/bibliographic/bd852.html. 

Part of the difficulty with the holdings is the fact that Koha doesn't support MARC21 Format for Holdings Data (MFHD). It would be much easier if it did, although there would still be the problems with the source of truth when merging bibliographic records. 

Merging records and de-duplicating is one thing when your system is relatively static or updated semi-manually, but when you're importing and auto-merging records at a speed of X records every 2 seconds, you're probably going to run into problems.
Comment 31 David Cook 2015-11-17 05:20:03 UTC
_HOLDINGS_

Diagramming this now...

OAI ID -> Koha ID -> Original 001 -> Parent 001
oai::1 -> bib1 -> 1a -> null
oai::2 -> bib1 -> 1b -> null 
oai::3 -> item 3 -> 3a -> 1a
oai::4 -> item 4 -> 4a -> 1b

So here we've downloaded oai::1 and added it as bib1.

We've downloaded oai::2 and determined that it is a duplicate of bib1. We can either overwrite bib1 or we can simply link to it.

We've downloaded oai::3. Its original parent 001 is 1a, so we can link oai::3 to bib1.

We've downloaded oai::4. Its original parent is 1b, so we can link oai::4 to the entry for oai::2, which is bib1.

In this case, the only problems we have are determining what makes a match, and determining whether oai::1 or oai::2 should provide the metadata for bib1. We  probably need another field to say which OAI record is the source of truth for bib1.
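As a small sketch of that linking (with a hypothetical link table keyed on the OAI identifier), resolving a holdings record's parent 001 to a Koha biblionumber would go through the stored original 001 of the bib-level entries rather than through Koha's own 001; the table and column names are made up:

use Modern::Perl;
use C4::Context;

# Hypothetical link table: one row per harvested record, storing its OAI
# identifier, the 001 it arrived with, its parent 001 (for holdings), and the
# Koha record it was attached to.
sub biblionumber_for_parent_001 {
    my ($parent_001) = @_;
    my $dbh = C4::Context->dbh;

    # Find the bib-level entry whose *original* 001 matches the holdings 004,
    # then return the Koha biblionumber that entry is linked to (bib1 above,
    # whether the holdings arrived as oai::3 or oai::4).
    my ($biblionumber) = $dbh->selectrow_array(q{
        SELECT koha_biblionumber
        FROM   oai_harvest_links
        WHERE  original_001 = ? AND record_type = 'biblio'
    }, undef, $parent_001);

    return $biblionumber;
}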

I might be able to use the existing C4::Matcher() for this... 

It's worth noting that the downloaded metadata will need to be used every time there's an item update, because the National Library of Sweden requires that item-level data be merged into the host bibliographic record... and the only way to do that cleanly is to start with a virgin bibliographic record each time. (Otherwise, when you change a 863 in a holdings record, you won't be updating the 863 in the bib record; you'll be adding a new one and the old one will stay there incorrectly.) Every holdings record will also need to be merged in, which could cause load problems for records with lots of holdings... 

Actually, I'm not sure how that's even possible now that I think about it... since you're harvesting from the holdings endpoint every 2 seconds... 

I suppose you could queue updates to a bibliographic record from the holdings records... but that wouldn't make sense as every updated holdings record would require a full update of the bibliographic record from all holdings records... so any update would need to be processed.

I suppose you could queue updates in terms of... if there's a holdings-bibliographic merge in progress, don't start another one as things will explode... still seems like a potentially intensive operation.

--

Problems to consider and solve (anyone can chime in here):

1) Precedence of bibliographic-bibliographic merges
2) Merging holdings records into bibliographic records (e.g. 852 and 863 into the bibliographic record... not 952 into bibliographic record)
3) Any local changes to a record will be erased by future downloaded updates
Comment 32 Andreas Hedström Mace 2015-11-17 10:24:00 UTC
Just a quick note: the idea of having a harvest running every 2 or 3 seconds is my personal preference. If it is not possible due to potential conflicts or heavy loads, then this could be revised. I would be OK with a harvest once every 10 seconds, every 30 seconds or even once a minute. But preferably not much slower than that...
Comment 33 Katrin Fischer 2015-11-17 14:51:15 UTC
I was told recently that 2-3 seconds is quite standard for OAI-PMH harvests.

I think a problem could occur if Zebra is involved in matching as you have to make sure the indexes have caught up before you can reliably match. Say a record is changed at the source twice in a very short timeframe... or added and then changed again, included in 2 harvests... but not yet indexed when the second runs, etc.
Comment 34 Andreas Hedström Mace 2015-11-17 15:06:10 UTC
Yes, if we can avoid using Zebra for matching that would probably be for the best.
Comment 35 Andreas Hedström Mace 2015-11-18 09:32:58 UTC
(In reply to David Cook from comment #29)

> I'm also tempted to change the existing system so that you define OAI-PMH
> servers, and then create OAI-PMH tasks for those servers. This information
> would all be stored in the database. Then, when you wanted to run a task,
> you could click "Run" on the Web UI, and it would send the task to the
> daemon. 

To me, this sounds like the way to go! In our case, we would set the task of repeated harvest - and not look at it again unless there are problems. =)
 
> I haven't 100% thought out how the Web UI and the daemon will communicate
> yet. While that above paragraph sounds good, what happens if the daemon dies
> for some reason? If it requires a message from the Web UI, it'll need a
> human to restart it. 
> 
> Actually, another idea... the Web UI could send the task, and the daemon
> could write it away to a temporary file which it cleans up after it's
> finished a task. If it crashes and gets restarted, it can check its
> temporary files to see what it was in the middle of doing. Yeah... that's
> probably a better idea.

This too sounds like a good approach to me. Some sort of fail-safe to have the harvester pick up and continue what it was doing if it has crashed - but it should be set from the UI to begin with!
Comment 36 Andreas Hedström Mace 2015-11-18 11:57:22 UTC
(In reply to David Cook from comment #30)
> Thinking more about matching and how complex or even impossible it is.
> 
> Consider that you have 2 different OAI-PMH servers with 2 matching records
> and also 1 matching record locally on Koha.
> 
> Which is the source of truth? 
> 
> You might argue that the harvested records have a higher priority than the
> local record... so you can overwrite the local record.
> 
> However, what about the 2 harvested records? Which one takes precedence? 

I would think that this could be solved by adding a "priority option" when adding new servers. Then the user would decide which source should be ranked higher in the event of conflicts/duplicates.
Comment 37 Viktor Sarge 2015-11-18 13:13:17 UTC
> > You might argue that the harvested records have a higher priority than the
> > local record... so you can overwrite the local record.
> > 
> > However, what about the 2 harvested records? Which one takes precedence? 
> 
> I would think that this could be solved by adding a "priority option" when
> adding new servers. Then the user would decide which source should be ranked
> higher in terms of conflict/duplicates.

Relevant to this discussion might be the fact that we have built both tracking of changes to MARC records (with the possibility to roll back changes) and a system for setting rules for who can change which fields in MARC records. I'll dig up the relevant threads.
Comment 38 Viktor Sarge 2015-11-18 13:17:58 UTC
Write protecting MARC fields based on source of import
http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=14957

History for MARC records. Roll back changes on a timeline or per field.
http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=14367


I'm not quite certain if this should be "See also" connections - what say ye?
Comment 39 Katrin Fischer 2015-11-20 15:32:58 UTC
I think the discussion is very interesting, but I am a bit worried about the handling of a record from multiple sources. Maybe it would be good to agree on a basic version of the harvester for a first implementation and then enhance it for more complicated scenarios step by step?
Comment 40 David Cook 2015-11-23 00:46:32 UTC
(In reply to Katrin Fischer from comment #39)
> I think the discussion is very interesting, but I am a bit worried about the
> handling of a record from multiple sources. Maybe it would be good to agree
> on a basic version of the harvester for a first implementation and then
> enhance it for more complicated scenarios step by step?

I'm inclined to agree with Katrin. I think it makes more sense to have a solid basic version before we try to over-complicate things.

I've actually thought of another problem with the OAI-PMH import... and that's merging bibliographic records. If you were to merge a local record and an OAI-PMH record, and choose for the local record to be the destination record, the OAI-PMH import would be broken for that record... because the merge functionality has no concept of the OAI-PMH harvest.

I like the idea of locking records that have been imported via certain mechanisms, although having that locking be effective across the board would require some rigorous checks in place throughout the code.

I wish that there were some sort of tracking to show the source of all records (e.g. original, Z39.50, OAI-PMH, REST API, etc) to help out with that.

--

The only problem I can see with providing a basic version is that a basic version might not capture all the data that we need for more complicated future scenarios... and it might make it 10 times harder to implement more complicated code in the future as a result...
Comment 41 David Cook 2015-11-23 01:11:05 UTC
(In reply to Katrin Fischer from comment #33)
> I was told recently that 2-3 seconds is quite standard for OAI-PMH harvests.
> 
> I think a problem could occur if Zebra is involved in matching as you have
> to make sure the indexes have caught up before you can reliably match. Say a
> record is changed at the source twice in a very short timeframe... or added
> and then changed again, included in 2 harvests... but not yet indexed when
> the second runs, etc.

I agree once again with Katrin. I think I've said before (either here or via email) that using Zebra for matching can be very unreliable. 

Currently, I use the unique OAI-PMH identifiers to handle all harvested records, and that's quite robust, since that identifier should be persistent. However, that obviously doesn't help with matching OAI-PMH harvested records against local records created via other methods.

In the short-term, perhaps merging bibliographic records would have to occur manually. Or maybe a deduplication tool could be created to semi-automate that task... although I think that tool would have to prevent any deletion of OAI-PMH harvested records.

Actually, this hearkens back to my previous comment. It would be good if each record had a simple way of identifying its origin. So you couldn't delete a record obtained via OAI-PMH unless its parent repository was deleted from Koha or unless you used an OAI-PMH management tool to delete records for that repository. 

I think providing this "source" or "origin" would need to be done consistently or rather... extensibly. I wouldn't want it to be OAI-PMH specific as that would be short-sighted. 

At the moment, everything that goes through svc/import_bib uses a webservices import_batch... but that's not very unique. It would be interesting to have unique identifiers for import sources. So you might use svc/import_bib with connexion_import_daemon.pl, or with MARCEdit, or with your home-grown script, or whatever. It would be interesting to distinguish those separately... and maybe prevent writes/deletions for records that are entered via connexion_import_daemon.pl and home-grown script XYZ, while leaving ones imported via MARCEdit to be managed however you like, since in that case you just exported some original records, made some changes, and re-imported them via MARCEdit.

One way of doing that would actually be to use developer keys... so a developer would need to get a key from Koha before using the web service and then the Koha sysadmin could handle the interaction between that service and Koha's internals using that key (e.g. if records are imported via Webservice A, prevent Koha users from doing anything with them).

I suppose that's a bit tougher to do with OAI-PMH... but not necessarily. When a new OAI-PMH repository is added, the system could generate a key for it, and use that key for handling the permissions for Koha users...

I think that element of the discussion would relate a lot to http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=14957...
Comment 42 David Cook 2015-11-24 06:26:42 UTC
I did some more thinking today...

The daemon, which I'm going to call Icarus as it's way easier to say than "OAI-PMH harvester daemon", is going to be the focal point.

Clients, such as the Koha web staff client or an icarus-client tool, will send "harvesting tasks" to the daemon as JSON messages via a Unix socket. The daemon will process those JSON messages, periodically perform the tasks to download the records, and then hand the records off to a record processor plugin.

The plugin can then do whatever it likes: hook directly into Koha, send the record to a REST API, and so on. The JSON message is going to have an "extras" object that can contain data for the plugin, such as the URL of an API to send to. 
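
For illustration, sending a task from the client side could look roughly like the untested sketch below. The socket path, the message field names, and the "extras" contents are all made up here - they're assumptions, not a finished protocol.

use strict;
use warnings;
use IO::Socket::UNIX;
use JSON::PP qw(encode_json);

# A hypothetical harvesting task; every field name here is illustrative only.
my $task = {
    type           => 'harvest',
    baseURL        => 'http://example.org/oai',
    verb           => 'ListRecords',
    metadataPrefix => 'marcxml',
    interval       => 10,    # seconds between harvests
    extras         => {
        import_url => 'http://koha.example.org/cgi-bin/koha/svc/import_oai',
    },
};

# Connect to the daemon's Unix socket (the path is an assumption) and send the task.
my $socket = IO::Socket::UNIX->new(
    Type => SOCK_STREAM(),
    Peer => '/var/run/koha/icarus.sock',
) or die "Cannot connect to Icarus: $!";

print {$socket} encode_json($task), "\n";
close $socket;

The daemon would then decode the JSON, schedule the task, and pass the "extras" hash straight through to whichever plugin handles the downloaded records.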

The plan at the moment is to leave it up to individuals to add the plugin to @INC when starting up the daemon.

That said, I'm thinking about adding some sort of default plugin, so that out of the box users can use Icarus (i.e. the OAI-PMH harvester) without any additional configuration. 

I think the plugin approach should allow me to build 90% of the infrastructure we need for the harvester while leaving 10% up to libraries to have custom implementations. In theory, this could also help people with different timelines for their desired implementation. That is, I can provide the core functionality, but you might consider developing your own plugins to handle the record. 

Anyway, I have to run, but that's where my thoughts (and diagrams) have led me. 

Oh, the daemon will also be based on the POE module. While it was recently removed from Koha's dependencies as it wasn't being used anyway, I imagine it will be trivial to add it back in. We'll have to ask Galen who was the one who removed it, I think, and who will be the package manager going forward...
Comment 43 Andreas Hedström Mace 2015-11-24 14:27:54 UTC
I think your thoughts for Icarus (oh, the name!) are well thought out and promising. Building core functionality with a plugin system on top makes sense to me, as it will allow plenty of customization for individual libraries! But adding a default plugin is probably a good idea too...
Comment 44 Andreas Hedström Mace 2015-11-24 21:54:54 UTC
(In reply to David Cook from comment #41)
> I agree once again with Katrin. I think I've said before (either here or via
> email) that using Zebra for matching can be very unreliable. 
> 
> Currently, I use the unique OAI-PMH identifiers to handle all harvested
> records, and that's quite robust, since that identifier should be
> persistent. However, that obviously doesn't help with matching OAI-PMH
> harvested records against local records created via other methods.

I think we can all agree that any matching rules should not rely on Zebra. But I don't think matching only via OAI-PMH identifiers would work for harvests from union catalogs. If I understand it correctly, it would force all libraries who want to start using Koha with OAI-PMH harvests to either migrate using OAI-PMH or end up with a duplicate of all their records (since none of the local records will have OAI-PMH identifiers). In our case that would be about 1.2 million duplicates.

> In the short-term, perhaps merging bibliographic records would have to occur
> manually. Or maybe a deduplication tool could be created to semi-automate
> that task... although I think that tool would have to prevent any deletion
> of OAI-PMH harvested records.

Handling 1.2 million duplicates manually, or even semi-automatically, will most likely not be possible. Although it might be difficult technically, I still think some sort of matching rule is necessary.

> Actually, this hearkens back to my previous comment. It would be good if
> each record had a simple way of identifying its origin. So you couldn't
> delete a record obtained via OAI-PMH unless its parent repository was
> deleted from Koha or unless you used a OAI-PMH management tool to delete
> records for that repository. 
> 
> I think providing this "source" or "origin" would need to be done
> consistently or rather... extensibly. I wouldn't want it to be OAI-PMH
> specific as that would be short-sighted. 

Marking the source/origin of a record sounds like a good idea to me, if it can be easily incorporated.

> At the moment, everything that goes through svc/import_bib uses a
> webservices import_batch... but that's not very unique. It would be
> interesting to have unique identifiers for import sources. So you might use
> the svc/import_bib with the connexion_import_daemon.pl, or with MARCEdit, or
> your home-grown script, or whatever. It would be interesting to distinguish
> those separately... and maybe prevent writes/deletions for records that are
> entered via connexion_import_daemon.pl and home-grown script XYZ, while
> leaving ones imported via MARCEdit to be managed however since you just
> exported some original records and re-imported them via MARCEdit after
> making some changes.

I was going to ask how this would tie in with the development of the REST API, but David's comment #42 explains that.
Comment 45 Leif Andersson 2015-11-29 22:08:27 UTC
Have you ever considered "exporting" some of the MARC fields to a separate MySQL table? What I am thinking of is those fields that most likely will be used for duplicate detection: 001, 003, 020, 022, 035
We could then do the necessary matching without involving Zebra, in most cases I'd imagine.

To be useful this would have to be applied on all imports, not only OAI.

Such a table could also be used to save information about the origin of a record - if that is desired.
Comment 46 David Cook 2015-11-29 23:22:03 UTC
(In reply to Leif Andersson from comment #45)
> Have you ever considered "exporting" some of the marc fields to a separate
> mysql table? What I am thinking of is those fields that most likely will be
> used for duplicate detection: 001, 003, 020, 022, 035
> We could than do the necessary matching without involving zebra. In most
> cases I'd imagine.
> 

Yes, I've thought about this a bit. The 020 and 022 can already be found in biblioitems.isbn and biblioitems.issn respectively. Unfortunately, they store multiple values in the same field which is suboptimal in this case although not impossible to use...

For a while, I've been thinking that it would be nice to store the 001 somewhere and perhaps the 035. 

I think the problem with that is we're in a place where we actually want to be moving away from MARC... not entrenching it further. So I don't think we should really add to the biblio or biblioitems tables. 

Of course, there could be a way around that by making a generic "metadata" table. Something like...

metadata.id, metadata.record_id, metadata.scheme, metadata.qualifier, metadata.value. 

So that would look like:

1, 1, marc21, 001, 123456789

I think that's actually very similar to what they do in DSpace, and I've seen other library systems store their MARC records in a similar way.

> To be useful this would have to be applied on all imports, not only OAI.
> 

Well, it would actually need to be applied to _all records_ rather than _all imports_. You'd need that data filled for all records if you were going to match properly.

> Such a table could also be used to save the information of the origin of a
> record - if that is desired.

Actually, that's a good point. We could do something like:

metadata.id, metadata.record_id, metadata.scheme, metadata.qualifier, metadata.value. 
2, 1, koha, record_origin, oai-pmh

Actually, in retrospect, it would be wise to add another field like "metadata.record_type" for biblio, authority, and item.
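
Just to make the idea concrete, here's a rough, untested sketch of what such a table could look like, with the two example rows from above. The column types, lengths, and index are all assumptions on my part, not a proposal for kohastructure.sql.

use strict;
use warnings;
use DBI;

# Placeholder connection details.
my $dbh = DBI->connect('DBI:mysql:database=koha', 'koha_user', 'password',
    { RaiseError => 1 });

# Hypothetical "metadata" lookup table: one row per exported value.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS metadata (
        id          INT AUTO_INCREMENT PRIMARY KEY,
        record_type ENUM('biblio','authority','item') NOT NULL,
        record_id   INT NOT NULL,
        scheme      VARCHAR(16)  NOT NULL,  -- e.g. 'marc21', 'koha'
        qualifier   VARCHAR(32)  NOT NULL,  -- e.g. '001', 'record_origin'
        value       VARCHAR(255) NOT NULL,
        KEY idx_match (scheme, qualifier, value)
    )
});

# The two example rows from the discussion above.
my $sth = $dbh->prepare(q{
    INSERT INTO metadata (record_type, record_id, scheme, qualifier, value)
    VALUES (?, ?, ?, ?, ?)
});
$sth->execute('biblio', 1, 'marc21', '001',           '123456789');
$sth->execute('biblio', 1, 'koha',   'record_origin', 'oai-pmh');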

--

I think there are some potential obstacles to this approach though:

1) Ideally, it would be discussed with the Koha community and Release Manager to see if this table could be used by other existing parts of Koha and new features
2) It would need to be added to the existing record Add/Mod/Del functions. This isn't necessarily a huge obstacle...
3) The table would need to be populated initially... for databases with millions of records, this would be very time-consuming. Since it would be an intensive process, I think it would need to be run at the discretion of a system administrator. I think the "touch_all_biblios.pl" script would actually take care of it, since we'd be updating the Add/Mod/Del functions, so the "metadata" table would be populated by running that script. 
4) How to decide which fields should be "exported" into this table? While we could provide configuration for this, configuration changes would require "touch_all_biblios.pl" to be run again for the "metadata" fields to be generated correctly. Perhaps a backend configuration file would be best in this case, as the person editing it would also be someone who could re-generate the "metadata" table.
Comment 47 David Cook 2015-11-29 23:41:13 UTC
Leif and Andreas:

If I understand correctly, your main use case for matching would be to make sure that records previously imported from the union catalogue aren't duplicated when you start using OAI-PMH, yes?

In that case, would matching on the 001 be suitable? 

You could configure your OAI-PMH importer module to look for "metadata.value == incoming record 001" and update the matching record. That would prevent duplication in this situation.

I think in that situation it would also be good to add a "metadata.value" for the unique OAI-PMH identifier... and only update a record if it has no OAI-PMH identifier or if it has an OAI-PMH identifier that matches the incoming record.

That way, you might prevent different OAI-PMH repositories from updating each other's records if you're matching using 020 instead of 001. 
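
To illustrate the rule I have in mind, here's a minimal, untested sketch. It assumes the hypothetical "metadata" table from comment 46, an invented 'oai_identifier' qualifier for harvested records, and a plain DBI handle - none of this code exists in the patches.

use strict;
use warnings;

# Decide which local biblio (if any) an incoming record should update.
# Returns a record_id, or undef if a new record should be created instead.
sub find_match {
    my ($dbh, $oai_identifier, $incoming_001) = @_;

    # 1) Prefer a record previously imported with the same OAI identifier.
    my ($by_oai) = $dbh->selectrow_array(
        q{SELECT record_id FROM metadata
          WHERE record_type = 'biblio'
            AND scheme = 'koha' AND qualifier = 'oai_identifier' AND value = ?},
        undef, $oai_identifier);
    return $by_oai if defined $by_oai;

    # 2) Otherwise match on 001, but only against records that have no OAI
    #    identifier of their own, so one repository can't clobber another's record.
    my ($by_001) = $dbh->selectrow_array(
        q{SELECT m.record_id FROM metadata m
          WHERE m.record_type = 'biblio'
            AND m.scheme = 'marc21' AND m.qualifier = '001' AND m.value = ?
            AND NOT EXISTS (
                SELECT 1 FROM metadata o
                WHERE o.record_id = m.record_id AND o.record_type = 'biblio'
                  AND o.scheme = 'koha' AND o.qualifier = 'oai_identifier')},
        undef, $incoming_001);
    return $by_001;
}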

Since I want the OAI-PMH importer to be plugin based, the rules will be customizable. 

But if we're not going to use Zebra or some other third-party index, I think Leif's suggestion of a new "exported" metadata table might be necessary. 

Failing that "metadata" table... there is the "import_biblios", "import_items", and "import_auths" tables which might be re-purposed. "import_biblios" and "import_auths" both have a "control_number" field which I think may be unused... but that won't work in the case that you already have records in Koha which need to expose data for matching.

In an effort to move this matching discussion along, I'm going to submit a message to the koha-devel listserv as well to discuss the "metadata" table idea.
Comment 48 David Cook 2015-11-30 00:43:13 UTC
Of course, adding a "metadata" table might not be a great idea, because it would be "yet another place" that we store metadata.

There's already biblioitems.marcxml (and the soon to be removed biblioitems.marc), Zebra, biblio/biblioitems/items relational fields. 

I think "metadata" might be a good idea as a long-term shift, but it might be met with resistance in the short-term. 

That said, if we're going to move away from MARC, surely we need to start transitioning to infrastructure which is independent of MARC. 

But that's why I think we need community discussion and a decision (not necessarily consensus) on the best way to proceed.
Comment 49 Leif Andersson 2015-12-01 10:54:47 UTC
(In reply to David Cook from comment #47)
> Leif and Andreas:
> 
> If I understand correctly, your main use case for matching would be to make
> sure that records previously imported from the union catalogue aren't
> duplicated when you start using OAI-PMH, yes?
> 
> In that case, would matching on the 001 be suitable? 
> 

If we are only importing from one source, 001 would be fine.
But if we will be using several sources for our imports, then relying on 001 would sooner or later result in a "false matching" where we end up having one record overwritten by a totally different one.

How reliable would it be to add in 003?
In MARC21 003 is the alphanumeric "MARC code for the organization whose control number is contained in field 001".
I don't know how this fits UNIMARC, though.

David suggested in a mail (to the koha-devel list) moving field 001 to 035,
or even 001 + 003 to 035.
In doing so, some refinements could be made to this matching point (e.g. normalization, or if 003 is empty, adding what we know about the exporting catalog, etc.)
Comment 50 David Cook 2015-12-02 07:25:54 UTC
(In reply to Katrin Fischer from comment #33)
> I was told recently that 2-3 seconds is quite standard for OAI-PMH harvests.
> 

Katrin, who said this to you? Andreas was also interested in every 2-3 seconds, but that doesn't seem very feasible to me.

Today, I've tried downloading records, and I can download 21 records from a Swedish server in 4-5 seconds. 

The OAI-PMH harvester utilizes synchronous code for downloading records, so if you have multiple OAI-PMH servers, it will have to download first from Server A, then Server B, then Server C... and then it will start processing records.

If each server takes 5 seconds, that's 15 seconds before you even start processing the first record. 

I think I might be able to find some asynchronous code for downloading records with Perl, but even then it might take 5 seconds or longer just to download records... that's longer than the ideal 2-3 seconds. Plus, the asynchronous code would require me to stop using the HTTP::OAI module and create my own asynchronous version of it... which would take some time and probably be more error-prone due to the speed at which I'm trying to develop. 

I suppose 21 records might be a lot for a harvest running every 2-3 seconds... I just tried the query "verb=ListRecords&metadataPrefix=marcxml&from=2015-12-01T18:01:45Z&until=2015-12-01T18:01:47Z", and my browser downloaded 4 records in 1 second.
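
For reference, that kind of selective harvest looks roughly like the sketch below with HTTP::OAI (the module I'm currently using). The base URL is a placeholder, since the LIBRIS endpoint is password protected.

use strict;
use warnings;
use HTTP::OAI;

# Placeholder base URL.
my $harvester = HTTP::OAI::Harvester->new(
    baseURL => 'http://example.org/oai',
);

# The same kind of narrow time window as the query above.
my $response = $harvester->ListRecords(
    metadataPrefix => 'marcxml',
    'from'         => '2015-12-01T18:01:45Z',
    'until'        => '2015-12-01T18:01:47Z',
);
die $response->message if $response->is_error;

while (my $record = $response->next) {
    # identifier/datestamp come from the OAI header; metadata holds the MARCXML payload
    printf "%s (%s)\n", $record->identifier, $record->datestamp;
}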

I suppose it might only take another 1-2 seconds to process those 4 records and import them into Koha. That's just a guess though, as I haven't written the necessary new processing/importing code yet. 

I suppose if I'm sending HTTP requests asynchronously and if it only takes 1 second to fetch a handful of records, it might be doable in 2-3 seconds... but the more records to fetch from the server, the longer the download is going to take and that blows out the overall time. If 2-3 seconds is just an ideal, it might not matter if it takes 5-10 seconds.

I'm keen for feedback from potential users of the OAI-PMH harvester. How much does time/frequency matter to you?

This might be a case of premature optimisation. It might be better for me to focus on building a functional system, and then worry about improving the speed later. I did that recently on a different project, and it worked quite well. I focused the majority of my time on meeting the functional requirements, and then spent a few hours tweaking the code to reach massive increases in performance. However, if the system needs to be re-designed to gain those performance increases, then that seems wasteful.

--

Another thought I had was to build an "import_oai" API into Koha, and then perhaps write the actual OAI-PMH harvester using a language which works asynchronously by design, like Node.js. Not that I'm excellent with Node.js. I've written code in my spare time which fetches data from a database and asynchronously updates a third-party REST API, but it's certainly not elegant... and requiring Node.js adds a layer of complexity that the Koha community itself would not want to support in any way shape or form I would think. But we could create an import API and then rely on individual libraries to supply their own OAI-PMH harvesters... although for that to work successfully, we would need standards for conversations between harvesters and importers. 

I'm thinking the "import_oai" or "import?type=oai" API might be a good idea in any case, although I'm not sure how Apache would cope with being hammered by an OAI-PMH harvester sending it multiple XML records every few seconds.

Perhaps it's worthwhile having one daemon for downloading records, and another for importing records. Perhaps it's worth writing a forking server to handle incoming records in parallel. 

--

Honestly though, I would ask that people think further about the frequency of harvests. Is every 2-3 seconds really necessary? Do we really need it to be able to perform that quickly?

If so, I'm open to ideas about how to achieve it. I have lots of ideas as outlined above, but I'm more than happy to hear suggestions, and even happier to be told not to worry about the speed.

Unless people think it's a concern, I'm going to continue development with slower synchronous code. I want to make this harvester as modular as possible, so that future upgrades don't require a rewrite of the whole system.

Right now, I see the bottleneck being with the downloading of records and passing those records to a processor/importer. The importer, at least for KB, is going to be difficult in terms of the logic involved, but I'm not necessarily that worried about its speed at this point. So I might try to prototype a synchronous downloader as fast as I can and spend more time on the importer and refactoring existing code.
Comment 51 Leif Andersson 2015-12-02 08:06:45 UTC
(In reply to David Cook from comment #50)
> Honestly though, I would ask that people think further about the frequency
> of harvests. Is every 2-3 seconds really necessary? Do we really need it to
> be able to perform that quickly?
> 

Well, the use case envisioned by Stockholm UL would in practice ideally involve fetching ONE record every 10 minutes or so!
The cataloger will be creating/modifying a bib record and a mfhd record in our union catalog.
Next, the cataloger will turn to our local catalog, Koha, expecting to find this record already imported.
If there is a way for the harvester to decide which ONE record to get...maybe even with some intervention by the cataloger...?
So when this ONE record is asked for, we want it to be a quick process getting it from the source and into Koha.

Then nightly more massive harvests could be performed to catch up with other modifications to the union catalog.

From my point of view it seems a little contrived to use an OAI harvester for this kind of job - syncing Koha with the union catalog just to be able to instantly work locally on a record.
But it is what has been recommended to us by the union catalog from which we are importing records.
Comment 52 David Cook 2015-12-03 01:14:57 UTC
(In reply to Leif Andersson from comment #51)
> (In reply to David Cook from comment #50)
> > Honestly though, I would ask that people think further about the frequency
> > of harvests. Is every 2-3 seconds really necessary? Do we really need it to
> > be able to perform that quickly?
> > 
> 
> Well, the use case envisioned by Stockholm UL would in practice ideally
> involve fetching ONE record every 10 minutes or so!
> The cataloger will be creating/modifying a bib record and a mfhd record in
> our union catalog.
> Next, the cataloger will turn to our local catalog, Koha, expecting to find
> this record already imported.
> If there is a way for the harvester to decide which ONE record to
> get...maybe even with some intervention by the cataloger...?
> So when this ONE record is asked for, we want it to be a quick process
> getting it from the source and into Koha.
> 

What's Stockholm UL? 

So you're saying that the union catalogue will only have updates about once every 10 minutes? Or that the cataloguer will only be accessing a record in the union catalogue and Koha once every 10 minutes?

I am including a mechanism for fetching "one" record, so long as the user knows the OAI-PMH identifier they're after. They'll be able to add a task for that. Perhaps a future development could provide an interface in the cataloguing module for adding/updating a single record. In place of that interface, they'll be able to add a task in the same area as the other tasks in order to get the one record...
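
Under the hood, that single-record task boils down to something like the sketch below (base URL and identifier are placeholders, and error handling is kept minimal):

use strict;
use warnings;
use HTTP::OAI;

my $harvester = HTTP::OAI::Harvester->new(
    baseURL => 'http://example.org/oai',    # placeholder
);

# The OAI-PMH identifier would come from the saved task / the user.
my $response = $harvester->GetRecord(
    identifier     => 'oai:example.org:123456',
    metadataPrefix => 'marcxml',
);
die $response->message if $response->is_error;

my $record = $response->next;
print $record->metadata->dom->toString(1) if $record && $record->metadata;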

> Then nightly more massive harvests could be performed to catch up with other
> modifications to the union catalog.
> 

Those nightly harvests would certainly be possible with the current design. 

> From my point of view it seems a little contrived to use a OAI harvester for
> this kind of job - synching Koha with union catalog just to be able to
> instantly work locally on a record.
> But it is what has been recommended to us by the union catalog from which we
> are importing records.

What do you mean by "work locally on a record"? The union catalog will be the source of truth, so any modifications to a record in Koha would be overwritten by a change in the union catalogue.
Comment 53 Magnus Enger 2015-12-03 08:25:25 UTC
> What's Stockholm UL? 

Probably Stockholm University Library.
Comment 54 Leif Andersson 2015-12-03 08:36:57 UTC
> What's Stockholm UL? 

Stockholm University Library
 
> So you're saying that the union catalogue will only have updates about once
> every 10 minutes? Or that the cataloguer will only be accessing a record in
> the union catalogue and Koha once every 10 minutes?
>

As a rough estimate the cataloger will access a record every 10 minutes.

> What do you mean by "work locally on a record"? The union catalog will be
> the source of truth, so any modifications to a record in Koha would be
> overwritten by a change in the union catalogue.

Sorry for being unclear. I meant item data: barcode, item type...
Comment 55 David Cook 2015-12-03 23:11:17 UTC
(In reply to Leif Andersson from comment #54)
> > What's Stockholm UL? 
> 
> Stockholm University Library
>

I had a feeling, but I thought you were at the national library, so I was just a bit confused.

> > So you're saying that the union catalogue will only have updates about once
> > every 10 minutes? Or that the cataloguer will only be accessing a record in
> > the union catalogue and Koha once every 10 minutes?
> >
> 
> As a rough estimate the cataloger will access a record every 10 minutes.
> 

In terms of updates, are we aiming at providing very up-to-date records for just cataloguers or both cataloguers and OPAC users?

> > What do you mean by "work locally on a record"? The union catalog will be
> > the source of truth, so any modifications to a record in Koha would be
> > overwritten by a change in the union catalogue.
> 
> Sorry for being unclear. I ment item data: barcode, item type...

Interesting... I've been wondering about that. While I think we can create Koha item records from MFHD records, I'm not sure how we'd update Koha item records from MFHD records, especially if the MFHD records don't have barcodes. 

The model I'm working with right now is that the local record is replaced by the incoming OAI-PMH record... which would eliminate any local changes (including barcode, item type, etc). Trying to automatically merge an incoming MFHD record with a local Koha item record might work but it could be error prone.

Also, without a unique identifier like a barcode, you can't match incoming MFHD records to Koha item records (which weren't originally imported via OAI-PMH). 

Leif: Can you explain more about how you envision the OAI harvester working with MFHD records and Koha items?
Comment 56 David Cook 2015-12-04 00:52:22 UTC
Just realized that I forgot to respond to this comment...

(In reply to Leif Andersson from comment #49)
> (In reply to David Cook from comment #47)
> > Leif and Andreas:
> > 
> > If I understand correctly, your main use case for matching would be to make
> > sure that records previously imported from the union catalogue aren't
> > duplicated when you start using OAI-PMH, yes?
> > 
> > In that case, would matching on the 001 be suitable? 
> > 
> 
> If we are only importing from one source, 001 would be fine.
> But if we will be using several sources for our imports, then relying on 001
> would sooner or later result in a "false matcning" where we end up having
> one record overwritten by a totally different one.
> 

Agreed. I think that would be a very real risk. 

> How reliable would it be to add in 003?
> In MARC21 003 is the alphanumeric "MARC code for the organization whose
> control number is contained in field 001".
> I don't know how this fits UNIMARC, though.
> 

It should be trivial to merge the 001 and 003 together to form a 035 like "(OCoLC)814782" (http://www.loc.gov/marc/bibliographic/bd035.html). That would certainly help eliminate that risk of "false matching" mentioned above.
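
As an untested sketch with MARC::Record (which Koha already uses), it could look like the snippet below. It simply skips records where the 001 or 003 is missing, which - as noted just below - may be the case for LIBRIS data.

use strict;
use warnings;
use MARC::Record;
use MARC::Field;

# Copy 001 + 003 into a 035 of the form "(ORGCODE)controlnumber".
# $record is a MARC::Record; nothing happens if 001 or 003 is missing.
sub add_035_from_001_003 {
    my ($record) = @_;

    my $f001 = $record->field('001');
    my $f003 = $record->field('003');
    return unless $f001 && $f003;

    my $system_control_number = sprintf '(%s)%s', $f003->data, $f001->data;

    # Skip if an identical 035$a is already there.
    return if grep { ($_->subfield('a') // '') eq $system_control_number }
                   $record->field('035');

    $record->insert_fields_ordered(
        MARC::Field->new('035', ' ', ' ', a => $system_control_number)
    );
    return $record;
}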

However, in the LIBRIS data that I've seen, I haven't seen any examples of a 003. 

I'm not sure how this fits UNIMARC either. Hopefully some of the French people lurking on this bug can provide some insight there.

> David suggested in a mail (to Koha devel list) to move field 001 to 035.
> Or even 001 + 003 to 035
> In doing so some refinements could be done to this matching point (e.g.
> normalization, if 003 is empty then add what we know about the exporting
> catalog etc)

Hmm, yeah, that would probably be achievable. However, that wouldn't really help too much, because your local Koha records will be missing that 003/additional exporting catalog information. So the matching will still fail.

In terms of your local Koha catalogue, you could add a 003 to all records before starting to use the OAI-PMH harvester. Then either update records in LIBRIS to have a 003 as well, OR have the harvester inject a 003 into incoming records which will match the 003 in your local Koha catalogue. That's probably the way to go...

Actually, let me think about that for a second...

What do your existing Koha records have for 001 and 003? If they've been previously imported from LIBRIS, do they have the LIBRIS 001 in the Koha 001 field?

--

In terms of matching, we basically need to make sure that incoming data can map/match to existing data. If Koha records have 001 but no 003, then we have to be able to match using just the 001 from the incoming record. If Koha records have 001 and 003, then we need to match the 001 and 003 of the incoming record against those... 

Failing that, we need to match a 035 on the incoming record with a 035 on an existing Koha record. I imagine that none of the records have a 035 field. Perhaps it would be worthwhile to create one of those as well as I described above... that would probably be best (at least for bibliographic records and authority records).

As I noted in my other comment, matching item records is going to be tricky, as we won't have the 035 mechanism available, unless we cheat a bit and put it in the 952$i (inventory number) or something like that... 

--

I admit that I'm starting to think a bit about how to add support for MARC holdings into Koha. While long-term we do want to get rid of MARC, I wonder if it could be useful having a "holdings" database in Zebra as well. In terms of library systems, I imagine there will always be a separation of abstract entities and print/digital holdings. 

Right now, we're really limited using the 952 field for items in Koha... but we do it that way instead of supporting MFHD because there was no other way of searching Zebra using both "bibliographic" and "holdings" data at the same time if that data was in separate records.

I suppose that brings me back to the idea that Koha should really have an intermediary extensible metadata format which is indexed. MARC bibliographic and MARC holdings records could be held separately and then processed into a single intermediary record which is indexed and used for search. Display... we could either use the intermediary record or use the internal system numbers to fetch the original MARC metadata for display. 

Of course, that would require a significant and separate development effort to achieve...

Plus... even if we could match a MARC holdings record to a MARC holdings record... each holdings record can have X items specified within it... so you still need a unique identifier at the item-level in order to do full matching.

--

An added problem with using OAI-PMH and items is that items can be on loan or otherwise in a "process" which cannot be affected by changes upstream at the OAI-PMH server. 

Actually, now that I think about it, there is so much data stored in the 952 item record that cannot be overwritten by an upstream change... 

What is the ideal scenario for harvesting MFHD records via OAI-PMH?
Comment 57 Viktor Sarge 2015-12-04 08:10:42 UTC
> It should be trivial to merge the 001 and 003 together to form a 035 like
> "(OCoLC)814782" (http://www.loc.gov/marc/bibliographic/bd035.html). That
> would certainly help eliminate that risk of "false matching" mentioned above.
> 
> However, in the LIBRIS data that I've seen, I haven't seen any examples of a
> 003. 

Interesting. I've actually perceived it the other way around - that Libris is actually quite good at sticking "SE-LIBR" into 003 of all their records. 

(A quick example http://libris.kb.se/bib/14862617?vw=full&tab3=marc)
Comment 58 Magnus Enger 2015-12-04 08:21:09 UTC
(In reply to Viktor Sarge from comment #57)
> Interesting. I've actually perceived it the other way around - that Libris
> is actually quite good at sticking "SE-LIBR" into 003 of all their records. 
> 
> (A quick example http://libris.kb.se/bib/14862617?vw=full&tab3=marc)

A search for se-libr in Hylte gives 7538 hits:
http://hylte.bibkat.se/cgi-bin/koha/opac-search.pl?q=se-libr

The total number of records is 64397.
Comment 59 David Cook 2015-12-06 22:32:25 UTC
(In reply to Viktor Sarge from comment #57)
> > It should be trivial to merge the 001 and 003 together to form a 035 like
> > "(OCoLC)814782" (http://www.loc.gov/marc/bibliographic/bd035.html). That
> > would certainly help eliminate that risk of "false matching" mentioned above.
> > 
> > However, in the LIBRIS data that I've seen, I haven't seen any examples of a
> > 003. 
> 
> Interesting. I've actually perceived it the other way around - that Libris
> is actually quite good at sticking "SE-LIBR" into 003 of all their records. 
> 
> (A quick example http://libris.kb.se/bib/14862617?vw=full&tab3=marc)

That's very interesting, Viktor!

When I look at http://libris.kb.se/bib/219553?vw=full&tab3=marc, I see a 003. However, when I look at http://data.libris.kb.se/bib/oaipmh?verb=GetRecord&metadataPrefix=marcxml&identifier=http://libris.kb.se/resource/bib/219553, I do not see a 003.

There are a fair number of other discrepancies between the catalogue and the OAI-PMH server (which is password protected) it seems. I think the OAI-PMH server is still in beta, so perhaps that's the explanation?
Comment 60 Andreas Hedström Mace 2015-12-08 11:16:37 UTC
(In reply to David Cook from comment #52)
> (In reply to Leif Andersson from comment #51)
> > (In reply to David Cook from comment #50)
> > > Honestly though, I would ask that people think further about the frequency
> > > of harvests. Is every 2-3 seconds really necessary? Do we really need it to
> > > be able to perform that quickly?
> > > 
> > 
> > Well, the use case envisioned by Stockholm UL would in practice ideally
> > involve fetching ONE record every 10 minutes or so!
> > The cataloger will be creating/modifying a bib record and a mfhd record in
> > our union catalog.
> > Next, the cataloger will turn to our local catalog, Koha, expecting to find
> > this record already imported.
> > If there is a way for the harvester to decide which ONE record to
> > get...maybe even with some intervention by the cataloger...?
> > So when this ONE record is asked for, we want it to be a quick process
> > getting it from the source and into Koha.
> > 

To confuse things a little, I will have to contradict my colleague at Stockholm Univ. Library a little by saying that I don't see why we would want to replicate functionality already offered by LIBRIS (the Swedish union catalogue, for those lurking on this thread) - where you can download records individually and then run batch exports at night - rather than creating something better/faster.

For me it is preferable to have the catalogue as up-to-date as possible, since LIBRIS will be the "master" (or source of truth as David calls it) for our data. I would rather want the harvester to run every ten seconds or so (or however fast we can get it), to get all updates made to "our" records. The only drawback I can see from such an approach would be an increased load on the servers, which is not a trivial thing of course, but something that should be manageable. (LIBRIS might have a bigger problem if a lot of Swedish libraries start using OAI-PMH harvesting with this approach, but they have themselves recommended this use and will have to handle it accordingly.)

Also, I would prefer if the process of harvesting records can be as automated as possible, not involving any extra steps on the cataloger's part. We want to make their cataloging easier - not more complex!

> So you're saying that the union catalogue will only have updates about once
> every 10 minutes? Or that the cataloguer will only be accessing a record in
> the union catalogue and Koha once every 10 minutes?

Records that we handle, i.e. adding/updating either the bibliographic record or the holdings record (or both), are probably only touched around every 5-10 minutes, as Leif says. But changes made by other Swedish libraries to the bibliographic records for which we have holdings attached are probably much more frequent. I will try to look at this in the upcoming days, manually harvesting at close intervals (I'm thinking of trying both a 3-second and a 10-second approach) to see how many records are downloaded with each harvest.

> I am including a mechanism for fetching "one" record, so long as the user
> knows the OAI-PMH identifier they're after. They'll be able to add a task
> for that. Perhaps a future development could be done to provide an interface
> in the cataloguing module for adding/updating a single record. In place of
> that interface, they'll be able to add a task in the same area as the other
> tasks in order to get the one record...
> 
> > Then nightly more massive harvests could be performed to catch up with other
> > modifications to the union catalog.
> > 
> 
> Those nightly harvests would certainly be possible with the current design. 

As I mentioned above, ideally I would want the harvester to run repeatedly, at short intervals. But other libraries who are interested in using the harvester might have other ideas about which set-up would be best for them. So having the flexibility to run it either way (individual harvests plus more massive nightly harvests, or a repeated harvest every 10 seconds or so) would be wonderful!

I think David's idea of a plug-in approach, together with the harvest tasks, will work well here!
Comment 61 David Cook 2015-12-09 05:14:12 UTC
I've been doing more thinking about OAI-PMH harvesting of holdings records...

I looked at some online documentation for how systems like Voyager link holdings records and item records, and while the information I found was a bit spotty, the linkage seems rather tenuous.

It looks like the holdings record will store a location and a call number in an 852 field, and then when creating an item, that location and call number will be pulled into the item. It looks like the item is also linked in some manner to the holdings record.

So when you're viewing a bibliographic record in the catalogue, you'll also see data from the holdings record merged together with data from the item record.

--

So one bibliographic record can be linked to many holdings records which can be linked to many item records.

The problem with harvesting holdings records and trying to do something with them in Koha is that Koha doesn't support MARC holdings records. This leads us to want to create item records directly from holdings records, but holdings records aren't item records: while they store item-level data, they themselves aren't items. Nor do they appear to have any way of specifying how many items are actually held. 

They have holdings statements for things like journals, but that's free-form plain text. Not really something that can be used programmatically.

--

So I don't think you can actually reliably create items from a holdings record, as you don't know how many items to create, nor do you have anything like a barcode to identify those items.

If an item record in Koha had a link to a holdings record (such as in a 952$H subfield), you could use the holdings record to update select subfields in multiple existing Koha item records though. But that's about it.
Comment 62 David Cook 2015-12-09 05:48:07 UTC
It's worth mentioning that my above comment regarding holdings is probably only relevant for some systems like Voyager, and is not relevant for all holdings records...

Those comments are based on the following pages:

http://library.princeton.edu/departments/tsd/katmandu/voyager/holitmono.html
http://library.princeton.edu/departments/tsd/katmandu/voyager/relink.html
http://library.princeton.edu/departments/tsd/katmandu/voyager/chgloc.html
http://www.library.illinois.edu/cam/training/voyagerscript.html

However, it does appear that there are fields in holdings records for individual items:

876$a is an "Internal item number" (http://www.loc.gov/marc/holdings/hd876878.html). While I haven't seen this in LIBRIS holdings data, it is a field/subfield which exists. 876$p could then be used for the barcode. 

Looking at this example:
852	0#$aTxAM$bStacks$hHD9195.A5$iW5
876	##$aAAH8128-1-1$c$13.75$pA14802137389

852$a = Location (ie Library)
852$b = Sublocation/Collection (Branch)
852$h = Classification part (start of call number)
852$i = Item part (end of call number)

876$a = Internal item number (itemnumber)
876$c = Cost (cost)
876$p = Piece designation (barcode)
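
To make that mapping concrete, here's a rough, untested sketch with MARC::Record, for the simple "one copy, one 852, one 876" case only. The hash keys are just the speculative Koha-side names from the list above, not an agreed mapping.

use strict;
use warnings;
use MARC::Record;

# Extract item-level data from a MARC holdings record (MFHD) for the simple
# "one copy, one 852, one 876" case. Returns a hashref that could later be
# mapped onto a Koha 952/item; the mapping itself is speculative.
sub holdings_to_item_data {
    my ($mfhd) = @_;    # a MARC::Record containing the MFHD

    my $f852 = $mfhd->field('852') or return;
    my $f876 = $mfhd->field('876');   # may be absent, as noted below

    return {
        location    => $f852->subfield('a'),
        sublocation => $f852->subfield('b'),
        callnumber  => join(' ', grep { defined }
                            $f852->subfield('h'), $f852->subfield('i')),
        itemnumber  => $f876 ? $f876->subfield('a') : undef,
        cost        => $f876 ? $f876->subfield('c') : undef,
        barcode     => $f876 ? $f876->subfield('p') : undef,
    };
}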

However... the 876 subfields are really only useful if they're present. If your record just has an 852 without any actual item-level data... you can't very well add/match/update items in Koha. 

Even in the event that you do have 876 data, there are hurdles, as outlined below.

It's not too bad when there is one 852 and one 876 for a single part item: "One copy, one 852 field - Item fields do not need a linking subfield as all apply to the one holdings item. (Holdings are recorded at level 1 or 2 in Leader/17 (Encoding Level).) [See example under subfield $p below]" (http://www.loc.gov/marc/holdings/hd876878.html).

However, "Multiple copies, one or more 852 fields - Subfield $3 is used to link item fields to the appropriate copy in 852 field(s). (Holdings are recorded at level 1 or 2 in Leader/17 (Encoding Level).) [See example under subfield $e below]" (http://www.loc.gov/marc/holdings/hd876878.html). 

But the $3 subfield isn't a coded field. It's free text, which foils any consistent automated linking.

It gets even more complicated for "Multiple physical part holdings item": 

"Holdings in 866-868 (Textual Holdings) fields - Subfield $3 is used to link item fields to the appropriate part specified in fields 866-868. (Holdings are recorded at level 3 or 4 in Leader/17 (Encoding Level).) [See example under subfield $h below]

Holdings in 863-865 (Enumeration and Chronology) fields - Subfield $8 is used to link item fields to the appropriate part specified in fields 863-865. Each part (volume or volumes) for which item level information is included requires a separate 863-865 field. (Holdings are recorded at level 3 or 4 with piece designation in Leader/17 (Encoding Level).) [See example under subfield $l below]"
Comment 63 David Cook 2015-12-09 06:07:06 UTC
Finally, I don't know how much sense it makes harvesting MARC holdings records via OAI-PMH.

I can understand harvesting MARC authority records and MARC bibliographic records, as both are theoretically very universal and interchangeable. You might need to change the system numbers, but that's about it.

However, with holdings records, you're describing something that is local to your particular library. 

In theory, wouldn't you want to be uploading holdings records to a union catalogue, rather than downloading holdings records from it?

I believe that's how it works with the National Library of Australia. We do some automated exports for the National Library of Australia union catalogue. I think some bibliographic details are included for matching purposes, and then we export all the item data so that they can update the holdings in the union catalogue on a regular (I think it's nightly) basis.

--

In the case of LIBRIS, it appears minimal information about "items" is included in a holdings record. Really just the sublocation/collection and the call number.

I haven't seen any indication as to the number of items, their barcodes, or really anything specific about "items". So I think it would be difficult if not impossible to use the data I've seen to add/update items in Koha.

That said, the data in 866 does certainly seem valuable regarding holdings, and I can see the utility in adding that to the local catalogue. 

Part of me wants to add support for MARC holdings records in Koha, although I imagine there would be resistance to that, as it further entrenches our use of MARC. 

The problem with adding OAI-PMH support for holdings is that merging holdings data in to bibliographic records would be tricky. Easy to add it initially, but difficult - if not impossible - to reliably update later. 

But it would be trivial to import the holdings records into a table and link them to an existing bibliographic record.

We could then embed holdings records into bibliographic records at index time (like we already do with item records), so that data from holdings records would be searchable and displayable on the detail page and search results page.

That would make it rather easy to "merge" MARC holdings records into MARC bibliographic records for search/retrieval and display.

It wouldn't help with "items" but at least the holdings data would be in Koha.

We could also consider linking items and holdings records... although I think we'd want to think about the long-term implications of that in a non-MARC environment. It probably would be OK... because "more_subfields_xml" really is filling that gap at the moment, and "items" could just be used for more transactional data while additional data could be offloaded to an XML record (like we do for authorities and bibliographic records... and how we probably would with RDF records anyway).

Mind you, it would be more complex with holdings, because holdings records can still refer to multiple items... the idea of linking "items" and "holdings" still seems like it's very specialized and ILS-specific which is no good at all...

--

Anyway, hopefully these 3 epic-length comments help further the conversation around harvesting MARC holdings records via OAI-PMH.

I think it's a very difficult thing... and provides us with challenges not necessarily in terms of "logic" but rather with "data", links, and relationships.
Comment 64 David Cook 2016-02-05 06:07:48 UTC Comment hidden (obsolete)
Comment 65 David Cook 2016-02-05 06:11:33 UTC
Hi all:

Here's a rough draft of what I've been working on.

It still needs some fixes, some improvements, and quite a bit of polishing. I'm hoping to do most of that over the next couple of weeks, but I thought I'd post this now as my wife is 35 weeks pregnant, and you never know when babies will be born...

In terms of bare bones functionality, it should all be there. Feel free to add comments about things you'd like to see (like parameters or names for Active Icarus Tasks), and I'll hopefully incorporate that feedback into my finishing work.
Comment 66 David Cook 2016-02-16 06:26:14 UTC Comment hidden (obsolete)
Comment 67 David Cook 2016-02-16 06:33:29 UTC
Here's an updated version, which should actually work.

Updates include:
- Fixed atomic update and kohastructure.sql to actually include another needed table
- Added link via admin-menu.inc and admin-home.tt
- Added params for active tasks, and changed param display in saved_tasks.pl
- Improved the logging, so the output from icarusd.pl will be a lot easier to read and hopefully more useful
- Lots of internal changes which hopefully you won't notice on the user side

---

Things I desperately want to update next:

- Add OAI-PMH deletion support for svc/import_oai
- Improve data validation for the plugins (especially web ui side)
- Change Makefile.PL so you don't have to manually edit koha-conf.xml for Icarus config
Comment 68 David Cook 2016-04-04 00:21:49 UTC Comment hidden (obsolete)
Comment 69 Mirko Tietgen 2016-04-07 22:02:43 UTC
Hi David,

(In reply to David Cook from comment #68)

> 6) In Koha, create a record matching rule:
>     Code = OAI
>     Match threshold = 100
>     Record type = Bibliographic
>     Search index = control-number
>     Score = 100
>     Tag = 001
>     Search index = id-other,st-urx
>     Score = 100
>     Tag = 024
>     Subfields = a
>     Normalization rule = raw

does that mean "create two match points with
Match point 1

Search index = control-number
Score = 100
Tag = 001


Match point 2

Search index = id-other,st-urx
Score = 100
Tag = 024
Subfields = a
Normalization rule = raw

"?
Comment 70 Mirko Tietgen 2016-04-08 13:53:08 UTC
Hi David,

nice to see this move forward! I gave it a first ride and here are a few comments/questions:

- Would it make sense for you to use Catmandu::OAI instead of IO::*? We will use Catmandu for Elasticsearch, it would probably make sense here too?

- Task type is set on a separate page for add and edit of tasks. It should be on the same page as the rest of the config.

- I can add a task, I can start a task -- but I can't stop a task, just remove.

- Task numbering always starts at 2. There is no task 1?

- Tasks should be sorted by task number

- I can send a single task to Icarus multiple times. Is that intended?

- "Send to Icarus" leads to empty page if Icarus is not running

- Permissions for the OAI user? Even with superlibrarian I get several auth errors, and I would not want to give it superlibrarian permissions anyway.

- Log should display something more useful than [server 1], like name or IP

- Log shows lots of "Connection n started.1" and "Connection n failed or ended" but there is no hint what that actually means. It does not seem to be relevant for fulfilling the task

- Enqueue needs an identifier to work. What if I want to get more than one record? Using just the prefix does not work.

- Enqueue seems to work so far, I downloaded a record.

- Dequeue does not work for me. Several auth errors, then a working auth. A record is created, but it only contains a (broken) leader.

- Have not tested matching yet.
Comment 71 Mirko Tietgen 2016-04-08 15:10:36 UTC
(In reply to Mirko Tietgen from comment #70)

> - Dequeue does not work for me. Several auth errors, then a working auth. A
> record is created, but it only contains a (broken) leader.

I had metadataPrefix set to oai_dc. I successfully imported a record in Koha after switching to marcxml. Very nice!
Comment 72 David Cook 2016-04-10 23:30:15 UTC
(In reply to Mirko Tietgen from comment #69)
> Hi David,
> 
> does that mean "create two match points with
> Match point 1
> 
> Search index = control-number
> Score = 100
> Tag = 001
> 
> 
> Match point 2
> 
> Search index = id-other,st-urx
> Score = 100
> Tag = 024
> Subfields = a
> Normalization rule = raw
> 
> "?

Yep, that's right. I should have been clearer.
Comment 73 David Cook 2016-04-10 23:32:00 UTC
(In reply to Mirko Tietgen from comment #71)
> I had metadataPrefix set to oai_dc. I successfully imported a record in Koha
> after switching to marcxml. Very nice!

The default XSLT only handles OAI-PMH records containing MARCXML, but you can write your own XSLT to handle oai_dc to MARCXML and then it would work, although you need to make sure to include the OAI identifier in the MARCXML for future updates/deletions (by default, that's in 024$a).
Comment 74 David Cook 2016-04-10 23:48:41 UTC
(In reply to Mirko Tietgen from comment #70)
> - Would it make sense for you to use Catmandu::OAI instead of IO::*? We will
> use Catmandu for Elasticsearch, it would probably make sense here too?
> 

I'd be open to that. HTTP::OAI is already a dependency of Koha, so I used that, but I think it's a bit rubbish, so I would be happy to use something else like Catmandu::OAI I suspect.

> - Task type is set on separate page for add and edit of tasks. Should be on
> the same page as the rest of the config.
> 

It's on a separate page, as changing it will change the template for the rest of the config. I suppose this could be done with AJAX to make it prettier, but at the moment I'm going for function over everything else. 

> - I can add a task, I can start a task -- but I can't stop a task, just
> remove.
> 

Yeah, that's on my TODO list. Ideally, I'd add "pause" and "stop". Maybe even "edit", which would require a "stop" first. 

> - Task numbering always starts at 2. There is no task 1?
> 

If it's active tasks, that's an artifact of POE. I suppose that could be changed, although I never thought order would matter much.

> - Tasks should be sorted by task number
> 

Honestly, I've thought about doing away with task numbers, and using task names instead, as that would probably be more useful. 

> - I can send a single task to Icarus multiple times. Is that intended?
> 

Mmm, I know that it does this, but it's unintentional. I have thought about adding safeguards, but I've been focusing on core functionality first. 

> - "Send to Icarus" leads to empty page if Icarus is not running
> 

Ahhh, I'd heard of the blank page, but not the cause. Cool. I'll look at fixing that. 

> - Permissions for the OAI user? Even with superlibrarian I get several auth
> errors, and I would not want to give it superlibrarian permissions anyway.
> 

What do you mean by "OAI user"? Do you mean the user for the /svc/import_oai API? Those auth errors are misleading. Like /svc/import_bib, it tries to do the import first before doing any auth, so you'll get a 403 error (and it'll probably show up twice because of bad logging). On the second try, it should work. I think all you need is "catalogue edit" permissions for that user (like with /svc/import_bib). 

> - Log should display something more useful than [server 1], like name or IP
> 

It wouldn't make much difference either way: the name or IP would just be localhost/127.0.0.1. [server 1] refers to the Icarus listener.

> - Log shows lots of "Connection n started.1" and "Connection n failed or
> ended" but there is no hint what that actually means. It does not seem to be
> relevant for fulfilling the task
> 

True. It's mostly for debugging. I'll be removing a lot of logging before I'm ready for a sign off. 

> - Enqueue needs an identifier to work. What if I want to get more than one
> record? Using just the prefix does not work.
> 

It only needs an identifier if you're using the GetRecord verb; you don't need one for ListRecords. I could use JavaScript to make that easier in the UI. As mentioned above, I'm still at a barebones level with this feature.
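
For what it's worth, the difference is easy to see with HTTP::OAI (which Koha already depends on); the base URL and identifier below are just examples:

use Modern::Perl;
use HTTP::OAI;

my $harvester = HTTP::OAI::Harvester->new(
    baseURL => 'http://repository.example.org/oai/request',
);

# GetRecord needs an identifier...
my $one = $harvester->GetRecord(
    identifier     => 'oai:repository.example.org:123',
    metadataPrefix => 'marcxml',
);

# ...but ListRecords only needs a metadataPrefix (plus optional from/until/set),
# and the harvester follows resumptionTokens for you.
my $list = $harvester->ListRecords( metadataPrefix => 'marcxml' );
while ( my $record = $list->next ) {
    say $record->identifier;
}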

> - Enqueue seems to work so far, I downloaded a record.
> 
> - Dequeue does not work for me. Several auth errors, then a working auth. A
> record is created, but it only contains a (broken) leader.
> 

As above, the auth errors are the same as you'd get with /svc/import_bib. I have ideas about how to improve that, but they're more optimisations than anything. 

Is this when you were using oai_dc? I'm amazed that anything was created... that's probably a bug. I would've expected it to fail...

> - Have not tested matching yet.

That'll probably be the hardest/most interesting bit :p.

For that to work correctly, you'll need to have applied the other Bugzilla dependencies and have Zebra indexing rapidly. By default, the Debian packages are probably too slow, as I believe they only process updates every 5 minutes. We update Zebra every 5 seconds, so I haven't noticed any matching problems to date when everything is configured correctly...

---

Thanks for the feedback, Mirko! I have another update that's almost ready to go out. Just juggling a couple of projects atm.
Comment 75 David Cook 2016-04-11 07:52:29 UTC
More progress today, but hoping to make more tomorrow. Depending on my productivity tomorrow, I'm hoping to do another upload here of a more up-to-date version which fixes some of the problems already identified and removes a few of my TODO list items.
Comment 76 David Cook 2016-04-15 05:51:45 UTC
1) Active Icarus tasks are now sorted by Task id
2) Fixed "Send to Icarus" blank page when Icarus offline
3) Changed "Enqueue" and "Dequeue" to "Download" and "Upload" respectively, to make it clearer what the tasks do.
4) Uploading a Dublin Core record will no longer result in an empty MARC record. It just won't create a new record at all.
--
5) There's now a tool at "tools/manage-oai-import.pl" which allows you to manually fix deletion errors when a record is deleted upstream but can't be deleted automatically in Koha due to items, holds, etc.
6) Updated koha-conf.xml, rewrite-config.PL, and Makefile.PL, so now you can use the standard install/upgrade process to configure Icarus without manual changes to the file.
7) Some minor fixes to the logging readability
8) "deleted" OAI-PMH records should delete records in Koha, so long as there are no normal obstacles to a user deleting the record in Koha.

--

TODO: 
a) Add paging to the "tools/manage-oai-imports" interface
b) Improve handling of import errors... 
c) Update kohastructure.sql and other structural files
d) Need to remove unhelpful logging
e) Need to improve the data validation when adding/updating saved tasks
f) Complete the navigation menus

Those are my current priorities. I have a list of other things I could do, but I'll prioritise tester feedback over that list.

--

It's a bit of a mammoth patch... in the end, it might be necessary to split it into an "Icarus patch" and an "OAI-PMH patch".

Icarus itself is a background job manager, and the saved_task.pl page allows user interaction with it. The only existing tasks for it right now are OAI-PMH tasks, but the tasks can do anything. We could replace the existing "Task scheduler" report emailer thing with an Icarus plugin.

The OAI-PMH patch is a web service and some hacking of Koha's existing MARC import system.

While both are integral to Andreas's project, these two sets of functionality can actually be tested independently.

I'm mostly interested in feedback about the OAI-PMH side of things. Icarus itself is a bit rough, but it can be refined over time, or even replaced with a different job manager. I'm mostly interested in how people want records to be imported via OAI-PMH.
Comment 77 David Cook 2016-04-15 07:02:03 UTC Comment hidden (obsolete)
Comment 78 David Cook 2016-04-29 07:32:15 UTC Comment hidden (obsolete)
Comment 79 David Cook 2016-04-29 07:34:44 UTC
This latest patch fixes a typo in the atomic update, rebases the patches, adds a "Start Icarus" function to the Icarus dashboard (which has a few caveats vis-a-vis Apache permissions), and adds more error handling.

I still have to finish the OAI import error resolution functionality... while it's an edge case, it's something that needs to be done. That hopefully won't impact any of you though!

Need to improve tools/manage-oai-imports as well (add it to navigation, add paging, etc). 

Also need to improve task data validation...

--

But in any case, this should be in a testable state once again!
Comment 80 David Cook 2016-05-10 07:18:41 UTC
Fixed the manual error resolution process, so that should be good.

Need to add tools/manage-oai-imports to navigation and give it paging, but that should be trivial.

Need to improve error handling for Koha::Icarus::Task::Upload::Biblio, and need to add more data validation to Koha::Icarus::Task::Upload and Koha::Icarus::Task::Download... 

I'm out of time this week, but I'll resume work on these next week.
Comment 81 David Cook 2016-05-12 23:52:50 UTC
Just a reminder that koha-gitify can't be used for testing changes to Zebra configuration files or koha-conf.xml, and this patch makes changes to both.

You'll need to do a source install. It doesn't matter which type you do, although I personally find a "dev" install to be the nicest. Regardless of what you choose, you'll need to use "make" and "make upgrade" to update your Zebra configuration files and koha-conf.xml.
Comment 82 David Cook 2016-05-16 05:45:10 UTC
I keep wanting to split this bug into 2 separate bug reports, but I don't want to lose this CC list... so for now I'm going to split it into 2 separate patches, which can technically be tested separately.

The actual "OAI-PMH Harvesting Client" will be part of the Icarus patch. It's implemented as a plugin for the Icarus job server. One plugin downloads (ie "harvests") records, and one plugin uploads them to a Koha HTTP API.

The second patch will include the OAI-PMH import HTTP API for Koha. To test it, you can use cURL to post to the API, you can write your own OAI-PMH client and point it at the API, or whatever.
Comment 83 David Cook 2016-05-16 06:13:02 UTC Comment hidden (obsolete)
Comment 84 David Cook 2016-05-16 06:13:50 UTC Comment hidden (obsolete)
Comment 85 David Cook 2016-05-16 06:14:29 UTC Comment hidden (obsolete)
Comment 86 David Cook 2016-05-16 07:12:40 UTC Comment hidden (obsolete)
Comment 87 David Cook 2016-05-17 05:25:21 UTC Comment hidden (obsolete)
Comment 88 David Cook 2016-05-17 05:27:24 UTC Comment hidden (obsolete)
Comment 89 David Cook 2016-05-17 06:52:57 UTC Comment hidden (obsolete)
Comment 90 David Cook 2016-05-17 07:02:56 UTC Comment hidden (obsolete)
Comment 91 David Cook 2016-05-23 03:01:07 UTC Comment hidden (obsolete)
Comment 92 David Cook 2016-07-11 05:41:23 UTC
Created attachment 53258 [details] [review]
Bug 10662 - kohastructure.sql changes
Comment 93 David Cook 2016-07-11 05:42:22 UTC Comment hidden (obsolete)
Comment 94 David Cook 2016-07-11 05:42:52 UTC Comment hidden (obsolete)
Comment 95 David Cook 2016-07-13 06:15:53 UTC
Created attachment 53361 [details] [review]
Bug 10662 - Create svc/import_oai API
Comment 96 David Cook 2016-07-13 06:16:22 UTC
Created attachment 53362 [details] [review]
Bug 10662 - Icarus job server and Koha UI for it

NOTE: You cannot use koha-gitify to test changes to koha-conf.xml
NOTE: Check koha_perl_deps.pl; you may need to install POE.pm

<To be updated>
Comment 97 Mark Tompsett 2016-07-25 02:40:39 UTC
PerlDependencies issue.
Comment 98 David Cook 2016-08-01 23:13:58 UTC
(In reply to M. Tompsett from comment #97)
> PerlDependencies issue.

Could you elaborate?

I've added the Perl dependency.

https://bugs.koha-community.org/bugzilla3/page.cgi?id=splinter.html&bug=10662&attachment=53362
Comment 99 Mark Tompsett 2016-08-02 08:35:13 UTC
(In reply to David Cook from comment #98)
> (In reply to M. Tompsett from comment #97)
> > PerlDependencies issue.
> 
> Could you elaborate?
> 
> I've added the Perl dependency.
> 
> https://bugs.koha-community.org/bugzilla3/page.cgi?id=splinter.
> html&bug=10662&attachment=53362

I'm going to guess off the top of my head without checking... But I usually state the file where the problem is. That's why it is marked as "Patch does not apply".
Comment 100 David Cook 2016-08-15 00:34:25 UTC
(In reply to M. Tompsett from comment #99)
> I'm going to guess off the top of my head without checking... But I usually
> state the file where the problem is. That's why it is marked as "Patch does
> not apply".

How very concise.

In hindsight, "PerlDependencies issue." and "Patch does not apply" do imply that the patch didn't apply because of an issue with the PerlDependencies.pm changes.

Thanks for that. I'll look into it.
Comment 101 Sebastian Hierl 2016-09-30 09:03:59 UTC
Just a brief note to confirm that we are also very interested in harvesting records via OAI-PMH. This is mainly to update Koha with records from our ArchivesSpace implementation, but also beyond.

Sebastian
Comment 102 David Cook 2016-10-11 06:10:22 UTC
(In reply to Sebastian Hierl from comment #101)
> Just a brief note to confirm that we are also very interested in harvesting
> records via OAI-PMH. This mainly to update Koha with records from our
> ArchivesSpace implementation, but also beyond.  
> 
> Sebastian

Great, Sebastian!

I'm hoping to get some updated patches up this week for people to test!
Comment 103 David Cook 2016-10-14 05:57:43 UTC
Ran out of time this week, but I'm working on improving the test coverage, and adding the ability to parse an OAI-PMH response as a stream rather than as a DOM tree.

This is a requirement for the LIBRIS OAI-PMH server, since it doesn't use resumptionTokens, which do appear to be optional according to the spec. In any case, everyone will benefit: parsing the stream should be faster, as a child process downloads the content while the parent process parses the XML. Also, if your OAI-PMH server sends one long stream rather than chunks with resumptionTokens, you avoid the overhead of multiple HTTP requests.
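
The rough idea, sketched with XML::LibXML::Reader (this is just an illustration, not the harvester's actual code): pull each OAI-PMH <record> out of the response one at a time instead of building a DOM for the whole thing.

use Modern::Perl;
use XML::LibXML::Reader;

my $reader = XML::LibXML::Reader->new( location => $ARGV[0] )
    or die "Cannot read $ARGV[0]";

my $count = 0;
while ( $reader->nextElement( 'record', 'http://www.openarchives.org/OAI/2.0/' ) ) {
    my $record_xml = $reader->readOuterXml;   # one <record> element as a string
    $count++;
    # ...hand $record_xml to the import step here...
}
say "Parsed $count records";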
Comment 104 David Cook 2016-11-28 06:01:42 UTC
There's a lot of code in these changes, so I'm thinking once again of breaking up this enhancement into a couple of Bugzilla issues.

At the moment, the enhancement can be thought of as an import API and an asynchronous task queue. 

--

The import API takes a few simple parameters and an OAI-PMH response, which has been downloaded from an OAI-PMH repository, and imports the records from the response into Koha.

That can be tested and pushed independent of the asynchronous task queue.

The asynchronous task queue is the largest part of the change, and I've written it myself using the POE framework. There are ready-made third-party periodic asynchronous task queues out there, such as Celery (written in Python), which we could use instead, but my approach doesn't add any dependencies beyond the POE Perl modules. Plus, if we did use something like Celery, we'd have to write the task code for it anyway, typically in Python.

The task queue (ie Icarus) works pretty well and doesn't touch Koha at all itself. I've included the code in the Koha:: namespace, but I could easily put it into its own namespace. It has a lot of unit tests, and I still need to add a lot more. 

As per Stockholm University Library's requirements, Icarus also has an interface in Koha, so that's where Icarus does touch Koha, although only indirectly. They communicate using JSON over a Unix socket.
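
To give a feel for that protocol, an exchange might look like this (illustrative only: the socket path and message keys are made up, and the real protocol is defined in the patch):

use Modern::Perl;
use IO::Socket::UNIX;
use JSON;

my $socket = IO::Socket::UNIX->new(
    Type => SOCK_STREAM(),
    Peer => '/var/run/koha/icarus.sock',   # hypothetical path
) or die "Cannot connect to Icarus: $!";

# Send one JSON message per line and read back a JSON reply.
print {$socket} encode_json( { command => 'status' } ), "\n";
my $reply = <$socket>;
my $data  = decode_json($reply);
say 'Icarus replied: ' . encode_json($data);
close $socket;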
Comment 105 David Cook 2016-11-29 04:45:08 UTC
I've added a new report for the Icarus task scheduler with bug #17690.

At this point, I figure it's more efficient to test and push the OAI-PMH import API and then focus on the task scheduler. 

If the API is pushed to Koha, it also means that anyone could write a harvester and POST records to the API. You might write a cronjob which is run nightly. Or you might write a daemon that you control from the commandline.

The idea with Icarus is that there will be a user interface in Koha that librarians can use to schedule OAI-PMH tasks to run. 

For performance, I've split the task into two parts. One task downloads records, while the other task uploads them. This lets you work in parallel, which means you upload records as they're downloaded rather than waiting for all the downloads to complete before uploading. (Another idea I had was to set up an import daemon for Koha, which could be used in lots of different ways... whether it's from the web or the command line.)
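
A toy POE sketch (nothing from the patch) of why the split helps: each downloaded "record" is queued for upload immediately, so uploads interleave with downloads instead of waiting for the whole harvest to finish.

use Modern::Perl;
use POE;

POE::Session->create(
    inline_states => {
        _start   => sub { $_[KERNEL]->yield( download => 1 ) },
        download => sub {
            my ( $kernel, $n ) = @_[ KERNEL, ARG0 ];
            say "downloaded record $n";
            $kernel->yield( upload => $n );                  # hand off right away
            $kernel->yield( download => $n + 1 ) if $n < 5;  # pretend harvest
        },
        upload => sub {
            my ( $kernel, $n ) = @_[ KERNEL, ARG0 ];
            say "uploaded record $n";
        },
    },
);

POE::Kernel->run();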
Comment 106 David Cook 2016-11-29 04:54:28 UTC
(In reply to Galen Charlton from comment #1)
> (In reply to David Cook from comment #0)
> > Currently, Koha only acts as a OAI-PMH server, I propose to add a harvesting
> > client as well (likely using the HTTP::OAI::Harvester module), so that Koha
> > can ingest records from other data sources (such as digital repositories
> > like Dspace).
> 
> Interesting idea.
> 
> > I've only started reading about it but despite initial reservations about
> > resumption tokens, I think the hardest part will not be with the retrieval
> > of records so much as the parsing of those records into MARC.
> 
> This may be less of a problem in the long run with my plans to allow Koha to
> support multiple metadata formats (although even once that's available, you
> may still want the harvester to be able to convert the source metadata into
> something else).
>  
> One thing I'd suggest is that the harvester keep a copy of the original
> metadata record in a database table; that would be more flexible than
> immediately converting it to MARC and discarding the source data.

Recently, I've been wondering if we really need an API just for OAI-PMH records, but that thought always brings me back to Galen's comment from 2013. 

I figure it's worthwhile having this API because it stores the entire OAI-PMH container record. If your metadata transform is bad, you won't get a MARCXML record in Koha, but you'll be able to re-try the transformation since the OAI-PMH container record is stored in the database. Plus, it shows an import history for records over time using the OAI-PMH identifier.

I would like to link OAI-PMH identifiers more closely to MARCXML records, but it's a bit problematic. At the moment, I store the identifier in 024$a, but that's a fairly generic field. It would be nice to store it in the database, but then it might be lost during a record merge or some other change of biblionumber. In the long run, I'm planning to store the OAI-PMH identifier in RDF, but that's still some way off.
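
For illustration, stashing the identifier on the imported record looks something like this with MARC::Record (the indicators and $2 value are just one possible convention, not necessarily what the patch does):

use Modern::Perl;
use MARC::Record;
use MARC::Field;

my $record         = MARC::Record->new();
my $oai_identifier = 'oai:repository.example.org:123';

$record->append_fields(
    MARC::Field->new( '245', '0', '0', a => 'Example title' ),
    MARC::Field->new( '024', '7', ' ', a => $oai_identifier, 2 => 'uri' ),
);

say $record->subfield( '024', 'a' );   # oai:repository.example.org:123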

Of course, OAI-PMH identifiers aren't foolproof either. In theory, they should be unique, but there's no guarantee. 

Anyway, I think it can't hurt to store OAI-PMH records in Koha. There's a cleanup_database script which clears out old records so the database table doesn't grow too large. You could lose the original OAI-PMH record that way, but Koha should keep the data long enough for you to fix transformation problems and the like.
Comment 107 David Cook 2016-11-29 06:24:42 UTC
I had hoped bug 17318 would fix the normalization problems with matching, but it looks like I might have to revive bug 15541 since the search query normalization mangles URLs.

Since most OAI-PMH identifiers are URLs, this mangling makes it impossible to match them.
Comment 108 David Cook 2016-11-29 23:22:02 UTC
I've been thinking more about matching, and how using an OAI-PMH identifier really isn't enough, especially as the identifier is only guaranteed to be unique within its repository. You could have two separate repositories with the exact same identifier, so you need to check the OAI-PMH repository URL as well.

https://www.openarchives.org/OAI/openarchivesprotocol.html#UniqueIdentifier

There are a few ways of verifying that two records describe the same upstream record, but it involves some analysis. 

https://www.openarchives.org/OAI/2.0/guidelines-aggregator.htm#Identifiers

And that analysis gets tricky when you want to match against MARCXML records using Zebra, especially since different frameworks may or may not contain the fields that you store OAI-PMH data in for matching purposes.

--

I'm thinking of maybe making a sort of tiered search... where we search the database for OAI-PMH details... and if none are found then we use the Zebra search. However, that's problematic, as it introduces inconsistencies between import methods.

--

To date, we think about importing only in terms of MARCXML... and that makes some sense. So with OAI-PMH, surely we can still just think of it in terms of MARCXML. Except that the harvested record isn't necessarily the same as the imported record. 

Although maybe it should be. 

Maybe instead of using OAI-specific details, we should require the use of the 035 field (http://www.loc.gov/marc/bibliographic/bd035.html).

I don't know how realistic that is though. How many organisations actually have registered MARC Organisation codes?

Maybe that's the prerogative of the Koha user rather than the Koha system though. 

It looks like VuFind uses the MARC 001 for matching (https://github.com/vufind-org/vufind/blob/master/import/marc.properties), although that's obviously highly problematic. It has some facility for adding a prefix to the 001 for uniqueness, but that's a hack.

A sample harvest of DSpace's oai_dc into VuFind's Solr indexes seems to use the OAI-PMH identifier for matching (https://vufind.org/wiki/indexing:dspace), but as I noted above that's also technically problematic, as you may have the same identifier in multiple repositories. In theory it shouldn't happen... but it could.

It looks like DSpace uses the OAI-PMH identifier (stored in the database, it seems) for matching as well (https://github.com/DSpace/DSpace/blob/master/dspace-api/src/main/java/org/dspace/harvest/OAIHarvester.java#L485). As noted above, this has issues if the identifier isn't unique outside the repository.

DSpace has a sanity check to make sure the item hasn't been deleted before trying to do an update... I've been thinking I could match an OAI-PMH identifier to a biblionumber using a foreign key with RESTRICT, so that you can't delete a bib record without unlinking it from its OAI-PMH provenance record.

So VuFind and DSpace don't have the most sophisticated of matchers and they both have problems which I'd like to avoid.

--

I recall Leif suggesting that we export some data to MySQL tables (e.g. 001, 003, 020, 022, 035), but that's not without its difficulties, and it keeps us locked into MARC as well.

--

I also remember Mirko mentioning Catmandu::OAI, but it's just a layer over HTTP::OAI, and HTTP::OAI is flawed in a few ways and won't meet Stockholm University Library's requirement of an OAI-PMH harvester that parses an XML stream.

In any case... downloading OAI-PMH records is the easy part. The hard part is what to do with them once we're uploading them to Koha...

--

For a truly robust solution, I think we'd need to overhaul Koha's importing facilities, and I'm not sure of the best way to do that.

We need to be able to link any number of arbitrary identifiers with a particular Koha "record", and we need to be able to query those identifiers in a way that allows for rapid matching.

To be honest, this is something that Linked Data seems to be good at. You have a particular subject, and then you can link data to it arbitrarily. Then for your query, you could look for "?subject <http://koha/hasOAIPMHid> <oai:koha:5000>" or "?subject <http://koha/marc/controlNumber> '666'"
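
Something like the following, sketched with RDF::Trine and RDF::Query (the predicate URI here is a placeholder like the ones above, not anything Koha actually defines):

use Modern::Perl;
use RDF::Trine;
use RDF::Query;

my $model = RDF::Trine::Model->temporary_model;
# ...triples describing harvested records would be loaded into $model here...

my $query = RDF::Query->new(<<'SPARQL');
SELECT ?subject WHERE {
    ?subject <http://koha.example.org/hasOAIPMHid> <oai:koha:5000> .
}
SPARQL

my $iterator = $query->execute($model);
while ( my $row = $iterator->next ) {
    say $row->{subject}->as_string;
}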

Of course, ElasticSearch would work just as well, although you'd still want to save that data somewhere as a source of truth. We don't tend to use things like Zebra/Solr/ElasticSearch as the sole repository of data, since we want to be able to refresh them from a source of truth.

I suppose both a triplestore and an RDBMS have the same problem. You can link an identifier to a record, but what if you lose that link? You wind up with duplicates. I suppose the best you can do is try to prevent people from destroying links by accident.
Comment 109 David Cook 2016-11-29 23:52:30 UTC
I think part of the problem is that I don't want to use Koha's batch import system. I want to handle importing individual records belonging to arbitrary metadata schemas myself. At the moment, I'm abusing the batch system to do what I want.

I want to filter the data, I want to check database-level OAI-PMH data, then I want to check Zebra-level MARCXML data, and the batch import isn't really set up for that. And I think it might be prohibitively difficult to update it to handle that.

I could decompose C4::ImportBatch::BatchCommitRecords into individual add/update/delete functionality. That would mostly work, although then I would lose the history that the batches afford you. But then there's bug 14367 which would make up for that in theory.

I suppose this might all just be idealistic thinking on my part, and perhaps outside the scope of this enhancement.

I already have code that mostly works using just the OAI-PMH identifier. It's a flawed concept, but it's the same one used by VuFind and DSpace, and they haven't fallen over yet. 

I have a few fixes I need to make to C4::Matcher and C4::Search for the matching to work as expected though. 

I think perhaps what I have so far will just have to be good enough for now. It's far from ideal, but it's functional. 

I'm tempted to add an "originDescription" as per https://www.openarchives.org/OAI/2.0/guidelines-provenance.htm in each harvested record so that we preserve as much metadata as possible during the OAI-PMH harvest, although I think that's an inappropriate use of that element, as that's supposed to be created at dissemination time rather than harvest time.

Perhaps I'll just capture all relevant data and store it in the RDBMS, and we can use it at a later time if necessary.
Comment 110 David Cook 2016-12-01 01:52:34 UTC
Adding 15541 back as a dependency, since it's required for matching URIs.

OAI-PMH identifiers must be URIs, so that fix is necessary for matching OAI-PMH identifiers.

"The format of the unique identifier must correspond to that of the URI (Uniform Resource Identifier) syntax." http://www.openarchives.org/OAI/openarchivesprotocol.html#UniqueIdentifier
Comment 111 Andreas Hedström Mace 2016-12-01 15:48:53 UTC
(In reply to David Cook from comment #109)
> I suppose this might all just be idealistic thinking on my part, and perhaps
> outside the scope of this enhancement.
> 
> I already have code that mostly works using just the OAI-PMH identifier.
> It's a flawed concept, but it's the same one used by VuFind and DSpace, and
> they haven't fallen over yet. 
> 
> I have a few fixes I need to make to C4::Matcher and C4::Search for the
> matching to work as expected though. 
> 
> I think perhaps what I have so far will just have to be good enough for now.
> It's far from ideal, but it's functional. 

Yes, I definitely think this will be good enough for now!!! Getting better matching and/or storing the incoming data sounds to me like future enhancements!
Comment 112 David Cook 2016-12-02 06:20:06 UTC
(In reply to Andreas Hedström Mace from comment #111)
> Yes, I definitely think this will be good enough for now!!! Getting a better
> matching and/or storing the incoming data to me sounds like future
> enhancements!

I spent some time today on these enhancements as they make the import much faster and more robust*, and I would've needed to do them soon for the RDFXML OAI-PMH downloads anyway**.

I've ditched the import batches and I'm doing adds/updates/deletes more directly. If we need to track changes to bibliographic metadata records in Koha, I think it would make more sense to look at new functionality than relying on the existing batch system which has issues.

In any case, I'll be working more on this early next week, and hopefully posting the API code here next week.

*most matching will be done based on OAI-PMH identifier URI and repository URI in the database, so we won't need to worry about Zebra issues. However, in the event that there is no matching OAI-PMH identifier and repository URI, the API can be given an optional matcher code, and Koha's Zebra-based matcher will be used to find a match. This 

**The RDFXML OAI-PMH downloads will make use of this database-based matching, although if there's no OAI-PMH identifier URI and repository URI for that downloaded record, it'll be an error state, since only MARCXML can be used with the Zebra-based matcher. We may have to talk more about that at some point, although I suppose it's out of the scope of this bug report anyway.
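
In rough Perl, the two-tier matching from the first footnote looks something like this (table and column names are invented for illustration; the real schema and matcher plumbing live in the patches):

use Modern::Perl;
use C4::Context;

sub find_biblionumber_for_oai {
    my ( $repository_uri, $oai_identifier ) = @_;

    # Tier 1: exact lookup on repository URI + OAI identifier in the database.
    my ($biblionumber) = C4::Context->dbh->selectrow_array(
        q{SELECT biblionumber
            FROM oai_harvester_biblios
           WHERE repository = ? AND identifier = ?},
        undef, $repository_uri, $oai_identifier,
    );
    return $biblionumber;

    # Tier 2 (not shown): if nothing matched and a matcher code was supplied,
    # fall back to C4::Matcher's Zebra-based matching against the MARCXML.
}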
Comment 113 Eugene Espinoza 2017-03-08 01:20:46 UTC
Hi David! Any update on this? I'm interested on this feature. Thanks!
Comment 114 David Cook 2017-03-13 01:44:10 UTC
(In reply to Eugene Espinoza from comment #113)
> Hi David! Any update on this? I'm interested on this feature. Thanks!

Still working on it but I hope to have something posted soon. Glad to hear you're interested!
Comment 115 Sebastian Hierl 2017-03-13 07:53:44 UTC
We are interested as well and I trust that there are many more Koha libraries who would like to update their data via OAI-PMH.  Thank you, Sebastian
Comment 116 David Cook 2017-03-14 23:27:25 UTC
(In reply to Sebastian Hierl from comment #115)
> We are interested as well and I trust that there are many more Koha
> libraries who would like to update their data via OAI-PMH.  Thank you,
> Sebastian

Glad to hear it! I hope the end result works well for everyone!
Comment 117 David Cook 2017-06-20 01:20:07 UTC
Created attachment 64444 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.
Comment 118 David Cook 2017-06-20 01:20:36 UTC
Created attachment 64445 [details] [review]
Bug 10662 - Remove workaround for pre-17710 behaviour
Comment 119 David Cook 2017-06-20 01:20:45 UTC
Created attachment 64446 [details] [review]
Bug 10662 - Modify OAI-PMH harvester to import RDFXML

Typically, the OAI-PMH harvester only imports MARCXML, but this
patch allows you to also import RDFXML.

This functionality was requested by Stockholm University Library,
and importing RDFXML requires a parallel import of MARCXML if you
want to be able to use the RDFXML. That is to say, you can download
and import RDFXML using this patch, but it will probably only be
useful if you're also importing MARCXML which can be linked to the RDFXML
using the OAI-PMH repository and identifier as a link. This might change
in the future if Koha moves away from MARCXML as the central required
metadata format.
Comment 120 David Cook 2017-06-20 01:46:14 UTC
Hey everyone, 

Here's the latest release of the OAI-PMH harvester (along with code for harvesting RDFXML as well as MARCXML via OAI-PMH, as requested and sponsored by Stockholm University Library).

I think this will be the last major iteration. After this, I'll work on bugs that people find in testing, and do whatever is necessary to work it through the QA process, but in terms of design I think this is mostly it. 

__TEST PLAN__
0a) Apply bugs 18585, 18586, 18713
0b) Upgrade your git dev system by using "make" and "make upgrade". I haven't tried this on a gitified system, but I doubt it will work because of the changes to koha-conf.xml and the added oai-pmh-harvester.yaml file (as well as the added triplestore.yaml file from 18585). (Note that I have included Debian scripts. I haven't tried building a DEB package yet, but you could try building a package from your Git and use that for testing.)
0c) If you want to test the RDF functionality, you'll need a triplestore (preferably Apache Jena Fuseki) and to update triplestore.yaml with the details for the SPARQL endpoint.

1) On the command line, run KOHA_CONF=/path/to/etc/koha-conf.xml perl misc/harvesterd.pl
OPTIONALLY: Add the --log-level DEBUG option for verbose debugging logging
2) Go to /cgi-bin/koha/tools/tools-home.pl
3) Click on "OAI-PMH harvester" (/cgi-bin/koha/tools/oai-pmh-harvester/dashboard.pl)
4) At the toolbar choose "New request"
5) Give the request a "Name", add the "URL" of an OAI-PMH repository, and fill in the OAI-PMH parameters as desired/required. Everything else is probably fine as a default, although if you want your request to run periodically, change "Interval (seconds)" to something you think is reasonable.
6) Click "Test parameters" to make sure your inputs are all valid. 
7) Click "Save" (note that you can save without validating your inputs for works in progress)
8) Click "Actions" and choose "Send"
9) If the send is successful, go to the "Submitted requests" tab
10) Click "Actions" and choose "Start"
11) Go to the "Import history" tab
12) Click "Refresh import history" to update the table and see your results. If nothing is coming up, you may have to go back to the "Submitted requests" tab and double-check the status of your result and if there were any errors.
NOTE: If your request isn't periodic, it will disappear from the "Submitted requests" table once it's finished, so you may have to check your command line screen. If you enabled the DEBUG logging, you should be able to see any problems that arose during the harvest. If you're not getting output on your terminal screen, it may be going to your log file. Double-check oai-pmh-harvester.yaml to see if you have a log file defined.
Comment 121 Josef Moravec 2017-06-20 07:19:01 UTC
Sorry David, after applying all 3 dependencies on current master I get this when applying first patch of this bug:

error: sha1 information is lacking or useless (debian/scripts/koha-create).
error: could not build fake ancestor
Comment 122 Magnus Enger 2017-06-20 09:03:52 UTC
(In reply to Josef Moravec from comment #121)
> Sorry David, after applying all 3 dependencies on current master I get this
> when applying first patch of this bug:
> 
> error: sha1 information is lacking or useless (debian/scripts/koha-create).
> error: could not build fake ancestor

I get more or less the same, but with a slightly different message:

Applying: Bug 10662 - Build OAI-PMH Harvesting Client
fatal: sha1 information is lacking or useless (debian/scripts/koha-create).
Repository lacks necessary blobs to fall back on 3-way merge.
Cannot fall back to three-way merge.
Patch failed at 0001 Bug 10662 - Build OAI-PMH Harvesting Client
Comment 123 David Cook 2017-06-20 23:31:20 UTC
(In reply to Magnus Enger from comment #122)
> (In reply to Josef Moravec from comment #121)
> > Sorry David, after applying all 3 dependencies on current master I get this
> > when applying first patch of this bug:
> > 
> > error: sha1 information is lacking or useless (debian/scripts/koha-create).
> > error: could not build fake ancestor
> 
> I get more or less the same, but with a slightly different message:
> 
> Applying: Bug 10662 - Build OAI-PMH Harvesting Client
> fatal: sha1 information is lacking or useless (debian/scripts/koha-create).
> Repository lacks necessary blobs to fall back on 3-way merge.
> Cannot fall back to three-way merge.
> Patch failed at 0001 Bug 10662 - Build OAI-PMH Harvesting Client

I get a different message yet again:

Using index info to reconstruct a base tree...
M       debian/scripts/koha-create
Falling back to patching base and 3-way merge...
Auto-merging debian/scripts/koha-create
CONFLICT (content): Merge conflict in debian/scripts/koha-create
error: Failed to merge in the changes.
Patch failed at 0001 Bug 10662 - Build OAI-PMH Harvesting Client
The copy of the patch that failed is found in: .git/rebase-apply/patch
When you have resolved this problem run "git bz apply --continue".
If you would prefer to skip this patch, instead run "git bz apply --skip".
To restore the original branch and stop patching run "git bz apply --abort".
Patch left in /tmp/Bug-10662---Build-OAI-PMH-Harvesting-Client-s_47Bl.patch

So it looks like a merge conflict... which is interesting since my dev branch rebases just fine against master. I must've mucked up another commit locally during a rebase. I'll fix this in a sec.
Comment 124 David Cook 2017-06-20 23:44:10 UTC
Created attachment 64479 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.
Comment 125 David Cook 2017-06-20 23:44:21 UTC
Created attachment 64480 [details] [review]
Bug 10662 - Remove workaround for pre-17710 behaviour
Comment 126 David Cook 2017-06-20 23:44:32 UTC
Created attachment 64481 [details] [review]
Bug 10662 - Modify OAI-PMH harvester to import RDFXML

Typically, the OAI-PMH harvester only imports MARCXML, but this
patch allows you to also import RDFXML.

This functionality was requested by Stockholm University Library,
and importing RDFXML requires a parallel import of MARCXML if you
want to be able to use the RDFXML. That is to say, you can download
and import RDFXML using this patch, but it will probably only be
useful if you're also importing MARCXML which can be linked to the RDFXML
using the OAI-PMH repository and identifier as a link. This might change
in the future if Koha moves away from MARCXML as the central required
metadata format.
Comment 127 David Cook 2017-06-20 23:48:18 UTC
That should do the trick. I just tested applying 18585, 18586, 18713, and 10662 and it applied successfully. 

I'm not actually 100% sure why it wasn't working before. A botched rebase at some point, although it wasn't obvious based on commit diffs. Oh well. Fixed now.
Comment 128 David Cook 2017-06-21 00:11:54 UTC
WARNING!

I just remembered that these patches won't work as expected on Debian Jessie at the moment, if you're using the RDF options. 

That's because there are bugs in RDF::Trine, especially in the version found in Debian Jessie. I've written fixes and the maintainer for RDF::Trine has merged them into master and included them in the latest CPAN releases. The critical fix is in version 1.017. The newest version is 1.018, which includes some additional fixes, but they're not critical. 

In terms of testing, there are some options. 

1) You wait for Mirko to package the latest RDF::Trine and put it in Koha's Debian repositories and install it from there
2) You install the latest RDF::Trine from CPAN
3) You clone the RDF::Trine github at https://github.com/kasei/perlrdf and then run KOHA_CONF=/path/to/etc/koha-conf.xml perl -I /path/to/perlrdf/RDF-Trine/lib misc/harvesterd.pl

Your best bet is probably option #3. That's what I did while I was waiting for kasei to merge and release my fixes to RDF::Trine. 

In that event, while I'd appreciate the sign off, I think I'd leave it as "Needs Signoff" since we haven't resolved the RDF::Trine dependency yet. But this is a way that you can test the code. Only the harvester daemon needs RDF::Trine for these patches, so you don't need to play with your Apache config or anything like that.
Comment 129 Josef Moravec 2017-06-21 09:13:52 UTC
Started testing, from first view I can see these issues:

1) RDF::Query is not in dependencies but used in various files

2) There are some style issues etc, could be catched easily by qa tools

3) Module UUID (package libuuid-perl) is packaged in version 0.05 for jessie, but version 0.27 is needed because you use the sub "uuid"; that version is packaged for stretch (which was released a few days ago)

4) You use the oai-pmh-harvester.yaml filename everywhere, but in /debian/templates/koha-conf-site.xml.in it is 'oai_pmh_harvester.yaml' (with underscores) - it should be consistent

5) The message "Test succesfull!" should be more visible; you could use the "dialog message" classes to make it a standard Koha message


But the main thing is: it does work well! ;)

Not tested RDF harvesting, as I had no luck installing Fuseki properly :(
Comment 130 Mirko Tietgen 2017-06-21 10:42:34 UTC
(In reply to David Cook from comment #128)

> In terms of testing, there are some options. 
> 
> 1) You wait for Mirko to package the latest RDF::Trine and put it in Koha's
> Debian repositories and install it from there
> 2) You install the latest RDF::Trine from CPAN
> 3) You clone the RDF::Trine github at https://github.com/kasei/perlrdf and
> then run KOHA_CONF=/path/to/etc/koha-conf.xml perl -I
> /path/to/perlrdf/RDF-Trine/lib misc/harvesterd.pl

I have backported librdf-trine-perl 1.018-1 to the Koha unstable repository. Please use option 1: install the package from our repository and report problems if you encounter any.
Comment 131 David Cook 2017-06-22 01:52:57 UTC
Thanks for the feedback, Josef! I'll try to address it as soon as I can. I have a busy day today, but I'll do my best to get to it.

That reminds me that I should double-check the RDF::Query versions for compatibility as well. 

According to https://packages.qa.debian.org/libr/librdf-query-perl.html, Debian Stretch is on 2.918, which is the same version that I've been using on openSUSE Leap 42.2. But Debian Jessie is on 2.912. I don't know if that will be significant or not, so I'll have to check that later.
Comment 132 David Cook 2017-07-13 04:19:18 UTC
(In reply to Mirko Tietgen from comment #130)
> (In reply to David Cook from comment #128)
> 
> > In terms of testing, there are some options. 
> > 
> > 1) You wait for Mirko to package the latest RDF::Trine and put it in Koha's
> > Debian repositories and install it from there
> > 2) You install the latest RDF::Trine from CPAN
> > 3) You clone the RDF::Trine github at https://github.com/kasei/perlrdf and
> > then run KOHA_CONF=/path/to/etc/koha-conf.xml perl -I
> > /path/to/perlrdf/RDF-Trine/lib misc/harvesterd.pl
> 
> I have backported librdf-trine-perl 1.018-1 to the Koha unstable repository.
> Please use option 1: install the package from our repository and report
> problems if you encounter any.

librdf-trine-perl 1.018-1 exists in Debian testing and Debian unstable at the moment (https://packages.debian.org/buster/librdf-trine-perl).

I've made a bug report in Debian to get it updated in Debian stable and Debian oldstable (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865506), but I think Jonas is saying that it's unlikely that it will make it back.
Comment 133 David Cook 2017-07-13 05:23:18 UTC
(In reply to Josef Moravec from comment #129)
> Started testing, from first view I can see these issues:
> 
> 1) RDF::Query is not in dependencies but used in various files
> 
> 2) There are some style issues etc, could be catched easily by qa tools
> 
> 3) Module UUID (package libuuid-perl) is packaged in version 0.05 for
> jessie, but version 0.27 needed as you use the sub "uuid", this version is
> packaged for stretch (which is released for few days now)
> 
> 4) You use oai-pmh-harvester.yaml filename everywhere, but in
> /debian/templates/koha-conf-site.xml.in it is 'oai_pmh_harvester.yaml' (with
> underscores) - should be consistant
> 
> 5) The message "Test succesfull!" should be more visible, you could use
> "dialog message" classes to make it standard koha message
> 
> 
> But the main think is: It does work well! ;)
> 
> Not tested RDF harvesting, as I had no luck to install Fuseki properly :(

1) Ok, I'll add the RDF::Query dependency in a new patch.

2) Could you elaborate on the style issues? I went to https://wiki.koha-community.org/wiki/QA_Test_Tools, but the configure dependency step is risky and it looks set up to be used on Debian (and I'm on openSUSE although I should try out kohadevbox again) so I might hold off trying the tools for now. 

3) I only use uuid() in two places, so I'll go back and make those bits compatible with UUID 0.05.

4) I'll add that to a new patch as well. Thanks for catching that one! 

5) Makes sense! I've split the difference and done both, and added to a new patch!
Comment 134 David Cook 2017-07-13 05:25:26 UTC
Created attachment 65016 [details] [review]
Add RDF::Query dependency
Comment 135 David Cook 2017-07-13 05:25:33 UTC
Created attachment 65017 [details] [review]
Fix configuration typo for Debian instances
Comment 136 David Cook 2017-07-13 05:25:40 UTC
Created attachment 65018 [details] [review]
Standardize OAI-PMH test success/failure
Comment 137 David Cook 2017-07-13 05:25:48 UTC
Created attachment 65019 [details] [review]
Use old style of UUID generation
Comment 138 David Cook 2017-07-13 05:26:42 UTC
(In reply to Josef Moravec from comment #129)
> 
> Not tested RDF harvesting, as I had no luck to install Fuseki properly :(

If you're unable to install it, perhaps Magnus Enger can give you an account to his Fuseki instance?
Comment 139 Josef Moravec 2017-07-13 07:47:23 UTC
(In reply to David Cook from comment #133)
> 
> 2) Could you elaborate on the style issues? I went to
> https://wiki.koha-community.org/wiki/QA_Test_Tools, but the configure
> dependency step is risky and it looks set up to be used on Debian (and I'm
> on openSUSE although I should try out kohadevbox again) so I might hold off
> trying the tools for now. 
> 

No problem, here is relevant output from qa tools:

FAIL	Koha/Daemon.pm
   FAIL	  critic
		"$fh" is declared but not used at line 78, column 14. Unused variables clutter code and make it harder to read.
		Bareword file handle opened at line 64, column 18. See pages 202,204 of PBP.


FAIL	Koha/OAI/Harvester.pm
   FAIL	  critic
		Variable declared in conditional statement at line 291, column 9. Declare variables outside of the condition.
   FAIL	  pod
		=head3 without preceding higher level
		 in file Koha/OAI/Harvester.pm


FAIL	Koha/OAI/Harvester/Downloader.pm
   FAIL	  pod
		 in file Koha/OAI/Harvester/Downloader.pm
		=head2 without preceding higher level


FAIL	Koha/OAI/Harvester/Import/RDFXML.pm
   FAIL	  critic
		Variable declared in conditional statement at line 110, column 5. Declare variables outside of the condition.
   FAIL	  forbidden patterns
		forbidden pattern: tab char (line 32)


FAIL	Koha/OAI/Harvester/Import/Record.pm
   FAIL	  critic
		Variable declared in conditional statement at line 325, column 17. Declare variables outside of the condition.
		Variable declared in conditional statement at line 262, column 13. Declare variables outside of the condition.
		"return" statement with explicit "undef" at line 145, column 41. See page 199 of PBP.
		Variable declared in conditional statement at line 252, column 5. Declare variables outside of the condition.
   FAIL	  forbidden patterns
		forbidden pattern: tab char (line 152)
		forbidden pattern: tab char (line 156)
		forbidden pattern: tab char (line 142)
		forbidden pattern: tab char (line 139)
		forbidden pattern: tab char (line 136)
		forbidden pattern: tab char (line 384)
		forbidden pattern: tab char (line 380)
		forbidden pattern: tab char (line 140)
		forbidden pattern: tab char (line 141)
		forbidden pattern: tab char (line 151)
		forbidden pattern: tab char (line 155)
		forbidden pattern: tab char (line 149)
		forbidden pattern: tab char (line 154)
		forbidden pattern: tab char (line 153)
		forbidden pattern: tab char (line 138)
		forbidden pattern: tab char (line 150)
		forbidden pattern: tab char (line 137)
   FAIL	  pod
		 in file Koha/OAI/Harvester/Import/Record.pm
		=head3 without preceding higher level
		empty =head3
	 

FAIL	Koha/OAI/Harvester/Request.pm
   FAIL	  forbidden patterns
		forbidden pattern: tab char (line 69)
		forbidden pattern: tab char (line 191)
		forbidden pattern: tab char (line 189)
		forbidden pattern: tab char (line 181)
		forbidden pattern: tab char (line 188)
		forbidden pattern: tab char (line 193)
		forbidden pattern: tab char (line 104)
		forbidden pattern: tab char (line 98)
		forbidden pattern: tab char (line 77)
		forbidden pattern: tab char (line 113)
		forbidden pattern: tab char (line 183)
		forbidden pattern: tab char (line 184)
		forbidden pattern: tab char (line 89)
		forbidden pattern: tab char (line 185)
		forbidden pattern: tab char (line 101)
		forbidden pattern: tab char (line 190)
		forbidden pattern: tab char (line 83)
		forbidden pattern: tab char (line 102)
		forbidden pattern: tab char (line 70)
		forbidden pattern: tab char (line 78)
		forbidden pattern: tab char (line 95)
		forbidden pattern: tab char (line 194)
		forbidden pattern: tab char (line 107)
		forbidden pattern: tab char (line 76)
		forbidden pattern: tab char (line 80)
		forbidden pattern: tab char (line 108)
		forbidden pattern: tab char (line 182)
		forbidden pattern: tab char (line 99)
		forbidden pattern: tab char (line 109)
		forbidden pattern: tab char (line 66)
		forbidden pattern: tab char (line 97)
		forbidden pattern: tab char (line 106)
		forbidden pattern: tab char (line 103)
		forbidden pattern: tab char (line 112)
		forbidden pattern: tab char (line 93)
		forbidden pattern: tab char (line 170)
		forbidden pattern: tab char (line 111)
		forbidden pattern: tab char (line 105)
		forbidden pattern: tab char (line 192)
		forbidden pattern: tab char (line 114)
		forbidden pattern: tab char (line 90)
		forbidden pattern: tab char (line 110)
		forbidden pattern: tab char (line 187)
		forbidden pattern: tab char (line 79)
		forbidden pattern: tab char (line 186)
		forbidden pattern: tab char (line 115)
		forbidden pattern: tab char (line 92)
		forbidden pattern: tab char (line 94)
		forbidden pattern: tab char (line 91)
		forbidden pattern: tab char (line 100)
		forbidden pattern: tab char (line 96)


FAIL	installer/data/mysql/kohastructure.sql
   FAIL	  charset_collate
		The table oai_harvester_requests does not have the current charset collate (see bug 11944)


FAIL	koha-tmpl/intranet-tmpl/prog/en/includes/tools-menu.inc
   FAIL	  forbidden patterns
		forbidden pattern: tab char (line 102)


FAIL	koha-tmpl/intranet-tmpl/prog/en/modules/tools/oai-pmh-harvester/dashboard.tt
   FAIL	  forbidden patterns
		forbidden pattern: Do not use line breaks inside template tags (bug 18675) (line 308)
		forbidden pattern: Do not use line breaks inside template tags (bug 18675) (line 287)
Comment 140 Katrin Fischer 2017-10-08 12:54:04 UTC
Setting to Failed QA because of the QA test tool failures (see comment #139)
Comment 141 David Cook 2017-10-25 00:55:59 UTC
(In reply to Josef Moravec from comment #139)
> No problem, here is relevant output from qa tools:
> 
> FAIL	Koha/Daemon.pm
>    FAIL	  critic
> 		"$fh" is declared but not used at line 78, column 14. Unused variables
> clutter code and make it harder to read.
> 		Bareword file handle opened at line 64, column 18. See pages 202,204 of
> PBP.
> 
> 

Fixed.

> FAIL	Koha/OAI/Harvester.pm
>    FAIL	  critic
> 		Variable declared in conditional statement at line 291, column 9. Declare
> variables outside of the condition.
>    FAIL	  pod
> 		=head3 without preceding higher level
> 		 in file Koha/OAI/Harvester.pm
> 
> 

Fixed.

> FAIL	Koha/OAI/Harvester/Downloader.pm
>    FAIL	  pod
> 		 in file Koha/OAI/Harvester/Downloader.pm
> 		=head2 without preceding higher level
> 
> 

Fixed.

> FAIL	Koha/OAI/Harvester/Import/RDFXML.pm
>    FAIL	  critic
> 		Variable declared in conditional statement at line 110, column 5. Declare
> variables outside of the condition.
>    FAIL	  forbidden patterns
> 		forbidden pattern: tab char (line 32)
> 
> 

Fixed.

> FAIL	Koha/OAI/Harvester/Import/Record.pm
>    FAIL	  critic
> 		Variable declared in conditional statement at line 325, column 17. Declare
> variables outside of the condition.
> 		Variable declared in conditional statement at line 262, column 13. Declare
> variables outside of the condition.
> 		"return" statement with explicit "undef" at line 145, column 41. See page
> 199 of PBP.
> 		Variable declared in conditional statement at line 252, column 5. Declare
> variables outside of the condition.
>    FAIL	  forbidden patterns
> 		forbidden pattern: tab char (line 152)
> 		forbidden pattern: tab char (line 156)
> 		forbidden pattern: tab char (line 142)
> 		forbidden pattern: tab char (line 139)
> 		forbidden pattern: tab char (line 136)
> 		forbidden pattern: tab char (line 384)
> 		forbidden pattern: tab char (line 380)
> 		forbidden pattern: tab char (line 140)
> 		forbidden pattern: tab char (line 141)
> 		forbidden pattern: tab char (line 151)
> 		forbidden pattern: tab char (line 155)
> 		forbidden pattern: tab char (line 149)
> 		forbidden pattern: tab char (line 154)
> 		forbidden pattern: tab char (line 153)
> 		forbidden pattern: tab char (line 138)
> 		forbidden pattern: tab char (line 150)
> 		forbidden pattern: tab char (line 137)
>    FAIL	  pod
> 		 in file Koha/OAI/Harvester/Import/Record.pm
> 		=head3 without preceding higher level
> 		empty =head3
> 	 
> 

Fixed.

> FAIL	Koha/OAI/Harvester/Request.pm
>    FAIL	  forbidden patterns
> 		forbidden pattern: tab char (line 69)
> 		forbidden pattern: tab char (line 191)
> 		forbidden pattern: tab char (line 189)
> 		forbidden pattern: tab char (line 181)
> 		forbidden pattern: tab char (line 188)
> 		forbidden pattern: tab char (line 193)
> 		forbidden pattern: tab char (line 104)
> 		forbidden pattern: tab char (line 98)
> 		forbidden pattern: tab char (line 77)
> 		forbidden pattern: tab char (line 113)
> 		forbidden pattern: tab char (line 183)
> 		forbidden pattern: tab char (line 184)
> 		forbidden pattern: tab char (line 89)
> 		forbidden pattern: tab char (line 185)
> 		forbidden pattern: tab char (line 101)
> 		forbidden pattern: tab char (line 190)
> 		forbidden pattern: tab char (line 83)
> 		forbidden pattern: tab char (line 102)
> 		forbidden pattern: tab char (line 70)
> 		forbidden pattern: tab char (line 78)
> 		forbidden pattern: tab char (line 95)
> 		forbidden pattern: tab char (line 194)
> 		forbidden pattern: tab char (line 107)
> 		forbidden pattern: tab char (line 76)
> 		forbidden pattern: tab char (line 80)
> 		forbidden pattern: tab char (line 108)
> 		forbidden pattern: tab char (line 182)
> 		forbidden pattern: tab char (line 99)
> 		forbidden pattern: tab char (line 109)
> 		forbidden pattern: tab char (line 66)
> 		forbidden pattern: tab char (line 97)
> 		forbidden pattern: tab char (line 106)
> 		forbidden pattern: tab char (line 103)
> 		forbidden pattern: tab char (line 112)
> 		forbidden pattern: tab char (line 93)
> 		forbidden pattern: tab char (line 170)
> 		forbidden pattern: tab char (line 111)
> 		forbidden pattern: tab char (line 105)
> 		forbidden pattern: tab char (line 192)
> 		forbidden pattern: tab char (line 114)
> 		forbidden pattern: tab char (line 90)
> 		forbidden pattern: tab char (line 110)
> 		forbidden pattern: tab char (line 187)
> 		forbidden pattern: tab char (line 79)
> 		forbidden pattern: tab char (line 186)
> 		forbidden pattern: tab char (line 115)
> 		forbidden pattern: tab char (line 92)
> 		forbidden pattern: tab char (line 94)
> 		forbidden pattern: tab char (line 91)
> 		forbidden pattern: tab char (line 100)
> 		forbidden pattern: tab char (line 96)
> 
> 

Fixed.

> FAIL	installer/data/mysql/kohastructure.sql
>    FAIL	  charset_collate
> 		The table oai_harvester_requests does not have the current charset collate
> (see bug 11944)
> 
> 

I think this is fixed. (Since I'm not on Debian, I'm just using my own checker script based on https://github.com/joubu/koha-qa-tools.)

> FAIL	koha-tmpl/intranet-tmpl/prog/en/includes/tools-menu.inc
>    FAIL	  forbidden patterns
> 		forbidden pattern: tab char (line 102)
> 
> 

Fixed (there were other pre-existing tab characters that I converted to spaces as well since I was already here).

> FAIL
> koha-tmpl/intranet-tmpl/prog/en/modules/tools/oai-pmh-harvester/dashboard.tt
>    FAIL	  forbidden patterns
> 		forbidden pattern: Do not use line breaks inside template tags (bug 18675)
> (line 308)
> 		forbidden pattern: Do not use line breaks inside template tags (bug 18675)
> (line 287)

I think this is fixed.

--

New patch coming presently...
Comment 142 David Cook 2017-10-25 00:57:42 UTC
Created attachment 68508 [details] [review]
Fix problems reported by Koha QA tools
Comment 143 David Cook 2017-12-06 01:04:17 UTC
Recently, I've been working on a different Linked Data system, and it's got me thinking that I actually want to save the incoming RDFXML in Koha's MySQL database. 

The reason for this is simply to keep the source data on-hand in the event that we need to rebuild the triplestore. 

Reading about Fuseki TDB database corruption, I've seen talk of rebuilding from the original source data, and the only realistic way of doing that with RDFXML harvested via OAI-PMH is to have that RDFXML stored locally as blobs outside the triplestore too.
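
The rebuild itself would be mechanical, something along these lines (sketch only; the table name is invented and the real store would be the Fuseki-backed model rather than a temporary one):

use Modern::Perl;
use C4::Context;
use RDF::Trine;

my $model  = RDF::Trine::Model->temporary_model;   # stand-in for the real store
my $parser = RDF::Trine::Parser->new('rdfxml');

my $sth = C4::Context->dbh->prepare(
    q{SELECT base_uri, rdfxml FROM oai_harvester_rdf_blobs}
);
$sth->execute;
while ( my ( $base_uri, $rdfxml ) = $sth->fetchrow_array ) {
    $parser->parse_into_model( $base_uri, $rdfxml, $model );
}
say 'Rebuilt model with ' . $model->size . ' triples';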
Comment 144 Andreas Hedström Mace 2017-12-07 08:59:35 UTC
Sounds great that work and/or thinking is being done in regards to harvesting/handling linked data in Koha. But could any potential changes that result from that please be added as a new patch (on top of this one?) so that we don't get another iteration of the harvester before this has the chance to make it through the QA process!
Comment 145 Katrin Fischer 2017-12-07 21:32:45 UTC
+1
Comment 146 David Cook 2017-12-07 23:00:17 UTC
(In reply to Andreas Hedström Mace from comment #144)
> Sounds great that work and/or thinking is being done is regards to
> harvesting/handling linked data in Koha. But could any potental changes that
> result from that please be added as a new patch (on top of this one?) so
> that we don't get another iteration of the harvester, before this has the
> chance to make it through the QA process!

It's just thinking :).

These patches haven't even been signed off yet. I'm not sure that anyone is even testing them. 

Honestly, most of the work would probably be done on a separate bug, and then only if that separate bug were pushed would any required changes be added here (as an added patch).
Comment 147 Josef Moravec 2017-12-08 09:18:02 UTC
> These patches haven't even been signed off yet. I'm not sure that anyone is
> even testing them. 

I am testing this (again) ;)
Comment 148 David Cook 2017-12-10 23:07:48 UTC
(In reply to Josef Moravec from comment #147)
> > These patches haven't even been signed off yet. I'm not sure that anyone is
> > even testing them. 
> 
> I am testing this (again) ;)

Hurray! You're a champion, Josef!
Comment 149 Blou 2018-01-25 13:41:43 UTC
Hello David,
We'd be interested in testing (
Comment 150 Blou 2018-01-25 13:53:57 UTC
We're interested in testing.  In fact, we'd need this ASAP for one of our clients.  So David, if you're around, you'll get some testing starting.... as soon as we can apply it.

We're getting a SHA1 error on the first patch, whoever tries to apply it here.

So maybe if you have no signoff it would be a good opportunity to rebase and squash a few patches.

We stand ready, today or tomorrow.
Comment 151 David Cook 2018-01-29 01:03:05 UTC
(In reply to Blou from comment #150)
> We're interested in testing.  In fact, we'd need this ASAP for one of our
> clients.  So David, if you're around, you'll get some testing starting....
> as soon as we can apply it.
> 
> We're getting a SHA-1 error on the first patch, whoever tries to apply it here.
> 
> So maybe if you have no signoff it would be a good opportunity to rebase and
> squash a few patches.
> 
> We stand ready, today or tomorrow.

I'll look at rebasing it now. Cheers!
Comment 152 David Cook 2018-01-29 01:08:26 UTC
Merge conflict for #18585 but that should be easy enough to fix...
Comment 153 David Cook 2018-01-29 01:15:44 UTC
Created attachment 71008 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.
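
To make the "unix socket using a simple JSON-based protocol" part more concrete, here is a rough sketch of what a client-side exchange looks like. This is illustrative only: the socket path and the message fields are assumptions for the example, and the real protocol lives in Koha::OAI::Harvester::Client and Koha::OAI::Harvester::Listener.

#!/usr/bin/perl
# Illustrative sketch only: socket path and message shape are assumptions,
# not the exact protocol implemented by the patch.
use Modern::Perl;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;
use JSON qw(encode_json decode_json);

my $socket = IO::Socket::UNIX->new(
    Type => SOCK_STREAM,
    Peer => '/var/run/koha/oai-pmh-harvester.sock',   # assumed path
) or die "Cannot connect to harvester daemon: $!";

# Send one newline-delimited JSON message (hypothetical fields)...
print {$socket} encode_json({ command => 'list_tasks' }), "\n";

# ...and read back one newline-delimited JSON reply.
my $reply = decode_json( scalar <$socket> );
say "Daemon replied with " . scalar( @{ $reply->{tasks} // [] } ) . " task(s)";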
Comment 154 David Cook 2018-01-29 01:15:56 UTC
Created attachment 71009 [details] [review]
Bug 10662 - Remove workaround for pre-17710 behaviour
Comment 155 David Cook 2018-01-29 01:16:07 UTC
Created attachment 71010 [details] [review]
Bug 10662 - Modify OAI-PMH harvester to import RDFXML

Typically, the OAI-PMH harvester only imports MARCXML, but this
patch allows you to also import RDFXML.

This functionality was requested by Stockholm University Library,
and importing RDFXML requires a parallel import of MARCXML if you
want to be able to use the RDFXML. That is to say, you can download
and import RDFXML using this patch, but it will probably only be
useful if you're also importing MARCXML which can be linked to the RDFXML
using the OAI-PMH repository and identifier as a link. This might change
in the future if Koha moves away from MARCXML as the central required
metadata format.
Comment 156 David Cook 2018-01-29 01:16:17 UTC
Created attachment 71011 [details] [review]
Add RDF::Query dependency
Comment 157 David Cook 2018-01-29 01:16:26 UTC
Created attachment 71012 [details] [review]
Fix configuration typo for Debian instances
Comment 158 David Cook 2018-01-29 01:16:34 UTC
Created attachment 71013 [details] [review]
Standardize OAI-PMH test success/failure
Comment 159 David Cook 2018-01-29 01:16:42 UTC
Created attachment 71014 [details] [review]
Use old style of UUID generation
Comment 160 David Cook 2018-01-29 01:16:50 UTC
Created attachment 71015 [details] [review]
Fix problems reported by Koha QA tools
Comment 161 David Cook 2018-01-29 01:24:58 UTC
OK, both #18585 and #10662 have been updated.

You should be able to apply 18585, 18586, 18713, and now 10662.

Actually, you could probably test 10662 on its own without the RDF-specific patches. If it doesn't work, I should write a patch to make the RDF optional...
Comment 162 David Cook 2018-08-13 02:05:53 UTC
I want to do some more work on this before Kohacon18... 

I think I will separate out the RDF work and put it in its own Bugzilla issue, since overall an OAI-PMH harvester and RDF support are two separate things.
Comment 163 David Cook 2018-09-11 19:44:38 UTC
Created attachment 78569 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.
Comment 164 David Cook 2018-09-11 19:49:31 UTC
Created attachment 78570 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.
Comment 165 David Cook 2018-09-11 19:49:38 UTC
Created attachment 78571 [details] [review]
Bug 10662 - Remove workaround for pre-17710 behaviour
Comment 166 David Cook 2018-09-11 19:49:45 UTC
Created attachment 78572 [details] [review]
Bug 10662 - Modify OAI-PMH harvester to import RDFXML

Typically, the OAI-PMH harvester only imports MARCXML, but this
patch allows you to also import RDFXML.

This functionality was requested by Stockholm University Library,
and importing RDFXML requires a parallel import of MARCXML if you
want to be able to use the RDFXML. That is to say, you can download
and import RDFXML using this patch, but it will probably only be
useful if you're also importing MARCXML which can be linked to the RDFXML
using the OAI-PMH repository and identifier as a link. This might change
in the future if Koha moves away from MARCXML as the central required
metadata format.
Comment 167 David Cook 2018-09-11 19:49:53 UTC
Created attachment 78573 [details] [review]
Add RDF::Query dependency
Comment 168 David Cook 2018-09-11 19:50:00 UTC
Created attachment 78574 [details] [review]
Fix configuration typo for Debian instances
Comment 169 David Cook 2018-09-11 19:50:07 UTC
Created attachment 78575 [details] [review]
Standardize OAI-PMH test success/failure
Comment 170 David Cook 2018-09-11 19:50:14 UTC
Created attachment 78576 [details] [review]
Use old style of UUID generation
Comment 171 David Cook 2018-09-11 19:50:22 UTC
Created attachment 78577 [details] [review]
Fix problems reported by Koha QA tools
Comment 172 David Cook 2018-09-11 19:51:37 UTC
Rebased against master.

Going to test this using https://gitlab.com/koha-community/koha-testing-docker later.
Comment 173 David Cook 2018-09-11 20:02:20 UTC
I was reviewing the test plan above and realized that https://gitlab.com/koha-community/koha-testing-docker probably won't work for testing this, because it uses koha-gitify and 10662 changes configuration files, which requires a full Koha build...
Comment 174 David Cook 2018-09-11 22:31:39 UTC
Created attachment 78583 [details]
Example OAI-PMH harvester configuration

This is an example configuration file to use with the Docker container from https://gitlab.com/koha-community/koha-testing-docker
Comment 175 David Cook 2018-09-11 22:59:55 UTC
If you do want to use https://gitlab.com/koha-community/koha-testing-docker, here are some steps for how to do it. 

I've actually found a couple of issues which I will look at tidying up, but thought I would post this anyway.

--

0. apt-get install libpoe-perl libpoe-component-jobqueue-perl librdf-trine-perl librdf-query-perl
1. cd /kohadevbox/koha
2. perl installer/data/mysql/updatedatabase.pl

3. Download OAI-PMH configuration file from https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=78583 as oai.yml in /kohadevbox/koha

4. vi /etc/koha/sites/kohadev/koha-conf.xml
Add:
<oai_pmh_harvester_config>/kohadevbox/koha/oai.yml</oai_pmh_harvester_config>

5. echo 'flush_all' | nc memcached 11211
Ctrl + C to break out of nc request

6. koha-plack --restart kohadev
7. perl misc/harvesterd.pl --log-level DEBUG
8. perl misc/devel/create_superlibrarian.pl --userid tester --password retest --branchcode CPL --categorycode S --cardnumber TESTER
Comment 176 David Cook 2018-09-12 17:34:40 UTC
After reviewing https://gitlab.com/koha-community/kohadevbox, the instructions for getting the harvester operational should be very similar to the instructions for https://gitlab.com/koha-community/koha-testing-docker at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662#c175.
Comment 177 David Cook 2018-09-12 19:02:58 UTC
Created attachment 78601 [details] [review]
Bug 10662: Incorrect conditions cause incorrect messages and missing links
Comment 178 David Cook 2018-09-12 23:05:27 UTC
Created attachment 78617 [details]
Sample XSLT filter for converting OAI_DC into MARCXML

This is a sample XSLT filter that can be used in the "Filter" field in the OAI-PMH request like so:

file:///kohadevbox/koha/OAIDC2MARCXML.xsl
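
As a rough illustration of what the filter step does (this is not the harvester's actual import code, and the input file name is just a placeholder), applying a stylesheet like this one to a downloaded OAI_DC record boils down to:

# Sketch only: apply an OAI_DC -> MARCXML stylesheet to one record.
use Modern::Perl;
use XML::LibXML;
use XML::LibXSLT;

my $stylesheet = XML::LibXSLT->new->parse_stylesheet(
    XML::LibXML->load_xml( location => '/kohadevbox/koha/OAIDC2MARCXML.xsl' )
);
my $source = XML::LibXML->load_xml( location => 'oai_dc_record.xml' );   # placeholder input
my $result = $stylesheet->transform($source);
print $stylesheet->output_string($result);   # MARCXML ready for import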
Comment 179 Christopher Davis 2018-09-14 16:21:19 UTC
David,

I am impressed with your work on this- so much done in so little time. I think that enabling Koha to harvest metadata is a grand idea, so I have been following this bug. I heard that Koha's new indexer, Elastic Search, can index non-MARC metadata- is this true? If this is true, then I hope that you do not mind me asking you a question: this bug patch would have Koha's OAI-PMH harvester ingest only metadata which has been crosswalked to MARCXML; however, why stick with only MARCXML when Elastic Search can index the incoming metadata in its native schema (Dublin Core, TEI, EAD, etc.)? Please pardon my ignorance, but am I missing something?

Thank you,

Christopher Davis
Comment 180 David Cook 2018-09-14 17:59:23 UTC
(In reply to Christopher Davis from comment #179)
> David,
> 
> I am impressed with your work on this- so much done in so little time. I
> think that enabling Koha to harvest metadata is a grand idea, so I have been
> following this bug. I heard that Koha's new indexer, Elastic Search, can
> index non-MARC metadata- is this true? If this is true, then I hope that you
> do not mind me asking you a question: this bug patch would have Koha's
> OAI-PMH harvester ingest only metadata which has been crosswalked to
> MARCXML; however, why stick with only MARCXML when Elastic Search can index
> the incoming metadata in its native schema (Dublin Core, TEI, EAD, etc.)?
> Please pardon my ignorance, but am I missing something?
> 
> Thank you,
> 
> Christopher Davis

Hi Christopher, 

It feels like a very long time to me, but thank you very much! 

Neither ElasticSearch nor Zebra requires MARC per se, so either of them *could* index non-MARC metadata. However, Koha itself is a MARC-driven system, so the limitation you've observed with the OAI-PMH harvester comes from Koha's internals (and from how we've chosen to index Koha's metadata).

Indexing is certainly one part of the issue. I'm not 100% familiar with our ElasticSearch implementation, but I think we're using MARC with that as well. I think the solution would be to have a generic schema that we use for indexing/searching, so that we could map any metadata format to Koha's generic index schema. Then we'd have a mapping from any metadata format to a generic Koha display format.

The other issue would be how Koha handles "records" for other purposes. Deleting bibs, modifying bibs, acquisitions, subscriptions, etc. But really the first step is just being able to store a non-MARC bib, index it, and display it.  

Basically... lots of work needs to be done and someone just needs to start it. I think the first step will be creating non-MARC bibliographic records which store non-MARC metadata in the biblio_metadata table. And the first step to doing that would be changing the "marcflavour" column in that table to "schema".

There's a lot of work to do there, but I think it's very important work!

And it's work which would be great to do, since the Swedish Union Catalogue is using RDF. It would be great if we could just download the Swedish Union Catalogue metadata in its native format and work with that for indexing and display.
Comment 181 David Cook 2018-09-14 19:15:53 UTC
Kohadevbox setup instructions:

1. apt-get install libpoe-perl libpoe-component-jobqueue-perl librdf-trine-perl librdf-query-perl
2. In your browser, go to localhost:8081 and run web installer
3. cd /home/vagrant/kohaclone
4. sudo koha-shell kohadev -c "perl installer/data/mysql/updatedatabase.pl"
5. Download OAI-PMH configuration file from https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=78583 as oai.yml in /home/vagrant/kohaclone
6. sudo vi /etc/koha/sites/kohadev/koha-conf.xml
Add the following before </config>:
<oai_pmh_harvester_config>/home/vagrant/kohaclone/oai.yml</oai_pmh_harvester_config>
7. restart_all 
8. sudo koha-shell kohadev -c "perl misc/harvesterd.pl --log-level DEBUG"
Comment 182 David Cook 2018-09-14 19:32:26 UTC
Kohadevbox setup instructions:

1. apt-get install libpoe-perl libpoe-component-jobqueue-perl librdf-trine-perl librdf-query-perl
2. In your browser, go to localhost:8081 and run web installer
3. cd /home/vagrant/kohaclone
4. sudo koha-shell kohadev -c "perl installer/data/mysql/updatedatabase.pl"
5. Download OAI-PMH configuration file from https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=78583 as oai.yml in /home/vagrant/kohaclone
6. sudo vi /etc/koha/sites/kohadev/koha-conf.xml
Add the following before </config>:
<oai_pmh_harvester_config>/home/vagrant/kohaclone/oai.yml</oai_pmh_harvester_config>
7. restart_all 
8. sudo KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/home/vagrant/kohaclone perl misc/harvesterd.pl --log-level DEBUG
Comment 183 Ed Veal 2018-09-14 22:22:37 UTC
Created attachment 78754 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 184 Ed Veal 2018-09-14 22:22:50 UTC
Created attachment 78756 [details] [review]
Bug 10662 - Remove workaround for pre-17710 behaviour

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 185 Ed Veal 2018-09-14 22:23:01 UTC
Created attachment 78758 [details] [review]
Bug 10662 - Modify OAI-PMH harvester to import RDFXML

Typically, the OAI-PMH harvester only imports MARCXML, but this
patch allows you to also import RDFXML.

This functionality was requested by Stockholm University Library,
and importing RDFXML requires a parallel import of MARCXML if you
want to be able to use the RDFXML. That is to say, you can download
and import RDFXML using this patch, but it will probably only be
useful if you're also importing MARCXML which can be linked to the RDFXML
using the OAI-PMH repository and identifier as a link. This might change
in the future if Koha moves away from MARCXML as the central required
metadata format.

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 186 Ed Veal 2018-09-14 22:23:10 UTC
Created attachment 78760 [details] [review]
Add RDF::Query dependency

https://bugs.koha-community.org/show_bug.cgi?id=10662

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 187 Ed Veal 2018-09-14 22:23:19 UTC
Created attachment 78762 [details] [review]
Fix configuration typo for Debian instances

https://bugs.koha-community.org/show_bug.cgi?id=10662

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 188 Ed Veal 2018-09-14 22:23:28 UTC
Created attachment 78764 [details] [review]
Standardize OAI-PMH test success/failure

https://bugs.koha-community.org/show_bug.cgi?id=10662

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 189 Ed Veal 2018-09-14 22:23:37 UTC
Created attachment 78767 [details] [review]
Use old style of UUID generation

https://bugs.koha-community.org/show_bug.cgi?id=10662

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 190 Ed Veal 2018-09-14 22:23:47 UTC
Created attachment 78769 [details] [review]
Fix problems reported by Koha QA tools

https://bugs.koha-community.org/show_bug.cgi?id=10662

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 191 Ed Veal 2018-09-14 22:23:56 UTC
Created attachment 78771 [details] [review]
Bug 10662: Incorrect conditions cause incorrect messages and missing links

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 192 David Cook 2018-09-14 23:05:37 UTC
Ed did mention that the "Send" request label is a bit confusing, and I suggested that "Submit" might be less confusing.
Comment 193 David Cook 2018-09-15 16:51:27 UTC
Created attachment 78859 [details] [review]
Bug 10662: Template fixes

Fix datatable search for oai import history

Re-label "Send" to "Submit"
Comment 194 Ed Veal 2018-09-15 17:21:37 UTC
Created attachment 78860 [details] [review]
Bug 20986 Add 867 and 868 holdings display

Add line breaks in the 866 Holdings display in the OPAC details and Staff details page.  Add 867 and 868 textual holdings with line breaks in the OPAC and Staff details display.

Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Comment 195 David Cook 2018-09-15 22:47:23 UTC
The signed-off patches actually contain some functionality for harvesting RDF and importing RDF into Koha. I think that will make QA much more difficult, and it's more of a long-term goal, so I'm splitting that functionality off into #21359.

As a result, I've removed the dependencies on other Bugzilla RDF issues, and I'm currently in the process of extracting all the RDF handling from the OAI-PMH harvester, so that we're just dealing with MARC handling. Hopefully that makes testing and QA more straightforward. I'll also be squashing the patches down, so it'll be easier to review as well.
Comment 196 David Cook 2018-09-15 23:10:41 UTC
Created attachment 78914 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.
Comment 197 David Cook 2018-09-15 23:11:55 UTC
Kohadevbox setup instructions:

0. Apply patches
1. apt-get install libpoe-perl libpoe-component-jobqueue-perl 
2. In your browser, go to localhost:8081 and run web installer
3. cd /home/vagrant/kohaclone
4. sudo koha-shell kohadev -c "perl installer/data/mysql/updatedatabase.pl"
5. Download OAI-PMH configuration file from https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=78583 as oai.yml in /home/vagrant/kohaclone
6. sudo vi /etc/koha/sites/kohadev/koha-conf.xml
Add the following before </config>:
<oai_pmh_harvester_config>/home/vagrant/kohaclone/oai.yml</oai_pmh_harvester_config>
7. restart_all 
8. sudo KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/home/vagrant/kohaclone perl misc/harvesterd.pl --log-level DEBUG
Comment 198 David Cook 2018-09-16 17:35:59 UTC
To setup Kohadevbox:

0. Apply patches (e.g. git bz apply 10662)
1. apt-get install libpoe-perl libpoe-component-jobqueue-perl 
2. In your browser, go to localhost:8081 and run web installer
3. cd /home/vagrant/kohaclone
4. sudo koha-shell kohadev -c "perl installer/data/mysql/updatedatabase.pl"
5. Download OAI-PMH configuration file from https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=78583 as oai.yml in /home/vagrant/kohaclone
6. sudo vi /etc/koha/sites/kohadev/koha-conf.xml
Add the following before </config>:
<oai_pmh_harvester_config>/home/vagrant/kohaclone/oai.yml</oai_pmh_harvester_config>
7. restart_all 
8. sudo KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/home/vagrant/kohaclone perl misc/harvesterd.pl --log-level DEBUG

To test:
1) Start OAI-PMH harvester daemon according to the above Kohadevbox instructions (koha-testing-docker can be used instead but will need some modifications)
2) Go to /cgi-bin/koha/tools/tools-home.pl
3) Click on "OAI-PMH harvester" (/cgi-bin/koha/tools/oai-pmh-harvester/dashboard.pl)
4) At the toolbar choose "New request"
5) Give the request a "Name" (this is only used within Koha and not sent in the request), add a "URL" for an OAI-PMH repository (e.g. http://<koha>/cgi-bin/koha/oai.pl), and fill in the OAI-PMH parameters as desired/required (an explanation of the protocol is at http://www.openarchives.org/OAI/openarchivesprotocol.html, or see the attached screenshot oai_request_example.jpg). The remaining values can be kept unchanged. 
6) Click "Test parameters" to make sure your inputs are all valid. 
7) Click "Save"
8) Click "Actions" and choose "Submit"
9) If the submission is successful, go to the "Submitted requests" tab
10) Click "Actions" and choose "Start"
11) Go to the "Import history" tab
12) Click "Refresh import history" to update the table and see your results. If nothing is coming up, you may need to change your request as the OAI-PMH repository may not have any results for that particular request.
Comment 199 David Cook 2018-09-16 17:36:27 UTC
Created attachment 78956 [details]
OAI-PMH Request Example
Comment 200 Andreas Hedström Mace 2018-09-18 08:31:45 UTC
Created attachment 79039 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.

Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>
Comment 201 Josef Moravec 2018-09-18 13:01:31 UTC
Comment on attachment 79039 [details] [review]
Bug 10662 - Build OAI-PMH Harvesting Client

Review of attachment 79039 [details] [review]:
-----------------------------------------------------------------

Hi David, 
thanks for your great work!

There are still some issues which I think should be solved before this can become part of master.

::: Koha/Daemon.pm
@@ +1,2 @@
> +package Koha::Daemon;
> +

there is missing POD in this package

::: Koha/OAI/Harvester.pm
@@ +1,2 @@
> +package Koha::OAI::Harvester;
> +

there is missing POD for some methods in this package

@@ +29,5 @@
> +use DateTime;
> +use DateTime::Format::Strptime;
> +
> +use C4::Context;
> +use Koha::Database;

You should use Koha::Object[s]-based classes, not Koha::Database itself

@@ +114,5 @@
> +            my $active_tasks = $poe_kernel->call("harvester","list_tasks","active");
> +            my @active_uuids = map { $_->{uuid} } @$active_tasks;
> +
> +            my $schema = Koha::Database->new()->schema();
> +            my $rs = $schema->resultset('OaiHarvesterImportQueue')->search({

You should create and use a class based on Koha::Object - it could be something like Koha::OAI::Harvester::ImportQueue[s] in this case

@@ +311,5 @@
> +sub reset_imports_status {
> +    my ($self, $kernel, $heap, $session) = @_[OBJECT, KERNEL,HEAP,SESSION];
> +
> +    my $schema = Koha::Database->new()->schema();
> +    my $rs = $schema->resultset('OaiHarvesterImportQueue')->search({

Use Koha::Object[s]

@@ +421,5 @@
> +    my ($self,$uuid) = @_;
> +    my $count = undef;
> +    if ($uuid){
> +        my $schema = Koha::Database->new()->schema();
> +        my $items = $schema->resultset('OaiHarvesterImportQueue')->search({

Use Koha::Object[s]

@@ +443,5 @@
> +    my $schema = Koha::Database->new()->schema();
> +    my @tasks = ();
> +    foreach my $uuid (sort keys %{$heap->{tasks}}){
> +        my $task = $heap->{tasks}->{$uuid};
> +        my $items = $schema->resultset('OaiHarvesterImportQueue')->search({

Use Koha::Object[s]

@@ +594,5 @@
> +        }
> +
> +        #Step Three: stop pending imports for this task
> +        my $schema = Koha::Database->new()->schema();
> +        my $items = $schema->resultset('OaiHarvesterImportQueue')->search({

Use Koha::Object[s]

@@ +625,5 @@
> +        $kernel->call($session,"stop_task",$task_uuid);
> +
> +        #Step Two: delete pending imports in database
> +        my $schema = Koha::Database->new()->schema();
> +        my $items = $schema->resultset('OaiHarvesterImportQueue')->search({

Use Koha::Object[s]

::: Koha/OAI/Harvester/Client.pm
@@ +1,2 @@
> +package Koha::OAI::Harvester::Client;
> +

There is missing POD in this package

::: Koha/OAI/Harvester/Downloader.pm
@@ +95,5 @@
> +        return;
> +    }
> +}
> +
> +=head2 OpenXMLStream

the method is called "GetXMLStream"

@@ +167,5 @@
> +        return;
> +    }
> +}
> +
> +sub ParseXMLStream {

Missing POD

@@ +244,5 @@
> +        warn "ParseXMLStream() requires a 'file_handle' argument.";
> +    }
> +}
> +
> +sub harvest {

missing POD

::: Koha/OAI/Harvester/Import/MARCXML.pm
@@ +1,1 @@
> +package Koha::OAI::Harvester::Import::MARCXML;

There is missing pod in this package

::: Koha/OAI/Harvester/Import/Record.pm
@@ +26,5 @@
> +
> +use C4::Context;
> +use C4::Biblio;
> +
> +use Koha::Database;

use Koha::Object[s] based classes please

@@ +164,5 @@
> +    my ($self, $args) = @_;
> +    my $record_type = $args->{record_type} // "biblio";
> +    my $link_id;
> +    if ($record_type eq "biblio"){
> +        my $link = $schema->resultset('OaiHarvesterBiblio')->find(

Use Koha::Object[s]

::: Koha/OAI/Harvester/Listener.pm
@@ +1,1 @@
> +package Koha::OAI::Harvester::Listener;

Missing POD in this package

::: Koha/OAI/Harvester/Request.pm
@@ +46,5 @@
> +sub _type {
> +    return 'OaiHarvesterRequest';
> +}
> +
> +sub validate {

Please add POD for this sub

::: Koha/OAI/Harvester/Worker.pm
@@ +1,1 @@
> +package Koha::OAI::Harvester::Worker;

There is missing POD in this package

::: Koha/OAI/Harvester/Worker/Download/Stream.pm
@@ +1,1 @@
> +package Koha::OAI::Harvester::Worker::Download::Stream;

There is missing POD in this package

::: Koha/OAI/Harvester/Worker/Import.pm
@@ +1,1 @@
> +package Koha::OAI::Harvester::Worker::Import;

There is missing POD in this package

::: installer/data/mysql/atomicupdate/bug_10662.sql
@@ +10,5 @@
> +  PRIMARY KEY (`import_oai_biblio_id`),
> +  UNIQUE KEY `oai_record` (`oai_identifier`,`oai_repository`) USING BTREE,
> +  KEY `FK_import_oai_biblio_1` (`biblionumber`),
> +  CONSTRAINT `FK_import_oai_biblio_1` FOREIGN KEY (`biblionumber`) REFERENCES `biblio` (`biblionumber`) ON DELETE CASCADE ON UPDATE NO ACTION
> +) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CHARSET should be utf8mb4 and collation utf8mb4_unicode_ci - for all tables

::: koha-tmpl/intranet-tmpl/prog/en/modules/tools/oai-pmh-harvester/dashboard.tt
@@ +3,5 @@
> +[% INCLUDE 'doc-head-close.inc' %]
> +[% INCLUDE 'datatables.inc' %]
> +[% dashboard_page = '/cgi-bin/koha/tools/oai-pmh-harvester/dashboard.pl' %]
> +[% request_page = '/cgi-bin/koha/tools/oai-pmh-harvester/request.pl' %]
> +<script type="text/javascript">

Javascript should be at end of page - see bug 17858

::: koha-tmpl/intranet-tmpl/prog/en/modules/tools/oai-pmh-harvester/request.tt
@@ +1,5 @@
> +[% INCLUDE 'doc-head-open.inc' %]
> +<title>Koha &rsaquo; Tools &rsaquo; OAI-PMH harvester &rsaquo; Request</title>
> +[% INCLUDE 'doc-head-close.inc' %]
> +[% INCLUDE 'calendar.inc' %]
> +<script type="text/javascript" src="[% interface %]/lib/jquery/plugins/jquery-ui-timepicker-addon.min.js"></script>

Use assets for external js and css - see https://wiki.koha-community.org/wiki/Coding_Guidelines#HTML8:_use_Asset_TT_plugin_for_linking_javascript_and_css_files

@@ +3,5 @@
> +[% INCLUDE 'doc-head-close.inc' %]
> +[% INCLUDE 'calendar.inc' %]
> +<script type="text/javascript" src="[% interface %]/lib/jquery/plugins/jquery-ui-timepicker-addon.min.js"></script>
> +[% INCLUDE 'timepicker.inc' %]
> +<script type="text/javascript">

Javascript should be at end of file

::: rewrite-config.PL
@@ +152,3 @@
>    "__MEMCACHED_NAMESPACE__" => "",
>    "__FONT_DIR__" => "/usr/share/fonts/truetype/ttf-dejavu",
>    "__TEMPLATE_CACHE_DIR__" => "/tmp/koha"

there is a missing comma at the end of the line

::: svc/oai-pmh-harvester/history
@@ +87,5 @@
> +    }
> +}
> +
> +my $page = ( $start / $length ) + 1;
> +my $schema = Koha::Database->new()->schema();

Please use Koha::Objects

::: tools/oai-pmh-harvester/record.pl
@@ +36,5 @@
> +my $import_oai_id = $input->param('import_oai_id');
> +if ($import_oai_id){
> +    my $schema = Koha::Database->new()->schema();
> +    if ($schema){
> +        my $rs = $schema->resultset("OaiHarvesterHistory");

Please use Koha::Object[s]
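
To make the "use Koha::Object[s]" suggestions above concrete, this is the general shape such wrapper classes take in Koha. It's only a sketch: the exact package and resultset names are whatever the follow-up patch settles on, and in a real patch the two packages would live in separate files with their own POD and tests.

package Koha::OAI::Harvester::ImportQueue;
# Sketch: Koha::Object-based wrapper for one oai_harvester_import_queue row.
use Modern::Perl;
use base qw(Koha::Object);
sub _type { return 'OaiHarvesterImportQueue'; }

package Koha::OAI::Harvester::ImportQueues;
# Sketch: the matching Koha::Objects-based set class.
use Modern::Perl;
use base qw(Koha::Objects);
sub _type        { return 'OaiHarvesterImportQueue'; }
sub object_class { return 'Koha::OAI::Harvester::ImportQueue'; }

1;

# Callers could then replace raw resultset searches with something like:
#   my $queue = Koha::OAI::Harvester::ImportQueues->search({ uuid => $uuid });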
Comment 202 Josef Moravec 2018-09-18 13:03:38 UTC
Created attachment 79045 [details]
Datatables

Also, the datatables are not working perfectly; you should probably use KohaTable instead of pure DataTable
Comment 203 David Cook 2018-11-01 05:12:48 UTC
Created attachment 81783 [details] [review]
Bug 10662: Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.

Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>
Comment 204 David Cook 2018-11-01 05:13:00 UTC
Created attachment 81784 [details] [review]
Bug 10662: (QA follow-up) addressing qa test tool output

I've done the following in this follow-up:

- Added missing POD
- Replaced DBIC use with Koha::Object[s]
- Replaced DataTables with KohaTable
- Fixed database charset and collation
- Moved Javascript to bottom of templates
- Fixed a syntax error in rewrite-config.PL
Comment 205 David Cook 2018-11-01 05:14:47 UTC
Do I change it to Signed Off or back to Needs Signoff?
Comment 206 David Cook 2018-11-01 05:16:18 UTC
Btw, there will be some QA test tool failures, but they should all be false positives stemming from this issue: https://gitlab.com/koha-community/qa-test-tools/issues/4
Comment 207 Josef Moravec 2018-11-01 08:07:57 UTC
What is the intention of the commented-out code in koha-tmpl/intranet-tmpl/prog/en/modules/tools/oai-pmh-harvester/dashboard.tt, in the table with saved_requests?
Comment 208 Josef Moravec 2018-11-01 08:11:05 UTC
In the tables for history and requests you have the column import_matcher_code. I think that is bad; you should always use matcher_id and make it a foreign key to marc_matchers.
Comment 209 Josef Moravec 2018-11-01 08:25:06 UTC
Created attachment 81786 [details]
Encoding problem + datatable

In submitted requests table I encountered a encoding problem:

"Knihovna Ãstí" should be "Knihovna Ústí".

Also, the heading of datatable is usually formatted in one line, see patron circulation history for example.
Comment 210 Josef Moravec 2018-11-01 08:29:02 UTC
(In reply to Josef Moravec from comment #209)
> Created attachment 81786 [details]
> Encoding problem + datatable
> 
> In submitted requests table I encountered a encoding problem:
> 
> "Knihovna Ãstí" should be "Knihovna Ústí".
> 
> Also, the heading of datatable is usually formatted in one line, see patron
> circulation history for example.

I am adding Owen because of the datatable styling.

Owen, what would you suggest?
Comment 211 Josef Moravec 2018-11-01 08:43:01 UTC
Comment on attachment 81783 [details] [review]
Bug 10662: Build OAI-PMH Harvesting Client

Review of attachment 81783 [details] [review]:
-----------------------------------------------------------------

::: Koha/OAI/Harvester/Worker/Download/Stream.pm
@@ +111,5 @@
> +    }
> +
> +    #NOTE: Prepare database statement handle
> +    my $dbh = C4::Context->dbh;
> +    my $sql = "insert into oai_harvester_import_queue (uuid,result) VALUES (?,?)";

You should not use raw SQL in Koha modules; use Koha::OAI::Harvester::ImportQueue->new in place of executing this statement
Comment 212 Josef Moravec 2018-11-01 08:59:35 UTC
 Koha/Schema/Result/*.pm changes should be in their own patch, as they are generated. Could you split them out, please?
Comment 213 Josef Moravec 2018-11-01 09:04:06 UTC
There is still COLLATE utf8_unicode_ci in the column definitions in the atomic update and kohastructure.sql
Comment 214 Josef Moravec 2018-11-01 09:15:09 UTC
Created attachment 81789 [details] [review]
Bug 10662: (QA follow-up) Fix plural in pod and use statements
Comment 215 Josef Moravec 2018-11-01 09:15:16 UTC
Created attachment 81790 [details] [review]
Bug 10662: (QA follow-up) Enhance marc matchers description
Comment 216 Josef Moravec 2018-11-01 12:35:05 UTC
The "Refresh import history" button does not work; there is an error in the JS console:

TypeError: history_table.ajax is undefined
Comment 217 David Cook 2018-11-02 00:01:26 UTC
(In reply to Josef Moravec from comment #207)
> What is the intention on commented out code in
> koha-tmpl/intranet-tmpl/prog/en/modules/tools/oai-pmh-harvester/dashboard.tt
> in table with saved_requests?

Originally, I displayed a lot more information in this table, but I commented it out, as I thought it would be too much. I kept it in comments in case people thought during QA that there should be more information; this way I wouldn't have to rewrite it all.
Comment 218 David Cook 2018-11-02 00:03:13 UTC
(In reply to Josef Moravec from comment #208)
> In tables for history and requests you have column import_matcher_code, that
> is bad I think, you should always use matcher_id and make it foreign key to
> marc_matchers.

I can see why you'd say that, but import_matcher_code is an optional field, so it can't be a foreign key. 99.99% of requests should never ever use that field, but it was added for libraries that might want to initially match against an existing set of records so they don't get duplicates when first starting harvesting.
Comment 219 David Cook 2018-11-02 00:04:04 UTC
(In reply to Josef Moravec from comment #209)
> Created attachment 81786 [details]
> Encoding problem + datatable
> 
> In submitted requests table I encountered a encoding problem:
> 
> "Knihovna Ãstí" should be "Knihovna Ústí".
> 
> Also, the heading of datatable is usually formatted in one line, see patron
> circulation history for example.

That's strange. I've never had this problem before. I will try to reproduce.
Comment 220 David Cook 2018-11-02 00:04:51 UTC
(In reply to Josef Moravec from comment #210)
> (In reply to Josef Moravec from comment #209)
> > Created attachment 81786 [details]
> > Encoding problem + datatable
> > 
> > In submitted requests table I encountered a encoding problem:
> > 
> > "Knihovna Ãstí" should be "Knihovna Ústí".
> > 
> > Also, the heading of datatable is usually formatted in one line, see patron
> > circulation history for example.
> 
> I am adding Owen because of the datatable styling.
> 
> Owen, what would you suggest?

I just added KohaTable as requested. If someone can provide additional details regarding styling, I'd be happy to change it.
Comment 221 David Cook 2018-11-02 00:10:01 UTC
(In reply to Josef Moravec from comment #211)
> Comment on attachment 81783 [details] [review] [review]
> Bug 10662: Build OAI-PMH Harvesting Client
> 
> Review of attachment 81783 [details] [review] [review]:
> -----------------------------------------------------------------
> 
> ::: Koha/OAI/Harvester/Worker/Download/Stream.pm
> @@ +111,5 @@
> > +    }
> > +
> > +    #NOTE: Prepare database statement handle
> > +    my $dbh = C4::Context->dbh;
> > +    my $sql = "insert into oai_harvester_import_queue (uuid,result) VALUES (?,?)";
> 
> You should not use raw SQL in Koha modules; use
> Koha::OAI::Harvester::ImportQueue->new in place of executing this
> statement

Ordinarily, I would agree. However, Koha::Object[s] use DBIx::Class, which is very slow. If you read more of Koha::OAI::Harvester::Worker::Download::Stream, you'll see that I re-use the same prepared statement handle (something we unfortunately have never done in Koha), which makes the inserts much, much faster. 

Here I'm aiming for high performance. DBIx::Class was built for convenience rather than performance, so I've opted to use DBI and SQL, as it's just so much faster. 

As a side note, Tomas and I chatted a bit at Kohacon about making the harvester's workers use Koha's plugin system, so perhaps the built-in worker for Koha could use DBIC and I could provide a higher-performance plugin outside of Koha. Although it seems a shame that we'd prefer low performance over high performance :/.
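
For context, the pattern being described (prepare the statement once, then execute it for each record inside the worker loop) looks roughly like this. It's a sketch of the technique rather than the worker's exact code, and the sample work items are made up:

# Sketch: re-use one prepared statement handle for many inserts.
use Modern::Perl;
use C4::Context;

my $dbh = C4::Context->dbh;
my $sth = $dbh->prepare(
    'INSERT INTO oai_harvester_import_queue (uuid, result) VALUES (?, ?)'
);

my @downloaded = ( [ 'uuid-1', '<record/>' ], [ 'uuid-2', '<record/>' ] );   # stand-in work items
for my $item (@downloaded) {
    # Only the bind values change on each pass; the statement itself is parsed once.
    $sth->execute(@$item);
}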
Comment 222 David Cook 2018-11-02 00:10:32 UTC
(In reply to Josef Moravec from comment #212)
>  Koha/Schema/Result/*.pm changes should be in its own patch, as they are
> generated. Could you split it please?

Good point. I have done that in the past, and I can certainly do that here.
Comment 223 David Cook 2018-11-02 00:11:39 UTC
(In reply to Josef Moravec from comment #213)
> There are still COLLATE utf8_unicode_ci in column definitions in atomic
> update and kohastrusture.sql

Oh I see at the column level. I did the table level but must have missed those. I'll fix that.
Comment 224 David Cook 2018-11-02 00:13:40 UTC
Comment on attachment 81789 [details] [review]
Bug 10662: (QA follow-up) Fix plural in pod and use statements

Review of attachment 81789 [details] [review]:
-----------------------------------------------------------------

::: Koha/OAI/Harvester/Import/Record.pm
@@ +29,4 @@
>  
>  use Koha::OAI::Harvester::Import::MARCXML;
>  use Koha::OAI::Harvester::Biblios;
> +use Koha::OAI::Harvester::Histories;

This actually should be Koha::OAI::Harvester::History and not Koha::OAI::Harvester::Histories. This change breaks the harvester.
Comment 225 David Cook 2018-11-02 00:14:56 UTC
(In reply to Josef Moravec from comment #216)
> The "Refresh import history" button does not work; there is an error in the
> JS console:
> 
> TypeError: history_table.ajax is undefined

That's very strange. I tested this before uploading and it was fine. I will add your follow-ups and try to reproduce.
Comment 226 David Cook 2018-11-02 00:23:22 UTC
Interesting...

I can't even submit a task with a name of "Knihovna Ústí". The harvester is failing to add it. I'll look into it.

And I am getting "Refresh import history" errors now too. Maybe I didn't double-check after converting to KohaTable. I'm getting a different error though:

Uncaught TypeError: Cannot read property 'reload' of undefined
    at HTMLButtonElement.<anonymous> (dashboard.pl?op=send&id=1:1082)
    at HTMLButtonElement.dispatch (jquery-2.2.3.min_18.0600047.js:3)
    at HTMLButtonElement.r.handle (jquery-2.2.3.min_18.0600047.js:3)

Looking into both of these now...
Comment 227 David Cook 2018-11-02 00:48:54 UTC
(In reply to David Cook from comment #226)
> Interesting...
> 
> I can't even submit a task with a name of "Knihovna Ústí". The harvester is
> failing to add it. I'll look into it.
> 
> And I am getting "Refresh import history" errors now too. Maybe I didn't
> double-check after converting to KohaTable. I'm getting a different error
> though:
> 
> Uncaught TypeError: Cannot read property 'reload' of undefined
>     at HTMLButtonElement.<anonymous> (dashboard.pl?op=send&id=1:1082)
>     at HTMLButtonElement.dispatch (jquery-2.2.3.min_18.0600047.js:3)
>     at HTMLButtonElement.r.handle (jquery-2.2.3.min_18.0600047.js:3)
> 
> Looking into both of these now...

I think KohaTable is breaking some functionality from DataTables, because I should be able to do this on a KohaTable: https://datatables.net/reference/api/ajax.reload()
Comment 228 David Cook 2018-11-02 00:58:58 UTC
(In reply to David Cook from comment #227)
> I think KohaTable is breaking some functionality from DataTables, because I
> should be able to do this on a KohaTable:
> https://datatables.net/reference/api/ajax.reload()

I think KohaTable is using the legacy API and DataTables allowed me to use the current API.

Uploading screenshots to show the different objects returned by DataTable and KohaTable...
Comment 229 David Cook 2018-11-02 00:59:43 UTC
Created attachment 81855 [details]
Object returned by KohaTable
Comment 230 David Cook 2018-11-02 01:00:07 UTC
Created attachment 81856 [details]
Object returned by DataTable
Comment 231 David Cook 2018-11-02 01:08:01 UTC
(In reply to David Cook from comment #228)
> (In reply to David Cook from comment #227)
> > I think KohaTable is breaking some functionality from DataTables, because I
> > should be able to do this on a KohaTable:
> > https://datatables.net/reference/api/ajax.reload()
> 
> I think KohaTable is using the legacy API and DataTables allowed me to use
> the current API.
> 
> Uploading screenshots to show the different objects returned by DataTable
> and KohaTable...

OK I figured it out. 

When using "$( selector ).DataTable();", the return value is the API object.

When using KohaTable, the return value is the jQuery object.

So I just needed to chain .dataTable().api() onto the jQuery object to access the DataTables API.
Comment 232 David Cook 2018-11-02 01:46:42 UTC
(In reply to David Cook from comment #226)
> Interesting...
> 
> I can't even submit a task with a name of "Knihovna Ústí". The harvester is
> failing to add it. I'll look into it.

Ah, I see the problem I was having. Just me being silly. 

Now I see the real problem, and I have some ideas for this one...
Comment 233 David Cook 2018-11-02 04:16:39 UTC
(In reply to Josef Moravec from comment #209)
> Created attachment 81786 [details]
> Encoding problem + datatable
> 
> In submitted requests table I encountered a encoding problem:
> 
> "Knihovna Ãstí" should be "Knihovna Ústí".
> 
> Also, the heading of datatable is usually formatted in one line, see patron
> circulation history for example.

I am stumped by this one. 

I've added an Encode::decode("UTF-8", $json_message) call to the client used by the web page, and that gets the characters to render as Knihovna Ústí on the web page... but when I look at the actual hex in the variable... it's not valid UTF-8. It's Windows-1252/Latin-1. 

The hex is 4b6e69686f766e6120da7374ed, and the internal representation in Perl is PV = 0xa6bce80 "Knihovna \303\232st\303\255"\0 [UTF8 "Knihovna \x{da}st\x{ed}"].

So DA and ED are Unicode code points that match Ú and í (https://www.utf8-chartable.de/unicode-utf8-table.pl).

However, DA and ED are also the Latin-1 hex for Ú and í (https://en.wikipedia.org/wiki/Windows-1252)(https://en.wikipedia.org/wiki/ISO/IEC_8859-1).

I think maybe I need to try with some Chinese characters that don't exist in Latin-1...
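
(For anyone following along, this is easy to poke at with Devel::Peek. A tiny sketch, with the byte string hard-coded for illustration:)

# Sketch: inspect whether a scalar holds raw Latin-1 bytes or decoded characters.
use Modern::Perl;
use Devel::Peek qw(Dump);
use Encode qw(decode);

my $bytes = "Knihovna \xDAst\xED";              # Latin-1 bytes for "Knihovna Ústí"
Dump($bytes);                                    # PV without the UTF8 flag: raw octets
my $chars = decode( 'ISO-8859-1', $bytes );
Dump($chars);                                    # now flagged UTF8, code points U+00DA and U+00ED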
Comment 234 David Cook 2018-11-02 04:19:22 UTC
Yep... put in some Chinese characters and now that decode isn't working either. 

Let's try this again...
Comment 235 David Cook 2018-11-02 05:21:26 UTC
(In reply to Josef Moravec from comment #209)
> Created attachment 81786 [details]
> Encoding problem + datatable
> 
> In submitted requests table I encountered a encoding problem:
> 
> "Knihovna Ãstí" should be "Knihovna Ústí".
> 
> Also, the heading of datatable is usually formatted in one line, see patron
> circulation history for example.

Ok I've fixed that. 

I was reversing the Encode module's encode/decode functions and not reading the output of Devel::Peek correctly.

Now I can see both Knihovna Ústí and 我不好 coming back from the harvester as well.
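
For the record, the fix boils down to keeping the two directions symmetric: decide whether the JSON layer or the Encode layer owns the UTF-8 handling, and do the same thing on both ends. A minimal sketch of that idea (STDIN/STDOUT stand in for the unix socket here, and the message shape is illustrative, not the patch's exact code):

# Sketch: let Encode own the byte handling and keep JSON in character mode.
use Modern::Perl;
use utf8;                         # this source file contains literal non-ASCII characters
use Encode qw(encode decode);
use JSON;

my $json = JSON->new->utf8(0);    # encode()/decode() work on character strings

# Sending: characters -> UTF-8 bytes just before the write.
my $chars_out = $json->encode( { name => 'Knihovna Ústí' } );
print encode( 'UTF-8', $chars_out ), "\n";

# Receiving: UTF-8 bytes -> characters before JSON parsing (and before handing to TT).
if ( defined( my $bytes_in = <STDIN> ) ) {
    my $message = $json->decode( decode( 'UTF-8', $bytes_in ) );
    print encode( 'UTF-8', "Got: $message->{name}\n" );
}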
Comment 236 David Cook 2018-11-02 05:45:59 UTC
(In reply to David Cook from comment #224)
> Comment on attachment 81789 [details] [review] [review]
> Bug 10662: (QA follow-up) Fix plural in pod and use statements
> 
> Review of attachment 81789 [details] [review] [review]:
> -----------------------------------------------------------------
> 
> ::: Koha/OAI/Harvester/Import/Record.pm
> @@ +29,4 @@
> >  
> >  use Koha::OAI::Harvester::Import::MARCXML;
> >  use Koha::OAI::Harvester::Biblios;
> > +use Koha::OAI::Harvester::Histories;
> 
> This actually should be Koha::OAI::Harvester::History and not
> Koha::OAI::Harvester::Histories. This change breaks the harvester.

Actually, since Histories uses History, it's probably fine... but I don't actually use Histories in Record.pm.
Comment 237 David Cook 2018-11-02 07:08:39 UTC
Created attachment 81861 [details] [review]
Bug 10662: Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.

Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>
Comment 238 David Cook 2018-11-02 07:08:49 UTC
Created attachment 81862 [details] [review]
Bug 10662: (QA follow-up) provide DBIC schema files

DBIC schema files
Comment 239 David Cook 2018-11-02 07:08:57 UTC
Created attachment 81863 [details] [review]
Bug 10662: (QA follow-up) Fix plural in pod and use statements
Comment 240 David Cook 2018-11-02 07:09:07 UTC
Created attachment 81864 [details] [review]
Bug 10662: (QA follow-up) Enhance marc matchers description
Comment 241 David Cook 2018-11-02 07:12:22 UTC
OK. I have fixed:

- the column collation in installer/data/mysql/atomicupdate/bug_10662.sql and installer/data/mysql/kohastructure.sql
- the "Refresh import history" button/AJAX KohaTable/DataTable issue
- the UTF-8 encoding problem in client-server communications
- the Koha/Schema/Result/*.pm schema files (now separated out into their own patch)

I think the only things I haven't changed are:

- SQL statement handle I re-use
- DataTable styling
- matcher_id foreign key issue
Comment 242 Owen Leonard 2018-11-02 14:16:30 UTC
(In reply to David Cook from comment #241)
> - DataTable styling

I'm preparing a template follow-up to this, but the only thing that is a blocker interface-wise is the DataTables styling. The problem is this is missing from the template:

[% Asset.css("css/datatables.css") | $raw %]
Comment 243 Owen Leonard 2018-11-02 15:04:46 UTC Comment hidden (obsolete)
Comment 244 Andreas Hedström Mace 2018-11-06 11:29:16 UTC
Patch doesn't apply for me:

Applying: Bug 10662: Build OAI-PMH Harvesting Client
Using index info to reconstruct a base tree...
M	C4/Installer/PerlDependencies.pm
M	admin/columns_settings.yml
M	debian/scripts/koha-create
M	installer/data/mysql/kohastructure.sql
M	koha-tmpl/intranet-tmpl/prog/en/includes/tools-menu.inc
M	koha-tmpl/intranet-tmpl/prog/en/modules/tools/tools-home.tt
Falling back to patching base and 3-way merge...
Auto-merging koha-tmpl/intranet-tmpl/prog/en/modules/tools/tools-home.tt
Auto-merging koha-tmpl/intranet-tmpl/prog/en/includes/tools-menu.inc
Auto-merging installer/data/mysql/kohastructure.sql
CONFLICT (content): Merge conflict in installer/data/mysql/kohastructure.sql
Auto-merging debian/scripts/koha-create
Auto-merging admin/columns_settings.yml
CONFLICT (content): Merge conflict in admin/columns_settings.yml
Auto-merging C4/Installer/PerlDependencies.pm
Failed to merge in the changes.
Patch failed at 0001 Bug 10662: Build OAI-PMH Harvesting Client
Comment 245 Andreas Hedström Mace 2018-11-06 11:40:50 UTC
Never mind my previous comment; I didn't have the latest master. Testing now.
Comment 246 Andreas Hedström Mace 2018-11-06 11:59:06 UTC
Updatedatabase.pl fails for me.

DEV atomic update: bug_10662.sql
C4::Installer::load_sql returned the following errors while attempting to load /home/vagrant/kohaclone/installer/data/mysql/atomicupdate/bug_10662.sql:

(No actual error shown)
Comment 247 David Cook 2018-11-12 05:27:45 UTC
The patch applies fine to master for me, and I'm having no trouble loading the database.

Andreas, try running "reset_all" in your kohadevbox. I'm guessing you're getting a conflict with an existing database.
Comment 248 David Cook 2018-11-12 05:46:21 UTC
Thanks to Owen for all that work! It looks so much better now!
Comment 249 David Cook 2018-11-12 05:46:58 UTC
Created attachment 82219 [details] [review]
Bug 10662: (follow-up) Template corrections and improvements

This patch makes a number of corrections and improvements to the OAI
harvester templates:

 - Add missing DataTables CSS include
 - Replace YUI grid with Bootstrap
 - Correct style of inline dialogs (.dialog.alert, .dialog.message)
 - Correct class of form hint and error messages
 - Format dates in saved and submitted requests tables
   - Add title-string sorting to DataTables configuration
 - Add delete confirmation to saved and submitted tables
 - Disable sorting on action columns
 - Style action links inside tables as buttons
 - Removed commented markup
 - Add missing JavaScript include (tools-menu.js) to highlight active
   section in sidebar menu
 - Add CodeMirror styling to record view page (CodeMirror XML mode file
   is added to enable this)
 - Remove invalid <script> "type" attribute

To test, apply the patch and clear your cache if necessary.

 - Go to Tools -> OAI-PMH harvester
   - "Saved," "Submitted," and "Import history" table should look
      correct and work correctly: Sorting, column visibility,
      pagination, etc.
   - In the "Saved" and "Submitted" tables, dates should be formatted
     according to the dateformat preference and sorting of these dates
     should work correctly.
   - In the "Import history" table the "View record" and "View in
     catalog" links should be styled as Bootstrap buttons.
     - Click the "View record" button.
       - The view of the downloaded record should have XML syntax
         highlighting.
   - When you perform actions like submitting requests, starting
     requests, etc, corresponding dialogs should be styled correctly.
     Informational/successful information should be "message" style
     dialogs. Error dialogs should be "alert" style.
   - Click "New request"
     - Form hints in the new request form should be styled correctly.
     - Submit the form without filling in any fields. Field-specific
       error messages should be styled italic red.

Signed-off-by: David Cook <dcook@prosentient.com.au>
Comment 250 David Cook 2018-11-20 05:56:40 UTC
Ready and waiting for testers/QA. 

Not sure where we're at in terms of sign-offs at the moment.
Comment 251 David Cook 2019-01-23 06:31:11 UTC
Created attachment 84318 [details] [review]
Bug 10662: Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.

Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>
Comment 252 David Cook 2019-01-23 06:31:21 UTC
Created attachment 84319 [details] [review]
Bug 10662: (QA follow-up) provide DBIC schema files

DBIC schema files
Comment 253 David Cook 2019-01-23 06:31:29 UTC
Created attachment 84320 [details] [review]
Bug 10662: (QA follow-up) Fix plural in pod and use statements
Comment 254 David Cook 2019-01-23 06:31:36 UTC
Created attachment 84321 [details] [review]
Bug 10662: (QA follow-up) Enhance marc matchers description
Comment 255 David Cook 2019-01-23 06:31:44 UTC
Created attachment 84322 [details] [review]
Bug 10662: (follow-up) Template corrections and improvements

This patch makes a number of corrections and improvements to the OAI
harvester templates:

 - Add missing DataTables CSS include
 - Replace YUI grid with Bootstrap
 - Correct style of inline dialogs (.dialog.alert, .dialog.message)
 - Correct class of form hint and error messages
 - Format dates in saved and submitted requests tables
   - Add title-string sorting to DataTables configuration
 - Add delete confirmation to saved and submitted tables
 - Disable sorting on action columns
 - Style action links inside tables as buttons
 - Removed commented markup
 - Add missing JavaScript include (tools-menu.js) to highlight active
   section in sidebar menu
 - Add CodeMirror styling to record view page (CodeMirror XML mode file
   is added to enable this)
 - Remove invalid <script> "type" attribute

To test, apply the patch and clear your cache if necessary.

 - Go to Tools -> OAI-PMH harvester
   - "Saved," "Submitted," and "Import history" table should look
      correct and work correctly: Sorting, column visibility,
      pagination, etc.
   - In the "Saved" and "Submitted" tables, dates should be formatted
     according to the dateformat preference and sorting of these dates
     should work correctly.
   - In the "Import history" table the "View record" and "View in
     catalog" links should be styled as Bootstrap buttons.
     - Click the "View record" button.
       - The view of the downloaded record should have XML syntax
         highlighting.
   - When you perform actions like submitting requests, starting
     requests, etc, corresponding dialogs should be styled correctly.
     Informational/successful information should be "message" style
     dialogs. Error dialogs should be "alert" style.
   - Click "New request"
     - Form hints in the new request form should be styled correctly.
     - Submit the form without filling in any fields. Field-specific
       error messages should be styled italic red.

Signed-off-by: David Cook <dcook@prosentient.com.au>
Comment 256 Andreas Hedström Mace 2019-02-15 10:53:49 UTC
Created attachment 85152 [details] [review]
Bug 10662: (follow-up) Template corrections and improvements

This patch makes a number of corrections and improvements to the OAI
harvester templates:

 - Add missing DataTables CSS include
 - Replace YUI grid with Bootstrap
 - Correct style of inline dialogs (.dialog.alert, .dialog.message)
 - Correct class of form hint and error messages
 - Format dates in saved and submitted requests tables
   - Add title-string sorting to DataTables configuration
 - Add delete confirmation to saved and submitted tables
 - Disable sorting on action columns
 - Style action links inside tables as buttons
 - Removed commented markup
 - Add missing JavaScript include (tools-menu.js) to highlight active
   section in sidebar menu
 - Add CodeMirror styling to record view page (CodeMirror XML mode file
   is added to enable this)
 - Remove invalid <script> "type" attribute

To test, apply the patch and clear your cache if necessary.

 - Go to Tools -> OAI-PMH harvester
   - "Saved," "Submitted," and "Import history" table should look
      correct and work correctly: Sorting, column visibility,
      pagination, etc.
   - In the "Saved" and "Submitted" tables, dates should be formatted
     according to the dateformat preference and sorting of these dates
     should work correctly.
   - In the "Import history" table the "View record" and "View in
     catalog" links should be styled as Bootstrap buttons.
     - Click the "View record" button.
       - The view of the downloaded record should have XML syntax
         highlighting.
   - When you perform actions like submitting requests, starting
     requests, etc, corresponding dialogs should be styled correctly.
     Informational/successful information should be "message" style
     dialogs. Error dialogs should be "alert" style.
   - Click "New request"
     - Form hints in the new request form should be styled correctly.
     - Submit the form without filling in any fields. Field-specific
       error messages should be styled italic red.

Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>
Comment 257 Andreas Hedström Mace 2019-02-15 10:59:23 UTC
Tested again, and the functionality works perfectly. As far as I can tell all the stylings are now correct.
Comment 258 Josef Moravec 2019-02-17 20:48:30 UTC
Created attachment 85224 [details] [review]
Bug 10662: Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.

Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 259 Josef Moravec 2019-02-17 20:48:40 UTC
Created attachment 85225 [details] [review]
Bug 10662: (QA follow-up) provide DBIC schema files

DBIC schema files

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 260 Josef Moravec 2019-02-17 20:48:52 UTC
Created attachment 85226 [details] [review]
Bug 10662: (QA follow-up) Fix plural in pod and use statements

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 261 Josef Moravec 2019-02-17 20:49:07 UTC
Created attachment 85227 [details] [review]
Bug 10662: (QA follow-up) Enhance marc matchers description

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 262 Josef Moravec 2019-02-17 20:49:22 UTC
Created attachment 85228 [details] [review]
Bug 10662: (follow-up) Template corrections and improvements

This patch makes a number of corrections and improvements to the OAI
harvester templates:

 - Add missing DataTables CSS include
 - Replace YUI grid with Bootstrap
 - Correct style of inline dialogs (.dialog.alert, .dialog.message)
 - Correct class of form hint and error messages
 - Format dates in saved and submitted requests tables
   - Add title-string sorting to DataTables configuration
 - Add delete confirmation to saved and submitted tables
 - Disable sorting on action columns
 - Style action links inside tables as buttons
 - Removed commented markup
 - Add missing JavaScript include (tools-menu.js) to highlight active
   section in sidebar menu
 - Add CodeMirror styling to record view page (CodeMirror XML mode file
   is added to enable this)
 - Remove invalid <script> "type" attribute

To test, apply the patch and clear your cache if necessary.

 - Go to Tools -> OAI-PMH harvester
   - "Saved," "Submitted," and "Import history" table should look
      correct and work correctly: Sorting, column visibility,
      pagination, etc.
   - In the "Saved" and "Submitted" tables, dates should be formatted
     according to the dateformat preference and sorting of these dates
     should work correctly.
   - In the "Import history" table the "View record" and "View in
     catalog" links should be styled as Bootstrap buttons.
     - Click the "View record" button.
       - The view of the downloaded record should have XML syntax
         highlighting.
   - When you perform actions like submitting requests, starting
     requests, etc, corresponding dialogs should be styled correctly.
     Informational/successful information should be "message" style
     dialogs. Error dialogs should be "alert" style.
   - Click "New request"
     - Form hints in the new request form should be styled correctly.
     - Submit the form without filling in any fields. Field-specific
       error messages should be styled italic red.

Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 263 Josef Moravec 2019-02-17 20:49:33 UTC
Created attachment 85229 [details] [review]
Bug 10662: (QA follow-up) Make atomic update consistent with kohastructure. Remove utf8 charset

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 264 Josef Moravec 2019-02-17 20:52:09 UTC
I tested again. I have to say it is looking good. So there is only one last thing left, but it's a big one: the tests...
Comment 265 David Cook 2019-03-25 04:07:49 UTC
(In reply to Josef Moravec from comment #264)
> I tested again. I have to say it is looking good. So there  is the only last
> thing, but big one: the tests...

Awesome! I am very busy at the moment, but I'll get to the tests when I have some time.
Comment 266 Josef Moravec 2019-04-03 05:50:20 UTC
(In reply to David Cook from comment #265)
> (In reply to Josef Moravec from comment #264)
> > I tested again. I have to say it is looking good. So there  is the only last
> > thing, but big one: the tests...
> 
> Awesome! I am very busy at the moment, but I'll get to the tests when I have
> some time.

David, do you think it is possible for you to write test as soon as it could get into 19.05 release? It would be great to have it in 19.05...
Comment 267 Magnus Enger 2019-04-03 06:18:39 UTC
(In reply to Josef Moravec from comment #266)
> David, do you think it is possible for you to write test as soon as it could
> get into 19.05 release? It would be great to have it in 19.05...

It would be awesome, indeed!
Comment 268 Andreas Hedström Mace 2019-04-03 09:08:09 UTC
Very very awesome, yes! =)
Comment 269 David Cook 2019-04-03 22:28:19 UTC
(In reply to Josef Moravec from comment #266)
> David, do you think it is possible for you to write test as soon as it could
> get into 19.05 release? It would be great to have it in 19.05...

Do we know the timelines for the 19.05 release yet? I haven't seen anything online.

I'm skeptical about having the time to write them before the 19.05 release. A heavy workload at the office and pressing matters at home mean time is quite short at the moment. 

That said, can you tell me more about the test coverage we're looking for? 

I could go to very time-consuming lengths testing the client-server relationship and POE event handling for the OAI-PMH harvester server. Or is it more important to first focus on functions that directly affect Koha? 

Much of the code is quite independent from Koha, so less vulnerable to breaking due to changes in the rest of Koha. I'd be inclined to focus most on adding tests for modules that reference the database schema or core APIs (like C4::Biblio). Does that make sense?

I know we'd like 100% test coverage, but I'm curious what the minimum requirements are for the tests.
Comment 270 Magnus Enger 2019-04-04 04:12:09 UTC
See the section on “Unit tests” in the Coding guidelines:
https://wiki.koha-community.org/wiki/Coding_Guidelines#PERL17:_Unit_tests_are_required_.28updated_Apr_26.2C_2017.29
Comment 271 Josef Moravec 2019-04-04 06:16:20 UTC
(In reply to David Cook from comment #269)
> (In reply to Josef Moravec from comment #266)
> > David, do you think it is possible for you to write test as soon as it could
> > get into 19.05 release? It would be great to have it in 19.05...
> 
> Do we know the timelines for the 19.05 release yet? I haven't seen anything
> online.


Not yet, but Nick announced he is going to work out and publish the timeline soon.
Comment 272 Mirko Tietgen 2019-04-30 15:43:19 UTC
I applied the patches, created a request, tested it OK, submitted it, and then … nothing happened: "OAI-PMH harvester offline".
Is there something else I have to do?

I can't choose "Record type: Authorities", is that intentional?

I manually installed libpoe-perl and libpoe-component-jobqueue-perl. Are there other dependencies I missed?
Comment 273 David Cook 2019-05-02 00:12:06 UTC
(In reply to Mirko Tietgen from comment #272)
> I applied the patches, created a request, tested it ok, submitted and then …
> nothing happens. "OAI-PMH harvester offline"
> Is there something else I have to do?
> 

Thanks for taking a look at this, Mirko!

You might want to take a look at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662#c198. 

I'm not 100% sure what you've done so far, but basically you need to start up the OAI-PMH harvester daemon, which runs the code that does the real heavy lifting. 

> I can't choose "Record type: Authorities", is that intentional?
> 

Yeah, at this point I've only built in support for bibliographic records. In theory, authorities should be easy enough, but I can see people trying to download both bibliographic and authority records from a different system and expecting them to stay linked, which they won't, so that's a problem for another day...

> I manually installed libpoe-perl and libpoe-component-jobqueue-perl. Are
> there other dependencies I mossed?

It sounds like you've got the code dependencies but you'll need to set up the harvester daemon and run it as per https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662#c198.
Comment 274 David Cook 2019-05-02 00:12:56 UTC
And thanks again, Mirko!

I really wish that I had time to work on the tests right now, but I don't. Already burning the candle at both ends...
Comment 275 Mirko Tietgen 2019-05-02 09:36:01 UTC
(In reply to David Cook from comment #273)

> It sounds like you've got the code dependencies but you'll need to set up
> the harvester daemon and run it as per
> https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662#c198.

Thanks, that helped. Some comments:
- The Koha user can't create a directory in "/var/spool/koha/$instance/". We have the DB backups there and the folder belongs to root. I think "/var/spool/koha/$instance/OAI" belonging to the Koha user would work better.
- I have a problem related to the schema change in bug 22155

>[2019-05-02 10:06:00][DEBUG] [download][pid 2866][STDERR] Creating child process to download and feed parent process parser. at Koha/OAI/Harvester/Downloader.pm line 287.
>[2019-05-02 10:06:00][DEBUG] [download][pid 2866][STDERR] Creating parent process parser. at Koha/OAI/Harvester/Downloader.pm line 293.
>[2019-05-02 10:06:01][DEBUG] Registering POE::Session=ARRAY(0x91f7df8) as import for cf4d071b-76a9-4fff-b13d-c5a8bd27ed4a
>[2019-05-02 10:06:01][DEBUG] Child pid 3032 started as wheel 126
>[2019-05-02 10:06:01][DEBUG] [import][pid 3032][STDERR] DBD::mysql::st execute failed: Unknown column 'me.marcflavour' in 'field list' [for Statement "SELECT `me`.`id`, `me`.`biblionumber`, `me`.`format`, `me`.`marcflavour`, `me`.`metadata`, `me`.`timestamp` FROM `biblio_metadata` `me` WHERE ( ( `me`.`biblionumber` = ? AND `me`.`format` = ? AND `me`.`marcflavour` = ? ) )" with ParamValues: 0='1271', 1='marcxml', 2='MARC21'] at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 1832.
>[2019-05-02 10:06:01][DEBUG] [import][pid 3032][STDERR] DBIx::Class::Storage::DBI::_dbh_execute(): Unknown column 'me.marcflavour' in 'field list' at /usr/share/koha/lib/Koha/Objects.pm line 92
>[2019-05-02 10:06:01][DEBUG] [import][pid 3032] closed all pipes
>[2019-05-02 10:06:01][DEBUG] [import][pid 3032] exited with status 0
Comment 276 David Cook 2019-05-03 04:43:45 UTC
(In reply to Mirko Tietgen from comment #275)
> (In reply to David Cook from comment #273)
> 
> > It sounds like you've got the code dependencies but you'll need to set up
> > the harvester daemon and run it as per
> > https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662#c198.
> 
> Thanks, that helped. Some comments:
> - The Koha user can't create a directory in "/var/spool/koha/$instance/". We
> have the DB backups there and the folder belongs to root. I think
> /var/spool/koha/$instance/OAI" belonging to the Koha user would work better

I think I've already taken that into account? When you do koha-create, it should use root access to create the relevant directory in /var/spool/koha/$instance/ and assign the permissions to the Koha user. I know I've already looked at this but maybe I'll need to look again. I don't think anyone else has had a problem with it?

> - I have a problem related to the schema change in bug 22155
> 
> >[2019-05-02 10:06:00][DEBUG] [download][pid 2866][STDERR] Creating child process to download and feed parent process parser. at Koha/OAI/Harvester/Downloader.pm line 287.
> >[2019-05-02 10:06:00][DEBUG] [download][pid 2866][STDERR] Creating parent process parser. at Koha/OAI/Harvester/Downloader.pm line 293.
> >[2019-05-02 10:06:01][DEBUG] Registering POE::Session=ARRAY(0x91f7df8) as import for cf4d071b-76a9-4fff-b13d-c5a8bd27ed4a
> >[2019-05-02 10:06:01][DEBUG] Child pid 3032 started as wheel 126
> >[2019-05-02 10:06:01][DEBUG] [import][pid 3032][STDERR] DBD::mysql::st execute failed: Unknown column 'me.marcflavour' in 'field list' [for Statement "SELECT `me`.`id`, `me`.`biblionumber`, `me`.`format`, `me`.`marcflavour`, `me`.`metadata`, `me`.`timestamp` FROM `biblio_metadata` `me` WHERE ( ( `me`.`biblionumber` = ? AND `me`.`format` = ? AND `me`.`marcflavour` = ? ) )" with ParamValues: 0='1271', 1='marcxml', 2='MARC21'] at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 1832.
> >[2019-05-02 10:06:01][DEBUG] [import][pid 3032][STDERR] DBIx::Class::Storage::DBI::_dbh_execute(): Unknown column 'me.marcflavour' in 'field list' at /usr/share/koha/lib/Koha/Objects.pm line 92
> >[2019-05-02 10:06:01][DEBUG] [import][pid 3032] closed all pipes
> >[2019-05-02 10:06:01][DEBUG] [import][pid 3032] exited with status 0

Ah, interesting. I'll keep that in mind for when I find some time to work on this.
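
For what it's worth, if bug 22155's rename of biblio_metadata.marcflavour to biblio_metadata.schema is indeed the culprit (as the error message suggests), the fix in the import module would presumably be a one-field change along these lines. The column name is assumed from the bug title and the biblionumber is taken from the log above; treat this as a sketch, not the actual patch.

    use Koha::Biblio::Metadatas;

    my $biblionumber = 1271;    # the biblionumber from the log above

    # Before bug 22155 (now fails with "Unknown column 'me.marcflavour'"):
    my $metadata = Koha::Biblio::Metadatas->find(
        { biblionumber => $biblionumber, format => 'marcxml', marcflavour => 'MARC21' }
    );

    # After bug 22155 (assuming the column is now biblio_metadata.schema):
    $metadata = Koha::Biblio::Metadatas->find(
        { biblionumber => $biblionumber, format => 'marcxml', schema => 'MARC21' }
    );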
Comment 277 David Cook 2019-09-18 09:36:13 UTC
Ok so I've rebased against master and now I'm testing in koha-testing-docker.

I've run into a hiccup, which I think relates to the DB being stricter about timestamp/datetime fields. I'll have to fix that.

Anyway, going to have to wait for another day.

In any case, I really want to add unit tests for this. I think it's going to be challenging since the core app is a daemon running the POE event framework... but I'll do my best.
Comment 278 David Cook 2019-09-18 09:36:38 UTC
To setup koha-testing-docker:

0. Apply patches (e.g. git bz apply 10662)
1. apt-get install libpoe-perl libpoe-component-jobqueue-perl 
2. In your browser, go to localhost:8081 and check that you get to the login page
3. cd /kohadevbox/koha
4. kshell -c "perl installer/data/mysql/updatedatabase.pl"
5. curl https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=78583 > /kohadevbox/koha/oai.yml
6. sudo vi /etc/koha/sites/kohadev/koha-conf.xml
Add the following before </config>:
<oai_pmh_harvester_config>/kohadevbox/koha/oai.yml</oai_pmh_harvester_config>
7. restart_all 
8. KOHA_CONF=/etc/koha/sites/kohadev/koha-conf.xml PERL5LIB=/kohadevbox/koha perl misc/harvesterd.pl --log-level DEBUG

To test:
<First, you'll need an OAI-PMH server to harvest from.>
1) Start OAI-PMH harvester daemon according to the above koha-testing-docker instructions (kohadevbox can be used instead but will need some modifications)
2) Go to /cgi-bin/koha/tools/tools-home.pl
3) Click on "OAI-PMH harvester" (/cgi-bin/koha/tools/oai-pmh-harvester/dashboard.pl)
4) At the toolbar choose "New request"
5) Give the request a "Name" (e.g. Test)(this is just used for Koha and not sent in the request), add "URL" for an OAI-PMH repository (e.g. http://<koha>/cgi-bin/koha/oai.pl), and fill in the OAI-PMH parameters as desired/required (an explanation of the protocol is at http://www.openarchives.org/OAI/openarchivesprotocol.html or see attached screenshot oai_request_example.jpg). The remaining values can be kept unchanged. 
6) Click "Test parameters" to make sure your inputs are all valid. 
7) Click "Save"
8) Click "Actions" and choose "Submit"
9) If the submission is successful, go to the "Submitted requests" tab
10) Click "Actions" and choose "Start"
11) Go to the "Import history" tab
12) Click "Refresh import history" to update the table and see your results. If nothing is coming up, you may need to change your request as the OAI-PMH repository may not have any results for that particular request.
Comment 279 David Cook 2019-09-18 09:37:50 UTC
Added the updated instructions for koha-testing-docker, since that's the tool I'll be using, and I think it's the tool a lot of you are now using too. 

I'll post some updated patches once I've worked out the latest kink. (I'm also mindful of some issues that Mirko flagged with the package related code.)
Comment 280 David Cook 2020-02-28 06:36:20 UTC
Ok rebased against master and running into that timestamp problem again, so I'll look at fixing that now.

Once I have that fixed, I might actually go test Jonathan's work on RabbitMQ, since it would be great to replace my totally bespoke POE::Component::JobQueue implementation with a Koha Community RabbitMQ implementation.

After reviewing my code, I'd be able to cut the vast majority of my code if we were using RabbitMQ instead.
Comment 281 David Cook 2020-02-28 06:58:53 UTC
Hmm running out of time today.

Having KohaTable/DataTable issues...

Uncaught TypeError: Cannot read property 'fnInit' of undefined
    at tb (datatables.min_19.1200030.js:90)
    at nb (datatables.min_19.1200030.js:72)
    at ha (datatables.min_19.1200030.js:87)
    at e (datatables.min_19.1200030.js:132)
    at HTMLTableElement.<anonymous> (datatables.min_19.1200030.js:132)
    at Function.each (jquery-2.2.3.min_19.1200030.js:2)
    at a.fn.init.each (jquery-2.2.3.min_19.1200030.js:2)
    at a.fn.init.n [as dataTable] (datatables.min_19.1200030.js:122)
    at KohaTable (dashboard.pl:885)
    at HTMLDocument.<anonymous> (dashboard.pl:919)

I must be missing something that has changed with the use of KohaTable...
Comment 282 David Cook 2020-02-28 07:00:48 UTC
Created attachment 99739 [details] [review]
Bug 10662: Build OAI-PMH Harvesting Client

This patch adds an OAI-PMH harvesting client to Koha.

The client runs as a daemon in the background. Users interact with the client
via the Koha web user interface, which communicates with the daemon via a unix socket
using a simple JSON-based protocol.

The harvester ingests MARCXML. You can harvest other metadata formats, but you
must use an XSLT to transform them into MARCXML if you want them to be imported
into Koha.

You can supply your own download and import modules via the oai-pmh-harvester.yaml
configuration file, but the default modules supplied in this patch should
be good enough for your purposes. If they're not, raise a Bugzilla issue.

There is a cleanup_database.pl addition, because high volume harvesting
will cause the oai_harvester_import_queue table to fill quickly. This table
is not required for adding/updating records. It's mostly just for general
monitoring and audit purposes.

Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 283 David Cook 2020-02-28 07:00:55 UTC
Created attachment 99740 [details] [review]
Bug 10662: (QA follow-up) provide DBIC schema files

DBIC schema files

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 284 David Cook 2020-02-28 07:01:01 UTC
Created attachment 99741 [details] [review]
Bug 10662: (QA follow-up) Fix plural in pod and use statements

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 285 David Cook 2020-02-28 07:01:08 UTC
Created attachment 99742 [details] [review]
Bug 10662: (QA follow-up) Enhance marc matchers description

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 286 David Cook 2020-02-28 07:01:16 UTC
Created attachment 99743 [details] [review]
Bug 10662: (follow-up) Template corrections and improvements

This patch makes a number of corrections and improvements to the OAI
harvester templates:

 - Add missing DataTables CSS include
 - Replace YUI grid with Bootstrap
 - Correct style of inline dialogs (.dialog.alert, .dialog.message)
 - Correct class of form hint and error messages
 - Format dates in saved and submitted requests tables
   - Add title-string sorting to DataTables configuration
 - Add delete confirmation to saved and submitted tables
 - Disable sorting on action columns
 - Style action links inside tables as buttons
 - Removed commented markup
 - Add missing JavaScript include (tools-menu.js) to highlight active
   section in sidebar menu
 - Add CodeMirror styling to record view page (CodeMirror XML mode file
   is added to enable this)
 - Remove invalid <script> "type" attribute

To test, apply the patch and clear your cache if necessary.

 - Go to Tools -> OAI-PMH harvester
   - "Saved," "Submitted," and "Import history" table should look
      correct and work correctly: Sorting, column visibility,
      pagination, etc.
   - In the "Saved" and "Submitted" tables, dates should be formatted
     according to the dateformat preference and sorting of these dates
     should work correctly.
   - In the "Import history" table the "View record" and "View in
     catalog" links should be styled as Bootstrap buttons.
     - Click the "View record" button.
       - The view of the downloaded record should have XML syntax
         highlighting.
   - When you perform actions like submitting requests, starting
     requests, etc, corresponding dialogs should be styled correctly.
     Informational/successful information should be "message" style
     dialogs. Error dialogs should be "alert" style.
   - Click "New request"
     - Form hints in the new request form should be styled correctly.
     - Submit the form without filling in any fields. Field-specific
       error messages should be styled italic red.

Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 287 David Cook 2020-02-28 07:01:24 UTC
Created attachment 99744 [details] [review]
Bug 10662: (QA follow-up) Make atomic update consistent with kohastructure. Remove utf8 charset

Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Comment 288 David Cook 2020-02-28 07:01:31 UTC
Created attachment 99745 [details] [review]
Bug 10662: Strip UTC designators from header_datestamp
Comment 289 David Cook 2020-02-28 07:02:36 UTC
So I need to add the "Import history" DataTables use on http://localhost:8081/cgi-bin/koha/tools/oai-pmh-harvester/dashboard.pl

Also need to look at Mirko's packaging issue.

Also need to look at adding unit tests (although might be worthwhile to work on RabbitMQ first...)
Comment 290 David Cook 2020-04-20 00:21:19 UTC
I've looked more at Bug 22417, and it's got me thinking. 

The OAI-PMH harvester has a few core needs:

1. Instant communication between Web UI and OAI-PMH harvester to coordinate harvesting tasks
2. Ability to schedule tasks
3. Ability to execute and repeat download tasks in parallel in the background
4. Ability to save downloaded records
5. Ability to import records into Koha

The first is achieved by exchanging JSON messages over a Unix socket. (I'd actually like to change this to a TCP socket and use JSON messages over an HTTP API. Using HTTP would make it easier to communicate over a network, would simplify the client code by using standard communication mechanisms, and has authentication methods that could help secure the OAI-PMH harvester. I could also provide a Docker image that contains the OAI-PMH harvester in a separate Docker container from the Koha application server.) This works fairly well and is easy enough to achieve.
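
To make that first point concrete, a client-side exchange over the unix socket could look roughly like the sketch below. The socket path, message fields, and reply shape are illustrative assumptions rather than the exact protocol implemented in the patch.

    use Modern::Perl;
    use IO::Socket::UNIX;
    use Socket qw( SOCK_STREAM );
    use JSON qw( encode_json decode_json );

    # Hypothetical socket path; the real one comes from oai-pmh-harvester.yaml
    my $socket_path = '/var/run/koha/kohadev/oai-pmh-harvester.sock';

    my $socket = IO::Socket::UNIX->new(
        Type => SOCK_STREAM,
        Peer => $socket_path,
    ) or die "Cannot connect to the harvester daemon: $!";

    # Send one newline-terminated JSON message (message shape is illustrative)
    print {$socket} encode_json( { action => 'list', type => 'submitted' } ) . "\n";

    # Read the newline-terminated JSON reply
    my $reply = <$socket>;
    my $data  = decode_json($reply);
    say 'Harvester replied with status: ' . ( $data->{status} // 'unknown' );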

The second is provided by a bespoke scheduler using POE timers built into the OAI-PMH harvester. Given the granularity of the scheduling, I don't see this changing any time soon. In theory, a generic Koha task scheduler could replace this functionality, but that seems unlikely any time soon. This is arguably one of the most complex parts of the OAI-PMH harvester. 
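
For readers unfamiliar with POE, a self-re-arming timer is the basic building block of that kind of scheduler. A minimal sketch follows; the three-second interval and event name are only illustrative, not the scheduler actually shipped in the patch.

    use Modern::Perl;
    use POE;

    POE::Session->create(
        inline_states => {
            _start => sub {
                # Arm the first tick three seconds from now
                $_[KERNEL]->delay( harvest_tick => 3 );
            },
            harvest_tick => sub {
                say 'Time to poll the OAI-PMH repository...';
                # ...download/enqueue work would happen here...
                $_[KERNEL]->delay( harvest_tick => 3 );    # re-arm the timer
            },
        },
    );

    POE::Kernel->run();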

Currently, the OAI-PMH harvester uses an in-memory queue for download tasks and a database queue for import tasks. I thought a lot about how RabbitMQ might be used to replace these queues. It could be useful to replace the in-memory download queue, and the download workers could be split out of the existing OAI-PMH harvester.

As for the import tasks, the download workers need to save the records and enqueue an import task ASAP. At the moment, they save the records to disk and add an import task to the database with a pointer to the records on disk. It works well enough, but it assumes that you have the disk space and that the import worker has access to the local disk. I've been thinking it would be better to either 1) save the records to the database and enqueue a RabbitMQ message with a pointer to the database, or 2) send the records in a RabbitMQ message. I think the first option is probably better, because there is increased visibility; you can't see all the messages in a RabbitMQ queue. That all said, saving to disk is going to be faster than sending the data over a network. (However, in the past I think I've sent individual records over the network; in this case, I'd be sending whole batches of records at once.)

But for a download worker to send records, it would need credentials for either the database or RabbitMQ... so I'm thinking it might be better to use an import API with an API key, although that would involve receiving all the data over the network and then sending it to the database. Also slow, but I haven't tested the actual speeds. The import API would save the data to the database and then enqueue an import task.

Of course, this would just be a re-working of what's already here. The benefits of re-working the queues and workers are arguable at this point, although there are certainly benefits to changing from a bespoke client/server communication protocol to HTTP over TCP.
Comment 291 David Cook 2020-04-20 01:09:36 UTC
On one hand, I feel like we're so close with the current patches. It just needs more unit tests. 

On the other hand, the unit tests for the task scheduling and concurrent processing are actually quite challenging. Also, these patches are a huge chunk of functionality, which increases Koha's overall size.

I am tempted to take this code and split it into two parts: 
1. Koha plugin for Web functionality
2. Standalone OAI-PMH harvester

My thinking is the Koha plugin will allow you to connect to a separately packaged OAI-PMH Harvester API in order to add/start/stop/update/remove harvesting tasks. Easy!

The standalone OAI-PMH harvester will then take care of the scheduling of tasks and high-performance downloading of records. The OAI-PMH harvester will then have actions to take on downloaded record batches. Not too hard!

This is where things get interesting. Ideally, there would be a Koha API backed by a queue and Koha worker(s) to handle the processing of records. But that doesn't currently exist. I can use the Koha plugin to inject an API route, but there is no existing queue mechanism. Uh oh!
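
As a rough illustration of the "inject an API route" part, a plugin can expose routes through the api_namespace/api_routes hooks. Everything below (package name, namespace, bundled openapi.json file) is a placeholder sketch, not the actual plugin.

    package Koha::Plugin::Com::Example::OAIPMHImport;

    use Modern::Perl;
    use base qw( Koha::Plugins::Base );
    use Mojo::JSON qw( decode_json );

    our $metadata = {
        name    => 'OAI-PMH import API (sketch)',
        class   => 'Koha::Plugin::Com::Example::OAIPMHImport',
        version => '0.0.1',
    };

    sub new {
        my ( $class, $args ) = @_;
        $args->{metadata} = $metadata;
        return $class->SUPER::new($args);
    }

    # Routes are injected under /api/v1/contrib/<namespace>/...
    sub api_namespace { return 'oaipmhimport' }

    # Return the OpenAPI fragment describing the injected route(s),
    # read from a file bundled with the plugin
    sub api_routes {
        my ( $self, $args ) = @_;
        return decode_json( $self->mbf_read('openapi.json') );
    }

    1;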

The API could be used to store the records in a database table, but then I would need a Koha-based worker to access that database table and apply all the Koha-specific rules to the data. 

At this point it would be nice to have RabbitMQ for the queue; the Koha plugin could then provide a Koha worker, which a sysadmin could manually start. We don't have to have RabbitMQ, though: the Koha worker could just tap into the database directly (until RabbitMQ is available).

So in the end I suppose really 3 parts:
1. Koha plugin (Web functionality)
2. Koha plugin (Import Worker functionality)
3. Standalone OAI-PMH harvester

Alternatively, the import API could handle all the Koha-related processing. I could do some tests to see how fast the web API could process the data. The OAI-PMH download will probably always be faster than the upload, so the OAI-PMH harvester would need to have its own internal queue for the downloaded records, but that could keep the Koha side of things slimmer. Plus... if Koha did implement a message queue like RabbitMQ, that change could be done transparently in the Koha background without affecting the OAI-PMH harvester. 

Ok so...

1. Koha plugin (Web UI to interact with OAI-PMH harvester)
2. Koha plugin (Import API to receive harvested OAI-PMH records)
3. Standalone OAI-PMH harvester

I think that this makes sense. People could use the plugin, and then, if they like it, we could try again to get it into Koha master.

I am actually interested in rewriting the OAI-PMH harvester in Golang to take advantage of its concurrent programming strengths. By using the Koha plugin to provide/consume APIs, we're able to use the best tools for the job for the actual OAI-PMH work. Note too that the OAI-PMH harvesting itself isn't actually Koha-specific. The only Koha-specific aspects are the scheduling and the record import. There's no real need to have the OAI-PMH code in the Koha codebase.
Comment 292 David Cook 2020-05-19 13:46:16 UTC
I've been thinking more about this, and I like the following model:

1. Koha plugin (Web UI to interact with OAI-PMH harvester)
2. Koha plugin (Import API to receive harvested OAI-PMH records)
3. Standalone OAI-PMH harvester

The OAI-PMH harvester's only job would be to schedule harvests and to execute those harvests (ie to download records) and then send those harvested records to Koha.

A Koha plugin providing an OAI-PMH Import API would also allow any OAI-PMH harvester to work with Koha. (I might even pursue this as a separate Bugzilla bug for the core codebase as well, since it would be a nice integration to have that many different people could leverage. It would allow Koha to have OAI-PMH harvesting capabilities without being wedded to a particular OAI-PMH client implementation. This functionality would also be massively improved using a job queue like RabbitMQ...)

A Koha plugin for interacting with the OAI-PMH harvester would just be for my OAI-PMH harvester implementation, but it would provide library administrators a lot more power than they usually would have with an OAI-PMH harvester.
Comment 293 Michal Denar 2020-05-20 18:01:26 UTC
I really like this idea. The Czech Koha community wants to help turn this idea into a project. How can we cooperate? If we work together, we'll finish this sooner, I hope.

Michal
Comment 294 David Cook 2020-05-21 00:00:43 UTC
(In reply to Michal Denar from comment #293)
> I really like this idea. Czech Koha community want to help change this idea
> to project. How can we cooperate? If we'll work together, we'll finish this
> sooner. I hope.
> 
> Michal

That's great to hear, Michal!

The thing that I will probably need most is testers. I don't have anything to test yet, but I am hoping to have something soon.

Here's the list of components I'll be developing:
1. Koha UI - Koha plugin (Web UI to interact with OAI-PMH harvester HTTP management API)
2. Koha Import API - Koha plugin (Import API to receive harvested OAI-PMH records)
3. Standalone OAI-PMH harvester (Statically compiled Golang daemon with HTTP API for management)

Last night, I started work on the "Koha Import API" plugin (https://github.com/minusdavid/koha-plugin-oaipmh-import). My next steps for that are clear, so I just need to sit down and do some work on that. Maybe tomorrow night. 

I've done experimental work on the "Standalone OAI-PMH harvester", but I haven't posted that to Github yet. I think that I've mostly settled on a scheduler methodology, so I'm hoping to make rapid progress on this one too. The plan with this will be to post the source code on Github, but also to provide a compiled binary that people can just download and run. (I might start with a Linux binary, but I've been thinking about doing a Windows binary too.) I may also provide a Docker image for the harvester to ease testing and deployment.

The "Koha UI" plugin will probably be the last thing I do, since it depends on the HTTP API for the "Standalone OAI-PMH harvester". However, it should be fairly straight forward. 

Sorry for all the words! Just sharing my current plans! A lot of the work will be modeled off what I did for Bug 10662, but will probably be simpler.
Comment 295 David Cook 2020-05-21 00:20:33 UTC
Now that I know the basics of Koha plugins and OpenAPI specs for API routes, I think the plugin progress will be quite quick.

I am hoping that "koha-plugin-oaipmh-import" may actually be leveraged by many different OAI-PMH harvester tools. 

The standalone OAI-PMH harvester was required by Stockholm University Library, as they wanted a high-performance interactive tool, but it might be overkill for many other libraries, which might just want to run a nightly cronjob. This plugin would allow any OAI-PMH harvester to feed records into Koha via the REST API.

So I think this plugin is probably the most important component of the work really, as it will add the fundamental ability to ingest MARCXML records encapsulated within OAI-PMH records. 

With the "koha-plugin-oaipmh-import" plugin, people could use any number of existing OAI-PMH tools to create their own OAI-PMH harvesters to import records into Koha.

But I'll still be working on creating an OAI-PMH harvester that meets the original stated needs of Stockholm University Library, as I still want them to be able to meet their goals.
Comment 296 Michal Denar 2020-05-22 08:17:47 UTC
Hello David,
I'm ready for testing:-) Just one basic point: Koha is global, plugins have to be translatable. 

I like and support your solution with a standalone harvester and plugins for the administration and import process.
Comment 297 David Cook 2020-05-26 08:17:32 UTC
(In reply to Michal Denar from comment #296)
> I'm ready for testing:-) Just one basic point: Koha is global, plugins have
> to be translatable. 
> 

This is my first time working on Koha plugins, so I don't know what capacity there is for Koha plugin translations, but I'll keep that in mind. 

It might be that koha-plugin-oaipmh-import serves as a proof-of-concept which gets merged into the mainstream codebase, which would be easier to translate too.

> I like and support your solution with standalone harvester and plugins for
> administration and import process.

Thanks. I'm in a coding mood tonight, so I'm going to see how far I get tonight.
Comment 298 David Cook 2020-05-26 12:39:20 UTC
(In reply to David Cook from comment #297)
> Thanks. I'm in a coding mood tonight, so I'm going to see how far I get
> tonight.

I've made significant progress, but I'm not quite done with "koha-plugin-oaipmh-import" yet. I think maybe one or two more evenings, and I should have something testable.

That's just for the OAI-PMH import functionality though. It doesn't include any OAI-PMH harvest/download functionality. That'll be handled by a separate plugin, although there are lots of possibilities for the download functionality. A person could write a simple script using HTTP::OAI or use an existing command-line tool.
Comment 299 David Cook 2020-06-11 11:03:21 UTC
(In reply to Michal Denar from comment #296)
> Hello David,
> I'm ready for testing:-) Just one basic point: Koha is global, plugins have
> to be translatable. 
> 

Looks like Tomas has already added translations to a plugin (https://gitlab.com/thekesolutions/plugins/koha-plugin-pay-via-paypal/-/tree/master/Koha/Plugin/Com/Theke/PayViaPayPal), so I'll model my work on that.
Comment 300 Michal Denar 2020-06-11 11:08:14 UTC
Hi David,
yes, I saw this solution. A JSON file is easy to edit; I like it.
Comment 301 Magnus Enger 2020-06-11 11:29:03 UTC
(In reply to David Cook from comment #299)
> Looks like Tomas has already added translations to a plugin
> (https://gitlab.com/thekesolutions/plugins/koha-plugin-pay-via-paypal/-/tree/
> master/Koha/Plugin/Com/Theke/PayViaPayPal), so I'll model my work on that.

Bonus points to the person who documents how it's done, for example on the wiki! :-)
Comment 302 David Cook 2020-06-12 05:21:36 UTC
(In reply to Michal Denar from comment #300)
> Hi David,
> yes, I saw this solution. JSON file is easy to edit, I like it.

Oh, it's not a JSON file. It'll be Template::Toolkit template files. Still very easy.

Examples:
https://gitlab.com/thekesolutions/plugins/koha-plugin-pay-via-paypal/-/blob/master/Koha/Plugin/Com/Theke/PayViaPayPal/i18n/cs-CZ.inc

https://gitlab.com/thekesolutions/plugins/koha-plugin-pay-via-paypal/-/blob/master/Koha/Plugin/Com/Theke/PayViaPayPal/i18n/zh-Hans-CN.inc
Comment 303 David Cook 2020-06-12 05:22:15 UTC
(In reply to Magnus Enger from comment #301)
> (In reply to David Cook from comment #299)
> > Looks like Tomas has already added translations to a plugin
> > (https://gitlab.com/thekesolutions/plugins/koha-plugin-pay-via-paypal/-/tree/
> > master/Koha/Plugin/Com/Theke/PayViaPayPal), so I'll model my work on that.
> 
> Bonus points to the person who documents how it's done, for example on the
> wiki! :-)

I'm only one man... 😭

But noted heh.
Comment 304 Michal Denar 2020-06-12 06:38:59 UTC
Hi,
I could participate in the documentation. But I'm not a developer; I need some "how to" from David or Tomás on how to integrate translation into the code.
Comment 305 David Cook 2020-07-01 06:52:44 UTC
(In reply to Michal Denar from comment #304)
> Hi,
> I could participate on documetation. But I'm not developer, I need some "how
> to" from David or Tomás, how integrate translation into code.

No worries.
Comment 306 David Cook 2020-07-01 06:55:42 UTC
While I hate to keep waffling, I've been thinking that having the OAI-PMH ingest code in mainstream Koha is probably a good idea. I've opened #25905 to that end. 

I'll change my koha-plugin-oaipmh-import work to fit into mainstream Koha, and submit a patch on #25905. 

The initial work won't be very complicated, so it shouldn't be very difficult to test. 

I'll still do the OAI-PMH harvester as a plugin though, since what I have in mind is complex and not necessarily suitable for everyone. 

My hope is that when #25905 is done, people might set up their own harvesters as best fits their needs.
Comment 307 Michal Denar 2020-09-28 19:01:17 UTC
Hi David,
any updates?

Thank you.
Comment 308 David Cook 2020-09-28 23:07:37 UTC
(In reply to Michal Denar from comment #307)
> Hi David,
> any updates?
> 

Not really. I haven't had the energy to work on this during my free time, so I haven't looked at it in a couple of months. It's always in the back of my mind though.

I have been thinking a bit about waiting until after Bug 22417 is pushed though, as that could be useful for improving performance and robustness.
Comment 309 Eugene Espinoza 2022-06-10 07:28:40 UTC
Hi David! Any updates on this? Thanks!
Comment 310 David Cook 2022-06-14 05:13:26 UTC
(In reply to Eugene Espinoza from comment #309)
> Hi David! Any updates on this? Thanks!

No, no updates on this one. 

Occasionally, I think about doing a simpler version, but I don't really have the time for it.
Comment 311 Koha Team University Lyon 3 2022-06-14 07:20:03 UTC
Hi,
KohaLA is interested in continuing this feature and we are probably going to fund some development on it.

Sonia
Comment 312 Michal Denar 2022-06-14 07:31:09 UTC
Sonia, that's great news. I can get involved in testing and commenting on this functionality.
Comment 313 David Cook 2022-06-17 00:45:09 UTC
(In reply to Koha Team University Lyon 3 from comment #311)
> Hi,
> KohaLA is intersted in continuing this feature and we are probably going to
> fund some devs on that.
> 
> Sonia

That's great to hear! I think the problem with bug 10662 is that I tried to create too many things in one bug report.
Comment 314 David Cook 2022-06-17 01:04:16 UTC
Fortunately, since we have RabbitMQ now, some of it should be a lot easier! (I was just looking at bug 27421, which asynchronously stages, imports, and reverts MARC imports. It could be helpful for this work too.)

*The hard part for the OAI-PMH harvester in Koha is the task scheduling.*

Koha doesn't have any way to let users define their own task schedules. (Back in 2018, Frido also mentioned Koha support companies might not want to let librarians set task schedules for OAI-PMH anyway for performance/API rate limiting reasons.)

That said, if librarian-controlled scheduling doesn't matter for you, you could just create a cronjob that sends OAI-PMH tasks to RabbitMQ (or a plugin that uses the nightly plugin cronjob).

Then all that's left is to create a Koha::BackgroundJob::OAIPMHHarvest class.
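
A bare-bones skeleton of such a class might look like the sketch below. The method names and the enqueue/process conventions follow my reading of the existing Koha::BackgroundJob subclasses, so treat them as assumptions to verify against the Koha version at hand.

    package Koha::BackgroundJob::OAIPMHHarvest;

    use Modern::Perl;
    use base qw( Koha::BackgroundJob );

    sub job_type { return 'oaipmh_harvest' }

    sub enqueue {
        my ( $self, $args ) = @_;

        # $args could carry repository URL, set, metadataPrefix, from/until...
        return $self->SUPER::enqueue(
            {
                job_size => 1,
                job_args => $args,
            }
        );
    }

    sub process {
        my ( $self, $args ) = @_;

        # 1. Harvest from $args->{url} (e.g. with HTTP::OAI)
        # 2. Stage/match the downloaded records
        # 3. Import them, or enqueue a follow-up job for the import step
    }

    1;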

-- 

The background job class could probably encapsulate the entire task. (For bug 10662, the requirement was to download records every 3 seconds, so I had to split the harvest/download and import tasks into two separate asynchronous tasks to achieve fast enough download speeds.)

For bug 10662, I also had a requirement to handle very long XML streams over HTTP rather than the usual short XML responses, which technically is allowed according to the OAI-PMH specification, and that meant a custom downloader. It was high-performance, but it meant I had to add even more code.

In theory, you might be able to have Koha::BackgroundJob::OAIPMH::Download, Koha::BackgroundJob::OAIPMH::Stage and Koha::BackgroundJob::OAIPMH::Import classes. The scheduler (e.g. cronjob) could enqueue a Koha::BackgroundJob::OAIPMH::Download task which downloads the records, that could then enqueue a Koha::BackgroundJob::OAIPMH::Stage task to stage (ie run the matcher/duplicate finder and ideally do some OAI-PMH specific checks), and that could enqueue the final Koha::BackgroundJob::OAIPMH::Import task to run the actual import. 

(The advantage of breaking it into 3 different tasks is that Koha by default only has 1 background job worker, so very long tasks could prevent other tasks from running in a timely way.)

(However, if we had more than 1 background job worker, I'd be a little concerned about race conditions where Worker B tries to import Record 1-A after Worker A has imported Record 1-B where Record 1-A is older than Record 1-B. There needs to be a sanity check to make sure that records only overwrite older records.)

Depending on how bug 27421 works, Koha::BackgroundJob::OAIPMH::Stage and Koha::BackgroundJob::OAIPMH::Import could potentially be subclasses of Koha::BackgroundJob::StageMARCForImport and its import counterpart. That said, I don't really like Koha's built-in MARC import classes for OAI-PMH, because once records are staged they're imported without any sanity checks. Also, record matching rules are user-controllable and solely MARC-based, so they're unreliable and not great for matching incoming OAI-PMH records to past harvested records.
Comment 315 David Cook 2022-12-09 05:37:14 UTC
I just played a bit with the MarcEdit OAI-PMH tool. It can download OAI-PMH data, transform it, and save it to a local file.

If you use the Windows Task Scheduler, you can also schedule this task. 

The problem I see is that you end up with MARC files that Koha won't necessarily be able to differentiate.

DSpace's OAI-PMH harvester and my OAI-PMH harvesters all track the local system identifier against the OAI-PMH identifier, so they keep a linkage between the upstream and downstream systems and updates and deletions are always perfectly matched up.

Technically, you could probably write a crosswalk for MarcEdit that saves the OAI-PMH identifier and OAI-PMH repository URL into a custom field, so you could use it with Koha's record matching rules, but you'd have to make sure that custom field is indexed...
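
In Perl terms (a MarcEdit crosswalk would do the equivalent in XSLT), stashing that provenance in a local-use field might look like this. The 037 tag and subfields are an arbitrary choice for illustration, and the field would still need to be indexed for matching to work.

    use Modern::Perl;
    use MARC::Record;
    use MARC::Field;

    my $record = MARC::Record->new();
    $record->append_fields(
        MARC::Field->new(
            '037', ' ', ' ',
            a => 'oai:upstream.example.org:123',       # OAI-PMH identifier
            b => 'https://upstream.example.org/oai',   # OAI-PMH repository URL
        ),
    );

    print $record->as_formatted(), "\n";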
Comment 316 David Cook 2022-12-09 05:37:59 UTC
Sonia, do you have any updates on how KohaLA is going with OAI-PMH harvesting?
Comment 317 David Cook 2022-12-09 06:34:34 UTC
Occasionally I think about how I could do an OAI-PMH harvester plugin, but the tough part is that the harvester would need extra database tables, and I think having a plugin changing the database schema is not a great idea (or even possible depending on your database permissions).

I suppose a person could take the OAI-PMH repository URL and the OAI-PMH record identifier, then hash them together, and then store that on a file system as a file. The content of that file could then contain additional data (like the Koha biblionumber).

Of course, writing a lot of files to the same directory can be problematic, but Fedora Commons has already done some math on that and has recommended pair-tree patterns for creating an optimized sub-folder structure. 
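
A sketch of that idea: hash the repository URL and OAI identifier together and fan the files out pairtree-style. The paths and hash choice below are arbitrary illustrations.

    use Modern::Perl;
    use Digest::SHA qw( sha256_hex );
    use File::Spec;

    sub record_state_path {
        my ( $base_dir, $repository_url, $oai_identifier ) = @_;

        my $digest = sha256_hex( join "\0", $repository_url, $oai_identifier );

        # Pairtree-style fan-out: two 2-character sub-folders, then the file
        my @pairs = ( substr( $digest, 0, 2 ), substr( $digest, 2, 2 ) );

        return File::Spec->catfile( $base_dir, @pairs, "$digest.json" );
    }

    # e.g. /var/lib/koha/kohadev/oai-state/ab/cd/abcd...json
    say record_state_path(
        '/var/lib/koha/kohadev/oai-state',
        'https://upstream.example.org/oai',
        'oai:upstream.example.org:123',
    );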

--

I think that would just leave the need for a "koha-plugin" command that can invoke a particular plugin from the command line. A sysadmin could then schedule that whenever.

Alternatively, one could use the "cronjob_nightly" Koha plugin hook. 

--

It wouldn't be all things to all people, but it would be a minimum viable OAI-PMH harvester...
Comment 318 Koha Team University Lyon 3 2023-01-23 16:41:02 UTC
Hi,
here is some news about KohaLA's thinking.

We would like to use an existing OAI-PMH harvester, and two of them seem to be good candidates:
- the Catmandu harvester
- the HTTP::OAI::Harvester module

Both are in Perl. At the moment, we think the second could be the better choice because it's more up to date and we don't necessarily need to use Catmandu.

What we would like is to use all the import tools already existing in Koha (XSLT, record matching rules, MARC modification templates, Stage MARC for import, Manage MARC overlay rules).

We would like to add an OAI-PMH setting (like Z39.50 / SRU) in the staff interface with URL, set, XML format, authentication login, biblio/authority records, deleted records handling, email for logs, XSLT file, encoding, items handling, and import profile.

Every harvest would be scheduled only via cronjobs.

We are currently trying to propose specifications during the next 2 days (KohaLa hackathon). Don't hesitate to give us feedback on these first ideas.

Sonia
Comment 319 David Cook 2023-01-23 22:58:30 UTC
(In reply to Koha Team University Lyon 3 from comment #318)
> We would like to use an existing OAI-PMH harvester and 2 of them seem to be
> good candidates :
> - Catmandu harverster
> - HTTP::OAI::HARVESTER module
> 
> Both are in perl. At the moment, we thought that the second could be a
> better choice because it's more up to date and we don't necessarily need to
> use Catmandu.

Currently, we're still stuck using HTTP::OAI 3.27 from 2011 in Koha for OAI-PMH server functionality. Bug 17704 is looking at trying to get a later version of HTTP::OAI working with Koha, but it's been open for about 6 years now. (HTTP::OAI was also a dead project for a few years but it was resurrected in 2017 by one of the Catmandu authors. On that note, I think it is a good idea to avoid Catmandu.)

One problem with HTTP::OAI that I encountered back in 2016 was that it needed to parse the entire XML response into a DOM Document tree rather than processing the XML response while it parsed it. 

This usually isn't a problem because most repositories use resumptionToken elements and limit responses to approximately 100 records. But LIBRIS in Sweden would stream the entire response back without resumptionToken elements, so 1 XML response could contain the entire catalogue's worth of records.

That said, in theory the HTTP::OAI module uses event-driven SAX XML parsing, so it shouldn't be building a DOM Document tree from the response. Maybe the dev environment I was using in 2016 didn't have the correct SAX parser dependencies, so it was using a DOM-based parser in lieu of the SAX parser unintentionally. 

Plus, I suppose we could say that the Koha OAI-PMH harvester doesn't support OAI-PMH repositories that don't use resumptionToken elements for flow control. 

Or, since HTTP::OAI is no longer dead, that issue can always be pursued with the current maintainer.

So overall... HTTP::OAI is probably the way to go. Just wanted to add a warning about my past experience with it.
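
For reference, the basic harvesting loop with HTTP::OAI looks roughly like the sketch below (the URL and metadataPrefix are placeholders); as I understand it, the iterator follows resumptionTokens for you, which is exactly where the streaming caveat above comes in.

    use Modern::Perl;
    use HTTP::OAI;

    my $harvester = HTTP::OAI::Harvester->new(
        baseURL => 'https://upstream.example.org/oai',
    );

    my $response = $harvester->ListRecords( metadataPrefix => 'marcxml' );
    die 'OAI-PMH error: ' . $response->message if $response->is_error;

    while ( my $record = $response->next ) {
        say $record->identifier . ' (' . $record->datestamp . ')';

        # $record->metadata->dom gives an XML::LibXML document to transform/import
    }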

> What we would like is to use all the import tools already existing in Koha
> (XSLT, Record matching rules,  MARC modification templates, Stage marc for
> import, Manage MARC overlay rules).
> 
> We would like to add a OAI-PMH setting (like Z39-50 / SRU) in the staff
> interface with URL, SET, XML Format, authentication login, biblio/authority
> records, deleted records handling, email for logs, XSLT file, encoding,
> items handling, profile import.

Sounds like a plan. I suspect it will involve a lot of testing. It might be worthwhile to break some of that functionality out into separate tickets, so that the whole patch set doesn't need to be re-tested for minor fixes outside the core harvester functionality.

> Every harvesting would be scheduled only via the cronjobs.

That should make it easy to implement and test.
Comment 320 David Cook 2023-03-26 22:18:01 UTC
By the way, the Fedora Commons community is planning to have some conversations about OAI-PMH for Fedora 6.x as well, although they are having their meeting at 10am Eastern Time (in Canada/USA) on March 30th, so that timing might not work out great. I also haven't found them to be responsive online...
Comment 321 David Cook 2024-09-01 23:20:21 UTC
Don't know why I didn't mark this as a duplicate of bug 35659 sooner!

*** This bug has been marked as a duplicate of bug 35659 ***