Bug 25905 - REST API: create endpoint for importing OAI-PMH records from external OAI-PMH clients
Summary: REST API: create endpoint for importing OAI-PMH records from external OAI-PMH clients
Status: NEW
Alias: None
Product: Koha
Classification: Unclassified
Component: REST API
Version: unspecified
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: David Cook
QA Contact:
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2020-07-01 06:52 UTC by David Cook
Modified: 2024-03-16 01:02 UTC
CC: 5 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 25905: Add new 'import_oaipmh_biblios' table (3.62 KB, patch)
2020-08-06 12:29 UTC, David Cook
Bug 25905: DBIx::Class files (4.43 KB, patch)
2020-08-06 12:29 UTC, David Cook
Bug 25905: Create /import/oaipmh/biblios API endpoint (16.60 KB, patch)
2020-08-06 12:29 UTC, David Cook
Bug 25905: Tidy up warns (2.13 KB, patch)
2020-08-06 12:29 UTC, David Cook

Description David Cook 2020-07-01 06:52:30 UTC
Following on from previous work on bug 10662 and koha-plugin-oaipmh-import, I'm now thinking that it would be best to have a REST API endpoint for importing OAI-PMH records in mainstream Koha.

This would give libraries assurance that the core OAI-PMH ingest functionality will be in the main codebase and get Koha community support. 

For now, I'm still planning to keep my planned OAI-PMH harvester/client as a plugin/third-party dependency, but completion of this work will allow Koha libraries to use any OAI-PMH harvester they choose to ingest records into Koha. This could be with a Perl cronjob using HTTP::OAI, or it could be a Golang daemon, or whatever. Lots of possibilities. 
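To give a concrete flavour, a bare-bones cronjob along those lines might look like the sketch below. It's untested and just relays a raw ListRecords response to the planned endpoint; the URLs and credentials are placeholders, and a real client would also need to follow resumptionTokens (which HTTP::OAI can manage for you).

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use MIME::Base64 qw(encode_base64);

    # Placeholder values - adjust for the target repository and Koha instance
    my $repo     = 'http://koha-community.org/oai.pl';
    my $endpoint = 'http://localhost:8081/api/v1/import/oaipmh/biblios';
    my $auth     = 'Basic ' . encode_base64('username:password', '');

    my $ua = LWP::UserAgent->new;

    # Fetch a raw ListRecords document from the upstream repository
    my $harvest = $ua->get("$repo?verb=ListRecords&metadataPrefix=marcxml");
    die $harvest->status_line unless $harvest->is_success;

    # Relay the OAI-PMH document unchanged to Koha's import endpoint
    my $import = $ua->post(
        $endpoint,
        'Authorization'            => $auth,
        'x-koha-oaipmh-repository' => $repo,
        Content                    => $harvest->decoded_content,
    );
    warn $import->status_line unless $import->is_success;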

Anyway, I've got a lot of this work done already in koha-plugin-oaipmh-import, so I'll look at re-organising it to fit into Koha proper. 

It'll be in my own time, so I don't have a timeline for it, but it is on my TODO list.
Comment 1 Andreas Hedström Mace 2020-07-01 09:37:10 UTC
I think this would be really good to have. A stable endpoint to ingest the records, and the harvester as a plug-in (or even various harvester plug-ins).
Comment 2 Katrin Fischer 2020-07-01 20:46:56 UTC
Hi David, can you please write up an RFC for the new endpoint to be voted on?
https://wiki.koha-community.org/wiki/REST_api_RFCs
We are trying to make the new API as consistent and clean as possible.
Comment 3 David Cook 2020-07-01 23:52:09 UTC
(In reply to Katrin Fischer from comment #2)
> Hi David, can you please write up an RFC for the new endpoint to be voted on?
> https://wiki.koha-community.org/wiki/REST_api_RFCs
> We are trying to make the new API as consistent and clean as possible.

Absolutely. That sounds great to me. 

Once I've written up the RFC, do I need to email someone or is someone tracking changes on that page?
Comment 4 Katrin Fischer 2020-07-02 06:09:24 UTC
You can email and put it on the next dev meetings agenda for discussion/vote.
Comment 5 David Cook 2020-07-06 01:07:44 UTC
(In reply to Katrin Fischer from comment #4)
> You can email and put it on the next dev meetings agenda for discussion/vote.

Great. I don't have a timeline for this at the moment, but I'll keep it in mind.
Comment 7 David Cook 2020-08-06 07:49:25 UTC
I never heard back here or on koha-devel about the API RFC, so I'm just going to press ahead with this one.
Comment 8 David Cook 2020-08-06 08:36:37 UTC
And I've just had another thought that could make this more robust.

At the moment, I'm planning on doing a synchronous import. However, when we have RabbitMQ, it would be cool to have the API just stage the import, and let a background worker do the actual import work.

In this scenario, the API endpoint could return a transaction ID, which could then be used to poll Koha for the status of the import. That said, polling suits human-machine interactions better than machine-machine ones, and it wouldn't make sense for a harvester/downloader to wait for the transaction to complete anyway.
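Just to sketch the idea (none of this exists yet; the transaction ID and the status route below are entirely hypothetical):

    # Hypothetical async flow - submit, get a transaction ID back...
    curl -XPOST -u username:password \
      -H "x-koha-oaipmh-repository: http://koha-community.org/oai.pl" \
      http://localhost:8081/api/v1/import/oaipmh/biblios -d @oaipmh.xml
    => {"transaction_id": 42}

    # ...then poll a (made-up) status route with that ID
    curl -u username:password \
      http://localhost:8081/api/v1/import/oaipmh/biblios/42
    => {"transaction_id": 42, "status": "completed"}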

Maybe I just shouldn't worry about it for now...
Comment 9 David Cook 2020-08-06 12:29:05 UTC
Created attachment 107880 [details] [review]
Bug 25905: Add new 'import_oaipmh_biblios' table
Comment 10 David Cook 2020-08-06 12:29:09 UTC
Created attachment 107881 [details] [review]
Bug 25905: DBIx::Class files
Comment 11 David Cook 2020-08-06 12:29:13 UTC
Created attachment 107882 [details] [review]
Bug 25905: Create /import/oaipmh/biblios API endpoint
Comment 12 David Cook 2020-08-06 12:29:18 UTC
Created attachment 107883 [details] [review]
Bug 25905: Tidy up warns
Comment 13 David Cook 2020-08-06 12:34:28 UTC
These patches should work, but I've run out of time/energy tonight to write the unit tests. 

Test plan:

1. Turn on RESTBasicAuth
2. curl -XPOST -u username:password -H "x-koha-oaipmh-repository: http://koha-community.org/oai.pl" http://localhost:8081/api/v1/import/oaipmh/biblios -d @oaipmh.xml

That oaipmh.xml file can be a ListRecords or GetRecord document. The API will handle adds, updates, and deletes. I'll look at providing some test files later too.
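For reference, a minimal GetRecord document has roughly this shape (the identifier and datestamp are invented, and I'm showing oai_dc metadata for brevity; in practice the payload would more likely carry marcxml). A deleted record is just a <header status="deleted"> with no <metadata>, which is how the endpoint can distinguish deletes from adds/updates:

    <?xml version="1.0" encoding="UTF-8"?>
    <OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
      <responseDate>2020-08-06T12:00:00Z</responseDate>
      <request verb="GetRecord" identifier="oai:example.org:123"
               metadataPrefix="oai_dc">http://example.org/oai.pl</request>
      <GetRecord>
        <record>
          <header>
            <identifier>oai:example.org:123</identifier>
            <datestamp>2020-08-01T00:00:00Z</datestamp>
          </header>
          <metadata>
            <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
              <dc:title>An example record</dc:title>
            </oai_dc:dc>
          </metadata>
        </record>
      </GetRecord>
    </OAI-PMH>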
Comment 14 David Cook 2020-08-06 12:52:28 UTC
Once this is in place, I will keep working on my Golang-based harvester fronted by a Koha plugin.

Once Koha is using RabbitMQ for a job queue, I'll probably change this API endpoint to stage the import, and then let the background worker complete the actual import.
Comment 15 David Cook 2020-10-20 00:28:05 UTC
(In reply to David Cook from comment #14)
> Once Koha is using RabbitMQ for a job queue, I'll probably change this API
> endpoint to stage the import, and then let the background worker complete
> the actual import.

It looks like the future has caught up with me. Bug 22417 has been pushed to master, so I could have the endpoint accept and stage the records, then pass messages to a background worker via RabbitMQ.

It'll be more complicated, but it will be much more robust and scalable.
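To sketch what that staging handoff might look like (the queue name and message shape here are guesses, not settled design; I'm showing Net::Stomp directly because RabbitMQ exposes a STOMP listener): after staging the records in import_oaipmh_biblios, the endpoint would only need to hand the worker a pointer to the staged batch.

    use strict;
    use warnings;
    use JSON qw(encode_json);
    use Net::Stomp;

    # Connect to RabbitMQ's STOMP listener (default port 61613);
    # host and credentials are placeholder guesses
    my $stomp = Net::Stomp->new({ hostname => 'localhost', port => 61613 });
    $stomp->connect({ login => 'guest', passcode => 'guest' });

    # Tell a background worker which staged batch to import
    $stomp->send({
        destination => '/queue/koha-oaipmh-import',  # hypothetical queue name
        body        => encode_json({ import_batch_id => 123 }),
    });
    $stomp->disconnect;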