Following on from previous work on #10662 and koha-plugin-oaipmh-import, I'm now thinking that it would be best to have a REST API endpoint for importing OAI-PMH records in mainstream Koha. This would give libraries assurance that the core OAI-PMH ingest functionality is in the main codebase and has Koha community support. For now, I'm still planning to keep the OAI-PMH harvester/client itself as a plugin/third-party dependency, but completing this work will allow Koha libraries to use any OAI-PMH harvester they choose to ingest records into Koha. That could be a Perl cronjob using HTTP::OAI, or a Golang daemon, or whatever. Lots of possibilities. Anyway, I've already got a lot of this work done in koha-plugin-oaipmh-import, so I'll look at re-organising it to fit into Koha proper. It'll be in my own time, so I don't have a timeline for it, but it is on my TODO list.
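To give a concrete idea, here is a rough sketch of the simplest possible harvester: a Perl cronjob that pulls one page of records and forwards the raw OAI-PMH response to the proposed endpoint. The endpoint path, header name, and credentials below are just placeholders from my current thinking, not a finalised API.

#!/usr/bin/perl
use Modern::Perl;
use LWP::UserAgent;
use MIME::Base64 qw( encode_base64 );

# Placeholder values; any OAI-PMH repository and any Koha instance would do.
my $repo = 'http://koha-community.org/oai.pl';
my $koha = 'http://localhost:8081/api/v1/import/oaipmh/biblios';
my $auth = 'Basic ' . encode_base64( 'username:password', q{} );

my $ua = LWP::UserAgent->new;

# Fetch one page of records (marcxml support on the repository is assumed).
my $harvest = $ua->get("$repo?verb=ListRecords&metadataPrefix=marcxml");
die 'Harvest failed: ' . $harvest->status_line unless $harvest->is_success;

# Forward the whole OAI-PMH response document to the proposed Koha endpoint.
my $import = $ua->post(
    $koha,
    'Authorization'            => $auth,
    'Content-Type'             => 'text/xml',
    'x-koha-oaipmh-repository' => $repo,
    Content                    => $harvest->decoded_content,
);
die 'Import failed: ' . $import->status_line unless $import->is_success;

A real harvester would obviously also need to handle resumption tokens and incremental (from/until) harvesting, but all of that stays on the client side; the endpoint only ever sees OAI-PMH documents.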
I think this would be really good to have: a stable endpoint to ingest the records, and the harvester as a plug-in (or even various harvester plug-ins).
Hi David, can you please write up an RFC for the new endpoint to be voted on?
https://wiki.koha-community.org/wiki/REST_api_RFCs
We are trying to make the new API as consistent and clean as possible.
(In reply to Katrin Fischer from comment #2)
> Hi David, can you please write up an RFC for the new endpoint to be voted on?
> https://wiki.koha-community.org/wiki/REST_api_RFCs
> We are trying to make the new API as consistent and clean as possible.

Absolutely. That sounds great to me. Once I've written up the RFC, do I need to email someone or is someone tracking changes on that page?
You can email and put it on the next dev meeting's agenda for discussion/vote.
(In reply to Katrin Fischer from comment #4)
> You can email and put it on the next dev meeting's agenda for discussion/vote.

Great. I don't have a timeline for this at the moment, but I'll keep it in mind.
Ok, I've written up a small RFC.

Referenced at: https://wiki.koha-community.org/wiki/REST_api_RFCs#Endpoints
Found at: https://wiki.koha-community.org/wiki/Import_biblios_oaipmh_endpoint_RFC
I never heard back here or on koha-devel about the API RFC, so I'm just going to keep going ahead with this one.
And I just had another thought which could make this more robust. At the moment I'm planning on a synchronous import. However, once we have RabbitMQ, it would be cool to have the API just stage the import and let a background worker do the actual import work. In that scenario, the endpoint could return a transaction ID, which could be used to poll Koha for the status of the import. That said, polling suits human-machine interactions better than machine-machine ones, and it wouldn't make sense for a harvester/downloader to wait for the transaction to complete anyway. Maybe I just shouldn't worry about it for now...
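If I did go down that route, the client side could look something like this. To be clear, the transaction-id response and the status resource below are purely hypothetical; they aren't in the RFC and the field names are made up.

use Modern::Perl;
use LWP::UserAgent;
use JSON qw( decode_json );

my $ua   = LWP::UserAgent->new;
my $base = 'http://localhost:8081/api/v1/import/oaipmh/biblios';

# A previously harvested OAI-PMH document (auth omitted for brevity).
my $xml = do { local $/; open my $fh, '<', 'oaipmh.xml' or die $!; <$fh> };

# POST would stage the records and return a transaction id (hypothetical).
my $staged = decode_json(
    $ua->post( $base, 'Content-Type' => 'text/xml', Content => $xml )->decoded_content
);

# A client that cared could then poll a (hypothetical) status resource.
my $state = 'staged';
until ( $state eq 'imported' or $state eq 'failed' ) {
    sleep 5;
    my $status = decode_json( $ua->get("$base/$staged->{transaction_id}")->decoded_content );
    $state = $status->{state};
}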
Created attachment 107880
Bug 25905: Add new 'import_oaipmh_biblios' table

Created attachment 107881
Bug 25905: DBIx::Class files

Created attachment 107882
Bug 25905: Create /import/oaipmh/biblios API endpoint

Created attachment 107883
Bug 25905: Tidy up warns
These patches should work, but I've run out of time/energy tonight to write the unit tests.

Test plan:
1. Turn on RESTBasicAuth
2. curl -XPOST -u username:password -H "x-koha-oaipmh-repository: http://koha-community.org/oai.pl" http://localhost:8081/api/v1/import/oaipmh/biblios -d @oaipmh.xml

That oaipmh.xml file can be a ListRecords or GetRecord document. The API will handle add/update/delete. I'll look at providing some test files later too.
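Until I attach proper test files, a skeletal GetRecord response is enough to exercise the endpoint. The structure below follows the OAI-PMH 2.0 spec (namespaces trimmed to the essentials); the identifier, dates, and MARC content are placeholders. A deleted record would instead carry status="deleted" on the <header> element and have no <metadata>.

<?xml version="1.0" encoding="UTF-8"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <responseDate>2020-07-01T00:00:00Z</responseDate>
  <request verb="GetRecord" identifier="oai:example.org:1" metadataPrefix="marcxml">http://koha-community.org/oai.pl</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:example.org:1</identifier>
        <datestamp>2020-07-01T00:00:00Z</datestamp>
      </header>
      <metadata>
        <record xmlns="http://www.loc.gov/MARC21/slim">
          <leader>00000nam a2200000 a 4500</leader>
          <datafield tag="245" ind1="0" ind2="0">
            <subfield code="a">An example title</subfield>
          </datafield>
        </record>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>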
Once this is in place, I will keep working on my Golang-based harvester fronted by a Koha plugin. Once Koha is using RabbitMQ for a job queue, I'll probably change this API endpoint to stage the import, and then let the background worker complete the actual import.
(In reply to David Cook from comment #14)
> Once Koha is using RabbitMQ for a job queue, I'll probably change this API
> endpoint to stage the import, and then let the background worker complete
> the actual import.

It looks like the future has caught up with me. Bug 22417 has been pushed to master, so I could make it so that the endpoint accepts and stages the records, and then passes messages to a background worker via RabbitMQ. It'll be more complicated but it will be much more robust and scalable.
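Just to sketch the shape of that idea: after staging the rows, the controller would only need to drop a small message on the broker and return. This is not how bug 22417 actually wires things up (Koha::BackgroundJob has its own plumbing), and the queue name and message fields below are made up for illustration; it's only meant to show how little work stays in the request cycle.

use Modern::Perl;
use JSON qw( encode_json );
use Net::Stomp;

# Hypothetical: after staging the records in import_oaipmh_biblios, notify
# a background worker over STOMP so it can do the actual import later.
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
$stomp->connect( { login => 'guest', passcode => 'guest' } );
$stomp->send(
    {
        destination => '/queue/koha-oaipmh-import',
        body        => encode_json( { import_oaipmh_biblio_id => 123 } ),
    }
);
$stomp->disconnect;

The worker would then read the staged rows, run the same add/update/delete logic the synchronous version uses, and record the outcome.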
With the availability of bug 35659, I don't think this bug report is necessary anymore.