Bug 27584 - Improve OAI-PMH provider performance
Summary: Improve OAI-PMH provider performance
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Web services
Version: Main
Hardware: All All
Importance: P5 - low enhancement
Assignee: Ere Maijala
QA Contact: Kyle M Hall (khall)
URL:
Keywords:
Depends on:
Blocks: 27463 28741 29135
Reported: 2021-02-01 12:59 UTC by Ere Maijala
Modified: 2022-06-06 20:29 UTC (History)
8 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
21.05.00
Circulation function:


Attachments
Bug 27584: Refactor OAI-PMH paging to improve performance (16.07 KB, patch)
2021-02-02 07:41 UTC, Ere Maijala
Bug 27584: Refactor OAI-PMH paging to improve performance (21.06 KB, patch)
2021-02-04 07:13 UTC, Ere Maijala
Bug 27584: Refactor OAI-PMH paging to improve performance (21.06 KB, patch)
2021-02-04 07:15 UTC, Ere Maijala
Bug 27584: Refactor OAI-PMH paging to improve performance (21.12 KB, patch)
2021-02-04 23:25 UTC, David Cook
Bug 27584: Refactor OAI-PMH paging to improve performance (21.20 KB, patch)
2021-04-29 14:50 UTC, Nick Clemens (kidclamp)
Simple script for testing (777 bytes, application/x-perl)
2021-04-29 14:53 UTC, Nick Clemens (kidclamp)

Description Ere Maijala 2021-02-01 12:59:49 UTC
Currently Koha's OAI-PMH provider uses LIMIT and OFFSET to return a chunk of records for each request. While this works relatively well for a small to medium database, it gets prohibitively expensive for a large database, especially when items are included and there are a lot of them.

I believe this can be improved by using the biblionumber to choose the starting point of each chunk. Patch coming up soon.
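For illustration, the difference between the two paging strategies looks roughly like this; this is a minimal sketch against the biblio table (biblionumber, timestamp), not the exact SQL the patch will use:

-- OFFSET paging: the server must walk and discard every skipped row,
-- so each successive chunk gets more expensive.
SELECT biblionumber, timestamp
FROM biblio
ORDER BY biblionumber
LIMIT 50 OFFSET 190000;

-- Keyset paging: resume from the last biblionumber of the previous
-- chunk; the primary key index jumps straight to the starting point,
-- so every chunk costs roughly the same.
SELECT biblionumber, timestamp
FROM biblio
WHERE biblionumber > 190341  -- last id returned by the previous chunk
ORDER BY biblionumber
LIMIT 50;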
Comment 1 Ere Maijala 2021-02-02 07:41:47 UTC
Created attachment 116184 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

Uses next biblionumber instead of large offset in the queries.

Test plan:

1. Without the patch, try harvesting a Koha database with `include_items: 1` in the OAI-PMH configuration file pointed to by preference OAI-PMH:ConfFile and take note of performance. For useful metrics the database must be large enough to not fit in InnoDB buffers or OS file cache.
2. Apply the patch.
3. Run tests: prove -v t/db_dependent/OAI
4. Repeat the harvesting from step 1 and compare the performance.
Comment 2 Ere Maijala 2021-02-02 10:30:19 UTC
Comment on attachment 116184 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

Initial patch was bad, needs some fixing.
Comment 3 Ere Maijala 2021-02-02 11:02:18 UTC
Comment on attachment 116184 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

Oops, the patch is fine (I messed up with Plack).
Comment 4 Ere Maijala 2021-02-03 07:56:28 UTC
Comment on attachment 116184 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

I believe there's an even faster way; a patch is coming up once I'm done benchmarking and testing.
Comment 5 David Cook 2021-02-04 06:50:56 UTC
Curious to see what you come up with here.

--

Here's a little look at doing a UNION on nearly 600,000 biblio and deletedbiblio entries. 

EXPLAIN select * from (select biblionumber from deletedbiblio UNION select biblionumber from biblio) u limit 1;
+------+--------------+---------------+-------+---------------+----------+---------+------+--------+-------------+
| id   | select_type  | table         | type  | possible_keys | key      | key_len | ref  | rows   | Extra       |
+------+--------------+---------------+-------+---------------+----------+---------+------+--------+-------------+
|    1 | PRIMARY      | <derived2>    | ALL   | NULL          | NULL     | NULL    | NULL | 578038 |             |
|    2 | DERIVED      | deletedbiblio | index | NULL          | blbnoidx | 4       | NULL |   4361 | Using index |
|    3 | UNION        | biblio        | index | NULL          | blbnoidx | 4       | NULL | 573677 | Using index |
| NULL | UNION RESULT | <union2,3>    | ALL   | NULL          | NULL     | NULL    | NULL |   NULL |             |
+------+--------------+---------------+-------+---------------+----------+---------+------+--------+-------------+
4 rows in set (0.00 sec)

select * from (select biblionumber from deletedbiblio UNION select biblionumber from biblio) u limit 1;
+--------------+
| biblionumber |
+--------------+
|            3 |
+--------------+
1 row in set (13.39 sec)

OR

select * from (select biblionumber from deletedbiblio UNION select biblionumber from biblio) u limit 0,50;
50 rows in set (12.60 sec)

select biblionumber, (select metadata from biblio_metadata where biblionumber = u.biblionumber) from (select biblionumber from deletedbiblio UNION select biblionumber from biblio) u limit 0,50;
50 rows in set (13.58 sec)
--

Of course, there's no index on `timestamp`, so we can't sort that list. However, perhaps if we added a composite `timestamp,biblionumber` index to the biblio and deletedbiblio tables... 

Still... 13 seconds isn't brilliant.
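
For the record, a hypothetical version of that composite index (the name ts_bn is invented here, and this is not part of any patch on this bug):

ALTER TABLE biblio ADD INDEX ts_bn (timestamp, biblionumber);
ALTER TABLE deletedbiblio ADD INDEX ts_bn (timestamp, biblionumber);

-- Each side of the UNION could then walk its own index in timestamp
-- order instead of forcing a filesort over the derived table.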
Comment 6 David Cook 2021-02-04 06:53:06 UTC
Another idea... would be moving the index to a different table or a different system.

In theory, there's no reason why we couldn't use Zebra or Elasticsearch for doing OAI. If I recall correctly, I think that DSpace uses Solr for its OAI. 

But the further away from the database, the less likely it's going to be correct/up-to-date.
Comment 7 Ere Maijala 2021-02-04 06:53:50 UTC
Hold on, a new version coming up shortly. Much improved, I believe! :)
Comment 8 David Cook 2021-02-04 06:56:00 UTC
I think there's an open bug somewhere to harmonize the biblio and deletedbiblio tables as well. That's probably the optimal plan...
Comment 9 David Cook 2021-02-04 06:56:26 UTC
(In reply to Ere Maijala from comment #7)
> Hold on, a new version coming up shortly. Much improved, I believe! :)

End of the work day for me, mate. Actually, that was an hour ago, but performance always thrills me.

Looking forward to the new version!
Comment 10 Ere Maijala 2021-02-04 07:13:32 UTC
Created attachment 116309 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

Includes the following optimizations:
- Use next biblionumber instead of large offset in the queries.
- Use unions instead of subqueries
- Avoid fetching item timestamps when items are not included.

Test plan:

1. Without the patch, try harvesting a Koha database with (and without for good measure) `include_items: 1` in the OAI-PMH configuration file pointed to by preference OAI-PMH:ConfFile and take note of performance. For useful metrics the database must be large enough to not fit in InnoDB buffers or OS file cache.
2. Apply the patch.
3. Run tests: prove -v t/db_dependent/OAI
4. Repeat the harvesting from step 1 and compare the performance.
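
To illustrate the second optimization above (unions instead of subqueries) in generic terms; this is a sketch over biblio_metadata and items, not the patch's literal queries:

-- Correlated subquery: the inner SELECT is evaluated once per
-- candidate biblio row.
SELECT b.biblionumber
FROM biblio_metadata b
WHERE b.timestamp >= ?
   OR EXISTS (SELECT 1 FROM items i
              WHERE i.biblionumber = b.biblionumber
                AND i.timestamp >= ?);

-- Union: each branch is a simple, independently indexable scan, and
-- UNION (without ALL) folds duplicate biblionumbers.
SELECT biblionumber FROM biblio_metadata WHERE timestamp >= ?
UNION
SELECT biblionumber FROM items WHERE timestamp >= ?;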
Comment 11 Ere Maijala 2021-02-04 07:15:26 UTC
Created attachment 116310 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

Includes the following optimizations:
- Use next biblionumber instead of large offset in the queries.
- Use unions instead of subqueries
- Avoid fetching item timestamps when items are not included.

Test plan:

1. Without the patch, try harvesting a Koha database with (and without for good measure) `include_items: 1` in the OAI-PMH configuration file pointed to by preference OAI-PMH:ConfFile and take note of performance. For useful metrics the database must be large enough to not fit in InnoDB buffers or OS file cache.
2. Apply the patch.
3. Run tests: prove -v t/db_dependent/OAI
4. Repeat the harvesting from step 1 and compare the performance.
Comment 12 Ere Maijala 2021-02-04 07:18:40 UTC
This should work pretty well. I just haven't had time yet to test with a large number of items, but I'll try to accomplish that as well (and hope it works...).
Comment 13 Ere Maijala 2021-02-04 11:04:10 UTC
Some benchmarking results:

My test system is intentionally memory-constrained, items are included and OAI-PMH:MaxCount = 1000. All values are measured from 100 requests starting at offset 190341 when doing a full harvesting without date limits. The database contains 0.97 million biblios and 2.4 million items. Reported times are averages with standard deviation in parentheses.

Full duration of oai.pl: 12.98s (0.94s)
Biblionumber query: 0.0024s (0.0024s)
Fetch+create record: 6.84s (0.51s)
Creating response: 4.97s (0.50s)

As far as I can see, this represents the full harvesting run pretty well, so it no longer slows down when it gets to higher biblionumbers. As the results indicate, query duration for a set of results is now pretty much negligible. We spend most of the time collecting the record metadata and creating a DOM for it, and then writing the actual response, which is handled by the HTTP::OAI module.

So to sum it up: with these changes the biblionumber query is no longer a bottleneck. Previously, it easily took 15 to 20 seconds on my test system.
Comment 14 Ere Maijala 2021-02-04 17:48:34 UTC
Adding to the previous results, harvesting of the aforementioned records completed in less than 5 hours, and the harvesting speed was pretty much constant.
Comment 15 David Cook 2021-02-04 22:45:15 UTC
After reviewing the code again, I'm only now realizing that we're not even trying to do a UNION of biblios and deleted biblios, so the results aren't in date order...

Although, having reviewed the OAI-PMH spec, I see it actually explicitly says that date ordering should not be assumed:

"The protocol does not define the semantics of incompleteness. Therefore, a harvester should not assume that the members in an incomplete list conform to some selection criteria (e.g., date ordering)."

You learn something new every day...
Comment 16 David Cook 2021-02-04 23:17:57 UTC
At a glance, I think that your code is probably an improvement.

However, looking at Koha::OAI::Server::ListBase makes me wonder whether we should denormalize a bit. If the biblio table had a timestamp for the last item activity, that would remove the need for a lot of these complex SQL queries.

If we fetched all the biblio metadata in our first query, we'd also save overhead. 

But... both of those would involve more work.
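
A hypothetical sketch of that denormalization (the column name item_activity and the trigger are invented for illustration; nothing like this is implemented here):

ALTER TABLE biblio ADD COLUMN item_activity TIMESTAMP NULL;

-- Example trigger covering item updates (inserts and deletes would
-- need their own triggers, or application code):
CREATE TRIGGER items_touch_biblio AFTER UPDATE ON items
FOR EACH ROW
  UPDATE biblio SET item_activity = NOW()
  WHERE biblionumber = NEW.biblionumber;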
Comment 17 David Cook 2021-02-04 23:23:38 UTC
My test plan:

0) Use koha-testing-docker
1) Enable "OAI-PMH"
2) Go to http://localhost:8080/cgi-bin/koha/oai.pl?verb=ListRecords&metadataPrefix=oai_dc
3) Apply patch
4) koha-plack --restart kohadev
5) Go to http://localhost:8080/cgi-bin/koha/oai.pl?verb=ListRecords&metadataPrefix=oai_dc
6) Note that the result lists are the same

7) Set OAI-PMH:ConfFile to "/kohadevbox/koha/oai-conf.yml"
8) Create oai-conf.yml* 
9) Go to http://localhost:8080/cgi-bin/koha/oai.pl?verb=ListRecords&metadataPrefix=marcxml
10) Note that identifiers are same as previous list
11) Note that items (952 fields) are included in metadata


*
---
format:
    marcxml:
      metadataPrefix: marcxml
      metadataNamespace: http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim
      schema: http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd
      include_items: 1
    oai_dc:
      metadataPrefix: oai_dc
      metadataNamespace: http://www.openarchives.org/OAI/2.0/oai_dc/
      schema: http://www.openarchives.org/OAI/2.0/oai_dc.xsd
      xsl_file: /usr/share/koha/intranet/htdocs/intranet-tmpl/prog/en/xslt/MARC21slim2OAIDC.xsl
Comment 18 David Cook 2021-02-04 23:25:28 UTC
Created attachment 116345 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

Includes the following optimizations:
- Use next biblionumber instead of large offset in the queries.
- Use unions instead of subqueries
- Avoid fetching item timestamps when items are not included.

Test plan:

1. Without the patch, try harvesting a Koha database with (and without for good measure) `include_items: 1` in the OAI-PMH configuration file pointed to by preference OAI-PMH:ConfFile and take note of performance. For useful metrics the database must be large enough to not fit in InnoDB buffers or OS file cache.
2. Apply the patch.
3. Run tests: prove -v t/db_dependent/OAI
4. Repeat the harvesting from step 1 and compare the performance.

Signed-off-by: David Cook <dcook@prosentient.com.au>
Comment 19 David Cook 2021-02-04 23:28:06 UTC
As a tester, I'm not really commenting on performance. I'm just confirming that the code doesn't break anything and works as a user would expect.

For what it's worth, on a small database, it's quite quick. I don't have a big enough test database on hand at this moment to test the code on. But at a glance it looks like it should be fine.
Comment 20 Ere Maijala 2021-02-05 06:26:24 UTC
Thanks for the comments and review!

Indeed, things will be easier when deleted records are included in the normal tables, but for now this is quite alright, and there's no need to union them all together.

I think there's an advantage to tracking the different timestamps even if it's more complicated. When item data is not included, it wouldn't be useful to harvest biblios as updated when an item changes, since the biblio record would be identical. If you meant that we could have another timestamp that would indicate the latest change for the logical record that OAI-PMH would provide, yeah, that'd work, but trying to track the latest item changes in biblios would complicate other functionality and could also have unintended consequences such as increased overhead for batch operations. Also, I'm still optimistic that we can get bug 20447 merged sometime in the future, and that would add to the complexity.

As I see it, the "proper" solution would be to have a publishing process that would run in background to create sets of records for harvesting. With published sets we could handle inclusion of items, deletions etc. in the publishing process, and the OAI-PMH provider would only need to serve the results. However, this would be a whole lot more complicated than what we currently do, and there'd be a fair chance that the publishing process would do a lot of work to create result sets that nobody ever harvests. Additionally, it would make quick (semi-realtime) incremental harvesting impossible.
Comment 21 Frédéric Demians 2021-02-05 19:02:32 UTC
It's not without wonder that, bug as it is, the Koha OAI server has now become a dark mystery to me, a little like Jean Sibelius's Violin Concerto in D minor, Op. 47.
Comment 22 David Cook 2021-02-08 00:04:42 UTC
(In reply to Ere Maijala from comment #20)

> 
> I think there's an advantage to tracking the different timestamps even if
> it's more complicated. When item data is not included, it wouldn't be useful
> to harvest biblios as updated when an item changes, since the biblio record
> would be identical. 

Agreed

> If you meant that we could have another timestamp that
> would indicate the latest change for the logical record that OAI-PMH would
> provide, yeah, that'd work, but trying to track the latest item changes in
> biblios would complicate other functionality and could also have unintended
> consequences such as increased overhead for batch operations. 
> 

I meant we'd keep the existing timestamp for the bibliographic data and then add a timestamp to indicate when the bibliographic data + items/holdings data was last changed. Then, depending on the OAI setting, the provider would decide which timestamp was relevant.

> Also, I'm
> still optimistic that we can get bug 20447 merged sometime in the future,
> and that would add to the complexity.
> 

That'll be interesting when that time comes.

> As I see it, the "proper" solution would be to have a publishing process
> that would run in background to create sets of records for harvesting. With
> published sets we could handle inclusion of items, deletions etc. in the
> publishing process, and the OAI-PMH provider would only need to serve the
> results. However, this would be a whole lot more complicated than what we
> currently do, and there'd be a fair chance that the publishing process would
> do a lot of work to create result sets that nobody ever harvests.
> Additionally, it would make quick (semi-realtime) incremental harvesting
> impossible.

I don't think that would be feasible. I reckon all we need are some good indexes. 

But I think this change makes sense for now.
Comment 23 Nick Clemens (kidclamp) 2021-04-29 14:50:32 UTC
Created attachment 120316 [details] [review]
Bug 27584: Refactor OAI-PMH paging to improve performance

Includes the following optimizations:
- Use next biblionumber instead of large offset in the queries.
- Use unions instead of subqueries
- Avoid fetching item timestamps when items are not included.

Test plan:

1. Without the patch, try harvesting a Koha database with (and without for good measure) `include_items: 1` in the OAI-PMH configuration file pointed to by preference OAI-PMH:ConfFile and take note of performance. For useful metrics the database must be large enough to not fit in InnoDB buffers or OS file cache.
2. Apply the patch.
3. Run tests: prove -v t/db_dependent/OAI
4. Repeat the harvesting from step 1 and compare the performance.

Signed-off-by: David Cook <dcook@prosentient.com.au>

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Comment 24 Nick Clemens (kidclamp) 2021-04-29 14:53:45 UTC
Created attachment 120317 [details]
Simple script for testing

Simple script for benchmarking and/or checking results

With 30k records, none deleted, I got:
~3 min before the patch / ~2:30 with the patch

With 15k deleted and 15k active:
~2 min before / ~1:40 after

I also checked that the lists of biblionumbers were identical before and after the patches.
Comment 25 Ere Maijala 2021-04-30 05:24:45 UTC
Nick, thanks for QA and benchmark results. It's good to see that it makes a difference with smaller data sets as well. :)
Comment 26 Jonathan Druart 2021-05-07 12:41:24 UTC
Pushed to master for 21.05, thanks to everybody involved!
Comment 27 Fridolin Somers 2021-05-11 14:31:38 UTC
Enhancement not pushed to 20.11.x