Bug 15108 optimized the set search, but the same issue exists in ./misc/migration_tools/build_oai_sets.pl.
My frustration with its speed and memory use is what led me to write the patch for bug 37486. But even then, it's still very slow and memory hungry. At a glance, we're fetching the entire database of metadata (deleted and active records) into an array instead of using an iterator, and the only reason we're doing that is to get a count of the array entries. An easy first optimization would be to run a count() query first and then run the main query itself; database-level caching should make that reasonably quick. That should improve memory usage, although I don't know about speed. In theory it might actually be slower, since it would have to fetch the records one by one from the database result set. Then again, it might be faster, since it wouldn't have to spend time allocating a big enough chunk of memory to hold everything. Time will tell, I guess. -- Anyway, just a thought for first steps.
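Something like this, as a minimal sketch with plain DBI (C4::Context->dbh is Koha's usual handle accessor); I'm only showing biblio_metadata here, while the real script also walks deletedbiblio_metadata:

    use C4::Context;

    # Cheap COUNT(*) up front so we can report progress without slurping
    # every row into an array first.
    my $dbh = C4::Context->dbh;
    my ($count) = $dbh->selectrow_array(
        q{SELECT COUNT(*) FROM biblio_metadata}
    );
    print "Processing $count records...\n";

    # Then stream the rows one at a time from the statement handle.
    my $sth = $dbh->prepare(
        q{SELECT biblionumber, metadata FROM biblio_metadata}
    );
    $sth->execute;
    my $done = 0;
    while ( my ( $biblionumber, $metadata ) = $sth->fetchrow_array ) {
        # ... evaluate the OAI set mappings against $metadata here ...
        print "$done/$count\n" unless ++$done % 1000;
    }

Whether fetchrow_array in a loop beats one big fetch in wall-clock time is the open question above, but the memory ceiling should drop a lot.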
I don't have to build sets often, so I'm probably not going to work on this in the short term... but it would be great to improve this one day...
I was thinking we could also add parallel processing. That would improve speed but also increase memory usage.
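A hedged sketch of what that could look like with Parallel::ForkManager, sharding the work by biblionumber range. get_dbh() is a hypothetical helper that opens a fresh connection per child (DBI handles can't be shared across a fork), and the range boundaries are made up:

    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(4);    # 4 worker processes

    my @ranges = ( [ 1, 100_000 ], [ 100_001, 200_000 ] );    # example shards

    for my $range (@ranges) {
        $pm->start and next;    # parent: spawn a child and keep looping
        my ( $min, $max ) = @$range;
        my $dbh = get_dbh();    # hypothetical: fresh handle per child
        my $sth = $dbh->prepare(
            q{SELECT biblionumber, metadata FROM biblio_metadata
               WHERE biblionumber BETWEEN ? AND ?}
        );
        $sth->execute( $min, $max );
        while ( my ( $biblionumber, $metadata ) = $sth->fetchrow_array ) {
            # ... evaluate OAI set mappings for this record ...
        }
        $pm->finish;    # child exits
    }
    $pm->wait_all_children;

The memory trade-off is exactly as stated: N workers means N open result sets at once.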
Another idea for build_oai_sets.pl specifically: adding a --where option. Typically, this script is only needed for the initial population of an OAI-PMH set, so we fetch everything and compare it to the mappings. However, especially with bug 37486, we can probably put together a SQL WHERE clause that reduces, at the database level, the number of records to consider for the mappings. For instance, if I only want bib records that have items at branch A, I don't need all of biblio_metadata and deletedbiblio_metadata. A WHERE clause could dramatically reduce the number of metadata records fetched from the database.
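A rough sketch of the option itself: it just splices the user's condition into the SQL, the same trusting approach that rebuild_zebra.pl's --where takes, if I remember right (fine for a CLI maintenance script run by an admin). The items/homebranch columns are from the standard Koha schema:

    use Getopt::Long;

    my $where;
    GetOptions( 'where=s' => \$where );

    my $sql = q{SELECT biblionumber, metadata FROM biblio_metadata};
    $sql .= " WHERE $where" if $where;    # trusted operator-supplied input

    # Example invocation, limiting the run to bibs with items at branch A:
    #   build_oai_sets.pl --where "biblionumber IN
    #       (SELECT biblionumber FROM items WHERE homebranch = 'A')"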