We have found on larger sites that importing large batches can cause enough load on an ES server to begin producing timeouts. The matching currently generates a query per matchpoint and uses the standard query builder, which adds fields etc. that aren't needed. This bug will attempt to simplify the queries we send for matching.
Created attachment 181717 [details] [review]
Bug 39790: Use constant score queries for matching

To reduce some of the overhead of calculating relevancy here, we can use a constant_score query to simplify the search passed to Elasticsearch. This patch adds a new build_biblio_match_query routine in both the Zebra and ES search modules. For ES we update matching to use the new queries; for Zebra the routine is simply a pass-through to the previous code. (A rough sketch of the kind of query involved follows the test plan below.)

To test:
1 - Export some records from Koha
2 - Set up a matching rule on title, author, and isbn, score of 200 each
3 - Stage the file and match, confirm all records match
4 - Apply patch, restart all
5 - Change the matching to 'none' and 'Apply different matching rules'
6 - Change matching back to your rule and apply again
7 - You should have the same matches as before
8 - Change your search engine and unmatch/match again
9 - You should have the same matches
10 - Test with other matching rules as desired
11 - Sign off!
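To illustrate (this is not the actual contents of attachment 181717, which has the authoritative code), here is a minimal, hypothetical sketch of the kind of request body a constant_score matchpoint query could send to Elasticsearch. The field name "title" and the "and" operator are assumptions for the example only:

use Modern::Perl;
use JSON;

# Wrap the matchpoint in constant_score so ES skips relevancy
# scoring and simply filters on the matchpoint field, instead of
# going through the full query builder.
my $match_query = {
    query => {
        constant_score => {
            filter => {
                match => {
                    title => {
                        query    => "A sample title from the incoming record",
                        operator => "and",
                    },
                },
            },
        },
    },
};

# Print the JSON body that would be posted to the _search endpoint.
say JSON->new->pretty->encode($match_query);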
Marking NSO. This can be tested, but I think there are some things to consider. I am removing relevancy from the search for each matchpoint, which could mean that combining results across different fields does not produce the same matches; however, this is already a problem with splitting the searches, as there is no guarantee that the most relevant records for a title match are the most relevant for an author match. Matches should really be on fields that are unique to the two records, to limit large result sets, especially as we limit to 10 matches by default. For an author with 100 books we cannot expect to always get the matching title in the top 10, with or without relevancy, since ideally all records for that author would match the author field with equal relevancy. I think we need to combine all the searches into a single search (a rough sketch of the idea follows), but that will take more refactoring, and I suspect this patch will already improve the search cost.
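For discussion only, here is a rough, hypothetical sketch of what a single combined search might look like, with each matchpoint as a constant_score "should" clause boosted by its rule score; the field names and the 200 score come from the test plan above and are purely illustrative:

use Modern::Perl;
use JSON;

# One bool query with a should clause per matchpoint; the boost on
# each constant_score clause stands in for the matching rule score,
# so a single request could replace the per-matchpoint searches.
my @matchpoints = (
    { field => 'title',  value => 'A sample title',  score => 200 },
    { field => 'author', value => 'A sample author', score => 200 },
    { field => 'isbn',   value => '9780000000000',   score => 200 },
);

my $combined = {
    query => {
        bool => {
            should => [
                map {
                    +{
                        constant_score => {
                            boost  => $_->{score},
                            filter => { match => { $_->{field} => $_->{value} } },
                        },
                    };
                } @matchpoints
            ],
            minimum_should_match => 1,
        },
    },
};

say JSON->new->pretty->encode($combined);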