Bug 22690

Summary: Merging records with many items too slow (Elasticsearch)
Product: Koha
Reporter: Vitor Fernandes <vitorfernandes87>
Component: Searching - Elasticsearch
Assignee: Ere Maijala <ere.maijala>
Status: CLOSED FIXED
QA Contact: Martin Renvoize <martin.renvoize>
Severity: normal
Priority: P5 - low
CC: alex.arnaud, axel.amghar, black23, david, ere.maijala, jonathan.druart, joonas.kylmala, josef.moravec, katrin.fischer, kyle, m.de.rooy, martin.renvoize, mtj, nick, nugged, victor
Version: Main
Hardware: All
OS: All
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
This enhancement significantly improves performance when merging records with many items (for an installation using Elasticsearch). Before this enhancement, the web server would time out as the search engine reindexed the origin record and the destination record for each item moved.
Version(s) released in:
21.11.00
Bug Depends on: 28479    
Bug Blocks: 20447    
Attachments:
- Bug 22690 - Merging records with many items (ElasticSearch)
- Bug 22690: Merging records with many items (ElasticSearch) (two revisions)
- Bug 22690: Refactor merging of records to improve performance (Elasticsearch) (many superseded revisions)
- error
- Bug 22690 - Output from tests after patch applied
- Bug 22690 - Output from tests for Item.t
- Test results - Bug 22690
- Bug 22690: Add more tests (several superseded revisions)
- Bug 22690: (QA follow-up) Silence manually generated warnings
- Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio()
- Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level
- Bug 22690: Add missing txn_begin in subtest
- Bug 22690: Remove MoveItemFromBiblio import
- Bug 22690: (QA follow-up) Rename 'item_orders' to 'orders'
- Bug 22690: (QA follow-up) Improve negation syntax
- Bug 22690: (QA follow-up) Move adopt_items_from_biblios to Koha::Items
- Bug 22690: (QA follow-up) Clarify uses of DBIC
- Bug 22690: (QA follow-up) Add relationships to linktracker
- Bug 22690: DBIC Schema Updates
- Bug 22690: (QA follow-up) Use relationship accessor
- Bug 22690: (QA follow-up) Add TrackedLink classes and use them
- Bug 22690: (QA follow-up) Correct variable name
- Bug 22690: (QA follow-up) Fix indexing for Items sets
- Bug 22690: Remove uneeded return and add no_triggers
- Bug 22690: Add koha_object[s]_class to fix TestBuilder.t
- Bug 22690: Fix the tracklink feature

Description Vitor Fernandes 2019-04-11 14:19:26 UTC
When merging records with many items, Koha reindexes both the origin record and the destination record for each item moved (see MoveItemFromBiblio in C4::Items).
If the record being eliminated has 1000 items, Koha performs 2000 reindexes.
Because of this behaviour, merging is far too slow.
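For illustration, a minimal sketch (not the actual C4::Items code) of the pattern described above:

    # Illustrative sketch only: each MoveItemFromBiblio call triggers a
    # reindex of BOTH the origin and destination records, so moving
    # 1000 items issues 2000 reindex requests.
    for my $itemnumber (@itemnumbers) {
        MoveItemFromBiblio( $itemnumber, $from_biblionumber, $to_biblionumber );
    }
    # A faster approach moves the whole set first and then reindexes
    # each of the two records exactly once.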

Test plan:

- Create 2 records (one with 1000 items for example)
- Add the records to one list and start the merging process of those records. 
- Choose the record with many items as the one to be eliminated.
- Start the merging
- After a while the web server should give you a timeout error (the merging process continues)
Comment 1 axel Amghar 2019-05-24 07:36:59 UTC
Hello,
When I try to merge records, I have an issue:
 Following required fields are missing:
-801


https://snag.gy/76Fh9R.jpg
This is a mandatory tag but I can't add it.
Does anyone know why?
Comment 2 Vitor Fernandes 2019-05-24 09:51:17 UTC
(In reply to axel Amghar from comment #1)
> Hello,
> When I try to merge records, I have an issue:
>  Following required fields are missing:
> -801
> 
> 
> https://snag.gy/76Fh9R.jpg
> This is a mandatory tag but I can't add it.
> Does anyone know why?

Axel, that happens because the framework chosen for merging has 801 as a mandatory field (that's my guess).
Comment 3 axel Amghar 2019-05-24 10:01:53 UTC
(In reply to Vitor Fernandes from comment #2)

> Axel, that happens because the framework chosen for merging has 801 as
> a mandatory field (that's my guess).

I think it is the same with "000" and "001".
They are mandatory but we can't add them.
So, do we have to fix it?
Comment 4 axel Amghar 2019-05-24 15:27:57 UTC Comment hidden (obsolete)
Comment 5 axel Amghar 2019-05-27 11:28:45 UTC Comment hidden (obsolete)
Comment 6 axel Amghar 2019-05-27 11:34:40 UTC
Created attachment 90116 [details] [review]
Bug 22690: Merging records with many items (ElasticSearch)

This patch allows us to merge many items without a timeout.

Test plan :

Without the patch :

> - Create 2 records (one with 1000 items for example)
> - Add the records to one list and start the merging process of those records.
> - Choose the record with many items as the one to be eliminated.
> - Start the merging
> - After a while the web server should give you a timeout error (the merging process continues)

With the patch :
- Do the same as above
- This time verify that all items have been merged without a timeout
Comment 7 axel Amghar 2019-05-27 11:42:08 UTC
Hello,
I found an issue: when you click on "Merge selected" and then click "Merge" right away, the destination record doesn't have time to load.
When you then look at your record, it has lost its MARC record.
Has anyone else been able to reproduce this bug?
Comment 8 Ere Maijala 2019-07-29 08:28:04 UTC
I don't get the logic of the $items_number parameter. As far as I can see the first UPDATE moves all items regardless of the parameter, but the subsequent processing of acquisitions etc. only goes through items in the $items_number parameter. If MoveItemsFromBiblio is supposed to move all items, I'd remove the $items_number parameter and do everything necessary inside MoveItemsFromBiblio.

Also, I believe the 801 issue should be handled in a separate bug since it's not related to Elasticsearch.
Comment 9 Ere Maijala 2019-09-12 07:14:38 UTC
Taking this since I need this for bug 20447.
Comment 10 Ere Maijala 2019-09-16 11:16:10 UTC
Created attachment 92828 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Comment 11 Ere Maijala 2019-10-22 07:09:58 UTC
Created attachment 94505 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Comment 12 Joonas Kylmälä 2019-12-20 13:38:24 UTC
Doesn't apply anymore:

Apply? [(y)es, (n)o, (i)nteractive] y
Applying: Bug 22690: Refactor merging of records to improve performance (Elasticsearch)
Using index info to reconstruct a base tree...
M	C4/Items.pm
M	Koha/Biblio.pm
M	Koha/Item.pm
M	t/db_dependent/Koha/Item.t
Falling back to patching base and 3-way merge...
Auto-merging t/db_dependent/Koha/Item.t
Auto-merging Koha/Item.pm
CONFLICT (content): Merge conflict in Koha/Item.pm
Auto-merging Koha/Biblio.pm
CONFLICT (content): Merge conflict in Koha/Biblio.pm
Auto-merging C4/Items.pm
error: Failed to merge in the changes.
Patch failed at 0001 Bug 22690: Refactor merging of records to improve performance (Elasticsearch)
Comment 13 Ere Maijala 2020-01-22 09:59:39 UTC
Created attachment 97696 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Comment 14 Ere Maijala 2020-01-22 10:00:00 UTC
Rebased.
Comment 15 Michal Denar 2020-01-25 16:12:32 UTC
Hi Ere,
the patch is working, but one test fails:
Test Summary Report
-------------------
t/db_dependent/Koha/Item.t (Wstat: 256 Tests: 5 Failed: 1)
  Failed test:  3
  Non-zero exit status: 1

Second test was OK.
Comment 16 Ere Maijala 2020-01-27 08:10:10 UTC
Mike, which test is failing? I'm seeing some errors from an unrelated test that tries to clean up the database in the wrong order (removing patrons before their checkouts).
Comment 17 Michal Denar 2020-01-27 20:11:51 UTC
Hi Ere,
prove t/db_dependent/Koha/Item.t:

    not ok 3 - Value is mapped correctly for column biblionumber

    #   Failed test 'Value is mapped correctly for column biblionumber'
    #   at t/db_dependent/Koha/Item.t line 109.
    #          got: undef
    #     expected: '462'

    not ok 4 - Value is mapped correctly for column biblioitemnumber

    #   Failed test 'Value is mapped correctly for column biblioitemnumber'
    #   at t/db_dependent/Koha/Item.t line 109.
    #          got: undef
    #     expected: '461'

    not ok 28 - Value is mapped correctly for column timestamp

    #   Failed test 'Value is mapped correctly for column timestamp'
    #   at t/db_dependent/Koha/Item.t line 109.
    #          got: undef
    #     expected: '2020-01-24 22:23:22'

   not ok 42 - Value is mapped correctly for column biblionumber

    #   Failed test 'Value is mapped correctly for column biblionumber'
    #   at t/db_dependent/Koha/Item.t line 124.
    #          got: undef
    #     expected: '462'

  not ok 67 - Value is mapped correctly for column timestamp

    #   Failed test 'Value is mapped correctly for column timestamp'
    #   at t/db_dependent/Koha/Item.t line 124.
    #          got: undef
    #     expected: '2020-01-24 22:23:22'


    # Looks like you failed 6 tests of 79.
not ok 3 - as_marc_field() tests


Is it a problem with my mappings?
Comment 18 Ere Maijala 2020-01-30 09:11:15 UTC
Mike, that seems unrelated to any of the changes here. I tried with both MARC 21 and UNIMARC default mappings and couldn't reproduce, so I'd say it's an issue with your mappings, but I'm a bit at a loss if they're at the defaults. Do you see the issue with the plain master version of Koha?
Comment 19 Michal Denar 2020-01-30 09:31:08 UTC
Hi Ere,
I have kohadevbox on master, with almost default settings. The framework test shows 1 error: subfields not in the same tabs, but on biblio fields. I'll test it again and will probably switch to SO.
Comment 20 Michal Denar 2020-01-30 09:57:04 UTC
Created attachment 98143 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 21 Martin Renvoize 2020-01-30 15:27:59 UTC
Comment on attachment 98143 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

Review of attachment 98143 [details] [review]:
-----------------------------------------------------------------

This looks like a reasonable approach to me and seems to work well.. a relatively minor point regarding the introduced Koha/Biblio method.

::: Koha/Biblio.pm
@@ +854,5 @@
> +Move items from the given biblio
> +
> +=cut
> +
> +sub move_items_from_biblio {

I feel like 'move_items_from_biblio' isn't immediately obvious as a function name.. are we moving from 'this' biblio or to it..  perhaps 'adopt_items_from_biblio' is a bit clearer?
Comment 22 Joonas Kylmälä 2020-01-31 06:46:04 UTC
(In reply to Martin Renvoize from comment #21)
> Comment on attachment 98143 [details] [review] [review]
> Bug 22690: Refactor merging of records to improve performance (Elasticsearch)
> 
> Review of attachment 98143 [details] [review] [review]:
> -----------------------------------------------------------------
> 
> This looks like a reasonable approach to me and seems to work well.. a
> relatively minor point regarding the introduce Koha/Biblio method.
> 
> ::: Koha/Biblio.pm
> @@ +854,5 @@
> > +Move items from the given biblio
> > +
> > +=cut
> > +
> > +sub move_items_from_biblio {
> 
> I feel like 'move_items_from_biblio' isn't immediately obvious as a function
> name.. are we moving from 'this' biblio or to it..  perhaps
> 'adopt_items_from_biblio' is a bit clearer?

I would just reverse the original idea: move_items_to_biblio. Then it should be obvious that we are talking about the current object's items being moved. Not sure if Perl OO style supports overloading, but in that case it could be just move_items, and the parameters would define whether we move them to another biblio or (yet to be included in Koha) to a holdings record.
Comment 23 Joonas Kylmälä 2020-01-31 06:59:28 UTC
(In reply to Joonas Kylmälä from comment #22)
> I would just reverse the original idea: move_items_to_biblio. Then it should
> be obvious that we are talking about the current object's items being moved.
> Not sure if Perl OO style supports overloading, but in that case it could be
> just move_items, and the parameters would define whether we move them to
> another biblio or (yet to be included in Koha) to a holdings record.

Or we could do this even in the Item and Items objects for maximum code reusability, like $biblio->items->move_to_biblio?
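A minimal sketch of what such a plural-set method could look like (illustrative only; the skip_record_index parameter follows the convention discussed later in this bug, and the final patch may differ):

    package Koha::Items;

    # Sketch: move every item in this set to the given biblio, skipping
    # the per-item index update so the caller can reindex once at the end.
    sub move_to_biblio {
        my ( $self, $to_biblio ) = @_;

        my $count = 0;
        while ( my $item = $self->next ) {
            $item->move_to_biblio( $to_biblio, { skip_record_index => 1 } );
            $count++;
        }
        return $count;
    }

    # Usage as proposed: $biblio->items->move_to_biblio($to_biblio);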
Comment 24 Katrin Fischer 2020-01-31 07:10:57 UTC
> Or we could do this even in the Item and Items objects for maximum code
> reusability, like $biblio->items->move_to_biblio?

Maybe just 'move' in this case? They can't go anywhere other than to another biblio.
Comment 25 Joonas Kylmälä 2020-01-31 07:14:28 UTC
(In reply to Katrin Fischer from comment #24)
> > Or we could do this even in the Item and Items objects for maximum code
> > reusability, like $biblio->items->move_to_biblio?
> 
> Maybe just move in this case? They can't go anywhere else than another
> biblio.

This bug is a dependency for Bug 20447 which introduces holdings records so similar moving of items to another holdings record is needed there in addition to moving to another biblio.
Comment 26 Ere Maijala 2020-01-31 07:23:52 UTC
I'll rename the method as Martin suggested. Any other changes will cause complications with bug 20447 where we need to move holdings too (so that would become adopt_holdings_and_items_from_biblio). They can't really be separated since they depend on each other.
Comment 27 Ere Maijala 2020-01-31 07:43:18 UTC
Created attachment 98210 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 28 Ere Maijala 2020-01-31 07:50:12 UTC
Created attachment 98211 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 29 Ere Maijala 2020-01-31 07:50:50 UTC
Rename done.
Comment 30 Kyle M Hall 2020-02-20 16:41:54 UTC
root@6d3e2c82bf14:koha(bug22690-qa)$ prove t/db_dependent/Koha/Item.t
t/db_dependent/Koha/Item.t .. 4/5     # No tests run!
t/db_dependent/Koha/Item.t .. 5/5
#   Failed test 'No tests run for subtest "move_to_biblio() tests"'
#   at t/db_dependent/Koha/Item.t line 435.
Can't use string ("k8XjSy") as a HASH ref while "strict refs" in use at /kohadevbox/koha/C4/Reserves.pm line 175.
# Looks like your test exited with 255 just after 5.
t/db_dependent/Koha/Item.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 1/5 subtests

Test Summary Report
-------------------
t/db_dependent/Koha/Item.t (Wstat: 65280 Tests: 5 Failed: 1)
  Failed test:  5
  Non-zero exit status: 255
Files=1, Tests=5,  6 wallclock secs ( 0.04 usr  0.01 sys +  3.30 cusr  0.62 csys =  3.97 CPU)
Result: FAIL
Comment 31 Ere Maijala 2020-02-21 13:30:03 UTC
Created attachment 99376 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 32 Ere Maijala 2020-02-21 13:30:47 UTC
Fixed tests (use the new params for AddReserve).
Comment 33 Ere Maijala 2020-02-21 13:37:28 UTC
Created attachment 99379 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t
Comment 34 Michal Denar 2020-03-04 22:08:03 UTC
Hi Ere,
I get some errors for prove -v t/db_dependent/Koha/Item.t  

  # Looks like you failed 6 tests of 79.
not ok 3 - as_marc_field() tests

ok 5 - move_to_biblio() tests
# Looks like you failed 1 test of 5.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/5 subtests



Test Summary Report
-------------------
t/db_dependent/Koha/Item.t (Wstat: 256 Tests: 5 Failed: 1)
  Failed test:  3
  Non-zero exit status: 1
Files=1, Tests=5,  4 wallclock secs ( 0.03 usr  0.02 sys +  2.70 cusr  0.32 csys =  3.07 CPU)
Result: FAIL
Comment 35 Ere Maijala 2020-03-05 09:07:14 UTC
Mike,

I can't reproduce. Can you provide more insight on which tests failed? And is it an empty database or does it have existing data?
Comment 36 Michal Denar 2020-03-08 21:41:00 UTC
Created attachment 100316 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 37 Josef Moravec 2020-05-12 17:22:33 UTC
Created attachment 104796 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 38 Josef Moravec 2020-05-12 17:22:59 UTC
Just rebased on master
Comment 39 Josef Moravec 2020-05-12 17:27:09 UTC
Created attachment 104797 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 40 Josef Moravec 2020-05-12 17:40:05 UTC
Comment on attachment 104797 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

Review of attachment 104797 [details] [review]:
-----------------------------------------------------------------

Great job Ere, I have just a few concerns about the code. But I like this refactoring a lot!

::: Koha/Item.pm
@@ +840,5 @@
> +
> +    my $schema = Koha::Database->new()->schema();
> +
> +    # Acquisition orders related to the item
> +    my $orders = $schema->resultset('Aqorder')->search(

Koha::Object(s) should be used

@@ +847,5 @@
> +    );
> +    $orders->update_all({ biblionumber => $biblionumber });
> +
> +    # reserves
> +    my $reserves = $self->_result->reserves;

there is $self->holds method

@@ +867,5 @@
> +            $dbh->do("UPDATE tmp_holdsqueue SET biblionumber=? WHERE itemnumber=?", undef, $biblionumber, $self->itemnumber);
> +        }
> +    );
> +    #my $tmp_holdsqueues = $self->_result->tmp_holdsqueues;
> +    #$tmp_holdsqueues->update_all({ biblionumber => $biblionumber });

Please, remove these commented lines

::: cataloguing/merge.pl
@@ +106,5 @@
>          UPDATE suggestions SET biblionumber = ? WHERE biblionumber = ?
>      ");
> +    my $sth_articlerequests = $dbh->prepare("
> +        UPDATE article_requests SET biblionumber = ? WHERE biblionumber = ?
> +    ");

There should be no SQL in a .pl script. Please use Koha::Object(s)
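For instance, both spots could go through the corresponding Koha::Objects classes. A sketch, assuming Koha::Objects->update supports the no_triggers option (as the later "add no_triggers" follow-up on this bug suggests):

    use Koha::Acquisition::Orders;
    use Koha::ArticleRequests;

    # Acquisition orders related to the item, via Koha::Objects
    # instead of a raw DBIC resultset:
    Koha::Acquisition::Orders->search( { itemnumber => $self->itemnumber } )
        ->update( { biblionumber => $biblionumber }, { no_triggers => 1 } );

    # In cataloguing/merge.pl, instead of a prepared UPDATE statement:
    Koha::ArticleRequests->search( { biblionumber => $from_biblionumber } )
        ->update( { biblionumber => $to_biblionumber }, { no_triggers => 1 } );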
Comment 41 Ere Maijala 2020-05-14 07:05:43 UTC
Created attachment 104871 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 42 Ere Maijala 2020-05-14 07:06:17 UTC
Thanks, Josef. Should be fixed now.
Comment 43 Victor Grousset/tuxayo 2020-05-31 15:59:58 UTC
Created attachment 105445 [details]
error

> With the patch:
> - Do the same as above
> - This time verify that the records are merged without timeout
Comment 44 Victor Grousset/tuxayo 2020-05-31 16:08:25 UTC
BTW, librarian question: is it expected for the merge to complain about the 003 field being missing (and to block the merge)?
In my test data (koha-testing-docker, and I suppose KohaDevBox), I have to edit the books framework to make 003 visible so that it is populated when I create the two records.
Comment 45 Victor Grousset/tuxayo 2020-05-31 16:28:45 UTC
The failure in comment 43 might have been because I had the patch applied before initializing my dev env (koha-testing-docker).

So retried:
- initialize & start instance
- apply patch
- restart_all
- try the merge
- timeout

So there is definitely an issue
Comment 46 Victor Grousset/tuxayo 2020-05-31 17:11:25 UTC
The above tests were done on Debian with MariaDB 10.3 and Elasticsearch 6.8.8.

== ==
I retried the above simplified test plan (the second half of the full one) with Debian 9, MySQL 5.5 and still ES 6.8.8.

Still the same issue. Though this time Koha is stuck on additem.pl, whereas in the previous test it was merge.pl that I saw in the process list
(when checking that it was still running in the background despite the timeout).

== ==
I retried what I did in comment 43 on Debian 9 & MySQL 5.5 and it didn't happen (no error, stuck on additem.pl). So maybe it's linked to the OS & DB, or something else I did messed things up.

== ==
I think that's all for now from me. I'll be happy to do some more tests with a new patch, or with more precise instructions to help troubleshoot the issue in case it only happens to me.
Comment 47 Victor Grousset/tuxayo 2020-05-31 17:40:43 UTC
Forget what I said about being stuck on additem.pl or merge.pl; it seems related to the first page that I queried after I reset the dev env.

After I do a "reset_all", the process that is stuck at 100% CPU is the starman worker (so maybe it was some other kind of bug).
Comment 48 Victor Grousset/tuxayo 2020-05-31 17:44:57 UTC
One last thing: it also happens with ES5.
Comment 49 Ere Maijala 2020-06-18 07:35:48 UTC
Victor, the patch doesn't change anything with the framework handling, so whether that's a problem is unrelated to this issue, as far as I can see. Thanks for testing in any case. I'll see if I can reproduce the problem.
Comment 50 Ere Maijala 2020-06-18 13:55:00 UTC
Ok, I can see it's not fast enough, but it doesn't seem to be related to Elasticsearch; rather, it's the fact that Koha::Objects must be used. I'll see what I can do to improve it.
Comment 51 Ere Maijala 2020-06-22 07:09:52 UTC
Created attachment 106144 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 52 Ere Maijala 2020-06-22 07:11:02 UTC
Oops. It _was_ still related to ES since I forgot to skip ES update when storing a single item. Should be fixed now.
Comment 53 Joonas Kylmälä 2020-08-13 14:39:54 UTC
Michal: Could you verify the patch is still signed off even after the rebase Ere did? It seems he kept your sign-off there even though I think it should have been removed after the rebase. If the patch is signed off, could you change the bug status to Signed Off?
Comment 54 Joonas Kylmälä 2020-08-21 07:19:20 UTC
Created attachment 108777 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 55 Joonas Kylmälä 2020-09-07 10:17:01 UTC
Created attachment 109694 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 56 Joonas Kylmälä 2020-09-30 13:37:02 UTC
Created attachment 110991 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 57 Joonas Kylmälä 2020-09-30 14:43:22 UTC
Created attachment 110994 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 58 Joonas Kylmälä 2020-09-30 14:55:36 UTC
Rebased to solve conflict caused by "Bug 25265: Prevent double reindex of the same item in batchmod" / 88cb7f223d. Replaced ModZebra calls with Koha::SearchEngine::Indexer.
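For reference, a sketch of the replacement pattern (the exact call sites are in the patch; batching both records into one request is the point):

    use Koha::SearchEngine;
    use Koha::SearchEngine::Indexer;

    my $indexer = Koha::SearchEngine::Indexer->new(
        { index => $Koha::SearchEngine::BIBLIOS_INDEX } );

    # One indexing request covering both merged records, instead of a
    # ModZebra call per record per moved item:
    $indexer->index_records( [ $from_biblionumber, $to_biblionumber ],
        'specialUpdate', 'biblioserver' );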
Comment 59 David Nind 2020-10-03 08:25:20 UTC
Is there an easy way to add 1,000 items for a record?
Comment 60 Michal Denar 2020-10-03 16:43:21 UTC
Hi David,
yes, you can use New item/Add multiple copies of this item.
Comment 61 David Nind 2020-10-03 20:25:19 UTC
(In reply to Michal Denar from comment #60)
> Hi David,
> yes, you can use New item/Add multiple copies of this item.

Thanks Michal! I can see it now!
Comment 62 David Nind 2020-10-04 02:39:46 UTC
Created attachment 111196 [details]
Bug 22690 - Output from tests after patch applied

I had a go at testing this with koha-testing-docker on my average laptop (default KTD uses ES 6.x).

After applying the patch the time reduced from about an hour to around 30 minutes for all the records to be merged and listed under the record (in the staff interface and OPAC).

About 5 mins in I get this timeout message in the browser:

  Proxy Error

  The proxy server received an invalid response from an upstream server.
  The proxy server could not handle the request

  Reason: Error reading from remote server

  Apache/2.4.25 (Debian) Server at 127.0.0.1 Port 8081

Test results after patch applied:

- prove -v t/db_dependent/Koha/Item.t - fail, took about 30 minutes to run
- prove -v t/db_dependent/Items/MoveItemFromBiblio.t

I haven't changed the status to failed QA. I'm not sure my testing has helped!
Comment 63 Ere Maijala 2020-10-05 05:43:25 UTC
David, thanks for testing. It's very valuable! I'm seeing the same as you, and it's not good. I'll investigate why that is.
Comment 64 Ere Maijala 2020-10-05 05:49:57 UTC
Created attachment 111207 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 65 Ere Maijala 2020-10-05 06:01:07 UTC
There was a parameter name change in master that caused the biblio to be reindexed separately for every item. Now fixed, and you should see greatly improved performance.
Comment 66 David Nind 2020-10-05 13:29:35 UTC
Created attachment 111236 [details]
Bug 22690 - Output from tests for Item.t

Thanks Ere! That made a huge difference, and the merge takes almost no time at all now (less than 1 minute)

However, the tests for prove -v t/db_dependent/Koha/Item.t run a lot faster as well, but still fail - see attachment "Bug 22690 - Output from tests for Item.t".
Comment 67 Joonas Kylmälä 2020-10-05 13:40:59 UTC
Hi,

(In reply to David Nind from comment #66)
> However, the the tests for prove -v t/db_dependent/Koha/Item.t run a lot
> faster as well, but still fail - see attachment Bug 22690 - Output from
> tests for Item.t.

please run the test after running "reset_all" and it should work. The Koha unit tests tend to fail if you change system preferences (which seems to be the case here with the searchengine syspref).
Comment 68 David Nind 2020-10-05 14:04:59 UTC
Created attachment 111237 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Items/MoveItemFromBiblio.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>

Signed-off-by: David Nind <david@davidnind.com>
Comment 69 Katrin Fischer 2020-10-14 22:43:52 UTC
Hi Josef, I see you are QA contact on this one, will you be able to finish here or should we reassign?
Comment 70 Nick Clemens 2020-10-15 12:33:29 UTC
Hi Ere,

Thank you! Great refactoring!

A few things:
1 - Generally we assume that we should index unless passed 'skip_record_index', rather than hardcoding not to
2 - If you change 'move_to_biblio' to take that param and allow the store to index otherwise, then we can remove MoveItemFromBiblio and simply use move_to_biblio in moveitem.pl
3 - Please add a test to t/db_dependent/Koha/SearchEngine/Indexer.t to cover when indexing happens (or doesn't) for these subroutines (to prove the adopt method only indexes once)
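A sketch of the convention in point 1, with the method body illustrative rather than the final patch:

    # Sketch: index by default; skip only when the caller passes
    # skip_record_index (e.g. when moving a whole set of items).
    sub move_to_biblio {
        my ( $self, $to_biblio, $params ) = @_;
        $params //= {};

        my $from_biblionumber = $self->biblionumber;

        $self->biblionumber( $to_biblio->biblionumber );
        $self->biblioitemnumber( $to_biblio->biblioitem->biblioitemnumber );

        # store() takes care of the reindex unless told to skip it.
        $self->store( { skip_record_index => $params->{skip_record_index} } );

        # ... move holds, acquisition orders, etc. for this item ...

        return $from_biblionumber;
    }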
Comment 71 Ere Maijala 2020-10-21 13:04:29 UTC
Created attachment 112100 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 72 Ere Maijala 2020-10-21 13:05:16 UTC
Hi Nick, and thanks for the excellent feedback. I've now implemented the changes you requested.
Comment 73 Ere Maijala 2020-10-22 12:38:15 UTC
Created attachment 112184 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 74 Ere Maijala 2020-10-22 12:39:20 UTC
Latest version makes sure params is defined in Item->move_to_biblio.
Comment 75 Joonas Kylmälä 2020-10-26 17:32:56 UTC
Patch doesn't apply anymore.
Comment 76 Joonas Kylmälä 2020-10-27 13:04:29 UTC
Created attachment 112567 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 77 Joonas Kylmälä 2020-10-27 13:06:09 UTC
Rebased. However the patch has two QA failures reported by the QA tool:

 FAIL	C4/Items.pm
   OK	  critic
   OK	  forbidden patterns
   OK	  git manipulation
   OK	  pod
   FAIL	  pod coverage
   OK	  spelling
   OK	  valid

 FAIL 	cataloguing/moveitem.pl
   OK	  critic
   OK	  forbidden patterns
   OK	  git manipulation
   OK	  pod
   OK	  spelling
   FAIL	  valid

These need to be fixed.
Comment 78 Joonas Kylmälä 2020-10-27 13:21:24 UTC
Created attachment 112569 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 79 Joonas Kylmälä 2020-10-27 13:24:44 UTC
Fixed the QA tool error

 FAIL 	cataloguing/moveitem.pl

with 

> -        my $to_biblio = Koha::Biblios->find($biblionumber)
> +        my $to_biblio = Koha::Biblios->find($biblionumber);

Probably happened during one of the rebases. The POD coverage failure

 FAIL	C4/Items.pm

is not caused by this patch as far as I can tell. Moving to Needs Signoff.
Comment 80 David Nind 2020-10-27 19:39:03 UTC
I'm getting test failures with prove -v t/db_dependent/Koha/Item.t after the patch is applied. I ran this after applying the patches, but before adding the 1000 items and merging.

They passed before the patch was applied.

Note: I couldn't get the tests to complete after adding the 1000 items and merging.

Apart from this, everything else in the test plan works.
Comment 81 Joonas Kylmälä 2020-10-28 08:20:37 UTC
(In reply to David Nind from comment #80)
> I'm getting test failures with prove -v t/db_dependent/Koha/Item.t after the
> patch is applied. I ran this after applying the patches, but before adding
> the 1000 items and merging.
> 
> They passed before the patch was applied.
> 
> Note: I couldn't get the tests to complete after adding the 1000 items and
> merging.
> 
> Apart from this, everything else in the test plan works.

I cannot reproduce on koha-testing-docker, what is the error you get?
Comment 82 David Nind 2020-10-30 20:03:12 UTC
Created attachment 112754 [details]
Test results - Bug 22690

Attached are the test results I got using koha-testing-docker:

- Before the patch is applied: the tests pass
- After changing the search engine system preference to elasticsearch and reindexing: tests fail
- After patch applied: tests fail
- After 1,000 records added and merging: tests fail
Comment 83 Joonas Kylmälä 2020-11-02 07:45:00 UTC
(In reply to David Nind from comment #82)
> - Before the patch is applied: the tests pass
> - After changing the search engine  system preference to elasticsearch and
> reindexing: tests fail
> - After patch applied: tests fail
> - After 1,000 records added and merging: tests fail

This seems to be the same case as in comment 67.
Comment 84 Martin Renvoize 2020-11-02 15:16:19 UTC
I think the test failure is a red herring..

However.. I believe we need to add unit tests for the new Koha::Biblio->adopt_items_from_biblio method and Koha::Item->item_orders relation accessor before we proceed.

These should be fairly trivial tests to add: one to prove that the loop happens and the final indexing takes place for the set, and a second to prove the relation is set up correctly.

Failing QA
Comment 85 Ere Maijala 2020-11-09 12:31:21 UTC
Created attachment 113308 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 86 Ere Maijala 2020-11-09 12:31:27 UTC
Created attachment 113309 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio
Comment 87 Ere Maijala 2020-11-09 12:32:08 UTC
(In reply to Martin Renvoize from comment #84)
> However.. I believe we need to add unit tests for the new
> Koha::Biblio->adopt_items_from_biblio method and Koha::Item->item_orders
> relation accessor before we proceed.

Absolutely. Tests now added.
Comment 88 Martin Renvoize 2020-11-09 13:03:01 UTC
Thanks Ere, I'll get back to QA on this asap :)
Comment 89 Joonas Kylmälä 2020-11-24 13:02:40 UTC
Doesn't apply anymore
Comment 90 Joonas Kylmälä 2020-11-24 13:09:16 UTC
Created attachment 113941 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 91 Joonas Kylmälä 2020-11-24 13:09:21 UTC
Created attachment 113942 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio
Comment 92 Michal Denar 2020-11-24 21:29:20 UTC
Created attachment 113967 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 93 Michal Denar 2020-11-24 21:29:25 UTC
Created attachment 113968 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 94 Jonathan Druart 2021-02-11 09:48:46 UTC
So the problem is actually that the reindex is not async.
Bug 27344 is trying to provide a solution for that.
Comment 95 Ere Maijala 2021-02-11 09:56:37 UTC
(In reply to Jonathan Druart from comment #94)
> So the problem is actually that the reindex is not async.
> Bug 27344 is trying to provide a solution for that.

That's part of it, but I believe the refactoring improves the flow and readability of the code as well.
Comment 96 Martin Renvoize 2021-03-01 09:11:29 UTC
Comment on attachment 113967 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

Review of attachment 113967 [details] [review]:
-----------------------------------------------------------------

::: Koha/Item.pm
@@ +1082,5 @@
> +
> +$params:
> +    skip_record_index => 1|0
> +
> +Returns undef if the move failed or the biblionumber of the destination record otherwise

I wonder if this might be nicer as a fluent interface (i.e. returning $self so it can be chained.. the undef return would become a no-op and the final return would be the updated Koha::Item object?)


One for later perhaps
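
For illustration, the fluent variant would look roughly like this (a sketch only, not what the patch does):

  sub move_to_biblio {
      my ( $self, $to_biblio, $params ) = @_;

      # nothing to do: return $self (a no-op) instead of undef
      return $self if $self->biblionumber == $to_biblio->biblionumber;

      ...;    # perform the move as the patch does now

      return $self;    # allows chaining, e.g. $item->move_to_biblio($biblio)->store
  }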

::: Koha/Schema/Result/Item.pm
@@ +778,4 @@
>      '+exclude_from_local_holds_priority' => { is_boolean => 1 },
>  );
>  
> +# Relationship with orders via the aqorders_item table that not have foreign keys

Was there a reason not to add the foreign key and let DBIC generate the relationship?
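
For context, the manual declaration is presumably of this shape (the exact arguments are my assumption, mirroring what dbicdump would generate from a foreign key):

  __PACKAGE__->has_many(
      "item_orders",
      "Koha::Schema::Result::AqordersItem",
      { "foreign.itemnumber" => "self.itemnumber" },
      { cascade_copy => 0, cascade_delete => 0 },
  );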
Comment 97 Martin Renvoize 2021-03-01 09:14:04 UTC
(In reply to Ere Maijala from comment #95)
> (In reply to Jonathan Druart from comment #94)
> > So the problem is actually that the reindex is not async.
> > Bug 27344 is trying to provide a solution for that.
> 
> That's part of it, but I believe the refactoring improves the flow and
> readability of the code as well.

I think I agree, the code flow is clearer with this work.
Comment 98 Martin Renvoize 2021-03-01 09:27:32 UTC
Comment on attachment 113967 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

Review of attachment 113967 [details] [review]:
-----------------------------------------------------------------

::: Koha/Item.pm
@@ +1064,5 @@
> +sub item_orders {
> +    my ( $self ) = @_;
> +
> +    my $orders = $self->_result->item_orders;
> +    return Koha::Acquisition::Orders->_new_from_dbic($orders);

In its current form, this can result in failure..  I'm not seeing any handling for that...

i.e. if an item is deleted it gets moved to deleted_items but the itemnumber remains in the aqorders_items table as there is no foreign key constraint.. have you tested this case?
Comment 99 Martin Renvoize 2021-03-01 09:42:40 UTC
Comment on attachment 113967 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

Review of attachment 113967 [details] [review]:
-----------------------------------------------------------------

::: Koha/Item.pm
@@ +1102,5 @@
> +        biblioitemnumber => $to_biblio->biblioitem->biblioitemnumber
> +    })->store({ skip_record_index => $params->{skip_record_index} });
> +
> +    # Acquisition orders
> +    $self->item_orders->update({ biblionumber => $biblionumber }, { no_triggers => 1 });

no_triggers: Verified this one is OK.. we don't have any code-level triggers based on biblionumber that I can see.. happy with this and understand why you're using it.

@@ +1105,5 @@
> +    # Acquisition orders
> +    $self->item_orders->update({ biblionumber => $biblionumber }, { no_triggers => 1 });
> +
> +    # Holds
> +    $self->holds->update({ biblionumber => $biblionumber }, { no_triggers => 1 });

I don't see any code-level triggers at all for Koha::Hold or Koha::Holds.. as such I don't think we should call 'no_triggers' here..

Without a local ->store method in Koha::Hold, or a local ->update method in Koha::Holds, the result of calling Koha::Objects->update should be the same as without no_triggers passed.  As such, for future-proofing I feel we should not pass no_triggers, as we don't currently know that a biblionumber change here won't become part of a trigger in the future.
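
For reference, the practical difference no_triggers makes in Koha::Objects->update is roughly this (a simplified sketch, not the actual implementation):

  if ( $options->{no_triggers} ) {
      # one UPDATE statement straight through DBIC, no per-object work
      $self->_resultset->update($fields);
  }
  else {
      # per-object, so any overridden ->store triggers run
      $_->set($fields)->store for $self->as_list;
  }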
Comment 100 Martin Renvoize 2021-03-01 09:51:09 UTC
Failing QA to get some attention.. I don't think anything is major here.. it's more that I want to get verification that things have been considered and taken care of.
Comment 101 Joonas Kylmälä 2021-03-05 09:58:37 UTC
(In reply to Martin Renvoize from comment #98)
> Comment on attachment 113967 [details] [review] [review]
> Bug 22690: Refactor merging of records to improve performance (Elasticsearch)
> 
> Review of attachment 113967 [details] [review] [review]:
> -----------------------------------------------------------------
> 
> ::: Koha/Item.pm
> @@ +1064,5 @@
> > +sub item_orders {
> > +    my ( $self ) = @_;
> > +
> > +    my $orders = $self->_result->item_orders;
> > +    return Koha::Acquisition::Orders->_new_from_dbic($orders);
> 
> In its current form, this can result in failure..  I'm not seeing any
> handling for that...
> 
> i.e. if an item is deleted it gets moved to deleted_items but the itemnumber
> remains in the aqorders_items table as there is no foreign key constraint..
> have you tested this case?

Not sure I understand what error possibility you see here. Koha::Item is only for existing items? So if you have a Koha::Item object it must have an itemnumber, no?
Comment 102 Ere Maijala 2021-03-15 10:52:06 UTC
(In reply to Martin Renvoize from comment #96)
> Comment on attachment 113967 [details] [review] [review]
> Bug 22690: Refactor merging of records to improve performance (Elasticsearch)
> 
> Review of attachment 113967 [details] [review] [review]:
> -----------------------------------------------------------------
> 
> ::: Koha/Item.pm
> @@ +1082,5 @@
> > +
> > +$params:
> > +    skip_record_index => 1|0
> > +
> > +Returns undef if the move failed or the biblionumber of the destination record otherwise
> 
> I wonder if this might be nicer as a fluent interface (i.e. returning $self
> so it can be chained.. the undef return would become a no-op and the final
> return would be the updated Koha::Item object?)
> 
> 
> One for later perhaps

Probably yes, but I tried to make it resemble the old MoveItemFromBiblio. The caller wants to know whether the move succeeded, and I didn't want to broaden the scope, to avoid making this change any larger.

> 
> ::: Koha/Schema/Result/Item.pm
> @@ +778,4 @@
> >      '+exclude_from_local_holds_priority' => { is_boolean => 1 },
> >  );
> >  
> > +# Relationship with orders via the aqorders_item table that not have foreign keys
> 
> Was there a reason not to add the foreign key and let DBIC generate the
> relationship?

The same as above: to try to manage the scope of the change. It would make sense to do that, but I'm afraid there would be some gotchas, so it's better handled separately.
Comment 103 Ere Maijala 2021-03-15 11:31:21 UTC
Created attachment 118226 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 104 Ere Maijala 2021-03-15 11:31:29 UTC
Created attachment 118227 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 105 Ere Maijala 2021-03-15 11:40:42 UTC
Created attachment 118229 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 106 Ere Maijala 2021-03-15 11:40:52 UTC
Created attachment 118231 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 107 Ere Maijala 2021-03-15 11:45:04 UTC
(In reply to Martin Renvoize from comment #98)
> Comment on attachment 113967 [details] [review] [review]
> Bug 22690: Refactor merging of records to improve performance (Elasticsearch)
> 
> Review of attachment 113967 [details] [review] [review]:
> -----------------------------------------------------------------
> 
> ::: Koha/Item.pm
> @@ +1064,5 @@
> > +sub item_orders {
> > +    my ( $self ) = @_;
> > +
> > +    my $orders = $self->_result->item_orders;
> > +    return Koha::Acquisition::Orders->_new_from_dbic($orders);
> 
> In its current form, this can result in failure..  I'm not seeing any
> handling for that...
> 
> i.e. if an item is deleted it gets moved to deleted_items but the itemnumber
> remains in the aqorders_items table as there is no foreign key constraint..
> have you tested this case?

(See also comment #101.) The only case I can see this failing is if the item for which this is being called gets deleted in the meantime.

Error checking added. Not tested, however, since this should be extremely rare: I can't see it happening unless the underlying item record for the Item instance here gets deleted while the move is being processed.

There are a lot of similar accessors with the same issue.
Comment 108 Joonas Kylmälä 2021-03-17 14:05:48 UTC
The "Bug 22690: Add more tests" patch removes accidentally 

>    $schema->storage->txn_rollback;
>

lines from the subtest above. Also, I rebased bug 20447, and now the adopt_items_from_biblio() test fails there; it's probably due to some problem in the bug 20447 patches, but maybe also in these – will investigate.
Comment 109 Joonas Kylmälä 2021-03-17 14:22:35 UTC
Created attachment 118391 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 110 Joonas Kylmälä 2021-03-17 14:24:07 UTC
There was a rebase issue in the attachment made by Ere in comment 106; in the previous revision https://bugs.koha-community.org/bugzilla3/attachment.cgi?id=113942 it was still OK. I fixed the rebase issue now and attached the working patch.
Comment 111 Michal Denar 2021-04-09 20:38:39 UTC
Created attachment 119433 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>

Signed-off-by: Michal Denar <black23@gmail.com>

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 112 Michal Denar 2021-04-09 20:38:45 UTC
Created attachment 119434 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>

Signed-off-by: Michal Denar <black23@gmail.com>
Comment 113 Joonas Kylmälä 2021-04-15 07:20:12 UTC
Martin, could you take another look here now that the QA issues you spotted should be fixed? :)
Comment 114 Joonas Kylmälä 2021-05-11 07:44:24 UTC
Created attachment 120820 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 115 Joonas Kylmälä 2021-05-11 07:44:31 UTC
Created attachment 120821 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 116 Joonas Kylmälä 2021-05-11 07:45:31 UTC
Rebased. There were only small conflicts in the tests.
Comment 117 Victor Grousset/tuxayo 2021-05-11 21:01:50 UTC
Created attachment 120852 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 118 Victor Grousset/tuxayo 2021-05-11 21:01:57 UTC
Created attachment 120853 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 119 Victor Grousset/tuxayo 2021-05-11 21:02:31 UTC
It works!

> - After a while the web server should give you a timeout error (the merging process may still continue)

Indeed it continues! Refreshing the search shows items being moved from one record to the other. It's unbelievably slow. Not going to wait for all of them:
210 items in 22:35 => 1355 sec => 0.155 items/sec

With the patch:

1004 items in 50 sec => 20.08 items/sec
speedup: ~129x!!!
Comment 120 Victor Grousset/tuxayo 2021-05-11 21:03:58 UTC
Issue found: more warnings in the tests with these patches:

============ Without patches ===================
kohadev-koha@kohadevbox:/kohadevbox/koha$ prove t/db_dependent/Koha/Biblio.t  t/db_dependent/Koha/Item.t t/db_dependent/Koha/SearchEngine/Indexer.t

t/db_dependent/Koha/Biblio.t ................ 4/14 Use of uninitialized value $sub6 in pattern match (m//) at /kohadevbox/koha/Koha/SearchEngine/Elasticsearch.pm line 596.
t/db_dependent/Koha/Biblio.t ................ ok     
t/db_dependent/Koha/Item.t .................. ok   
t/db_dependent/Koha/SearchEngine/Indexer.t .. ok   
All tests successful.

============ With patches ===================
kohadev-koha@kohadevbox:/kohadevbox/koha$ prove t/db_dependent/Koha/Biblio.t  t/db_dependent/Koha/Item.t t/db_dependent/Koha/SearchEngine/Indexer.t
t/db_dependent/Koha/Biblio.t ................ 4/15 Use of uninitialized value $sub6 in pattern match (m//) at /kohadevbox/koha/Koha/SearchEngine/Elasticsearch.pm line 596.
t/db_dependent/Koha/Biblio.t ................ ok     
t/db_dependent/Koha/Item.t .................. 10/11 DBIx::Class::Storage::DBI::select_single(): Query returned more than one row.  SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235
DBIx::Class::Storage::DBI::select_single(): Query returned more than one row.  SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235
DBIx::Class::Storage::DBI::select_single(): Query returned more than one row.  SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235
t/db_dependent/Koha/Item.t .................. ok     
t/db_dependent/Koha/SearchEngine/Indexer.t .. 1/2 Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
Zebra at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
Elasticsearch at t/db_dependent/Koha/SearchEngine/Indexer.t line 91.
Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line 93.
t/db_dependent/Koha/SearchEngine/Indexer.t .. ok   
All tests successful.
Comment 121 Joonas Kylmälä 2021-05-25 14:53:24 UTC
I think we have a regression here: the new move_to_biblio function (previously MoveItemFromBiblio) doesn't index the old biblio, i.e. the item info there is outdated. The indexing is only done for the biblio the item was moved to, not the one it came from.
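
The fix presumably needs to queue both sides for indexing, along these lines (a sketch assuming the Koha::SearchEngine::Indexer API):

  my $indexer = Koha::SearchEngine::Indexer->new(
      { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
  $indexer->index_records(
      [ $from_biblionumber, $to_biblionumber ],
      'specialUpdate', 'biblioserver'
  ) unless $params->{skip_record_index};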
Comment 122 Joonas Kylmälä 2021-05-25 15:17:01 UTC
Created attachment 121389 [details] [review]
Bug 22690: (QA follow-up) Silence manually generated warnings

In our test setup we mock the index_records() to produce warnings like
this:

Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line
93.

By wrapping all our item creations in warnings_are{} we can silence them.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 123 Joonas Kylmälä 2021-05-25 15:18:28 UTC
I attached a patch to fix those manually added warns but the one with

> DBIx::Class::Storage::DBI::select_single(): Query returned more than one row.  SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 235

seems a bit more tricky. Not sure if we should fix TestBuilder.pm or...
Comment 124 Joonas Kylmälä 2021-05-25 15:42:33 UTC
Created attachment 121392 [details] [review]
Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio()

We need to update the search index record for the old biblio where the
item was moved from to keep the item info in search index up-to-date.

To test:
1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Comment 125 Joonas Kylmälä 2021-05-27 15:46:07 UTC
To debug the remaining warning I added the following lines to TestBuilder.pm

> use Data::Dumper;
> warn "$linked_tbl with " . Dumper($fk_value) if $linked_tbl eq 'Item';

this is just above the line:

>        return {} if $self->schema->resultset($linked_tbl)->find( $fk_value );

inside _create_links(). The output I got when running the test with this is as follows:

> Item with $VAR1 = {
>           'itemnumber' => 1773
>         };
> Item with $VAR1 = {
>           'biblionumber' => 1234
>         };
> DBIx::Class::Storage::DBI::select_single(): Query returned more than one row.  SQL that returns multiple rows is DEPRECATED for ->find and ->single at /kohadevbox/koha/t/lib/TestBuilder.pm line 239

So for some reason we are using ->find() in TestBuilder.pm even for non-primary keys (here 'biblionumber' of the items table) and that causes problems. I still haven't gotten to the bottom of this, so any help to resolve it is welcome.
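
For context, DBIC's ->find is only well defined against unique constraints; with a non-unique column several rows can match, e.g. (illustrative only):

  # 'biblionumber' is not unique in items, so this can match many rows
  my $one = $rs->find({ biblionumber => 1234 });             # emits the warning above
  # search() is the right call for non-unique columns
  my @all = $rs->search({ biblionumber => 1234 })->all;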
Comment 126 Joonas Kylmälä 2021-05-28 11:01:45 UTC
Created attachment 121474 [details] [review]
Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level

We need to pass undef itemnumber to build_object() to actually have a
hold without an item tied to it. Otherwise build_object() will automatically
create an item for us (thus making it an item-level hold)

To test:
 $ prove t/db_dependent/Koha/Item.t
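
Illustratively, the change amounts to something like this (sketch):

  # the explicit undef stops TestBuilder from generating a linked item,
  # so the hold really is bib-level
  my $hold = $builder->build_object({
      class => 'Koha::Holds',
      value => { itemnumber => undef },
  });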
Comment 127 Joonas Kylmälä 2021-05-28 11:03:23 UTC
Victor, the warnings have now been addressed in the follow-ups. Please give it one more look, and mark this as PQA & sign off if all looks good to you.
Comment 128 Joonas Kylmälä 2021-05-28 11:06:27 UTC
Note: please be sure to include the patch from bug 28479, it fixes one of the warnings in TestBuilder.pm
Comment 129 Victor Grousset/tuxayo 2021-05-31 20:05:19 UTC
Created attachment 121516 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 130 Victor Grousset/tuxayo 2021-05-31 20:05:26 UTC
Created attachment 121517 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 131 Victor Grousset/tuxayo 2021-05-31 20:05:32 UTC
Created attachment 121518 [details] [review]
Bug 22690: (QA follow-up) Silence manually generated warnings

In our test setup we mock the index_records() to produce warnings like
this:

Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line
93.

By wrapping all our item creations in warnings_are{} we can silence them.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 132 Victor Grousset/tuxayo 2021-05-31 20:05:39 UTC
Created attachment 121519 [details] [review]
Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio()

We need to update the search index record for the old biblio where the
item was moved from to keep the item info in search index up-to-date.

To test:
1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 133 Victor Grousset/tuxayo 2021-05-31 20:05:45 UTC
Created attachment 121520 [details] [review]
Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level

We need to pass undef itemnumber to build_object() to actually have a
hold without an item tied to it. Otherwise build_object() will automatically
create an item for us (thus making it an item-level hold)

To test:
 $ prove t/db_dependent/Koha/Item.t

Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Comment 134 Victor Grousset/tuxayo 2021-05-31 20:23:10 UTC
Thanks for the follow-up: no additional warnings, and the test plan still works :)

---

QA tools:
 FAIL	C4/Items.pm
   OK	  critic
   OK	  forbidden patterns
   OK	  git manipulation
   OK	  pod
   FAIL	  pod coverage
		POD coverage was greater before, try perl -MPod::Coverage=PackageName -e666
   OK	  spelling
   OK	  valid

That looks like a false positive: a function was deleted, so its POD went with it.

---

(In reply to Joonas Kylmälä from comment #127)
> Please give one more look and mark this as PQA & sign-off if all looks good to you.

Code is way over my head so I can't go further than this. But at least the QAer only has the code to review.
Comment 135 Martin Renvoize 2021-07-15 09:31:00 UTC
Created attachment 122846 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 136 Martin Renvoize 2021-07-15 09:31:05 UTC
Created attachment 122847 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 137 Martin Renvoize 2021-07-15 09:31:10 UTC
Created attachment 122848 [details] [review]
Bug 22690: (QA follow-up) Silence manually generated warnings

In our test setup we mock the index_records() to produce warnings like
this:

Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line
93.

By wrapping all our item creations in warnings_are{} we can silence them.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 138 Martin Renvoize 2021-07-15 09:31:16 UTC
Created attachment 122849 [details] [review]
Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio()

We need to update the search index record for the old biblio where the
item was moved from to keep the item info in search index up-to-date.

To test:
1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 139 Martin Renvoize 2021-07-15 09:31:22 UTC
Created attachment 122850 [details] [review]
Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level

We need to pass undef itemnumber to build_object() to actually have a
hold without an item tied to it. Otherwise build_object() will automatically
create an item for us (thus making it an item-level hold)

To test:
 $ prove t/db_dependent/Koha/Item.t

Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 140 Martin Renvoize 2021-07-15 09:32:50 UTC
I am so sorry, this completely fell off my radar during an overly busy period at work.

This is all looking great now and has had lots more eyes on it since; my own comments have all been taken care of.

All works as expected for me, the code is clean and it's a great improvement as a whole.

QA scripts happy, tests all pass.

Passing QA, thanks for the perseverance everyone.
Comment 141 Jonathan Druart 2021-07-16 12:49:19 UTC
Created attachment 122886 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 142 Jonathan Druart 2021-07-16 12:49:25 UTC
Created attachment 122887 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 143 Jonathan Druart 2021-07-16 12:49:31 UTC
Created attachment 122888 [details] [review]
Bug 22690: (QA follow-up) Silence manually generated warnings

In our test setup we mock the index_records() to produce warnings like
this:

Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line
93.

By wrapping all our item creations in warnings_are{} we can silence them.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 144 Jonathan Druart 2021-07-16 12:49:37 UTC
Created attachment 122889 [details] [review]
Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio()

We need to update the search index record for the old biblio where the
item was moved from to keep the item info in search index up-to-date.

To test:
1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 145 Jonathan Druart 2021-07-16 12:49:43 UTC
Created attachment 122890 [details] [review]
Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level

We need to pass undef itemnumber to build_object() to actually have a
hold without an item tied to it. Otherwise build_object() will automatically
create an item for us (thus making it an item-level hold)

To test:
 $ prove t/db_dependent/Koha/Item.t

Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 146 Jonathan Druart 2021-07-16 12:50:14 UTC
Patches rebased.
Comment 147 Victor Grousset/tuxayo 2021-07-16 17:58:59 UTC
Does this need more testing? More review?

(it's passed QA)
Comment 148 Jonathan Druart 2021-07-19 09:31:11 UTC
Created attachment 122936 [details] [review]
Bug 22690: Remove MoveItemFromBiblio import

Added in the meantime by bug 17600.
Comment 149 Jonathan Druart 2021-07-19 09:59:35 UTC
Created attachment 122938 [details] [review]
Bug 22690: Add missing txn_begin in subtest
Comment 150 Jonathan Druart 2021-07-19 10:13:42 UTC
1. Why is Koha::Item->item_orders not named Koha::Item->orders?
It returns a Koha::Acquisition::Orders.

2.
in move_to_biblio
1215     return unless $self->biblionumber != $to_biblio->biblionumber; 
I am a fan of unless, but not when there is a negation in the test :)
         return if $self->biblionumber == $to_biblio->biblionumber; 
Reads much better IMO.

3. There is Koha::Item->move_to_biblio($biblio) and Koha::Biblio->adopt_items_from_biblio($biblio)
Don't you think Koha::Biblio->adopt_items_from_biblio could be Koha::Items->move_to_biblio actually?
It would be more consistent and flexible (see the sketch below).

4. 
in move_to_biblio you are calling, on a DBIC rs, ->update, then update_all:
1237         $hold_fill_target->update({ biblionumber => $to_biblionumber });                            

1254     $linktrackers->update_all({ biblionumber => $to_biblionumber });

It's not consistent, is there a good reason for that?

Please provide a fast reply, I can write the follow-up patches if needed.
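
On point 3, the proposed shape would be (per the follow-up that later implements it):

  # instead of
  $to_biblio->adopt_items_from_biblio($from_biblio);
  # the call site would read
  $from_biblio->items->move_to_biblio($to_biblio);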
Comment 151 Jonathan Druart 2021-07-19 10:15:04 UTC
Created attachment 122939 [details] [review]
Bug 22690: Remove MoveItemFromBiblio import

Added in the meantime by bug 17600.
Comment 152 Martin Renvoize 2021-07-19 13:45:58 UTC
(In reply to Jonathan Druart from comment #150)
> 1. Why Koha::Item->item_orders is not named Koha::Item->orders?
> It returns a Koha::Acquisition::Orders.

This should certainly be named 'orders', annoyed I overlooked that during QA :(

> 
> 2.
> in move_to_biblio
> 1215     return unless $self->biblionumber != $to_biblio->biblionumber; 
> I am a fan of unless, not when there is a negation in the test :)
>          return if $self->biblionumber == $to_biblio->biblionumber; 
> Read much better IMO.

Agreed.

> 
> 3. There is Koha::Item->move_to_biblio($biblio) and
> Koha::Biblio->adopt_items_from_biblio($biblio)
> Don't you think Koha::Biblio->adopt_items_from_biblio could be
> Koha::Items->move_to_biblio actually?
> It would be more consistent and flexible.

Hmmm, I think so long as the method name is clear I'm happy for it to live in either class. I do see what you mean though, and for consistency I certainly like that.

> 
> 4. 
> in move_to_biblio you are calling, on a DBIC rs, ->update, then update_all:
> 1237         $hold_fill_target->update({ biblionumber => $to_biblionumber
> });                            
> 
> 1254     $linktrackers->update_all({ biblionumber => $to_biblionumber });
> 
> It's not consistent, is there a good reason for that?
> 
> Please provide a fast reply, I can write the follow-up patches if needed.

I think this is simply because we didn't have a Koha:: based resultset yet and he didn't want to add further complexity to the patch by adding that new class as well.  Having said that, it's fairly trivial to add such a class so long as it's a basic one, so perhaps we should for consistency.
Comment 153 Jonathan Druart 2021-07-19 14:20:51 UTC
(In reply to Martin Renvoize from comment #152)
> (In reply to Jonathan Druart from comment #150)
> > 4. 
> > in move_to_biblio you are calling, on a DBIC rs, ->update, then update_all:
> > 1237         $hold_fill_target->update({ biblionumber => $to_biblionumber
> > });                            
> > 
> > 1254     $linktrackers->update_all({ biblionumber => $to_biblionumber });
> > 
> > It's not consistent, is there a good reason for that?
> > 
> > Please provide a fast reply, I can write the follow-up patches if needed.
> 
> I think this is simply because we didn't have a Koha:: based resultset yet
> and he didn't want to add further complexity to the patch by adding that new
> class as well.  Having said that, it's fairly trivial to add such a class so
> long as it's a basic one, so perhaps we should for consistency.

Both are "raw" DBIC resultsets :)
Comment 154 Martin Renvoize 2021-07-22 09:26:21 UTC
Created attachment 123029 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 155 Martin Renvoize 2021-07-22 09:26:26 UTC
Created attachment 123030 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 156 Martin Renvoize 2021-07-22 09:26:32 UTC
Created attachment 123031 [details] [review]
Bug 22690: (QA follow-up) Silence manually generated warnings

In our test setup we mock the index_records() to produce warnings like
this:

Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line
93.

By wrapping all our item creations in warnings_are{} we can silence them.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 157 Martin Renvoize 2021-07-22 09:26:37 UTC
Created attachment 123032 [details] [review]
Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio()

We need to update the search index record for the old biblio where the
item was moved from to keep the item info in search index up-to-date.

To test:
1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 158 Martin Renvoize 2021-07-22 09:26:43 UTC
Created attachment 123033 [details] [review]
Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level

We need to pass undef itemnumber to build_object() to actually have a
hold without an item tied to it. Otherwise build_object() will automatically
create an item for us (thus making it an item-level hold)

To test:
 $ prove t/db_dependent/Koha/Item.t

Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 159 Martin Renvoize 2021-07-22 09:26:48 UTC
Created attachment 123034 [details] [review]
Bug 22690: Add missing txn_begin in subtest
Comment 160 Martin Renvoize 2021-07-22 09:26:54 UTC
Created attachment 123035 [details] [review]
Bug 22690: Remove MoveItemFromBiblio import

Added in the meantime by bug 17600.
Comment 161 Martin Renvoize 2021-07-22 09:27:00 UTC
Created attachment 123036 [details] [review]
Bug 22690: (QA follow-up) Rename 'item_orders' to 'orders'
Comment 162 Martin Renvoize 2021-07-22 09:27:05 UTC
Created attachment 123037 [details] [review]
Bug 22690: (QA follow-up) Improve negation syntax
Comment 163 Martin Renvoize 2021-07-22 09:27:11 UTC
Created attachment 123038 [details] [review]
Bug 22690: (QA follow-up) Move adopt_items_from_biblios to Koha::Items

This patch moves the Koha::Biblio->adopt_items_from_biblio method to the
Koha::Items set class and updates all calls from
Biblio2->adopt_items_from_biblio(Biblio1) to Biblio->items->move_to_biblio(Biblio2)
Comment 164 Martin Renvoize 2021-07-22 09:27:16 UTC
Created attachment 123039 [details] [review]
Bug 22690: (QA follow-up) Clarify uses of DBIC
Comment 165 Martin Renvoize 2021-07-22 09:59:37 UTC
Created attachment 123040 [details] [review]
Bug 22690: (QA follow-up) Add relationships to linktracker

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 166 Martin Renvoize 2021-07-22 09:59:43 UTC
Created attachment 123041 [details] [review]
Bug 22690: DBIC Schema Updates
Comment 167 Martin Renvoize 2021-07-22 09:59:48 UTC
Created attachment 123042 [details] [review]
Bug 22690: (QA follow-up) Use relationship accessor

With the addition of foreign key relationships to the linktracker table
we now get a DBIC relationship accessor we can use. This clarifies the
code slightly by using the _result->relationship form to get the DBIC
resultset.  We should still introduce a Koha::Object based class for
this table at some point.
Comment 168 Martin Renvoize 2021-07-22 10:48:09 UTC
Created attachment 123044 [details] [review]
Bug 22690: (QA follow-up) Add TrackedLink classes and use them

This patch adds Koha::TrackedLink(s) classes based on Koha::Object(s)
and then adds the relationship accessor to Koha::Item and uses it within
the move_to_biblio method.

Tests for new relationship also added to t/db_dependent/Koha/Item.t
Comment 169 Martin Renvoize 2021-07-22 10:55:10 UTC
RM comments addressed in follow-ups.

I also took the opportunity to add the Koha::TrackedLink(s) classes and use those Koha::Object(s) based accessors to clarify some of the code found within move_to_biblio.  With this change, the odd call to update_all has been replaced with our Koha::Objects-based update call and its automatic trigger handling (without passing no_triggers yet).  The other call acting on a DBIC resultset is against a single row, so update is correct.. I believe.
Comment 170 Joonas Kylmälä 2021-07-22 14:28:38 UTC
In Koha::Items::move_to_biblio the variable $from_biblio should be $to_biblio; right now it produces an error because there is no such variable.
Comment 171 Joonas Kylmälä 2021-07-22 14:35:43 UTC
Created attachment 123062 [details] [review]
Bug 22690: (QA follow-up) Correct variable name

The $from_biblio variable no longer exists after a refactoring. Here we
need to re-index both the $self biblio and the $to_biblio biblio.
Comment 172 Martin Renvoize 2021-07-22 14:47:34 UTC
Thanks for catching that Joonas :)
Comment 173 Joonas Kylmälä 2021-07-22 15:43:58 UTC
Koha::Items::move_to_biblio is only re-indexing the last item's biblio and not all of them; this was probably not spotted in testing because all the items were in the same biblio. Working on a patch.
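
The shape of the fix is presumably something like this (a sketch; the key point is collecting the distinct source biblios before the rows are updated):

  # gather every distinct source biblionumber up front
  my %biblionumbers = map { $_->biblionumber => 1 } $self->as_list;
  $biblionumbers{ $to_biblio->biblionumber } = 1;

  # ... move the items ...

  $indexer->index_records( [ keys %biblionumbers ], 'specialUpdate', 'biblioserver' );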
Comment 174 Joonas Kylmälä 2021-07-22 16:42:45 UTC
I also get the following test failure:

> root@kohadevbox:koha(master)$ prove t/db_dependent/Koha/Item.t 
> t/db_dependent/Koha/Item.t ..     # No tests run!
> t/db_dependent/Koha/Item.t .. 1/12 
> #   Failed test 'No tests run for subtest "tracked_links relationship"'
> #   at t/db_dependent/Koha/Item.t line 56.
> Can't locate object method "_new_from_dbic" via package "Koha::TrackedLinks" (perhaps you forgot to load "Koha::TrackedLinks"?) at /kohadevbox/koha/Koha/Item.pm line 1220.
> # Looks like your test exited with 255 just after 1.
Comment 175 Jonathan Druart 2021-07-26 09:41:44 UTC
(In reply to Martin Renvoize from comment #169)
> RM comments addressed in followups.
> 
> I also took the opportunity to add the Koha::TrackedLink(s) classes and use
> those Koha::Object(s) based accessors to clarify some of the code found
> within move_to_biblio.  With this change, the odd call to update_all has
> been replaced with our Koha::Objects based automatic trigger handling update
> call (without passing no_triggers yet).  The other DBIC resultset acting
> call is against a single row so update is correct.. I believe.

To clarify, I was not asking to add new classes. I was wondering if the inconsistency (update vs update_all) in those two calls was expected:

1257     my $hold_fill_target = $self->_result->hold_fill_target;
1258     if ($hold_fill_target) {
1259         $hold_fill_target->update({ biblionumber => $to_biblionumber });
1260     }


1260     my $linktrackers = $schema->resultset('Linktracker')->search({ itemnumber => $self->itemnumber });
1261     $linktrackers->update_all({ biblionumber => $to_biblionumber });

both are DBIx::Class::ResultSet
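
For reference, the practical difference between the two DBIC calls, which is why the inconsistency is worth flagging:

  $rs->update({ biblionumber => $bn });
  # single UPDATE ... WHERE ... statement; rows are never inflated

  $rs->update_all({ biblionumber => $bn });
  # inflates every row and calls $row->update on each,
  # so per-row update overrides (if any) get run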
Comment 176 Martin Renvoize 2021-08-05 13:28:12 UTC
Created attachment 123481 [details] [review]
Bug 22690: Refactor merging of records to improve performance (Elasticsearch)

This patch allows merging of records with many items without the web server timing out.

Test plan:

Without the patch:

- Create 2 records (one with e.g. 1000 items).
- Do a cataloguing search that displays both records, select them and click "Merge selected".
- Choose the record with many items as the one to be eliminated.
- Start the merging.
- After a while the web server should give you a timeout error (the merging process may still continue)

With the patch:
- Do the same as above
- This time verify that the records are merged without timeout
- Create a new biblio with an item
- Add with the item:
  * acquisition order
  * hold (reserve)
- Merge the biblio to another one
- Verify that the item and its related data was moved
- Verify that tests pass:
  prove -v t/db_dependent/Koha/Biblio.t
  prove -v t/db_dependent/Koha/Item.t
  prove -v t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 177 Martin Renvoize 2021-08-05 13:28:19 UTC
Created attachment 123482 [details] [review]
Bug 22690: Add more tests

- Tests for adopt_items_from_biblio
- Tests for the relationship between items and acquisition orders
- Tests for indexer calls in adopt_items_from_biblio

Signed-off-by: Michal Denar <black23@gmail.com>
Rebased-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 178 Martin Renvoize 2021-08-05 13:28:25 UTC
Created attachment 123483 [details] [review]
Bug 22690: (QA follow-up) Silence manually generated warnings

In our test setup we mock the index_records() to produce warnings like
this:

Koha::Item at t/db_dependent/Koha/SearchEngine/Indexer.t line
93.

By wrapping all our item creations in warnings_are{} we can silence them.

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 179 Martin Renvoize 2021-08-05 13:28:30 UTC
Created attachment 123484 [details] [review]
Bug 22690: (QA follow-up) Index also source biblio when calling move_to_biblio()

We need to update the search index record for the old biblio where the
item was moved from to keep the item info in search index up-to-date.

To test:
1) $ prove t/db_dependent/Koha/SearchEngine/Indexer.t

Signed-off-by: Joonas Kylmälä <joonas.kylmala@helsinki.fi>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 180 Martin Renvoize 2021-08-05 13:28:36 UTC
Created attachment 123485 [details] [review]
Bug 22690: (QA follow-up) Make bib-level hold object actually bib-level

We need to pass undef itemnumber to build_object() to actually have a
hold without an item tied to it. Otherwise build_object() will automatically
create an item for us (thus making it an item-level hold)

To test:
 $ prove t/db_dependent/Koha/Item.t

Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 181 Martin Renvoize 2021-08-05 13:28:42 UTC
Created attachment 123486 [details] [review]
Bug 22690: Add missing txn_begin in subtest
Comment 182 Martin Renvoize 2021-08-05 13:28:47 UTC
Created attachment 123487 [details] [review]
Bug 22690: Remove MoveItemFromBiblio import

Added in the meantime by bug 17600.
Comment 183 Martin Renvoize 2021-08-05 13:28:53 UTC
Created attachment 123488 [details] [review]
Bug 22690: (QA follow-up) Rename 'item_orders' to 'orders'
Comment 184 Martin Renvoize 2021-08-05 13:28:59 UTC
Created attachment 123489 [details] [review]
Bug 22690: (QA follow-up) Improve negation syntax
Comment 185 Martin Renvoize 2021-08-05 13:29:04 UTC
Created attachment 123490 [details] [review]
Bug 22690: (QA follow-up) Move adopt_items_from_biblios to Koha::Items

This patch moves the Koha::Biblio->adopt_items_from_biblio method to the
Koha::Items set class and updates all calls from
Biblio2->adopt_items_from_biblio(Biblio1) to Biblio1->items->move_to_biblio(Biblio2).
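
In other words, a call site changes shape like this (variable names
illustrative):

  # Before: method on the destination Koha::Biblio
  $to_biblio->adopt_items_from_biblio($from_biblio);

  # After: method on the source biblio's Koha::Items set
  $from_biblio->items->move_to_biblio($to_biblio);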
Comment 186 Martin Renvoize 2021-08-05 13:29:10 UTC
Created attachment 123491 [details] [review]
Bug 22690: (QA follow-up) Clarify uses of DBIC
Comment 187 Martin Renvoize 2021-08-05 13:29:16 UTC
Created attachment 123492 [details] [review]
Bug 22690: (QA follow-up) Add relationships to linktracker

Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Comment 188 Martin Renvoize 2021-08-05 13:29:21 UTC
Created attachment 123493 [details] [review]
Bug 22690: DBIC Schema Updates
Comment 189 Martin Renvoize 2021-08-05 13:29:27 UTC
Created attachment 123494 [details] [review]
Bug 22690: (QA follow-up) Use relationship accessor

With the addition of foreign key relationships to the linktracker table
we now get a DBIC relationship accessor we can use. This clarifies the
code slightly by using the _result->relationship form to get the DBIC
resultset.  We should still introduce a Koha::Object-based class for
this table at some point.
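
A hedged sketch of the form this describes; the linktrackers accessor
name is an assumption based on DBIC's default pluralization:

  # Raw DBIC resultset via the generated relationship accessor
  my $tracked_links = $self->_result->linktrackers;

  # e.g. repoint the rows at the destination biblio during a merge
  $tracked_links->update( { biblionumber => $to_biblio->biblionumber } );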
Comment 190 Martin Renvoize 2021-08-05 13:29:33 UTC
Created attachment 123495 [details] [review]
Bug 22690: (QA follow-up) Add TrackedLink classes and use them

This patch adds Koha::TrackedLink(s) classes based on Koha::Object(s)
and then adds the relationship accessor to Koha::Item and uses it within
the move_to_biblio method.

Tests for the new relationship are also added to t/db_dependent/Koha/Item.t
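
The skeleton of such classes, following the usual Koha::Object(s)
pattern (a sketch, not the exact patch):

  # (in Koha/TrackedLink.pm)
  package Koha::TrackedLink;

  use Modern::Perl;
  use base qw(Koha::Object);

  # Name of the DBIC result class backing this object
  sub _type { return 'Linktracker'; }

  1;

  # (in Koha/TrackedLinks.pm)
  package Koha::TrackedLinks;

  use Modern::Perl;
  use base qw(Koha::Objects);

  sub _type        { return 'Linktracker'; }
  sub object_class { return 'Koha::TrackedLink'; }

  1;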
Comment 191 Martin Renvoize 2021-08-05 13:29:39 UTC
Created attachment 123496 [details] [review]
Bug 22690: (QA follow-up) Correct variable name

The $from_biblio variable no longer exists after a refactoring. Here we
need to re-index both the $self biblio and the $to_biblio biblio.
Comment 192 Martin Renvoize 2021-08-05 13:29:44 UTC
Created attachment 123497 [details] [review]
Bug 22690: (QA follow-up) Fix indexing for Items sets

This patch adds tests and handling for calling move_to_biblio on a
Koha::Items set that contains items from more than one source biblio;
the idea is sketched after the test plan below.

Test plan
1/ Inspect the changes to t/db_dependent/Koha/SearchEngine/Indexer.t
2/ Run t/db_dependent/Koha/SearchEngine/Indexer.t and confirm it passes
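
A hedged sketch of the idea (the skip_record_index parameter and
variable names are assumptions): move the items without per-item
indexing, then reindex each distinct affected biblio once:

  use Koha::SearchEngine;
  use Koha::SearchEngine::Indexer;

  my %affected;
  while ( my $item = $items->next ) {
      $affected{ $item->biblionumber } = 1;    # remember every source biblio
      $item->move_to_biblio( $to_biblio, { skip_record_index => 1 } );
  }

  my $indexer = Koha::SearchEngine::Indexer->new(
      { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
  $indexer->index_records(
      [ keys %affected, $to_biblio->biblionumber ],
      'specialUpdate', 'biblioserver'
  );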
Comment 193 Martin Renvoize 2021-08-05 13:30:52 UTC
Fixes made in follow-up, test suite re-run against everything and all passing.

I think it was a good move adding the Koha:: classes; it makes the code cleaner and shows the path for future work.

Setting back to PQA :)
Comment 194 Jonathan Druart 2021-08-27 12:32:35 UTC
About the new FK added to the linktracker table, is it relevant to keep the entry if the biblio has been deleted?

+  CONSTRAINT `linktracker_biblio_ibfk` FOREIGN KEY (`biblionumber`) REFERENCES `biblio` (`biblionumber`) ON DELETE SET NULL ON UPDATE SET NULL,

(not a blocker, but please answer)
Comment 195 Jonathan Druart 2021-08-27 12:37:07 UTC
Created attachment 124176 [details] [review]
Bug 22690: Remove unneeded return and add no_triggers

* C4/Items.pm
  - Koha::Biblios not used

* Koha/Item.pm
  - Koha::Item->orders must return an empty set if no order attached
  - no_triggers should be passed to other update calls

* Item.t
  - No need to build a fund
  - Add a new test for Koha::Item->orders when no order is attached
Comment 196 Jonathan Druart 2021-08-27 13:36:45 UTC
Pushed to master for 21.11, thanks to everybody involved!
Comment 197 Jonathan Druart 2021-08-27 15:07:49 UTC
Created attachment 124188 [details] [review]
Bug 22690: Add koha_object[s]_class to fix TestBuilder.t

Can't locate object method "_new_from_dbic" via package "Koha::Linktracker" (perhaps you forgot to load "Koha::Linktracker"?) at /kohadevbox/koha/Koha/Object.pm line 334.

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
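
The fix follows the standard schema-class hook, roughly:

  # In Koha/Schema/Result/Linktracker.pm: tell Koha::Object[s] which
  # Koha:: classes wrap this DBIC result, since the names differ
  sub koha_object_class {
      'Koha::TrackedLink';
  }

  sub koha_objects_class {
      'Koha::TrackedLinks';
  }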
Comment 198 Jonathan Druart 2021-08-27 15:09:00 UTC
Follow-up pushed to master.
Comment 199 Jonathan Druart 2021-08-31 06:24:31 UTC
Created attachment 124257 [details] [review]
Bug 22690: Fix the tracklink feature

With the FK we must set to undef/NULL, not 0.
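
A hypothetical sketch of the shape of the fix (the real change is in
the attached patch):

  # With the new FKs, "no item / no borrower" must be NULL, not 0:
  Koha::TrackedLink->new(
      {
          biblionumber   => $biblionumber,
          itemnumber     => $itemnumber     || undef,    # was: || 0
          borrowernumber => $borrowernumber || undef,    # was: || 0
          url            => $uri,
      }
  )->store;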
Comment 200 Jonathan Druart 2021-08-31 06:25:35 UTC
(In reply to Jonathan Druart from comment #199)
> Created attachment 124257 [details] [review]
> Bug 22690: Fix the tracklink feature
> 
> With the FK we must set to undef/NULL, not 0.

This also means that we need a db rev to fix existing values. Can you follow up, please?
Comment 201 Jonathan Druart 2021-09-01 09:55:25 UTC
I sent the following email to Chris yesterday:

"""
A couple of questions:
  * Having the FK means the ON UPDATE SET NULL clause will apply. We
will "lose" the links, but as the ids can be reused several times I
think it makes sense.
  * We will need to update existing entries (set to NULL when 0 or
deleted); we can take the opportunity to remove the rows with
biblionumber=null, itemnumber=null, borrowernumber=null.
"""

his reply:
"""
The table mainly just reports the usage of electronic resources. So
if the resource is deleted it doesn't make sense to keep how many
times it was viewed, so I think the FK will be fine
"""

Maybe we should then have a delete cascade on the biblionumber.

Please help here, I am not willing to follow up myself.
Comment 202 Ere Maijala 2021-09-01 10:25:49 UTC
Jonathan, I've lost track of things a bit here, but since linktracker also tracks clicks per borrower, would it still be useful to keep the records? Also, if cascade delete were used, would it only affect biblionumber, not itemnumber or borrowernumber?
Or to put it another way, if we don't want to keep the records after the biblio is deleted, why is biblionumber nullable in the table?
Comment 203 Jonathan Druart 2021-09-01 12:46:53 UTC
(In reply to Ere Maijala from comment #202)
> Jonathan, I've lost track of things a bit here, but since linktracker also
> tracks clicks per borrower, would it still be useful to keep the records?
> Also, if cascade delete would be used, would it only affect biblionumber,
> not itemnumber or borrowernumber? 

In my understanding we want to keep track of the number of clicks on the records, so even if the patron or item is deleted we should keep the entry.

> Or to put it another way, if we don't want to keep the records after the
> biblio is deleted, why is biblionumber nullable in the table?

Correct, that is not consistent.
Comment 204 Kyle M Hall 2021-09-03 14:04:42 UTC
Does not apply cleanly to 21.05.x. If this is needed for 21.05 please create a 21.05 patch. Thanks!
Comment 205 Jonathan Druart 2021-10-08 13:24:43 UTC
Follow-up still needed for 21.11
Comment 206 Ere Maijala 2021-10-11 08:15:06 UTC
Jonathan,

I'm still not sure it makes sense to delete the tracking information when a biblio is deleted. That would break the first use case described here:

https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference

The second example has a JOIN with the biblio table, but I'm not convinced it's intentionally ignoring the case of deleted biblios.
Comment 207 Nick Clemens 2021-10-12 16:48:56 UTC
(In reply to Ere Maijala from comment #206)
> Jonathan,
> 
> I'm still not sure it makes sense to delete the tracking information when a
> biblio is deleted. That would break the first use case described here:
> 
> https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference
> 
> The second example has a JOIN with the biblio table, but I'm not convinced
> it's intentionally ignoring the case of deleted biblios.

I think both cases there track the use of current resources.

My assumption would be that a deleted resource is no longer available and the decision has already been made (i.e. you ended the subscription), so the tracking does not need to be retained
Comment 208 Katrin Fischer 2021-10-12 21:30:46 UTC
(In reply to Nick Clemens from comment #207)
> (In reply to Ere Maijala from comment #206)
> > Jonathan,
> > 
> > I'm still not sure it makes sense to delete the tracking information when a
> > biblio is deleted. That would break the first use case described here:
> > 
> > https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference
> > 
> > The second example has a JOIN with the biblio table, but I'm not convinced
> > it's intentionally ignoring the case of deleted biblios.
> 
> I think both cases there track the use of current resources.
> 
> My assumption would be that a deleted resource is no longer available and
> the decision has already been made (i.e. you ended the subscription), so
> the tracking does not need to be retained

I think this is true if you want to report on a specific resource, but I think the information could still be useful if you want to purchase the records again under another license model or when doing annual reporting on e-resource use.
Comment 209 Jonathan Druart 2021-10-13 07:03:07 UTC
(In reply to Katrin Fischer from comment #208)
> (In reply to Nick Clemens from comment #207)
> > (In reply to Ere Maijala from comment #206)
> > > Jonathan,
> > > 
> > > I'm still not sure it makes sense to delete the tracking information when a
> > > biblio is deleted. That would break the first use case described here:
> > > 
> > > https://bywatersolutions.com/news/koha-tutorial-track-click-system-preference
> > > 
> > > The second example has a JOIN with the biblio table, but I'm not convinced
> > > it's intentionally ignoring the case of deleted biblios.
> > 
> > I think both cases there track the use of current resources.
> > 
> > My assumption would be that a deleted resource is no longer available and
> > the decision has already been made (i.e. you ended the subscription), so
> > the tracking does not need to be retained
> 
> I think this is true if you want to report on a specific resource, but I
> think the information could still be useful if you want to purchase the
> records again under another license model or when doing annual reporting on
> e-resource use.

I am not sure it's relevant as you lose the link (ON DELETE SET NULL).

The question basically is: do we want to know how many times a user clicked on records that have been deleted?
Comment 210 Ere Maijala 2021-10-13 07:09:27 UTC
(In reply to Jonathan Druart from comment #209)
> I am not sure it's relevant as you lose the link (ON DELETE SET NULL).
> 
> The question basically is: do we want to know how many times a user clicked
> on records that have been deleted?

The URL is still recorded in the tracking table, and that might be all you need regardless of the biblio that contained it. The bottom line is, I wouldn't want to make too many assumptions about the possible use cases and potentially handicap someone's reporting. I also think such a change, if implemented, should not be a side effect of this issue but would need to be its own issue with clear decisions, perhaps an RFC as well.
Comment 211 Jonathan Druart 2021-10-13 07:21:22 UTC
OK, let's keep it as is for now, thanks!

Note:

(In reply to Jonathan Druart from comment #201)
>   * We will need to update existing entries (set to NULL when 0 or
> deleted); we can take the opportunity to remove the rows with
> biblionumber=null, itemnumber=null, borrowernumber=null.

The DBRev already deals with that.