Bug 24562 - addbiblio.pl overloading CPU
Summary: addbiblio.pl overloading CPU
Status: RESOLVED WORKSFORME
Alias: None
Product: Koha
Classification: Unclassified
Component: Cataloging
Version: 19.11
Hardware: All
OS: All
Priority: P1 - high
Severity: normal
Assignee: Bugs List
QA Contact: Testopia
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2020-02-02 20:13 UTC by Jawad Makki
Modified: 2023-12-31 14:24 UTC
CC: 4 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Koha about page 1 (118.31 KB, image/png)
2020-02-12 13:58 UTC, Jawad Makki
Koha about page 2 (51.86 KB, image/png)
2020-02-12 13:59 UTC, Jawad Makki

Description Jawad Makki 2020-02-02 20:13:45 UTC
Hello. 

Sometimes during the day, addbiblio.pl overloads the CPU (100%), making Koha completely unresponsive for several minutes (sometimes up to 20 minutes).


In the apache2 log, the response time for "POST /cgi-bin/koha/cataloguing/addbiblio.pl HTTP/1.1" hits the gateway timeout a few times per day.

Below are some errors retrieved from plack-error.log:

Use of uninitialized value $mode in string ne at /usr/share/koha/intranet/cgi-bin/cataloguing/addbiblio.pl line 845.
Use of uninitialized value $searchid in concatenation (.) or string at /usr/share/koha/intranet/cgi-bin/cataloguing/addbiblio.pl line 876.


Koha version 19.11, running on Debian 9.
Memcached and Plack are enabled.

This problem also existed in 18.11; we upgraded Koha to 19.11 but it is still occurring.

Any help would be appreciated.
Comment 1 Jawad Makki 2020-02-02 20:18:56 UTC
These are more errors retrieved from plack-error.log:

/usr/share/koha/intranet/cgi-bin/cataloguing/addbiblio.pl line 708.
Use of uninitialized value $frameworkcode in string eq at /usr/share/koha/intranet/cgi-bin/cataloguing/addbiblio.pl line 712.
Use of uninitialized value $frameworkcode in string eq at /usr/share/koha/intranet/cgi-bin/cataloguing/addbiblio.pl line 731.
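
For context, "uninitialized value" warnings like these usually just mean a CGI parameter was absent on that request. A minimal sketch of the usual guard (defaulting the parameter with the defined-or operator); the parameter names here are assumptions for illustration, not the actual addbiblio.pl code:

    use CGI;
    my $input    = CGI->new;
    my $mode     = $input->param('mode')     // '';   # avoids "ne" on an undefined value
    my $searchid = $input->param('searchid') // '';   # avoids concatenating an undefined value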
Comment 2 Jonathan Druart 2020-02-03 15:21:21 UTC
Hello Jawad,
Are you able to tell whether it happens on an edit or on a new bibliographic record?
Also, were you able to identify the record(s) that cause this problem?
Comment 3 Jawad Makki 2020-02-05 10:55:28 UTC
Hello Jonathan,

It is not clear whether it happens during an edit or when creating a new bibliographic record. The apache2 logs show a gateway timeout on the operations below:

POST /cgi-bin/koha/cataloguing/addbiblio.pl 
/cgi-bin/koha/cataloguing/addbiblio.pl?frameworkcode=BKS

Matching the exact time from the apache2 logs against the Koha logs in the intranet shows both cases: sometimes it is an edit operation and sometimes it is a new record.

I've checked some of these records; they look fine :)
Comment 4 Jonathan Druart 2020-02-05 11:22:32 UTC
What happens if you edit this record again? Does it time out?

You should compare with a monitoring log/graph and check whether the server was overloaded (by something else) at that moment.
Comment 5 Jawad Makki 2020-02-05 12:24:40 UTC
The record is edited and saved normally (no timeout).

We were monitoring the server during the problem, and we noticed that it is the "addbiblio.pl" script that is causing this CPU overload.
Comment 6 Jonathan Druart 2020-02-11 12:21:18 UTC
Are you using Zebra or Elasticsearch? How many items on the bibliographic record?

I am afraid that we cannot help you much without more information.

Lowering the severity until the issue is confirmed.
Comment 7 Jawad Makki 2020-02-12 11:59:47 UTC
We are using Zebra.

Some bibliographic records have only a few items (between 1 and 5). Other bibliographic records are for serials; they contain a high number of issues (items), which can exceed 1000 items in some cases.

Note also that we are using analytical biblio records.

So do you think it is an indexing issue with Zebra? There are no errors in the Zebra logs, but in zebra-output.log we are getting thousands of this warning per day:

13:56:30-20/01 zebrasrv(17643) [warn] ir_session (exception)
13:56:36-20/01 zebrasrv(17644) [warn] ir_session (exception)
13:56:42-20/01 zebrasrv(17645) [warn] ir_session (exception)
13:56:50-20/01 zebrasrv(17646) [warn] ir_session (exception)
13:56:57-20/01 zebrasrv(17647) [warn] ir_session (exception)
Comment 8 Jonathan Druart 2020-02-12 12:46:09 UTC
What are the values of use_zebra_facets and zebra_max_record_size in your config file (koha-conf.xml)?

You can ignore the zebrasrv warnings.
Comment 9 Jonathan Druart 2020-02-12 12:48:09 UTC
Did you check the Koha about page?
In the 'Server information' tab, what do 'PSGI' and 'Memcached' say?
In the 'System information' tab, are there any warnings?
Comment 10 Jawad Makki 2020-02-12 13:58:34 UTC
Created attachment 98753
Koha about page 1
Comment 11 Jawad Makki 2020-02-12 13:59:15 UTC
Created attachment 98754
Koha about page 2
Comment 12 Jawad Makki 2020-02-12 14:01:27 UTC
use_zebra_facets is 1: <use_zebra_facets>1</use_zebra_facets>

zebra_max_record_size is missing from my config file (koha-conf.xml)!
Should I add it with the default value of 1024, or increase it to 2 MB (<zebra_max_record_size>2048</zebra_max_record_size>)?
This adjustment only requires a Zebra restart, with no need for a full re-index, right?
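
For illustration only, this is roughly how those two settings would sit in koha-conf.xml (the values are the ones discussed above, not a recommendation; exact placement in the file may differ):

    <use_zebra_facets>1</use_zebra_facets>
    <!-- 1024 is the default mentioned above; 2048 would raise the limit to 2 MB -->
    <zebra_max_record_size>2048</zebra_max_record_size>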


concerning "Koha about page", every thing is normal (please check the attached screenshots)
PSGI:	Plack (deployment)
Memcached:	Servers: 127.0.0.1:11211 | Namespace: koha_dblibrary | Status: running. | Config read from: koha-conf.xml

In the System information tab, there is the warning below related to "Patron relationship problems". This warning appeared after the upgrade from 18.11 to 19.11; we have ignored it since we are not using borrower relationships.


Patron relationship problems
The following values have been used for guarantee/guarantor relationships, but do not exist in the 'borrowerRelationship' system preference:

If the relationship is one you want, please add it to the 'borrowerRelationship' system preference, otherwise have your system's administrator correct the values in borrowers.relationship and/or borrower_relationships.relationship in the database.
Comment 13 Jonathan Druart 2020-02-12 14:11:09 UTC
I would try to turn the use_zebra_facets config flag off, restart memcached and plack, then wait and see if the problem appears again.
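
On a Debian package install, that would look roughly like the following; the instance name "library" is a placeholder, not taken from this report:

    # set <use_zebra_facets>0</use_zebra_facets> in /etc/koha/sites/library/koha-conf.xml, then:
    sudo systemctl restart memcached
    sudo koha-plack --restart library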
Comment 14 Jawad Makki 2020-02-29 11:45:56 UTC
Do you think that this issue is related somehow to Bug 23844 - Noisy warns in addbiblio.pl when importing from Z3950 ?

Can the patch that you suggested in Bug 23844 solve the problem reported here?
Comment 15 Jonathan Druart 2020-03-03 12:03:05 UTC
(In reply to Jawad Makki from comment #14)
> Do you think that this issue is related somehow to Bug 23844 - Noisy warns
> in addbiblio.pl when importing from Z3950 ?
> 
> can the patch that you have suggested in Bug 23844 solve the problem reported
> here ?

No, I do not think so. It just removes warnings from the logs.
Unless you get thousands of those warnings in the logs (which could slow the server down), but I do not think that can happen.
Comment 16 Marcel de Rooy 2020-06-04 06:23:43 UTC
Are you really sure that the problem is in addbiblio.pl?
Could it be that some other process is overloading the CPU, which would explain why addbiblio is getting a timeout?

I recently had some similar issues, and I do not trust the z3950 searches, for instance. That is an asynchronous search that might create problems under Plack, whereas I would not expect that so quickly for addbiblio. But note that I am also still searching for a clue.
Comment 17 David Cook 2020-06-04 06:47:18 UTC
(In reply to Jawad Makki from comment #7)
> some bibliographic records include few items (between 1 to 5 items).
> other bibliographic records are for Serials, they are including high number
> of issues (items) that can reach more than 1000 items in some cases.
> 

Does the high CPU usage only happen for records with a high number of items or does it happen also for records with a low number of items?

Of course, it could be something else on the system. Is your database on the same server or a separate server? It could be stuck waiting for I/O on the database end, although I tend to only see that on databases much larger than the largest Koha database I've ever seen.
Comment 18 Marcel de Rooy 2020-07-03 08:51:19 UTC
Just an observation here: I added the z3950 search scripts to the group of non-Plack scripts handled by Apache, and have not had a server overload in the last month.
No proof yet...
Comment 19 Jawad Makki 2021-06-17 08:35:55 UTC
After many tests over the last few months, we were able to pin down the problem. Below are our observations:

1 - the problem definitely comes from addbiblio.pl
2 - the problem is not related to the size of the record or the number of items
3 - the problem appears when adding a new bibliographic record (not when editing an existing one)
4 - the problem occurs when there is an apostrophe character in the title (in 245$a)
5 - the source of the problem is the check for a possible duplicate record ("Duplicate record suspected!")
6 - after disabling the duplicate-checking code in addbiblio.pl (shown below), everything works normally
    -------------------
    # getting html input
    my @params = $input->multi_param();
    $record = TransformHtmlToMarc( $input, 1 );
    # check for a duplicate
    my ( $duplicatebiblionumber, $duplicatetitle );
    # disable checking for possible duplicates
    #if ( !$is_a_modif ) {
    #    ( $duplicatebiblionumber, $duplicatetitle ) = FindDuplicate($record);
    #}
    -------------------
	
7 - the problem is in the "FindDuplicate($record)" function


Any feedback or suggestions are welcome!
Comment 20 Jonathan Druart 2021-06-17 08:54:49 UTC
I can't recreate this on master.

* Did you try a more recent version?

* Are you able to recreate the problem on a sandbox? https://wiki.koha-community.org/wiki/Sandboxes

* Does the following change fix the problem?

diff --git a/C4/Search.pm b/C4/Search.pm
index 0db460a8083..17b0ab33f87 100644
--- a/C4/Search.pm
+++ b/C4/Search.pm
@@ -102,6 +102,7 @@ sub FindDuplicate {
 
         $result->{title} =~ s /\\//g;
         $result->{title} =~ s /\"//g;
+        $result->{title} =~ s /\'//g;
         $result->{title} =~ s /\(//g;
         $result->{title} =~ s /\)//g;
Comment 21 Jawad Makki 2021-07-04 21:13:50 UTC
Hello Jonathan,

The suggested modification in C4/Search.pm (adding $result->{title} =~ s /\'//g;) has solved the problem.
The sub FindDuplicate no longer overloads the CPU when it is called from addbiblio.pl.

sub FindDuplicate {
 
         $result->{title} =~ s /\\//g;
         $result->{title} =~ s /\"//g;
+        $result->{title} =~ s /\'//g;
         $result->{title} =~ s /\(//g;
         $result->{title} =~ s /\)//g;

I have applied it to the author as well:

            $result->{author} =~ s /\\//g;
            $result->{author} =~ s /\"//g;            
+           $result->{author} =~ s /\'//g; 
            $result->{author} =~ s /\(//g;
            $result->{author} =~ s /\)//g;
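
Purely as a sketch (not the committed patch): the same cleanup could be written as one substitution over both fields, stripping every character the existing code removes plus the apostrophe.

    # sketch only: consolidate the per-character substitutions shown above
    for my $field ( 'title', 'author' ) {
        next unless defined $result->{$field};
        $result->{$field} =~ s/["'\\()]//g;   # double quote, apostrophe, backslash, parentheses
    }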
Comment 22 David Cook 2021-07-05 03:11:52 UTC
I have heard of this before, although I'm curious why it's using 100% CPU instead of just failing to find the duplicates. 

Sounds a bit like we're fixing the symptom rather than the problem.
Comment 23 Jonathan Druart 2021-07-06 07:54:20 UTC
(In reply to Jawad Makki from comment #21)
> Hello Jonathan,
> 
> the suggested modification (to add $result->{title} =~ s /\'//g;) in
> C4/Search.pm has solved the problem.
> the sub FindDuplicate is not overloading the CPU anymore when it is called
> from addbiblio.pl

Hello Jawad, what about the other two questions? :)

* Did you try a more recent version?

* Are you able to recreate the problem on a sandbox? https://wiki.koha-community.org/wiki/Sandboxes
Comment 24 David Cook 2021-07-19 07:15:16 UTC
One of my libraries (before they joined us) did a lot of customizations to FindDuplicate to get around this problem. Their customizations were all about making the query more specific.

In theory, the max number of results to be returned is 50. That's quite a few, especially since we're running an inefficient TransformMarcToKoha over every one of those results.

I'll have to look into this more.
Comment 25 David Cook 2021-07-20 02:33:47 UTC
(In reply to Jawad Makki from comment #21)
> 
> the suggested modification (to add $result->{title} =~ s /\'//g;) in
> C4/Search.pm has solved the problem.
> the sub FindDuplicate is not overloading the CPU anymore when it is called
> from addbiblio.pl
> 

This is really interesting. I've created 2 duplicate records in Koha with an apostrophe in the title and I'm not having any issues. 

What exact title are you using?
Comment 26 David Cook 2021-07-20 02:42:34 UTC
Wow, FindDuplicate is so inefficient. 

It looks like we only ever use the 1st result from FindDuplicate, but we return a copy of an array that (in theory) can contain up to 50 results. 

(Also there is a little syntax problem in Search.t although it doesn't affect the test outcome.)

--

my ( $error, $searchresults, undef ) = $searcher->simple_search_compat($query,0,50);

--

grep -R "FindDuplicate(" *
acqui/addorderiso2709.pl:                    $duplifound = 1 if FindDuplicate($marcrecord);
acqui/neworderempty.pl:    ($biblionumber,$duplicatetitle) = FindDuplicate($marcrecord);
C4/Search.pm:($biblionumber,$biblionumber,$title) = FindDuplicate($record);
cataloguing/addbiblio.pl:        ( $duplicatebiblionumber, $duplicatetitle ) = FindDuplicate($record);
opac/opac-suggestions.pl:    if ( my ($duplicatebiblionumber, $duplicatetitle) = FindDuplicate($biblio) ) {
suggestion/suggestion.pl:    elsif ( !$suggestion_only->{suggestionid} && ( my ($duplicatebiblionumber, $duplicatetitle) = FindDuplicate($biblio) ) && !$save_confirmed ) {
t/db_dependent/Search.t:    ($biblionumber,undef,$title) = FindDuplicate($record);
t/db_dependent/Search.t:    ($biblionumber,undef,$title) = FindDuplicate($record);
t/db_dependent/Search.t:        warning_is { C4::Search::FindDuplicate($record_1);}
t/db_dependent/Search.t:        warning_is { C4::Search::FindDuplicate($record_2);}
t/db_dependent/Search.t:        warning_is { C4::Search::FindDuplicate($record_3);}
Comment 27 David Cook 2021-07-20 03:13:53 UTC
(In reply to David Cook from comment #26)
> It looks like we only ever use the 1st result from FindDuplicate, but we
> return a copy of an array that (in theory) can contain up to 50 results. 
>
> --
> 
> my ( $error, $searchresults, undef ) =
> $searcher->simple_search_compat($query,0,50);
> 

I've just created 50+ duplicate records that have an apostrophe in the title, and I've confirmed that FindDuplicate returns a maximum of 50 records (i.e. 100 array entries, 2 for each record).

Looking at FindDuplicate, there's no reason why addbiblio.pl should consume 100% CPU when using that function. 

That suggests to me that if there is an issue, it's probably with Koha::SearchEngine::Search::Zebra::Search's simple_search_compat(), new_record_from_zebra(), or TransformMarcToKoha().

TransformMarcToKoha() does have a loop which gets run for every potential result, but it's not too consuming at a glance, and it should max out at 50 iterations.

new_record_from_zebra() just creates a MARC::Record object. The one thing is that it could return a null result which isn't checked in FindDuplicate... but TransformMarcToKoha should carp() and return an empty hashref if it gets an undefined record. 

simple_search_compat() actually just uses C4::Search::SimpleSearch(). While there is some looping, it's all quite contained.
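
One idea that follows from that reading (a sketch only, not a patch attached to this bug): since callers only ever use the first duplicate, the search could be capped at a single result, which would also bound the per-result TransformMarcToKoha work.

    # sketch only: fetch one result instead of 50, since only the first hit is used
    my ( $error, $searchresults, undef ) =
        $searcher->simple_search_compat( $query, 0, 1 );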
Comment 28 David Cook 2021-07-20 03:24:40 UTC
Ohh... if we do a "git blame" we can see that Nick made a change which dates to March 2021 (bug 27928). Before that change, there was no maximum specified, so in theory FindDuplicate could have fetched *a lot* of records. 

That said, that seems unlikely, since we're doing "ti,ext" and "au,ext" searches. Although... that reminds me of a different issue: Bug 27299. There was/is an issue with ICU and the phrase register, which would've been used by a CCL search with an "ext" qualifier since it does a 'complete-field' search.

I haven't noticed this problem with any of my CHR libraries, but the one library that reported the problem to me was an ICU library.

So the resolution of bug 27928 and bug 27299 may have resolved this issue from Koha 21.05 onwards. 

I think that's a reasonably sound hypothesis.
Comment 29 Katrin Fischer 2023-12-31 14:24:06 UTC
(In reply to David Cook from comment #28)
> So the resolution of bug 27928 and bug 27299 may have resolved this issue
> from Koha 21.05 onwards. 
> 
> I think that's a reasonably sound hypothesis.

I am closing this based on David's hypothesis and the fact that we haven't seen recent reports of this.