| Summary: | index can fail on timeout | | |
|---|---|---|---|
| Product: | Koha | Reporter: | Nick Clemens (kidclamp) <nick> |
| Component: | Searching - Elasticsearch | Assignee: | Bugs List <koha-bugs> |
| Status: | NEW --- | QA Contact: | |
| Severity: | normal | | |
| Priority: | P5 - low | CC: | alex.arnaud, bjorn.nylen, ere.maijala |
| Version: | Main | | |
| Hardware: | All | | |
| OS: | All | | |
| GIT URL: | | Initiative type: | --- |
| Sponsorship status: | --- | Comma delimited list of Sponsors: | |
| Crowdfunding goal: | 0 | Patch complexity: | --- |
| Documentation contact: | | Documentation submission: | |
| Text to go in the release notes: | | Version(s) released in: | |
| Circulation function: | | | |
We've been seeing the same. However, we've recently tried adding a request_timeout parameter to the ES indexer in rebuild_elasticsearch.pl to work around the problem.

3 years later - does this problem still exist in our current implementation?
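For context, the workaround amounts to something like the following minimal sketch, assuming the stock Search::Elasticsearch client that Koha builds from the koha-conf.xml parameters. The request_timeout option itself is a real Search::Elasticsearch connection setting (it defaults to 30 seconds), but the 120-second value and the `$conf->{nodes}` plumbing here are illustrative only, not what the patch actually does:

```perl
use Search::Elasticsearch;

# Sketch only: pass a longer request_timeout wherever Koha constructs the
# client, so long-running bulk requests are given more time before failing.
my $elasticsearch = Search::Elasticsearch->new(
    nodes           => $conf->{nodes},   # e.g. [ 'localhost:9200' ] (assumed config key)
    request_timeout => 120,              # assumed workaround value, not a Koha default
);
```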
We have seen occasional timeouts when indexing to Elasticsearch; these cause the indexing to stop and fail. We should handle the response as the code itself indicates:

```perl
sub update_index {
    my ($self, $biblionums, $records) = @_;

    my $conf = $self->get_elasticsearch_params();
    my $elasticsearch = $self->get_elasticsearch();
    my $documents = $self->marc_records_to_documents($records);
    my @body;

    for (my $i=0; $i < scalar @$biblionums; $i++) {
        my $id = $biblionums->[$i];
        my $document = $documents->[$i];
        push @body, {
            index => {
                _id => $id
            }
        };
        push @body, $document;
    }
    if (@body) {
        my $response = $elasticsearch->bulk(
            index => $conf->{index_name},
            type => 'data', # is just hard coded in Indexer.pm?
            body => \@body
        );
    }
    # TODO: handle response
    return 1;
}
```
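For discussion, here is a minimal sketch of what that response handling could look like, assuming the Search::Elasticsearch client and Try::Tiny. The retry count, the warn-based logging, and the variable names are illustrative only, not an actual Koha convention or a proposed patch:

```perl
use Try::Tiny;

# Sketch: wrap the bulk call, retry on timeout, and inspect the per-item
# results instead of ignoring the response.
my $max_tries = 3;   # illustrative value, not a Koha setting
my $response;

for my $try ( 1 .. $max_tries ) {
    $response = try {
        $elasticsearch->bulk(
            index => $conf->{index_name},
            type  => 'data',
            body  => \@body,
        );
    }
    catch {
        # Search::Elasticsearch throws exception objects; ->is('Timeout')
        # identifies request timeouts so only those are retried.
        die $_ unless ref $_ && $_->is('Timeout');
        warn "Elasticsearch bulk request timed out (attempt $try of $max_tries)";
        undef;
    };
    last if $response;
}

die "Indexing failed after $max_tries attempts" unless $response;

# The bulk API reports per-document errors even when the HTTP request
# succeeds, so check the 'errors' flag and the individual items too.
if ( $response->{errors} ) {
    for my $item ( @{ $response->{items} } ) {
        my $result = $item->{index};
        next unless $result->{error};
        warn "Failed to index biblio " . $result->{_id} . ": "
            . $result->{error}{type} . " - " . $result->{error}{reason};
    }
}
```

Whether failed documents should be retried, skipped, or cause the whole run to abort is a separate design question; the sketch only shows that the information needed to decide is already in the bulk response.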