Bug 12478 - Elasticsearch support for Koha
Summary: Elasticsearch support for Koha
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Searching
Version: Main
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: Robin Sheat
QA Contact: Testopia
URL:
Keywords:
Depends on: 16249
Blocks: 16588 35372 37057 14567 14899 16248 16445 16448 16453 16489 16660 16708 16838 17048 17134 17255 17372 17373 17377 17500 17727 17739 18130 18131 18163 19415 26141 34359
Reported: 2014-06-25 06:11 UTC by Robin Sheat
Modified: 2024-06-10 17:25 UTC (History)
36 users

See Also:
Change sponsored?: Sponsored
Patch complexity: Large patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:
Circulation function:


Attachments
Bug 12478 - an amalgamation of all the Elasticsearch code so far (104.31 KB, patch)
2014-06-25 06:15 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - an amalgamation of all the Elasticsearch code so far (104.37 KB, patch)
2014-08-25 03:59 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - set up database tables for elasticsearch (2.83 KB, patch)
2014-08-25 03:59 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - add some base objects that the ES code will depend on (19.63 KB, patch)
2014-11-03 04:02 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - add some base objects that the ES code will depend on (19.63 KB, patch)
2014-11-03 04:05 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - pile of elasticsearch code (84.71 KB, patch)
2014-11-03 04:05 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - set up database tables for elasticsearch (2.39 KB, patch)
2014-11-03 04:05 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - authorities can now be stored in ES (76.06 KB, patch)
2014-11-03 04:05 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478 - add test cases (5.54 KB, patch)
2014-11-03 04:06 UTC, Robin Sheat
Details | Diff | Splinter Review
Bug 12478: fix some compilation errors (3.71 KB, patch)
2015-02-02 15:27 UTC, Jonathan Druart
Details | Diff | Splinter Review
My koha-conf.xml as an example (7.17 KB, text/plain)
2015-07-09 23:03 UTC, Robin Sheat
Details
opac_search_for_d_sort_by_relevance (300.86 KB, image/png)
2015-08-27 14:00 UTC, Jonathan Druart
Details
opac_search_for_d_sort_by_title (335.84 KB, image/png)
2015-08-27 14:00 UTC, Jonathan Druart
Details
opac_search_for_harry_sort_by_title (171.85 KB, image/png)
2015-08-27 14:01 UTC, Jonathan Druart
Details
limit_by_book_sort_by_pubdate (343.28 KB, image/png)
2015-08-27 14:01 UTC, Jonathan Druart
Details
Bug 12478: Fix the UNIMARC and NORMARC indexing (12.46 KB, patch)
2015-08-28 11:38 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Change the commit count to 5k (1.14 KB, patch)
2015-08-28 11:38 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Fix the verbose flag on reindexing (1.03 KB, patch)
2015-08-28 11:38 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Fix pod in the rebuild_ES.pl script (808 bytes, patch)
2015-08-28 11:38 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Remove Solr occurrences reintroduced by a previous patch (2.15 KB, patch)
2015-08-28 11:38 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Display the correct number of facets (5 instead of 6) (1.06 KB, patch)
2015-08-28 11:38 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Do not display the 'Show more' links if no more facet available (1.29 KB, patch)
2015-08-28 11:39 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Fix encoding issue on facets (1.61 KB, patch)
2015-08-28 11:39 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Fix error on indexing a specific record (1.60 KB, patch)
2015-09-04 12:37 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Fix encoding issues on indexing (2.33 KB, patch)
2015-09-04 12:37 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Reintroduce the SearchEngine system preference (1.05 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: (follow-up) Display the correct number of facets (1.25 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Remove empty limit parameter (1.49 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Remove Koha::ItemType[s] Class::Accessor classes (5.70 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Add Koha::ItemType[s] classes (2.94 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Use the new Koha::ItemTypes to retrieve itypes descriptions (1.41 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Add Koha::AuthorisedValue[s] class (3.02 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: facets - Display description instead of code for locations (1.52 KB, patch)
2015-10-05 13:42 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Display facet terms ordered by number of occurrences (5.29 KB, patch)
2015-10-05 14:39 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Remove empty limit parameter (1.51 KB, patch)
2015-10-05 14:46 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Remove empty limit parameter (1.51 KB, patch)
2015-10-05 14:48 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Take the FacetMaxCount pref into account (4.01 KB, patch)
2015-10-05 16:16 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: [DO NOT PUSH] script to generate the mappings yaml file (4.23 KB, patch)
2015-10-12 16:17 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Add the yaml mappings file (56.80 KB, patch)
2015-10-12 16:17 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Add Koha::SearchField[s] and Koha::SearchMarcMap[s] classes (5.79 KB, patch)
2015-10-12 16:18 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Move mapping attributes to the join table (10.30 KB, patch)
2015-10-12 16:18 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: update the schema (6.07 KB, patch)
2015-10-12 16:18 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Manually add the many_to_many relationships (1.57 KB, patch)
2015-10-12 16:18 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Remove previous mappings file (sql) (51.64 KB, patch)
2015-10-12 16:18 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 14899: Add a link to the new page in the admin (1.35 KB, patch)
2015-10-12 16:20 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: Display facet terms ordered by number of occurrences (5.13 KB, patch)
2015-10-13 09:53 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12748 - Fixes duplicate serials with an "expected" status bug (4.23 KB, text/plain)
2016-02-08 20:07 UTC, Rémi Mayrand-Provencher
Details
Bug 12748 - Add test for step 7 and 8 and rename findSerialByStatus (4.40 KB, text/plain)
2016-02-08 20:07 UTC, Rémi Mayrand-Provencher
Details
Bug 12478: Define simple_search_compat for Zebra (1.22 KB, patch)
2016-04-11 07:21 UTC, Jonathan Druart
Details | Diff | Splinter Review
Bug 12478: (QA followup) Koha::SearchEngine should fallback to Zebra (899 bytes, patch)
2016-04-12 20:07 UTC, Tomás Cohen Arazi (tcohen)
Details | Diff | Splinter Review
Bug 12478 : Fixing the tests in t/Koha_ElasticSearch_Indexer.t (1.25 KB, patch)
2016-04-13 20:54 UTC, Chris Cormack
Details | Diff | Splinter Review
Bug 12478 : Fixing the tests for Koha::SearchEngine::Elasticsearch::Search (2.09 KB, patch)
2016-04-13 21:22 UTC, Chris Cormack
Details | Diff | Splinter Review
Bug 12478 : Fixing the tests in t/Koha_ElasticSearch_Indexer.t (1.30 KB, patch)
2016-04-13 21:37 UTC, Nick Clemens (kidclamp)
Details | Diff | Splinter Review
Bug 12478 : Fixing the tests for Koha::SearchEngine::Elasticsearch::Search (2.15 KB, patch)
2016-04-13 21:37 UTC, Nick Clemens (kidclamp)
Details | Diff | Splinter Review
Bug 12478 Shifting tests and adding copyright headers (5.66 KB, patch)
2016-04-14 20:24 UTC, Chris Cormack
Details | Diff | Splinter Review
Bug 12478 Increasing test Coverage for Koha::SearchEngine::Elasticsearch::Search (2.46 KB, patch)
2016-04-14 21:28 UTC, Chris Cormack
Details | Diff | Splinter Review
Bug 12478 Shifting tests and adding copyright headers (5.71 KB, patch)
2016-04-15 22:42 UTC, Nick Clemens (kidclamp)
Details | Diff | Splinter Review
Bug 12478 Increasing test Coverage for Koha::SearchEngine::Elasticsearch::Search (2.51 KB, patch)
2016-04-15 22:42 UTC, Nick Clemens (kidclamp)
Details | Diff | Splinter Review

Description Robin Sheat 2014-06-25 06:11:47 UTC

    
Comment 1 Robin Sheat 2014-06-25 06:12:33 UTC
Information and large patch dumps will go here periodically, so people can see what's happening.
Comment 2 Robin Sheat 2014-06-25 06:15:22 UTC
Active development is happening here:

http://git.catalyst.net.nz/gw?p=koha.git;a=shortlog;h=refs/heads/elastic_search

You can add this to your own repo if you like (be aware that it'll be periodically rebased to keep up with master).
Comment 3 Robin Sheat 2014-06-25 06:15:43 UTC Comment hidden (obsolete)
Comment 4 Robin Sheat 2014-06-25 06:18:02 UTC
This won't work straight off. To make it work you need to:
* set the system preference 'SearchEngine' to 'Elasticsearch'
* load installer/data/mysql/elasticsearch_mapping.sql into your database
* add something like:

 <elasticsearch>
     <server>es-server:9200</server>
     <index_name>koha_instance</index_name>
 </elasticsearch>

to the config section of koha-conf.xml

And perhaps some other things I forgot.
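For illustration, loading that mapping file might look like this (the database name and user here are assumptions; use your own instance's credentials):

 $ mysql -u kohauser -p koha_instance < installer/data/mysql/elasticsearch_mapping.sql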
Comment 5 Robin Sheat 2014-06-25 06:19:36 UTC
(In reply to Robin Sheat from comment #4)
>  <elasticsearch>
>      <server>es-server:9200</server>

<server> can be repeated to point to each server in your cluster, too.
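For example, a two-node cluster might look like this (the host names are made up):

 <elasticsearch>
     <server>es-node1:9200</server>
     <server>es-node2:9200</server>
     <index_name>koha_instance</index_name>
 </elasticsearch>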
Comment 6 Robin Sheat 2014-06-25 06:24:56 UTC
Perl dependencies are needed to run Catmandu. Packages aimed at Debian Testing are here:

http://www.kallisti.net.nz/~robin/catmandu-deps.tar.gz

They might work on Wheezy if you're lucky, but they haven't been tested there.
Comment 7 Robin Sheat 2014-08-25 03:59:19 UTC Comment hidden (obsolete)
Comment 8 Robin Sheat 2014-08-25 03:59:40 UTC Comment hidden (obsolete)
Comment 9 claire.hernandez@biblibre.com 2014-09-22 11:22:03 UTC
Hi, is there a description somewhere of the features that could be provided? Like an RFC or a summary or something like that?
Comment 10 Robin Sheat 2014-09-22 22:49:49 UTC
(In reply to claire.hernandez@biblibre.com from comment #9)
> Hi, is there a description somewhere of the features that could be
> provided? Like an RFC or a summary or something like that?

There isn't really, at this stage. For now it's still a case of replicating the functionality of zebra. Once that is done, then new features can be added.
Comment 11 Chris Cormack 2014-11-02 20:07:06 UTC
http://git.catalyst.net.nz/gw?p=koha.git;a=shortlog;h=refs/heads/elasticsearch_browser

This has the browser (subject browse etc) code on it.
Comment 12 Robin Sheat 2014-11-03 04:02:53 UTC Comment hidden (obsolete)
Comment 13 Robin Sheat 2014-11-03 04:05:45 UTC Comment hidden (obsolete)
Comment 14 Robin Sheat 2014-11-03 04:05:50 UTC Comment hidden (obsolete)
Comment 15 Robin Sheat 2014-11-03 04:05:54 UTC Comment hidden (obsolete)
Comment 16 Robin Sheat 2014-11-03 04:05:59 UTC Comment hidden (obsolete)
Comment 17 Robin Sheat 2014-11-03 04:06:04 UTC Comment hidden (obsolete)
Comment 18 Robin Sheat 2014-11-03 04:12:51 UTC
I've added a dump of the current state of patches. It's in the process of being split up and having unit tests written, in particular of the underlying modules, i.e. the things that will be needed but aren't central to ES itself.

The last functional change was the ability for authorities to be indexed too. Currently, they can't be searched, but they are there. The next functional change (clearly) is to make them be searchable. I'm expecting that 90% of this will leverage the existing query builder type stuff.

I'm hoping that someone can have a go setting this up on their own installation and trying it out, to see if there are any particular points that need explaining.

dcook, I'll be seeing you at the conference tomorrow, I suggest you have a laptop with a VM ready to go on it ;)
Comment 19 David Cook 2014-11-04 23:51:30 UTC
(In reply to Robin Sheat from comment #18)
> I've added a dump of the current state of patches. It's in the process of
> being split up and having unit tests written, in particular of the
> underlying modules, i.e. the things that will be needed but aren't central
> to ES itself.
> 
> The last functional change was the ability for authorities to be indexed
> too. Currently, they can't be searched, but they are there. The next
> functional change (clearly) is to make them be searchable. I'm expecting
> that 90% of this will leverage the existing query builder type stuff.
> 
> I'm hoping that someone can have a go setting this up on their own
> installation and trying it out, to see if there are any particular points
> that need explaining.
> 
> dcook, I'll be seeing you at the conference tomorrow, I suggest you have a
> laptop with a VM ready to go on it ;)

Alas, I'll have to try it out another time. Working on some OAI stuff at the moment, but will have to look at this sometime soon!
Comment 20 Robin Sheat 2014-11-19 02:31:17 UTC
http://elasticsearch.koha.catalystdemo.net.nz/cgi-bin/koha/opac-main.pl

This is running Ubuntu 14.10 (I couldn't convince things to work on 14.04; I'd expect Debian Jessie to work also), it has zebra turned off, and the OPAC search interface is running through ES.

The code it's running is taken directly from the catalyst repo, though I probably won't update it when I know things are broken. Otherwise I'll try to keep it current.

If you notice any weirdnesses, feel free to let me know.
Comment 21 Jonathan Druart 2015-01-16 16:06:23 UTC
Is there a doc somewhere? I did not find it...

I tried but failed:

$ git remote add catalyst git://git.catalyst.net.nz/koha.git
$ git checkout -b elastic_search catalyst/elastic_search
$ perl installer/data/mysql/updatedatabase.pl
$ sudo apt-get install elasticsearch

$ perl misc/search_tools/rebuild_elastic_search.pl -h
Can't locate Elasticsearch.pm in @INC (you may need to install the Elasticsearch module) (@INC contains: blablabla)

$ apt-cache search elasticsearch | grep perl
libcatmandu-perl - metadata toolkit

$ sudo apt-get install libcatmandu-perl

But got the same error.

The ElasticSearch module on CPAN is marked as deprecated:
http://search.cpan.org/~drtech/ElasticSearch-0.68/lib/ElasticSearch.pm
Comment 22 Jonathan Druart 2015-01-16 16:08:38 UTC
I also added the lines in the $KOHA_CONF file.
Comment 23 Robin Sheat 2015-01-21 23:17:00 UTC
(In reply to Jonathan Druart from comment #21)
> $ perl misc/search_tools/rebuild_elastic_search.pl -h
> Can't locate Elasticsearch.pm in @INC (you may need to install the
> Elasticsearch module) (@INC contains: blablabla)
> 
> $ apt-cache search elasticsearch | grep perl
> libcatmandu-perl - metadata toolkit

In comment #6 I added a link to the dependencies needed. It's a bit old now; I've been working on making some new ones, but that archive may still work.
Comment 24 Jonathan Druart 2015-01-22 15:01:28 UTC
Back here,
$ mkdir catmandu-deps
$ cd catmandu-deps
$ wget www.kallisti.net.nz/~robin/catmandu-deps.tar.gz
$ tar zxvf catmandu-deps.tar.gz
$ sudo dpkg -i *.deb

I got some dpkg: warning: downgrading $package from $version_a+ to $version_a

And it finished with:
dpkg: dependency problems prevent configuration of libdata-messagepack-perl:
 libdata-messagepack-perl depends on perlapi-5.18.2; however:
  Package perlapi-5.18.2 is not installed.

dpkg: error processing package libdata-messagepack-perl (--install):
 dependency problems - leaving unconfigured
Setting up libdata-spreadpagination-perl (0.1.2-1) ...
Setting up libdispatch-class-perl (0.01-1) ...
Setting up libelasticsearch-perl (1.04-1) ...
Setting up libhijk-perl (0.13-1) ...
Setting up libhttp-tiny-perl (0.043-1) ...
Setting up libjson-maybexs-perl (1.002002-1) ...
Setting up liblog-any-adapter-callback-perl (0.07-1) ...
dpkg: dependency problems prevent configuration of libmarpa-r2-perl:
 libmarpa-r2-perl depends on perlapi-5.18.2; however:
  Package perlapi-5.18.2 is not installed.

dpkg: error processing package libmarpa-r2-perl (--install):
 dependency problems - leaving unconfigured
Setting up libmoox-log-any-perl (0.001-1) ...
Setting up libsearch-elasticsearch-perl (1.11-1) ...
Setting up libtry-tiny-byclass-perl (0.01-1) ...
Setting up libyaml-perl (0.90-1) ...
dpkg: dependency problems prevent configuration of libcatmandu-perl:
 libcatmandu-perl depends on libmarpa-r2-perl (>= 2.084000); however:
  Package libmarpa-r2-perl is not configured yet.

dpkg: error processing package libcatmandu-perl (--install):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libcatmandu-store-elasticsearch-perl:
 libcatmandu-store-elasticsearch-perl depends on libcatmandu-perl (>= 0.8); however:
  Package libcatmandu-perl is not configured yet.

dpkg: error processing package libcatmandu-store-elasticsearch-perl (--install):
 dependency problems - leaving unconfigured
Setting up libelasticsearch-compat-perl (0.03-1) ...
Setting up libsearch-elasticsearch-compat-perl (0.10-1) ...
dpkg: dependency problems prevent configuration of libcatmandu-marc-perl:
 libcatmandu-marc-perl depends on libcatmandu-perl (>= 0.08); however:
  Package libcatmandu-perl is not configured yet.

dpkg: error processing package libcatmandu-marc-perl (--install):
 dependency problems - leaving unconfigured
Processing triggers for man-db (2.6.7.1-1) ...
Errors were encountered while processing:
 libdata-messagepack-perl
 libmarpa-r2-perl
 libcatmandu-perl
 libcatmandu-store-elasticsearch-perl
 libcatmandu-marc-perl

Note: libossp-uuid-perl was missing, so I installed it.

I have Perl v5.20 installed.
Comment 25 Robin Sheat 2015-01-26 03:17:25 UTC
Yeah, they worked on Ubuntu 14.10 last I looked. I'm in the process of making a new set for Debian Jessie.

As for the deprecated bits, there are some issues there that make it not quite worth messing with the new versions just yet. Check out the discussion thread here: http://mail.librecat.org/pipermail/librecat-dev/2015-January/000322.html

There should be very few, if any, API changes as the result of switching to the new version when it's suitable, too.
Comment 26 Robin Sheat 2015-01-26 05:56:29 UTC
You can grab the new dependencies from here:

http://debian.koha-community.org/koha/otherthings/

Make sure you uninstall liblog-any-adapter-perl before doing anything; it conflicts with the new liblog-any-perl and is no longer needed. I've added a dummy version of it to keep the packages that think they need it happy.

It should just be a matter of doing:

sudo dpkg -i *.deb
sudo apt-get -f install

Oh, and this requires Debian Jessie, as most of the old dependencies made it into there.
Comment 27 Jonathan Druart 2015-01-26 12:04:35 UTC
Looks better with these new packages.
Continuing debugging...

Catmandu::Importer::MARC was missing, so I installed it using cpanm:
  $ sudo cpanm Catmandu::Importer::MARC

Now I got:
$ perl rebuild_elastic_search.pl
  1 to 115 is displayed
  Committing...

And nothing else...

  $ ps aux | grep elasticsearch
returns nothing
  $ sudo service elasticsearch start
  $ ps aux | grep elasticsearch
returns nothing

  $ curl -X GET http://localhost:9200/
curl: (7) Failed to connect to localhost port 9200: Connection refused

Editing /etc/init.d/elasticsearch, I found the binary: /usr/share/elasticsearch/bin/elasticsearch
  $ cd /usr/share/elasticsearch
  $ sudo bin/elasticsearch
log4j:WARN No appenders could be found for logger (node).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

  $ apt-cache show elasticsearch
Package: elasticsearch
Version: 1.0.3+dfsg-5

Should I install another version? Is something missing in my JVM configuration? Is there something to do in the elasticsearch config?
Comment 28 Robin Sheat 2015-01-27 01:21:41 UTC
(In reply to Jonathan Druart from comment #27)
> Looks better with these new packages.
> Continuing debugging...
> 
> Catmandu::Importer::MARC was missing, so I installed it using cpanm:
>   $ sudo cpanm Catmandu::Importer::MARC

Catmandu::Importer::MARC is provided by libcatmandu-marc-perl which is in Jessie already.

> Now I got:
> $ perl rebuild_elastic_search.pl
>   1 to 115 is displayed
>   Committing...
> 
> And nothing else...
> 
>   $ ps aux | grep elasticsearch
> returns nothing
>   $ sudo service elasticsearch start
>   $ ps aux | grep elasticsearch
> returns nothing

That's not really my code's fault :) You may need to configure elasticsearch before it starts. Don't forget to change the cluster name in /etc/elasticsearch/elasticsearch.yml, otherwise you may have it clustering with other ES instances on the same network.
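For example, in /etc/elasticsearch/elasticsearch.yml (the name itself is just an illustration; pick something unique to your site):

cluster.name: koha-mylibrary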

That said, it shouldn't just sit there if the daemon isn't available; I've added a note to look into that.

>   $ curl -X GET http://localhost:9200/
> curl: (7) Failed to connect to localhost port 9200: Connection refused

Yep, that there is your issue.

$ curl -X GET http://koha-es:9200/
{
  "status" : 200,
  "name" : "koha-es",
  "version" : {
    "number" : "1.3.7",
    "build_hash" : "3042293e4b219dfb855a4e6c64241c530d1abeb0",
    "build_timestamp" : "2014-12-16T13:59:32Z",
    "build_snapshot" : false,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}


> Editing /etc/init.d/elasticsearch, I found the binary:
> /usr/share/elasticsearch/bin/elasticsearch
>   $ cd /usr/share/elasticsearch
>   $ sudo bin/elasticsearch
> log4j:WARN No appenders could be found for logger (node).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
> 
>   $ apt-cache show elasticsearch
> Package: elasticsearch
> Version: 1.0.3+dfsg-5

$ apt-cache policy elasticsearch
elasticsearch:
  Installed: 1.3.7
  Candidate: 1.3.7
  Version table:
 *** 1.3.7 0
        500 http://packages.elasticsearch.org/elasticsearch/1.3/debian/ stable/main amd64 Packages
        100 /var/lib/dpkg/status


> Should I install another version? Is something missing in my JVM
> configuration? Is there something to do in the elasticsearch config?

Try the one from the official repo; I'm not sure what differences are in the Debian-provided version. It might just be that you need some log4j configuration, but I can only guess.

http://www.elasticsearch.org/blog/apt-and-yum-repositories/
Comment 29 Jonathan Druart 2015-01-27 09:47:15 UTC
  $ sudo apt-get install libcatmandu-marc-perl

Trying with the latest (1.4.2):

  $ wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.deb

  $ sudo dpkg -i elasticsearch-1.4.2.deb
Selecting previously unselected package elasticsearch.
(Reading database ... 59178 files and directories currently installed.)
Preparing to unpack .../koha/elasticsearch-1.4.2.deb ...
Unpacking elasticsearch (1.4.2) ...
Setting up elasticsearch (1.4.2) ...
Adding system user `elasticsearch' (UID 106) ...
Adding new user `elasticsearch' (UID 106) with group `elasticsearch' ...
Not creating home directory `/usr/share/elasticsearch'.
### NOT starting elasticsearch by default on bootup, please execute
 sudo update-rc.d elasticsearch defaults 95 10
### In order to start elasticsearch, execute
 sudo /etc/init.d/elasticsearch start

  $ sudo update-rc.d elasticsearch defaults 95 10
  $ sudo /etc/init.d/elasticsearch start

  $ ps aux | grep elasticsearch
elastic+  5715 75.4  1.6 1326700 131172 ?      Sl   09:57   0:03 /usr/lib/jvm/java-7-openjdk-i386//bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Delasticsearch -Des.pidfile=/var/run/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch -cp :/usr/share/elasticsearch/lib/elasticsearch-1.4.2.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/* -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch org.elasticsearch.bootstrap.Elasticsearch

better :)

But the indexing is still stuck on "Committing..."

I am using a UNIMARC DB; could the problem be related to a bad mapping?
Comment 30 Robin Sheat 2015-01-27 22:31:20 UTC
(In reply to Jonathan Druart from comment #29)
> Trying with the latest (1.4.2):
> 
>   $ wget
> https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-
> 1.4.2.deb

For what it's worth, I'm using 1.3.7 which is the latest stable. I wouldn't think that would make a difference really though.

> But the indexing is still stuck on "Committing..."
> 
> I am using a UNIMARC DB; could the problem be related to a bad mapping?

It shouldn't be; the mapping process is pretty simple, and I'd expect it to simply not succeed rather than cause a lock-up.

Sitting on "committing" is consistent with not being able to talk to the ES server, and there's currently no error handling for that (I'd expect it to time out eventually, but who knows how long that'll take.) If I were you, that's where I'd start looking. Make sure you can hit it with curl, and that the config in koha-conf.xml is correct. fwiw, mine is:

 <elasticsearch>
     <server>koha-es:9200</server>
     <index_name>koha_robin</index_name>
 </elasticsearch>

If you're keen, fire it up in the debugger (perl -d) and do:

c Catmandu::Store::ElasticSearch::Bag::commit

and trace through from there, but after that point it gets pretty hard to drill down further I found.

In theory, adding a <timeout> value to the elasticsearch block in the config should cause that to be passed along to the other code, but that didn't seem to happen with a quick test, so I might be wrong there.
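For anyone testing that, it would look something like this (the 10-second value is just an example; comment 65 later shows the same key being picked up as a parameter):

 <elasticsearch>
     <server>koha-es:9200</server>
     <index_name>koha_robin</index_name>
     <timeout>10</timeout>
 </elasticsearch>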
Comment 31 Katrin Fischer 2015-02-02 09:55:37 UTC
Hi Robin,

having a discussion about this on IRC right now - we are all quite curious about your work - could you write up some installation docs and/or a summary of what the ES patches will include/not include?
Comment 32 Jonathan Druart 2015-02-02 15:25:08 UTC
(In reply to Robin Sheat from comment #30)
> (In reply to Jonathan Druart from comment #29)
> > Trying with the latest (1.4.2):
> > 
> >   $ wget
> > https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-
> > 1.4.2.deb
> 
> For what it's worth, I'm using 1.3.7 which is the latest stable. I wouldn't
> think that would make a difference really though.

Will try with 1.3.7, but I believe I got the same error as before with this version.

> Sitting on "committing" is consistent with not being able to talk to the ES
> server, and there's currently no error handling for that (I'd expect it to
> time out eventually, but who knows how long that'll take.) If I were you,
> that's where I'd start looking. Make sure you can hit it with curl, and that
> the config in koha-conf.xml is correct. fwiw, mine is:
> 
>  <elasticsearch>
>      <server>koha-es:9200</server>
>      <index_name>koha_robin</index_name>
>  </elasticsearch>

Yes, I have something similar:
303  <elasticsearch>
304      <server>http://localhost:9200/</server>
305    <index_name>koha_instance</index_name>
306  </elasticsearch>

> If you're keen, fire it up in the debugger (perl -d) and do:
> 
> c Catmandu::Store::ElasticSearch::Bag::commit

Actually it never reaches commit:

% perl -d misc/search_tools/rebuild_elastic_search.pl

Loading DB routines from perl5db.pl version 1.44
Editor support available.

Enter h or 'h h' for help, or 'man perldebug' for more help.

main::(misc/search_tools/rebuild_elastic_search.pl:92):
92:     my $verbose = 0;
  DB<1> c Catmandu::Addable::add_many                             
Indexing biblios
3
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:84):
84:         my ($self, $many) = @_;
  DB<2> n
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:86):
86:         if (is_hash_ref($many)) {
  DB<2> 
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:91):
91:         if (is_array_ref($many)) {
  DB<2> 
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:96):
96:         if (is_invocant($many)) {
  DB<2> 
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:97):
97:             $many = check_able($many, 'generator')->generator;
  DB<2> 
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:100):
100:        check_code_ref($many);
  DB<2> 
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:102):
102:        my $data;
  DB<2> 
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:103):
103:        my $n = 0;
  DB<2> 
Catmandu::Addable::add_many(/usr/share/perl5/Catmandu/Addable.pm:104):
104:        while (defined($data = $many->())) {
  DB<2> use Data::Dumper; warn Dumper $many;
$VAR1 = sub { "DUMMY" };
 at (eval 1256)[/usr/share/perl/5.20/perl5db.pl:732] line 2.
        eval 'no strict; ($@, $!, $^E, $,, $/, $\\, $^W) = @DB::saved;package Catmandu::Addable; $^D = $^D | $DB::db_stop;
use Data::Dumper; warn Dumper $many;;
' called at /usr/share/perl/5.20/perl5db.pl line 732
        DB::eval called at /usr/share/perl/5.20/perl5db.pl line 3094
        DB::DB called at /usr/share/perl5/Catmandu/Addable.pm line 104
        Catmandu::Addable::add_many(Catmandu::Store::ElasticSearch::Bag=HASH(0xd5c503c), Catmandu::Iterator=CODE(0xc565080)) called at Koha/ElasticSearch/Indexer.pm line 80
        Koha::ElasticSearch::Indexer::update_index(Koha::ElasticSearch::Indexer=HASH(0xcd60c00), ARRAY(0xca8ca80), ARRAY(0xca8cae4)) called at misc/search_tools/rebuild_elastic_search.pl line 184
        main::do_reindex(CODE(0xcae4dc0), "biblios") called at misc/search_tools/rebuild_elastic_search.pl line 132

  DB<3> warn ref($many);
CODE at (eval 1257)[/usr/share/perl/5.20/perl5db.pl:732] line 2.
 at (eval 1257)[/usr/share/perl/5.20/perl5db.pl:732] line 2.
        eval 'no strict; ($@, $!, $^E, $,, $/, $\\, $^W) = @DB::saved;package Catmandu::Addable; $^D = $^D | $DB::db_stop;
warn ref($many);;
' called at /usr/share/perl/5.20/perl5db.pl line 732
        DB::eval called at /usr/share/perl/5.20/perl5db.pl line 3094
        DB::DB called at /usr/share/perl5/Catmandu/Addable.pm line 104
        Catmandu::Addable::add_many(Catmandu::Store::ElasticSearch::Bag=HASH(0xd5c503c), Catmandu::Iterator=CODE(0xc565080)) called at Koha/ElasticSearch/Indexer.pm line 80
        Koha::ElasticSearch::Indexer::update_index(Koha::ElasticSearch::Indexer=HASH(0xcd60c00), ARRAY(0xca8ca80), ARRAY(0xca8cae4)) called at misc/search_tools/rebuild_elastic_search.pl line 184
        main::do_reindex(CODE(0xcae4dc0), "biblios") called at misc/search_tools/rebuild_elastic_search.pl line 132

  DB<4> n

And nothing else (no CPU activity either).
The ref($many) returning "CODE" does not smell good :)

Continuing:

  DB<4> n
^C
MARC::File::USMARC::_next(/usr/share/perl5/MARC/File/USMARC.pm:53):
53:         local $/ = END_OF_RECORD;
  DB<4> q
% pmvers MARC::File::USMARC
/usr/bin/pmvers: unknown version for module `MARC::File::USMARC'

Hmm...

Since I had installed it from CPAN first, I removed it:
% sudo cpanm -U Catmandu::Importer::MARC
[...]
Successfully uninstalled Catmandu::Importer::MARC

% dpkg -l libcatmandu-marc-perl
ii  libcatmandu-marc-perl | 0.206-2 | all | modules for working with MARC data within the Catmandu framework

% perl misc/search_tools/rebuild_elastic_search.pl
[lot of logs]
25888 records indexed.

\o/
Sorry about that!

I should have tried pmpath MARC::File::USMARC to catch the problem quickly.

Now, let's search records!

OPAC: cgi-bin/koha/opac-search.pl returns
Can't locate Koha/ElasticSearch/Search.pm in @INC
It seems to be caused by the last commit:
Bug 12478 - authority work in progress
diff --git a/Koha/ElasticSearch/Search.pm b/Koha/ElasticSearch/Search.pm
deleted file mode 100644

I tried to fix some compilation errors (patch coming), but the branch looks to have been left in an unusable state.

The next error is: Can't locate object method "mk_accessors" via package "Koha::SearchEngine::ElasticSearch::Search"

I don't want to continue and add conflicts with something you have already fixed.
Comment 33 Jonathan Druart 2015-02-02 15:26:26 UTC
previous comment tldr:
indexing: OK
searching: KO, branch in an unusable state.
Comment 34 Jonathan Druart 2015-02-02 15:27:14 UTC Comment hidden (obsolete)
Comment 35 Robin Sheat 2015-02-02 21:15:09 UTC
(In reply to Jonathan Druart from comment #33)
> previous comment tldr:
> indexing: OK

Yay!

> searching: KO, branch in an unusable state.

Yeah, I forgot to mention that, sorry. Roll it back a bit; I'm in the middle of a bit of refactoring (just moving a module to a more consistent place, but I haven't updated the references to it yet) and also developing the authorities searching (though only some of that has been pushed so far).

I'll try to get it to a properly working state again today.
Comment 36 Robin Sheat 2015-02-03 01:44:53 UTC
(In reply to Robin Sheat from comment #35)
> I'll try to get it to a properly working state again today.

And done, it should work again.

I'll try to put together a wiki page with some info.
Comment 37 Robin Sheat 2015-02-03 02:49:20 UTC
(In reply to Robin Sheat from comment #36)
> I'll try to put together a wiki page with some info.

http://wiki.koha-community.org/wiki/Elasticsearch

It's a quick brain dump, so feel free to update and polish if I've missed things.
Comment 38 Jonathan Druart 2015-02-03 08:47:38 UTC
Thanks Robin for the wiki page.
I have quickly tested this morning, to confirm the search works.
Just some quick notes:
- The indexing should commit every 1k biblios (minimum); commit is a slow operation indeed.
- You have removed the use of search/results.tt in your last commit; is that intended? I am pretty sure it's not a good idea to use the same template for ES and Zebra, but maybe it's temporary.
Note that there are encoding errors in the facets (and in the results table too), and the "Show more" link appears even if only 1 entry is displayed. I only get 2 facets: authors and itemtype.

Do you plan to rebase your work against master? It would be great to see these patches on top of bug 11944.
Comment 39 Robin Sheat 2015-02-03 22:21:26 UTC
(In reply to Jonathan Druart from comment #38)
> Thanks Robin for the wiki page.
> I have quickly tested this morning, to confirm the search works.
> Just some quick notes:
> - The indexing should commit every 1k biblios (minimum); commit is a
> slow operation indeed.

Yeah. The commit rate I'm defining is actually not useful at the moment, as Catmandu has its own buffering and committing system. I think that's changeable though.

I also think the whole indexing process can be optimised a fair bit. I just haven't looked into it yet.

> - You have removed the use of search/results.tt in your last commit; is that
> intended? I am pretty sure it's not a good idea to use the same template for
> ES and Zebra, but maybe it's temporary.

Hmm? I'm not totally sure what you mean here. I am totally deliberately using the same template to show ES results as also shows zebra results, that's by design. Any other way would require copy-pasting 99% of the template, when there's already a perfectly good one that shows search results. I'm also trying to make a bit of an abstract search layer thing, not totally perfectly, to make it easier to work with this sort of thing in the future (very much based off your solr stuff in that respect.)

> Note that there are encoding errors in the facets (and in the results table
> too), and the "Show more" link appears even if only 1 entry is displayed. 

Yep, definitely. There'll be a lot of things like that that just need to be polished.

> I only get 2 facets: authors and itemtype.

MARC21 or UNIMARC? You should get more, though I haven't looked at that for a while.

For example,
http://elasticsearch.koha.catalystdemo.net.nz/cgi-bin/koha/opac-search.pl?q=chicken 

gives availability, item types, authors and topics. But the facet stuff as a whole will need more work; I mostly got it to the point where it works at all and then moved on.

> Do you plan to rebase your work against master? It would be great to see
> these patches on top of bug 11944.

Yes, I do. I'm just a little afraid to as I know it'll conflict with many, many things.

One day soon I'll suck it up and do it though.
Comment 40 Robin Sheat 2015-02-03 22:25:52 UTC
(In reply to Robin Sheat from comment #39)
> MARC21 or UNIMARC? You should get more, though I haven't looked at that for
> a while.
> 
> For example,
> http://elasticsearch.koha.catalystdemo.net.nz/cgi-bin/koha/opac-search.
> pl?q=chicken 
> 
> gives availability, item types, authors and topics. But the facet stuff as a
> whole will need more work; I mostly got it to the point where it works at
> all and then moved on.

Oh, whether a field is suitable for faceting is now defined in the mapping table in the database. This might be why it's not working if you're using UNIMARC; the field definitions may not be correct for it.

I want (eventually) to extend this to make what is facetable configurable; at the moment it's hardcoded into the template.
Comment 41 Robin Sheat 2015-03-05 05:45:39 UTC
FYI, the catalyst repo branch now has basic authority search working. There's still a good bit to do (paging, biblio counts, many more things aren't there yet), but results are coming out so I'm counting that as a win :)

You can see it in action here:

http://elasticsearch.koha.catalystdemo.net.nz/cgi-bin/koha/opac-authorities-home.pl?op=do_search&type=opac&operator=contains&value=robert&marclist=any&and_or=and&orderby=HeadingAsc

I think the fact that there are so many Jordan, Roberts is due to the data, but I haven't actually checked yet. At the moment it just replicates how the zebra version works, but I do want to push some of the things into the indexing side so that there's less computation needed to display results and things can perhaps be made a bit simpler. We can do this because we can store arbitrary fields in elasticsearch alongside the actual records.
Comment 42 Peter Zhao 2015-03-05 09:18:51 UTC
(In reply to Robin Sheat from comment #41)
> FYI, the catalyst repo branch now has basic authority search working.
> There's still a good bit to do (paging, biblio counts, many more things
> aren't there yet), but results are coming out so I'm counting that as a win
> :)
> 
> You can see it in action here:
> 
> http://elasticsearch.koha.catalystdemo.net.nz/cgi-bin/koha/opac-authorities-
> home.
> pl?op=do_search&type=opac&operator=contains&value=robert&marclist=any&and_or=
> and&orderby=HeadingAsc
> 
> I think the fact that there are so many Jordan, Roberts is due to the data,
> but I haven't actually checked yet. At the moment it just replicates how the
> zebra version works, but I do want to push some of the things into the
> indexing side so that there's less computation needed to display results and
> things can perhaps be made a bit simpler. We can do this because we can
> store arbitrary fields in elasticsearch alongside the actual records.

Dear Robin,
          It is a great job! Thanks a lot. I tried to install ES.
          It seems to index the biblios, but I can't search the records: "No results found!" I can find the records with Zebra.
          koha@koha:~$ /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl -v -d
Indexing biblios
1

           Could you give some advice?
Comment 43 Robin Sheat 2015-03-06 02:26:02 UTC
(In reply to Peter Zhao from comment #42)
>           It is a great job! Thanks a lot. I tried to install ES.
>           It seems to index the biblios, but I can't search the records: "No
> results found!" I can find the records with Zebra.
>           koha@koha:~$
> /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl -v -d
> Indexing biblios
> 1
> 
>            Could you give some advice?

There's not really enough info to go on there. I'd start by looking in the logs. Many operations should be filling them full of search traces and so on, so that might have useful things in it.

Have you followed the steps in here:
http://wiki.koha-community.org/wiki/Elasticsearch
in particular, adding the mapping SQL file into the database.
Comment 44 Peter Zhao 2015-03-06 04:43:37 UTC
(In reply to Robin Sheat from comment #43)
> (In reply to Peter Zhao from comment #42)
> >           It is a great job! Thanks a lot. I tried to install ES.
> >           It seems to index the biblios, but I can't search the records: "No
> > results found!" I can find the records with Zebra.
> >           koha@koha:~$
> > /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl -v -d
> > Indexing biblios
> > 1
> > 
> >            Could you give some advice?
> 
> There's not really enough info to go on there. I'd start by looking in the
> logs. Many operations should be filling them full of search traces and so
> on, so that might have useful things in it.
> 
> Have you followed the steps in here:
> http://wiki.koha-community.org/wiki/Elasticsearch
> in particular, adding the mapping SQL file into the database.

I followed the steps in "http://wiki.koha-community.org/wiki/Elasticsearch", and also added the mapping SQL file into the database.
I used Ubuntu 14.10 to install the ES Koha.
The following is the information from the elasticsearch log file:

[2015-03-06 12:27:18,359][INFO ][node                     ] [koha] version[1.3.7], pid[3448], build[3042293/2014-12-16T13:59:32Z]
[2015-03-06 12:27:18,360][INFO ][node                     ] [koha] initializing ...
[2015-03-06 12:27:18,367][INFO ][plugins                  ] [koha] loaded [], sites []
[2015-03-06 12:27:23,291][INFO ][node                     ] [koha] initialized
[2015-03-06 12:27:23,292][INFO ][node                     ] [koha] starting ...
[2015-03-06 12:27:23,476][INFO ][transport                ] [koha] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.3:9300]}
[2015-03-06 12:27:23,532][INFO ][discovery                ] [koha] koha/yPYdtMzrTtucBGGFgGYAnw
[2015-03-06 12:27:26,588][INFO ][cluster.service          ] [koha] new_master [koha][yPYdtMzrTtucBGGFgGYAnw][koha][inet[/192.168.1.3:9300]], reason: zen-disco-join (elected_as_master)
[2015-03-06 12:27:26,644][INFO ][http                     ] [koha] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.3:9200]}
[2015-03-06 12:27:26,649][INFO ][node                     ] [koha] started
[2015-03-06 12:27:26,728][INFO ][gateway                  ] [koha] recovered [0] indices into cluster_state
[2015-03-06 12:28:45,881][INFO ][cluster.metadata         ] [koha] [koha_biblios] creating index, cause [api], shards [5]/[1], mappings [data]
[2015-03-06 12:28:46,740][INFO ][cluster.metadata         ] [koha] [koha_biblios] deleting index
[2015-03-06 12:29:35,050][INFO ][cluster.metadata         ] [koha] [koha_biblios] creating index, cause [api], shards [5]/[1], mappings []
[2015-03-06 12:31:41,222][INFO ][node                     ] [koha] stopping ...
[2015-03-06 12:31:41,338][INFO ][node                     ] [koha] stopped
[2015-03-06 12:31:41,339][INFO ][node                     ] [koha] closing ...
[2015-03-06 12:31:41,352][INFO ][node                     ] [koha] closed
Comment 45 Peter Zhao 2015-03-06 05:04:13 UTC
(In reply to Robin Sheat from comment #43)
> (In reply to Peter Zhao from comment #42)
> >           It is a great job! Thanks a lot. I tried to install ES.
> >           It seems to index the biblios, but I can't search the records: "No
> > results found!" I can find the records with Zebra.
> >           koha@koha:~$
> > /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl -v -d
> > Indexing biblios
> > 1
> > 
> >            Could you give some advice?
> 
> There's not really enough info to go on there. I'd start by looking in the
> logs. Many operations should be filling them full of search traces and so
> on, so that might have useful things in it.
> 
> Have you followed the steps in here:
> http://wiki.koha-community.org/wiki/Elasticsearch
> in particular, adding the mapping SQL file into the database.
 
The following is the "koha-opac-error log "
[Fri Mar 06 12:30:19.551872 2015] [cgi:error] [pid 3674] [client 127.0.0.1:45890] AH01215: [Fri Mar  6 12:30:19 2015] opac-search.pl: Use of uninitialized value $f in hash element at /usr/share/koha/lib/Koha/SearchEngine/Elasticsearch/QueryBuilder.pm line 479., referer: http://127.0.1.1/cgi-bin/koha/opac-search.pl?idx=ti&q=theology
[Fri Mar 06 12:33:43.232092 2015] [cgi:error] [pid 3759] [client 127.0.0.1:45904] AH01215: [Fri Mar  6 12:33:43 2015] opac-search.pl: Use of uninitialized value $f in hash element at /usr/share/koha/lib/Koha/SearchEngine/Elasticsearch/QueryBuilder.pm line 479., referer: http://127.0.1.1/cgi-bin/koha/opac-search.pl?idx=&q=theology
[Fri Mar 06 12:33:43.398916 2015] [cgi:error] [pid 3759] [client 127.0.0.1:45904] AH01215: [Fri Mar  6 12:33:43 2015] opac-search.pl: Use of uninitialized value $error in concatenation (.) or string at /usr/share/koha/opac/cgi-bin/opac/opac-search.pl line 568., referer: http://127.0.1.1/cgi-bin/koha/opac-search.pl?idx=&q=theology
Comment 46 Robin Sheat 2015-03-09 01:17:48 UTC
It might be useful to ensure that they indexed correctly; you can do this:

curl -XGET 'http://localhost:9200/koha_biblios/_search?pretty=1' 

where 'koha_biblios' is whatever your index is called. This'll give you a dump of everything in the index.
Comment 47 Peter Zhao 2015-03-09 06:05:30 UTC
(In reply to Robin Sheat from comment #46)
> It might be useful to ensure that they indexed correctly; you can do this:
> 
> curl -XGET 'http://localhost:9200/koha_biblios/_search?pretty=1' 
> 
> where 'koha_biblios' is whatever your index is called. This'll give you a
> dump of everything in the index.
 

-----------
$ curl -XGET 'http://localhost:9200/koha_biblios/_search?pretty=1'
{
  "error" : "IndexMissingException[[koha_biblios] missing]",
  "status" : 404
}
----------
$ /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl -v -d
Indexing biblios
1
-----------------
I just catalogued one record to try, but it always stays there.
I think it cannot finish building the index. How do I handle this problem?
Comment 48 Robin Sheat 2015-03-09 22:15:54 UTC
404 means there is no index with that name; you need to verify that the index name you're providing is correct. You can do that with:

curl 'koha-es:9200/_cat/indices?v'

It is possible that there's a bug where, if there aren't enough records to commit, nothing happens. I haven't tested that case yet. Try setting the rebuild script's commit option to '1'.
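Assuming the rebuild script exposes that as a --commit option, the invocation would be something like:

$ perl misc/search_tools/rebuild_elastic_search.pl -v --commit 1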
Comment 49 Robin Sheat 2015-03-10 05:14:45 UTC
Yay, the fundamentals of authority searching are now working (biblio count and paging). Next up is making sure the various types of searches return the right results.
Comment 50 Peter Zhao 2015-03-10 05:21:47 UTC
(In reply to Robin Sheat from comment #48)
> 404 means there is no index with that name; you need to verify that the
> index name you're providing is correct. You can do that with:
> 
> curl 'koha-es:9200/_cat/indices?v'
> 
> It is possible that there's a bug where, if there aren't enough records to
> commit, nothing happens. I haven't tested that case yet. Try setting the
> rebuild script's commit option to '1'.

I tried setting the rebuild script's commit option to '1', and then when rebuilding the record, it stays on "Committing...".
~$  /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl -v -d
Indexing biblios
1
Committing...
-----------
~$ curl 'localhost:9200/_cat/indices?v'
health index        pri rep docs.count docs.deleted store.size pri.store.size 
yellow koha_biblios   5   1          0            0       575b           575b

-------------
~$ curl -XGET 'http://localhost:9200/koha_biblios/_search?pretty=1'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}

-----------
koha-conf.xml
<elasticsearch>
     <server>localhost:9200</server>
     <index_name>koha</index_name>
 </elasticsearch>
----------
elasticsearch.yml
cluster.name: koha
node.name: "koha"
--------------
Should I change any more settings?
Comment 51 Robin Sheat 2015-03-10 05:38:11 UTC
How long did you let it sit on 'Committing'? The first one can take some time (on the order of 10 seconds or so, perhaps).

Otherwise, it's hard to see what's going on without real information but it does look like things aren't indexing well.

In Koha/ElasticSearch/Indexer.pm there are a couple of commented out 'trace_calls' entries. If you make that active, you may get more information about what's going on.

Otherwise perhaps trace through it using the Perl debugger and see where it stops.
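For reference, turning the tracing on should look roughly like this; a minimal sketch (the exact surrounding code is an assumption, based on the similar constructor call shown later in comment 65):

    # In Koha/ElasticSearch/Indexer.pm: pass trace_calls when building the
    # Catmandu store so every request sent to Elasticsearch is logged.
    Catmandu::Store::ElasticSearch->new(
        %$params,
        trace_calls => 1,
    );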
Comment 52 Peter Zhao 2015-03-10 06:24:36 UTC
(In reply to Robin Sheat from comment #51)
> How long did you let it sit on 'Committing'? The first one can take some
> time (on the order of 10 seconds or so, perhaps).
> 
> Otherwise, it's hard to see what's going on without real information but it
> does look like things aren't indexing well.
> 
> In Koha/ElasticSearch/Indexer.pm there are a couple of commented out
> 'trace_calls' entries. If you make that active, you may get more information
> about what's going on.
> 
> Otherwise perhaps trace through it using the Perl debugger and see where it
> stops.

~$ perl -d /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl

Loading DB routines from perl5db.pl version 1.33
Editor support available.

Enter h or `h h' for help, or `man perldebug' for more help.

main::(/home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl:92):
92:	my $verbose = 0;
  DB<1>
Comment 53 Peter Zhao 2015-03-10 13:06:09 UTC
(In reply to Peter Zhao from comment #52)
> (In reply to Robin Sheat from comment #51)
> > How long did you let it sit on 'Committing'? The first one can take some
> > time (on the order of 10 seconds or so, perhaps).
> > 
> > Otherwise, it's hard to see what's going on without real information but it
> > does look like things aren't indexing well.
> > 
> > In Koha/ElasticSearch/Indexer.pm there are a couple of commented out
> > 'trace_calls' entries. If you make that active, you may get more information
> > about what's going on.
> > 
> > Otherwise perhaps trace through it using the Perl debugger and see where it
> > stops.
> 
> ~$ perl -d /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl
> 
> Loading DB routines from perl5db.pl version 1.33
> Editor support available.
> 
> Enter h or `h h' for help, or `man perldebug' for more help.
> 
> main::(/home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl:92):
> 92:	my $verbose = 0;
>   DB<1>

The problem was fixed by "cpan Task::Catmandu".
Indexing works well.

~$  /home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl -v -d
Indexing biblios
1
Committing...
1 records indexed.
Indexing authorities
0 records indexed.
Comment 54 Robin Sheat 2015-03-10 22:25:53 UTC
(In reply to Peter Zhao from comment #52)
> Enter h or `h h' for help, or `man perldebug' for more help.
> 
> main::(/home/koha/kohaclone/misc/search_tools/rebuild_elastic_search.pl:92):
> 92:	my $verbose = 0;
>   DB<1>

You would actually need to trace it through after this point :)

(In reply to Peter Zhao from comment #53)
> The problem was fixed by "cpan Task::Catmandu".
> Indexing works well.

Unfortunately, that doesn't solve the cause of the problem. It shouldn't be necessary to cpan anything. It would be better to know what the missing module is so that I can make it fail properly if it's missing.
Comment 55 Robin Sheat 2015-03-13 02:28:34 UTC
Authority searching is now "complete", by which I mean it seems to work but there are bound to be issues in it.
Comment 56 Robin Sheat 2015-03-25 04:46:46 UTC
I've rebased the branch on top of current master* and pushed it into the catalyst repo. This may cause the SearchEngine preference to vanish; if so, just re-set it from the system preferences.

* git can get upset if a file is in the thing you're rebasing onto, but there are still patches that have to go against it. It ended up grabbing a totally unrelated file that presumably was the closest match and shoving the patches in there. That's why there's the occasionally weird touching of Koha::Template::Plugin::Price.
Comment 57 Robin Sheat 2015-06-04 03:42:56 UTC
Just a note that I've got basic staff client working on my demo server: http://elasticsearch.koha.catalystdemo.net.nz/
Comment 58 Robin Sheat 2015-06-04 03:43:11 UTC
(In reply to Robin Sheat from comment #57)
> Just a note that I've got basic staff client working on my demo server:
> http://elasticsearch.koha.catalystdemo.net.nz/

Staff client /searching/
Comment 59 Jonathan Druart 2015-06-04 08:45:32 UTC
(In reply to Robin Sheat from comment #57)
> Just a note that I've got basic staff client working on my demo server:
> http://elasticsearch.koha.catalystdemo.net.nz/

"Basic searching there should be working, anything else may explode or otherwise fail."

Does that mean we can give you some feedback, or do you already know where the bugs are? :)

(for instance a search result for "harry" sorted by author az is not the reverse of author za. The "more" link does not do anything.)
Comment 60 Robin Sheat 2015-06-04 22:53:33 UTC
(In reply to Jonathan Druart from comment #59)
> "Basic searching there should be working, anything else may explode or
> otherwise fail."
> 
> Does that mean we can give you some feedback, or do you already know where the bugs are? :)

Feedback is welcome :) I'll add things to my todo list.
 
> (for instance a search result for "harry" sorted by author az is not the
> reverse of author za. 

Strictly speaking, the way it does it now is the most correct. But what I'm probably going to do based on this thread: https://lists.katipo.co.nz/pipermail/koha/2015-May/042746.html is have it sort only on the 1x0$a (primary author) field. However I need to figure out a way to try to be as consistent as possible with that, as it causes a problem in that now author search will use a different field than author sort. Hmm, I wonder if I can add named sort fields to my database mapping sorta like I do with facets...

> The "more" link does not do anything.)

I assume you mean the "more" link on facets? Yeah, that's known. It's because it's not currently truncating the list to 5 or whatever the default is. I don't think that's on my todo list though, I should add it...
Comment 61 Robin Sheat 2015-06-10 05:58:05 UTC
Just a heads up that I've refactored how the mappings get stored in the database. If you have a local setup, you'll want to re-import elasticsearch_mapping.sql (which is currently a weird hybrid: it loads things in the old way, uses SQL to generate new tables from them, and drops the old ones). This is temporary until I get around to verifying that the new version is near enough to correct.
Comment 62 Juan Romay Sieira 2015-07-09 18:46:09 UTC
I'm testing ES. My problem is that Koha is always trying to connect to localhost:9200, but my koha-conf.xml has:

<elasticsearch>
   <server>192.168.0.213:9200</server>
   <index_name>core</index_name>
</elasticsearch>

My instance of ES is not on localhost. Do I have to do something to force it to connect to 192.168.0.213 and not to localhost?

This is the error: [NoNodes] ** No nodes are available: [http://localhost:9200], called from sub Search::Elasticsearch::Role::Client::Direct::__ANON__ at /usr/local/share/perl/5.10.1/Catmandu/Store/ElasticSearch.pm line 61
Comment 63 Robin Sheat 2015-07-09 23:02:52 UTC
(In reply to Juan Romay Sieira from comment #62)
> I'm testing ES. My problem is that Koha is always trying to connect to
> localhost:9200, but my koha-conf.xml has:
> 
> <elasticsearch>
>    <server>192.168.0.213:9200</server>
>    <index_name>core</index_name>
> </elasticsearch>
> 
> My instance of ES is not on localhost. Do I have to do something to force it
> to connect to 192.168.0.213 and not to localhost?
> 
> This is the error: [NoNodes] ** No nodes are available:
> [http://localhost:9200], called from sub
> Search::Elasticsearch::Role::Client::Direct::__ANON__ at
> /usr/local/share/perl/5.10.1/Catmandu/Store/ElasticSearch.pm line 61

It definitely doesn't require localhost, my own test system runs against a remote ES server. I'm not sure what could cause that, are you sure you've got the right koha-conf? I'll attach the whole of mine to make sure the context is right.
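For reference, reading that block typically amounts to something like the following (a rough sketch, not the branch's exact code; 'server' and 'index_name' are the keys from your snippet):

    use C4::Context;

    my $conf = C4::Context->config('elasticsearch')
        or die "No 'elasticsearch' block is defined in koha-conf.xml";
    # <server> may appear once or several times, so normalise to a list
    my @servers = ref $conf->{server} eq 'ARRAY' ? @{ $conf->{server} } : ( $conf->{server} );
    my %params  = ( servers => \@servers, index_name => $conf->{index_name} );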
Comment 64 Robin Sheat 2015-07-09 23:03:38 UTC
Created attachment 40903 [details]
My koha-conf.xml as an example
Comment 65 Juan Romay Sieira 2015-07-10 06:23:33 UTC
Yes, it's the right koha-conf, and Koha gets the right configuration. I have now changed it to add the timeout tag. I put a warn in Koha/SearchEngine/Elasticsearch/Search.pm:

    warn Data::Dumper::Dumper(\%$params);
    $self->store(
        Catmandu::Store::ElasticSearch->new(
            %$params, trace_calls => 1,
        )
    ) unless $self->store;

This is what it shows in the Koha OPAC log:

$VAR1 = {
           'index_name' => 'core_biblios',
           'timeout' => '10',
           'servers' => [
                          '192.168.0.213:9200'
                        ]
         };

The ES instance is running too, if I visit the URL of ES in a browser it returns:
{
  "status" : 200,
  "name" : "koha-es",
  "cluster_name" : "koha-cluster",
  "version" : {
    "number" : "1.6.0",
    "build_hash" : "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",
    "build_timestamp" : "2015-06-09T13:36:34Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

To temporarily fix it I wrote an iptables rule to forward my local port 9200 to the other machine...
Comment 66 Juan Romay Sieira 2015-07-13 11:53:02 UTC
Finally I can index, search, etc., all with the same configuration. My problem was that I was using Squeeze instead of Jessie. Thank you!
Comment 67 Robin Sheat 2015-07-13 22:54:07 UTC
(In reply to Juan Romay Sieira from comment #66)
> Finally I can index, search, etc., all with the same configuration. My
> problem was that I was using Squeeze instead of Jessie. Thank you!

Oh, excellent! I'm surprised it even let you install the modules on squeeze, to be honest.
Comment 68 Robin Sheat 2015-07-17 02:57:11 UTC
I've just pushed up some updates to the Catalyst git repo, the main thing being that updates now work. That is, you can update or delete records in Koha and they're (immediately) updated in Elasticsearch. There is a provision for background updating (e.g. if the ES server isn't there), but I haven't actually done it yet.

I'm updating the ES demo server (http://elasticsearch.koha.catalystdemo.net.nz/) at the moment. Have a play and let me know if^Wwhen you spot problems.

I'm keeping a rough to-do list here: 
https://tree.taiga.io/project/robins-koha-elasticsearch/kanban
(which wasn't publicly visible before, but is now, as Taiga allows that.)

If you spot something that's not on that list, let me know and I'll add it to the list.

I'm taking a small break from ES specifically because I need to do stuff on a related thing. But I'll review anything posted here :)
Comment 69 Katrin Fischer 2015-07-17 09:23:44 UTC
Hi Robin,

thx for updating! Some things I tried/found/wondered about, mostly about faceting, as I am getting a lot of feedback on our current implementation:

1) Can you show the number of results behind the facets? There is currently a preference for this, but I couldn't turn it on/check whether it was turned on.
2) Can facets be sorted by most used to less used in a result list? 

3) Can you show all entries for a big result list? How does it limit which facets to show?

4) If you search for () it explodes rather spectacularly.
Comment 70 Robin Sheat 2015-07-20 02:10:04 UTC
(In reply to Katrin Fischer from comment #69)
> 1) Can you show the number of results behind the facets? There is currently
> a preference for this, but I couldn't turn it on/check whether it was turned on.

I've turned it on now.

> 2) Can facets be sorted by most used to less used in a result list? 

This is a Koha question, not a search question. We can re-order them however we like (though, I don't know how meaningful that'd be tbh.) I'm trying to not change the behaviour significantly beyond what zebra can do.

> 3) Can you show all entries for a big result list? How does it limit which
> facets to show?

That's not how facets work in this case. It calculates facets across all the results (roughly, but I think mostly accurate for the scale of results in a Koha system.) It then returns the most frequent 10. At the moment, that's what's displaying. When I get to that point in my todo list, it'll show 5 until you click on the "show more" thing. Which forces a full page reload for hysterical raisins.
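For the curious, the request involved looks roughly like this (a minimal sketch using the Search::Elasticsearch client and the ES 1.x terms-facet syntax; the index and field names are illustrative):

    use Search::Elasticsearch;

    my $es = Search::Elasticsearch->new( nodes => ['localhost:9200'] );
    my $results = $es->search(
        index => 'koha_biblios',
        body  => {
            query  => { query_string => { query => 'harry' } },
            facets => {
                author  => { terms => { field => 'author__facet' } },   # top 10 terms by default
                subject => { terms => { field => 'subject__facet' } },
            },
        },
    );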

> 4) If you search for () it explodes rather spectacularly.

It sure does.

There's an option to make the string parsing more lenient that I need to enable. I also need to catch that error better. Basically, if you give it anything that can't be parsed as a proper Lucene query, you'll see this.

The real solution is to parse the query in Koha into a tree, then turn that tree into a proper Elasticsearch query. For obvious reasons (it's hard!), I'm not doing that yet.

The other part of the real solution is to catch and parse that ES response and say something sensible, rather than just output it verbatim.
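In the meantime, something along these lines would at least stop the raw ES error from reaching the user (a sketch only; $bag and $query are illustrative, assuming the Catmandu bag the search goes through):

    use Try::Tiny;

    my $results = try {
        $bag->search( query => $query );   # dies if ES can't parse the Lucene syntax
    }
    catch {
        warn "Elasticsearch query failed: $_";
        undef;   # let the caller show a friendly "unable to perform your search"
    };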
Comment 71 Katrin Fischer 2015-07-20 08:53:44 UTC
(In reply to Robin Sheat from comment #70)
> (In reply to Katrin Fischer from comment #69)
> > 1) Can you show the number of results behind the facets? There is currently
> > a preference for this, but I couldn't turn it on/check whether it was turned on.
> 
> I've turned it on now.

Thx!

> 
> > 2) Can facets be sorted by most used to less used in a result list? 
> 
> This is a Koha question, not a search question. We can re-order them however
> we like (though, I don't know how meaningful that'd be tbh.) I'm trying to
> not change the behaviour significantly beyond what zebra can do.

We currently do it alphabetically - which is kind of ok, but as we pick the facets quite randomly out of 20 records... 

I understand about not changing behaviour significantly for now, but I think there are a lot of weirdnesses that we shouldn't translate to Elasticsearch if we can help it. And our facets are quite weird right now.

> > 3) Can you show all entries for a big result list? How does it limit which
> > facets to show?
> 
> That's not how facets work in this case. It calculates facets across all the
> results (roughly, but I think mostly accurate for the scale of results in a
> Koha system.) It then returns the most frequent 10. At the moment, that's
> what's displaying. When I get to that point in my todo list, it'll show 5
> until you click on the "show more" thing. Which forces a full page reload
> for hysterical raisins.

Sorry, I don't understand, what do you mean by "(roughly, but I think mostly accurate for the scale of results in a Koha system.)"? 

The demo database seems to be quite small, and from here it's not very fast, so I'm wondering how performance looks for you. Have you done testing with a bigger database by chance (100,000+)? We are running into a lot of never-noticed problems right now with the bigger databases and searching.

About the number - I think limiting to 10 could be problematic for things like itemtypes, locations and branches. It could be good to have that configurable at some point or an option to just show 'all' for some of the facets.
Comment 72 Robin Sheat 2015-07-20 23:55:38 UTC
(In reply to Katrin Fischer from comment #71)
> We currently do it alphabetically - which is kind of ok, but as we pick the
> facets quite randomly out of 20 records... 

Oh, this should be presenting the facets with the most results across the full set of search results.

> I understand about not changing behaviour significantly for now, but I think
> there are a lot of weirdnesses that we shouldn't translate to Elasticsearch
> if we can help it. And our facets are quite weird right now.

Well, they're probably a bit more sane in ES anyway. 

> Sorry, I don't understand, what do you mean by "(roughly, but I think mostly
> accurate for the scale of results in a Koha system.)"? 

Oh, just that when you get to many results over a clustered system, after a point it starts estimating the counts. Like google's "Results 1-10 of about 1,200,000", it's an estimate, not an exact count. But for smaller result counts it's going to be pretty exact.

> The demo database seems to be quite small, and from here it's not very fast,
> so I'm wondering how performance looks for you. Have you done testing with
> a bigger database by chance (100,000+)? We are running into a lot of
> never-noticed problems right now with the bigger databases and searching.

http://elasticsearch.koha.catalystdemo.net.nz/files/es-search/

Most of the time is spent in Koha::Database. This is something that has to be fixed in general for Koha to use it (alternatively, using a persistent environment like Plack, where the init only has to happen once). There is a bit of extra time spent in Catmandu that could be reduced if necessary; we're using it as a fairly thin wrapper over search (as opposed to indexing, where it's critical) and it'd be OK to use the elasticsearch libraries directly. However, that's a bridge to burn when we get to it.

The database I'm using has 8,679 biblios and 14,841 items. So not very large at all. At some stage I might have a look at putting a bigger one in, but that does impact my reindexing time :)

> About the number - I think limiting to 10 could be problematic for things
> like itemtypes, locations and branches. It could be good to have that
> configurable at some point or an option to just show 'all' for some of the
> facets.

Sure, but is that a search thing or a Koha thing? I mean, putting hooks in to make this possible is certainly a search thing (they don't exist yet), but the decision on how it should be done is out of scope. I have enough problems to worry about as it is :)
Comment 73 Katrin Fischer 2015-07-21 05:43:07 UTC
Sorry, no intention to scope creep :) So far it sounds like facets will be nicer - and we can worry about other things later.
Comment 74 Robin Sheat 2015-08-03 05:59:38 UTC
I've just pushed a new commit that makes the code work with the newer Catmandu::Store::Elasticsearch version. Until I re-roll the dependencies tarball, you'll want to grab the .deb file here:

http://debian.koha-community.org/koha/otherthings/
Comment 75 Jonathan Druart 2015-08-27 14:00:06 UTC
Created attachment 42030 [details]
opac_search_for_d_sort_by_relevance
Comment 76 Jonathan Druart 2015-08-27 14:00:37 UTC
Created attachment 42031 [details]
opac_search_for_d_sort_by_title
Comment 77 Jonathan Druart 2015-08-27 14:01:00 UTC
Created attachment 42032 [details]
opac_search_for_harry_sort_by_title
Comment 78 Jonathan Druart 2015-08-27 14:01:15 UTC
Created attachment 42033 [details]
limit_by_book_sort_by_pubdate
Comment 79 Jonathan Druart 2015-08-27 14:01:41 UTC
Me again :)

So, I have tried to do some tests locally using your branch (OPAC biblio search only).
The first problem I had was finding a MARC21 DB (since the UNIMARC mappings are not defined, I cannot test with a UNIMARC DB).
I have used the one created for the sandboxes (http://git.koha-community.org/gitweb/?p=contrib/global.git;a=blob;f=sandbox/sql/sandbox1.sql.gz;h=19268bccb43b2a33d5644b7d86cbb1abb323016b;hb=HEAD). But there are only 436 biblios, which is not enough to test some things (facets, for instance).
Or maybe you can share your DB?

Here some notes:

1/ Add deps to C4/Installer/PerlDependencies.pm

2/ The number of tests provided is very low.

3/ catalyst/elastic_search is 1004 commits behind origin/master, please rebase

4/ The message "No 'elasticsearch' block is defined in koha-conf.xml" should be raised before starting the indexing process, and not when committing the first batch.

5/ You really need to tune the default value for the commit :)
commit 100:  perl misc/search_tools/rebuild_elastic_search.pl -b  77.57s user 0.86s system 91% cpu 1:25.62 total
commit 1000: perl misc/search_tools/rebuild_elastic_search.pl -b  24.68s user 0.52s system 79% cpu 31.595 total
For Solr, we used 5000.
Yes I know, it's configurable.

6/ Verbose does not work as expected, it could be fixed with
-    print $msg if ($verbose <= $level);
+    print $msg if ($verbose >= $level);

7/ perl -e "use Pod::Checker;podchecker('misc/search_tools/rebuild_elastic_search.pl')";
*** WARNING: empty section in previous paragraph at line 36 in file misc/search_tools/rebuild_elastic_search.pl
*** ERROR: =over on line 38 without closing =back at line EOF in file misc/search_tools/rebuild_elastic_search.pl

8/ 2 occurrences of "Solr" reintroduced in installer/data/mysql/sysprefs.sql and koha-tmpl/intranet-tmpl/prog/en/modules/admin/preferences/admin.pref

9/ Test!
I have launched some searches, with the same DB (the one from the sandbox).
On a local install using your remote branch and another one using master (sandbox7 provided by BibLibre).

a. Search for 'd' (screenshot opac_search_for_d_sort_by_relevance.png, ES on the left, Zebra on the right).
Main differences:
- 183 vs 182 results (?) 
- the order is not the same (make sense)
- Locations and Places facets are missing
- 6 entries are displayed in the facets for ES (current behavior is 5). 

b. Search for 'd', sort by title AZ (screenshot opac_search_for_d_sort_by_title.png)
- Zebra displays only 1 facet
- The order is still completely different

c. Search for 'harry', sort by title AZ (screenshot opac_search_for_harry_sort_by_title.png)
- 'Show more' link is displayed even if only 2 entries for a facet are available
- The order is still different ("The discovery of heaven" should be sorted either before "Dollhouse" (if "the" is a stopword) or after "Hareios*")
- The availability is wrong for ES (The item for Dollhouse is not for loan)

d. Search for Books (limit by item type in the adv search), sort by pubdate (screenshot limit_by_book_sort_by_pubdate.png)
- "Return to the last advanced search" link is not displayed
- The item types facet contains several entries, which does not make sense
- The number of results highly differ (395 vs 364)
- The order is still completely different. I had a look in the index and found:
"Pictura murală*" has "pubdate":"||||" (/_search?q=_id:39&pretty)
The Korean Go Association's learn to play go  "pubdate":"uuuu" (/_search?q=_id:155&pretty)
Where do these values come from? Shouldn't this be a date, or at least an integer?

It's not easy to know what is indexed where. Did you have a look at the indexes configuration page the Solr stuff had?
It provided an interface to configure the different mappings, it was very useful.
Comment 80 Robin Sheat 2015-08-28 00:19:14 UTC
(In reply to Jonathan Druart from comment #79)
> The first problem I had was finding a MARC21 DB (since the UNIMARC mappings
> are not defined, I cannot test with a UNIMARC DB).

The UNIMARC mappings should be defined, though not tested.

> I have used the one created for the sandboxes
> (http://git.koha-community.org/gitweb/?p=contrib/global.git;a=blob;f=sandbox/
> sql/sandbox1.sql.gz;h=19268bccb43b2a33d5644b7d86cbb1abb323016b;hb=HEAD). But
> there are only 436 biblios, which is not enough to test some things (facets,
> for instance).
> Or maybe you can share your DB?

I could, but I think we'll get more useful results from different databases.

> Here some notes:
> 
> 1/ Add deps to C4/Installer/PerlDependencies.pm

Yeah, I'm mostly waiting for things to settle (which they have now.)

> 2/ The number of tests provided is very low.

Yes, I've been meaning to go back and add a pile more.

> 3/ catalyst/elastic_search is 1004 commits behind origin/master, please
> rebase

It's just a tedious process, so I keep putting it off :) should do that soon though.

> 4/ The message "No 'elasticsearch' block is defined in koha-conf.xml" should
> be raised before starting the indexing process, and not when committing the
> first batch.

Added to my TODO.

> 5/ You really need to tune the default value for the commit :)
> commit 100:  perl misc/search_tools/rebuild_elastic_search.pl -b  77.57s
> user 0.86s system 91% cpu 1:25.62 total
> commit 1000: perl misc/search_tools/rebuild_elastic_search.pl -b  24.68s
> user 0.52s system 79% cpu 31.595 total
> For Solr, we used 5000.
> Yes I know, it's configurable.

I just picked a number and haven't gone back to it. I'm also thinking of maybe dropping the committing entirely and just feeding straight into Catmandu, letting it do its own batching rather than doubling up on it. More experimentation is needed really, but definitely increasing the default is a sensible thing to do.

FWIW, committing at 5,000:

real	2m14.627s
user	1m13.272s
sys	0m2.228s

100:

real	6m6.280s
user	4m45.268s
sys	0m2.828s

That's a fair difference :)
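For reference, the commit-every-N pattern being timed here is roughly the following (a sketch, assuming a Catmandu::Store::ElasticSearch bag; $records as an iterator over the biblios and $convert standing in for the MARC-to-document conversion):

    my $count = 0;
    while ( my $record = $records->next ) {
        $bag->add( $convert->($record) );
        $bag->commit if ++$count % $commit_size == 0;   # flush a batch to ES
    }
    $bag->commit;   # flush the final partial batch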

> 6/ Verbose does not work as expected, it could be fixed with

Oops. TODOed.

> 
> 7/ perl -e "use
> Pod::Checker;podchecker('misc/search_tools/rebuild_elastic_search.pl')";
> *** WARNING: empty section in previous paragraph at line 36 in file
> misc/search_tools/rebuild_elastic_search.pl
> *** ERROR: =over on line 38 without closing =back at line EOF in file
> misc/search_tools/rebuild_elastic_search.pl

TODOed.

> 8/ 2 occurrences of "Solr" reintroduced in installer/data/mysql/sysprefs.sql
> and koha-tmpl/intranet-tmpl/prog/en/modules/admin/preferences/admin.pref

Must have come about when merging. TODOed.

> 9/ Test!
> I have launched some searches, with the same DB (the one from the sandbox).
> On a local install using your remote branch and another one using master
> (sandbox7 provided by BibLibre).
> 
> a. Search for 'd' (screenshot opac_search_for_d_sort_by_relevance.png, ES on
> the left, Zebra on the right).
> Main differences:
> - 183 vs 182 results (?) 

I wouldn't necessarily expect them to be the same, especially for a fairly meaningless search.

> - the order is not the same (make sense)
> - Locations and Places facets are missing

Yeah, they're not faceted yet. Added that to my TODO list before I forget again.

> - 6 entries are displayed in the facets for ES (current behavior is 5). 
> 
> b. Search for 'd', sort by title AZ (screenshot
> opac_search_for_d_sort_by_title.png)
> - Zebra displays only 1 facet

That's probably zebra being wrong then :)

> - The order is still completely different

I'm not sure which is right in this case, though I'm doing some work on the sorting at the moment that would allow you to pick which of the fields that end up in title you want to sort by. For example, it might be that ES is putting the ones with a lower series title near the start, even though it displays a different title. That'll be tuneable when I'm done with the current stuff.

> c. Search for 'harry', sort by title AZ (screenshot
> opac_search_for_harry_sort_by_title.png)
> - 'Show more' link is displayed even if only 2 entries for a facet are
> available

Thought I'd fixed that, I'll have to have a look again.

> - The order is still different ("The discovery of heaven" should be sorted
> either before "Dollhouse" (if "the" is a stopword) or after "Hareios*")

Dollhouse probably has another title field that's actually being used, as noted above.

> - The availability is wrong for ES (The item for Dollhouse is not for loan)

Why is it not for loan? Is it by policy, because there are no items, or because all items are issued?

> d. Search for Books (limit by item type in the adv search), sort by pubdate
> (screenshot limit_by_book_sort_by_pubdate.png)
> - "Return to the last advanced search" link is not displayed

I wonder how it knows to show that...

I can't actually find that string in my checkout at all.

> - The item types facet contains several entries, which does not make sense

Curious. Are there situations where you have a biblio-level itemtype that differs from the item-level item type, or where one biblio might have multiple items with different item types? At the moment, I think they're all being thrown into one facet pot.

> - The number of results highly differ (395 vs 364)

Probably due to biblio-vs-item itemtype selection not being supported yet. If you can find it giving you a record that plain shouldn't match though, that'd be interesting.

> - The order is still completely different. I had a look in the index and
> found:
> "Pictura murală*" has "pubdate":"||||" (/_search?q=_id:39&pretty)
> The Korean Go Association's learn to play go  "pubdate":"uuuu"
> (/_search?q=_id:155&pretty)
> Where do these values come from? Shouldn't this be a date, or at least an integer?

Could be the mapping is funny/broken for that. My test system has things like:

"pubdate":"1998"

though, which implies that it's correct. The actual mapping comes from:

INSERT INTO `elasticsearch_mapping` (`indexname`, `mapping`, `facet`, `suggestible`, `type`, `marc21`, `unimarc`, `normarc`) VALUES ('biblios','pubdate',FALSE,FALSE,'','008_/7-10','100a_/9-12','008_/7-10');

On the other hand, it does have:

"date-entered-on-file":"61006"

which doesn't look right no matter how you carve it.

> It's not easy to know what is indexed where. Did you have a look at the
> indexes configuration page the Solr stuff had?
> It provided an interface to configure the different mappings, it was very
> useful.

I haven't yet got to the point where I have the time to make an interface. At the moment it's all configured in elasticsearch_mapping.sql, which is somewhat human readable/editable. After loading the data into a table, it rewrites all those tables into a form that'll be more conducive for having a GUI on top of, but is less human readable.

BTW, if you add

<trace_to>Stderr</trace_to>

to the <elasticsearch> block, it'll dump all the chatter with ES out to stderr, which is useful for seeing what exactly is going on. I warn you, there is a lot there though.

Thanks for testing, even if I have a pile more things to fix now :)
Comment 81 Jonathan Druart 2015-08-28 11:29:22 UTC
(In reply to Robin Sheat from comment #80)
> (In reply to Jonathan Druart from comment #79)
> > The first problem I had was finding a MARC21 DB (since the UNIMARC mappings
> > are not defined, I cannot test with a UNIMARC DB).
> 
> The UNIMARC mappings should be defined, though not tested.

Well, it's defined, yes, but it does not work at all (the MARC21 mappings are used) :)
It is caused by some errors in the SQL file. Patch is coming.

Note the following:
MariaDB [koha_es_unimarc]>  insert into search_field (name, type) select distinct mapping, type from elasticsearch_mapping;
Query OK, 73 rows affected, 57 warnings (0.05 sec)
Records: 73  Duplicates: 0  Warnings: 57

MariaDB [koha_es_unimarc]> show warnings;
+---------+------+--------------------------------------------+
| Level   | Code | Message                                    |
+---------+------+--------------------------------------------+
| Warning | 1265 | Data truncated for column 'type' at row 1  |

and 72 others.

> > I have used the one created for the sandboxes
> > (http://git.koha-community.org/gitweb/?p=contrib/global.git;a=blob;f=sandbox/
> > sql/sandbox1.sql.gz;h=19268bccb43b2a33d5644b7d86cbb1abb323016b;hb=HEAD). But
> > there are only 436 biblios, which is not enough to test some things (facets,
> > for instance).
> > Or maybe you can share your DB?
> 
> I could, but I think we'll get more useful results from different databases.

Yes, of course, but I am not a real tester, I am a developer, and it would be useful to share info on specific data.
I am fine with using the sandbox DB, if that's OK for you.

> > Here some notes:
> > 
> > 1/ Add deps to C4/Installer/PerlDependencies.pm
> 
> Yeah, I'm mostly waiting for things to settle (which they have now.)
> 
> > 2/ The number of tests provided is very low.
> 
> Yes, I've been meaning to go back and add a pile more.

OK, I'll leave that for you :)

> > 6/ Verbose does not work as expected, it could be fixed with
> 
> Oops. TODOed.

Patch is coming.


> > 7/ perl -e "use
> > Pod::Checker;podchecker('misc/search_tools/rebuild_elastic_search.pl')";
> > *** WARNING: empty section in previous paragraph at line 36 in file
> > misc/search_tools/rebuild_elastic_search.pl
> > *** ERROR: =over on line 38 without closing =back at line EOF in file
> > misc/search_tools/rebuild_elastic_search.pl
> 
> TODOed.

Patch is coming.


> > 8/ 2 occurrences of "Solr" reintroduced in installer/data/mysql/sysprefs.sql
> > and koha-tmpl/intranet-tmpl/prog/en/modules/admin/preferences/admin.pref
> 
> Must have come about when merging. TODOed.

Patch is coming.

> > 9/ Test!

> > c. Search for 'harry', sort by title AZ (screenshot
> > opac_search_for_harry_sort_by_title.png)
> > - 'Show more' link is displayed even if only 2 entries for a facet are
> > available
> 
> Thought I'd fixed that, I'll have to have a look again.

Patch is coming.

> > - The order is still different ("The discovery of heaven" should be sorted
> > either before "Dollhouse" (if "the" is a stopword) or after "Hareios*")
> 
> Dollhouse probably has another title field that's actually being used, as
> noted above.

Yes it has:
title":["Dollhouse"],["Seasons one & two."]]                                                                              
245$a Dollhouse
490$a Seasons one & two.

But 245$a should be used for sorting :)

> > - The availability is wrong for ES (The item for Dollhouse is not for loan)
> 
> Why is it not for loan? Is it by policy, because there are no items, or
> because all items are issued?

The item is a "Visual Materials" which has a itemtype.notforloan flag set.

> > d. Search for Books (limit by item type in the adv search), sort by pubdate
> > (screenshot limit_by_book_sort_by_pubdate.png)
> > - "Return to the last advanced search" link is not displayed
> 
> I wonder how it knows to show that...
> 
> I can't actually find that string in my checkout at all.

Yes, sorry, it was introduced by Bug 13307: Create a link to the last advanced search in search result page (OPAC), which is not in your branch yet.

> > - The item types facet contains several entries, which does not make sense
> 
> Curious. Are there situations where you have a biblio-level itemtype that
> differs from the item-level item type, or where one biblio might have
> multiple items with different item types? At the moment, I think they're all
> being thrown into one facet pot.

It comes from biblioitems.itemtype=2WEEK
Not sure if the data I used are correct...

> > - The number of results highly differ (395 vs 364)
> 
> Probably due to biblio-vs-item itemtype selection not being supported yet.
> If you can find it giving you a record that plain shouldn't match though,
> that'd be interesting.

Ouch, not sure how I could find that easily.

> > - The order is still completely different. I had a look in the index and
> > found:
> > "Pictura murală*" has "pubdate":"||||" (/_search?q=_id:39&pretty)
> > The Korean Go Association's learn to play go  "pubdate":"uuuu"
> > (/_search?q=_id:155&pretty)
> > Where do these values come from? Shouldn't this be a date, or at least an integer?
> 
> Could be the mapping is funny/broken for that. My test system has things
> like:
> 
> "pubdate":"1998"
> 
> though, which implies that it's correct. The actual mapping comes from:
> 
> INSERT INTO `elasticsearch_mapping` (`indexname`, `mapping`, `facet`,
> `suggestible`, `type`, `marc21`, `unimarc`, `normarc`) VALUES
> ('biblios','pubdate',FALSE,FALSE,'','008_/7-10','100a_/9-12','008_/7-10');

It comes from the 008
> "Pictura murală*" has "pubdate":"||||" (/_search?q=_id:39&pretty)
008 090409|||||||||xx |||||||||||||| ||und||
> The Korean Go Association's learn to play go  "pubdate":"uuuu"
008 971030muuuu9999nyua          000 0 eng 

But the index should not contain an invalid date.
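For anyone following along, the 008/7-10 extraction amounts to something like this (illustration only, using MARC::Record; the 008 string is the one quoted just above):

    use MARC::Record;
    use MARC::Field;

    my $record = MARC::Record->new;
    $record->append_fields( MARC::Field->new( '008', '971030muuuu9999nyua          000 0 eng ' ) );
    my $pubdate = substr( $record->field('008')->data, 7, 4 );   # "uuuu"
    # For the "Pictura murală*" record the same slice of
    # '090409|||||||||xx ...' yields "||||".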

For Solr (you can find the code on the BibLibre repo at https://git.biblibre.com/biblibre/koha_biblibre/commits/dev/solr Browse C4/Search/), we used a system of plugins. And there is a Date plugin (https://git.biblibre.com/biblibre/koha_biblibre/blob/bd38ce1811289fcfbd75a37ec99fc4cd3c5d37f4/C4/Search/Plugins/Date.pm) which does this job.
A plugin can be linked to a mapping.

Just a note: I know nobody has ever had a look at the Solr code, but it is used in production by several (4 or 5) customers for more than 4 years now.
And I have already had all the issues and problems you will encounter.

> > It's not easy to know what is indexed where. Did you have a look at the
> > indexes configuration page the Solr stuff had?
> > It provided an interface to configure the different mappings, it was very
> > useful.
> 
> I haven't yet got to the point where I have the time to make an interface.
> At the moment it's all configured in elasticsearch_mapping.sql, which is
> somewhat human readable/editable. After loading the data into a table, it
> rewrites all those tables into a form that'll be more conducive for having a
> GUI on top of, but is less human readable.
> 
> BTW, if you add
> 
> <trace_to>Stderr</trace_to>
> 
> to the <elasticsearch> block, it'll dump all the chatter with ES out to
> stderr, which is useful for seeing what exactly is going on. I warn you,
> there is a lot there though.

I will try to see if I can find some time and propose something here, if you want some help.
Comment 82 Jonathan Druart 2015-08-28 11:38:31 UTC Comment hidden (obsolete)
Comment 83 Jonathan Druart 2015-08-28 11:38:37 UTC Comment hidden (obsolete)
Comment 84 Jonathan Druart 2015-08-28 11:38:42 UTC Comment hidden (obsolete)
Comment 85 Jonathan Druart 2015-08-28 11:38:46 UTC Comment hidden (obsolete)
Comment 86 Jonathan Druart 2015-08-28 11:38:51 UTC Comment hidden (obsolete)
Comment 87 Jonathan Druart 2015-08-28 11:38:56 UTC Comment hidden (obsolete)
Comment 88 Jonathan Druart 2015-08-28 11:39:01 UTC Comment hidden (obsolete)
Comment 89 Jonathan Druart 2015-08-28 11:39:07 UTC Comment hidden (obsolete)
Comment 90 Jonathan Druart 2015-08-28 11:42:12 UTC
Something else, there is a sort issue in the facets:

[Some entries]
 Zeitoun, Ariel,
 Ó Cadhain, Máirtín.
 Ślez, Ts..

Ó should be after O, not after Z.
Comment 91 Robin Sheat 2015-08-31 05:20:20 UTC
(In reply to Jonathan Druart from comment #81)
> Well, it's defined, yes, but it does not work at all (the MARC21 mappings
> are used) :)
> It is caused by some errors in the SQL file. Patch is coming.

Ah, ta.

> 
> Note the following:
> MariaDB [koha_es_unimarc]>  insert into search_field (name, type) select
> distinct mapping, type from elasticsearch_mapping;
> Query OK, 73 rows affected, 57 warnings (0.05 sec)
> Records: 73  Duplicates: 0  Warnings: 57
> 
> MariaDB [koha_es_unimarc]> show warnings;
> +---------+------+--------------------------------------------+
> | Level   | Code | Message                                    |
> +---------+------+--------------------------------------------+
> | Warning | 1265 | Data truncated for column 'type' at row 1  |

Hmm, I remember that, but I'm not 100% sure it mattered. Could be wrong though.

> Yes, of course, but I am not a real tester, I am a developer, and it would
> be useful to share info on specific data.
> I am fine with using the sandbox DB, if that's OK for you.

Fair point. Let me see if I can tidy the database some for uploading somewhere.

Here it is:

http://elasticsearch.koha.catalystdemo.net.nz/files/koha_es_marc21.sql.bz2

it's not the best data, but it's good enough for messing about with.

> > > 2/ The number of tests provided is very low.
> > Yes, I've been meaning to go back and add a pile more.
> OK, I'll leave that for you :)

Oh, you don't have to. I don't mind if you go and write them all for me :)

> Patch is coming.
> Patch is coming.
> Patch is coming.
> Patch is coming.

Thanks!

> 
> Yes it has:
> title":["Dollhouse"],["Seasons one & two."]]                                
> 
> 245$a Dollhouse
> 490$a Seasons one & two.
> 
> But 245$a should be used for sorting :)

Yes, that's something I'm trying to fix at the moment :)

> The item is a "Visual Materials" which has a itemtype.notforloan flag set.

Good to know, I've not tested that case yet.

> Ouch, not sure how I could find that easily.

Probably easiest to construct a case manually.

> It comes from the 008
> > "Pictura murală*" has "pubdate":"||||" (/_search?q=_id:39&pretty)
> 008 090409|||||||||xx |||||||||||||| ||und||
> > The Korean Go Association's learn to play go  "pubdate":"uuuu"
> 008 971030muuuu9999nyua          000 0 eng 
> 
> But the index should not contain an invalid date.

Hmm. I don't know if we can put validation into the fixer rules. I'll have to explore that some further. Possibly also telling ES that this must be a number could cause bad data to get rejected, but it may reject the whole record, not sure.

Do you happen to know how zebra handles that?

> For Solr (you can find the code on the BibLibre repo at
> https://git.biblibre.com/biblibre/koha_biblibre/commits/dev/solr Browse
> C4/Search/), we used a system of plugins. And there is a Date plugin
> (https://git.biblibre.com/biblibre/koha_biblibre/blob/
> bd38ce1811289fcfbd75a37ec99fc4cd3c5d37f4/C4/Search/Plugins/Date.pm) which
> does this job.
> A plugin can be linked to a mapping.

We probably can't directly reuse that; at present we're using Catmandu to do the data conversion and interfacing with ES for the most part. But it's possible I can hook something in somewhere.

> Just a note: I know nobody has ever had a look at the Solr code, but it is
> used in production by several (4 or 5) customers for more than 4 years now.
> And I have already had all the issues and problems you will encounter.

I'm sure I'll encounter some exciting new ones :)

> I will try to see if I can find some time and propose something here, if you
> want some help.

Sure, anything is welcome.

(In reply to Jonathan Druart from comment #90)
> Something else, there is a sort issue in the facets:
> 
> [Some entries]
>  Zeitoun, Ariel,
>  Ó Cadhain, Máirtín.
>  Ślez, Ts..
> 
> Ó should be after O, not after Z.

Line 573 of opac/opac-search.pl does a sort with cmp, which isn't very Unicode-aware. I'm putting that in the not-my-problem bin as it's in upstream :)
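For the record, the upstream fix would look something like this (a sketch; Unicode::Collate ships with core Perl):

    use utf8;
    use Unicode::Collate;

    my @sorted = Unicode::Collate->new->sort(
        'Zeitoun, Ariel,', 'Ó Cadhain, Máirtín.', 'Ślez, Ts..'
    );
    # UCA order: Ó Cadhain, Ślez, Zeitoun. A plain cmp sort compares raw
    # code points instead, which is what pushes Ó and Ś past Z.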
Comment 92 Peter Zhao 2015-09-01 01:35:02 UTC
I've installed an ES Koha.
Indexing works well and the ES server can search records, but the OPAC cannot search.


"No results found!

You did not specify any search criteria.
Error:
Unable to perform your search. Please try again. "
Comment 93 Robin Sheat 2015-09-01 01:41:05 UTC
(In reply to Peter Zhao from comment #92)
> I've installed an ES Koha.
> Indexing works well and the ES server can search records, but the OPAC
> cannot search.

Can you provide more detail? Comment #80, at the bottom, shows how to debug the traces between ES and Koha. This will tell you what search request is actually being made.
Comment 94 Peter Zhao 2015-09-01 02:06:55 UTC
(In reply to Robin Sheat from comment #93)
> (In reply to Peter Zhao from comment #92)
> > I've installed an ES Koha.
> > Indexing works well and the ES server can search records, but the OPAC
> > cannot search.
> 
> Can you provide more detail? Comment #80, at the bottom, shows how to debug
> the traces between ES and Koha. This will tell you what search request is
> actually being made.

koha-opac-error_log shows:

[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] # Request to: http://localhost:9200, referer: http://127.0.1.1/
[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] curl -XHEAD 'http://localhost:9200/koha_biblios?pretty=1', referer: http://127.0.1.1/
[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] , referer: http://127.0.1.1/
[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] # Response: 200, Took: 14 ms, referer: http://127.0.1.1/
[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] # 1, referer: http://127.0.1.1/
[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] [Tue Sep  1 10:03:30 2015] opac-search.pl: [Serializer] ** encountered object '1', but neither allow_blessed nor convert_blessed settings are enabled at /usr/local/share/perl/5.14.2/Search/Elasticsearch/Role/Serializer/JSON.pm line 24., referer: http://127.0.1.1/
[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] [Tue Sep  1 10:03:30 2015] opac-search.pl: , called from sub Search::Elasticsearch::Role::Client::Direct::__ANON__ at /usr/local/share/perl/5.14.2/Catmandu/Store/ElasticSearch/Bag.pm line 127. With vars: {'var' => {'from' => 0,'query' => {'query_string' => {'fuzziness' => 'auto','default_field' => '_all','query' => '(title:best)','default_operator' => 'AND','lenient' => bless( do{\\(my $o = 1)}, 'JSON::PP::Boolean' )}},'size' => 20,'facets' => {'subject' => {'terms' => {'field' => 'subject__facet'}},'author' => {'terms' => {'field' => 'author__facet'}},'itype' => {'terms' => {'field' => 'itype__facet'}}}}}, referer: http://127.0.1.1/
[Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] [Tue Sep  1 10:03:30 2015] opac-search.pl: Use of uninitialized value $error in concatenation (.) or string at /home/koha/kohaes/opac/opac-search.pl line 578., referer: http://127.0.1.1/
Comment 95 Robin Sheat 2015-09-01 05:07:23 UTC
(In reply to Peter Zhao from comment #94)
> [Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] [Tue Sep  1 10:03:30
> 2015] opac-search.pl: [Serializer] ** encountered object '1', but neither
> allow_blessed nor convert_blessed settings are enabled at
> /usr/local/share/perl/5.14.2/Search/Elasticsearch/Role/Serializer/JSON.pm
> line 24., referer: http://127.0.1.1/
> [Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] [Tue Sep  1 10:03:30
> 2015] opac-search.pl: , called from sub
> Search::Elasticsearch::Role::Client::Direct::__ANON__ at
> /usr/local/share/perl/5.14.2/Catmandu/Store/ElasticSearch/Bag.pm line 127.
> With vars: {'var' => {'from' => 0,'query' => {'query_string' => {'fuzziness'
> => 'auto','default_field' => '_all','query' =>
> '(title:best)','default_operator' => 'AND','lenient' => bless( do{\\(my $o =
> 1)}, 'JSON::PP::Boolean' )}},'size' => 20,'facets' => {'subject' => {'terms'
> => {'field' => 'subject__facet'}},'author' => {'terms' => {'field' =>
> 'author__facet'}},'itype' => {'terms' => {'field' => 'itype__facet'}}}}},
> referer: http://127.0.1.1/

Interesting. What do you get as the output of:

$ perl -MData::Dumper -MJSON -e 'print Dumper JSON::true;'

Also, what about:

perl -MJSON::XS::Boolean -e ''

Thirdly, does installing libjson-xs-perl make things work?
Comment 96 Peter Zhao 2015-09-01 05:33:10 UTC
(In reply to Robin Sheat from comment #95)
> (In reply to Peter Zhao from comment #94)
> > [Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] [Tue Sep  1 10:03:30
> > 2015] opac-search.pl: [Serializer] ** encountered object '1', but neither
> > allow_blessed nor convert_blessed settings are enabled at
> > /usr/local/share/perl/5.14.2/Search/Elasticsearch/Role/Serializer/JSON.pm
> > line 24., referer: http://127.0.1.1/
> > [Tue Sep 01 10:03:30 2015] [error] [client 127.0.0.1] [Tue Sep  1 10:03:30
> > 2015] opac-search.pl: , called from sub
> > Search::Elasticsearch::Role::Client::Direct::__ANON__ at
> > /usr/local/share/perl/5.14.2/Catmandu/Store/ElasticSearch/Bag.pm line 127.
> > With vars: {'var' => {'from' => 0,'query' => {'query_string' => {'fuzziness'
> > => 'auto','default_field' => '_all','query' =>
> > '(title:best)','default_operator' => 'AND','lenient' => bless( do{\\(my $o =
> > 1)}, 'JSON::PP::Boolean' )}},'size' => 20,'facets' => {'subject' => {'terms'
> > => {'field' => 'subject__facet'}},'author' => {'terms' => {'field' =>
> > 'author__facet'}},'itype' => {'terms' => {'field' => 'itype__facet'}}}}},
> > referer: http://127.0.1.1/
> 
> Interesting. What do you get as the output of:
> 
> $ perl -MData::Dumper -MJSON -e 'print Dumper JSON::true;'
> 
> Also, what about:
> 
> perl -MJSON::XS::Boolean -e ''
> 
> Thirdly, does installing libjson-xs-perl make things work?

koha@koha:~$  perl -MData::Dumper -MJSON -e 'print Dumper JSON::true;'
$VAR1 = bless( do{\(my $o = 1)}, 'JSON::PP::Boolean' );
koha@koha:~$ perl -MJSON::XS::Boolean -e ''
koha@koha:~$ 
koha@koha:~$ sudo apt-get install libjson-xs-perl
[sudo] password for koha: 
Reading package lists... Done
Building dependency tree
Reading state information... Done
libjson-xs-perl is already the newest version.
libjson-xs-perl set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

koha@koha:~$ sudo cpanm JSON::XS

 After "JSON::XS" was installed, Opac works well. Thanks a lot!
Comment 97 Robin Sheat 2015-09-01 05:43:01 UTC
(In reply to Peter Zhao from comment #96)
> koha@koha:~$  perl -MData::Dumper -MJSON -e 'print Dumper JSON::true;'
> $VAR1 = bless( do{\(my $o = 1)}, 'JSON::PP::Boolean' );
> koha@koha:~$ perl -MJSON::XS::Boolean -e ''
> koha@koha:~$ 
> koha@koha:~$ sudo cpanm JSON::XS
> 
>  After "JSON::XS" was installed, Opac works well. Thanks a lot!

Cool. I suspect that your setup is particularly non-standard. I tried to uninstall JSON::XS to test, but I couldn't, as it would have caused other critical things to be uninstalled as well.
Comment 98 Peter Zhao 2015-09-01 09:53:57 UTC
Does the ES OPAC not support Chinese? I tried to search for a Chinese word on http://elasticsearch.koha.catalystdemo.net.nz/ and on my own ES; both show a "Software error".

Software error:

Can't escape \x{57FA}, try uri_escape_utf8() instead at /opt/kohaclones/elasticsearch/Koha/SearchEngine/Elasticsearch/QueryBuilder.pm line 217.

For help, please send mail to the webmaster ([no address given]), giving this error message and the time and date of the error. 

Software error:

Can't escape \x{57FA}, try uri_escape_utf8() instead at /home/koha/kohaes/Koha/SearchEngine/Elasticsearch/QueryBuilder.pm line 217.

For help, please send mail to the webmaster (webmaster@koha), giving this error message and the time and date of the error.
Comment 99 Jonathan Druart 2015-09-01 10:29:49 UTC
(In reply to Peter Zhao from comment #98)
> Does the ES OPAC not support Chinese? I tried to search for a Chinese word
> on http://elasticsearch.koha.catalystdemo.net.nz/ and on my own ES; both
> show a "Software error".
> 
> Software error:
> 
> Can't escape \x{57FA}, try uri_escape_utf8() instead at
> /opt/kohaclones/elasticsearch/Koha/SearchEngine/Elasticsearch/QueryBuilder.
> pm line 217.
> 
> For help, please send mail to the webmaster ([no address given]), giving
> this error message and the time and date of the error. 
> 
> Software error:
> 
> Can't escape \x{57FA}, try uri_escape_utf8() instead at
> /home/koha/kohaes/Koha/SearchEngine/Elasticsearch/QueryBuilder.pm line 217.
> 
> For help, please send mail to the webmaster (webmaster@koha), giving this
> error message and the time and date of the error.

This is fixed by
  Bug 12478: Fix encoding issue on facets
Try to add the last 8 patches from this bug report.
Robin's branch does not contain these fixes yet.
Comment 100 Jonathan Druart 2015-09-01 11:43:01 UTC
(In reply to Robin Sheat from comment #91)
> (In reply to Jonathan Druart from comment #81)
> > Note the following:
> > MariaDB [koha_es_unimarc]>  insert into search_field (name, type) select
> > distinct mapping, type from elasticsearch_mapping;
> > Query OK, 73 rows affected, 57 warnings (0.05 sec)
> > Records: 73  Duplicates: 0  Warnings: 57
> > 
> > MariaDB [koha_es_unimarc]> show warnings;
> > +---------+------+--------------------------------------------+
> > | Level   | Code | Message                                    |
> > +---------+------+--------------------------------------------+
> > | Warning | 1265 | Data truncated for column 'type' at row 1  |
> 
> Hmm, I remember that, but I'm not 100% sure it mattered. Could be wrong
> though.

It's caused by the fact that you insert an empty string into an enum field.
I am not sure about the consequences.

> Here it is:
> 
> http://elasticsearch.koha.catalystdemo.net.nz/files/koha_es_marc21.sql.bz2
> 
> it's not the best data, but it's good enough for messing about with.

Great, thanks. Another set of data :)

> > It comes from the 008
> > > "Pictura murală*" has "pubdate":"||||" (/_search?q=_id:39&pretty)
> > 008 090409|||||||||xx |||||||||||||| ||und||
> > > The Korean Go Association's learn to play go  "pubdate":"uuuu"
> > 008 971030muuuu9999nyua          000 0 eng 
> > 
> > But the index should not contain an invalid date.
> 
> Hmm. I don't know if we can put validation into the fixer rules. I'll have
> to explore that some further. Possibly also telling ES that this must be a
> number could cause bad data to get rejected, but it may reject the whole
> record, not sure.
> 
> Do you happen to know how zebra handles that?

Absolutely no idea.

> > For Solr (you can find the code on the BibLibre repo at
> > https://git.biblibre.com/biblibre/koha_biblibre/commits/dev/solr Browse
> > C4/Search/), we used a system of plugins. And there is a Date plugin
> > (https://git.biblibre.com/biblibre/koha_biblibre/blob/
> > bd38ce1811289fcfbd75a37ec99fc4cd3c5d37f4/C4/Search/Plugins/Date.pm) which
> > does this job.
> > A plugin can be linked to a mapping.
> 
> We probably can't directly reuse that; at present we're using Catmandu to do
> the data conversion and interfacing with ES for the most part. But it's
> possible I can hook something in somewhere.

We will have to do some data pre-processing before indexing the records.
I need to learn more about ES, but with Solr we had to process the date values for date type mappings.
Otherwise it is not possible to query this index correctly (for instance date ranges, or sort by, etc.).
By the way, the date type is only used on acqdate and copydate; why not on other dates (at least pubdate)?

> (In reply to Jonathan Druart from comment #90)
> > Something else, there is a sort issue in the facets:
> > 
> > [Some entries]
> >  Zeitoun, Ariel,
> >  Ó Cadhain, Máirtín.
> >  Ślez, Ts..
> > 
> > Ó should be after O, not after Z.
> 
> Line 573 of opac/opac-search.pl does a sort with cmp, which isn't very
> Unicode-aware. I'm putting that in the not-my-problem bin as it's in
> upstream :)

Yes, and, IMO, there is a design issue here.
We should not reuse the pl and tt files.
How do you plan to add features that Zebra cannot provide? :)
Not sure it will be maintainable to add conditions (if SE == 'ES') in the TT.
For instance, for the facets, we would like to display them as ES retrieves them (ordered by most used), and add the number of occurrences.
Comment 101 Jonathan Druart 2015-09-01 11:51:07 UTC
 95 # TODO implement in the future - I don't know the best way of doing this yet.
 96 # If fork: make sure process group is changed so apache doesn't wait for us.
 97 
 98 =cut
 99 
100 sub update_index_background {

FWIW: I have developed an "index queue" for Solr. It's inconceivable to index records on the fly, one by one, in the real world.
Koha just appended a record id to a file, and the index queue watched the file and indexed a batch when needed (x minutes passed or y records in the file).
I can provide more information if needed.
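To sketch the enqueue half of that idea (hypothetical file path; the indexer side would be a cron job or daemon that drains the file in batches):

    use Fcntl qw(:flock);

    # Koha side: enqueueing a changed record is just an append under a lock.
    sub enqueue_record {
        my ($biblionumber) = @_;
        open my $fh, '>>', '/var/lib/koha/index-queue' or die "queue: $!";
        flock $fh, LOCK_EX;
        print {$fh} "$biblionumber\n";
        close $fh;
    }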
Comment 102 Jonathan Druart 2015-09-01 11:58:07 UTC
Ok, just saw the comment about the date format:

Koha/ElasticSearch.pm
185             # TODO be aware of date formats, but this requires pre-parsing
186             # as ES will simply reject anything with an invalid date.

We would pre-process the data at this point, if a "plugin" (still to be defined) is linked to this field.
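For pubdate, that pre-processing could be as simple as (a sketch):

    # Only keep pubdate when it is a plausible year, so ES never sees
    # placeholder values like "||||" or "uuuu".
    sub clean_pubdate {
        my ($value) = @_;
        return $value =~ /^\d{4}$/ ? $value : undef;
    }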
Comment 103 Peter Zhao 2015-09-03 03:08:23 UTC
(In reply to Peter Zhao from comment #98)
> Does the ES OPAC not support Chinese? I tried to search for a Chinese word
> on http://elasticsearch.koha.catalystdemo.net.nz/ and on my own ES; both
> show a "Software error".
> 
> Software error:
> 
> Can't escape \x{57FA}, try uri_escape_utf8() instead at
> /opt/kohaclones/elasticsearch/Koha/SearchEngine/Elasticsearch/QueryBuilder.
> pm line 217.
> 
> For help, please send mail to the webmaster ([no address given]), giving
> this error message and the time and date of the error. 
> 
> Software error:
> 
> Can't escape \x{57FA}, try uri_escape_utf8() instead at
> /home/koha/kohaes/Koha/SearchEngine/Elasticsearch/QueryBuilder.pm line 217.
> 
> For help, please send mail to the webmaster (webmaster@koha), giving this
> error message and the time and date of the error.

After I changed "uri_escape" to "uri_escape_utf8", ES can index and Opac can search Chinese in MARC21 structure.

But it does not work in a UNIMARC setup. It shows garbled characters.

koha@koha:~$ curl -XGET 'http://localhost:9200/koha_biblios/_search?pretty=1' 
{
  "took" : 132,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "koha_biblios",
      "_type" : "data",
      "_id" : "1",
      "_score" : 1.0,
      "_source":{"pubdate":"    ","subject":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"author__facet":[["赵红å"]],"_id":"1","author":[["赵红å"]],"subject__facet":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"record":[["LDR",null,null,"_","00261nam  22001213  4500"],["001",null,null,"_","1"],["005",null,null,"_","20150903102757.0"],["090"," "," ","a","1"],["100"," "," ","a","20150903d        u||y0chiy50      ea"],["200"," "," ","a","åºç£æææ³å²","c","Peter","f","赵红å"],["600"," "," ","a","åºç£æ"],["601"," "," ","a","ç¥å­¦"],["942"," "," ","c","BK"],["999"," "," ","c","1","d","1"]],"ta":"u","title":[["åºç£æææ³å²"],["Peter"]],"onloan":"0","Local-number":[["1"]]}
    }, {
      "_index" : "koha_biblios",
      "_type" : "data",
      "_id" : "2",
      "_score" : 1.0,
      "_source":{"pubdate":"    ","subject":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"author__facet":[["peter"]],"_id":"2","author":[["peter"]],"subject__facet":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"record":[["LDR",null,null,"_","00257nam  22001213  4500"],["001",null,null,"_","2"],["005",null,null,"_","20150903105155.0"],["090"," "," ","a","2"],["100"," "," ","a","20150903d        u||y0chiy50      ea"],["200"," "," ","a","åºç£æææ³å²","c","Peter","f","peter"],["600"," "," ","a","åºç£æ"],["601"," "," ","a","ç¥å­¦"],["942"," "," ","c","BK"],["999"," "," ","c","2","d","2"]],"ta":"u","title":[["åºç£æææ³å²"],["Peter"]],"onloan":"0","Local-number":[["2"]]}
    } ]
  }
}
koha@koha:~$
Comment 104 Robin Sheat 2015-09-03 04:31:18 UTC
(In reply to Peter Zhao from comment #103)
> After I changed "uri_escape" to "uri_escape_utf8", ES can index and Opac can
> search Chinese in MARC21 structure.
> 
> But it does not work in UNIMARC structure. It shows messy code.

The code paths between MARC21 and UNIMARC shouldn't differ in ways that would make a difference here. The only place where it comes up is determining that XXXy goes into "author" or whatever.
Comment 105 Peter Zhao 2015-09-03 05:03:38 UTC
(In reply to Robin Sheat from comment #104)
> (In reply to Peter Zhao from comment #103)
> > After I changed "uri_escape" to "uri_escape_utf8", ES can index and Opac can
> > search Chinese in MARC21 structure.
> > 
> > But it does not work in UNIMARC structure. It shows messy code.
> 
> The code paths between MARC21 and UNIMARC shouldn't differ in ways that
> would make a difference here. The only place where it comes up is
> determining that XXXy goes into "author" or whatever.

The following are 3 records; Chinese words show up garbled while English words show correctly, but neither can be searched through the OPAC.

koha@koha:~$ curl -XGET 'http://localhost:9200/koha_biblios/_search?pretty=1' 
{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 3,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "koha_biblios",
      "_type" : "data",
      "_id" : "1",
      "_score" : 1.0,
      "_source":{"pubdate":"    ","subject":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"author__facet":[["赵红å"]],"_id":"1","author":[["赵红å"]],"subject__facet":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"record":[["LDR",null,null,"_","00261nam  22001213  4500"],["001",null,null,"_","1"],["005",null,null,"_","20150903102757.0"],["090"," "," ","a","1"],["100"," "," ","a","20150903d        u||y0chiy50      ea"],["200"," "," ","a","åºç£æææ³å²","c","Peter","f","赵红å"],["600"," "," ","a","åºç£æ"],["601"," "," ","a","ç¥å­¦"],["942"," "," ","c","BK"],["999"," "," ","c","1","d","1"]],"ta":"u","title":[["åºç£æææ³å²"],["Peter"]],"onloan":"0","Local-number":[["1"]]}
    }, {
      "_index" : "koha_biblios",
      "_type" : "data",
      "_id" : "2",
      "_score" : 1.0,
      "_source":{"pubdate":"    ","subject":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"author__facet":[["peter"]],"_id":"2","author":[["peter"]],"subject__facet":[["åºç£æ"],["åºç£æ"],["ç¥å­¦"]],"record":[["LDR",null,null,"_","00257nam  22001213  4500"],["001",null,null,"_","2"],["005",null,null,"_","20150903105155.0"],["090"," "," ","a","2"],["100"," "," ","a","20150903d        u||y0chiy50      ea"],["200"," "," ","a","åºç£æææ³å²","c","Peter","f","peter"],["600"," "," ","a","åºç£æ"],["601"," "," ","a","ç¥å­¦"],["942"," "," ","c","BK"],["999"," "," ","c","2","d","2"]],"ta":"u","title":[["åºç£æææ³å²"],["Peter"]],"onloan":"0","Local-number":[["2"]]}
    }, {
      "_index" : "koha_biblios",
      "_type" : "data",
      "_id" : "3",
      "_score" : 1.0,
      "_source":{"pubdate":"    ","subject":[["History"],["History"],["Christianity"]],"author__facet":[["Peter"]],"_id":"3","author":[["Peter"]],"subject__facet":[["History"],["History"],["Christianity"]],"record":[["LDR",null,null,"_","00259nam  22001213  4500"],["001",null,null,"_","3"],["005",null,null,"_","20150903125354.0"],["090"," "," ","a","3"],["100"," "," ","a","20150903d        u||y0frey50      ba"],["200"," "," ","a","History of Christianity","f","Peter"],["600"," "," ","a","History"],["601"," "," ","a","Christianity"],["942"," "," ","c","BK"],["999"," "," ","c","3","d","3"]],"ta":"u","title":[["History of Christianity"]],"onloan":"0","Local-number":[["3"]]}
    } ]
  }
}
koha@koha:~$
Comment 106 Robin Sheat 2015-09-04 04:21:25 UTC
Just a heads-up that I'll be out of touch until about the 17th of September, so that's why there'll be radio silence for a while :)

Jonathan, I was hoping to pull your patches in on top of my sorting fixes; unfortunately I didn't quite get them finished in time.
Comment 107 Jonathan Druart 2015-09-04 12:37:03 UTC Comment hidden (obsolete)
Comment 108 Jonathan Druart 2015-09-04 12:37:09 UTC Comment hidden (obsolete)
Comment 109 Jonathan Druart 2015-09-04 12:38:13 UTC
Peter, the last patch should fix your issue.
Comment 110 Peter Zhao 2015-09-04 14:06:37 UTC
(In reply to Jonathan Druart from comment #109)
> Peter, the last patch should fix your issue.

Jonathan, it works well. Thanks a lot.  ES indexing and Opac works wonderful.

But in the staff interface, when I input non-Latin words (e.g. Chinese) to search (search the catalog), it shows "Software error:Cannot decode string with wide characters at /usr/lib/i386-linux-gnu/perl/5.20/Encode.pm line 215. For help, please send mail to the webmaster (webmaster@koha), giving this error message and the time and date of the error. "


If I input English words, it shows results, but when I click a record, it shows "The record you requested does not exist ()."
Comment 111 Jonathan Druart 2015-09-04 14:30:54 UTC
I haven't focused on the intranet side yet :)
I have found an error this morning: you cannot access the detail page from the result page.
Comment 112 Robin Sheat 2015-09-25 03:18:03 UTC
I've pushed up my latest changes with the sorting updates. Here's the commit message for a description of how it works:

    Bug 12478: allow more granular sorting configuration
    
    This allows sorting to be configured within a field. For example, while
    many values are included for search on author, sorting should only be
    done on the main entry values. This permits that by having a sort value,
    which can be true, false, or null. true and null are pretty much the
    same, but false means that a field isn't available for sorting on. By
    default (null), fields can be sorted on.
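
To make that concrete, here is how a set of author mappings could carry the three states (a minimal sketch only; the field names are illustrative, not the actual schema):

# Sketch only: the three sort states applied to author mappings.
my %mappings = (
    author => [
        { marc_field => '100a', sort => 1 },     # main entry: sortable
        { marc_field => '700a', sort => 0 },     # added entries: excluded from sorting
        { marc_field => '245c', sort => undef }, # undecided: sortable by default
    ],
);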
Comment 113 Robin Sheat 2015-09-25 03:52:54 UTC
Jonathan, I've added all your patches to the branch. Thanks!
Comment 114 Mirko Tietgen 2015-09-29 14:54:27 UTC
(In reply to Robin Sheat from comment #80)

> > 3/ catalyst/elastic_search is 1004 commits behind origin/master, please
> > rebase
> 
> It's just a tedious process, so I keep putting it off :) should do that soon
> though.

Now? Please. :)
Comment 115 Robin Sheat 2015-09-30 04:52:19 UTC
(In reply to Mirko Tietgen from comment #114)
> Now? Please. :)

I have a rebase done, though it's untested. I'll force push it up, which will require resetting your local copy.

Will get some testing in tomorrow to make sure not too much has blown up. Apologies Jonathan, due to Koha/Biblio.pm being introduced into master, I renamed the ES version to BiblioUtils.pm, but did it by ignoring all the rebase conflicts and just cleaning it up at the end (otherwise it was going to be a nightmare.) This means some of your commits will have vanished and been rolled into my top commit.

Note to self in case things go horribly wrong: the previous branch head is at commit 9669bbc.
Comment 116 Robin Sheat 2015-10-01 02:55:17 UTC
(In reply to Robin Sheat from comment #115)
> I have a rebase done, though it's untested.

Now it's had a quick test and some things have been fixed. It appears on the surface to be working again.
Comment 117 Jonathan Druart 2015-10-02 09:44:56 UTC
I am getting errors during the DB update process:
C4::Installer::load_sql returned the following errors while attempting to load /home/koha/src/installer/data/mysql/elasticsearch_mapping.sql:
ERROR 1062 (23000) at line 294: Duplicate entry 'biblios-marc21-490a' for key 'index_name_2'
ERROR 1062 (23000) at line 294: Duplicate entry 'biblios-marc21-490a' for key 'index_name_2'
Comment 118 Jonathan Druart 2015-10-02 09:46:07 UTC
(In reply to Jonathan Druart from comment #117)
> I am getting errors during the DB update process:
> C4::Installer::load_sql returned the following errors while attempting to
> load /home/koha/src/installer/data/mysql/elasticsearch_mapping.sql:
> ERROR 1062 (23000) at line 294: Duplicate entry 'biblios-marc21-490a' for
> key 'index_name_2'
> ERROR 1062 (23000) at line 294: Duplicate entry 'biblios-marc21-490a' for
> key 'index_name_2'

Robin, have a look at the patch "DB changes" I have attached on bug 14899, maybe it should be integrated here.
Comment 119 Jonathan Druart 2015-10-02 09:58:41 UTC
Is the authorities index supposed to work?

I get
  Use of uninitialized value $id in concatenation (.) or string at misc/search_tools/rebuild_elastic_search.pl line 173.

after a
  perl misc/search_tools/rebuild_elastic_search.pl -a -v

and on /koha-es_authorities/_search?q=*&pretty
      "_index" : "koha-es_authorities",
      "_type" : "data",
      "_id" : "7c8462c8-c813-4013-9ab7-1ca77069bbc4",
Comment 120 Jonathan Druart 2015-10-02 10:10:20 UTC
Robin, it seems that something went wrong during the rebase process.
I have tried the updated branch without any success:

$ git reset --hard catalyst/elastic_search
$ installer/data/mysql/updatedatabase.pl # see error in previous comment, but I should note that the following is still true if I remove the unique key
$ perl misc/search_tools/rebuild_elastic_search.pl -b -v # no error

I can now only search for "*", other patterns return nothing, and 
/koha-es_biblios/_search?q=*&pretty shows entries like:
{
      "_index" : "koha-es_biblios",
      "_type" : "data",
      "_id" : "11",
      "_score" : 1.0,
      "_source":{"record":[["LDR",null,null,"_","01199cam a22003134a 4500"],["001",null,null,"_","12011929"],["005",null,null,"_","20140507153623.0"],["008",null,null,"_","000518s2000    ch a     b    001 0 eng  "],["010"," "," ","a","   00041664 "],["020"," "," ","a","1565924193"],["040"," "," ","a","DLC","c","DLC","d","DLC"],["042"," "," ","a","pcc"],["050","0","0","a","QA76.73.P22","b","G84 2000"],["082","0","0","a","005.2/762","2","21"],["100","1"," ","a","Guelich, Scott.","9","8"],["245","1","0","a","CGI programming with Perl /","c","Scott Guelich, Shishir Gundavaram and Gunther Birznieks."],["250"," "," ","a","2nd ed."],["260"," "," ","a","Beijing ;","a","Cambridge, Mass. :","b","O'Reilly,","c","2000."],["300"," "," ","a","xv, 451 p.","b","ill.","c","24 cm."],["504"," "," ","a","Includes bibliographical references (p. 403-406) and index."],["650"," ","0","a","Perl (Computer program language)","9","35"],["650"," ","0","a","CGI (Computer network protocol)","9","37"],["650"," ","0","a","Internet programming.","9","7"],["700","1"," ","a","Birznieks, Gunther.","9","38"],["856","4","2","3","Publisher description","u","http://www.loc.gov/catdir/enhancements/fy0715/00041664-d.html"],["906"," "," ","a","7","b","cbc","c","orignew","d","1","e","ocip","f","20","g","y-gencatlg"],["942"," "," ","2","ddc","c","BK"],["955"," "," ","a","to ASCD pc16 05-18-00; jf05 (desc.) 05/18/00 ; jf11 to sl 5-19-00; jf12 to Dewey 05-23-00;aa03 5-24-00;CIP ver jf0504/10/01; jf12 to BCCD 04-11-01"],["952"," "," ","0","0","1","0","4","0","6","_","7","0","8","GEN","9","32","a","ALPHA","b","ALPHA","c","GEN","d","2014-09-04","p","39999000000498","r","2014-09-04","w","2014-09-04","y","BK"],["999"," "," ","c","11","d","11"]],"_id":"11"}
    }

There is something wrong when the tables are populated.
Comment 121 Robin Sheat 2015-10-05 03:06:51 UTC
(In reply to Jonathan Druart from comment #119)
> Is the authorities index supposed to work?
> 
> I get
>   Use of uninitialized value $id in concatenation (.) or string at
> misc/search_tools/rebuild_elastic_search.pl line 173.

That shouldn't happen, I think there was something underlying that changed in the rebase. Lemme sort that out...
Comment 122 Robin Sheat 2015-10-05 03:37:57 UTC
Fixed the authorities thing, there was an API change where what was 'idnumber' had to become 'id', which is now done.

(In reply to Jonathan Druart from comment #120)
> I can now only search for "*", other patterns return nothing, and 
> /koha-es_biblios/_search?q=*&pretty shows entries like:

That's weird. Are you sure you reloaded the tables? I just have, and end up with:

      "_source":{"title__suggestion":{"input":[[["Harmonization of international accounting standards"]]]},"record":[["LDR",null,null,"_","01476pam a2200205 a 4500"],["110"," "," ","a","International Capital Markets Group"],["245"," "," ","a","Harmonization of international accounting standards","c","prepared by the International Federation of Accountants with the assistance of Federation Internationale des Bourses de Valeurs and the International Bar Association Section on Business Law"],["260"," "," ","a","London, United Kingdom","b","International Capital Markets Group","c","?"],["300"," "," ","a","67 p.","c","23 cm."],["650"," "," ","a","INTERNATIONAL ACCOUNTING STANDARDS"],["650"," "," ","a","HARMONI100 33019  100 33019    0     0   388k      0 --:--:-- --:--:-- --:--:--  447k
TION"],["650"," "," ","a","ACCOUNTING STANDARDS"],["650"," "," ","a","ACCOUNTING POLICIES"],["653"," "," ","a","INTERNATIONAL ACCOUNTING STANDARDS"],["505"," "," ","a","Introduction ; Accounting standards around the world - why are they different? ; Accounting standard setting in five countries ; What are the major differences? ; The costs of disharmony ; The barriers to harmonisation ; Toward global harmonisation ; International accounting standards ; Appendices"],["520"," "," ","a","The report examines the reasons for differences in accounting standards around the world, and, in the context of the pressures for global harmonisation of standards, examines the factors that have to be overcome if global harmonisation is to be achieved. It also examines what form global harmonisation might take, and considers the structure and organisation of accounting standard setting in this context."],["942"," "," ","c","Book"],["024"," "," ","a","16"],["900"," "," ","a","AJ9304"],["952"," "," ","0","0","1","0","2","ddc","4","0","6","100_600000000000000_HAR","7","0","9","16","a","WLGTN","b","WLGTN","d","2011-12-19","o","100.6 HAR","p","CA002438","r","2011-12-19","w","2011-12-19","y","BOOK"],["999"," "," ","c","16","d","16"]],"_id":"16","author__facet":[["International Capital Markets Group"]],"title":[["Harmonization of international accounting standards"]],"acqdate":[["2011-12-19"]],"Local-number":[["16"]],"subject":[["INTERNATIONAL ACCOUNTING STANDARDS"],["HARMONISATION"],["ACCOUNTING STANDARDS"],["ACCOUNTING POLICIES"],["INTERNATIONAL ACCOUNTING STANDARDS"]],"publisher":[["International Capital Markets Group"]],"place":[["London, United Kingdom"]],"subject__suggestion":{"input":[[["INTERNATIONAL ACCOUNTING STANDARDS"]],[["HARMONISATION"]],[["ACCOUNTING STANDARDS"]],[["ACCOUNTING POLICIES"]],[["INTERNATIONAL ACCOUNTING STANDARDS"]]]},"homebranch":[["WLGTN"]],"itype__facet":[["Book"],["BOOK"]],"local-classification__suggestion":{"input":[[["100.6 HAR"]]]},"author__suggestion":{"input":[[["International Capital Markets Group"]]]},"itemnumber":[["16"]],"copydate":[["?"]],"copydate__facet":[["?"]],"place__facet":[["London, United Kingdom"]],"author":[["International Capital Markets Group"],["prepared by the International Federation of Accountants with the assistance of Federation Internationale des Bourses de Valeurs and the International Bar Association Section on Business Law"]],"onloan":"0","subject__facet":[["INTERNATIONAL ACCOUNTING STANDARDS"],["HARMONISATION"],["ACCOUNTING STANDARDS"],["ACCOUNTING POLICIES"],["INTERNATIONAL ACCOUNTING STANDARDS"]],"publisher__facet":[["International Capital Markets Group"]],"local-classification":[["100.6 HAR"]],"author__sort":[["International Capital Markets Group"]],"holdingbranch":[["WLGTN"]],"itype":[["Book"],["BOOK"]],"homebranch__facet":[["WLGTN"]]}

Which is what I would have expected.

(sorry about the wall of text there)
Comment 123 Jonathan Druart 2015-10-05 09:37:07 UTC
Robin, I just retried now and everything works fine.
Not sure where it came from, but it looks to be fixed!

Just got:

Upgrade to XXX done (Bug 12478 - set up elasticsearch tables)
DBD::mysql::db do failed: Can't DROP 'index_name_2'; check that column/key exists at installer/data/mysql/updatedatabase.pl line 10967.DBD::mysql::db do failed: Duplicate column name 'label' at installer/data/mysql/updatedatabase.pl line 10970.
Upgrade to XXX done (Bug 12478/14899 - DB Changes to the elasticsearch tables)

when updating the DB.
You have updated the structure in the elasticsearch_mapping.sql file, so no need to remove the unique key and add the label field in the last updatedatabase entry.
Not a big deal anyway :)
Comment 124 Jonathan Druart 2015-10-05 10:24:51 UTC
I have found an encoding bug on the "Show more" link, it will be fixed by bug 14955. Already exists on master.
Comment 125 Jonathan Druart 2015-10-05 13:38:40 UTC
Robin,
Another set of patches is coming.
1/ Reintroduce the SearchEngine pref
2/ Fix to correct the number of facets displayed (again)
3/ Remove an unnecessary "&limit=" (minor)
4/ Replace the Koha::ItemType[s] classes you have introduced with the Koha::Object[s] approach
5/ Display description instead of code for locations in facets.

About 4:
I know you are not agreeing completely with Koha::Object[s] and how it has been implemented.
But I think we need to have homogeneous code, and all new classes in the Koha namespace should follow the same direction.
Please have a look at bug 14828 (and its "see also" bugs), which introduces the Koha::ItemType[s] classes as well and rewrites the admin/itemtypes.pl script, to see how easy it is to use.
To avoid future conflicts, and so we don't race against each other, we should agree on these kinds of decisions now.
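
For reference, the calling convention looks roughly like this (a sketch based on bug 14828's Koha::ItemTypes; the accessor names come from the itemtypes table and may differ in the final patches):

use Koha::ItemTypes;

# Fetch a single item type by its primary key.
my $itemtype = Koha::ItemTypes->find('BK');
print $itemtype->description, "\n" if $itemtype;

# search() returns an iterable set of Koha::ItemType objects.
my $itemtypes = Koha::ItemTypes->search({ notforloan => 1 });
while ( my $it = $itemtypes->next ) {
    print $it->itemtype, "\n";
}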
Comment 126 Jonathan Druart 2015-10-05 13:42:01 UTC Comment hidden (obsolete)
Comment 127 Jonathan Druart 2015-10-05 13:42:06 UTC Comment hidden (obsolete)
Comment 128 Jonathan Druart 2015-10-05 13:42:12 UTC Comment hidden (obsolete)
Comment 129 Jonathan Druart 2015-10-05 13:42:17 UTC Comment hidden (obsolete)
Comment 130 Jonathan Druart 2015-10-05 13:42:22 UTC Comment hidden (obsolete)
Comment 131 Jonathan Druart 2015-10-05 13:42:28 UTC Comment hidden (obsolete)
Comment 132 Jonathan Druart 2015-10-05 13:42:33 UTC Comment hidden (obsolete)
Comment 133 Jonathan Druart 2015-10-05 13:42:39 UTC Comment hidden (obsolete)
Comment 134 Jonathan Druart 2015-10-05 14:39:25 UTC Comment hidden (obsolete)
Comment 135 Jonathan Druart 2015-10-05 14:40:43 UTC
> (In reply to Jonathan Druart from comment #90)
> > > Something else, there is a sort issue in the facets:
> > > 
> > > [Some entries]
> > >  Zeitoun, Ariel,
> > >  Ó Cadhain, Máirtín.
> > >  Ślez, Ts..
> > > 
> > > Ó should be after O, not after Z.
> > 
> > Line 573 of opac/opac-search.pl does a sort with cmp, which isn't very
> > unicode aware. I'm putting that in the not-my-problem bin as it's in
> > upstream :)
> 
> Yes, and, IMO, there is a design issue here.
> We should not reuse the pl and tt files.
> How do you plan to add features that Zebra cannot provide? :)
> Not sure it will be maintainable to add conditions (if SE == 'ES') in the TT.
> For instance, for the facets, we would like to display them as ES retrieve
> them (order by most used), and add the number of occurrences.

In the last patch I suggest displaying the facet terms in the same order as ES builds them (i.e. most used first).
Comment 136 Jonathan Druart 2015-10-05 14:46:22 UTC Comment hidden (obsolete)
Comment 137 Jonathan Druart 2015-10-05 14:48:41 UTC Comment hidden (obsolete)
Comment 138 Jonathan Druart 2015-10-05 16:16:40 UTC Comment hidden (obsolete)
Comment 139 Robin Sheat 2015-10-06 04:18:03 UTC
(In reply to Jonathan Druart from comment #123)
> You have updated the structure in the elasticsearch_mapping.sql file, so no
> need to remove the unique key and add the label field in the last
> updatedatabase entry.
> Not a big deal anyway :)

Oh, yes. That should have been removed.

(In reply to Jonathan Druart from comment #125)
> I know you are not agreeing completely with Koha::Object[s] and how it has
> been implemented.
> But I think we need to have homogeneous code, and all new classes in the
> Koha namespace should follow the same direction.
> Please have a look at bug 14828 (and its "see also" bugs), which introduces
> the Koha::ItemType[s] classes as well and rewrites the admin/itemtypes.pl
> script, to see how easy it is to use.
> To avoid future conflicts, and so we don't race against each other, we
> should agree on these kinds of decisions now.

Sure, that's fine. Consistency is generally better than reinventing everything.

(In reply to Jonathan Druart from comment #135)
> In the last patch I suggest displaying the facet terms in the same order as
> ES builds them (i.e. most used first).

While I've added this, in general I'm avoiding changing behaviour too much. We can always remove it later if it's decided we don't want it (and personally, I think it'll be better).

All these patches are added. Thanks Jonathan!

FYI, I'm looking into the notforloan stuff at the moment.
Comment 140 Katrin Fischer 2015-10-06 05:41:11 UTC
sorting facets by number of uses sounds better to me than alphabetically - maybe a good change :)
Comment 141 Jonathan Druart 2015-10-06 07:50:48 UTC
Robin,
Thanks for your responsiveness!
Yesterday I read a bit of the documentation about facets, and on the Terms Facet page of the ES docs (
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-facets-terms-facet.html) there is a big warning at the top: "Warning: Facets are deprecated and will be removed in a future release. You are encouraged to migrate to aggregations instead."

Did you have time to look into the terms aggregations?
Or maybe it is what you are already using?
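
For reference, the aggregation equivalent of our facets would be something like this (an untested sketch using Search::Elasticsearch directly against the koha_biblios index, bypassing the Catmandu layer):

use Search::Elasticsearch;

my $es = Search::Elasticsearch->new( nodes => ['localhost:9200'] );

# Ask for the five most-used subject facet values, without fetching hits.
my $results = $es->search(
    index => 'koha_biblios',
    body  => {
        size => 0,
        aggs => {
            subject => {
                terms => { field => 'subject__facet', size => 5 },
            },
        },
    },
);

# Buckets come back ordered by document count, most used first.
for my $bucket ( @{ $results->{aggregations}{subject}{buckets} } ) {
    printf "%s (%d)\n", $bucket->{key}, $bucket->{doc_count};
}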
Comment 142 Jonathan Druart 2015-10-06 07:52:48 UTC
On a unimarc installation, there are no mappings at all for authorities.
If an unimarc user follows this bug report...
Comment 143 Jonathan Druart 2015-10-06 09:59:39 UTC
Robin,
Maybe I have not drunk enough tea this morning but...
I am trying to improve the mappings area to have a consistent interface to manage them.
The idea is to 1) move the elasticsearch_mapping.sql to an elastic_mapping.json file (easier to modify and read), 2) provide methods to serialize/unserialize mappings and then 3) introduce a backup/import/reset mappings feature and finally 4) make it easier to evolve the mappings to get a good basis to use ES.

I have managed to create a json file from the sql file, the structure is something like:

{
  biblio => {
    title => {
      label    => 'Title',
      type     => 'string',
      mappings => [
        {
          suggestible => 1,
          facet       => 1,
          marc21      => '245a',
          unimarc     => '200a',
          normarc     => '245a',
        },
      ],
    },
  },
}

And I have some questions :)
- Do you agree with the idea?
- Don't you think the index_name should be a column of the search_fields table?
- Some of the fields don't have a type, should we assign "string" as the default value?
- wordings: 'sortable' and 'facetable' sounds more appropriate than 'sort' and 'facet'
- (/me is clearing his throat) I think that all the mappings of a field should be removed if the field is removed. In other words, there is a 1-n relationship between search_field and search_marc_map, which means that the join table (search_marc_to_field) is not needed and we could simplify the structure by removing it.

I am going to wait for an answer before starting anything :)
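
To be clearer about point 2, the serialize/unserialize methods I have in mind are just a thin wrapper around the JSON module, something like this (a sketch; the function names are invented):

use JSON;

my $json = JSON->new->utf8->pretty->canonical;

# Write the mapping structure out to a file.
sub serialize_mappings {
    my ( $mappings, $file ) = @_;
    open my $fh, '>', $file or die "Cannot write $file: $!";
    print {$fh} $json->encode($mappings);
    close $fh;
}

# Read a mappings file back into a Perl structure.
sub unserialize_mappings {
    my ($file) = @_;
    open my $fh, '<', $file or die "Cannot read $file: $!";
    local $/;
    return $json->decode(<$fh>);
}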
Comment 144 Jonathan Druart 2015-10-06 10:01:29 UTC
Ha, something else: the biblionumber should be a field of the biblios index.
It will fix a couple of issues (clicking on a result in the intranet, and the non-XSLT views), and certainly some others; for instance, I get a lot of warnings in the logs:
opac-search.pl: GetCOinSBiblio called with undefined record at /home/koha/src/opac/opac-search.pl line 636.
opac-search.pl: GetMarcBiblio called with undefined biblionumber at /home/koha/src/opac/opac-search.pl line 635.
Comment 145 Robin Sheat 2015-10-07 06:03:35 UTC
(In reply to Jonathan Druart from comment #141)
> Did you have time to look into the terms aggregations?
> Or maybe it is what you are already using?

Yeah, I'm aware of that. Unfortunately, it only became deprecated after I'd implemented it, and I haven't got around to going back and reworking it. I don't think it'll be a big change, it's possible it's sufficient to just switch the ES type we're using. Anyway, I've put it on the "deal with later" pile.

(In reply to Jonathan Druart from comment #142)
> On a unimarc installation, there are no mappings at all for authorities.
> If an unimarc user follows this bug report...

Working out the MARC21 mappings was tedious enough :)

(In reply to Jonathan Druart from comment #143)
> Maybe I have not drunk enough tea this morning but...
> I am trying to improve the mappings area to have a consistent interface to
> manage them.
> The idea is to 1) move the elasticsearch_mapping.sql to an
> elastic_mapping.json file (easier to modify and read), 2) provide methods to
> serialize/unserialize mappings and then 3) introduce a backup/import/reset
> mappings feature and finally 4) make it easier to evolve the mappings to get
> a good basis to use ES.
> 
> I have managed to create a json file from the sql file, the structure is
> something like:
> 
> {
>   biblio => {
>     title => {
>       label    => 'Title',
>       type     => 'string',
>       mappings => [
>         {
>           suggestible => 1,
>           facet       => 1,
>           marc21      => '245a',
>           unimarc     => '200a',
>           normarc     => '245a',
>         },
>       ],
>     },
>   },
> }
> 
> And I have some questions :)
> - Do you agree with the idea?

Well... I don't know. Though I'm not a fan of that structure really, as it's not ideal, and is a bit more limited. Also, in this case you can't have more than one title, but that's not really the big issue. Mostly it's just a very denormalised view of the data. Better for manually editing an SQL file, but not really so good for a computer to use. This is why the SQL file has the data in that form and then normalises it in the database. 

> - Don't you think the index_name should be a column of the search_fields
> table?

Yes, it should be kept with search_field.name as it's effectively more information needed to describe where something gets stored.

> - Some of the fields don't have a type, should we assign "string" as the
> default value?

I'd like to not just because that implies that they've consciously been made strings. Ideally as time goes on, people will decide that this is a date, and this is a ... IP address or something, and add those as types while putting the logic in to handle it. So, if a type is unspecified, then it gets treated like a string by default, but it really means "we haven't decided yet."

> - wordings: 'sortable' and 'facetable' sounds more appropriate than 'sort'
> and 'facet'

hmm. I don't really mind either way. My thinking was that "facet" and "sort" were easier to type. But I broke the consistency because "suggest" seemed weird. I don't object to any of them changing.

> - (/me is clearing his throat) I think that all the mappings of a field
> should be removed if the field is removed. In other words, there is a 1-n
> relationship between search_field and search_marc_map, which means that the
> join table (search_marc_to_field) is not needed and we could simplify the
> structure by removing it.

I had a good reason for doing many-to-many. Let me see if I can remember it...

Oh wait, I documented it:

-- This joins the two search tables together. We can have any combination:
-- one marc field could have many search fields (maybe you want one value
-- to go to 'author' and 'corporate-author) and many marc fields could go
-- to one search field (e.g. all the various author fields going into
-- 'author'.)

If you remove the many-to-many relationship then you end up with duplication/denormalisation. My thinking behind the UI is that you might have, say, a list of all the fields and under them, a set of all the MARC fields that map to it. Or perhaps the inverse. I hadn't really thought about it too much, but a properly normalised relational structure means that we have the maximum amount of flexibility. The only improvement to the structure in this respect is that the sort, facet, suggest things should really be at the join level. I have a feeling I considered that, then decided it risked crossing the line into too fiddly, but it would get more power out of it. At the moment you'd have to duplicate the MARC details if you want different values for those three, which isn't ideal.

(In reply to Jonathan Druart from comment #144)
> Ha, something else: the biblionumber should be a field of the biblios index.

Oh, it's embedded as the ID on the ES record. There's no point duplicating it as its own field, but it's reasonable to copy it out as a post-process step and put it into a biblionumber field. We don't have a reliable
Comment 146 Robin Sheat 2015-10-07 06:04:58 UTC
(In reply to Robin Sheat from comment #145)
> We don't have a reliable

I was going to say we don't have a reliable biblionumber source in the marc, but we do, as I force it to be correct. But that's not relevant; we should just use the ID.
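
A post-processing step along these lines would do it (a sketch only; the method name and calling convention are invented for illustration):

# Copy each record's id into its ES document as a biblionumber field,
# so that searches like biblionumber:42 work.
sub _add_biblionumber_field {
    my ( $self, $biblionums, $docs ) = @_;
    for my $i ( 0 .. $#$biblionums ) {
        # Store it as an array, matching the shape of the other fields.
        $docs->[$i]{biblionumber} = [ $biblionums->[$i] ];
    }
}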
Comment 147 Jonathan Druart 2015-10-07 07:50:50 UTC
(In reply to Robin Sheat from comment #145)
> (In reply to Jonathan Druart from comment #143)
> > Maybe I have not drunk enough tea this morning but...
> > I am trying to improve the mappings area to have a consistent interface to
> > manage them.
> > The idea is to 1) move the elasticsearch_mapping.sql to an
> > elastic_mapping.json file (easier to modify and read), 2) provide methods to
> > serialize/unserialize mappings and then 3) introduce a backup/import/reset
> > mappings feature and finally 4) make it easier to evolve the mappings to get
> > a good basis to use ES.
> > 
> > I have managed to create a json file from the sql file, the structure is
> > something like:
> > 
> > {
> >   biblio => {
> >     title => {
> >       label    => 'Title',
> >       type     => 'string',
> >       mappings => [
> >         {
> >           suggestible => 1,
> >           facet       => 1,
> >           marc21      => '245a',
> >           unimarc     => '200a',
> >           normarc     => '245a',
> >         },
> >       ],
> >     },
> >   },
> > }
> > 
> > And I have some questions :)
> > - Do you agree with the idea?
> 
> Well... I don't know. Though I'm not a fan of that structure really, as it's
> not ideal, and is a bit more limited. Also, in this case you can't have more
> than one title, but that's not really the big issue. Mostly it's just a very
> denormalised view of the data. Better for manually editing an SQL file, but
> not really so good for a computer to use. This is why the SQL file has the
> data in that form and then normalises it in the database. 

I don't understand the problem with the structure, you could have several mappings (it's an arrayref of hashrefs).
With this structure I could insert exactly the same data in the tables (except if I missed something...).

> > - Some of the fields don't have a type, should we assign "string" as the
> > default value?
> 
> I'd like to not just because that implies that they've consciously been made
> strings. Ideally as time goes on, people will decide that this is a date,
> and this is a ... IP address or something, and add those as types while
> putting the logic in to handle it. So, if a type is unspecified, then it
> gets treated like a string by default, but it really means "we haven't
> decided yet."

So todo later :)

> > - wordings: 'sortable' and 'facetable' sounds more appropriate than 'sort'
> > and 'facet'
> 
> hmm. I don't really mind either way. My thinking was that "facet" and "sort"
> were easier to type. But I broke the consistency because "suggest" seemed
> weird. I don't object to any of them changing.

Not a big deal but better sooner than later.

> > - (/me is clearing his throat) I think that all the mappings of a field
> > should be removed if the field is removed. In other words, there is a 1-n
> > relationship between search_field and search_marc_map, which means that the
> > join table (search_marc_to_field) is not needed and we could simplify the
> > structure by removing it.
> 
> I had a good reason for doing many-to-many. Let me see if I can remember
> it...
> 
> Oh wait, I documented it:
> 
> -- This joins the two search tables together. We can have any combination:
> -- one marc field could have many search fields (maybe you want one value
> -- to go to 'author' and 'corporate-author) and many marc fields could go
> -- to one search field (e.g. all the various author fields going into
> -- 'author'.)
> 
> If you remove the many-to-many relationship then you end up with
> duplication/denormalisation. My thinking behind the UI is that you might
> have, say, a list of all the fields and under them, a set of all the MARC
> fields that map to it. Or perhaps the inverse. I hadn't really thought about
> it too much, but a properly normalised relational structure means that we
> have the maximum amount of flexibility. The only improvement to the
> structure in this respect is that the sort, facet, suggest things should
> really be at the join level. I have a feeling I considered that, then
> decided it risked crossing the line into too fiddly, but it would get more
> power out of it. At the moment you'd have to duplicate the MARC details if
> you want different values for those three which isn't ideal.

Yes, it's related to the index_name unique key discussion we had last week.
We should either move sort, facet, suggest to the join table or remove it, but not keep the current structure.

I am not sure about the gain of having the three tables; we could still know which fields are mapped to a given MARC field, or the inverse (same marc_field values).
Anyway, the current structure forces us to duplicate the MARC details, because of the sort, facet, suggest, which could differ.

> (In reply to Jonathan Druart from comment #144)
> > Ha, something else: the biblionumber should be a field of the biblios index.
> 
> Oh, it's embedded as the ID on the ES record. There's no point duplicating
> it as its own field, but it's reasonable to copy it out as a post-process
> step and put it into a biblionumber field. We don't have a reliable

Indeed we could add it later, but don't you think it's worth letting librarians (and devs) search for something like "biblionumber:42", which is a more familiar term than "ID"?
Comment 148 Robin Sheat 2015-10-08 01:52:50 UTC
(In reply to Jonathan Druart from comment #147)
> I don't understand the problem with the structure, you could have several
> mappings (it's an arrayref of hashrefs).

Oh, yes, you're right. I missed that bit.

> With this structure I could insert exactly the same data in the tables
> (except if I missed something...).

It's just that it's not a natural representation from a database theory point of view. Why are the various MARC forms lumped together like that? Why should a schema change be necessary when (god forbid) a new flavour is added? Actually, that could be important as if we want to add support for different things into ES, say importing a feed from a journal provider or whatever, you could use a flavour to mark how the source is mapped into elasticsearch. Additionally, it promotes redundancy as you are forced to repeat things to have all the required combinations, which always causes an icky feeling :)

Essentially, having a proper relational many-to-many is the most expressive and flexible system that is feasible.

I think I'm OK with moving the attributes (sortable, etc) into the join though. It probably makes the most sense.
 
> Yes, it's related to the index_name unique key discussion we had last week.
> We should either move sort, facet, suggest to the join table or remove it,
> but not keep the current structure.
> 
> I am not sure about the gain of having the three tables; we could still know
> which fields are mapped to a given MARC field, or the inverse (same
> marc_field values).
> Anyway, the current structure forces us to duplicate the MARC details,
> because of the sort, facet, suggest, which could differ.

Right, but removing the M-to-M still means we have to duplicate things. And that'll be why the index was there but problematic, because the attributes are in the wrong place. The index will (I think) be fine if they get removed. 
 
> Indeed we could add it later, but don't you think it's worth letting
> librarians (and devs) search for something like "biblionumber:42", which is
> a more familiar term than "ID"?

Oh, good point. Yes, that should be done.
Comment 149 Robin Sheat 2015-10-08 04:16:34 UTC
(In reply to Robin Sheat from comment #91)
> > The item is a "Visual Materials" which has a itemtype.notforloan flag set.
> 
> Good to know, I've not tested that case yet.

Well, I spent far too long staring at that, only to find it's been working all along. You can set it on the itemtype and it should have always worked (as that's done in post-processing, which is the same between zebra and es), and all it needed to work at the item level was a mapping from the notforloan field into es to be added.
Comment 150 Jonathan Druart 2015-10-08 07:41:12 UTC
(In reply to Robin Sheat from comment #148)
> (In reply to Jonathan Druart from comment #147)
> > I don't understand the problem with the structure, you could have several
> > mappings (it's an arrayref of hashrefs).
> 
> Oh, yes, you're right. I missed that bit.
> 
> > With this structure I could insert exactly the same data in the tables
> > (except if I missed something...).
> 
> It's just that it's not a natural representation from a database theory
> point of view. Why are the various MARC forms lumped together like that? Why
> should a schema change be necessary when (god forbid) a new flavour is
> added? Actually, that could be important as if we want to add support for
> different things into ES, say importing a feed from a journal provider or
> whatever, you could use a flavour to mark how the source is mapped into
> elasticsearch. Additionally, it promotes redundancy as you are forced to
> repeat things to have all the required combinations, which always causes an
> icky feeling :)

Erk, of course!
I did not c/p the correct structure I have in my document!
It was:
 {
   biblio => {
     title => { # name
       label    => 'Title',
       type     => 'string',
       mappings => [
         {
           suggestible => 1,
           facet       => 1,
           sort        => 0,
           marc_type   => 'marc21',
           marc_field  => '245a',
         },
         {
           suggestible => 1,
           facet       => 1,
           sort        => 0,
           marc_type   => 'unimarc',
           marc_field  => '200a',
         },
         # and normarc of course :)
       ],
     },
   },
 }
Comment 151 Jonathan Druart 2015-10-08 16:06:31 UTC
Found something else:
In the sql file you have Heading-main vs Heading-Main. record-source vs Record-Source.
For the first one, it's a typo (both for authorities).
For the second one (it's a typo too, but), and for Local-number, you have the search_field.name which will be mapped with biblios and authorities fields.

Does it make sense in your mind? In mine it's a bit weird (but maybe I need a pint).
Comment 152 Robin Sheat 2015-10-09 02:18:44 UTC
(In reply to Jonathan Druart from comment #150)
> Erk, of course!
> I did not c/p the correct structure I have in my document!

OK, yeah, that's a fair bit better :) But you still have some duplication; for example, when two search fields come from one marc field, you'll have to repeat the marc field. I don't know if that's good or bad really, but from a database normalisation point of view, it's bad (not seriously so, though).

But between what's there now (including having moved the attributes appropriately), and what you're suggesting, I don't have particularly strong feelings. If you think it'll make other parts easier, then go with it. Changing the schema is pretty easy really, there's only a few places it touches.

(In reply to Jonathan Druart from comment #151)
> Found something else:
> In the sql file you have Heading-main vs Heading-Main. record-source vs
> Record-Source.
> For the first one, it's a typo (both for authorities).
> For the second one (it's a typo too, but), and for Local-number, you have
> the search_field.name which will be mapped with biblios and authorities
> fields.
> 
> Does it make sense in your mind? In mine it's a bit weird (but maybe I need
> a pint).

It's OK to have the same field for authorities and biblios, they're totally different indices. As for the capitalisation stuff, there was a period when I wanted everything to be lower case before I discovered that would cause many things to break. So there may be some leftovers from that. Or just typos. What really ought to happen is a big go-over where the mapping is generated from scratch from the zebra files. Probably in an automated fashion.

Mostly these mappings are there to get something functional out of it, it'll be a bit more work to make them actually correct :)
Comment 153 Robin Sheat 2015-10-12 03:43:04 UTC
I've created a zebra-running version to help shake out the bugs that have accumulated in there. It's at:

http://zebra.koha.catalystdemo.net.nz/

and is exactly the same code as the elasticsearch version, but with the SearchEngine preference set to 'Zebra'. It isn't working right now, I've probably broken something.

That's what I'll be fixing for the next little while.
Comment 154 Jonathan Druart 2015-10-12 16:17:38 UTC Comment hidden (obsolete)
Comment 155 Jonathan Druart 2015-10-12 16:17:46 UTC Comment hidden (obsolete)
Comment 156 Jonathan Druart 2015-10-12 16:18:02 UTC Comment hidden (obsolete)
Comment 157 Jonathan Druart 2015-10-12 16:18:09 UTC Comment hidden (obsolete)
Comment 158 Jonathan Druart 2015-10-12 16:18:18 UTC Comment hidden (obsolete)
Comment 159 Jonathan Druart 2015-10-12 16:18:26 UTC Comment hidden (obsolete)
Comment 160 Jonathan Druart 2015-10-12 16:18:42 UTC Comment hidden (obsolete)
Comment 161 Jonathan Druart 2015-10-12 16:20:14 UTC Comment hidden (obsolete)
Comment 162 Jonathan Druart 2015-10-12 16:23:44 UTC
Hi Robin,
I think I did it! :)
The last patch set moves the mapping attributes to the join table and provides a yaml file to manage the default mappings.
I will test it a bit more tomorrow, to be sure I have not introduced regressions.
Let me know if it goes in the direction of what you had in mind!

I am also attaching patches on bug 14899; the interface is now much more consistent.
Comment 163 Jonathan Druart 2015-10-12 16:24:22 UTC
well, s/much more/a bit more
Comment 164 Robin Sheat 2015-10-13 01:56:37 UTC
Jonathan, you're not putting correct copyright headers into the files you're creating/modifying. You need to.
Comment 165 Robin Sheat 2015-10-13 03:32:05 UTC
(In reply to Jonathan Druart from comment #162)
> Let me know if it goes in the direction of what you had in mind!

Just reading the patches, this all looks good to me. I'll pull them in (excluding the one you don't want pulled :)

My only thought is that doing scripted (e.g. in vim) updates of the yaml will be harder, as it's not each entry on one line like the SQL. Perhaps the best way to deal with that is something that will dump the database out to YAML, so you can do batch changes in SQL and then just re-export the YAML.
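
Something like this would do the dump (a sketch only: the connection details and join column names here are guesses, not necessarily the real schema):

use DBI;
use YAML::XS;

my $dbh = DBI->connect( 'dbi:mysql:koha', 'koha', 'password',
    { RaiseError => 1 } );

# Flatten the three mapping tables into one row per field/MARC pair.
my $rows = $dbh->selectall_arrayref( q{
    SELECT f.name, f.label, f.type, m.marc_type, m.marc_field
      FROM search_field f
      JOIN search_marc_to_field j ON j.search_field_id = f.id
      JOIN search_marc_map m ON m.id = j.search_marc_map_id
}, { Slice => {} } );

print YAML::XS::Dump($rows);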

> I am also attaching patches on bug 14899, the interface is now much more
> consistent.

Brilliant, I need to try that out some time soon.

I'm kinda thinking of moving this somewhere else, maybe gitlab, so that you don't have to wait for me to merge things in for you. What do you think?
Comment 166 Robin Sheat 2015-10-13 04:19:49 UTC
Commit cd4905c2 (Display facet terms ordered by number of occurrences) introduces a bug when searching with zebra (i.e. it makes it break.) I will roll it back until we get a better solution. Feel free to unroll it back if you get the chance to make one.
Comment 167 Jonathan Druart 2015-10-13 09:53:25 UTC Comment hidden (obsolete)
Comment 168 Jonathan Druart 2015-10-13 09:54:11 UTC
(In reply to Robin Sheat from comment #166)
> Commit cd4905c2 (Display facet terms ordered by number of occurrences)
> introduces a bug when searching with zebra (i.e. it makes it break.) I will
> roll it back until we get a better solution. Feel free to unroll it back if
> you get the chance to make one.

Oops, sorry about that!
The last patch should work now.
Comment 169 Robin Sheat 2015-10-14 01:07:55 UTC
(In reply to Jonathan Druart from comment #168)
> Oops, sorry about that!
> The last patch should work now.

It does, thanks!
Comment 171 kohayu 2015-10-16 07:41:18 UTC
Dear folk, 

I git clone git://git.catalyst.net.nz/koha.git . 

os - debian 7.9 wheezy , x86_64 Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u4 x86_64 GNU/Linux

koha version - 	3.21.00.030

perl - 5.014002

mysql - mysql Ver 14.14 Distrib 5.5.44, for debian-linux-gnu (x86_64) using readline 6.2 

Elasticsearch version: 1.7.2

JVM name: Java HotSpot(TM) 64-Bit Server VM
JVM vendor: Oracle Corporation
JVM version: 25.60-b23
Java version: 1.8.0_60

I use bulkmarcimport.pl to import MARC data,
but after importing 1,000 records each time, this occurred:
 
"................[NoNodes] ** No nodes are available: [http://127.0.0.1:9200], called from sub Search::Elasticsearch::Role::Client::Direct::__ANON__ at /usr/local/share/perl/5.14.2/Catmandu/Store/ElasticSearch.pm line 61.root@debian:/usr/share/koha/bin/migration_tools#"

This job stopped. 

Has anyone seen the same condition?

BR. 

longshan
Comment 172 Jonathan Druart 2015-10-16 08:27:09 UTC
(In reply to kohayu from comment #171)
> 
> I use bulkmarcimport.pl to import MARC data,
> but after importing 1,000 records each time, this occurred:
>  
> "................[NoNodes] ** No nodes are available:
> [http://127.0.0.1:9200], called from sub
> Search::Elasticsearch::Role::Client::Direct::__ANON__ at
> /usr/local/share/perl/5.14.2/Catmandu/Store/ElasticSearch.pm line
> 61.root@debian:/usr/share/koha/bin/migration_tools#"
> 
> This job stopped. 

See comment 62.
Comment 173 kohayu 2015-10-16 08:51:23 UTC
(In reply to Jonathan Druart from comment #172)
> (In reply to kohayu from comment #171)
> > 
> > I use bulkmarcimport.pl to import MARC data,
> > but after importing 1,000 records each time, this occurred:
> >  
> > "................[NoNodes] ** No nodes are available:
> > [http://127.0.0.1:9200], called from sub
> > Search::Elasticsearch::Role::Client::Direct::__ANON__ at
> > /usr/local/share/perl/5.14.2/Catmandu/Store/ElasticSearch.pm line
> > 61.root@debian:/usr/share/koha/bin/migration_tools#"
> > 
> > This job stopped. 
> 
> See comment 62.

my config 

 <elasticsearch>
     <server>127.0.0.1:9200</server>        <!-- may be repeated to include all servers on your cluster -->
     <index_name>koha_robin</index_name>  <!-- should be unique amongst all the indices on your cluster. _biblios and _authorities will be appended. -->
 </elasticsearch>

elasticsearch shows nothing in its log,

my elasticsearch works very well. http://i.imgur.com/ESRj0oj.png

ElasticSearch.pm 

sub _build_es {
    my ($self) = @_;
    my $es = Search::Elasticsearch->new($self->_es_args);

    unless ($es->indices->exists(index => $self->index_name)) {    # line 61
        $es->indices->create(
            index => $self->index_name,
            body  => {
                settings => $self->index_settings,
                mappings => $self->index_mappings,
            },
        );
    }
    $es;
}
Comment 174 kohayu 2015-10-16 09:02:31 UTC
my elasticsearch indices:

http://imgur.com/htHLWCW

koha_robin_authorities
koha_robin_biblios

I import data by hand; Koha can't import all the MARC data at once.

BR,

longshan
Comment 175 Jonathan Druart 2015-10-16 09:35:05 UTC
What is the return of
  curl -X GET http://localhost:9200/
?
Comment 176 kohayu 2015-10-16 10:44:51 UTC
(In reply to Jonathan Druart from comment #175)
> What is the return of
>   curl -X GET http://localhost:9200/
> ?

Dear Jonathan,

output as follows:

root@debian:~# curl -X GET http://localhost:9200/
{
  "status" : 200,
  "name" : "Zero",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.2",
    "build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
    "build_timestamp" : "2015-09-14T09:49:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
Comment 177 Jonathan Druart 2015-10-16 10:53:24 UTC
Sorry, the answer was in your first post: try using jessie instead of wheezy.
Comment 178 Jonathan Druart 2015-10-16 10:55:27 UTC
(In reply to Jonathan Druart from comment #177)
> Sorry, the answer was in your first post: try using jessie instead of wheezy.

At least it's what I am using, maybe it works under wheezy, I don't know!
Comment 179 kohayu 2015-10-16 11:03:31 UTC
(In reply to Jonathan Druart from comment #178)
> (In reply to Jonathan Druart from comment #177)
> > Sorry, the answer was in your first post: try using jessie instead of wheezy.
> 
> At least it's what I am using, maybe it works under wheezy, I don't know!

OK, I'll try jessie.

thanks a lot,
Comment 180 Robin Sheat 2015-10-19 01:20:41 UTC
(In reply to Jonathan Druart from comment #178)
> At least it's what I am using, maybe it works under wheezy, I don't know!

I forget the exact details, but there were some issues with wheezy.

Additionally, when debugging, always add:

<trace_to>Stderr</trace_to>

to the <elasticsearch> block in koha-conf.xml, it will trace all the traffic between elasticsearch and Koha to stderr.
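
Putting that together with the config from comment 173, the block would look like:

 <elasticsearch>
     <server>127.0.0.1:9200</server>
     <index_name>koha_robin</index_name>
     <trace_to>Stderr</trace_to>   <!-- debugging only: very noisy -->
 </elasticsearch>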
Comment 181 kohayu 2015-10-19 01:51:49 UTC
(In reply to Robin Sheat from comment #180)
> (In reply to Jonathan Druart from comment #178)
> > At least it's what I am using, maybe it works under wheezy, I don't know!
> 
> I forget the exact details, but there were some issues with wheezy.
> 
> Additionally, when debugging, always add:
> 
> <trace_to>Stderr</trace_to>
> 
> to the <elasticsearch> block in koha-conf.xml, it will trace all the traffic
> between elasticsearch and Koha to stderr.


Dear Robin and Jonathan,

I changed to jessie, and it can import MARC data into Koha successfully.

I also found some ES plugins useful for debugging Koha:

https://github.com/lukas-vlcek/bigdesk
https://github.com/mobz/elasticsearch-head
https://github.com/karmi/elasticsearch-paramedic
https://github.com/NLPchina/elasticsearch-sql/

OPAC: http://i.imgur.com/orvED9C.png

thanks a lot, 

BR, 

longshan
Comment 182 Robin Sheat 2015-10-19 04:33:55 UTC
I've started writing some documentation on how the Elasticsearch stuff works internally here:

http://wiki.koha-community.org/wiki/Elasticsearch/Implementation

Right now it's just the high-level stuff, but I'll get finer detail on it soon.
Comment 183 Robin Sheat 2015-11-16 01:36:45 UTC
FYI, this is my last week at Catalyst and hence working on this. I'm going to get the docs finished off to make it easy for everyone, but if you've got any questions, best get them in soon.
Comment 184 David Cook 2015-11-16 03:34:00 UTC
(In reply to Robin Sheat from comment #183)
> FYI, this is my last week at Catalyst and hence working on this. I'm going
> to get the docs finished off to make it easy for everyone, but if you've got
> any questions, best get them in soon.

On that note, where are all the patches for this development? All I see is "Bug 12478: Display facet terms ordered by number of occurrences". Were the rest accidentally marked as obsolete or are they on different bugs?
Comment 185 Robin Sheat 2015-11-16 03:40:57 UTC
(In reply to David Cook from comment #184)
> On that note, where are all the patches for this development? All I see is
> "Bug 12478: Display facet terms ordered by number of occurrences". Were the
> rest accidentally marked as obsolete or are they on different bugs?

They've all been pulled into the branch on the catalyst repo. It was my plan to break them up into functional patches afterwards.
Comment 186 David Cook 2015-11-16 03:43:00 UTC
(In reply to Robin Sheat from comment #185)
> 
> They've all been pulled into the branch on the catalyst repo. It was my plan
> to break them up into functional patches afterwards.

Ahhh sounds good. I'll look there if/when I get to checking it out. Thanks!
Comment 187 Peter Zhao 2015-11-17 00:07:39 UTC
ES cannot "edit record" or "edit items" in a UNIMARC installation. (But it works well in USMARC.)

It can show results, but when I click "Edit record", it shows "Add MARC record" -- a blank MARC record. http://127.0.1.1:8080/cgi-bin/koha/cataloguing/addbiblio.pl?biblionumber=



 If I click "Edit items" , it shows

http://127.0.1.1:8080/cgi-bin/koha/cataloguing/additem.pl?biblionumber=

Software error:

Can't call method "fields" on an undefined value at /home/koha/kohaclone/cataloguing/additem.pl line 703.

For help, please send mail to the webmaster (webmaster@koha), giving this error message and the time and date of the error. 

I think it cannot get the "biblionumber=" parameter.
Comment 188 Jonathan Druart 2015-11-25 12:24:52 UTC
(In reply to Peter Zhao from comment #187)
> ES cannot "edit record" or "edit items" in a UNIMARC installation. (But it
> works well in USMARC.)
> 
> It can show results, but when I click "Edit record", it shows "Add MARC
> record" -- a blank MARC record.
> http://127.0.1.1:8080/cgi-bin/koha/cataloguing/addbiblio.pl?biblionumber=
> 
> 
> 
> If I click "Edit items", it shows
> 
> http://127.0.1.1:8080/cgi-bin/koha/cataloguing/additem.pl?biblionumber=
> 
> Software error:
> 
> Can't call method "fields" on an undefined value at
> /home/koha/kohaclone/cataloguing/additem.pl line 703.
> 
> For help, please send mail to the webmaster (webmaster@koha), giving this
> error message and the time and date of the error. 
> 
> I think it cannot get the "biblionumber=" parameter.

I suspect that it comes from the fact that the biblionumber is not indexed in a specific field (see comment 148).
Moreover I don't think the following comment is correct
http://git.catalyst.net.nz/gw?p=koha.git;a=blob;f=Koha/ElasticSearch/Indexer.pm;h=06dcb1d3e0b57633652d4cb70ed46ac59f0f802e;hb=refs/heads/elastic_search#l176

 167 sub _sanitise_records {
 168     my ($self, $biblionums, $records) = @_;
 169 
 170     confess "Unequal number of values in \$biblionums and \$records." if (@$biblionums != @$records);
 171 
 172     my $c = @$biblionums;
 173     for (my $i=0; $i<$c; $i++) {
 174         my $bibnum = $biblionums->[$i];
 175         my $rec = $records->[$i];
 176         # I've seen things you people wouldn't believe. Attack ships on fire
 177         # off the shoulder of Orion. I watched C-beams glitter in the dark near
 178         # the Tannhauser gate. MARC records where 999$c doesn't match the
 179         # biblionumber column. All those moments will be lost in time... like
 180         # tears in rain...
 181         $rec->delete_fields($rec->field('999'));
 182         $rec->append_fields(MARC::Field->new('999','','','c' => $bibnum, 'd' => $bibnum));
 183     }
 184 }

The biblionumber is stored in 001, isn't it?
Comment 189 Magnus Enger 2015-11-25 12:44:16 UTC
(In reply to Jonathan Druart from comment #188)
> The biblionumber is stored in 001, isn't it?

No. For imported records, the 001 is not touched. For original cataloguing (starting from scratch) 001 has to be filled manually. The one place where you can find the biblionumber in our MARC records is in 999 $c and $d.
Comment 190 Katrin Fischer 2015-11-25 13:31:26 UTC
I think UNIMARC uses 001 if I am not mistaken.
Comment 191 Nick Clemens (kidclamp) 2016-02-05 20:28:03 UTC
Hi all,

I just got the elastic branch set up on kohadevbox and am eager to do some testing. I am just wondering if there is anywhere specific we are poking yet, or is it just a general 'try to break it'/'figure out what it doesn't do'?

Searching definitely feels faster, even on a 100 record db.

I do seem to be able to switch between zebra and elastic without an issue once they are up and running, which is very useful.

Just poking around for now, I spotted 2 things:
1 - I don't seem to be able to edit/add records.  I get an error:
Can't locate object method "field" via package "<?xml version="1.0" encoding="UTF-8"?>... followed by the full MARCXML for the record

2 - Searching doesn't automatically truncate/stem, i.e. to search 'Char' and get 'Charles' I have to search for 'Char*', and when I do, highlighting doesn't work

I am excited to do more testing; I just want to know what the goals are to get this in shape to make it into Koha, so I can focus on hitting those pieces.

-Nick
Comment 192 Rémi Mayrand-Provencher 2016-02-08 20:07:05 UTC Comment hidden (obsolete)
Comment 193 Rémi Mayrand-Provencher 2016-02-08 20:07:11 UTC Comment hidden (obsolete)
Comment 194 Chris Cormack 2016-03-01 19:34:04 UTC
> I am excited to do more testing; I just want to know what the goals are to
> get this in shape to make it into Koha, so I can focus on hitting those
> pieces.
> 
> -Nick

Hi Nick

To my mind, the main thing to test hard at this point is that if you have this code and you turn off the syspref, it has no effect.
I.e., this breaks nothing existing.

If so then I reckon it could go in as experimental, and we can get more people testing and new bugs can be opened for missing features/bugs related to it. 

Does that make sense? (just my 2 cents)
Comment 195 Jonathan Druart 2016-03-02 08:07:24 UTC
I have listed some work still to be done in comments 100-102 (see also Robin's kanban).
It's not ready for production at all, and people have been waiting for this for 5 years now. It would be very weird not to provide something a bit more complete.
Development has been inactive for 6 months, and Robin has left the Koha project.
We need a plan...
The "let's push that and wait for people to open bug reports and enhancement requests" approach sounds a bit risky.
Comment 196 Chris Cormack 2016-03-02 08:08:42 UTC
Not as risky as making one massive bug that will never get pushed because it is too big to test properly.
Comment 197 Mirko Tietgen 2016-03-02 08:23:43 UTC
I think we should try to get it in and explicitly mark it as experimental. That would give interested (non-dev) parties the chance to try it and give feedback. I have spoken to a lot of people about Koha recently, and being able to test ES during their evaluation of the system would likely help some of them to make a decision in favour of Koha.
Comment 198 Katrin Fischer 2016-03-02 10:02:39 UTC
We should avoid repeating the Solr problem - I think part of the problem was that it was not production-ready when included in Koha, and I'd hate to see us get stuck in a similar way with ES.

I think some good information/documentation about what works and what does not work (yet) would be good. I am thinking about a wiki page - with more end-user-oriented information: which system preferences are supported, why does a search for x result in y... (FAQ?), how to configure it in the GUI - and we can fill in any gaps while testing.

I could start a page, but will need help filling in the information, and also with getting ES running on my computer.
Comment 199 Jonathan Druart 2016-03-02 10:16:34 UTC
I would like to get other opinions on the discussion in comment 38 and comment 39 about the idea of reusing the existing search scripts and templates.
IMO it will be a mess in the near future:
1/ we will want to implement things for ES that are not supported by Zebra
2/ at some point the features already implemented for Zebra won't be available for ES
=> We will have to write switches (if se=='es' {} elsif se == 'zebra') everywhere.
Comment 200 Jonathan Druart 2016-03-02 10:18:37 UTC
And I would also like to know who has already looked at the ES implementation, to get their feedback.
Comment 201 Brendan Gallagher 2016-03-02 21:42:06 UTC
(In reply to Jonathan Druart from comment #199)
> I would like to get other opinions on the discussion on comment 38 and
> comment 39 about the idea to reuse existing search scripts and templates.
> IMO it will be a mess in a near future:
> 1/ we will want to implement things for ES, not supported by Zebra
> 2/ at some point the features already implemented for Zebra won't be
> available for ES
> => We will have to write switch (if se=='es' {} elsif se == 'zebra')
> everywhere.


A sort of road map that I have in mind (and yes, I'll back that up with funding etc.):

1. Get Elastic into Koha (marked as experimental) - making sure that we aren't breaking anything with Zebra!  (that's the main testing we need now)
2. Elasticsearch talks JSON
3. As we want to develop new features that are ES-centric and not for Zebra, we split those off
4. Meaning we can build a new frontend for it that uses elasticsearch to search, and the RESTful API to get records.
5. New frontend for it - that is not C4::Search (no one really wants to work on that anymore).
6. Introducing the Catmandu libraries into Koha should be a good thing - we can start to explore RDF etc.
Comment 202 Kyle M Hall (khall) 2016-03-03 20:01:51 UTC
(In reply to Katrin Fischer from comment #198)
> We should avoid repeating the Solr problem - I think part of the problem was
> that it was not production-ready when included in Koha, and I'd hate to see
> us get stuck in a similar way with ES.

I think the problem was that the Solr integration ran out of steam (i.e. development funding). That is not going to happen with Elastic; I think it's safe to say that we are fully committed to getting Elastic to work with Koha. It's something of a chicken-and-egg problem. We can't let past failures cause hesitation about future opportunities - that will only lead to stagnation.

As an aside, I'm looking forward to Elastic as a way to greatly speed up our patron search. With Elastic we'll be able to index our patron data as well as record data! In fact, we'll be able to use it for any type of data where we search on large data sets! Once Elastic is part of Koha I plan on starting to work on an (optional) way for the patron search to take advantage of Elastic.
Comment 203 Katrin Fischer 2016-03-04 07:07:12 UTC
Hi Kyle, I should try and clarify my comment :) I am not against including this as an experimental feature at all! - I just think we need to be clear and upfront in documentation on what is working and what not - so people know what to expect. Right now I am not quite sure what to expect of the work available - what should be working? What is intended to work? It would help me to understand that better and give me a starting point.
Comment 204 Kyle M Hall (khall) 2016-03-07 15:45:32 UTC
(In reply to Katrin Fischer from comment #203)
> Hi Kyle, I should try and clarify my comment :) I am not against including
> this as an experimental feature at all! - I just think we need to be clear
> and upfront in documentation on what is working and what not - so people
> know what to expect. Right now I am not quite sure what to expect of the
> work available - what should be working? What is intended to work? It would
> help me to understand that better and give me a starting point.

Agreed! I think we are all on the same page. Nick is going to begin testing these patches for sign-off. I imagine he will be able to help fill in the gaps where documentation is lacking, and to tell us if it is lacking any functionality.
Comment 205 Nick Clemens (kidclamp) 2016-03-07 21:58:34 UTC
So far most things have worked nicely. Problems I have found:

Authority search fails when sorting is enabled (Software error: Unable to understand your search query, please rephrase and try again.)
and lots of this in the logs: Parse Failure [No mapping found for [Heading.phrase] in order to sort on]

Authority searching with sorting disabled did not throw an error, but returned no results.

I tried doing a rebuild to clear the above, but got errors about AUTOLOAD inheritance for StripNonXmlChars in Authority.pm. I tried adding "use C4::Charset;" but then got an object error.


Authority searching from a record via popup gave:
Can't locate object method "field" via package "<?xml version="1.0" encoding="UTF-8"?>..." (followed by the full MARCXML of the record)

Location (Branch) facet returns no results when combined with search
Title-series limit often returns no results when combined with search

The only problem I have hit so far in switching back to Zebra: when I selected Zebra as the search engine, the cataloguing search fails:
Can't locate object method "simple_search_compat" via package "Koha::SearchEngine::Zebra::Search"

I am going to keep plugging away, but authorities seem to be the biggest issue for now.
Comment 206 Jonathan Druart 2016-03-08 08:41:40 UTC
The branch is 500 commits behind master; is someone in charge of rebasing it?
Comment 207 Chris Cormack 2016-03-09 20:12:08 UTC
(In reply to Jonathan Druart from comment #206)
> The branch is 500 commits behind master; is someone in charge of rebasing it?

Your wish is my command, rebased now :)
Comment 208 Mirko Tietgen 2016-03-24 16:51:33 UTC
_sanitise_records makes it impossible to create or import authority records or to use Z39.50 authority search (staff client, Z39.50 and bulkmarcimport.pl). It fails with:

> Can't call method "field" on an undefined value at /home/koha/koha/Koha/ElasticSearch/Indexer.pm line 181.

With lines 181 and 182 commented out, I am able to create a record in the staff client manually and via Z39.50.

> # $rec->delete_fields($rec->field('999'));
> # $rec->append_fields(MARC::Field->new('999','','','c' => $bibnum, 'd' => $bibnum));

It looks like _sanitise_records is biblio-specific and should probably not be used with auth files, or should be changed to handle them correctly.

There are still errors with bulkmarcimport with these lines commented out.
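A minimal sketch of the guard suggested above (hypothetical - it assumes the Indexer object knows which index it was created for, as the index attribute in this patch set suggests):

use MARC::Field;

sub _sanitise_records {
    my ( $self, $biblionums, $records ) = @_;

    # Field 999 $c/$d (biblionumber/biblioitemnumber) only makes sense on
    # bibliographic records, so skip this step entirely for authorities.
    return unless $self->index eq 'biblios';    # hypothetical guard

    for my $i ( 0 .. $#$records ) {
        my $rec = $records->[$i];
        next unless $rec;    # avoids the "undefined value" crash above
        my $bibnum = $biblionums->[$i];
        $rec->delete_fields( $rec->field('999') );
        $rec->append_fields(
            MARC::Field->new( '999', '', '', c => $bibnum, d => $bibnum ) );
    }
}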
Comment 209 Mirko Tietgen 2016-03-24 16:56:33 UTC
Auth search in "entire record" does not work.

Software error:

Invalid marclist field provided: all at /home/koha/koha/Koha/SearchEngine/Elasticsearch/QueryBuilder.pm line 417.
	Koha::SearchEngine::Elasticsearch::QueryBuilder::build_authorities_query_compat(Koha::SearchEngine::Elasticsearch::QueryBuilder=HASH(0x867c5d8), ARRAY(0x8649338), ARRAY(0x8649308), ARRAY(0x867c4b8), ARRAY(0x8689490), ARRAY(0x9b55800), "CORPO_NAME", "HeadingAsc") called at /home/koha/koha/authorities/authorities-home.pl line 86
Comment 210 Mirko Tietgen 2016-03-24 17:05:06 UTC
Auth search for anything other than "search entire record" throws no error, but returns no results.

I can't find any working authority functionality at all. Am I missing something?
Comment 211 Mirko Tietgen 2016-03-24 17:14:44 UTC
Aaaaand, auth search is broken for Zebra too.

Software error:

Can't locate object method "simple_search_compat" via package "Koha::SearchEngine::Zebra::Search" at /home/koha/koha/C4/AuthoritiesMarc.pm line 357.
Comment 212 Chris Cormack 2016-03-28 20:22:28 UTC
(In reply to Mirko Tietgen from comment #211)
> Aaaaand, auth search is broken for Zebra too.
> 
> Software error:
> 
> Can't locate object method "simple_search_compat" via package
> "Koha::SearchEngine::Zebra::Search" at /home/koha/koha/C4/AuthoritiesMarc.pm
> line 357.

Sweet, thanks for finding that. I'm mostly concerned with finding things that are broken when not using Elasticsearch, because those are the blockers. I'll try to get a fix for this sometime this week.
Comment 213 Jonathan Druart 2016-04-11 07:21:20 UTC
I have rebased the branch (and pushed it to GitHub: https://github.com/joubu/Koha/commits/elastic_search).
I have also submitted a patch to fix the authority search.
Comment 214 Jonathan Druart 2016-04-11 07:21:53 UTC Comment hidden (obsolete)
Comment 215 Jonathan Druart 2016-04-11 08:34:42 UTC
Note that the tests are completely out of sync and do not pass.
Comment 216 Chris Cormack 2016-04-11 20:05:01 UTC
The new_12478_elasticsearch branch is now up to date, including the latest patch.

Ready for regression testing tomorrow.
Comment 217 Mirko Tietgen 2016-04-12 10:43:01 UTC
Can't index authorities in ES.

koha-koha@jessie:/home/koha/koha$ perl misc/search_tools/rebuild_elastic_search.pl -v -a
Indexing authorities
Can't locate object method "get_all_authorities_iterator" via package "Koha::Authority" at misc/search_tools/rebuild_elastic_search.pl line 151.

Not sure why that is the case; there is a sub get_all_authorities_iterator in Koha/Authority.pm.
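A quick sanity check to see whether the loaded package actually exposes the method the script calls (an illustrative one-liner; can() is standard Perl):

perl -MKoha::Authority -e 'print Koha::Authority->can("get_all_authorities_iterator") ? "ok\n" : "missing\n"'

If that prints "missing", the module and the script are out of sync on the branch.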
Comment 218 Tomás Cohen Arazi (tcohen) 2016-04-12 20:07:37 UTC Comment hidden (obsolete)
Comment 219 Chris Cormack 2016-04-12 20:27:58 UTC
Branch is up to date again with the latest patches
Comment 220 Chris Cormack 2016-04-12 20:47:32 UTC
Branch updated with 2 fixes for installing/upgrading
Comment 221 Nick Clemens (kidclamp) 2016-04-13 01:51:23 UTC
I have tested all the things I can think of (authorities, staff/opac, facets, acquisitions searching, importing) from both a fresh db and an upgrade, and found no regressions with these patches when using Zebra.
Comment 222 Chris Cormack 2016-04-13 01:54:17 UTC
The new_12478_elasticsearch branch now has all the patches signed off.
Comment 223 Jonathan Druart 2016-04-13 07:47:10 UTC
(In reply to Jonathan Druart from comment #215)
> Note that the tests are completely out of sync and do not pass.

That will certainly be a blocker for QA.
Comment 224 Chris Cormack 2016-04-13 07:49:18 UTC
Have you tried with bug 16249!
Comment 225 Chris Cormack 2016-04-13 07:50:09 UTC
Heh that was meant to be a ? Not a !
Comment 226 Jonathan Druart 2016-04-13 08:22:37 UTC
(In reply to Chris Cormack from comment #224)
> Have you tried with bug 16249!

I was talking about
t/Koha_ElasticSearch_Indexer.t, t/Koha_ElasticSearch_Search.t and t/Koha_ElasticSearch.t
Comment 227 Chris Cormack 2016-04-13 20:23:25 UTC
Will work on the tests today and attach them here
Comment 228 Chris Cormack 2016-04-13 20:54:13 UTC Comment hidden (obsolete)
Comment 229 Chris Cormack 2016-04-13 21:22:35 UTC Comment hidden (obsolete)
Comment 230 Chris Cormack 2016-04-13 21:25:43 UTC
Right, the Elasticsearch-related tests should pass now, if you check out the branch and apply the 2 patches from here.
Comment 231 Nick Clemens (kidclamp) 2016-04-13 21:37:15 UTC Comment hidden (obsolete)
Comment 232 Nick Clemens (kidclamp) 2016-04-13 21:37:24 UTC Comment hidden (obsolete)
Comment 233 Jonathan Druart 2016-04-14 06:54:49 UTC
Comments about the last 2 patches:
1/ The tests are db-dependent; the files should be moved to t/db_dependent
2/ The license part is missing; you have put a custom block instead


I think we should have better test coverage before pushing this work.
Comment 234 Chris Cormack 2016-04-14 20:24:28 UTC Comment hidden (obsolete)
Comment 235 Chris Cormack 2016-04-14 20:43:39 UTC
With the 3 test files we have

Koha::ElasticSearch                        61.3% statement coverage
Koha::ElasticSearch::Indexer               56.9% statement coverage
Koha::SearchEngine::Elasticsearch::Search  24.6% statement coverage


I'll now work on increasing their coverage.
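For anyone wanting to reproduce these numbers, the usual Devel::Cover route should work - an illustrative invocation, assuming the test paths as they currently exist on the branch:

PERL5OPT=-MDevel::Cover prove t/Koha_ElasticSearch.t t/Koha_ElasticSearch_Indexer.t
cover -report text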
Comment 236 Chris Cormack 2016-04-14 21:28:44 UTC Comment hidden (obsolete)
Comment 237 Nick Clemens (kidclamp) 2016-04-15 22:42:18 UTC Comment hidden (obsolete)
Comment 238 Nick Clemens (kidclamp) 2016-04-15 22:42:29 UTC Comment hidden (obsolete)
Comment 239 Kyle M Hall (khall) 2016-04-25 16:43:44 UTC
I have been testing the Elasticsearch patch set, focusing on ensuring that it does not alter the search results when using Zebra. My testing has verified that the results look identical using master vs master + elastic. The only noteworthy difference is that with Elastic, the facets are now in alphabetical order. I do not think this is an issue. If a library would like to customize the order of the facets, it can always be done via javascript.

This can be considered my sign-off!

Kyle
Comment 240 Jonathan Druart 2016-04-25 17:08:44 UTC
(In reply to Kyle M Hall from comment #239)
> The only noteworthy difference is that with Elastic, the facets are now in
> alphabetical order. I do not think this is an issue.

You mean using Zebra on the master+ES branch?

Looking at the commit msg of
  Bug 12478: Display facet terms ordered by number of occurrences
    For Zebra it's now done in C4::Search::getRecords, and there is no
    change to expect (still alphabetically).
We should not expect any changes; the facets should already be sorted alphabetically.

> If a library would like
> to customize the order of the facets, it can always be done via javascript.

Not sure it will be an easy one :)
Comment 241 Chris Cormack 2016-04-25 20:18:09 UTC
Comment on attachment 50292 [details] [review]
Bug 12478 Increasing test Coverage for Koha::SearchEngine::Elasticsearch::Search

Branch is now up to date
Comment 242 Jesse Weaver 2016-04-25 23:01:21 UTC
Branch has been updated with signoffs and two followups to fix QA/test issues.
Comment 243 Brendan Gallagher 2016-04-26 21:06:33 UTC
Pushed to Master - Should be in the May 2016 Release.  Thanks!  (will run the Schema update in a bit)
Comment 244 Srdjan Jankovic 2016-04-29 02:37:31 UTC
I had a problem with the database patch. I had to do:

diff --git a/installer/data/mysql/updatedatabase.pl b/installer/data/mysql/updatedatabase.pl
index 9e6d390..e092704 100755
--- a/installer/data/mysql/updatedatabase.pl
+++ b/installer/data/mysql/updatedatabase.pl
@@ -12332,13 +12332,13 @@ my $indexes = LoadFile( $mappings_yaml );
 
 while ( my ( $index_name, $fields ) = each %$indexes ) {
         while ( my ( $field_name, $data ) = each %$fields ) {
-            my $field_type = $data->{type};
+            my $field_type = $data->{type} || 'string';
             my $field_label = $data->{label};
             my $mappings = $data->{mappings};
             my $search_field = Koha::SearchFields->find_or_create({ name => $field_name, label => $field_label, type => $field_type }, { key => 'name' });
             for my $mapping ( @$mappings ) {
                 my $marc_field = Koha::SearchMarcMaps->find_or_create({ index_name => $index_name, marc_type => $mapping->{marc_type}, marc_field => $mapping->{marc_field} });
-                $search_field->add_to_search_marc_maps($marc_field, { facet => $mapping->{facet}, suggestible => $mapping->{suggestible}, sort => $mapping->{sort} } );
+                $search_field->add_to_search_marc_maps($marc_field, { facet => $mapping->{facet} || 0, suggestible => $mapping->{suggestible} || 0, sort => $mapping->{sort} || undef } );
             }
         }
 }

Was it just my MySQL?
Comment 245 Srdjan Jankovic 2016-05-05 00:38:04 UTC
Also, with my version of Elasticsearch I had to make the following changes - it looks like newer ES releases replaced the separate index_analyzer/search_analyzer mapping settings with a single analyzer setting:

diff --git a/Koha/ElasticSearch.pm b/Koha/ElasticSearch.pm
index d56c344..22bc064 100644
--- a/Koha/ElasticSearch.pm
+++ b/Koha/ElasticSearch.pm
@@ -171,11 +171,6 @@ sub get_elasticsearch_mappings {
                     include_in_all => JSON::false,
                     type           => "string",
                 },
-                '_all.phrase' => {
-                    search_analyzer => "analyser_phrase",
-                    index_analyzer  => "analyser_phrase",
-                    type            => "string",
-                },
             }
         }
     };
@@ -195,15 +190,12 @@ sub get_elasticsearch_mappings {
               ? 'boolean'
               : 'string';
             $mappings->{data}{properties}{$name} = {
-                search_analyzer => "analyser_standard",
-                index_analyzer  => "analyser_standard",
+                analyzer  => "analyser_standard",
                 type            => $es_type,
                 fields          => {
                     phrase => {
-                        search_analyzer => "analyser_phrase",
-                        index_analyzer  => "analyser_phrase",
+                        analyzer  => "analyser_phrase",
                         type            => "string",
-                        copy_to         => "_all.phrase",
                     },
                     raw => {
                         "type" => "string",
@@ -222,22 +214,19 @@ sub get_elasticsearch_mappings {
             if ($suggestible) {
                 $mappings->{data}{properties}{ $name . '__suggestion' } = {
                     type => 'completion',
-                    index_analyzer => 'simple',
-                    search_analyzer => 'simple',
+                    analyzer => 'simple',
                 };
             }
             # Sort may be true, false, or undef. Here we care if it's
             # anything other than undef.
             if (defined $sort) {
                 $mappings->{data}{properties}{ $name . '__sort' } = {
-                    search_analyzer => "analyser_phrase",
-                    index_analyzer  => "analyser_phrase",
+                    analyzer  => "analyser_phrase",
                     type            => "string",
                     include_in_all  => JSON::false,
                     fields          => {
                         phrase => {
-                            search_analyzer => "analyser_phrase",
-                            index_analyzer  => "analyser_phrase",
+                            analyzer  => "analyser_phrase",
                             type            => "string",
                         },
                     },
diff --git a/installer/data/mysql/updatedatabase.pl b/installer/data/mysql/updatedatabase.pl
index 9e6d390..e092704 100755
--- a/installer/data/mysql/updatedatabase.pl
+++ b/installer/data/mysql/updatedatabase.pl
@@ -12332,13 +12332,13 @@ my $indexes = LoadFile( $mappings_yaml );
 
 while ( my ( $index_name, $fields ) = each %$indexes ) {
         while ( my ( $field_name, $data ) = each %$fields ) {
-            my $field_type = $data->{type};
+            my $field_type = $data->{type} || 'string';
             my $field_label = $data->{label};
             my $mappings = $data->{mappings};
             my $search_field = Koha::SearchFields->find_or_create({ name => $field_name, label => $field_label, type => $field_type }, { key => 'name' });
             for my $mapping ( @$mappings ) {
                 my $marc_field = Koha::SearchMarcMaps->find_or_create({ index_name => $index_name, marc_type => $mapping->{marc_type}, marc_field => $mapping->{marc_field} });
-                $search_field->add_to_search_marc_maps($marc_field, { facet => $mapping->{facet}, suggestible => $mapping->{suggestible}, sort => $mapping->{sort} } );
+                $search_field->add_to_search_marc_maps($marc_field, { facet => $mapping->{facet} || 0, suggestible => $mapping->{suggestible} || 0, sort => $mapping->{sort} || undef } );
             }
         }
 }
Comment 246 Juan Romay Sieira 2016-06-09 15:02:37 UTC
I can't reindex authorities with rebuild_elastic_search.pl, as Nick Clemens said. I added "use C4::Charset;" and now I get this error:

DBIx::Class::ResultSet::new_result(): Result object instantiation requires a hashref as argument at Koha/Object.pm line 59

When I edit an authority and save it, it gets indexed in ES, but a full reindex cannot be done.
Comment 247 Katrin Fischer 2016-06-09 15:54:56 UTC
Testing at the KohaCon hackfest I also couldn't reindex authorities - biblios worked fine.
Comment 248 Jonathan Druart 2016-06-10 13:07:33 UTC
I am investigating the authority indexing issue.
It seems to be caused by a badly resolved merge conflict.
Comment 249 Jonathan Druart 2016-06-10 13:24:47 UTC
(In reply to Jonathan Druart from comment #248)
> I am investigating the authority indexing issue.
> It seems to be caused by a badly resolved merge conflict.

See bug 16708.
Comment 250 Juan Romay Sieira 2016-06-13 09:32:06 UTC
Thank you Jonathan, the patch in bug 16708 applies correctly and the authorities reindex fine
Comment 251 Jonathan Druart 2016-06-13 16:05:57 UTC
(In reply to Juan Romay Sieira from comment #250)
> Thank you Jonathan, the patch in bug 16708 applies correctly and the
> authorities reindex fine

So go and sign it off :)
Comment 252 claire.hernandez@biblibre.com 2016-10-13 10:10:02 UTC
Trying to test the Elasticsearch implementation. I don't understand why it does not work... If someone has an idea:

https://semestriel.framapad.org/p/jTiBzc32B7
Comment 253 Srdjan Jankovic 2016-10-13 23:47:49 UTC
I had something similar with mismatching libcatmandu-perl, libcatmandu-store-elasticsearch-perl and libcatmandu-marc-perl. Which versions do you have please?
Comment 254 Jonathan Druart 2016-10-14 07:23:24 UTC
(In reply to Srdjan Jankovic from comment #253)
> I had something similar with mismatching libcatmandu-perl,
> libcatmandu-store-elasticsearch-perl and libcatmandu-marc-perl. Which
> versions do you have please?

Hi Srdjan, the pad Claire posted contains all the info ;)

I copy/paste it here so we don't lose it:

$ misc/search_tools/rebuild_elastic_search.pl -v -bn=1
Indexing biblios
1
must be hashref, arrayref, coderef or iterable object

Trace begun at /usr/local/share/perl/5.22.1/Catmandu/Fix.pm line 171
Catmandu::Fix::fix('Catmandu::Fix=HASH(0x6c3ba40)', 'Catmandu::Importer::MARC=HASH(0x6b44988)') called at /home/koha/src/Koha/SearchEngine/Elasticsearch/Indexer.pm line 193
Koha::SearchEngine::Elasticsearch::Indexer::_convert_marc_to_json('Koha::SearchEngine::Elasticsearch::Indexer=HASH(0xab9fa0)', 'ARRAY(0x4cd6600)') called at /home/koha/src/Koha/SearchEngine/Elasticsearch/Indexer.pm line 69
Koha::SearchEngine::Elasticsearch::Indexer::update_index('Koha::SearchEngine::Elasticsearch::Indexer=HASH(0xab9fa0)', 'ARRAY(0x4cd6738)', 'ARRAY(0x4cd6600)') called at misc/search_tools/rebuild_elastic_search.pl line 191
main::do_reindex('CODE(0x8b3ba0)', 'biblios') called at misc/search_tools/rebuild_elastic_search.pl line 139

$ git log -1 --oneline --decorate
a8376f9 (HEAD -> master, origin/master, origin/HEAD) Bug 17432: Remove minification

ii  elasticsearch                   1.7.5                all                  Open Source, Distributed, RESTful Search Engine
ii  libcatmandu-store-elasticsearch 0.0304-1             all                  A searchable store backed by Elasticsearch
ii  libsearch-elasticsearch-perl    1.19-1               all                  The official client for Elasticsearch
ii  libcatmandu-marc-perl           0.214-1              all                  modules for working with MARC data within the Catmandu framework
ii  libcatmandu-perl                0.9505-1             all                  metadata toolkit
ii  libcatmandu-store-elasticsearch 0.0304-1             all                  A searchable store backed by Elasticsearch
ii  liblog-any-adapter-perl         0.11-2               all                  <short description; defaults to some wise words>
ii  liblog-any-perl                 1.03-1               all                  Perl module to log messages safely and efficiently
Comment 255 claire.hernandez@biblibre.com 2016-10-14 14:10:15 UTC
End of the week, last chance: I dropped my environment, did a fresh reinstall, and it works \o/

ii  elasticsearch                   1.7.5                all                  Open Source, Distributed, RESTful Search Engine
ii  libcatmandu-store-elasticsearch 0.0304-1             all                  A searchable store backed by Elasticsearch
ii  libsearch-elasticsearch-perl    1.19-1               all                  The official client for Elasticsearch
ii  libcatmandu-marc-perl           0.214-1              all                  modules for working with MARC data within the Catmandu framework
ii  libcatmandu-perl                0.9505-1             all                  metadata toolkit
ii  liblog-any-adapter-perl         0.11-2               all                  <short description; defaults to some wise words>
ii  liblog-any-perl                 1.03-1               all                  Perl module to log messages safely and efficiently

Versions seem the same... but it works!
Comment 256 Jorge de Cardenas 2017-02-16 15:50:02 UTC
I am trying to test Elasticsearch in Koha and I keep getting an error:
No 'elasticsearch' block is defined in koha-conf.xml.

I am running a fresh install of Koha 16.11.03.000 on Debian 8, with our data imported from 3.16.04.000.

I have enabled Memcached and Plack.

Installed openjdk-8 and Elasticsearch.

I first got the error while trying to run rebuild_elastic_search.pl.
I'm not sure why, but changing <memcached_server> to the static IP allowed the indexing to run; however, when I try a search on the staff side I get the error again.

If this is not the place to ask please let me know.

Jorge de Cardenas
Comment 257 Nick Clemens (kidclamp) 2017-02-16 18:49:48 UTC
Hi Jorge,

You will need to manually add a block like the one below to the <config> section of your koha-conf.xml:
 <elasticsearch>
     <server>es-server:9200</server>
     <index_name>koha_instance</index_name>
 </elasticsearch>
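
The server and index_name values above are placeholders - adjust them to your setup. After adding the block, you may also need to restart Plack and memcached so the new configuration is picked up (a guess, since you have both enabled), and then reindex, e.g.:

perl misc/search_tools/rebuild_elastic_search.pl -v      (everything, verbose)
perl misc/search_tools/rebuild_elastic_search.pl -v -a   (authorities only, as used earlier in this thread)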

The best way to reach the community is probably the mailing lists:
https://koha-community.org/support/koha-mailing-lists/

Or IRC:
https://koha-community.org/get-involved/irc/

Good luck!