It would be great if we could just pass a CCL query to the /biblios route
Should this be a public route?
There's another thing: I think the only Content-Type that makes sense for the return value is 'application/marc-in-json', because we return 1..n records instead of a single one.

Additionally, I can't imagine a use case in which you wouldn't want JSON as a return value for that route.

Is there any sane way to discern MARC records from one another in an API response?
(In reply to Paul Derscheid from comment #2)
> There's another thing: I think the only Content-Type that makes sense for
> the return value is 'application/marc-in-json', because we return 1..n
> records instead of a single one.
>
> Additionally, I can't imagine a use case in which you wouldn't want JSON as
> a return value for that route.
>
> Is there any sane way to discern MARC records from one another in an API
> response?

All of the accepted formats in GET /biblio/{biblio_id} have their plural representations [1], BUT I think we could start with 'application/marc-in-json' and add things 'as-needed'. So if you want to work on this, it'd be ok to just implement marc-in-json for now.

[1]
"application/json" => this is the biblio + biblioitem table, plural is an array,
"application/marcxml+xml" => there's '<collection>' for multiple records,
"application/marc-in-json" => as you said, an array,
"application/marc" => concatenated USMARC records would work,
"text/plain" => double newline separator would work.
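For illustration, a plural 'application/marc-in-json' payload would simply be a JSON array of MARC-in-JSON record objects, along these lines (all field values here are invented):

    [
      {
        "leader": "00000nam a22000002a 4500",
        "fields": [
          { "001": "42" },
          { "245": { "ind1": "1", "ind2": "0",
                     "subfields": [ { "a": "First example title" } ] } }
        ]
      },
      {
        "leader": "00000nam a22000002a 4500",
        "fields": [
          { "001": "43" },
          { "245": { "ind1": "1", "ind2": "0",
                     "subfields": [ { "a": "Second example title" } ] } }
        ]
      }
    ]

Clients can then discern records simply by array position.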
Created attachment 138123 [details] [review]
Bug 25870 - Add a q_ccl query parameter to /biblios

This patch adds new functionality to the existing /public/biblios route. Instead of specifying a biblio by a biblionumber as a path parameter, you can use the new q_ccl query parameter to run a query on the endpoint.

To test:
1) Apply the patch
2) Pick an endpoint tester of your choice, e.g. Insomnia or the ThunderClient if you use VSCode or derivatives (an example request is sketched after this comment).
3) Run a query while using Zebra.
4) Observe the marc-in-json response and check for validity.
5) Run a query while using Elasticsearch.
6) Again, observe the marc-in-json response and check for validity.
7) I'd love to say 'Sign-Off', but the unit tests need work and I need help with that, so an attachment or guidance would be great.
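A concrete request for steps 3-6 might look like the following sketch, using Mojo::UserAgent (already a Koha dependency). Only the route and the q_ccl parameter name come from the patch description; the host, port, and query value are assumptions for a local test instance:

    # hypothetical test request against the new route; adjust host/port to your setup
    use Mojo::UserAgent;

    my $ua = Mojo::UserAgent->new;
    my $tx = $ua->get(
        'http://localhost:8080/api/v1/public/biblios' =>
          { Accept => 'application/marc-in-json' } =>
          form => { q_ccl => 'shakespeare' }
    );
    print $tx->result->body;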
The other issue is that this route can be abused for DoS attacks, which must be handled somehow.
(In reply to Paul Derscheid from comment #5)
> The other issue is that this route can be abused for DoS attacks, which
> must be handled somehow.

Wouldn't the simplest solution be to make it *not* public, and just require the catalogue permission?

Another option would be to bake in a rate limiting middleware such as https://metacpan.org/pod/Plack::Middleware::Throttle or https://metacpan.org/pod/Plack::Middleware::Throttle::Lite
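For reference, a minimal sketch of what wiring the second module into a PSGI file could look like. This is untested against Koha's actual app.psgi, and the limits string, backend name, and route regex are assumptions based on the module's documented options:

    # hypothetical excerpt from a PSGI file; $app is assumed to be the
    # Koha PSGI application defined earlier in the file
    use Plack::Builder;

    builder {
        # throttle only the new search route; 'Simple' is the in-memory backend
        enable 'Throttle::Lite',
            limits  => '100 req/hour',
            backend => 'Simple',
            routes  => qr{^/api/v1/public/biblios};
        $app;
    };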
That's right, but at LMSCloud we need this as a public route and tcohen also wants this to be public, so the simple approach doesn't fly.

Thanks for the pointers, I'll have a look in a bit.
Also: https://metacpan.org/pod/Plack::Middleware::Cerberus

We need something that throttles by the minute or even by the second, if possible.
Another question is what a sane 'number of requests/interval' would look like.
(In reply to Paul Derscheid from comment #9)
> Another question is what a sane 'number of requests/interval' would look
> like.

I think that depends on the power of the server and your use case. What will you be using this API for?
(In reply to Paul Derscheid from comment #5)
> The other issue is that this route can be abused for DoS attacks, which
> must be handled somehow.

Isn't that already the case for bots hitting opac-search.pl?

I think we need to introduce some tool for throttling, but not necessarily as part of this dev, as it is really tricky (e.g. it will be common for campus sites to be queried many times from a transparent proxy on the LAN, so we will also need to be able to configure exceptions...).
(In reply to Tomás Cohen Arazi from comment #11)
> (In reply to Paul Derscheid from comment #5)
> > The other issue is that this route can be abused for DoS attacks, which
> > must be handled somehow.
>
> Isn't that already the case for bots hitting opac-search.pl?

What I meant to say is that introducing this route just adds another place we need to take care of. But the solution is needed in general. Maybe at the Apache level.
(In reply to Tomás Cohen Arazi from comment #11)
> (In reply to Paul Derscheid from comment #5)
> > The other issue is that this route can be abused for DoS attacks, which
> > must be handled somehow.
>
> Isn't that already the case for bots hitting opac-search.pl?
>
> I think we need to introduce some tool for throttling, but not necessarily
> as part of this dev, as it is really tricky (e.g. it will be common for
> campus sites to be queried many times from a transparent proxy on the LAN,
> so we will also need to be able to configure exceptions...).

That is a good point. Now we just need to add throttling to this API, then use that API in opac-search... ;)

Tomas, at this time I think you have the ultimate authority to declare throttling "out of scope" and move it to a new bug report.
(In reply to Paul Derscheid from comment #7)
> That's right, but at LMSCloud we need this as a public route and tcohen also
> wants this to be public, so the simple approach doesn't fly.

We can have both, really. And when we say 'public' we mean 'unprivileged access', which may or may not imply authentication.
(In reply to Kyle M Hall from comment #13)
> That is a good point. Now we just need to add throttling to this API, then
> use that API in opac-search... ;)
>
> Tomas, at this time I think you have the ultimate authority to declare
> throttling "out of scope" and move it to a new bug report.

I think we should file a separate bug, yes. And I wouldn't make this one dependent on the new one.
I'd really like to see us get away from Apache-based solutions, but at the same time it's a very practical way to handle it that works for 99.9% of Koha users. If there exists middleware that would work for Koha, that would be a better solution IMO, especially because we could bake the throttling settings directly into Koha.

Now, I think we've done enough bike-shedding. We should move this discussion to a throttling-specific bug report!
Comment on attachment 138123 [details] [review]
Bug 25870 - Add a q_ccl query parameter to /biblios

Review of attachment 138123 [details] [review]:
-----------------------------------------------------------------

::: Koha/REST/V1/Biblios.pm
@@ +160,5 @@
> +    my $record_processor = Koha::RecordProcessor->new(
> +        {
> +            filters => 'ViewPolicy',
> +            options => {
> +                interface => 'opac',

I think you could implement only a list() method, and rely on $c->stash('is_public') to choose staff vs. public, and later check the OpacHiddenItems stuff accordingly.

@@ +177,5 @@
> +            }
> +        );
> +    }
> +
> +    sub format_record_by_content_type {

These inline subs look really untidy.

@@ +243,5 @@
> +    }
> +
> +    my $response =
> +      format_record_by_content_type(
> +        { content_type => $requested_content_type, records => \@records } );

It feels like this method call should be placed in the data => portion of the respond_to, with the right parameters for each case, instead of 'detecting' it inside format_record_... But as I said, I'm still not convinced by those subs.
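A rough sketch of the shape described above, to make the suggestion concrete. This is not the patch itself: the method body, variable names, and the 406 handling are assumptions; only $c->stash('is_public'), the ViewPolicy filter, and the data => style of respond_to come from the review:

    # hypothetical single controller method for both staff and public routes
    sub list {
        my $c = shift->openapi->valid_input or return;

        # the stash flag set by the route decides the interface
        my $interface = $c->stash('is_public') ? 'opac' : 'intranet';

        my $record_processor = Koha::RecordProcessor->new(
            {
                filters => ['ViewPolicy'],
                options => { interface => $interface },
            }
        );

        my $mij;    # JSON string of the matching records, built below
        # ... run the search, pass each record through $record_processor,
        #     and serialize the results into $mij ...

        return $c->respond_to(
            mij => { data => $mij },
            any => { status => 406, openapi => [ 'application/marc-in-json' ] },
        );
    }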
(In reply to Kyle M Hall from comment #16)
> I'd really like to see us get away from Apache-based solutions, but at the
> same time it's a very practical way to handle it that works for 99.9% of
> Koha users. If there exists middleware that would work for Koha, that would
> be a better solution IMO, especially because we could bake the throttling
> settings directly into Koha.

I've just raised bug 31242
(In reply to Tomás Cohen Arazi from comment #14)
> (In reply to Paul Derscheid from comment #7)
> > That's right, but at LMSCloud we need this as a public route and tcohen
> > also wants this to be public, so the simple approach doesn't fly.
>
> We can have both, really. And when we say 'public' we mean 'unprivileged
> access', which may or may not imply authentication.

I know I struggle to wrap my head around how the API is organised as well...

If I recall correctly, the majority of API endpoints are for admin/staff users and the "public" APIs are for public/OPAC users. Public users don't have authorizations (hence "unprivileged access"). Some public APIs require authentication and some don't.
(In reply to Paul Derscheid from comment #7)
> That's right, but at LMSCloud we need this as a public route and tcohen also
> wants this to be public, so the simple approach doesn't fly.
>
> Thanks for the pointers, I'll have a look in a bit.

I'd like this as a public route as well, as I'd like to point a third party at it so that they can search Koha in an event-driven way.
(In reply to David Cook from comment #20)
> (In reply to Paul Derscheid from comment #7)
> > That's right, but at LMSCloud we need this as a public route and tcohen
> > also wants this to be public, so the simple approach doesn't fly.
> >
> > Thanks for the pointers, I'll have a look in a bit.
>
> I'd like this as a public route as well, as I'd like to point a third party
> at it so that they can search Koha in an event-driven way.

Although for my use case I could go with a non-public route and just give them a "catalogue"-authorized user like Kyle suggested. But that wouldn't work if we wanted opac-search.pl to use it.

But... we could start with non-public and then add public later if we're concerned. They could probably share the controller, even.
Created attachment 139452 [details] [review]
Bug 25870 - (follow-up) Add a q_ccl query parameter to /biblios

I refactored the controller code but unfortunately still haven't figured out why Koha::Biblios->search doesn't work properly. For the time being I still use Koha::Biblios->find. That aside, I think it has become much cleaner now.

To test:
1) Apply the patch
2) Pick an endpoint tester of your choice, e.g. Insomnia or the ThunderClient if you use VSCode or derivatives.
3) Run a query while using Zebra.
4) Observe the marc-in-json response and check for validity.
5) Run a query while using Elasticsearch.
6) Again, observe the marc-in-json response and check for validity.
7) Not ready for sign-off, but please leave a comment or help me with the Koha::Biblios->search thing.
Created attachment 139469 [details] [review]
Bug 25870 - (follow-up) Removed Data::Dumper + call

I refactored the controller code but unfortunately still haven't figured out why Koha::Biblios->search doesn't work properly. For the time being I still use Koha::Biblios->find. That aside, I think it has become much cleaner now.

To test:
1) Apply the patch
2) Pick an endpoint tester of your choice, e.g. Insomnia or the ThunderClient if you use VSCode or derivatives.
3) Run a query while using Zebra.
4) Observe the marc-in-json response and check for validity.
5) Run a query while using Elasticsearch.
6) Again, observe the marc-in-json response and check for validity.
7) Not ready for sign-off, but please leave a comment or help me with the Koha::Biblios->search thing.
Created attachment 139491 [details] [review]
Bug 25870 - (follow-up) Raw MARC-XML records from Zebra now get appropriate treatment

To test:
1) Apply the patch
2) Pick an endpoint tester of your choice, e.g. Insomnia or the ThunderClient if you use VSCode or derivatives.
3) Run a query while using Zebra.
4) Observe the marc-in-json response and check for validity.
5) Run a query while using Elasticsearch.
6) Again, observe the marc-in-json response and check for validity.
7) Not ready for sign-off, but please leave a comment or help me with the Koha::Biblios->search thing.
Paul, please redo this on top of bug 32734. Let me know if you cannot work on it, so we can trace a path. This needs tests as well.
I'll estimate the workload and get back to you, Tomas.
Tomas, I'll continue working on this next week if that's alright with you. I'd love to do it immediately but I'm really swamped this week.
Created attachment 148117 [details] [review]
Bug 25870: Add a q_ccl query parameter to /biblios

To test:
1) Apply the patch
2) Pick an endpoint tester of your choice, e.g. Insomnia or the ThunderClient if you use VSCode or derivatives.
3) Run a query while using Zebra.
4) Observe the marc-in-json response and check for validity.
5) Run a query while using Elasticsearch.
6) Again, observe the marc-in-json response and check for validity.
7) Not ready for sign-off, but please leave a comment or help me with the Koha::Biblios->search thing.
Redoing this bug on top of bug 32734.
A few comments on the preliminary code (see the sketch after this list):

* An empty query should return an empty resultset, not a 404.
* Koha::SearchEngine::Search->extract_biblionumber should be used instead of reinventing it.
* Pagination information is missing from the headers. It should be extracted from the query and added. Please look at $c->add_pagination_headers.
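A rough sketch of the first two points. Only the helper names come from the comment above; the variable names, the hard-coded paging values, and the plain-function call style for extract_biblionumber are assumptions:

    # hypothetical controller fragment
    use Koha::SearchEngine::Search;

    my $query = $c->param('q_ccl') // q{};

    # an empty query yields an empty resultset rather than a 404
    return $c->render( status => 200, json => [] ) unless $query;

    my $searcher = Koha::SearchEngine::Search->new( { index => 'biblios' } );
    my ( $error, $results, $total_hits ) =
      $searcher->simple_search_compat( $query, 0, 20 );

    # reuse the existing helper instead of digging the biblionumber
    # out of each search result by hand
    my @biblionumbers =
      map { Koha::SearchEngine::Search::extract_biblionumber($_) }
      @{ $results // [] };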
Created attachment 148247 [details] [review]
Bug 25870: (follow-up) Adding pagination information on the headers, returns an empty result when the query is empty
(In reply to Hammat wele from comment #31)
> Created attachment 148247 [details] [review]
> Bug 25870: (follow-up) Adding pagination information on the headers,
> returns an empty result when the query is empty

Well :-D

You are extracting pagination parameters from the (HTTP) query, good. But they are not being passed to the backend (in the form of offset + limit, I think), as they should be.

Also, you need to know that the add_pagination_headers helper needs two totals:
- the page total if the query is paginated (it would always be the case)
- the total records from which the page was taken (base_total).
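A rough sketch of how those pieces could fit together, continuing the fragment sketched earlier. The _page/_per_page names follow the API's usual pagination parameters; the params key and the RESTdefaultPageSize fallback are assumptions, while total/base_total come from the comment above:

    # hypothetical pagination handling
    use C4::Context;

    my $page     = $c->param('_page')     // 1;
    my $per_page = $c->param('_per_page')
      // C4::Context->preference('RESTdefaultPageSize');

    # translate page/per_page into what the search backend expects
    my $offset = ( $page - 1 ) * $per_page;

    my ( $error, $results, $total_hits ) =
      $searcher->simple_search_compat( $query, $offset, $per_page );

    $c->add_pagination_headers(
        {
            total      => scalar @{ $results // [] },  # records in this page
            base_total => $total_hits,                 # all records matching the query
            params     => { _page => $page, _per_page => $per_page },
        }
    );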
Created attachment 148446 [details] [review]
Bug 25870: (follow-up) Adding pagination information on the headers, returns an empty result when the query is empty
I still need help with the Koha::Biblios->search thing: how to use it to search biblios hidden_in_opac. With Koha::Biblios->search, adding the pagination would be much easier.
(In reply to Hammat wele from comment #34)
> I still need help with the Koha::Biblios->search thing: how to use it to
> search biblios hidden_in_opac. With Koha::Biblios->search, adding the
> pagination would be much easier.

How so?

You need to rewrite ->hidden_in_opac to build a parameter hashref from the syspref, then pass it to ->search.

I haven't had a deep look, but I think it is possible and should work.
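To make that concrete, a rough sketch of such a rewrite. The YAML handling mirrors how OpacHiddenItems is documented to behave (an item column mapped to the values that hide it), but the helper name and the -not_in inversion are assumptions:

    # hypothetical: turn the OpacHiddenItems syspref into a search hashref
    use C4::Context;
    use Encode;
    use YAML::XS;

    sub opac_hidden_items_filter {
        my $yaml = C4::Context->preference('OpacHiddenItems') // q{};
        return {} unless $yaml =~ /\S/;

        my $rules = eval { YAML::XS::Load( Encode::encode_utf8($yaml) ) } || {};

        # keep rows whose column value is NOT in the hidden list
        return { map { $_ => { -not_in => $rules->{$_} } } keys %{$rules} };
    }

    # e.g. Koha::Items->search( opac_hidden_items_filter() )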
You really need to perform the search against the search engine, with any implicit filtering (like OpacHiddenItems) added to the CCL query. Otherwise you would kill the interesting bit: searching on the backend.
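For illustration, the kind of query-side filtering meant here, sketched for the suppression case. The 'Suppress' index and the exact CCL syntax are modelled on what the OPAC does for OpacSuppression and are unverified here:

    # hypothetical: fold an implicit OPAC filter into the CCL query itself,
    # so the engine (Zebra or Elasticsearch) does the filtering
    my $query = $c->param('q_ccl');

    if ( $c->stash('is_public') && C4::Context->preference('OpacSuppression') ) {
        $query = "($query) not Suppress=1";
    }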
(In reply to Jonathan Druart from comment #35)
> (In reply to Hammat wele from comment #34)
> > I still need help with the Koha::Biblios->search thing: how to use it to
> > search biblios hidden_in_opac. With Koha::Biblios->search, adding the
> > pagination would be much easier.
>
> How so?
>
> You need to rewrite ->hidden_in_opac to build a parameter hashref from the
> syspref, then pass it to ->search.
>
> I haven't had a deep look, but I think it is possible and should work.

I don't know if all item DB fields have matching index names/aliases, so that would be something that needs reviewing.

(In reply to Tomás Cohen Arazi from comment #36)
> You really need to perform the search against the search engine, with any
> implicit filtering (like OpacHiddenItems) added to the CCL query. Otherwise
> you would kill the interesting bit: searching on the backend.

Yeah, it'll need OpacSuppression, OpacSuppressionByIPRange, and OpacHiddenItems checking.
As this is blocking the much-desired bug 27113, what can we do to help move this along?
Any plans to get this moving again?
This is a rabbit hole.

We are now in the time of the year where we prepare our next release AND it's coincidentally relevant to what's on master. Bug 27113 is stuck behind this, which is big, complicated, and involves many parties with more experience than we have.

I need to determine now if we put 27113 straight onto our pile, or if it's worth Hammat investing some time on this.
(In reply to Blou from comment #40)
> This is a rabbit hole.
>
> We are now in the time of the year where we prepare our next release AND
> it's coincidentally relevant to what's on master. Bug 27113 is stuck behind
> this, which is big, complicated, and involves many parties with more
> experience than we have.
>
> I need to determine now if we put 27113 straight onto our pile, or if it's
> worth Hammat investing some time on this.

Maybe it would be worth adding this one to the agenda for the next dev meeting to get some attention and more direction.
This is still blocking bug 27113. Any chance of getting it moving again?
Since we're progressively moving to Elasticsearch, do we really want a query parameter called q_ccl? CCL is legacy at this point...

Personally, I think what we really need is an endpoint like "/search". The REST purists might say "but 'search' isn't a resource", but I think it can still be treated as a resource in terms of the API.
Whatever works, if you ask me. :-)
This non-discussion is very harmful to the development of ANY other feature trying to make the best use of ES capabilities.

The fact that Koha still has no autocomplete in the search bar reminds me that it took us 4 years to get a password reset mechanism in the OPAC. "Expected" features should be railroaded to completion.

This bug needs to move ahead, or the blocked bugs (here bug 27113) should be allowed to move ahead without it.
(In reply to Blou from comment #40)
> This is a rabbit hole.

(In reply to Blou from comment #45)
> This non-discussion is very harmful to the development of ANY other feature
> trying to make the best use of ES capabilities.

Going to have to disagree with you there. I think there's quite a bit of useful/helpful discussion here. I think most of us have actually been waiting for a developer response here. (See Marcel's comment especially.) I think we'd all like to see this development move forward, but there really is a lot to consider too.

> The fact that Koha still has no autocomplete in the search bar reminds me
> that it took us 4 years to get a password reset mechanism in the OPAC.
> "Expected" features should be railroaded to completion.

The lack of a password reset was painful for sure, but railroading things is also how we got into a lot of trouble in the past. Perhaps what you mean is that functionality that should be seen as central/core to Koha (e.g. search) should be prioritised? I'd agree with that (although it's easier said than done).

I think the API is particularly difficult to change, because it's an API. It's not something that's super easy to change once you've created it, so you want to get it right the first time.

There's heaps of stuff - especially for security - that I'd love to add to Koha, but I haven't polished it enough for the global community Koha yet. The frustrating but also great thing about the community process is that others can point out the mistakes or shortcomings in our code. It's the whole "if you want to go fast, go alone. If you want to go far, go together" thing.

> This bug needs to move ahead, or the blocked bugs (here bug 27113) should
> be allowed to move ahead without it.

It looks like Caroline moved bug 27113 to blocked in April 2024, perhaps after comments from Tomas and Katrin. After skimming through the comments, I don't understand why bug 27113 would need to depend on bug 25870. Isn't it completely different functionality? I'll add a comment on that bug asking about that...
(In reply to Blou from comment #40)
> I need to determine now if we put 27113 straight onto our pile, or if it's
> worth Hammat investing some time on this.

That is certainly a challenge. I have the same problem: it's difficult committing my time to large changes which move slowly...

Sounds like bug 27113 is the goal?

Honestly, I'm a bit surprised not to see this first as a Koha plugin. I'm guessing there must be some reason for not doing that, though.
> (In reply to Blou from comment #45)
> The fact that Koha still has no autocomplete in the search bar reminds me
> that it took us 4 years to get a password reset mechanism in the OPAC.
> "Expected" features should be railroaded to completion.

Are you aware of the roadmap?