Summary: | check-url-quick gives false error 404 | |
---|---|---|---
Product: | Koha | Reporter: | Marjorie Barry-Vila <marjorie.barry-vila>
Component: | Architecture, internals, and plumbing | Assignee: | Aleisha Amohia <aleisha>
Status: | Needs Signoff | QA Contact: | Testopia <testopia>
Severity: | enhancement | |
Priority: | P5 - low | CC: | aleisha, dcook
Version: | 20.05 | |
Hardware: | All | |
OS: | All | |
GIT URL: | | Change sponsored?: | Sponsored
Patch complexity: | --- | Documentation contact: |
Documentation submission: | | Text to go in the release notes: |
Version(s) released in: | | Circulation function: |
Attachments: | Sample MARC with error URLs; Bug 30614: [WIP] Fallback to a GET request if HEAD returns error status; Bug 30614: Fallback to a GET request if HEAD returns error status | |
Description
Marjorie Barry-Vila
2022-04-25 19:31:18 UTC
This script is very old; I imagine the way it's being done is a reflection of its age. We use HTTP::Request throughout Koha now. I imagine we'd replace that with something close to how we do this in Koha/ERM/EUsage/UsageDataProvider.pm:

    =head3 _handle_sushi_response

    Creates and sends the request based on a provided url
    Also handles any redirects

    =cut

    sub _handle_sushi_request {
        my ($url) = @_;

        my $request = HTTP::Request->new( 'GET' => $url );
        my $ua      = LWP::UserAgent->new;
        $ua->agent( 'Koha/' . Koha::version() );
        my $response = $ua->simple_request($request);

        if ( $response->is_redirect ) {
            my $redirect_url = $response->header('Location');
            $redirect_url = URI->new_abs( $redirect_url, $url );
            $response     = $ua->get($redirect_url);
        }

        return $response;
    }

When it comes to checking URLs, you typically want to do an HTTP HEAD instead of an HTTP GET, to increase performance for the checker and to decrease load on the URL being checked. For instance, let's say you're checking whether "https://path/to/1TB/file.file" exists. If you do an HTTP HEAD, that can be a <1 second check. If you do an HTTP GET, you're going to have to wait for that file to download, so your checker will go much slower. It's also going to cost the server host money in terms of data transfer; if you're using a cloud provider like Azure or AWS, your costs can increase by hundreds or thousands of dollars very easily. So HTTP HEAD is much better than HTTP GET for this use case.

However... not all sites honour HTTP HEAD. Misguided people trying to lock down servers for security reasons will sometimes limit HTTP verbs to just GET/POST, but this actually has a negative effect, because it means you then have to do a full HTTP GET to do something like a URL check. I work on a project with millions of URLs and many terabytes of data, and HTTP HEAD is one of my best friends.

Anyway... I'd say an improvement to the script would be to add a CLI parameter to choose whether to use a "HEAD" or a "GET" request, because realistically sometimes you do have to use a GET to get an accurate response - even if HEAD is more optimal in an ideal world. Another optimisation could be to fall back to a GET in the event that a HEAD fails.
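As a rough illustration of that fallback idea (this is not one of the patches attached below; the check_url() helper name, the timeout, and the user-agent string are invented for the example), a HEAD request with a GET retry could look like this with LWP::UserAgent:

    # Illustrative sketch only: try HEAD first and fall back to a full GET
    # when the HEAD request comes back with an error status.
    use Modern::Perl;
    use LWP::UserAgent;

    sub check_url {
        my ($url) = @_;

        my $ua = LWP::UserAgent->new( timeout => 10 );
        $ua->agent('koha-link-check-example/1.0');

        # Cheap first pass: HEAD avoids downloading the response body.
        my $response = $ua->head($url);

        # Some hosts reject HEAD (403/405/...) even though the resource exists,
        # so retry with GET before reporting the URL as broken.
        if ( $response->is_error ) {
            $response = $ua->get($url);
        }

        return ( $response->code, $response->message );
    }

    my ( $code, $message ) = check_url('https://example.org/some/resource');
    say "$code $message";

The cronjob itself drives its checks through AnyEvent::HTTP (see the review further down), so a real patch has to fit that asynchronous flow rather than a plain synchronous helper like this.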
Fair enough, I like the fall back option. A lot of these third party providers have redirects or bot challenges now that seem to cause HTTP HEAD to fail for some reason.

(In reply to Aleisha Amohia from comment #3)
> Fair enough, I like the fall back option. A lot of these third party
> providers have redirects or bot challenges now that seem to cause HTTP HEAD
> to fail for some reason

Ah right. Of course. Since so many bots start off with HTTP HEAD, it is a natural target for sure. I've run into that with some of my own non-Koha link checkers as well.

Created attachment 186380 [details]
Sample MARC with error URLs

check-url-quick.pl returns false errors for these URLs:

    439 http://www.tandfonline.com/loi/tejp20 403 Forbidden
    441 https://ir.canterbury.ac.nz/handle/10092/1507 403 Forbidden
    440 http://onlinelibrary.wiley.com/journal/10.1002/%28ISSN%291552-8618/issues 403 Forbidden
    442 https://natlib-primo.hosted.exlibrisgroup.com/permalink/f/1s57t7d/NLNZ_ALMA11271200280002836 403 Cache-Control: private
    443 https://coldregions.americangeosciences.org/vufind/Content/ajus 404 Not Found

Created attachment 186381 [details] [review]
Bug 30614: [WIP] Fallback to a GET request if HEAD returns error status

This is a WIP - feedback welcome. Sample MARC attached.

Testing using:

    perl misc/cronjobs/check-url-quick.pl -v > test.txt
    grep -E "tand|wiley|natlib|cold|canterbury" test.txt

Created attachment 187030 [details] [review]
Bug 30614: Fallback to a GET request if HEAD returns error status

This enhancement makes a GET request as a fallback if HEAD returns an error status.

This doesn't fix all of the errors produced by the sample error MARC attached - any further support is welcomed.

To test:
1. Download the MARC file. Go to Cataloguing in the staff interface and stage the MARC file for import, then import.
2. Run the URL check job and confirm the erroring URLs:
       perl misc/cronjobs/check-url-quick.pl -v > test.txt
       grep -E "tand|wiley|natlib|cold|canterbury" test.txt
3. Apply the patch and restart services.
4. Repeat step 2 and notice that 2 of the URLs now return 200 OK status codes.

Sponsored-by: Earth Sciences New Zealand

(In reply to Aleisha Amohia from comment #7)
> This enhancement makes a GET request as a fallback if HEAD returns an error
> status.
>
> This doesn't fix all of the errors produced by the sample error MARC
> attached - any further support is welcomed.

It's a challenging one. On one hand, we're trying to stop people from using bots against Koha. On the other hand, we'd like to use bots to do things like link checking in Koha.

We've had a custom link checker for many years that sits adjacent to Koha, and what I did was create a hash of configurable domains and then check each link-checker URL against that list first. If it matches, then we don't bother checking it, because we know the site is just going to block our link checker attempt anyway.

Obviously, it means your link checking is never going to be perfect. But it makes for fewer false positives.

To get a more accurate result, we'd have to use a headless browser set up to pretend to be a real human, but then we'd also become the thing that we were trying to protect ourselves against. Of course, I think we could argue our purposes are more positive, but I don't know that the link checker targets would necessarily agree.

Anyway, that's a lot of text for a small idea. That's what I ended up doing locally (outside of Koha) to deal with this sort of situation. Not perfect but it's been fairly practical.

(In reply to David Cook from comment #8)
> (In reply to Aleisha Amohia from comment #7)
> > This enhancement makes a GET request as a fallback if HEAD returns an error
> > status.
> >
> > This doesn't fix all of the errors produced by the sample error MARC
> > attached - any further support is welcomed.
>
> It's a challenging one. On one hand, we're trying to stop people from using
> bots against Koha. On the other hand, we'd like to use bots to do things
> like link checking in Koha.
>
> We've had a custom link checker for many years that sits adjacent to Koha,
> and what I did was create a hash of configurable domains and then check each
> link-checker URL against that list first. If it matches, then we don't
> bother checking it, because we know the site is just going to block our link
> checker attempt anyway.
>
> Obviously, it means your link checking is never going to be perfect. But it
> makes for fewer false positives.
>
> To get a more accurate result, we'd have to use a headless browser set up
> to pretend to be a real human, but then we'd also become the thing that
> we were trying to protect ourselves against. Of course, I think we could
> argue our purposes are more positive, but I don't know that the link checker
> targets would necessarily agree.
>
> Anyway, that's a lot of text for a small idea. That's what I ended up doing
> locally (outside of Koha) to deal with this sort of situation. Not perfect
> but it's been fairly practical.

I totally hear you. The custom link checker which ignores allowlisted domains is a reasonable workaround, just trying to avoid custom as much as possible these days!

(In reply to Aleisha Amohia from comment #9)
> I totally hear you. The custom link checker which ignores allowlisted
> domains is a reasonable workaround, just trying to avoid custom as much as
> possible these days!

Yep! That makes a lot of sense. I'm trying to do the same.

What I meant was that maybe a way to enhance check-url-quick would be to add a patch for that (a configurable domain allowlist) via Bugzilla.

Comment on attachment 187030 [details] [review]
Bug 30614: Fallback to a GET request if HEAD returns error status

Review of attachment 187030 [details] [review]:
-----------------------------------------------------------------

::: misc/cronjobs/check-url-quick.pl
@@ +115,5 @@
> + my $request = HTTP::Request->new( 'GET' => $url );
> + my $ua = LWP::UserAgent->new;
> + $ua->agent($user_agent);
> + my $response = $ua->request($request);
> + if ( $response->is_redirect ) {

I think that LWP::UserAgent follows redirects by default, so I don't think this code would be triggered? For instance, if you curl https://google.com you'll get a 301, but if you use LWP::UserAgent with that HTTP::Request GET, you'll get a 200.

It's not clear to me at a glance if AnyEvent::HTTP::http_request follows redirects by default or not. At a glance, it looks like maybe not.

@@ +120,5 @@
> + my $redirect_url = $response->header('Location');
> + $redirect_url = URI->new_abs( $redirect_url, $url );
> + $response = $ua->get($redirect_url);
> + }
> + my $status_code = substr( $response->status_line, 0, 3 );

You can just use $response->code() here.

@@ +121,5 @@
> + $redirect_url = URI->new_abs( $redirect_url, $url );
> + $response = $ua->get($redirect_url);
> + }
> + my $status_code = substr( $response->status_line, 0, 3 );
> + my $reason = substr( $response->status_line, 3 );

You can use $response->message() here.

(In reply to David Cook from comment #10)
> (In reply to Aleisha Amohia from comment #9)
> > I totally hear you. The custom link checker which ignores allowlisted
> > domains is a reasonable workaround, just trying to avoid custom as much as
> > possible these days!
>
> Yep! That makes a lot of sense. I'm trying to do the same.
>
> What I meant was that maybe a way to enhance check-url-quick would be to add
> a patch for that (a configurable domain allowlist) via Bugzilla.

Absolutely - I've suggested a syspref to the library where they could configure their allowlisted domains, and will see if that's a suitable solution for them.
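To make the domain-allowlist idea concrete, here is a small sketch under stated assumptions: the %skip_domains contents, the should_check_url() helper, and the idea of loading the list from a system preference are all illustrative, not an existing Koha API or the proposed syspref.

    # Illustrative sketch of skipping domains that are known to block automated
    # link checkers, so they never produce false positives in the report.
    use Modern::Perl;
    use URI;

    # Hosts that block the checker anyway. In Koha this list could be read
    # from a system preference or a config file (assumption for this example).
    my %skip_domains = map { $_ => 1 } qw(
        www.tandfonline.com
        onlinelibrary.wiley.com
    );

    sub should_check_url {
        my ($url) = @_;

        # Malformed URLs have no host; treat them as checkable so they still
        # get reported by the normal check.
        my $host = eval { URI->new($url)->host } // q{};
        return !exists $skip_domains{$host};
    }

    for my $url ( 'http://www.tandfonline.com/loi/tejp20', 'https://example.org/ok' ) {
        say( should_check_url($url)
            ? "CHECK $url"
            : "SKIP  $url (allowlisted as a known blocker)" );
    }

This keeps the checker from reporting those sites as broken, at the cost of never actually checking them, which is the trade-off described in comment #8.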