Bug 2959 - Enhancements for link checker
Summary: Enhancements for link checker
Status: CLOSED FIXED
Alias: None
Product: Koha
Classification: Unclassified
Component: Command-line Utilities
Version: Main
Hardware: PC All
Importance: P3 enhancement
Assignee: Frédéric Demians
QA Contact:
URL: misc/cronjobs/check-url.pl
Keywords:
Depends on:
Blocks: 7963
Reported: 2009-02-12 01:48 UTC by Galen Charlton
Modified: 2013-12-05 20:04 UTC
CC List: 4 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Patch to enhance URL checking in 856 tags (13.79 KB, patch)
2010-02-11 15:01 UTC, Chris Cormack

Description Chris Cormack 2010-05-21 01:04:46 UTC


---- Reported by gmcharlt@gmail.com 2009-02-12 13:48:25 ----

Various ideas for enhancing check-url.pl:

* include the HTTP response code in the output so that a librarian dealing with bad links can easily distinguish between resources that are gone, temporarily or permanently, and ones that have moved (see the sketch after this list).

* store the report in the database and add an interface for the librarian to edit the bibs directly

* add checking of authority records

* add link checking to the bib editor
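
For the first item, here is a minimal sketch of what reporting the status code could look like with LWP::UserAgent (illustrative only, not the code in check-url.pl; the HEAD request and the output format are assumptions):

  #!/usr/bin/perl
  # Sketch: print each URL with its HTTP status code so a librarian can
  # tell a 404 (gone) from a 301/302 (moved).
  use strict;
  use warnings;
  use LWP::UserAgent;

  my $ua = LWP::UserAgent->new( timeout => 10 );
  for my $url (@ARGV) {
      my $res = $ua->head($url);    # HEAD avoids downloading the body
      printf "%s\t%d %s\n", $url, $res->code, $res->message;
  }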



---- Additional Comments From gmcharlt@gmail.com 2009-02-26 13:22:24 ----

Pushed a patch that addressed the HTTP response code and modularized the URL checker.  Thanks!



---- Additional Comments From dschust1@gmail.com 2010-02-11 14:49:39 ----

This is designed as a cron job you could run, say, Sunday evening when things are slow; your cataloger could then look at the page with errors on Monday, or throughout the week, and resolve the URL issues.

More information can be found at:

http://wiki.koha.org/doku.php?id=en:development:check-url_enhancements#comments

Example command line that only puts out the "bad" URLs (standard dependencies for Perl, directory, etc. apply):

perl check-url.pl --html --htmldir=/path to docs/koha-tmpl --host=http://koha.xxx.xxx (your server here, :8080, or :80 if required for staff access)

--host= ** if this isn't correct, the links to the bibs will not work for direct editing.
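
For reference, a hypothetical crontab entry along those lines (the schedule, script path, and --host value are placeholders, not taken from an actual installation):

  # Run the link checker every Sunday at 22:00 and write the HTML report
  # where the staff client can reach it.
  0 22 * * 0  perl /usr/share/koha/bin/cronjobs/check-url.pl --html --htmldir=/path/to/koha-tmpl --host=http://koha.example.org:8080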



---- Additional Comments From dschust1@gmail.com 2010-02-11 15:01:15 ----

Created an attachment
Patch to enhance URL checking in 856 tags





---- Additional Comments From frederic@tamil.fr 2010-02-12 06:45:43 ----

> Created an attachment
> Patch to enhance URL checking in 856 tags

This patch adds two things:

  (1) Avoid re-fetching an already tested URL
  (2) Beautify the HTML output.

(2) could/must be done outside this script (imo) with CSS rather than hardcoded in HTML <td> tags.

(1) is a great idea, though the list of already-checked bad URLs should be stored in a hash rather than in an array.
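
A minimal sketch of the hash-based cache described in (1) (names are illustrative, not taken from the attached patch):

  # Sketch: remember URLs that already failed so they are not fetched
  # again during the same run.
  my %bad_url;    # url => HTTP status code of the failed attempt

  sub check_url {
      my ( $ua, $url ) = @_;
      return $bad_url{$url} if exists $bad_url{$url};    # O(1) cache hit
      my $res = $ua->head($url);
      return undef if $res->is_success;                  # good URL, not cached
      return $bad_url{$url} = $res->code;                # cache and report the failure
  }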




---- Additional Comments From dschust1@gmail.com 2010-02-12 15:55:02 ----

(In reply to comment #4)
> > Created an attachment
> > Patch to enhance URL checking in 856 tags
> 
> This patch adds two things:
> 
>   (1) Avoid re-fetching an already tested URL
>   (2) Beautify the HTML output.
> 
> (2) could/must be done outside this script (imo) with CSS rather than hardcoded
> in HTML <td> tags.
> 
> (1) is a great idea, though the list of already-checked bad URLs should be
> stored in a hash rather than in an array.
> 

2 - Before, there wasn't a way to get at the output, so at least a "Librarian" can now access it fairly easily.  The output page is defined at the command-line level.  The information is passed to the HTML through the script, and I am a newbie at doing what you suggest with CSS.  My staff programmer knows Perl, but not CSS at this point.  I would hate to see that item be a blocker for submission.



---- Additional Comments From gmcharlt@gmail.com 2010-02-15 00:48:25 ----

Pushed Frédéric's patch to cache bad URLs.  Thought occurs: may as well cache *all* URLs that are checked; no point in checking a good URL more than once during the run.



---- Additional Comments From frederic@tamil.fr 2010-02-15 06:53:33 ----

> Pushed Frédéric's patch to cache bad URLs.  Thought occurs: may as well cache
> *all* URLs that are checked; no point in checking a good URL more than once
> during the run.

I also thought about that, and:

- For a DB with a large number of URLs, the memory usage could exceed
  server capacity.
- We can assume that the number of repeated URLs is low, so caching all
  URLs means spending memory for few duplicates and little gain.
- Caching good URLs won't improve the speed of this script much, not as
  much as caching bad ones does. Good URLs are fetched within a delay
  shorter than the LWP timeout: it's generally quick. Bad URLs block the
  script until the LWP timeout (180 seconds by default).

Your previous suggestion of a DB backend is a good idea and would improve
the efficiency and usability of the URL-checking process, but it requires
time.

David S.: drop me an email if you need help integrating CSS/jQuery into
your script.




---- Additional Comments From gmcharlt@gmail.com 2010-02-16 11:49:17 ----

Pushed timeout patch to HEAD for inclusion in 3.2.
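
For context, the timeout is set on the LWP::UserAgent object; a sketch (the 15-second value and the option name are assumptions, not necessarily what the pushed patch uses):

  # Sketch: a shorter timeout keeps a dead host from blocking the run for
  # the default 180 seconds per URL.
  use LWP::UserAgent;
  my $timeout = 15;    # e.g. taken from a --timeout command-line option
  my $ua = LWP::UserAgent->new( timeout => $timeout );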



--- Bug imported by chris@bigballofwax.co.nz 2010-05-21 01:04 UTC  ---

This bug was previously known as _bug_ 2959 at http://bugs.koha.org/cgi-bin/bugzilla3/show_bug.cgi?id=2959
Imported an attachment (id=999)

Actual time not defined. Setting to 0.0
The original submitter of attachment 999 is unknown.
   Reassigning to the person who moved it here: chris@bigballofwax.co.nz.

Comment 1 Chris Cormack 2013-01-01 23:38:01 UTC
This has been implemented in other bugs.