Bug 4042 - Public OPAC search can fall prey to web crawlers
Summary: Public OPAC search can fall prey to web crawlers
Status: In Discussion
Alias: None
Product: Koha
Classification: Unclassified
Component: OPAC
Version: Main
Hardware: All
OS: All
Importance: P5 - low enhancement
Assignee: Bob Birchall
QA Contact: Bugs List
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2010-01-13 11:23 UTC by Rick Welykochy
Modified: 2024-04-21 10:15 UTC
CC List: 12 users

See Also:
Change sponsored?: ---
Patch complexity: ---
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments

Description Chris Cormack 2010-05-21 01:22:29 UTC


---- Reported by rick@praxis.com.au 2010-01-13 23:23:40 ----

The OPAC search and OPAC advanced searches are accessible
by the public from the Koha OPAC home page. Consequently, an overzealous
web crawler indexing the site using the opac-search.pl script can
impact the performance of the Koha system. In the extreme, an under-
resourced system can experience a denial of service (DoS) when the number
of searches exceeds its capacity.

Proposed Solution: modify the opac-search.pl script in the following manner:

(A) Only allow queries using the POST method; otherwise if GET is used
     return a simple page with "No search result found".

(B) Exception: do allow GET queries, but only if the HTTP_REFERER
     matches the SERVER_NAME. This allows searches made via links
     within the web site to keep working.

(C) Make this behavior optional by adding a new flag to the system prefs.


Here is the small code segment added to opac-search.pl, immediately after
the BEGIN block:


# Refuse searches that are neither POSTed nor referred from this server.
if (   ( $ENV{HTTP_REFERER} // '' ) !~ /\Q$ENV{SERVER_NAME}\E/
    && $ENV{REQUEST_METHOD} ne 'POST' )
{
    print "Content-type: text/html\n\n";
    print "<h1>Search Results</h1>Nothing found.\n";
    exit;
}
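
For item (C), a minimal sketch of how this check could be gated by a new
system preference -- the preference name "OPACRestrictSearchToPOST" is
invented here, but C4::Context->preference is the standard Koha accessor:

use C4::Context;

# Only run the referer/method check above when the (hypothetical)
# preference is switched on.
if ( C4::Context->preference('OPACRestrictSearchToPOST') ) {
    # ... referer/method check from the segment above ...
}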


CAVEAT: This solution does not allow one to paste an "opac-search.pl"
   link into the browser and have it work as previously expected. But
   this was the cause of the problem in the first place. A better solution
   is to require a user to log in to the OPAC before allowing a search.

Addendum: also install a robots.txt file at the following location
in the Koha source tree to stop web crawlers from using the OPAC search.

    opac/htdocs/robots.txt

The robots.txt file should contain the following contents, which deny all
access to indexing engines. You can learn more about robots.txt on the
web, and configure it to allow some indexing if you wish.

-----------------------------
User-agent: *
Disallow: /
-----------------------------



---- Additional Comments From rick@praxis.com.au 2010-01-13 23:25:06 ----

The proposed solution has been implemented and tested by Calyx Information Essentials in Australia. We are no longer experiencing any problems relating to web crawlers DoS-ing our Koha server.



---- Additional Comments From oleonard@myacpl.org 2010-01-13 23:44:36 ----

Why is the use of robots.txt not enough in and of itself to solve the problem?



---- Additional Comments From rick@praxis.com.au 2010-01-14 01:02:45 ----

1. robots.txt only influences law-abiding crawlers.

2. Allowing GET searches from outside means that people could save
search URLs in their pages or bookmarks. When a private crawler or link
checker sees such a search URL, it might fetch it without checking
robots.txt. In general, public GET URLs that do a significant amount of
work are problematic.




---- Additional Comments From ken@calyx.net.au 2010-01-15 01:33:17 ----

Actually my preference for handling an illegal GET request from outside is to send a redirect to the same URL minus the query parameters. That way, if a human happens to have saved a search result by copying the location bar, they don't get "nothing found" when they use the link but are prompted to enter another query. That is more useful than a "nothing found" page.

Also, your "nothing found" message is English-centric and would need translation for sites in other countries. By sending a redirect you don't have to compose any text.
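
A minimal sketch of that redirect idea, assuming the plain CGI.pm API that
opac-search.pl already uses (url() returns the script's URL with the query
string stripped, so the visitor lands on an empty search form):

use CGI qw( -utf8 );

my $cgi = CGI->new;
if (   ( $ENV{HTTP_REFERER} // '' ) !~ /\Q$ENV{SERVER_NAME}\E/
    && $ENV{REQUEST_METHOD} ne 'POST' )
{
    # Same script, no query parameters, no text to translate.
    print $cgi->redirect( $cgi->url() );
    exit;
}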



---- Additional Comments From rick@praxis.com.au 2010-01-15 09:16:09 ----

Ken has a good idea: allow the GET request through, but show the original search page rather than the search results.

To make things easier for the genuine Koha user, i.e. one who will then actually click on the search button (POST) rather than retry the GET, populate the form with the query parameters from the GET request found in the URL.

This saves the user from entering them again. Painless and simple.

Overall this is looking like a good strategy to keep misbehaving bots at bay.

All this is best done in a subroutine residing somewhere in the C4 lib subdirectory so that it can be used anywhere necessary in Koha. Some admins configure Koha to allow public access to other parts of the OPAC as well. 
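
A hedged sketch of such a helper; the module and subroutine names are
invented here, and the system preference name is only a placeholder, but
C4::Context->preference is the standard accessor:

package C4::BotGuard;    # hypothetical module name

use strict;
use warnings;
use C4::Context;

# True when the request is a GET from outside the site and the
# (hypothetical) restricting preference is switched on.
sub is_external_get {
    return 0 unless C4::Context->preference('OPACRestrictSearchToPOST');
    return 0 if $ENV{REQUEST_METHOD} eq 'POST';
    return 0 if ( $ENV{HTTP_REFERER} // '' ) =~ /\Q$ENV{SERVER_NAME}\E/;
    return 1;
}

1;

opac-search.pl could then call C4::BotGuard::is_external_get() and, when it
returns true, re-display the search form with the incoming query terms filled
in instead of running the search.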




---- Additional Comments From nengard@gmail.com 2010-01-15 12:49:53 ----

Correct me if I'm wrong - but this would remove the ability for librarians to send their patrons links to queries that they think will help with their research.  This is a huge deal to the librarians I have trained.  They love this feature because it's something they have never had before.  When I do training, I always make it clear that every link in Koha can be copied and pasted into an email and will work exactly as you would expect.

So -- I want to bring us all back to a suggestion made on the mailing list or IRC - wherever this discussion started - to have a system preference to let the library decide what they want to do.

Many libraries want their catalog indexed by all search engines, and they want to be able to send links to results lists to their patrons.  This is a feature they have now and would miss if it were taken away.  It should be the decision of the librarians and the system admins - not the developers.



---- Additional Comments From magnus@enger.priv.no 2010-01-15 13:04:58 ----

I agree completely with Nicole: being able to send search URLs by email or put them on a webpage is a big feature, not a bug. I also agree that a syspref to turn this behaviour on and off is the way to go.



---- Additional Comments From rick@praxis.com.au 2010-01-15 13:07:41 ----

> Correct me if I'm wrong - but this would remove the ability for librarians to
> send their patrons links to queries that they think will help with their
> research.

Not at all. It has been proposed that there be a system pref to disable this
behavior.

If the behaviour is enabled, all the user has to do is hit the "Search" button
when they follow a search link.


> IRC - wherever this discussion started - to have a system preference to let the
> library decide what they want to do.

Of course. This is necessary.


> Being able to send URLs to searches in email or putting them on a webpage is a big feature, not a bug.

Of course, this is a "bug" repository.

But it is much more as well. This fix proposes to *optionally* make new behaviour available in Koha to prevent web crawlers from DoS-ing your system.

Never fear. This fix will be optional via System Prefs :)




---- Additional Comments From nengard@gmail.com 2010-01-15 13:29:11 ----

Awesome! Then I'm all for it ;) 



--- Bug imported by chris@bigballofwax.co.nz 2010-05-21 01:22 UTC  ---

This bug was previously known as _bug_ 4042 at http://bugs.koha.org/cgi-bin/bugzilla3/show_bug.cgi?id=4042

Actual time not defined. Setting to 0.0
Setting qa contact to the default for this product.
   This bug either had no qa contact or an invalid one.
CC member irma@calyx.net.au does not have an account here

Comment 1 Fred P 2012-02-13 15:46:58 UTC
Very cool. We were experiencing slow-downs related to Google searches. Much of our catalog was visible on Google by searching the library name and book title. We used robots.txt to effectively block Googlebot.

However, Baidu spiderbots from China continue to plague us from time to time. Using robots.txt helped and seems to have decreased the frequency, but it did not completely solve the problem.

We also get hit with port scans through the koha-tmpl directory (maps to root?), although our security seems to be strong enough to resist those.

A system preference option would help us. Thanks for your hard work!
Comment 2 Pablo AB 2014-09-17 13:52:49 UTC
As told here
http://koha.1045719.n5.nabble.com/Help-100-CPU-utilization-running-Koha-tp5809357.html
we could just put a robots.txt like this on /usr/share/koha/opac/htdocs:

  User-agent: *
  Disallow:/cgi-bin/koha/opac-search.pl
  Disallow:/cgi-bin/koha/opac-export.pl
  Disallow:/cgi-bin/koha/opac-showmarc.pl
  Disallow:/cgi-bin/koha/opac-ISBDdetail.pl
  Disallow:/cgi-bin/koha/opac-MARCdetail.pl
Comment 3 Bob Birchall 2016-10-04 10:30:17 UTC
(In reply to Pablo AB from comment #2)
> As told here
> http://koha.1045719.n5.nabble.com/Help-100-CPU-utilization-running-Koha-
> tp5809357.html
> we could just put a robots.txt like this on /usr/share/koha/opac/htdocs:
> 
>   User-agent: *
>   Disallow:/cgi-bin/koha/opac-search.pl
>   Disallow:/cgi-bin/koha/opac-export.pl
>   Disallow:/cgi-bin/koha/opac-showmarc.pl
>   Disallow:/cgi-bin/koha/opac-ISBDdetail.pl
>   Disallow:/cgi-bin/koha/opac-MARCdetail.pl

This is the file we use:
----------------------------
Crawl-delay: 60

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /cgi-bin/koha/opac-search.pl
Disallow: /cgi-bin/koha/opac-showmarc.pl
Disallow: /cgi-bin/koha/opac-detailprint.pl
Disallow: /cgi-bin/koha/opac-ISBDdetail.pl
Disallow: /cgi-bin/koha/opac-MARCdetail.pl
Disallow: /cgi-bin/koha/opac-reserve.pl
Disallow: /cgi-bin/koha/opac-export.pl
Disallow: /cgi-bin/koha/opac-detail.pl
Disallow: /cgi-bin/koha/opac-authoritiesdetail.pl
----------------------------

Can we mark this bug as resolved now?
Comment 4 Katrin Fischer 2016-10-16 13:51:38 UTC
Should we include a default/sample robots.txt with Koha?
Comment 5 Magnus Enger 2016-10-17 06:35:28 UTC
(In reply to Katrin Fischer from comment #4)
> Should we include a default/sample robots.txt with Koha?

There is a file called README.robots at the top of the project (with the other READMEs). Maybe we could include Bob's example there too?
Comment 6 José Anjos 2016-12-22 10:25:27 UTC
I added a robots.txt file to /usr/share/koha/opac/htdocs, but it makes no difference.
I've tried it in /usr/share/koha/opac/htdocs/opac-tmpl too, but the problem persists.
I have non-stop requests from SemrushBot, AhrefsBot and Google.
The server is hitting high CPU usage and running out of memory every 2-3 days.
I want to block those bots...
Comment 7 Fred P 2016-12-22 17:37:57 UTC
I don't believe this is a Koha issue. Any public site can be "hit" by any user. Blocking the Chinese search giant Baidu makes a big difference. Disallow their robots and you will get a lot fewer hits. You can also block by IP address range by editing your Apache .htaccess file. Keep in mind that you want to back that file up before making changes, and take precautions not to block your own access!

In the .htaccess for the appropriate site directory, blocking the range 180.76 would disable Baidu search engines:

order allow,deny
# allow everyone first, then deny by partial IP address
# (without an Allow line, "Order allow,deny" would reject every request)
allow from all
deny from 180.76

Adding this to your root directory as a robots.txt file should warn off Yandex and Baidu robots; however, spiders change and respect for robots.txt varies:

#Baiduspider
User-agent: Baiduspider
Disallow: /

#Yandex
User-agent: Yandex
Disallow: /

It looks like Rick's proposals were adopted. Does this bug need to remain open?
Comment 8 José Anjos 2016-12-23 11:47:25 UTC
I would suggest that a robots.txt like the one in comment 3 should come with fresh installs. Firstly, because most of the time people don't realize that Koha's performance is affected until it crashes.
Secondly, because some bots take a long time to stop, for example:
https://www.semrush.com/bot/
"Please note that there might be a delay up to two weeks before SEMrushBot discovers the changes you made to robots.txt"
Comment 9 Teknia 2019-07-25 07:01:53 UTC
Hi, many good suggestions have been made, but the still-outstanding option of only allowing searches from users registered on the specific Koha site also seems a good one, yet it appears to have been neglected.

Any feedback on this?
Comment 10 tecnicouncoma 2019-10-11 16:24:40 UTC
Hi people. I tried robots.txt and nothing happened. I installed Koha on an i7 computer with 16 GB of RAM.

We are rebooting the system twice a day because of the DoS.

Any alternative procedure to follow?

Thx,
Comment 11 Katrin Fischer 2019-10-12 08:32:29 UTC
(In reply to tecnicouncoma from comment #10)
> Hi people. I tried robots.txt and nothing happened. I installed Koha on an i7
> computer with 16 GB of RAM.
> 
> We are rebooting the system twice a day because of the DoS.
> 
> Any alternative procedure to follow?
> 
> Thx,

Hi, I think you might want to ask on the mailing list how others have dealt with the problem. If your robots.txt was added correctly and the bots are ignoring it, you might want to try blocking them by IP in the firewall.
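
For the firewall route, a plain iptables rule for a single offending address
might look like the following (203.0.113.45 is just a placeholder
documentation address):

iptables -I INPUT -s 203.0.113.45 -j DROP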

There is also the OpacPublic system preference, which can be used to only allow people to search after logging in (to answer comment #9).
Comment 12 Barry Cannon 2020-01-23 12:44:00 UTC
Reading over this bug, it occurs to me that some of the steps we have taken to mitigate this problem might help others. I will post high-level information here in case it helps some people along.


The first step we took (after adding robots.txt) was to add a small piece of code to the OPACHeader syspref that appended a "hidden" a href tag pointing to a script (sneakysnare.pl) on the site. This link would only be visible to bots, and once it was followed a page of useless text would be shown. In the background, though, the script grabbed the source IP address and pushed it into a deny rule on the firewall. This worked well for bots that blindly followed all links from the page. Shortly afterwards we noticed that not all bots were following all links.
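
A rough sketch of what such a trap script could look like, assuming the
firewall job (described below) reads a plain-text deny list; the path and
file layout are invented here, not the actual sneakysnare.pl:

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $cgi       = CGI->new;
my $ip        = $ENV{REMOTE_ADDR} // 'unknown';
my $deny_list = '/var/lib/koha/bot-trap.deny';    # invented path

# Record the visitor so the firewall job can pick it up later.
open my $fh, '>>', $deny_list or die "Cannot open $deny_list: $!";
print {$fh} scalar(localtime) . " $ip\n";
close $fh;

# Serve the decoy page of useless text.
print $cgi->header('text/html');
print "<html><body><p>Nothing interesting here.</p></body></html>\n";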

Our next step was to check the user agent of the incoming traffic. We noticed that there were a lot of user agent strings causing issues. After configuring Apache with a CustomLog of "time,IP,METHOD,URI", we set up a script to run regularly and parse this file for known "bad" user agent strings. We were then able to add these IPs to the firewall to be dropped.
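
As an illustration, an Apache log format along those lines (with the user
agent added so the parser has something to match on) could be declared like
this; the format name and log path are assumptions:

LogFormat "%t,%a,%m,%U,\"%{User-Agent}i\"" botaudit
CustomLog /var/log/apache2/koha-botaudit.log botaudit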

Our current setup is expanded from the above: all our servers use csf/lfd for local firewalling and intrusion detection. We also use ipset extensively for other, non-Koha related services. Csf can be configured to offload deny chains to ipset; this helps iptables and lowers the resource strain on the server. Csf can also be configured to use an include file to deny hosts. By expanding on the sneakysnare script and the user agent Apache log, we created a small job to bring all this together and manage a csf include file. The job checks this new file and, if a new IP address has appeared, adds that IP to the deny set.

In some cases we have observed the server being slowly scraped. This insidious scraping is harder to detect immediately and often slows/hogs resources over a longer period. Quite often the source of these connections is a particular geographical region. If this happens often enough, we can employ geoblocking. Csf can be configured to use Maxmind GeoIP database lookups. Using the configuration file, we specify the country codes we want to block; for example, to block all Irish and British traffic (not that we would!) we enter "IE,GB" into the config file. Once the daemon is restarted, the GeoIP database is referenced and all known CIDR blocks for those countries are loaded into ipset's deny set. Csf can also be configured to "deny all, except": in this setup, placing "IE" in the config file would only allow traffic from Ireland and deny all other traffic.
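
For reference, the country-level settings described above live in
/etc/csf/csf.conf; a hedged example of the two modes (deny listed countries,
or allow only one and drop the rest):

# Deny all traffic from these country codes:
CC_DENY = "IE,GB"

# Or allow only Irish traffic and deny everything else:
CC_ALLOW_FILTER = "IE"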

There are pros and cons to all of the above and consideration should be given before implementation.

Third-party services are also very useful, and moving traffic via CDN providers (and using their security services) will greatly reduce bots, DDoS and other hassle.

Other helpful methods include reverse proxies and mitigating at that level.
Comment 13 David Cook 2022-08-22 05:48:50 UTC
(In reply to Katrin Fischer from comment #4)
> Should we include a default/sample robots.txt with Koha?

It is tempting to add a robots.txt file to the koha-common package.
Comment 14 David Cook 2024-03-21 01:11:57 UTC
(In reply to David Cook from comment #13)
> (In reply to Katrin Fischer from comment #4)
> > Should we include a default/sample robots.txt with Koha?
> 
> It is tempting to add a robots.txt file to the koha-common package.

We still notice users of the standard koha-common package getting bitten by bots due to a lack of a robots.txt file.
Comment 15 Michael 2024-04-21 10:15:13 UTC
Enabling ModSecurity + the ModSecurity Core Rule Set (CRS) blocked it all, but it needs further configuration to let all functions work properly and to override some of the rule sets.
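
For anyone wanting to try the same route, a rough outline on Debian/Ubuntu
(package names and paths may differ per distribution, so treat this as a
sketch rather than a recipe):

apt-get install libapache2-mod-security2 modsecurity-crs
a2enmod security2
# Copy /etc/modsecurity/modsecurity.conf-recommended to modsecurity.conf,
# change SecRuleEngine from DetectionOnly to On, then restart Apache.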