| Summary: | Public OPAC search can fall prey to web crawlers | | |
|---|---|---|---|
| Product: | Koha | Reporter: | Rick Welykochy <rick> |
| Component: | OPAC | Assignee: | Bob Birchall <bob> |
| Status: | In Discussion | QA Contact: | Bugs List <koha-bugs> |
| Severity: | enhancement | | |
| Priority: | P5 - low | CC: | bc, clodagh.kerin, dcook, dcowens76, fred.pierre, joseanjos, koha, magnus, mengu, michael.r.gendy, nicjdevries, pablo.bianchi, patrick.robitaille, tecnicouncoma |
| Version: | Main | | |
| Hardware: | All | | |
| OS: | All | | |
| Change sponsored?: | --- | Patch complexity: | --- |
| Documentation contact: | | Documentation submission: | |
| Text to go in the release notes: | | Version(s) released in: | |
| Circulation function: | | | |
Description
Chris Cormack
2010-05-21 01:22:29 UTC
Very cool. We were experiencing slow-downs related to Google searches. Much of our catalog was visible on Google by searching for the library name and book title. We used robots.txt to effectively block Googlebot. However, Baidu spiderbots from China continue to plague us from time to time. Using the robots.txt helped, but did not completely solve the problem, although it seems to have decreased the frequency. We also get hit with port scans through the koha-tmpl directory (maps to root?), although our security seems to be strong enough to resist those. A system preference option would help us. Thanks for your hard work!

As told here
http://koha.1045719.n5.nabble.com/Help-100-CPU-utilization-running-Koha-tp5809357.html
we could just put a robots.txt like this in /usr/share/koha/opac/htdocs:

User-agent: *
Disallow: /cgi-bin/koha/opac-search.pl
Disallow: /cgi-bin/koha/opac-export.pl
Disallow: /cgi-bin/koha/opac-showmarc.pl
Disallow: /cgi-bin/koha/opac-ISBDdetail.pl
Disallow: /cgi-bin/koha/opac-MARCdetail.pl

(In reply to Pablo AB from comment #2)
> As told here
> http://koha.1045719.n5.nabble.com/Help-100-CPU-utilization-running-Koha-tp5809357.html
> we could just put a robots.txt like this in /usr/share/koha/opac/htdocs:
>
> User-agent: *
> Disallow: /cgi-bin/koha/opac-search.pl
> Disallow: /cgi-bin/koha/opac-export.pl
> Disallow: /cgi-bin/koha/opac-showmarc.pl
> Disallow: /cgi-bin/koha/opac-ISBDdetail.pl
> Disallow: /cgi-bin/koha/opac-MARCdetail.pl

This is the file we use:
----------------------------
Crawl-delay: 60

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /cgi-bin/koha/opac-search.pl
Disallow: /cgi-bin/koha/opac-showmarc.pl
Disallow: /cgi-bin/koha/opac-detailprint.pl
Disallow: /cgi-bin/koha/opac-ISBDdetail.pl
Disallow: /cgi-bin/koha/opac-MARCdetail.pl
Disallow: /cgi-bin/koha/opac-reserve.pl
Disallow: /cgi-bin/koha/opac-export.pl
Disallow: /cgi-bin/koha/opac-detail.pl
Disallow: /cgi-bin/koha/opac-authoritiesdetail.pl
----------------------------

Can we mark this bug as resolved now?

Should we include a default/sample robots.txt with Koha?

(In reply to Katrin Fischer from comment #4)
> Should we include a default/sample robots.txt with Koha?

There is a file called README.robots at the top of the project (with the other READMEs). Maybe we could include Bob's example there too?

I added a robots.txt file to /usr/share/koha/opac/htdocs, but it makes no difference. I have tried it in /usr/share/koha/opac/htdocs/opac-tmpl too, but the problem persists. I have non-stop requests from SemrushBot, AhrefsBot and Google. The server hits high CPU usage and runs out of memory every 2-3 days. I want to kill those bots...

I don't believe this is a Koha issue. Any public site can be "hit" by any user. Blocking the Chinese search giant Baidu makes a big difference. Disallow their robots and you will get far fewer hits. You can also block by IP address range by editing your Apache .htaccess file. Keep in mind that you want to back that file up before making changes, and take precautions not to block your own access! In the .htaccess for the appropriate site directory, blocking the range 180.76 would disable Baidu search engines:

order allow,deny
# partial IP address blocking
deny from 180.76

Adding this to your root directory as a robots.txt file should warn off Yandex and Baidu robots; however, spiders change and respect for robots.txt varies:

# Baiduspider
User-agent: Baiduspider
Disallow: /

# Yandex
User-agent: Yandex
Disallow: /

It looks like Chris' proposals were adopted. Does this bug need to remain open?
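A side note on the .htaccess snippet quoted above: the Order/Deny directives are Apache 2.2 style, and with no Allow line 2.2 would end up denying everyone; on Apache 2.4 they only work if mod_access_compat is enabled. A minimal sketch covering both generations, assuming the 180.76 range mentioned above is still accurate for Baiduspider (worth verifying), might be:

----------------------------
# Apache 2.2 (or 2.4 with mod_access_compat):
Order allow,deny
Allow from all
Deny from 180.76

# Apache 2.4 (mod_authz_core):
<RequireAll>
    Require all granted
    Require not ip 180.76.0.0/16
</RequireAll>
----------------------------

Also, for a .htaccess file under /usr/share/koha/opac/htdocs to be read at all, the matching <Directory> block in the Apache configuration must have AllowOverride set to permit Limit/AuthConfig directives rather than AllowOverride None; that may be why deny statements placed there (see the questions at the end of this thread) appear to have no effect.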
I would suggest that a robots.txt like the one in comment 3 should come with fresh installs. Firstly, because most of the time people don't realize that Koha's performance is affected until it crashes. Secondly, because some bots take longer to stop, like https://www.semrush.com/bot/: "Please note that there might be a delay up to two weeks before SEMrushBot discovers the changes you made to robots.txt"

Hi, many good suggestions have been made, but the option of only allowing searches by users who are registered on the specific Koha site also seems a good one, yet appears neglected. Any feedback on this?

Hi people. I tried robots.txt and nothing happened. I installed Koha on an i7 computer with 16 GB of RAM. We are rebooting the system twice a day because of the DoS. Any alternative procedure to follow? Thx,

(In reply to tecnicouncoma from comment #10)
> Hi people. I tried robots.txt and nothing happened. I installed Koha on an i7
> computer with 16 GB of RAM.
>
> We are rebooting the system twice a day because of the DoS.
>
> Any alternative procedure to follow?
>
> Thx,

Hi, I think you might want to ask on the mailing list how others have dealt with the problem. If your robots.txt was added correctly and the bots are ignoring it, you might want to try and block them by IP in the firewall. There is also the OPACPublic system preference that will only allow people to search after they have registered (to answer comment #9).

Reading over this bug, it occurs to me that some of the steps we have taken to mitigate this problem might help others. I will post high-level information here, in case it helps some people along.

The first step we took (after adding robots.txt) was to add a small piece of code to the OPACHeader syspref that appended a "hidden" a href tag pointing to a script (sneakysnare.pl) on the site. This would only be visible to bots, and once the link was followed a page would show with a lot of useless text. In the background, though, the script grabbed the source IP address and pushed it into a deny rule on the firewall. This worked well for bots that blindly followed all links from the page.

Shortly after, we noticed that not all bots were following all links. Our next step was to check the useragent of the incoming traffic. We noticed that there were a lot of useragent strings causing issues. Having configured Apache with a CustomLog of "time,IP,METHOD,URI", we set up a script to run regularly and parse this file for known "bad" useragent strings. We were then able to add these IPs to the firewall to be dropped.

Our current setup is expanded from the above: all our servers use csf/lfd for local firewall and intrusion detection. We also use ipset extensively for other non-Koha-related services. Csf can be configured to offload deny chains to ipset, which helps iptables and lowers the resource strain on the server. Csf can also be configured to use an Include file to deny hosts. By expanding on the sneakysnare script and the useragent Apache log, we created a small job to bring all this together and manage a csf Include file. The job checks this new file, and if a new IP address has appeared it adds that IP to the deny set.

In some cases we have observed the server being slowly scraped. This insidious scraping is harder to detect immediately and often slows/hogs resources over a longer period. Quite often the source of these connections is a particular geographical region. If this happens often enough, we can employ geoblocking. Csf can be configured to use MaxMind GeoIP database lookups.
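To make the log-parsing step described above more concrete, here is a rough sketch of what such a job could look like. This is hypothetical, not the actual script referred to in the comment: the log path, the csf Include file path and the user-agent list are assumptions, and it presumes the CustomLog format also records the User-Agent header (e.g. Apache's combined log format).

----------------------------
#!/usr/bin/perl
# Hypothetical sketch: scan an Apache access log (combined format, which
# includes the User-Agent field) for known "bad" user agents and append
# any new source IPs to a csf Include file. Paths and the agent list are
# assumptions, not part of Koha or csf.
use strict;
use warnings;

my $log_file   = '/var/log/apache2/opac-access.log';   # assumed location
my $deny_file  = '/etc/csf/csf.deny.include';          # assumed csf Include file
my @bad_agents = qw(SemrushBot AhrefsBot MJ12bot Baiduspider);

# IPs already listed in the Include file
my %known;
if ( open my $fh, '<', $deny_file ) {
    while ( my $line = <$fh> ) {
        next if $line =~ /^\s*#/;
        $known{$1} = 1 if $line =~ /^(\S+)/;
    }
    close $fh;
}

# Collect offending source IPs from the access log
my %new;
open my $log, '<', $log_file or die "Cannot open $log_file: $!";
while ( my $line = <$log> ) {
    my ($ip) = $line =~ /^(\S+)/ or next;
    next if $known{$ip};
    for my $agent (@bad_agents) {
        if ( index( $line, $agent ) >= 0 ) {
            $new{$ip} = $agent;
            last;
        }
    }
}
close $log;

# Append new offenders; csf/lfd then needs a reload to pick them up
if (%new) {
    open my $out, '>>', $deny_file or die "Cannot append to $deny_file: $!";
    print {$out} "$_ # $new{$_}\n" for sort keys %new;
    close $out;
}
----------------------------

Run from cron and followed by a csf reload (csf -r), this mirrors the "small job" described above; a real setup would likely also need locking, log rotation handling, and whitelisting of known-good crawlers.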
Using the configuration file, we specify the country codes we want to block. For example, to block all Irish and British traffic (not that we would!) we enter "IE,GB" into the config file. Once the daemon is restarted, the GeoIP database is referenced and all known CIDR blocks for those countries are loaded into ipset's deny set.

Csf can also be configured to "deny all, except". In this setup, placing "IE" in the config file would only allow traffic from Ireland and deny all other traffic.

There are pros and cons to all of the above, and consideration should be given before implementation. Third-party services are also very useful: moving traffic via CDN providers (and using their security services) will greatly reduce bots, DDoS and other hassle. Other helpful methods include reverse proxies and mitigating at that level.

(In reply to Katrin Fischer from comment #4)
> Should we include a default/sample robots.txt with Koha?

It is tempting to add a robots.txt file to the koha-common package.

(In reply to David Cook from comment #13)
> (In reply to Katrin Fischer from comment #4)
> > Should we include a default/sample robots.txt with Koha?
>
> It is tempting to add a robots.txt file to the koha-common package.

We still notice users of the standard koha-common package getting bitten by bots due to the lack of a robots.txt file.

Enabling ModSecurity + ModSecurity-CRS blocked it all, but it needs further configuration to let all functions work properly and to override some rule sets.

I am trying to make sense of all that is in the comment thread. I have questions:

1. I was unable to find the word "BEGIN" in /usr/share/koha/opac/cgi-bin/opac/opac-search.pl. Part of me wants to ignore this for now, since it has knock-on effects, but can someone tell me if I am looking in the right place?

2. I was not sure where to put robots.txt, so I put it in both places referenced in https://wiki.koha-community.org/wiki/Koha_Tuning_Guide. Is there a problem placing it in both places?

3. As for blocking specific IP addresses, I can try to play whack-a-mole with IP addresses using ufw, but it seems to me that blocking a range of IP addresses would be more efficient. So I started placing deny statements in /usr/share/koha/opac/htdocs/.htaccess, but I am not seeing a result for my labors. I am wondering if there is a better place to put that file.
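For reference, the geoblocking behaviour described a few comments up maps onto country-code settings in csf's configuration file (typically /etc/csf/csf.conf). The option names below are taken from a stock csf install and should be checked against the installed version; csf also needs MaxMind GeoIP data (and, on recent versions, a licence key) configured for the lookups to work:

----------------------------
# /etc/csf/csf.conf (excerpt)

# Deny all traffic from these ISO country codes (comma separated)
CC_DENY = "IE,GB"

# "Deny all, except": only allow traffic from these country codes
CC_ALLOW_FILTER = "IE"
----------------------------

After editing, restarting csf and lfd (e.g. csf -ra) loads the corresponding CIDR blocks into the deny sets.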