| Summary: | RSS feed validation | | |
|---|---|---|---|
| Product: | Koha | Reporter: | Nicolas Hunstein <nicolas.hunstein> |
| Component: | OPAC | Assignee: | Owen Leonard <oleonard> |
| Status: | NEW | QA Contact: | Testopia <testopia> |
| Severity: | enhancement | | |
| Priority: | P5 - low | CC: | dcook |
| Version: | Main | | |
| Hardware: | All | | |
| OS: | All | | |
| GIT URL: | | Initiative type: | --- |
| Sponsorship status: | --- | Comma delimited list of Sponsors: | |
| Crowdfunding goal: | 0 | Patch complexity: | --- |
| Documentation contact: | | Documentation submission: | |
| Text to go in the release notes: | | Version(s) released in: | |
| Circulation function: | | | |
Description
Nicolas Hunstein
2026-02-03 09:26:34 UTC
I use this site: https://validator.w3.org/feed/, and though it has a couple of recommendations it says the feed is valid. I would think the W3C is a more reputable option for checking validation.

Also, are you sure the correct content is being read by that validator? When I test with our production site it ends up validating a Cloudflare error page.

Comment 2
Katrin Fischer

@Owen: I also had a look at this over the Christmas holidays for an old ticket. I had copied the contents of the page, and that validated with warnings, as you said.

The problem we have, not using Cloudflare or another proxy, is that the URL doesn't work, and what we hear from the libraries is that they can't subscribe to it in their feed readers.

But that said... I wonder if the antibot measures prevent it, and that could well be. We are using Koha's internal one, and that blocks direct access to the result page somewhat.

Comment 3

I'm curious to know what error reporting the feed readers might offer. Mine reports "403 Forbidden" with our ByWater-hosted OPAC.

Comment 4
David Cook

(In reply to Katrin Fischer from comment #2)
> But that said... I wonder if the antibot measures prevent it, and that
> could well be. We are using Koha's internal one, and that blocks direct
> access to the result page somewhat.

Yeah, that's a possibility. I don't think we have anyone with RSS (our libraries typically ask us to remove it altogether), but that thought has crossed my mind previously. That should probably be solvable with configuration.

Comment 5

(In reply to David Cook from comment #4)
> Yeah, that's a possibility. I don't think we have anyone with RSS (our
> libraries typically ask us to remove it altogether), but that thought has
> crossed my mind previously. That should probably be solvable with
> configuration.

That said, if a careless bot is hitting the RSS, then that's a problem. There are certainly tradeoffs...

Comment 6
Katrin Fischer

(In reply to David Cook from comment #4)
> (In reply to Katrin Fischer from comment #2)
> > But that said... I wonder if the antibot measures prevent it, and that
> > could well be. We are using Koha's internal one, and that blocks direct
> > access to the result page somewhat.
>
> Yeah, that's a possibility. I don't think we have anyone with RSS (our
> libraries typically ask us to remove it altogether), but that thought has
> crossed my mind previously. That should probably be solvable with
> configuration.

Can you suggest something maybe? Looking at the code I am a bit lost on how to do it, since we only include a list of pages.

Comment 7

(In reply to Katrin Fischer from comment #6)
> Can you suggest something maybe? Looking at the code I am a bit lost on how
> to do it, since we only include a list of pages.

I think there are probably a lot of different ways it could be done. You could add another block between the setting of ANTIBOT_DO and the checking of ANTIBOT_DO. Something like...

```apache
RewriteCond expr "%{REQUEST_URI} =~ m#^/cgi-bin/koha/(opac-search.pl)$#"
RewriteCond %{QUERY_STRING} "format=rss"
RewriteRule ^ - [E=ANTIBOT_DO:false]
```

I haven't tried it but something like that could potentially work.
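For anyone picking this up, here is a rough sketch of where such an exemption might sit in an Apache virtual host. The placeholder comments stand in for Koha's shipped antibot rules, which are not reproduced here; only the middle block comes from comment 7 above, and none of this has been tested against a real Koha install.

```apache
<VirtualHost *:443>
    RewriteEngine on

    # Placeholder: Koha's antibot rules are assumed to set ANTIBOT_DO for
    # protected pages somewhere above this point (not the real rules).

    # Exempt RSS search results before ANTIBOT_DO is acted on:
    RewriteCond expr "%{REQUEST_URI} =~ m#^/cgi-bin/koha/(opac-search.pl)$#"
    RewriteCond %{QUERY_STRING} "format=rss"
    RewriteRule ^ - [E=ANTIBOT_DO:false]

    # Placeholder: the rules that check ANTIBOT_DO and serve the bot
    # challenge are assumed to come after this point (not the real rules).
</VirtualHost>
```

One caveat: `[E=ANTIBOT_DO:false]` sets the variable to the literal string "false" rather than unsetting it. If the downstream rules only test whether ANTIBOT_DO is set at all, `[E=!ANTIBOT_DO]` (which unsets the variable) may be the safer flag.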