Bug 22223 - Item url double-encode when parameter is an encoded URL
Summary: Item url double-encode when parameter is an encoded URL
Status: In Discussion
Alias: None
Product: Koha
Classification: Unclassified
Component: OPAC
Version: Main
Hardware: All
OS: All
Importance: P5 - low, normal, with 10 votes
Assignee: Owen Leonard
QA Contact: Tomás Cohen Arazi
URL:
Keywords:
Duplicates: 23535, 29208
Depends on: 21526
Blocks:
Reported: 2019-01-29 05:14 UTC by David Cook
Modified: 2022-12-06 05:21 UTC
CC List: 9 users

See Also:
Change sponsored?: ---
Patch complexity: Trivial patch
Documentation contact:
Documentation submission:
Text to go in the release notes:
Version(s) released in:


Attachments
Bug 22223: Try to not double encoded URIs in items.uri (3.30 KB, patch)
2019-05-04 15:27 UTC, Jonathan Druart
Bug 22223: Try to not double encoded URIs in items.uri (6.54 KB, patch)
2021-10-13 10:07 UTC, Jonathan Druart
Bug 22223: Try to not double encoded URIs in items.uri (6.69 KB, patch)
2022-03-22 14:47 UTC, Jonathan Druart
Bug 22223: Try to not double encoded URIs in items.uri (6.67 KB, patch)
2022-04-21 14:01 UTC, Jonathan Druart
Bug 22223: Add tests (2.02 KB, patch)
2022-04-21 15:45 UTC, Jonathan Druart
Bug 22223: Try to not double encoded URIs in items.uri (6.72 KB, patch)
2022-04-21 16:16 UTC, Tomás Cohen Arazi
Bug 22223: Add tests (2.10 KB, patch)
2022-04-21 16:17 UTC, Tomás Cohen Arazi

Description David Cook 2019-01-29 05:14:25 UTC
The following use of the "url" filter is problematic:

[% IF Koha.Preference("OPACURLOpenInNewWindow") %]
<a target="_blank" rel="noreferrer" href="[% ITEM_RESULT.uri | url %]" property="url">[% ITEM_RESULT.uri | html %]</a>
[% ELSE %]
<a href="[% ITEM_RESULT.uri | url %]" property="url">[% ITEM_RESULT.uri | html %]</a>
[% END %]

If ITEM_RESULT.uri is "https://idp.com?redirect_url=https%3A%2F%2Fsomewhere_else.com", then the percent signs in the argument to the "redirect_url" parameter will be encoded again and the result will be "https://idp.com?redirect_url=https%253A%252F%252Fsomewhere_else.com", which is obviously an invalid URL.

Can we really expect that no one will ever include a URL with URI encoded parameters?
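A minimal sketch that reproduces the double-encoding with Template Toolkit's "url" filter (the hostnames are just the placeholders used above):

#!/usr/bin/perl
use strict;
use warnings;
use Template;

# An item URI whose query string already contains a percent-encoded URL.
my $uri = 'https://idp.com?redirect_url=https%3A%2F%2Fsomewhere_else.com';

my $template = Template->new();
my $output;
$template->process( \'[% uri | url %]', { uri => $uri }, \$output );
print "$output\n";

# Prints https://idp.com?redirect_url=https%253A%252F%252Fsomewhere_else.com
# because the "url" filter escapes the "%" signs a second time, which breaks
# the redirect_url parameter.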
Comment 1 Jonathan Druart 2019-02-15 13:52:40 UTC
Done with a script, see
  commit 582502644801b44595497caf6bbee449f0347238
  Bug 21526: uri escape TT variables when used in 'a href'
We may need to adjust some occurrences manually.
Comment 2 David Cook 2019-02-18 04:39:45 UTC
(In reply to Jonathan Druart from comment #1)
> Done with a script, see
>   commit 582502644801b44595497caf6bbee449f0347238
>   Bug 21526: uri escape TT variables when used in 'a href'
> We may need to adjust some occurrences manually.

I'm not sure what you're saying, Jonathan. Do you mean that the filter was added by the script and that we need to remove the filters manually?

If so, what would prevent the filters from being re-added by a script in the future?
Comment 3 Jonathan Druart 2019-02-22 12:06:00 UTC
(In reply to David Cook from comment #2)
> (In reply to Jonathan Druart from comment #1)
> > Done with a script, see
> >   commit 582502644801b44595497caf6bbee449f0347238
> >   Bug 21526: uri escape TT variables when used in 'a href'
> > We may need to adjust some occurrences manually.
> 
> I'm not sure what you're saying, Jonathan. Do you mean that the filter was
> added by the script and that we need to remove the filters manually?
> 
> If so, what would prevent the filters from being re-added by a script in the
> future?

Did you read the commit message and the bug description?
I wrote a script to guess what needed to be escaped correctly: in <a href=/uri?param=[% value %]>, 'value' must be uri escaped, not html escaped.

This is true in ~90% of the situations; the others (specific cases) need to be handled separately and fixed manually.
If you find one, you can provide a patch and I will test it.
Comment 4 David Cook 2019-02-24 23:34:06 UTC
(In reply to Jonathan Druart from comment #3)
> Did you read the commit message and the bug description?

No, I didn't look it up in Git. Like Stackoverflow, I think it makes sense to include the relevant content in the forum rather than sending people off somewhere else. Providing a link isn't the same thing as providing a response. 

> I wrote a script to guess what needed to be escaped correctly, in <a
> href=/uri?param=[% value %]>, 'value' must be uri escaped, not html escaped.
> 

I think you've misunderstood me. I'm saying href="[% ITEM_RESULT.uri | url %]" is a problem because ITEM_RESULT.uri may already contain an escaped URL, for instance "https://idp.com?redirect_url=https%3A%2F%2Fsomewhere_else.com". If you use a filter like [% ITEM_RESULT.uri | url %], that makes it double-encoded, which breaks the URL. It's a different use case: I'm not describing building a URL in the template, I'm talking about when an entire URL is already provided. Filtering it is problematic because you can't know how the URL data has already been handled. (A person could write a filter that parses the URL, escapes any unescaped parameters, and rebuilds the URL, but that's also more work than I suspect anyone wants to do right now.)
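Roughly, such a filter might look like this (a sketch only; the sub name is made up, and it leans on the URI module rather than anything already in Koha):

use URI;

# Decompose the URL, read the query back as decoded key/value pairs, then
# rebuild the query so every value is percent-encoded exactly once.
sub reencode_query_values {
    my ($url) = @_;
    my $uri = URI->new($url);
    if ( $uri->can('query_form') && defined $uri->query ) {
        my @pairs = $uri->query_form;   # values come back decoded
        $uri->query_form(@pairs);       # and are re-encoded once here
    }
    return $uri->as_string;
}

An already-encoded "redirect_url=https%3A%2F%2F..." and an unencoded "q=https://..." would both come out encoded exactly once.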

> This is true in ~90% of the situations, others (specific cases) need to be
> handled separately and fixed manually.

This is what I don't understand. I understand how the template can be fixed manually, but can you explain to me how the scripts for auto-adding filters will ignore manually fixed cases?

*clicks through to https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=21526*

Are you referring to use of $raw instead? I don't understand what you're trying to say. 

> If you found one you can provide a patch and I will test it.

This also confuses me. What do you mean by "one" here?
Comment 5 Katrin Fischer 2019-02-27 17:13:16 UTC
(In reply to David Cook from comment #4)
> (In reply to Jonathan Druart from comment #3)
> > Did you read the commit message and the bug description?
> 
> No, I didn't look it up in Git. Like Stackoverflow, I think it makes sense
> to include the relevant content in the forum rather than sending people off
> somewhere else. Providing a link isn't the same thing as providing a
> response. 

Commit message and bug description can both be viewed on bugzilla. The commit message is always supposed to contain the most current test plan, so it's often helpful to start there.

If I understand the issue correctly:

Catalogers might copy escaped and unescaped URLs into 856$u/952$u/955? and that leaves us with the problem that we might not use the correct filter in the templates.

What can we do here? Is there a way to determine safely whether a URL has already been escaped? I assume checking for something like % might work?
Comment 6 David Cook 2019-02-28 03:42:14 UTC
(In reply to Katrin Fischer from comment #5)
> If I understand the issue correctly:
> 
> Catalogers might copy escaped and unescaped URLs into 856$u/952$u/955? and
> that leaves us with the problem that we might not use the correct filter in
> the templates.
> 

I'd say that catalogers might copy URLs into 856$u/952$u/955? that have escaped values in the query string (actually there could also be escaped values in the path - this is common with URLs used in IIIF Image Servers like Loris*), and we don't want to double-encode those values. 

*Example Loris URL:
http://loris.example.org/loris/1234%2F5678%2F90123.tif/info.json

> What can we do here? Is there a way to determine safely, if an URL has
> already been escaped? I assume checking for something like % might work?

I don't know if there is a best practice for checking URL encoding. I don't think there is.
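A rough heuristic along the lines Katrin suggests might look like this (illustration only; it cannot be made reliable, because a literal "%" is indistinguishable from the start of an escape sequence):

# Heuristic only: a URL "looks" already percent-encoded if it contains at
# least one valid %XX sequence. A literal, unescaped "%" that happens to be
# followed by two hex digits is a false positive, which is why this can
# never be a safe test.
sub looks_percent_encoded {
    my ($url) = @_;
    return ( $url =~ /%[0-9A-Fa-f]{2}/ ) ? 1 : 0;
}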

When it comes to safety, I think we need to think about the origin of the value. It's not a public end user providing this value; it's an authorized staff member. If this were a CMS, we'd be trusting that the people providing the content for the CMS aren't embedding malicious Javascript, right? I mean... if a staff member wanted to inject Javascript, they could use OpacUserJS.

That said, OpacUserJS requires admin privileges. And maybe a less cautious cataloguer could import a MARCXML record with a malicious URL in it.
Comment 7 David Cook 2019-02-28 03:53:27 UTC
I just removed the "url" filter from [% ITEM_RESULT.uri | url %], and I was able to exploit the XSS vulnerability by putting a malicious string into the "952$u" subfield via the Staff Client...

Looking at https://www.owasp.org/index.php/Cross-site_Scripting_(XSS), it talks about how XSS occurs "anywhere a web application uses input from a user within the output it generates without validating or encoding it". 

In the case of ITEM_RESULT.uri, I don't think that encoding it is the right solution. However, we could validate it by writing a custom Template Toolkit filter that uses Perl's URI module to ensure that it is actually a URL?

Trying to think about a valid URL that could also include a malicious payload...

--

I just took a malicious string (perhaps best I don't share it publicly?) that exploits [% ITEM_RESULT.uri %] and put it through URI->new() and it escaped the malicious characters, which is good.

I just put "https://idp.com?redirect_url=https%3A%2F%2Fsomewhere_else.com" through URI->new() and it didn't modify it in any way, which is good.

I just put "http://loris.example.org/loris/1234%2F5678%2F90123.tif/info.json" through URI->new() and it didn't modify it in any way, which is good. 

I just put "https://idp.com?redirect_url=https://somewhere_else.com" through URI->new() and it didn't modify it in any way, which is fine.
Comment 8 David Cook 2019-02-28 04:19:59 UTC
I tried some of the evasion strategies in https://www.owasp.org/index.php/Cross-site_Scripting_(XSS) and Perl's URI->new() handles them.

Ah we can see it at https://metacpan.org/source/OALDERS/URI-1.76/lib/URI.pm#L81 I think.

Basically it encodes everything that isn't in the following:

our $reserved   = q(;/?:@&=+$,[]);
our $mark       = q(-_.!~*'());                                    #'; emacs
our $unreserved = "A-Za-z0-9\Q$mark\E";
our $uric       = quotemeta($reserved) . $unreserved . "%";

Whereas http://template-toolkit.org/docs/manual/Filters.html#section_url encodes everything that is outside the permitted URI characters from RFC 2396, except &, @, /, ;, :, =, +, ? and $.

The key thing is how the URI module doesn't encode the % sign. 

(Of course, reading http://template-toolkit.org/docs/manual/Filters.html#section_uri, it says that "(", ")", "~", "*", "!" and the single quote "'" now need to be escaped according to RFC 3986... and the URI module doesn't do that?)
Comment 9 David Cook 2019-02-28 04:21:58 UTC
I find it interesting that the double quote character is safe in RFC 3986... as that's a character I use for crafting my malicious strings.
Comment 10 David Cook 2019-02-28 05:44:22 UTC
(In reply to David Cook from comment #9)
> I find it interesting that the double quote character is safe in RFC 3986...
> as that's a character I use for crafting my malicious strings.

But in Template 2.28 it is still encoding the double quote character even though it says it doesn't need to... *shrug*
Comment 11 Jonathan Druart 2019-05-04 15:27:17 UTC
Created attachment 89352 [details] [review]
Bug 22223: Try to not double encoded URIs in items.uri

This is just a POC and is not ready for inclusion (Filter.pm and naming
need to be discussed).

Test plan:
Create 2 items, with uri:
  https://www.google.com/url?q=https://buttercup.pw
  https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw

Go to the OPAC detail page of the bib record, see that the links are
displayed how you entered them.
Click on them and confirm that the uri/page is correct
Comment 12 Jonathan Druart 2019-05-04 15:28:02 UTC
Here is a patch for discussion, could you confirm it fixes the issues you faced?
Comment 13 David Cook 2019-11-11 02:32:16 UTC
(In reply to Jonathan Druart from comment #12)
> Here is a patch for discussion, could you confirm it fixes the issues you
> faced?

Apologies for the delay, Jonathan.

I have too many other priorities at the moment, so I probably won't be testing this or following up for a long time.
Comment 14 Lucas Gass 2020-07-29 21:37:44 UTC
I ran into this problem today with double-encoded URL parameters when OPACURLOpenInNewWindow is turned on. Not sure why this is In Discussion; I tested Jonathan's patch (which needs rebasing) and it seems to work.
Comment 15 David Cook 2020-07-30 04:31:33 UTC
(In reply to Lucas Gass from comment #14)
> I ran into this problem today with double encoded URL parameters if
> OPACURLOpenInNewWindow is turned on. Not sure why this is In Discussion, I
> tested Jonathan's patch (which needs rebased) and it seems to work.

I think Jonathan probably put it as "In Discussion" as it was a POC to him.

Looking at the code... I don't really like this patch as it's trying to undo double-encoding (rather than just not double-encoding in the first place).
Comment 16 David Cook 2020-07-30 04:34:13 UTC
I think removing the "url" filter seems like the more reasonable solution to me. 

In this case, the ITEM_RESULT.uri is coming from a stored record in the staff interface, so we don't really need to filter unauthenticated untrusted user input.

That said, an authenticated user with cataloguing privileges could put malicious JavaScript into an 856$u subfield. (Then again, an authenticated user with admin privileges could put malicious JavaScript into OpacUserJS, so an authenticated staff interface user is always a bit of a risk.)
Comment 17 David Cook 2020-07-30 05:09:34 UTC
Consider the following excerpt from the URI standard https://tools.ietf.org/html/std66#section-3.4:

"However, as query components are often used to carry identifying information in the form of "key=value" pairs and one frequently used value is a reference to another URI, it is sometimes better for usability to avoid percent-encoding those characters."

It seems like the standard itself (from 2005) mentions that URI encoding query components indiscriminately can be problematic.
Comment 18 David Cook 2020-07-30 05:15:37 UTC
Template Toolkit uses RFC 2396 (from 1998) for the url filter.

Looking at RFC 2396 https://tools.ietf.org/html/rfc2396#section-2.4.2:

"Data must be escaped if it does not have a representation using an unreserved character"

However, it also says the following:

"A URI is always in an "escaped" form, since escaping or unescaping a completed URI might change its semantics.  Normally, the only time escape encodings can safely be made is when the URI is being created from its component parts; each component may have its own set of characters that are reserved, so only the mechanism responsible for generating or interpreting that component can determine whether or not escaping a character will change its semantics. Likewise, a URI must be separated into its components before the escaped characters within those components can be safely decoded."

Note also the following:

"Because the percent "%" character always has the reserved purpose of being the escape indicator, it must be escaped as "%25" in order to be used as data within a URI.  Implementers should be careful not to escape or unescape the same string more than once, since unescaping an already unescaped string might lead to misinterpreting a percent data character as another escaped character, or vice versa in the case of escaping an already escaped string."

Template Toolkit has changed the behaviour of the "uri" and "url" filters over time (http://www.template-toolkit.org/docs/manual/Filters.html#section_url). I think they also haven't interpreted RFC 3986 correctly with regard to the double quote character...
Comment 19 David Cook 2020-07-30 05:26:45 UTC
Actually, after re-reading those specifications, I think maybe we *should* keep the "url" filter in the Template Toolkit template.

If we consider the staff interface to be the "URI producer", then technically the problem is with storing URLs that contain encoded information.

That said, maybe we would be better off storing encoded URLs in MARC 856$u, and then either passing those through to the interface with $raw, or decoding on the backend and then re-encoding using Template Toolkit. 

Technically, https://www.loc.gov/marc/bibliographic/bd856.html doesn't say anything about percent encoding, although all of its examples are percent encoded.

That goes back to the URI standard that says a URI is always "encoded". Maybe we should be decoding it prior to putting it through the filter.

Let me have a think about that...
Comment 20 David Cook 2020-07-30 05:45:33 UTC
Consider the following code:

#!/usr/bin/perl
use strict;
use warnings;
use URI::Escape;
use Template;
my $one = uri_unescape('https://www.google.com/url?q=https://buttercup.pw"');
my $two = uri_unescape('https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw%22');
my $template = Template->new();
my $output;
$template->process( \*DATA, { one => $one, two => $two }, \$output );
warn $output;
__DATA__
[% one | url %]
[% two | url %]

Consider the output:
https://www.google.com/url?q=https://buttercup.pw%22
https://www.google.com/url?q=https://buttercup.pw%22

Actually... while that *looks* good... that's not necessarily correct, because the "correct" URL is actually https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw%22

That said... the end-target will uri_unescape the value of the "q" key anyway, so it shouldn't matter. Plus the URI standard says you may choose not to percent encode the query component due to usability concerns...

But our test cases might also be overly simplistic. I wonder if I have any real-life complex URLs for digital resources laying around...
Comment 21 David Cook 2020-07-30 06:07:45 UTC
Actually, I'm going back to thinking that the "url" filter should be removed in this case.

The 2005 URI standard says at https://tools.ietf.org/html/std66#section-2.4 that "the only time when octets within a URI are percent-encoded is during the process of producing the URI from its component parts.  This is when an implementation determines which of the reserved characters are to be used as subcomponent delimiters and which can be safely used as data.  Once produced, a URI is always in its percent-encoded form."

This is the only safe time to do the uri encoding.

And if you look at the "uri" filter for Template Toolkit at http://www.template-toolkit.org/docs/manual/Filters.html#section_uri, that's how they do percent-encoding for URIs: build a URI from its component parts and do the escaping at those points.

The "url" filter for encoding whole URLs in Template Toolkit is highly problematic. I can certainly get the appeal. After all, say someone submits a URL to a web form and you want to show them their URL on the response page. Technically speaking, a person should decompose the URL, and then rebuild it from its component parts. The "url" filter is a convenient mechanism, but it seems technically incorrect.

So we maybe shouldn't use the "url" filter... but we need to do *something*. 

The 2005 URI standard is dogmatic. Practically speaking, Koha is given whole URLs by library staff members. It's not building URIs itself from component parts. 

In theory, the library staff members should be passing in URLs that are already encoded, but in practice that is unlikely to happen, unless they're copying/pasting from somewhere else, and even then it may be hit or miss.

In theory, we shouldn't be encoding the URL at the template level since it should already be encoded when it was created... but as above... we can't trust that. 

Perhaps we should implement our own filter that first parses the URI and then decodes its component parts before re-encoding its component parts. 

Of course, https://tools.ietf.org/html/std66#section-2.4 also says "Implementations must not percent-encode or decode the same string more than once, as decoding an already decoded string might lead to misinterpreting a percent data octet as the beginning of a percent-encoding, or vice versa in the case of percent-encoding an already percent-encoded string."

So... technically speaking this is kind of unsolvable in terms of strict adherence to the standard? 

The problem being of course the human element. If we were mechanically building URLs, encoding them, sending them, decoding them, and using them... it would all be fine. 

The problem is human input. 

With the OPAC, we might accept a URL, but it would just be text data. With the staff interface, we're actually using it in HTML...

The most practical option in my mind is to just not use the "url" filter on this field, because I think that we sort of have to assume that the librarian has put in a properly encoded URL in the first place.
Comment 22 David Cook 2020-07-30 06:33:47 UTC
But... doing nothing is also risky. 

I suppose we could validate that it's actually a URL, but that's easy to bypass. 

But we can't re-encode the URI components either because that could compromise the semantics of the URL.

I suppose one could argue that it's better to compromise the semantics of a good URL than to permit an unchallenged bad URL.

We could also check URLs for characters outside the "unreserved character" list, and percent-encode if we find any (by percent encoding components rather than using the "url" filter). You could get false positives but that's better than a false negative.

That should prevent XSS and allow through properly encoded URLs (e.g. "https://idp.com?redirect_url=https%3A%2F%2Fsomewhere_else.com").
Comment 23 David Cook 2020-07-30 06:38:03 UTC
(In reply to David Cook from comment #22)
> We could also check URLs for characters outside the "unreserved character"
> list, and percent-encode if we find any (by percent encoding components
> rather than using the "url" filter). You could get false positives but
> that's better than a false negative.
> 
> That should prevent XSS and allow through properly encoded URLs (e.g.
> "https://idp.com?redirect_url=https%3A%2F%2Fsomewhere_else.com").

Except that I'm wrong. It wouldn't allow through properly encoded URLs because % is not an "unreserved character".

But as I said in https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=22223#c8, that's why the URI module "finds all funny characters and encodes the bytes" when the characters are not reserved, not unreserved, and not a % sign:

https://metacpan.org/source/OALDERS/URI-1.76/lib/URI.pm#L80
Comment 24 David Cook 2020-07-30 06:44:18 UTC
Consider the following code:
#!/usr/bin/perl
use strict;
use warnings;
use URI;
use Template;
my $one = URI->new('https://www.google.com/url?q=https://buttercup.pw"></a><injection></injection><a href="');
my $two = URI->new('https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw');
my $template = Template->new();
my $output;
$template->process( \*DATA, { one => $one, two => $two }, \$output );
warn $output;
__DATA__
[% one %]
[% two %]

Consider the following output:
https://www.google.com/url?q=https://buttercup.pw%22%3E%3C/a%3E%3Cinjection%3E%3C/injection%3E%3Ca%20href=%22
https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw

In this case it has let the correctly percent-encoded URL through, but it has also encoded the malicious URL.
Comment 25 Jonathan Druart 2021-10-13 10:07:08 UTC
Created attachment 126174 [details] [review]
Bug 22223: Try to not double encoded URIs in items.uri

This is just a POC and is not ready for inclusion (Filter.pm and naming
need to be discussed).

Test plan:
Create 2 items, with uri:
  https://www.google.com/url?q=https://buttercup.pw
  https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw

Go to the OPAC detail page of the bib record, see that the links are
displayed how you entered them.
Click on them and confirm that the uri/page is correct
Comment 26 Jonathan Druart 2021-10-13 10:07:26 UTC
Patch rebased.
Comment 27 Jonathan Druart 2021-10-13 10:07:40 UTC
*** Bug 29208 has been marked as a duplicate of this bug. ***
Comment 28 Séverine Queune 2021-10-14 09:12:27 UTC
Hi Jonathan,
Sorry for the duplicate; I searched Bugzilla before opening my bug, but it seems I didn't use the right keywords...
I tried to apply your POC on BibLibre's sandbox, but I got a "503 Service Unavailable".
The patch applies correctly on ByWater's sandbox and it works pretty well!
I did my part, so I'll let you devs discuss the points you mentioned :)
Thanks !
Comment 29 Jonathan Druart 2022-03-22 14:47:29 UTC
Created attachment 132027 [details] [review]
Bug 22223: Try to not double encoded URIs in items.uri

This is just a POC and is not ready for inclusion (Filter.pm and naming
need to be discussed).

Test plan:
Create 2 items, with uri:
  https://www.google.com/url?q=https://buttercup.pw
  https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw

Go to the OPAC detail page of the bib record, see that the links are
displayed how you entered them.
Click on them and confirm that the uri/page is correct

Signed-off-by: Séverine Queune <severine.queune@bulac.fr>
Comment 30 Tomás Cohen Arazi 2022-04-20 13:20:43 UTC
Do you need this?

use Template::Plugin::Filter;
Comment 31 Jonathan Druart 2022-04-21 13:55:28 UTC
Nope. It's a common mistake in Koha/Template/Plugin/, certainly coming from a copy/paste.
Comment 32 Jonathan Druart 2022-04-21 14:01:51 UTC
Created attachment 133567 [details] [review]
Bug 22223: Try to not double encoded URIs in items.uri

This is just a POC and is not ready for inclusion (Filter.pm and naming
need to be discussed).

Test plan:
Create 2 items, with uri:
  https://www.google.com/url?q=https://buttercup.pw
  https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw

Go to the OPAC detail page of the bib record, see that the links are
displayed how you entered them.
Click on them and confirm that the uri/page is correct

Signed-off-by: Séverine Queune <severine.queune@bulac.fr>

JD amended patch:
* Remove use Template::Plugin::Filter;
* Fix license statement
Comment 33 Tomás Cohen Arazi 2022-04-21 15:01:47 UTC
Can you provide some regression tests for this?
Comment 34 Jonathan Druart 2022-04-21 15:45:36 UTC
Created attachment 133578 [details] [review]
Bug 22223: Add tests
Comment 35 Tomás Cohen Arazi 2022-04-21 16:16:59 UTC
Created attachment 133589 [details] [review]
Bug 22223: Try to not double encoded URIs in items.uri

This is just a POC and is not ready for inclusion (Filter.pm and naming
need to be discussed).

Test plan:
Create 2 items, with uri:
  https://www.google.com/url?q=https://buttercup.pw
  https://www.google.com/url?q=https%3A%2F%2Fbuttercup.pw

Go to the OPAC detail page of the bib record, see that the links are
displayed how you entered them.
Click on them and confirm that the uri/page is correct

Signed-off-by: Séverine Queune <severine.queune@bulac.fr>

JD amended patch:
* Remove use Template::Plugin::Filter;
* Fix license statement

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Comment 36 Tomás Cohen Arazi 2022-04-21 16:17:05 UTC
Created attachment 133590 [details] [review]
Bug 22223: Add tests

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: fixed tests count
Comment 37 Tomás Cohen Arazi 2022-04-21 16:20:34 UTC
(In reply to Jonathan Druart from comment #34)
> Created attachment 133578 [details] [review] [review]
> Bug 22223: Add tests

Great job, thanks
Comment 38 Fridolin Somers 2022-04-25 19:53:49 UTC
> This is just a POC
No push to master then?
Comment 39 Jonathan Druart 2022-04-27 12:02:45 UTC
(In reply to Fridolin Somers from comment #38)
> > This is just a POC
> No push to master then ?

It *was* a POC; if everybody is happy with the patch, I guess you can push it.
Comment 40 Fridolin Somers 2022-04-27 19:44:04 UTC
That code looks very strange to me.
Naming a TT filter 'Filter'.

I would prefer to see the syntax [% var | $NoDoubleEncode %],
like the TT plugin Price.

The class uses
base qw( Template::Plugin::Filter )
but does not implement a 'filter' method.
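For reference, a dynamic Template::Plugin::Filter subclass along those lines usually looks something like this (the package and filter names below are placeholders, not the ones used in the attached patch):

package Koha::Template::Plugin::NoDoubleEncode;   # hypothetical name

use Modern::Perl;
use base qw( Template::Plugin::Filter );
use URI;

our $DYNAMIC = 1;

# Used as [% item.uri | $NoDoubleEncode %] after [% USE NoDoubleEncode %]:
# let URI.pm escape anything unsafe while leaving existing %XX sequences alone.
sub filter {
    my ( $self, $text ) = @_;
    return '' unless defined $text;
    return URI->new($text)->as_string;
}

1;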
Comment 41 David Cook 2022-04-27 23:17:59 UTC
(In reply to Fridolin Somers from comment #40)
> That code looks very strange to me.
> Naming a TT filter 'Filter'.
> 
> I would prefer to see syntax : [% var | $NoDoubleEncode %]
> Like TT plugin Price.
> 
> Class is using :
> Base qw( Template::Plugin::Filter )
> But does not implement a 'filter' method.

I think a good alternative would be what I suggest at https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=22223#c7 as well.
Comment 42 Jonathan Druart 2022-04-28 07:08:20 UTC
(In reply to Fridolin Somers from comment #40)
> That code looks very strange to me.
> Naming a TT filter 'Filter'.
> 
> I would prefer to see syntax : [% var | $NoDoubleEncode %]
> Like TT plugin Price.
> 
> Class is using :
> Base qw( Template::Plugin::Filter )
> But does not implement a 'filter' method.

The idea was to have a module that would deal with other encode/decode or replacement functions.
I still think it is a good idea.

I don't have more time to dedicate to this patch unfortunately.
Comment 43 David Cook 2022-12-06 05:21:24 UTC
*** Bug 23535 has been marked as a duplicate of this bug. ***