block user agent

block user agent

Vieri
Hi,

I'm trying to block some user agents (I know it's easy to fake, but most users won't try to fake that header value).

The following works:

acl denied_useragent browser Chrome
acl denied_useragent browser MSIE
acl denied_useragent browser Opera
acl denied_useragent browser Trident
[...]
http_access deny denied_useragent
http_reply_access deny denied_useragent
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent denied_useragent

The following works for HTTP sites, but not for HTTPS sites in an ssl-bumped setup:

acl allowed_useragent browser Firefox/
[...]
http_access deny !allowed_useragent
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=allowed_useragent allowed_useragent


What could I try?

Thanks,

Vieri

Re: block user agent

Amos Jeffries
On 16/11/17 00:18, Vieri wrote:

> Hi,
>
> I'm trying to block some user agents (I know it's easy to fake, but most users won't try to fake that header value).
>
> The following works:
>
> acl denied_useragent browser Chrome
> acl denied_useragent browser MSIE
> acl denied_useragent browser Opera
> acl denied_useragent browser Trident
> [...]
> http_access deny denied_useragent
> http_reply_access deny denied_useragent
> deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent denied_useragent
>
> The following works for HTTP sites, but not for HTTPS sites in an ssl-bumped setup:
>
> acl allowed_useragent browser Firefox/
> [...]
> http_access deny !allowed_useragent
> deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=allowed_useragent allowed_useragent
>
>
> What could I try?
>

The User-Agent, along with all HTTP-layer details, is hidden in HTTPS
behind the encryption layer. To do anything with them you must decrypt
the traffic first. Once decrypted, it becomes regular HTTP traffic and
the normal access controls should then work as-is.
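
For example, a minimal bumping sketch in Squid 3.5+ syntax (certificate
and helper paths are placeholders and must match the real setup; an
intercepting https_port works the same way) would look roughly like:

  # helper that generates the per-site fake certificates
  sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
  # listening port with bumping enabled (paths are placeholders)
  http_port 3128 ssl-bump cert=/etc/squid/bump.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
  # peek at the TLS client hello, then bump everything
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump bump all

Once the traffic is bumped, the decrypted requests pass through
http_access again, so browser/User-Agent ACLs apply to them.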


Amos

Re: block user agent

Vieri

________________________________
From: Amos Jeffries <[hidden email]>

>
>> The following works:
>>
>> acl denied_useragent browser Chrome
>> acl denied_useragent browser MSIE
>> acl denied_useragent browser Opera
>> acl denied_useragent browser Trident
>> [...]
>> http_access deny denied_useragent
>> http_reply_access deny denied_useragent
>> deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent denied_useragent
>>
>> The following works for HTTP sites, but not for HTTPS sites in an ssl-bumped setup:
>>
>> acl allowed_useragent browser Firefox/
>> [...]
>> http_access deny !allowed_useragent
>> deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=allowed_useragent allowed_useragent
>>
> The User-Agent, along with all HTTP-layer details, is hidden in HTTPS
> behind the encryption layer. To do anything with them you must decrypt
> the traffic first. Once decrypted, it becomes regular HTTP traffic and
> the normal access controls should then work as-is.


So why does my first example actually work even for https sites?

acl denied_useragent browser Chrome
acl denied_useragent browser MSIE
acl denied_useragent browser Opera
acl denied_useragent browser Trident
[...]
http_access deny denied_useragent
http_reply_access deny denied_useragent
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent denied_useragent

If the above "works" then another way would be to use a negated regular expression such as:
acl denied_useragent browser (?!Firefox)
but I don't think it's allowed.

Vieri

Re: block user agent

Vieri
In reply to this post by Amos Jeffries
Let me rephrase my previous question "So why does my first example actually work even for https sites?" to "So why does my first example actually work even for https sites in an ssl-bumped setup (the same as in example 2)?"

Re: block user agent

Amos Jeffries
In reply to this post by Vieri
On 16/11/17 21:29, Vieri wrote:

>
> ________________________________
> From: Amos Jeffries <[hidden email]>
>>
>>> The following works:
>>>
>>> acl denied_useragent browser Chrome
>>> acl denied_useragent browser MSIE
>>> acl denied_useragent browser Opera
>>> acl denied_useragent browser Trident
>>> [...]
>>> http_access deny denied_useragent
>>> http_reply_access deny denied_useragent
>>> deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent denied_useragent
>>>
>>> The following works for HTTP sites, but not for HTTPS sites in an ssl-bumped setup:
>>>
>>> acl allowed_useragent browser Firefox/
>>> [...]
>>> http_access deny !allowed_useragent
>>> deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=allowed_useragent allowed_useragent
>>>
>> The User-Agent, along with all HTTP-layer details, is hidden in HTTPS
>> behind the encryption layer. To do anything with them you must decrypt
>> the traffic first. Once decrypted, it becomes regular HTTP traffic and
>> the normal access controls should then work as-is.
>
>
> So why does my first example actually work even for https sites?

If you are decrypting the traffic, then it works, as I said, exactly the
same as for HTTP messages.

If you are not decrypting the traffic, but receiving forward-proxy
traffic, then you are probably blocking the CONNECT messages that set up
the tunnels for HTTPS - a CONNECT carries a User-Agent header *if* it
was generated by a UA instead of an intermediary like Squid.
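
For example (a sketch reusing the acl names from this thread and the
standard "acl CONNECT method CONNECT" definition), the user-agent check
can be limited to non-CONNECT requests so the tunnel setup itself is not
rejected:

  # deny only non-CONNECT requests whose User-Agent is not on the allow list
  http_access deny !CONNECT !allowed_useragent

The decrypted requests inside a bumped tunnel are not CONNECTs, so they
still get checked against allowed_useragent.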

>
> acl denied_useragent browser Chrome
> acl denied_useragent browser MSIE
> acl denied_useragent browser Opera
> acl denied_useragent browser Trident
> [...]
> http_access deny denied_useragent
> http_reply_access deny denied_useragent
> deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent denied_useragent
>
> If the above "works" then another way would be to use a negated regular expression such as:
> acl denied_useragent browser (?!Firefox)
> but I don't think it's allowed.

AFAIK that feature is part of a different regex grammar than the one
Squid uses.

PS. You do know the UA strings of modern browsers all reference each
other, right?  "Chrome like-Gecko like Firefox" etc.
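
For illustration, typical UA strings from this era look roughly like the
following (versions approximate):

  Chrome:  Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36
  Firefox: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:57.0) Gecko/20100101 Firefox/57.0

so a regex on a token like "Gecko" or "Safari" matches far more than the
browser it names.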

Amos

Re: block user agent

Vieri

________________________________
From: Amos Jeffries <[hidden email]>
>
> If you are decrypting the traffic, then it works, as I said, exactly the
> same as for HTTP messages.
>
> If you are not decrypting the traffic, but receiving forward-proxy
> traffic, then you are probably blocking the CONNECT messages that set up
> the tunnels for HTTPS - a CONNECT carries a User-Agent header *if* it
> was generated by a UA instead of an intermediary like Squid.


So I would need to allow CONNECT messages.
Something like:
http_access allow CONNECT allowed_useragent

Anyway, I'm not sure what "decrypting the traffic" implies. If I want an ssl-bumped setup to fully handle all HTTPS connections, and be able to detect the user-agent on https connections, how should I configure Squid? Should I allow all CONNECT messages?

> AFAIK that feature is part of a different regex grammar than the one
> Squid uses.


I think I read something about Squid being built with a user-defined regex grammar/lib. Anyway, I take it it's not feasible for now.

> PS. You do know the UA strings of modern browsers all reference each
> other, right?  "Chrome like-Gecko like Firefox" etc.


Yes, but... We require IE for some Intranet apps, and Firefox for other Extranet apps.
We can set a custom user agent string for the Firefox browser. We also have other http user agents with customized UA strings. So we're 99% sure that all browser clients going through Squid will be tagged correctly. That's the reason why I would prefer to "deny all user agents" except one ("my custom UA string"). Most users will not try to tamper with this.
I do not want to "allow all except a list of substrings" because it would be a nightmare.

Vieri

Re: block user agent

Alex Rousskov
In reply to this post by Vieri
On 11/16/2017 01:44 AM, Vieri wrote:
> Let me rephrase my previous question "So why does my first example
> actually work even for https sites?" to "So why does my first example
> actually work even for https sites in an ssl-bumped setup (the same
> as in example 2)?"

AFAICT, there is not enough information to answer that or the original
question. Going forward, I recommend two steps:

1. Your "works" and "does not work" setups currently differ in at least
three variables: user agent name, slash after the user agent name, and
acl negation in http_access. Find out which single variable is
responsible for the breakage by eliminating all other differences.

2. Post two ALL,2 cache.logs, each containing a single transaction, one
for the "works" case and one for the "does not work" case polished as
discussed in #1.
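
For reference, a simple way to capture such a log (the cache.log path
below is a common default and may differ) is:

  # raise the debug level for all sections to 2
  debug_options ALL,2

then reload with "squid -k reconfigure" and watch
/var/log/squid/cache.log while reproducing a single request.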


HTH,

Alex.

Re: block user agent

Vieri
________________________________
From: Alex Rousskov <[hidden email]>
> 1. Your "works" and "does not work" setups currently differ in at least
> three variables: user agent name, slash after the user agent name, and
> acl negation in http_access. Find out which single variable is
> responsible for the breakage by eliminating all other differences.
>
> 2. Post two ALL,2 cache.logs, each containing a single transaction, one
> for the "works" case and one for the "does not work" case polished as
> discussed in #1.



I can't really do anything about #1 except maybe leave out the forward slash.
That's because my 2 examples are trying to achieve the opposite.
Let me just rephrase everything so it's crystal clear.

My goal is to deny all client traffic from browsers that DO NOT have a specific user-agent string. So this is a negated statement. One of the things I can't do in Squid is define an ACL with a negated lookahead such as (?!useragentname).

So I set up two examples.

Common to both:

acl allowed_useragent browser MyAllowedUAstring
acl denied_useragent browser MyDeniedUAstring

# example 1:
http_access deny denied_useragent
http_reply_access deny denied_useragent
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent denied_useragent

I then run this from my test client:

# curl --insecure --user-agent MyAllowedUAstring https://www.gentoo.org
-> works as expected (I see the web site). I guess you don't need to see cache.log here.

Now I run this:

# curl --insecure --user-agent MyDeniedUAstring https://www.gentoo.org
-> works as expected (I'm denied access and I see Squid's error page).
I guess there's no need for the full log here either. It boils down to this anyway:
2017/11/17 13:24:26.937 kid1| 28,2| RegexData.cc(73) match: aclRegexData::match: match '(MyDeniedUAstring)' found in 'MyDeniedUAstring'
2017/11/17 13:24:26.937 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request GET https://www.gentoo.org/ is DENIED; last ACL checked: denied_useragent

I'm done with example 1, because I cannot make a complete list of all the user agents I want to actively block. Instead, I want to "deny everyone except one or two".

Also, since negative lookaheads are not supported in Squid's regular expressions, I change my example 1 to:

# example 2:
http_access deny !allowed_useragent
http_reply_access deny !allowed_useragent
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent allowed_useragent

Then I run this from the client:

# curl --insecure --user-agent MyAllowedUAstring https://www.gentoo.org
-> I was expecting to be allowed access since Squid denies "everything that's not" MyAllowedUAstring. Well, at least I should have passed the "deny" line in example 2.
However, I'm being blocked right there. This is the full log:

2017/11/17 13:30:42.216 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New connection on FD 88
2017/11/17 13:30:42.216 kid1| 5,2| TcpAcceptor.cc(295) acceptNext: connection on local=[::]:3229 remote=[::] FD 88 flags=25
2017/11/17 13:30:42.216 kid1| 33,2| client_side.cc(3943) httpsSslBumpAccessCheckDone: sslBump needed for local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17 method 4
2017/11/17 13:30:42.216 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
2017/11/17 13:30:42.216 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
CONNECT 89.16.167.134:443 HTTP/1.1
Host: 89.16.167.134:443


----------
2017/11/17 13:30:42.216 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request CONNECT 89.16.167.134:443 is DENIED; last ACL checked: allowed_useragent
2017/11/17 13:30:42.216 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:42.216 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:42.216 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:42.226 kid1| 83,2| client_side.cc(3843) clientNegotiateSSL: clientNegotiateSSL: New session 0x125e030 on FD 8 (10.215.144.48:65262)
2017/11/17 13:30:42.226 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
2017/11/17 13:30:42.226 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
GET / HTTP/1.1
Host: www.gentoo.org
User-Agent: MyAllowedUAstring
Accept: */*


----------
2017/11/17 13:30:42.227 kid1| 28,2| RegexData.cc(73) match: aclRegexData::match: match '(MyAllowedUAstring)' found in 'MyAllowedUAstring'
2017/11/17 13:30:42.227 kid1| 88,2| client_side_reply.cc(2073) processReplyAccessResult: The reply for GET https://www.gentoo.org/ is ALLOWED, because it matched denied_mimetypes_rep
2017/11/17 13:30:42.227 kid1| 11,2| client_side.cc(1409) sendStartOfMessage: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
2017/11/17 13:30:42.227 kid1| 11,2| client_side.cc(1410) sendStartOfMessage: HTTP Client REPLY:
---------
HTTP/1.1 307 Temporary Redirect
Server: squid
Mime-Version: 1.0
Date: Fri, 17 Nov 2017 12:30:42 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
Location: http://proxy-server1/proxy-error/?a=-&B=&e=0&E=%5BNo%20Error%5D&H=89.16.167.134&i=10.215.144.48&M=CONNECT&o=&R=/&T=Fri,%2017%20Nov%202017%2012%3A30%3A42%20GMT&U=https%3A%2F%2F89.16.167.134%2F*&u=89.16.167.134%3A443&w=IT%40mydomain.org&x=&acl=denied_useragent
X-Squid-Error: 403 Access Denied
X-Cache: MISS from proxy-server1
X-Cache-Lookup: NONE from proxy-server1:3227
Connection: close

Note that I have these defaults in my squid conf file:

acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports

Let's try another one:

# curl --insecure --user-agent MyDeniedUAstring https://www.gentoo.org
-> This is as expected, I guess.

Full log:

2017/11/17 13:30:10.365 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New connection on FD 88
2017/11/17 13:30:10.365 kid1| 5,2| TcpAcceptor.cc(295) acceptNext: connection on local=[::]:3229 remote=[::] FD 88 flags=25
2017/11/17 13:30:10.365 kid1| 33,2| client_side.cc(3943) httpsSslBumpAccessCheckDone: sslBump needed for local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17 method 4
2017/11/17 13:30:10.365 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
2017/11/17 13:30:10.365 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
CONNECT 89.16.167.134:443 HTTP/1.1
Host: 89.16.167.134:443


----------
2017/11/17 13:30:10.365 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request CONNECT 89.16.167.134:443 is DENIED; last ACL checked: allowed_useragent
2017/11/17 13:30:10.365 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:10.365 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:10.365 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:10.385 kid1| 83,2| client_side.cc(3843) clientNegotiateSSL: clientNegotiateSSL: New session 0xdbdc70 on FD 8 (10.215.144.48:65237)
2017/11/17 13:30:10.386 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
2017/11/17 13:30:10.386 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
GET / HTTP/1.1
Host: www.gentoo.org
User-Agent: MyDeniedUAstring
Accept: */*


----------
2017/11/17 13:30:10.386 kid1| 88,2| client_side_reply.cc(2073) processReplyAccessResult: The reply for GET https://www.gentoo.org/ is DENIED, because it matched allowed_useragent
2017/11/17 13:30:10.386 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:10.386 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:10.386 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 13:30:10.386 kid1| 88,2| client_side_reply.cc(2073) processReplyAccessResult: The reply for GET https://www.gentoo.org/ is ALLOWED, because it matched allowed_useragent
2017/11/17 13:30:10.386 kid1| 11,2| client_side.cc(1409) sendStartOfMessage: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
2017/11/17 13:30:10.386 kid1| 11,2| client_side.cc(1410) sendStartOfMessage: HTTP Client REPLY:
---------
HTTP/1.1 302 Found
Server: squid
Mime-Version: 1.0
Date: Fri, 17 Nov 2017 12:30:10 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
Location: http://proxy-server1/proxy-error/?a=-&B=&e=0&E=%5BNo%20Error%5D&H=www.gentoo.org&i=10.215.144.48&M=GET&o=&R=/&T=Fri,%2017%20Nov%202017%2012%3A30%3A10%20GMT&U=https%3A%2F%2Fwww.gentoo.org%2F&u=https%3A%2F%2Fwww.gentoo.org%2F&w=IT%40mydomain.org&x=&acl=denied_useragent
X-Squid-Error: 403 Access Denied
X-Cache: MISS from proxy-server1
X-Cache-Lookup: NONE from proxy-server1:3227
Connection: close

Now for plain HTTP with example 2.

# curl --user-agent MyDeniedUAstring http://www.fltk.org/index.php
-> As expected. It blocks access.

Full log:

2017/11/17 15:56:52.648 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New connection on FD 85
2017/11/17 15:56:52.648 kid1| 5,2| TcpAcceptor.cc(295) acceptNext: connection on local=[::]:3228 remote=[::] FD 85 flags=25
2017/11/17 15:56:52.648 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=66.39.46.122:80 remote=10.215.144.48 FD 8 flags=17
2017/11/17 15:56:52.648 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
GET /index.php HTTP/1.1
Host: www.fltk.org
User-Agent: MyDeniedUAstring
Accept: */*


----------
2017/11/17 15:56:52.648 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request GET http://www.fltk.org/index.php is DENIED; last ACL checked: allowed_useragent
2017/11/17 15:56:52.648 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 15:56:52.648 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 15:56:52.648 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/17 15:56:52.648 kid1| 88,2| client_side_reply.cc(2073) processReplyAccessResult: The reply for GET http://www.fltk.org/index.php is ALLOWED, because it matched allowed_useragent
2017/11/17 15:56:52.648 kid1| 11,2| client_side.cc(1409) sendStartOfMessage: HTTP Client local=66.39.46.122:80 remote=10.215.144.48 FD 8 flags=17
2017/11/17 15:56:52.648 kid1| 11,2| client_side.cc(1410) sendStartOfMessage: HTTP Client REPLY:
---------
HTTP/1.1 302 Found
Server: squid
Mime-Version: 1.0
Date: Fri, 17 Nov 2017 14:56:52 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
Location: http://proxy-server1/proxy-error/?a=-&B=&e=0&E=%5BNo%20Error%5D&H=www.fltk.org&i=10.215.144.48&M=GET&o=&R=/index.php&T=Fri,%2017%20Nov%202017%2014%3A56%3A52%20GMT&U=http%3A%2F%2Fwww.fltk.org%2Findex.php&u=http%3A%2F%2Fwww.fltk.org%2Findex.php&w=IT%40mydomain.org&x=&acl=denied_useragent
X-Squid-Error: 403 Access Denied
X-Cache: MISS from proxy-server1
X-Cache-Lookup: NONE from proxy-server1:3227
Connection: keep-alive

However, now comes the interesting part.

# curl --user-agent MyAllowedUAstring http://www.fltk.org/index.php
-> works as expected (I see the web site).

Full log:

2017/11/17 15:55:23.550 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New connection on FD 85
2017/11/17 15:55:23.550 kid1| 5,2| TcpAcceptor.cc(295) acceptNext: connection on local=[::]:3228 remote=[::] FD 85 flags=25
2017/11/17 15:55:23.551 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=66.39.46.122:80 remote=10.215.144.48 FD 8 flags=17
2017/11/17 15:55:23.551 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
GET /index.php HTTP/1.1
Host: www.fltk.org
User-Agent: MyAllowedUAstring
Accept: */*


----------
2017/11/17 15:55:23.551 kid1| 28,2| RegexData.cc(73) match: aclRegexData::match: match '(MyAllowedUAstring)' found in 'MyAllowedUAstring'
2017/11/17 15:55:23.551 kid1| 82,2| external_acl.cc(805) aclMatchExternal: bllookup("http www.fltk.org 80 /index.php") = lookup needed
2017/11/17 15:55:23.551 kid1| 82,2| external_acl.cc(808) aclMatchExternal: "http www.fltk.org 80 /index.php": queueing a call.
2017/11/17 15:55:23.551 kid1| 82,2| external_acl.cc(1444) Start: fg lookup in 'bllookup' for 'http www.fltk.org 80 /index.php'
2017/11/17 15:55:23.551 kid1| 82,2| external_acl.cc(811) aclMatchExternal: "http www.fltk.org 80 /index.php": return -1.
2017/11/17 15:55:23.553 kid1| 82,2| external_acl.cc(1372) externalAclHandleReply: reply={result=OK, notes={message: www.fltk.org site not found in blacklist; }}
2017/11/17 15:55:23.553 kid1| 82,2| external_acl.cc(1288) external_acl_cache_add: external_acl_cache_add: Adding 'http www.fltk.org 80 /index.php' = ALLOWED
2017/11/17 15:55:23.553 kid1| 82,2| external_acl.cc(841) aclMatchExternal: bllookup = ALLOWED
2017/11/17 15:55:23.553 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request GET http://www.fltk.org/index.php is ALLOWED; last ACL checked: bl_lookup
2017/11/17 15:55:23.553 kid1| 85,2| client_side_request.cc(721) clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2017/11/17 15:55:23.553 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request GET http://www.fltk.org/index.php is ALLOWED; last ACL checked: bl_lookup
2017/11/17 15:55:23.554 kid1| 88,2| client_side_reply.cc(593) cacheHit: clientProcessHit: Vary detected!
2017/11/17 15:55:23.554 kid1| 17,2| FwdState.cc(133) FwdState: Forwarding client request local=66.39.46.122:80 remote=10.215.144.48 FD 8 flags=17, url=http://www.fltk.org/index.php
2017/11/17 15:55:23.554 kid1| 44,2| peer_select.cc(280) peerSelectDnsPaths: Found sources for 'http://www.fltk.org/index.php'
2017/11/17 15:55:23.554 kid1| 44,2| peer_select.cc(281) peerSelectDnsPaths:   always_direct = DENIED
2017/11/17 15:55:23.554 kid1| 44,2| peer_select.cc(282) peerSelectDnsPaths:    never_direct = DENIED
2017/11/17 15:55:23.554 kid1| 44,2| peer_select.cc(288) peerSelectDnsPaths:    ORIGINAL_DST = local=10.215.144.48 remote=66.39.46.122:80 flags=25
2017/11/17 15:55:23.554 kid1| 44,2| peer_select.cc(295) peerSelectDnsPaths:        timedout = 0
2017/11/17 15:55:23.708 kid1| 11,2| http.cc(2229) sendRequest: HTTP Server local=10.215.144.48:35373 remote=66.39.46.122:80 FD 13 flags=25
2017/11/17 15:55:23.708 kid1| 11,2| http.cc(2230) sendRequest: HTTP Server REQUEST:
---------
GET /index.php HTTP/1.1
User-Agent: MyAllowedUAstring
Accept: */*
Host: www.fltk.org
Cache-Control: max-age=259200
Connection: keep-alive


----------
2017/11/17 15:55:23.884 kid1| ctx: enter level  0: 'http://www.fltk.org/index.php'
2017/11/17 15:55:23.884 kid1| 11,2| http.cc(719) processReplyHeader: HTTP Server local=10.215.144.48:35373 remote=66.39.46.122:80 FD 13 flags=25
2017/11/17 15:55:23.884 kid1| 11,2| http.cc(720) processReplyHeader: HTTP Server REPLY:
---------
HTTP/1.1 200 OK
Date: Fri, 17 Nov 2017 14:55:23 GMT
Server: Apache/2.4.29
Cache-Control: no-cache
Vary: Accept-Encoding
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html

3a02
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Fast Light Toolkit - Fast Light Toolkit (FLTK)</title>
<meta http-equiv='Pragma' content='no-cache'>
<meta http-equiv='Content-Type' content='text/html; charset=utf-8'>
<link rel='stylesheet' type='text/css' href='fltk.css'>
<link rel='alternate' title='FLTK RSS' type='application/rss+xml' href='index.rss'>
<link rel='shortcut icon' href='favicon.ico' type='image/x-icon'>
<meta name='keywords' content='gui toolkit,c++,linux,unix,macos x,x11,windows'>
</head>
<body>
<table width='100%' border='0' cellspacing='0' cellpadding='0' summary='Page'>
<tr class='header'><td valign='top' width='15' rowspan='2'><a href='index.php'><img src='images/top-left.gif' width='15' height='70' border='0' alt=''></a></td><td valign='top' width='224' rowspan='2'><a href='index.php'><img src='images/top-middle.gif' width='224' height='70' border='0' alt=''></a></td><td width='100%' height='40'><h1>Fast Light Toolkit</h1> </td><td align='right' nowrap>
<table cellpadding=0 cellspacing=0 border=0><tr><td valign=top nowrap>
<a href=fltk-rss.xml><img src=images/rss-fee
----------
2017/11/17 15:55:23.885 kid1| ctx: exit level  0
2017/11/17 15:55:23.885 kid1| 23,2| url.cc(407) urlParse: urlParse: URI has whitespace: {icap://127.0.0.1:1344/clamav ICAP/1.0
}
2017/11/17 15:55:24.038 kid1| 28,2| RegexData.cc(73) match: aclRegexData::match: match '(MyAllowedUAstring)' found in 'MyAllowedUAstring'
2017/11/17 15:55:24.038 kid1| 88,2| client_side_reply.cc(2073) processReplyAccessResult: The reply for GET http://www.fltk.org/index.php is ALLOWED, because it matched denied_mimetypes_rep
2017/11/17 15:55:24.038 kid1| 11,2| client_side.cc(1409) sendStartOfMessage: HTTP Client local=66.39.46.122:80 remote=10.215.144.48 FD 8 flags=17
2017/11/17 15:55:24.038 kid1| 11,2| client_side.cc(1410) sendStartOfMessage: HTTP Client REPLY:
---------
HTTP/1.1 200 OK
Date: Fri, 17 Nov 2017 14:55:23 GMT
Server: Apache/2.4.29
Cache-Control: no-cache
Vary: Accept-Encoding
Content-Type: text/html
Via: ICAP/1.0 proxy-server1.hospitalmanacor.org (C-ICAP/0.5.2 SquidClamav/Antivirus service )
X-Cache: MISS from proxy-server1
X-Cache-Lookup: MISS from proxy-server1:3227
Transfer-Encoding: chunked
Connection: keep-alive

How can I modify my example 2 settings so this access control works the same way for both HTTP and HTTPS in an ssl-bumped environment?

Thanks,

Vieri

Re: block user agent

Yuri Voinov


On 17.11.2017 21:27, Vieri wrote:

> ________________________________
> From: Alex Rousskov <[hidden email]>
>> 1. Your "works" and "does not work" setups currently differ in at least
>> three variables: user agent name, slash after the user agent name, and
>> acl negation in http_access. Find out which single variable is
>> responsible for the breakage by eliminating all other differences.
>>
>> 2. Post two ALL,2 cache.logs, each containing a single transaction, one
>> for the "works" case and one for the "does not work" case polished as
>> discussed in #1.
>
>
> I can't really do anything about #1 except maybe leave out the forward slash.
> That's because my 2 examples are trying to achieve the opposite.
> Let me just rephrase everything so it's crystal clear.
>
> My goal is to deny all client traffic from browsers that DO NOT have a specific user-agent string. So this is a negated statement. One of the things I can't do in Squid is define an ACL with a negated lookahead such as (?!useragentname).
I hope you know about browser extensions for UA spoofing?

> [...]
--
**************************
* C++: Bug to the future *
**************************




Re: block user agent

Alex Rousskov
In reply to this post by Vieri
On 11/17/2017 08:27 AM, Vieri wrote:
> From: Alex Rousskov <[hidden email]>
>> 1. Your "works" and "does not work" setups currently differ in at least
>> three variables: user agent name, slash after the user agent name, and
>> acl negation in http_access. Find out which single variable is
>> responsible for the breakage by eliminating all other differences.
>>
>> 2. Post two ALL,2 cache.logs, each containing a single transaction, one
>> for the "works" case and one for the "does not work" case polished as
>> discussed in #1.

> I can't really do anything about #1 except maybe leave out the forward slash.
> That's because my 2 examples are trying to achieve the opposite.

You may be conflating two very different goals:

  A) Understanding why Squid does X.
  B) Configuring Squid to do what you want.

My response was focused on the former. Once you understand, you can
probably accomplish the latter on your own.

To understand why two similar setups act differently, reduce the number
of differing variables until you find the one that explains the
difference. Yes, none of the reduced test setups may do what you want
your production Squid to do, but that should not matter for now. You are
after understanding.

The usual alternative to the above approach is trying random
configurations until you think Squid works the way you want it to work.
Usually, Squid still does not do what you think it does, but your test
cases do not expose the difference.


> My goal is to deny all client traffic from browsers that DO NOT have
> a specific user-agent string. So this is a negated statement.

There is no need to use negation for that. If the goodAgents ACL matches
requests with "specific user-agent string", then you can do this:

  http_access allow goodAgents
  http_access deny all

As you can see, there is no ACL negation or negative ACLs.
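
A fuller sketch using the acl names from this thread (and keeping the
standard CONNECT/SSL_ports rules, since the bumped CONNECT requests carry
no User-Agent header) might look like:

  acl goodAgents browser MyAllowedUAstring
  # let the (bumped) tunnels be established; their decrypted requests
  # are re-checked by http_access below
  http_access deny CONNECT !SSL_ports
  http_access allow CONNECT SSL_ports
  # the user-agent allow list applies to plain HTTP and to bumped requests
  http_access allow goodAgents
  http_access deny all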


> Common to both:
>
> acl allowed_useragent browser MyAllowedUAstring
> acl denied_useragent browser MyDeniedUAstring
>
> # example 1:
> http_access deny denied_useragent
> http_reply_access deny denied_useragent

If you want to block access to the origin server, do not use
http_reply_access. Use http_access.


> # example 2:
> http_access deny !allowed_useragent
> http_reply_access deny !allowed_useragent
>
> Then I run this from the client:
>
> # curl --insecure --user-agent MyAllowedUAstring https://www.gentoo.org
> -> I was expecting to be allowed access since Squid denies "everything that's not" MyAllowedUAstring. Well, at least I should have passed the "deny" line in example 2.
> However, I'm being blocked right there.

> ---------
> CONNECT 89.16.167.134:443 HTTP/1.1
> Host: 89.16.167.134:443
> ----------
> clientAccessCheckDone: The request CONNECT 89.16.167.134:443 is DENIED; last ACL checked: allowed_useragent


As you can see, your CONNECT request was denied (because it lacks the
User-Agent header). The rest does not matter much (for now), but Squid
bumps the connection to serve the error page in response to the first
bumped HTTP request (regardless of what that first bumped HTTP request
looks like).

Alex.

Re: block user agent

Amos Jeffries
In reply to this post by Vieri

On 18/11/17 04:27, Vieri wrote:

> [...]
>
> 2017/11/17 13:30:42.216 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New connection on FD 88
> 2017/11/17 13:30:42.216 kid1| 5,2| TcpAcceptor.cc(295) acceptNext: connection on local=[::]:3229 remote=[::] FD 88 flags=25
> 2017/11/17 13:30:42.216 kid1| 33,2| client_side.cc(3943) httpsSslBumpAccessCheckDone: sslBump needed for local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17 method 4
> 2017/11/17 13:30:42.216 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
> 2017/11/17 13:30:42.216 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
> ---------
> CONNECT 89.16.167.134:443 HTTP/1.1
> Host: 89.16.167.134:443
>

This is the CONNECT request generated internally by Squid for the
bumping process.


>
> ----------
> 2017/11/17 13:30:42.216 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request CONNECT 89.16.167.134:443 is DENIED; last ACL checked: allowed_useragent
> 2017/11/17 13:30:42.216 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
> 2017/11/17 13:30:42.216 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
> 2017/11/17 13:30:42.216 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
> 2017/11/17 13:30:42.226 kid1| 83,2| client_side.cc(3843) clientNegotiateSSL: clientNegotiateSSL: New session 0x125e030 on FD 8 (10.215.144.48:65262)
> 2017/11/17 13:30:42.226 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
> 2017/11/17 13:30:42.226 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
> ---------
> GET / HTTP/1.1
> Host: www.gentoo.org
> User-Agent: MyAllowedUAstring
> Accept: */*
>
>
> ----------
> 2017/11/17 13:30:42.227 kid1| 28,2| RegexData.cc(73) match: aclRegexData::match: match '(MyAllowedUAstring)' found in 'MyAllowedUAstring'
> 2017/11/17 13:30:42.227 kid1| 88,2| client_side_reply.cc(2073) processReplyAccessResult: The reply for GET https://www.gentoo.org/ is ALLOWED, because it matched denied_mimetypes_rep

Please notice the above text and what ACL it is talking about.

Hint: it is NOT the one you are talking about testing.


> 2017/11/17 13:30:42.227 kid1| 11,2| client_side.cc(1409) sendStartOfMessage: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 8 flags=17
> 2017/11/17 13:30:42.227 kid1| 11,2| client_side.cc(1410) sendStartOfMessage: HTTP Client REPLY:
> ---------
> HTTP/1.1 307 Temporary Redirect
> Server: squid
> Mime-Version: 1.0
> Date: Fri, 17 Nov 2017 12:30:42 GMT
> Content-Type: text/html;charset=utf-8
> Content-Length: 0
> Location: http://proxy-server1/proxy-error/?a=-&B=&e=0&E=%5BNo%20Error%5D&H=89.16.167.134&i=10.215.144.48&M=CONNECT&o=&R=/&T=Fri,%2017%20Nov%202017%2012%3A30%3A42%20GMT&U=https%3A%2F%2F89.16.167.134%2F*&u=89.16.167.134%3A443&w=IT%40mydomain.org&x=&acl=denied_useragent
> X-Squid-Error: 403 Access Denied
> X-Cache: MISS from proxy-server1
> X-Cache-Lookup: NONE from proxy-server1:3227
> Connection: close

This is the denial "error" response generated by Squid.

...
>
> How can I modify my example 2 settings so this access control works the same way with both http and https in an ssl-bumped environment.

It already does. The environment is what is different.

You are looking at Squid generated messages and trying to get them
replaced with other Squid generated messages simply because they are
generated by Squid, not by some arbitrary UA.


If you could replace that Squid generated message with another Squid
generated message, that one would itself need replacing with yet another
Squid generated message, and so on ... until the machine crashes or the
client gives up waiting and closes the connection.

Amos

Re: block user agent

Vieri
In reply to this post by Alex Rousskov
________________________________
From: Alex Rousskov <[hidden email]>
>
> You may be conflating two very different goals:
>
>   A) Understanding why Squid does X.
>   B) Configuring Squid to do what you want.
>
> My response was focused on the former. Once you understand, you can
> probably accomplish the latter on your own.


You are absolutely right. I'd like to understand how Squid's *_access rules work.


To put it bluntly, http_access and http_reply_access rules are processed one after another as they appear in squid.conf. It "exits" the sequence (ie. stops going through each http_*access rule) as soon as it hits a match.


The http_*access rules take ACLs, which are AND'ed if the conditions are on one line, or OR'ed if they are on separate lines.
eg.
http_access allow goodAgents !baddomains (AND)
#--
http_access allow goodAgents
http_access deny baddomains (OR)

>> My goal is to deny all client traffic from browsers that DO NOT have
>> a specific user-agent string. So this is a negated statement.
>
> There is no need to use negation for that. If the goodAgents ACL matches
> requests with "specific user-agent string", then you can do this:
>
>   http_access allow goodAgents
>   http_access deny all
>
> As you can see, there is no ACL negation or negative ACLs.


I understand your example, but unfortunately, I was looking for something else. It's my mistake because I started this thread with basic, stripped-down examples without giving details on what I need to achieve. I wasn't doing ACL negation just for kicks. It's because I need to integrate it into a broader setup.

Your example "works", but Squid will match "goodAgent" in your first line, and exit without going on. I require to apply other rules afterwards. In other words, my intention was to first filter based on the UA string, and block all except eg. MyAllowedUAstring. From then on, I need to apply the rest of my rules.

>> clientAccessCheckDone: The request CONNECT 89.16.167.134:443 is DENIED; last ACL checked: allowed_useragent
>
> As you can see, your CONNECT request was denied (because it lacks the
> User-Agent header). The rest does not matter much (for now), but Squid
> bumps the connection to serve the error page in response to the first
> bumped HTTP request (regardless of what that first bumped HTTP request
> looks like).

So... What is the security implication of allowing all CONNECT messages to port 443?

The following acl + access rules I set up actually "work" as in my previous "example 2". I simply allowed the CONNECT messages. Here's most of my squid.conf file:


acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

acl explicit myportname 3128
acl intercepted myportname 3129
acl interceptedssl myportname 3130
http_port 3128
http_port 3129 tproxy
https_port 3130 tproxy ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem sslflags=NO_DEFAULT_CA
sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 16MB
sslcrtd_children 40 startup=20 idle=10
cache_dir diskd /var/cache/squid 32 16 256

external_acl_type nt_group ttl=0 children-max=50 %LOGIN /usr/libexec/squid/ext_wbinfo_group_acl -K

auth_param negotiate program /usr/libexec/squid/negotiate_kerberos_auth -s HTTP/[hidden email]
auth_param negotiate children 60
auth_param negotiate keep_alive on

acl localnet src 10.0.0.0/8
acl localnet src 192.168.0.0/16

acl ORG_all proxy_auth REQUIRED

external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 children-startup=10 children-idle=3 concurrency=8 %PROTO %DST %PORT %PATH /opt/custom/scripts/run/scripts/firewall/ext_sql_blwl_acl.pl --table=shallalist_bl --categories=adv,aggressive,alcohol,anonvpn,automobile_bikes,automobile_boats,automobile_cars,automobile_planes,chat,costtraps,dating,drugs,dynamic,finance_insurance,finance_moneylending,finance_other,finance_realestate,finance_trading,fortunetelling,forum,gamble,hacking,hobby_cooking,hobby_games-misc,hobby_games-online,hobby_gardening,hobby_pets,homestyle,imagehosting,isp,jobsearch,military,models,movies,music,podcasts,politics,porn,radiotv,recreation_humor,recreation_martialarts,recreation_restaurants,recreation_sports,recreation_travel,recreation_wellness,redirector,religion,remotecontrol,ringtones,science_astronomy,science_chemistry,sex_education,sex_lingerie,shopping,socialnet,spyware,tracker,updatesites,urlshortener,violence,warez,weapons,webphone,webradio,webtv
acl allowed_ips src "/opt/custom/proxy-settings/allowed.ips"
acl allowed_extra1_ips src "/opt/custom/proxy-settings/allowed.extra1.ips"
acl allowed_groups external nt_group "/opt/custom/proxy-settings/allowed.groups"
acl allowed_domains dstdomain "/opt/custom/proxy-settings/allowed.domains"
acl allowed_domains_filetypes dstdomain "/opt/custom/proxy-settings/allowed.domains.filetypes"
acl allowed_domains_mimetypes dstdomain "/opt/custom/proxy-settings/allowed.domains.mimetypes"
acl denied_domains dstdomain -i "/opt/custom/proxy-settings/denied.domains"
acl denied_extra1_domains dstdomain -i "/opt/custom/proxy-settings/denied.extra1.domains"
acl denied_ads url_regex "/opt/custom/proxy-settings/denied.ads"
acl denied_filetypes urlpath_regex -i "/opt/custom/proxy-settings/denied.filetypes"
acl denied_mimetypes_req req_mime_type -i "/opt/custom/proxy-settings/denied.mimetypes"
acl denied_extra1_mimetypes_req req_mime_type -i "/opt/custom/proxy-settings/denied.extra1.mimetypes"
acl denied_mimetypes_rep rep_mime_type -i "/opt/custom/proxy-settings/denied.mimetypes"
acl denied_extra1_mimetypes_rep rep_mime_type -i "/opt/custom/proxy-settings/denied.extra1.mimetypes"
acl denied_restricted1_mimetypes_req req_mime_type -i "/opt/custom/proxy-settings/denied.restricted1.mimetypes"
acl denied_restricted1_mimetypes_rep rep_mime_type -i "/opt/custom/proxy-settings/denied.restricted1.mimetypes"
acl allowed_restricted1_domains dstdomain -i "/opt/custom/proxy-settings/allowed.restricted1.domains"
acl allowed_restricted1_ips dst "/opt/custom/proxy-settings/allowed.restricted1.ips"
acl restricted_ips src "/opt/custom/proxy-settings/restricted.ips"
acl restricted_groups external nt_group "/opt/custom/proxy-settings/restricted.groups"
acl restricted_domains dstdomain "/opt/custom/proxy-settings/restricted.domains"
acl bl_lookup external bllookup
acl denied_urlshorteners dstdomain -i "/etc/squidGuard/db/HMANshallalist/urlshortener/domains"

acl allowed_useragent browser MyAllowedUAstring

http_access deny explicit !ORG_all
http_access deny explicit SSL_ports
http_access deny intercepted !localnet
http_access deny interceptedssl !localnet

http_access allow CONNECT SSL_ports
http_access deny !allowed_useragent
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent allowed_useragent

http_access allow localnet !restricted_ips allowed_domains
http_access allow localnet !restricted_ips allowed_ips
http_reply_access allow localnet !restricted_ips allowed_ips
http_reply_access allow localnet !restricted_ips allowed_domains
http_access allow restricted_ips restricted_domains
http_access deny restricted_ips

http_access deny !allowed_ips denied_urlshorteners
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_urlshorteners denied_urlshorteners

http_access allow denied_restricted1_mimetypes_req allowed_restricted1_domains
http_access allow denied_restricted1_mimetypes_req allowed_restricted1_ips
http_reply_access allow denied_restricted1_mimetypes_rep allowed_restricted1_domains
http_reply_access allow denied_restricted1_mimetypes_rep allowed_restricted1_ips

http_access allow denied_extra1_mimetypes_req allowed_extra1_ips denied_extra1_domains
http_reply_access allow denied_extra1_mimetypes_rep allowed_extra1_ips denied_extra1_domains

http_access deny denied_restricted1_mimetypes_req
http_reply_access deny denied_restricted1_mimetypes_rep
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_mimetypes denied_restricted1_mimetypes_rep
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_mimetypes denied_restricted1_mimetypes_req

http_access deny denied_extra1_mimetypes_req
http_reply_access deny denied_extra1_mimetypes_rep
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_mimetypes denied_extra1_mimetypes_req
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_mimetypes denied_extra1_mimetypes_rep

http_access deny !allowed_ips denied_domains
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_domains denied_domains

http_access allow allowed_extra1_ips denied_extra1_domains
http_access deny denied_extra1_domains
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_extra1_domains denied_extra1_domains

http_access deny denied_filetypes !allowed_domains_filetypes
http_reply_access deny denied_filetypes !allowed_domains_filetypes
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_filetypes denied_filetypes

http_access deny denied_mimetypes_req !allowed_domains_mimetypes
http_reply_access deny denied_mimetypes_rep !allowed_domains_mimetypes
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_mimetypes denied_mimetypes_req
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_mimetypes denied_mimetypes_rep

http_access allow localnet bl_lookup
http_access allow localhost

http_access deny all

I'd greatly appreciate your input on this.

Hoping to understand Squid logic someday.

Thanks,

Vieri

Re: block user agent

Amos Jeffries
Administrator
On 20/11/17 21:45, Vieri wrote:

> ________________________________
> From: Alex Rousskov
>>
>> You may be conflating two very different goals:
>>
>>    A) Understanding why Squid does X.
>>    B) Configuring Squid to do what you want.
>>
>> My response was focused on the former. Once you understand, you can
>> probably accomplish the latter on your own.
>
>
> You are absolutely right. I'd like to understand how Squid's *_access rules work.
>
>
> To put it bluntly, http_access and http_reply_access rules are processed one after another as they appear in squid.conf. It "exits" the sequence (ie. stops going through each http_*access rule) as soon as it hits a match.
>

Not quite. The lines which start with the same directive name are
executed that way. Each directive has a different timing within the
transaction lifetime.


  http_access allow foo
  http_reply_access deny foo
  http_access allow bar

Is the same as

  http_access allow foo
  http_access allow bar

  http_reply_access deny foo


http_access lines are checked on a client HTTP request arriving.
http_reply_access on a server HTTP reply arriving.
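
For instance, a reply-side ACL like the ones in your config can only be
evaluated at the second of those two points (just a sketch):

  acl video_reply rep_mime_type -i ^video/
  http_reply_access deny video_reply
  # the MIME type is only known once the server's reply headers have
  # arrived, so this test cannot be enforced from http_access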


>
> The http_*access rules take ACLs, which are AND'ed if the conditions are on one line, or OR'ed if they are on separate lines.

That is binary; access control lines use ternary logic with short-circuiting.

> eg.
> http_access allow goodAgents !baddomains (AND)
> #--
> http_access allow goodAgents
> http_access deny baddomains (OR)

You also have to take the action (allow vs deny) into account.

For this:

   http_access allow goodAgents !baddomains (AND)

  If the first line matches the allow happens.
  otherwise deny happens

ie. goodAgents are only allowed to non-baddomains. All non-goodAgents
are denied to everything.


For this:

   http_access allow goodAgents
   http_access deny baddomains (OR)

  If the first line matches the allow happens,
  If the second matches deny happens,
  otherwise allow happens.

ie. goodAgents are allowed to do anything. All non-goodAgents are denied
only to baddomains.


>
>>> My goal is to deny all client traffic from browsers that DO NOT have
>>> a specific user-agent string. So this is a negated statement.
>>
>> There is no need to use negation for that. If the goodAgents ACL matches
>> requests with "specific user-agent string", then you can do this:
>>
>>    http_access allow goodAgents
>>    http_access deny all
>>
>> As you can see, there is no ACL negation or negative ACLs.
>
>
> I understand your example, but unfortunately, I was looking for something else. It's my mistake because I started this thread with basic, stripped-down examples without giving details on what I need to achieve. I wasn't doing ACL negation just for kicks. It's because I need to integrate it into a broader setup.
>
> Your example "works", but Squid will match "goodAgent" in your first line, and exit without going on. I require to apply other rules afterwards. In other words, my intention was to first filter based on the UA string, and block all except eg. MyAllowedUAstring. From then on, I need to apply the rest of my rules.
>
>>> clientAccessCheckDone: The request CONNECT 89.16.167.134:443 is DENIED; last ACL checked: allowed_useragent
>>
>> As you can see, your CONNECT request was denied (because it lacks the
>> User-Agent header). The rest does not matter much (for now), but Squid
>> bumps the connection to serve the error page in response to the first
>> bumped HTTP request (regardless of what that first bumped HTTP request
>> looks like).
>
> So... What is the security implication of allowing all CONNECT messages to port 443?
>

Allowing them all the way through Squid is bad. But that is not what is
needed here. ssl_bump rules get applied after the CONNECT is accepted
*in* for proxy processing and they decide what happens to the tunneled
data based on what is found there.
  If bumping is decided the TLS gets removed and the messages inside
individually go through the http_access process.
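
As a rough sketch of that sequence for one intercepted HTTPS connection
(directives taken from the config you posted, trimmed for readability -
not a drop-in snippet):

  https_port 3130 tproxy ssl-bump cert=/etc/ssl/squid/proxyserver.pem
  # the intercepted connection is wrapped in a fake "CONNECT ip:443",
  # and that CONNECT has to get past http_access first
  ssl_bump stare all
  ssl_bump bump all
  # once the tunnel is bumped the TLS is removed, and each request found
  # inside (now carrying its User-Agent header) is checked against
  # http_access again, with its reply checked against http_reply_access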


...
> I'd greatly appreciate your input on this.
>
> Hoping to understand Squid logic someday.

To speed that up enable debug_options 28,3.

I second Alex's recommendation about removing the "denied_" and
"allowed_" bits of your ACL names. It will make what is going on a LOT
clearer to see and understand.


Amos

Re: block user agent

Vieri

________________________________
From: Amos Jeffries <[hidden email]>
>
>   http_access allow goodAgents !baddomains (AND)
>
>  If the first line matches the allow happens.
>  otherwise deny happens
>
> ie. goodAgents are only allowed to non-baddomains. All non-goodAgents
> are denied to everything.


From this I deduce that in my case I cannot use "http_access allow goodAgents", but I need to go for "http_access deny !goodAgents" so I can continue on evaluating the rest of my http_access rules.
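
For example (a sketch with made-up ACL names, just to show the difference):

# variant A: a matching agent stops here, so none of the later rules apply to it
http_access allow goodAgents
http_access deny badDomains

# variant B: only non-matching agents stop here; matching agents fall through
# and are still checked against the later rules
http_access deny !goodAgents
http_access deny badDomains
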
> Allowing them all the way through Squid is bad. But that is not what is
> needed here. ssl_bump rules get applied after the CONNECT is accepted
> *in* for proxy processing and they decide what happens to the tunneled
> data based on what is found there.
>   If bumping is decided the TLS gets removed and the messages inside
> individually go through the http_access process.


You lost me there. Here's what I did today.

I took your advice (and Alex's), and renamed my ACL labels. Unfortunately, I'm still a little confused :-(.

Here's part of the new Squid config (I took away the "allow all connect messages"):

# grep -v ^# /etc/squid/squid.test.include.rules | grep -v ^$
external_acl_type nt_group ttl=0 children-max=50 %LOGIN /usr/libexec/squid/ext_wbinfo_group_acl -K
auth_param negotiate program /usr/libexec/squid/negotiate_kerberos_auth -s HTTP/[hidden email]
auth_param negotiate children 60
auth_param negotiate keep_alive on
acl localnet src 10.0.0.0/8
acl localnet src 192.168.0.0/16
acl ORG_all proxy_auth REQUIRED
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 children-startup=10 children-idle=3 concurrency=8 %PROTO %DST %PORT %PATH /opt/custom/scripts/run/scripts/firewall/ext_sql_blwl_acl.pl --table=shallalist_bl --categories=adv,aggressive,alcohol,anonvpn,automobile_bikes,automobile_boats,automobile_cars,automobile_planes,chat,costtraps,dating,drugs,dynamic,finance_insurance,finance_moneylending,finance_other,finance_realestate,finance_trading,fortunetelling,forum,gamble,hacking,hobby_cooking,hobby_games-misc,hobby_games-online,hobby_gardening,hobby_pets,homestyle,imagehosting,isp,jobsearch,military,models,movies,music,podcasts,politics,porn,radiotv,recreation_humor,recreation_martialarts,recreation_restaurants,recreation_sports,recreation_travel,recreation_wellness,redirector,religion,remotecontrol,ringtones,science_astronomy,science_chemistry,sex_education,sex_lingerie,shopping,socialnet,spyware,tracker,updatesites,urlshortener,violence,warez,weapons,webphone,webradio,webtv
acl privileged_src_ips src "/opt/custom/proxy-settings/allowed.ips"
acl privileged_extra1_src_ips src "/opt/custom/proxy-settings/allowed.extra1.ips"
acl privileged_user_groups external nt_group "/opt/custom/proxy-settings/allowed.groups"
acl good_dst_domains dstdomain "/opt/custom/proxy-settings/allowed.domains"
acl good_dst_domains_with_any_filetype dstdomain "/opt/custom/proxy-settings/allowed.domains.filetypes"
acl good_dst_domains_with_any_mimetype dstdomain "/opt/custom/proxy-settings/allowed.domains.mimetypes"
acl bad_dst_domains dstdomain -i "/opt/custom/proxy-settings/denied.domains"
acl limited_dst_domains_1 dstdomain -i "/opt/custom/proxy-settings/denied.extra1.domains"
acl bad_ads url_regex "/opt/custom/proxy-settings/denied.ads"
acl bad_filetypes urlpath_regex -i "/opt/custom/proxy-settings/denied.filetypes"
acl bad_requested_mimetypes req_mime_type -i "/opt/custom/proxy-settings/denied.mimetypes"
acl limited_requested_mimetypes_1 req_mime_type -i "/opt/custom/proxy-settings/denied.extra1.mimetypes"
acl bad_replied_mimetypes rep_mime_type -i "/opt/custom/proxy-settings/denied.mimetypes"
acl limited_replied_mimetypes_1 rep_mime_type -i "/opt/custom/proxy-settings/denied.extra1.mimetypes"
acl restricted_requested_mimetypes_1 req_mime_type -i "/opt/custom/proxy-settings/denied.restricted1.mimetypes"
acl restricted_replied_mimetypes_1 rep_mime_type -i "/opt/custom/proxy-settings/denied.restricted1.mimetypes"
acl restricted_good_dst_domains_1 dstdomain -i "/opt/custom/proxy-settings/allowed.restricted1.domains"
acl restricted_src_ips_1 dst "/opt/custom/proxy-settings/allowed.restricted1.ips"
acl explicit_only_src_ips src "/opt/custom/proxy-settings/restricted.ips"
acl explicit_only_user_groups external nt_group "/opt/custom/proxy-settings/restricted.groups"
acl explicit_only_dst_domains dstdomain "/opt/custom/proxy-settings/restricted.domains"
acl bl_lookup external bllookup
acl bad_urlshorteners dstdomain -i "/etc/squidGuard/db/HMANshallalist/urlshortener/domains"
acl redirected_domain_1 dstdomain .somedomain.com
acl good_useragents browser Firefox/
acl good_useragents browser Edge/
acl src_ips_with_any_useragent src "/opt/custom/proxy-settings/allowed.useragents.ips"
http_access deny explicit !ORG_all
http_access deny explicit SSL_ports
http_access deny intercepted !localnet
http_access deny interceptedssl !localnet
http_access deny !good_useragents !src_ips_with_any_useragent
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_useragents good_useragents
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_useragents src_ips_with_any_useragent
http_access allow localnet !explicit_only_src_ips good_dst_domains
http_access allow localnet !explicit_only_src_ips privileged_src_ips
http_reply_access allow localnet !explicit_only_src_ips privileged_src_ips
http_reply_access allow localnet !explicit_only_src_ips good_dst_domains
http_access allow explicit_only_src_ips explicit_only_dst_domains
http_access deny explicit_only_src_ips
http_access deny redirected_domain_1
deny_info 302:http://www.google.es redirected_domain_1
http_access deny !privileged_src_ips bad_urlshorteners
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_urlshorteners bad_urlshorteners
http_access allow restricted_requested_mimetypes_1 restricted_good_dst_domains_1
http_access allow restricted_requested_mimetypes_1 restricted_src_ips_1
http_reply_access allow restricted_replied_mimetypes_1 restricted_good_dst_domains_1
http_reply_access allow restricted_replied_mimetypes_1 restricted_src_ips_1
http_access allow limited_requested_mimetypes_1 privileged_extra1_src_ips limited_dst_domains_1
http_reply_access allow limited_replied_mimetypes_1 privileged_extra1_src_ips limited_dst_domains_1
http_access deny restricted_requested_mimetypes_1
http_reply_access deny restricted_replied_mimetypes_1
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes restricted_replied_mimetypes_1
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes restricted_requested_mimetypes_1
http_access deny limited_requested_mimetypes_1
http_reply_access deny limited_replied_mimetypes_1
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes limited_requested_mimetypes_1
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes limited_replied_mimetypes_1
http_access deny !privileged_src_ips bad_dst_domains
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_dst_domains bad_dst_domains
http_access allow privileged_extra1_src_ips limited_dst_domains_1
http_access deny limited_dst_domains_1
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=limited_dst_domains_1 limited_dst_domains_1
http_access deny bad_filetypes !good_dst_domains_with_any_filetype
http_reply_access deny bad_filetypes !good_dst_domains_with_any_filetype
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_filetypes bad_filetypes
http_access deny bad_requested_mimetypes !good_dst_domains_with_any_mimetype
http_reply_access deny bad_replied_mimetypes !good_dst_domains_with_any_mimetype
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes bad_requested_mimetypes
deny_info http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes bad_replied_mimetypes
http_access allow localnet bl_lookup
#debug_options rotate=1 28,3
debug_options rotate=1 ALL,2
append_domain .mydomain.org
reply_header_access Alternate-Protocol deny all
ssl_bump stare all
ssl_bump bump all
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service squidclamav respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all
include /etc/squid/squid.include.common
include /etc/squid/squid.include.hide
cache_mem 32 MB
max_filedescriptors 65536
icap_service_failure_limit -1

Here's what I do from a client:

curl --insecure --user-agent Firefox/57 https://www.gentoo.org/

Here's what I get with 28,3 debug options:

2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(70) preCheck: 0xeb47c8 checking slow rules
2017/11/21 10:02:24.278 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: all = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: (ssl_bump rule) = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: (ssl_bump rules) = 1
2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(63) markFinished: 0xeb47c8 answer ALLOWED for match
2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(163) checkCallback: ACLChecklist::checkCallback: 0xeb47c8 answer=ALLOWED
2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(70) preCheck: 0x13450c8 checking slow rules
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: Safe_ports = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: !Safe_ports = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#1 = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: CONNECT = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: SSL_ports = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: !SSL_ports = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#2 = 0
2017/11/21 10:02:24.278 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' NOT found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#3 = 0
2017/11/21 10:02:24.278 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking '89.16.167.134:443'
2017/11/21 10:02:24.278 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^cache_object://)'
2017/11/21 10:02:24.278 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^https?://[^/]+/squid-internal-mgr/)'
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: manager = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#4 = 0
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(34) match: aclMatchStringList: checking '3229'
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(37) match: aclMatchStringList: '3229' NOT found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: explicit = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#5 = 0
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(34) match: aclMatchStringList: checking '3229'
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(37) match: aclMatchStringList: '3229' NOT found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: explicit = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#6 = 0
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(34) match: aclMatchStringList: checking '3229'
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(37) match: aclMatchStringList: '3229' NOT found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: intercepted = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#7 = 0
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(34) match: aclMatchStringList: checking '3229'
2017/11/21 10:02:24.278 kid1| 28,3| StringData.cc(37) match: aclMatchStringList: '3229' found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: interceptedssl = 1
2017/11/21 10:02:24.278 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: localnet = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: !localnet = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#8 = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: good_useragents = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: !good_useragents = 1
2017/11/21 10:02:24.278 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' NOT found
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: src_ips_with_any_useragent = 0
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: !src_ips_with_any_useragent = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access#9 = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: http_access = 1
2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(63) markFinished: 0x13450c8 answer DENIED for match
2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(163) checkCallback: ACLChecklist::checkCallback: 0x13450c8 answer=DENIED
2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffd4e3e2530 checking fast ACLs
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: (access_log daemon:/var/log/squid/access.test.log line) = 1
2017/11/21 10:02:24.278 kid1| 28,3| Acl.cc(158) matches: checked: access_log daemon:/var/log/squid/access.test.log = 1
2017/11/21 10:02:24.278 kid1| 28,3| Checklist.cc(63) markFinished: 0x7ffd4e3e2530 answer ALLOWED for match
2017/11/21 10:02:24.288 kid1| 28,3| Checklist.cc(70) preCheck: 0xeb47c8 checking slow rules
2017/11/21 10:02:24.288 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' found
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: localnet = 1
2017/11/21 10:02:24.288 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' NOT found
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: explicit_only_src_ips = 0
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: !explicit_only_src_ips = 1
2017/11/21 10:02:24.288 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' NOT found
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: privileged_src_ips = 0
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#1 = 0
2017/11/21 10:02:24.288 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' found
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: localnet = 1
2017/11/21 10:02:24.288 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: '10.215.144.48' NOT found
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: explicit_only_src_ips = 0
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: !explicit_only_src_ips = 1
2017/11/21 10:02:24.288 kid1| 28,3| DomainData.cc(108) match: aclMatchDomainList: checking 'www.gentoo.org'
2017/11/21 10:02:24.288 kid1| 28,3| DomainData.cc(113) match: aclMatchDomainList: 'www.gentoo.org' NOT found
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: good_dst_domains = 0
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#2 = 0
2017/11/21 10:02:24.288 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking 'text/html;charset=utf-8'
2017/11/21 10:02:24.288 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^application/octet-stream$)'
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: restricted_replied_mimetypes_1 = 0
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#3 = 0
2017/11/21 10:02:24.288 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking 'text/html;charset=utf-8'
2017/11/21 10:02:24.288 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^application/octet-stream$)'
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: restricted_replied_mimetypes_1 = 0
2017/11/21 10:02:24.288 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#4 = 0
2017/11/21 10:02:24.288 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking 'text/html;charset=utf-8'
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^application/mp21$)|(^application/mp4$)|(^application/vnd.rn-realmedia$)|(^application/vnd.tmobile-livetv$)|(^audio/)|(^video/)'
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: limited_replied_mimetypes_1 = 0
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#5 = 0
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking 'text/html;charset=utf-8'
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^application/octet-stream$)'
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: restricted_replied_mimetypes_1 = 0
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#6 = 0
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking 'text/html;charset=utf-8'
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^application/mp21$)|(^application/mp4$)|(^application/vnd.rn-realmedia$)|(^application/vnd.tmobile-livetv$)|(^audio/)|(^video/)'
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: limited_replied_mimetypes_1 = 0
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#7 = 0
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking '/'
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(\.ade(\?.*)?$)|(\.adp(\?.*)?$)|(\.app(\?.*)?$)|(\.asd(\?.*)?$)|(\.asf(\?.*)?$)|(\.asx(\?.*)?$)|(\.avi(\?.*)?$)|(\.bas(\?.*)?$)|(\.bat(\?.*)?$)|(\.cab(\?.*)?$)|(\.chm(\?.*)?$)|(\.cmd(\?.*)?$)|(\.cpl(\?.*)?$)|(\.dll$)|(\.exe(\?.*)?$)|(\.fxp(\?.*)?$)|(\.hlp(\?.*)?$)|(\.hta(\?.*)?$)|(\.hto(\?.*)?$)|(\.inf(\?.*)?$)|(\.ini(\?.*)?$)|(\.ins(\?.*)?$)|(\.iso(\?.*)?$)|(\.isp(\?.*)?$)|(\.jse(.?)(\?.*)?$)|(\.jse(\?.*)?$)|(\.lib(\?.*)?$)|(\.lnk(\?.*)?$)|(\.mar(\?.*)?$)|(\.mdb(\?.*)?$)|(\.mde(\?.*)?$)|(\.mp3(\?.*)?$)|(\.mpeg(\?.*)?$)|(\.mpg(\?.*)?$)|(\.msc(\?.*)?$)|(\.msi(\?.*)?$)|(\.msp(\?.*)?$)|(\.mst(\?.*)?$)|(\.ocx(\?.*)?$)|(\.pcd(\?.*)?$)|(\.pif(\?.*)?$)|(\.prg(\?.*)?$)|(\.reg(\?.*)?$)|(\.scr(\?.*)?$)|(\.sct(\?.*)?$)|(\.sh(\?.*)?$)|(\.shb(\?.*)?$)|(\.shs(\?.*)?$)|(\.sys(\?.*)?$)|(\.url(\?.*)?$)|(\.vb(\?.*)?$)|(\.vbe(\?.*)?$)|(\.vbs(\?.*)?$)|(\.vcs(\?.*)?$)|(\.vxd(\?.*)?$)|(\.wmd(\?.*)?$)|(\.wms(\?.*)?$)|(\.wmv(\?.*)?$)|(\.wmz(\?.*)?$)|(\.wsc(\?.*)?$)|(\.wsf(\?.*)?$)|(\.wsh(\?.*)?$)'
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: bad_filetypes = 0
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#8 = 0
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(51) match: aclRegexData::match: checking 'text/html;charset=utf-8'
2017/11/21 10:02:24.289 kid1| 28,3| RegexData.cc(62) match: aclRegexData::match: looking for '(^application/ecmascript$)|(^application/mp21$)|(^application/mp4$)|(^application/oebps-package+xml$)|(^application/vnd.amazon.ebook$)|(^application/vnd.android.package-archive$)|(^application/vnd.gmx$)|(^application/vnd.google-earth.kml+xml$)|(^application/vnd.google-earth.kmz$)|(^application/vnd.ms-cab-compressed$)|(^application/vnd.ms-excel.addin.macroenabled.12$)|(^application/vnd.ms-excel.sheet.binary.macroenabled.12$)|(^application/vnd.ms-excel.sheet.macroenabled.12$)|(^application/vnd.ms-excel.template.macroenabled.12$)|(^application/vnd.ms-powerpoint.addin.macroenabled.12$)|(^application/vnd.ms-powerpoint.presentation.macroenabled.12$)|(^application/vnd.ms-powerpoint.slide.macroenabled.12$)|(^application/vnd.ms-powerpoint.slideshow.macroenabled.12$)|(^application/vnd.ms-powerpoint.template.macroenabled.12$)|(^application/vnd.ms-wpl$)|(^application/vnd.ms.wms-hdr.asfv1$)|(^application/vnd.realvnc.bed$)|(^application/vnd.rn-realmedia$)|(^application/vnd.tmobile-livetv$)|(^application/x-authorware-bin$)|(^application/x-cab$)|(^application/x-iso9660-image$)|(^application/x-mms-framed$)|(^application/x-ms-wm$)|(^application/x-msdos-program$)|(^application/x-msdownload$)|(^application/x-shar$)|(^application/x-vbs$)|(^audio/)|(^text/vbs$)|(^text/vbscript$)|(^video/)'
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: bad_replied_mimetypes = 0
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access#9 = 0
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: http_reply_access = 0
2017/11/21 10:02:24.289 kid1| 28,3| Checklist.cc(386) calcImplicitAnswer: 0xeb47c8 NO match found, last action DENIED so returning ALLOWED
2017/11/21 10:02:24.289 kid1| 28,3| Checklist.cc(63) markFinished: 0xeb47c8 answer ALLOWED for implicit rule won
2017/11/21 10:02:24.289 kid1| 28,3| Checklist.cc(163) checkCallback: ACLChecklist::checkCallback: 0xeb47c8 answer=ALLOWED
2017/11/21 10:02:24.289 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffd4e3e2660 checking fast ACLs
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: (access_log daemon:/var/log/squid/access.test.log line) = 1
2017/11/21 10:02:24.289 kid1| 28,3| Acl.cc(158) matches: checked: access_log daemon:/var/log/squid/access.test.log = 1
2017/11/21 10:02:24.289 kid1| 28,3| Checklist.cc(63) markFinished: 0x7ffd4e3e2660 answer ALLOWED for match

It seems that Squid decides to ALLOW, right?

Now, here's the log with ALL,2:

2017/11/21 10:07:01.079 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New connection on FD 93
2017/11/21 10:07:01.079 kid1| 5,2| TcpAcceptor.cc(295) acceptNext: connection on local=[::]:3229 remote=[::] FD 93 flags=25
2017/11/21 10:07:01.079 kid1| 33,2| client_side.cc(3943) httpsSslBumpAccessCheckDone: sslBump needed for local=89.16.167.134:443 remote=10.215.144.48 FD 13 flags=17 method 4
2017/11/21 10:07:01.079 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 13 flags=17
2017/11/21 10:07:01.079 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
CONNECT 89.16.167.134:443 HTTP/1.1
Host: 89.16.167.134:443


----------
2017/11/21 10:07:01.079 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request CONNECT 89.16.167.134:443 is DENIED; last ACL checked: src_ips_with_any_useragent
2017/11/21 10:07:01.079 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/21 10:07:01.079 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/21 10:07:01.079 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/21 10:07:01.089 kid1| 83,2| client_side.cc(3843) clientNegotiateSSL: clientNegotiateSSL: New session 0x13c6250 on FD 13 (10.215.144.48:42279)
2017/11/21 10:07:01.090 kid1| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 13 flags=17
2017/11/21 10:07:01.090 kid1| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
---------
GET / HTTP/1.1
Host: www.gentoo.org
User-Agent: Firefox/57
Accept: */*


----------
2017/11/21 10:07:01.090 kid1| 88,2| client_side_reply.cc(2073) processReplyAccessResult: The reply for GET https://www.gentoo.org/ is ALLOWED, because it matched bad_replied_mimetypes
2017/11/21 10:07:01.090 kid1| 11,2| client_side.cc(1409) sendStartOfMessage: HTTP Client local=89.16.167.134:443 remote=10.215.144.48 FD 13 flags=17
2017/11/21 10:07:01.090 kid1| 11,2| client_side.cc(1410) sendStartOfMessage: HTTP Client REPLY:
---------
HTTP/1.1 307 Temporary Redirect
Server: squid
Mime-Version: 1.0
Date: Tue, 21 Nov 2017 09:07:01 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
Location: http://proxy-server1/proxy-error/?a=-&B=&e=0&E=%5BNo%20Error%5D&H=89.16.167.134&i=10.215.144.48&M=CONNECT&o=&R=/&T=Tue,%2021%20Nov%202017%2009%3A07%3A01%20GMT&U=https%3A%2F%2F89.16.167.134%2F*&u=89.16.167.134%3A443&w=IT%40mydomain.org&x=&acl=bad_useragents
X-Squid-Error: 403 Access Denied
X-Cache: MISS from proxy-server1
X-Cache-Lookup: NONE from proxy-server1:3227
Connection: close


----------
2017/11/21 10:07:01.090 kid1| 33,2| client_side.cc(832) swanSong: local=89.16.167.134:443 remote=10.215.144.48 flags=17
2017/11/21 10:07:01.090 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable
2017/11/21 10:07:01.090 kid1| 20,2| store.cc(996) checkCachable: StoreEntry::checkCachable: NO: not cachable

Isn't the message "The request CONNECT 89.16.167.134:443 is DENIED" what I should be concentrating on?
Isn't that the root cause?
In another message, you mentioned that I should notice that Squid reports another ACL name (in this case, after the name change, it's "bad_replied_mimetypes").
In any case, the message "The reply for GET https://www.gentoo.org/ is ALLOWED" means that Squid should ALLOW, right?
However, why do I get a 307 redirect to a deny_info page (where incidentally the URL refers to bad_useragents, not bad_replied_mimetypes)?

I can't seem to clear this up and make it work without adding "http_access allow CONNECT SSL_ports" right before checking for the useragent.

Help greatly appreciated.

Vieri

Re: block user agent

Amos Jeffries
Administrator
On 21/11/17 23:06, Vieri wrote:

>
> ________________________________
> From: Amos Jeffries
>>
>>    http_access allow goodAgents !baddomains (AND)
>>
>>   If the first line matches the allow happens.
>>   otherwise deny happens
>>
>> ie. goodAgents are only allowed to non-baddomains. All non-goodAgents
>> are denied to everything.
>
>
>  From this I deduce that in my case I cannot use "http_access allow goodAgents", but I need to go for "http_access deny !goodAgents" so I can continue on evaluating the rest of my http_access rules.
>> Allowing them all the way through Squid is bad. But that is not what is
>> needed here. ssl_bump rules get applied after the CONNECT is accepted
>> *in* for proxy processing and they decide what happens to the tunneled
>> data based on what is found there.
>>    If bumping is decided the TLS gets removed and the messages inside
>> individually go through the http_access process.
>
>
> You lost me there. Here's what I did today.
>
> I took your advice (and Alex's), and renamed my ACL labels. Unfortunately, I'm still a little confused :-(.
>

[ snipping the log traces to keep the mail relatively small ]
>
> It seems that Squid decides to ALLOW, right?

Look at the "markFinished" lines for the outcome of each set of access
controls outcomes. The lines leading up to those details which ACL tests
are performed and which *_access lines they were on.


Ignoring the first few quoted lines, which are for some earlier
ssl-bumped transaction, I see:

* http_access line #9 DENIED an HTTP request.

* access_log is ALLOWED to record something. Probably unrelated traffic.

* http_reply_access line #9 ALLOWED a reply to be delivered, then

* access_log is ALLOWED to log something.


>
> Isn't the message "The request CONNECT 89.16.167.134:443 is DENIED" what I should be concentrating on?
> Isn't that the root cause?

Yes, that line is the outcome. The cause of the denial is what ACL
check(s) led to it.

Specifically in these log lines:

   matches: checked: interceptedssl = 1
   match: aclIpMatchIp: '10.215.144.48' found
   matches: checked: localnet = 1
   matches: checked: !localnet = 0
   matches: checked: http_access#8 = 0

   matches: checked: good_useragents = 0
   matches: checked: !good_useragents = 1
   match: aclIpMatchIp: '10.215.144.48' NOT found
   matches: checked: src_ips_with_any_useragent = 0
   matches: checked: !src_ips_with_any_useragent = 1
   matches: checked: http_access#9 = 1
   matches: checked: http_access = 1

Which strangely do not seem to match your squid.conf details. These are
the 8th and 9th http_access lines in the squid.conf which is used by the
running proxy.
But in your quoted squid.conf they are lines #4 and #5.


  http_access deny interceptedssl !localnet
   - it was interceptedssl from localnet
   - so not a match due to !localnet

  http_access deny !good_useragents !src_ips_with_any_useragent
   - it has no UA, and
   - it is not listed in that IP whitelist.
   - the NOT condition (!) on both of those make the whole line a match.


> In another message, you mentioned that I should notice that Squid reports another ACL name (in this case, after the name change, it's "bad_replied_mimetypes").
> In any case, the message "The reply for GET https://www.gentoo.org/ is ALLOWED" means that Squid should ALLOW, right?

No, it means that *reply* is allowed.

A reply *might* be from a server, or from cache, or a 403 denial error
page generated by Squid, or one of the deny_info redirects you have
configured to happen - like that one in the "HTTP Client REPLY" in the
second log trace.



> However, why do I get a 307 redirect to a deny_info page (where incidentally the URL refers to bad_useragents, not bad_replied_mimetypes)?

Because the CONNECT _request_ was denied and that redirect _reply_ is
what a deny_info with a URL generates when its associated ACL causes a
denial.

bad_useragents is just the label embedded in your deny_info URL; the ACL
actually checking the *request* there (src_ips_with_any_useragent, as the
log shows) is what triggered that redirect to happen.

bad_replied_mimetypes was just the last ACL tested to see if that
redirect was allowed to be delivered to the client.
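
Stripped right down, the pairing in your config works roughly like this
(URL shortened):

  acl good_useragents browser Firefox/
  acl src_ips_with_any_useragent src "/opt/custom/proxy-settings/allowed.useragents.ips"
  http_access deny !good_useragents !src_ips_with_any_useragent
  deny_info http://proxy-server1/proxy-error/?acl=bad_useragents src_ips_with_any_useragent
  # when that deny line fires, the last ACL evaluated on it selects the
  # matching deny_info, and a deny_info whose argument is a URL is sent
  # to the client as a 3xx redirect rather than Squid's stock error page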


If we assume that the two log traces actually line up, then
bad_replied_mimetypes is irrelevant here, because the http_reply_access
result was "NO match found, last action DENIED so returning ALLOWED".


>
> I can't seem to clear this out and make it work without adding "http_access allow CONNECT SSL_ports" right before checking for the useragent.


If you place that after the default "deny CONNECT !SSL_ports", and
before your UA checks, AND if you are using ssl_bump on the allowed
tunnels then you can relatively safely use "allow CONNECT".

Just be careful that the CONNECTs allowed by that are always handled
safely by the ssl_bump rules you have.
  Meaning that you either bump or terminate traffic you are not sure is
okay, splice if you are reasonably sure, etc. It is a balancing act
between the "splice as much as possible" and "terminate if unsure of the
traffic" advice.


Just FYI you would be a huge amount better off dropping the UA
fingerprinting. It's a _really_ simplistic idea about the HTTP world,
and it is partly because of that overly-simplistic nature and depending
on unreliable values that you are having so much more trouble than
normal admins face.

Amos

Re: block user agent

Vieri
________________________________
From: Amos Jeffries <[hidden email]>

>
> If you place that after the default "deny CONNECT !SSL_ports", and
> before your UA checks, AND if you are using ssl_bump on the allowed
> tunnels then you can relatively safely use "allow CONNECT".
>
> Just be careful that the CONNECT allowed by that are always handled
> safely by the ssl_bump rules you have.
>   Meaning that you either bump or terminate traffic you are not sure is
> okay, splice if you are reasonably sure, etc. it is a balancing effort
> between "splice as much as possible" and "terminate if unsure of the
> traffic" advice.


As you say, I placed "allow CONNECT" after the default "deny CONNECT !SSL_ports", and before my UA checks. I'm also using:
ssl_bump stare all
ssl_bump bump all


Considering the following (taken from previous e-mail):

http_access deny intercepted !localnet
http_access deny interceptedssl !localnet
http_access deny explicit !ORG_all
http_access deny explicit SSL_ports

Would it be "safer" or "indifferent" to use the following right before the UA checks?

http_access allow CONNECT interceptedssl SSL_ports


> Just FYI you would be a huge amount better off dropping the UA
> fingerprinting. It's a _really_ simplistic idea about the HTTP world,
> and it is partly because of that overly-simplistic nature and depending
> on unreliable values that you are having so much more trouble than
> normal admin face.


I'm aware that UA checks are not fully reliable, but in a big corporate environment they can reveal a lot of interesting information.

I also know that some HTTP clients mimic others' user-agent strings or substrings. They can even sometimes dynamically change them.

However, in my particular case I could define a custom UA for our corporate browser allowed to go through Squid. For instance, Firefox can easily do that. Other browsers such as Edge seem not to.
In any case, it is not my intention to do so long-term. In the short term I found out that:

1) Squid logic *can* be understood :-)

2) some hosts may have HTTP clients that should be blocked even though the rest of the Squid rules were not written with them in mind (so I couldn't know about them). A simple example: we may allow traffic to all Microsoft sites, but some software may not be well installed/configured. I found that Microsoft Office may connect to an MS site to download or update software with a utility/service called OfficeClickToRun. Of course, generic rules in squid.conf already blocked unauthorized downloads according to mimetypes or filetypes. However, some clients could be whitelisted and allowed to download (e.g. from all MS sites), and in that case I would not necessarily want OfficeClickToRun to update. Blocking that could be done by identifying the dst domains, but they could change over time, and in any case would require more digging.


Adobe has similar http client behavior.
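
For cases like these the UA check could stay very targeted, along these
lines (a sketch only; whether such updaters really send a recognisable
token like this in their User-Agent is an assumption I would first verify
in access.log):

acl updater_useragents browser OfficeClickToRun
http_access deny updater_useragents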


Anyway, it's informative to say the least, and can be used to improve the rest of the "standard" squid acl access rules.

I was also thinking of using custom HTTP headers such as X-MyCustomHeader: Whatever instead of UA strings. Custom headers can easily be added in Firefox, and other browsers such as Edge also seem to support that.
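
If I go that route, Squid's req_header ACL type looks like the right tool.
Something along these lines (a sketch; the header name and value are
invented, and the same CONNECT caveat discussed above would apply):

acl corp_header req_header X-MyCustomHeader -i ^Whatever$
http_access deny !corp_header !src_ips_with_any_useragent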

Anyway, I had a great time fiddling with Squid.
Thank you for your assistance.

Vieri

Re: block user agent

Amos Jeffries
Administrator
On 22/11/17 23:48, Vieri wrote:

> ________________________________
> From: Amos Jeffries <[hidden email]>
>>
>> If you place that after the default "deny CONNECT !SSL_ports", and
>> before your UA checks, AND if you are using ssl_bump on the allowed
>> tunnels then you can relatively safely use "allow CONNECT".
>>
>> Just be careful that the CONNECT allowed by that are always handled
>> safely by the ssl_bump rules you have.
>>    Meaning that you either bump or terminate traffic you are not sure is
>> okay, splice if you are reasonably sure, etc. it is a balancing effort
>> between "splice as much as possible" and "terminate if unsure of the
>> traffic" advice.
>
>
> As you say, I placed "allow CONNECT" after the default "deny CONNECT !SSL_ports", and before my UA checks. I'm also using:
> ssl_bump stare all
> ssl_bump bump all
>
>
> Considering the following (taken from previous e-mail):
>
> http_access deny intercepted !localnet
> http_access deny interceptedssl !localnet
> http_access deny explicit !ORG_all
> http_access deny explicit SSL_ports
>
> Would it be "safer" or "indifferent" to use the following right before the UA checks?
>
> http_access allow CONNECT interceptedssl SSL_ports
>

All CONNECT transactions that get past that earlier line with !SSL_ports
will match SSL_ports, so that part of the line is redundant.

The "CONNECT interceptedssl" is more restricted than just "CONNECT" - so
is safer due to that yes. But also leaves some traffic open to the same
denial problem you had earlier if non-UA CONNECT happen other ways. Up
to you whether that is wanted or acceptible.
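
Trimmed down it could be something like this (a sketch):

   http_access deny CONNECT !SSL_ports
   http_access allow CONNECT interceptedssl
   # SSL_ports is implied by the deny line above; a CONNECT without a
   # User-Agent arriving any other way would still trip the UA deny below
   http_access deny !good_useragents !src_ips_with_any_useragent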


Amos