acl for redirect


acl for redirect

talikarni
We have a server set up using Squid 3.5 and e2guardian (a newer fork of
DansGuardian). The issue is that Google has changed a few things and
Google is no longer filtered, which is not acceptable. We already have
the browsers' SSL Proxy setting pointed at our server, and Squid has
ssl-bump enabled and working. Previously there was enough unencrypted
content on Google that the filtering still worked, but now Google has
gone 100% encrypted, meaning it is 100% unfiltered. What is happening is
that an SSL tunnel (for lack of a better term) is created between their
server and the browser, so all Squid sees is the connection to
www.google.com; after that everything is tunneled and not recognized by
Squid or e2guardian at all.

I found a few options online that were used with older Squid versions, but
nothing is working with Squid 3.5... I am looking for something like this:

acl google dstdomain .google.com
deny_info http://www.google.com/webhp?nord=1 google
http_access deny google

Essentially we want Squid to take all regular requests for google.com
and redirect them to the unsecured page at
http://www.google.com/webhp?nord=1, which allows e2guardian to filter
properly. With the current settings, though, requests just go to the
Squid access-denied page.

Mike

Re: acl for redirect

Amos Jeffries
Administrator
On 24/06/2015 11:03 a.m., Mike wrote:
> We have a server setup using squid 3.5 and e2guardian (newer branch of
> dansguardian), the issue is now google has changed a few things around
> and google is no longer filtered which is not acceptable. We already
> have the browser settings for SSL Proxy set to our server, and squid has
> ssl-bump enabled and working. Previously there was enough unsecure
> content on Google that the filtering was still working, but now google
> has gone 100% encrypted meaning it is 100% unfiltered.

Maybe, maybe not.

> What is happening
> is it is creating an ssl tunnel (for lack of a better term) between

No. That is the correct and official term for what they are doing. And
"CONNECT tunnel" is the full phrase / name for the particular method of
tunnel creation.


> their server and the browser, so all squid sees is the connection to
> www.google.com, and after that it is tunneled and not recognized by
> squid or e2guardian at all.

BUT ... you said you were SSL-Bump'ing. Which means you are decrypting
such tunnels to filter the content inside them.

So what is the problem? Is your method of bumping not decrypting the
Google traffic for Squid access controls and helpers to filter?

Note that DansGuardian and e2guardian, being independent HTTP proxies, are
not party to the SSL-Bump decrypted content inside Squid. Only Squid
internals and ICAP/eCAP services have access to it.

>
> I found a few options online that was used with older squid versions but
> nothing is working with squid 3.5... Looking for something like this:
>
> acl google dstdomain .google.com
> deny_info http://www.google.com/webhp?nord=1 google

As you said, Google has gone 100% HTTPS. URLs beginning with http:// are
not HTTPS and are no longer accepted there. If used, they just get a 30x
redirect to an https:// URL.

Amos


Re: acl for redirect

talikarni
Amos, thanks for the info.

The primary settings being used in squid.conf:

http_port 8080
# this port is what will be used for SSL Proxy on client browser
http_port 8081 intercept

https_port 8082 intercept ssl-bump connection-auth=off generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost


Then e2guardian listens on 10101 for the browsers and connects to Squid on 8080 on the same server.

Yet what is happening is that there is the GET, then the CONNECT, and the tunnel is created, never allowing Squid to decrypt and pass the data along to e2guardian. I suspect Google has changed their settings to deny any proxy from intercepting, because we can type the most foul terms, which are in the e2guardian "bannedssllist", and literally nothing is filtered at all on Google or YouTube. Yet other secure sites like WordPress, Yahoo, and others are caught and blocked, so it is just Google-owned sites that are not.

More below...


On 6/24/2015 6:36 AM, Amos Jeffries wrote:
As you said, Google has gone 100% HTTPS. URLs beginning with http:// are
not HTTPS and are no longer accepted there. If used, they just get a 30x
redirect to an https:// URL.

Amos
This is why we are thinking we can force the redirect, if you have ideas on how to do that. All Google pages use the secure version, except when http://www.google.com/webhp?nord=1 is used; that forces use of the insecure pages and allows e2guardian filtering to work properly.

Thank you,

Mike


Re: acl for redirect

Amos Jeffries
Administrator
On 26/06/2015 2:36 a.m., Mike wrote:

> Amos, thanks for info.
>
> The primary settings being used in squid.conf:
>
> http_port 8080
> # this port is what will be used for SSL Proxy on client browser
> http_port 8081 intercept
>
> https_port 8082 intercept ssl-bump connection-auth=off
> generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
> cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key
> cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH
>
>
> sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
> sslcrtd_children 50 startup=5 idle=1
> ssl_bump server-first all
> ssl_bump none localhost
>
>
> Then e2guardian uses 10101 for the browsers, and uses 8080 for
> connecting to squid on the same server.

Doesn't matter. Due to TLS security requirements Squid ensures the TLS
connection is re-encrypted on the outgoing side.


I am doubtful the nord trick works anymore, since Google's own documentation
for schools states that one must install a MITM proxy that does the traffic
filtering - e2guardian is not one of those. IMO you should convert your
e2guardian config into Squid ACL rules that can be applied to the bumped
traffic without forcing http://.

But if nord does work, so should the deny_info in Squid. Something like
this probably:

 acl google dstdomain .google.com
 deny_info 301:http://%H%R?nord=1 google

 acl GwithQuery urlpath_regex \?
 deny_info 301:http://%H%R&nord=1 GwithQuery

 http_access deny google GwithQuery
 http_access deny google


Amos

Re: acl for redirect

FredB
Mike, you can also try the dev branch https://github.com/e2guardian/e2guardian/tree/develop
SSLMITM works now. The request from the client is intercepted, a spoofed certificate is
supplied for the target site, and an encrypted connection is made back to the client.
A separate encrypted connection to the target server is set up. The resulting
decrypted HTTP stream is then filtered as normal.

https://github.com/e2guardian/e2guardian/blob/develop/notes/ssl_mitm

Fred

Re: acl for redirect

Amos Jeffries
Administrator
On 26/06/2015 8:40 p.m., FredB wrote:
> Mike, you can also to try the dev branch https://github.com/e2guardian/e2guardian/tree/develop 
> SSLMITM works now. The request from the client is intercepted, a spoofed certificate supplied for
> the target site and an encrypted connection made back to the client.  
> A separate encrypted connection to the target server is set up.  The resulting
> http dencrypted stream is then filtered as normal.

If that order of operations is correct then the e2guardian devs have made
the same mistake we made back in Squid-3.2. Client-first bumping opens a
huge security vulnerability: by hiding issues on the server connection
from the client it enables attackers to hijack the server connection
invisibly. This is the reason the harder-to-configure server-first and
peek-n-splice modes exist and are almost mandatory in Squid today.
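
(For reference, a minimal sketch of the peek-then-bump style of rules Amos
refers to, in Squid 3.5 syntax; which traffic should be bumped versus
spliced is a policy decision and depends on the deployment:)

 # evaluate the TLS client hello (SNI) before deciding what to do
 acl step1 at_step SslBump1
 ssl_bump peek step1
 # bump (decrypt) everything that was peeked at in step 1
 ssl_bump bump all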

Amos


Re: acl for redirect

FredB
Thanks Amos, I will discuss this in more detail with the dev of SSLMITM in E2.

Fred

Re: acl for redirect - re Fred

talikarni
In reply to this post by FredB
Yes we already have that version installed, that is the version having
these issues.

[root@Server1 ~]# e2guardian -v
e2guardian 3.0.4




Re: acl for redirect - re Amos

talikarni
In reply to this post by Amos Jeffries
Amos,

I would like to use e2guardian if possible, and after checking it out,
http://www.google.com/webhp?nord=1 does force the insecure page, but the
entries attempted so far just cause all searches to loop back to that
same URL instead of passing them along.

We could use a regex option in Squid, but since we want the rest of the
sites to be handled normally through e2guardian, what acl entries would
we use so that it only takes effect on google.com? Essentially "if
dstdomain = google.com then use acl blocklist /etc/squid/badwords".
I have not used a two-layer or referring acl setup before, but until now
I never needed to.

Thank you so much for the help!

Mike
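
(A minimal squid.conf sketch of the kind of rule set being asked for above.
The file name /etc/squid/badwords is taken from the message and is assumed
to hold one case-insensitive regular expression per line; a url_regex match
against the full URL only works for traffic Squid has actually bumped:)

 acl google dstdomain .google.com
 # blocked terms/words/phrases, one regex per line
 acl badterms url_regex -i "/etc/squid/badwords"
 # deny only when both the Google domain and a blocked term match
 http_access deny google badterms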


On 6/26/2015 0:29 AM, Amos Jeffries wrote:

> On 26/06/2015 2:36 a.m., Mike wrote:
>> Amos, thanks for info.
>>
>> The primary settings being used in squid.conf:
>>
>> http_port 8080
>> # this port is what will be used for SSL Proxy on client browser
>> http_port 8081 intercept
>>
>> https_port 8082 intercept ssl-bump connection-auth=off
>> generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
>> cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key
>> cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH
>>
>>
>> sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
>> sslcrtd_children 50 startup=5 idle=1
>> ssl_bump server-first all
>> ssl_bump none localhost
>>
>>
>> Then e2guardian uses 10101 for the browsers, and uses 8080 for
>> connecting to squid on the same server.
> Doesn;t matter. Due to TLS security requirements Squid ensures the TLS
> connection in re-encrypted on outgoing.
>
>
> I am doubtful eth nord works anymore since Googles own documentation for
> schools states that one must install a MITM proxy that does the traffic
> filtering - e2guardian is not one of those. IMO you should convert your
> e2guardian config into Squid ACL rules that can be applied to the bumped
> traffic without forcing http://
>
> But if nord does work, so should the deny_info in Squid. Something like
> this probably:
>
>   acl google dstdomain .google.com
>   deny_info 301:<a href="http://%H%R?nord=1">http://%H%R?nord=1 google
>
>   acl GwithQuery urlpath_regex ?
>   deny_info 301:<a href="http://%H%R&nord=1">http://%H%R&nord=1 GwithQuery
>
>   http_access deny google Gquery
>   http_access deny google
>
>
> Amos
> _______________________________________________
> squid-users mailing list
> [hidden email]
> http://lists.squid-cache.org/listinfo/squid-users
>


Re: acl for redirect

talikarni
In reply to this post by Amos Jeffries
Nevermind... I found another fix within e2guardian:

etc/e2guardian/lists/urlregexplist

Added this entry:
# Disable Google SSL Search
# allows e2g to filter searches properly
"^https://www.google.[a-z]{2,6}(.*)"->"http://www.google.com/webhp?nord=1"

This means whenever google.com or www.google.com is typed in the address
bar, it loads the insecure page and allows e2guardian to properly filter
whatever search terms they type in. This does break other things, such as
Google toolbars and the search bar at the upper right of many browsers
with Google as the set search engine, but that is an issue we can live
with.




Re: acl for redirect - re Fred

FredB
In reply to this post by talikarni


Le 26/06/2015 17:38, Mike a écrit :
> Yes we already have that version installed, that is the version having
> these issues.
>
> [root@Server1 ~]# e2guardian -v
> e2guardian 3.0.4

Stable version = 3.0.4; that is not the develop version.

Re: acl for redirect

talikarni
In reply to this post by talikarni
Scratch that (my previous email to this list): Google disabled their
insecure pages when used as part of a redirect. We as individual users
can use that URL directly in the browser (
http://www.google.com/webhp?nord=1 ), but any Google page load starts
with the secure page, causing that redirect to fail... The newer 3.1.2
e2guardian SSL MITM requires options (like a DER certificate file) that
cannot be used with thousands of existing users on our system, so Squid
may be our only option.

Another issue right now is that Google is using a "VPN-style" internal
redirect on their server, so e2guardian (shown in its log) sees the
https://www.google.com:443 CONNECT, passes TCP_TUNNEL/200
www.google.com:443 along to Squid (shown in the Squid log), and after
that it is directly between Google and the browser, not allowing
e2guardian or Squid to see any further URLs from Google (such as search
terms) for the rest of that specific session. We can click news, maps,
images, videos, and NONE of these are passed along to the proxy.

So my original question still stands: how do we set an acl for Google URLs
that references a file of blocked terms/words/phrases and denies the
request if those terms are found (like a blacklist)?

Another option I thought of: since the meta content in the page code,
including the title, is passed along, is there a way to have it scan the
header or title content as part of the acl "content scan" process?


Thanks
Mike




Re: acl for redirect

Rafael Akchurin
Hello Mike,

Maybe it is time to take a look at ICAP/eCAP protocol implementations, which target specifically this problem - filtering within the *contents* of the stream on Squid?

Best regards,
Rafael


Re: acl for redirect

Marcus Kool
In reply to this post by talikarni
I suggest reading this:
https://support.google.com/websearch/answer/186669

and look at option 3 of section 'Keep SafeSearch turned on for your network'

Marcus



Re: acl for redirect

talikarni
In reply to this post by Rafael Akchurin
Rafael, we're trying to keep the setups lean and primarily just deal
with Google and YouTube, not all websites. ICAP processes add a whole
new layer of complexity and usually cover all websites, not just the
few.

On 6/30/2015 16:17 PM, Rafael Akchurin wrote:
> Hello Mike,
>
> May be it is time to take a look at ICAP/eCAP protocol implementations which target specifically this problem - filtering within the *contents* of the stream on Squid?
>
> Best regards,
> Rafael

Marcus,

This is multiple servers used for thousands of customers across North
America, not an office, so changing from a proxy to a DNS server is not
an option, since we would also be required to change DNS settings for
all several thousand of our customers.

On 6/30/2015 17:30 PM, Marcus Kool wrote:
> I suggest to read this:
> https://support.google.com/websearch/answer/186669
>
> and look at option 3 of section 'Keep SafeSearch turned on for your
> network'
>
> Marcus

Such a pain; there is no reason our everyday searches should be
encrypted.


Mike



Re: acl for redirect

Marcus Kool
The article does not say to change from a proxy to a DNS server.
Instead, it says to add an entry for google to your own DNS server (the one that Squid uses) and continue to use your proxy.

Marcus

> Marcus,
>
> This is multiple servers used for thousands of customers across North America, not an office, so changing from a proxy to a DNS server is not an option, since we would also be required to change all
> several thousand of our customers DNS settings.
>
> On 6/30/2015 17:30 PM, Marcus Kool wrote:
>> I suggest to read this:
>> https://support.google.com/websearch/answer/186669
>>
>> and look at option 3 of section 'Keep SafeSearch turned on for your network'
>>
>> Marcus

Re: acl for redirect

talikarni
This is a proxy server, not a DNS server, and it does not connect to a DNS
server that we have any control over... The primary/secondary DNS is
handled through the primary host (Cox) for all of our servers, so we do
not want to alter it for all several hundred servers, just these 4
(maybe 6).
I was originally thinking of modifying resolv.conf, but again that is the
internal DNS used by the server itself. The users will have their own
DNS settings, causing it either to ignore our settings or to go right back
to the "Website cannot be displayed" errors due to the DNS loop.

So finding a way to redirect in Squid seems to be the better route for us,
since DNS is not an option...
Essentially www.google.com --> forcesafesearch.google.com

Mike
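
(One Squid-only way to get that mapping, sketched as an assumption rather
than a tested recipe: give Squid its own hosts file so that it, and only it,
resolves www.google.com to the SafeSearch address. This only affects
requests where Squid does the name lookup, i.e. forward-proxied traffic;
the address shown is illustrative and should be obtained by resolving
forcesafesearch.google.com and kept current:)

 # /etc/squid/hosts.safesearch would contain a line such as:
 #   216.239.38.120   www.google.com
 # then point Squid at it instead of the default /etc/hosts:
 hosts_file /etc/squid/hosts.safesearch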



Re: acl for redirect

Stuart Henderson
On 2015-07-01, Mike <[hidden email]> wrote:
> This is a proxy server, not a DNS server, and does not connect to a DNS
> server that we have any control over... The primary/secondary DNS is
> handled through the primary host (Cox) for all of our servers so we do
> not want to alter it for all several hundred servers, just these 4
> (maybe 6).
> I was originally thinking of modifying the resolv.conf but again that is
> internal DNS used by the server itself. The users will have their own
> DNS settings causing it to either ignore our settings, or right back to
> the "Website cannot be displayed" errors due to the DNS loop.

resolv.conf would work, or you can use dns_nameservers in squid.conf and
point just squid (if you want) to a private resolver configured to hand
out the forcesafesearch address.

When a proxy is used, the client defers name resolution to the proxy; you
don't need to change DNS on client machines to do this.
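
(A sketch of the squid.conf side of that approach; 192.0.2.53 is a
placeholder for a private resolver configured to answer www.google.com
queries with the forcesafesearch address:)

 # make Squid ignore resolv.conf and query only this resolver
 dns_nameservers 192.0.2.53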


Re: acl for redirect

Rafael Akchurin
In reply to this post by talikarni
Hello Mike,

Access to ICAP is controlled with the same kind of acls as access to anything else. Something like:

icap_enable on
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
acl target_domains dstdomain "/path/to/target/domains/list"
adaptation_access qlproxy1 allow target_domains
adaptation_access qlproxy2 allow target_domains
adaptation_access qlproxy1 deny all
adaptation_access qlproxy2 deny all

will forward *only* requests/responses to those domain names specified in /path/to/target/domains/list to ICAP REQMOD and RESPMOD services.
All other connections are not forwarded to ICAP.

Raf


Re: acl for redirect

talikarni
In reply to this post by Stuart Henderson
We have a DNS guru on staff, and editing resolv.conf in this manner does
not work (we tested it to make sure). Looks like we will use an older
desktop to set up a basic DNS server and then point Squid at it for the
redirect.



Mike


