SSL Bump Failures with Google and Wikipedia


SSL Bump Failures with Google and Wikipedia

Jeffrey Merkey
Hello All,

I have been working with Squid and ICAP, and I have been running into
problems with content cached from Google and Wikipedia. Some HTTPS
sites, such as CentOS.org, work perfectly with SSL bumping: I get the
decrypted content as HTML and it's readable. Other sites, such as
Google and Wikipedia, return what looks like encrypted traffic, or
perhaps MIME-encoded data; I am not sure which.

Are there cases where Squid will default to direct mode and not
decrypt the traffic?  I am using the latest Squid, 3.5.27.  I would
really like to get this working with Google and Wikipedia.  I
reviewed the page source in the browser's viewer, and it looks
nothing like the data I am getting via the ICAP server.

Any assistance would be greatly appreciated.

The config I am using is:

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl localnet src 127.0.0.1
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
#http_port 3128

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache/squid

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem
http_port 3129

# SSL Bump Config
always_direct allow all
ssl_bump server-first all
sslproxy_cert_error deny all
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslcrtd_children 8 startup=1 idle=1

# For squid 3.5.x
#sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/ssl_db -M 4MB

# For squid 4.x
# sslcrtd_program /usr/local/squid/libexec/security_file_certgen -s /var/lib/ssl_db -M 4MB

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_avi_req reqmod_precache icap://127.0.0.1:1344/request bypass=1
adaptation_access service_avi_req allow all
icap_service service_avi_resp respmod_precache icap://127.0.0.1:1344/cherokee bypass=0
adaptation_access service_avi_resp allow all

Jeff

Re: SSL Bump Failures with Google and Wikipedia

Eliezer Croitoru
Hey Jeffrey,

What happens when you disable the following ICAP service like this:
icap_service service_avi_resp respmod_precache icap://127.0.0.1:1344/cherokee bypass=0
adaptation_access service_avi_resp deny all

Is it still the same?
What I suspect is that the requests advertise that they accept gzip-compressed objects, and the ICAP service is not gunzipping them, which results in what you see.
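
If that is the case, the service needs to inflate the body before scanning it. A minimal sketch with zlib (my assumption about how you would wire it in; a real service must stream the data and grow the output buffer):

    #include <zlib.h>
    #include <string.h>

    /* Inflate a complete gzip-encoded buffer into out. Returns 0 on success. */
    static int gunzip_buf(const unsigned char *in, size_t in_len,
                          unsigned char *out, size_t *out_len)
    {
        z_stream zs;
        memset(&zs, 0, sizeof(zs));
        /* 16 + MAX_WBITS tells zlib to expect a gzip header, not raw deflate */
        if (inflateInit2(&zs, 16 + MAX_WBITS) != Z_OK)
            return -1;

        zs.next_in   = (unsigned char *)in;
        zs.avail_in  = (uInt)in_len;
        zs.next_out  = out;
        zs.avail_out = (uInt)*out_len;

        int rc = inflate(&zs, Z_FINISH);
        *out_len = zs.total_out;
        inflateEnd(&zs);
        return (rc == Z_STREAM_END) ? 0 : -1;
    }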

To make sure that Squid is not at fault here, try disabling both ICAP services, then add them back one at a time and see which corner of this triangle is giving you trouble.
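
For example, a sketch of that sequence in squid.conf, using your service names (apply one step at a time, reconfiguring between tests):

    # Step 1: deny both services and re-test
    adaptation_access service_avi_req deny all
    adaptation_access service_avi_resp deny all

    # Step 2: allow only the REQMOD service and re-test
    adaptation_access service_avi_req allow all
    adaptation_access service_avi_resp deny all

    # Step 3: allow only the RESPMOD service and re-test
    adaptation_access service_avi_req deny all
    adaptation_access service_avi_resp allow all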
I enhanced an ICAP library written in Go, at:
https://github.com/elico/icap

And I have a couple of examples of how to work with HTTP requests and responses at:
https://github.com/andybalholm/redwood/
https://github.com/andybalholm/redwood/search?utf8=%E2%9C%93&q=gzip&type=

Let me know if you need help finding out the issue.

All The Bests,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]




Re: SSL Bump Failures with Google and Wikipedia

Jeffrey Merkey
Eliezer,

Well, you certainly hit the nail on the head.  I added the following
code to check the content being sent to the ICAP server from Squid,
and here is what I found when I checked the headers being sent from
the remote web server:

Code added to c-icap to check the content type and encoding received
by the ICAP server:

    /* Inspect the HTTP response headers that Squid handed to the service */
    ci_headers_list_t *hdrs = ci_http_response_headers(req);
    const char *content_type, *content_encoding;

    content_type = ci_headers_value(hdrs, "Content-Type");
    if (content_type)
        ci_debug_printf(1, "srv_cherokee:  content-type: %s\n",
                        content_type);

    content_encoding = ci_headers_value(hdrs, "Content-Encoding");
    if (content_encoding)
        ci_debug_printf(1, "srv_cherokee:  content-encoding: %s\n",
                        content_encoding);

And the output for scanned pages sent over from Squid:

srv_cherokee:  init request 0x7f3dbc008eb0
pool hits:1 allocations: 1
Allocating from objects pool object 5
pool hits:1 allocations: 1
Geting buffer from pool 4096:1
Requested service: cherokee
Read preview data if there are and process request
srv_cherokee:  content-type: text/html; charset=utf-8
srv_cherokee:  content-encoding: gzip         <-- As you stated, I am getting gzipped data
srv_cherokee:  we expect to read :-1 body data
Allow 204...
Preview handler return allow 204 response
srv_cherokee:  release request 0x7f3dbc008eb0
Store buffer to long pool 4096:1
Storing to objects pool object 5
Log request to access log file /var/log/i-cap_access.log


Wikipedia, at https://en.wikipedia.org/wiki/HTTP_compression, describes
the process as:

" ...
   Compression scheme negotiation
   In most cases, excluding the SDCH, the negotiation is done in two
   steps, described in RFC 2616:

   1. The web client advertises which compression schemes it supports by
   including a list of tokens in the HTTP request. For Content-Encoding,
   the list is in a field called Accept-Encoding; for Transfer-Encoding,
   the field is called TE.

   GET /encrypted-area HTTP/1.1
   Host: www.example.com
   Accept-Encoding: gzip, deflate

   2. If the server supports one or more compression schemes, the
   outgoing data may be compressed by one or more methods supported by
   both parties. If this is the case, the server will add a
   Content-Encoding or Transfer-Encoding field in the HTTP response with
   the used schemes, separated by commas.

   HTTP/1.1 200 OK
   Date: mon, 26 June 2016 22:38:34 GMT
   Server: Apache/1.3.3.7 (Unix)  (Red-Hat/Linux)
   Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
   Accept-Ranges: bytes
   Content-Length: 438
   Connection: close
   Content-Type: text/html; charset=UTF-8
   Content-Encoding: gzip

   The web server is by no means obligated to use any compression method –
   this depends on the internal settings of the web server and also may
   depend on the internal architecture of the website in question.

   In case of SDCH a dictionary negotiation is also required, which may
   involve additional steps, like downloading a proper dictionary from .
.."


So, it looks like it is a feature of the browser.  Is it possible to
have Squid gunzip the data, or to configure the browser not to send
"Accept-Encoding: gzip, deflate" in the request, so the remote server
is not told to gzip the data?
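
One possibility on the Squid side would be a minimal squid.conf sketch like the following (my assumption; note that request_header_access is only available when Squid is built with --enable-http-violations):

    # Strip Accept-Encoding from outgoing requests so origin servers
    # reply with uncompressed bodies (an HTTP violation).
    request_header_access Accept-Encoding deny all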

Thanks

Jeff

Re: SSL Bump Failures with Google and Wikipedia

Jeffrey Merkey

I located this online for disabling gzip encoding on the browser side
in Firefox:

http://forgetmenotes.blogspot.com/2009/05/how-to-disable-gzip-compression-in.html
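
In case that link goes stale, the relevant about:config preferences appear to be the following (names as of Firefox around 2017; verify them in your build):

    network.http.accept-encoding           (blank it to disable for http://)
    network.http.accept-encoding.secure    (same, for https:// requests)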

Jeff

Re: SSL Bump Failures with Google and Wikipedia

Rafael Akchurin
Hello Jeff,

Do not forget that Google and YouTube now use Brotli encoding extensively, not only gzip.

Best regards,
Rafael Akchurin


Re: SSL Bump Failures with Google and Wikipedia

Jeffrey Merkey

Resetting the browser settings to disable gzip worked; I can see the
pages now.  The best way to implement this is to intercept the
Accept-Encoding header and either remove it or modify it in ICAP
before the request is sent to the remote server, as in the sketch below.
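
A minimal sketch of that interception as a c-icap REQMOD handler fragment (illustrative only; it assumes the usual c-icap service entry points and the ci_http_request_headers/ci_headers_remove calls from the c-icap API):

    #include <c_icap/c-icap.h>
    #include <c_icap/simple_api.h>
    #include <c_icap/header.h>

    /* Called from a REQMOD service's preview/end-of-headers handler. */
    static int strip_accept_encoding(ci_request_t *req)
    {
        ci_headers_list_t *req_hdrs = ci_http_request_headers(req);
        if (!req_hdrs)
            return CI_ERROR;

        /* With no Accept-Encoding the origin server should reply with an
           identity (uncompressed) body, which the RESPMOD side can scan. */
        ci_headers_remove(req_hdrs, "Accept-Encoding");
        return CI_OK;
    }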

Jeff

Re: SSL Bump Failures with Google and Wikipedia

Eliezer Croitoru
Hey Rafael,

Where have you seen the details about brotli being used?

Thanks,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]



-----Original Message-----
From: Rafael Akchurin [mailto:[hidden email]]
Sent: Sunday, October 1, 2017 01:16
To: Jeffrey Merkey <[hidden email]>
Cc: Eliezer Croitoru <[hidden email]>; squid-users
<[hidden email]>
Subject: Re: [squid-users] SSL Bump Failures with Google and Wikipedia

Hello Jeff,

Do not forget Google and YouTube are now using brotli encoding extensively,
not only gzip.

Best regards,
Rafael Akchurin

> Op 30 sep. 2017 om 23:49 heeft Jeffrey Merkey <[hidden email]> het
volgende geschreven:

>
>> On 9/30/17, Eliezer Croitoru <[hidden email]> wrote:
>> Hey Jeffrey,
>>
>> What happens when you disable the next icap service this way:
>> icap_service service_avi_resp respmod_precache
>> icap://127.0.0.1:1344/cherokee bypass=0 adaptation_access
>> service_avi_resp deny all
>>
>> Is it still the same?
>> What I suspect is that the requests are defined to accept gzip
>> compressed objects and the icap service is not "gnuzip" them which
>> results in what you see.
>>
>> To make sure that squid is not at fault here try to disable both icap
>> services and then add then one at a time and see which of this
>> triangle is giving you trouble.
>> I enhanced an ICAP library which is written in GoLang at:
>> https://github.com/elico/icap
>>
>> And I have couple examples on how to work with http requests and
>> responses
>> at:
>> https://github.com/andybalholm/redwood/
>> https://github.com/andybalholm/redwood/search?utf8=%E2%9C%93&q=gzip&t
>> ype=
>>
>> Let me know if you need help finding out the issue.
>>
>> All The Bests,
>> Eliezer
>>
>> ----
>> Eliezer Croitoru
>> Linux System Administrator
>> Mobile: +972-5-28704261
>> Email: [hidden email]
>>
>>
>>
>> -----Original Message-----
>> From: squid-users [mailto:[hidden email]]
>> On Behalf Of Jeffrey Merkey
>> Sent: Saturday, September 30, 2017 23:28
>> To: squid-users <[hidden email]>
>> Subject: [squid-users] SSL Bump Failures with Google and Wikipedia
>>
>> [snip: original message and recommended-minimum config quoted in full
>> above; the ssl-bump and icap parts follow]
>>
>> http_port 3128 ssl-bump generate-host-certificates=on
>> dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem
>> http_port 3129
>>
>> # SSL Bump Config
>> always_direct allow all
>> ssl_bump server-first all
>> sslproxy_cert_error deny all
>> sslproxy_flags DONT_VERIFY_PEER
>> sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/ssl_db -M 4MB
>> sslcrtd_children 8 startup=1 idle=1
>>
>> # For squid 3.5.x
>> #sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/ssl_db -M 4MB
>>
>> # For squid 4.x
>> #sslcrtd_program /usr/local/squid/libexec/security_file_certgen -s /var/lib/ssl_db -M 4MB
>>
>> icap_enable on
>> icap_send_client_ip on
>> icap_send_client_username on
>> icap_client_username_header X-Authenticated-User
>> icap_preview_enable on
>> icap_preview_size 1024
>> icap_service service_avi_req reqmod_precache icap://127.0.0.1:1344/request bypass=1
>> adaptation_access service_avi_req allow all
>>
>> icap_service service_avi_resp respmod_precache icap://127.0.0.1:1344/cherokee bypass=0
>> adaptation_access service_avi_resp allow all
>>
>> Jeff
>> _______________________________________________
>> squid-users mailing list
>> [hidden email]
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>>
>
> Eliezer,
>
> Well, you certainly hit the nail on the head.  I added the following
> code to check the content being sent to the icap server from squid,
> and here is what I found when I checked the headers sent from the
> remote web server:
>
> Code to check for content type and encoding received by the icap
> server added to c-icap:
>
>    /* local declarations, not shown in the original snippet
>       (assumed c-icap types): */
>    ci_headers_list_t *hdrs;
>    const char *content_type, *content_encoding;
>
>    hdrs = ci_http_response_headers(req);
>    content_type = ci_headers_value(hdrs, "Content-Type");
>    if (content_type)
>       ci_debug_printf(1,"srv_cherokee:  content-type: %s\n",
>                       content_type);
>
>    content_encoding = ci_headers_value(hdrs, "Content-Encoding");
>    if (content_encoding)
>       ci_debug_printf(1,"srv_cherokee:  content-encoding: %s\n",
>                       content_encoding);
>
> And the output from scanned pages sent over from squid:
>
> srv_cherokee:  init request 0x7f3dbc008eb0 pool hits:1 allocations: 1
> Allocating from objects pool object 5 pool hits:1 allocations: 1
> Geting buffer from pool 4096:1 Requested service: cherokee Read
> preview data if there are and process request
> srv_cherokee:  content-type: text/html; charset=utf-8
> srv_cherokee:  content-encoding: gzip         <-- As you stated, I am
> getting gzipped data
> srv_cherokee:  we expect to read :-1 body data Allow 204...
> Preview handler return allow 204 response
> srv_cherokee:  release request 0x7f3dbc008eb0 Store buffer to long
> pool 4096:1 Storing to objects pool object 5 Log request to access log
> file /var/log/i-cap_access.log
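>
> For what it's worth, here is a minimal zlib sketch of how the icap
> service could inflate a gzip body once it has buffered it (plain
> zlib, not a c-icap helper; the function and buffer handling are
> hypothetical):
>
> #include <string.h>
> #include <zlib.h>
>
> /* Inflate a gzip-encoded buffer into out.
>  * Returns the number of bytes written, or -1 on error. */
> static int gunzip_buf(const unsigned char *in, size_t in_len,
>                       unsigned char *out, size_t out_len)
> {
>     z_stream strm;
>     int rc, written;
>     memset(&strm, 0, sizeof(strm));
>     /* 16 + MAX_WBITS tells zlib to expect a gzip header/trailer */
>     if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)
>         return -1;
>     strm.next_in = (unsigned char *)in;
>     strm.avail_in = (uInt)in_len;
>     strm.next_out = out;
>     strm.avail_out = (uInt)out_len;
>     rc = inflate(&strm, Z_FINISH);
>     written = (int)(out_len - strm.avail_out);
>     inflateEnd(&strm);
>     return (rc == Z_STREAM_END) ? written : -1;
> }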
>
>
> Wikipedia  at https://en.wikipedia.org/wiki/HTTP_compression describes
> the process as:
>
> " ...
>   Compression scheme negotiation[edit]
>   In most cases, excluding the SDCH, the negotiation is done in two
> steps, described in
>   RFC 2616:
>
>   1. The web client advertises which compression schemes it supports
>   by including a list of tokens in the HTTP request. For
>   Content-Encoding, the list is in a field called Accept-Encoding;
>   for Transfer-Encoding, the field is called TE.
>
>   GET /encrypted-area HTTP/1.1
>   Host: www.example.com
>   Accept-Encoding: gzip, deflate
>
>   2. If the server supports one or more compression schemes, the
> outgoing data may be
>   compressed by one or more methods supported by both parties. If this
> is the case, the
>   server will add a Content-Encoding or Transfer-Encoding field in the
> HTTP response with
>   the used schemes, separated by commas.
>
>   HTTP/1.1 200 OK
>   Date: mon, 26 June 2016 22:38:34 GMT
>   Server: Apache/1.3.3.7 (Unix)  (Red-Hat/Linux)
>   Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
>   Accept-Ranges: bytes
>   Content-Length: 438
>   Connection: close
>   Content-Type: text/html; charset=UTF-8
>   Content-Encoding: gzip
>
>   The web server is by no means obligated to use any compression
>   method - this depends on the internal settings of the web server and
>   also may depend on the internal architecture of the website in question.
>
>   In case of SDCH a dictionary negotiation is also required, which may
> involve additional
>   steps, like downloading a proper dictionary from .
> .."
>
>
> So, it looks like it is a feature of the browser.  Is it possible to
> have squid gunzip the data, or to remove the "Accept-Encoding: gzip,
> deflate" header from the request sent to the remote server, so it is
> not told to gzip the data?
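>
> (Something like this in squid.conf looks like it might do the header
> removal - a sketch, assuming request_header_access is available in
> this build; note that header mangling is classed as an HTTP
> violation:)
>
> # Stop advertising compression support to origin servers:
> request_header_access Accept-Encoding deny all
> # Or, instead of stripping the header entirely, offer only gzip:
> #request_header_replace Accept-Encoding gzip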
>
> Thanks
>
> Jeff
> _______________________________________________
> squid-users mailing list
> [hidden email]
> http://lists.squid-cache.org/listinfo/squid-users

_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: SSL Bump Failures with Google and Wikipedia

Yuri Voinov
I guess in HTTP headers. =-O :-D


On 01.10.2017 7:05, Eliezer Croitoru wrote:

> Hey Rafael,
>
> Where have you seen the details about brotli being used?
>
> Thanks,
> Eliezer
>
> ----
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: [hidden email]
>
>
>
> -----Original Message-----
> From: Rafael Akchurin [mailto:[hidden email]]
> Sent: Sunday, October 1, 2017 01:16
> To: Jeffrey Merkey <[hidden email]>
> Cc: Eliezer Croitoru <[hidden email]>; squid-users
> <[hidden email]>
> Subject: Re: [squid-users] SSL Bump Failures with Google and Wikipedia
>
> Hello Jeff,
>
> Do not forget Google and YouTube are now using brotli encoding extensively,
> not only gzip.
>
> Best regards,
> Rafael Akchurin
>
>> On 30 Sep 2017 at 23:49, Jeffrey Merkey <[hidden email]> wrote:
>>> [snip: earlier messages quoted in full above]


_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users


Re: SSL Bump Failures with Google and Wikipedia

Eliezer Croitoru
Hey Yuri and Rafael,

I have tried to find a site which uses brotli compression, but have yet to find one.
Also, I have not seen any brotli request headers in Firefox or Chrome; maybe there is a specific browser which uses it?
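
One way to check (a sketch - assuming these logformat header codes are
available in this build) is to log the negotiated encodings:

# Log Accept-Encoding from the client and Content-Encoding from the server:
logformat encdebug %ts.%03tu %>a %rm %ru "%{Accept-Encoding}>h" "%{Content-Encoding}<h"
access_log /var/log/squid/encoding.log encdebug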

Thanks,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]


-----Original Message-----
From: squid-users [mailto:[hidden email]] On Behalf Of Yuri
Sent: Sunday, October 1, 2017 04:08
To: [hidden email]
Subject: Re: [squid-users] SSL Bump Failures with Google and Wikipedia

I guess in HTTP headers. =-O :-D


On 01.10.2017 7:05, Eliezer Croitoru wrote:

> [snip: earlier messages quoted in full above]



_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: SSL Bump Failures with Google and Wikipedia

Rafael Akchurin
Hello Eliezer,

From desktop Firefox/Chrome, go to YouTube. It will be br encoded.

Best regards,
Rafael Akchurin

> On 6 Oct 2017 at 02:43, Eliezer Croitoru <[hidden email]> wrote:
>
> Hey Yuri and Rafael,
>
> I have tried to find a site which uses brotli compression, but have yet to find one.
> Also, I have not seen any brotli request headers in Firefox or Chrome; maybe there is a specific browser which uses it?
>
> [snip: signature and earlier messages quoted in full above]
_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: SSL Bump Failures with Google and Wikipedia

Amos Jeffries
On 06/10/17 18:24, Rafael Akchurin wrote:
> Hello Eliezer,
>
>  From desktop ff/chrome goto youtube. It will be br encoded.
>
> Best regards,
> Rafael Akchurin
>


Also, from the discussions in the IETF I get the impression that:

* Firefox support is still only in their experimental version(s),
maybe even limited to some dev project builds.

* Chrome support is a bit further on, but only just starting to go public.

* Services using it are limited to a few IETF HTTPbis and QUIC working
group participants that are interested in it.
  - Google is the biggest player involved there and is still only
experimentally enabling it in its services. So YMMV with any of their
services, depending on where you are in the world and how you are
accessing them.


In general you can expect the following content encodings to be seen in
real-world HTTP today:

  aes / aes128
  br / bro
  bz2
  chunked
  compress
  deflate / x-deflate
  diff
  gzip / x-gzip
  identity
  sdch
  zip / x-zip / x-lz

Transfer-Encoding headers can technically use all of the above too,
though in practice I have only seen or heard of chunked, aes and gzip
being used for transfer encodings, and I have not heard of chunked
being used in Content-Encoding.
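
For an adaptation service that only knows how to decode a couple of
these, a small check along the following lines (a hypothetical C
helper, in the style of the c-icap snippet earlier in this thread) can
decide whether to touch a response or return it unmodified (204):

#include <string.h>
#include <strings.h>

/* Return 1 if every token in a comma-separated Content-Encoding list
 * is one we can decode (identity/gzip), 0 otherwise. */
static int encoding_supported(const char *value)
{
    char buf[256];
    char *tok;
    strncpy(buf, value, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    for (tok = strtok(buf, ", \t"); tok; tok = strtok(NULL, ", \t")) {
        if (strcasecmp(tok, "identity") != 0 &&
            strcasecmp(tok, "gzip") != 0 &&
            strcasecmp(tok, "x-gzip") != 0)
            return 0;  /* e.g. br, sdch, deflate: pass through untouched */
    }
    return 1;
}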

Amos
_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users