Squid for Windows: Very slow downloads of large files through squid with normal uploads


Squid for Windows: Very slow downloads of large files through squid with normal uploads

Keith Hartley

I am using Squid 3.5 for Windows as a transparent proxy to provide internet access to 7 servers in a secure environment that otherwise has no internet access. I have two squids running behind a load balancer. Each runs Server 2016 Core with 2 Xeon processors: either a Haswell-generation part with 1:1 physical-to-virtual processor mapping, or a hyper-threaded Broadwell-generation part with 1:1 logical-to-virtual processor mapping, depending on how they are provisioned when they start.

 

Doing a bandwidth test directly in the VM, I get internet throughput of 800-1,200 Mbps.

 

Doing a file copy to and from the VM, I get 1,200 Mbps of LAN throughput.

 

In proxied uploads I have observed speeds as high as 120 Mbps, which is more than enough for my needs; the bottleneck there is likely the backup software rather than squid. I am not worried about upload performance where it is now: even 20-30 Mbps would be adequate for what I need.

 

Downloads, however, are very slow. Small files do not seem to be affected. Using the test at thinkbroadband.com/download, files up to 20 MB download at a reasonable 20-30 Mbps, but at 50 MB it slows to about 17 Mbps, and when I download AD Connect from Microsoft (about 80 MB) I can see it start at about 30 Mbps but eventually drop to about 115 kbps and level off. When I put a direct internet IP on the test server that normally proxies through squid, I can download the file at several hundred Mbps. When I download the same file on the squid server itself I can't tell exactly what throughput I was getting, but the 80 MB file finished within 5 seconds.

 

On both squid servers, other than while the servers were booting, processor activity has not exceeded 9% in the last 7 days and usually sits below 2%. Memory usage has not exceeded 2 GB, leaving 2 GB free.

 

I am using OpenDNS as my DNS source, and have tried changing DNS to Level 3, but it made no difference to performance.

 

I suspect this may be squid trying to cache something, but I have tried to turn all caching off.

 

My cache.log doesn’t really have anything interesting in it that I can see. It’s the same ~30 or so log entries each time the service starts, and that is about it. Here it is:

 

2018/03/22 09:47:27 kid1| Set Current Directory to /var/cache/squid

2018/03/22 09:47:27 kid1| Starting Squid Cache version 3.5.27 for x86_64-unknown-cygwin...

2018/03/22 09:47:27 kid1| Service Name: squid

2018/03/22 09:47:27 kid1| Process ID 1164

2018/03/22 09:47:27 kid1| Process Roles: worker

2018/03/22 09:47:27 kid1| With 3200 file descriptors available

2018/03/22 09:47:27 kid1| Initializing IP Cache...

2018/03/22 09:47:27 kid1| parseEtcHosts: /etc/hosts: (2) No such file or directory

2018/03/22 09:47:27 kid1| DNS Socket created at [::], FD 5

2018/03/22 09:47:27 kid1| DNS Socket created at 0.0.0.0, FD 6

2018/03/22 09:47:27 kid1| Adding nameserver 208.67.222.222 from squid.conf

2018/03/22 09:47:27 kid1| Adding nameserver 208.67.220.220 from squid.conf

2018/03/22 09:47:27 kid1| Logfile: opening log daemon:/var/log/squid/access.log

2018/03/22 09:47:27 kid1| Logfile Daemon: opening log /var/log/squid/access.log

2018/03/22 09:47:27 kid1| WARNING: no_suid: setuid(0): (22) Invalid argument

2018/03/22 09:47:27 kid1| Store logging disabled

2018/03/22 09:47:27 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 objects

2018/03/22 09:47:27 kid1| Target number of buckets: 1008

2018/03/22 09:47:27 kid1| Using 8192 Store buckets

2018/03/22 09:47:27 kid1| Max Mem  size: 262144 KB

2018/03/22 09:47:27 kid1| Max Swap size: 0 KB

2018/03/22 09:47:27 kid1| Using Least Load store dir selection

2018/03/22 09:47:27 kid1| Set Current Directory to /var/cache/squid

2018/03/22 09:47:27 kid1| Finished loading MIME types and icons.

2018/03/22 09:47:27 kid1| HTCP Disabled.

2018/03/22 09:47:27 kid1| Squid plugin modules loaded: 0

2018/03/22 09:47:27 kid1| Adaptation support is off.

2018/03/22 09:47:27 kid1| Accepting HTTP Socket connections at local=[::]:3128 remote=[::] FD 10 flags=9

2018/03/22 09:47:28 kid1| storeLateRelease: released 0 objects

 

 

And this is my squid.conf:

 

#

# Recommended minimum configuration:

#

 

# Example rule allowing access from your local networks.

# Adapt to list your (internal) IP networks from where browsing

# should be allowed

 

#acl localnet src 10.0.0.0/8           # RFC1918 possible internal network

#acl localnet src 172.16.0.0/12    # RFC1918 possible internal network

#acl localnet src 192.168.0.0/16  # RFC1918 possible internal network

acl localnet src fc00::/7       # RFC 4193 local private network range

acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl WSUS src 192.168.225.4/32

acl BACKUP src 192.168.225.11/32

acl ADFS src 192.168.224.7/32

acl ADFS src 192.168.228.8/32

acl DEVWEB src 192.168.226.6/32

acl UATWEB src 192.168.226.13/32

acl PRDWEB src 192.168.226.8/32

acl PRDWEB src 192.168.226.9/32

 

 

 

acl SSL_ports port 443

acl Safe_ports port 80                    # http

#acl Safe_ports port 21                  # ftp

acl Safe_ports port 443                  # https

#acl Safe_ports port 70                  # gopher

#acl Safe_ports port 210                                # wais

#acl Safe_ports port 1025-65535                # unregistered ports

#acl Safe_ports port 280                                # http-mgmt

#acl Safe_ports port 488                                # gss-http

#acl Safe_ports port 591                                # filemaker

#acl Safe_ports port 777                                # multiling http

acl CONNECT method CONNECT

 

#

# Recommended minimum Access Permission configuration:

#

 

# Only allow cachemgr access from localhost

#http_access allow localhost manager

#http_access deny manager

 

# Deny requests to certain unsafe ports

http_access deny !Safe_ports

 

# Deny CONNECT to other than secure SSL ports

http_access deny CONNECT !SSL_ports

 

# We strongly recommend the following be uncommented to protect innocent

# web applications running on the proxy server who think the only

# one who can access services on "localhost" is a local user

#http_access deny to_localhost

 

#

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

#

 

# Example rule allowing access from your local networks.

# Adapt localnet in the ACL section to list your (internal) IP networks

# from where browsing should be allowed

http_access allow localnet

http_access allow localhost

http_access allow WSUS

http_access allow ADFS

http_access allow BACKUP

http_access allow DEVWEB

http_access allow UATWEB

http_access allow PRDWEB

 

# And finally deny all other access to this proxy

http_access deny all

 

# Squid normally listens to port 3128

http_port 3128

 

# Uncomment the line below to enable disk caching - path format is /cygdrive/<full path to cache folder>, i.e.

#cache_dir aufs /cygdrive/d/squid/cache 3000 16 256

cache deny all

 

 

# Leave coredumps in the first cache dir

coredump_dir /var/cache/squid

 

# Add any of your own refresh_pattern entries above these.

refresh_pattern ^ftp:              1440  20%  10080

refresh_pattern ^gopher:           1440   0%   1440

refresh_pattern -i (/cgi-bin/|\?)     0   0%      0

refresh_pattern .                     0  20%   4320

 

dns_nameservers 208.67.222.222 208.67.220.220

 

max_filedescriptors 3200
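For context, these are directives (not present in the config above) that influence how squid streams large objects when caching is disabled; the values are illustrative assumptions, not tested recommendations:

```conf
# Sketch only - these directives are NOT in the config above.

# How far squid reads ahead of the slowest client on a reply (default 16 KB);
# a larger gap lets squid keep pulling from the origin while the client drains.
read_ahead_gap 64 KB

# With "cache deny all" in place the 256 MB default memory cache goes unused,
# so it can also be disabled explicitly.
cache_mem 0 MB
```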

 

 

 

Does anyone see anything I am missing here?

 

 

My access.log doesn't really have anything interesting in it either; it just looks like normal operation. I can attach it too if anyone wants to look at it after I redact some of the hosts.
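One way to put numbers on individual transfers is to compute per-request throughput from access.log. This is a sketch that assumes the default native log format (elapsed milliseconds in field 2, reply bytes in field 5) and the log path from the config above:

```shell
#!/bin/sh
# Print approximate throughput per request from squid's native access.log.
# Native format fields: timestamp elapsed_ms client code/status bytes method URL ...
LOG="${1:-/var/log/squid/access.log}"
awk '$2 > 0 { printf "%10.1f kB/s  %s\n", ($5 / 1024) / ($2 / 1000), $7 }' "$LOG"
```

Slow large downloads should show up as long elapsed times with low kB/s on the biggest objects.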

 

 

Keith Hartley

Network Engineer II

MCSE: Productivity, MCSA: Server 2008, 2012, Office 365 |

Certified Meraki Network Associate, Security+

Geocent, LLC

o: 504-405-3578

a: 2219 Lakeshore drive Ste 300, New Orleans, LA 70122

w: www.geocent.com| e: [hidden email]

 

   

 


Confidentiality Notice:
This email communication may contain confidential information, may be legally privileged, and is intended only for the use of the intended recipients(s) identified. Any unauthorized review, use, distribution, downloading, or copying of this communication is strictly prohibited. If you are not the intended recipient and have received this message in error, immediately notify the sender by reply email, delete the communication, and destroy all copies. Thank you.

_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: Squid for Windows: Very slow downloads of large files through squid with normal uploads

Yuri Voinov



On 22.03.2018 23:10, Keith Hartley wrote:

Does anyone see anything I am missing here?

Yes. In your almost-default configuration (is this the complete squid.conf?) the obvious things are:

a) You do not use an on-disk cache.
b) You use the default memory cache, i.e. 256 MB.
c) You cache nothing because of "cache deny all", which makes the default cache_mem useless.
d) Your configuration is technically useless: I see neither proxying parameters nor caching. Your squid is now only an additional hop for files, no more.

So squid has nothing to do here. It simply forwards the GET request to the server and, without any caching or storing, forwards the response back to the user.

Still correct?

This puts us directly at raw network I/O, without any buffering (which your squid could provide, but currently doesn't).

In your place I would start experimenting with the cache_mem parameter, of course only after removing "cache deny all".

And after some experiments, maybe, decide whether to drop the useless squid box altogether.

Seriously, what is squid's role here? Just set up a border firewall giving your servers access to the Internet. That will be enough.
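For experimenting along those lines, something like the following could replace "cache deny all" (the sizes are guesses chosen to make an ~80 MB object memory-cacheable, not tuned values):

```conf
# Remove: cache deny all

cache_mem 1024 MB

# Default is 512 KB; it must be raised before large objects can be held in cache_mem.
maximum_object_size_in_memory 100 MB
```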

 

 


-- 
"C++ seems like a language suitable for firing other people's legs."

*****************************
* C++20 : Bug to the future *
*****************************


Re: Squid for Windows: Very slow downloads of large files through squid with normal uploads

Yuri Voinov

And also: your configuration is not a transparent proxy.

a) Squid 3.5 for Windows is not built as a transparent proxy (i.e. with NAT interception support).

b) You do not have the "intercept" keyword in your configuration.

This is a simple forwarding proxy.
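To illustrate the difference (the interception line is a sketch only; it would additionally require a NAT-capable build and OS-level traffic redirection, which this build lacks):

```conf
# Explicit forward proxy - what the config above actually is:
http_port 3128

# True interception would need something like this, plus NAT redirection:
#http_port 3129 intercept
```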


-- 
"C++ seems like a language suitable for firing other people's legs."

*****************************
* C++20 : Bug to the future *
*****************************


Re: Squid for Windows: Very slow downloads of large files through squid with normal uploads

Keith Hartley
In reply to this post by Yuri Voinov

I don't need it to cache anything; the goal is not performance optimization, it is to provide restricted access to the internet. I have 1,200 Mbps of network I/O available to the squid servers and can confirm I reliably achieve at least 800 Mbps when I download something directly on the squid server. Additionally, it would be extremely rare for the same file ever to be downloaded more than once, if that ever happens at all.

 

By policy, none of the servers may have direct internet access; this protects the data contained in the environment. Only one subnet (4 host bits), where the squids are located, has internet access, and 8 of the 45 servers need restricted internet access.

 

This config is complete, at least as a base configuration. If I have time in the project I am going to add URI restrictions: the 8 servers will only need to reach about 30-40 static URIs in total, and I want to block everything else, but first I need to get the throughput up.
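That planned restriction could look roughly like this in squid.conf (the domain names and the pairing with the WSUS ACL are placeholders, not taken from the real list):

```conf
# Hypothetical allow-list sketch; real domains would come from the 30-40 URIs.
acl allowed_sites dstdomain .windowsupdate.com .microsoft.com
http_access allow WSUS allowed_sites
http_access deny all
```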

 

I have a minimum of 800 Mbps of bandwidth available to the squid servers, which I can confirm in download tests from the squids. I have 1,200 Mbps of bandwidth (these are Azure virtual machines) available in both directions between the servers that use the squids and the squids themselves.

 

However, on large files I am only getting 115 kbps sustained download speeds.

 

Now, if squid needs to buffer downloads to cache in order to perform well, I could enable caching, but I would prefer not to cache anything. I very seriously doubt I will ever download the same file twice in this environment, as the only things being downloaded are software updates centrally distributed from WSUS and antivirus definitions released about 6-10 times per day. Most of the traffic is also HTTPS, with very little HTTP.

 

Is it the case that I may see better performance if I configure it to cache files first before sending them to clients?
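For large proxied downloads the relevant knob is usually not the disk cache but squid's server-side read-ahead buffering. A hedged sketch of directives worth experimenting with (the values are starting points for testing, not recommendations from this thread):

```
# How far ahead of the slowest client squid will read from the origin
# server. The default is only 16 KB, which can throttle a fast origin
# connection; a larger gap keeps the server-side TCP window full.
read_ahead_gap 1 MB

# Memory for in-transit and hot objects (the log shows the 256 MB default).
cache_mem 1024 MB

# 'cache deny all' can stay if no caching is wanted; it does not disable
# the in-transit buffering that read_ahead_gap controls.
cache deny all
```

Since most of the traffic is HTTPS, squid is mostly relaying opaque CONNECT tunnels, so TCP socket buffer limits in the Cygwin layer are also worth examining.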

 

Keith Hartley

Network Engineer II

[hidden email]

www.geocent.com

 

From: squid-users [mailto:[hidden email]] On Behalf Of Yuri
Sent: Thursday, March 22, 2018 5:39 PM
To: [hidden email]
Subject: Re: [squid-users] Squid for windows Very slow downloads of large files through squid with normal uploads

 

 

 

On 22.03.2018 23:10, Keith Hartley wrote:

I am using squid 3.5 for Windows as a transparent proxy to provide internet access to 7 servers in a secure environment that otherwise does not have internet access. I have two squids running behind a load balancer; each one runs Server 2016 Core with 2 Xeon processors – either Haswell generation with 1:1 physical-to-virtual processor mapping, or a hyper-threaded Broadwell generation with 1:1 logical-to-virtual processor mapping, depending on how they are provisioned when they start.

 

Doing a bandwidth test directly in the VM I am able to get internet throughput of 800-1200 Mbps.

 

Doing a file copy to and from the VM I am able to get 1200 Mbps LAN throughput.

 

In proxied uploads I have observed speeds as high as 120 Mbps, which is more than enough for what I need; the bottleneck is likely in the backup software rather than squid. I am not worried about upload performance where it is now – even 20-30 Mbps would be adequate for my needs.

 

Downloads, however, are very slow. Small files do not seem to be impacted. Using the test at thinkbroadband.com/download, files up to 20 MB will download at a reasonable 20-30 Mbps, but at 50 MB it slows to about 17 Mbps, and when I download AD Connect from Microsoft, which is about 80 MB, it starts at about 30 Mbps but eventually drops to about 115 kbps and levels off. When I put an IP directly on the server I am using for testing (which normally proxies through squid), I am able to download the file at several hundred Mbps. When I download the same file on the squid server itself, I can't tell exactly what throughput I was getting, but the 80 MB file downloaded within 5 seconds.

 

On both squid servers, other than while the servers were booting, processor activity has not exceeded 9% in the last 7 days and usually sits below 2%. Memory usage has not exceeded 2 GB, leaving 2 GB free.

 

I am using OpenDNS as my DNS source and have tried changing DNS to Level3, but it made no performance difference.

 

I think this may be squid trying to cache something, but I had tried to turn all caching off.

 

My cache.log doesn’t really have anything interesting in it that I can see – it’s the same ~30 or so log entries each time the service starts, and that is about it. Here it is:

 

2018/03/22 09:47:27 kid1| Set Current Directory to /var/cache/squid

2018/03/22 09:47:27 kid1| Starting Squid Cache version 3.5.27 for x86_64-unknown-cygwin...

2018/03/22 09:47:27 kid1| Service Name: squid

2018/03/22 09:47:27 kid1| Process ID 1164

2018/03/22 09:47:27 kid1| Process Roles: worker

2018/03/22 09:47:27 kid1| With 3200 file descriptors available

2018/03/22 09:47:27 kid1| Initializing IP Cache...

2018/03/22 09:47:27 kid1| parseEtcHosts: /etc/hosts: (2) No such file or directory

2018/03/22 09:47:27 kid1| DNS Socket created at [::], FD 5

2018/03/22 09:47:27 kid1| DNS Socket created at 0.0.0.0, FD 6

2018/03/22 09:47:27 kid1| Adding nameserver 208.67.222.222 from squid.conf

2018/03/22 09:47:27 kid1| Adding nameserver 208.67.220.220 from squid.conf

2018/03/22 09:47:27 kid1| Logfile: opening log daemon:/var/log/squid/access.log

2018/03/22 09:47:27 kid1| Logfile Daemon: opening log /var/log/squid/access.log

2018/03/22 09:47:27 kid1| WARNING: no_suid: setuid(0): (22) Invalid argument

2018/03/22 09:47:27 kid1| Store logging disabled

2018/03/22 09:47:27 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 objects

2018/03/22 09:47:27 kid1| Target number of buckets: 1008

2018/03/22 09:47:27 kid1| Using 8192 Store buckets

2018/03/22 09:47:27 kid1| Max Mem  size: 262144 KB

2018/03/22 09:47:27 kid1| Max Swap size: 0 KB

2018/03/22 09:47:27 kid1| Using Least Load store dir selection

2018/03/22 09:47:27 kid1| Set Current Directory to /var/cache/squid

2018/03/22 09:47:27 kid1| Finished loading MIME types and icons.

2018/03/22 09:47:27 kid1| HTCP Disabled.

2018/03/22 09:47:27 kid1| Squid plugin modules loaded: 0

2018/03/22 09:47:27 kid1| Adaptation support is off.

2018/03/22 09:47:27 kid1| Accepting HTTP Socket connections at local=[::]:3128 remote=[::] FD 10 flags=9

2018/03/22 09:47:28 kid1| storeLateRelease: released 0 objects

 

 

And this is my squid.conf:

 

#

# Recommended minimum configuration:

#

 

# Example rule allowing access from your local networks.

# Adapt to list your (internal) IP networks from where browsing

# should be allowed

 

#acl localnet src 10.0.0.0/8           # RFC1918 possible internal network

#acl localnet src 172.16.0.0/12    # RFC1918 possible internal network

#acl localnet src 192.168.0.0/16  # RFC1918 possible internal network

acl localnet src fc00::/7       # RFC 4193 local private network range

acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl WSUS src 192.168.225.4/32

acl BACKUP src 192.168.225.11/32

acl ADFS src 192.168.224.7/32

acl ADFS src 192.168.228.8/32

acl DEVWEB src 192.168.226.6/32

acl UATWEB src 192.168.226.13/32

acl PRDWEB src 192.168.226.8/32

acl PRDWEB src 192.168.226.9/32

 

 

 

acl SSL_ports port 443

acl Safe_ports port 80                    # http

#acl Safe_ports port 21                  # ftp

acl Safe_ports port 443                  # https

#acl Safe_ports port 70                  # gopher

#acl Safe_ports port 210                                # wais

#acl Safe_ports port 1025-65535                # unregistered ports

#acl Safe_ports port 280                                # http-mgmt

#acl Safe_ports port 488                                # gss-http

#acl Safe_ports port 591                                # filemaker

#acl Safe_ports port 777                                # multiling http

acl CONNECT method CONNECT

 

#

# Recommended minimum Access Permission configuration:

#

 

# Only allow cachemgr access from localhost

#http_access allow localhost manager

#http_access deny manager

 

# Deny requests to certain unsafe ports

http_access deny !Safe_ports

 

# Deny CONNECT to other than secure SSL ports

http_access deny CONNECT !SSL_ports

 

# We strongly recommend the following be uncommented to protect innocent

# web applications running on the proxy server who think the only

# one who can access services on "localhost" is a local user

#http_access deny to_localhost

 

#

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

#

 

# Example rule allowing access from your local networks.

# Adapt localnet in the ACL section to list your (internal) IP networks

# from where browsing should be allowed

http_access allow localnet

http_access allow localhost

http_access allow WSUS

http_access allow ADFS

http_access allow BACKUP

http_access allow DEVWEB

http_access allow UATWEB

http_access allow PRDWEB

 

# And finally deny all other access to this proxy

http_access deny all

 

# Squid normally listens to port 3128

http_port 3128

 

# Uncomment the line below to enable disk caching - path format is /cygdrive/<full path to cache folder>, i.e.

#cache_dir aufs /cygdrive/d/squid/cache 3000 16 256

cache deny all

 

 

# Leave coredumps in the first cache dir

coredump_dir /var/cache/squid

 

# Add any of your own refresh_pattern entries above these.

refresh_pattern ^ftp:                     1440       20%        10080

refresh_pattern ^gopher:            1440       0%          1440

refresh_pattern -i (/cgi-bin/|\?) 0             0%          0

refresh_pattern .                             0              20%        4320

 

dns_nameservers 208.67.222.222 208.67.220.220

 

max_filedescriptors 3200

 

 

 

Does anyone see anything I am missing here?

Yes. In your almost-default configuration (is this your complete squid.conf?) the obvious things are:

a) You do not use an on-disk cache.
b) You use the default memory cache – i.e. 256 MB.
c) You cache nothing because of `cache deny all`, which makes the default cache_mem useless.
d) Your configuration is technically useless: I see neither proxying parameters nor caching. Your squid is now only an additional hop for files. Nothing more.

So squid has nothing to do here. It simply retransmits the GET (GET?) request to the server and, without any caching or storing, retransmits the response to the user.

Is that still correct?

This puts us directly at raw network IO, without any buffering (which squid could provide – but doesn't).

In your place, I would start playing around with the cache_mem parameter – of course, only after removing `cache deny all`.

And after some experiments, maybe, make a decision about dropping the useless Squid box.

Seriously, what is squid's role here? Just set up a border firewall to give your servers access to the Internet. That will be enough.


 

 

My access.log doesn’t really have anything interesting in it either, it just looks like it is working normally. I can attach that too if anyone wants to look at it after I redact some of the hosts.

 

 

Keith Hartley

Network Engineer II

MCSE: Productivity, MCSA: Server 2008, 2012, Office 365 |

Certified Meraki Network Associate, Security+

Geocent, LLC

o: 504-405-3578

a: 2219 Lakeshore drive Ste 300, New Orleans, LA 70122

w: www.geocent.com| e: [hidden email]

 

   

 

 

Confidentiality Notice:

This email communication may contain confidential information, may be legally privileged, and is intended only for the use of the intended recipients(s) identified. Any unauthorized review, use, distribution, downloading, or copying of this communication is strictly prohibited. If you are not the intended recipient and have received this message in error, immediately notify the sender by reply email, delete the communication, and destroy all copies. Thank you.




_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users



-- 
"C++ seems like a language suitable for firing other people's legs."
 
*****************************
* C++20 : Bug to the future *
*****************************


Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Yuri Voinov



On 23.03.2018 05:08, Keith Hartley wrote:

I don’t need it to cache anything – the goal of it is not performance optimization, it is to provide restricted access to the internet. I have 1200 Mbps of network i/o available to the squid servers and can confirm I am able to reliably achieve at least 800 Mbps when I download something directly on the squid server. Additionally, it would be extremely rare that the same file ever would get downloaded more than once, if it ever actually happens.

 

By policy none of the servers may have direct internet access. This is to protect the data contained in the environment. Only one 4 bit subnet has internet access, where the squids are located, and 8 of the 45 servers need restricted internet access.

Right now this protects nothing. You don't have any advanced ACLs in your config.

 

This config is complete at least in a base configuration. If I have time in the project I am going to add URI restrictions. The 8 servers will only need to get to about 30-40 static URIs in total and want to block the others, but first I need to get the throughput up.

 

I have 800 Mbps minimum available bandwidth to the squid servers that I can confirm is available in download tests from the squids. I have 1200 Mbps (these are Azure virtual machines) of bandwidth available in both directions between the servers that use the squids and the squids.

 

However on large files I am only getting 115 Kbps sustained download speeds.

 

Now if squid needs to be able to buffer the downloads to cache in order to perform well – I could enable caching if that is the case, but would prefer to not cache anything. I very seriously doubt that I will ever download the same file two times in this environment as the only thing being downloaded is software updates that are centrally distributed from WSUS, and antivirus definitions that are released about 6-10 times per day. Most of the traffic is also https, with very little http.

 

Is it the case that I may see better performance if I configure it to cache the files first before sending it to clients?

Everything above can be solved by a trivial border firewall.

Just imagine – right now you have a useless server which does not buffer network IO.

Ideally, just drop it and set up a border firewall. That solves all of your problems.

Squid (especially Squid on Windows) is not the appropriate tool here.

 

-- 
"C++ seems like a language suitable for firing other people's legs."

*****************************
* C++20 : Bug to the future *
*****************************

_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users


Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Yuri Voinov

Your task is simple – you need simple control of access to the Internet for servers, without any caching. Squid is excessive here; moreover, in your configuration it adds excessive overhead.

You do not require advanced request processing, SSL bumping, content adaptation, real-time AV checking, advanced caching, or content compression – am I right?

So, a firewall is enough.


23.03.2018 05:11, Yuri пишет:



23.03.2018 05:08, Keith Hartley пишет:

I don’t need it to cache anything – the goal of it is not performance optimization, it is to provide restricted access to the internet. I have 1200 Mbps of network i/o available to the squid servers and can confirm I am able to reliably achieve at least 800 Mbps when I download something directly on the squid server. Additionally, it would be extremely rare that the same file ever would get downloaded more than once, if it ever actually happens.

 

By policy none of the servers may have direct internet access. This is to protect the data contained in the environment. Only one 4 bit subnet has internet access, where the squids are located, and 8 of the 45 servers need restricted internet access.

Now your protects nothing. You don't have any advanced ACLs in your config.

 

This config is complete at least in a base configuration. If I have time in the project I am going to add URI restrictions. The 8 servers will only need to get to about 30-40 static URIs in total and want to block the others, but first I need to get the throughput up.

 

I have 800 Mbps minimum available bandwidth to the squid servers that I can confirm is available in download tests from the squids. I have 1200 Mbps (these are Azure virtual machines) of bandwidth available in both directions between the servers that use the squids and the squids.

 

However on large files I am only getting 115 Kbps sustained download speeds.

 

Now if squid needs to be able to buffer the downloads to cache in order to perform well – I could enable caching if that is the case, but would prefer to not cache anything. I very seriously doubt that I will ever download the same file two times in this environment as the only thing being downloaded is software updates that are centrally distributed from WSUS, and antivirus definitions that are released about 6-10 times per day. Most of the traffic is also https, with very little http.

 

Is it the case that I may see better performance if I configure it to cache the files first before sending it to clients?

Nothing above can not be solved by trivial border firewall.

Just imagine - now you have useless server which not buffers network IO.

Ideally just drop it. And setup border firewall. This solves all of your problems.

Squid's (especially Windows Squid) is not appropriate tool here.

 

Keith Hartley

Network Engineer II

[hidden email]

www.geocent.com

 

From: squid-users [[hidden email]] On Behalf Of Yuri
Sent: Thursday, March 22, 2018 5:39 PM
To: [hidden email]
Subject: Re: [squid-users] Squid for windows Very slow downloads of large files through squid with normal uploads

 

 

 

22.03.2018 23:10, Keith Hartley пишет:

I am using squid 3.5 for windows as a transparent proxy to provide internet access to 7 servers in a secure environment that otherwise does not have internet access. I have two squids running behind a load balancer, each one is running server 2016 core with 2 Xeon processors that is either haswell generation with 1:1 physical processor to virtual processor mapping or a hyper-threading Broadwell generation processor that is 1:1 logical processor to virtual processor mapping, depending on how they are provisioned when they get started.

 

Doing a bandwidth test directly in the VM I am able to get internet throughput of 800-1200 Mbps.

 

Doing a file copy to and from the VM I am able to get 1200 Mbps lan throughput.

 

In proxied uploads I have observed speeds as high as 120 Mbps, which is more than enough for what I need and the bottleneck is likely in the backup software rather than squid. Uploads performance I am not worried about where they are at now – even if I only got 20-30 Mbps it would be adequate for what I need it for.

 

Downloads however are very slow. Small files do not seem to be impacted. Using the test a thinkbroadband.com/download, files up to 20 Mb will download at a reasonable 20-30 Mbps, but when I get to 50, it slows down to about 17 Mbps, and when I download AD Connect from Microsoft, which is about 80 Mb, I can see it start at about 30 Mbps, but eventually goes down to about 115 kbps and levels off. When I put an IP on the server I am using for testing that proxies through squid, I am able to download the file at several hundred mbps.  When I download the same file on the squid server – I can’t tell exactly what throughput I was getting, but the 80 Mb file downloaded within 5 seconds.

 

In both squid servers, other than when the servers were booting, processor activity has not exceeded 9% in the last 7 days but usually sits below 2%. Memory usage has not exceeded 2 Gb, leaving 2 Gb free.

 

I am using OpenDNS for a DNS source, and have tried changing DNS to level3 but it made no performance difference.

 

I think that this may be squid trying to cache something, but had tried to turn all caching off.

 

My cache.log doesn’t really have anything interesting in it that I can see. It’s the same ~30 or so log entries each time the service starts, and that is about it. Here it is:

 

2018/03/22 09:47:27 kid1| Set Current Directory to /var/cache/squid

2018/03/22 09:47:27 kid1| Starting Squid Cache version 3.5.27 for x86_64-unknown-cygwin...

2018/03/22 09:47:27 kid1| Service Name: squid

2018/03/22 09:47:27 kid1| Process ID 1164

2018/03/22 09:47:27 kid1| Process Roles: worker

2018/03/22 09:47:27 kid1| With 3200 file descriptors available

2018/03/22 09:47:27 kid1| Initializing IP Cache...

2018/03/22 09:47:27 kid1| parseEtcHosts: /etc/hosts: (2) No such file or directory

2018/03/22 09:47:27 kid1| DNS Socket created at [::], FD 5

2018/03/22 09:47:27 kid1| DNS Socket created at 0.0.0.0, FD 6

2018/03/22 09:47:27 kid1| Adding nameserver 208.67.222.222 from squid.conf

2018/03/22 09:47:27 kid1| Adding nameserver 208.67.220.220 from squid.conf

2018/03/22 09:47:27 kid1| Logfile: opening log daemon:/var/log/squid/access.log

2018/03/22 09:47:27 kid1| Logfile Daemon: opening log /var/log/squid/access.log

2018/03/22 09:47:27 kid1| WARNING: no_suid: setuid(0): (22) Invalid argument

2018/03/22 09:47:27 kid1| Store logging disabled

2018/03/22 09:47:27 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 objects

2018/03/22 09:47:27 kid1| Target number of buckets: 1008

2018/03/22 09:47:27 kid1| Using 8192 Store buckets

2018/03/22 09:47:27 kid1| Max Mem  size: 262144 KB

2018/03/22 09:47:27 kid1| Max Swap size: 0 KB

2018/03/22 09:47:27 kid1| Using Least Load store dir selection

2018/03/22 09:47:27 kid1| Set Current Directory to /var/cache/squid

2018/03/22 09:47:27 kid1| Finished loading MIME types and icons.

2018/03/22 09:47:27 kid1| HTCP Disabled.

2018/03/22 09:47:27 kid1| Squid plugin modules loaded: 0

2018/03/22 09:47:27 kid1| Adaptation support is off.

2018/03/22 09:47:27 kid1| Accepting HTTP Socket connections at local=[::]:3128 remote=[::] FD 10 flags=9

2018/03/22 09:47:28 kid1| storeLateRelease: released 0 objects

 

 

And this is my squid.conf:

 

#

# Recommended minimum configuration:

#

 

# Example rule allowing access from your local networks.

# Adapt to list your (internal) IP networks from where browsing

# should be allowed

 

#acl localnet src 10.0.0.0/8           # RFC1918 possible internal network

#acl localnet src 172.16.0.0/12    # RFC1918 possible internal network

#acl localnet src 192.168.0.0/16  # RFC1918 possible internal network

acl localnet src fc00::/7       # RFC 4193 local private network range

acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl WSUS src 192.168.225.4/32

acl BACKUP src 192.168.225.11/32

acl ADFS src 192.168.224.7/32

acl ADFS src 192.168.228.8/32

acl DEVWEB src 192.168.226.6/32

acl UATWEB src 192.168.226.13/32

acl PRDWEB src 192.168.226.8/32

acl PRDWEB src 192.168.226.9/32

 

 

 

acl SSL_ports port 443

acl Safe_ports port 80          # http

#acl Safe_ports port 21         # ftp

acl Safe_ports port 443         # https

#acl Safe_ports port 70         # gopher

#acl Safe_ports port 210        # wais

#acl Safe_ports port 1025-65535 # unregistered ports

#acl Safe_ports port 280        # http-mgmt

#acl Safe_ports port 488        # gss-http

#acl Safe_ports port 591        # filemaker

#acl Safe_ports port 777        # multiling http

acl CONNECT method CONNECT

 

#

# Recommended minimum Access Permission configuration:

#

 

# Only allow cachemgr access from localhost

#http_access allow localhost manager

#http_access deny manager

 

# Deny requests to certain unsafe ports

http_access deny !Safe_ports

 

# Deny CONNECT to other than secure SSL ports

http_access deny CONNECT !SSL_ports

 

# We strongly recommend the following be uncommented to protect innocent

# web applications running on the proxy server who think the only

# one who can access services on "localhost" is a local user

#http_access deny to_localhost

 

#

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

#

 

# Example rule allowing access from your local networks.

# Adapt localnet in the ACL section to list your (internal) IP networks

# from where browsing should be allowed

http_access allow localnet

http_access allow localhost

http_access allow WSUS

http_access allow ADFS

http_access allow BACKUP

http_access allow DEVWEB

http_access allow UATWEB

http_access allow PRDWEB

 

# And finally deny all other access to this proxy

http_access deny all

 

# Squid normally listens to port 3128

http_port 3128

 

# Uncomment the line below to enable disk caching - path format is /cygdrive/<full path to cache folder>, i.e.

#cache_dir aufs /cygdrive/d/squid/cache 3000 16 256

cache deny all

 

 

# Leave coredumps in the first cache dir

coredump_dir /var/cache/squid

 

# Add any of your own refresh_pattern entries above these.

refresh_pattern ^ftp:             1440    20%     10080

refresh_pattern ^gopher:          1440    0%      1440

refresh_pattern -i (/cgi-bin/|\?) 0       0%      0

refresh_pattern .                 0       20%     4320

 

dns_nameservers 208.67.222.222 208.67.220.220

 

max_filedescriptors 3200

 

 

 

Does anyone see anything I am missing here?

Yes. In your almost-default configuration (is this the complete squid.conf?) a few things are obvious:

a) You do not use an on-disk cache.
b) You use the default memory cache, i.e. 256 MB.
c) You cache nothing because of "cache deny all", which makes the default cache_mem useless.
d) Your configuration is technically a pass-through: I see neither proxying parameters nor caching. Your Squid is now only an additional hop for files, no more.

So Squid has nothing to do here. It simply retransmits the GET request to the server and, without any caching or storing, relays the response back to the user.

Still correct?

That puts us directly at raw network I/O, without any buffering (which your Squid could provide, but currently doesn't).

In your place, I would start experimenting with the cache_mem parameter, of course only after removing "cache deny all".

And after some experiments, maybe decide to drop the useless Squid box altogether.

Seriously, what is Squid's role here? Just set up a border firewall to give your servers Internet access. That will be enough.
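For reference, the kind of tuning described above might look like this in squid.conf. This is only a sketch; the values are illustrative assumptions, not tested recommendations for this environment:

```
# Remove "cache deny all" and give Squid a memory cache to buffer objects:
cache_mem 1024 MB

# Let Squid read ahead of a slow client when fetching from the origin
# server (the default gap is 16 KB; a larger gap can help large downloads):
read_ahead_gap 1 MB

# Keep very large objects out of the memory cache:
maximum_object_size_in_memory 512 KB
```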


 

 

My access.log doesn’t really have anything interesting in it either, it just looks like it is working normally. I can attach that too if anyone wants to look at it after I redact some of the hosts.

 

 

Keith Hartley

Network Engineer II

MCSE: Productivity, MCSA: Server 2008, 2012, Office 365 |

Certified Meraki Network Associate, Security+

Geocent, LLC

o: 504-405-3578

a: 2219 Lakeshore drive Ste 300, New Orleans, LA 70122

w: www.geocent.com| e: [hidden email]

 

   

 

 

Confidentiality Notice:

This email communication may contain confidential information, may be legally privileged, and is intended only for the use of the intended recipients(s) identified. Any unauthorized review, use, distribution, downloading, or copying of this communication is strictly prohibited. If you are not the intended recipient and have received this message in error, immediately notify the sender by reply email, delete the communication, and destroy all copies. Thank you.




_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users



-- 
"C++ seems like a language suitable for firing other people's legs."
 
*****************************
* C++20 : Bug to the future *
*****************************



Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Yuri Voinov

And, if you still insist that you need a proxy, consider Privoxy.

A lightweight, simple HTTP proxy with basic access control; it has a Windows build and runs as a service.

It will be good enough.

https://www.privoxy.org/

23.03.2018 05:27, Yuri пишет:

Your task is simple: you need basic control of Internet access for servers, without any caching. Squid is overkill here; moreover, in your configuration it adds unnecessary overhead.

You don't require advanced request processing, SSL bumping, content adaptation, real-time AV checking, advanced caching, or content compression, am I right?

So, a firewall is enough.


23.03.2018 05:11, Yuri пишет:



23.03.2018 05:08, Keith Hartley пишет:

I don’t need it to cache anything – the goal of it is not performance optimization, it is to provide restricted access to the internet. I have 1200 Mbps of network i/o available to the squid servers and can confirm I am able to reliably achieve at least 800 Mbps when I download something directly on the squid server. Additionally, it would be extremely rare that the same file ever would get downloaded more than once, if it ever actually happens.

 

By policy none of the servers may have direct internet access. This is to protect the data contained in the environment. Only one 4 bit subnet has internet access, where the squids are located, and 8 of the 45 servers need restricted internet access.

Right now this protects nothing. You don't have any advanced ACLs in your config.

 

This config is complete, at least as a base configuration. If I have time in the project I am going to add URI restrictions. The 8 servers will only need to get to about 30-40 static URIs in total, and I want to block the others, but first I need to get the throughput up.

 

I have 800 Mbps minimum available bandwidth to the squid servers that I can confirm is available in download tests from the squids. I have 1200 Mbps (these are Azure virtual machines) of bandwidth available in both directions between the servers that use the squids and the squids.

 

However on large files I am only getting 115 Kbps sustained download speeds.

 

Now if squid needs to be able to buffer the downloads to cache in order to perform well – I could enable caching if that is the case, but would prefer to not cache anything. I very seriously doubt that I will ever download the same file two times in this environment as the only thing being downloaded is software updates that are centrally distributed from WSUS, and antivirus definitions that are released about 6-10 times per day. Most of the traffic is also https, with very little http.

 

Is it the case that I may see better performance if I configure it to cache the files first before sending it to clients?

There is nothing above that cannot be solved by a trivial border firewall.

Just imagine: right now you have a useless server that doesn't even buffer network I/O.

Ideally, just drop it and set up a border firewall. That solves all of your problems.

Squid (especially Squid on Windows) is not the appropriate tool here.

 

Keith Hartley

Network Engineer II

[hidden email]

www.geocent.com

 

From: squid-users [[hidden email]] On Behalf Of Yuri
Sent: Thursday, March 22, 2018 5:39 PM
To: [hidden email]
Subject: Re: [squid-users] Squid for windows Very slow downloads of large files through squid with normal uploads

 

 

 


Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Matus UHLAR - fantomas
In reply to this post by Keith Hartley
On 22.03.18 23:08, Keith Hartley wrote:
>However on large files I am only getting 115 Kbps sustained download speeds.

does this happen even when you try using squid on the machine where squid is
installed?


--
Matus UHLAR - fantomas, [hidden email] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I drive way too fast to worry about cholesterol.
_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users
Reply | Threaded
Open this post in threaded view
|

Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Keith Hartley
I had not thought to test that. I will do that today.

In regards to Yuri's comments on firewall vs squid - I don’t agree that a firewall would be a direct replacement in this case.

The 30-40 URIs I need to access resolve to a potential pool of several million IP addresses, and the pool of IP addresses gets updated multiple times per year. Writing rules at the network level would not be practical to implement even one time, let alone maintain over time. A more expensive firewall that is able to implement ACLs by hostname would be needed, and options for virtual firewalls hosted in Azure are limited. It would also require either implementing many static routes, or a transit network with a virtual router, and this environment will be supported by an organization that does not have a network engineer on staff.

I understand that there is very little functionality I need to leverage, but I like Squid, as it is a name that most people in IT will recognize and be able to google.

I may still review Privoxy, however. If it is simple enough that supporting it would be easy to figure out with minimal research, it may still be a good option. I like simple, but high supportability is mandatory.
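As a side note on measuring this kind of problem: curl can report download speed in bytes per second via `-w '%{speed_download}'`, run once directly and once with `-x http://<proxy>:3128` to compare direct versus proxied throughput (the proxy address below is a placeholder, not from the thread). A small helper to convert curl's figure to the Mbps numbers used above:

```shell
# Compare a direct download with one through the proxy, e.g.:
#   curl -o /dev/null -s -w '%{speed_download}\n' "$URL"
#   curl -x http://192.168.225.2:3128 -o /dev/null -s -w '%{speed_download}\n' "$URL"
# (the proxy address above is a placeholder)

# to_mbps: convert curl's %{speed_download} (bytes/sec) to megabits/sec
to_mbps() {
  awk -v bytes="$1" 'BEGIN { printf "%.1f\n", bytes * 8 / 1000000 }'
}

to_mbps 14375     # ~115 kbps, the slow proxied rate observed
to_mbps 12500000  # 100 Mbps, for comparison
```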




Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Yuri Voinov


23.03.2018 21:25, Keith Hartley пишет:
> I had not thought to test that. I will do that today.
>
> In regards to Yuri's comments on firewall vs squid - I don’t agree that a firewall would be a direct replacement in this case.
>
> The 30-40 URIs I need to access resolve to a potential pool of several million IP addresses, and the pool of IP addresses gets updated multiple times per year. Writing rules at the network level would not be practical to implement even one time, let alone maintain over time. A more expensive firewall that is able to implement ACLs by hostname would be needed, and options for virtual firewalls hosted in Azure are limited. It would also require either implementing many static routes, or a transit network with a virtual router, and this environment will be supported by an organization that does not have a network engineer on staff.
It depends. If you are giving servers Internet access for updates, in most
cases updates have a limited set of distribution points (of course, we're not
considering CDNs now). Some cases can easily be solved by the server's
built-in firewall.

If we're talking about infrastructure, the best solution for updates is an
internal update server (like WSUS), which alone has Internet access, with all
security restrictions. You know this better than me ;) Anyway, a centralized
patch/update server behind the border firewall is the best solution.

But this is, of course, an abstract discussion.
>
> I understand that there is very little functionality I need to leverage, but I like Squid, as it is a name that most people in IT will recognize and be able to google.
We like it too, but Squid itself is big and relatively complex software; it
requires real experience to use and is not always easy to support. It has a
lot of functions and can have very complex configurations. This is why I
can't recommend it for every case that needs proxying/caching, without
serious reasons.
>
> I may still review privoxy however. If it is simple enough that supporting it would be something easy to just figure out with minimal research, it may still be a good option. I like simple, but high supportability is mandatory
Yes. Privoxy is very simple compared to Squid. It is a non-caching proxy
that has all the functionality you require, and it works with hostnames.

Don't worry - you will not need much support for it. It just works. ;)


Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Keith Hartley
Yeah, there are some other considerations with this environment. While WSUS is the only service that downloads files in any significant quantity, architectural decisions in the application this environment hosts require it to have some minimal internet access, which is what created the need for the proxy. WSUS was originally set up exactly as you described until I learned of the application requirements; technically it is the only thing that needs a proxy, but I didn't want a different way of accessing the internet for each service that needed access, so I moved everything to a proxy so there would be only one path to the internet.


Keith Hartley
Network Engineer II
[hidden email]
www.geocent.com

-----Original Message-----
From: squid-users [mailto:[hidden email]] On Behalf Of Yuri
Sent: Friday, March 23, 2018 10:41 AM
To: [hidden email]
Subject: Re: [squid-users] Squid for windows Very slow downloads of large files through squid with normal uploads



23.03.2018 21:25, Keith Hartley пишет:
> I had not thought to test that. I will do that today.
>
> In regards to Yuri's comments on firewall vs squid - I don’t agree that a firewall would be a direct replacement in this case.
>
> The 30-40 URIs I need to access resolve to a potential pool of several million IP addresses, and the pool of IP addresses gets updated multiple times per year. Writing rules at the network level would not be practical to implement even one time, let alone maintain over time. A more expensive firewall that is able to implement ACLs by hostname would be needed, and options for virtual firewalls hosted in Azure are limited. It would also require either implementing many static routes, or a transit network with a virtual router, and this environment will be supported by an organization that does not have a network engineer on staff.
It depends. If your make Internet access for servers due to updates - in most cases updates has limited distribution points (of course, we're not considering CDN now). Some cases can be easy solved by server's built-in firewall.

If we're talking about infrastructure, best solution for updates is internal updates server (like WSUS), which only have access to Internet with all security restrictions. You know this better than me ;) Anyway, centralized patch/updates server behind the border firewall is best solution.

But this is, of course, abstract discussion.
>
> I understand that there is very little functionality I need to leverage, but I like Squid, as it is a name that most people in IT will recognize and be able to google.
We're like it too, but Squid's itself is big and relatively complex software, requires much experience to use and not always easy in support. It has a lot of functions and can have very complex configurations. This is why I can't recommend use it in all cases requires proxying/caching without serious reasons.
>
> I may still review privoxy however. If it is simple enough that
> supporting it would be something easy to just figure out with minimal
> research, it may still be a good option. I like simple, but high
> supportability is mandatory
Yes. Privoxy is very simple instead Squid. It is non-caching proxy, which have all functionality you require. It works with hostnames.

Don't worry - you will not require much support for it. It's just works. ;)

>
>
> Keith Hartley
> Network Engineer II
> [hidden email]
> www.geocent.com
>
> -----Original Message-----
> From: squid-users [mailto:[hidden email]]
> On Behalf Of Matus UHLAR - fantomas
> Sent: Friday, March 23, 2018 3:56 AM
> To: [hidden email]
> Subject: Re: [squid-users] Squid for windows Very slow downloads of
> large files through squid with normal uploads
>
> On 22.03.18 23:08, Keith Hartley wrote:
>> However on large files I am only getting 115 Kbps sustained download speeds.
> does this happen even when you try using squid on the machine squid is installed on?
>
>
> --
> Matus UHLAR - fantomas, [hidden email] ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> I drive way too fast to worry about cholesterol.
> _______________________________________________
> squid-users mailing list
> [hidden email]
> http://lists.squid-cache.org/listinfo/squid-users
>
> Confidentiality Notice:
> This email communication may contain confidential information, may be legally privileged, and is intended only for the use of the intended recipients(s) identified. Any unauthorized review, use, distribution, downloading, or copying of this communication is strictly prohibited. If you are not the intended recipient and have received this message in error, immediately notify the sender by reply email, delete the communication, and destroy all copies. Thank you.

--
"C++ seems like a language suitable for firing other people's legs."

*****************************
* C++20 : Bug to the future *
*****************************



Re: Squid for windows Very slow downloads of large files through squid with normal uploads

Keith Hartley
In reply to this post by Yuri Voinov
Good recommendation on Privoxy. It took a few hours to get it installed, but most of that time came from struggles configuring a workgroup server that is missing a lot of the normal tools I would have on a domain-joined server.

It took me maybe 30 minutes to figure out how to configure it and get it up and running, and I think it will definitely be more practical for implementing static screening of the 30-40 URIs that I need.
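(As a rough sketch of that kind of static screening in Privoxy — assuming the standard actions-file mechanism, with hypothetical hostnames — a user.action file can block everything by default and then unblock only the approved hosts:)

```
# user.action sketch: block every URL by default...
{ +block{Not on the approved list.} }
/

# ...then unblock the approved hostnames
# (names below are hypothetical examples)
{ -block }
.example.com
.update.example.net
```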


Keith Hartley
Network Engineer II
[hidden email]
www.geocent.com

-----Original Message-----
From: squid-users [mailto:[hidden email]] On Behalf Of Yuri
Sent: Friday, March 23, 2018 10:41 AM
To: [hidden email]
Subject: Re: [squid-users] Squid for windows Very slow downloads of large files through squid with normal uploads



On 23.03.2018 21:25, Keith Hartley wrote:
> I had not thought to test that. I will do that today.
>
> In regards to Yuri's comments on firewall vs squid - I don’t agree that a firewall would be a direct replacement in this case.
>