I've configured Squid with ssl_bump and now the squid process itself (not the
helpers) is under quite a high load. There aren't many clients on it (50 max).
This is the config (I removed some ACLs to keep it readable):
cache_mgr [hidden email]
visible_hostname proxy.xxx.com
authenticate_ip_ttl 1 hour
### negotiate kerberos and ntlm authentication
auth_param negotiate program /usr/local/bin/negotiate_wrapper --ntlm
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --domain=xxx
--kerberos /usr/local/bin/squid_kerb_auth -s GSS_C_NO_NAME
auth_param negotiate children 50
auth_param negotiate keep_alive off
### pure ntlm authentication
auth_param ntlm program /usr/bin/ntlm_auth
auth_param ntlm children 50
auth_param ntlm keep_alive off
### provide basic authentication via LDAP for clients not authenticated via Negotiate/NTLM
auth_param basic program /usr/local/squid/libexec/basic_ldap_auth -v 3 -R -b
"dc=xxx,dc=local" -D [hidden email] -W /etc/squid/ldappass.txt -f
sAMAccountName=%s -h srv-dc1.xxx.local
auth_param basic children 50
auth_param basic realm Proxy xxx
This is locked down to only some users via an ACL; the ACL is placed at the
end, so only a few users hit it.
I've already increased the number of vCPUs on the machine, but the only
process I see eating CPU is squid; the helpers aren't using much. I only
sometimes see the clamav service going high on usage, but I think that's
normal.
Is there something I'm missing or could optimize in the config, or does
ssl_bump simply require a lot of resources?
> I've configured Squid with ssl_bump and now the squid process itself (not
> the helpers) is under quite a high load. There aren't many clients on it
> (50 max).
> I've already increased the number of vCPUs on the machine, but the only
> process I see eating CPU is squid; the helpers aren't using much.
> Is there something I'm missing or could optimize in the config, or does
> ssl_bump simply require a lot of resources?
I have not studied your configuration, but doing SSL encryption and/or
decryption (including the SslBump "bump" action) does require a lot of
CPU cycles. Enabling bumping may decrease sustained peak throughput by
70% or more.
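Not discussed in the thread, but one common way to reduce that CPU cost is to bump only the traffic you actually need to inspect and splice (tunnel) everything else. A minimal sketch, assuming Squid 4+ syntax; the domain list file and its path are hypothetical:

```
# Sketch: decrypt only selected destinations, pass the rest through.
acl step1 at_step SslBump1
acl bumped_domains ssl::server_name "/etc/squid/bump_domains.txt"

ssl_bump peek step1              # read the TLS client/server hello
ssl_bump bump bumped_domains     # decrypt only listed sites
ssl_bump splice all              # tunnel everything else untouched
```

Spliced connections skip the expensive re-encryption entirely, so the fewer sites you bump, the less CPU the main squid process burns.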
Wow, a lot to read (and understand, for a newbie like me :-|)....
From what I've seen, it's sufficient to insert the "workers n" directive in
the config (n = number of workers), with some limitations for features that
aren't fully SMP-aware (delay pools, cache, etc. - I don't think I use any of
them).
For now I've tried the "workers 3" directive; I can see 3 squid processes,
they seem to spread the load quite evenly, and page loading seems better.
Hope that fixes the bottleneck...
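For reference, a minimal sketch of that SMP setup; the core numbers are an assumption (pinning via cpu_affinity_map is optional but helps keep workers from competing for the same core):

```
# Sketch: three SMP worker processes, each pinned to its own core.
workers 3
cpu_affinity_map process_numbers=1,2,3 cores=1,2,3
```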
In any case, I don't know if there is something wrong in the config that
could hurt performance...
On 27/04/18 03:19, masterx81 wrote:
> For now I've tried the "workers 3" directive; I can see 3 squid processes,
> they seem to spread the load quite evenly, and page loading seems better.
> Hope that fixes the bottleneck...
> In any case, I don't know if there is something wrong in the config that
> could hurt performance...
Maybe yes, maybe no. The big performance drags are ICAP with its extra TCP
resource requirements and delays, SSL-Bump with the TLS overheads, lots
of complex ACL processing, and regular network delays.
You mention having many ACLs but elided them, so we cannot provide any
audit or hints for optimizing that part. The other parts you will have to
test yourself to see what the actual delays from each are.
By now I don't see the single squid process taking all the resources anymore;
with multiple processes the load is spread out and everything seems to work
really well. I only sometimes see the clamd service hitting 100% for a few
instants, but I think that is normal, as it's a single process, and it doesn't
cause any slowdown.
The ACLs that I cut are only big lists of dstdomain (I think those don't
require much CPU) and ACLs for some groups of users (time-based ACLs).
Nothing really intensive.
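For what it's worth, dstdomain ACLs are matched with an efficient tree lookup, so even large lists stay cheap at request time. A sketch of the usual external-file pattern (the file path is hypothetical):

```
# Sketch: large domain lists kept in an external file, one domain per line.
acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked_sites
```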
The only thing I think could be intensive is the extension checking used to
lock down some users, but only a few clients hit that ACL.
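An extension check like that is typically a regex ACL, which is the costly kind; a minimal sketch, where the user list file and the extension set are assumptions, not from the thread:

```
# Sketch: block some file extensions for a restricted group of users.
acl restricted_users proxy_auth "/etc/squid/restricted_users.txt"
acl blocked_ext urlpath_regex -i \.(exe|msi|zip)$

# Regex matching is relatively expensive, so listing the cheap
# user check first lets most requests skip the regex entirely.
http_access deny restricted_users blocked_ext
```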