On 07/09/17 01:26, Alex Gutiérrez Martínez wrote:
> Hi everyone, i have 100 GB on my cache partition, but squid only use 1.5
> GB. My internet connection its incredibly slow, any advice on how
> optimize my connection will be appreciated.
You are missing details of:
* what Squid version you are using (squid -v output), and
* how much traffic is going through the proxy, and
* roughly how many users this cache is servicing, and
* what HIT rates you are currently achieving, and
* what the 'info' cache manager report shows ("squidclient mgr:info",
the cachemgr.cgi page, or http://example.local:3128/squid-internal-mgr/info)
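For example, either of these should fetch that report from a running proxy (the hostname and port are the examples from this thread; substitute your own):

```shell
# Query the cache manager 'info' report from a running Squid.
squidclient mgr:info
# Or over plain HTTP, if the manager URL is reachable:
curl -s http://example.local:3128/squid-internal-mgr/info
```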
Some things to keep in mind (in no particular order):
* A proxy cache is best for large numbers of clients. The fewer users
there are, the more likely the data is already being cached on the
client machines themselves (eg in browser caches) - an aggregating
proxy cache will not store much of it unless their traffic differs
significantly AND the proxy cache is larger than the client caches. As
the user count grows, the per-user differences build up and the proxy
shows more caching benefit.
* Some traffic is simply not cacheable. Cacheability is determined by
the type of domains and sites being contacted, what they do, and so on.
* Caches take time to fill up, and the fill rate decreases
exponentially: only objects not already stored get added. It is very
likely that your users visit only a small number of sites, so only a
small amount of content is actually in use. Again, the more clients use
the proxy, the more the per-user differences grow the cache.
Have you given it enough time for more than a few GB of HTTP traffic
to go through the proxy?
* HTTPS is not cacheable in its encrypted form. As the Internet's drive
towards HTTPS grows, increasingly less content is cacheable without
performing an MITM on the traffic.
* A 64-bit build of Squid is needed to operate well with more than a
few GB of data. 'Large file' support does not help much, as the size of
individual files is not the problem; the size counters used for cache
management need to be 64-bit.
1.5GB looks suspiciously like a 32-bit numerical wrap happening.
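As an illustration (plain Python, not Squid code), here is how a large byte count misreports when squeezed into a signed 32-bit counter - the numbers are hypothetical, but they show the kind of small positive figure a wrapped counter produces:

```python
# Illustration only: interpret a large byte count the way a signed
# 32-bit counter would, i.e. modulo 2**32 with wrap into negatives.
def as_int32(n):
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

full_size = 100 * 1000**3             # 100 GB of cached data
seen_by_counter = as_int32(full_size)
print(seen_by_counter)                # 1215752192 -> about 1.2 GB
print(round(seen_by_counter / 1000**3, 1))
```

A counter that has wrapped several times reports only the leftover remainder, which is why a big cache can look like it holds just a GB or two.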
> This is the configuration of my cache.
> maximum_object_size 300 MB
> cache_dir aufs /var/cache/squid3 1024000 16 256
The above cache_dir uses just under 1 TB of disk space, not 100 GB.
Try 97280 for a 100GB disk. That is roughly 97% of the drive for cache,
leaving 3% for OS use and temporary oversize-object storage.
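To sanity-check that figure (my arithmetic, assuming decimal "GB" as disk vendors use it, since cache_dir takes its size in MB):

```python
disk_mb  = 100 * 1000   # a "100 GB" disk in MB, decimal units
cache_mb = 97280        # the cache_dir size suggested above
share = 100 * cache_mb / disk_mb
print(round(share, 1))  # percent of the disk given over to the cache
```

That comes out at about 97%, with the remainder as headroom.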
This may be part of your problem. The larger this value is, the more
likely that dynamic content will *not* be cached.
It makes Squid check whether objects will be fresh or stale 600 seconds
in the future and cache only the ones that will still be fresh at that
time - which is very unlikely to be true for any dynamic content.
My recommendation is to remove this from your config file, or configure
it a bit smaller than the default 60 seconds - but not too much smaller.
> cache_swap_low 87
> cache_swap_high 90
Raise these back to the default 90-95% thresholds for data purging.
You can do that by removing the directives entirely.
NP: the closer these are to 100%, the more of the cache can be filled
during normal use. But it also means more work for Squid when purging
to make space for new objects - which can slow down all transactions
underway if one of them needs a lot of space.
Tuning these properly is a slow job, and requires the cache to be
relatively full first.
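For reference, the defaults being suggested correspond to these squid.conf lines (deleting your current two lines achieves the same thing):

```
# squid.conf built-in defaults for the purge thresholds
cache_swap_low  90
cache_swap_high 95
```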