rock issue

patrick.mkhael
Dear all,

Kindly note that I have a lab where the internet traffic is 100 Mb/s of pure HTTP traffic. I am trying to achieve a cache gain ratio of 60%, which I was able to do using a ufs cache_dir and a single worker.
But I found that I need to work with rock cache_dirs, since the real traffic is 1000 Mb/s and one processor will not be able to handle it all. I used the same configuration and only switched from ufs to rock, and the gain ratio dropped from 60% to 6%.

Below is the configuration I am using (refresh_pattern lines not included):

workers 10
cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9,10 cores=1,2,3,4,5,6,7,8,9,10
cache_mem 20 GB
maximum_object_size_in_memory 50 MB
maximum_object_size 2 GB
cache_miss_revalidate off
quick_abort_pct 85
cache_dir rock /mnt/sdb/1 2048 max-size=10000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/2 2048 max-size=10000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/3 4960 max-size=50000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/4 4960 max-size=50000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/5 10000 max-size=100000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/6 10000 max-size=100000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/7 20000 max-size=500000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/8 20000 max-size=500000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/9 40000 max-size=1000000 max-swap-rate=200 swap-timeout=300
cache_dir rock /mnt/sdb/10 40000 max-size=1000000 max-swap-rate=200 swap-timeout=300

_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: rock issue

Alex Rousskov
On 7/1/20 1:45 PM, patrick mkhael wrote:

> Kindly note that I have a lab where the internet traffic is 100 Mb/s
> of pure HTTP traffic. I am trying to achieve a cache gain ratio of
> 60%, which I was able to do using a ufs cache_dir and a single worker.
> But I found that I need to work with rock cache_dirs, since the real
> traffic is 1000 Mb/s and one processor will not be able to handle it
> all. I used the same configuration and only switched from ufs to rock,
> and the gain ratio dropped from 60% to 6%.

> workers 10
> cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9,10 cores=1,2,3,4,5,6,7,8,9,10

Please note that you have 20 kids worth mapping (10 workers and 10
diskers), but you map only the first 10. This is _not_ the reason for a
drop in hit ratio though.
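A mapping that covers all 20 kids might look like the sketch below. It assumes at least 20 available cores and that disker kids are numbered after the workers (which matches how Squid assigns kid IDs); adjust the core list to your hardware:

  workers 10
  # kids 1-10 are the workers, kids 11-20 are the rock diskers
  cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20 cores=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20

With fewer cores than kids, letting diskers share cores is the lesser harm.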

> cache_dir rock /mnt/sdb/1   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/2   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/3   4960 max-size=50000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/4   4960 max-size=50000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/5  10000 max-size=100000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/6  10000 max-size=100000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/7  20000 max-size=500000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/8  20000 max-size=500000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/9  40000 max-size=1000000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/10 40000 max-size=1000000 max-swap-rate=200 swap-timeout=300

Why do you have 10 rock caches of various sizes? The number of caches
itself should not affect the hit ratio directly, but a large number of
caches may complicate analysis and, if you do not have enough
independent disk "spindles", it may slow down disk I/O and lead to
timeouts and rate-based rejections (that do affect hit ratio).

How many independent disk spindles (or equivalent) do you have? All
other factors being equal (they rarely are), you want to dedicate one
independent disk spindle to one rock cache_dir.

How did you select the swap rate limits and timeouts for cache_dirs?

Do you see any ERRORs or WARNINGs in cache log?


Thank you,

Alex.

Re: rock issue

patrick.mkhael

***Please note that you have 20 kids worth mapping (10 workers and 10
diskers), but you map only the first 10.
[I did not get the point about the diskers. As far as I understood it, the mapping should look like this (simple example):
> workers 2
> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
> cache_dir rock /mnt/sdb/1   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
> cache_dir rock /mnt/sdb/2   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
]

***Why do you have 10 rock caches of various sizes?
[To be honest, I saw on many websites that it should be done like this, from the smallest size to the biggest, each with a different size; I thought it would serve objects from a small-size pool up to a large one.]

***How many independent disk spindles (or equivalent) do you have?
[I have one RAID 5 array of SSD disks, used by the 10 rock cache_dirs.]

***How did you select the swap rate limits and timeouts for cache_dirs?
[I also took those from an online forum. Can I leave both of them unset?]

***Do you see any ERRORs or WARNINGs in cache log?
[No errors or warnings found in the cache log.]

Thank you so much!



Re: rock issue

Amos Jeffries
Administrator
On 2/07/20 8:45 am, patrick mkhael wrote:

>
> ***Please note that you have 20 kids worth mapping (10 workers and 10
> diskers), but you map only the first 10.​{since i did not get the point
> of the diskers ,as far as i understood  , it should be like  (simple
> example)
>  >workers 2
>> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
>> cache_dir rock /mnt/sdb/1   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
>> cache_dir rock /mnt/sdb/2   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
>
>
>
> ***Why do you have 10 rock caches of various sizes? [ to be honest , i
> saw in many websites that it should be like this from the smallest to
> the bigest with diff size, i tought it should serve from small size pool
> to high ]
>

In general, yes. BUT the size ranges to use should be determined via
traffic analysis. Specifically, measure and graph the object sizes being
handled; the result will be a wavy / cyclic line. The size boundaries
should be set to the *minimum* point(s) along that line.


That said, since you are comparing the new rock setup to an old UFS
setup, it would be best to start with rock configured as similarly to
the UFS setup as you can: number of cache_dirs, ranges of objects stored
in each, etc.

i.e. if these ranges were used in the old UFS setup, keep them for now.
They can be recalculated after the cause of the HIT ratio drop is
identified.


> *****How many independent disk spindles (or equivalent) do you have? [ i
> have one raid 5 ssd disks , used by the 10 rock cache dir]
>

Ouch.

Ideally you would have either:

 5x SSD disks separately mounted. With one rock cache on each.

or,

 1x RAID 10 with one rock cache per disk pair/stripe. This requires the
controller to be able to map a sub-directory tree exclusively onto one
of the sub-array stripes.

or,

 2x RAID 1 (drive pair mirroring) with one rock cache on each pair. This
is the simplest way to achieve above when sub-array feature is not
available in RAID 10.

or,

 1x RAID 10 with a single rock cache



The reasons:

Squid's I/O pattern is mostly writes and erases, with few reads.

RAID types in order of best -> worst for that pattern are:
  none, RAID 1, RAID 10, RAID 5, RAID 0
<https://wiki.squid-cache.org/SquidFaq/RAID>

Normal SSD controllers cannot handle the Squid I/O pattern well. Squid
*will* age the disk much faster than the manufacturer's measured
statistics indicate. (This is true even for HDD, just less of a problem
there.)

This means that the design needs to plan for coping with relatively
frequent disk failures. Loss of the data itself is irrelevant; only the
outage time and the reduction in HIT ratio actually matter on failures.

Look for SSDs with high write-cycle ratings, and for RAID hot-swap
capability (even if the machine itself can't do that).



> ***How did you select the swap rate limits and timeouts for
> cache_dirs?[I took it also from online forum , can i leave it empty for
> both]
>

Alex may have better ideas if you can refer us to the tutorials or
documents where you found that information.

Without specific details on why those values were chosen, I would start
with one rock cache using default values, only changing them if
follow-up analysis indicates some other value is better.
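Such a starting point might look like the sketch below (the path and size are placeholders):

  # one rock cache_dir, all optional tuning values left at defaults
  cache_dir rock /cache/rock1 40000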


Amos

Re: rock issue

patrick.mkhael
Dear Amos,

**I will give each rock cache_dir its own physical disk; I am setting that up now. I will also change the optional rock cache_dir values back to their defaults.

**Please note that I am switching to rock because one processor won't handle 800 Mb/s of traffic.

**Theoretically, would Squid with rock cache_dirs give me the same gain ratio as ufs cache_dirs? [For 100 Mb/s, ufs gave me 70%.]

**How many workers and how much RAM do you think are needed for 800 Mb/s of traffic?

Thank you



Re: rock issue

Alex Rousskov
On 7/1/20 4:45 PM, patrick mkhael wrote:

> ***Please note that you have 20 kids worth mapping (10 workers and 10
> diskers), but you map only the first 10.​{since i did not get the point
> of the diskers ,as far as i understood  , it should be like  (simple
> example)

>> workers 2
>> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
>> cache_dir rock ...
>> cache_dir rock ...

The above looks OK. Each worker is a kid process. Each rock cache_dir is
a kid process (we call them diskers). If you have physical CPU cores to
spare, give each kid process its own physical core. Otherwise, give each
worker process its own physical core (if you can). Diskers can share
physical cores with less harm because they usually do not consume many
CPU cycles. The Squid wiki has more detailed information about that:
https://wiki.squid-cache.org/Features/SmpScale#How_to_configure_SMP_Squid_for_top_performance.3F


> ***Why do you have 10 rock caches of various sizes? [ to be honest , i
> saw in many websites that it should be like this from the smallest to
> the bigest with diff size, i tought it should serve from small size pool
> to high ]

IMHO, you should stop reading those web pages :-). There is no general
need to segregate objects by sizes, especially when you are using the
same slot size for all cache_dirs. Such segregation may be necessary in
some special cases, but we have not yet established that your case is
special.


> *****How many independent disk spindles (or equivalent) do you have? [ i
> have one raid 5 ssd disks , used by the 10 rock cache dir]

Do not use RAID. If possible, use one rock cache_dir per SSD disk. The
only reason this may not be possible, AFAICT, is if you want to cache
more (per SSD disk) than a single Squid cache_dir can hold, but I would
not worry about overcoming that limit at the beginning. If you want to
know more about the limit, look for "33554431" in
http://www.squid-cache.org/mail-archive/squid-users/201312/0034.html
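For a rough sense of that limit, here is a back-of-the-envelope sketch (the path, size, and slot-size values are illustrative):

  # a rock cache_dir map holds at most 2^25 - 1 = 33,554,431 slots;
  # with a 16 KB slot size that gives roughly:
  #   33,554,431 slots x 16 KB/slot ~= 512 GB per cache_dir
  # a larger slot-size raises the byte ceiling, at the cost of wasting
  # space on small objects:
  cache_dir rock /cache/rock1 500000 slot-size=32768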


> ***How did you select the swap rate limits and timeouts for
> cache_dirs?[I took it also from online forum , can i leave it empty for
> both]

If you want a simple answer, then it is "yes". Unfortunately, there is
no simple correct answer to that question. To understand what is going
on and how to tune things, I recommend studying the Performance Tuning
section of https://wiki.squid-cache.org/Features/RockStore


> ****Do you see any ERRORs or WARNINGs in cache log?[NO error or warning
> found in cache]

Good. I assume you do see some regular messages in cache.log. Keep an
eye out for ERRORs and WARNINGs as you change settings.


HTH,

Alex.

Re: rock issue

patrick.mkhael
Dear Alex,

Kindly note that I have adjusted the configuration, in addition to reviewing the provided links.
First, I now have 3 disks with no RAID configuration, and each rock cache_dir has its own disk to write to.
Each disker and worker has its own process. I have also adjusted some values per the recommendations at https://wiki.squid-cache.org/Features/RockStore.

Below is the new config:

workers 3
cpu_affinity_map process_numbers=1,2,3,4,5,6 cores=1,2,3,4,5,6
cache_dir rock /rock1 200000 max-size=32000 swap-timeout=300 max-swap-rate=100
cache_dir rock /rock2 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
cache_dir rock /rock3 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
cache_mem 17 GB
maximum_object_size_in_memory 25 MB
maximum_object_size 1 GB
cache_miss_revalidate off
quick_abort_pct 95

This config is giving a 4% cache gain ratio.
In addition, as I mentioned before, if I take the same config, drop the workers and rock cache_dirs, and run the same traffic using aufs on one of the disks, I automatically get a 60% cache ratio. [My lab traffic is 250 Mb/s.]

Should rock give me the same performance as aufs?

For traffic of 1 Gb/s, is there a way to use aufs?

Thank you




Re: rock issue

Loučanský Lukáš
By my observation, when you set workers, Squid spawns multiple
processes, something like this (example: workers = 2, with 3 rock
cache_dir diskers):

squid parent -> worker 1, worker 2, disker 1, disker 2, disker 3, SMP
coordinator

The process names look like squid-1 or squid-disk-3 (note the process
number after the dash).

I have multiple rock cache_dirs with different slot sizes and max and
min sizes, plus an aufs cache_dir for each of my workers (I am aware of
the possible multiple copies of a cached object). But I use rock dirs
only up to a 4 MB max-size (an experimentally estimated size). Why do
you cap max-size at 32000? I have seen
https://wiki.squid-cache.org/Features/RockStore, but what about large
rock: https://wiki.squid-cache.org/Features/LargeRockStore ? You have
maximum_object_size 1 GB vs max-size=32000.

From the Store Directory stats I can clearly see that my 32 kB - 4 MB
rock cache_dir is being filled. So how do you compare your hit-rate
ratios? Do you cap object size in your aufs config as well? What do you
mean by "cache gain ratio"?

LL


Re: rock issue

Alex Rousskov
On 7/3/20 4:50 AM, patrick mkhael wrote:

> workers 3
> cpu_affinity_map process_numbers=1,2,3,4,5,6 cores=1,2,3,4,5,6
> cache_dir rock /rock1 200000 max-size=32000 swap-timeout=300 max-swap-rate=100
> cache_dir rock /rock2 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
> cache_dir rock /rock3 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
> cache_mem 17 GB
> maximum_object_size_in_memory 25 MB
> maximum_object_size 1 GB
> cache_miss_revalidate off
> quick_abort_pct 95


FYI: The combination of 1GB maximum_object_size and much smaller size
limits for objects in memory and disk caches does not make sense: There
is no cache to store a, say, 26MB object. If Squid lacks the
corresponding configuration "lint" check, somebody should add it.
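One way to make the limits consistent is sketched below; which option fits depends on whether the larger objects are worth caching:

  # Option A: let the rock dirs store large objects by dropping max-size
  cache_dir rock /rock1 200000

  # Option B: keep the per-dir cap and lower the global disk limit to match
  # maximum_object_size 32 KB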


> This config is giving 4% of cache gain ratio, 

> in addition as i already mentionned before if i take the same above
> config without worker and cach_dir with the same traffiic using aufs on
> one of the disks ,  i have automatically i har 60 % cache ratio.

When using AUFS, do you limit disk-cached object sizes to 32KB like you
do with rock? If not, then you should remove the max-size limit from
rock cache_dirs. Modern rock cache_dirs are capable of storing large
objects.

What kind of hit ratio do you get with rock if you do not limit
swap-rate and do not specify swap-timeout?

What kind of hit ratio do you get with rock if you use one worker, one
rock cache_dir, do not limit swap-rate, do not specify swap-timeout, and
start Squid with -N to disable SMP?
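That second test could use something like this minimal configuration (a sketch; the path and size are placeholders):

  # single worker, single rock cache_dir, no swap-rate limit or timeout
  workers 1
  cache_dir rock /rock1 200000
  cache_mem 17 GB

  # start Squid in the foreground with SMP disabled:
  #   squid -N -f /etc/squid/squid.conf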

As you can see, I am trying to understand whether the size limitation,
the rate limiting, or SMP problems explain the drop in hit ratio.


> Shoud rock give me the same performance as aufs ?

It is a difficult question to answer correctly (for me). The goal is for
rock performance to exceed that of (a)ufs, but I doubt we have reached
that goal in every environment that matters (including yours).

* In a non-SMP environment, I would expect similar hit ratios in most
cases, but I would not be surprised if there are significant exceptions.
Rock is focused on SMP support, and there are complexities/costs
associated with SMP. Rock is getting better, but there are some known
areas where rock cannot yet do what ufs (including aufs) can.

* In an SMP environment, the question is mostly meaningless because there
is no SMP support for ufs-based caches. Folks use squid.conf
preprocessor hacks to configure ufs-based caches in SMP mode, but those
setups usually violate HTTP and may cause serious problems. YMMV.


> for a traffic of 1 Gb/s , is there a way to use aufs ?

Before trying unsupported combinations of aufs and SMP, I would try to
understand why your hit ratio is so low with rock. The questions above
may be a good start in that investigation.


Cheers,

Alex.



Re: rock issue

patrick.mkhael
Dear Alex,

**What kind of hit ratio do you get with rock if you do not limit swap-rate and do not specify swap-timeout? [I also removed the max-size limit, as recommended. The gain ratio is at most 13%.]

**What kind of hit ratio do you get with rock if you use one worker, one rock cache_dir, do not limit swap-rate, do not specify swap-timeout, and start Squid with -N to disable SMP? [As recommended: only one rock cache_dir, no swap limits, executed with the -N option. The gain ratio is 7%.]

Additional info: I have a total of 24 GB of RAM, of which I use 15 GB as cache_mem, and the disks are formatted ext4 (mounted via fstab).

Thank you



From: Alex Rousskov <[hidden email]>
Sent: Saturday, July 4, 2020 3:40 AM
To: patrick mkhael <[hidden email]>; [hidden email] <[hidden email]>
Subject: Re: [squid-users] rock issue
 
On 7/3/20 4:50 AM, patrick mkhael wrote:

> workers 3
> cpu_affinity_map process_numbers=1,2,3,4,5,6 cores=1,2,3,4,5,6
> cache_dir rock /rock1 200000 max-size=32000 swap-timeout=300 max-swap-rate=100
> cache_dir rock /rock2 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
> cache_dir rock /rock3 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
> cache_mem 17 GB
> maximum_object_size_in_memory 25 MB
> maximum_object_size 1 GB
> cache_miss_revalidate off
> quick_abort_pct 95


FYI: The combination of 1GB maximum_object_size and much smaller size
limits for objects in memory and disk caches does not make sense: There
is no cache to store a, say, 26MB object. If Squid lacks the
corresponding configuration "lint" check, somebody should add it.


> This config is giving a 4% cache gain ratio.

> In addition, as I already mentioned before, if I take the same config
> above without the workers and rock cache_dir directives, with the same
> traffic, using aufs on one of the disks, I automatically get a 60%
> cache ratio.

When using AUFS, do you limit disk-cached object sizes to 32KB like you
do with rock? If not, then you should remove the max-size limit from
rock cache_dirs. Modern rock cache_dirs are capable of storing large
objects.
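A minimal sketch of that suggestion in squid.conf (paths and sizes are made up for illustration): drop max-size from the rock cache_dir and keep the object-size limits consistent with the caches that should hold those objects. Note that maximum_object_size should appear before the cache_dir line it is meant to govern.

```
# Hypothetical fragment: rock cache_dir without max-size, with
# memory/disk object-size limits that are mutually consistent.
maximum_object_size_in_memory 25 MB   # memory cache keeps smaller objects
maximum_object_size 1 GB              # largest object the disk cache stores
cache_dir rock /rock1 200000          # no max-size: large objects allowed
```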

What kind of hit ratio do you get with rock if you do not limit
swap-rate and do not specify swap-timeout?

What kind of hit ratio do you get with rock if you use one worker, one
rock cache_dir, do not limit swap-rate, do not specify swap-timeout, and
start Squid with -N to disable SMP?
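When comparing these test configurations, it helps to measure the hit ratio the same way each time. A hedged sketch of one way to do that, assuming Squid's default native access.log format (the result code, e.g. TCP_HIT/200, is the 4th field); the sample log lines below are fabricated for illustration:

```shell
# Compute the request hit ratio from a (sample) Squid access log.
cat > /tmp/access.sample <<'EOF'
1593600000.001    120 10.0.0.5 TCP_HIT/200 5120 GET http://example.com/a - HIER_NONE/- image/png
1593600000.002    340 10.0.0.5 TCP_MISS/200 9216 GET http://example.com/b - HIER_DIRECT/93.184.216.34 text/html
1593600000.003     15 10.0.0.6 TCP_MEM_HIT/200 2048 GET http://example.com/c - HIER_NONE/- text/css
1593600000.004    280 10.0.0.6 TCP_MISS/200 7168 GET http://example.com/d - HIER_DIRECT/93.184.216.34 text/html
EOF
# Count lines whose result code contains HIT (TCP_HIT, TCP_MEM_HIT, ...).
awk '{ total++; if ($4 ~ /HIT/) hits++ }
     END { printf "hit ratio: %d%%\n", 100 * hits / total }' /tmp/access.sample
# -> hit ratio: 50%
```

Running the same measurement against the real access.log before and after each configuration change makes the rock-vs-aufs comparison apples to apples.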

As you can see, I am trying to understand whether the size limitation,
the rate limiting, or SMP problems explain the drop in hit ratio.


> Should rock give me the same performance as aufs?

It is a difficult question to answer correctly (for me). The goal is for
rock performance to exceed that of (a)ufs, but I doubt we have reached
that goal in every environment that matters (including yours).

* In a non-SMP environment, I would expect similar hit ratios in most
cases, but I would not be surprised if there are significant exceptions.
Rock is focused on SMP support, and there are complexities/costs
associated with SMP. Rock is getting better, but there are some known
areas where rock cannot yet do what ufs (including aufs) can.

* In a SMP environment, the question is mostly meaningless because there
is no SMP support for ufs-based caches. Folks use squid.conf
preprocessor hacks to configure ufs-based caches in SMP mode, but those
setups usually violate HTTP and may cause serious problems. YMMV.
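For reference, the kind of preprocessor hack being warned against typically uses the ${process_number} macro to give each worker its own private ufs-based cache (sketch only; as noted above, such setups usually violate HTTP and are not recommended):

```
# NOT recommended: per-worker aufs caches via the ${process_number}
# macro. Each worker gets a private cache, so a hit in one worker's
# cache is invisible to the others.
workers 3
cache_dir aufs /cache/aufs${process_number} 100000 16 256
```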


> For a traffic of 1 Gb/s, is there a way to use aufs?

Before trying unsupported combinations of aufs and SMP, I would try to
understand why your hit ratio is so low with rock. The questions above
may be a good start in that investigation.


Cheers,

Alex.


> ------------------------------------------------------------------------
> *From:* Alex Rousskov <[hidden email]>
> *Sent:* Thursday, July 2, 2020 4:24 PM
> *To:* patrick mkhael <[hidden email]>;
> [hidden email] <[hidden email]>
> *Subject:* Re: [squid-users] rock issue
>  
> On 7/1/20 4:45 PM, patrick mkhael wrote:
>
>> ***Please note that you have 20 kids worth mapping (10 workers and 10
>> diskers), but you map only the first 10. [Since I did not get the point
>> of the diskers: as far as I understood, it should be like this (simple
>> example):]
>
>>> workers 2
>>> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
>>> cache_dir rock ...
>>> cache_dir rock ...
>
> The above looks OK. Each worker is a kid process. Each rock cache_dir is
> a kid process (we call them diskers).  If you have physical CPU cores to
> spare, give each kid process its own physical core. Otherwise, give each
> worker process its own physical core (if you can). Diskers can share
> physical cores with less harm because they usually do not consume many
> CPU cycles. The Squid wiki has more detailed information about that:
> https://wiki.squid-cache.org/Features/SmpScale#How_to_configure_SMP_Squid_for_top_performance.3F
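Applied to the original 10-worker, 10-cache_dir config, the complete mapping described above would look something like this (core assignments are illustrative; kids 1-10 are the workers and kids 11-20 the diskers, with diskers sharing the workers' cores here):

```
workers 10
# 20 kid processes in total: 10 workers plus one disker per rock
# cache_dir. Map all of them, not just the first 10.
cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20 cores=1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10
```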
>
>
>> ***Why do you have 10 rock caches of various sizes? [To be honest, I
>> saw on many websites that it should be like this, from the smallest to
>> the biggest with different sizes; I thought it would serve from a
>> small-size pool up to a large one.]
>
> IMHO, you should stop reading those web pages :-). There is no general
> need to segregate objects by sizes, especially when you are using the
> same slot size for all cache_dirs. Such segregation may be necessary in
> some special cases, but we have not yet established that your case is
> special.
>
>
>> *****How many independent disk spindles (or equivalent) do you have? [I
>> have one RAID 5 array of SSD disks, used by the 10 rock cache_dirs.]
>
> Do not use RAID. If possible, use one rock cache_dir per SSD disk. The
> only reason this may not be possible, AFAICT, is if you want to cache
> more (per SSD disk) than a single Squid cache_dir can hold, but I would
> not worry about overcoming that limit at the beginning. If you want to
> know more about the limit, look for "33554431" in
> http://www.squid-cache.org/mail-archive/squid-users/201312/0034.html
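For scale, a quick back-of-envelope calculation (assuming rock's default 16 KB slot size) shows what that slot-count limit means for a single rock cache_dir, roughly half a terabyte:

```shell
# 33554431 slots (2^25 - 1) times 16 KB per slot, in whole GB.
echo "$(( 33554431 * 16 / 1024 / 1024 )) GB"
# -> 511 GB
```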
>
>
>> ***How did you select the swap rate limits and timeouts for the
>> cache_dirs? [I took them from an online forum as well; can I leave both
>> unset?]
>
> If you want a simple answer, then it is "yes". Unfortunately, there is
> no simple correct answer to that question. To understand what is going
> on and how to tune things, I recommend studying the Performance Tuning
> section of https://wiki.squid-cache.org/Features/RockStore
>
>
>> ****Do you see any ERRORs or WARNINGs in the cache log? [No errors or
>> warnings found in cache.log.]
>
> Good. I assume you do see some regular messages in cache.log. Keep an
> eye for ERRORs and WARNINGs as you change settings.
>
>
> HTH,
>
> Alex.



Re: rock issue

Alex Rousskov
On 7/7/20 6:26 AM, patrick mkhael wrote:

> **What kind of hit ratio do you get with rock if you do not
> limit swap-rate and do not specify swap-timeout? [i also removed the max
> size as recomended], the gain ratio is max 13 %.

Noted, thank you.


> ​**What kind of hit ratio do you get with rock if you use one worker, one
> rock cache_dir, do not limit swap-rate, do not specify swap-timeout, and
> start Squid with -N to disable SMP? [ as recomended, only one rock
> cache_dir , no limit swap and excuted with -N option,the gain ration is 7%]

Was this 7% measured with max-size=32000 or without?


When using AUFS (without rock), do you limit disk-cached object sizes to
~32KB (max-size=32000)?


Finally, what is your Squid version?


Thank you,

Alex.




Re: rock issue

patrick.mkhael
Dear Alex,

Was this 7% measured with max-size=32000 or without? [I did not use the max-size option.]

When using AUFS (without rock), do you limit disk-cached object sizes to
~32KB (max-size=32000)?
[I use maximum_object_size_in_memory 250 MB and maximum_object_size 2 GB, which I also use with rock.]

Squid version: 4.8

Thank you
