squid workers question


squid workers question

Matus UHLAR - fantomas
Hello,

I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie version)

(I know it's old, but I prefer using distribution-provided software unless it
has a real problem the distribution isn't able to fix)

- does this version have known memory leaks?
http://www.squid-cache.org/Versions/v3/3.5/ChangeLog.txt
shows some leaks fixed, but they all seem to be related to features we don't
use (certificates, Surrogate capability), unless the:

"Fix memory leak of HttpRequest objects"
that is fixed in 3.5.16 applies to 3.4 too.


I configured rock store (for smaller files) and (later) standard aufs for others:

cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
#cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769

are those correct values? (bug 3411 says something about 256B metadata)


logs said this:

2017/03/02 18:32:15 kid1| /var/spool/squid3 exists
...
2017/03/02 18:32:18 kid3| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2017/03/02 18:32:18 kid2| Swap maxSize 1048576 + 262144 KB, estimated 100824 objects

- do I get it right that kid1 is the Master, kid2 is the disker for rock
   store and kid3 is the single worker process?


After the first start I noticed a crash:

2017/03/02 18:32:18 kid3| Max Mem  size: 262144 KB [shared]
2017/03/02 18:32:18 kid2| Max Mem  size: 262144 KB [shared]
2017/03/02 18:32:18 kid3| Max Swap size: 0 KB
2017/03/02 18:32:18 kid1| WARNING: disk-cache maximum object size is too large for mem-cache: 16384.00 KB > 32.00 KB
2017/03/02 18:32:18 kid2| Max Swap size: 1048576 KB
2017/03/02 18:32:18 kid3| Using Least Load store dir selection
2017/03/02 18:32:18 kid3| Set Current Directory to /var/spool/squid3
FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cache_mem.shm): (2) No such file or directory

Squid Cache (Version 3.4.8): Terminated abnormally.
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-var.spool.squid3.rock.shm): (2) No such file or directory

Squid Cache (Version 3.4.8): Terminated abnormally.


... this is what happened in http://bugs.squid-cache.org/show_bug.cgi?id=3411

- restart with "workers 1" worked, but isn't that the default?
   or was the crash caused by something else?
(will try to replicate)



--
Matus UHLAR - fantomas, [hidden email] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Fucking windows! Bring Bill Gates! (Southpark the movie)
_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: squid workers question

Amos Jeffries
Administrator
On 10/03/2017 3:21 a.m., Matus UHLAR - fantomas wrote:

> Hello,
>
> I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie
> version)
>
> (I know it's old, but I prefer using distribution-provided SW unless it has
> real problem distribution isn't able to fix)
>
> - does this version have known memory leaks?
> http://www.squid-cache.org/Versions/v3/3.5/ChangeLog.txt
> shows some leaks fixed but they all seem to be related to something we
> don't
> use (certificated, Surrogate capability), unless the:
>
> "Fix memory leak of HttpRequest objects" that is fixed in 3.5.16 applies
> to 3.4 too.
>

IIRC that does, and there were some issues with CONNECT exceeding
configured limits.

The Bug 3553 issue
<http://www.squid-cache.org/Versions/v3/3.5/changesets/squid-3.5-13903.patch>
can also cause nasty issues on a busy proxy, as the cache disk overflows
from too-slow purging.


>
> I configured rock store (for smaller files) and (later) standard aufs
> for others:
>
> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>
> are those correct values? (bug 3411 says something about 256B metadata)
>

Those 256 bytes will matter for Squid-3.4. It may be worthwhile adjusting
for them anyway.

>
> logs said this:
>
> 2017/03/02 18:32:15 kid1| /var/spool/squid3 exists
> ...
> 2017/03/02 18:32:18 kid3| Swap maxSize 0 + 262144 KB, estimated 20164
> objects
> 2017/03/02 18:32:18 kid2| Swap maxSize 1048576 + 262144 KB, estimated
> 100824 objects
>
> - do I get it right that kid1 is the Master, kid2 is the disker for rock
>   store and kid3 is the single worker process?
>

Alex may correct me here, but AFAIK: Master (the daemon manager) should
not have a number, the workers should be kid1 thru kid(N), the diskers
should be kid(N+1) thru kid(N+D), and the Coordinator should be kid(N+D+1).

I suspect the coordinator changes its kid number during config parse as
things like workers and diskers are discovered, if that matters. After
config the numbers are reliable.

There is also a bug that the FATAL messages do not indicate their
timestamp or what kid is applicable. So one has to guess somewhat based
on surrounding log info.

>
> After first start I noticed crash:
>
> 2017/03/02 18:32:18 kid3| Max Mem  size: 262144 KB [shared]
> 2017/03/02 18:32:18 kid2| Max Mem  size: 262144 KB [shared]
> 2017/03/02 18:32:18 kid3| Max Swap size: 0 KB
> 2017/03/02 18:32:18 kid1| WARNING: disk-cache maximum object size is too
> large for mem-cache: 16384.00 KB > 32.00 KB
> 2017/03/02 18:32:18 kid2| Max Swap size: 1048576 KB
> 2017/03/02 18:32:18 kid3| Using Least Load store dir selection
> 2017/03/02 18:32:18 kid3| Set Current Directory to /var/spool/squid3
> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cache_mem.shm):
> (2) No such file or directory
>
> Squid Cache (Version 3.4.8): Terminated abnormally.
> FATAL: Ipc::Mem::Segment::open failed to
> shm_open(/squid-var.spool.squid3.rock.shm): (2) No such file or directory
>
> Squid Cache (Version 3.4.8): Terminated abnormally.
>
>
> ... this happened in http://bugs.squid-cache.org/show_bug.cgi?id=3411
> however that
> - restart with "workers 1" worked, but isn't that the default?

Maybe. There is SMP and no SMP at all - both have 1 worker. It is
unclear to me which is the default and whether "workers 1" switches to
the other or not.


>   or was the creash caused by something else?
> (will try to replicate)

In my experience, those "No such file or directory" messages on the SHM
segments usually mean /dev/shm is not mounted.

(For completeness it could also mean the device name/path is too long on
MacOS.)
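A quick way to check for the usual Linux cause is a sketch like this (it only
inspects /proc/mounts; the exact mount options will vary by distribution):

```shell
# Sketch: verify that a filesystem is mounted at /dev/shm, which Squid's
# shm_open() calls rely on for its shared-memory segments on Linux.
if grep -q ' /dev/shm ' /proc/mounts; then
  echo "/dev/shm is mounted"
else
  echo "/dev/shm is NOT mounted"
fi
```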

Amos


Re: squid workers question

Alex Rousskov
In reply to this post by Matus UHLAR - fantomas
On 03/09/2017 07:21 AM, Matus UHLAR - fantomas wrote:

> I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie
> version)
>
> (I know it's old, but I prefer using distribution-provided SW unless it has
> real problem distribution isn't able to fix)

My answers are based on v5 code. (I know v5 is new, but I do not
remember v3.4 specifics and v5 answers will be valid for a longer time.)


> I configured rock store (for smaller files) and (later) standard aufs
> for others:
>
> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>
> are those correct values? (bug 3411 says something about 256B metadata)

Both rock and AUFS stores support large objects so there is no
requirement to split storage based on object sizes. Each store has
various pros and cons, but lack of large object support is not one of
the distinguishing characteristics.


> - do I get it right that kid1 is the Master, kid2 is the disker for rock
>   store and kid3 is the single worker process?

In SMP mode (which, BTW, AUFS store does not support), Master is not a
kid (it is a parent of all kids), the first N kids are workers, the next
D kids are diskers, and the last kid is Coordinator. Please see the
following wiki section for more details.

   http://wiki.squid-cache.org/Features/SmpScale#Terminology

If possible, avoid relying on this specific numbering scheme because
mapping kid numbers to kid roles is not a part of a stable Squid
interface IMO.


> - restart with "workers 1" worked, but isn't that the default?

Yes, "1" is the default value for the workers directive.


HTH,

Alex.


Re: squid workers question

Matus UHLAR - fantomas
In reply to this post by Amos Jeffries
>On 10/03/2017 3:21 a.m., Matus UHLAR - fantomas wrote:
>> I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie
>> version)

>> - does this version have known memory leaks?
>> http://www.squid-cache.org/Versions/v3/3.5/ChangeLog.txt
>> shows some leaks fixed but they all seem to be related to something we
>> don't
>> use (certificated, Surrogate capability), unless the:
>>
>> "Fix memory leak of HttpRequest objects" that is fixed in 3.5.16 applies
>> to 3.4 too.

On 10.03.17 05:00, Amos Jeffries wrote:
>IIRC that does, and there were some issues with CONNECT exceeding
>configured limits.
>
>The Bug 3553 issue
><http://www.squid-cache.org/Versions/v3/3.5/changesets/squid-3.5-13903.patch>
>can also cause nasty issues on busy proxy as the cache disk overflows
>from too-slow purging.

seems that my memory problem is somehow related to 4 GB of "2K Buffers",
whatever that means. This is cachemgr output:


2K Buffer (2048 B per object):
  allocated: 1986398 obj, 3972796 KB (high 3972796 KB, 0.00 hrs; 89.763 %Tot)
  in use:    1986390 obj, 3972780 KB (high 3972796 KB, 0.00 hrs; 100.000 %alloc)
  idle:      8 obj, 16 KB (high 198 KB)
  saved:     10736355 allocations (4.914 %cnt, 19.208 %vol); 0.009 /sec


>> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
>> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>>
>> are those correct values? (bug 3411 says something about 256B metadata)

>Those 256 Byte will matter for Squid-3.4.

doesn't it matter for later squid versions?

>It may be worthwhile adjusting for anyway.

changed to:

#cache_dir rock /var/spool/squid3/rock 1024 max-size=32512
#cache_dir aufs /var/spool/squid3 4096 16 256 min-size=32513


>> After first start I noticed crash:
>>
>> 2017/03/02 18:32:18 kid3| Max Mem  size: 262144 KB [shared]
>> 2017/03/02 18:32:18 kid2| Max Mem  size: 262144 KB [shared]
>> 2017/03/02 18:32:18 kid3| Max Swap size: 0 KB
>> 2017/03/02 18:32:18 kid1| WARNING: disk-cache maximum object size is too
>> large for mem-cache: 16384.00 KB > 32.00 KB
>> 2017/03/02 18:32:18 kid2| Max Swap size: 1048576 KB
>> 2017/03/02 18:32:18 kid3| Using Least Load store dir selection
>> 2017/03/02 18:32:18 kid3| Set Current Directory to /var/spool/squid3
>> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cache_mem.shm):
>> (2) No such file or directory
>>
>> Squid Cache (Version 3.4.8): Terminated abnormally.
>> FATAL: Ipc::Mem::Segment::open failed to
>> shm_open(/squid-var.spool.squid3.rock.shm): (2) No such file or directory
>>
>> Squid Cache (Version 3.4.8): Terminated abnormally.
>>
>>
>> ... this happened in http://bugs.squid-cache.org/show_bug.cgi?id=3411
>> however that
>> - restart with "workers 1" worked, but isn't that the default?
>
>Maybe. There is SMP and no SMP at all - both have 1 worker. It is
>unclear to me which is the default and whether "workers 1" switches to
>the other or not.
>
>
>>   or was the creash caused by something else?
>> (will try to replicate)
>
>In my experience that "No such X" messages on the SHM usually means
>/dev/shm is not mounted.

I believe it was mounted the whole time.


Re: squid workers question

Matus UHLAR - fantomas
In reply to this post by Alex Rousskov
>On 03/09/2017 07:21 AM, Matus UHLAR - fantomas wrote:
>> I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie
>> version)
>>
>> (I know it's old, but I prefer using distribution-provided SW unless it has
>> real problem distribution isn't able to fix)

On 09.03.17 09:07, Alex Rousskov wrote:
>My answers are based on v5 code. (I know v5 is new, but I do not
>remember v3.4 specifics and v5 answers will be valid for a longer time.)

OK

>> I configured rock store (for smaller files) and (later) standard aufs
>> for others:
>>
>> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
>> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>>
>> are those correct values? (bug 3411 says something about 256B metadata)

>Both rock and AUFS stores support large objects so there is no
>requirement to split storage based on object sizes. Each store has
>various pros and cons, but lack of large object support is not one of
>the distinguishing characteristics.

I thought the con of *ufs stores is inefficient storage of small files,
while rock is inefficient with big files...

>> - do I get it right that kid1 is the Master, kid2 is the disker for rock
>>   store and kid3 is the single worker process?
>
>In SMP mode (which, BTW, AUFS store does not support),

could it crash squid instead of complaining?

> Master is not a
>kid (it is a parent of all kids), the first N kids are workers, the next
>D kids are diskers, and the last kid is Coordinator. Please see the
>following wiki section for more details.
>
>   http://wiki.squid-cache.org/Features/SmpScale#Terminology

I read that prior to posting, but I was wondering where those kids come from.

>If possible, avoid relying on this specific numbering scheme because
>mapping kid numbers to kid roles is not a part of a stable Squid
>interface IMO.
>
>
>> - restart with "workers 1" worked, but isn't that the default?
>
>Yes, "1" is the default value for the workers directive.

and this is why I wonder why we have three kids, both when "workers" is
commented out and when it is set to 1.


Re: squid workers question

Amos Jeffries
Administrator
In reply to this post by Matus UHLAR - fantomas
On 10/03/2017 5:14 a.m., Matus UHLAR - fantomas wrote:

>> On 10/03/2017 3:21 a.m., Matus UHLAR - fantomas wrote:
>>> I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie
>>> version)
>
>>> - does this version have known memory leaks?
>>> http://www.squid-cache.org/Versions/v3/3.5/ChangeLog.txt
>>> shows some leaks fixed but they all seem to be related to something we
>>> don't
>>> use (certificated, Surrogate capability), unless the:
>>>
>>> "Fix memory leak of HttpRequest objects" that is fixed in 3.5.16 applies
>>> to 3.4 too.
>
> On 10.03.17 05:00, Amos Jeffries wrote:
>> IIRC that does, and there were some issues with CONNECT exceeding
>> configured limits.
>>
>> The Bug 3553 issue
>> <http://www.squid-cache.org/Versions/v3/3.5/changesets/squid-3.5-13903.patch>
>>
>> can also cause nasty issues on busy proxy as the cache disk overflows
>> from too-slow purging.
>
> seems that my memory problem is somehow related to 4g of "2K Buffers"
> whatever that means. This is cachrmgr output:
>
>
> 2K Buffer (2048 B per object):
>   allocated: 1986398 obj, 3972796 KB (high 3972796 KB, 0.00 hrs; 89.763 %Tot)
>   in use:    1986390 obj, 3972780 KB (high 3972796 KB, 0.00 hrs; 100.000 %alloc)
>   idle:      8 obj, 16 KB (high 198 KB)
>   saved:     10736355 allocations (4.914 %cnt, 19.208 %vol); 0.009 /sec
>

Ah. So anything that is using a generic 2KB of memory. Tricky to track
down :-(.

I would look at the next few entries to see if there is a good clue
about what subsystem might be worth a closer look (largest number of things
active, or anything else not being released). Mostly it is I/O using the
various Buffers, but some other things use them as well.


>
>>> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
>>> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>>>
>>> are those correct values? (bug 3411 says something about 256B metadata)
>
>> Those 256 Byte will matter for Squid-3.4.
>
> doesn't it for later squid versions?

Nope :-). The Squid-3.5 'large rock' feature adds slots as needed to fit the
extra metadata bytes, so 32KB is no longer an absolute limit.
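For reference, a hedged squid.conf sketch of what that allows in 3.5+ (the
values below are illustrative, not recommendations):

```
# Illustrative only: 3.5 'large rock' stores big objects across multiple
# slots, so max-size may exceed 32 KB; slot-size sets the per-slot bytes.
cache_dir rock /var/spool/squid3/rock 1024 max-size=262144 slot-size=16384
```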

Amos


Re: squid workers question

Amos Jeffries
Administrator
In reply to this post by Matus UHLAR - fantomas
On 10/03/2017 5:19 a.m., Matus UHLAR - fantomas wrote:

>> On 03/09/2017 07:21 AM, Matus UHLAR - fantomas wrote:
>>> I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie
>>> version)
>>>
>>> (I know it's old, but I prefer using distribution-provided SW unless
>>> it has
>>> real problem distribution isn't able to fix)
>
> On 09.03.17 09:07, Alex Rousskov wrote:
>> My answers are based on v5 code. (I know v5 is new, but I do not
>> remember v3.4 specifics and v5 answers will be valid for a longer time.)
>
> OK
>
>>> I configured rock store (for smaller files) and (later) standard aufs
>>> for others:
>>>
>>> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
>>> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>>>
>>> are those correct values? (bug 3411 says something about 256B metadata)
>
>> Both rock and AUFS stores support large objects so there is no
>> requirement to split storage based on object sizes. Each store has
>> various pros and cons, but lack of large object support is not one of
>> the distinguishing characteristics.
>
> I thought the cons of *ufs/disks is ineffective storage of small files,
> while rock is ineffective with big files...
>

Yes, *efficiency*, not lack of support.

[except in the case of your 3.4, where rock explicitly lacks large-object
support].

>>> - do I get it right that kid1 is the Master, kid2 is the disker for rock
>>>   store and kid3 is the single worker process?
>>
>> In SMP mode (which, BTW, AUFS store does not support),
>
> could it crash squid instead of complaining?

Are you talking about when AUFS is used in SMP mode? It is not that
simple: there are ways to configure it to be used by a single worker
(the "if" directive, or the ${process_number} macro) which are fine and
work well for certain version-specific situations.
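For illustration, one such configuration might look like this (a sketch only,
untested here; the "if"/endif conditional and ${process_number} macro are
described on the SmpScale wiki page, and the paths and numbers are made up):

```
# Sketch: pin an AUFS cache_dir to a single worker so only that one
# process ever opens it. Values are illustrative.
workers 2
if ${process_number} = 2
cache_dir aufs /var/spool/squid3 8192 16 256
endif
```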


Amos


Re: squid workers question

Alex Rousskov
In reply to this post by Matus UHLAR - fantomas
On 03/09/2017 09:19 AM, Matus UHLAR - fantomas wrote:
> On 09.03.17 09:07, Alex Rousskov wrote:
>> Both rock and AUFS stores support large objects so there is no
>> requirement to split storage based on object sizes. Each store has
>> various pros and cons, but lack of large object support is not one of
>> the distinguishing characteristics.

> I thought the cons of *ufs/disks is ineffective storage of small files,
> while rock is ineffective with big files...

While this assertion is true in some environments for some definitions
of "ineffective", I would avoid such generalizations: if your goal is to
optimize overall Squid performance, then there are a lot more factors
that you should take into account before deciding whether it is a
good idea to split cache_dirs based on object sizes.


>>> - do I get it right that kid1 is the Master, kid2 is the disker for rock
>>>   store and kid3 is the single worker process?
>>
>> In SMP mode (which, BTW, AUFS store does not support),
>
> could it crash squid instead of complaining?

In SMP mode, SMP-unaware stores crash, corrupt, and/or duplicate
responses, depending on your luck and squid.conf. They do not complain.


>> Master is not a
>> kid (it is a parent of all kids), the first N kids are workers, the next
>> D kids are diskers, and the last kid is Coordinator. Please see the
>> following wiki section for more details.
>>   http://wiki.squid-cache.org/Features/SmpScale#Terminology



>> Yes, "1" is the default value for the workers directive.

> and this is why I wonder we have three kids, both when "workers"
> commented out and when set to 1.

In my earlier paragraph, N is the value of the workers directive. There
are other kids (D diskers and one Coordinator). D is the number of rock
cache_dirs. Both workers and diskers need shared memory and IPC.

If you want to turn off SMP while using rock store, start Squid with -N.
When started with -N, there will be a single process playing all four
roles (master, worker, disker, and Coordinator).
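The kid arithmetic above can be sketched as follows (a toy calculation, not
Squid code; the values match the configuration discussed in this thread):

```shell
# Kids = N workers + D diskers + 1 Coordinator; the master is not a kid.
workers=1     # N: value of the "workers" directive (the default)
rock_dirs=1   # D: one rock cache_dir in the config under discussion
kids=$((workers + rock_dirs + 1))
echo "kids=$kids"   # matches the kid1..kid3 seen in the log
```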


HTH,

Alex.


Re: squid workers question

Matus UHLAR - fantomas
>> On 09.03.17 09:07, Alex Rousskov wrote:
>>> Both rock and AUFS stores support large objects so there is no
>>> requirement to split storage based on object sizes. Each store has
>>> various pros and cons, but lack of large object support is not one of
>>> the distinguishing characteristics.
>
>> I thought the cons of *ufs/disks is ineffective storage of small files,
>> while rock is ineffective with big files...
>
>While this assertion is true in some environments for some definitions
>of "ineffective", I would avoid such generalizations: If your goal is to
>optimize overall Squid performance, then there are a lot more factors
>that you should take into account before deciding whether it is is a
>good idea to split cache_dirs based on object sizes.

aha, there's much to learn yet.
any idea where I should start?


>>> Master is not a
>>> kid (it is a parent of all kids), the first N kids are workers, the next
>>> D kids are diskers, and the last kid is Coordinator. Please see the
>>> following wiki section for more details.
>>>   http://wiki.squid-cache.org/Features/SmpScale#Terminology

>>> Yes, "1" is the default value for the workers directive.
>
>> and this is why I wonder we have three kids, both when "workers"
>> commented out and when set to 1.
>
>In my earlier paragraph, N is the value of the workers directive. There
>are other kids (D diskers and one Coordinator). D is the number of rock
>cache_dirs. Both workers and diskers need shared memory and IPC.
>
>If you want to turn off SMP while using rock store, start Squid with -N.
>When started with -N, there will be a single process playing all four
>roles (master, worker, disker, and Coordinator).

Will running with "workers 1" avoid this issue while using separate
processes for diskers?


Re: squid workers question

Alex Rousskov
On 03/09/2017 09:54 AM, Matus UHLAR - fantomas wrote:
>>>> Master is not a
>>>> kid (it is a parent of all kids), the first N kids are workers, the
>>>> next D kids are diskers, and the last kid is Coordinator. Please see the
>>>> following wiki section for more details.
>>>>   http://wiki.squid-cache.org/Features/SmpScale#Terminology

>>>> Yes, "1" is the default value for the workers directive.

>>> and this is why I wonder we have three kids, both when "workers"
>>> commented out and when set to 1.

>> In my earlier paragraph, N is the value of the workers directive. There
>> are other kids (D diskers and one Coordinator). D is the number of rock
>> cache_dirs. Both workers and diskers need shared memory and IPC.
>>
>> If you want to turn off SMP while using rock store, start Squid with -N.
>> When started with -N, there will be a single process playing all four
>> roles (master, worker, disker, and Coordinator).

> Will running with "workers 1" avoid this issue while using separate
> processes for diskers?

No. Disker processes need shared memory to communicate with worker(s).

Alex.


Re: squid workers question

Matus UHLAR - fantomas
In reply to this post by Amos Jeffries
On 10.03.17 05:30, Amos Jeffries wrote:

>> seems that my memory problem is somehow related to 4g of "2K Buffers"
>> whatever that means. This is cachrmgr output:
>>
>>
>> 2K Buffer (2048 B per object):
>>   allocated: 1986398 obj, 3972796 KB (high 3972796 KB, 0.00 hrs; 89.763 %Tot)
>>   in use:    1986390 obj, 3972780 KB (high 3972796 KB, 0.00 hrs; 100.000 %alloc)
>>   idle:      8 obj, 16 KB (high 198 KB)
>>   saved:     10736355 allocations (4.914 %cnt, 19.208 %vol); 0.009 /sec
>>
>
>Ah. So anything that is using a generic 2KB of memory. Tricky to track
>down :-(.
>
>I would look at the next few entries to see if there is a good clue
>about what system might be worth a closer look (largest amount of things
>active, or anything else not being released). Mostly it is I/O using the
>various Buffer's, but some other things do as well.

will be glad if this helps...

mem_node (4136 B per object):
  allocated: 62447 obj, 252228 KB (high 274188 KB, 2.37 hrs; 5.699 %Tot)
  in use:    62423 obj, 252131 KB (high 274188 KB, 2.37 hrs; 99.962 %alloc)
  idle:      24 obj, 97 KB (high 26545 KB)
  saved:     6917600 allocations (3.166 %cnt, 24.994 %vol); 0.005 /sec
cbdata MemBuf (10) (64 B per object):
  allocated: 2015362 obj, 125961 KB (high 125961 KB, 0.01 hrs; 2.846 %Tot)
  in use:    2015348 obj, 125960 KB (high 125961 KB, 0.01 hrs; 99.999 %alloc)
  idle:      14 obj, 1 KB (high 103 KB)
  saved:     3800466 allocations (1.739 %cnt, 0.212 %vol); 0.004 /sec
Short Strings (40 B per object):
  allocated: 670389 obj, 26188 KB (high 28203 KB, 2.51 hrs; 0.592 %Tot)
  in use:    670008 obj, 26173 KB (high 28203 KB, 2.51 hrs; 99.943 %alloc)
  idle:      381 obj, 15 KB (high 1423 KB)
  saved:     113557901 allocations (51.974 %cnt, 3.968 %vol); 0.078 /sec


>>>> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
>>>> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>>>>
>>>> are those correct values? (bug 3411 says something about 256B metadata)
>>
>>> Those 256 Byte will matter for Squid-3.4.
>>
>> doesn't it for later squid versions?
>
>Nope :-). Squid-3.5 'large rock' feature adds slots as needed to fit the
>extra meta bytes. So 32KB is no longer an absolute limit.

will it waste a whole slot, or does it already support smaller chunks?


Re: squid workers question

Matus UHLAR - fantomas
In reply to this post by Alex Rousskov
>On 03/09/2017 09:54 AM, Matus UHLAR - fantomas wrote:
>>>>> Master is not a
>>>>> kid (it is a parent of all kids), the first N kids are workers, the
>>>>> next D kids are diskers, and the last kid is Coordinator. Please see the
>>>>> following wiki section for more details.
>>>>>   http://wiki.squid-cache.org/Features/SmpScale#Terminology
>
>>>>> Yes, "1" is the default value for the workers directive.
>
>>>> and this is why I wonder we have three kids, both when "workers"
>>>> commented out and when set to 1.
>
>>> In my earlier paragraph, N is the value of the workers directive. There
>>> are other kids (D diskers and one Coordinator). D is the number of rock
>>> cache_dirs. Both workers and diskers need shared memory and IPC.
>>>
>>> If you want to turn off SMP while using rock store, start Squid with -N.
>>> When started with -N, there will be a single process playing all four
>>> roles (master, worker, disker, and Coordinator).
>
>> Will running with "workers 1" avoid this issue while using separate
>> processes for diskers?


On 09.03.17 10:01, Alex Rousskov wrote:
>No. Disker processes need shared memory to communicate with worker(s).

I should rephrase my question:

since aufs is incompatible with SMP and rock store runs separate diskers,
is running aufs together with rock store safe when not running with "-N"?

Thanks

Re: squid workers question

Alex Rousskov
In reply to this post by Matus UHLAR - fantomas
On 03/09/2017 10:18 AM, Matus UHLAR - fantomas wrote:

>>>>> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
>>>>> #cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769
>>>>>
>>>>> are those correct values? (bug 3411 says something about 256B
>>>>> metadata)
>>>
>>>> Those 256 Byte will matter for Squid-3.4.
>>>
>>> doesn't it for later squid versions?
>>
>> Nope :-). Squid-3.5 'large rock' feature adds slots as needed to fit the
>> extra meta bytes. So 32KB is no longer an absolute limit.
>
> will it waste whole slot or does it already support smaller chunks?

Just like a regular file system used by AUFS, Rock always uses the whole
slot, usually wasting some space. Please note that smaller
slots/pages/chunks also waste various resources (including space in some
cases!).
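As a worked example of that whole-slot waste (illustrative numbers only, not
tied to any particular configuration):

```shell
# A 100 KB object stored in 16 KB rock slots occupies ceil(100/16) = 7
# slots, i.e. 112 KB on disk, wasting 12 KB in the final, partly-filled slot.
obj_kb=100; slot_kb=16
slots=$(( (obj_kb + slot_kb - 1) / slot_kb ))   # ceiling division
waste_kb=$(( slots * slot_kb - obj_kb ))
echo "slots=$slots waste_kb=$waste_kb"
```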

Alex.


Re: squid workers question

Alex Rousskov
In reply to this post by Matus UHLAR - fantomas
On 03/09/2017 10:24 AM, Matus UHLAR - fantomas wrote:

>> On 03/09/2017 09:54 AM, Matus UHLAR - fantomas wrote:
>>>>>> Master is not a
>>>>>> kid (it is a parent of all kids), the first N kids are workers, the
>>>>>> next D kids are diskers, and the last kid is Coordinator. Please
>>>>>> see the
>>>>>> following wiki section for more details.
>>>>>>   http://wiki.squid-cache.org/Features/SmpScale#Terminology
>>
>>>>>> Yes, "1" is the default value for the workers directive.
>>
>>>>> and this is why I wonder we have three kids, both when "workers"
>>>>> commented out and when set to 1.
>>
>>>> In my earlier paragraph, N is the value of the workers directive. There
>>>> are other kids (D diskers and one Coordinator). D is the number of rock
>>>> cache_dirs. Both workers and diskers need shared memory and IPC.
>>>>
>>>> If you want to turn off SMP while using rock store, start Squid with
>>>> -N.
>>>> When started with -N, there will be a single process playing all four
>>>> roles (master, worker, disker, and Coordinator).
>>
>>> Will running with "workers 1" avoid this issue while using separate
>>> processes for diskers?
>
>
> On 09.03.17 10:01, Alex Rousskov wrote:
>> No. Disker processes need shared memory to communicate with worker(s).
>
> I should rephrase my question:
>
> since aufs is incompatible with SMP and rockstore runs separate diskers,

To clarify: Rock store is supported with or without separate disker
processes. Rock is SMP-aware, SMP-capable, and nonSMP-capable. All other
stores are SMP-unaware and nonSMP-capable.


> is running aufs with rock store safe, when not running with "-N"?

Running a combination of AUFS and Rock stores in non-SMP mode may be
safe but primary Rock store developers do not test this combination.

Running AUFS in SMP mode is unsafe by default but some admins use
configuration hacks to make it work for them. Primary Store developers
do not test such configurations (or AUFS in general).

Running a combination of AUFS and Rock stores in SMP mode is crazy.
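
Summing up the three combinations above as a hedged squid.conf/startup sketch (paths taken from the original post, comments are mine):

```
# 1) AUFS + rock in non-SMP mode: may be safe, but untested by the
#    primary Rock store developers. Start with "squid -N" so a single
#    process plays all roles (master, worker, disker, Coordinator).
cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
cache_dir aufs /var/spool/squid3 8192 16 256 min-size=32769

# 2) AUFS alone in SMP mode: unsafe by default (per-worker hacks exist).
# 3) AUFS + rock in SMP mode: do not do this.
```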

Alex.


Re: squid workers question

Matus UHLAR - fantomas
>On 03/09/2017 10:24 AM, Matus UHLAR - fantomas wrote:
>> is running aufs with rock store safe, when not running with "-N"?

On 09.03.17 11:02, Alex Rousskov wrote:
>Running AUFS in SMP mode is unsafe by default but some admins use
>configuration hacks to make it work for them. Primary Store developers
>do not test such configurations (or AUFS in general).

according to the "workers" docs, "workers 0" is the same as "squid -N" and
"workers 1" is non-SMP mode

so, with "workers 1", both aufs and rock should work properly, with rock
using a separate process, correct?

>Running a combination of AUFS and Rock stores in SMP mode is crazy.

I get that as in SMP mode, ufs should be used?

--
Matus UHLAR - fantomas, [hidden email] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
10 GOTO 10 : REM (C) Bill Gates 1998, All Rights Reserved!

Re: squid workers question

Eliezer Croitoru
In reply to this post by Amos Jeffries
Just to add: one of my current Squid test labs is a combination of:
1 haproxy as a load balancer (or a custom LB I wrote)
2+ squid instances with the PROXY protocol enabled, each with its own ufs/aufs cache_dir

The idea was to verify whether different instances could use htcp/icp as the "signaling" or "communication" channel instead of shared memory. At the moment I cannot continue testing because of a bug in sibling proxy communication.

I would be happy if that bug were resolved; then the test might show whether a couple of Squid instances running on the same machine can utilize aufs/ufs better (taking into account that this whole setup has some overhead) in some deployments.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]


-----Original Message-----
From: squid-users [mailto:[hidden email]] On Behalf Of Amos Jeffries
Sent: Thursday, March 9, 2017 6:36 PM
To: [hidden email]
Subject: Re: [squid-users] squid workers question

On 10/03/2017 5:19 a.m., Matus UHLAR - fantomas wrote:

>> On 03/09/2017 07:21 AM, Matus UHLAR - fantomas wrote:
>>> I have installed squid 3.4.8 on linux 3.16/64bit (debian 8 / jessie
>>> version)
>>>
>>> (I know it's old, but I prefer using distribution-provided SW unless
>>> it has real problem distribution isn't able to fix)
>
> On 09.03.17 09:07, Alex Rousskov wrote:
>> My answers are based on v5 code. (I know v5 is new, but I do not
>> remember v3.4 specifics and v5 answers will be valid for a longer
>> time.)
>
> OK
>
>>> I configured rock store (for smaller files) and (later) standard
>>> aufs for others:
>>>
>>> cache_dir rock /var/spool/squid3/rock 1024 max-size=32768 #cache_dir
>>> aufs /var/spool/squid3 8192 16 256 min-size=32769
>>>
>>> are those correct values? (bug 3411 says something about 256B
>>> metadata)
>
>> Both rock and AUFS stores support large objects so there is no
>> requirement to split storage based on object sizes. Each store has
>> various pros and cons, but lack of large object support is not one of
>> the distinguishing characteristics.
>
> I thought the cons of *ufs/disks is ineffective storage of small
> files, while rock is ineffective with big files...
>

Yes, *efficiency*, not lack of support.

[except in the case of your 3.4, where rock does explicitly lack that support].


>>> - do I get it right that kid1 is the Master, kid2 is the disker for rock
>>>   store and kid3 is the single worker process?
>>
>> In SMP mode (which, BTW, AUFS store does not support),
>
> could it crash squid instead of complaining?

Are you talking about when AUFS is used in SMP mode? It is not that simple: there are ways to configure it to be used by a single worker (the "if" directive, or the ${process_number} macro), which is fine and works well for certain version-specific situations.
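
A hedged sketch of the kind of hack Amos mentions, untested and version-specific; the per-worker paths are made up for illustration:

```
# squid.conf sketch: give each worker its own private aufs dir so no
# two workers ever share one UFS-based store. An admin hack, not a
# recommendation; primary Store developers do not test this.
workers 2
cache_dir aufs /var/spool/squid3/w${process_number} 8192 16 256
```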


Amos


Re: squid workers question

Alex Rousskov
In reply to this post by Matus UHLAR - fantomas
On 03/10/2017 02:38 AM, Matus UHLAR - fantomas wrote:

>> On 03/09/2017 10:24 AM, Matus UHLAR - fantomas wrote:
>>> is running aufs with rock store safe, when not running with "-N"?
>
> On 09.03.17 11:02, Alex Rousskov wrote:
>> Running AUFS in SMP mode is unsafe by default but some admins use
>> configuration hacks to make it work for them. Primary Store developers
>> do not test such configurations (or AUFS in general).
>
> according to the "workers" docs, "workers 0" is the same as "squid -N" and
> "workers 1" is non-SMP mode

Sorry, but that 2010 documentation is outdated. It was written before
Rock store, a 2011 feature that changed what "SMP mode" means. This is
my fault. Here is a replacement draft that I was working on until wiki
went down:

> NAME: workers
> DEFAULT: 1
> Number of main Squid processes or "workers" to fork and maintain.
>
> In a typical setup, each worker listens on all http_port(s) and
> proxies requests without talking to other workers. Depending on
> configuration, other Squid processes (e.g., rock store "diskers")
> may also participate in request processing. All such Squid processes
> are collectively called "kids".
>
> Setting workers to 0 disables kids creation and is similar to
> running "squid -N ...". A positive value starts that many workers.
>
> When multiple concurrent kids are in use, Squid is said to work in
> "SMP mode". Some Squid features (e.g., ufs-based cache_dirs) are not
> SMP-aware and should not or cannot be used in SMP mode.
>
> See http://wiki.squid-cache.org/Features/SmpScale for details.

The final version will probably move and extend the terminology-related
text to the SMP section preamble -- it is kind of wrong to talk about
diskers when documenting workers. Improvements and constructive
suggestions welcomed!


> so, with "workers 1", both aufs and rock should work properly, with rock
> using separate process, correct?

There are several ways to interpret your question, but the most likely
interpretation leads to the "incorrect" answer: Without -N, a
combination of "workers 1" and at least one "cache_dir rock" enables
SMP. Do not use ufs-based cache_dirs in SMP mode.
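
In other words, a hedged example of a configuration that enables SMP mode without -N, kept rock-only on the disk side (path reused from the original post):

```
# "workers 1" plus a rock cache_dir => SMP mode (worker, disker,
# Coordinator kids). Keep UFS-based stores out of such a config.
workers 1
cache_dir rock /var/spool/squid3/rock 1024 max-size=32768
# No ufs/aufs/diskd cache_dir lines here: they are not SMP-aware.
```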


>> Running a combination of AUFS and Rock stores in SMP mode is crazy.

> I get that as in SMP mode, ufs should be used?

I do not know what you mean by "that" but ufs is not SMP-aware. Thus,
ufs (including all ufs-based stores such as aufs and diskd):

* should not be used in SMP mode at all and
* must not be used in SMP mode in combination with rock stores.


HTH,

Alex.


Re: squid workers question

Matus UHLAR - fantomas
On 10.03.17 08:52, Alex Rousskov wrote:

>Sorry, but that 2010 documentation is outdated. It was written before
>Rock store, a 2011 feature that changed what "SMP mode" means. This is
>my fault. Here is a replacement draft that I was working on until wiki
>went down:
>
>> NAME: workers
>> DEFAULT: 1
>> Number of main Squid processes or "workers" to fork and maintain.
>>
>> In a typical setup, each worker listens on all http_port(s) and
>> proxies requests without talking to other workers. Depending on
>> configuration, other Squid processes (e.g., rock store "diskers")
>> may also participate in request processing. All such Squid processes
>> are collectively called "kids".
>>
>> Setting workers to 0 disables kids creation and is similar to
>> running "squid -N ...". A positive value starts that many workers.

The default of 1 (only) creates kids for each rock store configured.

>> When multiple concurrent kids are in use, Squid is said to work in
>> "SMP mode". Some Squid features (e.g., ufs-based cache_dirs) are not
>> SMP-aware and should not or cannot be used in SMP mode.
>>
>> See http://wiki.squid-cache.org/Features/SmpScale for details.

very nice, thanks. However this is not meant for the wiki, but for:
http://www.squid-cache.org/Doc/config/workers/

maybe those pages could be updated (all but the 3.2 versions are the same).


>The final version will probably move and extend the terminology-related
>text to the SMP section preamble -- it is kind of wrong to talk about
>diskers when documenting workers. Improvements and constructive
>suggestions welcomed!

compared to current version I'd change it to:

        1: start one main Squid process daemon (default)
            "no SMP" when rock store is not used
            "SMP" when rock store in use

>> so, with "workers 1", both aufs and rock should work properly, with rock
>> using separate process, correct?
>
>There are several ways to interpret your question, but the most likely
>interpretation leads to the "incorrect" answer: Without -N, a
>combination of "workers 1" and at least one "cache_dir rock" enables
>SMP. Do not use ufs-based cache_dirs in SMP mode.

That explains it. thanks.


--
Matus UHLAR - fantomas, [hidden email] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
99 percent of lawyers give the rest a bad name.

Re: squid workers question

Alex Rousskov
On 03/20/2017 09:20 AM, Matus UHLAR - fantomas wrote:

> On 10.03.17 08:52, Alex Rousskov wrote:
>> Sorry, but that 2010 documentation is outdated. It was written before
>> Rock store, a 2011 feature that changed what "SMP mode" means. This is
>> my fault. Here is a replacement draft that I was working on until wiki
>> went down:
>>
>>> NAME: workers
>>> DEFAULT: 1
>>>     Number of main Squid processes or "workers" to fork and maintain.
>>>
>>>     In a typical setup, each worker listens on all http_port(s) and
>>>     proxies requests without talking to other workers. Depending on
>>>     configuration, other Squid processes (e.g., rock store "diskers")
>>>     may also participate in request processing. All such Squid processes
>>>     are collectively called "kids".
>>>
>>>     Setting workers to 0 disables kids creation and is similar to
>>>     running "squid -N ...". A positive value starts that many workers.

> The default of 1 (only) creates kids for each rock store configured.

What makes you think that? I believe "workers 1" in the presence of rock
cache_dirs should create one kid to handle HTTP transaction _plus_ one
kid for each rock cache_dir.
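
That expectation matches the startup log quoted at the top of the thread; a hedged summary of what to look for (kid numbering may vary by version):

```
# With "workers 1" and one rock cache_dir, expect three kids in
# cache.log: one worker, one disker, and the Coordinator, e.g.
#   kid1 ... kid2 ... kid3 ...
# plus the master process, which is their parent and not a kid.
```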


>>>     When multiple concurrent kids are in use, Squid is said to work in
>>>     "SMP mode". Some Squid features (e.g., ufs-based cache_dirs) are not
>>>     SMP-aware and should not or cannot be used in SMP mode.
>>>
>>>     See http://wiki.squid-cache.org/Features/SmpScale for details.

> very nice, thanks. However this is not meant for the wiki, but for:
> http://www.squid-cache.org/Doc/config/workers/

To be more precise, the text is meant for src/cf.data.pre, from which
squid.conf.documented (and the Doc/Config pages) are generated. Not
sure why you say "However" though.


> maybe that pages could be updated (all but 3.2 versions are the same).

Once the above worker documentation changes are polished and committed
to the Squid repository, the affected generated pages/files will be
updated automatically.

The documentation for earlier versions may never be updated though -- it
depends on whether the changes are going to be ported and committed to
the code branches corresponding to those earlier versions.


>> The final version will probably move and extend the terminology-related
>> text to the SMP section preamble -- it is kind of wrong to talk about
>> diskers when documenting workers. Improvements and constructive
>> suggestions welcomed!
>
> compared to current version I'd change it to:
>
>     1: start one main Squid process daemon (default)
>            "no SMP" when rock store is not used
>            "SMP" when rock store in use

I agree that we should add something like this as a common-case example
of general rules. Thank you.

Alex.


Re: squid workers question

Matus UHLAR - fantomas
>> On 10.03.17 08:52, Alex Rousskov wrote:
>>> Sorry, but that 2010 documentation is outdated. It was written before
>>> Rock store, a 2011 feature that changed what "SMP mode" means. This is
>>> my fault. Here is a replacement draft that I was working on until wiki
>>> went down:
>>>
>>>> NAME: workers
>>>> DEFAULT: 1
>>>>     Number of main Squid processes or "workers" to fork and maintain.
>>>>
>>>>     In a typical setup, each worker listens on all http_port(s) and
>>>>     proxies requests without talking to other workers. Depending on
>>>>     configuration, other Squid processes (e.g., rock store "diskers")
>>>>     may also participate in request processing. All such Squid processes
>>>>     are collectively called "kids".
>>>>
>>>>     Setting workers to 0 disables kids creation and is similar to
>>>>     running "squid -N ...". A positive value starts that many workers.

>On 03/20/2017 09:20 AM, Matus UHLAR - fantomas wrote:
>> The default of 1 (only) creates kids for each rock store configured.

On 20.03.17 12:32, Alex Rousskov wrote:
>What makes you think that? I believe "workers 1" in the presence of rock
>cache_dirs should create one kid to handle HTTP transaction _plus_ one
>kid for each rock cache_dir.

That's exactly what I meant, for inclusion in your paragraph.
Should I replace "kids" with "one extra kid"?
And should I replace "(only)" with "however"?

>>>>     When multiple concurrent kids are in use, Squid is said to work in
>>>>     "SMP mode". Some Squid features (e.g., ufs-based cache_dirs) are not
>>>>     SMP-aware and should not or cannot be used in SMP mode.
>>>>
>>>>     See http://wiki.squid-cache.org/Features/SmpScale for details.
>
>> very nice, thanks. However this is not meant for the wiki, but for:
>> http://www.squid-cache.org/Doc/config/workers/
>
>To be more precise, the text is meant for src/cf.data.pre, from which
>squid.conf.documented (and Doc/Config pages) are generated from. Not
>sure why you say "However" though.

You mentioned you were working on the draft until the wiki went down.
I understood the paragraph as a replacement for the "workers" documentation,
not as something to be written to the wiki...

>> maybe that pages could be updated (all but 3.2 versions are the same).
>
>Once the above worker documentation changes are polished and committed
>to the Squid repository, the affected generated pages/files will be
>updated automatically.
>
>The documentation for earlier versions may never be updated though -- it
>depends on whether the changes are going to be ported and committed to
>the code branches corresponding to those earlier versions.

it's up to the release team.
I would recommend updating the docs on the web to avoid issues for people
using older squid versions, e.g. in enterprise environments

>>> The final version will probably move and extend the terminology-related
>>> text to the SMP section preamble -- it is kind of wrong to talk about
>>> diskers when documenting workers. Improvements and constructive
>>> suggestions welcomed!
>>
>> compared to current version I'd change it to:
>>
>>     1: start one main Squid process daemon (default)
>>            "no SMP" when rock store is not used
>>            "SMP" when rock store in use
>
>I agree that we should add something like this as a common-case example
>of general rules. Thank you.

if we replace the current paragraph with your proposed one, I have a
proposed change at the top

--
Matus UHLAR - fantomas, [hidden email] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Eagles may soar, but weasels don't get sucked into jet engines.