Your cache is running out of filedescriptors


Your cache is running out of filedescriptors

Vieri
Hi,

I'm opening a new thread related to my previous post here: http://lists.squid-cache.org/pipermail/squid-users/2017-August/016233.html
I'm doing so because the subject is more specific now.

I'm obviously having trouble with file descriptors, since I'm getting the following message in the log on a regular basis:

WARNING! Your cache is running out of filedescriptors

Sometimes, but not always, I also get the message "WARNING: Consider increasing the number of ssl_crtd processes in your config file".

I've now increased the "children" values twice, but I'm still getting the same warnings and performance issues. Here are the current values:

external_acl_type bllookup ttl=86400 children-max=100 ...
sslcrtd_children 128 startup=20 idle=4

I also increased the nofile limit in ulimit:

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127521
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127521
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

If I run "lsof" I get a huge listing.
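
To get an actual count rather than eyeballing the listing, I believe something like this works (assuming the worker process is the one shown as "(squid-1)" in ps):

# count open descriptors held by the Squid worker process
ls /proc/$(pgrep -f squid-1 | head -n1)/fd | wc -l

The cache manager reports usage directly too, if squidclient is installed:

squidclient mgr:info | grep -i 'file desc'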

I could keep increasing the ulimit nofile and Squid "children" values, but I'd like to know whether I'm on the right path or not.

What do you suggest?

I appreciate any tip you can share.

Regards,

Vieri

Re: Your cache is running out of filedescriptors

Eliezer Croitoru
Hey,

Just so you would notice:
open files                      (-n) 4096

You should first make it at least 16384, if not more.
It's not harmful to start with 65535 and then see if the issue still persists or things get resolved.
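
To see where you currently stand, before and after changes, a quick check:

# current soft and hard open-file limits for this shell
ulimit -Sn
ulimit -Hn
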
Maybe the issue with the ssl_crtd is related to the FD issue but I'm not 100% sure.
What OS are you using?

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]



Re: Your cache is running out of filedescriptors

Vieri

________________________________
From: Eliezer Croitoru <[hidden email]>
>
> Just so you would notice:
> open files                      (-n) 4096
>
> You should first make it at least 16384, if not more.
> It's not harmful to start with 65535 and then see if the issue still persists or things get resolved.
> Maybe the issue with the ssl_crtd is related to the FD issue but I'm not 100% sure.
> What OS are you using?


Thanks for the tip Eliezer.

I'm using Gentoo Linux with the standard kernel and base system. I used to run the "hardened" version, but I recently had networking issues with it, so I moved away from it. I mention this because the ulimit values I reported (including "open files 4096") are already an increase over the standard Gentoo defaults; the original default was half that (2048). My guess is that this Gentoo flavor is meant for general use, especially desktops, whereas Gentoo Hardened (and other flavors) might be more server-oriented. I do NOT know yet whether the ulimit values in the hardened version are different.

I didn't realize the OS defaults were so restrictive, especially given that you say I can safely start with 65535 open files.


To make a long story short, I'll try raising the value to 65535. Would you suggest setting the same value for both soft and hard?
* soft nofile 65535
* hard nofile ?

Is a squid restart enough to apply the new limits, or do I need to reboot the system?

I also stumbled on the following directives in squid.conf.

client_lifetime defaults to 1 day; I might need to set it to a lower value. However, I don't see too many connections with:
# netstat -a -n | grep CLOSE_WAIT
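
If ss is available, it gives the same count a bit more directly:

# count TCP sockets stuck in CLOSE_WAIT
ss -tan state close-wait | tail -n +2 | wc -l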


The Squid documentation also suggests tuning these settings:
read_timeout, request_timeout, persistent_request_timeout and quick_abort

A bit risky... but I'll take a look at them.

Vieri

Re: Your cache is running out of filedescriptors

Vieri
I'd like to add a note to my previous message.

I set the following values, and I'll see what happens:

* hard nofile 65535
* soft nofile 16384


("hard" being a top limit a non-root process cannot exceed)

So I take it that Squid will start with a default of 16384, but will be able to increase up to 65535 if it needs to.


By the way, restarting squid from the same shell (ssh) does not apply the new values.
I had to log back into the system.

There's probably a ulimit command line option to apply the values without logging out.

Anyway, the squid log confirms the new value.


Also, I guess it would be preferable to reboot the server if I wanted the same limits to apply to all running processes (or restart each and every service/daemon one by one).
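
Side note: util-linux also ships prlimit, which I believe can change the limits of an already running process without a restart (the PID here is hypothetical):

# set soft=16384 and hard=65535 nofile limits on a live process
prlimit --pid 1234 --nofile=16384:65535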


I also set the following directives.
For local caching proxy:

client_lifetime 480 minutes


For reverse proxies:

client_lifetime 60 minutes


I left the other options alone:
read_timeout, request_timeout, persistent_request_timeout and quick_abort

Vieri

Re: Your cache is running out of filedescriptors

Amos Jeffries
On 31/08/17 19:26, Vieri wrote:

> I'd like to add a note to my previous message.
>
> I set the following values, and I'll see what happens:
>
> * hard nofile 65535
> * soft nofile 16384
>
>
> ("hard" being a top limit a non-root process cannot exceed)
>
> So I take it that Squid will start with a default of 16384, but will be able to increase up to 65535 if it needs to.

Squid starts with a fixed amount, either the limit you built it with or
the max_filedescriptors config directive value. It will auto-shrink if
those limits are too large for the system/ulimit settings, but will not
auto-grow beyond.
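
If you want Squid itself to ask for more at startup, a squid.conf line like the following should do it (the value is illustrative):

# request this many descriptors; Squid shrinks the number if the
# system/ulimit settings are lower, but will not grow past it
max_filedescriptors 16384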

>
> By the way, restarting squid from the same shell (ssh) does not apply the new values.
> I had to re-log into the system.
>
> There's probably a ulimit command line option to apply the values without logging out.
>

"ulimit -n ..." should do it.


Amos

Re: Your cache is running out of filedescriptors

Eliezer Croitoru
Hey Vieri,

The hard and soft limits are designed to give a specific service or user some administrative "space" between the expected and the unexpected.
I assume it's also meant for other things, like giving a specific user or service a basic (soft) limit plus a higher ceiling that won't cripple the whole machine (hard).
But I don't remember the exact idea behind it.

For a sysadmin it's mainly a matter of restrictions, so the hardware doesn't wear out and resources aren't abused.

As Amos suggested, since squid is almost always started with root privileges, you can add the specific limit you want to the openrc or system startup service/script, so it applies on every start/restart of the squid service.
You will need to use:
ulimit -Hn 65535

first and after this apply the lower limit:
ulimit -Hn 16384

Just note that, depending on the hardware and the network, you should monitor the server and its services to make sure squid is not being used to abuse your connection.

I hope the above will help you.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]




Re: Your cache is running out of filedescriptors

Eliezer Croitoru
Sorry, a typo:

You will need to use:
ulimit -Hn 65535

first and after this apply the lower limit:
ulimit -n 16384

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]




Re: Your cache is running out of filedescriptors

Vieri
________________________________
From: Eliezer Croitoru <[hidden email]>

>
> You will need to use:
> ulimit -Hn 65535
>
> first and after this apply the lower limit:
> ulimit -n 16384
>

> As Amos suggested, since squid is almost always started with root privileges, you can add the specific limit you want
> to the openrc or system startup service/script, so it applies on every start/restart of the squid service.

Many thanks to both of you.

I created 01_squid.conf in /etc/security/limits.d/ with:
* hard nofile 65535
* soft nofile 16384

I then restarted squid, and haven't had any issues for the last 24+ hours.

I was hoping to change that file to:
squid hard nofile 65535
squid soft nofile 16384


However, correct me if I'm wrong: it seems you're saying that Squid adjusts the limit as the "root" user, not as the squid user.

I have these main processes:

root      5690  0.0  0.0  87444  5676 ?        Ss   Aug31   0:00 /usr/sbin/squid -YC -f /etc/squid/squid.conf -n squid
squid     5694  2.9  3.3 1188628 1109564 ?     S    Aug31  55:06 (squid-1) -YC -f /etc/squid/squid.conf -n squid
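
A quick way to check what the running worker actually got, using the worker PID from the listing above:

# soft and hard nofile limits of the live worker process
grep 'open files' /proc/5694/limits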


So, is it preferable to use the squid user name in limits.conf's "domain" field, or should I use your method by modifying my openrc init script?

BTW my system is Gentoo, and here's what I can read in the default openrc init script:

# Maximum file descriptors squid can open is determined by:
# a basic default of N=1024
#  ... altered by ./configure --with-filedescriptors=N
#  ... overridden on production by squid.conf max_filedescriptors (if,
#  and only if, setrlimit() RLIMIT_NOFILE is able to be built+used).
# Since we do not configure hard coded # of filedescriptors anymore,
# there is no need for ulimit calls in the init script.
# Use max_filedescriptors in squid.conf instead.


... and here's the start function:

start() {
    checkconfig || return 1
    checkpath -d -q -m 0750 -o squid:squid /run/${SVCNAME}
    ebegin "Starting ${SVCNAME} (service name ${SVCNAME//[^[:alnum:]]/})"
    KRB5_KTNAME="${SQUID_KEYTAB}" /usr/sbin/squid ${SQUID_OPTS} -f /etc/squid/${SVCNAME}.conf -n ${SVCNAME//[^[:alnum:]]/}
    eend $? && sleep 1
}


The thing is, if Gentoo's default hard ulimit is x, then I can't just set max_filedescriptors to a value >x in squid.conf. It simply won't work. Or will it?
When squid starts up as root, can it increase the limit via setrlimit() to whatever value is in max_filedescriptors, even if "ulimit -Ha" shows a lower value for nofile?


These are the defaults on my system:

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127512
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127512
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


# ulimit -Ha
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127512
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127512
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


So, if I were to use your method I guess I would need to modify the init script's start() function like this:

start() {
    [...]
    ulimit -Hn 65535
    ulimit -n 16384
    ebegin "Starting ${SVCNAME} (service name ${SVCNAME//[^[:alnum:]]/})"
    KRB5_KTNAME="${SQUID_KEYTAB}" /usr/sbin/squid ${SQUID_OPTS} -f /etc/squid/${SVCNAME}.conf -n ${SVCNAME//[^[:alnum:]]/}

[...]

Vieri

Re: Your cache is running out of filedescriptors

Eliezer Croitoru
Squid uses the root limit and also the current environment limit.
The current environment limit can be raised beyond the hard limit only by the root user.
So your openrc script change should be the better fix, rather than giving root a high base limit everywhere.
But if someone has root privileges on the machine it doesn't matter much anyway.
Choose how you want to raise the limit:
either the basic limits.conf way or the openrc one.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]


