squid in container aborted on low memory server

George Xie
Hi all,

Squid version: 3.5.23-5+deb9u1
Docker version 18.09.3, build 774a1f4
Linux instance-4 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64 GNU/Linux

I have the following Squid config:

http_port 127.0.0.1:3128
cache deny all
access_log none

It runs in a container built from the following Dockerfile:

FROM debian:9
RUN apt update && \
    apt install --yes squid
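
I build and start the container roughly like this (a sketch; the image
and container names are placeholders, with Squid running in the
foreground):

docker build -t squid-test .
docker run --name squid-test squid-test squid -N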

The total memory of the host server is very low: only 592 MB, with about
370 MB free.
If I start Squid in the container, it aborts immediately.

Error messages in /var/log/squid/cache.log:

FATAL: xcalloc: Unable to allocate 1048576 blocks of 392 bytes!

Squid Cache (Version 3.5.23): Terminated abnormally.
CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
Maximum Resident Size: 47168 KB

The failing allocation captured with strace -f -e trace=memory:

[pid   920] mmap(NULL, 411176960, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)

It appears that Squid (or glibc) tries to allocate 392 MB of memory,
which is more than the host's 370 MB of free memory.
But I don't think Squid needs that much: I have another running Squid
instance which uses less than 200 MB of memory.
The oddest thing is that if I run Squid on the host (also Debian 9)
directly, not in the container, it starts and runs normally.

Am I doing something wrong here?

Xie Shi

Re: squid in container aborted on low memory server

Amos Jeffries
On 4/03/19 5:39 pm, George Xie wrote:

> hi all:
>
> Squid version: 3.5.23-5+deb9u1
> Docker version 18.09.3, build 774a1f4
> Linux instance-4 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27)
> x86_64 GNU/Linux
>
> I have the following squid config:
>
>
>     http_port 127.0.0.1:3128
>     cache deny all
>     access_log none
>

What is it exactly that you think this is doing in regards to Squid
memory needs?


> [...]
> it appears that squid (or glibc) tries to allocate 392m memory, which is
> larger than host free memory 370m.
> but I guess squid don't need that much memory, I have another running
> squid instance, which only uses < 200m memory.

No doubt it is configured to use less memory, for example by reducing
the default memory cache size.


> the oddest thing is if I run squid on the host (also Debian 9) directly,
> not in the container, squid could start and run as normal.
>

Linux typically allows RAM over-allocation, which works okay so long as
there is sufficient swap space and enough time between memory uses to
do the swap in/out.
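
Whether over-allocation is permitted is controlled by the kernel's
overcommit policy; a rough check on the host (assuming a stock Debian
kernel):

# 0 = heuristic overcommit (the default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory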

Amos

Re: squid in container aborted on low memory server

George Xie
> On 4/03/19 5:39 pm, George Xie wrote:

> > [...]
> What is it exactly that you think this is doing in regards to Squid
> memory needs?
>

Sorry, I don't get your question.
 
> > [...]
> > but I guess squid don't need that much memory, I have another running
> > squid instance, which only uses < 200m memory.
> No doubt it is configured to use less memory. For example by reducing
> the default memory cache size.
>

That running Squid instance has the same config.
 
> > the oddest thing is if I run squid on the host (also Debian 9) directly,
> > not in the container, squid could start and run as normal.
> >
> Linux typically allows RAM over-allocation. Which works okay so long as
> there is sufficient swap space and there is time between memory usage to
> do the swap in/out process.
> Amos

Swap is disabled on the host server, and in the container as well.

After all, I still wonder why Squid would try to claim 392 MB of memory
if it doesn't need that much.

Xie Shi


Re: squid in container aborted on low memory server

Alex Rousskov
On 3/3/19 9:39 PM, George Xie wrote:

> Squid version: 3.5.23-5+deb9u1

>     http_port 127.0.0.1:3128
>     cache deny all
>     access_log none

Unfortunately, this configuration wastes RAM: Squid is not yet smart
enough to understand that you do not want any caching and may allocate
256+ MB of memory cache plus supporting indexes. To correct that default
behavior, add this:

      cache_mem 0
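
Putting it together, a minimal proxy-only configuration based on your
quoted settings would look like this (a sketch):

      http_port 127.0.0.1:3128
      cache deny all
      cache_mem 0
      access_log none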

Furthermore, older Squids, possibly including your no-longer-supported
version, may allocate shared memory indexes where none are needed. That
might explain why you see your Squid allocating a 392 MB table.

If you want to know what is going on for sure, then configure malloc to
dump core on allocation failures and post a stack trace leading to that
allocation failure so that we know _what_ Squid was trying to allocate
when it ran out of RAM.
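
One way to do that (a sketch; the binary, config, and core file paths
depend on your build and core_pattern settings):

      # allow core dumps, then run Squid in the foreground until it aborts
      ulimit -c unlimited
      squid -N -f /etc/squid/squid.conf

      # load the resulting core into gdb and print the stack
      gdb /usr/sbin/squid core
      (gdb) bt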


HTH,

Alex.



Re: squid in container aborted on low memory server

Matus UHLAR - fantomas
>On 3/3/19 9:39 PM, George Xie wrote:
>> Squid version: 3.5.23-5+deb9u1

Debian 9, currently stable, is soon to be replaced by Debian 10, which
contains squid 4.4.

>>     http_port 127.0.0.1:3128
>>     cache deny all
>>     access_log none

On 04.03.19 09:34, Alex Rousskov wrote:
>Unfortunately, this configuration wastes RAM: Squid is not yet smart
>enough to understand that you do not want any caching and may allocate
>256+ MB of memory cache plus supporting indexes. To correct that default
>behavior, add this:
>
>      cache_mem 0

This should help the most.

>Furthermore, older Squids, possibly including your no-longer-supported
>version

It's supported, just not by the Squid developers. Many software
distributions try to support software for longer than just a few weeks
or months, e.g. for a whole multi-year release cycle.

>might explain why you see your Squid allocating a 392 MB table.
>
>If you want to know what is going on for sure, then configure malloc to
>dump core on allocation failures and post a stack trace leading to that
>allocation failure so that we know _what_ Squid was trying to allocate
>when it ran out of RAM.
--
Matus UHLAR - fantomas, [hidden email] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The early bird may get the worm, but the second mouse gets the cheese.

Re: squid in container aborted on low memory server

George Xie
> To correct that default
> behavior, add this:
>   cache_mem 0

Thanks for your advice, but I have actually tried this option before
and found no difference. Besides that, I have also tried
`memory_pools off`.

> Furthermore, older Squids, possibly including your no-longer-supported
> version, may allocate shared memory indexes where none are needed. That
> might explain why you see your Squid allocating a 392 MB table.

That's fair; I will give squid 4.4 a try later.

> If you want to know what is going on for sure, then configure malloc to
> dump core on allocation failures and post a stack trace leading to that
> allocation failure so that we know _what_ Squid was trying to allocate
> when it ran out of RAM.

I hope the following backtrace is helpful:

(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007ffff562e42a in __GI_abort () at abort.c:89
#2  0x0000555555728eb5 in fatal_dump (message=0x555555e764e0 <xcalloc::msg> "xcalloc: Unable to allocate 1048576 blocks of 392 bytes!\n") at fatal.cc:113
#3  0x0000555555a09837 in xcalloc (n=1048576, sz=sz@entry=392) at xalloc.cc:90
#4  0x00005555558a3d0a in comm_init () at comm.cc:1206
#5  0x0000555555789104 in SquidMain (argc=<optimized out>, argv=0x7fffffffed48) at main.cc:1481
#6  0x000055555568a48b in SquidMainSafe (argv=<optimized out>, argc=<optimized out>) at main.cc:1261
#7  main (argc=<optimized out>, argv=<optimized out>) at main.cc:1254


Xie Shi



Re: squid in container aborted on low memory server

George Xie
More detail on the backtrace:

(gdb) up
#4  0x00005555558a3d0a in comm_init () at comm.cc:1206
1206        fd_table =(fde *) xcalloc(Squid_MaxFD, sizeof(fde));
(gdb) p Squid_MaxFD
$1 = 1048576
(gdb) p sizeof(fde)
$2 = 392

It seems Squid_MaxFD is way too large, and its value comes directly from
the ulimit:

# ulimit -n
1048576
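
That matches the numbers in the fatal error exactly:

# 1048576 fd_table entries * 392 bytes per fde
$ echo $((1048576 * 392))
411041792

(the mmap seen in strace asked for 411176960 bytes, i.e. this table plus
a little allocator overhead)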

Therefore, I tried adding this option:

max_filedesc 4096

Now Squid works and takes only ~50 MB of memory.
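
I believe Squid also logs the usable descriptor count in its startup
banner, so the new limit can be double-checked in cache.log (the exact
wording may vary by version):

grep 'file descriptors available' /var/log/squid/cache.log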
Thanks very much for your help!

Xie Shi


Re: squid in container aborted on low memory server

Amos Jeffries
On 4/03/19 9:45 pm, George Xie wrote:

> > [...]
> > What is it exactly that you think this is doing in regards to Squid
> > memory needs?
>
> Sorry, I don't get your question.

I was asking to see what you thought was going on with those settings.

As Alex already pointed out, "cache deny all" does not reduce Squid's
memory needs in any way. It just makes that 256 MB of RAM a pointless
allocation.

So, if you actually do not want the proxy caching anything, then
disabling cache_mem (setting it to 0 as per Alex's response) would be
the best course of action before you go any further.

Or, if you *do* want caching and were only trying to disable it to test
the memory issue, then your test was wrong and produces an incorrect
conclusion. Just reducing cache_mem would be best for that case: set it
to a value that reasonably fits this container and see if the proxy
runs okay.


...

> > [...]
>
> That running Squid instance has the same config.

Then something odd is going on between the two. They should indeed show
the same behaviour (either both work, or both hit the same error).

Whatever the issue is, it is being triggered by the large blocks of RAM
allocated by a default Squid. The easiest one to modify is cache_mem.


> [...]
>
> Swap is disabled on the host server, and in the container as well.
>
> After all, I still wonder why Squid would try to claim 392 MB of
> memory if it doesn't need that much.

Squid thinks it does. All client traffic is denied caching by that
"deny all", BUT there are internally generated items which also use the
cache. So the 256 MB default RAM cache is allocated, with only those
few small things being put in it.

You could set it to '0' or to some small value, and the allocation size
should go down accordingly.
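
For example (pick whatever size reasonably fits this container):

    # keep only a small memory cache for internally generated objects
    cache_mem 8 MB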


That said, every bit of client traffic headed towards the proxy uses a
variable amount of memory, and at peak times Squid may need to allocate
large blocks.

So disabling swap entirely on the server is not a great idea. It just
moves the error and shutdown to peak traffic times, when they are least
wanted.


Amos

Re: squid in container aborted on low memory server

Alex Rousskov
On 3/4/19 9:45 PM, George Xie wrote:

> It seems Squid_MaxFD is way too large, and its value comes directly
> from the ulimit:
>
> # ulimit -n
> 1048576
>
> Therefore, I tried adding this option:
>
> max_filedesc 4096
>
> Now Squid works and takes only ~50 MB of memory.
> Thanks very much for your help!

Glad you figured it out!

Alex.



Re: squid in container aborted on low memory server

George Xie
Thanks for your detailed reply, Amos.

Now I have found the culprit: a large file descriptor limit. The Docker
CE for Debian package sets this limit (RLIMIT_NOFILE) in the container
to 1048576, so Squid sets Squid_MaxFD to 1048576 and then allocates
1048576 * 392 bytes = 392 MB of memory for `fd_table`.
On the host, RLIMIT_NOFILE is only 1024.
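
An alternative to capping it in squid.conf would be to lower the limit
on the Docker side instead; a sketch, assuming the stock Docker CLI
(the image name is a placeholder):

# cap the container's RLIMIT_NOFILE (soft:hard) at start time
docker run --ulimit nofile=4096:4096 my-squid-image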

> So, if you actually do not want the proxy caching anything, then
> disabling the cache_mem (set it to 0 as per Alex response) would be the
> best choice of action before you go any further.

In fact, I had tried this option before, but it did not help on its
own. Still, it is good advice, and I will add it to my proxy-only
Squid config.

> So disabling swap entirely on the server is not a great idea. It just
> moves the error and shutdown to happen at peak traffic times when it is
> least wanted.

But I guess this is common in the "cloud" era, e.g. on Google Compute
Engine.


Xie Shi

Re: squid in container aborted on low memory server

Alex Rousskov
On 3/6/19 11:25 AM, George Xie wrote:

>> So disabling swap entirely on the server is not a great idea. It just
>> moves the error and shutdown to happen at peak traffic times when it is
>> least wanted.

> but I guess this is common in the "cloud" era, eg: Google Compute Engine.

Moreover, in many production environments "swapping during peak traffic
times" is worse than "shutting down during peak traffic times". YMMV.

Alex.



_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users