FATAL: shm_open(/squid-ssl_session_cache.shm)


FATAL: shm_open(/squid-ssl_session_cache.shm)

Aaron Turner
So I'm trying to set up a config much like the one documented here for
squid v3.5.26:
https://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster

However, the frontend, which is bumping the SSL connections, is throwing this error:

2017/08/25 17:11:40 kid1| Set Current Directory to /var/spool/squid
2017/08/25 17:11:40 kid1| Starting Squid Cache version 3.5.26 for
x86_64-redhat-linux-gnu...
2017/08/25 17:11:40 kid1| Service Name: squid
2017/08/25 17:11:40 kid1| Process ID 13817
2017/08/25 17:11:40 kid1| Process Roles: worker
2017/08/25 17:11:40 kid1| With 16384 file descriptors available
2017/08/25 17:11:40 kid1| Initializing IP Cache...
2017/08/25 17:11:40 kid1| DNS Socket created at [::], FD 12
2017/08/25 17:11:40 kid1| DNS Socket created at 0.0.0.0, FD 13
2017/08/25 17:11:40 kid1| Adding domain lab.ppops.net from /etc/resolv.conf
2017/08/25 17:11:40 kid1| Adding nameserver 10.21.43.21 from /etc/resolv.conf
2017/08/25 17:11:40 kid1| Adding nameserver 10.21.44.254 from /etc/resolv.conf
2017/08/25 17:11:40 kid1| Adding nameserver 10.21.44.255 from /etc/resolv.conf
2017/08/25 17:11:40 kid1| helperOpenServers: Starting 5/10 'ssl_crtd' processes
2017/08/25 17:11:40 kid1| storeDirWriteCleanLogs: Starting...
2017/08/25 17:11:40 kid1|   Finished.  Wrote 0 entries.
2017/08/25 17:11:40 kid1|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory

Squid Cache (Version 3.5.26): Terminated abnormally.
CPU Usage: 0.033 seconds = 0.023 user + 0.010 sys
Maximum Resident Size: 52512 KB
Page faults with physical i/o: 0

I've verified that /dev/shm is mounted and based on the list of files
in there, clearly squid is able to create files there, so it's not a
Linux/shm config issue.
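For reference, the check is straightforward because on Linux these segments are ordinary files under /dev/shm; this is a diagnostic sketch that assumes the default `squid-` segment name prefix:

```shell
# Shared-memory segments are plain files under /dev/shm on Linux.
# If the master process created the session cache segment, it shows up here.
ls -l /dev/shm | grep -i squid || echo "no squid segments present"
```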

my frontend.conf:

# BEGIN CONFIG
http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=100MB cert=/etc/squid/ssl_cert/myCA.pem
ssl_bump bump all
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
sslcrtd_children 10
sslproxy_session_cache_size 100 MB

# add user authentication and similar options here
http_access allow manager localhost
http_access deny manager

# add backends - one line for each additional worker you configured
# NOTE how the port number matches the kid number
cache_peer localhost parent 4002 0 carp login=PASS name=backend-kid2
cache_peer localhost parent 4003 0 carp login=PASS name=backend-kid3

#you want the frontend to have a significant cache_mem
cache_mem 10 GB

# logs go to your own log directory, e.g. /var/log/squid
access_log /var/log/squid/frontend.access.log
cache_log /var/log/squid/frontend.cache.log

# the frontend requires a different name from the backend(s)
visible_hostname frontend.company.com

forwarded_for transparent

#END CONFIG

So here's the funny thing: this worked fine until I enabled
ssl-bumping on the backends (I was debugging some problems and tried
enabling it on a whim). That didn't solve my problem, so I disabled
ssl bumping on the backends, and that's when this SHM error started
happening on my frontend. Re-enabling ssl-bump on the backends fixes
the SHM error, but I don't think that would be a correct config?

Seems like there's some stale state being left on the filesystem which
is causing this problem, but I'm at a loss to figure out where/what it
is.

--
Aaron Turner
https://synfin.net/         Twitter: @synfinatic
My father once told me that respect for the truth comes close to being
the basis for all morality.  "Something cannot emerge from nothing,"
he said.  This is profound thinking if you understand how unstable
"the truth" can be.  -- Frank Herbert, Dune
_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: FATAL: shm_open(/squid-ssl_session_cache.shm)

Alex Rousskov
On 08/25/2017 11:21 AM, Aaron Turner wrote:
> FATAL: Ipc::Mem::Segment::open failed to
> shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory

> I've verified that /dev/shm is mounted and based on the list of files
> in there, clearly squid is able to create files there, so it's not a
> Linux/shm config issue.

Yes, moreover, this is not a segment creation failure. This is a failure
to open a segment that should exist but is missing. That segment should
have been created by the master process, but since your config (ab)uses
SMP macros, I am guessing that depending on the configuration details,
the master process may not know that it needs to create that segment.

For the record, the same error happens in older Squids (including v3.5)
when there are two concurrent Squid instances running. However, I
speculate that you are suffering from a misconfiguration, not broken PID
file management here.
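The errno is the key detail: shm_open() without O_CREAT fails with ENOENT when no process has created the segment. A quick sketch with Python's stdlib shared-memory wrapper (which is backed by /dev/shm on Linux; the segment name here just mirrors Squid's for illustration) reproduces the same failure mode:

```python
from multiprocessing import shared_memory

# Attaching (create=False) to a segment that nobody created fails with
# errno 2 (ENOENT) -- the same error Squid's worker reports when it
# expects the master process to have created the segment already.
try:
    shared_memory.SharedMemory(name="squid-ssl_session_cache.shm", create=False)
except FileNotFoundError as e:
    print(e.errno)  # prints 2
```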


> So here's the funny thing... this worked fine until I enabled
> ssl-bumping on the backends (I was debugging some problems and on a
> whim I tried enabling it).  That didn't solve my problem and so I
> disabled ssl bumping on the backends.  And that's when this SHM error
> started happening with my frontend.   Re-enabling ssl-bump on the
> backends fixes the SHM error, but I don't think that would be a
> correct config?

This is one of the reasons folks should not abuse SMP Squid for
implementing CARP clusters IMHO -- the config on that wiki page is
conceptually wrong, even though it may work in some cases.

SMP macros are useful for simple, localized hacks like splitting
cache.log into worker-specific files or adding worker ID to access.log
entries. However, the more process-specific changes you introduce, the
higher the chances that Squid will get confused.

The overall principle is that all Squid processes should see the same
configuration. YMMV, but the number of places where SMP Squid relies on
that principle keeps growing...
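For illustration, the process-specific pattern in question is the wiki's ${process_number} conditional, which hands each kid a different effective configuration (a sketch only; the file names are hypothetical):

```
# Sketch of the SMP-macro pattern in question (paths hypothetical):
# kid 1 becomes the frontend and every other kid a backend, so the
# master process and the kids no longer see the same configuration.
if ${process_number} = 1
include /etc/squid/frontend.conf
else
include /etc/squid/backend.conf
endif
```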

Alex.

Re: FATAL: shm_open(/squid-ssl_session_cache.shm)

Aaron Turner
Thanks Alex.

So I guess what I'd like to know is: how does Squid handle a
multi-layer cache config with SSL bumping? For obvious performance
reasons, I don't want to bump the same connection twice. I'd much
rather have the first layer bump the connection and keep a memory
cache; on a miss, fall through to the slower disk cache or the
outbound network connection.

Thanks,
Aaron

Re: FATAL: shm_open(/squid-ssl_session_cache.shm)

Alex Rousskov
On 08/28/2017 10:27 AM, Aaron Turner wrote:

> So I guess what I'd like to know is how squid handles a multi-layer
> cache config with ssl bumping?

If you are asking how to SSL bump requests in one Squid worker and then
satisfy those bumped requests in another Squid worker (and/or another
Squid instance), then the answer is that you cannot do that because
Squid does not support exporting decrypted bumped requests (without
encrypting them) from a Squid worker.


> For obvious performance reasons, I
> don't want to bump the same connection twice.  Much rather have the
> first layer bump the connection and have a memory cache.  If that
> cache is a miss, then hit the slower disk cache/outbound network
> connection.

Your desires require the currently missing Squid code to export bumped
requests _and_ they clash with the current Squid Project policy of
prohibiting the export of bumped requests.

If performance is important, consider using SMP-aware rock cache_dirs
instead of multiple Squid instances (including hacks that emulate
multiple Squid instances in a single Squid instance by abusing SMP macros).
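A minimal sketch of that suggestion, with one squid.conf shared by all workers (sizes, paths, and worker count are illustrative, not tuned):

```
# One configuration, seen identically by every process.
workers 4

# Rock storage is SMP-aware: all workers share one on-disk cache
# (and its shared-memory index) instead of each running a private one.
cache_dir rock /var/spool/squid/rock 16384 max-size=32768

# Memory cache for hot objects; shared among workers where supported.
cache_mem 4 GB
```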


HTH,

Alex.



Re: FATAL: shm_open(/squid-ssl_session_cache.shm)

Aaron Turner
Fair enough.  I can understand why Squid would want to do that for
user security purposes.

Sounds like having a single layer/wide cache using the rock cache is
the way to go.  Probably would end up fixing a lot of issues I'm
seeing.

Re: FATAL: shm_open(/squid-ssl_session_cache.shm)

Alex Rousskov
On 08/28/2017 12:06 PM, Aaron Turner wrote:

> Sounds like having a single layer/wide cache using the rock cache is
> the way to go.  Probably would end up fixing a lot of issues I'm
> seeing.

Yes, but it will not fix all of them, and it will probably add a few new
ones.

You have to pick your poison, but, with a single configuration, you
would not be swimming against the current as much. With enough effort
and/or money, all problems can be fixed, but changing or circumventing
Project policies is often much harder or more expensive.

Alex.

