Bug: Missing MemObject::storeId value


Bug: Missing MemObject::storeId value

Aaron Turner
Version: 3.5.26 on CentOS 7.3 on AWS EC2 m3.xlarge and 2x 100GB EBS
volumes for rock cache.

Doing some basic system tests and we're seeing a bunch of errors like:

2017/09/22 22:43:15 kid1| Bug: Missing MemObject::storeId value
2017/09/22 22:43:15 kid1| mem_hdr: 0x7f169d0a2a70 nodes.start() 0x7f169c6cc9d0
2017/09/22 22:43:15 kid1| mem_hdr: 0x7f169d0a2a70 nodes.finish() 0x7f169dae4e40
2017/09/22 22:43:15 kid1| MemObject->start_ping: 0.000000
2017/09/22 22:43:15 kid1| MemObject->inmem_hi: 20209
2017/09/22 22:43:15 kid1| MemObject->inmem_lo: 0
2017/09/22 22:43:15 kid1| MemObject->nclients: 0
2017/09/22 22:43:15 kid1| MemObject->reply: 0x7f167ee60db0
2017/09/22 22:43:15 kid1| MemObject->request: 0
2017/09/22 22:43:15 kid1| MemObject->logUri:
2017/09/22 22:43:15 kid1| MemObject->storeId:
2017/09/22 22:43:15 kid1| Bug: Missing MemObject::storeId value
2017/09/22 22:43:15 kid1| mem_hdr: 0x7f16a0388760 nodes.start() 0x7f16a6a4a500
2017/09/22 22:43:15 kid1| mem_hdr: 0x7f16a0388760 nodes.finish() 0x7f16a6a4a4d0
2017/09/22 22:43:15 kid1| MemObject->start_ping: 0.000000
2017/09/22 22:43:15 kid1| MemObject->inmem_hi: 50265
2017/09/22 22:43:15 kid1| MemObject->inmem_lo: 0
2017/09/22 22:43:15 kid1| MemObject->nclients: 0
2017/09/22 22:43:15 kid1| MemObject->reply: 0x7f169f83d7d0
2017/09/22 22:43:15 kid1| MemObject->request: 0
2017/09/22 22:43:15 kid1| MemObject->logUri:
2017/09/22 22:43:15 kid1| MemObject->storeId:

Some googling turned up a lot of comments about this in combination with
Rock storage (which we're using) and ICP/HTCP (which we're not).  Is this
the same bug or something new?  Are there config changes we can make to
prevent it (perhaps switching away from the rock cache)?

We have a bunch of clients behind haproxy, which load-balances to
4x Squid.  The Squid configuration is as follows:

http_access allow localhost manager
http_access deny manager

external_acl_type client_ip_map_0 %>ha{Our-Client}
/usr/lib64/squid/user_loadbalance.py 0 4
external_acl_type client_ip_map_1 %>ha{Our-Client}
/usr/lib64/squid/user_loadbalance.py 1 4
external_acl_type client_ip_map_2 %>ha{Our-Client}
/usr/lib64/squid/user_loadbalance.py 2 4
external_acl_type client_ip_map_3 %>ha{Our-Client}
/usr/lib64/squid/user_loadbalance.py 3 4
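For readers unfamiliar with the external_acl_type interface: the helper
receives one lookup per line on stdin (here, the Our-Client header value)
and answers OK or ERR.  user_loadbalance.py itself isn't shown in the
thread, so the following is only a hypothetical sketch of what a helper
taking a group index and a group count might do; the CRC32 partitioning is
an assumption, not the poster's actual logic:

```python
#!/usr/bin/env python
# Hypothetical sketch of an external_acl_type helper in the style of
# the user_loadbalance.py above: argv[1] is this group's index, argv[2]
# the total number of groups.  Squid sends one header value per line;
# we answer "OK" if the value hashes into our group, else "ERR".
import sys
import zlib


def group_for(value, total_groups):
    # Stable hash of the client identifier (CRC32 is just an example).
    return zlib.crc32(value.encode()) % total_groups


def main(index, total):
    for line in sys.stdin:
        value = line.strip()
        print("OK" if group_for(value, total) == index else "ERR")
        sys.stdout.flush()  # helpers must not buffer their replies


if __name__ == "__main__" and len(sys.argv) >= 3:
    main(int(sys.argv[1]), int(sys.argv[2]))
```

With four such helpers configured as above, each Our-Client value hashes
into exactly one group, so exactly one of the four ACLs matches a given
request.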

acl client_group_0 external client_ip_map_0
acl client_group_1 external client_ip_map_1
acl client_group_2 external client_ip_map_2
acl client_group_3 external client_ip_map_3

http_access allow client_group_0
http_access allow client_group_1
http_access allow client_group_2
http_access allow client_group_3
http_access deny all

tcp_outgoing_address 10.93.2.41 client_group_0
tcp_outgoing_address 10.93.2.76 client_group_1
tcp_outgoing_address 10.93.2.198 client_group_2
tcp_outgoing_address 10.93.3.178 client_group_3

cache_dir rock /var/lib/squid/cache1 51200
cache_dir rock /var/lib/squid/cache2 51200
coredump_dir /var/spool/squid
maximum_object_size_in_memory 8 MB
maximum_object_size 8 MB

cache_mem 6 GB
memory_cache_shared on
workers 4

refresh_pattern . 0 100% 30

http_port squid0001:3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=400MB cert=/etc/squid/ssl_cert/myCA.pem
http_port localhost:3128
ssl_bump bump all

request_header_access Our-Client deny all
request_header_access Via deny all
forwarded_for delete

visible_hostname squid0001.lab.company.com
logformat adttest %tg %6tr %>a %Ss/%03>Hs %<st %rm %>ru %[un %Sh/%<a %mt %ea
access_log daemon:/var/log/squid/access.${process_number}.log adttest
icon_directory /usr/share/squid/icons

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
sslcrtd_children 32 startup=2 idle=2
sslproxy_session_cache_size 100 MB
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER


--
Aaron Turner
https://synfin.net/         Twitter: @synfinatic
My father once told me that respect for the truth comes close to being
the basis for all morality.  "Something cannot emerge from nothing,"
he said.  This is profound thinking if you understand how unstable
"the truth" can be.  -- Frank Herbert, Dune
_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users
Re: Bug: Missing MemObject::storeId value

Heiler Bemerguy

I have seen this since forever: 3.5.27 with one cache_peer and 4 rock stores.

2017/09/21 11:19:45 kid1| Bug: Missing MemObject::storeId value
2017/09/21 11:19:45 kid1| mem_hdr: 0x1902d240 nodes.start() 0x552baa0
2017/09/21 11:19:45 kid1| mem_hdr: 0x1902d240 nodes.finish() 0x552baa0
2017/09/21 11:19:45 kid1| MemObject->start_ping: 0.000000
2017/09/21 11:19:45 kid1| MemObject->inmem_hi: 3335
2017/09/21 11:19:45 kid1| MemObject->inmem_lo: 0
2017/09/21 11:19:45 kid1| MemObject->nclients: 0
2017/09/21 11:19:45 kid1| MemObject->reply: 0xae4da80
2017/09/21 11:19:45 kid1| MemObject->request: 0
2017/09/21 11:19:45 kid1| MemObject->logUri:
2017/09/21 11:19:45 kid1| MemObject->storeId:

2017/09/21 11:19:46 kid1| Bug: Missing MemObject::storeId value
2017/09/21 11:19:46 kid1| mem_hdr: 0x6ce75d0 nodes.start() 0x54585b0
2017/09/21 11:19:46 kid1| mem_hdr: 0x6ce75d0 nodes.finish() 0xb237550
2017/09/21 11:19:46 kid1| MemObject->start_ping: 0.000000
2017/09/21 11:19:46 kid1| MemObject->inmem_hi: 14892
2017/09/21 11:19:46 kid1| MemObject->inmem_lo: 0
2017/09/21 11:19:46 kid1| MemObject->nclients: 0
2017/09/21 11:19:46 kid1| MemObject->reply: 0x9f08d10
2017/09/21 11:19:46 kid1| MemObject->request: 0
2017/09/21 11:19:46 kid1| MemObject->logUri:
2017/09/21 11:19:46 kid1| MemObject->storeId:


On 22/09/2017 20:18, Aaron Turner wrote:

> [...]

--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751

Re: Bug: Missing MemObject::storeId value

Eliezer Croitoru
In reply to this post by Aaron Turner
Hey Aaron,

Just to clear up the doubt: what happens when you run Squid without a rock cache_dir? Does the problem still appear?
Also, there is a possibility of a bug related to Squid's ssl-bump termination code in 3.5.x.
Testing 4.0.21 would be the best way to learn whether the issue is local to 3.5 or was fixed in 4.x, but from memory I think you will need to adapt your squid.conf ssl_bump configuration.
You can get the latest beta and stable binaries from my repo; the beta repo details are at:
https://wiki.squid-cache.org/action/edit/KnowledgeBase/CentOS#Squid_Beta_release

Also, since you are using haproxy in front of Squid, I would suggest using the PROXY protocol (v1), which is the best way to pass the source IP addresses on to the proxy.
I have tested Squid with PROXY protocol v1 but have yet to test v2.
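For readers following along, a sketch of what that would look like on both
sides; require-proxy-header and proxy_protocol_access are from Squid 3.5's
feature set, but the port and ACL values here are illustrative assumptions,
not the poster's config:

```
# haproxy side: append send-proxy to each Squid backend server line
server squid1 10.93.2.41:3129 check send-proxy

# squid.conf side: accept the PROXY header on a dedicated port and
# restrict which frontends may send it
http_port 3129 require-proxy-header
acl frontends src 10.93.0.0/16
proxy_protocol_access allow frontends
proxy_protocol_access deny all
```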

All the best,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]



-----Original Message-----
From: squid-users [mailto:[hidden email]] On Behalf Of Aaron Turner
Sent: Saturday, September 23, 2017 02:19
To: [hidden email]
Subject: [squid-users] Bug: Missing MemObject::storeId value

[...]
Re: Bug: Missing MemObject::storeId value

Aaron Turner
So is v4 stable?  I was under the impression it was beta.  That said, if
v4 has better memory-tuning options then I'm all ears.  Right now I'm
fighting OOM errors (and the kernel OOM killer) under sustained load.
I've come to realize 6 GB of cache_mem is far too much for my 14 GB RAM
systems, but I'm finding even 1 GB is too much, since each Squid process
exceeds 4 GB.  About to try 500 MB now.
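The arithmetic here is worth making explicit.  With memory_cache_shared
on, cache_mem is one shared pool rather than a per-worker allowance, but
each worker still carries its own per-process overhead (SSL session and
certificate caches, in-transit objects, index entries).  A rough budget
check; the per-worker overhead figure below is purely an assumption for
illustration, not a measured value:

```python
# Back-of-the-envelope RAM budget for an SMP Squid like the one above.
def squid_ram_estimate_gb(cache_mem_gb, workers, per_worker_overhead_gb):
    # cache_mem is a single shared pool when memory_cache_shared is on;
    # per-worker overhead is paid once per worker process.
    return cache_mem_gb + workers * per_worker_overhead_gb

# 6 GB shared cache + 4 workers at an assumed ~2.5 GB each overruns
# a 14 GB box:
assert squid_ram_estimate_gb(6, 4, 2.5) == 16
# Dropping cache_mem to 0.5 GB leaves headroom under the same assumption:
assert squid_ram_estimate_gb(0.5, 4, 2.5) == 10.5
```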

I can disable the rock cache, but I need some disk cache.  Is there a better option?

As for haproxy, I actually don't care about the client IP; I'm
running haproxy locally on the servers where the clients reside.
Mostly I'm using it for Squid failover and cache affinity, so I don't
have to make all my caches peers of each other.
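As a sketch of that failover-plus-affinity arrangement (the directives are
standard haproxy, but every name and address below is illustrative, not the
poster's actual config):

```
backend squid_caches
    # Hash the request URI so a given URL keeps landing on the same
    # Squid, giving cache affinity without cache_peer relationships.
    balance uri
    hash-type consistent    # minimal remapping when a Squid goes down
    server squid0001 10.93.2.41:3128 check
    server squid0002 10.93.2.76:3128 check
```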


--
Aaron Turner
https://synfin.net/         Twitter: @synfinatic
My father once told me that respect for the truth comes close to being
the basis for all morality.  "Something cannot emerge from nothing,"
he said.  This is profound thinking if you understand how unstable
"the truth" can be.  -- Frank Herbert, Dune


On Mon, Sep 25, 2017 at 11:45 AM, Eliezer Croitoru <[hidden email]> wrote:

> [...]
Re: Bug: Missing MemObject::storeId value

Amos Jeffries
Administrator
On 26/09/17 08:56, Aaron Turner wrote:
> So is v4 stable?  I was the impression it was beta?  That said, if v4
> has better memory tuning options then I'm all ears.

Yes it is beta. Some bugs still to work out in the ssl-bump code, but
that is all.

Overall the v4 ssl-bump code is far better behaved and capable than the
3.5 series - so it is probably worth using despite the remaining bugs
*if* your current issues are not showing up there.


First thing I would do though is adding sslflags=NO_DEFAULT_CA to the
http_port line(s). It reduces the memory needs a lot when bumping in v3.5.
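Applied to the configuration quoted earlier in the thread, that suggestion
might look like the following; only sslflags=NO_DEFAULT_CA is new, the
rest is the poster's existing line:

```
http_port squid0001:3128 ssl-bump sslflags=NO_DEFAULT_CA \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=400MB cert=/etc/squid/ssl_cert/myCA.pem
```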


>  Right now I'm
> fighting OOM errors (and the kernel OOM reaper) under sustained load.
> I've come to realize 6GB is way way too much for my 14GB RAM systems,
> but finding even 1GB is too much since each squid process is exceeding
> 4GB.  About to try 500MB now.
>
> I can disable rock cache, but I need some disk cache- is there a better option?

Possibly a smaller rock cache plus a UFS/AUFS/diskd cache.  Rock can
share a disk with another cache; it's just the UFS/* caches that do not
share well with each other.


Amos
Re: Bug: Missing MemObject::storeId value

Alex Rousskov
On 09/25/2017 08:19 PM, Amos Jeffries wrote:
> On 26/09/17 08:56, Aaron Turner wrote:
>> I can disable rock cache, but I need some disk cache- is there a
>> better option?

> Possibly a smaller rock cache, and a UFS/AUFS/diskd cache - rock can
> share disk with another cache, its just the UFS/* caches that do not
> share well with each other.


In SMP mode, please:

* Do _not_ use rock cache with any other disk cache. The caching code is
not designed to mix SMP-unaware and SMP-aware caches in SMP mode.

* Avoid using non-rock disk caches in SMP mode because non-rock disk
caches are not SMP-aware. If you have to use SMP-unaware disk caches in
SMP mode, then confine each cache to one worker using SMP macros and do
not expect much help when things go wrong (e.g., Squid crashes and/or
HTTP violations). You should probably disable shared memory cache as
well in this case, for the reasons mentioned in the first bullet.
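A sketch of the confinement described above, using Squid's conditional
configuration on the ${process_number} macro (paths and sizes here are
illustrative):

```
# One SMP-unaware AUFS cache per worker, so no two workers ever
# touch the same cache directory:
if ${process_number} = 1
cache_dir aufs /var/lib/squid/cache-w1 51200 16 256
endif
if ${process_number} = 2
cache_dir aufs /var/lib/squid/cache-w2 51200 16 256
endif
# ...and, per the first bullet, disable the shared memory cache too:
memory_cache_shared off
```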


HTH,

Alex.
Re: Bug: Missing MemObject::storeId value

Eliezer Croitoru
In reply to this post by Aaron Turner
Hey Aaron,

Consider the comments from Amos and Alex first before moving forward.

And again, we need to clear up the current doubts, for both you and us: we don't know whether the issue is related to the rock cache_dir or to Squid in general.

Currently the best SMP-aware disk cache is rock, but keep in mind that a disk cache is a second level of caching, not the main goal.  You first need to make sure Squid works for you, and only then that rock works well enough for you.  Also take into account that you have gone "all in" on disk caching, and it's not clear you even need all this cache.  Before deciding that a disk cache is for you and that you really need it, start low and aim higher, moving forward in small steps:

1. Start with a simple ssl-bump Squid with no caching at all, and run it until it is stable, from a basic memory perspective, for 24 to 72 hours.
2. Only when it is stable enough for you and the machine can take the load, see whether adding a memory cache into the picture makes sense.
3. Run Squid with its default object-size settings and analyze the access logs to find the hottest sites and objects your cache actually serves.
4. Only once you have a clear view of the demand on your proxy service should you start investigating a disk cache (again with default object sizes).  Take into account that Squid may write objects to the disk cache and never use them again, which is a very good reason to test and analyze first before going all in (or out) with Squid.
5. Start with a small disk cache (10 GB max), and only after verifying that the setup works well enough, look for the right memory and disk cache sizes for your setup.

The above is my recommended recipe for a good and smooth start with Squid in a production environment.  You are not the first, and probably not the last, to receive this recommendation; some articles and resources on the Internet can mislead a Linux system administrator's expectations of Squid, or of any cache.

Please test Squid one step at a time, and do not be tempted to "cache everything", since that is practically impossible.
Update us as you move forward with your tests.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]



-----Original Message-----
From: Aaron Turner [mailto:[hidden email]]
Sent: Monday, September 25, 2017 22:57
To: Eliezer Croitoru <[hidden email]>
Cc: [hidden email]
Subject: Re: [squid-users] Bug: Missing MemObject::storeId value

So is v4 stable?  I was the impression it was beta?  That said, if v4
has better memory tuning options then I'm all ears.  Right now I'm
fighting OOM errors (and the kernel OOM reaper) under sustained load.
I've come to realize 6GB is way way too much for my 14GB RAM systems,
but finding even 1GB is too much since each squid process is exceeding
4GB.  About to try 500MB now.

I can disable rock cache, but I need some disk cache- is there a better option?

As for haproxy, I actually don't care about the client IP... I'm
running haproxy locally on the servers where the clients reside.
Mostly I'm using it for squid failover and cache affinity so I don't
have to make all my caches peers of each other.


--
Aaron Turner
https://synfin.net/         Twitter: @synfinatic
My father once told me that respect for the truth comes close to being
the basis for all morality.  "Something cannot emerge from nothing,"
he said.  This is profound thinking if you understand how unstable
"the truth" can be.  -- Frank Herbert, Dune


On Mon, Sep 25, 2017 at 11:45 AM, Eliezer Croitoru <[hidden email]> wrote:

> Hey Aaron,
>
> Just to clear out the doubt's, what happen when you use squid-cache without rock cache_dir? Is the problem appearing again?
> Also, there is a possibility of a bug which is related to squid ssl-bump termination code on 3.5.X.
> Testing 4.0.21 would be the best to understand if the issue is 3.5 local or if it was fixed in 4.X+ but, from my memory I think you will need to adapt your squid.conf ssl_bump configurations.
> You can get the latest beta and stable binaries from my repo and the beta repo details are at:
> https://wiki.squid-cache.org/action/edit/KnowledgeBase/CentOS#Squid_Beta_release
>
> Also, since you are using haproxy in front of squid I would suggest you to use the proxy protocol(v1) which is the best way to pass the source ip addresses to the proxy.
> I have tested squid to work with the proxy protocol v1 but yet to test v2.
>
> All The Bests,
> Eliezer
>
> ----
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: [hidden email]
>
>
>
> -----Original Message-----
> From: squid-users [mailto:[hidden email]] On Behalf Of Aaron Turner
> Sent: Saturday, September 23, 2017 02:19
> To: [hidden email]
> Subject: [squid-users] Bug: Missing MemObject::storeId value
>
> We have a bunch of clients behind haproxy which is load balancing to
> 4x Squid.  Config of the squids is as:
>
> http_access allow localhost manager
> http_access deny manager
>
> external_acl_type client_ip_map_0 %>ha{Our-Client}
> /usr/lib64/squid/user_loadbalance.py 0 4
> external_acl_type client_ip_map_1 %>ha{Our-Client}
> /usr/lib64/squid/user_loadbalance.py 1 4
> external_acl_type client_ip_map_2 %>ha{Our-Client}
> /usr/lib64/squid/user_loadbalance.py 2 4
> external_acl_type client_ip_map_3 %>ha{Our-Client}
> /usr/lib64/squid/user_loadbalance.py 3 4
>
> acl client_group_0 external client_ip_map_0
> acl client_group_1 external client_ip_map_1
> acl client_group_2 external client_ip_map_2
> acl client_group_3 external client_ip_map_3
>
> http_access allow client_group_0
> http_access allow client_group_1
> http_access allow client_group_2
> http_access allow client_group_3
> http_access deny all
>
> tcp_outgoing_address 10.93.2.41 client_group_0
> tcp_outgoing_address 10.93.2.76 client_group_1
> tcp_outgoing_address 10.93.2.198 client_group_2
> tcp_outgoing_address 10.93.3.178 client_group_3
>
> cache_dir rock /var/lib/squid/cache1 51200
> cache_dir rock /var/lib/squid/cache2 51200
> coredump_dir /var/spool/squid
> maximum_object_size_in_memory 8 MB
> maximum_object_size 8 MB
>
> cache_mem 6 GB
> memory_cache_shared on
> workers 4
>
> refresh_pattern . 0 100% 30
>
> http_port squid0001:3128 ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=400MB cert=/etc/squid/ssl_cert/myCA.pem
> http_port localhost:3128
> ssl_bump bump all
>
> request_header_access Our-Client deny all
> request_header_access Via deny all
> forwarded_for delete
>
> visible_hostname squid0001.lab.company.com
> logformat adttest %tg %6tr %>a %Ss/%03>Hs %<st %rm %>ru %[un %Sh/%<a %mt %ea
> access_log daemon:/var/log/squid/access.${process_number}.log adttest
> icon_directory /usr/share/squid/icons
>
> sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
> sslcrtd_children 32 startup=2 idle=2
> sslproxy_session_cache_size 100 MB
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
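The user_loadbalance.py helper itself isn't shown in the thread. As a purely hypothetical sketch of how such an external_acl_type helper could partition clients (the hashing scheme and function names are my assumption, not the original script): Squid writes the %>ha{Our-Client} value one per line and expects an OK or ERR reply per line.

```python
#!/usr/bin/env python3
# Hypothetical reconstruction (NOT the original user_loadbalance.py):
# argv[1] = this group's index, argv[2] = total number of groups.
# Squid's external_acl_type protocol: one lookup value per input line,
# one "OK" (ACL matches) or "ERR" (no match) reply per output line.
import sys
import zlib

def group_for(token, groups):
    # Stable hash so a given client header value always maps to the
    # same group, giving cache/source-address affinity across requests.
    return zlib.crc32(token.encode("utf-8")) % groups

def main(index, groups):
    for line in sys.stdin:
        token = line.strip()
        reply = "OK" if group_for(token, groups) == index else "ERR"
        sys.stdout.write(reply + "\n")
        sys.stdout.flush()  # Squid waits for each reply

if __name__ == "__main__" and len(sys.argv) >= 3:
    main(int(sys.argv[1]), int(sys.argv[2]))
```

With four such ACLs (indices 0-3), exactly one client_group_N ACL matches each client, which is what lets the tcp_outgoing_address lines pin each group to one source IP.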

_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: Bug: Missing MemObject::storeId value

Aaron Turner
Yeah, sounds like I need to prove that ssl-bump is not eating memory
before I start worrying about caching, then slowly add features until
I find the smoking gun and focus on that.

I'm curious: does anyone have a suggestion of what modern high-traffic-volume
squid deployments look like? Lots of the suggestions out there seem a
bit outdated.  I'm trying to follow the KISS principle and not do any
fancy ICP/etc or multi-layer proxy config, since that seems much harder
to deploy and benchmark.  Instead we're using haproxy to get cache
affinity across systems.  Obviously this may result in some
hot-spotting, but it seems like we'll need enough servers that
hopefully the pain will be distributed.

The reason I'm looking at squid is that I've got a small server farm
of ~850 web clients which will be making ~10M page requests/day.
Right now I'm estimating about 50% of my traffic is SSL, so bumping
SSL connections is pretty important.

--
Aaron Turner
https://synfin.net/         Twitter: @synfinatic
My father once told me that respect for the truth comes close to being
the basis for all morality.  "Something cannot emerge from nothing,"
he said.  This is profound thinking if you understand how unstable
"the truth" can be.  -- Frank Herbert, Dune


On Mon, Sep 25, 2017 at 9:21 PM, Eliezer Croitoru <[hidden email]> wrote:

> Hey Aaron,
>
> Consider the comments from Amos and Alex first before moving forward.
> And again, we need to clear up the current doubts, for both you and us:
> we don't know if the issue is related to the rock cache_dir or to squid in general.
> Currently the best disk cache for SMP-aware setups is rock, but understand that disk cache is a second level of caching, not the main goal.
> First make sure squid itself works for you, and then make sure rock works well enough for you.
> Also take into account that you have gone "all in" on disk caching, and it's not clear you even need all this cache.
> Before you decide that disk caching is for you and that you really need it, start low and aim higher, moving forward in small steps.
> Start with a simple squid with ssl-bump and no caching at all, and run it until it is stable enough, from a basic memory perspective, for a period of 24 to 72 hours.
> Then, and only then, when you see it's stable enough and the machine can take the load, see whether adding a memory cache into the picture makes sense.
> Run squid with its default object-size settings and analyze the cache logs to verify which hot sites and objects your cache actually serves.
> Only when you have a clear view of the demand on your cache proxy service should you consider moving on to investigate a disk cache (with default cache object sizes).
> Take into account that squid may write objects to the disk cache but never use them again, which is a very good reason to test and analyze first before going all in (or out) with squid.
> Also start with a small disk cache (10GB max), and only after verifying that the setup really works well enough, try to find the right memory and disk cache sizes for your setup.
>
> The above is my recommended recipe for a good and smooth start with squid in a production environment.
> You are not the first, and probably not the last, to receive this recommendation; I believe some articles and resources on the Internet can mislead a Linux system administrator's expectations of squid-cache, or of any cache.
>
> Please test squid one step at a time, and do not be tempted to try to "cache everything", since that is practically impossible.
> Update us as you move forward with your tests.
>
> Eliezer
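Eliezer's "analyze the cache logs" step can start with a one-liner. This assumes the default access.log format, where the fourth field is the result code; the custom adttest logformat in this thread puts %Ss/%03>Hs in a different column, so adjust the field number accordingly:

```shell
# Tally result codes (TCP_HIT, TCP_MISS, ...) from a default-format access.log
awk '{ split($4, rc, "/"); count[rc[1]]++ } END { for (c in count) print count[c], c }' \
    /var/log/squid/access.log | sort -rn
```

A hit/miss breakdown like this is usually enough to decide whether a disk cache is worth its RAM and I/O cost.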
>
> -----Original Message-----
> From: Aaron Turner [mailto:[hidden email]]
> Sent: Monday, September 25, 2017 22:57
> To: Eliezer Croitoru <[hidden email]>
> Cc: [hidden email]
> Subject: Re: [squid-users] Bug: Missing MemObject::storeId value
>
> So is v4 stable?  I was under the impression it was beta.  That said,
> if v4 has better memory tuning options then I'm all ears.  Right now
> I'm fighting OOM errors (and the kernel OOM killer) under sustained
> load.  I've come to realize a 6GB cache_mem is way too much for my
> 14GB RAM systems, but I'm finding even 1GB is too much, since each
> squid process exceeds 4GB.  About to try 500MB now.
>
> I can disable the rock cache, but I need some disk cache; is there a better option?
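A back-of-envelope check of the numbers in this thread, using the commonly cited rule of thumb of roughly 10-14 MB of index RAM per GB of disk cache; the per-worker overhead figure below is a loose assumption, not a measurement:

```python
# Rough RAM estimate: shared cache_mem + disk-cache index + worker overhead.
def estimated_ram_gb(cache_mem_gb, disk_cache_gb, workers,
                     index_mb_per_gb=14.0, per_worker_overhead_gb=0.5):
    index_gb = disk_cache_gb * index_mb_per_gb / 1024.0
    return cache_mem_gb + index_gb + workers * per_worker_overhead_gb

# Config in this thread: 6 GB shared cache_mem, 2 x 50 GB rock, 4 workers.
print(round(estimated_ram_gb(6, 100, 4), 1))
print(round(estimated_ram_gb(0.5, 100, 4), 1))
```

By this crude estimate even the original config should fit in 14 GB, which suggests the observed growth came from somewhere outside caching; that is consistent with the ssl-bump certificate handling discussed later in the thread, but the numbers here are illustrative only.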
>
> As for haproxy, I actually don't care about the client IP... I'm
> running haproxy locally on the servers where the clients reside.
> Mostly I'm using it for squid failover and cache affinity so I don't
> have to make all my caches peers of each other.
>
> On Mon, Sep 25, 2017 at 11:45 AM, Eliezer Croitoru <[hidden email]> wrote:
>> Hey Aaron,
>>
>> Just to clear up the doubts: what happens when you use squid without a rock cache_dir? Does the problem appear then?
>> Also, there is a possibility of a bug related to squid's ssl-bump termination code in 3.5.X.

Re: Bug: Missing MemObject::storeId value

Aaron Turner
Just a follow-up.  Thanks to Amos, who suggested setting
sslflags=NO_DEFAULT_CA on the http_port(s).  That seems to have fixed
the memory (leak?) problem.  I should probably run this for a few days
to be sure, but at least now I can run squid for a few hours and
memory is much more stable, versus before, when I'd start having
problems after about 30 minutes.

On a side note, the MemObject bug I referred to at the start of this
thread definitely seems related to enabling the rock cache.  I wasn't
seeing the error when running with or without the memory cache, but it
seems to have come back once I enabled rock.  I'm still tuning my
squid caching preferences to match our needs, so I may have more info
later.
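For anyone following along, the change amounts to adding sslflags=NO_DEFAULT_CA to the bumping port; adapted from the squid.conf quoted earlier in this thread:

```
http_port squid0001:3128 ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=400MB sslflags=NO_DEFAULT_CA \
    cert=/etc/squid/ssl_cert/myCA.pem
```

NO_DEFAULT_CA stops squid loading the full default CA certificate bundle into each SSL context, which appears to be where the memory was going in this case.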




Re: Bug: Missing MemObject::storeId value

Amos Jeffries
Administrator
On 27/09/17 10:30, Aaron Turner wrote:

> Doing some basic system tests and we're seeing a bunch of errors like:
>
> 2017/09/22 22:43:15 kid1| Bug: Missing MemObject::storeId value


This is <http://bugs.squid-cache.org/show_bug.cgi?id=4527>

Amos

Re: Bug: Missing MemObject::storeId value

Aaron Turner
So reading the bug comments, it doesn't sound like there are any
config changes I can make (other than not using rock, which for SMP
doesn't sound like a good idea).  I might be able to run ALL,9 and
collect the output; I'd need to sanitize the URLs due to
privacy/security concerns.  Anything else I can or should do or
consider?

Honestly, I'm not sure what the impact of this bug really is.  Is it
just a cache miss, or something worse?



Re: Bug: Missing MemObject::storeId value

Amos Jeffries
Administrator

On 27/09/17 12:55, Aaron Turner wrote:
> Honestly, I'm not sure what the impact of this bug really is.  Is it
> just a cache miss, or something worse?

As far as I know, yes.

Amos