I would like to know performance sizing aspects.


kitamura
We are considering using Squid for our proxy, and would like to know about performance sizing.

Current web access request averages per hour are as follows:
Clients: 30,000
Page Views: 141,741/hour
Requests: 4,893,106/hour

We will install Squid on CentOS 8.1. Please kindly share your thoughts/advice:
Is there a sizing methodology, and are there tools for it?
How much resource is generally recommended for our environment (CPU, memory, disk space, other factors if any)?
Do you have generally recommended performance testing tools? Any suggested guidelines?

Best regards,
kitamura

_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: I would like to know performance sizing aspects.

Amos Jeffries
Administrator
On 5/08/20 11:28 am, m k wrote:
>> We are considering to use Squid for our proxy, and would like to know
>> performance sizing aspects.
>>
>> Current web access request averages per hour are as follows:
>> Clients: 30,000
>> Page Views: 141,741/hour
>> Requests: 4,893,106
>>

Okay. Requests and client count are the important numbers there.

The ~1359 req/sec is well within the capabilities of a default Squid, which
can extend up to around 10k req/sec before careful tuning is needed.
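(For anyone checking the arithmetic, the ~1359 figure is simply the hourly request count divided by 3600:)

```python
# Convert the hourly request count from the thread into a per-second rate.
requests_per_hour = 4_893_106
requests_per_sec = requests_per_hour / 3600
print(round(requests_per_sec))  # prints 1359
```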

That number was gained before HTTPS became so popular. So YMMV depending
on how many CONNECT tunnels you have to deal with. That HTTPS traffic
can possibly be decrypted and cached but performance trade-offs are
quite large.
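As an illustration of that trade-off (this configuration is not from the thread; the port and certificate path are placeholders): a minimal squid.conf sketch of the two extremes for CONNECT traffic -

```
# Hypothetical squid.conf fragment; the CA certificate path is a placeholder.
http_port 3128 ssl-bump cert=/etc/squid/ca.pem generate-host-certificates=on

acl step1 at_step SslBump1
ssl_bump peek step1     # look at the TLS ClientHello without decrypting
ssl_bump splice all     # cheapest: tunnel traffic through undecrypted
#ssl_bump bump all      # alternative: decrypt (enables caching, costs much more CPU)
```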


>> We will install Squid on CentOS 8.1.   Please kindly share your
>> thoughts / advices

Whatever OS you are most comfortable administering. Be aware that the
official CentOS Squid packages are very slow to update - apparently they
still have only v4.4 (8 months old) despite the 8.2 point release only a
few weeks ago.

So you may need to build your own from source and/or use other
semi-official packages, such as the ones from Eliezer at NGTech when he
gets around to CentOS 8 packages.
  <https://wiki.squid-cache.org/KnowledgeBase/CentOS>


FYI: if you find yourself having to use SSL-Bump, then we highly
recommend following the latest Squid releases with fairly frequent
updates (at minimum a few times per year; worst case, monthly). If you
like CentOS you may find Fedora more suitable for tracking the security
environment's volatility and update churn.


>> Is there sizing methodology and tools?

There are a couple of methodologies, depending on what aspect you are
tuning towards - and one for identifying the limitation points to begin
a tuning process.

The info you gave above is the beginning: checking whether your traffic
rate is reasonably within the capability of a single Squid instance.

Yours is reasonable, so the next step is to get Squid running and see
where the trouble points (if any) are.

 For more see <https://wiki.squid-cache.org/SquidFaq/>



>> How much resource is generally recommended for our environment
>> (CPU, memory, disk space, other factors if any)?
>> Do you have generally recommended performance testing tools? Any
>> suggested guidelines?
>>


 CPU - Squid is still mostly single-process, so prioritize faster clock
speeds over core count. Multiple cores can help, of course, but not as
much as cycle speed does. Hyper-threading is useless for Squid.

 Memory - Squid will use as much as you can give it. Let your budget
govern this.

 Disk - Squid will happily run with no disk - or lots of large ones.

   - Avoid RAID. Squid *will* shorten disk lifetimes with its unusually
high write I/O pattern. How much shorter varies by disk type (HDD vs
SSD). So you may find it better to budget for the maintenance cost of
replacing disks in the future rather than buying multiple disks up front
for RAID use.
 See <https://wiki.squid-cache.org/SquidFaq/RAID> for details.

    - Up to a few hundred GB per cache_dir can be good for large caches.
Going up to TB is not (yet) worth the disk cost as Squid has a per-cache
limit on stored objects.

   - Disk caches can be re-tuned, added, moved, removed, and/or extended
at any time, and their sizing will depend on the profile of object sizes
your proxy handles - which itself likely changes over time. So in
general, let your budget decide the initial disks and work from there.
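Purely as an illustrative starting point along those lines (none of these values come from the thread; they are placeholders to adjust against budget and observed traffic):

```
# Hypothetical squid.conf sizing fragment; all values are placeholders.
cache_mem 8192 MB                               # in-RAM object cache (process RSS will be higher)
cache_dir aufs /var/spool/squid 256000 16 256   # one ~250 GB disk cache: size in MB, then L1/L2 dirs
maximum_object_size 512 MB                      # largest object admitted to the cache
```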



Load Testing - the tools we devs use to review performance are listed at
the bottom of the profiling FAQ page. These are best for testing the
theoretical limits of a particular installation - real traffic tends to
be somewhat lower. So I personally prefer taking stats from the running
proxy on real traffic and seeing what I can observe from those.
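For taking stats from a running proxy, the cache manager interface is the usual route; for example (these commands assume the squidclient tool that ships with Squid, talking to a proxy on the default port):

```
squidclient mgr:info          # overall service statistics, including median response times
squidclient mgr:5min          # performance averages over the last 5 minutes
squidclient mgr:utilization   # cache utilization counters
```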


HTH
Amos

Re: I would like to know performance sizing aspects.

Eliezer Croitoru
Hey Amos,

I got to CentOS 8...
Red Hat claimed they would keep the module up to date and that I would be able to stop building the RPMs.
From what you describe, I understand their speed is the same as it was before.

I can build the RPMs but cannot host them 24/7.
For now, if and when 8.2 RPMs are built, the Squid version from the CentOS AppStream repo should be blocked (excluded).

Do we have a tester who can test 8.2 with my RPMs?
(I will try to share 4.12 RPMs via OneDrive.)

Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: [hidden email]


Re: I would like to know performance sizing aspects.

kitamura
Amos,

Thank you for your reply.
It was very helpful.

> That number was gained before HTTPS became so popular. So YMMV depending
> on how many CONNECT tunnels you have to deal with. That HTTPS traffic can possibly be decrypted 
> and cached but performance trade-offs are quite large.

Our Squid uses SSL-Bump.
I am worried about web access slowing down due to HTTPS decryption, and also due to using a blacklist:
I load tens of thousands of URLs (a blacklist file) every time I set up the ACL.
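(For context, a file-backed ACL is the usual way such a list is loaded; a hypothetical squid.conf sketch, with a made-up file path:)

```
# Load one domain per line from an external file.
acl blacklist dstdomain "/etc/squid/blacklist.txt"
http_access deny blacklist
```

Note that dstdomain matches domains only; matching full URLs needs url_regex, which is far more expensive at tens of thousands of entries.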

How many requests per second can Squid handle with SSL-Bump?

Thank you,
kitamura


Re: I would like to know performance sizing aspects.

Eliezer Croitoru

Kitamura,

 

About the tens of thousands of URLs: have you considered using a blacklisting utility? It might lower the memory footprint.

 

Eliezer

 


Re: I would like to know performance sizing aspects.

kitamura
Eliezer,

Our Squid would run with default settings on 1 CPU core and 16 GB of memory.
How many blacklist URLs will start to degrade Squid's performance?

And the same question with SSL-Bump enabled?

Thank you,
kitamura



Re: I would like to know performance sizing aspects.

Vacheslav

Having 3 GB of memory with ufdb improves performance.


Re: I would like to know performance sizing aspects.

Eliezer Croitoru

Did you mean 1 CPU with a couple of cores?

 

For Squid, loading these lists into RAM usually takes some time.

It's not wrong to do so, since in many cases it's the right thing to do.

If these lists stay unchanged for at least a day, I assume it should be fine.

 

However, there are tools like ufdbguard and others which are very good in terms of
memory footprint and fast URL lookups.
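To illustrate why the in-memory lookup structure matters for large lists (this is a hypothetical sketch, not how ufdbguard actually works): a hash set gives O(1) membership tests regardless of list size, where a naive linear scan degrades with every added entry.

```python
# Hypothetical sketch: all entries and function names below are
# made up for illustration, not taken from any real tool.

def load_blacklist(lines):
    """Build a set of normalized entries; skip blanks and '#' comments."""
    entries = set()
    for raw in lines:
        entry = raw.strip().lower()
        if entry and not entry.startswith("#"):
            entries.add(entry)
    return entries

blacklist = load_blacklist([
    "# sample blacklist file",
    "ads.example.com",
    "Tracker.example.net",
    "",
])

def is_blocked(host, blocked):
    # Set membership stays fast even with tens of thousands of entries.
    return host.lower() in blocked

print(is_blocked("ads.example.com", blacklist))  # True
print(is_blocked("www.example.org", blacklist))  # False
```

The load step still costs time proportional to the file size, which is the start-up delay being discussed here; only the per-request lookup becomes cheap.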

 

How do you add these URLs? If I am not mistaken, these are URLs and not domains, right?

 

Eliezer

 

----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: [hidden email]

 

From: m k <[hidden email]>
Sent: Thursday, August 6, 2020 8:29 AM
To: Eliezer Croitor <[hidden email]>
Cc: [hidden email]
Subject: Re: [squid-users] I would like to know performance sizing aspects.

 

Eliezer,

 

Our Squid setup is 1 CPU core and 16GB of memory.

How many blacklist URLs will it take to degrade Squid's performance?

 

Also, SSL-Bump.

 

Thank you,

kitamura

 

 

On Thursday, August 6, 2020 at 13:38, Eliezer Croitor <[hidden email]> wrote:

Kitamura,

 

About the tens of thousands of URLs: have you considered using a blacklisting utility? It might lower the memory footprint.

 

Eliezer

 


 

From: squid-users <[hidden email]> On Behalf Of m k
Sent: Thursday, August 6, 2020 7:25 AM
To: Amos Jeffries <[hidden email]>
Cc: [hidden email]
Subject: Re: [squid-users] I would like to know performance sizing aspects.

 

Amos,

 

Thank you for your reply.

It was very helpful.

 

> That number was gained before HTTPS became so popular. So YMMV depending
> on how many CONNECT tunnels you have to deal with. That HTTPS traffic can possibly be decrypted 

> and cached but performance trade-offs are quite large.

 

Squid uses SSL-Bump.

I'm very worried about the internet slowing down due to HTTPS decryption, and I'm also worried about it slowing down due to the use of the blacklist.

I load tens of thousands of URLs (a blacklist file) every time I set up the ACL.

 

How many requests per second can SSL-Bump handle?

 

Thank you,

kitamura

 


Re: I would like to know performance sizing aspects.

Amos Jeffries
Administrator
In reply to this post by kitamura
On 6/08/20 5:28 pm, m k wrote:
> Eliezer,
>
> Squid's default setting is 1 core CPU, 16GB mem.
> How many URLs(Blacklist) will degrade Squid's performance?
>

Eliezer's answer covers that already, so I will skip it here.


> Also, SSL-Bump.
>

This is "unknown" - as far as I am aware, no one has published numbers
recently about it. There are a lot of factors in the network traffic and
your server's internal state (e.g. the RNG engine) that multiply up to
cause varying amounts of delay - plus the volatile nature of this
feature set itself changes the effect or relevance of each factor month
by month.
  So numbers from me today will be wrong in a few weeks, or may be wrong
for your network already. All that we can be sure of is that Squid has
to do extra work, so being "slower" than plain-text HTTP is to be
expected.

For planning, the consideration is just to be aware that the numbers we
can give you (for plain-text) will be over-estimates of capacity for
SSL-Bump traffic, and to allow some margin. Once you have an install
running you can test and measure the actual numbers for your traffic.
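One simple way to get those real-traffic numbers is from the access log itself. The sketch below assumes Squid's native access.log format, where the first whitespace-separated field is a Unix timestamp; the log lines shown are fabricated for illustration:

```python
# Sketch: estimate average requests/sec from Squid access.log lines
# (native format: first field is a Unix timestamp like "1596592800.100").

def requests_per_second(log_lines):
    """Average request rate over the time span covered by the lines."""
    stamps = []
    for line in log_lines:
        try:
            stamps.append(float(line.split()[0]))
        except (ValueError, IndexError):
            continue  # skip malformed lines
    if len(stamps) < 2:
        return 0.0
    span = max(stamps) - min(stamps)
    return len(stamps) / span if span > 0 else 0.0

sample = [
    "1596592800.100 35 10.0.0.5 TCP_MISS/200 2048 GET http://example.com/ - HIER_DIRECT/93.184.216.34 text/html",
    "1596592800.600 12 10.0.0.6 TCP_HIT/200 512 GET http://example.com/a - HIER_NONE/- text/css",
    "1596592802.100 40 10.0.0.7 TCP_MISS/200 4096 CONNECT example.net:443 - HIER_DIRECT/198.51.100.7 -",
]
print(requests_per_second(sample))  # ~1.5 req/sec
```

Run over a peak-hour slice of the real log, this gives the per-second rate to compare against the plain-text capacity estimates above.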


Amos
_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users
Reply | Threaded
Open this post in threaded view
|

Re: I would like to know performance sizing aspects.

kitamura
Hi all,

I built Squid using SSL-Bump. In addition, Squid also authenticates users against Active Directory. The hardware is an OpenStack virtual machine. The OS is CentOS 8.1. There is one CPU. The memory is 16GB. The hard disk is a 200GB SSD.

I'm thinking of load testing with Apache JMeter in this environment.
I don't know what baseline to aim for, so my testing has stalled and I am stuck.
How many simultaneous connection sessions should I test with?
How many requests per minute?
Please help.

thank you,
Kitamura


Re: I would like to know performance sizing aspects.

Eliezer Croitoru-3

Hey Kitamura,

 

Technically speaking, an OpenStack admin can create a flavor which has 1 vCPU and 16GB RAM;
however, it's recommended to have 1 vCPU per 4GB of RAM.

OpenStack's default vCPU overcommit ratio is 16 vCPUs per physical core.

So for a proxy which uses SSL-Bump it's recommended to have more than 1 vCPU, i.e. at least 2 if not 4.
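As a back-of-the-envelope illustration of that 1-vCPU-per-4GB rule of thumb (the function name and the 2-vCPU floor for SSL-Bump are my own sketch, not an OpenStack API):

```python
import math

# Rule-of-thumb sketch: size vCPUs from RAM (1 vCPU per 4 GB), with a
# floor of 2 vCPUs when SSL-Bump is in use, per the advice above.

def recommended_vcpus(ram_gb: float, ssl_bump: bool = True) -> int:
    vcpus = math.ceil(ram_gb / 4)
    if ssl_bump:
        vcpus = max(vcpus, 2)
    return vcpus

print(recommended_vcpus(16))                 # 4  (Kitamura's 16GB VM)
print(recommended_vcpus(4, ssl_bump=True))   # 2  (SSL-Bump floor applies)
```

By this yardstick the 1 vCPU / 16GB flavor described earlier is under-provisioned on CPU by a factor of four.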

 

All the best,

Eliezer

 

----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: [hidden email]

 


 


