IPVS/LVS load balancing Squid servers, anyone did it?

Eliezer Croitoru

Hey All,

 

I have been reading about load balancing (LB) and tried to find an up-to-date example or tutorial specific to Squid, with no luck.

I have seen: http://kb.linuxvirtualserver.org/wiki/Building_Web_Cache_Cluster_using_LVS

 

Which makes sense and is similar, if not nearly identical, to WCCP with GRE.

 

Does anyone know of a working Squid setup with IPVS/LVS?

 

Thanks,

Eliezer

 

----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: [hidden email]

 



Re: IPVS/LVS load balancing Squid servers, anyone did it?

Bruce Rosenberg
Hi Eliezer,

We are running a couple of Squid proxies (the real servers) behind a pair of LVS servers with keepalived, and it works flawlessly.
The two Squid proxies are active/active and the LVS servers are active/passive.
If a Squid proxy dies, the remaining proxy takes all the traffic.
If the active LVS server dies, keepalived on the backup LVS (via VRRP) moves the VIP to itself and that server takes all the traffic. The only difference between the two LVS servers is that one has a higher priority, so it gets the VIP first.
I have included some sanitised snippets from a keepalived.conf file that should help you.
You could easily scale this out if you need more than two Squid proxies.

The config I provided is for LVS/DR (Direct Routing) mode.
This method rewrites the destination MAC address of forwarded packets to that of one of the real servers and is the most scalable way to run LVS.
It does require that the LVS and real servers be on the same L2 network.
If that is not possible, consider LVS/TUN mode or LVS/NAT mode.

As LVS/DR rewrites the MAC address, it requires each real server to have the VIP plumbed on an interface, and it also requires the real servers to ignore ARP requests for the VIP, because the only device that should answer ARP for the VIP is the active LVS server.
We do this by configuring the VIP on the loopback interface on each real server, but there are other methods as well, such as dropping the ARP responses using arptables, iptables or firewalld.
I think back in the kernel 2.4 and 2.6 days people used the noarp kernel module, which could be configured to ignore ARP requests for a particular IP address, but you don't really need that anymore.

More info on the loopback arp blocking method - https://www.loadbalancer.org/blog/layer-4-direct-routing-lvs-dr-and-layer-4-tun-lvs-tun-in-aws/
More info on firewall type arp blocking methods - https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-direct-vsa
More info about LVS/DR - http://kb.linuxvirtualserver.org/wiki/LVS/DR
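For reference, a minimal sketch of the loopback method on a real server (the VIP is the one from the config below; the sysctls are the standard kernel ARP knobs, exact values may vary with your setup):

# On each real server (Squid proxy): plumb the VIP on loopback
ip addr add 10.10.10.10/32 dev lo

# Stop the box answering ARP for the VIP on its real NICs:
#   arp_ignore=1   -> only reply if the target IP is configured on the receiving interface
#   arp_announce=2 -> always use the best local address as the ARP source
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2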


If you are using an RPM-based distro, then to set up the LVS servers you only need the ipvsadm and keepalived packages.
Install Squid on the real servers, configure the VIP on each, and disable ARP for it as above.
Then build the keepalived.conf on both LVS servers and restart keepalived.
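On a RHEL/CentOS-style system that boils down to something like this (assuming the stock package and service names):

# On both LVS servers
yum install -y ipvsadm keepalived
systemctl enable --now keepalived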

The priority setting in the vrrp_instance section determines the primary VRRP node (LVS server) for that virtual router instance.
The secondary LVS server needs a lower priority than the primary.
You can configure one as the MASTER and the other as the BACKUP, but our guys make them both BACKUP and let the priority sort out the election of the primary.
I think this might be to solve a problem of bringing up a BACKUP without a MASTER, but I can't confirm that.
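So on the second LVS server the vrrp_instance block is identical apart from the priority, for example (100 is just an illustrative value, anything lower than 150 works):

    state BACKUP
    priority 100      # lower than lvs01's 150, so lvs01 wins the election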


Good luck.


$ cat /etc/keepalived/keepalived.conf

global_defs {

    notification_email {
        # [hidden email]
    }
    notification_email_from [hidden email]
    smtp_server 10.1.2.3        # mail.example.com
    smtp_connect_timeout 30
    lvs_id lvs01.example.com    # Name to mention in email.
}

vrrp_instance LVS_example {

    state BACKUP
    priority 150
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 5
    preempt_delay 20

    virtual_ipaddress_excluded {
       
        10.10.10.10   # Squid proxy
    }

    notify_master "some command to log or send an alert"
    notify_backup "some command to log or send an alert"
    notify_fault "some command to log or send an alert"
}


# SQUID Proxy
virtual_server 10.10.10.10 3128 {

    delay_loop 5
    lb_algo wrr
    lb_kind DR
    protocol TCP

    real_server 10.10.10.11 3128 {   # proxy01.example.com
        weight 1
        inhibit_on_failure 1
        TCP_CHECK {
            connect_port 3128
            connect_timeout 5
        }
    }

    real_server 10.10.10.12 3128 {   # proxy02.example.com
        weight 1
        inhibit_on_failure 1
        TCP_CHECK {
            connect_port 3128
            connect_timeout 5
        }
    }
}
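For what it's worth, the virtual_server block above maps roughly onto the following hand-driven ipvsadm commands (keepalived programs this for you, so this is only to make the mapping explicit), and ipvsadm -L -n lets you inspect what is actually loaded:

# Manual equivalent of the keepalived virtual_server (illustration only)
ipvsadm -A -t 10.10.10.10:3128 -s wrr
ipvsadm -a -t 10.10.10.10:3128 -r 10.10.10.11:3128 -g -w 1   # -g = direct routing
ipvsadm -a -t 10.10.10.10:3128 -r 10.10.10.12:3128 -g -w 1

# Inspect the live IPVS table
ipvsadm -L -n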



Re: IPVS/LVS load balancing Squid servers, anyone did it?

Amos Jeffries
Nice writeup. Do you mind if I add this to the Squid wiki as an example
for high-performance proxying?


Amos




Re: IPVS/LVS load balancing Squid servers, anyone did it?

FUSTE Emmanuel
In reply to this post by Bruce Rosenberg
Hi,

To complement this: on modern kernels, take the opportunity to also try nftlb
instead of LVS.
https://www.zevenet.com/knowledge-base/nftlb/what-is-nftlb/
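Not nftlb's actual generated ruleset, but as a hand-written sketch (reusing the illustrative addresses from Bruce's example), plain nftables can round-robin TCP 3128 across two Squid backends with numgen, roughly like this:

# Round-robin DNAT across two backends with nftables (sketch only)
nft add table ip squid_lb
nft add chain ip squid_lb prerouting '{ type nat hook prerouting priority -100 ; }'
nft add chain ip squid_lb postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip squid_lb prerouting tcp dport 3128 dnat to numgen inc mod 2 map '{ 0 : 10.10.10.11, 1 : 10.10.10.12 }'
# Masquerade is only needed if the backends do not route replies back via this box
nft add rule ip squid_lb postrouting ip daddr '{ 10.10.10.11, 10.10.10.12 }' masquerade

As I understand it, nftlb maintains rules along these lines for you, plus health checking of the backends.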

Emmanuel.


Re: IPVS/LVS load balancing Squid servers, anyone did it?

Eliezer Croitoru
In reply to this post by Bruce Rosenberg

Hey Bruce,

 

Thanks for the detailed and beautiful answer.

I am actually trying to understand what IPVS gives us and to compare it to nftables.

 

We need a concrete setup structure to make it more real.

 

I am trying to think through a setup sketch:

3+ Proxies

2 LB

1 Edge Router VIP (maybe more actual routers)

 

Networks:

PX Internal net: 192.168.100.0/24

WAN Edge Routers net: 192.168.200.0/24

 

R1:

WAN VIP: 192.168.200.200/24

LAN VIP: 192.168.100.254/24

Static route toward 192.168.101.100 via 192.168.200.200

(Another option would be using FRR for ECMP and ACTIVE/ACTIVE LB)

 

Proxies VIP: 192.168.101.100/32

PX1 IP:  192.168.100.101/24 GW 192.168.100.254

PX2 IP:  192.168.100.102/24 GW 192.168.100.254

PX3 IP:  192.168.100.103/24 GW 192.168.100.254

 

LBs VIP: 192.168.200.200/24

LB 1 IP: 192.168.200.201/24

LB 2 IP: 192.168.200.202/24

 

## Things to consider about the setup:

We can use either FWMARK-based LB or MAC replacement.

It is possible to avoid ARP issues with either tunnels or VIP assignment to interfaces.

There are a couple of tunneling options, such as GENEVE/GUE/FOU/GRE/IPIP, which can be used with IPVS (see the sketch below).
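As a rough illustration of the tunnel variant (IPIP here, with the addresses from the sketch above; in practice keepalived with lb_kind TUN would manage the LB side for you):

# On the active LB: virtual service plus the proxies as tunnel real servers
ipvsadm -A -t 192.168.101.100:3128 -s wrr
ipvsadm -a -t 192.168.101.100:3128 -r 192.168.100.101:3128 -i -w 1   # -i = IPIP tunneling
ipvsadm -a -t 192.168.101.100:3128 -r 192.168.100.102:3128 -i -w 1
ipvsadm -a -t 192.168.101.100:3128 -r 192.168.100.103:3128 -i -w 1

# On each proxy: terminate the tunnel and hold the VIP on it
modprobe ipip
ip addr add 192.168.101.100/32 dev tunl0
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0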

 

Thoughts?

 

----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: [hidden email]

 


Re: IPVS/LVS load balancing Squid servers, anyone did it?

Eliezer Croitoru
In reply to this post by FUSTE Emmanuel
Hey Emmanuel,

I was just trying to understand if and how nftables can do LB, and what exactly makes IPVS that fast.
It seems that IPVS in DR mode actually turns the Linux box into a switch.
I am still unsure whether MAC address replacement is better than FWMARK for DR.

Thanks,
Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: [hidden email]


Re: IPVS/LVS load balancing Squid servers, anyone did it?

FUSTE Emmanuel
On 27/08/2020 at 14:14, Eliezer Croitor wrote:
> Hey Emmanuel,
>
> I was just trying to understand if and how nftables can do LB, and what exactly makes IPVS that fast.
nftlb is simply a rule manager in front of nftables. And IPVS is not
faster than nftables; it is the other way around.
Everything you can do with IPVS is normally doable with nftables/nftlb, and more.
> It seems that IPVS in DR mode actually turns the Linux box into a switch.
Technically it is purely hardware (MAC) address mangling, not full switching.
> I am still unsure whether MAC address replacement is better than FWMARK for DR.
FWMARK is just for packet selection/grouping ahead of the MAC address
replacement; it does not replace the MAC rewriting.
And with the expressive capability of nftables, this FWMARK dance is no
longer necessary.
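To illustrate the FWMARK grouping (a sketch only, reusing the addresses from Bruce's example; keepalived can equally define the virtual_server on a fwmark):

# Group packets for the VIP:port under mark 1 in the mangle table...
iptables -t mangle -A PREROUTING -d 10.10.10.10 -p tcp --dport 3128 -j MARK --set-mark 1

# ...then hang the IPVS virtual service off the mark instead of an address:port
ipvsadm -A -f 1 -s wrr
ipvsadm -a -f 1 -r 10.10.10.11 -g -w 1
ipvsadm -a -f 1 -r 10.10.10.12 -g -w 1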

Emmanuel.


Re: IPVS/LVS load balancing Squid servers, anyone did it?

Bruce Rosenberg
In reply to this post by Amos Jeffries
Hi Amos,

Sure, please add it.
Always nice to contribute a little bit :)

Cheers,
Bruce
