Our goal is to set up a highly available, load-balanced, scalable environment. We want to use active-active mode.

We will have two layers:

  • frontend – virtual routers – at least two servers working in HA and dispatching requests to available real servers.
  • backend – real servers – at least two servers processing all requests.

Prerequisites - LVS and Real Server Configs

We need to launch 4 virtual machines:

  • LVS1, LVS2 – with at least one interface for incoming connections and two interfaces for backend communication.
    • Network interfaces:
      • eth0 – added automatically by VMWare for internet connection (we leave it, but it’s not going to be used)
      • eth1 – configured in VMWare in Manage -> Virtual Machine Settings as “custom: VMNet1”, which means it’s connected to switch VMNet1 (VMNet1 is attached to the host system)
      • eth2 – custom: VMNet12 (VMNet12 chosen arbitrarily – it’s just one of the virtual switches available in VMWare Workstation)
      • eth3 – custom: VMNet13 (likewise)
    • Software:
      • required: keepalived, ipvsadm,
      • useful: iptraf, tcpdump, screen,
  • RS1, RS2 – with at least two interfaces for communication with the frontend servers
    • Network interfaces:
      • eth0 – added automatically by VMWare for internet connection (we leave it, but it’s not going to be used)
      • eth1 – attached to custom: VMNet12
      • eth2 – attached to custom: VMNet13
    • Software:
      • required: keepalived, ipvsadm, apache2, php
      • useful: iptraf, tcpdump, screen,

The drawing below shows the desired infrastructure:

  • four virtual machines: LVS1, LVS2, RS1, RS2 and a host system which will work for us as an external client,
  • virtual switches: VMNet1, VMNet12, VMNet13 and NET, plus the eth* interfaces attached to these networks in each VM,
  • virtual router instances: VI_1 – Virtual Router Instance 1 – carrying VIP1, and similarly VI_2 with DIP1, VI_3 with VIP2, VI_4 with DIP2,
  • dotted lines represent links automatically generated by VMWare Workstation which will be used for standard Internet access, but not for communication in our cluster,
  • black solid lines and black font represent the network config in which all the virtual IPs are carried by their original master routers,
  • orange solid lines and orange font represent backup links and location of backup virtual IP.

We will start with configuring the networks and adding interfaces.

Step 1 - adding networks and interfaces

In this step we will use the VMWare Workstation settings for each virtual machine. VMWare Workstation has (at least in my test environment) 20 available switches/networks, from VMNet0 to VMNet19. (We say switches/networks, because each switch works like a separate VLAN, i.e. communication is confined within it.)

Besides the default NAT switch (VMNet8), we will use three additional virtual switches:

  • VMNet1 will be the frontend network (192.168.50.0/24) for receiving incoming connections (in our case from the host system only, but in general from anywhere)

    We want to set up:

    • on LVS1 and LVS2 interface eth1 with no IP configuration (we only need to bring up the interface – see commands below)
    • on the Windows host, interface “VMWare Network Adapter VMnet1”: statically assigned 192.168.50.100/24

    With keepalived we will set up two instances of virtual router (VRRP) in VMNet1:

    • VI_1 for 192.168.50.1 – set up between LVS1:eth1 (MASTER by default, prio 150) and LVS2:eth1 (BACKUP by default, prio 100)
    • VI_3 for 192.168.50.2 – set up between LVS1:eth1 (BACKUP by default, prio 100) and LVS2:eth1 (MASTER by default, prio 150)

    Keepalived will automatically assign the appropriate network configuration and will ensure high availability of both IPs if one LVS fails.

  • VMNet12 will be the first backend network (192.168.100.0/24) for communication between the LVSs and the real servers.

    We will set up the following network interfaces:

    • on LVS1 and LVS2 interface eth2 – no IP configuration (just set the interface up)
    • on RS1 interface eth1 – static 192.168.100.10/24
    • on RS2 interface eth1 – static 192.168.100.20/24

    With keepalived we will set up one instance of virtual router (VRRP) in VMNet12:

    • VI_2 for 192.168.100.1 – set up between LVS1:eth2 (MASTER by default, prio 150) and LVS2:eth2 (BACKUP by default, prio 100)
  • VMNet13 will be the second backend network (192.168.200.0/24) for communication between the LVSs and the real servers.

    We will set up the following network interfaces:

    • on LVS1 and LVS2 interface eth3 – no IP configuration (just set the interface up)
    • on RS1 interface eth2 – static 192.168.200.10/24
    • on RS2 interface eth2 – static 192.168.200.20/24

    With keepalived we will set up one instance of virtual router (VRRP) in VMNet13:

    • VI_4 for 192.168.200.1 – set up between LVS1:eth3 (BACKUP by default, prio 100) and LVS2:eth3 (MASTER by default, prio 150)

Step 2 - VRRP with LVS on LVS1 and LVS2

In the next step we will configure keepalived. We won’t assign any static IPs to the LVS network interfaces – keepalived will do this job for us.

Here’s the LVS1 keepalived.conf:
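A sketch of what /etc/keepalived/keepalived.conf on LVS1 could look like, following the design from Step 1. The interfaces, IPs, MASTER/BACKUP roles and priorities come from the text above; the virtual_router_id values, the HTTP port 80 and the rr scheduler are assumptions:

```
! LVS1 /etc/keepalived/keepalived.conf (sketch)

vrrp_instance VI_1 {            ! VIP1 on the frontend network
    state MASTER
    interface eth1
    virtual_router_id 51        ! assumed; must match LVS2
    priority 150
    virtual_ipaddress {
        192.168.50.1/24
    }
}

vrrp_instance VI_2 {            ! DIP1 on VMNet12
    state MASTER
    interface eth2
    virtual_router_id 52
    priority 150
    virtual_ipaddress {
        192.168.100.1/24
    }
}

vrrp_instance VI_3 {            ! VIP2 - backup here, master on LVS2
    state BACKUP
    interface eth1
    virtual_router_id 53
    priority 100
    virtual_ipaddress {
        192.168.50.2/24
    }
}

vrrp_instance VI_4 {            ! DIP2 - backup here, master on LVS2
    state BACKUP
    interface eth3
    virtual_router_id 54
    priority 100
    virtual_ipaddress {
        192.168.200.1/24
    }
}

virtual_server 192.168.50.1 80 {    ! LVS service behind VIP1 (port 80 assumed)
    delay_loop 6
    lb_algo rr                      ! scheduler is an assumption
    lb_kind NAT
    protocol TCP
    real_server 192.168.100.10 80 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 192.168.100.20 80 {
        TCP_CHECK { connect_timeout 3 }
    }
}

virtual_server 192.168.50.2 80 {    ! LVS service behind VIP2
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP
    real_server 192.168.200.10 80 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 192.168.200.20 80 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```

The TCP_CHECK health checks implement the backend TCP connection test mentioned below.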

LVS2 keepalived.conf:
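A corresponding sketch for LVS2 – the same file with the MASTER/BACKUP roles and priorities swapped; the virtual_server sections stay identical to LVS1’s and are omitted here:

```
! LVS2 /etc/keepalived/keepalived.conf (sketch; virtual_server sections as on LVS1)

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51        ! must match LVS1
    priority 100
    virtual_ipaddress {
        192.168.50.1/24
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth2
    virtual_router_id 52
    priority 100
    virtual_ipaddress {
        192.168.100.1/24
    }
}

vrrp_instance VI_3 {
    state MASTER
    interface eth1
    virtual_router_id 53
    priority 150
    virtual_ipaddress {
        192.168.50.2/24
    }
}

vrrp_instance VI_4 {
    state MASTER
    interface eth3
    virtual_router_id 54
    priority 150
    virtual_ipaddress {
        192.168.200.1/24
    }
}
```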

We need to restart keepalived:
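On a Debian-style system of this era this can be done with the init script (on systemd-based systems, systemctl restart keepalived does the same):

```shell
/etc/init.d/keepalived restart
```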

The VRRP and LVS automation is already in place, but probably nothing has changed yet: by default, all the interfaces are down. We need to set the links up for eth1, eth2 and eth3, so they can send and receive VRRP packets and so LVS can check TCP connections to the backend servers.

Note that the TCP connection to the backend servers will be available if and only if the proper backend IP has been assigned by keepalived via the VRRP protocol.

On LVS1 we execute:
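For example (a sketch – these commands only bring the links up, without assigning any addresses, exactly as the design requires):

```shell
ip link set dev eth1 up
ip link set dev eth2 up
ip link set dev eth3 up
```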

Then, in /etc/network/interfaces, we set:
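A sketch of the relevant part of /etc/network/interfaces – assuming it sets the NAT-side address on eth0 (the 192.168.47.132 mentioned at the end of this step) and makes eth1–eth3 come up at boot without IP configuration; the gateway address is an assumption based on the usual VMWare NAT layout:

```
auto eth0
iface eth0 inet static
    address 192.168.47.132
    netmask 255.255.255.0
    gateway 192.168.47.2      # assumed VMWare NAT gateway

# keepalived assigns the IPs on these; we only need the links up
auto eth1
iface eth1 inet manual
    up ip link set dev eth1 up

auto eth2
iface eth2 inet manual
    up ip link set dev eth2 up

auto eth3
iface eth3 inet manual
    up ip link set dev eth3 up
```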

and we execute:
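For example (restarting the networking service would also apply the new configuration):

```shell
ifdown eth0; ifup eth0
ifup eth1 eth2 eth3
```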

Finally, in /etc/sysctl.conf, we permanently enable packet forwarding by uncommenting the following line:
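The line to uncomment is:

```
net.ipv4.ip_forward=1
```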

For packet forwarding to take effect immediately (i.e. before the machine is rebooted), we can additionally run:
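The immediate equivalent of the sysctl.conf entry:

```shell
sysctl -w net.ipv4.ip_forward=1
```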

For LVS2 we repeat the same steps, replacing only the address with 192.168.47.131 (instead of 192.168.47.132).

Step 3 - setting up the backend communication

Our aim is to:

  • configure static IPs on RS1 and RS2 interfaces
  • set the routing policy so that a packet received through 192.168.100.1 by 192.168.100.x will be sent back through 192.168.100.1 (and likewise for 192.168.200.1 and 192.168.200.x)

RS1:

Add the following lines to file /etc/network/interfaces:
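A sketch of the RS1 entries, using the addresses from Step 1:

```
auto eth1
iface eth1 inet static
    address 192.168.100.10
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 192.168.200.10
    netmask 255.255.255.0
```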

Bash commands (bring interfaces up):
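For example:

```shell
ifup eth1
ifup eth2
```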

Bash commands (set the routing tables, so that packets from 192.168.100.0/24 will be routed via 192.168.100.1 and 192.168.200.0/24 via 192.168.200.1):
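A sketch of the policy-routing setup. The table names LVS1 and LVS2 match the checklist at the end of this article; the numeric table IDs 100 and 200 are arbitrary assumptions:

```shell
# Register the two routing tables (numeric IDs are arbitrary)
echo "100 LVS1" >> /etc/iproute2/rt_tables
echo "200 LVS2" >> /etc/iproute2/rt_tables

# Replies sourced from 192.168.100.0/24 are routed via table LVS1 ...
ip rule add from 192.168.100.0/24 table LVS1
ip route add 192.168.100.0/24 dev eth1 table LVS1
ip route add default via 192.168.100.1 table LVS1

# ... and replies sourced from 192.168.200.0/24 via table LVS2
ip rule add from 192.168.200.0/24 table LVS2
ip route add 192.168.200.0/24 dev eth2 table LVS2
ip route add default via 192.168.200.1 table LVS2
```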

RS2:

Add the following lines to file /etc/network/interfaces:
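A sketch of the RS2 entries, using the addresses from Step 1:

```
auto eth1
iface eth1 inet static
    address 192.168.100.20
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 192.168.200.20
    netmask 255.255.255.0
```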

Bash commands (bring interfaces up):
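For example:

```shell
ifup eth1
ifup eth2
```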

Bash commands (set the routing tables, so that packets from 192.168.100.0/24 will be routed via 192.168.100.1 and 192.168.200.0/24 via 192.168.200.1):
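The policy-routing commands are the same as on RS1, since the source networks and gateways do not depend on the host’s own address (table names match the checklist; the numeric IDs are arbitrary assumptions):

```shell
echo "100 LVS1" >> /etc/iproute2/rt_tables
echo "200 LVS2" >> /etc/iproute2/rt_tables

ip rule add from 192.168.100.0/24 table LVS1
ip route add 192.168.100.0/24 dev eth1 table LVS1
ip route add default via 192.168.100.1 table LVS1

ip rule add from 192.168.200.0/24 table LVS2
ip route add 192.168.200.0/24 dev eth2 table LVS2
ip route add default via 192.168.200.1 table LVS2
```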

Checklist

If something does not work:

  1. Check that VRRP is working correctly on LVS1 and LVS2 (i.e. the addresses are up just as if everything worked)
  2. Check ipvsadm on LVS1 and LVS2 (where 50.1 is up, real servers 100.10 and 100.20 should be added; where 50.2 is up, 200.10 and 200.20)
  3. Check ip rule show on RS1 and RS2 – whether they contain a lookup of table LVS1 for 100.0/24 and of table LVS2 for 200.0/24
  4. Check ip route show for those tables – whether each contains two entries:
     • in LVS1: the 100.0/24 network and a default gateway via 100.1
     • in LVS2: the 200.0/24 network and a default gateway via 200.1
  5. Check that ip_forward is set to 1 on LVS1 and LVS2
  6. Check that the network cards are attached to the correct networks (VMNet1, 12, 13).
  7. Check that on the host system side (in the case of the RSO class – Windows and VMWare) the network 192.168.50.100/24 is configured in the Control Panel on the network adapter described as VMNet1