Getting to know Citrix NetScaler CPX

Citrix NetScaler is a commercial load balancer that competes with open-source load balancers such as HAProxy and Nginx. NetScaler is also available as a container, in a product version known as NetScaler CPX. Even better, there is a free version called CPX Express. For the rest of this post, I'm using my Mac laptop running macOS Sierra and Docker for Mac version 1.12.

CPX Express can be installed from the Docker Store.

$ docker pull store/citrix/netscalercpx:11.1-53.11

You should now have the CPX image available:

$ docker images store/citrix/netscalercpx 
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
store/citrix/netscalercpx   11.1-53.11          267dfad3a03e        4 weeks ago         413MB

Create a container from this image:

$ docker run -e EULA=yes -dt -p 22 -p 80 --name cpx --ulimit core=-1 --cap-add=NET_ADMIN store/citrix/netscalercpx:11.1-53.11
275626b96389755a88362c8df3ae4851f13e45d22f502b29900612fc2da28444

Determine the SSH port of this running container:

$ docker port cpx 22
0.0.0.0:32849

Log in to the NetScaler CPX using the username/password combination of ‘root/linux’:

$ ssh -p 32849 root@127.0.0.1 
The authenticity of host '[127.0.0.1]:32849 ([127.0.0.1]:32849)' can't be established.
ECDSA key fingerprint is SHA256:N3dlhNYbvYjOPkbt9ogrg2fwwUDWPVs8JwvLuE64/RQ.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[127.0.0.1]:32849' (ECDSA) to the list of known hosts.
root@127.0.0.1's password: 
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.19.0-39-generic x86_64)

Once logged in, you can execute almost any regular NetScaler CLI command by passing the command to the cli_script.sh script. For example:

root@275626b96389:~# cli_script.sh 'show ip'   
exec: show ip
    Ipaddress        Traffic Domain  Type             Mode     Arp      Icmp     Vserver  State
    ---------        --------------  ----             ----     ---      ----     -------  ------
1)  172.17.0.2       0               NetScaler IP     Active   Enabled  Enabled  NA       Enabled
2)  192.0.0.1        0               SNIP             Active   Enabled  Enabled  NA       Enabled
Done
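
If you would rather script the CLI than type into an interactive shell, the same command can be run over SSH from Python. Here is a minimal sketch using the paramiko library (the port and password are the ones from this post, and /var/netscaler/bins is where cli_script.sh lives in this CPX image; this is purely illustrative):

import paramiko

# Connect to the SSH port that Docker published for the CPX (see `docker port cpx 22`)
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("127.0.0.1", port=32849, username="root", password="linux")

# cli_script.sh wraps the NetScaler CLI; non-interactive shells may need the full path
stdin, stdout, stderr = client.exec_command(
    "/var/netscaler/bins/cli_script.sh 'show ip'")
print(stdout.read().decode())
client.close()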

In the next blog we’ll see how to exercise the CPX’s primary function: load balancing.

Automating NetScaler CPX configuration

In a previous post we saw how to configure the NetScaler CPX to perform URL-based routing to a set of microservices. The microservices (accounts, cart and catalog) run in containers. To configure the NetScaler, we had to figure out the service names and IP addresses of the containers and type the URL routes into the NetScaler config. This doesn't scale as the number of microservices and containers increases. Also, in a microservices architecture, the topology (size, layout) of the service is expected to change frequently. What if we could automatically configure (and reconfigure) the CPX as the topology gets created and updated?

To do this, we use a simple pattern that can be repeated in almost any automation scenario:

  1. We obtain container information (such as IP addresses) from the container orchestrator. In this example, the orchestrator is Docker Compose.
  2. We add discoverable metadata to the containers. In this case we use Docker labels to carry information about the URL routes.
  3. We combine this information and configure the NetScaler CPX using its REST-based API (Nitro).

You can follow along with the example in the ‘ex3’ folder of the GitHub repository (https://github.com/chiradeep/cpxblog).

The part of the Docker Compose file that describes the services looks like this:

services:
  accounts_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "accounts"
      com.widgetshop.url: "/accounts/*"
  accounts_b:
    extends: accounts_a

  cart_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "cart"
      com.widgetshop.url: "/cart/*"
  
  catalog_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "catalog"
      com.widgetshop.url: "/catalog/*"
  catalog_b:
    extends: catalog_a

We use the extends keyword to quickly replicate common attributes of containers. For example, the accounts_b container is identical to accounts_a. Each container is annotated with labels. The first label com.widgetshop.service identifies the service name. The second label com.widgetshop.url identifies the URL for that service. To configure the NetScaler CPX now, all we need are the IP addresses of the running containers and their labels.

We’ll use the Nitro Python SDK to configure the NetScaler CPX. We’ll use the Docker Python client API to discover information from Docker Compose. You can see the code in the ‘ex3/automate’ subfolder of the Github repository (https://github.com/chiradeep/cpxblog).
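
To make the discovery step concrete, here is a minimal sketch of reading the labels and container IPs with the Docker SDK for Python (the label keys are the ones defined in the Compose file above; this is an illustration, not the exact code from the repository):

import docker

client = docker.from_env()

# Find every container that carries the service label, e.g. the "accounts" service
containers = client.containers.list(
    filters={"label": "com.widgetshop.service=accounts"})

for c in containers:
    url = c.labels.get("com.widgetshop.url")              # e.g. "/accounts/*"
    networks = c.attrs["NetworkSettings"]["Networks"]
    ips = [n["IPAddress"] for n in networks.values()]     # IPs assigned by Docker
    print(c.name, url, ips)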

First we discover the static information from Docker: the service names and the URLs. We’ll use this to configure the NetScaler thus:

  • Each service in Docker Compose is mapped to a Service Group in NetScaler
  • An lb vserver (one per service group) is created to load balance each service group
  • A cs policy per URL is created to switch traffic from the cs vserver to the respective lb vserver

In main.py we'll get the service names and URLs and configure the NetScaler with this information:

    for svc in SERVICES:
        url = dockr.get_service_url(SVC_LABEL_NAME, svc, SVC_LABEL_URL)
        services_urls[svc] = url
    # create cs vserver, lb vservers and service groups
    netskaler.configure_cs_frontend(CS_VSERVER_NAME, "127.0.0.1",
                                    CS_VSERVER_PORT, services_urls)

Note that we use 127.0.0.1 as the VIP for the cs vserver. This tells the NetScaler CPX to use the IP address that the Docker engine assigned to it.
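
For orientation, here is a rough sketch of the kind of NITRO REST calls that configure_cs_frontend ends up making, written against the raw API with requests (the post's code uses the Nitro Python SDK instead; the resource and attribute names follow the public NITRO documentation, and the nsroot credentials and the cs vserver name are assumptions, so verify them against your CPX):

import requests

NS = "http://127.0.0.1:32910"   # NSIP and NITRO port discovered from Docker
HEADERS = {"X-NITRO-USER": "nsroot", "X-NITRO-PASS": "nsroot"}

def nitro_add(resource, payload):
    # POST /nitro/v1/config/<resource> creates a configuration object
    r = requests.post("%s/nitro/v1/config/%s" % (NS, resource),
                      json={resource: payload}, headers=HEADERS)
    r.raise_for_status()

# The cs vserver front end; 127.0.0.1 means "use the CPX's own IP"
nitro_add("csvserver", {"name": "widgetshop_cs", "servicetype": "HTTP",
                        "ipv46": "127.0.0.1", "port": 88})
# Per service: a service group, an lb vserver and a cs policy bound to the cs vserver
nitro_add("servicegroup", {"servicegroupname": "accounts", "servicetype": "HTTP"})
nitro_add("lbvserver", {"name": "Accounts_lb", "servicetype": "HTTP"})
nitro_add("lbvserver_servicegroup_binding",
          {"name": "Accounts_lb", "servicegroupname": "accounts"})
nitro_add("cspolicy", {"policyname": "accounts_policy", "url": "/accounts/*"})
nitro_add("csvserver_cspolicy_binding",
          {"name": "widgetshop_cs", "policyname": "accounts_policy",
           "targetlbvserver": "Accounts_lb"})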

Next we’ll get the container IPs and populate the service groups in the NetScaler:

    # populate service group members into service groups
    for svc in SERVICES:
        ip_ports = dockr.get_service_members(SVC_LABEL_NAME, svc)
        netskaler.add_remove_services(svc, ip_ports)
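
The add_remove_services call is what makes repeated runs idempotent: it compares the members that Docker reports against the members already bound in the CPX and only adds or removes the difference (as the logs later in this post show). A simplified sketch of that reconciliation logic (nitro_list_members, nitro_bind and nitro_unbind are placeholders for the SDK calls in netscaler.py):

def reconcile_servicegroup(svc, desired_ip_ports):
    # What Docker says should be bound vs. what the CPX currently has bound
    desired = set(desired_ip_ports)            # e.g. {("172.22.0.6", 80)}
    current = set(nitro_list_members(svc))     # placeholder: query existing bindings

    for ip, port in desired - current:
        nitro_bind(svc, ip, port)              # placeholder: bind a new member
    for ip, port in current - desired:
        nitro_unbind(svc, ip, port)            # placeholder: unbind a departed member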

To tie it all together, we need this Python code to run whenever the Docker Compose file is executed (docker-compose up -d). To achieve this we build a container to run the Python code. Whenever we call docker-compose up -d, the container is built (if necessary) and the main program runs. The relevant portion of the Docker Compose file:

  automate:
    build: ./automate
    image: automate
    container_name: automate
    depends_on:
      - cpx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    network_mode: "host"

This builds a container image called automate and, when run, starts a container named automate after the CPX container is created (thanks to the depends_on clause). Mounting /var/run/docker.sock lets the code inside the container query the Docker API. Get the topology up:

$ docker-compose up -d
Creating network "ex3_default" with the default driver
Building automate
Step 1 : FROM python:2
 ---> 5e79709d3871
[...truncated...]
Step 9 : ENTRYPOINT python /usr/src/app/main.py
 ---> Running in 66f3b747c085
 ---> 70832ec8b6b3
Removing intermediate container 66f3b747c085
Successfully built 70832ec8b6b3
WARNING: Image for service automate was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating ex3_accounts_b_1
Creating ex3_catalog_b_1
Creating ex3_accounts_a_1
Creating ex3_cart_a_1
Creating ex3_catalog_a_1
Creating ex3_cpx_1
Creating automate

The image for automate was built since it didn’t exist. The next run of docker-compose will find the image and not build it. Let’s check out the logs for this run:

$ docker logs automate
[main.py:  ]  (MainThread) NS_PORT 32910
[dockr.py:get_service_url]  (MainThread) Getting backends for svc label com.widgetshop.service=accounts
[main.py:  ]  (MainThread) Service: accounts, url: /accounts/*
[dockr.py:get_service_url]  (MainThread) Getting backends for svc label com.widgetshop.service=cart
[main.py:  ]  (MainThread) Service: cart, url: /cart/*
[dockr.py:get_service_url]  (MainThread) Getting backends for svc label com.widgetshop.service=catalog
[main.py:  ]  (MainThread) Service: catalog, url: /catalog/*
[netscaler.py:wait_for_ready]  (MainThread) NetScaler API is not ready
[netscaler.py:wait_for_ready]  (MainThread) NetScaler is ready at 127.0.0.1:32910
[netscaler.py:configure_cs_frontend]  (MainThread) LB Catalog_lb, ServiceGroup catalog
[netscaler.py:configure_cs_frontend]  (MainThread) Policy catalog_policy, rule /catalog/*
[netscaler.py:configure_cs_frontend]  (MainThread) Policy catalog_policy, rule /catalog/*
[netscaler.py:configure_cs_frontend]  (MainThread) LB Accounts_lb, ServiceGroup accounts
[netscaler.py:configure_cs_frontend]  (MainThread) Policy accounts_policy, rule /accounts/*
[netscaler.py:configure_cs_frontend]  (MainThread) Policy accounts_policy, rule /accounts/*
[netscaler.py:configure_cs_frontend]  (MainThread) LB Cart_lb, ServiceGroup cart
[netscaler.py:configure_cs_frontend]  (MainThread) Policy cart_policy, rule /cart/*
[netscaler.py:configure_cs_frontend]  (MainThread) Policy cart_policy, rule /cart/*
[dockr.py:get_service_members]  (MainThread) Getting backends for svc label com.widgetshop.service=accounts
[main.py:  ]  (MainThread) Service: accounts, ip_ports=[(u'172.22.0.4', 80), (u'172.22.0.2', 80)]
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.2:80 from service group accounts 
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.4:80 from service group accounts 
[dockr.py:get_service_members]  (MainThread) Getting backends for svc label com.widgetshop.service=cart
[main.py:  ]  (MainThread) Service: cart, ip_ports=[(u'172.22.0.6', 80)]
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.6:80 from service group cart 
[dockr.py:get_service_members]  (MainThread) Getting backends for svc label com.widgetshop.service=catalog
[main.py:  ]  (MainThread) Service: catalog, ip_ports=[(u'172.22.0.5', 80), (u'172.22.0.3', 80)]
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.3:80 from service group catalog 
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.5:80 from service group catalog

From the logs, we can see that:

  • The code discovered the API port for the NetScaler CPX (32910) using the Docker API
  • The Docker API also provides the URL for each of the services
  • The code waits for the NetScaler NITRO API to be ‘ready’. Since the automate container runs immediately after the CPX, the CPX may not have fully booted before the code runs, so the code polls the NITRO API port until it is available (see the sketch after this list)
  • The Docker API provides the membership info for each service group.
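
The readiness check itself is simple: keep trying to open a TCP connection to the NITRO port until it succeeds. A minimal sketch of the idea (the actual wait_for_ready in netscaler.py may differ in its details):

import socket
import time

def wait_for_ready(host, port, timeout=60):
    # Poll until the NITRO port accepts TCP connections; the CPX may still be booting
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=2).close()
            return True
        except socket.error:
            time.sleep(2)
    return False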

Let’s test this out:

$ docker-compose port ex3_cpx_1 88
0.0.0.0:32897
$ wget -q -O - http://localhost:32897/accounts/

This is the Accounts Service

Let’s update the topology by adding a second cart container:

  cart_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "cart"
      com.widgetshop.url: "/cart/*"
  cart_b:
    extends: cart_a

Run this with docker-compose up -d:

$ docker-compose up -d
ex3_catalog_a_1 is up-to-date
ex3_accounts_b_1 is up-to-date
ex3_catalog_b_1 is up-to-date
ex3_accounts_a_1 is up-to-date
ex3_cart_a_1 is up-to-date
ex3_cpx_1 is up-to-date
Starting automate
Creating ex3_cart_b_1
$ docker logs automate
[..truncated..]
Service: cart, ip_ports=[(u'172.22.0.8', 80), (u'172.22.0.6', 80)]
Binding 172.22.0.8:80 from service group cart 
Service 172.22.0.6:80 is already bound to  service group cart

Since the automate container had exited after the previous run of docker-compose, it was started again. docker-compose also created the second cart container. During this second run the automate program discovered the previously configured cart service group members and only added the new container.

Now if we remove cart_b from the docker-compose.yaml and re-run:

$ docker-compose up -d --remove-orphans
Removing orphan container "ex3_cart_b_1"
ex3_catalog_b_1 is up-to-date
ex3_accounts_a_1 is up-to-date
ex3_accounts_b_1 is up-to-date
ex3_catalog_a_1 is up-to-date
ex3_cart_a_1 is up-to-date
ex3_cpx_1 is up-to-date
Starting automate
$ docker logs automate
Unbinding 172.22.0.8:80 from service group cart 

The departed container has been unbound from the service group.

While the example code is fairly complete, it has a couple of limitations:

  • It relies on the automate container running after the service containers are created. A better solution is to keep the automate container running in the background, listening to the Docker API event stream.
  • The program also hard-codes things such as the service names, service label names and the cs vserver name in the main.py file. These could instead be passed to the automate container as environment variables in the docker-compose.yaml file.

Both are left as exercises to the reader.
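
As a starting point for the first exercise, the Docker SDK for Python exposes the event stream directly. A minimal sketch (reconfigure() is a placeholder for re-running the discovery and NITRO configuration shown above):

import docker

client = docker.from_env()

# Block on the Docker event stream and react to container lifecycle changes
filters = {"type": "container", "event": ["start", "die"]}
for event in client.events(decode=True, filters=filters):
    labels = event.get("Actor", {}).get("Attributes", {})
    if "com.widgetshop.service" in labels:
        reconfigure()   # placeholder: rediscover members and update the CPX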

URL-based routing with NetScaler CPX

In a previous blog post, I showed how to use the NetScaler CPX to load balance a set of backend web servers. In this post, I’ll show how to use the CPX in a modern microservices-based architecture.

In a microservices environment, a service is split into several cooperating microservices, each located at a different network endpoint. A different way to build the same service is the monolith pattern.

For example, imagine a WidgetShop service that has been built as a monolith:

[Figure: the WidgetShop service built as a monolith]

Although there are clearly identifiable services within the monolith, the entire application is always deployed and scaled together. So, even if the cart service doesn't need to be scaled up, it still gets deployed to every backend server. When the monolith is split into microservices, it might look like this:

[Figure: the WidgetShop service split into microservices]

Each microservice runs in containers and can be deployed and scaled independently of the other microservices. Depending on the networking, each Docker container could get a unique routable IP address, or a unique TCP port on its host. To route the incoming traffic properly, the load balancer has to look at the path component of the incoming URL. Based on the path, the load balancer “switches” or “routes” the traffic to different servers/microservices/containers. This is sometimes known as URL-based routing.

To achieve this in NetScaler CPX we use a feature called ‘content switching’. Content switching is very powerful: it can switch/route traffic based on the URL, HTTP headers, hostname, the payload itself and so on. To demonstrate URL-based routing, we'll use Docker Compose again to lay out the WidgetShop topology. To follow along, use the ‘ex2’ folder of this git repository: https://github.com/chiradeep/cpxblog/

The Docker Compose file:

version: '2'
services:
  accounts_a:
    image: httpd:alpine
    volumes:
      - ${PWD}/:/usr/local/apache2/htdocs/
    expose:
      - 80
  accounts_b:
    image: httpd:alpine
    volumes:
      - ${PWD}/:/usr/local/apache2/htdocs/
    expose:
      - 80

  cart_a:
    image: httpd:alpine
    volumes:
      - ${PWD}/:/usr/local/apache2/htdocs/
    expose:
      - 80
  
  catalog_a:
    image: httpd:alpine
    volumes:
      - ${PWD}/:/usr/local/apache2/htdocs/
    expose:
      - 80
  catalog_b:
    image: httpd:alpine
    volumes:
      - ${PWD}/:/usr/local/apache2/htdocs/
    expose:
      - 80
  cpx:
    image: store/citrix/netscalercpx:11.1-53.11 
    ports:
      - 22
      - 88
    tty: true
    privileged: true

You can see that there are two accounts containers, two catalog containers and one cart container, plus the CPX. Let's run this and determine the IP addresses Docker assigns to these containers.

$ docker-compose up -d
$ names=$(docker-compose ps | awk -F" " '{print $1}' | tail -n+3)
$ for c in $names ; do    ip=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $c);   echo "$c : $ip"; done
ex2_accounts_a_1 : 172.21.0.6
ex2_accounts_b_1 : 172.21.0.5
ex2_cart_a_1 : 172.21.0.2
ex2_catalog_a_1 : 172.21.0.4
ex2_catalog_b_1 : 172.21.0.3
ex2_cpx_1 : 172.21.0.7

To configure content switching in the NetScaler we first have to enable it:

$ docker-compose port ex2_cpx_1 22
0.0.0.0:32862
$ ssh -p 32862 root@localhost
root@629e788ff846:~# cli_script.sh 'enable feature cs'
Done

In a NetScaler, a content switching virtual server (“cs vserver”) acts as the front-end listener. It in turn sends traffic to lb vservers that represent the backend microservices. The containers backing each microservice are grouped into service groups.

[Figure: the cs vserver switching traffic to lb vservers, each backed by a service group]

The rule that tells the cs vserver to send traffic to a particular lb vserver is called a policy (cs policy). The full set of CLI commands looks like this.

First, configure the service groups with the backend IPs we have already discovered:

cli_script.sh 'add servicegroup accounts HTTP'
cli_script.sh 'add servicegroup cart HTTP'
cli_script.sh 'add servicegroup catalog HTTP'
cli_script.sh 'bind servicegroup accounts 172.21.0.6 80'
cli_script.sh 'bind servicegroup accounts 172.21.0.5 80'
cli_script.sh 'bind servicegroup cart 172.21.0.2 80'
cli_script.sh 'bind servicegroup catalog 172.21.0.3 80'
cli_script.sh 'bind servicegroup catalog 172.21.0.4 80'

Then create the lb vservers and bind the servicegroups to them:

cli_script.sh 'add lb vserver Accounts HTTP'
cli_script.sh 'add lb vserver Cart HTTP'
cli_script.sh 'add lb vserver Catalog HTTP'
cli_script.sh 'bind lb vserver Accounts accounts'
cli_script.sh 'bind lb vserver Cart cart'
cli_script.sh 'bind lb vserver Catalog catalog'

Create the WidgetShop cs vserver:

cli_script.sh 'add cs vserver WidgetShop HTTP 172.21.0.7 88'

Create the policies and bind them to the cs vserver:

cli_script.sh 'add cs policy accounts_policy -url "/accounts/*"'
cli_script.sh 'add cs policy cart_policy -url "/cart/*"'
cli_script.sh 'add cs policy catalog_policy -url "/catalog/*"'
cli_script.sh 'bind cs vserver WidgetShop -policyname accounts_policy -targetLBVServer Accounts'
cli_script.sh 'bind cs vserver WidgetShop -policyname cart_policy -targetLBVServer Cart'
cli_script.sh 'bind cs vserver WidgetShop -policyname catalog_policy -targetLBVServer Catalog'

Try it out:

$ docker-compose port ex2_cpx_1 88
0.0.0.0:32861
$ wget -q -O - http://localhost:32861/accounts/

This is the Accounts Service

In a subsequent blog post we’ll see how to automate this somewhat manual process.

Introduction to Load Balancing with NetScaler CPX

The primary job of a load balancer is to spread client traffic across a set of servers that can handle it. Compared to an architecture with a single server, this adds security, scalability, resilience and availability.

[Figure: a load balancer with a VIP at 53.52.51.20:80 in front of a set of backend servers (right-hand side of the diagram)]

The load balancer accepts (‘terminates’) connections from the clients and initiates new connections to the servers (“backends”). The part of the load balancer that accepts connections from clients is called the “lb vserver” in NetScaler terminology (the “frontend” or “listener” in HAProxy; the “server” block in Nginx). The backend servers that accept the load from the load balancer are called “services” in NetScaler (“server” in HAProxy, “upstream” in Nginx).

To create the right-hand-side configuration in NetScaler, you:

  1. Create an lb vserver with an IP (“VIP”) on 53.52.51.20 and port 80
  2. Create services for each of the backend servers
  3. Bind the services from step 2 to the lb vserver

To recreate the topology on the right-hand side, we'll use docker-compose with this compose file (you can find it in the ex1 folder of this git repository: https://github.com/chiradeep/cpxblog/):

$ cat docker-compose.yaml
version: '2'
services:
  web_a:
    image: httpd:alpine
    expose:
      - 80

  web_b:
    image: httpd:alpine
    expose:
      - 80
  web_c:
    image: httpd:alpine
    expose:
      - 80
  cpx:
    image: store/citrix/netscalercpx:11.1-53.11 
    links:
      - web_a
      - web_b
      - web_c
    ports:
      - 22
      - 88
    tty: true
    privileged: true

The file specifies three identical containerized web servers, each running Apache httpd. The fourth container is the NetScaler CPX, with references (links) to the three web servers. The ports declaration tells Docker to map ephemeral host ports to container ports 22 and 88. We'll use port 88 as the frontend/lb vserver listening port (we can't use 80 since the NetScaler reserves it). Get this topology running:

$ docker-compose up -d
$ docker-compose ps
   Name                  Command               State                                   Ports                                  
-----------------------------------------------------------------------------------------------------------------------------
ex1_cpx_1     /bin/sh -c bash -C '/var/n ...   Up      161/udp, 0.0.0.0:32855->22/tcp, 443/tcp, 80/tcp, 0.0.0.0:32854->88/tcp 
ex1_web_a_1   httpd-foreground                 Up      80/tcp                                                                 
ex1_web_b_1   httpd-foreground                 Up      80/tcp                                                                 
ex1_web_c_1   httpd-foreground                 Up      80/tcp    

We can see that Docker has mapped ports 22 and 88 on the CPX to the host, but not port 80 on the httpd containers (or 80, 443 and 161/udp on the CPX). These ports are, however, visible between the containers, just not to the outside world.

To configure the NetScaler CPX, we need the IP address of each container:

$ for c in ex1_cpx_1  ex1_web_a_1 ex1_web_b_1 ex1_web_c_1 
> do 
>   ip=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $c)
>   echo "$c : $ip"
> done
ex1_cpx_1 : 172.21.0.5
ex1_web_a_1 : 172.21.0.2
ex1_web_b_1 : 172.21.0.3
ex1_web_c_1 : 172.21.0.4

Armed with these IPs, we’ll configure the NetScaler thus:

  1. Create ‘services’ for each httpd container
  2. Create an ‘lb vserver’ using the IP and port of the CPX
  3. Bind the services to the lb vserver

$ docker-compose port ex1_cpx_1 22
0.0.0.0:32855
$ ssh -p 32855 root@127.0.0.1
root@127.0.0.1's password: 
...
root@2d644f279d16:~# cli_script.sh 'add service Web_A 172.21.0.2 HTTP 80'
exec: add service Web_A 172.21.0.2 HTTP 80
Done
root@2d644f279d16:~# cli_script.sh 'add service Web_B 172.21.0.3 HTTP 80'
exec: add service Web_B 172.21.0.3 HTTP 80
Done
root@2d644f279d16:~# cli_script.sh 'add service Web_C 172.21.0.4 HTTP 80'
exec: add service Web_C 172.21.0.4 HTTP 80
Done
root@2d644f279d16:~# cli_script.sh 'add lb vserver Web HTTP 172.21.0.5 88'
exec: add lb vserver Web HTTP 172.21.0.5 88
Done
root@2d644f279d16:~# cli_script.sh 'bind lb vserver Web Web_A'           
exec: bind lb vserver Web Web_A
Done
root@2d644f279d16:~# cli_script.sh 'bind lb vserver Web Web_B'
exec: bind lb vserver Web Web_B
Done
root@2d644f279d16:~# cli_script.sh 'bind lb vserver Web Web_C'
exec: bind lb vserver Web Web_C
Done
root@2d644f279d16:~# cli_script.sh 'show lb vserver Web'  
exec: show lb vserver Web
    Web (172.21.0.5:88) - HTTP  Type: ADDRESS 
    State: UP
    Effective State: UP
    Client Idle Timeout: 180 sec
    Down state flush: ENABLED
    Disable Primary Vserver On Down : DISABLED
    Appflow logging: ENABLED
    Port Rewrite : DISABLED
    No. of Bound Services :  3 (Total)   3 (Active)
    Configured Method: LEASTCONNECTION
    Current Method: Round Robin, Reason: A new service is bound      BackupMethod: ROUNDROBIN
    Mode: IP
    Persistence: NONE
    <...truncated...>
2) Web_A (172.21.0.2: 80) - HTTP State: UP  Weight: 1
3) Web_B (172.21.0.3: 80) - HTTP State: UP  Weight: 1
4) Web_C (172.21.0.4: 80) - HTTP State: UP  Weight: 1
Done

We can see that the backends are in state UP, but they don't have any traffic yet:

root@2d644f279d16:~# cli_script.sh 'stat lb vserver Web'      
exec: stat lb vserver Web
Virtual Server Summary
                      vsvrIP  port     Protocol        State   Health  actSvcs 
Web               172.21.0.5    88         HTTP           UP      100        3

           inactSvcs 
Web                0

Virtual Server Statistics
                                          Rate (/s)                Total 
Vserver hits                                       0                    0
Requests                                           0                    0
Responses                                          0                    0
Request bytes                                      0                    0
<....truncated...>
Web_A             172.21.0.2    80         HTTP           UP        0      0/s
Web_B             172.21.0.3    80         HTTP           UP        0      0/s
Web_C             172.21.0.4    80         HTTP           UP        0      0/s
Done

Let’s send some traffic to this topology!
At the Docker host (my Mac laptop in this case):

$ docker-compose port  ex1_cpx_1 88
0.0.0.0:32854
$  wget -q -O - http://localhost:32854/

It works!

We can send more traffic in a loop:

$ i=0; while [ $i -lt 100 ]; do wget -q http://localhost:32854/ -O /dev/null; let i=i+1; done;
$ ssh -p 32855 root@127.0.0.1 "/var/netscaler/bins/cli_script.sh  'stat lb vserver Web'"
root@127.0.0.1's password: 
exec: stat lb vserver Web

Virtual Server Summary
                      vsvrIP  port     Protocol        State   Health  actSvcs 
Web               172.21.0.5    88         HTTP           UP      100        3

           inactSvcs 
Web                0

Virtual Server Statistics
                                          Rate (/s)                Total 
Vserver hits                                       0                  101
Requests                                           0                  101
Responses                                          0                  101
Request bytes                                      0                11716
<...truncated...>
Web_A             172.21.0.2    80         HTTP           UP       34      0/s
Web_B             172.21.0.3    80         HTTP           UP       34      0/s
Web_C             172.21.0.4    80         HTTP           UP       33      0/s
Done

We can see that the 101 requests we sent have been split almost evenly among the 3 backend containers (34, 34 and 33)!

In the next blog we’ll explore how to configure the CPX for a common use case: URL routing.