Automating NetScaler CPX configuration

In a previous post we saw how to configure the NetScaler CPX to perform URL-based routing to a set of microservices. The microservices (accounts, cart and catalog) run in containers. To configure the NetScaler, we had to figure out the service names and IP addresses of the containers, and type the URL routes into the NetScaler config. This doesn’t scale as the number of microservices and containers increases. Also, in a microservices architecture, the topology (size, layout) of the service is expected to change frequently. What if we could automatically configure (and reconfigure) the CPX as the topology gets created and updated?

To do this, we use a simple pattern that can be repeated in almost any automation scenario:

  1. We obtain container information (such as IP addresses) from the container orchestrator. In this example, the orchestrator is Docker Compose.
  2. We add metadata to the containers that is discoverable. In this case we use Docker labels to add information about the URL routes.
  3. We combine this information and configure the NetScaler CPX using its REST-based API (Nitro).

You can follow along with the example in the ‘ex3’ folder of the GitHub repository (https://github.com/chiradeep/cpxblog).

The part of the Docker Compose file that describes the services looks like this:

services:
  accounts_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "accounts"
      com.widgetshop.url: "/accounts/*"
  accounts_b:
    extends: accounts_a

  cart_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "cart"
      com.widgetshop.url: "/cart/*"
  
  catalog_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "catalog"
      com.widgetshop.url: "/catalog/*"
  catalog_b:
    extends: catalog_a

We use the extends keyword to quickly replicate common attributes of containers. For example, the accounts_b container is identical to accounts_a. Each container is annotated with labels. The first label com.widgetshop.service identifies the service name. The second label com.widgetshop.url identifies the URL for that service. To configure the NetScaler CPX now, all we need are the IP addresses of the running containers and their labels.
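To make the label-based discovery concrete, here is a minimal sketch of how the labels might be turned into a service-to-URL map. The function and variable names are illustrative, not the repo’s actual code; the input mimics the label dictionaries that the Docker API reports for each container.

```python
# Illustrative sketch: group containers by the service label and pick up the
# URL route label. Input is a list of label dicts as Docker would report them.
SVC_LABEL = "com.widgetshop.service"
URL_LABEL = "com.widgetshop.url"

def routes_from_labels(containers):
    """containers: list of label dicts, one per running container."""
    routes = {}
    for labels in containers:
        svc = labels.get(SVC_LABEL)
        if svc:
            routes[svc] = labels.get(URL_LABEL)
    return routes

sample = [
    {SVC_LABEL: "accounts", URL_LABEL: "/accounts/*"},
    {SVC_LABEL: "accounts", URL_LABEL: "/accounts/*"},  # accounts_b: same labels
    {SVC_LABEL: "cart", URL_LABEL: "/cart/*"},
]
print(routes_from_labels(sample))
# → {'accounts': '/accounts/*', 'cart': '/cart/*'}
```

Because replicated containers (accounts_a, accounts_b) carry identical labels, the map naturally collapses them into one route per service.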

We’ll use the Nitro Python SDK to configure the NetScaler CPX. We’ll use the Docker Python client API to discover information from Docker Compose. You can see the code in the ‘ex3/automate’ subfolder of the Github repository (https://github.com/chiradeep/cpxblog).

First we discover the static information from Docker: the service names and the URLs. We’ll use this to configure the NetScaler thus:

  • Each service in Docker Compose is mapped to a service group in NetScaler
  • An lb vserver (one per service group) is created to load balance each service group
  • A cs policy per URL is created to switch traffic from the cs vserver to the respective lb vserver

In main.py we’ll get the service names and URLs and configure the NetScaler with this information:

    for svc in SERVICES:
        url = dockr.get_service_url(SVC_LABEL_NAME, svc, SVC_LABEL_URL)
        services_urls[svc] = url
    # create cs vserver, lb vservers and service groups
    netskaler.configure_cs_frontend(CS_VSERVER_NAME, "127.0.0.1",
                                    CS_VSERVER_PORT, services_urls)

Note that we use 127.0.0.1 as the VIP for the cs vserver. This tells the NetScaler CPX to use the IP address assigned to it by the Docker engine.
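For readers curious what the mapping above amounts to on the wire, here is a rough sketch expressed as raw Nitro REST payloads (the repo itself uses the Nitro Python SDK). The resource and attribute names follow the Nitro REST API, but the vserver and policy names are illustrative; each payload would be POSTed to /nitro/v1/config/<resource> on the CPX.

```python
# Sketch of the Nitro REST objects implied by the mapping: one service group,
# one lb vserver bound to it, and a cs policy steering the URL to that lb
# vserver. Names like "widgetshop_cs" and "<svc>_lb" are illustrative.
import json

def nitro_payloads(cs_name, vip, port, svc, url):
    """Return (resource, payload) pairs to POST to /nitro/v1/config/<resource>."""
    lb = "%s_lb" % svc
    policy = "%s_policy" % svc
    return [
        ("servicegroup", {"servicegroup": {
            "servicegroupname": svc, "servicetype": "HTTP"}}),
        ("lbvserver", {"lbvserver": {
            "name": lb, "servicetype": "HTTP"}}),
        ("lbvserver_servicegroup_binding", {"lbvserver_servicegroup_binding": {
            "name": lb, "servicegroupname": svc}}),
        ("csvserver", {"csvserver": {
            "name": cs_name, "servicetype": "HTTP",
            "ipv46": vip, "port": port}}),
        ("cspolicy", {"cspolicy": {
            "policyname": policy, "url": url}}),
        ("csvserver_cspolicy_binding", {"csvserver_cspolicy_binding": {
            "name": cs_name, "policyname": policy, "targetlbvserver": lb}}),
    ]

for resource, payload in nitro_payloads("widgetshop_cs", "127.0.0.1", 88,
                                        "cart", "/cart/*"):
    print("POST /nitro/v1/config/%s %s" % (resource, json.dumps(payload)))
```

The SDK calls in configure_cs_frontend boil down to the same object graph; the cs vserver is created once and each service contributes a service group, an lb vserver, a cs policy and the two bindings.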

Next we’ll get the container IPs and populate the service groups in the NetScaler:

    # populate service group members into service groups
    for svc in SERVICES:
        ip_ports = dockr.get_service_members(SVC_LABEL_NAME, svc)
        netskaler.add_remove_services(svc, ip_ports)
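The interesting part of add_remove_services is the reconciliation: diff the members Docker reports against the members already bound in the service group, then bind the new ones and unbind the stale ones. A minimal sketch of that diff (the function name and shape are illustrative, not the repo’s code):

```python
# Illustrative reconciliation: compare desired members (from Docker) with
# currently bound members (from the NetScaler) and compute what to change.
def reconcile(desired, bound):
    """desired, bound: iterables of (ip, port) tuples.
    Returns (members to bind, members to unbind), each sorted."""
    desired, bound = set(desired), set(bound)
    to_bind = desired - bound
    to_unbind = bound - desired
    return sorted(to_bind), sorted(to_unbind)

to_bind, to_unbind = reconcile(
    desired=[("172.22.0.8", 80), ("172.22.0.6", 80)],  # Docker's view
    bound=[("172.22.0.6", 80)],                        # NetScaler's view
)
print(to_bind, to_unbind)
# → [('172.22.0.8', 80)] []
```

This is why re-running the program is safe: members already bound produce no changes, and removed containers show up on the unbind side, as the runs later in this post demonstrate.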

To tie it all together, we need this Python code to run whenever the Docker Compose file is executed (docker-compose up -d). To achieve this we build a container to run the Python code. Whenever we call docker-compose up -d, the container gets built (if necessary) and the main program is run. The relevant portion of the Docker Compose file:

  automate:
    build: ./automate
    image: automate
    container_name: automate
    depends_on:
      - cpx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    network_mode: "host"

This builds a container image called automate; when run, it starts a container called automate after the CPX container is created. Bring the topology up:

$ docker-compose up -d
Creating network "ex3_default" with the default driver
Building automate
Step 1 : FROM python:2
 ---> 5e79709d3871
[...truncated...]
Step 9 : ENTRYPOINT python /usr/src/app/main.py
 ---> Running in 66f3b747c085
 ---> 70832ec8b6b3
Removing intermediate container 66f3b747c085
Successfully built 70832ec8b6b3
WARNING: Image for service automate was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating ex3_accounts_b_1
Creating ex3_catalog_b_1
Creating ex3_accounts_a_1
Creating ex3_cart_a_1
Creating ex3_catalog_a_1
Creating ex3_cpx_1
Creating automate

The image for automate was built since it didn’t exist. The next run of docker-compose will find the image and not build it. Let’s check out the logs for this run:

$ docker logs automate
[main.py:  ]  (MainThread) NS_PORT 32910
[dockr.py:get_service_url]  (MainThread) Getting backends for svc label com.widgetshop.service=accounts
[main.py:  ]  (MainThread) Service: accounts, url: /accounts/*
[dockr.py:get_service_url]  (MainThread) Getting backends for svc label com.widgetshop.service=cart
[main.py:  ]  (MainThread) Service: cart, url: /cart/*
[dockr.py:get_service_url]  (MainThread) Getting backends for svc label com.widgetshop.service=catalog
[main.py:  ]  (MainThread) Service: catalog, url: /catalog/*
[netscaler.py:wait_for_ready]  (MainThread) NetScaler API is not ready
[netscaler.py:wait_for_ready]  (MainThread) NetScaler is ready at 127.0.0.1:32910
[netscaler.py:configure_cs_frontend]  (MainThread) LB Catalog_lb, ServiceGroup catalog
[netscaler.py:configure_cs_frontend]  (MainThread) Policy catalog_policy, rule /catalog/*
[netscaler.py:configure_cs_frontend]  (MainThread) Policy catalog_policy, rule /catalog/*
[netscaler.py:configure_cs_frontend]  (MainThread) LB Accounts_lb, ServiceGroup accounts
[netscaler.py:configure_cs_frontend]  (MainThread) Policy accounts_policy, rule /accounts/*
[netscaler.py:configure_cs_frontend]  (MainThread) Policy accounts_policy, rule /accounts/*
[netscaler.py:configure_cs_frontend]  (MainThread) LB Cart_lb, ServiceGroup cart
[netscaler.py:configure_cs_frontend]  (MainThread) Policy cart_policy, rule /cart/*
[netscaler.py:configure_cs_frontend]  (MainThread) Policy cart_policy, rule /cart/*
[dockr.py:get_service_members]  (MainThread) Getting backends for svc label com.widgetshop.service=accounts
[main.py:  ]  (MainThread) Service: accounts, ip_ports=[(u'172.22.0.4', 80), (u'172.22.0.2', 80)]
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.2:80 from service group accounts 
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.4:80 from service group accounts 
[dockr.py:get_service_members]  (MainThread) Getting backends for svc label com.widgetshop.service=cart
[main.py:  ]  (MainThread) Service: cart, ip_ports=[(u'172.22.0.6', 80)]
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.6:80 from service group cart 
[dockr.py:get_service_members]  (MainThread) Getting backends for svc label com.widgetshop.service=catalog
[main.py:  ]  (MainThread) Service: catalog, ip_ports=[(u'172.22.0.5', 80), (u'172.22.0.3', 80)]
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.3:80 from service group catalog 
[netscaler.py:add_remove_services]  (MainThread) Binding 172.22.0.5:80 from service group catalog

From the logs, we can see that

  • The code discovered the API port for the NetScaler CPX (32910) using the Docker API
  • The Docker API also provides the URL for each of the services
  • The code waits for the NetScaler NITRO API to be ‘ready’. Since the automate container runs immediately after the CPX, the CPX may not have fully booted before the code runs. The code polls for the NITRO API port to become available.
  • The Docker API provides the membership info for each service group.
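The readiness check in the third point can be as simple as polling the TCP port until a connection succeeds. A sketch of that idea (the real wait_for_ready may probe the NITRO API itself rather than just the port):

```python
# Illustrative readiness poll: retry a TCP connect to the NITRO port until it
# succeeds or a deadline passes.
import socket
import time

def wait_for_ready(host, port, timeout=60, interval=2):
    """Return True once a TCP connect to (host, port) succeeds, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Polling like this avoids a race between docker-compose starting the automate container and the CPX finishing its boot sequence.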

Let’s test this out:

$ docker-compose port cpx 88
0.0.0.0:32897
$ wget -q -O - http://localhost:32897/accounts/

This is the Accounts Service

Let’s update the topology by adding a second cart container:

  cart_a:
    image: httpd:alpine
    labels:
      com.widgetshop.service: "cart"
      com.widgetshop.url: "/cart/*"
  cart_b:
    extends: cart_a

Run this with docker-compose up -d:

$ docker-compose up -d
ex3_catalog_a_1 is up-to-date
ex3_accounts_b_1 is up-to-date
ex3_catalog_b_1 is up-to-date
ex3_accounts_a_1 is up-to-date
ex3_cart_a_1 is up-to-date
ex3_cpx_1 is up-to-date
Starting automate
Creating ex3_cart_b_1
$ docker logs automate
[..truncated..]
Service: cart, ip_ports=[(u'172.22.0.8', 80), (u'172.22.0.6', 80)]
Binding 172.22.0.8:80 from service group cart 
Service 172.22.0.6:80 is already bound to  service group cart

Since the automate container exited after the previous run of docker-compose, it was started again. docker-compose also creates the second cart container. On this second run, the automate program discovered the previously configured cart service group members and only added the new container.
Now if we remove cart_b from the docker-compose.yaml and re-run:

$ docker-compose up -d --remove-orphans
Removing orphan container "ex3_cart_b_1"
ex3_catalog_b_1 is up-to-date
ex3_accounts_a_1 is up-to-date
ex3_accounts_b_1 is up-to-date
ex3_catalog_a_1 is up-to-date
ex3_cart_a_1 is up-to-date
ex3_cpx_1 is up-to-date
Starting automate
$ docker logs automate
Unbinding 172.22.0.8:80 from service group cart 

The removed container’s endpoint is unbound from the service group.

While the example code is fairly complete, it has a couple of limitations:

  • It relies on the automate container running after the service containers are created. One solution is to run the automate container in the background, listening to the Docker API event stream.
  • The program also hard codes things such as the service names, service label names and the cs vserver name into the main.py file. These can be passed in as environment variables to the automate container in the docker-compose.yaml file.

Both are left as exercises to the reader.
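As a starting point for the first exercise, here is a sketch of the event-driven alternative. The filter function below only decides whether a Docker event is relevant to our services; wiring it to the reconfiguration code above is the exercise. The structure of the event dicts follows the Docker events API; the label key is the one from our Compose file.

```python
# Illustrative event filter: react only to container start/die events that
# carry our service label.
def is_relevant(event, label_key="com.widgetshop.service"):
    """True if a container carrying our service label started or died."""
    if event.get("Type") != "container":
        return False
    if event.get("Action") not in ("start", "die"):
        return False
    labels = event.get("Actor", {}).get("Attributes", {})
    return label_key in labels

# With docker-py, this would drive a loop roughly like:
#   import docker
#   client = docker.from_env()
#   for event in client.events(decode=True):
#       if is_relevant(event):
#           resync()   # re-run the discovery + Nitro configuration steps
```

Since the reconciliation logic is idempotent, the resync on each event can simply repeat the full discover-and-configure pass.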
