Implement Contour as Ingress Controller for TKGI

Ravi Panchal
10 min read · Jan 28, 2021

Overview

For our Tanzu Kubernetes Grid Integrated (TKGI) deployment with NSX-T, we needed an Ingress controller that supports SSL termination and re-encryption, session affinity at the Ingress level, and mutual TLS (mTLS).

One of the salient features of a CaaS (Container as a Service) platform such as Kubernetes is the flexibility it offers to pick and choose, from a plethora of both open source and proprietary vendor products, the tool that best fits your purpose. Contour is one such open source Kubernetes Ingress controller that supports all the above requirements. This document walks step by step through installing Contour on TKGI with NSX-T enabled and explains how the following use cases can be achieved with the Contour Ingress controller.

1. TLS Termination & Re-encryption

Ingress, in its simplest form, is a reverse proxy server that acts as an intermediary between clients and backend services (running in pods). TLS is often terminated at the Ingress, while the traffic from the Ingress to the backend application travels over HTTP. This reduces the load on your applications, as TLS offloading consumes CPU resources and impacts performance.

However, there are use cases, such as in the banking and finance industry, or regulatory requirements, that mandate end-to-end TLS. This can be achieved using SSL passthrough, where an incoming SSL request is not decrypted at the Ingress but is instead passed along to the backend application for decryption. This, however, means the domain certificates need to be embedded in each application container as well, which can be an operational overhead.

Alternatively, the SSL request can be decrypted at the Ingress using the domain certificate and then re-encrypted using a common certificate shared with the applications.

TLS termination and re-encryption at Ingress

We will elaborate later on how to configure SSL termination and re-encryption using Contour.

2. Session Affinity

Session affinity, also known as sticky sessions, is a load balancing strategy whereby a sequence of requests from a single client are consistently routed to the same backend service (pod). Contour supports session affinity on a per route basis.

Ingress maintaining session affinity to the same backend pod

We will deploy a test application with 2 instances and demonstrate session affinity using Contour.

3. Mutual TLS Authentication (mTLS)

It is possible to protect the backend service from unauthorized external clients by requiring the client to present a valid TLS certificate. Ingress will validate the client certificate by verifying that it is not expired and that a chain of trust can be established to the configured trusted root CA certificate. Only those requests with a valid client certificate will be accepted and forwarded to the backend service.

Configuring this at Ingress improves operational efficiency as it will not require each application within the namespace to configure and validate certificates.

Ingress authenticating client using mTLS

All files referred to in this document can be downloaded from the GitHub repository here.

Platform & Toolsets

The IaaS, Platform and software versions used in this implementation are:

IaaS: vSphere 6.7v3, NSX-T v2.5.0

Tanzu foundation: VMware Tanzu Operations Manager v2.10.2, VMware Tanzu Kubernetes Grid Integrated Edition v1.9.0

Other dependencies: TKGI CLI v1.9.0, kubectl v1.9.0, Contour v1.11.0

Other than the step for provisioning a TKGI cluster, the rest of the steps to install Contour and set up the test backend service can be followed on any other upstream Kubernetes distribution.

1. Provision TKGI Cluster

It is assumed that you have admin access to TKGI and have successfully authenticated. If not, refer here.

Create a network profile configuration to disable the NSX Ingress controller as shown below and save it as network-profile-disable-nsx-ingress.json.

network-profile-disable-nsx-ingress.json
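The gist above is not reproduced here. As a rough sketch, a TKGI network profile that disables the default NCP Ingress controller might look like the following; the profile name is taken from the cluster-creation command later in this document, but the parameter key is an assumption — consult the TKGI network profile documentation for your version:

```json
{
  "name": "nsx_disable_ncp_ingress",
  "description": "Disable the default NSX-T (NCP) Ingress controller",
  "parameters": {
    "ncp_ingress_controller": false
  }
}
```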

Create TKGI network profile:

tkgi create-network-profile network-profile-disable-nsx-ingress.json

Then create a new TKGI cluster with the default NSX Ingress controller disabled.

tkgi create-cluster my-cluster --external-hostname example.hostname --plan production --network-profile nsx_disable_ncp_ingress

2. Install Contour Ingress Controller

Download the Contour Ingress controller manifest from GitHub and review the yaml:

1. Comment out the two lines below in the contour.yaml file:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
# externalTrafficPolicy: Local

2. Notice that the yaml refers to the two Docker images below:

image: docker.io/projectcontour/contour:v1.11.0
image: docker.io/envoyproxy/envoy:v1.16.2

If your environment is air-gapped or you prefer to pull from a private or on-premises container registry, ensure that you update the image references to point to your registry.

Deploy Contour:

kubectl apply -f contour.yaml

Note that Contour deploys Envoy as a Kubernetes Service of type LoadBalancer.

kubectl get service envoy -n projectcontour
Envoy service with EXTERNAL-IP

The EXTERNAL-IP of this service should be mapped in your DNS to the domain, e.g. myapp.mydomain.com. This domain will be used to configure Ingress to route requests to the backend service.

3. Deploy test application

Deploy an NGINX web server with both ports 80 and 443 as the backend service to test Ingress. NGINX requires a TLS certificate and custom configuration to expose the web server on port 443. To make these files available within the NGINX container, we will create Kubernetes ConfigMap objects from these files and mount the ConfigMaps as volumes to specific directories in the container.

But first, we need to generate a TLS certificate. We'll use the script below to generate certificates; feel free to use any other tool you are comfortable with.

gencrt.sh
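The gencrt.sh gist is not shown inline. Below is a minimal sketch of such a script using openssl: it creates a self-signed CA and a server certificate signed by that CA. The subject names and validity period are assumptions, and the actual script may also add subjectAltName entries:

```shell
#!/usr/bin/env bash
# gencrt.sh <hostname> — hypothetical sketch of the certificate generation script.
set -euo pipefail
HOST="${1:-myapp.mydomain.com}"

# 1. Create a CA key and a self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=demo-ca"

# 2. Create the server key and a certificate signing request (CSR) for HOST.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=${HOST}"

# 3. Sign the server CSR with the CA to produce server.crt.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out server.crt
```

Running the script leaves ca.crt, server.key, and server.crt in the current directory, matching the files the following steps rely on.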

Generate CA and NGINX server TLS keys and certificates:

#Replace HOST with your domain.
cd <path-where-you-downloaded-files>
mkdir nginx-backend-certs
cd nginx-backend-certs
<path-to-gencrt.sh> myapp.mydomain.com

You should find two files “server.crt” and “server.key” in the directory where you executed the above command.

We will now create a ConfigMap "nginx-certs" from the server key and certificate files using the command below.

cd nginx-backend-certs
kubectl create cm nginx-certs --from-file=./server.key --from-file=./server.crt --dry-run -o yaml > ../nginx-certs.yaml

nginx-certs.yaml

Apply nginx-certs.yaml file to the cluster

kubectl apply -f nginx-certs.yaml

Next, create ConfigMap “nginx-conf” to configure nginx web-server with the above generated TLS certificate.

nginx-config.yaml
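The nginx-config.yaml gist is not reproduced above. A minimal sketch of such a ConfigMap follows, assuming a server block that serves HTTP on port 80 and TLS on port 443 using the certificate files mounted from the nginx-certs ConfigMap; the mount paths are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        listen 443 ssl;
        # Certificate files mounted from the nginx-certs ConfigMap (assumed path)
        ssl_certificate     /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;
        location / {
          root  /usr/share/nginx/html;
          index index.html;
        }
      }
    }
```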

Apply nginx-config.yaml file to the cluster

kubectl apply -f nginx-config.yaml

Finally, create the last ConfigMap "nginx-html" to configure the nginx web server with the default "index.html" file. When rendered in the browser, the index page displays an "Ok" message.

nginx-html.yaml
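The nginx-html.yaml gist is not shown inline; given the "Ok" message described above, it is presumably a ConfigMap along these lines:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-html
data:
  index.html: |
    Ok
```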

Apply nginx-html.yaml file to the cluster

kubectl apply -f nginx-html.yaml

Now that all ConfigMap objects are created, we can finally deploy nginx to the cluster. Replace the image registry with the public or private registry from which to pull the NGINX image.

nginx-backend.yaml
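The nginx-backend.yaml gist is not reproduced above. A sketch of what it might contain is shown below: a two-replica Deployment mounting the three ConfigMaps, plus a Service exposing ports 80 and 443. The resource names, labels, and mount paths are assumptions chosen to line up with the ConfigMap names used earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-backend
  template:
    metadata:
      labels:
        app: nginx-backend
    spec:
      containers:
        - name: nginx
          image: docker.io/library/nginx:1.19  # replace with your registry
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - name: certs
              mountPath: /etc/nginx/certs
            - name: conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: certs
          configMap:
            name: nginx-certs
        - name: conf
          configMap:
            name: nginx-conf
        - name: html
          configMap:
            name: nginx-html
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-backend
spec:
  selector:
    app: nginx-backend
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
```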

Apply the nginx-backend.yaml file to the cluster to create a Kubernetes Service and a Deployment with two replica pods.

kubectl apply -f nginx-backend.yaml
#verify the deployment
kubectl get deployment
#verify the service
kubectl get service

4. Use Cases

Configuring SSL Termination & Re-encryption

The Ingress object was added to Kubernetes in version 1.1 to describe properties of a cluster-wide reverse HTTP proxy. Since that time, the Ingress API has remained relatively unchanged, and the need to express implementation-specific capabilities has inspired an explosion of annotations.

Contour HTTPProxy Custom Resource Definition (CRD) expands upon the Ingress API features to allow for multi-team Kubernetes clusters, with the ability to limit which Namespaces may configure virtual hosts and TLS credentials, etc. For further details, refer here.

HTTPProxy follows a pattern similar to Ingress for configuring TLS credentials. You can secure an HTTPProxy by specifying a Secret that contains the TLS private key and certificate information.

To secure the HTTPProxy, create a domain TLS certificate as shown below. These steps are similar to the steps for creating the nginx web-server TLS certificate, except that the hostname now refers to the Ingress domain.

#Replace HOST with your domain.
cd <path-where-you-downloaded-files>
mkdir ingress-certs
cd ingress-certs
<path-to-gencrt.sh> myapp.mydomain.com

You should find two files “server.crt” and “server.key” in the directory where you executed the above command.

We will now create a Secret “nginx-backend-ingress-cert” of type TLS from the ingress-domain key and certificate files. The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use.

cd ingress-certs
kubectl create secret tls nginx-backend-ingress-cert --key=server.key --cert=server.crt --dry-run -o yaml > ../nginx-backend-ingress-cert.yaml

nginx-backend-ingress-cert.yaml

Apply nginx-backend-ingress-cert.yaml file to the cluster

kubectl apply -f nginx-backend-ingress-cert.yaml

Now let’s configure the HTTPProxy with TLS virtual host by using tls.secretName property. This property should reference Kubernetes TLS Secret “nginx-backend-ingress-cert” created above. Also, HTTPProxy can be configured to send traffic to the backend nginx web-server over TLS by setting the protocol name in the spec.routes.services[].protocol field.

nginx-backend-httpproxy.yaml
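The nginx-backend-httpproxy.yaml gist is not shown inline. A sketch of an HTTPProxy matching the description above follows; the resource name and route condition are assumptions, while the Secret name, FQDN, and protocol field come from the surrounding text:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: nginx-backend
spec:
  virtualhost:
    fqdn: myapp.mydomain.com
    tls:
      # TLS Secret used to terminate client TLS at the Ingress
      secretName: nginx-backend-ingress-cert
  routes:
    - conditions:
        - prefix: /
      services:
        - name: nginx-backend
          port: 443
          # Re-encrypt traffic to the backend over TLS
          protocol: tls
```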

The virtualhost field here is a root HTTPProxy that is the top level entry point for this domain. The fully qualified domain name (FQDN) will be used to match against Host: HTTP headers in an HTTP request to route traffic to a backend service based on the matching route conditions. To configure a matrix of matching routes and corresponding backend services, each HTTPProxy can have one or more routes, each of which can have one or more services which will handle the HTTP traffic. In addition, each route may have one or more conditions to match against.

In our case, all HTTPS requests to myapp.mydomain.com will be routed to the backend nginx service over TLS port 443. Contour does a few things before it routes requests to the backend service: it presents the TLS certificate defined in the Secret nginx-backend-ingress-cert to the HTTP client, terminates TLS, and initiates a new TLS connection to the secured backend service on port 443.

Apply nginx-backend-httpproxy.yaml to the cluster.

kubectl apply -f nginx-backend-httpproxy.yaml

Verify HTTPProxy status is valid and applied successfully to the cluster.

kubectl get httpproxy
HTTPProxy status

Test
Let’s put our work to the test and verify that we can successfully access our secured NGINX backend service over TLS. Access the domain “myapp.mydomain.com” either in a browser or using a command-line client such as curl. If everything has been configured properly, you should see a successful response with HTTP status code 200 and the message “Ok”.

curl -kv https://myapp.mydomain.com

If configured incorrectly, you may instead get a response such as:

400 Bad Request “The plain http request was sent to https port”.

Configuring Session Affinity

Contour HTTPProxy can be configured with session affinity on a per route basis with loadBalancerPolicy strategy: Cookie. Let’s update nginx-backend-httpproxy.yaml file to enable session affinity.

nginx-backend-httpproxy.yaml
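The updated gist is not reproduced above. A sketch of the revised HTTPProxy follows, with the loadBalancerPolicy from the text added to the route; the rest of the spec repeats the assumed names used earlier:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: nginx-backend
spec:
  virtualhost:
    fqdn: myapp.mydomain.com
    tls:
      secretName: nginx-backend-ingress-cert
  routes:
    - conditions:
        - prefix: /
      # Cookie-based session affinity on this route
      loadBalancerPolicy:
        strategy: Cookie
      services:
        - name: nginx-backend
          port: 443
          protocol: tls
```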

Apply nginx-backend-httpproxy.yaml to the cluster.

kubectl apply -f nginx-backend-httpproxy.yaml

Verify HTTPProxy status is valid and applied successfully to the cluster.

kubectl get httpproxy
HTTPProxy

Test

Verify that client connections from the browser always go to the same backend pod. This can be done by simultaneously tailing logs from all backend pods while hitting the URL endpoint from a browser or a curl-like client. If using curl, don’t forget to send cookies in the request, as this feature relies on cookie-based session affinity. For a successful outcome, notice that logs appear on one pod only.

kubectl get pods
kubectl logs <pod-name1> -f
kubectl logs <pod-name2> -f

Enabling client authentication using mTLS

To simulate mutual TLS authentication, we will first create a client TLS key and certificate. Again, these steps are similar to the steps for creating NGINX TLS key and certificate, except the HOST and client CA names.

#Create a new directory for the Client TLS key and certificate.
cd <path-where-you-downloaded-files>
mkdir client-certs
cd client-certs
<path-to-gencrt.sh> client-app

# Since we want to use the generated certificates as client certificates, we’ll rename server.key and server.crt to client.key and client.crt

mv server.key client.key
mv server.crt client.crt

Take note of two files “client.crt” and “client.key” in the directory where you executed the above command. These files will be used later to invoke HTTPProxy FQDN using mTLS.

Also, take note of the “ca.crt” file in the directory. Create a Kubernetes Secret of type “Opaque” with a data key named ca.crt using this file. This certificate is used to validate client certificates during the mTLS handshake.

cd client-certs
kubectl create secret generic client-ca-crt --from-file=ca.crt=./ca.crt --dry-run -o yaml > ../client-ca-crt.yaml

client-ca-crt.yaml

Apply client-ca-crt.yaml to the cluster.

kubectl apply -f client-ca-crt.yaml

Configure the Contour HTTPProxy with mTLS validation by setting the clientValidation attribute. Its mandatory attribute caSecret contains the name of an existing Kubernetes Secret, which must be of type “Opaque”. Let’s update the nginx-backend-httpproxy.yaml file with these details.

nginx-backend-httpproxy.yaml
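The mTLS version of the gist is not shown inline. A sketch follows, adding the clientValidation block from the text under tls; the caSecret name matches the Secret created above, while the other names repeat the earlier assumptions:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: nginx-backend
spec:
  virtualhost:
    fqdn: myapp.mydomain.com
    tls:
      secretName: nginx-backend-ingress-cert
      # Require and validate client certificates against this CA
      clientValidation:
        caSecret: client-ca-crt
  routes:
    - conditions:
        - prefix: /
      services:
        - name: nginx-backend
          port: 443
          protocol: tls
```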

Apply nginx-backend-httpproxy.yaml to the cluster.

kubectl apply -f nginx-backend-httpproxy.yaml

Verify HTTPProxy status is valid and applied successfully to the cluster.

kubectl get httpproxy

Test

This time our HTTPProxy expects a client certificate to authenticate the client. We’ll use the client.key and client.crt files created above to call the HTTPS endpoint, expecting a successful response with HTTP status code 200.

cd client-certs
curl -kv --key ./client.key --cert ./client.crt https://myapp.mydomain.com

Conclusion

Contour as an Ingress controller is a good fit for simplifying the configuration of TLS termination with re-encryption, mTLS, and session affinity at the route level. We found it easy to get started with installation and configuration, and the documentation quite thorough while realizing the above use cases.
