As a Networking and Security Technical Account Specialist for VMware, I get a lot of questions about NSX and container integration with Kubernetes. Many network and security professionals are not familiar with the underlying Kubernetes container architecture, its services, and its communication paths. So before I get into how NSX works with containers, let's examine why container development simplifies inter-process communication and application programming, and how containers communicate via Kubernetes services.
Container development is driven largely by the programmatic advantages it offers over server-based application development. A Kubernetes Pod is a group of one or more containers. Containers in a pod share an IP address and port space, and can reach one another over localhost within the pod. They can also communicate with each other using standard inter-process communication mechanisms such as System V semaphores or POSIX shared memory. These capabilities give developers much tighter and quicker development cycles, and a large amount of abstraction from what they would otherwise have to contend with in server-based application development.
Containers in a pod also share data volumes. Colocation (co-scheduling), shared fate, coordinated replication, resource sharing, and dependency management are all handled automatically within a pod.
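To make this concrete, here is a minimal sketch of a Pod manifest with two containers that share the pod's network namespace and a common data volume. The names, images, and mount paths are illustrative, not from any particular deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod          # illustrative name
spec:
  volumes:
    - name: shared-data     # emptyDir volume shared by both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25     # serves the content written by the sidecar
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox:1.36
      # The sidecar writes into the shared volume; because both containers
      # share the pod's network namespace, it could also reach the web
      # container directly at localhost:80.
      command: ["sh", "-c", "echo 'hello from the sidecar' > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```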
One of the key internal Kubernetes services relevant to NSX-T integration is kube-proxy. The kube-proxy service watches the Kubernetes master for the addition and removal of Service and Endpoints objects. For each Service, it installs iptables rules that capture traffic destined for the Service's cluster IP and port and redirect it to the Service's back-end set. For each Endpoints object, it installs iptables rules that select a back-end pod.
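As a sketch of what drives those rules, consider the Service below (the name, label, and ports are hypothetical). kube-proxy translates it into iptables rules that capture traffic to the Service's cluster IP on port 80 and redirect it to port 8080 on one of the pods matching the selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # illustrative name
spec:
  selector:
    app: web               # pods with this label become the Endpoints
  ports:
    - protocol: TCP
      port: 80             # the cluster-IP port that kube-proxy captures
      targetPort: 8080     # the container port on the selected back-end pod
```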
With NSX-T and the NSX Container Plugin (NCP), we leverage NSX Kube-Proxy, a daemon running on each of the Kubernetes nodes, which most refer to as "minions" or "workers". It replaces the native distributed east-west load balancer in Kubernetes (kube-proxy and iptables) with Open vSwitch (OVS) load-balancing services.
Now that we've covered east-west communication in Kubernetes, I'll address ingress to and egress from Kubernetes clusters.
The Kubernetes Ingress is an API object that manages external access to the services in a cluster. By default, and in typical scenarios, Kubernetes services and pods have IPs that are only routable within the cluster network. Traffic that ends up at an edge router is either dropped or forwarded elsewhere. An Ingress is a collection of rules that allow inbound connections to reach the cluster's services.
A Kubernetes Ingress can be configured to give services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. The most common open-source ingress software in non-NSX environments is NGINX or HAProxy, which many of you may know from supporting server-based application operations.
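As a sketch, an Ingress that routes by host name and terminates TLS might look like the following. The host, Secret, and Service names are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                  # illustrative name
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls-cert       # a Secret holding the TLS certificate and key
  rules:
    - host: app.example.com          # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service    # the back-end Service to route to
                port:
                  number: 80
```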
There's also an external load-balancer object, not to be confused with the Ingress object. When creating a Kubernetes Service, you have the option of automatically creating a cloud network load balancer, which provides an externally accessible IP address that forwards traffic to the correct port on the assigned minion / cluster node.
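A minimal sketch, reusing the same illustrative app: web pods from above: setting the Service type to LoadBalancer is all it takes to request that external address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb              # illustrative name
spec:
  type: LoadBalancer        # requests an external load balancer from the provider
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```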
Now, let's add NSX to the picture. When we install NSX-T in a Kubernetes environment, we replace the typical Kubernetes ingress implementation with the native NSX-T layer-7 load balancer, which performs these same functions.
Now that we've reviewed how traffic gets into a Kubernetes cluster, let's take a look at how network security is handled.
A Kubernetes NetworkPolicy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints. Network policies are implemented by the network plugin, so you must use a networking solution that supports NetworkPolicy; simply creating the resource without a controller to implement it has no effect. By default, Kubernetes pods are non-isolated and accept traffic from any source. Pods become isolated when a network policy selects them; from that point on, they reject any traffic not explicitly allowed by one of the policies that apply to them.
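As a sketch (the labels and namespace are hypothetical), the policy below isolates pods labeled app: web and then allows ingress only from pods labeled role: frontend, and only on TCP port 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web               # pods selected (and therefore isolated) by this policy
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend # only these pods may reach the selected pods
      ports:
        - protocol: TCP
          port: 8080
```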
And finally, let's review the NSX-T networking components for Kubernetes. As you can see in the graphic below, NSX-T components are deployed to support both the Kubernetes master management network and the Kubernetes minion nodes. The diagram depicts the use of a non-routable, "black-holed" network on a logical switch that is not connected to any logical router.
The Kubernetes master management network and its logical switch are uplinked to the NSX-T tier-0 router in the center of the diagram. The tier-0 router also provides NAT for the Kubernetes cluster. eBGP, the only dynamic routing protocol supported by NSX-T at this time, will be configured to peer with the top-of-rack switches or even back to the core.
NSX tier-1 routers are instantiated for each Kubernetes cluster node, and an additional tier-1 router is deployed for the ingress and load-balancing services we discussed previously.
For those unfamiliar with the difference between NSX-v Edges and NSX-T Edges, see the "High-Level View of NSX-T Edge Within a Transport Zone" at docs.vmware.com. If you're an NSX engineer or work with NSX as part of a team, I highly recommend the VMware NSX-T: Install, Configure, Manage [V2.2] course. NSX-T is a major architectural change from NSX-v, and there are too many changes in each component to even begin to list here. That said, in a simplified view, NSX-T tier-0 routers serve as provider routers and NSX-T tier-1 routers serve as tenant routers. Each has different capabilities, so be sure to read up on their features and on the Service Router (SR) and Distributed Router (DR) components for a better understanding if needed.
Wrapping it up, Kubernetes with VMware NSX-T provides a much richer set of networking and security capabilities than native Kubernetes does. It simplifies operations and provides automation and orchestration via a CMP or the REST API, for K8s DevOps engineers and container developers alike. Add to that the fact that NSX-T is hybrid-cloud and multi-cloud capable, with greatly simplified networking and security, and Kubernetes users should be very excited once they see what happens when they #runNSX.