Kubernetes (K8S) supports multiple pod networks, and the range of configuration choices is vast. It can be difficult for an organization to choose the right network for a specific requirement, and harder still without hands-on experience with that particular network configuration. For example, if an administrator wants to configure a network with Layer 3 features, will any network supported by Kubernetes satisfy the requirement? Probably not.
So how does one go about choosing the right network for one’s Kubernetes cluster workloads? The following steps can help:
- Verify that the network supports the specific features you need for the cluster.
- Get to know the performance (throughput and latency) of that network.
In this blog, I share a comparative study of the features and performance metrics of six networks configured for a Kubernetes cluster.
Choosing a Network for Your K8S Cluster Based on Desired Features
|Feature|Flannel NW|Contiv NW|Canal NW|Calico NW|Romana NW|Weavenet NW|
|---|---|---|---|---|---|---|
|Network model|VXLAN|L2, L3|VXLAN|L3|L3|VXLAN|
|Container Networking Model|NO|YES|NO|YES|NO|YES|
Looking at the above table, should an admin want a network with Layer 3 features, the choices are clear: Contiv, Calico, or Romana would do it.
Measuring and Comparing Network Performance
Our team set up a K8S cluster of three Ubuntu 16.04 VMs (4GB RAM, 2 CPU cores, 50GB HDD) with Kubernetes packages pre-installed – one as master and the other two configured to be worker nodes.
The steps to deploy a Kubernetes cluster with a specific pod network are:
- Initialize the master node by running the following command with sudo privileges:
$ sudo kubeadm init --pod-network-cidr=<CIDR>
Note: The value of the pod-network-cidr parameter varies with the network option.
Example: The suggested CIDR for the Flannel and Canal networks is 10.244.0.0/16, and for the Calico network it could be 192.168.0.0/16.
It is not necessary to provide the --pod-network-cidr parameter for other network options such as Contiv, Romana, and Weavenet. However, for Romana and Weavenet it is mandatory to set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1, so that bridged IPv4 traffic is passed to iptables’ chains.
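Assuming a systemd-based Ubuntu host (the VMs here run Ubuntu 16.04), the sysctl setting can also be persisted across reboots with a drop-in file; the filename below is an arbitrary example:

```
# /etc/sysctl.d/99-kubernetes.conf  (example filename; any name in this directory works)
net.bridge.bridge-nf-call-iptables = 1
```

Apply it without rebooting via sudo sysctl --system.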
- Install the pod network
$ kubectl apply -f [podnetwork].yaml
[podnetwork].yaml is the manifest file corresponding to the network of interest. Example: To set up a Kubernetes cluster with the Flannel network, point [podnetwork].yaml to https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
Note: Only one pod network per cluster can be installed.
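After applying the manifest, it is worth checking that the network’s pods actually come up before joining workers. A minimal sketch (the helper name is ours; it simply counts kube-system pods that are not yet in the Running state):

```shell
#!/bin/sh
# Sanity check after installing the pod network: every kube-system pod,
# including the network's DaemonSet pods, should reach the Running state.
# Usage (on the master node):
#   kubectl get pods -n kube-system --no-headers | count_not_running
count_not_running() {
  # Count input lines whose STATUS is not "Running"; 0 means the cluster looks healthy.
  grep -cv 'Running' || true
}
```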
- Join the worker nodes to the Kubernetes cluster:
- SSH to the machine
- Become root (e.g. sudo su -)
- Run the command that was output by kubeadm init.
$ kubeadm join --token <token> <master_node-ip>:<master_node-port> --discovery-token-ca-cert-hash sha256:<hash>
Note: If you no longer have the token, create a new one by running the following command on the master node:
$ kubeadm token create
Get discovery-token-ca-cert-hash by running the following command chain on the master node:
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
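On recent kubeadm versions there is also a shortcut, kubeadm token create --print-join-command, which prints the complete join command. Either way, the command handed to each worker has the shape sketched below (the function name and sample values are ours, for illustration):

```shell
#!/bin/sh
# Compose the worker-node join command from its three inputs:
# the bootstrap token, the master's address:port, and the CA certificate hash.
build_join_command() {
  token="$1"; master="$2"; ca_hash="$3"
  printf 'kubeadm join --token %s %s --discovery-token-ca-cert-hash %s\n' \
    "$token" "$master" "$ca_hash"
}
```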
After initializing the master node as described in step 1, we took snapshots of all machines and then performed steps 2 and 3. The cluster was first configured with the Flannel network as part of step 2. On completing the K8S deployment with Flannel, we executed a battery of test cases on the cluster to measure the performance parameters (bandwidth and latency), sending network packets between pods for the following cases:
i) Pods in different nodes
ii) Pods in the same node
To repeat the exercise for the other networks, all VMs were first restored to the snapshots taken before the Flannel configuration. Next, the Romana network was configured as part of step 2 (“Install the pod network”), and testing continued with identical test cases to gather performance metrics. We repeated this process for the remaining options (Contiv, Calico, Canal, Weavenet, and the host network with no pod network) to obtain the final performance and feature metrics for comparison.
Comparison of Performance Statistics Across Network Types
In order to measure a network’s throughput and latency, we used the popular tool ‘iperf’.
One pod runs as the iperf server while the other runs as the client, sending TCP traffic with the default TCP window size of 85.3 KByte for a duration of 10 seconds.
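As a sketch, the pods-in-different-nodes case can be driven from kubectl; the pod names and container image below are our examples, not part of the original setup:

```shell
#!/bin/sh
# Hypothetical iperf run between two pods (pod names and image are examples).
# 1) Start the server pod (pin it to a specific worker with a nodeSelector if needed):
#      kubectl run iperf-server --image=<iperf-image> --restart=Never -- iperf -s
# 2) Look up the server pod's IP:
#      SERVER_IP=$(kubectl get pod iperf-server -o jsonpath='{.status.podIP}')
# 3) Run the client pod against it for a 10-second TCP test, as in the article:
iperf_client_cmd() {
  # Build the client command line: iperf -c <server-ip> -t <seconds>
  printf 'iperf -c %s -t %s\n' "$1" "$2"
}
#      kubectl run iperf-client --image=<iperf-image> --restart=Never -- $(iperf_client_cmd "$SERVER_IP" 10)
```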
We iterated the above test case five times for each network option and calculated the average throughput and latency, for both the pods-in-different-nodes and pods-in-the-same-node cases. The following charts show a comparative study across all six network options.
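Averaging the five runs can be done with a small helper over the collected iperf client summaries; the function and log layout below are our sketch, assuming iperf2-style summary lines that end with the bandwidth figure and its unit:

```shell
#!/bin/sh
# Average the bandwidth figures across repeated iperf runs stored in a log
# file, where each summary line looks like:
#   [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
# The bandwidth value is the next-to-last whitespace-separated field.
average_bandwidth() {
  grep 'Mbits/sec' "$1" |
    awk '{ sum += $(NF-1); n++ } END { if (n) printf "%.1f Mbits/sec\n", sum / n }'
}
```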
Bandwidth metrics of Pods in different nodes
This chart shows that the host network (no pod network configured) delivers higher throughput than any of the pod networks configured on Kubernetes. The Flannel and Romana networks also perform well.
Bandwidth metrics of Pods in the same node
In this case too, the host network (no pod network configured) has higher throughput than any of the pod networks configured on Kubernetes. Flannel and Contiv also perform well.
Latency metrics of Pods in different nodes
Flannel and Canal generated lower latency than other network solutions configured on Kubernetes.
Latency metrics of Pods in the same node
Flannel and Weavenet networks generated the lowest latency among network solutions configured on Kubernetes.
When configuring a Kubernetes (K8S) cluster, choosing the right network architecture is of great importance. As shown above, there is a range of configuration options, along with their performance metrics. With some diligence in matching options to requirements, one can choose a network architecture that suits the desired features and performance requirements.
If you are looking for Cloud Services (ONAP, OpenStack, Kubernetes, Cloud Native Applications, DevSecOps, and Infrastructure Modernization), please contact us.
Contributor: Venkat Krishna