This article discusses the networking components of Google Kubernetes Engine (GKE) and the options available for each. Kubernetes is an open-source platform for managing containerized workloads and services, and GKE is a fully managed environment for running Kubernetes on Google Cloud infrastructure.
Kubernetes uses IP addresses and ports for communication between network components. An IP address uniquely identifies a component on the network:
- Containers – These run the application processes. A pod can contain one or more containers.
- Pods – Groups of one or more containers that are deployed together. Pods are scheduled onto nodes.
- Nodes – The worker machines within a cluster; a cluster is a collection of nodes. A node runs zero or more pods.
- ClusterIP – A virtual IP address assigned to a particular Service, reachable from within the cluster.
- Load balancer – Balances internal and external traffic to cluster nodes.
- Ingress – A load balancer that handles HTTP(S) traffic.
IP addresses are assigned to components and services from subnets. Variable-length subnet masks (VLSM) are used to create CIDR blocks, and the subnet mask determines how many host addresses are available on a subnet.
Google Cloud uses 2^n - 4 (where n is the number of host bits) to calculate the available hosts, because it reserves four addresses in every subnet. This differs from the 2^n - 2 formula used for on-premises networks.
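A quick sketch of the difference (the helper function below is hypothetical, for illustration only):

```python
def usable_hosts(prefix_len: int, reserved: int = 4) -> int:
    """Count usable host addresses in an IPv4 subnet.

    Google Cloud reserves 4 addresses per subnet, so the formula is
    2^n - 4; traditional on-premises networking reserves only 2
    (network and broadcast addresses), giving 2^n - 2.
    """
    host_bits = 32 - prefix_len
    return 2 ** host_bits - reserved

print(usable_hosts(24))              # a /24 on Google Cloud: 252 hosts
print(usable_hosts(24, reserved=2))  # the same /24 on-premises: 254 hosts
```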
This is how the flow of IP address assignments looks:
- Cluster’s VPC network assigns IP addresses to nodes
- Internal load balancer IP addresses are assigned automatically from the node IPv4 block. You can instead specify a range for your load balancers and request a specific address with the `loadBalancerIP` option.
- Pod addresses come from a range of addresses allocated to the pods on that particular node. By default, at most 110 pods are allowed per node. GKE doubles this number when allocating addresses (110 × 2 = 220) and rounds up to the nearest subnet size, /24 by default. The doubling creates a buffer of spare addresses for scheduling pods. The per-node pod limit can be set at cluster creation.
- Containers share the IP address of the pod they run in.
- Service (ClusterIP) addresses come from an address pool reserved for Services.
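The per-node pod range sizing described above can be sketched as follows (a simplified model for illustration, not GKE's actual implementation):

```python
import math

def pod_range_prefix(max_pods_per_node: int = 110) -> int:
    """Return the per-node pod CIDR prefix length GKE would allocate.

    GKE doubles the pod limit to leave scheduling headroom, then rounds
    up to the nearest power-of-two subnet size.
    """
    addresses_needed = max_pods_per_node * 2            # e.g. 110 * 2 = 220
    host_bits = math.ceil(math.log2(addresses_needed))  # 220 -> 8 bits (256 IPs)
    return 32 - host_bits

print(pod_range_prefix(110))  # 24 -> the default /24 per node
print(pod_range_prefix(32))   # 26 -> a smaller /26 for nodes capped at 32 pods
```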
You can see an example of how to plan and scope address ranges in the IP addresses for VPC-native clusters section.
Domain Name System (DNS)
DNS resolves names to IP addresses, and Kubernetes automatically creates DNS entries for services. GKE offers several options:
- kube-dns – The Kubernetes-native add-on service. kube-dns runs as a deployment that is accessible via a cluster IP, and pods within a cluster use it for DNS queries by default. The kube-dns documentation explains how it works.
- Cloud DNS – Google Cloud's managed DNS service, which can also manage your cluster's DNS. Cloud DNS has some advantages over kube-dns:
- It removes the administration of a cluster-hosted DNS server.
- It resolves DNS locally on GKE nodes by caching responses, which provides both speed and scalability.
- It integrates with the Google Cloud operations (monitoring) suite.
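Whichever provider you choose, the DNS names follow the standard Kubernetes pattern. For example, a Service like the following (the `web` name and `default` namespace are hypothetical) becomes resolvable by pods in the cluster as `web.default.svc.cluster.local`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web          # hypothetical service name
  namespace: default
spec:
  selector:
    app: web         # routes to pods labeled app=web
  ports:
    - port: 80       # the Service's cluster-IP port
      targetPort: 8080
```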
Service Directory can also be integrated with GKE and Cloud DNS to manage services through namespaces.
The gke-networking-recipes GitHub repo has some Service Directory examples you can try out for internal load balancers, ClusterIP, headless Services, and NodePort.
You can learn more about DNS options in GKE by reading the article DNS on GKE: Everything that you need to know.
Load balancers control access and distribute traffic across cluster resources. GKE offers several options:
- Internal load balancers
- External load balancers
HTTP(S) traffic to your cluster is handled through the Ingress resource type, which creates an HTTP(S) load balancer for GKE. To ensure that the address does not change, you can assign a static IP address to the load balancer when configuring it.
GKE allows you to provision both internal and external Ingress. These guides show how to configure each:
- Ingress configuration for internal HTTP(S) load balancing
- External load balancing
GKE also offers container-native load balancing, which directs traffic directly to pod IPs via network endpoint groups (NEGs).
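Container-native load balancing is enabled per Service through the `cloud.google.com/neg` annotation. A minimal sketch (the service and label names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-neg      # hypothetical service name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'  # create NEGs for use by Ingress
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```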
These are the main load-balancing concepts to know:
- Frontend – Exposes your service to clients and accepts traffic based on configured rules. The frontend can be a static IP address or a DNS name.
- Load balancing – Once traffic is accepted, the load balancer distributes each request across backend resources according to its rules.
- Backend – The endpoints that receive traffic; GKE supports several endpoint types.
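Putting the frontend pieces together, here is a minimal external Ingress sketch, assuming a reserved global static IP named `web-ip` and a backend Service named `web` (both hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # Attach a reserved global static IP so the frontend address never changes
    kubernetes.io/ingress.global-static-ip-name: web-ip
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 80
```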
GKE offers many ways to design your cluster's network.
- Standard – This mode gives the administrator control over the cluster's underlying infrastructure. If you require greater control and accept the added responsibility, this mode is for you.
- Autopilot – GKE provisions and manages the cluster's underlying infrastructure. This configuration is ready to use and allows for largely hands-off management.
- Private cluster – Allows only internal IP connections. If nodes need access to the internet (e.g. for updates), Cloud NAT is one way to provide it.
- Private access – Private services access lets your VPC connect to a service producer via private IP addresses, while Private Service Connect allows private consumption of services across VPC networks.
Bringing everything together
Here is a brief, high-level overview.
- Your cluster assigns IP addresses to different resources
- IP addresses are reserved for various resource types, and subnetting lets you adjust the size of each range to suit your needs. It is a good idea to limit external access to your cluster.
- By default, pods can communicate with each other across the cluster.
- A Service is required to expose applications running in pods.
- Services are assigned cluster IPs.
- You can either use kube-dns for DNS resolution or Google Cloud DNS within your GKE Cluster.
- External and internal load balancers can both be used with your cluster to distribute traffic and expose applications.
- Ingress handles HTTP(S) traffic using Google Cloud's HTTP(S) load-balancing service, and can be configured as internal or external.