A Descent into Kubernetes ⚓

Introduction ☄️

Kubernetes is a portable, open-source container orchestration tool, commonly provided as a managed service. It is widely known for its self-healing mechanism, its ability to scale applications, and its overall management of an application so that it reaches the desired state it is supposed to be in. Managed offerings like Amazon's Elastic Kubernetes Service, Google Kubernetes Engine, and Azure Kubernetes Service are the most widely used services for container orchestration, or rather cloud computing in general. ( Civo is a startup that also provides container orchestration services ).

Back-In-Time ⏳


In the early days of application deployment, organizations ran their applications on bare-metal servers. In brief, the application layers had no boundaries, so one particular layer of the app could consume a disproportionate amount of the hardware resources available on the server, creating a risk of resource allocation issues. In general, this technique is also very expensive for an organization to manage compared to other solutions available in the market.


Container era 📦

Containers were introduced as far back as 1979, if you would believe it. Containers are a step up from virtual machine technology ( based on virtualization of hardware resources ). Although virtual machines provided resource boundaries and introduced security benefits for applications, they are not efficient enough. Containers are a lightweight OS-level virtualization technique, as they do not each need a dedicated OS. Containers are generally ephemeral. Control groups ( cgroups ) & namespaces are the Linux kernel features used to limit the amount of resources a container can use & to provide container isolation.

A container is created / managed using a container runtime. There are different container runtimes, e.g. containerd / CRI-O / LXC / rkt / Docker Engine; the most commonly used are containerd & Docker Engine. containerd in particular is a very stable runtime used by Kubernetes.

Kubernetes Behavior | Features

Kubernetes is a container orchestration tool that provides several features.

Self Healing mechanisms

Kubernetes continuously brings the current state toward the desired state. If a component fails, k8s re-deploys that component to restore the desired state. This also underpins auto-scaling.
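
As a sketch, a Deployment manifest like the following declares a desired state of three replicas ( the name `my-app` and the image are placeholders ); if a pod dies, Kubernetes recreates it to match the declared count:

```yaml
# Hypothetical Deployment: name and image are examples, not from this article.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3            # desired state: three running pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # example image
```

If you delete one of the three pods, the Deployment's controller notices the current state (2 pods) no longer matches the desired state (3 pods) and starts a replacement.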

Load Balancing

Requests made to the k8s cluster are redirected / forwarded using the Ingress component, which distributes requests across the cluster to wherever they need to be directed ( Services with specific IPs & Pods with specific ports ).

Ability to Scale | Scaling

Consists of horizontal scaling of resources like pods and volumes, based on demand or the desired state.
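
Demand-based scaling can be expressed with a HorizontalPodAutoscaler; this is a hedged sketch ( the Deployment name `my-app` and the thresholds are placeholders ):

```yaml
# Hypothetical HorizontalPodAutoscaler; names and numbers are examples.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods when average CPU exceeds 80%
```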

Life Cycle Management

Automated deployments & updates. The ability to roll back the cluster to previous versions, and to pause & resume a deployment.
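
How an update rolls out can be tuned in the Deployment spec; a possible fragment ( the values here are illustrative, not from this article ):

```yaml
# Hypothetical rolling-update settings inside a Deployment's spec.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra pod during the update
      maxUnavailable: 0   # never remove a pod before its replacement is ready
```

Rollbacks and pauses are then driven with `kubectl rollout undo deployment/<name>`, `kubectl rollout pause deployment/<name>`, and `kubectl rollout resume deployment/<name>`.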

Persistent Storage

Cluster information is stored dynamically in the cluster brain ( etcd ), which holds information about the cluster's current state.

Breaking Down Kubernetes | Kubernetes Architecture

( Figure: Kubernetes architecture diagram )

A Kubernetes cluster consists of many parts, including the master node ( control plane ), worker nodes, and the cluster brain ( etcd ).

Control plane ( Master Node )

The control plane can be divided into several parts: the API server, etcd, the controller manager, and the scheduler.

The API server is the client interface for communication between the k8s REST API and a process/user.

etcd is a distributed database storage system that essentially stores the state of the Kubernetes cluster. etcd uses key-value storage. It is highly available, consistent, and persistent with its data storage ( cluster data ).

The controller manager as a whole manages the different types of controllers ( ReplicaSet, Namespace, Endpoint, and Service controllers ). Controllers act as control loops that watch over your k8s cluster state: each compares the current state with the desired state, then adjusts the current state as necessary to move toward the desired state, constantly listening to the API server.

The scheduler's role is to listen for requests from the API server, such as scheduling new pods. It then schedules pods on worker nodes depending on the health & availability of the nodes.
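
One input the scheduler considers is a pod's resource requests; a minimal sketch ( pod name and image are placeholders ) where the scheduler will only place the pod on a node with enough unreserved CPU and memory:

```yaml
# Hypothetical pod: the scheduler picks a node that can satisfy
# the requested CPU and memory below.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: nginx:1.25       # example image
      resources:
        requests:
          cpu: "250m"         # a quarter of a CPU core
          memory: "128Mi"
```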


Worker Nodes

A worker node executes the containers and applications assigned to it. Worker nodes can be divided into several parts: the kubelet, the container runtime, pods, and kube-proxy.

The kubelet is another Kubernetes controller, present on every worker node. It acts as an interface between the control plane and the container runtime. For each pod scheduled to its node, the kubelet first reads the pod's PodSpec and then instructs the container runtime, via the CRI ( Container Runtime Interface ), to start the containers. ( containerd supports the CRI )

Kube-proxy runs on each worker node, updating the node's IP tables and maintaining the network rules that route traffic to the right Services and pods.

The container runtime is the low-level component of a container engine that is responsible for creating and running containers, working with the OS kernel to enable containerization.

Kubernetes Networking

Ingress acts as a reverse proxy or load balancer that redirects your requests to specific pods/services. Essentially, it is used so that cluster applications can be accessed externally through the browser via HTTP/HTTPS. When an HTTP/HTTPS request is sent to a URL/path, the domain is linked to a particular Service, to which the Ingress redirects the request.
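
The domain-to-Service link described above can be sketched as an Ingress rule ( the host and service name are hypothetical ):

```yaml
# Hypothetical Ingress: host and backend service are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: my-app.example.com       # requests to this domain...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service # ...are forwarded to this Service
                port:
                  number: 80
```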


Services are a logical abstraction over a particular set of pods in the cluster running similar processes. A Service consists of multiple pods registered to it, and it redirects/forwards requests to the particular target port on those pods.
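
A minimal Service sketch ( names and ports are placeholders ): pods are "registered" by matching the selector labels, and traffic arriving on the Service's port is forwarded to the pods' target port:

```yaml
# Hypothetical Service: selects pods labeled app=my-app and
# forwards traffic from port 80 to the pods' port 8080.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # port on the pods
```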

Deployments are abstractions over pods, and they manage the pods created under them. E.g.: because pods are ephemeral, IP addresses do not need to keep being assigned to each newly generated or replicated pod; instead, changes are made to the Deployment.

Kubernetes on your system


  • Setting up a Kubernetes cluster can often be complicated; instead, minikube can be used as a test cluster on your local machine.


  • For external processes to access the Kubernetes cluster, there needs to be a client that talks to the cluster's control plane. The API server provides this functionality, and a command-line tool like kubectl is commonly used to communicate with the k8s cluster ( control plane ).

  • Prerequisites: install minikube and kubectl ( command-line tools )

minikube start

Learning Resources Used

Kubernetes Tutorial

TechWorld with Nana

Kunal Kushwaha


Next in the series: Kubernetes Deep Dive.

print("Thanks for Reading")
