Kubernetes Architecture and Components

jaffar shaik

I am sure you will love this article. It gives a good overview of the Kubernetes architecture, and the content starts by explaining the difference between an IMAGE and a CONTAINER.

What is Kubernetes?

Definition:

Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications.

If we look at this definition, we can see the word CONTAINER.

Let's understand the difference between an IMAGE and a CONTAINER.

Image definition:

> An image is a package of an application together with its dependencies and configuration.

> It is an artifact that can be moved around.

> If it is not running, it is an image.

1. Pod definition:

> A Pod is basically an abstraction layer on top of a container.

> Each Pod has its own IP address, which is an internal IP.

> A Pod can communicate with another Pod using its IP.

> Pods are ephemeral: if a container or application crashes, the Pod dies and a new Pod is created with a new IP address.

Types of Pods:

  1. Regular Pods
  2. Static Pods

Scheduling regular Pods:

apiserver → receives the request → scheduler decides which node → kubelet → starts the Pod.

  • When we deploy a Pod, we send a request to the apiserver. The scheduler decides where to schedule the Pod, and the cluster data is updated in etcd (a minimal Pod manifest is shown after this list).
  • The controller manager is responsible for watching a regular Pod and having it rescheduled if it fails.
  • This is how regular Pod deployment works.
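
As a minimal sketch, the manifest below describes a regular Pod; the name, labels, and the nginx image are just illustrative placeholders. Applying it (for example with kubectl apply -f pod.yaml) sends the request to the apiserver, the scheduler picks a node, and the kubelet on that node starts the Pod.

```yaml
# pod.yaml - a minimal regular Pod (name, labels, and image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: web            # a single container inside the Pod
      image: nginx:1.25    # image the container runtime will pull and run
      ports:
        - containerPort: 80  # port the application listens on inside the Pod
```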

Scheduling static Pods:

  1. If we need to schedule Pods without the master processes, we rely on the kubelet alone.
  2. No master process is required to deploy a static Pod.
  3. The kubelet watches a specific location on the node it is running on, /etc/kubernetes/manifests, and we keep the static Pod configuration files there.
  4. When the kubelet finds manifest files in this location, it starts the Pods itself.
  5. The kubelet is also responsible for watching a static Pod and restarting it if it fails.
  6. We can easily identify static Pods because their names are suffixed with the node name (a sketch of a static Pod manifest follows this list).
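
As a rough sketch, a static Pod is just an ordinary Pod manifest saved into the directory the kubelet watches; the path below assumes a kubeadm-style node where the kubelet's staticPodPath is /etc/kubernetes/manifests, and the file name and image are placeholders.

```yaml
# /etc/kubernetes/manifests/static-web.yaml
# The kubelet watches this directory (its staticPodPath) and starts or
# restarts this Pod itself; no apiserver request or scheduler is involved.
apiVersion: v1
kind: Pod
metadata:
  name: static-web         # shows up in the cluster as static-web-<node-name>
spec:
  containers:
    - name: web
      image: nginx:1.25    # placeholder image
      ports:
        - containerPort: 80
```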

2. Container definition:

> In simple words, a container is a running environment for an image.

> When we pull an image onto a local machine and start it, the application inside starts and a container environment is created.

> If it is running, it is a container.

Figure: Kubernetes architecture

The Kubernetes architecture is composed of master nodes and worker nodes.

Master process in Kubernetes:

  • The Kubernetes master node is the node that directs and coordinates a set of worker nodes.
  • There are 4 processes that run on the master node; they control the worker nodes and the Kubernetes cluster as a whole.
  • The master components are themselves deployed as Pods.
  1. API server
  2. Scheduler
  3. Controller manager
  4. etcd

1. API server

  • The first process on the master node is the API server.
  • It acts as the entry point into the Kubernetes cluster.
  • As a user, when you deploy an application in the Kubernetes cluster, you interact with the API server through a client such as kubectl or a dashboard UI.
  • The API server is the cluster gateway that receives initial requests or updates into the cluster.

Example:

If you want to deploy new Pods, applications, or components, or if you need to check cluster health or deployment status, you need to talk to the API server.

2. Scheduler

  • The scheduler is the process that schedules Pods onto nodes.
  • It only decides on which node a new Pod should be scheduled.
  • Once the apiserver validates the request for a new Pod, it hands it over to the scheduler in order to start the Pod on one of the worker nodes.
  • The kubelet then gets the request from the scheduler and actually starts the Pod on the node.

3. Controller Manager

  • The controller manager detects state changes in the cluster, such as Pods dying.
  • If a Pod dies on any node, it needs to be rescheduled as soon as possible; to do this, the controller manager interacts with the scheduler.

4. Etcd

  • etcd stores the critical data for Kubernetes.
  • As a distributed store, it maintains copies of the data across multiple machines/servers in the cluster.
  • It stores the cluster state as key/value pairs.
  • etcd is the cluster brain.
  • If a Pod dies or a new Pod joins, all of these changes are stored in etcd.

Worker node:

  • A worker node is a physical or virtual server.
    The worker nodes are the part of the Kubernetes cluster that actually runs the containers and applications.
  • Each node has multiple Pods and containers running on it.
  • 3 processes must be installed on each and every node.
  • These 3 processes are responsible for scheduling and managing the Pods.
  • Nodes are the actual cluster servers that do the actual work; that is why they are called “worker nodes”.

Worker node processes:

A worker node has 3 processes:

  1. Container runtime
  2. Kubelet
  3. Kube-proxy

1. Container runtime

  • The first process that runs on each and every node is the container runtime.
  • The container runtime is the process that actually executes containers.
  • A container runtime is needed on both master and worker nodes.
  • Kubernetes uses the container runtime to run containers.
  • The container runtime is a separate component, not a Kubernetes component.
  • Our applications run as containers.
  • The master processes (apiserver, scheduler, controller manager, and etcd) also run as containers in Kubernetes.

2. Kubelet

  • The kubelet is the process that takes a Pod specification and runs its containers on the node.
  • The kubelet interacts with both the container runtime and the node.
  • The kubelet starts the Pod with its containers inside it.

3. Kube-proxy

  • Kube-proxy is the process that forwards requests from Services to Pods.
  • Kube-proxy must be installed on every worker node.

Service object

The way communication happens between Pods and applications is through a Service object.

Figure: Service object
  • A Service object has a static IP address with a DNS name that is attached to a Pod.
  • The Service catches requests and forwards them to the Pod.
  • Even if the Pod dies, the Service and its IP address remain.
  • The Service object also acts as a load balancer.

We have 2 types of Services:

  • External Service
  • Internal Service

External Service

If the application needs to be accessed through a browser, we need an external Service.

Example:

  • An external Service opens up communication from external sources.
  • The URL of an external Service looks like, for example, https://165.54.39.88:8080 (a sample manifest follows this list).
  • This kind of URL is good for testing environments.
  • Here 165.54.39.88 is the IP of the node and 8080 is the port number of the application.
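
As an illustration of an external Service, here is a hedged sketch using the NodePort type; the names, labels, and ports are assumptions for a demo application listening on port 8080. Kubernetes exposes the Service on every node's IP at the chosen node port, which normally has to fall in the 30000-32767 range.

```yaml
# external-service.yaml - exposes the application outside the cluster
# Names, labels, and ports are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: demo-external-service
spec:
  type: NodePort          # reachable on <node-IP>:<nodePort> from outside
  selector:
    app: demo             # forwards traffic to Pods carrying this label
  ports:
    - port: 8080          # Service port inside the cluster
      targetPort: 8080    # port the application listens on in the Pod
      nodePort: 30080     # external port on the node (30000-32767 range)
```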

Ingress:

If end users need to access the application, the URL should be in a form like https://facebook.com, where https is the secure protocol followed by the domain name “facebook.com”. For this we have a Kubernetes component called “Ingress”: the request first goes to the Ingress, and the Ingress then forwards it to the Service.
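
A minimal Ingress sketch, assuming an Ingress controller (for example the NGINX Ingress controller) is already installed in the cluster; the host name and the backend Service name are placeholders.

```yaml
# ingress.yaml - routes requests for a domain to a Service inside the cluster
# The host and Service name are placeholders; an Ingress controller must be installed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: myapp.example.com          # the user-facing domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-internal-service  # Service that receives the traffic
                port:
                  number: 8080
```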

Internal service:

  • If we do not want the database to be open to public requests, we create an internal Service instead (see the sketch below).
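
As a sketch, an internal Service is simply a Service of the default ClusterIP type, which is reachable only from inside the cluster; the name, labels, and MySQL port are illustrative.

```yaml
# internal-service.yaml - ClusterIP (the default) Service, not exposed externally
# Name, labels, and port are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: db-internal-service
spec:
  type: ClusterIP         # default type; only reachable from inside the cluster
  selector:
    app: my-db            # forwards to the database Pods with this label
  ports:
    - port: 3306          # Service port
      targetPort: 3306    # port the database listens on in the Pod
```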

ConfigMap:

  • It is external configuration for an application.
  • A ConfigMap contains the configuration of the application.
  • For example, a ConfigMap can contain the URL of the database; we just connect it to a Pod.
  • The Pod gets the data that is present in the ConfigMap.
  • If we change the name of a Service or an endpoint changes, we just adjust the ConfigMap.
  • Part of the external configuration can also be the database credentials: what if the username and password change during the application deployment process?
  • Having a username and password in a ConfigMap is insecure, even though it is external configuration.
  • To handle this we have a component called “Secret” (a sketch of a ConfigMap follows this list).
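
A hedged sketch of a ConfigMap holding a database URL, together with a Pod that consumes it as an environment variable; all names and values are made up for illustration.

```yaml
# configmap.yaml - external configuration for the application
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_URL: db-internal-service   # e.g. the internal Service name of the database
---
# The application Pod reads the value as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx:1.25         # placeholder image
      env:
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: app-config  # ConfigMap to read from
              key: DB_URL       # key inside the ConfigMap
```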

Secrets:

  • A Secret stores sensitive data like credentials and certificates in base64-encoded format. Encoding alone is not secure, so third-party tools are often used with Kubernetes to manage secrets.
  • We connect the Secret to a Pod so that the Pod can get the information from the Secret.
  • We can use data from ConfigMaps and Secrets in our application Pods through environment variables or as a properties file (see the sketch below).
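
A sketch of a Secret with base64-encoded credentials and of a Pod referencing it; the values below decode to "admin" and "password" and are only for illustration.

```yaml
# secret.yaml - credentials stored base64-encoded (illustrative values)
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DB_USER: YWRtaW4=           # base64 of "admin"
  DB_PASSWORD: cGFzc3dvcmQ=   # base64 of "password"
---
# The Pod injects the Secret values as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
    - name: app
      image: nginx:1.25       # placeholder image
      env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_USER
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```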

Volumes:

  • What happens to the data when we restart a Pod?
  • The data or logs inside the Pod are gone. If we need to persist data for the long term, we use volumes.
  • Kubernetes does not manage data persistence; the K8s admin is responsible for replication and availability of the data.
  • A volume attaches physical storage, for example from the hard drive of the node where the Pod is running, to the Pod.
  • It can also be remote storage outside of the Kubernetes cluster, such as cloud storage.
  • We can think of storage as an external hard drive that can be plugged into the Kubernetes cluster (a sketch follows this list).
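
A rough sketch of attaching persistent storage to a Pod through a PersistentVolumeClaim; the requested size, the image, the root password, and the mount path are assumptions and will differ per cluster (a default StorageClass is assumed to satisfy the claim).

```yaml
# pvc-and-pod.yaml - request storage and mount it into a Pod
# Size, image, password, and paths are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi              # amount of storage requested
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-db
spec:
  containers:
    - name: db
      image: mysql:8.0          # placeholder database image
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "changeme"     # illustrative only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # where the database writes its data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc     # binds the Pod to the claim above
```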

Deployment object:

Figure: Deployment object
  • In a Deployment we can specify the number of Pod replicas we need (a sample manifest follows this list).
  • We can scale Pods up and down as needed.
  • In practice we work with Deployments, not with Pods directly.
  • A Pod is a layer on top of containers.
  • A Deployment is an abstraction on top of Pods.
  • If one replica of the application Pod dies, the Service forwards requests to another one and the application can still be accessed by end users.
  • This works for application Pods.
  • But what about the case where a database Pod dies?
  • In that case we cannot access the application.
  • Why can a database not be replicated by a Deployment object?
  • Because a database has state, meaning that if we had cloned replicas of the database, they would all need to share the same data storage.
  • We need a mechanism that manages which Pods are reading from the storage and writing to the storage, to avoid data inconsistency. This mechanism is handled through a StatefulSet.
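
A sketch of a Deployment that keeps 3 replicas of a stateless application Pod running; the names, labels, and image are placeholders.

```yaml
# deployment.yaml - manages 3 identical replicas of the application Pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3                   # desired number of Pod replicas
  selector:
    matchLabels:
      app: demo
  template:                     # Pod template the Deployment stamps out
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25     # placeholder image
          ports:
            - containerPort: 80
```

Scaling up or down is then just a matter of changing the replicas field, or running something like kubectl scale deployment demo-deployment --replicas=5.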

StatefulSet:

  • StatefulSets are for stateful applications and databases like MySQL.
  • Any database application should be created with a StatefulSet, not with a Deployment object.
  • A StatefulSet, just like a Deployment object, is responsible for scaling and replicating Pods, but it also handles database consistency.
  • In practice, databases are often hosted outside of the Kubernetes cluster (a trimmed-down StatefulSet sketch follows this list).
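
A trimmed-down StatefulSet sketch for a database; it assumes a headless Service named mysql exists and that a default StorageClass can satisfy the volume claims, and the credentials and sizes are placeholders. Actual MySQL replication between the Pods would still need extra configuration.

```yaml
# statefulset.yaml - stateful database Pods with stable identity and storage
# Assumes a headless Service called "mysql" and a default StorageClass.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # headless Service giving Pods stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "changeme"       # illustrative only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:         # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```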

DaemonSet:

  • When we add or delete nodes, we need to adjust the replica count; with a Deployment we cannot ensure that Pods are equally distributed across nodes.
  • A DaemonSet automatically calculates the number of replicas needed based on the number of nodes.
  • A DaemonSet deploys exactly 1 replica (1 Pod) per node.
  • When we add a node to the cluster, the DaemonSet adds a Pod replica on it.
  • We do not need to define a replica count (see the sketch after this list).
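
A DaemonSet sketch, for example for a node-level log agent; the image and command are placeholders. Note that there is no replicas field: one Pod runs per node automatically.

```yaml
# daemonset.yaml - runs exactly one Pod per node (no replicas field)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
        - name: agent
          image: busybox:1.36    # placeholder image
          command: ["sh", "-c", "while true; do echo collecting node logs; sleep 60; done"]
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log       # the node's own log directory
```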

Conclusion:

In this article we walked through the Kubernetes architecture and its components in detail.
