Minikube is an open-source tool that allows you to run Kubernetes, a container orchestration system, on your local machine. It creates a single-node cluster inside a virtual machine (VM). It works on different operating systems, such as Linux, macOS, and Windows. It is useful for users who are new to containers or who want to test Kubernetes without setting up a large infrastructure.
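A typical local workflow looks like the following sketch (these are standard minikube and kubectl commands; output and driver choice will vary by machine):
minikube start      # create and start a local single-node cluster
minikube status     # check that the cluster components are running
kubectl get nodes   # the single minikube node should report as Ready
minikube stop       # stop the cluster without deleting it
minikube delete     # remove the local cluster entirely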
What is Kubernetes?
Kubernetes is an open-source software platform for automating the deployment, scaling, and management of containerized applications. It was originally developed by Google, but is now maintained by the Cloud Native Computing Foundation (CNCF).
Kubernetes provides a platform-agnostic way to manage containerized applications across multiple hosts, providing features such as load balancing, service discovery, automatic scaling, and rolling updates. It allows developers to focus on writing code and building applications, while abstracting away the underlying infrastructure.
Kubernetes is widely used in modern cloud-native applications, especially those based on microservices architecture. It supports multiple container runtimes such as Docker, containerd, and CRI-O, and can run on various cloud platforms, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, as well as on-premises data centers.
Architecture of Kubernetes:
A Kubernetes cluster mainly consists of worker machines, called nodes, and a control plane. Every cluster has at least one worker node. The kubectl CLI communicates with the control plane, and the control plane manages the worker nodes.
Let's look at the architecture in detail.
Kubernetes is composed of a number of components, each of which plays a specific role in the overall system. These components can be divided into two categories:
Control plane
Worker nodes
Control Plane/ Master Node:
It is a collection of components that manage the overall state and health of the cluster, for example when you want to set up new pods, destroy pods, or scale pods. Four main services run on the control plane:
API Server: The API server is the component of the Kubernetes control plane that exposes the Kubernetes API. It is the initial gateway to the cluster, listening for updates or queries from clients such as the kubectl CLI. Kubectl communicates with the API server to say what needs to be done, such as creating or deleting pods. The API server also works as a gatekeeper: it validates every request it receives and then forwards it to the other processes. No request can reach the cluster directly; everything has to pass through the API server.
Scheduler: The scheduler assigns work to the different worker nodes. It handles new requests coming from the API server and assigns them to healthy nodes, intelligently deciding which node each pod should run on for better efficiency of the cluster.
Controller Manager: The kube-controller-manager is responsible for running the controllers that handle the various aspects of the cluster's control loop. These controllers include:
Replication controller, which ensures that the desired number of replicas of a given application are running.
Node controller, which ensures that nodes are correctly marked as “ready” or “not ready” based on their current state.
Endpoints controller, which populates the Endpoints objects (that is, joins Services and Pods).
etcd: It is the key-value store of the cluster. Cluster state changes get stored in etcd. It acts as the cluster's brain because it tells the scheduler and other processes which resources are available and informs them of cluster state changes.
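On a typical minikube or kubeadm cluster (the exact layout varies by distribution), you can see these control plane processes running as pods in the kube-system namespace and confirm which API server endpoint kubectl talks to:
kubectl get pods -n kube-system   # lists kube-apiserver, kube-scheduler, kube-controller-manager, etcd, ...
kubectl cluster-info              # prints the URL of the API server that kubectl communicates with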
Worker Node:
These are the nodes where the actual work happens. Each node can run multiple pods, and pods have containers running inside them. Three processes run on every node to schedule and manage those pods:
Container runtime: A container runtime is needed to run the application containers inside the pods. Examples: Docker, containerd, CRI-O.
kubelet: Kubelet is an agent that runs on each worker node and communicates with the master node. It also makes sure that the containers which are part of the pods are always healthy. It watches for tasks sent from the API Server, executes the task like deploy or destroy the container, and then reports back to the Master.
kube-proxy: It is the process responsible for forwarding requests from Services to the pods. It has intelligent logic to forward each request to the right pod on the worker node. It also enables communication between the worker nodes: it maintains network rules on the nodes and makes sure the necessary rules are defined so that containers on different nodes can communicate with each other.
Why use Kubernetes?
Scalability: Kubernetes can scale your applications automatically based on their resource usage and demand. This helps to ensure that your applications are always available and responsive, no matter how many users or requests they receive (a small autoscaling example follows this list).
Fault tolerance: Kubernetes can detect and recover from failures in your applications or infrastructure automatically. This helps to minimize downtime and ensure that your applications are always up and running.
Flexibility: Kubernetes supports a wide range of container runtimes and platforms, which means that you can use it with any cloud provider or on-premises infrastructure.
Portability: Kubernetes allows you to run your applications on any infrastructure, which means that you can easily move your applications between different cloud providers or on-premises environments without any changes to your code.
Resource utilization: Kubernetes optimizes the use of resources, allowing you to get the most out of your infrastructure while minimizing costs.
Easy deployment and management: Kubernetes provides a unified API for deploying and managing your application, making it easier to automate deployment, scaling, and monitoring.
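As a small illustration of the scalability point above (the deployment name my-app and the thresholds are made-up examples), a deployment can be autoscaled on CPU usage with a single command:
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80   # creates a HorizontalPodAutoscaler
kubectl get hpa                                                         # inspect current vs. target utilization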
What's the difference between kubectl and kubelet?
| Feature | Kubectl | Kubelet |
| --- | --- | --- |
| Purpose | Command-line tool for managing Kubernetes clusters | Agent (system service) that manages containers running on individual nodes |
| Functionality | Create and delete resources, scale deployments, inspect cluster status, etc. | Start and stop containers, pull container images, report container status to the cluster |
| Scope | Manages the entire cluster | Manages individual nodes and the containers running on them |
| User interface | Command-line interface | No user interface; managed by the Kubernetes control plane |
What are ReplicaSets, ReplicationControllers, Namespaces, Deployments, and Services?
Replica Set: In Kubernetes, a ReplicaSet is responsible for ensuring that a specified number of replicas (or copies) of a pod are running at all times. If a pod fails, the ReplicaSet will create a new pod to replace it, thereby maintaining the desired number of replicas.
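For illustration only (the name, label, and image below are placeholders), a minimal ReplicaSet manifest might look like this:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-replicaset
spec:
  replicas: 3                  # desired number of pod copies
  selector:
    matchLabels:
      app: web                 # pods carrying this label are managed by the ReplicaSet
  template:                    # pod template used when new replicas must be created
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # example image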
Replication Controller: A ReplicationController is the older predecessor of the ReplicaSet in Kubernetes. It is likewise responsible for ensuring that a specified number of replicas of a pod are running at all times. However, it is more limited: for example, it only supports equality-based label selectors, whereas a ReplicaSet also supports set-based selectors, and ReplicaSets are what Deployments create today.
Namespace: In Kubernetes, a Namespace is a way to divide a cluster into multiple virtual clusters. Each Namespace provides a scope for resources such as pods, services, and replication controllers. This can be useful for separating different applications or environments, or for providing different levels of access to different teams.
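For example (the namespace name dev and the file app.yaml are just placeholders), namespaces are typically created and used like this:
kubectl create namespace dev          # create the namespace
kubectl get pods -n dev               # list pods scoped to that namespace
kubectl apply -f app.yaml -n dev      # create resources inside the namespace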
Deployment: A Deployment in Kubernetes is a resource that defines how an application should be deployed and managed. It creates and manages identical pods based on a defined template and continuously monitors their state. Deployments provide benefits such as easy scaling, rolling updates and rollbacks, managing different application versions, and self-healing. They are the standard way to manage application deployments in Kubernetes.
When you create a Deployment in Kubernetes, it creates a ReplicaSet, which in turn creates and manages a set of identical pods based on the template defined in the Deployment. The Deployment then continuously monitors the state of these pods, and automatically handles scaling, rolling updates, and rollbacks based on the desired state specified in the Deployment manifest.
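A minimal Deployment manifest, sketched here with placeholder names and an example image, shows the replica count and pod template that the underlying ReplicaSet is generated from:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # changing this image triggers a rolling update
Changing the pod template (for example, the image tag) triggers a rolling update, and kubectl rollout undo deployment/web-deployment rolls it back.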
Service: In Kubernetes, a Service is an abstraction that provides a stable IP address and DNS name for a set of Pods, allowing communication between different parts of an application or between different applications running within a Kubernetes cluster. A Service selects its Pods using labels and can be exposed in several ways, including ClusterIP, NodePort, and LoadBalancer. Services are a key component of Kubernetes and enable scalable and reliable communication within the cluster.
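Continuing the placeholder example above, a minimal Service manifest that routes traffic to those pods could look like this:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP              # could also be NodePort or LoadBalancer
  selector:
    app: web                   # traffic is routed to pods carrying this label
  ports:
  - port: 80                   # port exposed by the Service
    targetPort: 80             # port the container listens on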
Docker Swarm Vs Kubernetes
| Feature | Docker Swarm | Kubernetes |
| --- | --- | --- |
| Architecture | Built into Docker Engine; simpler and lightweight | More complex architecture with a separate control plane |
| Scalability | Capable of managing and scaling containerized applications | More scalable and better suited for larger, complex deployments |
| Features | Basic scheduling, scaling, and self-healing capabilities | Advanced scheduling, automatic scaling, and self-healing, with a robust ecosystem of third-party tools and plugins |
| Ease of use | Easier to set up and use, especially for smaller deployments | Steeper learning curve; requires more configuration and management |
| Community | Smaller and less active community | Larger and more active community, with more resources, documentation, and support available |
Kubernetes Vs Docker Compose
| Feature | Docker Compose | Kubernetes |
| --- | --- | --- |
| Container orchestration scope | Single host | Cluster of hosts |
| Scaling and load balancing | Limited scaling and no built-in load balancing | Automatic scaling and built-in load balancing |
| Health checks and self-healing | Limited health check capabilities and manual self-healing | Robust health checks and automatic self-healing |
| Configuration management | Simple configuration management | More complex configuration management and versioning |
| Deployment strategies | Rolling updates or blue-green deployment | Rolling updates, canary deployments, or blue-green deployments |
| Service discovery | DNS-based service discovery | DNS-based service discovery with built-in load balancing |
| Security | Basic security features | Advanced security features such as network policies and secrets management |
| Resource consumption | Less resource-intensive | More resource-intensive due to additional components |
| Learning curve | Easy to learn and use | Steeper learning curve, but more powerful once mastered |
Commands
Here are some common commands for interacting with a Kubernetes cluster:
Viewing the cluster state
To view a list of all the pods in the cluster, you can use the following command:
kubectl get pods
To view a list of all the nodes in the cluster, you can use the following command:
kubectl get nodes
To view a list of all the services in the cluster, you can use either of the following commands (svc is the short name for services):
kubectl get services
kubectl get svc
To view a list of deployments in the cluster, you can use the following command:
kubectl get deploy
- Create a deployment with a specific image:
kubectl create deployment <name> --image=<image>
- Expose a deployment as a service on a specific port:
kubectl expose deployment <name> --port=<port>
- Scale a deployment to a specific number of replicas:
kubectl scale deployment <name> --replicas=<num>
- Delete a deployment:
kubectl delete deployment <name>
- Delete a pod:
kubectl delete pod <name>
- View the logs of a pod:
kubectl logs <pod>
- Get detailed information about a pod:
kubectl describe pod <name>
- Open a terminal in a running pod:
kubectl exec -it <pod> -- /bin/bash
- Apply a YAML configuration file to create or update resources:
kubectl apply -f <filename>
- List all configmaps in the default namespace:
kubectl get configmaps
- Create a new configmap:
kubectl create configmap <name> --from-literal=<key>=<value>
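For example, with a made-up name and key/value pair:
kubectl create configmap app-config --from-literal=APP_MODE=debug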
- Edit an existing configmap:
kubectl edit configmap <name>
- List all secrets in the default namespace:
kubectl get secrets
- Create a new secret:
kubectl create secret generic <name> --from-literal=<key>=<value>
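For example, with a made-up name and credentials:
kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=changeme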
- Check the status of a deployment rollout:
kubectl rollout status deployment/<name>
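To tie these commands together, here is a small end-to-end sketch (the deployment name, image, and port are examples only) that creates, exposes, scales, checks, and finally cleans up an application:
kubectl create deployment hello-web --image=nginx:1.25
kubectl expose deployment hello-web --port=80 --type=NodePort
kubectl scale deployment hello-web --replicas=3
kubectl rollout status deployment/hello-web
kubectl get pods -l app=hello-web       # the app=hello-web label is added automatically by kubectl create deployment
kubectl delete service hello-web
kubectl delete deployment hello-web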