Introduction to the Basic Concepts of Kubernetes
Have you heard of Kubernetes? If you are reading this article, you are probably curious about it. This article covers the basic concepts of Kubernetes and how to use it.
What is Kubernetes?
Kubernetes is an open-source platform created by Google for managing containerized workloads and services; it facilitates both declarative configuration and automation.
It is written in Go and is an open-source project under the Apache 2.0 license. In the industry, Kubernetes is often abbreviated as “K8s”. With Kubernetes, you can run Linux containers across private, public, and hybrid cloud environments.
Kubernetes provides built-in features such as load balancing, service discovery, and Role-Based Access Control (RBAC). It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.
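As an illustration of the service discovery and load-balancing features mentioned above, here is a minimal sketch of a Service manifest. The name `my-app`, the port numbers, and the labels are all hypothetical placeholders, not part of any real application:

```yaml
# A hypothetical Service that load-balances traffic across all Pods
# labeled app=my-app and makes them discoverable inside the cluster
# under the DNS name "my-app-svc".
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app        # Pods with this label receive traffic
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8080 # port the container listens on
```

Any Pod in the same namespace can then reach the application at `http://my-app-svc`, and Kubernetes spreads the requests across the matching Pods.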
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
Why do we need Kubernetes?
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime.
For example, if a container goes down, another container needs to start. Wouldn’t it be easier if a system handled this behavior?
That is exactly what Kubernetes helps us with: managing containers. When we run production environments using a microservices pattern with many containers, we need to take care of many things, such as health checks, version control, scaling, and rollback mechanisms.
Kubernetes gives you the orchestration and management capabilities required to deploy containers at scale. Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale them, and manage their health over time.
That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently.
It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
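To make the canary deployment example concrete, here is a hedged sketch of one common way to do it: two Deployments whose Pod templates share the label `app: my-app`, so a Service selecting that label splits traffic between them roughly in proportion to replica counts. The names, labels, and image references are hypothetical placeholders:

```yaml
# Hypothetical canary setup: a stable Deployment with 9 replicas and a
# canary Deployment with 1 replica. Both Pod templates carry the label
# app=my-app, so a Service selecting that label sends roughly 10% of
# traffic to the canary version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: my-app, track: stable}
  template:
    metadata:
      labels: {app: my-app, track: stable}
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:1.0   # current stable version (placeholder)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app, track: canary}
  template:
    metadata:
      labels: {app: my-app, track: canary}
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:1.1   # candidate version under test (placeholder)
```

If the canary misbehaves, you scale its Deployment back to zero; if it looks healthy, you gradually shift replicas from stable to canary.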
In a nutshell, Kubernetes is like a manager with many subordinates (the containers). The manager’s job is to keep track of what each subordinate needs to do.
Key features of Kubernetes:
How does it work?
When you start learning Kubernetes from the official documentation, you might be overwhelmed by the amount of terminology. An overview often helps to build a better understanding of the Kubernetes concepts. Here I show you a complete overview diagram of the Kubernetes architecture. I hope this helps.
When you deploy Kubernetes, you get a cluster. A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker nodes host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault tolerance and high availability.
This document outlines the various components you need to have a complete and working Kubernetes cluster.
Here’s the diagram of a Kubernetes cluster with all the components tied together.
The master is the controlling element of the cluster. It has 3 parts:
- API Server: The application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster.
- Scheduler: Watches the API server for new Pod requests. It communicates with nodes to create new Pods and assigns work to nodes while allocating resources and imposing constraints.
- Controller Manager: A component on the master that runs controllers, including the Node controller, Endpoints controller, Namespace controller, etc.
Worker nodes are the machines that perform the requested, assigned tasks; the Kubernetes master controls them. There are 4 components inside each node:
- Pod: All containers will run in a pod. Pods abstract the network and storage away from the underlying containers. Your app will run here.
- Kubelet: Registers the node with the cluster, watches for work assignments from the scheduler, instantiates new Pods, and reports back to the master.
- Container Engine: Responsible for managing containers: pulling images and starting, stopping, and destroying containers.
- Kube Proxy: Responsible for forwarding app user requests to the right pod.
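To make the Pod concept above concrete, here is a minimal, hypothetical Pod manifest; the Pod name and labels are placeholders, and the image is the public `nginx` image. Applying it with `kubectl apply -f pod.yaml` asks the scheduler to place a single nginx container on one of the worker nodes:

```yaml
# A minimal Pod: one container running nginx, listening on port 80.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image works here
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods like this; you usually let a higher-level object such as a Deployment create and replace them for you.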
I’m not going to describe the detailed concepts of Kubernetes here, since that would get boring; this article is meant to stay quick and fun. You can read the official documentation for more detailed information.