The Kubernetes Handbook – Learn Kubernetes for Beginners

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling and management of containerized applications. Since its release in 2014, it has become the de facto standard for running production workloads in the cloud.

In this beginner-friendly guide, you'll learn the fundamentals of Kubernetes and how to use it to deploy a real-world application. I'll explain the core concepts with simple analogies, and show you how to set up a local cluster and deploy your first app on it.

By the end of this handbook, you'll have a solid understanding of how Kubernetes works, and you'll be able to use it to deploy and manage your own applications with confidence. Let's get started!

What is Kubernetes?

Kubernetes is a system for running and coordinating containerized applications across a cluster of machines. It runs your containers on a cluster, scales them up or down when needed, and automatically recovers if something goes wrong.

Think of it like a conductor in an orchestra. Just as the conductor manages the musicians and ensures that the right music is played at the right time, Kubernetes manages your "container orchestra" and ensures that the right containers are running at the right time.

Here are some key features of Kubernetes:

  • Automatic binpacking: Kubernetes automatically schedules your containers based on resource requirements and constraints, without sacrificing availability.
  • Self-healing: Kubernetes automatically restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your health check, and doesn't advertise them to clients until they are ready to serve.
  • Horizontal scaling: With Kubernetes, you can easily scale your application up and down with a simple command, or automatically based on CPU usage.
  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address, and load balance traffic to that container as well.

These features make Kubernetes an ideal platform for deploying and managing microservices and cloud-native applications.
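To make horizontal scaling concrete, here is a sketch of a HorizontalPodAutoscaler manifest that scales a hypothetical Deployment named my-app between 2 and 10 replicas based on CPU usage (the name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app           # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # target average CPU across all replicas
```

Kubernetes then adds or removes replicas to keep average CPU utilization near the target.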

Kubernetes Architecture

At a high level, a Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. The control plane manages the worker nodes and the containers in the cluster.

The main components of a Kubernetes cluster fall into two groups: control plane components and node components.

Let's go through each of these components in more detail.

Control Plane Components

The control plane consists of several components that make global decisions about the cluster, and detect and respond to cluster events. These components can run on any machine in the cluster, but are typically run on one or more dedicated control plane nodes.

  • kube-apiserver: The API server is the front end for the Kubernetes control plane. It exposes the Kubernetes API, which users, other cluster components, and external tools use to interact with the cluster.
  • etcd: etcd is the backing store for all cluster data. It's a distributed key-value store that provides a reliable way to store configuration data and state information for the cluster.
  • kube-scheduler: The scheduler is responsible for assigning newly created pods to nodes. It factors in the resource requirements of the pods, the constraints specified by the user, and the capacity of the nodes.
  • kube-controller-manager: The controller manager runs a set of controllers that regulate the state of the cluster. Examples of controllers include the replication controller, which maintains the correct number of pod replicas, and the endpoints controller, which populates the Endpoints object.

Node Components

The node components run on every node in the cluster, and are responsible for running the containers and providing the Kubernetes runtime environment.

  • kubelet: The kubelet is the primary node agent. It watches for pod specifications assigned to its node, and ensures that the containers described in those specifications are running and healthy.
  • kube-proxy: The kube-proxy handles network proxying and load balancing on each node. It maintains network rules that route traffic addressed to a Service to one of the pods backing that Service.
  • Container runtime: The container runtime is the software responsible for actually running the containers. Kubernetes supports several runtimes, such as containerd and CRI-O; Docker Engine can be used through the cri-dockerd adapter.

Now that you have a basic understanding of the architecture, let's set up a local Kubernetes cluster and deploy an application on it.

Setting Up a Local Kubernetes Cluster with Minikube

Minikube is a tool that makes it easy to run a single-node Kubernetes cluster on your local machine. It's great for learning and development purposes.

Here are the steps to set up Minikube:

  1. Install a driver for Minikube: either a container runtime such as Docker, or a hypervisor such as VirtualBox or Hyper-V.
  2. Install the Minikube binary on your local machine. You can download it from the official Minikube repository on GitHub.
  3. Start the Minikube cluster by running the following command:
    minikube start
  4. Verify that the cluster is running by running the following command:
    kubectl get nodes

    You should see a single node, which is the Minikube node.

That's it! You now have a local Kubernetes cluster running on your machine.

Deploying Your First Application

Now that you have a cluster running, let's deploy a simple Node.js application on it. The application is a basic Express.js app that displays a "Hello World" message.
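The application code itself isn't the focus here, but as a sketch, the container image used below could be built from a Dockerfile along these lines (assuming a server.js Express entry point that listens on port 8080):

```dockerfile
# Sketch of a Dockerfile for a small Express.js app
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # install only production dependencies
COPY . .
EXPOSE 8080                  # the port the app listens on
CMD ["node", "server.js"]    # assumed entry point
```

Since Minikube runs its own container runtime, a locally built image needs to be loaded into the cluster, for example with `minikube image load hello-world:v1`.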

Here are the steps to deploy the application:

  1. Create a deployment manifest for the application. This is a YAML file that describes the desired state of the application, including the container image to use, the number of replicas, and any environment variables.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - name: hello-world
            image: hello-world:v1 # image built locally; load it with 'minikube image load hello-world:v1'
            ports:
            - containerPort: 8080
  2. Apply the deployment manifest to the cluster by running the following command:
    kubectl apply -f deployment.yaml
  3. Verify that the deployment was created successfully by running:
    kubectl get deployments

    You should see the hello-world deployment in the output.

  4. Create a service manifest to expose the deployment. This is another YAML file that describes the service and how it should route traffic to the pods.
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world
    spec:
      type: NodePort # exposes the service on a node port so 'minikube service' can open it
      selector:
        app: hello-world
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
  5. Apply the service manifest to the cluster:
    kubectl apply -f service.yaml
  6. Verify that the service was created successfully:
    kubectl get services

    You should see the hello-world service in the output.

  7. Access the application by running:
    minikube service hello-world

    This will open the application in your default web browser. You should see a "Hello World" message displayed.

Congratulations! You've just deployed your first application on Kubernetes.

Kubernetes Concepts In Depth

Now that you've seen Kubernetes in action, let's take a deeper dive into some of the key concepts.

Pods

Pods are the smallest deployable units in Kubernetes. A pod is a group of one or more containers with shared storage and network resources, and a specification for how to run the containers.

Most pods run a single container, but a pod can also run multiple containers that need to work closely together. For example, you might run an application container and a log-collecting "sidecar" container in the same pod, because they operate as a single unit.

All containers in a pod run on the same node, and share the same IP address and port space. They can communicate with each other using localhost, and can also share volumes.

Here's an example of a pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: log
    image: log-collector # illustrative image name, not a real registry image

This manifest defines a pod with two containers: a web container running Nginx, and a log container running a log collector. The containers share the same network namespace, so they can communicate with each other using localhost.
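Volume sharing can be sketched the same way: the pod below gives both containers access to the same emptyDir volume, so the log container can read files the web container writes (the command and mount paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-logs
spec:
  volumes:
  - name: logs
    emptyDir: {}            # scratch volume that lives as long as the pod
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx   # nginx writes its logs here
  - name: log
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]  # follow the shared log file
    volumeMounts:
    - name: logs
      mountPath: /logs
```

Both containers see the same files because they mount the same volume, just at different paths.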

ReplicaSets

A ReplicaSet is a Kubernetes object that ensures a specified number of pod replicas are running at any given time. If a pod crashes or is deleted, the ReplicaSet will automatically create a new one to replace it.

ReplicaSets are used by Deployments to manage the number of pod replicas. When you create a Deployment, it automatically creates a ReplicaSet to manage the pods.

Here's an example of a ReplicaSet manifest:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

This manifest defines a ReplicaSet that ensures there are always three replicas of the frontend pod running. The selector field specifies which pods the ReplicaSet should manage, based on the labels.

Persistent Volumes

Persistent Volumes (PVs) are pieces of storage in the cluster that have been provisioned by an administrator. They are consumed by Pods and have a lifecycle independent of any individual Pod that uses them.

PVs are used to provide durable storage for stateful applications, such as databases. They can be provisioned dynamically using a StorageClass, or manually by an administrator.
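As a sketch of dynamic provisioning, this is the kind of StorageClass that Minikube ships with; the provisioner field names the component that creates volumes on demand:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: k8s.io/minikube-hostpath   # Minikube's built-in hostPath provisioner
```

With a StorageClass in place, a claim that references it gets a freshly provisioned volume instead of waiting for an administrator to create one.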

Here's an example of a PV manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/example"

This manifest defines a PV with 5Gi (gibibytes) of storage, using a hostPath volume on the node. The accessModes field specifies how the volume can be mounted; ReadWriteOnce means it can be mounted read-write by a single node at a time.

To use a PV, you need to create a PersistentVolumeClaim (PVC) that requests the storage. Here's an example of a PVC manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

This manifest defines a PVC that requests 5GB of storage with ReadWriteOnce access mode. Kubernetes will automatically bind this claim to an available PV that meets the requirements.
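To complete the picture, a pod consumes the claim by referencing it in its volumes section (a sketch; the image and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc     # the PVC defined above
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # where the volume appears in the container
```

The pod never names the PV directly; the PVC is the indirection that lets the same manifest work against whatever storage the cluster provides.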

Best Practices

Here are some best practices to follow when deploying applications on Kubernetes:

  • Use Deployments instead of Pods: Deployments provide a higher-level abstraction for managing pods, and make it easy to perform rolling updates and rollbacks.
  • Use Services to expose your applications: Services provide a stable IP address and DNS name for your pods, and allow you to load balance traffic across multiple replicas.
  • Use ConfigMaps and Secrets for configuration: ConfigMaps and Secrets allow you to decouple configuration from your application code, and make it easy to update configuration without rebuilding your containers.
  • Use readiness and liveness probes: Liveness probes let Kubernetes detect unhealthy containers and restart them automatically, while readiness probes keep traffic away from pods that aren't ready to serve requests.
  • Use resource requests and limits: Resource requests and limits allow you to specify the minimum and maximum amount of CPU and memory that your containers need, and help Kubernetes schedule your pods efficiently.

Following these best practices will help you build scalable and resilient applications on Kubernetes.
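As a sketch of the last three bullets in one place, a container spec inside a Deployment's pod template might combine configuration from a ConfigMap, probes, and resource settings like this (the ConfigMap name app-config and the /healthz endpoint are assumptions):

```yaml
containers:
- name: hello-world
  image: hello-world:v1
  ports:
  - containerPort: 8080
  envFrom:
  - configMapRef:
      name: app-config        # hypothetical ConfigMap holding app settings
  readinessProbe:             # gate traffic until the app can serve
    httpGet:
      path: /healthz          # assumed health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:              # restart the container if it stops responding
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
  resources:
    requests:                 # minimum guaranteed resources, used for scheduling
      cpu: 100m
      memory: 128Mi
    limits:                   # hard caps the container cannot exceed
      cpu: 500m
      memory: 256Mi
```

Requests inform the scheduler's bin-packing decisions, while limits protect neighboring pods from a runaway container.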

Conclusion

In this handbook, you've learned the fundamentals of Kubernetes and how to use it to deploy and manage containerized applications. You've seen how to set up a local cluster using Minikube and deploy a simple application, and you've explored some of the key concepts such as pods, deployments, and persistent volumes.

Kubernetes is a powerful platform that can help you build and run applications at scale. By mastering the concepts and techniques covered in this handbook, you'll be well on your way to becoming a Kubernetes expert.

Remember, practice makes perfect! Keep experimenting with Kubernetes, and don't be afraid to ask for help when you need it. The Kubernetes community is vibrant and welcoming, and there are plenty of resources available to help you learn and grow.

Good luck on your Kubernetes journey!
