

Definition: What Is a Kubernetes Pod?

A Pod is the smallest deployable unit in Kubernetes: a group of one or more containers running an instance of an application. Pods are hosted on worker machines called nodes, which provide a configured environment for the containers to run efficiently. This includes dependencies and resources such as:

  • Storage: Stores data in volumes shared among the pod's containers.
  • Networks: Provide an internal IP address that allows the containers to communicate with each other over localhost.
  • Configuration information: Specifies how to run each container, such as which port to use or which container image version to run (a minimal example manifest follows this list).
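To make this concrete, here is a minimal sketch of a pod manifest. The pod name, image, port, and volume are illustrative placeholders rather than values from this article:

  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-pod            # illustrative name
  spec:
    containers:
      - name: web
        image: nginx:1.25      # which image version to run
        ports:
          - containerPort: 80  # which port the container listens on
        volumeMounts:
          - name: cache
            mountPath: /cache  # where the volume appears inside this container
    volumes:
      - name: cache
        emptyDir: {}           # simple volume shared by all containers in the pod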

Kubernetes Pod vs. Container

Kubernetes pods contain one or more containers. A container is a package of the software dependencies and resources needed to run an application: code, libraries, tools, and settings. Pods form an abstraction layer over the containers, providing the dependencies and resources they share, which allows Kubernetes to manage the containers efficiently.

Read more about containers.

Kubernetes Pod vs. Node

Kubernetes pods are hosted on nodes in a cluster. Nodes are the worker machines, virtual or physical, that make up a cluster. Each node can host several pods, which in turn run containers. There are two types of nodes in Kubernetes: master nodes and worker nodes. The master node runs the control plane, which schedules pods onto the worker nodes; the worker nodes host the pods that run the containerized applications.

Kubernetes Pod vs. Cluster

In Kubernetes, a cluster is a set of nodes. Once deployed, each pod is scheduled onto a node, where it runs an instance of an application. A cluster is created from at least one master node and several worker nodes. Through its nodes, a cluster provides Kubernetes pods with configured, lightweight environments in which to develop, deploy, and manage applications efficiently. Because containers package their own dependencies, clusters allow pods to run across different machines and environments.

Why Are Pods Useful?

Pods have revolutionized how containers are managed in production. A single pod can be configured to run and coordinate multiple containers, and Kubernetes pods provide a consistent environment for those containers to operate in.

Containers in a pod share a network namespace, which simplifies communication: they can reach each other over localhost and share resources such as data, volumes, and status. Additionally, each pod has its own IP address, which allows it to communicate and share resources with other pods.

Pods also facilitate application scalability. As demand rises and falls, a controller can automatically create replica pods or shut down unused ones.

Types of Pods

Pods run containers in two ways:

  • Single container: A Kubernetes pod can run a single container. In this case, the container represents the entire application, including all the dependencies and resources needed to run it. Pods with single containers are simple and easy to run.
  • Multiple containers: A pod in Kubernetes can also run multiple containers on the same node. This is useful when an application's programs depend on one another and need to share resources such as files, volumes, or data. The pod forms an abstraction layer over the containers so they can share resources in a controlled environment (see the sketch after this list).
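As a rough sketch of the multi-container pattern, the manifest below runs a web server and a sidecar that share an emptyDir volume. The names, images, and paths are assumptions for illustration only:

  apiVersion: v1
  kind: Pod
  metadata:
    name: web-with-sidecar          # hypothetical pod name
  spec:
    volumes:
      - name: shared-logs
        emptyDir: {}                # scratch volume shared by both containers
    containers:
      - name: web
        image: nginx:1.25
        volumeMounts:
          - name: shared-logs
            mountPath: /var/log/nginx   # the web server writes its logs here
      - name: log-forwarder
        image: busybox:1.36
        command: ["sh", "-c", "tail -F /logs/access.log"]  # sidecar reads the same files
        volumeMounts:
          - name: shared-logs
            mountPath: /logs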

At the same time, multiple pods can run one application. The concept is known as replication.

Replication of Pods

Replication of pods in Kubernetes means using more than one pod to run multiple instances of the same application. Normally, each pod runs a single instance of an application; running several instances across multiple pods lets you scale the application and reduce its downtime. Controllers create and manage Kubernetes replica pods as specified in the pod template, as in the example below.
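For instance, a Deployment along these lines (the name, labels, and image are illustrative assumptions) asks Kubernetes to keep three replica pods running from the same pod template:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web-deployment        # hypothetical name
  spec:
    replicas: 3                 # run three replica pods of the application
    selector:
      matchLabels:
        app: web
    template:                   # the pod template used for every replica
      metadata:
        labels:
          app: web
      spec:
        containers:
          - name: web
            image: nginx:1.25
            ports:
              - containerPort: 80

If one replica fails, the Deployment's controller creates a new pod from the template to keep the count at three.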

How Do Pods Work?

A pod manages one or more containers scheduled onto the same Kubernetes node, which can be a physical machine or a virtual machine (VM). All containers in a pod share resources and dependencies, and the pod coordinates their execution and termination. For instance, a pod can define 'init' containers that run to completion before the application containers start, setting up the environment for the applications that follow.
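A minimal sketch of the init-container pattern might look like the following; the pod name, images, and the db-service Service it waits for are placeholders:

  apiVersion: v1
  kind: Pod
  metadata:
    name: app-with-init          # hypothetical name
  spec:
    initContainers:
      - name: wait-for-db
        image: busybox:1.36
        # runs to completion before the application container starts
        command: ["sh", "-c", "until nslookup db-service; do echo waiting for db; sleep 2; done"]
    containers:
      - name: app
        image: my-app:1.0        # placeholder application image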

Pod Lifecycle

Controllers manage pods based on their status. Every Kubernetes pod publishes its status in the PodStatus API object, whose phase field summarizes the pod's current state.

The status of a Kubernetes pod can be any of the following:

  • Pending: Indicates the pod has been accepted by the cluster, but one or more of its containers is not yet running.
  • Running: Indicates the pod has been bound to a node and all of its containers have been created, with at least one container running, starting, or restarting.
  • Succeeded: Indicates all of the pod's containers have terminated successfully. Once terminated this way, the pod will not restart.
  • Failed: Indicates all containers have terminated and at least one of them failed. Containers that terminate successfully exit with status zero; anything else is regarded as a failure.
  • Unknown: Indicates the state of the pod cannot be determined, typically because the node hosting it cannot be reached.

In addition, PodStatus contains an array of PodConditions, which describe the factors behind the pod's current state. Each entry in the PodConditions array has:

  • A Type field, which can be PodScheduled, Ready, Initialized, or Unschedulable;
  • A Status field, which can be True, False, or Unknown (an example status block follows this list).
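As a rough illustration (the values here are hypothetical, not output from a real cluster), the status section of a running pod might look like this:

  status:
    phase: Running
    conditions:
      - type: PodScheduled
        status: "True"
      - type: Initialized
        status: "True"
      - type: Ready
        status: "True"
    podIP: 10.1.0.34          # illustrative pod IP

You can inspect the full status of a pod with kubectl get pod name_of_pod -o yaml.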

Controllers

In Kubernetes, controllers run inside the Controller Manager, a daemon that embeds the control loops shipped with Kubernetes. Each loop watches the cluster state and sends requests to the API server to make changes based on that state. A controller's purpose is to move the cluster closer to the desired state.

For instance, when a pod fails or becomes unresponsive, the responsible controller notices the change in status and quickly sends requests to create a replica pod to replace the failed one. This is necessary because pods cannot create, repair, or delete themselves. In most cases, the controller sends its requests to the API server, which then applies the changes to the Kubernetes pods so the application keeps running.

Some controllers can change resources outside of the cluster to get the resources needed to achieve the desired cluster state. They then report the current state back to the cluster’s API server.

Kubernetes has in-built controllers that run in the Controller Manager. These include:

  • Job controller: Ensures a particular task runs to completion (see the example below).
  • Deployment: Manages stateless applications such as web (HTTP) servers.
  • StatefulSet: Manages stateful, persistent applications such as databases.
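For instance, a Job manifest along these lines (the name, image, and command are illustrative assumptions) asks the Job controller to run a task until it completes successfully:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: one-off-task           # hypothetical name
  spec:
    template:                    # pod template for the task
      spec:
        containers:
          - name: task
            image: busybox:1.36
            command: ["sh", "-c", "echo processing && sleep 5"]
        restartPolicy: Never     # Jobs require Never or OnFailure
    backoffLimit: 3              # retry the task up to three times on failure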

Kubernetes also allows you to write your own controllers and run them alongside the built-in ones, handling cases the in-built controllers don't cover.

Pod Templates

In Kubernetes, controllers have a PodTemplate field. A Kubernetes pod template contains specifications for how each pod should run, including which containers (identified by name and image) run in the pod and which volumes it uses.

Controllers use pod templates to create new pods and to manage their desired state within the cluster. Pod templates are embedded in workload resources such as Deployments, Jobs, and DaemonSets. In short, a pod template is a blueprint for creating pods: any change to the template is reflected in all new pods created from it.

For instance, in a Deployment, you can initiate an update of the Kubernetes pods by changing the pod template inside the Deployment's specification. When the controller detects the change, it stops directing new requests to the existing pods and scales them down until they terminate. After successful termination, it uses the updated pod template to create new pods with the updated configuration.
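Concretely, and continuing the hypothetical Deployment from the replication example above, bumping the image tag inside the pod template is enough to trigger such a rollout:

  spec:
    replicas: 3
    template:                    # editing this pod template triggers a rollout
      metadata:
        labels:
          app: web
      spec:
        containers:
          - name: web
            image: nginx:1.26    # changed from nginx:1.25; new pods use this image

You could apply a change like this with kubectl apply -f, or with kubectl set image deployment/web-deployment web=nginx:1.26.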

Pod Storage

Pods store data in volumes, directories that are accessible to all the containers in the pod. Pods can use many types of volumes, but the main distinction is between ephemeral and persistent volumes. Persistent volumes are preferred for important data because they can be recovered if a pod fails; ephemeral volumes, by contrast, are destroyed by Kubernetes once the pod ceases to exist.

Kubernetes has a PersistentVolume subsystem that provides an API through which administrators provision and manage storage. Persistent volumes have a lifecycle independent of the pods that use them. To use a volume, you specify it in the pod's YAML specification.

You also need to declare which containers the volume is mounted in. Volumes are mounted at a path you specify inside each container's filesystem, so each container within a Kubernetes pod accesses the volume at its own path, as in the sketch below.
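A minimal sketch, assuming a PersistentVolumeClaim named data-claim already exists in the cluster (the claim, pod name, and mount path are placeholders):

  apiVersion: v1
  kind: Pod
  metadata:
    name: db-pod                  # hypothetical name
  spec:
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-claim   # assumed pre-existing claim
    containers:
      - name: db
        image: postgres:16
        volumeMounts:
          - name: data
            mountPath: /var/lib/postgresql/data   # path where this container sees the volume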

Pod Networking

Pods in Kubernetes communicate via the unique IP address assigned to each pod. Containers in the same pod share a network namespace, IP address, and network ports, which lets them talk to one another over localhost. They can also use inter-process communication mechanisms such as System V semaphores or shared memory.

Containers can also communicate with containers running in different Kubernetes pods, but then they must use IP networking: each pod has its own IP address and network namespace, so OS-level channels such as localhost or shared memory don't reach across pods. The sketch below shows the in-pod case.
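As a small illustration of localhost communication inside one pod (all names and images are assumptions), the sidecar here polls the web container over localhost:

  apiVersion: v1
  kind: Pod
  metadata:
    name: localhost-demo          # hypothetical name
  spec:
    containers:
      - name: web
        image: nginx:1.25
        ports:
          - containerPort: 80
      - name: probe
        image: busybox:1.36
        # both containers share the pod's network namespace,
        # so "localhost" here reaches the nginx container
        command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]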

Working with Pods

Pods can be managed using a command-line tool called kubectl. Kubectl lets you run a range of management tasks against the cluster, such as deploying applications, inspecting and managing resources, and viewing logs. Below are some basic pod-related kubectl commands.

Get Pods

When you need information about Kubernetes pods, use the kubectl get pods command. It prints a tabulated summary of the pods in the current namespace. If you want data about a specific pod, use kubectl get pods name_of_pod.

You can filter the output with label selectors, for example kubectl get pods -l app=myapp. To see a summary of all resource types supported in the cluster, run the kubectl api-resources command.

Create Pods

To create a pod directly, use the $ kubectl create -f FILENAME command. While this is possible, it isn't the recommended approach: bare pods are ephemeral, and if one fails it is not recreated, so the application and its data can be lost. Instead, create pods through controllers. Controllers such as Deployments, Jobs, and StatefulSets ensure pods are replicated, can use persistent volumes, and keep running efficiently.

Update or Replace Pods

Once created, much of a Kubernetes pod's definition is immutable, including its metadata, name, and most of its spec fields. Attempts to update, replace, or patch this information are therefore limited. Instead, Kubernetes recommends changing the pod template in the owning controller and letting it create new pods.

Delete Pods

To delete a pod, use the command: $ kubectl delete -f ./mypod.yaml. By default, this triggers a graceful termination with a 30-second grace period; you can force immediate deletion (for example with --grace-period=0 --force), but that is not recommended for pods that share resources such as storage, APIs, or names. Nodes take a while to notice a forced deletion, so other processes may still be using the same identity as the deleted pod, which can lead to inconsistency and data corruption.

If you do not delete pods yourself, Kubernetes deletes them automatically once the work assigned to them is complete. After a pod is deleted, Kubernetes invalidates the discovery cache, which can take up to 10 minutes; if you don't want to wait, running kubectl api-resources refreshes the discovery cache more quickly.

Monitoring Pods

Because Kubernetes has so many components, it is prone to performance bottlenecks. This makes monitoring imperative.

Kubernetes monitoring gives you insight into the cluster’s health by tracking performance metrics and resource counts and giving you an overview of the operations in the cluster. You need to be constantly alerted when issues arise to act on them quickly.

However, the multifaceted nature of Kubernetes increases the complexity of logging and monitoring its components. To monitor such a distributed, dynamic environment effectively, you need a granular approach: a system that tracks each component, including the resources in a Kubernetes deployment and the health of deployed pods.

You can use two types of monitoring tools:

  • Built-in tools: Kubernetes itself exposes basic resource metrics. Through the metrics.k8s.io API (typically served by the Metrics Server), you can query CPU and memory usage for nodes and pods, for example with kubectl top pods. However, a third-party monitoring tool is a better fit if you need in-depth data.
  • Third-party tools: These tools give an in-depth analysis of a cluster. They provide rich data on applications deployed on Kubernetes and capture metrics and events in real time. You can structure, visualize, and analyze the data, and a dimensional data model makes it easier for developers and administrators to query, debug, and report on the state of applications.

Kubernetes Pods Monitoring with Sematext

Sematext Monitoring is a monitoring solution with support for Kubernetes monitoring that provides insight into the health of your cluster and all the nodes, and monitors pods, deployments, and services. Pre-built and customizable dashboards expose all the metrics needed for efficient monitoring, from container metrics like CPU, memory, disk I/O, to network utilization statistics. Combined with anomaly detection and powerful alerting capabilities, this tool makes it easier and faster to identify problematic pods.

Sematext offers a complete view of resource utilization and availability, helping with cost management. That way, you can ensure that pods, individual containers, and namespaces use underlying resources efficiently.

Watch the video below to learn more about Sematext Monitoring, or start the 14-day free trial and see how it can help ensure the performance of your Kubernetes environment.


