Cloud-native application development is quickly becoming the norm in the industry. As reliance on cloud technologies grows, application development has shifted to a more cloud-focused approach. Containerization is at the forefront of powering these cloud-native applications, and containerized apps have created the need for container orchestration.

Kubernetes was born from this need, offering a robust solution for container orchestration and management.

It has become the de-facto standard for container orchestration with its powerful feature set, robust nature, and active community that continuously improves the platform. However, this widespread adoption has also made Kubernetes a complex solution, resulting in a relatively steep learning curve for anyone getting started with it. So, in this article, let’s look at some key concepts and practices you should know as a newcomer to Kubernetes.

Pods are not equal to containers

The first thing anyone should know is the difference between a Pod and a container. Pods are the smallest deployable unit in K8s, and they are not analogous to containers: a Pod can consist of one or more containers that are managed as a single entity. A Pod can be thought of as a group of tightly coupled containers that share resources; containers within a Pod behave as if they were running on a single logical host.
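As a sketch, a multi-container Pod might look like the manifest below: an application container and a log-shipping sidecar sharing a volume. The names and images are illustrative, not from a real deployment.

```yaml
# Hypothetical Pod with two tightly coupled containers
# sharing an emptyDir volume on the same logical host.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper        # sidecar reading the web container's logs
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
  volumes:
    - name: logs
      emptyDir: {}             # shared, Pod-scoped scratch volume
```

Both containers are scheduled together, share the Pod's network namespace, and are managed as one unit.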

Ignoring the importance of Labels

Labels may not be unique identifiers, yet they provide a mechanism for users to add meaningful and identifiable metadata to Kubernetes objects. These key-value pairs can be added and modified at any point in the lifecycle of a K8s object. Beyond providing identifiable information, labels are crucial when selecting Kubernetes objects. The Kubernetes API uses label selectors to identify and select the necessary K8s objects, either via equality-based or set-based selectors.

Labels are used across the board to select K8s objects, whether you are creating a Deployment or ReplicaSet, defining a networking Service, or simply querying Pods via kubectl.
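To illustrate, the sketch below shows a Deployment and a Service wired together purely by labels (the names, label values, and image are hypothetical). The Service's equality-based selector matches the labels on the Pods created by the Deployment's template:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web               # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: web
        tier: frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # routes traffic to any Pod with this label
  ports:
    - port: 80
```

The same labels also work from the command line: `kubectl get pods -l app=web,tier=frontend` uses an equality-based selector, while `kubectl get pods -l 'tier in (frontend, staging)'` uses a set-based one.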

Always consider the Pod termination behavior

It is essential to consider the termination behavior of your application to reduce the impact on the end-user and facilitate fast recovery. Kubernetes utilizes Linux signals to manage termination. The common procedure is to send a SIGTERM signal to the containers in a Pod, signaling them to terminate, and then wait for the specified termination grace period (30 seconds by default) for them to shut down. If containers are still running after that period, Kubernetes sends a SIGKILL signal to forcibly stop them before removing the Pod and cleaning up any associated Kubernetes objects.

Thus, containers must be programmed to handle these signals, and proper graceful-termination logic should be implemented in your application. Depending on the requirements, a preStop hook or the terminationGracePeriodSeconds field can be used to change the termination behavior without modifying application code. This is highly useful when troubleshooting errors related to Pod termination, such as containers being force-killed with SIGKILL (signal 9) before they finish shutting down.
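As a minimal sketch, both knobs appear in the Pod spec like this (the Pod name and image are placeholders; the sleep in the preStop hook stands in for real connection-draining logic):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 60   # extended from the 30s default
  containers:
    - name: app
      image: example/app:1.0          # hypothetical image
      lifecycle:
        preStop:
          exec:
            # Runs before SIGTERM is delivered; gives load balancers
            # time to stop routing traffic to this Pod.
            command: ["sh", "-c", "sleep 10"]
```

Note that the preStop hook's duration counts against the grace period, so the grace period should be long enough to cover both the hook and the application's own shutdown.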

Defining resource requests and limits

As with any application, resource management should be a core part of any Kubernetes cluster management. Not specifying the request and limits for containers or incorrect specifications can lead to disastrous consequences like resource starvation in the cluster, drastic cost increases in managed K8s clusters due to containers consuming unlimited resources, or out-of-memory and CPU throttling issues.

Therefore, it is crucial to properly configure requests and limits for the containers to tune performance and increase the efficiency of the K8s environment. The request defines the amount of resources guaranteed to a container, which the scheduler uses to place the Pod on a node, while the limit specifies the maximum amount of resources the container can consume. These values must be set based on the requirements of the application and the specific use case. Moreover, it’s always a good idea to leave enough headroom for mission-critical containers to handle unexpected workloads.
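A minimal sketch of a container spec with both values set (the name, image, and numbers are illustrative and would need tuning against real usage data):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resourced-app
spec:
  containers:
    - name: app
      image: example/app:1.0    # hypothetical image
      resources:
        requests:
          cpu: "250m"           # guaranteed; used for scheduling decisions
          memory: "256Mi"
        limits:
          cpu: "500m"           # CPU usage beyond this is throttled
          memory: "512Mi"       # exceeding this gets the container OOM-killed
```

The asymmetry matters: exceeding a CPU limit only throttles the container, while exceeding a memory limit terminates it, so memory limits in particular need headroom.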

Utilizing Kubernetes Monitoring

Monitoring is an important aspect of the proper maintenance of an application throughout its lifecycle, and Kubernetes monitoring provides the backbone for proactive management of K8s clusters. Kubernetes uses its Metrics Server to collect resource usage data, namely CPU and memory utilization for nodes and Pods, from the kubelet on each node of the cluster. Other key signals, such as node status, Pod availability, API request latency, and available storage, are surfaced by the API server and other cluster components.

These metrics are crucial for gauging the performance of the overall cluster and identifying failures or misconfigurations that can cause availability or performance issues for K8s objects and infrastructure. The Metrics Server also feeds data to the Kubernetes Dashboard and drives autoscaling via services such as the Horizontal Pod Autoscaler. K8s monitoring makes it easier to keep track of resources within the cluster, since the abstraction introduced by containerization means there is no direct correlation between the application and the underlying infrastructure.
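As an illustration of autoscaling driven by Metrics Server data, the sketch below defines a HorizontalPodAutoscaler targeting a hypothetical Deployment named "web", scaling on average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:              # the workload to scale (assumed to exist)
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The same Metrics Server data can be inspected directly with `kubectl top nodes` and `kubectl top pods`, which is a quick way to verify that resource metrics are flowing before relying on autoscaling.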


Kubernetes is a powerful tool, and it is vital to properly understand the ins and outs of it to harness the full potential of K8s. The road to getting started with Kubernetes will be a relatively complex one. However, once you overcome that hurdle, the advantages offered by K8s will far surpass any difficulties you face while learning it, making your development journey easier.

Disclosure: This is a sponsored post but we don’t get any commission from sales made.
