Logging and monitoring play a vital role in Kubernetes: they maintain visibility into the cluster, speed up troubleshooting, support performance tuning and capacity planning, strengthen security, and drive continuous improvement. Logging provides valuable insight into application events, errors, and warnings, while monitoring tracks resource utilization and performance metrics. By leveraging both in Kubernetes, you can effectively manage and scale your applications and infrastructure while ensuring reliability and efficiency.


Kubernetes itself does not provide a complete logging solution, but it offers support for collecting and managing logs from applications and system components running within the cluster. Kubernetes provides a framework and infrastructure to facilitate logging by allowing containers to write their logs to standard output (stdout) and standard error (stderr), which can then be collected by external logging systems. Here are some key features and components offered by Kubernetes for logging:

  1. Standard Output and Standard Error: Kubernetes encourages containers to follow the practice of writing logs to stdout and stderr. By default, logs written to these streams are captured and made available for collection.

  2. kubectl Logs: The kubectl command-line tool provides a convenient way to access and view container logs in Kubernetes. You can use commands like kubectl logs <pod-name> or kubectl logs -f <pod-name> to fetch and stream logs from a specific pod or container.

  3. Pod Log Retrieval: Kubernetes allows you to retrieve logs for individual pods. By using the Kubernetes API, you can programmatically fetch logs for specific pods using the GET /api/v1/namespaces/{namespace}/pods/{podName}/log endpoint.

  4. Log Aggregation and Collection: Kubernetes integrates with external log aggregation and collection tools to centralize and manage logs from the cluster. You can deploy and configure logging agents like Fluentd, Fluent Bit, or Filebeat as sidecar containers or DaemonSets to collect logs from containers on each node. These agents can forward logs to external log aggregators or storage systems like Elasticsearch, Logstash, Kibana (ELK Stack), or managed logging services provided by cloud providers.

  5. Logging Sidecars: Kubernetes allows you to deploy logging agents as sidecar containers alongside your application containers within a pod. These sidecar containers collect logs generated by the application containers and send them to a central log aggregator. This pattern simplifies log collection and ensures that logs are captured consistently across multiple containers within a pod.

  6. Container Log Rotation: Kubernetes supports log rotation for containers, which helps in managing log file sizes and preventing logs from consuming excessive storage. You can configure log rotation policies by specifying log file size limits and the maximum number of log files to retain.
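
As a sketch of the sidecar pattern from item 5, the pod below runs a hypothetical app container that writes to a file on a shared emptyDir volume, while a sidecar streams that file to its own stdout. In a real setup the sidecar would typically be a log-forwarding agent such as Fluent Bit rather than tail:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  # Hypothetical application container writing logs to a file
  - name: app
    image: busybox:1.28
    args: [/bin/sh, -c, 'while true; do echo "$(date) app log line" >> /var/log/app.log; sleep 1; done']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  # Sidecar that surfaces the file on its own stdout
  - name: log-sidecar
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
```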
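
The rotation limits from item 6 are set on the kubelet rather than per pod. A minimal sketch of the relevant KubeletConfiguration fields (the values here are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file once it reaches this size
containerLogMaxSize: 10Mi
# Keep at most this many rotated log files per container
containerLogMaxFiles: 5
```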
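
As an illustration of the pod log endpoint mentioned in item 3, you can reach it through kubectl proxy; the pod name counter here is just a placeholder, and the commands assume a running cluster:

```shell
# Start a local proxy to the API server (serves on localhost:8001 by default)
kubectl proxy &

# Fetch logs for a pod named "counter" (placeholder) in the "default" namespace
curl http://localhost:8001/api/v1/namespaces/default/pods/counter/log

# Query parameters such as tailLines mirror the kubectl logs flags
curl "http://localhost:8001/api/v1/namespaces/default/pods/counter/log?tailLines=10"
```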

It's important to note that while Kubernetes provides the infrastructure and tools to facilitate logging, the choice of logging solution and its configuration is left to the user. The specific logging features and capabilities available may also depend on the underlying infrastructure, cloud provider, or the logging stack deployed in the Kubernetes cluster. The examples below walk through viewing pod and container logs.


Kubernetes offers several tools and features for resource monitoring within the cluster. These tools enable users to track and manage resource utilization, performance metrics, and health status. Here are some key tools offered by Kubernetes for resource monitoring:

  1. Metrics API: Kubernetes defines a Metrics API, implemented by the Metrics Server add-on, that exposes resource usage metrics for the cluster. This API allows users to query and retrieve CPU and memory usage for nodes, pods, and their containers, and it powers features such as kubectl top and the Horizontal Pod Autoscaler.

  2. kube-state-metrics: kube-state-metrics is an add-on component that collects metrics from the Kubernetes API server about the state and health of various resources within the cluster. It provides insights into the desired state, current state, and other metadata for objects like deployments, replica sets, pods, and services.

  3. Resource Metrics: Kubernetes tracks resource metrics at both the node and pod levels. These metrics include CPU usage, memory consumption, disk utilization, and network traffic. By monitoring resource metrics, users can identify resource bottlenecks, optimize resource allocation, and ensure efficient utilization of cluster resources.

  4. Horizontal Pod Autoscaler (HPA): The Horizontal Pod Autoscaler is a Kubernetes feature that automatically adjusts the number of pod replicas based on observed metrics. HPA scales the number of pods up or down to maintain desired performance levels, such as CPU utilization or custom metrics.

  5. kubectl top: The kubectl top command provides a way to fetch resource utilization statistics for nodes, pods, and containers. It allows users to view real-time CPU and memory usage metrics for specific resources within the cluster.

  6. Prometheus Integration: Kubernetes integrates well with Prometheus, a popular open-source monitoring system. Prometheus can be deployed as a separate monitoring solution alongside Kubernetes or as a Kubernetes-native service using the Prometheus Operator. Prometheus enables advanced resource monitoring, alerting, and time-series data analysis.

  7. Custom Metrics: Kubernetes supports the collection and utilization of custom metrics. Users can define their own custom metrics, such as application-specific metrics or business metrics, and use them for monitoring and scaling decisions. Custom metrics can be exposed through the Kubernetes API or integrated with external monitoring systems.
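
As a sketch of the Horizontal Pod Autoscaler from item 4, the manifest below targets a hypothetical deployment named web and scales it between 2 and 10 replicas to keep average CPU utilization around 80%:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

The HPA compares the observed average CPU utilization (from the Metrics API) against the target and adjusts the replica count accordingly.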
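
To see the raw data behind the Metrics API (item 1), you can query it directly through the API server; these commands assume a running cluster with the Metrics Server enabled:

```shell
# List node-level CPU and memory usage as reported by the Metrics API
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# List pod-level usage for a given namespace (default here)
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods
```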

These tools and features provide comprehensive resource monitoring capabilities in Kubernetes, enabling users to effectively monitor, optimize, and scale their applications and infrastructure within the cluster. In the example below I will enable the metrics-server addon and show what information it can return.



To run the example I had the following tools/software installed:

I also set up an alias for kubectl using the following PowerShell command:

Set-Alias -Name k -Value kubectl

Make sure minikube is up and running:

minikube start


Logging (single container)

  1. Create pod

Create a pod.yaml file with the following definition:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Then apply it.

> k apply -f .\pod.yaml
pod/counter created
  2. Verify logs
> k logs counter
0: Sat Jun 17 22:41:09 UTC 2023
1: Sat Jun 17 22:41:10 UTC 2023
2: Sat Jun 17 22:41:11 UTC 2023
### output shortened for brevity ### 

> k logs counter -f
164: Sat Jun 17 22:43:58 UTC 2023
165: Sat Jun 17 22:43:59 UTC 2023
166: Sat Jun 17 22:44:00 UTC 2023
### output shortened for brevity - streams the logs ### 

> k logs counter > counter.log

> cat counter.log
0: Sat Jun 17 22:41:09 UTC 2023
1: Sat Jun 17 22:41:10 UTC 2023
2: Sat Jun 17 22:41:11 UTC 2023
### output shortened for brevity ### 
  3. Clean up
> k delete pod counter
pod "counter" deleted
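
Beyond the basic commands above, kubectl logs accepts several useful flags. For example, against a pod like the counter pod from this walkthrough:

```shell
# Show only the last 10 lines
kubectl logs counter --tail=10

# Show logs from the last 5 minutes, with timestamps prepended
kubectl logs counter --since=5m --timestamps

# Show logs from the previous (crashed or restarted) instance of the container
kubectl logs counter --previous
```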

Logging (multi container)

  1. Create multi-container pod

Create a multi-pod.yaml file with the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-counter
spec:
  containers:
  - name: nginx
    image: nginx
  - name: counter
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Apply the definition

> k apply -f .\multi-pod.yaml
pod/nginx-counter created
  2. Verify logs
> k logs nginx-counter -c counter
0: Sat Jun 17 22:48:04 UTC 2023
1: Sat Jun 17 22:48:05 UTC 2023
2: Sat Jun 17 22:48:06 UTC 2023
  3. Clean up
> k delete pod nginx-counter
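
When you don't want to name each container with -c, kubectl logs can also return output from every container in the pod at once:

```shell
# Fetch logs from all containers in the pod, prefixed per container
kubectl logs nginx-counter --all-containers=true
```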


Monitoring (metrics-server)

  1. Enable metrics-server
> minikube addons enable metrics-server
💡  metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
### output shortened for brevity ###
🌟  The 'metrics-server' addon is enabled
  2. Verify metrics-server

You can check that metrics-server is on by listing the addons.

> minikube addons list
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | 3rd party (Ambassador)         |
| dashboard                   | minikube | enabled ✅   | Kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | Kubernetes                     |
| metrics-server              | minikube | enabled ✅   | Kubernetes                     |
| storage-provisioner         | minikube | enabled ✅   | Google                         |
### output shortened for brevity ###
  3. Verify metrics
> k top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   417m         2%     21Mi            0%

> k top pods
NAME            CPU(cores)   MEMORY(bytes)
nginx-counter   3m           23Mi
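
kubectl top can also break pod usage down per container, which is handy for multi-container pods like nginx-counter above:

```shell
# Show per-container CPU and memory usage within each pod
kubectl top pods --containers
```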

Further reading