Monitoring and Optimization for Containers in Google Kubernetes Engine

Identifying Resource Usage in Containers

Question

You support an e-commerce application that runs on a large Google Kubernetes Engine (GKE) cluster deployed on-premises and on Google Cloud Platform.

The application consists of microservices that run in containers.

You want to identify containers that are using the most CPU and memory.

What should you do?

Answers

Explanations


A. Use Stackdriver Kubernetes Engine Monitoring.
B. Use Prometheus to collect and aggregate logs per container, and then analyze the results in Grafana.
C. Use the Stackdriver Monitoring API to create custom metrics, and then organize your containers using groups.
D. Use Stackdriver Logging to export application logs to BigQuery, aggregate logs per container, and then analyze CPU and memory consumption.

Correct answer: A.

To identify the containers that are using the most CPU and memory in a large Google Kubernetes Engine (GKE) cluster that is deployed on-premises and on Google Cloud Platform, you have several options.

Option A: Use Stackdriver Kubernetes Engine Monitoring. Stackdriver Kubernetes Engine Monitoring (now part of Cloud Monitoring in Google Cloud's operations suite) is a built-in monitoring tool that provides insights into the performance of GKE clusters and the workloads running on them. With Stackdriver Kubernetes Engine Monitoring, you can view CPU and memory usage metrics for each pod and container in your cluster. You can also set up alerts based on these metrics to be notified when a container or pod exceeds a usage threshold. This option is a good choice if you want to quickly identify which containers are using the most CPU and memory without having to set up any additional tools.
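To make the per-container metrics concrete, here is a minimal sketch of the time-series filter you could pass to the Cloud Monitoring API (`projects.timeSeries.list`) to pull these numbers programmatically. The metric types are the documented GKE system metrics for container CPU and memory; the cluster name is a placeholder.

```python
# Sketch: build a Cloud Monitoring (formerly Stackdriver) time-series filter
# that selects per-container CPU or memory metrics for one GKE cluster.
# The metric type strings are the documented kubernetes.io system metrics;
# "prod-cluster" below is an illustrative placeholder.

GKE_CPU_METRIC = "kubernetes.io/container/cpu/core_usage_time"
GKE_MEMORY_METRIC = "kubernetes.io/container/memory/used_bytes"


def container_usage_filter(metric_type: str, cluster_name: str) -> str:
    """Return a Monitoring API filter selecting one container metric
    for a single cluster's k8s_container resources."""
    return (
        f'metric.type = "{metric_type}" AND '
        f'resource.type = "k8s_container" AND '
        f'resource.labels.cluster_name = "{cluster_name}"'
    )


print(container_usage_filter(GKE_CPU_METRIC, "prod-cluster"))
print(container_usage_filter(GKE_MEMORY_METRIC, "prod-cluster"))
```

In practice the same breakdown is available with no code at all on the GKE Workloads page of the console, which is what makes this option the quickest.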

Option B: Use Prometheus to collect and aggregate logs per container, and then analyze the results in Grafana. Prometheus is an open-source monitoring and alerting toolkit that is often used in conjunction with Kubernetes. Note that despite the option's wording, Prometheus collects metrics rather than logs: it scrapes metrics from various sources, including Kubernetes and its components, and Grafana can then be used to build custom dashboards for visualization and analysis. With Prometheus, you can set up exporters that expose metrics from each container in your cluster and have Prometheus scrape and aggregate them. You can then create a Grafana dashboard that shows the CPU and memory usage of each container. This option requires more setup time and expertise, but it provides a more customizable monitoring solution.
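For the Prometheus route, ranking containers comes down to two PromQL queries over the standard cAdvisor metrics that the kubelet exposes (`container_cpu_usage_seconds_total` and `container_memory_working_set_bytes`). A small sketch that builds those queries, with a configurable top-k count and rate window:

```python
# Sketch: PromQL queries that rank containers by CPU and memory usage,
# built from the standard cAdvisor metrics kubelet exposes. These strings
# can be pasted into Grafana panels or sent to the Prometheus HTTP API.


def top_cpu_query(k: int = 10, window: str = "5m") -> str:
    # rate() over the CPU-seconds counter yields cores consumed per second,
    # summed per container and ranked with topk().
    return (
        f"topk({k}, sum(rate(container_cpu_usage_seconds_total[{window}]))"
        " by (namespace, pod, container))"
    )


def top_memory_query(k: int = 10) -> str:
    # Working-set bytes is the gauge Kubernetes uses for memory pressure.
    return (
        f"topk({k}, sum(container_memory_working_set_bytes)"
        " by (namespace, pod, container))"
    )


print(top_cpu_query())
print(top_memory_query())
```

Either query can back a Grafana table panel sorted by value, giving the same "top consumers" view as the built-in tooling, but with full control over labels and windows.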

Option C: Use the Stackdriver Monitoring API to create custom metrics, and then organize your containers using groups. The Stackdriver Monitoring API allows you to create custom metrics that can be used to monitor any aspect of your application or infrastructure that is not covered by the built-in metrics in Stackdriver. With the API, you can create metrics that track the CPU and memory usage of your containers and use them to organize your containers into groups. You can then view the usage of each group and compare it to other groups or the entire cluster. This option requires programming expertise, but it provides a flexible and customizable monitoring solution.
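As an illustration of what Option C involves, the sketch below assembles the JSON request body for creating a custom metric descriptor via the Monitoring API's `projects.metricDescriptors.create` method. The metric ID and label here are hypothetical examples, not names from any real project.

```python
# Sketch: a request body for projects.metricDescriptors.create in the
# Cloud Monitoring API. Custom metrics must live under the
# custom.googleapis.com/ prefix; the metric ID and label below are
# illustrative assumptions.


def custom_container_metric_descriptor(metric_id: str, description: str) -> dict:
    return {
        "type": f"custom.googleapis.com/{metric_id}",
        "metricKind": "GAUGE",     # point-in-time readings, not a counter
        "valueType": "DOUBLE",
        "description": description,
        "labels": [
            {
                "key": "container_name",
                "valueType": "STRING",
                "description": "Container the sample was taken from",
            },
        ],
    }


descriptor = custom_container_metric_descriptor(
    "container/cpu_share",
    "Fraction of node CPU used by the container",
)
print(descriptor["type"])
```

Once the descriptor exists, your code writes time series against it, and Monitoring groups can then slice those series by label to compare sets of containers.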

Option D: Use Stackdriver Logging to export application logs to BigQuery, aggregate logs per container, and then analyze CPU and memory consumption. Stackdriver Logging allows you to collect and analyze logs from your applications and infrastructure. With this option, you can export your application logs to BigQuery and then create queries that aggregate logs per container. By analyzing these logs, you can identify which containers are using the most CPU and memory. This option requires expertise in both logging and querying, but it can provide insights into the performance of your applications that are not available through other monitoring tools.
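To sketch what the Option D analysis might look like: the query below aggregates a Cloud Logging export in BigQuery per container. The table name and the `jsonPayload` fields are assumptions — this only works if the application (or an agent) actually writes usage numbers into its logs, since log entries do not carry CPU and memory measurements by themselves.

```python
# Sketch: a BigQuery query over a Cloud Logging export, aggregated per
# container. The table wildcard and the jsonPayload field names are
# assumptions about what the application logs; adjust to your log schema.


def per_container_usage_sql(table: str) -> str:
    return f"""
    SELECT
      resource.labels.container_name AS container,
      AVG(CAST(jsonPayload.cpu_millicores AS FLOAT64)) AS avg_cpu_millicores,
      AVG(CAST(jsonPayload.memory_bytes AS FLOAT64)) AS avg_memory_bytes
    FROM `{table}`
    GROUP BY container
    ORDER BY avg_cpu_millicores DESC
    LIMIT 10
    """


print(per_container_usage_sql("my-project.logs_dataset.stdout_*"))
```

This indirection — instrumenting the app to log usage, exporting, then querying — is why Option D is the most labor-intensive of the four for this particular question.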

In conclusion, Option A, using Stackdriver Kubernetes Engine Monitoring, is the simplest and quickest option for identifying containers that are using the most CPU and memory in a GKE cluster. However, if you require more customization or granularity in your monitoring, you may want to consider options B, C, or D.