Configuring Microservices with Replicas on Google Kubernetes Engine | Professional Cloud Architect Exam Guide

Implementing Microservice Configuration with Replicas on Google Kubernetes Engine

Question

You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine.

What should you do?

Answers


The correct answer is A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.

Explanation:

Google Kubernetes Engine (GKE) is a managed Kubernetes service for deploying, managing, and scaling containerized applications on Google Cloud. Kubernetes is well suited to running microservices as containers, and GKE removes most of the work of setting up and operating the cluster itself. To address a specific microservice from any other microservice in a uniform way, regardless of how many replicas it scales to, deploy each microservice as a Deployment and expose it inside the cluster with a Service.

A Deployment in Kubernetes manages a set of replica Pods that run the same application. In the Deployment you specify the number of replicas you want and a Pod template that describes each replica. Kubernetes creates that many Pods, keeps them at the desired count, and replaces any that fail or are evicted, so each microservice can be configured with its own replica count.
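As a minimal sketch (the microservice name orders, the app: orders label, and the image path are illustrative placeholders, not values from the question), a Deployment that runs three replicas might look like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders                # hypothetical microservice name
    spec:
      replicas: 3                 # replica count configured per microservice
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders           # Pods carry this label so a Service can select them
        spec:
          containers:
          - name: orders
            image: gcr.io/my-project/orders:1.0   # illustrative image path
            ports:
            - containerPort: 8080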

A Service in Kubernetes exposes a set of replica Pods behind a single, stable endpoint. In the Service you specify the port and protocol to expose and a label selector that identifies the Pods to include. For the default ClusterIP type, Kubernetes assigns the Service a stable virtual IP address and an internal DNS name, tracks the healthy Pods that match the selector, and load-balances connections to that IP across them.
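A matching Service of the default ClusterIP type keeps the microservice reachable only from inside the cluster; the following sketch reuses the hypothetical orders names and ports from the Deployment above:

    apiVersion: v1
    kind: Service
    metadata:
      name: orders                # becomes the DNS name other microservices use
    spec:
      type: ClusterIP             # default; reachable only within the cluster
      selector:
        app: orders               # matches the Pod labels set by the Deployment
      ports:
      - port: 80                  # port the Service exposes
        targetPort: 8080          # containerPort on the replica Pods
        protocol: TCP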

When you deploy a microservice as a Deployment and expose it with a ClusterIP Service, other microservices in the cluster can address it by the Service DNS name, such as <service-name>.<namespace>.svc.cluster.local, or simply <service-name> from within the same namespace. The DNS name resolves to the Service's stable cluster IP, and Kubernetes distributes traffic across whatever replica Pods currently back the Service. You can therefore scale the number of replicas up or down, and other microservices continue to reach the microservice through the same, unchanged name.
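To illustrate, a consuming microservice can be given the Service DNS name through an environment variable in its own Deployment. Everything below (the checkout name, its image, and the ORDERS_URL variable) is a hypothetical sketch that assumes both microservices run in the default namespace:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: checkout                      # hypothetical client microservice
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: checkout
      template:
        metadata:
          labels:
            app: checkout
        spec:
          containers:
          - name: checkout
            image: gcr.io/my-project/checkout:1.0   # illustrative image path
            env:
            - name: ORDERS_URL
              # Short name resolves within the same namespace; the fully
              # qualified form is orders.default.svc.cluster.local
              value: "http://orders"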

Option B is incorrect because an Ingress exposes HTTP(S) services to clients outside the cluster; on GKE it typically provisions an external HTTP(S) load balancer. The requirement is that the microservices remain internal to the cluster, and an Ingress is not the recommended way for one microservice to address another.

Option C is incorrect because deploying each microservice as a bare Pod, without a Deployment, leaves you to manage the Pod lifecycle yourself: there is no replica count to configure, no self-healing when a Pod fails, and no controlled rollout of new application versions. This does not meet the requirement to run a specific number of replicas per microservice and is not practical at scale.

Option D is incorrect for a similar reason: an Ingress is a Kubernetes resource for routing external HTTP(S) traffic into the cluster, typically in front of a load balancer or reverse proxy that directs requests to different Services based on the requested host or URL path. It is not the mechanism for addressing one microservice from another inside the cluster; the internal Service DNS name is.