Minimizing Effort for Connecting a Compute Engine Instance to a GKE Application

Question

You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled.

The application exposes a TCP endpoint.

There are several replicas of this application.

You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that has no overlapping IP ranges with the first VPC.

This instance needs to connect to the application on GKE.

You want to minimize effort.

What should you do?

Answers

Explanations

A.

The correct answer is A:

  1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.
  2. Set the service's externalTrafficPolicy to Cluster.
  3. Configure the Compute Engine instance to use the address of the load balancer that has been created.
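
The Service described in steps 1 and 2 can be sketched as a manifest like the following; the name, label, and port numbers are placeholders, not details given in the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-app                    # placeholder name
spec:
  type: LoadBalancer               # tells GKE to provision a load balancer
  externalTrafficPolicy: Cluster   # any node may accept and forward traffic
  selector:
    app: tcp-app                   # must match the application Pods' labels
  ports:
    - protocol: TCP
      port: 9000                   # port exposed by the load balancer (placeholder)
      targetPort: 9000             # container port of the application (placeholder)
```

After applying the manifest with kubectl apply, the load balancer's address appears in the EXTERNAL-IP column of kubectl get service tcp-app once provisioning completes.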

Explanation:

The scenario requires connecting a Compute Engine instance in a different VPC to an application running on Google Kubernetes Engine (GKE). The lowest-effort way to achieve this is to expose the application through a Service of type LoadBalancer and point the Compute Engine instance at the load balancer's address.

Option A is the correct answer because it meets the requirements and minimizes effort. Here are the steps to implement option A:

  1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.

    • A Service of type LoadBalancer exposes the application outside the cluster, so it can be reached from networks other than the cluster's VPC, including the Compute Engine instance in gce-network.
    • The Service selects the application's Pods as backends, so incoming traffic is load balanced across all replicas of the application.
  2. Set the service's externalTrafficPolicy to Cluster.

    • With externalTrafficPolicy set to Cluster (the default), any node in the cluster accepts traffic for the service and forwards it to a healthy Pod, even when that Pod runs on a different node.
    • This fits a cluster with autoscaling enabled: nodes are added and removed over time, and the Cluster policy spreads traffic evenly across all replicas regardless of which nodes currently host Pods.
    • The alternative, Local, routes traffic only to nodes that host a matching Pod; it preserves the client source IP but requires extra attention to health checks and Pod placement, which would add effort.
  3. Configure the Compute Engine instance to use the address of the load balancer that has been created.

    • After the Service is created, GKE provisions a load balancer and publishes its IP address in the Service's status; the Compute Engine instance connects to the application through that address.
    • Because the instance reaches the application through the load balancer's address rather than through cross-VPC routing, no peering or other network configuration between the two VPCs is required.
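
Once the address is known, the connection from the Compute Engine instance is an ordinary TCP client call. Below is a minimal, runnable Python sketch; since no real load balancer exists here, a local echo server stands in for the GKE application, and the address and port in the final comment are placeholders:

```python
import socket
import threading

def send_tcp(host, port, payload, timeout=5.0):
    """Open a TCP connection, send the payload, and return one reply chunk."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(payload)
        return sock.recv(4096)

def _echo_server(server_sock):
    """Accept one connection and echo back what it receives."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(4096))

# Local stand-in for the GKE application, so the sketch runs without a cluster.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=_echo_server, args=(server,), daemon=True).start()

reply = send_tcp("127.0.0.1", port, b"ping")
print(reply)  # b'ping'
# On the Compute Engine instance, host and port would instead be the load
# balancer's address, e.g. send_tcp("<LB_EXTERNAL_IP>", 9000, b"ping").
```

The helper makes no assumption about the application protocol beyond "send bytes, read bytes", which matches the question's plain TCP endpoint.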

Option B is incorrect because it requires creating a Compute Engine instance called proxy with two network interfaces, one in each VPC. A dual-NIC proxy must be created, routed, and maintained, which adds unnecessary operational complexity compared with option A.

Option C is incorrect because it requires using iptables on the Compute Engine instance to forward traffic from gce-network to the GKE nodes. This is also unnecessarily complex and requires manual configuration.

Option D is incorrect because it requires peering the two VPCs together. Peering would work, and the non-overlapping IP ranges make it possible, but it involves configuring both networks, so it takes more effort than simply exposing a LoadBalancer Service as in option A.

Option E is incorrect because it requires adding a Cloud Armor security policy to the load balancer that allowlists the internal IPs of a managed instance group's instances. The Compute Engine instance is a standalone VM in a different VPC and is not part of a managed instance group (MIG), so such a policy would add configuration without solving the connectivity problem.