GKE Cluster Creation: Best Practices for Networking and Access Control

Create a GKE Cluster in an Existing VPC for On-Premises Access

Question

You need to create a GKE cluster in an existing VPC that is accessible from on-premises.

You must meet the following requirements:

- IP ranges for pods and services must be as small as possible.

- The nodes and the master must not be reachable from the internet.

- You must be able to use kubectl commands from on-premises subnets to manage the cluster.

How should you create the GKE cluster?

Answers

Explanations


A. B. C. D.

The correct answer is C.

https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips

To create a GKE cluster in an existing VPC that is accessible from on-premises, while keeping the pod and service IP ranges as small as possible, keeping the nodes and master unreachable from the internet, and allowing kubectl commands from on-premises subnets, the best approach is to create a private VPC-native GKE cluster using user-managed IP ranges.
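As a rough sketch, such a cluster could be created with the gcloud CLI. The network, subnet, secondary range names, and CIDRs below are placeholders for illustration, not values given in the question:

```shell
# Assumes an existing VPC "my-vpc" with subnet "my-subnet".
# Add user-managed secondary ranges sized as small as the workload allows
# (placeholder CIDRs shown).
gcloud compute networks subnets update my-subnet \
  --region us-central1 \
  --add-secondary-ranges pods=10.0.0.0/24,services=10.0.1.0/28

# Create a private, VPC-native cluster that uses those user-managed ranges.
gcloud container clusters create private-cluster \
  --region us-central1 \
  --network my-vpc \
  --subnetwork my-subnet \
  --enable-ip-alias \
  --cluster-secondary-range-name pods \
  --services-secondary-range-name services \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28
```

With `--enable-private-nodes`, the nodes receive only internal IP addresses, and `--enable-ip-alias` makes the cluster VPC-native so the pod and service ranges come from the named secondary ranges rather than GKE-managed defaults.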

Option A is not recommended because it sets the pod and service ranges to /24, which is larger than necessary and increases the chance of IP address conflicts.

Option B is closer to the optimal solution, but it uses GKE-managed IP ranges, which are larger and less flexible than user-managed IP ranges.

Option C is the closest to the optimal solution. It uses user-managed IP ranges, which give more flexibility and control over IP address allocation, and it enables a GKE cluster network policy, which enforces network security policies at the cluster level. To ensure that the nodes and master are not reachable from the internet, it also enables master authorized networks, which restricts access to the cluster master to specific IP ranges.
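Master authorized networks can be turned on for an existing cluster as sketched below; the cluster name and the on-premises CIDR are placeholder assumptions:

```shell
# Restrict control-plane access to the on-premises subnet only
# (192.168.0.0/24 is a placeholder for the actual on-prem range).
gcloud container clusters update private-cluster \
  --region us-central1 \
  --enable-master-authorized-networks \
  --master-authorized-networks 192.168.0.0/24
```

Only clients whose source addresses fall inside the listed CIDR blocks can reach the cluster master's endpoint.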

Option D is also a viable solution, but it relies on the privateEndpoint of the cluster master rather than master authorized networks to restrict access. The privateEndpoint is a private IP address for the cluster master that is reachable only from within the VPC network (and networks connected to it), which can be an advantage in some scenarios.
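When a cluster is created with a private endpoint only, the master's internal address can be looked up afterwards. A sketch, assuming the placeholder cluster name used above:

```shell
# Create the cluster with no public control-plane endpoint at all.
gcloud container clusters create private-cluster \
  --region us-central1 \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-ip-alias \
  --master-ipv4-cidr 172.16.0.0/28

# Retrieve the master's private IP address for use by on-premises clients.
gcloud container clusters describe private-cluster \
  --region us-central1 \
  --format "value(privateClusterConfig.privateEndpoint)"
```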

In summary, the optimal solution is to create a private VPC-native GKE cluster using user-managed IP ranges, enable a GKE cluster network policy, set the pod and service ranges to /24, enable master authorized networks, and set up a network proxy to access the master.
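Once the cluster exists, running kubectl from on-premises through a proxy might look like the sketch below; the proxy hostname and port are placeholders, and `--internal-ip` tells gcloud to target the master's private endpoint:

```shell
# Fetch credentials pointed at the master's internal (private) IP.
gcloud container clusters get-credentials private-cluster \
  --region us-central1 \
  --internal-ip

# Route kubectl traffic through a network proxy reachable from on-prem
# (proxy address is a placeholder).
HTTPS_PROXY=http://proxy.internal.example:3128 kubectl get nodes
```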