Migrating Third-Party Applications to Google Cloud: CPU and Memory Optimization


Question

You are migrating third-party applications from optimized on-premises virtual machines to Google Cloud.

You are unsure about the optimum CPU and memory options.

The applications have a consistent usage pattern across multiple weeks.

You want to optimize resource usage for the lowest cost.

What should you do?

Answers

Explanations



Correct answer: A.

https://avinetworks.com/docs/18.2/server-autoscaling-in-gcp/

To optimize resource usage for the lowest cost when migrating third-party applications from optimized on-premises virtual machines to Google Cloud, we should choose the option that matches the applications' consistent usage pattern while provisioning the fewest resources.

Option A: Create an instance template with the smallest available machine type, and use an image of the third-party application taken from a current on-premises virtual machine. Create a managed instance group that uses average CPU utilization to autoscale the number of instances in the group. Modify the average CPU utilization threshold to optimize the number of instances running.

By starting from the smallest available machine type and an image taken from the current on-premises virtual machine, each instance carries the minimum fixed cost. A managed instance group then scales the number of instances up or down based on average CPU utilization, so total capacity tracks the applications' consistent usage pattern, and the utilization threshold can be tuned to run the fewest instances that sustain the load.
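Assuming an already-imported image named `app-image`, a project `my-project`, and zone `us-central1-a` (all placeholder values), the setup described in option A might look like this with the `gcloud` CLI:

```shell
# Instance template using the smallest machine type and the image
# imported from the on-premises virtual machine.
gcloud compute instance-templates create app-template \
    --machine-type=e2-micro \
    --image=app-image \
    --image-project=my-project

# Managed instance group built from the template.
gcloud compute instance-groups managed create app-group \
    --template=app-template \
    --size=1 \
    --zone=us-central1-a

# Autoscale on average CPU utilization (60% target shown here).
gcloud compute instance-groups managed set-autoscaling app-group \
    --zone=us-central1-a \
    --min-num-replicas=1 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.60
```

The target utilization and replica bounds are starting points; the question's scenario calls for adjusting the CPU threshold afterwards based on observed usage.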

Option B: Create an App Engine flexible environment, and deploy the third-party application using a Dockerfile and a custom runtime. Set CPU and memory options similar to your application's current on-premises virtual machine in the app.yaml file.

This option deploys the third-party application to an App Engine flexible environment using a Dockerfile and a custom runtime, with CPU and memory set in the app.yaml file to match the current on-premises virtual machine. Because the resources are fixed to mirror the on-premises sizing, this reproduces the existing footprint rather than optimizing it for cost.
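For reference, the resource settings option B describes would live in the app.yaml of the flexible environment; the values below are illustrative, standing in for the on-premises VM's sizing:

```yaml
runtime: custom
env: flex

# Resource settings mirroring the on-premises VM (illustrative values).
resources:
  cpu: 4
  memory_gb: 16
  disk_size_gb: 50
```

Note that these values are fixed per instance, which is why this option does not by itself reduce cost below the on-premises baseline.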

Option C: Create multiple Compute Engine instances with varying CPU and memory options. Install the Cloud Monitoring agent, and deploy the third-party application on each of them. Run a load test with high traffic levels on the application, and use the results to determine the optimal settings.

This option creates multiple Compute Engine instances with varying CPU and memory options, installs the Cloud Monitoring agent on each, deploys the application, and runs a high-traffic load test to determine the best settings. Running several candidate instances in parallel adds cost, and a high-traffic test does not reflect the applications' consistent, normal usage pattern.
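A sketch of the comparison setup in option C, reusing the placeholder image and zone from above and the documented Ops Agent install script:

```shell
# Deploy the same application image onto several machine types
# (placeholder image, project, and zone values).
for type in e2-standard-2 e2-standard-4 e2-highmem-4; do
  gcloud compute instances create "loadtest-${type}" \
      --machine-type="${type}" \
      --image=app-image \
      --image-project=my-project \
      --zone=us-central1-a
done

# On each instance, install the Ops Agent to collect CPU and
# memory metrics during the load test.
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
```

Each extra candidate instance bills for the duration of the test, which is part of why this approach costs more than autoscaling a single small template.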

Option D: Create a Compute Engine instance with CPU and memory options similar to your application's current on-premises virtual machine. Install the Cloud Monitoring agent and deploy the third-party application. Run a load test with normal traffic levels on the application and follow the Rightsizing Recommendations in the Cloud Console.

This option creates a single Compute Engine instance with CPU and memory options similar to the current on-premises virtual machine, installs the Cloud Monitoring agent, deploys the application, runs a load test at normal traffic levels, and then follows the rightsizing recommendations in the Cloud Console. This can trim an individual instance over time, but it starts from the on-premises sizing and relies on manual resizing rather than automatically matching capacity to load.
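The baseline instance in option D could be created with a custom machine type sized like the on-premises VM; the 8 vCPU / 32 GB figures below are assumed for illustration:

```shell
# Single instance sized like the on-premises VM (illustrative sizing),
# ready for a normal-traffic load test while the Monitoring agent
# collects utilization data.
gcloud compute instances create app-baseline \
    --custom-cpu=8 \
    --custom-memory=32GB \
    --image=app-image \
    --image-project=my-project \
    --zone=us-central1-a
```

After the instance has accumulated usage metrics, machine-type (rightsizing) recommendations appear alongside the VM in the Cloud Console.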

Among the given options, option A is the most appropriate. Starting from the smallest available machine type with an image of the current on-premises virtual machine minimizes the per-instance cost, and a managed instance group that autoscales on average CPU utilization keeps total capacity matched to the applications' consistent usage pattern. Tuning the average CPU utilization threshold then determines the optimal number of instances to run.
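The threshold tuning step can be performed by re-running the autoscaling command with a new target (group name and zone are placeholders from the earlier sketch); a higher target packs the load onto fewer instances at lower cost, at the price of less headroom:

```shell
# Raise the CPU-utilization target so fewer instances handle the
# same steady load (0.75 is an illustrative value to iterate on).
gcloud compute instance-groups managed set-autoscaling app-group \
    --zone=us-central1-a \
    --min-num-replicas=1 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.75
```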