HRL Helicopter Racing League

Migrating to a New Platform for AI-Driven Race Predictions

Question

Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing.

Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot in the world championship.

HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept - HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions.

Additionally, as new fans engage with the sport, particularly in emerging regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment - HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider.

Video recording and editing is performed at the race tracks, and the content is encoded and transcoded, where needed, in the cloud.

Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers.

Their race prediction services are hosted exclusively on their existing public cloud provider.

Their existing technical environment is as follows:
- Existing content is stored in an object storage service on their existing public cloud provider.
- Video encoding and transcoding is performed on VMs created for each job.
- Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements - HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets.

Their requirements are:
- Support ability to expose the predictive models to partners.
- Increase predictive capabilities during and before races for:
  - Race results
  - Mechanical failures
  - Crowd sentiment
- Increase telemetry and create additional insights.
- Measure fan engagement with new predictions.
- Enhance global availability and quality of the broadcasts.
- Increase the number of concurrent viewers.
- Minimize operational complexity.
- Ensure compliance with regulations.
- Create a merchandising revenue stream.

Technical requirements:
- Maintain or increase prediction throughput and accuracy.
- Reduce viewer latency.
- Increase transcoding performance.
- Create real-time analytics of viewer consumption patterns and engagement.
- Create a data mart to enable processing of large volumes of race data.

The security team wants to run Airwolf, a security scanning tool, against the predictive capability application as soon as it is released every Tuesday.

You need to set up Airwolf to run at the recurring weekly cadence.

What should you do?

Answers

Explanations


A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

C.

The task is to set up Airwolf, a security tool, to run against HRL's predictive capability application every Tuesday, as soon as the application is released. Note the phrasing: because the release itself happens weekly, the job does not need an independent schedule; it needs to be triggered by the release event.

Option A suggests setting up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function. Cloud Tasks is a managed service for dispatching and managing the execution of distributed tasks, and a Cloud Storage bucket can trigger a Cloud Function when an object is created, updated, or deleted. However, nothing in this option ties the trigger to the Tuesday release of the application: Cloud Tasks is not a recurring scheduler (that role belongs to Cloud Scheduler), and no component here would create an object in the bucket when a release happens. This option is unlikely to be the correct choice.

Option B proposes setting up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function. Cloud Logging collects, stores, and analyzes log data from applications and services, and a sink can export matching log entries to a Cloud Storage bucket, whose object-creation events can in turn trigger a Cloud Function. In principle that function could run Airwolf, but the chain is indirect and fragile: sinks to Cloud Storage export logs in hourly batches, so the scan would not start "as soon as" the application is released, and the whole setup depends on the release reliably emitting a distinctive log entry. This option is unlikely to be the correct choice.

Option C suggests configuring the deployment job to notify a Pub/Sub queue that triggers a Cloud Function. Pub/Sub is a messaging service for exchanging events between applications and services. Because the predictive capability application is released every Tuesday, a message published by the deployment job at release time inherits exactly the weekly cadence the security team wants, and the Cloud Function can run Airwolf against the application the moment the message arrives. No separate scheduler is needed: the trigger is the release itself. This makes Option C the most direct fit for the requirement.
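The wiring for Option C can be sketched as follows. The payload fields and the Airwolf invocation are assumptions for illustration, since the question does not specify Airwolf's interface; the function uses the first-generation Pub/Sub background-function signature, where the message body arrives base64-encoded in `event["data"]`.

```python
import base64
import json

def trigger_airwolf(event, context):
    """Pub/Sub-triggered Cloud Function (1st-gen background signature).

    The deployment job publishes a message when the predictive application
    is released each Tuesday; this function fires on that message, so the
    weekly cadence comes from the release itself, not from a scheduler.
    """
    # Pub/Sub delivers the message body base64-encoded.
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    target = payload.get("release_target", "predictive-app")
    # Hypothetical step: the question doesn't define Airwolf's API, so this
    # return value stands in for however a scan is actually started.
    return f"airwolf scan requested for {target}"
```

Deploying this function with a Pub/Sub trigger on the topic the deployment job publishes to completes the chain.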

Option D proposes setting up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function. IAM governs access control to Google Cloud resources, and Confidential Computing encrypts data in use inside hardware-based trusted execution environments. Neither is an eventing or scheduling mechanism, so neither can trigger a Cloud Function on a release. This option is not a viable choice.

In conclusion, Option C is the correct choice. The requirement to run Airwolf "as soon as it is released every Tuesday" describes an event-driven trigger rather than a clock-driven schedule: the weekly cadence is supplied by the weekly release. Having the deployment job publish to a Pub/Sub topic that triggers a Cloud Function runs Airwolf immediately after each release, with no additional scheduling infrastructure to operate.
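On the publishing side, the deployment job's final step would send the release notification. A minimal sketch, assuming a hypothetical topic name and payload shape (neither is given in the question):

```python
import json

def build_release_message(app: str, version: str) -> bytes:
    """Build the Pub/Sub payload the deployment job publishes on release.

    The field names here are illustrative assumptions, not part of the
    exam question; the subscribing Cloud Function just needs to agree
    on the same shape.
    """
    return json.dumps({"release_target": app, "version": version}).encode("utf-8")

# In the real deployment job (with google-cloud-pubsub installed), the
# publish step would look roughly like:
#   from google.cloud import pubsub_v1
#   publisher = pubsub_v1.PublisherClient()
#   topic = publisher.topic_path("hrl-project", "predictive-app-releases")
#   publisher.publish(topic, build_release_message("predictive-app", "2024.07.1")).result()
```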