Stream Security Master Data in Near-Real Time | AWS Certified Machine Learning Exam MLS-C01

Question

You are a machine learning specialist working for a financial services firm.

Your machine learning team is responsible for producing security master data for your quantitative analysis group.

Your quant group uses your security master data to feed their quantitative analysis machine learning models to drive their stock selection for their active management portfolios.

You need to stream security master data from various sources into your security master data store in near-real time.

Which solutions meet your requirements in the most efficient manner? (Select TWO)

Answers

A. Stream your security master data using Kafka; extract the data into your online feature store using the PutRecord API call in small batch sets.

B. Stream your security master data using Spark Streaming; extract the data into your online feature store using the PutRecords API call in large batch sets.

C. Stream your security master data using Kinesis; extract the data into your online feature store using the PutRecord API call in small batch sets.

D. Stream your security master data using Apache Storm; extract the data into your online feature store using the PutRecords API call in small batch sets.

E. Stream your security master data using Kinesis; extract the data into your online feature store using the PutRecords API call in large batch sets.

Correct Answers: A and C.

Explanations

Option A is correct.

You can use SageMaker Feature Store to house your security master data.

You can ingest small batches of data into SageMaker Feature Store with the PutRecord API call.

Kafka is a common service used to stream data into SageMaker Feature Store.
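As a concrete illustration, here is a minimal sketch of a PutRecord call through boto3. The feature group name and the record fields are hypothetical, and the feature group is assumed to already exist with a matching schema:

```python
import boto3

# Client for the SageMaker Feature Store Runtime, which exposes PutRecord.
featurestore = boto3.client("sagemaker-featurestore-runtime")

# A hypothetical small batch of security master records.
batch = [
    {"record_id": "1", "ticker": "AMZN", "event_time": "2023-01-01T00:00:00Z"},
    {"record_id": "2", "ticker": "GOOG", "event_time": "2023-01-01T00:00:00Z"},
]

# PutRecord writes one record at a time, so a small batch is ingested
# as a short loop of single-record calls.
for record in batch:
    featurestore.put_record(
        FeatureGroupName="security-master-features",  # hypothetical name
        Record=[
            {"FeatureName": name, "ValueAsString": str(value)}
            for name, value in record.items()
        ],
    )
```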

Option B is incorrect.

The API call that puts your data in small batches into SageMaker Feature Store is PutRecord, not PutRecords (PutRecords is a Kinesis Data Streams API).

Also, to meet your near-real-time requirement, you should use small batches of data, not large ones.

Option C is correct.

You can use SageMaker Feature Store to house your security master data.

You can ingest small batches of data into SageMaker Feature Store with the PutRecord API call.

Kinesis is a common service used to stream data into SageMaker Feature Store.

Option D is incorrect.

The API call that puts your data in small batches into SageMaker Feature Store is PutRecord, not PutRecords (PutRecords is a Kinesis Data Streams API).

Option E is incorrect.

The API call that puts your data in small batches into SageMaker Feature Store is PutRecord, not PutRecords (PutRecords is a Kinesis Data Streams API).

Also, to meet your near-real-time requirement, you should use small batches of data, not large ones.

Reference:

Please see the Amazon SageMaker developer guide titled Data Sources and Ingestion (https://docs.amazonaws.cn/en_us/sagemaker/latest/dg/feature-store-ingest-data.html) and the Amazon SageMaker developer guide titled Get started with Amazon SageMaker Feature Store (https://docs.amazonaws.cn/en_us/sagemaker/latest/dg/feature-store-getting-started.html).

The question is asking for efficient solutions to stream security master data from various sources into a security master data store in near-real time. We are given a choice of five solutions and we need to select two of them. Let's analyze each of the solutions:

A. Stream your security master data using Kafka; extract the data into your online feature store using the PutRecord API call in small batch sets.

Kafka is a distributed streaming platform that lets you publish and subscribe to streams of records, and it is a popular choice for real-time data streaming thanks to its high throughput and low latency. In this context, PutRecord is the SageMaker Feature Store Runtime operation that writes a single record into a feature group. This solution streams the data with Kafka and ingests it into the online feature store in small batch sets, one PutRecord call per record. It is efficient because Kafka is a high-throughput, low-latency streaming platform and small batches keep end-to-end ingestion latency low.
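A hedged sketch of the consumer side of this pattern, assuming a hypothetical topic, broker address, and feature group, and using the third-party kafka-python package:

```python
import json

import boto3
from kafka import KafkaConsumer  # third-party kafka-python package

featurestore = boto3.client("sagemaker-featurestore-runtime")

# Subscribe to a hypothetical Kafka topic carrying security master updates.
consumer = KafkaConsumer(
    "security-master",                  # hypothetical topic
    bootstrap_servers=["broker:9092"],  # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Each message is forwarded to the online feature store as soon as it
# arrives, one PutRecord call per record.
for message in consumer:
    featurestore.put_record(
        FeatureGroupName="security-master-features",  # hypothetical name
        Record=[
            {"FeatureName": k, "ValueAsString": str(v)}
            for k, v in message.value.items()
        ],
    )
```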

B. Stream your security master data using Spark Streaming; extract the data into your online feature store using the PutRecords API call in large batch sets.

Spark Streaming is an extension of the core Apache Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. However, PutRecords is a Kinesis Data Streams API that writes multiple records into a stream in a single call; SageMaker Feature Store exposes only PutRecord, not PutRecords. This option is therefore incorrect on the API name, and its large batch sets would further delay ingestion, working against the near-real-time requirement.
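To make the distinction concrete, the sketch below shows where PutRecords actually lives: it is a Kinesis Data Streams producer call that batches multiple records into one request. The stream name and payloads are hypothetical; the Feature Store runtime has no equivalent batch operation:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

records = [
    {"ticker": "AMZN", "cusip": "023135106"},
    {"ticker": "GOOG", "cusip": "38259P508"},
]

# PutRecords batches several records into a single Kinesis write; the
# SageMaker Feature Store runtime offers no such batch operation.
kinesis.put_records(
    StreamName="security-master-stream",  # hypothetical stream
    Records=[
        {"Data": json.dumps(r).encode("utf-8"), "PartitionKey": r["ticker"]}
        for r in records
    ],
)
```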

C. Stream your security master data using Kinesis; extract the data into your online feature store using the PutRecord API call in small batch sets.

Amazon Kinesis is a platform for streaming data on AWS. It can handle large amounts of data in real time, making it an ideal choice for this use case. This solution streams the data with Kinesis and ingests it into the online feature store in small batch sets via PutRecord. It is efficient because Kinesis is designed for real-time data streaming and small batches keep ingestion latency low.
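One common way to connect the two services is a Lambda function triggered by the Kinesis stream; the sketch below assumes that architecture, with hypothetical names throughout:

```python
import base64
import json

import boto3

featurestore = boto3.client("sagemaker-featurestore-runtime")

def handler(event, context):
    """Lambda handler invoked with a batch of Kinesis stream records."""
    for stream_record in event["Records"]:
        # Kinesis delivers record payloads to Lambda base64-encoded.
        payload = json.loads(base64.b64decode(stream_record["kinesis"]["data"]))
        featurestore.put_record(
            FeatureGroupName="security-master-features",  # hypothetical name
            Record=[
                {"FeatureName": k, "ValueAsString": str(v)}
                for k, v in payload.items()
            ],
        )
```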

D. Stream your security master data using Apache Storm; extract the data into your online feature store using the PutRecords API call in small batch sets.

Apache Storm is a distributed real-time computation system for processing fast, large streams of data, so it could stream the data with low latency. However, this option ingests the data with the PutRecords API call, which is a Kinesis Data Streams operation; SageMaker Feature Store supports only PutRecord. The option is therefore incorrect despite its small batch sets, which are otherwise the right choice for near-real-time ingestion.

E. Stream your security master data using Kinesis; extract the data into your online feature store using the PutRecords API call in large batch sets.

This solution streams the data with Kinesis but ingests it into the online feature store using the PutRecords API call in large batch sets. It is incorrect on two counts: SageMaker Feature Store has no PutRecords operation, and large batches delay ingestion, which works against the near-real-time requirement.

In conclusion, the most efficient solutions for streaming security master data from various sources into a security master data store in near-real time are A and C. Solution A streams the data with Kafka and Solution C streams it with Kinesis; both then ingest small batch sets into the online feature store via the PutRecord API call. Both leverage the high-throughput, low-latency streaming capabilities of their respective platforms, making them ideal for this use case.