MassOrigins is an online social media and social networking service company.
As of June 2018, it had an estimated 2 million users.
MassOrigins for Business, the company's advertising portal, has an estimated 2,000 clients. MassOrigins hosts its entire infrastructure on AWS and uses AWS Data Pipeline as its data integration mechanism.
Numerous scheduled pipelines have been created to integrate with data repositories such as DynamoDB, S3, RDS, and Redshift.
What are the key items of a scheduled pipeline? (Select 4 options.)
A. Pipeline components, which represent the business logic of the pipeline
B. Instances, compiled from the pipeline components when the pipeline runs
C. Task runners, compiled from the pipeline components when the pipeline runs
D. Attempts, which track retries of failed operations
E. Failed operations that are never retried
F. Task runners handed out to instances for processing
G. Instances handed out to task runners for processing

Answer: A, B, D, G.
Option A is correct - Pipeline components represent the business logic of the pipeline and are represented by the different sections of a pipeline definition.
Pipeline components specify the data sources, activities, schedule, and preconditions of the workflow.
They can inherit properties from parent components.
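To make this concrete, the sketch below shows what pipeline components can look like when expressed for the boto3 put_pipeline_definition call. The object names, schedule, region, and S3 path are hypothetical, chosen only for illustration:

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")  # region is an assumption

# Each dict is one pipeline component, i.e. one section of the
# pipeline definition. All names and values here are hypothetical.
pipeline_objects = [
    {   # Default: shared properties that other components inherit.
        "id": "Default",
        "name": "Default",
        "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "DailySchedule"},
        ],
    },
    {   # Schedule component: when the workflow runs.
        "id": "DailySchedule",
        "name": "DailySchedule",
        "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 day"},
            {"key": "startDateTime", "stringValue": "2018-06-01T00:00:00"},
        ],
    },
    {   # Data node component: an input data source on S3.
        "id": "InputData",
        "name": "InputData",
        "fields": [
            {"key": "type", "stringValue": "S3DataNode"},
            {"key": "directoryPath", "stringValue": "s3://example-bucket/input"},
        ],
    },
    {   # Activity component: the work to perform against the input.
        "id": "DailyCopy",
        "name": "DailyCopy",
        "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "command", "stringValue": "echo placeholder for the real copy step"},
            {"key": "input", "refValue": "InputData"},
        ],
    },
]
```

The components above would pick up the schedule settings from the Default object, which is how parent-to-child property inheritance shows up in practice.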
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

Option B is correct - When AWS Data Pipeline runs a pipeline, it compiles the pipeline components to create a set of actionable instances.
Each instance contains all the information for performing a specific task.
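As a rough illustration of inspecting those instances, the following sketch uses the boto3 query_objects and describe_objects calls; the pipeline ID is hypothetical:

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")
pipeline_id = "df-EXAMPLE"  # hypothetical pipeline ID

# List the actionable instances that were compiled from the components.
resp = dp.query_objects(pipelineId=pipeline_id, sphere="INSTANCE")

# Each instance object carries the information needed for its task.
if resp["ids"]:
    details = dp.describe_objects(pipelineId=pipeline_id, objectIds=resp["ids"])
    for obj in details["pipelineObjects"]:
        fields = {f["key"]: f.get("stringValue") for f in obj["fields"]}
        print(obj["name"], fields.get("@status"))
```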
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

Option C is incorrect - Compiling the pipeline components produces instances, not task runners. Task runners are separate applications that poll AWS Data Pipeline for those instances and process them.
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

Option D is correct - To provide robust data management, AWS Data Pipeline retries a failed operation.
It continues to do so until the task reaches the maximum number of allowed retry attempts.
Attempt objects track the various attempts, results, and failure reasons if applicable.
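A sketch of how retries surface in practice: the maximumRetries field caps retry attempts on an activity, and the resulting attempt objects can be queried like any other pipeline object. The pipeline ID is hypothetical, and the attempt field names are taken from the AWS documentation:

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")
pipeline_id = "df-EXAMPLE"  # hypothetical

# In a definition, an activity caps its retries with a field like this:
retry_cap = {"key": "maximumRetries", "stringValue": "3"}

# Attempt objects record each try, its result, and any failure reason.
resp = dp.query_objects(pipelineId=pipeline_id, sphere="ATTEMPT")
if resp["ids"]:
    details = dp.describe_objects(pipelineId=pipeline_id, objectIds=resp["ids"])
    for attempt in details["pipelineObjects"]:
        fields = {f["key"]: f.get("stringValue") for f in attempt["fields"]}
        print(attempt["name"], fields.get("@status"), fields.get("errorMessage"))
```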
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

Option E is incorrect - AWS Data Pipeline does retry failed operations, continuing until the task reaches the maximum number of allowed retry attempts, so a claim that it does not retry is false.
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

Option F is incorrect - It reverses the relationship: AWS Data Pipeline hands the instances out to task runners to process, not task runners out to instances.
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

Option G is correct - AWS Data Pipeline hands the instances out to task runners to process.
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

AWS Data Pipeline is a cloud-based service offered by Amazon Web Services (AWS) to enable users to reliably and efficiently process and transfer data between different AWS services and on-premises data sources. Scheduled pipelines are a key component of AWS Data Pipeline and are used to automate the execution of data integration tasks.
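For context, a minimal end-to-end lifecycle with boto3 might look like the sketch below; the pipeline name and uniqueId are hypothetical, and pipeline_objects stands in for a full component definition like the earlier one:

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

# Placeholder definition; see the earlier component sketch for a fuller one.
pipeline_objects = [
    {"id": "Default", "name": "Default",
     "fields": [{"key": "scheduleType", "stringValue": "ondemand"}]},
]

# Create the pipeline shell; uniqueId makes creation idempotent.
created = dp.create_pipeline(name="massorigins-daily-sync",
                             uniqueId="massorigins-daily-sync-v1")
pipeline_id = created["pipelineId"]

# Upload the components and surface validation errors before activating.
result = dp.put_pipeline_definition(pipelineId=pipeline_id,
                                    pipelineObjects=pipeline_objects)
if result["errored"]:
    raise RuntimeError(result["validationErrors"])

# Activation triggers compilation of components into runnable instances.
dp.activate_pipeline(pipelineId=pipeline_id)
```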
The following are the key items of a scheduled pipeline:
A. Pipeline components represent the business logic of the pipeline and are represented by the different sections of a pipeline definition. These components define the tasks and resources required for the pipeline to execute. They include input and output data sources, data processing activities, and data destinations.
B. When AWS Data Pipeline runs a pipeline, it compiles the pipeline components to create a set of actionable instances. These instances are the actual tasks that perform the work of the pipeline. They are created dynamically by the pipeline service and are managed by the service throughout their lifetime.
C. When AWS Data Pipeline runs a pipeline, it compiles the pipeline components to create a set of task runners. This statement is false: compilation produces instances, not task runners. Task runners are applications that poll AWS Data Pipeline for tasks and perform them; the service manages the distribution of tasks to these runners.
D. AWS Data Pipeline retries a failed operation. It continues to do so until the task reaches the maximum number of allowed retry attempts. This feature ensures that data processing tasks are performed reliably and efficiently, even in the face of transient errors.
E. AWS Data Pipeline does not retry a failed operation. This statement is false. As mentioned in point D, AWS Data Pipeline retries failed operations up to a specified maximum number of times.
F. AWS Data Pipeline hands the task runners out to instances to process. This statement is false because it reverses the relationship: the service hands instances out to task runners, which poll for work and process it.
G. AWS Data Pipeline hands the instances out to task runners to process. This statement is true: task runners poll AWS Data Pipeline for tasks, receive instances, and perform the work they describe.
In summary, the key items of a scheduled pipeline in AWS Data Pipeline are pipeline components, actionable instances, attempts (which track retries of failed operations), and the task runners that process the instances.
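To round this out, here is a bare-bones sketch of the task runner side, in the spirit of what the Task Runner application does: poll for a task, do the work, and report status. The worker group name is hypothetical and would have to match the workerGroup set on the pipeline's activities:

```python
import socket

import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

while True:
    # Long-poll the service for an instance that is ready to run.
    resp = dp.poll_for_task(workerGroup="massorigins-workers",
                            hostname=socket.gethostname())
    task = resp.get("taskObject")
    if not task:
        continue  # no work this round; poll again
    try:
        # ... perform the work described by the task's pipeline objects ...
        dp.report_task_progress(taskId=task["taskId"])  # heartbeat to the service
        dp.set_task_status(taskId=task["taskId"], taskStatus="FINISHED")
    except Exception as exc:
        dp.set_task_status(taskId=task["taskId"], taskStatus="FAILED",
                           errorMessage=str(exc))
```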