Each day, a company plans to store hundreds of files in Azure Blob Storage and Azure Data Lake Storage. The company uses the Parquet format.
You must develop a pipeline that meets the following requirements:
-> Process data every six hours
-> Offer interactive data analysis capabilities
-> Offer the ability to process data using solid-state drive (SSD) caching
-> Use Directed Acyclic Graph (DAG) processing mechanisms
-> Provide support for REST API calls to monitor processes
-> Provide native support for Python
-> Integrate with Microsoft Power BI
You need to select the appropriate data technology to implement the pipeline.
Which data technology should you implement?
A. Azure SQL Data Warehouse
B. HDInsight Apache Storm cluster
C. Azure Stream Analytics
D. HDInsight Apache Hadoop cluster using MapReduce
E. HDInsight Spark cluster

Suggested answer: B
Storm runs topologies instead of the Apache Hadoop MapReduce jobs that you might be familiar with. Storm topologies are composed of multiple components that are arranged in a directed acyclic graph (DAG). Data flows between the components in the graph. Each component consumes one or more data streams, and can optionally emit one or more streams.
Python can be used to develop Storm components.
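As a rough illustration, a Storm component could be written in Python with a library such as streamparse. The bolt below is only a minimal sketch; the class and field names are hypothetical and are not taken from the question or the referenced documentation.

```python
# Minimal sketch of a Python Storm component, assuming the streamparse library.
# The class and field names are illustrative only.
from streamparse import Bolt


class FileCountBolt(Bolt):
    """Counts incoming file paths per container (hypothetical example)."""

    def initialize(self, conf, ctx):
        # Called once when the bolt starts inside the topology.
        self.counts = {}

    def process(self, tup):
        # Each incoming tuple is assumed to carry a container name as its first field.
        container = tup.values[0]
        self.counts[container] = self.counts.get(container, 0) + 1
        # Emit the running count to the next component in the DAG.
        self.emit([container, self.counts[container]])
```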
https://docs.microsoft.com/en-us/azure/hdinsight/storm/apache-storm-overview

Based on the requirements mentioned in the question, the most appropriate data technology to implement the pipeline is E. HDInsight Spark cluster. Here's why:
Process data every six hours: Spark supports batch processing, and jobs can be scheduled to run every six hours using a cron job or another scheduling mechanism.
Offer interactive data analysis capabilities: Spark provides a fast and interactive environment for data analysis and processing, which can help meet this requirement.
Offer the ability to process data using solid-state drive (SSD) caching: Spark supports SSD caching and can leverage it to improve the performance of data processing.
Use Directed Acyclic Graph (DAG) processing mechanisms: Spark processes data using a DAG execution engine, which can help optimize the pipeline's performance and handle complex data processing workflows.
Provide support for REST API calls to monitor processes: Spark provides REST API endpoints that can be used to monitor and manage Spark applications (a short monitoring sketch follows below).
Provide native support for Python: Spark provides Python APIs (PySpark) that allow developers to write Spark applications using Python (see the PySpark sketch below).
Integrate with Microsoft Power BI: Spark has connectors that allow it to integrate with Microsoft Power BI, making it possible to use Power BI to visualize and analyze data processed by Spark.
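To illustrate the Parquet, DAG, and Python points above, here is a minimal PySpark batch-job sketch that reads Parquet files from Azure Data Lake Storage Gen2, chains a few transformations (which Spark plans as a DAG), and writes the result back. The storage account, container, and column names are placeholders rather than details from the question.

```python
# Minimal PySpark batch-job sketch; account, container, and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("six-hourly-parquet-batch").getOrCreate()

# Read the Parquet files landed in ADLS Gen2 (abfss://) or Blob Storage (wasbs://).
raw = spark.read.parquet("abfss://data@<storage-account>.dfs.core.windows.net/incoming/")

# Chained transformations: Spark builds a DAG and only executes it when the write triggers an action.
summary = (
    raw.filter(F.col("event_date").isNotNull())
       .groupBy("event_date")
       .agg(F.count("*").alias("file_rows"))
)

# Write the aggregated output back as Parquet for downstream tools such as Power BI.
summary.write.mode("overwrite").parquet(
    "abfss://data@<storage-account>.dfs.core.windows.net/curated/daily_summary/"
)

spark.stop()
```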
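For the scheduling and REST monitoring requirements, a scheduler firing every six hours (for example, a cron entry) could submit the batch job through HDInsight's Livy REST endpoint and poll its state, roughly as sketched below. The cluster name, credentials, and script path are placeholders.

```python
# Sketch of submitting and monitoring a Spark batch via HDInsight's Livy REST API.
# Cluster name, credentials, and script path are placeholders.
import time
import requests

LIVY_URL = "https://<cluster-name>.azurehdinsight.net/livy/batches"
AUTH = ("admin", "<cluster-password>")
HEADERS = {"Content-Type": "application/json", "X-Requested-By": "admin"}

# Submit the PySpark script stored in the cluster's default storage.
resp = requests.post(
    LIVY_URL,
    json={"file": "wasbs:///jobs/six_hourly_batch.py"},
    auth=AUTH,
    headers=HEADERS,
)
batch_id = resp.json()["id"]

# Poll the batch state until it reaches a terminal status.
while True:
    state = requests.get(f"{LIVY_URL}/{batch_id}", auth=AUTH, headers=HEADERS).json()["state"]
    print("batch state:", state)
    if state in ("success", "dead", "killed"):
        break
    time.sleep(30)
```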
The other options are less appropriate for this scenario:
A. Azure SQL Data Warehouse: Azure SQL Data Warehouse is a relational database that is optimized for large-scale data warehousing. While it may be able to process large amounts of data, it does not have the same level of support for interactive data analysis or DAG processing that Spark does.
B. HDInsight Apache Storm cluster: Apache Storm is a distributed real-time stream processing system, which is not well-suited for processing batch data every six hours.
C. Azure Stream Analytics: Azure Stream Analytics is a real-time stream processing service designed to analyze data as it arrives, so it is not well suited to batch processing every six hours.
D. HDInsight Apache Hadoop cluster using MapReduce: Apache Hadoop is a batch processing system that is optimized for processing large amounts of data. However, MapReduce lacks Spark's in-memory DAG execution and interactive analysis capabilities, making it a poorer fit for these requirements.
Therefore, based on the requirements mentioned in the question, HDInsight Spark cluster is the most appropriate data technology to implement the pipeline.