Process Azure Blob Storage Transaction Logs Asynchronously | Exam AZ-204 Solution


Question

You are developing an application that uses Azure Blob storage.

The application must read the transaction logs of all the changes that occur to the blobs and the blob metadata in the storage account for auditing purposes.

The changes must be in the order in which they occurred, include only create, update, delete, and copy operations and be retained for compliance reasons.

You need to process the transaction logs asynchronously.

What should you do?

Answers

B.

Explanations

Change feed support in Azure Blob Storage

The purpose of the change feed is to provide transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account.

The change feed provides an ordered, guaranteed, durable, immutable, read-only log of these changes.

Client applications can read these logs at any time, either in streaming or in batch mode.

The change feed enables you to build efficient and scalable solutions that process change events that occur in your Blob Storage account at a low cost.

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-change-feed
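Change feed records are ordered event records, each carrying an event type. As a minimal illustration of narrowing a batch down to the operations the scenario must audit while preserving order, consider the sketch below. The event names and record shapes here are simplified assumptions for illustration, not the exact change feed schema:

```python
# Sketch: keep only create, update, delete, and copy operations from a
# batch of change feed records, preserving their original order.
# Event type names and record shapes are illustrative assumptions.

AUDITED_EVENT_TYPES = {"BlobCreated", "BlobUpdated", "BlobDeleted", "BlobCopied"}

def filter_audit_events(records):
    """Return only records whose event type must be audited,
    in the same order they appear in the change feed."""
    return [r for r in records if r["eventType"] in AUDITED_EVENT_TYPES]

if __name__ == "__main__":
    batch = [
        {"eventType": "BlobCreated", "subject": "/container/a.txt"},
        {"eventType": "BlobTierChanged", "subject": "/container/a.txt"},
        {"eventType": "BlobDeleted", "subject": "/container/a.txt"},
    ]
    # Order is preserved; only audited operation types remain.
    print([r["eventType"] for r in filter_audit_events(batch)])
```

Because the change feed log is already ordered and immutable, a filter like this is all the client-side selection logic needs; no re-sorting is required.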

To read transaction logs of changes that occur to blobs and blob metadata in Azure Blob storage, you can use Azure Storage Analytics logs, the change feed, or Azure Event Grid. For this scenario, however, the requirement is to process the transaction logs asynchronously, in order of occurrence, and only for specific event types.

Option A: Process all Azure Blob storage events by using Azure Event Grid with a subscriber Azure Function app. Azure Event Grid is a managed event-routing service that can be used to create and manage event subscriptions, including notifications of events that occur in Azure Blob storage. However, processing all events produces a large number of notifications, and you would need to filter out the ones you are interested in. More importantly, Event Grid does not guarantee that events are delivered in the order in which they occurred, which conflicts with the auditing requirement.
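To make the ordering problem concrete: an Event Grid subscriber would have to buffer delivered events and re-sort them itself, for example by each event's time stamp. The sketch below uses simplified, assumed event shapes; even this workaround cannot guarantee correct ordering once a delivery is delayed beyond the buffering window:

```python
# Sketch of the extra work an Event Grid subscriber would need: buffer
# delivered events and re-sort them by their eventTime stamp. Event
# shapes are simplified assumptions; this still cannot guarantee true
# ordering if a delivery arrives after the buffer has been flushed.

def reorder_by_event_time(buffered_events):
    """Best-effort reordering of buffered events by their time stamp."""
    return sorted(buffered_events, key=lambda e: e["eventTime"])

if __name__ == "__main__":
    delivered = [
        {"eventTime": "2023-01-01T10:00:05Z", "eventType": "BlobDeleted"},
        {"eventTime": "2023-01-01T10:00:01Z", "eventType": "BlobCreated"},
    ]
    for e in reorder_by_event_time(delivered):
        print(e["eventTime"], e["eventType"])
```

The change feed avoids this problem entirely, because its log is ordered at the source.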

Option B: Enable the change feed on the storage account and process all changes for available events. Azure Blob storage provides a change feed that captures all the changes that occur to a blob or its metadata. The change feed provides a reliable way to process events asynchronously, in the order in which they occurred, and only for create, update, delete, and copy operations. The change feed delivers events in batches of up to 5,000 events at a time, and each batch contains events that have occurred in a contiguous time window. This option meets the requirements of the scenario and is the recommended solution.
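The client-side reading loop typically follows a continuation-token pattern: read a page of events, process them in order, and persist a cursor so an asynchronous worker can stop and resume where it left off. The sketch below simulates that paging in memory; the assumption here is that a real client (for example, the `azure-storage-blob-changefeed` Python package's `ChangeFeedClient`) exposes a similar cursor mechanism:

```python
# In-memory simulation of cursor-based change feed processing: events
# are consumed in order, one page at a time, and a cursor records
# progress so an asynchronous worker can stop and resume. A real change
# feed client is assumed to follow the same continuation-token pattern.

def read_page(events, cursor, page_size):
    """Return one page of events starting at `cursor`, plus the new cursor."""
    page = events[cursor:cursor + page_size]
    return page, cursor + len(page)

def process_all(events, page_size=2):
    cursor = 0          # in production, load the persisted cursor instead
    processed = []
    while True:
        page, cursor = read_page(events, cursor, page_size)
        if not page:
            break
        processed.extend(page)   # handle each event in order
        # in production, persist `cursor` here so a restart resumes correctly
    return processed, cursor

if __name__ == "__main__":
    log = ["create a.txt", "update a.txt", "copy a.txt b.txt", "delete a.txt"]
    done, cursor = process_all(log)
    print(done, cursor)
```

Persisting the cursor after each page is what makes the processing genuinely asynchronous: the worker can crash or be rescheduled and still pick up exactly where it stopped, without skipping or reprocessing events.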

Option C: Process all Azure Storage Analytics logs for successful blob events. Azure Storage Analytics logs provide detailed records of storage requests, both successful and failed, that can be used for auditing. However, processing these logs asynchronously is complex and time-consuming, and the logs are not guaranteed to be delivered in the order in which the events occurred.

Option D: Use the Azure Monitor HTTP Data Collector API and scan the request body for successful blob events. The Azure Monitor HTTP Data Collector API allows you to send log data to Azure Monitor from any REST client. While it is possible to send Azure Blob storage events to this API, scanning the request body for successful blob events is inefficient and does not satisfy the ordering and compliance requirements of the scenario.

In conclusion, Option B is the recommended solution for this scenario, as it allows for asynchronous processing of changes to blobs and their metadata in the order in which they occurred, and only for create, update, delete, and copy operations.