A company is planning on using DynamoDB as a data store for their application.
Below are the data access patterns for the table:
- Data is uploaded to the table via an application.
- The data is heavily used within a week's time from the ingestion of the data.
- After a week's time, the data is not frequently used.
Which of the following can be used to effectively store the table data in DynamoDB? Choose 2 answers from the options given below.
A. Create tables on a weekly basis. Ensure a high read and write capacity for these tables.
B. Change the read and write capacity to a lower value for a table after a week's time.
C. Create a table and ensure a datetimestamp is placed as the partition key.
D. Perform scan operations on the table and delete table data which has a datetimestamp older than one week.
Answer - A and B.
An example of this is given in the AWS Documentation.
########
Design Pattern for Time-Series Data.
Consider a typical time-series scenario, where you want to track a high volume of events.
Your write access pattern is that all the events being recorded have today's date.
Your read access pattern might be to read today's events most frequently, yesterday's events much less frequently, and then older events very little at all.
The read access pattern is best handled by building the current date and time into the primary key.
But that is certain to create one or more hot partitions.
The latest one is always the only partition that is being written to.
All other partitions, including all the partitions from previous days, divert provisioned write capacity from where you need it most.
The following design pattern often handles this kind of scenario effectively:
Create one table per time period, provisioned with write capacity less than 1,000 write capacity units (WCUs) per partition-key value, and minimum necessary read capacity.
Before the end of each time period, prebuild the table for the next period.
Just as the current period ends, direct event traffic to the new table.
You can assign names to these tables that specify the time periods that they have recorded.
As soon as a table is no longer being written to, reduce its provisioned write capacity to 1 WCU and provision whatever read capacity is appropriate.
Reduce the provisioned read capacity of earlier tables as they age, and archive or delete the ones whose contents will rarely or never be needed.
########
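As a rough illustration of the "one table per period" part of this pattern, the sketch below uses boto3 to pre-build the table for the coming week with high provisioned throughput. The table name, key schema, and capacity values are illustrative assumptions, not values taken from the question.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Pre-build next week's table before the current week ends, provisioned
# with high read and write capacity for the heavily used, current data.
dynamodb.create_table(
    TableName="events_2024_week_42",  # name encodes the period the table covers
    AttributeDefinitions=[
        {"AttributeName": "event_id", "AttributeType": "S"},
        {"AttributeName": "event_time", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "event_id", "KeyType": "HASH"},     # partition key
        {"AttributeName": "event_time", "KeyType": "RANGE"},  # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 500, "WriteCapacityUnits": 500},
)
```

Once the new week begins, the application simply directs its writes to this new table.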
Options C and D, even though possible, are less efficient. Option C builds the datetimestamp into the partition key of a single table, which creates hot partitions, and Option D relies on the scan operation to find and delete old data, which is expensive on large tables.
For more information on time series data for DynamoDB, please refer to the below URL.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-time-series.html

The given scenario states that the company wants to use DynamoDB as a data store for its application. The data is heavily used within a week of ingestion, and after that it is rarely accessed. Based on this access pattern, the two most suitable options are A and B:
A. Create tables on a weekly basis. Ensure a high read and write capacity for these tables: This follows the time-series design pattern described in the AWS documentation above. Each week gets its own table, so the heavily accessed data from the current week sits in a single table that can be provisioned with high read and write capacity. The table for the next week can be pre-built before the current week ends, and table names can indicate the week they cover, which makes the most recent data easy to locate and query.
B. Change the read and write capacity to a lower value for a table after a week's time: Once a weekly table stops receiving writes and its data is read only occasionally, its provisioned write capacity can be dropped (for example, to 1 WCU) and its read capacity reduced to whatever the residual traffic requires. This keeps costs in line with actual usage, and tables whose contents will rarely or never be needed can eventually be archived or deleted.
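A minimal sketch of how the previous week's table could be dialed down once it is no longer written to is shown below (boto3 again; the table name and capacity values are assumptions for illustration).

```python
import boto3

dynamodb = boto3.client("dynamodb")

# The previous week's table no longer receives writes and is read rarely,
# so reduce its provisioned throughput to the minimum that still serves
# the occasional read.
dynamodb.update_table(
    TableName="events_2024_week_41",
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 1},
)
```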
Options C and D are not suitable for the given scenario:
C. Create a table and ensure a datetimestamp is placed as the partition key: Building the current date and time into the partition key of a single table means that all new writes target the latest partition-key values. As the AWS documentation above points out, this is almost certain to create one or more hot partitions, while the provisioned write capacity spread across the older partitions is wasted where it is not needed.
D. Perform scan operations on the table and delete table data which has a datetimestamp older than one week: Although this is possible, a scan reads every item in the table, so on a large table it is slow and consumes a large amount of read capacity, and each subsequent delete consumes write capacity as well. Dropping or archiving an entire weekly table is far cheaper and simpler, and deleting items in place also risks data loss if the data has not been backed up first.
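For contrast, the sketch below shows roughly what the scan-and-delete approach of option D would involve (boto3 again; the table name, attribute names, and key schema are assumed for illustration). Even with a filter expression, the scan reads, and is billed for, every item in the table, which is why this option is less efficient than simply retiring a whole weekly table.

```python
import boto3
from datetime import datetime, timedelta, timezone

dynamodb = boto3.client("dynamodb")

# Items older than one week are candidates for deletion.
cutoff = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()

# The scan reads every item; the filter is applied only after the read,
# so read capacity is consumed for the entire table, not just old items.
pages = dynamodb.get_paginator("scan").paginate(
    TableName="events",
    FilterExpression="event_time < :cutoff",
    ExpressionAttributeValues={":cutoff": {"S": cutoff}},
    ProjectionExpression="event_id, event_time",
)

for page in pages:
    for item in page["Items"]:
        # Each delete is a separate write and consumes write capacity too.
        dynamodb.delete_item(
            TableName="events",
            Key={"event_id": item["event_id"], "event_time": item["event_time"]},
        )
```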