To work with DynamoDB more efficiently, we should follow some best practices when designing tables and items:
- Table best practices: DynamoDB tables are distributed across multiple partitions. For the best results, design your tables and applications so that read and write activity is spread evenly across all the items in your table, and avoid I/O hotspots that can degrade the performance of your application:
- Design for uniform data access across items in your table
- Distribute the write activity during data upload operations
- Understand the access patterns for time series data
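One common way to spread write activity for a hot partition key is write sharding: appending a small random suffix to the key so writes land on different partitions. The sketch below is a minimal illustration of that idea in Python; the `shard_count` of 10 and the `base_key.N` naming are assumptions for the example, not a DynamoDB API.

```python
import random

def sharded_partition_key(base_key: str, shard_count: int = 10) -> str:
    """Append a random shard suffix so writes for a hot key are
    spread across partitions (e.g. '2024-06-01' -> '2024-06-01.7')."""
    return f"{base_key}.{random.randrange(shard_count)}"

def all_shard_keys(base_key: str, shard_count: int = 10) -> list:
    """Readers must query every shard key and merge the results."""
    return [f"{base_key}.{n}" for n in range(shard_count)]

# Writes for the same calendar date land on one of 10 shard keys:
written = {sharded_partition_key("2024-06-01") for _ in range(1000)}
assert written <= set(all_shard_keys("2024-06-01"))
```

The trade-off is that reads become fan-out queries: an application reading one logical key must query all shard keys and merge the results.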
- Item best practices: DynamoDB items are limited in size. However, there is no limit to the number of items in a table. Instead of storing large attribute data values in an item, consider the following alternatives in your design:
- Use one-to-many tables instead of a large set of attributes
- Use multiple tables to support varied access patterns
- Compress large attribute values
- Store large attribute values on Amazon S3
- Break large attributes across multiple items
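Compressing a large attribute value before writing it is often the simplest of these options. A minimal sketch using Python's standard `zlib` module, assuming the value is text and is stored as a DynamoDB Binary attribute:

```python
import zlib

def pack_attribute(text: str) -> bytes:
    """Compress a large text attribute into a compact binary value
    before writing it to the item."""
    return zlib.compress(text.encode("utf-8"))

def unpack_attribute(blob: bytes) -> str:
    """Restore the original text after reading the item back."""
    return zlib.decompress(blob).decode("utf-8")

message = "a repetitive log line\n" * 500
blob = pack_attribute(message)
assert len(blob) < len(message)           # compression shrinks the payload
assert unpack_attribute(blob) == message  # round-trips losslessly
```

If the value is still too large even after compression, store it as an object in Amazon S3 and keep only the S3 object key in the DynamoDB item.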
- Query and scan best practices: Sudden and unexpected read activity can quickly consume the provisioned read capacity of the table. Such activity can be inefficient if it is not evenly spread across the table partitions:
- Avoid sudden spikes in read activity
- Take advantage of parallel scans
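A parallel scan divides the table into disjoint segments that workers read concurrently, which the DynamoDB `Scan` operation supports via its `Segment` and `TotalSegments` parameters. The sketch below is a local stand-in (no AWS calls) that shows the division of work; the modulo-based split is an assumption of this example, since DynamoDB assigns segments internally.

```python
from concurrent.futures import ThreadPoolExecutor

TOTAL_SEGMENTS = 4

def scan_segment(items, segment, total_segments):
    """Stand-in for Scan(Segment=..., TotalSegments=...): each worker
    reads a disjoint slice of the table."""
    return [item for i, item in enumerate(items)
            if i % total_segments == segment]

table = [{"pk": f"item-{n}"} for n in range(100)]
with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    parts = pool.map(lambda s: scan_segment(table, s, TOTAL_SEGMENTS),
                     range(TOTAL_SEGMENTS))
results = [item for part in parts for item in part]
assert len(results) == len(table)  # the segments cover the whole table
```

Because each segment consumes read capacity independently, a parallel scan can exhaust the table's provisioned throughput quickly; limit the number of segments or rate-limit the workers on a shared table.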
- Local secondary indexes best practices: Local secondary indexes let you define alternate keys on a table and then issue queries against those keys, enabling the efficient retrieval of data based on your requirements. Before using local secondary indexes, you should understand how they affect provisioned throughput costs, storage costs, and query efficiency:
- Use indexes sparingly
- Choose your projection carefully
- Optimize frequent queries
- Take advantage of sparse indexes
- Check for expanding item collections
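A sparse index is one where the indexed attribute appears on only some items: DynamoDB indexes an item only if it carries the index's key attribute, so the index stays small and cheap to query. The sketch below mimics that behavior locally; the `open_since` attribute and order data are hypothetical examples.

```python
def build_sparse_index(items, index_attr):
    """Mimic a sparse secondary index: only items that carry the
    indexed attribute appear in the index, sorted by that attribute."""
    return sorted(
        (item for item in items if index_attr in item),
        key=lambda item: item[index_attr],
    )

orders = [
    {"order_id": 1, "status": "shipped"},
    {"order_id": 2},                        # no 'open_since' -> not indexed
    {"order_id": 3, "open_since": "2024-06-01"},
    {"order_id": 4, "open_since": "2024-05-20"},
]
open_orders = build_sparse_index(orders, "open_since")
assert [o["order_id"] for o in open_orders] == [4, 3]
```

In practice, this means querying the sparse index touches only the open orders rather than scanning every order in the table.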