Sample Questions And Answers (DBS-C01)
1. Which AWS service allows automated backups, snapshots, and failover for a relational database?
A. Amazon DynamoDB
B. Amazon S3
C. Amazon RDS
D. Amazon ElastiCache
Answer: C. Amazon RDS
Explanation: Amazon RDS provides automated backups, database snapshots, and Multi-AZ failover to ensure high availability for relational databases.
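For illustration, a minimal boto3 sketch of enabling automated backups and Multi-AZ at instance creation and taking a manual snapshot; all identifiers, the Region, and the password are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # Region is an assumption

# Create an instance with automated backups (7-day retention) and a Multi-AZ standby
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql",          # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="ChangeMe12345!",        # placeholder; prefer Secrets Manager
    BackupRetentionPeriod=7,                    # enables automated backups
    MultiAZ=True,                               # synchronous standby for failover
)

# Take a manual snapshot at any time
rds.create_db_snapshot(
    DBInstanceIdentifier="demo-mysql",
    DBSnapshotIdentifier="demo-mysql-snap-1",
)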
2. What is the best way to encrypt data at rest in Amazon Aurora?
A. Use application-level encryption
B. Store data in plaintext
C. Use Aurora’s built-in encryption with KMS
D. Enable VPC Flow Logs
Answer: C. Use Aurora’s built-in encryption with KMS
Explanation: Aurora integrates with AWS Key Management Service (KMS) to encrypt data at rest with AES-256; encryption is transparent to the application and also covers automated backups, snapshots, and replicas.
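As a sketch, encryption at rest is chosen when the cluster is created; the cluster name, KMS key alias, and credentials below are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an Aurora MySQL cluster encrypted with a customer managed KMS key
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="ChangeMe12345!",     # placeholder
    StorageEncrypted=True,                   # encrypt data at rest
    KmsKeyId="alias/demo-aurora-key",        # placeholder key alias
)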
3. Which database is best suited for a use case that requires millisecond latency and key-value access patterns?
A. Amazon Aurora
B. Amazon RDS
C. Amazon Redshift
D. Amazon DynamoDB
Answer: D. Amazon DynamoDB
Explanation: DynamoDB is designed for high-performance, low-latency access with key-value data models.
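For example, a key-value lookup with boto3; the table name and key attribute are hypothetical:

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("UserSessions")      # hypothetical table with partition key "session_id"

# Write and read an item directly by its key
table.put_item(Item={"session_id": "abc-123", "user": "alice"})
resp = table.get_item(Key={"session_id": "abc-123"})
print(resp.get("Item"))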
4. Which storage type does Amazon RDS for SQL Server use by default?
A. S3
B. EBS
C. EC2 instance store
D. EFS
Answer: B. EBS
Explanation: Amazon RDS, including RDS for SQL Server, stores database data and logs on Amazon EBS (Elastic Block Store) volumes.
5. What is the maximum retention period for automated backups in Amazon RDS?
A. 7 days
B. 14 days
C. 35 days
D. 60 days
Answer: C. 35 days
Explanation: Automated backups in RDS can be retained for up to 35 days.
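A sketch of raising the retention period to the 35-day maximum on an existing instance; the identifier is a placeholder:

import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_instance(
    DBInstanceIdentifier="demo-mysql",   # placeholder
    BackupRetentionPeriod=35,            # maximum supported value
    ApplyImmediately=True,
)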
6. What AWS tool helps analyze database workloads and provides optimization recommendations?
A. AWS Trusted Advisor
B. Amazon CloudWatch
C. Performance Insights
D. AWS Inspector
Answer: C. Performance Insights
Explanation: Performance Insights helps monitor and optimize database performance in RDS and Aurora.
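Performance Insights is enabled per instance; a minimal boto3 sketch, with the identifier as a placeholder:

import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_instance(
    DBInstanceIdentifier="demo-mysql",       # placeholder
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,    # days of history (7 is the free tier)
    ApplyImmediately=True,
)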
7. What is Aurora Global Database designed for?
A. Intra-region replication
B. Reduced latency for global applications
C. NoSQL database workloads
D. Data archival
Answer: B. Reduced latency for global applications
Explanation: Aurora Global Database replicates data across AWS Regions for globally distributed applications.
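A sketch of promoting an existing Aurora cluster into a global database and attaching a read-only secondary cluster in another Region; all identifiers, the ARN, and the Regions are placeholders:

import boto3

# Create the global cluster from an existing primary cluster
rds_us = boto3.client("rds", region_name="us-east-1")
rds_us.create_global_cluster(
    GlobalClusterIdentifier="demo-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:demo-aurora",  # placeholder ARN
)

# Add a read-only secondary cluster in a second Region
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_cluster(
    DBClusterIdentifier="demo-aurora-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="demo-global",
)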
8. How does Amazon Redshift Spectrum work?
A. Queries S3 data using EC2
B. Queries S3 data using DynamoDB
C. Allows Redshift to query data in S3 without loading it
D. Extracts data from S3 to a local server
Answer: C. Allows Redshift to query data in S3 without loading it
Explanation: Redshift Spectrum allows querying data stored in S3 directly using standard SQL.
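For illustration, running a Spectrum query through the Redshift Data API; the external schema (here spectrum_s3) is assumed to have been created beforehand with CREATE EXTERNAL SCHEMA, and all identifiers are placeholders:

import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

# spectrum_s3.clickstream maps to files in S3; no data is loaded into Redshift
resp = rsd.execute_statement(
    ClusterIdentifier="demo-redshift",   # placeholder cluster
    Database="dev",
    DbUser="awsuser",                    # placeholder
    Sql="SELECT event_date, COUNT(*) FROM spectrum_s3.clickstream GROUP BY event_date;",
)
print(resp["Id"])  # statement ID; fetch rows later with get_statement_result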
9. A customer wants to migrate an on-premises Oracle database to AWS with minimal refactoring. Which service is best?
A. Amazon DynamoDB
B. Amazon Aurora
C. Amazon RDS for Oracle
D. Amazon Neptune
Answer: C. Amazon RDS for Oracle
Explanation: RDS for Oracle allows running Oracle with minimal changes.
10. Which feature of DynamoDB provides multi-Region, multi-active replication?
A. DAX
B. Global Tables
C. Streams
D. Backup and Restore
Answer: B. Global Tables
Explanation: DynamoDB Global Tables automatically replicate data across the AWS Regions you choose, so applications can read and write locally in each Region.
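A sketch of adding a replica Region to an existing table using the current (2019.11.21) global tables version; the table name and Regions are placeholders:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Turn the existing table into a global table by adding a replica in eu-west-1
dynamodb.update_table(
    TableName="UserSessions",   # placeholder
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)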
11. What does the Database Migration Service (DMS) require to migrate a database?
A. S3 bucket
B. Source and target endpoints
C. Lambda function
D. EC2 instance with SQL client
Answer: B. Source and target endpoints
Explanation: DMS requires a source endpoint and a target endpoint; a replication instance (or DMS Serverless) then runs the migration task between them.
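A condensed boto3 sketch of the pieces DMS needs: two endpoints plus a replication instance that is assumed to exist already; all names, hosts, ARNs, and credentials are placeholders:

import boto3, json

dms = boto3.client("dms", region_name="us-east-1")

source = dms.create_endpoint(
    EndpointIdentifier="src-oracle", EndpointType="source", EngineName="oracle",
    ServerName="onprem.example.com", Port=1521, DatabaseName="ORCL",
    Username="dms_user", Password="ChangeMe12345!",   # placeholders
)
target = dms.create_endpoint(
    EndpointIdentifier="tgt-aurora", EndpointType="target", EngineName="aurora",
    ServerName="demo-aurora.cluster-xyz.us-east-1.rds.amazonaws.com", Port=3306,
    Username="admin", Password="ChangeMe12345!",
)

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:EXAMPLE",  # assumed to exist
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({"rules": [{"rule-type": "selection", "rule-id": "1",
        "rule-name": "all", "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include"}]}),
)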
12. Which Amazon RDS feature creates a read-only copy of a database?
A. Backup
B. Multi-AZ
C. Read Replica
D. Failover
Answer: C. Read Replica
Explanation: A Read Replica is an asynchronously updated, read-only copy of the database used to offload read traffic and scale reads.
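For example, creating a Read Replica from an existing instance; identifiers are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="demo-mysql-replica",    # new read-only copy
    SourceDBInstanceIdentifier="demo-mysql",      # existing primary
    DBInstanceClass="db.t3.medium",
)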
13. What AWS service supports property graphs and SPARQL queries?
A. Amazon RDS
B. Amazon Neptune
C. Amazon Redshift
D. Amazon DynamoDB
Answer: B. Amazon Neptune
Explanation: Neptune is AWS’s managed graph database; it supports property graphs queried with Gremlin or openCypher and RDF data queried with SPARQL.
14. What is a benefit of using Aurora Serverless?
A. Manual scaling
B. Reserved capacity
C. Automatic scaling based on workload
D. Static pricing
Answer: C. Automatic scaling based on workload
Explanation: Aurora Serverless scales up or down automatically depending on demand.
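A sketch of an Aurora Serverless v2 cluster whose capacity scales automatically between the ACU bounds shown; identifiers, the bounds, and the password are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="ChangeMe12345!",   # placeholder
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 8},
)

# Instances in a Serverless v2 cluster use the special "db.serverless" class
rds.create_db_instance(
    DBInstanceIdentifier="demo-serverless-1",
    DBClusterIdentifier="demo-serverless",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)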
15. What is the default consistency model for DynamoDB reads?
A. Strongly consistent
B. Eventually consistent
C. Weakly consistent
D. Linearizable
Answer: B. Eventually consistent
Explanation: By default, DynamoDB uses eventually consistent reads to improve performance.
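Consistency is chosen per request; a sketch contrasting the default with a strongly consistent read (table and key are hypothetical):

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Default: eventually consistent (cheaper, may briefly lag recent writes)
ddb.get_item(TableName="UserSessions", Key={"session_id": {"S": "abc-123"}})

# Opt in to a strongly consistent read for this request only
ddb.get_item(TableName="UserSessions", Key={"session_id": {"S": "abc-123"}},
             ConsistentRead=True)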
16. Which engine is NOT supported by Amazon RDS?
A. Oracle
B. MySQL
C. Cassandra
D. MariaDB
Answer: C. Cassandra
Explanation: Cassandra is not available in Amazon RDS but is offered through Amazon Keyspaces.
17. How can you reduce IOPS costs in Amazon RDS?
A. Use EBS-optimized instances
B. Use provisioned IOPS
C. Use general-purpose SSD (gp3)
D. Use Multi-AZ deployment
Answer: C. Use general-purpose SSD (gp3)
Explanation: gp3 volumes include a 3,000 IOPS baseline at no extra cost and are a cost-effective alternative to Provisioned IOPS (io1) volumes for most workloads.
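Switching an existing instance’s storage to gp3 is a single modification; a sketch with placeholder identifier and size:

import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_instance(
    DBInstanceIdentifier="demo-mysql",   # placeholder
    StorageType="gp3",                   # move off Provisioned IOPS storage
    AllocatedStorage=400,
    ApplyImmediately=True,
)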
18. Which method can migrate a schema and data from SQL Server to Aurora MySQL?
A. AWS Glue
B. AWS Backup
C. DMS and SCT
D. EC2 rsync
Answer: C. DMS and SCT
Explanation: For heterogeneous migrations such as SQL Server to Aurora MySQL, the AWS Schema Conversion Tool (SCT) converts the schema and code objects, and DMS migrates the data.
19. Which engine supports Aurora Parallel Query, which pushes query processing down to the storage layer?
A. PostgreSQL
B. Aurora MySQL
C. Aurora PostgreSQL
D. Oracle
Answer: B. Aurora MySQL
Explanation: Aurora Parallel Query is a feature of Aurora MySQL-Compatible Edition; it pushes filtering and aggregation down to the Aurora storage layer to speed up analytic queries.
20. What feature of DynamoDB automatically expires items?
A. Streams
B. TTL
C. Backup and Restore
D. DAX
Answer: B. TTL
Explanation: Time to Live (TTL) deletes items once the epoch timestamp in a designated attribute has passed, at no additional cost.
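A sketch of enabling TTL on an attribute and writing an item with an expiry timestamp; the table and attribute names are hypothetical:

import time
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Tell DynamoDB which numeric attribute holds the expiry epoch time
ddb.update_time_to_live(
    TableName="UserSessions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Item expires roughly one hour from now (deletion happens shortly after expiry)
ddb.put_item(
    TableName="UserSessions",
    Item={"session_id": {"S": "abc-123"},
          "expires_at": {"N": str(int(time.time()) + 3600)}},
)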
21. Which storage backend does Amazon Aurora use?
A. EBS
B. S3
C. Aurora Storage Volume
D. Instance Store
Answer: C. Aurora Storage Volume
Explanation: Aurora uses a purpose-built, distributed storage volume that keeps six copies of the data across three Availability Zones, independent of EC2 instance storage or standard EBS volumes.
22. What ensures transactional consistency in DynamoDB?
A. BatchGetItem
B. Global Secondary Indexes
C. Transactions API
D. DAX
Answer: C. Transactions API
Explanation: DynamoDB Transactions API enables atomic, consistent operations across multiple items.
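For example, an all-or-nothing write across two items with the transactions API; the tables, keys, and attributes are hypothetical:

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Either both operations succeed or neither is applied
ddb.transact_write_items(TransactItems=[
    {"Put": {
        "TableName": "Orders",
        "Item": {"order_id": {"S": "o-1001"}, "status": {"S": "PLACED"}},
        "ConditionExpression": "attribute_not_exists(order_id)",
    }},
    {"Update": {
        "TableName": "Inventory",
        "Key": {"sku": {"S": "widget-42"}},
        "UpdateExpression": "SET stock = stock - :one",
        "ConditionExpression": "stock >= :one",
        "ExpressionAttributeValues": {":one": {"N": "1"}},
    }},
])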
23. What feature helps reduce Aurora recovery time after a crash?
A. Lazy loading
B. Continuous backup
C. Fault-tolerant storage
D. Parallel read replicas
Answer: A. Lazy loading
Explanation: Aurora loads and repairs data pages lazily, on demand, rather than replaying the entire redo log before the database opens, which keeps crash recovery times short.
24. What is a limitation of using Amazon RDS Multi-AZ deployments?
A. No failover support
B. Only supports read replicas
C. Increased latency due to synchronous replication
D. No backups
Answer: C. Increased latency due to synchronous replication
Explanation: Multi-AZ uses synchronous replication, which can add write latency.
25. Which AWS service supports columnar storage and is optimized for OLAP workloads?
A. Amazon Aurora
B. Amazon Redshift
C. Amazon Neptune
D. Amazon RDS
Answer: B. Amazon Redshift
Explanation: Redshift uses columnar storage and is ideal for OLAP and analytics workloads.
26. What tool can assess database schema compatibility between Oracle and PostgreSQL?
A. DMS
B. AWS Schema Conversion Tool
C. CloudFormation
D. AWS Config
Answer: B. AWS Schema Conversion Tool
Explanation: SCT analyzes schema compatibility between database engines and provides conversion guidance.
27. What AWS service provides a high-speed, in-memory cache for DynamoDB?
A. AWS Backup
B. Amazon DAX
C. Amazon Aurora
D. AWS Lambda
Answer: B. Amazon DAX
Explanation: DynamoDB Accelerator (DAX) is a fully managed, in-memory cache that can reduce DynamoDB read latencies from milliseconds to microseconds.
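Provisioning a DAX cluster is done through its own API; a minimal sketch in which the cluster name, node type, and role ARN are placeholders:

import boto3

dax = boto3.client("dax", region_name="us-east-1")

dax.create_cluster(
    ClusterName="demo-dax",
    NodeType="dax.t3.small",                                     # placeholder node size
    ReplicationFactor=3,                                         # one primary plus two read replicas
    IamRoleArn="arn:aws:iam::111122223333:role/DAXServiceRole",  # placeholder role
)
# Applications then point the DAX SDK client at the cluster endpoint instead of DynamoDB.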
28. Which Aurora feature provides near real-time replication across regions?
A. Global Databases
B. Read Replicas
C. Backup Export
D. Cross-Region Copy
Answer: A. Global Databases
Explanation: Aurora Global Databases replicate data across AWS regions with sub-second latency.
29. What is Amazon Keyspaces used for?
A. Managed Cassandra database
B. Graph database
C. Columnar analytics
D. JSON document storage
Answer: A. Managed Cassandra database
Explanation: Amazon Keyspaces is AWS’s scalable, serverless Cassandra-compatible database.
30. What does Amazon Timestream specialize in?
A. Key-value storage
B. Document management
C. Time series data
D. Graph traversal
Answer: C. Time series data
Explanation: Amazon Timestream is optimized for storing and analyzing time series data like logs and metrics.
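A sketch of writing a single metric data point; the database, table, and dimension values are placeholders:

import time
import boto3

tsw = boto3.client("timestream-write", region_name="us-east-1")

tsw.write_records(
    DatabaseName="demo_metrics",
    TableName="cpu",
    Records=[{
        "Dimensions": [{"Name": "host", "Value": "web-01"}],
        "MeasureName": "cpu_utilization",
        "MeasureValue": "72.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),   # milliseconds since epoch
        "TimeUnit": "MILLISECONDS",
    }],
)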