Sample Questions and Answers
How does BigQuery charge for data storage?
A. Per GB stored per month
B. Flat fee regardless of size
C. Based on number of queries run
D. Per table created
Answer: A. Per GB stored per month
Explanation: BigQuery storage is billed per GB per month; data in a table or partition left unmodified for 90 days is billed at a lower long-term rate.
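As a rough illustration of the per-GB model, a minimal Python sketch; the 1 TB volume and the $0.02/GB rate are assumptions for the example (actual rates vary by region and between active and long-term storage):

    # Hypothetical estimate: 1 TB of active storage at an assumed
    # $0.02 per GB per month (check current pricing for your region).
    stored_gb = 1024
    rate_per_gb = 0.02
    print(f"Estimated monthly storage cost: ${stored_gb * rate_per_gb:.2f}")  # ~$20.48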
What is the purpose of the Cloud Bigtable replication feature?
A. Increase availability and durability
B. Increase write throughput
C. Automatically backup data
D. Encrypt data at rest
Answer: A. Increase availability and durability
Explanation: Replication keeps additional copies of the data in separate clusters, which can be placed in different zones or regions, improving availability and durability.
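For illustration, a hedged Python sketch of creating a Bigtable instance with two clusters in different zones so that data is replicated between them; project, instance, and zone names are placeholders:

    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)  # admin client for instance management
    instance = client.instance("orders-instance", display_name="Orders")

    # Two clusters in different zones; Bigtable replicates data between
    # them, which is what raises availability and durability.
    cluster_a = instance.cluster("orders-c1", location_id="us-central1-a",
                                 serve_nodes=3, default_storage_type=enums.StorageType.SSD)
    cluster_b = instance.cluster("orders-c2", location_id="us-central1-b",
                                 serve_nodes=3, default_storage_type=enums.StorageType.SSD)

    operation = instance.create(clusters=[cluster_a, cluster_b])
    operation.result(timeout=300)  # wait for the long-running create to finish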
Which is a valid way to connect an application to Cloud Spanner?
A. Using standard JDBC drivers
B. REST API only
C. Through BigQuery interface
D. FTP connection
Answer: A. Using standard JDBC drivers
Explanation: Cloud Spanner supports standard JDBC for connectivity.
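JDBC itself is a Java API; as a rough Python counterpart, here is a hedged sketch using the native Cloud Spanner client, with the equivalent JDBC connection string (the format used by Google's open-source Spanner JDBC driver) shown in a comment. All resource names are placeholders:

    from google.cloud import spanner

    # Java applications would instead use the Spanner JDBC driver with a URL like:
    #   jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-db
    client = spanner.Client(project="my-project")
    database = client.instance("my-instance").database("my-db")

    with database.snapshot() as snapshot:            # read-only transaction
        for row in snapshot.execute_sql("SELECT 1 AS ok"):
            print(row)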
Which BigQuery feature automatically optimizes query execution by reusing previous results?
A. Query caching
B. Data partitioning
C. Table clustering
D. Materialized views
Answer: A. Query caching
Explanation: Query caching improves performance by storing results of recent queries.
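To see the cache at work, a minimal Python sketch: run the same query twice and inspect the job's cache_hit flag (the public sample table is used only as an example); the second run is normally served from the results cache at no charge:

    from google.cloud import bigquery

    client = bigquery.Client()
    sql = "SELECT COUNT(*) FROM `bigquery-public-data.samples.shakespeare`"

    first = client.query(sql)
    first.result()                                    # wait for completion
    second = client.query(sql)
    second.result()

    print("first run cache hit:", first.cache_hit)    # usually False
    print("second run cache hit:", second.cache_hit)  # usually True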
What is the main difference between Cloud Bigtable and Firestore?
A. Bigtable is NoSQL wide-column, Firestore is NoSQL document store
B. Firestore supports SQL queries, Bigtable does not
C. Bigtable is relational, Firestore is key-value
D. Firestore requires schema, Bigtable does not
Answer: A. Bigtable is NoSQL wide-column, Firestore is NoSQL document store
Explanation: Bigtable stores data in wide-column format; Firestore stores documents.
Which Cloud service allows you to run fully managed, serverless SQL queries over structured data?
A. Cloud Bigtable
B. Cloud Spanner
C. BigQuery
D. Cloud SQL
Answer: C. BigQuery
Explanation: BigQuery is a fully managed, serverless data warehouse service designed for running fast SQL queries.
What is the recommended method for controlling access to BigQuery datasets?
A. Firewall rules
B. IAM roles and permissions
C. VPN access
D. Password authentication
Answer: B. IAM roles and permissions
Explanation: Google Cloud IAM is used to control access to BigQuery resources securely.
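For illustration, a hedged Python sketch that grants a principal the READER role on a single dataset by updating its access entries; the dataset ID and e-mail address are placeholders (project-level IAM bindings can also be managed with gcloud or the console):

    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset("my-project.analytics")      # placeholder dataset

    entries = list(dataset.access_entries)
    entries.append(bigquery.AccessEntry(
        role="READER",                       # dataset-level role
        entity_type="userByEmail",
        entity_id="analyst@example.com",     # placeholder principal
    ))
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])  # push only this field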
What is the best practice to avoid data loss when performing schema changes in Cloud Spanner?
A. Use online schema changes with versioning
B. Export data and re-import after changes
C. Perform schema changes during a downtime window
D. Use manual data migration tools
Answer: A. Use online schema changes with versioning
Explanation: Cloud Spanner supports online schema changes without downtime using versioned changes.
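For example, a hedged Python sketch of an online schema change; Spanner applies the DDL as a long-running operation while the database keeps serving reads and writes (resource names and the new column are placeholders):

    from google.cloud import spanner

    client = spanner.Client(project="my-project")
    database = client.instance("my-instance").database("my-db")

    # The ALTER TABLE runs online; existing traffic is not blocked.
    operation = database.update_ddl(
        ["ALTER TABLE Singers ADD COLUMN Nickname STRING(100)"])
    operation.result(timeout=300)   # wait for the schema change to complete
    print("Schema change applied")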
How does BigQuery pricing for queries primarily work?
A. Based on the amount of data scanned
B. Fixed monthly subscription
C. Number of queries executed
D. CPU time used
Answer: A. Based on the amount of data scanned
Explanation: BigQuery charges primarily on bytes processed during query execution.
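Because billing is driven by bytes scanned, a dry run can estimate a query's cost before it is executed. A minimal Python sketch (the public sample table is only an example):

    from google.cloud import bigquery

    client = bigquery.Client()
    config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

    job = client.query(
        "SELECT word, word_count "
        "FROM `bigquery-public-data.samples.shakespeare` "
        "WHERE corpus = 'hamlet'",
        job_config=config,
    )
    # A dry run processes no data; it only reports what would be scanned.
    print(f"This query would scan {job.total_bytes_processed:,} bytes")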
Which Cloud SQL feature allows automated failover to maintain high availability?
A. Backup scheduling
B. Read replicas
C. High availability (HA) configuration
D. Manual failover
Answer: C. High availability (HA) configuration
Explanation: HA configuration uses a primary and standby instance to automatically failover.
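One way to switch an existing instance to HA is through the Cloud SQL Admin API; the hedged Python sketch below uses the discovery client and assumes Application Default Credentials, with project and instance names as placeholders (the same change can be made with gcloud or the console):

    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1")

    # REGIONAL availability adds a standby instance in another zone and
    # enables automatic failover; ZONAL is the single-instance default.
    body = {"settings": {"availabilityType": "REGIONAL"}}
    response = sqladmin.instances().patch(
        project="my-project", instance="my-instance", body=body).execute()
    print("Patch operation:", response.get("name"))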
What is the main use of Cloud Bigtable’s row key design?
A. To optimize query filtering
B. To organize data storage and access patterns
C. To enforce security policies
D. To perform backups
Answer: B. To organize data storage and access patterns
Explanation: Row keys control how data is distributed and accessed efficiently in Bigtable.
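For example, if row keys are built as sensor_id#timestamp, all readings for one sensor sit next to each other and can be fetched with a single range scan. A hedged Python sketch (table name and key layout are assumptions for the example):

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("iot-instance").table("readings")

    # With keys like "sensor42#2024-06-01T12:00:00Z", a prefix range scan
    # returns every row for that sensor in key order.
    rows = table.read_rows(start_key=b"sensor42#", end_key=b"sensor42$")
    for row in rows:
        print(row.row_key)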
How do you enforce encryption at rest for Cloud SQL databases?
A. Enable customer-managed encryption keys (CMEK)
B. Use SSL certificates
C. Encrypt data on the client side only
D. Cloud SQL does not support encryption at rest
Answer: A. Enable customer-managed encryption keys (CMEK)
Explanation: Cloud SQL encrypts data at rest by default with Google-managed keys; CMEK lets customers supply and control their own encryption keys.
Which BigQuery feature lets you restrict access to specific rows in a table?
A. Column-level security
B. Row-level security
C. Authorized views
D. Table partitioning
Answer: B. Row-level security
Explanation: Row-level security allows fine-grained access control at the row level.
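Row-level security is configured with a row access policy. A hedged Python sketch issuing the DDL (table, policy, and principal names are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client()

    # Only the listed principal sees rows where region = 'US'; other rows
    # are silently filtered out of that user's query results.
    ddl = """
    CREATE ROW ACCESS POLICY us_only
    ON `my-project.sales.orders`
    GRANT TO ("user:analyst@example.com")
    FILTER USING (region = "US")
    """
    client.query(ddl).result()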
What is the purpose of BigQuery’s partitioned tables?
A. To increase storage capacity
B. To reduce query cost and improve performance by dividing tables by date or integer range
C. To enable transactional consistency
D. To allow multi-region replication
Answer: B. To reduce query cost and improve performance by dividing tables by date or integer range
Explanation: Partitioning reduces the amount of data scanned during queries.
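For example, a date-partitioned table can be created with DDL, and queries that filter on the partitioning column then scan only the matching partitions. A hedged Python sketch (all names are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client()

    client.query("""
    CREATE TABLE `my-project.analytics.events`
    (
      event_ts   TIMESTAMP,
      user_id    STRING,
      event_name STRING
    )
    PARTITION BY DATE(event_ts)   -- one partition per calendar day
    """).result()

    # Filtering on the partition column prunes partitions and cuts cost:
    #   SELECT ... FROM `my-project.analytics.events`
    #   WHERE DATE(event_ts) = "2024-06-01"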
What does Cloud Spanner use to achieve strong consistency across global instances?
A. Paxos consensus algorithm
B. Two-phase commit
C. Eventual consistency replication
D. Timestamp ordering
Answer: A. Paxos consensus algorithm
Explanation: Cloud Spanner uses Paxos to achieve distributed consensus and strong consistency.
Which Cloud Firestore feature supports offline synchronization for mobile apps?
A. Real-time database sync
B. Local cache with offline persistence
C. Cloud functions triggers
D. Scheduled batch updates
Answer: B. Local cache with offline persistence
Explanation: Firestore SDK supports offline persistence to synchronize changes when reconnected.
How do you minimize costs when running frequent, small queries in BigQuery?
A. Use query caching and partition pruning
B. Run queries in batch mode
C. Avoid using SQL functions
D. Use on-demand pricing only
Answer: A. Use query caching and partition pruning
Explanation: Caching and querying smaller partitions reduces scanned data and costs.
What is the primary function of the Cloud Bigtable emulator?
A. To develop and test applications locally without using actual Cloud Bigtable resources
B. To perform backups of Bigtable data
C. To migrate data to Cloud Bigtable
D. To analyze query performance
Answer: A. To develop and test applications locally without using actual Cloud Bigtable resources
Explanation: The emulator allows local development without incurring costs.
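A hedged Python sketch of pointing the client at a locally running emulator (started, for example, with "gcloud beta emulators bigtable start"); the host/port and all resource names are assumptions for the example:

    import os
    from google.cloud import bigtable
    from google.cloud.bigtable import column_family

    # With this variable set, the client talks to the emulator instead of the
    # real service, so no Cloud Bigtable resources or charges are involved.
    os.environ["BIGTABLE_EMULATOR_HOST"] = "localhost:8086"

    client = bigtable.Client(project="test-project", admin=True)
    table = client.instance("test-instance").table("test-table")
    table.create(column_families={"cf1": column_family.MaxVersionsGCRule(1)})
    print("Created table in the emulator:", table.table_id)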
What does the Database Migration Service (DMS) support for minimizing downtime during migrations?
A. Continuous data replication
B. One-time bulk export/import
C. Manual synchronization
D. Scheduled maintenance only
Answer: A. Continuous data replication
Explanation: DMS replicates changes continuously for minimal downtime migrations.
Which BigQuery feature helps manage and optimize query workload for large teams?
A. Dataset ACLs
B. Reservations and slot allocation
C. Query scheduling
D. Data lineage tracking
Answer: B. Reservations and slot allocation
Explanation: Reservations allocate query processing slots for workload management.
What is the benefit of using Cloud Firestore’s collection group queries?
A. Query across multiple collections with the same name in different documents
B. Aggregate multiple databases
C. Perform batch updates across collections
D. Manage access control policies
Answer: A. Query across multiple collections with the same name in different documents
Explanation: Collection group queries allow searching subcollections with the same name.
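As an illustration, a hedged Python sketch: one collection group query covers every subcollection named "reviews", regardless of which parent document it lives under (names are placeholders, and Firestore may prompt you to create an index the first time the query runs):

    from google.cloud import firestore

    db = firestore.Client(project="my-project")

    # Matches 'reviews' subcollections under any parent, e.g.
    #   restaurants/burger-place/reviews/... and hotels/grand/reviews/...
    query = db.collection_group("reviews").where("rating", ">=", 4)

    for doc in query.stream():
        print(doc.reference.path, doc.get("rating"))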
How can you automate backup of Cloud SQL instances?
A. Using automated backup settings in the instance configuration
B. Manual SQL dumps only
C. Cloud Storage lifecycle policies
D. Cloud Scheduler jobs
Answer: A. Using automated backup settings in the instance configuration
Explanation: Cloud SQL supports automatic daily backups.
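Automated backups are part of the instance settings. A hedged Python sketch using the Cloud SQL Admin API, in the same pattern as the HA example earlier; project/instance names and the backup window are placeholders:

    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1")

    body = {
        "settings": {
            "backupConfiguration": {
                "enabled": True,       # turn on automated daily backups
                "startTime": "03:00",  # UTC start of the backup window
            }
        }
    }
    sqladmin.instances().patch(
        project="my-project", instance="my-instance", body=body).execute()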
What is the default consistency level for Firestore in Native mode?
A. Strong consistency for document reads and writes
B. Eventual consistency
C. Causal consistency only
D. No consistency guarantees
Answer: A. Strong consistency for document reads and writes
Explanation: Firestore Native mode provides strong consistency for most operations.
Which BigQuery SQL dialect supports user-defined functions written in JavaScript?
A. Legacy SQL
B. Standard SQL
C. Both Legacy and Standard SQL
D. Neither support JavaScript UDFs
Answer: C. Both Legacy and Standard SQL
Explanation: Both dialects accept JavaScript UDFs: legacy SQL UDFs are written in JavaScript, and standard SQL (GoogleSQL) supports UDFs in both SQL and JavaScript.
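For illustration, a temporary JavaScript UDF in GoogleSQL (standard SQL), submitted here through the Python client; the function itself is only a toy example:

    from google.cloud import bigquery

    client = bigquery.Client()

    sql = """
    CREATE TEMP FUNCTION multiply(x FLOAT64, y FLOAT64)
    RETURNS FLOAT64
    LANGUAGE js AS '''
      return x * y;
    ''';
    SELECT multiply(3.0, 4.0) AS product;
    """
    for row in client.query(sql).result():
        print(row.product)   # 12.0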
How can you reduce BigQuery query costs on large datasets?
A. Use table partitioning and clustering to limit scanned data
B. Avoid filters
C. Disable caching
D. Run queries at peak hours
Answer: A. Use table partitioning and clustering to limit scanned data
Explanation: Partitioning and clustering reduce the data scanned and cost.
Which service is most suitable for storing large binary objects, such as images or videos, alongside metadata?
A. Cloud Storage with metadata in BigQuery or Firestore
B. Cloud SQL
C. Cloud Bigtable
D. Cloud Spanner
Answer: A. Cloud Storage with metadata in BigQuery or Firestore
Explanation: Cloud Storage stores binary objects efficiently; metadata can be managed in databases.
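As a rough illustration of that pattern, a hedged Python sketch that uploads an image to Cloud Storage and records its metadata in Firestore; bucket, file, and collection names are placeholders:

    from google.cloud import storage, firestore

    # 1. Store the binary object in Cloud Storage.
    bucket = storage.Client().bucket("my-media-bucket")
    blob = bucket.blob("images/product-123.jpg")
    blob.upload_from_filename("product-123.jpg")

    # 2. Keep the queryable metadata in a database (Firestore here).
    db = firestore.Client()
    db.collection("images").document("product-123").set({
        "gcs_uri": f"gs://my-media-bucket/{blob.name}",
        "content_type": "image/jpeg",
        "size_bytes": blob.size,
    })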
What is a best practice when designing schema for Cloud Bigtable?
A. Design row keys to avoid hotspots by distributing access evenly
B. Use auto-incrementing numeric keys
C. Store relational data with foreign keys
D. Use multiple columns for indexing
Answer: A. Design row keys to avoid hotspots by distributing access evenly
Explanation: Proper row key design prevents performance bottlenecks in Bigtable.
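For instance, leading the key with a value that is spread evenly across writers (such as a device or user ID) rather than with a plain timestamp keeps writes distributed across nodes. A hedged Python sketch of writing a row with such a key (all names are placeholders):

    import datetime
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("iot-instance").table("readings")

    # Leading with the device ID spreads writes across the key space; a key
    # that starts with the timestamp would send every new write to the same
    # node and create a hotspot.
    ts = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    row = table.direct_row(f"device-8421#{ts}".encode())
    row.set_cell("metrics", "temperature", b"21.5")
    row.commit()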
Which tool can you use to monitor Cloud SQL performance and resource utilization?
A. Cloud Monitoring and Cloud Logging
B. Stackdriver Trace only
C. Cloud Composer
D. Cloud Scheduler
Answer: A. Cloud Monitoring and Cloud Logging
Explanation: These tools provide metrics and logs to monitor database health.
How does BigQuery handle data ingestion from streaming sources?
A. Data is immediately available for querying with minimal latency
B. Data is ingested in batches once a day
C. Data must be manually loaded from Cloud Storage
D. Streaming ingestion is not supported
Answer: A. Data is immediately available for querying with minimal latency
Explanation: BigQuery supports streaming inserts with near real-time availability.
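For example, a hedged Python sketch of a streaming insert; once the call returns without errors, the rows are typically queryable within seconds (the table, its schema, and the rows are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client()

    rows = [
        {"user_id": "u-1", "event_name": "click", "event_ts": "2024-06-01T12:00:00Z"},
        {"user_id": "u-2", "event_name": "view", "event_ts": "2024-06-01T12:00:05Z"},
    ]

    # Streaming inserts make rows available to queries almost immediately,
    # unlike batch load jobs.
    errors = client.insert_rows_json("my-project.analytics.events", rows)
    if errors:
        print("Insert errors:", errors)
    else:
        print("Rows streamed; they are queryable within seconds")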
What is the recommended maximum size of a single row in Cloud Bigtable?
A. 10 MB
B. 100 MB
C. 256 KB
D. No explicit limit
Answer: B. 100 MB
Explanation: Cloud Bigtable recommends keeping individual rows at or below 100 MB; larger rows degrade performance.