Which type of constraint is commonly used to enforce data integrity at the column level in a relational database?

  • CHECK
  • FOREIGN KEY
  • INDEX
  • PRIMARY KEY
The CHECK constraint is commonly used to enforce data integrity at the column level in a relational database. It lets you define a condition that every value in the column must satisfy, and the database rejects any row that violates it.
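A minimal sketch of this in SQLite (via Python's stdlib `sqlite3`); the table and column names are illustrative:

```python
import sqlite3

# A CHECK constraint on "age": rows violating the condition are rejected.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employees (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        age  INTEGER CHECK (age >= 18)
    )
""")
conn.execute("INSERT INTO employees (name, age) VALUES ('Ada', 36)")  # passes the check
try:
    conn.execute("INSERT INTO employees (name, age) VALUES ('Bob', 12)")
except sqlite3.IntegrityError as e:
    print("Rejected:", e)  # the CHECK constraint blocks the invalid row
```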

In database design, how do you handle recursive relationships between entities?

  • Create a new database for each recursive relationship
  • Ignore recursive relationships in database design
  • Use a foreign key in the same table to establish the relationship
  • Use a separate table to represent the recursive relationship
In handling recursive relationships in database design, you typically use a foreign key within the same table to establish the relationship. This involves referencing the primary key of the same table, allowing an entity to be related to itself. It's a common technique for modeling hierarchical structures.
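A hedged sketch of a self-referencing foreign key in SQLite, modeling an employee/manager hierarchy (names are illustrative):

```python
import sqlite3

# manager_id points back into the same table, so an employee row can
# reference another employee row as its manager.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE employees (
        id         INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        manager_id INTEGER REFERENCES employees(id)  -- self-reference
    )
""")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(1, "Carol", None), (2, "Dan", 1), (3, "Eve", 1)])
# A self-join resolves the hierarchy:
for name, boss in conn.execute("""
        SELECT e.name, m.name
        FROM employees e JOIN employees m ON e.manager_id = m.id
        ORDER BY e.name"""):
    print(name, "reports to", boss)
```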

How does version control handle rollback of changes in data models?

  • Automatically rolling back to the previous version
  • Creating a new branch for each rollback
  • Deleting the entire version history
  • Manually reverting changes to a specific commit
Version control handles rollback by allowing users to manually revert changes to a specific commit. This ensures flexibility in undoing undesirable modifications and restoring the data model to a previous state while maintaining a record of version history.

What is the primary data structure used in document-based modeling?

  • Graph
  • JSON
  • Key-Value Pair
  • Table
The primary data structure used in document-based modeling is JSON (JavaScript Object Notation). JSON allows for flexible and hierarchical data representation, making it suitable for storing and retrieving complex, nested data structures. Document databases store JSON or JSON-like documents (for example, MongoDB uses BSON, a binary JSON variant) to organize and query data efficiently.
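A small sketch of a document-style record using Python's stdlib `json` module; the field names are illustrative, not any particular database's schema:

```python
import json

# One self-contained document: nested objects and arrays instead of
# fixed relational columns.
doc = {
    "_id": "order-1001",
    "customer": {"name": "Ada", "email": "ada@example.com"},
    "items": [
        {"sku": "A1", "qty": 2, "price": 9.99},
        {"sku": "B2", "qty": 1, "price": 24.50},
    ],
}
serialized = json.dumps(doc)       # roughly what a document store persists
restored = json.loads(serialized)  # round-trips to the same structure
print(restored["items"][0]["sku"]) # navigate into the hierarchy
```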

What does a modality of "Optional" mean in a relationship?

  • The relationship is mandatory for all entities involved
  • The relationship is not necessary for the entities involved
  • The relationship is optional for all entities involved
  • The relationship is optional for one entity and mandatory for the other entity
In a relationship with a modality of "Optional," it means that the relationship is optional for all entities involved. This implies that an entity can exist without being associated with another entity through the specified relationship.
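In relational terms, optional participation is commonly modeled with a nullable foreign key; a hedged SQLite sketch (table names are illustrative):

```python
import sqlite3

# dept_id allows NULL, so an employee can exist without participating
# in the employee-department relationship.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        dept_id INTEGER REFERENCES departments(id)  -- NULL allowed: optional
    );
    INSERT INTO departments VALUES (1, 'R&D');
    INSERT INTO employees VALUES (1, 'Ada', 1);      -- participates
    INSERT INTO employees VALUES (2, 'Grace', NULL); -- exists without the relationship
""")
unassigned = conn.execute(
    "SELECT name FROM employees WHERE dept_id IS NULL").fetchall()
print(unassigned)
```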

What strategies can be employed to optimize indexing for large-scale databases?

  • Avoid indexing altogether for large-scale databases
  • Choose appropriate column(s) for indexing
  • Regularly rebuild all indexes
  • Use fewer indexes to minimize overhead
Optimizing indexing for large-scale databases involves choosing appropriate columns for indexing based on the workload's query patterns, such as columns that appear frequently in WHERE clauses and joins. It's essential to strike a balance between query performance and the maintenance overhead each additional index adds to writes.
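A sketch in SQLite: index a frequently filtered column and confirm with `EXPLAIN QUERY PLAN` that the query uses the index (the plan's exact wording varies by SQLite version; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

# Index the column the workload filters on:
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(plan)  # the plan row should mention idx_orders_customer
```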

Scenario: A university has students and courses. Each student can enroll in multiple courses, and each course can have multiple students enrolled in it. What type of entity would you introduce to represent the relationship between students and courses in an ERD?

  • Association entity
  • Composite entity
  • Derived entity
  • Intersection entity
In this case, introducing an Intersection entity (also called an associative or junction entity) is suitable. It resolves the many-to-many relationship between students and courses into two one-to-many relationships, and it can store additional attributes related to the enrollment itself, such as enrollment date or grades.
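A hedged SQLite sketch of the intersection entity as a junction table; the `enrolled_on` attribute and all names are illustrative:

```python
import sqlite3

# "enrollments" resolves the many-to-many relationship and carries its
# own attribute (enrolled_on).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE courses  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE enrollments (
        student_id  INTEGER REFERENCES students(id),
        course_id   INTEGER REFERENCES courses(id),
        enrolled_on TEXT,
        PRIMARY KEY (student_id, course_id)
    );
    INSERT INTO students VALUES (1, 'Ada'), (2, 'Alan');
    INSERT INTO courses  VALUES (10, 'Databases'), (20, 'Algorithms');
    INSERT INTO enrollments VALUES
        (1, 10, '2024-01-15'), (1, 20, '2024-01-16'), (2, 10, '2024-01-17');
""")
for name, title in conn.execute("""
        SELECT s.name, c.title
        FROM enrollments e
        JOIN students s ON s.id = e.student_id
        JOIN courses  c ON c.id = e.course_id
        ORDER BY s.name, c.title"""):
    print(name, "is enrolled in", title)
```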

Which of the following techniques can be employed for database performance tuning?

  • Data isolation
  • Data replication
  • Data validation
  • Denormalization
Denormalization is one of the techniques employed for database performance tuning. It involves intentionally introducing redundancy into a database schema to improve read performance by reducing the need for joins and simplifying data retrieval operations.
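A small sketch of the trade-off in SQLite: the customer's name is copied into the orders table so reads need no join, at the cost of redundant data that must be kept in sync on updates (names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Denormalized: customer_name is a redundant copy from a customers
    -- table, included so reads avoid a join.
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER,
        customer_name TEXT,
        total REAL
    );
    INSERT INTO orders VALUES (1, 7, 'Ada', 19.99);
""")
# No join needed to show the customer with the order:
row = conn.execute(
    "SELECT customer_name, total FROM orders WHERE id = 1").fetchone()
print(row)
```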

How does a composite attribute differ from a simple attribute?

  • A composite attribute can be divided into smaller, independent sub-parts
  • A composite attribute is always derived, while a simple attribute is inherent
  • A simple attribute can be divided into smaller, independent sub-parts
  • A simple attribute is composed of multiple sub-parts
A composite attribute is one that can be divided into smaller, independent sub-parts, each with its own meaning. In contrast, a simple attribute is indivisible and represents an elementary piece of data. Composite attributes provide a way to model complex information in a database.
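A sketch of how the distinction shows up in a schema: a composite attribute such as "address" is stored as its sub-parts, each independently queryable, while a simple attribute stays a single column (SQLite; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        email TEXT,        -- simple attribute: indivisible
        street TEXT,       -- composite "address" broken into its sub-parts
        city TEXT,
        postal_code TEXT
    )
""")
conn.execute(
    "INSERT INTO customers VALUES (1, 'ada@example.com', '1 Main St', 'London', 'N1 9GU')")
# A sub-part of the composite attribute can be queried on its own:
city = conn.execute("SELECT city FROM customers WHERE id = 1").fetchone()[0]
print(city)
```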

What is the result of applying aggregation functions to a dataset in a database?

  • A summary or statistical result
  • Detailed records of individual entries
  • No change in the dataset
  • Randomized order of records
Applying aggregation functions to a dataset in a database results in a summary or statistical outcome. Instead of displaying detailed records, these functions provide valuable insights into the dataset, such as total, average, maximum, minimum, or count, helping in the analysis and interpretation of the data.
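A sketch in SQLite showing aggregation collapsing detail rows into a per-group summary (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 100.0), ("north", 250.0), ("south", 75.0)])

# COUNT, SUM, and AVG summarize the rows within each group:
summary = conn.execute("""
    SELECT region,
           COUNT(*)    AS n,
           SUM(amount) AS total,
           AVG(amount) AS avg_amount
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()
print(summary)  # [('north', 2, 350.0, 175.0), ('south', 1, 75.0, 75.0)]
```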