How does merge sort divide and conquer a given list/array?

  • It multiplies each element by a random factor
  • It randomly splits the list into parts
  • It recursively divides the list into halves, sorts each half, and then merges them back together.
  • It selects the smallest element and moves it to the beginning
Merge sort divides a given list or array by recursively breaking it into halves until only single-element sublists remain. It then merges those sublists back together in sorted order, building progressively larger sorted segments until the fully sorted array is reconstructed.
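
To make the recursion concrete, here is a minimal Python sketch of that divide-and-conquer structure (the function names and sample list are purely illustrative):

```python
def merge_sort(items):
    if len(items) <= 1:              # base case: a single element is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # recursively sort each half
    right = merge_sort(items[mid:])
    return merge(left, right)        # combine the two sorted halves

def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])          # append whatever remains in either half
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]
```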

In a social network application, you need to find the shortest path between two users based on mutual friends. Would BFS be suitable for this task, or would another algorithm be more appropriate?

  • A* Algorithm
  • Breadth-First Search (BFS)
  • Depth-First Search (DFS)
  • Dijkstra's Algorithm
BFS would be suitable for finding the shortest path based on mutual friends in a social network. Because BFS explores the graph level by level, the first time it reaches the target user it has found a path with the fewest connection hops. DFS does not guarantee the shortest path, and Dijkstra's algorithm is designed for weighted graphs, which is unnecessary here since every friendship link carries equal weight.
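
As a hedged sketch, a hop-count BFS over a friendship adjacency list might look like the following (the graph, user names, and function name are hypothetical):

```python
from collections import deque

def shortest_friend_path(graph, start, target):
    """Return the shortest chain of users from start to target, or None."""
    if start == target:
        return [start]
    visited = {start}
    queue = deque([[start]])            # each queue entry is the path explored so far
    while queue:
        path = queue.popleft()
        for friend in graph.get(path[-1], ()):
            if friend == target:
                return path + [friend]
            if friend not in visited:
                visited.add(friend)
                queue.append(path + [friend])
    return None                          # no connection exists

graph = {"alice": {"bob"}, "bob": {"alice", "carol"}, "carol": {"bob"}}
print(shortest_friend_path(graph, "alice", "carol"))  # ['alice', 'bob', 'carol']
```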

In selection sort, what is the main operation performed in each iteration?

  • Doubling the size of the sorted portion
  • Finding the minimum element in the unsorted portion and swapping it with the first element of the unsorted part
  • Multiplying elements in the unsorted portion
  • Randomly rearranging elements in the unsorted portion
The main operation in each iteration of selection sort is finding the minimum element in the unsorted portion and swapping it with the first element of the unsorted part. This gradually builds the sorted portion.
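
A minimal in-place sketch of that operation in Python (names and sample data are illustrative):

```python
def selection_sort(items):
    """In-place selection sort: grow the sorted prefix one element per pass."""
    n = len(items)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):                 # find the minimum in the unsorted portion
            if items[j] < items[min_idx]:
                min_idx = j
        items[i], items[min_idx] = items[min_idx], items[i]  # swap it to the front
    return items

print(selection_sort([64, 25, 12, 22, 11]))       # [11, 12, 22, 25, 64]
```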

Consider a scenario where you have a limited amount of memory available, and you need to sort a large dataset stored on disk. Discuss the feasibility of using bubble sort in this situation and propose an alternative approach if necessary.

  • Feasible and Efficient
  • Feasible but Inefficient
  • Feasible but Memory Intensive
  • Infeasible on Disk
Using bubble sort in this scenario is infeasible: its quadratic time complexity is already prohibitive for large datasets, and because the data does not fit in memory, its repeated passes over the array would translate into an enormous amount of disk I/O. A more suitable alternative is an external sorting algorithm such as external merge sort, which sorts chunks that fit in memory, writes each sorted chunk (a "run") to disk, and then merges the runs.
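
A simplified external merge sort sketch is shown below, assuming the input is a plain-text file with one integer per line; the file handling, chunk_size, and function name are illustrative rather than a production implementation:

```python
import heapq
import itertools
import os
import tempfile

def external_merge_sort(input_path, output_path, chunk_size=100_000):
    """Sort a large file of integers (one per line) using bounded memory."""
    run_paths = []
    with open(input_path) as src:
        while True:
            chunk = [int(line) for line in itertools.islice(src, chunk_size)]
            if not chunk:
                break
            chunk.sort()                                  # in-memory sort of one chunk
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as run:
                run.writelines(f"{x}\n" for x in chunk)   # write the sorted run to disk
            run_paths.append(path)

    runs = [open(p) for p in run_paths]
    with open(output_path, "w") as out:
        streams = [(int(line) for line in r) for r in runs]
        for value in heapq.merge(*streams):               # k-way merge of the sorted runs
            out.write(f"{value}\n")
    for r in runs:
        r.close()
    for p in run_paths:
        os.remove(p)
```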

How does the longest common substring problem differ from the longest common subsequence problem?

  • In the longest common substring problem, the characters in the common sequence can appear in any order.
  • In the longest common substring problem, the characters in the common sequence must appear consecutively.
  • The longest common substring problem allows for overlapping substrings.
  • The longest common substring problem deals with strings of equal length only.
The primary difference is that in the longest common substring problem, the characters in the common sequence must appear consecutively within the strings, whereas in the longest common subsequence problem, the matching characters only need to appear in the same relative order and do not have to be contiguous.
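
The contrast shows up directly in the standard dynamic-programming formulations; the sketch below (illustrative, not from any particular source) computes the length of each for the same pair of strings:

```python
def longest_common_subsequence(a, b):
    """Classic LCS DP: matching characters need not be contiguous."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def longest_common_substring(a, b):
    """Substring variant: a mismatch resets the run, so matches must be consecutive."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best

print(longest_common_subsequence("abcde", "ace"))  # 3 ("ace", non-contiguous)
print(longest_common_substring("abcde", "ace"))    # 1 ("a", must be consecutive)
```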

The algorithm selects the next node with the _______ shortest distance from the source node.

  • Average
  • Largest
  • Median
  • Smallest
In Dijkstra's algorithm, the next node to process is the unvisited node with the smallest tentative distance from the source. Always expanding the node with the minimum known distance first ensures that once a node is settled, its shortest distance is final, so the algorithm explores the most promising paths first.
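
A minimal Python sketch of Dijkstra's algorithm using a binary heap, where heappop always returns the node with the smallest tentative distance (the graph and node labels are made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """graph: node -> list of (neighbor, weight). Returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]                       # (tentative distance, node)
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)          # node with the smallest known distance
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, ()):
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(heap, (new_d, neighbor))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(graph, "A"))                    # {'A': 0, 'B': 3, 'C': 1}
```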

To find the shortest path in a weighted graph using BFS, one can modify the algorithm to use _______ for determining the next node to explore.

  • Binary Search Tree
  • Linked List
  • Priority Queue
  • Stack
To find the shortest path in a weighted graph using BFS, one can modify the algorithm to use a priority queue for determining the next node to explore. The priority queue always yields the frontier node with the minimum accumulated distance, which effectively turns the traversal into Dijkstra's algorithm.
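
To make the modification concrete, the fragment below contrasts the two frontier data structures; the start node and tuple layout are illustrative, and the weighted variant corresponds to the Dijkstra sketch shown earlier:

```python
from collections import deque
import heapq

start = "A"                            # illustrative start node

# Unweighted BFS frontier: FIFO queue, nodes are expanded in order of hop count.
bfs_frontier = deque([start])
node = bfs_frontier.popleft()
print(node)                            # A

# Weighted variant: priority queue keyed on accumulated distance, so the
# cheapest known node is expanded first.
weighted_frontier = [(0, start)]       # (distance so far, node)
dist, node = heapq.heappop(weighted_frontier)
print(dist, node)                      # 0 A
```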

Discuss the trade-offs between using a fixed-size hash table versus a dynamically resizing hash table.

  • Both fixed-size and dynamically resizing hash tables have the same space complexity. The only difference is in their time complexity for insertion and deletion operations.
  • Fixed-size hash tables are always preferable due to their simplicity and lack of memory management concerns.
  • Fixed-size hash tables dynamically adjust their size based on the number of elements, while dynamically resizing hash tables maintain a constant size.
  • Fixed-size hash tables offer constant space complexity but may lead to collisions. Dynamically resizing hash tables adapt to the number of elements but incur additional memory management overhead.
The trade-offs between fixed-size and dynamically resizing hash tables involve space complexity and adaptability. Fixed-size tables offer constant space complexity but may lead to collisions when the number of elements grows. Dynamically resizing tables adjust their size to accommodate the number of elements but introduce memory management overhead and potential performance hits during resizing operations.
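
The toy chained hash table below illustrates the dynamic-resizing side of that trade-off; the class name, default capacity, and load-factor threshold are arbitrary choices for the sketch (a fixed-size table would simply omit the resize step):

```python
class ResizingHashTable:
    """Toy chained hash table that doubles its bucket array when the load factor
    exceeds a threshold."""

    def __init__(self, capacity=8, max_load=0.75):
        self.buckets = [[] for _ in range(capacity)]
        self.count = 0
        self.max_load = max_load

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                       # update an existing key in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self.count += 1
        self._maybe_resize()

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

    def _maybe_resize(self):
        if self.count / len(self.buckets) > self.max_load:
            old = self.buckets
            self.buckets = [[] for _ in range(2 * len(old))]   # double the capacity
            for bucket in old:
                for key, value in bucket:                      # rehash every entry
                    self._bucket(key).append((key, value))

table = ResizingHashTable()
for i in range(20):
    table.put(f"key{i}", i)
print(table.get("key7"), len(table.buckets))   # 7 32 (capacity grew from 8)
```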

The dynamic programming approach to solving Edit Distance involves constructing a _______ to store intermediate results.

  • Hash table
  • Matrix
  • Queue
  • Stack
The dynamic programming approach for Edit Distance involves constructing a matrix to store intermediate results. Each cell in the matrix represents the minimum number of operations required to transform substrings of the two input strings.
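
A standard Levenshtein-distance sketch of that matrix is shown below (illustrative; row/column conventions vary between textbooks):

```python
def edit_distance(a, b):
    """dp[i][j] holds the minimum number of edits to turn a[:i] into b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                           # delete all i characters of a
    for j in range(n + 1):
        dp[0][j] = j                           # insert all j characters of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,           # deletion
                           dp[i][j - 1] + 1,           # insertion
                           dp[i - 1][j - 1] + cost)    # substitution (or match)
    return dp[m][n]

print(edit_distance("kitten", "sitting"))      # 3
```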

How do you find the middle element of a singly linked list in one pass?

  • Iterate through the list, counting the number of elements, and then traverse the list again to the middle element.
  • There is no efficient way to find the middle element in one pass for a singly linked list.
  • Use recursion to find the middle element efficiently.
  • Use two pointers, one moving at twice the speed of the other. When the faster pointer reaches the end, the slower pointer will be at the middle element.
By using two pointers, one moving at twice the speed of the other, you can efficiently find the middle element in one pass. The faster pointer reaches the end while the slower pointer points to the middle element.
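
A short sketch of the slow/fast-pointer technique on a hand-built singly linked list (the Node class and sample values are illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def middle(head):
    """Return the middle node using slow/fast pointers in a single pass."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # moves one step
        fast = fast.next.next     # moves two steps
    return slow                   # when fast reaches the end, slow is at the middle

# 1 -> 2 -> 3 -> 4 -> 5
head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
print(middle(head).value)         # 3
```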