Q-learning is a type of ________ learning algorithm that aims to find the best action to take given a current state.

  • Reinforcement
  • Supervised
  • Unsupervised
  • Semi-supervised
Q-learning is a type of reinforcement learning that focuses on finding the best action to take in a given state to maximize cumulative rewards.
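As a concrete illustration, here is a minimal tabular Q-learning sketch on a made-up five-state "corridor" environment; the reward structure, hyperparameters, and episode cap are illustrative assumptions, not part of the question.

```python
import numpy as np

# Toy corridor: action 1 moves right, action 0 stays put,
# and reaching the last state yields a reward of 1.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    for _ in range(100):                       # cap episode length
        # epsilon-greedy: mostly exploit the current Q-values, sometimes explore
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Core update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state
        if state == n_states - 1:              # terminal state reached
            break

print(Q)   # the learned values come to favour action 1 (moving right) in every state
```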

Which term refers to using a model that has already been trained on a large dataset and fine-tuning it for a specific task?

  • Model adaptation
  • Model transformation
  • Model modification
  • Fine-tuning
Fine-tuning is the process of taking a pre-trained model and adjusting it to perform a specific task. It's a crucial step in transfer learning, where the model adapts its features and parameters to suit the new task.
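A minimal sketch of this idea, assuming PyTorch with torchvision (0.13 or newer for the `weights=` argument) and a hypothetical 10-class target task: load a pretrained ResNet-18, freeze its backbone, and train only a new classification head.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so its learned features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for the new task
# (10 classes here is an arbitrary illustrative choice).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are handed to the optimizer, so fine-tuning
# initially trains just the task-specific layer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```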

Imagine a scenario where an online learning platform wants to categorize its vast number of courses into different topics. The platform doesn't have predefined categories but wants the algorithm to determine them based on course content. This task would best be accomplished using which learning approach?

  • Clustering
  • Reinforcement Learning
  • Supervised Learning
  • Unsupervised Learning
Unsupervised learning is the most suitable approach: the algorithm must discover inherent structures or clusters within the courses on its own, since no predefined categories are available.
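A hedged sketch of how this could look with scikit-learn: vectorize made-up course descriptions with TF-IDF and let k-means group them, with the number of clusters chosen purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Illustrative course descriptions; no topic labels are provided anywhere.
courses = [
    "Introduction to linear algebra and matrices",
    "Deep learning with neural networks",
    "Cooking Italian pasta from scratch",
    "Probability, statistics, and vectors",
    "Convolutional networks for image recognition",
    "Baking bread and pastry fundamentals",
]

X = TfidfVectorizer(stop_words="english").fit_transform(courses)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)   # cluster assignment per course, discovered without labels
```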

When visualizing high-dimensional data in two or three dimensions, one might use PCA to project the data onto the first few ________.

  • Principal Components
  • Data Points
  • Dimensions
  • Eigenvalues
PCA (Principal Component Analysis) is used to reduce the dimensionality of data by projecting it onto the first few Principal Components, which are linear combinations of the original dimensions.
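A minimal scikit-learn sketch, using synthetic 50-dimensional data as a stand-in: fit PCA and keep only the first two principal components for plotting.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic high-dimensional data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)            # shape (200, 2), ready for a scatter plot
print(pca.explained_variance_ratio_)   # variance captured by each component
```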

For a medical test, it's crucial to minimize the number of false negatives. Which metric would be particularly important to optimize in this context?

  • Accuracy
  • F1 Score
  • Precision
  • Recall
In the context of a medical test, minimizing false negatives is vital because you want to avoid missing actual positive cases. This emphasizes the importance of optimizing recall, which measures the ability of the test to correctly identify all positive cases, even at the expense of more false positives.
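A small sketch with made-up test outcomes shows how recall is computed and why a false negative lowers it; the labels below are illustrative only.

```python
from sklearn.metrics import recall_score, precision_score

# 1 = disease present. One positive case is missed (false negative)
# and one healthy case is flagged (false positive).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]

# Recall = TP / (TP + FN): the fraction of actual positives the test catches.
print("recall:", recall_score(y_true, y_pred))        # 3 / 4 = 0.75
print("precision:", precision_score(y_true, y_pred))  # 3 / 4 = 0.75
```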

In hierarchical clustering, the ________ method involves merging the closest clusters in each iteration.

  • Agglomerative
  • Divisive
  • DBSCAN
  • OPTICS
In hierarchical clustering, the agglomerative method starts with each data point as its own cluster and iteratively merges the closest clusters, building the hierarchy from the bottom up.
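A brief scikit-learn sketch of this bottom-up behaviour on a handful of made-up 2-D points; the linkage choice and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[0, 0], [0, 1], [1, 0],     # one tight group
              [5, 5], [5, 6], [6, 5]])    # another tight group

# Each point starts as its own cluster; the closest clusters are merged
# until only `n_clusters` remain.
labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(X)
print(labels)   # e.g. [0 0 0 1 1 1] (label numbering may differ)
```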

In the context of GANs, the generator tries to produce fake data, while the discriminator tries to ________ between real and fake data.

  • Differentiate
  • Discriminate
  • Generate
  • Classify
The discriminator in a GAN is responsible for discriminating between real and fake data; its name reflects this role. It does not generate data itself, and "discriminate" rather than the looser "differentiate" is the standard term for its task.
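A hedged PyTorch sketch of the discriminator's side of the objective: a small binary classifier is pushed to output 1 for real samples and 0 for fake ones. The architecture, batch size, and random stand-in data are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Tiny discriminator for flattened 28x28 inputs.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
    nn.Sigmoid(),           # probability that the input is real
)

criterion = nn.BCELoss()
real = torch.rand(16, 28 * 28)   # stand-in for a batch of real data
fake = torch.rand(16, 28 * 28)   # stand-in for generator output

# Discriminator loss: push real samples toward label 1 and fakes toward 0.
loss = criterion(discriminator(real), torch.ones(16, 1)) + \
       criterion(discriminator(fake), torch.zeros(16, 1))
loss.backward()
```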

What type of neural network is designed for encoding input data into a compressed representation and then decoding it back to its original form?

  • Convolutional Neural Network (CNN)
  • Recurrent Neural Network (RNN)
  • Autoencoder
  • Long Short-Term Memory (LSTM)
Autoencoders are neural networks designed for this task. They consist of an encoder network that compresses input data into a compact representation and a decoder network that reconstructs the original data from this representation.
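A minimal PyTorch sketch of this encoder/decoder structure for flattened 28x28 inputs; the 32-dimensional bottleneck and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(      # compress 784 -> 32
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, 32),
        )
        self.decoder = nn.Sequential(      # reconstruct 32 -> 784
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(8, 28 * 28)                 # stand-in batch
loss = nn.MSELoss()(model(x), x)           # reconstruction error drives training
```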

What is the main challenge faced by NLP systems when processing clinical notes in electronic health records?

  • Variability in clinical language
  • Availability of data
  • Lack of computational resources
  • Precision and recall
Clinical notes often use varied and context-specific language, making it challenging for NLP systems to accurately interpret and extract information from electronic health records. This variability can impact system accuracy.

Which technique involves setting a fraction of input units to 0 at each update during training time, which helps to prevent overfitting?

  • Dropout
  • Batch Normalization
  • Data Augmentation
  • Early Stopping
Dropout involves setting a fraction of input units to 0 during training, which helps prevent overfitting by making the model more robust and reducing reliance on specific neurons.
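A short PyTorch sketch of the behaviour described above: during training a Dropout layer zeroes a random fraction of activations (and rescales the rest), while at evaluation time it passes inputs through unchanged.

```python
import torch
import torch.nn as nn

layer = nn.Dropout(p=0.5)   # drop 50% of units on each training-time forward pass
x = torch.ones(1, 10)

layer.train()
print(layer(x))   # roughly half the entries are 0; survivors are scaled by 1/(1-p)

layer.eval()
print(layer(x))   # unchanged at inference time
```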