For text classification problems, the ________ variant of Naive Bayes is often used.
- K-Means
- Multinomial
- Random Forest
- SVM
In text classification, the Multinomial variant of Naive Bayes is commonly used due to its suitability for modeling discrete data like word counts.
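A minimal sketch with scikit-learn illustrates the idea; the toy documents and labels below are invented for the example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented corpus: two spam-like and two ham-like messages
docs = ["free prize money now", "meeting schedule for monday",
        "win free cash prize", "project meeting agenda"]
labels = ["spam", "ham", "spam", "ham"]

vec = CountVectorizer()
X = vec.fit_transform(docs)            # word-count features (discrete data)
clf = MultinomialNB().fit(X, labels)

print(clf.predict(vec.transform(["free cash now"])))
```

The `CountVectorizer` step produces exactly the kind of discrete word-count features that the multinomial likelihood models.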
The Actor-Critic model combines value-based and ________ methods to optimize its decision-making process.
- Policy-Based
- Model-Free
- Model-Based
- Q-Learning
The Actor-Critic model combines value-based (critic) and policy-based (actor) methods to optimize decision-making. The critic evaluates actions using a value function, and the actor selects actions according to a learned policy that is adjusted based on the critic's evaluation, combining the two approaches for improved learning.
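A minimal tabular sketch of this actor/critic split, on a toy one-state problem with two actions (all hyperparameters and reward values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.zeros(2)      # actor: action preferences (softmax policy)
v = 0.0              # critic: value estimate for the single state
alpha_v, alpha_h = 0.1, 0.1

for _ in range(2000):
    pi = np.exp(h) / np.exp(h).sum()          # softmax policy from preferences
    a = rng.choice(2, p=pi)
    r = 1.0 if a == 0 else 0.0                # action 0 is the better action
    delta = r - v                             # TD error: the critic's critique
    v += alpha_v * delta                      # critic update (value-based)
    h[a] += alpha_h * delta * (1 - pi[a])     # actor update (policy gradient)
    h[1 - a] -= alpha_h * delta * pi[1 - a]

pi = np.exp(h) / np.exp(h).sum()
print(pi)  # probability of the better action should be close to 1
```

The critic's TD error `delta` drives both updates: the value estimate and the policy preferences.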
Which regression technique is primarily used for predicting a continuous outcome variable (like house price)?
- Decision Tree Regression
- Linear Regression
- Logistic Regression
- Polynomial Regression
Linear Regression is the most common technique for predicting a continuous outcome variable, such as house prices. It establishes a linear relationship between input features and the output.
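A short scikit-learn sketch on synthetic house-price data; the price-per-square-metre and base-price numbers are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
size = rng.uniform(50, 200, 100).reshape(-1, 1)              # square metres
price = 3000 * size[:, 0] + 50_000 + rng.normal(0, 5000, 100)  # continuous target

model = LinearRegression().fit(size, price)
print(model.coef_[0], model.intercept_)  # recovers roughly 3000 and 50000
```

The fitted coefficient and intercept recover the linear relationship between the input feature and the continuous outcome.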
Variational autoencoders (VAEs) introduce a probabilistic spin to autoencoders by associating a ________ with the encoded representations.
- Probability Distribution
- Singular Value Decomposition
- Principal Component
- Regression Function
VAEs introduce a probabilistic element to autoencoders by associating a probability distribution (typically Gaussian) with the encoded representations. This allows for generating new data points.
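The heart of that probabilistic step can be sketched with the reparameterization trick: the encoder's output is treated as the mean and log-variance of a Gaussian, and the latent code is sampled from it. This is only the sampling step, not a full trained VAE, and the example values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, keeping gradients well-defined."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Pretend the encoder produced these parameters for a 2-dimensional latent code
mu = np.array([0.5, -1.0])
log_var = np.array([0.0, -2.0])

z = np.stack([reparameterize(mu, log_var, rng) for _ in range(10_000)])
print(z.mean(axis=0))  # sample mean approaches mu
```

Because each encoding is a distribution rather than a point, new data can be generated by sampling `z` and passing it through the decoder.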
Which ensemble method combines multiple decision trees and aggregates their results for improved accuracy and reduced overfitting?
- Logistic Regression
- Naive Bayes
- Principal Component Analysis (PCA)
- Random Forest
Random Forest is an ensemble method that combines multiple decision trees. It trains each tree on a bootstrap sample of the data (bagging) with random feature selection, then aggregates their predictions to achieve better accuracy and reduce overfitting. Random Forest is a popular choice for a wide range of machine learning tasks.
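A brief scikit-learn sketch on synthetic data; the dataset parameters and tree count are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 100 trees, each fit on a bootstrap sample with random feature subsets,
# whose votes are aggregated into the final prediction
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(forest.score(X_te, y_te))
```

Each tree alone would overfit its bootstrap sample; averaging their votes is what reduces variance.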
In the context of text classification, Naive Bayes often works well because it can handle what type of data?
- High-Dimensional and Sparse Data
- Images and Videos
- Low-Dimensional and Dense Data
- Numeric Data
Naive Bayes is effective with high-dimensional and sparse data as it assumes independence between features, making it suitable for text data with numerous attributes.
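To see why this matters, note that vectorized text is stored as a sparse matrix in which most entries are zero; the toy documents below are invented:

```python
import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the quick brown fox", "the lazy dog", "a quick brown dog"]
X = CountVectorizer().fit_transform(docs)

print(type(X))         # a SciPy sparse matrix, not a dense array
print(X.shape, X.nnz)  # many columns, but few non-zero entries per row
```

Real corpora have vocabularies of tens of thousands of words, so each document row is almost entirely zeros, which Naive Bayes handles efficiently.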
In ________ learning, the algorithm isn't provided with the correct answers but discovers them through exploration and exploitation.
- Reinforcement
- Semi-supervised
- Supervised
- Unsupervised
Reinforcement learning involves exploration and exploitation strategies, where the algorithm learns by trial and error and discovers correct answers over time. It doesn't start with pre-defined correct answers.
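A minimal illustration: tabular Q-learning on a five-state corridor, where the agent is never told the right action but discovers it through epsilon-greedy exploration (all hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, goal = 5, 4
Q = np.zeros((n_states, 2))          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):
    s = 0
    while s != goal:
        # Exploration (random action) vs. exploitation (greedy action)
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == goal else 0.0   # reward only at the goal state
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q[:goal].argmax(axis=1))  # learned action per non-goal state: move right
```

No correct answers are ever supplied; the reward signal alone shapes the learned policy.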
Which algorithm is a popular choice for solving the multi-armed bandit problem when the number of arms is large and some structure can be assumed on the rewards?
- Epsilon-Greedy
- UCB1
- Thompson Sampling
- Greedy
UCB1 (Upper Confidence Bound 1) is a popular choice for the multi-armed bandit problem when the number of arms is large and some structure can be assumed on the rewards. It balances exploration and exploitation by selecting the arm with the highest upper confidence bound on its estimated reward, so arms that are either promising or under-explored are favored.
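A compact sketch of basic UCB1 on a three-armed Bernoulli bandit; the arm means and horizon are invented, and structured variants (e.g., LinUCB for linear reward models) extend the same confidence-bound idea:

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Pull each arm once, then pick the arm maximizing mean + sqrt(2 ln t / n)."""
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms    # pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # initial pass: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
print(counts)  # the best arm (mean 0.8) accumulates most of the pulls
```

The confidence term shrinks as an arm is pulled more, so under-explored arms keep getting revisited until the bound rules them out.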
In the context of decision trees, what is "information gain" used for?
- To assess the tree's overall accuracy
- To calculate the depth of the tree
- To determine the number of leaf nodes
- To measure the purity of a split
Information gain is used to measure the purity of a split in a decision tree. It helps decide which feature to split on by evaluating how much it reduces uncertainty or entropy.
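The calculation can be sketched directly; the labels below are a toy example:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    """Parent entropy minus the size-weighted entropy of the child splits."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

parent = ["yes"] * 5 + ["no"] * 5      # maximally impure: entropy = 1 bit
pure_split = [["yes"] * 5, ["no"] * 5]  # a perfect split into pure children

print(information_gain(parent, pure_split))  # 1.0: all uncertainty removed
```

The tree builder evaluates this quantity for each candidate feature and splits on the one with the highest gain.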
________ is a technique where during training, random subsets of neurons are ignored, helping to make the model more robust.
- Dropout
- Regularization
- Batch Normalization
- Activation Function
Dropout is a regularization technique that involves randomly deactivating a fraction of neurons during training. This helps prevent overfitting, making the model more robust and less dependent on specific neurons.
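A minimal NumPy sketch of inverted dropout, the variant most frameworks use; the drop probability and input are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p_drop, rng, training=True):
    """Inverted dropout: zero units with prob p_drop, scale survivors by 1/(1-p)."""
    if not training:
        return x                       # inference uses the full network unchanged
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)   # rescale so the expected activation is unchanged

x = np.ones(10_000)
out = dropout(x, 0.5, rng)
print(out.mean())  # close to 1.0: scaling preserves the expectation
```

The rescaling by `1/(1-p)` is why no adjustment is needed at inference time.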
When using K-means clustering, why is it sometimes recommended to run the algorithm multiple times with different initializations?
- To ensure deterministic results
- To make the algorithm run faster
- To mitigate sensitivity to initial cluster centers
- To reduce the number of clusters
K-means clustering is sensitive to initial cluster centers: different initializations can converge to different local optima. Running it multiple times with different initializations and keeping the best run (e.g., the one with the lowest within-cluster sum of squares) helps find a more stable solution.
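In scikit-learn this is exposed as the `n_init` parameter, which tries several initializations and keeps the run with the lowest inertia (within-cluster sum of squares); the blob data below is synthetic:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# n_init controls how many random initializations are tried;
# the fit with the lowest inertia is the one that is kept.
single = KMeans(n_clusters=3, n_init=1, random_state=0).fit(X)
multi = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(single.inertia_, multi.inertia_)  # best-of-10 is never worse
```

On well-separated blobs the two runs often coincide; on harder data the multi-initialization fit can be substantially better.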
When both precision and recall are important for a problem, one might consider optimizing the ________ score.
- Accuracy
- F1 Score
- ROC AUC
- Specificity
The F1 Score is a measure that balances both precision and recall. It is especially useful when you want to consider both false positives and false negatives in your classification problem.
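A short worked example with scikit-learn; the labels are invented so the numbers are easy to check by hand:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]

p = precision_score(y_true, y_pred)   # 2 of 3 predicted positives are correct: 2/3
r = recall_score(y_true, y_pred)      # 2 of 4 actual positives were found: 1/2
f1 = f1_score(y_true, y_pred)         # harmonic mean: 2pr / (p + r) = 4/7

print(p, r, f1)
```

Because F1 is a harmonic mean, it is pulled toward whichever of precision or recall is worse, so optimizing it discourages sacrificing one for the other.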