The visualization tool used to represent the arrangement of the clusters produced by hierarchical clustering is called a _________.
- Cluster Map
- Dendrogram
- Heatmap
- Scatter Plot
A dendrogram is a tree-like diagram that shows the arrangement of the clusters produced by hierarchical clustering. It provides a visual representation of the clustering process, displaying how individual data points are grouped into clusters. It is a valuable tool for understanding the hierarchy and deciding where to cut the tree to form clusters.
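For illustration, here is a minimal sketch of how a dendrogram could be drawn with SciPy; the data and linkage method are made-up assumptions, not part of the question:

```python
# Illustrative sketch: plotting a dendrogram with SciPy (toy data assumed)
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

X = np.random.rand(20, 2)          # 20 hypothetical 2-D points
Z = linkage(X, method="ward")      # agglomerative clustering with Ward linkage

dendrogram(Z)                      # tree of merges; the y-axis is the merge distance
plt.title("Hierarchical clustering dendrogram")
plt.xlabel("Sample index")
plt.ylabel("Distance")
plt.show()
```

Cutting the tree at a chosen height on the y-axis yields the corresponding number of flat clusters.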
What are some alternative methods to the Elbow Method for determining the number of clusters in K-Means?
- Cross-validation
- Principal Component Analysis
- Random Initialization
- Silhouette Method, Gap Statistic
Alternatives to the Elbow Method include the Silhouette Method and the Gap Statistic, which evaluate cluster cohesion and separation to determine the optimal number of clusters.
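As a rough sketch of the Silhouette Method with scikit-learn (the blob dataset and range of k values are assumptions for demonstration):

```python
# Illustrative sketch: choosing k with the Silhouette Method (toy data, scikit-learn assumed)
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    # Higher silhouette score = tighter, better-separated clusters
    print(k, round(silhouette_score(X, labels), 3))
```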
Choosing too small a value for K in KNN can lead to a __________ model, while choosing too large a value can lead to a __________ model.
- fast, slow
- noisy, smooth
- slow, fast
- smooth, noisy
A small K produces a noisy model because each prediction depends on only a few (possibly mislabeled) neighbors, whereas a large K produces a smooth model because predictions are averaged over many neighbors.
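A small sketch of this effect with scikit-learn; the synthetic dataset and the specific K values are assumptions chosen only to show the contrast:

```python
# Illustrative sketch: effect of K in KNN (small K tends to overfit, large K to over-smooth)
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 15, 101):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    # Compare train vs. test accuracy for each K
    print(k, clf.score(X_train, y_train), clf.score(X_test, y_test))
```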
Why is it problematic for a model to fit too closely to the training data?
- It improves generalization
- It increases model simplicity
- It leads to poor performance on unseen data
- It reduces model bias
Fitting too closely to the training data leads to overfitting and poor performance on unseen data, as the model captures noise and fails to generalize well.
An Odds Ratio greater than 1 in Logistic Regression indicates that the __________ of the event increases for each unit increase in the predictor variable.
- Likelihood
- Margin
- Odds
- Probability
An Odds Ratio greater than 1 indicates that the odds of the event occurring increase for each unit increase in the predictor variable.
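In practice, odds ratios can be obtained by exponentiating the fitted coefficients. A minimal sketch, assuming a scikit-learn logistic regression on a standard example dataset:

```python
# Illustrative sketch: odds ratios from logistic regression coefficients
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])   # OR > 1: odds of the event rise per unit increase
print(odds_ratios[:5])
```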
How can you determine the degree of the polynomial in Polynomial Regression?
- By cross-validation or visual inspection
- By the number of features
- By the number of observations
- By the type of problem
The degree of the polynomial in Polynomial Regression can be determined by techniques like cross-validation or visual inspection of the fit. Choosing the right degree helps in balancing the bias-variance trade-off.
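A minimal sketch of degree selection by cross-validation; the synthetic cubic data and the candidate degrees are assumptions for illustration:

```python
# Illustrative sketch: picking the polynomial degree by cross-validation (toy data assumed)
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X[:, 0] ** 3 - X[:, 0] + rng.normal(scale=2, size=200)

for degree in range(1, 6):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, X, y, cv=5).mean()   # mean R^2 across folds
    print(degree, round(score, 3))
```

The degree with the best cross-validated score balances underfitting (degree too low) against overfitting (degree too high).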
What is overfitting in the context of machine learning?
- Enhancing generalization
- Fitting the model too closely to the training data
- Fitting the model too loosely to the data
- Reducing model complexity
Overfitting occurs when a model fits the training data too closely, capturing the noise and outliers, making it perform poorly on unseen data.
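The symptom is a large gap between training and test performance. A small sketch, assuming a fully grown decision tree on noisy synthetic data:

```python
# Illustrative sketch: an overfit model scores well on training data but worse on test data
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_informative=5, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the training set, including its noise
deep = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)
print("train:", deep.score(X_train, y_train), "test:", deep.score(X_test, y_test))
```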
If the data is linearly separable, using a _________ kernel in SVM will create a linear decision boundary.
- Linear
- Polynomial
- RBF
- Sigmoid
Using a linear kernel in SVM will create a linear decision boundary when the data is linearly separable.
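A minimal sketch with scikit-learn; the two-blob dataset is an assumption chosen to be (roughly) linearly separable:

```python
# Illustrative sketch: a linear-kernel SVM on linearly separable toy data
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=6)
clf = SVC(kernel="linear").fit(X, y)

# The learned decision boundary is the hyperplane w·x + b = 0
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
```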
What do the ROC Curve and AUC represent in classification problems?
- Curve of false positive rate vs. true positive rate
- Curve of precision vs. recall
- Curve of true negatives vs. false negatives
The ROC (Receiver Operating Characteristic) Curve plots the true positive rate (y-axis) against the false positive rate (x-axis) across classification thresholds. The AUC (Area Under the Curve) is a single value summarizing the model's overall ability to discriminate between positive and negative instances.
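A minimal sketch of computing both with scikit-learn; the synthetic dataset and logistic regression classifier are assumptions for demonstration:

```python
# Illustrative sketch: ROC curve and AUC with scikit-learn (toy data assumed)
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

probs = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, probs)   # FPR on x-axis, TPR on y-axis
print("AUC:", roc_auc_score(y_test, probs))
```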
In a text classification task, why might you choose a Naive Bayes classifier over a more complex model like a deep learning algorithm?
- Deep learning is not suitable for text classification
- Deep learning requires less preprocessing
- Naive Bayes always outperforms deep learning
- Naive Bayes might be preferred for its simplicity and efficiency, especially with limited data
Naive Bayes is a probabilistic classifier that can be simpler and more computationally efficient, especially when dealing with small or medium-sized datasets. In contrast, deep learning models might require more data and computational resources.
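A minimal sketch of such a classifier; the tiny corpus and its labels are made-up placeholders, not real data:

```python
# Illustrative sketch: a simple Naive Bayes text classifier (tiny hypothetical corpus)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible, broke after a day",
         "excellent quality", "awful experience, do not buy"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative (made-up labels)

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
print(clf.predict(["works great", "terrible quality"]))
```

Even this small pipeline trains in milliseconds, which is part of why Naive Bayes remains a common baseline for text classification.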
What is the primary goal of the K-Means Clustering algorithm?
- All of the Above
- Maximizing inter-cluster distance
- Minimizing intra-cluster distance
- Predicting new data points
The primary goal of K-Means is to minimize the intra-cluster distance, i.e., the distance between points and their assigned cluster centroid, so that each cluster is as compact as possible.
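The quantity being minimized is exposed in scikit-learn as the model's inertia. A minimal sketch on assumed toy data:

```python
# Illustrative sketch: K-Means minimizing within-cluster squared distances (inertia)
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

# inertia_ = sum of squared distances from each point to its nearest centroid,
# which is the objective K-Means minimizes
print("inertia:", km.inertia_)
print("centroids:\n", km.cluster_centers_)
```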
In a case where both overfitting and underfitting are concerns depending on the chosen algorithm, how would you systematically approach model selection and tuning?
- Increase model complexity
- Reduce model complexity
- Use L1 regularization
- Use grid search with cross-validation
A systematic approach uses techniques such as grid search with cross-validation to explore different hyperparameters and model complexities. This ensures that the selected model neither overfits nor underfits the data and generalizes well to unseen data.
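A minimal sketch of that workflow with scikit-learn; the random forest model, the parameter grid, and the synthetic data are assumptions chosen only to show the pattern:

```python
# Illustrative sketch: grid search with cross-validation to balance under- and overfitting
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Candidate complexities: shallow trees underfit, very deep trees may overfit
param_grid = {"max_depth": [2, 5, 10, None], "n_estimators": [50, 100]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV score:", search.best_score_)
```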