The __________ method in K-Means clustering helps in determining the optimal number of clusters by plotting the variance as a function of the number of clusters.
- Elbow
- Gap Statistic
- Initialization
- Random
The Elbow Method determines the optimal number of clusters in K-Means by plotting the within-cluster variance (inertia) as a function of the number of clusters. An "elbow" in the plot marks the point beyond which additional clusters add little value.
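For illustration, a minimal sketch of the Elbow Method using scikit-learn's KMeans on synthetic blobs (the dataset and the range of k values are assumptions for the demo):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 4 true clusters (an assumption for this demo)
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

# Inertia = within-cluster sum of squared distances
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    print(f"k={k}: inertia={km.inertia_:.1f}")
```

Plotted against k, the inertia typically drops sharply until k reaches the true number of clusters and flattens afterwards; that bend is the "elbow".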
How does Ridge regression differ in the way it penalizes large coefficients compared to Lasso?
- Both eliminate coefficients
- Both reduce coefficients
- Ridge eliminates coefficients, Lasso reduces them
- Ridge reduces coefficients, Lasso eliminates them
Ridge applies an L2 penalty (the sum of squared coefficients), which shrinks coefficients toward zero but keeps them non-zero, while Lasso applies an L1 penalty (the sum of absolute coefficients), which can eliminate some coefficients by setting them exactly to zero.
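A small sketch, assuming scikit-learn and synthetic regression data, that makes the difference visible by counting exactly-zero coefficients:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.datasets import make_regression

# Regression data where only 3 of 10 features carry signal (demo assumption)
X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

ridge = Ridge(alpha=10.0).fit(X, y)   # L2 penalty
lasso = Lasso(alpha=10.0).fit(X, y)   # L1 penalty

print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))  # typically 0
print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))  # typically several
```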
You are tasked with creating a model that can adapt and optimize its strategy through trial and error. Which type of learning would you employ?
- Reinforcement Learning
- Semi-Supervised Learning
- Supervised Learning
- Unsupervised Learning
Reinforcement Learning employs trial and error by learning from rewards and penalties, making it suitable for adaptive and optimization tasks.
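As a hedged illustration of trial-and-error learning, here is a minimal epsilon-greedy bandit in plain Python; the three reward probabilities are invented for the demo:

```python
import random

true_means = [0.2, 0.5, 0.8]  # hidden reward probabilities (demo assumption)
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1                 # fraction of steps spent exploring

random.seed(0)
for _ in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore: try a random action
    else:
        action = estimates.index(max(estimates))  # exploit: pick the best so far
    reward = 1 if random.random() < true_means[action] else 0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running mean

print("Estimated action values:", [round(e, 2) for e in estimates])
```

Through repeated trial and error, the agent's estimates converge toward the hidden reward probabilities, and it learns to favor the best action.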
What is the primary purpose of the hyperplane in SVM?
- Clustering
- Data Compression
- Data Transformation
- Separation of Classes
The hyperplane in SVM separates the classes in the feature space; the algorithm chooses the hyperplane with the maximum margin, i.e., the greatest distance to the nearest training points (the support vectors).
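A minimal sketch with scikit-learn's SVC on synthetic two-dimensional data; for a linear kernel the learned hyperplane is w·x + b = 0:

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Two-feature, two-class toy data (demo assumption)
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, random_state=7)

clf = SVC(kernel="linear").fit(X, y)
print("w:", clf.coef_[0])        # normal vector of the separating hyperplane
print("b:", clf.intercept_[0])   # offset of the hyperplane
```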
How does DBSCAN differentiate between border points and noise points?
- By analyzing their density within Epsilon radius
- By assigning different weights to them
- By clustering them separately
- By the number of points within Epsilon distance and their relation to MinPts
In DBSCAN, border points have fewer than MinPts neighbors within their Epsilon radius but lie within the Epsilon radius of a core point. Noise points satisfy neither condition: they are not core points and are not reachable from one. This distinction lets DBSCAN form clusters of varying shapes that include border points while remaining robust to noise.
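A short sketch, assuming scikit-learn and a two-moons dataset, that recovers the three point types from a fitted DBSCAN model:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.08, random_state=3)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)  # eps/min_samples chosen for the demo

core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True   # core: >= min_samples neighbors within eps
noise_mask = db.labels_ == -1               # noise: labeled -1
border_mask = ~core_mask & ~noise_mask      # border: in a cluster, but not core

print("core:", core_mask.sum(), "border:", border_mask.sum(), "noise:", noise_mask.sum())
```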
How can the Eigenvalues in PCA be used to determine the significance of the corresponding Eigenvectors?
- By defining the direction of the eigenvectors
- By indicating the mean of each eigenvector
- By representing the amount of variance captured
- By showing the noise in the data
In PCA, each eigenvalue represents the amount of variance captured along its corresponding eigenvector (principal component). The larger the eigenvalue, the more significant the component.
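A minimal sketch using scikit-learn's PCA on the Iris data, where `explained_variance_` holds the eigenvalues of the covariance matrix:

```python
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris

X = load_iris().data
pca = PCA().fit(X)

# Larger eigenvalue => more variance captured => more significant component
for i, (ev, ratio) in enumerate(zip(pca.explained_variance_,
                                    pca.explained_variance_ratio_), start=1):
    print(f"PC{i}: eigenvalue={ev:.3f}, variance ratio={ratio:.1%}")
```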
Explain how overfitting manifests itself in Polynomial Regression.
- Through fitting data too loosely
- Through fitting data with low-degree polynomials
- Through fitting noise and showing oscillatory behavior
- Through underfitting the model
Overfitting in Polynomial Regression is characterized by fitting the noise in the data and showing oscillatory behavior. A high-degree polynomial can capture minute fluctuations, leading to a complex model that doesn't generalize well.
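To see this concretely, a small sketch (the noisy sine data and the degree choices are assumptions for the demo) comparing a moderate and a high polynomial degree:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(-3, 3, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=40)  # noisy sine curve

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_tr, y_tr)
    print(f"degree {degree:2d}: "
          f"train MSE={mean_squared_error(y_tr, model.predict(X_tr)):.3f}, "
          f"test MSE={mean_squared_error(y_te, model.predict(X_te)):.3f}")
```

The high-degree model typically drives the training error far down while the test error rises, the signature of fitting noise.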
You have limited computational resources but need to develop a predictive model. What would you choose between AI, Machine Learning, or Deep Learning, and why?
- AI, for its flexibility and lower computational demands
- Deep Learning, for its high accuracy
- Machine Learning, for its data efficiency
Traditional AI approaches, such as rule-based systems, often require fewer computational resources than Machine Learning models, and far fewer than Deep Learning models, making them a practical choice when resources are limited.
Describe how the concepts of features, targets, training, and testing are interrelated in Machine Learning.
- Features and targets are for clustering; Training and testing for prediction
- Features and targets are unrelated; Training and testing are used interchangeably
- Features are for prediction; Targets for evaluation; Training and testing are unrelated
- Features are used to predict targets; Training is learning patterns; Testing evaluates performance
Features are the input variables used to predict targets. Training involves learning the patterns from features to predict targets, and testing evaluates how well this learning generalizes to unseen data. These concepts are essential in building and evaluating supervised learning models.
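A compact sketch, assuming scikit-learn and the Iris dataset, that ties the four concepts together:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

data = load_iris()
X, y = data.data, data.target  # X: features, y: target

# Training split: the model learns feature -> target patterns here.
# Test split: held out to evaluate how well that learning generalizes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```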
You've built a model with high variance. How can Cross-Validation help in diagnosing and improving the model?
- By automatically reducing the complexity of the model
- By helping in feature selection
- By providing a robust estimation of model performance and aiding hyperparameter tuning
- By providing more data for training
Cross-Validation provides a robust estimate of the model's performance across different data splits. For a high-variance model, a large gap between training and validation scores highlights overfitting, and the fold-by-fold results can guide hyperparameter tuning toward a complexity that captures the underlying patterns without fitting noise.
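As a sketch (the synthetic data and a decision tree as the high-variance model are assumptions for the demo), cross-validation can expose the gap between an unconstrained and a regularized model:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=1)

# An unconstrained tree (max_depth=None) tends to overfit; a shallow one is regularized.
for depth in (None, 3):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=1)
    scores = cross_val_score(tree, X, y, cv=5)
    print(f"max_depth={depth}: mean={scores.mean():.3f}, std={scores.std():.3f}")
```

A noticeably lower or more variable cross-validation score for the unconstrained model is the diagnostic signal; sweeping `max_depth` this way is a simple form of hyperparameter tuning.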
You are building a model to predict whether a given email is spam or not. Why might Logistic Regression be a suitable approach?
- Because it can model binary outcomes and estimate probabilities
- Because it can predict multiple classes
- Because it works well with unstructured data
- Because it's a regression algorithm
Logistic Regression is suitable for binary classification problems such as spam detection, as it models binary outcomes and can estimate the probability of an email being spam or not.
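A toy sketch, where the tiny corpus and labels are invented purely for illustration, showing how Logistic Regression yields spam probabilities:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus (demo assumption); 1 = spam, 0 = not spam
emails = ["win cash now", "free prize claim today", "meeting at noon",
          "project update attached", "win a free prize", "lunch tomorrow?"]
labels = [1, 1, 0, 0, 1, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(emails, labels)
print(model.predict_proba(["claim your free cash prize"])[0])  # [P(not spam), P(spam)]
```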
You have developed a regression model, and the R-Squared value is very close to 1. What could this indicate, and what would you check?
- Good fit; No need to check anything
- Perfect fit; Check for overfitting
- Perfect fit; Check for underfitting
- Poor fit; Check for bias
An R-Squared value close to 1 typically indicates a nearly perfect fit, but this might be a sign of overfitting. It is essential to verify the model's performance on unseen data, as it may be capturing noise and specificities of the training data rather than the underlying trend. Cross-validation or a hold-out validation set can help in this assessment.
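A brief sketch (synthetic data with more features than the sample size comfortably supports, a deliberate assumption) showing how a near-perfect training R-Squared can mask overfitting:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# 40 features for only 60 samples invites overfitting (demo assumption)
X, y = make_regression(n_samples=60, n_features=40, noise=10.0, random_state=2)

model = LinearRegression().fit(X, y)
print("train R^2:", model.score(X, y))                          # can be close to 1
print("CV R^2:  ", cross_val_score(model, X, y, cv=5).mean())   # often noticeably lower
```

A large drop from the training R-Squared to the cross-validated R-Squared is the overfitting signal described above.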