In the context of regularization, what is the primary difference between L1 and L2 regularization?

  • L1 regularization adds the absolute values of coefficients as a penalty, leading to feature selection
  • L1 regularization adds the squared values of coefficients as a penalty, promoting sparsity
  • L2 regularization adds the absolute values of coefficients as a penalty, promoting sparsity
  • L2 regularization adds the squared values of coefficients as a penalty, leading to feature selection
L1 regularization, also known as Lasso, adds the absolute values of the coefficients as a penalty, which promotes feature selection by driving some coefficients exactly to zero. In contrast, L2 regularization, or Ridge, adds the squared values of the coefficients as a penalty, which shrinks coefficients toward zero but does not set them exactly to zero.
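
As a concrete illustration, here is a minimal scikit-learn sketch (the synthetic data and the alpha values are arbitrary choices for this example) showing that the L1 penalty zeroes out coefficients while the L2 penalty only shrinks them:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic regression problem where only a few of the 20 features are informative
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)  # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)  # L2 penalty

# Lasso typically sets many coefficients exactly to zero (implicit feature selection);
# Ridge shrinks them toward zero but leaves them non-zero.
print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))
```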

Which application of machine learning in healthcare helps in predicting patient diseases based on their medical history?

  • Diagnostic Prediction
  • Medication Recommendation
  • Patient Scheduling
  • X-ray Image Analysis
Machine learning in healthcare is extensively used for Diagnostic Prediction, where algorithms analyze a patient's medical history to predict diseases.

When the outcome variable is continuous and has a linear relationship with the predictor variables, you would use ________ regression.

  • Linear
  • Logistic
  • Polynomial
  • Ridge
Linear regression is used when the outcome variable is continuous and its relationship with the predictor variables is linear. It's a fundamental technique in statistics and machine learning for regression tasks.
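
For example, a short scikit-learn sketch (the toy data and the coefficients 3 and 2 below are made up for illustration) of fitting a linear relationship:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data with an approximately linear relationship: y is about 3x + 2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 1, size=100)

model = LinearRegression().fit(X, y)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("prediction at x = 5:", model.predict([[5.0]])[0])
```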

A machine learning model trained for predicting whether an email is spam or not has a very high accuracy of 99%. However, almost all emails (including non-spam) are classified as non-spam by the model. What could be a potential issue with relying solely on accuracy in this case?

  • Data Imbalance
  • Lack of Feature Engineering
  • Overfitting
  • Underfitting
The issue here is data imbalance, where the model is heavily biased toward the majority class (non-spam). Relying solely on accuracy in imbalanced datasets can be misleading as it doesn't account for the misclassification of the minority class (spam), which is a significant problem.
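
A small sketch (the 99%/1% class split is invented to mirror the scenario) makes the point: a model that always predicts "non-spam" scores 99% accuracy yet catches zero spam.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1000 emails, only 10 of which are spam (label 1); the model predicts "non-spam" for all of them
y_true = np.array([1] * 10 + [0] * 990)
y_pred = np.zeros(1000, dtype=int)

print("accuracy :", accuracy_score(y_true, y_pred))                    # 0.99, looks great
print("recall   :", recall_score(y_true, y_pred, zero_division=0))     # 0.0, every spam email is missed
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
```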

When an agent overly focuses on actions that have previously yielded rewards without exploring new possibilities, it might fall into a ________ trap.

  • Exploitation
  • Exploration
  • Learning
  • Reward
If an agent overly focuses on actions that have yielded rewards in the past, it falls into an exploitation trap, neglecting the exploration needed to find potentially better actions.
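
A toy multi-armed-bandit sketch (the two reward distributions and the value of epsilon are made up) shows the trade-off: with epsilon = 0 the agent can lock onto whichever arm happened to pay first, while a small epsilon keeps some exploration going.

```python
import random

true_means = [1.0, 2.0]          # arm 1 is actually the better action
estimates, counts = [0.0, 0.0], [0, 0]
epsilon = 0.1                    # probability of exploring a random arm

for step in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(2)              # explore
    else:
        arm = estimates.index(max(estimates))  # exploit the best-looking arm so far
    reward = random.gauss(true_means[arm], 0.5)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average of rewards

print("estimated values:", estimates, "pulls per arm:", counts)
```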

If you want to visualize high-dimensional data in a 2D or 3D space, which of the following techniques would be suitable?

  • Principal Component Analysis
  • Regression Analysis
  • Naive Bayes
  • Linear Discriminant Analysis
Principal Component Analysis (PCA) is suitable for visualizing high-dimensional data in a lower-dimensional space. It projects the data onto the directions of maximum variance, so the first two or three principal components can be plotted directly.
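
For instance, a minimal scikit-learn sketch (using the built-in iris dataset purely as an example) that reduces 4-dimensional data to 2 components for plotting:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Project the 4-dimensional iris measurements onto the 2 directions of maximum variance
X, y = load_iris(return_X_y=True)
X_2d = PCA(n_components=2).fit_transform(X)

print(X_2d.shape)  # (150, 2): ready for a 2D scatter plot, coloured by class
```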

For binary classification tasks, which regression outputs a probability score between 0 and 1?

  • Lasso Regression
  • Linear Regression
  • Logistic Regression
  • Support Vector Regression
Logistic Regression outputs probability scores between 0 and 1, making it suitable for binary classification. It applies the logistic (sigmoid) function, σ(z) = 1 / (1 + e^(−z)), to a linear combination of the features to model the probability of the positive class.
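
A short scikit-learn sketch (synthetic data, default hyperparameters) showing the probability output:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

# predict_proba returns [P(class 0), P(class 1)] for each sample, each value in [0, 1]
print(clf.predict_proba(X[:3]))
```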

How does the architecture of a CNN ensure translational invariance?

  • CNNs use weight sharing in convolutional layers, making features invariant to translation
  • CNNs utilize pooling layers to reduce feature maps size
  • CNNs randomly initialize weights to break translational invariance
  • CNNs use a large number of layers for translation invariance
CNNs ensure translational invariance by sharing weights in convolutional layers: the same filter slides over the entire image, so a learned feature is detected regardless of where it appears in the input.
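
A plain NumPy sketch (the hand-rolled convolution and the toy "blob" image are for illustration only) of how one shared filter responds identically to the same pattern at two different locations:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' cross-correlation: the SAME kernel weights slide over every position."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((2, 2))                            # one shared filter: 4 weights, reused everywhere

img = np.zeros((6, 6)); img[1:3, 1:3] = 1           # a 2x2 blob near the top-left
shifted = np.zeros((6, 6)); shifted[3:5, 3:5] = 1   # the same blob, translated

# The filter's peak response is identical; only its location in the feature map moves.
print(conv2d_valid(img, kernel).max(), conv2d_valid(shifted, kernel).max())
```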

For the k-NN algorithm, what could be a potential drawback of using a very large value of k?

  • Decreased Model Sensitivity
  • Improved Generalization
  • Increased Computational Cost
  • Reduced Memory Usage
A large value of k in k-NN makes the model less sensitive to local patterns: each prediction is averaged over many neighbors, which smooths the decision boundary and can reduce predictive accuracy where the class structure is local.
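
A quick scikit-learn sketch (the two-moons data and the k values of 5 versus 100 are arbitrary choices for this example) of how a very large k washes out local structure:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Two interleaving half-moons: the class boundary is highly local
X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for k in (5, 100):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(f"k={k:3d}  test accuracy={knn.score(X_te, y_te):.2f}")
# With k=100 each prediction averages over nearly half the training set,
# smoothing away the local structure and typically lowering accuracy.
```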

In hierarchical clustering, as the name suggests, the data is grouped into a hierarchy of clusters. What visualization is commonly used to represent this hierarchy?

  • Bar Chart
  • Dendrogram
  • Heatmap
  • Scatter Plot
A dendrogram is commonly used in hierarchical clustering to visualize the hierarchical structure of clusters, showing the merging and splitting of clusters.
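
For example, a minimal SciPy/Matplotlib sketch (two small synthetic clusters, with Ward linkage chosen arbitrarily) that builds and draws a dendrogram:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

# Two small synthetic clusters of 2D points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])

Z = linkage(X, method="ward")  # each row of Z records one merge and its distance
dendrogram(Z)                  # the tree of merges; cutting it at a given height yields flat clusters
plt.show()
```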

When dealing with a small dataset and wanting to leverage the knowledge from a model trained on a larger dataset, which approach would be most suitable?

  • Fine-tuning
  • Transfer Learning
  • Random Initialization
  • Gradient Descent Optimization
Transfer Learning is the most suitable approach when the target dataset is small: a model pre-trained on a larger dataset is reused, and its learned representations are adapted to the new task.
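
One common recipe, sketched here with PyTorch/torchvision (the ResNet-18 backbone, the frozen layers, and the 3-class head are assumptions for illustration): load a network pre-trained on a large dataset, freeze its feature extractor, and retrain only a new output layer on the small dataset.

```python
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on ImageNet (a much larger dataset than ours)
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained feature extractor so the small dataset cannot overwrite it
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer for the new task (3 classes assumed here)
model.fc = nn.Linear(model.fc.in_features, 3)

# Training now updates just model.fc on the small dataset, reusing the learned features.
```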

________ is the problem when a model learns the training data too well, including its noise and outliers.

  • Bias
  • Overfitting
  • Underfitting
  • Variance
Overfitting is the problem where a model becomes too specialized in the training data and captures its noise and outliers. This can lead to poor performance on unseen data.
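
A small scikit-learn sketch (synthetic data with deliberately noisy labels; the unrestricted decision tree is just one way to provoke overfitting) showing the telltale gap between training and test accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with 10% label noise, so a perfect training fit means memorising noise
X, y = make_classification(n_samples=300, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # no depth limit
print("train accuracy:", tree.score(X_tr, y_tr))  # typically close to 1.0
print("test  accuracy:", tree.score(X_te, y_te))  # noticeably lower, the signature of overfitting
```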