A healthcare company wants to classify patients into risk categories based on their medical history. They have a vast amount of patient data, but the relationships between variables are complex and non-linear. Which algorithm would be most suitable for this task?

  • Decision Trees
  • K-Nearest Neighbors (K-NN)
  • Logistic Regression
  • Naive Bayes
Decision Trees are suitable for complex and non-linear relationships between variables. They can capture intricate patterns in patient data, making them effective for risk categorization in healthcare.
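As a hedged sketch of this idea, the snippet below fits a decision tree to synthetic data whose label depends on a non-linear interaction between features. The feature columns and the risk rule are invented for illustration, not real medical criteria.

```python
# Illustrative sketch: a decision tree capturing a non-linear rule
# on synthetic "patient" features (all values are made up).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# columns: e.g. standardized age, blood pressure, cholesterol (synthetic)
X = rng.normal(size=(200, 3))
# non-linear rule: high risk if two features interact strongly, or one is extreme
y = ((X[:, 0] * X[:, 1] > 0.5) | (X[:, 2] > 1.0)).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

A linear model would struggle with the interaction term, while the tree approximates it with axis-aligned splits.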

In pharmacology, machine learning can aid in the process of drug discovery by predicting potential ________ of new compounds.

  • Toxicity
  • Flavor Profile
  • Market Demand
  • Molecular Structure
Machine learning can predict the potential toxicity of new compounds by analyzing their chemical properties and interactions, aiding the drug-discovery process in pharmacology.

The Naive Bayes classifier assumes that the presence or absence of a particular feature of a class is ________ of the presence or absence of any other feature.

  • Correlated
  • Dependent
  • Independent
  • Unrelated
Naive Bayes assumes that features are independent of each other. This simplifying assumption helps make the algorithm computationally tractable but might not hold in all real-world cases.
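The independence assumption can be made concrete with a tiny hand-rolled example: the class-conditional likelihood of the whole feature vector is just the product of per-feature likelihoods. The "spam/ham" labels and probabilities below are invented for illustration.

```python
# Sketch of the Naive Bayes independence assumption:
# P(x1, x2, x3 | class) is approximated as the product of P(xi | class).
import math

# illustrative per-feature likelihoods P(feature_i = 1 | class)
p_feat_given_spam = [0.8, 0.6, 0.1]
p_feat_given_ham  = [0.1, 0.3, 0.7]
prior_spam, prior_ham = 0.4, 0.6

x = [1, 1, 0]  # observed binary feature vector

def likelihood(ps, x):
    # the "naive" step: multiply per-feature terms as if independent
    return math.prod(p if xi else 1 - p for p, xi in zip(ps, x))

score_spam = prior_spam * likelihood(p_feat_given_spam, x)
score_ham  = prior_ham  * likelihood(p_feat_given_ham, x)
print("spam" if score_spam > score_ham else "ham")
```

Even when features are actually correlated, this factorized form often works well in practice while keeping computation cheap.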

Regularization techniques help prevent overfitting. Which of the following is NOT a regularization technique?

  • Adam Optimizer
  • Batch Normalization
  • Dropout
  • L1 Regularization
Adam Optimizer is not a regularization technique. It's an optimization algorithm used in training neural networks, while the others are regularization methods.
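To make the distinction concrete, here is a minimal sketch of L1 regularization (one of the listed techniques) applied to a linear model: the penalty λ·|w| is added to the loss, which drives uninformative weights toward zero. The data and λ are synthetic and chosen for illustration.

```python
# L1-regularized linear regression via subgradient descent (sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0])  # only 2 informative features
y = X @ true_w + 0.1 * rng.normal(size=100)

lam, lr = 0.1, 0.01
w = np.zeros(5)
for _ in range(2000):
    # squared-error gradient plus the L1 subgradient lam * sign(w)
    grad = X.T @ (X @ w - y) / len(y) + lam * np.sign(w)
    w -= lr * grad

print(np.round(w, 2))  # weights for irrelevant features shrink toward 0
```

An optimizer like Adam would only change *how* this loss is minimized; the regularization lives in the loss itself.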

A medical research team is studying the relationship between various health metrics (like blood pressure, cholesterol level) and the likelihood of developing a certain disease. The outcome is binary (disease: yes/no). Which regression model should they employ?

  • Decision Tree Regression
  • Linear Regression
  • Logistic Regression
  • Polynomial Regression
Logistic Regression is the appropriate choice for binary outcomes, such as the likelihood of developing a disease (yes/no). It models the probability of a binary outcome based on predictor variables, making it well-suited for this medical research.
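A brief sketch of this setup, using synthetic standardized health-style features (the signal and noise are invented for illustration):

```python
# Logistic regression on a synthetic binary outcome (disease: yes/no).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))            # e.g. standardized BP, cholesterol
signal = 1.5 * X[:, 0] - 1.0 * X[:, 1]   # synthetic linear risk score
y = (signal + 0.5 * rng.normal(size=300) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# predicted probabilities for the first patient: [P(no disease), P(disease)]
print(model.predict_proba(X[:1]))
```

Unlike linear regression, the output is squashed through the sigmoid, so predictions are valid probabilities in [0, 1].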

Variational autoencoders (VAEs) introduce a probabilistic spin to autoencoders by associating a ________ with the encoded representations.

  • Probability Distribution
  • Singular Value Decomposition
  • Principal Component
  • Regression Function
VAEs introduce a probabilistic element to autoencoders by associating a probability distribution (typically Gaussian) with the encoded representations. This allows for generating new data points.
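The key mechanism can be sketched in a few lines: the encoder outputs the parameters of a Gaussian (a mean and a log-variance) instead of a single code, and a latent vector is drawn via the reparameterization trick. The encoder outputs below are made-up placeholder values.

```python
# Sketch of the VAE latent sampling step (reparameterization trick).
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.2, -1.0])       # pretend encoder output: mean
log_var = np.array([-0.5, 0.1])  # pretend encoder output: log variance

eps = rng.standard_normal(2)               # noise drawn from N(0, I)
z = mu + np.exp(0.5 * log_var) * eps       # sampled latent code
print(z)
```

Because new latents can be sampled from this distribution and decoded, VAEs can generate novel data points rather than only reconstruct inputs.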

Which regression technique is primarily used for predicting a continuous outcome variable (like house price)?

  • Decision Tree Regression
  • Linear Regression
  • Logistic Regression
  • Polynomial Regression
Linear Regression is the most common technique for predicting a continuous outcome variable, such as house prices. It establishes a linear relationship between input features and the output.
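A minimal sketch on synthetic house-price-style data (the areas, prices, and coefficients are invented):

```python
# Linear regression predicting a continuous target from one feature.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
area = rng.uniform(50, 200, size=(100, 1))   # square metres (synthetic)
price = 3000 * area[:, 0] + 20000 + rng.normal(0, 5000, size=100)

model = LinearRegression().fit(area, price)
print(model.coef_[0], model.intercept_)  # recovers roughly 3000 and 20000
```

The fitted coefficients recover the underlying linear relationship, and predictions are unbounded real values, which is exactly what a continuous target requires.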

The Actor-Critic model combines value-based and ________ methods to optimize its decision-making process.

  • Policy-Based
  • Model-Free
  • Model-Based
  • Q-Learning
The Actor-Critic model combines value-based (the critic) and policy-based (the actor) methods to optimize decision-making. The critic estimates value functions to evaluate actions, while the actor updates a parameterized policy using the critic's feedback, combining the strengths of both approaches for improved learning.

For text classification problems, the ________ variant of Naive Bayes is often used.

  • K-Means
  • Multinomial
  • Random Forest
  • SVM
In text classification, the Multinomial variant of Naive Bayes is commonly used due to its suitability for modeling discrete data like word counts.
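A small sketch of this pipeline: word counts from a tiny invented corpus feed a Multinomial Naive Bayes classifier. The documents and labels are illustrative only.

```python
# Multinomial Naive Bayes on word-count features (toy spam filter).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["cheap pills buy now", "meeting agenda attached",
        "buy cheap now", "project meeting notes"]
labels = ["spam", "ham", "spam", "ham"]

vec = CountVectorizer()
X = vec.fit_transform(docs)          # sparse matrix of word counts
clf = MultinomialNB().fit(X, labels)

print(clf.predict(vec.transform(["buy cheap pills"])))
```

The multinomial likelihood models how often each word occurs per class, which is why it suits count-based text features better than the Gaussian variant.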

For a non-linearly separable dataset, which property of SVMs allows them to classify the data?

  • Feature selection
  • Kernel functions
  • Large training dataset
  • Parallel processing
SVMs can classify non-linearly separable data using kernel functions, which map the data into a higher-dimensional space where it becomes linearly separable.
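The classic demonstration is a small cluster enclosed by a ring: no straight line separates the two classes, but an RBF kernel handles it easily. The data below is generated synthetically for illustration.

```python
# Linear vs. RBF-kernel SVM on non-linearly separable data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
inner = rng.normal(scale=0.5, size=(100, 2))           # class 0: central blob
angles = rng.uniform(0, 2 * np.pi, 100)
outer = np.c_[3 * np.cos(angles), 3 * np.sin(angles)]  # class 1: surrounding ring
outer += rng.normal(scale=0.2, size=(100, 2))

X = np.vstack([inner, outer])
y = np.array([0] * 100 + [1] * 100)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)
print(linear.score(X, y), rbf.score(X, y))  # rbf should be near 1.0
```

The RBF kernel implicitly maps points into a space where distance from the origin becomes a usable feature, so a hyperplane there corresponds to a circle in the original space.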