Which method involves reducing the number of input variables when developing a predictive model?

  • Dimensionality Reduction
  • Feature Expansion
  • Feature Scaling
  • Model Training
Dimensionality reduction is the process of reducing the number of input variables by selecting the most informative ones, combining them, or transforming them into a lower-dimensional space. This helps simplify models and can improve their efficiency and performance.
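For illustration, here is a minimal sketch of one common dimensionality-reduction technique, principal component analysis (PCA), using scikit-learn; the synthetic dataset, the number of input variables, and the number of retained components are assumptions made for the example:

```python
# Minimal PCA sketch: project 10 input variables onto 2 components.
# Assumes scikit-learn and NumPy are available; the dataset is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # 200 samples, 10 input variables

pca = PCA(n_components=2)                # keep only 2 dimensions
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                   # (200, 2)
print(pca.explained_variance_ratio_)     # variance captured by each component
```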

With the aid of machine learning, wearable devices can predict potential health events by analyzing ________ data.

  • Sensor
  • Biometric
  • Personal
  • Lifestyle
Machine learning applied to wearable devices can predict potential health events by analyzing biometric data. This includes information such as heart rate, blood pressure, and other physiological indicators that provide insights into the wearer's health status.

A medical imaging company is trying to diagnose diseases from X-ray images. Considering the spatial structure and patterns in these images, which type of neural network would be most appropriate?

  • Convolutional Neural Network (CNN)
  • Recurrent Neural Network (RNN)
  • Feedforward Neural Network
  • Radial Basis Function Network
A Convolutional Neural Network (CNN) is designed to capture spatial patterns and structures in images effectively, making it suitable for image analysis, such as X-ray diagnosis.
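As a hedged illustration, a minimal Keras sketch of a small CNN for single-channel images is shown below; the input size, layer widths, and two-class output are assumptions chosen for brevity, not a recommended diagnostic architecture:

```python
# Minimal CNN sketch for single-channel (e.g., X-ray-like) images.
# Assumes TensorFlow/Keras is installed; image size and class count are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),            # grayscale image
    layers.Conv2D(16, 3, activation="relu"),      # learn local spatial patterns
    layers.MaxPooling2D(),                        # downsample while keeping structure
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),        # e.g., disease vs. no disease
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```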

ICA is often used to separate ________ that have been mixed into a single data source.

  • Signals
  • Components
  • Patterns
  • Features
Independent Component Analysis (ICA) decomposes a mixed data source into statistically independent components, recovering the underlying sources that were combined, which makes 'Components' the correct answer.
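The following is a minimal sketch of this kind of separation using scikit-learn's FastICA; the two synthetic source signals and the mixing matrix are assumptions made for the example:

```python
# Minimal ICA sketch: recover two independent source signals from mixed observations.
# Assumes scikit-learn and NumPy; the signals and mixing matrix are synthetic.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                         # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))                # source 2: square wave
S = np.c_[s1, s2]

A = np.array([[1.0, 0.5], [0.5, 2.0]])     # mixing matrix
X = S @ A.T                                # observed mixed data

ica = FastICA(n_components=2, random_state=0)
S_estimated = ica.fit_transform(X)         # recovered independent components
print(S_estimated.shape)                   # (2000, 2)
```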

In the Actor-Critic approach, the ________ provides a gradient for policy improvement based on feedback.

  • Critic
  • Agent
  • Selector
  • Actor
In the Actor-Critic approach, the Critic evaluates the Actor's behavior by estimating value functions; its feedback (for example, the TD error) supplies the gradient signal that guides policy improvement, making the Critic a fundamental element of the approach.
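A minimal tabular one-step Actor-Critic sketch (NumPy only) illustrates this division of labor; the toy two-state environment, learning rates, and iteration count are assumptions made for the example:

```python
# Minimal tabular one-step Actor-Critic sketch on a toy 2-state MDP.
# Assumes NumPy only; environment, learning rates, and step count are illustrative.
import numpy as np

n_states, n_actions = 2, 2
theta = np.zeros((n_states, n_actions))    # actor: action preferences
V = np.zeros(n_states)                     # critic: state-value estimates
alpha_actor, alpha_critic, gamma = 0.1, 0.1, 0.9
rng = np.random.default_rng(0)

def policy(s):
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def step(s, a):
    # Toy dynamics: action 1 moves toward state 1, which pays reward 1.
    s_next = 1 if a == 1 else 0
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward

s = 0
for _ in range(2000):
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    s_next, r = step(s, a)

    # Critic: TD error evaluates the action just taken.
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha_critic * td_error

    # Actor: policy-gradient step scaled by the critic's feedback.
    grad_log_pi = -p
    grad_log_pi[a] += 1.0
    theta[s] += alpha_actor * td_error * grad_log_pi

    s = s_next

print(np.round(policy(0), 3), np.round(policy(1), 3))  # should favor action 1
```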

Q-learning is an off-policy algorithm because it learns the value of the optimal policy's actions, which may be different from the current ________'s actions.

  • Agent
  • Environment
  • Agent or Environment
  • Policy
Q-learning is indeed an off-policy algorithm: it learns the value of the optimal (greedy) policy's actions, which maximize expected return, irrespective of the actions the current agent actually takes under its behavior policy.
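A minimal tabular Q-learning sketch highlights the off-policy target: the update bootstraps from the greedy (max) next action even though the agent's epsilon-greedy behavior policy may act differently. The state/action counts and the sample transition below are assumptions made for illustration:

```python
# Minimal tabular Q-learning update sketch, highlighting the off-policy max.
# Assumes NumPy only; states, actions, and the sample transition are illustrative.
import numpy as np

n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def epsilon_greedy(s):
    # Behavior policy: the agent sometimes explores with a random action.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(s, a, r, s_next):
    # Target uses the max over next actions (the greedy/optimal policy),
    # regardless of which action the agent's behavior policy will actually take.
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# Example: one observed transition (s=0, a chosen by the agent, r=1.0, s'=2).
a = epsilon_greedy(0)
q_update(0, a, 1.0, 2)
print(Q)
```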

Which method can be seen as a probabilistic extension to k-means clustering, allowing soft assignments of data points?

  • Mean-Shift Clustering
  • Hierarchical Clustering
  • Expectation-Maximization (EM)
  • DBSCAN Clustering
The Expectation-Maximization (EM) method, typically used to fit a Gaussian mixture model, is a probabilistic extension of k-means: instead of assigning each point to a single cluster, it gives each point a probability of belonging to every component (a soft assignment).
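A minimal scikit-learn sketch contrasts hard and soft assignments using a Gaussian mixture fit by EM; the two-cluster synthetic dataset is an assumption made for the example:

```python
# Minimal sketch of soft assignments with a Gaussian mixture fit by EM.
# Assumes scikit-learn and NumPy; the two-cluster dataset is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
    rng.normal(loc=4.0, scale=1.0, size=(100, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

hard_labels = gmm.predict(X)            # hard assignment (like k-means)
soft_labels = gmm.predict_proba(X)      # soft assignment: probability per cluster
print(soft_labels[:3].round(3))
```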

A bank wants to use transaction details to determine the likelihood that a transaction is fraudulent. The outcome is either "fraudulent" or "not fraudulent." Which regression method would be ideal for this purpose?

  • Decision Tree Regression
  • Linear Regression
  • Logistic Regression
  • Polynomial Regression
Logistic Regression is the ideal choice for binary classification tasks, like fraud detection (fraudulent or not fraudulent). It models the probability of an event occurring, making it the right tool for this scenario.
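A minimal scikit-learn sketch shows logistic regression producing a fraud probability; the synthetic transaction features and labels below are assumptions made for illustration:

```python
# Minimal logistic-regression sketch for binary fraud classification.
# Assumes scikit-learn and NumPy; the transaction features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # e.g., amount, hour, merchant risk score
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)

clf = LogisticRegression().fit(X, y)
new_transaction = np.array([[2.0, -0.3, 1.5]])
print(clf.predict_proba(new_transaction))          # [P(not fraudulent), P(fraudulent)]
print(clf.predict(new_transaction))                # hard label: 1 = fraudulent
```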

Why is ethics important in machine learning applications?

  • To ensure fairness and avoid bias
  • To improve model accuracy
  • To speed up model training
  • To reduce computational cost
Ethics in machine learning is vital to ensure fairness and avoid bias, preventing discrimination against certain groups or individuals in model predictions. It's a fundamental concern in the field of AI and ML.

How does the Random Forest algorithm handle the issue of overfitting seen in individual decision trees?

  • By aggregating predictions from multiple trees
  • By increasing the tree depth
  • By reducing the number of features
  • By using a smaller number of trees
Random Forest handles overfitting by aggregating predictions from multiple decision trees. This ensemble method combines the results from different trees, reducing the impact of individual overfitting.
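A minimal scikit-learn sketch compares a single decision tree with a random forest on synthetic data; the dataset and hyperparameters are assumptions made for the example, but the gap between training and test accuracy typically shrinks for the ensemble:

```python
# Minimal sketch comparing a single decision tree to a random forest.
# Assumes scikit-learn; the dataset is synthetic via make_classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A fully grown single tree often fits the training data almost perfectly
# but generalizes worse; averaging many trees usually narrows that gap.
print("tree   train/test:", tree.score(X_train, y_train), tree.score(X_test, y_test))
print("forest train/test:", forest.score(X_train, y_train), forest.score(X_test, y_test))
```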