In SVM, what does the term "kernel" refer to?
- A feature transformation
- A hardware component
- A software component
- A support vector
The term "kernel" in Support Vector Machines (SVM) refers to a feature transformation. Kernels are used to map data into a higher-dimensional space, making it easier to find a linear hyperplane that separates different classes.
Which type of learning would be best suited for categorizing news articles into topics without pre-defined categories?
- Reinforcement learning
- Semi-supervised learning
- Supervised learning
- Unsupervised learning
Unsupervised learning is the best choice for categorizing news articles into topics without predefined categories. Unsupervised learning algorithms can cluster similar articles based on patterns and topics discovered from the data without the need for labeled examples. Reinforcement learning is more suitable for scenarios with rewards and actions. Supervised learning requires labeled data, and semi-supervised learning combines labeled and unlabeled data.
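As a sketch of how this might look in practice (scikit-learn, toy headlines; the cluster count is an assumption), unsupervised clustering groups similar articles with no labels supplied:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Illustrative headlines: two finance stories, two sports stories.
headlines = [
    "Stocks rally as markets close higher",
    "Central bank holds interest rates steady",
    "Team wins championship in overtime thriller",
    "Star striker signs record transfer deal",
]

# Vectorize the text, then cluster; no topic labels are ever provided.
X = TfidfVectorizer(stop_words="english").fit_transform(headlines)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, text in zip(labels, headlines):
    print(label, text)
```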
A real estate company wants to predict the selling price of houses based on features like square footage, number of bedrooms, and location. Which regression technique would be most appropriate?
- Decision Tree Regression
- Linear Regression
- Logistic Regression
- Polynomial Regression
Linear Regression is the most suitable technique for predicting a continuous target such as selling price. It models the price as a linear function of the independent variables (square footage, number of bedrooms, location), making it a natural fit for this scenario, whereas Logistic Regression is meant for classification, not continuous outcomes.
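A minimal sketch (scikit-learn, invented numbers) of fitting and using such a model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative training data: [square footage, bedrooms] -> sale price.
X = np.array([[1400, 3], [1600, 3], [1700, 4], [1875, 4], [2350, 5]])
y = np.array([245000, 312000, 279000, 308000, 405000])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # per-sqft and per-bedroom effects
print(model.predict([[2000, 4]]))     # predicted price for a new house
```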
A key challenge in machine learning ethics is ensuring that algorithms do not perpetuate or amplify existing ________.
- Inequalities
- Biases
- Advantages
- Opportunities
Ensuring that algorithms do not perpetuate or amplify existing inequalities is a fundamental challenge in machine learning ethics. Addressing this challenge requires creating more equitable models and datasets.
In which scenario is unsupervised learning least suitable: predicting house prices based on features, grouping customers into segments, or classifying emails as spam or not spam?
- Classifying emails as spam or not spam
- Grouping customers into segments
- Predicting house prices based on features
- Unsupervised learning is suitable for all scenarios
Unsupervised learning is least suitable for classifying emails as spam or not spam, because that task requires labeled examples of spam and non-spam to learn from. Unsupervised learning is better suited to clustering or grouping data when no labels exist, such as grouping customers into segments.
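A minimal sketch (scikit-learn, toy messages) of why labels are the key ingredient here: a supervised classifier learns directly from spam/ham annotations.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "claim your free reward",
            "meeting moved to 3pm", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam; these labels are exactly
                       # what unsupervised learning does not have

vec = CountVectorizer().fit(messages)
clf = MultinomialNB().fit(vec.transform(messages), labels)
print(clf.predict(vec.transform(["free prize inside"])))  # -> [1] (spam)
```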
In the context of the multi-armed bandit problem, what is regret?
- The feeling of loss and remorse
- An optimization metric
- A random variable
- An arm selection policy
In the context of the multi-armed bandit problem, regret is an optimization metric that quantifies how much an agent's total reward falls short of the best possible reward it could have achieved by always choosing the best arm. It's a way to measure how well an agent's arm selection policy performs.
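A minimal sketch (NumPy, illustrative arm probabilities) of computing the regret of a naive uniformly random policy on a Bernoulli bandit:

```python
import numpy as np

rng = np.random.default_rng(0)
arm_probs = np.array([0.3, 0.5, 0.7])  # true reward probability per arm
T = 10_000

# Policy under evaluation: pick arms uniformly at random.
arms = rng.integers(len(arm_probs), size=T)
rewards = rng.random(T) < arm_probs[arms]  # Bernoulli rewards

# Regret: expected reward of always playing the best arm, minus what
# the policy actually collected.
best_expected = T * arm_probs.max()
regret = best_expected - rewards.sum()
print(f"regret after {T} pulls: {regret:.1f}")
```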
A medical research team is studying the relationship between various health metrics (like blood pressure, cholesterol level) and the likelihood of developing a certain disease. The outcome is binary (disease: yes/no). Which regression model should they employ?
- Decision Tree Regression
- Linear Regression
- Logistic Regression
- Polynomial Regression
Logistic Regression is the appropriate choice for binary outcomes, such as the likelihood of developing a disease (yes/no). It models the probability of a binary outcome based on predictor variables, making it well-suited for this medical research.
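A minimal sketch (scikit-learn, invented measurements) of fitting such a model and reading out disease probabilities:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: [systolic blood pressure, cholesterol level].
X = np.array([[120, 180], [140, 220], [160, 260], [130, 200],
              [170, 280], [115, 170], [155, 250], [125, 190]])
y = np.array([0, 0, 1, 0, 1, 0, 1, 0])  # 1 = developed the disease

model = LogisticRegression(max_iter=1000).fit(X, y)
# Probabilities for [no disease, disease] given a new patient's metrics.
print(model.predict_proba([[150, 240]]))
```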
Regularization techniques help in preventing overfitting. Which of these is NOT a regularization technique: Batch Normalization, Dropout, Adam Optimizer, L1 Regularization?
- Adam Optimizer
- Batch Normalization
- Dropout
- L1 Regularization
Adam Optimizer is not a regularization technique. It is an optimization algorithm that adapts per-parameter learning rates while training neural networks, whereas Batch Normalization, Dropout, and L1 Regularization all help reduce overfitting.
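A minimal sketch (scikit-learn, synthetic data) of what L1 regularization actually does, shrinking irrelevant weights to exactly zero, which is a property no optimizer provides on its own:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] * 3.0 + rng.normal(scale=0.1, size=100)  # only feature 0 matters

plain = LinearRegression().fit(X, y)
l1 = Lasso(alpha=0.1).fit(X, y)   # adds the L1 penalty alpha * sum(|w|)
print(np.round(plain.coef_, 2))   # small noise weights on irrelevant features
print(np.round(l1.coef_, 2))      # L1 drives irrelevant weights to exactly 0
```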
The Naive Bayes classifier assumes that the presence or absence of a particular feature of a class is ________ of the presence or absence of any other feature.
- Correlated
- Dependent
- Independent
- Unrelated
Naive Bayes assumes that features are independent of each other. This simplifying assumption helps make the algorithm computationally tractable but might not hold in all real-world cases.
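A minimal sketch (NumPy, illustrative probabilities) of the independence assumption at work: the class-conditional likelihood factorizes into a product of per-feature likelihoods.

```python
import numpy as np

# P(feature_i = 1 | class) for three binary features, two classes.
p_feat_given_class = np.array([[0.8, 0.6, 0.1],   # class 0
                               [0.2, 0.3, 0.9]])  # class 1
prior = np.array([0.5, 0.5])
x = np.array([1, 0, 1])  # observed feature vector

# Independence assumption: P(x | c) = product over i of P(x_i | c).
likelihood = np.prod(
    np.where(x == 1, p_feat_given_class, 1 - p_feat_given_class), axis=1)
posterior = prior * likelihood
posterior /= posterior.sum()
print(posterior)  # P(class | x)
```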
In pharmacology, machine learning can aid in the process of drug discovery by predicting potential ________ of new compounds.
- Toxicity
- Flavor Profile
- Market Demand
- Molecular Structure
In pharmacology, machine learning can predict the potential toxicity of new compounds by analyzing their chemical properties and interactions, helping to screen out unsafe candidates early in drug discovery.
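A hypothetical sketch (scikit-learn) of such a screen; the molecular descriptors and toxicity labels below are invented purely for illustration, and a real pipeline would compute descriptors from chemical structure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical descriptors: [molecular weight, logP] per compound.
X = np.array([[320.4, 2.1], [150.2, 0.5], [410.7, 4.8],
              [275.9, 1.9], [505.3, 5.6], [190.8, 0.9]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = toxic (illustrative labels)

model = RandomForestClassifier(random_state=0).fit(X, y)
# Estimated [non-toxic, toxic] probabilities for a new compound.
print(model.predict_proba([[300.0, 3.0]]))
```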
A healthcare company wants to classify patients into risk categories based on their medical history. They have a vast amount of patient data, but the relationships between variables are complex and non-linear. Which algorithm might be more suitable for this task?
- Decision Trees
- K-Nearest Neighbors (K-NN)
- Logistic Regression
- Naive Bayes
Decision Trees are suitable for complex and non-linear relationships between variables. They can capture intricate patterns in patient data, making them effective for risk categorization in healthcare.
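A minimal sketch (scikit-learn, synthetic concentric-circle data standing in for non-linear patient data) showing a tree succeeding where a linear model cannot:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Concentric circles: class depends non-linearly on the two features.
X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The linear model scores near chance; the tree captures the boundary.
print("logistic:", LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te))
print("tree:", DecisionTreeClassifier(max_depth=5, random_state=0)
      .fit(X_tr, y_tr).score(X_te, y_te))
```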
Why is Independent Component Analysis (ICA) primarily used in applications like audio signal processing?
- It's more computationally efficient
- It separates mixed sources effectively
- It requires less data for training
- It's based on supervised learning
ICA is used in audio signal processing because it can effectively separate statistically independent sources from their observed mixtures, the classic blind source separation ("cocktail party") problem.
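A minimal sketch (scikit-learn/NumPy) of that setup: two synthetic signals are linearly mixed, and FastICA recovers them (up to scale and order):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)             # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))    # source 2: square wave
S = np.c_[s1, s2]

# Mix the sources as two "microphones" would hear them.
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = S @ A.T

# ICA unmixes the observations back into independent sources.
recovered = FastICA(n_components=2, random_state=0).fit_transform(X)
print(recovered.shape)  # (2000, 2): the separated source signals
```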