A key challenge in machine learning ethics is ensuring that algorithms do not perpetuate or amplify existing ________.
- Inequalities
- Biases
- Advantages
- Opportunities
Ensuring that algorithms do not perpetuate or amplify existing inequalities is a fundamental challenge in machine learning ethics. Addressing this challenge requires creating more equitable models and datasets.
A real estate company wants to predict the selling price of houses based on features like square footage, number of bedrooms, and location. Which regression technique would be most appropriate?
- Decision Tree Regression
- Linear Regression
- Logistic Regression
- Polynomial Regression
Linear Regression is the most appropriate choice here: the target (selling price) is continuous, and features such as square footage and number of bedrooms typically have an approximately linear relationship with price. Logistic Regression is a classification method, while Decision Tree and Polynomial Regression add complexity that this straightforward scenario does not require.
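As a minimal sketch of how this could look with scikit-learn (the feature values, prices, and location encoding below are made-up placeholders, and a real model would one-hot encode location rather than use an index):

```python
# Minimal sketch: predicting house prices with linear regression.
# All numbers are illustrative placeholders, not real data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [square footage, number of bedrooms, location index]
X = np.array([
    [1400, 3, 0],
    [1800, 4, 1],
    [1100, 2, 0],
    [2400, 4, 2],
])
y = np.array([240_000, 330_000, 180_000, 450_000])  # selling prices

model = LinearRegression()
model.fit(X, y)

# Predict the price of an unseen house
new_house = np.array([[1600, 3, 1]])
print(model.predict(new_house))
```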
Which type of learning would be best suited for categorizing news articles into topics without pre-defined categories?
- Reinforcement learning
- Semi-supervised learning
- Supervised learning
- Unsupervised learning
Unsupervised learning is the best choice for categorizing news articles into topics without predefined categories. Unsupervised learning algorithms can cluster similar articles based on patterns and topics discovered from the data without the need for labeled examples. Reinforcement learning is more suitable for scenarios with rewards and actions. Supervised learning requires labeled data, and semi-supervised learning combines labeled and unlabeled data.
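A rough sketch of the idea (the article snippets and cluster count are illustrative assumptions): articles can be vectorized with TF-IDF and grouped with k-means, with no labels involved.

```python
# Minimal sketch: clustering news articles into topics without labels.
# The article texts and number of clusters are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "The central bank raised interest rates again this quarter.",
    "The striker scored twice in the championship final.",
    "Stock markets rallied after the inflation report.",
    "The midfielder signed a new three-year contract.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # cluster assignment per article, discovered from the data itself
```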
In SVM, what does the term "kernel" refer to?
- A feature transformation
- A hardware component
- A software component
- A support vector
The term "kernel" in Support Vector Machines (SVM) refers to a feature transformation. Kernels are used to map data into a higher-dimensional space, making it easier to find a linear hyperplane that separates different classes.
In the bias-variance decomposition of the expected test error, which component represents the error due to the noise in the training data?
- Bias
- Both Bias and Variance
- Neither Bias nor Variance
- Variance
In the bias-variance decomposition, the variance term captures the error arising from the model's sensitivity to noise and fluctuations in the particular training set it was fit on. Bias, by contrast, is the error introduced by overly simplistic assumptions in the model, and the remaining irreducible error comes from noise in the labels themselves. The expected test error is the sum of these three contributions.
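For reference, under the usual assumptions (a target y = f(x) + ε with zero-mean noise of variance σ², squared-error loss, and expectation taken over training sets), the expected test error at a point x decomposes as:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  \;+\; \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{variance}}
  \;+\; \underbrace{\sigma^2}_{\text{irreducible error}}
```

The variance term measures how much the fitted model changes across different (noisy) training sets.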
What is the primary goal of the Principal Component Analysis (PCA) technique in machine learning?
- Clustering Data
- Finding Anomalies
- Increasing Dimensionality
- Reducing Dimensionality
PCA's primary goal is to reduce dimensionality: it projects the data onto a small set of orthogonal components that capture the most variance, making subsequent analysis and modeling more efficient.
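A minimal illustration with scikit-learn, using random data as a stand-in for a real dataset:

```python
# Minimal sketch: reducing a 10-dimensional dataset to 2 principal components.
# The random data is a placeholder for real features.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                # (100, 2)
print(pca.explained_variance_ratio_)  # share of variance kept by each component
```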
To prevent a model from becoming too complex and overfitting the training data, ________ techniques are often applied.
- Regularization
- Optimization
- Stochastic Gradient Descent
- Batch Normalization
Regularization techniques, such as L1 and L2 penalties, add a penalty term to the loss function to discourage overly complex models, helping prevent overfitting and improving generalization.
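A short sketch contrasting plain linear regression with Ridge (L2) regularization in scikit-learn; the data is synthetic and the penalty strength is an arbitrary illustrative value:

```python
# Minimal sketch: Ridge adds an L2 penalty on the coefficients, shrinking them
# toward zero and discouraging overly complex fits. The data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))                  # many features for only 50 samples
y = X[:, 0] * 3.0 + rng.normal(scale=0.5, size=50)

plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)            # alpha controls the penalty strength

print("unregularized coefficient norm:", np.linalg.norm(plain.coef_))
print("ridge coefficient norm:        ", np.linalg.norm(ridge.coef_))
```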
Game-playing agents, like those used in board games or video games, often use ________ learning to optimize their strategies.
- Reinforcement
- Semi-supervised
- Supervised
- Unsupervised
Game-playing agents frequently employ reinforcement learning. This approach involves learning by trial and error, where agents receive feedback (rewards) based on their actions, helping them optimize their strategies over time.
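As a toy illustration of this trial-and-error loop (the "game" here is a made-up 5-state corridor where pressing right at the far end earns a reward of 1), tabular Q-learning updates a value table from the feedback it receives:

```python
# Minimal sketch: tabular Q-learning on a made-up 5-state corridor game.
# Pressing "right" in the last state earns a reward of 1; everything else is 0.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))   # value table the agent learns
alpha, gamma = 0.1, 0.9               # learning rate and discount factor
rng = np.random.default_rng(0)

for _ in range(500):                  # episodes
    s = 0
    for _ in range(100):              # cap episode length
        a = int(rng.integers(n_actions))  # explore at random (Q-learning is off-policy)
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        reward = 1.0 if (a == 1 and s == n_states - 1) else 0.0
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value
        target = reward if reward > 0 else reward + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
        if reward > 0:                # goal reached; end the episode
            break

print(Q)                   # "move right" ends up with the higher value in every state
print(Q.argmax(axis=1))    # learned greedy policy: always move right
```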
In a fraud detection system, you have data with numerous features. You suspect that not all features are relevant, and some may even be redundant. Before feeding the data into a classifier, you want to reduce its dimensionality without losing critical information. Which technique would be apt for this?
- Principal Component Analysis (PCA)
- Support Vector Machines (SVM)
- Breadth-First Search
- Quick Sort
Principal Component Analysis (PCA) is the appropriate technique: it projects the correlated, possibly redundant features onto a smaller set of uncorrelated principal components, reducing dimensionality while retaining most of the variance, and thus most of the critical information. In a fraud detection system this can cut noise and training time before classification. SVM is a classifier, and Breadth-First Search and Quick Sort are not dimensionality-reduction techniques at all.
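A rough sketch of how this might be wired up, assuming numeric features and 0/1 labels; the synthetic, deliberately redundant data and the 95% variance threshold are illustrative choices:

```python
# Minimal sketch: PCA for dimensionality reduction ahead of a classifier,
# keeping enough components to explain ~95% of the variance. Data is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))                              # many features
X[:, 20:] = X[:, :20] + 0.05 * rng.normal(size=(1000, 20))   # redundant copies
y = (X[:, 0] + X[:, 1] > 0).astype(int)                      # stand-in fraud labels

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),   # keep components explaining 95% of the variance
    LogisticRegression(),
)
clf.fit(X, y)
print("components kept:", clf.named_steps["pca"].n_components_)
print("training accuracy:", clf.score(X, y))
```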
How is NLP primarily used in healthcare?
- Identifying Medical Trends
- Patient Entertainment
- Managing Hospital Inventory
- Extracting Medical Information
NLP is primarily used in healthcare to extract structured medical information, such as diagnoses, medications, and symptoms, from unstructured clinical notes, supporting decision-making and research.
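As a very rough sketch of the idea (real systems rely on dedicated clinical models and ontologies; here spaCy's general-purpose English model, `en_core_web_sm`, stands in, and the note text is made up):

```python
# Minimal sketch: pulling structured entities out of an unstructured clinical note.
# spaCy's general English model is a stand-in for a dedicated clinical NLP model.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

note = (
    "Patient John Doe, seen on 12 March 2024, reports chest pain. "
    "Prescribed 75 mg aspirin daily; follow-up in two weeks."
)

doc = nlp(note)
for ent in doc.ents:
    print(ent.text, "->", ent.label_)  # e.g. names, dates, quantities in the note
```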