Policy Gradient Methods often use which of the following to estimate the gradient of the expected reward with respect to the policy parameters?
- Monte Carlo estimation
- Finite difference
- Gradient ascent
- Random sampling
Policy Gradient Methods often use Monte Carlo estimation to estimate the gradient of the expected reward with respect to the policy parameters: trajectories are sampled under the current policy, and the log-policy gradients along each trajectory are weighted by the sampled return and averaged, as in REINFORCE (sketched below).
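A minimal numpy sketch of a REINFORCE-style Monte Carlo gradient estimate for a tabular softmax policy. The trajectory format (parallel lists of states, actions, and rewards) and the tabular parameterization are assumptions made for illustration.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def grad_log_softmax(theta, state, action):
    # Gradient of log pi(action | state) for a tabular softmax policy:
    # (one_hot(action) - pi) placed in the row for `state`.
    probs = softmax(theta[state])
    grad = np.zeros_like(theta)
    grad[state] = -probs
    grad[state, action] += 1.0
    return grad

def reinforce_gradient(theta, trajectories):
    """Monte Carlo estimate: average return-weighted log-policy gradients."""
    total = np.zeros_like(theta)
    for states, actions, rewards in trajectories:
        G = sum(rewards)  # undiscounted return, for simplicity
        for s, a in zip(states, actions):
            total += G * grad_log_softmax(theta, s, a)
    return total / len(trajectories)
```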
While t-SNE is excellent for visualization, it can sometimes produce misleading results due to which of its properties?
- Crowding Problem
- Curse of Dimensionality
- Convergence Issues
- Data Scaling
t-SNE can produce misleading results due to the "Crowding Problem": there is not enough room in a two- or three-dimensional map to preserve all pairwise distances from a higher-dimensional space, so points tend to crush together. t-SNE's heavy-tailed Student-t similarity in the embedding space mitigates this, but cluster sizes and inter-cluster distances in the resulting plot can still be misleading.
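One practical consequence is that the apparent structure depends heavily on hyperparameters such as perplexity, so embeddings should not be over-interpreted. A minimal scikit-learn sketch (the dataset and perplexity values are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# The same data embedded at different perplexities can suggest quite
# different cluster structure; compare the resulting scatter plots.
for perplexity in (5, 30, 50):
    emb = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(X)
    print(f"perplexity={perplexity}: embedding shape {emb.shape}")
```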
In the context of autoencoders, what is the significance of the "bottleneck" layer?
- The bottleneck layer reduces model complexity
- The bottleneck layer enhances training speed
- The bottleneck layer compresses input data
- The bottleneck layer adds noise to data
The "bottleneck" layer in an autoencoder is the compression layer: it forces the input through a lower-dimensional representation, so the network must learn to retain only the most informative features. This compact code is what makes autoencoders useful for feature extraction and denoising.
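A minimal PyTorch sketch of the idea; the layer sizes (784 → 128 → 8) are illustrative assumptions, with the 8-unit layer acting as the bottleneck:

```python
import torch
from torch import nn

# Encoder compresses 784-dimensional inputs down to an 8-dimensional code;
# the decoder reconstructs the input from that code.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 8))
decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 784))

x = torch.randn(16, 784)                 # a batch of dummy inputs
code = encoder(x)                        # bottleneck representation
recon = decoder(code)                    # reconstruction from the code
loss = nn.functional.mse_loss(recon, x)  # reconstruction error to minimize
```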
The ________ gate in an LSTM controls which parts of the cell state should be updated.
- Update
- Forget
- Input
- Output
In an LSTM (Long Short-Term Memory), the input gate regulates which parts of the cell state are updated with new candidate values computed from the current input and the previous hidden state. (The term "update gate" belongs to the GRU, a related gated architecture.)
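For reference, the standard LSTM update equations (σ is the logistic sigmoid, ⊙ the elementwise product, and W, U, b learned parameters):

```latex
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)          % input gate
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)          % forget gate
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)   % candidate values
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t    % cell-state update
```

Because the input gate i_t multiplies the candidate values elementwise, it decides how much of each component enters the updated cell state c_t.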
The ability of an individual or a group to understand and trust the model's decisions is often tied to the model's ________.
- Explainability
- Complexity
- Accuracy
- Processing speed
Model explainability is essential for understanding and trusting a model's decisions, especially in critical applications like healthcare or finance, where transparency is key for decision-making and accountability.
Which machine learning algorithm is commonly used for time series forecasting due to its ability to remember long sequences?
- Decision Trees
- Recurrent Neural Networks (RNNs)
- Support Vector Machines (SVMs)
- K-Means Clustering
Recurrent Neural Networks (RNNs) are favored for time series forecasting because they maintain a hidden state that carries information across time steps, allowing them to model sequential dependencies. In practice, gated variants such as LSTMs are preferred when the relevant dependencies span long sequences.
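A minimal PyTorch sketch of a one-step-ahead RNN forecaster; the sine-wave data, window length, and hidden size are illustrative assumptions:

```python
import torch
from torch import nn

rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)  # maps the final hidden state to a forecast

t = torch.linspace(0, 20, 200)
series = torch.sin(t)                     # toy univariate time series
window = series[:50].reshape(1, 50, 1)    # (batch, seq_len, features)

out, _ = rnn(window)                      # out: (1, 50, 32)
pred = head(out[:, -1, :])                # predict the next value
loss = nn.functional.mse_loss(pred, series[50].reshape(1, 1))
```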
Random Forests introduce randomness in two main ways: by bootstrapping the data and by selecting a random subset of ______ for every split.
- Data Points
- Features
- Leaves
- Trees
Random Forests introduce randomness by selecting a random subset of "Features" for every split in each tree. This decorrelates the trees, and the resulting diverse ensemble improves overall performance and reduces the risk of overfitting.
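In scikit-learn this feature subsampling is controlled by the max_features parameter of RandomForestClassifier; a brief sketch (the dataset and hyperparameters are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# max_features="sqrt" means roughly sqrt(n_features) candidate features
# are considered at each split, which decorrelates the individual trees.
clf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```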
When dealing with high-dimensional data, which of the two algorithms (k-NN or Naive Bayes) is likely to be more efficient in terms of computational time?
- Both Equally Efficient
- It depends on the dataset size
- Naive Bayes
- k-NN
Naive Bayes is generally more efficient on high-dimensional data: training reduces to estimating per-feature statistics, and prediction is a single pass over the features. k-NN, by contrast, must compute a distance from the query to every training point across all dimensions at prediction time, which scales poorly as dimensionality grows.
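A rough way to observe this is to time predictions on synthetic high-dimensional data. Exact numbers depend on hardware and library versions, so treat this sketch as illustrative:

```python
import time
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2000))        # 5000 points, 2000 dimensions
y = rng.integers(0, 2, size=5000)

# Compare prediction time on the same 200 queries for both models.
for model in (GaussianNB(), KNeighborsClassifier(n_neighbors=5)):
    model.fit(X, y)
    start = time.perf_counter()
    model.predict(X[:200])
    print(type(model).__name__, time.perf_counter() - start)
```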
Why do traditional RNNs face difficulties in learning long-term dependencies?
- Vanishing Gradient Problem
- Overfitting
- Underfitting
- Activation Function Selection
Traditional RNNs face difficulties due to the "Vanishing Gradient Problem." During backpropagation, gradients can become extremely small, making it challenging to update weights for long sequences. This issue inhibits the model's ability to learn long-term dependencies effectively, a critical limitation in sequence data tasks.
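The effect can be demonstrated numerically: backpropagating through many tanh steps multiplies the gradient by a per-step Jacobian, and its norm tends to shrink exponentially with sequence length. A numpy sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.4, size=(16, 16))  # recurrent weights

# Forward pass: run a toy tanh RNN and store the hidden states.
hs = [np.zeros(16)]
for _ in range(60):
    hs.append(np.tanh(W @ hs[-1] + rng.normal(size=16)))

# Backward pass: each step multiplies by the Jacobian diag(1 - h_t^2) W,
# so the gradient norm decays as we move toward earlier time steps.
grad = np.ones(16)
for t in range(60, 0, -1):
    grad = W.T @ (grad * (1 - hs[t] ** 2))
    if t % 15 == 0:
        print(f"step {t}: gradient norm {np.linalg.norm(grad):.2e}")
```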
Ridge and Lasso are techniques used for ________ to prevent overfitting.
- Data Preprocessing
- Feature Engineering
- Hyperparameter Tuning
- Regularization
Ridge and Lasso are both regularization techniques used to prevent overfitting in machine learning. Regularization adds a penalty term to the model's loss function to discourage excessive complexity: Ridge penalizes the sum of squared coefficients (L2), shrinking them toward zero, while Lasso penalizes the sum of absolute coefficients (L1), which can drive some coefficients exactly to zero.
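A brief scikit-learn sketch contrasting the two penalties; the alpha value and synthetic data are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=20, noise=10, random_state=0)

# Lasso's L1 penalty typically zeroes out some coefficients entirely,
# whereas Ridge's L2 penalty only shrinks them.
for model in (Ridge(alpha=1.0), Lasso(alpha=1.0)):
    model.fit(X, y)
    n_zero = (model.coef_ == 0).sum()
    print(type(model).__name__, "zero coefficients:", n_zero)
```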