Which RNN architecture is more computationally efficient but might not capture all the intricate patterns that its counterpart can: LSTM or GRU?

  • GRU
  • LSTM
  • Both capture patterns efficiently
  • Neither captures patterns effectively
The GRU (Gated Recurrent Unit) is more computationally efficient than LSTM (Long Short-Term Memory) but may not capture all intricate patterns in data due to its simplified architecture. LSTM is more expressive but computationally demanding.
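The efficiency gap comes from the gate count: an LSTM maintains four gate/candidate transformations per step, a GRU only three. A minimal sketch, assuming PyTorch, that makes the parameter difference visible (the layer sizes are illustrative):

```python
import torch.nn as nn

input_size, hidden_size = 128, 256          # illustrative sizes
lstm = nn.LSTM(input_size, hidden_size)     # 4 transforms: input, forget, cell, output
gru = nn.GRU(input_size, hidden_size)       # 3 transforms: reset, update, candidate

count = lambda m: sum(p.numel() for p in m.parameters())
print("LSTM parameters:", count(lstm))      # roughly 4/3 as many as the GRU
print("GRU parameters: ", count(gru))
```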

Which term describes a model that has been trained too closely to the training data and may not perform well on new, unseen data?

  • Bias
  • Generalization
  • Overfitting
  • Underfitting
Overfitting is a common issue in machine learning where a model becomes too specialized to the training data and fails to generalize well to new data. It's essential to strike a balance between fitting the training data and generalizing to unseen data.
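A quick way to see overfitting is to fit models of increasing capacity to a small noisy sample and compare training error with error on held-out points. A minimal sketch with NumPy and made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=12)   # noisy training sample
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)                                    # underlying signal

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The high-degree fit typically drives training error toward zero while the held-out error grows, which is the hallmark of overfitting.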

One of the hyperparameters in a Random Forest algorithm that determines the maximum depth of the trees is called ______.

  • Entropy
  • Gini Index
  • LeafNodes
  • MaxDepth
The hyperparameter controlling the maximum depth of trees in a Random Forest is typically called "MaxDepth." It determines how deep each decision tree can grow in the ensemble.
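In scikit-learn the same hyperparameter is spelled `max_depth`. A minimal sketch on synthetic data showing how it caps tree growth:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

# A small max_depth keeps every tree shallow; None lets trees grow until leaves are pure.
for depth in (2, None):
    forest = RandomForestClassifier(n_estimators=100, max_depth=depth, random_state=0)
    forest.fit(X, y)
    print(f"max_depth={depth}: training accuracy {forest.score(X, y):.3f}")
```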

In reinforcement learning, ________ focuses on trying new actions, while ________ focuses on leveraging known rewards.

  • Exploration Policy
  • Exploitation Policy
  • Random Policy
  • Deterministic Policy
In reinforcement learning, exploration policy focuses on trying new actions to learn more about the environment. Exploitation policy, on the other hand, leverages known rewards to make optimal decisions based on what's already learned.
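A common way to balance the two is an epsilon-greedy policy: with probability epsilon the agent explores a random action, otherwise it exploits its current value estimates. A minimal sketch with made-up Q-values:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best-known action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # exploration
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploitation

q_values = [0.2, 0.8, 0.5]      # hypothetical action-value estimates for one state
print(epsilon_greedy(q_values))
```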

In reinforcement learning, what do we call the function that determines the value of taking an action in a particular state?

  • Action Evaluator
  • Value Function
  • Policy Function
  • Reward Function
The value function in reinforcement learning estimates expected cumulative reward and guides decision-making; in its action-value form, usually written Q(s, a), it gives the value of taking a particular action in a particular state.

Which type of learning is characterized by an agent interacting with an environment and learning to make decisions based on rewards and penalties?

  • Supervised Learning
  • Reinforcement Learning
  • Unsupervised Learning
  • Semi-Supervised Learning
Reinforcement learning is the type of learning where an agent learns through interaction with an environment by receiving rewards and penalties.

Why might a deep learning practitioner use regularization techniques on a model?

  • To make the model larger
  • To simplify the model
  • To prevent overfitting
  • To increase training speed
Deep learning practitioners use regularization techniques to prevent overfitting. Overfitting occurs when a model learns noise in the training data; regularization (e.g., weight decay or dropout) constrains the model so it generalizes better and stays robust on new data.
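A minimal sketch, assuming PyTorch, of two common regularizers: dropout inside the network and an L2 penalty (weight decay) applied through the optimizer. The layer sizes are illustrative:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),        # randomly zeroes activations during training
    nn.Linear(128, 10),
)
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)  # L2 penalty on weights
```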

Which NLP technique is often employed to extract structured information from unstructured medical notes?

  • Sentiment Analysis
  • Named Entity Recognition
  • Part-of-Speech Tagging
  • Machine Translation
Named Entity Recognition is an NLP technique used to identify and categorize entities (e.g., drugs, diseases) within unstructured medical text.
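A minimal sketch with spaCy. It uses the general-purpose `en_core_web_sm` pipeline (which must be downloaded separately); clinical deployments would typically swap in a model trained on medical text, but the extraction loop looks the same:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
note = "Patient was prescribed 500 mg of amoxicillin on 12 March for a sinus infection."
doc = nlp(note)
for ent in doc.ents:
    # General models surface dates and quantities; clinical models add drug/disease labels.
    print(ent.text, "->", ent.label_)
```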

Which regression technique uses the logistic function (or sigmoid function) to squeeze the output between 0 and 1?

  • Linear Regression
  • Logistic Regression
  • Poisson Regression
  • Ridge Regression
Logistic Regression uses the logistic function (sigmoid function) to model the probability of a binary outcome. This function ensures that the output is constrained between 0 and 1, making it suitable for classification tasks.
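A minimal sketch of the sigmoid and how a logistic-regression prediction uses it (the weights here are made up):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([1.5, -2.0]), 0.3      # illustrative learned weights and bias
x = np.array([0.8, 0.4])               # one input example
probability = sigmoid(w @ x + b)       # interpreted as P(y = 1 | x)
print(probability)
```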

In the context of Q-learning, what does the 'Q' stand for?

  • Quality
  • Quantity
  • Question
  • Quotient
In Q-learning, the 'Q' stands for Quality, representing the quality or expected return of taking a specific action in a given state.
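Q(s, a) is exactly that estimated quality. A minimal sketch of one tabular Q-learning update, with illustrative states, rewards, and hyperparameters:

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))            # table of action-quality estimates
alpha, gamma = 0.1, 0.99                       # learning rate and discount factor

state, action, reward, next_state = 0, 1, 1.0, 2           # one observed transition
td_target = reward + gamma * Q[next_state].max()           # best quality reachable next
Q[state, action] += alpha * (td_target - Q[state, action])
print(Q[state, action])
```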

Time series forecasting is crucial in fields like finance and meteorology because it helps in predicting stock prices and ________ respectively.

  • Temperature
  • Rainfall
  • Crop yields
  • Wind speed
In this pairing, time series forecasting is used to predict stock prices in finance and rainfall in meteorology, since both are sequences of observations that evolve over time.
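A minimal baseline forecast, with made-up daily rainfall values: predict the next observation as the mean of the most recent window (more serious models such as ARIMA or LSTMs build on the same idea of learning from past values):

```python
import numpy as np

rainfall_mm = np.array([2.1, 0.0, 5.4, 3.3, 1.2, 0.0, 4.8])   # made-up daily rainfall (mm)
window = 3
forecast = rainfall_mm[-window:].mean()        # moving-average baseline for the next day
print(f"next-day forecast: {forecast:.2f} mm")
```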

Experience replay, often used in DQNs, helps in stabilizing the learning by doing what?

  • Reducing Correlation between Data
  • Speeding up convergence
  • Improving Exploration
  • Saving Memory Space
Experience replay in DQNs reduces the correlation between consecutive data samples, which stabilizes learning by providing uncorrelated transitions for training.
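A minimal replay-buffer sketch: transitions are stored as they occur, then random minibatches are sampled for training so consecutive, highly correlated steps are not learned from back to back:

```python
import random
from collections import deque

buffer = deque(maxlen=10_000)                  # oldest transitions are evicted automatically

def store(state, action, reward, next_state, done):
    buffer.append((state, action, reward, next_state, done))

def sample(batch_size=32):
    return random.sample(buffer, batch_size)   # uncorrelated minibatch for a training step
```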