Which NLP technique is often employed to extract structured information from unstructured medical notes?
- Sentiment Analysis
- Named Entity Recognition
- Part-of-Speech Tagging
- Machine Translation
Named Entity Recognition is an NLP technique used to identify and categorize entities (e.g., drugs, diseases) within unstructured medical text.
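As a concrete illustration, here is a minimal sketch using spaCy's general-purpose English model. It only demonstrates the mechanics of entity extraction; a biomedical model (for example, scispaCy) would be needed to reliably label drugs and diseases, and the sample note is invented.

```python
# Minimal NER sketch using spaCy's general-purpose English model.
# A biomedical model (e.g. scispaCy) would be needed to tag drugs and
# diseases specifically; this only illustrates the mechanics.
import spacy

nlp = spacy.load("en_core_web_sm")
note = "Patient was prescribed 40 mg of atorvastatin for hyperlipidemia on March 3."
doc = nlp(note)

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "40 mg" QUANTITY, "March 3" DATE
```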
Why might a deep learning practitioner use regularization techniques on a model?
- To make the model larger
- To simplify the model
- To prevent overfitting
- To increase training speed
Deep learning practitioners use regularization techniques to 'prevent overfitting.' Overfitting occurs when a model learns noise in the training data; regularization constrains the model so that it generalizes better and stays robust on new data.
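For instance, here is a brief PyTorch sketch of two common regularizers, dropout and L2 weight decay; the layer sizes and the weight_decay value are illustrative rather than tuned.

```python
# Two common regularizers sketched in PyTorch: a dropout layer inside the
# model, and an L2 penalty applied via the optimizer's weight_decay.
# All sizes and hyperparameters here are illustrative, not tuned.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes activations during training
    nn.Linear(128, 10),
)

# weight_decay adds an L2 penalty on the weights during each update
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```

Both mechanisms discourage the network from memorizing noise: dropout by forcing redundancy across units, weight decay by keeping weights small.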
Which type of learning is characterized by an agent interacting with an environment and learning to make decisions based on rewards and penalties?
- Supervised Learning
- Reinforcement Learning
- Unsupervised Learning
- Semi-Supervised Learning
Reinforcement learning is the type of learning where an agent learns through interaction with an environment by receiving rewards and penalties.
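A toy Q-learning loop makes the reward-driven interaction concrete; the 5-state chain environment and the alpha, gamma, and epsilon values below are invented for illustration.

```python
# Toy Q-learning sketch: the agent walks a 1-D chain of 5 states, earns a
# reward of +1 for reaching the right end, and learns from that feedback.
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # epsilon-greedy action selection: explore sometimes, else exploit
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # temporal-difference update toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
```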
The weights and biases in a neural network are adjusted during the ________ process to minimize the loss.
- Forward Propagation
- Backpropagation
- Initialization
- Regularization
Weights and biases in a neural network are adjusted during the 'Backpropagation' process to minimize the loss by propagating the error backward through the network.
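To make the mechanics concrete, here is one hand-written backpropagation step for a single linear neuron with squared-error loss in NumPy; the data point and learning rate are made up.

```python
# One backpropagation step for a single linear neuron with squared-error
# loss, written out by hand in NumPy. Data and learning rate are invented.
import numpy as np

x, y_true = np.array([1.0, 2.0]), 1.5   # one training example
w, b, lr = np.array([0.1, -0.2]), 0.0, 0.05

y_pred = w @ x + b                       # forward pass
loss = (y_pred - y_true) ** 2

# backward pass: the chain rule gives the gradients of the loss
grad_y = 2 * (y_pred - y_true)
grad_w, grad_b = grad_y * x, grad_y

w -= lr * grad_w                         # adjust weights and bias in the
b -= lr * grad_b                         # direction that reduces the loss
```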
In the context of deep learning, what is the primary use case of autoencoders?
- Image Classification
- Anomaly Detection
- Text Generation
- Reinforcement Learning
The primary use case of autoencoders in deep learning is anomaly detection. They learn the normal patterns in data and flag deviations from those patterns, making them useful in applications such as fraud detection and fault diagnosis.
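A minimal sketch of this idea in PyTorch, assuming a small fully connected autoencoder, random stand-in data, and an arbitrary error threshold:

```python
# Anomaly-detection sketch: train an autoencoder to reconstruct "normal"
# data, then flag inputs whose reconstruction error is unusually high.
# Dimensions, data, and the threshold are illustrative only.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(20, 8), nn.ReLU(),   # encoder compresses to 8 dims
    nn.Linear(8, 20),              # decoder reconstructs the input
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal_data = torch.randn(256, 20)       # stand-in for "normal" samples
for _ in range(100):
    recon = autoencoder(normal_data)
    loss = loss_fn(recon, normal_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def is_anomaly(x, threshold=1.0):
    # high reconstruction error suggests the sample deviates from the
    # patterns the autoencoder learned as "normal"
    error = ((autoencoder(x) - x) ** 2).mean().item()
    return error > threshold
```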
When models are too simple and cannot capture the underlying trend of the data, it is termed ________.
- Misfitting
- Overfitting
- Simplification
- Underfitting
When a model is too simple to capture the underlying patterns in the data, it is referred to as "underfitting." Underfit models have high bias and low variance, making them ineffective for predictions.
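A quick scikit-learn sketch shows the effect: a straight line fit to clearly quadratic data scores poorly because the model cannot capture the curve. The data here are synthetic.

```python
# Underfitting sketch: a linear model fit to quadratic data has high bias
# and a low R^2 score because it is too simple for the underlying trend.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = X.ravel() ** 2 + np.random.normal(0, 0.5, 100)   # quadratic + noise

line = LinearRegression().fit(X, y)
print(line.score(X, y))   # low R^2: the straight line misses the curve
```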
You are developing a recommendation system for a music app. While the system's bias is low, it tends to offer very different song recommendations for slight variations in user input. This is an indication of which issue in the bias-variance trade-off?
- High Bias
- High Variance
- Overfitting
- Underfitting
This scenario indicates high variance in the bias-variance trade-off. Low bias combined with predictions that swing wildly for slight input changes means the model is overly sensitive to its inputs, fitting noise in the data rather than generalizing well to new user preferences.
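The contrast can be demonstrated with scikit-learn decision trees: an unconstrained tree (high variance) may answer two nearly identical inputs very differently, while a depth-limited tree stays stable. The data and query points are synthetic.

```python
# High-variance sketch: an unconstrained decision tree fit to noisy data
# can give noticeably different predictions for nearly identical inputs,
# while a depth-limited (lower-variance) tree gives nearly the same answer.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.3, 200)   # signal + noise

deep = DecisionTreeRegressor().fit(X, y)           # fits the noise
shallow = DecisionTreeRegressor(max_depth=3).fit(X, y)

a, b = np.array([[5.00]]), np.array([[5.01]])      # slight input variation
print(deep.predict(a), deep.predict(b))            # may differ sharply
print(shallow.predict(a), shallow.predict(b))      # nearly identical
```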
Which process involves transforming and creating new variables to improve a machine learning model's predictive performance?
- Data preprocessing
- Feature engineering
- Hyperparameter tuning
- Model training
Feature engineering is the process of transforming and creating new variables based on the existing data to enhance a model's predictive performance. This can involve scaling, encoding, or creating new features from existing ones.
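A small pandas sketch of typical steps, using hypothetical column names and data: scaling a numeric column, deriving a ratio feature, and one-hot encoding a categorical one.

```python
# Feature-engineering sketch with pandas. The columns and values are
# hypothetical; the steps (scaling, deriving, encoding) are the point.
import pandas as pd

df = pd.DataFrame({
    "income": [40000, 85000, 62000],
    "debt": [12000, 30000, 5000],
    "region": ["north", "south", "north"],
})

# scale a numeric column to zero mean and unit variance
df["income_scaled"] = (df["income"] - df["income"].mean()) / df["income"].std()
df["debt_to_income"] = df["debt"] / df["income"]      # new derived feature
df = pd.get_dummies(df, columns=["region"])           # one-hot encoding
```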
A researcher is working on a medical imaging problem with a limited amount of labeled data. To improve the performance of the deep learning model, the researcher decides to use a model pre-trained on a large generic image dataset. This approach is an example of what?
- Transfer Learning
- Reinforcement Learning
- Ensemble Learning
- Supervised Learning
Transfer learning is the practice of using a pre-trained model as a starting point to solve a new problem. In this case, it leverages prior knowledge from generic images to enhance medical image analysis.
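A common recipe, sketched with torchvision (assuming a recent version that exposes the ResNet18_Weights enum) and a hypothetical 3-class task: freeze the pre-trained backbone and train only a new classification head.

```python
# Transfer-learning sketch: start from a ResNet-18 pre-trained on ImageNet,
# freeze its feature extractor, and replace the final layer for a
# hypothetical 3-class medical imaging task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False        # keep pre-trained features fixed

# only the new classification head will be trained on the medical data
model.fc = nn.Linear(model.fc.in_features, 3)
```

Freezing the backbone lets the small labeled medical dataset go entirely toward fitting the new head, which is why this approach works well when labeled data is scarce.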
What is the primary benefit of using transfer learning in deep learning models?
- Improved training time
- Better performance
- Reduced data requirement
- Enhanced model complexity
The primary benefit of transfer learning in deep learning is 'Better performance.' This technique leverages knowledge from pre-trained models, allowing the model to perform well even with limited data and reducing the need for lengthy training.