A self-driving car company is trying to detect and classify objects around the car in real-time. The team is considering using a neural network architecture that can capture local patterns and hierarchies in images. Which type of neural network should they primarily focus on?
- Recurrent Neural Network (RNN)
- Convolutional Neural Network (CNN)
- Long Short-Term Memory (LSTM) Network
- Gated Recurrent Unit (GRU) Network
For detecting and classifying objects in images in real time, as self-driving cars must, Convolutional Neural Networks (CNNs) should be the primary choice. CNNs excel at capturing local patterns and building hierarchies of features in images, making them ideal for computer-vision tasks like object detection, which a self-driving car relies on to understand its environment.
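As a rough illustration, here is a minimal CNN sketch in PyTorch; the layer sizes and the 32x32 input are arbitrary assumptions for the example. Small convolutional kernels capture local patterns, and stacking them with pooling builds a hierarchy from low-level edges to higher-level shapes.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal sketch: conv layers learn local patterns; stacking builds a hierarchy."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # local 3x3 receptive field
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample, widen context
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)              # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # e.g. one 32x32 RGB crop
```

Real self-driving pipelines use detection architectures (e.g. YOLO- or SSD-style networks) built on the same convolutional backbone idea.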
Which type of filtering is often used to reduce the amount of noise in an image?
- Median Filtering
- Edge Detection
- Histogram Equalization
- Convolutional Filtering
Median filtering is commonly used to reduce noise in an image. It replaces each pixel value with the median of the values in a local neighborhood, making it effective at removing salt-and-pepper noise while preserving edges and features in the image.
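As a quick sketch using SciPy's `median_filter` (OpenCV's `cv2.medianBlur` behaves similarly), with a synthetic image and made-up noise levels:

```python
import numpy as np
from scipy.ndimage import median_filter

# Toy grayscale image corrupted with salt-and-pepper noise
rng = np.random.default_rng(0)
img = np.full((64, 64), 128, dtype=np.uint8)
noise = rng.random(img.shape)
img[noise < 0.05] = 0      # "pepper" pixels
img[noise > 0.95] = 255    # "salt" pixels

# Each output pixel becomes the median of its 3x3 neighborhood, which
# discards the extreme outlier values while keeping edges sharp.
denoised = median_filter(img, size=3)
```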
After deploying a Gradient Boosting model, you observe that its performance deteriorates after some time. What might be a potential step to address this?
- Re-train the model with additional data
- Increase the learning rate
- Reduce the model complexity
- Regularly update the model with new data
To address performance deterioration in a deployed Gradient Boosting model, regularly update the model with new data. Data drift is common in production, and regular updates keep the model aligned with the changing data distribution. A one-off re-training with additional data may help temporarily, but a recurring update schedule is more sustainable. Increasing the learning rate or reducing model complexity does not address drift, so neither is likely to fix deterioration over time.
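A hedged sketch of what a recurring update job might look like with scikit-learn; `load_recent_data` is a hypothetical placeholder for your own data-access layer, and the 30-day window is an arbitrary choice:

```python
from sklearn.ensemble import GradientBoostingClassifier

def refresh_model(load_recent_data, window_days=30):
    # Retrain on a sliding window of recent labeled data so the model
    # tracks drift in the live distribution.
    X, y = load_recent_data(days=window_days)
    model = GradientBoostingClassifier()
    model.fit(X, y)
    return model  # deploy in place of the stale model
```

Such a job would typically run on a schedule (e.g. weekly) and only promote the new model if it beats the current one on a holdout set.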
To prevent overfitting in neural networks, the _______ technique can be used, which involves dropping out random neurons during training.
- Normalization
- L1 Regularization
- Dropout
- Batch Normalization
The technique used to prevent overfitting in neural networks is called "Dropout." During training, dropout temporarily deactivates a random fraction of neurons at each step, preventing the network from over-relying on specific neurons and improving generalization.
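For instance, in PyTorch a `nn.Dropout` layer can be placed between fully connected layers; the sizes below are arbitrary:

```python
import torch.nn as nn

# nn.Dropout(p=0.5) zeroes each activation with probability 0.5 during
# training (and rescales the survivors), so no single neuron can be
# relied on; at evaluation time dropout is a no-op.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # active only while model.train() is set
    nn.Linear(256, 10),
)
```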
In time series analysis, what is a sequence of data points measured at successive points in time called?
- Time steps
- Data snapshots
- Data vectors
- Time series data
In time series analysis, a sequence of data points measured at successive points in time is called "time series data." This data structure is used to analyze and forecast trends, patterns, and dependencies over time. It's fundamental in fields like finance, economics, and climate science.
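As a minimal illustration with pandas (the values below are made up), a time series is simply values indexed by successive timestamps:

```python
import pandas as pd

# Daily observations indexed by a DatetimeIndex form a time series.
index = pd.date_range("2024-01-01", periods=5, freq="D")
series = pd.Series([101.2, 102.5, 101.9, 103.4, 104.0], index=index)

print(series.diff())  # day-over-day change, a basic time series operation
```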
In the context of neural networks, what does the term "backpropagation" refer to?
- Training a model using historical data
- Forward pass computation
- Adjusting the learning rate
- Updating model weights
"Backpropagation" in neural networks refers to the process of updating the model's weights based on the computed errors during the forward pass. It's a key step in training neural networks and involves minimizing the loss function.
You're building a system that needs to store vast amounts of unstructured data, like user posts, images, and comments. Which type of database would be the best fit for this use case?
- Relational Database
- Document Database
- Graph Database
- Key-Value Store
A document database, like MongoDB, is well-suited for storing unstructured data with variable schemas, making it an ideal choice for use cases involving user posts, images, and comments.
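A short sketch with `pymongo`, assuming a local MongoDB instance (the database, collection, and field names are made up); each post is one flexible JSON-like document, and different posts may carry different fields:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
posts = client["app_db"]["posts"]

# No fixed schema: this document nests comments and references an image.
posts.insert_one({
    "user": "alice",
    "text": "First post!",
    "image_url": "https://cdn.example.com/img/1.png",  # large binaries usually live in object storage
    "comments": [{"user": "bob", "text": "Welcome!"}],
})
```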
Considering the evolution of data privacy, which technology allows computation on encrypted data without decrypting it?
- Blockchain
- Homomorphic Encryption
- Quantum Computing
- Data Masking
Homomorphic Encryption allows computation on encrypted data without the need for decryption. It's a significant advancement in data privacy because it ensures that sensitive data remains encrypted during processing, reducing the risk of data exposure and breaches while still enabling useful computations.
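As a toy, deliberately insecure illustration of the idea, textbook RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. (Tiny primes, no padding; real systems use fully homomorphic schemes via libraries such as Microsoft SEAL.)

```python
# Textbook RSA with toy parameters: for demonstration only, NOT secure.
p, q, e = 61, 53, 17
n = p * q                          # public modulus
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

c = (enc(6) * enc(7)) % n          # computation done on encrypted values
assert dec(c) == 42                # decrypts to 6 * 7
```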
How does transfer learning primarily benefit deep learning models in terms of training time and data requirements?
- Increases training time
- Requires more data
- Decreases training time
- Requires less data
Transfer learning benefits deep learning models by decreasing training time and reducing data requirements. The model starts with knowledge learned on a source task, typically in the form of pre-trained weights, and fine-tunes it for the target task, which is usually much faster and needs far less data than training from scratch.
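For example, a common fine-tuning sketch with torchvision (assuming a recent version with the `weights` API; the 5-class head stands in for an arbitrary target task):

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained weights and freeze the feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; only these weights are trained.
model.fc = nn.Linear(model.fc.in_features, 5)
```

Because only the small head is trained, both the compute budget and the amount of labeled target data needed drop sharply.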
While training a deep neural network for a regression task, the model starts to memorize the training data. What's a suitable approach to address this issue?
- Increase the learning rate
- Add more layers to the network
- Apply dropout regularization
- Decrease the batch size
Memorization of the training data indicates overfitting. Applying dropout regularization is the suitable approach: randomly deactivating neurons during training discourages memorization and improves generalization, as sketched below. Increasing the learning rate can cause convergence issues, adding more layers increases capacity and can worsen overfitting, and decreasing the batch size does not directly address memorization.
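One practical detail worth a sketch: in PyTorch, dropout is only active in training mode, so the `model.train()` / `model.eval()` toggle matters (layer sizes below are arbitrary):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 1))

net.train()                      # dropout on: random units zeroed each step
# ... training loop runs here ...

net.eval()                       # dropout off: deterministic predictions
with torch.no_grad():
    y_hat = net(torch.randn(4, 8))
```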