Which term refers to the ethical principle where AI systems should be transparent about how they make decisions?
- Accountability
- Bias and Fairness
- Transparency
- Predictive Analytics
Transparency is an essential ethical principle in AI: systems should be open about how they reach their decisions. It ensures that users and stakeholders can understand the logic behind AI-generated outcomes and trust the system.
When handling missing data in a dataset, if the data is not missing at random, it's referred to as _______.
- Data Imputation
- Data Normalization
- Data Outlier
- Data Leakage
The standard statistical term for missingness that depends on unobserved values is "Missing Not At Random" (MNAR). Among the options listed, the intended answer is "data leakage": when missingness is systematically related to the target variable rather than random, the missingness pattern itself can leak information about the target and lead to biased or over-optimistic results in data analysis.
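As a quick illustration (using pandas and an entirely made-up toy table), one way to spot non-random missingness is to compare the target rate for rows where a feature is missing versus present:

```python
import pandas as pd

# Hypothetical toy data: "income" has missing values, "target" is the label.
df = pd.DataFrame({
    "income": [52000, None, 61000, None, 45000, None, 70000, 48000],
    "target": [0, 1, 0, 1, 0, 1, 0, 0],
})

# Indicator of missingness for the feature
df["income_missing"] = df["income"].isna().astype(int)

# Compare the target rate for rows where the value is missing vs. present.
# A large gap suggests the data are not missing at random (MNAR).
print(df.groupby("income_missing")["target"].mean())
```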
In RNNs, what term is used to describe the function of retaining information from previous inputs in the sequence?
- Convolution
- Feedback Loop
- Gradient Descent
- Memory Cell (or Hidden State)
In RNNs, the function that retains information from previous inputs in the sequence is typically referred to as the "Memory Cell" or "Hidden State." This element allows RNNs to maintain a form of memory that influences their predictions at each step in the sequence, making them suitable for sequential data processing.
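A minimal NumPy sketch of how a vanilla RNN cell carries a hidden state from step to step; the weight shapes and random values here are illustrative only, not tied to any specific library:

```python
import numpy as np

input_size, hidden_size = 4, 3
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

sequence = rng.normal(size=(5, input_size))  # 5 time steps
h = np.zeros(hidden_size)                    # initial hidden state (the "memory")

for x_t in sequence:
    # Each step mixes the new input with the previous hidden state,
    # so information from earlier inputs persists in h.
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

print(h)  # final hidden state summarizing the whole sequence
```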
You're tasked with deploying a Random Forest model to a production environment where response time is critical. Which of the following considerations is the most important?
- Model accuracy
- Model interpretability
- Model training time
- Model inference time
In a production environment where response time is critical, the most important consideration is the model's inference time. Accuracy and interpretability still matter, but they are secondary to the need for fast predictions. Reducing inference time might involve optimizations such as limiting the number or depth of trees, model compression, efficient hardware, or algorithm selection. Model training time typically occurs offline and is not as crucial for real-time serving.
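As a rough illustration (assuming scikit-learn is available), single-prediction latency can be benchmarked against the number of trees; the dataset and sizes below are purely illustrative:

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data purely for the timing comparison
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

for n_trees in (10, 100, 500):
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X, y)
    start = time.perf_counter()
    model.predict(X[:1])          # latency of a single prediction
    elapsed = time.perf_counter() - start
    print(f"{n_trees} trees: {elapsed * 1000:.2f} ms per prediction")
```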
Which method involves creating interaction terms between variables to capture combined effects in a model?
- Principal Component Analysis (PCA)
- Feature Engineering
- Feature Scaling
- Hypothesis Testing
Feature Engineering involves creating interaction terms or combinations of variables to capture the combined effects of those variables in a predictive model. These engineered features can enhance the model's ability to capture complex relationships in the data. PCA is a dimensionality reduction technique, and the other options are not directly related to creating interaction terms.
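A small, hand-crafted example (using pandas, with made-up column names) of adding an interaction term; libraries such as scikit-learn's `PolynomialFeatures` can generate these automatically:

```python
import pandas as pd

df = pd.DataFrame({
    "price": [10.0, 12.5, 9.0, 15.0],
    "quantity": [3, 1, 4, 2],
})

# The interaction term captures the combined effect of price and quantity
# (roughly, total spend), which neither feature expresses on its own.
df["price_x_quantity"] = df["price"] * df["quantity"]
print(df)
```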
What is the primary goal of tokenization in NLP?
- Removing stop words
- Splitting text into words
- Extracting named entities
- Translating text to other languages
The primary goal of tokenization in NLP is to split text into words or tokens. This process is essential for various NLP tasks such as text analysis, language modeling, and information retrieval. Tokenization helps in breaking down text into meaningful units for analysis.
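A minimal sketch of word-level tokenization using a plain regular expression; production NLP pipelines typically rely on libraries such as NLTK or spaCy with more elaborate rules:

```python
import re

text = "Tokenization splits text into meaningful units, called tokens."

# Split on word characters, dropping punctuation and normalizing case
tokens = re.findall(r"\w+", text.lower())
print(tokens)
# ['tokenization', 'splits', 'text', 'into', 'meaningful', 'units', 'called', 'tokens']
```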
For models with a large number of layers, which technique helps reduce internal covariate shift and accelerate training?
- Stochastic Gradient Descent (SGD) with a small learning rate
- Batch Normalization
- L1 Regularization
- DropConnect
Batch Normalization is a technique used to improve the training of deep neural networks. It addresses the internal covariate shift problem by normalizing the activations of each layer. This helps in accelerating training and allows for the use of higher learning rates without the risk of divergence. It also aids in better gradient flow.
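A short sketch (assuming PyTorch) of where batch normalization typically sits in a deep network, between a linear layer and its activation:

```python
import torch
import torch.nn as nn

# Illustrative feed-forward block with batch normalization
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.BatchNorm1d(128),   # normalizes activations across the batch
    nn.ReLU(),
    nn.Linear(128, 10),
)

x = torch.randn(32, 64)    # batch of 32 examples
out = model(x)
print(out.shape)           # torch.Size([32, 10])
```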
In the context of AI ethics, what is the primary concern of "interpretability"?
- Ensuring AI is always right
- Making AI faster
- Understanding how AI makes decisions
- Controlling the cost of AI deployment
"Interpretability" in AI ethics is about understanding how AI systems make decisions. It's crucial for accountability, transparency, and identifying and addressing potential biases in AI algorithms. AI being right or fast is important but not the primary concern in this context.
You are responsible for ensuring that the data in your company's data warehouse is consistent, reliable, and easily accessible. Recently, there have been complaints about data discrepancies. Which stage in the ETL process should you primarily focus on to resolve these issues?
- Extraction
- Transformation
- Loading
- Data Ingestion
The Transformation stage is where data discrepancies are most often addressed. During transformation, data is cleaned, normalized, and validated to ensure consistency and reliability, making this stage critical for data quality in the warehouse. Extraction involves collecting data from source systems, Loading moves the transformed data into the warehouse, and Data Ingestion is the general process of bringing data into the system.
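A toy transformation step (using pandas, with an invented table and rules) that cleans, normalizes, and validates extracted records before loading:

```python
import pandas as pd

# Invented "extracted" data with casing inconsistencies, a duplicate,
# a string-typed numeric column, and a missing country.
raw = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "country": ["us", "US", "DE", None],
    "amount": ["19.99", "19.99", "42.50", "7.00"],
})

clean = (
    raw.assign(
           country=lambda d: d["country"].str.upper(),  # normalize casing
           amount=lambda d: d["amount"].astype(float),  # enforce numeric type
       )
       .drop_duplicates()                               # remove duplicate rows
       .dropna(subset=["country"])                      # reject invalid records
)
print(clean)
```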
A common method to combat the vanishing gradient problem in RNNs is to use _______.
- Long Short-Term Memory (LSTM)
- Decision Trees
- K-Means Clustering
- Principal Component Analysis
To address the vanishing gradient problem in RNNs, one common technique is to use Long Short-Term Memory (LSTM) networks. LSTMs are a type of RNN whose gating mechanisms preserve and update information over long sequences, allowing them to capture long-term dependencies and making them more effective than traditional RNNs on tasks where data from distant time steps is important.
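A brief sketch (assuming PyTorch) of running a batch of sequences through an LSTM layer in place of a vanilla RNN; the sizes are arbitrary:

```python
import torch
import torch.nn as nn

# LSTM layer: the gated memory cell helps gradients survive long sequences
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(4, 50, 8)          # batch of 4 sequences, 50 time steps each
output, (h_n, c_n) = lstm(x)

print(output.shape)                # torch.Size([4, 50, 16]) per-step hidden states
print(h_n.shape, c_n.shape)        # final hidden and cell states: [1, 4, 16] each
```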