The process of transforming skewed data into a more Gaussian-like distribution is known as _______.

  • Normalization
  • Standardization
  • Imputation
  • Resampling
Among the listed options, "Normalization" best fits this description: in statistics, normalizing a skewed variable means reshaping its distribution toward a Gaussian (normal) form, typically with a log, square-root, or Box-Cox/Yeo-Johnson power transform. Standardization, by contrast, only shifts and rescales the data to a mean of 0 and a standard deviation of 1; it changes the scale of the distribution, not its shape.
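
A minimal sketch of such a normalizing transform, assuming scikit-learn and SciPy are available (the synthetic data is purely illustrative):

```python
import numpy as np
from scipy.stats import skew
from sklearn.preprocessing import PowerTransformer

# Synthetic right-skewed (log-normal) data, purely illustrative
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000).reshape(-1, 1)

# A Yeo-Johnson power transform reshapes the distribution toward Gaussian
x_gaussian = PowerTransformer(method="yeo-johnson").fit_transform(x)

print("skew before:", skew(x.ravel()))           # strongly positive
print("skew after: ", skew(x_gaussian.ravel()))  # close to 0
```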

Which method involves filling missing values in a dataset using the column's average?

  • Min-Max Scaling
  • Imputation with Mean
  • Standardization
  • Principal Component Analysis
Imputation with Mean is a common technique in data science for filling missing values by replacing them with the mean of the respective column. It keeps every row in the dataset and preserves the column's central tendency, although it does reduce the column's variance.
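
A minimal pandas sketch (the column name and values are made up for illustration):

```python
import pandas as pd

# Toy DataFrame with a missing value; names and numbers are illustrative
df = pd.DataFrame({"income": [48000, 52000, None, 61000]})

# Replace missing entries with the column mean
df["income"] = df["income"].fillna(df["income"].mean())
print(df)
```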

In the context of data warehousing, which process is responsible for periodically loading fresh data into the data warehouse?

  • Data Extraction
  • Data Transformation
  • Data Loading
  • Data Integration
Data Loading is the step responsible for periodically loading fresh data into the data warehouse: once data has been extracted from source systems and transformed into the required format, the load step writes it into the warehouse tables for analysis and reporting. Data Extraction, Transformation, and Integration precede it in the pipeline but do not themselves place data into the warehouse.
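
A rough sketch of just the load step, assuming pandas and SQLAlchemy are available; the connection string, table name, and records are invented for illustration, and a real warehouse would use its own driver and DSN:

```python
import pandas as pd
from sqlalchemy import create_engine

# Assume extraction and transformation have already produced this frame
transformed = pd.DataFrame(
    {"order_id": [1, 2], "revenue": [120.0, 75.5], "load_date": ["2024-01-01"] * 2}
)

# Hypothetical warehouse connection (SQLite used here only as a stand-in)
engine = create_engine("sqlite:///warehouse.db")

# Load: append the fresh batch into the warehouse fact table
transformed.to_sql("fact_orders", engine, if_exists="append", index=False)
```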

What is the primary purpose of using activation functions in neural networks?

  • To add complexity to the model
  • To control the learning rate
  • To introduce non-linearity in the model
  • To speed up the training process
The primary purpose of activation functions in neural networks is to introduce non-linearity into the model. Without non-linearity, neural networks would reduce to linear regression models, limiting their ability to learn complex patterns in data. Activation functions enable neural networks to approximate complex functions and make them suitable for a wide range of tasks.
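
A small NumPy illustration of why non-linearity matters: stacking linear layers without an activation collapses to a single linear map, while inserting a ReLU breaks that equivalence (shapes and values are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                       # a small batch of inputs
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 2))

# Two linear layers with no activation are equivalent to one linear layer
two_linear = x @ W1 @ W2
one_linear = x @ (W1 @ W2)
print(np.allclose(two_linear, one_linear))        # True: no extra expressive power

# Inserting a ReLU between them introduces non-linearity
relu = lambda z: np.maximum(z, 0.0)
nonlinear = relu(x @ W1) @ W2                     # no single matrix reproduces this in general
```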

In the context of AI ethics, why is the "right to explanation" becoming increasingly important?

  • It ensures AI algorithms remain proprietary
  • It promotes transparency in AI decision-making
  • It limits the use of AI in sensitive applications
  • It reduces the complexity of AI algorithms
The "right to explanation" is important as it promotes transparency in AI decision-making. In ethical AI, users should have insight into how AI algorithms arrive at their decisions. This transparency is vital to prevent bias, discrimination, and unethical decision-making, making it a critical aspect of AI ethics.

A common method to combat the vanishing gradient problem in RNNs is to use _______.

  • Gradient boosting
  • Long Short-Term Memory (LSTM)
  • Principal Component Analysis
  • K-means clustering
To combat the vanishing gradient problem in RNNs, a common approach is to use Long Short-Term Memory (LSTM) units. Their gating mechanisms (input, forget, and output gates acting on a persistent cell state) are designed to let gradients flow across many time steps without vanishing.
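
A minimal PyTorch sketch of an LSTM layer in place of a vanilla recurrent cell, assuming torch is installed; the batch, sequence, and feature sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Arbitrary sizes: a batch of 8 sequences, 20 time steps, 16 features each
x = torch.randn(8, 20, 16)

# The LSTM's gates and cell state are what help gradients survive long sequences
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([8, 20, 32])
print(h_n.shape)     # torch.Size([1, 8, 32])
```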

Which term refers to the process of transforming data to have a mean of 0 and a standard deviation of 1?

  • Outlier Detection
  • Data Imputation
  • Standardization
  • Feature Engineering
Standardization is the process of transforming data to have a mean of 0 and a standard deviation of 1. This puts features on a comparable scale, which makes the data easier to interpret and better suited to many machine learning algorithms.
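
For example, with scikit-learn's StandardScaler, or equivalently by subtracting the mean and dividing by the standard deviation (the values below are made up):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[50.0], [60.0], [55.0], [70.0]])   # illustrative values

scaled = StandardScaler().fit_transform(X)
print(scaled.mean(axis=0))   # approximately 0
print(scaled.std(axis=0))    # approximately 1

# Equivalent by hand: (X - mean) / std
manual = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(scaled, manual))               # True
```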

A company is transitioning from a monolithic system to microservices. They need a database that can ensure strong transactional guarantees. What kind of database system would be suitable?

  • NoSQL Database
  • NewSQL Database
  • Columnar Database
  • Time-Series Database
NewSQL databases like Google Spanner are designed to combine the scalability of NoSQL databases with strong transactional guarantees, making them suitable for microservices transitioning from monolithic systems.
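
To illustrate the transactional side, the sketch below uses Python's built-in sqlite3 purely as a stand-in for showing an atomic, all-or-nothing transaction; a NewSQL system such as Spanner or CockroachDB exposes the same semantics through its own SQL client, and the table and values here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for a real database endpoint
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0), (2, 50.0)")
conn.commit()

# Transfer funds atomically: either both updates commit or neither does
try:
    with conn:  # the context manager commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    pass  # on failure the rollback leaves both balances unchanged

print(conn.execute("SELECT * FROM accounts").fetchall())
```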

In computer vision, what process involves converting an image into an array of pixel values?

  • Segmentation
  • Feature Extraction
  • Pre-processing
  • Quantization
Pre-processing in computer vision typically includes steps like resizing, filtering, and transforming an image. It's during this phase that an image is converted into an array of pixel values, making it ready for subsequent analysis and feature extraction.
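
A small sketch with Pillow and NumPy, assuming both are installed; the image is generated in memory rather than loaded from disk just to keep the example self-contained:

```python
import numpy as np
from PIL import Image

# Create a tiny RGB image in memory (a file would normally be Image.open(path))
img = Image.new("RGB", (4, 4), color=(255, 0, 0))

# Convert the image into an array of pixel values: shape (height, width, channels)
pixels = np.array(img)
print(pixels.shape)   # (4, 4, 3)
print(pixels[0, 0])   # [255   0   0]
```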

You are working on a dataset with income values, and you notice that a majority of incomes are clustered around $50,000, but a few are as high as $1,000,000. What transformation would be best suited to reduce the impact of these high incomes on your analysis?

  • Min-Max Scaling
  • Log Transformation
  • Z-score Standardization
  • Removing Outliers
To reduce the impact of extreme values in income data, a log transformation is often used. It compresses the range of values and makes the distribution more symmetrical. Min-Max scaling and z-score standardization don't address the issue of extreme values, and removing outliers may lead to loss of important information.
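
A quick NumPy sketch with made-up incomes; log1p (log of 1 + x) is used so the transform is defined even when a value is 0:

```python
import numpy as np

incomes = np.array([48_000, 52_000, 50_500, 49_000, 1_000_000], dtype=float)

log_incomes = np.log1p(incomes)   # log(1 + x) compresses the extreme value

print(incomes.max() / incomes.min())          # ~20x spread on the raw scale
print(log_incomes.max() / log_incomes.min())  # ~1.3x spread after the transform
```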