The removal of outliers can lead to a reduction in the ________ of the data set.

  • Mean
  • Median
  • Mode
  • Variability
The removal of outliers often leads to a reduction in the variability (or variance) of the dataset, since outliers are extreme values that inflate variability.
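A quick sketch of this effect, using made-up data with a single extreme value:

```python
from statistics import pvariance

# Hypothetical data: one extreme outlier inflates the variance.
data = [10, 12, 11, 13, 12, 11, 95]
cleaned = [x for x in data if x < 50]  # drop the outlier

var_before = pvariance(data)
var_after = pvariance(cleaned)
print(var_before, var_after)  # variance shrinks sharply once the outlier is gone
```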

When would you choose a histogram over a kernel density plot for univariate data visualization?

  • When data is categorical
  • When data is continuous
  • When data is discrete
  • When data is skewed
A histogram is preferred over a kernel density plot for discrete data. Kernel density plots give a smoother representation, but they assume an underlying continuous variable and are more suitable for continuous data. A histogram's separate bars match the discrete nature of the data.
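To illustrate, a histogram of discrete data is just one bar per distinct value, with no smoothing bandwidth to choose. A minimal text-based sketch, using hypothetical counts of support calls per day:

```python
from collections import Counter

# Hypothetical discrete data: number of support calls per day.
calls = [2, 3, 3, 1, 2, 4, 3, 2, 2, 5]

# One bar per distinct value -- no smoothing involved.
counts = Counter(calls)
for value in sorted(counts):
    print(f"{value}: {'#' * counts[value]}")
```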

You have a large dataset where removing the outliers would lead to loss of significant data. What method would you recommend for outlier handling?

  • Binning
  • Removal
  • Transformation
If the dataset is large and removing outliers would discard significant data, binning is a suitable method. In binning, outliers are not removed; instead, values are grouped into bins and each value is replaced with a bin summary statistic such as the mean or median, which smooths out extreme values while preserving every record.
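A minimal sketch of smoothing by bin means, on hypothetical data with one extreme value; note that no record is dropped, the outlier is just pulled toward the other values in its bin:

```python
from statistics import mean

# Hypothetical data with one extreme value we don't want to drop.
data = [4, 5, 6, 5, 7, 6, 98]

# Smoothing by bin means: sort, split into equal-frequency bins,
# replace every value in a bin with its bin's mean.
ordered = sorted(data)
n_bins = 2
size = -(-len(ordered) // n_bins)  # ceiling division
bins = [ordered[i:i + size] for i in range(0, len(ordered), size)]
smoothed = [mean(b) for b in bins for _ in b]
print(smoothed)  # the 98 is replaced by its bin's mean; no record is lost
```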

Consider you are dealing with a dataset with zero skewness but high kurtosis. How would this shape the data distribution and affect your analysis?

  • The data distribution would be negatively skewed with a wider spread.
  • The data distribution would be perfectly symmetrical with a narrower spread and potential outliers.
  • The data distribution would be perfectly symmetrical with a wider spread.
  • The data distribution would be positively skewed with a narrower spread.
Zero skewness means the distribution is symmetrical, and high kurtosis means the distribution is leptokurtic with a sharp peak and fatter tails. Therefore, the data distribution will be symmetrical but with a potential for outliers. This may affect the results of statistical tests or models that assume normality, as extreme values could have a disproportionate effect on the results.
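The two shape statistics can be computed directly from standardized moments. A sketch on a hypothetical symmetric, heavy-tailed sample (matching extremes on both sides of a sharp centre), using the population formulas:

```python
from statistics import mean, pstdev

def skewness(xs):
    """Third standardized moment: 0 for a symmetric distribution."""
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3: positive means fat tails."""
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

# Hypothetical data: symmetric around 0, but with heavy tails.
data = [-9, -1, -1, 0, 0, 0, 0, 1, 1, 9]
print(skewness(data), excess_kurtosis(data))  # skew 0, excess kurtosis > 0
```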

EDA techniques can help detect ________ in a dataset.

  • Data leakage
  • Multicollinearity
  • Overfitting
  • Underfitting
EDA techniques can help detect multicollinearity in a dataset. By examining correlation matrices or scatter plots, we can get a sense of whether predictor variables are correlated with each other, which might indicate multicollinearity. This is an important consideration as multicollinearity can affect the interpretability of some models and can lead to unstable estimates of regression coefficients.

If a machine learning model uses distance-based methods, we need to apply _____ to bring all features to a comparable scale.

  • Binning
  • Data Encoding
  • Data Integration
  • Data Scaling
If a machine learning model uses distance-based methods, we need to apply Data Scaling to bring all features to a comparable scale. Distance-based methods (such as k-NN or k-means) are sensitive to the magnitude of the features, so an unscaled feature with a large range will dominate the distance computation.
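A minimal sketch of z-score scaling (standardization) on two hypothetical features with very different magnitudes:

```python
from statistics import mean, pstdev

# Hypothetical features on very different scales: without scaling,
# income would dominate any Euclidean distance between samples.
age = [25, 32, 47, 51]
income = [30_000, 85_000, 62_000, 120_000]

def standardize(xs):
    """Z-score scaling: shift to mean 0, rescale to standard deviation 1."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

age_scaled = standardize(age)
income_scaled = standardize(income)
print(age_scaled, income_scaled)  # both now on the same scale
```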

Why is readability important in data visualization?

  • To demonstrate the designer's skills
  • To ensure the graph looks good
  • To help the audience understand and interpret the data correctly
  • To make the graph appealing to the audience
Readability is crucial in data visualization because it directly impacts the audience's ability to understand and interpret the data correctly. A readable graph communicates the data's message effectively, allows the audience to draw accurate conclusions, and makes the data accessible to a broader audience.

The method of transforming data to handle outliers often involves applying a ________ to the data.

  • Box-Cox transformation
  • Inverse transformation
  • Logarithmic transformation
  • Square root transformation
The logarithmic transformation is a common method used in data transformation to handle outliers. It compresses high values far more than small ones, pulling extreme values in toward the rest of the data and reducing right skewness.
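A quick sketch of that compression, using hypothetical right-skewed values:

```python
import math

# Hypothetical right-skewed values (e.g. incomes): one large value
# dominates the range.
data = [1_000, 2_000, 3_000, 5_000, 200_000]

# The log transform compresses large values far more than small ones.
logged = [math.log10(x) for x in data]

print(max(data) / min(data))      # raw spread: a factor of 200
print(max(logged) / min(logged))  # after the transform: under a factor of 2
```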

Why might you prefer to use multiple imputation over a simpler method like mean imputation?

  • Mean imputation always leads to bias
  • Multiple imputation is easier to use
  • Multiple imputation is quicker
  • Multiple imputation provides more accurate estimates
You might prefer to use multiple imputation over a simpler method like mean imputation because multiple imputation provides more accurate estimates. It generates multiple plausible values for each missing entry, reflecting the uncertainty around the true value, and it better preserves the relationships between variables.
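A toy sketch of the idea (not a production algorithm, which would draw imputations from a fitted model): complete the dataset several times with different plausible values, run the analysis on each completed copy, then pool the results. The hypothetical data and the resampling scheme here are purely illustrative.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical sample with one missing value (None).
observed = [12, 15, 14, 16, 13]
sample = observed + [None]

# Multiple imputation, sketched: fill the gap m times with different
# plausible values (here, naively resampled from the observed data),
# analyse each completed dataset, then pool the per-dataset estimates.
m = 20
estimates = []
for _ in range(m):
    fill = random.choice(observed)       # one plausible value
    completed = observed + [fill]
    estimates.append(mean(completed))    # analysis step: the sample mean

pooled = mean(estimates)                 # pooled estimate
spread = max(estimates) - min(estimates) # reflects imputation uncertainty
print(pooled, spread)
```

The spread across the m estimates is what mean imputation throws away: a single imputed value pretends there is no uncertainty about the missing entry.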

_______ is a type of data analysis that helps in formulating hypotheses while the primary purpose of _______ is to test the formulated hypotheses.

  • CDA, EDA
  • EDA, CDA
  • EDA, Predictive Modeling
  • Predictive Modeling, EDA
EDA (Exploratory Data Analysis) is used to understand data patterns or trends and to formulate hypotheses, while CDA (Confirmatory Data Analysis) is applied to test those formulated hypotheses.