What is the importance of the 'explore' step in the EDA process?

  • To analyze and investigate the data
  • To clean and transform data
  • To communicate the results
  • To pose initial questions
The 'explore' step in the EDA process is crucial as it involves the analysis and investigation of the cleaned and transformed data, using statistical techniques and visualization methods. This stage helps uncover patterns, trends, relationships, and anomalies in the data, and aids in forming or refining hypotheses.
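A minimal sketch of what this step can look like in Python with pandas and matplotlib; the file name and column names here are purely illustrative placeholders:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical cleaned dataset; "survey_clean.csv" and its columns are placeholders.
df = pd.read_csv("survey_clean.csv")

# Statistical summary: counts, means, spread, and quartiles per numeric column.
print(df.describe())

# Quick visual exploration: one variable's distribution and a pairwise relationship.
df["satisfaction"].hist(bins=20)
plt.xlabel("satisfaction")
plt.ylabel("count")
plt.show()

df.plot.scatter(x="age", y="satisfaction")
plt.show()
```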

What is the importance of understanding data distributions in Exploratory Data Analysis?

  • All of the above
  • It helps in identifying the right statistical tests to apply
  • It helps in spotting outliers and anomalies
  • It helps in understanding the underlying structure of the data
Understanding data distributions is fundamental in Exploratory Data Analysis. It aids in understanding the structure of data, identifying outliers, formulating hypotheses, and selecting appropriate statistical tests.
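As a rough illustration, the sketch below summarizes a synthetic right-skewed variable and flags potential outliers with the common 1.5 × IQR rule (the data are generated, not real):

```python
import numpy as np
import pandas as pd

# Synthetic right-skewed variable standing in for a real numeric column.
rng = np.random.default_rng(0)
values = pd.Series(rng.lognormal(mean=0.0, sigma=0.75, size=1_000))

# Shape of the distribution: strong skew often motivates non-parametric tests
# or a log transform before modelling.
print(values.describe())
print("skewness:", values.skew())
print("kurtosis:", values.kurt())

# Simple outlier flag based on the 1.5 * IQR rule.
q1, q3 = values.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = values[(values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)]
print("potential outliers:", len(outliers))
```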

Imagine you are working with a data set that includes survey responses on a 1-5 scale (1=Very Unsatisfied, 5=Very Satisfied). How would you classify this data type?

  • Continuous data
  • Interval data
  • Nominal data
  • Ordinal data
This type of data is ordinal because the rank order of the ratings (1-5) is meaningful, while the numerical differences between the scale values are not assumed to be equal or precisely quantifiable.
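One way to make this explicit in code is pandas' ordered categorical type, which preserves the ranking but attaches no arithmetic meaning to the codes; the labels for the middle scale points below are assumed for illustration:

```python
import pandas as pd

# Hypothetical survey responses on the 1-5 satisfaction scale.
responses = pd.Series([5, 3, 4, 1, 2, 4, 5, 3])

# Middle labels are assumptions; only 1 and 5 are defined in the question.
labels = ["Very Unsatisfied", "Unsatisfied", "Neutral", "Satisfied", "Very Satisfied"]

# Ordered categorical: 1 < 2 < 3 < 4 < 5 is preserved, but the gaps carry no meaning.
ordinal = pd.Categorical.from_codes(responses - 1, categories=labels, ordered=True)

print(pd.Series(ordinal).value_counts(sort=False))
print(ordinal.min(), "<", ordinal.max())  # order-based operations are valid
```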

Standardization or z-score normalization is a scaling technique where the values are centered around the _____ with a unit _____.

  • mean; standard deviation
  • mean; variance
  • median; interquartile range
  • mode; range
Standardization or z-score normalization is a scaling technique where the values are centered around the mean with a unit standard deviation. This technique subtracts the mean from each observation and then divides by the standard deviation, effectively scaling the data to have a mean of 0 and a standard deviation of 1.
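A minimal NumPy sketch of the calculation, using made-up values (scikit-learn's StandardScaler applies the same transformation column by column):

```python
import numpy as np

# Toy data; any numeric 1-D array works here.
x = np.array([12.0, 15.0, 14.0, 10.0, 18.0, 20.0])

# z-score standardization: subtract the mean, divide by the standard deviation.
z = (x - x.mean()) / x.std()

print(z.mean())  # ~0 (up to floating-point error)
print(z.std())   # 1.0
```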

What aspects should be considered to improve the readability of a graph?

  • All of the mentioned
  • The amount of data displayed
  • The color scheme
  • The scale and labels
Improving the readability of a graph involves considering several aspects, including the color scheme (which should be clear and not misleading), the scale and labels (which should be appropriate and informative), and the amount of data displayed (too much data can overwhelm the audience and obscure the main message).
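An illustrative matplotlib sketch with made-up values that applies these points: informative labels, an honest scale, explicit ticks, and a restrained colour choice:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
months = np.arange(1, 13)
sales = 100 + 5 * months + rng.normal(0, 8, size=12)  # illustrative values only

fig, ax = plt.subplots(figsize=(7, 4))
ax.plot(months, sales, color="tab:blue", marker="o", label="Monthly sales")

# Readability choices: clear labels, a sensible scale, and a single unambiguous colour.
ax.set_xlabel("Month")
ax.set_ylabel("Sales (units)")
ax.set_title("Monthly sales (illustrative data)")
ax.set_xticks(months)
ax.set_ylim(0, sales.max() * 1.2)  # start at 0 to avoid exaggerating small changes
ax.legend()
plt.tight_layout()
plt.show()
```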

You are given a dataset with a significant amount of outliers. Which scaling method would be most suitable and why?

  • Outliers should always be removed
  • Min-Max scaling because it scales all values between 0 and 1
  • Robust scaling because it is not affected by outliers
  • Z-score standardization because it reduces skewness
Robust scaling is the most suitable choice because it centers the data on the median and scales by the interquartile range (IQR), both of which are resistant to outliers. Min-Max scaling depends on the minimum and maximum, and Z-score standardization on the mean and standard deviation, all of which are pulled toward extreme values.
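A small NumPy comparison with made-up values and one extreme point illustrates the difference (scikit-learn's RobustScaler implements the same median/IQR idea):

```python
import numpy as np

# Data with one extreme outlier.
x = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 300.0])

# Robust scaling: center on the median, scale by the interquartile range.
median = np.median(x)
q1, q3 = np.percentile(x, [25, 75])
robust_scaled = (x - median) / (q3 - q1)

# For comparison, z-scores: the outlier drags the mean and inflates the std,
# squashing the ordinary observations together.
z_scaled = (x - x.mean()) / x.std()

print("robust: ", np.round(robust_scaled, 2))
print("z-score:", np.round(z_scaled, 2))
```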

Under what circumstances might 'removal' of outliers lead to biased results?

  • When outliers are a result of data duplication
  • When outliers are due to data collection errors
  • When outliers are extreme but legitimate data points
  • When outliers do not significantly impact the analysis
Removing outliers can lead to biased results when the outliers are extreme but legitimate data points, as they could represent important aspects of the phenomenon being studied.

You have applied mean imputation to a dataset where values are missing not at random. What kind of bias might you have unintentionally introduced, and why?

  • Confirmation bias
  • Overfitting bias
  • Selection bias
  • Underfitting bias
If you have applied mean imputation to a dataset where values are missing not at random, you might have unintentionally introduced selection bias. When the missingness depends on the unobserved values themselves, the observed values are not representative of the missing ones, so filling the gaps with the observed mean systematically distorts the distribution and understates the variability in the data.
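A small simulated example, with a hypothetical income variable where high values are the ones most likely to be missing, shows how mean imputation understates both the mean and the spread:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical income column where large values are more likely to be missing
# (missing not at random): high earners tend to decline to answer.
income = pd.Series(rng.normal(50_000, 15_000, size=1_000))
missing_mask = (income > 65_000) & (rng.random(1_000) < 0.8)
observed = income.mask(missing_mask)

# Mean imputation fills every gap with the observed mean.
imputed = observed.fillna(observed.mean())

print("true mean:   ", round(income.mean()))
print("imputed mean:", round(imputed.mean()))  # biased downward: high values were the ones missing
print("true std:    ", round(income.std()))
print("imputed std: ", round(imputed.std()))   # understated variability
```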

When features in a dataset are highly correlated, they might suffer from a problem known as ________, which can negatively impact the machine learning model.

  • Bias
  • Multicollinearity
  • Overfitting
  • Underfitting
When features in a dataset are highly correlated, they might suffer from a problem known as multicollinearity, which can negatively impact the machine learning model. Multicollinearity makes coefficient estimates in linear and logistic models unstable and hard to interpret, and the redundant features add little new information while increasing model variance.
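One simple way to screen for this is a correlation matrix; the sketch below uses synthetic features, one of which is nearly a copy of another:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 500

# Two nearly redundant features (x2 is x1 plus small noise) and one independent one.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)
x3 = rng.normal(size=n)
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

corr = X.corr()
print(corr.round(2))

# Flag highly correlated pairs as multicollinearity candidates.
threshold = 0.9
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) > threshold:
            print(f"possible multicollinearity: {a} vs {b} (r = {corr.loc[a, b]:.2f})")
```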

The removal of outliers can lead to a reduction in the ________ of the data set.

  • Mean
  • Median
  • Mode
  • Variability
The removal of outliers typically reduces the variability of the data set, because outliers are extreme values that inflate measures of spread such as the variance and standard deviation.
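A quick numeric illustration with made-up values:

```python
import numpy as np

values = np.array([10.0, 11.0, 12.0, 13.0, 14.0, 95.0])  # 95 is an extreme value

with_outlier = values.var(ddof=1)
without_outlier = values[values < 90].var(ddof=1)

print(f"variance with outlier:    {with_outlier:.1f}")
print(f"variance without outlier: {without_outlier:.1f}")
```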