In what circumstances can the IQR method lead to incorrect detection of outliers?

  • When data has a high standard deviation
  • When data is heavily skewed or bimodal
  • When data is normally distributed
  • When data is uniformly distributed
The IQR method can misidentify outliers in heavily skewed or bimodal distributions. Its fences at Q1 − 1.5×IQR and Q3 + 1.5×IQR implicitly assume a roughly symmetric, unimodal shape: in skewed data they can flag ordinary points in the long tail as outliers, while in bimodal data the inflated IQR can widen the fences so much that genuine anomalies go undetected.
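A minimal sketch of the skewed case, assuming NumPy and a synthetic right-skewed (exponential) sample; the 1.5 multiplier is the usual Tukey fence and the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=1000)  # heavily right-skewed sample

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # standard Tukey fences

flagged = data[(data < lower) | (data > upper)]
print(f"Flagged {flagged.size} of {data.size} points "
      f"({100 * flagged.size / data.size:.1f}%)")
# For an exponential sample, a noticeable share of perfectly ordinary points
# in the long right tail falls above the upper fence and gets "detected".
```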

A potential drawback of using regression imputation is that it can underestimate the ___________.

  • Mean
  • Median
  • Mode
  • Variance
One potential drawback of regression imputation is that it can underestimate the variance. Because the imputed values fall exactly on the fitted regression line, they carry none of the residual variability of real observations, so the overall spread of the variable is understated.
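A small illustration of why this happens, assuming NumPy and a fabricated linear dataset with values hidden completely at random (the 40% missing rate and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
y = 2 * x + rng.normal(scale=1.0, size=n)   # true variance of y is about 5

# Hide 40% of y at random, then impute from a regression of y on x.
missing = rng.random(n) < 0.4
slope, intercept = np.polyfit(x[~missing], y[~missing], deg=1)
y_imputed = y.copy()
y_imputed[missing] = slope * x[missing] + intercept  # imputed points sit on the line

print("variance of fully observed y :", y.var().round(2))
print("variance after imputation    :", y_imputed.var().round(2))
# The imputed values have no residual noise, so the variance shrinks.
```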

Why is multicollinearity a potential issue in data analysis and predictive modeling?

  • It can cause instability in the coefficient estimates of regression models.
  • It can cause the data to be skewed.
  • It can cause the mean and median of the data to be significantly different.
  • It can lead to overfitting in machine learning models.
Multicollinearity can cause instability in the coefficient estimates of regression models: when predictors are highly correlated, small changes in the data can produce large swings in the estimated coefficients and inflate their standard errors, making individual coefficients hard to interpret and unreliable.
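A rough sketch of that instability using only NumPy and two fabricated, nearly collinear predictors; the seeds and sample size are arbitrary, and the point is to compare coefficient estimates across resamples:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)     # nearly identical to x1
y = 3 * x1 + rng.normal(scale=1.0, size=n)

def fit(idx):
    """OLS fit of y on [1, x1, x2] using only the rows in idx."""
    X = np.column_stack([np.ones(idx.size), x1[idx], x2[idx]])
    coef, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    return coef

# With such highly correlated predictors, bootstrap resamples of the same data
# tend to give widely varying coefficients for x1 and x2 individually, even
# though their sum stays close to the true combined effect of 3.
for seed in (0, 1):
    idx = np.random.default_rng(seed).integers(0, n, size=n)
    print(fit(idx).round(2))   # [intercept, coef_x1, coef_x2]
```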

During a data analysis project, your team came up with a novel hypothesis after examining patterns and trends in your dataset. Which type of analysis will be the best for further exploring this hypothesis?

  • All are equally suitable
  • CDA
  • EDA
  • Predictive Modeling
EDA would be most suitable in this case as it provides a flexible framework for exploring patterns, trends, and relationships in the data, allowing for a deeper understanding and further exploration of the novel hypothesis.

Which method of handling missing data removes only the instances where certain variables are missing, preserving the rest of the data in the row?

  • Listwise Deletion
  • Mean Imputation
  • Pairwise Deletion
  • Regression Imputation
Pairwise deletion removes a case only from calculations that involve its missing variable(s), so the rest of the data in that row still contributes to other analyses. This approach retains as much data as possible, but because different statistics end up being computed on different subsets of cases, it can lead to inconsistencies and bias if the missingness is not completely at random.
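A short pandas sketch of the contrast (the tiny DataFrame and column names are made up for illustration): dropna() performs listwise deletion, while DataFrame.corr() uses pairwise-complete observations by default.

```python
import pandas as pd

df = pd.DataFrame({
    "age":    [25,   32,   47,   None, 52],
    "income": [40.0, None, 80.0, 75.0, 90.0],
    "score":  [3.1,  2.7,  None, 4.0,  3.8],
})

# Listwise deletion: any row with a missing value is dropped entirely.
print("rows left after listwise deletion:", len(df.dropna()))

# Pairwise deletion: each pairwise correlation uses every row where
# *that particular pair* of variables is present.
print(df.corr())
```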

How does standard deviation differ in a sample versus a population?

  • The denominator in the calculation of the sample standard deviation is (n-1)
  • The standard deviation of a sample is always larger
  • The standard deviation of a sample is always smaller
  • They are calculated in the same way
The "Standard Deviation" in a sample differs from that in a population in the way it is calculated. For a sample, the denominator is (n-1) instead of n, which is Bessel's correction to account for sample bias.

What does a correlation coefficient close to 0 indicate about the relationship between two variables?

  • A perfect negative linear relationship
  • A perfect positive linear relationship
  • A very strong linear relationship
  • No linear relationship
A correlation coefficient close to 0 indicates that there is no linear relationship between the two variables. This means that changes in one variable are not consistently associated with changes in the other variable. It does not necessarily mean that there is no relationship at all, as there may be a non-linear relationship.
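A quick illustration of that last point, assuming NumPy and a synthetic quadratic relationship (the seed and range are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, size=10_000)
y = x ** 2                      # perfectly deterministic, but not linear

r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))              # near 0 despite the strong quadratic relationship
```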

What step comes after 'wrangling' in the EDA process?

  • Communicating
  • Concluding
  • Exploring
  • Questioning
Once the data has been 'wrangled', i.e. cleaned and transformed, the next step in the EDA process is 'exploring'. This stage involves examining the data through statistical analysis and visual methods.

In a dataset with a categorical variable missing for some rows, why might mode imputation not be the best strategy?

  • All of the above
  • It can introduce bias if the data is not missing at random
  • It could distort the original data distribution
  • It may not capture the underlying data pattern
Mode imputation might not be the best strategy for a dataset with a categorical variable missing for some rows. Although it's simple to implement, it may fail to capture the underlying data pattern, introduce bias if the data is not missing at random, and distort the original data distribution by overrepresenting the mode.
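A small pandas sketch of the distortion, using a made-up categorical series: after filling missing values with the mode, the most common category is overrepresented relative to the observed distribution.

```python
import pandas as pd

s = pd.Series(["red", "blue", None, "red", None, "green", "red", None])

mode_value = s.mode().iloc[0]          # "red"
imputed = s.fillna(mode_value)

print(s.value_counts(normalize=True).round(2))        # shares among observed values
print(imputed.value_counts(normalize=True).round(2))  # shares after imputation
# "red" jumps from 3/5 of the observed values to 6/8 of all values,
# skewing the category distribution toward the mode.
```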

In a scenario where your dataset has a Gaussian distribution, which scaling method is typically recommended and why?

  • All scaling methods work equally well with Gaussian distributed data
  • Min-Max scaling because it scales all values between 0 and 1
  • Robust scaling because it is not affected by outliers
  • Z-score standardization because it creates a normal distribution
Z-score standardization is typically recommended for a dataset with a Gaussian distribution. Strictly speaking, it does not create a normal distribution; it rescales the already Gaussian data to a mean of 0 and a standard deviation of 1, so the result approximates a standard normal distribution, which many downstream methods assume.
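A minimal sketch with NumPy and a synthetic roughly Gaussian feature (the location, scale, and seed are arbitrary): standardization shifts and rescales the values but leaves the shape of the distribution unchanged.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(loc=50, scale=10, size=1000)   # roughly Gaussian feature

z = (data - data.mean()) / data.std()            # z-score standardization

print(round(z.mean(), 3), round(z.std(), 3))     # ~0 and ~1; shape is unchanged
```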