Which method of variable selection can help mitigate the impact of multicollinearity?

  • All of these methods.
  • Backward elimination.
  • Best subset selection.
  • Forward selection.
All of these variable selection methods can help mitigate multicollinearity. By dropping redundant, highly correlated predictors and keeping only those that contribute most to predicting the dependent variable, they reduce the correlation among the remaining features.
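As a rough sketch, forward selection can be automated with scikit-learn's SequentialFeatureSelector; the duplicated column below is a contrived stand-in for a collinear predictor:

```python
# Sketch: forward selection dropping a redundant, collinear predictor.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=5, n_informative=5,
                       noise=0.1, random_state=0)
X = np.hstack([X, X[:, [0]]])  # column 5 duplicates column 0 -> multicollinearity

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=5, direction="forward"
)
selector.fit(X, y)
print("kept columns:", selector.get_support(indices=True))  # only one of the duplicated pair survives
```

Backward elimination is the same call with direction="backward"; best subset selection is exhaustive but follows the same idea.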

What implications can a negative correlation coefficient value hold?

  • One variable tends to increase as the other decreases
  • The relationship between variables is not linear
  • There is no relationship between variables
  • Variables tend to increase or decrease together
A negative correlation coefficient implies that one variable tends to increase as the other decreases; in other words, the two variables have an inverse (negative) linear relationship.
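A quick numeric check (values invented for illustration):

```python
# Pearson correlation of two inversely related variables.
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([10, 8, 6, 4, 2], dtype=float)  # y falls as x rises

print(np.corrcoef(x, y)[0, 1])  # -1.0: a perfect inverse linear relationship
```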

Incorrectly filling missing values in a feature can disproportionately increase the feature's ________, affecting model interpretability.

  • importance
  • precision
  • recall
  • weight
Incorrectly filling missing values in a feature can disproportionately inflate that feature's importance, causing other informative features to be overlooked and making the model harder to interpret.
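One way to see the effect is to compare feature importances under two imputation choices; this is only a sketch on synthetic data, and the constant fill value is arbitrary:

```python
# Sketch: a careless constant fill can distort a feature's importance.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
X[np.random.default_rng(0).random(300) < 0.3, 2] = np.nan  # ~30% missing in feature 2

for strategy, fill in [("median", None), ("constant", -999.0)]:
    X_filled = SimpleImputer(strategy=strategy, fill_value=fill).fit_transform(X)
    model = RandomForestRegressor(random_state=0).fit(X_filled, y)
    print(strategy, np.round(model.feature_importances_, 3))
```

Comparing the two printed importance vectors is a cheap sanity check for this failure mode.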

You've received feedback that your box plots are not providing a clear visual of the distribution of your dataset. What alternative plot could you use and why?

  • Bar graph
  • Line graph
  • Scatter plot
  • Violin plot
If box plots are not providing a clear visualization, a violin plot is a good alternative. Violin plots are similar to box plots but also show the probability density of the data at different values, giving a more detailed picture of the dataset's distribution.
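A minimal side-by-side comparison with seaborn, using a deliberately bimodal sample where the difference is most visible:

```python
# Sketch: box plot vs. violin plot on a bimodal sample.
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 0.5, 500)])  # two modes

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
sns.boxplot(y=data, ax=axes[0]).set_title("Box plot (modes hidden)")
sns.violinplot(y=data, ax=axes[1]).set_title("Violin plot (modes visible)")
plt.show()
```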

How would the mean change if an additional number far away from the current mean were added to the dataset?

  • It would always decrease
  • It would always increase
  • It would increase or decrease depending on the value
  • It would not change
Adding a number far from the current mean would either increase or decrease the mean, depending on its value: if the added number is greater than the current mean, the mean increases; if it is less, the mean decreases. This illustrates how sensitive the mean is to outliers and extreme values.
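A quick arithmetic check with made-up values:

```python
# One extreme value can pull the mean in either direction.
import numpy as np

data = np.array([9.0, 10.0, 11.0, 12.0, 13.0])
print(data.mean())                    # 11.0
print(np.append(data, 100.0).mean())  # ~25.8, pulled up by a high value
print(np.append(data, -50.0).mean())  # ~0.83, pulled down by a low value
```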

Consider you have a regression model that is underfitting. On investigation, you discover missing data was dropped instead of imputed. What might be the reason for underfitting in this context?

  • The model didn't have enough data to learn from.
  • The model was over-regularized.
  • The model's complexity was too low.
  • The model's hyperparameters were not optimized.
Dropping missing data can significantly reduce the size of the training set. If much of the data is discarded, the model may not have enough data to learn the underlying patterns, leading to underfitting.
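The sketch below, on synthetic data, shows how quickly listwise deletion can shrink a training set compared with a simple median imputation:

```python
# Sketch: dropping rows with any missing value vs. imputing them.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 5)), columns=list("abcde"))
df[df > 1.5] = np.nan  # ~7% of cells per column go missing

print(len(df.dropna()))             # roughly 700 rows survive listwise deletion
print(len(df.fillna(df.median())))  # all 1000 rows kept by imputation
```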

Why is listwise deletion not recommended when the data missingness is systematic or 'not at random'?

  • It can cause overfitting
  • It can introduce bias
  • It can introduce random noise
  • It can lead to underfitting
Listwise deletion is not recommended when the data missingness is systematic or 'not at random' because it can introduce bias. When missingness depends on the (possibly unobserved) values themselves, deleting incomplete rows systematically excludes certain kinds of observations, leaving a sample that no longer represents the population and producing misleading results.
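A small illustration of that bias, assuming (hypothetically) that the highest incomes are the ones that go unreported:

```python
# Sketch: listwise deletion under 'missing not at random' (MNAR).
import numpy as np

rng = np.random.default_rng(1)
income = rng.lognormal(mean=10.0, sigma=0.5, size=10_000)
observed = np.where(income > np.quantile(income, 0.8), np.nan, income)  # top 20% missing

print(income.mean())         # true mean
print(np.nanmean(observed))  # mean after deleting the missing rows is biased downward
```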

In a machine learning project, your data is not normally distributed, which is causing problems in your model. What are some strategies you could use to address this issue?

  • All of the above
  • Change the type of machine learning model to one that does not assume a normal distribution
  • Use data transformation techniques like logarithmic or square root transformations
  • Use non-parametric statistical methods
Several strategies can address non-normal data in a machine learning project: the data can be transformed (e.g., with a logarithmic or square-root transformation), non-parametric statistical methods that make no normality assumption can be used, or a different type of machine learning model that does not assume a normal distribution can be chosen.
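As a small sketch of the first strategy, a log transform applied to a right-skewed synthetic sample:

```python
# Sketch: a log transform pulling a right-skewed sample toward symmetry.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=5_000)

print(stats.skew(skewed))          # strongly right-skewed
print(stats.skew(np.log(skewed)))  # near zero after the transform
```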

You're examining a dataset on company revenues and discover a significant jump in revenue for one quarter, which is not consistent with the rest of the data. What could this jump in revenue be considered in the context of your analysis?

  • A random fluctuation
  • A seasonal effect
  • A trend
  • An outlier
This significant jump in revenue could be considered an outlier in the context of your analysis, as it deviates significantly from the other data points.
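One common way to flag such a point is the 1.5 × IQR rule; the revenue figures below are invented:

```python
# Sketch: flagging the revenue jump with the 1.5 * IQR rule.
import numpy as np

revenue = np.array([102, 98, 105, 110, 99, 310, 104, 101], dtype=float)
q1, q3 = np.percentile(revenue, [25, 75])
iqr = q3 - q1
outliers = revenue[(revenue < q1 - 1.5 * iqr) | (revenue > q3 + 1.5 * iqr)]
print(outliers)  # [310.] -- the inconsistent quarter
```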

You are analyzing a dataset where the variable 'income' has a skewed distribution due to a few high-income individuals. What method would you recommend to handle these outliers?

  • Binning
  • Removal
  • Transformation
In this case, a transformation, such as a log transformation, is the best fit. It reduces the skewness of the data by compressing the high values.
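A minimal sketch with invented income values, using log1p (which also handles zeros safely):

```python
# Sketch: log1p compresses the few very high incomes.
import numpy as np

income = np.array([30_000, 35_000, 40_000, 45_000, 2_000_000], dtype=float)
print(np.round(np.log1p(income), 2))  # the extreme value no longer dominates
```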