If you are working with a large data set and need to produce interactive visualizations for a web application, which Python library would be the most suitable?
- Bokeh
- Matplotlib
- Plotly
- Seaborn
Plotly is well-suited for creating interactive visualizations and can handle large data sets efficiently. It also supports rendering in web applications, making it ideal for this scenario.
What type of bias could be introduced by mean/median/mode imputation, particularly if the data is not missing at random?
- Confirmation bias
- Overfitting bias
- Selection bias
- Underfitting bias
Mean/median/mode imputation, particularly when data are not missing at random, can introduce selection bias. The substituted values may not reflect the reasons behind the missingness, leading to underestimated variability and a distorted picture of the true relationships between variables.
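A small sketch of the effect (the income data and missingness mechanism below are made up for illustration): when high values are more likely to be missing, filling with the observed mean understates both the true mean and the true spread.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical income column where high earners are more likely to be
# missing (i.e., NOT missing at random): the top 20% of values go missing.
income = pd.Series(rng.lognormal(mean=10, sigma=0.5, size=1_000))
observed = income.mask(income > income.quantile(0.8))

imputed = observed.fillna(observed.mean())

# Mean imputation biases the estimated mean downward and shrinks the
# variance, because the filled-in values ignore why the data are missing.
print(income.mean(), imputed.mean())  # imputed mean underestimates the truth
print(income.std(), imputed.std())    # variability is understated
```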
How can regularization techniques contribute to feature selection?
- By adding a penalty term to the loss function
- By avoiding overfitting
- By reducing model complexity
- By shrinking coefficients towards zero
Regularization techniques contribute to feature selection by shrinking the coefficients of less important features towards zero. Features whose coefficients reach exactly zero are effectively removed from the model, which achieves feature selection as a by-product of training.
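A short sketch with scikit-learn's Lasso (L1 regularization) on synthetic data, where only a few features are truly informative; the data and `alpha` value are illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic regression problem: only 3 of 10 features carry signal.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=1.0, random_state=0)

# The L1 penalty adds alpha * sum(|coef|) to the loss, which drives the
# coefficients of uninformative features exactly to zero.
model = Lasso(alpha=1.0).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)  # indices of the features the model effectively kept
```

In contrast, L2 (ridge) regularization shrinks coefficients but rarely makes them exactly zero, so it reduces complexity without performing hard feature selection.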
What type of data visualization method is typically color-coded to represent different values?
- Heatmap
- Histogram
- Line plot
- Scatter plot
Heatmaps are typically color-coded to represent different values. In a heatmap, data values are represented as colors, making it an excellent tool for visualizing large amounts of data and the correlation between different variables.
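A minimal example (the data and column names are invented for illustration) of rendering a correlation matrix as a color-coded heatmap with matplotlib:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Illustrative data: columns 'a' and 'b' are correlated, 'c' is independent.
a = rng.normal(size=500)
df = pd.DataFrame({"a": a,
                   "b": a + rng.normal(scale=0.5, size=500),
                   "c": rng.normal(size=500)})

corr = df.corr()  # pairwise correlations in [-1, 1]

# Each cell's value is mapped to a color, so strong correlations stand out.
fig, ax = plt.subplots()
im = ax.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
ax.set_xticks(range(len(corr)), corr.columns)
ax.set_yticks(range(len(corr)), corr.columns)
fig.colorbar(im)
fig.savefig("heatmap.png")
```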
What is the potential disadvantage of using listwise deletion for handling missing data?
- It causes overfitting
- It discards valuable data
- It introduces random noise
- It leads to multicollinearity
The potential disadvantage of using listwise deletion for handling missing data is that it can discard valuable data. If the missing values are not completely random, discarding the entire observation might lead to biased or incorrect results because it might exclude certain types of observations.
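A quick illustration with pandas (the survey data below is hypothetical): dropping every row that has any missing value also discards the fully observed values in those rows.

```python
import pandas as pd

# Hypothetical survey data: 'income' is missing for half the respondents.
df = pd.DataFrame({
    "age": [25, 32, 41, 29, 37, 53],
    "income": [48_000, None, 61_000, None, 55_000, None],
})

# Listwise deletion: drop every row with any missing value.
complete_cases = df.dropna()

# Three rows are discarded, even though their 'age' values were fully
# observed -- that information is thrown away along with the missing cells.
print(len(df), "->", len(complete_cases))
```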
If a data point's Z-score is 0, it indicates that the data point is _______.
- above the mean
- an outlier
- below the mean
- on the mean
A Z-score of 0 indicates that the data point is on the mean. Since Z = (x − mean) / standard deviation, the numerator is zero only when x equals the mean exactly.
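A one-liner check with NumPy (the sample values are arbitrary):

```python
import numpy as np

data = np.array([4.0, 8.0, 6.0, 5.0, 7.0])  # illustrative values; mean is 6
mean, std = data.mean(), data.std()

# Z = (x - mean) / std: a point exactly at the mean scores 0.
z_at_mean = (mean - mean) / std
print(z_at_mean)  # 0.0
```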
Can multiple imputation be applied when data are missing completely at random (MCAR)?
- No
- Only if data is numerical
- Only in rare cases
- Yes
Yes, multiple imputation can be applied when data are missing completely at random (MCAR). It is a flexible method that works across missing-data situations, including MCAR, MAR (missing at random), and even MNAR (missing not at random), although the MNAR case requires the missingness mechanism itself to be modeled with additional assumptions.
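A sketch using scikit-learn's `IterativeImputer` with `sample_posterior=True`, which draws several plausible completed data sets that can then be pooled; the data and the choice of 5 imputations are illustrative:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] += X[:, 0]                  # correlated columns help the imputer
mask = rng.random(X.shape) < 0.1    # ~10% of cells missing completely at random
X_missing = X.copy()
X_missing[mask] = np.nan

# Multiple imputation: generate several completed data sets by sampling from
# the posterior predictive distribution, then pool (here, a simple average).
imputations = [
    IterativeImputer(sample_posterior=True,
                     random_state=seed).fit_transform(X_missing)
    for seed in range(5)
]
pooled = np.mean(imputations, axis=0)
print(np.isnan(pooled).any())  # no missing values remain
```

In a real analysis each completed data set would be analyzed separately and the estimates combined (e.g., with Rubin's rules) rather than averaging the data directly.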
You're in the 'explore' phase of the EDA process and you notice a potential error back in the 'wrangle' phase. How should you proceed?
- Conclude the analysis with the current data.
- Go back to the wrangling phase to correct the error.
- Ignore the error and continue with the exploration.
- Inform the stakeholders about the error.
If you notice a potential error in the 'wrangle' phase while you are in the 'explore' phase, you should go back to the 'wrangle' phase to correct the error. Ensuring the accuracy and quality of the data during the 'wrangle' phase is crucial for the validity of the insights drawn in subsequent phases.
What is the impact on training time if missing data is incorrectly handled in a large dataset?
- Decreases dramatically.
- Depends on the specific dataset.
- Increases dramatically.
- Remains largely the same.
If missing data is handled incorrectly, particularly in a large dataset, training time can increase dramatically. The model struggles to learn from the distorted data and needs more iterations, and therefore more time, to fit it.
The _______ method of feature selection involves removing features one by one until the removal of further features decreases model accuracy.
- Backward elimination
- Forward selection
- Recursive feature elimination
- Stepwise selection
The backward elimination method of feature selection involves removing features one by one until the removal of further features decreases model accuracy. This process starts with a model trained on all features and iteratively removes the least important feature until the overall model performance declines.
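The loop below sketches backward elimination with cross-validated linear regression; the synthetic data and stopping criterion are illustrative assumptions, not a prescribed implementation:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 6 features, only 3 of them informative.
X, y = make_regression(n_samples=300, n_features=6, n_informative=3,
                       noise=5.0, random_state=0)

def cv_score(features):
    """Mean cross-validated R^2 using only the given feature indices."""
    return cross_val_score(LinearRegression(), X[:, features], y, cv=5).mean()

# Start with all features; repeatedly drop the feature whose removal hurts
# least, and stop as soon as every possible removal decreases accuracy.
features = list(range(X.shape[1]))
while len(features) > 1:
    best_drop, best_score = None, cv_score(features)
    for f in features:
        trial = [g for g in features if g != f]
        score = cv_score(trial)
        if score >= best_score:
            best_drop, best_score = f, score
    if best_drop is None:  # any further removal would lower accuracy
        break
    features.remove(best_drop)

print(features)  # indices of the surviving features
```

Scikit-learn's `RFE` (recursive feature elimination) automates a related idea, ranking features by model coefficients instead of re-scoring every candidate removal.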