How does ridge regression help in dealing with multicollinearity?

  • By eliminating the correlated variables.
  • By increasing the sample size.
  • By introducing a penalty term to shrink the coefficients.
  • By transforming the variables.
Ridge regression adds a regularization (penalty) term to the loss function, which shrinks the coefficients toward zero, stabilizes the estimates, and mitigates the effect of multicollinearity.
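The effect can be sketched with NumPy's closed-form solutions for OLS and ridge (a minimal illustration on synthetic, nearly collinear data; the penalty value λ = 1 and all variable names are arbitrary choices, not a prescribed setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.1, size=n)

# OLS: solve (X'X) beta = X'y -- X'X is nearly singular, so the
# individual coefficients are poorly determined
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: solve (X'X + lambda*I) beta = X'y -- the penalty term
# regularizes the near-singular matrix and shrinks the coefficients
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print(beta_ols)
print(beta_ridge)   # both coefficients pulled toward the stable value ~1
```

The ridge coefficients land close to the true values (1, 1), while the OLS coefficients can wander far apart along the near-collinear direction even though their sum stays near 2.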

How does adding more predictors to a multiple linear regression model affect its inferences?

  • It always improves the model
  • It always makes the model worse
  • It can lead to overfitting
  • It has no effect on the model
Adding more predictors never decreases the in-sample R-squared value, so the model can appear to improve even when the new predictors are pure noise. If these additional predictors are not truly associated with the response variable, the result is overfitting, and the model performs poorly on new, unseen data.
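This can be demonstrated directly: appending predictors of pure noise still nudges in-sample R² upward (a minimal sketch with NumPy; the sample size, coefficients, and number of noise columns are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = rng.normal(size=(n, 2))
y = X @ np.array([1.5, -2.0]) + rng.normal(size=n)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(X, y)

# append 10 predictors of pure noise, unrelated to y
X_noise = np.column_stack([X, rng.normal(size=(n, 10))])
r2_noise = r_squared(X_noise, y)

print(r2_base, r2_noise)   # in-sample R^2 never decreases
```

Adjusted R², or out-of-sample validation, is the usual guard against rewarding this kind of noise fitting.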

In probability, what does an outcome refer to?

  • A confirmed hypothesis
  • A result of a random experiment
  • A result of a statistical analysis
  • A successful event
In the context of probability, an outcome is a single possible result of a random experiment. For example, if the experiment is tossing a coin, the possible outcomes are 'Heads' and 'Tails'. Outcomes are mutually exclusive: exactly one outcome occurs on each trial.
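Enumerating a sample space makes the definition concrete (a toy sketch for two coin tosses; each tuple is one outcome):

```python
from itertools import product

# sample space of tossing a coin twice: each tuple is one outcome
outcomes = list(product(["Heads", "Tails"], repeat=2))

print(outcomes)        # 4 mutually exclusive outcomes
print(len(outcomes))   # 4
```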

________ is a problem that can arise in multiple linear regression when two or more predictor variables are highly correlated with each other.

  • Autocorrelation
  • Heteroscedasticity
  • Homoscedasticity
  • Multicollinearity
Multicollinearity is a problem that can occur in multiple linear regression when two or more predictor variables are highly correlated with each other. This can lead to unstable estimates of the regression coefficients and make it difficult to determine the individual effects of the predictor variables.
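One common way to detect this is the variance inflation factor (VIF), computed by regressing each predictor on the others; the sketch below implements it by hand with NumPy on synthetic data (the 0.95 correlation and the rule-of-thumb threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + np.sqrt(1 - 0.95**2) * rng.normal(size=n)  # highly correlated
x3 = rng.normal(size=n)                                      # unrelated

def vif(target, others):
    # VIF = 1 / (1 - R^2) from regressing one predictor on the rest
    X = np.column_stack([np.ones(len(target))] + others)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    tss = (target - target.mean()) @ (target - target.mean())
    r2 = 1 - resid @ resid / tss
    return 1 / (1 - r2)

print(vif(x1, [x2, x3]))   # large (rule of thumb: > 5-10 flags a problem)
print(vif(x3, [x1, x2]))   # near 1: no collinearity involving x3
```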

In a multiple linear regression equation, the ________ represents the expected change in the dependent variable for a one-unit change in the corresponding independent variable, holding all other independent variables constant.

  • F-statistic
  • R-squared value
  • regression coefficient
  • residual
In a multiple linear regression equation, the regression coefficient represents the expected change in the dependent variable for a one-unit change in the corresponding independent variable, while holding all other independent variables constant. It gives the direction and strength of the relationship between the dependent variable and each independent variable.
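The "one-unit change, all else held constant" interpretation can be checked numerically (a minimal NumPy sketch; the true coefficients 5, 2, and -3 are made up for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 2))
y = 5 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

X1 = np.column_stack([np.ones(n), X])      # intercept plus two predictors
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(beta)   # approximately [5, 2, -3]

# holding x2 fixed at 0, raise x1 from 0 to 1: the prediction
# changes by exactly the regression coefficient beta[1]
p0 = np.array([1, 0.0, 0.0]) @ beta
p1 = np.array([1, 1.0, 0.0]) @ beta
print(p1 - p0)
```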

What is the effect of multicollinearity on the power of a statistical test?

  • It decreases the power.
  • It has no effect on the power.
  • It increases the power.
  • It makes the power equal to one.
Multicollinearity can inflate the variance of the regression coefficients, thus widening the confidence intervals and reducing the power of the statistical test.
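The variance inflation can be observed by simulation: the empirical spread of an OLS coefficient grows sharply as the predictors become correlated (a Monte Carlo sketch; the correlation 0.95, sample size, and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def coef_sd(rho, n=60, trials=500):
    # empirical standard deviation of the OLS estimate of beta1
    # when corr(x1, x2) = rho and the true model is y = x1 + x2 + noise
    betas = []
    for _ in range(trials):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        X = np.column_stack([x1, x2])
        y = x1 + x2 + rng.normal(size=n)
        betas.append(np.linalg.lstsq(X, y, rcond=None)[0][0])
    return np.std(betas)

sd_indep = coef_sd(0.0)
sd_collinear = coef_sd(0.95)
print(sd_indep, sd_collinear)   # far larger spread when rho = 0.95
```

Wider sampling variance means wider confidence intervals for the same data, which is exactly why the test's power drops.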

What is the purpose of point estimation in statistics?

  • To calculate the variance of a dataset
  • To compare two different datasets
  • To estimate the range of possible values for an unknown population parameter
  • To give a single best guess of an unknown population parameter
The purpose of point estimation in statistics is to provide a single "best guess" for an unknown population parameter, such as the mean or a proportion, computed from sample data.
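The simplest example is using the sample mean as a point estimate of an unknown population mean (a toy sketch where the "unknown" mean is set to 10 so the estimate can be checked):

```python
import numpy as np

rng = np.random.default_rng(4)

# pretend the population mean (10.0) is unknown to the analyst
sample = rng.normal(loc=10.0, scale=2.0, size=500)

point_estimate = sample.mean()   # single best guess for the unknown mean
print(point_estimate)
```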

What does it mean if two events are independent in probability?

  • The occurrence of one affects the occurrence of the other
  • The occurrence of one does not affect the occurrence of the other
  • They have the same probability of occurrence
  • They occur at the same time
In probability, two events are independent if the occurrence of one event does not affect the occurrence of the other. This means that the probability of both events occurring is the product of their individual probabilities.
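The product rule for independent events can be checked by simulation with two dice (a minimal sketch; the events "each die shows 6" are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
die1 = rng.integers(1, 7, size=n)   # faces 1..6
die2 = rng.integers(1, 7, size=n)

a = die1 == 6                # event A: first die shows 6
b = die2 == 6                # event B: second die shows 6
p_a, p_b = a.mean(), b.mean()
p_both = (a & b).mean()

print(p_both, p_a * p_b)     # both close to 1/36, confirming independence
```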

How can a Chi-square test for independence be used in feature selection?

  • It can identify the features that are independent from the target variable
  • It can identify the features that are most correlated with the target variable
  • It can identify the features that have a significant association with the target variable
  • It can identify the features that have the highest variance
A Chi-square test for independence can be used in feature selection by identifying categorical features whose observed frequencies deviate significantly from what independence with the target variable would predict; features that appear independent of the target carry little predictive information and can be dropped.
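The statistic itself is simple to compute from a contingency table of feature category versus target class (a hand-rolled NumPy sketch on a made-up table; in practice a library routine such as SciPy's `chi2_contingency` would also return the p-value):

```python
import numpy as np

# hypothetical contingency table: feature category (rows) vs target class (cols)
observed = np.array([[30, 10],
                     [10, 30]])

# expected counts under independence: (row total * column total) / grand total
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()

chi2 = ((observed - expected) ** 2 / expected).sum()
print(chi2)   # 20.0 -- a large statistic signals a strong association
```

A large statistic (compared to the chi-square critical value for the table's degrees of freedom) means the feature and target are associated, so the feature is worth keeping.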

Spearman's Rank Correlation is based on the ________ of the data rather than their raw values.

  • Means
  • Medians
  • Modes
  • Ranks
Spearman's Rank Correlation is based on the ranks of the data rather than their raw values, which makes it a non-parametric method: it is robust to outliers and captures any monotonic relationship, not just linear ones.
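The contrast with Pearson correlation shows up clearly on a toy dataset with one outlier (a minimal sketch computing Spearman as Pearson-on-ranks; SciPy's `spearmanr` would do the same with tie handling):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # outlier in x
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])    # perfectly monotonic with x

def rank(a):
    # ranks 1..n (no ties in this toy data)
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

pearson = np.corrcoef(x, y)[0, 1]
spearman = np.corrcoef(rank(x), rank(y))[0, 1]

print(pearson, spearman)   # Pearson well below 1, Spearman exactly 1
```

The raw values are badly distorted by the outlier, but the ranks of x and y agree perfectly, so Spearman's coefficient is 1.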

A Type II error occurs when we fail to reject the null hypothesis, even though it is _______.

  • FALSE
  • Not applicable
  • Not proven
  • TRUE
A Type II error occurs when we fail to reject the null hypothesis, even though it is false. This is also known as a "false negative" error.
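A simulation makes the "false negative" interpretation concrete: when the null hypothesis is false but the effect is small, a test will often still fail to reject it (a Monte Carlo sketch of a one-sided z-test at α = 0.05; the true mean 0.3 and sample size 20 are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
trials, n = 2000, 20
z_crit = 1.645          # one-sided z critical value at alpha = 0.05
true_mean = 0.3         # H0: mean = 0 is actually false

type2 = 0
for _ in range(trials):
    sample = rng.normal(loc=true_mean, scale=1.0, size=n)
    z = sample.mean() / (1.0 / np.sqrt(n))   # known sigma = 1
    if z < z_crit:      # fail to reject H0 even though it is false
        type2 += 1

beta = type2 / trials
print(beta)             # Type II error rate; power of the test = 1 - beta
```

With this small effect and sample size, the test misses the true effect in a substantial fraction of trials, which is exactly the Type II error rate.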

The type of data that describes attributes or characteristics of a group is called ________ data.

  • Continuous
  • Discrete
  • Qualitative
  • Quantitative
The type of data that describes attributes or characteristics of a group is called Qualitative data. These are often non-numeric and may include data types such as text, audio, or video. Examples include a person's gender, eye color, or the make of a car.