What does ANOVA stand for?
- Analysis Of Variance
- Analysis Of Vitality
- Average Of Variance
ANOVA stands for Analysis Of Variance. It is a statistical technique used to test whether the means of two or more groups differ significantly from each other.
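As an illustration, the one-way ANOVA F-statistic can be computed directly from its definition (between-group variance over within-group variance). The three groups below are made-up data, and this is only a sketch of the calculation, not a full significance test:

```python
# Minimal one-way ANOVA sketch in pure Python on hypothetical group data:
# F = (between-group mean square) / (within-group mean square).
from statistics import mean

groups = [
    [4.0, 5.0, 6.0],    # hypothetical group A
    [7.0, 8.0, 9.0],    # hypothetical group B
    [10.0, 11.0, 12.0], # hypothetical group C
]

grand_mean = mean(x for g in groups for x in g)
k = len(groups)                  # number of groups
n = sum(len(g) for g in groups)  # total number of observations

# Between-group sum of squares: variation of group means around the grand mean
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: variation of observations inside each group
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 2))
```

A large F suggests the group means differ by more than within-group noise alone would explain; in practice the statistic would be compared against an F distribution with (k − 1, n − k) degrees of freedom.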
What is the Central Limit Theorem and how does it relate to the normal distribution?
- It states that all distributions are ultimately normal distributions
- It states that the mean of a large sample is always equal to the population mean
- It states that the sum of a large number of independent and identically distributed random variables tends to be normally distributed
- It states that the sum of a small number of random variables has an exponential distribution
The Central Limit Theorem states that, under mild conditions, the sum (or arithmetic mean) of a sufficiently large number of independent, identically distributed random variables, each with a finite expected value and finite variance, will be approximately normally distributed, regardless of the shape of the original distribution.
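A quick simulation makes this concrete. Sampling from a uniform distribution (which is flat, not bell-shaped) and averaging, the sample means cluster around the population mean of 0.5 with a spread shrinking like σ/√n; the sample sizes and seed below are arbitrary choices:

```python
# CLT sketch: means of large samples from a non-normal (uniform) distribution
# concentrate around the population mean, approximately normally.
import random
from statistics import mean, stdev

random.seed(0)
n = 1000  # observations per sample (arbitrary)
sample_means = [mean(random.random() for _ in range(n)) for _ in range(500)]

# Uniform on [0, 1] has mean 0.5 and sd sqrt(1/12) ~= 0.2887, so the sample
# means should have sd roughly 0.2887 / sqrt(1000) ~= 0.009.
print(round(mean(sample_means), 3), round(stdev(sample_means), 4))
```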
How does polynomial regression differ from linear regression?
- Linear regression models relationships as curves
- Linear regression models relationships as straight lines
- Polynomial regression models relationships as curves
- Polynomial regression models relationships as straight lines
Polynomial regression models relationships as curves, not straight lines. This allows polynomial regression to capture non-linear relationships, where the relationship changes direction at different levels of the independent variables. On the other hand, linear regression models relationships as straight lines, assuming a constant rate of change.
Pearson's Correlation Coefficient assumes that the variables are ________ distributed.
- negatively
- normally
- positively
- randomly
Pearson's Correlation Coefficient assumes that the variables are normally distributed. It is one of the key assumptions behind the coefficient, mattering chiefly when testing the coefficient's statistical significance, and it refers to the shape of the distribution of the values.
What is the role of eigenvalues in factor analysis?
- They are used to categorize the data
- They are used to transform the data
- They help in normalizing the data
- They represent the variance explained by each factor
In factor analysis, each eigenvalue represents the amount of the total variance explained by its factor. A larger eigenvalue indicates that the factor accounts for a larger share of the total variance.
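A tiny worked case: for two standardized variables with correlation r, the correlation matrix [[1, r], [r, 1]] has eigenvalues 1 + r and 1 − r. They sum to 2 (the total standardized variance), and the larger one is the share a single factor would explain. The value of r below is arbitrary:

```python
# Eigenvalues of a 2x2 correlation matrix [[1, r], [r, 1]] are 1 + r and
# 1 - r (closed form); their sum equals the number of variables.
r = 0.8  # hypothetical correlation between the two variables
eigenvalues = [1 + r, 1 - r]

total_variance = sum(eigenvalues)  # 2.0: total standardized variance
share_factor_1 = max(eigenvalues) / total_variance

print(total_variance)
print(share_factor_1)  # fraction of variance the dominant factor explains
```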
What is the null hypothesis of the Spearman's Rank Correlation test?
- The variables are not related
- The variables have a negative correlation
- The variables have a positive correlation
- There is no monotonic relationship between the variables
The null hypothesis of the Spearman's Rank Correlation test is that there is no monotonic relationship between the variables. That is, changes in one variable do not consistently correspond to changes in the other variable.
How do you calculate the probability of the intersection of two independent events?
- P(A ∩ B) = P(A) * P(B)
- P(A ∩ B) = P(A) + P(B)
- P(A ∩ B) = P(A) - P(B)
- P(A ∩ B) = P(A) / P(B)
The probability of the intersection of two independent events is calculated as the product of their individual probabilities. So if A and B are independent, P(A ∩ B) = P(A) * P(B). This is a direct result of the Multiplication Rule for independent events.
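A quick enumeration confirms the rule for a standard example: a fair coin and a fair die are independent, so P(heads and a six) = 1/2 × 1/6 = 1/12. Counting over the joint sample space gives the same answer:

```python
# Multiplication rule check by brute-force enumeration of the joint
# sample space of a fair coin and a fair die.
from fractions import Fraction
from itertools import product

coin = ["H", "T"]
die = [1, 2, 3, 4, 5, 6]

outcomes = list(product(coin, die))  # 12 equally likely (coin, die) pairs
hits = [o for o in outcomes if o == ("H", 6)]
p_joint = Fraction(len(hits), len(outcomes))

print(p_joint)  # 1/12, equal to P(H) * P(6) = 1/2 * 1/6
```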
The normal distribution is also known as the ________ distribution.
- Exponential
- Gaussian
- Poisson
- Uniform
The normal distribution is also known as the Gaussian distribution. It is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is bell-shaped.
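The density can be written out directly from its standard form, f(x) = exp(−(x − μ)² / (2σ²)) / (σ√(2π)); the sketch below checks the two properties that give it the bell shape, a peak at the mean and symmetry around it:

```python
# The Gaussian (normal) probability density function, from its standard form.
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

print(round(normal_pdf(0.0), 4))            # peak of the standard normal
print(normal_pdf(-1.0) == normal_pdf(1.0))  # symmetric about the mean
```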
How does the presence of outliers affect measures of dispersion like range, variance, and standard deviation?
- Decreases them
- Depends on the values of the outliers
- Increases them
- No effect
Outliers can greatly inflate measures of dispersion such as the range, variance, and standard deviation. Because these measures depend on the distance of each value from the mean (or, in the case of the range, on the extreme values themselves), a single value far above or below the rest can make them much larger.
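The effect is easy to see numerically. In this sketch on made-up data, appending one extreme value inflates all three measures at once:

```python
# How one outlier inflates range, variance, and standard deviation.
from statistics import pstdev, pvariance

data = [10.0, 11.0, 12.0, 13.0, 14.0]
with_outlier = data + [100.0]  # a single extreme value

def spread(values):
    # (range, population variance, population standard deviation)
    return max(values) - min(values), pvariance(values), pstdev(values)

print(spread(data))          # modest dispersion
print(spread(with_outlier))  # every measure is much larger
```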
What is the implication of multicollinearity in polynomial regression?
- It increases the fit of the model to the training data
- It increases the interpretability of the model
- It reduces the complexity of the model
- It reduces the precision of coefficient estimates
Multicollinearity in polynomial regression can reduce the precision of the coefficient estimates and cause them to be highly sensitive to minor changes in the model. This can lead to unstable and unreliable estimates, making it difficult to interpret the model and draw inferences about the relationships between variables.