What are the degrees of freedom in a Chi-square test for a 2x3 contingency table?
- 2
- 3
- 4
- 6
In a Chi-square test, the degrees of freedom for a contingency table with r rows and c columns are (r-1) * (c-1), so a 2x3 table has (2-1) * (3-1) = 2.
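As a quick check, SciPy's `chi2_contingency` reports the degrees of freedom alongside the test statistic; the sketch below uses a hypothetical 2x3 table of counts chosen purely for illustration.

```python
# Minimal sketch: chi-square test of independence on a hypothetical 2x3 table.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[10, 20, 30],
                     [15, 25, 35]])            # 2 rows x 3 columns
chi2, p, dof, expected = chi2_contingency(observed)
print(dof)                                     # 2, i.e. (2-1) * (3-1)
```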
The process that aims to identify underlying variables, or factors, that explain the pattern of correlations within a set of observed variables is called _______.
- correlation analysis
- covariance analysis
- factor analysis
- regression analysis
The process that aims to identify underlying variables, or factors, that explain the pattern of correlations within a set of observed variables is called factor analysis.
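A minimal sketch of fitting a factor model with scikit-learn's `FactorAnalysis`; the two-factor choice and the synthetic data are purely illustrative.

```python
# Sketch: extract two latent factors from six observed variables.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # hypothetical observed variables
fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_.shape)          # (2, 6): loading of each factor on each variable
```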
If the Kruskal-Wallis H test is significant, it is often followed up with ________ to find which groups differ.
- ANOVA
- correlation analysis
- post hoc tests
- t-tests
If the Kruskal-Wallis H test is significant, it is often followed up with post hoc tests (such as Dunn's test) to find which groups differ. These tests make pairwise comparisons between groups while controlling for the inflated error rate from multiple comparisons.
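Dunn's test is a common choice; the sketch below sticks to SciPy and instead follows a significant Kruskal-Wallis result with pairwise Mann-Whitney U tests under a Bonferroni correction. The three groups are hypothetical.

```python
# Sketch: Kruskal-Wallis on three hypothetical groups, then pairwise
# Mann-Whitney U tests with a Bonferroni-adjusted threshold as a simple post hoc.
from itertools import combinations
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(1)
groups = {"A": rng.normal(0.0, 1, 30),
          "B": rng.normal(0.5, 1, 30),
          "C": rng.normal(1.5, 1, 30)}

h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

if p < 0.05:
    pairs = list(combinations(groups, 2))
    alpha = 0.05 / len(pairs)                  # Bonferroni-adjusted threshold
    for a, b in pairs:
        stat, p_pair = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        verdict = "differ" if p_pair < alpha else "no evidence of a difference"
        print(f"{a} vs {b}: p={p_pair:.4f} ({verdict})")
```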
What are the implications of autocorrelation in the residuals of a regression model?
- It causes bias in the parameter estimates
- It indicates that the model is overfit
- It suggests that the model is underfit
- It violates the assumption of independent residuals
Autocorrelation in the residuals of a regression model violates the assumption of independent residuals. It can produce inefficient estimates and incorrect standard errors, which in turn makes hypothesis tests and confidence intervals unreliable.
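One common diagnostic is the Durbin-Watson statistic. The sketch below fits an OLS model with statsmodels on hypothetical data whose errors follow an AR(1) process, so the residuals are autocorrelated by construction.

```python
# Sketch: detect residual autocorrelation with the Durbin-Watson statistic
# (values near 2 suggest little first-order autocorrelation).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
n = 100
x = np.arange(n, dtype=float)
errors = np.zeros(n)
for t in range(1, n):
    errors[t] = 0.8 * errors[t - 1] + rng.normal()   # AR(1) errors
y = 2.0 + 0.5 * x + errors

model = sm.OLS(y, sm.add_constant(x)).fit()
print(durbin_watson(model.resid))   # typically well below 2 here, flagging positive autocorrelation
```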
How does increasing the sample size affect the power of a statistical test?
- Decreases the power
- Does not affect the power
- Increases the power
- May either increase or decrease the power
Increasing the sample size generally increases the power of a statistical test. This is because a larger sample provides more information, making it more likely that the test will detect a true effect if one exists.
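statsmodels can make this concrete; the sketch below computes the power of a two-sample t-test at several sample sizes, assuming a medium effect size of 0.5 and alpha of 0.05 (both illustrative choices).

```python
# Sketch: power of a two-sample t-test rises as the per-group sample size grows.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (20, 50, 100, 200):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n per group = {n:3d} -> power = {power:.2f}")
```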
What is the skewness value for a perfect normal distribution?
- -1
- 0
- 1
- It varies
For a perfect normal distribution, the skewness value is zero. This is because a normal distribution is perfectly symmetrical about its mean, so its left and right tails mirror each other.
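A quick empirical illustration with SciPy: the sample skewness of a large draw from a normal distribution sits close to 0 (exactly 0 holds only for the theoretical distribution).

```python
# Sketch: sample skewness of standard normal draws hovers near 0.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)
sample = rng.normal(size=100_000)
print(skew(sample))   # close to 0 for a large normal sample
```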
The Chi-square statistic is calculated by summing the squared difference between observed and expected frequencies, each divided by the ________ frequency.
- expected
- median
- mode
- observed
The Chi-square statistic is calculated by summing the squared differences between observed and expected frequencies, each divided by the expected frequency. This reflects how much the observed data deviate from the expected data.
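The sketch below computes the statistic by hand on hypothetical frequencies and confirms that it matches `scipy.stats.chisquare`.

```python
# Sketch: chi-square statistic = sum of (observed - expected)^2 / expected.
import numpy as np
from scipy.stats import chisquare

observed = np.array([18, 22, 30, 30])
expected = np.array([25, 25, 25, 25])

chi2_manual = np.sum((observed - expected) ** 2 / expected)
chi2_scipy, p = chisquare(observed, f_exp=expected)
print(chi2_manual, chi2_scipy)   # identical values
```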
What are some potential issues with interpreting the results of factor analysis?
- Factor analysis is not sensitive to outliers, and results are always reliable and consistent
- Factors are always straightforward to interpret, and factor loadings are always clear and unambiguous
- Factors may be hard to interpret, factor loadings can be ambiguous, and results can be sensitive to outliers
- Results are always conclusive, factors can be easily interpreted, and factor loadings are never ambiguous
Some potential issues with interpreting the results of factor analysis include: factors can sometimes be hard to interpret, factor loadings can be ambiguous (a variable may load onto multiple factors), and the results can be sensitive to outliers.
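The cross-loading problem can be illustrated with synthetic data: below, the fifth variable is built to depend on both latent factors, so its loadings come out sizeable on both. The varimax rotation is optional and assumes a scikit-learn version recent enough to support the `rotation` argument (0.24+).

```python
# Sketch: a variable driven by both latent factors shows ambiguous (cross-)
# loadings; varimax rotation only makes the loading pattern easier to read.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
latent = rng.normal(size=(300, 2))
noise = lambda: rng.normal(scale=0.3, size=300)
X = np.column_stack([
    latent[:, 0] + noise(),
    latent[:, 0] + noise(),
    latent[:, 1] + noise(),
    latent[:, 1] + noise(),
    latent[:, 0] + latent[:, 1] + noise(),    # deliberately loads on both factors
])
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))          # last row: sizeable loadings on both factors
```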
How does factor analysis help in understanding the structure of a dataset?
- By identifying underlying factors
- By normalizing the data
- By reducing noise in the data
- By transforming the data
Factor analysis helps in understanding the structure of a dataset by identifying the underlying factors that give rise to the pattern of correlations within the set of observed variables. These factors can explain the latent structure in the data.
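Beyond the loadings, the factor scores summarize each observation in terms of that latent structure; a sketch with scikit-learn on synthetic data (the shapes and loading pattern are purely illustrative):

```python
# Sketch: project observations onto the identified factors to expose latent structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
latent = rng.normal(size=(200, 2))                  # two hidden drivers
weights = rng.normal(size=(2, 6))                   # hypothetical loading pattern
X = latent @ weights + rng.normal(scale=0.2, size=(200, 6))   # six correlated observed variables

fa = FactorAnalysis(n_components=2).fit(X)
scores = fa.transform(X)                            # each row summarised by 2 factor scores
print(scores.shape)                                 # (200, 2)
```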
What is the name of the rule that states the probability of the sum of all possible outcomes of an experiment is 1?
- Bayes' Theorem
- Law of Large Numbers
- Law of Total Probability
- Rule of Complementary Events
The Law of Total Probability states that the probabilities of all possible, mutually exclusive outcomes of an experiment sum to 1. This rule is fundamental to probability theory and provides a way to calculate the probability of a complex event by breaking it down into simpler, mutually exclusive events.
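A tiny worked example using exact fractions: a fair die's outcome probabilities sum to 1, and conditioning on a partition of the sample space recovers the probability of rolling an even number.

```python
# Sketch: outcome probabilities sum to 1; total probability over a partition
# of the sample space gives P(roll is even).
from fractions import Fraction

probs = {face: Fraction(1, 6) for face in range(1, 7)}
print(sum(probs.values()))                        # 1

p_low = p_high = Fraction(1, 2)                   # partition: {1,2,3} vs {4,5,6}
p_even_given_low = Fraction(1, 3)                 # only 2 is even in {1, 2, 3}
p_even_given_high = Fraction(2, 3)                # 4 and 6 are even in {4, 5, 6}
print(p_even_given_low * p_low + p_even_given_high * p_high)   # 1/2
```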