How does increasing the sample size affect the power of a statistical test?
- Decreases the power
- Does not affect the power
- Increases the power
- May either increase or decrease the power
Increasing the sample size generally increases the power of a statistical test. This is because a larger sample provides more information, making it more likely that the test will detect a true effect if one exists.
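This relationship can be sketched numerically. The following is a minimal illustration (not tied to any test in this quiz), assuming a one-sided one-sample z-test at α = 0.05 and an assumed standardized effect size of 0.3:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_one_sided_z(effect_size, n):
    """Power of a one-sided one-sample z-test at alpha = 0.05:
    P(reject H0 | true standardized effect = effect_size)."""
    z_crit = 1.6449  # critical value for alpha = 0.05, one-sided
    return normal_cdf(effect_size * sqrt(n) - z_crit)

# Same effect size, larger sample -> noticeably higher power.
p25 = power_one_sided_z(0.3, 25)    # ~0.44
p100 = power_one_sided_z(0.3, 100)  # ~0.91
```

With the effect size held fixed, quadrupling the sample size raises the power from roughly 44% to roughly 91%, which is the general pattern the answer describes.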
How do post-hoc tests in ANOVA assist in interpreting the results?
- They help to adjust the level of significance
- They help to calculate the F statistic
- They help to check the assumptions of the ANOVA
- They help to determine which specific group means are significantly different from each other
Post-hoc tests in ANOVA determine which specific group means differ significantly from each other after a significant overall ANOVA result. They also control the overall Type I error rate across the multiple pairwise comparisons.
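One common way post-hoc procedures control the Type I error rate is by adjusting p-values. As a minimal sketch (the Bonferroni correction, applied to hypothetical raw p-values from three pairwise comparisons):

```python
def bonferroni_adjust(p_values):
    """Bonferroni correction: multiply each raw p-value by the
    number of comparisons, capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from three pairwise comparisons
# (A vs B, A vs C, B vs C) after a significant overall ANOVA.
raw = [0.010, 0.030, 0.200]
adjusted = bonferroni_adjust(raw)  # ~ [0.03, 0.09, 0.60]
```

After adjustment, only comparisons whose corrected p-value is below α are declared significant, which keeps the familywise error rate at or below α. Dedicated procedures such as Tukey's HSD are less conservative but follow the same idea.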
A one-way ANOVA compares ________ group(s), while a two-way ANOVA compares ________ group(s).
- one; two
- three or more; two or more
- two; three
- two; two or more
A one-way ANOVA compares the means of three or more independent groups defined by a single factor, while a two-way ANOVA compares the means of two or more groups split on two independent variables (factors), which also allows testing for an interaction between the factors.
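The one-way case can be computed from first principles: the F statistic is the between-group mean square divided by the within-group mean square. A minimal sketch with made-up data for three groups:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA computed from first principles:
    between-group mean square over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares (one term per group)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (deviations from each group's own mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative data: the third group's higher mean makes the
# between-group variance large relative to the within-group variance.
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])  # 21.0
```

A two-way ANOVA extends this by partitioning the variance further into two main effects and an interaction term, one factor per independent variable.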
What are the potential disadvantages of using non-parametric statistical methods?
- They always give inaccurate results
- They can be less powerful than parametric tests when assumptions for parametric tests are met
- They cannot be used for certain types of data
- They cannot handle large data sets
Non-parametric statistical methods can be less powerful than parametric tests when the assumptions of the parametric tests are met. This is because they use less information (e.g., ranks rather than the actual values). Therefore, if the data do meet the assumptions of a parametric test, the parametric test is usually preferred.
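The "less information" point can be made concrete: a rank-based test sees only the ordering of the values, so two very different samples can look identical to it. A minimal sketch (the simple ranking helper assumes no ties):

```python
def ranks(values):
    """1-based ranks of the values (assumes no ties, for simplicity)."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

# An extreme value changes the raw data dramatically, but the
# ranks -- all a rank-based test sees -- are identical.
a = [1.0, 2.0, 3.0]
b = [1.0, 2.0, 300.0]
ra, rb = ranks(a), ranks(b)  # both [1, 2, 3]
```

Discarding magnitudes is what makes rank-based methods robust to outliers, but it is also exactly why they can have lower power than a parametric test when the parametric assumptions hold.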
Hypothesis testing in statistics is a way to test the validity of a claim that is made about a _______.
- Dataset
- Population
- Sample
- Statistic
In statistics, hypothesis testing is typically used to test a claim about a population parameter. The sample, and the statistics computed from it, serve only as the evidence used to evaluate that claim, not as its subject.
The _______ Information Criterion is a measure used in model selection that takes into account the goodness of fit and the simplicity of the model.
- Akaike
- Bayesian
- Pearson
- Spearman
The Akaike Information Criterion (AIC) balances goodness of fit with model simplicity by including a penalty for the number of parameters in the model. This discourages overfitting.
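The trade-off is visible in the least-squares form of the criterion, AIC = n·ln(RSS/n) + 2k (up to an additive constant), where the 2k term penalizes extra parameters. As a minimal sketch with hypothetical residual sums of squares for two candidate models fit to the same data:

```python
from math import log

def aic_least_squares(n, rss, k):
    """AIC for a least-squares fit with Gaussian errors
    (up to an additive constant): n * ln(RSS/n) + 2k."""
    return n * log(rss / n) + 2 * k

# Hypothetical fits to the same n = 50 points: a 2-parameter model
# versus a 6-parameter model that fits only slightly better.
aic_simple = aic_least_squares(50, rss=12.0, k=2)
aic_complex = aic_least_squares(50, rss=11.5, k=6)
```

The lower AIC wins; here the complex model's small improvement in fit does not offset its parameter penalty, so the simpler model is selected. This is the anti-overfitting behaviour the answer describes.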
What is the skewness value for a perfect normal distribution?
- -1
- 0
- 1
- It varies
For a perfect normal distribution, the skewness value is zero. This is because a normal distribution is perfectly symmetrical, so its left and right tails are identical.
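This can be checked directly with the moment-based definition of skewness, which averages the cubed standardized deviations. A minimal sketch on a small symmetric sample (any symmetric data gives zero; a normal distribution is one such case):

```python
def sample_skewness(xs):
    """Moment-coefficient skewness: mean of cubed standardized deviations."""
    n = len(xs)
    mean = sum(xs) / n
    # Population standard deviation (divide by n), matching the
    # simple moment-coefficient form of skewness.
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

# A symmetric sample: deviations on the left and right cancel exactly.
s = sample_skewness([1, 2, 3, 4, 5])  # 0.0
```

The cubing preserves the sign of each deviation, so a longer right tail yields positive skewness, a longer left tail negative, and perfect symmetry yields zero.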
The Chi-square statistic is calculated by summing the squared difference between observed and expected frequencies, each divided by the ________ frequency.
- expected
- median
- mode
- observed
The Chi-square statistic is calculated by summing the squared differences between observed and expected frequencies, each divided by the expected frequency. This reflects how much the observed data deviate from the expected data.
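The formula χ² = Σ (O − E)²/E translates directly into code. A minimal sketch with hypothetical counts over three categories:

```python
def chi_square_statistic(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical observed counts vs. expected counts for three categories:
# (50-40)^2/40 + (30-40)^2/40 + (20-20)^2/20 = 2.5 + 2.5 + 0 = 5.0
chi2 = chi_square_statistic([50, 30, 20], [40, 40, 20])  # 5.0
```

Dividing each squared difference by the expected frequency scales the deviations, so a miss of 10 matters more in a cell where only 40 counts were expected than in one where 400 were.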
What are some potential issues with interpreting the results of factor analysis?
- Factor analysis is not sensitive to outliers, and results are always reliable and consistent
- Factors are always straightforward to interpret, and factor loadings are always clear and unambiguous
- Factors may be hard to interpret, factor loadings can be ambiguous, and results can be sensitive to outliers
- Results are always conclusive, factors can be easily interpreted, and factor loadings are never ambiguous
Some potential issues with interpreting the results of factor analysis include: factors can sometimes be hard to interpret, factor loadings can be ambiguous (a variable may load onto multiple factors), and the results can be sensitive to outliers.
How does factor analysis help in understanding the structure of a dataset?
- By identifying underlying factors
- By normalizing the data
- By reducing noise in the data
- By transforming the data
Factor analysis helps in understanding the structure of a dataset by identifying the underlying factors that give rise to the pattern of correlations within the set of observed variables. These factors can explain the latent structure in the data.
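Because factor analysis works from the correlation structure, a quick look at pairwise correlations already hints at latent factors: variables that load on the same factor correlate strongly with each other. A minimal sketch with made-up scores on three variables (a full factor extraction is beyond this illustration):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Made-up scores: v1 and v2 move together (suggesting a shared
# latent factor), while v3 is largely unrelated to them.
v1 = [1, 2, 3, 4, 5]
v2 = [2, 4, 6, 8, 10]
v3 = [3, 1, 4, 1, 5]
r12 = pearson_r(v1, v2)  # 1.0
r13 = pearson_r(v1, v3)  # ~0.35
```

Factor analysis formalizes this intuition: it seeks a small number of latent factors whose loadings reproduce the observed correlation matrix as closely as possible.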