The measure of how much individual sample means will vary is called the __________ error.
- Absolute
- Margin of
- Sampling
- Standard
The standard error of a statistic measures the statistical accuracy of an estimate: it equals the standard deviation of the sampling distribution of that statistic, i.e., of the estimates you would obtain over many repeated samples. It is used to test hypotheses on the basis of a set of data. For the sample mean, the standard error tells us how much the mean varies from one sample to another; it is estimated as the sample standard deviation divided by the square root of the sample size, SE = s/√n.
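A minimal simulation can make this concrete. The sketch below (using NumPy, with an assumed normal population, illustrative parameters of mean 100 and SD 15, and a sample size of 30) draws many samples, records each sample's mean, and compares the standard deviation of those means to the theoretical value σ/√n:

```python
# Minimal sketch (illustrative values): compare the theoretical standard
# error of the mean, sigma / sqrt(n), with the spread of simulated sample means.
import numpy as np

rng = np.random.default_rng(0)
pop_mean, pop_sd = 100.0, 15.0   # assumed population parameters
n, n_samples = 30, 10_000        # assumed sample size and number of repeated samples

# Draw many samples of size n and record each sample's mean.
sample_means = rng.normal(pop_mean, pop_sd, size=(n_samples, n)).mean(axis=1)

theoretical_se = pop_sd / np.sqrt(n)       # sigma / sqrt(n)
empirical_se = sample_means.std(ddof=1)    # SD of the simulated sample means

print(f"theoretical SE: {theoretical_se:.3f}")
print(f"empirical SE:   {empirical_se:.3f}")
```

With these values the two numbers agree closely (both near 2.74), which is exactly what the definition predicts: the standard error is the standard deviation of the sample means themselves.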