How do bias and variability affect sampling methods?

  • Bias and variability always increase the accuracy of estimates
  • Bias and variability are unrelated concepts in statistics
  • Bias increases the spread of a data distribution, and variability leads to consistent errors
  • Bias leads to consistent errors in one direction, and variability refers to the spread of a data distribution
Bias and variability are two key concepts in sampling methods. Bias refers to consistent, systematic errors that push estimates above or below the true population parameter. Variability refers to the spread or dispersion of a distribution, in this context the sampling distribution of the estimate. Low bias improves accuracy and low variability improves precision, so both are generally desirable.
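As a rough illustration (a minimal NumPy sketch; the simulated population and estimators are hypothetical), a biased estimator centers its sampling distribution away from the true parameter, while higher variability widens that distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 10.0                      # true population parameter
population = rng.normal(true_mean, 2.0, size=100_000)

def sample_means(estimator, n=30, reps=2_000):
    """Sampling distribution of an estimator over repeated samples of size n."""
    return np.array([estimator(rng.choice(population, size=n)) for _ in range(reps)])

unbiased = sample_means(np.mean)                 # low bias
biased = sample_means(lambda x: np.mean(x) + 1)  # systematic +1 error (bias)
noisy = sample_means(np.mean, n=5)               # smaller samples -> higher variability

print("bias (unbiased estimator):", unbiased.mean() - true_mean)
print("bias (biased estimator):  ", biased.mean() - true_mean)
print("spread, n=30 vs n=5:      ", unbiased.std(), noisy.std())
```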

What is the alternative hypothesis in the context of statistical testing?

  • A condition of no effect or no difference
  • A specific outcome of the experiment
  • An effect or difference exists
  • The sample size is large enough for the test
The alternative hypothesis is the hypothesis in a statistical test that contradicts the null hypothesis. It states that an effect or difference exists, i.e., that the observations reflect a real effect rather than chance variation alone.

How does the sample size impact the result of a Z-test?

  • Larger sample sizes can produce more precise estimates, reducing the standard error
  • Larger sample sizes increase the likelihood of a Type I error
  • Sample size has no impact on the results of a Z-test
Larger sample sizes generally allow for more precise estimates of population parameters. This reduces the standard error, making the z-score larger for a given observed difference and potentially leading to stronger evidence against the null hypothesis in a Z-test.
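A quick numerical sketch (the population standard deviation and observed difference below are assumed values chosen for illustration) shows the standard error shrinking, and the z-score growing, as the sample size increases:

```python
import numpy as np
from scipy import stats

sigma = 15.0          # known population standard deviation (assumed for a Z-test)
observed_diff = 5.0   # observed difference between sample mean and hypothesized mean

for n in (10, 50, 200):
    se = sigma / np.sqrt(n)           # standard error shrinks as n grows
    z = observed_diff / se            # z-score grows for the same observed difference
    p = 2 * stats.norm.sf(abs(z))     # two-sided p-value
    print(f"n={n:4d}  SE={se:5.2f}  z={z:5.2f}  p={p:.4f}")
```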

When should you use the Spearman’s Rank Correlation test?

  • When data is normally distributed
  • When data is ordinal or not normally distributed
  • When data is perfectly ranked
  • When the correlation is linear
The Spearman’s Rank Correlation test should be used when data is ordinal or not normally distributed. It is a non-parametric test that does not require the assumption of normal distribution.
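For example, a minimal sketch using SciPy's `spearmanr` on hypothetical data with a monotonic but non-linear relationship:

```python
import numpy as np
from scipy import stats

# Hypothetical data: y increases monotonically with x, but not linearly
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = x ** 3 + np.array([0.5, -0.2, 0.1, 0.3, -0.4, 0.2, 0.0, 0.1])

rho, p_value = stats.spearmanr(x, y)   # rank-based, no normality assumption
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```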

Is the Kruskal-Wallis Test used for comparing two groups or more than two groups?

  • Both
  • More than two groups
  • Neither
  • Two groups
The Kruskal-Wallis Test is used for comparing more than two independent groups. It is the non-parametric counterpart of one-way ANOVA.
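A minimal sketch with SciPy's `kruskal`, using three hypothetical groups of scores:

```python
from scipy import stats

# Three independent groups (hypothetical scores)
group_a = [12, 15, 14, 10, 13]
group_b = [22, 25, 20, 18, 24]
group_c = [9, 11, 8, 7, 10]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```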

What can be a potential drawback of using a high degree polynomial in regression analysis?

  • It can lead to overfitting
  • It can lead to underfitting
  • It doesn't capture relationships between variables
  • It simplifies the model too much
Using a high degree polynomial in regression analysis can lead to overfitting. Overfitting occurs when a model captures not only the underlying pattern but also the noise in the data, making it perform well on the training data but poorly on new, unseen data.
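The sketch below (hypothetical data generated from a truly linear relationship plus noise) compares a degree-1 and a degree-9 polynomial fit; the high-degree fit achieves a lower training error but a worse test error, which is the signature of overfitting:

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.2, size=x_train.size)  # truly linear + noise
x_test = rng.uniform(0, 1, 100)
y_test = 2 * x_test + rng.normal(0, 0.2, size=x_test.size)

for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)   # may warn about poor conditioning at high degree
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```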

How does independence between events affect the calculation of their joint probability?

  • It makes the joint probability equal to the difference of the probabilities of each event
  • It makes the joint probability equal to the product of the probabilities of each event
  • It makes the joint probability equal to the ratio of the probabilities of each event
  • It makes the joint probability equal to the sum of the probabilities of each event
If events are independent, their joint probability equals the product of their individual probabilities. That is, P(A ∩ B) = P(A) * P(B) for independent events A and B.
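A small sketch (using a fair coin and a fair die as assumed examples of independent events) compares the product rule with a simulation:

```python
import numpy as np

p_a, p_b = 0.5, 1 / 6              # P(coin lands heads), P(die shows a six)
print("P(A and B) =", p_a * p_b)   # product rule for independent events: 1/12

# Quick simulation check
rng = np.random.default_rng(42)
coin = rng.integers(0, 2, size=1_000_000) == 1   # heads
die = rng.integers(1, 7, size=1_000_000) == 6    # six
print("simulated  =", np.mean(coin & die))
```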

How does the Spearman rank correlation deal with categorical variables?

  • It assigns a numerical value to each category
  • It can't handle categorical variables
  • It groups categorical variables together
  • It transforms categorical variables into ranks
The Spearman rank correlation transforms values into ranks, which allows it to handle continuous data as well as ordinal data (a type of categorical variable with a natural order). It is not appropriate for nominal categories that cannot be meaningfully ranked.
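A minimal sketch (the education categories, their ordering, and the income figures are hypothetical assumptions of the example) showing ordinal categories coded by rank order before computing the correlation:

```python
from scipy import stats

# Hypothetical ordinal categories mapped to numeric codes reflecting their order
education = ["high school", "bachelor", "master", "bachelor", "phd", "high school"]
order = {"high school": 1, "bachelor": 2, "master": 3, "phd": 4}
edu_codes = [order[level] for level in education]

income = [30, 45, 60, 50, 80, 28]            # hypothetical continuous variable

rho, p = stats.spearmanr(edu_codes, income)  # spearmanr ranks the codes internally
print(f"rho = {rho:.3f}, p = {p:.3f}")
```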

The process of testing the effect of varying one predictor at different levels of another predictor is known as ________ effect analysis.

  • Additive
  • Independent
  • Interaction
  • Subtractive
This is known as interaction effect analysis: testing how the effect of one predictor on the response variable changes at different levels of another predictor. It helps in understanding how predictors combine, rather than act independently, to affect the dependent variable.
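One common way to fit such a model is an ordinary least squares regression with an interaction term; the sketch below uses statsmodels' formula interface on simulated data (the variable names `x1`, `x2`, and `y` are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
# The effect of x1 on y depends on the level of x2 (an interaction), plus noise
df["y"] = 1.0 + 2.0 * df.x1 + 0.5 * df.x2 + 1.5 * df.x1 * df.x2 + rng.normal(size=n)

# "x1 * x2" expands to x1 + x2 + x1:x2, so the x1:x2 term captures the interaction
model = smf.ols("y ~ x1 * x2", data=df).fit()
print(model.params)            # the x1:x2 coefficient estimates the interaction effect
print(model.pvalues["x1:x2"])  # test whether the interaction is statistically significant
```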

The probability of committing a Type I error is also known as the ______ level of the test.

  • Confidence
  • Power
  • Significance
  • Size
The probability of committing a Type I error (rejecting a true null hypothesis) is known as the significance level (often denoted by alpha) of the test. A common significance level is 0.05, indicating a 5% risk of committing a Type I error if the null hypothesis is true.
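A small simulation sketch (the sample size and number of repetitions are arbitrary choices) illustrates that, when the null hypothesis is true, a test at alpha = 0.05 rejects it in roughly 5% of repetitions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha = 0.05
n_sims = 10_000
rejections = 0

for _ in range(n_sims):
    sample = rng.normal(0, 1, size=30)            # null hypothesis (mean = 0) is true
    _, p = stats.ttest_1samp(sample, popmean=0)
    if p < alpha:
        rejections += 1                           # a Type I error

print("empirical Type I error rate:", rejections / n_sims)  # close to alpha = 0.05
```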

The ________ is the average of a data set calculated by adding all values and then dividing by the number of values.

  • Mean
  • Median
  • Mode
The mean, also referred to as the average or arithmetic mean, is calculated by adding all values in the data set and then dividing by the number of values. It is one of the most commonly used summary statistics.
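For instance, a one-line computation on a hypothetical data set:

```python
values = [4, 8, 15, 16, 23, 42]     # hypothetical data set
mean = sum(values) / len(values)    # add all values, divide by the count
print(mean)                         # 18.0
```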

What type of statistical test is the Kruskal-Wallis Test?

  • Chi-square test
  • Non-parametric
  • Parametric
  • T-test
The Kruskal-Wallis Test is a non-parametric statistical test, meaning it does not assume that the data follow a normal distribution.